[yt-users] transparency and color from two arrays (cont.)

Nathan Goldbaum nathan12343 at gmail.com
Tue May 30 07:48:27 PDT 2017


On Tue, May 30, 2017 at 8:48 AM, Matej Tyc <matej.tyc at gmail.com> wrote:

> Hello again, I have taken another iteration and we still need your help to
> proceed:
>
> On 17.5.2017 14:37, Nathan Goldbaum wrote:
>
>
> ...
>
> You will be using a VolumeSource in that case, not a MeshSource. There
> will still be a source to be rendered though.
>
> Would the following snippet be OK if I implement MyCustomTF correctly?
>>
>> import numpy as np
>> import yt
>> import yt.visualization.volume_rendering.api as vr_api
>>
>> trans = np.random.rand(20, 40, 40)
>> color = np.random.rand(20, 40, 40)
>>
>> ds = yt.load_uniform_grid(dict(color=color, trans=trans),
>>                           color.shape)
>>
>> scene = vr_api.Scene()
>>
>> source = vr_api.VolumeSource(ds.all_data(), ["color", "trans"])
>> source.log_field = False
>> source.transfer_function = MyCustomTF(...)
>> scene.add_source(source)
>>
>> scene.add_camera()
>> scene.save('render.png')
>>
>
> In principle yes.
>
>
>> You are right that each source only renders a single field. If you want
>> to render more than one field you will need to create multiple sources.
>>
>>
>>> If I am wrong and it is meaningful and possible to construct a source
>>> with multiple fields, could you please provide an example?
>>>
>>
>> Here's an example:
>>
>> https://github.com/yt-project/yt/blob/master/doc/source/
>> cookbook/render_two_fields.py
>>
>> That said, this is simply compositing two separate volume renderings
>> together. A bivariate transfer function is a bit more complex than this.
>>
>> The transfer function is a property of a volume source, isn't it? So if
>> one source means one field, how can a transfer function access multiple
>> fields? At least the PlanckTransferFunction uses multiple fields.
>>
> Good point. You may need to relax that assumption.
>
> What do you mean by "relaxing that assumption"? Am I wrong in thinking
> that the transfer function needs to use multiple fields?
> Regarding the volume source, I have examined the source code
> https://github.com/yt-project/yt/blob/master/yt/visualization/volume_rendering/render_source.py#L160
> and although I can pass a dataset with a list of fields, the __init__
> method completely forgets that there is more than one field involved.
>

I mean that code modifications will need to happen. The behaviors you are
describing will need to be changed to get the multivariate transfer
function working once again with the current volume rendering interface.

It might even make sense to leave VolumeSource alone and instead create a
new MultiVariateVolumeSource to handle multivariate volume renderings.
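Whatever shape that new source takes, the per-voxel mapping it would need can be sketched in plain NumPy. This is only an illustration of the bivariate idea (color from one field, opacity from another); the function name `bivariate_rgba` and the blue-to-red ramp are my own, not part of yt:

```python
import numpy as np

def _norm(x):
    """Rescale an array to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

def bivariate_rgba(color, trans):
    """Bivariate transfer function sketch: the `color` field picks the
    hue, the `trans` field sets the per-voxel opacity."""
    c = _norm(color)
    rgba = np.empty(color.shape + (4,))
    rgba[..., 0] = c              # red grows with the color field
    rgba[..., 1] = 0.0            # (a real version would use a colormap)
    rgba[..., 2] = 1.0 - c        # blue shrinks with it
    rgba[..., 3] = _norm(trans)   # opacity taken from the second field
    return rgba

trans = np.random.rand(20, 40, 40)
color = np.random.rand(20, 40, 40)
rgba = bivariate_rgba(color, trans)
```

The open question in this thread is where such an RGBA lookup would plug into the sampler, since the current interface only hands one field to each source.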


>
> I have tried to fiddle with the ColorTransferFunction by making my own CTF
> and changing all the field IDs in the code from 0 to 1 (see
> https://github.com/yt-project/yt/blob/master/yt/visualization/volume_rendering/transfer_functions.py#L393),
> but although I assigned the CTF to a supposedly two-field source, nothing
> came out (as opposed to when the field IDs were flipped back to 0).
>

Unfortunately I don't know the details of how the color transfer function
is implemented to give you educated advice here. I think you're going to
need to poke at it with a debugger.


>
> 2. I have no doubts that transfer functions work, but I was unable to
>> find out how they are involved in the integration process. What I came
>> across were the integration routines, and they mention that the transfer
>> function is not used:
>> https://github.com/yt-project/yt/blob/master/yt/utilities/lib/ray_integrators.pyx#L235
>> This is really interesting; how does it work in reality?
>>
>
> I believe the low-level routine you want to look at is
> yt/utilities/lib/image_samplers.pyx, in particular the
> VolumeRenderSampler. The PlaneVoxelIntegration class you linked to here is
> experimental code that is not used elsewhere in yt.
>
> okay, I am happy about this, as that code was extremely hard to read. This
>> one is somewhat easier, although the code still doesn't reveal much intent
>> and the variable names are difficult to understand for an outsider. Anyway,
>> I will proceed, and if I manage to understand it, I may be able to come up
>> with some proposals.
>>
>
> Feel free to ask questions as you have them.
>
> I see, it is VolumeRenderSampler.sample, ImageSampler.__call__,
> walk_volume, etc. There is some very impressive functionality in there, but
> I would have to study it for weeks just to reach some level of
> understanding, which I can't do. So I hope that we will be able to achieve
> this multimodal visualization without understanding those masterful, but
> also quite sophisticated, low-level routines.
>

To get this to work you might need to come back to these low-level
routines, in particular to figure out how the transfer function is
expressed at this level.
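As a mental model of where the transfer function enters at that level, here is a simplified front-to-back "over" compositing loop in plain Python. This is a textbook sketch, not yt's actual walk_volume/sampler code:

```python
import numpy as np

def integrate_ray(samples, tf):
    """Front-to-back compositing along one ray: `tf` maps each sampled
    field value to (r, g, b, alpha), accumulated in visit order."""
    rgb = np.zeros(3)
    alpha = 0.0
    for v in samples:
        r, g, b, a = tf(v)
        rgb += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:  # early ray termination: nothing behind shows
            break
    return rgb, alpha

# Toy transfer function: brightness and opacity both grow with the value
tf = lambda v: (v, v, v, 0.1 * v)
rgb, alpha = integrate_ray(np.linspace(0.0, 1.0, 64), tf)
```

In this model, a bivariate transfer function would simply take two sampled values per point instead of one; the hard part discussed in this thread is threading that second field through the real sampler.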


>
> _______________________________________________
> yt-users mailing list
> yt-users at lists.spacepope.org
> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>
>
