[yt-users] transparency and color from two arrays (cont.)

Matej Tyc matej.tyc at gmail.com
Tue May 30 06:48:08 PDT 2017


Hello again, I have taken another pass at this and still need your help
to proceed:


On 17.5.2017 14:37, Nathan Goldbaum wrote:
>
> ...
>
> You will be using a VolumeSource in that case, not a MeshSource. There
> will still be a source to be rendered though.
>
>     Would the following snippet be OK if I implement MyCustomTF
>     correctly?
>
>     import scipy as sp
>     import yt
>     import yt.visualization.volume_rendering.api as vr_api
>
>     trans = sp.rand(20, 40, 40)
>     color = sp.rand(20, 40, 40)
>
>     ds = yt.load_uniform_grid(dict(color=color, trans=trans),
>                   color.shape)
>
>     scene = vr_api.Scene()
>
>     source = vr_api.VolumeSource(ds.all_data(), ["color", "trans"])
>     source.log_field = False
>     source.transfer_function = MyCustomTF(...)
>     scene.add_source(source)
>
>     scene.add_camera()
>     scene.save('render.png')
>
>
> In principle yes.
>
>
>>     You are right that each source only renders a single field. If
>>     you want to render more than one field you will need to create
>>     multiple sources.
>>      
>>
>>         If I am wrong and it is meaningful and possible to construct
>>         a source
>>         with multiple fields, could you please provide an example?
>>
>>
>>     Here's an example:
>>
>>     https://github.com/yt-project/yt/blob/master/doc/source/cookbook/render_two_fields.py
>>
>>     That said, this is simply compositing two separate volume
>>     renderings together. A bivariate transfer function is a bit more
>>     complex than this.
>     The transfer function is a property of a volume source, isn't it?
>     So if one source means one field, how can a transfer function
>     access multiple fields? At least PlanckTransferFunction uses
>     multiple fields.
>
> Good point. You may need to relax that assumption.
What do you mean by "relaxing that assumption"? Am I wrong in thinking
that the transfer function needs to use multiple fields?
Regarding the volume source, I have examined the source code at
https://github.com/yt-project/yt/blob/master/yt/visualization/volume_rendering/render_source.py#L160
and although I can pass a dataset with a list of fields, the __init__
method completely ignores the fact that more than one field is involved.
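
To make my goal concrete: what I am after is essentially a 2D lookup that
maps a pair (color value, transparency value) to RGBA per voxel. Here is a
toy numpy sketch of that idea -- purely illustrative, NOT the yt API, and
the binning scheme is my own invention:

```python
import numpy as np

def bivariate_tf(color_vals, trans_vals, n_bins=32):
    """Toy bivariate transfer function: the 'color' field selects the hue
    from a small lookup table, while the 'trans' field sets the opacity.
    Purely illustrative -- this is NOT yt's transfer function API."""
    ramp = np.linspace(0.0, 1.0, n_bins)
    # bin the color values into the lookup table
    idx = np.clip((color_vals * n_bins).astype(int), 0, n_bins - 1)
    rgb = np.stack([ramp[idx],                        # red ramps up
                    1.0 - ramp[idx],                  # green ramps down
                    np.full(color_vals.shape, 0.5)],  # blue held constant
                   axis=-1)
    alpha = np.clip(trans_vals, 0.0, 1.0)[..., np.newaxis]
    return np.concatenate([rgb, alpha], axis=-1)

# two random fields of the same shape, as in the snippet above
rgba = bivariate_tf(np.random.rand(20, 40, 40), np.random.rand(20, 40, 40))
```

The question is where such a per-voxel lookup could be plugged into the
rendering machinery.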

I have tried to fiddle with the ColorTransferFunction by making my own
CTF and changing all the field IDs in the code from 0 to 1 (see
https://github.com/yt-project/yt/blob/master/yt/visualization/volume_rendering/transfer_functions.py#L393),
but although I assigned this CTF to a supposedly two-field source,
nothing was rendered (as opposed to the case when the field IDs were
flipped back to 0).
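
For contrast, the cookbook-style workaround mentioned above (one source per
field, composited afterwards) amounts to alpha blending the RGBA images the
two sources produce. A toy sketch of back-to-front "over" compositing on
premultiplied-alpha RGBA, with made-up arrays standing in for the actual
renderings:

```python
import numpy as np

def composite_over(front, back):
    """'Over' compositing of two premultiplied-alpha RGBA images:
    the back image shows through where the front one is transparent."""
    alpha_front = front[..., 3:4]
    return front + back * (1.0 - alpha_front)

# two fake 4x4 premultiplied-RGBA "renderings" (stand-ins for the images
# two single-field VolumeSources would produce)
rng = np.random.default_rng(0)
img_a = rng.random((4, 4, 4)) * 0.5
img_b = rng.random((4, 4, 4)) * 0.5
combined = composite_over(img_a, img_b)
```

That blending happens only after each field has been integrated separately,
which is exactly why it is weaker than a true bivariate transfer function.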

>>     2. I have no doubts that transfer functions work, but I was unable
>>     to find out how they are involved in the integration process. What
>>     I came across were the integration routines, and they mention that
>>     the transfer function is not used:
>>     https://github.com/yt-project/yt/blob/master/yt/utilities/lib/ray_integrators.pyx#L235
>>     . This is really interesting; how does it actually work?
>>
>>
>> I believe the low-level routine you want to look at is
>> yt/utilities/lib/image_samplers.pyx, in particular the
>> VolumeRenderSampler. The PlaneVoxelIntegration class you linked to
>> here is experimental code that is not used elsewhere in yt.
>
>     Okay, I am happy about this, as that code was extremely hard to
>     read. This one is somewhat easier, although the code still doesn't
>     reveal much intent, and the variable names are difficult for an
>     outsider to understand. Anyway, I will proceed, and if I manage
>     to understand it, I may be able to come up with some proposals.
>
>
> Feel free to ask questions as you have them.
I see, it is VolumeRenderSampler.sample, ImageSampler.__call__,
walk_volume, etc. There is some very impressive functionality in there,
but I would have to study it for weeks just to reach some level of
understanding, which I can't do. So I hope that we will be able to
achieve this multimodal visualization without fully understanding those
masterful, but also quite sophisticated, low-level routines.
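
For my own notes, here is my current mental model of what the sampler does
along a single ray, with the place where the transfer function enters
marked. This is a toy emission/absorption integrator in numpy, not yt's
actual code, and `toy_tf` is a made-up stand-in for a real transfer
function:

```python
import numpy as np

def integrate_ray(samples, tf, dt=0.1):
    """Toy front-to-back emission/absorption integration along one ray.
    The transfer function maps each sampled field value to an emission
    colour and an absorption coefficient before accumulation -- this is
    the slot a custom (bivariate) transfer function would have to fill."""
    rgb = np.zeros(3)
    transmittance = 1.0
    for s in samples:
        colour, absorb = tf(s)               # transfer function lookup
        alpha = 1.0 - np.exp(-absorb * dt)   # opacity of this step
        rgb += transmittance * alpha * colour
        transmittance *= 1.0 - alpha
    return rgb, 1.0 - transmittance          # colour and total opacity

# stand-in transfer function: the value drives both redness and absorption
toy_tf = lambda s: (np.array([s, 0.0, 0.0]), s)
rgb, opacity = integrate_ray(np.linspace(0.0, 1.0, 64), toy_tf)
```

If this mental model is right, a bivariate version would only change the
lookup step: tf would receive two sampled values instead of one.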


