[Yt-dev] Multivariate Volume Rendering

Matthew Turk matthewturk at gmail.com
Tue Mar 16 23:34:35 PDT 2010


Hi all,

Over the last little while, the volume renderer has really taken off
in popularity!  I'm a bit conflicted about this, since I'd rather
people were super duper into things like phase plots, radial profiles
and clump finding, but it's also good to know people are using yt for
volume visualization as well as other types of analysis.  I used it
myself to what I think was great effect at a conference last week, and
I hope that other people are getting similar use out of it.

Anyway, a couple months ago John wrote to yt-users and suggested that
we try doing multivariate volume rendering, similar to what's in
Kaehler, Wise, & Abel.  For a while mostly nothing happened, but over
Friday through Sunday, Jeff and I hammered it out and it's now working.
Jeff took care of the emission spectra (with some assistance from
John's RGB code), I took a whack at adding multiple variables to the
partitioned grids and the volume rendering itself, and we can now do
multivariate rendering -- although absorption still needs some work,
which I'll discuss below.

The new mechanism for grid partitioning only returns the indices into
the vertex-centered data, rather than processing the vertex-centered
data itself.  This means the actual locating of sweeps is
much faster -- right now we don't take advantage of that, but I can
see it being useful in the future.  A few weeks ago I wrote a
distributed object collection so that we could do 3D domain decomp to
get the bricks, then pass them around between processes for the actual
volume rendering, which was parallelized via 2D image plane decomp
into decomposed inclined boxes.  Now the 3D decomp will only pass
around single arrays that describe the indices into the bricks, and
then each image-decomp processor will generate and slice up the data
as necessary -- this isn't yet working, but it's a clear process and
I'll try to take care of it soon.
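
To make that concrete, here's a rough sketch of the idea -- the names
here (BrickIndex, extract_brick) are made up for illustration, not
what's actually in the branch:

import numpy as np

class BrickIndex:
    """One brick, described only by its grid and the start/end indices
    into that grid's vertex-centered data.  This small description is
    what the 3D decomp would pass around, rather than the (much
    larger) data itself."""
    def __init__(self, grid_id, left_index, right_index):
        self.grid_id = grid_id
        self.left_index = np.asarray(left_index)
        self.right_index = np.asarray(right_index)

def extract_brick(vc_fields, brick):
    """Slice the actual vertex-centered data out locally, on the
    image-decomp process, once the index description arrives."""
    li, ri = brick.left_index, brick.right_index
    return [f[li[0]:ri[0], li[1]:ri[1], li[2]:ri[2]]
            for f in vc_fields]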

Transfer functions now have several more parameters, and I'm working
to convert the old-style transfer functions to this scheme.  The idea
is that you now add field interpolation tables which correspond to a
specific field, link them to channels (R, Ralpha, G, Galpha, B,
Balpha), and optionally have their values weighted by other fields.
As an example, here's how the Planck function sets up its various
channels:

http://hg.enzotools.org/yt/file/ede5d76cddd4/yt/extensions/volume_rendering/TransferFunction.py#l169

(there's a bit of cruft where I tested adding lines for density absorption.)

It creates a TransferFunction, links it to a field (0) and a weighting
field (2), then links it to channels.  There's a lot of complexity
here, because we want to enable very flexible behavior, but for the
most part it will be hidden in the ColorTransferFunction, which will
abstract this all away.
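
As a rough illustration of that linking -- this follows the pattern in
the file above, but the exact signatures in the branch may differ, and
the field ids and bounds are placeholders:

from yt.extensions.volume_rendering.TransferFunction import \
    TransferFunction, MultiVariateTransferFunction

mv = MultiVariateTransferFunction()

# An interpolation table over field 0 (say, temperature), with its
# values weighted by field 2, as in the Planck example above.
tf = TransferFunction((4.0, 6.0))   # bounds, here taken in log10(T)
tf.add_gaussian(5.0, 0.1, 1.0)      # emit strongly near log10(T) = 5
mv.add_field_table(tf, 0, weight_field_id=2)

# Link table 0 to the first channel; each of the six channels
# (R, Ralpha, G, Galpha, B, Balpha) gets a table id this way.
mv.link_channels(0, [0])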

Absorption works slightly differently now.  Because we do
back-to-front integration, I believe we do not have to store the
accumulated opacity -- so while right now the volume renderer carries
6xNxM arrays, I think it needs only 3.  Additionally, we're
presented with the radiative transfer equation:

dI/ds = -alpha * I_0 + j

Here j would be our emission at a given sample, I_0 is the current
value of the image plane, and ds is the distance between samples.
Since we're doing back-to-front, I believe we are directly solving
this, so the alpha we feed in corresponds directly to the absorption.
However, because ds is so small, careful calibration between alpha
and emissivity is necessary.  Alternatively, converting everything
back to cgs is probably not difficult...  (Approximate) scattering
should be easy to put in; it just hasn't been yet.
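
For a single ray and a single channel, here's a minimal sketch of how
I read that back-to-front update -- this is not the actual rendering
kernel, and integrate_ray is just an illustrative name:

import numpy as np

def integrate_ray(emission, absorption, ds, I0=0.0):
    """First-order back-to-front solve of dI/ds = -alpha * I + j,
    with sample 0 farthest from the camera.  Note only the running
    image value is stored; no accumulated opacity is needed."""
    I = I0
    for j, alpha in zip(emission, absorption):
        # Attenuate what's behind this sample, then add its emission.
        # For this to behave, alpha * ds must stay well below 1 --
        # that's the alpha/emissivity calibration mentioned above.
        I = I * (1.0 - alpha * ds) + j * ds
    return I

# e.g. constant emitters with mild absorption along 512 samples:
I = integrate_ray(np.full(512, 0.5), np.full(512, 0.1), ds=1e-2)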

However, that's neither here nor there.  I believe that some exciting
things can be done with this (Britton and Jeff and I have chatted
about line transfer as well as LyA maps) and I think it's rapidly
converging on a workable implementation.

I made up some images with it.  The first is a sample dataset of
John's with an HII region:

http://yt.enzotools.org/attachment/wiki/Screenshots/hiiregion_rgb.png

The second is one of my protostars from my thesis (I had to
aggressively clip the image here), at around 1.5e4 K.  This image is
3 AU on a side:

http://yt.enzotools.org/attachment/wiki/Screenshots/protostar_rgb.png

And, just for fun, I got the transfer function widget to steal the
camera position from the VTK interface, so I made up an image of the
two playing nicely together:

http://yt.enzotools.org/attachment/wiki/Screenshots/vtk_tf_vr.png

Anyway, that's my status update.  All of the MultiVariate stuff went
in the "vr-multivariate" branch, so feel free to take a look.  (You'll
probably have to re-cython.)  An example script is here:

http://paste.enzotools.org/show/367/

I'll shoot another, much shorter, email to the list when it's working
and merged back into the main hg branch.  We'll then have a parallel
software volume renderer that is fully multi-channel, multi-variate,
and comes with black body transfer functions.  All of this is only
achievable thanks to the development community we have, and our team
efforts.  So, thank you all for your hard work and your enthusiasm.

-Matt


