[yt-users] Changes to trunk: Volume Rendering, Gadget & Chombo

Matthew Turk matthewturk at gmail.com
Wed Mar 17 11:36:46 PDT 2010


Hi all,

I've backported to trunk (r1664 & r1665) a bunch of the changes to the
volume rendering that have been going on in the mercurial repository.
If you're running on yt-1.6 (the stable branch) you can safely ignore
the rest of this message.  Recall that "yt instinfo" may be able to
auto-upgrade your installation!

The full set of changes, from my SVN commits, is here:

 * Updates to the time series code.  It's now just about usable.
 * Adding package parallel_tools, where we will be adding all NEW parallel
   support code and migrating old code over to it.
 * Ray casting now works in parallel and with non-equal width/height values for
   the output image.  This necessitated a change in API.
 * Added AMRInclinedBox data object.
 * Heaviside function added to transfer functions (see the sketch after this list).
 * VolumeRendering is now an object.  direct_ray_cast is now gone.
 * Added Sam Skillman's image_handling module.
 * Added HomogenizedBrickCollection and DistributedObjectCollection.  These
   were designed to parallelize the brick partitioning, but I believe we need a
   new mechanism for that; currently they function, but they do not yet do 3D
   domain decomposition.
 * Adding preliminary support for both Chombo and Gadget.
 * Vertex-centering is a tiny bit faster.
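
For those curious about the Heaviside bit: it's just a step, zero below a
threshold field value and constant above it, as opposed to the
colormap-sampled layers that add_layers lays down.  Here's a rough numpy
sketch of that shape over the same log10(Density) bounds used in the example
further down; the threshold and amplitude are made-up values, and this is not
the actual yt call:

--
import numpy as np

# Illustrative only: a step (Heaviside) channel over log10(Density) bounds.
# The threshold (-12.0) and amplitude (0.1) are made-up values.
x = np.linspace(-14.0, -10.0, 256)      # sample points in log10(Density)
step = np.where(x >= -12.0, 0.1, 0.0)   # zero below the threshold, constant above
--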

The big things to note are that we now have support for very small Gadget
datasets, in a very limited fashion, and that Jeff Oishi has added basic
support for Chombo datasets.  If you're interested in helping out with
either of these, let me know.
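
If you want to poke at either of these, loading should in principle look like
loading any other dataset.  The file name below is just a placeholder, and
since both frontends are preliminary, the exact entry point may change (the
Gadget path should be analogous); treat this as a sketch rather than a recipe:

--
from yt.mods import *

# Placeholder file name; automatic format detection by load() is assumed here.
pf = load("plt0010.hdf5")   # e.g., a Chombo plotfile
pf.h.print_stats()          # summarize the grid hierarchy, if it reads in
--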

The biggest change here is that the volume rendering API has shifted; this
was necessary to accommodate the fact that it is now parallel.  Previously,
one imported volume_rendering and called direct_ray_cast; now the volume
rendering object hangs off the hierarchy, like all other first-class objects
in yt.  So instead of calling direct_ray_cast, one now does something like
this:

--
from yt.mods import *                 # provides load()
import yt.extensions.volume_rendering as vr

pf = load("DataDump0155.dir/DataDump0155")
W = 100.0/pf['au']                # image width: 100 AU, in code units
SIZE = 1024                       # pixels on a side
v, c = pf.h.find_max("Density")   # find_max returns the value and its location
L = [0.5, 0.2, 0.7]               # viewing direction; an arbitrary choice

tf = vr.ColorTransferFunction((-14.0, -10.0))  # bounds in log10(Density)
tf.add_layers(8)                  # sample a colormap 8 times
vp = pf.h.volume_rendering(L, W, c, (SIZE, SIZE), tf)
vp.ray_cast()                     # this is the step that now runs in parallel
vr.plot_rgb(vp.image, "my_image")
--

This would produce a 1024x1024 image, 100 AU wide, centered on the densest
point.  One of the side effects of this change in API is that this exact
operation now runs in parallel.  Britton and Sam have reported good scaling
for the parallelism; I'm testing it out myself on huge datasets, so hopefully
I'll have some benchmarks in the near future.

Sorry about the breakage, but I think it's worth it.  The "yt instinfo
-u" command will attempt to upgrade your current installation, if
you're interested in any of these changes.  Let us know if this has
caused any breakages!

Best,

Matt


