[Yt-dev] Inline Volume Rendering
matthewturk at gmail.com
Wed Dec 8 15:46:13 PST 2010
On Wed, Dec 8, 2010 at 3:19 PM, Sam Skillman <samskillman at gmail.com> wrote:
> Hi Matt,
> First things first - I've never even tried to do a volume rendering inline.
> If we don't want to move data around, it should be straightforward for
> simulations without load balancing turned on because the enzo domain
> decomposition mimics the kd-tree breadth-first decomposition (with a few
> adjustments). If load-balancing is turned on, I really have no clue how one
> would do this without some major additions.
Ah, I see the issue here. I think we can't/shouldn't assume load
balancing is off. The particular use case I had in mind initially was
unigrid, but the more interesting case occurs when there is AMR.
Ideally this would work for both RefineRegion runs and AMR-everywhere,
but it seems that in general this cannot be the case.
> If we are okay with moving data around then there are more options and we
> would just have to put an initial data distribution function before the
> rendering begins. We could even add in some better memory management so
> that chunks of data are sent as needed instead of having to load everything
> into memory at one time.
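As a side note, the "send chunks as needed" idea above can be sketched with a simple batching generator. This is a toy stand-in only: `iter_grid_chunks` and the `fetch` callback are hypothetical names, not part of yt or Enzo, and real code would be pulling batches off the wire via MPI rather than from a local callable.

```python
import numpy as np

def iter_grid_chunks(grid_ids, fetch, chunk_size=8):
    """Yield grid data in fixed-size batches so that only one batch
    needs to be resident in memory at a time (hypothetical helper)."""
    for start in range(0, len(grid_ids), chunk_size):
        batch = grid_ids[start:start + chunk_size]
        yield [fetch(gid) for gid in batch]
        # The batch goes out of scope before the next one is fetched.

# Toy usage: 'fetch' stands in for receiving one grid's data.
fetch = lambda gid: np.full((2, 2, 2), float(gid))
total = 0.0
for chunk in iter_grid_chunks(list(range(20)), fetch, chunk_size=8):
    total += sum(g.sum() for g in chunk)
print(total)  # each grid contributes gid * 8 zone values
```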
Well, let's back up for a moment. The initial implementation of
inline analysis as I wrote it is what I have referred to as "in situ"
analysis. This is where the simulation grinds to a halt while
analysis is conducted; for this reason, you can see why I'm a bit
hesitant to do any load-balancing of data. The alternative is
something we could call "co-visualization." This is not yet implemented in
yt, but it is the next phase. This is where the data is passed off
and then the simulation continues; this is attractive for a number of
reasons. I've created a very simple initial implementation of this
that works with 1:1 processors, but it also does no load-balancing.
The recent focus on inline analysis has been for two reasons: the
first is that we are currently benchmarking and identifying
hot spots in the *existing* inline analysis. But we also need to think
ahead to the next two iterations: the next iteration will add coviz
capabilities, and the one following that will be a hybrid of the two,
wherein in situ visualization becomes a byproduct of that broader
rethinking.
So I think for the current generation, we can't assume it's okay to
move data around. But it will be, eventually. This might just mean
we can't use the fanciest volume rendering in situ, and will
need to move to coviz for that.
> Alternatively, if we don't care about back-to-front ray-casting (in some
> cases you can't tell much of a difference), then the problem gets very
> simple...we may want to try this out on some post processing renders and get
> a feel for how much it matters.
For the ProjectionTransferFunction, this manifestly is not an issue --
but that, of course, is not the fanciest of the renderings. It may be
interesting to have it as a switch: "unordered = True" in the Camera,
for instance, that lets the grids come in any order. What do you
think? Then for the Gaussian-style TFs, we may get similar or
identical results, but for the Planck it would probably be gross and
inaccurate.
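To make that trade-off concrete, here is a minimal sketch (plain NumPy with made-up sample values, not the yt API) of why a purely additive, projection-style integration is order-independent while back-to-front "over" blending is not:

```python
import numpy as np

# Made-up per-grid emission and opacity samples along a single ray.
emission = np.array([0.6, 0.3, 0.1, 0.4])
opacity = np.array([0.8, 0.5, 0.2, 0.9])

def composite_projection(order):
    """Projection-style (purely additive) integration: commutes, so
    grids may arrive in any order."""
    return sum(emission[i] for i in order)

def composite_over(order):
    """Back-to-front 'over' blending: attenuates what lies behind, so
    the result depends on the traversal order."""
    value = 0.0
    for i in order:
        value = emission[i] + (1.0 - opacity[i]) * value
    return value

fwd = [0, 1, 2, 3]
rev = fwd[::-1]
# Additive integration commutes...
assert np.isclose(composite_projection(fwd), composite_projection(rev))
# ...but alpha blending does not (for generic opacities).
assert not np.isclose(composite_over(fwd), composite_over(rev))
```

So an "unordered = True" switch would be harmless for the additive case but would change the answer for anything with real opacity.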
> Anyways, I guess the current status would be that if we want it to work for
> all cases, it's going to take quite a bit more work. If we want it to work
> in some of the cases, it shouldn't be too much more work.
I think "some of the cases" is perfectly fine. This also speaks to
the idea that we should construct a more general load balancing
framework for spatially-oriented data in yt, but that's definitely not
going to be a near-term goal.
Thanks for your thoughts, Sam. I think the summary is:
* With a small bit of work, it will work for non-EnzoLoadBalanced simulations
* With unordered ray casting, it should work roughly as-is, with some
intermediate image composition steps
* Anything else will require coviz capabilities
Does that sound fair?
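For reference, the log_2 Nproc intermediate composition mentioned in the quoted message below can be sketched serially. This is only a toy stand-in for the MPI exchange (hypothetical function name, power-of-two image count assumed), pairing buffers and blending them with the "over" operator each round:

```python
import numpy as np

def tree_composite(images, alphas):
    """Pairwise back-to-front compositing: each round halves the
    number of live image buffers, so N buffers finish in log2(N)
    rounds (serial stand-in for an MPI reduction tree)."""
    images = [np.asarray(im, dtype=float) for im in images]
    alphas = [np.asarray(a, dtype=float) for a in alphas]
    rounds = 0
    while len(images) > 1:
        next_im, next_a = [], []
        for i in range(0, len(images), 2):
            front_im, front_a = images[i], alphas[i]
            back_im, back_a = images[i + 1], alphas[i + 1]
            # 'over' operator: front plus attenuated back.
            next_im.append(front_im + (1.0 - front_a) * back_im)
            next_a.append(front_a + (1.0 - front_a) * back_a)
        images, alphas = next_im, next_a
        rounds += 1
    return images[0], alphas[0], rounds

# Four "processors", each holding a 2x2 partial image.
ims = [np.full((2, 2), v) for v in (0.1, 0.2, 0.3, 0.4)]
als = [np.full((2, 2), 0.5) for _ in ims]
final_im, final_a, rounds = tree_composite(ims, als)
print(rounds)  # log2(4) = 2 rounds
```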
> On Wed, Dec 8, 2010 at 4:00 PM, Matthew Turk <matthewturk at gmail.com> wrote:
>> Hi all, (especially Sam)
>> What's the current status of inline Enzo volume rendering? Sam, you
>> had mentioned to me that with the new kD-tree decomposition that this
>> should be feasible. If we opt not to move data around, which is my
>> preference, is it still possible to partition every grid that belongs
>> to a processor and then do the appropriate number of intermediate
>> composition steps of the image? I recall Sam saying this may require
>> log_2 Nproc composition steps, which may in fact be acceptable.
>> PS Stephen, Britton and I have been chatting off-list about inline
>> HOP, but once we come to a consensus we'll float it back onto the
>> Yt-dev mailing list.
> Yt-dev mailing list
> Yt-dev at lists.spacepope.org