[yt-dev] streamlines, magnetic field loops, and such

Matthew Turk matthewturk at gmail.com
Mon Dec 31 14:19:55 PST 2012


Hi Jeff and Sam, et al,

Let me start out by saying I think that having non-volumetric elements
in a scene would be a huge win for yt, but that as it stands I suspect
it may be very difficult.  I think at some point we need to properly
refactor the scene + camera system; Sam and I have talked about this.
If that were refactored, we could eventually try -- perhaps
unsuccessfully -- swapping out the rendering engine for a GPU or
accelerator-based one.

My own interests in this are quite keen, as I have been working on
projects that would benefit from outlines added onto the scene.  For
instance, some of the earthquake data could really use some
continents.  ;-)  That particular case might be a bit easier than this
one, though, since in principle the outlines can always be applied
either post-facto or as input to the ray casting.


On Mon, Dec 31, 2012 at 10:25 AM, j s oishi <jsoishi at gmail.com> wrote:
> Hi all,
>
> I was wondering what the status of streamlines in yt's volume renderer is. I
> looked at Sam's talk from last year, and I was wondering if anyone has been
> working on them.

To my knowledge, no, no one has been.  I believe the renderer can
currently handle only two kinds of elements: fluid elements (where the
radiative transfer equation, or some variant, is solved in the
non-scattering limit) and stars.  Stars may not work with more than a
single OpenMP thread; I have yet to track this down.  And by "star,"
what I really mean is a *source* that has a radius of influence; it
enters only as a source.

>
> I'd really like to make diagrams like figure 5 of this paper:
>
> http://arxiv.org/pdf/1212.5612v1.pdf
>
> I'd like to integrate streamlines into other kinds of visualizations,
> putting them in the same scene as (opaque) volume renders, but I honestly
> have zero knowledge of how graphics works, both inside and outside yt. I'm
> going to be working on some projects this year that would make great use of
> streamline visualization. I'd be happy to lend a hand to this effort, so if
> anyone has any suggestions on how I could get started, even basic resources,
> I'd be much appreciative.

These are absolutely gorgeous.

So here's how the rendering algorithm in yt works:

1) Split up grids into bricks; this means that the domain of interest
is fully tiled, but only by the finest resolution data available at a
given point.
2) Set up an empty image plane
3) Cast this image plane through the bricks, evolving the transfer
function at each point.  This means identifying which portions of the
image plane intersect the grid, traversing the zones via Amanatides &
Wu, and then terminating.  Each cell is handled individually,
independent of how many sub-samples it gets.
4) Return this image plane to the caller once every brick has been cast through
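
Very roughly, in pseudo-Python (all of these names are illustrative --
the real traversal is cell-exact and written in Cython -- and this uses
an emission-only accumulation):

    import numpy as np

    def cast_rays_through_brick(brick, origins, direction, n_samples=64):
        # Hedged sketch of step 3 for a single brick: march each ray
        # through a brick of emissivities defined on [0, 1]^3 and
        # accumulate in the non-scattering limit.
        accum = np.zeros(len(origins))
        ts = np.linspace(0.0, 1.0, n_samples)
        dt = ts[1] - ts[0]
        for i, origin in enumerate(origins):
            for t in ts:
                p = origin + t * direction
                if np.any(p < 0.0) or np.any(p >= 1.0):
                    continue  # this sample falls outside the brick
                idx = tuple((p * np.array(brick.shape)).astype(int))
                accum[i] += brick[idx] * dt  # emission-only accumulation
        return accum

Here "origins" would be the pixel positions on the image plane and
"direction" the viewing direction; yt does this brick by brick, in
order, with a real transfer function.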

Note that step 3 also involves "Samplers" -- a sampler accumulates
along a ray, given input cells and input values.  This is where in
principle we could add in other items.
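
A hypothetical sampler, just to show the shape of that hook (the real
samplers are Cython classes with different signatures):

    class Sampler:
        # Hypothetical base class: accumulate a value along a ray, one
        # cell at a time.  This is the kind of hook where non-brick
        # items could participate.
        def sample(self, accumulated, cell_values, path_length):
            raise NotImplementedError

    class EmissionSampler(Sampler):
        # Non-scattering emission: just add emissivity times path length.
        def sample(self, accumulated, cell_values, path_length):
            return accumulated + cell_values * path_length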

This is a scene where every item is a brick.  When point sources are
added, they get added during the ray traversal of a brick;
specifically, during each sampling in a cell, the kD-tree for the
brick (which holds a non-exclusive set of point sources) gets queried.
Incidentally, I think the kD-trees we use here may not be thread safe,
which may be the cause of the OpenMP issue I note above.  Any sources
whose radius of influence reaches the cell get added as sources to
each sample inside the cell.
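
To illustrate that lookup (using scipy's kD-tree purely as a stand-in
for yt's own; every name here is an assumption):

    import numpy as np
    from scipy.spatial import cKDTree  # stand-in for yt's internal kD-tree

    def point_source_emission(sample_pos, star_pos, star_radius, star_emission):
        # Hedged sketch: find the stars whose radius of influence reaches
        # this sample position and sum up their contributions.
        tree = cKDTree(star_pos)  # in practice, built once per brick, not per sample
        candidates = tree.query_ball_point(sample_pos, r=star_radius.max())
        total = 0.0
        for i in candidates:
            d = np.linalg.norm(star_pos[i] - sample_pos)
            if d <= star_radius[i]:  # keep only stars that actually reach us
                total += star_emission[i] * (1.0 - d / star_radius[i])
        return total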

I am not an expert on computer graphics; what I know I've picked up
either through targeted study or just miscellaneous sources.  My
(potentially quite wrong!) understanding of most scenes is that they
use some kind of tree (B or kD) to identify whether or not a given
position is located in an object, and that you can walk the tree that
way as a ray moves.  Where my understanding breaks down is how to
handle overlap between objects.  If you have two streamlines, you can
handle overlap easily, as they are discrete objects.  But if you have
a brick, how do you handle intersections?

I think we would need special samplers for that; for instance, in
those bricks that contain a streamline or other opaque object, we
allow for collisions.  When a ray collides (assuming casting from the
eye outward), we terminate sampling and add on whatever the streamline
is supposed to look like.  So as long as the collision check is not
expensive, we can do this.  We could speed it up by masking the cells
where streamlines (which are best thought of here as "tubes," with a
radius as well as an axis) intersect, and only performing the
ray/object collision test within those cells.  For the diagrams you
show, I think this would be a good method, and I think the early ray
termination would speed things up nicely.
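
As a toy version of that check (a single streamline segment treated as
a capsule of finite radius; a real implementation would presumably use
an analytic ray/cylinder intersection rather than marching, and all
names here are made up):

    import numpy as np

    def first_tube_hit(origin, direction, seg_a, seg_b, radius, n_steps=128):
        # Hedged sketch: march along the ray and return the first
        # parameter t at which it enters the "tube" around the segment
        # seg_a -> seg_b.  Returning a t lets the caller terminate
        # sampling there and shade the tube instead.
        seg = seg_b - seg_a
        seg_len2 = np.dot(seg, seg)
        for t in np.linspace(0.0, 1.0, n_steps):
            p = origin + t * direction
            # Closest point on the segment to p, clamped to its endpoints.
            u = np.clip(np.dot(p - seg_a, seg) / seg_len2, 0.0, 1.0)
            if np.linalg.norm(p - (seg_a + u * seg)) <= radius:
                return t
        return None  # the ray misses this tube entirely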

Sam's article is paywalled, and I'm still technically on vacation, so
I haven't read it yet.

I think we can approach this in a few ways:

1) Implement this ourselves.  This is the option I think we should
explore first, as doing so will give us the opportunity to understand
the requirements and the depth of this rabbit hole; that understanding
is going to be necessary eventually anyway, since if we decide to
undertake something like this it needs to be done with a full picture
of what is involved.  I believe it will require a proper "scene" with
multiple elements: bricks would be components, and so would
streamlines / continent lines, stars, and so on, and the opaque,
hard-to-identify ones would deposit themselves in the brick values or
masks somehow.  We'd do collision detection, blah blah.  (A toy sketch
of what such a scene could look like follows this list.)
2) Use something else, like Blender, to do this.  I have used Blender
in the past and even ported yt to it.  Blender allows for volumetric
renderings, and you can also have it call external code to handle
things like rendering volumes; this is ideal for interfacing with yt.
I have yet to be able to do this directly, since the documentation was
a bit sparse, but I think it is feasible.  Unfortunately, this has two
major problems -- the interface to Blender can be quite difficult to
manage unless you are familiar with computer graphics (I am not), and
it requires Python 3.  I did successfully port yt to Python 3 about
two years ago, but have not kept up with that.  Porting all of yt to
Python 3 is not appropriate at this time either, but fortunately you
can mostly do it at install time with some utilities.  Actually
hooking the right pipes up might be tricky but would be possible.
3) Export components to external renderers.
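
For option 1, here is a toy sketch of what such a multi-element scene
could look like -- none of these classes or methods exist in yt today:

    class Scene:
        # Hypothetical container for heterogeneous scene elements.
        def __init__(self, camera):
            self.camera = camera
            self.elements = []  # bricks, streamlines, continents, stars, ...

        def add(self, element):
            self.elements.append(element)

        def render(self):
            image = self.camera.blank_image()
            for element in self.elements:
                # Volumetric elements accumulate along rays; opaque ones
                # would instead deposit masks or collision geometry that
                # the samplers consult for early termination.
                element.contribute(image, self.camera)
            return image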

Any of these would work.  I think this can be done in yt by using
early termination of rays and a smart collision detection system.  We
may also need to reverse the order of ray propagation, so that rays
travel outward from the observer's image plane rather than inward
toward it from far away.  (This has other benefits as well.)  This may
turn out to be simply too expensive, though; the stars are already
expensive.

But the first thing we need to do before we can do any of this is
construct a Scene object, refactor the camera and the volumetric
elements, and make yt work like a proper volume renderer.  Basically
everything we have now is still left over from when there was a single
"pf.h.volume_rendering" object.  Unfortunately, with everything else
going on, I think that unless we have a proper development effort to
work on this -- and it is a multiple-person job -- we probably can't
do that refactoring right now.  Experimenting with non-volumetric
elements once it is done is totally workable (even if the results are
a bit disappointing once we get there!), but we do need to take that
step first.

-Matt

>
> thanks,
>
> j
>
> _______________________________________________
> yt-dev mailing list
> yt-dev at lists.spacepope.org
> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
>


