[yt-users] About Implementation of a full radiative transfer equation in yt volume renderer

Matthew Turk matthewturk at gmail.com
Mon Nov 26 11:39:55 PST 2012


Dear Moon-Ryul,

Sorry for the delay in replying.  Your email took a bit of time to think
about and absorb, and I wanted to give the best answer I could.  As
background, in this paper we go into detail on the main method by which yt
volume renders:

http://adsabs.harvard.edu/abs/2011ApJS..192....9T

The underlying code has changed since that paper was published, but I
believe the methods described there are still accurate.  For starters, as it
stands, the yt method of integration currently starts from fixed positions
and fixed vectors and then integrates outward.  This means that regardless
of how the values change as a ray moves over a cell, the ray always carries
with it a source point and a vector.  The next item that should be
considered is that there *is* an experimental routine (likely
non-functional) that dynamically adds and removes rays during integration;
this also carries with it a modification of the *direction* of a ray, but
not the source.  This can be found in the "adaptive healpix" code, but
again I suspect it is non-functional.  (Non-adaptive healpix *is*
functional, however.)

So with that out of the way, I see only one reason that a ray, as it moves
across the domain, can't change both its direction and its source.  The reason is
the integration order of the volumetric patches (we call them "bricks")
that it traverses.  As it moves, if a ray changes source or direction, it
could potentially re-enter a brick it has previously traversed.  Sam
Skillman's streamlines module nicely addresses this, but for a different
purpose.  The scalability is also different, because the streamlines seek
to retain all traversed values, whereas volume rendering is a strictly
accumulating process.  So I think what you are describing may be *possible*
with yt, but it may be difficult.

That being said, there has been considerable interest in adding a Monte
Carlo module to yt; I've explored this a bit in the past, and I think it
would be quite feasible and potentially quite scalable.
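To make the Monte Carlo idea concrete, here is a minimal sketch of a single-scattering Monte Carlo estimator along one ray through a homogeneous medium.  Everything here (the function name, the homogeneous-slab setup, the isotropic phase function) is a hypothetical illustration, not yt code:

```python
import numpy as np

def mc_single_scatter(sigma_s, sigma_a, light_emission, depth,
                      n_samples=10000, seed=0):
    """Monte Carlo estimate of singly in-scattered radiance along a ray
    through a homogeneous slab of thickness `depth` (toy example, not yt).

    Free paths to a scattering event are sampled from the distribution
    p(s) = sigma_t * exp(-sigma_t * s); samples that fly past the slab
    contribute nothing.
    """
    rng = np.random.default_rng(seed)
    sigma_t = sigma_s + sigma_a
    # Sample free-flight distances: exponential with rate sigma_t.
    s = rng.exponential(1.0 / sigma_t, size=n_samples)
    inside = s < depth
    # At a scattering event, redirect light-source radiance with
    # single-scattering albedo sigma_s/sigma_t and an isotropic
    # phase function f_p = 1/(4*pi).
    albedo = sigma_s / sigma_t
    contrib = np.where(inside, albedo * light_emission / (4.0 * np.pi), 0.0)
    return contrib.mean()
```

For an optically thick slab this converges to albedo * L_e / (4*pi), since essentially every sample scatters inside the medium; the estimator's variance comes entirely from the fraction of paths that escape.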


On Thu, Nov 15, 2012 at 8:56 PM, Moon-Ryul Jung <moon at sogang.ac.kr> wrote:

> Hi, everybody and Matthew:
>
> I happened to encounter the blog of Matthew in the internet, and left some
> notes in the blog. I was led to this
> mailing list with the following request:
>
> I like Python so much that I would like to do my experimentation with
> radiative transfer with Python environment.
>      So, yt is a perfect environment. But it does not support a full
> radiative transfer equation yet.
>
> By a full radiative transfer equation I mean:
>
> (1) dot (w, del L (x, w) ) = LeV_0 (x, w) + sigma_s (x) [ int_{Sp} f_p(
> w, x, w' ) L(x, w') dw' ]  -  sigma_t(x) L(x, w)
>
>
> for all x in a given volume V, where V_0 is the "free" part of the
> volume V that is not occupied by objects.
>
> Here del is the spatial gradient operator, L(x,w) is the radiance at point
> x in the direction of w. LeV_0 (x,w) is the
> emitting radiance at point x in the direction of w. sigma_s(x) is the
> scattering coefficient at x. Sp  is the set of all solid angles. f_p(w, x,
> w') is the phase function at x, from direction w' to direction w. sigma_t
> (x) is the extinction or attenuation
> coefficient, and sigma_t = sigma_s + sigma_a, where sigma_a is the
> absorption coefficient.
>
>
> (2) boundary conditions: for all x on surfaces of objects within the
> volume V:
>
>    L(x,w) = LeS (x, w) + int_{Sp} f_s (w, x, w') L(x, w') dot ( N(x), w' )
> dw'
>
>  Here LeS(x,w) is the emitting radiance at x on a surface in direction w.
> f_s(w, x, w') is the reflection function at x from
> direction w' to direction w.
>
> By combining (1) and (2), we get a single integral equation:
>
> (3) L(x, w) = tau( x_S, x) L (x_S, w) + int_{x_S, x} tau(x', x)  [ LeV_0
> (x', w) + sigma_s(x') int_{Sp} f_p( w, x', w') L(x', w') dw' ] dx'
>
> Here tau(x', x) is the optical depth from x' to x:
>
> tau(x', x) = exp [ -int_{x', x} sigma_t (x'') dx''  ]
>
> It is not easy to find the radiance function L(x,w)  that satisfies
> the integral equation of (3), where L occurs in both sides of the equation.
>
> In computer graphics [within computer science], this problem has been
> solved mainly by means of Monte Carlo
> Integration technique, in particular "path tracing" techniques. In these
> techniques, we randomly generate a set of
> paths from the camera to the light sources, and compute the solution L by
> averaging the radiances carried by each path. A path is a sequence of
> positions in space. The essence of these techniques comes down to how to
> sample good paths that contribute much to the final radiance at the
> camera; many paths would not connect the camera and the light sources and
> thus would not contribute at all.
>
> I use the term "radiance" to mean the radiant energy per unit area per
> unit solid angle; it is the same as the "intensity" often used in the
> literature.
>
> NOW: As far as I gathered information from various sources, the yt volume
> renderer implements only a special
> case of a radiative transfer equation. It seems that it does not consider
> the second term ( the integration) in equation (3).
> Or  it simplifies the integration
>             int_{x_S, x} tau(x', x) [ LeV_0 (x', w) + sigma_s(x') int_{Sp
> } f_p( w, x', w') L(x', w') dw' ] dx'
>
> to
>
>  int_{x_S, x} tau(x', x) [ LeV_0 (x', w) ] dx'.
>
> That is, it does not support the phase function f_p(w, x', w').
>

Yes, that is correct -- we do not apply a phase function.


>
> But I would like to use the phase function. In that case, we may try a
> "single scattering" version or a "multiple scattering"
> version. In the first version, each sampled position in the volume
> scatters only the radiance coming directly from the
> light source. In the second version, each sampled  position in the
> volume scatters the radiances scattered by
> other positions. In this case, two or three multiple scattering would be
> good enough. But for me the single scattering
> version would be just fine.
>

We may be able to support this, but I am not entirely sure.  If I
understand correctly, this would imply that at a certain critical value or
threshold, a given ray emanating from a source would be scattered and sent
in a new direction.  I'm thinking about this and I think that if the
ray vectors are constant in number (i.e., we do not add new rays) then this
could be done quite simply by swapping out the integrator in the
Streamlines, which would be simpler than the method of using the volume
integrator.  What this says to me is that we could actually gain quite a
bit of interesting functionality if the streamline integrator were
refactored to fit better with the volume rendering -- separation of
accumulation steps from the integration steps, and allowing both to be
swapped out independently.  Sam, what do you think of this?
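The separation I have in mind could look something like the following toy sketch, where the step function (how a ray advances) and the accumulation function (how values combine) are independent plug-ins; all names here are hypothetical, not current yt interfaces:

```python
def trace(ray, n_steps, step, accumulate, init):
    """Walk a ray, decoupling the integration step (how the ray advances,
    e.g. straight-line vs. streamline advection) from the accumulation
    step (how sampled values combine, e.g. volume-rendering compositing).
    Toy illustration; not yt's actual API."""
    state = init
    for _ in range(n_steps):
        state = accumulate(state, ray)   # pluggable accumulation
        ray = step(ray)                  # pluggable advance
    return state

# A straight-line stepper and a simple summing accumulator:
straight = lambda ray: (ray[0] + ray[1], ray[1])   # ray = (position, direction)
add_pos = lambda total, ray: total + ray[0]
```

Swapping `straight` for a field-following stepper would give streamlines; swapping `add_pos` for an attenuated-emission compositor would give volume rendering, which is exactly the refactoring payoff.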


>
>

> -----
> The project I am interested in is rainbow simulation. The phase functions
> of a collection of water drops will be computed
> by means of Mie theory or some other variants.
>
> The implementation in C++ of path tracers are available in open source in
> the following site:
>
> http://www.mitsuba-renderer.org/devblog/?p=473
>
> This implementation is reliable and tested. I am willing to add the
> relevant parts of the code
> into the yt volume renderer.
>

WOW!  That is quite cool.  This would be far, far simpler than what I
mentioned above, with the ray tracing.  In fact, reading the Mitsuba page,
it looks like it already accepts hierarchical grids, which is what yt
provides.  This would be amazing, and I think mapping the data structures
could be straightforward.  I think the "gridvolume" data source (p116 of
the documentation) would be a good match.  There are already python
bindings, so I think what would work best would be:

1) Using Python (no yt), supply a single grid to Mitsuba and render it
2) Using yt, use the AMR kD-tree to decompose the data
3) Using yt, supply *many* bricks from the kD-tree to Mitsuba and render
these
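As a rough picture of step 2, here is a toy kd-tree-style decomposition of a uniform NumPy grid into bricks, each of which would then be handed to a gridvolume-style source in step 3.  This is a stand-in sketch only; it is not yt's AMRKDTree, whose real interface differs:

```python
import numpy as np

def split_into_bricks(grid, max_cells=8):
    """Recursively bisect a uniform grid along its longest axis until
    every brick has at most `max_cells` cells -- a toy stand-in for the
    kD-tree decomposition yt performs on AMR data."""
    if grid.size <= max_cells:
        return [grid]
    axis = int(np.argmax(grid.shape))          # split the longest axis
    mid = grid.shape[axis] // 2
    left, right = np.split(grid, [mid], axis=axis)
    return (split_into_bricks(left, max_cells)
            + split_into_bricks(right, max_cells))
```

Each resulting brick is a contiguous, axis-aligned sub-grid, which is the shape of data a per-brick volume source expects.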

This would be an amazing addition to the code.  I'm happy to help you with
identifying where the right places are to hook up Mitsuba and yt.  I'd
never heard of Mitsuba before, but it's really impressive.


>
> But as is well known, it is not easy to find where to hack the existing
> code....
>
> I think I need to cooperate with some one who knows the internal structure
> of the volume rendering code.
>

Sure!  I'm happy to serve as that person.  I'd be eager to help out with
this.  Let me know if there's anything else I can do to help you get
started, and please let us know how it goes if you decide to take this on!

-Matt


>
> Sincerely
>
> Moon R. Jung
> Professor
> Dept of Media Technology
> School of Media
> Sogang University.
>
>  Science is the refinement of everyday thinking.
> Zen is your everyday life
> Life is  at root  playing
>
> _______________________________________________
> yt-users mailing list
> yt-users at lists.spacepope.org
> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>
>