[yt-users] Light Cone + All Sky

Richard P Wagner rpwagner at sdsc.edu
Sun Apr 29 15:44:16 PDT 2012


Hi Matt & Britton,

Thanks for the feedback; you've given me enough information to make a decision based on my current needs, and some ideas for possible later work. As you might guess, in the short term, I was considering building an all-sky map of the SZ effect from the L7 light cone to include in the curation data. This seemed like a reasonable summary image for the project, if it was possible with the current yt.

However, what this idea reminded me of was work by Carbone, et al. [1] on creating lensing maps from the Millennium Simulation. Basically, they use a "shift and stack" method of spherical shells, and project the lensing potential to produce shear maps. Once the all sky map algorithm in yt has an r_inner (or z_inner), thin projections of the column density or transverse potential gradient could be used to create something similar. (The light cone data happens to have the gravitational potential field, which is nice.) From the insight into the code below, the volume rendering refactor branch sounds like it could do the job, with either the column density or a derived field for the potential gradient.
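To make the shell idea concrete, here is a minimal numpy sketch of stacking thin spherical-shell projections into one all-sky map. It deliberately uses a plain latitude-longitude pixelization in place of HEALpix, nearest-cell sampling in place of trilinear interpolation, and a synthetic periodic density cube; `shell_map` and all of its parameters are illustrative names for this sketch, not yt (or healpy) API.

```python
import numpy as np

def shell_map(density, r_inner, r_outer, nlat=32, nlon=64, nsamp=64):
    """Column density through a periodic unit cube, projected onto a
    latitude-longitude pixel grid and restricted to the radial shell
    [r_inner, r_outer].  Observer sits at the box center; sampling is
    nearest-cell, so no sub-cell interpolation (hence the need for
    an explicit r_inner)."""
    n = density.shape[0]
    lat = np.linspace(-np.pi / 2, np.pi / 2, nlat)
    lon = np.linspace(0, 2 * np.pi, nlon, endpoint=False)
    lon2d, lat2d = np.meshgrid(lon, lat)
    # unit direction vector for every sky pixel
    dirs = np.stack([np.cos(lat2d) * np.cos(lon2d),
                     np.cos(lat2d) * np.sin(lon2d),
                     np.sin(lat2d)], axis=-1)
    radii = np.linspace(r_inner, r_outer, nsamp)
    dr = (r_outer - r_inner) / nsamp
    out = np.zeros((nlat, nlon))
    for r in radii:
        pos = 0.5 + r * dirs                          # sample points along each ray
        idx = np.floor((pos % 1.0) * n).astype(int)   # periodic wrap into the cube
        out += density[idx[..., 0], idx[..., 1], idx[..., 2]] * dr
    return out

# Stack several shells (in practice, one per light-cone dataset)
rng = np.random.default_rng(0)
cube = rng.random((16, 16, 16))
shells = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6)]
total = sum(shell_map(cube, ri, ro) for ri, ro in shells)
```

In the real light cone, each shell would of course come from a different dataset along the line of sight; the sum over shells is the "stack" step, and drawing each shell from a randomly shifted and rotated copy of the box would be the "shift".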

To deal with the practical aspects of getting things done, since the curation work doesn't require a deep all sky map, I'll put this on hold. yt already provides enough to build a nice stack of all sky maps from various data sets, plus some square light cone images; that's more than enough for the curation effort. Once the volume rendering refactor is promoted to stable, I'll see if I can revisit this and produce some auxiliary data. It's also possible there are others out there who might be interested in this, after the L7 data is put online.

Thanks Again,
Rick

[1] http://adsabs.harvard.edu/abs/2008MNRAS.388.1618C

On Apr 28, 2012, at 9:15 PM, Matthew Turk wrote:

> Hi Rick and Britton,
> 
> There are three different HEALpix renderers.  Right now, they're used
> mostly for making all-sky projections and escape fraction
> calculations, but I don't see why they can't be repurposed for
> stacking different outputs.  You will probably have to do some manual
> work.
> 
> The first is an interpolation-based HEALpix volume walker; this is in
> both the current and stable trees in the main repository.  This takes
> a fixed Npix and walks outward, sampling and integrating using a
> trilinear interpolation inside each cell.  This is what the example on
> the main page uses, but it does not (right now) have an explicit
> r_inner, only an r_outer.  The return value is an array corresponding
> to the accumulated values at each pixel.
> 
> The second is an adaptive ray tracing algorithm, which splits up at
> some covering fraction.  For a long time this was in the repository,
> but I've resisted documenting it because it used to lose rays
> occasionally.  I believe I've fixed that, but I haven't gotten back
> around to testing it.  It also suffers from a lack of visualization
> mechanisms.
> 
> The final one is in the volume rendering refactor branch, slated to be
> merged into tip of development sometime in the next month or so.  This
> branch completely rewrote all of the samplers so that they're in C and
> threaded with OpenMP, and Sam added things like opacity and hard
> surfaces and directional lighting.  It also includes a
> non-interpolating, rewritten HEALpix renderer (and a from-scratch
> Aitoff projection implementation which is quite fast) that has both an
> r_inner and r_outer.  (The r_inner, it turns out, is necessary since
> we don't perform sub-cell interpolation.)  This method also accepts
> rotation vectors, which could be useful for what you're looking at.
> The repository is at https://bitbucket.org/MatthewTurk/yt-refactor ,
> and if you want the volume_refactor stuff I'd recommend pulling with
> "hg pull -B volume_refactor
> https://bitbucket.org/MatthewTurk/yt-refactor" so that you get the
> bookmark, which will track that head.
> 
> I'd be happy to provide more information about these methods, if you
> want to use them.  I think we've mostly shied away from the
> HEALpix-style light cones because the datasets we've worked with so
> far have been relatively small compared to the Hubble volume, so
> performing the stack-and-shift along an orthogonal LOS has been
> sufficient.  (I don't know much about the HEALpix-based ones, but I
> can only imagine they need data large enough that it subtends a good
> fraction of the sky.)  It might also help if you could describe a bit
> more in detail the algorithm you're looking to implement, and perhaps
> if it needs that third type we can accelerate merging the refactor
> into tip?  Also, I don't see why the light cone method itself couldn't
> use one of these, since when it all comes down to brass tacks, we're
> just stacking arrays anyway, and the output from the HEALpix camera
> would stack just as nicely.
> 
> Let me know if one of these in particular sticks out, or if you've got
> more ideas or questions!
> 
> Thanks,
> 
> Matt
> 
> On Sat, Apr 28, 2012 at 1:58 PM, Britton Smith <brittonsmith at gmail.com> wrote:
>> Hi Rick,
>> 
>> It looks to me like the HEALpix rendering works within the context of the
>> camera being somewhere inside of the computational domain.  For a light
>> cone, it would need to be able to operate as if it were very far away from
>> the domain, where the entire volume subtends a very small solid angle.  I'm
>> not sure this is possible in the current framework.  Can someone more
>> familiar with the HEALpix renderer comment on this?
>> 
>> Britton
>> 
>> 
>> On Sat, Apr 28, 2012 at 2:44 AM, Richard P Wagner <rpwagner at sdsc.edu> wrote:
>>> 
>>> Hi,
>>> 
>>> I'm looking at the light cone and all sky examples, and I'm wondering how
>>> much effort (in human units, not computer time) is required to produce an
>>> all sky map based on the "shift and stack" method used for light cone
>>> generation. Naively, the answer seems to be based on whether or not the
>>> light cone generator projection function could use the HEALpix camera.
>>> 
>>> Is there a simple way to create an all sky map combining several data
>>> sets?
>>> 
>>> Thanks,
>>> Rick
>>> 
>>> _______________________________________________
>>> yt-users mailing list
>>> yt-users at lists.spacepope.org
>>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>> 
>> 
>> 