[yt-dev] yt for lagrangian hydro

Matthew Turk matthewturk at gmail.com
Fri Aug 10 07:50:59 PDT 2012


Hi Matt,

On Fri, Aug 10, 2012 at 4:32 AM, Matt Terry <matt.terry at gmail.com> wrote:
> i think i follow.
>
> 1 and 2 i understand.  not entirely sure what spatial locality means
> in this context, but i'm sure it will make sense when digging into the
> code.

Well, so Tom and Jeff let me know that they think this could be
avoided if ghost zones were written out.  Does your code have the
ability to write out ghost zones?  That would alleviate a substantial
amount of the trickiness of spatial locality, as we could just read
those in and use them for any finite difference stencil fields.  (And
if not, we can just ignore it, and raise a "Don't know how to generate
ghost zones" error.)  So I think we can ignore spatial locality for
the moment as a problem.
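To make the ghost-zone idea concrete, here is a rough sketch of how a frontend could consume ghost zones that the simulation code wrote to disk, then apply a finite-difference stencil to the interior. The function names, one-layer-thick face slabs, and face labels (`'-x'`, `'+x'`, ...) are all illustrative assumptions, not yt internals or the actual on-disk layout:

```python
import numpy as np

def pad_with_ghost_zones(block, ghosts):
    """Pad a (nx, ny, nz) block with one layer of on-disk ghost zones.
    `ghosts` maps face names ('-x', '+x', ...) to 2D slabs; names and
    layout here are illustrative, not an actual format."""
    nx, ny, nz = block.shape
    padded = np.zeros((nx + 2, ny + 2, nz + 2), dtype=block.dtype)
    padded[1:-1, 1:-1, 1:-1] = block
    padded[0, 1:-1, 1:-1] = ghosts['-x']
    padded[-1, 1:-1, 1:-1] = ghosts['+x']
    padded[1:-1, 0, 1:-1] = ghosts['-y']
    padded[1:-1, -1, 1:-1] = ghosts['+y']
    padded[1:-1, 1:-1, 0] = ghosts['-z']
    padded[1:-1, 1:-1, -1] = ghosts['+z']
    return padded

def centered_diff_x(padded, dx):
    """Centered first derivative in x, evaluated on the interior zones."""
    return (padded[2:, 1:-1, 1:-1] - padded[:-2, 1:-1, 1:-1]) / (2.0 * dx)
```

With one ghost layer on each face, any centered-difference stencil field can be evaluated on the interior without asking yt to generate ghost zones itself.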

>
> right now, all my visualization/analysis is slices and profiles, so
> not having volume rendering isn't a loss.  having it would be a very
> persuasive feature, though.  on that topic, volume rendering will take
> a bit of work, since we'll have to rewrite the ray casting routines to
> deal with an irregular structured mesh.  i'll say from experience
> that ray tracing through Lagrangian meshes is unpleasant.  it's all
> corner cases.

Okay, let's hold off on this for now.  Maybe once we've seen the data
we can test some ideas, but let's explore this later.

>
> assuming we drop volume rendering, this sounds like a few days work to
> get all the pipes connected to the right places.  it also sounds like
> i should wait for v3 to finish baking before diving in.  this is all
> well and good, because it will be a month or so before i really have
> the bandwidth to work on this in earnest.

Sounds great.  Let's plan on revisiting this -- but I definitely think
it's feasible.  And fun.

-Matt

>
> -matt
>
> On Wed, Aug 8, 2012 at 12:07 PM, Matthew Turk <matthewturk at gmail.com> wrote:
>> Hi Sam, Matt, and Anthony,
>>
>> Sam's right -- as it stands we have a bias toward relatively simple
>> mesh geometries.  *However*.  We are trying to change that, and I'll
>> outline the ways in which we have those biases, and how they affect
>> components of yt.  Anthony, this also touches on the project we're
>> going to work on at Columbia in a few weeks.
>>
>> The main places that this touches:
>>
>>  * Data region selection
>>  * Spatial data organization and analysis
>>  * Visualization
>>
>> Hm, that's pretty much all of them.  :)  So to break this down:
>>
>> The data region selection would have to be modified.  This will be
>> most easily addressed in the 3.0 branch, which uses a "geometry"
>> object for each type of geometry, and which allows those to define
>> both data-on-disk access and data-in-memory selection.  As it stands
>> we could, however, swap out the main object_finding_mixins.py-related
>> objects for a Hydra hierarchy, and then take the appropriate steps to
>> ensure that slices, spheres, etc., were selected in a manner that
>> makes the most sense.  One fun thing is that *cell* selection
>> typically operates on cell centers, so we benefit from the fact that,
>> of the two layers of data selection (coarse, i.e. "grid", and fine,
>> i.e. "cell"), only the coarse layer would need to be redefined.
>> And, if we wanted, we could always eliminate the coarse selection and
>> operate only on the fine-grained level -- cell-only selection.
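To illustrate why the fine-grained layer survives unchanged, here is a minimal sketch of cell-center-based selection. This is an illustrative stand-in for yt's selectors, not actual yt code; it works unchanged on a deformed mesh because only the cell-center coordinates matter:

```python
import numpy as np

def select_sphere(cell_centers, center, radius):
    """Boolean mask of cells whose centers lie inside a sphere.

    cell_centers: (N, 3) array of cell-center coordinates, which may come
    from an arbitrarily deformed mesh -- the test is identical either way.
    """
    d2 = np.sum((cell_centers - np.asarray(center)) ** 2, axis=1)
    return d2 <= radius ** 2
```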
>>
>> The second part is the part that I think would be the hardest.  This
>> refers to the idea of generating ghost zones, which touches things
>> like calculating divergence, generating nice volume renderings, and
>> so on.  Additionally, the volume traversal code for volume rendering
>> would have to be rewritten.  I think this is tractable, but probably
>> not on the timescale of days.  Projections would also likely need to
>> be carefully reconsidered and probably not supported in the way they
>> typically are for rectilinear meshes, instead relying on ray casting
>> using the VR infrastructure.
>>
>> The final part is probably quite simple; this is the turning of a set
>> of data into a set of pixels to be given to matplotlib.  I don't know
>> how difficult this would be, but I suspect it's relatively
>> straightforward.
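A minimal sketch of that pixelization step, assuming nothing about the mesh beyond cell-center positions in the slice plane. Nearest-cell deposition is a crude, illustrative stand-in for a proper pixelizer (and when several cells map to one pixel, the last write simply wins):

```python
import numpy as np

def pixelize_nearest(cx, cy, values, bounds, res):
    """Deposit cell values onto a regular pixel buffer.

    cx, cy: cell-center coordinates in the slice plane
    values: field values, one per cell
    bounds: (xmin, xmax, ymin, ymax) of the image
    res:    (nx, ny) pixel resolution
    """
    xmin, xmax, ymin, ymax = bounds
    buf = np.zeros(res)
    px = ((cx - xmin) / (xmax - xmin) * res[0]).astype(int)
    py = ((cy - ymin) / (ymax - ymin) * res[1]).astype(int)
    ok = (px >= 0) & (px < res[0]) & (py >= 0) & (py < res[1])
    buf[px[ok], py[ok]] = values[ok]  # last write wins on collisions
    return buf
```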
>>
>> And along with all of this, the fields that govern volume, position
>> and so on would need to be carefully defined.
>>
>> So if one were to embark on this, the strategy that I think would
>> work best is:
>>
>>  1. Define the IO routines to read in the mesh structure and the deformed state.
>>  2. Modify existing IO selection routines to operate on the mesh;
>> this would be maybe three or four routines.
>>  3. Work on spatial locality
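As a hedged outline only, the three steps above might hang together in a frontend skeleton like this. Every class and method name below is hypothetical and illustrative, not actual yt API:

```python
# Hypothetical skeleton of the three-step strategy; all names are
# illustrative, not actual yt API.
class LagrangianFrontend:
    def read_mesh(self, filename):
        """Step 1: read the logical block structure and the deformed
        vertex coordinates from disk."""
        raise NotImplementedError

    def select_cells(self, region):
        """Step 2: coarse selection over blocks, then fine selection
        operating only on cell centers."""
        raise NotImplementedError

    def ghost_zones(self, block):
        """Step 3: spatial locality -- either read ghost zones the
        simulation code wrote out, or raise a "don't know how to
        generate ghost zones" error."""
        raise NotImplementedError
```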
>>
>> However, *all* of these tasks become easier when the generic geometry
>> selection in the 3.0 fork approaches feature parity, which would
>> allow these components to be plugged in much more easily.  All of
>> that aside, I really do think we might be able to get something going
>> if we set aside the goals of VR and ghost zones for now and focused
>> only on getting slices and profiles working; then down the road, we
>> can work with Sam's refactored volume container to try to get the VR
>> working, at which point projections and 3D viz become more
>> approachable.
>>
>> Does all of that make sense?
>>
>> -Matt
>>
>> On Wed, Aug 8, 2012 at 1:54 PM, Sam Skillman <samskillman at gmail.com> wrote:
>>> Hi Matt,
>>>
>>> That helps a lot to clarify things.  Unfortunately, it also makes it
>>> clear (I think, and others could chime in) that it would be difficult
>>> to get full functionality out of yt for your data right now.  At the
>>> moment yt is only capable of handling particles or rectangular solid
>>> zones (elements, cells), and doesn't know how to handle complicated
>>> mesh geometries.  This means that things like slices, projections,
>>> and volume renderings would be difficult to get going.
>>>
>>> However, I think you might be able to get minimal functionality for
>>> things like profiles.  You would need to override how x, y, z
>>> positions are calculated, but you could then do things like
>>> http://yt-project.org/doc/cookbook/simple_plots.html#simple-phase-plots
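For intuition, a profile of this kind can be sketched by hand with nothing but the (overridden) positions, a field, and cell volumes. This is an illustration of what a profile does, not yt's implementation:

```python
import numpy as np

def radial_profile(centers, field, volumes, nbins=16):
    """Volume-weighted radial profile of `field`.

    centers: (N, 3) cell-center positions (e.g. from a deformed mesh)
    field:   (N,) field values
    volumes: (N,) cell volumes used as weights
    Returns (bin_edges, profile), with empty bins reported as 0.
    """
    r = np.sqrt(np.sum(centers ** 2, axis=1))
    bins = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r, bins) - 1, 0, nbins - 1)
    num = np.bincount(idx, weights=field * volumes, minlength=nbins)
    den = np.bincount(idx, weights=volumes, minlength=nbins)
    return bins, num / np.maximum(den, 1e-300)
```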
>>>
>>> Anyways, I think this would be difficult right now until yt handles meshes
>>> in a more explicit fashion.  Does anyone else have thoughts on this?
>>>
>>> Best,
>>> Sam
>>>
>>>
>>> On Wed, Aug 8, 2012 at 11:37 AM, Matt Terry <matt.terry at gmail.com> wrote:
>>>>
>>>> The basic idea is that you have a simple x,y,z mesh with logical
>>>> indices i,j,k.  The Lagrangian part is that the spatial grid moves
>>>> with the fluid flow.  Logically Cartesian means that an i, j, k
>>>> index makes sense.  If you stitch several of these meshes together,
>>>> you can make a mesh where a global i, j, k no longer makes sense.
>>>> My mesh looks like this:
>>>>
>>>> http://visitusers.org/images/4/44/Enhanced_reduced.jpg
>>>>
>>>> The boundary between blocks 012 has reduced connectivity.  The 12345
>>>> boundary has enhanced connectivity.
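A tiny sketch of what "logically Cartesian" means in code, with a made-up displacement standing in for the fluid motion:

```python
import numpy as np

# A single logically Cartesian block: vertices are addressed by logical
# indices (i, j, k), but their coordinates are free to deform, as they
# would after moving with the fluid.  The displacement is invented
# purely for demonstration.
i, j, k = np.mgrid[0:5, 0:5, 0:5].astype(float)
x = i + 0.1 * np.sin(j + k)
y = j + 0.1 * np.sin(i + k)
z = k
# Within the block, (i, j, k) indexing still makes sense even though the
# (x, y, z) vertex positions no longer form a regular lattice.
```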
>>>>
>>>> > 1.  By "logically rectangular", do you mean that each computational
>>>> > element
>>>> > has 6 neighbors that share a face, but the element itself can have a
>>>> > deformed shape?
>>>>
>>>> Yes.  In 3D Cartesian, each zone (the volume defined by 8 mesh
>>>> vertices) shares faces with 6 other zones.  The zones are generally
>>>> strangely shaped.  Aspect ratios (assuming a vaguely rectangular
>>>> shape) can be of order dy/dx ~ 100.  Generally smaller, but they
>>>> can be larger.
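For such deformed zones, even the cell volume has to come from the vertex coordinates. A rough sketch follows; it treats the zone as a parallelepiped spanned by three edge vectors from one corner, which is exact only in that special case and is otherwise just an estimate:

```python
import numpy as np

def approx_zone_volume(v):
    """Rough volume of a zone defined by 8 vertices.

    v: (2, 2, 2, 3) array of vertex coordinates indexed by the logical
    offsets (di, dj, dk).  The scalar triple product of three edges from
    one corner is exact only for a true parallelepiped.
    """
    e1 = v[1, 0, 0] - v[0, 0, 0]
    e2 = v[0, 1, 0] - v[0, 0, 0]
    e3 = v[0, 0, 1] - v[0, 0, 0]
    return abs(np.dot(e1, np.cross(e2, e3)))
```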
>>>>
>>>> > 2.  Does reduced/enhanced connectivity zones mean one element can share
>>>> > a
>>>> > face with, say, 4 elements?  This would make it behave a bit like an
>>>> > adaptive mesh refinement setup, which shouldn't be too bad.
>>>>
>>>> A single zone (element?) will always have 6 neighbors; however, more
>>>> (or fewer) than 6 zones may share a vertex.
>>>>
>>>> > If the shapes of the elements change, I think it might be a little bit
>>>> > tricky, but if they are all geometrically rectangular then it would be
>>>> > easier.
>>>>
>>>> Geometrically rectangular zones are novel.
>>>>
>>>> > Another big determinant of how easy this will be to implement is what
>>>> > the
>>>> > data format looks like.  Is there a method paper or other reference that
>>>> > explains a bit more of the code/data structure?
>>>>
>>>> Data structures are very simple.  Each block is effectively a 3D
>>>> numpy.ndarray.  On disk, each block is contained within a single
>>>> binary file.  Multiple blocks may reside in the same file.
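A hypothetical reader for that layout might look like the following. The byte-offset bookkeeping, the float64 dtype, the array ordering, and the absence of per-block headers are all assumptions for illustration, not facts from the thread:

```python
import numpy as np

def read_block(filename, offset, shape, dtype=np.float64):
    """Read one contiguous block from a binary file that may hold
    several blocks.  `offset` is the block's starting byte; dtype and
    ordering are assumed, not known from the actual format."""
    with open(filename, 'rb') as f:
        f.seek(offset)
        data = np.fromfile(f, dtype=dtype, count=int(np.prod(shape)))
    return data.reshape(shape)
```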
>>>>
>>>> Hope that helps.  Happy to answer more questions.
>>>>
>>>> -matt
>>>> _______________________________________________
>>>> yt-dev mailing list
>>>> yt-dev at lists.spacepope.org
>>>> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
>>>
>>>
>>>


