[yt-dev] Best way to share BCs for the kD-tree?

Matthew Turk matthewturk at gmail.com
Tue Nov 26 13:10:10 PST 2013


Hi Sam,

On Tue, Nov 26, 2013 at 3:57 PM, Sam Skillman <samskillman at gmail.com> wrote:
> Hi Matt,
>
> There isn't currently an easy way to share boundary conditions.  However, I
> think we could attempt to address this with a few easy-ish changes. I can
> think of three ways, and there may be more.
>
> 1) Provide tools for neighbor location + communication
> Here we would use the structure of the tree to quickly identify which
> processor owns the neighboring region and send across whatever is needed.
> This is fairly straightforward since the tree is built out to the point
> where the domain decomposition is done, so traversing it to find the owner
> is very fast.  Once the owner is known, request a boundary layer.  This would probably
> require coming up with some sort of data object that acts as a boundary
> layer.  I guess this could actually be an AMR2DData object that gets filled
> out and sent over, but that might be overkill?

This is interesting, but I agree it might be the hardest.  The way
contouring currently works requires traversing from finest to coarsest,
so that the mapping of cell to neighbor zone is always 1:1, rather than
1:refine_by**(2*d_level).  Under this assumption, I already have a
method for looking at neighbors in the contouring code, which calls the
_find_node() function on a perturbed position vector.
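
Roughly, the idea is something like the sketch below: perturb the cell
center by one cell width along each axis and ask the tree which leaf
contains the perturbed point.  (The .owner attribute and the exact call
signature are placeholders for illustration, not the actual AMRKDTree
API.)

    import numpy as np

    def neighbor_owners(tree, cell_center, cell_width):
        # Perturb the cell center along each axis and ask the tree which
        # leaf contains the perturbed point; collect the processors that
        # own those leaves.  _find_node() mirrors what the contouring
        # code does; .owner is just a stand-in for however we record
        # which processor holds a leaf's data.
        owners = set()
        for axis in range(3):
            for sign in (-1.0, 1.0):
                probe = np.array(cell_center, dtype="float64")
                probe[axis] += sign * cell_width[axis]
                leaf = tree._find_node(probe)
                owners.add(leaf.owner)
        return owners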

>
> 2) Build the full tree on all procs, but only own the data for a
> subset.
> I think the AMRKDTree build may be fast enough at this point to just build
> the entire thing on each processor, then later decide which nodes to
> populate with data.  This could be done by just setting a buffer around the
> initial parallel decomp.
>
> 3) Build the tree once up until the point where you have Nprocs leaves.  Then
> build a second tree that has left and right edges that are buffered around
> the original tree decomp.  This would be much like option 2, but necessary
> if the tree build time became dominant.
>
> I think options 2 and 3 would probably be the easiest, though I think it
> would be neat to set up communication between nodes through option 1.

Either 2 or 3 would work for me as is, but I agree that 1 would be the
most fun and potentially the most rewarding in the long run.  For the
purposes of what I'm looking at right now, 2 would probably be the way
to go, since my recollection is that the tree-building is not dominant
anymore.
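
To make option 2 concrete, I imagine the data-population step looking
something like this sketch: build the same tree on every processor, then
only attach data to leaves that overlap the local sub-domain padded by a
buffer.  (The leaf edge attributes here are illustrative, not the actual
AMRKDTree interface.)

    import numpy as np

    def leaves_to_populate(leaves, my_left_edge, my_right_edge, buffer_width):
        # 'leaves' is the full set of leaf nodes from a tree built
        # identically on every processor.  We keep any leaf whose box
        # overlaps our sub-domain padded by buffer_width, so each proc
        # also holds one layer of boundary data from its neighbors.
        padded_left = np.asarray(my_left_edge) - buffer_width
        padded_right = np.asarray(my_right_edge) + buffer_width
        keep = []
        for leaf in leaves:
            if np.all(leaf.left_edge < padded_right) and \
               np.all(leaf.right_edge > padded_left):
                keep.append(leaf)
        return keep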

>
> I'd be happy to sprint on this sometime if you'd like.

Let's do that.  Maybe sometime in the new year, as it sounds like it
might require some work.  But I'll try to leave as much open as
possible for this as I update the PR for contouring, which I think is
just about ready to go (and now includes inline extracted regions).

>
> Sam
>
>
> On Tue, Nov 26, 2013 at 12:29 PM, Matthew Turk <matthewturk at gmail.com>
> wrote:
>>
>> Hi all, especially Sam,
>>
>> Is there a good way to share boundary conditions between nodes of the
>> kD-tree?  I'm particularly interested in either speeding up vertex
>> value generation or storing an additional layer of values that
>> correspond to (expensive to compute) values on other processors, for
>> the purposes of the contour finder.
>>
>> Basically, I think we can parallelize the contour finder in yt-3.0 PR
>> 120 if we can share boundary conditions between processors.  But I'm
>> not sure how to do that.  Alternately, if we could double-count
>> boundary nodes between processors, that would work as well.  For
>> instance, if processor 1 has a tile that borders a tile from
>> processor 4, that tile would be "owned" by 4 but would be replicated
>> on 1.
>>
>> Any ideas?
>>
>> -Matt
>> _______________________________________________
>> yt-dev mailing list
>> yt-dev at lists.spacepope.org
>> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
>
>
>
> _______________________________________________
> yt-dev mailing list
> yt-dev at lists.spacepope.org
> http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
>


