[yt-dev] A particle approach to model Ray and AbsorptionSpectrum

Matthew Turk matthewturk at gmail.com
Mon Sep 12 09:41:30 PDT 2016


Hi Bili,

This looks cool; as I mentioned in Slack (and which you responded to!),
this shares some characteristics of the demeshening:

http://lists.spacepope.org/pipermail/yt-dev-spacepope.org/2016-May/006453.html

and Ting-Wai's PR:

https://bitbucket.org/yt_analysis/yt/pull-requests/2127/initial-addition-of-sph-smoothing-kernel/diff

In Slack you mentioned that your proposal takes a different route from the
demeshening, one that minimally changes the API, and I agree it would -- the
only reason I can think of for having an API change (i.e., sampling) is if
the ordering is necessary; in that case, we would (I think?) still need
sampling points.  But in the case of doing column density, you're totally
right, and I'm on board with this.  For the SPLASH algorithm, Ting-Wai has
a merge conflict to fix, and then I think this can go in -- it only works
on-axis right now, but should be trivial to modify to be off-axis in a
subsequent PR.

Anyway, for everyone else, we were planning a sprint for local folks at
NCSA on Sept 19 to work on Demeshening, and it looks like Bili's going to
come out for that.  If anyone would like to participate online, let me know
off-list and we'll set something up.

-Matt

On Sat, Sep 10, 2016 at 3:37 PM, Nathan Goldbaum <nathan12343 at gmail.com>
wrote:

> Hi Bili,
>
> I'll let others respond in detail, but I wanted to say that a contribution
> along these lines would be very welcome. This is broadly along the route
> we'd like to take with SPH simulations for yt 3.4: relying less on the
> particle octree for as many analysis operations as possible.
>
> Nathan
>
>
> On Saturday, September 10, 2016, Bili Dong - Gmail <qobilidop at gmail.com>
> wrote:
>
>> Hi all,
>>
>> I came up with an idea to bring a particle approach to model Ray and
>> AbsorptionSpectrum. But before I move on to implement it in yt, I'd like to
>> let you know what it is and get feedback on it.
>>
>> The situation is that I have an SPH simulation, and I want to model the
>> Ray (in order to get the AbsorptionSpectrum) as accurately as possible.
>> Currently when we create a Ray object, it's always created from the
>> deposited grid. Although it is a good approximation to the true particle
>> representation, it is still not the most accurate way. I'd like to be able
>> to do it in the particle way (like in SPLASH). In the long term, I know
>> that Matt and Meagan are working on a new system for particle datasets.
>> The work I'm going to propose could be thought of as lying on top of that,
>> in that the method could be made faster by utilizing Matt and Meagan's
>> work, but the main infrastructure would stay the same.
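>>
>> For concreteness, this is how a grid-based Ray gets used today (a minimal
>> sketch; the sample dataset and the summed field are just placeholders,
>> though `ds.ray` and the `dl` field are the standard interface):
>>
>>     import yt
>>
>>     ds = yt.load("IsolatedGalaxy/galaxy0030/output_0030")
>>     ray = ds.ray(ds.domain_left_edge, ds.domain_right_edge)
>>     # each element corresponds to one grid cell the ray crosses; 'dl'
>>     # is the path length through that cell, so a column density along
>>     # the ray is just a weighted sum
>>     column = (ray["density"] * ray["dl"]).sum()
>>
>> The particle version would keep exactly this interface and only change
>> what `dl` means under the hood.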
>>
>> To introduce what I plan to do, let's have a look at the first figure
>> here
>> <http://yt-project.org/docs/dev/analyzing/analysis_modules/light_ray_generator.html>.
>> The core concept of a Ray object is the *path length*, *`dl`*.
>> Basically, if we combine the normal fields with the `dl` field, we get a
>> Ray object. Now imagine instead of a ray intersecting a lot of grid cells,
>> we have a ray intersecting a lot of SPH particles. How do we define
>> *`dl`* then? We could define it as the *integral of the SPH kernel along
>> the intersection*! And that's the whole trick. From this we could
>> define a particle Ray that looks just the 'same' as the original grid
>> Ray. Then any analysis built on top of the Ray object, AbsorptionSpectrum
>> for example, doesn't need to change much. It will simply work differently
>> when provided with a different kind of Ray object.
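>>
>> Concretely, for a particle with smoothing length h whose center sits at
>> impact parameter b from the ray, the integral runs along the chord the
>> ray cuts through the smoothing sphere. A minimal sketch, assuming the
>> standard 3D cubic spline kernel with compact support radius h (the
>> function names are mine, just for illustration):
>>
>>     import numpy as np
>>     from scipy.integrate import quad
>>
>>     def cubic_spline_3d(r, h):
>>         # M4 cubic spline kernel in 3D, vanishing at r = h
>>         q = r / h
>>         sigma = 8.0 / (np.pi * h**3)
>>         if q < 0.5:
>>             return sigma * (1.0 - 6.0 * q**2 + 6.0 * q**3)
>>         elif q < 1.0:
>>             return sigma * 2.0 * (1.0 - q)**3
>>         return 0.0
>>
>>     def kernel_line_integral(b, h):
>>         # integrate W(sqrt(b^2 + s^2), h) along the chord; its
>>         # half-length inside the smoothing sphere is sqrt(h^2 - b^2)
>>         if b >= h:
>>             return 0.0
>>         smax = np.sqrt(h * h - b * b)
>>         val, _ = quad(lambda s: cubic_spline_3d(np.hypot(b, s), h),
>>                       -smax, smax)
>>         return val
>>
>> With a normalization like dl_j = (m_j / rho_j) * kernel_line_integral(b_j,
>> h_j), summing n_j * dl_j over particles reproduces the SPH column density,
>> in direct analogy to summing n_i * dl_i over grid cells.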
>>
>> The main difficulty in the implementation is the construction of the
>> particle `dl` field. Currently I'm doing it by brute force: computing
>> `dl` for all the particles and masking out those with zero values. Matt
>> and Meagan's work will accelerate this by providing the neighbor
>> information, so I could then do the computation on a small set of
>> particles. The brute-force method is not unbearably slow, though, so the
>> acceleration could be saved for future work.
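>>
>> The brute-force step is essentially the following (a sketch reusing
>> kernel_line_integral from above; the array names are placeholders for
>> the particle position, smoothing length, mass, and density fields):
>>
>>     def particle_dl(pos, hsml, mass, dens, start, end):
>>         # decompose each particle position into a distance s along the
>>         # ray and an impact parameter b perpendicular to it
>>         u = end - start
>>         length = np.linalg.norm(u)
>>         u = u / length
>>         rel = pos - start
>>         s = rel @ u
>>         b = np.linalg.norm(rel - np.outer(s, u), axis=1)
>>         # only particles whose smoothing spheres the ray can touch
>>         hit = (b < hsml) & (s > -hsml) & (s < length + hsml)
>>         dl = np.zeros(len(pos))
>>         for j in np.flatnonzero(hit):
>>             dl[j] = (mass[j] / dens[j]) * kernel_line_integral(b[j], hsml[j])
>>         return dl
>>
>> The mask is then just dl > 0; neighbor information would let us jump
>> straight to the 'hit' set instead of testing every particle.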
>>
>> I have an external implementation of the particle approach, and have used
>> it in my current research. I have compared results from the particle
>> method with those from Trident, and they agree statistically, as we
>> expected (thanks, Cameron, for the help). Now that it looks mature, I'd
>> like to
>> implement it in yt.
>>
>> If anyone has comments, opinions, or suggestions, I'd like to hear
>> them.
>>
>> Thanks for reading,
>>
>> Bili
>>
>
>