[yt-users] How to efficiently control the memory in yt with large simulation?

Junhwan Choi (최준환) choi.junhwan at gmail.com
Fri Dec 6 14:17:55 PST 2013


> On Fri, Dec 6, 2013 at 2:52 PM, Geoffrey So <gsiisg at gmail.com> wrote:
>> I believe you can access a rectangular sub-volume of the data with
>> something like
>>
>> sv = pf.h.region([0.5]*3, [0.21, 0.21, 0.72], [0.28, 0.28, 0.79])
>>
>> This is shown here, where the halo finder is run on a sub-volume:
>> http://yt-project.org/docs/2.6/analyzing/analysis_modules/running_halofinder.html?highlight=sub%20region
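>>
>> A minimal sketch of that pattern (untested; the dataset name is
>> hypothetical, and the subvolume keyword for HaloFinder follows the
>> linked docs page):
>>
>> from yt.mods import *
>> from yt.analysis_modules.halo_finding.api import HaloFinder
>> pf = load("data0458")  # hypothetical dataset
>> # center, left edge, right edge, all in code units
>> sv = pf.h.region([0.5]*3, [0.21, 0.21, 0.72], [0.28, 0.28, 0.79])
>> halos = HaloFinder(pf, subvolume=sv)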
>>
>> And you can use other 3D objects like spheres, ellipsoids, etc. For a
>> thin slice of the data, you can make your rectangular region small
>> along one of the dimensions.
>>
>> When making plots, just pass in the sv 3D sub-volume object instead
>> of the usual pf, which would plot the entire simulation.
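>>
>> Putting that together for a projection, something like this (untested
>> sketch; whether ProjectionPlot accepts a data_source keyword depends
>> on your yt version -- the older spelling is pf.h.proj(0, "Density",
>> source=sv)):
>>
>> from yt.mods import *
>> pf = load("output_00080/info_00080.txt")  # hypothetical RAMSES output
>> sv = pf.h.region([0.5]*3, [0.21, 0.21, 0.72], [0.28, 0.28, 0.79])
>> p = ProjectionPlot(pf, "x", "Density", data_source=sv)
>> p.save()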
>
> This will work if the octree is parsed on demand; what I think Junhwan
> is running into right now is that the entire octree is loaded at
> startup.  So you end up with a constant memory cost that gets added for
> each process; a sub-volume will reduce the *additional* overhead of
> loading data, but he still pays the memory cost for each leaf node of
> the octree.
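>
> Back-of-envelope for why that hurts (the 64 bytes per leaf is an
> assumed bookkeeping cost, not a measured yt number):
>
> n_leaves = 4096**3            # ~6.9e10 cells at full effective resolution
> bytes_per_leaf = 64           # assumed per-leaf overhead
> print(n_leaves * bytes_per_leaf / 1024.0**4)  # ~4.0 TiB for the index alone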
>
Do you mean that there is no way to read 4096^3 RAMSES data using yt?
I tried to define a small region of the simulation and make a density
projection plot, but it seems to fail for the same reason.
Is there a way to temporarily get around this problem?

Thank you for your continued help,
Junhwan


