[yt-users] cloud in cell mesh construction for particle data

Matthew Turk matthewturk at gmail.com
Mon Jun 9 07:48:50 PDT 2014


Hi Brendan,

Looks like 30 gigs are allocated before yt gets used at all, which makes
sense if you're allocating 4 float64 fields of 10^9 particles each.  I have
no idea how memory_profiler works; I always use the yt memory profiler.
Without knowing where it crashes *inside* yt (it looks like memory_profiler
swallows that info), I can't say where it's dying.  The call you are making
to the cic deposition looks like this internally:

pos = data[ptype, coord_name]                                   # particle positions
d = data.deposit(pos, [data[ptype, mass_name]], method="cic")   # CIC-deposit the particle masses onto the grid
d = data.apply_units(d, data[ptype, mass_name].units)           # attach mass units to the deposited array
d /= data["index", "cell_volume"]                               # turn deposited mass into a density

Internally, there is one allocation call inside the deposit function.
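
For scale, here are rough numbers (mine, and assuming everything is float64,
which np.random.normal and np.ones both give you) for what is already
allocated before that deposit call even runs:

n_particles = int(1e9)
bytes_per_field = 8 * n_particles      # one float64 array per field
particle_bytes = 4 * bytes_per_field   # ppx, ppy, ppz positions plus ppm mass
deposit_bytes = 8 * 256**3             # if deposit's one allocation is a float64 value per cell of the 256^3 grid
print(particle_bytes / 1024.**3)       # ~29.8 GiB -- the "30 gigs" above
print(deposit_bytes / 1024.**2)        # ~128 MiB  -- tiny by comparison

So the four particle arrays account for essentially all of the 30 GB in your
profile, and the buffer deposit allocates should be noise next to them.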

I have no idea why it is taking up so much memory; my suspicion is that
there are unnecessary copies going on *inside* one of these functions.
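
One concrete candidate is visible in the convert_to_units traceback quoted
further down in this thread: __imul__ multiplies in place (out=self), but
__array_wrap__ then rebuilds the return value with np.array(out_arr), and
np.array copies by default.  A minimal illustration of that default-copy
behaviour, with plain numpy and nothing yt-specific:

import numpy as np

x = np.ones(10**6)
y = np.array(x)    # np.array defaults to copy=True: a brand-new buffer
z = np.asarray(x)  # np.asarray reuses the existing buffer, no copy
print(y.ctypes.data == x.ctypes.data)  # False -- y is a full copy of x
print(z.ctypes.data == x.ctypes.data)  # True  -- z shares x's memory

If that is what happens during the unit conversion, every convert on a
multi-gigabyte array would briefly need a second buffer of the same size even
though the multiply itself was done in place, which would look exactly like
an unnecessary copy.  (Whether that is the real culprit is the question for
Nathan at the bottom of this thread.)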

On Mon, Jun 9, 2014 at 9:42 AM, Brendan Griffen
<brendan.f.griffen at gmail.com> wrote:
> Hi Nathan,
>
> I just increased n_particles to 1e9 (roughly what I have) in the example you
> gave me, and it does indeed crash due to memory.
>
> Line #    Mem usage    Increment   Line Contents
> ================================================
>      6   92.121 MiB    0.000 MiB   @profile
>      7                             def test():
>      8   92.121 MiB    0.000 MiB       n_particles = 1e9
>      9
>     10 22980.336 MiB 22888.215 MiB       ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])
>     11
>     12 30609.734 MiB 7629.398 MiB       ppm = np.ones(n_particles)
>     13
>     14 30609.734 MiB    0.000 MiB       data = {'particle_position_x': ppx,
>     15 30609.734 MiB    0.000 MiB               'particle_position_y': ppy,
>     16 30609.734 MiB    0.000 MiB               'particle_position_z': ppz,
>     17 30609.734 MiB    0.000 MiB               'particle_mass': ppm,
>     18 30609.734 MiB    0.000 MiB               'number_of_particles': n_particles}
>     19
>     20 30609.738 MiB    0.004 MiB       bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppy), max(ppy)]])
>     21
>     22 30610.027 MiB    0.289 MiB       ds = yt.load_uniform_grid(data, [256, 256, 256], length_unit=parsec, mass_unit=1e8*Msun, bbox=bbox)
>     23
>     24 30614.352 MiB    4.324 MiB       grid_object = ds.index.grids[0]
>     25
>     26                                 uniform_array = grid_object['deposit', 'all_cic']
>     27
>     28                                 print uniform_array.max()
>     29                                 print uniform_array.shape
>     30
>     31                                 plt.imshow(uniform_array[:,:,128].v)
>     32
>     33                                 plt.savefig('test.png')
>
>
> Traceback (most recent call last):
>   File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/runpy.py", line 162, in _run_module_as_main
>     "__main__", fname, loader, pkg_name)
>   File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/runpy.py", line 72, in _run_code
>     exec code in run_globals
>   File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 821, in <module>
>     execfile(__file__, ns, ns)
>   File "profilecic.py", line 36, in <module>
>     test()
>   File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 424, in f
>     result = func(*args, **kwds)
>   File "profilecic.py", line 28, in test
>     print uniform_array.max()
>   File "profilecic.py", line 28, in test
>     print uniform_array.max()
>   File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 470, in trace_memory_usage
>     mem = _get_memory(-1)
>   File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 69, in _get_memory
>     stdout=subprocess.PIPE
>   File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/subprocess.py", line 709, in __init__
>     errread, errwrite)
>   File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/subprocess.py", line 1222, in _execute_child
>     self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> [yt-x86_64] bigbang%
>
> So even the "memory efficient" run can't be run with 1024^3 particles (ndim =
> 256) on a 128 GB machine, though this may be because of the way the profiler
> works: what actually fails is the os.fork() that memory_profiler's _get_memory
> does to measure usage, once the process is already this large.
>
> Brendan
>
>
> On Mon, Jun 9, 2014 at 9:20 AM, Matthew Turk <matthewturk at gmail.com> wrote:
>>
>> > /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in convert_to_units(self, units)
>> >     366
>> >     367         self.units = new_units
>> > --> 368         self *= conversion_factor
>> >     369         return self
>> >     370
>> >
>> > /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in __imul__(self, other)
>> >     667         """ See __mul__. """
>> >     668         oth = sanitize_units_mul(self, other)
>> > --> 669         return np.multiply(self, oth, out=self)
>> >     670
>> >     671     def __div__(self, right_object):
>> >
>> > /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in __array_wrap__(self, out_arr, context)
>> >     966                 # casting to YTArray avoids creating a YTQuantity with size > 1
>> >     967                 return YTArray(np.array(out_arr, unit))
>> > --> 968             return ret_class(np.array(out_arr), unit)
>> >     969
>> >     970
>> >
>> > MemoryError:
>> >
>>
>> Nathan, any idea why this is copying?  We shouldn't be copying here.


