<div dir="ltr">Hi Nathan,<div><br></div><div>I just set n_particles = 1e9 (roughly what I have) in the example you gave me, and it does indeed crash due to memory.</div><div><br></div><div><div>Line # Mem usage Increment Line Contents</div>
<div>================================================</div><div> 6 92.121 MiB 0.000 MiB @profile</div><div> 7 def test():</div><div> 8 92.121 MiB 0.000 MiB n_particles = 1e9</div>
<div> 9 </div><div> 10 22980.336 MiB 22888.215 MiB ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])</div><div> 11 </div><div> 12 30609.734 MiB 7629.398 MiB ppm = np.ones(n_particles)</div>
<div> 13 </div><div> 14 30609.734 MiB 0.000 MiB data = {'particle_position_x': ppx,</div><div> 15 30609.734 MiB 0.000 MiB 'particle_position_y': ppy,</div>
<div> 16 30609.734 MiB 0.000 MiB 'particle_position_z': ppz,</div><div> 17 30609.734 MiB 0.000 MiB 'particle_mass': ppm,</div><div> 18 30609.734 MiB 0.000 MiB 'number_of_particles': n_particles}</div>
<div> 19 </div><div> 20 30609.738 MiB 0.004 MiB bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppy), max(ppy)]])</div><div> 21 </div>
<div> 22 30610.027 MiB 0.289 MiB ds = yt.load_uniform_grid(data, [256, 256, 256], length_unit=parsec, mass_unit=1e8*Msun, bbox=bbox)</div><div> 23 </div><div> 24 30614.352 MiB 4.324 MiB grid_object = ds.index.grids[0]</div>
<div> 25 </div><div> 26 uniform_array = grid_object['deposit', 'all_cic']</div><div> 27 </div><div> 28 print uniform_array.max()</div>
<div> 29 print uniform_array.shape</div><div> 30 </div><div> 31 plt.imshow(uniform_array[:,:,128].v)</div><div> 32 </div>
<div> 33 plt.savefig('test.png')</div><div><br></div><div><br></div><div>Traceback (most recent call last):</div><div> File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/runpy.py", line 162, in _run_module_as_main</div>
<div> "__main__", fname, loader, pkg_name)</div><div> File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/runpy.py", line 72, in _run_code</div><div> exec code in run_globals</div><div>
File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 821, in <module></div><div> execfile(__file__, ns, ns)</div><div> File "profilecic.py", line 36, in <module></div>
<div> test()</div><div> File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 424, in f</div><div> result = func(*args, **kwds)</div><div> File "profilecic.py", line 28, in test</div>
<div> print uniform_array.max()</div><div> File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 470, in trace_memory_usage</div>
<div> mem = _get_memory(-1)</div><div> File "/bigbang/data/bgriffen/lib/memory_profiler/lib/python/memory_profiler.py", line 69, in _get_memory</div><div> stdout=subprocess.PIPE</div><div> File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/subprocess.py", line 709, in __init__</div>
<div> errread, errwrite)</div><div> File "/nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/subprocess.py", line 1222, in _execute_child</div><div> self.pid = os.fork()</div><div>OSError: [Errno 12] Cannot allocate memory</div>
<div>[yt-x86_64] bigbang% </div></div><div><br></div><div>So even the "memory efficient" run can't be run with 1024^3 particles (ndim = 256) on a 128 GB machine. Though this may be because of the way the profiler works: the OSError above is raised by os.fork() inside memory_profiler's _get_memory, not by yt itself.</div><div>
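For what it's worth, the profiler numbers above match a simple back-of-envelope estimate: four float64 arrays of 1e9 elements already need close to 30 GiB before the CIC deposit even starts. A quick sketch of the arithmetic (plain Python, array names follow the profiled script):

```python
# Rough memory footprint of the particle arrays in the script above.
n_particles = int(1e9)
bytes_per_float = 8  # np.random.normal and np.ones both produce float64

positions = 3 * n_particles * bytes_per_float  # ppx, ppy, ppz
masses = 1 * n_particles * bytes_per_float     # ppm

total_gib = (positions + masses) / 2**30
print(round(total_gib, 1))  # ~29.8 GiB, matching the ~30,610 MiB peak above
```

Any transient copy of even one of these arrays (each ~7.5 GiB) pushes the peak higher still.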
<br></div><div>Brendan</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jun 9, 2014 at 9:20 AM, Matthew Turk <span dir="ltr"><<a href="mailto:matthewturk@gmail.com" target="_blank">matthewturk@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in<br>
> convert_to_units(self, units)<br>
> 366<br>
> 367 self.units = new_units<br>
> --> 368 self *= conversion_factor<br>
> 369 return self<br>
> 370<br>
><br>
> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in<br>
> __imul__(self, other)<br>
> 667 """ See __mul__. """<br>
> 668 oth = sanitize_units_mul(self, other)<br>
> --> 669 return np.multiply(self, oth, out=self)<br>
> 670<br>
> 671 def __div__(self, right_object):<br>
><br>
> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/units/yt_array.pyc in<br>
> __array_wrap__(self, out_arr, context)<br>
> 966 # casting to YTArray avoids creating a YTQuantity<br>
> with size > 1<br>
> 967 return YTArray(np.array(out_arr, unit))<br>
> --> 968 return ret_class(np.array(out_arr), unit)<br>
> 969<br>
> 970<br>
><br>
> MemoryError:<br>
><br>
<br>
</div>Nathan, any idea why this is copying? We shouldn't be copying here.<br>
<div class="HOEnZb"><div class="h5">_______________________________________________<br>
yt-users mailing list<br>
<a href="mailto:yt-users@lists.spacepope.org">yt-users@lists.spacepope.org</a><br>
<a href="http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org" target="_blank">http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org</a><br>
</div></div></blockquote></div><br></div>
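The copy Matthew is asking about likely comes from the `np.array(out_arr)` call in the `__array_wrap__` frame of the quoted traceback: `np.array` copies its input by default, whereas `np.asarray` returns the existing buffer when the input is already an ndarray. A minimal sketch of the distinction (plain NumPy, not yt code):

```python
import numpy as np

a = np.arange(5)

b = np.array(a)    # np.array copies by default
c = np.asarray(a)  # np.asarray reuses the existing buffer

print(np.shares_memory(a, b))  # False: b is a fresh allocation
print(np.shares_memory(a, c))  # True: c aliases a
```

For a ~7.5 GiB array like ppm, that copy transiently doubles the memory needed for the unit conversion.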