[yt-users] potential memory leak when using snapshot

Matthew Turk matthewturk at gmail.com
Mon Jun 22 11:23:54 PDT 2015


Hi Wolfgang,

I take that back, I found the problem.

Here's a pull request that fixes it:

https://bitbucket.org/yt_analysis/yt/pull-request/1620

You can keep memory lower if you also gc.collect() it, but it won't
grow out of control with this fix -- or it doesn't for me at least.
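A minimal sketch of the del-plus-collect pattern, using a stand-in class in place of a real yt dataset (the circular reference here mirrors the one suspected in yt; all names are illustrative, not yt API):

```python
import gc

class FakeDataset:
    """Stand-in for a yt dataset; real code would call yt.load_uniform_grid."""
    alive = 0
    def __init__(self):
        self.index = self            # circular reference, as suspected in yt
        FakeDataset.alive += 1
    def __del__(self):
        FakeDataset.alive -= 1

for frame in range(5):
    ytdat = FakeDataset()
    # ... render and snapshot the frame here ...
    del ytdat                        # drop the last direct reference
    gc.collect(2)                    # full collection breaks the cycle

print(FakeDataset.alive)             # → 0
```

With the cycle broken by the collector, no dataset objects survive past their frame, so memory stays flat across the loop.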

-Matt

On Mon, Jun 22, 2015 at 12:37 PM, Matthew Turk <matthewturk at gmail.com> wrote:
> Hi Wolfgang,
>
> Sorry for the disappointing reply.  I have attempted to identify the
> memory leak, and have come up short; I've got a few other things that
> I need to try first, but my *guess* is that there is a circular
> reference somewhere, or references being held to small numpy arrays.
> There's also the possibility that some of the memory is not being
> returned to the operating system, but I don't want to blame that just
> yet.
>
> A few quick fixes you might try: import gc at the top of your
> script, and at the end of each loop do "del ytdat; gc.collect(2)".
> Alternately, if your dataset only changes in the data inside it, you
> can swap it in place by updating the "density" key of the dictionary
> you feed into the load_uniform_grid call.
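The in-place swap described above might look like this (a sketch only; the yt calls are left commented so the key-swap pattern stands alone, and `sh`, `bbox`, and the camera arguments are the ones from Wolfgang's script):

```python
# Stand-ins for the per-frame 3D rho arrays from Wolfgang's loop.
frames = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]

dct = {"density": frames[0]}
# ytdat = yt.load_uniform_grid(dct, sh, bbox=bbox)   # create once
# cam = ytdat.camera(...)                            # create once

for rho in frames[1:]:
    dct["density"] = rho      # swap the data in place; reuse ytdat and cam
    # cam.snapshot(fn=...)
```

Because the dataset and camera are created only once, nothing accumulates per frame.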
>
> -Matt
>
> On Mon, Jun 22, 2015 at 10:25 AM, Matthew Turk <matthewturk at gmail.com> wrote:
>> Hi Wolfgang,
>>
>> This is indeed puzzling; I believe that what's happening is that the
>> volume isn't created until you do the snapshot.  I'm going to attempt
>> to replicate your bug here and let you know.
>>
>> -Matt
>>
>> On Mon, Jun 22, 2015 at 6:24 AM, Wolfgang Kastaun <physik at fangwolg.de> wrote:
>>> Hi,
>>>
>>> I've been trying to make a movie from volume renderings with yt. The
>>> data is available as uniform numpy arrays, which are loaded using non-yt
>>> infrastructure. For each movie frame, the following is done:
>>>
>>> Create a yt dataset from the 3D numpy array rho:
>>> dct = {"density":rho}
>>> ytdat = yt.load_uniform_grid(dct, sh, bbox=bbox)
>>>
>>> Create camera
>>> cam = ytdat.camera(c, L, W, N, self.transf, fields=['density'],
>>>                 north_vector=[0,0,1], steady_north=True,
>>>                 sub_samples=5, log_fields=[True])
>>>
>>>
>>> cam.transfer_function.map_to_colormap(self.lgmin,self.lgmax,
>>>                                       scale=15.0, colormap='algae')
>>>
>>>
>>> Save movie frame as png:
>>> cam.snapshot(fn=name)
>>>
>>>
>>> Even with medium sized datasets (60^3), this runs out of memory after 30
>>> frames or so. Some observations:
>>>
>>> Doing the same thing but without the cam.snapshot(fn=name) call, the
>>> memory leak is gone.
>>>
>>>
>>> Creating the camera just once from the first dataset and using
>>> cam.snapshot every frame does *not* increase memory usage. Of course,
>>> it then renders the same data over and over.
>>>
>>>
>>> Maybe I am doing something wrong. For example, is there a way to just
>>> replace the dataset for an existing camera?
>>>
>>> Any hints?
>>>
>>>
>>> Best Regards,
>>> Wolfgang Kastaun.
>>>
>>> _______________________________________________
>>> yt-users mailing list
>>> yt-users at lists.spacepope.org
>>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
