[yt-users] getting more information about memory usage

nick moeckel nickolas1 at gmail.com
Wed Nov 6 05:13:58 PST 2013


Hi Matt,

thanks for the reply and the info - this should help me out in the short
term. On a longer timescale (months) I'd be interested in helping to make
this more efficient; I'll get in touch once my current pile of deadlines
clears up a bit.

thanks again,
Nick


On Wed, Nov 6, 2013 at 1:45 PM, Matthew Turk <matthewturk at gmail.com> wrote:

> Hi Nick,
>
> On Wed, Nov 6, 2013 at 5:40 AM, nick moeckel <nickolas1 at gmail.com> wrote:
> > Hi there,
> >
> > I'm trying to analyse some data with a 1024^3 base grid, and it's
> > pushing at the memory limits of the machine where the data resides.
> > So two quick questions:
> >
> > should I expect parallel analysis to have a reduced memory footprint?
> > E.g., if I need 16 GB of memory to run this analysis serially, will
> > running on 16 cores get me anywhere near 1 GB/core?
>
> Sort of.
>
> With the current RAMSES frontend, here's how the memory is set up and
> works.  (I'll also mention below how this can be improved.)  As it
> currently stands, when the RAMSES output is parsed, a full octree is
> created on each processor.  Each oct consists of three 64-bit integers
> and an allocatable pointer, for 256 bits.  When that pointer is
> allocated (i.e., for a non-leaf oct), it expands to eight child
> pointers, which adds an additional 448 bits.
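>
> As a rough back-of-the-envelope sketch (these are just illustrative
> numbers I'm plugging in from the 256-bit and 448-bit figures above,
> not anything yt reports), the base level of that replicated index for
> a 1024^3 grid works out to something like:
>
> # estimate of the per-process octree overhead described above
> base_octs = (1024 // 2) ** 3          # each oct covers 2^3 cells
> bits_per_leaf_oct = 256               # 3 ints + 1 allocatable pointer
> leaf_only_gb = base_octs * bits_per_leaf_oct / 8.0 / 1024**3
> print("base-level octree alone: ~%.1f GB per process" % leaf_only_gb)
> # refined (non-leaf) octs cost 256 + 448 bits each on top of this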
>
> When actually doing analysis, the octree is still replicated across
> all nodes, *but*, the fields that occupy that octree will not be.  So,
> you won't be able to eliminate the overhead, but you will be able to
> reduce the additional cost of a given field.
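>
> If you want to try the parallel route, a minimal sketch of what I mean
> (assuming the yt-3.0 style API; the dataset path and field here are
> just placeholders) would be:
>
> # run with, e.g.: mpirun -np 16 python project.py
> import yt
> yt.enable_parallelism()
>
> ds = yt.load("output_00080/info_00080.txt")  # placeholder RAMSES output
> # the octree index is replicated on every task, but the projected
> # field data is distributed across the 16 tasks
> p = yt.ProjectionPlot(ds, "z", "density")
> if yt.is_root():
>     p.save()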
>
> The next iteration should probably involve both index-on-demand (which
> is how the ARTIO frontend works, and how the N-body frontends are
> being transitioned) and "pinning" octree chunks to individual cores.
> Since this is a useful application area for you, I could provide some
> suggestions for areas to dig in, or work with you to experiment with
> ways of making this happen sooner rather than later, but I'm not sure
> I'd be able to dedicate full-time energy to it before the end of the
> year.  It is definitely a weak spot and something we want to get
> fixed, but there are some other barriers to the 3.0 release that need
> to be addressed first.
>
> >
> > does yt have a more verbose debug mode or something that I can
> > activate to output any details about memory consumption that might
> > aid me in the delicate and adversarial dance that this cluster and I
> > are engaged in?
> >
>
> Yup.  There are a couple of handy functions:
>
> get_memory_usage()
>
> which returns the current memory usage in MB, and there's also the
> context manager memory_checker.  This reports memory usage at a
> periodic interval, defaulting to every 15 seconds.  You'd use it
> like this:
>
> from yt.funcs import memory_checker
>
> with memory_checker():
>     do_something()
>     another_time()
>
> and then when it leaves the indented block, the memory checker terminates.
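>
> If you'd rather take spot readings yourself, you can also call
> get_memory_usage() directly; something like this (do_something() is
> just the placeholder from above):
>
> from yt.funcs import get_memory_usage
>
> before = get_memory_usage()   # memory use of this process, in MB
> do_something()
> print("that step cost ~%.0f MB" % (get_memory_usage() - before))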
>
> Hope that helps,
>
> -Matt
>
> > thanks,
> > Nick