<div dir="ltr">Hi Matt,<div><br></div><div>Thanks for the reply and the info; this should help me out in the short term. On a longer timescale (months), I'd be interested in helping to make this more efficient. I'll get in touch once my current pile of deadlines clears up a bit.</div>
<div><br></div><div>thanks again,</div><div>Nick</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Nov 6, 2013 at 1:45 PM, Matthew Turk <span dir="ltr"><<a href="mailto:matthewturk@gmail.com" target="_blank">matthewturk@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Nick,<br>
<div class="im"><br>
On Wed, Nov 6, 2013 at 5:40 AM, nick moeckel <<a href="mailto:nickolas1@gmail.com">nickolas1@gmail.com</a>> wrote:<br>
> Hi there,<br>
><br>
> I'm trying to analyse some data with a 1024^3 base grid, and it's pushing at<br>
> the memory limits of the machine where the data resides. So two quick<br>
> questions:<br>
><br>
> should I expect parallel analysis to have a reduced memory footprint? E.g.<br>
> if I need 16Gb of memory to run this analysis serially, will running on 16<br>
> cores get me anywhere near 1Gb/core?<br>
<br>
</div>Sort of.<br>
<br>
With the current RAMSES frontend, here's how the memory is laid out.<br>
(I'll also mention below how this can be improved.) As it currently<br>
stands, when the RAMSES output is parsed, a full octree is created on<br>
each processor. Each oct consists of three 64-bit integers and an<br>
allocatable pointer, for 256 bits total. When the pointer is allocated<br>
(i.e., for a non-leaf oct), it holds the oct's 8 children, which adds<br>
an additional 448 bits.<br>
<br>
When actually doing analysis, the octree is still replicated across<br>
all nodes, *but* the fields that live on that octree are not. So you<br>
won't be able to eliminate the index overhead, but you will be able to<br>
reduce the additional cost of a given field.<br>
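To put a number on that replicated index, here's a back-of-envelope sketch for the 1024^3 base grid in question, assuming the 256-bit-per-oct figure above and one oct per 2x2x2 block of cells (illustrative only; refined levels and the allocated-pointer overhead would add to this):

```python
# Back-of-envelope: index overhead for a 1024^3 base grid, using the
# figures quoted above (256 bits per oct; each oct spans 2^3 cells).
# These numbers are illustrative, not measured from yt.
base_cells = 1024 ** 3
octs_at_base = base_cells // 8          # one oct per 2x2x2 block of cells
bytes_per_oct = 256 // 8                # 32 bytes
index_gb = octs_at_base * bytes_per_oct / 1024 ** 3
print(f"{index_gb:.1f} GB per process, before any refined octs or fields")
# -> 4.0 GB per process, before any refined octs or fields
```

Since that index is replicated on every process, it's easy to see why a 16 GB machine is tight regardless of core count.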
<br>
The next iteration should probably involve both index-on-demand (which<br>
is how the ARTIO frontend works, and how the N-body frontends are<br>
being transitioned) and "pinning" octree chunks to individual cores.<br>
Since this is a useful application area for you, I could provide some<br>
suggestions for areas to dig in, or work with you to experiment with<br>
ways of making this happen sooner rather than later, but I'm not sure<br>
I'd be able to dedicate full-time energy to it before the end of the<br>
year. It is definitely a weak spot and something we want to get<br>
fixed, but there are some other barriers to the 3.0 release that need<br>
to be addressed first.<br>
<div class="im"><br>
><br>
> does yt have a more verbose debug mode or something that I can activate to<br>
> output any details about memory consumption that might aid me in the<br>
> delicate and adversarial dance that this cluster and I are engaged in?<br>
><br>
<br>
</div>Yup. There are a couple of handy tools:<br>
<br>
get_memory_usage()<br>
<br>
which returns the current memory usage in MB, and there's also the<br>
context manager memory_checker. This reports memory usage at a<br>
periodic interval, defaulting to every 15 seconds. You'd use it<br>
like:<br>
<br>
with memory_checker():<br>
&nbsp;&nbsp;&nbsp;&nbsp;do_something()<br>
&nbsp;&nbsp;&nbsp;&nbsp;another_time()<br>
<br>
and then when it leaves the indented block, the memory checker terminates.<br>
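The behavior described above can be sketched as a small stand-alone context manager. This is NOT yt's actual implementation, just an illustration of the pattern; `get_memory_usage_mb` here is a stand-in built on the stdlib `resource` module:

```python
# Illustrative sketch of the memory_checker pattern: a context manager
# that starts a background thread reporting memory usage every
# `interval` seconds until the with-block exits.
import sys
import threading
import time
import resource
from contextlib import contextmanager


def get_memory_usage_mb():
    # ru_maxrss is reported in kilobytes on Linux, bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 1024 ** 2 if sys.platform == "darwin" else 1024
    return rss / divisor


@contextmanager
def memory_checker(interval=15):
    stop = threading.Event()

    def report():
        # wait() returns False on timeout, True once stop is set.
        while not stop.wait(interval):
            print(f"Memory usage: {get_memory_usage_mb():.1f} MB")

    reporter = threading.Thread(target=report, daemon=True)
    reporter.start()
    try:
        yield
    finally:
        stop.set()
        reporter.join()
```

It's used exactly like the snippet above: wrap the expensive calls in `with memory_checker():` and the reporter thread stops when the block exits.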
<br>
Hope that helps,<br>
<br>
-Matt<br>
<br>
> thanks,<br>
> Nick<br>
><br>
> _______________________________________________<br>
> yt-users mailing list<br>
> <a href="mailto:yt-users@lists.spacepope.org">yt-users@lists.spacepope.org</a><br>
> <a href="http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org" target="_blank">http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org</a><br>
><br>
</blockquote></div><br></div>