[yt-users] running halo_profiler.py using mpi4py

Britton Smith brittonsmith at gmail.com
Mon Dec 7 11:49:51 PST 2009


Shankar,

The HaloProfiler runs in parallel by splitting up the list of halos evenly
among processors, so shared vs. distributed memory doesn't really matter
here.  When in doubt, I recommend using as many cpus per node as you can.
This applies to any application, since you are usually billed for the
number of nodes you use times the number of cpus per node, regardless of
whether you actually use all of the cpus on each node.  Of course, there is
the possibility that you will run out of memory.  If that happens, reduce
the number of cpus per node and try again.  There's really no right answer
here, since it depends on the size of the simulation and the amount of
memory on the machine.

This is one of those cases where you just need to experiment.
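
For reference, the driver script that command runs could look something like
the sketch below.  This is only an illustration: the import path, constructor
arguments, profile fields, and file names (DD0029/DD0029, HopAnalysis.out) are
placeholders and may differ for the yt version and dataset you are using.

  # halo_profiler.py -- rough sketch of a parallel HaloProfiler driver.
  # Import path, arguments, and field names here are illustrative and may
  # differ between yt versions; check the HaloProfiler docs for your release.
  from yt.extensions.HaloProfiler import HaloProfiler

  # Point the profiler at the dataset and the halo list written by HOP.
  hp = HaloProfiler("DD0029/DD0029", halo_list_file="HopAnalysis.out")

  # Request a few radial profiles to be made for each halo.
  hp.add_profile('CellVolume', weight_field=None, accumulation=True)
  hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
  hp.add_profile('Density', weight_field='CellMassMsun', accumulation=False)
  hp.add_profile('Temperature', weight_field='CellMassMsun', accumulation=False)

  # Launched as "mpirun -np <cpus> halo_profiler.py --parallel", yt divides
  # the halo list evenly over the MPI tasks, so each task profiles its own
  # share of the ~40,000 halos.
  hp.make_profiles()

The script itself needs no explicit mpi4py calls; the --parallel flag together
with mpirun is what tells yt to split the halo list among the tasks.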

Britton

On Mon, Dec 7, 2009 at 12:39 PM, Agarwal, Shankar <sagarwal at ku.edu> wrote:

> Hi,
>
> I have about 40,000 halos (found using Hop_Finder), and I want to run the
> HaloProfiler on them. Since the number of halos is so large, I would like to
> use...
>
> mpirun -np <cpus> halo_profiler.py --parallel
>
>
> Steele-purdue has 8 cpus per node. The 8 cpus on a node share memory, but the
> nodes do not. I want to know whether I would be better off using just 8
> cpus (on 1 node) or more nodes, considering I have 40,000 halos.
>
> regards
> shankar