[Yt-dev] Nautilus: ParallelHaloProfiler I/O issue

Geoffrey So gsiisg at gmail.com
Fri Sep 23 09:46:29 PDT 2011


Hi Stephen,

I was running parallelHF on Nautilus when I got the email below from the
sys admins. I'll try to answer their questions here; let me know if anything
is wrong or if there is something else I should add.

(1) How much data are read/written by the program?
- All of the particles (3200^3 of them) are read in, and they are then linked
with a Fortran kD-tree if they satisfy certain conditions.
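
To give a rough sense of the read volume, here is a back-of-the-envelope
estimate; the per-particle field count and precision are my assumptions
rather than measured numbers:

    # Rough estimate of the particle data read in by parallelHF.
    # Assumes 7 double-precision fields per particle (3 positions,
    # 3 velocities, 1 mass); the real field set and precision may differ.
    n_particles = 3200 ** 3          # 3200^3 particles in the unigrid run
    fields_per_particle = 7          # assumption, see above
    bytes_per_field = 8              # float64

    total_bytes = n_particles * fields_per_particle * bytes_per_field
    print("particles:   %.2e" % n_particles)               # ~3.28e10
    print("read volume: %.2f TB" % (total_bytes / 1.0e12)) # ~1.8 TB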

(2) How many parallel readers/writers are used by the program?
- It reads using the 512 cores requested in my submission script. The amount
written to disk depends on how the particle haloes are distributed across
processors: if haloes span multiple processors, more files are written out by
write_particle_lists (see the workflow sketch after the documentation link
below).

(3) Do you use MPI_IO? Or something else?
- The program uses mpi4py-1.2.2, installed in my home directory.
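
If staggering the output would help, something along these lines could limit
how many ranks touch the filesystem at once. This is only a sketch of a
possible throttle, not how parallelHF is currently structured; the batch size
and the write step are placeholders:

    # Sketch: let MPI ranks write in batches so that at most
    # WRITERS_AT_ONCE processes hit the disk at the same time.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    WRITERS_AT_ONCE = 32  # placeholder batch size

    def write_my_files(rank):
        # placeholder for the real per-processor output step
        with open("halo_particles_%04d.txt" % rank, "w") as f:
            f.write("data from rank %d\n" % rank)

    # Every rank executes the same number of loop iterations and barriers,
    # so there is no deadlock; only the ranks in the current batch write.
    for batch_start in range(0, size, WRITERS_AT_ONCE):
        if batch_start <= rank < batch_start + WRITERS_AT_ONCE:
            write_my_files(rank)
        comm.Barrier()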

The details of the code can be found at:
http://yt-project.org/doc/analysis_modules/running_halofinder.html#halo-finding
under the section "Parallel HOP".
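
For concreteness, my script follows the pattern from that page; roughly
something like the sketch below, where the dataset path and the output prefix
are placeholders rather than my actual values (the linked documentation is
the authoritative version):

    # Outline of the halo-finding run (paths and prefixes are placeholders).
    # Launched as something like: mpirun -np 512 python this_script.py --parallel
    from yt.mods import *
    from yt.analysis_modules.halo_finding.api import *

    pf = load("DD0000/DD0000")      # placeholder dataset name
    halos = parallelHF(pf)          # parallel HOP across all MPI ranks

    halos.write_out("MergerHalos.out")             # text summary of the haloes
    halos.write_particle_lists("MergerHalos")      # per-processor particle lists
    halos.write_particle_lists_txt("MergerHalos")  # text map of haloes to files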

Currently I am using 512 cores with 4 GB per core, for a total of 2 TB of RAM,
for this 3200^3 unigrid simulation. Should I decrease the number of processors
while keeping the same total amount of RAM? Or are there other ways to
optimize so that other users are not affected?
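
To make the tradeoff concrete: the aggregate memory stays at about 2 TB if
the core count goes down while the memory per core goes up, and fewer cores
would also mean fewer simultaneous readers/writers. The alternative
configurations below are hypothetical, just to show the arithmetic; I do not
know which ones the scheduler would actually accept:

    # Aggregate RAM for a few core counts; only the 512 x 4 GB row is the
    # current job, the others are hypothetical alternatives.
    configs = [(512, 4), (256, 8), (128, 16)]   # (cores, GB per core)
    for cores, gb_per_core in configs:
        total_gb = cores * gb_per_core
        print("%4d cores x %2d GB/core = %4d GB total" %
              (cores, gb_per_core, total_gb))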

Sorry for the inconvenience.

From
G.S.


On Fri, Sep 23, 2011 at 9:12 AM, Patel, Pragneshkumar B <pragnesh at utk.edu> wrote:

>  Hello,
>
> We have noticed an I/O issue on Nautilus. We suspect that the
> "ParallelHaloProfiler" program is doing some very I/O-intensive operations
> periodically (or maybe checkpointing). We would like to throttle these back
> a bit, so that other users are not affected by it.
>
> I would like to get some more information about your job #60891, e.g.:
>
> (1) How much data are read/written by the program?
> (2) How many parallel readers/writers are used by the program?
> (3) Do you use MPI_IO? Or something else?
>
> Please give me details and we will work on your I/O issue.
>
> Thanks
> Pragnesh
>
>