[Yt-dev] quick question on particle IO

Geoffrey So gsiisg at gmail.com
Tue Oct 18 16:51:17 PDT 2011


Ah yes, I think that answers our question.
We were worried that all the particles were being read in by each
processor (which I told him I didn't think was the case, or it would
have crashed my smaller 800^3 run long ago), but I wanted to get the
answer from the pros.

Thanks!

From
G.S.

On Tue, Oct 18, 2011 at 4:21 PM, Stephen Skory <s at skory.us> wrote:

> Geoffrey,
>
> > "Is the particle IO in YT that calls h5py spawned by multiple processors
> or is it doing it serially?"
>
> For your purposes, h5py is only used to *write* particle data to disk
> after the halos have been found (if you are saving them to disk, which
> you must request explicitly, of course). In that case, each MPI task
> opens one file of its own with h5py (see the first sketch below).
>
> I'm guessing they're actually concerned about reading particle data,
> because that is the more disk-intensive step. Reading is done with
> functions written in C, not h5py. Here each MPI task does its own
> reading, and it may open several files to retrieve the particle data
> it needs, depending on how the grids are laid out across the
> .cpuNNNN files (see the second sketch below).
>
> Does that help?
>
> --
> Stephen Skory
> s at skory.us
> http://stephenskory.com/
> 510.621.3687 (google voice)
>
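To make the write pattern concrete, here is a minimal sketch of
one-file-per-MPI-task output with h5py, assuming mpi4py is available.
The file name and dataset layout are hypothetical, not yt's actual
halo output format.

# Sketch: every MPI rank writes its own HDF5 file, so no locking or
# collective I/O is needed. Names below are invented for illustration.
import h5py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Pretend these are the particles in halos found on this task.
particle_ids = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.int64)

# One file per MPI task (hypothetical name pattern).
with h5py.File("halo_particles_%04d.h5" % rank, "w") as f:
    f.create_dataset("particle_index", data=particle_ids)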
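And a second sketch of the read pattern, in Python purely for
illustration, since in yt the actual reads happen in C. The
grid-to-file mapping and the round-robin grid assignment below are
assumptions made up for the example, not yt's real layout logic.

# Sketch: each task works out which .cpuNNNN files hold the grids it
# owns, then opens each of those files once, however many grids it
# needs from them.
from collections import defaultdict
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical mapping: grid id -> file that stores its particles.
grid_to_file = {gid: "data0001.cpu%04d" % (gid % 8) for gid in range(64)}

# Round-robin the grids across tasks, then group by file so each
# file is opened once per task.
my_grids = [gid for gid in grid_to_file if gid % size == rank]
files_to_grids = defaultdict(list)
for gid in my_grids:
    files_to_grids[grid_to_file[gid]].append(gid)

for fname, gids in sorted(files_to_grids.items()):
    # A real reader would open fname and pull the particle blocks
    # for each grid in gids; here we just report the plan.
    print("rank %d: open %s for grids %s" % (rank, fname, gids))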

