[yt-users] SFR analysis on 3200 cube data

Stephen Skory s at skory.us
Sun Jun 19 14:59:56 PDT 2011


Hi Geoffrey,

> Even if the above method worked, loading each grid onto one
> processor/node, it would only alleviate the memory problem so much,
> because potentially a LOT of the particles can sit on a single grid,
> which will still overload the memory sometimes. So this isn't as
> good as parallel HOP's KD-tree way of cutting up the particles for
> load balancing.

Are you saying that your first example of how to read the particles
(adapted from what I gave you) ran out of memory? Let me know!
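
For reference, the kind of per-grid loop I mean looks roughly like
this (just an untested sketch against the yt 2.x hierarchy interface;
the dataset path is a placeholder, substitute your own):

    from yt.mods import load

    pf = load("DD0100/DD0100")  # placeholder dataset path
    for grid in pf.h.grids:
        # Read the particle positions stored on this one grid only.
        px = grid["particle_position_x"]
        # ... analyze px here ...
        grid.clear_data()  # release this grid's cached fields

Reading and then clearing one grid at a time keeps only a single
grid's particles in memory at once, which is the point of that
approach.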

Regarding running out of memory: the way particle I/O currently works
in yt, a grid holding a very large number of particles can still be a
problem, even in parallel HOP. If you are in fact running out of
memory because of one really heavy grid, we should think about how to
address that.
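
One quick way to check whether a single heavy grid is the culprit is
to look at the per-grid particle counts, which the hierarchy stores
without reading any particle data (again just a sketch, continuing
from the pf loaded above):

    # How concentrated are the particles, grid by grid?
    counts = [g.NumberOfParticles for g in pf.h.grids]
    total = sum(counts)
    heaviest = max(counts)
    print("heaviest grid holds %d of %d particles (%.1f%%)"
          % (heaviest, total, 100.0 * heaviest / total))

If one grid holds a large fraction of all the particles, distributing
whole grids across processors won't help, which matches the situation
you describe above.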

-- 
Stephen Skory
s at skory.us
http://stephenskory.com/
510.621.3687 (google voice)


