[yt-users] Parallelism in yt Applied to Large Datasets

Jason Galyardt jason.galyardt at gmail.com
Wed Dec 6 06:08:29 PST 2017


Hi yt Folks,

I've written a script that uses a yt DatasetSeries object to analyze a time
series dataset generated by FLASH. It worked beautifully until I tried to
run it on a new cluster with significantly larger HDF5 files (4 GB to more
than 8 GB per file). Now, while the script runs, its RAM usage grows
steadily until the OS kills the job.
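For reference, the core of my script follows the usual parallel time series
pattern (the glob and the derived quantity below are just placeholders for
my actual paths and analysis):

    import yt
    yt.enable_parallelism()

    ts = yt.DatasetSeries("run_dir/flash_hdf5_plt_cnt_*")  # placeholder glob

    # As I understand it, piter() with no arguments assigns each dataset
    # to a single MPI rank, so that one rank alone does all of the I/O
    # and analysis for its file.
    for ds in ts.piter():
        ad = ds.all_data()
        mean_rho = ad.quantities.weighted_average_quantity(
            ("gas", "density"), ("gas", "cell_mass")
        )

This is fine when a whole file fits comfortably on one node, but it seems
to be what's biting me now.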

It seems to me that I need to use domain decomposition to process these
large files. So, my question to the group is this: is it possible to use
both domain decomposition *and* parallel time series processing in a single
script? This would require that yt be able to subdivide the available MPI
processors into a number of work groups, each work group handling a single
input file.
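Concretely, I'm imagining something like the following sketch, based on the
njobs and storage keywords to piter (the glob, njobs value, and derived
quantity are placeholders; I don't know whether the domain decomposition
actually works this way for large FLASH files):

    import yt
    yt.enable_parallelism()

    ts = yt.DatasetSeries("run_dir/flash_hdf5_plt_cnt_*")  # placeholder glob

    # With, say, 16 MPI ranks and njobs=4, piter would form 4 work
    # groups of 4 ranks each; each group takes one dataset at a time,
    # and parallel-aware operations inside the loop would then
    # decompose the domain across the group's ranks.
    storage = {}
    for sto, ds in ts.piter(njobs=4, storage=storage):
        ad = ds.all_data()
        # extrema is a parallel-aware derived quantity: the domain is
        # read in chunks and reduced across the work group, so (I hope)
        # no single rank has to hold a whole 8 GB file in memory.
        sto.result_id = str(ds)
        sto.result = ad.quantities.extrema(("gas", "density"))

    # the storage dict is collected across ranks after the loop
    if yt.is_root():
        for name, (mi, ma) in sorted(storage.items()):
            print(name, mi, ma)

I'd run this under MPI with something like "mpirun -np 16 python script.py".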

Cheers,
Jason

------
Jason Galyardt
University of Georgia

