[yt-users] Parallelism in yt Applied to Large Datasets

Nathan Goldbaum nathan12343 at gmail.com
Wed Dec 6 06:25:01 PST 2017


That depends on what sort of analysis you are doing. Not all tasks in yt
are parallel-aware.
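
That said, for the workgroup question: as I recall from yt's parallel
computation docs, passing an integer to the DatasetSeries `parallel` keyword
splits the MPI ranks into that many workgroups, each processing one dataset
at a time, and parallel-aware operations (e.g. derived quantities) are then
domain-decomposed within each workgroup. A minimal sketch along those lines
(the `plt_cnt` glob and the extrema analysis are placeholders for your FLASH
files and your actual analysis; run under MPI, e.g.
`mpirun -np 16 python script.py` after calling main()):

```python
def ranks_per_workgroup(n_ranks, n_jobs):
    # With parallel=n_jobs, each of the n_jobs workgroups analyzes one
    # dataset at a time, so 16 ranks with parallel=4 gives 4 ranks per
    # dataset for domain-decomposed operations.
    if n_ranks % n_jobs != 0:
        raise ValueError("MPI ranks should divide evenly into workgroups")
    return n_ranks // n_jobs


def main():
    import yt

    yt.enable_parallelism()  # must be called before loading any data

    # parallel=4 asks yt to process 4 datasets simultaneously, splitting
    # the MPI communicator into 4 workgroups of nprocs/4 ranks each.
    ts = yt.DatasetSeries("*_hdf5_plt_cnt_*", parallel=4)

    storage = {}
    # piter() hands each workgroup its own subset of the time series; the
    # storage dict collects one result per dataset on the root rank.
    for store, ds in ts.piter(storage=storage):
        ad = ds.all_data()
        # Parallel-aware derived quantities are decomposed across the
        # ranks in the workgroup, which bounds per-rank memory use.
        store.result = ad.quantities.extrema(("gas", "density"))

    if yt.is_root():
        for key, extrema in sorted(storage.items()):
            print(key, extrema)
```

Note that only parallel-aware operations (derived quantities, projections,
profiles, and so on) actually spread a single dataset's work and memory
across the workgroup; anything that reads a whole field onto one rank will
still grow memory the way you are seeing.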

On Wed, Dec 6, 2017 at 8:08 AM Jason Galyardt <jason.galyardt at gmail.com>
wrote:

> Hi yt Folks,
>
> I've written a script that uses a yt DatasetSeries object to analyze a
> time series dataset generated by FLASH. It worked beautifully until I
> tried to run it on a new cluster with significantly larger HDF5 files (4 GB
> to greater than 8 GB per file). Now, while running the script, the RAM
> usage just grows and grows until the OS kills the job.
>
> It seems to me that I need to use domain decomposition to process these
> large files. So, my question to the group is this: is it possible to use
> both domain decomposition *and* parallel time series processing in a single
> script? This would require that yt be able to subdivide the available MPI
> processors into a number of work groups, each work group handling a single
> input file.
>
> Cheers,
> Jason
>
> ------
> Jason Galyardt
> University of Georgia
>
> _______________________________________________
> yt-users mailing list
> yt-users at lists.spacepope.org
> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>