[yt-dev] profiles on non-prgio sets

Matthew Turk matthewturk at gmail.com
Wed Mar 14 07:42:41 PDT 2012


Hi Dave,

On Wed, Mar 14, 2012 at 10:38 AM, david collins <antpuncher at gmail.com> wrote:
> Hi--
>
> Sorry for being sparse with the information; I should have been much
> clearer.  This was running in parallel, 64 cores on Nautilus.  The
> field was Density vs. MassFraction, MassFraction being a field I wrote
> that in this instance simply returns CellMass.  lazy_reader is True.
> The run is 512^3 with 4 levels of refinement by a factor of 4, with
> only about 100 subgrids, but was written in 2007 (I think) without
> parallel root grid IO on.  My suspicion (without really understanding
> how yt parallelism works) is that each task needed to read in the root
> grid, rather than using the domain decomposition (?)
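
For reference, a minimal sketch of what such a derived field might look
like with the yt 2.x add_field API.  The actual definition isn't shown in
this thread, so the exact form and unit string below are assumptions:

    from yt.mods import *

    def _MassFraction(field, data):
        # In this instance the "fraction" is just the cell mass itself.
        return data["CellMassMsun"]

    add_field("MassFraction", function=_MassFraction,
              units=r"M_{\odot}")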

64 cores, using 12 GB on each?  That's a problem.  It sounds like
MassFraction has a handful of dependencies -- CellMassMsun, which
depends on Density and dx, but dx should be used as a single value,
not as a field.  So that gives 3 * 512^3 * 64 bits = ~3 gigabytes,
which is a far cry from 12 GB/core.  Plus, the domain decomposition
for profiles is per-grid -- so only one of your cores should get
assigned the root grid.  This is very bizarre.
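
To spell out the arithmetic behind that estimate, a back-of-the-envelope
sketch (assuming three 64-bit fields covering the 512^3 root grid):

    # Rough memory estimate for the root-grid fields touched by the profile.
    n_cells = 512**3               # root grid cells
    bytes_per_value = 8            # 64-bit floats
    n_fields = 3                   # e.g. Density, CellMassMsun, MassFraction
    total = n_cells * bytes_per_value * n_fields
    print(total / 1024.**3)        # ~3.0 GiB -- nowhere near 12 GB per core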

>
> Would lazy_reader off change things?

Yes, but it would make them worse.  :)
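
For comparison, this is roughly how the profile gets set up in the yt 2.x
API.  With lazy_reader=True the profile is accumulated grid by grid, while
lazy_reader=False pulls the entire data source into memory up front.  The
dataset name, bin count, and bin limits below are placeholders:

    from yt.mods import *

    pf = load("DD0000/DD0000")     # placeholder dataset
    dd = pf.h.all_data()

    # Grid-by-grid accumulation; each core should only touch its own grids.
    prof = BinnedProfile1D(dd, 64, "Density", 1e-30, 1e-22,
                           lazy_reader=True)
    prof.add_fields("MassFraction", weight=None)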

-Matt

>
> Thanks,
> d.
>
>> Are you running in parallel? Which fields are you profiling? Do they have
>> lots of dependencies, or require ghost zones? Is yt using lazy_reader? Does
>> your data have many grids?
>
>
>>
>> Matt
>>
>> On Mar 14, 2012 12:37 AM, "david collins" <antpuncher at gmail.com> wrote:
>>>
>>> I should add that this was done on 64 cores-- in serial it works fine,
>>> just slow.
>>>
>>> On Tue, Mar 13, 2012 at 9:35 PM, david collins <antpuncher at gmail.com>
>>> wrote:
>>> > Hi, all--
>>> >
>>> > I have an old dataset that I'm trying to make profiles on.  It's a
>>> > 512^3 root grid, but was written with ParallelRootGridIO off.  I find
>>> > that it's using strange amounts of memory, more than 12 GB.  Is this a
>>> > known problem with a straightforward workaround?
>>> >
>>> > d.
>>> >
>>> > --
>>> > Sent from my computer.
>>>
>>>
>>>
>>> --
>>> Sent from my computer.
>>
>
>
>
> --
> Sent from my computer.


