[yt-users] volume rendering job dies
Geoffrey So
gsiisg at gmail.com
Tue Apr 17 14:39:15 PDT 2012
Thanks for the notes, Nathan and Sam; I will change my script accordingly.
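For anyone following along, here is a minimal standalone sketch of the combined loop (rank-0-only writes plus an explicit gc.collect() each frame). Since the real yt camera object isn't constructed here, a dummy generator stands in for cam.rotation and rank 0 is assumed in place of cam.comm.rank; the filename pattern is the one from the thread.

```python
import gc

def rotation(n_steps):
    # Dummy stand-in for cam.rotation(2*na.pi, n_steps, rot_vector=...):
    # yields one placeholder "snapshot" per frame.
    for _ in range(n_steps):
        yield object()

rank = 0    # stand-in for cam.comm.rank
frame = 0
written = []
for i, snapshot in enumerate(rotation(36)):
    if rank == 0:
        # Only rank 0 writes; write_bitmap(snapshot, fname) would go here.
        written.append('Volume/camera_movement_%04i.png' % frame)
    frame += 1
    gc.collect()  # free per-frame render buffers before the next iteration

print(len(written), written[0])
```

In the real script the rank check keeps all MPI tasks from racing to write the same file, and the per-frame gc.collect() releases image buffers that would otherwise accumulate across iterations.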
From
G.S.
On Tue, Apr 17, 2012 at 2:35 PM, Sam Skillman <samskillman at gmail.com> wrote:
> Hi Geoffrey,
>
> You also need to modify your script so that all the cores don't try to
> write out the image. Your code should be something like:
>
> # Do a rotation over 36 frames
> frame = 0
> for i, snapshot in enumerate(cam.rotation(2*na.pi, 36,
>                              rot_vector=(0.0, 0.0, 1.0))):
>     if cam.comm.rank == 0:
>         print "starting image %i" % i
>         write_bitmap(snapshot, 'Volume/camera_movement_%04i.png' % frame)
>     frame += 1
>
>
> I also just want to note here that the rendering has been used on a 4096^3
> Athena dataset in the past, running on somewhere between 128 and 256 cores.
> For the 800^3 you should be fine running on up to the number of cores that
> was used to run the simulation. Memory shouldn't be much of an issue, so
> just increase the core count if you want to go faster. Also remember to run
> on a power-of-2 number of processors -- otherwise there will be idle cores
> (it should still work).
>
> Sam
>
>
> On Tue, Apr 17, 2012 at 3:05 PM, Geoffrey So <gsiisg at gmail.com> wrote:
>
>> Matt's right on the spot: I went and checked the script, and it's missing
>> the "--parallel" flag. I must have copied an old line of the submission
>> script and run it serially. Going to try this with fewer nodes!
>>
>> From
>> G.S.
>>
>>
>> On Tue, Apr 17, 2012 at 2:03 PM, Nathan Goldbaum <goldbaum at ucolick.org> wrote:
>>
>>> Hi Geoffrey,
>>>
>>> I'm not sure why your run is crashing but I've solved similar problems
>>> in the past by directly calling the garbage collector at the end of the
>>> loop. Your loop should look like:
>>> import gc
>>>
>>> … doing stuff …
>>>
>>> # Do a rotation over 36 frames
>>> for i, snapshot in enumerate(cam.rotation(2*na.pi, 36,
>>>                              rot_vector=(0.0, 0.0, 1.0))):
>>>     print "starting image %i" % i
>>>     write_bitmap(snapshot, 'Volume/camera_movement_%04i.png' % frame)
>>>     frame += 1
>>>     gc.collect()
>>>
>>> Hope that helps!
>>>
>>> Nathan Goldbaum
>>> Graduate Student
>>> Astronomy & Astrophysics, UCSC
>>> goldbaum at ucolick.org
>>> http://www.ucolick.org/~goldbaum
>>>
>>> On Apr 17, 2012, at 1:57 PM, Geoffrey So wrote:
>>>
>>> Hi all,
>>>
>>> I'm starting to do some volume rendering of my 800^3 simulation, but
>>> the job on the supercomputer just dies without any kind of error in the
>>> logs. The last few lines are:
>>>
>>> yt : [INFO ] 2012-04-15 16:15:22,264 Warning: no_ghost is currently
>>> True (default). This may lead to artifacts at grid boundaries.
>>> yt : [INFO ] 2012-04-15 16:15:33,307 Max Value is 2.46548e-24 at
>>> 0.9743750000000000 0.0193750000000000 0.1781250000000000 in grid
>>> EnzoGrid_0442 at level 0 (79, 15, 42)
>>> yt : [INFO ] 2012-04-15 16:15:33,309 Warning: no_ghost is currently
>>> True (default). This may lead to artifacts at grid boundaries.
>>> yt : [INFO ] 2012-04-15 16:15:47,133 Max Value is 2.46548e-24 at
>>> 0.9743750000000000 0.0193750000000000 0.1781250000000000 in grid
>>> EnzoGrid_0442 at level 0 (79, 15, 42)
>>> yt : [INFO ] 2012-04-15 16:15:47,135 Warning: no_ghost is currently
>>> True (default). This may lead to artifacts at grid boundaries.
>>> yt : [INFO ] 2012-04-15 16:15:49,839 Max Value is 2.46548e-24 at
>>> 0.9743750000000000 0.0193750000000000 0.1781250000000000 in grid
>>> EnzoGrid_0442 at level 0 (79, 15, 42)
>>> yt : [INFO ] 2012-04-15 16:15:49,841 Warning: no_ghost is currently
>>> True (default). This may lead to artifacts at grid boundaries.
>>>
>>> I'm wondering if it is again a memory issue. I ran this job on 16
>>> nodes with 16 cores per node and 4 GB per core. This setup has enough
>>> memory for the halo finding; I'm wondering what the memory requirements
>>> are for volume rendering, and what's the biggest volume people have
>>> rendered to date?
>>>
>>> The script works on my tiny 64^3 test problem:
>>> http://paste.yt-project.org/show/2289/
>>>
>>> If people haven't had this problem when rendering volumes larger than
>>> mine, then it might be a machine issue. I haven't moved the data to
>>> another supercomputer, but I will try that next if others have done
>>> larger volume renderings without any problems.
>>>
>>> From
>>> G.S.
>>> _______________________________________________
>>>
>>> yt-users mailing list
>>> yt-users at lists.spacepope.org
>>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>