[yt-users] Volume Rendering and Memory usage

Andrew Myers atmyers at berkeley.edu
Tue Feb 1 20:13:57 PST 2011


Hello yt users,

I'm trying to volume render an Orion simulation with about 6,000 grids and
100 million cells, and I think I'm running out of memory. I don't know if
this is large compared to other simulations people have volume rendered
before, but if I set the width of my field of view to be 0.02 pc (20 times
smaller than the entire domain), the following code works fine. If I set it
to 0.04 pc or anything larger, the code segfaults, which I assume means I'm
running out of memory. This happens no matter how many cores I run on -
running in parallel seems to speed up the calculation, but not increase
the size of the domain I can render. Am I doing something wrong? Or do I
just need to find a machine with more memory to do this on? The one I'm
using now has 3 gigs per core, which strikes me as pretty solid. I'm using
the trunk version of yt-2.0. Here's the script for reference:

from yt.mods import *

pf = load("plt01120")

dd = pf.h.all_data()
mi, ma = na.log10(dd.quantities["Extrema"]("Density")[0])
mi -= 0.1 ; ma += 0.1  # To allow a bit of room at the edges


tf = ColorTransferFunction((mi, ma))
tf.add_layers(8, w=0.01)
c = na.array([0.0,0.0,0.0])
L = na.array([1.0, 1.0, 1.0])
W = 6.17e+16  # 0.02 pc


N = 512

cam = Camera(c, L, W, (N,N), tf, pf=pf)
fn = "%s_image.png" % pf

cam.snapshot(fn)
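For scale, here is a rough back-of-envelope estimate of what a single full-domain field copy costs in memory. This is an assumption for illustration, not yt's actual internal accounting; the renderer may hold several ghost-zone-padded copies of the data, which would multiply this figure:

```python
# Rough memory estimate for one full-domain field copy, assuming
# 64-bit floats (an illustrative assumption, not yt's exact behavior).
cells = 100e6            # ~100 million cells, as in the simulation above
bytes_per_float = 8      # one double-precision value per cell
one_copy_gb = cells * bytes_per_float / 1024**3
print("One full-domain field copy: %.2f GB" % one_copy_gb)
```

A single copy is well under 3 GB, but a handful of padded copies per process could plausibly exceed the per-core limit.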

Thanks,
Andrew Myers