[yt-users] Volume Rendering and Memory usage

Andrew Myers atmyers at berkeley.edu
Wed Feb 2 09:32:44 PST 2011


Hi Matt,

Thanks for the help. Here is the output of the "bt" command in gdb:

(gdb) bt
#0  __pyx_f_2yt_9utilities_9amr_utils_FIT_get_value (__pyx_v_self=0x87ab9c0,
__pyx_v_dt=0.00024472523295100413, __pyx_v_dvs=0x50e9d670,
__pyx_v_rgba=0x7fffd8a94f60,
    __pyx_v_grad=<value optimized out>) at yt/utilities/amr_utils.c:13705
#1  __pyx_f_2yt_9utilities_9amr_utils_21TransferFunctionProxy_eval_transfer
(__pyx_v_self=0x87ab9c0, __pyx_v_dt=0.00024472523295100413,
__pyx_v_dvs=0x50e9d670,
    __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_grad=<value optimized out>) at
yt/utilities/amr_utils.c:14285
#2  0x00002b5e0a62c464 in
__pyx_f_2yt_9utilities_9amr_utils_15PartitionedGrid_sample_values
(__pyx_v_self=0x50e9d610, __pyx_v_v_pos=<value optimized out>,
    __pyx_v_v_dir=<value optimized out>, __pyx_v_enter_t=23.346866210722702,
__pyx_v_exit_t=<value optimized out>, __pyx_v_ci=<value optimized out>,
    __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_tf=0x87ab9c0) at
yt/utilities/amr_utils.c:17719
#3  0x00002b5e0a62ce16 in
__pyx_f_2yt_9utilities_9amr_utils_15PartitionedGrid_integrate_ray
(__pyx_v_self=0x50e9d610, __pyx_v_v_pos=0x7fffd8a94fd0,
    __pyx_v_v_dir=0x45457d0, __pyx_v_rgba=0x7fffd8a94f60,
__pyx_v_tf=0x87ab9c0) at yt/utilities/amr_utils.c:17386
#4  0x00002b5e0a624876 in
__pyx_pf_2yt_9utilities_9amr_utils_15PartitionedGrid_2cast_plane
(__pyx_v_self=0x50e9d610, __pyx_args=<value optimized out>,
    __pyx_kwds=<value optimized out>) at yt/utilities/amr_utils.c:16199
#5  0x0000000000495124 in call_function (f=0x5a7ce490, throwflag=<value
optimized out>) at Python/ceval.c:3706
#6  PyEval_EvalFrameEx (f=0x5a7ce490, throwflag=<value optimized out>) at
Python/ceval.c:2389
#7  0x00000000004943ff in call_function (f=0x87aa260, throwflag=<value
optimized out>) at Python/ceval.c:3792
#8  PyEval_EvalFrameEx (f=0x87aa260, throwflag=<value optimized out>) at
Python/ceval.c:2389
#9  0x0000000000495d6d in PyEval_EvalCodeEx (co=0x24286c0, globals=<value
optimized out>, locals=<value optimized out>, args=0xb62c38, argcount=2,
kws=0xb62c48,
    kwcount=0, defs=0x242a2a8, defcount=1, closure=0x0) at
Python/ceval.c:2968
#10 0x0000000000493c79 in call_function (f=0xb62ac0, throwflag=<value
optimized out>) at Python/ceval.c:3802
#11 PyEval_EvalFrameEx (f=0xb62ac0, throwflag=<value optimized out>) at
Python/ceval.c:2389
#12 0x0000000000495d6d in PyEval_EvalCodeEx (co=0x2b5e01aed288,
globals=<value optimized out>, locals=<value optimized out>, args=0x0,
argcount=0, kws=0x0, kwcount=0,
    defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2968
#13 0x0000000000495db2 in PyEval_EvalCode (co=0x87ab9c0, globals=0x50e9d670,
locals=0x72f1270) at Python/ceval.c:522
#14 0x00000000004b7ee1 in run_mod (fp=0xb54ed0, filename=0x7fffd8a965a4
"vr.py", start=<value optimized out>, globals=0xb03190, locals=0xb03190,
closeit=1,
    flags=0x7fffd8a958d0) at Python/pythonrun.c:1335
#15 PyRun_FileExFlags (fp=0xb54ed0, filename=0x7fffd8a965a4 "vr.py",
start=<value optimized out>, globals=0xb03190, locals=0xb03190, closeit=1,
flags=0x7fffd8a958d0)
    at Python/pythonrun.c:1321
#16 0x00000000004b8198 in PyRun_SimpleFileExFlags (fp=<value optimized out>,
filename=0x7fffd8a965a4 "vr.py", closeit=1, flags=0x7fffd8a958d0) at
Python/pythonrun.c:931
#17 0x0000000000413e4f in Py_Main (argc=<value optimized out>,
argv=0x7fffd8a959f8) at Modules/main.c:599
#18 0x00002b5e0259a994 in __libc_start_main () from /lib64/libc.so.6
#19 0x00000000004130b9 in _start ()
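
For what it's worth, frames #0-#4 are all inside the Cython-generated ray
integrator (yt/utilities/amr_utils.c), with the crash itself in
FIT_get_value during transfer function evaluation. From the same gdb
session, something like this should print the local state at the crash
site (a sketch, just using the variable names from the trace above):

(gdb) frame 0
(gdb) info locals
(gdb) print __pyx_v_dt
(gdb) print __pyx_v_dvs[0]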

Thanks,
Andrew


On Wed, Feb 2, 2011 at 5:25 AM, Matthew Turk <matthewturk at gmail.com> wrote:

> Hi Andrew,
>
> That's an odd bug!  Do you think you could get a backtrace from the
> segfault?  You might do this by setting your core dump ulimit to
> unlimited:
>
> [in bash]
>
> ulimit -c unlimited
>
> [in csh]
>
> limit coredumpsize unlimited
>
> and then running again.  When the core dump gets spit out,
>
> gdb python2.6 -c that_core_file
> bt
>
> should tell us where in the code it died.  Sam Skillman will have a
> better idea about any possible memory issues, but the segfault feels to
> me like a roundoff error that's putting a sample outside a grid's data
> array, or something along those lines.
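>
> Concretely, the failure mode I'm imagining looks something like this
> toy numpy sketch (not the actual yt code, just an illustration of the
> roundoff problem):
>
> import numpy as na
>
> # Toy grid: left edge at 0.0, cell width 0.1, nx = 10 cells.
> left_edge, dds, nx = 0.0, 0.1, 10
> data = na.zeros(nx)
>
> # Roundoff in the enter/exit parameters can land a sample exactly on
> # (or just past) the grid's right edge...
> pos = 1.0
> i = int((pos - left_edge) / dds)   # == 10, one past the last cell
>
> # ...and at the C level, data[i] then reads past the end of the
> # array, which segfaults instead of raising a clean IndexError.
> # A defensive clamp avoids it:
> i = min(max(i, 0), nx - 1)
> print(data[i])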
>
> Sorry for the trouble,
>
> Matt
>
> On Tue, Feb 1, 2011 at 11:13 PM, Andrew Myers <atmyers at berkeley.edu>
> wrote:
> > Hello yt users,
> >
> > I'm trying to volume render an Orion simulation with about 6,000
> > grids and 100 million cells, and I think I'm running out of memory. I
> > don't know if this is large compared to other simulations people have
> > volume rendered before, but if I set the width of my field of view to
> > be 0.02 pc (20 times smaller than the entire domain), the following
> > code works fine. If I set it to 0.04 pc or anything larger, the code
> > segfaults, which I assume means I'm running out of memory. This
> > happens no matter how many cores I run on - running in parallel seems
> > to speed up the calculation, but not increase the size of the domain
> > I can render. Am I doing something wrong? Or do I just need to find a
> > machine with more memory to do this on? The one I'm using now has 3
> > gigs per core, which strikes me as pretty solid. (A rough
> > back-of-envelope estimate follows the script.) I'm using the trunk
> > version of yt-2.0. Here's the script for reference:
> >
> > from yt.mods import *
> >
> > pf = load("plt01120")
> >
> > dd = pf.h.all_data()
> > mi, ma = na.log10(dd.quantities["Extrema"]("Density")[0])
> > mi -= 0.1 ; ma += 0.1 # To allow a bit of room at the edges
> >
> > tf = ColorTransferFunction((mi, ma))
> > tf.add_layers(8, w=0.01)
> > c = na.array([0.0,0.0,0.0])
> > L = na.array([1.0, 1.0, 1.0])
> > W = 6.17e+16 # 0.02 pc
> >
> > N = 512
> >
> > cam = Camera(c, L, W, (N,N), tf, pf=pf)
> > fn = "%s_image.png" % pf
> >
> > cam.snapshot(fn)
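> >
> > As a rough sanity check on the memory question, here's the arithmetic
> > I had in mind (just an estimate; I'm assuming the renderer makes
> > extra copies of the data when it partitions the grids into bricks):
> >
> > ncells = 1.0e8   # ~100 million cells
> > nbytes = 8.0     # one float64 "Density" value per cell
> > print(ncells * nbytes / 1e9)   # ~0.8 GB for the raw field alone
> > # If brick partitioning duplicates data (e.g. vertex-centered
> > # copies), the working set could be several times this per process.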
> >
> > Thanks,
> > Andrew Myers