[yt-users] running volume_render.py in a batch script

Agarwal, Shankar sagarwal at ku.edu
Thu Apr 14 14:45:19 PDT 2011


Hi Sam,

No. Through qsub, I get this error...

set_SCR: using existing PBS job directory /scratch/batch/56557
set_LSCR: using existing PBS job directory /scratch.local/batch/56557
set_SCR: using existing PBS job directory /scratch/batch/56557
set_LSCR: using existing PBS job directory /scratch.local/batch/56557
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 11



My batch script:
#!/bin/sh -x
#PBS -l walltime=00:10:00
#PBS -l mem=32mb
#PBS -l ncpus=6
#PBS -q debug
#PBS -V
#PBS -A TG-AST080030N
#PBS -N movie
#PBS -m be
#export MPI_MEMMAP_OFF=1
cd /gpfs1/u/ac/sagarwal/junk
mpirun -np 2 /u/ac/sagarwal/software/enzo/src/yt/doc/yt-x86_64/bin/python2.6 try.py
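
(For reference, a quick way to see what the job actually received from PBS, before MPI enters the picture at all, is a small check run from the same batch script. This is only a sketch; the filename check_pbs.py is a placeholder, and the exact environment variable names can differ between PBS flavours.)

check_pbs.py:
import os

# PBS_JOBID and PBS_NODEFILE are set by PBS inside the job;
# the nodefile lists one hostname per allocated CPU slot.
for var in ("PBS_JOBID", "PBS_NODEFILE"):
    print var, "=", os.environ.get(var, "<not set>")

nodefile = os.environ.get("PBS_NODEFILE")
if nodefile is not None:
    print open(nodefile).read()

Running it as plain "python check_pbs.py" (no mpirun) from the batch script shows whether the allocation itself looks sane.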


My try.py:
from mpi4py import MPI
print MPI.COMM_WORLD.rank
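
(A slightly more talkative version of the same test can help tell whether the two processes really join one MPI_COMM_WORLD under qsub, or whether each comes up as rank 0 of its own size-1 world. This is just a sketch along the same lines, not something from the original post; MPI.get_vendor() may or may not be present depending on the mpi4py version.)

from mpi4py import MPI

comm = MPI.COMM_WORLD
# each rank reports who it is, how many ranks it sees, and where it runs
print "rank %d of %d on %s, vendor %s" % (comm.Get_rank(), comm.Get_size(),
                                          MPI.Get_processor_name(),
                                          str(MPI.get_vendor()))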


It works only on the login node...
[sagarwal at ember ~/junk]$ mpirun -np 4 python vv.py
3
1
2
0
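
Since the behaviour differs between the login node and the compute nodes, one thing that may be worth comparing is which MPI library the mpi4py inside the yt install was built against versus which mpirun the batch job picks up. A hedged sketch, assuming mpi4py.get_config() is available in this mpi4py version:

import mpi4py
from mpi4py import MPI

# compilers/MPI wrappers recorded when mpi4py itself was built
print mpi4py.get_config()
# MPI implementation seen at run time
print MPI.get_vendor()

Comparing that against the mpirun found on a compute node (for example via "which mpirun" in the batch script) would show whether the two stacks match.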

Shankar

________________________________
From: yt-users-bounces at lists.spacepope.org on behalf of Sam Skillman [samskillman at gmail.com]
Sent: Thursday, April 14, 2011 1:34 PM
To: Discussion of the yt analysis package
Subject: Re: [yt-users] running volume_render.py in a batch script

Hi Shankar,

Can you run Britton's suggestion through qsub?

Sam

On Thu, Apr 14, 2011 at 2:00 PM, Agarwal, Shankar <sagarwal at ku.edu> wrote:
Hi all,

The volume_render.py script is reproduced at the end of this email. As Stephen noted, mpi4py is called internally in the script. I am able to run volume_render.py successfully as...

mpirun -np 2 python volume_render.py

(Britton's suggestion works fine. I see 0 and 1 printed out for the 2 cpus).


Then I commented out the MPI-related lines and was able to run volume_render.py serially as well...

python volume_render.py


All of the above was done on the login node. I then repeated it through qsub. First, I started an interactive session...

qsub -I -V -l walltime=00:30:00,ncpus=6,mem=20gb -q debug

Then I tried the serial version of volume_render.py and was successful...
[sagarwal at ember-cmp1 ~/junk]$ python volume_render.py


But the mpi4py version failed as below...

[sagarwal at ember-cmp1 ~/junk]$ mpirun -np 2 python volume_render.py
MPI: MPI_COMM_WORLD rank 1 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 11
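
(A signal 11 during start-up is sometimes easier to localize by importing the pieces one at a time under the same mpirun; this is only a sketch, with the import list simply taken from the top of volume_render.py.)

from mpi4py import MPI
print "mpi4py OK, rank", MPI.COMM_WORLD.rank

import matplotlib
matplotlib.use("Agg")
import pylab
print "matplotlib/pylab OK"

from yt.mods import *
print "yt.mods OK"

from yt.extensions.volume_rendering import *
print "volume_rendering OK"

Whichever line is the last to print before the crash points at the import that triggers the segfault.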


So basically, mpi4py works fine only when I am on the login node. Me confused!

Shankar



-----------------------------------------------------------------------------------------------------------------------------------------
import matplotlib;matplotlib.use("Agg");import pylab
import numpy as na
import os
import time
from yt.config import ytcfg; ytcfg["yt","loglevel"] = '50' ; ytcfg["lagos","serialize"] = "False"

from yt.mods import *
from yt.extensions.volume_rendering import *
from yt.funcs import *
mh = na.log10(1.67e-24)
from mpi4py import MPI
print MPI.COMM_WORLD.rank

def use_cluster(n = 1, frame=0, rotframes = 50):
   pf = EnzoStaticOutput("RD%04i/RedshiftOutput%04i" % (n,n) )

   z = pf.get_parameter('CosmologyCurrentRedshift')
   L = [na.sin(2*na.pi*frame/rotframes),na.cos(2*na.pi*frame/rotframes),0.4]

   c = [0.5,0.5,0.5]
   W = 0.8

   ncolors = 8
   max = -27.0+ na.log10((1.+z)**3)
   min = -32.0+ na.log10((1.+z)**3)
   tf = ColorTransferFunction((min-3., max+3.))

   valrange = na.linspace(min, max, ncolors)
   hues = range(0,256,256/ncolors)
   alphas = na.logspace(-2.0, 0.0, ncolors)
   width = 0.01*(max-min)/ncolors

   for i,val in enumerate(valrange):
#        thisrgb = yt.raven.ColorMaps.raven_colormaps['kamae'](hues[i])
       thisrgb = pylab.cm.spectral(hues[i])
       tf.add_gaussian(val, width, [thisrgb[0], thisrgb[1],
                                    thisrgb[2], alphas[i]])

   return (pf, L, c, W, tf)

# rank of this process, total number of ranks, and the host it runs on
myrank = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()   # matches the 'nprocs' used in the loop below
procnm = MPI.Get_processor_name()

Nvec = 1024
for i in range(0,3):
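   # round-robin over MPI ranks: each rank handles only the data dumps where i % nprocs equals its rank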
   if (i % nprocs != myrank):
       continue
   pre_name = 'rot_evolve_%04i' % i
   if os.path.isfile("%s_alt_rgb.png" % pre_name): continue
   print 'On data dump %04i' % i
   pf, L, c, W, tf = use_cluster(n = i, frame=i, rotframes=360)

   tf.plot("%s_tf.png" % pre_name)

   tf.light_dir = (0.,0.,1.)
   tf.light_color = (0.0, 0.3, 0.3)
   tf.use_light = 0

   t1 = time.time()
   if os.path.isfile("%s_partitioned.h5" % pf):
       to_export = 0
       grids = import_partitioned_grids("%s_partitioned.h5" % pf)
   else:
       to_export = 1
       grids = None

   grids, image, vectors, norm_vec, pos = direct_ray_cast(
           pf, L, c, W, Nvec, tf, partitioned_grids=grids, whole_box=True)

   v = 0.0
   for g in grids: v += (g.RightEdge - g.LeftEdge).prod()
   print "VOLUME", v

   print pf, L, c, W, tf

   t2 = time.time()

   ma = image[:,:,:3].max()
   nan_image = na.zeros(image.shape, dtype='float64')
   nan_image[na.isnan(image)] = 1.0
   image[na.isnan(image)] = 0.0
   to_plot = image[:,:,:3]

   avgval = to_plot[to_plot>0].mean()
   stdval = to_plot[to_plot>0].std()
   maxval = to_plot[to_plot>0].max()
   print avgval, stdval, maxval, (avgval+stdval)/maxval

   alt_plot = (to_plot - to_plot.min()) / (avgval+5*stdval)

   to_plot = na.clip(to_plot, to_plot.min(), 0.8 * to_plot.max())
   to_plot = (to_plot - to_plot.min()) / (to_plot.max() - to_plot.min())

   to_plot[to_plot>1.0]=1.0
   alt_plot[alt_plot>1.0]=1.0

   print to_plot.max(), to_plot.min()
   print "%0.3e" % (t2-t1)

   pylab.clf()
   pylab.gcf().set_dpi(100)
   pylab.gcf().set_size_inches((Nvec/100.0, Nvec/100.0))
   pylab.gcf().subplots_adjust(left=0.0, right=1.0, bottom=0.0, top=1.0, wspace=0.0, hspace=0.0)
   pylab.imshow(alt_plot, interpolation='nearest')
   pylab.text(20, 20,'z = %0.3f' % pf['CosmologyCurrentRedshift'],
              color = 'w',size='large')
   pylab.savefig("%s_alt_rgb.png" % pre_name)
   pylab.clf()

   if to_export:
       export_partitioned_grids(grids, "%s_partitioned.h5" % pf)

   del grids, image, vectors, norm_vec, pos, ma, nan_image, to_plot, pf, L, c, W, tf
-----------------------------------------------------------------------------------------------------------------------------------------