[yt-users] creating projections on Ranger

Matthew Turk matthewturk at gmail.com
Tue Jan 25 08:50:05 PST 2011


Hi all,

The differences between the MPI implementations, especially the
MPI.Finalize() call, make me wonder if this might be a problem with
the interface between mpi4py and MPI.  You could try, when installing
mpi4py, also doing:

python2.6 setup.py install_exe

which will put python2.6-mpi in the appropriate bin directory.  Then
instead of launching a "python2.6" executable, change that to
"python2.6-mpi" in your queue submission script.  This executable will
ensure that MPI_Init and MPI_Finalize are called appropriately.  For
what it's worth, on some machines (SGI in particular) this is the only
way to get it to run; I discovered this again the hard way yesterday
on Nautilus.
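
The queue-script change described above might look like the following.
This is only a sketch assuming an SGE-style batch script such as Ranger
uses; the job name, parallel environment, queue, and the script name
your_script.py are all placeholders for your own settings:

```shell
#!/bin/bash
#$ -N yt-projections        # job name (placeholder)
#$ -pe 16way 16             # parallel environment / core count (site-specific)
#$ -q normal                # queue name (site-specific)
#$ -l h_rt=01:00:00         # wall-clock limit

# Launch with the MPI-aware interpreter that "setup.py install_exe"
# put in the bin directory, instead of the plain python2.6, so that
# MPI_Init and MPI_Finalize are handled for you.
ibrun python2.6-mpi your_script.py
```

The only change from a working serial script should be the executable
name on the launch line.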

-Matt

On Tue, Jan 25, 2011 at 11:47 AM, Britton Smith <brittonsmith at gmail.com> wrote:
> I've been seeing this behavior recently as well running with openmpi on my
> laptop.  Something may have crept into the halo profiler projection
> machinery.  I'll have a look at this.
>
> Britton
>
> On Tue, Jan 25, 2011 at 11:05 AM, John Wise <jwise at astro.princeton.edu>
> wrote:
>>
>> Hi Eric,
>>
>> I've seen something like this, using SGI's MPI implementation (mpt), when
>> making projections.  Do you see all of the processors reaching the point of
>> writing the image? If not, something's hanging one of the processors.  In my
>> experience, I sometimes have to call MPI_Finalize() at the end of my script,
>> e.g.,
>>
>> from mpi4py import MPI
>> .
>> .
>> .
>> MPI.Finalize()
>>
>> But I've found that you don't have to do this if you compiled mpi4py with
>> OpenMPI or MVAPICH.  I'm not sure if this is your problem, but I wanted to
>> throw this out there.
>>
>> John
>>
>> On 25 Jan 2011, at 10:37, Eric Hallman wrote:
>>
>> > Hello,
>> >  I am wondering if anyone else sees behavior like the following:
>> >
>> > I run the halo profiler on TACC ranger, and do Hop, radial profiles and
>> > projections of all the halos. When running in the queue in parallel,
>> > everything appears to run normally, but none of the projection outputs
>> > appear in the projections directory.  If I then run in serial
>> > (interactively) the same routine, all the projections are generated as
>> > normal in the correct directory.  This happens no matter how I do the
>> > parallel run.  Really I'm only running the parallel version for hop in any case.  I
>> > have not yet tested whether running in serial on one node of ranger gives
>> > the same issue.
>> >
>> > I thought I would check and see if this is a known issue.
>> >
>> > Thanks.
>> >
>> >
>> > Eric Hallman
>> > Google Voice: (774) 469-0278
>> > hallman13 at gmail.com
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > yt-users mailing list
>> > yt-users at lists.spacepope.org
>> > http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>>
>
>
>
>


