[yt-users] mpi4py and yt

Britton Smith brittonsmith at gmail.com
Sat Feb 14 12:06:44 PST 2009


Well, yes.

On Sat, Feb 14, 2009 at 1:03 PM, rsoares <dlleuz at xmission.com> wrote:

> yt/Python 2.6 is of course compatible with OpenMPI then?
>
> Britton Smith wrote:
>
>> Here's how it works.
>> mpi4py is a module like any other.  You build it with the same Python
>> installation that you built all the other modules with, via python
>> setup.py build and python setup.py install.  In order for that to
>> work, you need some MPI libraries installed.  As I said, I prefer
>> openmpi for this because it was the easiest for me to install and
>> build mpi4py with.  Before you run the build and install in the
>> mpi4py directory, you'll need to edit the .cfg file (can't remember
>> exactly what it's called) so that the installation has the proper
>> paths to your MPI install.
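>>
>> For example, the relevant section of that config might look something
>> like this (a minimal sketch; the section name and paths here are
>> illustrative, assuming an openmpi install under /usr/local/openmpi):
>>
>> # point mpi_dir at your own MPI prefix
>> [openmpi]
>> mpi_dir = /usr/local/openmpi
>> mpicc = %(mpi_dir)s/bin/mpicc
>> mpicxx = %(mpi_dir)s/bin/mpicxx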
>>
>> When you've got mpi4py properly built, you will be able to run some yt
>> operations in parallel in the following manner.
>> 1. Whatever you want to do needs to be in some python script.  As far
>> as I know, you can't run in parallel by entering lines directly into
>> the interpreter.  Here's an example:
>>
>> ### Start
>> from yt.mods import *
>> from yt.config import ytcfg
>>
>> # load the dataset and set up a plot collection at the domain center
>> pf = EnzoStaticOutput("EnzoRuns/cool_core_rediculous/DD0252/DD0252")
>> pc = PlotCollection(pf, center=[0.5,0.5,0.5])
>>
>> # project Density along the x-axis (axis 0)
>> pc.add_projection("Density", 0)
>>
>> # only the root process writes the final image
>> if ytcfg.getint("yt","__parallel_rank") == 0:
>>     pc.save("DD0252")
>> ### End
>> That if statement at the end ensures that the final image save is
>> done by the root process only.  The nice thing is that this script
>> can be run in exactly the same form in serial, too.
>>
>> 2. Let's say this script is called proj.py.  You'll run it like this:
>> mpirun -np 4 python proj.py --parallel
>>
>> If you don't include the --parallel, you'll see 4 instances of your
>> proj.py script running separately, each one doing the entire
>> projection on its own rather than working together.
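>>
>> (To run it in serial, just invoke python proj.py with no mpirun.)  As
>> a quick sanity check that mpi4py was built against the MPI you intend
>> to use, a minimal hello-world script should report one line per
>> process (a sketch; the file name hello.py is arbitrary):
>>
>> ### Start
>> from mpi4py import MPI
>>
>> # every process reports its rank within the communicator
>> comm = MPI.COMM_WORLD
>> print 'Hello from process', comm.rank, 'of', comm.size
>> ### End
>>
>> mpirun -np 4 python hello.py
>>
>> If that prints "of 4" four times, the MPI setup is working; if each
>> process reports "of 1" instead, mpi4py was most likely built against
>> a different MPI than the mpirun you're invoking.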
>>
>> Hope that helps,
>>
>> Britton
>>
>> On Fri, Feb 13, 2009 at 11:15 PM, rsoares <dlleuz at xmission.com> wrote:
>>
>>    What Python do you parallelize to install mpi4py into?  Or do you
>>    build/use mpi4py without Python, and if so, how?
>>
>>    R.Soares
>>
>>    Britton Smith wrote:
>>
>>        I recommend using openmpi.  I have been able to build openmpi
>>        on multiple platforms and then build mpi4py with it without
>>        any customization.  As Matt has said, though, you won't see
>>        any benefit from running in parallel until your simulations
>>        are at least 256^3 cells.
>>
>>        On Thu, Feb 12, 2009 at 8:16 PM, Matthew Turk
>>        <matthewturk at gmail.com> wrote:
>>
>>           Hi again,
>>
>>           I just realized that I should say a couple important caveats --
>>
>>           1. We haven't released 'yt-trunk' as 1.5 yet because it's
>>           not quite done or stable.  It's going well, and many people
>>           use it for production-quality work, but it's not really
>>           stamped-and-completed.
>>           2. I should *also* note that you won't really get a lot out
>>           of parallel yt unless you have relatively large datasets or
>>           relatively large amounts of computation on each cell while
>>           creating a derived field.  It might end up being a bit more
>>           work than you're looking for, if you just want to get some
>>           plots out quickly.
>>
>>           -Matt
>>
>>           On Thu, Feb 12, 2009 at 7:12 PM, Matthew Turk
>>           <matthewturk at gmail.com> wrote:
>>           > Hi!
>>           >
>>           > yt-trunk is now parallelized.  Not all tasks work in
>>           > parallel, but projections, profiles (if done in 'lazy'
>>           > mode) and halo finding (if you use the SS_HopOutput
>>           > module) are now parallelized.  Slices are almost done,
>>           > and the new covering grid will be.  It's not documented,
>>           > but those tasks should all run in parallel.  We will be
>>           > rolling out a 1.5 release relatively soon, likely shortly
>>           > after I defend my thesis in April, that will have
>>           > documentation and so forth.
>>           >
>>           > I'm surprised you can't compile against the mpich
>>           > libraries in a shared fashion.  Unfortunately, I'm not an
>>           > expert on MPI implementations, so I can't quite help out
>>           > there.  In my personal experience, using OpenMPI, I have
>>           > not needed to, except when running on some form of linux
>>           > without a loader -- the previous discussion about this
>>           > was related to Kraken, which runs a Cray-specific form of
>>           > linux called "Compute Node Linux."  I don't actually know
>>           > offhand (anybody else?) of any non-Cray supercomputing
>>           > machines out there that require static linking as opposed
>>           > to a standard installation of Python.  (I'm sure they do,
>>           > I just don't know of them!)
>>           >
>>           > As for the second part, usually when instantiating you
>>           > have to run the executable via mpirun.  (On other MPI
>>           > implementations, this could be something different.)  One
>>           > option for this -- if you're running off trunk -- would
>>           > be to do something like:
>>           >
>>           > mpirun -np 4 python my_script.py --parallel
>>           >
>>           > where the file my_script.py has something like:
>>           >
>>           > --
>>           > from yt.mods import *
>>           >
>>           > # load the output and project Density along axis 0
>>           > pf = EnzoStaticOutput("my_output")
>>           > pc = PlotCollection(pf, center=[0.5,0.5,0.5])
>>           > pc.add_projection("Density", 0)
>>           > pc.save("hi_there")
>>           > --
>>           >
>>           > The projection would be executed in parallel, in this
>>           > case.  (There is a command line interface called 'yt'
>>           > that also works in parallel, but it's still a bit in
>>           > flux.)  You can't just run "python" because of the way
>>           > the stdin and stdout streams work; you have to supply a
>>           > script, so that it can proceed without input from the
>>           > user.  (IPython's parallel fanciness notwithstanding,
>>           > which we do not use in yt.)
>>           >
>>           > But, keep in mind, running "mpirun -np 4" by itself,
>>           > without setting up a means of distributing tasks (usually
>>           > via a tasklist) will run them all on the current machine.
>>           > I am, unfortunately, not really qualified to speak to
>>           > setting up MPI implementations.  But please do let us
>>           > know if you have problems with the yt aspects of this!
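>>           >
>>           > For instance, one way to spread four processes across
>>           > four machines (a hypothetical sketch -- the hostfile name
>>           > and node names are illustrative, assuming OpenMPI's
>>           > mpirun and its hostfile format) would be:
>>           >
>>           > # hosts.txt -- one line per machine
>>           > node01 slots=1
>>           > node02 slots=1
>>           > node03 slots=1
>>           > node04 slots=1
>>           >
>>           > mpirun -np 4 -hostfile hosts.txt python my_script.py --parallel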
>>           >
>>           > -Matt
>>           >
>>           > On Thu, Feb 12, 2009 at 6:59 PM, rsoares
>>        <dlleuz at xmission.com <mailto:dlleuz at xmission.com>
>>           <mailto:dlleuz at xmission.com <mailto:dlleuz at xmission.com>>>
>>
>>        wrote:
>>           >> Hi,
>>           >>
>>           >> I'm trying to run mpi4py on my 4 machines, but I need a
>>           >> parallelized version of Python.  I tried to compile one
>>           >> with Python 2.5 and mpich2, but mpich2 won't let me
>>           >> build the dynamic/shared libraries that it needs.
>>           >> Trying with the static ones involves a lot of header
>>           >> errors from both.
>>           >> Is yt-trunk capable of doing python in parallel?
>>           >>
>>           >> Without parallel-python, I mpdboot -n 4 and then:
>>           >>
>>           >> python
>>           >> >>> import MPI
>>           >> >>> rank, size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
>>           >> >>> print 'Hello World! I am process', rank, 'of', size
>>           >> Hello World! I am process 0 of 1
>>           >>
>>           >> That's not 4 processes, and mpirun -np 4 python just
>>           >> hangs.  mpi4py is installed on all 4 nodes.
>>           >>
>>           >> Thanks.
>>           >>
>>           >> R.Soares