[yt-svn] commit/yt-doc: 4 new changesets

commits-noreply at bitbucket.org
Tue Nov 19 15:14:57 PST 2013


4 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/323b28aba3bf/
Changeset:   323b28aba3bf
User:        ngoldbaum
Date:        2013-11-16 08:05:58
Summary:     Updating the parallelism section.
Affected #:  1 file

diff -r b543866181e27eba86e9819c670b659eaf17f06f -r 323b28aba3bf9f9f3c85b26e289f18178c470bd4 source/analyzing/parallel_computation.rst
--- a/source/analyzing/parallel_computation.rst
+++ b/source/analyzing/parallel_computation.rst
@@ -36,10 +36,10 @@
 Setting Up Parallel YT
 ----------------------
 
-To run scripts in parallel, you must first install
-`mpi4py <http://code.google.com/p/mpi4py>`_.
-Instructions for doing so are provided on the mpi4py website, but you may
-have luck by just running: 
+To run scripts in parallel, you must first install `mpi4py
+<http://code.google.com/p/mpi4py>`_ as well as an MPI library, if one is not
+already available on your system.  Instructions for doing so are provided on the
+mpi4py website, but you may have luck by just running:
 
 .. code-block:: bash
 
@@ -76,11 +76,6 @@
 Furthermore, the ``yt`` command itself recognizes the ``--parallel`` option, so
 those commands will work in parallel as well.
 
-The Derived Quantities and Profile objects must both have the ``lazy_reader``
-option set to ``True`` when they are instantiated.  What this does is to
-operate on a grid-by-grid decomposed basis.  In ``yt`` version 1.5 and the
-trunk, this has recently been set to be the default.
-
 .. warning:: If you manually interact with the filesystem via
    functions, not through YT, you will have to ensure that you only
    execute these functions on the root processor.  You can do this
@@ -91,15 +86,12 @@
    the root process. See :ref:`cookbook-time-series-analysis` for
    an example.
 
-yt.pmods
---------
+Running a yt script in parallel
+-------------------------------
 
-yt.pmods is a replacement module for yt.mods, which can be enabled in
-the ``from yt.mods import *`` calls in yt scripts.  It should enable 
-more efficient use of parallel filesystems, if you are running on such a 
-system.
-
-For instance, the following script, which we'll save as ``my_script.py``:
+Many basic yt operations will run in parallel if yt's parallelism is enabled at
+startup.  For example, the following script finds the maximum density location
+in the simulation and then makes a plot of the projected density:
 
 .. code-block:: python
 
@@ -110,18 +102,22 @@
    p = ProjectionPlot(pf, "x", "Density")
    p.save()
 
-will execute the finding of the maximum density and the projection in parallel
-if launched in parallel.  Note that the usual ``from yt.mods import *`` has 
-been replaced by ``from yt.pmods import *``.
-To run this script at the command line you would execute:
+If this script is run in parallel, two of the most expensive operations,
+finding the maximum density and computing the projection, will be calculated in
+parallel.  If we save the script as ``my_script.py``, we would run it on 16 MPI
+processes using the following Bash command:
 
 .. code-block:: bash
 
    $ mpirun -np 16 python2.7 my_script.py --parallel
 
-if you wanted it to run in parallel on 16 cores (you can always the number of 
-cores you want to run on).  If you run into problems, the you can use
-:ref:`remote-debugging` to examine what went wrong.
+.. note::
+
+   ``--parallel`` is appended to the call to ``python``.  This command line
+   argument turns on YT's parallelism.
+
+If you run into problems, you can use :ref:`remote-debugging` to examine
+what went wrong.
 
 Types of Parallelism
 --------------------
@@ -186,12 +182,9 @@
 
 .. code-block:: python
    
-   # This is necessary to prevent a race-condition where each copy of
-   # yt attempts to save information about datasets to the same file on disk,
-   # simultaneously. This will be fixed, eventually...
-   from yt.config import ytcfg; ytcfg["yt","serialize"] = "False"
    # As always...
-   from yt.pmods import *
+   from yt.mods import *
+   
    import glob
    
    # The number 4, below, is the number of processes to parallelize over, which
@@ -203,13 +196,15 @@
    # MPI tasks automatically.
    num_procs = 4
    
-   # fns is a list of all Enzo hierarchy files in directories one level down.
-   fns = glob.glob("*/*.hierarchy")
+   # fns is a list of all the simulation data files in the current directory.
+   fns = glob.glob("./plot*")
    fns.sort()
+
    # This dict will store information collected in the loop, below.
    # Inside the loop each task will have a local copy of the dict, but
    # the dict will be combined once the loop finishes.
    my_storage = {}
+
    # In this example, because the storage option is used in the
    # parallel_objects function, the loop yields a tuple, which gets used
    # as (sto, fn) inside the loop.
@@ -218,23 +213,27 @@
    # would look like:
    #       for fn in parallel_objects(fns, num_procs):
    for sto, fn in parallel_objects(fns, num_procs, storage = my_storage):
+
        # Open a data file, remembering that fn is different on each task.
        pf = load(fn)
        dd = pf.h.all_data()
+
        # This copies fn and the min/max of density to the local copy of
        # my_storage
        sto.result_id = fn
        sto.result = dd.quantities["Extrema"]("Density")
+
        # Makes and saves a plot of the gas density.
        p = ProjectionPlot(pf, "x", "Density")
        p.save()
+
    # At this point, as the loop exits, the local copies of my_storage are
    # combined such that all tasks now have an identical and full version of
    # my_storage. Until this point, each task is unaware of what the other
    # tasks have produced.
    # Below, the values in my_storage are printed by only one task. The other
    # tasks do nothing.
-   if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+   if is_root():
        for fn, vals in sorted(my_storage.items()):
            print fn, vals
 
@@ -464,7 +463,7 @@
     
     .. code-block:: python
     
-       from yt.pmods import *
+       from yt.mods import *
        import time
        
        pf = load("DD0152")
@@ -478,7 +477,7 @@
            SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
        t2 = time.time()
        
-       if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+       if is_root():
            print "BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1)
   
   * Remember that if the script handles disk IO explicitly, and does not use
@@ -490,7 +489,7 @@
     
     .. code-block:: python
        
-       if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+       if is_root():
            file = open("out.txt", "w")
            file.write(stuff)
            file.close()
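
A minimal sketch of the simpler ``parallel_objects`` loop that the comments in
the example above allude to (no ``storage`` dict, so nothing is gathered after
the loop finishes); it only uses calls that already appear in this changeset,
and the ``./plot*`` file names are the same placeholder outputs used above:

.. code-block:: python

   from yt.mods import *
   import glob

   # The same placeholder outputs used in the example above.
   fns = glob.glob("./plot*")
   fns.sort()

   # Without the storage keyword each task simply receives its share of the
   # file names; no results are combined when the loop exits.
   for fn in parallel_objects(fns, 4):
       pf = load(fn)
       p = ProjectionPlot(pf, "x", "Density")
       p.save()

As with the other examples, this would be launched with something like
``mpirun -np 16 python2.7 my_script.py --parallel``.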


https://bitbucket.org/yt_analysis/yt-doc/commits/d754541eb130/
Changeset:   d754541eb130
User:        ngoldbaum
Date:        2013-11-19 00:18:56
Summary:     Expanding the discussion of is_root and only_on_root.
Affected #:  2 files

diff -r 323b28aba3bf9f9f3c85b26e289f18178c470bd4 -r d754541eb130c1dcbd4fb3c1b4c6b816037e9a23 source/analyzing/parallel_computation.rst
--- a/source/analyzing/parallel_computation.rst
+++ b/source/analyzing/parallel_computation.rst
@@ -76,16 +76,6 @@
 Furthermore, the ``yt`` command itself recognizes the ``--parallel`` option, so
 those commands will work in parallel as well.
 
-.. warning:: If you manually interact with the filesystem via
-   functions, not through YT, you will have to ensure that you only
-   execute these functions on the root processor.  You can do this
-   with the function :func:`only_on_root`. If you have only a few
-   lines of code that interact with the filesystem
-   (e.g. ``pyplot.savefig('blah.png')``), you can wrap them in an if
-   statement, using yt's :func:`is_root` which returns True only for
-   the root process. See :ref:`cookbook-time-series-analysis` for
-   an example.
-
 Running a yt script in parallel
 -------------------------------
 
@@ -113,11 +103,58 @@
 
 .. note::
 
-   ``--parallel`` is appended to the call to ``python``.  This command line
-   argument turns on YT's parallelism.
+   If you run into problems, you can use :ref:`remote-debugging` to examine
+   what went wrong.
 
-If you run into problems, the you can use :ref:`remote-debugging` to examine
-what went wrong.
+Creating Parallel and Serial Sections in a script
++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Many yt operations will automatically run in parallel (see the next section for
+a full enumeration), however some operations, particularly ones that print
+output or save data to the filesystem, will be run by all processors in a
+parallel script.  For example, in the script above the lines ``print v,c`` and
+``p.save()`` will be run on all 16 processors.  This means that your terminal
+output will contain 16 repetitions of the output of the print statement and the
+plot will be saved to disk 16 times (overwritten each time).
+
+yt provides two convenience functions that make it easier to run some most of
+script in parallel but run some subset of the script on only one processor.  The
+first, :func:`~yt.funcs.is_root`, returns ``True`` if run on the 'root'
+processor (the processor with MPI rank 0) and ``False`` otherwise.  One could
+rewrite the above script to take advantage of :func:`~yt.funcs.is_root` like
+so:
+
+.. code-block:: python
+
+   from yt.pmods import *
+   pf = load("RD0035/RedshiftOutput0035")
+   v, c = pf.h.find_max("Density")
+   p = ProjectionPlot(pf, "x", "Density")
+   if is_root():
+       print v, c
+       p.save()
+
+The second function, :func:`~yt.funcs.only_on_root`, accepts a function as
+well as a set of arguments and keyword arguments to pass to that function.
+This is useful when the serial component of your parallel script would
+otherwise clutter the script, or if you like writing your scripts as a series
+of isolated function calls.  We can rewrite the example from the beginning of
+this section once more using :func:`~yt.funcs.only_on_root` to give you the
+flavor of how to use it:
+
+.. code-block:: python
+
+   from yt.pmods import *
+
+   def print_and_save_plot(v, c, plot, verbose=True):
+       if verbose:
+           print v, c
+       plot.save()
+
+   pf = load("RD0035/RedshiftOutput0035")
+   v, c = pf.h.find_max("Density")
+   p = ProjectionPlot(pf, "x", "Density")
+   only_on_root(print_and_save_plot, v, c, p, verbose=True)
 
 Types of Parallelism
 --------------------

diff -r 323b28aba3bf9f9f3c85b26e289f18178c470bd4 -r d754541eb130c1dcbd4fb3c1b4c6b816037e9a23 source/reference/api/api.rst
--- a/source/reference/api/api.rst
+++ b/source/reference/api/api.rst
@@ -558,6 +558,7 @@
    ~yt.funcs.get_pbar
    ~yt.funcs.humanize_time
    ~yt.funcs.insert_ipython
+   ~yt.funcs.is_root
    ~yt.funcs.iterable
    ~yt.funcs.just_one
    ~yt.funcs.only_on_root
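
The ``only_on_root`` calling convention documented in this changeset can be
pictured in terms of ``is_root``.  The sketch below only illustrates the
documented behaviour (call the function on the root processor, do nothing
elsewhere); it is not yt's actual implementation:

.. code-block:: python

   # is_root() is available after the star import, per the examples above.
   from yt.mods import *

   def only_on_root_sketch(func, *args, **kwargs):
       # Only the root processor (MPI rank 0) executes the function;
       # every other processor returns immediately.
       if not is_root():
           return
       return func(*args, **kwargs)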


https://bitbucket.org/yt_analysis/yt-doc/commits/ed3141fd3829/
Changeset:   ed3141fd3829
User:        ngoldbaum
Date:        2013-11-19 02:57:56
Summary:     Fixing some typos
Affected #:  1 file

diff -r d754541eb130c1dcbd4fb3c1b4c6b816037e9a23 -r ed3141fd38295ea5af6122c6f7650d7b30c00505 source/analyzing/parallel_computation.rst
--- a/source/analyzing/parallel_computation.rst
+++ b/source/analyzing/parallel_computation.rst
@@ -54,14 +54,14 @@
 
     $ mpirun -np 8 python script.py --parallel
 
-Throughout its normal operation, yt keeps you aware of what is happening with
+Throughout its normal operation, YT keeps you aware of what is happening with
 regular messages to the stderr usually prefaced with: 
 
 .. code-block:: bash
 
     yt : [INFO   ] YYY-MM-DD HH:MM:SS
 
-However, when operating in parallel mode, yt outputs information from each
+However, when operating in parallel mode, YT outputs information from each
 of your processors to this log mode, as in:
 
 .. code-block:: bash
@@ -76,10 +76,10 @@
 Furthermore, the ``yt`` command itself recognizes the ``--parallel`` option, so
 those commands will work in parallel as well.
 
-Running a yt script in parallel
+Running a YT script in parallel
 -------------------------------
 
-Many basic yt operations will run in parallel if yt's parallelism is enabled at
+Many basic YT operations will run in parallel if yt's parallelism is enabled at
 startup.  For example, the following script finds the maximum density location
 in the simulation and then makes a plot of the projected density:
 
@@ -109,7 +109,7 @@
 Creating Parallel and Serial Sections in a script
 +++++++++++++++++++++++++++++++++++++++++++++++++
 
-Many yt operations will automatically run in parallel (see the next section for
+Many YT operations will automatically run in parallel (see the next section for
 a full enumeration), however some operations, particularly ones that print
 output or save data to the filesystem, will be run by all processors in a
 parallel script.  For example, in the script above the lines ``print v,c`` and
@@ -117,7 +117,7 @@
 output will contain 16 repetitions of the output of the print statement and the
 plot will be saved to disk 16 times (overwritten each time).
 
-yt provides two convenience functions that make it easier to run some most of
+YT provides two convenience functions that make it easier to run most of a
 script in parallel but run some subset of the script on only one processor.  The
 first, :func:`~yt.funcs.is_root`, returns ``True`` if run on the 'root'
 processor (the processor with MPI rank 0) and ``False`` otherwise.  One could
@@ -484,14 +484,14 @@
     on each object, you may find that setting ``num_procs`` equal to the 
     number of processors per compute node can lead to significant speedups.
     By default, most mpi implementations will assign tasks to processors on a
-    'by-slot' basis, so this setting will tell yt to do computations on a single
+    'by-slot' basis, so this setting will tell YT to do computations on a single
     object using only the processors on a single compute node.  A nice application
     for this type of parallelism is calculating a list of derived quantities for 
     a large number of simulation outputs.
 
   * It is impossible to tune a parallel operation without understanding what's
     going on. Read the documentation, look at the underlying code, or talk to
-    other yt users. Get informed!
+    other YT users. Get informed!
     
   * Sometimes it is difficult to know if a job is cpu, memory, or disk
     intensive, especially if the parallel job utilizes several of the kinds of
@@ -518,7 +518,7 @@
            print "BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1)
   
   * Remember that if the script handles disk IO explicitly, and does not use
-    a built-in yt function to write data to disk,
+    a built-in YT function to write data to disk,
     care must be taken to
     avoid `race-conditions <http://en.wikipedia.org/wiki/Race_conditions>`_.
     Be explicit about which MPI task writes to disk using a construction


https://bitbucket.org/yt_analysis/yt-doc/commits/5ea8e4a29913/
Changeset:   5ea8e4a29913
User:        chummels
Date:        2013-11-20 00:14:56
Summary:     Merged in ngoldbaum/yt-doc (pull request #120)

Updating the parallelism section.
Affected #:  2 files

diff -r 29480f031f312f1f00e6ef8644b2b226f38ab364 -r 5ea8e4a299130cbe3e704abbe4adff29ec34a2df source/analyzing/parallel_computation.rst
--- a/source/analyzing/parallel_computation.rst
+++ b/source/analyzing/parallel_computation.rst
@@ -36,10 +36,10 @@
 Setting Up Parallel YT
 ----------------------
 
-To run scripts in parallel, you must first install
-`mpi4py <http://code.google.com/p/mpi4py>`_.
-Instructions for doing so are provided on the mpi4py website, but you may
-have luck by just running: 
+To run scripts in parallel, you must first install `mpi4py
+<http://code.google.com/p/mpi4py>`_ as well as an MPI library, if one is not
+already available on your system.  Instructions for doing so are provided on the
+mpi4py website, but you may have luck by just running:
 
 .. code-block:: bash
 
@@ -54,14 +54,14 @@
 
     $ mpirun -np 8 python script.py --parallel
 
-Throughout its normal operation, yt keeps you aware of what is happening with
+Throughout its normal operation, YT keeps you aware of what is happening with
 regular messages to the stderr usually prefaced with: 
 
 .. code-block:: bash
 
     yt : [INFO   ] YYY-MM-DD HH:MM:SS
 
-However, when operating in parallel mode, yt outputs information from each
+However, when operating in parallel mode, YT outputs information from each
 of your processors to this log mode, as in:
 
 .. code-block:: bash
@@ -76,30 +76,12 @@
 Furthermore, the ``yt`` command itself recognizes the ``--parallel`` option, so
 those commands will work in parallel as well.
 
-The Derived Quantities and Profile objects must both have the ``lazy_reader``
-option set to ``True`` when they are instantiated.  What this does is to
-operate on a grid-by-grid decomposed basis.  In ``yt`` version 1.5 and the
-trunk, this has recently been set to be the default.
+Running a YT script in parallel
+-------------------------------
 
-.. warning:: If you manually interact with the filesystem via
-   functions, not through YT, you will have to ensure that you only
-   execute these functions on the root processor.  You can do this
-   with the function :func:`only_on_root`. If you have only a few
-   lines of code that interact with the filesystem
-   (e.g. ``pyplot.savefig('blah.png')``), you can wrap them in an if
-   statement, using yt's :func:`is_root` which returns True only for
-   the root process. See :ref:`cookbook-time-series-analysis` for
-   an example.
-
-yt.pmods
---------
-
-yt.pmods is a replacement module for yt.mods, which can be enabled in
-the ``from yt.mods import *`` calls in yt scripts.  It should enable 
-more efficient use of parallel filesystems, if you are running on such a 
-system.
-
-For instance, the following script, which we'll save as ``my_script.py``:
+Many basic YT operations will run in parallel if yt's parallelism is enabled at
+startup.  For example, the following script finds the maximum density location
+in the simulation and then makes a plot of the projected density:
 
 .. code-block:: python
 
@@ -110,18 +92,69 @@
    p = ProjectionPlot(pf, "x", "Density")
    p.save()
 
-will execute the finding of the maximum density and the projection in parallel
-if launched in parallel.  Note that the usual ``from yt.mods import *`` has 
-been replaced by ``from yt.pmods import *``.
-To run this script at the command line you would execute:
+If this script is run in parallel, two of the most expensive operations,
+finding the maximum density and computing the projection, will be calculated in
+parallel.  If we save the script as ``my_script.py``, we would run it on 16 MPI
+processes using the following Bash command:
 
 .. code-block:: bash
 
    $ mpirun -np 16 python2.7 my_script.py --parallel
 
-if you wanted it to run in parallel on 16 cores (you can always the number of 
-cores you want to run on).  If you run into problems, the you can use
-:ref:`remote-debugging` to examine what went wrong.
+.. note::
+
+   If you run into problems, you can use :ref:`remote-debugging` to examine
+   what went wrong.
+
+Creating Parallel and Serial Sections in a script
++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Many YT operations will automatically run in parallel (see the next section for
+a full enumeration), however some operations, particularly ones that print
+output or save data to the filesystem, will be run by all processors in a
+parallel script.  For example, in the script above the lines ``print v,c`` and
+``p.save()`` will be run on all 16 processors.  This means that your terminal
+output will contain 16 repetitions of the output of the print statement and the
+plot will be saved to disk 16 times (overwritten each time).
+
+YT provides two convenience functions that make it easier to run most of a
+script in parallel but run some subset of the script on only one processor.  The
+first, :func:`~yt.funcs.is_root`, returns ``True`` if run on the 'root'
+processor (the processor with MPI rank 0) and ``False`` otherwise.  One could
+rewrite the above script to take advantage of :func:`~yt.funcs.is_root` like
+so:
+
+.. code-block:: python
+
+   from yt.pmods import *
+   pf = load("RD0035/RedshiftOutput0035")
+   v, c = pf.h.find_max("Density")
+   p = ProjectionPlot(pf, "x", "Density")
+   if is_root():
+       print v, c
+       p.save()
+
+The second function, :func:`~yt.funcs.only_on_root`, accepts a function as
+well as a set of arguments and keyword arguments to pass to that function.
+This is useful when the serial component of your parallel script would
+otherwise clutter the script, or if you like writing your scripts as a series
+of isolated function calls.  We can rewrite the example from the beginning of
+this section once more using :func:`~yt.funcs.only_on_root` to give you the
+flavor of how to use it:
+
+.. code-block:: python
+
+   from yt.pmods import *
+
+   def print_and_save_plot(v, c, plot, verbose=True):
+       if verbose:
+           print v, c
+       plot.save()
+
+   pf = load("RD0035/RedshiftOutput0035")
+   v, c = pf.h.find_max("Density")
+   p = ProjectionPlot(pf, "x", "Density")
+   only_on_root(print_and_save_plot, v, c, p, verbose=True)
 
 Types of Parallelism
 --------------------
@@ -186,12 +219,9 @@
 
 .. code-block:: python
    
-   # This is necessary to prevent a race-condition where each copy of
-   # yt attempts to save information about datasets to the same file on disk,
-   # simultaneously. This will be fixed, eventually...
-   from yt.config import ytcfg; ytcfg["yt","serialize"] = "False"
    # As always...
-   from yt.pmods import *
+   from yt.mods import *
+   
    import glob
    
    # The number 4, below, is the number of processes to parallelize over, which
@@ -203,13 +233,15 @@
    # MPI tasks automatically.
    num_procs = 4
    
-   # fns is a list of all Enzo hierarchy files in directories one level down.
-   fns = glob.glob("*/*.hierarchy")
+   # fns is a list of all the simulation data files in the current directory.
+   fns = glob.glob("./plot*")
    fns.sort()
+
    # This dict will store information collected in the loop, below.
    # Inside the loop each task will have a local copy of the dict, but
    # the dict will be combined once the loop finishes.
    my_storage = {}
+
    # In this example, because the storage option is used in the
    # parallel_objects function, the loop yields a tuple, which gets used
    # as (sto, fn) inside the loop.
@@ -218,23 +250,27 @@
    # would look like:
    #       for fn in parallel_objects(fns, num_procs):
    for sto, fn in parallel_objects(fns, num_procs, storage = my_storage):
+
        # Open a data file, remembering that fn is different on each task.
        pf = load(fn)
        dd = pf.h.all_data()
+
        # This copies fn and the min/max of density to the local copy of
        # my_storage
        sto.result_id = fn
        sto.result = dd.quantities["Extrema"]("Density")
+
        # Makes and saves a plot of the gas density.
        p = ProjectionPlot(pf, "x", "Density")
        p.save()
+
    # At this point, as the loop exits, the local copies of my_storage are
    # combined such that all tasks now have an identical and full version of
    # my_storage. Until this point, each task is unaware of what the other
    # tasks have produced.
    # Below, the values in my_storage are printed by only one task. The other
    # tasks do nothing.
-   if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+   if is_root():
        for fn, vals in sorted(my_storage.items()):
            print fn, vals
 
@@ -448,14 +484,14 @@
     on each object, you may find that setting ``num_procs`` equal to the 
     number of processors per compute node can lead to significant speedups.
     By default, most mpi implementations will assign tasks to processors on a
-    'by-slot' basis, so this setting will tell yt to do computations on a single
+    'by-slot' basis, so this setting will tell YT to do computations on a single
     object using only the processors on a single compute node.  A nice application
     for this type of parallelism is calculating a list of derived quantities for 
     a large number of simulation outputs.
 
   * It is impossible to tune a parallel operation without understanding what's
     going on. Read the documentation, look at the underlying code, or talk to
-    other yt users. Get informed!
+    other YT users. Get informed!
     
   * Sometimes it is difficult to know if a job is cpu, memory, or disk
     intensive, especially if the parallel job utilizes several of the kinds of
@@ -464,7 +500,7 @@
     
     .. code-block:: python
     
-       from yt.pmods import *
+       from yt.mods import *
        import time
        
        pf = load("DD0152")
@@ -478,11 +514,11 @@
            SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
        t2 = time.time()
        
-       if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+       if is_root():
            print "BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1)
   
   * Remember that if the script handles disk IO explicitly, and does not use
-    a built-in yt function to write data to disk,
+    a built-in YT function to write data to disk,
     care must be taken to
     avoid `race-conditions <http://en.wikipedia.org/wiki/Race_conditions>`_.
     Be explicit about which MPI task writes to disk using a construction
@@ -490,7 +526,7 @@
     
     .. code-block:: python
        
-       if ytcfg.getint("yt", "__topcomm_parallel_rank") == 0:
+       if is_root():
            file = open("out.txt", "w")
            file.write(stuff)
            file.close()

diff -r 29480f031f312f1f00e6ef8644b2b226f38ab364 -r 5ea8e4a299130cbe3e704abbe4adff29ec34a2df source/reference/api/api.rst
--- a/source/reference/api/api.rst
+++ b/source/reference/api/api.rst
@@ -547,6 +547,7 @@
    ~yt.funcs.get_pbar
    ~yt.funcs.humanize_time
    ~yt.funcs.insert_ipython
+   ~yt.funcs.is_root
    ~yt.funcs.iterable
    ~yt.funcs.just_one
    ~yt.funcs.only_on_root
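
The root-only disk IO fragment in the tips above can be fleshed out into a
complete sketch.  The output file name here is a placeholder; the point is
simply that only the root processor touches stdout and the filesystem, so a
parallel run produces one plot and one output file rather than sixteen:

.. code-block:: python

   from yt.mods import *

   pf = load("RD0035/RedshiftOutput0035")
   v, c = pf.h.find_max("Density")
   p = ProjectionPlot(pf, "x", "Density")

   if is_root():
       print v, c
       p.save()
       # Placeholder file name; any explicit disk IO belongs inside this guard.
       f = open("max_density.txt", "w")
       f.write("%s %s\n" % (v, c))
       f.close()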

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


