[yt-svn] commit/yt: 4 new changesets

commits-noreply at bitbucket.org
Fri Jan 29 07:32:29 PST 2016


4 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/3c3177e83689/
Changeset:   3c3177e83689
Branch:      yt
User:        xarthisius
Date:        2016-01-28 21:42:14+00:00
Summary:     [doc] Use print compatible with py3
Affected #:  23 files
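For context, the pattern this changeset applies throughout the docs is replacing Python 2 print statements with ``print()`` calls, which behave the same on Python 2 (with the ``__future__`` import) and Python 3. A minimal sketch, with hypothetical sample values standing in for the ``v, c`` returned by ``ds.find_max``:

```python
from __future__ import print_function  # makes print a function on Python 2 as well

# Hypothetical values; ds.find_max("density") returns a (value, center) pair.
v = 1.5
c = (0.5, 0.5, 0.5)

# Py2-only statement form would be: print v, c
# The function form below produces the same space-separated output on both:
print(v, c)

# String formatting moves inside the call unchanged:
line = "Max density %s at %s" % (v, c)
print(line)
```

The ``__future__`` import is only needed in files that must still run under Python 2; on Python 3 alone, ``print()`` is already a builtin function.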

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/helper_scripts/show_fields.py
--- a/doc/helper_scripts/show_fields.py
+++ b/doc/helper_scripts/show_fields.py
@@ -68,18 +68,18 @@
 ==========
 
 This is a list of many of the fields available in yt.  We have attempted to
-include most of the fields that are accessible through the plugin system, as 
-well as the fields that are known by the frontends, however it is possible to 
-generate many more permutations, particularly through vector operations. For 
+include most of the fields that are accessible through the plugin system, as
+well as the fields that are known by the frontends, however it is possible to
+generate many more permutations, particularly through vector operations. For
 more information about the fields framework, see :ref:`fields`.
 
-Some fields are recognized by specific frontends only. These are typically 
-fields like density and temperature that have their own names and units in 
-the different frontend datasets. Often, these fields are aliased to their 
-yt-named counterpart fields (typically 'gas' fieldtypes). For example, in 
-the ``FLASH`` frontend, the ``dens`` field (i.e. ``(flash, dens)``) is aliased 
-to the gas field density (i.e. ``(gas, density)``), similarly ``(flash, velx)`` 
-is aliased to ``(gas, velocity_x)``, and so on. In what follows, if a field 
+Some fields are recognized by specific frontends only. These are typically
+fields like density and temperature that have their own names and units in
+the different frontend datasets. Often, these fields are aliased to their
+yt-named counterpart fields (typically 'gas' fieldtypes). For example, in
+the ``FLASH`` frontend, the ``dens`` field (i.e. ``(flash, dens)``) is aliased
+to the gas field density (i.e. ``(gas, density)``), similarly ``(flash, velx)``
+is aliased to ``(gas, velocity_x)``, and so on. In what follows, if a field
 is aliased it will be noted.
 
 Try using the ``ds.field_list`` and ``ds.derived_field_list`` to view the
@@ -91,7 +91,7 @@
   import yt
   ds = yt.load("Enzo_64/DD0043/data0043")
   for i in sorted(ds.field_list):
-    print i
+    print(i)
 
 To figure out out what all of the field types here mean, see
 :ref:`known-field-types`.
@@ -112,7 +112,7 @@
 Index of Fields
 ---------------
 
-.. contents:: 
+.. contents::
    :depth: 3
    :backlinks: none
 
@@ -198,11 +198,11 @@
     elif frontend == "boxlib":
         field_info_names = []
         for d in dataset_names:
-            if "Maestro" in d:  
+            if "Maestro" in d:
                 field_info_names.append("MaestroFieldInfo")
-            elif "Castro" in d: 
+            elif "Castro" in d:
                 field_info_names.append("CastroFieldInfo")
-            else: 
+            else:
                 field_info_names.append("BoxlibFieldInfo")
     elif frontend == "chombo":
         # remove low dimensional field info containters for ChomboPIC
@@ -273,7 +273,7 @@
                                   al=f.aliases, aw=len_aliases,
                                   pt=f.ptype, pw=len_part,
                                   dp=f.dname, dw=len_disp))
-                
+
             print(div)
             print("")
 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -93,8 +93,8 @@
 
 .. code:: python
 
-   print leaf_clumps[0]["gas", "density"]
-   print leaf_clumps[0].quantities.total_mass()
+   print(leaf_clumps[0]["gas", "density"])
+   print(leaf_clumps[0].quantities.total_mass())
 
 The writing functions will write out a series or properties about each 
 clump by default.  Additional properties can be appended with the 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -169,7 +169,7 @@
 .. code-block:: python
 
    def my_new_function(halo):
-       print halo.quantities["my_quantity"]
+       print(halo.quantities["my_quantity"])
    add_callback("print_quantity", my_new_function)
 
    # ... Anywhere after "my_quantity" has been called

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -354,7 +354,7 @@
 
 .. code:: python
 
-    print events
+    print(events)
 
 .. code:: python
 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst
+++ b/doc/source/analyzing/analysis_modules/radial_column_density.rst
@@ -50,15 +50,15 @@
   from yt.mods import *
   from yt.analysis_modules.radial_column_density.api import *
   ds = load("data0030")
-  
+
   rcdnumdens = RadialColumnDensity(ds, 'NumberDensity', [0.5, 0.5, 0.5],
     max_radius = 0.5)
   def _RCDNumberDensity(field, data, rcd = rcdnumdens):
       return rcd._build_derived_field(data)
   add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
-  
+
   dd = ds.all_data()
-  print dd['RCDNumberDensity']
+  print(dd['RCDNumberDensity'])
 
 The field ``RCDNumberDensity`` can be used just like any other derived field
 in yt.

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/sunrise_export.rst
--- a/doc/source/analyzing/analysis_modules/sunrise_export.rst
+++ b/doc/source/analyzing/analysis_modules/sunrise_export.rst
@@ -101,7 +101,7 @@
 
 	def find_cell(grid,position):
 	    x=grid
-	    #print grid.LeftEdge
+	    #print(grid.LeftEdge)
 	    for child in grid.Children:
 	        if numpy.all(child.LeftEdge  < position) and\
 	           numpy.all(child.RightEdge > position):

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst
@@ -897,11 +897,11 @@
     all = ds.all_data()
     n = all.quantities["TotalNumDens"]()
     
-    print n,'n'
+    print(n,'n')
     
     # Instantiate our TPF object.
     tpf = TwoPointFunctions(ds, ['density', 'cell_volume'],
-        total_values=1e5, comm_size=10000, 
+        total_values=1e5, comm_size=10000,
         length_number=11, length_range=[-1, .5],
         length_type="lin", vol_ratio=1)
     
@@ -919,7 +919,7 @@
     
     # Now we add the function to the TPF.
     # ``corr_norm`` is used to normalize the correlation function.
-    tpf.add_function(function=dens_tpcorr, out_labels=['tpcorr'], sqrt=[False], 
+    tpf.add_function(function=dens_tpcorr, out_labels=['tpcorr'], sqrt=[False],
         corr_norm=n**2 * sm**2)
     
     # And define how we want to bin things.

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/fields.rst
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -30,9 +30,9 @@
 
 .. code-block:: python
 
-   print ad["humans", "particle_position"]
-   print ad["dogs", "particle_position"]
-   print ad["dinosaurs", "particle_position"]
+   print(ad["humans", "particle_position"])
+   print(ad["dogs", "particle_position"])
+   print(ad["dinosaurs", "particle_position"])
 
 Each of these three fields may have different sizes.  In order to enable
 falling back on asking only for a field by the name, yt will use the most
@@ -43,7 +43,7 @@
 
 .. code-block:: python
 
-   print ad["particle_velocity"]
+   print(ad["particle_velocity"])
 
 it would select ``dinosaurs`` as the field type.
 
@@ -54,7 +54,7 @@
 
 .. code-block:: python
 
-   print ad["deposit", "dark_matter_density"] / ad["gas", "density"]
+   print(ad["deposit", "dark_matter_density"] / ad["gas", "density"])
 
 The ``deposit`` field type is a mesh field, so it will have the same shape as
 the gas density.  If we weren't using ``deposit``, and instead directly
@@ -201,7 +201,9 @@
 information about the field and things like units and so on.  You can use this
 for tab-completing as well as easier access to information.
 
-As an example, you might browse the available fields like so:::
+As an example, you might browse the available fields like so:
+
+.. code-block:: python
 
   print(dir(ds.fields))
   print(dir(ds.fields.gas))
@@ -225,8 +227,8 @@
 .. code-block:: python
 
    ds = yt.load("my_data")
-   print ds.field_list
-   print ds.derived_field_list
+   print(ds.field_list)
+   print(ds.derived_field_list)
 
 By using the ``field_info()`` class, one can access information about a given
 field, like its default units or the source code for it.  
@@ -235,8 +237,8 @@
 
    ds = yt.load("my_data")
    ds.index
-   print ds.field_info["gas", "pressure"].get_units()
-   print ds.field_info["gas", "pressure"].get_source()
+   print(ds.field_info["gas", "pressure"].get_units())
+   print(ds.field_info["gas", "pressure"].get_source())
 
 Particle Fields
 ---------------
@@ -277,7 +279,7 @@
 
    ad.set_field_parameter("wickets", 13)
 
-   print ad.get_field_parameter("wickets")
+   print(ad.get_field_parameter("wickets"))
 
 If a field parameter is not set, ``get_field_parameter`` will return None.  
 Within a field function, these can then be retrieved and used in the same way.
@@ -427,7 +429,7 @@
    fn, = add_nearest_neighbor_field("all", "particle_position", ds)
 
    dd = ds.all_data()
-   print dd[fn]
+   print(dd[fn])
 
 Note that ``fn`` here is the "field name" that yt adds.  It will be of the form
 ``(ptype, nearest_neighbor_distance_NN)`` where ``NN`` is the integer.  By

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/filtering.rst
--- a/doc/source/analyzing/filtering.rst
+++ b/doc/source/analyzing/filtering.rst
@@ -40,9 +40,9 @@
     import numpy as np
     a = np.arange(5)
     bigger_than_two = (a > 2)
-    print "Original Array: a = \n%s" % a
-    print "Boolean Mask: bigger_than_two = \n%s" % bigger_than_two
-    print "Masked Array: a[bigger_than_two] = \n%s" % a[bigger_than_two]
+    print("Original Array: a = \n%s" % a)
+    print("Boolean Mask: bigger_than_two = \n%s" % bigger_than_two)
+    print("Masked Array: a[bigger_than_two] = \n%s" % a[bigger_than_two])
 
 Similarly, if you've created a yt data object (e.g. a region, a sphere), you 
 can examine its field values as a NumPy array by simply indexing it with the 
@@ -55,10 +55,10 @@
     ds = yt.load('Enzo_64/DD0042/data0042')
     ad = ds.all_data()
     hot = ad["temperature"].in_units('K') > 1e6
-    print 'Temperature of all data: ad["temperature"] = \n%s' % ad["temperature"]
-    print "Boolean Mask: hot = \n%s" % hot
-    print 'Temperature of "hot" data: ad["temperature"][hot] = \n%s' % \
-          ad['temperature'][hot]
+    print('Temperature of all data: ad["temperature"] = \n%s' % ad["temperature"])
+    print("Boolean Mask: hot = \n%s" % hot)
+    print('Temperature of "hot" data: ad["temperature"][hot] = \n%s' %
+          ad['temperature'][hot])
 
 This was a simple example, but one can make the conditionals that define
 a boolean mask have multiple parts, and one can stack masks together to
@@ -71,9 +71,9 @@
     ds = yt.load('Enzo_64/DD0042/data0042')
     ad = ds.all_data()
     overpressure_and_fast = (ad["pressure"] > 1e-14) & (ad["velocity_magnitude"].in_units('km/s') > 1e2)
-    print 'Density of all data: ad["density"] = \n%s' % ad['density']
-    print 'Density of "overpressure and fast" data: ad["density"][overpressure_and_fast] = \n%s' % \
-           ad['density'][overpressure_and_fast]
+    print('Density of all data: ad["density"] = \n%s' % ad['density'])
+    print('Density of "overpressure and fast" data: ad["density"][overpressure_and_fast] = \n%s' %
+          ad['density'][overpressure_and_fast])
 
 .. _cut-regions:
 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/generating_processed_data.rst
--- a/doc/source/analyzing/generating_processed_data.rst
+++ b/doc/source/analyzing/generating_processed_data.rst
@@ -107,7 +107,7 @@
    import yt
    ds = yt.load("galaxy0030/galaxy0030")
    source = ds.sphere( "c", (10, "kpc"))
-   profile = yt.create_profile(source, 
+   profile = yt.create_profile(source,
                                [("gas", "density")],          # the bin field
                                [("gas", "temperature"),       # profile field
                                 ("gas", "radial_velocity")],  # profile field
@@ -117,17 +117,17 @@
 
 .. code-block:: python
 
-   print profile.x       # bin field
-   print profile.weight  # weight field
-   print profile["gas", "temperature"]      # profile field
-   print profile["gas", "radial_velocity"]  # profile field
+   print(profile.x)       # bin field
+   print(profile.weight)  # weight field
+   print(profile["gas", "temperature"])      # profile field
+   print(profile["gas", "radial_velocity"])  # profile field
 
 The ``profile.used`` attribute gives a boolean array of the bins which actually 
 have data.
 
 .. code-block:: python
 
-   print profile.used
+   print(profile.used)
 
 If a weight field was given, the profile data will represent the weighted mean of 
 a field.  In this case, the weighted variance will be calculated automatically and 
@@ -135,14 +135,14 @@
 
 .. code-block:: python
 
-   print profile.variance["gas", "temperature"]
+   print(profile.variance["gas", "temperature"])
 
 A two-dimensional profile of the total gas mass in bins of density and temperature 
 can be created as follows:
 
 .. code-block:: python
 
-   profile2d = yt.create_profile(source, 
+   profile2d = yt.create_profile(source,
                                  [("gas", "density"),      # the x bin field
                                   ("gas", "temperature")], # the y bin field
                                  [("gas", "cell_mass")],   # the profile field
@@ -152,9 +152,9 @@
 
 .. code-block:: python
 
-   print profile2d.x
-   print profile2d.y
-   print profile2d["gas", "cell_mass"]
+   print(profile2d.x)
+   print(profile2d.y)
+   print(profile2d["gas", "cell_mass"])
 
 One of the more interesting things that is enabled with this approach is
 the generation of 1D profiles that correspond to 2D profiles.  For instance, a
@@ -185,8 +185,8 @@
 
 .. code-block:: python
 
-   ray = ds.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
-   print ray["density"]
+   ray = ds.ray((0.3, 0.5, 0.9), (0.1, 0.8, 0.5))
+   print(ray["density"])
 
 The points are ordered, but the ray is also traversing cells of varying length,
 as well as taking a varying distance to cross each cell.  To determine the
@@ -198,8 +198,8 @@
 
 .. code-block:: python
 
-   print ray['dts'].sum()
-   print ray['t']
+   print(ray['dts'].sum())
+   print(ray['t'])
 
 These can be used as inputs to, for instance, the Matplotlib function
 :func:`~matplotlib.pyplot.plot`, or they can be saved to disk.

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -55,13 +55,14 @@
    sp = ds.sphere([0.5, 0.5, 0.5], (1, 'kpc'))
 
    # Show all temperature values
-   print sp["temperature"]
+   print(sp["temperature"])
 
    # Print things in a more human-friendly manner: one temperature at a time
-   print "(x,  y,  z) Temperature"
-   print "-----------------------"
+   print("(x,  y,  z) Temperature")
+   print("-----------------------")
    for i in range(sp["temperature"].size):
-       print "(%f,  %f,  %f)    %f" % (sp["x"][i], sp["y"][i], sp["z"][i], sp["temperature"][i])
+       print("(%f,  %f,  %f)    %f" %
+             (sp["x"][i], sp["y"][i], sp["z"][i], sp["temperature"][i]))
 
 .. _quickly-selecting-data:
 
@@ -390,7 +391,7 @@
 
    ds = load("my_data")
    sp = ds.sphere('c', (10, 'kpc'))
-   print sp.quantities.angular_momentum_vector()
+   print(sp.quantities.angular_momentum_vector())
 
 Quickly Processing Data
 ^^^^^^^^^^^^^^^^^^^^^^^
@@ -586,7 +587,7 @@
 
    obj = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99],
                           dims=[128, 128, 128])
-   print obj["deposit", "all_density"]
+   print(obj["deposit", "all_density"])
 
 While these cannot yet be used as input to projections or slices, slices and
 projections can be taken of the data in them and visualized by hand.

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -95,10 +95,10 @@
 
    import yt
    yt.enable_parallelism()
-   
+ 
    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max("density")
-   print v, c
+   print(v, c)
    p = yt.ProjectionPlot(ds, "x", "density")
    p.save()
 
@@ -130,7 +130,7 @@
 Many yt operations will automatically run in parallel (see the next section for
 a full enumeration), however some operations, particularly ones that print
 output or save data to the filesystem, will be run by all processors in a
-parallel script.  For example, in the script above the lines ``print v,c`` and
+parallel script.  For example, in the script above the lines ``print(v, c)`` and
 ``p.save()`` will be run on all 16 processors.  This means that your terminal
 output will contain 16 repetitions of the output of the print statement and the
 plot will be saved to disk 16 times (overwritten each time).
@@ -151,7 +151,7 @@
    v, c = ds.find_max("density")
    p = yt.ProjectionPlot(ds, "x", "density")
    if yt.is_root():
-       print v, c
+       print(v, c)
        p.save()
 
 The second function, :func:`~yt.funcs.only_on_root` accepts the name of a
@@ -167,15 +167,15 @@
    import yt
    yt.enable_parallelism()
 
-   def print_and_save_plot(v, c, plot, print=True):
-       if print:
-          print v, c
+   def print_and_save_plot(v, c, plot, verbose=True):
+       if verbose:
+          print(v, c)
        plot.save()
 
    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max("density")
    p = yt.ProjectionPlot(ds, "x", "density")
-   yt.only_on_root(print_and_save_plot, v, c, plot, print=True)
+   yt.only_on_root(print_and_save_plot, v, c, plot, verbose=True)
 
 Types of Parallelism
 --------------------
@@ -263,7 +263,7 @@
         sto.result = <some information processed for this dataset>
         sto.result_id = <some identfier for this dataset>
 
-    print my_dictionary
+    print(my_dictionary)
 
 .. _parallelizing-your-analysis:
 
@@ -281,9 +281,9 @@
    # As always...
    import yt
    yt.enable_parallelism()
-   
+
    import glob
-   
+
    # The number 4, below, is the number of processes to parallelize over, which
    # is generally equal to the number of MPI tasks the job is launched with.
    # If num_procs is set to zero or a negative number, the for loop below
@@ -292,7 +292,7 @@
    # MPI tasks the job is run with, num_procs will default to the number of
    # MPI tasks automatically.
    num_procs = 4
-   
+
    # fns is a list of all the simulation data files in the current directory.
    fns = glob.glob("./plot*")
    fns.sort()
@@ -332,7 +332,7 @@
    # tasks do nothing.
    if yt.is_root()
        for fn, vals in sorted(my_storage.items()):
-           print fn, vals
+           print(fn, vals)
 
 This example above can be modified to loop over anything that can be saved to
 a Python list: halos, data files, arrays, and more.
@@ -372,7 +372,7 @@
 
    # Print out the angular momentum vector for all of the datasets
    for L in sorted(storage.items()):
-       print L
+       print(L)
 
 Note that this script can be run in serial or parallel with an arbitrary number
 of processors.  When running in parallel, each output is given to a different
@@ -390,7 +390,7 @@
    yt.enable_parallelism()
 
    ts = yt.DatasetSeries("DD*/output_*", parallel = 4)
-   
+
    for ds in ts.piter():
        sphere = ds.sphere("max", (1.0, "pc))
        L_vecs = sphere.quantities.angular_momentum_vector()
@@ -563,9 +563,9 @@
        array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
        SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
    t2 = time.time()
-   
+
    if yt.is_root()
-       print "BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1)
+       print("BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1))
   
 * Remember that if the script handles disk IO explicitly, and does not use
   a built-in yt function to write data to disk,

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -64,7 +64,7 @@
    import yt
    ts = yt.load("*/*.index")
    for ds in ts:
-       print ds.current_time
+       print(ds.current_time)
 
 This can also operate in parallel, using
 :meth:`~yt.data_objects.time_series.DatasetSeries.piter`.  For more examples,
@@ -113,7 +113,7 @@
 
   for ds in my_sim.piter()
       all_data = ds.all_data()
-      print all_data.quantities.extrema('density')
+      print(all_data.quantities.extrema('density'))
  
 Additional keywords can be given to 
 :meth:`frontends.enzo.simulation_handling.EnzoSimulation.get_time_series` 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/analyzing/units/fields_and_unit_conversion.rst
--- a/doc/source/analyzing/units/fields_and_unit_conversion.rst
+++ b/doc/source/analyzing/units/fields_and_unit_conversion.rst
@@ -46,8 +46,8 @@
    ds = yt.load('HiresIsolatedGalaxy/DD0044/DD0044')
    ad = ds.all_data()
 
-   print ad['cell_volume'].in_cgs()
-   print np.sqrt(ad['cell_volume'].in_cgs())
+   print(ad['cell_volume'].in_cgs())
+   print(np.sqrt(ad['cell_volume'].in_cgs()))
 
 That said, it is necessary to specify the units in the call to the
 :code:`add_field` function.  Not only does this ensure the returned units

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/cookbook/rockstar_nest.py
--- a/doc/source/cookbook/rockstar_nest.py
+++ b/doc/source/cookbook/rockstar_nest.py
@@ -1,9 +1,9 @@
-# You must run this job in parallel.  
+# You must run this job in parallel.
 # There are several mpi flags which can be useful in order for it to work OK.
-# It requires at least 3 processors in order to run because of the way in which 
-# rockstar divides up the work.  Make sure you have mpi4py installed as per 
+# It requires at least 3 processors in order to run because of the way in which
+# rockstar divides up the work.  Make sure you have mpi4py installed as per
 # http://yt-project.org/docs/dev/analyzing/parallel_computation.html#setting-up-parallel-yt
-    
+
 # Usage: mpirun -np <num_procs> --mca btl ^openib python this_script.py
 
 import yt
@@ -47,16 +47,18 @@
 
 # If desired, we can see the total number of DM and High-res DM particles
 #if yt.is_root():
-#    print "Simulation has %d DM particles." % ad['dark_matter','particle_type'].shape
-#    print "Simulation has %d Highest Res DM particles." % ad['max_res_dark_matter', 'particle_type'].shape
+#    print("Simulation has %d DM particles." %
+#          ad['dark_matter','particle_type'].shape)
+#    print("Simulation has %d Highest Res DM particles." %
+#          ad['max_res_dark_matter', 'particle_type'].shape)
 
-# Run the halo catalog on the dataset only on the highest resolution dark matter 
+# Run the halo catalog on the dataset only on the highest resolution dark matter
 # particles
 hc = HaloCatalog(data_ds=ds, finder_method='rockstar', \
                  finder_kwargs={'dm_only':True, 'particle_type':'max_res_dark_matter'})
 hc.create()
 
-# Or alternatively, just run the RockstarHaloFinder and later import the 
+# Or alternatively, just run the RockstarHaloFinder and later import the
 # output file as necessary.  You can skip this step if you've already run it
 # once, but be careful since subsequent halo finds will overwrite this data.
 #rhf = RockstarHaloFinder(ds, particle_type="max_res_dark_matter")

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -651,7 +651,7 @@
 
   ds = yt.load("m33_hi.fits")
   circle_region = ds9_region(ds, "circle.reg")
-  print circle_region.quantities.extrema("flux")
+  print(circle_region.quantities.extrema("flux"))
 
 
 ``PlotWindowWCS``
@@ -1251,15 +1251,15 @@
    ds = yt.load("gadget_fof_halos/groups_042/fof_subhalo_tab_042.0.hdf5")
    ad = ds.all_data()
    # The halo mass
-   print ad["Group", "particle_mass"]
-   print ad["Subhalo", "particle_mass"]
+   print(ad["Group", "particle_mass"])
+   print(ad["Subhalo", "particle_mass"])
    # Halo ID
-   print ad["Group", "particle_identifier"]
-   print ad["Subhalo", "particle_identifier"]
+   print(ad["Group", "particle_identifier"])
+   print(ad["Subhalo", "particle_identifier"])
    # positions
-   print ad["Group", "particle_position_x"]
+   print(ad["Group", "particle_position_x"])
    # velocities
-   print ad["Group", "particle_velocity_x"]
+   print(ad["Group", "particle_velocity_x"])
 
 Multidimensional fields can be accessed through the field name followed by an 
 underscore and the index.
@@ -1267,7 +1267,7 @@
 .. code-block:: python
 
    # x component of the spin
-   print ad["Subhalo", "SubhaloSpin_0"]
+   print(ad["Subhalo", "SubhaloSpin_0"])
 
 OWLS FOF/SUBFIND
 ^^^^^^^^^^^^^^^^
@@ -1281,7 +1281,7 @@
    ds = yt.load("owls_fof_halos/groups_008/group_008.0.hdf5")
    ad = ds.all_data()
    # The halo mass
-   print ad["FOF", "particle_mass"]
+   print(ad["FOF", "particle_mass"])
 
 Rockstar
 ^^^^^^^^
@@ -1295,7 +1295,7 @@
    ds = yt.load("rockstar_halos/halos_0.0.bin")
    ad = ds.all_data()
    # The halo mass
-   print ad["halos", "particle_mass"]
+   print(ad["halos", "particle_mass"])
 
 PyNE Data
 ---------

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -66,7 +66,7 @@
 
    g = ds.index.grids[1043]
    g2 = g.Children[1].Children[0]
-   print g2.LeftEdge
+   print(g2.LeftEdge)
 
 .. _examining-grid-data:
 
@@ -85,8 +85,8 @@
 .. code-block:: python
 
    g = ds.index.grids[1043]
-   print g["density"]
-   print g["density"].min()
+   print(g["density"])
+   print(g["density"].min())
 
 To access the raw data, you have to call the IO handler from the index
 instead.  This is somewhat more low-level.
@@ -119,7 +119,7 @@
 
    gs, gi = ds.find_point((0.5, 0.6, 0.9))
    for g in gs:
-       print g.Level, g.LeftEdge, g.RightEdge
+       print(g.Level, g.LeftEdge, g.RightEdge)
 
 Note that this doesn't just return the canonical output, but also all of the
 parent grids that overlap with that point.
@@ -160,14 +160,14 @@
 
 .. code-block:: python
 
-   print all_data_level_0['density'].shape
+   print(all_data_level_0['density'].shape)
    (64, 64, 64)
 
-   print all_data_level_0['density']
-    
+   print(all_data_level_0['density'])
+ 
    array([[[  1.92588925e-31,   1.74647692e-31,   2.54787518e-31, ...,
   
-   print all_data_level_0['temperature'].shape
+   print(all_data_level_0['temperature'].shape)
    (64, 64, 64)
 
 If you create a covering grid that spans two child grids of a single parent 
@@ -191,10 +191,10 @@
 
 .. code-block:: python
 
-   print all_data_level_2['density'].shape
+   print(all_data_level_2['density'].shape)
    (256, 256, 256)
 
-   print all_data_level_2['density'][128, 128, 128]
+   print(all_data_level_2['density'][128, 128, 128])
    1.7747457571203124e-31
 
 There are two different types of covering grids: unsmoothed and smoothed. 
@@ -212,10 +212,10 @@
    all_data_level_2_s = ds.smoothed_covering_grid(2, [0.0, 0.0, 0.0], 
                                                     ds.domain_dimensions * 2**2)
 
-   print all_data_level_2_s['density'].shape
+   print(all_data_level_2_s['density'].shape)
    (256, 256, 256)
 
-   print all_data_level_2_s['density'][128, 128, 128]
+   print(all_data_level_2_s['density'][128, 128, 128])
    1.763744852165591e-31
 
 .. _examining-image-data-in-a-fixed-resolution-array:

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/faq/index.rst
--- a/doc/source/faq/index.rst
+++ b/doc/source/faq/index.rst
@@ -160,15 +160,15 @@
 
 .. code-block:: python
 
-    print "Length unit: ", ds.length_unit
-    print "Time unit: ", ds.time_unit
-    print "Mass unit: ", ds.mass_unit
-    print "Velocity unit: ", ds.velocity_unit
+    print("Length unit: ", ds.length_unit)
+    print("Time unit: ", ds.time_unit)
+    print("Mass unit: ", ds.mass_unit)
+    print("Velocity unit: ", ds.velocity_unit)
 
-    print "Length unit: ", ds.length_unit.in_units('code_length')
-    print "Time unit: ", ds.time_unit.in_units('code_time')
-    print "Mass unit: ", ds.mass_unit.in_units('kg')
-    print "Velocity unit: ", ds.velocity_unit.in_units('Mpc/year')
+    print("Length unit: ", ds.length_unit.in_units('code_length'))
+    print("Time unit: ", ds.time_unit.in_units('code_time'))
+    print("Mass unit: ", ds.mass_unit.in_units('kg'))
+    print("Velocity unit: ", ds.velocity_unit.in_units('Mpc/year'))
 
 So to accomplish the example task of converting a scalar variable ``x`` in 
 code units to kpc in yt-3.0, you can do one of two things.  If ``x`` is 
@@ -205,9 +205,9 @@
 
 .. code-block:: python
 
-    print "One Mpc in code_units:", one_Mpc.in_units('code_length')
-    print "One Mpc in AU:", one_Mpc.in_units('AU')
-    print "One Mpc in comoving kpc:", one_Mpc.in_units('kpccm')
+    print("One Mpc in code_units:", one_Mpc.in_units('code_length'))
+    print("One Mpc in AU:", one_Mpc.in_units('AU'))
+    print("One Mpc in comoving kpc:", one_Mpc.in_units('kpccm'))
 
 For more information about unit conversion, see :ref:`data_selection_and_fields`.
 
@@ -226,7 +226,7 @@
     >>> x = ds.quan(1, 'kpc')
 
     # Try to add this to some non-dimensional quantity
-    >>> print x + 1
+    >>> print(x + 1)
     
     YTUnitOperationError: The addition operator for YTArrays with units (kpc) and (1) is not well defined.
 
@@ -240,14 +240,14 @@
 
     x = ds.quan(1, 'kpc')
     x_val = x.v
-    print x_val 
+    print(x_val)
 
     array(1.0)
 
     # Try to add this to some non-dimensional quantity
-    print x + 1
+    print(x + 1)
 
-    2.0 
+    2.0
 
 For more information about this functionality with units, see :ref:`data_selection_and_fields`.
 
@@ -298,15 +298,15 @@
 
 .. code-block:: python
 
-   print ds.field_list
-   print ds.derived_field_list
+   print(ds.field_list)
+   print(ds.derived_field_list)
 
 or for a more legible version, try:
 
 .. code-block:: python
 
-   for field in ds.derived_field_list: 
-       print field
+   for field in ds.derived_field_list:
+       print(field)
 
 .. _faq-add-field-diffs:
 
@@ -450,7 +450,7 @@
 
    from yt.config import ytcfg
    ytcfg["yt","loglevel"] = "40" # This sets the log level to "ERROR"
-   
+
 which in this case would suppress everything below error messages. For reference, the numerical 
 values corresponding to different log levels are:
 

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/reference/field_list.rst
--- a/doc/source/reference/field_list.rst
+++ b/doc/source/reference/field_list.rst
@@ -28,7 +28,7 @@
   import yt
   ds = yt.load("Enzo_64/DD0043/data0043")
   for i in sorted(ds.field_list):
-    print i
+    print(i)
 
 To figure out what all of the field types here mean, see
 :ref:`known-field-types`.
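The loop in the hunk above prints one field tuple per line; a runnable stand-in sketch (the ``field_list`` below is hypothetical, where a real one would come from ``ds.field_list`` after ``yt.load``):

```python
# Hypothetical field list; a real one comes from ds.field_list
field_list = [("gas", "density"), ("enzo", "Temperature"), ("enzo", "Density")]

# sorted() orders the (field_type, field_name) tuples lexicographically
for i in sorted(field_list):
    print(i)
```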

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/reference/python_introduction.rst
--- a/doc/source/reference/python_introduction.rst
+++ b/doc/source/reference/python_introduction.rst
@@ -46,7 +46,7 @@
 This will open up Python and give you a simple prompt of three greater-than
 signs.  Let's inaugurate the occasion appropriately -- type this::
 
-   >>> print "Hello, world."
+   >>> print("Hello, world.")
 
 As you can see, this printed out the string "Hello, world." just as we
 expected.  Now let's try a more advanced string, one with a number in it.  For
@@ -54,13 +54,13 @@
 values are fed into a formatted string.  We'll print pi, but only with three
 digits of accuracy.::
 
-   >>> print "Pi is precisely %0.2f" % (3.1415926)
+   >>> print("Pi is precisely %0.2f" % (3.1415926))
 
 This took the number we fed it (3.1415926) and printed it out as a floating
 point number with two decimal places.  Now let's try something a bit different
 -- let's print out both the name of the number and its value.::
 
-   >>> print "%s is precisely %0.2f" % ("pi", 3.1415926)
+   >>> print("%s is precisely %0.2f" % ("pi", 3.1415926))
 
 As you can see, we used ``%s`` to say that the string should print a value as a
 string (the supplied value does not have to be a string -- ``"pi"`` could be
@@ -99,18 +99,18 @@
 very easily::
 
    >>> my_string = "Hello there"
-   >>> print my_string
+   >>> print(my_string)
 
 We can also take a look at each individual part of a string.  We'll use the
 'slicing' notation for this.  As a brief note, slicing is 0-indexed, so that
 element 0 corresponds to the first element.  If we wanted to see the third
 element of our string::
 
-   >>> print my_string[2]
+   >>> print(my_string[2])
 
 We can also take the third through fifth elements::
 
-   >>> print my_string[2:5]
+   >>> print(my_string[2:5])
 
 But note that if you try to change an element directly, Python objects and it
 won't let you -- that's because strings are immutable.  (But, note that because
@@ -119,14 +119,14 @@
 To create a number, we do something similar::
 
    >>> a = 10
-   >>> print a
+   >>> print(a)
 
 This works for floating points as well.  Now we can do math on these numbers::
 
-   >>> print a**2
-   >>> print a + 5
-   >>> print a + 5.1
-   >>> print a / 2.0
+   >>> print(a**2)
+   >>> print(a + 5)
+   >>> print(a + 5.1)
+   >>> print(a / 2.0)
 
 Because of a historical aversion to floating point division in Python (which is
 now changing) it's always safest to ensure that either the numerator or the
@@ -146,9 +146,9 @@
 
    >>> my_list.append(1)
    >>> my_list.append(my_string)
-   >>> print my_list[0]
-   >>> print my_list[-1]
-   >>> print len(my_list)
+   >>> print(my_list[0])
+   >>> print(my_list[-1])
+   >>> print(len(my_list))
 
 You can also create a list already containing an initial set of elements::
 
@@ -194,30 +194,30 @@
 
    >>> a = 10.1
    >>> b = 10.1
-   >>> print a == b
-   >>> print a is b
+   >>> print(a == b)
+   >>> print(a is b)
 
 The first one returned True, but the second one returned False.  Even though
 both numbers are equal, they refer to different objects in memory.  Now let's
 try assigning things a bit differently::
 
    >>> b = a
-   >>> print a is b
+   >>> print(a is b)
 
 This time it's true -- they point to the same part of memory.  Try incrementing
 one and seeing what happens.  Now let's try this with a string::
 
    >>> a = "Hi there"
    >>> b = a
-   >>> print a is b
+   >>> print(a is b)
 
 Okay, so our intuition here works the same way, and it returns True.  But what
 happens if we modify the string?::
 
    >>> a += "!"
-   >>> print a
-   >>> print b
-   >>> print a is b
+   >>> print(a)
+   >>> print(b)
+   >>> print(a is b)
 
 As you can see, now not only does a contain the value "Hi there!", but it also
 is a different value than what b contains, and it also points to a different
@@ -239,19 +239,19 @@
 shows up in b::
 
    >>> a.append("hat wobble")
-   >>> print b[-1]
+   >>> print(b[-1])
 
 This also works with the concatenation operator::
 
    >>> a += ["beta sequences"]
-   >>> print a[-1], b[-1]
+   >>> print(a[-1], b[-1])
 
 But we can force a break in this by slicing the list when we initialize::
 
    >>> a = [1, 2, 3, 4]
    >>> b = a[:]
    >>> a.append(5)
-   >>> print b[-1], a[-1]
+   >>> print(b[-1], a[-1])
 
 Here they are different, because we have sliced the list when initializing b.
 
@@ -264,7 +264,7 @@
    >>> my_dict["A"] = 1.0
    >>> my_dict["B"] = 154.014
    >>> my_dict[14001] = "This number is great"
-   >>> print my_dict["A"]
+   >>> print(my_dict["A"])
 
 As you can see, one value can be used to look up another.  Almost all datatypes
 (with a few notable exceptions, but for the most part these are quite uncommon)
@@ -296,7 +296,7 @@
 To see this in action, let's first take a look at the built-in function
 ``range``. ::
 
-   >>> print range(10)
+   >>> print(range(10))
 
 As you can see, what the function ``range`` returns is a sequence of integers,
 starting at zero, that is as long as the argument to the ``range`` function.
 (On Python 3, ``range`` returns a lazy range object rather than a list; wrap
 it in ``list()`` to see all the values at once.)
@@ -319,7 +319,7 @@
 enter again.  The entire entry should look like this::
 
    >>> for i in range(10):
-   ...     print i
+   ...     print(i)
    ...
 
 As you can see, it prints out each integer in turn.  So far this feels a lot
@@ -330,7 +330,7 @@
 
    >>> my_sequence = ["a", "b", 4, 110.4]
    >>> for i in my_sequence:
-   ...     print i
+   ...     print(i)
    ...
 
 This time it prints out every item in the sequence.
@@ -341,7 +341,7 @@
    >>> index = 0
    >>> my_sequence = ["a", "b", 4, 110.4]
    >>> for i in my_sequence:
-   ...     print "%s = %s" % (index, i)
+   ...     print("%s = %s" % (index, i))
    ...     index += 1
    ...
 
@@ -352,7 +352,7 @@
 
    >>> my_sequence = ["a", "b", 4, 110.4]
    >>> for index, val in enumerate(my_sequence):
-   ...     print "%s = %s" % (index, val)
+   ...     print("%s = %s" % (index, val))
    ...
 
 This does the exact same thing, but we didn't have to keep track of the counter
@@ -361,14 +361,14 @@
 
    >>> my_sequence = range(10)
    >>> for val in reversed(my_sequence):
-   ...     print val
+   ...     print(val)
    ...
 
 We can even combine the two!::
 
    >>> my_sequence = range(10)
    >>> for index, val in enumerate(reversed(my_sequence)):
-   ...     print "%s = %s" % (index, val)
+   ...     print("%s = %s" % (index, val))
    ...
 
 The most fun of all the built-in functions that operate on iterables, however,
@@ -385,7 +385,7 @@
    >>> for v1, v2 in zip(seq1, seq2):
    ...     seq3.append(v1 + v2)
    ...
-   >>> print seq3
+   >>> print(seq3)
 
 As you can see, this is much easier than constructing index values by hand and
 then drawing from the two sequences using those index values.  I should note
@@ -416,7 +416,7 @@
 
    >>> for val in range(100):
    ...     if val % 2 == 0:
-   ...         print "%s is a multiple of 2" % (val)
+   ...         print("%s is a multiple of 2" % (val))
    ...
 
 Now we'll add on an ``else`` statement, so that we print out all the odd
@@ -424,9 +424,9 @@
 
    >>> for val in range(100):
    ...     if val % 2 == 0:
-   ...         print "%s is a multiple of 2" % (val)
+   ...         print("%s is a multiple of 2" % (val))
    ...     else:
-   ...         print "%s is not a multiple of 2" % (val)
+   ...         print("%s is not a multiple of 2" % (val))
    ...
 
 Let's extend this to check the remainders of division with both 2 and 3, and
@@ -435,11 +435,11 @@
 
    >>> for val in range(100):
    ...     if val % 2 == 0:
-   ...         print "%s is a multiple of 2" % (val)
+   ...         print("%s is a multiple of 2" % (val))
    ...     elif val % 3 == 0:
-   ...         print "%s is a multiple of 3" % (val):
+   ...         print("%s is a multiple of 3" % (val))
    ...     else:
-   ...         print "%s is not a multiple of 2 or 3" % (val)
+   ...         print("%s is not a multiple of 2 or 3" % (val))
    ...
 
 This should print out which numbers are multiples of 2 or 3 -- but note that
@@ -449,13 +449,13 @@
 
    >>> for val in range(100):
    ...     if val % 2 == 0 and val % 3 == 0:
-   ...         print "%s is a multiple of 6" % (val)
+   ...         print("%s is a multiple of 6" % (val))
    ...     elif val % 2 == 0:
-   ...         print "%s is a multiple of 2" % (val)
+   ...         print("%s is a multiple of 2" % (val))
    ...     elif val % 3 == 0:
-   ...         print "%s is a multiple of 3" % (val):
+   ...         print("%s is a multiple of 3" % (val))
    ...     else:
-   ...         print "%s is not a multiple of 2 or 3" % (val)
+   ...         print("%s is not a multiple of 2 or 3" % (val))
    ...
 
 In addition to the ``and`` statement, the ``or`` and ``not`` statements work in
@@ -517,9 +517,9 @@
 numbers from 0 to 99::
 
    >>> my_array = numpy.arange(100)
-   >>> print my_array
-   >>> print my_array * 2.0
-   >>> print my_array * 2
+   >>> print(my_array)
+   >>> print(my_array * 2.0)
+   >>> print(my_array * 2)
 
 As you can see, each of these operations does exactly what we think it ought
 to.  And, in fact, so does this one::
@@ -533,14 +533,14 @@
 includes dimensionality) and the size (strictly the total number of elements)
 in an array by looking at a couple properties of the array::
 
-   >>> print my_array.size
-   >>> print my_array.shape
+   >>> print(my_array.size)
+   >>> print(my_array.shape)
 
 Note that size must be the product of the components of the shape.  In this
 case, both are 100.  We can obtain a new array of a different shape by calling
 the ``reshape`` method on an array::
 
-   >>> print my_array.reshape((10, 10))
+   >>> print(my_array.reshape((10, 10)))
 
 In this case, we have not modified ``my_array`` but instead created a new array
 containing the same elements, but with a different dimensionality and shape.
@@ -552,7 +552,7 @@
 them.  We can see what kind of datatype an array is by examining its ``dtype``
 attribute::
 
-   >>> print my_array.dtype
+   >>> print(my_array.dtype)
 
 This can be changed by calling ``astype`` with another datatype.  Datatypes
 include, but are not limited to, ``int32``, ``int64``, ``float32``,
@@ -566,29 +566,29 @@
 100, and then we'll multiply our original array against those random values.::
 
    >>> rand_array = numpy.random.random(100)
-   >>> print rand_array * my_array
+   >>> print(rand_array * my_array)
 
 There are a number of functions you can call on arrays, as well.  For
 instance::
 
-   >>> print rand_array.sum()
-   >>> print rand_array.mean()
-   >>> print rand_array.min()
-   >>> print rand_array.max()
+   >>> print(rand_array.sum())
+   >>> print(rand_array.mean())
+   >>> print(rand_array.min())
+   >>> print(rand_array.max())
 
 Indexing in NumPy is very fun, and also provides some advanced functionality
 for selecting values.  You can slice and dice arrays::
 
-   >>> print my_array[50:60]
-   >>> print my_array[::2]
-   >>> print my_array[:-10]
+   >>> print(my_array[50:60])
+   >>> print(my_array[::2])
+   >>> print(my_array[:-10])
 
 But NumPy also provides the ability to construct boolean arrays, which are the
 result of conditionals.  For example, let's say that you wanted to generate a
 random set of values, and select only those less than 0.2::
 
    >>> rand_array = numpy.random.random(100)
-   >>> print rand_array < 0.2
+   >>> print(rand_array < 0.2)
 
 What is returned is an array of booleans.  Boolean arrays can be used as
 indices -- what this means is that you can construct an index array and then
@@ -599,21 +599,21 @@
 respectively.::
 
    >>> ind_array = rand_array < 0.2
-   >>> print rand_array[ind_array]
-   >>> print numpy.all(rand_array[ind_array] < 0.2)
+   >>> print(rand_array[ind_array])
+   >>> print(numpy.all(rand_array[ind_array] < 0.2))
 
 You can even skip the creation of the variable ``ind_array`` completely, and
 instead just coalesce the statements into a single statement::
 
-   >>> print numpy.all(rand_array[rand_array < 0.2] < 0.2)
-   >>> print numpy.any(rand_array[rand_array < 0.2] > 0.2)
+   >>> print(numpy.all(rand_array[rand_array < 0.2] < 0.2))
+   >>> print(numpy.any(rand_array[rand_array < 0.2] > 0.2))
 
 You might look at these and wonder why this is useful -- we've already selected
 those elements that are less than 0.2, so why do we want to re-evaluate it?
 But the interesting component to this is that a conditional applied to one
 array can be used to index another array.  For instance::
 
-   >>> print my_array[rand_array < 0.2]
+   >>> print(my_array[rand_array < 0.2])
 
 Here we've identified those elements in our random number array that are less
 than 0.2, and printed the corresponding elements from our original sequential
@@ -669,7 +669,7 @@
    my_array_squared = my_array**2.0
    t2 = time.time()
 
-   print "It took me %0.3e seconds to square the array using NumPy" % (t2-t1)
+   print("It took me %0.3e seconds to square the array using NumPy" % (t2-t1))
 
    t1 = time.time()
    my_sequence_squared = []
@@ -677,7 +677,7 @@
        my_sequence_squared.append(i**2.0)
    t2 = time.time()
 
-   print "It took me %0.3e seconds to square the sequence without NumPy" % (t2-t1)
+   print("It took me %0.3e seconds to square the sequence without NumPy" % (t2-t1))
 
 Now save this file, and return to the command prompt.  We can execute it by
 supplying it to Python:
@@ -703,7 +703,7 @@
 created will be available to you -- so you can, for instance, print out the
 contents of ``my_array_squared``::
 
-   >>> print my_array_squared
+   >>> print(my_array_squared)
 
 The scripting interface for Python is quite powerful, and by combining it with
 interactive execution, you can, for instance, set up variables and functions

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -795,9 +795,9 @@
                       weight_field=None)
    profile = plot.profiles[0]
    # print the bin field, in this case temperature
-   print profile.x
+   print(profile.x)
    # print the profiled cell_mass field
-   print profile['cell_mass']
+   print(profile['cell_mass'])
 
 Other options, such as the number of bins, are also configurable. See the
 documentation for :class:`~yt.visualization.profile_plotter.ProfilePlot` for

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/visualizing/sketchfab.rst
--- a/doc/source/visualizing/sketchfab.rst
+++ b/doc/source/visualizing/sketchfab.rst
@@ -64,7 +64,7 @@
 
 .. code-block:: python
 
-   print surface["temperature"].min(), surface["temperature"].max()
+   print(surface["temperature"].min(), surface["temperature"].max())
 
 will return the values 11850.7476943 and 13641.0663899.  These values are
 interpolated to the face centers of every triangle that constitutes a portion

diff -r 150126b6a5b559611cab2f30a66094e3e7e51d30 -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b doc/source/yt3differences.rst
--- a/doc/source/yt3differences.rst
+++ b/doc/source/yt3differences.rst
@@ -334,7 +334,7 @@
 
    for chunk in obj.chunks([], "spatial"):
        for grid in chunk._current_chunk.objs:
-           print grid
+           print(grid)
 
 This will "spatially" chunk the ``obj`` object and print out all the grids
 included.


https://bitbucket.org/yt_analysis/yt/commits/01292c5cd417/
Changeset:   01292c5cd417
Branch:      yt
User:        xarthisius
Date:        2016-01-28 21:43:16+00:00
Summary:     Drop obsolete version of numpydoc that's not py3 compatible
Affected #:  11 files

diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/__init__.py
--- a/doc/extensions/numpydocmod/__init__.py
+++ /dev/null
@@ -1,1 +0,0 @@
-from numpydoc import setup

diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/comment_eater.py
--- a/doc/extensions/numpydocmod/comment_eater.py
+++ /dev/null
@@ -1,158 +0,0 @@
-from cStringIO import StringIO
-import compiler
-import inspect
-import textwrap
-import tokenize
-
-from compiler_unparse import unparse
-
-
-class Comment(object):
-    """ A comment block.
-    """
-    is_comment = True
-    def __init__(self, start_lineno, end_lineno, text):
-        # int : The first line number in the block. 1-indexed.
-        self.start_lineno = start_lineno
-        # int : The last line number. Inclusive!
-        self.end_lineno = end_lineno
-        # str : The text block including '#' character but not any leading spaces.
-        self.text = text
-
-    def add(self, string, start, end, line):
-        """ Add a new comment line.
-        """
-        self.start_lineno = min(self.start_lineno, start[0])
-        self.end_lineno = max(self.end_lineno, end[0])
-        self.text += string
-
-    def __repr__(self):
-        return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno,
-            self.end_lineno, self.text)
-
-
-class NonComment(object):
-    """ A non-comment block of code.
-    """
-    is_comment = False
-    def __init__(self, start_lineno, end_lineno):
-        self.start_lineno = start_lineno
-        self.end_lineno = end_lineno
-
-    def add(self, string, start, end, line):
-        """ Add lines to the block.
-        """
-        if string.strip():
-            # Only add if not entirely whitespace.
-            self.start_lineno = min(self.start_lineno, start[0])
-            self.end_lineno = max(self.end_lineno, end[0])
-
-    def __repr__(self):
-        return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno,
-            self.end_lineno)
-
-
-class CommentBlocker(object):
-    """ Pull out contiguous comment blocks.
-    """
-    def __init__(self):
-        # Start with a dummy.
-        self.current_block = NonComment(0, 0)
-
-        # All of the blocks seen so far.
-        self.blocks = []
-
-        # The index mapping lines of code to their associated comment blocks.
-        self.index = {}
-
-    def process_file(self, file):
-        """ Process a file object.
-        """
-        for token in tokenize.generate_tokens(file.next):
-            self.process_token(*token)
-        self.make_index()
-
-    def process_token(self, kind, string, start, end, line):
-        """ Process a single token.
-        """
-        if self.current_block.is_comment:
-            if kind == tokenize.COMMENT:
-                self.current_block.add(string, start, end, line)
-            else:
-                self.new_noncomment(start[0], end[0])
-        else:
-            if kind == tokenize.COMMENT:
-                self.new_comment(string, start, end, line)
-            else:
-                self.current_block.add(string, start, end, line)
-
-    def new_noncomment(self, start_lineno, end_lineno):
-        """ We are transitioning from a noncomment to a comment.
-        """
-        block = NonComment(start_lineno, end_lineno)
-        self.blocks.append(block)
-        self.current_block = block
-
-    def new_comment(self, string, start, end, line):
-        """ Possibly add a new comment.
-        
-        Only adds a new comment if this comment is the only thing on the line.
-        Otherwise, it extends the noncomment block.
-        """
-        prefix = line[:start[1]]
-        if prefix.strip():
-            # Oops! Trailing comment, not a comment block.
-            self.current_block.add(string, start, end, line)
-        else:
-            # A comment block.
-            block = Comment(start[0], end[0], string)
-            self.blocks.append(block)
-            self.current_block = block
-
-    def make_index(self):
-        """ Make the index mapping lines of actual code to their associated
-        prefix comments.
-        """
-        for prev, block in zip(self.blocks[:-1], self.blocks[1:]):
-            if not block.is_comment:
-                self.index[block.start_lineno] = prev
-
-    def search_for_comment(self, lineno, default=None):
-        """ Find the comment block just before the given line number.
-
-        Returns None (or the specified default) if there is no such block.
-        """
-        if not self.index:
-            self.make_index()
-        block = self.index.get(lineno, None)
-        text = getattr(block, 'text', default)
-        return text
-
-
-def strip_comment_marker(text):
-    """ Strip # markers at the front of a block of comment text.
-    """
-    lines = []
-    for line in text.splitlines():
-        lines.append(line.lstrip('#'))
-    text = textwrap.dedent('\n'.join(lines))
-    return text
-
-
-def get_class_traits(klass):
-    """ Yield all of the documentation for trait definitions on a class object.
-    """
-    # FIXME: gracefully handle errors here or in the caller?
-    source = inspect.getsource(klass)
-    cb = CommentBlocker()
-    cb.process_file(StringIO(source))
-    mod_ast = compiler.parse(source)
-    class_ast = mod_ast.node.nodes[0]
-    for node in class_ast.code.nodes:
-        # FIXME: handle other kinds of assignments?
-        if isinstance(node, compiler.ast.Assign):
-            name = node.nodes[0].name
-            rhs = unparse(node.expr).strip()
-            doc = strip_comment_marker(cb.search_for_comment(node.lineno, default=''))
-            yield name, rhs, doc
-

diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/compiler_unparse.py
--- a/doc/extensions/numpydocmod/compiler_unparse.py
+++ /dev/null
@@ -1,860 +0,0 @@
-""" Turn compiler.ast structures back into executable python code.
-
-    The unparse method takes a compiler.ast tree and transforms it back into
-    valid python code.  It is incomplete and currently only works for
-    import statements, function calls, function definitions, assignments, and
-    basic expressions.
-
-    Inspired by python-2.5-svn/Demo/parser/unparse.py
-
-    fixme: We may want to move to using _ast trees because the compiler for
-           them is about 6 times faster than compiler.compile.
-"""
-
-import sys
-import cStringIO
-from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add
-
-def unparse(ast, single_line_functions=False):
-    s = cStringIO.StringIO()
-    UnparseCompilerAst(ast, s, single_line_functions)
-    return s.getvalue().lstrip()
-
-op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2,
-                  'compiler.ast.Add':1, 'compiler.ast.Sub':1 }
-
-class UnparseCompilerAst:
-    """ Methods in this class recursively traverse an AST and
-        output source code for the abstract syntax; original formatting
-        is disregarged.
-    """
-
-    #########################################################################
-    # object interface.
-    #########################################################################
-
-    def __init__(self, tree, file = sys.stdout, single_line_functions=False):
-        """ Unparser(tree, file=sys.stdout) -> None.
-
-            Print the source for tree to file.
-        """
-        self.f = file
-        self._single_func = single_line_functions
-        self._do_indent = True
-        self._indent = 0
-        self._dispatch(tree)
-        self._write("\n")
-        self.f.flush()
-
-    #########################################################################
-    # Unparser private interface.
-    #########################################################################
-
-    ### format, output, and dispatch methods ################################
-
-    def _fill(self, text = ""):
-        "Indent a piece of text, according to the current indentation level"
-        if self._do_indent:
-            self._write("\n"+"    "*self._indent + text)
-        else:
-            self._write(text)
-
-    def _write(self, text):
-        "Append a piece of text to the current line."
-        self.f.write(text)
-
-    def _enter(self):
-        "Print ':', and increase the indentation."
-        self._write(": ")
-        self._indent += 1
-
-    def _leave(self):
-        "Decrease the indentation level."
-        self._indent -= 1
-
-    def _dispatch(self, tree):
-        "_dispatcher function, _dispatching tree type T to method _T."
-        if isinstance(tree, list):
-            for t in tree:
-                self._dispatch(t)
-            return
-        meth = getattr(self, "_"+tree.__class__.__name__)
-        if tree.__class__.__name__ == 'NoneType' and not self._do_indent:
-            return
-        meth(tree)
-
-
-    #########################################################################
-    # compiler.ast unparsing methods.
-    #
-    # There should be one method per concrete grammar type. They are
-    # organized in alphabetical order.
-    #########################################################################
-
-    def _Add(self, t):
-        self.__binary_op(t, '+')
-
-    def _And(self, t):
-        self._write(" (")
-        for i, node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i != len(t.nodes)-1:
-                self._write(") and (")
-        self._write(")")
-               
-    def _AssAttr(self, t):
-        """ Handle assigning an attribute of an object
-        """
-        self._dispatch(t.expr)
-        self._write('.'+t.attrname)
- 
-    def _Assign(self, t):
-        """ Expression Assignment such as "a = 1".
-
-            This only handles assignment in expressions.  Keyword assignment
-            is handled separately.
-        """
-        self._fill()
-        for target in t.nodes:
-            self._dispatch(target)
-            self._write(" = ")
-        self._dispatch(t.expr)
-        if not self._do_indent:
-            self._write('; ')
-
-    def _AssName(self, t):
-        """ Name on left hand side of expression.
-
-            Treat just like a name on the right side of an expression.
-        """
-        self._Name(t)
-
-    def _AssTuple(self, t):
-        """ Tuple on left hand side of an expression.
-        """
-
-        # _write each elements, separated by a comma.
-        for element in t.nodes[:-1]:
-            self._dispatch(element)
-            self._write(", ")
-
-        # Handle the last one without writing comma
-        last_element = t.nodes[-1]
-        self._dispatch(last_element)
-
-    def _AugAssign(self, t):
-        """ +=,-=,*=,/=,**=, etc. operations
-        """
-        
-        self._fill()
-        self._dispatch(t.node)
-        self._write(' '+t.op+' ')
-        self._dispatch(t.expr)
-        if not self._do_indent:
-            self._write(';')
-            
-    def _Bitand(self, t):
-        """ Bit and operation.
-        """
-        
-        for i, node in enumerate(t.nodes):
-            self._write("(")
-            self._dispatch(node)
-            self._write(")")
-            if i != len(t.nodes)-1:
-                self._write(" & ")
-                
-    def _Bitor(self, t):
-        """ Bit or operation
-        """
-        
-        for i, node in enumerate(t.nodes):
-            self._write("(")
-            self._dispatch(node)
-            self._write(")")
-            if i != len(t.nodes)-1:
-                self._write(" | ")
-                
-    def _CallFunc(self, t):
-        """ Function call.
-        """
-        self._dispatch(t.node)
-        self._write("(")
-        comma = False
-        for e in t.args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._dispatch(e)
-        if t.star_args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._write("*")
-            self._dispatch(t.star_args)
-        if t.dstar_args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._write("**")
-            self._dispatch(t.dstar_args)
-        self._write(")")
-
-    def _Compare(self, t):
-        self._dispatch(t.expr)
-        for op, expr in t.ops:
-            self._write(" " + op + " ")
-            self._dispatch(expr)
-
-    def _Const(self, t):
-        """ A constant value such as an integer value, 3, or a string, "hello".
-        """
-        self._dispatch(t.value)
-
-    def _Decorators(self, t):
-        """ Handle function decorators (eg. @has_units)
-        """
-        for node in t.nodes:
-            self._dispatch(node)
-
-    def _Dict(self, t):
-        self._write("{")
-        for  i, (k, v) in enumerate(t.items):
-            self._dispatch(k)
-            self._write(": ")
-            self._dispatch(v)
-            if i < len(t.items)-1:
-                self._write(", ")
-        self._write("}")
-
-    def _Discard(self, t):
-        """ Node for when return value is ignored such as in "foo(a)".
-        """
-        self._fill()
-        self._dispatch(t.expr)
-
-    def _Div(self, t):
-        self.__binary_op(t, '/')
-
-    def _Ellipsis(self, t):
-        self._write("...")
-
-    def _From(self, t):
-        """ Handle "from xyz import foo, bar as baz".
-        """
-        # fixme: Are From and ImportFrom handled differently?
-        self._fill("from ")
-        self._write(t.modname)
-        self._write(" import ")
-        for i, (name,asname) in enumerate(t.names):
-            if i != 0:
-                self._write(", ")
-            self._write(name)
-            if asname is not None:
-                self._write(" as "+asname)
-                
-    def _Function(self, t):
-        """ Handle function definitions
-        """
-        if t.decorators is not None:
-            self._fill("@")
-            self._dispatch(t.decorators)
-        self._fill("def "+t.name + "(")
-        defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults)
-        for i, arg in enumerate(zip(t.argnames, defaults)):
-            self._write(arg[0])
-            if arg[1] is not None:
-                self._write('=')
-                self._dispatch(arg[1])
-            if i < len(t.argnames)-1:
-                self._write(', ')
-        self._write(")")
-        if self._single_func:
-            self._do_indent = False
-        self._enter()
-        self._dispatch(t.code)
-        self._leave()
-        self._do_indent = True
-
-    def _Getattr(self, t):
-        """ Handle getting an attribute of an object
-        """
-        if isinstance(t.expr, (Div, Mul, Sub, Add)):
-            self._write('(')
-            self._dispatch(t.expr)
-            self._write(')')
-        else:
-            self._dispatch(t.expr)
-            
-        self._write('.'+t.attrname)
-        
-    def _If(self, t):
-        self._fill()
-        
-        for i, (compare,code) in enumerate(t.tests):
-            if i == 0:
-                self._write("if ")
-            else:
-                self._write("elif ")
-            self._dispatch(compare)
-            self._enter()
-            self._fill()
-            self._dispatch(code)
-            self._leave()
-            self._write("\n")
-
-        if t.else_ is not None:
-            self._write("else")
-            self._enter()
-            self._fill()
-            self._dispatch(t.else_)
-            self._leave()
-            self._write("\n")
-            
-    def _IfExp(self, t):
-        self._dispatch(t.then)
-        self._write(" if ")
-        self._dispatch(t.test)
-
-        if t.else_ is not None:
-            self._write(" else (")
-            self._dispatch(t.else_)
-            self._write(")")
-
-    def _Import(self, t):
-        """ Handle "import xyz.foo".
-        """
-        self._fill("import ")
-        
-        for i, (name,asname) in enumerate(t.names):
-            if i != 0:
-                self._write(", ")
-            self._write(name)
-            if asname is not None:
-                self._write(" as "+asname)
-
-    def _Keyword(self, t):
-        """ Keyword value assignment within function calls and definitions.
-        """
-        self._write(t.name)
-        self._write("=")
-        self._dispatch(t.expr)
-        
-    def _List(self, t):
-        self._write("[")
-        for  i,node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i < len(t.nodes)-1:
-                self._write(", ")
-        self._write("]")
-
-    def _Module(self, t):
-        if t.doc is not None:
-            self._dispatch(t.doc)
-        self._dispatch(t.node)
-
-    def _Mul(self, t):
-        self.__binary_op(t, '*')
-
-    def _Name(self, t):
-        self._write(t.name)
-
-    def _NoneType(self, t):
-        self._write("None")
-        
-    def _Not(self, t):
-        self._write('not (')
-        self._dispatch(t.expr)
-        self._write(')')
-        
-    def _Or(self, t):
-        self._write(" (")
-        for i, node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i != len(t.nodes)-1:
-                self._write(") or (")
-        self._write(")")
-                
-    def _Pass(self, t):
-        self._write("pass\n")
-
-    def _Printnl(self, t):
-        self._fill("print ")
-        if t.dest:
-            self._write(">> ")
-            self._dispatch(t.dest)
-            self._write(", ")
-        comma = False
-        for node in t.nodes:
-            if comma: self._write(', ')
-            else: comma = True
-            self._dispatch(node)
-
-    def _Power(self, t):
-        self.__binary_op(t, '**')
-
-    def _Return(self, t):
-        self._fill("return ")
-        if t.value:
-            if isinstance(t.value, Tuple):
-                text = ', '.join([ name.name for name in t.value.asList() ])
-                self._write(text)
-            else:
-                self._dispatch(t.value)
-            if not self._do_indent:
-                self._write('; ')
-
-    def _Slice(self, t):
-        self._dispatch(t.expr)
-        self._write("[")
-        if t.lower:
-            self._dispatch(t.lower)
-        self._write(":")
-        if t.upper:
-            self._dispatch(t.upper)
-        #if t.step:
-        #    self._write(":")
-        #    self._dispatch(t.step)
-        self._write("]")
-
-    def _Sliceobj(self, t):
-        for i, node in enumerate(t.nodes):
-            if i != 0:
-                self._write(":")
-            if not (isinstance(node, Const) and node.value is None):
-                self._dispatch(node)
-
-    def _Stmt(self, tree):
-        for node in tree.nodes:
-            self._dispatch(node)
-
-    def _Sub(self, t):
-        self.__binary_op(t, '-')
-
-    def _Subscript(self, t):
-        self._dispatch(t.expr)
-        self._write("[")
-        for i, value in enumerate(t.subs):
-            if i != 0:
-                self._write(",")
-            self._dispatch(value)
-        self._write("]")
-
-    def _TryExcept(self, t):
-        self._fill("try")
-        self._enter()
-        self._dispatch(t.body)
-        self._leave()
-
-        for handler in t.handlers:
-            self._fill('except ')
-            self._dispatch(handler[0])
-            if handler[1] is not None:
-                self._write(', ')
-                self._dispatch(handler[1])
-            self._enter()
-            self._dispatch(handler[2])
-            self._leave()
-            
-        if t.else_:
-            self._fill("else")
-            self._enter()
-            self._dispatch(t.else_)
-            self._leave()
-
-    def _Tuple(self, t):
-
-        if not t.nodes:
-            # Empty tuple.
-            self._write("()")
-        else:
-            self._write("(")
-
-            # _write each elements, separated by a comma.
-            for element in t.nodes[:-1]:
-                self._dispatch(element)
-                self._write(", ")
-
-            # Handle the last one without writing comma
-            last_element = t.nodes[-1]
-            self._dispatch(last_element)
-
-            self._write(")")
-            
-    def _UnaryAdd(self, t):
-        self._write("+")
-        self._dispatch(t.expr)
-        
-    def _UnarySub(self, t):
-        self._write("-")
-        self._dispatch(t.expr)        
-
-    def _With(self, t):
-        self._fill('with ')
-        self._dispatch(t.expr)
-        if t.vars:
-            self._write(' as ')
-            self._dispatch(t.vars.name)
-        self._enter()
-        self._dispatch(t.body)
-        self._leave()
-        self._write('\n')
-        
-    def _int(self, t):
-        self._write(repr(t))
-
-    def __binary_op(self, t, symbol):
-        # Check if parenthesis are needed on left side and then dispatch
-        has_paren = False
-        left_class = str(t.left.__class__)
-        if (left_class in op_precedence.keys() and
-            op_precedence[left_class] < op_precedence[str(t.__class__)]):
-            has_paren = True
-        if has_paren:
-            self._write('(')
-        self._dispatch(t.left)
-        if has_paren:
-            self._write(')')
-        # Write the appropriate symbol for operator
-        self._write(symbol)
-        # Check if parenthesis are needed on the right side and then dispatch
-        has_paren = False
-        right_class = str(t.right.__class__)
-        if (right_class in op_precedence.keys() and
-            op_precedence[right_class] < op_precedence[str(t.__class__)]):
-            has_paren = True
-        if has_paren:
-            self._write('(')
-        self._dispatch(t.right)
-        if has_paren:
-            self._write(')')
-
-    def _float(self, t):
-        # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001'
-        # We prefer str here.
-        self._write(str(t))
-
-    def _str(self, t):
-        self._write(repr(t))
-        
-    def _tuple(self, t):
-        self._write(str(t))
-
-    #########################################################################
-    # These are the methods from the _ast modules unparse.
-    #
-    # As our needs to handle more advanced code increase, we may want to
-    # modify some of the methods below so that they work for compiler.ast.
-    #########################################################################
-
-#    # stmt
-#    def _Expr(self, tree):
-#        self._fill()
-#        self._dispatch(tree.value)
-#
-#    def _Import(self, t):
-#        self._fill("import ")
-#        first = True
-#        for a in t.names:
-#            if first:
-#                first = False
-#            else:
-#                self._write(", ")
-#            self._write(a.name)
-#            if a.asname:
-#                self._write(" as "+a.asname)
-#
-##    def _ImportFrom(self, t):
-##        self._fill("from ")
-##        self._write(t.module)
-##        self._write(" import ")
-##        for i, a in enumerate(t.names):
-##            if i == 0:
-##                self._write(", ")
-##            self._write(a.name)
-##            if a.asname:
-##                self._write(" as "+a.asname)
-##        # XXX(jpe) what is level for?
-##
-#
-#    def _Break(self, t):
-#        self._fill("break")
-#
-#    def _Continue(self, t):
-#        self._fill("continue")
-#
-#    def _Delete(self, t):
-#        self._fill("del ")
-#        self._dispatch(t.targets)
-#
-#    def _Assert(self, t):
-#        self._fill("assert ")
-#        self._dispatch(t.test)
-#        if t.msg:
-#            self._write(", ")
-#            self._dispatch(t.msg)
-#
-#    def _Exec(self, t):
-#        self._fill("exec ")
-#        self._dispatch(t.body)
-#        if t.globals:
-#            self._write(" in ")
-#            self._dispatch(t.globals)
-#        if t.locals:
-#            self._write(", ")
-#            self._dispatch(t.locals)
-#
-#    def _Print(self, t):
-#        self._fill("print ")
-#        do_comma = False
-#        if t.dest:
-#            self._write(">>")
-#            self._dispatch(t.dest)
-#            do_comma = True
-#        for e in t.values:
-#            if do_comma:self._write(", ")
-#            else:do_comma=True
-#            self._dispatch(e)
-#        if not t.nl:
-#            self._write(",")
-#
-#    def _Global(self, t):
-#        self._fill("global")
-#        for i, n in enumerate(t.names):
-#            if i != 0:
-#                self._write(",")
-#            self._write(" " + n)
-#
-#    def _Yield(self, t):
-#        self._fill("yield")
-#        if t.value:
-#            self._write(" (")
-#            self._dispatch(t.value)
-#            self._write(")")
-#
-#    def _Raise(self, t):
-#        self._fill('raise ')
-#        if t.type:
-#            self._dispatch(t.type)
-#        if t.inst:
-#            self._write(", ")
-#            self._dispatch(t.inst)
-#        if t.tback:
-#            self._write(", ")
-#            self._dispatch(t.tback)
-#
-#
-#    def _TryFinally(self, t):
-#        self._fill("try")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#        self._fill("finally")
-#        self._enter()
-#        self._dispatch(t.finalbody)
-#        self._leave()
-#
-#    def _excepthandler(self, t):
-#        self._fill("except ")
-#        if t.type:
-#            self._dispatch(t.type)
-#        if t.name:
-#            self._write(", ")
-#            self._dispatch(t.name)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _ClassDef(self, t):
-#        self._write("\n")
-#        self._fill("class "+t.name)
-#        if t.bases:
-#            self._write("(")
-#            for a in t.bases:
-#                self._dispatch(a)
-#                self._write(", ")
-#            self._write(")")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _FunctionDef(self, t):
-#        self._write("\n")
-#        for deco in t.decorators:
-#            self._fill("@")
-#            self._dispatch(deco)
-#        self._fill("def "+t.name + "(")
-#        self._dispatch(t.args)
-#        self._write(")")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _For(self, t):
-#        self._fill("for ")
-#        self._dispatch(t.target)
-#        self._write(" in ")
-#        self._dispatch(t.iter)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#        if t.orelse:
-#            self._fill("else")
-#            self._enter()
-#            self._dispatch(t.orelse)
-#            self._leave
-#
-#    def _While(self, t):
-#        self._fill("while ")
-#        self._dispatch(t.test)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#        if t.orelse:
-#            self._fill("else")
-#            self._enter()
-#            self._dispatch(t.orelse)
-#            self._leave
-#
-#    # expr
-#    def _Str(self, tree):
-#        self._write(repr(tree.s))
-##
-#    def _Repr(self, t):
-#        self._write("`")
-#        self._dispatch(t.value)
-#        self._write("`")
-#
-#    def _Num(self, t):
-#        self._write(repr(t.n))
-#
-#    def _ListComp(self, t):
-#        self._write("[")
-#        self._dispatch(t.elt)
-#        for gen in t.generators:
-#            self._dispatch(gen)
-#        self._write("]")
-#
-#    def _GeneratorExp(self, t):
-#        self._write("(")
-#        self._dispatch(t.elt)
-#        for gen in t.generators:
-#            self._dispatch(gen)
-#        self._write(")")
-#
-#    def _comprehension(self, t):
-#        self._write(" for ")
-#        self._dispatch(t.target)
-#        self._write(" in ")
-#        self._dispatch(t.iter)
-#        for if_clause in t.ifs:
-#            self._write(" if ")
-#            self._dispatch(if_clause)
-#
-#    def _IfExp(self, t):
-#        self._dispatch(t.body)
-#        self._write(" if ")
-#        self._dispatch(t.test)
-#        if t.orelse:
-#            self._write(" else ")
-#            self._dispatch(t.orelse)
-#
-#    unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
-#    def _UnaryOp(self, t):
-#        self._write(self.unop[t.op.__class__.__name__])
-#        self._write("(")
-#        self._dispatch(t.operand)
-#        self._write(")")
-#
-#    binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
-#                    "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
-#                    "FloorDiv":"//", "Pow": "**"}
-#    def _BinOp(self, t):
-#        self._write("(")
-#        self._dispatch(t.left)
-#        self._write(")" + self.binop[t.op.__class__.__name__] + "(")
-#        self._dispatch(t.right)
-#        self._write(")")
-#
-#    boolops = {_ast.And: 'and', _ast.Or: 'or'}
-#    def _BoolOp(self, t):
-#        self._write("(")
-#        self._dispatch(t.values[0])
-#        for v in t.values[1:]:
-#            self._write(" %s " % self.boolops[t.op.__class__])
-#            self._dispatch(v)
-#        self._write(")")
-#
-#    def _Attribute(self,t):
-#        self._dispatch(t.value)
-#        self._write(".")
-#        self._write(t.attr)
-#
-##    def _Call(self, t):
-##        self._dispatch(t.func)
-##        self._write("(")
-##        comma = False
-##        for e in t.args:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._dispatch(e)
-##        for e in t.keywords:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._dispatch(e)
-##        if t.starargs:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._write("*")
-##            self._dispatch(t.starargs)
-##        if t.kwargs:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._write("**")
-##            self._dispatch(t.kwargs)
-##        self._write(")")
-#
-#    # slice
-#    def _Index(self, t):
-#        self._dispatch(t.value)
-#
-#    def _ExtSlice(self, t):
-#        for i, d in enumerate(t.dims):
-#            if i != 0:
-#                self._write(': ')
-#            self._dispatch(d)
-#
-#    # others
-#    def _arguments(self, t):
-#        first = True
-#        nonDef = len(t.args)-len(t.defaults)
-#        for a in t.args[0:nonDef]:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._dispatch(a)
-#        for a,d in zip(t.args[nonDef:], t.defaults):
-#            if first:first = False
-#            else: self._write(", ")
-#            self._dispatch(a),
-#            self._write("=")
-#            self._dispatch(d)
-#        if t.vararg:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._write("*"+t.vararg)
-#        if t.kwarg:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._write("**"+t.kwarg)
-#
-##    def _keyword(self, t):
-##        self._write(t.arg)
-##        self._write("=")
-##        self._dispatch(t.value)
-#
-#    def _Lambda(self, t):
-#        self._write("lambda ")
-#        self._dispatch(t.args)
-#        self._write(": ")
-#        self._dispatch(t.body)
-
-
-

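[Editor's note, not part of the changeset: the `compiler.ast`-based unparser deleted above has no direct equivalent on Python 3, where the `compiler` package itself was removed. As a rough point of comparison only, the stdlib `ast` module can regenerate source from a parse tree on Python 3.9+:]

```python
# Sketch only: the ast-based successor to the removed compiler.ast
# unparser. Requires Python 3.9+ for ast.unparse.
import ast

source = "def f(a, b=1):\n    return a + b\n"
tree = ast.parse(source)          # build the AST from source text
regenerated = ast.unparse(tree)   # turn the AST back into source
print(regenerated)                # prints the function definition
```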
diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/docscrape.py
--- a/doc/extensions/numpydocmod/docscrape.py
+++ /dev/null
@@ -1,500 +0,0 @@
-"""Extract reference documentation from the NumPy source tree.
-
-"""
-
-import inspect
-import textwrap
-import re
-import pydoc
-from StringIO import StringIO
-from warnings import warn
-
-class Reader(object):
-    """A line-based string reader.
-
-    """
-    def __init__(self, data):
-        """
-        Parameters
-        ----------
-        data : str
-           String with lines separated by '\n'.
-
-        """
-        if isinstance(data,list):
-            self._str = data
-        else:
-            self._str = data.split('\n') # store string as list of lines
-
-        self.reset()
-
-    def __getitem__(self, n):
-        return self._str[n]
-
-    def reset(self):
-        self._l = 0 # current line nr
-
-    def read(self):
-        if not self.eof():
-            out = self[self._l]
-            self._l += 1
-            return out
-        else:
-            return ''
-
-    def seek_next_non_empty_line(self):
-        for l in self[self._l:]:
-            if l.strip():
-                break
-            else:
-                self._l += 1
-
-    def eof(self):
-        return self._l >= len(self._str)
-
-    def read_to_condition(self, condition_func):
-        start = self._l
-        for line in self[start:]:
-            if condition_func(line):
-                return self[start:self._l]
-            self._l += 1
-            if self.eof():
-                return self[start:self._l+1]
-        return []
-
-    def read_to_next_empty_line(self):
-        self.seek_next_non_empty_line()
-        def is_empty(line):
-            return not line.strip()
-        return self.read_to_condition(is_empty)
-
-    def read_to_next_unindented_line(self):
-        def is_unindented(line):
-            return (line.strip() and (len(line.lstrip()) == len(line)))
-        return self.read_to_condition(is_unindented)
-
-    def peek(self,n=0):
-        if self._l + n < len(self._str):
-            return self[self._l + n]
-        else:
-            return ''
-
-    def is_empty(self):
-        return not ''.join(self._str).strip()
-
-
-class NumpyDocString(object):
-    def __init__(self, docstring, config={}):
-        docstring = textwrap.dedent(docstring).split('\n')
-
-        self._doc = Reader(docstring)
-        self._parsed_data = {
-            'Signature': '',
-            'Summary': [''],
-            'Extended Summary': [],
-            'Parameters': [],
-            'Returns': [],
-            'Raises': [],
-            'Warns': [],
-            'Other Parameters': [],
-            'Attributes': [],
-            'Methods': [],
-            'See Also': [],
-            'Notes': [],
-            'Warnings': [],
-            'References': '',
-            'Examples': '',
-            'index': {}
-            }
-
-        self._parse()
-
-    def __getitem__(self,key):
-        return self._parsed_data[key]
-
-    def __setitem__(self,key,val):
-        if not self._parsed_data.has_key(key):
-            warn("Unknown section %s" % key)
-        else:
-            self._parsed_data[key] = val
-
-    def _is_at_section(self):
-        self._doc.seek_next_non_empty_line()
-
-        if self._doc.eof():
-            return False
-
-        l1 = self._doc.peek().strip()  # e.g. Parameters
-
-        if l1.startswith('.. index::'):
-            return True
-
-        l2 = self._doc.peek(1).strip() #    ---------- or ==========
-        return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1))
-
-    def _strip(self,doc):
-        i = 0
-        j = 0
-        for i,line in enumerate(doc):
-            if line.strip(): break
-
-        for j,line in enumerate(doc[::-1]):
-            if line.strip(): break
-
-        return doc[i:len(doc)-j]
-
-    def _read_to_next_section(self):
-        section = self._doc.read_to_next_empty_line()
-
-        while not self._is_at_section() and not self._doc.eof():
-            if not self._doc.peek(-1).strip(): # previous line was empty
-                section += ['']
-
-            section += self._doc.read_to_next_empty_line()
-
-        return section
-
-    def _read_sections(self):
-        while not self._doc.eof():
-            data = self._read_to_next_section()
-            name = data[0].strip()
-
-            if name.startswith('..'): # index section
-                yield name, data[1:]
-            elif len(data) < 2:
-                yield StopIteration
-            else:
-                yield name, self._strip(data[2:])
-
-    def _parse_param_list(self,content):
-        r = Reader(content)
-        params = []
-        while not r.eof():
-            header = r.read().strip()
-            if ' : ' in header:
-                arg_name, arg_type = header.split(' : ')[:2]
-            else:
-                arg_name, arg_type = header, ''
-
-            desc = r.read_to_next_unindented_line()
-            desc = dedent_lines(desc)
-
-            params.append((arg_name,arg_type,desc))
-
-        return params
-
-
-    _name_rgx = re.compile(r"^\s*(:(?P<role>\w+):`(?P<name>[a-zA-Z0-9_.-]+)`|"
-                           r" (?P<name2>[a-zA-Z0-9_.-]+))\s*", re.X)
-    def _parse_see_also(self, content):
-        """
-        func_name : Descriptive text
-            continued text
-        another_func_name : Descriptive text
-        func_name1, func_name2, :meth:`func_name`, func_name3
-
-        """
-        items = []
-
-        def parse_item_name(text):
-            """Match ':role:`name`' or 'name'"""
-            m = self._name_rgx.match(text)
-            if m:
-                g = m.groups()
-                if g[1] is None:
-                    return g[3], None
-                else:
-                    return g[2], g[1]
-            raise ValueError("%s is not a item name" % text)
-
-        def push_item(name, rest):
-            if not name:
-                return
-            name, role = parse_item_name(name)
-            items.append((name, list(rest), role))
-            del rest[:]
-
-        current_func = None
-        rest = []
-
-        for line in content:
-            if not line.strip(): continue
-
-            m = self._name_rgx.match(line)
-            if m and line[m.end():].strip().startswith(':'):
-                push_item(current_func, rest)
-                current_func, line = line[:m.end()], line[m.end():]
-                rest = [line.split(':', 1)[1].strip()]
-                if not rest[0]:
-                    rest = []
-            elif not line.startswith(' '):
-                push_item(current_func, rest)
-                current_func = None
-                if ',' in line:
-                    for func in line.split(','):
-                        if func.strip():
-                            push_item(func, [])
-                elif line.strip():
-                    current_func = line
-            elif current_func is not None:
-                rest.append(line.strip())
-        push_item(current_func, rest)
-        return items
-
-    def _parse_index(self, section, content):
-        """
-        .. index: default
-           :refguide: something, else, and more
-
-        """
-        def strip_each_in(lst):
-            return [s.strip() for s in lst]
-
-        out = {}
-        section = section.split('::')
-        if len(section) > 1:
-            out['default'] = strip_each_in(section[1].split(','))[0]
-        for line in content:
-            line = line.split(':')
-            if len(line) > 2:
-                out[line[1]] = strip_each_in(line[2].split(','))
-        return out
-
-    def _parse_summary(self):
-        """Grab signature (if given) and summary"""
-        if self._is_at_section():
-            return
-
-        summary = self._doc.read_to_next_empty_line()
-        summary_str = " ".join([s.strip() for s in summary]).strip()
-        if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str):
-            self['Signature'] = summary_str
-            if not self._is_at_section():
-                self['Summary'] = self._doc.read_to_next_empty_line()
-        else:
-            self['Summary'] = summary
-
-        if not self._is_at_section():
-            self['Extended Summary'] = self._read_to_next_section()
-
-    def _parse(self):
-        self._doc.reset()
-        self._parse_summary()
-
-        for (section,content) in self._read_sections():
-            if not section.startswith('..'):
-                section = ' '.join([s.capitalize() for s in section.split(' ')])
-            if section in ('Parameters', 'Returns', 'Raises', 'Warns',
-                           'Other Parameters', 'Attributes', 'Methods'):
-                self[section] = self._parse_param_list(content)
-            elif section.startswith('.. index::'):
-                self['index'] = self._parse_index(section, content)
-            elif section == 'See Also':
-                self['See Also'] = self._parse_see_also(content)
-            else:
-                self[section] = content
-
-    # string conversion routines
-
-    def _str_header(self, name, symbol='-'):
-        return [name, len(name)*symbol]
-
-    def _str_indent(self, doc, indent=4):
-        out = []
-        for line in doc:
-            out += [' '*indent + line]
-        return out
-
-    def _str_signature(self):
-        if self['Signature']:
-            return [self['Signature'].replace('*','\*')] + ['']
-        else:
-            return ['']
-
-    def _str_summary(self):
-        if self['Summary']:
-            return self['Summary'] + ['']
-        else:
-            return []
-
-    def _str_extended_summary(self):
-        if self['Extended Summary']:
-            return self['Extended Summary'] + ['']
-        else:
-            return []
-
-    def _str_param_list(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            for param,param_type,desc in self[name]:
-                out += ['%s : %s' % (param, param_type)]
-                out += self._str_indent(desc)
-            out += ['']
-        return out
-
-    def _str_section(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            out += self[name]
-            out += ['']
-        return out
-
-    def _str_see_also(self, func_role):
-        if not self['See Also']: return []
-        out = []
-        out += self._str_header("See Also")
-        last_had_desc = True
-        for func, desc, role in self['See Also']:
-            if role:
-                link = ':%s:`%s`' % (role, func)
-            elif func_role:
-                link = ':%s:`%s`' % (func_role, func)
-            else:
-                link = "`%s`_" % func
-            if desc or last_had_desc:
-                out += ['']
-                out += [link]
-            else:
-                out[-1] += ", %s" % link
-            if desc:
-                out += self._str_indent([' '.join(desc)])
-                last_had_desc = True
-            else:
-                last_had_desc = False
-        out += ['']
-        return out
-
-    def _str_index(self):
-        idx = self['index']
-        out = []
-        out += ['.. index:: %s' % idx.get('default','')]
-        for section, references in idx.iteritems():
-            if section == 'default':
-                continue
-            out += ['   :%s: %s' % (section, ', '.join(references))]
-        return out
-
-    def __str__(self, func_role=''):
-        out = []
-        out += self._str_signature()
-        out += self._str_summary()
-        out += self._str_extended_summary()
-        for param_list in ('Parameters', 'Returns', 'Other Parameters',
-                           'Raises', 'Warns'):
-            out += self._str_param_list(param_list)
-        out += self._str_section('Warnings')
-        out += self._str_see_also(func_role)
-        for s in ('Notes','References','Examples'):
-            out += self._str_section(s)
-        for param_list in ('Attributes', 'Methods'):
-            out += self._str_param_list(param_list)
-        out += self._str_index()
-        return '\n'.join(out)
-
-
-def indent(str,indent=4):
-    indent_str = ' '*indent
-    if str is None:
-        return indent_str
-    lines = str.split('\n')
-    return '\n'.join(indent_str + l for l in lines)
-
-def dedent_lines(lines):
-    """Deindent a list of lines maximally"""
-    return textwrap.dedent("\n".join(lines)).split("\n")
-
-def header(text, style='-'):
-    return text + '\n' + style*len(text) + '\n'
-
-
-class FunctionDoc(NumpyDocString):
-    def __init__(self, func, role='func', doc=None, config={}):
-        self._f = func
-        self._role = role # e.g. "func" or "meth"
-
-        if doc is None:
-            if func is None:
-                raise ValueError("No function or docstring given")
-            doc = inspect.getdoc(func) or ''
-        NumpyDocString.__init__(self, doc)
-
-        if not self['Signature'] and func is not None:
-            func, func_name = self.get_func()
-            try:
-                # try to read signature
-                argspec = inspect.getargspec(func)
-                argspec = inspect.formatargspec(*argspec)
-                argspec = argspec.replace('*','\*')
-                signature = '%s%s' % (func_name, argspec)
-            except TypeError, e:
-                signature = '%s()' % func_name
-            self['Signature'] = signature
-
-    def get_func(self):
-        func_name = getattr(self._f, '__name__', self.__class__.__name__)
-        if inspect.isclass(self._f):
-            func = getattr(self._f, '__call__', self._f.__init__)
-        else:
-            func = self._f
-        return func, func_name
-
-    def __str__(self):
-        out = ''
-
-        func, func_name = self.get_func()
-        signature = self['Signature'].replace('*', '\*')
-
-        roles = {'func': 'function',
-                 'meth': 'method'}
-
-        if self._role:
-            if not roles.has_key(self._role):
-                print("Warning: invalid role %s" % self._role)
-            out += '.. %s:: %s\n    \n\n' % (roles.get(self._role,''),
-                                             func_name)
-
-        out += super(FunctionDoc, self).__str__(func_role=self._role)
-        return out
-
-
-class ClassDoc(NumpyDocString):
-    def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
-                 config={}):
-        if not inspect.isclass(cls) and cls is not None:
-            raise ValueError("Expected a class or None, but got %r" % cls)
-        self._cls = cls
-
-        if modulename and not modulename.endswith('.'):
-            modulename += '.'
-        self._mod = modulename
-
-        if doc is None:
-            if cls is None:
-                raise ValueError("No class or documentation string given")
-            doc = pydoc.getdoc(cls)
-
-        NumpyDocString.__init__(self, doc)
-
-        if config.get('show_class_members', True):
-            if not self['Methods']:
-                self['Methods'] = [(name, '', '')
-                                   for name in sorted(self.methods)]
-            if not self['Attributes']:
-                self['Attributes'] = [(name, '', '')
-                                      for name in sorted(self.properties)]
-
-    @property
-    def methods(self):
-        if self._cls is None:
-            return []
-        return [name for name,func in inspect.getmembers(self._cls)
-                if not name.startswith('_') and callable(func)]
-
-    @property
-    def properties(self):
-        if self._cls is None:
-            return []
-        return [name for name,func in inspect.getmembers(self._cls)
-                if not name.startswith('_') and func is None]

diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/docscrape_sphinx.py
--- a/doc/extensions/numpydocmod/docscrape_sphinx.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import re, inspect, textwrap, pydoc
-import sphinx
-from docscrape import NumpyDocString, FunctionDoc, ClassDoc
-
-class SphinxDocString(NumpyDocString):
-    def __init__(self, docstring, config={}):
-        self.use_plots = config.get('use_plots', False)
-        NumpyDocString.__init__(self, docstring, config=config)
-
-    # string conversion routines
-    def _str_header(self, name, symbol='`'):
-        return ['.. rubric:: ' + name, '']
-
-    def _str_field_list(self, name):
-        return [':' + name + ':']
-
-    def _str_indent(self, doc, indent=4):
-        out = []
-        for line in doc:
-            out += [' '*indent + line]
-        return out
-
-    def _str_signature(self):
-        return ['']
-        if self['Signature']:
-            return ['``%s``' % self['Signature']] + ['']
-        else:
-            return ['']
-
-    def _str_summary(self):
-        return self['Summary'] + ['']
-
-    def _str_extended_summary(self):
-        return self['Extended Summary'] + ['']
-
-    def _str_param_list(self, name):
-        out = []
-        if self[name]:
-            out += self._str_field_list(name)
-            out += ['']
-            for param,param_type,desc in self[name]:
-                out += self._str_indent(['**%s** : %s' % (param.strip(),
-                                                          param_type)])
-                out += ['']
-                out += self._str_indent(desc,8)
-                out += ['']
-        return out
-
-    @property
-    def _obj(self):
-        if hasattr(self, '_cls'):
-            return self._cls
-        elif hasattr(self, '_f'):
-            return self._f
-        return None
-
-    def _str_member_list(self, name):
-        """
-        Generate a member listing, autosummary:: table where possible,
-        and a table where not.
-
-        """
-        out = []
-        if self[name]:
-            out += ['.. rubric:: %s' % name, '']
-            prefix = getattr(self, '_name', '')
-
-            if prefix:
-                prefix = '~%s.' % prefix
-
-            autosum = []
-            others = []
-            for param, param_type, desc in self[name]:
-                param = param.strip()
-                if not self._obj or hasattr(self._obj, param):
-                    autosum += ["   %s%s" % (prefix, param)]
-                else:
-                    others.append((param, param_type, desc))
-
-            if 0:#autosum:
-                out += ['.. autosummary::', '   :toctree:', '']
-                out += autosum
-                
-            if others:
-                maxlen_0 = max([len(x[0]) for x in others])
-                maxlen_1 = max([len(x[1]) for x in others])
-                hdr = "="*maxlen_0 + "  " + "="*maxlen_1 + "  " + "="*10
-                fmt = '%%%ds  %%%ds  ' % (maxlen_0, maxlen_1)
-                n_indent = maxlen_0 + maxlen_1 + 4
-                out += [hdr]
-                for param, param_type, desc in others:
-                    out += [fmt % (param.strip(), param_type)]
-                    out += self._str_indent(desc, n_indent)
-                out += [hdr]
-            out += ['']
-        return out
-
-    def _str_section(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            out += ['']
-            content = textwrap.dedent("\n".join(self[name])).split("\n")
-            out += content
-            out += ['']
-        return out
-
-    def _str_see_also(self, func_role):
-        out = []
-        if self['See Also']:
-            see_also = super(SphinxDocString, self)._str_see_also(func_role)
-            out = ['.. seealso::', '']
-            out += self._str_indent(see_also[2:])
-        return out
-
-    def _str_warnings(self):
-        out = []
-        if self['Warnings']:
-            out = ['.. warning::', '']
-            out += self._str_indent(self['Warnings'])
-        return out
-
-    def _str_index(self):
-        idx = self['index']
-        out = []
-        if len(idx) == 0:
-            return out
-
-        out += ['.. index:: %s' % idx.get('default','')]
-        for section, references in idx.iteritems():
-            if section == 'default':
-                continue
-            elif section == 'refguide':
-                out += ['   single: %s' % (', '.join(references))]
-            else:
-                out += ['   %s: %s' % (section, ','.join(references))]
-        return out
-
-    def _str_references(self):
-        out = []
-        if self['References']:
-            out += self._str_header('References')
-            if isinstance(self['References'], str):
-                self['References'] = [self['References']]
-            out.extend(self['References'])
-            out += ['']
-            # Latex collects all references to a separate bibliography,
-            # so we need to insert links to it
-            if sphinx.__version__ >= "0.6":
-                out += ['.. only:: latex','']
-            else:
-                out += ['.. latexonly::','']
-            items = []
-            for line in self['References']:
-                m = re.match(r'.. \[([a-z0-9._-]+)\]', line, re.I)
-                if m:
-                    items.append(m.group(1))
-            out += ['   ' + ", ".join(["[%s]_" % item for item in items]), '']
-        return out
-
-    def _str_examples(self):
-        examples_str = "\n".join(self['Examples'])
-
-        if (self.use_plots and 'import matplotlib' in examples_str
-                and 'plot::' not in examples_str):
-            out = []
-            out += self._str_header('Examples')
-            out += ['.. plot::', '']
-            out += self._str_indent(self['Examples'])
-            out += ['']
-            return out
-        else:
-            return self._str_section('Examples')
-
-    def __str__(self, indent=0, func_role="obj"):
-        out = []
-        out += self._str_signature()
-        out += self._str_index() + ['']
-        out += self._str_summary()
-        out += self._str_extended_summary()
-        for param_list in ('Parameters', 'Returns', 'Other Parameters',
-                           'Raises', 'Warns'):
-            out += self._str_param_list(param_list)
-        out += self._str_warnings()
-        out += self._str_see_also(func_role)
-        out += self._str_section('Notes')
-        out += self._str_references()
-        out += self._str_examples()
-        for param_list in ('Attributes', 'Methods'):
-            out += self._str_member_list(param_list)
-        out = self._str_indent(out,indent)
-        return '\n'.join(out)
-
-class SphinxFunctionDoc(SphinxDocString, FunctionDoc):
-    def __init__(self, obj, doc=None, config={}):
-        self.use_plots = config.get('use_plots', False)
-        FunctionDoc.__init__(self, obj, doc=doc, config=config)
-
-class SphinxClassDoc(SphinxDocString, ClassDoc):
-    def __init__(self, obj, doc=None, func_doc=None, config={}):
-        self.use_plots = config.get('use_plots', False)
-        ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config)
-
-class SphinxObjDoc(SphinxDocString):
-    def __init__(self, obj, doc=None, config={}):
-        self._f = obj
-        SphinxDocString.__init__(self, doc, config=config)
-
-def get_doc_object(obj, what=None, doc=None, config={}):
-    if what is None:
-        if inspect.isclass(obj):
-            what = 'class'
-        elif inspect.ismodule(obj):
-            what = 'module'
-        elif callable(obj):
-            what = 'function'
-        else:
-            what = 'object'
-    if what == 'class':
-        return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc,
-                              config=config)
-    elif what in ('function', 'method'):
-        return SphinxFunctionDoc(obj, doc=doc, config=config)
-    else:
-        if doc is None:
-            doc = pydoc.getdoc(obj)
-        return SphinxObjDoc(obj, doc, config=config)

diff -r 3c3177e83689bcf72bff9295269c1c5aa01b7d4b -r 01292c5cd41759430a927fd902435d95d337e33c doc/extensions/numpydocmod/numpydoc.py
--- a/doc/extensions/numpydocmod/numpydoc.py
+++ /dev/null
@@ -1,173 +0,0 @@
-"""
-========
-numpydoc
-========
-
-Sphinx extension that handles docstrings in the Numpy standard format. [1]
-
-It will:
-
-- Convert Parameters etc. sections to field lists.
-- Convert See Also section to a See also entry.
-- Renumber references.
-- Extract the signature from the docstring, if it can't be determined otherwise.
-
-.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard
-
-"""
-
-import os, re, pydoc
-from docscrape_sphinx import get_doc_object, SphinxDocString
-from sphinx.util.compat import Directive
-import inspect
-
-def mangle_docstrings(app, what, name, obj, options, lines,
-                      reference_offset=[0]):
-
-    cfg = dict(use_plots=app.config.numpydoc_use_plots,
-               show_class_members=app.config.numpydoc_show_class_members)
-
-    if what == 'module':
-        # Strip top title
-        title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*',
-                              re.I|re.S)
-        lines[:] = title_re.sub(u'', u"\n".join(lines)).split(u"\n")
-    else:
-        doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg)
-        lines[:] = unicode(doc).split(u"\n")
-
-    if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \
-           obj.__name__:
-        if hasattr(obj, '__module__'):
-            v = dict(full_name=u"%s.%s" % (obj.__module__, obj.__name__))
-        else:
-            v = dict(full_name=obj.__name__)
-        lines += [u'', u'.. htmlonly::', '']
-        lines += [u'    %s' % x for x in
-                  (app.config.numpydoc_edit_link % v).split("\n")]
-
-    # replace reference numbers so that there are no duplicates
-    references = []
-    for line in lines:
-        line = line.strip()
-        m = re.match(ur'^.. \[([a-z0-9_.-])\]', line, re.I)
-        if m:
-            references.append(m.group(1))
-
-    # start renaming from the longest string, to avoid overwriting parts
-    references.sort(key=lambda x: -len(x))
-    if references:
-        for i, line in enumerate(lines):
-            for r in references:
-                if re.match(ur'^\d+$', r):
-                    new_r = u"R%d" % (reference_offset[0] + int(r))
-                else:
-                    new_r = u"%s%d" % (r, reference_offset[0])
-                lines[i] = lines[i].replace(u'[%s]_' % r,
-                                            u'[%s]_' % new_r)
-                lines[i] = lines[i].replace(u'.. [%s]' % r,
-                                            u'.. [%s]' % new_r)
-
-    reference_offset[0] += len(references)
-
-def mangle_signature(app, what, name, obj, options, sig, retann):
-    # Do not try to inspect classes that don't define `__init__`
-    if (inspect.isclass(obj) and
-        (not hasattr(obj, '__init__') or
-        'initializes x; see ' in pydoc.getdoc(obj.__init__))):
-        return '', ''
-
-    if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return
-    if not hasattr(obj, '__doc__'): return
-
-    doc = SphinxDocString(pydoc.getdoc(obj))
-    if doc['Signature']:
-        sig = re.sub(u"^[^(]*", u"", doc['Signature'])
-        return sig, u''
-
-def setup(app, get_doc_object_=get_doc_object):
-    global get_doc_object
-    get_doc_object = get_doc_object_
-
-    app.connect('autodoc-process-docstring', mangle_docstrings)
-    app.connect('autodoc-process-signature', mangle_signature)
-    app.add_config_value('numpydoc_edit_link', None, False)
-    app.add_config_value('numpydoc_use_plots', None, False)
-    app.add_config_value('numpydoc_show_class_members', True, True)
-
-    # Extra mangling domains
-    app.add_domain(NumpyPythonDomain)
-    app.add_domain(NumpyCDomain)
-
-    retdict = dict(
-        version='0.1',
-        parallel_read_safe=True,
-        parallel_write_safe=True
-    )
-
-    return retdict
-
-
-#------------------------------------------------------------------------------
-# Docstring-mangling domains
-#------------------------------------------------------------------------------
-
-from docutils.statemachine import ViewList
-from sphinx.domains.c import CDomain
-from sphinx.domains.python import PythonDomain
-
-class ManglingDomainBase(object):
-    directive_mangling_map = {}
-
-    def __init__(self, *a, **kw):
-        super(ManglingDomainBase, self).__init__(*a, **kw)
-        self.wrap_mangling_directives()
-
-    def wrap_mangling_directives(self):
-        for name, objtype in self.directive_mangling_map.items():
-            self.directives[name] = wrap_mangling_directive(
-                self.directives[name], objtype)
-
-class NumpyPythonDomain(ManglingDomainBase, PythonDomain):
-    name = 'np'
-    directive_mangling_map = {
-        'function': 'function',
-        'class': 'class',
-        'exception': 'class',
-        'method': 'function',
-        'classmethod': 'function',
-        'staticmethod': 'function',
-        'attribute': 'attribute',
-    }
-
-class NumpyCDomain(ManglingDomainBase, CDomain):
-    name = 'np-c'
-    directive_mangling_map = {
-        'function': 'function',
-        'member': 'attribute',
-        'macro': 'function',
-        'type': 'class',
-        'var': 'object',
-    }
-
-def wrap_mangling_directive(base_directive, objtype):
-    class directive(base_directive):
-        def run(self):
-            env = self.state.document.settings.env
-
-            name = None
-            if self.arguments:
-                m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0])
-                name = m.group(2).strip()
-
-            if not name:
-                name = self.arguments[0]
-
-            lines = list(self.content)
-            mangle_docstrings(env.app, objtype, name, None, None, lines)
-            self.content = ViewList(lines, self.content.parent)
-
-            return base_directive.run(self)
-
-    return directive
-

This diff is so big that we needed to truncate the remainder.
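
[Editorial note] The deleted numpydocmod sources above lean on several Python 2-only constructs that are visible in the diff — `dict.has_key`, `dict.iteritems()`, `ur'...'` string literals, and the `unicode` builtin — none of which exist in Python 3. A minimal sketch of the Python 3 equivalents (standalone illustration, not yt code):

```python
import re

roles = {'func': 'function', 'meth': 'method'}

# py2: roles.has_key('func')  ->  py3: membership test with `in`
assert 'func' in roles

# py2: roles.iteritems()  ->  py3: roles.items() (already a lazy view)
pairs = sorted(roles.items())

# py2: ur'...' is a SyntaxError in py3; str is unicode by default,
# so a plain raw string r'...' suffices for regex patterns.
title_re = re.compile(r'^\s*[#*=]{4,}', re.I)

# py2: unicode(x)  ->  py3: str(x)
text = str(42)
```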

https://bitbucket.org/yt_analysis/yt/commits/bcb94bd84482/
Changeset:   bcb94bd84482
Branch:      yt
User:        xarthisius
Date:        2016-01-28 22:11:30+00:00
Summary:     Fix run_recipe definition
Affected #:  1 file

diff -r 01292c5cd41759430a927fd902435d95d337e33c -r bcb94bd84482bdb66c27b3c373691219c039cb14 doc/helper_scripts/run_recipes.py
--- a/doc/helper_scripts/run_recipes.py
+++ b/doc/helper_scripts/run_recipes.py
@@ -27,7 +27,8 @@
         os.symlink(directory, os.path.basename(directory))
 
 
-def run_recipe((recipe,)):
+def run_recipe(payload):
+    recipe, = payload
     module_name, ext = os.path.splitext(os.path.basename(recipe))
     dest = os.path.join(os.path.dirname(recipe), '_static', module_name)
     if module_name in BLACKLIST:

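[Editorial note] The `run_recipe` fix above replaces Python 2's tuple parameter unpacking, which was removed in Python 3 by PEP 3113, with explicit unpacking inside the function body. A minimal sketch of the pattern, using a stand-in payload rather than yt's actual recipe paths:

```python
# py2 allowed tuple parameters:  def run_recipe((recipe,)): ...
# That is a SyntaxError in py3, so the 1-tuple is unpacked explicitly.

def run_recipe(payload):
    # payload is a 1-tuple, e.g. ("simple_plot.py",); the trailing comma
    # form raises ValueError if payload is not exactly length 1.
    recipe, = payload
    return recipe

# The 1-tuple calling convention typically comes from mapping a worker
# function over a list of single-element argument tuples:
jobs = [("recipe_a.py",), ("recipe_b.py",)]
results = [run_recipe(job) for job in jobs]
```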

https://bitbucket.org/yt_analysis/yt/commits/34ed15d98fe4/
Changeset:   34ed15d98fe4
Branch:      yt
User:        xarthisius
Date:        2016-01-28 22:53:38+00:00
Summary:     Make sure rockstar is run with py2
Affected #:  1 file

diff -r bcb94bd84482bdb66c27b3c373691219c039cb14 -r 34ed15d98fe48c75d824fbf6246d8ab89c4e2398 doc/helper_scripts/run_recipes.py
--- a/doc/helper_scripts/run_recipes.py
+++ b/doc/helper_scripts/run_recipes.py
@@ -40,7 +40,7 @@
         prep_dirs()
         if module_name in PARALLEL_TEST:
             cmd = ["mpiexec", "-n", PARALLEL_TEST[module_name],
-                   "python", recipe]
+                   "python2", recipe]
         else:
             cmd = ["python", recipe]
         try:

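[Editorial note] The change above pins the MPI-parallel recipes to the `python2` interpreter (rockstar's bindings were py2-only at the time) while serial recipes still run under plain `python`. A sketch of the command construction, with a hypothetical `PARALLEL_TEST` mapping standing in for the real module-level constant in run_recipes.py:

```python
# Hypothetical stand-in: maps recipe module name -> MPI task count (string,
# since it is passed directly to mpiexec's argv).
PARALLEL_TEST = {"rockstar_nest": "3"}

def build_cmd(recipe, module_name):
    """Build the subprocess argv list; MPI recipes are forced onto python2,
    all other recipes use the default python interpreter."""
    if module_name in PARALLEL_TEST:
        return ["mpiexec", "-n", PARALLEL_TEST[module_name],
                "python2", recipe]
    return ["python", recipe]
```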
Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
