[yt-svn] commit/yt: 3 new changesets

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Fri Oct 3 14:21:16 PDT 2014


3 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/aff8e0dea8a8/
Changeset:   aff8e0dea8a8
Branch:      stable
User:        MatthewTurk
Date:        2014-10-03 21:06:18+00:00
Summary:     Merging from development branch.
Affected #:  132 files

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -986,8 +986,10 @@
 
 if !( ( ${DEST_DIR}/bin/python2.7 -c "import readline" 2>&1 )>> ${LOG_FILE})
 then
-    echo "Installing pure-python readline"
-    ( ${DEST_DIR}/bin/pip install readline 2>&1 ) 1>> ${LOG_FILE}
+    if !( ( ${DEST_DIR}/bin/python2.7 -c "import gnureadline" 2>&1 )>> ${LOG_FILE})
+        echo "Installing pure-python readline"
+        ( ${DEST_DIR}/bin/pip install gnureadline 2>&1 ) 1>> ${LOG_FILE}
+    fi
 fi
 
 if [ $INST_ENZO -eq 1 ]

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -291,7 +291,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"density\"], method=\"sum\")\n",
       "prj.set_log(\"density\", True)\n",
       "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
       "prj.show()"
@@ -304,4 +304,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -72,6 +72,8 @@
 * Quantities
 * Callbacks
 
+A list of all available filters, quantities, and callbacks can be found in 
+:ref:`halo_analysis_ref`.  
 All interaction with this analysis can be performed by importing from 
 halo_analysis.
 
@@ -129,7 +131,14 @@
 are center_of_mass and bulk_velocity. Their definitions are available in 
 ``yt/analysis_modules/halo_analysis/halo_quantities.py``. If you think that 
 your quantity may be of use to the general community, add it to 
-``halo_quantities.py`` and issue a pull request.
+``halo_quantities.py`` and issue a pull request.  Default halo quantities are:
+
+* ``particle_identifier`` -- Halo ID (e.g. 0 to N)
+* ``particle_mass`` -- Mass of halo
+* ``particle_position_x`` -- Location of halo
+* ``particle_position_y`` -- Location of halo
+* ``particle_position_z`` -- Location of halo
+* ``virial_radius`` -- Virial radius of halo
 
 An example of adding a quantity:
 
@@ -154,6 +163,18 @@
    # ... Later on in your script
    hc.add_quantity("my_quantity") 
 
+This quantity will then be accessible for functions called later via the 
+*quantities* dictionary that is associated with the halo object.
+
+.. code-block:: python
+
+   def my_new_function(halo):
+       print halo.quantities["my_quantity"]
+   add_callback("print_quantity", my_new_function)
+
+   # ... Anywhere after "my_quantity" has been called
+   hc.add_callback("print_quantity")
+
 Callbacks
 ^^^^^^^^^
 
@@ -171,10 +192,10 @@
    hc.add_callback("sphere", factor=2.0)
     
 Currently available callbacks are located in 
-``yt/analysis_modules/halo_analysis/halo_callbacks.py``. New callbacks may 
+``yt/analysis_modules/halo_analysis/halo_callbacks.py``.  New callbacks may 
 be added by using the syntax shown below. If you think that your 
 callback may be of use to the general community, add it to 
-halo_callbacks.py and issue a pull request
+halo_callbacks.py and issue a pull request.
 
 An example of defining your own callback:
 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/analyzing/analysis_modules/halo_finders.rst
--- a/doc/source/analyzing/analysis_modules/halo_finders.rst
+++ b/doc/source/analyzing/analysis_modules/halo_finders.rst
@@ -75,7 +75,8 @@
   mass. In simulations where the highest-resolution particles all have the 
   same mass (ie: zoom-in grid based simulations), one can set up a particle
   filter to select the lowest mass particles and perform the halo finding
-  only on those.
+  only on those.  See the this cookbook recipe for an example: 
+  :ref:`cookbook-rockstar-nested-grid`.
 
 To run the Rockstar Halo finding, you must launch python with MPI and 
 parallelization enabled. While Rockstar itself does not require MPI to run, 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -225,12 +225,12 @@
 
 **Projection** 
     | Class :class:`~yt.data_objects.construction_data_containers.YTQuadTreeProjBase`
-    | Usage: ``proj(field, axis, weight_field=None, center=None, ds=None, data_source=None, style="integrate", field_parameters=None)``
+    | Usage: ``proj(field, axis, weight_field=None, center=None, ds=None, data_source=None, method="integrate", field_parameters=None)``
     | A 2D projection of a 3D volume along one of the axis directions.  
       By default, this is a line integral through the entire simulation volume 
       (although it can be a subset of that volume specified by a data object
       with the ``data_source`` keyword).  Alternatively, one can specify 
-      a weight_field and different ``style`` values to change the nature
+      a weight_field and different ``method`` values to change the nature
       of the projection outcome.  See :ref:`projection-types` for more information.
 
 **Streamline** 
@@ -263,7 +263,7 @@
 
    ds = load("my_data")
    sp = ds.sphere('c', (10, 'kpc'))
-   print ad.quantities.angular_momentum_vector()
+   print sp.quantities.angular_momentum_vector()
 
 Available Derived Quantities
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/cosmological_analysis.rst
--- a/doc/source/cookbook/cosmological_analysis.rst
+++ b/doc/source/cookbook/cosmological_analysis.rst
@@ -14,6 +14,22 @@
 
 .. yt_cookbook:: halo_plotting.py
 
+.. _cookbook-rockstar-nested-grid:
+
+Running Rockstar to Find Halos on Multi-Resolution-Particle Datasets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The version of Rockstar installed with yt does not have the capability
+to work on datasets with particles of different masses.  Unfortunately,
+many simulations possess particles of different masses, notably cosmological 
+zoom datasets.  This recipe uses Rockstar in two different ways to generate a 
+HaloCatalog from the highest resolution dark matter particles (the ones 
+inside the zoom region).  It then overlays some of those halos on a projection
+as a demonstration.  See :ref:`halo-analysis` and :ref:`annotate-halos` for
+more information.
+
+.. yt_cookbook:: rockstar_nest.py
+
 .. _cookbook-halo_finding:
 
 Halo Profiling and Custom Analysis

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/fits_radio_cubes.ipynb
--- a/doc/source/cookbook/fits_radio_cubes.ipynb
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -183,14 +183,14 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We can also make a projection of all the emission along the line of sight. Since we're not doing an integration along a path length, we needed to specify `proj_style = \"sum\"`:"
+      "We can also make a projection of all the emission along the line of sight. Since we're not doing an integration along a path length, we needed to specify `method = \"sum\"`:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], proj_style=\"sum\", origin=\"native\")\n",
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], method=\"sum\", origin=\"native\")\n",
       "prj.show()"
      ],
      "language": "python",

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/fits_xray_images.ipynb
--- a/doc/source/cookbook/fits_xray_images.ipynb
+++ b/doc/source/cookbook/fits_xray_images.ipynb
@@ -264,7 +264,7 @@
       "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
       "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time},\n",
       "                   data_source=circle_reg,\n",
-      "                   proj_style=\"sum\")\n",
+      "                   method=\"sum\")\n",
       "prj.set_log(\"flux\",True)\n",
       "prj.set_log(\"pseudo_pressure\",False)\n",
       "prj.set_log(\"pseudo_entropy\",False)\n",

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ b/doc/source/cookbook/free_free_field.py
@@ -6,6 +6,7 @@
 # Need to grab the proton mass from the constants database
 from yt.utilities.physical_constants import mp
 
+exit()
 # Define the emission field
 
 keVtoerg = 1.602e-9  # Convert energy in keV to energy in erg

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/halo_profiler.py
--- a/doc/source/cookbook/halo_profiler.py
+++ b/doc/source/cookbook/halo_profiler.py
@@ -18,8 +18,8 @@
 
 # use the sphere to calculate radial profiles of gas density
 # weighted by cell volume in terms of the virial radius
-hc.add_callback("profile", x_field="radius",
-                y_fields=[("gas", "overdensity")],
+hc.add_callback("profile", ["radius"],
+                [("gas", "overdensity")],
                 weight_field="cell_volume",
                 accumulation=False,
                 storage="virial_quantities_profiles")
@@ -32,7 +32,8 @@
 field_params = dict(virial_radius=('quantity', 'radius_200'))
 hc.add_callback('sphere', radius_field='radius_200', factor=5,
                 field_parameters=field_params)
-hc.add_callback('profile', 'virial_radius', [('gas', 'temperature')],
+hc.add_callback('profile', ['virial_radius'], 
+                [('gas', 'temperature')],
                 storage='virial_profiles',
                 weight_field='cell_mass',
                 accumulation=False, output_dir='profiles')

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/hse_field.py
--- a/doc/source/cookbook/hse_field.py
+++ b/doc/source/cookbook/hse_field.py
@@ -1,163 +1,35 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import numpy as np
 import yt
 
+from yt.fields.field_plugin_registry import \
+    register_field_plugin
+from yt.fields.fluid_fields import \
+    setup_gradient_fields
+
+
 # Define the components of the gravitational acceleration vector field by
 # taking the gradient of the gravitational potential
-
- at yt.derived_field(name='gravitational_acceleration_x',
-                  units='cm/s**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
-def gravitational_acceleration_x(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dx = div_fac * data['dx'][0]
-
-    gx = data["gravitational_potential"][sl_right, 1:-1, 1:-1]/dx
-    gx -= data["gravitational_potential"][sl_left, 1:-1, 1:-1]/dx
-
-    new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')*gx.uq
-    new_field[1:-1, 1:-1, 1:-1] = -gx
-
-    return new_field
-
-
- at yt.derived_field(name='gravitational_acceleration_y',
-                  units='cm/s**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
-def gravitational_acceleration_y(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dy = div_fac * data['dy'].flatten()[0]
-
-    gy = data["gravitational_potential"][1:-1, sl_right, 1:-1]/dy
-    gy -= data["gravitational_potential"][1:-1, sl_left, 1:-1]/dy
-
-    new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')*gy.uq
-
-    new_field[1:-1, 1:-1, 1:-1] = -gy
-
-    return new_field
-
-
- at yt.derived_field(name='gravitational_acceleration_z',
-                  units='cm/s**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
-def gravitational_acceleration_z(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dz = div_fac * data['dz'].flatten()[0]
-
-    gz = data["gravitational_potential"][1:-1, 1:-1, sl_right]/dz
-    gz -= data["gravitational_potential"][1:-1, 1:-1, sl_left]/dz
-
-    new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')*gz.uq
-    new_field[1:-1, 1:-1, 1:-1] = -gz
-
-    return new_field
-
-
-# Define the components of the pressure gradient field
-
-
- at yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["pressure"])])
-def grad_pressure_x(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dx = div_fac * data['dx'].flatten()[0]
-
-    px = data["pressure"][sl_right, 1:-1, 1:-1]/dx
-    px -= data["pressure"][sl_left, 1:-1, 1:-1]/dx
-
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.uq
-    new_field[1:-1, 1:-1, 1:-1] = px
-
-    return new_field
-
-
- at yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["pressure"])])
-def grad_pressure_y(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dy = div_fac * data['dy'].flatten()[0]
-
-    py = data["pressure"][1:-1, sl_right, 1:-1]/dy
-    py -= data["pressure"][1:-1, sl_left, 1:-1]/dy
-
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')*py.uq
-    new_field[1:-1, 1:-1, 1:-1] = py
-
-    return new_field
-
-
- at yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False,
-                  validators=[yt.ValidateSpatial(1,["pressure"])])
-def grad_pressure_z(field, data):
-
-    # We need to set up stencils
-
-    sl_left = slice(None, -2, None)
-    sl_right = slice(2, None, None)
-    div_fac = 2.0
-
-    dz = div_fac * data['dz'].flatten()[0]
-
-    pz = data["pressure"][1:-1, 1:-1, sl_right]/dz
-    pz -= data["pressure"][1:-1, 1:-1, sl_left]/dz
-
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')*pz.uq
-    new_field[1:-1, 1:-1, 1:-1] = pz
-
-    return new_field
-
+ at register_field_plugin
+def setup_my_fields(registry, ftype="gas", slice_info=None):
+    setup_gradient_fields(registry, (ftype, "gravitational_potential"),
+                          "cm ** 2 / s ** 2", slice_info)
 
 # Define the "degree of hydrostatic equilibrium" field
 
+
 @yt.derived_field(name='HSE', units=None, take_log=False,
                   display_name='Hydrostatic Equilibrium')
 def HSE(field, data):
 
-    gx = data["density"]*data["gravitational_acceleration_x"]
-    gy = data["density"]*data["gravitational_acceleration_y"]
-    gz = data["density"]*data["gravitational_acceleration_z"]
+    gx = data["density"] * data["gravitational_potential_gradient_x"]
+    gy = data["density"] * data["gravitational_potential_gradient_y"]
+    gz = data["density"] * data["gravitational_potential_gradient_z"]
 
-    hx = data["grad_pressure_x"] - gx
-    hy = data["grad_pressure_y"] - gy
-    hz = data["grad_pressure_z"] - gz
+    hx = data["pressure_gradient_x"] - gx
+    hy = data["pressure_gradient_y"] - gy
+    hz = data["pressure_gradient_z"] - gz
 
-    h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
+    h = np.sqrt((hx * hx + hy * hy + hz * hz) / (gx * gx + gy * gy + gz * gz))
 
     return h
 
@@ -166,6 +38,10 @@
 
 ds = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")
 
+# gradient operator requires periodic boundaries.  This dataset has
+# open boundary conditions.  We need to hack it for now (this will be fixed
+# in future version of yt)
+ds.periodicity = (True, True, True)
 
 # Take a slice through the center of the domain
 slc = yt.SlicePlot(ds, 2, ["density", "HSE"], width=(1, 'Mpc'))

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/power_spectrum_example.py
--- a/doc/source/cookbook/power_spectrum_example.py
+++ b/doc/source/cookbook/power_spectrum_example.py
@@ -57,7 +57,7 @@
     
     # physical limits to the wavenumbers
     kmin = np.min(1.0/L)
-    kmax = np.max(0.5*dims/L)
+    kmax = np.min(0.5*dims/L)
     
     kbins = np.arange(kmin, kmax, kmin)
     N = len(kbins)
@@ -112,7 +112,6 @@
     return np.abs(ru)**2
 
 
-if __name__ == "__main__":
 
-    ds = yt.load("maestro_xrb_lores_23437")
-    doit(ds)
+ds = yt.load("maestro_xrb_lores_23437")
+doit(ds)

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/rockstar_nest.py
--- /dev/null
+++ b/doc/source/cookbook/rockstar_nest.py
@@ -0,0 +1,74 @@
+# You must run this job in parallel.  
+# There are several mpi flags which can be useful in order for it to work OK.
+# It requires at least 3 processors in order to run because of the way in which 
+# rockstar divides up the work.  Make sure you have mpi4py installed as per 
+# http://yt-project.org/docs/dev/analyzing/parallel_computation.html#setting-up-parallel-yt
+    
+# Usage: mpirun -np <num_procs> --mca btl ^openib python this_script.py
+
+import yt
+from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog
+from yt.data_objects.particle_filters import add_particle_filter
+from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
+yt.enable_parallelism() # rockstar halofinding requires parallelism
+
+# Create a dark matter particle filter
+# This will be code dependent, but this function here is true for enzo
+
+def DarkMatter(pfilter, data):
+    filter = data[("all", "particle_type")] == 1 # DM = 1, Stars = 2
+    return filter
+
+add_particle_filter("dark_matter", function=DarkMatter, filtered_type='all', \
+                    requires=["particle_type"])
+
+# First, we make sure that this script is being run using mpirun with
+# at least 3 processors as indicated in the comments above.
+assert(yt.communication_system.communicators[-1].size >= 3)
+
+# Load the dataset and apply dark matter filter
+fn = "Enzo_64/DD0043/data0043"
+ds = yt.load(fn)
+ds.add_particle_filter('dark_matter')
+
+# Determine highest resolution DM particle mass in sim by looking
+# at the extrema of the dark_matter particle_mass field.
+ad = ds.all_data()
+min_dm_mass = ad.quantities.extrema(('dark_matter','particle_mass'))[0]
+
+# Define a new particle filter to isolate all highest resolution DM particles
+# and apply it to dataset
+def MaxResDarkMatter(pfilter, data):
+    return data["particle_mass"] <= 1.01 * min_dm_mass
+
+add_particle_filter("max_res_dark_matter", function=MaxResDarkMatter, \
+                    filtered_type='dark_matter', requires=["particle_mass"])
+ds.add_particle_filter('max_res_dark_matter')
+
+# If desired, we can see the total number of DM and High-res DM particles
+#if yt.is_root():
+#    print "Simulation has %d DM particles." % ad['dark_matter','particle_type'].shape
+#    print "Simulation has %d Highest Res DM particles." % ad['max_res_dark_matter', 'particle_type'].shape
+
+# Run the halo catalog on the dataset only on the highest resolution dark matter 
+# particles
+hc = HaloCatalog(data_ds=ds, finder_method='rockstar', \
+                 finder_kwargs={'dm_only':True, 'particle_type':'max_res_dark_matter'})
+hc.create()
+
+# Or alternatively, just run the RockstarHaloFinder and later import the 
+# output file as necessary.  You can skip this step if you've already run it
+# once, but be careful since subsequent halo finds will overwrite this data.
+#rhf = RockstarHaloFinder(ds, particle_type="max_res_dark_matter")
+#rhf.run()
+# Load the halo list from a rockstar output for this dataset
+# Create a projection with the halos overplot on top
+#halos = yt.load('rockstar_halos/halos_0.0.bin')
+#hc = HaloCatalog(halos_ds=halos)
+#hc.load()
+
+# Regardless of your method of creating the halo catalog, use it to overplot the
+# halos on a projection.
+p = yt.ProjectionPlot(ds, "x", "density")
+p.annotate_halos(hc, annotate_field = 'particle_identifier', width=(10,'Mpc'), factor=2)
+p.save()

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/tests/test_cookbook.py
--- /dev/null
+++ b/doc/source/cookbook/tests/test_cookbook.py
@@ -0,0 +1,40 @@
+# -*- coding: utf-8 -*-
+"""Module for cookbook testing
+
+
+This test should be run from main yt directory.
+
+Example:
+
+      $ sed -e '/where/d' -i nose.cfg setup.cfg
+      $ nosetests doc/source/cookbook/tests/test_cookbook.py -P -v
+"""
+import glob
+import os
+import subprocess
+
+
+PARALLEL_TEST = {"rockstar_nest.py": "3"}
+
+
+def test_recipe():
+    '''Dummy test grabbing all cookbook's recipes'''
+    for fname in glob.glob("doc/source/cookbook/*.py"):
+        recipe = os.path.basename(fname)
+        check_recipe.description = "Testing recipe: %s" % recipe
+        if recipe in PARALLEL_TEST:
+            yield check_recipe, \
+                ["mpiexec", "-n", PARALLEL_TEST[recipe], "python", fname]
+        else:
+            yield check_recipe, ["python", fname]
+
+
+def check_recipe(cmd):
+    '''Run single recipe'''
+    try:
+        subprocess.check_call(cmd)
+        result = True
+    except subprocess.CalledProcessError, e:
+        print("Stdout output:\n", e.output)
+        result = False
+    assert result

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/cookbook/thin_slice_projection.py
--- a/doc/source/cookbook/thin_slice_projection.py
+++ b/doc/source/cookbook/thin_slice_projection.py
@@ -4,7 +4,7 @@
 ds = yt.load("Enzo_64/DD0030/data0030")
 
 # Make a projection that is the full width of the domain,
-# but only 10 Mpc in depth.  This is done by creating a
+# but only 5 Mpc in depth.  This is done by creating a
 # region object with this exact geometry and providing it
 # as a data_source for the projection.
 
@@ -17,12 +17,12 @@
 right_corner = ds.domain_right_edge
 
 # Now adjust the size of the region along the line of sight (x axis).
-depth = ds.quan(10.0,'Mpc')
+depth = ds.quan(5.0,'Mpc')
 left_corner[0] = center[0] - 0.5 * depth
-left_corner[0] = center[0] + 0.5 * depth
+right_corner[0] = center[0] + 0.5 * depth
 
 # Create the region
-region = ds.region(center, left_corner, right_corner)
+region = ds.box(left_corner, right_corner)
 
 # Create a density projection and supply the region we have just created.
 # Only cells within the region will be included in the projection.

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -105,7 +105,10 @@
 
 You will need to have the yt repository available on your computer, which
 is done by default if you have yt installed.  In addition, you need a 
-current version of Sphinx_ (1.1.3) documentation software installed.
+current version of Sphinx_ (1.1.3) documentation software installed, as
+well as the Sphinx
+`Bootstrap theme <https://pypi.python.org/pypi/sphinx-bootstrap-theme/>`_,
+which can be installed via ``pip install sphinx_bootstrap_theme``.
 
 In order to tell sphinx not to do all of the dynamical building, you must
 set the ``$READTHEDOCS`` environment variable to be True by typing this at 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -173,7 +173,7 @@
 If you plan to develop yt on Windows, it is necessary to use the `MinGW
 <http://www.mingw.org/>`_ gcc compiler that can be installed using the `Anaconda
 Python Distribution <https://store.continuum.io/cshop/anaconda/>`_. The libpython package must be
- installed from Anaconda as well. These can both be installed with a single command:
+installed from Anaconda as well. These can both be installed with a single command:
 
 .. code-block:: bash
 
@@ -229,6 +229,19 @@
  
    If you end up doing considerable development, you can set an alias in the
    file ``.hg/hgrc`` to point to this path.
+
+   .. note::
+     Note that the above approach uses HTTPS as the transfer protocol
+     between your machine and BitBucket.  If you prefer to use SSH - or
+     perhaps you're behind a proxy that doesn't play well with SSL via
+     HTTPS - you may want to set up an `SSH key`_ on BitBucket.  Then, you use
+     the syntax ``ssh://hg@bitbucket.org/YourUsername/yt``, or equivalent, in
+     place of ``https://bitbucket.org/YourUsername/yt`` in Mercurial commands.
+     For consistency, all commands we list in this document will use the HTTPS
+     protocol.
+
+     .. _SSH key: https://confluence.atlassian.com/display/BITBUCKET/Set+up+SSH+for+Mercurial
+
 #. Issue a pull request at
    https://bitbucket.org/YourUsername/yt/pull-request/new
 
@@ -246,6 +259,88 @@
 
 #. Your pull request will be automatically updated.
 
+.. _multiple-PRs:
+
+Working with Multiple BitBucket Pull Requests
++++++++++++++++++++++++++++++++++++++++++++++
+
+Once you become active developing for yt, you may be working on
+various aspects of the code or bugfixes at the same time.  Currently,
+BitBucket's *modus operandi* for pull requests automatically updates
+your active pull request with every ``hg push`` of commits that are a
+descendant of the head of your pull request.  In a normal workflow,
+this means that if you have an active pull request, make some changes
+locally for, say, an unrelated bugfix, then push those changes back to
+your fork in the hopes of creating a *new* pull request, you'll
+actually end up updating your current pull request!
+
+There are a few ways around this feature of BitBucket that will allow
+for multiple pull requests to coexist; we outline one such method
+below.  We assume that you have a fork of yt at
+``http://bitbucket.org/YourUsername/Your_yt`` (see
+:ref:`sharing-changes` for instructions on creating a fork) and that
+you have an active pull request to the main repository.
+
+The main issue with starting another pull request is to make sure that
+your push to BitBucket doesn't go to the same head as your
+existing pull request and trigger BitBucket's auto-update feature.
+Here's how to get your local repository away from your current pull
+request head using `revsets <http://www.selenic.com/hg/help/revsets>`_
+and your ``hgrc`` file:
+   
+#. Set up a Mercurial path for the main yt repository (note this is a convenience
+   step and only needs to be done once).  Add the following to your
+   ``Your_yt/.hg/hgrc``::
+
+     [paths]
+     upstream = https://bitbucket.org/yt_analysis/yt
+
+   This will create a path called ``upstream`` that is aliased to the URL of the
+   main yt repository.
+#. Now we'll use revsets_ to update your local repository to the tip of the
+   ``upstream`` path:
+
+   .. code-block:: bash
+
+      $ hg pull
+      $ hg update -r "remote(tip,'upstream')"
+
+After the above steps, your local repository should be at the tip of
+the main yt repository.  If you find yourself doing this a lot, it may
+be worth aliasing this task in your ``hgrc`` file by adding something like::
+
+  [alias]
+  myupdate = update -r "remote(tip,'upstream')"
+
+And then you can just issue ``hg myupdate`` to get at the tip of the yt
+branch of the main yt repository.
+
+You can then make changes and ``hg commit`` them.  If you prefer
+working with `bookmarks <http://mercurial.selenic.com/wiki/Bookmarks>`_, you may
+want to make a bookmark before committing your changes, such as
+``hg bookmark mybookmark``.
+
+To push to your fork on BitBucket if you didn't use a bookmark, you issue the following:
+
+.. code-block:: bash
+
+  $ hg push -r . -f https://bitbucket.org/YourUsername/Your_yt
+
+The ``-r .`` means "push only the commit I'm standing on and any ancestors."  The
+``-f`` is to force Mecurial to do the push since we are creating a new remote head.
+
+Note that if you *did* use a bookmark, you don't have to force the push, but you do
+need to push the bookmark; in other words do the following instead of the above:
+
+.. code-block:: bash
+		
+   $ hg push -B mybookmark https://bitbucket.org/YourUsername/Your_yt
+
+The ``-B`` means "publish my bookmark and any relevant changesets to the remote server."
+		
+You can then go to the BitBucket interface and issue a new pull request based on
+your last changes, as usual.
+
 How To Get The Source Code For Editing
 --------------------------------------
 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Particle_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -74,7 +74,7 @@
       "import yt\n",
       "from yt.units import parsec, Msun\n",
       "\n",
-      "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppy), max(ppy)]])\n",
+      "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]])\n",
       "\n",
       "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
      ],

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/examining/Loading_Spherical_Data.ipynb
--- /dev/null
+++ b/doc/source/examining/Loading_Spherical_Data.ipynb
@@ -0,0 +1,188 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:88ed88ce8d8f4a359052f287aea17a7cbed435ff960e195097b440191ce6c2ab"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "# Loading Spherical Data\n",
+      "\n",
+      "With version 3.0 of yt, it has gained the ability to load data from non-Cartesian systems.  This support is still being extended, but here is an example of how to load spherical data from a regularly-spaced grid.  For irregularly spaced grids, a similar setup can be used, but the `load_hexahedral_mesh` method will have to be used instead.\n",
+      "\n",
+      "Note that in yt, \"spherical\" means that it is ordered $r$, $\\theta$, $\\phi$, where $\\theta$ is the declination from the azimuth (running from $0$ to $\\pi$ and $\\phi$ is the angle around the zenith (running from $0$ to $2\\pi$).\n",
+      "\n",
+      "We first start out by loading yt."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import numpy as np\n",
+      "import yt"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, we create a few derived fields.  The first three are just straight translations of the Cartesian coordinates, so that we can see where we are located in the data, and understand what we're seeing.  The final one is just a fun field that is some combination of the three coordinates, and will vary in all dimensions."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "@yt.derived_field(name = \"sphx\", units = \"cm\", take_log=False)\n",
+      "def sphx(field, data):\n",
+      "    return np.cos(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
+      "@yt.derived_field(name = \"sphy\", units = \"cm\", take_log=False)\n",
+      "def sphy(field, data):\n",
+      "    return np.sin(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
+      "@yt.derived_field(name = \"sphz\", units = \"cm\", take_log=False)\n",
+      "def sphz(field, data):\n",
+      "    return np.cos(data[\"theta\"])*data[\"r\"]\n",
+      "@yt.derived_field(name = \"funfield\", units=\"cm\", take_log=False)\n",
+      "def funfield(field, data):\n",
+      "    return (np.sin(data[\"phi\"])**2 + np.cos(data[\"theta\"])**2) * (1.0*data[\"r\"].uq+data[\"r\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "## Loading Data\n",
+      "\n",
+      "Now we can actually load our data.  We use the `load_uniform_grid` function here.  Normally, the first argument would be a dictionary of field data, where the keys were the field names and the values the field data arrays.  Here, we're just going to look at derived fields, so we supply an empty one.\n",
+      "\n",
+      "The next few arguments are the number of dimensions, the bounds, and we then specify the geometry as spherical."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load_uniform_grid({}, [128, 128, 128],\n",
+      "                          bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2*np.pi]]),\n",
+      "                          geometry=\"spherical\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "## Looking at Data\n",
+      "\n",
+      "Now we can take slices.  The first thing we will try is making a slice of data along the \"phi\" axis, here $\\pi/2$, which will be along the y axis in the positive direction.  We use the `.slice` attribute, which creates a slice, and then we convert this into a plot window.  Note that here 2 is used to indicate the third axis (0-indexed) which for spherical data is $\\phi$.\n",
+      "\n",
+      "This is the manual way of creating a plot -- below, we'll use the standard, automatic ways.  Note that the coordinates run from $-r$ to $r$ along the $z$ axis and from $0$ to $r$ along the $R$ axis.  We use the capital $R$ to indicate that it's the $R$ along the $x-y$ plane."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "s = ds.slice(2, np.pi/2)\n",
+      "p = s.to_pw(\"funfield\", origin=\"native\")\n",
+      "p.set_zlim(\"all\", 0.0, 4.0)\n",
+      "p.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can also slice along $r$.  For now, this creates a regular grid with *incorrect* units for phi and theta.  We are currently exploring two other options -- a simple aitoff projection, and fixing it to use the correct units as-is."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "s = yt.SlicePlot(ds, \"r\", \"funfield\")\n",
+      "s.set_zlim(\"all\", 0.0, 4.0)\n",
+      "s.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can also slice at constant $\\theta$.  But, this is a weird thing!  We're slicing at a constant declination from the azimuth.  What this means is that when thought of in a Cartesian domain, this slice is actually a cone.  The axes have been labeled appropriately, to indicate that these are not exactly the $x$ and $y$ axes, but instead differ by a factor of $\\sin(\\theta))$."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "s = yt.SlicePlot(ds, \"theta\", \"funfield\")\n",
+      "s.set_zlim(\"all\", 0.0, 4.0)\n",
+      "s.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We've seen lots of the `funfield` plots, but we can also look at the Cartesian axes.  This next plot plots the Cartesian $x$, $y$ and $z$ values on a $\\theta$ slice.  Because we're not supplying an argument to the `center` parameter, yt will place it at the center of the $\\theta$ axis, which will be at $\\pi/2$, where it will be aligned with the $x-y$ plane.  The slight change in `sphz` results from the cells themselves migrating, and plotting the center of those cells."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "s = yt.SlicePlot(ds, \"theta\", [\"sphx\", \"sphy\", \"sphz\"])\n",
+      "s.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can do the same with the $\\phi$ axis."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": true,
+     "input": [
+      "s = yt.SlicePlot(ds, \"phi\", [\"sphx\", \"sphy\", \"sphz\"])\n",
+      "s.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/examining/index.rst
--- a/doc/source/examining/index.rst
+++ b/doc/source/examining/index.rst
@@ -9,4 +9,5 @@
    loading_data
    generic_array_data
    generic_particle_data
+   spherical_data
    low_level_inspection

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/examining/spherical_data.rst
--- /dev/null
+++ b/doc/source/examining/spherical_data.rst
@@ -0,0 +1,6 @@
+.. _loading-spherical-data:
+
+Loading Spherical Data
+======================
+
+.. notebook:: Loading_Spherical_Data.ipynb

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/index.rst
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -44,6 +44,16 @@
      <tr valign="top"><td width="25%"><p>
+           <a href="examining/index.html">Loading and Examining Data</a>
+         </p>
+       </td>
+       <td width="75%">
+         <p class="linkdescr">How to load and examine all dataset types in yt</p>
+       </td>
+     </tr>
+     <tr valign="top">
+       <td width="25%">
+         <p><a href="yt3differences.html">yt 3.0</a></p></td>
@@ -84,16 +94,6 @@
      <tr valign="top"><td width="25%"><p>
-           <a href="examining/index.html">Examining Data</a>
-         </p>
-       </td>
-       <td width="75%">
-         <p class="linkdescr">Load data and directly access raw values for low-level analysis</p>
-       </td>
-     </tr>
-     <tr valign="top">
-       <td width="25%">
-         <p><a href="developing/index.html">Developing in yt</a></p></td>

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -407,6 +407,8 @@
    ~yt.data_objects.profiles.Profile3D
    ~yt.data_objects.profiles.create_profile
 
+.. _halo_analysis_ref:
+
 Halo Analysis
 ^^^^^^^^^^^^^
 
@@ -419,21 +421,22 @@
    ~yt.analysis_modules.halo_analysis.halo_catalog.HaloCatalog
    ~yt.analysis_modules.halo_analysis.halo_finding_methods.HaloFindingMethod
    ~yt.analysis_modules.halo_analysis.halo_callbacks.HaloCallback
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.delete_attribute
    ~yt.analysis_modules.halo_analysis.halo_callbacks.halo_sphere
-   ~yt.analysis_modules.halo_analysis.halo_callbacks.sphere_field_max_recenter
-   ~yt.analysis_modules.halo_analysis.halo_callbacks.sphere_bulk_velocity
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.iterative_center_of_mass
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.load_profiles
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.phase_plot
    ~yt.analysis_modules.halo_analysis.halo_callbacks.profile
    ~yt.analysis_modules.halo_analysis.halo_callbacks.save_profiles
-   ~yt.analysis_modules.halo_analysis.halo_callbacks.load_profiles
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.sphere_bulk_velocity
+   ~yt.analysis_modules.halo_analysis.halo_callbacks.sphere_field_max_recenter
    ~yt.analysis_modules.halo_analysis.halo_callbacks.virial_quantities
-   ~yt.analysis_modules.halo_analysis.halo_callbacks.phase_plot
-   ~yt.analysis_modules.halo_analysis.halo_callbacks.delete_attribute
    ~yt.analysis_modules.halo_analysis.halo_filters.HaloFilter
+   ~yt.analysis_modules.halo_analysis.halo_filters.not_subhalo
    ~yt.analysis_modules.halo_analysis.halo_filters.quantity_value
-   ~yt.analysis_modules.halo_analysis.halo_filters.not_subhalo
    ~yt.analysis_modules.halo_analysis.halo_quantities.HaloQuantity
+   ~yt.analysis_modules.halo_analysis.halo_quantities.bulk_velocity
    ~yt.analysis_modules.halo_analysis.halo_quantities.center_of_mass
-   ~yt.analysis_modules.halo_analysis.halo_quantities.bulk_velocity
 
 Halo Finding
 ^^^^^^^^^^^^

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/reference/code_support.rst
--- a/doc/source/reference/code_support.rst
+++ b/doc/source/reference/code_support.rst
@@ -24,7 +24,7 @@
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
 | Castro                |     Y      |     Y     |   Partial  |   Y   |    Y     |    Y     |     N      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
-| Chombo                |     Y      |     N     |      Y     |   Y   |    Y     |    Y     |     Y      | Partial  |
+| Chombo                |     Y      |     Y     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
 | Enzo                  |     Y      |     Y     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/visualizing/_cb_docstrings.inc
--- a/doc/source/visualizing/_cb_docstrings.inc
+++ b/doc/source/visualizing/_cb_docstrings.inc
@@ -151,19 +151,28 @@
 Overplot Halo Annotations
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. function:: annotate_halos(self, halo_catalog, col='white', alpha=1, \
-                             width=None):
+.. function:: annotate_halos(self, halo_catalog, circle_kwargs=None, width=None, \ 
+                             annotate_field=None, font_kwargs=None, factor=1.0):
 
    (This is a proxy for
    :class:`~yt.visualization.plot_modifications.HaloCatalogCallback`.)
 
    Accepts a :class:`~yt.analysis_modules.halo_analysis.halo_catalog.HaloCatalog` 
-   and plots a circle at the location of each
-   halo with the radius of the circle corresponding to the virial radius of the
-   halo.  If ``width`` is set to None (default) all halos are plotted.
-   Otherwise, only halos that fall within a slab with width ``width`` centered
-   on the center of the plot data. The color and transparency of the circles can
-   be controlled with ``col`` and ``alpha`` respectively.
+   and plots a circle at the location of each halo with the radius of the 
+   circle corresponding to the virial radius of the halo.  If ``width`` is set 
+   to None (default) all halos are plotted, otherwise it accepts a tuple in 
+   the form (1.0, ‘Mpc’) to only display halos that fall within a slab with 
+   width ``width`` centered on the center of the plot data.  The appearance of 
+   the circles can be changed with the circle_kwargs dictionary, which is 
+   supplied to the Matplotlib patch Circle.  One can label each of the halos 
+   with the annotate_field, which accepts a field contained in the halo catalog 
+   to add text to the plot near the halo (example: ``annotate_field=
+   'particle_mass'`` will write the halo mass next to each halo, whereas 
+   ``'particle_identifier'`` shows the halo number).  font_kwargs contains the 
+   arguments controlling the text appearance of the annotated field.
+   Factor is the number the virial radius is multiplied by for plotting the 
+   circles. Ex: ``factor=2.0`` will plot circles with twice the radius of each 
+   halo virial radius.
 
 .. python-script::
 
@@ -177,7 +186,7 @@
    hc.create()
 
    prj = yt.ProjectionPlot(data_ds, 'z', 'density')
-   prj.annotate_halos(hc)
+   prj.annotate_halos(hc, annotate_field='particle_identifier')
    prj.save()
 
 Overplot a Straight Line

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/visualizing/colormaps/index.rst
--- a/doc/source/visualizing/colormaps/index.rst
+++ b/doc/source/visualizing/colormaps/index.rst
@@ -6,14 +6,20 @@
 There are several colormaps available for yt.  yt includes all of the 
 matplotlib colormaps as well for nearly all functions.  Individual visualization
 functions usually allow you to specify a colormap with the ``cmap`` flag.
-There are a small number of functions (mostly contained in the image_writer 
-module; e.g. write_bitmap, write_image, write_projection, etc.), which do 
-not load the matplotlib infrastructure and can only access the colormaps 
-native to yt.  
 
-Here is a chart of all of the colormaps available.  In addition to each 
-colormap displayed here, you can access its "reverse" by simply appending a 
-``"_r"`` to the end of the colormap name.
+If you have installed brewer2mpl (``pip install brewer2mpl`` or see
+`https://github.com/jiffyclub/brewer2mpl
+<https://github.com/jiffyclub/brewer2mpl>`_), you can also access the discrete
+colormaps available on `http://colorbrewer2.org <http://colorbrewer2.org>`_.
+Instead of supplying the colormap name, specify a tuple of the form (name, type,
+number), for example ``('RdBu', 'Diverging', 9)``.  These discrete colormaps will
+not be interpolated, and can be useful for creating
+colorblind/printer/grayscale-friendly plots. For more information, visit
+`http://colorbrewer2.org <http://colorbrewer2.org>`_.
+
+Here is a chart of all of the yt and matplotlib colormaps available.  In
+addition to each colormap displayed here, you can access its "reverse" by simply
+appending a ``"_r"`` to the end of the colormap name.
 
 All Colormaps (including matplotlib)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -239,13 +239,13 @@
 Types of Projections
 """"""""""""""""""""
 
-There are several different styles of projections that can be made either 
+There are several different methods of projections that can be made either 
 when creating a projection with ds.proj() or when making a ProjectionPlot.  
-In either construction method, set the ``style`` keyword to be one of the 
+In either construction method, set the ``method`` keyword to be one of the 
 following:
 
 ``integrate`` (unweighted)
-    This is the default projection style. It simply integrates the 
+    This is the default projection method. It simply integrates the 
     requested field  :math:`f(x)` along a line of sight  :math:`\hat{n}` , 
     given by the axis parameter (e.g. :math:`\hat{i},\hat{j},` or 
     :math:`\hat{k}`).  The units of the projected field  
@@ -258,7 +258,7 @@
     g(X) = {\int\ {f(x)\hat{n}\cdot{dx}}}
 
 ``integrate`` (weighted)
-    When using the ``integrate``  style, a ``weight_field`` argument may also 
+    When using the ``integrate``  method, a ``weight_field`` argument may also 
     be specified, which will produce a weighted projection.  :math:`w(x)` 
     is the field used as a weight. One common example would 
     be to weight the "temperature" field by the "density" field. In this case, 
@@ -269,15 +269,15 @@
     g(X) = \frac{\int\ {f(x)w(x)\hat{n}\cdot{dx}}}{\int\ {w(x)\hat{n}\cdot{dx}}}
 
 ``mip`` 
-    This style picks out the maximum value of a field along the line of 
+    This method picks out the maximum value of a field along the line of 
     sight given by the axis parameter.
 
 ``sum``
-    This style is the same as ``integrate``, except that it does not 
+    This method is the same as ``integrate``, except that it does not 
     multiply by a path length when performing the integration, and is just a 
     straight summation of the field along the given axis. The units of the 
     projected field will be the same as those of the unprojected field. This 
-    style is typically only useful for datasets such as 3D FITS cubes where 
+    method is typically only useful for datasets such as 3D FITS cubes where 
     the third axis of the dataset is something like velocity or frequency.
 
 .. _off-axis-projections:

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 scripts/iyt
--- a/scripts/iyt
+++ b/scripts/iyt
@@ -90,6 +90,7 @@
     kwargs = dict()
 
 ip.ex("from yt.mods import *")
+ip.ex("import yt")
 
 # Now we add some tab completers, in the vein of:
 # http://pymel.googlecode.com/svn/trunk/tools/ipymel.py

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 setup.py
--- a/setup.py
+++ b/setup.py
@@ -103,7 +103,7 @@
         options = Cython.Compiler.Main.CompilationOptions(
             defaults=Cython.Compiler.Main.default_options,
             include_path=extension.include_dirs,
-            language=extension.language, cplus=cplus,
+            cplus=cplus,
             output_file=target_file)
         cython_result = Cython.Compiler.Main.compile(source,
                                                      options=options)

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/absorption_spectrum/absorption_line.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_line.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_line.py
@@ -195,7 +195,6 @@
     ## tau_0
     tau_X = np.sqrt(np.pi) * e**2 / (me * ccgs) * \
         column_density * fval / vdop
-    tau1 = tau_X * lam1cgs
     tau0 = tau_X * lam0cgs
 
     # dimensionless frequency offset in units of doppler freq

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
+++ b/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
@@ -328,7 +328,7 @@
                                                         output["redshift"])
                 proper_box_size = self.simulation.box_size / \
                   (1.0 + output["redshift"])
-                pixel_xarea = (proper_box_size.in_cgs() / pixels)**2 #in proper cm^2
+                pixel_area = (proper_box_size.in_cgs() / pixels)**2 #in proper cm^2
                 factor = pixel_area / (4.0 * np.pi * dL.in_cgs()**2)
                 mylog.info("Distance to slice = %s" % dL)
                 frb[field] *= factor #in erg/s/cm^2/Hz on observer"s image plane.

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/halo_analysis/fields.py
--- a/yt/analysis_modules/halo_analysis/fields.py
+++ b/yt/analysis_modules/halo_analysis/fields.py
@@ -30,7 +30,7 @@
         sl_right = slice(2, None, None)
         div_fac = 2.0
     else:
-        sl_left, sl_right, div_face = slice_info
+        sl_left, sl_right, div_fac = slice_info
 
     def _virial_radius(field, data):
         virial_radius = data.get_field_parameter("virial_radius")

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -18,7 +18,7 @@
 import os
 
 from yt.data_objects.profiles import \
-     Profile1D
+     create_profile
 from yt.units.yt_array import \
      YTArray, YTQuantity
 from yt.utilities.exceptions import \
@@ -80,7 +80,6 @@
     """
 
     dds = halo.halo_catalog.data_ds
-    hds = halo.halo_catalog.halos_ds
     center = dds.arr([halo.quantities["particle_position_%s" % axis] \
                       for axis in "xyz"])
     radius = factor * halo.quantities[radius_field]
@@ -148,11 +147,11 @@
 
 add_callback("sphere_bulk_velocity", sphere_bulk_velocity)
 
-def profile(halo, x_field, y_fields, x_bins=32, x_range=None, x_log=True,
-            weight_field="cell_mass", accumulation=False, storage="profiles",
-            output_dir="."):
+def profile(halo, bin_fields, profile_fields, n_bins=32, extrema=None, logs=None,
+            weight_field="cell_mass", accumulation=False, fractional=False,
+            storage="profiles", output_dir="."):
     r"""
-    Create 1d profiles.
+    Create 1, 2, or 3D profiles of a halo.
 
     Store profile data in a dictionary associated with the halo object.
 
@@ -160,26 +159,37 @@
     ----------
     halo : Halo object
         The Halo object to be provided by the HaloCatalog.
-    x_field : string
-        The binning field for the profile.
-    y_fields : string or list of strings
+    bin_fields : list of strings
+        The binning fields for the profile.
+    profile_fields : string or list of strings
         The fields to be propython
         filed.
-    x_bins : int
-        The number of bins in the profile.
-        Default: 32
-    x_range : (float, float)
-        The range of the x_field.  If None, the extrema are used.
-        Default: None
-    x_log : bool
-        Flag for logarithmmic binning.
-        Default: True
+    n_bins : int or list of ints
+        The number of bins in each dimension.  If None, 32 bins for
+        each bin are used for each bin field.
+        Default: 32.
+    extrema : dict of min, max tuples
+        Minimum and maximum values of the bin_fields for the profiles.
+        The keys correspond to the field names. Defaults to the extrema
+        of the bin_fields of the dataset. If a units dict is provided, extrema
+        are understood to be in the units specified in the dictionary.
+    logs : dict of boolean values
+        Whether or not to log the bin_fields for the profiles.
+        The keys correspond to the field names. Defaults to the take_log
+        attribute of the field.
     weight_field : string
         Weight field for profiling.
         Default : "cell_mass"
-    accumulation : bool
-        If True, profile data is a cumulative sum.
-        Default : False
+    accumulation : bool or list of bools
+        If True, the profile values for a bin n are the cumulative sum of
+        all the values from bin 0 to n.  If -True, the sum is reversed so
+        that the value for bin n is the cumulative sum from bin N (total bins)
+        to n.  If the profile is 2D or 3D, a list of values can be given to
+        control the summation in each dimension independently.
+        Default: False.
+    fractional : If True the profile values are divided by the sum of all
+        the profile data such that the profile represents a probability
+        distribution function.
     storage : string
         Name of the dictionary to store profiles.
         Default: "profiles"
@@ -210,37 +220,18 @@
         output_dir = storage
     output_dir = os.path.join(halo.halo_catalog.output_dir, output_dir)
     
-    if x_range is None:
-        x_range = list(halo.data_object.quantities.extrema(x_field, non_zero=True))
-
-    my_profile = Profile1D(halo.data_object, x_field, x_bins, 
-                           x_range[0], x_range[1], x_log, 
-                           weight_field=weight_field)
-    my_profile.add_fields(ensure_list(y_fields))
-
-    # temporary fix since profiles do not have units at the moment
-    for field in my_profile.field_data:
-        my_profile.field_data[field] = dds.arr(my_profile[field],
-                                               dds.field_info[field].units)
-
-    # accumulate, if necessary
-    if accumulation:
-        used = my_profile.used
-        for field in my_profile.field_data:
-            if weight_field is None:
-                my_profile.field_data[field][used] = \
-                    np.cumsum(my_profile.field_data[field][used])
-            else:
-                my_weight = my_profile.weight
-                my_profile.field_data[field][used] = \
-                  np.cumsum(my_profile.field_data[field][used] * my_weight[used]) / \
-                  np.cumsum(my_weight[used])
+    bin_fields = ensure_list(bin_fields)
+    my_profile = create_profile(halo.data_object, bin_fields, profile_fields, n_bins=n_bins,
+                                extrema=extrema, logs=logs, weight_field=weight_field,
+                                accumulation=accumulation, fractional=fractional)
                   
-    # create storage dictionary
     prof_store = dict([(field, my_profile[field]) \
                        for field in my_profile.field_data])
     prof_store[my_profile.x_field] = my_profile.x
-
+    if len(bin_fields) > 1:
+        prof_store[my_profile.y_field] = my_profile.y
+    if len(bin_fields) > 2:
+        prof_store[my_profile.z_field] = my_profile.z
     if hasattr(halo, storage):
         halo_store = getattr(halo, storage)
         if "used" in halo_store:
@@ -250,6 +241,17 @@
         setattr(halo, storage, halo_store)
     halo_store.update(prof_store)
 
+    if hasattr(my_profile, "variance"):
+        variance_store = dict([(field, my_profile.variance[field]) \
+                           for field in my_profile.variance])
+        variance_storage = "%s_variance" % storage
+        if hasattr(halo, variance_storage):
+            halo_variance_store = getattr(halo, variance_storage)
+        else:
+            halo_variance_store = {}
+            setattr(halo, variance_storage, halo_variance_store)
+        halo_variance_store.update(variance_store)
+
 add_callback("profile", profile)
 
 @parallel_root_only
@@ -288,18 +290,24 @@
     mylog.info("Saving halo %d profile data to %s." %
                (halo.quantities["particle_identifier"], output_file))
 
-    out_file = h5py.File(output_file, "w")
+    fh = h5py.File(output_file, "w")
     my_profile = getattr(halo, storage)
+    profile_group = fh.create_group("profiles")
     for field in my_profile:
         # Don't write code units because we might not know those later.
         if isinstance(my_profile[field], YTArray):
             my_profile[field].convert_to_cgs()
-        dataset = out_file.create_dataset(str(field), data=my_profile[field])
-        units = ""
-        if isinstance(my_profile[field], YTArray):
-            units = str(my_profile[field].units)
-        dataset.attrs["units"] = units
-    out_file.close()
+        _yt_array_hdf5(profile_group, str(field), my_profile[field])
+    variance_storage = "%s_variance" % storage
+    if hasattr(halo, variance_storage):
+        my_profile = getattr(halo, variance_storage)
+        variance_group = fh.create_group("variance")
+        for field in my_profile:
+            # Don't write code units because we might not know those later.
+            if isinstance(my_profile[field], YTArray):
+                my_profile[field].convert_to_cgs()
+            _yt_array_hdf5(variance_group, str(field), my_profile[field])
+    fh.close()
 
 add_callback("save_profiles", save_profiles)
 
@@ -340,20 +348,33 @@
     mylog.info("Loading halo %d profile data from %s." %
                (halo.quantities["particle_identifier"], output_file))
 
-    out_file = h5py.File(output_file, "r")
+    fh = h5py.File(output_file, "r")
+    if fields is None:
+        profile_fields = fh["profiles"].keys()
+    else:
+        profile_fields = fields
     my_profile = {}
-    if fields is None:
-        fields = out_file.keys()
-    for field in fields:
-        if field not in out_file:
+    my_group = fh["profiles"]
+    for field in profile_fields:
+        if field not in my_group:
             raise RuntimeError("%s field not present in %s." % (field, output_file))
-        units = ""
-        if "units" in out_file[field].attrs:
-            units = out_file[field].attrs["units"]
-        if units == "dimensionless": units = ""
-        my_profile[field] = halo.halo_catalog.halos_ds.arr(out_file[field].value, units)
-    out_file.close()
+        my_profile[field] = _hdf5_yt_array(my_group, field,
+                                           ds=halo.halo_catalog.halos_ds)
     setattr(halo, storage, my_profile)
+    
+    if "variance" in fh:
+        my_variance = {}
+        my_group = fh["variance"]
+        if fields is None:
+            profile_fields = my_group.keys()
+        for field in profile_fields:
+            if field not in my_group:
+                raise RuntimeError("%s field not present in %s." % (field, output_file))
+            my_variance[field] = _hdf5_yt_array(my_group, field,
+                                                ds=halo.halo_catalog.halos_ds)
+        setattr(halo, "%s_variance" % storage, my_variance)
+        
+    fh.close()
 
 add_callback("load_profiles", load_profiles)
 
@@ -383,6 +404,8 @@
                halo.quantities["particle_identifier"])
 
     fields = ensure_list(fields)
+    fields = [halo.halo_catalog.data_source._determine_fields(field)[0]
+              for field in fields]
     
     dds = halo.halo_catalog.data_ds
     profile_data = getattr(halo, profile_storage)
@@ -498,3 +521,74 @@
         delattr(halo, attribute)
 
 add_callback("delete_attribute", delete_attribute)
+
+def iterative_center_of_mass(halo, radius_field="virial_radius", inner_ratio=0.1, step_ratio=0.9,
+                             units="pc"):
+    r"""
+    Adjust halo position by iteratively recalculating the center of mass while 
+    decreasing the radius.
+
+    Parameters
+    ----------
+    halo : Halo object
+        The Halo object to be provided by the HaloCatalog.
+    radius_field : string
+        The halo quantity to be used as the radius for the sphere.
+        Default: "virial_radius"
+    inner_ratio : float
+        The ratio of the smallest sphere radius used for calculating the center of 
+        mass to the initial radius.  The sphere radius is reduced and center of mass 
+        recalculated until the sphere has reached this size.
+        Default: 0.1
+    step_ratio : float
+        The multiplicative factor used to reduce the radius of the sphere after the 
+        center of mass is calculated.
+        Default: 0.9
+    units : str
+        The units for printing out the distance between the initial and final centers.
+        Default : "pc"
+        
+    """
+    if inner_ratio <= 0.0 or inner_ratio >= 1.0:
+        raise RuntimeError("iterative_center_of_mass: inner_ratio must be between 0 and 1.")
+    if step_ratio <= 0.0 or step_ratio >= 1.0:
+        raise RuntimeError("iterative_center_of_mass: step_ratio must be between 0 and 1.")
+
+    center_orig = halo.halo_catalog.data_ds.arr([halo.quantities["particle_position_%s" % axis]
+                                                 for axis in "xyz"])
+    sphere = halo.halo_catalog.data_ds.sphere(center_orig, halo.quantities[radius_field])
+
+    while sphere.radius > inner_ratio * halo.quantities[radius_field]:
+        new_center = sphere.quantities.center_of_mass(use_gas=True, use_particles=True)
+        sphere = sphere.ds.sphere(new_center, step_ratio * sphere.radius)
+
+    distance = periodic_distance(center_orig.in_units("code_length").to_ndarray(),
+                                 new_center.in_units("code_length").to_ndarray())
+    distance = halo.halo_catalog.data_ds.quan(distance, "code_length")
+    mylog.info("Recentering halo %d %f %s away." %
+               (halo.quantities["particle_identifier"],
+                distance.in_units(units), units))
+
+    for i, axis in enumerate("xyz"):
+        halo.quantities["particle_position_%s" % axis] = sphere.center[i]
+    del sphere
+    
+add_callback("iterative_center_of_mass", iterative_center_of_mass)
+
+def _yt_array_hdf5(fh, fieldname, data):
+    dataset = fh.create_dataset(fieldname, data=data)
+    units = ""
+    if isinstance(data, YTArray):
+        units = str(data.units)
+    dataset.attrs["units"] = units
+
+def _hdf5_yt_array(fh, fieldname, ds=None):
+    if ds is None:
+        new_arr = YTArray
+    else:
+        new_arr = ds.arr
+    units = ""
+    if "units" in fh[fieldname].attrs:
+        units = fh[fieldname].attrs["units"]
+    if units == "dimensionless": units = ""
+    return new_arr(fh[fieldname].value, units)

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py
@@ -364,7 +364,6 @@
         if self.halos_ds is None:
             # Find the halos and make a dataset of them
             self.halos_ds = self.finder_method(self.data_ds)
-            self.halos_ds.index
             if self.halos_ds is None:
                 mylog.warning('No halos were found for {0}'.format(\
                         self.data_ds.basename))
@@ -373,6 +372,7 @@
                     self.save_catalog()
                     self.halos_ds = None
                 return
+            self.halos_ds.index
 
             # Assign ds and data sources appropriately
             self.data_source = self.halos_ds.all_data()

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/halo_mass_function/halo_mass_function.py
--- a/yt/analysis_modules/halo_mass_function/halo_mass_function.py
+++ b/yt/analysis_modules/halo_mass_function/halo_mass_function.py
@@ -788,7 +788,7 @@
     
         # Now compute the CDM+HDM+baryon transfer functions
         tf_cb = self.tf_master*self.growth_cb/self.growth_k0;
-        tf_cbnu = self.tf_master*self.growth_cbnu/self.growth_k0;
+        #tf_cbnu = self.tf_master*self.growth_cbnu/self.growth_k0;
         return tf_cb
 
 
@@ -832,7 +832,6 @@
     area1 = np.sum(areas)
     # Now we refine until the error is smaller than *error*.
     diff = area1 - area0
-    area_final = area1
     area_last = area1
     one_pow = 3
     while diff > error:

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/level_sets/contour_finder.py
--- a/yt/analysis_modules/level_sets/contour_finder.py
+++ b/yt/analysis_modules/level_sets/contour_finder.py
@@ -32,7 +32,6 @@
     contours = {}
     node_ids = []
     DLE = data_source.ds.domain_left_edge
-    total_vol = None
     selector = getattr(data_source, "base_object", data_source).selector
     masks = dict((g.id, m) for g, m in data_source.blocks)
     for (g, node, (sl, dims, gi)) in data_source.tiles.slice_traverse():

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/photon_simulator/photon_models.py
--- a/yt/analysis_modules/photon_simulator/photon_models.py
+++ b/yt/analysis_modules/photon_simulator/photon_models.py
@@ -128,7 +128,6 @@
         energy = self.spectral_model.ebins
     
         cell_em = EM[idxs]*vol_scale
-        cell_vol = vol[idxs]*vol_scale
     
         number_of_photons = np.zeros(dshape, dtype='uint64')
         energies = []
@@ -139,7 +138,6 @@
 
         for i, ikT in enumerate(kT_idxs):
 
-            ncells = int(bcounts[i])
             ibegin = bcell[i]
             iend = ecell[i]
             kT = kT_bins[ikT] + 0.5*dkT

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/photon_simulator/photon_simulator.py
--- a/yt/analysis_modules/photon_simulator/photon_simulator.py
+++ b/yt/analysis_modules/photon_simulator/photon_simulator.py
@@ -490,7 +490,6 @@
         z_hat = orient.unit_vectors[2]
 
         n_ph = self.photons["NumberOfPhotons"]
-        num_cells = len(n_ph)
         n_ph_tot = n_ph.sum()
         
         eff_area = None
@@ -667,7 +666,6 @@
         tblhdu = hdulist["MATRIX"]
         n_de = len(tblhdu.data["ENERG_LO"])
         mylog.info("Number of energy bins in RMF: %d" % (n_de))
-        de = tblhdu.data["ENERG_HI"] - tblhdu.data["ENERG_LO"]
         mylog.info("Energy limits: %g %g" % (min(tblhdu.data["ENERG_LO"]),
                                              max(tblhdu.data["ENERG_HI"])))
 
@@ -682,7 +680,6 @@
         phYY = events["ypix"][eidxs]
 
         detectedChannels = []
-        pindex = 0
 
         # run through all photon energies and find which bin they go in
         k = 0

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/sunrise_export/sunrise_exporter.py
--- a/yt/analysis_modules/sunrise_export/sunrise_exporter.py
+++ b/yt/analysis_modules/sunrise_export/sunrise_exporter.py
@@ -128,7 +128,6 @@
     if fni.endswith('.fits'):
         fni = fni.replace('.fits','')
 
-    ndomains_finished = 0
     for (num_halos, domain, halos) in domains_list:
         dle,dre = domain
         print 'exporting: '
@@ -154,7 +153,6 @@
             fh.write("%6.6e \n"%(halo.Rvir*ds['kpc']))
         fh.close()
         export_to_sunrise(ds, fnf, star_particle_type, dle*1.0/dn, dre*1.0/dn)
-        ndomains_finished +=1
 
 def domains_from_halos(ds,halo_list,frvir=0.15):
     domains = {}
@@ -172,8 +170,6 @@
     domains_list = [(len(v),k,v) for k,v in domains.iteritems()]
     domains_list.sort() 
     domains_list.reverse() #we want the most populated domains first
-    domains_limits = [d[1] for d in domains_list]
-    domains_halos  = [d[2] for d in domains_list]
     return domains_list
 
 def prepare_octree(ds,ile,start_level=0,debug=True,dd=None,center=None):
@@ -245,10 +241,6 @@
     hs       = hilbert_state()
     start_time = time.time()
     if debug:
-        if center is not None: 
-            c = center*ds['kpc']
-        else:
-            c = ile*1.0/ds.domain_dimensions*ds['kpc']
         printing = lambda x: print_oct(x)
     else:
         printing = None
@@ -332,7 +324,7 @@
         #then translate onto the subgrid integer index 
         parent_fle  = grid.left_edges + cell_index*grid.dx
         subgrid_ile = np.floor((parent_fle - subgrid.left_edges)/subgrid.dx)
-        for i, (vertex,hilbert_child) in enumerate(hilbert):
+        for (vertex, hilbert_child) in hilbert:
             #vertex is a combination of three 0s and 1s to 
             #denote each of the 8 octs
             if level < 0:

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
--- a/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
+++ b/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
@@ -89,8 +89,6 @@
     L = 2 * R * cm_per_kpc
     bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) * L
 
-    dl = L/nz
-
     ds = load_uniform_grid(data, ddims, length_unit='cm', bbox=bbox)
     ds.index
 

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -186,11 +186,14 @@
     data_source : `yt.data_objects.api.AMRData`, optional
         If specified, this will be the data source used for selecting
         regions to project.
-    style : string, optional
-        The style of projection to be performed.
+    method : string, optional
+        The method of projection to be performed.
         "integrate" : integration along the axis
         "mip" : maximum intensity projection
         "sum" : same as "integrate", except that we don't multiply by the path length
+    style : string, optional
+        The same as the method keyword.  Deprecated as of version 3.0.2.  
+        Please use method keyword instead.
     field_parameters : dict of items
         Values to be passed as field parameters that can be
         accessed by generated fields.
@@ -208,20 +211,26 @@
     _container_fields = ('px', 'py', 'pdx', 'pdy', 'weight_field')
     def __init__(self, field, axis, weight_field = None,
                  center = None, ds = None, data_source = None,
-                 style = "integrate", field_parameters = None):
+                 style = None, method = "integrate", 
+                 field_parameters = None):
         YTSelectionContainer2D.__init__(self, axis, ds, field_parameters)
-        if style == "sum":
-            self.proj_style = "integrate"
+        # Style is deprecated, but if it is set, then it trumps method
+        # keyword.  TODO: Remove this keyword and this check at some point in 
+        # the future.
+        if style is not None:
+            method = style
+        if method == "sum":
+            self.method = "integrate"
             self._sum_only = True
         else:
-            self.proj_style = style
+            self.method = method
             self._sum_only = False
-        if style == "mip":
+        if self.method == "mip":
             self.func = np.max
-        elif style == "integrate" or style == "sum":
+        elif self.method == "integrate":
             self.func = np.sum # for the future
         else:
-            raise NotImplementedError(style)
+            raise NotImplementedError(self.method)
         self._set_center(center)
         if data_source is None: data_source = self.ds.all_data()
         for k, v in data_source.field_parameters.items():
@@ -260,7 +269,7 @@
                   self.ds.domain_left_edge[xax],
                   self.ds.domain_right_edge[yax])
         return QuadTree(np.array([xd,yd], dtype='int64'), nvals,
-                        bounds, style = self.proj_style)
+                        bounds, method = self.method)
 
     def get_data(self, fields = None):
         fields = fields or []
@@ -282,10 +291,10 @@
                     get_memory_usage()/1024.)
                 self._handle_chunk(chunk, fields, tree)
         # Note that this will briefly double RAM usage
-        if self.proj_style == "mip":
+        if self.method == "mip":
             merge_style = -1
             op = "max"
-        elif self.proj_style == "integrate":
+        elif self.method == "integrate":
             merge_style = 1
             op = "sum"
         else:
@@ -324,7 +333,10 @@
             finfo = self.ds._get_field_info(*field)
             mylog.debug("Setting field %s", field)
             units = finfo.units
-            if self.weight_field is None and not self._sum_only:
+            # add length units to "projected units" if non-weighted 
+            # integral projection
+            if self.weight_field is None and not self._sum_only and \
+               self.method == 'integrate':
                 # See _handle_chunk where we mandate cm
                 if units == '':
                     input_units = "cm"
@@ -336,7 +348,9 @@
             self[field] = YTArray(field_data[fi].ravel(),
                                   input_units=input_units,
                                   registry=self.ds.unit_registry)
-            if self.weight_field is None and not self._sum_only:
+            # convert units if non-weighted integral projection
+            if self.weight_field is None and not self._sum_only and \
+               self.method == 'integrate':
                 u_obj = Unit(units, registry=self.ds.unit_registry)
                 if ((u_obj.is_code_unit or self.ds.no_cgs_equiv_length) and
                     not u_obj.is_dimensionless) and input_units != units:
@@ -355,7 +369,7 @@
         tree.initialize_chunk(i1, i2, ilevel)
 
     def _handle_chunk(self, chunk, fields, tree):
-        if self.proj_style == "mip" or self._sum_only:
+        if self.method == "mip" or self._sum_only:
             dl = 1.0
         else:
             # This gets explicitly converted to cm

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -418,7 +418,6 @@
         otherwise Glue will be started.
         """
         from glue.core import DataCollection, Data
-        from glue.core.coordinates import coordinates_from_header
         from glue.qt.glue_application import GlueApplication
         
         gdata = Data(label=label)
@@ -494,6 +493,19 @@
                     ftype = self._current_fluid_type
                     if (ftype, fname) not in self.ds.field_info:
                         ftype = self.ds._last_freq[0]
+
+                # really ugly check to ensure that this field really does exist somewhere,
+                # in some naming convention, before returning it as a possible field type
+                if (ftype,fname) not in self.ds.field_info and \
+                        (ftype,fname) not in self.ds.field_list and \
+                        fname not in self.ds.field_list and \
+                        (ftype,fname) not in self.ds.derived_field_list and \
+                        fname not in self.ds.derived_field_list and \
+                        (ftype,fname) not in self._container_fields:
+                    raise YTFieldNotFound((ftype,fname),self.ds)
+
+            # these tests are really insufficient as a field type may be valid, and the
+            # field name may be valid, but not the combination (field type, field name)
             if finfo.particle_type and ftype not in self.ds.particle_types:
                 raise YTFieldTypeNotFound(ftype)
             elif not finfo.particle_type and ftype not in self.ds.fluid_types:
@@ -621,7 +633,7 @@
                 fields_to_generate.append(field)
                 continue
             fields_to_get.append(field)
-        if len(fields_to_get) == 0 and fields_to_generate == 0:
+        if len(fields_to_get) == 0 and len(fields_to_generate) == 0:
             return
         elif self._locked == True:
             raise GenerationInProgress(fields)
@@ -787,13 +799,16 @@
     def _get_pw(self, fields, center, width, origin, plot_type):
         from yt.visualization.plot_window import \
             get_window_parameters, PWViewerMPL
-        from yt.visualization.fixed_resolution import FixedResolutionBuffer as frb
+        from yt.visualization.fixed_resolution import \
+            FixedResolutionBuffer as frb
         axis = self.axis
         skip = self._key_fields
         skip += list(set(frb._exclude_fields).difference(set(self._key_fields)))
-        self.fields = ensure_list(fields) + \
-            [k for k in self.field_data if k not in skip]
-        (bounds, center) = get_window_parameters(axis, center, width, self.ds)
+        self.fields = [k for k in self.field_data if k not in skip]
+        if fields is not None:
+            self.fields = ensure_list(fields) + self.fields
+        (bounds, center, display_center) = \
+            get_window_parameters(axis, center, width, self.ds)
         pw = PWViewerMPL(self, bounds, fields=self.fields, origin=origin,
                          frb_generator=frb, plot_type=plot_type)
         pw._setup_plots()
@@ -1177,16 +1192,15 @@
         return self._particle_handler
 
 
-    def volume(self, unit = "unitary"):
+    def volume(self):
         """
-        Return the volume of the data container in units *unit*.
+        Return the volume of the data container.
         This is found by adding up the volume of the cells with centers
         in the container, rather than using the geometric shape of
         the container, so this may vary very slightly
         from what might be expected from the geometric volume.
         """
-        return self.quantities["TotalQuantity"]("CellVolume")[0] * \
-            (self.ds[unit] / self.ds['cm']) ** 3.0
+        return self.quantities.total_quantity(("index", "cell_volume"))
 
 # Many of these items are set up specifically to ensure that
 # we are not breaking old pickle files.  This means we must only call the

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -177,7 +177,7 @@
                 self.ds.domain_left_edge,
                 self.ds.domain_right_edge,
                 over_refine = self._oref)
-            particle_octree.n_ref = nneighbors / 2
+            particle_octree.n_ref = nneighbors
             particle_octree.add(morton)
             particle_octree.finalize()
             pdom_ind = particle_octree.domain_ind(self.selector)

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -955,7 +955,7 @@
     x_min : float
         The minimum value of the x profile field.
     x_max : float
-        The maximum value of hte x profile field.
+        The maximum value of the x profile field.
     x_log : boolean
         Controls whether or not the bins for the x field are evenly
         spaced in linear (False) or log (True) space.

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -45,17 +45,12 @@
     YTArray, \
     YTQuantity
 
-from yt.geometry.cartesian_coordinates import \
-    CartesianCoordinateHandler
-from yt.geometry.polar_coordinates import \
-    PolarCoordinateHandler
-from yt.geometry.cylindrical_coordinates import \
-    CylindricalCoordinateHandler
-from yt.geometry.spherical_coordinates import \
-    SphericalCoordinateHandler
-from yt.geometry.geographic_coordinates import \
-    GeographicCoordinateHandler
-from yt.geometry.spec_cube_coordinates import \
+from yt.geometry.coordinates.api import \
+    CartesianCoordinateHandler, \
+    PolarCoordinateHandler, \
+    CylindricalCoordinateHandler, \
+    SphericalCoordinateHandler, \
+    GeographicCoordinateHandler, \
     SpectralCubeCoordinateHandler
 
 # We want to support the movie format in the future.
@@ -460,8 +455,6 @@
             self._last_freq = field
             self._last_finfo = self.field_info[(ftype, fname)]
             return self._last_finfo
-        if fname == self._last_freq[1]:
-            return self._last_finfo
         if fname in self.field_info:
             # Sometimes, if guessing_type == True, this will be switched for
             # the type of field it is.  So we look at the field type and

diff -r f66765a58a61399bf39dcf73e81cbe90cb1657bc -r aff8e0dea8a8925323a187799637540d884bbc08 yt/data_objects/tests/test_projection.py
--- a/yt/data_objects/tests/test_projection.py
+++ b/yt/data_objects/tests/test_projection.py
@@ -54,12 +54,13 @@
                 yield assert_equal, np.unique(proj["py"]), uc[yax]
                 yield assert_equal, np.unique(proj["pdx"]), 1.0/(dims[xax]*2.0)
                 yield assert_equal, np.unique(proj["pdy"]), 1.0/(dims[yax]*2.0)
-                pw = proj.to_pw(fields='density')
-                for p in pw.plots.values():
-                    tmpfd, tmpname = tempfile.mkstemp(suffix='.png')
-                    os.close(tmpfd)
-                    p.save(name=tmpname)
-                    fns.append(tmpname)
+                plots = [proj.to_pw(fields='density'), proj.to_pw()]
+                for pw in plots:
+                    for p in pw.plots.values():
+                        tmpfd, tmpname = tempfile.mkstemp(suffix='.png')
+                        os.close(tmpfd)
+                        p.save(name=tmpname)
+                        fns.append(tmpname)
                 frb = proj.to_frb((1.0, 'unitary'), 64)
                 for proj_field in ['ones', 'density']:
                     fi = ds._get_field_info(proj_field)

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/511887af4c99/
Changeset:   511887af4c99
Branch:      stable
User:        MatthewTurk
Date:        2014-10-03 21:06:48+00:00
Summary:     Updating version to 3.0.2.
Affected #:  3 files

diff -r aff8e0dea8a8925323a187799637540d884bbc08 -r 511887af4c995a78fe606e58ce8162c88380ecdc doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -67,7 +67,7 @@
 # built documents.
 #
 # The short X.Y version.
-version = '3.0.1'
+version = '3.0.2'
 # The full version, including alpha/beta/rc tags.
 release = '3.0.1'
 

diff -r aff8e0dea8a8925323a187799637540d884bbc08 -r 511887af4c995a78fe606e58ce8162c88380ecdc setup.py
--- a/setup.py
+++ b/setup.py
@@ -118,7 +118,7 @@
 # End snippet
 ######
 
-VERSION = "3.0.1"
+VERSION = "3.0.2"
 
 if os.path.exists('MANIFEST'):
     os.remove('MANIFEST')

diff -r aff8e0dea8a8925323a187799637540d884bbc08 -r 511887af4c995a78fe606e58ce8162c88380ecdc yt/__init__.py
--- a/yt/__init__.py
+++ b/yt/__init__.py
@@ -72,7 +72,7 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-__version__ = "3.0.1"
+__version__ = "3.0.2"
 
 # First module imports
 import numpy as np # For modern purposes


https://bitbucket.org/yt_analysis/yt/commits/7a0e5983e8a4/
Changeset:   7a0e5983e8a4
Branch:      stable
User:        MatthewTurk
Date:        2014-10-03 21:06:54+00:00
Summary:     Added tag yt-3.0.2 for changeset 511887af4c99
Affected #:  1 file

diff -r 511887af4c995a78fe606e58ce8162c88380ecdc -r 7a0e5983e8a4efbf37b2a92b6a7d7fce7a4c5402 .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -5179,3 +5179,4 @@
 f327552a6ede406b82711fb800ebcd5fe692d1cb yt-3.0a4
 73a9f749157260c8949f05c07715305aafa06408 yt-3.0.0
 0cf350f11a551f5a5b4039a70e9ff6d98342d1da yt-3.0.1
+511887af4c995a78fe606e58ce8162c88380ecdc yt-3.0.2

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


More information about the yt-svn mailing list