[yt-svn] commit/yt: 16 new changesets

commits-noreply at bitbucket.org
Fri Jul 18 14:23:36 PDT 2014


16 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/e2ddb70b9e25/
Changeset:   e2ddb70b9e25
Branch:      yt-3.0
User:        convert-repo
Date:        2014-06-16 08:34:31
Summary:     Replacing "pf" with "ds"

This makes the following substitutions across the codebase:

pf -> ds
parameter file -> dataset
pfs -> datasets
parameter_file -> dataset
pf.h -> ds or ds.index, where appropriate
Affected #:  329 files
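
For anyone skimming the renaming in the diffs below, here is a minimal before/after sketch (illustrative only, not part of the changeset; it assumes the Enzo_64 sample dataset and the field/container calls that appear in the docs touched by this commit):

    # Old spelling (pre-3.0): "pf" for parameter file, index methods on pf.h
    #   pf = load("Enzo_64/DD0043/data0043")
    #   dd = pf.h.all_data()
    #   print pf.h.field_list
    # New spelling (yt-3.0): "ds" for dataset, index methods promoted onto ds
    from yt.mods import *
    ds = load("Enzo_64/DD0043/data0043")
    dd = ds.all_data()
    print ds.field_list                     # was pf.h.field_list
    val, loc = ds.find_max("Density")       # was pf.h.find_max("Density")
    sp = ds.sphere("max", (10.0, "kpc"))    # data containers hang off ds directly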

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/cheatsheet.tex
--- a/doc/cheatsheet.tex
+++ b/doc/cheatsheet.tex
@@ -208,38 +208,38 @@
 After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
 that define a region of simulation space from which data should be selected.
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{pf = load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
-\texttt{dd = pf.h.all\_data()} \textemdash\ Select the entire volume.\\
+\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
+\texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
 \texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
 numpy array \texttt{a}. Similarly for other data containers.\\
-\texttt{pf.h.field\_list} \textemdash\ A list of available fields in the snapshot. \\
-\texttt{pf.h.derived\_field\_list} \textemdash\ A list of available derived fields
+\texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
+\texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
 in the snapshot. \\
-\texttt{val, loc = pf.h.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
+\texttt{val, loc = ds.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
 the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\
-\texttt{sp = pf.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data 
+\texttt{sp = ds.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data 
 container. {\it cen} may be a coordinate, or ``max'' which 
 centers on the max density point. {\it radius} may be a float in 
 code units or a tuple of ({\it length, unit}).\\
 
-\texttt{re = pf.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
+\texttt{re = ds.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
 rectilinear data container. {\it cen} is required but not used.
 {\it left} and {\it right edge} are coordinate values that define the region.
 
-\texttt{di = pf.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ 
+\texttt{di = ds.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ 
 Create a cylindrical data container centered at {\it cen} along the 
 direction set by {\it normal},with total length
  2$\times${\it height} and with radius {\it radius}. \\
  
- \texttt{bl = pf.boolean({\it constructor})} \textemdash\ Create a boolean data
+ \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
  container. {\it constructor} is a list of pre-defined non-boolean 
  data containers with nested boolean logic using the
  ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
  {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
  by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
  
-\texttt{pf.h.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
-\texttt{sp = pf.h.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
+\texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
+\texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
 
 
 \subsection{Defining New Fields \& Quantities}
@@ -261,15 +261,15 @@
 
 \subsection{Slices and Projections}
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(pf, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
 perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with 
 {\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
 \texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
 \texttt{.save()} works similarly for the commands below.\\
 
-\texttt{prj = ProjectionPlot(pf, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
+\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
 
 \subsection{Plot Annotations}
 \settowidth{\MyLen}{\texttt{multicol} }
@@ -365,8 +365,8 @@
 \subsection{FAQ}
 \settowidth{\MyLen}{\texttt{multicol}}
 
-\texttt{pf.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{pf.h} before this command. \\
+\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
+Must enter \texttt{ds.index} before this command. \\
 
 
 %\rule{0.3\linewidth}{0.25pt}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/coding_styleguide.txt
--- a/doc/coding_styleguide.txt
+++ b/doc/coding_styleguide.txt
@@ -49,7 +49,7 @@
  * Don't create a new class to replicate the functionality of an old class --
    replace the old class.  Too many options makes for a confusing user
    experience.
- * Parameter files are a last resort.
+ * Parameter files external to yt are a last resort.
  * The usage of the **kwargs construction should be avoided.  If they cannot
    be avoided, they must be explained, even if they are only to be passed on to
    a nested function.
@@ -61,7 +61,7 @@
    * Hard-coding parameter names that are the same as those in Enzo.  The
      following translation table should be of some help.  Note that the
      parameters are now properties on a Dataset subclass: you access them
-     like pf.refine_by .
+     like ds.refine_by .
      * RefineBy => refine_by
      * TopGridRank => dimensionality
      * TopGridDimensions => domain_dimensions

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/docstring_example.txt
--- a/doc/docstring_example.txt
+++ b/doc/docstring_example.txt
@@ -73,7 +73,7 @@
     Examples
     --------
     These are written in doctest format, and should illustrate how to
-    use the function.  Use the variables 'pf' for the parameter file, 'pc' for
+    use the function.  Use the variables 'ds' for the dataset, 'pc' for
     a plot collection, 'c' for a center, and 'L' for a vector. 
 
     >>> a=[1,2,3]

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/docstring_idioms.txt
--- a/doc/docstring_idioms.txt
+++ b/doc/docstring_idioms.txt
@@ -19,7 +19,7 @@
 useful variable names that correspond to specific instances that the user is
 presupposed to have created.
 
-   * `pf`: a parameter file, loaded successfully
+   * `ds`: a dataset, loaded successfully
    * `sp`: a sphere
    * `c`: a 3-component "center"
    * `L`: a 3-component vector that corresponds to either angular momentum or a

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/helper_scripts/parse_cb_list.py
--- a/doc/helper_scripts/parse_cb_list.py
+++ b/doc/helper_scripts/parse_cb_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/visualizing/_cb_docstrings.inc", "w")
 

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/helper_scripts/parse_dq_list.py
--- a/doc/helper_scripts/parse_dq_list.py
+++ b/doc/helper_scripts/parse_dq_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/analyzing/_dq_docstrings.inc", "w")
 
@@ -29,7 +29,7 @@
                             docstring = docstring))
                             #docstring = "\n".join(tw.wrap(docstring))))
 
-dd = pf.h.all_data()
+dd = ds.all_data()
 for n,func in sorted(dd.quantities.functions.items()):
     print n, func
     write_docstring(output, n, func[1])

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/helper_scripts/parse_object_list.py
--- a/doc/helper_scripts/parse_object_list.py
+++ b/doc/helper_scripts/parse_object_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/analyzing/_obj_docstrings.inc", "w")
 
@@ -27,7 +27,7 @@
     f.write(template % dict(clsname = clsname, sig = sig, clsproxy=clsproxy,
                             docstring = 'physical-object-api'))
 
-for n,c in sorted(pf.h.__dict__.items()):
+for n,c in sorted(ds.__dict__.items()):
     if hasattr(c, '_con_args'):
         print n
         write_docstring(output, n, c)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/helper_scripts/show_fields.py
--- a/doc/helper_scripts/show_fields.py
+++ b/doc/helper_scripts/show_fields.py
@@ -17,15 +17,15 @@
 everywhere, "Enzo" fields in Enzo datasets, "Orion" fields in Orion datasets,
 and so on.
 
-Try using the ``pf.field_list`` and ``pf.derived_field_list`` to view the
+Try using the ``ds.field_list`` and ``ds.derived_field_list`` to view the
 native and derived fields available for your dataset respectively. For example
 to display the native fields in alphabetical order:
 
 .. notebook-cell::
 
   from yt.mods import *
-  pf = load("Enzo_64/DD0043/data0043")
-  for i in sorted(pf.field_list):
+  ds = load("Enzo_64/DD0043/data0043")
+  for i in sorted(ds.field_list):
     print i
 
 .. note:: Universal fields will be overridden by a code-specific field.

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/_obj_docstrings.inc
--- a/doc/source/analyzing/_obj_docstrings.inc
+++ b/doc/source/analyzing/_obj_docstrings.inc
@@ -1,12 +1,12 @@
 
 
-.. class:: boolean(self, regions, fields=None, pf=None, **field_parameters):
+.. class:: boolean(self, regions, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRBooleanRegionBase`.)
 
 
-.. class:: covering_grid(self, level, left_edge, dims, fields=None, pf=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
+.. class:: covering_grid(self, level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCoveringGridBase`.)
@@ -24,13 +24,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase`.)
 
 
-.. class:: disk(self, center, normal, radius, height, fields=None, pf=None, **field_parameters):
+.. class:: disk(self, center, normal, radius, height, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCylinderBase`.)
 
 
-.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, pf=None, **field_parameters):
+.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMREllipsoidBase`.)
@@ -48,79 +48,79 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResCuttingPlaneBase`.)
 
 
-.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, pf=None, **field_parameters):
+.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResProjectionBase`.)
 
 
-.. class:: grid_collection(self, center, grid_list, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection(self, center, grid_list, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRGridCollectionBase`.)
 
 
-.. class:: grid_collection_max_level(self, center, max_level, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection_max_level(self, center, max_level, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRMaxLevelCollectionBase`.)
 
 
-.. class:: inclined_box(self, origin, box_vectors, fields=None, pf=None, **field_parameters):
+.. class:: inclined_box(self, origin, box_vectors, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRInclinedBoxBase`.)
 
 
-.. class:: ortho_ray(self, axis, coords, fields=None, pf=None, **field_parameters):
+.. class:: ortho_ray(self, axis, coords, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMROrthoRayBase`.)
 
 
-.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
+.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRProjBase`.)
 
 
-.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionBase`.)
 
 
-.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionStrictBase`.)
 
 
-.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
+.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRQuadTreeProjBase`.)
 
 
-.. class:: ray(self, start_point, end_point, fields=None, pf=None, **field_parameters):
+.. class:: ray(self, start_point, end_point, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRayBase`.)
 
 
-.. class:: region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionBase`.)
 
 
-.. class:: region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionStrictBase`.)
 
 
-.. class:: slice(self, axis, coord, fields=None, center=None, pf=None, node_name=False, **field_parameters):
+.. class:: slice(self, axis, coord, fields=None, center=None, ds=None, node_name=False, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSliceBase`.)
@@ -132,13 +132,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSmoothedCoveringGridBase`.)
 
 
-.. class:: sphere(self, center, radius, fields=None, pf=None, **field_parameters):
+.. class:: sphere(self, center, radius, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSphereBase`.)
 
 
-.. class:: streamline(self, positions, length=1.0, fields=None, pf=None, **field_parameters):
+.. class:: streamline(self, positions, length=1.0, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRStreamlineBase`.)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
@@ -44,7 +44,7 @@
       "tmpdir = tempfile.mkdtemp()\n",
       "\n",
       "# Load the data set with the full simulation information\n",
-      "data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')"
+      "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
      ],
      "language": "python",
      "metadata": {},
@@ -62,7 +62,7 @@
      "collapsed": false,
      "input": [
       "# Load the rockstar data files\n",
-      "halos_pf = load('rockstar_halos/halos_0.0.bin')"
+      "halos_ds = load('rockstar_halos/halos_0.0.bin')"
      ],
      "language": "python",
      "metadata": {},
@@ -80,7 +80,7 @@
      "collapsed": false,
      "input": [
       "# Instantiate a catalog using those two paramter files\n",
-      "hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf, \n",
+      "hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds, \n",
       "                 output_dir=os.path.join(tmpdir, 'halo_catalog'))"
      ],
      "language": "python",
@@ -295,9 +295,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "halos_pf =  load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
+      "halos_ds =  load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
       "\n",
-      "hc_reloaded = HaloCatalog(halos_pf=halos_pf,\n",
+      "hc_reloaded = HaloCatalog(halos_ds=halos_ds,\n",
       "                          output_dir=os.path.join(tmpdir, 'halo_catalog'))"
      ],
      "language": "python",
@@ -407,4 +407,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -222,7 +222,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load(\"cube.fits\")"
+      "ds = load(\"cube.fits\")"
      ],
      "language": "python",
      "metadata": {},
@@ -233,7 +233,7 @@
      "collapsed": false,
      "input": [
       "# Specifying no center gives us the center slice\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"])\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
       "slc.show()"
      ],
      "language": "python",
@@ -246,9 +246,9 @@
      "input": [
       "import yt.units as u\n",
       "# Picking different velocities for the slices\n",
-      "new_center = pf.domain_center\n",
-      "new_center[2] = pf.spec2pixel(-1.0*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center = ds.domain_center\n",
+      "new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -259,8 +259,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "new_center[2] = pf.spec2pixel(0.7*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -271,8 +271,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "new_center[2] = pf.spec2pixel(-0.3*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -290,7 +290,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = ProjectionPlot(pf, \"z\", [\"density\"], proj_style=\"sum\")\n",
+      "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
       "prj.set_log(\"density\", True)\n",
       "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
       "prj.show()"
@@ -303,4 +303,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -84,8 +84,8 @@
   
   from yt.mods import *
   
-  pf = load("DD0000")
-  sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+  ds = load("DD0000")
+  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
   ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
       treecode=True, opening_angle=2.0)
@@ -97,8 +97,8 @@
   
   from yt.mods import *
   
-  pf = load("DD0000")
-  sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+  ds = load("DD0000")
+  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
   ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
       treecode=False)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -58,8 +58,8 @@
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
 
-  pf=load('Enzo_64/RD0006/RedshiftOutput0006')
-  halo_list = parallelHF(pf)
+  ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+  halo_list = parallelHF(ds)
   halo_list.dump('MyHaloList')
 
 Ellipsoid Parameters
@@ -69,8 +69,8 @@
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
 
-  pf=load('Enzo_64/RD0006/RedshiftOutput0006')
-  haloes = LoadHaloes(pf, 'MyHaloList')
+  ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+  haloes = LoadHaloes(ds, 'MyHaloList')
 
 Once the halo information is saved you can load it into the data
 object "haloes", you can get loop over the list of haloes and do
@@ -107,7 +107,7 @@
 
 .. code-block:: python
 
-  ell = pf.ellipsoid(ell_param[0],
+  ell = ds.ellipsoid(ell_param[0],
   ell_param[1],
   ell_param[2],
   ell_param[3],

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -9,7 +9,7 @@
 backwards compatible in that output from old halo finders may be loaded.
 
 A catalog of halos can be created from any initial dataset given to halo 
-catalog through data_pf. These halos can be found using friends-of-friends,
+catalog through data_ds. These halos can be found using friends-of-friends,
 HOP, and Rockstar. The finder_method keyword dictates which halo finder to
 use. The available arguments are 'fof', 'hop', and'rockstar'. For more
 details on the relative differences between these halo finders see 
@@ -19,32 +19,32 @@
 
    from yt.mods import *
    from yt.analysis_modules.halo_analysis.api import HaloCatalog
-   data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
-   hc = HaloCatalog(data_pf=data_pf, finder_method='hop')
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
 
 A halo catalog may also be created from already run rockstar outputs. 
 This method is not implemented for previously run friends-of-friends or 
 HOP finders. Even though rockstar creates one file per processor, 
 specifying any one file allows the full catalog to be loaded. Here we 
 only specify the file output by the processor with ID 0. Note that the 
-argument for supplying a rockstar output is `halos_pf`, not `data_pf`.
+argument for supplying a rockstar output is `halos_ds`, not `data_ds`.
 
 .. code-block:: python
 
-   halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
-   hc = HaloCatalog(halos_pf=halos_pf)
+   halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+   hc = HaloCatalog(halos_ds=halos_ds)
 
 Although supplying only the binary output of the rockstar halo finder 
 is sufficient for creating a halo catalog, it is not possible to find 
 any new information about the identified halos. To associate the halos 
 with the dataset from which they were found, supply arguments to both 
-halos_pf and data_pf.
+halos_ds and data_ds.
 
 .. code-block:: python
 
-   halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
-   data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
-   hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf)
+   halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
 
 A data container can also be supplied via keyword data_source, 
 associated with either dataset, to control the spatial region in 
@@ -215,8 +215,8 @@
 
 .. code-block:: python
 
-   hpf = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
-   hc = HaloCatalog(halos_pf=hpf,
+   hds = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
+   hc = HaloCatalog(halos_ds=hds,
                     output_dir="halo_catalogs/catalog_0046")
    hc.add_callback("load_profiles", output_dir="profiles",
                    filename="virial_profiles")

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -60,8 +60,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0030")
-  hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out", num_sigma_bins=200,
+  ds = load("data0030")
+  hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out", num_sigma_bins=200,
   mass_column=5)
 
 Attached to ``hmf`` is the convenience function ``write_out``, which saves
@@ -102,8 +102,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0030")
-  hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out", 
+  ds = load("data0030")
+  hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4)
   hmf.write_out(prefix='hmf')

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/halo_profiling.rst
--- a/doc/source/analyzing/analysis_modules/halo_profiling.rst
+++ b/doc/source/analyzing/analysis_modules/halo_profiling.rst
@@ -395,8 +395,8 @@
    def find_min_temp_dist(sphere):
        old = sphere.center
        ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
-       d = sphere.pf['kpc'] * periodic_dist(old, [mx, my, mz],
-           sphere.pf.domain_right_edge - sphere.pf.domain_left_edge)
+       d = sphere.ds['kpc'] * periodic_dist(old, [mx, my, mz],
+           sphere.ds.domain_right_edge - sphere.ds.domain_left_edge)
        # If new center farther than 5 kpc away, don't recenter
        if d > 5.: return [-1, -1, -1]
        return [mx,my,mz]
@@ -426,7 +426,7 @@
              128, 'temperature', 1e2, 1e7, True,
              end_collect=False)
        my_profile.add_fields('cell_mass', weight=None, fractional=False)
-       my_filename = os.path.join(sphere.pf.fullpath, '2D_profiles', 
+       my_filename = os.path.join(sphere.ds.fullpath, '2D_profiles', 
              'Halo_%04d.h5' % halo['id'])
        my_profile.write_out_h5(my_filename)
 

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/hmf_howto.rst
--- a/doc/source/analyzing/analysis_modules/hmf_howto.rst
+++ b/doc/source/analyzing/analysis_modules/hmf_howto.rst
@@ -27,8 +27,8 @@
 .. code-block:: python
 
   from yt.mods import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds)
   halo_list.write_out("HopAnalysis.out")
 
 The only important columns of data in the text file ``HopAnalysis.out``
@@ -79,8 +79,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0001")
-  hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out", 
+  ds = load("data0001")
+  hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4, mass_column=5, num_sigma_bins=200)
   hmf.write_out(prefix='hmf')
@@ -107,9 +107,9 @@
   from yt.analysis_modules.halo_mass_function.api import *
   
   # If desired, start loop here.
-  pf = load("data0001")
+  ds = load("data0001")
   
-  halo_list = HaloFinder(pf)
+  halo_list = HaloFinder(ds)
   halo_list.write_out("HopAnalysis.out")
   
   hp = HP.HaloProfiler("data0001", halo_list_file='HopAnalysis.out')
@@ -120,7 +120,7 @@
                 virial_quantities=['TotalMassMsun','RadiusMpc'])
   hp.make_profiles(filename="VirialHaloes.out")
   
-  hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out", 
+  hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4, mass_column=5, num_sigma_bins=200)
   hmf.write_out(prefix='hmf')

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst
@@ -65,7 +65,7 @@
    gathering datasets for time series.  Default: True.
 
  * **set_parameters** (*dict*): Dictionary of parameters to attach to 
-   pf.parameters.  Default: None.
+   ds.parameters.  Default: None.
 
  * **output_dir** (*string*): The directory in which images and data files
     will be written.  Default: 'LC'.

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -43,7 +43,7 @@
 
 .. code:: python
 
-    pf = load("MHDSloshing/virgo_low_res.0054.vtk",
+    ds = load("MHDSloshing/virgo_low_res.0054.vtk",
               parameters={"time_unit":(1.0,"Myr"),
                           "length_unit":(1.0,"Mpc"),
                           "mass_unit":(1.0e14,"Msun")}) 
@@ -418,7 +418,7 @@
 evacuated two "bubbles" of radius 30 kpc at a distance of 50 kpc from
 the center. 
 
-Now, we create a parameter file out of this dataset:
+Now, we create a yt Dataset object out of this dataset:
 
 .. code:: python
 
@@ -440,7 +440,7 @@
 
 .. code:: python
 
-   sphere = ds.sphere(pf.domain_center, (1.0,"Mpc"))
+   sphere = ds.sphere(ds.domain_center, (1.0,"Mpc"))
        
    A = 6000.
    exp_time = 2.0e5

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst
+++ b/doc/source/analyzing/analysis_modules/radial_column_density.rst
@@ -41,15 +41,15 @@
 
   from yt.mods import *
   from yt.analysis_modules.radial_column_density.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   
-  rcdnumdens = RadialColumnDensity(pf, 'NumberDensity', [0.5, 0.5, 0.5],
+  rcdnumdens = RadialColumnDensity(ds, 'NumberDensity', [0.5, 0.5, 0.5],
     max_radius = 0.5)
   def _RCDNumberDensity(field, data, rcd = rcdnumdens):
       return rcd._build_derived_field(data)
   add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
   
-  dd = pf.h.all_data()
+  dd = ds.all_data()
   print dd['RCDNumberDensity']
 
 The field ``RCDNumberDensity`` can be used just like any other derived field

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/radmc3d_export.rst
--- a/doc/source/analyzing/analysis_modules/radmc3d_export.rst
+++ b/doc/source/analyzing/analysis_modules/radmc3d_export.rst
@@ -41,8 +41,8 @@
 
 .. code-block:: python
 
-    pf = load("galaxy0030/galaxy0030")
-    writer = RadMC3DWriter(pf)
+    ds = load("galaxy0030/galaxy0030")
+    writer = RadMC3DWriter(ds)
     
     writer.write_amr_grid()
     writer.write_dust_file("DustDensity", "dust_density.inp")
@@ -87,8 +87,8 @@
         return (x_co/mu_h)*data["density"]
     add_field("NumberDensityCO", function=_NumberDensityCO)
     
-    pf = load("galaxy0030/galaxy0030")
-    writer = RadMC3DWriter(pf)
+    ds = load("galaxy0030/galaxy0030")
+    writer = RadMC3DWriter(ds)
     
     writer.write_amr_grid()
     writer.write_line_file("NumberDensityCO", "numberdens_co.inp")

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/running_halofinder.rst
--- a/doc/source/analyzing/analysis_modules/running_halofinder.rst
+++ b/doc/source/analyzing/analysis_modules/running_halofinder.rst
@@ -57,8 +57,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds)
 
 Running FoF is similar:
 
@@ -66,8 +66,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = FOFHaloFinder(pf)
+  ds = load("data0001")
+  halo_list = FOFHaloFinder(ds)
 
 Halo Data Access
 ----------------
@@ -172,8 +172,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  haloes = HaloFinder(pf)
+  ds = load("data0001")
+  haloes = HaloFinder(ds)
   haloes.dump("basename")
 
 It is easy to load the halos using the ``LoadHaloes`` class:
@@ -182,8 +182,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  haloes = LoadHaloes(pf, "basename")
+  ds = load("data0001")
+  haloes = LoadHaloes(ds, "basename")
 
 Everything that can be done with ``haloes`` in the first example should be
 possible with ``haloes`` in the second.
@@ -229,10 +229,10 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf,padding=0.02)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds,padding=0.02)
   # --or--
-  halo_list = FOFHaloFinder(pf,padding=0.02)
+  halo_list = FOFHaloFinder(ds,padding=0.02)
 
 The ``padding`` parameter is in simulation units and defaults to 0.02. This parameter is how much padding
 is added to each of the six sides of a subregion. This value should be 2x-3x larger than the largest
@@ -343,8 +343,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = parallelHF(pf)
+  ds = load("data0001")
+  halo_list = parallelHF(ds)
 
 Parallel HOP has these user-set options:
 
@@ -421,8 +421,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = parallelHF(pf, threshold=80.0, dm_only=True, resize=False, 
+  ds = load("data0001")
+  halo_list = parallelHF(ds, threshold=80.0, dm_only=True, resize=False, 
   rearrange=True, safety=1.5, premerge=True)
   halo_list.write_out("ParallelHopAnalysis.out")
   halo_list.write_particle_list("parts")
@@ -445,11 +445,11 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load('data0458')
+  ds = load('data0458')
   # Note that the first term below, [0.5]*3, defines the center of
   # the region and is not used. It can be any value.
-  sv = pf.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
-  halos = HaloFinder(pf, subvolume = sv)
+  sv = ds.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
+  halos = HaloFinder(ds, subvolume = sv)
   halos.write_out("sv.out")
 
 
@@ -522,7 +522,7 @@
     the width of the smallest grid element in the simulation from the
     last data snapshot (i.e. the one where time has evolved the
     longest) in the time series:
-    ``pf_last.index.get_smallest_dx() * pf_last['mpch']``.
+    ``ds_last.index.get_smallest_dx() * ds_last['mpch']``.
   * ``total_particles``, if supplied, this is a pre-calculated
     total number of dark matter
     particles present in the simulation. For example, this is useful
@@ -544,21 +544,21 @@
 out*list) and binary (halo*bin) files inside the ``outbase`` directory. 
 We use the halo list classes to recover the information. 
 
-Inside the ``outbase`` directory there is a text file named ``pfs.txt``
-that records the connection between pf names and the Rockstar file names.
+Inside the ``outbase`` directory there is a text file named ``datasets.txt``
+that records the connection between ds names and the Rockstar file names.
 
 The halo list can be automatically generated from the RockstarHaloFinder 
 object by calling ``RockstarHaloFinder.halo_list()``. Alternatively, the halo
 lists can be built from the RockstarHaloList class directly 
-``LoadRockstarHalos(pf,'outbase/out_0.list')``.
+``LoadRockstarHalos(ds,'outbase/out_0.list')``.
 
 .. code-block:: python
     
-    rh = RockstarHaloFinder(pf)
+    rh = RockstarHaloFinder(ds)
     #First method of creating the halo lists:
     halo_list = rh.halo_list()    
     #Alternate method of creating halo_list:
-    halo_list = LoadRockstarHalos(pf, 'rockstar_halos/out_0.list')
+    halo_list = LoadRockstarHalos(ds, 'rockstar_halos/out_0.list')
 
 The above ``halo_list`` is very similar to any other list of halos loaded off
 disk.
@@ -624,18 +624,18 @@
     
     def main():
         import enzo
-        pf = EnzoDatasetInMemory()
+        ds = EnzoDatasetInMemory()
         mine = ytcfg.getint('yt','__topcomm_parallel_rank')
         size = ytcfg.getint('yt','__topcomm_parallel_size')
 
         # Call rockstar.
-        ts = DatasetSeries([pf])
-        outbase = "./rockstar_halos_%04d" % pf['NumberOfPythonTopGridCalls']
+        ts = DatasetSeries([ds])
+        outbase = "./rockstar_halos_%04d" % ds['NumberOfPythonTopGridCalls']
         rh = RockstarHaloFinder(ts, num_readers = size,
             outbase = outbase)
         rh.run()
     
         # Load the halos off disk.
         fname = outbase + "/out_0.list"
-        rhalos = LoadRockstarHalos(pf, fname)
+        rhalos = LoadRockstarHalos(ds, fname)
 

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/star_analysis.rst
--- a/doc/source/analyzing/analysis_modules/star_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/star_analysis.rst
@@ -27,9 +27,9 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  dd = pf.h.all_data()
-  sfr = StarFormationRate(pf, data_source=dd)
+  ds = load("data0030")
+  dd = ds.all_data()
+  sfr = StarFormationRate(ds, data_source=dd)
 
 or just a small part of the volume:
 
@@ -37,9 +37,9 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   sp = p.h.sphere([0.5,0.5,0.5], 0.05)
-  sfr = StarFormationRate(pf, data_source=sp)
+  sfr = StarFormationRate(ds, data_source=sp)
 
 If the stars to be analyzed cannot be defined by a data_source, arrays can be
 passed. In this case, the units for the ``star_mass`` must be in Msun,
@@ -51,8 +51,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  ds = load("data0030")
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   # This puts the particle data for *all* the particles in the region re
   # into the arrays sm and ct.
   sm = re["ParticleMassMsun"]
@@ -65,7 +65,7 @@
   # 100 is a time in code units.
   sm_old = sm[ct < 100]
   ct_old = ct[ct < 100]
-  sfr = StarFormationRate(pf, star_mass=sm_old, star_creation_time=ct_old,
+  sfr = StarFormationRate(ds, star_mass=sm_old, star_creation_time=ct_old,
   volume=re.volume('mpc'))
 
 To output the data to a text file, use the command ``.write_out``:
@@ -139,8 +139,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  spec = SpectrumBuilder(pf, bcdir="/home/username/bc/", model="chabrier")
+  ds = load("data0030")
+  spec = SpectrumBuilder(ds, bcdir="/home/username/bc/", model="chabrier")
 
 In order to analyze a set of stars, use the ``calculate_spectrum`` command.
 It accepts either a ``data_source``, or a set of arrays with the star 
@@ -148,7 +148,7 @@
 
 .. code-block:: python
 
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   spec.calculate_spectrum(data_source=re)
 
 If a subset of stars are desired, call it like this. ``star_mass`` is in units
@@ -157,7 +157,7 @@
 
 .. code-block:: python
 
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   # This puts the particle data for *all* the particles in the region re
   # into the arrays sm, ct and metal.
   sm = re["ParticleMassMsun"]
@@ -223,14 +223,14 @@
 
 Below is an example of an absurd SED for universe-old stars all with 
 solar metallicity at a redshift of zero. Note that even in this example,
-a ``pf`` is required.
+a ``ds`` is required.
 
 .. code-block:: python
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="chabrier")
+  ds = load("data0030")
+  spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="chabrier")
   sm = np.ones(100)
   ct = np.zeros(100)
   spec.calculate_spectrum(star_mass=sm, star_creation_time=ct, star_metallicity_constant=0.02)
@@ -252,11 +252,11 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   # Find all the haloes, and include star particles.
-  haloes = HaloFinder(pf, dm_only=False)
+  haloes = HaloFinder(ds, dm_only=False)
   # Set up the spectrum builder.
-  spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="salpeter")
+  spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="salpeter")
   # Iterate over the haloes.
   for halo in haloes:
       # Get the pertinent arrays.

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/sunrise_export.rst
--- a/doc/source/analyzing/analysis_modules/sunrise_export.rst
+++ b/doc/source/analyzing/analysis_modules/sunrise_export.rst
@@ -18,15 +18,15 @@
 	from yt.mods import *
 	import numpy as na
 
-	pf = ARTDataset(file_amr)
-	potential_value,center=pf.h.find_min('Potential_New')
-	root_cells = pf.domain_dimensions[0]
+	ds = ARTDataset(file_amr)
+	potential_value,center=ds.find_min('Potential_New')
+	root_cells = ds.domain_dimensions[0]
 	le = np.floor(root_cells*center) #left edge
 	re = np.ceil(root_cells*center) #right edge
 	bounds = [(le[0], re[0]-le[0]), (le[1], re[1]-le[1]), (le[2], re[2]-le[2])] 
 	#bounds are left edge plus a span
 	bounds = numpy.array(bounds,dtype='int')
-	amods.sunrise_export.export_to_sunrise(pf, out_fits_file,subregion_bounds = bounds)
+	amods.sunrise_export.export_to_sunrise(ds, out_fits_file,subregion_bounds = bounds)
 
 To ensure that the camera is centered on the galaxy, we find the center by finding the minimum of the gravitational potential. The above code takes that center, and casts it in terms of which root cells should be extracted. At the moment, Sunrise accepts a strict octree, and you can only extract a 2x2x2 domain on the root grid, and not an arbitrary volume. See the optimization section later for workarounds. On my reasonably recent machine, the export process takes about 30 minutes.
 
@@ -51,7 +51,7 @@
 	col_list.append(pyfits.Column("L_bol", format="D",array=np.zeros(mass_current.size)))
 	cols = pyfits.ColDefs(col_list)
 
-	amods.sunrise_export.export_to_sunrise(pf, out_fits_file,write_particles=cols,
+	amods.sunrise_export.export_to_sunrise(ds, out_fits_file,write_particles=cols,
 	    subregion_bounds = bounds)
 
 This code snippet takes the stars in a region outlined by the ``bounds`` variable, organizes them into pyfits columns which are then passed to export_to_sunrise. Note that yt units are in CGS, and Sunrise accepts units in (physical) kpc, kelvin, solar masses, and years.  
@@ -68,8 +68,8 @@
 .. code-block:: python
 
 	for x,a in enumerate(zip(pos,age)): #loop over stars
-	    center = x*pf['kpc']
-	    grid,idx = find_cell(pf.index.grids[0],center)
+	    center = x*ds['kpc']
+	    grid,idx = find_cell(ds.index.grids[0],center)
 	    pk[i] = grid['Pk'][idx]
 
 This code is how Sunrise calculates the pressure, so we can add our own derived field:
@@ -79,7 +79,7 @@
 	def _Pk(field,data):
 	    #calculate pressure over Boltzmann's constant: P/k=(n/V)T
 	    #Local stellar ISM values are ~16500 Kcm^-3
-	    vol = data['cell_volume'].astype('float64')*data.pf['cm']**3.0 #volume in cm
+	    vol = data['cell_volume'].astype('float64')*data.ds['cm']**3.0 #volume in cm
 	    m_g = data["cell_mass"]*1.988435e33 #mass of H in g
 	    n_g = m_g*5.97e23 #number of H atoms
 	    teff = data["temperature"]

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst
@@ -35,7 +35,7 @@
     from yt.mods import *
     from yt.analysis_modules.two_point_functions.api import *
     
-    pf = load("data0005")
+    ds = load("data0005")
     
     # Calculate the S in RMS velocity difference between the two points.
     # All functions have five inputs. The first two are containers
@@ -55,7 +55,7 @@
     # the number of pairs of points to calculate, how big a data queue to
     # use, the range of pair separations and how many lengths to use, 
     # and how to divide that range (linear or log).
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -90,7 +90,7 @@
 
     from yt.mods import *
     ...
-    tpf = amods.two_point_functions.TwoPointFunctions(pf, ...)
+    tpf = amods.two_point_functions.TwoPointFunctions(ds, ...)
 
 
 Probability Distribution Function
@@ -261,12 +261,12 @@
 Before any functions can be added, the ``TwoPointFunctions`` object needs
 to be created. It has these inputs:
 
-  * ``pf`` (the only required input and is always the first term).
+  * ``ds`` (the only required input and is always the first term).
   * Field list, required, an ordered list of field names used by the
     functions. The order in this list will need to be referenced when writing
     functions. Derived fields may be used here if they are defined first.
   * ``left_edge``, ``right_edge``, three-element lists of floats:
-    Used to define a sub-region of the full volume in which to run the TPF.
+    Used to define a sub-region of the full volume in which to run the TDS.
     Default=None, which is equivalent to running on the full volume. Both must
     be set to have any effect.
   * ``total_values``, integer: The number of random points to generate globally
@@ -298,7 +298,7 @@
     guarantees that the point pairs will be in different cells for the most 
     refined regions.
     If the first term of the list is -1, the minimum length will be automatically
-    set to sqrt(3)*dx, ex: ``length_range = [-1, 10/pf['kpc']]``.
+    set to sqrt(3)*dx, ex: ``length_range = [-1, 10/ds['kpc']]``.
   * ``vol_ratio``, integer: How to multiply-assign subvolumes to the parallel
     tasks. This number must be an integer factor of the total number of tasks or
     very bad things will happen. The default value of 1 will assign one task
@@ -639,7 +639,7 @@
       return vdiff
     
     ...
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -667,7 +667,7 @@
     from yt.mods import *
     from yt.analysis_modules.two_point_functions.api import *
     
-    pf = load("data0005")
+    ds = load("data0005")
     
     # Calculate the S in RMS velocity difference between the two points.
     # Also store the ratio of densities (keeping them >= 1).
@@ -688,7 +688,7 @@
     # Set the number of pairs of points to calculate, how big a data queue to
     # use, the range of pair separations and how many lengths to use, 
     # and how to divide that range (linear or log).
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -765,7 +765,7 @@
     from yt.analysis_modules.two_point_functions.api import *
     
     # Specify the dataset on which we want to base our work.
-    pf = load('data0005')
+    ds = load('data0005')
     
     # Read in the halo centers of masses.
     CoM = []
@@ -787,7 +787,7 @@
     # For technical reasons (hopefully to be fixed someday) `vol_ratio`
     # needs to be equal to the number of tasks used if this is run
     # in parallel. A value of -1 automatically does this.
-    tpf = TwoPointFunctions(pf, ['x'],
+    tpf = TwoPointFunctions(ds, ['x'],
         total_values=1e7, comm_size=10000, 
         length_number=11, length_range=[2*radius, .5],
         length_type="lin", vol_ratio=-1)
@@ -868,11 +868,11 @@
     from yt.analysis_modules.two_point_functions.api import *
     
     # Specify the dataset on which we want to base our work.
-    pf = load('data0005')
+    ds = load('data0005')
     
     # We work in simulation's units, these are for conversion.
-    vol_conv = pf['cm'] ** 3
-    sm = pf.index.get_smallest_dx()**3
+    vol_conv = ds['cm'] ** 3
+    sm = ds.index.get_smallest_dx()**3
     
     # Our density limit, in gm/cm**3
     dens = 2e-31
@@ -887,13 +887,13 @@
         return d.sum()
     add_quantity("TotalNumDens", function=_NumDens,
         combine_function=_combNumDens, n_ret=1)
-    all = pf.h.all_data()
+    all = ds.all_data()
     n = all.quantities["TotalNumDens"]()
     
     print n,'n'
     
     # Instantiate our TPF object.
-    tpf = TwoPointFunctions(pf, ['density', 'cell_volume'],
+    tpf = TwoPointFunctions(ds, ['density', 'cell_volume'],
         total_values=1e5, comm_size=10000, 
         length_number=11, length_range=[-1, .5],
         length_type="lin", vol_ratio=1)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/analysis_modules/xray_emission_fields.rst
--- a/doc/source/analyzing/analysis_modules/xray_emission_fields.rst
+++ b/doc/source/analyzing/analysis_modules/xray_emission_fields.rst
@@ -60,10 +60,10 @@
   add_xray_emissivity_field(0.5, 7)
   add_xray_photon_emissivity_field(0.5, 7)
 
-  pf = load("enzo_tiny_cosmology/DD0046/DD0046")
-  plot = SlicePlot(pf, 'x', 'Xray_Luminosity_0.5_7keV')
+  ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+  plot = SlicePlot(ds, 'x', 'Xray_Luminosity_0.5_7keV')
   plot.save()
-  plot = ProjectionPlot(pf, 'x', 'Xray_Emissivity_0.5_7keV')
+  plot = ProjectionPlot(ds, 'x', 'Xray_Emissivity_0.5_7keV')
   plot.save()
-  plot = ProjectionPlot(pf, 'x', 'Xray_Photon_Emissivity_0.5_7keV')
+  plot = ProjectionPlot(ds, 'x', 'Xray_Photon_Emissivity_0.5_7keV')
   plot.save()

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -20,11 +20,11 @@
 .. code-block:: python
 
    def _Pressure(field, data):
-       return (data.pf["Gamma"] - 1.0) * \
+       return (data.ds.gamma - 1.0) * \
               data["density"] * data["thermal_energy"]
 
 Note that we do a couple different things here.  We access the "Gamma"
-parameter from the parameter file, we access the "density" field and we access
+parameter from the dataset, we access the "density" field and we access
 the "thermal_energy" field.  "thermal_energy" is, in fact, another derived field!
 ("thermal_energy" deals with the distinction in storage of energy between dual
 energy formalism and non-DEF.)  We don't do any loops, we don't do any
@@ -87,14 +87,14 @@
 .. code-block:: python
 
    >>> from yt.mods import *
-   >>> pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
-   >>> pf.field_list
+   >>> ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+   >>> ds.field_list
    ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
-   >>> pf.field_info['dens']._units
+   >>> ds.field_info['dens']._units
    '\\rm{g}/\\rm{cm}^{3}'
-   >>> pf.field_info['temp']._units
+   >>> ds.field_info['temp']._units
    '\\rm{K}'
-   >>> pf.field_info['velx']._units
+   >>> ds.field_info['velx']._units
    '\\rm{cm}/\\rm{s}'
 
 Thus if you were using any of these fields as input to your derived field, you 
@@ -178,7 +178,7 @@
 
     def _DivV(field, data):
         # We need to set up stencils
-        if data.pf["HydroMethod"] == 2:
+        if data.ds["HydroMethod"] == 2:
             sl_left = slice(None,-2,None)
             sl_right = slice(1,-1,None)
             div_fac = 1.0
@@ -189,11 +189,11 @@
         ds = div_fac * data['dx'].flat[0]
         f  = data["velocity_x"][sl_right,1:-1,1:-1]/ds
         f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
-        if data.pf.dimensionality > 1:
+        if data.ds.dimensionality > 1:
             ds = div_fac * data['dy'].flat[0]
             f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
             f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
-        if data.pf.dimensionality > 2:
+        if data.ds.dimensionality > 2:
             ds = div_fac * data['dz'].flat[0]
             f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
             f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
@@ -241,8 +241,8 @@
         return data["temperature"]*data["density"]**(-2./3.)
     add_field("Entr", function=_Entropy)
 
-    pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
-    writer.save_field(pf, "Entr")
+    ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+    writer.save_field(ds, "Entr")
 
 This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
 
@@ -250,8 +250,8 @@
 
     from yt.mods import *
 
-    pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
-    data = pf.h.all_data()
+    ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+    data = ds.all_data()
     print data["Entr"]
 
 you can work with the field exactly as before, without having to recompute it.
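
For context, a minimal sketch that strings the renamed pieces above together, from a field
definition that reads parameters off ``data.ds`` through to requesting the new field.  The
field name ``my_pressure`` is illustrative, and the GasSloshing sample dataset simply stands
in for any output that provides ``density`` and ``thermal_energy``:

.. code-block:: python

   from yt.mods import *

   def _MyPressure(field, data):
       # Simulation parameters now hang off data.ds rather than data.pf
       return (data.ds.gamma - 1.0) * data["density"] * data["thermal_energy"]

   add_field("my_pressure", function=_MyPressure)

   ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
   dd = ds.all_data()
   print dd["my_pressure"]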

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/external_analysis.rst
--- a/doc/source/analyzing/external_analysis.rst
+++ b/doc/source/analyzing/external_analysis.rst
@@ -18,10 +18,10 @@
    from yt.mods import *
    import radtrans
 
-   pf = load("DD0010/DD0010")
+   ds = load("DD0010/DD0010")
    rt_grids = []
 
-   for grid in pf.index.grids:
+   for grid in ds.index.grids:
        rt_grid = radtrans.RegularBox(
             grid.LeftEdge, grid.RightEdge,
             grid["density"], grid["temperature"], grid["metallicity"])
@@ -39,8 +39,8 @@
    from yt.mods import *
    import pop_synthesis
 
-   pf = load("DD0010/DD0010")
-   dd = pf.h.all_data()
+   ds = load("DD0010/DD0010")
+   dd = ds.all_data()
    star_masses = dd["StarMassMsun"]
    star_metals = dd["StarMetals"]
 

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/generating_processed_data.rst
--- a/doc/source/analyzing/generating_processed_data.rst
+++ b/doc/source/analyzing/generating_processed_data.rst
@@ -43,7 +43,7 @@
 
 .. code-block:: python
 
-   sl = pf.slice(0, 0.5)
+   sl = ds.slice(0, 0.5)
    frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
    my_image = frb["density"]
 
@@ -98,7 +98,7 @@
 
 .. code-block:: python
 
-   source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+   source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
    profile = BinnedProfile1D(source, 128, "density", 1e-24, 1e-10)
    profile.add_fields("cell_mass", weight = None)
    profile.add_fields("temperature")
@@ -128,7 +128,7 @@
 
 .. code-block:: python
 
-   source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+   source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
    prof2d = BinnedProfile2D(source, 128, "density", 1e-24, 1e-10, True,
                                     128, "temperature", 10, 10000, True)
    prof2d.add_fields("cell_mass", weight = None)
@@ -171,7 +171,7 @@
 
 .. code-block:: python
 
-   ray = pf.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
+   ray = ds.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
    print ray["density"]
 
 The points are ordered, but the ray is also traversing cells of varying length,

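For illustration, a short sketch that combines the renamed calls above, from ``ds.slice``
to a saved image.  It assumes ``yt.mods`` exports ``FixedResolutionBuffer``, ``write_image``
and ``np`` as elsewhere in these docs, and uses the IsolatedGalaxy sample dataset purely as
an example:

.. code-block:: python

   from yt.mods import *

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # Slice along x through the domain midplane, then resample a sub-rectangle
   # of that slice onto a fixed-resolution 512x512 buffer.
   sl = ds.slice(0, 0.5)
   frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))

   # The buffer acts like a dictionary of 2D arrays, ready for export.
   write_image(np.log10(frb["density"]), "slice_density.png")
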
diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/ionization_cube.py
--- a/doc/source/analyzing/ionization_cube.py
+++ b/doc/source/analyzing/ionization_cube.py
@@ -13,9 +13,9 @@
 ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
 
 t1 = time.time()
-for pf in ts.piter():
-    z = pf.current_redshift
-    for g in parallel_objects(pf.index.grids, njobs = 16):
+for ds in ts.piter():
+    z = ds.current_redshift
+    for g in parallel_objects(ds.index.grids, njobs = 16):
         i1, j1, k1 = g.get_global_startindex() # Index into our domain
         i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
         # Look for the newly ionized gas

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -26,7 +26,7 @@
 while Enzo calls it "temperature".  Translator functions ensure that any
 derived field relying on "temp" or "temperature" works with both output types.
 
-When a field is requested, the parameter file first looks to see if that field
+When a field is requested, the dataset object first looks to see if that field
 exists on disk.  If it does not, it then queries the list of code-specific
 derived fields.  If it finds nothing there, it then defaults to examining the
 global set of derived fields.
@@ -82,7 +82,7 @@
 
 .. code-block:: python
 
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
 and then look at the temperature of its cells within it via:
 
@@ -105,25 +105,25 @@
 
 .. code-block:: python
 
-   pf = load("my_data")
-   print pf.field_list
-   print pf.derived_field_list
+   ds = load("my_data")
+   print ds.field_list
+   print ds.derived_field_list
 
 When a field is added, it is added to a container that hangs off of the
-parameter file, as well.  All of the field creation options
+dataset, as well.  All of the field creation options
 (:ref:`derived-field-options`) are accessible through this object:
 
 .. code-block:: python
 
-   pf = load("my_data")
-   print pf.field_info["pressure"].get_units()
+   ds = load("my_data")
+   print ds.field_info["pressure"].get_units()
 
 This is a fast way to examine the units of a given field, and additionally you
 can use :meth:`yt.utilities.pydot.get_source` to get the source code:
 
 .. code-block:: python
 
-   field = pf.field_info["pressure"]
+   field = ds.field_info["pressure"]
    print field.get_source()
 
 .. _available-objects:
@@ -142,8 +142,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("RedshiftOutput0005")
-   reg = pf.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
+   ds = load("RedshiftOutput0005")
+   reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
 
 .. include:: _obj_docstrings.inc
 
@@ -192,8 +192,8 @@
 
 .. code-block:: python
 
-   pf = load("my_data")
-   dd = pf.h.all_data()
+   ds = load("my_data")
+   dd = ds.all_data()
    dd.quantities["AngularMomentumVector"]()
 
 The following quantities are available via the ``quantities`` interface.
@@ -264,10 +264,10 @@
 .. python-script::
 
    from yt.mods import *
-   pf = load("enzo_tiny_cosmology/DD0046/DD0046")
-   ad = pf.h.all_data()
+   ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+   ad = ds.all_data()
    new_region = ad.cut_region(['obj["density"] > 1e-29'])
-   plot = ProjectionPlot(pf, "x", "density", weight_field="density",
+   plot = ProjectionPlot(ds, "x", "density", weight_field="density",
                          data_source=new_region)
    plot.save()
 
@@ -291,7 +291,7 @@
 
 .. code-block:: python
 
-   sp = pf.sphere("max", (1.0, 'pc'))
+   sp = ds.sphere("max", (1.0, 'pc'))
    contour_values, connected_sets = sp.extract_connected_sets(
         "density", 3, 1e-30, 1e-20)
 
@@ -355,12 +355,12 @@
 construction of the objects is the difficult part, rather than the generation
 of the data -- this means that you can save out an object as a description of
 how to recreate it in space, but not the actual data arrays affiliated with
-that object.  The information that is saved includes the parameter file off of
+that object.  The information that is saved includes the dataset off of
 which the object "hangs."  It is this piece of information that is the most
 difficult; the object, when reloaded, must be able to reconstruct a parameter
 file from whatever limited information it has in the save file.
 
-To do this, ``yt`` is able to identify parameter files based on a "hash"
+To do this, ``yt`` is able to identify datasets based on a "hash"
 generated from the base file name, the "CurrentTimeIdentifier", and the
 simulation time.  These three characteristics should never be changed outside
 of a simulation, they are independent of the file location on disk, and in
@@ -374,10 +374,10 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("my_data")
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   ds = load("my_data")
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
-   pf.h.save_object(sp, "sphere_to_analyze_later")
+   ds.save_object(sp, "sphere_to_analyze_later")
 
 
 In a later session, we can load it using
@@ -387,8 +387,8 @@
 
    from yt.mods import *
 
-   pf = load("my_data")
-   sphere_to_analyze = pf.h.load_object("sphere_to_analyze_later")
+   ds = load("my_data")
+   sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
 
 Additionally, if we want to store the object independent of the ``.yt`` file,
 we can save the object directly:
@@ -397,8 +397,8 @@
 
    from yt.mods import *
 
-   pf = load("my_data")
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   ds = load("my_data")
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
    sp.save_object("my_sphere", "my_storage_file.cpkl")
 
@@ -414,10 +414,10 @@
    from yt.mods import *
    import shelve
 
-   pf = load("my_data") # not necessary if storeparameterfiles is on
+   ds = load("my_data") # not necessary if storeparameterfiles is on
 
    obj_file = shelve.open("my_storage_file.cpkl")
-   pf, obj = obj_file["my_sphere"]
+   ds, obj = obj_file["my_sphere"]
 
 If you have turned on ``storeparameterfiles`` in your configuration,
 you won't need to load the parameterfile again, as the load process

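To show the renamed object workflow above end to end (container, derived quantity, save
and reload), here is a minimal sketch; the IsolatedGalaxy sample dataset and the storage
name are illustrative:

.. code-block:: python

   from yt.mods import *

   ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("max", (1.0, 'pc'))

   # Derived quantities hang off the container's `quantities` interface.
   print sp.quantities["AngularMomentumVector"]()

   # Both save_object and load_object now live on the dataset object
   # rather than on pf.h.
   ds.save_object(sp, "sphere_to_analyze_later")
   sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
   print sphere_to_analyze["temperature"].max()
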
diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -86,10 +86,10 @@
 .. code-block:: python
 
    from yt.pmods import *
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
    print v, c
-   p = ProjectionPlot(pf, "x", "density")
+   p = ProjectionPlot(ds, "x", "density")
    p.save()
 
 If this script is run in parallel, two of the most expensive operations -
@@ -127,9 +127,9 @@
 .. code-block:: python
 
    from yt.pmods import *
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
-   p = ProjectionPlot(pf, "x", "density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
+   p = ProjectionPlot(ds, "x", "density")
    if is_root():
        print v, c
        p.save()
@@ -151,9 +151,9 @@
           print v, c
        plot.save()
 
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
-   p = ProjectionPlot(pf, "x", "density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
+   p = ProjectionPlot(ds, "x", "density")
    only_on_root(print_and_save_plot, v, c, plot, print=True)
 
 Types of Parallelism
@@ -252,8 +252,8 @@
    for sto, fn in parallel_objects(fns, num_procs, storage = my_storage):
 
        # Open a data file, remembering that fn is different on each task.
-       pf = load(fn)
-       dd = pf.h.all_data()
+       ds = load(fn)
+       dd = ds.all_data()
 
        # This copies fn and the min/max of density to the local copy of
        # my_storage
@@ -261,7 +261,7 @@
        sto.result = dd.quantities["Extrema"]("density")
 
        # Makes and saves a plot of the gas density.
-       p = ProjectionPlot(pf, "x", "density")
+       p = ProjectionPlot(ds, "x", "density")
        p.save()
 
    # At this point, as the loop exits, the local copies of my_storage are
@@ -301,7 +301,7 @@
 processor.  By default, parallel is set to ``True``, so you do not have to
 explicitly set ``parallel = True`` as in the above example. 
 
-One could get the same effect by iterating over the individual parameter files
+One could get the same effect by iterating over the individual datasets
 in the DatasetSeries object:
 
 .. code-block:: python
@@ -309,10 +309,10 @@
    from yt.pmods import *
    ts = DatasetSeries.from_filenames("DD*/output_*", parallel = True)
    my_storage = {}
-   for sto,pf in ts.piter(storage=my_storage):
-       sphere = pf.sphere("max", (1.0, "pc"))
+   for sto,ds in ts.piter(storage=my_storage):
+       sphere = ds.sphere("max", (1.0, "pc"))
        L_vec = sphere.quantities["AngularMomentumVector"]()
-       sto.result_id = pf.parameter_filename
+       sto.result_id = ds.parameter_filename
        sto.result = L_vec
 
    L_vecs = []
@@ -503,14 +503,14 @@
        from yt.mods import *
        import time
        
-       pf = load("DD0152")
+       ds = load("DD0152")
        t0 = time.time()
-       bigstuff, hugestuff = StuffFinder(pf)
-       BigHugeStuffParallelFunction(pf, bigstuff, hugestuff)
+       bigstuff, hugestuff = StuffFinder(ds)
+       BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
        t1 = time.time()
        for i in range(1000000):
            tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
-           array = TinyTeensyParallelFunction(pf, tinystuff, ministuff)
+           array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
            SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
        t2 = time.time()
        

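One step the hunks above only hint at is what to do with ``my_storage`` once a parallel
loop ends: the per-task results are gathered into that dictionary, keyed by ``result_id``.
A hedged sketch of the renamed ``piter`` pattern plus that post-loop step follows (the
glob pattern is reused from the example above; the reporting loop is an assumption about
typical usage, not part of the diff):

.. code-block:: python

   from yt.pmods import *

   ts = DatasetSeries.from_filenames("DD*/output_*", parallel=True)
   my_storage = {}

   for sto, ds in ts.piter(storage=my_storage):
       sphere = ds.sphere("max", (1.0, "pc"))
       sto.result_id = ds.parameter_filename
       sto.result = sphere.quantities["AngularMomentumVector"]()

   # After the loop, each task holds the filled-in dictionary, so only the
   # root task needs to report it.
   if is_root():
       for fn, L_vec in sorted(my_storage.items()):
           print fn, L_vec
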
diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/particles.rst
--- a/doc/source/analyzing/particles.rst
+++ b/doc/source/analyzing/particles.rst
@@ -63,8 +63,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("galaxy1200.dir/galaxy1200")
-   dd = pf.h.all_data()
+   ds = load("galaxy1200.dir/galaxy1200")
+   dd = ds.all_data()
 
    star_particles = dd["creation_time"] > 0.0
    print dd["ParticleMassMsun"][star_particles].max()
@@ -80,8 +80,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("galaxy1200.dir/galaxy1200")
-   dd = pf.h.all_data()
+   ds = load("galaxy1200.dir/galaxy1200")
+   dd = ds.all_data()
 
    star_particles = dd["particle_type"] == 2
    print dd["ParticleMassMsun"][star_particles].max()

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -11,10 +11,10 @@
 
 .. code-block:: python
 
-   for pfi in range(30):
-       fn = "DD%04i/DD%04i" % (pfi, pfi)
-       pf = load(fn)
-       process_output(pf)
+   for dsi in range(30):
+       fn = "DD%04i/DD%04i" % (dsi, dsi)
+       ds = load(fn)
+       process_output(ds)
 
 But this is not really very nice.  This ends up requiring a lot of maintenance.
 The :class:`~yt.data_objects.time_series.DatasetSeries` object has been
@@ -66,8 +66,8 @@
 
    from yt.mods import *
    ts = DatasetSeries.from_filenames("*/*.index")
-   for pf in ts:
-       print pf.current_time
+   for ds in ts:
+       print ds.current_time
 
 This can also operate in parallel, using
 :meth:`~yt.data_objects.time_series.DatasetSeries.piter`.  For more examples,
@@ -101,7 +101,7 @@
    max_rho = ts.tasks["MaximumValue"]("density")
 
 When we call the task, the time series object executes the task on each
-component parameter file.  The results are then returned to the user.  More
+component dataset.  The results are then returned to the user.  More
 complex, multi-task evaluations can be conducted by using the
 :meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
 list of analysis tasks.
@@ -140,14 +140,14 @@
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and pf and then decorate it with
+would write a function that accepts params and ds and then decorate it with
 analysis_task. Here we have done so:
 
 .. code-block:: python
 
    @analysis_task(('particle_type',))
-   def MassInParticleType(params, pf):
-       dd = pf.h.all_data()
+   def MassInParticleType(params, ds):
+       dd = ds.all_data()
        ptype = (dd["particle_type"] == params.particle_type)
        return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
 
@@ -196,8 +196,8 @@
 
 .. code-block:: python
 
-  for pf in my_sim.piter()
-      all_data = pf.h.all_data()
+  for ds in my_sim.piter()
+      all_data = ds.all_data()
       print all_data.quantities['Extrema']('density')
  
 Additional keywords can be given to :meth:`get_time_series` to select a subset

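For comparison with the ``MaximumValue`` task above, a hand-rolled equivalent under the
new naming simply loops over the series, calling ``ds.find_max`` on each component dataset
and recording ``ds.current_time`` alongside the result (a sketch only; the glob pattern
follows the example above):

.. code-block:: python

   from yt.mods import *

   ts = DatasetSeries.from_filenames("*/*.index")
   times = []
   max_rho = []
   for ds in ts:
       v, c = ds.find_max("density")
       times.append(ds.current_time)
       max_rho.append(v)

   for t, v in zip(times, max_rho):
       print t, v
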
diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/analyzing/units/data_selection_and_fields.rst
--- a/doc/source/analyzing/units/data_selection_and_fields.rst
+++ b/doc/source/analyzing/units/data_selection_and_fields.rst
@@ -34,7 +34,7 @@
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
 
-   dd = ds.h.all_data()
+   dd = ds.all_data()
    dd['root_cell_volume']
 
 No special unit logic needs to happen inside of the function - `np.sqrt` will
@@ -47,7 +47,7 @@
    import numpy as np
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
-   dd = ds.h.all_data()
+   dd = ds.all_data()
 
    print dd['cell_volume'].in_cgs()
    print np.sqrt(dd['cell_volume'].in_cgs())
@@ -70,5 +70,5 @@
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
 
-   dd = ds.h.all_data()
+   dd = ds.all_data()
    dd['root_cell_volume']
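
As a brief note on the unit machinery these snippets rely on: field accesses return yt
arrays that carry units, so conversions are explicit and survive numpy operations.  A
minimal sketch, assuming the same HiresIsolatedGalaxy dataset (the target units below are
illustrative):

.. code-block:: python

   from yt.mods import *
   import numpy as np

   ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
   dd = ds.all_data()

   vol = dd['cell_volume']
   print vol.in_cgs()            # values in cm**3
   print vol.in_units('kpc**3')  # explicit unit conversion
   print np.sqrt(vol.in_cgs())   # units propagate through numpy ufuncs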

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
--- a/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
+++ b/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
@@ -57,7 +57,7 @@
      "source": [
       "### Example 1: Simple Time Series\n",
       "\n",
-      "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation.  To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up.  For each parameter file, we'll create an object (`dd`) that covers the entire domain.  (`all_data` is a shorthand function for this.)  We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
+      "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation.  To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up.  For each dataset, we'll create an object (`dd`) that covers the entire domain.  (`all_data` is a shorthand function for this.)  We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
      ]
     },
     {
@@ -102,7 +102,7 @@
       "\n",
       "Let's do something a bit different.  Let's calculate the total mass inside halos and outside halos.\n",
       "\n",
-      "This actually touches a lot of different pieces of machinery in yt.  For every parameter file, we will run the halo finder HOP.  Then, we calculate the total mass in the domain.  Then, for each halo, we calculate the sum of the baryon mass in that halo.  We'll keep running tallies of these two things."
+      "This actually touches a lot of different pieces of machinery in yt.  For every dataset, we will run the halo finder HOP.  Then, we calculate the total mass in the domain.  Then, for each halo, we calculate the sum of the baryon mass in that halo.  We'll keep running tallies of these two things."
      ]
     },
     {

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/amrkdtree_downsampling.py
--- a/doc/source/cookbook/amrkdtree_downsampling.py
+++ b/doc/source/cookbook/amrkdtree_downsampling.py
@@ -20,7 +20,7 @@
 print kd.count_cells()
 
 tf = yt.ColorTransferFunction((-30, -22))
-cam = ds.h.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
+cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
                   tf, volume=kd)
 tf.add_layers(4, 0.01, col_bounds=[-27.5, -25.5], colormap='RdBu_r')
 cam.snapshot("v1.png", clip_ratio=6.0)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py
+++ b/doc/source/cookbook/amrkdtree_to_uniformgrid.py
@@ -15,7 +15,7 @@
 domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
 
 #determine the cellsize in the highest refinement level
-cell_size = pf.domain_width/(pf.domain_dimensions*2**lmax)
+cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
 
 #calculate the left edge of the new grid
 left_edge = domain_center - 512*cell_size
@@ -24,7 +24,7 @@
 ncells = 1024
 
 #ask yt for the specified covering grid
-cgrid = pf.h.covering_grid(lmax, left_edge, np.array([ncells,]*3))
+cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
 
 #get a map of the density into the new grid
 density_map = cgrid["density"].astype(dtype="float32")

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/average_value.py
--- a/doc/source/cookbook/average_value.py
+++ b/doc/source/cookbook/average_value.py
@@ -5,7 +5,7 @@
 field = "temperature"  # The field to average
 weight = "cell_mass"  # The weight for the average
 
-dd = ds.h.all_data()  # This is a region describing the entire box,
+dd = ds.all_data()  # This is a region describing the entire box,
                       # but note it doesn't read anything in yet!
 # We now use our 'quantities' call to get the average quantity
 average_value = dd.quantities["WeightedAverageQuantity"](field, weight)

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/contours_on_slice.py
--- a/doc/source/cookbook/contours_on_slice.py
+++ b/doc/source/cookbook/contours_on_slice.py
@@ -1,13 +1,13 @@
 import yt
 
 # first add density contours on a density slice
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
-p = yt.SlicePlot(pf, "x", "density")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
+p = yt.SlicePlot(ds, "x", "density")
 p.annotate_contour("density")
 p.save()
 
 # then add temperature contours on the same densty slice
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
-p = yt.SlicePlot(pf, "x", "density")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
+p = yt.SlicePlot(ds, "x", "density")
 p.annotate_contour("temperature")
-p.save(str(pf)+'_T_contour')
+p.save(str(ds)+'_T_contour')

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -22,8 +22,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
-      "slc = SlicePlot(pf, 'x', 'density')\n",
+      "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "slc = SlicePlot(ds, 'x', 'density')\n",
       "slc"
      ],
      "language": "python",
@@ -87,4 +87,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb
@@ -51,12 +51,11 @@
       "prj.set_figure_size(5)\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
-      "fig.canvas = FigureCanvasAgg(fig)\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
-      "    prj._switch_pf(pf)\n",
+      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
       "animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)"
@@ -69,4 +68,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -99,12 +99,11 @@
       "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
-      "fig.canvas = FigureCanvasAgg(fig)\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
-      "    prj._switch_pf(pf)\n",
+      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
       "anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
@@ -120,4 +119,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py
+++ b/doc/source/cookbook/find_clumps.py
@@ -4,7 +4,7 @@
 from yt.analysis_modules.level_sets.api import (Clump, find_clumps,
                                                 get_lowest_clumps)
 
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # parameter file to load
+fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # dataset to load
 # this is the field we look for contours over -- we could do
 # this over anything.  Other common choices are 'AveragedDensity'
 # and 'Dark_Matter_Density'.
@@ -66,7 +66,7 @@
 
 # We can also save the clump object to disk to read in later so we don't have
 # to spend a lot of time regenerating the clump objects.
-ds.h.save_object(master_clump, 'My_clumps')
+ds.save_object(master_clump, 'My_clumps')
 
 # Later, we can read in the clump object like so,
 master_clump = ds.load_object('My_clumps')

diff -r f20d58ca2848dd2df5c6e97ae1627b0a623f130a -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ b/doc/source/cookbook/free_free_field.py
@@ -70,9 +70,9 @@
 yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
                 combine_function=_combFreeFreeLuminosity, n_ret=1)
 
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
-sphere = pf.sphere(pf.domain_center, (100., "kpc"))
+sphere = ds.sphere(ds.domain_center, (100., "kpc"))
 
 # Print out the total luminosity at 1 keV for the sphere
 

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/db3fb4bd57ce/
Changeset:   db3fb4bd57ce
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-06-19 05:09:51
Summary:     Merging with mainline
Affected #:  97 files

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/README
--- a/doc/README
+++ b/doc/README
@@ -5,6 +5,6 @@
 http://sphinx.pocoo.org/
 
 Because the documentation requires a number of dependencies, we provide
-pre-build versions online, accessible here:
+pre-built versions online, accessible here:
 
-http://yt-project.org/docs/
+http://yt-project.org/docs/dev-3.0/

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -568,7 +568,7 @@
 mkdir -p ${DEST_DIR}/data
 cd ${DEST_DIR}/data
 echo 'de6d8c6ea849f0206d219303329a0276b3cce7c051eec34377d42aacbe0a4f47ac5145eb08966a338ecddd2b83c8f787ca9956508ad5c39ee2088ad875166410  xray_emissivity.h5' > xray_emissivity.h5.sha512
-get_ytdata xray_emissivity.h5
+[ ! -e xray_emissivity.h5 ] && get_ytdata xray_emissivity.h5
 
 # Set paths to what they should be when yt is activated.
 export PATH=${DEST_DIR}/bin:$PATH
@@ -608,7 +608,6 @@
 echo '3f53d0b474bfd79fea2536d0a9197eaef6c0927e95f2f9fd52dbd6c1d46409d0e649c21ac418d8f7767a9f10fe6114b516e06f2be4b06aec3ab5bdebc8768220  Forthon-0.8.11.tar.gz' > Forthon-0.8.11.tar.gz.sha512
 echo '4941f5aa21aff3743546495fb073c10d2657ff42b2aff401903498638093d0e31e344cce778980f28a7170c6d29eab72ac074277b9d4088376e8692dc71e55c1  PyX-0.12.1.tar.gz' > PyX-0.12.1.tar.gz.sha512
 echo '3df0ba4b1cfef5f02fb27925de4c2ca414eca9000af6a3d475d39063720afe987287c3d51377e0a36b88015573ef699f700782e1749c7a357b8390971d858a79  Python-2.7.6.tgz' > Python-2.7.6.tgz.sha512
-echo '172f2bc671145ebb0add2669c117863db35851fb3bdb192006cd710d4d038e0037497eb39a6d01091cb923f71a7e8982a77b6e80bf71d6275d5d83a363c8d7e5  rockstar-0.99.6.tar.gz' > rockstar-0.99.6.tar.gz.sha512
 echo '276bd9c061ec9a27d478b33078a86f93164ee2da72210e12e2c9da71dcffeb64767e4460b93f257302b09328eda8655e93c4b9ae85e74472869afbeae35ca71e  blas.tar.gz' > blas.tar.gz.sha512
 echo '00ace5438cfa0c577e5f578d8a808613187eff5217c35164ffe044fbafdfec9e98f4192c02a7d67e01e5a5ccced630583ad1003c37697219b0f147343a3fdd12  bzip2-1.0.6.tar.gz' > bzip2-1.0.6.tar.gz.sha512
 echo 'a296dfcaef7e853e58eed4e24b37c4fa29cfc6ac688def048480f4bb384b9e37ca447faf96eec7b378fd764ba291713f03ac464581d62275e28eb2ec99110ab6  reason-js-20120623.zip' > reason-js-20120623.zip.sha512
@@ -624,7 +623,6 @@
 echo 'd58177f3971b6d07baf6f81a2088ba371c7e43ea64ee7ada261da97c6d725b4bd4927122ac373c55383254e4e31691939276dab08a79a238bfa55172a3eff684  numpy-1.7.1.tar.gz' > numpy-1.7.1.tar.gz.sha512
 echo '9c0a61299779aff613131aaabbc255c8648f0fa7ab1806af53f19fbdcece0c8a68ddca7880d25b926d67ff1b9201954b207919fb09f6a290acb078e8bbed7b68  python-hglib-1.0.tar.gz' > python-hglib-1.0.tar.gz.sha512
 echo 'c65013293dd4049af5db009fdf7b6890a3c6b1e12dd588b58fb5f5a5fef7286935851fb7a530e03ea16f28de48b964e50f48bbf87d34545fd23b80dd4380476b  pyzmq-13.1.0.tar.gz' > pyzmq-13.1.0.tar.gz.sha512
-echo '172f2bc671145ebb0add2669c117863db35851fb3bdb192006cd710d4d038e0037497eb39a6d01091cb923f71a7e8982a77b6e80bf71d6275d5d83a363c8d7e5  rockstar-0.99.6.tar.gz' > rockstar-0.99.6.tar.gz.sha512
 echo '80c8e137c3ccba86575d4263e144ba2c4684b94b5cd620e200f094c92d4e118ea6a631d27bdb259b0869771dfaeeae68c0fdd37fdd740b9027ee185026e921d4  scipy-0.12.0.tar.gz' > scipy-0.12.0.tar.gz.sha512
 echo '96f3e51b46741450bc6b63779c10ebb4a7066860fe544385d64d1eda52592e376a589ef282ace2e1df73df61c10eab1a0d793abbdaf770e60289494d4bf3bcb4  sqlite-autoconf-3071700.tar.gz' > sqlite-autoconf-3071700.tar.gz.sha512
 echo '2992baa3edfb4e1842fb642abf0bf0fc0bf56fc183aab8fed6b3c42fbea928fa110ede7fdddea2d63fc5953e8d304b04da433dc811134fadefb1eecc326121b8  sympy-0.7.3.tar.gz' > sympy-0.7.3.tar.gz.sha512
@@ -657,7 +655,6 @@
 get_ytproject $NOSE.tar.gz
 get_ytproject $PYTHON_HGLIB.tar.gz
 get_ytproject $SYMPY.tar.gz
-get_ytproject $ROCKSTAR.tar.gz
 if [ $INST_BZLIB -eq 1 ]
 then
     if [ ! -e $BZLIB/done ]
@@ -816,6 +813,7 @@
         YT_DIR=`dirname $ORIG_PWD`
     elif [ ! -e yt-hg ]
     then
+        echo "Cloning yt"
         YT_DIR="$PWD/yt-hg/"
         ( ${HG_EXEC} --debug clone https://bitbucket.org/yt_analysis/yt-supplemental/ 2>&1 ) 1>> ${LOG_FILE}
         # Recently the hg server has had some issues with timeouts.  In lieu of
@@ -824,9 +822,9 @@
         ( ${HG_EXEC} --debug clone https://bitbucket.org/yt_analysis/yt/ ./yt-hg 2>&1 ) 1>> ${LOG_FILE}
         # Now we update to the branch we're interested in.
         ( ${HG_EXEC} -R ${YT_DIR} up -C ${BRANCH} 2>&1 ) 1>> ${LOG_FILE}
-    elif [ -e yt-3.0-hg ] 
+    elif [ -e yt-hg ]
     then
-        YT_DIR="$PWD/yt-3.0-hg/"
+        YT_DIR="$PWD/yt-hg/"
     fi
     echo Setting YT_DIR=${YT_DIR}
 fi
@@ -943,14 +941,19 @@
 # Now we build Rockstar and set its environment variable.
 if [ $INST_ROCKSTAR -eq 1 ]
 then
-    if [ ! -e Rockstar/done ]
+    if [ ! -e rockstar/done ]
     then
-        [ ! -e Rockstar ] && tar xfz $ROCKSTAR.tar.gz
         echo "Building Rockstar"
-        cd Rockstar
+        if [ ! -e rockstar ]
+        then
+            ( hg clone http://bitbucket.org/MatthewTurk/rockstar 2>&1 ) 1>> ${LOG_FILE}
+        fi
+        cd rockstar
+        ( hg pull 2>&1 ) 1>> ${LOG_FILE}
+        ( hg up -C tip 2>&1 ) 1>> ${LOG_FILE}
         ( make lib 2>&1 ) 1>> ${LOG_FILE} || do_exit
         cp librockstar.so ${DEST_DIR}/lib
-        ROCKSTAR_DIR=${DEST_DIR}/src/Rockstar
+        ROCKSTAR_DIR=${DEST_DIR}/src/rockstar
         echo $ROCKSTAR_DIR > ${YT_DIR}/rockstar.cfg
         touch done
         cd ..

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/1)_Introduction.ipynb
--- a/doc/source/bootcamp/1)_Introduction.ipynb
+++ b/doc/source/bootcamp/1)_Introduction.ipynb
@@ -1,6 +1,7 @@
 {
  "metadata": {
-  "name": ""
+  "name": "",
+  "signature": "sha256:f098e81e1851a0884440b328147d51c2cbba40bdbe4c5946b6fe3fec15189c84"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -32,9 +33,40 @@
       "5. Derived Fields and Profiles (IsolatedGalaxy dataset)\n",
       "6. Volume Rendering (IsolatedGalaxy dataset)"
      ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The following code will download the data needed for this tutorial automatically using `curl`. It may take some time so please wait when the kernel is busy. You will need to set `download_datasets` to True before using it."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "download_datasets = True\n",
+      "if download_datasets:\n",
+      "    !curl -sSO http://yt-project.org/data/enzo_tiny_cosmology.tar\n",
+      "    print \"Got enzo_tiny_cosmology\"\n",
+      "    !tar xf enzo_tiny_cosmology.tar\n",
+      "    \n",
+      "    !curl -sSO http://yt-project.org/data/Enzo_64.tar\n",
+      "    print \"Got Enzo_64\"\n",
+      "    !tar xf Enzo_64.tar\n",
+      "    \n",
+      "    !curl -sSO http://yt-project.org/data/IsolatedGalaxy.tar\n",
+      "    print \"Got IsolatedGalaxy\"\n",
+      "    !tar xf IsolatedGalaxy.tar\n",
+      "    \n",
+      "    print \"All done!\""
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
     }
    ],
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/2)_Data_Inspection.ipynb
--- a/doc/source/bootcamp/2)_Data_Inspection.ipynb
+++ b/doc/source/bootcamp/2)_Data_Inspection.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:15cdc35ddb8b1b938967237e17534149f734f4e7a61ebd37d74b675f8059da20"
+  "signature": "sha256:9d67e9e4ca5ce92dcd0658025dbfbd28be47b47ca8d4531fdac16cc2c2fa038b"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -21,7 +21,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *"
+      "import yt"
      ],
      "language": "python",
      "metadata": {},
@@ -38,7 +38,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
+      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
      ],
      "language": "python",
      "metadata": {},

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/3)_Simple_Visualization.ipynb
--- a/doc/source/bootcamp/3)_Simple_Visualization.ipynb
+++ b/doc/source/bootcamp/3)_Simple_Visualization.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:eb5fbf5eb55a9c8997c687f072c8c6030e74bef0048a72b4f74a06893c11b80a"
+  "signature": "sha256:c00ba7fdbbd9ea957d06060ad70f06f629b1fd4ebf5379c1fdad2697ab0a4cd6"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -21,7 +21,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *"
+      "import yt"
      ],
      "language": "python",
      "metadata": {},
@@ -38,7 +38,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
       "print \"Redshift =\", ds.current_redshift"
      ],
      "language": "python",
@@ -58,7 +58,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "p = ProjectionPlot(ds, \"y\", \"density\")\n",
+      "p = yt.ProjectionPlot(ds, \"y\", \"density\")\n",
       "p.show()"
      ],
      "language": "python",
@@ -135,7 +135,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "p = ProjectionPlot(ds, \"z\", [\"density\", \"temperature\"], weight_field=\"density\")\n",
+      "p = yt.ProjectionPlot(ds, \"z\", [\"density\", \"temperature\"], weight_field=\"density\")\n",
       "p.show()"
      ],
      "language": "python",
@@ -189,8 +189,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"Enzo_64/DD0043/data0043\")\n",
-      "s = SlicePlot(ds, \"z\", [\"density\", \"velocity_magnitude\"], center=\"max\")\n",
+      "ds = yt.load(\"Enzo_64/DD0043/data0043\")\n",
+      "s = yt.SlicePlot(ds, \"z\", [\"density\", \"velocity_magnitude\"], center=\"max\")\n",
       "s.set_cmap(\"velocity_magnitude\", \"kamae\")\n",
       "s.zoom(10.0)"
      ],
@@ -243,7 +243,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "s = SlicePlot(ds, \"x\", [\"density\"], center=\"max\")\n",
+      "s = yt.SlicePlot(ds, \"x\", [\"density\"], center=\"max\")\n",
       "s.annotate_contour(\"temperature\")\n",
       "s.zoom(2.5)"
      ],
@@ -272,4 +272,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
--- a/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
+++ b/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:41293a66cd6fd5eae6da2d0343549144dc53d72e83286999faab3cf21d801f51"
+  "signature": "sha256:a46e1baa90d32045c2b524100f28bad41b3665249612c9a275ee0375a6f4be20"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -22,7 +22,8 @@
      "collapsed": false,
      "input": [
       "%matplotlib inline\n",
-      "from yt.mods import *\n",
+      "import yt\n",
+      "import numpy as np\n",
       "from matplotlib import pylab\n",
       "from yt.analysis_modules.halo_finding.api import HaloFinder"
      ],
@@ -45,7 +46,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ts = DatasetSeries(\"enzo_tiny_cosmology/*/*.hierarchy\")"
+      "ts = yt.DatasetSeries(\"enzo_tiny_cosmology/*/*.hierarchy\")"
      ],
      "language": "python",
      "metadata": {},
@@ -87,8 +88,13 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.semilogy(times, rho_ex[:,0], '-xk')\n",
-      "pylab.semilogy(times, rho_ex[:,1], '-xr')"
+      "pylab.semilogy(times, rho_ex[:,0], '-xk', label='Minimum')\n",
+      "pylab.semilogy(times, rho_ex[:,1], '-xr', label='Maximum')\n",
+      "pylab.ylabel(\"Density ($g/cm^3$)\")\n",
+      "pylab.xlabel(\"Time (Gyr)\")\n",
+      "pylab.legend()\n",
+      "pylab.ylim(1e-32, 1e-21)\n",
+      "pylab.show()"
      ],
      "language": "python",
      "metadata": {},
@@ -109,13 +115,15 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
+      "from yt.units import Msun\n",
+      "\n",
       "mass = []\n",
       "zs = []\n",
       "for ds in ts:\n",
       "    halos = HaloFinder(ds)\n",
       "    dd = ds.all_data()\n",
       "    total_mass = dd.quantities.total_quantity(\"cell_mass\").in_units(\"Msun\")\n",
-      "    total_in_baryons = 0.0\n",
+      "    total_in_baryons = 0.0*Msun\n",
       "    for halo in halos:\n",
       "        sp = halo.get_sphere()\n",
       "        total_in_baryons += sp.quantities.total_quantity(\"cell_mass\").in_units(\"Msun\")\n",
@@ -137,7 +145,11 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.loglog(zs, mass, '-xb')"
+      "pylab.semilogx(zs, mass, '-xb')\n",
+      "pylab.xlabel(\"Redshift\")\n",
+      "pylab.ylabel(\"Mass in halos / Total mass\")\n",
+      "pylab.xlim(max(zs), min(zs))\n",
+      "pylab.ylim(-0.01, .18)"
      ],
      "language": "python",
      "metadata": {},
@@ -155,7 +167,9 @@
       "\n",
       "yt provides the ability to examine rays, or lines, through the domain.  Note that these are not periodic, unlike most other data objects.  We create a ray object and can then examine quantities of it.  Rays have the special fields `t` and `dts`, which correspond to the time the ray enters a given cell and the distance it travels through that cell.\n",
       "\n",
-      "To create a ray, we specify the start and end points."
+      "To create a ray, we specify the start and end points.\n",
+      "\n",
+      "Note that we need to convert these arrays to numpy arrays due to a bug in matplotlib 1.3.1."
      ]
     },
     {
@@ -163,7 +177,7 @@
      "collapsed": false,
      "input": [
       "ray = ds.ray([0.1, 0.2, 0.3], [0.9, 0.8, 0.7])\n",
-      "pylab.semilogy(ray[\"t\"], ray[\"density\"])"
+      "pylab.semilogy(np.array(ray[\"t\"]), np.array(ray[\"density\"]))"
      ],
      "language": "python",
      "metadata": {},
@@ -212,10 +226,12 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
+      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
       "v, c = ds.find_max(\"density\")\n",
       "sl = ds.slice(0, c[0])\n",
-      "print sl[\"index\", \"x\"], sl[\"index\", \"z\"], sl[\"pdx\"]\n",
+      "print sl[\"index\", \"x\"]\n",
+      "print sl[\"index\", \"z\"]\n",
+      "print sl[\"pdx\"]\n",
       "print sl[\"gas\", \"density\"].shape"
      ],
      "language": "python",
@@ -251,8 +267,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "write_image(np.log10(frb[\"gas\", \"density\"]), \"temp.png\")\n",
-      "from IPython.core.display import Image\n",
+      "yt.write_image(np.log10(frb[\"gas\", \"density\"]), \"temp.png\")\n",
+      "from IPython.display import Image\n",
       "Image(filename = \"temp.png\")"
      ],
      "language": "python",
@@ -275,7 +291,7 @@
      "collapsed": false,
      "input": [
       "cp = ds.cutting([0.2, 0.3, 0.5], \"max\")\n",
-      "pw = cp.to_pw(fields = [\"density\"])"
+      "pw = cp.to_pw(fields = [(\"gas\", \"density\")])"
      ],
      "language": "python",
      "metadata": {},
@@ -310,7 +326,8 @@
      "collapsed": false,
      "input": [
       "pws = sl.to_pw(fields=[\"density\"])\n",
-      "pws.show()"
+      "#pws.show()\n",
+      "print pws.plots.keys()"
      ],
      "language": "python",
      "metadata": {},
@@ -362,4 +379,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
--- a/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
+++ b/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:a19d451f3b4dcfeed448caa22c2cac35c46958e0646c19c226b1e467b76d0718"
+  "signature": "sha256:eca573e749829cacda0a8c07c6d5d11d07a5de657563a44b8c4ffff8f735caed"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -22,7 +22,9 @@
      "collapsed": false,
      "input": [
       "%matplotlib inline\n",
-      "from yt.mods import *\n",
+      "import yt\n",
+      "import numpy as np\n",
+      "from yt import derived_field\n",
       "from matplotlib import pylab"
      ],
      "language": "python",
@@ -61,7 +63,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
+      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
       "dd = ds.all_data()\n",
       "print dd.quantities.keys()"
      ],
@@ -120,7 +122,9 @@
       "bv = sp.quantities.bulk_velocity()\n",
       "L = sp.quantities.angular_momentum_vector()\n",
       "rho_min, rho_max = sp.quantities.extrema(\"density\")\n",
-      "print bv, L, rho_min, rho_max"
+      "print bv\n",
+      "print L\n",
+      "print rho_min, rho_max"
      ],
      "language": "python",
      "metadata": {},
@@ -143,9 +147,11 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prof = Profile1D(sp, \"density\", 32, rho_min, rho_max, True, weight_field=\"cell_mass\")\n",
+      "prof = yt.Profile1D(sp, \"density\", 32, rho_min, rho_max, True, weight_field=\"cell_mass\")\n",
       "prof.add_fields([\"temperature\",\"dinosaurs\"])\n",
-      "pylab.loglog(np.array(prof.x), np.array(prof[\"temperature\"]), \"-x\")"
+      "pylab.loglog(np.array(prof.x), np.array(prof[\"temperature\"]), \"-x\")\n",
+      "pylab.xlabel('Density $(g/cm^3)$')\n",
+      "pylab.ylabel('Temperature $(K)$')"
      ],
      "language": "python",
      "metadata": {},
@@ -162,7 +168,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.loglog(np.array(prof.x), np.array(prof[\"dinosaurs\"]), '-x')"
+      "pylab.loglog(np.array(prof.x), np.array(prof[\"dinosaurs\"]), '-x')\n",
+      "pylab.xlabel('Density $(g/cm^3)$')\n",
+      "pylab.ylabel('Dinosaurs $(K cm / s)$')"
      ],
      "language": "python",
      "metadata": {},
@@ -179,9 +187,30 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prof = Profile1D(sp, \"density\", 32, rho_min, rho_max, True, weight_field=None)\n",
+      "prof = yt.Profile1D(sp, \"density\", 32, rho_min, rho_max, True, weight_field=None)\n",
       "prof.add_fields([\"cell_mass\"])\n",
-      "pylab.loglog(np.array(prof.x), np.array(prof[\"cell_mass\"].in_units(\"Msun\")), '-x')"
+      "pylab.loglog(np.array(prof.x), np.array(prof[\"cell_mass\"].in_units(\"Msun\")), '-x')\n",
+      "pylab.xlabel('Density $(g/cm^3)$')\n",
+      "pylab.ylabel('Cell mass $(M_\\odot)$')"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "In addition to the low-level `ProfileND` interface, it's also quite straightforward to quickly create plots of profiles using the `ProfilePlot` class.  Let's redo the last plot using `ProfilePlot`"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prof = yt.ProfilePlot(sp, 'density', 'cell_mass', weight_field=None)\n",
+      "prof.set_unit('cell_mass', 'Msun')\n",
+      "prof.show()"
      ],
      "language": "python",
      "metadata": {},

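To complement the new notebook cell above, a script-style sketch of the same ``ProfilePlot``
shortcut; the IsolatedGalaxy dataset and the 10 kpc sphere are illustrative, and ``save()``
writes a PNG instead of displaying inline:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("max", (10.0, "kpc"))

   # ProfilePlot wraps the Profile1D construction and plotting in one call.
   prof = yt.ProfilePlot(sp, "density", "cell_mass", weight_field=None)
   prof.set_unit("cell_mass", "Msun")
   prof.save()
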
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/bootcamp/6)_Volume_Rendering.ipynb
--- a/doc/source/bootcamp/6)_Volume_Rendering.ipynb
+++ b/doc/source/bootcamp/6)_Volume_Rendering.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:2929940fc3977b495aa124dee851f7602d61e073ed65407dd95e7cf597684b35"
+  "signature": "sha256:2a24bbe82955f9d948b39cbd1b1302968ff57f62f73afb2c7a5c4953393d00ae"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -21,8 +21,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "ds = load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
+      "import yt\n",
+      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
      ],
      "language": "python",
      "metadata": {},
@@ -43,7 +43,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "tf = ColorTransferFunction((-28, -24))\n",
+      "tf = yt.ColorTransferFunction((-28, -24))\n",
       "tf.add_layers(4, w=0.01)\n",
       "cam = ds.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], (20, 'kpc'), 512, tf, fields=[\"density\"])\n",
       "cam.show()"
@@ -80,7 +80,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "tf = ColorTransferFunction((-28, -25))\n",
+      "tf = yt.ColorTransferFunction((-28, -25))\n",
       "tf.add_layers(4, w=0.03)\n",
       "cam = ds.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], (20.0, 'kpc'), 512, tf, no_ghost=False)\n",
       "cam.show(clip_ratio=4.0)"

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/aligned_cutting_plane.py
--- a/doc/source/cookbook/aligned_cutting_plane.py
+++ b/doc/source/cookbook/aligned_cutting_plane.py
@@ -1,18 +1,20 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import yt
 
 # Load the dataset.
 ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-# Create a 1 kpc radius sphere, centered on the maximum gas density.  Note
-# that this sphere is very small compared to the size of our final plot,
-# and it has a non-axially aligned L vector.
-sp = ds.sphere("m", (1.0, "kpc"))
+# Create a 15 kpc radius sphere, centered on the center of the sim volume
+sp = ds.sphere("center", (15.0, "kpc"))
 
 # Get the angular momentum vector for the sphere.
 L = sp.quantities.angular_momentum_vector()
 
 print "Angular momentum vector: {0}".format(L)
 
-# Create an OffAxisSlicePlot on the object with the L vector as its normal
-p = yt.OffAxisSlicePlot(ds, L, "density", sp.center, (15, "kpc"))
+# Create an OffAxisSlicePlot of density centered on the object with the L 
+# vector as its normal and a width of 25 kpc on a side
+p = yt.OffAxisSlicePlot(ds, L, "density", sp.center, (25, "kpc"))
 p.save()

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/amrkdtree_downsampling.py
--- a/doc/source/cookbook/amrkdtree_downsampling.py
+++ b/doc/source/cookbook/amrkdtree_downsampling.py
@@ -1,3 +1,6 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED 
+
 # Using AMRKDTree Homogenized Volumes to examine large datasets
 # at lower resolution.
 
@@ -10,14 +13,14 @@
 import yt
 from yt.utilities.amr_kdtree.api import AMRKDTree
 
-# Load up a data and print out the maximum refinement level
+# Load up a dataset
 ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
 
 kd = AMRKDTree(ds)
-# Print out the total volume of all the bricks
-print kd.count_volume()
-# Print out the number of cells
-print kd.count_cells()
+
+# Print out specifics of KD Tree
+print "Total volume of all bricks = %i" % kd.count_volume()
+print "Total number of cells = %i" % kd.count_cells()
 
 tf = yt.ColorTransferFunction((-30, -22))
 cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/average_value.py
--- a/doc/source/cookbook/average_value.py
+++ b/doc/source/cookbook/average_value.py
@@ -5,9 +5,10 @@
 field = "temperature"  # The field to average
 weight = "cell_mass"  # The weight for the average
 
-dd = ds.all_data()  # This is a region describing the entire box,
-                      # but note it doesn't read anything in yet!
+ad = ds.all_data()  # This is a region describing the entire box,
+                    # but note it doesn't read anything in yet!
+
 # We now use our 'quantities' call to get the average quantity
-average_value = dd.quantities["WeightedAverageQuantity"](field, weight)
+average_value = ad.quantities.weighted_average_quantity(field, weight)
 
-print "Average %s (weighted by %s) is %0.5e" % (field, weight, average_value)
+print "Average %s (weighted by %s) is %0.3e %s" % (field, weight, average_value, average_value.units)

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/boolean_data_objects.py
--- a/doc/source/cookbook/boolean_data_objects.py
+++ b/doc/source/cookbook/boolean_data_objects.py
@@ -1,23 +1,32 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import yt
 
 ds = yt.load("Enzo_64/DD0043/data0043")  # load data
-# Make a few data ojbects to start.
+# Make a few data ojbects to start. Two boxes and two spheres.
 re1 = ds.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4], [0.6, 0.6, 0.6])
 re2 = ds.region([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [0.6, 0.6, 0.6])
 sp1 = ds.sphere([0.5, 0.5, 0.5], 0.05)
 sp2 = ds.sphere([0.1, 0.2, 0.3], 0.1)
+
 # The "AND" operator. This will make a region identical to re2.
 bool1 = ds.boolean([re1, "AND", re2])
 xp = bool1["particle_position_x"]
+
 # The "OR" operator. This will make a region identical to re1.
 bool2 = ds.boolean([re1, "OR", re2])
+
 # The "NOT" operator. This will make a region like re1, but with the corner
 # that re2 covers cut out.
 bool3 = ds.boolean([re1, "NOT", re2])
+
 # Disjoint regions can be combined with the "OR" operator.
 bool4 = ds.boolean([sp1, "OR", sp2])
+
 # Find oddly-shaped overlapping regions.
 bool5 = ds.boolean([re2, "AND", sp1])
+
 # Nested logic with parentheses.
 # This is re1 with the oddly-shaped region cut out.
 bool6 = ds.boolean([re1, "NOT", "(", re1, "AND", sp1, ")"])

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/camera_movement.py
--- a/doc/source/cookbook/camera_movement.py
+++ b/doc/source/cookbook/camera_movement.py
@@ -1,11 +1,13 @@
-import numpy as np
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
 
 import yt
+import numpy as np
 
 # Follow the simple_volume_rendering cookbook for the first part of this.
 ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # load data
-dd = ds.all_data()
-mi, ma = dd.quantities["Extrema"]("density")
+ad = ds.all_data()
+mi, ma = ad.quantities.extrema("density")
 
 # Set up transfer function
 tf = yt.ColorTransferFunction((np.log10(mi), np.log10(ma)))
@@ -40,4 +42,4 @@
 # Zoom in by a factor of 10 over 5 frames
 for i, snapshot in enumerate(cam.zoomin(10.0, 5, clip_ratio=8.0)):
     snapshot.write_png('camera_movement_%04i.png' % frame)
-    frame += 1
\ No newline at end of file
+    frame += 1

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/contours_on_slice.py
--- a/doc/source/cookbook/contours_on_slice.py
+++ b/doc/source/cookbook/contours_on_slice.py
@@ -1,13 +1,14 @@
 import yt
 
 # first add density contours on a density slice
-ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+
+# add density contours on the density slice.
 p = yt.SlicePlot(ds, "x", "density")
 p.annotate_contour("density")
 p.save()
 
-# then add temperature contours on the same densty slice
-ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  # load data
+# then add temperature contours on the same density slice
 p = yt.SlicePlot(ds, "x", "density")
 p.annotate_contour("temperature")
 p.save(str(ds)+'_T_contour')

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/extract_fixed_resolution_data.py
--- a/doc/source/cookbook/extract_fixed_resolution_data.py
+++ b/doc/source/cookbook/extract_fixed_resolution_data.py
@@ -8,21 +8,26 @@
 level = 2
 dims = ds.domain_dimensions * ds.refine_by**level
 
-# Now, we construct an object that describes the data region and structure we
-# want
-cube = ds.covering_grid(2,  # The level we are willing to extract to; higher
-                            # levels than this will not contribute to the data!
+# We construct an object that describes the data region and structure we want
+# In this case, we want all data up to the maximum "level" of refinement 
+# across the entire simulation volume.  Higher levels than this will not 
+# contribute to our covering grid.
+cube = ds.covering_grid(level,  
                         left_edge=[0.0, 0.0, 0.0],
+                        dims=dims,
                         # And any fields to preload (this is optional!)
-                        dims=dims,
                         fields=["density"])
 
 # Now we open our output file using h5py
-# Note that we open with 'w' which will overwrite existing files!
+# Note that we open with 'w' (write), which will overwrite existing files!
 f = h5py.File("my_data.h5", "w")
 
-# We create a dataset at the root note, calling it density...
+# We create a dataset at the root, calling it "density"
 f.create_dataset("/density", data=cube["density"])
 
 # We close our file
 f.close()
+
+# If we want to then access this datacube in the h5 file, we can now...
+f = h5py.File("my_data.h5", "r")
+print f["density"].value

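As a quick check on the covering_grid pattern above, the extracted cube should have exactly the requested dimensions. A minimal sketch, assuming a placeholder dataset:

import yt

ds = yt.load("Enzo_64/DD0043/data0043")  # placeholder dataset
level = 2
dims = ds.domain_dimensions * ds.refine_by**level

# All data at or below `level` is resampled onto one uniform grid
cube = ds.covering_grid(level, left_edge=[0.0, 0.0, 0.0], dims=dims)
print cube["density"].shape  # matches tuple(dims)
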
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py
+++ b/doc/source/cookbook/find_clumps.py
@@ -1,3 +1,6 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import numpy as np
 
 import yt

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/fit_spectrum.py
--- a/doc/source/cookbook/fit_spectrum.py
+++ b/doc/source/cookbook/fit_spectrum.py
@@ -1,22 +1,21 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import yt
 from yt.analysis_modules.cosmological_observation.light_ray.api import LightRay
-from yt.analysis_modules.api import AbsorptionSpectrum
+from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum
 from yt.analysis_modules.absorption_spectrum.api import generate_total_fit
 
 # Define and add a field to simulate OVI based on a constant relationship to HI
-def _OVI_NumberDensity(field, data):
-    return data['HI_NumberDensity']
+# Do *NOT* use this for science, because this is not how OVI actually behaves;
+# it is just an example.
 
+@yt.derived_field(name='OVI_number_density', units='cm**-3')
+def _OVI_number_density(field, data):
+    return data['HI_NumberDensity']*2.0
 
-def _convertOVI(data):
-    return 4.9E-4*.2
 
-yt.add_field('my_OVI_NumberDensity',
-             function=_OVI_NumberDensity,
-             convert_function=_convertOVI)
-
-
-# Define species andi associated parameters to add to continuum
+# Define species and associated parameters to add to continuum
 # Parameters used for both adding the transition to the spectrum
 # and for fitting
 # Note that for single species that produce multiple lines
@@ -37,7 +36,7 @@
                  'init_N': 1E14}
 
 OVI_parameters = {'name': 'OVI',
-                  'field': 'my_OVI_NumberDensity',
+                  'field': 'OVI_number_density',
                   'f': [.1325, .06580],
                   'Gamma': [4.148E8, 4.076E8],
                   'wavelength': [1031.9261, 1037.6167],

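The @yt.derived_field decorator used in this recipe is the general yt-3.0 pattern for registering fields before loading data. A short, purely illustrative sketch (the field name and formula are invented for demonstration):

import yt

@yt.derived_field(name="density_squared", units="g**2/cm**6")
def _density_squared(field, data):
    # Illustrative only: the square of the gas density
    return data["density"]**2

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset
ad = ds.all_data()
print ad["density_squared"].max()
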
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ b/doc/source/cookbook/free_free_field.py
@@ -1,3 +1,6 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import numpy as np
 import yt
 # Need to grab the proton mass from the constants database

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/global_phase_plots.py
--- a/doc/source/cookbook/global_phase_plots.py
+++ b/doc/source/cookbook/global_phase_plots.py
@@ -6,8 +6,8 @@
 # This is an object that describes the entire box
 ad = ds.all_data()
 
-# We plot the average VelocityMagnitude (mass-weighted) in our object
-# as a function of Density and temperature
+# We plot the average velocity magnitude (mass-weighted) in our object
+# as a function of density and temperature
 plot = yt.PhasePlot(ad, "density", "temperature", "velocity_magnitude")
 
 # save the plot

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/halo_merger_tree.py
--- a/doc/source/cookbook/halo_merger_tree.py
+++ b/doc/source/cookbook/halo_merger_tree.py
@@ -1,3 +1,6 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 # This script demonstrates some of the halo merger tracking infrastructure,
 # for tracking halos across multiple datadumps in a time series.
 # Ultimately, it outputs an HDF5 file with the important quantities for the

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/halo_plotting.py
--- a/doc/source/cookbook/halo_plotting.py
+++ b/doc/source/cookbook/halo_plotting.py
@@ -1,16 +1,20 @@
-"""
-This is a mechanism for plotting circles representing identified particle halos
-on an image.  For more information, see :ref:`halo_finding`.
-"""
-from yt.mods import * # set up our namespace
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
 
-data_ds = load("Enzo_64/RD0006/RedshiftOutput0006")
+import yt
+from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog
 
-halo_ds = load('rockstar_halos/halos_0.0.bin')
+# Load the dataset
+ds = yt.load("Enzo_64/RD0006/RedshiftOutput0006")
 
-hc - HaloCatalog(halos_ds = halo_ds)
+# Load the halo list from a rockstar output for this dataset
+halos = yt.load('rockstar_halos/halos_0.0.bin')
+
+# Create the halo catalog from this halo list
+hc = HaloCatalog(halos_pf = halos)
 hc.load()
 
-p = ProjectionPlot(ds, "x", "density")
+# Create a projection with the halos overplot on top
+p = yt.ProjectionPlot(ds, "x", "density")
 p.annotate_halos(hc)
 p.save()

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/halo_profiler.py
--- a/doc/source/cookbook/halo_profiler.py
+++ b/doc/source/cookbook/halo_profiler.py
@@ -1,3 +1,6 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 from yt.mods import *
 
 from yt.analysis_modules.halo_profiler.api import *

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/hse_field.py
--- a/doc/source/cookbook/hse_field.py
+++ b/doc/source/cookbook/hse_field.py
@@ -1,11 +1,14 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import numpy as np
 import yt
 
 # Define the components of the gravitational acceleration vector field by
 # taking the gradient of the gravitational potential
 
-
-def _Grav_Accel_x(field, data):
+@yt.derived_field(name='grav_accel_x', units='cm/s**2', take_log=False)
+def grav_accel_x(field, data):
 
     # We need to set up stencils
 
@@ -19,13 +22,14 @@
     gx -= data["gravitational_potential"][sl_left, 1:-1, 1:-1]/dx
 
     new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')
+                         dtype='float64')*gx.unit_array
     new_field[1:-1, 1:-1, 1:-1] = -gx
 
     return new_field
 
 
-def _Grav_Accel_y(field, data):
+@yt.derived_field(name='grav_accel_y', units='cm/s**2', take_log=False)
+def grav_accel_y(field, data):
 
     # We need to set up stencils
 
@@ -39,13 +43,14 @@
     gy -= data["gravitational_potential"][1:-1, sl_left, 1:-1]/dy
 
     new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')
+                         dtype='float64')*gy.unit_array
     new_field[1:-1, 1:-1, 1:-1] = -gy
 
     return new_field
 
 
-def _Grav_Accel_z(field, data):
+@yt.derived_field(name='grav_accel_z', units='cm/s**2', take_log=False)
+def grav_accel_z(field, data):
 
     # We need to set up stencils
 
@@ -59,7 +64,7 @@
     gz -= data["gravitational_potential"][1:-1, 1:-1, sl_left]/dz
 
     new_field = np.zeros(data["gravitational_potential"].shape,
-                         dtype='float64')
+                         dtype='float64')*gz.unit_array
     new_field[1:-1, 1:-1, 1:-1] = -gz
 
     return new_field
@@ -68,7 +73,8 @@
 # Define the components of the pressure gradient field
 
 
-def _Grad_Pressure_x(field, data):
+@yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False)
+def grad_pressure_x(field, data):
 
     # We need to set up stencils
 
@@ -81,13 +87,14 @@
     px = data["pressure"][sl_right, 1:-1, 1:-1]/dx
     px -= data["pressure"][sl_left, 1:-1, 1:-1]/dx
 
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
     new_field[1:-1, 1:-1, 1:-1] = px
 
     return new_field
 
 
-def _Grad_Pressure_y(field, data):
+@yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False)
+def grad_pressure_y(field, data):
 
     # We need to set up stencils
 
@@ -100,13 +107,14 @@
     py = data["pressure"][1:-1, sl_right, 1:-1]/dy
     py -= data["pressure"][1:-1, sl_left, 1:-1]/dy
 
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
     new_field[1:-1, 1:-1, 1:-1] = py
 
     return new_field
 
 
-def _Grad_Pressure_z(field, data):
+@yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False)
+def grad_pressure_z(field, data):
 
     # We need to set up stencils
 
@@ -119,7 +127,7 @@
     pz = data["pressure"][1:-1, 1:-1, sl_right]/dz
     pz -= data["pressure"][1:-1, 1:-1, sl_left]/dz
 
-    new_field = np.zeros(data["pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
     new_field[1:-1, 1:-1, 1:-1] = pz
 
     return new_field
@@ -127,8 +135,8 @@
 
 # Define the "degree of hydrostatic equilibrium" field
 
-
-def _HSE(field, data):
+@yt.derived_field(name='HSE', units=None, take_log=False)
+def HSE(field, data):
 
     gx = data["density"]*data["Grav_Accel_x"]
     gy = data["density"]*data["Grav_Accel_y"]
@@ -138,31 +146,10 @@
     hy = data["Grad_Pressure_y"] - gy
     hz = data["Grad_Pressure_z"] - gz
 
-    h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
+    h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))*gx.unit_array
 
     return h
 
-# Now add the fields to the database
-
-yt.add_field("Grav_Accel_x", function=_Grav_Accel_x, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["gravitational_potential"])])
-
-yt.add_field("Grav_Accel_y", function=_Grav_Accel_y, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["gravitational_potential"])])
-
-yt.add_field("Grav_Accel_z", function=_Grav_Accel_z, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["gravitational_potential"])])
-
-yt.add_field("Grad_Pressure_x", function=_Grad_Pressure_x, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["pressure"])])
-
-yt.add_field("Grad_Pressure_y", function=_Grad_Pressure_y, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["pressure"])])
-
-yt.add_field("Grad_Pressure_z", function=_Grad_Pressure_z, take_log=False,
-             validators=[yt.ValidateSpatial(1, ["pressure"])])
-
-yt.add_field("HSE", function=_HSE, take_log=False)
 
 # Open two files, one at the beginning and the other at a later time when
 # there's a lot of sloshing going on.
@@ -173,8 +160,8 @@
 # Sphere objects centered at the cluster potential minimum with a radius
 # of 200 kpc
 
-sphere_i = dsi.h.sphere(dsi.domain_center, (200, "kpc"))
-sphere_f = dsf.h.sphere(dsf.domain_center, (200, "kpc"))
+sphere_i = dsi.sphere(dsi.domain_center, (200, "kpc"))
+sphere_f = dsf.sphere(dsf.domain_center, (200, "kpc"))
 
 # Average "degree of hydrostatic equilibrium" in these spheres
 
@@ -188,9 +175,9 @@
 # of the two files
 
 slc_i = yt.SlicePlot(dsi, 2, ["density", "HSE"], center=dsi.domain_center,
-                     width=(1.0, "mpc"))
+                     width=(1.0, "Mpc"))
 slc_f = yt.SlicePlot(dsf, 2, ["density", "HSE"], center=dsf.domain_center,
-                     width=(1.0, "mpc"))
+                     width=(1.0, "Mpc"))
 
 slc_i.save("initial")
 slc_f.save("final")

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/image_background_colors.py
--- a/doc/source/cookbook/image_background_colors.py
+++ b/doc/source/cookbook/image_background_colors.py
@@ -1,21 +1,24 @@
-from yt.mods import *
-
 # This shows how to save ImageArray objects, such as those returned from 
 # volume renderings, to pngs with varying backgrounds.
 
+import yt
+import numpy as np
+
 # Lets make a fake "rendering" that has 4 channels and looks like a linear
 # gradient from the bottom to top.
+
 im = np.zeros([64,128,4])
 for i in xrange(im.shape[0]):
     for k in xrange(im.shape[2]):
         im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
-im_arr = ImageArray(im)
+im_arr = yt.ImageArray(im)
 
 # in this case you would have gotten im_arr from something like:
 # im_arr = cam.snapshot() 
 
 # To save it with the default settings, we can just use write_png, where it 
 # rescales the image and uses a black background.
+
 im_arr.write_png('standard.png')
  
 # write_png accepts a background keyword argument that defaults to 'black'.
@@ -24,12 +27,8 @@
 # white (1.,1.,1.,1.)
 # None  (0.,0.,0.,0.) <-- Transparent!
 # any rgba list/array: [r,g,b,a], bounded by 0..1
+
 im_arr.write_png('black_bg.png', background='black')
 im_arr.write_png('white_bg.png', background='white')
 im_arr.write_png('green_bg.png', background=[0.,1.,0.,1.])
 im_arr.write_png('transparent_bg.png', background=None)
-
-
-
-
-

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/index.rst
--- a/doc/source/cookbook/index.rst
+++ b/doc/source/cookbook/index.rst
@@ -18,9 +18,6 @@
 `here <http://yt-project.org/data/>`_, where you will find links to download 
 individual datasets.
 
-If you want to take a look at more complex recipes, or submit your own,
-check out the `yt Hub <http://hub.yt-project.org>`_.
-
 .. note:: To contribute your own recipes, please follow the instructions 
     on how to contribute documentation code: :ref:`writing_documentation`.
 

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/light_cone_projection.py
--- a/doc/source/cookbook/light_cone_projection.py
+++ b/doc/source/cookbook/light_cone_projection.py
@@ -1,9 +1,13 @@
-from yt.mods import *
-from yt.analysis_modules.api import LightCone
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import yt
+from yt.analysis_modules.cosmological_observation.light_cone.light_cone import LightCone
 
 # Create a LightCone object extending from z = 0 to z = 0.1
 # with a 600 arcminute field of view and a resolution of
 # 60 arcseconds.
+
 # We have already set up the redshift dumps to be
 # used for this, so we will not use any of the time
 # data dumps.

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/light_cone_with_halo_mask.py
--- a/doc/source/cookbook/light_cone_with_halo_mask.py
+++ b/doc/source/cookbook/light_cone_with_halo_mask.py
@@ -1,7 +1,10 @@
-from yt.mods import *
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
 
-from yt.analysis_modules.api import LightCone
-from yt.analysis_modules.halo_profiler.api import *
+import yt
+
+from yt.analysis_modules.cosmological_observation.light_cone.light_cone import LightCone
+from yt.analysis_modules.halo_profiler.api import HaloProfiler
 
 # Instantiate a light cone object as usual.
 lc = LightCone('enzo_tiny_cosmology/32Mpc_32.enzo',

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/make_light_ray.py
--- a/doc/source/cookbook/make_light_ray.py
+++ b/doc/source/cookbook/make_light_ray.py
@@ -1,13 +1,16 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
 import os
 import sys
-
-from yt.mods import *
-
-from yt.analysis_modules.halo_profiler.api import *
-from yt.analysis_modules.cosmological_observation.light_ray.api import \
+import yt
+from yt.analysis_modules.halo_profiler.api import HaloProfiler
+from yt.analysis_modules.cosmological_observation.light_ray.light_ray import \
      LightRay
 
-if not os.path.isdir("LR"): os.mkdir('LR')
+# Create a directory for the light rays
+if not os.path.isdir("LR"): 
+    os.mkdir('LR')
      
 # Create a LightRay object extending from z = 0 to z = 0.1
 # and use only the redshift dumps.

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multi_plot_3x2_FRB.py
--- a/doc/source/cookbook/multi_plot_3x2_FRB.py
+++ b/doc/source/cookbook/multi_plot_3x2_FRB.py
@@ -1,11 +1,13 @@
-from yt.mods import * # set up our namespace
+import yt
+import numpy as np
+from yt.visualization.api import get_multi_plot
 import matplotlib.colorbar as cb
 from matplotlib.colors import LogNorm
 
 fn = "Enzo_64/RD0006/RedshiftOutput0006" # dataset to load
 
-
-ds = load(fn) # load data
+# load data and get center value and center location as maximum density location
+ds = yt.load(fn) 
 v, c = ds.find_max("density")
 
 # set up our Fixed Resolution Buffer parameters: a width, resolution, and center
@@ -39,11 +41,16 @@
         ax.xaxis.set_visible(False)
         ax.yaxis.set_visible(False)
 
-    plots.append(den_axis.imshow(frb['density'], norm=LogNorm()))
+    # converting our fixed resolution buffers to NDarray so matplotlib can
+    # render them
+    dens = np.array(frb['density'])
+    temp = np.array(frb['temperature'])
+
+    plots.append(den_axis.imshow(dens, norm=LogNorm()))
     plots[-1].set_clim((5e-32, 1e-29))
     plots[-1].set_cmap("bds_highcontrast")
 
-    plots.append(temp_axis.imshow(frb['temperature'], norm=LogNorm()))
+    plots.append(temp_axis.imshow(temp, norm=LogNorm()))
     plots[-1].set_clim((1e3, 1e8))
     plots[-1].set_cmap("hot")
     

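The np.array() casts added above are the key change in this recipe: fixed resolution buffers now return unit-aware arrays that older matplotlib calls may not accept directly. A minimal sketch of the pattern, assuming a placeholder dataset and a slice coordinate inside the domain:

import yt
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset
slc = ds.slice("z", 0.5)                # 0.5 assumed to lie inside the domain
frb = slc.to_frb((1.0, "Mpc"), 512)

# Cast the unit-aware buffer to a plain ndarray before handing it to imshow
dens = np.array(frb["density"])
plt.imshow(dens, origin="lower", norm=LogNorm())
plt.savefig("frb_density.png")
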
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multi_plot_slice_and_proj.py
--- a/doc/source/cookbook/multi_plot_slice_and_proj.py
+++ b/doc/source/cookbook/multi_plot_slice_and_proj.py
@@ -1,4 +1,5 @@
-from yt.mods import * # set up our namespace
+import yt
+import numpy as np
 from yt.visualization.base_plot_types import get_multi_plot
 import matplotlib.colorbar as cb
 from matplotlib.colors import LogNorm
@@ -6,7 +7,7 @@
 fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # dataset to load
 orient = 'horizontal'
 
-ds = load(fn) # load data
+ds = yt.load(fn) # load data
 
 # There's a lot in here:
 #   From this we get a containing figure, a list-of-lists of axes into which we
@@ -17,12 +18,11 @@
 #   bw is the base-width in inches, but 4 is about right for most cases.
 fig, axes, colorbars = get_multi_plot(3, 2, colorbar=orient, bw = 4)
 
-slc = ds.slice(2, 0.0, fields=["density","temperature","velocity_magnitude"], 
-                 center=ds.domain_center)
-proj = ds.proj("density", 2, weight_field="density", center=ds.domain_center)
+slc = yt.SlicePlot(ds, 'z', fields=["density","temperature","velocity_magnitude"])
+proj = yt.ProjectionPlot(ds, 'z', "density", weight_field="density")
 
-slc_frb = slc.to_frb((1.0, "mpc"), 512)
-proj_frb = proj.to_frb((1.0, "mpc"), 512)
+slc_frb = slc.data_source.to_frb((1.0, "Mpc"), 512)
+proj_frb = proj.data_source.to_frb((1.0, "Mpc"), 512)
 
 dens_axes = [axes[0][0], axes[1][0]]
 temp_axes = [axes[0][1], axes[1][1]]
@@ -37,12 +37,22 @@
     vax.xaxis.set_visible(False)
     vax.yaxis.set_visible(False)
 
-plots = [dens_axes[0].imshow(slc_frb["density"], origin='lower', norm=LogNorm()),
-         dens_axes[1].imshow(proj_frb["density"], origin='lower', norm=LogNorm()),
-         temp_axes[0].imshow(slc_frb["temperature"], origin='lower'),    
-         temp_axes[1].imshow(proj_frb["temperature"], origin='lower'),
-         vels_axes[0].imshow(slc_frb["velocity_magnitude"], origin='lower', norm=LogNorm()),
-         vels_axes[1].imshow(proj_frb["velocity_magnitude"], origin='lower', norm=LogNorm())]
+# Converting our Fixed Resolution Buffers to numpy arrays so that matplotlib
+# can render them
+
+slc_dens = np.array(slc_frb['density'])
+proj_dens = np.array(proj_frb['density'])
+slc_temp = np.array(slc_frb['temperature'])
+proj_temp = np.array(proj_frb['temperature'])
+slc_vel = np.array(slc_frb['velocity_magnitude'])
+proj_vel = np.array(proj_frb['velocity_magnitude'])
+
+plots = [dens_axes[0].imshow(slc_dens, origin='lower', norm=LogNorm()),
+         dens_axes[1].imshow(proj_dens, origin='lower', norm=LogNorm()),
+         temp_axes[0].imshow(slc_temp, origin='lower'),    
+         temp_axes[1].imshow(proj_temp, origin='lower'),
+         vels_axes[0].imshow(slc_vel, origin='lower', norm=LogNorm()),
+         vels_axes[1].imshow(proj_vel, origin='lower', norm=LogNorm())]
          
 plots[0].set_clim((1.0e-27,1.0e-25))
 plots[0].set_cmap("bds_highcontrast")
@@ -58,8 +68,8 @@
 plots[5].set_cmap("gist_rainbow")
 
 titles=[r'$\mathrm{Density}\ (\mathrm{g\ cm^{-3}})$', 
-        r'$\mathrm{temperature}\ (\mathrm{K})$',
-        r'$\mathrm{VelocityMagnitude}\ (\mathrm{cm\ s^{-1}})$']
+        r'$\mathrm{Temperature}\ (\mathrm{K})$',
+        r'$\mathrm{Velocity Magnitude}\ (\mathrm{cm\ s^{-1}})$']
 
 for p, cax, t in zip(plots[0:6:2], colorbars, titles):
     cbar = fig.colorbar(p, cax=cax, orientation=orient)

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multi_width_image.py
--- a/doc/source/cookbook/multi_width_image.py
+++ b/doc/source/cookbook/multi_width_image.py
@@ -1,15 +1,16 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
 # Create a slice plot for the dataset.  With no additional arguments,
 # the width will be the size of the domain and the center will be the
 # center of the simulation box
-slc = SlicePlot(ds,2,'density')
+slc = yt.SlicePlot(ds, 'z', 'density')
 
-# Create a list of a couple of widths and units.
-widths = [(1, 'mpc'),
+# Create a list of a couple of widths and units. 
+# (N.B. Mpc (megaparsec) != mpc (milliparsec))
+widths = [(1, 'Mpc'),
           (15, 'kpc')]
 
 # Loop through the list of widths and units.
@@ -24,7 +25,7 @@
 zoomFactors = [2,4,5]
 
 # recreate the original slice
-slc = SlicePlot(ds,2,'density')
+slc = yt.SlicePlot(ds, 'z', 'density')
 
 for zoomFactor in zoomFactors:
 

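The unit rename in this recipe is easy to miss: yt's unit strings are case sensitive, so 'Mpc' (megaparsec) and 'mpc' (milliparsec) differ by nine orders of magnitude. A small sketch of setting a width explicitly, with a placeholder dataset:

import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset
slc = yt.SlicePlot(ds, "z", "density")

# 'Mpc' is a megaparsec; 'mpc' would be read as a milliparsec
slc.set_width((1, "Mpc"))
slc.save("slice_1Mpc.png")
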
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multiplot_2x2.py
--- a/doc/source/cookbook/multiplot_2x2.py
+++ b/doc/source/cookbook/multiplot_2x2.py
@@ -1,9 +1,9 @@
-from yt.mods import *
+import yt
 import matplotlib.pyplot as plt
 from mpl_toolkits.axes_grid1 import AxesGrid
 
 fn = "IsolatedGalaxy/galaxy0030/galaxy0030"
-ds = load(fn) # load data
+ds = yt.load(fn) # load data
 
 fig = plt.figure()
 
@@ -22,11 +22,17 @@
                 cbar_size="3%",
                 cbar_pad="0%")
 
-fields = ['density', 'velocity_x', 'velocity_y', 'VelocityMagnitude']
+fields = ['density', 'velocity_x', 'velocity_y', 'velocity_magnitude']
 
 # Create the plot.  Since SlicePlot accepts a list of fields, we need only
 # do this once.
-p = SlicePlot(ds, 'z', fields)
+p = yt.SlicePlot(ds, 'z', fields)
+
+# Velocity is going to be both positive and negative, so let's make these
+# slices use a linear colorbar scale
+p.set_log('velocity_x', False)
+p.set_log('velocity_y', False)
+
 p.zoom(2)
 
 # For each plotted field, force the SlicePlot to redraw itself onto the AxesGrid

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multiplot_2x2_coordaxes_slice.py
--- a/doc/source/cookbook/multiplot_2x2_coordaxes_slice.py
+++ b/doc/source/cookbook/multiplot_2x2_coordaxes_slice.py
@@ -1,9 +1,9 @@
-from yt.mods import *
+import yt
 import matplotlib.pyplot as plt
 from mpl_toolkits.axes_grid1 import AxesGrid
 
 fn = "IsolatedGalaxy/galaxy0030/galaxy0030"
-ds = load(fn) # load data
+ds = yt.load(fn) # load data
 
 fig = plt.figure()
 
@@ -27,7 +27,7 @@
 
 for i, (direction, field) in enumerate(zip(cuts, fields)):
     # Load the data and create a single plot
-    p = SlicePlot(ds, direction, field)
+    p = yt.SlicePlot(ds, direction, field)
     p.zoom(40)
 
     # This forces the ProjectionPlot to redraw itself on the AxesGrid axes.

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/multiplot_2x2_time_series.py
--- a/doc/source/cookbook/multiplot_2x2_time_series.py
+++ b/doc/source/cookbook/multiplot_2x2_time_series.py
@@ -1,4 +1,4 @@
-from yt.mods import *
+import yt
 import matplotlib.pyplot as plt
 from mpl_toolkits.axes_grid1 import AxesGrid
 
@@ -23,8 +23,8 @@
 
 for i, fn in enumerate(fns):
     # Load the data and create a single plot
-    ds = load(fn) # load data
-    p = ProjectionPlot(ds, 'z', 'density', width=(55, 'Mpccm'))
+    ds = yt.load(fn) # load data
+    p = yt.ProjectionPlot(ds, 'z', 'density', width=(55, 'Mpccm'))
 
     # Ensure the colorbar limits match for all plots
     p.set_zlim('density', 1e-4, 1e-2)

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/offaxis_projection.py
--- a/doc/source/cookbook/offaxis_projection.py
+++ b/doc/source/cookbook/offaxis_projection.py
@@ -1,7 +1,8 @@
-from yt.mods import *
+import yt
+import numpy as np
 
 # Load the dataset.
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
 # Choose a center for the render.
 c = [0.5, 0.5, 0.5]
@@ -25,10 +26,10 @@
 # Create the off axis projection.
 # Setting no_ghost to False speeds up the process, but makes a
 # slighly lower quality image.
-image = off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
+image = yt.off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
 
 # Write out the final image and give it a name
 # relating to what our dataset is called.
 # We save the log of the values so that the colors do not span
 # many orders of magnitude.  Try it without and see what happens.
-write_image(np.log10(image), "%s_offaxis_projection.png" % ds)
+yt.write_image(np.log10(image), "%s_offaxis_projection.png" % ds)

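Both helpers in this recipe now live in the top-level yt namespace. A condensed sketch of the call sequence, with placeholder dataset and camera parameters:

import yt
import numpy as np

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset
c = [0.5, 0.5, 0.5]      # center of the projection (placeholder)
L = [1.0, 1.0, 0.0]      # normal vector for the off-axis view
W = 1.0                  # width in code units
Npixels = 512

image = yt.off_axis_projection(ds, c, L, W, Npixels, "density")
yt.write_image(np.log10(image), "%s_offaxis_density.png" % ds)
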
diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/offaxis_projection_colorbar.py
--- a/doc/source/cookbook/offaxis_projection_colorbar.py
+++ b/doc/source/cookbook/offaxis_projection_colorbar.py
@@ -1,8 +1,9 @@
-from yt.mods import * # set up our namespace
+import yt
+import numpy as np
 
 fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # dataset to load
 
-ds = load(fn) # load data
+ds = yt.load(fn) # load data
 
 # Now we need a center of our volume to render.  Here we'll just use
 # 0.5,0.5,0.5, because volume renderings are not periodic.
@@ -31,9 +32,9 @@
 # Also note that we set the field which we want to project as "density", but
 # really we could use any arbitrary field like "temperature", "metallicity"
 # or whatever.
-image = off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
+image = yt.off_axis_projection(ds, c, L, W, Npixels, "density", no_ghost=False)
 
 # Image is now an NxN array representing the intensities of the various pixels.
 # And now, we call our direct image saver.  We save the log of the result.
-write_projection(image, "offaxis_projection_colorbar.png", 
-                 colorbar_label="Column Density (cm$^{-2}$)")
+yt.write_projection(image, "offaxis_projection_colorbar.png", 
+                    colorbar_label="Column Density (cm$^{-2}$)")

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/opaque_rendering.py
--- a/doc/source/cookbook/opaque_rendering.py
+++ b/doc/source/cookbook/opaque_rendering.py
@@ -1,19 +1,14 @@
-## Opaque Volume Rendering
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
 
-# The new version of yt also features opaque rendering, using grey opacity.
-# For example, this makes blues opaque to red and green.  In this example we
-# will explore how the opacity model you choose changes the appearance of the
-# rendering.
+import yt
+import numpy as np
 
-# Here we start by loading up a dataset, in this case galaxy0030.
-
-from yt.mods import *
-
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
 # We start by building a transfer function, and initializing a camera.
 
-tf = ColorTransferFunction((-30, -22))
+tf = yt.ColorTransferFunction((-30, -22))
 cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256, tf)
 
 # Now let's add some isocontours, and take a snapshot.
@@ -66,5 +61,3 @@
 
 # That looks pretty different, but the main thing is that you can see that the
 # inner contours are somewhat visible again.  
-
-

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/overplot_grids.py
--- a/doc/source/cookbook/overplot_grids.py
+++ b/doc/source/cookbook/overplot_grids.py
@@ -1,10 +1,10 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("Enzo_64/DD0043/data0043")
+ds = yt.load("Enzo_64/DD0043/data0043")
 
 # Make a density projection.
-p = ProjectionPlot(ds, "y", "density")
+p = yt.ProjectionPlot(ds, "y", "density")
 
 # Modify the projection
 # The argument specifies the region along the line of sight

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/overplot_particles.py
--- a/doc/source/cookbook/overplot_particles.py
+++ b/doc/source/cookbook/overplot_particles.py
@@ -1,10 +1,10 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("Enzo_64/DD0043/data0043")
+ds = yt.load("Enzo_64/DD0043/data0043")
 
 # Make a density projection.
-p = ProjectionPlot(ds, "y", "density")
+p = yt.ProjectionPlot(ds, "y", "density")
 
 # Modify the projection
 # The argument specifies the region along the line of sight

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/profile_with_variance.py
--- a/doc/source/cookbook/profile_with_variance.py
+++ b/doc/source/cookbook/profile_with_variance.py
@@ -1,30 +1,34 @@
-from matplotlib import pyplot
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
 
-from yt.mods import *
+import matplotlib.pyplot as plt
+import yt
 
 # Load the dataset.
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-# Create a sphere of radius 1000 kpc centered on the max density.
-sphere = ds.sphere("max", (1000, "kpc"))
+# Create a sphere of radius 1 Mpc centered on the max density location.
+sp = ds.sphere("max", (1, "Mpc"))
 
 # Calculate and store the bulk velocity for the sphere.
-bulk_velocity = sphere.quantities['BulkVelocity']()
-sphere.set_field_parameter('bulk_velocity', bulk_velocity)
+bulk_velocity = sp.quantities['BulkVelocity']()
+sp.set_field_parameter('bulk_velocity', bulk_velocity)
 
 # Create a 1D profile object for profiles over radius
 # and add a velocity profile.
-profile = BinnedProfile1D(sphere, 100, "Radiuskpc", 0.1, 1000.)
-profile.add_fields('VelocityMagnitude')
+prof = yt.ProfilePlot(sp, 'radius', 'velocity_magnitude', 
+                      weight_field='cell_mass')
+prof.set_unit('radius', 'kpc')
+prof.set_xlim(0.1, 1000)
 
 # Plot the average velocity magnitude.
-pyplot.loglog(profile['Radiuskpc'], profile['VelocityMagnitude'],
-              label='mean')
+plt.loglog(prof['radius'], prof['velocity_magnitude'],
+              label='Mean')
 # Plot the variance of the velocity madnitude.
-pyplot.loglog(profile['Radiuskpc'], profile['VelocityMagnitude_std'],
-              label='std')
-pyplot.xlabel('r [kpc]')
-pyplot.ylabel('v [cm/s]')
-pyplot.legend()
+plt.loglog(prof['radius'], prof['velocity_magnitude_std'],
+              label='Standard Deviation')
+plt.xlabel('r [kpc]')
+plt.ylabel('v [cm/s]')
+plt.legend()
 
-pyplot.savefig('velocity_profiles.png')
+plt.savefig('velocity_profiles.png')

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/rad_velocity.py
--- a/doc/source/cookbook/rad_velocity.py
+++ b/doc/source/cookbook/rad_velocity.py
@@ -1,32 +1,38 @@
-from yt.mods import *
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import yt
 import matplotlib.pyplot as plt
 
-ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
 # Get the first sphere
-
-sphere0 = ds.sphere(ds.domain_center, (500., "kpc"))
+sp0 = ds.sphere(ds.domain_center, (500., "kpc"))
 
 # Compute the bulk velocity from the cells in this sphere
+bulk_vel = sp0.quantities["BulkVelocity"]()
 
-bulk_vel = sphere0.quantities["BulkVelocity"]()
 
 # Get the second sphere
-
-sphere1 = ds.sphere(ds.domain_center, (500., "kpc"))
+sp1 = ds.sphere(ds.domain_center, (500., "kpc"))
 
 # Set the bulk velocity field parameter 
-sphere1.set_field_parameter("bulk_velocity", bulk_vel)
+sp1.set_field_parameter("bulk_velocity", bulk_vel)
 
 # Radial profile without correction
 
-rad_profile0 = BinnedProfile1D(sphere0, 100, "Radiuskpc", 0.0, 500., log_space=False)
-rad_profile0.add_fields("RadialVelocity")
+rp0 = yt.ProfilePlot(sp0, 'radius', 'radial_velocity')
+rp0.set_unit('radius', 'kpc')
+rp0.set_log('radius', False)
 
 # Radial profile with correction for bulk velocity
 
-rad_profile1 = BinnedProfile1D(sphere1, 100, "Radiuskpc", 0.0, 500., log_space=False)
-rad_profile1.add_fields("RadialVelocity")
+rp1 = yt.ProfilePlot(sp1, 'radius', 'radial_velocity')
+rp1.set_unit('radius', 'kpc')
+rp1.set_log('radius', False)
+
+#rp0.save('radial_velocity_profile_uncorrected.png')
+#rp1.save('radial_velocity_profile_corrected.png')
 
 # Make a plot using matplotlib
 
@@ -34,8 +40,8 @@
 ax = fig.add_subplot(111)
 
 # Here we scale the velocities by 1.0e5 to get into km/s
-ax.plot(rad_profile0["Radiuskpc"], rad_profile0["RadialVelocity"]/1.0e5,
-		rad_profile1["Radiuskpc"], rad_profile1["RadialVelocity"]/1.0e5)
+ax.plot(rad_profile0["radius"], rad_profile0["radial_velocity"],
+		rad_profile1["radius"], rad_profile1["radial_velocity"])
 
 ax.set_xlabel(r"$\mathrm{r\ (kpc)}$")
 ax.set_ylabel(r"$\mathrm{v_r\ (km/s)}$")

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/radial_profile_styles.py
--- a/doc/source/cookbook/radial_profile_styles.py
+++ b/doc/source/cookbook/radial_profile_styles.py
@@ -1,16 +1,20 @@
-from yt.mods import *
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import yt
 import matplotlib.pyplot as plt
 
-ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
 # Get a sphere object
 
-sphere = ds.sphere(ds.domain_center, (500., "kpc"))
+sp = ds.sphere(ds.domain_center, (500., "kpc"))
 
 # Bin up the data from the sphere into a radial profile
 
-rad_profile = BinnedProfile1D(sphere, 100, "Radiuskpc", 0.0, 500., log_space=False)
-rad_profile.add_fields("density","temperature")
+rp = yt.ProfilePlot(sp, 'radius', ['density', 'temperature'])
+rp.set_unit('radius', 'kpc')
+rp.set_log('radius', False)
 
 # Make plots using matplotlib
 
@@ -18,7 +22,7 @@
 ax = fig.add_subplot(111)
 
 # Plot the density as a log-log plot using the default settings
-dens_plot = ax.loglog(rad_profile["Radiuskpc"], rad_profile["density"])
+dens_plot = ax.loglog(rp["Radiuskpc"], rp["density"])
 
 # Here we set the labels of the plot axes
 
@@ -51,10 +55,10 @@
 
 ax.lines = []
 
-# Since the rad_profile object also includes the standard deviation in each bin,
+# Since the radial profile object also includes the standard deviation in each bin,
 # we'll use these as errorbars. We have to make a new plot for this:
 
-dens_err_plot = ax.errorbar(rad_profile["Radiuskpc"], rad_profile["density"],
-                            yerr=rad_profile["Density_std"])
+dens_err_plot = ax.errorbar(pr["Radiuskpc"], rp["density"],
+                            yerr=rp["Density_std"])
                                                         
 fig.savefig("density_profile_with_errorbars.png")

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/rendering_with_box_and_grids.py
--- a/doc/source/cookbook/rendering_with_box_and_grids.py
+++ b/doc/source/cookbook/rendering_with_box_and_grids.py
@@ -1,18 +1,22 @@
-from yt.mods import *
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import yt
+import numpy as np
 
 # Load the dataset.
-ds = load("Enzo_64/DD0043/data0043")
+ds = yt.load("Enzo_64/DD0043/data0043")
 
 # Create a data container (like a sphere or region) that
 # represents the entire domain.
-dd = ds.all_data()
+ad = ds.all_data()
 
 # Get the minimum and maximum densities.
-mi, ma = dd.quantities["Extrema"]("density")[0]
+mi, ma = ad.quantities.extrema("density")
 
 # Create a transfer function to map field values to colors.
 # We bump up our minimum to cut out some of the background fluid
-tf = ColorTransferFunction((np.log10(mi)+2.0, np.log10(ma)))
+tf = yt.ColorTransferFunction((np.log10(mi)+2.0, np.log10(ma)))
 
 # Add three guassians, evenly spaced between the min and
 # max specified above with widths of 0.02 and using the
@@ -58,4 +62,3 @@
 # it through the camera. Then save it out.
 cam.draw_coordinate_vectors(nim)
 nim.write_png("%s_vr_vectors.png" % ds)
-

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/save_profiles.py
--- a/doc/source/cookbook/save_profiles.py
+++ b/doc/source/cookbook/save_profiles.py
@@ -1,8 +1,11 @@
-from yt.mods import *
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import yt
 import matplotlib.pyplot as plt
-import h5py
+import h5py as h5
 
-ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
 # Get a sphere
 
@@ -10,19 +13,19 @@
 
 # Radial profile from the sphere
 
-rad_profile = BinnedProfile1D(sp, 100, "Radiuskpc", 0.0, 500., log_space=False)
-
-# Adding density and temperature fields to the profile
-
-rad_profile.add_fields(["density","temperature"])
+prof = yt.BinnedProfile1D(sp, 100, "Radiuskpc", 0.0, 500., log_space=False)
+prof = yt.ProfilePlot(sp, 'radius', ['density', 'temperature'], weight_field="cell_mass")
+prof.set_unit('radius', 'kpc')
+prof.set_log('radius', False)
+prof.set_xlim(0, 500)
 
 # Write profiles to ASCII file
 
-rad_profile.write_out("%s_profile.dat" % ds, bin_style="center")
+prof.write_out("%s_profile.dat" % ds, bin_style="center")
 
 # Write profiles to HDF5 file
 
-rad_profile.write_out_h5("%s_profile.h5" % ds, bin_style="center")
+prof.write_out_h5("%s_profile.h5" % ds, bin_style="center")
 
 # Now we will show how using NumPy, h5py, and Matplotlib the data in these
 # files may be plotted.

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/show_hide_axes_colorbar.py
--- a/doc/source/cookbook/show_hide_axes_colorbar.py
+++ b/doc/source/cookbook/show_hide_axes_colorbar.py
@@ -1,8 +1,8 @@
-from yt.mods import *
+import yt
 
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-slc = SlicePlot(ds, "x", "density")
+slc = yt.SlicePlot(ds, "x", "density")
 
 slc.save("default_sliceplot.png")
 

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/simple_contour_in_slice.py
--- a/doc/source/cookbook/simple_contour_in_slice.py
+++ b/doc/source/cookbook/simple_contour_in_slice.py
@@ -1,10 +1,10 @@
-from yt.mods import *
+import yt
 
 # Load the data file.
-ds = load("Sedov_3d/sedov_hdf5_chk_0002")
+ds = yt.load("Sedov_3d/sedov_hdf5_chk_0002")
 
 # Make a traditional slice plot.
-sp = SlicePlot(ds,"x","density")
+sp = yt.SlicePlot(ds, "x", "density")
 
 # Overlay the slice plot with thick red contours of density.
 sp.annotate_contour("density", ncont=3, clim=(1e-2,1e-1), label=True,

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/simple_off_axis_projection.py
--- a/doc/source/cookbook/simple_off_axis_projection.py
+++ b/doc/source/cookbook/simple_off_axis_projection.py
@@ -1,7 +1,7 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
 # Create a 1 kpc radius sphere, centered on the max density.  Note that this
 # sphere is very small compared to the size of our final plot, and it has a
@@ -14,5 +14,5 @@
 print "Angular momentum vector: {0}".format(L)
 
 # Create an OffAxisSlicePlot on the object with the L vector as its normal
-p = OffAxisProjectionPlot(ds, L, "density", sp.center, (25, "kpc"))
+p = yt.OffAxisProjectionPlot(ds, L, "density", sp.center, (25, "kpc"))
 p.save()

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/simple_pdf.py
--- a/doc/source/cookbook/simple_pdf.py
+++ b/doc/source/cookbook/simple_pdf.py
@@ -1,15 +1,15 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
+ds = yt.load("GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175")
 
 # Create a data object that represents the whole box.
 ad = ds.all_data()
 
 # This is identical to the simple phase plot, except we supply 
 # the fractional=True keyword to divide the profile data by the sum. 
-plot = PhasePlot(ad, "density", "temperature", "cell_mass",
-                 weight_field=None, fractional=True)
+plot = yt.PhasePlot(ad, "density", "temperature", "cell_mass",
+                    weight_field=None, fractional=True)
 
 # Set a new title for the colorbar since it is now fractional.
 plot.z_title["cell_mass"] = r"$\mathrm{Mass}\/\mathrm{fraction}$"

diff -r e2ddb70b9e25b353ff6000e6a59de9515d22e4db -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 doc/source/cookbook/simple_phase.py
--- a/doc/source/cookbook/simple_phase.py
+++ b/doc/source/cookbook/simple_phase.py
@@ -1,7 +1,7 @@
-from yt.mods import *
+import yt
 
 # Load the dataset.
-ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
 # Create a sphere of radius 100 kpc in the center of the domain.
 my_sphere = ds.sphere("c", (100.0, "kpc"))
@@ -10,8 +10,11 @@
 # Setting weight to None will calculate a sum.
 # Setting weight to a field will calculate an average
 # weighted by that field.
-plot = PhasePlot(my_sphere, "density", "temperature", "cell_mass",
-                 weight_field=None)
+plot = yt.PhasePlot(my_sphere, "density", "temperature", "cell_mass",
+                    weight_field=None)
+
+# Set the units of mass to be in solar masses (not the default in cgs)
+plot.set_unit('cell_mass', 'Msun')
 
 # Save the image.
 # Optionally, give a string as an argument

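The set_unit call introduced in the hunk above changes the displayed units of any plotted field. A minimal sketch with a placeholder dataset:

import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset
sp = ds.sphere("c", (100.0, "kpc"))

plot = yt.PhasePlot(sp, "density", "temperature", "cell_mass", weight_field=None)

# Report the binned mass in solar masses instead of the default grams
plot.set_unit("cell_mass", "Msun")
plot.save()
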
This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/84838f71c81a/
Changeset:   84838f71c81a
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-06-19 05:46:44
Summary:     Fixing test failures.
Affected #:  16 files

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -320,7 +320,7 @@
             center = self.maximum_density_location()
         radius = self.maximum_radius()
         # A bit of a long-reach here...
-        sphere = self.data.pf.sphere(center, radius=radius)
+        sphere = self.data.ds.sphere(center, radius=radius)
         return sphere
 
     def get_size(self):
@@ -684,7 +684,7 @@
         >>> ell = halos[0].get_ellipsoid()
         """
         ep = self.get_ellipsoid_parameters()
-        ell = self.data.pf.ellipsoid(ep[0], ep[1], ep[2], ep[3],
+        ell = self.data.ds.ellipsoid(ep[0], ep[1], ep[2], ep[3],
             ep[4], ep[5])
         return ell
     
@@ -2126,11 +2126,11 @@
                 self.comm.mpi_bcast(self.bucket_bounds)
             my_bounds = self.bucket_bounds[self.comm.rank]
             LE, RE = my_bounds[0], my_bounds[1]
-            self._data_source = self.pf.region([0.] * 3, LE, RE)
+            self._data_source = self.ds.region([0.] * 3, LE, RE)
         # If this isn't parallel, define the region as an AMRRegionStrict so
         # particle IO works.
         if self.comm.size == 1:
-            self._data_source = self.pf.region([0.5] * 3,
+            self._data_source = self.ds.region([0.5] * 3,
                 LE, RE)
         # get the average spacing between particles for this region
         # The except is for the serial case where the full box is what we want.

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -759,7 +759,7 @@
         skip += list(set(frb._exclude_fields).difference(set(self._key_fields)))
         self.fields = ensure_list(fields) + \
             [k for k in self.field_data if k not in skip]
-        (bounds, center) = get_window_parameters(axis, center, width, self.pf)
+        (bounds, center) = get_window_parameters(axis, center, width, self.ds)
         pw = PWViewerMPL(self, bounds, fields=self.fields, origin=origin,
                          frb_generator=frb, plot_type=plot_type)
         pw._setup_plots()

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/art/tests/test_outputs.py
--- a/yt/frontends/art/tests/test_outputs.py
+++ b/yt/frontends/art/tests/test_outputs.py
@@ -36,8 +36,8 @@
     dso = [None, ("sphere", ("max", (0.1, 'unitary')))]
     for field in _fields:
         for axis in [0, 1, 2]:
-            for ds in dso:
+            for dobj_name in dso:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         d9p, axis, field, weight_field,
-                        ds)
+                        dobj_name)

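The dobj_name renames in these test modules all fix the same Python pitfall: the inner loop rebinding ds shadowed the dataset object that later lines still needed. A generic illustration (names are hypothetical, not yt API):

# Before: the loop variable clobbers the outer name
ds = "the_dataset"
for ds in [None, "sphere"]:
    pass
print ds  # prints "sphere" -- the dataset reference is gone

# After: a distinct loop variable keeps both names intact
ds = "the_dataset"
for dobj_name in [None, "sphere"]:
    pass
print ds  # still "the_dataset"
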
diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/artio/tests/test_outputs.py
--- a/yt/frontends/artio/tests/test_outputs.py
+++ b/yt/frontends/artio/tests/test_outputs.py
@@ -33,15 +33,15 @@
     ds.max_range = 1024*1024
     yield assert_equal, str(ds), "sizmbhloz-clref04SNth-rs9_a0.9011.art"
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
-    for ds in dso:
+    for dobj_name in dso:
         for field in _fields:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         sizmbhloz, axis, field, weight_field,
-                        ds)
-            yield FieldValuesTest(sizmbhloz, field, ds)
-        dobj = create_obj(ds, ds)
+                        dobj_name)
+            yield FieldValuesTest(sizmbhloz, field, dobj_name)
+        dobj = create_obj(ds, dobj_name)
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/enzo/answer_testing_support.py
--- a/yt/frontends/enzo/answer_testing_support.py
+++ b/yt/frontends/enzo/answer_testing_support.py
@@ -56,14 +56,14 @@
         if bitwise:
             yield GridValuesTest(ds_fn, field)
         if 'particle' in field: continue
-        for ds in dso:
+        for dobj_name in dso:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "Density"]:
                     yield ProjectionValuesTest(
                         ds_fn, axis, field, weight_field,
-                        ds, decimals=tolerance)
+                        dobj_name, decimals=tolerance)
             yield FieldValuesTest(
-                    ds_fn, field, ds, decimals=tolerance)
+                    ds_fn, field, dobj_name, decimals=tolerance)
                     
 class ShockTubeTest(object):
     def __init__(self, data_file, solution_file, fields, 

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/moab/tests/test_c5.py
--- a/yt/frontends/moab/tests/test_c5.py
+++ b/yt/frontends/moab/tests/test_c5.py
@@ -53,6 +53,6 @@
             ray = ds.ray(p1, p2)
             yield assert_almost_equal, ray["dts"].sum(dtype="float64"), 1.0, 8
     for field in _fields:
-        for ds in dso:
-            yield FieldValuesTest(c5, field, ds)
+        for dobj_name in dso:
+            yield FieldValuesTest(c5, field, dobj_name)
 

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -370,12 +370,12 @@
         
 
         # TODO: copy/pasted from DomainFile; needs refactoring!
-        num = os.path.basename(self._ds.datasetname).split("."
+        num = os.path.basename(self._ds.paramaeter_filename).split("."
                 )[0].split("_")[1]
         testdomain = 1 # Just pick the first domain file to read
         basename = "%s/%%s_%s.out%05i" % (
             os.path.abspath(
-              os.path.dirname(self._ds.datasetname)),
+              os.path.dirname(self._ds.parameter_filename)),
             num, testdomain)
         hydro_fn = basename % "hydro"
         # Do we have a hydro file?

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/ramses/tests/test_outputs.py
--- a/yt/frontends/ramses/tests/test_outputs.py
+++ b/yt/frontends/ramses/tests/test_outputs.py
@@ -32,15 +32,15 @@
     ds = data_dir_load(output_00080)
     yield assert_equal, str(ds), "info_00080"
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
-    for ds in dso:
+    for dobj_name in dso:
         for field in _fields:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         output_00080, axis, field, weight_field,
-                        ds)
-            yield FieldValuesTest(output_00080, field, ds)
-        dobj = create_obj(ds, ds)
+                        dobj_name)
+            yield FieldValuesTest(output_00080, field, dobj_name)
+        dobj = create_obj(ds, dobj_name)
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -610,10 +610,10 @@
                         eps = np.finfo(pp["Coordinates"][ax].dtype).eps
                         pos[:,i] = pp["Coordinates"][ax]
                     regions.add_data_file(pos, data_file.file_id,
-                                          data_file.pf.filter_bbox)
+                                          data_file.ds.filter_bbox)
                     morton[ind:ind+c] = compute_morton(
                         pos[:,0], pos[:,1], pos[:,2],
-                        DLE, DRE, data_file.pf.filter_bbox)
+                        DLE, DRE, data_file.ds.filter_bbox)
                     ind += c
         mylog.info("Adding %0.3e particles", morton.size)
         return morton
@@ -758,7 +758,7 @@
             c = np.frombuffer(s, dtype="float64")
             c.shape = (c.shape[0]/3.0, 3)
             regions.add_data_file(c, data_file.file_id,
-                                  data_file.pf.filter_bbox)
+                                  data_file.ds.filter_bbox)
             morton[ind:ind+c.shape[0]] = compute_morton(
                 c[:,0], c[:,1], c[:,2],
                 data_file.ds.domain_left_edge,

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/sph/tests/test_owls.py
--- a/yt/frontends/sph/tests/test_owls.py
+++ b/yt/frontends/sph/tests/test_owls.py
@@ -41,15 +41,15 @@
     tot = sum(dd[ptype,"particle_position"].shape[0]
               for ptype in ds.particle_types if ptype != "all")
     yield assert_equal, tot, (2*128*128*128)
-    for ds in dso:
+    for dobj_name in dso:
         for field in _fields:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         os33, axis, field, weight_field,
-                        ds)
-            yield FieldValuesTest(os33, field, ds)
-        dobj = create_obj(ds, ds)
+                        dobj_name)
+            yield FieldValuesTest(os33, field, dobj_name)
+        dobj = create_obj(ds, dobj_name)
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/frontends/sph/tests/test_tipsy.py
--- a/yt/frontends/sph/tests/test_tipsy.py
+++ b/yt/frontends/sph/tests/test_tipsy.py
@@ -49,15 +49,15 @@
     tot = sum(dd[ptype,"Coordinates"].shape[0]
               for ptype in ds.particle_types if ptype != "all")
     yield assert_equal, tot, 26847360
-    for ds in dso:
+    for dobj_name in dso:
         for field in _fields:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         ds, axis, field, weight_field,
-                        ds)
-            yield FieldValuesTest(ds, field, ds)
-        dobj = create_obj(ds, ds)
+                        dobj_name)
+            yield FieldValuesTest(ds, field, dobj_name)
+        dobj = create_obj(ds, dobj_name)
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2
@@ -80,15 +80,15 @@
     tot = sum(dd[ptype,"Coordinates"].shape[0]
               for ptype in ds.particle_types if ptype != "all")
     yield assert_equal, tot, 10550576
-    for ds in dso:
+    for dobj_name in dso:
         for field in _fields:
             for axis in [0, 1, 2]:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         ds, axis, field, weight_field,
-                        ds)
-            yield FieldValuesTest(ds, field, ds)
-        dobj = create_obj(ds, ds)
+                        dobj_name)
+            yield FieldValuesTest(ds, field, dobj_name)
+        dobj = create_obj(ds, dobj_name)
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1535,11 +1535,11 @@
         self.min_level = self.base_selector.min_level
         self.max_level = self.base_selector.max_level
         self.filter_bbox = 0
-        if getattr(dobj.pf, "filter_bbox", False):
+        if getattr(dobj.ds, "filter_bbox", False):
             self.filter_bbox = 1
         for i in range(3):
-            self.DLE[i] = dobj.pf.domain_left_edge[i]
-            self.DRE[i] = dobj.pf.domain_right_edge[i]
+            self.DLE[i] = dobj.ds.domain_left_edge[i]
+            self.DRE[i] = dobj.ds.domain_right_edge[i]
 
     @cython.boundscheck(False)
     @cython.wraparound(False)

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py
@@ -94,8 +94,8 @@
             dds = grid.dds
             gle = grid.LeftEdge
             gre = grid.RightEdge
-            nle = self.pf.arr(get_left_edge(node), input_units="code_length")
-            nre = self.pf.arr(get_right_edge(node), input_units="code_length")
+            nle = self.ds.arr(get_left_edge(node), input_units="code_length")
+            nre = self.ds.arr(get_right_edge(node), input_units="code_length")
             li = np.rint((nle-gle)/dds).astype('int32')
             ri = np.rint((nre-gle)/dds).astype('int32')
             dims = (ri - li).astype('int32')
@@ -119,8 +119,8 @@
             grid = self.ds.index.grids[node.grid - self._id_offset]
             dds = grid.dds
             gle = grid.LeftEdge
-            nle = self.pf.arr(get_left_edge(node), input_units="code_length")
-            nre = self.pf.arr(get_right_edge(node), input_units="code_length")
+            nle = self.ds.arr(get_left_edge(node), input_units="code_length")
+            nre = self.ds.arr(get_right_edge(node), input_units="code_length")
             li = np.rint((nle-gle)/dds).astype('int32')
             ri = np.rint((nre-gle)/dds).astype('int32')
             dims = (ri - li).astype('int32')
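
``ds.arr`` (formerly ``pf.arr``) wraps a plain NumPy array in a unit-aware YTArray; ``input_units="code_length"`` tags the node edges as being in the dataset's native length units so they can be compared directly with the grid edges. For instance, assuming ``ds`` is any loaded dataset:

    import numpy as np

    nle = ds.arr(np.array([0.25, 0.25, 0.25]), input_units="code_length")
    nre = ds.arr(np.array([0.75, 0.75, 0.75]), input_units="code_length")
    width = (nre - nle).in_units("kpc")   # unit conversion happens on the YTArray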

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -699,13 +699,13 @@
     for field in fields:
         yield GridValuesTest(ds_fn, field)
         for axis in [0, 1, 2]:
-            for ds in dso:
+            for dobj_name in dso:
                 for weight_field in [None, "density"]:
                     yield ProjectionValuesTest(
                         ds_fn, axis, field, weight_field,
-                        ds)
+                        dboj_name)
                 yield FieldValuesTest(
-                        ds_fn, field, ds)
+                        ds_fn, field, dobj_name)
 
 def big_patch_amr(ds_fn, fields):
     if not can_run_ds(ds_fn): return
@@ -715,11 +715,11 @@
     for field in fields:
         yield GridValuesTest(ds_fn, field)
         for axis in [0, 1, 2]:
-            for ds in dso:
+            for dobj_name in dso:
                 for weight_field in [None, "density"]:
                     yield PixelizedProjectionValuesTest(
                         ds_fn, axis, field, weight_field,
-                        ds)
+                        dobj_name)
 
 def create_obj(ds, obj_type):
     # obj_type should be tuple of

diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/utilities/parallel_tools/parallel_analysis_interface.py
--- a/yt/utilities/parallel_tools/parallel_analysis_interface.py
+++ b/yt/utilities/parallel_tools/parallel_analysis_interface.py
@@ -1081,7 +1081,7 @@
         RE[yax] = y[1] * (DRE[yax]-DLE[yax]) + DLE[yax]
         mylog.debug("Dimensions: %s %s", LE, RE)
 
-        reg = self.pf.region(self.center, LE, RE)
+        reg = self.ds.region(self.center, LE, RE)
         return True, reg
 
     def partition_index_3d(self, ds, padding=0.0, rank_ratio = 1):
@@ -1097,7 +1097,7 @@
             return False, LE, RE, ds
         if not self._distributed and subvol:
             return True, LE, RE, \
-            self.pf.region(self.center, LE-padding, RE+padding)
+            self.ds.region(self.center, LE-padding, RE+padding)
         elif ytcfg.getboolean("yt", "inline"):
             # At this point, we want to identify the root grid tile to which
             # this processor is assigned.
@@ -1110,7 +1110,7 @@
             #raise KeyError
             LE = root_grids[0].LeftEdge
             RE = root_grids[0].RightEdge
-            return True, LE, RE, self.pf.region(self.center, LE, RE)
+            return True, LE, RE, self.ds.region(self.center, LE, RE)
 
         cc = MPI.Compute_dims(self.comm.size / rank_ratio, 3)
         mi = self.comm.rank % (self.comm.size / rank_ratio)
@@ -1124,10 +1124,10 @@
 
         if padding > 0:
             return True, \
-                LE, RE, self.pf.region(self.center,
+                LE, RE, self.ds.region(self.center,
                 LE-padding, RE+padding)
 
-        return False, LE, RE, self.pf.region(self.center, LE, RE)
+        return False, LE, RE, self.ds.region(self.center, LE, RE)
 
     def partition_region_3d(self, left_edge, right_edge, padding=0.0,
             rank_ratio = 1):
@@ -1152,10 +1152,10 @@
 
         if padding > 0:
             return True, \
-                LE, RE, self.pf.region(self.center, LE-padding,
+                LE, RE, self.ds.region(self.center, LE-padding,
                     RE+padding)
 
-        return False, LE, RE, self.pf.region(self.center, LE, RE)
+        return False, LE, RE, self.ds.region(self.center, LE, RE)
 
     def partition_index_3d_bisection_list(self):
         """

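All of these partition helpers now build their sub-volumes with ``self.ds.region``; the center argument is required by the region constructor but not used for selection, and LE/RE are the per-task edges in code units. A toy version of the 3-D block decomposition, assuming ``cc`` holds the task counts along each axis (as returned by ``MPI.Compute_dims``) and ignoring padding:

    import numpy as np

    def rank_subvolume(ds, rank, cc):
        # Split the domain into a cc[0] x cc[1] x cc[2] grid of blocks and
        # return the region owned by this rank.
        DLE, DRE = ds.domain_left_edge, ds.domain_right_edge
        ijk = np.array(np.unravel_index(rank, cc))
        frac = (DRE - DLE) / np.array(cc)
        LE = DLE + ijk * frac
        RE = LE + frac
        return ds.region(ds.domain_center, LE, RE)
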
diff -r db3fb4bd57cece083486bfbdbc9d78bf4cc4bdf0 -r 84838f71c81a42690339f6175222506f4f99ec3a yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -301,7 +301,7 @@
                       if i != self.data_source.axis]
             self.set_center(center)
         for field in self.data_source._determine_fields(self.frb.data.keys()):
-            finfo = self.data_source.pf._get_field_info(*field)
+            finfo = self.data_source.ds._get_field_info(*field)
             if finfo.take_log:
                 self._field_transform[field] = log_transform
             else:


https://bitbucket.org/yt_analysis/yt/commits/f918abffce76/
Changeset:   f918abffce76
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-06-19 06:02:17
Summary:     Merging to fix a merge conflict.
Affected #:  1 file



https://bitbucket.org/yt_analysis/yt/commits/e960351cf75c/
Changeset:   e960351cf75c
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-06-19 07:03:37
Summary:     Fixing typos.
Affected #:  2 files

diff -r f918abffce76dba1ae6de98b11094f8600918fbe -r e960351cf75c0fd1b6c53d70da05cc6a6d9620c5 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -370,7 +370,7 @@
         
 
         # TODO: copy/pasted from DomainFile; needs refactoring!
-        num = os.path.basename(self._ds.paramaeter_filename).split("."
+        num = os.path.basename(self._ds.parameter_filename).split("."
                 )[0].split("_")[1]
         testdomain = 1 # Just pick the first domain file to read
         basename = "%s/%%s_%s.out%05i" % (

diff -r f918abffce76dba1ae6de98b11094f8600918fbe -r e960351cf75c0fd1b6c53d70da05cc6a6d9620c5 yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -703,7 +703,7 @@
                 for weight_field in [None, "density"]:
                     yield ProjectionValuesTest(
                         ds_fn, axis, field, weight_field,
-                        dboj_name)
+                        dobj_name)
                 yield FieldValuesTest(
                         ds_fn, field, dobj_name)
 


https://bitbucket.org/yt_analysis/yt/commits/e2cda56fbfb8/
Changeset:   e2cda56fbfb8
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 02:12:05
Summary:     Merging with mainline tip.
Affected #:  324 files

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/cheatsheet.tex
--- a/doc/cheatsheet.tex
+++ b/doc/cheatsheet.tex
@@ -208,38 +208,38 @@
 After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
 that define a region of simulation space from which data should be selected.
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{pf = load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
-\texttt{dd = pf.h.all\_data()} \textemdash\ Select the entire volume.\\
+\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
+\texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
 \texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
 numpy array \texttt{a}. Similarly for other data containers.\\
-\texttt{pf.h.field\_list} \textemdash\ A list of available fields in the snapshot. \\
-\texttt{pf.h.derived\_field\_list} \textemdash\ A list of available derived fields
+\texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
+\texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
 in the snapshot. \\
-\texttt{val, loc = pf.h.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
+\texttt{val, loc = ds.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
 the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\
-\texttt{sp = pf.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data 
+\texttt{sp = ds.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data 
 container. {\it cen} may be a coordinate, or ``max'' which 
 centers on the max density point. {\it radius} may be a float in 
 code units or a tuple of ({\it length, unit}).\\
 
-\texttt{re = pf.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
+\texttt{re = ds.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
 rectilinear data container. {\it cen} is required but not used.
 {\it left} and {\it right edge} are coordinate values that define the region.
 
-\texttt{di = pf.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ 
+\texttt{di = ds.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ 
 Create a cylindrical data container centered at {\it cen} along the 
 direction set by {\it normal},with total length
  2$\times${\it height} and with radius {\it radius}. \\
  
- \texttt{bl = pf.boolean({\it constructor})} \textemdash\ Create a boolean data
+ \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
  container. {\it constructor} is a list of pre-defined non-boolean 
  data containers with nested boolean logic using the
  ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
  {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
  by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
  
-\texttt{pf.h.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
-\texttt{sp = pf.h.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
+\texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
+\texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
 
 
 \subsection{Defining New Fields \& Quantities}
@@ -261,15 +261,15 @@
 
 \subsection{Slices and Projections}
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(pf, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
 perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with 
 {\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
 \texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
 \texttt{.save()} works similarly for the commands below.\\
 
-\texttt{prj = ProjectionPlot(pf, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(pf, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
+\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
 
 \subsection{Plot Annotations}
 \settowidth{\MyLen}{\texttt{multicol} }
@@ -365,8 +365,8 @@
 \subsection{FAQ}
 \settowidth{\MyLen}{\texttt{multicol}}
 
-\texttt{pf.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{pf.h} before this command. \\
+\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
+Must enter \texttt{ds.index} before this command. \\
 
 
 %\rule{0.3\linewidth}{0.25pt}
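
Pulling the renamed cheatsheet entries together, a small runnable sketch of the yt-3.0 spellings (the dataset path is the sample Enzo_64 output used elsewhere in these docs, and the field spellings follow the cheatsheet):

    from yt.mods import *

    ds = load("Enzo_64/DD0043/data0043")
    dd = ds.all_data()                       # the entire volume
    sp = ds.sphere("max", (0.1, "unitary"))  # sphere centered on the density peak
    val, loc = ds.find_max("Density")
    ds.index                                 # build the index first (see the FAQ note)
    ds.field_info["Density"].take_log = False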

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/coding_styleguide.txt
--- a/doc/coding_styleguide.txt
+++ b/doc/coding_styleguide.txt
@@ -49,7 +49,7 @@
  * Don't create a new class to replicate the functionality of an old class --
    replace the old class.  Too many options makes for a confusing user
    experience.
- * Parameter files are a last resort.
+ * Parameter files external to yt are a last resort.
  * The usage of the **kwargs construction should be avoided.  If they cannot
    be avoided, they must be explained, even if they are only to be passed on to
    a nested function.
@@ -61,7 +61,7 @@
    * Hard-coding parameter names that are the same as those in Enzo.  The
      following translation table should be of some help.  Note that the
      parameters are now properties on a Dataset subclass: you access them
-     like pf.refine_by .
+     like ds.refine_by .
      * RefineBy => refine_by
      * TopGridRank => dimensionality
      * TopGridDimensions => domain_dimensions

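The translation table above maps Enzo parameter names onto properties of the Dataset object, so the old parameter lookups become plain attribute access (the path is a placeholder):

    from yt.mods import *

    ds = load("Enzo_64/DD0043/data0043")
    refine_by = ds.refine_by        # was RefineBy
    rank = ds.dimensionality        # was TopGridRank
    dims = ds.domain_dimensions     # was TopGridDimensions
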
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/docstring_example.txt
--- a/doc/docstring_example.txt
+++ b/doc/docstring_example.txt
@@ -73,7 +73,7 @@
     Examples
     --------
     These are written in doctest format, and should illustrate how to
-    use the function.  Use the variables 'pf' for the parameter file, 'pc' for
+    use the function.  Use the variables 'ds' for the dataset, 'pc' for
     a plot collection, 'c' for a center, and 'L' for a vector. 
 
     >>> a=[1,2,3]

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/docstring_idioms.txt
--- a/doc/docstring_idioms.txt
+++ b/doc/docstring_idioms.txt
@@ -19,7 +19,7 @@
 useful variable names that correspond to specific instances that the user is
 presupposed to have created.
 
-   * `pf`: a parameter file, loaded successfully
+   * `ds`: a dataset, loaded successfully
    * `sp`: a sphere
    * `c`: a 3-component "center"
    * `L`: a 3-component vector that corresponds to either angular momentum or a

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/helper_scripts/parse_cb_list.py
--- a/doc/helper_scripts/parse_cb_list.py
+++ b/doc/helper_scripts/parse_cb_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/visualizing/_cb_docstrings.inc", "w")
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/helper_scripts/parse_dq_list.py
--- a/doc/helper_scripts/parse_dq_list.py
+++ b/doc/helper_scripts/parse_dq_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/analyzing/_dq_docstrings.inc", "w")
 
@@ -29,7 +29,7 @@
                             docstring = docstring))
                             #docstring = "\n".join(tw.wrap(docstring))))
 
-dd = pf.h.all_data()
+dd = ds.all_data()
 for n,func in sorted(dd.quantities.functions.items()):
     print n, func
     write_docstring(output, n, func[1])

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/helper_scripts/parse_object_list.py
--- a/doc/helper_scripts/parse_object_list.py
+++ b/doc/helper_scripts/parse_object_list.py
@@ -2,7 +2,7 @@
 import inspect
 from textwrap import TextWrapper
 
-pf = load("RD0005-mine/RedshiftOutput0005")
+ds = load("RD0005-mine/RedshiftOutput0005")
 
 output = open("source/analyzing/_obj_docstrings.inc", "w")
 
@@ -27,7 +27,7 @@
     f.write(template % dict(clsname = clsname, sig = sig, clsproxy=clsproxy,
                             docstring = 'physical-object-api'))
 
-for n,c in sorted(pf.h.__dict__.items()):
+for n,c in sorted(ds.__dict__.items()):
     if hasattr(c, '_con_args'):
         print n
         write_docstring(output, n, c)

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/helper_scripts/show_fields.py
--- a/doc/helper_scripts/show_fields.py
+++ b/doc/helper_scripts/show_fields.py
@@ -17,15 +17,15 @@
 everywhere, "Enzo" fields in Enzo datasets, "Orion" fields in Orion datasets,
 and so on.
 
-Try using the ``pf.field_list`` and ``pf.derived_field_list`` to view the
+Try using the ``ds.field_list`` and ``ds.derived_field_list`` to view the
 native and derived fields available for your dataset respectively. For example
 to display the native fields in alphabetical order:
 
 .. notebook-cell::
 
   from yt.mods import *
-  pf = load("Enzo_64/DD0043/data0043")
-  for i in sorted(pf.field_list):
+  ds = load("Enzo_64/DD0043/data0043")
+  for i in sorted(ds.field_list):
     print i
 
 .. note:: Universal fields will be overridden by a code-specific field.

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/_obj_docstrings.inc
--- a/doc/source/analyzing/_obj_docstrings.inc
+++ b/doc/source/analyzing/_obj_docstrings.inc
@@ -1,12 +1,12 @@
 
 
-.. class:: boolean(self, regions, fields=None, pf=None, **field_parameters):
+.. class:: boolean(self, regions, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRBooleanRegionBase`.)
 
 
-.. class:: covering_grid(self, level, left_edge, dims, fields=None, pf=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
+.. class:: covering_grid(self, level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCoveringGridBase`.)
@@ -24,13 +24,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase`.)
 
 
-.. class:: disk(self, center, normal, radius, height, fields=None, pf=None, **field_parameters):
+.. class:: disk(self, center, normal, radius, height, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCylinderBase`.)
 
 
-.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, pf=None, **field_parameters):
+.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMREllipsoidBase`.)
@@ -48,79 +48,79 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResCuttingPlaneBase`.)
 
 
-.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, pf=None, **field_parameters):
+.. class:: fixed_res_proj(self, axis, level, left_edge, dims, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRFixedResProjectionBase`.)
 
 
-.. class:: grid_collection(self, center, grid_list, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection(self, center, grid_list, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRGridCollectionBase`.)
 
 
-.. class:: grid_collection_max_level(self, center, max_level, fields=None, pf=None, **field_parameters):
+.. class:: grid_collection_max_level(self, center, max_level, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRMaxLevelCollectionBase`.)
 
 
-.. class:: inclined_box(self, origin, box_vectors, fields=None, pf=None, **field_parameters):
+.. class:: inclined_box(self, origin, box_vectors, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRInclinedBoxBase`.)
 
 
-.. class:: ortho_ray(self, axis, coords, fields=None, pf=None, **field_parameters):
+.. class:: ortho_ray(self, axis, coords, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMROrthoRayBase`.)
 
 
-.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
+.. class:: overlap_proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRProjBase`.)
 
 
-.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionBase`.)
 
 
-.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: periodic_region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionStrictBase`.)
 
 
-.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
+.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, ds=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRQuadTreeProjBase`.)
 
 
-.. class:: ray(self, start_point, end_point, fields=None, pf=None, **field_parameters):
+.. class:: ray(self, start_point, end_point, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRayBase`.)
 
 
-.. class:: region(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionBase`.)
 
 
-.. class:: region_strict(self, center, left_edge, right_edge, fields=None, pf=None, **field_parameters):
+.. class:: region_strict(self, center, left_edge, right_edge, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRRegionStrictBase`.)
 
 
-.. class:: slice(self, axis, coord, fields=None, center=None, pf=None, node_name=False, **field_parameters):
+.. class:: slice(self, axis, coord, fields=None, center=None, ds=None, node_name=False, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSliceBase`.)
@@ -132,13 +132,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSmoothedCoveringGridBase`.)
 
 
-.. class:: sphere(self, center, radius, fields=None, pf=None, **field_parameters):
+.. class:: sphere(self, center, radius, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSphereBase`.)
 
 
-.. class:: streamline(self, positions, length=1.0, fields=None, pf=None, **field_parameters):
+.. class:: streamline(self, positions, length=1.0, fields=None, ds=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRStreamlineBase`.)

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
@@ -44,7 +44,7 @@
       "tmpdir = tempfile.mkdtemp()\n",
       "\n",
       "# Load the data set with the full simulation information\n",
-      "data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')"
+      "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
      ],
      "language": "python",
      "metadata": {},
@@ -62,7 +62,7 @@
      "collapsed": false,
      "input": [
       "# Load the rockstar data files\n",
-      "halos_pf = load('rockstar_halos/halos_0.0.bin')"
+      "halos_ds = load('rockstar_halos/halos_0.0.bin')"
      ],
      "language": "python",
      "metadata": {},
@@ -80,7 +80,7 @@
      "collapsed": false,
      "input": [
       "# Instantiate a catalog using those two paramter files\n",
-      "hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf, \n",
+      "hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds, \n",
       "                 output_dir=os.path.join(tmpdir, 'halo_catalog'))"
      ],
      "language": "python",
@@ -295,9 +295,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "halos_pf =  load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
+      "halos_ds =  load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
       "\n",
-      "hc_reloaded = HaloCatalog(halos_pf=halos_pf,\n",
+      "hc_reloaded = HaloCatalog(halos_ds=halos_ds,\n",
       "                          output_dir=os.path.join(tmpdir, 'halo_catalog'))"
      ],
      "language": "python",
@@ -407,4 +407,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -222,7 +222,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load(\"cube.fits\")"
+      "ds = load(\"cube.fits\")"
      ],
      "language": "python",
      "metadata": {},
@@ -233,7 +233,7 @@
      "collapsed": false,
      "input": [
       "# Specifying no center gives us the center slice\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"])\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
       "slc.show()"
      ],
      "language": "python",
@@ -246,9 +246,9 @@
      "input": [
       "import yt.units as u\n",
       "# Picking different velocities for the slices\n",
-      "new_center = pf.domain_center\n",
-      "new_center[2] = pf.spec2pixel(-1.0*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center = ds.domain_center\n",
+      "new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -259,8 +259,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "new_center[2] = pf.spec2pixel(0.7*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -271,8 +271,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "new_center[2] = pf.spec2pixel(-0.3*u.km/u.s)\n",
-      "slc = SlicePlot(pf, \"z\", [\"density\"], center=new_center)\n",
+      "new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -290,7 +290,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = ProjectionPlot(pf, \"z\", [\"density\"], proj_style=\"sum\")\n",
+      "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
       "prj.set_log(\"density\", True)\n",
       "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
       "prj.show()"
@@ -303,4 +303,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -84,8 +84,8 @@
   
   from yt.mods import *
   
-  pf = load("DD0000")
-  sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+  ds = load("DD0000")
+  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
   ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
       treecode=True, opening_angle=2.0)
@@ -97,8 +97,8 @@
   
   from yt.mods import *
   
-  pf = load("DD0000")
-  sp = pf.sphere([0.5, 0.5, 0.5], radius=0.1)
+  ds = load("DD0000")
+  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
   ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
       treecode=False)

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -58,8 +58,8 @@
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
 
-  pf=load('Enzo_64/RD0006/RedshiftOutput0006')
-  halo_list = parallelHF(pf)
+  ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+  halo_list = parallelHF(ds)
   halo_list.dump('MyHaloList')
 
 Ellipsoid Parameters
@@ -69,8 +69,8 @@
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
 
-  pf=load('Enzo_64/RD0006/RedshiftOutput0006')
-  haloes = LoadHaloes(pf, 'MyHaloList')
+  ds=load('Enzo_64/RD0006/RedshiftOutput0006')
+  haloes = LoadHaloes(ds, 'MyHaloList')
 
 Once the halo information is saved you can load it into the data
 object "haloes", you can get loop over the list of haloes and do
@@ -107,7 +107,7 @@
 
 .. code-block:: python
 
-  ell = pf.ellipsoid(ell_param[0],
+  ell = ds.ellipsoid(ell_param[0],
   ell_param[1],
   ell_param[2],
   ell_param[3],

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -9,7 +9,7 @@
 backwards compatible in that output from old halo finders may be loaded.
 
 A catalog of halos can be created from any initial dataset given to halo 
-catalog through data_pf. These halos can be found using friends-of-friends,
+catalog through data_ds. These halos can be found using friends-of-friends,
 HOP, and Rockstar. The finder_method keyword dictates which halo finder to
 use. The available arguments are 'fof', 'hop', and'rockstar'. For more
 details on the relative differences between these halo finders see 
@@ -19,32 +19,32 @@
 
    from yt.mods import *
    from yt.analysis_modules.halo_analysis.api import HaloCatalog
-   data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
-   hc = HaloCatalog(data_pf=data_pf, finder_method='hop')
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
 
 A halo catalog may also be created from already run rockstar outputs. 
 This method is not implemented for previously run friends-of-friends or 
 HOP finders. Even though rockstar creates one file per processor, 
 specifying any one file allows the full catalog to be loaded. Here we 
 only specify the file output by the processor with ID 0. Note that the 
-argument for supplying a rockstar output is `halos_pf`, not `data_pf`.
+argument for supplying a rockstar output is `halos_ds`, not `data_ds`.
 
 .. code-block:: python
 
-   halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
-   hc = HaloCatalog(halos_pf=halos_pf)
+   halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+   hc = HaloCatalog(halos_ds=halos_ds)
 
 Although supplying only the binary output of the rockstar halo finder 
 is sufficient for creating a halo catalog, it is not possible to find 
 any new information about the identified halos. To associate the halos 
 with the dataset from which they were found, supply arguments to both 
-halos_pf and data_pf.
+halos_ds and data_ds.
 
 .. code-block:: python
 
-   halos_pf = load(path+'rockstar_halos/halos_0.0.bin')
-   data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
-   hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf)
+   halos_ds = load(path+'rockstar_halos/halos_0.0.bin')
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
 
 A data container can also be supplied via keyword data_source, 
 associated with either dataset, to control the spatial region in 
@@ -215,8 +215,8 @@
 
 .. code-block:: python
 
-   hpf = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
-   hc = HaloCatalog(halos_pf=hpf,
+   hds = load(path+"halo_catalogs/catalog_0046/catalog_0046.0.h5")
+   hc = HaloCatalog(halos_ds=hds,
                     output_dir="halo_catalogs/catalog_0046")
    hc.add_callback("load_profiles", output_dir="profiles",
                    filename="virial_profiles")
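
The step these snippets stop short of showing is actually running the pipeline: once configured, ``hc.create()`` executes any callbacks, filters, and quantities and writes the catalog to ``output_dir``. A compact end-to-end sketch reusing the paths from the examples above:

    from yt.mods import *
    from yt.analysis_modules.halo_analysis.api import HaloCatalog

    halos_ds = load("rockstar_halos/halos_0.0.bin")
    data_ds = load("Enzo_64/RD0006/RedshiftOutput0006")
    hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds,
                     output_dir="halo_catalogs/catalog_0046")
    hc.create()   # run the analysis pipeline and write the catalog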

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -60,8 +60,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0030")
-  hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out", num_sigma_bins=200,
+  ds = load("data0030")
+  hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out", num_sigma_bins=200,
   mass_column=5)
 
 Attached to ``hmf`` is the convenience function ``write_out``, which saves
@@ -102,8 +102,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0030")
-  hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out", 
+  ds = load("data0030")
+  hmf = HaloMassFcn(ds, halo_file="FilteredQuantities.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4)
   hmf.write_out(prefix='hmf')

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/halo_profiling.rst
--- a/doc/source/analyzing/analysis_modules/halo_profiling.rst
+++ b/doc/source/analyzing/analysis_modules/halo_profiling.rst
@@ -395,8 +395,8 @@
    def find_min_temp_dist(sphere):
        old = sphere.center
        ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
-       d = sphere.pf['kpc'] * periodic_dist(old, [mx, my, mz],
-           sphere.pf.domain_right_edge - sphere.pf.domain_left_edge)
+       d = sphere.ds['kpc'] * periodic_dist(old, [mx, my, mz],
+           sphere.ds.domain_right_edge - sphere.ds.domain_left_edge)
        # If new center farther than 5 kpc away, don't recenter
        if d > 5.: return [-1, -1, -1]
        return [mx,my,mz]
@@ -426,7 +426,7 @@
              128, 'temperature', 1e2, 1e7, True,
              end_collect=False)
        my_profile.add_fields('cell_mass', weight=None, fractional=False)
-       my_filename = os.path.join(sphere.pf.fullpath, '2D_profiles', 
+       my_filename = os.path.join(sphere.ds.fullpath, '2D_profiles', 
              'Halo_%04d.h5' % halo['id'])
        my_profile.write_out_h5(my_filename)
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/hmf_howto.rst
--- a/doc/source/analyzing/analysis_modules/hmf_howto.rst
+++ b/doc/source/analyzing/analysis_modules/hmf_howto.rst
@@ -27,8 +27,8 @@
 .. code-block:: python
 
   from yt.mods import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds)
   halo_list.write_out("HopAnalysis.out")
 
 The only important columns of data in the text file ``HopAnalysis.out``
@@ -79,8 +79,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_mass_function.api import *
-  pf = load("data0001")
-  hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out", 
+  ds = load("data0001")
+  hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4, mass_column=5, num_sigma_bins=200)
   hmf.write_out(prefix='hmf')
@@ -107,9 +107,9 @@
   from yt.analysis_modules.halo_mass_function.api import *
   
   # If desired, start loop here.
-  pf = load("data0001")
+  ds = load("data0001")
   
-  halo_list = HaloFinder(pf)
+  halo_list = HaloFinder(ds)
   halo_list.write_out("HopAnalysis.out")
   
   hp = HP.HaloProfiler("data0001", halo_list_file='HopAnalysis.out')
@@ -120,7 +120,7 @@
                 virial_quantities=['TotalMassMsun','RadiusMpc'])
   hp.make_profiles(filename="VirialHaloes.out")
   
-  hmf = HaloMassFcn(pf, halo_file="VirialHaloes.out", 
+  hmf = HaloMassFcn(ds, halo_file="VirialHaloes.out", 
   sigma8input=0.9, primordial_index=1., omega_baryon0=0.06,
   fitting_function=4, mass_column=5, num_sigma_bins=200)
   hmf.write_out(prefix='hmf')

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst
@@ -60,7 +60,7 @@
    when gathering datasets for time series.  Default: True.
 
  * **set_parameters** (*dict*): Dictionary of parameters to attach to 
-   pf.parameters.  Default: None.
+   ds.parameters.  Default: None.
 
  * **output_dir** (*string*): The directory in which images and data files
     will be written.  Default: 'LC'.

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -48,7 +48,7 @@
 
 .. code:: python
 
-    pf = load("MHDSloshing/virgo_low_res.0054.vtk",
+    ds = load("MHDSloshing/virgo_low_res.0054.vtk",
               parameters={"time_unit":(1.0,"Myr"),
                           "length_unit":(1.0,"Mpc"),
                           "mass_unit":(1.0e14,"Msun")}) 
@@ -423,7 +423,7 @@
 evacuated two "bubbles" of radius 30 kpc at a distance of 50 kpc from
 the center. 
 
-Now, we create a parameter file out of this dataset:
+Now, we create a yt Dataset object out of this dataset:
 
 .. code:: python
 
@@ -445,7 +445,7 @@
 
 .. code:: python
 
-   sphere = ds.sphere(pf.domain_center, (1.0,"Mpc"))
+   sphere = ds.sphere(ds.domain_center, (1.0,"Mpc"))
        
    A = 6000.
    exp_time = 2.0e5

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst
+++ b/doc/source/analyzing/analysis_modules/radial_column_density.rst
@@ -41,15 +41,15 @@
 
   from yt.mods import *
   from yt.analysis_modules.radial_column_density.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   
-  rcdnumdens = RadialColumnDensity(pf, 'NumberDensity', [0.5, 0.5, 0.5],
+  rcdnumdens = RadialColumnDensity(ds, 'NumberDensity', [0.5, 0.5, 0.5],
     max_radius = 0.5)
   def _RCDNumberDensity(field, data, rcd = rcdnumdens):
       return rcd._build_derived_field(data)
   add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
   
-  dd = pf.h.all_data()
+  dd = ds.all_data()
   print dd['RCDNumberDensity']
 
 The field ``RCDNumberDensity`` can be used just like any other derived field

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/radmc3d_export.rst
--- a/doc/source/analyzing/analysis_modules/radmc3d_export.rst
+++ b/doc/source/analyzing/analysis_modules/radmc3d_export.rst
@@ -41,8 +41,8 @@
 
 .. code-block:: python
 
-    pf = load("galaxy0030/galaxy0030")
-    writer = RadMC3DWriter(pf)
+    ds = load("galaxy0030/galaxy0030")
+    writer = RadMC3DWriter(ds)
     
     writer.write_amr_grid()
     writer.write_dust_file("DustDensity", "dust_density.inp")
@@ -87,8 +87,8 @@
         return (x_co/mu_h)*data["density"]
     add_field("NumberDensityCO", function=_NumberDensityCO)
     
-    pf = load("galaxy0030/galaxy0030")
-    writer = RadMC3DWriter(pf)
+    ds = load("galaxy0030/galaxy0030")
+    writer = RadMC3DWriter(ds)
     
     writer.write_amr_grid()
     writer.write_line_file("NumberDensityCO", "numberdens_co.inp")

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/running_halofinder.rst
--- a/doc/source/analyzing/analysis_modules/running_halofinder.rst
+++ b/doc/source/analyzing/analysis_modules/running_halofinder.rst
@@ -57,8 +57,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds)
 
 Running FoF is similar:
 
@@ -66,8 +66,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = FOFHaloFinder(pf)
+  ds = load("data0001")
+  halo_list = FOFHaloFinder(ds)
 
 Halo Data Access
 ----------------
@@ -172,8 +172,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  haloes = HaloFinder(pf)
+  ds = load("data0001")
+  haloes = HaloFinder(ds)
   haloes.dump("basename")
 
 It is easy to load the halos using the ``LoadHaloes`` class:
@@ -182,8 +182,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  haloes = LoadHaloes(pf, "basename")
+  ds = load("data0001")
+  haloes = LoadHaloes(ds, "basename")
 
 Everything that can be done with ``haloes`` in the first example should be
 possible with ``haloes`` in the second.
@@ -229,10 +229,10 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = HaloFinder(pf,padding=0.02)
+  ds = load("data0001")
+  halo_list = HaloFinder(ds,padding=0.02)
   # --or--
-  halo_list = FOFHaloFinder(pf,padding=0.02)
+  halo_list = FOFHaloFinder(ds,padding=0.02)
 
 The ``padding`` parameter is in simulation units and defaults to 0.02. This parameter is how much padding
 is added to each of the six sides of a subregion. This value should be 2x-3x larger than the largest
@@ -314,8 +314,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = parallelHF(pf)
+  ds = load("data0001")
+  halo_list = parallelHF(ds)
 
 Parallel HOP has these user-set options:
 
@@ -392,8 +392,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load("data0001")
-  halo_list = parallelHF(pf, threshold=80.0, dm_only=True, resize=False, 
+  ds = load("data0001")
+  halo_list = parallelHF(ds, threshold=80.0, dm_only=True, resize=False, 
   rearrange=True, safety=1.5, premerge=True)
   halo_list.write_out("ParallelHopAnalysis.out")
   halo_list.write_particle_list("parts")
@@ -416,11 +416,11 @@
 
   from yt.mods import *
   from yt.analysis_modules.halo_finding.api import *
-  pf = load('data0458')
+  ds = load('data0458')
   # Note that the first term below, [0.5]*3, defines the center of
   # the region and is not used. It can be any value.
-  sv = pf.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
-  halos = HaloFinder(pf, subvolume = sv)
+  sv = ds.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
+  halos = HaloFinder(ds, subvolume = sv)
   halos.write_out("sv.out")
 
 
@@ -493,7 +493,7 @@
     the width of the smallest grid element in the simulation from the
     last data snapshot (i.e. the one where time has evolved the
     longest) in the time series:
-    ``pf_last.index.get_smallest_dx() * pf_last['mpch']``.
+    ``ds_last.index.get_smallest_dx() * ds_last['mpch']``.
   * ``total_particles``, if supplied, this is a pre-calculated
     total number of dark matter
     particles present in the simulation. For example, this is useful
@@ -515,21 +515,21 @@
 out*list) and binary (halo*bin) files inside the ``outbase`` directory. 
 We use the halo list classes to recover the information. 
 
-Inside the ``outbase`` directory there is a text file named ``pfs.txt``
-that records the connection between pf names and the Rockstar file names.
+Inside the ``outbase`` directory there is a text file named ``datasets.txt``
+that records the connection between ds names and the Rockstar file names.
 
 The halo list can be automatically generated from the RockstarHaloFinder 
 object by calling ``RockstarHaloFinder.halo_list()``. Alternatively, the halo
 lists can be built from the RockstarHaloList class directly 
-``LoadRockstarHalos(pf,'outbase/out_0.list')``.
+``LoadRockstarHalos(ds,'outbase/out_0.list')``.
 
 .. code-block:: python
     
-    rh = RockstarHaloFinder(pf)
+    rh = RockstarHaloFinder(ds)
     #First method of creating the halo lists:
     halo_list = rh.halo_list()    
     #Alternate method of creating halo_list:
-    halo_list = LoadRockstarHalos(pf, 'rockstar_halos/out_0.list')
+    halo_list = LoadRockstarHalos(ds, 'rockstar_halos/out_0.list')
 
 The above ``halo_list`` is very similar to any other list of halos loaded off
 disk.
@@ -595,18 +595,18 @@
     
     def main():
         import enzo
-        pf = EnzoDatasetInMemory()
+        ds = EnzoDatasetInMemory()
         mine = ytcfg.getint('yt','__topcomm_parallel_rank')
         size = ytcfg.getint('yt','__topcomm_parallel_size')
 
         # Call rockstar.
-        ts = DatasetSeries([pf])
-        outbase = "./rockstar_halos_%04d" % pf['NumberOfPythonTopGridCalls']
+        ts = DatasetSeries([ds])
+        outbase = "./rockstar_halos_%04d" % ds['NumberOfPythonTopGridCalls']
         rh = RockstarHaloFinder(ts, num_readers = size,
             outbase = outbase)
         rh.run()
     
         # Load the halos off disk.
         fname = outbase + "/out_0.list"
-        rhalos = LoadRockstarHalos(pf, fname)
+        rhalos = LoadRockstarHalos(ds, fname)
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/star_analysis.rst
--- a/doc/source/analyzing/analysis_modules/star_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/star_analysis.rst
@@ -27,9 +27,9 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  dd = pf.h.all_data()
-  sfr = StarFormationRate(pf, data_source=dd)
+  ds = load("data0030")
+  dd = ds.all_data()
+  sfr = StarFormationRate(ds, data_source=dd)
 
 or just a small part of the volume:
 
@@ -37,9 +37,9 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   sp = p.h.sphere([0.5,0.5,0.5], 0.05)
-  sfr = StarFormationRate(pf, data_source=sp)
+  sfr = StarFormationRate(ds, data_source=sp)
 
 If the stars to be analyzed cannot be defined by a data_source, arrays can be
 passed. In this case, the units for the ``star_mass`` must be in Msun,
@@ -51,8 +51,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  ds = load("data0030")
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   # This puts the particle data for *all* the particles in the region re
   # into the arrays sm and ct.
   sm = re["ParticleMassMsun"]
@@ -65,7 +65,7 @@
   # 100 is a time in code units.
   sm_old = sm[ct < 100]
   ct_old = ct[ct < 100]
-  sfr = StarFormationRate(pf, star_mass=sm_old, star_creation_time=ct_old,
+  sfr = StarFormationRate(ds, star_mass=sm_old, star_creation_time=ct_old,
   volume=re.volume('mpc'))
 
 To output the data to a text file, use the command ``.write_out``:
@@ -139,8 +139,8 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  spec = SpectrumBuilder(pf, bcdir="/home/username/bc/", model="chabrier")
+  ds = load("data0030")
+  spec = SpectrumBuilder(ds, bcdir="/home/username/bc/", model="chabrier")
 
 In order to analyze a set of stars, use the ``calculate_spectrum`` command.
 It accepts either a ``data_source``, or a set of arrays with the star 
@@ -148,7 +148,7 @@
 
 .. code-block:: python
 
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   spec.calculate_spectrum(data_source=re)
 
 If a subset of stars are desired, call it like this. ``star_mass`` is in units
@@ -157,7 +157,7 @@
 
 .. code-block:: python
 
-  re = pf.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
+  re = ds.region([0.5,0.5,0.5], [0.4,0.5,0.6], [0.5,0.6,0.7])
   # This puts the particle data for *all* the particles in the region re
   # into the arrays sm, ct and metal.
   sm = re["ParticleMassMsun"]
@@ -223,14 +223,14 @@
 
 Below is an example of an absurd SED for universe-old stars all with 
 solar metallicity at a redshift of zero. Note that even in this example,
-a ``pf`` is required.
+a ``ds`` is required.
 
 .. code-block:: python
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
-  spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="chabrier")
+  ds = load("data0030")
+  spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="chabrier")
   sm = np.ones(100)
   ct = np.zeros(100)
   spec.calculate_spectrum(star_mass=sm, star_creation_time=ct, star_metallicity_constant=0.02)
@@ -252,11 +252,11 @@
 
   from yt.mods import *
   from yt.analysis_modules.star_analysis.api import *
-  pf = load("data0030")
+  ds = load("data0030")
   # Find all the haloes, and include star particles.
-  haloes = HaloFinder(pf, dm_only=False)
+  haloes = HaloFinder(ds, dm_only=False)
   # Set up the spectrum builder.
-  spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="salpeter")
+  spec = SpectrumBuilder(ds, bcdir="/home/user/bc", model="salpeter")
   # Iterate over the haloes.
   for halo in haloes:
       # Get the pertinent arrays.

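A minimal end-to-end run of the first example in this file, finishing with the ``.write_out`` call the text describes (no extra arguments are passed, since only the bare call is documented above):

    from yt.mods import *
    from yt.analysis_modules.star_analysis.api import *

    ds = load("data0030")
    dd = ds.all_data()
    sfr = StarFormationRate(ds, data_source=dd)
    sfr.write_out()   # dump the star formation rate table to a text file
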
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/sunrise_export.rst
--- a/doc/source/analyzing/analysis_modules/sunrise_export.rst
+++ b/doc/source/analyzing/analysis_modules/sunrise_export.rst
@@ -18,15 +18,15 @@
 	from yt.mods import *
 	import numpy as na
 
-	pf = ARTDataset(file_amr)
-	potential_value,center=pf.h.find_min('Potential_New')
-	root_cells = pf.domain_dimensions[0]
+	ds = ARTDataset(file_amr)
+	potential_value,center=ds.find_min('Potential_New')
+	root_cells = ds.domain_dimensions[0]
 	le = np.floor(root_cells*center) #left edge
 	re = np.ceil(root_cells*center) #right edge
 	bounds = [(le[0], re[0]-le[0]), (le[1], re[1]-le[1]), (le[2], re[2]-le[2])] 
 	#bounds are left edge plus a span
 	bounds = numpy.array(bounds,dtype='int')
-	amods.sunrise_export.export_to_sunrise(pf, out_fits_file,subregion_bounds = bounds)
+	amods.sunrise_export.export_to_sunrise(ds, out_fits_file,subregion_bounds = bounds)
 
 To ensure that the camera is centered on the galaxy, we find the center by finding the minimum of the gravitational potential. The above code takes that center, and casts it in terms of which root cells should be extracted. At the moment, Sunrise accepts a strict octree, and you can only extract a 2x2x2 domain on the root grid, and not an arbitrary volume. See the optimization section later for workarounds. On my reasonably recent machine, the export process takes about 30 minutes.
 
@@ -51,7 +51,7 @@
 	col_list.append(pyfits.Column("L_bol", format="D",array=np.zeros(mass_current.size)))
 	cols = pyfits.ColDefs(col_list)
 
-	amods.sunrise_export.export_to_sunrise(pf, out_fits_file,write_particles=cols,
+	amods.sunrise_export.export_to_sunrise(ds, out_fits_file,write_particles=cols,
 	    subregion_bounds = bounds)
 
 This code snippet takes the stars in a region outlined by the ``bounds`` variable, organizes them into pyfits columns which are then passed to export_to_sunrise. Note that yt units are in CGS, and Sunrise accepts units in (physical) kpc, kelvin, solar masses, and years.  
@@ -68,8 +68,8 @@
 .. code-block:: python
 
 	for x,a in enumerate(zip(pos,age)): #loop over stars
-	    center = x*pf['kpc']
-	    grid,idx = find_cell(pf.index.grids[0],center)
+	    center = x*ds['kpc']
+	    grid,idx = find_cell(ds.index.grids[0],center)
 	    pk[i] = grid['Pk'][idx]
 
 This code is how Sunrise calculates the pressure, so we can add our own derived field:
@@ -79,7 +79,7 @@
 	def _Pk(field,data):
 	    #calculate pressure over Boltzmann's constant: P/k=(n/V)T
 	    #Local stellar ISM values are ~16500 Kcm^-3
-	    vol = data['cell_volume'].astype('float64')*data.pf['cm']**3.0 #volume in cm
+	    vol = data['cell_volume'].astype('float64')*data.ds['cm']**3.0 #volume in cm
 	    m_g = data["cell_mass"]*1.988435e33 #mass of H in g
 	    n_g = m_g*5.97e23 #number of H atoms
 	    teff = data["temperature"]

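Because yt hands back CGS values while Sunrise expects physical kpc, kelvin, solar masses and years, the particle arrays need rescaling before they are packed into the pyfits columns. A sketch of the conversions with the usual CGS factors (``mass_cgs``, ``age_cgs`` and ``pos_kpc`` are illustrative placeholders for the raw yt arrays):

.. code-block:: python

   mass_msun = mass_cgs / 1.989e33    # g  -> Msun
   age_yr = age_cgs / 3.156e7         # s  -> yr
   pos_kpc = pos_cgs / 3.086e21       # cm -> kpc
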
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst
@@ -35,7 +35,7 @@
     from yt.mods import *
     from yt.analysis_modules.two_point_functions.api import *
     
-    pf = load("data0005")
+    ds = load("data0005")
     
     # Calculate the S in RMS velocity difference between the two points.
     # All functions have five inputs. The first two are containers
@@ -55,7 +55,7 @@
     # the number of pairs of points to calculate, how big a data queue to
     # use, the range of pair separations and how many lengths to use, 
     # and how to divide that range (linear or log).
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -90,7 +90,7 @@
 
     from yt.mods import *
     ...
-    tpf = amods.two_point_functions.TwoPointFunctions(pf, ...)
+    tpf = amods.two_point_functions.TwoPointFunctions(ds, ...)
 
 
 Probability Distribution Function
@@ -261,12 +261,12 @@
 Before any functions can be added, the ``TwoPointFunctions`` object needs
 to be created. It has these inputs:
 
-  * ``pf`` (the only required input and is always the first term).
+  * ``ds`` (the only required input and is always the first term).
   * Field list, required, an ordered list of field names used by the
     functions. The order in this list will need to be referenced when writing
     functions. Derived fields may be used here if they are defined first.
   * ``left_edge``, ``right_edge``, three-element lists of floats:
-    Used to define a sub-region of the full volume in which to run the TPF.
+    Used to define a sub-region of the full volume in which to run the TPF.
     Default=None, which is equivalent to running on the full volume. Both must
     be set to have any effect.
   * ``total_values``, integer: The number of random points to generate globally
@@ -298,7 +298,7 @@
     guarantees that the point pairs will be in different cells for the most 
     refined regions.
     If the first term of the list is -1, the minimum length will be automatically
-    set to sqrt(3)*dx, ex: ``length_range = [-1, 10/pf['kpc']]``.
+    set to sqrt(3)*dx, ex: ``length_range = [-1, 10/ds['kpc']]``.
   * ``vol_ratio``, integer: How to multiply-assign subvolumes to the parallel
     tasks. This number must be an integer factor of the total number of tasks or
     very bad things will happen. The default value of 1 will assign one task
@@ -639,7 +639,7 @@
       return vdiff
     
     ...
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -667,7 +667,7 @@
     from yt.mods import *
     from yt.analysis_modules.two_point_functions.api import *
     
-    pf = load("data0005")
+    ds = load("data0005")
     
     # Calculate the S in RMS velocity difference between the two points.
     # Also store the ratio of densities (keeping them >= 1).
@@ -688,7 +688,7 @@
     # Set the number of pairs of points to calculate, how big a data queue to
     # use, the range of pair separations and how many lengths to use, 
     # and how to divide that range (linear or log).
-    tpf = TwoPointFunctions(pf, ["velocity_x", "velocity_y", "velocity_z", "density"],
+    tpf = TwoPointFunctions(ds, ["velocity_x", "velocity_y", "velocity_z", "density"],
         total_values=1e5, comm_size=10000, 
         length_number=10, length_range=[1./128, .5],
         length_type="log")
@@ -765,7 +765,7 @@
     from yt.analysis_modules.two_point_functions.api import *
     
     # Specify the dataset on which we want to base our work.
-    pf = load('data0005')
+    ds = load('data0005')
     
     # Read in the halo centers of masses.
     CoM = []
@@ -787,7 +787,7 @@
     # For technical reasons (hopefully to be fixed someday) `vol_ratio`
     # needs to be equal to the number of tasks used if this is run
     # in parallel. A value of -1 automatically does this.
-    tpf = TwoPointFunctions(pf, ['x'],
+    tpf = TwoPointFunctions(ds, ['x'],
         total_values=1e7, comm_size=10000, 
         length_number=11, length_range=[2*radius, .5],
         length_type="lin", vol_ratio=-1)
@@ -868,11 +868,11 @@
     from yt.analysis_modules.two_point_functions.api import *
     
     # Specify the dataset on which we want to base our work.
-    pf = load('data0005')
+    ds = load('data0005')
     
     # We work in simulation's units, these are for conversion.
-    vol_conv = pf['cm'] ** 3
-    sm = pf.index.get_smallest_dx()**3
+    vol_conv = ds['cm'] ** 3
+    sm = ds.index.get_smallest_dx()**3
     
     # Our density limit, in gm/cm**3
     dens = 2e-31
@@ -887,13 +887,13 @@
         return d.sum()
     add_quantity("TotalNumDens", function=_NumDens,
         combine_function=_combNumDens, n_ret=1)
-    all = pf.h.all_data()
+    all = ds.all_data()
     n = all.quantities["TotalNumDens"]()
     
     print n,'n'
     
     # Instantiate our TPF object.
-    tpf = TwoPointFunctions(pf, ['density', 'cell_volume'],
+    tpf = TwoPointFunctions(ds, ['density', 'cell_volume'],
         total_values=1e5, comm_size=10000, 
         length_number=11, length_range=[-1, .5],
         length_type="lin", vol_ratio=1)

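Every function handed to ``TwoPointFunctions`` follows the same pattern as the RMS velocity difference used in the examples above: it receives the field values at the two points (columns ordered as in the field list) plus positional information, and returns the per-pair quantity to be binned. A minimal sketch, assuming the conventional five-argument form (values at each point, the two positions, and the separation vector):

.. code-block:: python

    import numpy as np

    def rms_vdiff(a, b, r1, r2, vec):
        # a and b hold the fields at the paired points; columns follow the
        # field list order: velocity_x, velocity_y, velocity_z.
        vdiff = np.zeros(a.shape[0], dtype="float64")
        for i in range(3):
            vdiff += (a[:, i] - b[:, i]) ** 2
        return vdiff
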
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/analysis_modules/xray_emission_fields.rst
--- a/doc/source/analyzing/analysis_modules/xray_emission_fields.rst
+++ b/doc/source/analyzing/analysis_modules/xray_emission_fields.rst
@@ -74,4 +74,4 @@
 
   The X-ray fields depend on the number density of hydrogen atoms, in the yt field
   ``H_number_density``. If this field is not defined (either in the dataset or by the user),
-  the primordial hydrogen mass fraction (X = 0.76) will be used to construct it.
\ No newline at end of file
+  the primordial hydrogen mass fraction (X = 0.76) will be used to construct it.

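That fallback amounts to building the hydrogen number density from the gas density and the primordial mass fraction. A back-of-the-envelope sketch with an example density value:

.. code-block:: python

  m_H = 1.67e-24                # mass of a hydrogen atom in g
  density = 2e-31               # example gas density in g/cm**3
  n_H = 0.76 * density / m_H    # hydrogen atoms per cm**3
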
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -20,11 +20,11 @@
 .. code-block:: python
 
    def _Pressure(field, data):
-       return (data.pf["Gamma"] - 1.0) * \
+       return (data.ds.gamma - 1.0) * \
               data["density"] * data["thermal_energy"]
 
 Note that we do a couple different things here.  We access the "Gamma"
-parameter from the parameter file, we access the "density" field and we access
+parameter from the dataset, we access the "density" field and we access
 the "thermal_energy" field.  "thermal_energy" is, in fact, another derived field!
 ("thermal_energy" deals with the distinction in storage of energy between dual
 energy formalism and non-DEF.)  We don't do any loops, we don't do any
@@ -87,14 +87,14 @@
 .. code-block:: python
 
    >>> from yt.mods import *
-   >>> pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
-   >>> pf.field_list
+   >>> ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+   >>> ds.field_list
    ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
-   >>> pf.field_info['dens']._units
+   >>> ds.field_info['dens']._units
    '\\rm{g}/\\rm{cm}^{3}'
-   >>> pf.field_info['temp']._units
+   >>> ds.field_info['temp']._units
    '\\rm{K}'
-   >>> pf.field_info['velx']._units
+   >>> ds.field_info['velx']._units
    '\\rm{cm}/\\rm{s}'
 
 Thus if you were using any of these fields as input to your derived field, you 
@@ -178,7 +178,7 @@
 
     def _DivV(field, data):
         # We need to set up stencils
-        if data.pf["HydroMethod"] == 2:
+        if data.ds["HydroMethod"] == 2:
             sl_left = slice(None,-2,None)
             sl_right = slice(1,-1,None)
             div_fac = 1.0
@@ -189,11 +189,11 @@
         ds = div_fac * data['dx'].flat[0]
         f  = data["velocity_x"][sl_right,1:-1,1:-1]/ds
         f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
-        if data.pf.dimensionality > 1:
+        if data.ds.dimensionality > 1:
             ds = div_fac * data['dy'].flat[0]
             f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
             f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
-        if data.pf.dimensionality > 2:
+        if data.ds.dimensionality > 2:
             ds = div_fac * data['dz'].flat[0]
             f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
             f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
@@ -241,8 +241,8 @@
         return data["temperature"]*data["density"]**(-2./3.)
     add_field("Entr", function=_Entropy)
 
-    pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
-    writer.save_field(pf, "Entr")
+    ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+    writer.save_field(ds, "Entr")
 
 This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
 
@@ -250,8 +250,8 @@
 
     from yt.mods import *
 
-    pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
-    data = pf.h.all_data()
+    ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+    data = ds.all_data()
     print data["Entr"]
 
 you can work with the field exactly as before, without having to recompute it.

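Putting the pieces of this file together, the rewritten pressure function can be registered and evaluated just like the entropy example above. A minimal sketch (the field name ``"Pressure"`` is illustrative):

.. code-block:: python

    from yt.mods import *

    def _Pressure(field, data):
        # (gamma - 1) * density * specific thermal energy, all in CGS.
        return (data.ds.gamma - 1.0) * data["density"] * data["thermal_energy"]

    add_field("Pressure", function=_Pressure)

    ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    dd = ds.all_data()
    print dd["Pressure"]
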
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/external_analysis.rst
--- a/doc/source/analyzing/external_analysis.rst
+++ b/doc/source/analyzing/external_analysis.rst
@@ -18,10 +18,10 @@
    from yt.mods import *
    import radtrans
 
-   pf = load("DD0010/DD0010")
+   ds = load("DD0010/DD0010")
    rt_grids = []
 
-   for grid in pf.index.grids:
+   for grid in ds.index.grids:
        rt_grid = radtrans.RegularBox(
             grid.LeftEdge, grid.RightEdge,
             grid["density"], grid["temperature"], grid["metallicity"])
@@ -39,8 +39,8 @@
    from yt.mods import *
    import pop_synthesis
 
-   pf = load("DD0010/DD0010")
-   dd = pf.h.all_data()
+   ds = load("DD0010/DD0010")
+   dd = ds.all_data()
    star_masses = dd["StarMassMsun"]
    star_metals = dd["StarMetals"]
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/generating_processed_data.rst
--- a/doc/source/analyzing/generating_processed_data.rst
+++ b/doc/source/analyzing/generating_processed_data.rst
@@ -43,7 +43,7 @@
 
 .. code-block:: python
 
-   sl = pf.slice(0, 0.5)
+   sl = ds.slice(0, 0.5)
    frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
    my_image = frb["density"]
 
@@ -98,7 +98,7 @@
 
 .. code-block:: python
 
-   source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+   source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
    profile = BinnedProfile1D(source, 128, "density", 1e-24, 1e-10)
    profile.add_fields("cell_mass", weight = None)
    profile.add_fields("temperature")
@@ -128,7 +128,7 @@
 
 .. code-block:: python
 
-   source = pf.sphere( (0.3, 0.6, 0.4), 1.0/pf['pc'])
+   source = ds.sphere( (0.3, 0.6, 0.4), 1.0/ds['pc'])
    prof2d = BinnedProfile2D(source, 128, "density", 1e-24, 1e-10, True,
                                     128, "temperature", 10, 10000, True)
    prof2d.add_fields("cell_mass", weight = None)
@@ -171,7 +171,7 @@
 
 .. code-block:: python
 
-   ray = pf.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
+   ray = ds.ray(  (0.3, 0.5, 0.9), (0.1, 0.8, 0.5) )
    print ray["density"]
 
 The points are ordered, but the ray is also traversing cells of varying length,

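Since the ``FixedResolutionBuffer`` hands back plain 2D arrays, its output can go straight into matplotlib. A sketch reusing the slice and buffer shown above (``"my_data"`` is a placeholder dataset name):

.. code-block:: python

   import numpy as np
   import matplotlib.pyplot as plt
   from yt.mods import *
   from yt.visualization.fixed_resolution import FixedResolutionBuffer

   ds = load("my_data")
   sl = ds.slice(0, 0.5)
   frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
   my_image = frb["density"]                  # a plain 512 x 512 array
   plt.imshow(np.log10(my_image), origin="lower")
   plt.savefig("density_slice.png")
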
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/ionization_cube.py
--- a/doc/source/analyzing/ionization_cube.py
+++ b/doc/source/analyzing/ionization_cube.py
@@ -13,9 +13,9 @@
 ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
 
 t1 = time.time()
-for pf in ts.piter():
-    z = pf.current_redshift
-    for g in parallel_objects(pf.index.grids, njobs = 16):
+for ds in ts.piter():
+    z = ds.current_redshift
+    for g in parallel_objects(ds.index.grids, njobs = 16):
         i1, j1, k1 = g.get_global_startindex() # Index into our domain
         i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
         # Look for the newly ionized gas

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -26,7 +26,7 @@
 while Enzo calls it "temperature".  Translator functions ensure that any
 derived field relying on "temp" or "temperature" works with both output types.
 
-When a field is requested, the parameter file first looks to see if that field
+When a field is requested, the dataset object first looks to see if that field
 exists on disk.  If it does not, it then queries the list of code-specific
 derived fields.  If it finds nothing there, it then defaults to examining the
 global set of derived fields.
@@ -82,7 +82,7 @@
 
 .. code-block:: python
 
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
 and then look at the temperature of its cells within it via:
 
@@ -105,25 +105,25 @@
 
 .. code-block:: python
 
-   pf = load("my_data")
-   print pf.field_list
-   print pf.derived_field_list
+   ds = load("my_data")
+   print ds.field_list
+   print ds.derived_field_list
 
 When a field is added, it is added to a container that hangs off of the
-parameter file, as well.  All of the field creation options
+dataset, as well.  All of the field creation options
 (:ref:`derived-field-options`) are accessible through this object:
 
 .. code-block:: python
 
-   pf = load("my_data")
-   print pf.field_info["pressure"].get_units()
+   ds = load("my_data")
+   print ds.field_info["pressure"].get_units()
 
 This is a fast way to examine the units of a given field, and additionally you
 can use :meth:`yt.utilities.pydot.get_source` to get the source code:
 
 .. code-block:: python
 
-   field = pf.field_info["pressure"]
+   field = ds.field_info["pressure"]
    print field.get_source()
 
 .. _available-objects:
@@ -142,8 +142,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("RedshiftOutput0005")
-   reg = pf.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
+   ds = load("RedshiftOutput0005")
+   reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
 
 .. include:: _obj_docstrings.inc
 
@@ -192,8 +192,8 @@
 
 .. code-block:: python
 
-   pf = load("my_data")
-   dd = pf.h.all_data()
+   ds = load("my_data")
+   dd = ds.all_data()
    dd.quantities["AngularMomentumVector"]()
 
 The following quantities are available via the ``quantities`` interface.
@@ -264,10 +264,10 @@
 .. python-script::
 
    from yt.mods import *
-   pf = load("enzo_tiny_cosmology/DD0046/DD0046")
-   ad = pf.h.all_data()
+   ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+   ad = ds.all_data()
    new_region = ad.cut_region(['obj["density"] > 1e-29'])
-   plot = ProjectionPlot(pf, "x", "density", weight_field="density",
+   plot = ProjectionPlot(ds, "x", "density", weight_field="density",
                          data_source=new_region)
    plot.save()
 
@@ -291,7 +291,7 @@
 
 .. code-block:: python
 
-   sp = pf.sphere("max", (1.0, 'pc'))
+   sp = ds.sphere("max", (1.0, 'pc'))
    contour_values, connected_sets = sp.extract_connected_sets(
         "density", 3, 1e-30, 1e-20)
 
@@ -355,12 +355,12 @@
 construction of the objects is the difficult part, rather than the generation
 of the data -- this means that you can save out an object as a description of
 how to recreate it in space, but not the actual data arrays affiliated with
-that object.  The information that is saved includes the parameter file off of
+that object.  The information that is saved includes the dataset off of
 which the object "hangs."  It is this piece of information that is the most
 difficult; the object, when reloaded, must be able to reconstruct a parameter
 file from whatever limited information it has in the save file.
 
-To do this, ``yt`` is able to identify parameter files based on a "hash"
+To do this, ``yt`` is able to identify datasets based on a "hash"
 generated from the base file name, the "CurrentTimeIdentifier", and the
 simulation time.  These three characteristics should never be changed outside
 of a simulation, they are independent of the file location on disk, and in
@@ -374,10 +374,10 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("my_data")
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   ds = load("my_data")
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
-   pf.h.save_object(sp, "sphere_to_analyze_later")
+   ds.save_object(sp, "sphere_to_analyze_later")
 
 
 In a later session, we can load it using
@@ -387,8 +387,8 @@
 
    from yt.mods import *
 
-   pf = load("my_data")
-   sphere_to_analyze = pf.h.load_object("sphere_to_analyze_later")
+   ds = load("my_data")
+   sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
 
 Additionally, if we want to store the object independent of the ``.yt`` file,
 we can save the object directly:
@@ -397,8 +397,8 @@
 
    from yt.mods import *
 
-   pf = load("my_data")
-   sp = pf.sphere([0.5, 0.5, 0.5], 10.0/pf['kpc'])
+   ds = load("my_data")
+   sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
    sp.save_object("my_sphere", "my_storage_file.cpkl")
 
@@ -414,10 +414,10 @@
    from yt.mods import *
    import shelve
 
-   pf = load("my_data") # not necessary if storeparameterfiles is on
+   ds = load("my_data") # not necessary if storeparameterfiles is on
 
    obj_file = shelve.open("my_storage_file.cpkl")
-   pf, obj = obj_file["my_sphere"]
+   ds, obj = obj_file["my_sphere"]
 
 If you have turned on ``storeparameterfiles`` in your configuration,
 you won't need to load the parameterfile again, as the load process

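The lookup order described at the top of this file (on-disk fields first, then code-specific and global derived fields) can be inspected directly from the two field lists. A trivial sketch (``"my_data"`` is a placeholder):

.. code-block:: python

   from yt.mods import *

   ds = load("my_data")
   # Fields in ds.field_list come straight off disk; names in
   # ds.derived_field_list are computed on demand when requested.
   print len(ds.field_list), "on-disk fields"
   print len(ds.derived_field_list), "derived fields"
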
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -86,10 +86,10 @@
 .. code-block:: python
 
    from yt.pmods import *
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
    print v, c
-   p = ProjectionPlot(pf, "x", "density")
+   p = ProjectionPlot(ds, "x", "density")
    p.save()
 
 If this script is run in parallel, two of the most expensive operations -
@@ -127,9 +127,9 @@
 .. code-block:: python
 
    from yt.pmods import *
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
-   p = ProjectionPlot(pf, "x", "density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
+   p = ProjectionPlot(ds, "x", "density")
    if is_root():
        print v, c
        p.save()
@@ -151,9 +151,9 @@
           print v, c
        plot.save()
 
-   pf = load("RD0035/RedshiftOutput0035")
-   v, c = pf.h.find_max("density")
-   p = ProjectionPlot(pf, "x", "density")
+   ds = load("RD0035/RedshiftOutput0035")
+   v, c = ds.find_max("density")
+   p = ProjectionPlot(ds, "x", "density")
    only_on_root(print_and_save_plot, v, c, plot, print=True)
 
 Types of Parallelism
@@ -252,8 +252,8 @@
    for sto, fn in parallel_objects(fns, num_procs, storage = my_storage):
 
        # Open a data file, remembering that fn is different on each task.
-       pf = load(fn)
-       dd = pf.h.all_data()
+       ds = load(fn)
+       dd = ds.all_data()
 
        # This copies fn and the min/max of density to the local copy of
        # my_storage
@@ -261,7 +261,7 @@
        sto.result = dd.quantities["Extrema"]("density")
 
        # Makes and saves a plot of the gas density.
-       p = ProjectionPlot(pf, "x", "density")
+       p = ProjectionPlot(ds, "x", "density")
        p.save()
 
    # At this point, as the loop exits, the local copies of my_storage are
@@ -301,7 +301,7 @@
 processor.  By default, parallel is set to ``True``, so you do not have to
 explicitly set ``parallel = True`` as in the above example. 
 
-One could get the same effect by iterating over the individual parameter files
+One could get the same effect by iterating over the individual datasets
 in the DatasetSeries object:
 
 .. code-block:: python
@@ -309,10 +309,10 @@
    from yt.pmods import *
    ts = DatasetSeries.from_filenames("DD*/output_*", parallel = True)
    my_storage = {}
-   for sto,pf in ts.piter(storage=my_storage):
-       sphere = pf.sphere("max", (1.0, "pc"))
+   for sto,ds in ts.piter(storage=my_storage):
+       sphere = ds.sphere("max", (1.0, "pc"))
        L_vec = sphere.quantities["AngularMomentumVector"]()
-       sto.result_id = pf.parameter_filename
+       sto.result_id = ds.parameter_filename
        sto.result = L_vec
 
    L_vecs = []
@@ -503,14 +503,14 @@
        from yt.mods import *
        import time
        
-       pf = load("DD0152")
+       ds = load("DD0152")
        t0 = time.time()
-       bigstuff, hugestuff = StuffFinder(pf)
-       BigHugeStuffParallelFunction(pf, bigstuff, hugestuff)
+       bigstuff, hugestuff = StuffFinder(ds)
+       BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
        t1 = time.time()
        for i in range(1000000):
            tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
-           array = TinyTeensyParallelFunction(pf, tinystuff, ministuff)
+           array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
            SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
        t2 = time.time()
        

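After a ``piter`` loop with storage finishes, every task holds the complete ``my_storage`` dictionary, keyed by whatever was assigned to ``result_id`` inside the loop, so the per-dataset results can be gathered with ordinary dictionary iteration. A sketch continuing the angular momentum example above:

.. code-block:: python

   L_vecs = []
   for fn in sorted(my_storage):
       L_vecs.append(my_storage[fn])
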
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/particles.rst
--- a/doc/source/analyzing/particles.rst
+++ b/doc/source/analyzing/particles.rst
@@ -63,8 +63,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("galaxy1200.dir/galaxy1200")
-   dd = pf.h.all_data()
+   ds = load("galaxy1200.dir/galaxy1200")
+   dd = ds.all_data()
 
    star_particles = dd["creation_time"] > 0.0
    print dd["ParticleMassMsun"][star_particles].max()
@@ -80,8 +80,8 @@
 .. code-block:: python
 
    from yt.mods import *
-   pf = load("galaxy1200.dir/galaxy1200")
-   dd = pf.h.all_data()
+   ds = load("galaxy1200.dir/galaxy1200")
+   dd = ds.all_data()
 
    star_particles = dd["particle_type"] == 2
    print dd["ParticleMassMsun"][star_particles].max()

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -11,10 +11,10 @@
 
 .. code-block:: python
 
-   for pfi in range(30):
-       fn = "DD%04i/DD%04i" % (pfi, pfi)
-       pf = load(fn)
-       process_output(pf)
+   for dsi in range(30):
+       fn = "DD%04i/DD%04i" % (dsi, dsi)
+       ds = load(fn)
+       process_output(ds)
 
 But this is not really very nice.  This ends up requiring a lot of maintenance.
 The :class:`~yt.data_objects.time_series.DatasetSeries` object has been
@@ -66,8 +66,8 @@
 
    from yt.mods import *
    ts = DatasetSeries.from_filenames("*/*.index")
-   for pf in ts:
-       print pf.current_time
+   for ds in ts:
+       print ds.current_time
 
 This can also operate in parallel, using
 :meth:`~yt.data_objects.time_series.DatasetSeries.piter`.  For more examples,
@@ -101,7 +101,7 @@
    max_rho = ts.tasks["MaximumValue"]("density")
 
 When we call the task, the time series object executes the task on each
-component parameter file.  The results are then returned to the user.  More
+component dataset.  The results are then returned to the user.  More
 complex, multi-task evaluations can be conducted by using the
 :meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
 list of analysis tasks.
@@ -140,14 +140,14 @@
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and pf and then decorate it with
+would write a function that accepts params and ds and then decorate it with
 analysis_task. Here we have done so:
 
 .. code-block:: python
 
    @analysis_task(('particle_type',))
-   def MassInParticleType(params, pf):
-       dd = pf.h.all_data()
+   def MassInParticleType(params, ds):
+       dd = ds.all_data()
        ptype = (dd["particle_type"] == params.particle_type)
        return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
 
@@ -196,8 +196,8 @@
 
 .. code-block:: python
 
-  for pf in my_sim.piter()
-      all_data = pf.h.all_data()
+  for ds in my_sim.piter():
+      all_data = ds.all_data()
       print all_data.quantities['Extrema']('density')
  
 Additional keywords can be given to :meth:`get_time_series` to select a subset

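Putting the pieces above together, tracking a quantity through time is just a loop over the series. A minimal sketch that records the density extrema of every output (the filename pattern is illustrative):

.. code-block:: python

   from yt.mods import *

   ts = DatasetSeries.from_filenames("DD*/output_*")
   times, extrema = [], []
   for ds in ts:
       dd = ds.all_data()
       times.append(ds.current_time)
       extrema.append(dd.quantities["Extrema"]("density"))
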
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/analyzing/units/data_selection_and_fields.rst
--- a/doc/source/analyzing/units/data_selection_and_fields.rst
+++ b/doc/source/analyzing/units/data_selection_and_fields.rst
@@ -34,7 +34,7 @@
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
 
-   dd = ds.h.all_data()
+   dd = ds.all_data()
    dd['root_cell_volume']
 
 No special unit logic needs to happen inside of the function - `np.sqrt` will
@@ -47,7 +47,7 @@
    import numpy as np
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
-   dd = ds.h.all_data()
+   dd = ds.all_data()
 
    print dd['cell_volume'].in_cgs()
    print np.sqrt(dd['cell_volume'].in_cgs())
@@ -70,5 +70,5 @@
 
    ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
 
-   dd = ds.h.all_data()
+   dd = ds.all_data()
    dd['root_cell_volume']

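The point of the example above is that the unit machinery rides along through numpy operations. A quick sketch, continuing from the ``dd`` defined in the previous snippet:

.. code-block:: python

   vol = dd['cell_volume'].in_cgs()
   print vol.units            # cm**3
   print np.sqrt(vol).units   # the square root of cm**3 -- units ride along
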
diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
--- a/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
+++ b/doc/source/bootcamp/4)_Data_Objects_and_Time_Series.ipynb
@@ -58,7 +58,7 @@
      "source": [
       "### Example 1: Simple Time Series\n",
       "\n",
-      "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation.  To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up.  For each parameter file, we'll create an object (`dd`) that covers the entire domain.  (`all_data` is a shorthand function for this.)  We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
+      "As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation.  To do this we use the construction `for ds in ts` where `ds` means \"Dataset\" and `ts` is the \"Time Series\" we just loaded up.  For each dataset, we'll create an object (`dd`) that covers the entire domain.  (`all_data` is a shorthand function for this.)  We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs."
      ]
     },
     {
@@ -108,7 +108,7 @@
       "\n",
       "Let's do something a bit different.  Let's calculate the total mass inside halos and outside halos.\n",
       "\n",
-      "This actually touches a lot of different pieces of machinery in yt.  For every parameter file, we will run the halo finder HOP.  Then, we calculate the total mass in the domain.  Then, for each halo, we calculate the sum of the baryon mass in that halo.  We'll keep running tallies of these two things."
+      "This actually touches a lot of different pieces of machinery in yt.  For every dataset, we will run the halo finder HOP.  Then, we calculate the total mass in the domain.  Then, for each halo, we calculate the sum of the baryon mass in that halo.  We'll keep running tallies of these two things."
      ]
     },
     {

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py
+++ b/doc/source/cookbook/amrkdtree_to_uniformgrid.py
@@ -15,7 +15,7 @@
 domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
 
 #determine the cellsize in the highest refinement level
-cell_size = pf.domain_width/(pf.domain_dimensions*2**lmax)
+cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
 
 #calculate the left edge of the new grid
 left_edge = domain_center - 512*cell_size
@@ -24,7 +24,7 @@
 ncells = 1024
 
 #ask yt for the specified covering grid
-cgrid = pf.h.covering_grid(lmax, left_edge, np.array([ncells,]*3))
+cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
 
 #get a map of the density into the new grid
 density_map = cgrid["density"].astype(dtype="float32")

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/contours_on_slice.py
--- a/doc/source/cookbook/contours_on_slice.py
+++ b/doc/source/cookbook/contours_on_slice.py
@@ -1,7 +1,9 @@
 import yt
 
 # first add density contours on a density slice
-ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")  
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+
+# add density contours on the density slice.
 p = yt.SlicePlot(ds, "x", "density")
 p.annotate_contour("density")
 p.save()

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -22,8 +22,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
-      "slc = SlicePlot(pf, 'x', 'density')\n",
+      "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "slc = SlicePlot(ds, 'x', 'density')\n",
       "slc"
      ],
      "language": "python",
@@ -87,4 +87,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb
@@ -51,12 +51,11 @@
       "prj.set_figure_size(5)\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
-      "fig.canvas = FigureCanvasAgg(fig)\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
-      "    prj._switch_pf(pf)\n",
+      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
       "animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)"
@@ -69,4 +68,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -99,12 +99,11 @@
       "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
-      "fig.canvas = FigureCanvasAgg(fig)\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
-      "    prj._switch_pf(pf)\n",
+      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
       "anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
@@ -120,4 +119,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py
+++ b/doc/source/cookbook/find_clumps.py
@@ -7,7 +7,7 @@
 from yt.analysis_modules.level_sets.api import (Clump, find_clumps,
                                                 get_lowest_clumps)
 
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # parameter file to load
+fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # dataset to load
 # this is the field we look for contours over -- we could do
 # this over anything.  Other common choices are 'AveragedDensity'
 # and 'Dark_Matter_Density'.
@@ -69,7 +69,7 @@
 
 # We can also save the clump object to disk to read in later so we don't have
 # to spend a lot of time regenerating the clump objects.
-ds.h.save_object(master_clump, 'My_clumps')
+ds.save_object(master_clump, 'My_clumps')
 
 # Later, we can read in the clump object like so,
 master_clump = ds.load_object('My_clumps')

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ b/doc/source/cookbook/free_free_field.py
@@ -73,9 +73,9 @@
 yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
                 combine_function=_combFreeFreeLuminosity, n_ret=1)
 
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
-sphere = pf.sphere(pf.domain_center, (100., "kpc"))
+sphere = ds.sphere(ds.domain_center, (100., "kpc"))
 
 # Print out the total luminosity at 1 keV for the sphere
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/halo_merger_tree.py
--- a/doc/source/cookbook/halo_merger_tree.py
+++ b/doc/source/cookbook/halo_merger_tree.py
@@ -28,9 +28,9 @@
 # DEPENDING ON THE SIZE OF YOUR FILES, THIS CAN BE A LONG STEP 
 # but because we're writing them out to disk, you only have to do this once.
 # ------------------------------------------------------------
-for pf in ts:
-    halo_list = FOFHaloFinder(pf)
-    i = int(pf.basename[2:])
+for ds in ts:
+    halo_list = FOFHaloFinder(ds)
+    i = int(ds.basename[2:])
     halo_list.write_out("FOF/groups_%05i.txt" % i)
     halo_list.write_particle_lists("FOF/particles_%05i" % i)
 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/multi_plot_3x2_FRB.py
--- a/doc/source/cookbook/multi_plot_3x2_FRB.py
+++ b/doc/source/cookbook/multi_plot_3x2_FRB.py
@@ -4,7 +4,7 @@
 import matplotlib.colorbar as cb
 from matplotlib.colors import LogNorm
 
-fn = "Enzo_64/RD0006/RedshiftOutput0006" # parameter file to load
+fn = "Enzo_64/RD0006/RedshiftOutput0006" # dataset to load
 
 # load data and get center value and center location as maximum density location
 ds = yt.load(fn) 

diff -r b58796ca19881fca5b1244e13152816ad2164200 -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc doc/source/cookbook/multi_plot_slice_and_proj.py
--- a/doc/source/cookbook/multi_plot_slice_and_proj.py
+++ b/doc/source/cookbook/multi_plot_slice_and_proj.py
@@ -4,7 +4,7 @@
 import matplotlib.colorbar as cb
 from matplotlib.colors import LogNorm
 
-fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # parameter file to load
+fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # dataset to load
 orient = 'horizontal'
 
 ds = yt.load(fn) # load data

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/f459878f99dd/
Changeset:   f459878f99dd
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 02:30:26
Summary:     Fixing pf->ds instances that have been added since the last merge.
Affected #:  16 files

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 tests/object_field_values.py
--- a/tests/object_field_values.py
+++ b/tests/object_field_values.py
@@ -49,7 +49,7 @@
 
 @register_object
 def all_data(self):
-    self.data_object = self.ds.h.all_data()
+    self.data_object = self.ds.all_data()
 
 _new_known_objects = {}
 for field in ["Density"]:  # field_list:

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 tests/volume_rendering.py
--- a/tests/volume_rendering.py
+++ b/tests/volume_rendering.py
@@ -21,7 +21,7 @@
         n_contours = 5
         cmap = 'algae'
         field = 'Density'
-        mi, ma = self.ds.h.all_data().quantities['Extrema'](field)[0]
+        mi, ma = self.ds.all_data().quantities['Extrema'](field)[0]
         mi, ma = na.log10(mi), na.log10(ma)
         contour_width = (ma - mi) / 100.
         L = na.array([1.] * 3)
@@ -29,7 +29,7 @@
         tf.add_layers(n_contours, w=contour_width,
                       col_bounds=(mi * 1.001, ma * 0.999),
                       colormap=cmap, alpha=na.logspace(-1, 0, n_contours))
-        cam = self.ds.h.camera(c, L, W, (N, N), transfer_function=tf,
+        cam = self.ds.camera(c, L, W, (N, N), transfer_function=tf,
             no_ghost=True)
         image = cam.snapshot()
         # image = cam.snapshot('test_rendering_%s.png'%field)

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -1280,7 +1280,7 @@
             except KeyError:
                 field_ex = list(extrema[bin_field])
             if units is not None and bin_field in units:
-                fe = data_source.pf.arr(field_ex, units[bin_field])
+                fe = data_source.ds.arr(field_ex, units[bin_field])
                 fe.convert_to_units(bf_units)
                 field_ex = [fe[0].v, fe[1].v]
             if iterable(field_ex[0]):

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/data_objects/tests/test_points.py
--- a/yt/data_objects/tests/test_points.py
+++ b/yt/data_objects/tests/test_points.py
@@ -6,5 +6,5 @@
     ytcfg["yt","__withintesting"] = "True"
 
 def test_domain_point():
-    pf = fake_random_pf(16, fields = ("density"))
-    p = pf.point(pf.domain_center)
+    ds = fake_random_ds(16, fields = ("density"))
+    p = ds.point(ds.domain_center)

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py
+++ b/yt/fields/field_info_container.py
@@ -71,7 +71,7 @@
             units = self.ds.field_units.get((ptype, f), units)
             if (f in aliases or ptype not in self.ds.particle_types_raw) and \
                 units not in skip_output_units:
-                u = Unit(units, registry = self.pf.unit_registry)
+                u = Unit(units, registry = self.ds.unit_registry)
                 output_units = str(u.get_cgs_equivalent())
             else:
                 output_units = units
@@ -253,66 +253,6 @@
             display_name = self[original_name].display_name,
             units = units)
 
-    def add_grad(self, field, **kwargs):
-        """
-        Creates the partial derivative of a given field. This function will
-        autogenerate the names of the gradient fields.
-
-        """
-        sl = slice(2,None,None)
-        sr = slice(None,-2,None)
-
-        def _gradx(f, data):
-            grad = data[field][sl,1:-1,1:-1] - data[field][sr,1:-1,1:-1]
-            grad /= 2.0*data["dx"].flat[0]*data.ds.units["cm"]
-            g = np.zeros(data[field].shape, dtype='float64')
-            g[1:-1,1:-1,1:-1] = grad
-            return g
-
-        def _grady(f, data):
-            grad = data[field][1:-1,sl,1:-1] - data[field][1:-1,sr,1:-1]
-            grad /= 2.0*data["dy"].flat[0]*data.ds.units["cm"]
-            g = np.zeros(data[field].shape, dtype='float64')
-            g[1:-1,1:-1,1:-1] = grad
-            return g
-
-        def _gradz(f, data):
-            grad = data[field][1:-1,1:-1,sl] - data[field][1:-1,1:-1,sr]
-            grad /= 2.0*data["dz"].flat[0]*data.ds.units["cm"]
-            g = np.zeros(data[field].shape, dtype='float64')
-            g[1:-1,1:-1,1:-1] = grad
-            return g
-
-        d_kwargs = kwargs.copy()
-        if "display_name" in kwargs: del d_kwargs["display_name"]
-
-        for ax in "xyz":
-            if "display_name" in kwargs:
-                disp_name = r"%s\_%s" % (kwargs["display_name"], ax)
-            else:
-                disp_name = r"\partial %s/\partial %s" % (field, ax)
-            name = "Grad_%s_%s" % (field, ax)
-            self[name] = DerivedField(name, function=eval('_grad%s' % ax),
-                         take_log=False, validators=[ValidateSpatial(1,[field])],
-                         display_name = disp_name, **d_kwargs)
-
-        def _grad(f, data) :
-            a = np.power(data["Grad_%s_x" % field],2)
-            b = np.power(data["Grad_%s_y" % field],2)
-            c = np.power(data["Grad_%s_z" % field],2)
-            norm = np.sqrt(a+b+c)
-            return norm
-
-        if "display_name" in kwargs:
-            disp_name = kwargs["display_name"]
-        else:
-            disp_name = r"\Vert\nabla %s\Vert" % (field)
-        name = "Grad_%s" % field
-        self[name] = DerivedField(name, function=_grad, take_log=False,
-                                  display_name = disp_name, **d_kwargs)
-        mylog.info("Added new fields: Grad_%s_x, Grad_%s_y, Grad_%s_z, Grad_%s" \
-                   % (field, field, field, field))
-
     def has_key(self, key):
         # This gets used a lot
         if key in self: return True

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/frontends/chombo/tests/test_outputs.py
--- a/yt/frontends/chombo/tests/test_outputs.py
+++ b/yt/frontends/chombo/tests/test_outputs.py
@@ -45,10 +45,10 @@
 _zp_fields = ("rhs", "phi", "gravitational_field_x",
               "gravitational_field_y")
 zp = "ZeldovichPancake/plt32.2d.hdf5"
-@requires_pf(zp)
+@requires_ds(zp)
 def test_zp():
-    pf = data_dir_load(zp)
-    yield assert_equal, str(pf), "plt32.2d.hdf5"
+    ds = data_dir_load(zp)
+    yield assert_equal, str(ds), "plt32.2d.hdf5"
     for test in small_patch_amr(zp, _zp_fields, input_center="c", input_weight="rhs"):
         test_tb.__name__ = test.description
         yield test

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/frontends/sdf/io.py
--- a/yt/frontends/sdf/io.py
+++ b/yt/frontends/sdf/io.py
@@ -148,15 +148,15 @@
                 if mask is None: continue
                 for field in field_list:
                     if field == "mass":
-                        if self.pf.field_info._mass_field is None:
+                        if self.ds.field_info._mass_field is None:
                             pm = 1.0
-                            if 'particle_mass' in self.pf.parameters:
-                                pm = self.pf.parameters['particle_mass']
+                            if 'particle_mass' in self.ds.parameters:
+                                pm = self.ds.parameters['particle_mass']
                             else:
                                 raise RuntimeError
                             data = pm * np.ones(mask.sum(), dtype="float64")
                         else:
-                            data = self._handle[self.pf.field_info._mass_field][mask]
+                            data = self._handle[self.ds.field_info._mass_field][mask]
                     else:
                         data = self._handle[field][mask]
                     yield (ptype, field), data
@@ -170,24 +170,24 @@
 
 
     def _read_particle_coords(self, chunks, ptf):
-        dle = self.pf.domain_left_edge.in_units("code_length").d
-        dre = self.pf.domain_right_edge.in_units("code_length").d
-        for dd in self.pf.midx.iter_bbox_data(
+        dle = self.ds.domain_left_edge.in_units("code_length").d
+        dre = self.ds.domain_right_edge.in_units("code_length").d
+        for dd in self.ds.midx.iter_bbox_data(
             dle, dre,
             ['x','y','z']):
             yield "dark_matter", (
                 dd['x'], dd['y'], dd['z'])
 
     def _read_particle_fields(self, chunks, ptf, selector):
-        dle = self.pf.domain_left_edge.in_units("code_length").d
-        dre = self.pf.domain_right_edge.in_units("code_length").d
+        dle = self.ds.domain_left_edge.in_units("code_length").d
+        dre = self.ds.domain_right_edge.in_units("code_length").d
         required_fields = []
         for ptype, field_list in sorted(ptf.items()):
             for field in field_list:
                 if field == "mass": continue
                 required_fields.append(field)
 
-        for dd in self.pf.midx.iter_bbox_data(
+        for dd in self.ds.midx.iter_bbox_data(
             dle, dre,
             required_fields):
 
@@ -201,16 +201,16 @@
                 for field in field_list:
                     if field == "mass":
                         data = np.ones(mask.sum(), dtype="float64")
-                        data *= self.pf.parameters["particle_mass"]
+                        data *= self.ds.parameters["particle_mass"]
                     else:
                         data = dd[field][mask]
                     yield (ptype, field), data
 
     def _initialize_index(self, data_file, regions):
-        dle = self.pf.domain_left_edge.in_units("code_length").d
-        dre = self.pf.domain_right_edge.in_units("code_length").d
+        dle = self.ds.domain_left_edge.in_units("code_length").d
+        dre = self.ds.domain_right_edge.in_units("code_length").d
         pcount = 0
-        for dd in self.pf.midx.iter_bbox_data(
+        for dd in self.ds.midx.iter_bbox_data(
             dle, dre,
             ['x']):
             pcount += dd['x'].size
@@ -219,7 +219,7 @@
         ind = 0
 
         chunk_id = 0
-        for dd in self.pf.midx.iter_bbox_data(
+        for dd in self.ds.midx.iter_bbox_data(
             dle, dre,
             ['x','y','z']):
             npart = dd['x'].size
@@ -227,30 +227,30 @@
             pos[:,0] = dd['x']
             pos[:,1] = dd['y']
             pos[:,2] = dd['z']
-            if np.any(pos.min(axis=0) < self.pf.domain_left_edge) or \
-               np.any(pos.max(axis=0) > self.pf.domain_right_edge):
+            if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
+               np.any(pos.max(axis=0) > self.ds.domain_right_edge):
                 raise YTDomainOverflow(pos.min(axis=0),
                                        pos.max(axis=0),
-                                       self.pf.domain_left_edge,
-                                       self.pf.domain_right_edge)
+                                       self.ds.domain_left_edge,
+                                       self.ds.domain_right_edge)
             regions.add_data_file(pos, chunk_id)
             morton[ind:ind+npart] = compute_morton(
                 pos[:,0], pos[:,1], pos[:,2],
-                data_file.pf.domain_left_edge,
-                data_file.pf.domain_right_edge)
+                data_file.ds.domain_left_edge,
+                data_file.ds.domain_right_edge)
             ind += npart
         return morton
 
     def _count_particles(self, data_file):
-        dle = self.pf.domain_left_edge.in_units("code_length").d
-        dre = self.pf.domain_right_edge.in_units("code_length").d
-        pcount_estimate = self.pf.midx.get_nparticles_bbox(dle, dre)
+        dle = self.ds.domain_left_edge.in_units("code_length").d
+        dre = self.ds.domain_right_edge.in_units("code_length").d
+        pcount_estimate = self.ds.midx.get_nparticles_bbox(dle, dre)
         if pcount_estimate > 1e9:
             mylog.warning("Filtering %i particles to find total."
                           % pcount_estimate + \
                           " You may want to reconsider your bounding box.")
         pcount = 0
-        for dd in self.pf.midx.iter_bbox_data(
+        for dd in self.ds.midx.iter_bbox_data(
             dle, dre,
             ['x']):
             pcount += dd['x'].size

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py
+++ b/yt/frontends/stream/io.py
@@ -127,7 +127,7 @@
             for obj in chunk.objs:
                 count += selector.count_octs(obj.oct_handler, obj.domain_id)
         for ptype in ptf:
-            psize[ptype] = self.pf.n_ref * count / float(obj.nz)
+            psize[ptype] = self.ds.n_ref * count / float(obj.nz)
         return psize
 
     def _read_particle_fields(self, chunks, ptf, selector):

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/frontends/stream/tests/test_stream_particles.py
--- a/yt/frontends/stream/tests/test_stream_particles.py
+++ b/yt/frontends/stream/tests/test_stream_particles.py
@@ -41,7 +41,7 @@
                     dimensions = grid.ActiveDimensions,
                     number_of_particles = grid.NumberOfParticles)
     
-        for field in amr0.h.field_list :
+        for field in amr0.field_list :
             
             data[field] = grid[field]
             
@@ -90,8 +90,8 @@
     # Now refine this
 
     amr1 = refine_amr(ug1, rc, fo, 3)
-    for field in sorted(ug1.h.field_list):
-        yield assert_equal, (field in amr1.h.field_list), True
+    for field in sorted(ug1.field_list):
+        yield assert_equal, (field in amr1.field_list), True
     
     grid_data = []
     
@@ -103,7 +103,7 @@
                     dimensions = grid.ActiveDimensions,
                     number_of_particles = grid.NumberOfParticles)
 
-        for field in amr1.h.field_list :
+        for field in amr1.field_list :
 
             data[field] = grid[field]
             

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/geometry/grid_geometry_handler.py
--- a/yt/geometry/grid_geometry_handler.py
+++ b/yt/geometry/grid_geometry_handler.py
@@ -207,7 +207,7 @@
         Returns the values [field1, field2,...] of the fields at the given
         (x, y, z) points. Returns a numpy array of field values cross coords
         """
-        coords = YTArray(ensure_numpy_array(coords),'code_length', registry=self.pf.unit_registry)
+        coords = YTArray(ensure_numpy_array(coords),'code_length', registry=self.ds.unit_registry)
         grids = self._find_points(coords[:,0], coords[:,1], coords[:,2])[0]
         fields = ensure_list(fields)
         mark = np.zeros(3, dtype=np.int)

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -607,9 +607,9 @@
             self.center[i] = center[i]
             self.bbox[i][0] = self.center[i] - self.radius
             self.bbox[i][1] = self.center[i] + self.radius
-            if self.bbox[i][0] < dobj.pf.domain_left_edge[i]:
+            if self.bbox[i][0] < dobj.ds.domain_left_edge[i]:
                 self.check_box[i] = False
-            elif self.bbox[i][1] > dobj.pf.domain_right_edge[i]:
+            elif self.bbox[i][1] > dobj.ds.domain_right_edge[i]:
                 self.check_box[i] = False
             else:
                 self.check_box[i] = True

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/geometry/tests/test_particle_octree.py
--- a/yt/geometry/tests/test_particle_octree.py
+++ b/yt/geometry/tests/test_particle_octree.py
@@ -111,14 +111,14 @@
     _attrs = ('icoords', 'fcoords', 'fwidth', 'ires')
     for n_ref in [16, 32, 64, 512, 1024]:
         ds1 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
-        dd1 = ds1.h.all_data()
+        dd1 = ds1.all_data()
         v1 = dict((a, getattr(dd1, a)) for a in _attrs)
         cv1 = dd1["cell_volume"].sum(dtype="float64")
         for over_refine in [1, 2, 3]:
             f = 1 << (3*(over_refine-1))
             ds2 = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref,
                                 over_refine_factor = over_refine)
-            dd2 = ds2.h.all_data()
+            dd2 = ds2.all_data()
             v2 = dict((a, getattr(dd2, a)) for a in _attrs)
             for a in sorted(v1):
                 yield assert_equal, v1[a].size * f, v2[a].size

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/utilities/tests/test_selectors.py
--- a/yt/utilities/tests/test_selectors.py
+++ b/yt/utilities/tests/test_selectors.py
@@ -10,22 +10,22 @@
 
 def test_point_selector():
     # generate fake amr data
-    pf = fake_random_pf(64, nprocs=51)
-    assert(all(pf.periodicity))
+    ds = fake_random_ds(64, nprocs=51)
+    assert(all(ds.periodicity))
 
-    dd = pf.h.all_data()
+    dd = ds.all_data()
     positions = np.array([dd[ax] for ax in 'xyz'])
     delta = 0.5*np.array([dd['d'+ax] for ax in 'xyz'])
-    # ensure cell centers and corners always return one and 
+    # ensure cell centers and corners always return one and
     # only one point object
     for p in positions:
-        data = pf.point(p)
-        assert_equal(data["ones"].shape[0], 1) 
+        data = ds.point(p)
+        assert_equal(data["ones"].shape[0], 1)
     for p in positions - delta:
-        data = pf.point(p)
+        data = ds.point(p)
         assert_equal(data["ones"].shape[0], 1)
     for p in positions + delta:
-        data = pf.point(p)
+        data = ds.point(p)
         assert_equal(data["ones"].shape[0], 1)
  
 def test_sphere_selector():

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/visualization/fixed_resolution.py
--- a/yt/visualization/fixed_resolution.py
+++ b/yt/visualization/fixed_resolution.py
@@ -410,18 +410,18 @@
         if item in self.data: return self.data[item]
         mylog.info("Making a fixed resolutuion buffer of (%s) %d by %d" % \
             (item, self.buff_size[0], self.buff_size[1]))
-        ds = self.data_source
+        dd = self.data_source
         width = self.ds.arr((self.bounds[1] - self.bounds[0],
                              self.bounds[3] - self.bounds[2],
                              self.bounds[5] - self.bounds[4]))
-        buff = off_axis_projection(ds.ds, ds.center, ds.normal_vector,
-                                   width, ds.resolution, item,
-                                   weight=ds.weight_field, volume=ds.volume,
-                                   no_ghost=ds.no_ghost, interpolated=ds.interpolated,
-                                   north_vector=ds.north_vector)
-        units = Unit(ds.pf.field_info[item].units, registry=ds.pf.unit_registry)
-        if ds.weight_field is None:
-            units *= Unit('cm', registry=ds.pf.unit_registry)
+        buff = off_axis_projection(dd.ds, dd.center, dd.normal_vector,
+                                   width, dd.resolution, item,
+                                   weight=dd.weight_field, volume=dd.volume,
+                                   no_ghost=dd.no_ghost, interpolated=dd.interpolated,
+                                   north_vector=dd.north_vector)
+        units = Unit(dd.ds.field_info[item].units, registry=dd.ds.unit_registry)
+        if dd.weight_field is None:
+            units *= Unit('cm', registry=dd.ds.unit_registry)
         ia = ImageArray(buff.swapaxes(0,1), input_units=units, info=self._get_info(item))
         self[item] = ia
         return ia 

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -295,8 +295,8 @@
         self.origin = origin
         if self.data_source.center is not None and oblique == False:
             ax = self.data_source.axis
-            xax = self.pf.coordinates.x_axis[ax]
-            yax = self.pf.coordinates.y_axis[ax]
+            xax = self.ds.coordinates.x_axis[ax]
+            yax = self.ds.coordinates.y_axis[ax]
             center = [self.data_source.center[xax],
                       self.data_source.center[yax]]
             self.set_center(center)

diff -r e2cda56fbfb85bdd2f6ca3b17dd985ccd5a70cfc -r f459878f99dd5197e20157639a943ab773de19a3 yt/visualization/tests/test_profile_plots.py
--- a/yt/visualization/tests/test_profile_plots.py
+++ b/yt/visualization/tests/test_profile_plots.py
@@ -20,7 +20,7 @@
 from yt.data_objects.profiles import create_profile
 from yt.extern.parameterized import\
     parameterized, param
-from yt.testing import fake_random_pf
+from yt.testing import fake_random_ds
 from yt.visualization.profile_plotter import \
     ProfilePlot, PhasePlot
 from yt.visualization.tests.test_plotwindow import \
@@ -33,7 +33,7 @@
         fields = ('density', 'temperature', 'velocity_x', 'velocity_y',
                   'velocity_z')
         units = ('g/cm**3', 'K', 'cm/s', 'cm/s', 'cm/s')
-        test_ds = fake_random_pf(64, fields=fields, units=units)
+        test_ds = fake_random_ds(64, fields=fields, units=units)
         regions = [test_ds.region([0.5]*3, [0.4]*3, [0.6]*3), test_ds.all_data()]
         profiles = []
         phases = []


https://bitbucket.org/yt_analysis/yt/commits/44dd5d0ee1e3/
Changeset:   44dd5d0ee1e3
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 05:11:04
Summary:     Fixing issues discovered via failing tests.
Affected #:  3 files

diff -r f459878f99dd5197e20157639a943ab773de19a3 -r 44dd5d0ee1e38bf657f42a0fcf0caf1e098b1d50 yt/frontends/fits/tests/test_outputs.py
--- a/yt/frontends/fits/tests/test_outputs.py
+++ b/yt/frontends/fits/tests/test_outputs.py
@@ -25,7 +25,7 @@
 m33 = "radio_fits/m33_hi.fits"
 @requires_ds(m33, big_data=True)
 def test_m33():
-    ds = data_dir_load(m33, nan_mask=0.0)
+    ds = data_dir_load(m33, kwargs=dict(nan_mask=0.0))
     yield assert_equal, str(ds), "m33_hi.fits"
     for test in small_patch_amr(m33, _fields):
         test_m33.__name__ = test.description

diff -r f459878f99dd5197e20157639a943ab773de19a3 -r 44dd5d0ee1e38bf657f42a0fcf0caf1e098b1d50 yt/frontends/flash/data_structures.py
--- a/yt/frontends/flash/data_structures.py
+++ b/yt/frontends/flash/data_structures.py
@@ -141,7 +141,7 @@
         # Because we don't care about units, we're going to operate on views.
         gle = self.grid_left_edge.ndarray_view()
         gre = self.grid_right_edge.ndarray_view()
-        geom = self.parameter_file.geometry
+        geom = self.dataset.geometry
         if geom != 'cartesian' and ND < 3:
             if geom == 'spherical' and ND < 2:
                 gle[:,1] = 0.0

diff -r f459878f99dd5197e20157639a943ab773de19a3 -r 44dd5d0ee1e38bf657f42a0fcf0caf1e098b1d50 yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -691,7 +691,7 @@
     else:
         return ftrue
 
-def small_patch_amr(pf_fn, fields, input_center="max", input_weight="density"):
+def small_patch_amr(ds_fn, fields, input_center="max", input_weight="density"):
     if not can_run_ds(ds_fn): return
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
     yield GridHierarchyTest(ds_fn)
@@ -707,7 +707,7 @@
                 yield FieldValuesTest(
                         ds_fn, field, dobj_name)
 
-def big_patch_amr(pf_fn, fields, input_center="max", input_weight="density"):
+def big_patch_amr(pds_fn, fields, input_center="max", input_weight="density"):
     if not can_run_ds(ds_fn): return
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
     yield GridHierarchyTest(ds_fn)


https://bitbucket.org/yt_analysis/yt/commits/8c44dc027e28/
Changeset:   8c44dc027e28
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 05:15:39
Summary:     One more typo fix.
Affected #:  1 file

diff -r 44dd5d0ee1e38bf657f42a0fcf0caf1e098b1d50 -r 8c44dc027e28c8eb333836e816cff3f652a7d04a yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -707,7 +707,7 @@
                 yield FieldValuesTest(
                         ds_fn, field, dobj_name)
 
-def big_patch_amr(pds_fn, fields, input_center="max", input_weight="density"):
+def big_patch_amr(ds_fn, fields, input_center="max", input_weight="density"):
     if not can_run_ds(ds_fn): return
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
     yield GridHierarchyTest(ds_fn)


https://bitbucket.org/yt_analysis/yt/commits/838f9a6efc10/
Changeset:   838f9a6efc10
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 05:51:17
Summary:     Fix a merge error in the answer testing framework.
Affected #:  1 file

diff -r 8c44dc027e28c8eb333836e816cff3f652a7d04a -r 838f9a6efc10d0abbf8da7771d99c1a254005bd5 yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -693,7 +693,7 @@
 
 def small_patch_amr(ds_fn, fields, input_center="max", input_weight="density"):
     if not can_run_ds(ds_fn): return
-    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    dso = [ None, ("sphere", (input_center, (0.1, 'unitary')))]
     yield GridHierarchyTest(ds_fn)
     yield ParentageRelationshipsTest(ds_fn)
     for field in fields:


https://bitbucket.org/yt_analysis/yt/commits/2ae0c7348784/
Changeset:   2ae0c7348784
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 06:07:49
Summary:     Fixing issue in the FITS frontend.
Affected #:  1 file

diff -r 838f9a6efc10d0abbf8da7771d99c1a254005bd5 -r 2ae0c7348784e807c79257bb3ed9751e0a3d1e0b yt/frontends/fits/tests/test_outputs.py
--- a/yt/frontends/fits/tests/test_outputs.py
+++ b/yt/frontends/fits/tests/test_outputs.py
@@ -31,7 +31,7 @@
         test_m33.__name__ = test.description
         yield test
 
-_fields = ("temperature")
+_fields = ("temperature",)
 
 grs = "radio_fits/grs-50-cube.fits"
 @requires_ds(grs)

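The one-character fix above matters because parentheses alone do not
make a tuple in Python: without the trailing comma, _fields is just a
parenthesized string, and iterating over it yields single characters
instead of field names. A minimal illustration (not from the changeset):

    fields_str = ("temperature")     # still a str
    fields_tuple = ("temperature",)  # a one-element tuple

    assert isinstance(fields_str, str)
    assert isinstance(fields_tuple, tuple) and len(fields_tuple) == 1
    assert list(fields_str)[0] == "t"   # iterating the str walks characters
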

https://bitbucket.org/yt_analysis/yt/commits/eaf21df893f7/
Changeset:   eaf21df893f7
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 07:36:20
Summary:     Really resolving a merge error in the fits frontend.
Affected #:  1 file

diff -r 2ae0c7348784e807c79257bb3ed9751e0a3d1e0b -r eaf21df893f7694cd942de55e8fede4584a5ddc8 yt/frontends/fits/tests/test_outputs.py
--- a/yt/frontends/fits/tests/test_outputs.py
+++ b/yt/frontends/fits/tests/test_outputs.py
@@ -22,17 +22,6 @@
 
 _fields_grs = ("temperature",)
 
-m33 = "radio_fits/m33_hi.fits"
- at requires_ds(m33, big_data=True)
-def test_m33():
-    ds = data_dir_load(m33, kwargs=dict(nan_mask=0.0))
-    yield assert_equal, str(ds), "m33_hi.fits"
-    for test in small_patch_amr(m33, _fields):
-        test_m33.__name__ = test.description
-        yield test
-
-_fields = ("temperature",)
-
 grs = "radio_fits/grs-50-cube.fits"
 @requires_ds(grs)
 def test_grs():


https://bitbucket.org/yt_analysis/yt/commits/83b83373c643/
Changeset:   83b83373c643
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 19:17:03
Summary:     One last merge to clear a conflict introduced this morning.
Affected #:  26 files

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 doc/source/cookbook/halo_plotting.py
--- a/doc/source/cookbook/halo_plotting.py
+++ b/doc/source/cookbook/halo_plotting.py
@@ -8,7 +8,7 @@
 halos = yt.load('rockstar_halos/halos_0.0.bin')
 
 # Create the halo catalog from this halo list
-hc = HaloCatalog(halos_pf = halos)
+hc = HaloCatalog(halos_ds=halos)
 hc.load()
 
 # Create a projection with the halos overplot on top

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 doc/source/cookbook/profile_with_variance.py
--- a/doc/source/cookbook/profile_with_variance.py
+++ b/doc/source/cookbook/profile_with_variance.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import matplotlib.pyplot as plt
 import yt
 
@@ -16,15 +13,16 @@
 
 # Create a 1D profile object for profiles over radius
 # and add a velocity profile.
-prof = yt.create_profile(sp, 'radius', 'velocity_magnitude',
+prof = yt.create_profile(sp, 'radius', ('gas', 'velocity_magnitude'),
                          units = {'radius': 'kpc'},
                          extrema = {'radius': ((0.1, 'kpc'), (1000.0, 'kpc'))},
                          weight_field='cell_mass')
 
 # Plot the average velocity magnitude.
-plt.loglog(prof.x, prof['velocity_magnitude'], label='Mean')
+plt.loglog(prof.x, prof['gas', 'velocity_magnitude'], label='Mean')
 # Plot the variance of the velocity magnitude.
-plt.loglog(prof.x, prof['velocity_magnitude_std'], label='Standard Deviation')
+plt.loglog(prof.x, prof.variance['gas', 'velocity_magnitude'],
+           label='Standard Deviation')
 plt.xlabel('r [kpc]')
 plt.ylabel('v [cm/s]')
 plt.legend()

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -320,8 +320,8 @@
 
             # Break periodic ray into non-periodic segments.
             sub_segments = periodic_ray(my_segment['start'], my_segment['end'],
-                                        left=pf.domain_left_edge,
-                                        right=pf.domain_right_edge)
+                                        left=ds.domain_left_edge,
+                                        right=ds.domain_right_edge)
 
             # Prepare data structure for subsegment.
             sub_data = {}
@@ -344,7 +344,7 @@
                 if get_los_velocity:
                     line_of_sight = sub_segment[1] - sub_segment[0]
                     line_of_sight /= ((line_of_sight**2).sum())**0.5
-                    sub_vel = pf.arr([sub_ray['x-velocity'],
+                    sub_vel = ds.arr([sub_ray['x-velocity'],
                                       sub_ray['y-velocity'],
                                       sub_ray['z-velocity']])
                     sub_data['los_velocity'].extend((np.rollaxis(sub_vel, 1) *
@@ -356,7 +356,7 @@
 
             for key in sub_data:
                 if key in "xyz": continue
-                sub_data[key] = pf.arr(sub_data[key]).in_cgs()
+                sub_data[key] = ds.arr(sub_data[key]).in_cgs()
 
             # Get redshift for each lixel.  Assume linear relation between l and z.
             sub_data['dredshift'] = (my_segment['redshift'] - next_redshift) * \

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -45,6 +45,8 @@
     parallel_objects, parallel_root_only, ParallelAnalysisInterface
 from yt.units.unit_object import Unit
 import yt.geometry.particle_deposit as particle_deposit
+from yt.utilities.grid_data_format.writer import write_to_gdf
+from yt.frontends.stream.api import load_uniform_grid
 
 from yt.fields.field_exceptions import \
     NeedsGridType,\
@@ -571,6 +573,10 @@
     def LeftEdge(self):
         return self.left_edge
 
+    @property
+    def RightEdge(self):
+        return self.right_edge
+
     def deposit(self, positions, fields = None, method = None):
         cls = getattr(particle_deposit, "deposit_%s" % method, None)
         if cls is None:
@@ -581,6 +587,48 @@
         vals = op.finalize()
         return vals.reshape(self.ActiveDimensions, order="C")
 
+    def write_to_gdf(self, gdf_path, fields, nprocs=1, field_units=None,
+                     **kwargs):
+        r"""
+        Write the covering grid data to a GDF file.
+
+        Parameters
+        ----------
+        gdf_path : string
+            Pathname of the GDF file to write.
+        fields : list of strings
+            Fields to write to the GDF file.
+        nprocs : integer, optional
+            Split the covering grid into *nprocs* subgrids before
+            writing to the GDF file. Default: 1
+        field_units : dictionary, optional
+            Dictionary of units to convert fields to. If not set, fields are
+            in their default units.
+        All remaining keyword arguments are passed to
+        yt.utilities.grid_data_format.writer.write_to_gdf.
+
+        Examples
+        --------
+        >>> cube.write_to_gdf("clumps.h5", ["density","temperature"], nprocs=16,
+        ...                   clobber=True)
+        """
+        data = {}
+        for field in fields:
+            if field in field_units:
+                units = field_units[field]
+            else:
+                units = str(self[field].units)
+            data[field] = (self[field].in_units(units).v, units)
+        le = self.left_edge.v
+        re = self.right_edge.v
+        bbox = np.array([[l,r] for l,r in zip(le, re)])
+        ds = load_uniform_grid(data, self.ActiveDimensions, bbox=bbox,
+                               length_unit=self.ds.length_unit,
+                               time_unit=self.ds.time_unit,
+                               mass_unit=self.ds.mass_unit, nprocs=nprocs,
+                               sim_time=self.ds.current_time.v)
+        write_to_gdf(ds, gdf_path, **kwargs)
+
 class YTArbitraryGridBase(YTCoveringGridBase):
     """A 3D region with arbitrary bounds and dimensions.
 

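The new method is intended to be called on a covering grid extracted
from a loaded dataset; a sketch of such a call, where the dataset path,
fields, and units are placeholders rather than anything from the
changeset:

    import yt

    ds = yt.load("DD0010/moving7_0010")        # placeholder dataset
    cube = ds.covering_grid(level=2,
                            left_edge=ds.domain_left_edge,
                            dims=[128, 128, 128])
    # Write two fields to a GDF file split into 16 subgrids;
    # clobber=True permits overwriting an existing file.
    cube.write_to_gdf("clumps.h5", ["density", "temperature"],
                      field_units={"density": "g/cm**3",
                                   "temperature": "K"},
                      nprocs=16, clobber=True)
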
diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -727,9 +727,9 @@
 
 class YTSelectionContainer0D(YTSelectionContainer):
     _spatial = False
-    def __init__(self, pf, field_parameters):
+    def __init__(self, ds, field_parameters):
         super(YTSelectionContainer0D, self).__init__(
-            pf, field_parameters)
+            ds, field_parameters)
 
 class YTSelectionContainer1D(YTSelectionContainer):
     _spatial = False

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/particle_filters.py
--- a/yt/data_objects/particle_filters.py
+++ b/yt/data_objects/particle_filters.py
@@ -72,6 +72,8 @@
     def wrap_func(self, field_name, old_fi):
         new_fi = copy.copy(old_fi)
         new_fi.name = (self.filtered_type, field_name[1])
+        if old_fi._function == NullFunc:
+            new_fi._function = TranslationFunc(old_fi.name)
         return new_fi
 
 def add_particle_filter(name, function, requires = None, filtered_type = "all"):

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -759,6 +759,8 @@
         self.data_source = data_source
         self.ds = data_source.ds
         self.field_data = YTFieldData()
+        if weight_field is not None:
+            self.variance = YTFieldData()
         self.weight_field = weight_field
         self.field_units = {}
         ParallelAnalysisInterface.__init__(self, comm=data_source.comm)
@@ -805,20 +807,77 @@
     def _finalize_storage(self, fields, temp_storage):
         # We use our main comm here
         # This also will fill _field_data
-        temp_storage.values = self.comm.mpi_allreduce(temp_storage.values, op="sum", dtype="float64")
-        temp_storage.weight_values = self.comm.mpi_allreduce(temp_storage.weight_values, op="sum", dtype="float64")
-        temp_storage.used = self.comm.mpi_allreduce(temp_storage.used, op="sum", dtype="bool")
-        blank = ~temp_storage.used
-        self.used = temp_storage.used
-        if self.weight_field is not None:
-            # This is unnecessary, but it will suppress division errors.
-            temp_storage.weight_values[blank] = 1e-30
-            temp_storage.values /= temp_storage.weight_values[...,None]
-            self.weight = temp_storage.weight_values[...,None]
-            self.weight[blank] = 0.0
+
+        for i, field in enumerate(fields):
+            # q values are returned as q * weight but we want just q
+            temp_storage.qvalues[..., i][temp_storage.used] /= \
+              temp_storage.weight_values[temp_storage.used]
+
+        # get the profile data from all procs
+        all_store = {self.comm.rank: temp_storage}
+        all_store = self.comm.par_combine_object(all_store,
+                                                 "join", datatype="dict")
+
+        all_val = np.zeros_like(temp_storage.values)
+        all_mean = np.zeros_like(temp_storage.mvalues)
+        all_var = np.zeros_like(temp_storage.qvalues)
+        all_weight = np.zeros_like(temp_storage.weight_values)
+        all_used = np.zeros_like(temp_storage.used, dtype="bool")
+
+        # Combine the weighted mean and variance from each processor.
+        # For two samples with total weight, mean, and variance 
+        # given by w, m, and s, their combined mean and variance are:
+        # m12 = (m1 * w1 + m2 * w2) / (w1 + w2)
+        # s12 = (w1 * (s1**2 + (m1 - m12)**2) +
+        #        w2 * (s2**2 + (m2 - m12)**2)) / (w1 + w2)
+        # Here, the mvalues are m and the qvalues are s**2.
+        for p in sorted(all_store.keys()):
+            all_used += all_store[p].used
+            old_mean = all_mean.copy()
+            old_weight = all_weight.copy()
+            all_weight[all_store[p].used] += \
+              all_store[p].weight_values[all_store[p].used]
+            for i, field in enumerate(fields):
+                all_val[..., i][all_store[p].used] += \
+                  all_store[p].values[..., i][all_store[p].used]
+
+                all_mean[..., i][all_store[p].used] = \
+                  (all_mean[..., i] * old_weight +
+                   all_store[p].mvalues[..., i] *
+                   all_store[p].weight_values)[all_store[p].used] / \
+                   all_weight[all_store[p].used]
+
+                all_var[..., i][all_store[p].used] = \
+                  (old_weight * (all_var[..., i] +
+                                 (old_mean[..., i] - all_mean[..., i])**2) +
+                   all_store[p].weight_values *
+                   (all_store[p].qvalues[..., i] + 
+                    (all_store[p].mvalues[..., i] -
+                     all_mean[..., i])**2))[all_store[p].used] / \
+                    all_weight[all_store[p].used]
+
+        all_var = np.sqrt(all_var)
+        del all_store
+        self.used = all_used
+        blank = ~all_used
+
+        self.weight = all_weight
+        self.weight[blank] = 0.0
+            
         self.field_map = {}
         for i, field in enumerate(fields):
-            self.field_data[field] = array_like_field(self.data_source, temp_storage.values[...,i], field)
+            if self.weight_field is None:
+                self.field_data[field] = \
+                  array_like_field(self.data_source, 
+                                   all_val[...,i], field)
+            else:
+                self.field_data[field] = \
+                  array_like_field(self.data_source, 
+                                   all_mean[...,i], field)
+                self.variance[field] = \
+                  array_like_field(self.data_source,
+                                   all_var[...,i], field)
+                self.variance[field][blank] = 0.0
             self.field_data[field][blank] = 0.0
             self.field_units[field] = self.field_data[field].units
             if isinstance(field, tuple):
@@ -1305,13 +1364,27 @@
             if not acc: continue
             temp = obj.field_data[field]
             temp = np.rollaxis(temp, axis)
+            if weight_field is not None:
+                temp_weight = obj.weight
+                temp_weight = np.rollaxis(temp_weight, axis)
             if acc < 0:
                 temp = temp[::-1]
-            temp = temp.cumsum(axis=0)
+                if weight_field is not None:
+                    temp_weight = temp_weight[::-1]
+            if weight_field is None:
+                temp = temp.cumsum(axis=0)
+            else:
+                temp = (temp * temp_weight).cumsum(axis=0) / \
+                  temp_weight.cumsum(axis=0)
             if acc < 0:
                 temp = temp[::-1]
+                if weight_field is not None:
+                    temp_weight = temp_weight[::-1]
             temp = np.rollaxis(temp, axis)
             obj.field_data[field] = temp
+            if weight_field is not None:
+                temp_weight = np.rollaxis(temp_weight, axis)
+                obj.weight = temp_weight
     if units is not None:
         for field, unit in units.iteritems():
             field = data_source._determine_fields(field)[0]

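The combination rule in the comment above is the standard one for
pooling weighted samples, and it can be checked against a direct
calculation on the concatenated data. A small NumPy sketch, independent
of yt and purely illustrative:

    import numpy as np

    def combine(w1, m1, s1, w2, m2, s2):
        # Pool two weighted samples: weighted mean, then the weighted
        # (population-style) variance about the pooled mean.
        # Here s1 and s2 are variances (the "s**2" of the comment).
        m12 = (m1 * w1 + m2 * w2) / (w1 + w2)
        s12 = (w1 * (s1 + (m1 - m12) ** 2) +
               w2 * (s2 + (m2 - m12) ** 2)) / (w1 + w2)
        return m12, s12

    a, b = np.random.random(100), np.random.random(50)
    m12, s12 = combine(a.size, a.mean(), a.var(),
                       b.size, b.mean(), b.var())
    both = np.concatenate([a, b])
    assert np.allclose([m12, s12], [both.mean(), both.var()])
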
diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -50,14 +50,14 @@
 
     Examples
     --------
-    >>> pf = load("DD0010/moving7_0010")
+    >>> ds = load("DD0010/moving7_0010")
     >>> c = [0.5,0.5,0.5]
-    >>> point = pf.point(c)
+    >>> point = ds.point(c)
     """
     _type_name = "point"
     _con_args = ('p',)
-    def __init__(self, p, pf = None, field_parameters = None):
-        super(YTPointBase, self).__init__(pf, field_parameters)
+    def __init__(self, p, ds = None, field_parameters = None):
+        super(YTPointBase, self).__init__(ds, field_parameters)
         self.p = p
 
 class YTOrthoRayBase(YTSelectionContainer1D):

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -431,7 +431,9 @@
         if available:
             self.particle_types += (filter.name,)
             self.filtered_particle_types.append(filter.name)
-            self._setup_particle_types([filter.name])
+            new_fields = self._setup_particle_types([filter.name])
+            deps, _ = self.field_info.check_derived_fields(new_fields)
+            self.field_dependencies.update(deps)
         return available
 
     def _setup_particle_types(self, ptypes = None):

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py
+++ b/yt/fields/field_info_container.py
@@ -75,11 +75,11 @@
                 output_units = str(u.get_cgs_equivalent())
             else:
                 output_units = units
+            if (ptype, f) not in self.field_list:
+                continue
             self.add_output_field((ptype, f),
                 units = units, particle_type = True, display_name = dn,
                 output_units = output_units)
-            if (ptype, f) not in self.field_list:
-                continue
             for alias in aliases:
                 self.alias((ptype, alias), (ptype, f), units = output_units)
 

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py
+++ b/yt/frontends/athena/data_structures.py
@@ -362,6 +362,7 @@
         if storage_filename is None:
             storage_filename = '%s.yt' % filename.split('/')[-1]
         self.storage_filename = storage_filename
+        self.backup_filename = self.filename[:-4] + "_backup.gdf"
         # Unfortunately we now have to mandate that the index gets 
         # instantiated so that we can make sure we have the correct left 
         # and right domain edges.

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -63,7 +63,7 @@
                 self.dds[2] = 1.0
         self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = \
             self.dds
-
+        self.dds = self.ds.arr(self.dds, "code_length")
 
 class GDFHierarchy(GridIndex):
 
@@ -185,17 +185,26 @@
             elif 'field_units' in current_field.attrs:
                 field_units = current_field.attrs['field_units']
                 if isinstance(field_units, types.StringTypes):
-                    current_fields_unit = current_field.attrs['field_units']
+                    current_field_units = current_field.attrs['field_units']
                 else:
-                    current_fields_unit = \
+                    current_field_units = \
                         just_one(current_field.attrs['field_units'])
                 self.field_units[field_name] = current_field_units
             else:
-                current_fields_unit = ""
+                self.field_units[field_name] = ""
+
+        if "dataset_units" in h5f:
+            for unit_name in h5f["/dataset_units"]:
+                current_unit = h5f["/dataset_units/%s" % unit_name]
+                value = current_unit.value
+                unit = current_unit.attrs["unit"]
+                setattr(self, unit_name, self.quan(value,unit))
+        else:
+            self.length_unit = self.quan(1.0, "cm")
+            self.mass_unit = self.quan(1.0, "g")
+            self.time_unit = self.quan(1.0, "s")
+
         h5f.close()
-        self.length_unit = self.quan(1.0, "cm")
-        self.mass_unit = self.quan(1.0, "g")
-        self.time_unit = self.quan(1.0, "s")
 
     def _parse_parameter_file(self):
         self._handle = h5py.File(self.parameter_filename, "r")

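The reader above looks for an optional /dataset_units group whose
members are scalar datasets carrying a "unit" attribute, and falls back
to CGS defaults when the group is absent. A sketch of writing such a
group by hand with h5py (file name and values are illustrative):

    import h5py

    units = {"length_unit": (1.0, "Mpc"), "time_unit": (1.0, "Myr")}
    with h5py.File("example_gdf.h5", "a") as f:
        g = f.create_group("dataset_units")
        for name, (value, unit) in units.items():
            d = g.create_dataset(name, data=value)
            d.attrs["unit"] = unit
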
diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/gdf/fields.py
--- a/yt/frontends/gdf/fields.py
+++ b/yt/frontends/gdf/fields.py
@@ -25,8 +25,9 @@
 class GDFFieldInfo(FieldInfoContainer):
     known_other_fields = (
         ("density", ("g/cm**3", ["density"], None)),
-        ("specific_energy", ("erg / g", ["thermal_energy"], None)),
-        ("pressure", ("", ["pressure"], None)),
+        ("specific_energy", ("erg/g", ["thermal_energy"], None)),
+        ("pressure", ("erg/cm**3", ["pressure"], None)),
+        ("temperature", ("K", ["temperature"], None)),
         ("velocity_x", ("cm/s", ["velocity_x"], None)),
         ("velocity_y", ("cm/s", ["velocity_y"], None)),
         ("velocity_z", ("cm/s", ["velocity_z"], None)),

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/sdf/data_structures.py
--- a/yt/frontends/sdf/data_structures.py
+++ b/yt/frontends/sdf/data_structures.py
@@ -191,7 +191,6 @@
     @classmethod
     def _is_valid(cls, *args, **kwargs):
         sdf_header = kwargs.get('sdf_header', args[0])
-        print 'Parsing sdf_header: %s' % sdf_header
         if sdf_header.startswith("http"):
             if requests is None: return False
             hreq = requests.get(sdf_header, stream=True)

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/sdf/fields.py
--- a/yt/frontends/sdf/fields.py
+++ b/yt/frontends/sdf/fields.py
@@ -38,7 +38,7 @@
     known_particle_fields = ()
     _mass_field = None
 
-    def __init__(self, pf, field_list):
+    def __init__(self, ds, field_list):
 
         if 'mass' in field_list:
             self.known_particle_fields.append(("mass", "code_mass",
@@ -46,17 +46,17 @@
         possible_masses = ['mass', 'm200b', 'mvir']
         mnf = 'mass'
         for mn in possible_masses:
-            if mn in pf.sdf_container.keys():
+            if mn in ds.sdf_container.keys():
                 mnf = self._mass_field = mn
                 break
 
-        idf = pf._field_map.get("particle_index", 'ident')
-        xf = pf._field_map.get("particle_position_x", 'x')
-        yf = pf._field_map.get("particle_position_y", 'y')
-        zf = pf._field_map.get("particle_position_z", 'z')
-        vxf = pf._field_map.get("particle_velocity_x", 'vx')
-        vyf = pf._field_map.get("particle_velocity_z", 'vy')
-        vzf = pf._field_map.get("particle_velocity_z", 'vz')
+        idf = ds._field_map.get("particle_index", 'ident')
+        xf = ds._field_map.get("particle_position_x", 'x')
+        yf = ds._field_map.get("particle_position_y", 'y')
+        zf = ds._field_map.get("particle_position_z", 'z')
+        vxf = ds._field_map.get("particle_velocity_x", 'vx')
+        vyf = ds._field_map.get("particle_velocity_z", 'vy')
+        vzf = ds._field_map.get("particle_velocity_z", 'vz')
 
         self.known_particle_fields = (
             (idf, ('dimensionless', ['particle_index'], None)),
@@ -68,6 +68,6 @@
             (vzf, ('code_velocity', ['particle_velocity_z'], None)),
             (mnf, ('code_mass', ['particle_mass'], None)),
         )
-        super(SDFFieldInfo, self).__init__(pf, field_list)
+        super(SDFFieldInfo, self).__init__(ds, field_list)
 
 

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py
+++ b/yt/frontends/stream/data_structures.py
@@ -337,8 +337,8 @@
 
     def _set_code_unit_attributes(self):
         base_units = self.stream_handler.code_units
-        attrs = ('length_unit', 'mass_unit', 'time_unit', 'velocity_unit')
-        cgs_units = ('cm', 'g', 's', 'cm/s')
+        attrs = ('length_unit', 'mass_unit', 'time_unit', 'velocity_unit', 'magnetic_unit')
+        cgs_units = ('cm', 'g', 's', 'cm/s', 'gauss')
         for unit, attr, cgs_unit in zip(base_units, attrs, cgs_units):
             if isinstance(unit, basestring):
                 uq = self.quan(1.0, unit)
@@ -512,7 +512,8 @@
 
 def load_uniform_grid(data, domain_dimensions, length_unit=None, bbox=None,
                       nprocs=1, sim_time=0.0, mass_unit=None, time_unit=None,
-                      velocity_unit=None, periodicity=(True, True, True),
+                      velocity_unit=None, magnetic_unit=None,
+                      periodicity=(True, True, True),
                       geometry = "cartesian"):
     r"""Load a uniform grid of data into yt as a
     :class:`~yt.frontends.stream.data_structures.StreamHandler`.
@@ -551,6 +552,8 @@
         Unit to use for times.  Defaults to unitless.
     velocity_unit : string
         Unit to use for velocities.  Defaults to unitless.
+    magnetic_unit : string
+        Unit to use for magnetic fields. Defaults to unitless.
     periodicity : tuple of booleans
         Determines whether the data will be treated as periodic along
         each axis
@@ -640,6 +643,8 @@
         time_unit = 'code_time'
     if velocity_unit is None:
         velocity_unit = 'code_velocity'
+    if magnetic_unit is None:
+        magnetic_unit = 'code_magnetic'
 
     handler = StreamHandler(
         grid_left_edges,
@@ -651,7 +656,7 @@
         np.zeros(nprocs).reshape((nprocs,1)),
         sfh,
         field_units,
-        (length_unit, mass_unit, time_unit, velocity_unit),
+        (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit),
         particle_types=particle_types,
         periodicity=periodicity
     )
@@ -685,7 +690,8 @@
 def load_amr_grids(grid_data, domain_dimensions,
                    field_units=None, bbox=None, sim_time=0.0, length_unit=None,
                    mass_unit=None, time_unit=None, velocity_unit=None,
-                   periodicity=(True, True, True), geometry = "cartesian"):
+                   magnetic_unit=None, periodicity=(True, True, True),
+                   geometry = "cartesian"):
     r"""Load a set of grids of data into yt as a
     :class:`~yt.frontends.stream.data_structures.StreamHandler`.
     This should allow a sequence of grids of varying resolution of data to be
@@ -723,6 +729,8 @@
         Unit to use for times.  Defaults to unitless.
     velocity_unit : string or float
         Unit to use for velocities.  Defaults to unitless.
+    magnetic_unit : string or float
+        Unit to use for magnetic fields.  Defaults to unitless.
     bbox : array_like (xdim:zdim, LE:RE), optional
         Size of computational domain in units specified by length_unit.
         Defaults to a cubic unit-length domain.
@@ -805,6 +813,8 @@
         time_unit = 'code_time'
     if velocity_unit is None:
         velocity_unit = 'code_velocity'
+    if magnetic_unit is None:
+        magnetic_unit = 'code_magnetic'
 
     handler = StreamHandler(
         grid_left_edges,
@@ -816,7 +826,7 @@
         np.zeros(ngrids).reshape((ngrids,1)),
         sfh,
         field_units,
-        (length_unit, mass_unit, time_unit, velocity_unit),
+        (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit),
         particle_types=set_particle_types(grid_data[0])
     )
 
@@ -968,7 +978,8 @@
 
 def load_particles(data, length_unit = None, bbox=None,
                    sim_time=0.0, mass_unit = None, time_unit = None,
-                   velocity_unit=None, periodicity=(True, True, True),
+                   velocity_unit=None, magnetic_unit=None,
+                   periodicity=(True, True, True),
                    n_ref = 64, over_refine_factor = 1, geometry = "cartesian"):
     r"""Load a set of particles into yt as a
     :class:`~yt.frontends.stream.data_structures.StreamParticleHandler`.
@@ -996,6 +1007,10 @@
         Conversion factor from simulation mass units to grams
     time_unit : float
         Conversion factor from simulation time units to seconds
+    velocity_unit : float
+        Conversion factor from simulation velocity units to cm/s
+    magnetic_unit : float
+        Conversion factor from simulation magnetic units to gauss
     bbox : array_like (xdim:zdim, LE:RE), optional
         Size of computational domain in units sim_unit_to_cm
     sim_time : float, optional
@@ -1056,6 +1071,8 @@
         time_unit = 'code_time'
     if velocity_unit is None:
         velocity_unit = 'code_velocity'
+    if magnetic_unit is None:
+        magnetic_unit = 'code_magnetic'
 
     # I'm not sure we need any of this.
     handler = StreamHandler(
@@ -1068,7 +1085,7 @@
         np.zeros(nprocs).reshape((nprocs,1)),
         sfh,
         field_units,
-        (length_unit, mass_unit, time_unit, velocity_unit),
+        (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit),
         particle_types=particle_types,
         periodicity=periodicity
     )
@@ -1140,7 +1157,8 @@
 def load_hexahedral_mesh(data, connectivity, coordinates,
                          length_unit = None, bbox=None, sim_time=0.0,
                          mass_unit = None, time_unit = None,
-                         velocity_unit = None, periodicity=(True, True, True),
+                         velocity_unit = None, magnetic_unit = None,
+                         periodicity=(True, True, True),
                          geometry = "cartesian"):
     r"""Load a hexahedral mesh of data into yt as a
     :class:`~yt.frontends.stream.data_structures.StreamHandler`.
@@ -1209,6 +1227,8 @@
         time_unit = 'code_time'
     if velocity_unit is None:
         velocity_unit = 'code_velocity'
+    if magnetic_unit is None:
+        magnetic_unit = 'code_magnetic'
 
     # I'm not sure we need any of this.
     handler = StreamHandler(
@@ -1221,7 +1241,7 @@
         np.zeros(nprocs).reshape((nprocs,1)),
         sfh,
         field_units,
-        (length_unit, mass_unit, time_unit, velocity_unit),
+        (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit),
         particle_types=particle_types,
         periodicity=periodicity
     )

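With these changes every stream loader accepts a magnetic_unit alongside
the other code units. A sketch of loading an in-memory grid that carries
a magnetic field component (field names and values are illustrative):

    import numpy as np
    import yt

    shape = (32, 32, 32)
    data = {"density": (np.ones(shape), "g/cm**3"),
            "magnetic_field_x": (np.zeros(shape), "gauss")}
    ds = yt.load_uniform_grid(data, shape,
                              length_unit="Mpc",
                              magnetic_unit="gauss")
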
diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py
+++ b/yt/utilities/exceptions.py
@@ -409,3 +409,10 @@
             """ % (self.field,)
         r += "\n".join([c for c in self.conditions])
         return r
+
+class YTGDFAlreadyExists(Exception):
+    def __init__(self, filename):
+        self.filename = filename
+
+    def __str__(self):
+        return "A file already exists at %s and clobber=False." % self.filename
\ No newline at end of file

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py
+++ b/yt/utilities/grid_data_format/writer.py
@@ -18,10 +18,11 @@
 import numpy as np
 
 from yt import __version__ as yt_version
-
+from yt.utilities.exceptions import YTGDFAlreadyExists
 
 def write_to_gdf(ds, gdf_path, data_author=None, data_comment=None,
-                 particle_type_name="dark_matter"):
+                 dataset_units=None, particle_type_name="dark_matter",
+                 clobber=False):
     """
     Write a dataset to the given path in the Grid Data Format.
 
@@ -31,11 +32,38 @@
         The yt data to write out.
     gdf_path : string
         The path of the file to output.
+    data_author : string, optional
+        The name of the author who wrote the data. Default: None.
+    data_comment : string, optional
+        A descriptive comment. Default: None.
+    dataset_units : dictionary, optional
+        A dictionary of (value, unit) tuples to set the default units
+        of the dataset. Keys can be:
+            "length_unit"
+            "time_unit"
+            "mass_unit"
+            "velocity_unit"
+            "magnetic_unit"
+        If not specified, these will carry over from the parent
+        dataset.
+    particle_type_name : string, optional
+        The particle type of the particles in the dataset. Default: "dark_matter"
+    clobber : boolean, optional
+        Whether or not to clobber an already existing file. If False, attempting
+        to overwrite an existing file will result in an exception.
 
+    Examples
+    --------
+    >>> dataset_units = {"length_unit":(1.0,"Mpc"),
+    ...                  "time_unit":(1.0,"Myr")}
+    >>> write_to_gdf(ds, "clumps.h5", data_author="John ZuHone",
+    ...              dataset_units=dataset_units,
+    ...              data_comment="My Really Cool Dataset", clobber=True)
     """
 
     f = _create_new_gdf(ds, gdf_path, data_author, data_comment,
-                        particle_type_name)
+                        dataset_units=datasetUnits,
+                        particle_type_name=particle_type_name, clobber=clobber)
 
     # now add the fields one-by-one
     for field_name in ds.field_list:
@@ -102,7 +130,7 @@
 
     # grab the display name and units from the field info container.
     display_name = fi.display_name
-    units = fi.get_units()
+    units = fi.units
 
     # check that they actually contain something...
     if display_name:
@@ -113,8 +141,6 @@
         sg.attrs["field_units"] = units
     else:
         sg.attrs["field_units"] = "None"
-    # @todo: the values must be in CGS already right?
-    sg.attrs["field_to_cgs"] = 1.0
     # @todo: is this always true?
     sg.attrs["staggering"] = 0
 
@@ -134,21 +160,22 @@
         # Check if this is a real field or particle data.
         grid.get_data(field_name)
         if fi.particle_type:  # particle data
-            pt_group[field_name] = grid[field_name]
+            pt_group[field_name] = grid[field_name].in_units(units)
         else:  # a field
-            grid_group[field_name] = grid[field_name]
+            grid_group[field_name] = grid[field_name].in_units(units)
 
 
 def _create_new_gdf(ds, gdf_path, data_author=None, data_comment=None,
-                    particle_type_name="dark_matter"):
+                    dataset_units=None, particle_type_name="dark_matter",
+                    clobber=False):
+
     # Make sure we have the absolute path to the file first
     gdf_path = os.path.abspath(gdf_path)
 
-    # Stupid check -- is the file already there?
-    # @todo: make this a specific exception/error.
-    if os.path.exists(gdf_path):
-        raise IOError("A file already exists in the location: %s. Please \
-                      provide a new one or remove that file." % gdf_path)
+    # Is the file already there? If so, are we allowing
+    # clobbering?
+    if os.path.exists(gdf_path) and not clobber:
+        raise YTGDFAlreadyExists(gdf_path)
 
     ###
     # Create and open the file with h5py
@@ -191,6 +218,21 @@
         g.attrs["omega_lambda"] = ds.omega_lambda
         g.attrs["hubble_constant"] = ds.hubble_constant
 
+    if dataset_units is None:
+        dataset_units = {}
+
+    g = f.create_group("dataset_units")
+    for u in ["length","time","mass","velocity","magnetic"]:
+        unit_name = u+"_unit"
+        if unit_name in dataset_units:
+            value, units = dataset_units[unit_name]
+        else:
+            attr = getattr(pf, unit_name)
+            value = float(attr)
+            units = str(attr.units)
+        d = g.create_dataset(unit_name, data=value)
+        d.attrs["unit"] = units
+
     ###
     # "field_types" group
     ###
@@ -212,7 +254,7 @@
     f["grid_left_index"] = np.array(
         [grid.get_global_startindex() for grid in ds.index.grids]
     ).reshape(ds.index.grid_dimensions.shape[0], 3)
-    f["grid_level"] = ds.index.grid_levels
+    f["grid_level"] = ds.index.grid_levels.flat
     # @todo: Fill with proper values
     f["grid_parent_id"] = -np.ones(ds.index.grid_dimensions.shape[0])
     f["grid_particle_count"] = ds.index.grid_particle_count

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/utilities/lib/image_utilities.pyx
--- a/yt/utilities/lib/image_utilities.pyx
+++ b/yt/utilities/lib/image_utilities.pyx
@@ -43,7 +43,15 @@
         np.ndarray[np.float64_t, ndim=1] px, 
         np.ndarray[np.float64_t, ndim=1] py, 
         np.ndarray[np.float64_t, ndim=2] rgba,
-        ):  
+        ):
+    """
+    Splat rgba points onto an image
+
+    Given an image buffer, add colors to
+    pixels defined by fractional positions px and py,
+    with colors rgba.  px and py are one dimensional
+    arrays, and rgba is an array of rgba values.
+    """
     cdef int i, j, k, pi
     cdef int npart = px.shape[0]
     cdef int xs = buffer.shape[0]
@@ -53,7 +61,7 @@
     for pi in range(npart):
         j = <int> (xs * px[pi])
         i = <int> (ys * py[pi])
-        if i < 0 or j < 0 or i >= xs or j >= ys: 
+        if i < 0 or j < 0 or i >= xs or j >= ys:
             continue
         for k in range(4):
             buffer[i, j, k] += rgba[pi, k]

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/visualization/image_writer.py
--- a/yt/visualization/image_writer.py
+++ b/yt/visualization/image_writer.py
@@ -18,6 +18,7 @@
 
 from yt.funcs import *
 from yt.utilities.exceptions import YTNotInsideNotebook
+from color_maps import mcm
 import _colormap_data as cmd
 import yt.utilities.lib.image_utilities as au
 import yt.utilities.png_writer as pw
@@ -157,7 +158,7 @@
     if transpose:
         bitmap_array = bitmap_array.swapaxes(0,1)
     if filename is not None:
-        pw.write_png(bitmap_array.copy(), filename)
+        pw.write_png(bitmap_array, filename)
     else:
         return pw.write_png_to_string(bitmap_array.copy())
     return bitmap_array
@@ -243,10 +244,18 @@
     return to_plot
 
 def map_to_colors(buff, cmap_name):
-    if cmap_name not in cmd.color_map_luts:
-        print ("Your color map was not found in the extracted colormap file.")
-        raise KeyError(cmap_name)
-    lut = cmd.color_map_luts[cmap_name]
+    try:
+        lut = cmd.color_map_luts[cmap_name]
+    except KeyError:
+        try:
+            cmap = mcm.get_cmap(cmap_name)
+            dummy = cmap(0.0)
+            lut = cmap._lut.T
+        except ValueError:
+            print "Your color map was not found in either the extracted" +\
+                " colormap file or matplotlib colormaps"
+            raise KeyError(cmap_name)
+
     x = np.mgrid[0.0:1.0:lut[0].shape[0]*1j]
     shape = buff.shape
     mapped = np.dstack(

diff -r eaf21df893f7694cd942de55e8fede4584a5ddc8 -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 yt/visualization/tests/test_splat.py
--- /dev/null
+++ b/yt/visualization/tests/test_splat.py
@@ -0,0 +1,58 @@
+"""
+Test for write_bitmap and add_rgba_points
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+import os
+import os.path
+import tempfile
+import shutil
+import numpy as np
+import yt
+from yt.testing import \
+    assert_equal, expand_keywords
+from yt.utilities.lib.api import add_rgba_points_to_image
+
+
+def setup():
+    """Test specific setup."""
+    from yt.config import ytcfg
+    ytcfg["yt", "__withintesting"] = "True"
+
+
+def test_splat():
+    """Tests functionality of off_axis_projection and write_projection."""
+    # Perform I/O in safe place instead of yt main dir
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    N = 16 
+    Np = int(1e2)
+    image = np.zeros([N,N,4])
+    xs = np.random.random(Np)
+    ys = np.random.random(Np)
+
+    cbx = yt.visualization.color_maps.mcm.RdBu
+    cs = cbx(np.random.random(Np))
+    add_rgba_points_to_image(image, xs, ys, cs)
+
+    before_hash = image.copy()
+    fn = 'tmp.png'
+    yt.write_bitmap(image, fn)
+    yield assert_equal, os.path.exists(fn), True
+    os.remove(fn)
+    yield assert_equal, before_hash, image
+
+    os.chdir(curdir)
+    # clean up
+    shutil.rmtree(tmpdir)


https://bitbucket.org/yt_analysis/yt/commits/280814cc8e62/
Changeset:   280814cc8e62
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 19:44:40
Summary:     Clearing one more merge conflict.
Affected #:  1 file

diff -r 83b83373c6430eaf90108492ca3f8cbe70fe7011 -r 280814cc8e62081b99fbd95b72515018519771bb yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py
+++ b/yt/frontends/stream/io.py
@@ -127,7 +127,7 @@
             for obj in chunk.objs:
                 count += selector.count_octs(obj.oct_handler, obj.domain_id)
         for ptype in ptf:
-            psize[ptype] = self.ds.n_ref * count / float(obj.nz)
+            psize[ptype] = self.ds.n_ref * count
         return psize
 
     def _read_particle_fields(self, chunks, ptf, selector):


https://bitbucket.org/yt_analysis/yt/commits/74fd61909c0d/
Changeset:   74fd61909c0d
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 21:02:32
Summary:     Fixing two more test failures.
Affected #:  2 files

diff -r 280814cc8e62081b99fbd95b72515018519771bb -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py
+++ b/yt/utilities/grid_data_format/writer.py
@@ -62,7 +62,7 @@
     """
 
     f = _create_new_gdf(ds, gdf_path, data_author, data_comment,
-                        dataset_units=datasetUnits,
+                        dataset_units=dataset_units,
                         particle_type_name=particle_type_name, clobber=clobber)
 
     # now add the fields one-by-one

diff -r 280814cc8e62081b99fbd95b72515018519771bb -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 yt/visualization/profile_plotter.py
--- a/yt/visualization/profile_plotter.py
+++ b/yt/visualization/profile_plotter.py
@@ -797,7 +797,8 @@
                     positive_values = data[data > 0.0]
                     if len(positive_values) == 0:
                         mylog.warning("Profiled field %s has no positive "
-                                      "values.  Max = %d." % (f, data.max()))
+                                      "values.  Max = %d." %
+                                      (f, np.nanmax(data)))
                         mylog.warning("Switching to linear colorbar scaling.")
                         zmin = data.min()
                         z_scale = 'linear'


https://bitbucket.org/yt_analysis/yt/commits/b04c9b3692db/
Changeset:   b04c9b3692db
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-18 23:11:01
Summary:     Merging with mainline tip.
Affected #:  11 files

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/halo_analysis.rst
--- a/doc/source/analyzing/analysis_modules/halo_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/halo_analysis.rst
@@ -8,6 +8,7 @@
    :maxdepth: 1
 
    halo_catalogs
+   halo_transition
    halo_finding
    halo_mass_function
    halo_analysis_example

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -7,6 +7,8 @@
 together into a single framework. This framework is substantially
 different from the limited framework included in yt-2.x and is only 
 backwards compatible in that output from old halo finders may be loaded.
+For a direct translation of various halo analysis tasks using yt-2.x
+to yt-3.0 please see :ref:`halo_transition`.
 
 A catalog of halos can be created from any initial dataset given to halo 
 catalog through data_ds. These halos can be found using friends-of-friends,

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/halo_finders.rst
--- /dev/null
+++ b/doc/source/analyzing/analysis_modules/halo_finders.rst
@@ -0,0 +1,192 @@
+.. _halo_finding:
+
+Halo Finding
+============
+
+There are four methods of finding particle haloes in yt. The 
+recommended and default method is called HOP, a method described 
+in `Eisenstein and Hut (1998) 
+<http://adsabs.harvard.edu/abs/1998ApJ...498..137E>`_. A basic 
+friends-of-friends (e.g. `Efstathiou et al. (1985) 
+<http://adsabs.harvard.edu/abs/1985ApJS...57..241E>`_) halo 
+finder is also implemented. Finally, Rockstar (`Behroozi et al. 
+(2011) <http://adsabs.harvard.edu/abs/2011arXiv1110.4372B>`_) is 
+a 6D-phase space halo finder developed by Peter Behroozi that 
+excels in finding subhalos and substructure, but does not allow 
+multiple particle masses.
+
+HOP
+---
+
+The version of HOP used in yt is an upgraded version of the 
+`publicly available HOP code 
+<http://cmb.as.arizona.edu/~eisenste/hop/hop.html>`_. Support 
+for 64-bit floats and integers has been added, as well as 
+parallel analysis through spatial decomposition. HOP builds 
+groups in this fashion:
+
+  1. Estimates the local density at each particle using a 
+       smoothing kernel.
+  2. Builds chains of linked particles by 'hopping' from one 
+       particle to its densest neighbor. A particle which is 
+       its own densest neighbor is the end of the chain.
+  3. All chains that share the same densest particle are 
+       grouped together.
+  4. Groups are included, linked together, or discarded 
+       depending on the user-supplied overdensity
+       threshold parameter. The default is 160.0.
+
+Please see the `HOP method paper 
+<http://adsabs.harvard.edu/abs/1998ApJ...498..137E>`_ for 
+full details.
+
+.. warning:: The FoF halo finder in yt is not thoroughly tested! 
+    It is probably fine to use, but you are strongly encouraged 
+    to check your results against the data for errors.
+
+Rockstar Halo Finding
+---------------------
+
+Rockstar uses an adaptive hierarchical refinement of friends-of-friends 
+groups in six phase-space dimensions and one time dimension, which 
+allows for robust (grid-independent, shape-independent, and
+noise-resilient) tracking of substructure. The code is prepackaged with yt, 
+but also `separately available <http://code.google.com/p/rockstar>`_. The lead 
+developer is Peter Behroozi, and the methods are described in `Behroozi
+et al. 2011 <http://rockstar.googlecode.com/files/rockstar_ap101911.pdf>`_. 
+
+.. note:: At the moment, Rockstar does not support multiple particle masses, 
+  instead using a fixed particle mass. This will not affect most dark matter 
+  simulations, but does make it less useful for finding halos from the stellar
+  mass. In simulations where the highest-resolution particles all have the 
+  same mass (i.e., zoom-in, grid-based simulations), one can set up a particle
+  filter to select the lowest mass particles and perform the halo finding
+  only on those.
+
+To run the Rockstar Halo finding, you must launch python with MPI and 
+parallelization enabled. While Rockstar itself does not require MPI to run, 
+the MPI libraries allow yt to distribute particle information across multiple 
+nodes.
+
+.. warning:: At the moment, running Rockstar inside of yt on multiple compute nodes
+   connected by an Infiniband network can be problematic. Therefore, for now
+   we recommend forcing the use of the non-Infiniband network (e.g. Ethernet)
+   using this flag: ``--mca btl ^openib``.
+   For example, here is how Rockstar might be called using 24 cores:
+   ``mpirun -n 24 --mca btl ^openib python ./run_rockstar.py --parallel``.
+
+The script above configures the Halo finder, launches a server process which 
+disseminates run information and coordinates writer-reader processes. 
+Afterwards, it launches reader and writer tasks, filling the available MPI 
+slots, which alternately read particle information and analyze for halo 
+content.
+
+The RockstarHaloFinder class has these options that can be supplied to the 
+halo catalog through the ``finder_kwargs`` argument:
+
+  * ``dm_type``, the index of the dark matter particle. Default is 1. 
+  * ``outbase``, This is where the out*list files that Rockstar makes should be
+    placed. Default is 'rockstar_halos'.
+  * ``num_readers``, the number of reader tasks (which are idle most of the 
+    time.) Default is 1.
+  * ``num_writers``, the number of writer tasks (which are fed particles and
+    do most of the analysis). Default is MPI_TASKS-num_readers-1. 
+    If left undefined, the above options are automatically 
+    configured from the number of available MPI tasks.
+  * ``force_res``, the resolution that Rockstar uses for various calculations
+    and smoothing lengths. This is in units of Mpc/h.
+    If no value is provided, this parameter is automatically set to
+    the width of the smallest grid element in the simulation from the
+    last data snapshot (i.e. the one where time has evolved the
+    longest) in the time series:
+    ``ds_last.index.get_smallest_dx() * ds_last['mpch']``.
+  * ``total_particles``, if supplied, this is a pre-calculated
+    total number of dark matter
+    particles present in the simulation. For example, this is useful
+    when analyzing a series of snapshots where the number of dark
+    matter particles should not change and this will save some disk
+    access time. If left unspecified, it will
+    be calculated automatically. Default: ``None``.
+  * ``dm_only``, if set to ``True``, it will be assumed that there are
+    only dark matter particles present in the simulation.
+    This option does not modify the halos found by Rockstar, however
+    this option can save disk access time if there are no star particles
+    (or other non-dark matter particles) in the simulation. Default: ``False``.
+
+Rockstar dumps halo information in a series of text (halo*list and 
+out*list) and binary (halo*bin) files inside the ``outbase`` directory. 
+We use the halo list classes to recover the information. 
+
+Inside the ``outbase`` directory there is a text file named ``datasets.txt``
+that records the connection between ds names and the Rockstar file names.
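+
+The binary output can be loaded back as a halo dataset and attached to a halo
+catalog through the ``halos_ds`` keyword. A minimal sketch is below; the
+``halos_0.0.bin`` file name follows Rockstar's usual naming and may differ
+for your run.
+
+.. code-block:: python
+
+  from yt.mods import *
+  from yt.analysis_modules.halo_analysis.api import HaloCatalog
+
+  data_ds = load("Enzo_64/RD0006/RedshiftOutput0006")
+  halos_ds = load("rockstar_halos/halos_0.0.bin")
+  hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)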
+
+Parallel HOP and FOF
+--------------------
+
+Both the HOP and FoF halo finders can run in parallel using simple 
+spatial decomposition. In order to run them in parallel, it is helpful
+to understand how the decomposition works. Below, the first panel (i) shows
+a simplified depiction of three haloes labeled 1, 2, and 3:
+
+.. image:: _images/ParallelHaloFinder.png
+   :width: 500
+
+Halo 3 is twice reflected around the periodic boundary conditions.
+
+In (ii), the volume has been sub-divided into four equal subregions,
+A, B, C, and D, shown with dotted lines. Notice that halo 2 is now in
+two different subregions, C and D, and that halo 3 is now in three,
+A, B, and D. If the halo finder is run on these four separate subregions,
+halo 1 is identified as a single halo, but haloes 2 and 3 are split
+up into multiple haloes, which is incorrect. The solution is to give
+each subregion padding to oversample into neighboring regions.
+
+In (iii), subregion C has oversampled into the other three regions, 
+with the periodic boundary conditions taken into account, shown by 
+dot-dashed lines. The other subregions oversample in a similar way.
+
+The halo finder is then run on each padded subregion independently 
+and simultaneously. By oversampling like this, haloes 2 and 3 will 
+both be enclosed fully in at least one subregion and identified 
+completely.
+
+Haloes identified with centers of mass inside the padded part of a 
+subregion are thrown out, eliminating the problem of halo duplication. 
+The centers for the three haloes are shown with stars. Halo 1 will
+belong to subregion A, 2 to C and 3 to B.
+
+To run with parallel halo finding, you must supply a value for
+padding in the ``finder_kwargs`` argument. The ``padding`` parameter
+is in simulation units and defaults to 0.02. It sets how much padding
+is added to each of the six sides of a subregion, and should be 2x-3x
+larger than the largest expected halo in the simulation. It is unlikely,
+of course, that the largest object in the simulation will be on a
+subregion boundary, but there is no way of knowing before the halo
+finder is run.
+
+.. code-block:: python
+
+  from yt.mods import *
+  from yt.analysis_modules.halo_analysis.api import *
+  ds = load("data0001")
+  hc = HaloCatalog(data_ds=ds, finder_method='hop',
+                   finder_kwargs={'padding': 0.02})
+  # --or--
+  hc = HaloCatalog(data_ds=ds, finder_method='fof',
+                   finder_kwargs={'padding': 0.02})
+
+
+In general, a little bit of padding goes a long way, and too much
+just slows down the analysis without improving the answer (but it
+doesn't change it). It may be worth your time to run the parallel
+halo finder at a few paddings to find the right amount, especially
+if you're analyzing many similar datasets; a sketch of such a sweep
+is given below.
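+
+One way to do such a sweep is to loop over a few candidate paddings and
+compare the resulting catalogs; the padding values and output directories
+below are only illustrative.
+
+.. code-block:: python
+
+  from yt.mods import *
+  from yt.analysis_modules.halo_analysis.api import *
+
+  ds = load("data0001")
+  for padding in [0.01, 0.02, 0.04]:
+      # Write each catalog to its own directory so the runs can be compared.
+      hc = HaloCatalog(data_ds=ds, finder_method='hop',
+                       finder_kwargs={'padding': padding},
+                       output_dir="halo_catalogs/padding_%g" % padding)
+      hc.create()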
+
+Rockstar Installation
+=====================
+
+Rockstar is slightly patched and modified to run as a library inside of
+yt. By default it will be built with yt using ``install_script.sh``.
+If it was not installed, please make sure that the installation setting
+``INST_ROCKSTAR=1`` is defined in ``install_script.sh`` and re-run
+the installation script.

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/halo_profiling.rst
--- a/doc/source/analyzing/analysis_modules/halo_profiling.rst
+++ /dev/null
@@ -1,451 +0,0 @@
-.. _halo_profiling:
-
-Halo Profiling
-==============
-.. sectionauthor:: Britton Smith <brittonsmith at gmail.com>,
-   Stephen Skory <s at skory.us>
-
-The ``HaloProfiler`` provides a means of performing analysis on multiple halos 
-in a parallel-safe way.
-
-The halo profiler performs three primary functions: radial profiles, 
-projections, and custom analysis.  See the cookbook for a recipe demonstrating 
-all of these features.
-
-Configuring the Halo Profiler
------------------------------
-
-The only argument required to create a ``HaloProfiler`` object is the path 
-to the dataset.
-
-.. code-block:: python
-
-  from yt.analysis_modules.halo_profiler.api import *
-  hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046")
-
-Most of the halo profiler's options are configured with additional keyword 
-arguments:
-
- * **output_dir** (*str*): if specified, all output will be put into this path
-   instead of in the dataset directories.  Default: None.
-
- * **halos** (*str*): "multiple" for profiling more than one halo.  In this mode
-   halos are read in from a list or identified with a
-   `halo finder <../cookbook/running_halofinder.html>`_.  In "single" mode, the
-   one and only halo center is identified automatically as the location of the
-   peak in the density field.  Default: "multiple".
-
- * **halo_list_file** (*str*): name of file containing the list of halos.
-   The halo profiler will look for this file in the data directory.
-   Default: "HopAnalysis.out".
-
- * **halo_list_format** (*str* or *dict*): the format of the halo list file.
-   "yt_hop" for the format given by yt's halo finders.  "enzo_hop" for the
-   format written by enzo_hop.  This keyword can also be given in the form of a
-   dictionary specifying the column in which various properties can be found.
-   For example, {"id": 0, "center": [1, 2, 3], "mass": 4, "radius": 5}.
-   Default: "yt_hop".
-
- * **halo_finder_function** (*function*): If halos is set to multiple and the
-   file given by halo_list_file does not exit, the halo finding function
-   specified here will be called.  Default: HaloFinder (yt_hop).
-
- * **halo_finder_args** (*tuple*): args given with call to halo finder function.
-   Default: None.
-
- * **halo_finder_kwargs** (*dict*): kwargs given with call to halo finder
-   function. Default: None.
-
- * **recenter** (*string* or function name): The name of a function
-   that will be used to move the center of the halo for the purposes of
-   analysis. See explanation and examples, below. Default: None, which
-   is equivalent to the center of mass of the halo as output by the halo
-   finder.
-
- * **halo_radius** (*float*): if no halo radii are provided in the halo list
-   file, this parameter is used to specify the radius out to which radial
-   profiles will be made.  This keyword is also used when halos is set to
-   single.  Default: 0.1.
-
- * **radius_units** (*str*): the units of **halo_radius**. 
-   Default: "1" (code units).
-
- * **n_profile_bins** (*int*): the number of bins in the radial profiles.
-   Default: 50.
-
- * **profile_output_dir** (*str*): the subdirectory, inside the data directory,
-   in which radial profile output files will be created.  The directory will be
-   created if it does not exist.  Default: "radial_profiles".
-
- * **projection_output_dir** (*str*): the subdirectory, inside the data
-   directory, in which projection output files will be created.  The directory
-   will be created if it does not exist.  Default: "projections".
-
- * **projection_width** (*float*): the width of halo projections.
-   Default: 8.0.
-
- * **projection_width_units** (*str*): the units of projection_width.
-   Default: "mpc".
-
- * **project_at_level** (*int* or "max"): the maximum refinement level to be
-   included in projections.  Default: "max" (maximum level within the dataset).
-
- * **velocity_center** (*list*): the method in which the halo bulk velocity is
-   calculated (used for calculation of radial and tangential velocities.  Valid
-   options are:
-   - ["bulk", "halo"] (Default): the velocity provided in the halo list
-   - ["bulk", "sphere"]: the bulk velocity of the sphere centered on the halo center.
-   - ["max", field]: the velocity of the cell that is the location of the maximum of the field specified.
-
- * **filter_quantities** (*list*): quantities from the original halo list
-   file to be written out in the filtered list file.  Default: ['id','center'].
-
- * **use_critical_density** (*bool*): if True, the definition of overdensity 
-     for virial quantities is calculated with respect to the critical 
-     density.  If False, overdensity is with respect to mean matter density, 
-     which is lower by a factor of Omega_M.  Default: False.
-
-Profiles
---------
-
-Once the halo profiler object has been instantiated, fields can be added for 
-profiling with the :meth:`add_profile` method:
-
-.. code-block:: python
-
-  hp.add_profile('cell_volume', weight_field=None, accumulation=True)
-  hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
-  hp.add_profile('density', weight_field=None, accumulation=False)
-  hp.add_profile('temperature', weight_field='cell_mass', accumulation=False)
-  hp.make_profiles(njobs=-1, prefilters=["halo['mass'] > 1e13"],
-                   filename='VirialQuantities.h5')
-
-The :meth:`make_profiles` method will begin the profiling.  Use the
-**njobs** keyword to control the number of jobs over which the
-profiling is divided.  Setting to -1 results in a single processor per
-halo.  Setting to 1 results in all available processors working on the
-same halo.  The prefilters keyword tells the profiler to skip all halos with 
-masses (as loaded from the halo finder) less than a given amount.  See below 
-for more information.  Additional keyword arguments are:
-
- * **filename** (*str*): If set, a file will be written with all of the 
-   filtered halos and the quantities returned by the filter functions.
-   Default: None.
-
- * **prefilters** (*list*): A single dataset can contain thousands or tens of 
-   thousands of halos. Significant time can be saved by not profiling halos
-   that are certain to not pass any filter functions in place.  Simple filters 
-   based on quantities provided in the initial halo list can be used to filter 
-   out unwanted halos using this parameter.  Default: None.
-
- * **njobs** (*int*): The number of jobs over which to split the profiling.  
-   Set to -1 so that each halo is done by a single processor.  Default: -1.
-
- * **dynamic** (*bool*): If True, distribute halos using a task queue.  If 
-   False, distribute halos evenly over all jobs.  Default: False.
-
- * **profile_format** (*str*): The file format for the radial profiles, 
-   'ascii' or 'hdf5'.  Default: 'ascii'.
-
-.. image:: _images/profiles.png
-   :width: 500
-
-Radial profiles of Overdensity (left) and Temperature (right) for five halos.
-
-Projections
------------
-
-The process of making projections is similar to that of profiles:
-
-.. code-block:: python
-
-  hp.add_projection('density', weight_field=None)
-  hp.add_projection('temperature', weight_field='density')
-  hp.add_projection('metallicity', weight_field='density')
-  hp.make_projections(axes=[0, 1, 2], save_cube=True, save_images=True, 
-                      halo_list="filtered", njobs=-1)
-
-If **save_cube** is set to True, the projection data
-will be written to a set of hdf5 files 
-in the directory given by **projection_output_dir**. 
-The keyword, **halo_list**, can be 
-used to select between the full list of halos ("all"),
-the filtered list ("filtered"), or 
-an entirely new list given in the form of a file name.
-See :ref:`filter_functions` for a 
-discussion of filtering halos.  Use the **njobs** keyword to control
-the number of jobs over which the profiling is divided.  Setting to -1
-results in a single processor per halo.  Setting to 1 results in all
-available processors working on the same halo.  The keyword arguments are:
-
- * **axes** (*list*): A list of the axes to project along, using the usual 
-   0,1,2 convention. Default=[0,1,2].
-
- * **halo_list** (*str*) {'filtered', 'all'}: Which set of halos to make 
-   profiles of, either ones passed by the halo filters (if enabled/added), or 
-   all halos.  Default='filtered'.
-
- * **save_images** (*bool*): Whether or not to save images of the projections. 
-   Default=False.
-
- * **save_cube** (*bool*): Whether or not to save the HDF5 files of the halo 
-   projections.  Default=True.
-
- * **njobs** (*int*): The number of jobs over which to split the projections.  
-   Set to -1 so that each halo is done by a single processor.  Default: -1.
-
- * **dynamic** (*bool*): If True, distribute halos using a task queue.  If 
-   False, distribute halos evenly over all jobs.  Default: False.
-
-.. image:: _images/projections.png
-   :width: 500
-
-Projections of Density (top) and Temperature,
-weighted by Density (bottom), in the x (left), 
-y (middle), and z (right) directions for a single halo with a width of 8 Mpc.
-
-Halo Filters
-------------
-
-Filters can be added to create a refined list of
-halos based on their profiles or to avoid 
-profiling halos altogether based on information
-given in the halo list file.
-
-.. _filter_functions:
-
-Filter Functions
-^^^^^^^^^^^^^^^^
-
-It is often the case that one is looking to
-identify halos with a specific set of 
-properties.  This can be accomplished through the creation
-of filter functions.  A filter 
-function can take as many args and kwargs as you like,
-as long as the first argument is a 
-profile object, or at least a dictionary which contains
-the profile arrays for each field.  
-Filter functions must return a list of two things.
-The first is a True or False indicating 
-whether the halo passed the filter. 
-The second is a dictionary containing quantities 
-calculated for that halo that will be written to a
-file if the halo passes the filter.
-A  sample filter function based on virial quantities can be found in 
-``yt/analysis_modules/halo_profiler/halo_filters.py``.
-
-Halo filtering takes place during the call to :meth:`make_profiles`.
-The  :meth:`add_halo_filter` method is used to add a filter to be used
-during the profiling:
-
-.. code-block:: python
-
-  hp.add_halo_filter(HP.VirialFilter, must_be_virialized=True, 
-                     overdensity_field='ActualOverdensity', 
-		     virial_overdensity=200, 
-		     virial_filters=[['TotalMassMsun','>=','1e14']],
-		     virial_quantities=['TotalMassMsun','RadiusMpc'],
-		     use_log=True)
-
-The addition above will calculate and return virial quantities,
-mass and radius, for an 
-overdensity of 200.  In order to pass the filter, at least one
-point in the profile must be 
-above the specified overdensity and the virial mass must be at
-least 1e14 solar masses.  The **use_log** keyword indicates that interpolation 
-should be done in log space.  If 
-the VirialFilter function has been added to the filter list,
-the halo profiler will make 
-sure that the fields necessary for calculating virial quantities are added.
-As  many filters as desired can be added.  If filters have been added,
-the next call to :meth:`make_profiles` will filter by all of
-the added filter functions:
-
-.. code-block:: python
-
-  hp.make_profiles(filename="FilteredQuantities.out")
-
-If the **filename** keyword is set, a file will be written with all of the 
-filtered halos and the quantities returned by the filter functions.
-
-.. note:: If the profiles have already been run, the halo profiler will read
-   in the previously created output files instead of re-running the profiles.
-   The halo profiler will check to make sure the output file contains all of
-   the requested halo fields.  If not, the profile will be made again from
-   scratch.
-
-.. _halo_profiler_pre_filters:
-
-Pre-filters
-^^^^^^^^^^^
-
-A single dataset can contain thousands or tens of thousands of halos.
-Significant time can 
-be saved by not profiling halos that are certain to not pass any filter
-functions in place.  
-Simple filters based on quantities provided in the initial halo list
-can be used to filter 
-out unwanted halos using the **prefilters** keyword:
-
-.. code-block:: python
-
-  hp.make_profiles(filename="FilteredQuantities.out",
-		   prefilters=["halo['mass'] > 1e13"])
-
-Arguments provided with the **prefilters** keyword should be given
-as a list of strings.  
-Each string in the list will be evaluated with an *eval*.
-
-.. note:: If a VirialFilter function has been added with a filter based
-   on mass (as in the example above), a prefilter will be automatically
-   added to filter out halos with masses greater or less than (depending
-   on the conditional of the filter) a factor of ten of the specified
-   virial mass.
-
-Recentering the Halo For Analysis
----------------------------------
-
-It is possible to move the center of the halo to a new point using an
-arbitrary function for making profiles.
-By default, the center is provided by the halo finder,
-which outputs the center of mass of the particles. For the purposes of
-analysis, it may be important to recenter onto a gas density maximum,
-or a temperature minimum.
-
-There are a number of built-in functions to do this, listed below.
-Each of the functions uses mass-weighted fields for the calculations
-of new center points.
-To use
-them, supply the HaloProfiler with the ``recenter`` option and 
-the name of the function, as in the example below.
-
-.. code-block:: python
-
-   hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046", 
-                     recenter="Max_Dark_Matter_Density")
-
-Additional options are:
-
-  * *Min_Dark_Matter_Density* - Recenter on the point of minimum dark matter
-    density in the halo.
-
-  * *Max_Dark_Matter_Density* - Recenter on the point of maximum dark matter
-    density in the halo.
-
-  * *CoM_Dark_Matter_Density* - Recenter on the center of mass of the dark
-    matter density field. This will be very similar to what the halo finder
-    provides, but not precisely similar.
-
-  * *Min_Gas_Density* - Recenter on the point of minimum gas density in the
-    halo.
-
-  * *Max_Gas_Density* - Recenter on the point of maximum gas density in the
-    halo.
-
-  * *CoM_Gas_Density* - Recenter on the center of mass of the gas density field
-    in the halo.
-
-  * *Min_Total_Density* - Recenter on the point of minimum total (gas + dark
-    matter) density in the halo.
-
-  * *Max_Total_Density* - Recenter on the point of maximum total density in the
-    halo.
-
-  * *CoM_Total_Density* - Recenter on the center of mass for the total density
-    in the halo.
-
-  * *Min_Temperature* - Recenter on the point of minimum temperature in the
-    halo.
-
-  * *Max_Temperature* - Recenter on the point of maximum temperature in the
-    halo.
-
-It is also possible to supply a user-defined function to the HaloProfiler.
-This can be used if the pre-defined functions above are not sufficient.
-The function takes a single argument, a data container for the halo,
-which is a sphere. The function returns a 3-list with the new center.
-
-In this example below, a function is used such that the halos will be
-re-centered on the point of absolute minimum temperature, that is not
-mass weighted.
-
-.. code-block:: python
-
-   from yt.mods import *
-   
-   def find_min_temp(sphere):
-       ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
-       return [mx,my,mz]
-   
-   hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046", recenter=find_min_temp)
-
-It is possible to make more complicated functions. This example below extends
-the example above to include a distance control that prevents the center from
-being moved too far. If the recenter moves too far, ``[-1, -1, -1]`` is
-returned which will prevent the halo from being profiled.
-Any triplet of values less than the ``domain_left_edge`` will suffice.
-There will be a note made in the output (stderr) showing which halos were
-skipped.
-
-.. code-block:: python
-
-   from yt.mods import *
-   from yt.utilities.math_utils import periodic_dist
-   
-   def find_min_temp_dist(sphere):
-       old = sphere.center
-       ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
-       d = sphere.ds['kpc'] * periodic_dist(old, [mx, my, mz],
-           sphere.ds.domain_right_edge - sphere.ds.domain_left_edge)
-       # If new center farther than 5 kpc away, don't recenter
-       if d > 5.: return [-1, -1, -1]
-       return [mx,my,mz]
-   
-   hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046", 
-                     recenter=find_min_temp_dist)
-
-Custom Halo Analysis
---------------------
-
-Besides radial profiles and projections, the halo profiler has the
-ability to run custom analysis functions on each halo.  Custom halo
-analysis functions take two arguments: a halo dictionary containing
-the id, center, etc; and a sphere object.  The example function shown
-below creates a 2D profile of the total mass in bins of density and
-temperature for a given halo.
-
-.. code-block:: python
-
-   from yt.mods import *
-   from yt.data_objects.profiles import BinnedProfile2D
-
-   def halo_2D_profile(halo, sphere):
-       "Make a 2D profile for a halo."
-       my_profile = BinnedProfile2D(sphere,
-             128, 'density', 1e-30, 1e-24, True,
-             128, 'temperature', 1e2, 1e7, True,
-             end_collect=False)
-       my_profile.add_fields('cell_mass', weight=None, fractional=False)
-       my_filename = os.path.join(sphere.ds.fullpath, '2D_profiles', 
-             'Halo_%04d.h5' % halo['id'])
-       my_profile.write_out_h5(my_filename)
-
-Using the  :meth:`analyze_halo_spheres` function, the halo profiler
-will create a sphere centered on each halo, and perform the analysis
-from the custom routine.
-
-.. code-block:: python
-
-    hp.analyze_halo_sphere(halo_2D_profile, halo_list='filtered',
-                           analysis_output_dir='2D_profiles', 
-                           njobs=-1, dynamic=False)
-
-Just like with the :meth:`make_projections` function, the keyword,
-**halo_list**, can be used to select between the full list of halos
-("all"), the filtered list ("filtered"), or an entirely new list given
-in the form of a file name.  If the **analysis_output_dir** keyword is
-set, the halo profiler will make sure the desired directory exists in
-a parallel-safe manner.  Use the **njobs** keyword to control the
-number of jobs over which the profiling is divided.  Setting to -1
-results in a single processor per halo.  Setting to 1 results in all
-available processors working on the same halo.

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/halo_transition.rst
--- /dev/null
+++ b/doc/source/analyzing/analysis_modules/halo_transition.rst
@@ -0,0 +1,106 @@
+
+Getting up to Speed with Halo Analysis in yt-3.0
+================================================
+
+If you're used to halo analysis in yt-2.x, here's a guide to
+updating your analysis pipeline to take advantage of
+the new halo catalog infrastructure.
+
+Finding Halos
+-------------
+
+Previously, halos were found using calls to ``HaloFinder``,
+``FOFHaloFinder`` and ``RockstarHaloFinder``. Now you are
+encouraged to find the halos when the halo catalog is created
+by supplying a value for the ``finder_method`` keyword when calling
+``HaloCatalog``. Currently, only halos found using Rockstar or a
+previous instance of a halo catalog can be loaded
+using the ``halos_ds`` keyword.
+
+To pass additional arguments to the halo finders
+themselves, supply a dictionary to ``finder_kwargs`` where
+each key in the dictionary is a keyword of the halo finder
+and the corresponding value is the value to be passed for
+that keyword, as in the sketch below.
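+
+A minimal sketch of both routes is below; the HOP ``threshold`` value and the
+Rockstar output file name are only examples.
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+
+   # Find halos while the catalog is built, passing options to the finder.
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop',
+                    finder_kwargs={'threshold': 160.0})
+
+   # Or load halos found previously by Rockstar.
+   halos_ds = load('rockstar_halos/halos_0.0.bin')
+   hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)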
+
+Getting Halo Information
+------------------------
+
+All quantities that used to be present in a ``halo_list`` are
+still able to be found but are not necessarily included by default.
+Every halo will by default have the following properties:
+
+* particle_position_i (where i is x, y, or z)
+* particle_mass
+* virial_radius
+* particle_identifier
+
+If other quantities are desired, they can be included by adding
+the corresponding quantity before the catalog is created, as in the
+sketch below. See the full halo catalog documentation for further
+information about how to add these quantities and what quantities are available.
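+
+For instance, a sketch of attaching one extra quantity before creating the
+catalog (this assumes ``center_of_mass`` is among the available quantities):
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
+   hc.add_quantity("center_of_mass")
+   hc.create()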
+
+You no longer have to iterate over halos in the ``halo_list``.
+Now a halo dataset can be treated as a regular dataset and 
+all quantities are available by accessing ``all_data``.
+Specifically, all quantities can be accessed as shown:
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
+   hc.create()
+   ad = hc.all_data()
+   masses = ad['particle_mass'][:]
+
+
+Prefiltering Halos
+------------------
+
+Prefiltering halos before analysis takes place is now done
+by adding a filter before the call to ``create``. An example
+is shown below:
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
+   hc.add_filter("quantity_value", "particle_mass", ">", 1e13, "Msun")
+   hc.create()
+
+Profiling Halos
+---------------
+
+The halo profiler available in yt-2.x has been removed, and
+profiling functionality is now completely contained within the
+halo catalog. A complete example of how to profile halos by 
+radius using the new infrastructure is given in 
+:ref:`halo_analysis_example`. 
+
+Plotting Halos
+--------------
+
+Annotating halo locations onto a slice or projection works in
+the same way as in yt-2.x, but now a halo catalog must be
+passed to the ``annotate_halos`` call rather than a halo list.
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+
+   data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')
+   hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
+   hc.create()
+
+   prj = ProjectionPlot(data_ds, 'z', 'density')
+   prj.annotate_halos(hc)
+   prj.save()
+
+Written Data
+------------
+
+Data is now written out in the form of HDF5 files rather than
+text files. The directory they are written out to is
+controlled by the keyword ``output_dir``. Each quantity
+is a field in the file, and the catalog can be reloaded as in the sketch below.
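+
+This sketch assumes the default ``output_dir`` layout; the exact file path
+may differ for your run.
+
+.. code-block:: python
+
+   from yt.mods import *
+   from yt.analysis_modules.halo_analysis.api import HaloCatalog
+
+   halos_ds = load('halo_catalogs/catalog/catalog.0.h5')
+   hc = HaloCatalog(halos_ds=halos_ds)
+   ad = hc.all_data()
+   masses = ad['particle_mass']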

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/merger_tree.rst
--- a/doc/source/analyzing/analysis_modules/merger_tree.rst
+++ b/doc/source/analyzing/analysis_modules/merger_tree.rst
@@ -2,8 +2,9 @@
 
 Halo Merger Tree
 ================
-.. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>
-.. versionadded:: 1.7
+
+.. note:: At the moment, the merger tree has not yet been implemented using
+    the new halo catalog functionality.
 
 The Halo Merger Tree extension is capable of building a database of halo mergers
 over a set of time-ordered Enzo datasets. The fractional contribution of older

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/analyzing/analysis_modules/running_halofinder.rst
--- a/doc/source/analyzing/analysis_modules/running_halofinder.rst
+++ /dev/null
@@ -1,612 +0,0 @@
-.. _halo_finding:
-
-Halo Finding
-============
-.. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>
-
-There are four methods of finding particle haloes in yt. The recommended and default method is called HOP, a 
-method described in `Eisenstein and Hut (1998) <http://adsabs.harvard.edu/abs/1998ApJ...498..137E>`_. 
-A basic friends-of-friends (e.g. `Efstathiou et al. (1985) <http://adsabs.harvard.edu/abs/1985ApJS...57..241E>`_)
-halo finder is also implemented.
-Parallel HOP (`Skory et al. (2010) <http://adsabs.harvard.edu/abs/2010ApJS..191...43S>`_)
-is a true parallelization of the HOP method that can analyze massive datasets on
-hundreds of processors.
-Finally, Rockstar (`Behroozi et al. (2011) <http://adsabs.harvard.edu/abs/2011arXiv1110.4372B>`_)
-is a 6D-phase space halo finder developed by Peter Behroozi
-that excels in finding subhalos and substructure,
-but does not allow multiple particle masses.
-
-HOP
----
-
-The version of HOP used in yt is an upgraded version of the `publicly available HOP code 
-<http://cmb.as.arizona.edu/~eisenste/hop/hop.html>`_. Support for 64-bit floats and integers has been
-added, as well as parallel analysis through spatial decomposition. HOP builds groups in this fashion:
-
-  1. Estimates the local density at each particle using a smoothing kernel.
-  2. Builds chains of linked particles by 'hopping' from one particle to its densest neighbor.
-     A particle which is its own densest neighbor is the end of the chain.
-  3. All chains that share the same densest particle are grouped together.
-  4. Groups are included, linked together, or discarded depending on the user-supplied over density
-     threshold parameter. The default is 160.0.
-
-Please see the `HOP method paper <http://adsabs.harvard.edu/abs/1998ApJ...498..137E>`_ 
-for full details.
-
-Friends-of-Friends
-------------------
-
-The version of FoF in yt is based on the `publicly available FoF code <http://www-hpcc.astro.washington.edu/tools/fof.html>`_ from the University of Washington. Like HOP,
-FoF supports parallel analysis through spatial decomposition. FoF is much simpler than HOP:
-
-  1. From the total number of particles, and the volume of the region, the average
-     inter-particle spacing is calculated.
-  2. Pairs of particles closer together than some fraction of the average inter-particle spacing
-     (the default is 0.2) are linked together. Particles can be paired with more than one other particle.
-  3. The final groups are formed from the networks of particles linked together by friends, hence the name.
-
-.. warning:: The FoF halo finder in yt is not thoroughly tested! It is probably fine to use, but you
-   are strongly encouraged to check your results against the data for errors.
-
-Running HaloFinder
-------------------
-
-Running HOP on a dataset is straightforward
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  halo_list = HaloFinder(ds)
-
-Running FoF is similar:
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  halo_list = FOFHaloFinder(ds)
-
-Halo Data Access
-----------------
-
-``halo_list`` is a list of ``Halo`` class objects ordered by decreasing halo mass. A ``Halo`` object
-has convenient ways to access halo data. This loop will print the location of the center of mass
-for each halo found
-
-.. code-block:: python
-
-  for halo in halo_list:
-      print halo.center_of_mass()
-
-All the methods are:
-
-  * .center_of_mass() - the center of mass for the halo.
-  * .maximum_density() - the maximum density in "HOP" units.
-  * .maximum_density_location() - the location of the maximum density particle in the HOP halo.
-  * .total_mass() - the mass of the halo in Msol (not Msol/h).
-  * .bulk_velocity() - the velocity of the center of mass of the halo in simulation units.
-  * .maximum_radius() - the distance from the center of mass to the most distant particle in the halo
-    in simulation units.
-  * .get_size() - the number of particles in the halo.
-  * .get_sphere() - returns an EnzoSphere object using the center of mass and maximum radius.
-  * .virial_mass(virial_overdensity=float, bins=int) - Finds the virial
-    mass for a halo using just the particles. This is inferior to the full
-    Halo Profiler extension (:ref:`halo_profiling`), but useful nonetheless in some cases.
-    Returns the mass in Msol, or -1 if the halo is not virialized.
-    Defaults: ``virial_overdensity=200.0`` and ``bins=300``.
-  * .virial_radius(virial_overdensity=float, bins=int) - Finds the virial
-    radius of the halo using just the particles. Returns the radius in code
-    units, or -1 if the halo is not virialized.
-    Defaults: ``virial_overdensity=200.0`` and ``bins=300``.
-
-.. note:: For FOF the maximum density value is meaningless and is set to -1 by default. For FOF
-   the maximum density location will be identical to the center of mass location.
-
-For each halo the data for the particles in the halo can be accessed like this
-
-.. code-block:: python
-
-  for halo in halo_list:
-      print halo["particle_index"]
-      print halo["particle_position_x"] # in simulation units
-
-Halo List Data Access
----------------------
-
-These are methods that operate on the list of halo objects, rather than on the
-haloes themselves (e.g. ``halo_list.write_out()`` instead of ``halo_list[0].center_of_mass()``).
-For example, The command
-
-.. code-block:: python
-
-  halo_list.write_out("HaloAnalysis.out")
-
-will output the haloes to a text file named ``HaloAnalysis.out``.
-
-  * .write_out(``name``) - Writes out the center of mass, maximum density point,
-    number of particles, mass, index, bulk velocity and maximum radius for all the haloes
-    to a text file ``name``.
-  * .write_particle_lists(``name``) - Writes the data for the particles in haloes
-    (position, velocity, mass and particle index) to a HDF5 file with prefix ``name``, or one HDF5
-    file per CPU when running in parallel.
-  * .write_particle_lists_txt(``name``) - Writes out one text file with prefix ``name`` that gives the
-    location of the particle data for haloes in the HDF5 files. This is only
-    necessary when running in parallel.
-  * .dump(``basename``) - Calls all of the above three functions using 
-    ``basename`` in each. This function is meant to be used in combination with
-    loading halos off disk (:ref:`load_haloes`).
-  * .nearest_neighbors_3D(haloID, num_neighbors=int, search_radius=float) - 
-    For a given halo ``haloID``, this finds the ``num_neighbors`` nearest (periodic)
-    neighbors that are within ``search_radius`` distance from it.
-    It returns a list of the neighbors distances and ID with format
-    [distance,haloID]. Defaults: ``num_neighbors=7``, ``search_radius=0.2``.
-  * .nearest_neighbors_2D(haloID, num_neighbors=int, search_radius=float, proj_dim={0,1,2}) -
-    Similarly to the 3D search, this finds the nearest (periodic) neighbors to a halo, but
-    with the positions of the haloes projected onto a 2D plane. The normal to the
-    projection plane is set with ``proj_dim``, which is set to {0,1,2} for the
-    {x,y,z}-axis. Defaults: ``num_neighbors=7``, ``search_radius=0.2`` and ``proj_dim=0``.
-    Returns a list of neighbors in the same format as the 3D case, but the distances
-    are the 2D projected distance.
-
-.. _load_haloes:
-
-Loading Haloes Off Disk
------------------------
-
-It is possible to load haloes off disk and use them as if they had just been
-located by the halo finder. This has at least two advantages.  Quite obviously
-this means that if the halos are properly saved (e.g. ``haloes.dump()``, see
-above and below), halo finding does not need to be run again, saving time.
-Another benefit is loaded haloes only use as much memory as needed because the
-particle data for the haloes is loaded off disk on demand. If only a few haloes
-are being examined, a dataset that required parallel analysis for halo finding
-can be analyzed in serial, interactively.
-
-The first step is to save the haloes in a consistent manner, which is made
-simple with the ``.dump()`` function:
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  haloes = HaloFinder(ds)
-  haloes.dump("basename")
-
-It is easy to load the halos using the ``LoadHaloes`` class:
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  haloes = LoadHaloes(ds, "basename")
-
-Everything that can be done with ``haloes`` in the first example should be
-possible with ``haloes`` in the second.
-
-General Parallel Halo Analysis
-------------------------------
-
-Both the HOP and FoF halo finders can run in parallel using simple spatial decomposition.
-In order to run them
-in parallel it is helpful to understand how it works.
-
-Below in the first plot (i) is a simplified depiction of three haloes labeled 1,2 and 3:
-
-.. image:: _images/ParallelHaloFinder.png
-   :width: 500
-
-Halo 3 is twice reflected around the periodic boundary conditions.
-
-In (ii), the volume has been
-sub-divided into four equal subregions, A,B,C and D, shown with dotted lines. Notice that halo 2
-is now in two different subregions,
-C and D, and that halo 3 is now in three, A, B and D. If the halo finder is run on these four separate subregions,
-halo 1 is identified as a single halo, but haloes 2 and 3 are split up into multiple haloes, which is incorrect.
-The solution is to give each subregion padding to oversample into neighboring regions.
-
-In (iii), subregion C has oversampled into the other three regions, with the periodic boundary conditions taken
-into account, shown by dot-dashed lines. The other subregions oversample in a similar way.
-
-The halo finder is then run on each padded subregion independently and simultaneously.
-By oversampling like this, haloes 2 and 3 will both be enclosed fully in at least one subregion and
-identified completely.
-
-Haloes identified with centers of mass inside the padded part of a subregion are thrown out, eliminating
-the problem of halo duplication. The centers for the three haloes are shown with stars. Halo 1 will
-belong to subregion A, 2 to C and 3 to B.
-
-Parallel HaloFinder padding
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To run with parallel halo finding, there is a slight modification to the script
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  halo_list = HaloFinder(ds,padding=0.02)
-  # --or--
-  halo_list = FOFHaloFinder(ds,padding=0.02)
-
-The ``padding`` parameter is in simulation units and defaults to 0.02. This parameter is how much padding
-is added to each of the six sides of a subregion. This value should be 2x-3x larger than the largest
-expected halo in the simulation. It is unlikely, of course, that the largest object in the simulation
-will be on a subregion boundary, but there is no way of knowing before the halo finder is run.
-
-In general, a little bit of padding goes a long way, and too much just slows down the analysis and doesn't
-improve the answer (but doesn't change it). 
-It may be worth your time to run the parallel halo finder at a few paddings to
-find the right amount, especially if you're analyzing many similar datasets.
-
-Parallel HOP
-------------
-
-**Parallel HOP** (not to be confused with HOP running in parallel as described
-above) is a wholly-new halo finder based on the HOP method.
-For extensive details and benchmarks of Parallel HOP, please see the
-pre-print version of the `method paper <http://adsabs.harvard.edu/abs/2010ApJS..191...43S>`_ at
-arXiv.org.
-While the method
-of parallelization described above can be quite effective, it has its limits.
-In particular
-for highly unbalanced datasets, where most of the particles are in a single
-part of the simulation's volume, it can become impossible to subdivide the
-volume sufficiently to fit a subvolume into a single node's memory.
-
-Parallel HOP is designed to be parallel at all levels of operation. There is
-a minimal amount of copied data across tasks. Unlike the parallel method above,
-whole haloes do not need to exist entirely in a single subvolume. In fact, a
-halo may have particles in several subvolumes simultaneously without a problem.
-
-Parallel HOP is appropriate for very large datasets where normal HOP, or
-the parallel method described above, won't work. For smaller datasets, it is
-actually faster to use the simpler methods above because the mechanisms employed for
-full parallelism are somewhat expensive.
-Whether to use Parallel HOP or not depends on the number of particles and
-the size of the largest object in the simulation.
-Because the padding of the other parallel method described above depends on
-the relative size to the box of the largest object, for smaller cosmologies
-that method may not work.
-If the largest object is quite large, the minimum padding will be a
-significant fraction of the full volume, and therefore the minimum number of
-particles per task can stay quite high.
-Below and including 256^3 particles, the other parallel methods are likely
-faster.
-However, above this and for smaller cosmologies (100 Mpc/h and smaller),
-Parallel HOP will offer better performance.
-
-The haloes identified by Parallel HOP are slightly different than normal HOP
-when run on the same dataset with the same over-density threshold.
-For a given threshold value, a few haloes have slightly different numbers of particles.
-Overall, it is not a big difference. In fact, changing the threshold value by
-a percent gives a far greater difference than the differences between HOP and
-Parallel HOP.
-
-HOP and Parallel HOP both use `KD Trees <http://en.wikipedia.org/wiki/Kd_tree>`_
-for nearest-neighbor searches.
-Parallel HOP uses the Fortran version of
-`KDTREE 2 <http://arxiv.org/abs/physics/0408067>`_ written by Matthew B. Kennel.
-The KD Tree in normal HOP calculates the distances
-between particles incorrectly by approximately one part in a million.
-KDTREE 2 is far more accurate (up to machine error),
-and this slight difference is sufficient to make perfect agreement between
-normal and Parallel HOP impossible.
-Therefore Parallel HOP is not a direct substitution for
-normal HOP, but is very similar.
-
-Running Parallel HOP
-^^^^^^^^^^^^^^^^^^^^
-
-Note: This is probably broken now that the Fortran kdtree has been removed.
-
-In the simplest form, Parallel HOP is run very similarly to the other halo finders.
-In the example below, Parallel HOP will be run on a dataset with all the default
-values. Parallel HOP can be run in serial, but as mentioned above, it is
-slower than normal HOP.
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  halo_list = parallelHF(ds)
-
-Parallel HOP has these user-set options:
-
-  * ``threshold``, positive float: This is the same as the option for normal HOP. Default=160.0.
-  * ``dm_only``, True/False: Whether or not to include particles other than dark
-    matter when building haloes. Default=True.
-  * ``resize``, True/False: Parallel HOP can load-balance the particles, such that
-    each subvolume has the same number of particles.
-    In general, this option is a good idea for simulations' volumes
-    smaller than about 300 Mpc/h, and absolutely required for those under
-    100 Mpc/h. For larger volumes the particles are distributed evenly enough
-    that this option is unnecessary. Default=True.
-  * ``sample``, positive float: In order to load-balance, a random subset of the
-    particle positions are read off disk, and the load-balancing routine is
-    applied to them. This parameter controls what fraction of the full dataset
-    population is used. Larger values result in more accurate load-balancing,
-    and smaller values are faster. The value cannot be too large as the data
-    for the subset of particles is communicated to one task for
-    load-balancing (meaning a value
-    of 1.0 will not work on very large datasets).
-    Tests show that values as low as 0.0003 keep the min/max variation between
-    tasks below 10%. Default = 0.03.
-  * ``rearrange``, True/False: The KD Tree used by Parallel HOP can make an
-    internal copy of the particle data which increases the speed of nearest
-    neighbor searches by approximately 20%. The only reason to turn this option
-    off is if memory is a concern. Default=True.
-  * ``safety``, positive float: Unlike the simpler parallel method, Parallel
-    HOP calculates the padding automatically. The padding is a
-    function of the inter-particle spacing inside each subvolume. This parameter
-    is multiplied to the padding distance to increase the padding volume to account for
-    density variations on the boundaries of the subvolumes. Increasing this
-    parameter beyond a certain point will have no effect other than consuming
-    more memory and slowing down runtimes.
-    Reducing it will speed up the calculation and use less memory, but 
-    going too far will result in degraded halo finding.
-    Default=1.5, but values as low as 1.0 will probably work for many datasets.
-  * ``fancy_padding``, True/False: When this is set to True, the amount of padding
-    is calculated independently for each of the six faces of each subvolume. When this is
-    False, the padding is the same on all six faces. There is generally no
-    good reason to set this to False. Default=True.
-  * ``premerge``, True/False: This option will pre-merge only the most dense
-    haloes in each subvolume, before haloes are merged on the global level. In
-    some cases this can speed up the runtime by a factor of two and reduce peak memory
-    greatly. At worst it slows down the runtime by a small amount. It has the
-    side-effect of changing the haloes slightly as a function of task count. Put in
-    another way, two otherwise identical runs of Parallel HOP on a dataset will end
-    up with very slightly different haloes when run with two different task counts
-    with this option turned on.  Not all haloes are changed between runs.  This is
-    due to the way merging happens in HOP - pre-merging destroys the global
-    determinacy of halo merging. Default=True.
-  * ``tree``, string: There are two kD-trees that may be used as part of the
-    halo-finding process. The Fortran ("F") one is (presently) faster, but requires
-    more memory. One based on `scipy.spatial
-    <http://docs.scipy.org/doc/scipy/reference/spatial.html>`_ utilizes
-    Cython ("C") and is (presently) slower, but is more memory efficient.
-    Default = "F".
-
-All the same halo data can be accessed from Parallel HOP haloes as with the other halo finders.
-However, when running in parallel, there are some
-important differences in the output of a couple of these functions.
-
-  * .write_particle_lists(``name``) - Because haloes may exist in more than
-    one subvolume, particle data for a halo may be saved in more than one HDF5 file.
-  * .write_particle_lists_txt(``name``) - If the particles for a halo is saved
-    in more than one HDF5 file, there will be more than one HDF5 file listed for
-    each halo in the text file.
-
-In this example script below, Parallel HOP is run on a dataset and the results
-saved to files. The summary of the haloes to ``ParallelHopAnalysis.out``, the
-particles to files named ``parts????.h5`` and the list of haloes in HDF5 files
-to ``parts.txt``.
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load("data0001")
-  halo_list = parallelHF(ds, threshold=80.0, dm_only=True, resize=False, 
-  rearrange=True, safety=1.5, premerge=True)
-  halo_list.write_out("ParallelHopAnalysis.out")
-  halo_list.write_particle_list("parts")
-  halo_list.write_particle_lists_txt("parts")
-
-Halo Finding In A Subvolume
----------------------------
-
-It is possible to run any of the halo finders over a subvolume.
-This may be advantageous when only one object or region of a simulation
-is being analyzed.
-The subvolume must be a ``region`` and cannot be a
-non-rectilinear shape.
-The halo finding can be performed in parallel on a subvolume, but it may
-not be necessary depending on the size of the subvolume.
-Below is a simple example for HOP; the other halo finders use the same
-``subvolume`` keyword identically.
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.api import *
-  ds = load('data0458')
-  # Note that the first term below, [0.5]*3, defines the center of
-  # the region and is not used. It can be any value.
-  sv = ds.region([0.5]*3, [0.21, .21, .72], [.28, .28, .79])
-  halos = HaloFinder(ds, subvolume = sv)
-  halos.write_out("sv.out")
-
-
-Rockstar Halo Finding
-=====================
-.. sectionauthor:: Matthew Turk <matthewturk at gmail.com>
-.. sectionauthor:: Christopher Erick Moody<cemoody at ucsc.edu>
-.. sectionauthor:: Stephen Skory <s at skory.us>
-
-Rockstar uses an adaptive hierarchical refinement of friends-of-friends 
-groups in six phase-space dimensions and one time dimension, which 
-allows for robust (grid-independent, shape-independent, and noise-
-resilient) tracking of substructure. The code is prepackaged with yt, 
-but also `separately available <http://code.google.com/p/rockstar>`_. The lead 
-developer is Peter Behroozi, and the methods are described in `Behroozi
-et al. 2011 <http://rockstar.googlecode.com/files/rockstar_ap101911.pdf>`_. 
-
-.. note:: At the moment, Rockstar does not support multiple particle masses, 
-  instead using a fixed particle mass. This will not affect most dark matter 
-  simulations, but does make it less useful for finding halos from the stellar
-  mass. Also note that halo finding in a subvolume is not supported by
-  Rockstar.
-
-To run the Rockstar Halo finding, you must launch python with MPI and 
-parallelization enabled. While Rockstar itself does not require MPI to run, 
-the MPI libraries allow yt to distribute particle information across multiple 
-nodes.
-
-.. warning:: At the moment, running Rockstar inside of yt on multiple compute nodes
-   connected by an Infiniband network can be problematic. Therefore, for now
-   we recommend forcing the use of the non-Infiniband network (e.g. Ethernet)
-   using this flag: ``--mca btl ^openib``.
-   For example, here is how Rockstar might be called using 24 cores:
-   ``mpirun -n 24 --mca btl ^openib python ./run_rockstar.py --parallel``.
-
-Designing the python script itself is straightforward:
-
-.. code-block:: python
-
-  from yt.mods import *
-  from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
-
-  #find all of our simulation files
-  files = glob.glob("Enzo_64/DD*/\*index")
-  #hopefully the file name order is chronological
-  files.sort()
-  ts = DatasetSeries.from_filenames(files[:])
-  rh = RockstarHaloFinder(ts)
-  rh.run()
-
-The script above configures the Halo finder, launches a server process which 
-disseminates run information and coordinates writer-reader processes. 
-Afterwards, it launches reader and writer tasks, filling the available MPI 
-slots, which alternately read particle information and analyze for halo 
-content.
-
-The RockstarHaloFinder class has these options:
-  * ``dm_type``, the index of the dark matter particle. Default is 1. 
-  * ``outbase``, This is where the out*list files that Rockstar makes should be
-    placed. Default is 'rockstar_halos'.
-  * ``num_readers``, the number of reader tasks (which are idle most of the 
-    time.) Default is 1.
-  * ``num_writers``, the number of writer tasks (which are fed particles and
-    do most of the analysis). Default is MPI_TASKS-num_readers-1. 
-    If left undefined, the above options are automatically 
-    configured from the number of available MPI tasks.
-  * ``force_res``, the resolution that Rockstar uses for various calculations
-    and smoothing lengths. This is in units of Mpc/h.
-    If no value is provided, this parameter is automatically set to
-    the width of the smallest grid element in the simulation from the
-    last data snapshot (i.e. the one where time has evolved the
-    longest) in the time series:
-    ``ds_last.index.get_smallest_dx() * ds_last['mpch']``.
-  * ``total_particles``, if supplied, this is a pre-calculated
-    total number of dark matter
-    particles present in the simulation. For example, this is useful
-    when analyzing a series of snapshots where the number of dark
-    matter particles should not change and this will save some disk
-    access time. If left unspecified, it will
-    be calculated automatically. Default: ``None``.
-  * ``dm_only``, if set to ``True``, it will be assumed that there are
-    only dark matter particles present in the simulation.
-    This option does not modify the halos found by Rockstar, however
-    this option can save disk access time if there are no star particles
-    (or other non-dark matter particles) in the simulation. Default: ``False``.
-
-
-Output Analysis
----------------
-
-Rockstar dumps halo information in a series of text (halo*list and 
-out*list) and binary (halo*bin) files inside the ``outbase`` directory. 
-We use the halo list classes to recover the information. 
-
-Inside the ``outbase`` directory there is a text file named ``datasets.txt``
-that records the connection between ds names and the Rockstar file names.
-
-The halo list can be automatically generated from the RockstarHaloFinder 
-object by calling ``RockstarHaloFinder.halo_list()``. Alternatively, the halo
-lists can be built from the RockstarHaloList class directly 
-``LoadRockstarHalos(ds,'outbase/out_0.list')``.
-
-.. code-block:: python
-    
-    rh = RockstarHaloFinder(ds)
-    #First method of creating the halo lists:
-    halo_list = rh.halo_list()    
-    #Alternate method of creating halo_list:
-    halo_list = LoadRockstarHalos(ds, 'rockstar_halos/out_0.list')
-
-The above ``halo_list`` is very similar to any other list of halos loaded off
-disk.
-It is possible to access particle data and use the halos in a manner like any
-other halo object, and the particle data is only loaded on demand.
-Additionally, each halo object has additional information attached that is
-pulled directly from the Rockstar output:
-
-.. code-block:: python
-
-    >>> halo_list[0].supp
-    Out[3]: 
-    {'J': array([ -6.15271728e+15,  -1.36593609e+17,  -7.80776865e+16], dtype=float32),
-     'bulkvel': array([-132.05046082,   11.53190422,   42.16183472], dtype=float32),
-     'child_r': 2.6411054,
-     'corevel': array([-132.05046082,   11.53190422,   42.16183472], dtype=float32),
-     'desc': 0,
-     'energy': -8.106986e+21,
-     'flags': 1,
-     'id': 166,
-     'm': 1.5341227e+15,
-     'mgrav': 1.5341227e+15,
-     'min_bulkvel_err': 1821.8152,
-     'min_pos_err': 0.00049575343,
-     'min_vel_err': 1821.8152,
-     'n_core': 1958,
-     'num_child_particles': 2764,
-     'num_p': 2409,
-     'p_start': 6540,
-     'pos': array([   0.20197368,    0.54656458,    0.11256824, -104.33285522,
-             29.02485085,   43.5154953 ], dtype=float32),
-     'r': 0.018403014,
-     'rs': 0.0026318002,
-     'rvmax': 1133.2,
-     'spin': 0.035755754,
-     'vmax': 1877.125,
-     'vrms': 1886.2648}
-
-Installation
-------------
-
-The Rockstar is slightly patched and modified to run as a library inside of 
-yt. By default it will be built with yt using the ``install_script.sh``.
-If it wasn't installed, please make sure that the installation setting
-``INST_ROCKSTAR=1`` is defined in the ``install_script.sh`` and re-run
-the installation script.
-
-Rockstar Inline with Enzo
--------------------------
-
-It is possible to run Rockstar inline with Enzo. Setting up
-Enzo with inline yt is covered
-`here <http://enzo-project.org/doc/user_guide/EmbeddedPython.html>`_.
-It is not necessary to run Enzo with load balancing off to use Rockstar.
-Here is an example ``user_script.py``:
-
-.. code-block:: python
-
-    from yt.mods import *
-    from yt.analysis_modules.halo_finding.api import *
-    from yt.config import ytcfg
-    from yt.analysis_modules.halo_finding.rockstar.api import *
-    
-    def main():
-        import enzo
-        ds = EnzoDatasetInMemory()
-        mine = ytcfg.getint('yt','__topcomm_parallel_rank')
-        size = ytcfg.getint('yt','__topcomm_parallel_size')
-
-        # Call rockstar.
-        ts = DatasetSeries([ds])
-        outbase = "./rockstar_halos_%04d" % ds['NumberOfPythonTopGridCalls']
-        rh = RockstarHaloFinder(ts, num_readers = size,
-            outbase = outbase)
-        rh.run()
-    
-        # Load the halos off disk.
-        fname = outbase + "/out_0.list"
-        rhalos = LoadRockstarHalos(ds, fname)
-

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/cookbook/fit_spectrum.py
--- a/doc/source/cookbook/fit_spectrum.py
+++ b/doc/source/cookbook/fit_spectrum.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 from yt.analysis_modules.cosmological_observation.light_ray.api import LightRay
 from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum
@@ -10,9 +7,9 @@
 # Do *NOT* use this for science, because this is not how OVI actually behaves;
 # it is just an example.
 
-@yt.derived_field(name='OVI_number_density', units='cm**-3')
+@yt.derived_field(name='O_p5_number_density', units='cm**-3')
 def _OVI_number_density(field, data):
-    return data['HI_NumberDensity']*2.0
+    return data['H_number_density']*2.0
 
 
 # Define species and associated parameters to add to continuum
@@ -23,7 +20,7 @@
 # of lines, and f,gamma, and wavelength will have multiple values.
 
 HI_parameters = {'name': 'HI',
-                 'field': 'HI_NumberDensity',
+                 'field': 'H_number_density',
                  'f': [.4164],
                  'Gamma': [6.265E8],
                  'wavelength': [1215.67],
@@ -36,7 +33,7 @@
                  'init_N': 1E14}
 
 OVI_parameters = {'name': 'OVI',
-                  'field': 'OVI_number_density',
+                  'field': 'O_p5_number_density',
                   'f': [.1325, .06580],
                   'Gamma': [4.148E8, 4.076E8],
                   'wavelength': [1031.9261, 1037.6167],
@@ -69,7 +66,6 @@
                   solution_filename='lightraysolution.txt',
                   data_filename='lightray.h5',
                   fields=fields,
-                  get_nearest_halo=False,
                   get_los_velocity=True,
                   njobs=-1)
 

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 doc/source/cookbook/halo_profiler.py
--- a/doc/source/cookbook/halo_profiler.py
+++ b/doc/source/cookbook/halo_profiler.py
@@ -1,51 +1,44 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+from yt.mods import *
+from yt.analysis_modules.halo_analysis.api import *
 
-from yt.mods import *
+# Load the data set with the full simulation information
+# and rockstar halos
+data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
+halos_pf = load('rockstar_halos/halos_0.0.bin')
 
-from yt.analysis_modules.halo_profiler.api import *
+# Instantiate a catalog using those two parameter files
+hc = HaloCatalog(data_pf=data_pf, halos_pf=halos_pf)
 
-# Define a custom function to be called on all halos.
-# The first argument is a dictionary containing the
-# halo id, center, etc.
-# The second argument is the sphere centered on the halo.
-def get_density_extrema(halo, sphere):
-    my_extrema = sphere.quantities['Extrema']('density')
-    mylog.info('Halo %d has density extrema: %s',
-               halo['id'], my_extrema)
+# Filter out less massive halos
+hc.add_filter("quantity_value", "particle_mass", ">", 1e14, "Msun")
 
+# attach a sphere object to each halo whose radius extends
+#   to twice the radius of the halo
+hc.add_callback("sphere", factor=2.0)
 
-# Instantiate HaloProfiler for this dataset.
-hp = HaloProfiler('enzo_tiny_cosmology/DD0046/DD0046',
-                  output_dir='.')
+# use the sphere to calculate radial profiles of gas density
+# weighted by cell volume in terms of the virial radius
+hc.add_callback("profile", x_field="radius",
+                y_fields=[("gas", "overdensity")],
+                weight_field="cell_volume",
+                accumulation=False,
+                storage="virial_quantities_profiles")
 
-# Add a filter to remove halos that have no profile points with overdensity
-# above 200, and with virial masses less than 1e10 solar masses.
-# Also, return the virial mass and radius to be written out to a file.
-hp.add_halo_filter(amods.halo_profiler.VirialFilter, must_be_virialized=True,
-                   overdensity_field='ActualOverdensity',
-                   virial_overdensity=200,
-                   virial_filters=[['TotalMassMsun', '>=', '1e10']],
-                   virial_quantities=['TotalMassMsun', 'RadiusMpc'])
 
-# Add profile fields.
-hp.add_profile('cell_volume', weight_field=None, accumulation=True)
-hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
-hp.add_profile('density', weight_field='cell_mass', accumulation=False)
-hp.add_profile('temperature', weight_field='cell_mass', accumulation=False)
+hc.add_callback("virial_quantities", ["radius"],
+                profile_storage="virial_quantities_profiles")
+hc.add_callback('delete_attribute', 'virial_quantities_profiles')
 
-# Make profiles and output filtered halo list to FilteredQuantities.h5.
-hp.make_profiles(filename="FilteredQuantities.h5",
-                 profile_format='hdf5', njobs=-1)
+field_params = dict(virial_radius=('quantity', 'radius_200'))
+hc.add_callback('sphere', radius_field='radius_200', factor=5,
+                field_parameters=field_params)
+hc.add_callback('profile', 'virial_radius', [('gas', 'temperature')],
+                storage='virial_profiles',
+                weight_field='cell_mass',
+                accumulation=False, output_dir='profiles')
 
-# Add projection fields.
-hp.add_projection('density', weight_field=None)
-hp.add_projection('temperature', weight_field='density')
-hp.add_projection('metallicity', weight_field='density')
+# Save the profiles
+hc.add_callback("save_profiles", storage="virial_profiles",
+                output_dir="profiles")
 
-# Make projections just along the x axis using the filtered halo list.
-hp.make_projections(save_cube=False, save_images=True,
-                    halo_list='filtered', axes=[0], njobs=-1)
-
-# Run our custom analysis function on all halos in the filtered list.
-hp.analyze_halo_spheres(get_density_extrema, njobs=-1)
+hc.create()
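
After ``hc.create()`` runs, the catalog and the per-halo profiles stored by
``save_profiles`` are written to disk. A rough sketch of reading them back
follows; the default catalog path and the ``load_profiles`` callback arguments
are assumptions to be checked against the halo_analysis documentation.

.. code-block:: python

    from yt.mods import *
    from yt.analysis_modules.halo_analysis.api import *

    # Assumed default location of the catalog written by hc.create();
    # adjust if a different output_dir was used.
    halos_pf = load("halo_catalogs/catalog/catalog.0.h5")
    hc = HaloCatalog(halos_pf=halos_pf,
                     output_dir="halo_catalogs/catalog")

    # Mirror of the save_profiles call above: pull the stored radial
    # profiles back onto each halo object (argument names assumed).
    hc.add_callback("load_profiles", storage="virial_profiles",
                    output_dir="profiles")
    hc.load()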

diff -r 74fd61909c0d981bac0f3e61b4a3597bccf442f6 -r b04c9b3692db47c99a367f0decda46fcbbca3801 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -118,7 +118,7 @@
            if True, include line of sight velocity for shifting lines.
         """
 
-        input_fields = ['dl', 'redshift', 'Temperature']
+        input_fields = ['dl', 'redshift', 'temperature']
         field_data = {}
         if use_peculiar_velocity: input_fields.append('los_velocity')
         for feature in self.line_list + self.continuum_list:
@@ -201,14 +201,14 @@
                 delta_lambda += line['wavelength'] * (1 + field_data['redshift']) * \
                     field_data['los_velocity'] / speed_of_light_cgs
             thermal_b = km_per_cm * np.sqrt((2 * boltzmann_constant_cgs *
-                                             field_data['Temperature']) /
+                                             field_data['temperature']) /
                                             (amu_cgs * line['atomic_mass']))
             center_bins = np.digitize((delta_lambda + line['wavelength']),
                                       self.lambda_bins)
 
             # ratio of line width to bin width
-            width_ratio = (line['wavelength'] + delta_lambda) * \
-                thermal_b / speed_of_light_kms / self.bin_width
+            width_ratio = ((line['wavelength'] + delta_lambda) * \
+                thermal_b / speed_of_light_kms / self.bin_width).value
 
             # do voigt profiles for a subset of the full spectrum
             left_index  = (center_bins -

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


