[yt-svn] commit/yt: 4 new changesets

commits-noreply at bitbucket.org
Thu Jul 24 09:07:28 PDT 2014


4 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/0c224e0c239a/
Changeset:   0c224e0c239a
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-22 23:01:50
Summary:     Updating the installation instructions.
Affected #:  8 files

diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/analyzing/units/index.rst
--- a/doc/source/analyzing/units/index.rst
+++ b/doc/source/analyzing/units/index.rst
@@ -12,9 +12,9 @@
 and execute the documentation interactively, you need to download the repository
 and start the IPython notebook.
 
-If you installed `yt` using the install script, you will need to navigate to
-:code:`$YT_DEST/src/yt-hg/doc/source/units`, then start an IPython notebook
-server:
+You will then need to navigate to :code:`$YT_HG/doc/source/units` (where $YT_HG
+is the location of a clone of the yt mercurial repository), and then start an
+IPython notebook server:
 
 .. code:: bash
   

diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -55,11 +55,11 @@
 
 .. code-block:: bash
 
-   cd $YT_DEST/src/yt-hg/doc
+   cd $YT_HG/doc
    make html
 
 This will produce an html version of the documentation locally in the 
-``$YT_DEST/src/yt-hg/doc/build/html`` directory.  You can now go there and open
+``$YT_HG/doc/build/html`` directory.  You can now go there and open
 up ``index.html`` or whatever file you wish in your web browser.
 
 Building the docs (full)
@@ -116,7 +116,7 @@
 
 .. code-block:: bash
 
-   cd $YT_DEST/src/yt-hg/doc
+   cd $YT_HG/doc
    make html
 
 If all of the dependencies are installed and all of the test data is in the

diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -185,17 +185,24 @@
 Making and Sharing Changes
 ++++++++++++++++++++++++++
 
-The simplest way to submit changes to yt is to commit changes in your
-``$YT_DEST/src/yt-hg`` directory, fork the repository on BitBucket,  push the
-changesets to your fork, and then issue a pull request.  
+The simplest way to submit changes to yt is to do the following:
+
+  * Build yt from the mercurial repository (see :ref:`source-installation`)
+  * Navigate to the root of the yt repository 
+  * Make some changes and commit them
+  * Fork the `yt repository on BitBucket <https://bitbucket.org/yt_analysis/yt>`_
+  * Push the changesets to your fork
+  * Issue a pull request.
 
 Here's a more detailed flowchart of how to submit changes.
 
   #. If you have used the installation script, the source code for yt can be
-     found in ``$YT_DEST/src/yt-hg``.  (Below, in :ref:`reading-source`, 
-     we describe how to find items of interest.)  Edit the source file you are
-     interested in and test your changes.  (See :ref:`testing` for more
-     information.)
+     found in ``$YT_DEST/src/yt-hg``.  Alternatively, see
+     :ref:`source-installation` for instructions on how to build yt from the
+     mercurial repository. (Below, in :ref:`reading-source`, we describe how to
+     find items of interest.)  
+  #. Edit the source file you are interested in and
+     test your changes.  (See :ref:`testing` for more information.)
   #. Fork yt on BitBucket.  (This step only has to be done once.)  You can do
      this at: https://bitbucket.org/yt_analysis/yt/fork .  Call this repository
      ``yt``.
@@ -207,7 +214,7 @@
      these changes as well.
   #. Push your changes to your new fork using the command::
 
-        hg push https://bitbucket.org/YourUsername/yt/
+        hg push -r . https://bitbucket.org/YourUsername/yt/
  
      If you end up doing considerable development, you can set an alias in the
      file ``.hg/hgrc`` to point to this path.
@@ -244,9 +251,9 @@
 include a recipe in the cookbook section, or it could simply be adding a note 
 in the relevant docs text somewhere.
 
-The documentation exists in the main mercurial code repository for yt in the 
-``doc`` directory (i.e. ``$YT_DEST/src/yt-hg/doc/source`` on systems installed 
-using the installer script).  It is organized hierarchically into the main 
+The documentation exists in the main mercurial code repository for yt in the
+``doc`` directory (i.e. ``$YT_HG/doc/source`` where ``$YT_HG`` is the path of
+the yt mercurial repository).  It is organized hierarchically into the main
 categories of:
 
  * Visualizing
@@ -345,16 +352,6 @@
 yt``), then you must "activate" it using the following commands from within the
 repository directory.
 
-In order to do this for the first time with a new repository, you have to
-copy some config files over from your yt installation directory (where yt
-was initially installed from the install_script.sh).  Try this:
-
-.. code-block:: bash
-
-   $ cp $YT_DEST/src/yt-hg/*.cfg <REPOSITORY_NAME>
-
-and then every time you want to "activate" a different repository of yt.
-
 .. code-block:: bash
 
    $ cd <REPOSITORY_NAME>
@@ -367,11 +364,16 @@
 How To Read The Source Code
 ---------------------------
 
-If you just want to *look* at the source code, you already have it on your
-computer.  Go to the directory where you ran the install_script.sh, then
-go to ``$YT_DEST/src/yt-hg`` .  In this directory are a number of
-subdirectories with different components of the code, although most of them
-are in the yt subdirectory.  Feel free to explore here.
+If you just want to *look* at the source code, you may already have it on your
+computer.  If you built yt using the install script, the source is available at
+``$YT_DEST/src/yt-hg``.  See :ref:`source-installation` for more details about
+how to obtain the yt source code if you did not build yt using the install
+script.
+
+The root directory of the yt mercurial repository contains a number of
+subdirectories with different components of the code.  Most of the yt source
+code is contained in the ``yt`` subdirectory.  This directory itself contains
+the following subdirectories:
 
    ``frontends``
       This is where interfaces to codes are created.  Within each subdirectory of
@@ -380,10 +382,19 @@
       * ``data_structures.py``, where subclasses of AMRGridPatch, Dataset
         and AMRHierarchy are defined.
       * ``io.py``, where a subclass of IOHandler is defined.
+      * ``fields.py``, where fields we expect to find in datasets are defined
       * ``misc.py``, where any miscellaneous functions or classes are defined.
       * ``definitions.py``, where any definitions specific to the frontend are
         defined.  (i.e., header formats, etc.)
 
+   ``fields``
+      This is where all of the derived fields that ship with yt are defined.
+
+   ``geometry`` 
+      This is where geometric helper routines are defined. Handlers
+      for grid and oct data, as well as helpers for coordinate transformations
+      can be found here.
+
    ``visualization``
       This is where all visualization modules are stored.  This includes plot
       collections, the volume rendering interface, and pixelization frontends.
@@ -409,6 +420,10 @@
       All broadly useful code that doesn't clearly fit in one of the other
       categories goes here.
 
+   ``extern`` 
+      Bundled external modules (i.e. code that was not written by one of
+      the yt authors but that yt depends on) lives here.
+
 
 If you're looking for a specific file or function in the yt source code, use
 the unix find command:

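[Editor's note: the ``find`` invocation itself falls outside the hunk above. As a hedged sketch only (``find_files`` is a hypothetical helper, not part of yt), the same recursive filename lookup can be done in Python:]

```python
import fnmatch
import os


def find_files(root, pattern):
    """Recursively collect paths under *root* whose basename matches
    *pattern*, mimicking ``find <root> -name '<pattern>'``."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(dirpath, name))
    return sorted(matches)
```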
diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/developing/intro.rst
--- a/doc/source/developing/intro.rst
+++ b/doc/source/developing/intro.rst
@@ -66,11 +66,11 @@
 typo or grammatical fixes, adding a FAQ, or increasing coverage of
 functionality, it would be very helpful if you wanted to help out.
 
-The easiest way to help out is to fork the main yt repository (where 
-the documentation lives in the ``$YT_DEST/src/yt-hg/doc`` directory,
-and then make your changes in your own fork.  When you are done, issue a pull
-request through the website for your new fork, and we can comment back and
-forth and eventually accept your changes.
+The easiest way to help out is to fork the main yt repository (where the
+documentation lives in the ``doc`` directory in the root of the yt mercurial
+repository) and then make your changes in your own fork.  When you are done,
+issue a pull request through the website for your new fork, and we can comment
+back and forth and eventually accept your changes.
 
 One of the more interesting ways we are attempting to do lately is to add
 screencasts to the documentation -- these are recordings of people executing

diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -59,11 +59,13 @@
    $ cd $YT_HG
    $ nosetests
 
+where ``$YT_HG`` is the path to the root of the yt mercurial repository.
+
 If you want to specify a specific unit test to run (and not run the entire
 suite), you can do so by specifying the path of the test relative to the
-``$YT_DEST/src/yt-hg/yt`` directory -- note that you strip off one ``yt`` more
-than you normally would!  For example, if you want to run the
-plot_window tests, you'd run:
+``$YT_HG/yt`` directory -- note that you strip off one ``yt`` more than you
+normally would!  For example, if you want to run the plot_window tests, you'd
+run:
 
 .. code-block:: bash
 
@@ -172,7 +174,7 @@
    $ nosetests --with-answer-testing
 
 In either case, the current gold standard results will be downloaded from the
-amazon cloud and compared to what is generated locally.  The results from a
+Rackspace cloud and compared to what is generated locally.  The results from a
 nose testing session are pretty straightforward to understand, the results for
 each test are printed directly to STDOUT. If a test passes, nose prints a
 period, F if a test fails, and E if the test encounters an exception or errors

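[Editor's note: the "strip off one ``yt`` more than you normally would" rule above can be sketched in Python. ``nose_path`` is a hypothetical illustration, not a yt function: it computes the path you would hand to ``nosetests``, relative to ``$YT_HG/yt``.]

```python
import os.path


def nose_path(repo_root, test_file):
    """Return the path to pass to nosetests: the test file's location
    relative to ``<repo_root>/yt``, i.e. with one leading ``yt``
    stripped beyond the repository root."""
    return os.path.relpath(test_file, os.path.join(repo_root, "yt"))
```

For example, ``nose_path("/src/yt-hg", "/src/yt-hg/yt/visualization/tests/test_plot_window.py")`` yields ``visualization/tests/test_plot_window.py``.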
diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/help/index.rst
--- a/doc/source/help/index.rst
+++ b/doc/source/help/index.rst
@@ -88,31 +88,40 @@
 -----------------------
 
 We've done our best to make the source clean, and it is easily searchable from 
-your computer.  Go inside your yt install directory by going to the 
-``$YT_DEST/src/yt-hg/yt`` directory where all the code lives.  You can then search 
-for the class, function, or keyword which is giving you problems with 
-``grep -r *``, which will recursively search throughout the code base.  (For a 
-much faster and cleaner experience, we recommend ``grin`` instead of 
-``grep -r *``.  To install ``grin`` with python, just type ``pip install 
-grin``.)  
+your computer.
 
-So let's say that pesky ``SlicePlot`` is giving you problems still, and you 
-want to look at the source to figure out what is going on.
+If you have not done so already (see :ref:`source-installation`), clone a copy of the yt mercurial repository and make it the 'active' installation by doing
+
+.. code-block:: bash
+
+  python setup.py develop
+
+in the root directory of the yt mercurial repository.
+
+.. note::
+
+  This has already been done for you if you installed using the bash install
+  script.  Building yt from source will not work if you do not have a C compiler
+  installed.
+
+Once inside the yt mercurial repository, you can then search for the class,
+function, or keyword which is giving you problems with ``grep -r *``, which will
+recursively search throughout the code base.  (For a much faster and cleaner
+experience, we recommend ``grin`` instead of ``grep -r *``.  To install ``grin``
+with python, just type ``pip install grin``.)
+
+So let's say that ``SlicePlot`` is giving you problems still, and you want to
+look at the source to figure out what is going on.
 
 .. code-block:: bash
 
-  $ cd $YT_DEST/src/yt-hg/yt
+  $ cd $YT_HG/yt
   $ grep -r SlicePlot *         (or $ grin SlicePlot)
-  
-   data_objects/analyzer_objects.py:class SlicePlotDataset(AnalysisTask):
-   data_objects/analyzer_objects.py:        from yt.visualization.api import SlicePlot
-   data_objects/analyzer_objects.py:        self.SlicePlot = SlicePlot
-   data_objects/analyzer_objects.py:        slc = self.SlicePlot(ds, self.axis, self.field, center = self.center)
-   ...
 
-You can now followup on this and open up the files that have references to 
-``SlicePlot`` (particularly the one that definese SlicePlot) and inspect their
-contents for problems or clarification.
+This will print a number of locations in the yt source tree where ``SlicePlot``
+is mentioned.  You can now follow up on this and open up the files that have
+references to ``SlicePlot`` (particularly the one that defines SlicePlot) and
+inspect their contents for problems or clarification.
 
 .. _isolate_and_document:
 
@@ -128,7 +137,6 @@
  * Put your script, errors, and outputs online:
 
    * ``$ yt pastebin script.py`` - pastes script.py online
-   * ``$ python script.py --paste`` - pastes errors online
    * ``$ yt upload_image image.png`` - pastes image online
 
  * Identify which version of the code you’re using. 

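[Editor's note: the recursive ``grep -r`` / ``grin`` search described above amounts to walking the source tree and scanning each file. A minimal sketch, assuming nothing beyond the standard library (``search_source`` is illustrative, not a yt or grin API):]

```python
import os


def search_source(root, keyword):
    """Minimal sketch of ``grep -r <keyword> *``: yield
    (path, line_number, line) for every matching line under *root*."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as fh:
                    for number, line in enumerate(fh, 1):
                        if keyword in line:
                            yield path, number, line.rstrip("\n")
            except (OSError, UnicodeDecodeError):
                continue  # skip unreadable or binary files
```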
diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -8,147 +8,21 @@
 Getting yt
 ----------
 
-yt is a Python package (with some components written in C), using NumPy as a
-computation engine, Matplotlib for some visualization tasks and Mercurial for
-version control.  Because installation of all of these interlocking parts can 
-be time-consuming, yt provides an installation script which downloads and builds
-a fully-isolated Python + NumPy + Matplotlib + HDF5 + Mercurial installation.  
-yt supports Linux and OSX deployment, with the possibility of deployment on 
-other Unix-like systems (XSEDE resources, clusters, etc.).
+yt is a Python package, using NumPy as a computation engine, Matplotlib for some
+visualization tasks, h5py and the hdf5 library for I/O, sympy for symbolic
+computations, Cython for speedy computations, and Mercurial for version
+control. To install yt, all of these supplementary packages must already be
+available.
 
-Since the install is fully-isolated, if you get tired of having yt on your 
-system, you can just delete its directory, and yt and all of its dependencies
-will be removed from your system (no scattered files remaining throughout 
-your system).  
-
-To get the installation script, download it from:
-
-.. code-block:: bash
-
-  http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
-
-.. _installing-yt:
-
-Installing yt
--------------
-
-By default, the bash script will install an array of items, but there are 
-additional packages that can be downloaded and installed (e.g. SciPy, enzo, 
-etc.). The script has all of these options at the top of the file. You should 
-be able to open it and edit it without any knowledge of bash syntax.  
-To execute it, run:
-
-.. code-block:: bash
-
-  $ bash install_script.sh
-
-Because the installer is downloading and building a variety of packages from
-source, this will likely take a while (e.g. 20 minutes), but you will get 
-updates of its status at the command line throughout.
-
-If you receive errors during this process, the installer will provide you 
-with a large amount of information to assist in debugging your problems.  The 
-file ``yt_install.log`` will contain all of the ``STDOUT`` and ``STDERR`` from 
-the entire installation process, so it is usually quite cumbersome.  By looking 
-at the last few hundred lines (i.e. ``tail -500 yt_install.log``), you can 
-potentially figure out what went wrong.  If you have problems, though, do not 
-hesitate to :ref:`contact us <asking-for-help>` for assistance.
-
-.. _activating-yt:
-
-Activating Your Installation
-----------------------------
-
-Once the installation has completed, there will be instructions on how to set up 
-your shell environment to use yt by executing the activate script.  You must 
-run this script in order to have yt properly recognized by your system.  You can 
-either add it to your login script, or you must execute it in each shell session 
-prior to working with yt.
-
-.. code-block:: bash
-
-  $ source <yt installation directory>/bin/activate
-
-If you use csh or tcsh as your shell, activate that version of the script:
-
-.. code-block:: bash
-
-  $ source <yt installation directory>/bin/activate.csh
-
-If you don't like executing outside scripts on your computer, you can set 
-the shell variables manually.  ``YT_DEST`` needs to point to the root of the
-directory containing the install. By default, this will be ``yt-<arch>``, where
-``<arch>`` is your machine's architecture (usually ``x86_64`` or ``i386``). You 
-will also need to set ``LD_LIBRARY_PATH`` and ``PYTHONPATH`` to contain 
-``$YT_DEST/lib`` and ``$YT_DEST/python2.7/site-packages``, respectively.
-
-.. _testing-installation:
-
-Testing Your Installation
--------------------------
-
-To test to make sure everything is installed properly, try running yt at
-the command line:
-
-.. code-block:: bash
-
-  $ yt --help
-
-If this works, you should get a list of the various command-line options for
-yt, which means you have successfully installed yt.  Congratulations!
-
-If you get an error, follow the instructions it gives you to debug the problem.
-Do not hesitate to :ref:`contact us <asking-for-help>` so we can help you
-figure it out.
-
-If you like, this might be a good time :ref:`to run the test suite <testing>`.
-
-.. _updating-yt:
-
-Updating yt and its dependencies
---------------------------------
-
-With many active developers, code development sometimes occurs at a furious
-pace in yt.  To make sure you're using the latest version of the code, run
-this command at a command-line:
-
-.. code-block:: bash
-
-  $ yt update
-
-Additionally, if you want to make sure you have the latest dependencies
-associated with yt and update the codebase simultaneously, type this:
-
-.. code-block:: bash
-
-  $ yt update --all
-
-.. _removing-yt:
-
-Removing yt and its dependencies
---------------------------------
-
-Because yt and its dependencies are installed in an isolated directory when
-you use the script installer, you can easily remove yt and all of its
-dependencies cleanly.  Simply remove the install directory and its
-subdirectories and you're done.  If you *really* had problems with the
-code, this is a last defense for solving: remove and then fully
-:ref:`re-install <installing-yt>` from the install script again.
-
-.. _alternative-installation:
-
-Alternative Installation Methods
---------------------------------
-
-.. _pip-installation:
+.. _source-installation:
 
 Installing yt Using pip or from Source
 ++++++++++++++++++++++++++++++++++++++
 
-If you want to forego the use of the install script, you need to make sure you
-have yt's dependencies installed on your system.  These include: a C compiler,
-``HDF5``, ``python``, ``cython``, ``NumPy``, ``matplotlib``, and ``h5py``. From here,
-you can use ``pip`` (which comes with ``Python``) to install yt as:
+To install yt from source, you must make sure you have yt's dependencies
+installed on your system.  These include: a C compiler, ``HDF5``, ``python``,
+``Cython``, ``NumPy``, ``matplotlib``, ``sympy``, and ``h5py``. From here, you
+can use ``pip`` (which comes with ``Python``) to install yt as:
 
 .. code-block:: bash
 
@@ -171,43 +45,198 @@
 If you choose this installation method, you do not need to run the activation
 script as it is unnecessary.
 
+Keeping yt Updated via Mercurial
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to maintain your yt installation via updates straight from the
+Bitbucket repository or if you want to do some development on your own, we
+suggest you check out some of the :ref:`development docs <contributing-code>`,
+especially the sections on :ref:`Mercurial <mercurial-with-yt>` and
+:ref:`building yt from source <building-yt>`.
+
 .. _anaconda-installation:
 
 Installing yt Using Anaconda
 ++++++++++++++++++++++++++++
 
-Perhaps the quickest way to get yt up and running is to install it using the `Anaconda Python
-Distribution <https://store.continuum.io/cshop/anaconda/>`_, which will provide you with a
-easy-to-use environment for installing Python packages. To install a bare-bones Python
-installation with yt, first visit http://repo.continuum.io/miniconda/ and download a recent
-version of the ``Miniconda-x.y.z`` script (corresponding to Python 2.7) for your platform and
+Perhaps the quickest way to get yt up and running is to install it using the
+`Anaconda Python Distribution <https://store.continuum.io/cshop/anaconda/>`_,
+which will provide you with an easy-to-use environment for installing Python
+packages.
+
+If you do not want to install the full Anaconda Python distribution, you can
+install a bare-bones Python installation using miniconda.  To install miniconda,
+visit http://repo.continuum.io/miniconda/ and download a recent version of the
+``Miniconda-x.y.z`` script (corresponding to Python 2.7) for your platform and
 system architecture. Next, run the script, e.g.:
 
 .. code-block:: bash
 
-  $ bash Miniconda-3.3.0-Linux-x86_64.sh
+  bash Miniconda-3.3.0-Linux-x86_64.sh
 
 Make sure that the Anaconda ``bin`` directory is in your path, and then issue:
 
 .. code-block:: bash
 
-  $ conda install yt
+  conda install yt
 
 which will install yt along with all of its dependencies.
 
+Recipes to build conda packages for yt are available at
+https://github.com/conda/conda-recipes.  To build the yt conda recipe, first
+clone the conda-recipes repository
+
+.. code-block:: bash
+
+  git clone https://github.com/conda/conda-recipes
+
+Then navigate to the repository root and invoke ``conda build``:
+
+.. code-block:: bash
+
+  cd conda-recipes
+  conda build ./yt/
+
+Note that building a yt conda package requires a C compiler.
+
 .. _windows-installation:
 
 Installing yt on Windows
 ++++++++++++++++++++++++
 
-Installation on Microsoft Windows is only supported for Windows XP Service Pack 3 and
-higher (both 32-bit and 64-bit) using Anaconda.
+Installation on Microsoft Windows is only supported for Windows XP Service Pack
+3 and higher (both 32-bit and 64-bit) using Anaconda, see
+:ref:`anaconda-installation`.
 
-Keeping yt Updated via Mercurial
-++++++++++++++++++++++++++++++++
+All-in-one installation script
+++++++++++++++++++++++++++++++
 
-If you want to maintain your yt installation via updates straight from the Bitbucket repository,
-or if you want to do some development on your own, we suggest you check out some of the
-:ref:`development docs <contributing-code>`, especially the sections on :ref:`Mercurial
-<mercurial-with-yt>` and :ref:`building yt from source <building-yt>`.
+Because installation of all of the interlocking parts necessary to install yt
+itself can be time-consuming, yt provides an all-in-one installation script
+which downloads and builds a fully-isolated Python + NumPy + Matplotlib + HDF5 +
+Mercurial installation. Since the install script compiles yt's dependencies from
+source, you must have C, C++, and optionally Fortran compilers installed.
 
+The install script supports UNIX-like systems, including Linux, OS X, and most
+supercomputer and cluster environments. It is particularly suited for deployment
+on clusters where users do not usually have root access and can only install
+software into their home directory.
+
+Since the install is fully-isolated in a single directory, if you get tired of
+having yt on your system, you can just delete the directory and yt and all of
+its dependencies will be removed from your system (no scattered files remaining
+throughout your system).
+
+Running the install script
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To get the installation script, download it from:
+
+.. code-block:: bash
+
+  wget http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
+
+.. _installing-yt:
+
+By default, the bash install script will install an array of items, but there
+are additional packages that can be downloaded and installed (e.g. SciPy, enzo,
+etc.). The script has all of these options at the top of the file. You should be
+able to open it and edit it without any knowledge of bash syntax.  To execute
+it, run:
+
+.. code-block:: bash
+
+  bash install_script.sh
+
+Because the installer is downloading and building a variety of packages from
+source, this will likely take a while (e.g. 20 minutes), but you will get 
+updates of its status at the command line throughout.
+
+If you receive errors during this process, the installer will provide you 
+with a large amount of information to assist in debugging your problems.  The 
+file ``yt_install.log`` will contain all of the ``stdout`` and ``stderr`` from 
+the entire installation process, so it is usually quite cumbersome.  By looking 
+at the last few hundred lines (i.e. ``tail -500 yt_install.log``), you can 
+potentially figure out what went wrong.  If you have problems, though, do not 
+hesitate to :ref:`contact us <asking-for-help>` for assistance.
+
+.. _activating-yt:
+
+Activating Your Installation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Once the installation has completed, there will be instructions on how to set up 
+your shell environment to use yt by executing the activate script.  You must 
+run this script in order to have yt properly recognized by your system.  You can 
+either add it to your login script, or you must execute it in each shell session 
+prior to working with yt.
+
+.. code-block:: bash
+
+  source <yt installation directory>/bin/activate
+
+If you use csh or tcsh as your shell, activate that version of the script:
+
+.. code-block:: bash
+
+  source <yt installation directory>/bin/activate.csh
+
+If you don't like executing outside scripts on your computer, you can set 
+the shell variables manually.  ``YT_DEST`` needs to point to the root of the
+directory containing the install. By default, this will be ``yt-<arch>``, where
+``<arch>`` is your machine's architecture (usually ``x86_64`` or ``i386``). You 
+will also need to set ``LD_LIBRARY_PATH`` and ``PYTHONPATH`` to contain 
+``$YT_DEST/lib`` and ``$YT_DEST/python2.7/site-packages``, respectively.
+
+.. _updating-yt:
+
+Updating yt and its dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+With many active developers, code development sometimes occurs at a furious
+pace in yt.  To make sure you're using the latest version of the code, run
+this command at a command-line:
+
+.. code-block:: bash
+
+  yt update
+
+Additionally, if you want to make sure you have the latest dependencies
+associated with yt and update the codebase simultaneously, type this:
+
+.. code-block:: bash
+
+  yt update --all
+
+.. _removing-yt:
+
+Removing yt and its dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Because yt and its dependencies are installed in an isolated directory when
+you use the script installer, you can easily remove yt and all of its
+dependencies cleanly.  Simply remove the install directory and its
+subdirectories and you're done.  If you *really* had problems with the
+code, this is a last defense for solving: remove and then fully
+:ref:`re-install <installing-yt>` from the install script again.
+
+.. _testing-installation:
+
+Testing Your Installation
+-------------------------
+
+To test to make sure everything is installed properly, try running yt at
+the command line:
+
+.. code-block:: bash
+
+  yt --help
+
+If this works, you should get a list of the various command-line options for
+yt, which means you have successfully installed yt.  Congratulations!
+
+If you get an error, follow the instructions it gives you to debug the problem.
+Do not hesitate to :ref:`contact us <asking-for-help>` so we can help you
+figure it out.
+
+If you like, this might be a good time :ref:`to run the test suite <testing>`.

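[Editor's note: the manual activation described above amounts to setting three shell variables. A small illustrative sketch of that mapping (not the actual activate script, which does more):]

```python
import os


def activation_env(yt_dest):
    """Variables the activate script sets, per the instructions above:
    YT_DEST at the install root, plus library and Python search paths."""
    return {
        "YT_DEST": yt_dest,
        "LD_LIBRARY_PATH": os.path.join(yt_dest, "lib"),
        "PYTHONPATH": os.path.join(yt_dest, "python2.7", "site-packages"),
    }
```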
diff -r 3c4dc9e27719f260e29bcbc6ad18c4a3601ed1f9 -r 0c224e0c239a1ba1b61c81c9181d672adde467fd doc/source/reference/faq/index.rst
--- a/doc/source/reference/faq/index.rst
+++ b/doc/source/reference/faq/index.rst
@@ -196,33 +196,10 @@
 
 .. code-block:: bash
 
-    cd $YT_DEST/src/yt-hg
+    cd $YT_HG
     python setup.py develop
 
-
-Unresolved Installation Problem on OSX 10.6
--------------------------------------------
-When installing on some instances of OSX 10.6, a few users have noted a failure
-when yt tries to build with OpenMP support:
-
-    Symbol not found: _GOMP_barrier
-        Referenced from: <YT_DEST>/src/yt-hg/yt/utilities/lib/grid_traversal.so
-
-        Expected in: dynamic lookup
-
-To resolve this, please make a symbolic link:
-
-.. code-block:: bash
-
-  $ ln -s /usr/local/lib/x86_64 <YT_DEST>/lib64
-
-where ``<YT_DEST>`` is replaced by the path to the root of the directory
-containing the yt install, which will usually be ``yt-<arch>``. After doing so, 
-you should be able to cd to <YT_DEST>/src/yt-hg and run:
-
-.. code-block:: bash
-
-  $ python setup.py install
+where ``$YT_HG`` is the path to the yt mercurial repository.
 
 .. _plugin-file:
 


https://bitbucket.org/yt_analysis/yt/commits/c847b3889bb4/
Changeset:   c847b3889bb4
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-23 20:35:04
Summary:     merging with yt-3.0 tip
Affected #:  33 files

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -2,185 +2,135 @@
 
 Clump Finding
 =============
-.. sectionauthor:: Britton Smith <britton.smith at colorado.edu>
 
-``yt`` has the ability to identify topologically disconnected structures based in a dataset using 
-any field available.  This is powered by a contouring algorithm that runs in a recursive 
-fashion.  The user specifies the initial data object in which the clump-finding will occur, 
-the field over which the contouring will be done, the upper and lower limits of the 
-initial contour, and the contour increment.
+The clump finder uses a contouring algorithm to identify topologically 
+disconnected structures within a dataset.  This works by first creating a 
+single contour over the full range of the contouring field, then continually 
+increasing the lower value of the contour until it reaches the maximum value 
+of the field.  As disconnected structures are identified as separate contours, 
+the routine continues recursively through each object, creating a hierarchy of 
+clumps.  Individual clumps can be kept or removed from the hierarchy based on 
+the result of user-specified functions, such as checking for gravitational 
+boundedness.  A sample recipe can be found in :ref:`cookbook-find_clumps`.
 
-The clump finder begins by creating a single contour of the specified field over the entire 
-range given.  For every isolated contour identified in the initial iteration, contouring is 
-repeated with the same upper limit as before, but with the lower limit increased by the 
-specified increment.  This repeated for every isolated group until the lower limit is equal 
-to the upper limit.
+The clump finder requires a data container and a field over which the 
+contouring is to be performed.
 
-Often very tiny clumps can appear as groups of only a few cells that happen to be slightly 
-overdense (if contouring over density) with respect to the surrounding gas.  The user may 
-specify criteria that clumps must meet in order to be kept.  The most obvious example is 
-selecting only those clumps that are gravitationally bound.
+.. code:: python
 
-Once the clump-finder has finished, the user can write out a set of quantities for each clump in the 
-index.  Additional info items can also be added.  We also provide a recipe
-for finding clumps in :ref:`cookbook-find_clumps`.
+   import yt
+   from yt.analysis_modules.level_sets.api import *
 
-Treecode Optimization
----------------------
+   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-.. sectionauthor:: Stephen Skory <s at skory.us>
-.. versionadded:: 2.1
+   data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
+                         (8, 'kpc'), (1, 'kpc'))
 
-As mentioned above, the user has the option to limit clumps to those that are
-gravitationally bound.
-The correct and accurate way to calculate if a clump is gravitationally
-bound is to do the full double sum:
+   master_clump = Clump(data_source, ("gas", "density"))
 
-.. math::
+At this point, every isolated contour will be considered a clump, 
+whether this is physical or not.  Validator functions can be added to 
+determine if an individual contour should be considered a real clump.  
+These functions are specified with the ``Clump.add_validator`` function.  
+Currently, two validators exist: a minimum number of cells and gravitational 
+boundedness.
 
-  PE = \Sigma_{i=1}^N \Sigma_{j=i}^N \frac{G M_i M_j}{r_{ij}}
+.. code:: python
 
-where :math:`PE` is the gravitational potential energy of :math:`N` cells,
-:math:`G` is the
-gravitational constant, :math:`M_i` is the mass of cell :math:`i`, 
-and :math:`r_{ij}` is the distance
-between cell :math:`i` and :math:`j`.
-The number of calculations required for this calculation
-grows with the square of :math:`N`. Therefore, for large clumps with many cells, the
-test for boundedness can take a significant amount of time.
+   master_clump.add_validator("min_cells", 20)
 
-An effective way to greatly speed up this calculation with minimal error
-is to use the treecode approximation pioneered by
-`Barnes and Hut (1986) <http://adsabs.harvard.edu/abs/1986Natur.324..446B>`_.
-This method of calculating gravitational potentials works by
-grouping individual masses that are located close together into a larger conglomerated
-mass with a geometric size equal to the distribution of the individual masses.
-For a mass cell that is sufficiently distant from the conglomerated mass,
-the gravitational calculation can be made using the conglomerate, rather than
-each individual mass, which saves time.
+   master_clump.add_validator("gravitationally_bound", use_particles=False)
 
-The decision whether or not to use a conglomerate depends on the accuracy control
-parameter ``opening_angle``. Using the small-angle approximation, a conglomerate
-may be used if its geometric size subtends an angle no greater than the
-``opening_angle`` upon the remote mass. The default value is
-``opening_angle = 1``, which gives errors well under 1%. A value of 
-``opening_angle = 0`` is identical to the full O(N^2) method, and larger values
-will speed up the calculation and sacrifice accuracy (see the figures below).
+As many validators as desired can be added, and a clump is only kept if all 
+of them return True.  If any return False, the clump is merged back into its 
+parent.  Custom validators can easily be added as well.  A validator function 
+need only accept a ``Clump`` object and return True or False.
 
-The treecode method is iterative. Conglomerates may themselves form larger
-conglomerates. And if a larger conglomerate does not meet the ``opening_angle``
-criterion, the smaller conglomerates are tested as well. This iteration of 
-conglomerates will
-cease once the level of the original masses is reached (this is what happens
-for all pair calculations if ``opening_angle = 0``).
+.. code:: python
 
-Below are some examples of how to control the usage of the treecode.
+   def _minimum_gas_mass(clump, min_mass):
+       return (clump["gas", "cell_mass"].sum() >= min_mass)
+   add_validator("minimum_gas_mass", _minimum_gas_mass)
 
-This example will calculate the ratio of the potential energy to kinetic energy
-for a spherical clump using the treecode method with an opening angle of 2.
-The default opening angle is 1.0:
+The ``add_validator`` function adds the validator to a registry that can 
+be accessed by the clump finder.  Then, the validator can be added to the 
+clump finding just like the others.
 
-.. code-block:: python
-  
-  from yt.mods import *
-  
-  ds = load("DD0000")
-  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
-  
-  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
-      treecode=True, opening_angle=2.0)
+.. code:: python
 
-This example will accomplish the same as the above, but will use the full
-N^2 method.
+   master_clump.add_validator("minimum_gas_mass", ds.quan(1.0, "Msun"))
 
-.. code-block:: python
-  
-  from yt.mods import *
-  
-  ds = load("DD0000")
-  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
-  
-  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
-      treecode=False)
+The clump finding algorithm accepts the ``Clump`` object, the initial minimum 
+and maximum of the contouring field, and the step size.  The lower contour 
+value is continually multiplied by the step size until it reaches the maximum.
 
-Here the treecode method is specified for clump finding (this is default).
-Please see the link above for the full example of how to find clumps (the
-trailing backslash is important!):
+.. code:: python
 
-.. code-block:: python
-  
-  function_name = 'self.data.quantities.is_bound(truncate=True, \
-      include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
-  master_clump = amods.level_sets.Clump(data_source, None, field,
-      function=function_name)
+   c_min = data_source["gas", "density"].min()
+   c_max = data_source["gas", "density"].max()
+   step = 2.0
+   find_clumps(master_clump, c_min, c_max, step)
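
Since the lower bound is multiplied by ``step`` at each level, the contour
values examined form a geometric sequence.  As a sketch of that bookkeeping
(``contour_levels`` is a hypothetical helper for illustration, not part of
yt's API):

```python
def contour_levels(c_min, c_max, step):
    # Return the geometric sequence of lower contour bounds that a
    # multiplicative stepping scheme would visit between c_min and c_max.
    levels = []
    level = c_min
    while level < c_max:
        levels.append(level)
        level *= step
    return levels

# With step = 2.0, each contour level doubles the previous one.
levels = contour_levels(1.0, 20.0, 2.0)  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

A larger ``step`` therefore means fewer, coarser contouring passes.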
 
-To turn off the treecode, of course one should turn treecode=False in the
-example above.
+After the clump finding has finished, the master clump will represent the top 
+of a hierarchy of clumps.  The ``children`` attribute within a ``Clump`` object 
+contains a list of all sub-clumps.  Each sub-clump is also a ``Clump`` object 
+with its own ``children`` attribute, and so on.
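
To make the hierarchy concrete, the tree can be walked recursively through the
``children`` attribute.  A minimal sketch, where ``Node`` is a hypothetical
stand-in for any object whose ``children`` is a list (or None at the leaves):

```python
class Node(object):
    # Stand-in for a Clump: holds a list of sub-clumps, None at the leaves.
    def __init__(self, children=None):
        self.children = children

def walk_clumps(clump, level=0):
    # Yield (clump, depth) pairs for every node in the hierarchy,
    # parents before children.
    yield clump, level
    for child in (clump.children or []):
        for pair in walk_clumps(child, level + 1):
            yield pair

# A master clump with two children; the second child has one sub-clump.
master = Node([Node(), Node([Node()])])
depths = [depth for _, depth in walk_clumps(master)]
```

The same pattern underlies the helper routines below, which write out or
collect nodes of the hierarchy.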
 
-Treecode Speedup and Accuracy Figures
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+A number of helper routines exist for examining the clump hierarchy.
 
-Two datasets are used to make the three figures below. Each is a zoom-in
-simulation with high resolution in the middle with AMR, and then lower
-resolution static grids on the periphery. In this way they are very similar to
-a clump in a full-AMR simulation, where there are many AMR levels stacked
-around a density peak. One dataset has a total of 3 levels of AMR, and
-the other has 10 levels, but in other ways are very similar.
+.. code:: python
 
-The first figure shows the effect of varying the opening angle on the speed
-and accuracy of the treecode. The tests were performed using the L=10 
-dataset on a clump with approximately 118,000 cells. The speedup of up the
-treecode is in green, and the accuracy in blue, with the opening angle
-on the x-axis.
+   # Write a text file of the full hierarchy.
+   write_clump_index(master_clump, 0, "%s_clump_hierarchy.txt" % ds)
 
-With an ``opening_angle`` = 0, the accuracy is perfect, but the treecode is
-less than half as fast as the brute-force method. However, by an
-``opening_angle`` of 1, the treecode is now nearly twice as fast, with
-about 0.2% error. This trend continues to an ``opening_angle`` 8, where
-large opening angles have no effect due to geometry.
+   # Write a text file of only the leaf nodes.
+   write_clumps(master_clump, 0, "%s_clumps.txt" % ds)
 
-.. image:: _images/TreecodeOpeningAngleBig.png
-   :width: 450
-   :height: 400
+   # Get a list of just the leaf nodes.
+   leaf_clumps = get_lowest_clumps(master_clump)
 
-Note that the accuracy is always below 1. The treecode will always underestimate
-the gravitational binding energy of a clump.
+``Clump`` objects can be used like all other data containers.
 
-In this next figure, the ``opening_angle`` is kept constant at 1, but the
-number of cells is varied on the L=3 dataset by slowly expanding a spherical
-region of analysis. Up to about 100,000 cells,
-the treecode is actually slower than the brute-force method. This is due to
-the fact that with fewer cells, smaller geometric distances,
-and a shallow AMR index, the treecode
-method has very little chance to be applied. The calculation is overall
-slower due to the overhead of the treecode method & startup costs. This
-explanation is further strengthened by the fact that the accuracy of the
-treecode method stay perfect for the first couple thousand cells, indicating
-that the treecode method is not being applied over that range.
+.. code:: python
 
-Once the number of cells gets high enough, and the size of the region becomes
-large enough, the treecode method can work its magic and the treecode method
-becomes advantageous.
+   print leaf_clumps[0]["gas", "density"]
+   print leaf_clumps[0].quantities.total_mass()
 
-.. image:: _images/TreecodeCellsSmall.png
-   :width: 450
-   :height: 400
+The writing functions will write out a series of properties about each 
+clump by default.  Additional properties can be appended with the 
+``Clump.add_info_item`` function.
 
-The saving grace to the figure above is that for small clumps, a difference of
-50% in calculation time is on the order of a second or less, which is tiny
-compared to the minutes saved for the larger clumps where the speedup can
-be greater than 3.
+.. code:: python
 
-The final figure is identical to the one above, but for the L=10 dataset.
-Due to the higher number of AMR levels, which translates into more opportunities
-for the treecode method to be applied, the treecode becomes faster than the
-brute-force method at only about 30,000 cells. The accuracy shows a different
-behavior, with a dip and a rise, and overall lower accuracy. However, at all
-times the error is still well under 1%, and the time savings are significant.
+   master_clump.add_info_item("total_cells")
 
-.. image:: _images/TreecodeCellsBig.png
-   :width: 450
-   :height: 400
+Just like the validators, custom info items can be added by defining functions 
+that minimally accept a ``Clump`` object and return a string to be printed.
 
-The figures above show that the treecode method is generally very advantageous,
-and that the error introduced is minimal.
+.. code:: python
+
+   def _mass_weighted_jeans_mass(clump):
+       jeans_mass = clump.data.quantities.weighted_average_quantity(
+           "jeans_mass", ("gas", "cell_mass")).in_units("Msun")
+       return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass
+   add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass)
+
+Then, add it to the list:
+
+.. code:: python
+
+   master_clump.add_info_item("mass_weighted_jeans_mass")
+
+By default, the following info items are activated: **total_cells**, 
+**cell_mass**, **mass_weighted_jeans_mass**, **volume_weighted_jeans_mass**, 
+**max_grid_level**, **min_number_density**, **max_number_density**, and 
+**distance_to_main_clump**.
+
+Clumps can be visualized using the ``annotate_clumps`` callback.
+
+.. code:: python
+
+   prj = yt.ProjectionPlot(ds, 2, ("gas", "density"), 
+                           center='c', width=(20, 'kpc'))
+   prj.annotate_clumps(leaf_clumps)
+   prj.save('clumps')

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -97,7 +97,7 @@
 
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
-show_authors = True
+show_authors = False
 
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = 'sphinx'

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/cookbook/amrkdtree_downsampling.py
--- a/doc/source/cookbook/amrkdtree_downsampling.py
+++ b/doc/source/cookbook/amrkdtree_downsampling.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED 
-
 # Using AMRKDTree Homogenized Volumes to examine large datasets
 # at lower resolution.
 
@@ -13,15 +10,15 @@
 import yt
 from yt.utilities.amr_kdtree.api import AMRKDTree
 
-# Load up a dataset
+# Load up a dataset and define the kdtree
 ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
-
 kd = AMRKDTree(ds)
 
 # Print out specifics of KD Tree
 print "Total volume of all bricks = %i" % kd.count_volume()
 print "Total number of cells = %i" % kd.count_cells()
 
+# Define a camera and take a volume rendering.
 tf = yt.ColorTransferFunction((-30, -22))
 cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
                   tf, volume=kd)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py
+++ b/doc/source/cookbook/find_clumps.py
@@ -1,75 +1,50 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import numpy as np
 
 import yt
-from yt.analysis_modules.level_sets.api import (Clump, find_clumps,
-                                                get_lowest_clumps)
+from yt.analysis_modules.level_sets.api import *
 
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # dataset to load
-# this is the field we look for contours over -- we could do
-# this over anything.  Other common choices are 'AveragedDensity'
-# and 'Dark_Matter_Density'.
-field = "density"
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-step = 2.0  # This is the multiplicative interval between contours.
+data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.], 
+                      (1, 'kpc'), (1, 'kpc'))
 
-ds = yt.load(fn)  # load data
+# the field to be used for contouring
+field = ("gas", "density")
 
-# We want to find clumps over the entire dataset, so we'll just grab the whole
-# thing!  This is a convenience parameter that prepares an object that covers
-# the whole domain.  Note, though, that it will load on demand and not before!
-data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
-                      (8., 'kpc'), (1., 'kpc'))
+# This is the multiplicative interval between contours.
+step = 2.0
 
 # Now we set some sane min/max values between which we want to find contours.
 # This is how we tell the clump finder what to look for -- it won't look for
 # contours connected below or above these threshold values.
-c_min = 10**np.floor(np.log10(data_source[field]).min())
-c_max = 10**np.floor(np.log10(data_source[field]).max() + 1)
-
-# keep only clumps with at least 20 cells
-function = 'self.data[\'%s\'].size > 20' % field
+c_min = 10**np.floor(np.log10(data_source[field]).min())
+c_max = 10**np.floor(np.log10(data_source[field]).max() + 1)
 
 # Now find get our 'base' clump -- this one just covers the whole domain.
-master_clump = Clump(data_source, None, field, function=function)
+master_clump = Clump(data_source, field)
 
-# This next command accepts our base clump and we say the range between which
-# we want to contour.  It recursively finds clumps within the master clump, at
-# intervals defined by the step size we feed it.  The current value is
-# *multiplied* by step size, rather than added to it -- so this means if you
-# want to look in log10 space intervals, you would supply step = 10.0.
+# Add a "validator" to weed out clumps with less than 20 cells.
+# You can add as many validators as you want.
+master_clump.add_validator("min_cells", 20)
+
+# Begin clump finding.
 find_clumps(master_clump, c_min, c_max, step)
 
-# As it goes, it appends the information about all the sub-clumps to the
-# master-clump.  Among different ways we can examine it, there's a convenience
-# function for outputting the full index to a file.
-f = open('%s_clump_index.txt' % ds, 'w')
-yt.amods.level_sets.write_clump_index(master_clump, 0, f)
-f.close()
+# Write out the full clump hierarchy.
+write_clump_index(master_clump, 0, "%s_clump_hierarchy.txt" % ds)
 
-# We can also output some handy information, as well.
-f = open('%s_clumps.txt' % ds, 'w')
-yt.amods.level_sets.write_clumps(master_clump, 0, f)
-f.close()
+# Write out only the leaf nodes of the hierarchy.
+write_clumps(master_clump, 0, "%s_clumps.txt" % ds)
 
-# We can traverse the clump index to get a list of all of the 'leaf' clumps
+# We can traverse the clump hierarchy to get a list of all of the 'leaf' clumps
 leaf_clumps = get_lowest_clumps(master_clump)
 
 # If you'd like to visualize these clumps, a list of clumps can be supplied to
 # the "clumps" callback on a plot.  First, we create a projection plot:
-prj = yt.ProjectionPlot(ds, 2, field, center='c', width=(20, 'kpc'))
+prj = yt.ProjectionPlot(ds, 2, field, center='c', width=(20,'kpc'))
 
 # Next we annotate the plot with contours on the borders of the clumps
 prj.annotate_clumps(leaf_clumps)
 
 # Lastly, we write the plot to disk.
 prj.save('clumps')
-
-# We can also save the clump object to disk to read in later so we don't have
-# to spend a lot of time regenerating the clump objects.
-ds.save_object(master_clump, 'My_clumps')
-
-# Later, we can read in the clump object like so,
-master_clump = ds.load_object('My_clumps')

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/cookbook/opaque_rendering.py
--- a/doc/source/cookbook/opaque_rendering.py
+++ b/doc/source/cookbook/opaque_rendering.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 import numpy as np
 

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/cookbook/rendering_with_box_and_grids.py
--- a/doc/source/cookbook/rendering_with_box_and_grids.py
+++ b/doc/source/cookbook/rendering_with_box_and_grids.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 import numpy as np
 

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/cookbook/simple_profile.py
--- a/doc/source/cookbook/simple_profile.py
+++ b/doc/source/cookbook/simple_profile.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 
 # Load the dataset.

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -795,6 +795,47 @@
 PyNE Data
 ---------
 
+`PyNE <http://pyne.io/>`_ Hex8 meshes are supported by yt and cared for by the PyNE development team
+(`pyne-dev@googlegroups.com <pyne-dev%40googlegroups.com>`_). 
+PyNE meshes are based on faceted geometries contained in hdf5 files (suffix ".h5m").
+
+To load a pyne mesh:
+
+.. code-block:: python
+
+  from pyne.mesh import Mesh
+  from pyne.dagmc import load
+
+  from yt.config import ytcfg; ytcfg["yt","suppressStreamLogging"] = "True"
+  from yt.frontends.moab.api import PyneMoabHex8StaticOutput
+  from yt.visualization.plot_window import SlicePlot
+
+  load("faceted_file.h5m")
+  
+Set up parameters for the mesh:
+
+.. code-block:: python
+
+  from numpy import linspace
+
+  num_divisions = 50
+  coords0 = linspace(-6, 6, num_divisions)
+  coords1 = linspace(0, 7, num_divisions)
+  coords2 = linspace(-4, 4, num_divisions)
+
+Generate the Hex8 mesh and convert to a yt dataset using PyneHex8StaticOutput:
+
+.. code-block:: python 
+
+  m = Mesh(structured=True, structured_coords=[coords0, coords1, coords2], structured_ordering='zyx')
+  pf = PyneMoabHex8StaticOutput(m)
+
+Any field (tag) data on the mesh can then be viewed just like any other yt dataset!
+
+.. code-block:: python 
+
+  s = SlicePlot(pf, 'z', 'density')
+  s.display()
+
+
 Generic Array Data
 ------------------
 

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -27,14 +27,15 @@
      ensure_list, is_root
 from yt.utilities.exceptions import YTUnitConversionError
 from yt.utilities.logger import ytLogger as mylog
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_root_only
 from yt.visualization.profile_plotter import \
      PhasePlot
-     
-from .operator_registry import \
-    callback_registry
 
+callback_registry = OperatorRegistry()
+    
 def add_callback(name, function):
     callback_registry[name] =  HaloCallback(function)
 

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py
@@ -27,10 +27,13 @@
      
 from .halo_object import \
      Halo
-from .operator_registry import \
-     callback_registry, \
-     filter_registry, \
-     finding_method_registry, \
+from .halo_callbacks import \
+     callback_registry
+from .halo_filters import \
+     filter_registry
+from .halo_finding_methods import \
+     finding_method_registry
+from .halo_quantities import \
      quantity_registry
 
 class HaloCatalog(ParallelAnalysisInterface):

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/halo_filters.py
--- a/yt/analysis_modules/halo_analysis/halo_filters.py
+++ b/yt/analysis_modules/halo_analysis/halo_filters.py
@@ -15,10 +15,13 @@
 
 import numpy as np
 
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 from yt.utilities.spatial import KDTree
 
 from .halo_callbacks import HaloCallback
-from .operator_registry import filter_registry
+
+filter_registry = OperatorRegistry()
 
 def add_filter(name, function):
     filter_registry[name] = HaloFilter(function)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/halo_finding_methods.py
--- a/yt/analysis_modules/halo_analysis/halo_finding_methods.py
+++ b/yt/analysis_modules/halo_analysis/halo_finding_methods.py
@@ -21,10 +21,10 @@
     HaloCatalogDataset
 from yt.frontends.stream.data_structures import \
     load_particles
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 
-from .operator_registry import \
-    finding_method_registry
-
+finding_method_registry = OperatorRegistry()
 
 def add_finding_method(name, function):
     finding_method_registry[name] = HaloFindingMethod(function)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/halo_quantities.py
--- a/yt/analysis_modules/halo_analysis/halo_quantities.py
+++ b/yt/analysis_modules/halo_analysis/halo_quantities.py
@@ -15,8 +15,12 @@
 
 import numpy as np
 
+from yt.utilities.operator_registry import \
+     OperatorRegistry
+
 from .halo_callbacks import HaloCallback
-from .operator_registry import quantity_registry
+
+quantity_registry = OperatorRegistry()
 
 def add_quantity(name, function):
     quantity_registry[name] = HaloQuantity(function)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/halo_analysis/operator_registry.py
--- a/yt/analysis_modules/halo_analysis/operator_registry.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-Operation registry class
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-import copy
-import types
-
-class OperatorRegistry(dict):
-    def find(self, op, *args, **kwargs):
-        if isinstance(op, types.StringTypes):
-            # Lookup, assuming string or hashable object
-            op = copy.deepcopy(self[op])
-            op.args = args
-            op.kwargs = kwargs
-        return op
-
-callback_registry = OperatorRegistry()
-filter_registry = OperatorRegistry()
-finding_method_registry = OperatorRegistry()
-quantity_registry = OperatorRegistry()

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/level_sets/api.py
--- a/yt/analysis_modules/level_sets/api.py
+++ b/yt/analysis_modules/level_sets/api.py
@@ -21,12 +21,14 @@
     find_clumps, \
     get_lowest_clumps, \
     write_clump_index, \
-    write_clumps, \
-    write_old_clump_index, \
-    write_old_clumps, \
-    write_old_clump_info, \
-    _DistanceToMainClump
+    write_clumps
 
+from .clump_info_items import \
+    add_clump_info
+
+from .clump_validators import \
+    add_validator
+    
 from .clump_tools import \
     recursive_all_clumps, \
     return_all_clumps, \

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/level_sets/clump_handling.py
--- a/yt/analysis_modules/level_sets/clump_handling.py
+++ b/yt/analysis_modules/level_sets/clump_handling.py
@@ -13,50 +13,82 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
+import copy
 import numpy as np
-import copy
+import uuid
 
-from yt.funcs import *
+from yt.fields.derived_field import \
+    ValidateSpatial
+from yt.funcs import mylog
+    
+from .clump_info_items import \
+    clump_info_registry
+from .clump_validators import \
+    clump_validator_registry
+from .contour_finder import \
+    identify_contours
 
-from .contour_finder import identify_contours
+def add_contour_field(ds, contour_key):
+    def _contours(field, data):
+        fd = data.get_field_parameter("contour_slices_%s" % contour_key)
+        vals = data["index", "ones"] * -1
+        if fd is None or fd == 0.0:
+            return vals
+        for sl, v in fd.get(data.id, []):
+            vals[sl] = v
+        return vals
+
+    ds.add_field(("index", "contours_%s" % contour_key),
+                 function=_contours,
+                 validators=[ValidateSpatial(0)],
+                 take_log=False,
+                 display_field=False)
 
 class Clump(object):
     children = None
-    def __init__(self, data, parent, field, cached_fields = None, 
-                 function=None, clump_info=None):
+    def __init__(self, data, field, parent=None,
+                 clump_info=None, validators=None):
+        self.data = data
+        self.field = field
         self.parent = parent
-        self.data = data
         self.quantities = data.quantities
-        self.field = field
         self.min_val = self.data[field].min()
         self.max_val = self.data[field].max()
-        self.cached_fields = cached_fields
 
         # List containing characteristics about clumps that are to be written 
         # out by the write routines.
         if clump_info is None:
             self.set_default_clump_info()
         else:
-            # Clump info will act the same if add_info_item is called before or after clump finding.
+            # Clump info will act the same if add_info_item is called 
+            # before or after clump finding.
             self.clump_info = copy.deepcopy(clump_info)
 
-        # Function determining whether a clump is valid and should be kept.
-        self.default_function = 'self.data.quantities["IsBound"](truncate=True,include_thermal_energy=True) > 1.0'
-        if function is None:
-            self.function = self.default_function
-        else:
-            self.function = function
+        if validators is None:
+            validators = []
+        self.validators = validators
+        # Return value of validity function.
+        self.valid = None
 
-        # Return value of validity function, saved so it does not have to be calculated again.
-        self.function_value = None
-
-    def add_info_item(self,quantity,format):
+    def add_validator(self, validator, *args, **kwargs):
+        """
+        Add a validating function to determine whether the clump should 
+        be kept.
+        """
+        callback = clump_validator_registry.find(validator, *args, **kwargs)
+        self.validators.append(callback)
+        if self.children is None: return
+        for child in self.children:
+            child.add_validator(validator)
+        
+    def add_info_item(self, info_item, *args, **kwargs):
         "Adds an entry to clump_info list and tells children to do the same."
 
-        self.clump_info.append({'quantity':quantity, 'format':format})
+        callback = clump_info_registry.find(info_item, *args, **kwargs)
+        self.clump_info.append(callback)
         if self.children is None: return
         for child in self.children:
-            child.add_info_item(quantity,format)
+            child.add_info_item(info_item)
 
     def set_default_clump_info(self):
         "Defines default entries in the clump_info array."
@@ -64,60 +96,67 @@
         # add_info_item is recursive so this function does not need to be.
         self.clump_info = []
 
-        # Number of cells.
-        self.add_info_item('self.data["CellMassMsun"].size','"Cells: %d" % value')
-        # Gas mass in solar masses.
-        self.add_info_item('self.data["CellMassMsun"].sum()','"Mass: %e Msolar" % value')
-        # Volume-weighted Jeans mass.
-        self.add_info_item('self.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellVolume")',
-                           '"Jeans Mass (vol-weighted): %.6e Msolar" % value')
-        # Mass-weighted Jeans mass.
-        self.add_info_item('self.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellMassMsun")',
-                           '"Jeans Mass (mass-weighted): %.6e Msolar" % value')
-        # Max level.
-        self.add_info_item('self.data["GridLevel"].max()','"Max grid level: %d" % value')
-        # Minimum number density.
-        self.add_info_item('self.data["NumberDensity"].min()','"Min number density: %.6e cm^-3" % value')
-        # Maximum number density.
-        self.add_info_item('self.data["NumberDensity"].max()','"Max number density: %.6e cm^-3" % value')
+        self.add_info_item("total_cells")
+        self.add_info_item("cell_mass")
+        self.add_info_item("mass_weighted_jeans_mass")
+        self.add_info_item("volume_weighted_jeans_mass")
+        self.add_info_item("max_grid_level")
+        self.add_info_item("min_number_density")
+        self.add_info_item("max_number_density")
 
     def clear_clump_info(self):
-        "Clears the clump_info array and passes the instruction to its children."
+        """
+        Clears the clump_info array and passes the instruction to its 
+        children.
+        """
 
         self.clump_info = []
         if self.children is None: return
         for child in self.children:
             child.clear_clump_info()
 
-    def write_info(self,level,f_ptr):
+    def write_info(self, level, f_ptr):
         "Writes information for clump using the list of items in clump_info."
 
         for item in self.clump_info:
-            # Call if callable, otherwise do an eval.
-            if callable(item['quantity']):
-                value = item['quantity']()
-            else:
-                value = eval(item['quantity'])
-            output = eval(item['format'])
-            f_ptr.write("%s%s" % ('\t'*level,output))
-            f_ptr.write("\n")
+            value = item(self)
+            f_ptr.write("%s%s\n" % ('\t'*level, value))
 
     def find_children(self, min_val, max_val = None):
         if self.children is not None:
-            print "Wiping out existing children clumps."
+            mylog.info("Wiping out existing children clumps: %d.",
+                       len(self.children))
         self.children = []
         if max_val is None: max_val = self.max_val
         nj, cids = identify_contours(self.data, self.field, min_val, max_val)
-        for cid in range(nj):
-            new_clump = self.data.cut_region(
-                    ["obj['contours'] == %s" % (cid + 1)],
-                    {'contour_slices': cids})
-            self.children.append(Clump(new_clump, self, self.field,
-                                       self.cached_fields,function=self.function,
-                                       clump_info=self.clump_info))
+        # Here, cids is the set of slices and values, keyed by the
+        # parent_grid_id, that defines the contours.  So we can figure out all
+        # the unique values of the contours by examining the list here.
+        unique_contours = set([])
+        for sl_list in cids.values():
+            for sl, ff in sl_list:
+                unique_contours.update(np.unique(ff))
+        contour_key = uuid.uuid4().hex
+        base_object = getattr(self.data, 'base_object', self.data)
+        add_contour_field(base_object.pf, contour_key)
+        for cid in sorted(unique_contours):
+            if cid == -1: continue
+            new_clump = base_object.cut_region(
+                    ["obj['contours_%s'] == %s" % (contour_key, cid)],
+                    {('contour_slices_%s' % contour_key): cids})
+            if new_clump["ones"].size == 0:
+                # This is to skip possibly duplicate clumps.
+                # Using "ones" here will speed things up.
+                continue
+            self.children.append(Clump(new_clump, self.field, parent=self,
+                                       clump_info=self.clump_info,
+                                       validators=self.validators))
 
     def pass_down(self,operation):
-        "Performs an operation on a clump with an exec and passes the instruction down to clump children."
+        """
+        Performs an operation on a clump with an exec and passes the 
+        instruction down to clump children.
+        """
 
         # Call if callable, otherwise do an exec.
         if callable(operation):
@@ -129,24 +168,32 @@
         for child in self.children:
             child.pass_down(operation)
 
-    def _isValid(self):
-        "Perform user specified function to determine if child clumps should be kept."
+    def _validate(self):
+        "Apply all user specified validator functions."
 
-        # Only call function if it has not been already.
-        if self.function_value is None:
-            self.function_value = eval(self.function)
+        # Only call functions if not done already.
+        if self.valid is not None:
+            return self.valid
 
-        return self.function_value
+        self.valid = True
+        for validator in self.validators:
+            self.valid &= validator(self)
+            if not self.valid:
+                break
+
+        return self.valid
 
     def __reduce__(self):
         return (_reconstruct_clump, 
                 (self.parent, self.field, self.min_val, self.max_val,
-                 self.function_value, self.children, self.data, self.clump_info, self.function))
+                 self.valid, self.children, self.data, self.clump_info, 
+                 self.function))
 
     def __getitem__(self,request):
         return self.data[request]
 
-def _reconstruct_clump(parent, field, mi, ma, function_value, children, data, clump_info, 
+def _reconstruct_clump(parent, field, mi, ma, valid, children, 
+                       data, clump_info, 
         function=None):
     obj = object.__new__(Clump)
     if iterable(parent):
@@ -155,8 +202,9 @@
         except KeyError:
             parent = parent
     if children is None: children = []
-    obj.parent, obj.field, obj.min_val, obj.max_val, obj.function_value, obj.children, obj.clump_info, obj.function = \
-        parent, field, mi, ma, function_value, children, clump_info, function
+    obj.parent, obj.field, obj.min_val, obj.max_val, \
+      obj.valid, obj.children, obj.clump_info, obj.function = \
+        parent, field, mi, ma, valid, children, clump_info, function
     # Now we override, because the parent/child relationship seems a bit
     # unreliable in the unpickling
     for child in children: child.parent = obj
@@ -166,7 +214,8 @@
     return obj
 
 def find_clumps(clump, min_val, max_val, d_clump):
-    print "Finding clumps: min: %e, max: %e, step: %f" % (min_val, max_val, d_clump)
+    mylog.info("Finding clumps: min: %e, max: %e, step: %f" % 
+               (min_val, max_val, d_clump))
     if min_val >= max_val: return
     clump.find_children(min_val)
 
@@ -175,23 +224,28 @@
 
     elif (len(clump.children) > 0):
         these_children = []
-        print "Investigating %d children." % len(clump.children)
+        mylog.info("Investigating %d children." % len(clump.children))
         for child in clump.children:
             find_clumps(child, min_val*d_clump, max_val, d_clump)
             if ((child.children is not None) and (len(child.children) > 0)):
                 these_children.append(child)
-            elif (child._isValid()):
+            elif (child._validate()):
                 these_children.append(child)
             else:
-                print "Eliminating invalid, childless clump with %d cells." % len(child.data["Ones"])
+                mylog.info(("Eliminating invalid, childless clump with " +
+                            "%d cells.") % len(child.data["ones"]))
         if (len(these_children) > 1):
-            print "%d of %d children survived." % (len(these_children),len(clump.children))            
+            mylog.info("%d of %d children survived." %
+                       (len(these_children),len(clump.children)))
             clump.children = these_children
         elif (len(these_children) == 1):
-            print "%d of %d children survived, linking its children to parent." % (len(these_children),len(clump.children))
+            mylog.info(("%d of %d children survived, linking its " +
+                        "children to parent.") % 
+                        (len(these_children),len(clump.children)))
             clump.children = these_children[0].children
         else:
-            print "%d of %d children survived, erasing children." % (len(these_children),len(clump.children))
+            mylog.info("%d of %d children survived, erasing children." %
+                       (len(these_children),len(clump.children)))
             clump.children = []
 
 def get_lowest_clumps(clump, clump_list=None):
@@ -206,88 +260,35 @@
 
     return clump_list
 
-def write_clump_index(clump,level,f_ptr):
+def write_clump_index(clump, level, fh):
+    top = False
+    if not isinstance(fh, file):
+        fh = open(fh, "w")
+        top = True
     for q in range(level):
-        f_ptr.write("\t")
-    f_ptr.write("Clump at level %d:\n" % level)
-    clump.write_info(level,f_ptr)
-    f_ptr.write("\n")
-    f_ptr.flush()
+        fh.write("\t")
+    fh.write("Clump at level %d:\n" % level)
+    clump.write_info(level, fh)
+    fh.write("\n")
+    fh.flush()
     if ((clump.children is not None) and (len(clump.children) > 0)):
         for child in clump.children:
-            write_clump_index(child,(level+1),f_ptr)
+            write_clump_index(child, (level+1), fh)
+    if top:
+        fh.close()
 
-def write_clumps(clump,level,f_ptr):
+def write_clumps(clump, level, fh):
+    top = False
+    if not isinstance(fh, file):
+        fh = open(fh, "w")
+        top = True
     if ((clump.children is None) or (len(clump.children) == 0)):
-        f_ptr.write("%sClump:\n" % ("\t"*level))
-        clump.write_info(level,f_ptr)
-        f_ptr.write("\n")
-        f_ptr.flush()
+        fh.write("%sClump:\n" % ("\t"*level))
+        clump.write_info(level, fh)
+        fh.write("\n")
+        fh.flush()
     if ((clump.children is not None) and (len(clump.children) > 0)):
         for child in clump.children:
-            write_clumps(child,0,f_ptr)
-
-# Old clump info writing routines.
-def write_old_clump_index(clump,level,f_ptr):
-    for q in range(level):
-        f_ptr.write("\t")
-    f_ptr.write("Clump at level %d:\n" % level)
-    clump.write_info(level,f_ptr)
-    write_old_clump_info(clump,level,f_ptr)
-    f_ptr.write("\n")
-    f_ptr.flush()
-    if ((clump.children is not None) and (len(clump.children) > 0)):
-        for child in clump.children:
-            write_clump_index(child,(level+1),f_ptr)
-
-def write_old_clumps(clump,level,f_ptr):
-    if ((clump.children is None) or (len(clump.children) == 0)):
-        f_ptr.write("%sClump:\n" % ("\t"*level))
-        write_old_clump_info(clump,level,f_ptr)
-        f_ptr.write("\n")
-        f_ptr.flush()
-    if ((clump.children is not None) and (len(clump.children) > 0)):
-        for child in clump.children:
-            write_clumps(child,0,f_ptr)
-
-__clump_info_template = \
-"""
-%(tl)sCells: %(num_cells)s
-%(tl)sMass: %(total_mass).6e Msolar
-%(tl)sJeans Mass (vol-weighted): %(jeans_mass_vol).6e Msolar
-%(tl)sJeans Mass (mass-weighted): %(jeans_mass_mass).6e Msolar
-%(tl)sMax grid level: %(max_level)s
-%(tl)sMin number density: %(min_density).6e cm^-3
-%(tl)sMax number density: %(max_density).6e cm^-3
-
-"""
-
-def write_old_clump_info(clump,level,f_ptr):
-    fmt_dict = {'tl':  "\t" * level}
-    fmt_dict['num_cells'] = clump.data["CellMassMsun"].size,
-    fmt_dict['total_mass'] = clump.data["CellMassMsun"].sum()
-    fmt_dict['jeans_mass_vol'] = clump.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellVolume")
-    fmt_dict['jeans_mass_mass'] = clump.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellMassMsun")
-    fmt_dict['max_level'] =  clump.data["GridLevel"].max()
-    fmt_dict['min_density'] =  clump.data["NumberDensity"].min()
-    fmt_dict['max_density'] =  clump.data["NumberDensity"].max()
-    f_ptr.write(__clump_info_template % fmt_dict)
-
-# Recipes for various clump calculations.
-recipes = {}
-
-# Distance from clump center of mass to center of mass of top level object.
-def _DistanceToMainClump(master,units='pc'):
-    masterCOM = master.data.quantities['CenterOfMass']()
-    pass_command = "self.masterCOM = [%.10f, %.10f, %.10f]" % (masterCOM[0],
-                                                               masterCOM[1],
-                                                               masterCOM[2])
-    master.pass_down(pass_command)
-    master.pass_down("self.com = self.data.quantities['CenterOfMass']()")
-
-    quantity = "((self.com[0]-self.masterCOM[0])**2 + (self.com[1]-self.masterCOM[1])**2 + (self.com[2]-self.masterCOM[2])**2)**(0.5)*self.data.ds.units['%s']" % units
-    format = "%s%s%s" % ("'Distance from center: %.6e ",units,"' % value")
-
-    master.add_info_item(quantity,format)
-
-recipes['DistanceToMainClump'] = _DistanceToMainClump
+            write_clumps(child, 0, fh)
+    if top:
+        fh.close()

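[Editor's note] The rewritten `write_clump_index` and `write_clumps` above now accept either an open file handle or a filename, opening and closing the file only at the top-level call so that recursive calls reuse the same handle. A minimal stand-alone sketch of that idiom (the `write_tree` function and its dict-based tree are hypothetical, standing in for real Clump objects):

```python
import os
import tempfile

def write_tree(node, level, fh):
    # Accept an open file handle or a filename string; only the
    # top-level call opens (and later closes) the file, so the
    # recursive calls all write through the same handle -- the
    # pattern used by write_clump_index / write_clumps above.
    top = False
    if not hasattr(fh, "write"):
        fh = open(fh, "w")
        top = True
    fh.write("\t" * level + "Clump at level %d\n" % level)
    for child in node.get("children", []):
        write_tree(child, level + 1, fh)
    if top:
        fh.close()

# Usage: pass a filename at the top level; nested calls get the handle.
tree = {"children": [{"children": []},
                     {"children": [{"children": []}]}]}
path = os.path.join(tempfile.mkdtemp(), "clumps.txt")
write_tree(tree, 0, path)
output = open(path).read()
```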
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/level_sets/clump_info_items.py
--- /dev/null
+++ b/yt/analysis_modules/level_sets/clump_info_items.py
@@ -0,0 +1,87 @@
+"""
+ClumpInfoCallback and callbacks.
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+
+from yt.utilities.operator_registry import \
+     OperatorRegistry
+
+clump_info_registry = OperatorRegistry()
+
+def add_clump_info(name, function):
+    clump_info_registry[name] = ClumpInfoCallback(function)
+
+class ClumpInfoCallback(object):
+    r"""
+    A ClumpInfoCallback is a function that takes a clump, computes a 
+    quantity, and returns a string to be printed out for writing clump info.
+    """
+    def __init__(self, function, args=None, kwargs=None):
+        self.function = function
+        self.args = args
+        if self.args is None: self.args = []
+        self.kwargs = kwargs
+        if self.kwargs is None: self.kwargs = {}
+
+    def __call__(self, clump):
+        return self.function(clump, *self.args, **self.kwargs)
+    
+def _total_cells(clump):
+    n_cells = clump.data["index", "ones"].size
+    return "Cells: %d." % n_cells
+add_clump_info("total_cells", _total_cells)
+
+def _cell_mass(clump):
+    cell_mass = clump.data["gas", "cell_mass"].sum().in_units("Msun")
+    return "Mass: %e Msun." % cell_mass
+add_clump_info("cell_mass", _cell_mass)
+
+def _mass_weighted_jeans_mass(clump):
+    jeans_mass = clump.data.quantities.weighted_average_quantity(
+        "jeans_mass", ("gas", "cell_mass")).in_units("Msun")
+    return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass
+add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass)
+
+def _volume_weighted_jeans_mass(clump):
+    jeans_mass = clump.data.quantities.weighted_average_quantity(
+        "jeans_mass", ("index", "cell_volume")).in_units("Msun")
+    return "Jeans Mass (volume-weighted): %.6e Msolar." % jeans_mass
+add_clump_info("volume_weighted_jeans_mass", _volume_weighted_jeans_mass)
+
+def _max_grid_level(clump):
+    max_level = clump.data["index", "grid_level"].max()
+    return "Max grid level: %d." % max_level
+add_clump_info("max_grid_level", _max_grid_level)
+
+def _min_number_density(clump):
+    min_n = clump.data["gas", "number_density"].min().in_units("cm**-3")
+    return "Min number density: %.6e cm^-3." % min_n
+add_clump_info("min_number_density", _min_number_density)
+
+def _max_number_density(clump):
+    max_n = clump.data["gas", "number_density"].max().in_units("cm**-3")
+    return "Max number density: %.6e cm^-3." % max_n
+add_clump_info("max_number_density", _max_number_density)
+
+def _distance_to_main_clump(clump, units="pc"):
+    master = clump
+    while master.parent is not None:
+        master = master.parent
+    master_com = clump.data.ds.arr(master.data.quantities.center_of_mass())
+    my_com = clump.data.ds.arr(clump.data.quantities.center_of_mass())
+    distance = np.sqrt(((master_com - my_com)**2).sum())
+    return "Distance from master center of mass: %.6e %s." % \
+      (distance.in_units(units), units)
+add_clump_info("distance_to_main_clump", _distance_to_main_clump)

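[Editor's note] The new `clump_info_items.py` module is built on a registry-plus-callback pattern: `add_clump_info` wraps a plain function in a `ClumpInfoCallback` and files it under a name, and the clump later invokes it by name. A self-contained miniature of that pattern (the registry here is a plain dict and `FakeClump` is a hypothetical stand-in for a real yt clump):

```python
class InfoCallback(object):
    """Wraps a function plus fixed args/kwargs; calling the wrapper
    with a clump forwards everything to the wrapped function, as
    ClumpInfoCallback.__call__ does."""
    def __init__(self, function, args=None, kwargs=None):
        self.function = function
        self.args = args or []
        self.kwargs = kwargs or {}

    def __call__(self, clump):
        return self.function(clump, *self.args, **self.kwargs)

registry = {}

def add_info(name, function):
    # Mirrors add_clump_info: register a wrapped callback by name.
    registry[name] = InfoCallback(function)

def _n_cells(clump):
    return "Cells: %d." % clump.n_cells

add_info("total_cells", _n_cells)

class FakeClump(object):  # hypothetical stand-in for a yt clump
    n_cells = 128

line = registry["total_cells"](FakeClump())
```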
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/level_sets/clump_validators.py
--- /dev/null
+++ b/yt/analysis_modules/level_sets/clump_validators.py
@@ -0,0 +1,95 @@
+"""
+ClumpValidators and callbacks.
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2014, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+
+from yt.utilities.data_point_utilities import FindBindingEnergy
+from yt.utilities.operator_registry import \
+    OperatorRegistry
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs as G
+
+clump_validator_registry = OperatorRegistry()
+
+def add_validator(name, function):
+    clump_validator_registry[name] = ClumpValidator(function)
+
+class ClumpValidator(object):
+    r"""
+    A ClumpValidator is a function that takes a clump and returns 
+    True or False as to whether the clump is valid and shall be kept.
+    """
+    def __init__(self, function, args=None, kwargs=None):
+        self.function = function
+        self.args = args
+        if self.args is None: self.args = []
+        self.kwargs = kwargs
+        if self.kwargs is None: self.kwargs = {}
+
+    def __call__(self, clump):
+        return self.function(clump, *self.args, **self.kwargs)
+    
+def _gravitationally_bound(clump, use_thermal_energy=True,
+                           use_particles=True, truncate=True):
+    "True if clump is gravitationally bound."
+
+    use_particles &= \
+      ("all", "particle_mass") in clump.data.ds.field_info
+    
+    bulk_velocity = clump.quantities.bulk_velocity(use_particles=use_particles)
+
+    kinetic = 0.5 * (clump["gas", "cell_mass"] *
+        ((bulk_velocity[0] - clump["gas", "velocity_x"])**2 +
+         (bulk_velocity[1] - clump["gas", "velocity_y"])**2 +
+         (bulk_velocity[2] - clump["gas", "velocity_z"])**2)).sum()
+
+    if use_thermal_energy:
+        kinetic += (clump["gas", "cell_mass"] *
+                    clump["gas", "thermal_energy"]).sum()
+
+    if use_particles:
+        kinetic += 0.5 * (clump["all", "particle_mass"] *
+            ((bulk_velocity[0] - clump["all", "particle_velocity_x"])**2 +
+             (bulk_velocity[1] - clump["all", "particle_velocity_y"])**2 +
+             (bulk_velocity[2] - clump["all", "particle_velocity_z"])**2)).sum()
+
+    potential = clump.data.ds.quan(G *
+        FindBindingEnergy(clump["gas", "cell_mass"].in_cgs(),
+                          clump["index", "x"].in_cgs(),
+                          clump["index", "y"].in_cgs(),
+                          clump["index", "z"].in_cgs(),
+                          truncate, (kinetic / G).in_cgs()),
+        kinetic.in_cgs().units)
+    
+    if truncate and potential >= kinetic:
+        return True
+
+    if use_particles:
+        potential += clump.data.ds.quan(G *
+            FindBindingEnergy(
+                clump["all", "particle_mass"].in_cgs(),
+                clump["all", "particle_position_x"].in_cgs(),
+                clump["all", "particle_position_y"].in_cgs(),
+                clump["all", "particle_position_z"].in_cgs(),
+                truncate, ((kinetic - potential) / G).in_cgs()),
+        kinetic.in_cgs().units)
+
+    return potential >= kinetic
+add_validator("gravitationally_bound", _gravitationally_bound)
+
+def _min_cells(clump, n_cells):
+    "True if clump has a minimum number of cells."
+    return (clump["index", "ones"].size >= n_cells)
+add_validator("min_cells", _min_cells)

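[Editor's note] The validators registered here are consumed by `Clump._validate` (in the `clump_handling.py` diff above), which ANDs each validator's result, stops at the first failure, and caches the outcome. A pure-Python sketch of that short-circuiting loop (the two validators below are invented placeholders, not yt's):

```python
def validate(clump, validators, _cache={}):
    # Apply each validator in turn; break on the first failure and
    # cache the result per clump, mirroring Clump._validate.
    # (_cache persists across calls, like Clump.valid does.)
    if id(clump) in _cache:
        return _cache[id(clump)]
    valid = True
    for validator in validators:
        valid &= validator(clump)
        if not valid:
            break
    _cache[id(clump)] = valid
    return valid

min_cells = lambda c: c["n_cells"] >= 8   # cf. the min_cells validator
bound = lambda c: c["bound"]              # placeholder for boundedness

small = {"n_cells": 4, "bound": True}
big = {"n_cells": 64, "bound": True}
r1 = validate(small, [min_cells, bound])
r2 = validate(big, [min_cells, bound])
```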
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/analysis_modules/level_sets/contour_finder.py
--- a/yt/analysis_modules/level_sets/contour_finder.py
+++ b/yt/analysis_modules/level_sets/contour_finder.py
@@ -39,9 +39,9 @@
         node_ids.append(nid)
         values = g[field][sl].astype("float64")
         contour_ids = np.zeros(dims, "int64") - 1
-        gct.identify_contours(values, contour_ids, total_contours)
+        total_contours += gct.identify_contours(values, contour_ids,
+                                                total_contours)
         new_contours = tree.cull_candidates(contour_ids)
-        total_contours += new_contours.shape[0]
         tree.add_contours(new_contours)
         # Now we can create a partitioned grid with the contours.
         LE = (DLE + g.dds * gi).in_units("code_length").ndarray_view()
@@ -51,6 +51,8 @@
             LE, RE, dims.astype("int64"))
         contours[nid] = (g.Level, node.node_ind, pg, sl)
     node_ids = np.array(node_ids)
+    if node_ids.size == 0:
+        return 0, {}
     trunk = data_source.tiles.tree.trunk
     mylog.info("Linking node (%s) contours.", len(contours))
     link_node_contours(trunk, contours, tree, node_ids)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -21,14 +21,12 @@
 
 from yt.config import ytcfg
 from yt.units.yt_array import YTArray, uconcatenate, array_like_field
-from yt.utilities.data_point_utilities import FindBindingEnergy
 from yt.utilities.exceptions import YTFieldNotFound
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, parallel_objects
 from yt.utilities.lib.Octree import Octree
 from yt.utilities.physical_constants import \
     gravitational_constant_cgs, \
-    mass_sun_cgs, \
     HUGE
 from yt.utilities.math_utils import prec_accum
 
@@ -237,14 +235,14 @@
           (("all", "particle_mass") in self.data_source.ds.field_info)
         vals = []
         if use_gas:
-            vals += [(data[ax] * data["cell_mass"]).sum(dtype=np.float64)
+            vals += [(data[ax] * data["gas", "cell_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["cell_mass"].sum(dtype=np.float64))
+            vals.append(data["gas", "cell_mass"].sum(dtype=np.float64))
         if use_particles:
-            vals += [(data["particle_position_%s" % ax] *
-                      data["particle_mass"]).sum(dtype=np.float64)
+            vals += [(data["all", "particle_position_%s" % ax] *
+                      data["all", "particle_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["particle_mass"].sum(dtype=np.float64))
+            vals.append(data["all", "particle_mass"].sum(dtype=np.float64))
         return vals
 
     def reduce_intermediate(self, values):
@@ -261,7 +259,7 @@
             y += values.pop(0).sum(dtype=np.float64)
             z += values.pop(0).sum(dtype=np.float64)
             w += values.pop(0).sum(dtype=np.float64)
-        return [v/w for v in [x, y, z]]
+        return self.data_source.ds.arr([v/w for v in [x, y, z]])
 
 class BulkVelocity(DerivedQuantity):
     r"""
@@ -299,14 +297,15 @@
     def process_chunk(self, data, use_gas = True, use_particles = False):
         vals = []
         if use_gas:
-            vals += [(data["velocity_%s" % ax] * data["cell_mass"]).sum(dtype=np.float64)
+            vals += [(data["gas", "velocity_%s" % ax] * 
+                      data["gas", "cell_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["cell_mass"].sum(dtype=np.float64))
+            vals.append(data["gas", "cell_mass"].sum(dtype=np.float64))
         if use_particles:
-            vals += [(data["particle_velocity_%s" % ax] *
-                      data["particle_mass"]).sum(dtype=np.float64)
+            vals += [(data["all", "particle_velocity_%s" % ax] *
+                      data["all", "particle_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["particle_mass"].sum(dtype=np.float64))
+            vals.append(data["all", "particle_mass"].sum(dtype=np.float64))
         return vals
 
     def reduce_intermediate(self, values):
@@ -323,7 +322,7 @@
             y += values.pop(0).sum(dtype=np.float64)
             z += values.pop(0).sum(dtype=np.float64)
             w += values.pop(0).sum(dtype=np.float64)
-        return [v/w for v in [x, y, z]]
+        return self.data_source.ds.arr([v/w for v in [x, y, z]])
 
 class WeightedVariance(DerivedQuantity):
     r"""

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -16,6 +16,7 @@
 
 import types
 import numpy as np
+from contextlib import contextmanager
 
 from yt.funcs import *
 from yt.utilities.lib.alt_ray_tracers import cylindrical_ray_trace
@@ -718,6 +719,22 @@
             self.field_data[field] = self.base_object[field][ind]
 
     @property
+    def blocks(self):
+        # We have to take a slightly different approach here.  Note that all
+        # that .blocks has to yield is a 3D array and a mask.
+        for obj, m in self.base_object.blocks:
+            m = m.copy()
+            with obj._field_parameter_state(self.field_parameters):
+                for cond in self.conditionals:
+                    ss = eval(cond)
+                    m = np.logical_and(m, ss, m)
+            if not np.any(m): continue
+            yield obj, m
+
+    def cut_region(self, *args, **kwargs):
+        raise NotImplementedError
+
+    @property
     def _cond_ind(self):
         ind = None
         obj = self.base_object

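[Editor's note] The new `blocks` property above evaluates each conditional string with the block object in scope and ANDs the resulting boolean array into the mask. The string-`eval` mechanic can be sketched outside yt like so (field names and cutoffs are invented; `obj` is a plain dict of numpy arrays rather than a yt data object):

```python
import numpy as np

def apply_conditionals(obj, mask, conditionals):
    # Each conditional is a string evaluated with `obj` in scope; the
    # resulting boolean array is ANDed into the running mask, as the
    # new CutRegion.blocks property does for each block.
    for cond in conditionals:
        ss = eval(cond)
        mask = np.logical_and(mask, ss)
    return mask

obj = {"temperature": np.array([0.5, 0.9, 0.7, 0.2]),
       "velocity_x": np.array([0.3, 0.4, 0.1, 0.6])}
mask = np.ones(4, dtype=bool)
mask = apply_conditionals(obj, mask,
                          ["obj['temperature'] < 0.75",
                           "obj['velocity_x'] > 0.25"])
```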
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/data_objects/tests/test_extract_regions.py
--- a/yt/data_objects/tests/test_extract_regions.py
+++ b/yt/data_objects/tests/test_extract_regions.py
@@ -22,10 +22,12 @@
         yield assert_equal, np.all(r["velocity_x"] > 0.25), True
         yield assert_equal, np.sort(dd["density"][t]), np.sort(r["density"])
         yield assert_equal, np.sort(dd["x"][t]), np.sort(r["x"])
-        r2 = r.cut_region( [ "obj['temperature'] < 0.75" ] )
-        t2 = (r["temperature"] < 0.75)
-        yield assert_equal, np.sort(r2["temperature"]), np.sort(r["temperature"][t2])
-        yield assert_equal, np.all(r2["temperature"] < 0.75), True
+        # We are disabling these, as cutting cut regions does not presently
+        # work
+        #r2 = r.cut_region( [ "obj['temperature'] < 0.75" ] )
+        #t2 = (r["temperature"] < 0.75)
+        #yield assert_equal, np.sort(r2["temperature"]), np.sort(r["temperature"][t2])
+        #yield assert_equal, np.all(r2["temperature"] < 0.75), True
 
         # Now we can test some projections
         dd = ds.all_data()

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/fields/geometric_fields.py
--- a/yt/fields/geometric_fields.py
+++ b/yt/fields/geometric_fields.py
@@ -207,18 +207,3 @@
              units="cm",
              display_field=False)
 
-    def _contours(field, data):
-        fd = data.get_field_parameter("contour_slices")
-        vals = data["index", "ones"] * -1
-        if fd is None or fd == 0.0:
-            return vals
-        for sl, v in fd.get(data.id, []):
-            vals[sl] = v
-        return vals
-    
-    registry.add_field(("index", "contours"),
-                       function=_contours,
-                       validators=[ValidateSpatial(0)],
-                       take_log=False,
-                       display_field=False)
-

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/frontends/boxlib/fields.py
--- a/yt/frontends/boxlib/fields.py
+++ b/yt/frontends/boxlib/fields.py
@@ -114,9 +114,9 @@
 
     known_other_fields = (
         ("density", ("g/cm**3", ["density"], r"\rho")),
-        ("xmom", ("g*cm/s", ["momentum_x"], r"\rho u")),
-        ("ymom", ("g*cm/s", ["momentum_y"], r"\rho v")),
-        ("zmom", ("g*cm/s", ["momentum_z"], r"\rho w")),
+        ("xmom", ("g/(cm**2 * s)", ["momentum_x"], r"\rho u")),
+        ("ymom", ("g/(cm**2 * s)", ["momentum_y"], r"\rho v")),
+        ("zmom", ("g/(cm**2 * s)", ["momentum_z"], r"\rho w")),
         # velocity components are not always present
         ("x_velocity", ("cm/s", ["velocity_x"], r"u")),
         ("y_velocity", ("cm/s", ["velocity_y"], r"v")),

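[Editor's note] The unit fix for the boxlib momentum fields checks out by dimensional analysis: `xmom` and friends are momentum *densities*, rho*u, so g/cm**3 times cm/s gives g/(cm**2 * s), not g*cm/s (the units of plain momentum). A tiny exponent-bookkeeping check:

```python
def mul(a, b):
    # Multiply two units expressed as {base: exponent} dicts.
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]
    return out

density = {"g": 1, "cm": -3}    # rho: g/cm**3
velocity = {"cm": 1, "s": -1}   # u:   cm/s
momentum_density = mul(density, velocity)
```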
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -552,10 +552,10 @@
             f.seek(0, os.SEEK_END)
             fs = f.tell()
             f.seek(0, os.SEEK_SET)
+            #Read in the header
+            t, n, ndim, ng, nd, ns = struct.unpack("<diiiii", f.read(28))
         except IOError:
             return False, 1
-        #Read in the header
-        t, n, ndim, ng, nd, ns = struct.unpack("<diiiii", f.read(28))
         endianswap = "<"
         #Check Endianness
         if (ndim < 1 or ndim > 3):

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py
+++ b/yt/frontends/stream/data_structures.py
@@ -436,7 +436,7 @@
         pts = MatchPointsToGrids(grid_tree, len(x), x, y, z)
         particle_grid_inds = pts.find_points_in_tree()
         idxs = np.argsort(particle_grid_inds)
-        particle_grid_count = np.bincount(particle_grid_inds,
+        particle_grid_count = np.bincount(particle_grid_inds.astype("intp"),
                                           minlength=num_grids)
         particle_indices = np.zeros(num_grids + 1, dtype='int64')
         if num_grids > 1 :

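[Editor's note] The `astype("intp")` cast here (and in the octree test below) appears to be a portability fix: `np.bincount` wants its input castable to the platform index type, and on platforms where `intp` is 32-bit a raw `int64` array can be rejected. A minimal demonstration of the cast-then-count pattern (on 64-bit platforms the cast is a no-op):

```python
import numpy as np

inds = np.array([0, 2, 2, 3], dtype="int64")
# Cast to the platform index type before counting, as the changeset does.
counts = np.bincount(inds.astype(np.intp), minlength=6)
```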
diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/geometry/tests/test_particle_octree.py
--- a/yt/geometry/tests/test_particle_octree.py
+++ b/yt/geometry/tests/test_particle_octree.py
@@ -91,7 +91,7 @@
         ds = load_particles(data, 1.0, bbox = bbox, n_ref = n_ref)
         dd = ds.all_data()
         bi = dd["io","mesh_id"]
-        v = np.bincount(bi.astype("int64"))
+        v = np.bincount(bi.astype("intp"))
         yield assert_equal, v.max() <= n_ref, True
         bi2 = dd["all","mesh_id"]
         yield assert_equal, bi, bi2

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/utilities/lib/ContourFinding.pyx
--- a/yt/utilities/lib/ContourFinding.pyx
+++ b/yt/utilities/lib/ContourFinding.pyx
@@ -228,7 +228,7 @@
         cdef int i, n, ins
         cdef np.int64_t cid1, cid2
         # Okay, this requires lots of iteration, unfortunately
-        cdef ContourID *cur, *root
+        cdef ContourID *cur, *c1, *c2
         n = join_tree.shape[0]
         #print "Counting"
         #print "Checking", self.count()
@@ -253,6 +253,7 @@
                 print "  Inspected ", ins
                 raise RuntimeError
             else:
+                c1.count = c2.count = 0
                 contour_union(c1, c2)
 
     def count(self):
@@ -335,6 +336,7 @@
                                 c2 = container[offset]
                                 if c2 == NULL: continue
                                 c2 = contour_find(c2)
+                                cur.count = c2.count = 0
                                 contour_union(cur, c2)
                                 cur = contour_find(cur)
         for i in range(ni):
@@ -342,13 +344,13 @@
                 for k in range(nk):
                     c1 = container[i*nj*nk + j*nk + k]
                     if c1 == NULL: continue
-                    cur = c1
                     c1 = contour_find(c1)
                     contour_ids[i,j,k] = c1.contour_id
         
         for i in range(ni*nj*nk): 
             if container[i] != NULL: free(container[i])
         free(container)
+        return nc
 
 @cython.boundscheck(False)
 @cython.wraparound(False)
@@ -383,6 +385,7 @@
         if spos[i] <= vc.left_edge[i] or spos[i] >= vc.right_edge[i]: return 0
     return 1
 
+@cython.cdivision(True)
 @cython.boundscheck(False)
 @cython.wraparound(False)
 cdef void construct_boundary_relationships(Node trunk, ContourTree tree, 
@@ -391,227 +394,68 @@
                 np.ndarray[np.int64_t, ndim=1] node_ids):
     # We only look at the boundary and find the nodes next to it.
     # Contours is a dict, keyed by the node.id.
-    cdef int i, j, nx, ny, nz, offset_i, offset_j, oi, oj, level
+    cdef int i, j, off_i, off_j, oi, oj, level, ax, ax0, ax1, n1, n2
     cdef np.int64_t c1, c2
     cdef Node adj_node
     cdef VolumeContainer *vc1, *vc0 = vcs[nid]
-    nx = vc0.dims[0]
-    ny = vc0.dims[1]
-    nz = vc0.dims[2]
-    cdef int s = (ny*nx + nx*nz + ny*nz) * 18
+    cdef int s = (vc0.dims[1]*vc0.dims[0]
+                + vc0.dims[0]*vc0.dims[2]
+                + vc0.dims[1]*vc0.dims[2]) * 18
     # We allocate an array of fixed (maximum) size
     cdef np.ndarray[np.int64_t, ndim=2] joins = np.zeros((s, 2), dtype="int64")
-    cdef int ti = 0
-    cdef int index
+    cdef int ti = 0, side
+    cdef int index, pos[3], my_pos[3]
     cdef np.float64_t spos[3]
 
-    # First the x-pass
-    for i in range(ny):
-        for j in range(nz):
-            for offset_i in range(3):
-                oi = offset_i - 1
-                for offset_j in range(3):
-                    oj = offset_j - 1
-                    # Adjust by -1 in x, then oi and oj in y and z
-                    get_spos(vc0, -1, i + oi, j + oj, 0, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, 0, i, j)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
-                    # This is outside our vc
-                    get_spos(vc0, nx, i + oi, j + oj, 0, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, nx - 1, i, j)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
-    # Now y-pass
-    for i in range(nx):
-        for j in range(nz):
-            for offset_i in range(3):
-                oi = offset_i - 1
-                if i == 0 and oi == -1: continue
-                if i == nx - 1 and oi == 1: continue
-                for offset_j in range(3):
-                    oj = offset_j - 1
-                    get_spos(vc0, i + oi, -1, j + oj, 1, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, i, 0, j)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
+    for ax in range(3):
+        ax0 = (ax + 1) % 3
+        ax1 = (ax + 2) % 3
+        n1 = vc0.dims[ax0]
+        n2 = vc0.dims[ax1]
+        for i in range(n1):
+            for j in range(n2):
+                for off_i in range(3):
+                    oi = off_i - 1
+                    if i == 0 and oi == -1: continue
+                    if i == n1 - 1 and oi == 1: continue
+                    for off_j in range(3):
+                        oj = off_j - 1
+                        if j == 0 and oj == -1: continue
+                        if j == n2 - 1 and oj == 1: continue
+                        pos[ax0] = i + oi
+                        pos[ax1] = j + oj
+                        my_pos[ax0] = i
+                        my_pos[ax1] = j
+                        for side in range(2):
+                            # We go off each end of the block.
+                            if side == 0:
+                                pos[ax] = -1
+                                my_pos[ax] = 0
+                            else:
+                                pos[ax] = vc0.dims[ax]
+                                my_pos[ax] = vc0.dims[ax]-1
+                            get_spos(vc0, pos[0], pos[1], pos[2], ax, spos)
+                            adj_node = _find_node(trunk, spos)
+                            vc1 = vcs[adj_node.node_ind]
+                            if spos_contained(vc1, spos):
+                                index = vc_index(vc0, my_pos[0], 
+                                                 my_pos[1], my_pos[2])
+                                c1 = (<np.int64_t*>vc0.data[0])[index]
+                                index = vc_pos_index(vc1, spos)
+                                c2 = (<np.int64_t*>vc1.data[0])[index]
+                                if c1 > -1 and c2 > -1:
+                                    if examined[adj_node.node_ind] == 0:
+                                        joins[ti,0] = i64max(c1,c2)
+                                        joins[ti,1] = i64min(c1,c2)
+                                    else:
+                                        joins[ti,0] = c1
+                                        joins[ti,1] = c2
+                                    ti += 1
 
-                    get_spos(vc0, i + oi, ny, j + oj, 1, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, i, ny - 1, j)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
-
-    # Now z-pass
-    for i in range(nx):
-        for j in range(ny):
-            for offset_i in range(3):
-                oi = offset_i - 1
-                for offset_j in range(3):
-                    oj = offset_j - 1
-                    get_spos(vc0, i + oi,  j + oj, -1, 2, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, i, j, 0)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
-
-                    get_spos(vc0, i + oi, j + oj, nz, 2, spos)
-                    adj_node = _find_node(trunk, spos)
-                    vc1 = vcs[adj_node.node_ind]
-                    if examined[adj_node.node_ind] == 0 and \
-                       spos_contained(vc1, spos):
-                        # This is outside our VC, as 0 is a boundary layer
-                        index = vc_index(vc0, i, j, nz - 1)
-                        c1 = (<np.int64_t*>vc0.data[0])[index]
-                        index = vc_pos_index(vc1, spos)
-                        c2 = (<np.int64_t*>vc1.data[0])[index]
-                        if c1 > -1 and c2 > -1:
-                            joins[ti,0] = i64max(c1,c2)
-                            joins[ti,1] = i64min(c1,c2)
-                            ti += 1
     if ti == 0: return
     new_joins = tree.cull_joins(joins[:ti,:])
     tree.add_joins(new_joins)
 
-cdef inline int are_neighbors(
-            np.float64_t x1, np.float64_t y1, np.float64_t z1,
-            np.float64_t dx1, np.float64_t dy1, np.float64_t dz1,
-            np.float64_t x2, np.float64_t y2, np.float64_t z2,
-            np.float64_t dx2, np.float64_t dy2, np.float64_t dz2,
-        ):
-    # We assume an epsilon of 1e-15
-    if fabs(x1-x2) > 0.5*(dx1+dx2): return 0
-    if fabs(y1-y2) > 0.5*(dy1+dy2): return 0
-    if fabs(z1-z2) > 0.5*(dz1+dz2): return 0
-    return 1
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def identify_field_neighbors(
-            np.ndarray[dtype=np.float64_t, ndim=1] field,
-            np.ndarray[dtype=np.float64_t, ndim=1] x,
-            np.ndarray[dtype=np.float64_t, ndim=1] y,
-            np.ndarray[dtype=np.float64_t, ndim=1] z,
-            np.ndarray[dtype=np.float64_t, ndim=1] dx,
-            np.ndarray[dtype=np.float64_t, ndim=1] dy,
-            np.ndarray[dtype=np.float64_t, ndim=1] dz,
-        ):
-    # We assume this field is pre-jittered; it has no identical values.
-    cdef int outer, inner, N, added
-    cdef np.float64_t x1, y1, z1, dx1, dy1, dz1
-    N = field.shape[0]
-    #cdef np.ndarray[dtype=np.object_t] joins
-    joins = [[] for outer in range(N)]
-    #joins = np.empty(N, dtype='object')
-    for outer in range(N):
-        if (outer % 10000) == 0: print outer, N
-        x1 = x[outer]
-        y1 = y[outer]
-        z1 = z[outer]
-        dx1 = dx[outer]
-        dy1 = dy[outer]
-        dz1 = dz[outer]
-        this_joins = joins[outer]
-        added = 0
-        # Go in reverse order
-        for inner in range(outer, 0, -1):
-            if not are_neighbors(x1, y1, z1, dx1, dy1, dz1,
-                                 x[inner], y[inner], z[inner],
-                                 dx[inner], dy[inner], dz[inner]):
-                continue
-            # Hot dog, we have a weiner!
-            this_joins.append(inner)
-            added += 1
-            if added == 26: break
-    return joins
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def extract_identified_contours(int max_ind, joins):
-    cdef int i
-    contours = []
-    for i in range(max_ind + 1): # +1 to get to the max_ind itself
-        contours.append(set([i]))
-        if len(joins[i]) == 0:
-            continue
-        proto_contour = [i]
-        for j in joins[i]:
-            proto_contour += contours[j]
-        proto_contour = set(proto_contour)
-        for j in proto_contour:
-            contours[j] = proto_contour
-    return contours
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def update_flat_joins(np.ndarray[np.int64_t, ndim=2] joins,
-                 np.ndarray[np.int64_t, ndim=1] contour_ids,
-                 np.ndarray[np.int64_t, ndim=1] final_joins):
-    cdef np.int64_t new, old
-    cdef int i, j, nj, nf, counter
-    cdef int ci, cj, ck
-    nj = joins.shape[0]
-    nf = final_joins.shape[0]
-    for ci in range(contour_ids.shape[0]):
-        if contour_ids[ci] == -1: continue
-        for j in range(nj):
-            if contour_ids[ci] == joins[j,0]:
-                contour_ids[ci] = joins[j,1]
-                break
-        for j in range(nf):
-            if contour_ids[ci] == final_joins[j]:
-                contour_ids[ci] = j + 1
-                break
-
-
 @cython.boundscheck(False)
 @cython.wraparound(False)
 def update_joins(np.ndarray[np.int64_t, ndim=2] joins,

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/utilities/lib/alt_ray_tracers.pyx
--- a/yt/utilities/lib/alt_ray_tracers.pyx
+++ b/yt/utilities/lib/alt_ray_tracers.pyx
@@ -101,7 +101,7 @@
                                           rleft, rright, zleft, zright, \
                                           cleft, cright, thetaleft, thetaright, \
                                           tmleft, tpleft, tmright, tpright, tsect
-    cdef np.ndarray[np.int64_t, ndim=1] inds, tinds, sinds
+    cdef np.ndarray[np.intp_t, ndim=1] inds, tinds, sinds
     cdef np.ndarray[np.float64_t, ndim=2] xyz, rztheta, ptemp, b1, b2, dsect
 
     # set up  points
@@ -126,7 +126,7 @@
     bsqrd = b**2
 
     # Compute positive and negative times and associated masks
-    I = left_edges.shape[0]
+    I = np.intp(left_edges.shape[0])
     tmleft = np.empty(I, dtype='float64')
     tpleft = np.empty(I, dtype='float64')
     tmright = np.empty(I, dtype='float64')
@@ -152,7 +152,7 @@
                                      np.argwhere(tmmright).flat, 
                                      np.argwhere(tpmright).flat,]))
     if 0 == inds.shape[0]:
-        inds = np.arange(np.int64(I))
+        inds = np.arange(np.intp(I))
         thetaleft = np.empty(I)
         thetaleft.fill(p1[2])
         thetaright = np.empty(I)

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/utilities/lib/misc_utilities.pyx
--- a/yt/utilities/lib/misc_utilities.pyx
+++ b/yt/utilities/lib/misc_utilities.pyx
@@ -27,7 +27,7 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def new_bin_profile1d(np.ndarray[np.int64_t, ndim=1] bins_x,
+def new_bin_profile1d(np.ndarray[np.intp_t, ndim=1] bins_x,
                   np.ndarray[np.float64_t, ndim=1] wsource,
                   np.ndarray[np.float64_t, ndim=2] bsource,
                   np.ndarray[np.float64_t, ndim=1] wresult,
@@ -58,8 +58,8 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def new_bin_profile2d(np.ndarray[np.int64_t, ndim=1] bins_x,
-                  np.ndarray[np.int64_t, ndim=1] bins_y,
+def new_bin_profile2d(np.ndarray[np.intp_t, ndim=1] bins_x,
+                  np.ndarray[np.intp_t, ndim=1] bins_y,
                   np.ndarray[np.float64_t, ndim=1] wsource,
                   np.ndarray[np.float64_t, ndim=2] bsource,
                   np.ndarray[np.float64_t, ndim=2] wresult,
@@ -91,9 +91,9 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def new_bin_profile3d(np.ndarray[np.int64_t, ndim=1] bins_x,
-                  np.ndarray[np.int64_t, ndim=1] bins_y,
-                  np.ndarray[np.int64_t, ndim=1] bins_z,
+def new_bin_profile3d(np.ndarray[np.intp_t, ndim=1] bins_x,
+                  np.ndarray[np.intp_t, ndim=1] bins_y,
+                  np.ndarray[np.intp_t, ndim=1] bins_z,
                   np.ndarray[np.float64_t, ndim=1] wsource,
                   np.ndarray[np.float64_t, ndim=2] bsource,
                   np.ndarray[np.float64_t, ndim=3] wresult,

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/utilities/operator_registry.py
--- /dev/null
+++ b/yt/utilities/operator_registry.py
@@ -0,0 +1,26 @@
+"""
+Operation registry class
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import copy
+import types
+
+class OperatorRegistry(dict):
+    def find(self, op, *args, **kwargs):
+        if isinstance(op, types.StringTypes):
+            # Lookup, assuming string or hashable object
+            op = copy.deepcopy(self[op])
+            op.args = args
+            op.kwargs = kwargs
+        return op

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/utilities/particle_generator.py
--- a/yt/utilities/particle_generator.py
+++ b/yt/utilities/particle_generator.py
@@ -104,7 +104,7 @@
         self.particles[:,self.posx_index] = x[idxs]
         self.particles[:,self.posy_index] = y[idxs]
         self.particles[:,self.posz_index] = z[idxs]
-        self.NumberOfParticles = np.bincount(particle_grid_inds,
+        self.NumberOfParticles = np.bincount(particle_grid_inds.astype("intp"),
                                              minlength=self.num_grids)
         if self.num_grids > 1 :
             np.add.accumulate(self.NumberOfParticles.squeeze(),

diff -r 0c224e0c239a1ba1b61c81c9181d672adde467fd -r c847b3889bb4ea6db3c82a1c07f48f5949452437 yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -689,20 +689,20 @@
         nx, ny = plot.image._A.shape
         buff = np.zeros((nx,ny),dtype='float64')
         for i,clump in enumerate(reversed(self.clumps)):
-            mylog.debug("Pixelizing contour %s", i)
+            mylog.info("Pixelizing contour %s", i)
 
-            xf_copy = clump[xf].copy()
-            yf_copy = clump[yf].copy()
+            xf_copy = clump[xf].copy().in_units("code_length")
+            yf_copy = clump[yf].copy().in_units("code_length")
 
             temp = _MPL.Pixelize(xf_copy, yf_copy,
-                                 clump[dxf]/2.0,
-                                 clump[dyf]/2.0,
-                                 clump[dxf]*0.0+i+1, # inits inside Pixelize
+                                 clump[dxf].in_units("code_length")/2.0,
+                                 clump[dyf].in_units("code_length")/2.0,
+                                 clump[dxf].d*0.0+i+1, # inits inside Pixelize
                                  int(nx), int(ny),
                              (x0, x1, y0, y1), 0).transpose()
             buff = np.maximum(temp, buff)
         self.rv = plot._axes.contour(buff, np.unique(buff),
-                                     extent=extent,**self.plot_args)
+                                     extent=extent, **self.plot_args)
         plot._axes.hold(False)
 
 class ArrowCallback(PlotCallback):


https://bitbucket.org/yt_analysis/yt/commits/8eea80fbe341/
Changeset:   8eea80fbe341
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-07-23 21:08:26
Summary:     Responding to PR comments.  Fleshing out some of the text.
Affected #:  2 files

diff -r c847b3889bb4ea6db3c82a1c07f48f5949452437 -r 8eea80fbe341c7ebc5ea5acc3b2a861041c398ce doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -165,10 +165,15 @@
 
 Only one of these two options is needed.
 
-If you plan to develop yt on Windows, we recommend using the `MinGW <http://www.mingw.org/>`_ gcc
-compiler that can be installed using the `Anaconda Python
-Distribution <https://store.continuum.io/cshop/anaconda/>`_. Also, the syntax for the
-setup command is slightly different; you must type:
+.. _windows-developing:
+
+Developing yt on Windows
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you plan to develop yt on Windows, we recommend using the `MinGW
+<http://www.mingw.org/>`_ gcc compiler that can be installed using the `Anaconda
+Python Distribution <https://store.continuum.io/cshop/anaconda/>`_. Also, the
+syntax for the setup command is slightly different; you must type:
 
 .. code-block:: bash
 
@@ -187,10 +192,10 @@
 
 The simplest way to submit changes to yt is to do the following:
 
-  * Build yt from the mercurial repository (
+  * Build yt from the mercurial repository
   * Navigate to the root of the yt repository 
   * Make some changes and commit them
-  * Fork the ` ytrepository on BitBucket<https://bitbucket.org/yt_analysis/yt>`_
+  * Fork the `yt repository on BitBucket <https://bitbucket.org/yt_analysis/yt>`_
   * Push the changesets to your fork
   * Issue a pull request.
 

diff -r c847b3889bb4ea6db3c82a1c07f48f5949452437 -r 8eea80fbe341c7ebc5ea5acc3b2a861041c398ce doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -8,11 +8,31 @@
 Getting yt
 ----------
 
-yt is a Python package, using NumPy as a computation engine, Matplotlib for some
-visualization tasks, h5py and the hdf5 library for I/O, sympy for symbolic
-computations, Cython for speedy computations, and Mercurial for version
-control. To install yt, all of these supplementary packages must already be
-available.
+In this document we describe several methods for installing yt. The method that
+will work best for you depends on your precise situation:
+
+* If you already have a scientific python software stack installed on your
+  computer and are comfortable installing python packages,
+  :ref:`source-installation` will probably be the best choice. If you have set
+  up python using a source-based package manager like `Homebrew
+  <http://brew.sh>`_ or `MacPorts <http://www.macports.org/>`_ this choice will
+  let you install yt using the python installed by the package manager. Similarly
+  for python environments set up via linux package managers so long as you
+  have the the necessary compilers installed (e.g. the ``build-essentials``
+  package on debian and ubuntu).
+
+* If you use the `Anaconda <https://store.continuum.io/cshop/anaconda/>`_ python
+  distribution see :ref:`anaconda-installation` for details on how to install
+  yt using the ``conda`` package manager.  Source-based installation from the
+  mercurial repository or via ``pip`` should also work under Anaconda. Note that
+  this is currently the only supported installation mechanism on Windows.
+
+* If you do not have root access on your computer, are not comfortable managing
+  python packages, or are working on a supercomputer or cluster computer, you
+  will probably want to use the bash installation script.  This builds python,
+  numpy, matplotlib, and yt from source to set up an isolated scientific python
+  environment inside of a single folder in your home directory. See
+  :ref:`install-script` for more details.
 
 .. _source-installation:
 
@@ -22,28 +42,40 @@
 To install yt from source, you must make sure you have yt's dependencies
 installed on your system.  These include: a C compiler, ``HDF5``, ``python``,
 ``Cython``, ``NumPy``, ``matplotlib``, ``sympy``, and ``h5py``. From here, you
-can use ``pip`` (which comes with ``Python``) to install yt as:
+can use ``pip`` (which comes with ``Python``) to install the latest stable
+version of yt:
 
 .. code-block:: bash
 
   $ pip install yt
 
-The source code for yt may be found at the Bitbucket project site and can also be
-utilized for installation. If you prefer to use it instead of relying on external
-tools, you will need ``mercurial`` to clone the official repo:
+The source code for yt may be found at the Bitbucket project site and can also
+be utilized for installation. If you prefer to install the development version
+of yt instead of the latest stable release, you will need ``mercurial`` to clone
+the official repo:
 
 .. code-block:: bash
 
-  $ hg clone https://bitbucket.org/yt_analysis/yt
-  $ cd yt
-  $ hg update yt
-  $ python setup.py install --user
+  hg clone https://bitbucket.org/yt_analysis/yt
+  cd yt
+  hg update yt
+  python setup.py install --user
 
-It will install yt into ``$HOME/.local/lib64/python2.7/site-packages``. 
+This will install yt into ``$HOME/.local/lib64/python2.7/site-packages``. 
 Please refer to ``setuptools`` documentation for the additional options.
 
-If you choose this installation method, you do not need to run the activation
-script as it is unnecessary.
+If you will be modifying yt, you can also make the clone of the yt mercurial
+repository the "active" installed copy:
+
+..code-block:: bash
+
+  hg clone https://bitbucket.org/yt_analysis/yt
+  cd yt
+  hg update yt
+  python setup.py develop  
+
+If you choose this installation method, you do not need to run any activation
+script since this will install yt into your global python environment.
 
 Keeping yt Updated via Mercurial
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -54,6 +86,16 @@
 especially the sections on :ref:`Mercurial <mercurial-with-yt>` and
 :ref:`building yt from source <building-yt>`.
 
+You can also make use of the following command to keep yt up to date from the
+command line:
+
+.. code-block:: bash
+
+  yt update
+
+This will detect that you have installed yt from the mercurial repository, pull
+any changes from bitbucket, and then recompile yt if necessary.
+
 .. _anaconda-installation:
 
 Installing yt Using Anaconda
@@ -102,24 +144,27 @@
 .. _windows-installation:
 
 Installing yt on Windows
-++++++++++++++++++++++++
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 Installation on Microsoft Windows is only supported for Windows XP Service Pack
 3 and higher (both 32-bit and 64-bit) using Anaconda, see
-:ref:`anaconda-installation`.
+:ref:`anaconda-installation`.  Also see :ref:`windows-developing` for details on
+how to build yt from source in Windows.
+
+.. _install-script:
 
 All-in-one installation script
 ++++++++++++++++++++++++++++++
 
 Because installation of all of the interlocking parts necessary to install yt
-its self can be time-consuming, yt provides an all-in-one installation script
+itself can be time-consuming, yt provides an all-in-one installation script
 which downloads and builds a fully-isolated Python + NumPy + Matplotlib + HDF5 +
 Mercurial installation. Since the install script compiles yt's dependencies from
 source, you must have C, C++, and optionally Fortran compilers installed.
 
 The install script supports UNIX-like systems, including Linux, OS X, and most
 supercomputer and cluster environments. It is particularly suited for deployment
-on clusters where users do not usually have root access and can only install
+in environments where users do not have root access and can only install
 software into their home directory.
 
 Since the install is fully-isolated in a single directory, if you get tired of


https://bitbucket.org/yt_analysis/yt/commits/5a10dea0299b/
Changeset:   5a10dea0299b
Branch:      yt-3.0
User:        chummels
Date:        2014-07-24 18:07:18
Summary:     Merged in ngoldbaum/yt/yt-3.0 (pull request #1055)

Updating the installation instructions.
Affected #:  8 files

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/analyzing/units/index.rst
--- a/doc/source/analyzing/units/index.rst
+++ b/doc/source/analyzing/units/index.rst
@@ -12,9 +12,9 @@
 and execute the documentation interactively, you need to download the repository
 and start the IPython notebook.
 
-If you installed `yt` using the install script, you will need to navigate to
-:code:`$YT_DEST/src/yt-hg/doc/source/units`, then start an IPython notebook
-server:
+You will then need to navigate to :code:`$YT_HG/doc/source/units` (where $YT_HG
+is the location of a clone of the yt mercurial repository), and then start an
+IPython notebook server:
 
 .. code:: bash
   

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -55,11 +55,11 @@
 
 .. code-block:: bash
 
-   cd $YT_DEST/src/yt-hg/doc
+   cd $YT_HG/doc
    make html
 
 This will produce an html version of the documentation locally in the 
-``$YT_DEST/src/yt-hg/doc/build/html`` directory.  You can now go there and open
+``$YT_HG/doc/build/html`` directory.  You can now go there and open
 up ``index.html`` or whatever file you wish in your web browser.
 
 Building the docs (full)
@@ -116,7 +116,7 @@
 
 .. code-block:: bash
 
-   cd $YT_DEST/src/yt-hg/doc
+   cd $YT_HG/doc
    make html
 
 If all of the dependencies are installed and all of the test data is in the

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -165,10 +165,15 @@
 
 Only one of these two options is needed.
 
-If you plan to develop yt on Windows, we recommend using the `MinGW <http://www.mingw.org/>`_ gcc
-compiler that can be installed using the `Anaconda Python
-Distribution <https://store.continuum.io/cshop/anaconda/>`_. Also, the syntax for the
-setup command is slightly different; you must type:
+.. _windows-developing:
+
+Developing yt on Windows
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you plan to develop yt on Windows, we recommend using the `MinGW
+<http://www.mingw.org/>`_ gcc compiler that can be installed using the `Anaconda
+Python Distribution <https://store.continuum.io/cshop/anaconda/>`_. Also, the
+syntax for the setup command is slightly different; you must type:
 
 .. code-block:: bash
 
@@ -185,17 +190,24 @@
 Making and Sharing Changes
 ++++++++++++++++++++++++++
 
-The simplest way to submit changes to yt is to commit changes in your
-``$YT_DEST/src/yt-hg`` directory, fork the repository on BitBucket,  push the
-changesets to your fork, and then issue a pull request.  
+The simplest way to submit changes to yt is to do the following:
+
+  * Build yt from the mercurial repository
+  * Navigate to the root of the yt repository 
+  * Make some changes and commit them
+  * Fork the `yt repository on BitBucket <https://bitbucket.org/yt_analysis/yt>`_
+  * Push the changesets to your fork
+  * Issue a pull request.
 
 Here's a more detailed flowchart of how to submit changes.
 
   #. If you have used the installation script, the source code for yt can be
-     found in ``$YT_DEST/src/yt-hg``.  (Below, in :ref:`reading-source`, 
-     we describe how to find items of interest.)  Edit the source file you are
-     interested in and test your changes.  (See :ref:`testing` for more
-     information.)
+     found in ``$YT_DEST/src/yt-hg``.  Alternatively see
+     :ref:`source-installation` for instructions on how to build yt from the
+     mercurial repository. (Below, in :ref:`reading-source`, we describe how to
+     find items of interest.)  
+  #. Edit the source file you are interested in and
+     test your changes.  (See :ref:`testing` for more information.)
   #. Fork yt on BitBucket.  (This step only has to be done once.)  You can do
      this at: https://bitbucket.org/yt_analysis/yt/fork .  Call this repository
      ``yt``.
@@ -207,7 +219,7 @@
      these changes as well.
   #. Push your changes to your new fork using the command::
 
-        hg push https://bitbucket.org/YourUsername/yt/
+        hg push -r . https://bitbucket.org/YourUsername/yt/
  
      If you end up doing considerable development, you can set an alias in the
      file ``.hg/hgrc`` to point to this path.
@@ -244,9 +256,9 @@
 include a recipe in the cookbook section, or it could simply be adding a note 
 in the relevant docs text somewhere.
 
-The documentation exists in the main mercurial code repository for yt in the 
-``doc`` directory (i.e. ``$YT_DEST/src/yt-hg/doc/source`` on systems installed 
-using the installer script).  It is organized hierarchically into the main 
+The documentation exists in the main mercurial code repository for yt in the
+``doc`` directory (i.e. ``$YT_HG/doc/source`` where ``$YT_HG`` is the path of
+the yt mercurial repository).  It is organized hierarchically into the main
 categories of:
 
  * Visualizing
@@ -345,16 +357,6 @@
 yt``), then you must "activate" it using the following commands from within the
 repository directory.
 
-In order to do this for the first time with a new repository, you have to
-copy some config files over from your yt installation directory (where yt
-was initially installed from the install_script.sh).  Try this:
-
-.. code-block:: bash
-
-   $ cp $YT_DEST/src/yt-hg/*.cfg <REPOSITORY_NAME>
-
-and then every time you want to "activate" a different repository of yt.
-
 .. code-block:: bash
 
    $ cd <REPOSITORY_NAME>
@@ -367,11 +369,16 @@
 How To Read The Source Code
 ---------------------------
 
-If you just want to *look* at the source code, you already have it on your
-computer.  Go to the directory where you ran the install_script.sh, then
-go to ``$YT_DEST/src/yt-hg`` .  In this directory are a number of
-subdirectories with different components of the code, although most of them
-are in the yt subdirectory.  Feel free to explore here.
+If you just want to *look* at the source code, you may already have it on your
+computer.  If you build yt using the install script, the source is available at
+``$YT_DEST/src/yt-hg``.  See :ref:`source-installation` for more details about
+how to obtain the yt source code if you did not build yt using the install
+script.
+
+The root directory of the yt mercurial repository contains a number of
+subdirectories with different components of the code.  Most of the yt source
+code is contained in the ``yt`` subdirectory.  This directory itself contains
+the following subdirectories:
 
    ``frontends``
       This is where interfaces to codes are created.  Within each subdirectory of
@@ -380,10 +387,19 @@
       * ``data_structures.py``, where subclasses of AMRGridPatch, Dataset
         and AMRHierarchy are defined.
       * ``io.py``, where a subclass of IOHandler is defined.
+      * ``fields.py``, where fields we expect to find in datasets are defined.
       * ``misc.py``, where any miscellaneous functions or classes are defined.
       * ``definitions.py``, where any definitions specific to the frontend are
         defined.  (i.e., header formats, etc.)
 
+   ``fields``
+      This is where all of the derived fields that ship with yt are defined.
+
+   ``geometry`` 
+      This is where geometric helper routines are defined. Handlers
+      for grid and oct data, as well as helpers for coordinate transformations
+      can be found here.
+
    ``visualization``
       This is where all visualization modules are stored.  This includes plot
       collections, the volume rendering interface, and pixelization frontends.
@@ -409,6 +425,10 @@
       All broadly useful code that doesn't clearly fit in one of the other
       categories goes here.
 
+   ``extern`` 
+      Bundled external modules (i.e. code that was not written by one of
+      the yt authors but that yt depends on) lives here.
+
 
 If you're looking for a specific file or function in the yt source code, use
 the unix find command:
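A minimal sketch of such a ``find`` invocation (the demo directory and file name below are hypothetical, so the command is self-contained):

```shell
# Create a tiny mock source tree so the example is self-contained
mkdir -p /tmp/yt_find_demo/yt/visualization
touch /tmp/yt_find_demo/yt/visualization/plot_window.py
# Recursively locate a file by name under the repository root
find /tmp/yt_find_demo -name "plot_window.py"
# prints /tmp/yt_find_demo/yt/visualization/plot_window.py
rm -rf /tmp/yt_find_demo
```

In a real checkout you would run ``find . -name "plot_window.py"`` from the repository root instead.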

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/developing/intro.rst
--- a/doc/source/developing/intro.rst
+++ b/doc/source/developing/intro.rst
@@ -66,11 +66,11 @@
 typo or grammatical fixes, adding a FAQ, or increasing coverage of
 functionality, it would be very helpful if you wanted to help out.
 
-The easiest way to help out is to fork the main yt repository (where 
-the documentation lives in the ``$YT_DEST/src/yt-hg/doc`` directory,
-and then make your changes in your own fork.  When you are done, issue a pull
-request through the website for your new fork, and we can comment back and
-forth and eventually accept your changes.
+The easiest way to help out is to fork the main yt repository (where the
+documentation lives in the ``doc`` directory in the root of the yt mercurial
+repository) and then make your changes in your own fork.  When you are done,
+issue a pull request through the website for your new fork, and we can comment
+back and forth and eventually accept your changes.
 
 One of the more interesting things we have been attempting lately is to add
 screencasts to the documentation -- these are recordings of people executing

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -59,11 +59,13 @@
    $ cd $YT_HG
    $ nosetests
 
+where ``$YT_HG`` is the path to the root of the yt mercurial repository.
+
 If you want to specify a specific unit test to run (and not run the entire
 suite), you can do so by specifying the path of the test relative to the
-``$YT_DEST/src/yt-hg/yt`` directory -- note that you strip off one ``yt`` more
-than you normally would!  For example, if you want to run the
-plot_window tests, you'd run:
+``$YT_HG/yt`` directory -- note that you strip off one ``yt`` more than you
+normally would!  For example, if you want to run the plot_window tests, you'd
+run:
 
 .. code-block:: bash
 
@@ -172,7 +174,7 @@
    $ nosetests --with-answer-testing
 
 In either case, the current gold standard results will be downloaded from the
-amazon cloud and compared to what is generated locally.  The results from a
+rackspace cloud and compared to what is generated locally.  The results from a
 nose testing session are pretty straightforward to understand; the results for
 each test are printed directly to STDOUT. If a test passes, nose prints a
 period, an F if a test fails, and an E if the test encounters an exception or errors

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/help/index.rst
--- a/doc/source/help/index.rst
+++ b/doc/source/help/index.rst
@@ -88,31 +88,40 @@
 -----------------------
 
 We've done our best to make the source clean, and it is easily searchable from 
-your computer.  Go inside your yt install directory by going to the 
-``$YT_DEST/src/yt-hg/yt`` directory where all the code lives.  You can then search 
-for the class, function, or keyword which is giving you problems with 
-``grep -r *``, which will recursively search throughout the code base.  (For a 
-much faster and cleaner experience, we recommend ``grin`` instead of 
-``grep -r *``.  To install ``grin`` with python, just type ``pip install 
-grin``.)  
+your computer.
 
-So let's say that pesky ``SlicePlot`` is giving you problems still, and you 
-want to look at the source to figure out what is going on.
+If you have not done so already (see :ref:`source-installation`), clone a copy
+of the yt mercurial repository and make it the 'active' installation by doing
+
+.. code-block:: bash
+
+  python setup.py develop
+
+in the root directory of the yt mercurial repository.
+
+.. note::
+
+  This has already been done for you if you installed using the bash install
+  script.  Building yt from source will not work if you do not have a C compiler
+  installed.
+
+Once inside the yt mercurial repository, you can then search for the class,
+function, or keyword which is giving you problems with ``grep -r *``, which will
+recursively search throughout the code base.  (For a much faster and cleaner
+experience, we recommend ``grin`` instead of ``grep -r *``.  To install ``grin``
+with python, just type ``pip install grin``.)
+
+So let's say that ``SlicePlot`` is still giving you problems, and you want to
+look at the source to figure out what is going on.
 
 .. code-block:: bash
 
-  $ cd $YT_DEST/src/yt-hg/yt
+  $ cd $YT_HG/yt
   $ grep -r SlicePlot *         (or $ grin SlicePlot)
-  
-   data_objects/analyzer_objects.py:class SlicePlotDataset(AnalysisTask):
-   data_objects/analyzer_objects.py:        from yt.visualization.api import SlicePlot
-   data_objects/analyzer_objects.py:        self.SlicePlot = SlicePlot
-   data_objects/analyzer_objects.py:        slc = self.SlicePlot(ds, self.axis, self.field, center = self.center)
-   ...
 
-You can now followup on this and open up the files that have references to 
-``SlicePlot`` (particularly the one that definese SlicePlot) and inspect their
-contents for problems or clarification.
+This will print a number of locations in the yt source tree where ``SlicePlot``
+is mentioned.  You can now follow up on this and open up the files that have
+references to ``SlicePlot`` (particularly the one that defines ``SlicePlot``) and
+inspect their contents for problems or clarification.
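The grep workflow above can be sketched against a mock file (the directory and the class body below are hypothetical stand-ins, so the example runs on its own):

```shell
# Build a one-file mock tree standing in for the yt source
mkdir -p /tmp/yt_grep_demo
printf 'class SlicePlot(PWViewerMPL):\n' > /tmp/yt_grep_demo/plot_window.py
# Recursive search prints each matching file and the matching line
grep -r SlicePlot /tmp/yt_grep_demo
# prints /tmp/yt_grep_demo/plot_window.py:class SlicePlot(PWViewerMPL):
rm -rf /tmp/yt_grep_demo
```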
 
 .. _isolate_and_document:
 
@@ -128,7 +137,6 @@
  * Put your script, errors, and outputs online:
 
    * ``$ yt pastebin script.py`` - pastes script.py online
-   * ``$ python script.py --paste`` - pastes errors online
    * ``$ yt upload_image image.png`` - pastes image online
 
  * Identify which version of the code you’re using. 

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -8,39 +8,190 @@
 Getting yt
 ----------
 
-yt is a Python package (with some components written in C), using NumPy as a
-computation engine, Matplotlib for some visualization tasks and Mercurial for
-version control.  Because installation of all of these interlocking parts can 
-be time-consuming, yt provides an installation script which downloads and builds
-a fully-isolated Python + NumPy + Matplotlib + HDF5 + Mercurial installation.  
-yt supports Linux and OSX deployment, with the possibility of deployment on 
-other Unix-like systems (XSEDE resources, clusters, etc.).
+In this document we describe several methods for installing yt. The method that
+will work best for you depends on your precise situation:
 
-Since the install is fully-isolated, if you get tired of having yt on your 
-system, you can just delete its directory, and yt and all of its dependencies
-will be removed from your system (no scattered files remaining throughout 
-your system).  
+* If you already have a scientific python software stack installed on your
+  computer and are comfortable installing python packages,
+  :ref:`source-installation` will probably be the best choice. If you have set
+  up python using a source-based package manager like `Homebrew
+  <http://brew.sh>`_ or `MacPorts <http://www.macports.org/>`_ this choice will
+  let you install yt using the python installed by the package manager. Similarly
+  for python environments set up via linux package managers so long as you
+  have the the necessary compilers installed (e.g. the ``build-essentials``
+  package on debian and ubuntu).
+
+* If you use the `Anaconda <https://store.continuum.io/cshop/anaconda/>`_ python
+  distribution see :ref:`anaconda-installation` for details on how to install
+  yt using the ``conda`` package manager.  Source-based installation from the
+  mercurial repository or via ``pip`` should also work under Anaconda. Note that
+  this is currently the only supported installation mechanism on Windows.
+
+* If you do not have root access on your computer, are not comfortable managing
+  python packages, or are working on a supercomputer or cluster computer, you
+  will probably want to use the bash installation script.  This builds python,
+  numpy, matplotlib, and yt from source to set up an isolated scientific python
+  environment inside of a single folder in your home directory. See
+  :ref:`install-script` for more details.
+
+.. _source-installation:
+
+Installing yt Using pip or from Source
+++++++++++++++++++++++++++++++++++++++
+
+To install yt from source, you must make sure you have yt's dependencies
+installed on your system.  These include: a C compiler, ``HDF5``, ``python``,
+``Cython``, ``NumPy``, ``matplotlib``, ``sympy``, and ``h5py``. From here, you
+can use ``pip`` (which comes with ``Python``) to install the latest stable
+version of yt:
+
+.. code-block:: bash
+
+  $ pip install yt
+
+The source code for yt may be found at the Bitbucket project site and can also
+be utilized for installation. If you prefer to install the development version
+of yt instead of the latest stable release, you will need ``mercurial`` to clone
+the official repo:
+
+.. code-block:: bash
+
+  hg clone https://bitbucket.org/yt_analysis/yt
+  cd yt
+  hg update yt
+  python setup.py install --user
+
+This will install yt into ``$HOME/.local/lib64/python2.7/site-packages``. 
+Please refer to the ``setuptools`` documentation for additional options.
+
+If you will be modifying yt, you can also make the clone of the yt mercurial
+repository the "active" installed copy:
+
+.. code-block:: bash
+
+  hg clone https://bitbucket.org/yt_analysis/yt
+  cd yt
+  hg update yt
+  python setup.py develop  
+
+If you choose this installation method, you do not need to run any activation
+script since this will install yt into your global python environment.
+
+Keeping yt Updated via Mercurial
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to maintain your yt installation via updates straight from the
+Bitbucket repository or if you want to do some development on your own, we
+suggest you check out some of the :ref:`development docs <contributing-code>`,
+especially the sections on :ref:`Mercurial <mercurial-with-yt>` and
+:ref:`building yt from source <building-yt>`.
+
+You can also make use of the following command to keep yt up to date from the
+command line:
+
+.. code-block:: bash
+
+  yt update
+
+This will detect that you have installed yt from the mercurial repository, pull
+any changes from bitbucket, and then recompile yt if necessary.
+
+.. _anaconda-installation:
+
+Installing yt Using Anaconda
+++++++++++++++++++++++++++++
+
+Perhaps the quickest way to get yt up and running is to install it using the
+`Anaconda Python Distribution <https://store.continuum.io/cshop/anaconda/>`_,
+which will provide you with an easy-to-use environment for installing Python
+packages.
+
+If you do not want to install the full anaconda python distribution, you can
+install a bare-bones Python installation using miniconda.  To install miniconda,
+visit http://repo.continuum.io/miniconda/ and download a recent version of the
+``Miniconda-x.y.z`` script (corresponding to Python 2.7) for your platform and
+system architecture. Next, run the script, e.g.:
+
+.. code-block:: bash
+
+  bash Miniconda-3.3.0-Linux-x86_64.sh
+
+Make sure that the Anaconda ``bin`` directory is in your path, and then issue:
+
+.. code-block:: bash
+
+  conda install yt
+
+which will install yt along with all of its dependencies.
+
+Recipes to build conda packages for yt are available at
+https://github.com/conda/conda-recipes.  To build the yt conda recipe, first
+clone the conda-recipes repository
+
+.. code-block:: bash
+
+  git clone https://github.com/conda/conda-recipes
+
+Then navigate to the repository root and invoke `conda build`:
+
+.. code-block:: bash
+
+  cd conda-recipes
+  conda build ./yt/
+
+Note that building a yt conda package requires a C compiler.
+
+.. _windows-installation:
+
+Installing yt on Windows
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Installation on Microsoft Windows is only supported for Windows XP Service Pack
+3 and higher (both 32-bit and 64-bit) using Anaconda, see
+:ref:`anaconda-installation`.  Also see :ref:`windows-developing` for details on
+how to build yt from source in Windows.
+
+.. _install-script:
+
+All-in-one installation script
+++++++++++++++++++++++++++++++
+
+Because installation of all of the interlocking parts necessary to install yt
+itself can be time-consuming, yt provides an all-in-one installation script
+which downloads and builds a fully-isolated Python + NumPy + Matplotlib + HDF5 +
+Mercurial installation. Since the install script compiles yt's dependencies from
+source, you must have C, C++, and optionally Fortran compilers installed.
+
+The install script supports UNIX-like systems, including Linux, OS X, and most
+supercomputer and cluster environments. It is particularly suited for deployment
+in environments where users do not have root access and can only install
+software into their home directory.
+
+Since the install is fully-isolated in a single directory, if you get tired of
+having yt on your system, you can just delete the directory and yt and all of
+its dependencies will be removed from your system (no scattered files remaining
+throughout your system).
+
+Running the install script
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 To get the installation script, download it using:
 
 .. code-block:: bash
 
-  http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
+  wget http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
 
 .. _installing-yt:
 
-Installing yt
--------------
-
-By default, the bash script will install an array of items, but there are 
-additional packages that can be downloaded and installed (e.g. SciPy, enzo, 
-etc.). The script has all of these options at the top of the file. You should 
-be able to open it and edit it without any knowledge of bash syntax.  
-To execute it, run:
+By default, the bash install script will install an array of items, but there
+are additional packages that can be downloaded and installed (e.g. SciPy, enzo,
+etc.). The script has all of these options at the top of the file. You should be
+able to open it and edit it without any knowledge of bash syntax.  To execute
+it, run:
 
 .. code-block:: bash
 
-  $ bash install_script.sh
+  bash install_script.sh
 
 Because the installer is downloading and building a variety of packages from
 source, this will likely take a while (e.g. 20 minutes), but you will get 
@@ -48,7 +199,7 @@
 
 If you receive errors during this process, the installer will provide you 
 with a large amount of information to assist in debugging your problems.  The 
-file ``yt_install.log`` will contain all of the ``STDOUT`` and ``STDERR`` from 
+file ``yt_install.log`` will contain all of the ``stdout`` and ``stderr`` from 
 the entire installation process, so it is usually quite cumbersome.  By looking 
 at the last few hundred lines (i.e. ``tail -500 yt_install.log``), you can 
 potentially figure out what went wrong.  If you have problems, though, do not 
@@ -57,7 +208,7 @@
 .. _activating-yt:
 
 Activating Your Installation
-----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Once the installation has completed, there will be instructions on how to set up 
 your shell environment to use yt by executing the activate script.  You must 
@@ -67,13 +218,13 @@
 
 .. code-block:: bash
 
-  $ source <yt installation directory>/bin/activate
+  source <yt installation directory>/bin/activate
 
 If you use csh or tcsh as your shell, activate that version of the script:
 
 .. code-block:: bash
 
-  $ source <yt installation directory>/bin/activate.csh
+  source <yt installation directory>/bin/activate.csh
 
 If you don't like executing outside scripts on your computer, you can set 
 the shell variables manually.  ``YT_DEST`` needs to point to the root of the
@@ -82,6 +233,38 @@
 will also need to set ``LD_LIBRARY_PATH`` and ``PYTHONPATH`` to contain 
 ``$YT_DEST/lib`` and ``$YT_DEST/python2.7/site-packages``, respectively.
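Setting those variables by hand might look like the following sketch (the install root ``$HOME/yt-x86_64`` is a hypothetical example; substitute your actual install directory):

```shell
# Hypothetical install root produced by the install script
export YT_DEST=$HOME/yt-x86_64
# Make the bundled libraries and python packages visible
export LD_LIBRARY_PATH=$YT_DEST/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$YT_DEST/python2.7/site-packages:$PYTHONPATH
```

These lines can go in your shell startup file if you prefer not to source the activate script.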
 
+.. _updating-yt:
+
+Updating yt and its dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+With many active developers, code development sometimes occurs at a furious
+pace in yt.  To make sure you're using the latest version of the code, run
+this command at a command-line:
+
+.. code-block:: bash
+
+  yt update
+
+Additionally, if you want to make sure you have the latest dependencies
+associated with yt and update the codebase simultaneously, type this:
+
+.. code-block:: bash
+
+  yt update --all
+
+.. _removing-yt:
+
+Removing yt and its dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Because yt and its dependencies are installed in an isolated directory when
+you use the script installer, you can easily remove yt and all of its
+dependencies cleanly.  Simply remove the install directory and its
+subdirectories and you're done.  If you *really* had problems with the
+code, this is a last resort: remove the directory and then fully
+:ref:`re-install <installing-yt>` from the install script again.
+
 .. _testing-installation:
 
 Testing Your Installation
@@ -92,7 +275,7 @@
 
 .. code-block:: bash
 
-  $ yt --help
+  yt --help
 
 If this works, you should get a list of the various command-line options for
 yt, which means you have successfully installed yt.  Congratulations!
@@ -102,112 +285,3 @@
 figure it out.
 
 If you like, this might be a good time :ref:`to run the test suite <testing>`.
-
-.. _updating-yt:
-
-Updating yt and its dependencies
---------------------------------
-
-With many active developers, code development sometimes occurs at a furious
-pace in yt.  To make sure you're using the latest version of the code, run
-this command at a command-line:
-
-.. code-block:: bash
-
-  $ yt update
-
-Additionally, if you want to make sure you have the latest dependencies
-associated with yt and update the codebase simultaneously, type this:
-
-.. code-block:: bash
-
-  $ yt update --all
-
-.. _removing-yt:
-
-Removing yt and its dependencies
---------------------------------
-
-Because yt and its dependencies are installed in an isolated directory when
-you use the script installer, you can easily remove yt and all of its
-dependencies cleanly.  Simply remove the install directory and its
-subdirectories and you're done.  If you *really* had problems with the
-code, this is a last defense for solving: remove and then fully
-:ref:`re-install <installing-yt>` from the install script again.
-
-.. _alternative-installation:
-
-Alternative Installation Methods
---------------------------------
-
-.. _pip-installation:
-
-Installing yt Using pip or from Source
-++++++++++++++++++++++++++++++++++++++
-
-If you want to forego the use of the install script, you need to make sure you
-have yt's dependencies installed on your system.  These include: a C compiler,
-``HDF5``, ``python``, ``cython``, ``NumPy``, ``matplotlib``, and ``h5py``. From here,
-you can use ``pip`` (which comes with ``Python``) to install yt as:
-
-.. code-block:: bash
-
-  $ pip install yt
-
-The source code for yt may be found at the Bitbucket project site and can also be
-utilized for installation. If you prefer to use it instead of relying on external
-tools, you will need ``mercurial`` to clone the official repo:
-
-.. code-block:: bash
-
-  $ hg clone https://bitbucket.org/yt_analysis/yt
-  $ cd yt
-  $ hg update yt
-  $ python setup.py install --user
-
-It will install yt into ``$HOME/.local/lib64/python2.7/site-packages``. 
-Please refer to ``setuptools`` documentation for the additional options.
-
-If you choose this installation method, you do not need to run the activation
-script as it is unnecessary.
-
-.. _anaconda-installation:
-
-Installing yt Using Anaconda
-++++++++++++++++++++++++++++
-
-Perhaps the quickest way to get yt up and running is to install it using the `Anaconda Python
-Distribution <https://store.continuum.io/cshop/anaconda/>`_, which will provide you with a
-easy-to-use environment for installing Python packages. To install a bare-bones Python
-installation with yt, first visit http://repo.continuum.io/miniconda/ and download a recent
-version of the ``Miniconda-x.y.z`` script (corresponding to Python 2.7) for your platform and
-system architecture. Next, run the script, e.g.:
-
-.. code-block:: bash
-
-  $ bash Miniconda-3.3.0-Linux-x86_64.sh
-
-Make sure that the Anaconda ``bin`` directory is in your path, and then issue:
-
-.. code-block:: bash
-
-  $ conda install yt
-
-which will install yt along with all of its dependencies.
-
-.. _windows-installation:
-
-Installing yt on Windows
-++++++++++++++++++++++++
-
-Installation on Microsoft Windows is only supported for Windows XP Service Pack 3 and
-higher (both 32-bit and 64-bit) using Anaconda.
-
-Keeping yt Updated via Mercurial
-++++++++++++++++++++++++++++++++
-
-If you want to maintain your yt installation via updates straight from the Bitbucket repository,
-or if you want to do some development on your own, we suggest you check out some of the
-:ref:`development docs <contributing-code>`, especially the sections on :ref:`Mercurial
-<mercurial-with-yt>` and :ref:`building yt from source <building-yt>`.
-

diff -r 58f37beaba3c763150dc4b1a83debafc5e8f63c8 -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e doc/source/reference/faq/index.rst
--- a/doc/source/reference/faq/index.rst
+++ b/doc/source/reference/faq/index.rst
@@ -196,33 +196,10 @@
 
 .. code-block:: bash
 
-    cd $YT_DEST/src/yt-hg
+    cd $YT_HG
     python setup.py develop
 
-
-Unresolved Installation Problem on OSX 10.6
--------------------------------------------
-When installing on some instances of OSX 10.6, a few users have noted a failure
-when yt tries to build with OpenMP support:
-
-    Symbol not found: _GOMP_barrier
-        Referenced from: <YT_DEST>/src/yt-hg/yt/utilities/lib/grid_traversal.so
-
-        Expected in: dynamic lookup
-
-To resolve this, please make a symbolic link:
-
-.. code-block:: bash
-
-  $ ln -s /usr/local/lib/x86_64 <YT_DEST>/lib64
-
-where ``<YT_DEST>`` is replaced by the path to the root of the directory
-containing the yt install, which will usually be ``yt-<arch>``. After doing so, 
-you should be able to cd to <YT_DEST>/src/yt-hg and run:
-
-.. code-block:: bash
-
-  $ python setup.py install
+where ``$YT_HG`` is the path to the yt mercurial repository.
 
 .. _plugin-file:

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


More information about the yt-svn mailing list