[yt-svn] commit/yt-doc: 8 new changesets

commits-noreply at bitbucket.org
Tue Dec 3 08:37:54 PST 2013


8 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/486b2b6ac754/
Changeset:   486b2b6ac754
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-01 23:20:50
Summary:     Updating cookbook for 3.0.
Affected #:  1 file

diff -r d8ffc03386be9b5ca6e388229481343bbbd1333b -r 486b2b6ac754909a3ae1d3e9cbef7a33fa9f66f9 source/cookbook/show_hide_axes_colorbar.py
--- a/source/cookbook/show_hide_axes_colorbar.py
+++ b/source/cookbook/show_hide_axes_colorbar.py
@@ -6,14 +6,14 @@
 
 slc.save("default_sliceplot.png")
 
-slc.plots["Density"].hide_axes()
+slc.plots["gas", "Density"].hide_axes()
 
 slc.save("no_axes_sliceplot.png")
 
-slc.plots["Density"].hide_colorbar()
+slc.plots["gas", "Density"].hide_colorbar()
 
 slc.save("no_axes_no_colorbar_sliceplot.png")
 
-slc.plots["Density"].show_axes()
+slc.plots["gas", "Density"].show_axes()
 
 slc.save("no_colorbar_sliceplot.png")
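
The diff above switches the ``plots`` dictionary from a bare field name key to
a (field type, field name) tuple.  For context, a minimal sketch of the full
recipe under the new keying (the dataset path is a placeholder; any supported
output works):

   from yt.mods import *

   pf = load("DD0010/data0010")            # placeholder dataset path
   slc = SlicePlot(pf, "z", "Density")
   slc.save("default_sliceplot.png")

   # plot objects are now keyed by a (field type, field name) tuple
   slc.plots["gas", "Density"].hide_axes()
   slc.save("no_axes_sliceplot.png")
   slc.plots["gas", "Density"].hide_colorbar()
   slc.save("no_axes_no_colorbar_sliceplot.png")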


https://bitbucket.org/yt_analysis/yt-doc/commits/b87795b65568/
Changeset:   b87795b65568
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-22 16:00:34
Summary:     Updating to Sphinx Bootstrap.  Theme still up for grabs.
Affected #:  2 files

diff -r 486b2b6ac754909a3ae1d3e9cbef7a33fa9f66f9 -r b87795b65568ab0916b7f6ca3792f52010dcec4a source/_templates/layout.html
--- a/source/_templates/layout.html
+++ b/source/_templates/layout.html
@@ -35,44 +35,3 @@
     </div>
 {%- endblock %}
 
-{%- block sidebarsearch %}
-  {{ super() }}
-  <h3 style="margin-top: 1.5em;">{{ _('Search the Mailing Lists') }}</h3>
-  <form class="search" action="http://www.google.com/cse" id="cse-search-box">
-    <div>
-      <input type="hidden" name="cx" value="010428198273461986377:xyfd9ztykqm" />
-      <input type="hidden" name="ie" value="UTF-8" />
-      <input type="text" name="q" size="18" />
-      <input type="submit" name="sa" value="Search" />
-    </div>
-  </form>
-  <p class="searchtip" style="font-size: 90%">
-    {{ _('Search the yt mailing lists.') }}
-  </p>
-  <script type="text/javascript" src="http://www.google.com/cse/brand?form=cse-search-box&lang=en"></script>
-{%- endblock %}
-
-{# update static links in the relbar #}
-{% block rellinks %}
-{% endblock %}
-{% block header %}
-    <div class="header-wrapper">
-      <div class="header">
-        {%- if logo %}
-          <p class="logo"><a href="{{ pathto(master_doc) }}">
-            <img class="logo" src="{{ pathto('_static/' + logo, 1) }}" alt="Logo"/>
-          </a></p>
-        {%- endif %}
-        {%- block headertitle %}
-        <h1><a href="{{ pathto(master_doc) }}">{{ shorttitle|e }}</a></h1>
-        {%- endblock %}
-        <div class="rel">
-          <a href="http://yt-project.org/">YT Home </a> {{reldelim2}}
-          <a href={{ pathto(master_doc) }}>Docs Home </a> {{reldelim2}}
-          <a href="http://hub.yt-project.org/">Hub</a> {{reldelim2}}
-          <a href={{ pathto('search') }}>Search </a>
-        </div>
-       </div>
-    </div>
-{% endblock %}
-

diff -r 486b2b6ac754909a3ae1d3e9cbef7a33fa9f66f9 -r b87795b65568ab0916b7f6ca3792f52010dcec4a source/conf.py
--- a/source/conf.py
+++ b/source/conf.py
@@ -46,16 +46,16 @@
 
 # General information about the project.
 project = u'yt'
-copyright = u'2012, the yt Project'
+copyright = u'2013, the yt Project'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
 # The short X.Y version.
-version = '2.5'
+version = '3.0'
 # The full version, including alpha/beta/rc tags.
-release = '2.5'
+release = '3.0alpha'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
@@ -96,27 +96,19 @@
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'agogo'
+import sphinx_bootstrap_theme
+html_theme = 'bootstrap'
+html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the
 # documentation.
 html_theme_options = dict(
-    bodyfont = 'Droid Sans',
-    pagewidth = '1080px',
-    documentwidth = '880px',
-    sidebarwidth = '200px',
-    headerfont = 'Crimson Text',
-
-    footerbg="#003000",
-    headerbg="#4a8f43",
-    headercolor1="#000000",
-    headercolor2="#000000",
-    headerlinkcolor="#A5A999",
-    linkcolor="#4a8f43",
+    bootstrap_version = "3",
+    bootswatch_theme = "readable"
 )
 
-html_style = "agogo_yt.css"
+#html_style = "agogo_yt.css"
 
 # Add any paths that contain custom themes here, relative to this directory.
 #html_theme_path = []
@@ -167,7 +159,7 @@
 #html_split_index = False
 
 # If true, links to the reST sources are added to the pages.
-html_show_sourcelink = True
+html_show_sourcelink = False
 
 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
 #html_show_sphinx = True
@@ -238,10 +230,10 @@
 
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {'http://docs.python.org/': None,
-                       'http://ipython.org/ipython-doc/rel-0.10/html/': None,
+                       'http://ipython.org/ipython-doc/rel-1.1.0/html/': None,
                        'http://docs.scipy.org/doc/numpy/': None,
                        'http://matplotlib.sourceforge.net/': None,
                        }
 
-if not on_rtd:
-    autosummary_generate = glob.glob("api/api.rst")
+#if not on_rtd:
+#    autosummary_generate = glob.glob("api/api.rst")


https://bitbucket.org/yt_analysis/yt-doc/commits/b8e877f469b5/
Changeset:   b8e877f469b5
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-22 16:22:35
Summary:     Updating front page.
Affected #:  1 file

diff -r b87795b65568ab0916b7f6ca3792f52010dcec4a -r b8e877f469b567041ebc22924584f3fa1f85aa7e source/index.rst
--- a/source/index.rst
+++ b/source/index.rst
@@ -2,21 +2,38 @@
 ===========
 
 yt is a community-developed analysis and visualization toolkit for
-astrophysical simulation data.  yt runs both interactively and
-non-interactively, and has been designed to support as many operations as
-possible in parallel. 
+volumetric data.  yt has been applied mostly to astrophysical simulation data,
+but it can be applied to many different types of data, including seismological
+data, radio telescope data, weather simulations, and nuclear engineering simulations.
 
-yt provides full support for several simulation codes in the current release:
+yt runs both interactively and non-interactively, and has been designed to
+support as many operations as possible in parallel. 
+
+These documents refer to the in-development "yt 3.0" branch.  This branch
+contains a complete rework of the underlying data model for yt, and as a result
+may be backwards incompatible in some ways.  For more information about the
+design decisions and why they have been made, please see the `YTEP
+<http://ytep.readthedocs.org/>`_ list.
+
+yt provides support for several simulation codes in the current release:
 
  * `Enzo <http://enzo-project.org/>`_ 
- * Orion
- * `Nyx <https://ccse.lbl.gov/Research/NYX/index.html>`_
+ * Boxlib simulations (tested with Orion, 
+   `Nyx <https://ccse.lbl.gov/Research/NYX/index.html>`_,
+   `Castro <https://ccse.lbl.gov/Research/CASTRO/>`_,
+   and `Maestro <https://ccse.lbl.gov/Research/MAESTRO/>`_).
  * `FLASH <http://flash.uchicago.edu/website/home/>`_
- * Piernik
+ * `Piernik <http://piernik.astri.umk.pl/doku.php>`_
+ * `PyNE <http://pynesim.org/>`_ (and some MOAB formats such as Hex8)
+ * `RAMSES <http://www.itp.uzh.ch/~teyssier/ramses/RAMSES.html>`_
+ * `Athena <https://trac.princeton.edu/Athena/>`_
+ * ART and ARTIO
+ * Tipsy data format (Gasoline, PKDGrav)
+ * Gadget (Binary and HDF5)
 
-We also provide limited support for Castro, NMSU-ART, and Maestro.  A limited
-amount of RAMSES IO is provided, but full support  for RAMSES will not be
-completed until the 3.0 release of yt.
+Additionally, yt can load data manually from any NumPy array.  This type of
+data-loading support works for uniform and AMR grid patches, hexahedral
+irregular grids, particles, and octrees in depth-first traversal format.
 
 If you use ``yt`` in a paper, you are highly encouraged to submit the
 repository containing the scripts you used to analyze and visualize your data
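
The new front-page statement that yt can load data manually from any NumPy
array refers to the stream frontend.  A minimal sketch, mirroring the
``load_uniform_grid`` example that appears in ``loading_data.rst`` later in
this series (the array here is random stand-in data):

   import numpy as np
   from yt.frontends.stream.api import load_uniform_grid

   arr = np.random.random((64, 64, 64))    # stand-in uniform grid
   data = dict(Density = arr)
   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox)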


https://bitbucket.org/yt_analysis/yt-doc/commits/a6705c765fd0/
Changeset:   a6705c765fd0
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-23 15:10:57
Summary:     First pass at a list of differences.
Affected #:  1 file

diff -r b8e877f469b567041ebc22924584f3fa1f85aa7e -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff source/yt3differences.rst
--- /dev/null
+++ b/source/yt3differences.rst
@@ -0,0 +1,92 @@
+What's New and Different in yt 3.0?
+===================================
+
+If you are new to yt, welcome!  If you're coming to yt 3.0 from an older
+version, however, there may be a few things in this version that are different
+from what you are used to.  We have tried to build compatibility layers to
+minimize disruption to existing scripts, but some things will necessarily be
+different.
+
+Additionally, this list is current as of the latest alpha release.  Several
+more changes are planned that are considerably more disruptive: these include
+unit handling, index/hierarchy handling, and field naming.
+
+.. warning:: This document covers *current* API changes.  As API changes occur
+             it will be updated.
+
+API Changes
+-----------
+
+These are the items that have already changed in *user-facing* API:
+
+Field Naming
+++++++++++++
+
+Fields can be accessed by their short names, but yt now has an explicit
+mechanism of distinguishing between field types and particle types.  This is
+expressed through a two-key description.  For example::
+
+   my_object["gas", "density"]
+
+will return the gas density field.  This extends to particle types as well.  By
+default you do *not* need to supply the field "type" key; in case of ambiguity,
+yt will substitute the default field type in its place.
+
+Field Info
+++++++++++
+
+In the past, the object ``ds`` (or ``pf``) had a ``field_info`` attribute, a
+dictionary mapping field names to derived field definitions.  Because of the
+field naming changes (i.e., access-by-tuple), it is now better to use the
+function ``_get_field_info`` than to access the ``field_info`` dictionary
+directly.  For example::
+
+   finfo = ds._get_field_info("gas", "Density")
+
+This function respects the special "field type" ``unknown`` and will search all
+field types for the field name.
+
+
+Parameter Files are Now Datasets
+++++++++++++++++++++++++++++++++
+
+Wherever possible, we have attempted to replace the term "parameter file"
+(i.e., ``pf``) with the term "dataset."  Future revisions will change most of
+the ``pf`` attributes of objects into ``ds`` or ``dataset`` attributes.
+
+Projection Argument Order
++++++++++++++++++++++++++
+
+Previously, the argument order for projections was inconsistent with the other
+data objects.  The argument order is now ``field`` then ``axis``.  (The API for
+Plot Windows is unchanged.)
+
+Field Parameters
+++++++++++++++++
+
+All data objects now accept an explicit list of ``field_parameters`` rather
+than accepting ``kwargs`` and supplying them to field parameters.
+
+Object Renaming
++++++++++++++++
+
+Nearly all internal objects have been renamed.  Typically this means either
+removing ``AMR`` from the prefix or replacing it with ``YT``.  All names of
+objects remain the same for the purposes of selecting data and creating them;
+i.e., you will not need to change ``ds.h.sphere`` to something else.
+
+Boolean Regions
++++++++++++++++
+
+Boolean regions are not yet implemented in yt 3.0.
+
+Extracted Regions
++++++++++++++++++
+
+Extracted regions are not yet implemented in yt 3.0.
+
+Cut Regions
++++++++++++
+
+Cut regions are not yet implemented in yt 3.0.
+
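
Taken together, the field-naming and field-info changes above look roughly
like the following in a script (the dataset path and field names are
illustrative):

   from yt.mods import *

   pf = load("DD0010/data0010")            # placeholder dataset path
   dd = pf.h.all_data()

   # explicit (field type, field name) access
   rho = dd["gas", "density"]

   # short names still work; the default field type is substituted
   rho_short = dd["density"]

   # field metadata now goes through _get_field_info rather than
   # indexing the field_info dictionary directly
   finfo = pf._get_field_info("gas", "Density")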


https://bitbucket.org/yt_analysis/yt-doc/commits/3ff41e238be6/
Changeset:   3ff41e238be6
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-23 16:12:58
Summary:     Adding a bunch more, removing a few sections from the TOC (but not the repo.)
Affected #:  5 files

diff -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb source/analyzing/loading_data.rst
--- a/source/analyzing/loading_data.rst
+++ /dev/null
@@ -1,336 +0,0 @@
-.. _loading-data:
-
-Loading Data
-============
-
-This section contains information on how to load data into ``yt``, as well as
-some important caveats about different data formats.
-
-.. _loading-numpy-array:
-
-Generic Array Data
-------------------
-
-Even if your data is not strictly related to fields commonly used in
-astrophysical codes or your code is not supported yet, you can still feed it to
-``yt`` to use its advanced visualization and analysis facilities. The only
-requirement is that your data can be represented as one or more uniform, three
-dimensional numpy arrays. Assuming that you have your data in ``arr``,
-the following code:
-
-.. code-block:: python
-
-   from yt.frontends.stream.api import load_uniform_grid
-
-   data = dict(Density = arr)
-   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [1.5, 1.5]])
-   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
-
-will create ``yt``-native parameter file ``pf`` that will treat your array as
-density field in cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and
-simultaneously divide the domain into 12 chunks, so that you can take advantage
-of the underlying parallelism. 
-
-Particle fields are detected as one-dimensional fields. The number of
-particles is set by the ``number_of_particles`` key in
-``data``. Particle fields are then added as one-dimensional arrays in
-a similar manner as the three-dimensional grid fields:
-
-.. code-block:: python
-
-   from yt.frontends.stream.api import load_uniform_grid
-
-   data = dict(Density = dens, 
-               number_of_particles = 1000000,
-               particle_position_x = posx_arr, 
-	       particle_position_y = posy_arr,
-	       particle_position_z = posz_arr)
-   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [1.5, 1.5]])
-   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
-
-where in this exampe the particle position fields have been assigned. ``number_of_particles`` must be the same size as the particle
-arrays. If no particle arrays are supplied then ``number_of_particles`` is assumed to be zero. 
-
-.. rubric:: Caveats
-
-* Units will be incorrect unless the data has already been converted to cgs.
-* Particles may be difficult to integrate.
-* Data must already reside in memory.
-
-.. _loading-enzo-data:
-
-Enzo Data
----------
-
-Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
-dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
-it the parameter file name.  This would be the name of the output file, and it
-contains no extension.  For instance, if you have the following files:
-
-.. code-block:: none
-
-   DD0010/
-   DD0010/data0010
-   DD0010/data0010.hierarchy
-   DD0010/data0010.cpu0000
-   DD0010/data0010.cpu0001
-   DD0010/data0010.cpu0002
-   DD0010/data0010.cpu0003
-
-You would feed the ``load`` command the filename ``DD0010/data0010`` as
-mentioned.
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("DD0010/data0010")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Enzo usage
-* Units should be correct, if you utilize standard unit-setting routines.  yt
-  will notify you if it cannot determine the units, although this
-  notification will be passive.
-* 2D and 1D data are supported, but the extraneous dimensions are set to be
-  of length 1.0
-
-.. _loading-orion-data:
-
-Orion Data
-----------
-
-Orion data is fully supported and cared for by Jeff Oishi.  This method should
-also work for CASTRO and MAESTRO data, which are cared for by Matthew Turk and
-Chris Malone, respectively.  To load an Orion dataset, you can use the ``load``
-command provided by ``yt.mods`` and supply to it the directory file name.
-**You must also have the ``inputs`` file in the base directory.**  For
-instance, if you were in a directory with the following files:
-
-.. code-block:: none
-
-   inputs
-   pltgmlcs5600/
-   pltgmlcs5600/Header
-   pltgmlcs5600/Level_0
-   pltgmlcs5600/Level_0/Cell_H
-   pltgmlcs5600/Level_1
-   pltgmlcs5600/Level_1/Cell_H
-   pltgmlcs5600/Level_2
-   pltgmlcs5600/Level_2/Cell_H
-   pltgmlcs5600/Level_3
-   pltgmlcs5600/Level_3/Cell_H
-   pltgmlcs5600/Level_4
-   pltgmlcs5600/Level_4/Cell_H
-
-You would feed it the filename ``pltgmlcs5600``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("pltgmlcs5600")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Orion usage
-* Star particles are not supported at the current time
-
-.. _loading-flash-data:
-
-FLASH Data
-----------
-
-FLASH HDF5 data is *mostly* supported and cared for by John ZuHone.  To load a
-FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
-supply to it the file name of a plot file or checkpoint file, but particle
-files are not currently directly loadable by themselves, due to the
-fact that they typically lack grid information. For instance, if you were in a directory with
-the following files:
-
-.. code-block:: none
-
-   cosmoSim_coolhdf5_chk_0026
-
-You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("cosmoSim_coolhdf5_chk_0026")
-
-If you have a FLASH particle file that was created at the same time as
-a plotfile or checkpoint file (therefore having particle data
-consistent with the grid structure of the latter), its data may be loaded with the
-``particle_filename`` optional argument:
-
-.. code-block:: python
-
-    from yt.mods import *
-    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
-
-.. rubric:: Caveats
-
-* Please be careful that the units are correctly utilized; yt assumes cgs
-* Velocities and length units will be scaled to comoving coordinates if yt is
-  able to discern you are examining a cosmology simulation; particle and grid
-  positions will not be.
-* Domains may be visualized assuming periodicity.
-
-.. _loading-ramses-data:
-
-RAMSES Data
------------
-
-RAMSES data enjoys preliminary support and is cared for by Matthew Turk.  If
-you are interested in taking a development or stewardship role, please contact
-him.  To load a RAMSES dataset, you can use the ``load`` command provided by
-``yt.mods`` and supply to it the ``info*.txt`` filename.  For instance, if you
-were in a directory with the following files:
-
-.. code-block:: none
-
-   output_00007
-   output_00007/amr_00007.out00001
-   output_00007/grav_00007.out00001
-   output_00007/hydro_00007.out00001
-   output_00007/info_00007.txt
-   output_00007/part_00007.out00001
-
-You would feed it the filename ``output_00007/info_00007.txt``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("output_00007/info_00007.txt")
-
-.. rubric:: Caveats
-
-* Please be careful that the units are correctly set!  This may not be the
-  case for RAMSES data
-* Upon instantiation of the hierarchy, yt will attempt to regrid the entire
-  domain to ensure minimum-coverage from a set of grid patches.  (This is
-  described in the yt method paper.)  This is a time-consuming process and it
-  has not yet been written to be stored between calls.
-* Particles are not supported
-* Parallelism will not be terribly efficient for large datasets
-* There may be occasional segfaults on multi-domain data, which do not
-  reflect errors in the calculation
-
-If you are interested in helping with RAMSES support, we are eager to hear from
-you!
-
-.. _loading-art-data:
-
-ART Data
---------
-
-ART data enjoys preliminary support and is supported by Christopher Moody.
-Please contact the ``yt-dev`` mailing list if you are interested in using yt
-for ART data, or if you are interested in assisting with development of yt to
-work with ART data.
-
-At the moment, the ART octree is 'regridded' at each level to make the native
-octree look more like a mesh-based code. As a result, the initial outlay
-is about ~60 seconds to grid octs onto a mesh. This will be improved in 
-``yt-3.0``, where octs will be supported natively. 
-
-To load an ART dataset you can use the ``load`` command provided by 
-``yt.mods`` and passing the gas mesh file. It will search for and attempt 
-to find the complementary dark matter and stellar particle header and data 
-files. However, your simulations may not follow the same naming convention.
-
-So for example, a single snapshot might have a series of files looking like
-this:
-
-.. code-block:: none
-
-   10MpcBox_csf512_a0.300.d    #Gas mesh
-   PMcrda0.300.DAT             #Particle header
-   PMcrs0a0.300.DAT            #Particle data (positions,velocities)
-   stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
-
-The ART frontend tries to find the associated files matching the above, but
-if that fails you can specify ``file_particle_data``,``file_particle_data``,
-``file_star_data`` in addition to the specifying the gas mesh. You also have 
-the option of gridding particles, and assigning them onto the meshes.
-This process is in beta, and for the time being it's probably  best to leave
-``do_grid_particles=False`` as the default.
-
-To speed up the loading of an ART file, you have a few options. You can turn 
-off the particles entirely by setting ``discover_particles=False``. You can
-also only grid octs up to a certain level, ``limit_level=5``, which is useful
-when debugging by artificially creating a 'smaller' dataset to work with.
-
-Finally, when stellar ages are computed we 'spread' the ages evenly within a
-smoothing window. By default this is turned on and set to 10Myr. To turn this 
-off you can set ``spread=False``, and you can tweak the age smoothing window
-by specifying the window in seconds, ``spread=1.0e7*265*24*3600``. 
-
-.. code-block:: python
-    
-   from yt.mods import *
-
-   file = "/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d"
-   pf = load(file,discover_particles=True,grid_particles=2,limit_level=3)
-   pf.h.print_stats()
-   dd=pf.h.all_data()
-   print np.sum(dd['particle_type']==0)
-
-In the above example code, the first line imports the standard yt functions,
-followed by defining the gas mesh file. It's loaded only through level 3, but
-grids particles on to meshes on level 2 and higher. Finally, we create a data
-container and ask it to gather the particle_type array. In this case ``type==0``
-is for the most highly-refined dark matter particle, and we print out how many
-high-resolution star particles we find in the simulation.  Typically, however,
-you shouldn't have to specify any keyword arguments to load in a dataset.
-
-.. loading-amr-data:
-
-Generic AMR Data
-----------------
-
-It is possible to create native ``yt`` parameter file from Python's dictionary
-that describes set of rectangular patches of data of possibly varying
-resolution. 
-
-.. code-block:: python
-
-   from yt.frontends.stream.api import load_amr_grids
-
-   grid_data = [
-       dict(left_edge = [0.0, 0.0, 0.0],
-            right_edge = [1.0, 1.0, 1.],
-            level = 0,
-            dimensions = [32, 32, 32],
-            number_of_particles = 0)
-       dict(left_edge = [0.25, 0.25, 0.25],
-            right_edge = [0.75, 0.75, 0.75],
-            level = 1,
-            dimensions = [32, 32, 32],
-            number_of_particles = 0)
-   ]
-  
-   for g in grid_data:
-       g["Density"] = np.random.random(g["dimensions"]) * 2**g["level"]
-  
-   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
-
-Particle fields are supported by adding 1-dimensional arrays and
-setting the ``number_of_particles`` key to each ``grid``'s dict:
-
-.. code-block:: python
-
-    for g in grid_data:
-        g["number_of_particles"] = 100000
-        g["particle_position_x"] = np.random.random((g["number_of_particles"]))
-
-.. rubric:: Caveats
-
-* Units will be incorrect unless the data has already been converted to cgs.
-* Some functions may behave oddly, and parallelism will be disappointing or
-  non-existent in most cases.
-* No consistency checks are performed on the hierarchy
-* Data must already reside in memory.
-* Consistency between particle positions and grids is not checked;
-  ``load_amr_grids`` assumes that particle positions associated with one grid are
-  not bounded within another grid at a higher level, so this must be
-  ensured by the user prior to loading the grid data. 

diff -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb source/conf.py
--- a/source/conf.py
+++ b/source/conf.py
@@ -29,7 +29,8 @@
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
               'sphinx.ext.pngmath', 'sphinx.ext.viewcode',
-              'sphinx.ext.autosummary', 'numpydocmod', 'youtube',
+              #'sphinx.ext.autosummary', 
+              'numpydocmod', 'youtube',
               'yt_cookbook', 'yt_colormaps']
 
 # Add any paths that contain templates here, relative to this directory.

diff -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb source/index.rst
--- a/source/index.rst
+++ b/source/index.rst
@@ -129,20 +129,18 @@
    :maxdepth: 2
 
    welcome/index
-   orientation/index
+   loading_data
+   yt3differences
    bootcamp
-   workshop
-   help/index
    interacting/index
-   configuration
-   cookbook/index
    analyzing/index
    visualizing/index
+   advanced/index
    analysis_modules/index
-   advanced/index
+   cookbook/index
+   help/index
    getting_involved/index
    api/api   
-   field_list
    faq/index
    changelog
 

diff -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb source/loading_data.rst
--- /dev/null
+++ b/source/loading_data.rst
@@ -0,0 +1,338 @@
+.. _loading-data:
+
+Loading Data
+============
+
+This section contains information on how to load data into ``yt``, as well as
+some important caveats about different data formats.
+
+.. _loading-enzo-data:
+
+Enzo Data
+---------
+
+Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
+dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
+it the parameter file name.  This would be the name of the output file, and it
+contains no extension.  For instance, if you have the following files:
+
+.. code-block:: none
+
+   DD0010/
+   DD0010/data0010
+   DD0010/data0010.hierarchy
+   DD0010/data0010.cpu0000
+   DD0010/data0010.cpu0001
+   DD0010/data0010.cpu0002
+   DD0010/data0010.cpu0003
+
+You would feed the ``load`` command the filename ``DD0010/data0010`` as
+mentioned.
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("DD0010/data0010")
+
+.. rubric:: Caveats
+
+* There are no major caveats for Enzo usage
+* Units should be correct, if you utilize standard unit-setting routines.  yt
+  will notify you if it cannot determine the units, although this
+  notification will be passive.
+* 2D and 1D data are supported, but the extraneous dimensions are set to be
+  of length 1.0
+
+.. _loading-orion-data:
+
+Boxlib Data
+-----------
+
+yt has been tested with Boxlib data generated by Orion, Nyx, Maestro and
+Castro.  Currently it is cared for by a combination of Andrew Myers, Chris
+Malone, and Matthew Turk.
+
+To load a Boxlib dataset, you can use the ``load`` command provided by
+``yt.mods`` and supply to it the directory file name.  **You must also have the
+``inputs`` file in the base directory.**  For instance, if you were in a
+directory with the following files:
+
+.. code-block:: none
+
+   inputs
+   pltgmlcs5600/
+   pltgmlcs5600/Header
+   pltgmlcs5600/Level_0
+   pltgmlcs5600/Level_0/Cell_H
+   pltgmlcs5600/Level_1
+   pltgmlcs5600/Level_1/Cell_H
+   pltgmlcs5600/Level_2
+   pltgmlcs5600/Level_2/Cell_H
+   pltgmlcs5600/Level_3
+   pltgmlcs5600/Level_3/Cell_H
+   pltgmlcs5600/Level_4
+   pltgmlcs5600/Level_4/Cell_H
+
+You would feed it the filename ``pltgmlcs5600``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("pltgmlcs5600")
+
+.. rubric:: Caveats
+
+* There are no major caveats for Boxlib usage
+
+.. _loading-flash-data:
+
+FLASH Data
+----------
+
+FLASH HDF5 data is *mostly* supported and cared for by John ZuHone.  To load a
+FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
+supply to it the file name of a plot file or checkpoint file.  Particle files
+are not currently directly loadable by themselves, because they typically
+lack grid information.  For instance, if you were in a directory with the
+following files:
+
+.. code-block:: none
+
+   cosmoSim_coolhdf5_chk_0026
+
+You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("cosmoSim_coolhdf5_chk_0026")
+
+If you have a FLASH particle file that was created at the same time as
+a plotfile or checkpoint file (therefore having particle data
+consistent with the grid structure of the latter), its data may be loaded with the
+``particle_filename`` optional argument:
+
+.. code-block:: python
+
+    from yt.mods import *
+    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+
+.. rubric:: Caveats
+
+* Please be careful that the units are correctly utilized; yt assumes cgs
+* Velocities and length units will be scaled to comoving coordinates if yt is
+  able to discern you are examining a cosmology simulation; particle and grid
+  positions will not be.
+* Domains may be visualized assuming periodicity.
+
+.. _loading-ramses-data:
+
+RAMSES Data
+-----------
+
+RAMSES data enjoys preliminary support and is cared for by Matthew Turk.  If
+you are interested in taking a development or stewardship role, please contact
+him.  To load a RAMSES dataset, you can use the ``load`` command provided by
+``yt.mods`` and supply to it the ``info*.txt`` filename.  For instance, if you
+were in a directory with the following files:
+
+.. code-block:: none
+
+   output_00007
+   output_00007/amr_00007.out00001
+   output_00007/grav_00007.out00001
+   output_00007/hydro_00007.out00001
+   output_00007/info_00007.txt
+   output_00007/part_00007.out00001
+
+You would feed it the filename ``output_00007/info_00007.txt``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("output_00007/info_00007.txt")
+
+.. rubric:: Caveats
+
+* Please be careful that the units are correctly set!  This may not be the
+  case for RAMSES data
+* Upon instantiation of the hierarchy, yt will attempt to regrid the entire
+  domain to ensure minimum-coverage from a set of grid patches.  (This is
+  described in the yt method paper.)  This is a time-consuming process and it
+  has not yet been written to be stored between calls.
+* Particles are not supported
+* Parallelism will not be terribly efficient for large datasets
+* There may be occasional segfaults on multi-domain data, which do not
+  reflect errors in the calculation
+
+If you are interested in helping with RAMSES support, we are eager to hear from
+you!
+
+.. _loading-art-data:
+
+ART Data
+--------
+
+ART data enjoys preliminary support and is supported by Christopher Moody.
+Please contact the ``yt-dev`` mailing list if you are interested in using yt
+for ART data, or if you are interested in assisting with development of yt to
+work with ART data.
+
+At the moment, the ART octree is 'regridded' at each level to make the native
+octree look more like a mesh-based code. As a result, the initial outlay
+is roughly 60 seconds to grid octs onto a mesh. This will be improved in
+``yt-3.0``, where octs will be supported natively. 
+
+To load an ART dataset you can use the ``load`` command provided by
+``yt.mods``, passing it the gas mesh file.  It will attempt to find the
+complementary dark matter and stellar particle header and data files.
+However, your simulations may not follow the same naming convention.
+
+So for example, a single snapshot might have a series of files looking like
+this:
+
+.. code-block:: none
+
+   10MpcBox_csf512_a0.300.d    #Gas mesh
+   PMcrda0.300.DAT             #Particle header
+   PMcrs0a0.300.DAT            #Particle data (positions,velocities)
+   stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
+
+The ART frontend tries to find the associated files matching the above, but
+if that fails you can specify ``file_particle_data`` and ``file_star_data`` in
+addition to specifying the gas mesh.  You also have the option of gridding
+particles and assigning them onto the meshes.  This process is in beta, and
+for the time being it's probably best to leave ``do_grid_particles=False``
+as the default.
+
+To speed up the loading of an ART file, you have a few options. You can turn 
+off the particles entirely by setting ``discover_particles=False``. You can
+also only grid octs up to a certain level, ``limit_level=5``, which is useful
+when debugging by artificially creating a 'smaller' dataset to work with.
+
+Finally, when stellar ages are computed we 'spread' the ages evenly within a
+smoothing window. By default this is turned on and set to 10Myr. To turn this 
+off you can set ``spread=False``, and you can tweak the age smoothing window
+by specifying the window in seconds, ``spread=1.0e7*365*24*3600``.
+
+.. code-block:: python
+    
+   from yt.mods import *
+
+   file = "/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d"
+   pf = load(file, discover_particles=True, grid_particles=2, limit_level=3)
+   pf.h.print_stats()
+   dd = pf.h.all_data()
+   print np.sum(dd['particle_type'] == 0)
+
+In the above example code, the first line imports the standard yt functions
+and the next defines the gas mesh file.  The dataset is loaded only through
+level 3, but particles are gridded onto meshes at level 2 and higher.  Finally,
+we create a data container and gather the ``particle_type`` array.  In this
+case ``type==0`` corresponds to the most highly-refined dark matter particles,
+and we print how many such high-resolution particles the simulation contains.
+Typically, however, you shouldn't have to specify any keyword arguments to
+load in a dataset.
+
+.. _loading-numpy-array:
+
+Generic Array Data
+------------------
+
+Even if your data is not strictly related to fields commonly used in
+astrophysical codes or your code is not supported yet, you can still feed it to
+``yt`` to use its advanced visualization and analysis facilities. The only
+requirement is that your data can be represented as one or more uniform, three
+dimensional numpy arrays. Assuming that you have your data in ``arr``,
+the following code:
+
+.. code-block:: python
+
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+will create a ``yt``-native parameter file ``pf`` that treats your array as a
+density field in a cubic domain 3 Mpc on a side (3 * 3.08e24 cm) and
+simultaneously divides the domain into 12 chunks, so that you can take
+advantage of the underlying parallelism.
+
+Particle fields are detected as one-dimensional fields. The number of
+particles is set by the ``number_of_particles`` key in
+``data``. Particle fields are then added as one-dimensional arrays in
+a similar manner as the three-dimensional grid fields:
+
+.. code-block:: python
+
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = dens,
+               number_of_particles = 1000000,
+               particle_position_x = posx_arr,
+               particle_position_y = posy_arr,
+               particle_position_z = posz_arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, dens.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+where in this example the particle position fields have been assigned.
+``number_of_particles`` must match the length of the particle arrays.  If no
+particle arrays are supplied then ``number_of_particles`` is assumed to be zero.
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Particles may be difficult to integrate.
+* Data must already reside in memory.
+
+.. _loading-amr-data:
+
+Generic AMR Data
+----------------
+
+It is possible to create a native ``yt`` parameter file from a list of Python
+dictionaries, each describing a rectangular patch of data at a possibly
+different resolution.
+
+.. code-block:: python
+
+   from yt.frontends.stream.api import load_amr_grids
+
+   grid_data = [
+       dict(left_edge = [0.0, 0.0, 0.0],
+            right_edge = [1.0, 1.0, 1.],
+            level = 0,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0),
+       dict(left_edge = [0.25, 0.25, 0.25],
+            right_edge = [0.75, 0.75, 0.75],
+            level = 1,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0)
+   ]
+  
+   for g in grid_data:
+       g["Density"] = np.random.random(g["dimensions"]) * 2**g["level"]
+  
+   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+
+Particle fields are supported by adding 1-dimensional arrays and
+setting the ``number_of_particles`` key in each ``grid``'s dict:
+
+.. code-block:: python
+
+    for g in grid_data:
+        g["number_of_particles"] = 100000
+        g["particle_position_x"] = np.random.random((g["number_of_particles"]))
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Some functions may behave oddly, and parallelism will be disappointing or
+  non-existent in most cases.
+* No consistency checks are performed on the hierarchy
+* Data must already reside in memory.
+* Consistency between particle positions and grids is not checked;
+  ``load_amr_grids`` assumes that particle positions associated with one grid are
+  not bounded within another grid at a higher level, so this must be
+  ensured by the user prior to loading the grid data. 
+
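
As a self-contained illustration of the generic AMR loading described above
(with the particle fields folded in), a sketch along these lines should work.
The random data is purely illustrative, and per the caveats it is up to the
user to keep level-0 particle positions out of the level-1 grid in real data:

   import numpy as np
   from yt.frontends.stream.api import load_amr_grids

   grid_data = [
       dict(left_edge = [0.0, 0.0, 0.0], right_edge = [1.0, 1.0, 1.0],
            level = 0, dimensions = [32, 32, 32]),
       dict(left_edge = [0.25, 0.25, 0.25], right_edge = [0.75, 0.75, 0.75],
            level = 1, dimensions = [32, 32, 32]),
   ]

   for g in grid_data:
       g["Density"] = np.random.random(g["dimensions"]) * 2**g["level"]
       g["number_of_particles"] = 100
       # keep each grid's particles inside that grid's own bounds
       for ax, name in enumerate(("x", "y", "z")):
           lo, hi = g["left_edge"][ax], g["right_edge"][ax]
           g["particle_position_%s" % name] = \
               lo + (hi - lo) * np.random.random(g["number_of_particles"])

   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)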

diff -r a6705c765fd0a67fe4ac10b42a8ecb8fb0a9bfff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb source/yt3differences.rst
--- a/source/yt3differences.rst
+++ b/source/yt3differences.rst
@@ -14,6 +14,35 @@
 .. warning:: This document covers *current* API changes.  As API changes occur
              it will be updated.
 
+Cool New Things
+---------------
+
+Lots of new things have been added in yt 3.0!  Below we summarize a handful of
+these.
+
+Octrees
++++++++
+
+Octree datasets such as RAMSES, ART and ARTIO are now supported -- without any
+regridding!  We have a native, lightweight octree indexing system.
+
+Particle Codes and SPH
+++++++++++++++++++++++
+
+yt 3.0 features particle selection, smoothing, and deposition.  This utilizes a
+combination of coarse-grained indexing and octree indexing for particles.
+
+Irregular Grids
++++++++++++++++
+
+MOAB Hex8 format is supported, and non-regular grids can be added relatively
+easily.
+
+Non-Cartesian Coordinates
++++++++++++++++++++++++++
+
+Preliminary support for non-Cartesian coordinates has been added.
+
 API Changes
 -----------
 


https://bitbucket.org/yt_analysis/yt-doc/commits/b05412eecb34/
Changeset:   b05412eecb34
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-28 20:40:42
Summary:     Merging from 2.6 work
Affected #:  167 files

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -6,3 +6,4 @@
 _temp/*
 **/.DS_Store
 RD0005-mine/*
+source/bootcamp/.ipynb_checkpoints/

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 extensions/notebook_sphinxext.py
--- /dev/null
+++ b/extensions/notebook_sphinxext.py
@@ -0,0 +1,154 @@
+import os, shutil, string, glob
+from sphinx.util.compat import Directive
+from docutils import nodes
+from docutils.parsers.rst import directives
+from IPython.nbconvert import html, python
+from runipy.notebook_runner import NotebookRunner
+from jinja2 import FileSystemLoader
+
+class NotebookDirective(Directive):
+    """Insert an evaluated notebook into a document
+
+    This uses runipy and nbconvert to transform a path to an unevaluated notebook
+    into html suitable for embedding in a Sphinx document.
+    """
+    required_arguments = 1
+    optional_arguments = 0
+
+    def run(self):
+        # check if raw html is supported
+        if not self.state.document.settings.raw_enabled:
+            raise self.warning('"%s" directive disabled.' % self.name)
+
+        # get path to notebook
+        source_dir = os.path.dirname(
+            os.path.abspath(self.state.document.current_source))
+        nb_basename = os.path.basename(self.arguments[0])
+        rst_file = self.state_machine.document.attributes['source']
+        rst_dir = os.path.abspath(os.path.dirname(rst_file))
+        nb_abs_path = os.path.join(rst_dir, nb_basename)
+
+        # Move files around.
+        rel_dir = os.path.relpath(rst_dir, setup.confdir)
+        dest_dir = os.path.join(setup.app.builder.outdir, rel_dir)
+        dest_path = os.path.join(dest_dir, nb_basename)
+
+        if not os.path.exists(dest_dir):
+            os.makedirs(dest_dir)
+
+        # Copy unevaluated script
+        try:
+            shutil.copyfile(nb_abs_path, dest_path)
+        except IOError:
+            raise RuntimeError("Unable to copy notebook to build destination.")
+
+        dest_path_eval = string.replace(dest_path, '.ipynb', '_evaluated.ipynb')
+        dest_path_script = string.replace(dest_path, '.ipynb', '.py')
+
+        # Create python script version
+        unevaluated_text = nb_to_html(nb_abs_path)
+        script_text = nb_to_python(nb_abs_path)
+        f = open(dest_path_script, 'w')
+        f.write(script_text.encode('utf8'))
+        f.close()
+
+        # Create evaluated version and save it to the dest path.
+        # Always use --pylab so figures appear inline
+        # perhaps this is questionable?
+        nb_runner = NotebookRunner(nb_in=nb_abs_path, pylab=True)
+        nb_runner.run_notebook()
+        nb_runner.save_notebook(dest_path_eval)
+        evaluated_text = nb_to_html(dest_path_eval)
+
+        # Create link to notebook and script files
+        link_rst = "(" + \
+                   formatted_link(dest_path) + "; " + \
+                   formatted_link(dest_path_eval) + "; " + \
+                   formatted_link(dest_path_script) + \
+                   ")"
+
+        self.state_machine.insert_input([link_rst], rst_file)
+
+        # create notebook node
+        attributes = {'format': 'html', 'source': 'nb_path'}
+        nb_node = nodes.raw('', evaluated_text, **attributes)
+        (nb_node.source, nb_node.line) = \
+            self.state_machine.get_source_and_line(self.lineno)
+
+        # add dependency
+        self.state.document.settings.record_dependencies.add(nb_abs_path)
+
+        # clean up png files left behind by notebooks.
+        png_files = glob.glob("*.png")
+        for file in png_files:
+            os.remove(file)
+
+        return [nb_node]
+
+class notebook_node(nodes.raw):
+    pass
+
+def nb_to_python(nb_path):
+    """convert notebook to python script"""
+    exporter = python.PythonExporter()
+    output, resources = exporter.from_filename(nb_path)
+    return output
+
+def nb_to_html(nb_path):
+    """convert notebook to html"""
+    exporter = html.HTMLExporter(template_file='full')
+    output, resources = exporter.from_filename(nb_path)
+    header = output.split('<head>', 1)[1].split('</head>',1)[0]
+    body = output.split('<body>', 1)[1].split('</body>',1)[0]
+
+    # http://imgur.com/eR9bMRH
+    header = header.replace('<style', '<style scoped="scoped"')
+    header = header.replace('body{background-color:#ffffff;}\n', '')
+    header = header.replace('body{background-color:white;position:absolute;'
+                            'left:0px;right:0px;top:0px;bottom:0px;'
+                            'overflow:visible;}\n', '')
+    header = header.replace('body{margin:0;'
+                            'font-family:"Helvetica Neue",Helvetica,Arial,'
+                            'sans-serif;font-size:13px;line-height:20px;'
+                            'color:#000000;background-color:#ffffff;}', '')
+    header = header.replace('\na{color:#0088cc;text-decoration:none;}', '')
+    header = header.replace(
+        'a:focus{color:#005580;text-decoration:underline;}', '')
+    header = header.replace(
+        '\nh1,h2,h3,h4,h5,h6{margin:10px 0;font-family:inherit;font-weight:bold;'
+        'line-height:20px;color:inherit;text-rendering:optimizelegibility;}'
+        'h1 small,h2 small,h3 small,h4 small,h5 small,'
+        'h6 small{font-weight:normal;line-height:1;color:#999999;}'
+        '\nh1,h2,h3{line-height:40px;}\nh1{font-size:35.75px;}'
+        '\nh2{font-size:29.25px;}\nh3{font-size:22.75px;}'
+        '\nh4{font-size:16.25px;}\nh5{font-size:13px;}'
+        '\nh6{font-size:11.049999999999999px;}\nh1 small{font-size:22.75px;}'
+        '\nh2 small{font-size:16.25px;}\nh3 small{font-size:13px;}'
+        '\nh4 small{font-size:13px;}', '')
+    header = header.replace('background-color:#ffffff;', '', 1)
+
+    # concatenate raw html lines
+    lines = ['<div class="ipynotebook">']
+    lines.append(header)
+    lines.append(body)
+    lines.append('</div>')
+    return '\n'.join(lines)
+
+def formatted_link(path):
+    return "`%s <%s>`__" % (os.path.basename(path), path)
+
+def visit_notebook_node(self, node):
+    self.visit_raw(node)
+
+def depart_notebook_node(self, node):
+    self.depart_raw(node)
+
+def setup(app):
+    setup.app = app
+    setup.config = app.config
+    setup.confdir = app.confdir
+
+    app.add_node(notebook_node,
+                 html=(visit_notebook_node, depart_notebook_node))
+
+    app.add_directive('notebook', NotebookDirective)

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/_static/axes.c
--- a/source/advanced/_static/axes.c
+++ /dev/null
@@ -1,15 +0,0 @@
-#include "axes.h"
-
-void calculate_axes(ParticleCollection *part,
-    double *ax1, double *ax2, double *ax3)
-{
-    int i;
-    for (i = 0; i < part->npart; i++) {
-        if (ax1[0] > part->xpos[i]) ax1[0] = part->xpos[i];
-        if (ax2[0] > part->ypos[i]) ax2[0] = part->ypos[i];
-        if (ax3[0] > part->zpos[i]) ax3[0] = part->zpos[i];
-        if (ax1[1] < part->xpos[i]) ax1[1] = part->xpos[i];
-        if (ax2[1] < part->ypos[i]) ax2[1] = part->ypos[i];
-        if (ax3[1] < part->zpos[i]) ax3[1] = part->zpos[i];
-    }
-}

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/_static/axes.h
--- a/source/advanced/_static/axes.h
+++ /dev/null
@@ -1,10 +0,0 @@
-typedef struct structParticleCollection {
-     long npart;
-     double *xpos;
-     double *ypos;
-     double *zpos;
-} ParticleCollection;
-
-void calculate_axes(ParticleCollection *part,
-         double *ax1, double *ax2, double *ax3);
-

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/_static/axes_calculator.pyx
--- a/source/advanced/_static/axes_calculator.pyx
+++ /dev/null
@@ -1,37 +0,0 @@
-import numpy as np
-cimport numpy as np
-cimport cython
-from stdlib cimport malloc, free
-
-cdef extern from "axes.h":
-    ctypedef struct ParticleCollection:
-            long npart
-            double *xpos
-            double *ypos
-            double *zpos
-
-    void calculate_axes(ParticleCollection *part,
-             double *ax1, double *ax2, double *ax3)
-
-def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
-                 np.ndarray[np.float64_t, ndim=1] ypos,
-                 np.ndarray[np.float64_t, ndim=1] zpos):
-    cdef double ax1[3], ax2[3], ax3[3]
-    cdef ParticleCollection particles
-    cdef int i
-
-    particles.npart = len(xpos)
-    particles.xpos = <double *> xpos.data
-    particles.ypos = <double *> ypos.data
-    particles.zpos = <double *> zpos.data
-
-    for i in range(particles.npart):
-        particles.xpos[i] = xpos[i]
-        particles.ypos[i] = ypos[i]
-        particles.zpos[i] = zpos[i]
-
-    calculate_axes(&particles, ax1, ax2, ax3)
-
-    return ( (ax1[0], ax1[1], ax1[2]),
-             (ax2[0], ax2[1], ax2[2]),
-             (ax3[0], ax3[1], ax3[2]) )

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/_static/axes_calculator_setup.txt
--- a/source/advanced/_static/axes_calculator_setup.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-NAME = "axes_calculator"
-EXT_SOURCES = ["axes.c"]
-EXT_LIBRARIES = []
-EXT_LIBRARY_DIRS = []
-EXT_INCLUDE_DIRS = []
-DEFINES = []
-
-from distutils.core import setup
-from distutils.extension import Extension
-from Cython.Distutils import build_ext
-
-ext_modules = [Extension(NAME,
-                 [NAME+".pyx"] + EXT_SOURCES,
-                 libraries = EXT_LIBRARIES,
-                 library_dirs = EXT_LIBRARY_DIRS,
-                 include_dirs = EXT_INCLUDE_DIRS,
-                 define_macros = DEFINES)
-]
-
-setup(
-  name = NAME,
-  cmdclass = {'build_ext': build_ext},
-  ext_modules = ext_modules
-)
-

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/creating_datatypes.rst
--- a/source/advanced/creating_datatypes.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-.. _creating-objects:
-
-Creating 3D Datatypes
-=====================
-
-The three-dimensional datatypes in yt follow a fairly simple protocol.  The
-basic principle is that if you want to define a region in space, that region
-must be identifiable from some sort of cut applied against the cells --
-typically, in yt, this is done by examining the geometry.  (The
-:class:`yt.data_objects.data_containers.ExtractedRegionBase` type is a notable
-exception to this, as it is defined as a subset of an existing data object.)
-
-In principle, you can define any number of 3D data objects, as long as the
-following methods are implemented to protocol specifications.
-
-.. function:: __init__(self, args, kwargs)
-
-   This function can accept any number of arguments but must eventually call
-   AMR3DData.__init__.  It is used to set up the various parameters that
-   define the object.
-
-.. function:: _get_list_of_grids(self)
-
-   This function must set the property _grids to be a list of the grids
-   that should be considered to be a part of the data object.  Each of these
-   will be partly or completely contained within the object.
-
-.. function:: _is_fully_enclosed(self, grid)
-
-   This function returns true if the entire grid is part of the data object
-   and false if it is only partly enclosed.
-
-.. function:: _get_cut_mask(self, grid)
-
-   This function returns a boolean mask in the shape of the grid.  All of the
-   cells set to 'True' will be included in the data object and all of those set
-   to 'False' will be excluded.  Typically this is done via some logical
-   operation.
-
-For a good example of how to do this, see the
-:class:`yt.data_objects.data_containers.AMRCylinderBase` source code.

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/creating_derived_quantities.rst
--- a/source/advanced/creating_derived_quantities.rst
+++ /dev/null
@@ -1,31 +0,0 @@
-.. _creating_derived_quantities:
-
-Creating Derived Quantities
----------------------------
-
-The basic idea is that you need to be able to operate both on a set of data,
-and a set of sets of data.  (If this is not possible, the quantity needs to be
-added with the ``force_unlazy`` option.)
-
-Two functions are necessary.  One will operate on arrays of data, either fed
-from each grid individually or fed from the entire data object at once.  The
-second one takes the results of the first, either as lists of arrays or as
-single arrays, and returns the final values.  For an example, we look at the
-``TotalMass`` function:
-
-.. code-block:: python
-
-   def _TotalMass(data):
-       baryon_mass = data["CellMassMsun"].sum()
-       particle_mass = data["ParticleMassMsun"].sum()
-       return baryon_mass, particle_mass
-   def _combTotalMass(data, baryon_mass, particle_mass):
-       return baryon_mass.sum() + particle_mass.sum()
-   add_quantity("TotalMass", function=_TotalMass,
-                combine_function=_combTotalMass, n_ret = 2)
-
-Once the two functions have been defined, we then call :func:`add_quantity` to
-tell it the function that defines the data, the collator function, and the
-number of values that get passed between them.  In this case we return both the
-particle and the baryon mass, so we have two total values passed from the main
-function into the collator.

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/creating_frontend.rst
--- a/source/advanced/creating_frontend.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-.. _creating_frontend:
-
-Creating A New Code Frontend
-============================
-
-``yt`` is designed to support analysis and visualization of data from multiple
-different simulation codes, although it has so far been most successfully
-applied to Adaptive Mesh Refinement (AMR) data. For a list of codes and the
-level of support they enjoy, we've created a handy [[CodeSupportLevels|table]].
-We'd like to support a broad range of codes, both AMR-based and otherwise. To
-add support for a new code, a few things need to be put into place. These
-necessary structures can be classified into a couple categories:
-
- * Data meaning: This is the set of parameters that convert the data into
-   physically relevant units; things like spatial and mass conversions, time
-   units, and so on.
- * Data localization: These are structures that help make a "first pass" at data
-   loading. Essentially, we need to be able to make a first pass at guessing
-   where data in a given physical region would be located on disk. With AMR
-   data, this is typically quite easy: the grid patches are the "first pass" at
-   localization.
- * Data reading: This is the set of routines that actually perform a read of
-   either all data in a region or a subset of that data.
-
-Data Meaning Structures
------------------------
-
-If you are interested in adding a new code, be sure to drop us a line on
-`yt-dev <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_!
-
-To get started, make a new directory in ``yt/frontends`` with the name of your
-code -- you can start by copying into it the contents of the ``stream``
-directory, which is a pretty empty format. You'll then have to create a subclass
-of ``StaticOutput``. This subclass will need to handle conversion between the
-different physical units and the code units; for the most part, the examples of
-``OrionStaticOutput`` and ``EnzoStaticOutput`` should be followed, but
-``ChomboStaticOutput``, as a slightly newer addition, can also be used as an
-instructive example -- be sure to add an ``_is_valid`` classmethod that will
-verify if a filename is valid for that output type, as that is how "load" works.
-
-A new set of fields must be added in the file ``fields.py`` in that directory.
-For the most part this means subclassing ``CodeFieldInfoContainer`` and adding
-the necessary fields specific to that code. Here is the Chombo field container:
-
-.. code-block:: python
-
-    from UniversalFields import *
-    class ChomboFieldContainer(CodeFieldInfoContainer):
-        _shared_state = {}
-        _field_list = {}
-    ChomboFieldInfo = ChomboFieldContainer()
-    add_chombo_field = ChomboFieldInfo.add_field
-
-The field container is a shared state object, which is why we explicitly set
-``_shared_state`` equal to a mutable.
-
-Data Localization Structures
-----------------------------
-
-As of right now, the "grid patch" mechanism is going to remain in yt, however in
-the future that may change. As such, some other output formats -- like Gadget --
-may be shoe-horned in, slightly.
-
-Hierarchy
-^^^^^^^^^
-
-To set up data localization, an ``AMRHierarchy`` subclass must be added in the
-file ``data_structures.py``. The hierarchy object must override the following
-methods:
-
- * ``_detect_fields``: ``self.field_list`` must be populated as a list of
-   strings corresponding to "native" fields in the data files.
- * ``_setup_classes``: it's probably safe to crib this from one of the other
-   ``AMRHierarchy`` subclasses.
- * ``_count_grids``: this must set ``self.num_grids`` to be the total number of
-   grids in the simulation.
- * ``_parse_hierarchy``: this must fill in ``grid_left_edge``,
-   ``grid_right_edge``, ``grid_particle_count``, ``grid_dimensions`` and
-   ``grid_levels`` with the appropriate information. Additionally, ``grids``
-   must be an array of grid objects that already know their IDs.
- * ``_populate_grid_objects``: this initializes the grids by calling
-   ``_prepare_grid`` and ``_setup_dx`` on all of them.  Additionally, it should
-   set up ``Children`` and ``Parent`` lists on each grid object.
- * ``_setup_unknown_fields``: If a field is in the data file that yt doesn't
-   already know, this is where you make a guess at it.
- * ``_setup_derived_fields``: ``self.derived_field_list`` needs to be made a
-   list of strings that correspond to all derived fields valid for this
-   hierarchy.
-
-For the most part, the ``ChomboHierarchy`` should be the first place to look for
-hints on how to do this; ``EnzoHierarchy`` is also instructive.
-
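-A bare-bones skeleton of such a subclass (hypothetical, with several of the
-methods listed above elided and the bodies reduced to placeholders) might be
-structured like this:
-
-.. code-block:: python
-
-    class MyCodeHierarchy(AMRHierarchy):
-
-        def _detect_fields(self):
-            # Populate self.field_list with the on-disk ("native") field names.
-            self.field_list = ["density", "x-velocity"]
-
-        def _count_grids(self):
-            # Read the real number of grids from the output files.
-            self.num_grids = 0
-
-        def _parse_hierarchy(self):
-            # Fill grid_left_edge, grid_right_edge, grid_dimensions,
-            # grid_levels and grid_particle_count, and build self.grids here.
-            pass
-
-        def _populate_grid_objects(self):
-            for g in self.grids:
-                g._prepare_grid()
-                g._setup_dx()
-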
-Grids
-^^^^^
-
-A new grid object, subclassing ``AMRGridPatch``, will also have to be added.
-This should go in ``data_structures.py``. For the most part, this may be all
-that is needed:
-
-.. code-block:: python
-
-    class ChomboGrid(AMRGridPatch):
-        _id_offset = 0
-        __slots__ = ["_level_id"]
-        def __init__(self, id, hierarchy, level = -1):
-            AMRGridPatch.__init__(self, id, filename = hierarchy.hierarchy_filename,
-                                  hierarchy = hierarchy)
-            self.Parent = []
-            self.Children = []
-            self.Level = level
-
-
-Even the most complex grid object, ``OrionGrid``, is still relatively simple.
-
-Data Reading Functions
-----------------------
-
-In ``io.py``, there are a number of IO handlers that handle the mechanisms by
-which data is read off disk.  To implement a new data reader, you must subclass
-``BaseIOHandler`` and override the following methods:
-
- * ``_read_field_names``: this routine accepts a grid object and must return all
-   the fields in the data file affiliated with that grid. It is used at the
-   initialization of the ``AMRHierarchy`` but likely not later.
- * ``modify``: This accepts a field from a data file and returns it ready to be
-   used by yt. This is used in Enzo data for preloading.
- * ``_read_data_set``: This accepts a grid object and a field name and must
-   return that field, ready to be used by yt as a NumPy array. Note that this
-   presupposes that any actions done in ``modify`` (above) have been executed.
- * ``_read_data_slice``: This accepts a grid object, a field name, an axis and
-   an (integer) coordinate, and it must return a slice through the array at that
-   value.
- * ``preload``: (optional) This accepts a list of grids and a list of datasets
-   and it populates ``self.queue`` (a dict keyed by grid id) with dicts of
-   datasets.
- * ``_read_exception``: (property) This is a tuple of exceptions that can be
-   raised by the data reading to indicate a field does not exist in the file.
-
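-A skeletal IO handler following that pattern might look like the sketch below
-(the ``_data_style`` value and the ``read_my_format`` helper are placeholders,
-not part of yt):
-
-.. code-block:: python
-
-    class IOHandlerMyCode(BaseIOHandler):
-        _data_style = "mycode"
-
-        def _read_field_names(self, grid):
-            # Return the on-disk field names affiliated with this grid.
-            return read_my_format(grid.filename, list_fields=True)
-
-        def _read_data_set(self, grid, field):
-            # Return the full 3D array for this field, ready for yt to use.
-            return read_my_format(grid.filename, field=field)
-
-        def _read_data_slice(self, grid, field, axis, coord):
-            # Simplest possible implementation: slice the full read.
-            sl = [slice(None), slice(None), slice(None)]
-            sl[axis] = slice(coord, coord + 1)
-            return self._read_data_set(grid, field)[tuple(sl)]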
-
-And that just about covers it. Please feel free to email
-`yt-users <http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org>`_ or
-`yt-dev <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_ with
-any questions, or to let us know you're thinking about adding a new code to yt.

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/debugdrive.rst
--- a/source/advanced/debugdrive.rst
+++ /dev/null
@@ -1,127 +0,0 @@
-.. _debug-drive:
-
-Debugging and Driving YT
-========================
-
-There are several different convenience functions that allow you to control YT
-in perhaps unexpected and unorthodox manners.  These will allow you to conduct
-in-depth debugging of processes that may be running in parallel on multiple
-processors, as well as providing a mechanism of signalling to YT that you need
-more information about a running process.  Additionally, YT has a built-in
-mechanism for optional reporting of errors to a central server.  All of these
-allow for more rapid development and debugging of any problems you might
-encounter.
-
-Additionally, ``yt`` is able to leverage existing developments in the IPython
-community for parallel, interactive analysis.  This allows you to initialize
-multiple YT processes through ``mpirun`` and interact with all of them from a
-single, unified interactive prompt.  This enables and facilitates parallel
-analysis without sacrificing interactivity and flexibility.
-
-.. _pastebin:
-
-The Pastebin
-------------
-
-A pastebin is a website where you can easily copy source code and error
-messages to share with yt developers or your collaborators. At
-http://paste.yt-project.org/ a pastebin is available for placing scripts.  With
-``yt``, the script ``yt_lodgeit.py`` is distributed and wrapped with
-the ``pastebin`` and ``pastebin_grab`` commands, which allow for command-line
-uploading and downloading of pasted snippets.  To upload a script you
-would supply it to the command:
-
-.. code-block:: bash
-
-   $ yt pastebin some_script.py
-
-The URL will be returned.  If you'd like it to be marked 'private' and not show
-up in the list of pasted snippets, supply the argument ``--private``.  All
-snippets are given either numbers or hashes.  To download a pasted snippet, you
-would use the ``pastebin_grab`` option:
-
-.. code-block:: bash
-
-   $ yt pastebin_grab 1768
-
-The snippet will be output to the window, so output redirection can be used to
-store it in a file.
-
-.. _error-reporting:
-
-Error Reporting with the Pastebin
-+++++++++++++++++++++++++++++++++
-
-If you are having troubles with ``yt``, you can have it paste the error report
-to the pastebin by running your problematic script with the ``--paste`` option:
-
-.. code-block:: bash
-
-   $ python2.7 some_problematic_script.py --paste
-
-The ``--paste`` option has to come after the name of the script.  When the
-script dies and prints its error, it will also submit that error to the
-pastebin and return a URL for the error.  When reporting your bug, include this
-URL and then the problem can be debugged more easily.
-
-For more information on asking for help, see `asking-for-help`.
-
-Signaling YT to Do Something
-----------------------------
-
-During startup, ``yt`` inserts handlers for two operating system-level signals.
-These provide two diagnostic methods for interacting with a running process.
-Signalling the python process that is running your script with these signals
-will induce the requested behavior.  
-
-   SIGUSR1
-     This will cause the python code to print a stack trace, showing exactly
-     where in the function stack it is currently executing.
-   SIGUSR2
-     This will cause the python code to insert an IPython session wherever it
-     currently is, with all local variables in the local namespace.  It should
-     allow you to change the state variables.
-
-If your ``yt``-running process has PID 5829, you can signal it to print a
-traceback with:
-
-.. code-block:: bash
-
-   $ kill -SIGUSR1 5829
-
-Note, however, that if the code is currently inside a C function, the signal
-will not be handled, and the stacktrace will not be printed, until it returns
-from that function.
-
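-The mechanism is the standard Python signal interface; conceptually it is
-equivalent to something like the following sketch (this is not yt's actual
-handler code):
-
-.. code-block:: python
-
-   import signal
-   import traceback
-
-   def print_traceback(signum, frame):
-       # Write the current Python call stack to stderr.
-       traceback.print_stack(frame)
-
-   signal.signal(signal.SIGUSR1, print_traceback)
-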
-.. _remote-debugging:
-
-Remote and Disconnected Debugging
----------------------------------
-
-If you are running a parallel job that fails, often it can be difficult to do a
-post-mortem analysis to determine what went wrong.  To facilitate this, ``yt``
-has implemented an `XML-RPC <http://en.wikipedia.org/wiki/XML-RPC>`_ interface
-to the Python debugger (``pdb``) event loop.  
-
-Running with the ``--rpdb`` flag will cause any uncaught exception during
-execution to spawn this interface, which will sit and wait for commands,
-exposing the full Python debugger.  Additionally, a frontend to this is
-provided through the ``yt`` command.  So if you run the command:
-
-.. code-block:: bash
-
-   $ mpirun -np 4 python2.7 some_script.py --parallel --rpdb
-
-and it reaches an error or an exception, it will launch the debugger.
-Additionally, instructions will be printed for connecting to the debugger.
-Each of the four processes will be accessible via:
-
-.. code-block:: bash
-
-   $ yt rpdb 0
-
-where ``0`` indicates process 0.
-
-For security reasons, this will only work on local processes; to connect on a
-cluster, you will have to execute the command ``yt rpdb`` on the node on which
-that process was launched.

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/developing.rst
--- a/source/advanced/developing.rst
+++ /dev/null
@@ -1,436 +0,0 @@
-.. _contributing-code:
-
-How to Develop yt
-=================
-
-.. note:: If you already know how to use version control and are comfortable
-   with handling it yourself, the quickest way to contribute to yt is to `fork
-   us on BitBucket <http://hg.yt-project.org/yt/fork>`_, `make your changes
-   <http://mercurial.selenic.com/>`_, and issue a `pull request
-   <http://hg.yt-project.org/yt/pull>`_.  The rest of this document is just an
-   explanation of how to do that.
-
-yt is a community project!
-
-We are very happy to accept patches, features, and bugfixes from any member of
-the community!  yt is developed using mercurial, primarily because it enables
-very easy and straightforward submission of changesets.  We're eager to hear
-from you, and if you are developing yt, we encourage you to subscribe to the
-`developer mailing list
-<http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_
-
-Please feel free to hack around, commit changes, and send them upstream.  If
-you're new to Mercurial, these three resources are pretty great for learning
-the ins and outs:
-
-   * http://hginit.com/
-   * http://hgbook.red-bean.com/read/
-   * http://mercurial.selenic.com/
-
-The commands that are essential for using mercurial include:
-
-   * ``hg commit`` which commits changes in the working directory to the
-     repository, creating a new "changeset object."
-   * ``hg add`` which adds a new file to be tracked by mercurial.  This does
-     not change the working directory.
-   * ``hg pull`` which pulls (from an optional path specifier) changeset
-     objects from a remote source.  The working directory is not modified.
-   * ``hg push`` which sends (to an optional path specifier) changeset objects
-     to a remote source.  The working directory is not modified.
-   * ``hg log`` which shows a log of all changeset objects in the current
-     repository.  Use ``-g`` to show a graph of changeset objects and their
-     relationship.
-   * ``hg update`` which (with an optional "revision" specifier) updates the
-     state of the working directory to match a changeset object in the
-     repository.
-   * ``hg merge`` which combines two changesets to make a union of their lines
-     of development.  This updates the working directory.
-
-Keep in touch, and happy hacking!  We also provide `doc/coding_styleguide.txt`
-and an example of a fiducial docstring in `doc/docstring_example.txt`.  Please
-read them before hacking on the codebase, and feel free to email any of the
-mailing lists for help with the codebase.
-
-.. _bootstrap-dev:
-
-Submitting Changes
-------------------
-
-We provide a brief introduction to submitting changes here.  yt thrives on the
-strength of its communities ( http://arxiv.org/abs/1301.7064 has further
-discussion) and we encourage contributions from any user.  While we do not
-discuss in detail version control, mercurial or the advanced usage of
-BitBucket, we do provide an outline of how to submit changes and we are happy
-to provide further assistance or guidance.
-
-Licensing
-+++++++++
-
-All contributed code must be GPL-compatible; we ask that you consider licensing
-under the GPL version 3, but we will consider submissions of code that are
-BSD-like licensed as well.  If you'd rather not license in this manner, but
-still want to contribute, just drop me a line and I'll put a link on the main
-wiki page to wherever you like!
-
-Requirements for Code Submission
-++++++++++++++++++++++++++++++++
-
-Modifications to the code typically fall into one of three categories, each of
-which have different requirements for acceptance into the code base.  These
-requirements are in place for a few reasons -- to make sure that the code is
-maintainable, testable, and that we can easily include information about
-changes in changelogs during the release procedure.  (See `YTEP-0008
-<https://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0008.html>`_ for more
-detail.)
-
-  * New Features
-
-    * New unit tests (possibly new answer tests) (See :ref:`testing`)
-    * Docstrings for public API
-    * Addition of new feature to the narrative documentation
-    * Addition of cookbook recipe
-    * Issue created on issue tracker, to ensure this is added to the changelog
-
-  * Extension or Breakage of API in Existing Features
-
-    * Update existing narrative docs and docstrings
-    * Update existing cookbook recipes
-    * Modify or create new unit tests (See :ref:`testing`)
-    * Issue created on issue tracker, to ensure this is added to the changelog
-
-  * Bug fixes
-
-    * Unit test is encouraged, to ensure breakage does not happen again in the
-      future.
-    * Issue created on issue tracker, to ensure this is added to the changelog
-
-When submitting, you will be asked to make sure that your changes meet all of
-these requirements.  They are pretty easy to meet, and we're also happy to help
-out with them.  In :ref:`code-style-guide` there is a list of handy tips for
-how to structure and write your code.
-
-How to Use Mercurial with yt
-++++++++++++++++++++++++++++
-
-This document doesn't cover detailed mercurial use, but on IRC we are happy to
-walk you through any troubles you might have.  Here are some suggestions
-for using mercurial with yt:
-
-  * Named branches are to be avoided.  Try using bookmarks (``hg bookmark``) to
-    track work.  (`More <http://mercurial.selenic.com/wiki/Bookmarks>`_)
-  * Make sure you set a username in your ``~/.hgrc`` before you commit any
-    changes!  All of the tutorials above will describe how to do this as one of
-    the very first steps.
-  * When contributing changes, you might be asked to make a handful of
-    modifications to your source code.  We'll work through how to do this with
-    you, and try to make it as painless as possible.
-  * Please avoid deleting your yt forks, as that eliminates the code review
-    process from BitBucket's website.
-  * In all likelihood, you only need one fork.  To keep it in sync, you can
-    sync from the website.  (See Bitbucket's `Blog Post
-    <http://blog.bitbucket.org/2013/02/04/syncing-and-merging-come-to-bitbucket/>`_
-    about this.)
-  * If you run into any troubles, stop by IRC (see :ref:`irc`) or the mailing
-    list.
-
-Building yt
-+++++++++++
-
-If you have made changes to any C or Cython (``.pyx``) modules, you have to
-rebuild yt.  If your changes have exclusively been to Python modules, you will
-not need to re-build, but (see below) you may need to re-install.  
-
-If you are running from a clone that is executable in-place (i.e., has been
-installed via the installation script or you have run ``setup.py develop``) you
-can rebuild these modules by executing:
-
-.. code-block:: bash
-
-   python2.7 setup.py develop
-
-If you have previously "installed" via ``setup.py install`` you have to
-re-install:
-
-.. code-block:: bash
-
-   python2.7 setup.py install
-
-Only one of these two options is needed.  yt may require you to specify the
-location of libpng and hdf5.  This can be done through files named ``png.cfg``
-and ``hdf5.cfg``.  If you are using the installation script, these will already
-exist.
-
-Making and Sharing Changes
-++++++++++++++++++++++++++
-
-The simplest way to submit changes to yt is to commit changes in your
-``$YT_DEST/src/yt-hg`` directory, fork the repository on BitBucket, push the
-changesets to your fork, and then issue a pull request.  If you will be
-developing much more in-depth features for yt, you will also likely want to
-edit the paths in your ``.hg/hgrc`` file so that pushing to your fork is more
-convenient.
-
-Here's a more detailed flowchart of how to submit changes.
-
-  #. If you have used the installation script, the source code for yt can be
-     found in ``$YT_DEST/src/yt-hg``.  (Below, in :ref:`reading-source`, 
-     we describe how to find items of interest.)  Edit the source file you are
-     interested in and test your changes.  (See :ref:`testing` for more
-     information.)
-  #. Fork yt on BitBucket.  (This step only has to be done once.)  You can do
-     this at: https://bitbucket.org/yt_analysis/yt/fork .  Call this repository
-     ``yt``.
-  #. Commit these changes, using ``hg commit``.  This can take an argument
-     which is a series of filenames, if you have some changes you do not want
-     to commit.
-  #. If your changes include new functionality or cover an untested area of the
-     code, add a test.  (See :ref:`testing` for more information.)  Commit
-     these changes as well.
-  #. Push your changes to your new fork using the command::
-
-        hg push https://bitbucket.org/YourUsername/yt/
- 
-     If you end up doing considerable development, you can set an alias in the
-     file ``.hg/hgrc`` to point to this path.
-  #. Issue a pull request at
-     https://bitbucket.org/YourUsername/yt/pull-request/new
-
-During the course of your pull request you may be asked to make changes.  These
-changes may be related to style issues, correctness issues, or even requesting
-tests.  The process for responding to pull request code review is relatively
-straightforward.
-
-  #. Make requested changes, or leave a comment indicating why you don't think
-     they should be made.
-  #. Commit those changes to your local repository.
-  #. Push the changes to your fork::
-
-        hg push https://bitbucket.org/YourUsername/yt/
-
-  #. Update your pull request by visiting
-     https://bitbucket.org/YourUsername/yt/pull-request/new
-
-How to Write Documentation
-++++++++++++++++++++++++++
-
-The process for writing documentation is identical to the above, except that
-instead of ``yt_analysis/yt`` you should be forking and pushing to
-``yt_analysis/yt-doc``.  All the source for the documentation is written in
-`Sphinx <http://sphinx-doc.org/>`_, which uses ReST for markup.
-
-Cookbook recipes go in ``source/cookbook/`` and must be added to one of the
-``.rst`` files in that directory.
-
-How To Get The Source Code For Editing
---------------------------------------
-
-yt is hosted on BitBucket, and you can see all of the yt repositories at
-http://hg.yt-project.org/ .  With the yt installation script you should have a
-copy of Mercurial for checking out pieces of code.  Make sure you have followed
-the steps above for bootstrapping your development (to assure you have a
-bitbucket account, etc.)
-
-In order to modify the source code for yt, we ask that you make a "fork" of the
-main yt repository on bitbucket.  A fork is simply an exact copy of the main
-repository (along with its history) that you will now own and can make
-modifications as you please.  You can create a personal fork by visiting the yt
-bitbucket webpage at https://bitbucket.org/yt_analysis/yt/ .  After logging in,
-you should see an option near the top right labeled "fork".  Click this option,
-and then click the fork repository button on the subsequent page.  You now have
-a forked copy of the yt repository for your own personal modification.
-
-This forked copy exists on the bitbucket repository, so in order to access
-it locally, follow the instructions at the top of that webpage for that
-forked repository, namely run at a local command line:
-
-.. code-block:: bash
-
-   $ hg clone http://bitbucket.org/<USER>/<REPOSITORY_NAME>
-
-This downloads that new forked repository to your local machine, so that you
-can access it, read it, make modifications, etc.  It will put the repository in
-a local directory of the same name as the repository in the current working
-directory.  You can see any past state of the code by using the hg log command.
-For example, the following command would show you the last 5 changesets
-(modifications to the code) that were submitted to that repository.
-
-.. code-block:: bash
-
-   $ cd <REPOSITORY_NAME>
-   $ hg log -l 5
-
-Using the revision specifier (the number or hash identifier next to each
-changeset), you can update the local repository to any past state of the
-code (a previous changeset or version) by executing the command:
-
-.. code-block:: bash
-
-   $ hg up revision_specifier
-
-Lastly, if you want to use this new downloaded version of your yt repository
-as the *active* version of yt on your computer (i.e. the one which is executed
-when you run yt from the command line or ``from yt.mods import *``),
-then you must "activate" it using the following commands from within the
-repository directory.
-
-In order to do this for the first time with a new repository, you have to
-copy some config files over from your yt installation directory (where yt
-was initially installed from the install_script.sh).  Try this:
-
-.. code-block:: bash
-
-   $ cp $YT_DEST/src/yt-hg/*.cfg <REPOSITORY_NAME>
-
-and then, every time you want to "activate" a different repository of yt, run:
-
-.. code-block:: bash
-
-   $ cd <REPOSITORY_NAME>
-   $ python2.7 setup.py develop
-
-This will rebuild all C modules as well.
-
-.. _reading-source:
-
-How To Read The Source Code
----------------------------
-
-If you just want to *look* at the source code, you already have it on your
-computer.  Go to the directory where you ran the install_script.sh, then
-go to ``$YT_DEST/src/yt-hg`` .  In this directory are a number of
-subdirectories with different components of the code, although most of them
-are in the yt subdirectory.  Feel free to explore here.
-
-   ``frontends``
-      This is where interfaces to codes are created.  Within each subdirectory of
-      yt/frontends/ there must exist the following files, even if empty:
-
-      * ``data_structures.py``, where subclasses of AMRGridPatch, StaticOutput
-        and AMRHierarchy are defined.
-      * ``io.py``, where a subclass of IOHandler is defined.
-      * ``misc.py``, where any miscellaneous functions or classes are defined.
-      * ``definitions.py``, where any definitions specific to the frontend are
-        defined.  (i.e., header formats, etc.)
-
-   ``visualization``
-      This is where all visualization modules are stored.  This includes plot
-      collections, the volume rendering interface, and pixelization frontends.
-
-   ``data_objects``
-      All objects that handle data, processed or unprocessed, not explicitly
-      defined as visualization are located in here.  This includes the base
-      classes for data regions, covering grids, time series, and so on.  This
-      also includes derived fields and derived quantities.
-
-   ``analysis_modules``
-      This is where all mechanisms for processing data live.  This includes
-      things like clump finding, halo profiling, halo finding, and so on.  This
-      is something of a catchall, but it serves as a level of greater
-      abstraction than simply data selection and modification.
-
-   ``gui``
-      This is where all GUI components go.  Typically this will be some small
-      tool used for one or two things, which contains a launching mechanism on
-      the command line.
-
-   ``utilities``
-      All broadly useful code that doesn't clearly fit in one of the other
-      categories goes here.
-
-
-If you're looking for a specific file or function in the yt source code, use
-the unix find command:
-
-.. code-block:: bash
-
-   $ find <DIRECTORY_TREE_TO_SEARCH> -name '<FILENAME>'
-
-The above command will find the FILENAME in any subdirectory in the
-DIRECTORY_TREE_TO_SEARCH.  Alternatively, if you're looking for a function
-call or a keyword in an unknown file in a directory tree, try:
-
-.. code-block:: bash
-
-   $ grep -R <KEYWORD_TO_FIND> <DIRECTORY_TREE_TO_SEARCH>
-
-This can be very useful for tracking down functions in the yt source.
-
-.. _code-style-guide:
-
-Code Style Guide
-----------------
-
-To keep things tidy, we try to stick with a couple simple guidelines.
-
-General Guidelines
-++++++++++++++++++
-
- * In general, follow `PEP-8 <http://www.python.org/dev/peps/pep-0008/>`_ guidelines.
- * Classes are ConjoinedCapitals, methods and functions are
-   ``lowercase_with_underscores``.
- * Use 4 spaces, not tabs, to represent indentation.
- * Line widths should not be more than 80 characters.
- * Do not use nested classes unless you have a very good reason to, such as
-   requiring a namespace or class-definition modification.  Classes should live
-   at the top level.  ``__metaclass__`` is exempt from this.
- * Do not use unnecessary parentheses in conditionals.  ``if((something) and
-   (something_else))`` should be rewritten as ``if something and
-   something_else``.  Python is more forgiving than C.
- * Avoid copying memory when possible. For example, don't do ``a =
-   a.reshape(3,4)`` when ``a.shape = (3,4)`` will do, and ``a = a * 3`` should be
-   ``na.multiply(a, 3, a)``.
- * In general, avoid all double-underscore method names: ``__something`` is
-   usually unnecessary.
- * Doc strings should describe input, output, behavior, and any state changes
-   that occur on an object.  See the file `doc/docstring_example.txt` for a
-   fiducial example of a docstring.
-
-API Guide
-+++++++++
-
- * Do not import "*" from anything other than ``yt.funcs``.
- * Internally, only import from source files directly; instead of: ``from
-   yt.visualization.api import PlotCollection`` do
-   ``from yt.visualization.plot_collection import PlotCollection``.
- * Numpy is to be imported as ``na`` not ``np``.  While this may change in the
-   future, for now this is the correct idiom.
- * Do not use too many keyword arguments.  If you have a lot of keyword
-   arguments, then you are doing too much in ``__init__`` and not enough via
-   parameter setting.
- * In function arguments, place spaces after commas.  ``def something(a,b,c)``
-   should be ``def something(a, b, c)``.
- * Don't create a new class to replicate the functionality of an old class --
-   replace the old class.  Too many options makes for a confusing user
-   experience.
- * Parameter files external to yt are a last resort.
- * The usage of the ``**kwargs`` construction should be avoided.  If they
-   cannot be avoided, they must be explained, even if they are only to be
-   passed on to a nested function.
- * Constructor APIs should be kept as *simple* as possible.
- * Variable names should be short but descriptive.
- * No global variables!
-
-Variable Names and Enzo-isms
-++++++++++++++++++++++++++++
-
- * Avoid Enzo-isms.  This includes but is not limited to:
-
-   + Hard-coding parameter names that are the same as those in Enzo.  The
-     following translation table should be of some help.  Note that the
-     parameters are now properties on a StaticOutput subclass: you access them
-     like ``pf.refine_by`` .
-
-     - ``RefineBy`` => ``refine_by``
-     - ``TopGridRank`` => ``dimensionality``
-     - ``TopGridDimensions`` => ``domain_dimensions``
-     - ``InitialTime`` => ``current_time``
-     - ``DomainLeftEdge`` => ``domain_left_edge``
-     - ``DomainRightEdge`` => ``domain_right_edge``
-     - ``CurrentTimeIdentifier`` => ``unique_identifier``
-     - ``CosmologyCurrentRedshift`` => ``current_redshift``
-     - ``ComovingCoordinates`` => ``cosmological_simulation``
-     - ``CosmologyOmegaMatterNow`` => ``omega_matter``
-     - ``CosmologyOmegaLambdaNow`` => ``omega_lambda``
-     - ``CosmologyHubbleConstantNow`` => ``hubble_constant``
-
-   + Do not assume that the domain runs from 0 to 1.  This is not true
-     everywhere.
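-
-For example, rather than hard-coding an Enzo parameter name, access the
-corresponding property on the parameter file object -- a short sketch, with
-the dataset path purely illustrative:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("DD0010/DD0010")
-   # These properties work no matter which simulation code produced the data.
-   print pf.refine_by, pf.domain_left_edge, pf.domain_right_edge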

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/external_analysis.rst
--- a/source/advanced/external_analysis.rst
+++ /dev/null
@@ -1,411 +0,0 @@
-Using yt with External Analysis Tools
-=====================================
-
-yt can be used as a ``glue`` code between simulation data and other methods of
-analyzing data.  Its facilities for understanding units, disk IO and data
-selection set it up ideally to use other mechanisms for analyzing, processing
-and visualizing data.
-
-Calling External Python Codes
------------------------------
-
-Calling external Python codes is very straightforward.  For instance, if you had a
-Python code that accepted a set of structured meshes and then post-processed
-them to apply radiative feedback, one could imagine calling it directly:
-
-.. code-block:: python
-
-   from yt.mods import *
-   import radtrans
-
-   pf = load("DD0010/DD0010")
-   rt_grids = []
-
-   for grid in pf.h.grids:
-       rt_grid = radtrans.RegularBox(
-            grid.LeftEdge, grid.RightEdge,
-            grid["Density"], grid["Temperature"], grid["Metallicity"])
-       rt_grids.append(rt_grid)
-       grid.clear_data()
-
-   radtrans.process(rt_grids)
-
-Or if you wanted to run a population synthesis module on a set of star
-particles (and you could fit them all into memory) it might look something like
-this:
-
-.. code-block:: python
-
-   from yt.mods import *
-   import pop_synthesis
-
-   pf = load("DD0010/DD0010")
-   dd = pf.h.all_data()
-   star_masses = dd["StarMassMsun"]
-   star_metals = dd["StarMetals"]
-
-   pop_synthesis.CalculateSED(star_masses, star_metals)
-
-If you have a code that's written in Python that you are having trouble getting
-data into from yt, please feel encouraged to email the users list and we'll
-help out.
-
-Calling Non-Python External Codes
----------------------------------
-
-Independent of its ability to process, analyze and visualize data, yt can also
-serve as a mechanism for reading and selecting simulation data.  In this way,
-it can be used to supply data to an external analysis routine written in
-Fortran, C or C++.  This document describes how to supply that data, using the
-example of a simple code that calculates the best axes that describe a
-distribution of particles as a starting point.  (The underlying method is left
-as an exercise for the reader; we're only currently interested in the function
-specification and structs.)
-
-If you have written a piece of code that performs some analysis function, and
-you would like to include it in the base distribution of yt, we would be happy
-to do so; drop us a line or see :ref:`contributing-code` for more information.
-
-To accomplish the process of linking Python with our external code, we will be
-using a language called `Cython <http://www.cython.org/>`_, which is
-essentially a superset of Python that compiles down to C.  It is aware of NumPy
-arrays, and it is able to massage data between the interpreted language Python
-and C, Fortran or C++.  It will be much easier to utilize routines and analysis
-code that have been separated into subroutines that accept data structures, so
-we will assume that our halo axis calculator accepts a set of structs.
-
-Our Example Code
-++++++++++++++++
-
-Here is the ``axes.h`` file in our imaginary code, which we will then wrap:
-
-.. code-block:: c
-
-   typedef struct structParticleCollection {
-        long npart;
-        double *xpos;
-        double *ypos;
-        double *zpos;
-   } ParticleCollection;
-   
-   void calculate_axes(ParticleCollection *part, 
-            double *ax1, double *ax2, double *ax3);
-
-There are several components to this analysis routine which we will have to
-wrap.
-
-   #. We have to wrap the creation of an instance of ``ParticleCollection``.
-   #. We have to transform a set of NumPy arrays into pointers to doubles.
-   #. We have to create a set of doubles into which ``calculate_axes`` will be
-      placing the values of the axes it calculates.
-   #. We have to turn the return values back into Python objects.
-
-Each of these steps can be handled in turn, and we'll be doing it using Cython
-as our interface code.
-
-Setting Up and Building Our Wrapper
-+++++++++++++++++++++++++++++++++++
-
-To get started, we'll need to create two files:
-
-.. code-block:: bash
-
-   axes_calculator.pyx
-   axes_calculator_setup.py
-
-These can go anywhere, but it might be useful to put them in their own
-directory.  The contents of ``axes_calculator.pyx`` will be left for the next
-section, but we will need to put some boilerplate code into
-``axes_calculator_setup.py``.  As a quick sidenote, you should call these
-whatever is most appropriate for the external code you are wrapping;
-``axes_calculator`` is probably not the best bet.
-
-Here's a rough outline of what should go in ``axes_calculator_setup.py``:
-
-.. code-block:: python
-
-   NAME = "axes_calculator"
-   EXT_SOURCES = []
-   EXT_LIBRARIES = ["axes_utils", "m"]
-   EXT_LIBRARY_DIRS = ["/home/rincewind/axes_calculator/"]
-   EXT_INCLUDE_DIRS = []
-   DEFINES = []
-
-   from distutils.core import setup
-   from distutils.extension import Extension
-   from Cython.Distutils import build_ext
-
-   ext_modules = [Extension(NAME,
-                    [NAME+".pyx"] + EXT_SOURCES,
-                    libraries = EXT_LIBRARIES,
-                    library_dirs = EXT_LIBRARY_DIRS,
-                    include_dirs = EXT_INCLUDE_DIRS,
-                    define_macros = DEFINES)
-   ]
-
-   setup(
-     name = NAME,
-     cmdclass = {'build_ext': build_ext},
-     ext_modules = ext_modules
-   )
-
-The only variables you should have to change in this are the first six, and
-possibly only the first one.  We'll go through these variables one at a time.  
-
-``NAME``
-   This is the name of our source file, minus the ``.pyx``.  We're also
-   mandating that it be the name of the module we import.  You're free to
-   modify this.
-``EXT_SOURCES``
-   Any additional sources can be listed here.  For instance, if you are only
-   linking against a single ``.c`` file, you could list it here -- if our axes
-   calculator were fully contained within a file called ``calculate_my_axes.c``
-   we could link against it using this variable, and then we would not have to
-   specify any libraries.  This is usually the simplest way to do things, and in
-   fact, yt makes use of this itself for things like HEALpix and interpolation
-   functions.
-``EXT_LIBRARIES``
-   Any libraries that will need to be linked against (like ``m``!) should be
-   listed here.  Note that these are the name of the library minus the leading
-   ``lib`` and without the trailing ``.so``.  So ``libm.so`` would become ``m``
-   and ``libluggage.so`` would become ``luggage``.
-``EXT_LIBRARY_DIRS``
-   If the libraries listed in ``EXT_LIBRARIES`` reside in some other directory
-   or directories, those directories should be listed here.  For instance,
-   ``["/usr/local/lib", "/home/rincewind/luggage/"]`` .
-``EXT_INCLUDE_DIRS``
-   If any header files have been included that live in external directories,
-   those directories should be included here.
-``DEFINES``
-   Any define macros that should be passed to the C compiler should be listed
-   here; if they just need to be defined, then they should be specified to be
-   defined as "None."  For instance, if you wanted to pass ``-DTWOFLOWER``, you
-   would set this to equal: ``[("TWOFLOWER", None)]``.
-
-To build our extension, we would run:
-
-.. code-block:: bash
-
-   $ python2.7 axes_calculator_setup.py build_ext -i
-
-Note that since we don't yet have an ``axes_calculator.pyx``, this will fail.
-But once we have it, it ought to run.
-
-Writing and Calling our Wrapper
-+++++++++++++++++++++++++++++++
-
-Now we begin the tricky part: writing our wrapper code.  We've already
-figured out how to build it, which is halfway to being able to test that it
-works, and we now need to start writing Cython code.
-
-For a more detailed introduction to Cython, see the Cython documentation at
-http://docs.cython.org/ .  We'll cover a few of the basics for wrapping code
-however.
-
-To start out with, we need to open up and edit our file,
-``axes_calculator.pyx``.  Open this in your favorite version of vi (mine is
-vim) and we will get started by declaring the struct we need to pass in.  But
-first, we need to include some header information:
-
-.. code-block:: cython
-
-   import numpy as np
-   cimport numpy as np
-   cimport cython
-   from stdlib cimport malloc, free
-
-These lines simply import and "Cython import" some common routines.  For more
-information about what is already available, see the Cython documentation.  For
-now, we need to start translating our data.
-
-To do so, we tell Cython both where the struct should come from, and then we
-describe the struct itself.  One fun thing to note is that if you don't need to
-set or access all the values in a struct, and it just needs to be passed around
-opaquely, you don't have to include them in the definition.  For an example of
-this, see the ``png_writer.pyx`` file in the yt repository.  Here's the syntax
-for pulling in (from a file called ``axes_calculator.h``) a struct like the one
-described above:
-
-.. code-block:: cython
-
-   cdef extern from "axes_calculator.h":
-       ctypedef struct ParticleCollection:
-           long npart
-           double *xpos
-           double *ypos
-           double *zpos
-
-So far, pretty easy!  We've basically just translated the declaration from the
-``.h`` file.  Now that we have done so, any other Cython code can create and
-manipulate these ``ParticleCollection`` structs -- which we'll do shortly.
-Next up, we need to declare the function we're going to call, which looks
-nearly exactly like the one in the ``.h`` file.  (One common problem is that
-Cython doesn't know what ``const`` means, so just remove it wherever you see
-it.)  Declare it like so:
-
-.. code-block:: cython
-
-       void calculate_axes(ParticleCollection *part,
-                double *ax1, double *ax2, double *ax3)
-
-Note that this is indented one level, to indicate that it, too, comes from
-``axes_calculator.h``.  The next step is to create a function that accepts
-arrays and converts them to the format the struct likes.  We declare our
-function just like we would a normal Python function, using ``def``.  You can
-also use ``cdef`` if you only want to call a function from within Cython.  We
-want to call it from Python, too, so we just use ``def``.  Note that we don't
-here specify types for the various arguments.  In a moment we'll refine this to
-have better argument types.
-
-.. code-block:: cython
-
-   def examine_axes(xpos, ypos, zpos):
-       cdef double ax1[3], ax2[3], ax3[3]
-       cdef ParticleCollection particles
-       cdef int i
-
-       particles.npart = len(xpos)
-       particles.xpos = <double *> malloc(particles.npart * sizeof(double))
-       particles.ypos = <double *> malloc(particles.npart * sizeof(double))
-       particles.zpos = <double *> malloc(particles.npart * sizeof(double))
-
-       for i in range(particles.npart):
-           particles.xpos[i] = xpos[i]
-           particles.ypos[i] = ypos[i]
-           particles.zpos[i] = zpos[i]
-
-       calculate_axes(&particles, ax1, ax2, ax3)
-
-       free(particles.xpos)
-       free(particles.ypos)
-       free(particles.zpos)
-
-       return ( (ax1[0], ax1[1], ax1[2]),
-                (ax2[0], ax2[1], ax2[2]),
-                (ax3[0], ax3[1], ax3[2]) )
-
-This does the rest.  Note that we've weaved in C-type declarations (ax1, ax2,
-ax3) and Python access to the variables fed in.  This function will probably be
-quite slow -- because it doesn't know anything about the variables xpos, ypos,
-zpos, it won't be able to speed up access to them.  Now we will see what we can
-do by declaring them to be of array-type before we start handling them at all.
-We can do that by annotating in the function argument list.  But first, let's
-test that it works.  From the directory in which you placed these files, run:
-
-.. code-block:: bash
-
-   $ python2.7 axes_calculator_setup.py build_ext -i
-
-Now, create a sample file that feeds in the particles:
-
-.. code-block:: python
-
-    import axes_calculator
-    axes_calculator.examine_axes(xpos, ypos, zpos)
-
-Most of the time in that function is spent in converting the data.  So now we
-can go back and we'll try again, rewriting our converter function to believe
-that it's being fed arrays from NumPy:
-
-.. code-block:: cython
-
-   def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
-                    np.ndarray[np.float64_t, ndim=1] ypos,
-                    np.ndarray[np.float64_t, ndim=1] zpos):
-       cdef double ax1[3], ax2[3], ax3[3]
-       cdef ParticleCollection particles
-       cdef int i
-
-       particles.npart = len(xpos)
-       particles.xpos = <double *> malloc(particles.npart * sizeof(double))
-       particles.ypos = <double *> malloc(particles.npart * sizeof(double))
-       particles.zpos = <double *> malloc(particles.npart * sizeof(double))
-
-       for i in range(particles.npart):
-           particles.xpos[i] = xpos[i]
-           particles.ypos[i] = ypos[i]
-           particles.zpos[i] = zpos[i]
-
-       calculate_axes(&particles, ax1, ax2, ax3)
-
-       free(particles.xpos)
-       free(particles.ypos)
-       free(particles.zpos)
-
-       return ( (ax1[0], ax1[1], ax1[2]),
-                (ax2[0], ax2[1], ax2[2]),
-                (ax3[0], ax3[1], ax3[2]) )
-
-This should be substantially faster, assuming you feed it arrays.
-
-Now, there's one last thing we can try.  If we know our function won't modify
-our arrays, and they are C-Contiguous, we can simply grab pointers to the data:
-
-.. code-block:: cython
-
-   def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
-                    np.ndarray[np.float64_t, ndim=1] ypos,
-                    np.ndarray[np.float64_t, ndim=1] zpos):
-       cdef double ax1[3], ax2[3], ax3[3]
-       cdef ParticleCollection particles
-       cdef int i
-
-       particles.npart = len(xpos)
-       particles.xpos = <double *> xpos.data
-       particles.ypos = <double *> ypos.data
-       particles.zpos = <double *> zpos.data
-
-       # No element-by-element copy is needed here: the struct members now
-       # point directly at the memory backing the NumPy arrays.
-
-       calculate_axes(&particles, ax1, ax2, ax3)
-
-       return ( (ax1[0], ax1[1], ax1[2]),
-                (ax2[0], ax2[1], ax2[2]),
-                (ax3[0], ax3[1], ax3[2]) )
-
-But note!  This will break or do weird things if you feed it arrays that are
-non-contiguous.
-
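-If there is any chance your inputs are not contiguous -- for instance, because
-they came from slicing a larger array -- one defensive option is to copy them
-into contiguous buffers on the Python side before calling in.  A small sketch,
-assuming the ``xpos``, ``ypos`` and ``zpos`` arrays from above:
-
-.. code-block:: python
-
-   import numpy as np
-
-   # ascontiguousarray is a no-op for arrays that are already C-contiguous.
-   xpos = np.ascontiguousarray(xpos, dtype=np.float64)
-   ypos = np.ascontiguousarray(ypos, dtype=np.float64)
-   zpos = np.ascontiguousarray(zpos, dtype=np.float64)
-   axes_calculator.examine_axes(xpos, ypos, zpos)
-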
-At this point, you should have a mostly working piece of wrapper code.  And it
-was pretty easy!  Let us know if you run into any problems, or if you are
-interested in distributing your code with yt.
-
-A complete set of files is available with this documentation.  These are
-slightly different, so that the whole thing will simply compile, but they
-provide a useful example.
-
- * `axes.c <../_static/axes.c>`_
- * `axes.h <../_static/axes.h>`_
- * `axes_calculator.pyx <../_static/axes_calculator.pyx>`_
- * `axes_calculator_setup.py <../_static/axes_calculator_setup.txt>`_
-
-Exporting Data from yt
-----------------------
-
-yt is installed alongside h5py.  If you need to export your data from yt, to
-share it with people or to use it inside another code, h5py is a good way to do
-so.  You can write out complete datasets with just a few commands.  You have to
-import, and then save things out into a file.
-
-.. code-block:: python
-
-   import h5py
-   f = h5py.File("some_file.h5")
-   f.create_dataset("/data", data=some_data)
-
-This will create ``some_file.h5`` if necessary and add a new dataset
-(``/data``) to it.  Writing out in ASCII should be relatively straightforward.
-For instance:
-
-.. code-block:: python
-
-   f = open("my_file.txt", "w")
-   for halo in halos:
-       x, y, z = halo.center_of_mass()
-       f.write("%0.2f %0.2f %0.2f\n" % (x, y, z))
-   f.close()
-
-This example could be extended to work with any data object's fields, as well.
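-
-Reading the HDF5 data written above back in -- from yt, or from any other
-Python session -- is just as short.  A minimal sketch, assuming the file
-created earlier:
-
-.. code-block:: python
-
-   import h5py
-   f = h5py.File("some_file.h5", "r")
-   data = f["/data"][:]   # load the dataset into a NumPy array
-   f.close()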

diff -r 3ff41e238be60d2dd89056bc902c4bdc63de6bbb -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 source/advanced/ionization_cube.py
--- a/source/advanced/ionization_cube.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from yt.mods import *
-from yt.utilities.parallel_tools.parallel_analysis_interface \
-    import communication_system
-import h5py, glob, time
-
-@derived_field(name = "IonizedHydrogen",
-               units = r"\frac{\rho_{HII}}{\rho_{H}}")
-def IonizedHydrogen(field, data):
-    return data["HII_Density"]/(data["HI_Density"]+data["HII_Density"])
-
-ts = TimeSeriesData.from_filenames("SED800/DD*/*.hierarchy", parallel = 8)
-
-ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
-
-t1 = time.time()
-for pf in ts.piter():
-    z = pf.current_redshift
-    for g in parallel_objects(pf.h.grids, njobs = 16):
-        i1, j1, k1 = g.get_global_startindex() # Index into our domain
-        i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
-        # Look for the newly ionized gas
-        newly_ion = ((g["IonizedHydrogen"] > 0.999)
-                   & (ionized_z[i1:i2,j1:j2,k1:k2] < z))
-        ionized_z[i1:i2,j1:j2,k1:k2][newly_ion] = z
-        g.clear_data()
-
-print "Iteration completed  %0.3e" % (time.time()-t1)
-comm = communication_system.communicators[-1]
-for i in range(ionized_z.shape[0]):
-    ionized_z[i,:,:] = comm.mpi_allreduce(ionized_z[i,:,:], op="max")
-    print "Slab % 3i has minimum z of %0.3e" % (i, ionized_z[i,:,:].max())
-t2 = time.time()
-print "Completed.  %0.3e" % (t2-t1)
-
-if comm.rank == 0:
-    f = h5py.File("IonizationCube.h5", "w")
-    f.create_dataset("/z", data=ionized_z)

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt-doc/commits/ec78228fba3f/
Changeset:   ec78228fba3f
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-10-28 20:56:25
Summary:     Merging
Affected #:  1 file

diff -r b05412eecb34c08a401e9d832fd49fd4850d2fb3 -r ec78228fba3f268306c858dadf9c0eb86cad3e9a source/conf.py
--- a/source/conf.py
+++ b/source/conf.py
@@ -29,12 +29,19 @@
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
               'sphinx.ext.pngmath', 'sphinx.ext.viewcode',
-              'numpydocmod', 'youtube',
-              'yt_cookbook', 'yt_colormaps', 'notebook_sphinxext']
+              'numpydocmod', 'youtube', 'yt_cookbook', 'yt_colormaps']
 
 if not on_rtd:
     extensions.append('sphinx.ext.autosummary')
 
+try:
+    import runipy
+    import IPython.nbconvert.utils.pandoc
+    if not on_rtd:
+        extensions.append('notebook_sphinxext')
+except ImportError:
+    pass
+
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
 


https://bitbucket.org/yt_analysis/yt-doc/commits/bf3d20ad1bf2/
Changeset:   bf3d20ad1bf2
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-11-25 20:44:43
Summary:     Merging from all the development done in 2.x
Affected #:  150 files

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -2,7 +2,7 @@
 *.pyc
 .*.swp
 build/*
-source/api/generated/*
+source/reference/api/generated/*
 _temp/*
 **/.DS_Store
 RD0005-mine/*

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a Makefile
--- a/Makefile
+++ b/Makefile
@@ -41,7 +41,7 @@
 
 fullclean:
 	-rm -rf $(BUILDDIR)/*
-	-rm -rf source/api/generated
+	-rm -rf source/reference/api/generated
 
 recipeclean:
 	-rm -rf _temp/*.done source/cookbook/_static/*

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a extensions/notebook_sphinxext.py
--- a/extensions/notebook_sphinxext.py
+++ b/extensions/notebook_sphinxext.py
@@ -4,7 +4,6 @@
 from docutils.parsers.rst import directives
 from IPython.nbconvert import html, python
 from runipy.notebook_runner import NotebookRunner
-from jinja2 import FileSystemLoader
 
 class NotebookDirective(Directive):
     """Insert an evaluated notebook into a document
@@ -52,13 +51,11 @@
         f.write(script_text.encode('utf8'))
         f.close()
 
-        # Create evaluated version and save it to the dest path.
-        # Always use --pylab so figures appear inline
-        # perhaps this is questionable?
-        nb_runner = NotebookRunner(nb_in=nb_abs_path, pylab=True)
-        nb_runner.run_notebook()
-        nb_runner.save_notebook(dest_path_eval)
-        evaluated_text = nb_to_html(dest_path_eval)
+        try:
+            evaluated_text = evaluate_notebook(nb_abs_path, dest_path_eval)
+        except:
+            # bail
+            return []
 
         # Create link to notebook and script files
         link_rst = "(" + \
@@ -71,7 +68,7 @@
 
         # create notebook node
         attributes = {'format': 'html', 'source': 'nb_path'}
-        nb_node = nodes.raw('', evaluated_text, **attributes)
+        nb_node = notebook_node('', evaluated_text, **attributes)
         (nb_node.source, nb_node.line) = \
             self.state_machine.get_source_and_line(self.lineno)
 
@@ -80,11 +77,15 @@
 
         # clean up png files left behind by notebooks.
         png_files = glob.glob("*.png")
+        fits_files = glob.glob("*.fits")
+        h5_files = glob.glob("*.h5")
         for file in png_files:
             os.remove(file)
 
         return [nb_node]
 
+
+
 class notebook_node(nodes.raw):
     pass
 
@@ -134,6 +135,20 @@
     lines.append('</div>')
     return '\n'.join(lines)
 
+def evaluate_notebook(nb_path, dest_path=None):
+    # Create evaluated version and save it to the dest path.
+    # Always use --pylab so figures appear inline
+    # perhaps this is questionable?
+    nb_runner = NotebookRunner(nb_in=nb_path, pylab=True)
+    nb_runner.run_notebook()
+    if dest_path is None:
+        dest_path = 'temp_evaluated.ipynb'
+    nb_runner.save_notebook(dest_path)
+    ret = nb_to_html(dest_path)
+    if dest_path == 'temp_evaluated.ipynb':
+        os.remove(dest_path)
+    return ret
+
 def formatted_link(path):
     return "`%s <%s>`__" % (os.path.basename(path), path)
 

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a extensions/notebookcell_sphinxext.py
--- /dev/null
+++ b/extensions/notebookcell_sphinxext.py
@@ -0,0 +1,68 @@
+import os, shutil, string, glob, io
+from sphinx.util.compat import Directive
+from docutils.parsers.rst import directives
+from IPython.nbconvert import html, python
+from IPython.nbformat import current
+from runipy.notebook_runner import NotebookRunner
+from jinja2 import FileSystemLoader
+from notebook_sphinxext import \
+    notebook_node, nb_to_html, nb_to_python, \
+    visit_notebook_node, depart_notebook_node, \
+    evaluate_notebook
+
+class NotebookCellDirective(Directive):
+    """Insert an evaluated notebook cell into a document
+
+    This uses runipy and nbconvert to transform an inline python
+    script into html suitable for embedding in a Sphinx document.
+    """
+    required_arguments = 0
+    optional_arguments = 0
+    has_content = True
+
+    def run(self):
+        # check if raw html is supported
+        if not self.state.document.settings.raw_enabled:
+            raise self.warning('"%s" directive disabled.' % self.name)
+
+        # Construct notebook from cell content
+        content = "\n".join(self.content)
+        with open("temp.py", "w") as f:
+            f.write(content)
+
+        convert_to_ipynb('temp.py', 'temp.ipynb')
+
+        try:
+            evaluated_text = evaluate_notebook('temp.ipynb')
+        except:
+            # bail
+            return []
+
+        # create notebook node
+        attributes = {'format': 'html', 'source': 'nb_path'}
+        nb_node = notebook_node('', evaluated_text, **attributes)
+        (nb_node.source, nb_node.line) = \
+            self.state_machine.get_source_and_line(self.lineno)
+
+        # clean up
+        files = glob.glob("*.png") + ['temp.py', 'temp.ipynb']
+        for file in files:
+            os.remove(file)
+
+        return [nb_node]
+
+def setup(app):
+    setup.app = app
+    setup.config = app.config
+    setup.confdir = app.confdir
+
+    app.add_node(notebook_node,
+                 html=(visit_notebook_node, depart_notebook_node))
+
+    app.add_directive('notebook-cell', NotebookCellDirective)
+
+def convert_to_ipynb(py_file, ipynb_file):
+    with io.open(py_file, 'r', encoding='utf-8') as f:
+        notebook = current.reads(f.read(), format='py')
+    with io.open(ipynb_file, 'w', encoding='utf-8') as f:
+        current.write(notebook, f, format='ipynb')

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a extensions/pythonscript_sphinxext.py
--- /dev/null
+++ b/extensions/pythonscript_sphinxext.py
@@ -0,0 +1,61 @@
+from sphinx.util.compat import Directive
+from subprocess import Popen,PIPE
+from docutils.parsers.rst import directives
+from docutils import nodes
+import os, glob, base64
+
+class PythonScriptDirective(Directive):
+    """Execute an inline python script and display images.
+
+    This uses exec to execute an inline python script, copies
+    any images produced by the script, and embeds them in the document
+    along with the script.
+
+    """
+    required_arguments = 0
+    optional_arguments = 0
+    has_content = True
+
+    def run(self):
+        # Construct script from cell content
+        content = "\n".join(self.content)
+        with open("temp.py", "w") as f:
+            f.write(content)
+
+        # Use sphinx logger?
+        print ""
+        print content
+        print ""
+
+        codeproc = Popen(['python', 'temp.py'], stdout=PIPE)
+        out = codeproc.stdout.read()
+
+        images = sorted(glob.glob("*.png"))
+        fns = []
+        text = ''
+        for im in images:
+            text += get_image_tag(im)
+            os.remove(im)
+            
+        os.remove("temp.py")
+
+        code = content
+
+        literal = nodes.literal_block(code,code)
+        literal['language'] = 'python'
+
+        attributes = {'format': 'html'}
+        img_node = nodes.raw('', text, **attributes)
+        
+        return [literal, img_node]
+
+def setup(app):
+    app.add_directive('python-script', PythonScriptDirective)
+    setup.app = app
+    setup.config = app.config
+    setup.confdir = app.confdir
+
+def get_image_tag(filename):
+    with open(filename, "rb") as image_file:
+        encoded_string = base64.b64encode(image_file.read())
+        return '<img src="data:image/png;base64,%s" width="600"><br>' % encoded_string

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a helper_scripts/show_fields.py
--- a/helper_scripts/show_fields.py
+++ b/helper_scripts/show_fields.py
@@ -17,6 +17,17 @@
 everywhere, "Enzo" fields in Enzo datasets, "Orion" fields in Orion datasets,
 and so on.
 
+Try using ``pf.h.field_list`` and ``pf.h.derived_field_list`` to view the
+native and derived fields available for your dataset, respectively. For
+example, to display the native fields in alphabetical order:
+
+.. notebook-cell::
+
+  from yt.mods import *
+  pf = load("Enzo_64/DD0043/data0043")
+  for i in sorted(pf.h.field_list):
+    print i
+
 .. note:: Universal fields will be overridden by a code-specific field.
 
 .. rubric:: Table of Contents
@@ -95,7 +106,32 @@
 print
 print_all_fields(FLASHFieldInfo)
 
-print "Nyx-Specific Field List"
+print "Athena-Specific Field List"
 print "--------------------------"
 print
+print_all_fields(AthenaFieldInfo)
+
+print "Nyx-Specific Field List"
+print "-----------------------"
+print
 print_all_fields(NyxFieldInfo)
+
+print "Chombo-Specific Field List"
+print "--------------------------"
+print
+print_all_fields(ChomboFieldInfo)
+
+print "Pluto-Specific Field List"
+print "--------------------------"
+print
+print_all_fields(PlutoFieldInfo)
+
+print "Grid-Data-Format-Specific Field List"
+print "------------------------------------"
+print
+print_all_fields(GDFFieldInfo)
+
+print "Generic-Format (Stream) Field List"
+print "----------------------------------"
+print
+print_all_fields(StreamFieldInfo)

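A complementary cell for the derived fields would look much the same as the
``field_list`` example added above (a sketch mirroring that example):

  .. notebook-cell::

    from yt.mods import *
    pf = load("Enzo_64/DD0043/data0043")
    for i in sorted(pf.h.derived_field_list):
      print i
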
diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/advanced/index.rst
--- a/source/advanced/index.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-.. _advanced:
-
-Advanced yt Usage
-=================
-
-yt has been designed to be flexible, with several entry points.
-
-.. toctree::
-   :maxdepth: 2
-
-   installing
-   plugin_file
-   parallel_computation
-   creating_derived_quantities
-   creating_datatypes
-   debugdrive
-   external_analysis
-   developing
-   testing
-   creating_frontend
-   reason_architecture

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/advanced/installing.rst
--- a/source/advanced/installing.rst
+++ /dev/null
@@ -1,233 +0,0 @@
-.. _installing-yt:
-
-Installing yt
-=============
-
-.. _automated-installation:
-
-Automated Installation
-----------------------
-
-The recommended method for installing yt is to install an isolated environment,
-using the installation script.  The front yt homepage will always contain a
-link to the most up to date version, but you should be able to obtain it from a
-command prompt by executing:
-
-.. code-block:: bash
-
-   $ wget http://hg.yt-project.org/yt/raw/stable/doc/install_script.sh
-   $ bash install_script.sh
-
-at a command prompt.  This script will download the **stable** version of the
-``yt``, along with all of its affiliated dependencies.  It will tell you at the
-end various variables you need to set in order to ensure that ``yt`` works
-correctly.  If you run into *any* problems with the installation script, that
-is considered a bug we will fix, and we encourage you to write to `yt-users
-<http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org>`_.
-
-.. _manual-installation:
-
-Manual Installation
--------------------
-
-If you choose to install ``yt`` yourself, you will have to ensure that the
-correct dependencies have been met.  A few are optional, and one is necessary
-if you wish to install the latest development version of ``yt``, but here is a list
-of the various necessary items to build ``yt``.
-
-Installation of the various libraries is a bit beyond the scope of this
-document; if you run into any problems, your best course of action is to
-consult with the documentation for the individual projects.
-
-.. _dependencies:
-
-Required Libraries
-++++++++++++++++++
-
-This is a list of libraries installed by the installation script.  The version
-numbers are those used by the installation script -- ``yt`` may work with lower
-versions or higher versions, but these are known to work.
-
- * Python-2.7.3, but not (yet) 3.0 or higher
- * NumPy-1.6.1 (at least 1.4.1)
- * HDF5-1.8.9 or higher (at least 1.8.7)
- * h5py-2.1.0 (2.0 fixes a major bug)
- * Matplotlib-1.2.0 or higher
- * Mercurial-2.5.1 or higher (anything higher than 1.5 works)
- * Cython-0.17.1 or higher (at least 0.15.1)
-
-Optional Libraries
-++++++++++++++++++
-
-These libraries are all optional, but they are highly recommended.
-
- * Forthon-0.8.10 or higher (for halo finding and correlation functions)
- * libpng-1.5.12 or higher (for raw image outputting)
- * FreeType-2.4.4 or higher (for text annotation on raw images)
- * IPython-0.13.1 (0.10 will also work)
- * PyX-0.11.1
- * zeromq-2.2.0 (needed for IPython notebook)
- * pyzmq-2.2.11 (needed for IPython notebook)
- * tornado-2.2  (needed for IPython notebook)
- * sympy-0.7.2 
- * nose-1.2.1
-
-If you are attempting to install manually, and you are not installing into a
-fully-isolated location, you should probably use your system's package
-management system as much as possible.
-
-Once you have successfully installed the dependencies, you should clone the
-primary ``yt`` repository.  
-
-You can clone the repository with this mercurial command:
-
-.. code-block:: bash
-
-   $ hg clone http://hg.yt-project.org/yt ./yt-hg
-   $ cd yt-hg
-   $ hg up -C stable
-
-This will create a directory called ``yt-hg`` that contains the entire version
-control history of ``yt``.  If you would rather use the branch ``yt``, which is
-the current development version, issue the command ``hg up -C yt`` .
-
-To compile ``yt``, you will have to specify the location to the HDF5 libraries,
-and optionally the libpng and freetype libraries.  To do so, put the "prefix"
-for the installation location into the files ``hdf5.cfg`` and (optionally)
-``png.cfg`` and ``freetype.cfg``.  For instance, if you installed into
-``/opt/hdf5/`` you would put ``/opt/hdf5/`` into ``hdf5.cfg``.  Once you have
-specified the location to these libraries, you can execute the command:
-
-.. code-block:: bash
-
-   $ python2.7 setup.py install
-
-from the ``yt-hg`` directory.  Alternately, you can replace ``install`` with
-``develop`` if you anticipate making any modifications to the code; ``develop``
-simply means that the source will be read from that directory, whereas
-``install`` will copy it into the main Python package directory.
-
-That should install ``yt`` the library as well as the commands ``iyt`` and
-``yt``.  Good luck!
-
-Package Management System Installation
---------------------------------------
-
-While the installation script provides a complete stack of utilities,
-integration into your existing operating system can often be desirable.
-
-Ubuntu PPAs
-+++++++++++
-
-Mike Kuhlen has kindly provided PPAs for Ubuntu. If you're running Ubuntu, you
-can install these easily:
-
-.. code-block:: bash
-
-   $ sudo add-apt-repository ppa:kuhlen
-   $ sudo apt-get update
-   $ sudo apt-get install yt
-
-If you'd like a development branch of yt, you can change yt for yt-devel to get
-the most recently packaged development branch.
-
-MacPorts
-++++++++
-
-Thomas Robitaille has kindly provided a `MacPorts <http://www.macports.org/>`_
-installation of yt, as part of his `MacPorts for Python Astronomers
-<http://astrofrog.github.com/macports-python/>`_.  To activate, simply type:
-
-Thanks very much, Thomas!
-
-
-.. _community-installations:
-
-Community Installations
------------------------
-
-Recently, yt has been added as a module on several supercomputers.  We hope to
-increase this list through partnership with supercomputer centers.  You should
-be able to load an appropriate yt module on these systems:
-
- * NICS Kraken
- * NICS Nautilus
-
-.. _updating-yt:
-
-Updating yt
-===========
-
-.. _automated-update:
-
-Automated Update
-----------------
-
-The recommended method for updating yt is to run the update tool at a command 
-prompt:
-
-.. code-block:: bash
-
-   $ yt update
-
-This script will identify which repository you're using (stable, development, 
-etc.), connect to the yt-project.org server, download any recent changesets 
-for your version and then recompile any new code that needs 
-it (e.g. cython, rebuild).  This same behavior is achieved manually by running:
-
-.. code-block:: bash
-
-   $ cd $YT_DEST/src/yt-hg 
-   $ hg pull
-   $ python setup.py develop
-
-Note that this automated update will fail if you have made modifications to
-the yt code base that you have not yet committed.  If this occurs, identify
-your modifications using 'hg status', and then commit them using 'hg commit',
-in order to bring the repository back to a state where you can automatically
-update the code as above.  On the other hand, if you want to wipe out your
-uncommitted changes and just update to the latest version, you can type: 
-
-.. code-block:: bash
-
-   $ cd $YT_DEST/src/yt-hg 
-   $ hg pull
-   $ hg up -C      # N.B. This will wipe your uncommitted changes! 
-   $ python setup.py develop
-
-If you run into *any* problems with the update utility, it should be considered
-a bug, and we would love to hear about it so we can fix it.  Please inform us 
-through the bugsubmit utility or through the yt-users mailing list.
-
-Updating yt's Dependencies
---------------------------
-
-If you used the install script to originally install yt, updating the various 
-libraries and modules yt depends on can be done by running:
-
-.. code-block:: bash
-
-   $ yt update --all
-
-For custom installs, you will need to update the dependencies by hand.
-
-Switching Between Branches in yt
-================================
-
-.. _switching-versions:
-
-If you are running the stable version of the code, and you want to switch 
-to using the development version of the code (or vice versa), you can merely
-follow a few steps (without reinstalling all of the source again):
-
-.. code-block:: bash
-
-   $ cd $YT_DEST/src/yt-hg 
-   $ hg pull
-   <commit all changes or they will be lost>
-   $ hg up -C <branch>     # N.B. This will wipe your uncommitted changes! 
-   $ python setup.py develop
-
-If you want to switch to using the development version of the code, use: 
-"yt" as <branch>, whereas if you want to switch to using the stable version
-of the code, use: "stable" as <branch>.

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/_obj_docstrings.inc
--- a/source/analyzing/_obj_docstrings.inc
+++ b/source/analyzing/_obj_docstrings.inc
@@ -12,7 +12,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCoveringGridBase`.)
 
 
-.. class:: cutting(self, normal, center, fields=None, node_name=None, **field_parameters):
+.. class:: cut_region(self, base_region, field_cuts, **field_parameters):
+
+   For more information, see :ref:`physical-object-api`
+   (This is a proxy for :class:`~yt.data_objects.data_containers.InLineExtractedRegionBase`.)
+
+
+.. class:: cutting(self, normal, center, fields=None, node_name=None, north_vector=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCuttingPlaneBase`.)
@@ -24,6 +30,12 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRCylinderBase`.)
 
 
+.. class:: ellipsoid(self, center, A, B, C, e0, tilt, fields=None, pf=None, **field_parameters):
+
+   For more information, see :ref:`physical-object-api`
+   (This is a proxy for :class:`~yt.data_objects.data_containers.AMREllipsoidBase`.)
+
+
 .. class:: extracted_region(self, base_region, indices, force_refresh=True, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
@@ -48,6 +60,12 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRGridCollectionBase`.)
 
 
+.. class:: grid_collection_max_level(self, center, max_level, fields=None, pf=None, **field_parameters):
+
+   For more information, see :ref:`physical-object-api`
+   (This is a proxy for :class:`~yt.data_objects.data_containers.AMRMaxLevelCollectionBase`.)
+
+
 .. class:: inclined_box(self, origin, box_vectors, fields=None, pf=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
@@ -78,7 +96,7 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRPeriodicRegionStrictBase`.)
 
 
-.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style='level', serialize=True, **field_parameters):
+.. class:: proj(self, axis, field, weight_field=None, max_level=None, center=None, pf=None, source=None, node_name=None, field_cuts=None, preload_style=None, serialize=True, style='integrate', **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRQuadTreeProjBase`.)
@@ -120,7 +138,13 @@
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSphereBase`.)
 
 
-.. class:: streamline(self, positions, fields=None, pf=None, **field_parameters):
+.. class:: streamline(self, positions, length=1.0, fields=None, pf=None, **field_parameters):
 
    For more information, see :ref:`physical-object-api`
    (This is a proxy for :class:`~yt.data_objects.data_containers.AMRStreamlineBase`.)
+
+
+.. class:: surface(self, data_source, surface_field, field_value):
+
+   For more information, see :ref:`physical-object-api`
+   (This is a proxy for :class:`~yt.data_objects.data_containers.AMRSurfaceBase`.)

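For orientation, the new ``surface`` object documented above would be
constructed roughly like this (a sketch only; the dataset, field, and contour
value are illustrative):

    from yt.mods import *
    pf = load("Enzo_64/DD0043/data0043")
    dd = pf.h.all_data()
    # extract the isodensity surface at the given field value
    surf = pf.h.surface(dd, "Density", 1.0e-27)
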
diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- /dev/null
+++ b/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -0,0 +1,367 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from yt.analysis_modules.api import ParticleTrajectories\n",
+      "from yt.config import ytcfg\n",
+      "path = ytcfg.get(\"yt\", \"test_data_dir\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+      "my_fns.sort()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(my_fns[0])\n",
+      "dd = pf.h.all_data()\n",
+      "indices = dd[\"particle_index\"].astype(\"int\")\n",
+      "print indices"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print trajs[\"particle_position_x\"]\n",
+      "print trajs[\"particle_position_x\"].shape"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And we can plot the velocity fields as well:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "particle1 = trajs.trajectory_from_index(1)\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = SlicePlot(pf, \"x\", [\"Density\",\"Dark_Matter_Density\"], center=\"max\", width=(3.0, \"mpc\"))\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "sp = pf.h.sphere(\"max\", (0.5, \"mpc\"))\n",
+      "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+      "my_fns.sort()\n",
+      "trajs = ParticleTrajectories(my_fns, indices)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from mpl_toolkits.mplot3d import Axes3D\n",
+      "fig = plt.figure(figsize=(8.0, 8.0))\n",
+      "ax = fig.add_subplot(111, projection='3d')\n",
+      "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_z\"][100], trajs[\"particle_position_z\"][100])\n",
+      "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_z\"][8], trajs[\"particle_position_z\"][8])\n",
+      "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_z\"][25], trajs[\"particle_position_z\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, `yt` will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.add_fields([\"Density\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We also could have included `\"Density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][25])\n",
+      "pylab.yscale(\"log\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.write_out(\"halo_trajectories.txt\")\n",
+      "trajs.write_out_h5(\"halo_trajectories.h5\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Important Caveats"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "* Parallelization is not yet implemented.\n",
+      "* For large datasets, constructing trajectories can be very slow. We are working on optimizing the algorithm for a future release. \n",
+      "* At the moment, trajectories are limited for particles that exist in every dataset. Therefore, for codes like FLASH that allow for particles to exit the domain (and hence the simulation) for certain types of boundary conditions, you need to insure that the particles you wish to examine exist in all datasets in the time series from the beginning to the end. If this is not the case, `ParticleTrajectories` will throw an error. This is a limitation we hope to relax in a future release. "
+     ]
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/SZ_projections.ipynb
--- /dev/null
+++ b/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -0,0 +1,224 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The change in the CMB intensity due to Compton scattering of CMB\n",
+      "photons off of thermal electrons in galaxy clusters, otherwise known as the\n",
+      "Sunyaev-Zeldovich (S-Z) effect, can to a reasonable approximation be represented by a\n",
+      "projection of the pressure field of a cluster. However, the *full* S-Z signal is a combination of thermal and kinetic\n",
+      "contributions, and for large frequencies and high temperatures\n",
+      "relativistic effects are important. For computing the full S-Z signal\n",
+      "incorporating all of these effects, Jens Chluba has written a library:\n",
+      "SZpack ([Chluba et al 2012](http://adsabs.harvard.edu/abs/2012MNRAS.426..510C)). \n",
+      "\n",
+      "The `sunyaev_zeldovich` analysis module in `yt` makes it possible\n",
+      "to make projections of the full S-Z signal given the properties of the\n",
+      "thermal gas in the simulation using SZpack. SZpack has several different options for computing the S-Z signal, from full\n",
+      "integrations to very good approximations.  Since a full or even a\n",
+      "partial integration of the signal for each cell in the projection\n",
+      "would be prohibitively expensive, we use the method outlined in\n",
+      "[Chluba et al 2013](http://adsabs.harvard.edu/abs/2013MNRAS.430.3054C) to expand the\n",
+      "total S-Z signal in terms of moments of the projected optical depth $\\tau$, projected electron temperature $T_e$, and\n",
+      "velocities $\\beta_{c,\\parallel}$ and $\\beta_{c,\\perp}$ (their equation 18):"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "$$S(\\tau, T_{e},\\beta_{c,\\parallel},\\beta_{\\rm c,\\perp}) \\approx S_{\\rm iso}^{(0)} + S_{\\rm iso}^{(2)}\\omega^{(1)} + C_{\\rm iso}^{(1)}\\sigma^{(1)} + D_{\\rm iso}^{(2)}\\kappa^{(1)} + E_{\\rm iso}^{(2)}\\beta_{\\rm c,\\perp,SZ}^2 +~...$$\n"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`yt` makes projections of the various moments needed for the\n",
+      "calculation, and then the resulting projected fields are used to\n",
+      "compute the S-Z signal. In our implementation, the expansion is carried out to first-order\n",
+      "terms in $T_e$ and zeroth-order terms in $\\beta_{c,\\parallel}$ by default, but terms up to second-order in can be optionally\n",
+      "included. "
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Installing SZpack"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "SZpack can be downloaded [here](http://www.cita.utoronto.ca/~jchluba/Science_Jens/SZpack/SZpack.html). Make\n",
+      "sure you install a version later than v1.1.1. For computing the S-Z\n",
+      "integrals, SZpack requires the [GNU Scientific Library](http://www.gnu.org/software/gsl/). For compiling\n",
+      "the Python module, you need to have a recent version of [swig](http://www.swig.org>) installed. After running `make` in the top-level SZpack directory, you'll need to run it in the `python` subdirectory, which is the\n",
+      "location of the `SZpack` module. You may have to include this location in the `PYTHONPATH` environment variable.\n"
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Creating S-Z Projections"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Once you have SZpack installed, making S-Z projections from ``yt``\n",
+      "datasets is fairly straightforward:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from yt.analysis_modules.api import SZProjection\n",
+      "\n",
+      "pf = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "\n",
+      "freqs = [90.,180.,240.]\n",
+      "szprj = SZProjection(pf, freqs)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`freqs` is a list or array of frequencies in GHz at which the signal\n",
+      "is to be computed. The `SZProjection` constructor also accepts the\n",
+      "optional keywords, **mue** (mean molecular weight for computing the\n",
+      "electron number density, 1.143 is the default) and **high_order** (set\n",
+      "to True to compute terms in the S-Z signal expansion up to\n",
+      "second-order in $T_{e,SZ}$ and $\\beta$). "
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Once you have created the `SZProjection` object, you can use it to\n",
+      "make on-axis and off-axis projections:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "# An on-axis projection along the z-axis with width 10 Mpc, centered on the gas density maximum\n",
+      "szprj.on_axis(\"z\", center=\"max\", width=(10.0, \"mpc\"), nx=400)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To make an off-axis projection, `szprj.off_axis` is called in the same way, except that the first argument is a three-component normal vector. \n",
+      "\n",
+      "Currently, only one projection can be in memory at once. These methods\n",
+      "create images of the projected S-Z signal at each requested frequency,\n",
+      "which can be accessed dict-like from the projection object (e.g.,\n",
+      "`szprj[\"90_GHz\"]`). Projections of other quantities may also be\n",
+      "accessed; to see what fields are available call `szprj.keys()`. The methods also accept standard ``yt``\n",
+      "keywords for projections such as **center**, **width**, and **source**. The image buffer size can be controlled by setting **nx**.  \n"
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Writing out the S-Z Projections"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "You may want to output the S-Z images to figures suitable for\n",
+      "inclusion in a paper, or save them to disk for later use. There are a\n",
+      "few methods included for this purpose. For PNG figures with a colorbar\n",
+      "and axes, use `write_png`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "szprj.write_png(\"SZ_example\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "For simple output of the image data to disk, call `write_hdf5`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "szprj.write_hdf5(\"SZ_example.h5\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, for output to FITS files which can be opened or analyzed\n",
+      "using other programs (such as ds9), call `export_fits`."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "szprj.write_fits(\"SZ_example.fits\", clobber=True)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which would write all of the projections to a single FITS file,\n",
+      "including coordinate information in kpc. The optional keyword\n",
+      "**clobber** allows a previous file to be overwritten. \n"
+     ]
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

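As a small supplement to the notebook above, the off-axis counterpart to the
``on_axis`` call it shows would look roughly like this (a sketch; the normal
vector is illustrative, and the keywords are the standard projection keywords
the notebook mentions):

    # off-axis S-Z projection along an arbitrary line of sight
    L = [0.1, 0.3, 0.9]
    szprj.off_axis(L, center="max", width=(10.0, "mpc"), nx=400)
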
diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/Photon_Simulator_30_4.png
Binary file source/analyzing/analysis_modules/_images/Photon_Simulator_30_4.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/Photon_Simulator_34_1.png
Binary file source/analyzing/analysis_modules/_images/Photon_Simulator_34_1.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/bubbles.png
Binary file source/analyzing/analysis_modules/_images/bubbles.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/ds9_bubbles.png
Binary file source/analyzing/analysis_modules/_images/ds9_bubbles.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/ds9_sloshing.png
Binary file source/analyzing/analysis_modules/_images/ds9_sloshing.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/_images/dsquared.png
Binary file source/analyzing/analysis_modules/_images/dsquared.png has changed

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/clump_finding.rst
--- a/source/analyzing/analysis_modules/clump_finding.rst
+++ b/source/analyzing/analysis_modules/clump_finding.rst
@@ -4,7 +4,7 @@
 =============
 .. sectionauthor:: Britton Smith <britton.smith at colorado.edu>
 
-YT has the ability to identify topologically disconnected structures based in a dataset using 
+``yt`` has the ability to identify topologically disconnected structures in a dataset using 
 any field available.  This is powered by a contouring algorithm that runs in a recursive 
 fashion.  The user specifies the initial data object in which the clump-finding will occur, 
 the field over which the contouring will be done, the upper and lower limits of the 
@@ -183,4 +183,4 @@
    :height: 400
 
 The figures above show that the treecode method is generally very advantageous,
-and that the error introduced is minimal.
\ No newline at end of file
+and that the error introduced is minimal.

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -4,7 +4,7 @@
 =======================
 .. sectionauthor:: Geoffrey So <gso at physics.ucsd.edu>
 
-.. warning:: This is my first attempt at modifying the YT source code,
+.. warning:: This is my first attempt at modifying the ``yt`` source code,
    so the program may be bug ridden.  Please send yt-dev an email and
    address to Geoffrey So if you discover something wrong with this
    portion of the code.
@@ -12,7 +12,7 @@
 Purpose
 -------
 
-The purpose of creating this feature in YT is to analyze field
+The purpose of creating this feature in ``yt`` is to analyze field
 properties that surround dark matter haloes.  Originally, this was
 usually done with the sphere 3D container, but since many halo
 particles are linked together in a more elongated shape, I thought it

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/exporting.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/exporting.rst
@@ -0,0 +1,8 @@
+Exporting to External Radiation Transport Codes
+===============================================
+
+.. toctree::
+   :maxdepth: 2
+
+   sunrise_export
+   radmc3d_export
\ No newline at end of file

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/halo_analysis.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/halo_analysis.rst
@@ -0,0 +1,14 @@
+Halo Analysis
+=============
+
+Halo finding, mass functions, merger trees, and profiling.
+
+.. toctree::
+   :maxdepth: 1
+
+   running_halofinder
+   halo_mass_function
+   hmf_howto
+   merger_tree
+   halo_profiling
+   ellipsoid_analysis

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/index.rst
--- a/source/analyzing/analysis_modules/index.rst
+++ b/source/analyzing/analysis_modules/index.rst
@@ -8,26 +8,11 @@
 -----------------------------
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
-   running_halofinder
-   hmf_howto
-   halo_profiling
-   light_cone_generator
-   light_ray_generator
-   planning_cosmology_simulations
-   absorption_spectrum
-   quick_start_fitting
-   fitting_procedure
-   star_analysis
-   simulated_observations
-   halo_mass_function
-   merger_tree
-   radial_column_density
-   sunrise_export
-   ellipsoid_analysis
-   xray_emission_fields
-   radmc3d_export
+   halo_analysis
+   synthetic_observation
+   exporting
 
 General Analysis Modules
 ------------------------
@@ -37,3 +22,4 @@
 
    two_point_functions
    clump_finding
+   particle_trajectories

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/particle_trajectories.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/particle_trajectories.rst
@@ -0,0 +1,4 @@
+Particle Trajectories
+---------------------
+
+.. notebook:: Particle_Trajectories.ipynb

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/photon_simulator.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/photon_simulator.rst
@@ -0,0 +1,533 @@
+Constructing Mock X-ray Observations
+------------------------------------
+
+The ``photon_simulator`` analysis module enables the creation of
+simulated X-ray photon lists of events from datasets that ``yt`` is able
+to read. The simulated events then can be exported to X-ray telescope
+simulators to produce realistic observations or can be analyzed in-line.
+The algorithm is based on the one implemented in
+`PHOX <http://www.mpa-garching.mpg.de/~kdolag/Phox/>`_ for SPH datasets
+by Veronica Biffi and Klaus Dolag. There are two relevant papers:
+
+`Biffi, V., Dolag, K., Bohringer, H., & Lemson, G. 2012, MNRAS, 420,
+3545 <http://adsabs.harvard.edu/abs/2012MNRAS.420.3545B>`_
+
+`Biffi, V., Dolag, K., Bohringer, H. 2013, MNRAS, 428,
+1395 <http://adsabs.harvard.edu/abs/2013MNRAS.428.1395B>`_
+
+The basic procedure is as follows:
+
+1. Using a spectral model for the photon flux given the gas properties,
+   and an algorithm for generating photons from the dataset loaded in
+   ``yt``, produce a large number of photons in three-dimensional space
+   associated with the cells of the dataset.
+2. Use this three-dimensional dataset as a sample from which to generate
+   photon events that are projected along a line of sight, Doppler and
+   cosmologically shifted, and absorbed by the Galactic foreground.
+3. Optionally convolve these photons with instrument responses and
+   produce images and spectra.
+
+We'll demonstrate the functionality on a realistic dataset of a galaxy
+cluster to get you started.
+
+Creating an X-ray observation of a dataset on disk
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+.. code:: python
+
+    from yt.imods import *
+    from yt.analysis_modules.api import *
+    from yt.utilities.cosmology import Cosmology
+
+We're going to load up an Athena dataset of a galaxy cluster core:
+
+.. code:: python
+
+    pf = load("MHDSloshing/virgo_low_res.0054.vtk", 
+              parameters={"TimeUnits":3.1557e13,
+                          "LengthUnits":3.0856e24,
+                          "DensityUnits":6.770424595218825e-27})
+
+First, to get a sense of what the resulting image will look like, let's
+make a new ``yt`` field called ``"DensitySquared"``, since the X-ray
+emission is proportional to :math:`\rho^2`, and a weak function of
+temperature and metallicity.
+
+.. code:: python
+
+    def _density_squared(field, data):
+        return data["Density"]**2
+    add_field("DensitySquared", function=_density_squared)
+
+Then we'll project this field along the z-axis.
+
+.. code:: python
+
+    prj = ProjectionPlot(pf, "z", ["DensitySquared"], width=(500., "kpc"))
+    prj.set_cmap("DensitySquared", "gray_r")
+    prj.show()
+
+.. image:: _images/dsquared.png
+
+In this simulation the core gas is sloshing, producing spiral-shaped
+cold fronts.
+
+.. note::
+
+   To work out the following examples, you should install
+   `AtomDB <http://www.atomdb.org>`_ and get the files from the
+   `xray_data <http://yt-project.org/data/xray_data.tar.gz>`_ auxiliary
+   data package (see the ``xray_data`` `README <xray_data_README.html>`_ for details on the latter). Make sure that
+   in what follows you specify the full path to the locations of these
+   files.
+
+To generate photons from this dataset, we have several different things
+we need to set up. The first is a standard ``yt`` data object. It could
+be all of the cells in the domain, a rectangular solid region, a
+cylindrical region, etc. Let's keep it simple and make a sphere at the
+center of the domain, with a radius of 250 kpc:
+
+.. code:: python
+
+    sp = pf.h.sphere("c", (250., "kpc"))
+
+This will serve as our ``data_source`` that we will use later. Next, we
+need to create the ``SpectralModel`` instance that will determine how
+the data in the grid cells will generate photons. By default, two
+options are available. The first, ``XSpecThermalModel``, allows one to
+use any thermal model that is known to
+`XSPEC <https://heasarc.gsfc.nasa.gov/xanadu/xspec/>`_, such as
+``"mekal"`` or ``"apec"``:
+
+.. code:: python
+
+    mekal_model = XSpecThermalModel("mekal", 0.01, 10.0, 2000)
+
+This requires XSPEC and
+`PyXspec <http://heasarc.gsfc.nasa.gov/xanadu/xspec/python/html/>`_ to
+be installed. The second option, ``TableApecModel``, utilizes the data
+from the `AtomDB <http://www.atomdb.org>`_ tables. We'll use this one
+here:
+
+.. code:: python
+
+    apec_model = TableApecModel("atomdb_v2.0.2",
+                                0.01, 20.0, 20000,
+                                thermal_broad=False,
+                                apec_vers="2.0.2")
+
+The first argument sets the location of the AtomDB files, and the next
+three arguments determine the minimum energy in keV, maximum energy in
+keV, and the number of linearly-spaced bins to bin the spectrum in. If
+the optional keyword ``thermal_broad`` is set to ``True``, the spectral
+lines will be thermally broadened.
+
+Now that we have our ``SpectralModel`` that gives us a spectrum, we need
+to connect this model to a ``PhotonModel`` that will connect the field
+data in the ``data_source`` to the spectral model to actually generate
+photons. For thermal spectra, we have a special ``PhotonModel`` called
+``ThermalPhotonModel``:
+
+.. code:: python
+
+    thermal_model = ThermalPhotonModel(apec_model, X_H=0.75, Zmet=0.3)
+
+Here we pass in the ``SpectralModel`` and can optionally set values for
+the hydrogen mass fraction ``X_H`` and the metallicity ``Zmet``. If
+``Zmet`` is a float, that value is used for the metallicity everywhere,
+in units of the solar metallicity. If it is a string, it is taken to be
+the name of the metallicity field (which may be spatially
+varying).
+
+Next, we need to specify "fiducial" values for the telescope collecting
+area, exposure time, and cosmological redshift. Remember, the initial
+photon generation will act as a source for Monte-Carlo sampling for more
+realistic values of these parameters later, so choose generous values so
+that you have a large number of photons to sample from. We will also
+construct a ``Cosmology`` object:
+
+.. code:: python
+
+    A = 6000.
+    exp_time = 4.0e5
+    redshift = 0.05
+    cosmo = Cosmology()
+
+Now, we finally combine everything together and create a ``PhotonList``
+instance:
+
+.. code:: python
+
+    photons = PhotonList.from_scratch(sp, redshift, A, exp_time,
+                                      thermal_model, center="c",
+                                      cosmology=cosmo)
+
+By default, the angular diameter distance to the object is determined
+from the ``cosmology`` and the cosmological ``redshift``. If a
+``Cosmology`` instance is not provided, one will be made from the
+default cosmological parameters. If your source is local to the galaxy,
+you can set its distance directly, using a tuple, e.g.
+``dist=(30, "kpc")``. In this case, the ``redshift`` and ``cosmology``
+will be ignored. Finally, if the photon generating function accepts any
+parameters, they can be passed to ``from_scratch`` via a ``parameters``
+dictionary.
+
+At this point, the ``photons`` are distributed in the three-dimensional
+space of the ``data_source``, with energies in the rest frame of the
+plasma. Doppler and/or cosmological shifting of the photons will be
+applied in the next step.
+
+The ``photons`` can be saved to disk in an HDF5 file:
+
+.. code:: python
+
+    photons.write_h5_file("my_photons.h5")
+
+This is most useful when it takes a long time to generate the photons,
+because a ``PhotonList`` can be created in-memory from the dataset
+stored on disk:
+
+.. code:: python
+
+    photons = PhotonList.from_file("my_photons.h5")
+
+This enables one to make many simulated event sets, along different
+projections, at different redshifts, with different exposure times, and
+different instruments, with the same ``data_source``, without having to
+do the expensive step of generating the photons all over again!
+
+To get a set of photon events such as that observed by X-ray telescopes,
+we need to take the three-dimensional photon distribution and project it
+along a line of sight. Also, this is the step at which we put in the
+realistic values for the telescope collecting area, cosmological
+redshift and/or source distance, and exposure time. The order of
+operations goes like this:
+
+1. From the adjusted exposure time, redshift and/or source distance, and
+   telescope collecting area, determine the number of photons we will
+   *actually* observe.
+2. Determine the plane of projection from the supplied normal vector,
+   and reproject the photon positions onto this plane.
+3. Doppler-shift the photon energies according to the velocity along the
+   line of sight, and apply cosmological redshift if the source is not
+   local.
+4. Optionally, alter the received distribution of photons via an
+   energy-dependent galactic absorption model.
+5. Optionally, alter the received distribution of photons using an
+   effective area curve provided from an ancillary response file (ARF).
+6. Optionally, scatter the photon energies into channels according to
+   the information from a redistribution matrix file (RMF).
+
+First, if we want to apply galactic absorption, we need to set up a
+spectral model for the absorption coefficient, similar to the spectral
+model for the emitted photons we set up before. Here again, we have two
+options. The first, ``XSpecAbsorbModel``, allows one to use any
+absorption model that XSpec is aware of that takes only the Galactic
+column density :math:`N_H` as input:
+
+.. code:: python
+
+    N_H = 0.1 
+    abs_model = XSpecAbsorbModel("wabs", N_H)  
+
+The second option, ``TableAbsorbModel``, takes as input an HDF5 file
+containing two datasets, ``"energy"`` (in keV), and ``"cross_section"``
+(in :math:`{\rm cm}^2`), and the Galactic column density :math:`N_H`:
+
+.. code:: python
+
+    abs_model = TableAbsorbModel("tbabs_table.h5", 0.1)
+
+Now we're ready to project the photons. First, we choose a line-of-sight
+vector ``L``. Second, we'll adjust the exposure time and the redshift.
+Third, we'll pass in the absorption ``SpectralModel``. Fourth, we'll
+specify a ``sky_center`` in RA,DEC on the sky in degrees.
+
+Also, we're going to convolve the photons with instrument ``responses``.
+For this, you need an ARF/RMF pair with matching energy bins. This is of
+course far short of a full simulation of a telescope ray-trace, but it's
+a quick-and-dirty way to get something close to the real thing. We'll
+discuss how to get your simulated events into a format suitable for
+reading by telescope simulation codes later.
+
+.. code:: python
+
+    ARF = "chandra_ACIS-S3_onaxis_arf.fits"
+    RMF = "chandra_ACIS-S3_onaxis_rmf.fits"
+    L = [0.0,0.0,1.0]
+    events = photons.project_photons(L, exp_time_new=2.0e5, redshift_new=0.07, absorb_model=abs_model,
+                                     sky_center=(187.5,12.333), responses=[ARF,RMF])
+
+.. parsed-literal::
+
+    WARNING:yt:This routine has not been tested to work with all RMFs. YMMV.
+
+
+Also, the optional keyword ``psf_sigma`` specifies a Gaussian standard
+deviation to scatter the photon sky positions around with, providing a
+crude representation of a PSF.
+
+.. warning::
+
+   The binned images that result, even if you convolve with responses,
+   are still of the same resolution as the finest cell size of the
+   simulation dataset. If you want a more accurate simulation of a
+   particular X-ray telescope, you should check out `Storing events for future use and for reading-in by telescope simulators`_.
+
+Let's just take a quick look at the raw events object:
+
+.. code:: python
+
+    print events
+
+.. code:: python
+
+    {'eobs': array([  0.32086522,   0.32271389,   0.32562708, ...,   8.90600621,
+             9.73534237,  10.21614256]), 
+     'xsky': array([ 187.5177707 ,  187.4887825 ,  187.50733609, ...,  187.5059345 ,
+            187.49897546,  187.47307048]), 
+     'ysky': array([ 12.33519996,  12.3544496 ,  12.32750903, ...,  12.34907707,
+            12.33327653,  12.32955225]), 
+     'ypix': array([ 133.85374195,  180.68583074,  115.14110561, ...,  167.61447493,
+            129.17278711,  120.11508562]), 
+     'PI': array([ 27,  15,  25, ..., 609, 611, 672]), 
+     'xpix': array([  86.26331108,  155.15934197,  111.06337043, ...,  114.39586907,
+            130.93509652,  192.50639633])}
+
+
+We can bin up the events into an image and save it to a FITS file. The
+pixel size of the image is equivalent to the smallest cell size from the
+original dataset. We can specify limits for the photon energies to be
+placed in the image:
+
+.. code:: python
+
+    events.write_fits_image("sloshing_image.fits", clobber=True, emin=0.5, emax=7.0)
+
+The resulting FITS image will have WCS coordinates in RA and Dec. It
+should be suitable for plotting in
+`ds9 <http://hea-www.harvard.edu/RD/ds9/site/Home.html>`_, for example.
+There is also a great project for opening astronomical images in Python,
+called `APLpy <http://aplpy.github.io>`_:
+
+.. code:: python
+
+    import aplpy
+    fig = aplpy.FITSFigure("sloshing_image.fits", figsize=(10,10))
+    fig.show_colorscale(stretch="log", vmin=0.1, cmap="gray_r")
+    fig.set_axis_labels_font(family="serif", size=16)
+    fig.set_tick_labels_font(family="serif", size=16)
+
+.. image:: _images/Photon_Simulator_30_4.png
+
+This is starting to look like a real observation!
+
+We can also bin up the spectrum into energy bins, and write it to a FITS
+table file. This is an example where we've binned up the spectrum
+according to the unconvolved photon energy:
+
+.. code:: python
+
+    events.write_spectrum("virgo_spec.fits", energy_bins=True, emin=0.1, emax=10.0, nchan=2000, clobber=True)
+
+If we don't set ``energy_bins=True``, and we have convolved our events
+with response files, then any other keywords will be ignored and it will
+try to make a spectrum from the channel information that is contained
+within the RMF, suitable for analyzing in XSPEC. For now, we'll stick
+with the energy spectrum, and plot it up:
+
+.. code:: python
+
+    import astropy.io.fits as pyfits
+    f = pyfits.open("virgo_spec.fits")
+    pylab.loglog(f["SPECTRUM"].data.field("ENERGY"), f["SPECTRUM"].data.field("COUNTS"))
+    pylab.xlim(0.3, 10)
+    pylab.xlabel("E (keV)")
+    pylab.ylabel("counts/bin")
+
+.. image:: _images/Photon_Simulator_34_1.png
+
+
+We can also write the events to a FITS file that is of a format that can
+be manipulated by software packages like
+`CIAO <http://cxc.harvard.edu/ciao/>`_ and read in by ds9 to do more
+standard X-ray analysis:
+
+.. code:: python
+
+    events.write_fits_file("my_events.fits", clobber=True)
+
+**WARNING**: We've done some very low-level testing of this feature, and
+it seems to work, but it may not be consistent with standard FITS events
+files in subtle ways that we haven't been able to identify. Please email
+jzuhone at gmail.com if you find any bugs!
+
+Two ``EventList`` instances can be joined together like this:
+
+.. code:: python
+
+    events3 = EventList.join_events(events1, events2)
+
+**WARNING**: This doesn't check for parameter consistency between the
+two lists!
+
+Creating an X-ray observation from an in-memory dataset
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+It may be useful, especially for observational applications, to create
+datasets in-memory and then create simulated observations from
+them. Here is a relevant example of creating a toy cluster and evacuating two AGN-blown bubbles in it. 
+
+First, we create the in-memory dataset (see :ref:`loading-numpy-array`
+for details on how to do this):
+
+.. code:: python
+
+   from yt.mods import *
+   from yt.utilities.physical_constants import cm_per_kpc, K_per_keV, mp
+   from yt.utilities.cosmology import Cosmology
+   from yt.analysis_modules.api import *
+   import aplpy
+
+   R = 1000. # in kpc
+   r_c = 100. # in kpc
+   rho_c = 1.673e-26 # in g/cm^3
+   beta = 1. 
+   T = 4. # in keV
+   nx = 256 
+
+   bub_rad = 30.0
+   bub_dist = 50.0
+
+   ddims = (nx,nx,nx)
+
+   x, y, z = np.mgrid[-R:R:nx*1j,
+                      -R:R:nx*1j,
+                      -R:R:nx*1j]
+ 
+   r = np.sqrt(x**2+y**2+z**2)
+
+   dens = np.zeros(ddims)
+   dens[r <= R] = rho_c*(1.+(r[r <= R]/r_c)**2)**(-1.5*beta)
+   dens[r > R] = 0.0
+   temp = T*K_per_keV*np.ones(ddims)
+   rbub1 = np.sqrt(x**2+(y-bub_dist)**2+z**2)
+   rbub2 = np.sqrt(x**2+(y+bub_dist)**2+z**2)
+   dens[rbub1 <= bub_rad] /= 100.
+   dens[rbub2 <= bub_rad] /= 100.
+   temp[rbub1 <= bub_rad] *= 100.
+   temp[rbub2 <= bub_rad] *= 100.
+
+This created a cluster with a radius of 1 Mpc, a uniform temperature
+of 4 keV, and a density distribution from a :math:`\beta`-model. We then
+evacuated two "bubbles" of radius 30 kpc at a distance of 50 kpc from
+the center. 
+
+Now, we create a parameter file out of this dataset:
+
+.. code:: python
+
+   data = {}
+   data["Density"] = dens
+   data["Temperature"] = temp
+   data["x-velocity"] = np.zeros(ddims)
+   data["y-velocity"] = np.zeros(ddims)
+   data["z-velocity"] = np.zeros(ddims)
+
+   bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]])
+
+   pf = load_uniform_grid(data, ddims, 2*R*cm_per_kpc, bbox=bbox)
+
+where for simplicity we have set the velocities to zero, though we
+could have created a realistic velocity field as well. Now, we
+generate the photon and event lists in the same way as the previous
+example:
+
+.. code:: python
+
+   sphere = pf.h.sphere(pf.domain_center, 1.0/pf["mpc"])
+       
+   A = 6000.
+   exp_time = 2.0e5
+   redshift = 0.05
+   cosmo = Cosmology()
+
+   apec_model = TableApecModel("/Users/jzuhone/Data/atomdb_v2.0.2",
+                               0.01, 20.0, 20000)
+   abs_model = TableAbsorbModel("tbabs_table.h5", 0.1)
+
+   thermal_model = ThermalPhotonModel(apec_model)
+   photons = PhotonList.from_scratch(sphere, redshift, A,
+                                     exp_time, thermal_model, center="c")
+
+
+   events = photons.project_photons([0.0,0.0,1.0], 
+                                    responses=["sim_arf.fits","sim_rmf.fits"], 
+                                    absorb_model=abs_model)
+
+   events.write_fits_image("img.fits", clobber=True)
+
+which yields the following image:
+
+.. code:: python
+
+   fig = aplpy.FITSFigure("img.fits", figsize=(10,10))
+   fig.show_colorscale(stretch="log", vmin=0.1, vmax=600., cmap="jet")
+   fig.set_axis_labels_font(family="serif", size=16)
+   fig.set_tick_labels_font(family="serif", size=16)
+
+.. image:: _images/bubbles.png
+   :width: 80 %
+
+Storing events for future use and for reading in by telescope simulators
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+If you want a more accurate representation of an observation taken by a
+particular instrument, there are tools available for such purposes. For
+the *Chandra* telescope, there is the venerable
+`MARX <http://space.mit.edu/ASC/MARX/>`_. For a wide range of
+instruments, both existing and future, there is
+`SIMX <http://hea-www.harvard.edu/simx/>`_. We'll discuss two ways
+to store your event files so that they can be read in by these and other
+codes.
+
+The first option is the most general, and the simplest: simply dump the
+event data to an HDF5 file:
+
+.. code:: python
+
+   events.write_h5_file("my_events.h5")
+
+This will dump the raw event data, as well as the associated parameters,
+into the file. If you want to read these events back in, it's just as
+simple:
+
+.. code:: python
+
+   events = EventList.from_h5_file("my_events.h5")
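+
+If you'd like to peek at what actually got stored, the file can be opened
+directly with ``h5py``. This is just a sketch; the specific dataset and
+attribute names are determined by the writer and are not spelled out here:
+
+.. code:: python
+
+   import h5py
+
+   # Open the file read-only and list whatever it contains; the layout
+   # depends on how EventList.write_h5_file organizes the data.
+   with h5py.File("my_events.h5", "r") as f:
+       print(f.keys())         # top-level datasets
+       print(f.attrs.items())  # parameters, if stored as attributes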
+
+You can use event data written to HDF5 files to input events into MARX
+using `this code <http://bitbucket.org/jzuhone/yt_marx_source>`_.
+
+The second option, for use with SIMX, is to dump the events into a
+SIMPUT file:
+
+.. code:: python
+
+   events.write_simput_file("my_events", clobber=True, emin=0.1, emax=10.0)
+
+which will write two files, ``"my_events_phlist.fits"`` and
+``"my_events_simput.fits"``, the former being a auxiliary file for the
+latter. **NOTE**: You can only write SIMPUT files if you didn't convolve
+the photons with responses, since the idea is to pass unconvolved
+photons to the telescope simulator.
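+
+In other words, for SIMPUT output you should project the photons *without*
+the ``responses`` keyword and let the telescope simulator apply the
+instrumental response itself. A minimal sketch, reusing the objects from the
+example above (``events_unconv`` is just an illustrative name):
+
+.. code:: python
+
+   # Project the photons without instrumental responses so the events stay
+   # unconvolved, then dump them to SIMPUT for SIMX or similar tools.
+   events_unconv = photons.project_photons([0.0,0.0,1.0], absorb_model=abs_model)
+   events_unconv.write_simput_file("my_events", clobber=True, emin=0.1, emax=10.0)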
+
+The following images were made from the same yt-generated events in both MARX and
+SIMX. They are 200 ks observations of the two example clusters from above
+(the Chandra images have been reblocked by a factor of 4):
+
+.. image:: _images/ds9_sloshing.png
+
+.. image:: _images/ds9_bubbles.png
+
+

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/running_halofinder.rst
--- a/source/analyzing/analysis_modules/running_halofinder.rst
+++ b/source/analyzing/analysis_modules/running_halofinder.rst
@@ -250,7 +250,7 @@
 **Parallel HOP** (not to be confused with HOP running in parallel as described
 above) is a wholly-new halo finder based on the HOP method.
 For extensive details and benchmarks of Parallel HOP, please see the
-pre-print version of the `method paper <http://arxiv.org/abs/1001.3411>`_ at
+pre-print version of the `method paper <http://adsabs.harvard.edu/abs/2010ApJS..191...43S>`_ at
 arXiv.org.
 While the method
 of parallelization described above can be quite effective, it has its limits.

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/simulated_observations.rst
--- a/source/analyzing/analysis_modules/simulated_observations.rst
+++ /dev/null
@@ -1,66 +0,0 @@
-.. _simulated_observations:
-
-Generating Simulated Observations
-=================================
-
-yt has several facilities for generating simulated observations.  Each of these
-comes with several caveats, and none should be expected to produce a completely
-finished product.  You should investigate each option carefully and determine
-which, if any, will deliver the type of observation you are interested in.
-
-
-
-X-ray Observations
-++++++++++++++++++
-
-Under the assumption of optically thin gas, projections can be made
-using emissivity to generated simulated observations.  yt includes a
-method for handling output from CLOUDY in the ROCO (Smith et al 2008)
-format, and generating integrated emissivity over given energy ranges.
-
-Caveats: The ROCO format for input requires some non-trivial handling
-of CLOUDY output.
-
-= SED Generation and Deposition =
-
-Using BC03 models for stellar population synthesis, star particles in
-a given calculation can be assigned an integrated flux for a specific
-bandpass.  These fluxes can then be combined using either projections
-or volume rendering.  This can use CIC interpolation to deposit a
-total flux into each cell (which should be flux-conserving, modulo a
-multiplicative factor not currently included) which is then either
-projected or volume rendered.
-
-Caveats: The deposition method produces far too washed out and murky
-results.  The multiplicative factor is not currently set correctly
-universally.
-
-= Thermal Gas Emission =
-
-Applying a black body spectrum to the thermal content of the gas, we
-can volume render the domain and apply absorption based on broad
-arguments of scattering.  One could theoretically include star
-particles as point sources in this, using recent changes to the volume
-renderer.
-
-Caveats: Scattering that results in re-emission is completely
-neglected, such as Halpha emission.  Scattering that results in just
-attenuating the emission is set in an ad hoc fashion.  Emission from
-point sources, if included at all, is included in a non-conservative
-fashion.
-
-= Export to Sunrise =
-
-Data can be exported to Sunrise for simulated observation generation.
-
-Caveats: This process is poorly documented.
-
-= SZ Compton y and SZ Kinetic Maps =
-
-Future Directions
------------------
-
-* ALMA maps
-* 21cm observations
-* Applying PSFs
-* 

diff -r ec78228fba3f268306c858dadf9c0eb86cad3e9a -r bf3d20ad1bf2b175450280c6dbf70bd017d57e2a source/analyzing/analysis_modules/sunyaev_zeldovich.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/sunyaev_zeldovich.rst
@@ -0,0 +1,4 @@
+Mock Observations of the Sunyaev-Zeldovich Effect
+-------------------------------------------------
+
+.. notebook:: SZ_projections.ipynb

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
