[yt-svn] commit/yt: 3 new changesets

commits-noreply at bitbucket.org
Sun Mar 23 13:19:34 PDT 2014


3 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/553ffed0c7fa/
Changeset:   553ffed0c7fa
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-03-23 17:16:30
Summary:     Moving the new loading data docs into a toctree.
Affected #:  4 files

diff -r fbeae9ead81703c4436f6fb4f7331c2cd849d3ac -r 553ffed0c7fadee5affaef08bd85b4f2bcf93e07 doc/source/examining/index.rst
--- a/doc/source/examining/index.rst
+++ b/doc/source/examining/index.rst
@@ -6,6 +6,6 @@
 .. toctree::
    :maxdepth: 2
 
-   supported_frontends_data
+   loading_data
    generic_array_data
    low_level_inspection

diff -r fbeae9ead81703c4436f6fb4f7331c2cd849d3ac -r 553ffed0c7fadee5affaef08bd85b4f2bcf93e07 doc/source/examining/loading_data.rst
--- /dev/null
+++ b/doc/source/examining/loading_data.rst
@@ -0,0 +1,601 @@
+.. _loading-data:
+
+Loading Data
+============
+
+This section contains information on how to load data into ``yt``, as well as
+some important caveats about different data formats.
+
+.. _loading-enzo-data:
+
+Enzo Data
+---------
+
+Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
+dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
+it the parameter file name.  This would be the name of the output file, and it
+contains no extension.  For instance, if you have the following files:
+
+.. code-block:: none
+
+   DD0010/
+   DD0010/data0010
+   DD0010/data0010.index
+   DD0010/data0010.cpu0000
+   DD0010/data0010.cpu0001
+   DD0010/data0010.cpu0002
+   DD0010/data0010.cpu0003
+
+You would feed the ``load`` command the filename ``DD0010/data0010`` as
+mentioned.
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("DD0010/data0010")
+
+.. rubric:: Caveats
+
+* There are no major caveats for Enzo usage.
+* Units should be correct, if you utilize standard unit-setting routines.  yt
+  will notify you if it cannot determine the units, although this
+  notification will be passive.
+* 2D and 1D data are supported, but the extraneous dimensions are set to be
+  of length 1.0 in "code length" which may produce strange results for volume
+  quantities.
+
+.. _loading-orion-data:
+
+Boxlib Data
+-----------
+
+yt has been tested with Boxlib data generated by Orion, Nyx, Maestro and
+Castro.  Currently it is cared for by a combination of Andrew Myers, Chris
+Malone, and Matthew Turk.
+
+To load a Boxlib dataset, you can use the ``load`` command provided by
+``yt.mods`` and supply to it the directory name.  **You must also have the
+``inputs`` file in the base directory.**  For instance, if you were in a
+directory with the following files:
+
+.. code-block:: none
+
+   inputs
+   pltgmlcs5600/
+   pltgmlcs5600/Header
+   pltgmlcs5600/Level_0
+   pltgmlcs5600/Level_0/Cell_H
+   pltgmlcs5600/Level_1
+   pltgmlcs5600/Level_1/Cell_H
+   pltgmlcs5600/Level_2
+   pltgmlcs5600/Level_2/Cell_H
+   pltgmlcs5600/Level_3
+   pltgmlcs5600/Level_3/Cell_H
+   pltgmlcs5600/Level_4
+   pltgmlcs5600/Level_4/Cell_H
+
+You would feed it the filename ``pltgmlcs5600``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("pltgmlcs5600")
+
+.. _loading-flash-data:
+
+FLASH Data
+----------
+
+FLASH HDF5 data is *mostly* supported and cared for by John ZuHone.  To load a
+FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
+supply to it the file name of a plot file or checkpoint file.  Particle files
+are not currently directly loadable by themselves because they typically lack
+grid information.  For instance, if you were in a directory with the following
+files:
+
+.. code-block:: none
+
+   cosmoSim_coolhdf5_chk_0026
+
+You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("cosmoSim_coolhdf5_chk_0026")
+
+If you have a FLASH particle file that was created at the same time as
+a plotfile or checkpoint file (therefore having particle data
+consistent with the grid structure of the latter), its data may be loaded with the
+``particle_filename`` optional argument:
+
+.. code-block:: python
+
+    from yt.mods import *
+    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+
+.. rubric:: Caveats
+
+* Please be careful that the units are correctly utilized; yt assumes cgs.
+
+.. _loading-ramses-data:
+
+RAMSES Data
+-----------
+
+In yt-3.0, RAMSES data is fully supported.  If you are interested in taking a
+development or stewardship role, please contact the yt-dev mailing list.  To
+load a RAMSES dataset, you can use the ``load`` command provided by ``yt.mods``
+and supply to it the ``info*.txt`` filename.  For instance, if you were in a
+directory with the following files:
+
+.. code-block:: none
+
+   output_00007
+   output_00007/amr_00007.out00001
+   output_00007/grav_00007.out00001
+   output_00007/hydro_00007.out00001
+   output_00007/info_00007.txt
+   output_00007/part_00007.out00001
+
+You would feed it the filename ``output_00007/info_00007.txt``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("output_00007/info_00007.txt")
+
+yt will attempt to guess the fields in the file.  You may also specify a list
+of fields by supplying the ``fields`` keyword in your call to ``load``.
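+
+As a sketch, you might supply a guessed field list like this (the field names
+below are illustrative, not authoritative for your output):
+
+.. code-block:: python
+
+   # Illustrative field names; match these to your RAMSES output.
+   fields = ["Density", "x-velocity", "y-velocity", "z-velocity", "Pressure"]
+   pf = load("output_00007/info_00007.txt", fields=fields)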
+
+.. _loading-gadget-data:
+
+Gadget Data
+-----------
+
+yt has support for reading Gadget data in both raw binary and HDF5 formats.  It
+is able to access the particles as it would any other particle dataset, and it
+can apply smoothing kernels to the data to produce both quantitative analysis
+and visualization.
+
+Gadget data in HDF5 format can be loaded with the ``load`` command:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("snapshot_061.hdf5")
+
+However, yt cannot detect raw-binary Gadget data, and so you must specify the
+format as being Gadget:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = GadgetDataset("snapshot_061")
+
+.. _particle-bbox:
+
+Units and Bounding Boxes
+++++++++++++++++++++++++
+
+There are two additional pieces of information that may be needed.  If your
+simulation is cosmological, yt can often guess the bounding box and the units
+of the simulation.  However, for isolated simulations and for cosmological
+simulations with non-standard units, these must be supplied.  For example, if
+a length unit of 1.0 corresponds to a kiloparsec, you can supply this in the
+constructor.  yt can accept units such as ``Mpc``, ``kpc``, ``cm``, ``Mpccm/h``
+and so on.  In particular, note that ``Mpc/h`` and ``Mpccm/h`` (``cm`` for
+comoving here) are usable unit definitions.
+
+yt will attempt to use units for ``mass``, ``length`` and ``time`` as supplied
+in the argument ``unit_base``.  The ``bounding_box`` argument is a list of
+two-item tuples or lists that describe the left and right extents of the
+particles.
+
+.. code-block:: python
+
+   pf = GadgetDataset("snap_004",
+           unit_base = {'length': ('kpc', 1.0)},
+           bounding_box = [[-600.0, 600.0], [-600.0, 600.0], [-600.0, 600.0]])
+
+.. _particle-indexing-criteria:
+
+Indexing Criteria
++++++++++++++++++
+
+yt generates a global mesh index via an octree that governs the resolution of
+volume elements.  This is governed by two parameters, ``n_ref`` and
+``over_refine_factor``.  They are weak proxies for each other.  The first,
+``n_ref``, governs how many particles in an oct result in that oct being
+refined into eight child octs.  Lower values mean higher resolution; the
+default is 64.  The second parameter, ``over_refine_factor``, governs how many
+cells are in a given oct; the default value of 1 corresponds to 8 cells.
+The number of cells in an oct is defined by the expression
+``2**(3*over_refine_factor)``, so a value of 2 corresponds to 64 cells.
+
+If you want higher resolution, try reducing the value of ``n_ref`` to 32 or
+16.
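+
+As a minimal sketch (reusing the binary snapshot name from above), both
+parameters can be passed directly to the dataset constructor:
+
+.. code-block:: python
+
+   # Halve n_ref for a finer octree index, as described above.
+   pf = GadgetDataset("snapshot_061", n_ref=32, over_refine_factor=1)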
+
+.. _gadget-field-spec:
+
+Field Specifications
+++++++++++++++++++++
+
+Binary Gadget outputs often have additional fields or particle types that are
+not part of the default Gadget distribution format.  These can be
+specified in the call to ``GadgetDataset`` by either supplying one of the
+sets of field specifications as a string or by supplying a field specification
+itself.  As an example, yt has built-in definitions for ``default`` (the
+default) and ``agora_unlv``.  Field specifications must be tuples, and must be
+of this format:
+
+.. code-block:: python
+
+   default = ( "Coordinates",
+               "Velocities",
+               "ParticleIDs",
+               "Mass",
+               ("InternalEnergy", "Gas"),
+               ("Density", "Gas"),
+               ("SmoothingLength", "Gas"),
+   )
+
+This is the default specification used by the Gadget frontend.  It means that
+the fields are, in order, Coordinates, Velocities, ParticleIDs, Mass, and the
+fields InternalEnergy, Density and SmoothingLength *only* for Gas particles.
+So for example, if you have defined a Metallicity field for the particle type
+Halo, which comes right after ParticleIDs in the file, you could define it like
+this:
+
+.. code-block:: python
+
+   my_field_def = ( "Coordinates",
+                    "Velocities",
+                    "ParticleIDs",
+                    ("Metallicity", "Halo"),
+                    "Mass",
+                    ("InternalEnergy", "Gas"),
+                    ("Density", "Gas"),
+                    ("SmoothingLength", "Gas"),
+   )
+
+To save time, you can utilize the plugins file for yt and use it to add items
+to the dictionary where these definitions are stored.  You could do this like
+so:
+
+.. code-block:: python
+
+   from yt.frontends.sph.definitions import gadget_field_specs
+   gadget_field_specs["my_field_def"] = my_field_def
+
+Please also feel free to issue a pull request with any new field
+specifications, as we're happy to include them in the main distribution!
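+
+Once registered, the new specification can be selected by name; a sketch,
+assuming the constructor accepts the specification string via the
+``field_spec`` keyword:
+
+.. code-block:: python
+
+   # "my_field_def" was registered in gadget_field_specs above.
+   pf = GadgetDataset("snapshot_061", field_spec="my_field_def")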
+
+.. _gadget-ptype-spec:
+
+Particle Type Definitions
++++++++++++++++++++++++++
+
+In some cases, research groups add new particle types or re-order them.  You
+can supply alternate particle types by using the keyword ``ptype_spec`` to the
+``GadgetDataset`` call.  The default for Gadget binary data is:
+
+.. code-block:: python
+
+    ( "Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry" )
+
+You can specify alternate names, but note that this may cause problems with the
+field specification if none of the names match old names.
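+
+As an illustrative sketch (the reordered names below are hypothetical):
+
+.. code-block:: python
+
+   # Hypothetical re-ordering of the default particle types.
+   my_ptypes = ("Gas", "Stars", "Halo", "Disk", "Bulge", "Bndry")
+   pf = GadgetDataset("snapshot_061", ptype_spec=my_ptypes)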
+
+.. _gadget-header-spec:
+
+Header Specification
+++++++++++++++++++++
+
+If you have modified the header in your Gadget binary file, you can specify an
+alternate header specification with the keyword ``header_spec``.  This can
+either be a list of strings corresponding to individual header types known to
+yt, or it can be a combination of strings and header specifications.  The
+default header specification (found in ``yt/frontends/sph/definitions.py``) is:
+
+.. code-block:: python
+   
+    default      = (('Npart', 6, 'i'),
+                    ('Massarr', 6, 'd'),
+                    ('Time', 1, 'd'),
+                    ('Redshift', 1, 'd'),
+                    ('FlagSfr', 1, 'i'),
+                    ('FlagFeedback', 1, 'i'),
+                    ('Nall', 6, 'i'),
+                    ('FlagCooling', 1, 'i'),
+                    ('NumFiles', 1, 'i'),
+                    ('BoxSize', 1, 'd'),
+                    ('Omega0', 1, 'd'),
+                    ('OmegaLambda', 1, 'd'),
+                    ('HubbleParam', 1, 'd'),
+                    ('FlagAge', 1, 'i'),
+                    ('FlagMetals', 1, 'i'),
+                    ('NallHW', 6, 'i'),
+                    ('unused', 16, 'i'))
+
+These items will all be accessible inside the object ``pf.parameters``, which
+is a dictionary.  You can add combinations of new items, specified in the same
+way, or alternately other types of headers.  The other string keys defined are
+``pad32``, ``pad64``, ``pad128``, and ``pad256``, each of which corresponds to
+that many bytes of empty padding.  For example, if you have an additional 256
+bytes of padding at the end, you can specify this with:
+
+.. code-block:: python
+
+   header_spec = ["default", "pad256"]
+
+This can then be supplied to the constructor.  Note that you can also do this
+manually, for instance with:
+
+.. code-block:: python
+
+   header_spec = ["default", (('some_value', 8, 'd'),
+                              ('another_value', 1, 'i'))]
+
+The letters correspond to data types from the Python struct module.  Please
+feel free to submit alternate header types to the main yt repository.
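+
+A sketch of supplying it, reusing the ``header_spec`` list defined above:
+
+.. code-block:: python
+
+   # header_spec is the list built above, e.g. ["default", "pad256"].
+   pf = GadgetDataset("snapshot_061", header_spec=header_spec)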
+
+.. _specifying-gadget-units:
+
+Specifying Units
+++++++++++++++++
+
+If you are running a cosmology simulation, yt will be able to guess the units
+with some reliability.  However, if you are not and you do not specify a
+parameter file, yt will not be able to guess them and will use the defaults of
+length being 1.0 Mpc/h (comoving), velocity being in cm/s, and mass being in
+10^10 Msun/h.  You can specify alternate units by supplying the ``unit_base``
+keyword argument of this form:
+
+.. code-block:: python
+
+   unit_base = {'length': (1.0, 'cm'), 'mass': (1.0, 'g'), 'time': (1.0, 's')}
+
+yt will utilize length, mass and time to set up all other units.
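+
+As with the other keywords, ``unit_base`` is passed to the constructor; a
+minimal sketch:
+
+.. code-block:: python
+
+   # unit_base is the dictionary defined above.
+   pf = GadgetDataset("snapshot_061", unit_base=unit_base)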
+
+.. _loading-tipsy-data:
+
+Tipsy Data
+----------
+
+yt also supports loading Tipsy data.  Many of its characteristics are similar
+to how Gadget data is loaded; specifically, it shares its definition of
+indexing and mesh-identification with that described in
+:ref:`particle-indexing-criteria`.  However, unlike Gadget, the Tipsy frontend
+has not yet implemented header specifications, field specifications, or
+particle type specifications.  *These are all excellent projects for new
+contributors!*
+
+Tipsy data cannot be automatically detected.  You can load it with a command
+similar to the following:
+
+.. code-block:: python
+
+    ds = TipsyDataset('test.00169',
+        parameter_file='test.param',
+        endian = '<',
+        domain_left_edge = domain_left_edge,
+        domain_right_edge = domain_right_edge,
+    )
+
+Not all of these arguments are necessary; additionally, yt accepts the
+arguments ``n_ref``, ``over_refine_factor``, ``cosmology_parameters``, and
+``unit_base``.  By default, yt will not utilize a parameter file, and by
+default it will assume the data is "big" endian (``>``).  Optionally, you may
+specify ``field_dtypes``, which describe the size of various fields.  For
+example, if you have stored positions as 64-bit floats, you can specify this
+with:
+
+.. code-block:: python
+
+    ds = TipsyDataset("./halo1e11_run1.00400", endian="<",
+                           field_dtypes = {"Coordinates": "d"})
+
+.. _specifying-cosmology-tipsy:
+
+Specifying Tipsy Cosmological Parameters
+++++++++++++++++++++++++++++++++++++++++
+
+Cosmological parameters can be specified to Tipsy to enable computation of
+default units.  The parameters recognized are of this form:
+
+.. code-block:: python
+
+   cosmology_parameters = {'current_redshift': 0.0,
+                           'omega_lambda': 0.728,
+                           'omega_matter': 0.272,
+                           'hubble_constant': 0.702}
+
+These will be used to set the units, if they are specified.
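+
+A sketch of supplying them, reusing the dictionary above with the Tipsy
+snapshot from the earlier example:
+
+.. code-block:: python
+
+    # cosmology_parameters is the dictionary defined above.
+    ds = TipsyDataset('test.00169', parameter_file='test.param',
+                      cosmology_parameters=cosmology_parameters)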
+
+.. _loading-artio-data:
+
+ARTIO Data
+----------
+
+ARTIO data has a well-specified internal parameter system and has few free
+parameters.  However, for optimization purposes, the parameter that provides
+the most guidance to yt as to how to manage ARTIO data is ``max_range``.  This
+governs the maximum number of space-filling curve cells that will be used in a
+single "chunk" of data read from disk.  For small datasets, setting this number
+very large will enable more data to be loaded into memory at any given time;
+for very large datasets, this parameter can be left alone safely.  By default
+it is set to 1024; it can in principle be set as high as the total number of
+SFC cells.
+
+To load ARTIO data, you can specify a command such as this:
+
+.. code-block:: python
+
+    ds = load("./A11QR1/s11Qzm1h2_a1.0000.art")
+
+.. _loading-art-data:
+
+ART Data
+--------
+
+ART data enjoys preliminary support and has been supported in the past by
+Christopher Moody.  Please contact the ``yt-dev`` mailing list if you are
+interested in using yt for ART data, or if you are interested in assisting with
+development of yt to work with ART data.
+
+To load an ART dataset you can use the ``load`` command provided by
+``yt.mods`` and pass it the gas mesh file.  yt will then attempt to find the
+complementary dark matter and stellar particle header and data files.
+However, your simulations may not follow the same naming convention.
+
+So for example, a single snapshot might have a series of files looking like
+this:
+
+.. code-block:: none
+
+   10MpcBox_csf512_a0.300.d    #Gas mesh
+   PMcrda0.300.DAT             #Particle header
+   PMcrs0a0.300.DAT            #Particle data (positions,velocities)
+   stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
+
+The ART frontend tries to find the associated files matching the above, but
+if that fails you can specify ``file_particle_header``, ``file_particle_data``,
+and ``file_star_data`` in addition to specifying the gas mesh.  You also have
+the option of gridding particles and assigning them onto the meshes.
+This process is in beta, and for the time being it's probably best to leave
+``do_grid_particles=False`` as the default.
+
+To speed up the loading of an ART file, you have a few options.  You can turn
+off the particles entirely by setting ``discover_particles=False``.  You can
+also only grid octs up to a certain level, ``limit_level=5``, which is useful
+for debugging, since it artificially creates a 'smaller' dataset to work with.
+
+Finally, when stellar ages are computed we 'spread' the ages evenly within a
+smoothing window.  By default this is turned on and set to 10 Myr.  To turn
+this off you can set ``spread=False``, and you can tweak the age smoothing
+window by specifying the window in seconds, ``spread=1.0e7*365*24*3600``.
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
+
+.. _loading-moab-data:
+
+MOAB Data
+---------
+
+.. _loading-pyne-data:
+
+PyNE Data
+---------
+
+.. _loading-numpy-array:
+
+Generic Array Data
+------------------
+
+Even if your data is not strictly related to fields commonly used in
+astrophysical codes or your code is not supported yet, you can still feed it to
+``yt`` to use its advanced visualization and analysis facilities. The only
+requirement is that your data can be represented as one or more uniform,
+three-dimensional NumPy arrays.  Assuming that you have your data in ``arr``,
+the following code:
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+will create a ``yt``-native parameter file ``pf`` that will treat your array
+as a density field in a cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and
+simultaneously divide the domain into 12 chunks, so that you can take
+advantage of the underlying parallelism.
+
+Particle fields are detected as one-dimensional fields. The number of
+particles is set by the ``number_of_particles`` key in
+``data``. Particle fields are then added as one-dimensional arrays in
+a similar manner as the three-dimensional grid fields:
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = dens,
+               number_of_particles = 1000000,
+               particle_position_x = posx_arr,
+               particle_position_y = posy_arr,
+               particle_position_z = posz_arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, dens.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+where in this example the particle position fields have been assigned.
+``number_of_particles`` must match the length of the particle arrays.  If no
+particle arrays are supplied then ``number_of_particles`` is assumed to be
+zero.
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Particles may be difficult to integrate.
+* Data must already reside in memory.
+
+.. _loading-amr-data:
+
+Generic AMR Data
+----------------
+
+It is possible to create a native ``yt`` parameter file from a list of Python
+dictionaries, each of which describes a rectangular patch of data, with
+possibly varying resolution.
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_amr_grids
+
+   grid_data = [
+       dict(left_edge = [0.0, 0.0, 0.0],
+            right_edge = [1.0, 1.0, 1.0],
+            level = 0,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0),
+       dict(left_edge = [0.25, 0.25, 0.25],
+            right_edge = [0.75, 0.75, 0.75],
+            level = 1,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0)
+   ]
+
+   for g in grid_data:
+       g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
+
+   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+
+Particle fields are supported by adding 1-dimensional arrays and
+setting the ``number_of_particles`` key in each grid's dict:
+
+.. code-block:: python
+
+    for g in grid_data:
+        g["number_of_particles"] = 100000
+        g["particle_position_x"] = np.random.random((g["number_of_particles"]))
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Some functions may behave oddly, and parallelism will be disappointing or
+  non-existent in most cases.
+* No consistency checks are performed on the index.
+* Data must already reside in memory.
+* Consistency between particle positions and grids is not checked;
+  ``load_amr_grids`` assumes that particle positions associated with one grid are
+  not bounded within another grid at a higher level, so this must be
+  ensured by the user prior to loading the grid data. 
+
+Generic Particle Data
+---------------------
+

diff -r fbeae9ead81703c4436f6fb4f7331c2cd849d3ac -r 553ffed0c7fadee5affaef08bd85b4f2bcf93e07 doc/source/examining/supported_frontends_data.rst
--- a/doc/source/examining/supported_frontends_data.rst
+++ /dev/null
@@ -1,186 +0,0 @@
-.. _loading-data-from-supported-codes:
-
-Loading Data from Supported Codes
-=================================
-
-This section contains information on how to load data into ``yt`` from
-supported codes, as well as some important caveats about different
-data formats.
-
-.. _loading-enzo-data:
-
-Enzo Data
----------
-
-Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
-dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
-it the parameter file name.  This would be the name of the output file, and it
-contains no extension.  For instance, if you have the following files:
-
-.. code-block:: none
-
-   DD0010/
-   DD0010/data0010
-   DD0010/data0010.index
-   DD0010/data0010.cpu0000
-   DD0010/data0010.cpu0001
-   DD0010/data0010.cpu0002
-   DD0010/data0010.cpu0003
-
-You would feed the ``load`` command the filename ``DD0010/data0010`` as
-mentioned.
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("DD0010/data0010")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Enzo usage
-* Units should be correct, if you utilize standard unit-setting routines.  yt
-  will notify you if it cannot determine the units, although this
-  notification will be passive.
-* 2D and 1D data are supported, but the extraneous dimensions are set to be
-  of length 1.0
-
-.. _loading-orion-data:
-
-Orion Data
-----------
-
-Orion data is fully supported. To load an Orion dataset, you can use the
-``load`` command provided by ``yt.mods`` and supply to it the directory file
-name.  **You must also have the ``inputs`` file in the base directory.** For
-instance, if you were in a directory with the following files:
-
-.. code-block:: none
-
-   inputs
-   pltgmlcs5600/
-   pltgmlcs5600/Header
-   pltgmlcs5600/Level_0
-   pltgmlcs5600/Level_0/Cell_H
-   pltgmlcs5600/Level_1
-   pltgmlcs5600/Level_1/Cell_H
-   pltgmlcs5600/Level_2
-   pltgmlcs5600/Level_2/Cell_H
-   pltgmlcs5600/Level_3
-   pltgmlcs5600/Level_3/Cell_H
-   pltgmlcs5600/Level_4
-   pltgmlcs5600/Level_4/Cell_H
-
-You would feed it the filename ``pltgmlcs5600``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("pltgmlcs5600")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Orion usage
-* Star particles are not supported at the current time
-
-.. _loading-flash-data:
-
-FLASH Data
-----------
-
-FLASH HDF5 data is fully supported and cared for by John ZuHone.  To load a
-FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
-supply to it the file name of a plot file or checkpoint file.  Particle
-files are not currently directly loadable by themselves, due to the
-fact that they typically lack grid information. For instance, if you were in a directory with
-the following files:
-
-.. code-block:: none
-
-   cosmoSim_coolhdf5_chk_0026
-
-You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("cosmoSim_coolhdf5_chk_0026")
-
-If you have a FLASH particle file that was created at the same time as
-a plotfile or checkpoint file (therefore having particle data
-consistent with the grid structure of the latter), its data may be loaded with the
-``particle_filename`` optional argument:
-
-.. code-block:: python
-
-    from yt.mods import *
-    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
-
-.. rubric:: Caveats
-
-* Please be careful that the units are correctly utilized; yt assumes cgs
-* Velocities and length units will be scaled to comoving coordinates if yt is
-  able to discern you are examining a cosmology simulation; particle and grid
-  positions will not be.
-* Domains may be visualized assuming periodicity.
-
-Athena Data
------------
-
-Athena 4.x VTK data is *mostly* supported and cared for by John
-ZuHone. Both uniform grid and SMR datasets are supported. 
-
-Loading Athena datasets is slightly different depending on whether
-your dataset came from a serial or a parallel run. If the data came
-from a serial run or you have joined the VTK files together using the
-Athena tool ``join_vtk``, you can load the data like this:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("kh.0010.vtk")
-
-The filename corresponds to the file on SMR level 0, whereas if there
-are multiple levels the corresponding files will be picked up
-automatically, assuming they are laid out in ``lev*`` subdirectories
-under the directory where the base file is located.
-
-For parallel datasets, yt assumes that they are laid out in
-directories named ``id*``, one for each processor number, each with
-``lev*`` subdirectories for additional refinement levels. To load this
-data, call ``load`` with the base file in the ``id0`` directory:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("id0/kh.0010.vtk")
-
-which will pick up all of the files in the different ``id*`` directories for
-the entire dataset. 
-
-yt works in cgs ("Gaussian") units, but Athena data is not
-normally stored in these units. If you would like to convert data to
-cgs units, you may supply conversions for length, time, and density to ``load``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("id0/cluster_merger.0250.vtk", 
-          parameters={"LengthUnits":3.0856e24,
-                               "TimeUnits":3.1557e13,"DensityUnits":1.67e-24)
-
-This means that the yt fields (e.g. ``Density``, ``x-velocity``,
-``Bx``) will be in cgs units, but the Athena fields (e.g.,
-``density``, ``velocity_x``, ``cell_centered_B_x``) will be in code
-units. 
-
-.. rubric:: Caveats
-
-* yt primarily works with primitive variables. If the Athena
-  dataset contains conservative variables, the yt primitive fields will be generated from the
-  conserved variables on disk. 
-* Domains may be visualized assuming periodicity.
-* Particle list data is currently unsupported.
-* In some parallel Athena datasets, it is possible for a grid from one
-  refinement level to overlap with more than one grid on the parent
-  level. This may result in unpredictable behavior for some analysis
-  or visualization tasks. 

diff -r fbeae9ead81703c4436f6fb4f7331c2cd849d3ac -r 553ffed0c7fadee5affaef08bd85b4f2bcf93e07 doc/source/loading_data.rst
--- a/doc/source/loading_data.rst
+++ /dev/null

https://bitbucket.org/yt_analysis/yt/commits/3d1fdd825b5d/
Changeset:   3d1fdd825b5d
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-03-23 18:26:45
Summary:     Fixing a compatibility issue with runipy 0.0.8
Affected #:  1 file

diff -r 553ffed0c7fadee5affaef08bd85b4f2bcf93e07 -r 3d1fdd825b5d7e5040c453b627ebaa8cee893db5 doc/extensions/notebook_sphinxext.py
--- a/doc/extensions/notebook_sphinxext.py
+++ b/doc/extensions/notebook_sphinxext.py
@@ -138,7 +138,7 @@
     # Create evaluated version and save it to the dest path.
     # Always use --pylab so figures appear inline
     # perhaps this is questionable?
-    nb_runner = NotebookRunner(nb_in=nb_path, pylab=True)
+    nb_runner = NotebookRunner(nb_path, pylab=False)
     nb_runner.run_notebook(skip_exceptions=skip_exceptions)
     if dest_path is None:
         dest_path = 'temp_evaluated.ipynb'


https://bitbucket.org/yt_analysis/yt/commits/ecda2dbe9b1e/
Changeset:   ecda2dbe9b1e
Branch:      yt-3.0
User:        MatthewTurk
Date:        2014-03-23 21:19:29
Summary:     Merged in ngoldbaum/yt/yt-3.0 (pull request #738)

Moving the new loading data docs into a toctree.
Affected #:  5 files

diff -r 435069baaceba8e44ea666d4924f3ac28cadba45 -r ecda2dbe9b1e08c99bb4e4ce27bce50f98b620b4 doc/extensions/notebook_sphinxext.py
--- a/doc/extensions/notebook_sphinxext.py
+++ b/doc/extensions/notebook_sphinxext.py
@@ -138,7 +138,7 @@
     # Create evaluated version and save it to the dest path.
     # Always use --pylab so figures appear inline
     # perhaps this is questionable?
-    nb_runner = NotebookRunner(nb_in=nb_path, pylab=True)
+    nb_runner = NotebookRunner(nb_path, pylab=False)
     nb_runner.run_notebook(skip_exceptions=skip_exceptions)
     if dest_path is None:
         dest_path = 'temp_evaluated.ipynb'

diff -r 435069baaceba8e44ea666d4924f3ac28cadba45 -r ecda2dbe9b1e08c99bb4e4ce27bce50f98b620b4 doc/source/examining/index.rst
--- a/doc/source/examining/index.rst
+++ b/doc/source/examining/index.rst
@@ -6,6 +6,6 @@
 .. toctree::
    :maxdepth: 2
 
-   supported_frontends_data
+   loading_data
    generic_array_data
    low_level_inspection

diff -r 435069baaceba8e44ea666d4924f3ac28cadba45 -r ecda2dbe9b1e08c99bb4e4ce27bce50f98b620b4 doc/source/examining/loading_data.rst
--- /dev/null
+++ b/doc/source/examining/loading_data.rst
@@ -0,0 +1,601 @@
+.. _loading-data:
+
+Loading Data
+============
+
+This section contains information on how to load data into ``yt``, as well as
+some important caveats about different data formats.
+
+.. _loading-enzo-data:
+
+Enzo Data
+---------
+
+Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
+dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
+it the parameter file name.  This would be the name of the output file, and it
+contains no extension.  For instance, if you have the following files:
+
+.. code-block:: none
+
+   DD0010/
+   DD0010/data0010
+   DD0010/data0010.index
+   DD0010/data0010.cpu0000
+   DD0010/data0010.cpu0001
+   DD0010/data0010.cpu0002
+   DD0010/data0010.cpu0003
+
+You would feed the ``load`` command the filename ``DD0010/data0010`` as
+mentioned.
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("DD0010/data0010")
+
+.. rubric:: Caveats
+
+* There are no major caveats for Enzo usage
+* Units should be correct, if you utilize standard unit-setting routines.  yt
+  will notify you if it cannot determine the units, although this
+  notification will be passive.
+* 2D and 1D data are supported, but the extraneous dimensions are set to be
+  of length 1.0 in "code length" which may produce strange results for volume
+  quantities.
+
+.. _loading-orion-data:
+
+Boxlib Data
+-----------
+
+yt has been tested with Boxlib data generated by Orion, Nyx, Maestro and
+Castro.  Currently it is cared for by a combination of Andrew Myers, Chris
+Malone, and Matthew Turk.
+
+To load a Boxlib dataset, you can use the ``load`` command provided by
+``yt.mods`` and supply to it the directory file name.  **You must also have the
+``inputs`` file in the base directory.**  For instance, if you were in a
+directory with the following files:
+
+.. code-block:: none
+
+   inputs
+   pltgmlcs5600/
+   pltgmlcs5600/Header
+   pltgmlcs5600/Level_0
+   pltgmlcs5600/Level_0/Cell_H
+   pltgmlcs5600/Level_1
+   pltgmlcs5600/Level_1/Cell_H
+   pltgmlcs5600/Level_2
+   pltgmlcs5600/Level_2/Cell_H
+   pltgmlcs5600/Level_3
+   pltgmlcs5600/Level_3/Cell_H
+   pltgmlcs5600/Level_4
+   pltgmlcs5600/Level_4/Cell_H
+
+You would feed it the filename ``pltgmlcs5600``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("pltgmlcs5600")
+
+.. _loading-flash-data:
+
+FLASH Data
+----------
+
+FLASH HDF5 data is *mostly* supported and cared for by John ZuHone.  To load a
+FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
+supply to it the file name of a plot file or checkpoint file, but particle
+files are not currently directly loadable by themselves, due to the fact that
+they typically lack grid information. For instance, if you were in a directory
+with the following files:
+
+.. code-block:: none
+
+   cosmoSim_coolhdf5_chk_0026
+
+You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("cosmoSim_coolhdf5_chk_0026")
+
+If you have a FLASH particle file that was created at the same time as
+a plotfile or checkpoint file (therefore having particle data
+consistent with the grid structure of the latter), its data may be loaded with the
+``particle_filename`` optional argument:
+
+.. code-block:: python
+
+    from yt.mods import *
+    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+
+.. rubric:: Caveats
+
+* Please be careful that the units are correctly utilized; yt assumes cgs.
+
+.. _loading-ramses-data:
+
+RAMSES Data
+-----------
+
+In yt-3.0, RAMSES data is fully supported.  If you are interested in taking a
+development or stewardship role, please contact the yt-dev mailing list.  To
+load a RAMSES dataset, you can use the ``load`` command provided by ``yt.mods``
+and supply to it the ``info*.txt`` filename.  For instance, if you were in a
+directory with the following files:
+
+.. code-block:: none
+
+   output_00007
+   output_00007/amr_00007.out00001
+   output_00007/grav_00007.out00001
+   output_00007/hydro_00007.out00001
+   output_00007/info_00007.txt
+   output_00007/part_00007.out00001
+
+You would feed it the filename ``output_00007/info_00007.txt``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("output_00007/info_00007.txt")
+
+yt will attempt to guess the fields in the file.  You may also specify a list
+of fields by supplying the ``fields`` keyword in your call to ``load``.
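+
+For instance, a hedged sketch (the field names below are illustrative
+assumptions that depend on your RAMSES configuration):
+
+.. code-block:: python
+
+   from yt.mods import *
+
+   # Hypothetical field ordering for the hydro files; adjust to your setup.
+   fields = ["Density", "x-velocity", "y-velocity", "z-velocity", "Pressure"]
+   pf = load("output_00007/info_00007.txt", fields=fields)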
+
+.. _loading-gadget-data:
+
+Gadget Data
+-----------
+
+yt has support for reading Gadget data in both raw binary and HDF5 formats.  It
+is able to access the particles as it would any other particle dataset, and it
+can apply smoothing kernels to the data to produce both quantitative analysis
+and visualization.
+
+Gadget data in HDF5 format can be loaded with the ``load`` command:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("snapshot_061.hdf5")
+
+However, yt cannot automatically detect raw binary Gadget data, so you must
+specify the format yourself by using ``GadgetDataset`` directly:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = GadgetDataset("snapshot_061")
+
+.. _particle-bbox:
+
+Units and Bounding Boxes
+++++++++++++++++++++++++
+
+There are two additional pieces of information that may be needed.  If your
+simulation is cosmological, yt can often guess the bounding box and the units
+of the simulation.  However, for isolated simulations and for cosmological
+simulations with non-standard units, these must be supplied.  For example, if
+a length unit of 1.0 corresponds to a kiloparsec, you can supply this in the
+constructor.  yt can accept units such as ``Mpc``, ``kpc``, ``cm``, ``Mpccm/h``
+and so on.  In particular, note that ``Mpc/h`` and ``Mpccm/h`` (``cm`` for
+comoving here) are usable unit definitions.
+
+yt will attempt to use units for ``mass``, ``length`` and ``time`` as supplied
+in the argument ``unit_base``.  The ``bounding_box`` argument is a list of
+two-item tuples or lists that describe the left and right extents of the
+particles.
+
+.. code-block:: python
+
+   pf = GadgetDataset("snap_004",
+           unit_base = {'length': ('kpc', 1.0)},
+           bounding_box = [[-600.0, 600.0], [-600.0, 600.0], [-600.0, 600.0]])
+
+.. _particle-indexing-criteria:
+
+Indexing Criteria
++++++++++++++++++
+
+yt generates a global mesh index via an octree that governs the resolution of
+volume elements.  This is controlled by two parameters, ``n_ref`` and
+``over_refine_factor``, which are weak proxies for each other.  The first,
+``n_ref``, governs how many particles in an oct result in that oct being
+refined into eight child octs.  Lower values mean higher resolution; the
+default is 64.  The second parameter, ``over_refine_factor``, governs how many
+cells are in a given oct; the default value of 1 corresponds to 8 cells.
+The number of cells in an oct is given by the expression
+``2**(3*over_refine_factor)``.
+
+If you want higher resolution, try reducing the value of ``n_ref`` to 32 or
+16, as in the sketch below.
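+
+A minimal sketch (reusing the HDF5 snapshot from above, and assuming these
+keywords are forwarded through ``load`` to the particle frontend):
+
+.. code-block:: python
+
+   from yt.mods import *
+
+   # Refine an oct into eight children once it holds more than 32 particles.
+   pf = load("snapshot_061.hdf5", n_ref=32)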
+
+.. _gadget-field-spec:
+
+Field Specifications
+++++++++++++++++++++
+
+Binary Gadget outputs often have additional fields or particle types that are
+not part of the default Gadget distribution format.  These can be specified in
+the call to ``GadgetDataset`` by supplying either the name of one of the
+built-in sets of field specifications as a string or a field specification
+itself.  As an example, yt has built-in definitions for ``default`` (the
+default) and ``agora_unlv``.  Field specifications must be tuples of this
+format:
+
+.. code-block:: python
+
+   default = ( "Coordinates",
+               "Velocities",
+               "ParticleIDs",
+               "Mass",
+               ("InternalEnergy", "Gas"),
+               ("Density", "Gas"),
+               ("SmoothingLength", "Gas"),
+   )
+
+This is the default specification used by the Gadget frontend.  It means that
+the fields are, in order, Coordinates, Velocities, ParticleIDs, Mass, and the
+fields InternalEnergy, Density and SmoothingLength *only* for Gas particles.
+For example, if you have defined a Metallicity field for the particle type
+Halo, which comes right after ParticleIDs in the file, you could define it
+like this:
+
+.. code-block:: python
+
+   my_field_def = ( "Coordinates",
+               "Velocities",
+               "ParticleIDs",
+               ("Metallicity", "Halo"),
+               "Mass",
+               ("InternalEnergy", "Gas"),
+               ("Density", "Gas"),
+               ("SmoothingLength", "Gas"),
+   )
+
+To save time, you can use yt's plugin file to add items to the dictionary
+where these definitions are stored, like so:
+
+.. code-block:: python
+
+   from yt.frontends.sph.definitions import gadget_field_specs
+   gadget_field_specs["my_field_def"] = my_field_def
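+
+Once registered, the new name can be used like a built-in specification (the
+``field_spec`` keyword name is an assumption here; verify it against your yt
+version):
+
+.. code-block:: python
+
+   pf = GadgetDataset("snapshot_061", field_spec="my_field_def")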
+
+Please also feel free to issue a pull request with any new field
+specifications, as we're happy to include them in the main distribution!
+
+.. _gadget-ptype-spec:
+
+Particle Type Definitions
++++++++++++++++++++++++++
+
+In some cases, research groups add new particle types or re-order them.  You
+can supply alternate particle types by using the keyword ``ptype_spec`` to the
+``GadgetDataset`` call.  The default for Gadget binary data is:
+
+.. code-block:: python
+
+    ( "Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry" )
+
+You can specify alternate names, but note that this may cause problems with
+the field specification if none of the new names match the old ones.
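+
+For example, a hypothetical re-ordering (the type names here are purely
+illustrative) might be supplied as:
+
+.. code-block:: python
+
+   my_ptypes = ("Gas", "DarkMatter", "Disk", "Bulge", "Stars", "BlackHole")
+   pf = GadgetDataset("snapshot_061", ptype_spec=my_ptypes)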
+
+.. _gadget-header-spec:
+
+Header Specification
+++++++++++++++++++++
+
+If you have modified the header in your Gadget binary file, you can specify an
+alternate header specification with the keyword ``header_spec``.  This can
+either be a list of strings corresponding to individual header types known to
+yt, or it can be a combination of strings and header specifications.  The
+default header specification (found in ``yt/frontends/sph/definitions.py``) is:
+
+.. code-block:: python
+   
+    default      = (('Npart', 6, 'i'),
+                    ('Massarr', 6, 'd'),
+                    ('Time', 1, 'd'),
+                    ('Redshift', 1, 'd'),
+                    ('FlagSfr', 1, 'i'),
+                    ('FlagFeedback', 1, 'i'),
+                    ('Nall', 6, 'i'),
+                    ('FlagCooling', 1, 'i'),
+                    ('NumFiles', 1, 'i'),
+                    ('BoxSize', 1, 'd'),
+                    ('Omega0', 1, 'd'),
+                    ('OmegaLambda', 1, 'd'),
+                    ('HubbleParam', 1, 'd'),
+                    ('FlagAge', 1, 'i'),
+                    ('FlagMetals', 1, 'i'),
+                    ('NallHW', 6, 'i'),
+                    ('unused', 16, 'i'))
+
+These items will all be accessible inside the object ``pf.parameters``, which
+is a dictionary.  You can add combinations of new items, specified in the same
+way, or alternately other types of headers.  The other string keys defined are
+``pad32``, ``pad64``, ``pad128``, and ``pad256``, each of which corresponds to
+empty padding of that many bytes.  For example, if you have an additional 256
+bytes of padding at the end, you can specify this with:
+
+.. code-block:: python
+
+   header_spec = ["default", "pad256"]
+
+This can then be supplied to the constructor, for instance (the snapshot name
+below is hypothetical):
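+
+.. code-block:: python
+
+   pf = GadgetDataset("snapshot_061", header_spec=["default", "pad256"])
+
+Note that you can also build a header specification manually, for instance
+with: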
+
+
+.. code-block:: python
+
+   header_spec = ["default", (('some_value', 8, 'd'),
+                              ('another_value', 1, 'i'))]
+
+The type letters correspond to data types from the Python ``struct`` module.
+Please feel free to submit alternate header types to the main yt repository.
+
+.. _specifying-gadget-units:
+
+Specifying Units
+++++++++++++++++
+
+If you are running a cosmology simulation, yt will be able to guess the units
+with some reliability.  However, if you are not, and you do not specify a
+parameter file, yt will fall back on defaults: lengths in comoving Mpc/h,
+velocities in cm/s, and masses in 10^10 Msun/h.  You can specify alternate
+units by supplying the ``unit_base`` keyword argument in this form:
+
+.. code-block:: python
+
+   unit_base = {'length': (1.0, 'cm'), 'mass': (1.0, 'g'), 'time': (1.0, 's')}
+
+yt will use the length, mass, and time units to derive all other units.
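+
+Passed through the constructor, this might look like the following sketch
+(the snapshot name is assumed):
+
+.. code-block:: python
+
+   pf = GadgetDataset("snapshot_061",
+                      unit_base = {'length': (1.0, 'cm'),
+                                   'mass': (1.0, 'g'),
+                                   'time': (1.0, 's')})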
+
+.. _loading-tipsy-data:
+
+Tipsy Data
+----------
+
+yt also supports loading Tipsy data.  Loading it works much like loading
+Gadget data; in particular, it shares the definition of indexing and
+mesh-identification described in :ref:`particle-indexing-criteria`.  However,
+unlike Gadget, the Tipsy frontend has not yet implemented header
+specifications, field specifications, or particle type specifications.
+*These are all excellent projects for new contributors!*
+
+Tipsy data cannot be automatically detected.  You can load it with a command
+similar to the following:
+
+.. code-block:: python
+
+    # domain_left_edge and domain_right_edge are arrays you define yourself.
+    ds = TipsyDataset('test.00169',
+                      parameter_file='test.param',
+                      endian='<',
+                      domain_left_edge=domain_left_edge,
+                      domain_right_edge=domain_right_edge)
+
+Not all of these arguments are necessary; additionally, yt accepts the
+arguments ``n_ref``, ``over_refine_factor``, ``cosmology_parameters``, and
+``unit_base``.  By default, yt will not use a parameter file, and it will
+assume the data is big-endian (``>``).  Optionally, you may specify
+``field_dtypes``, which describes the data types of various fields.  For
+example, if you have stored positions as 64-bit floats, you can specify this
+with:
+
+.. code-block:: python
+
+    ds = TipsyDataset("./halo1e11_run1.00400", endian="<",
+                           field_dtypes = {"Coordinates": "d"})
+
+.. _specifying-cosmology-tipsy:
+
+Specifying Tipsy Cosmological Parameters
+++++++++++++++++++++++++++++++++++++++++
+
+Cosmological parameters can be specified to Tipsy to enable computation of
+default units.  The parameters recognized are of this form:
+
+.. code-block:: python
+
+   cosmology_parameters = {'current_redshift': 0.0,
+                           'omega_lambda': 0.728,
+                           'omega_matter': 0.272,
+                           'hubble_constant': 0.702}
+
+These will be used to set the units, if they are specified.
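+
+A sketch of supplying them (reusing the dataset from above):
+
+.. code-block:: python
+
+    ds = TipsyDataset('test.00169', parameter_file='test.param',
+                      cosmology_parameters=cosmology_parameters)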
+
+.. _loading-artio-data:
+
+ARTIO Data
+----------
+
+ARTIO data has a well-specified internal parameter system and has few free
+parameters.  However, for optimization purposes, the parameter that provides
+the most guidance to yt as to how to manage ARTIO data is ``max_range``.  This
+governs the maximum number of space-filling curve cells that will be used in a
+single "chunk" of data read from disk.  For small datasets, setting this number
+very large will enable more data to be loaded into memory at any given time;
+for very large datasets, this parameter can be left alone safely.  By default
+it is set to 1024; it can in principle be set as high as the total number of
+SFC cells.
+
+To load ARTIO data, you can specify a command such as this:
+
+.. code-block:: python
+
+    ds = load("./A11QR1/s11Qzm1h2_a1.0000.art")
+
+.. _loading-art-data:
+
+ART Data
+--------
+
+ART data enjoys preliminary support and has been supported in the past by
+Christopher Moody.  Please contact the ``yt-dev`` mailing list if you are
+interested in using yt for ART data, or if you are interested in assisting with
+development of yt to work with ART data.
+
+To load an ART dataset, you can use the ``load`` command provided by
+``yt.mods`` and pass it the gas mesh file.  yt will then search for the
+complementary dark matter and stellar particle header and data files.
+Note, however, that your simulations may not follow the same naming
+convention.
+
+For example, a single snapshot might have a series of files looking like
+this:
+
+.. code-block:: none
+
+   10MpcBox_csf512_a0.300.d    #Gas mesh
+   PMcrda0.300.DAT             #Particle header
+   PMcrs0a0.300.DAT            #Particle data (positions,velocities)
+   stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
+
+The ART frontend tries to find the associated files matching the above, but
+if that fails you can specify ``file_particle_header``, ``file_particle_data``,
+and ``file_star_data`` in addition to the gas mesh file.  You also have the
+option of gridding particles and assigning them onto the meshes.  This process
+is in beta, and for the time being it's probably best to leave
+``do_grid_particles=False`` at the default.
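+
+A hedged sketch of supplying the companion files explicitly (keyword names as
+given above; verify them against your yt version):
+
+.. code-block:: python
+
+   from yt.mods import *
+
+   pf = load("10MpcBox_csf512_a0.300.d",
+             file_particle_header="PMcrda0.300.DAT",
+             file_particle_data="PMcrs0a0.300.DAT",
+             file_star_data="stars_a0.300.dat")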
+
+To speed up the loading of an ART file, you have a few options.  You can turn
+off particles entirely by setting ``discover_particles=False``.  You can also
+grid octs only up to a certain level, e.g. ``limit_level=5``, which is useful
+for debugging by artificially creating a 'smaller' dataset to work with.
+
+Finally, when stellar ages are computed, we 'spread' the ages evenly within a
+smoothing window.  By default this is turned on and set to 10 Myr.  To turn it
+off you can set ``spread=False``, and you can tweak the age smoothing window
+by specifying the window in seconds, e.g. ``spread=1.0e7*365*24*3600``.
+
+.. code-block:: python
+    
+   from yt.mods import *
+
+   pf = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
+
+.. _loading-moab-data:
+
+MOAB Data
+---------
+
+.. _loading-pyne-data:
+
+PyNE Data
+---------
+
+.. _loading-numpy-array:
+
+Generic Array Data
+------------------
+
+Even if your data is not strictly related to fields commonly used in
+astrophysical codes, or your code is not yet supported, you can still feed it
+to ``yt`` to use its advanced visualization and analysis facilities.  The only
+requirement is that your data can be represented as one or more uniform,
+three-dimensional NumPy arrays.  Assuming that you have your data in ``arr``,
+the following code:
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+will create a ``yt``-native parameter file ``pf`` that treats your array as a
+density field in a cubic domain 3 Mpc on a side (3 * 3.08e24 cm) and
+simultaneously divides the domain into 12 chunks, so that you can take
+advantage of the underlying parallelism.
+
+Particle fields are detected as one-dimensional fields. The number of
+particles is set by the ``number_of_particles`` key in
+``data``. Particle fields are then added as one-dimensional arrays in
+a similar manner as the three-dimensional grid fields:
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = dens,
+               number_of_particles = 1000000,
+               particle_position_x = posx_arr,
+               particle_position_y = posy_arr,
+               particle_position_z = posz_arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, dens.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+where in this example the particle position fields have been assigned.
+``number_of_particles`` must match the length of the particle arrays.  If no
+particle arrays are supplied, then ``number_of_particles`` is assumed to be
+zero.
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Particles may be difficult to integrate.
+* Data must already reside in memory.
+
+.. _loading-amr-data:
+
+Generic AMR Data
+----------------
+
+It is possible to create a native ``yt`` parameter file from a list of Python
+dictionaries that describe a set of rectangular patches of data at possibly
+varying resolution.
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.frontends.stream.api import load_amr_grids
+
+   grid_data = [
+       dict(left_edge = [0.0, 0.0, 0.0],
+            right_edge = [1.0, 1.0, 1.0],
+            level = 0,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0),
+       dict(left_edge = [0.25, 0.25, 0.25],
+            right_edge = [0.75, 0.75, 0.75],
+            level = 1,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0),
+   ]
+
+   for g in grid_data:
+       g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
+
+   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+
+Particle fields are supported by adding one-dimensional arrays and setting
+the ``number_of_particles`` key in each grid's dict:
+
+.. code-block:: python
+
+    for g in grid_data:
+        g["number_of_particles"] = 100000
+        g["particle_position_x"] = np.random.random(g["number_of_particles"])
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Some functions may behave oddly, and parallelism will be disappointing or
+  non-existent in most cases.
+* No consistency checks are performed on the index.
+* Data must already reside in memory.
+* Consistency between particle positions and grids is not checked;
+  ``load_amr_grids`` assumes that particle positions associated with one grid are
+  not bounded within another grid at a higher level, so this must be
+  ensured by the user prior to loading the grid data. 
+
+Generic Particle Data
+---------------------
+

diff -r 435069baaceba8e44ea666d4924f3ac28cadba45 -r ecda2dbe9b1e08c99bb4e4ce27bce50f98b620b4 doc/source/examining/supported_frontends_data.rst
--- a/doc/source/examining/supported_frontends_data.rst
+++ /dev/null
@@ -1,186 +0,0 @@
-.. _loading-data-from-supported-codes:
-
-Loading Data from Supported Codes
-=================================
-
-This section contains information on how to load data into ``yt`` from
-supported codes, as well as some important caveats about different
-data formats.
-
-.. _loading-enzo-data:
-
-Enzo Data
----------
-
-Enzo data is fully supported and cared for by Matthew Turk.  To load an Enzo
-dataset, you can use the ``load`` command provided by ``yt.mods`` and supply to
-it the parameter file name.  This would be the name of the output file, and it
-contains no extension.  For instance, if you have the following files:
-
-.. code-block:: none
-
-   DD0010/
-   DD0010/data0010
-   DD0010/data0010.index
-   DD0010/data0010.cpu0000
-   DD0010/data0010.cpu0001
-   DD0010/data0010.cpu0002
-   DD0010/data0010.cpu0003
-
-You would feed the ``load`` command the filename ``DD0010/data0010`` as
-mentioned.
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("DD0010/data0010")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Enzo usage
-* Units should be correct, if you utilize standard unit-setting routines.  yt
-  will notify you if it cannot determine the units, although this
-  notification will be passive.
-* 2D and 1D data are supported, but the extraneous dimensions are set to be
-  of length 1.0
-
-.. _loading-orion-data:
-
-Orion Data
-----------
-
-Orion data is fully supported. To load an Orion dataset, you can use the
-``load`` command provided by ``yt.mods`` and supply to it the directory file
-name.  **You must also have the ``inputs`` file in the base directory.** For
-instance, if you were in a directory with the following files:
-
-.. code-block:: none
-
-   inputs
-   pltgmlcs5600/
-   pltgmlcs5600/Header
-   pltgmlcs5600/Level_0
-   pltgmlcs5600/Level_0/Cell_H
-   pltgmlcs5600/Level_1
-   pltgmlcs5600/Level_1/Cell_H
-   pltgmlcs5600/Level_2
-   pltgmlcs5600/Level_2/Cell_H
-   pltgmlcs5600/Level_3
-   pltgmlcs5600/Level_3/Cell_H
-   pltgmlcs5600/Level_4
-   pltgmlcs5600/Level_4/Cell_H
-
-You would feed it the filename ``pltgmlcs5600``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("pltgmlcs5600")
-
-.. rubric:: Caveats
-
-* There are no major caveats for Orion usage
-* Star particles are not supported at the current time
-
-.. _loading-flash-data:
-
-FLASH Data
-----------
-
-FLASH HDF5 data is fully supported and cared for by John ZuHone.  To load a
-FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
-supply to it the file name of a plot file or checkpoint file.  Particle
-files are not currently directly loadable by themselves, due to the
-fact that they typically lack grid information. For instance, if you were in a directory with
-the following files:
-
-.. code-block:: none
-
-   cosmoSim_coolhdf5_chk_0026
-
-You would feed it the filename ``cosmoSim_coolhdf5_chk_0026``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("cosmoSim_coolhdf5_chk_0026")
-
-If you have a FLASH particle file that was created at the same time as
-a plotfile or checkpoint file (therefore having particle data
-consistent with the grid structure of the latter), its data may be loaded with the
-``particle_filename`` optional argument:
-
-.. code-block:: python
-
-    from yt.mods import *
-    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
-
-.. rubric:: Caveats
-
-* Please be careful that the units are correctly utilized; yt assumes cgs
-* Velocities and length units will be scaled to comoving coordinates if yt is
-  able to discern you are examining a cosmology simulation; particle and grid
-  positions will not be.
-* Domains may be visualized assuming periodicity.
-
-Athena Data
------------
-
-Athena 4.x VTK data is *mostly* supported and cared for by John
-ZuHone. Both uniform grid and SMR datasets are supported. 
-
-Loading Athena datasets is slightly different depending on whether
-your dataset came from a serial or a parallel run. If the data came
-from a serial run or you have joined the VTK files together using the
-Athena tool ``join_vtk``, you can load the data like this:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("kh.0010.vtk")
-
-The filename corresponds to the file on SMR level 0, whereas if there
-are multiple levels the corresponding files will be picked up
-automatically, assuming they are laid out in ``lev*`` subdirectories
-under the directory where the base file is located.
-
-For parallel datasets, yt assumes that they are laid out in
-directories named ``id*``, one for each processor number, each with
-``lev*`` subdirectories for additional refinement levels. To load this
-data, call ``load`` with the base file in the ``id0`` directory:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("id0/kh.0010.vtk")
-
-which will pick up all of the files in the different ``id*`` directories for
-the entire dataset. 
-
-yt works in cgs ("Gaussian") units, but Athena data is not
-normally stored in these units. If you would like to convert data to
-cgs units, you may supply conversions for length, time, and density to ``load``:
-
-.. code-block:: python
-
-   from yt.mods import *
-   pf = load("id0/cluster_merger.0250.vtk", 
-          parameters={"LengthUnits":3.0856e24,
-                               "TimeUnits":3.1557e13,"DensityUnits":1.67e-24)
-
-This means that the yt fields (e.g. ``Density``, ``x-velocity``,
-``Bx``) will be in cgs units, but the Athena fields (e.g.,
-``density``, ``velocity_x``, ``cell_centered_B_x``) will be in code
-units. 
-
-.. rubric:: Caveats
-
-* yt primarily works with primitive variables. If the Athena
-  dataset contains conservative variables, the yt primitive fields will be generated from the
-  conserved variables on disk. 
-* Domains may be visualized assuming periodicity.
-* Particle list data is currently unsupported.
-* In some parallel Athena datasets, it is possible for a grid from one
-  refinement level to overlap with more than one grid on the parent
-  level. This may result in unpredictable behavior for some analysis
-  or visualization tasks. 

diff -r 435069baaceba8e44ea666d4924f3ac28cadba45 -r ecda2dbe9b1e08c99bb4e4ce27bce50f98b620b4 doc/source/loading_data.rst
--- a/doc/source/loading_data.rst
+++ /dev/null
@@ -1,601 +0,0 @@

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.



More information about the yt-svn mailing list