[yt-svn] commit/yt: bwkeller: Merged in brittonsmith/yt (pull request #1788)

commits-noreply at bitbucket.org
Mon Nov 2 11:32:58 PST 2015


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/697ca7baf306/
Changeset:   697ca7baf306
Branch:      yt
User:        bwkeller
Date:        2015-11-02 19:32:46+00:00
Summary:     Merged in brittonsmith/yt (pull request #1788)

Adding ytdata frontend
Affected #:  35 files

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/analyzing/generating_processed_data.rst
--- a/doc/source/analyzing/generating_processed_data.rst
+++ b/doc/source/analyzing/generating_processed_data.rst
@@ -54,10 +54,13 @@
  
 .. code-block:: python
 
-   frb.export_hdf5("my_images.h5", fields=["density","temperature"])
+   frb.save_as_dataset("my_images.h5", fields=["density","temperature"])
    frb.export_fits("my_images.fits", fields=["density","temperature"],
                    clobber=True, units="kpc")
 
+In the HDF5 case, the created file can be reloaded just like a regular dataset with
+``yt.load`` and will, itself, be a first-class dataset.  For more information on
+this, see :ref:`saving-grid-data-containers`.
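+
+A minimal sketch of that round trip, reusing the filename from the example
+above:
+
+.. code-block:: python
+
+   frb_ds = yt.load("my_images.h5")
+   print (frb_ds.data["density"])
+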
 In the FITS case, there is an option for setting the ``units`` of the coordinate system in
 the file. If you want to overwrite a file with the same name, set ``clobber=True``. 
 

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/analyzing/index.rst
--- a/doc/source/analyzing/index.rst
+++ b/doc/source/analyzing/index.rst
@@ -20,5 +20,6 @@
    units/index
    filtering
    generating_processed_data
+   saving_data
    time_series_analysis
    parallel_computation

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -457,69 +457,9 @@
 ---------------------------
 
 Often, when operating interactively or via the scripting interface, it is
-convenient to save an object or multiple objects out to disk and then restart
-the calculation later.  For example, this is useful after clump finding 
-(:ref:`clump_finding`), which can be very time consuming.  
-Typically, the save and load operations are used on 3D data objects.  yt
-has a separate set of serialization operations for 2D objects such as
-projections.
-
-yt will save out objects to disk under the presupposition that the
-construction of the objects is the difficult part, rather than the generation
-of the data -- this means that you can save out an object as a description of
-how to recreate it in space, but not the actual data arrays affiliated with
-that object.  The information that is saved includes the dataset off of
-which the object "hangs."  It is this piece of information that is the most
-difficult; the object, when reloaded, must be able to reconstruct a dataset
-from whatever limited information it has in the save file.
-
-You can save objects to an output file using the function 
-:func:`~yt.data_objects.index.save_object`: 
-
-.. code-block:: python
-
-   import yt
-   ds = yt.load("my_data")
-   sp = ds.sphere([0.5, 0.5, 0.5], (10.0, 'kpc'))
-   sp.save_object("sphere_name", "save_file.cpkl")
-
-This will store the object as ``sphere_name`` in the file
-``save_file.cpkl``, which will be created or accessed using the standard
-python module :mod:`shelve`.  
-
-To re-load an object saved this way, you can use the shelve module directly:
-
-.. code-block:: python
-
-   import yt
-   import shelve
-   ds = yt.load("my_data") 
-   saved_fn = shelve.open("save_file.cpkl")
-   ds, sp = saved_fn["sphere_name"]
-
-Additionally, we can store multiple objects in a single shelve file, so we 
-have to call the sphere by name.
-
-For certain data objects such as projections, serialization can be performed
-automatically if ``serialize`` option is set to ``True`` in :ref:`the
-configuration file <configuration-file>` or set directly in the script:
-
-.. code-block:: python
-
-   from yt.config import ytcfg; ytcfg["yt", "serialize"] = "True"
-
-.. note:: Use serialization with caution. Enabling serialization means that
-   once a projection of a dataset has been created (and stored in the .yt file
-   in the same directory), any subsequent changes to that dataset will be
-   ignored when attempting to create the same projection. So if you take a
-   density projection of your dataset in the 'x' direction, then somehow tweak
-   that dataset significantly, and take the density projection again, yt will
-   default to finding the original projection and 
-   :ref:`not your new one <faq-old-data>`.
-
-.. note:: It's also possible to use the standard :mod:`cPickle` module for
-          loading and storing objects -- so in theory you could even save a
-          list of objects!
-
-This method works for clumps, as well, and the entire clump index will be
-stored and restored upon load.
+convenient to save an object to disk and then restart the calculation later or
+transfer the data from a container to another filesystem.  This can be
+particularly useful when working with extremely large datasets.  Field data
+can be saved to disk in a format that allows for it to be reloaded just like
+a regular dataset.  For information on how to do this, see
+:ref:`saving-data-containers`.

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/analyzing/saving_data.rst
--- /dev/null
+++ b/doc/source/analyzing/saving_data.rst
@@ -0,0 +1,243 @@
+.. _saving_data:
+
+Saving Reloadable Data
+======================
+
+Most of the data loaded into or generated with yt can be saved to a
+format that can be reloaded as a first-class dataset.  This includes
+the following:
+
+  * geometric data containers (regions, spheres, disks, rays, etc.)
+
+  * grid data containers (covering grids, arbitrary grids, fixed
+    resolution buffers)
+
+  * spatial plots (projections, slices, cutting planes)
+
+  * profiles
+
+  * generic array data
+
+In the case of projections, slices, and profiles, reloaded data can be
+used to remake plots.  For information on this, see :ref:`remaking-plots`.
+
+.. _saving-data-containers:
+
+Geometric Data Containers
+-------------------------
+
+Data from geometric data containers can be saved with the
+:func:`~yt.data_objects.data_containers.save_as_dataset` function.
+
+.. notebook-cell::
+
+   import yt
+   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
+
+   sphere = ds.sphere([0.5]*3, (10, "Mpc"))
+   fn = sphere.save_as_dataset(fields=["density", "particle_mass"])
+   print (fn)
+
+This function will return the name of the file to which the dataset
+was saved.  The filename will be a combination of the name of the
+original dataset and the type of data container.  Optionally, a
+specific filename can be given with the ``filename`` keyword.  If no
+fields are given, the fields that have previously been queried will
+be saved.
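+
+For example, a minimal sketch passing an explicit filename (the name here
+is arbitrary):
+
+.. code-block:: python
+
+   fn = sphere.save_as_dataset(filename="my_sphere.h5",
+                               fields=["density", "particle_mass"])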
+
+The newly created dataset can be loaded like all other supported
+data through ``yt.load``.  Once loaded, field data can be accessed
+through the traditional data containers or through the ``data``
+attribute, which will be a data container configured like the
+original data container used to make the dataset.  Grid data is
+accessed by the ``grid`` data type and particle data is accessed
+with the original particle type.  As with the original dataset, grid
+positions and cell sizes are accessible with, for example,
+("grid", "x") and ("grid", "dx").  Particle positions are
+accessible as (<particle_type>, "particle_position_x").  All original
+simulation parameters are accessible in the ``parameters``
+dictionary, normally associated with all datasets.
+
+.. code-block:: python
+
+   sphere_ds = yt.load("DD0046_sphere.h5")
+
+   # use the original data container
+   print (sphere_ds.data["grid", "density"])
+
+   # create a new data container
+   ad = sphere_ds.all_data()
+
+   # grid data
+   print (ad["grid", "density"])
+   print (ad["grid", "x"])
+   print (ad["grid", "dx"])
+
+   # particle data
+   print (ad["all", "particle_mass"])
+   print (ad["all", "particle_position_x"])
+
+Note that because field data queried from geometric containers is
+returned as unordered 1D arrays, data container datasets are treated,
+effectively, as particle data.  Thus, 3D indexing of grid data from
+these datasets is not possible.
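+
+For example, with the sphere dataset from above:
+
+.. code-block:: python
+
+   # a flat array, not a 3D block
+   print (sphere_ds.data["grid", "density"].shape)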
+
+.. _saving-grid-data-containers:
+
+Grid Data Containers
+--------------------
+
+Data containers that return field data as multidimensional arrays
+can be saved so as to preserve this type of access.  This includes
+covering grids, arbitrary grids, and fixed resolution buffers.
+Saving data from these containers works just as with geometric data
+containers.  Field data can be accessed through geometric data
+containers.
+
+.. code-block:: python
+
+   cg = ds.covering_grid(level=0, left_edge=[0.25]*3, dims=[16]*3)
+   fn = cg.save_as_dataset(fields=["density", "particle_mass"])
+
+   cg_ds = yt.load(fn)
+   ad = cg_ds.all_data()
+   print (ad["grid", "density"])
+
+Multidimensional indexing of field data is also available through
+the ``data`` attribute.
+
+.. code-block:: python
+
+   print (cg_ds.data["grid", "density"])
+
+Fixed resolution buffers work just the same.
+
+.. code-block:: python
+
+   my_proj = ds.proj("density", "x", weight_field="density")
+   frb = my_proj.to_frb(1.0, (800, 800))
+   fn = frb.save_as_dataset(fields=["density"])
+   frb_ds = yt.load(fn)
+   print (frb_ds.data["density"])
+
+.. _saving-spatial-plots:
+
+Spatial Plots
+-------------
+
+Spatial plots, such as projections, slices, and off-axis slices
+(cutting planes) can also be saved and reloaded.
+
+.. code-block:: python
+
+   proj = ds.proj("density", "x", weight_field="density")
+   proj.save_as_dataset()
+
+Once reloaded, they can be handed to their associated plotting
+functions to make images.
+
+.. code-block:: python
+
+   proj_ds = yt.load("DD0046_proj.h5")
+   p = yt.ProjectionPlot(proj_ds, "x", "density",
+                         weight_field="density")
+   p.save()
+
+.. _saving-profile-data:
+
+Profiles
+--------
+
+Profiles created with :func:`~yt.data_objects.profiles.create_profile`,
+:class:`~yt.visualization.profile_plotter.ProfilePlot`, and
+:class:`~yt.visualization.profile_plotter.PhasePlot` can be saved with
+the :func:`~yt.data_objects.profiles.save_as_dataset` function, which
+works just as above.  Profile datasets are a type of non-spatial grid
+dataset.  Geometric selection is not possible, but data can be
+accessed through the ``.data`` attribute.
+
+.. notebook-cell::
+
+   import yt
+   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
+   ad = ds.all_data()
+
+   profile_2d = yt.create_profile(ad, ["density", "temperature"],
+                                  "cell_mass", weight_field=None,
+                                  n_bins=(128, 128))
+   profile_2d.save_as_dataset()
+
+   prof_2d_ds = yt.load("DD0046_Profile2D.h5")
+   print (prof_2d_ds.data["cell_mass"])
+
+The x, y (if at least 2D), and z (if 3D) bin fields can be accessed as 1D
+arrays with "x", "y", and "z".
+
+.. code-block:: python
+
+   print (prof_2d_ds.data["x"])
+
+The bin fields can also be returned with the same shape as the profile
+data by accessing them with their original names.  This allows for
+boolean masking of profile data using the bin fields.
+
+.. code-block:: python
+
+   # density is the x bin field
+   print (prof_2d_ds.data["density"])
+
+For 1, 2, and 3D profile datasets, a fake profile object will be
+constructed by accessing the ``.profile`` attribute.  This is used
+primarily in the case of 1 and 2D profiles to create figures using
+:class:`~yt.visualization.profile_plotter.ProfilePlot` and
+:class:`~yt.visualization.profile_plotter.PhasePlot`.
+
+.. code-block:: python
+
+   p = yt.PhasePlot(prof_2d_ds.data, "density", "temperature",
+                    "cell_mass", weight_field=None)
+   p.save()
+
+.. _saving-array-data:
+
+Generic Array Data
+------------------
+
+Generic arrays can be saved and reloaded as non-spatial data using
+the :func:`~yt.frontends.ytdata.utilities.save_as_dataset` function,
+also available as ``yt.save_as_dataset``.  As with profiles, geometric
+selection is not possible, but the data can be accessed through the
+``.data`` attribute.
+
+.. notebook-cell::
+
+   import yt
+   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
+
+   region = ds.box([0.25]*3, [0.75]*3)
+   sphere = ds.sphere(ds.domain_center, (10, "Mpc"))
+   my_data = {}
+   my_data["region_density"] = region["density"]
+   my_data["sphere_density"] = sphere["density"]
+   yt.save_as_dataset(ds, "test_data.h5", my_data)
+
+   array_ds = yt.load("test_data.h5")
+   print (array_ds.data["region_density"])
+   print (array_ds.data["sphere_density"])
+
+Array data can be saved with or without a dataset loaded.  If no
+dataset has been loaded, a fake dataset can be provided as a
+dictionary.
+
+.. notebook-cell::
+
+   import numpy as np
+   import yt
+
+   my_data = {"density": yt.YTArray(np.random.random(10), "g/cm**3"),
+              "temperature": yt.YTArray(np.random.random(10), "K")}
+   fake_ds = {"current_time": yt.YTQuantity(10, "Myr")}
+   yt.save_as_dataset(fake_ds, "random_data.h5", my_data)
+
+   new_ds = yt.load("random_data.h5")
+   print (new_ds.data["density"])

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -72,6 +72,7 @@
 .. autosummary::
    :toctree: generated/
 
+   ~yt.data_objects.data_containers.YTDataContainer
    ~yt.data_objects.data_containers.YTSelectionContainer
    ~yt.data_objects.data_containers.YTSelectionContainer0D
    ~yt.data_objects.data_containers.YTSelectionContainer1D
@@ -383,6 +384,28 @@
    ~yt.frontends.stream.io.IOHandlerStreamOctree
    ~yt.frontends.stream.io.StreamParticleIOHandler
 
+ytdata
+^^^^^^
+
+.. autosummary::
+   :toctree: generated/
+
+   ~yt.frontends.ytdata.data_structures.YTDataContainerDataset
+   ~yt.frontends.ytdata.data_structures.YTSpatialPlotDataset
+   ~yt.frontends.ytdata.data_structures.YTGridDataset
+   ~yt.frontends.ytdata.data_structures.YTGridHierarchy
+   ~yt.frontends.ytdata.data_structures.YTGrid
+   ~yt.frontends.ytdata.data_structures.YTNonspatialDataset
+   ~yt.frontends.ytdata.data_structures.YTNonspatialHierarchy
+   ~yt.frontends.ytdata.data_structures.YTNonspatialGrid
+   ~yt.frontends.ytdata.data_structures.YTProfileDataset
+   ~yt.frontends.ytdata.fields.YTDataContainerFieldInfo
+   ~yt.frontends.ytdata.fields.YTGridFieldInfo
+   ~yt.frontends.ytdata.io.IOHandlerYTDataContainerHDF5
+   ~yt.frontends.ytdata.io.IOHandlerYTGridHDF5
+   ~yt.frontends.ytdata.io.IOHandlerYTSpatialPlotHDF5
+   ~yt.frontends.ytdata.io.IOHandlerYTNonspatialhdf5
+
 Loading Data
 ------------
 
@@ -739,6 +762,7 @@
    :toctree: generated/
 
    ~yt.convenience.load
+   ~yt.frontends.ytdata.utilities.save_as_dataset
    ~yt.data_objects.static_output.Dataset.all_data
    ~yt.data_objects.static_output.Dataset.box
    ~yt.funcs.deprecate

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -1284,6 +1284,81 @@
    bananas_Slice_z_kT.eps
    bananas_Slice_z_density.eps
 
+.. _remaking-plots:
+
+Remaking Figures from Plot Datasets
+-----------------------------------
+
+When working with datasets that are too large to be stored locally,
+making figures just right can be cumbersome as it requires continuously
+moving images somewhere they can be viewed.  However, image creation is
+actually a two-step process of first creating the projection, slice,
+or profile object, and then converting that object into an actual image.
+Fortunately, the hard part (creating slices, projections, profiles) can
+be separated from the easy part (generating images).  The intermediate
+slice, projection, and profile objects can be saved as reloadable
+datasets, then handed back to the plotting machinery discussed here.
+
+For slices and projections, the savable object is associated with the
+plot object as ``data_source``.  This can be saved with the
+:func:`~yt.data_objects.data_containers.save_as_dataset` function.  For
+more information, see :ref:`saving_data`.
+
+.. code-block:: python
+
+   p = yt.ProjectionPlot(ds, "x", "density",
+                         weight_field="density")
+   fn = p.data_source.save_as_dataset()
+
+This function will optionally take a ``filename`` keyword that follows
+the same logic as discussed above in :ref:`saving_plots`.  The filename
+to which the dataset was written will be returned.
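+
+For example (the filename here is arbitrary):
+
+.. code-block:: python
+
+   fn = p.data_source.save_as_dataset("my_proj.h5")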
+
+Once saved, this file can be reloaded completely independently of the
+original dataset and given back to the plot function with the same
+arguments.  One can now continue to tweak the figure to one's liking.
+
+.. code-block:: python
+
+   new_ds = yt.load(fn)
+   new_p = yt.ProjectionPlot(new_ds, "x", "density",
+                             weight_field="density")
+   new_p.save()
+
+The same functionality is available for profile and phase plots.  In
+each case, a special data container, ``data``, is given to the plotting
+functions.
+
+For ``ProfilePlot``:
+
+.. code-block:: python
+
+   ad = ds.all_data()
+   p1 = yt.ProfilePlot(ad, "density", "temperature",
+                       weight_field="cell_mass")
+
+   # note that ProfilePlots can hold a list of profiles
+   fn = p1.profiles[0].save_as_dataset()
+
+   new_ds = yt.load(fn)
+   p2 = yt.ProfilePlot(new_ds.data, "density", "temperature",
+                       weight_field="cell_mass")
+   p2.save()
+
+For ``PhasePlot``:
+
+.. code-block:: python
+
+   ad = ds.all_data()
+   p1 = yt.PhasePlot(ad, "density", "temperature",
+                     "cell_mass", weight_field=None)
+   fn = p1.profile.save_as_dataset()
+
+   new_ds = yt.load(fn)
+   p2 = yt.PhasePlot(new_ds.data, "density", "temperature",
+                     "cell_mass", weight_field=None)
+   p2.save()
+
 .. _eps-writer:
 
 Publication-ready Figures

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/__init__.py
--- a/yt/__init__.py
+++ b/yt/__init__.py
@@ -138,6 +138,9 @@
     load_particles, load_hexahedral_mesh, load_octree, \
     hexahedral_connectivity
 
+from yt.frontends.ytdata.api import \
+    save_as_dataset
+
 # For backwards compatibility
 GadgetDataset = frontends.gadget.GadgetDataset
 GadgetStaticOutput = deprecated_class(GadgetDataset)

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -20,6 +20,7 @@
 
 from .absorption_line import tau_profile
 
+from yt.convenience import load
 from yt.funcs import get_pbar, mylog
 from yt.units.yt_array import YTArray, YTQuantity
 from yt.utilities.physical_constants import \
@@ -121,8 +122,8 @@
         Parameters
         ----------
 
-        input_file : string
-           path to input ray data.
+        input_file : string or dataset
+           path to input ray data or a loaded ray dataset
         output_file : optional, string
            path for output file.  File formats are chosen based on the
            filename extension.  ``.h5`` for hdf5, ``.fits`` for fits,
@@ -156,7 +157,6 @@
 
         input_fields = ['dl', 'redshift', 'temperature']
         field_units = {"dl": "cm", "redshift": "", "temperature": "K"}
-        field_data = {}
         if use_peculiar_velocity:
             input_fields.append('velocity_los')
             input_fields.append('redshift_eff')
@@ -167,10 +167,11 @@
                 input_fields.append(feature['field_name'])
                 field_units[feature["field_name"]] = "cm**-3"
 
-        input = h5py.File(input_file, 'r')
-        for field in input_fields:
-            field_data[field] = YTArray(input[field].value, field_units[field])
-        input.close()
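+        # ray data is now saved as a reloadable yt dataset, so accept
+        # either a filename or an already-loaded ray dataset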
+        if isinstance(input_file, str):
+            input_ds = load(input_file)
+        else:
+            input_ds = input_file
+        field_data = input_ds.all_data()
 
         self.tau_field = np.zeros(self.lambda_bins.size)
         self.spectrum_line_list = []
@@ -337,6 +338,8 @@
         """
         Write out list of spectral lines.
         """
+        if filename is None:
+            return
         mylog.info("Writing spectral line list: %s." % filename)
         self.spectrum_line_list.sort(key=lambda obj: obj['wavelength'])
         f = open(filename, 'w')

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -13,19 +13,20 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-from yt.utilities.on_demand_imports import _h5py as h5py
 import numpy as np
 
 from yt.analysis_modules.cosmological_observation.cosmology_splice import \
     CosmologySplice
 from yt.convenience import \
     load
-from yt.funcs import \
-    mylog
+from yt.frontends.ytdata.utilities import \
+    save_as_dataset
 from yt.units.yt_array import \
     YTArray
 from yt.utilities.cosmology import \
     Cosmology
+from yt.utilities.logger import \
+    ytLogger as mylog
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_objects, \
     parallel_root_only
@@ -48,7 +49,7 @@
     synthetic QSO lines of sight.
 
     Light rays can also be made from single datasets.
-    
+
     Once the LightRay object is set up, use LightRay.make_light_ray to
     begin making rays.  Different randomizations can be created with a
     single object by providing different random seeds to make_light_ray.
@@ -58,17 +59,17 @@
     parameter_filename : string
         The path to the simulation parameter file or dataset.
     simulation_type : optional, string
-        The simulation type.  If None, the first argument is assumed to 
+        The simulation type.  If None, the first argument is assumed to
         refer to a single dataset.
         Default: None
     near_redshift : optional, float
-        The near (lowest) redshift for a light ray containing multiple 
-        datasets.  Do not use is making a light ray from a single 
+        The near (lowest) redshift for a light ray containing multiple
+        datasets.  Do not use if making a light ray from a single
         dataset.
         Default: None
     far_redshift : optional, float
-        The far (highest) redshift for a light ray containing multiple 
-        datasets.  Do not use is making a light ray from a single 
+        The far (highest) redshift for a light ray containing multiple
+        datasets.  Do not use if making a light ray from a single
         dataset.
         Default: None
     use_minimum_datasets : optional, bool
@@ -98,11 +99,11 @@
         datasets for time series.
         Default: True.
     find_outputs : optional, bool
-        Whether or not to search for datasets in the current 
+        Whether or not to search for datasets in the current
         directory.
         Default: False.
     load_kwargs : optional, dict
-        Optional dictionary of kwargs to be passed to the "load" 
+        Optional dictionary of kwargs to be passed to the "load"
         function, appropriate for use of certain frontends.  E.g.
         Tipsy using "bounding_box"
         Gadget using "unit_base", etc.
@@ -129,8 +130,9 @@
         self.light_ray_solution = []
         self._data = {}
 
-        # Make a light ray from a single, given dataset.        
+        # Make a light ray from a single, given dataset.
         if simulation_type is None:
+            self.simulation_type = simulation_type
             ds = load(parameter_filename, **self.load_kwargs)
             if ds.cosmological_simulation:
                 redshift = ds.current_redshift
@@ -156,7 +158,7 @@
                                            time_data=time_data,
                                            redshift_data=redshift_data)
 
-    def _calculate_light_ray_solution(self, seed=None, 
+    def _calculate_light_ray_solution(self, seed=None,
                                       start_position=None, end_position=None,
                                       trajectory=None, filename=None):
         "Create list of datasets to be added together to make the light ray."
@@ -172,9 +174,9 @@
             if not ((end_position is None) ^ (trajectory is None)):
                 raise RuntimeError("LightRay Error: must specify either end_position " + \
                                    "or trajectory, but not both.")
-            self.light_ray_solution[0]['start'] = np.array(start_position)
+            self.light_ray_solution[0]['start'] = np.asarray(start_position)
             if end_position is not None:
-                self.light_ray_solution[0]['end'] = np.array(end_position)
+                self.light_ray_solution[0]['end'] = np.asarray(end_position)
             else:
                 # assume trajectory given as r, theta, phi
                 if len(trajectory) != 3:
@@ -185,12 +187,12 @@
                                 np.sin(phi) * np.sin(theta),
                                 np.cos(theta)])
             self.light_ray_solution[0]['traversal_box_fraction'] = \
-              vector_length(self.light_ray_solution[0]['start'], 
+              vector_length(self.light_ray_solution[0]['start'],
                             self.light_ray_solution[0]['end'])
 
         # the normal way (random start positions and trajectories for each dataset)
         else:
-            
+
             # For box coherence, keep track of effective depth travelled.
             box_fraction_used = 0.0
 
@@ -285,15 +287,15 @@
             Default: None.
         trajectory : optional, list of floats
             Used only if creating a light ray from a single dataset.
-            The (r, theta, phi) direction of the light ray.  Use either 
+            The (r, theta, phi) direction of the light ray.  Use either
             end_position or trajectory, not both.
             Default: None.
         fields : optional, list
             A list of fields for which to get data.
             Default: None.
         setup_function : optional, callable, accepts a ds
-            This function will be called on each dataset that is loaded 
-            to create the light ray.  For, example, this can be used to 
+            This function will be called on each dataset that is loaded
+        to create the light ray.  For example, this can be used to
             add new derived fields.
             Default: None.
         solution_filename : optional, string
@@ -308,13 +310,13 @@
             each point in the ray.
             Default: True.
         redshift : optional, float
-            Used with light rays made from single datasets to specify a 
-            starting redshift for the ray.  If not used, the starting 
-            redshift will be 0 for a non-cosmological dataset and 
+            Used with light rays made from single datasets to specify a
+            starting redshift for the ray.  If not used, the starting
+            redshift will be 0 for a non-cosmological dataset and
             the dataset redshift for a cosmological dataset.
             Default: None.
         njobs : optional, int
-            The number of parallel jobs over which the segments will 
+            The number of parallel jobs over which the segments will
             be split.  Choose -1 for one processor per segment.
             Default: -1.
 
@@ -322,7 +324,7 @@
         --------
 
         Make a light ray from multiple datasets:
-        
+
         >>> import yt
         >>> from yt.analysis_modules.cosmological_observation.light_ray.api import \
         ...     LightRay
@@ -348,12 +350,12 @@
         ...                       data_filename="my_ray.h5",
         ...                       fields=["temperature", "density"],
         ...                       get_los_velocity=True)
-        
+
         """
 
         # Calculate solution.
-        self._calculate_light_ray_solution(seed=seed, 
-                                           start_position=start_position, 
+        self._calculate_light_ray_solution(seed=seed,
+                                           start_position=start_position,
                                            end_position=end_position,
                                            trajectory=trajectory,
                                            filename=solution_filename)
@@ -364,6 +366,8 @@
         data_fields = fields[:]
         all_fields = fields[:]
         all_fields.extend(['dl', 'dredshift', 'redshift'])
+        all_fields.extend(['x', 'y', 'z', 'dx', 'dy', 'dz'])
+        data_fields.extend(['x', 'y', 'z', 'dx', 'dy', 'dz'])
         if get_los_velocity:
             all_fields.extend(['velocity_x', 'velocity_y',
                                'velocity_z', 'velocity_los', 'redshift_eff'])
@@ -399,10 +403,15 @@
             if not ds.cosmological_simulation:
                 next_redshift = my_segment["redshift"]
             elif self.near_redshift == self.far_redshift:
+                if isinstance(my_segment["traversal_box_fraction"], YTArray):
+                    segment_length = \
+                      my_segment["traversal_box_fraction"].in_units("Mpccm / h")
+                else:
+                    segment_length = my_segment["traversal_box_fraction"] * \
+                      ds.domain_width[0].in_units("Mpccm / h")
                 next_redshift = my_segment["redshift"] - \
-                  self._deltaz_forward(my_segment["redshift"], 
-                                       ds.domain_width[0].in_units("Mpccm / h") *
-                                       my_segment["traversal_box_fraction"])
+                  self._deltaz_forward(my_segment["redshift"],
+                                       segment_length)
             elif my_segment.get("next", None) is None:
                 next_redshift = self.near_redshift
             else:
@@ -454,7 +463,7 @@
 
             # Get redshift for each lixel.  Assume linear relation between l and z.
             sub_data['dredshift'] = (my_segment['redshift'] - next_redshift) * \
-                (sub_data['dl'] / vector_length(my_segment['start'], 
+                (sub_data['dl'] / vector_length(my_segment['start'],
                                                 my_segment['end']).in_cgs())
             sub_data['redshift'] = my_segment['redshift'] - \
               sub_data['dredshift'].cumsum() + sub_data['dredshift']
@@ -500,12 +509,17 @@
         # Flatten the list into a single dictionary containing fields
         # for the whole ray.
         all_data = _flatten_dict_list(all_data, exceptions=['segment_redshift'])
+        self._data = all_data
 
         if data_filename is not None:
             self._write_light_ray(data_filename, all_data)
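+            # the saved ray file is itself a reloadable yt dataset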
+            ray_ds = load(data_filename)
+            return ray_ds
+        else:
+            return None
 
-        self._data = all_data
-        return all_data
+    def __getitem__(self, field):
+        return self._data[field]
 
     @parallel_root_only
     def _write_light_ray(self, filename, data):
@@ -514,19 +528,24 @@
 
         Write light ray data to hdf5 file.
         """
-
-        mylog.info("Saving light ray data to %s." % filename)
-        output = h5py.File(filename, 'w')
-        for field in data.keys():
-            # if the field is a tuple, only use the second part of the tuple
-            # in the hdf5 output (i.e. ('gas', 'density') -> 'density')
-            if isinstance(field, tuple):
-                fieldname = field[1]
-            else:
-                fieldname = field
-            output.create_dataset(fieldname, data=data[field])
-            output[fieldname].attrs["units"] = str(data[field].units)
-        output.close()
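+        # a single input dataset can be reloaded directly; for a simulation
+        # time series, build a dictionary of attributes for save_as_dataset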
+        if self.simulation_type is None:
+            ds = load(self.parameter_filename, **self.load_kwargs)
+        else:
+            ds = {}
+            ds["dimensionality"] = self.simulation.dimensionality
+            ds["domain_left_edge"] = self.simulation.domain_left_edge
+            ds["domain_right_edge"] = self.simulation.domain_right_edge
+            ds["cosmological_simulation"] = self.simulation.cosmological_simulation
+            ds["periodicity"] = (True, True, True)
+            ds["current_redshift"] = self.near_redshift
+            for attr in ["omega_lambda", "omega_matter", "hubble_constant"]:
+                ds[attr] = getattr(self.cosmology, attr)
+            ds["current_time"] = \
+              self.cosmology.t_from_z(ds["current_redshift"])
+        extra_attrs = {"data_type": "yt_light_ray"}
+        field_types = dict([(field, "grid") for field in data.keys()])
+        save_as_dataset(ds, filename, data, field_types=field_types,
+                        extra_attrs=extra_attrs)
 
     @parallel_root_only
     def _write_light_ray_solution(self, filename, extra_info=None):
@@ -573,7 +592,7 @@
 def vector_length(start, end):
     """
     vector_length(start, end)
-    
+
     Calculate vector length.
     """
 
@@ -600,15 +619,15 @@
     """
     periodic_ray(start, end, left=None, right=None)
 
-    Break up periodic ray into non-periodic segments. 
+    Break up periodic ray into non-periodic segments.
     Accepts start and end points of periodic ray as YTArrays.
     Accepts optional left and right edges of periodic volume as YTArrays.
-    Returns a list of lists of coordinates, where each element of the 
-    top-most list is a 2-list of start coords and end coords of the 
-    non-periodic ray: 
+    Returns a list of lists of coordinates, where each element of the
+    top-most list is a 2-list of start coords and end coords of the
+    non-periodic ray:
 
-    [[[x0start,y0start,z0start], [x0end, y0end, z0end]], 
-     [[x1start,y1start,z1start], [x1end, y1end, z1end]], 
+    [[[x0start,y0start,z0start], [x0end, y0end, z0end]],
+     [[x1start,y1start,z1start], [x1end, y1end, z1end]],
      ...,]
 
     """

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -21,6 +21,9 @@
     periodic_distance
 from yt.data_objects.profiles import \
     create_profile
+from yt.frontends.ytdata.utilities import \
+    _hdf5_yt_array, \
+    _yt_array_hdf5
 from yt.units.yt_array import \
     YTArray
 from yt.utilities.exceptions import \
@@ -584,21 +587,3 @@
     del sphere
     
 add_callback("iterative_center_of_mass", iterative_center_of_mass)
-
-def _yt_array_hdf5(fh, fieldname, data):
-    dataset = fh.create_dataset(fieldname, data=data)
-    units = ""
-    if isinstance(data, YTArray):
-        units = str(data.units)
-    dataset.attrs["units"] = units
-
-def _hdf5_yt_array(fh, fieldname, ds=None):
-    if ds is None:
-        new_arr = YTArray
-    else:
-        new_arr = ds.arr
-    units = ""
-    if "units" in fh[fieldname].attrs:
-        units = fh[fieldname].attrs["units"]
-    if units == "dimensionless": units = ""
-    return new_arr(fh[fieldname].value, units)

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -13,7 +13,9 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
+import h5py
 import itertools
+import os
 import types
 import uuid
 from yt.extern.six import string_types
@@ -25,9 +27,12 @@
 import shelve
 from contextlib import contextmanager
 
+from yt.funcs import get_output_filename
 from yt.funcs import *
 
 from yt.data_objects.particle_io import particle_handler_registry
+from yt.frontends.ytdata.utilities import \
+    save_as_dataset
 from yt.units.unit_object import UnitParseError
 from yt.utilities.exceptions import \
     YTUnitConversionError, \
@@ -98,6 +103,8 @@
     _con_args = ()
     _skip_add = False
     _container_fields = ()
+    _tds_attrs = ()
+    _tds_fields = ()
     _field_cache = None
     _index = None
 
@@ -463,6 +470,117 @@
         df = pd.DataFrame(data)
         return df
 
+    def save_as_dataset(self, filename=None, fields=None):
+        r"""Export a data object to a reloadable yt dataset.
+
+        This function will take a data object and output a dataset 
+        containing either the fields presently existing or fields 
+        given in the ``fields`` list.  The resulting dataset can be
+        reloaded as a yt dataset.
+
+        Parameters
+        ----------
+        filename : str, optional
+            The name of the file to be written.  If None, the name 
+            will be a combination of the original dataset and the type 
+            of data container.
+        fields : list of strings or tuples, optional
+            If this is supplied, it is the list of fields to be saved to
+            disk.  If not supplied, all the fields that have been queried
+            will be saved.
+
+        Returns
+        -------
+        filename : str
+            The name of the file that has been created.
+
+        Examples
+        --------
+
+        >>> import yt
+        >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
+        >>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
+        >>> fn = sp.save_as_dataset(fields=["density", "temperature"])
+        >>> sphere_ds = yt.load(fn)
+        >>> # the original data container is available as the data attribute
+        >>> print (sphere_ds.data["density"])
+        [  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
+           3.57339907e-30   2.83150720e-30] g/cm**3
+        >>> ad = sphere_ds.all_data()
+        >>> print (ad["temperature"])
+        [  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
+           4.54380547e+04   4.72560117e+04] K
+
+        """
+
+        keyword = "%s_%s" % (str(self.ds), self._type_name)
+        filename = get_output_filename(filename, keyword, ".h5")
+
+        data = {}
+        if fields is not None:
+            for f in self._determine_fields(fields):
+                data[f] = self[f]
+        else:
+            data.update(self.field_data)
+        # get the extra fields needed to reconstruct the container
+        tds_fields = tuple(self._determine_fields(list(self._tds_fields)))
+        for f in [f for f in self._container_fields + tds_fields \
+                  if f not in data]:
+            data[f] = self[f]
+        data_fields = list(data.keys())
+
+        need_grid_positions = False
+        need_particle_positions = False
+        ptypes = []
+        ftypes = {}
+        for field in data_fields:
+            if field in self._container_fields:
+                ftypes[field] = "grid"
+                need_grid_positions = True
+            elif self.ds.field_info[field].particle_type:
+                if field[0] not in ptypes:
+                    ptypes.append(field[0])
+                ftypes[field] = field[0]
+                need_particle_positions = True
+            else:
+                ftypes[field] = "grid"
+                need_grid_positions = True
+        # projections and slices use px and py, so don't need positions
+        if self._type_name in ["cutting", "proj", "slice"]:
+            need_grid_positions = False
+
+        if need_particle_positions:
+            for ax in "xyz":
+                for ptype in ptypes:
+                    p_field = (ptype, "particle_position_%s" % ax)
+                    if p_field in self.ds.field_info and p_field not in data:
+                        data_fields.append(p_field)
+                        ftypes[p_field] = p_field[0]
+                        data[p_field] = self[p_field]
+        if need_grid_positions:
+            for ax in "xyz":
+                g_field = ("index", ax)
+                if g_field in self.ds.field_info and g_field not in data:
+                    data_fields.append(g_field)
+                    ftypes[g_field] = "grid"
+                    data[g_field] = self[g_field]
+                g_field = ("index", "d" + ax)
+                if g_field in self.ds.field_info and g_field not in data:
+                    data_fields.append(g_field)
+                    ftypes[g_field] = "grid"
+                    data[g_field] = self[g_field]
+
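+        # store the container's defining attributes so the ytdata
+        # frontend can rebuild it as the "data" attribute on reload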
+        extra_attrs = dict([(arg, getattr(self, arg, None))
+                            for arg in self._con_args + self._tds_attrs])
+        extra_attrs["con_args"] = self._con_args
+        extra_attrs["data_type"] = "yt_data_container"
+        extra_attrs["container_type"] = self._type_name
+        extra_attrs["dimensionality"] = self._dimensionality
+        save_as_dataset(self.ds, filename, data, field_types=ftypes,
+                        extra_attrs=extra_attrs)
+
+        return filename
+        
     def to_glue(self, fields, label="yt", data_collection=None):
         """
         Takes specific *fields* in the container and exports them to

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -256,7 +256,8 @@
           (("all", "particle_mass") in self.data_source.ds.field_info)
         vals = []
         if use_gas:
-            vals += [(data[ax] * data["gas", "cell_mass"]).sum(dtype=np.float64)
+            vals += [(data["gas", ax] *
+                      data["gas", "cell_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
             vals.append(data["gas", "cell_mass"].sum(dtype=np.float64))
         if use_particles:
@@ -657,7 +658,7 @@
         m = data.ds.quan(0., "g")
         if use_gas:
             e += (data["gas", "kinetic_energy"] *
-                  data["index", "cell_volume"]).sum(dtype=np.float64)
+                  data["gas", "cell_volume"]).sum(dtype=np.float64)
             j += data["gas", "angular_momentum_magnitude"].sum(dtype=np.float64)
             m += data["gas", "cell_mass"].sum(dtype=np.float64)
         if use_particles:

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -16,8 +16,10 @@
 from yt.utilities.on_demand_imports import _h5py as h5py
 import numpy as np
 
+from yt.frontends.ytdata.utilities import \
+    save_as_dataset
+from yt.funcs import get_output_filename
 from yt.funcs import *
-
 from yt.units.yt_array import uconcatenate, array_like_field
 from yt.units.unit_object import Unit
 from yt.data_objects.data_containers import YTFieldData
@@ -949,6 +951,112 @@
         else:
             return np.linspace(mi, ma, n+1)
 
+    def save_as_dataset(self, filename=None):
+        r"""Export a profile to a reloadable yt dataset.
+
+        This function will take a profile and output a dataset
+        containing all relevant fields.  The resulting dataset
+        can be reloaded as a yt dataset.
+
+        Parameters
+        ----------
+        filename : str, optional
+            The name of the file to be written.  If None, the name
+            will be a combination of the original dataset plus
+            the type of object, e.g., Profile1D.
+
+        Returns
+        -------
+        filename : str
+            The name of the file that has been created.
+
+        Examples
+        --------
+
+        >>> import yt
+        >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
+        >>> ad = ds.all_data()
+        >>> profile = yt.create_profile(ad, ["density", "temperature"],
+        ...                             "cell_mass", weight_field=None,
+        ...                             n_bins=(128, 128))
+        >>> fn = profile.save_as_dataset()
+        >>> prof_ds = yt.load(fn)
+        >>> print (prof_ds.data["cell_mass"])
+        (128, 128)
+        >>> print (prof_ds.data["x"].shape) # x bins as 1D array
+        (128,)
+        >>> print (prof_ds.data["density"]) # x bins as 2D array
+        (128, 128)
+        >>> p = yt.PhasePlot(prof_ds.data, "density", "temperature",
+        ...                  "cell_mass", weight_field=None)
+        >>> p.save()
+
+        """
+
+        keyword = "%s_%s" % (str(self.ds), self.__class__.__name__)
+        filename = get_output_filename(filename, keyword, ".h5")
+
+        args = ("field", "log")
+        extra_attrs = {"data_type": "yt_profile",
+                       "profile_dimensions": self.size,
+                       "weight_field": self.weight_field,
+                       "fractional": self.fractional}
+        data = {}
+        data.update(self.field_data)
+        data["weight"] = self.weight
+        data["used"] = self.used.astype("float64")
+
+        dimensionality = 0
+        bin_data = []
+        for ax in "xyz":
+            if hasattr(self, ax):
+                dimensionality += 1
+                data[ax] = getattr(self, ax)
+                bin_data.append(data[ax])
+                bin_field_name = "%s_bins" % ax
+                data[bin_field_name] = getattr(self, bin_field_name)
+                extra_attrs["%s_range" % ax] = self.ds.arr([data[bin_field_name][0],
+                                                            data[bin_field_name][-1]])
+                for arg in args:
+                    key = "%s_%s" % (ax, arg)
+                    extra_attrs[key] = getattr(self, key)
+
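+        # expand the 1D bin arrays to the shape of the profile data so
+        # the bin fields can be used to mask the profile fields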
+        bin_fields = np.meshgrid(*bin_data)
+        for i, ax in enumerate("xyz"[:dimensionality]):
+            data[getattr(self, "%s_field" % ax)] = bin_fields[i]
+
+        extra_attrs["dimensionality"] = dimensionality
+        ftypes = dict([(field, "data") for field in data])
+        save_as_dataset(self.ds, filename, data, field_types=ftypes,
+                        extra_attrs=extra_attrs)
+
+        return filename
+
+class ProfileNDFromDataset(ProfileND):
+    """
+    An ND profile object loaded from a ytdata dataset.
+    """
+    def __init__(self, ds):
+        ProfileND.__init__(self, ds.data, ds.parameters["weight_field"])
+        self.fractional = ds.parameters["fractional"]
+        exclude_fields = ["used", "weight"]
+        for ax in "xyz"[:ds.dimensionality]:
+            setattr(self, ax, ds.data[ax])
+            setattr(self, "%s_bins" % ax, ds.data["%s_bins" % ax])
+            setattr(self, "%s_field" % ax,
+                    tuple(ds.parameters["%s_field" % ax]))
+            setattr(self, "%s_log" % ax, ds.parameters["%s_log" % ax])
+            exclude_fields.extend([ax, "%s_bins" % ax,
+                                   ds.parameters["%s_field" % ax][1]])
+        self.weight = ds.data["weight"]
+        self.used = ds.data["used"].d.astype(bool)
+        profile_fields = [f for f in ds.field_list
+                          if f[1] not in exclude_fields]
+        for field in profile_fields:
+            self.field_map[field[1]] = field
+            self.field_data[field] = ds.data[field]
+            self.field_units[field] = ds.data[field].units
+
 class Profile1D(ProfileND):
     """An object that represents a 1D profile.
 
@@ -1011,6 +1119,14 @@
     def bounds(self):
         return ((self.x_bins[0], self.x_bins[-1]),)
 
+class Profile1DFromDataset(ProfileNDFromDataset, Profile1D):
+    """
+    A 1D profile object loaded from a ytdata dataset.
+    """
+
+    def __init__(self, ds):
+        ProfileNDFromDataset.__init__(self, ds)
+
 class Profile2D(ProfileND):
     """An object that represents a 2D profile.
 
@@ -1108,6 +1224,13 @@
         return ((self.x_bins[0], self.x_bins[-1]),
                 (self.y_bins[0], self.y_bins[-1]))
 
+class Profile2DFromDataset(ProfileNDFromDataset, Profile2D):
+    """
+    A 2D profile object loaded from a ytdata dataset.
+    """
+
+    def __init__(self, ds):
+        ProfileNDFromDataset.__init__(self, ds)
 
 class ParticleProfile(Profile2D):
     """An object that represents a *deposited* 2D profile. This is like a
@@ -1354,6 +1477,13 @@
         self.z_bins.convert_to_units(new_unit)
         self.z = 0.5*(self.z_bins[1:]+self.z_bins[:-1])
 
+class Profile3DFromDataset(ProfileNDFromDataset, Profile3D):
+    """
+    A 3D profile object loaded from a ytdata dataset.
+    """
+
+    def __init__(self, ds):
+        ProfileNDFromDataset.__init__(self, ds)
 
 def sanitize_field_tuple_keys(input_dict, data_source):
     if input_dict is not None:
@@ -1429,8 +1559,8 @@
     >>> profile = create_profile(ad, [("gas", "density")],
     ...                              [("gas", "temperature"),
     ...                               ("gas", "velocity_x")])
-    >>> print profile.x
-    >>> print profile["gas", "temperature"]
+    >>> print (profile.x)
+    >>> print (profile["gas", "temperature"])
 
     """
     bin_fields = data_source._determine_fields(bin_fields)

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -347,6 +347,8 @@
     _key_fields = YTSelectionContainer2D._key_fields + ['pz','pdz']
     _type_name = "cutting"
     _con_args = ('normal', 'center')
+    _tds_attrs = ("_inv_mat",)
+    _tds_fields = ("x", "y", "z", "dx")
     _container_fields = ("px", "py", "pz", "pdx", "pdy", "pdz")
     def __init__(self, normal, center, north_vector=None,
                  ds=None, field_parameters=None, data_source=None):

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -1,7 +1,7 @@
 """
-Generalized Enzo output objects, both static and time-series.
+Dataset and related data structures.
 
-Presumably at some point EnzoRun will be absorbed into here.
+
 
 
 """
@@ -373,6 +373,7 @@
         self.field_info.setup_fluid_fields()
         for ptype in self.particle_types:
             self.field_info.setup_particle_fields(ptype)
+        self.field_info.setup_fluid_index_fields()
         if "all" not in self.particle_types:
             mylog.debug("Creating Particle Union 'all'")
             pu = ParticleUnion("all", list(self.particle_types_raw))

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py
+++ b/yt/fields/field_info_container.py
@@ -63,6 +63,20 @@
     def setup_fluid_fields(self):
         pass
 
+    def setup_fluid_index_fields(self):
+        # Now we get all our index types and set up aliases to them
+        if self.ds is None: return
+        index_fields = set([f for _, f in self if _ == "index"])
+        for ftype in self.ds.fluid_types:
+            if ftype in ("index", "deposit"): continue
+            for f in index_fields:
+                if (ftype, f) in self: continue
+                self.alias((ftype, f), ("index", f))
+                # Different field types have different default units.
+                # We want to make sure the aliased field will have
+                # the same units as the "index" field.
+                self[(ftype, f)].units = self["index", f].units
+
     def setup_particle_fields(self, ptype, ftype='gas', num_neighbors=64 ):
         skip_output_units = ("code_length",)
         for f, (units, aliases, dn) in sorted(self.known_particle_fields):

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/fields/fluid_fields.py
--- a/yt/fields/fluid_fields.py
+++ b/yt/fields/fluid_fields.py
@@ -52,7 +52,7 @@
     create_vector_fields(registry, "velocity", "cm / s", ftype, slice_info)
 
     def _cell_mass(field, data):
-        return data[ftype, "density"] * data["index", "cell_volume"]
+        return data[ftype, "density"] * data[ftype, "cell_volume"]
 
     registry.add_field((ftype, "cell_mass"),
         function=_cell_mass,
@@ -89,11 +89,11 @@
             units = "")
 
     def _courant_time_step(field, data):
-        t1 = data["index", "dx"] / (data[ftype, "sound_speed"]
+        t1 = data[ftype, "dx"] / (data[ftype, "sound_speed"]
                         + np.abs(data[ftype, "velocity_x"]))
-        t2 = data["index", "dy"] / (data[ftype, "sound_speed"]
+        t2 = data[ftype, "dy"] / (data[ftype, "sound_speed"]
                         + np.abs(data[ftype, "velocity_y"]))
-        t3 = data["index", "dz"] / (data[ftype, "sound_speed"]
+        t3 = data[ftype, "dz"] / (data[ftype, "sound_speed"]
                         + np.abs(data[ftype, "velocity_z"]))
         tr = np.minimum(np.minimum(t1, t2), t3)
         return tr
@@ -140,7 +140,7 @@
              units="Zsun")
 
     def _metal_mass(field, data):
-        return data[ftype, "metal_density"] * data["index", "cell_volume"]
+        return data[ftype, "metal_density"] * data[ftype, "cell_volume"]
     registry.add_field((ftype, "metal_mass"),
                        function=_metal_mass,
                        units="g")
@@ -188,7 +188,7 @@
         slice_3dl[axi] = sl_left
         slice_3dr[axi] = sl_right
         def func(field, data):
-            ds = div_fac * data["index", "d%s" % ax]
+            ds = div_fac * data[ftype, "d%s" % ax]
             f  = data[grad_field][slice_3dr]/ds[slice_3d]
             f -= data[grad_field][slice_3dl]/ds[slice_3d]
             new_field = data.ds.arr(np.zeros_like(data[grad_field], dtype=np.float64),

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/frontends/api.py
--- a/yt/frontends/api.py
+++ b/yt/frontends/api.py
@@ -39,6 +39,7 @@
     'sdf',
     'stream',
     'tipsy',
+    'ytdata',
 ]
 
 class _frontend_container:

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/frontends/enzo/simulation_handling.py
--- a/yt/frontends/enzo/simulation_handling.py
+++ b/yt/frontends/enzo/simulation_handling.py
@@ -40,7 +40,7 @@
     mylog
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_objects
-    
+
 class EnzoSimulation(SimulationTimeSeries):
     r"""
     Initialize an Enzo Simulation object.
@@ -101,6 +101,8 @@
             self.length_unit = self.quan(self.box_size, "Mpccm / h",
                                          registry=self.unit_registry)
             self.box_size = self.length_unit
+            self.domain_left_edge = self.domain_left_edge * self.length_unit
+            self.domain_right_edge = self.domain_right_edge * self.length_unit
         else:
             self.time_unit = self.quan(self.parameters["TimeUnits"], "s")
         self.unit_registry.modify("code_time", self.time_unit)
@@ -133,21 +135,21 @@
             datasets for time series.
             Default: True.
         initial_time : tuple of type (float, str)
-            The earliest time for outputs to be included.  This should be 
+            The earliest time for outputs to be included.  This should be
             given as the value and the string representation of the units.
-            For example, (5.0, "Gyr").  If None, the initial time of the 
-            simulation is used.  This can be used in combination with 
+            For example, (5.0, "Gyr").  If None, the initial time of the
+            simulation is used.  This can be used in combination with
             either final_time or final_redshift.
             Default: None.
         final_time : tuple of type (float, str)
-            The latest time for outputs to be included.  This should be 
+            The latest time for outputs to be included.  This should be
             given as the value and the string representation of the units.
-            For example, (13.7, "Gyr"). If None, the final time of the 
-            simulation is used.  This can be used in combination with either 
+            For example, (13.7, "Gyr"). If None, the final time of the
+            simulation is used.  This can be used in combination with either
             initial_time or initial_redshift.
             Default: None.
         times : tuple of type (float array, str)
-            A list of times for which outputs will be found and the units 
+            A list of times for which outputs will be found and the units
             of those values.  For example, ([0, 1, 2, 3], "s").
             Default: None.
         initial_redshift : float
@@ -195,8 +197,8 @@
 
         >>> import yt
         >>> es = yt.simulation("my_simulation.par", "Enzo")
-        
-        >>> es.get_time_series(initial_redshift=10, final_time=(13.7, "Gyr"), 
+
+        >>> es.get_time_series(initial_redshift=10, final_time=(13.7, "Gyr"),
                                redshift_data=False)
 
         >>> es.get_time_series(redshifts=[3, 2, 1, 0])
@@ -304,7 +306,7 @@
         for output in my_outputs:
             if os.path.exists(output['filename']):
                 init_outputs.append(output['filename'])
-            
+
         DatasetSeries.__init__(self, outputs=init_outputs, parallel=parallel,
                                 setup_function=setup_function)
         mylog.info("%d outputs loaded into time series.", len(init_outputs))
@@ -586,11 +588,11 @@
         Check a list of files to see if they are valid datasets.
         """
 
-        only_on_root(mylog.info, "Checking %d potential outputs.", 
+        only_on_root(mylog.info, "Checking %d potential outputs.",
                      len(potential_outputs))
 
         my_outputs = {}
-        for my_storage, output in parallel_objects(potential_outputs, 
+        for my_storage, output in parallel_objects(potential_outputs,
                                                    storage=my_outputs):
             if self.parameters['DataDumpDir'] in output:
                 dir_key = self.parameters['DataDumpDir']
@@ -643,6 +645,6 @@
         self.initial_redshift = initial_redshift
         # time units = 1 / sqrt(4 * pi * G rho_0 * (1 + z_i)**3),
         # rho_0 = (3 * Omega_m * h**2) / (8 * pi * G)
-        self.time_unit = ((1.5 * self.omega_matter * self.hubble_constant**2 * 
+        self.time_unit = ((1.5 * self.omega_matter * self.hubble_constant**2 *
                            (1 + self.initial_redshift)**3)**-0.5).in_units("s")
         self.time_unit.units.registry = self.unit_registry

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/frontends/setup.py
--- a/yt/frontends/setup.py
+++ b/yt/frontends/setup.py
@@ -29,6 +29,7 @@
     config.add_subpackage("sph")
     config.add_subpackage("stream")
     config.add_subpackage("tipsy")
+    config.add_subpackage("ytdata")
     config.add_subpackage("art/tests")
     config.add_subpackage("artio/tests")
     config.add_subpackage("athena/tests")
@@ -47,4 +48,5 @@
     config.add_subpackage("rockstar/tests")
     config.add_subpackage("stream/tests")
     config.add_subpackage("tipsy/tests")
+    config.add_subpackage("ytdata/tests")
     return config

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/frontends/ytdata/__init__.py
--- /dev/null
+++ b/yt/frontends/ytdata/__init__.py
@@ -0,0 +1,15 @@
+"""
+API for ytData frontend.
+
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------

diff -r 811884bbb9f9ba343af6aaa2e2a178fde02a3453 -r 697ca7baf306df33d900376704a185a48ff08723 yt/frontends/ytdata/api.py
--- /dev/null
+++ b/yt/frontends/ytdata/api.py
@@ -0,0 +1,39 @@
+"""
+API for ytData frontend
+
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2014, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from .data_structures import \
+    YTDataContainerDataset, \
+    YTSpatialPlotDataset, \
+    YTGridDataset, \
+    YTGridHierarchy, \
+    YTGrid, \
+    YTNonspatialDataset, \
+    YTNonspatialHierarchy, \
+    YTNonspatialGrid, \
+    YTProfileDataset
+
+from .io import \
+    IOHandlerYTDataContainerHDF5, \
+    IOHandlerYTGridHDF5, \
+    IOHandlerYTSpatialPlotHDF5, \
+    IOHandlerYTNonspatialhdf5
+
+from .fields import \
+    YTDataContainerFieldInfo, \
+    YTGridFieldInfo
+
+from .utilities import \
+    save_as_dataset

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/
