[yt-svn] commit/yt: atmyers: Merged in MatthewTurk/yt (pull request #1763)

commits-noreply at bitbucket.org
Mon Nov 23 13:23:28 PST 2015


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/eb45e6e815fb/
Changeset:   eb45e6e815fb
Branch:      yt
User:        atmyers
Date:        2015-11-23 21:23:01+00:00
Summary:     Merged in MatthewTurk/yt (pull request #1763)

Numpy-like operations
Affected #:  12 files

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -63,6 +63,117 @@
    for i in range(sp["temperature"].size):
        print "(%f,  %f,  %f)    %f" % (sp["x"][i], sp["y"][i], sp["z"][i], sp["temperature"][i])
 
+.. _quickly-selecting-data:
+
+Slicing Syntax for Selecting Data
+---------------------------------
+
+yt provides a mechanism for easily selecting data while doing interactive work
+on the command line.  This allows for region selection based on the full
+domain of the dataset, exposed through a slice-like syntax.  This machinery is
+provided by the ``RegionExpression`` object, available as the ``r`` attribute
+of a ``Dataset`` object.
+
+Getting All The Data
+^^^^^^^^^^^^^^^^^^^^
+
+The ``.r`` attribute serves as a persistent means of accessing the full data
+from a dataset.  You can access this shorthand operation by querying any field
+on the ``.r`` object, like so:
+
+.. code-block:: python
+
+   ds = yt.load("RedshiftOutput0005")
+   rho = ds.r["density"]
+
+This will return a *flattened* array of data.  Note that the region
+expression object (``r``) does not itself provide derived quantities.  This
+is completely equivalent to the following set of statements:
+
+.. code-block:: python
+
+   ds = yt.load("RedshiftOutput0005")
+   dd = ds.all_data()
+   rho = dd["density"]
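+
+Field tuples work as well; for instance, assuming the standard ``gas`` field
+type is defined for your dataset:
+
+.. code-block:: python
+
+   rho = ds.r["gas", "density"]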
+
+.. warning::
+
+   Data accessed in this way is *persistent*: it is loaded into memory and
+   then retained until the dataset is deleted or garbage collected.
+
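+As a sketch of this caching behavior, a second query returns the already
+loaded data rather than re-reading it from disk:
+
+.. code-block:: python
+
+   rho = ds.r["density"]        # reads from disk and caches
+   rho_again = ds.r["density"]  # returns the same in-memory array
+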
+Selecting Multiresolution Regions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To select rectilinear regions, where the data is selected the same way that it
+is selected in a :ref:`region-reference`, you can use slice-like syntax,
+supplying a start and a stop but no step argument.  All three components of
+the slice must be specified, one for each axis in simulation order (if your
+data is ordered z, y, x, for instance, the slices are given in z, y, x order).
+
+Each slice endpoint can be a plain position or, optionally, a ``(value,
+unit)`` tuple.  These define the value with respect to the
+``domain_left_edge`` of the dataset.  So, for instance, you could specify a
+region like so::
+
+   ds.r[(100, 'kpc'):(200,'kpc'),:,:]
+
+This would return a region that included everything between 100 kpc from the
+left edge of the dataset to 200 kpc from the left edge of the dataset in the
+first dimension, and which spans the entire dataset in the second and third
+dimensions.  By default, if the units are unspecified, they are in the "native"
+code units of the dataset.
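+
+As a sketch, for a dataset whose domain spans 0 to 1 in code units, these two
+selections are equivalent::
+
+   ds.r[0.25:0.75, :, :]
+   ds.r[(0.25, 'code_length'):(0.75, 'code_length'), :, :]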
+
+This works for all types of datasets as well.  For instance, if you have a
+geographic dataset (which is usually ordered latitude, longitude, altitude)
+you can easily select, say, one hemisphere with a region selection::
+
+   ds.r[:,-180:0,:]
+
+Selecting Fixed Resolution Regions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+yt also provides functionality for selecting regions that have been turned
+into voxels.  This returns an :ref:`arbitrary-grid` object.  It can be created
+by specifying a complex slice "step", where the start and stop follow the same
+rules as above.  This is similar to how the numpy ``mgrid`` operation works.
+For instance, this code block will generate a grid covering the full domain,
+but resampled to 21x35x100 cells::
+
+  region = ds.r[::21j, ::35j, ::100j]
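+
+For comparison, this is a sketch of the same imaginary-step convention in
+numpy itself::
+
+  import numpy as np
+  xs = np.mgrid[0.0:1.0:21j]  # 21 evenly spaced points, endpoints included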
+
+The left and right edges, as above, can be specified to provide bounds as
+well.  For instance, to select a 10 meter cube with 24 cells in each
+dimension, we could supply::
+
+  region = ds.r[(20,'m'):(30,'m'):24j, (30,'m'):(40,'m'):24j,
+                (7,'m'):(17,'m'):24j]
+
+This can select both particle fields and mesh fields.  Mesh fields will be
+returned as 3D arrays, generated through volume-weighted overlap calculations.
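+
+As a sketch, assuming the dataset defines particle deposition fields, both
+kinds of fields can be queried from the same region::
+
+  region = ds.r[::32j, ::32j, ::32j]
+  rho = region["density"]                  # mesh field, 3D array
+  prho = region["deposit", "all_density"]  # deposited particle field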
+
+Selecting Slices
+^^^^^^^^^^^^^^^^
+
+If one dimension is specified as a single value, that will be the dimension
+along which a slice is made.  This provides a simple means of generating a
+slice from a subset of the data.  For instance, to create a slice of a
+dataset, you can simply specify the full domain along two axes::
+
+   sl = ds.r[:,:,0.25]
+
+This can also be very easily plotted::
+
+   sl = ds.r[:,:,0.25]
+   sl.plot()
+
+Slicing accepts position and unit arguments in the same way::
+
+   sl = ds.r[(20.1, 'km'):(31.0, 'km'), (504.143,'m'):(1000.0,'m'),
+             (900.1, 'm')]
+   sl.plot()
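+
+The returned slice is an ordinary data object, so you can also query fields
+from it directly::
+
+   sl = ds.r[:,:,0.25]
+   rho = sl["density"]  # flattened array of values in the slice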
+
 .. _available-objects:
 
 Available Objects
@@ -144,6 +255,8 @@
       creating a Region covering the entire dataset domain.  It is effectively 
       ``ds.region(ds.domain_center, ds.domain_left_edge, ds.domain_right_edge)``.
 
+.. _region-reference:
+
 **Box Region** 
     | Class :class:`~yt.data_objects.selection_data_containers.YTRegion`
     | Usage: ``region(center, left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
@@ -227,15 +340,15 @@
       interpolates as necessary from coarse regions to fine.  See 
       :ref:`examining-grid-data-in-a-fixed-resolution-array`.
 
-**Fixed-Resolution Region for Particle Deposition** 
+**Fixed-Resolution Region**
     | Class :class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
     | Usage: ``arbitrary_grid(left_edge, right_edge, dimensions, ds=None, field_parameters=None)``
     | When particles are deposited on to mesh fields, they use the existing
       mesh structure, but this may have too much or too little resolution
       relative to the particle locations (or it may not exist at all!).  An
       `arbitrary_grid` provides a means for generating a new independent mesh 
-      structure for particle deposition.  See :ref:`arbitrary-grid` for more 
-      information.
+      structure for particle deposition and simple mesh field interpolation.
+      See :ref:`arbitrary-grid` for more information.
 
 **Projection** 
     | Class :class:`~yt.data_objects.construction_data_containers.YTQuadTreeProj`
@@ -279,6 +392,82 @@
    sp = ds.sphere('c', (10, 'kpc'))
    print sp.quantities.angular_momentum_vector()
 
+Quickly Processing Data
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Most data objects now have multiple numpy-like methods that allow you to
+quickly process data.  More of these methods will be added over time and
+documented in this list.  Most, if not all, of them map to other yt
+operations and are designed as syntactic sugar that slightly simplifies
+otherwise somewhat obtuse pipelines.
+
+These operations are parallelized.
+
+You can compute the extrema of a field by using the ``max`` or ``min``
+functions.  The extrema are cached, so calling ``min`` right after ``max``
+will be considerably faster.  Here is an example::
+
+  ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
+  reg = ds.r[0.3:0.6, 0.2:0.4, 0.9:0.95]
+  min_rho = reg.min("density")
+  max_rho = reg.max("density")
+
+This is equivalent to::
+
+  min_rho, max_rho = reg.quantities.extrema("density")
+
+The ``max`` operation can also compute the maximum intensity projection::
+
+  proj = reg.max("density", axis="x")
+  proj.plot()
+
+This is equivalent to::
+
+  proj = ds.proj("density", "x", data_source=reg, method="mip")
+  proj.plot()
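+
+The ``plot`` method returns the underlying plot window, so, as a sketch using
+yt's standard plot window interface, you can save the image as well::
+
+  pw = proj.plot()
+  pw.save("density_mip")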
+
+The ``min`` operator cannot compute a projection, however, as a minimum
+intensity projection is not currently implemented.
+
+You can also compute the ``mean`` value, which accepts a field, an axis, and
+a weight field.  If the axis is not specified, it will return the average
+value of the specified field, weighted by the weight argument.  The weight
+argument defaults to ``ones``, which performs an arithmetic average.  For
+instance::
+
+  mean_rho = reg.mean("density")
+  rho_by_vol = reg.mean("density", weight="cell_volume")
+
+This is equivalent to::
+
+  mean_rho = reg.quantities.weighted_average_quantity("density", "ones")
+  rho_by_vol = reg.quantities.weighted_average_quantity("density",
+                    "cell_volume")
+
+If an axis is provided, it will project along that axis and return the
+projection to you::
+
+  temp_proj = reg.mean("temperature", axis="y", weight="density")
+  temp_proj.plot()
+
+The ``sum`` function will add all the values in the data object.  It accepts a
+field and, optionally, an axis.  If the axis is left unspecified, it will sum
+the values in the object::
+
+  vol = reg.sum("cell_volume")
+
+If the axis is specified, it will compute a projection using the method
+``sum`` (which does *not* take into account varying path length!) and return
+that to you::
+
+  cell_count = reg.sum("ones", axis="z")
+  cell_count.plot()
+
+To compute a projection where the path length *is* taken into account, you
+can use the ``integrate`` function::
+
+  proj = reg.integrate("density", "x")
+
+All of these projections use the data object they are called on as their
+data source.
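+
+Because region selection and these numpy-like methods compose, a short
+pipeline sketch combining them might look like::
+
+  reg = ds.r[0.3:0.6, 0.2:0.4, :]
+  proj = reg.integrate("density", "x")
+  proj.plot()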
+
 Available Derived Quantities
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -350,8 +539,8 @@
 
 .. _arbitrary-grid:
 
-Arbitrary Grids Objects for Particle Deposition
------------------------------------------------
+Arbitrary Grid Objects
+----------------------
 
 The covering grid and smoothed covering grid objects mandate that they be
 exactly aligned with the mesh.  This is a
@@ -379,6 +568,13 @@
 While these cannot yet be used as input to projections or slices, slices and
 projections can be taken of the data in them and visualized by hand.
 
+These objects, as of yt 3.3, are now also able to "voxelize" mesh fields.  This
+means that you can query the "density" field and it will return the density
+field as deposited, identically to how it would be deposited in a fixed
+resolution buffer.  Note that this means that contributions from misaligned or
+partially-overlapping cells are added in a volume-weighted way, which makes it
+inappropriate for some types of analysis.
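+
+As a sketch, querying a mesh field from an arbitrary grid returns the
+voxelized, volume-weighted deposition::
+
+   ag = ds.arbitrary_grid([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [128, 128, 128])
+   rho = ag["density"]  # 3D array with shape (128, 128, 128)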
+
 .. _boolean_data_objects:
 
 Combining Objects: Boolean Data Objects

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/convenience.py
--- a/yt/convenience.py
+++ b/yt/convenience.py
@@ -54,17 +54,28 @@
                     valid_file.append(False)
         else:
             valid_file.append(False)
+    types_to_check = output_type_registry
     if not any(valid_file):
         try:
             from yt.data_objects.time_series import DatasetSeries
             ts = DatasetSeries.from_filenames(*args, **kwargs)
             return ts
-        except YTOutputNotIdentified:
+        except (TypeError, YTOutputNotIdentified):
             pass
-        mylog.error("None of the arguments provided to load() is a valid file")
-        mylog.error("Please check that you have used a correct path")
-        raise YTOutputNotIdentified(args, kwargs)
-    for n, c in output_type_registry.items():
+        # We check if either the first argument is a dict or list, in which
+        # case we try identifying candidates.
+        if len(args) > 0 and isinstance(args[0], (list, dict)):
+            # This fixes issues where it is assumed the first argument is a
+            # file
+            types_to_check = dict((n, v) for n, v in
+                    output_type_registry.items() if n.startswith("stream_"))
+            # A better way to do this would be to override output_type_registry
+        else:
+            mylog.error("None of the arguments provided to load() is a valid file")
+            mylog.error("Please check that you have used a correct path")
+            raise YTOutputNotIdentified(args, kwargs)
+    for n, c in types_to_check.items():
         if n is None: continue
         if c._is_valid(*args, **kwargs): candidates.append(n)
 

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -43,7 +43,7 @@
 from yt.utilities.lib.Interpolators import \
     ghost_zone_interpolate
 from yt.utilities.lib.misc_utilities import \
-    fill_region
+    fill_region, fill_region_float
 from yt.utilities.lib.marching_cubes import \
     march_cubes_grid, march_cubes_grid_flux
 from yt.utilities.minimal_representation import \
@@ -454,6 +454,25 @@
         pw = self._get_pw(fields, center, width, origin, 'Projection')
         return pw
 
+    def plot(self, fields=None):
+        if hasattr(self.data_source, "left_edge") and \
+            hasattr(self.data_source, "right_edge"):
+            left_edge = self.data_source.left_edge
+            right_edge = self.data_source.right_edge
+            center = (left_edge + right_edge)/2.0
+            width = right_edge - left_edge
+            xax = self.ds.coordinates.x_axis[self.axis]
+            yax = self.ds.coordinates.y_axis[self.axis]
+            lx, rx = left_edge[xax], right_edge[xax]
+            ly, ry = left_edge[yax], right_edge[yax]
+            width = (rx-lx), (ry-ly)
+        else:
+            width = self.ds.domain_width
+            center = self.ds.domain_center
+        pw = self._get_pw(fields, center, width, 'native', 'Projection')
+        pw.show()
+        return pw
+
 class YTCoveringGrid(YTSelectionContainer3D):
     """A 3D region with all data extracted to a single, specified
     resolution.  Left edge should align with a cell boundary, but
@@ -783,7 +802,19 @@
         self._setup_data_source()
 
     def _fill_fields(self, fields):
-        raise NotImplementedError
+        fields = [f for f in fields if f not in self.field_data]
+        if len(fields) == 0: return
+        assert(len(fields) == 1)
+        field = fields[0]
+        dest = np.zeros(self.ActiveDimensions, dtype="float64")
+        for chunk in self._data_source.chunks(fields, "io"):
+            fill_region_float(chunk.fcoords, chunk.fwidth, chunk[field],
+                              self.left_edge, self.right_edge, dest, 1,
+                              self.ds.domain_width,
+                              int(any(self.ds.periodicity)))
+        fi = self.ds._get_field_info(field)
+        self[field] = self.ds.arr(dest, fi.units)
+        
 
 class LevelState(object):
     current_dx = None

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -611,6 +611,216 @@
         else:
             data_collection.append(gdata)
 
+    # Numpy-like Operations
+    def argmax(self, field, axis=None):
+        raise NotImplementedError
+
+    def argmin(self, field, axis=None):
+        raise NotImplementedError
+
+    def _compute_extrema(self, field):
+        if self._extrema_cache is None:
+            self._extrema_cache = {}
+        if field not in self._extrema_cache:
+            # Note we still need to call extrema for each field, as of right
+            # now
+            mi, ma = self.quantities.extrema(field)
+            self._extrema_cache[field] = (mi, ma)
+        return self._extrema_cache[field]
+
+    _extrema_cache = None
+    def max(self, field, axis=None):
+        r"""Compute the maximum of a field, optionally along an axis.
+
+        This will, in a parallel-aware fashion, compute the maximum of the
+        given field.  Supplying an axis will result in a return value of a
+        YTProjection, with method 'mip' for maximum intensity.  If the max has
+        already been requested, it will use the cached extrema value.
+
+        Parameters
+        ----------
+        field : string or tuple of strings
+            The field to maximize.
+        axis : string, optional
+            If supplied, the axis to project the maximum along.
+
+        Returns
+        -------
+        Either a scalar or a YTProjection.
+
+        Examples
+        --------
+
+        >>> max_temp = reg.max("temperature")
+        >>> max_temp_proj = reg.max("temperature", axis="x")
+        """
+        if axis is None:
+            rv = ()
+            fields = ensure_list(field)
+            for f in fields:
+                rv += (self._compute_extrema(f)[1],)
+            if len(fields) == 1:
+                return rv[0]
+            else:
+                return rv
+        elif axis in self.ds.coordinates.axis_name:
+            r = self.ds.proj(field, axis, data_source=self, method="mip")
+            return r
+        else:
+            raise NotImplementedError("Unknown axis %s" % axis)
+
+    def min(self, field, axis=None):
+        r"""Compute the minimum of a field.
+
+        This will, in a parallel-aware fashion, compute the minimum of the
+        given field.  Supplying an axis is not currently supported.  If the
+        extrema have already been requested, the cached values are used.
+
+        Parameters
+        ----------
+        field : string or tuple of strings
+            The field to minimize.
+        axis : string, optional
+            If supplied, the axis to compute the minimum along.
+
+        Returns
+        -------
+        Either a scalar or a tuple of scalars.
+
+        Examples
+        --------
+
+        >>> min_temp = reg.min("temperature")
+        """
+        if axis is None:
+            rv = ()
+            fields = ensure_list(field)
+            for f in fields:
+                rv += (self._compute_extrema(f)[0],)
+            if len(fields) == 1:
+                return rv[0]
+            else:
+                return rv
+        elif axis in self.ds.coordinates.axis_name:
+            raise NotImplementedError("Minimum intensity projection not"
+                                      " implemented.")
+        else:
+            raise NotImplementedError("Unknown axis %s" % axis)
+
+    def std(self, field, weight=None):
+        raise NotImplementedError
+
+    def ptp(self, field):
+        raise NotImplementedError
+
+    def hist(self, field, weight = None, bins = None):
+        raise NotImplementedError
+
+    def mean(self, field, axis=None, weight='ones'):
+        r"""Compute the mean of a field, optionally along an axis, with a
+        weight.
+
+        This will, in a parallel-aware fashion, compute the mean of the
+        given field.  If an axis is supplied, it will return a projection,
+        where the weight is also supplied.  By default the weight is "ones",
+        resulting in a strict average.
+
+        Parameters
+        ----------
+        field : string or tuple of strings
+            The field to average.
+        axis : string, optional
+            If supplied, the axis to compute the mean along (i.e., to project
+            along)
+        weight : string, optional
+            The field to use as a weight.
+
+        Returns
+        -------
+        Scalar or YTProjection.
+
+        Examples
+        --------
+
+        >>> avg_rho = reg.mean("density", weight="cell_volume")
+        >>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
+        """
+        if axis in self.ds.coordinates.axis_name:
+            r = self.ds.proj(field, axis, data_source=self, weight_field=weight)
+        elif axis is None:
+            if weight is None:
+                r = self.quantities.total_quantity(field)
+            else:
+                r = self.quantities.weighted_average_quantity(field, weight)
+        else:
+            raise NotImplementedError("Unknown axis %s" % axis)
+        return r
+
+    def sum(self, field, axis=None):
+        r"""Compute the sum of a field, optionally along an axis.
+
+        This will, in a parallel-aware fashion, compute the sum of the given
+        field.  If an axis is specified, it will return a projection (using
+        method type "sum", which does not take into account path length) along
+        that axis.
+
+        Parameters
+        ----------
+        field : string or tuple of strings
+            The field to sum.
+        axis : string, optional
+            If supplied, the axis to sum along.
+
+        Returns
+        -------
+        Either a scalar or a YTProjection.
+
+        Examples
+        --------
+
+        >>> total_vol = reg.sum("cell_volume")
+        >>> cell_count = reg.sum("ones", axis="x")
+        """
+        # Because we're using ``sum`` to specifically mean a sum or a
+        # projection with the method="sum", we do not utilize the ``mean``
+        # function.
+        if axis in self.ds.coordinates.axis_name:
+            with self._field_parameter_state({'axis':axis}):
+                r = self.ds.proj(field, axis, data_source=self, method="sum")
+        elif axis is None:
+            r = self.quantities.total_quantity(field)
+        else:
+            raise NotImplementedError("Unknown axis %s" % axis)
+        return r
+
+    def integrate(self, field, axis=None):
+        r"""Compute the integral (projection) of a field along an axis.
+
+        This projects a field along an axis.
+
+        Parameters
+        ----------
+        field : string or tuple of strings
+            The field to project.
+        axis : string
+            The axis to project along.
+
+        Returns
+        -------
+        YTProjection
+
+        Examples
+        --------
+
+        >>> column_density = reg.integrate("density", axis="z")
+        """
+        if axis in self.ds.coordinates.axis_name:
+            r = self.ds.proj(field, axis, data_source=self)
+        else:
+            raise NotImplementedError("Unknown axis %s" % axis)
+        return r
+
     @property
     def _hash(self):
         s = "%s" % self

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/region_expression.py
--- /dev/null
+++ b/yt/data_objects/region_expression.py
@@ -0,0 +1,107 @@
+"""
+An object that can live on the dataset to facilitate data access.
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import weakref
+import types
+
+from yt.utilities.exceptions import YTDimensionalityError
+
+class RegionExpression(object):
+    _all_data = None
+    def __init__(self, ds):
+        self.ds = weakref.proxy(ds)
+
+    @property
+    def all_data(self):
+        if self._all_data is None:
+            self._all_data = self.ds.all_data()
+        return self._all_data
+
+    def __getitem__(self, item):
+        # At first, we will only implement this as accepting a slice that is
+        # (optionally) unitful corresponding to a specific set of coordinates
+        # that result in a rectangular prism or a slice.
+        if isinstance(item, types.StringTypes):
+            # This is some field; we will instead pass this back to the
+            # all_data object.
+            return self.all_data[item]
+        if isinstance(item, tuple) and isinstance(item[1], types.StringTypes):
+            return self.all_data[item]
+        if len(item) != self.ds.dimensionality:
+            # Not the right specification, and we don't want to do anything
+            # implicitly.
+            raise YTDimensionalityError(len(item), self.ds.dimensionality)
+        if self.ds.dimensionality != 3:
+            # We'll pass on this for the time being.
+            raise RuntimeError
+
+        # OK, now we need to look at our slices.  How many are a specific
+        # coordinate?
+        
+        if not all(isinstance(v, slice) for v in item):
+            return self._create_slice(item)
+        else:
+            if all(s.start is s.stop is s.step is None for s in item):
+                return self.all_data
+            return self._create_region(item)
+            
+    def _spec_to_value(self, input_tuple):
+        if not isinstance(input_tuple, tuple):
+            # We now assume that it's in code_length
+            return self.ds.quan(input_tuple, 'code_length')
+        v, u = input_tuple
+        value = self.ds.quan(v, u)
+        return value
+
+    def _create_slice(self, slice_tuple):
+        axis = None
+        new_slice = []
+        for ax, v in enumerate(slice_tuple):
+            if not isinstance(v, slice):
+                if axis is not None: raise RuntimeError
+                axis = ax
+                coord = self._spec_to_value(v)
+                new_slice.append(slice(None, None, None))
+            else:
+                new_slice.append(v)
+        # This new slice doesn't need to be a tuple
+        source = self._create_region(new_slice)
+        sl = self.ds.slice(axis, coord, data_source = source)
+        return sl
+
+    def _slice_to_edges(self, ax, val):
+        if val.start is None:
+            l = self.ds.domain_left_edge[ax]
+        else:
+            l = self._spec_to_value(val.start)
+        if val.stop is None:
+            r = self.ds.domain_right_edge[ax]
+        else:
+            r = self._spec_to_value(val.stop)
+        if r < l:
+            raise RuntimeError
+        return l, r
+
+    def _create_region(self, bounds_tuple):
+        left_edge = []
+        right_edge = []
+        dims = []
+        for ax, b in enumerate(bounds_tuple):
+            l, r = self._slice_to_edges(ax, b)
+            left_edge.append(l)
+            right_edge.append(r)
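+            # A complex step (e.g. 24j) encodes a cell count, mirroring the
+            # numpy mgrid convention; its imaginary part supplies the grid
+            # dimensions below.  A plain slice leaves the step as None.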
+            dims.append(getattr(b.step, "imag", None))
+        center = [ (cl + cr)/2.0 for cl, cr in zip(left_edge, right_edge)]
+        if all(d is not None for d in dims):
+            return self.ds.arbitrary_grid(left_edge, right_edge, dims)
+        return self.ds.region(center, left_edge, right_edge)

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -295,6 +295,25 @@
         pw = self._get_pw(fields, center, width, origin, 'Slice')
         return pw
 
+    def plot(self, fields=None):
+        if hasattr(self._data_source, "left_edge") and \
+            hasattr(self._data_source, "right_edge"):
+            left_edge = self._data_source.left_edge
+            right_edge = self._data_source.right_edge
+            center = (left_edge + right_edge)/2.0
+            width = right_edge - left_edge
+            xax = self.ds.coordinates.x_axis[self.axis]
+            yax = self.ds.coordinates.y_axis[self.axis]
+            lx, rx = left_edge[xax], right_edge[xax]
+            ly, ry = left_edge[yax], right_edge[yax]
+            width = (rx-lx), (ry-ly)
+        else:
+            width = self.ds.domain_width
+            center = self.ds.domain_center
+        pw = self._get_pw(fields, center, width, 'native', 'Slice')
+        pw.show()
+        return pw
+
 class YTCuttingPlane(YTSelectionContainer2D):
     """
     This is a data object corresponding to an oblique slice through the

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -59,6 +59,8 @@
 from yt.units.yt_array import \
     YTArray, \
     YTQuantity
+from yt.data_objects.region_expression import \
+    RegionExpression
 
 from yt.geometry.coordinates.api import \
     CoordinateHandler, \
@@ -202,6 +204,7 @@
         self.file_style = file_style
         self.conversion_factors = {}
         self.parameters = {}
+        self.region_expression = self.r = RegionExpression(self)
         self.known_filters = self.known_filters or {}
         self.particle_unions = self.particle_unions or {}
         self.field_units = self.field_units or {}

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/tests/test_covering_grid.py
--- a/yt/data_objects/tests/test_covering_grid.py
+++ b/yt/data_objects/tests/test_covering_grid.py
@@ -3,7 +3,8 @@
 from yt.frontends.stream.data_structures import load_particles
 from yt.testing import \
     fake_random_ds, \
-    assert_equal
+    assert_equal, \
+    assert_almost_equal
 
 def setup():
     from yt.config import ytcfg
@@ -108,3 +109,14 @@
             deposited_mass = obj["deposit", "all_density"].sum() * volume
 
             yield assert_equal, deposited_mass, ds.quan(1.0, 'g')
+
+    # Test that we get identical results to the covering grid for unigrid data.
+    # Testing AMR data is much harder.
+    for nprocs in [1, 2, 4, 8]:
+        ds = fake_random_ds(32, nprocs = nprocs)
+        for ref_level in [0, 1, 2]:
+            cg = ds.covering_grid(ref_level, [0.0, 0.0, 0.0],
+                    2**ref_level * ds.domain_dimensions)
+            ag = ds.arbitrary_grid([0.0, 0.0, 0.0], [1.0, 1.0, 1.0],
+                    2**ref_level * ds.domain_dimensions)
+            yield assert_almost_equal, cg["density"], ag["density"]

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/tests/test_dataset_access.py
--- /dev/null
+++ b/yt/data_objects/tests/test_dataset_access.py
@@ -0,0 +1,39 @@
+from yt.testing import fake_amr_ds, assert_equal
+
+# This will test the "dataset access" method.
+
+def test_region_from_d():
+    ds = fake_amr_ds(fields=["density"])
+    # We'll do a couple here
+
+    # First, no string units
+    reg1 = ds.r[0.2:0.3,0.4:0.6,:]
+    reg2 = ds.region([0.25, 0.5, 0.5], [0.2, 0.4, 0.0], [0.3, 0.6, 1.0])
+    yield assert_equal, reg1["density"], reg2["density"]
+
+    # Now, string units in some -- 1.0 == cm
+    reg1 = ds.r[(0.1, 'cm'):(0.5, 'cm'), :, (0.25, 'cm'): (0.35, 'cm')]
+    reg2 = ds.region([0.3, 0.5, 0.3], [0.1, 0.0, 0.25], [0.5, 1.0, 0.35])
+    yield assert_equal, reg1["density"], reg2["density"]
+
+    # Now, string units in some -- 1.0 == cm
+    reg1 = ds.r[(0.1, 'cm'):(0.5, 'cm'), :, 0.25:0.35]
+    reg2 = ds.region([0.3, 0.5, 0.3], [0.1, 0.0, 0.25], [0.5, 1.0, 0.35])
+    yield assert_equal, reg1["density"], reg2["density"]
+
+    # And, lots of : usage!
+    reg1 = ds.r[:, :, :]
+    reg2 = ds.all_data()
+    yield assert_equal, reg1["density"], reg2["density"]
+
+def test_accessing_all_data():
+    # This will test first that we can access all_data, and next that we can
+    # access it multiple times and get the *same object*.
+    ds = fake_amr_ds(fields=["density"])
+    dd = ds.all_data()
+    yield assert_equal, ds.r["density"], dd["density"]
+    # Now let's assert that it's the same object
+    rho = ds.r["density"]
+    rho *= 2.0
+    yield assert_equal, dd["density"]*2.0, ds.r["density"]
+    yield assert_equal, dd["gas", "density"]*2.0, ds.r["gas", "density"]

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/data_objects/tests/test_numpy_ops.py
--- /dev/null
+++ b/yt/data_objects/tests/test_numpy_ops.py
@@ -0,0 +1,92 @@
+from yt.testing import fake_random_ds, fake_amr_ds, assert_equal
+
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt", "__withintesting"] = "True"
+
+
+def test_mean_sum_integrate():
+    for nprocs in [-1, 1, 2, 16]:
+        if nprocs == -1:
+            ds = fake_amr_ds(fields=("density",))
+        else:
+            ds = fake_random_ds(32, nprocs=nprocs, fields=("density",))
+        ad = ds.all_data()
+
+        # Sums
+        q = ad.sum('density')
+
+        q1 = ad.quantities.total_quantity('density')
+
+        yield assert_equal, q, q1
+
+        # Weighted Averages
+        w = ad.mean("density")
+
+        w1 = ad.quantities.weighted_average_quantity('density', 'ones')
+
+        yield assert_equal, w, w1
+
+        w = ad.mean("density", weight="density")
+
+        w1 = ad.quantities.weighted_average_quantity('density', 'density')
+
+        yield assert_equal, w, w1
+
+        # Projections
+        p = ad.sum('density', axis=0)
+
+        p1 = ds.proj('density', 0, data_source=ad, method="sum")
+
+        yield assert_equal, p['density'], p1['density']
+
+        # Check by axis-name
+        p = ad.sum('density', axis='x')
+
+        yield assert_equal, p['density'], p1['density']
+
+        # Now we check proper projections
+        p = ad.integrate("density", axis=0)
+        p1 = ds.proj("density", 0, data_source=ad)
+
+        yield assert_equal, p['density'], p1['density']
+
+        # Check by axis-name
+        p = ad.integrate('density', axis='x')
+
+        yield assert_equal, p['density'], p1['density']
+
+def test_min_max():
+    for nprocs in [-1, 1, 2, 16]:
+        if nprocs == -1:
+            ds = fake_amr_ds(fields=("density","temperature"))
+        else:
+            ds = fake_random_ds(32, nprocs=nprocs,
+                fields=("density","temperature"))
+
+        ad = ds.all_data()
+
+        q = ad.min("density").v
+        yield assert_equal, q, ad["density"].min()
+
+        q = ad.max("density").v
+        yield assert_equal, q, ad["density"].max()
+
+        p = ad.max("density", axis=1)
+        p1 = ds.proj("density", 1, data_source=ad, method="mip")
+        yield assert_equal, p["density"], p1["density"]
+
+        p = ad.max("density", axis="y")
+        p1 = ds.proj("density", 1, data_source=ad, method="mip")
+        yield assert_equal, p["density"], p1["density"]
+
+        # Test that we can get multiple in a single pass
+
+        qrho, qtemp = ad.max(["density", "temperature"])
+        yield assert_equal, qrho, ad["density"].max()
+        yield assert_equal, qtemp, ad["temperature"].max()
+
+        qrho, qtemp = ad.min(["density", "temperature"])
+        yield assert_equal, qrho, ad["density"].min()
+        yield assert_equal, qtemp, ad["temperature"].min()

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py
+++ b/yt/utilities/exceptions.py
@@ -510,3 +510,12 @@
 
     def __str__(self):
         return self.message
+
+class YTDimensionalityError(YTException):
+    def __init__(self, wrong, right):
+        self.wrong = wrong
+        self.right = right
+
+    def __str__(self):
+        return 'Dimensionality specified was %s but we need %s' % (
+            self.wrong, self.right)

diff -r 5b5389c150b9d9ae33a8e36b03f6626f78b8da53 -r eb45e6e815fb3f781e57ebf9dd941b404da4e030 yt/utilities/lib/misc_utilities.pyx
--- a/yt/utilities/lib/misc_utilities.pyx
+++ b/yt/utilities/lib/misc_utilities.pyx
@@ -20,6 +20,7 @@
 cimport libc.math as math
 from libc.math cimport abs
 from fp_utils cimport fmin, fmax, i64min, i64max
+from yt.geometry.selection_routines cimport _ensure_code
 
 cdef extern from "stdlib.h":
     # NOTE that size_t might not be int
@@ -839,3 +840,109 @@
                                         ifield[i]
                                     tot += 1
     return tot
+
+ at cython.cdivision(True)
+ at cython.boundscheck(False)
+ at cython.wraparound(False)
+def fill_region_float(np.ndarray[np.float64_t, ndim=2] fcoords,
+                      np.ndarray[np.float64_t, ndim=2] fwidth,
+                      np.ndarray[np.float64_t, ndim=1] data,
+                      np.ndarray[np.float64_t, ndim=1] box_left_edge,
+                      np.ndarray[np.float64_t, ndim=1] box_right_edge,
+                      np.ndarray[np.float64_t, ndim=3] dest,
+                      int antialias = 1,
+                      period = None,
+                      int check_period = 1):
+    cdef np.float64_t ds_period[3]
+    cdef np.float64_t box_dds[3], box_idds[3], width[3], LE[3], RE[3]
+    cdef np.int64_t i, j, k, p, xi, yi, zi
+    cdef np.int64_t dims[3], ld[3], ud[3]
+    cdef np.float64_t overlap[3]
+    cdef np.float64_t dsp, osp[3], odsp[3], sp[3], lfd[3], ufd[3]
+    # These are the temp vars we get from the arrays
+    # Some periodicity helpers
+    cdef int diter[3][2]
+    cdef np.float64_t diterv[3][2]
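+    # diter[i][0] is the unshifted image along axis i (always processed);
+    # diter[i][1] is a sentinel: 999 means "no periodic image", and +/-1
+    # marks a cell straddling the box edge.  diterv holds the matching
+    # coordinate shift of +/- one domain period.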
+    if period is not None:
+        for i in range(3):
+            ds_period[i] = period[i]
+    else:
+        ds_period[0] = ds_period[1] = ds_period[2] = 0.0
+    box_left_edge = _ensure_code(box_left_edge)
+    box_right_edge = _ensure_code(box_right_edge)
+    _ensure_code(fcoords)
+    _ensure_code(fwidth)
+    for i in range(3):
+        LE[i] = box_left_edge[i]
+        RE[i] = box_right_edge[i]
+        width[i] = RE[i] - LE[i]
+        dims[i] = dest.shape[i]
+        box_dds[i] = width[i] / dims[i]
+        box_idds[i] = 1.0/box_dds[i]
+        diter[i][0] = diter[i][1] = 0
+        diterv[i][0] = diterv[i][1] = 0.0
+        overlap[i] = 1.0 
+    with nogil:
+        for p in range(fcoords.shape[0]):
+            for i in range(3):
+               diter[i][1] = 999
+               odsp[i] = fwidth[p,i]*0.5
+               osp[i] = fcoords[p,i] # already centered
+               overlap[i] = 1.0
+            dsp = data[p]
+            if check_period == 1:
+                for i in range(3):
+                    if (osp[i] - odsp[i] < LE[i]):
+                        diter[i][1] = +1
+                        diterv[i][1] = ds_period[i]
+                    elif (osp[i] + odsp[i] > RE[i]):
+                        diter[i][1] = -1
+                        diterv[i][1] = -ds_period[i]
+            for xi in range(2):
+                if diter[0][xi] == 999: continue
+                sp[0] = osp[0] + diterv[0][xi]
+                if (sp[0] + odsp[0] < LE[0]) or (sp[0] - odsp[0] > RE[0]): continue
+                for yi in range(2):
+                    if diter[1][yi] == 999: continue
+                    sp[1] = osp[1] + diterv[1][yi]
+                    if (sp[1] + odsp[1] < LE[1]) or (sp[1] - odsp[1] > RE[1]): continue
+                    for zi in range(2):
+                        if diter[2][zi] == 999: continue
+                        sp[2] = osp[2] + diterv[2][zi]
+                        if (sp[2] + odsp[2] < LE[2]) or (sp[2] - odsp[2] > RE[2]): continue
+                        for i in range(3):
+                            ld[i] = <np.int64_t> fmax(((sp[i]-odsp[i]-LE[i])*box_idds[i]),0)
+                            # NOTE: This is a different way of doing it than in the C
+                            # routines.  In C, we were implicitly casting the
+                            # initialization to int, but *not* the conditional, which
+                            # was allowed an extra value:
+                            #     for(j=lc;j<rc;j++)
+                            # here, when assigning lc (double) to j (int) it got
+                            # truncated, but no similar truncation was done in the
+                            # comparison of j to rc (double).  So give ourselves a
+                            # bonus row and bonus column here.
+                            ud[i] = <np.int64_t> fmin(((sp[i]+odsp[i]-LE[i])*box_idds[i] + 1), dims[i])
+                        for i in range(ld[0], ud[0]):
+                            if antialias == 1:
+                                lfd[0] = box_dds[0] * i + LE[0]
+                                ufd[0] = box_dds[0] * (i + 1) + LE[0]
+                                overlap[0] = ((fmin(ufd[0], sp[0]+odsp[0])
+                                           - fmax(lfd[0], (sp[0]-odsp[0])))*box_idds[0])
+                            if overlap[0] < 0.0: continue
+                            for j in range(ld[1], ud[1]):
+                                if antialias == 1:
+                                    lfd[1] = box_dds[1] * j + LE[1]
+                                    ufd[1] = box_dds[1] * (j + 1) + LE[1]
+                                    overlap[1] = ((fmin(ufd[1], sp[1]+odsp[1])
+                                               - fmax(lfd[1], (sp[1]-odsp[1])))*box_idds[1])
+                                if overlap[1] < 0.0: continue
+                                for k in range(ld[2], ud[2]):
+                                    if antialias == 1:
+                                        lfd[2] = box_dds[2] * k + LE[2]
+                                        ufd[2] = box_dds[2] * (k + 1) + LE[2]
+                                        overlap[2] = ((fmin(ufd[2], sp[2]+odsp[2])
+                                                   - fmax(lfd[2], (sp[2]-odsp[2])))*box_idds[2])
+                                        if overlap[2] < 0.0: continue
+                                        dest[i,j,k] += dsp * (overlap[0]*overlap[1]*overlap[2])
+                                    else:
+                                        dest[i,j,k] = dsp

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


