[yt-svn] commit/yt: ngoldbaum: Merged in atmyers/yt (pull request #1643)

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Thu Aug 27 09:33:38 PDT 2015


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/fceb5ab0e3c6/
Changeset:   fceb5ab0e3c6
Branch:      yt
User:        ngoldbaum
Date:        2015-08-27 16:33:21+00:00
Summary:     Merged in atmyers/yt (pull request #1643)

Unstructured Mesh Rendering
Affected #:  40 files

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -1026,6 +1026,60 @@
 * Some functions may behave oddly or not work at all.
 * Data must already reside in memory.
 
+Unstructured Grid Data
+----------------------
+
+See :ref:`loading-numpy-array`,
+:func:`~yt.frontends.stream.data_structures.load_unstructured_mesh` for
+more detail.
+
+In addition to the above grid types, you can also load data stored on
+unstructured meshes. This type of mesh is used, for example, in many
+finite element calculations. Currently, hexahedral, tetrahedral, and
+wedge-shaped mesh elements are supported.
+
+To load an unstructured mesh, you need to specify the following. First,
+you need to have a coordinates array, which should be an (L, 3) array
+that stores the (x, y, z) positions of all of the vertices in the mesh.
+Second, you need to specify a connectivity array, which describes how
+those vertices are connected into mesh elements. The connectivity array
+should be (N, M), where N is the number of elements and M is the
+connectivity length, i.e. the number of vertices per element. Finally,
+you must also specify a data dictionary, where the keys should be
+the names of the fields and the values should be numpy arrays that
+contain the field data. These arrays can either supply cell-averaged
+data for each element, in which case they should be (N, 1), or
+node-centered data, in which case they should be (N, M).
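+
+For illustration, here is a minimal, hand-built sketch of these three inputs:
+a single hexahedral element (a unit cube) with node-centered data for one
+made-up field. The 1-based vertex numbering is an assumption that matches the
+Exodus II convention used by the sample data below; adjust it if your data
+numbers vertices from zero.
+
+.. code-block:: python
+
+   import numpy as np
+   import yt
+
+   # (L, 3) array of vertex positions; here L = 8, the corners of a unit cube
+   coords = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
+                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]],
+                     dtype="float64")
+
+   # (N, M) connectivity array; a single element (N = 1) with M = 8 vertices
+   connectivity = np.array([[1, 2, 3, 4, 5, 6, 7, 8]], dtype="int64")
+
+   # node-centered data for a hypothetical field, also shaped (N, M)
+   data = {('gas', 'test'): np.array([[1., 2., 3., 4., 5., 6., 7., 8.]])}
+
+   ds = yt.load_unstructured_mesh(data, connectivity, coords)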
+
+Here is an example of how to load an in-memory, unstructured mesh dataset:
+
+.. code-block:: python
+
+   import yt
+   import numpy
+   from yt.utilities.exodusII_reader import get_data
+
+   coords, connectivity, data = get_data("MOOSE_sample_data/out.e-s010")
+
+This uses a publicly available `MOOSE <http://mooseframework.org/>`_
+dataset along with the get_data function to parse the coords, connectivity,
+and data. Then, these can be loaded as an in-memory dataset as follows:
+
+.. code-block:: python
+
+    mesh_id = 0
+    ds = yt.load_unstructured_mesh(data[mesh_id], connectivity[mesh_id], coords[mesh_id])
+
+Note that load_unstructured_mesh can take either a single mesh or a list of
+meshes. Here, we have selected only the first mesh to load.
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Integration is not implemented.
+* Some functions may behave oddly or not work at all.
+* Data must already reside in memory.
+
 Generic Particle Data
 ---------------------
 

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -397,6 +397,7 @@
    ~yt.frontends.stream.data_structures.load_amr_grids
    ~yt.frontends.stream.data_structures.load_particles
    ~yt.frontends.stream.data_structures.load_hexahedral_mesh
+   ~yt.frontends.stream.data_structures.load_unstructured_mesh
 
 Derived Datatypes
 -----------------
@@ -632,6 +633,7 @@
    ~yt.visualization.volume_rendering.api.BoxSource
    ~yt.visualization.volume_rendering.api.GridSource
    ~yt.visualization.volume_rendering.api.CoordinateVectorSource
+   ~yt.visualization.volume_rendering.render_source.MeshSource
 
 Streamlining
 ^^^^^^^^^^^^

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 doc/source/visualizing/index.rst
--- a/doc/source/visualizing/index.rst
+++ b/doc/source/visualizing/index.rst
@@ -15,6 +15,7 @@
    callbacks
    manual_plotting
    volume_rendering
+   unstructured_mesh_rendering
    hardware_volume_rendering
    sketchfab
    mapserver

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 doc/source/visualizing/unstructured_mesh_rendering.rst
--- /dev/null
+++ b/doc/source/visualizing/unstructured_mesh_rendering.rst
@@ -0,0 +1,174 @@
+.. _unstructured_mesh_rendering:
+
+Unstructured Mesh Rendering
+===========================
+
+Beginning with version 3.3, yt has the ability to volume render unstructured
+meshes from, for example, finite element calculations. In order to use this
+capability, a few additional dependencies are required beyond those you get
+when you run the install script. First, `embree <https://embree.github.io>`_
+(a fast software ray-tracing library from Intel) must be installed, either
+by compiling from source or by using one of the pre-built binaries available
+at Embree's `downloads <https://embree.github.io/downloads.html>`_ page. Once
+Embree is installed, you must also create a symlink next to the library. For
+example, if the libraries were installed at /usr/local/lib/, you must do
+
+.. code-block:: bash
+
+    sudo ln -s /usr/local/lib/libembree.2.6.1.dylib /usr/local/lib/libembree.so
+
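+The above command assumes Mac OS X; on Linux the shared library uses the
+``.so`` suffix, so the equivalent command would look something like the
+following (a sketch, assuming the same version and install prefix):
+
+.. code-block:: bash
+
+    sudo ln -s /usr/local/lib/libembree.so.2.6.1 /usr/local/lib/libembree.so
+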
+Second, the python bindings for embree (called
+`pyembree <https://github.com/scopatz/pyembree>`_) must also be installed. To
+do so, first obtain a copy, e.g. by cloning the repo:
+
+.. code-block:: bash
+
+    git clone https://github.com/scopatz/pyembree
+
+To install, navigate to the root directory and run the setup script:
+
+.. code-block:: bash
+
+    python setup.py develop
+
+If Embree was installed to some location that is not in your path by default,
+you will need to pass in CFLAGS and LDFLAGS to the setup.py script. For example,
+the Mac OS package installer puts the installation at /opt/local/ instead of 
+/usr/local. To account for this, you would do:
+
+.. code-block:: bash
+
+    CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py develop
+
+You must also use these flags when building any part of yt that links against
+pyembree.
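+
+For example, recompiling yt's extension modules from a source checkout with
+the same flags might look like the following (a sketch; the checkout path
+shown is hypothetical):
+
+.. code-block:: bash
+
+    cd /path/to/yt
+    CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py develop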
+
+Once the prerequisites are installed, unstructured mesh data can be rendered
+much like any other dataset. In particular, a new type of
+:class:`~yt.visualization.volume_rendering.render_source.RenderSource` object
+has been defined, called the
+:class:`~yt.visualization.volume_rendering.render_source.MeshSource`, that
+represents the unstructured mesh data that will be rendered. The user creates
+this object and also defines a
+:class:`~yt.visualization.volume_rendering.camera.Camera`
+that specifies the viewpoint into the scene. When the
+:class:`~yt.visualization.volume_rendering.render_source.MeshSource` is rendered,
+a set of rays is cast at the source. Each time a ray strikes the source mesh,
+the data is sampled at the intersection point and the resulting value gets
+saved into an image.
+
+See below for examples. First, here is an example of rendering a hexahedral mesh.
+
+.. python-script::
+
+   import yt
+   import pylab as plt
+   from yt.visualization.volume_rendering.render_source import MeshSource
+   from yt.visualization.volume_rendering.camera import Camera
+   from yt.utilities.exodusII_reader import get_data
+
+   # load the data
+   coords, connectivity, data = get_data("MOOSE_sample_data/out.e-s010")
+   mesh_id = 0
+   field_name = ('gas', 'diffused')
+   ds = yt.load_unstructured_mesh(data[mesh_id], connectivity[mesh_id], coords[mesh_id])
+
+   # create the RenderSource
+   ms = MeshSource(ds, field_name)
+
+   # set up camera
+   cam = Camera(ds)
+   camera_position = ds.arr([-3.0, 3.0, -3.0], 'code_length')
+   north_vector = ds.arr([0.0, 1.0, 0.0], 'dimensionless')
+   cam.resolution = (800, 800)
+   cam.set_position(camera_position, north_vector)
+
+   # make the image
+   im = ms.render(cam)
+
+   # plot and save
+   plt.imshow(im, cmap='Eos A', origin='lower', vmin=0, vmax=2.0)
+   plt.gca().axes.get_xaxis().set_visible(False)
+   plt.gca().axes.get_yaxis().set_visible(False)
+   cb = plt.colorbar()
+   cb.set_label(field_name[1])
+   plt.savefig('hex_mesh_render.png')
+
+Next, here is an example of rendering a dataset with tetrahedral mesh elements.
+
+.. python-script::
+
+   import yt
+   import pylab as plt
+   from yt.visualization.volume_rendering.render_source import MeshSource
+   from yt.visualization.volume_rendering.camera import Camera
+   from yt.utilities.exodusII_reader import get_data
+
+   # load the data
+   filename = "MOOSE_sample_data/high_order_elems_tet4_refine_out.e"
+   coords, connectivity, data = get_data(filename)
+   mesh_id = 0
+   field_name = ('gas', 'u')
+   ds = yt.load_unstructured_mesh(data[mesh_id], connectivity[mesh_id], coords[mesh_id])
+
+   # create the RenderSource
+   ms = MeshSource(ds, field_name)
+
+   # set up camera
+   cam = Camera(ds)
+   camera_position = ds.arr([3.0, 3.0, 3.0], 'code_length')
+   cam.set_width(ds.arr([2.0, 2.0, 2.0], 'code_length'))
+   north_vector = ds.arr([0.0, 1.0, 0.0], 'dimensionless')
+   cam.resolution = (800, 800)
+   cam.set_position(camera_position, north_vector)
+
+   # make the image
+   im = ms.render(cam)
+
+   # plot and save
+   plt.imshow(im, cmap='Eos A', origin='lower', vmin=0.0, vmax=1.0)
+   plt.gca().axes.get_xaxis().set_visible(False)
+   plt.gca().axes.get_yaxis().set_visible(False)
+   cb = plt.colorbar()
+   cb.set_label(field_name[1])
+   plt.savefig('tet_mesh_render.png')
+
+Finally, here is a script that creates the frames of a movie. It repeatedly
+calls the camera's rotate() method, stepping through a full revolution and
+saving a new image to disk after each step.
+
+.. code-block:: python
+
+   import yt
+   import numpy as np
+   import pylab as plt
+   from yt.visualization.volume_rendering.render_source import MeshSource
+   from yt.visualization.volume_rendering.camera import Camera
+   from yt.utilities.exodusII_reader import get_data
+
+   # load dataset
+   coords, connectivity, data = get_data("MOOSE_sample_data/out.e-s010")
+   mesh_id = 0
+   field_name = ('gas', 'diffused')
+   ds = yt.load_unstructured_mesh(data[mesh_id], connectivity[mesh_id], coords[mesh_id])
+
+   # create the RenderSource
+   ms = MeshSource(ds, field_name)
+
+   # set up camera
+   cam = Camera(ds)
+   camera_position = ds.arr([-3.0, 3.0, -3.0], 'code_length')
+   north_vector = ds.arr([0.0, 1.0, 0.0], 'dimensionless')
+   cam.set_position(camera_position, north_vector)
+   cam.steady_north = True
+
+   # make movie frames
+   num_frames = 301
+   for i in range(num_frames):
+       cam.rotate(2.0*np.pi/num_frames)
+       im = ms.render(cam)
+       plt.imshow(im, cmap='Eos A', origin='lower', vmin=0.0, vmax=2.0)
+       plt.gca().axes.get_xaxis().set_visible(False)
+       plt.gca().axes.get_yaxis().set_visible(False)
+       cb = plt.colorbar()
+       cb.set_label('diffused')
+       plt.savefig('movie_frames/surface_render_%.4d.png' % i)
+       plt.clf()
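+
+Once the frames are written out, they can be stitched together into a movie
+with an external tool such as ffmpeg (a sketch, assuming ffmpeg is installed;
+the output filename is arbitrary):
+
+.. code-block:: bash
+
+    ffmpeg -framerate 30 -i movie_frames/surface_render_%04d.png surface_render.mp4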

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/__init__.py
--- a/yt/__init__.py
+++ b/yt/__init__.py
@@ -134,7 +134,7 @@
 from yt.frontends.stream.api import \
     load_uniform_grid, load_amr_grids, \
     load_particles, load_hexahedral_mesh, load_octree, \
-    hexahedral_connectivity
+    hexahedral_connectivity, load_unstructured_mesh
 
 # For backwards compatibility
 GadgetDataset = frontends.gadget.GadgetDataset

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -826,6 +826,13 @@
             self.index._identify_base_chunk(self)
         return self._current_chunk.fwidth
 
+    @property
+    def fcoords_vertex(self):
+        if self._current_chunk is None:
+            self.index._identify_base_chunk(self)
+        return self._current_chunk.fcoords_vertex
+
+
 class YTSelectionContainer0D(YTSelectionContainer):
     _spatial = False
     _dimensionality = 0

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/data_objects/unstructured_mesh.py
--- a/yt/data_objects/unstructured_mesh.py
+++ b/yt/data_objects/unstructured_mesh.py
@@ -46,7 +46,10 @@
         # This is where we set up the connectivity information
         self.connectivity_indices = connectivity_indices
         if connectivity_indices.shape[1] != self._connectivity_length:
-            raise RuntimeError
+            if self._connectivity_length == -1:
+                self._connectivity_length = connectivity_indices.shape[1]
+            else:
+                raise RuntimeError
         self.connectivity_coords = connectivity_coords
         self.ds = index.dataset
         self._index = index
@@ -90,8 +93,14 @@
     def _generate_container_field(self, field):
         raise NotImplementedError
 
-    def select_fcoords(self, dobj):
-        raise NotImplementedError
+    def select_fcoords(self, dobj = None):
+        # This computes centroids!
+        mask = self._get_selector_mask(dobj.selector)
+        if mask is None: return np.empty((0,3), dtype='float64')
+        centers = fill_fcoords(self.connectivity_coords,
+                               self.connectivity_indices,
+                               self._index_offset)
+        return centers[mask, :]
 
     def select_fwidth(self, dobj):
         raise NotImplementedError
@@ -126,7 +135,7 @@
         mask = self._get_selector_mask(selector)
         count = self.count(selector)
         if count == 0: return 0
-        dest[offset:offset+count] = source.flat[mask]
+        dest[offset:offset+count] = source[mask,...]
         return count
 
     def count(self, selector):
@@ -143,6 +152,25 @@
         mask = selector.select_points(x,y,z, 0.0)
         return mask
 
+    def _get_selector_mask(self, selector):
+        if hash(selector) == self._last_selector_id:
+            mask = self._last_mask
+        else:
+            self._last_mask = mask = selector.fill_mesh_cell_mask(self)
+            self._last_selector_id = hash(selector)
+            if mask is None:
+                self._last_count = 0
+            else:
+                self._last_count = mask.sum()
+        return mask
+
+    def select_fcoords_vertex(self, dobj = None):
+        mask = self._get_selector_mask(dobj.selector)
+        if mask is None: return np.empty((0,self._connectivity_length,3), dtype='float64')
+        vertices = self.connectivity_coords[
+                self.connectivity_indices - 1]
+        return vertices[mask, :, :]
+
 class SemiStructuredMesh(UnstructuredMesh):
     _connectivity_length = 8
     _type_name = 'semi_structured_mesh'
@@ -161,14 +189,6 @@
         elif field == "dz":
             return self._current_chunk.fwidth[:,2]
 
-    def select_fcoords(self, dobj = None):
-        mask = self._get_selector_mask(dobj.selector)
-        if mask is None: return np.empty((0,3), dtype='float64')
-        centers = fill_fcoords(self.connectivity_coords,
-                               self.connectivity_indices,
-                               self._index_offset)
-        return centers[mask, :]
-
     def select_fwidth(self, dobj):
         mask = self._get_selector_mask(dobj.selector)
         if mask is None: return np.empty((0,3), dtype='float64')
@@ -189,15 +209,3 @@
         dt, t = dobj.selector.get_dt_mesh(self, mask.sum(), self._index_offset)
         return dt, t
 
-    def _get_selector_mask(self, selector):
-        if hash(selector) == self._last_selector_id:
-            mask = self._last_mask
-        else:
-            self._last_mask = mask = selector.fill_mesh_cell_mask(self)
-            self._last_selector_id = hash(selector)
-            if mask is None:
-                self._last_count = 0
-            else:
-                self._last_count = mask.sum()
-        return mask
-

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/fields/field_detector.py
--- a/yt/fields/field_detector.py
+++ b/yt/fields/field_detector.py
@@ -240,6 +240,13 @@
         return self.ds.arr(fc, input_units = "code_length")
 
     @property
+    def fcoords_vertex(self):
+        fc = np.random.random((self.nd, self.nd, self.nd, 8, 3))
+        if self.flat:
+            fc.shape = (self.nd*self.nd*self.nd, 8, 3)
+        return self.ds.arr(fc, input_units = "code_length")
+
+    @property
     def icoords(self):
         ic = np.mgrid[0:self.nd-1:self.nd*1j,
                       0:self.nd-1:self.nd*1j,

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/fields/tests/test_fields.py
--- a/yt/fields/tests/test_fields.py
+++ b/yt/fields/tests/test_fields.py
@@ -178,6 +178,9 @@
     for field in sorted(base_ds.field_info):
         if field[1].find("beta_p") > -1:
             continue
+        if field[1].find("vertex") > -1:
+            # don't test the vertex fields for now
+            continue
         if field in base_ds.field_list:
             # Don't know how to test this.  We need some way of having fields
             # that are fallbacks be tested, but we don't have that now.

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/frontends/stream/api.py
--- a/yt/frontends/stream/api.py
+++ b/yt/frontends/stream/api.py
@@ -24,7 +24,8 @@
       load_hexahedral_mesh, \
       hexahedral_connectivity, \
       load_octree, \
-      refine_amr
+      refine_amr, \
+      load_unstructured_mesh
 
 from .fields import \
       StreamFieldInfo

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py
+++ b/yt/frontends/stream/data_structures.py
@@ -50,7 +50,7 @@
 from yt.geometry.oct_container import \
     OctreeContainer
 from yt.geometry.unstructured_mesh_handler import \
-           UnstructuredIndex
+    UnstructuredIndex
 from yt.data_objects.static_output import \
     Dataset
 from yt.utilities.logger import ytLogger as mylog
@@ -69,8 +69,8 @@
 from yt.utilities.flagging_methods import \
     FlaggingGrid
 from yt.data_objects.unstructured_mesh import \
-           SemiStructuredMesh, \
-           UnstructuredMesh
+    SemiStructuredMesh, \
+    UnstructuredMesh
 from yt.extern.six import string_types, iteritems
 from .fields import \
     StreamFieldInfo
@@ -1602,7 +1602,7 @@
         connec = ensure_list(self.stream_handler.fields.pop("connectivity"))
         self.meshes = [StreamUnstructuredMesh(
           i, self.index_filename, c1, c2, self)
-          for i, (c1, c2) in enumerate(zip(coords, connec))]
+          for i, (c1, c2) in enumerate(zip(connec, coords))]
 
     def _setup_data_io(self):
         if self.stream_handler.io is not None:
@@ -1614,7 +1614,139 @@
         self.field_list = list(set(self.stream_handler.get_fields()))
 
 class StreamUnstructuredMeshDataset(StreamDataset):
-    _index_class = StreamUnstructuredMesh
+    _index_class = StreamUnstructuredIndex
     _field_info_class = StreamFieldInfo
     _dataset_type = "stream_unstructured"
 
+def load_unstructured_mesh(data, connectivity, coordinates,
+                         length_unit = None, bbox=None, sim_time=0.0,
+                         mass_unit = None, time_unit = None,
+                         velocity_unit = None, magnetic_unit = None,
+                         periodicity=(False, False, False),
+                         geometry = "cartesian"):
+    r"""Load an unstructured mesh of data into yt as a
+    :class:`~yt.frontends.stream.data_structures.StreamHandler`.
+
+    This should allow unstructured mesh data to be loaded directly into
+    yt and analyzed as would any other dataset.  Not all functionality for
+    visualization will be present, and some analysis functions may not yet
+    have been implemented.
+
+    Particle fields are detected as one-dimensional fields. The number of particles
+    is set by the "number_of_particles" key in data.
+
+    Parameters
+    ----------
+    data : dict or list of dicts
+        This is a list of dicts of numpy arrays, where each element in the list
+        is a different mesh, and where the keys of dicts are the field names.
+        If a dict is supplied, this will be assumed to be the only mesh.
+    connectivity : list of array_like or array_like
+        This is the connectivity array for the meshes; this should either be a
+        list where each element in the list is a numpy array or a single numpy
+        array.  Each array in the list can have different connectivity length
+        and should be of shape (N,M) where N is the number of elements and M is
+        the connectivity length.
+    coordinates : array_like
+        This should be of size (L,3) where L is the number of vertices
+        indicated in the connectivity matrix.
+    bbox : array_like (xdim:zdim, LE:RE), optional
+        Size of computational domain in units of the length unit.
+    sim_time : float, optional
+        The simulation time in seconds
+    mass_unit : string
+        Unit to use for masses.  Defaults to unitless.
+    time_unit : string
+        Unit to use for times.  Defaults to unitless.
+    velocity_unit : string
+        Unit to use for velocities.  Defaults to unitless.
+    magnetic_unit : string
+        Unit to use for magnetic fields. Defaults to unitless.
+    periodicity : tuple of booleans
+        Determines whether the data will be treated as periodic along
+        each axis
+    geometry : string or tuple
+        "cartesian", "cylindrical", "polar", "spherical", "geographic" or
+        "spectral_cube".  Optionally, a tuple can be provided to specify the
+        axis ordering -- for instance, to specify that the axis ordering should
+        be z, x, y, this would be: ("cartesian", ("z", "x", "y")).  The same
+        can be done for other coordinates, for instance: 
+        ("spherical", ("theta", "phi", "r")).
+
+    """
+
+    domain_dimensions = np.ones(3, "int32") * 2
+    nprocs = 1
+    data = ensure_list(data)
+    connectivity = ensure_list(connectivity)
+    if bbox is None:
+        bbox = np.array([[coordinates[:,i].min() - 0.1 * abs(coordinates[:,i].min()),
+                          coordinates[:,i].max() + 0.1 * abs(coordinates[:,i].max())]
+                          for i in range(3)], "float64")
+    domain_left_edge = np.array(bbox[:, 0], 'float64')
+    domain_right_edge = np.array(bbox[:, 1], 'float64')
+    grid_levels = np.zeros(nprocs, dtype='int32').reshape((nprocs,1))
+
+    field_units = {}
+    particle_types = {}
+    sfh = StreamDictFieldHandler()
+
+    sfh.update({'connectivity': connectivity,
+                'coordinates': coordinates})
+    for i, d in enumerate(data):
+        _f_unit, _data = unitify_data(d)
+        field_units.update(_f_unit)
+        sfh[i] = _data
+        particle_types.update(set_particle_types(d))
+    # Simple check for axis length correctness
+    if 0 and len(data) > 0:
+        fn = list(sorted(data))[0]
+        array_values = data[fn]
+        if array_values.size != connectivity.shape[0]:
+            mylog.error("Dimensions of array must be one fewer than the" +
+                        " coordinate set.")
+            raise RuntimeError
+    grid_left_edges = domain_left_edge
+    grid_right_edges = domain_right_edge
+    grid_dimensions = domain_dimensions.reshape(nprocs,3).astype("int32")
+
+    if length_unit is None:
+        length_unit = 'code_length'
+    if mass_unit is None:
+        mass_unit = 'code_mass'
+    if time_unit is None:
+        time_unit = 'code_time'
+    if velocity_unit is None:
+        velocity_unit = 'code_velocity'
+    if magnetic_unit is None:
+        magnetic_unit = 'code_magnetic'
+
+    # I'm not sure we need any of this.
+    handler = StreamHandler(
+        grid_left_edges,
+        grid_right_edges,
+        grid_dimensions,
+        grid_levels,
+        -np.ones(nprocs, dtype='int64'),
+        np.zeros(nprocs, dtype='int64').reshape(nprocs,1), # Temporary
+        np.zeros(nprocs).reshape((nprocs,1)),
+        sfh,
+        field_units,
+        (length_unit, mass_unit, time_unit, velocity_unit, magnetic_unit),
+        particle_types=particle_types,
+        periodicity=periodicity
+    )
+
+    handler.name = "UnstructuredMeshData"
+    handler.domain_left_edge = domain_left_edge
+    handler.domain_right_edge = domain_right_edge
+    handler.refine_by = 2
+    handler.dimensionality = 3
+    handler.domain_dimensions = domain_dimensions
+    handler.simulation_time = sim_time
+    handler.cosmology_simulation = 0
+
+    sds = StreamUnstructuredMeshDataset(handler, geometry = geometry)
+
+    return sds
+

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py
+++ b/yt/frontends/stream/io.py
@@ -255,3 +255,39 @@
                         subset.domain_id - subset._domain_offset][field]
                 subset.fill(field_vals, rv, selector, ind)
         return rv
+
+
+class IOHandlerStreamUnstructured(BaseIOHandler):
+    _dataset_type = "stream_unstructured"
+    _node_types = ("diffused", "convected", "u", "temp")
+
+    def __init__(self, ds):
+        self.fields = ds.stream_handler.fields
+        super(IOHandlerStreamUnstructured, self).__init__(ds)
+
+    def _read_fluid_selection(self, chunks, selector, fields, size):
+        chunks = list(chunks)
+        chunk = chunks[0]
+        mesh_id = chunk.objs[0].mesh_id
+        rv = {}
+        for field in fields:
+            ftype, fname = field
+            nodes_per_element = self.fields[mesh_id][field].shape[1]
+            if fname in self._node_types:
+                rv[field] = np.empty((size, nodes_per_element), dtype="float64")
+            else:
+                rv[field] = np.empty(size, dtype="float64")
+        ngrids = sum(len(chunk.objs) for chunk in chunks)
+        mylog.debug("Reading %s cells of %s fields in %s blocks",
+                    size, [fname for ftype, fname in fields], ngrids)
+        for field in fields:
+            ind = 0
+            ftype, fname = field
+            for chunk in chunks:
+                for g in chunk.objs:
+                    ds = self.fields[g.mesh_id].get(field, None)
+                    if ds is None:
+                        ds = self.fields[g.mesh_id][fname]
+                    ind += g.select(selector, ds, rv[field], ind) # caches
+        return rv
+

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/geometry/coordinates/cartesian_coordinates.py
--- a/yt/geometry/coordinates/cartesian_coordinates.py
+++ b/yt/geometry/coordinates/cartesian_coordinates.py
@@ -18,9 +18,13 @@
 from .coordinate_handler import \
     CoordinateHandler, \
     _unknown_coord, \
-    _get_coord_fields
+    _get_coord_fields, \
+    _get_vert_fields, \
+    cartesian_to_cylindrical, \
+    cylindrical_to_cartesian
 import yt.visualization._MPL as _MPL
 
+
 class CartesianCoordinateHandler(CoordinateHandler):
 
     def __init__(self, ds, ordering = ('x','y','z')):
@@ -38,6 +42,10 @@
             registry.add_field(("index", "%s" % ax), function = f2,
                                display_field = False,
                                units = "code_length")
+            f3 = _get_vert_fields(axi)
+            registry.add_field(("index", "vertex_%s" % ax), function = f3,
+                               display_field = False,
+                               units = "code_length")
         def _cell_volume(field, data):
             rv  = data["index", "dx"].copy(order='K')
             rv *= data["index", "dy"]

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/geometry/coordinates/coordinate_handler.py
--- a/yt/geometry/coordinates/coordinate_handler.py
+++ b/yt/geometry/coordinates/coordinate_handler.py
@@ -45,6 +45,12 @@
         return data._reshape_vals(rv)
     return _dds, _coords
 
+def _get_vert_fields(axi, units = "code_length"):
+    def _vert(field, data):
+        rv = data.ds.arr(data.fcoords_vertex[...,axi].copy(), units)
+        return rv
+    return _vert
+
 def validate_iterable_width(width, ds, unit=None):
     if isinstance(width[0], tuple) and isinstance(width[1], tuple):
         validate_width_tuple(width[0])

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -402,6 +402,20 @@
             ind += gt.size
         return cdt
 
+    @cached_property
+    def fcoords_vertex(self):
+        ci = np.empty((self.data_size, 8, 3), dtype='float64')
+        ci = YTArray(ci, input_units = "code_length",
+                     registry = self.dobj.ds.unit_registry)
+        if self.data_size == 0: return ci
+        ind = 0
+        for obj in self.objs:
+            c = obj.select_fcoords_vertex(self.dobj)
+            if c.shape[0] == 0: continue
+            ci[ind:ind+c.shape[0], :, :] = c
+            ind += c.shape[0]
+        return ci
+
 class ChunkDataCache(object):
     def __init__(self, base_iter, preload_fields, geometry_handler,
                  max_length = 256):

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -387,8 +387,6 @@
         cdef int npoints, nv = mesh._connectivity_length
         cdef int total = 0
         cdef int offset = mesh._index_offset
-        if nv != 8:
-            raise RuntimeError
         coords = _ensure_code(mesh.connectivity_coords)
         indices = mesh.connectivity_indices
         npoints = indices.shape[0]

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/exodusII_reader.py
--- /dev/null
+++ b/yt/utilities/exodusII_reader.py
@@ -0,0 +1,46 @@
+import string
+from itertools import takewhile
+from netCDF4 import Dataset
+import numpy as np
+from yt.config import ytcfg
+import os
+
+
+def sanitize_string(s):
+    s = "".join(_ for _ in takewhile(lambda a: a in string.printable, s))
+    return s
+
+
+def get_data(fn):
+    try:
+        f = Dataset(fn)
+    except RuntimeError:
+        f = Dataset(os.path.join(ytcfg.get("yt", "test_data_dir"), fn))
+    fvars = f.variables
+    # Is this correct?
+    etypes = fvars["eb_status"][:]
+    nelem = etypes.shape[0]
+#    varnames = [sanitize_string(v.tostring()) for v in
+#                fvars["name_elem_var"][:]]
+    nodnames = [sanitize_string(v.tostring()) for v in
+                fvars["name_nod_var"][:]]
+    coord = np.array([fvars["coord%s" % ax][:]
+                     for ax in 'xyz']).transpose().copy()
+    coords = []
+    connects = []
+    data = []
+    for i in range(nelem):
+        connects.append(fvars["connect%s" % (i+1)][:].astype("i8"))
+        ci = connects[-1]
+        coords.append(coord)  # Same for all
+        vals = {}
+#        for j, v in enumerate(varnames):
+#            values = fvars["vals_elem_var%seb%s" % (j+1, i+1)][:]
+#            vals['gas', v] = values.astype("f8")[-1, :]
+        for j, v in enumerate(nodnames):
+            # We want just for this set of nodes all the node variables
+            # Use (ci - 1) to get these values
+            values = fvars["vals_nod_var%s" % (j+1)][:]
+            vals['gas', v] = values.astype("f8")[-1, ci - 1, ...]
+        data.append(vals)
+    return coords, connects, data

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/element_mappings.pxd
--- /dev/null
+++ b/yt/utilities/lib/element_mappings.pxd
@@ -0,0 +1,113 @@
+cimport numpy as np
+from numpy cimport ndarray
+cimport cython
+import numpy as np
+from libc.math cimport fabs, fmax
+
+cdef class ElementSampler:
+
+    # how close a point has to be to the element
+    # to get counted as "inside". This is in the
+    # mapped coordinates of the element.
+    cdef np.float64_t inclusion_tol
+
+    cdef void map_real_to_unit(self,
+                               double* mapped_x, 
+                               double* vertices,
+                               double* physical_x) nogil
+
+
+    cdef double sample_at_unit_point(self,
+                                     double* coord,
+                                     double* vals) nogil
+    
+
+    cdef double sample_at_real_point(self,
+                                     double* vertices,
+                                     double* field_values,
+                                     double* physical_x) nogil
+
+    cdef int check_inside(self, double* mapped_coord) nogil
+
+
+cdef class P1Sampler3D(ElementSampler):
+
+    cdef void map_real_to_unit(self,
+                               double* mapped_x, 
+                               double* vertices,
+                               double* physical_x) nogil
+
+
+    cdef double sample_at_unit_point(self,
+                                     double* coord,
+                                     double* vals) nogil
+
+    cdef int check_inside(self, double* mapped_coord) nogil
+
+# This typedef defines a function pointer that defines the system
+# of equations that will be solved by the NonlinearSolveSamplers.
+# 
+# inputs:
+#     x        - pointer to the mapped coordinate
+#     vertices - pointer to the element vertices
+#     phys_x   - pointer to the physical coordinate
+#
+# outputs:
+#
+#     fx - the result of solving the system, should be close to 0
+#          once it is converged.
+#
+ctypedef void (*func_type)(double* fx, 
+                           double* x, 
+                           double* vertices, 
+                           double* phys_x) nogil
+
+# This typedef defines a function pointer that defines the Jacobian
+# matrix used by the NonlinearSolveSamplers. Subclasses needed to 
+# define a Jacobian function in this form.
+# 
+# inputs:
+#     x        - pointer to the mapped coordinate
+#     vertices - pointer to the element vertices
+#     phys_x   - pointer to the physical coordinate
+#
+# outputs:
+#
+#     rcol     - the first column of the jacobian
+#     scol     - the second column of the jacobian
+#     tcol     - the third column of the jaocobian
+#
+ctypedef void (*jac_type)(double* rcol, 
+                          double* scol, 
+                          double* tcol, 
+                          double* x, 
+                          double* vertices, 
+                          double* phys_x) nogil
+
+cdef class NonlinearSolveSampler(ElementSampler):
+
+    cdef int dim
+    cdef int max_iter
+    cdef np.float64_t tolerance
+    cdef func_type func 
+    cdef jac_type jac
+
+    cdef void map_real_to_unit(self,
+                               double* mapped_x, 
+                               double* vertices,
+                               double* physical_x) nogil
+    
+
+cdef class Q1Sampler3D(NonlinearSolveSampler):
+
+    cdef void map_real_to_unit(self,
+                               double* mapped_x, 
+                               double* vertices,
+                               double* physical_x) nogil
+
+
+    cdef double sample_at_unit_point(self,
+                                     double* coord,
+                                     double* vals) nogil
+
+    cdef int check_inside(self, double* mapped_coord) nogil

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/element_mappings.pyx
--- /dev/null
+++ b/yt/utilities/lib/element_mappings.pyx
@@ -0,0 +1,351 @@
+"""
+This file contains coordinate mappings between physical coordinates and those
+defined on unit elements, as well as doing the corresponding intracell 
+interpolation on finite element data.
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+cimport numpy as np
+from numpy cimport ndarray
+cimport cython
+import numpy as np
+from libc.math cimport fabs, fmax
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef double determinant_3x3(double* col0, 
+                            double* col1, 
+                            double* col2) nogil:
+    return col0[0]*col1[1]*col2[2] - col0[0]*col1[2]*col2[1] - \
+           col0[1]*col1[0]*col2[2] + col0[1]*col1[2]*col2[0] + \
+           col0[2]*col1[0]*col2[1] - col0[2]*col1[1]*col2[0]
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef double maxnorm(double* f) nogil:
+    cdef double err
+    cdef int i
+    err = fabs(f[0])
+    for i in range(1, 2):
+        err = fmax(err, fabs(f[i])) 
+    return err
+
+
+cdef class ElementSampler:
+    '''
+
+    This is a base class for sampling the value of a finite element solution
+    at an arbitrary point inside a mesh element. In general, this will be done
+    by transforming the requested physical coordinate into a mapped coordinate 
+    system, sampling the solution in mapped coordinates, and returning the result.
+    This is not to be used directly; use one of the subclasses instead.
+
+    '''
+
+    def __init__(self):
+        self.inclusion_tol = 1.0e-8
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void map_real_to_unit(self,
+                               double* mapped_x, 
+                               double* vertices,
+                               double* physical_x) nogil:
+        pass
+        
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef double sample_at_unit_point(self,
+                                     double* coord,
+                                     double* vals) nogil:
+        pass
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef int check_inside(self, double* mapped_coord) nogil:
+        pass
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef double sample_at_real_point(self,
+                                     double* vertices,
+                                     double* field_values,
+                                     double* physical_x) nogil:
+        cdef double val
+        cdef double mapped_coord[4]
+
+        self.map_real_to_unit(mapped_coord, vertices, physical_x)
+        val = self.sample_at_unit_point(mapped_coord, field_values)
+        return val
+
+
+cdef class P1Sampler3D(ElementSampler):
+    '''
+
+    This implements sampling inside a linear, tetrahedral mesh element.
+    This mapping is linear and can be inverted easily.
+
+    '''
+
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void map_real_to_unit(self, double* mapped_x, 
+                               double* vertices, double* physical_x) nogil:
+    
+        cdef int i
+        cdef double d
+        cdef double[3] bvec
+        cdef double[3] col0
+        cdef double[3] col1
+        cdef double[3] col2
+    
+        # here, we express positions relative to the 4th element,
+        # which is selected by vertices[9]
+        for i in range(3):
+            bvec[i] = physical_x[i]       - vertices[9 + i]
+            col0[i] = vertices[0 + i]     - vertices[9 + i]
+            col1[i] = vertices[3 + i]     - vertices[9 + i]
+            col2[i] = vertices[6 + i]     - vertices[9 + i]
+        
+        d = determinant_3x3(col0, col1, col2)
+        mapped_x[0] = determinant_3x3(bvec, col1, col2)/d
+        mapped_x[1] = determinant_3x3(col0, bvec, col2)/d
+        mapped_x[2] = determinant_3x3(col0, col1, bvec)/d
+        mapped_x[3] = 1.0 - mapped_x[0] - mapped_x[1] - mapped_x[2]
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef double sample_at_unit_point(self,
+                                     double* coord, 
+                                     double* vals) nogil:
+        return vals[0]*coord[0] + vals[1]*coord[1] + \
+            vals[2]*coord[2] + vals[3]*coord[3]
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef int check_inside(self, double* mapped_coord) nogil:
+        cdef int i
+        for i in range(4):
+            if (mapped_coord[i] < -self.inclusion_tol or
+                mapped_coord[i] - 1.0 > self.inclusion_tol):
+                return 0
+        return 1
+
+
+cdef class NonlinearSolveSampler(ElementSampler):
+
+    '''
+
+    This is a base class for handling element samplers that require
+    a nonlinear solve to invert the mapping between coordinate systems.
+    To do this, we perform Newton-Raphson iteration using a specified 
+    system of equations with an analytic Jacobian matrix. This is
+    not to be used directly, use one of the subclasses instead.
+
+    '''
+
+    def __init__(self):
+        super(NonlinearSolveSampler, self).__init__()
+        self.tolerance = 1.0e-9
+        self.max_iter = 10
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void map_real_to_unit(self,
+                               double* mapped_x,
+                               double* vertices,
+                               double* physical_x) nogil:
+        cdef int i
+        cdef double d, val
+        cdef double[3] f
+        cdef double[3] r
+        cdef double[3] s
+        cdef double[3] t
+        cdef double[3] x
+        cdef int iterations = 0
+        cdef double err
+   
+        # initial guess
+        for i in range(3):
+            x[i] = 0.0
+    
+        # initial error norm
+        self.func(f, x, vertices, physical_x)
+        err = maxnorm(f)  
+   
+        # begin Newton iteration
+        while (err > self.tolerance and iterations < self.max_iter):
+            self.jac(r, s, t, x, vertices, physical_x)
+            d = determinant_3x3(r, s, t)
+            x[0] = x[0] - (determinant_3x3(f, s, t)/d)
+            x[1] = x[1] - (determinant_3x3(r, f, t)/d)
+            x[2] = x[2] - (determinant_3x3(r, s, f)/d)
+            self.func(f, x, vertices, physical_x)        
+            err = maxnorm(f)
+            iterations += 1
+
+        for i in range(3):
+            mapped_x[i] = x[i]
+
+
+cdef class Q1Sampler3D(NonlinearSolveSampler):
+
+    ''' 
+
+    This implements sampling inside a 3D, linear, hexahedral mesh element.
+
+    '''
+
+    def __init__(self):
+        super(Q1Sampler3D, self).__init__()
+        self.dim = 3
+        self.func = Q1Function3D
+        self.jac = Q1Jacobian3D
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef double sample_at_unit_point(self, double* coord, double* vals) nogil:
+        cdef double F, rm, rp, sm, sp, tm, tp
+    
+        rm = 1.0 - coord[0]
+        rp = 1.0 + coord[0]
+        sm = 1.0 - coord[1]
+        sp = 1.0 + coord[1]
+        tm = 1.0 - coord[2]
+        tp = 1.0 + coord[2]
+    
+        F = vals[0]*rm*sm*tm + vals[1]*rp*sm*tm + vals[2]*rp*sp*tm + vals[3]*rm*sp*tm + \
+            vals[4]*rm*sm*tp + vals[5]*rp*sm*tp + vals[6]*rp*sp*tp + vals[7]*rm*sp*tp
+        return 0.125*F
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef int check_inside(self, double* mapped_coord) nogil:
+        if (fabs(mapped_coord[0]) - 1.0 > self.inclusion_tol or
+            fabs(mapped_coord[1]) - 1.0 > self.inclusion_tol or 
+            fabs(mapped_coord[2]) - 1.0 > self.inclusion_tol):
+            return 0
+        return 1
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline void Q1Function3D(double* fx,
+                              double* x, 
+                              double* vertices, 
+                              double* phys_x) nogil:
+    cdef int i
+    cdef double rm, rp, sm, sp, tm, tp
+    
+    rm = 1.0 - x[0]
+    rp = 1.0 + x[0]
+    sm = 1.0 - x[1]
+    sp = 1.0 + x[1]
+    tm = 1.0 - x[2]
+    tp = 1.0 + x[2]
+    
+    for i in range(3):
+        fx[i] = vertices[0 + i]*rm*sm*tm \
+              + vertices[3 + i]*rp*sm*tm \
+              + vertices[6 + i]*rp*sp*tm \
+              + vertices[9 + i]*rm*sp*tm \
+              + vertices[12 + i]*rm*sm*tp \
+              + vertices[15 + i]*rp*sm*tp \
+              + vertices[18 + i]*rp*sp*tp \
+              + vertices[21 + i]*rm*sp*tp \
+              - 8.0*phys_x[i]
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline void Q1Jacobian3D(double* rcol,
+                              double* scol,
+                              double* tcol,
+                              double* x, 
+                              double* vertices, 
+                              double* phys_x) nogil:    
+    cdef int i
+    cdef double rm, rp, sm, sp, tm, tp
+    
+    rm = 1.0 - x[0]
+    rp = 1.0 + x[0]
+    sm = 1.0 - x[1]
+    sp = 1.0 + x[1]
+    tm = 1.0 - x[2]
+    tp = 1.0 + x[2]
+    
+    for i in range(3):
+        rcol[i] = -sm*tm*vertices[0 + i]  + sm*tm*vertices[3 + i]  + \
+                   sp*tm*vertices[6 + i]  - sp*tm*vertices[9 + i]  - \
+                   sm*tp*vertices[12 + i] + sm*tp*vertices[15 + i] + \
+                   sp*tp*vertices[18 + i] - sp*tp*vertices[21 + i]
+        scol[i] = -rm*tm*vertices[0 + i]  - rp*tm*vertices[3 + i]  + \
+                   rp*tm*vertices[6 + i]  + rm*tm*vertices[9 + i]  - \
+                   rm*tp*vertices[12 + i] - rp*tp*vertices[15 + i] + \
+                   rp*tp*vertices[18 + i] + rm*tp*vertices[21 + i]
+        tcol[i] = -rm*sm*vertices[0 + i]  - rp*sm*vertices[3 + i]  - \
+                   rp*sp*vertices[6 + i]  - rm*sp*vertices[9 + i]  + \
+                   rm*sm*vertices[12 + i] + rp*sm*vertices[15 + i] + \
+                   rp*sp*vertices[18 + i] + rm*sp*vertices[21 + i]
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+def test_hex_sampler(np.ndarray[np.float64_t, ndim=2] vertices,
+                     np.ndarray[np.float64_t, ndim=1] field_values,
+                     np.ndarray[np.float64_t, ndim=1] physical_x):
+
+    cdef double val
+
+    cdef Q1Sampler3D sampler = Q1Sampler3D()
+
+    val = sampler.sample_at_real_point(<double*> vertices.data,
+                                       <double*> field_values.data,
+                                       <double*> physical_x.data)
+    return val
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+def test_tetra_sampler(np.ndarray[np.float64_t, ndim=2] vertices,
+                       np.ndarray[np.float64_t, ndim=1] field_values,
+                       np.ndarray[np.float64_t, ndim=1] physical_x):
+
+    cdef double val
+    cdef double[4] mapped_coord
+
+    sampler = P1Sampler3D()
+
+    val = sampler.sample_at_real_point(<double*> vertices.data,
+                                       <double*> field_values.data,
+                                       <double*> physical_x.data)
+
+    return val

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/grid_traversal.pxd
--- a/yt/utilities/lib/grid_traversal.pxd
+++ b/yt/utilities/lib/grid_traversal.pxd
@@ -19,6 +19,46 @@
 cimport cython
 cimport kdtree_utils
 
+cdef struct ImageContainer:
+    np.float64_t *vp_pos
+    np.float64_t *vp_dir
+    np.float64_t *center
+    np.float64_t *image
+    np.float64_t *zbuffer
+    np.float64_t pdx, pdy
+    np.float64_t bounds[4]
+    int nv[2]
+    int vp_strides[3]
+    int im_strides[3]
+    int vd_strides[3]
+    np.float64_t *x_vec
+    np.float64_t *y_vec
+
+ctypedef void sampler_function(
+                VolumeContainer *vc,
+                np.float64_t v_pos[3],
+                np.float64_t v_dir[3],
+                np.float64_t enter_t,
+                np.float64_t exit_t,
+                int index[3],
+                void *data) nogil
+
+
+cdef class ImageSampler:
+    cdef ImageContainer *image
+    cdef sampler_function *sampler
+    cdef public object avp_pos, avp_dir, acenter, aimage, ax_vec, ay_vec
+    cdef public object azbuffer
+    cdef void *supp_data
+    cdef np.float64_t width[3]
+
+    cdef void get_start_stop(self, np.float64_t *ex, np.int64_t *rv)
+
+    cdef void calculate_extent(self, np.float64_t extrema[4],
+                               VolumeContainer *vc) nogil
+
+    cdef void setup(self, PartitionedGrid pg)
+
 cdef struct VolumeContainer:
     int n_fields
     np.float64_t **data

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/grid_traversal.pyx
--- a/yt/utilities/lib/grid_traversal.pyx
+++ b/yt/utilities/lib/grid_traversal.pyx
@@ -30,15 +30,6 @@
 
 DEF Nch = 4
 
-ctypedef void sampler_function(
-                VolumeContainer *vc,
-                np.float64_t v_pos[3],
-                np.float64_t v_dir[3],
-                np.float64_t enter_t,
-                np.float64_t exit_t,
-                int index[3],
-                void *data) nogil
-
 cdef class PartitionedGrid:
 
     @cython.boundscheck(False)
@@ -183,32 +174,12 @@
             for i in range(3):
                 vel[i] /= vel_mag[0]
 
-cdef struct ImageContainer:
-    np.float64_t *vp_pos
-    np.float64_t *vp_dir
-    np.float64_t *center
-    np.float64_t *image
-    np.float64_t *zbuffer
-    np.float64_t pdx, pdy
-    np.float64_t bounds[4]
-    int nv[2]
-    int vp_strides[3]
-    int im_strides[3]
-    int vd_strides[3]
-    np.float64_t *x_vec
-    np.float64_t *y_vec
 
 cdef struct ImageAccumulator:
     np.float64_t rgba[Nch]
     void *supp_data
 
 cdef class ImageSampler:
-    cdef ImageContainer *image
-    cdef sampler_function *sampler
-    cdef public object avp_pos, avp_dir, acenter, aimage, ax_vec, ay_vec
-    cdef public object azbuffer
-    cdef void *supp_data
-    cdef np.float64_t width[3]
     def __init__(self,
                   np.ndarray vp_pos,
                   np.ndarray vp_dir,

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_construction.h
--- /dev/null
+++ b/yt/utilities/lib/mesh_construction.h
@@ -0,0 +1,36 @@
+#define MAX_NUM_TRI 12
+#define HEX_NV 8
+#define HEX_NT 12
+#define TETRA_NV 4
+#define TETRA_NT 4
+
+// This array is used to triangulate the hexahedral mesh elements
+// Each element has six faces with two triangles each.
+// The vertex ordering convention is assumed to follow that used
+// here: http://homepages.cae.wisc.edu/~tautges/papers/cnmev3.pdf
+// Note that this is the case for Exodus II data.
+int triangulate_hex[MAX_NUM_TRI][3] = {
+  {0, 2, 1}, {0, 3, 2}, // Face is 3 2 1 0 
+  {4, 5, 6}, {4, 6, 7}, // Face is 4 5 6 7
+  {0, 1, 5}, {0, 5, 4}, // Face is 0 1 5 4
+  {1, 2, 6}, {1, 6, 5}, // Face is 1 2 6 5
+  {0, 7, 3}, {0, 4, 7}, // Face is 3 0 4 7
+  {3, 6, 2}, {3, 7, 6}  // Face is 2 3 7 6
+};
+
+// Similarly, this is used to triangulate the tetrahedral cells
+int triangulate_tetra[MAX_NUM_TRI][3] = {
+  {0, 1, 2}, 
+  {0, 1, 3},
+  {0, 2, 3},
+  {1, 2, 3},
+
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1},
+  {-1, -1, -1}
+};

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_construction.pxd
--- /dev/null
+++ b/yt/utilities/lib/mesh_construction.pxd
@@ -0,0 +1,12 @@
+from pyembree.rtcore cimport \
+    Vertex, \
+    Triangle, \
+    Vec3f
+
+ctypedef struct MeshDataContainer:
+    Vertex* vertices       # array of triangle vertices
+    Triangle* indices      # which vertices belong to which triangles
+    double* field_data     # the field values at the vertices
+    int* element_indices   # which vertices belong to which *element*
+    int tpe                # the number of triangles per element
+    int vpe                # the number of vertices per element

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_construction.pyx
--- /dev/null
+++ b/yt/utilities/lib/mesh_construction.pyx
@@ -0,0 +1,190 @@
+"""
+This file contains the ElementMesh, which represents the target that the 
+rays will be cast at when rendering finite element data. This class handles
+the interface between the internal representation of the mesh and the pyembree
+representation.
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+cimport numpy as np
+cimport cython
+cimport pyembree.rtcore as rtc 
+from mesh_traversal cimport YTEmbreeScene
+cimport pyembree.rtcore_geometry as rtcg
+cimport pyembree.rtcore_ray as rtcr
+cimport pyembree.rtcore_geometry_user as rtcgu
+from mesh_samplers cimport \
+    sample_hex, \
+    sample_tetra
+from pyembree.rtcore cimport \
+    Vertex, \
+    Triangle, \
+    Vec3f
+from libc.stdlib cimport malloc, free
+import numpy as np
+
+cdef extern from "mesh_construction.h":
+    enum:
+        MAX_NUM_TRI
+        
+    int HEX_NV
+    int HEX_NT
+    int TETRA_NV
+    int TETRA_NT
+    int triangulate_hex[MAX_NUM_TRI][3]
+    int triangulate_tetra[MAX_NUM_TRI][3]
+
+
+cdef class ElementMesh:
+    r'''
+
+    Currently, we handle non-triangular mesh types by converting them 
+    to triangular meshes. This class performs this transformation.
+    Currently, this is implemented for hexahedral and tetrahedral
+    meshes.
+
+    Parameters
+    ----------
+
+    scene : EmbreeScene
+        This is the scene to which the constructed polygons will be
+        added.
+    vertices : a np.ndarray of floats. 
+        This specifies the x, y, and z coordinates of the vertices in 
+        the polygon mesh. This should either have the shape 
+        (num_vertices, 3). For example, vertices[2][1] should give the 
+        y-coordinate of the 3rd vertex in the mesh.
+    indices : a np.ndarray of ints
+        This should either have the shape (num_elements, 4) or 
+        (num_elements, 8) for tetrahedral and hexahedral meshes, 
+        respectively. For tetrahedral meshes, each element will 
+        be represented by four triangles in the scene. For hex meshes,
+        each element will be represented by 12 triangles, 2 for each 
+        face. For hex meshes, we assume that the node ordering is as
+        defined here: 
+        http://homepages.cae.wisc.edu/~tautges/papers/cnmev3.pdf
+            
+    '''
+
+    cdef Vertex* vertices
+    cdef Triangle* indices
+    cdef unsigned int mesh
+    cdef double* field_data
+    cdef rtcg.RTCFilterFunc filter_func
+    cdef int tpe, vpe
+    cdef int[MAX_NUM_TRI][3] tri_array
+    cdef int* element_indices
+    cdef MeshDataContainer datac
+
+    def __init__(self, YTEmbreeScene scene,
+                 np.ndarray vertices, 
+                 np.ndarray indices,
+                 np.ndarray data):
+
+        # We need now to figure out if we've been handed quads or tetrahedra.
+        if indices.shape[1] == 8:
+            self.vpe = HEX_NV
+            self.tpe = HEX_NT
+            self.tri_array = triangulate_hex
+        elif indices.shape[1] == 4:
+            self.vpe = TETRA_NV
+            self.tpe = TETRA_NT
+            self.tri_array = triangulate_tetra
+        else:
+            raise NotImplementedError
+
+        self._build_from_indices(scene, vertices, indices)
+        self._set_field_data(scene, data)
+        self._set_sampler_type(scene)
+
+    cdef void _build_from_indices(self, YTEmbreeScene scene,
+                                  np.ndarray vertices_in,
+                                  np.ndarray indices_in):
+        cdef int i, j, ind
+        cdef int nv = vertices_in.shape[0]
+        cdef int ne = indices_in.shape[0]
+        cdef int nt = self.tpe*ne
+
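+        # Create an Embree triangle mesh holding nt triangles and nv
+        # vertices; the final argument to rtcNewTriangleMesh is the
+        # number of time steps (1, since the mesh is static).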
+        cdef unsigned int mesh = rtcg.rtcNewTriangleMesh(scene.scene_i,
+                    rtcg.RTC_GEOMETRY_STATIC, nt, nv, 1)
+
+        # first just copy over the vertices
+        cdef Vertex* vertices = <Vertex*> malloc(nv * sizeof(Vertex))
+        for i in range(nv):
+            vertices[i].x = vertices_in[i, 0]
+            vertices[i].y = vertices_in[i, 1]
+            vertices[i].z = vertices_in[i, 2]       
+        rtcg.rtcSetBuffer(scene.scene_i, mesh, rtcg.RTC_VERTEX_BUFFER,
+                          vertices, 0, sizeof(Vertex))
+
+        # now build up the triangles
+        cdef Triangle* triangles = <Triangle*> malloc(nt * sizeof(Triangle))
+        for i in range(ne):
+            for j in range(self.tpe):
+                triangles[self.tpe*i+j].v0 = indices_in[i][self.tri_array[j][0]]
+                triangles[self.tpe*i+j].v1 = indices_in[i][self.tri_array[j][1]]
+                triangles[self.tpe*i+j].v2 = indices_in[i][self.tri_array[j][2]]
+        rtcg.rtcSetBuffer(scene.scene_i, mesh, rtcg.RTC_INDEX_BUFFER,
+                          triangles, 0, sizeof(Triangle))
+
+        cdef int* element_indices = <int *> malloc(ne * self.vpe * sizeof(int))    
+        for i in range(ne):
+            for j in range(self.vpe):
+                element_indices[i*self.vpe + j] = indices_in[i][j]
+
+        self.element_indices = element_indices
+        self.vertices = vertices
+        self.indices = triangles
+        self.mesh = mesh
+
+    cdef void _set_field_data(self, YTEmbreeScene scene,
+                              np.ndarray data_in):
+
+        cdef int ne = data_in.shape[0]
+        cdef double* field_data = <double *> malloc(ne * self.vpe * sizeof(double))
+
+        for i in range(ne):
+            for j in range(self.vpe):
+                field_data[i*self.vpe+j] = data_in[i][j]
+
+        self.field_data = field_data
+
+        cdef MeshDataContainer datac
+        datac.vertices = self.vertices
+        datac.indices = self.indices
+        datac.field_data = self.field_data
+        datac.element_indices = self.element_indices
+        datac.tpe = self.tpe
+        datac.vpe = self.vpe
+        self.datac = datac
+        
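+        # Attach the packed mesh data to the Embree geometry as user
+        # data, so that the sampler filter functions can recover it
+        # later through their userPtr argument.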
+        rtcg.rtcSetUserData(scene.scene_i, self.mesh, &self.datac)
+
+    cdef void _set_sampler_type(self, YTEmbreeScene scene):
+
+        if self.vpe == 8:
+            self.filter_func = <rtcg.RTCFilterFunc> sample_hex
+        elif self.vpe == 4:
+            self.filter_func = <rtcg.RTCFilterFunc> sample_tetra
+        else:
+            print "Error - sampler type not implemented."
+            raise NotImplementedError
+
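+        # Register the chosen sampler as an Embree intersection filter;
+        # Embree will invoke it for every ray-triangle hit in this mesh.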
+        rtcg.rtcSetIntersectionFilterFunction(scene.scene_i,
+                                              self.mesh,
+                                              self.filter_func)
+        
+    def __dealloc__(self):
+        free(self.field_data)
+        free(self.element_indices)
+        free(self.vertices)
+        free(self.indices)

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_samplers.pxd
--- /dev/null
+++ b/yt/utilities/lib/mesh_samplers.pxd
@@ -0,0 +1,9 @@
+cimport pyembree.rtcore as rtc
+cimport pyembree.rtcore_ray as rtcr
+cimport cython
+
+cdef void sample_hex(void* userPtr,
+                     rtcr.RTCRay& ray) nogil
+
+cdef void sample_tetra(void* userPtr,
+                       rtcr.RTCRay& ray) nogil

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_samplers.pyx
--- /dev/null
+++ b/yt/utilities/lib/mesh_samplers.pyx
@@ -0,0 +1,138 @@
+"""
+This file contains functions that sample a surface mesh at the point hit by
+a ray. These can be used with pyembree in the form of "filter feedback functions."
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+cimport pyembree.rtcore as rtc
+cimport pyembree.rtcore_ray as rtcr
+from pyembree.rtcore cimport Vec3f, Triangle, Vertex
+from yt.utilities.lib.mesh_construction cimport MeshDataContainer
+from yt.utilities.lib.element_mappings cimport \
+    ElementSampler, \
+    P1Sampler3D, \
+    Q1Sampler3D
+cimport numpy as np
+cimport cython
+from libc.math cimport fabs, fmax
+
+cdef ElementSampler Q1Sampler = Q1Sampler3D()
+cdef ElementSampler P1Sampler = P1Sampler3D()
+
+ at cython.boundscheck(False)
+ at cython.wraparound(False)
+ at cython.cdivision(True)
+cdef void get_hit_position(double* position,
+                           void* userPtr,
+                           rtcr.RTCRay& ray) nogil:
+    cdef int primID, i
+    cdef double[3][3] vertex_positions
+    cdef Triangle tri
+    cdef MeshDataContainer* data
+
+    primID = ray.primID
+    data = <MeshDataContainer*> userPtr
+    tri = data.indices[primID]
+
+    vertex_positions[0][0] = data.vertices[tri.v0].x
+    vertex_positions[0][1] = data.vertices[tri.v0].y
+    vertex_positions[0][2] = data.vertices[tri.v0].z
+
+    vertex_positions[1][0] = data.vertices[tri.v1].x
+    vertex_positions[1][1] = data.vertices[tri.v1].y
+    vertex_positions[1][2] = data.vertices[tri.v1].z
+
+    vertex_positions[2][0] = data.vertices[tri.v2].x
+    vertex_positions[2][1] = data.vertices[tri.v2].y
+    vertex_positions[2][2] = data.vertices[tri.v2].z
+
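+    # Embree reports the hit location in barycentric coordinates (u, v);
+    # interpolate the three triangle vertices with weights
+    # (1 - u - v, u, v) to recover the Cartesian hit position.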
+    for i in range(3):
+        position[i] = vertex_positions[0][i]*(1.0 - ray.u - ray.v) + \
+                      vertex_positions[1][i]*ray.u + \
+                      vertex_positions[2][i]*ray.v
+
+
+ at cython.boundscheck(False)
+ at cython.wraparound(False)
+ at cython.cdivision(True)
+cdef void sample_hex(void* userPtr,
+                     rtcr.RTCRay& ray) nogil:
+    cdef int ray_id, elem_id, i
+    cdef double val
+    cdef double[8] field_data
+    cdef int[8] element_indices
+    cdef double[24] vertices
+    cdef double[3] position
+    cdef MeshDataContainer* data
+
+    data = <MeshDataContainer*> userPtr
+    ray_id = ray.primID
+    if ray_id == -1:
+        return
+
+    # ray_id records the id number of the hit according to
+    # embree, in which the primitives are triangles. Here,
+    # we convert this to the element id by dividing by the
+    # number of triangles per element.
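+    # For example, a hex is split into 12 triangles, so triangle hits
+    # 0 through 11 all map back to element 0.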
+    elem_id = ray_id / data.tpe
+
+    get_hit_position(position, userPtr, ray)
+    
+    for i in range(8):
+        element_indices[i] = data.element_indices[elem_id*8+i]
+        field_data[i]      = data.field_data[elem_id*8+i]
+
+    for i in range(8):
+        vertices[i*3]     = data.vertices[element_indices[i]].x
+        vertices[i*3 + 1] = data.vertices[element_indices[i]].y
+        vertices[i*3 + 2] = data.vertices[element_indices[i]].z    
+
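+    # Evaluate the trilinear (Q1) element interpolant at the hit point.
+    # The sampled value is stashed in ray.time, which MeshSampler reads
+    # back after rtcIntersect returns.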
+    val = Q1Sampler.sample_at_real_point(vertices, field_data, position)
+    ray.time = val
+
+
+ at cython.boundscheck(False)
+ at cython.wraparound(False)
+ at cython.cdivision(True)
+cdef void sample_tetra(void* userPtr,
+                       rtcr.RTCRay& ray) nogil:
+
+    cdef int ray_id, elem_id, i
+    cdef double val
+    cdef double[4] field_data
+    cdef int[4] element_indices
+    cdef double[12] vertices
+    cdef double[3] position
+    cdef MeshDataContainer* data
+
+    data = <MeshDataContainer*> userPtr
+    ray_id = ray.primID
+    if ray_id == -1:
+        return
+
+    get_hit_position(position, userPtr, ray)
+
+    # ray_id records the id number of the hit according to
+    # embree, in which the primitives are triangles. Here,
+    # we convert this to the element id by dividing by the
+    # number of triangles per element.    
+    elem_id = ray_id / data.tpe
+
+    for i in range(4):
+        element_indices[i] = data.element_indices[elem_id*4+i]
+        field_data[i] = data.field_data[elem_id*4+i]
+        vertices[i*3] = data.vertices[element_indices[i]].x
+        vertices[i*3 + 1] = data.vertices[element_indices[i]].y
+        vertices[i*3 + 2] = data.vertices[element_indices[i]].z    
+
+    val = P1Sampler.sample_at_real_point(vertices, field_data, position)
+    ray.time = val

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_traversal.pxd
--- /dev/null
+++ b/yt/utilities/lib/mesh_traversal.pxd
@@ -0,0 +1,7 @@
+cimport pyembree.rtcore
+cimport pyembree.rtcore_scene as rtcs
+cimport pyembree.rtcore_ray
+
+cdef class YTEmbreeScene:
+    cdef rtcs.RTCScene scene_i
+

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/mesh_traversal.pyx
--- /dev/null
+++ b/yt/utilities/lib/mesh_traversal.pyx
@@ -0,0 +1,127 @@
+"""
+This file contains the MeshSampler class, which handles casting rays at a
+MeshSource using pyembree.
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2015, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+cimport cython
+cimport numpy as np
+import numpy as np
+from libc.stdlib cimport malloc, free
+cimport pyembree.rtcore as rtc
+cimport pyembree.rtcore_ray as rtcr
+cimport pyembree.rtcore_geometry as rtcg
+cimport pyembree.rtcore_scene as rtcs
+from grid_traversal cimport ImageSampler, \
+    ImageContainer
+from cython.parallel import prange, parallel, threadid
+
+rtc.rtcInit(NULL)
+rtc.rtcSetErrorFunction(error_printer)
+
+cdef void error_printer(const rtc.RTCError code, const char *_str):
+    print "ERROR CAUGHT IN EMBREE"
+    rtc.print_error(code)
+    print "ERROR MESSAGE:", _str
+
+cdef class YTEmbreeScene:
+
+    def __init__(self):
+        self.scene_i = rtcs.rtcNewScene(rtcs.RTC_SCENE_STATIC, rtcs.RTC_INTERSECT1)
+
+    def __dealloc__(self):
+        rtcs.rtcDeleteScene(self.scene_i)
+
+cdef class MeshSampler(ImageSampler):
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def __call__(self, 
+                 YTEmbreeScene scene,
+                 int num_threads = 0):
+        '''
+
+        Cast a ray through the scene for each pixel and store the
+        sampled field values in self.aimage.
+
+        '''
+
+        rtcs.rtcCommit(scene.scene_i)
+        cdef int vi, vj, i, j, ni, nj, nn
+        cdef np.int64_t offset
+        cdef ImageContainer *im = self.image
+        cdef np.int64_t elemID
+        cdef np.float64_t value
+        cdef np.float64_t *v_pos
+        cdef np.float64_t *v_dir
+        cdef np.int64_t nx, ny, size
+        cdef np.float64_t px, py
+        cdef np.float64_t width[3]
+        for i in range(3):
+            width[i] = self.width[i]
+        cdef np.ndarray[np.float64_t, ndim=1] data
+        nx = im.nv[0]
+        ny = im.nv[1]
+        size = nx * ny
+        data = np.empty(size, dtype="float64")
+        cdef rtcr.RTCRay ray
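+        # vd_strides[0] == -1 signals that every ray shares the single
+        # direction stored in vp_dir (a simple, plane-parallel image),
+        # so only the ray origins need to be generated per pixel.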
+        if im.vd_strides[0] == -1:
+            v_pos = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
+            for j in range(size):
+                vj = j % ny
+                vi = (j - vj) / ny
+                # Dynamically calculate the position
+                px = width[0] * (<np.float64_t>vi)/(<np.float64_t>im.nv[0]-1) - width[0]/2.0
+                py = width[1] * (<np.float64_t>vj)/(<np.float64_t>im.nv[1]-1) - width[1]/2.0
+                v_pos[0] = im.vp_pos[0]*px + im.vp_pos[3]*py + im.vp_pos[9]
+                v_pos[1] = im.vp_pos[1]*px + im.vp_pos[4]*py + im.vp_pos[10]
+                v_pos[2] = im.vp_pos[2]*px + im.vp_pos[5]*py + im.vp_pos[11]
+                for i in range(3):
+                    ray.org[i] = v_pos[i]
+                    ray.dir[i] = im.vp_dir[i]
+                ray.tnear = 0.0
+                ray.tfar = 1e37
+                ray.geomID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.primID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.instID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.mask = -1
+                ray.time = 0
+                rtcs.rtcIntersect(scene.scene_i, ray)
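+                # The intersection filter (sample_hex or sample_tetra)
+                # has stored the interpolated field value in ray.time.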
+                data[j] = ray.time
+            self.aimage = data.reshape(self.image.nv[0], self.image.nv[1])
+            free(v_pos)
+        else:
+            v_pos = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
+            v_dir = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
+            # If we do not have a simple image plane, each pixel carries
+            # its own ray origin and direction, so read both per pixel.
+            for j in range(size):
+                offset = j * 3
+                for i in range(3): v_pos[i] = im.vp_pos[i + offset]
+                for i in range(3): v_dir[i] = im.vp_dir[i + offset]
+                for i in range(3):
+                    ray.org[i] = v_pos[i]
+                    ray.dir[i] = v_dir[i]
+                ray.tnear = 0.0
+                ray.tfar = 1e37
+                ray.geomID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.primID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.instID = rtcg.RTC_INVALID_GEOMETRY_ID
+                ray.mask = -1
+                ray.time = 0
+                rtcs.rtcIntersect(scene.scene_i, ray)
+                data[j] = ray.time
+            self.aimage = data.reshape(self.image.nv[0], self.image.nv[1])
+            free(v_pos)
+            free(v_dir)

diff -r a5fb8a8d19009e8aa9b2c8e0859bc2c7c7949934 -r fceb5ab0e3c639902f1d00e9ad6413367fd8fe27 yt/utilities/lib/misc_utilities.pyx
--- a/yt/utilities/lib/misc_utilities.pyx
+++ b/yt/utilities/lib/misc_utilities.pyx
@@ -372,9 +372,10 @@
                                 talpha = image[x0, yi0, 3]
                                 image[x0, yi0, 3] = alpha[3] + talpha * (1 - alpha[3])
                                 for i in range(3):
-                                    image[x0, yi0, i] = (alpha[3]*alpha[i] + image[x0, yi0, i]*talpha*(1.0-alpha[3]))/image[x0,yi0,3]
                                     if image[x0, yi0, 3] == 0.0:
                                         image[x0, yi0, i] = 0.0
+                                    else:
+                                        image[x0, yi0, i] = (alpha[3]*alpha[i] + image[x0, yi0, i]*talpha*(1.0-alpha[3]))/image[x0,yi0,3]
                             else:
                                 for i in range(4):
                                     image[x0, yi0, i] = alpha[i]

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/
