[yt-svn] commit/yt: 20 new changesets

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Wed Jun 15 11:23:26 PDT 2016


20 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/88671629f535/
Changeset:   88671629f535
Branch:      yt
User:        atmyers
Date:        2016-05-26 23:21:02+00:00
Summary:     merging with tip.
Affected #:  28 files

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb appveyor.yml
--- /dev/null
+++ b/appveyor.yml
@@ -0,0 +1,38 @@
+# AppVeyor.com is a Continuous Integration service to build and run tests under
+# Windows
+
+environment:
+
+  global:
+      PYTHON: "C:\\Miniconda-x64"
+
+  matrix:
+
+      - PYTHON_VERSION: "2.7"
+
+      - PYTHON_VERSION: "3.5"
+
+
+platform:
+    - x64
+
+install:
+    - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
+
+    # Install the build and runtime dependencies of the project.
+    # Create a conda environment
+    - "conda create -q --yes -n test python=%PYTHON_VERSION%"
+    - "activate test"
+
+    # Check that we have the expected version of Python
+    - "python --version"
+
+    # Install specified version of numpy and dependencies
+    - "conda install -q --yes numpy nose setuptools ipython Cython sympy h5py matplotlib"
+    - "python setup.py develop"
+
+# Not a .NET project
+build: false
+
+test_script:
+  - "nosetests -e test_all_fields ."

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb doc/source/analyzing/fields.rst
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -14,14 +14,117 @@
 out of favor, as these discrete fields can be any type of sparsely populated
 data.
 
+What are fields?
+----------------
+
+Fields in yt are denoted by a two-element tuple, of the form ``(field_type,
+field_name)``. The first element, the "field type," is a category for a
+field. Possible field types used in yt include *gas* (for fluid mesh fields
+defined on a mesh) or *io* (for fields defined at particle locations). Field
+types can also correspond to distinct particle or fluid types in a single
+simulation. For example, a plasma physics simulation using the Particle in Cell
+method might have particle types corresponding to *electrons* and *ions*. See
+:ref:`known-field-types` below for more info about field types in yt.
+
+The second element of field tuples, the "field name", denotes the specific field
+to select, given the field type. Possible field names include *density*,
+*velocity_x* or *pressure* --- these three fields are examples of field names
+that might be used for a fluid defined on a mesh. Examples of particle fields
+include *particle_mass*, *particle_position*, or *particle_velocity_x*. In
+general, particle field names are prefixed by "particle\_", which makes it easy
+to distinguish between a particle field and a mesh field when no field type is
+provided.
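The naming convention above can be illustrated with a small standalone sketch (the dictionary below is hypothetical, for illustration only, and is not a yt data structure):

```python
# Hypothetical catalog keyed by (field_type, field_name) tuples; this
# illustrates the naming convention, not yt's internal representation.
fields = {
    ("gas", "density"): "fluid field defined on the mesh",
    ("gas", "velocity_x"): "fluid field defined on the mesh",
    ("io", "particle_mass"): "field defined at particle locations",
}

def is_particle_field(field_name):
    # Particle field names carry a "particle_" prefix by convention, which
    # distinguishes them from mesh fields when no field type is given.
    return field_name.startswith("particle_")

for ftype, fname in fields:
    kind = "particle" if is_particle_field(fname) else "mesh"
    print((ftype, fname), "->", kind)
```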
+
+What fields are available?
+--------------------------
+
+We provide a full list of fields that yt recognizes by default at
+:ref:`field-list`.  If you want to create additional custom derived fields,
+see :ref:`creating-derived-fields`.
+
+Every dataset has an attribute, ``ds.fields``.  This attribute possesses
+attributes itself, each of which is a "field type," and each field type has as
+its attributes the fields themselves.  When one of these is printed, it returns
+information about the field and things like units and so on.  You can use this
+for tab-completing as well as easier access to information.
+
+As an example, you might browse the available fields like so:
+
+.. code-block:: python
+
+  print(dir(ds.fields))
+  print(dir(ds.fields.gas))
+  print(ds.fields.gas.density)
+
+On an Enzo dataset, the result from the final command would look something like
+this::
+
+  Alias Field for "('enzo', 'Density')" (gas, density): (units: g/cm**3)
+
+You can use this to easily explore available fields, particularly through
+tab-completion in Jupyter/IPython.
+
+For a more programmatic method of accessing fields, you can utilize the
+``ds.field_list``, ``ds.derived_field_list`` and some accessor methods to gain
+information about fields.  The full list of fields available for a dataset can
+be found as the attribute ``field_list`` for native, on-disk fields and
+``derived_field_list`` for derived fields (``derived_field_list`` is a superset
+of ``field_list``).  You can view these lists by examining a dataset like this:
+
+.. code-block:: python
+
+   ds = yt.load("my_data")
+   print(ds.field_list)
+   print(ds.derived_field_list)
+
+By using the ``field_info()`` class, one can access information about a given
+field, like its default units or the source code for it.
+
+.. code-block:: python
+
+   ds = yt.load("my_data")
+   ds.index
+   print(ds.field_info["gas", "pressure"].get_units())
+   print(ds.field_info["gas", "pressure"].get_source())
+
+Using fields to access data
+---------------------------
+
+The primary *use* of fields in yt is to access data from a dataset. For example,
+if you want to use a data object (see :ref:`Data-objects` for more detail about
+data objects) to access the ``('gas', 'density')`` field, you can do any of the
+following:
+
+.. code-block:: python
+
+    ad = ds.all_data()
+    density = ad['density']
+    density = ad['gas', 'density']
+    density = ad[('gas', 'density')]
+    density = ad[ds.fields.gas.density]
+
+The first data access example is the simplest. In that example, the field type
+is inferred from the name of the field. The next two examples use the field type
+explicitly; this might be necessary if there is more than one field type with a
+"density" field defined in the same simulation. The third example is slightly
+more verbose but is syntactically identical to the second example due to the way
+indexing works in Python. The final example uses the ``ds.fields`` object
+described above. This way of accessing fields lends itself to interactive use,
+especially if you make heavy use of IPython's tab completion features. Any of
+these ways of denoting the ``('gas', 'density')`` field can be used when
+supplying a field name to a yt data object, analysis routine, or plotting and
+visualization function.
+
+Accessing Fields without a Field Type
+-------------------------------------
+
 In previous versions of yt, there was a single mechanism of accessing fields on
-a data container -- by their name, which was mandated to be a single string,
-and which often varied between different code frontends.  yt 3.0 allows
-for datasets containing multiple different types of fluid fields, mesh fields,
-particles (with overlapping or disjoint lists of fields).  To enable accessing
-these fields in a meaningful, simple way, the mechanism for accessing them has
-changed to take an optional *field type* in addition to the *field name* of
-the form ('*field type*', '*field name*').
+a data container -- by their name, which was mandated to be a single string, and
+which often varied between different code frontends.  yt 3.0 allows for datasets
+containing multiple different types of fluid fields, mesh fields, particles
+(with overlapping or disjoint lists of fields). However, to preserve backward
+compatibility and make interactive use simpler, yt will still accept field names
+given as a string and will try to infer the field type given a field name.
 
 As an example, we may be in a situation where we have multiple types of particles
 which possess the ``particle_position`` field.  In the case where a data
@@ -30,9 +133,9 @@
 
 .. code-block:: python
 
-   print(ad["humans", "particle_position"])
-   print(ad["dogs", "particle_position"])
-   print(ad["dinosaurs", "particle_position"])
+   print(ad["dark_matter", "particle_position"])
+   print(ad["stars", "particle_position"])
+   print(ad["black_holes", "particle_position"])
 
 Each of these three fields may have different sizes.  In order to enable
 falling back on asking only for a field by the name, yt will use the most
@@ -45,7 +148,8 @@
 
    print(ad["particle_velocity"])
 
-it would select ``dinosaurs`` as the field type.
+it would select ``black_holes`` as the field type, since the last field accessed
+used that field type.
 
 The same operations work for fluid and mesh fields.  As an example, in some
 cosmology simulations, we may want to examine the mass of particles in a region
@@ -189,58 +293,6 @@
  * Species fields, such as for chemistry species (yt can recognize the entire
    periodic table in field names and construct ionization fields as need be)
 
-What fields are available?
---------------------------
-
-We provide a full list of fields that yt recognizes by default at
-:ref:`field-list`.  If you want to create additional custom derived fields,
-see :ref:`creating-derived-fields`.
-
-Every dataset has an attribute, ``ds.fields``.  This attribute possesses
-attributes itself, each of which is a "field type," and each field type has as
-its attributes the fields themselves.  When one of these is printed, it returns
-information about the field and things like units and so on.  You can use this
-for tab-completing as well as easier access to information.
-
-As an example, you might browse the available fields like so:
-
-.. code-block:: python
-
-  print(dir(ds.fields))
-  print(dir(ds.fields.gas))
-  print(ds.fields.gas.density)
-
-On an Enzo dataset, the result from the final command would look something like
-this:::
-
-  Alias Field for "('enzo', 'Density')" (gas, density): (units: g/cm**3)
-
-You can use this to easily explore available fields, particularly through
-tab-completion in Jupyter/IPython.
-
-For a more programmatic method of accessing fields, you can utilize the
-``ds.field_list``, ``ds.derived_field_list`` and some accessor methods to gain
-information about fields.  The full list of fields available for a dataset can
-be found as the attribute ``field_list`` for native, on-disk fields and
-``derived_field_list`` for derived fields (``derived_field_list`` is a superset
-of ``field_list``).  You can view these lists by examining a dataset like this:
-
-.. code-block:: python
-
-   ds = yt.load("my_data")
-   print(ds.field_list)
-   print(ds.derived_field_list)
-
-By using the ``field_info()`` class, one can access information about a given
-field, like its default units or the source code for it.
-
-.. code-block:: python
-
-   ds = yt.load("my_data")
-   ds.index
-   print(ds.field_info["gas", "pressure"].get_units())
-   print(ds.field_info["gas", "pressure"].get_source())
-
 .. _bfields:
 
 Magnetic Fields

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -95,17 +95,17 @@
 To create new unit tests:
 
 #. Create a new ``tests/`` directory next to the file containing the
-   functionality you want to test.  Be sure to add this new directory as a
-   subpackage in the setup.py script located in the directory you're adding a
-   new ``tests/`` folder to.  This ensures that the tests will be deployed in
-   yt source and binary distributions.
+   functionality you want to test and add an empty ``__init__.py`` file to
+   it.
 #. Inside that directory, create a new python file prefixed with ``test_`` and
    including the name of the functionality.
 #. Inside that file, create one or more routines prefixed with ``test_`` that
-   accept no arguments.  These should ``yield`` a tuple of the form
-   ``function``, ``argument_one``, ``argument_two``, etc.  For example
-   ``yield assert_equal, 1.0, 1.0`` would be captured by nose as a test that
-   asserts that 1.0 is equal to 1.0.
+   accept no arguments. Each test function should exercise some piece of
+   functionality and verify that the results are correct using assert
+   statements or functions.
+#. Tests can ``yield`` a tuple of the form ``function``, ``argument_one``,
+   ``argument_two``, etc.  For example ``yield assert_equal, 1.0, 1.0`` would be
+   captured by nose as a test that asserts that 1.0 is equal to 1.0.
 #. Use ``fake_random_ds`` to test on datasets, and be sure to test for
    several combinations of ``nproc``, so that domain decomposition can be
    tested as well.
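The yield-style convention can be sketched without nose installed; the ``assert_equal`` below is a local stand-in for ``yt.testing.assert_equal``, and the test contents are made up for illustration:

```python
# Minimal sketch of nose's generator-test convention: a test function
# yields (function, argument_one, argument_two, ...) tuples, and nose
# calls each yielded tuple as a separate test case.
def assert_equal(actual, expected):
    # stand-in for yt.testing.assert_equal
    assert actual == expected, "%r != %r" % (actual, expected)

def test_arithmetic():
    yield assert_equal, 1.0 + 1.0, 2.0
    yield assert_equal, 2 * 3, 6

# Outside of nose, the yielded cases can be exercised by hand:
for case in test_arithmetic():
    func, args = case[0], case[1:]
    func(*args)
```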
@@ -470,9 +470,9 @@
        test.prefix = "my_unique_name"
 
        # this ensures a nice test name in nose's output
-       test_my_ds.__description__ = test.description
+       test_my_ds.__name__ = test.description
 
-       yield test_my_ds
+       yield test
 
 Another good example of an image comparison test is the
 ``PlotWindowAttributeTest`` defined in the answer testing framework and used in

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -232,3 +232,28 @@
 whatever interface they wish for displaying and saving their image data.
 You can use the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`
 to accomplish this as described in :ref:`fixed-resolution-buffers`.
+
+High-level Information about Particles
+--------------------------------------
+
+There are a number of high-level helpers attached to ``Dataset`` objects to find
+out information about the particles in an output file. First, one can check if
+there are any particles in a dataset at all by examining
+``ds.particles_exist``. This will be ``True`` for datasets the include particles
+and ``False`` otherwise.
+
+One can also see which particle types are available in a dataset. Particle types
+that are available in the dataset's on-disk output are known as "raw" particle
+types, and they will appear in ``ds.particle_types_raw``. Particle types that
+are dynamically defined via a particle filter of a particle union will also
+appear in the ``ds.particle_types`` list. If the simulation only has one
+particle type on-disk, its name will by ``'io'``. If there is more than one
+particle type, the names of the particle types will be inferred from the output
+file. For example, Gadget HDF5 files have particle type names like ``PartType0``
+and ``PartType1``, while Enzo data, which usually only has one particle type,
+will only have a particle named ``io``.
+
+Finally, one can see the number of each particle type by inspecting
+``ds.particle_type_counts``. This will be a dictionary mapping the names of
+particle types in ``ds.particle_types_raw`` to the number of each particle type
+in a simulation output.
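The shape of that dictionary can be sketched in plain Python; the particle type names and counts below are made up for illustration and do not come from a real dataset:

```python
# Hypothetical per-type particle data, standing in for an on-disk output.
particle_ids = {
    "PartType0": list(range(100)),   # e.g. gas particles in a Gadget file
    "PartType1": list(range(250)),   # e.g. dark matter particles
}

# ds.particle_type_counts maps each raw particle type to its total count.
particle_type_counts = {ptype: len(ids) for ptype, ids in particle_ids.items()}
print(particle_type_counts)  # {'PartType0': 100, 'PartType1': 250}
```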

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb doc/source/quickstart/2)_Data_Inspection.ipynb
--- a/doc/source/quickstart/2)_Data_Inspection.ipynb
+++ b/doc/source/quickstart/2)_Data_Inspection.ipynb
@@ -154,6 +154,35 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
+    "Finally, we can get basic information about the particle types and number of particles in a simulation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (ds.particle_types)\n",
+    "print (ds.particle_types_raw)\n",
+    "print (ds.particle_type_counts)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For this dataset, we see that there are two particle types defined, (`io` and `all`), but that only one of these particle types in in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n",
+    "\n",
+    "We can also see that there are a bit more than 1.1 million particles in this simulation. Only particle types in `ds.particle_types_raw` will appear in the `ds.particle_type_counts` dictionary."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
     "# Mesh Structure\n",
     "\n",
     "If you're using a simulation type that has grids (for instance, here we're using an Enzo simulation) you can examine the structure of the mesh.  For the most part, you probably won't have to use this unless you're debugging a simulation or examining in detail what is going on."

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb doc/source/visualizing/callbacks.rst
--- a/doc/source/visualizing/callbacks.rst
+++ b/doc/source/visualizing/callbacks.rst
@@ -692,6 +692,28 @@
    s.annotate_triangle_facets(points, plot_args={"colors": 'black'})
    s.save()
 
+.. _annotate-mesh-lines:
+
+Annotate Mesh Lines Callback
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. function:: annotate_mesh_lines(plot_args=None)
+
+   (This is a proxy for
+   :class:`~yt.visualization.plot_modifications.MeshLinesCallback`.)
+
+   This draws the mesh line boundaries over a plot using a Matplotlib
+   line collection. This callback is only useful for unstructured or 
+   semi-structured mesh datasets. 
+
+.. python-script::
+
+   import yt
+   ds = yt.load('MOOSE_sample_data/out.e')
+   sl = yt.SlicePlot(ds, 2, ('connect1', 'nodal_aux'))
+   sl.annotate_mesh_lines(plot_args={'color':'black'})
+   sl.save()
+
 .. _annotate-ray:
 
 Overplot the Path of a Ray

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -24,6 +24,8 @@
 from contextlib import contextmanager
 
 from yt.data_objects.particle_io import particle_handler_registry
+from yt.fields.derived_field import \
+    DerivedField
 from yt.frontends.ytdata.utilities import \
     save_as_dataset
 from yt.funcs import \
@@ -348,6 +350,7 @@
             for i, chunk in enumerate(chunks):
                 with self._chunked_read(chunk):
                     gz = self._current_chunk.objs[0]
+                    gz.field_parameters = self.field_parameters
                     wogz = gz._base_grid
                     ind += wogz.select(
                         self.selector,
@@ -982,6 +985,9 @@
                     raise YTFieldNotParseable(field)
                 ftype, fname = field
                 finfo = self.ds._get_field_info(ftype, fname)
+            elif isinstance(field, DerivedField):
+                ftype, fname = field.name
+                finfo = field
             else:
                 fname = field
                 finfo = self.ds._get_field_info("unknown", fname)

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -15,6 +15,7 @@
 
 import numpy as np
 
+from yt.fields.derived_field import DerivedField
 from yt.frontends.ytdata.utilities import \
     save_as_dataset
 from yt.funcs import \
@@ -249,8 +250,11 @@
 
     def __getitem__(self, field):
         fname = self.field_map.get(field, None)
-        if fname is None and isinstance(field, tuple):
-            fname = self.field_map.get(field[1], None)
+        if fname is None:
+            if isinstance(field, tuple):
+                fname = self.field_map.get(field[1], None)
+            elif isinstance(field, DerivedField):
+                fname = self.field_map.get(field.name[1], None)
         if fname is None:
             raise KeyError(field)
         else:

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -15,6 +15,7 @@
 #-----------------------------------------------------------------------------
 
 import functools
+import itertools
 import numpy as np
 import os
 import time
@@ -24,6 +25,8 @@
 from yt.extern.six import add_metaclass, string_types
 
 from yt.config import ytcfg
+from yt.fields.derived_field import \
+    DerivedField
 from yt.funcs import \
     mylog, \
     set_intersection, \
@@ -62,7 +65,6 @@
 from yt.units.unit_systems import create_code_unit_system
 from yt.data_objects.region_expression import \
     RegionExpression
-
 from yt.geometry.coordinates.api import \
     CoordinateHandler, \
     CartesianCoordinateHandler, \
@@ -171,6 +173,7 @@
     derived_field_list = requires_index("derived_field_list")
     fields = requires_index("fields")
     _instantiated = False
+    _particle_type_counts = None
 
     def __new__(cls, filename=None, *args, **kwargs):
         if not isinstance(filename, string_types):
@@ -590,7 +593,10 @@
     def _get_field_info(self, ftype, fname = None):
         self.index
         if fname is None:
-            ftype, fname = "unknown", ftype
+            if isinstance(ftype, DerivedField):
+                ftype, fname = ftype.name
+            else:
+                ftype, fname = "unknown", ftype
         guessing_type = False
         if ftype == "unknown":
             guessing_type = True
@@ -763,6 +769,27 @@
         return fields
 
     @property
+    def particles_exist(self):
+        for pt, f in itertools.product(self.particle_types_raw, self.field_list):
+            if pt == f[0]:
+                return True
+        return False
+
+    @property
+    def particle_type_counts(self):
+        self.index
+        if self.particles_exist is False:
+            return {}
+
+        # frontends or index implementation can populate this dict while
+        # creating the index if they know particle counts at that time
+        if self._particle_type_counts is not None:
+            return self._particle_type_counts
+
+        self._particle_type_counts = self.index._get_particle_type_counts()
+        return self._particle_type_counts
+
+    @property
     def ires_factor(self):
         o2 = np.log2(self.refine_by)
         if o2 != int(o2):

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/data_objects/tests/test_dataset_access.py
--- a/yt/data_objects/tests/test_dataset_access.py
+++ b/yt/data_objects/tests/test_dataset_access.py
@@ -1,4 +1,8 @@
-from yt.testing import fake_amr_ds, assert_equal
+from yt.testing import \
+    assert_equal, \
+    fake_amr_ds, \
+    fake_particle_ds, \
+    fake_random_ds
 
 # This will test the "dataset access" method.
 
@@ -37,3 +41,10 @@
     rho *= 2.0
     yield assert_equal, dd["density"]*2.0, ds.r["density"]
     yield assert_equal, dd["gas", "density"]*2.0, ds.r["gas", "density"]
+
+def test_particle_counts():
+    ds = fake_random_ds(16, particles=100)
+    assert ds.particle_type_counts == {'io': 100}
+
+    pds = fake_particle_ds(npart=128)
+    assert pds.particle_type_counts == {'io': 128}

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py
+++ b/yt/fields/field_info_container.py
@@ -253,12 +253,17 @@
                 self[name] = DerivedField(name, f, **kwargs)
                 return f
             return create_function
-        ptype = kwargs.get("particle_type", False)
-        if ptype:
+
+        if isinstance(name, tuple):
+            self[name] = DerivedField(name, function, **kwargs)
+            return
+
+        if kwargs.get("particle_type", False):
             ftype = 'all'
         else:
             ftype = self.ds.default_fluid_type
-        if not isinstance(name, tuple) and (ftype, name) not in self:
+
+        if (ftype, name) not in self:
             tuple_name = (ftype, name)
             self[tuple_name] = DerivedField(tuple_name, function, **kwargs)
             self.alias(name, tuple_name)

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/fields/tests/test_field_access.py
--- /dev/null
+++ b/yt/fields/tests/test_field_access.py
@@ -0,0 +1,40 @@
+from yt.testing import fake_random_ds, assert_equal
+from yt.data_objects.profiles import create_profile
+from yt.visualization.plot_window import \
+    SlicePlot, \
+    ProjectionPlot, \
+    OffAxisProjectionPlot
+from yt.visualization.profile_plotter import \
+    ProfilePlot, \
+    PhasePlot
+
+def test_field_access():
+    ds = fake_random_ds(16)
+
+    ad = ds.all_data()
+    sp = ds.sphere(ds.domain_center, 0.25)
+    cg = ds.covering_grid(0, ds.domain_left_edge, ds.domain_dimensions)
+    scg = ds.smoothed_covering_grid(0, ds.domain_left_edge, ds.domain_dimensions)
+    sl = ds.slice(0, ds.domain_center[0])
+    proj = ds.proj('density', 0)
+    prof = create_profile(ad, 'radius', 'density')
+
+    for data_object in [ad, sp, cg, scg, sl, proj, prof]:
+        assert_equal(
+            data_object['gas', 'density'],
+            data_object[ds.fields.gas.density]
+        )
+
+    for field in [('gas', 'density'), ds.fields.gas.density]:
+        ad = ds.all_data()
+        prof = ProfilePlot(ad, 'radius', field)
+        phase = PhasePlot(ad, 'radius', field, 'cell_mass')
+        s = SlicePlot(ds, 2, field)
+        oas = SlicePlot(ds, [1, 1, 1], field)
+        p = ProjectionPlot(ds, 2, field)
+        oap = OffAxisProjectionPlot(ds, [1, 1, 1], field)
+
+        for plot_object in [s, oas, p, oap, prof, phase]:
+            plot_object._setup_plots()
+            if hasattr(plot_object, '_frb'):
+                plot_object._frb[field]

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -337,6 +337,8 @@
             mylog.info("Discovered %i species of particles", len(ls_nonzero))
             mylog.info("Particle populations: "+'%9i '*len(ls_nonzero),
                        *ls_nonzero)
+            self._particle_type_counts = dict(
+                zip(self.particle_types_raw, ls_nonzero))
             for k, v in particle_header_vals.items():
                 if k in self.parameters.keys():
                     if not self.parameters[k] == v:

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/art/tests/test_outputs.py
--- a/yt/frontends/art/tests/test_outputs.py
+++ b/yt/frontends/art/tests/test_outputs.py
@@ -69,6 +69,14 @@
                          ad[('specie2', 'particle_type')].size +
                          ad[('specie3', 'particle_type')].size), AnaNDM
 
+    for spnum in range(5):
+        npart_read = ad['specie%s' % spnum, 'particle_type'].size
+        npart_header = ds.particle_type_counts['specie%s' % spnum]
+        if spnum == 3:
+            # see issue 814
+            npart_read += 1
+        assert_equal(npart_read, npart_header)
+
     AnaBoxSize = YTQuantity(7.1442196564, 'Mpc')
     AnaVolume = YTQuantity(364.640074656, 'Mpc**3')
     Volume = 1

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/artio/data_structures.py
--- a/yt/frontends/artio/data_structures.py
+++ b/yt/frontends/artio/data_structures.py
@@ -182,6 +182,17 @@
         return (self.dataset.domain_width /
                 (self.dataset.domain_dimensions * 2**(self.max_level))).min()
 
+    def _get_particle_type_counts(self):
+        # this could be done in the artio C interface without creating temporary
+        # arrays but I don't want to touch that code
+        # if a future brave soul wants to try, take a look at
+        # `read_sfc_particles` in _artio_caller.pyx
+        result = {}
+        ad = self.ds.all_data()
+        for ptype in self.ds.particle_types_raw:
+            result[ptype] = ad[ptype, 'PID'].size
+        return result
+
     def convert(self, unit):
         return self.dataset.conversion_factors[unit]
 

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/artio/tests/test_outputs.py
--- a/yt/frontends/artio/tests/test_outputs.py
+++ b/yt/frontends/artio/tests/test_outputs.py
@@ -48,6 +48,7 @@
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2
+    assert_equal(ds.particle_type_counts, {'N-BODY': 100000, 'STAR': 110650})
 
 
 @requires_file(sizmbhloz)

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -483,6 +483,15 @@
             random_sample = np.mgrid[0:max(len(self.grids),1)].astype("int32")
         return self.grids[(random_sample,)]
 
+    def _get_particle_type_counts(self):
+        try:
+            ret = {}
+            for ptype in self.grid_active_particle_count:
+                ret[ptype] = self.grid_active_particle_count[ptype].sum()
+            return ret
+        except AttributeError:
+            return super(EnzoHierarchy, self)._get_particle_type_counts()
+
     def find_particles_by_type(self, ptype, max_num=None, additional_fields=None):
         """
         Returns a structure of arrays with all of the particles'

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/enzo/tests/test_outputs.py
--- a/yt/frontends/enzo/tests/test_outputs.py
+++ b/yt/frontends/enzo/tests/test_outputs.py
@@ -83,6 +83,7 @@
     for test in big_patch_amr(ds, _fields):
         test_galaxy0030.__name__ = test.description
         yield test
+    assert_equal(ds.particle_type_counts, {'io': 1124453})
 
 @requires_ds(enzotiny)
 def test_simulated_halo_mass_function():
@@ -152,3 +153,6 @@
         [f for f in apcos.field_list if f[0] == 'CenOstriker'])
 
     assert_equal(apcos_fields, real_apcos_fields)
+
+    assert_equal(apcos.particle_type_counts,
+                 {'CenOstriker': 899755, 'DarkMatter': 32768})

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -474,6 +474,13 @@
         for level in range(self.max_level+1):
             self.level_stats[level+self.dataset.min_level+1]['numcells'] = levels[level]
 
+    def _get_particle_type_counts(self):
+        npart = 0
+        for dom in self.domains:
+            npart += dom.local_particle_count
+
+        return {'io': npart}
+
     def print_stats(self):
         
         # This function prints information based on the fluid on the grids,

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/frontends/ramses/tests/test_outputs.py
--- a/yt/frontends/ramses/tests/test_outputs.py
+++ b/yt/frontends/ramses/tests/test_outputs.py
@@ -47,7 +47,7 @@
         s1 = dobj["ones"].sum()
         s2 = sum(mask.sum() for block, mask in dobj.blocks)
         yield assert_equal, s1, s2
-
+    assert_equal(ds.particle_type_counts, {'io': 1090895})
 
 @requires_file(output_00080)
 def test_RAMSESDataset():

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -191,6 +191,10 @@
         except TypeError:
             return self._data_file[full_name]
 
+    def _get_particle_type_counts(self):
+        # this is implemented by subclasses
+        raise NotImplementedError
+
     def _close_data_file(self):
         if self._data_file:
             self._data_file.close()

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/geometry/grid_geometry_handler.py
--- a/yt/geometry/grid_geometry_handler.py
+++ b/yt/geometry/grid_geometry_handler.py
@@ -119,6 +119,9 @@
         """
         return self.select_grids(self.grid_levels.max())[0].dds[:].min()
 
+    def _get_particle_type_counts(self):
+        return {self.ds.particle_types_raw[0]: self.grid_particle_count.sum()}
+
     def _initialize_level_stats(self):
         # Now some statistics:
         #   0 = number of grids

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/geometry/grid_visitors.pxd
--- a/yt/geometry/grid_visitors.pxd
+++ b/yt/geometry/grid_visitors.pxd
@@ -17,36 +17,36 @@
 cimport numpy as np
 
 cdef struct GridTreeNode:
-    int num_children
-    int level
+    np.int32_t num_children
+    np.int32_t level
     np.int64_t index
     np.float64_t left_edge[3]
     np.float64_t right_edge[3]
     GridTreeNode **children
     np.int64_t start_index[3]
-    int dims[3]
+    np.int32_t dims[3]
     np.float64_t dds[3]
 
 cdef struct GridTreeNodePadded:
-    int num_children
-    int level
-    long int index
-    double left_edge_x
-    double left_edge_y
-    double left_edge_z
-    double right_edge_x
-    double right_edge_y
-    double right_edge_z
-    long int children_pointers
-    long int start_index_x
-    long int start_index_y
-    long int start_index_z
-    int dims_x
-    int dims_y
-    int dims_z
-    double dds_x
-    double dds_y
-    double dds_z
+    np.int32_t num_children
+    np.int32_t level
+    np.int64_t index
+    np.float64_t left_edge_x
+    np.float64_t left_edge_y
+    np.float64_t left_edge_z
+    np.float64_t right_edge_x
+    np.float64_t right_edge_y
+    np.float64_t right_edge_z
+    np.int64_t children_pointers
+    np.int64_t start_index_x
+    np.int64_t start_index_y
+    np.int64_t start_index_z
+    np.int32_t dims_x
+    np.int32_t dims_y
+    np.int32_t dims_z
+    np.float64_t dds_x
+    np.float64_t dds_y
+    np.float64_t dds_z
 
 cdef struct GridVisitorData:
     GridTreeNode *grid

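[Editor's note: the retyping above replaces platform-dependent C types with fixed-width ones, which pins down the struct's memory layout. As a sketch (not yt's actual code), the padded struct corresponds to an unambiguous numpy structured dtype:]

```python
import numpy as np

# Hypothetical numpy dtype mirroring GridTreeNodePadded after the change.
# Fixed-width types (int32/int64/float64) give identical field sizes on
# every platform, unlike C's "int" and "long int".
padded_dtype = np.dtype([
    ("num_children", np.int32),
    ("level", np.int32),
    ("index", np.int64),
    ("left_edge_x", np.float64),
    ("left_edge_y", np.float64),
    ("left_edge_z", np.float64),
    ("right_edge_x", np.float64),
    ("right_edge_y", np.float64),
    ("right_edge_z", np.float64),
    ("children_pointers", np.int64),
    ("start_index_x", np.int64),
    ("start_index_y", np.int64),
    ("start_index_z", np.int64),
    ("dims_x", np.int32),
    ("dims_y", np.int32),
    ("dims_z", np.int32),
    ("dds_x", np.float64),
    ("dds_y", np.float64),
    ("dds_z", np.float64),
])
```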
diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/geometry/particle_geometry_handler.py
--- a/yt/geometry/particle_geometry_handler.py
+++ b/yt/geometry/particle_geometry_handler.py
@@ -14,6 +14,7 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
+import collections
 import numpy as np
 import os
 import weakref
@@ -51,6 +52,13 @@
                    self.dataset.domain_left_edge)
         return dx.min()
 
+    def _get_particle_type_counts(self):
+        result = collections.defaultdict(lambda: 0)
+        for df in self.data_files:
+            for k in df.total_particles.keys():
+                result[k] += df.total_particles[k]
+        return dict(result)
+
     def convert(self, unit):
         return self.dataset.conversion_factors[unit]
 

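[Editor's note: the `_get_particle_type_counts` implementation above just sums per-file tallies. A minimal pure-Python sketch of that aggregation, with hypothetical stand-ins for `df.total_particles`:]

```python
import collections

def get_particle_type_counts(per_file_counts):
    # Sum the tally for each particle type across all data files,
    # as ParticleIndex._get_particle_type_counts does above.
    result = collections.defaultdict(int)
    for counts in per_file_counts:
        for ptype, n in counts.items():
            result[ptype] += n
    return dict(result)

# Hypothetical per-data-file tallies standing in for df.total_particles:
data_files = [
    {"PartType0": 100, "PartType1": 50},
    {"PartType0": 200, "PartType1": 75},
]
counts = get_particle_type_counts(data_files)  # {'PartType0': 300, 'PartType1': 125}
```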
diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/utilities/lib/bounding_volume_hierarchy.pyx
--- a/yt/utilities/lib/bounding_volume_hierarchy.pyx
+++ b/yt/utilities/lib/bounding_volume_hierarchy.pyx
@@ -4,6 +4,8 @@
 from libc.math cimport fabs
 from libc.stdlib cimport malloc, free
 from cython.parallel import parallel, prange
+from grid_traversal cimport ImageSampler, \
+    ImageContainer
 
 from yt.utilities.lib.primitives cimport \
     BBox, \
@@ -371,3 +373,58 @@
     
     cdef int N = origins.shape[0]
     cast_rays(&image[0], &origins[0, 0], &direction[0], N, bvh)
+
+cdef class BVHMeshSampler(ImageSampler):
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def __call__(self,
+                 BVH bvh,
+                 int num_threads = 0):
+        '''
+
+        This function is supposed to cast the rays and return the
+        image.
+
+        '''
+
+        cdef int vi, vj, i, j
+        cdef ImageContainer *im = self.image
+        cdef np.float64_t *v_pos
+        cdef np.float64_t *v_dir
+        cdef np.int64_t nx, ny, size
+        cdef np.float64_t width[3]
+        for i in range(3):
+            width[i] = self.width[i]
+        cdef np.ndarray[np.float64_t, ndim=1] data
+        cdef np.ndarray[np.int64_t, ndim=1] used
+        nx = im.nv[0]
+        ny = im.nv[1]
+        size = nx * ny
+        cdef Ray* ray
+        with nogil, parallel():
+            ray = <Ray *> malloc(sizeof(Ray))
+            v_pos = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
+            v_dir = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
+            for j in prange(size):
+                vj = j % ny
+                vi = (j - vj) / ny
+                vj = vj
+                self.vector_function(im, vi, vj, width, v_dir, v_pos)
+                for i in range(3):
+                    ray.origin[i] = v_pos[i]
+                    ray.direction[i] = v_dir[i]
+                    ray.inv_dir[i] = 1.0 / v_dir[i]
+                ray.t_far = 1e37
+                ray.t_near = 0.0
+                ray.data_val = 0
+                ray.elem_id = -1
+                bvh.intersect(ray)
+                im.image[vi, vj, 0] = ray.data_val
+                im.image_used[vi, vj] = ray.elem_id
+                im.mesh_lines[vi, vj] = ray.near_boundary
+                im.zbuffer[vi, vj] = ray.t_far
+            free(v_pos)
+            free(v_dir)
+            free(ray)

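[Editor's note: in the `prange` loop above, each flat pixel index `j` is decomposed into image coordinates. A plain-Python sketch of that mapping:]

```python
def pixel_coords(j, ny):
    # Invert the flat index used in the parallel loop above:
    # j = vi * ny + vj, with vj varying fastest.
    vj = j % ny
    vi = (j - vj) // ny
    return vi, vj

# For a 4x3 image, flat index 7 is pixel (2, 1):
vi, vj = pixel_coords(7, 3)
```

(The redundant-looking `vj = vj` in the diff is presumably there so that Cython sees `vj` assigned inside the loop body and treats it as thread-private under `prange`.)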
diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/utilities/lib/mesh_traversal.pyx
--- a/yt/utilities/lib/mesh_traversal.pyx
+++ b/yt/utilities/lib/mesh_traversal.pyx
@@ -98,59 +98,3 @@
             im.zbuffer[vi, vj] = ray.tfar
         free(v_pos)
         free(v_dir)
-
-
-cdef class BVHMeshSampler(ImageSampler):
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def __call__(self, 
-                 BVH bvh,
-                 int num_threads = 0):
-        '''
-
-        This function is supposed to cast the rays and return the
-        image.
-
-        '''
-
-        cdef int vi, vj, i, j
-        cdef ImageContainer *im = self.image
-        cdef np.float64_t *v_pos
-        cdef np.float64_t *v_dir
-        cdef np.int64_t nx, ny, size
-        cdef np.float64_t width[3]
-        for i in range(3):
-            width[i] = self.width[i]
-        cdef np.ndarray[np.float64_t, ndim=1] data
-        cdef np.ndarray[np.int64_t, ndim=1] used
-        nx = im.nv[0]
-        ny = im.nv[1]
-        size = nx * ny
-        cdef Ray* ray
-        with nogil, parallel():
-            ray = <Ray *> malloc(sizeof(Ray))
-            v_pos = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
-            v_dir = <np.float64_t *> malloc(3 * sizeof(np.float64_t))
-            for j in prange(size):
-                vj = j % ny
-                vi = (j - vj) / ny
-                vj = vj
-                self.vector_function(im, vi, vj, width, v_dir, v_pos)
-                for i in range(3):
-                    ray.origin[i] = v_pos[i]
-                    ray.direction[i] = v_dir[i]
-                    ray.inv_dir[i] = 1.0 / v_dir[i]
-                ray.t_far = 1e37
-                ray.t_near = 0.0
-                ray.data_val = 0
-                ray.elem_id = -1
-                bvh.intersect(ray)
-                im.image[vi, vj, 0] = ray.data_val
-                im.image_used[vi, vj] = ray.elem_id
-                im.mesh_lines[vi, vj] = ray.near_boundary
-                im.zbuffer[vi, vj] = ray.t_far
-            free(v_pos)
-            free(v_dir)
-            free(ray)

diff -r d740180148ec76546bc51dd1c83907d1d80064db -r 88671629f535ad2178270b1c97334d8677744bbb yt/visualization/volume_rendering/utils.py
--- a/yt/visualization/volume_rendering/utils.py
+++ b/yt/visualization/volume_rendering/utils.py
@@ -1,7 +1,9 @@
 import numpy as np
 from yt.data_objects.static_output import Dataset
+from yt.utilities.lib import bounding_volume_hierarchy
 from yt.utilities.lib.grid_traversal import \
     VolumeRenderSampler, InterpolatedProjectionSampler, ProjectionSampler
+
 from yt.utilities.on_demand_imports import NotAModule
 try:
     from yt.utilities.lib import mesh_traversal
@@ -30,7 +32,7 @@
     if engine == 'embree':
         sampler = mesh_traversal.EmbreeMeshSampler(*args, **kwargs)
     elif engine == 'yt':
-        sampler = mesh_traversal.BVHMeshSampler(*args, **kwargs)
+        sampler = bounding_volume_hierarchy.BVHMeshSampler(*args, **kwargs)
     return sampler
 
 


https://bitbucket.org/yt_analysis/yt/commits/b79034ef9732/
Changeset:   b79034ef9732
Branch:      yt
User:        atmyers
Date:        2016-05-27 20:25:04+00:00
Summary:     merging with tip
Affected #:  0 files



https://bitbucket.org/yt_analysis/yt/commits/88ea09554ef8/
Changeset:   88ea09554ef8
Branch:      yt
User:        atmyers
Date:        2016-05-28 00:12:31+00:00
Summary:     a function for removing the interior triangles from a mesh.
Affected #:  1 file

diff -r b79034ef97322d9ab5a5d59fda54710b34917ba7 -r 88ea09554ef8eceae81a7eddd3477193ca387e1c yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -1,3 +1,4 @@
+# cython: profile=True
 import numpy as np
 cimport numpy as np
 cimport cython
@@ -23,6 +24,47 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
+def cull_interior(np.ndarray[np.int64_t, ndim=2] indices):
+    cdef np.int64_t num_elem = indices.shape[0]
+    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
+    cdef np.int64_t TPE  # num triangles per element
+    cdef int[MAX_NUM_TRI][3] tri_array
+    
+    if (VPE == 8 or VPE == 20 or VPE == 27):
+        TPE = HEX_NT
+        tri_array = triangulate_hex
+    elif VPE == 4:
+        TPE = TETRA_NT
+        tri_array = triangulate_tetra
+    elif VPE == 6:
+        TPE = WEDGE_NT
+        tri_array = triangulate_wedge
+    else:
+        raise YTElementTypeNotRecognized(3, VPE)
+
+    cdef np.int64_t num_tri = TPE * num_elem
+    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
+
+    cdef set s = set()
+    cdef np.int64_t i, j, k, offset
+    cdef np.int64_t tri[3]
+    cdef frozenset t
+    for i in range(num_elem):
+        for j in range(TPE):
+            for k in range(3):
+                offset = tri_array[j][k]
+                tri[k] = indices_ptr[i*VPE + offset]
+            t = frozenset([tri[0], tri[1], tri[2]])
+            try:
+                s.remove(t)
+            except KeyError:
+                s.add(t)
+    return s
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
 def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element

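[Editor's note: `cull_interior` above relies on a parity argument: in a conformal mesh a face shared by two elements is added to the set and then removed, so only boundary faces survive. A minimal pure-Python sketch, using a toy 2D mesh where edges play the role of faces:]

```python
def exterior_faces(elements, face_templates):
    # Toggle each face in a set: the first occurrence adds it, the
    # second (from the neighboring element) removes it, leaving only
    # faces that belong to exactly one element.
    s = set()
    for elem in elements:
        for face in face_templates:
            f = frozenset(elem[i] for i in face)
            if f in s:
                s.remove(f)
            else:
                s.add(f)
    return s

# Two triangles sharing the edge (1, 2); that edge is interior:
tris = [(0, 1, 2), (1, 2, 3)]
edges = [(0, 1), (1, 2), (0, 2)]
boundary = exterior_faces(tris, edges)  # 4 boundary edges; (1, 2) is culled
```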

https://bitbucket.org/yt_analysis/yt/commits/5d93dded63d2/
Changeset:   5d93dded63d2
Branch:      yt
User:        atmyers
Date:        2016-05-29 19:40:26+00:00
Summary:     switching to pure Cython hash set.
Affected #:  1 file

diff -r 88ea09554ef8eceae81a7eddd3477193ca387e1c -r 5d93dded63d22ef1054724430c65fe236226f748 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -1,14 +1,14 @@
-# cython: profile=True
 import numpy as np
 cimport numpy as np
 cimport cython
+from libc.stdlib cimport malloc, free
 
 from yt.utilities.exceptions import YTElementTypeNotRecognized
 
-cdef extern from "mesh_triangulation.h":
+cdef extern from "/Users/atmyers/yt-conda/src/yt-hg/yt/utilities/lib/mesh_triangulation.h":
     enum:
         MAX_NUM_TRI
-        
+
     int HEX_NV
     int HEX_NT
     int TETRA_NV
@@ -20,16 +20,141 @@
     int triangulate_wedge[MAX_NUM_TRI][3]
     int hex20_faces[6][8]
 
+cdef struct TriNode:
+    np.uint64_t key
+    np.int64_t tri[3]
+    TriNode* next_node
 
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
+cdef np.int64_t triangles_are_equal(np.int64_t tri1[3], np.int64_t tri2[3]) nogil:
+    cdef np.int64_t found
+    for i in range(3):
+        found = False
+        for j in range(3):
+            if tri1[i] == tri2[j]:
+                found = True
+        if not found:
+            return 0
+    return 1
+    
+cdef np.int64_t TABLE_SIZE = 2**24
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef np.uint64_t hash_func(np.int64_t tri[3]) nogil:
+    # http://stackoverflow.com/questions/1536393/good-hash-function-for-permutations
+    cdef np.uint64_t h = 1
+    for i in range(3):
+        h *= (1779033703 + 2*tri[i])
+    return h / 2
+
+cdef class TriSet:
+    cdef TriNode **table
+    cdef np.uint64_t num_items
+    
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def __cinit__(self):
+        self.table = <TriNode**> malloc(TABLE_SIZE * sizeof(TriNode*))
+        for i in range(TABLE_SIZE):
+            self.table[i] = NULL
+        self.num_items = 0
+        
+    def __dealloc__(self):
+        cdef np.int64_t i
+        cdef TriNode *node
+        cdef TriNode *delete_node
+        for i in range(TABLE_SIZE):
+            node = self.table[i]
+            while (node != NULL):
+                delete_node = node
+                node = node.next_node
+                free(delete_node)
+            self.table[i] = NULL
+        free(self.table)
+    
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void add(self, np.int64_t tri[3]) nogil:
+        cdef np.uint64_t key = hash_func(tri)
+        cdef np.uint64_t index = key % TABLE_SIZE
+        cdef TriNode *node = self.table[index]        
+        
+        if node == NULL:
+            # this is the first item in this bin
+            self.table[index] = <TriNode* > malloc(sizeof(TriNode))
+            self.table[index].key = key
+            self.table[index].tri[0] = tri[0]
+            self.table[index].tri[1] = tri[1]
+            self.table[index].tri[2] = tri[2]
+            self.table[index].next_node = NULL
+            self.num_items += 1
+            return
+    
+        # there is already something here; walk through list
+        while node != NULL:
+            if triangles_are_equal(node.tri, tri):
+                # this triangle is already here, do nothing
+                return
+            if node.next_node == NULL:
+                # we have reached the end; add new node
+                node.next_node = <TriNode* > malloc(sizeof(TriNode))
+                node.next_node.key = key
+                node.next_node.tri[0] = tri[0]
+                node.next_node.tri[1] = tri[1]
+                node.next_node.tri[2] = tri[2]
+                node.next_node.next_node = NULL
+                self.num_items += 1
+                return
+            node = node.next_node
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef np.int64_t delete(self, np.int64_t tri[3]) nogil:
+        cdef np.uint64_t key = hash_func(tri)
+        cdef np.uint64_t index = key % TABLE_SIZE
+        cdef TriNode *node = self.table[index]
+
+        if node == NULL:
+            # nothing is here
+            return 0
+        
+        if triangles_are_equal(node.tri, tri):
+            # this tri is at the first position
+            self.table[index] = node.next_node
+            free(node)
+            self.num_items -= 1
+            return 1
+
+        # otherwise, try to find the tri in the list
+        cdef TriNode* prev = node
+        node = node.next_node
+        while node != NULL:
+            if triangles_are_equal(node.tri, tri):
+                prev.next_node = node.next_node
+                free(node)
+                self.num_items -= 1
+                return 1
+            prev = node
+            node = node.next_node
+
+        return 0
+    
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
 def cull_interior(np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element
     cdef np.int64_t TPE  # num triangles per element
     cdef int[MAX_NUM_TRI][3] tri_array
-    
+
     if (VPE == 8 or VPE == 20 or VPE == 27):
         TPE = HEX_NT
         tri_array = triangulate_hex
@@ -44,23 +169,18 @@
 
     cdef np.int64_t num_tri = TPE * num_elem
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef set s = set()
-    cdef np.int64_t i, j, k, offset
+    
+    cdef TriSet s = TriSet()
+    cdef np.int64_t i, j, k, found
     cdef np.int64_t tri[3]
-    cdef frozenset t
     for i in range(num_elem):
         for j in range(TPE):
             for k in range(3):
-                offset = tri_array[j][k]
-                tri[k] = indices_ptr[i*VPE + offset]
-            t = frozenset([tri[0], tri[1], tri[2]])
-            try:
-                s.remove(t)
-            except KeyError:
-                s.add(t)
-    return s
-
+                tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
+            found = s.delete(tri)
+            if not found:
+                s.add(tri)
+    print s.num_items
 
 @cython.boundscheck(False)
 @cython.wraparound(False)


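[Editor's note: `hash_func` above multiplies one term per vertex index, so the result is invariant under any permutation of the triangle's vertices; the same face reached from either neighboring element hashes to the same bucket regardless of vertex ordering. A Python sketch emulating the unsigned 64-bit arithmetic:]

```python
MASK64 = (1 << 64) - 1

def tri_hash(tri):
    # A product of per-vertex terms is independent of vertex order,
    # mirroring the Cython hash_func above (np.uint64 wraps mod 2**64).
    h = 1
    for v in tri:
        h = (h * (1779033703 + 2 * v)) & MASK64
    return h >> 1  # integer division by 2, as in the cdivision'd h / 2
```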
https://bitbucket.org/yt_analysis/yt/commits/77835074ba55/
Changeset:   77835074ba55
Branch:      yt
User:        atmyers
Date:        2016-05-29 19:40:54+00:00
Summary:     switching to pure Cython hash set.
Affected #:  1 file

diff -r 5d93dded63d22ef1054724430c65fe236226f748 -r 77835074ba551139efe1f774727e890dc15ab996 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -21,7 +21,7 @@
     int hex20_faces[6][8]
 
 cdef struct TriNode:
-    np.uint64_t key
+#    np.uint64_t key
     np.int64_t tri[3]
     TriNode* next_node
 
@@ -88,7 +88,7 @@
         if node == NULL:
             # this is the first item in this bin
             self.table[index] = <TriNode* > malloc(sizeof(TriNode))
-            self.table[index].key = key
+#            self.table[index].key = key
             self.table[index].tri[0] = tri[0]
             self.table[index].tri[1] = tri[1]
             self.table[index].tri[2] = tri[2]
@@ -98,13 +98,13 @@
     
         # there is already something here; walk through list
         while node != NULL:
-            if triangles_are_equal(node.tri, tri):
-                # this triangle is already here, do nothing
-                return
+#            if triangles_are_equal(node.tri, tri):
+#                # this triangle is already here, do nothing
+#                return
             if node.next_node == NULL:
                 # we have reached the end; add new node
                 node.next_node = <TriNode* > malloc(sizeof(TriNode))
-                node.next_node.key = key
+#                node.next_node.key = key
                 node.next_node.tri[0] = tri[0]
                 node.next_node.tri[1] = tri[1]
                 node.next_node.tri[2] = tri[2]


https://bitbucket.org/yt_analysis/yt/commits/d14cb2c0d4ab/
Changeset:   d14cb2c0d4ab
Branch:      yt
User:        atmyers
Date:        2016-05-29 21:26:14+00:00
Summary:     convert the identified triangles to a numpy array
Affected #:  1 file

diff -r 77835074ba551139efe1f774727e890dc15ab996 -r d14cb2c0d4ab592f95875b4f3ac8a732336700a2 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -21,7 +21,7 @@
     int hex20_faces[6][8]
 
 cdef struct TriNode:
-#    np.uint64_t key
+    np.uint64_t key
     np.int64_t tri[3]
     TriNode* next_node
 
@@ -77,74 +77,73 @@
             self.table[i] = NULL
         free(self.table)
     
+    def get_exterior_tris(self):
+        cdef np.ndarray[np.int64_t, ndim=2] tri_indices
+        tri_indices = np.empty((self.num_items, 3), dtype="int64")
+        cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
+
+        cdef TriNode* node
+        cdef np.int64_t counter = 0
+        for i in range(TABLE_SIZE):
+            node = self.table[i]
+            while node != NULL:
+                for j in range(3):
+                    tri_indices_ptr[3*counter + j] = node.tri[j]
+                counter += 1
+                node = node.next_node
+                
+        return tri_indices
+
+    cdef TriNode* _allocate_new_node(self, np.int64_t tri[3], np.uint64_t key) nogil:
+        cdef TriNode* new_node = <TriNode* > malloc(sizeof(TriNode))
+        new_node.key = key
+        new_node.tri[0] = tri[0]
+        new_node.tri[1] = tri[1]
+        new_node.tri[2] = tri[2]
+        new_node.next_node = NULL
+        self.num_items += 1
+        return new_node
+        
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef void add(self, np.int64_t tri[3]) nogil:
-        cdef np.uint64_t key = hash_func(tri)
-        cdef np.uint64_t index = key % TABLE_SIZE
-        cdef TriNode *node = self.table[index]        
-        
-        if node == NULL:
-            # this is the first item in this bin
-            self.table[index] = <TriNode* > malloc(sizeof(TriNode))
-#            self.table[index].key = key
-            self.table[index].tri[0] = tri[0]
-            self.table[index].tri[1] = tri[1]
-            self.table[index].tri[2] = tri[2]
-            self.table[index].next_node = NULL
-            self.num_items += 1
-            return
-    
-        # there is already something here; walk through list
-        while node != NULL:
-#            if triangles_are_equal(node.tri, tri):
-#                # this triangle is already here, do nothing
-#                return
-            if node.next_node == NULL:
-                # we have reached the end; add new node
-                node.next_node = <TriNode* > malloc(sizeof(TriNode))
-#                node.next_node.key = key
-                node.next_node.tri[0] = tri[0]
-                node.next_node.tri[1] = tri[1]
-                node.next_node.tri[2] = tri[2]
-                node.next_node.next_node = NULL
-                self.num_items += 1
-                return
-            node = node.next_node
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef np.int64_t delete(self, np.int64_t tri[3]) nogil:
+    cdef void update(self, np.int64_t tri[3]) nogil:
         cdef np.uint64_t key = hash_func(tri)
         cdef np.uint64_t index = key % TABLE_SIZE
         cdef TriNode *node = self.table[index]
+        
+        if node == NULL:
+            self.table[index] = self._allocate_new_node(tri, key)
+            return
 
-        if node == NULL:
-            # nothing is here
-            return 0
-        
-        if triangles_are_equal(node.tri, tri):
-            # this tri is at the first position
+        if key == node.key and triangles_are_equal(node.tri, tri):
+            # this triangle is already here, delete it
             self.table[index] = node.next_node
             free(node)
             self.num_items -= 1
-            return 1
+            return
 
-        # otherwise, try to find the tri in the list
+        elif node.next_node == NULL:
+            node.next_node = self._allocate_new_node(tri, key)
+            return
+    
+        # walk through node list
         cdef TriNode* prev = node
         node = node.next_node
         while node != NULL:
-            if triangles_are_equal(node.tri, tri):
+            if key == node.key and triangles_are_equal(node.tri, tri):
+                # this triangle is already here, delete it
                 prev.next_node = node.next_node
                 free(node)
                 self.num_items -= 1
-                return 1
+                return
+            if node.next_node == NULL:
+                # we have reached the end; add new node
+                node.next_node = self._allocate_new_node(tri, key)
+                return
             prev = node
             node = node.next_node
 
-        return 0
     
 @cython.boundscheck(False)
 @cython.wraparound(False)
@@ -177,10 +176,8 @@
         for j in range(TPE):
             for k in range(3):
                 tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
-            found = s.delete(tri)
-            if not found:
-                s.add(tri)
-    print s.num_items
+            s.update(tri)
+    return s.get_exterior_tris()
 
 @cython.boundscheck(False)
 @cython.wraparound(False)


https://bitbucket.org/yt_analysis/yt/commits/40b673390c9d/
Changeset:   40b673390c9d
Branch:      yt
User:        atmyers
Date:        2016-05-31 18:20:59+00:00
Summary:     interior culling that works for moab data.
Affected #:  2 files

diff -r d14cb2c0d4ab592f95875b4f3ac8a732336700a2 -r 40b673390c9d14ea2ce14cf37f30accd793b964d yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -22,6 +22,7 @@
 
 cdef struct TriNode:
     np.uint64_t key
+    np.int64_t elem
     np.int64_t tri[3]
     TriNode* next_node
 
@@ -78,10 +79,15 @@
         free(self.table)
     
     def get_exterior_tris(self):
+
         cdef np.ndarray[np.int64_t, ndim=2] tri_indices
         tri_indices = np.empty((self.num_items, 3), dtype="int64")
         cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
 
+        cdef np.ndarray[np.int64_t, ndim=1] element_map
+        element_map = np.empty(self.num_items, dtype="int64")
+        cdef np.int64_t *elem_map_ptr = <np.int64_t*> element_map.data
+
         cdef TriNode* node
         cdef np.int64_t counter = 0
         for i in range(TABLE_SIZE):
@@ -89,14 +95,19 @@
             while node != NULL:
                 for j in range(3):
                     tri_indices_ptr[3*counter + j] = node.tri[j]
+                elem_map_ptr[counter] = node.elem
                 counter += 1
                 node = node.next_node
                 
-        return tri_indices
+        return tri_indices, element_map
 
-    cdef TriNode* _allocate_new_node(self, np.int64_t tri[3], np.uint64_t key) nogil:
+    cdef TriNode* _allocate_new_node(self,
+                                     np.int64_t tri[3],
+                                     np.uint64_t key,
+                                     np.int64_t elem) nogil:
         cdef TriNode* new_node = <TriNode* > malloc(sizeof(TriNode))
         new_node.key = key
+        new_node.elem = elem
         new_node.tri[0] = tri[0]
         new_node.tri[1] = tri[1]
         new_node.tri[2] = tri[2]
@@ -107,13 +118,13 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef void update(self, np.int64_t tri[3]) nogil:
+    cdef void update(self, np.int64_t tri[3], np.int64_t elem) nogil:
         cdef np.uint64_t key = hash_func(tri)
         cdef np.uint64_t index = key % TABLE_SIZE
         cdef TriNode *node = self.table[index]
         
         if node == NULL:
-            self.table[index] = self._allocate_new_node(tri, key)
+            self.table[index] = self._allocate_new_node(tri, key, elem)
             return
 
         if key == node.key and triangles_are_equal(node.tri, tri):
@@ -124,7 +135,7 @@
             return
 
         elif node.next_node == NULL:
-            node.next_node = self._allocate_new_node(tri, key)
+            node.next_node = self._allocate_new_node(tri, key, elem)
             return
     
         # walk through node list
@@ -139,7 +150,7 @@
                 return
             if node.next_node == NULL:
                 # we have reached the end; add new node
-                node.next_node = self._allocate_new_node(tri, key)
+                node.next_node = self._allocate_new_node(tri, key, elem)
                 return
             prev = node
             node = node.next_node
@@ -148,7 +159,9 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def cull_interior(np.ndarray[np.int64_t, ndim=2] indices):
+def cull_interior(np.ndarray[np.float64_t, ndim=2] coords,
+                  np.ndarray[np.float64_t, ndim=1] data,
+                  np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element
     cdef np.int64_t TPE  # num triangles per element
@@ -167,8 +180,15 @@
         raise YTElementTypeNotRecognized(3, VPE)
 
     cdef np.int64_t num_tri = TPE * num_elem
+    cdef np.int64_t num_verts = num_tri*3
+
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-    
+    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
+    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
+
+    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
+    cdef np.ndarray[np.int64_t, ndim=1] element_map
+
     cdef TriSet s = TriSet()
     cdef np.int64_t i, j, k, found
     cdef np.int64_t tri[3]
@@ -176,8 +196,32 @@
         for j in range(TPE):
             for k in range(3):
                 tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
-            s.update(tri)
-    return s.get_exterior_tris()
+            s.update(tri, i)
+    exterior_tris, element_map = s.get_exterior_tris()
+
+    cdef np.int64_t num_exterior_tris = exterior_tris.shape[0]
+    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
+    tri_indices = np.empty(num_exterior_tris * 3, dtype=np.int32)
+    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
+    
+    cdef np.ndarray[np.float32_t, ndim=1] tri_data
+    tri_data = np.empty(num_exterior_tris * 3, dtype=np.float32)
+    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
+
+    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
+    tri_coords = np.empty(num_exterior_tris*3*3, dtype=np.float32)
+    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
+    
+    cdef np.int64_t vert_index
+    for i in range(num_exterior_tris):
+        for j in range(3):
+            vert_index = i*3 + j
+            tri_data_ptr[vert_index] = data_ptr[element_map[i]]
+            tri_indices_ptr[vert_index] = vert_index
+            for k in range(3):
+                tri_coords_ptr[vert_index*3 + k] = coords_ptr[3*exterior_tris[i][j] + k]
+
+    return tri_coords, tri_data, tri_indices
 
 @cython.boundscheck(False)
 @cython.wraparound(False)

diff -r d14cb2c0d4ab592f95875b4f3ac8a732336700a2 -r 40b673390c9d14ea2ce14cf37f30accd793b964d yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -29,7 +29,7 @@
     quaternion_to_rotation_matrix, \
     rotation_matrix_to_quaternion
 from yt.utilities.lib.mesh_triangulation import triangulate_vertex_data, \
-    triangulate_element_data
+    triangulate_element_data, cull_interior
 from .shader_objects import known_shaders, ShaderProgram
 
 bbox_vertices = np.array(
@@ -598,6 +598,8 @@
 
         vertices, data, indices = self.get_mesh_data(data_source, field)
 
+        print data.min(), data.max()
+
         self._initialize_vertex_array("mesh_info")
         GL.glBindVertexArray(self.vert_arrays["mesh_info"])
 
@@ -649,7 +651,8 @@
         data = data_source[field]
 
         if len(data.shape) == 1:
-            return triangulate_element_data(vertices, data, indices)
+#            return triangulate_element_data(vertices, data, indices)
+            return cull_interior(vertices, data, indices)
         elif data.shape[1] == indices.shape[1]:
             return triangulate_vertex_data(vertices, data, indices)
 

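[Editor's note: the final loop in `cull_interior` expands element-centered data onto each exterior triangle's vertices, producing the flat float32/int32 arrays the interactive renderer uploads to OpenGL. A vectorized numpy sketch of the same expansion, on hypothetical toy inputs:]

```python
import numpy as np

def flatten_exterior(coords, data, exterior_tris, element_map):
    # One entry per triangle vertex: coordinates are gathered per vertex,
    # the element-centered value is repeated for all three vertices, and
    # the index buffer simply enumerates the vertices.
    tri_coords = coords[exterior_tris.ravel()].astype(np.float32).ravel()
    tri_data = np.repeat(data[element_map], 3).astype(np.float32)
    tri_indices = np.arange(exterior_tris.shape[0] * 3, dtype=np.int32)
    return tri_coords, tri_data, tri_indices

# Toy mesh: two exterior triangles taken from elements 0 and 1.
coords = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tris = np.array([[0, 1, 2], [0, 1, 3]])
elem_map = np.array([0, 1])
data = np.array([5.0, 7.0])
tc, td, ti = flatten_exterior(coords, data, tris, elem_map)
```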

https://bitbucket.org/yt_analysis/yt/commits/8f0b32945732/
Changeset:   8f0b32945732
Branch:      yt
User:        atmyers
Date:        2016-05-31 18:43:45+00:00
Summary:     this is slightly faster.
Affected #:  2 files

diff -r 40b673390c9d14ea2ce14cf37f30accd793b964d -r 8f0b329457322eb36f362ce3807416b94a300115 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -90,6 +90,7 @@
 
         cdef TriNode* node
         cdef np.int64_t counter = 0
+        cdef np.int64_t i, j
         for i in range(TABLE_SIZE):
             node = self.table[i]
             while node != NULL:
@@ -197,9 +198,11 @@
             for k in range(3):
                 tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
             s.update(tri, i)
+
     exterior_tris, element_map = s.get_exterior_tris()
+    cdef np.int64_t num_exterior_tris = exterior_tris.shape[0]
+    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
 
-    cdef np.int64_t num_exterior_tris = exterior_tris.shape[0]
     cdef np.ndarray[np.int32_t, ndim=1] tri_indices
     tri_indices = np.empty(num_exterior_tris * 3, dtype=np.int32)
     cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
@@ -212,14 +215,15 @@
     tri_coords = np.empty(num_exterior_tris*3*3, dtype=np.float32)
     cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
     
-    cdef np.int64_t vert_index
+    cdef np.int64_t vert_index, coord_index
     for i in range(num_exterior_tris):
         for j in range(3):
             vert_index = i*3 + j
             tri_data_ptr[vert_index] = data_ptr[element_map[i]]
             tri_indices_ptr[vert_index] = vert_index
             for k in range(3):
-                tri_coords_ptr[vert_index*3 + k] = coords_ptr[3*exterior_tris[i][j] + k]
+                coord_index = 3*exterior_tris_ptr[i*3 + j] + k
+                tri_coords_ptr[vert_index*3 + k] = coords_ptr[coord_index]
 
     return tri_coords, tri_data, tri_indices
 

diff -r 40b673390c9d14ea2ce14cf37f30accd793b964d -r 8f0b329457322eb36f362ce3807416b94a300115 yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -596,9 +596,10 @@
         GL.glEnable(GL.GL_CULL_FACE)
         GL.glCullFace(GL.GL_BACK)
 
+        import time
+        start = time.clock()
         vertices, data, indices = self.get_mesh_data(data_source, field)
-
-        print data.min(), data.max()
+        print time.clock() - start
 
         self._initialize_vertex_array("mesh_info")
         GL.glBindVertexArray(self.vert_arrays["mesh_info"])
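The "slightly faster" change above replaces 2-D buffer indexing like `exterior_tris[i][j]` with raw pointer arithmetic on the flattened array. For a C-contiguous (row-major) array with 3 columns the two address the same memory; a minimal NumPy sketch of the index arithmetic (illustrative data only):

```python
import numpy as np

# A small (num_tri, 3) connectivity array, C-contiguous by default.
exterior_tris = np.array([[4, 7, 2],
                          [9, 0, 5]], dtype=np.int64)
flat = exterior_tris.ravel()  # view of the same row-major buffer

# exterior_tris[i, j] and flat[i*3 + j] address the same element,
# which is what exterior_tris_ptr[i*3 + j] does in the Cython code.
for i in range(exterior_tris.shape[0]):
    for j in range(3):
        assert exterior_tris[i, j] == flat[i * 3 + j]
```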


https://bitbucket.org/yt_analysis/yt/commits/51a955a081f2/
Changeset:   51a955a081f2
Branch:      yt
User:        atmyers
Date:        2016-05-31 19:37:57+00:00
Summary:     only use the @cython decorators as needed.
Affected #:  1 file

diff -r 8f0b329457322eb36f362ce3807416b94a300115 -r 51a955a081f26825ff8944f4e3ebe0b7949ad45f yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -26,9 +26,6 @@
     np.int64_t tri[3]
     TriNode* next_node
 
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
 cdef np.int64_t triangles_are_equal(np.int64_t tri1[3], np.int64_t tri2[3]) nogil:
     cdef np.int64_t found
     for i in range(3):
@@ -40,11 +37,6 @@
             return 0
     return 1
     
-cdef np.int64_t TABLE_SIZE = 2**24
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
 cdef np.uint64_t hash_func(np.int64_t tri[3]) nogil:
     # http://stackoverflow.com/questions/1536393/good-hash-function-for-permutations
     cdef np.uint64_t h = 1
@@ -52,13 +44,12 @@
         h *= (1779033703 + 2*tri[i])
     return h / 2
 
+cdef np.int64_t TABLE_SIZE = 2**24
+
 cdef class TriSet:
     cdef TriNode **table
     cdef np.uint64_t num_items
     
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
     def __cinit__(self):
         self.table = <TriNode**> malloc(TABLE_SIZE * sizeof(TriNode*))
         for i in range(TABLE_SIZE):
@@ -116,8 +107,6 @@
         self.num_items += 1
         return new_node
         
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
     @cython.cdivision(True)
     cdef void update(self, np.int64_t tri[3], np.int64_t elem) nogil:
         cdef np.uint64_t key = hash_func(tri)
@@ -158,8 +147,6 @@
 
     
 @cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
 def cull_interior(np.ndarray[np.float64_t, ndim=2] coords,
                   np.ndarray[np.float64_t, ndim=1] data,
                   np.ndarray[np.int64_t, ndim=2] indices):
@@ -227,9 +214,7 @@
 
     return tri_coords, tri_data, tri_indices
 
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
+
 def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element
@@ -264,9 +249,7 @@
                 tri_indices_ptr[i*TPE*3 + 3*j + k] = indices_ptr[i*VPE + offset]
     return tri_indices
 
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
+
 def triangulate_vertex_data(np.ndarray[np.float64_t, ndim=2] coords,
                             np.ndarray[np.float64_t, ndim=2] data,
                             np.ndarray[np.int64_t, ndim=2] indices):
@@ -321,9 +304,7 @@
 
     return tri_coords, tri_data, tri_indices
 
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
+
 def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
                              np.ndarray[np.float64_t, ndim=1] data,
                              np.ndarray[np.int64_t, ndim=2] indices):


https://bitbucket.org/yt_analysis/yt/commits/84873ec368c5/
Changeset:   84873ec368c5
Branch:      yt
User:        atmyers
Date:        2016-05-31 20:10:36+00:00
Summary:     Removing debug junk and now un-used code.
Affected #:  2 files

diff -r 51a955a081f26825ff8944f4e3ebe0b7949ad45f -r 84873ec368c5cbf877e35806a475b6715394c504 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -147,9 +147,9 @@
 
     
 @cython.boundscheck(False)
-def cull_interior(np.ndarray[np.float64_t, ndim=2] coords,
-                  np.ndarray[np.float64_t, ndim=1] data,
-                  np.ndarray[np.int64_t, ndim=2] indices):
+def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
+                             np.ndarray[np.float64_t, ndim=1] data,
+                             np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element
     cdef np.int64_t TPE  # num triangles per element
@@ -303,58 +303,3 @@
             tri_coords_ptr[i*3 + j] = coords_ptr[i*3 + j]
 
     return tri_coords, tri_data, tri_indices
-
-
-def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
-                             np.ndarray[np.float64_t, ndim=1] data,
-                             np.ndarray[np.int64_t, ndim=2] indices):
-
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
-    
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
-
-    cdef np.int64_t num_tri = TPE * num_elem
-    cdef np.int64_t num_verts = num_tri*3
-    
-    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_verts, dtype="int32")
-    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
-    
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_verts, dtype=np.float32)
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-
-    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_verts*3, dtype=np.float32)
-    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
-    
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef np.int64_t i, j, k, l, 
-    cdef np.int64_t coord_index, vert_index
-    for i in range(num_elem):
-        for j in range(TPE):
-            for k in range(3):
-                coord_index = indices_ptr[i*VPE + tri_array[j][k]]*3
-                vert_index = i*TPE*3+j*3+k
-                tri_data_ptr[vert_index] = data_ptr[i]
-                for l in range(3):
-                    tri_coords_ptr[vert_index*3 + l] = coords_ptr[coord_index + l]
-                tri_indices_ptr[vert_index] = vert_index
-
-    return tri_coords, tri_data, tri_indices

diff -r 51a955a081f26825ff8944f4e3ebe0b7949ad45f -r 84873ec368c5cbf877e35806a475b6715394c504 yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -29,7 +29,7 @@
     quaternion_to_rotation_matrix, \
     rotation_matrix_to_quaternion
 from yt.utilities.lib.mesh_triangulation import triangulate_vertex_data, \
-    triangulate_element_data, cull_interior
+    triangulate_element_data
 from .shader_objects import known_shaders, ShaderProgram
 
 bbox_vertices = np.array(
@@ -596,10 +596,7 @@
         GL.glEnable(GL.GL_CULL_FACE)
         GL.glCullFace(GL.GL_BACK)
 
-        import time
-        start = time.clock()
         vertices, data, indices = self.get_mesh_data(data_source, field)
-        print time.clock() - start
 
         self._initialize_vertex_array("mesh_info")
         GL.glBindVertexArray(self.vert_arrays["mesh_info"])
@@ -652,8 +649,7 @@
         data = data_source[field]
 
         if len(data.shape) == 1:
-#            return triangulate_element_data(vertices, data, indices)
-            return cull_interior(vertices, data, indices)
+            return triangulate_element_data(vertices, data, indices)
         elif data.shape[1] == indices.shape[1]:
             return triangulate_vertex_data(vertices, data, indices)
 


https://bitbucket.org/yt_analysis/yt/commits/5b7a64b91e9f/
Changeset:   5b7a64b91e9f
Branch:      yt
User:        atmyers
Date:        2016-05-31 20:20:18+00:00
Summary:     don't hard code path to this include.
Affected #:  1 file

diff -r 84873ec368c5cbf877e35806a475b6715394c504 -r 5b7a64b91e9fd816318b54a10d61e4fdc4452bbf yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -5,10 +5,9 @@
 
 from yt.utilities.exceptions import YTElementTypeNotRecognized
 
-cdef extern from "/Users/atmyers/yt-conda/src/yt-hg/yt/utilities/lib/mesh_triangulation.h":
+cdef extern from "mesh_triangulation.h":
     enum:
         MAX_NUM_TRI
-
     int HEX_NV
     int HEX_NT
     int TETRA_NV


https://bitbucket.org/yt_analysis/yt/commits/6d2ecf76d2e6/
Changeset:   6d2ecf76d2e6
Branch:      yt
User:        atmyers
Date:        2016-05-31 22:25:32+00:00
Summary:     some refactoring
Affected #:  2 files

diff -r 5b7a64b91e9fd816318b54a10d61e4fdc4452bbf -r 6d2ecf76d2e6eaaf73a858b6f0c28cbcf25fa6cc yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -144,11 +144,7 @@
             prev = node
             node = node.next_node
 
-    
-@cython.boundscheck(False)
-def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
-                             np.ndarray[np.float64_t, ndim=1] data,
-                             np.ndarray[np.int64_t, ndim=2] indices):
+def cull_interior_triangles(np.ndarray[np.int64_t, ndim=2] indices):
     cdef np.int64_t num_elem = indices.shape[0]
     cdef np.int64_t VPE = indices.shape[1]  # num verts per element
     cdef np.int64_t TPE  # num triangles per element
@@ -168,13 +164,7 @@
 
     cdef np.int64_t num_tri = TPE * num_elem
     cdef np.int64_t num_verts = num_tri*3
-
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-
-    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
-    cdef np.ndarray[np.int64_t, ndim=1] element_map
 
     cdef TriSet s = TriSet()
     cdef np.int64_t i, j, k, found
@@ -185,24 +175,38 @@
                 tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
             s.update(tri, i)
 
-    exterior_tris, element_map = s.get_exterior_tris()
-    cdef np.int64_t num_exterior_tris = exterior_tris.shape[0]
+    return s.get_exterior_tris()
+    
+
+@cython.boundscheck(False)
+def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
+                             np.ndarray[np.float64_t, ndim=1] data,
+                             np.ndarray[np.int64_t, ndim=2] indices):
+
+    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
+    cdef np.ndarray[np.int64_t, ndim=1] element_map
+    exterior_tris, element_map = cull_interior_triangles(indices)
+
+    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
+    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
+
+    cdef np.int64_t num_tri = exterior_tris.shape[0]
     cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
 
     cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_exterior_tris * 3, dtype=np.int32)
+    tri_indices = np.empty(num_tri * 3, dtype=np.int32)
     cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
     
     cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_exterior_tris * 3, dtype=np.float32)
+    tri_data = np.empty(num_tri * 3, dtype=np.float32)
     cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
 
     cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_exterior_tris*3*3, dtype=np.float32)
+    tri_coords = np.empty(num_tri*3*3, dtype=np.float32)
     cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
     
     cdef np.int64_t vert_index, coord_index
-    for i in range(num_exterior_tris):
+    for i in range(num_tri):
         for j in range(3):
             vert_index = i*3 + j
             tri_data_ptr[vert_index] = data_ptr[element_map[i]]

diff -r 5b7a64b91e9fd816318b54a10d61e4fdc4452bbf -r 6d2ecf76d2e6eaaf73a858b6f0c28cbcf25fa6cc yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -596,7 +596,10 @@
         GL.glEnable(GL.GL_CULL_FACE)
         GL.glCullFace(GL.GL_BACK)
 
+        import time
+        start = time.clock()
         vertices, data, indices = self.get_mesh_data(data_source, field)
+        print time.clock() - start
 
         self._initialize_vertex_array("mesh_info")
         GL.glBindVertexArray(self.vert_arrays["mesh_info"])


https://bitbucket.org/yt_analysis/yt/commits/dd6a1db4f5ad/
Changeset:   dd6a1db4f5ad
Branch:      yt
User:        atmyers
Date:        2016-05-31 22:27:07+00:00
Summary:     rename the hash function to something a bit more descriptive.
Affected #:  1 file

diff -r 6d2ecf76d2e6eaaf73a858b6f0c28cbcf25fa6cc -r dd6a1db4f5adba1ec4b93cbf09253c3ac167745c yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -36,7 +36,7 @@
             return 0
     return 1
     
-cdef np.uint64_t hash_func(np.int64_t tri[3]) nogil:
+cdef np.uint64_t triangle_hash(np.int64_t tri[3]) nogil:
     # http://stackoverflow.com/questions/1536393/good-hash-function-for-permutations
     cdef np.uint64_t h = 1
     for i in range(3):
@@ -108,7 +108,7 @@
         
     @cython.cdivision(True)
     cdef void update(self, np.int64_t tri[3], np.int64_t elem) nogil:
-        cdef np.uint64_t key = hash_func(tri)
+        cdef np.uint64_t key = triangle_hash(tri)
         cdef np.uint64_t index = key % TABLE_SIZE
         cdef TriNode *node = self.table[index]
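The hash renamed above is permutation-invariant: it multiplies one odd factor per vertex id, and multiplication commutes. A pure-Python sketch of the same scheme, with C `uint64` wraparound emulated by masking:

```python
MASK64 = 0xFFFFFFFFFFFFFFFF  # emulate C uint64 overflow

def triangle_hash(tri):
    # One odd factor per vertex id; the product is order-independent.
    h = 1
    for v in tri:
        h = (h * (1779033703 + 2 * v)) & MASK64
    # h is a product of odd numbers, hence odd; shifting right drops
    # the always-set low bit, matching the Cython h / 2.
    return h >> 1

# Any permutation of the same three vertices hashes identically.
assert triangle_hash([3, 7, 1]) == triangle_hash([7, 1, 3])
assert triangle_hash([3, 7, 1]) != triangle_hash([3, 7, 2])
```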
         


https://bitbucket.org/yt_analysis/yt/commits/c4850ba27bbd/
Changeset:   c4850ba27bbd
Branch:      yt
User:        atmyers
Date:        2016-05-31 22:32:51+00:00
Summary:     adding some docstrings.
Affected #:  1 file

diff -r dd6a1db4f5adba1ec4b93cbf09253c3ac167745c -r c4850ba27bbdca304e6d8736c2d8d474a86ec416 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -46,6 +46,15 @@
 cdef np.int64_t TABLE_SIZE = 2**24
 
 cdef class TriSet:
+    '''
+
+    This is a hash table data structure for rapidly identifying the exterior
+    triangles in a polygon mesh. We loop over each triangle in each element and
+    update the TriSet for each one. We keep only the triangles that appear once,
+    as these make up the exterior of the mesh.
+
+    '''
+
     cdef TriNode **table
     cdef np.uint64_t num_items
     
@@ -69,6 +78,12 @@
         free(self.table)
     
     def get_exterior_tris(self):
+        '''
+
+        Returns two numpy arrays, one storing the exterior triangle
+        indices and the other storing the corresponding element ids.
+
+        '''
 
         cdef np.ndarray[np.int64_t, ndim=2] tri_indices
         tri_indices = np.empty((self.num_items, 3), dtype="int64")
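The docstrings added above describe the core idea: a triangle shared by two elements is interior, while one seen exactly once lies on the mesh boundary. A dictionary-based sketch of the same bookkeeping in pure Python (hypothetical helper, not the yt API):

```python
from collections import Counter

def exterior_faces(element_faces):
    """element_faces: iterable of (element_id, (v0, v1, v2)) triangles.
    Faces are keyed by their sorted vertex ids so that winding order
    does not matter, as in TriSet's permutation-invariant hashing."""
    counts = Counter()
    first_seen = {}
    for elem_id, tri in element_faces:
        key = tuple(sorted(tri))
        counts[key] += 1
        first_seen.setdefault(key, (tri, elem_id))
    # Keep only faces that appeared exactly once: the exterior.
    return [first_seen[k] for k, n in counts.items() if n == 1]

# Two tetrahedra glued along face (1, 2, 3): that face is culled.
faces = [(0, (0, 1, 2)), (0, (0, 2, 3)), (0, (0, 1, 3)), (0, (1, 2, 3)),
         (1, (1, 3, 2)), (1, (1, 2, 4)), (1, (2, 3, 4)), (1, (1, 3, 4))]
assert len(exterior_faces(faces)) == 6  # 8 faces total, shared pair removed
```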


https://bitbucket.org/yt_analysis/yt/commits/99a6bcb841bc/
Changeset:   99a6bcb841bc
Branch:      yt
User:        atmyers
Date:        2016-06-01 00:23:19+00:00
Summary:     make the triangle culling work with vertex centered data as well.
Affected #:  1 file

diff -r c4850ba27bbdca304e6d8736c2d8d474a86ec416 -r 99a6bcb841bc2783138e4b4522b56be06999dc6d yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -193,6 +193,45 @@
     return s.get_exterior_tris()
     
 
+def get_vertex_data(np.ndarray[np.float64_t, ndim=2] coords,
+                    np.ndarray[np.float64_t, ndim=2] data,
+                    np.ndarray[np.int64_t, ndim=2] indices):
+
+    cdef np.int64_t num_elem = indices.shape[0]
+    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
+    cdef np.int64_t TPE  # num triangles per element
+    cdef int[MAX_NUM_TRI][3] tri_array
+    
+    if (VPE == 8 or VPE == 20 or VPE == 27):
+        TPE = HEX_NT
+        tri_array = triangulate_hex
+    elif VPE == 4:
+        TPE = TETRA_NT
+        tri_array = triangulate_tetra
+    elif VPE == 6:
+        TPE = WEDGE_NT
+        tri_array = triangulate_wedge
+    else:
+        raise YTElementTypeNotRecognized(3, VPE)
+
+    cdef np.int64_t num_tri = TPE * num_elem
+    cdef np.int64_t num_verts = coords.shape[0]
+    
+    cdef np.ndarray[np.float32_t, ndim=1] vertex_data
+    vertex_data = np.zeros(num_verts, dtype="float32")
+    cdef np.float32_t *vertex_data_ptr = <np.float32_t*> vertex_data.data
+    
+    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
+    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
+    
+    cdef np.int64_t i, j, k
+    for i in range(num_elem):
+        for j in range(VPE):
+            vertex_data_ptr[indices_ptr[i*VPE + j]] = data_ptr[i*VPE + j]
+
+    return vertex_data
+
+
 @cython.boundscheck(False)
 def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
                              np.ndarray[np.float64_t, ndim=1] data,
@@ -204,23 +243,26 @@
 
     cdef np.float64_t *data_ptr = <np.float64_t*> data.data
     cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
+    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
 
     cdef np.int64_t num_tri = exterior_tris.shape[0]
-    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
+    cdef np.int64_t num_verts = 3 * num_tri
+    cdef np.int64_t num_coords = 3 * num_verts
 
     cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_tri * 3, dtype=np.int32)
+    tri_indices = np.empty(num_verts, dtype=np.int32)
     cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
     
     cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_tri * 3, dtype=np.float32)
+    tri_data = np.empty(num_verts, dtype=np.float32)
     cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
 
     cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_tri*3*3, dtype=np.float32)
+    tri_coords = np.empty(num_coords, dtype=np.float32)
     cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
     
     cdef np.int64_t vert_index, coord_index
+    cdef np.int64_t i, j, k
     for i in range(num_tri):
         for j in range(3):
             vert_index = i*3 + j
@@ -272,52 +314,43 @@
                             np.ndarray[np.float64_t, ndim=2] data,
                             np.ndarray[np.int64_t, ndim=2] indices):
 
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
-    
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
+    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
+    cdef np.ndarray[np.int64_t, ndim=1] element_map
+    exterior_tris, element_map = cull_interior_triangles(indices)
 
-    cdef np.int64_t num_tri = TPE * num_elem
-    cdef np.int64_t num_verts = coords.shape[0]
-    
+    cdef np.ndarray[np.float32_t, ndim=1] vertex_data
+    vertex_data = get_vertex_data(coords, data, indices)
+    cdef np.float32_t *vertex_data_ptr = <np.float32_t*> vertex_data.data
+
+    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
+    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
+    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
+
+    cdef np.int64_t num_tri = exterior_tris.shape[0]
+    cdef np.int64_t num_verts = 3 * num_tri
+    cdef np.int64_t num_coords = 3 * num_verts
+
     cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_tri*3, dtype="int32")
+    tri_indices = np.empty(num_verts, dtype=np.int32)
     cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
     
+    cdef np.ndarray[np.float32_t, ndim=1] tri_data
+    tri_data = np.empty(num_verts, dtype=np.float32)
+    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
+
     cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_verts*3, dtype=np.float32)
+    tri_coords = np.empty(num_coords, dtype=np.float32)
     cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
     
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.zeros(num_verts, dtype="float32")
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-    
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    
-    cdef np.int64_t i, j, k, offset
-    for i in range(num_elem):
-        for j in range(TPE):
+    cdef np.int64_t vert_index, coord_index
+    cdef np.int64_t i, j, k
+    for i in range(num_tri):
+        for j in range(3):
+            vert_index = i*3 + j
+            tri_data_ptr[vert_index] = vertex_data_ptr[exterior_tris_ptr[i*3 + j]]
+            tri_indices_ptr[vert_index] = vert_index
             for k in range(3):
-                offset = tri_array[j][k]
-                tri_indices_ptr[i*TPE*3 + 3*j + k] = indices_ptr[i*VPE + offset]
-        for j in range(VPE):
-            tri_data_ptr[indices_ptr[i*VPE + j]] = data_ptr[i*VPE + j]
-    for i in range(num_verts):
-        for j in range(3):
-            tri_coords_ptr[i*3 + j] = coords_ptr[i*3 + j]
+                coord_index = 3*exterior_tris_ptr[i*3 + j] + k
+                tri_coords_ptr[vert_index*3 + k] = coords_ptr[coord_index]
 
     return tri_coords, tri_data, tri_indices
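The `get_vertex_data` helper introduced here scatters the `(num_elem, verts_per_element)` field array onto one flat value per vertex. An equivalent NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def gather_vertex_field(num_verts, data, indices):
    # Vertices shared by several elements keep the value written last,
    # matching the order of the Cython double loop.
    vertex_data = np.zeros(num_verts, dtype=np.float32)
    vertex_data[indices.ravel()] = data.ravel()
    return vertex_data

# Two line-like elements sharing vertex 1.
indices = np.array([[0, 1], [1, 2]], dtype=np.int64)
data = np.array([[10.0, 11.0], [12.0, 13.0]])
out = gather_vertex_field(3, data, indices)
assert out.tolist() == [10.0, 12.0, 13.0]
```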


https://bitbucket.org/yt_analysis/yt/commits/d95c6f338ba4/
Changeset:   d95c6f338ba4
Branch:      yt
User:        atmyers
Date:        2016-06-01 01:04:45+00:00
Summary:     some more refactoring.
Affected #:  2 files

diff -r 99a6bcb841bc2783138e4b4522b56be06999dc6d -r d95c6f338ba40aebeb1142e80aad9a336bfb5173 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -43,6 +43,7 @@
         h *= (1779033703 + 2*tri[i])
     return h / 2
 
+# should be enough, consider dynamic resizing in the future
 cdef np.int64_t TABLE_SIZE = 2**24
 
 cdef class TriSet:
@@ -159,35 +160,53 @@
             prev = node
             node = node.next_node
 
+
+cdef class MeshInfoHolder:
+    cdef np.int64_t num_elem
+    cdef np.int64_t num_tri
+    cdef np.int64_t num_verts
+    cdef np.int64_t VPE  # num verts per element
+    cdef np.int64_t TPE  # num tris per element
+    cdef int[MAX_NUM_TRI][3] tri_array
+    
+    def __cinit__(self, np.ndarray[np.int64_t, ndim=2] indices):
+        self.num_elem = indices.shape[0]
+        self.VPE = indices.shape[1]
+
+        if (self.VPE == 8 or self.VPE == 20 or self.VPE == 27):
+            self.TPE = HEX_NT
+            self.tri_array = triangulate_hex
+        elif self.VPE == 4:
+            self.TPE = TETRA_NT
+            self.tri_array = triangulate_tetra
+        elif self.VPE == 6:
+            self.TPE = WEDGE_NT
+            self.tri_array = triangulate_wedge
+        else:
+            raise YTElementTypeNotRecognized(3, self.VPE)
+
+        self.num_tri = self.TPE * self.num_elem
+        self.num_verts = self.num_tri * 3
+        
+
 def cull_interior_triangles(np.ndarray[np.int64_t, ndim=2] indices):
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
+    '''
 
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
+    This is used to remove interior triangles from the mesh before rendering
+    it on the GPU.
 
-    cdef np.int64_t num_tri = TPE * num_elem
-    cdef np.int64_t num_verts = num_tri*3
+    '''
+
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
 
     cdef TriSet s = TriSet()
     cdef np.int64_t i, j, k, found
     cdef np.int64_t tri[3]
-    for i in range(num_elem):
-        for j in range(TPE):
+    for i in range(m.num_elem):
+        for j in range(m.TPE):
             for k in range(3):
-                tri[k] = indices_ptr[i*VPE + tri_array[j][k]]
+                tri[k] = indices_ptr[i*m.VPE + m.tri_array[j][k]]
             s.update(tri, i)
 
     return s.get_exterior_tris()
@@ -197,24 +216,14 @@
                     np.ndarray[np.float64_t, ndim=2] data,
                     np.ndarray[np.int64_t, ndim=2] indices):
 
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
-    
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
+    '''
 
-    cdef np.int64_t num_tri = TPE * num_elem
+    This converts the data array from the shape (num_elem, conn_length)
+    to (num_verts, ).
+
+    '''
+
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
     cdef np.int64_t num_verts = coords.shape[0]
     
     cdef np.ndarray[np.float32_t, ndim=1] vertex_data
@@ -225,101 +234,32 @@
     cdef np.float64_t *data_ptr = <np.float64_t*> data.data
     
     cdef np.int64_t i, j, k
-    for i in range(num_elem):
-        for j in range(VPE):
-            vertex_data_ptr[indices_ptr[i*VPE + j]] = data_ptr[i*VPE + j]
+    for i in range(m.num_elem):
+        for j in range(m.VPE):
+            vertex_data_ptr[indices_ptr[i*m.VPE + j]] = data_ptr[i*m.VPE + j]
 
     return vertex_data
 
 
 @cython.boundscheck(False)
-def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
-                             np.ndarray[np.float64_t, ndim=1] data,
-                             np.ndarray[np.int64_t, ndim=2] indices):
+def triangulate_mesh(np.ndarray[np.float64_t, ndim=2] coords,
+                     np.ndarray data,
+                     np.ndarray[np.int64_t, ndim=2] indices):
+    '''
 
-    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
-    cdef np.ndarray[np.int64_t, ndim=1] element_map
-    exterior_tris, element_map = cull_interior_triangles(indices)
+    This converts a mesh into a flattened triangle array suitable for
+    rendering on the GPU.
 
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
-
-    cdef np.int64_t num_tri = exterior_tris.shape[0]
-    cdef np.int64_t num_verts = 3 * num_tri
-    cdef np.int64_t num_coords = 3 * num_verts
-
-    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_verts, dtype=np.int32)
-    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
-    
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_verts, dtype=np.float32)
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-
-    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_coords, dtype=np.float32)
-    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
-    
-    cdef np.int64_t vert_index, coord_index
-    cdef np.int64_t i, j, k
-    for i in range(num_tri):
-        for j in range(3):
-            vert_index = i*3 + j
-            tri_data_ptr[vert_index] = data_ptr[element_map[i]]
-            tri_indices_ptr[vert_index] = vert_index
-            for k in range(3):
-                coord_index = 3*exterior_tris_ptr[i*3 + j] + k
-                tri_coords_ptr[vert_index*3 + k] = coords_ptr[coord_index]
-
-    return tri_coords, tri_data, tri_indices
-
-
-def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
-    
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
-
-    cdef np.int64_t num_tri = TPE * num_elem
-        
-    cdef np.ndarray[np.int64_t, ndim=2] tri_indices
-    tri_indices = np.empty((num_tri, 3), dtype="int64")
-    
-    cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef np.int64_t i, j, k, offset
-    for i in range(num_elem):
-        for j in range(TPE):
-            for k in range(3):
-                offset = tri_array[j][k]
-                tri_indices_ptr[i*TPE*3 + 3*j + k] = indices_ptr[i*VPE + offset]
-    return tri_indices
-
-
-def triangulate_vertex_data(np.ndarray[np.float64_t, ndim=2] coords,
-                            np.ndarray[np.float64_t, ndim=2] data,
-                            np.ndarray[np.int64_t, ndim=2] indices):
-
+    '''
     cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
     cdef np.ndarray[np.int64_t, ndim=1] element_map
     exterior_tris, element_map = cull_interior_triangles(indices)
 
     cdef np.ndarray[np.float32_t, ndim=1] vertex_data
-    vertex_data = get_vertex_data(coords, data, indices)
+    if data.ndim == 2:
+        vertex_data = get_vertex_data(coords, data, indices)
+    else:
+        vertex_data = np.empty(1, dtype=np.float32)
     cdef np.float32_t *vertex_data_ptr = <np.float32_t*> vertex_data.data
 
     cdef np.float64_t *data_ptr = <np.float64_t*> data.data
@@ -347,10 +287,39 @@
     for i in range(num_tri):
         for j in range(3):
             vert_index = i*3 + j
-            tri_data_ptr[vert_index] = vertex_data_ptr[exterior_tris_ptr[i*3 + j]]
+            if data.ndim == 1:
+                tri_data_ptr[vert_index] = data_ptr[element_map[i]]  # element data
+            else:
+                tri_data_ptr[vert_index] = vertex_data_ptr[exterior_tris_ptr[i*3 + j]]
             tri_indices_ptr[vert_index] = vert_index
             for k in range(3):
                 coord_index = 3*exterior_tris_ptr[i*3 + j] + k
                 tri_coords_ptr[vert_index*3 + k] = coords_ptr[coord_index]
 
     return tri_coords, tri_data, tri_indices
+
+
+def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
+    '''
+
+    This is like triangulate_mesh, except it only considers the 
+    connectivity information, instead of also copying the vertex 
+    coordinates and the data values.
+
+    '''
+
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
+        
+    cdef np.ndarray[np.int64_t, ndim=2] tri_indices
+    tri_indices = np.empty((m.num_tri, 3), dtype="int64")
+    
+    cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
+    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
+
+    cdef np.int64_t i, j, k, offset
+    for i in range(m.num_elem):
+        for j in range(m.TPE):
+            for k in range(3):
+                offset = m.tri_array[j][k]
+                tri_indices_ptr[i*m.TPE*3 + 3*j + k] = indices_ptr[i*m.VPE + offset]
+    return tri_indices

diff -r 99a6bcb841bc2783138e4b4522b56be06999dc6d -r d95c6f338ba40aebeb1142e80aad9a336bfb5173 yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -28,8 +28,7 @@
     quaternion_mult, \
     quaternion_to_rotation_matrix, \
     rotation_matrix_to_quaternion
-from yt.utilities.lib.mesh_triangulation import triangulate_vertex_data, \
-    triangulate_element_data
+from yt.utilities.lib.mesh_triangulation import triangulate_mesh
 from .shader_objects import known_shaders, ShaderProgram
 
 bbox_vertices = np.array(
@@ -651,10 +650,7 @@
 
         data = data_source[field]
 
-        if len(data.shape) == 1:
-            return triangulate_element_data(vertices, data, indices)
-        elif data.shape[1] == indices.shape[1]:
-            return triangulate_vertex_data(vertices, data, indices)
+        return triangulate_mesh(vertices, data, indices)
 
     def run_program(self):
         """ Renders one frame of the scene. """


https://bitbucket.org/yt_analysis/yt/commits/7f3e0eb6cd47/
Changeset:   7f3e0eb6cd47
Branch:      yt
User:        atmyers
Date:        2016-06-01 01:05:37+00:00
Summary:     remove debug statements again.
Affected #:  1 file

diff -r d95c6f338ba40aebeb1142e80aad9a336bfb5173 -r 7f3e0eb6cd47e1324239f6c35ad61e7ff0ecf98d yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -595,10 +595,7 @@
         GL.glEnable(GL.GL_CULL_FACE)
         GL.glCullFace(GL.GL_BACK)
 
-        import time
-        start = time.clock()
         vertices, data, indices = self.get_mesh_data(data_source, field)
-        print time.clock() - start
 
         self._initialize_vertex_array("mesh_info")
         GL.glBindVertexArray(self.vert_arrays["mesh_info"])


https://bitbucket.org/yt_analysis/yt/commits/f8184535466c/
Changeset:   f8184535466c
Branch:      yt
User:        atmyers
Date:        2016-06-09 22:32:32+00:00
Summary:     removing a few un-used variables.
Affected #:  1 file

diff -r 7f3e0eb6cd47e1324239f6c35ad61e7ff0ecf98d -r f8184535466ce3b97107c3ebe31354240b72b81c yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -201,7 +201,7 @@
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
 
     cdef TriSet s = TriSet()
-    cdef np.int64_t i, j, k, found
+    cdef np.int64_t i, j, k
     cdef np.int64_t tri[3]
     for i in range(m.num_elem):
         for j in range(m.TPE):
@@ -233,7 +233,7 @@
     cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
     cdef np.float64_t *data_ptr = <np.float64_t*> data.data
     
-    cdef np.int64_t i, j, k
+    cdef np.int64_t i, j
     for i in range(m.num_elem):
         for j in range(m.VPE):
             vertex_data_ptr[indices_ptr[i*m.VPE + j]] = data_ptr[i*m.VPE + j]


https://bitbucket.org/yt_analysis/yt/commits/6f52b945306e/
Changeset:   6f52b945306e
Branch:      yt
User:        atmyers
Date:        2016-06-09 22:49:54+00:00
Summary:     switching to memoryviews for the mesh triangulation code.
Affected #:  1 file

diff -r f8184535466ce3b97107c3ebe31354240b72b81c -r 6f52b945306eb605900618159bcca774b1df1b34 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -78,6 +78,8 @@
             self.table[i] = NULL
         free(self.table)
     
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
     def get_exterior_tris(self):
         '''
 
@@ -86,13 +88,8 @@
 
         '''
 
-        cdef np.ndarray[np.int64_t, ndim=2] tri_indices
-        tri_indices = np.empty((self.num_items, 3), dtype="int64")
-        cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
-
-        cdef np.ndarray[np.int64_t, ndim=1] element_map
-        element_map = np.empty(self.num_items, dtype="int64")
-        cdef np.int64_t *elem_map_ptr = <np.int64_t*> element_map.data
+        cdef np.int64_t[:, ::1] tri_indices = np.empty((self.num_items, 3), dtype="int64")
+        cdef np.int64_t[::1] element_map = np.empty(self.num_items, dtype="int64")
 
         cdef TriNode* node
         cdef np.int64_t counter = 0
@@ -101,8 +98,8 @@
             node = self.table[i]
             while node != NULL:
                 for j in range(3):
-                    tri_indices_ptr[3*counter + j] = node.tri[j]
-                elem_map_ptr[counter] = node.elem
+                    tri_indices[counter, j] = node.tri[j]
+                element_map[counter] = node.elem
                 counter += 1
                 node = node.next_node
                 
@@ -169,7 +166,13 @@
     cdef np.int64_t TPE  # num tris per element
     cdef int[MAX_NUM_TRI][3] tri_array
     
-    def __cinit__(self, np.ndarray[np.int64_t, ndim=2] indices):
+    def __cinit__(self, np.int64_t[:, ::1] indices):
+        '''
+
+        This class is used to store metadata about the type of mesh being used.
+
+        '''
+
         self.num_elem = indices.shape[0]
         self.VPE = indices.shape[1]
 
@@ -188,8 +191,9 @@
         self.num_tri = self.TPE * self.num_elem
         self.num_verts = self.num_tri * 3
         
-
-def cull_interior_triangles(np.ndarray[np.int64_t, ndim=2] indices):
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def cull_interior_triangles(np.int64_t[:, ::1] indices):
     '''
 
     This is used to remove interior triangles from the mesh before rendering
@@ -198,7 +202,6 @@
     '''
 
     cdef MeshInfoHolder m = MeshInfoHolder(indices)
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
 
     cdef TriSet s = TriSet()
     cdef np.int64_t i, j, k
@@ -206,15 +209,16 @@
     for i in range(m.num_elem):
         for j in range(m.TPE):
             for k in range(3):
-                tri[k] = indices_ptr[i*m.VPE + m.tri_array[j][k]]
+                tri[k] = indices[i, m.tri_array[j][k]]
             s.update(tri, i)
 
     return s.get_exterior_tris()
     
-
-def get_vertex_data(np.ndarray[np.float64_t, ndim=2] coords,
-                    np.ndarray[np.float64_t, ndim=2] data,
-                    np.ndarray[np.int64_t, ndim=2] indices):
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def get_vertex_data(np.float64_t[:, ::1] coords,
+                    np.float64_t[:, ::1] data,
+                    np.int64_t[:, ::1] indices):
 
     '''
 
@@ -225,101 +229,75 @@
 
     cdef MeshInfoHolder m = MeshInfoHolder(indices)
     cdef np.int64_t num_verts = coords.shape[0]
-    
-    cdef np.ndarray[np.float32_t, ndim=1] vertex_data
-    vertex_data = np.zeros(num_verts, dtype="float32")
-    cdef np.float32_t *vertex_data_ptr = <np.float32_t*> vertex_data.data
-    
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    
+    cdef np.float32_t[:] vertex_data = np.zeros(num_verts, dtype="float32")
+        
     cdef np.int64_t i, j
     for i in range(m.num_elem):
         for j in range(m.VPE):
-            vertex_data_ptr[indices_ptr[i*m.VPE + j]] = data_ptr[i*m.VPE + j]
-
+            vertex_data[indices[i, j]] = data[i, j]
     return vertex_data
 
 
 @cython.boundscheck(False)
-def triangulate_mesh(np.ndarray[np.float64_t, ndim=2] coords,
+@cython.wraparound(False)
+def triangulate_mesh(np.float64_t[:, ::1] coords,
                      np.ndarray data,
-                     np.ndarray[np.int64_t, ndim=2] indices):
+                     np.int64_t[:, ::1] indices):
     '''
 
     This converts a mesh into a flattened triangle array suitable for
     rendering on the GPU.
 
     '''
-    cdef np.ndarray[np.int64_t, ndim=2] exterior_tris
-    cdef np.ndarray[np.int64_t, ndim=1] element_map
+    cdef np.int64_t[:, ::1] exterior_tris
+    cdef np.int64_t[::1] element_map
     exterior_tris, element_map = cull_interior_triangles(indices)
 
-    cdef np.ndarray[np.float32_t, ndim=1] vertex_data
-    if data.ndim == 2:
-        vertex_data = get_vertex_data(coords, data, indices)
-    else:
-        vertex_data = np.empty(1, dtype=np.float32)
-    cdef np.float32_t *vertex_data_ptr = <np.float32_t*> vertex_data.data
-
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.int64_t *exterior_tris_ptr = <np.int64_t*> exterior_tris.data
-
     cdef np.int64_t num_tri = exterior_tris.shape[0]
     cdef np.int64_t num_verts = 3 * num_tri
     cdef np.int64_t num_coords = 3 * num_verts
-
-    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_verts, dtype=np.int32)
-    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
     
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_verts, dtype=np.float32)
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-
-    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_coords, dtype=np.float32)
-    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
+    cdef np.float32_t[:] vertex_data
+    if data.ndim == 2:
+        vertex_data = get_vertex_data(coords, data, indices)
+    else:
+        vertex_data = data.astype("float32")
     
-    cdef np.int64_t vert_index, coord_index
-    cdef np.int64_t i, j, k
+    cdef np.int32_t[:] tri_indices = np.empty(num_verts, dtype=np.int32)
+    cdef np.float32_t[:] tri_data = np.empty(num_verts, dtype=np.float32)
+    cdef np.float32_t[:] tri_coords = np.empty(num_coords, dtype=np.float32)
+        
+    cdef np.int64_t vert_index, i, j, k
     for i in range(num_tri):
         for j in range(3):
             vert_index = i*3 + j
             if data.ndim == 1:
-                tri_data_ptr[vert_index] = data_ptr[element_map[i]]  # element data
+                tri_data[vert_index] = vertex_data[element_map[i]]
             else:
-                tri_data_ptr[vert_index] = vertex_data_ptr[exterior_tris_ptr[i*3 + j]]
-            tri_indices_ptr[vert_index] = vert_index
+                tri_data[vert_index] = vertex_data[exterior_tris[i, j]]
+            tri_indices[vert_index] = vert_index
             for k in range(3):
-                coord_index = 3*exterior_tris_ptr[i*3 + j] + k
-                tri_coords_ptr[vert_index*3 + k] = coords_ptr[coord_index]
+                tri_coords[vert_index*3 + k] = coords[exterior_tris[i, j], k]
+                
+    return np.array(tri_coords), np.array(tri_data), np.array(tri_indices)
 
-    return tri_coords, tri_data, tri_indices
-
-
-def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def triangulate_indices(np.int64_t[:, ::1] indices):
     '''
 
-    This is like triangulate_mesh, except it only considers the 
-    connectivity information, instead of also copying the vertex 
+    This is like triangulate_mesh, except it only considers the
+    connectivity information, instead of also copying the vertex
     coordinates and the data values.
 
     '''
-
+    
     cdef MeshInfoHolder m = MeshInfoHolder(indices)
-        
-    cdef np.ndarray[np.int64_t, ndim=2] tri_indices
-    tri_indices = np.empty((m.num_tri, 3), dtype="int64")
+    cdef np.int64_t[:, ::1] tri_indices = np.empty((m.num_tri, 3), dtype=np.int64)
     
-    cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef np.int64_t i, j, k, offset
+    cdef np.int64_t i, j, k
     for i in range(m.num_elem):
         for j in range(m.TPE):
             for k in range(3):
-                offset = m.tri_array[j][k]
-                tri_indices_ptr[i*m.TPE*3 + 3*j + k] = indices_ptr[i*m.VPE + offset]
-    return tri_indices
+                tri_indices[i*m.TPE + j, k] = indices[i, m.tri_array[j][k]]
+    return np.array(tri_indices)
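The `cull_interior_triangles` / `TriSet` machinery refactored above is, at heart, a set keyed on triangles: a face triangle seen twice is shared by two elements and therefore interior, so only faces seen once survive. A dict-based Python sketch of that idea (the tetrahedron face table below is an assumption for illustration, not yt's actual triangulation table; yt keys on a custom hash rather than sorted vertex tuples):

```python
import numpy as np

# Faces of a 4-vertex tetrahedron, as indices into the element connectivity.
TETRA_FACES = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def cull_interior(indices):
    """Keep only face triangles that appear exactly once (the exterior)."""
    seen = {}
    for elem_id, conn in enumerate(indices):
        for face in TETRA_FACES:
            tri = tuple(conn[v] for v in face)
            # Orientation differs between the two elements sharing a face,
            # so key on the sorted vertex ids.
            key = tuple(sorted(tri))
            if key in seen:
                del seen[key]          # second sighting: interior face
            else:
                seen[key] = (tri, elem_id)
    tris = np.array([t for t, _ in seen.values()])
    elem_map = np.array([e for _, e in seen.values()])
    return tris, elem_map

# Two tetrahedra sharing the face (1, 2, 3): of 8 faces, the 2 copies of
# the shared face are culled, leaving 6 exterior triangles.
two_tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
tris, elem_map = cull_interior(two_tets)
```

This is why the commit's summary line can promise not to "send impossible to see mesh data to the GPU": interior faces never reach the vertex buffers.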


https://bitbucket.org/yt_analysis/yt/commits/c540131e2cb4/
Changeset:   c540131e2cb4
Branch:      yt
User:        xarthisius
Date:        2016-06-15 18:23:16+00:00
Summary:     Merged in atmyers/yt (pull request #2202)

Don't send impossible to see mesh data to the GPU for rendering.
Affected #:  2 files

diff -r bb67af7434bfd0dba16114e1674c03401361dcce -r c540131e2cb401a05896b9f7fc4f8e61e3241964 yt/utilities/lib/mesh_triangulation.pyx
--- a/yt/utilities/lib/mesh_triangulation.pyx
+++ b/yt/utilities/lib/mesh_triangulation.pyx
@@ -1,13 +1,13 @@
 import numpy as np
 cimport numpy as np
 cimport cython
+from libc.stdlib cimport malloc, free
 
 from yt.utilities.exceptions import YTElementTypeNotRecognized
 
 cdef extern from "mesh_triangulation.h":
     enum:
         MAX_NUM_TRI
-        
     int HEX_NV
     int HEX_NT
     int TETRA_NV
@@ -19,154 +19,285 @@
     int triangulate_wedge[MAX_NUM_TRI][3]
     int hex20_faces[6][8]
 
+cdef struct TriNode:
+    np.uint64_t key
+    np.int64_t elem
+    np.int64_t tri[3]
+    TriNode* next_node
+
+cdef np.int64_t triangles_are_equal(np.int64_t tri1[3], np.int64_t tri2[3]) nogil:
+    cdef np.int64_t found
+    for i in range(3):
+        found = False
+        for j in range(3):
+            if tri1[i] == tri2[j]:
+                found = True
+        if not found:
+            return 0
+    return 1
+    
+cdef np.uint64_t triangle_hash(np.int64_t tri[3]) nogil:
+    # http://stackoverflow.com/questions/1536393/good-hash-function-for-permutations
+    cdef np.uint64_t h = 1
+    for i in range(3):
+        h *= (1779033703 + 2*tri[i])
+    return h / 2
+
+# should be enough, consider dynamic resizing in the future
+cdef np.int64_t TABLE_SIZE = 2**24
+
+cdef class TriSet:
+    '''
+
+    This is a hash table data structure for rapidly identifying the exterior
+    triangles in a polygon mesh. We loop over each triangle in each element and
+    update the TriSet for each one. We keep only the triangles that appear once,
+    as these make up the exterior of the mesh.
+
+    '''
+
+    cdef TriNode **table
+    cdef np.uint64_t num_items
+    
+    def __cinit__(self):
+        self.table = <TriNode**> malloc(TABLE_SIZE * sizeof(TriNode*))
+        for i in range(TABLE_SIZE):
+            self.table[i] = NULL
+        self.num_items = 0
+        
+    def __dealloc__(self):
+        cdef np.int64_t i
+        cdef TriNode *node
+        cdef TriNode *delete_node
+        for i in range(TABLE_SIZE):
+            node = self.table[i]
+            while (node != NULL):
+                delete_node = node
+                node = node.next_node
+                free(delete_node)
+            self.table[i] = NULL
+        free(self.table)
+    
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    def get_exterior_tris(self):
+        '''
+
+        Returns two numpy arrays, one storing the exterior triangle
+        indices and the other storing the corresponding element ids.
+
+        '''
+
+        cdef np.int64_t[:, ::1] tri_indices = np.empty((self.num_items, 3), dtype="int64")
+        cdef np.int64_t[::1] element_map = np.empty(self.num_items, dtype="int64")
+
+        cdef TriNode* node
+        cdef np.int64_t counter = 0
+        cdef np.int64_t i, j
+        for i in range(TABLE_SIZE):
+            node = self.table[i]
+            while node != NULL:
+                for j in range(3):
+                    tri_indices[counter, j] = node.tri[j]
+                element_map[counter] = node.elem
+                counter += 1
+                node = node.next_node
+                
+        return tri_indices, element_map
+
+    cdef TriNode* _allocate_new_node(self,
+                                     np.int64_t tri[3],
+                                     np.uint64_t key,
+                                     np.int64_t elem) nogil:
+        cdef TriNode* new_node = <TriNode* > malloc(sizeof(TriNode))
+        new_node.key = key
+        new_node.elem = elem
+        new_node.tri[0] = tri[0]
+        new_node.tri[1] = tri[1]
+        new_node.tri[2] = tri[2]
+        new_node.next_node = NULL
+        self.num_items += 1
+        return new_node
+        
+    @cython.cdivision(True)
+    cdef void update(self, np.int64_t tri[3], np.int64_t elem) nogil:
+        cdef np.uint64_t key = triangle_hash(tri)
+        cdef np.uint64_t index = key % TABLE_SIZE
+        cdef TriNode *node = self.table[index]
+        
+        if node == NULL:
+            self.table[index] = self._allocate_new_node(tri, key, elem)
+            return
+
+        if key == node.key and triangles_are_equal(node.tri, tri):
+            # this triangle is already here, delete it
+            self.table[index] = node.next_node
+            free(node)
+            self.num_items -= 1
+            return
+
+        elif node.next_node == NULL:
+            node.next_node = self._allocate_new_node(tri, key, elem)
+            return
+    
+        # walk through node list
+        cdef TriNode* prev = node
+        node = node.next_node
+        while node != NULL:
+            if key == node.key and triangles_are_equal(node.tri, tri):
+                # this triangle is already here, delete it
+                prev.next_node = node.next_node
+                free(node)
+                self.num_items -= 1
+                return
+            if node.next_node == NULL:
+                # we have reached the end; add new node
+                node.next_node = self._allocate_new_node(tri, key, elem)
+                return
+            prev = node
+            node = node.next_node
+
+
+cdef class MeshInfoHolder:
+    cdef np.int64_t num_elem
+    cdef np.int64_t num_tri
+    cdef np.int64_t num_verts
+    cdef np.int64_t VPE  # num verts per element
+    cdef np.int64_t TPE  # num tris per element
+    cdef int[MAX_NUM_TRI][3] tri_array
+    
+    def __cinit__(self, np.int64_t[:, ::1] indices):
+        '''
+
+        This class is used to store metadata about the type of mesh being used.
+
+        '''
+
+        self.num_elem = indices.shape[0]
+        self.VPE = indices.shape[1]
+
+        if (self.VPE == 8 or self.VPE == 20 or self.VPE == 27):
+            self.TPE = HEX_NT
+            self.tri_array = triangulate_hex
+        elif self.VPE == 4:
+            self.TPE = TETRA_NT
+            self.tri_array = triangulate_tetra
+        elif self.VPE == 6:
+            self.TPE = WEDGE_NT
+            self.tri_array = triangulate_wedge
+        else:
+            raise YTElementTypeNotRecognized(3, self.VPE)
+
+        self.num_tri = self.TPE * self.num_elem
+        self.num_verts = self.num_tri * 3
+        
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def cull_interior_triangles(np.int64_t[:, ::1] indices):
+    '''
+
+    This is used to remove interior triangles from the mesh before rendering
+    it on the GPU.
+
+    '''
+
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
+
+    cdef TriSet s = TriSet()
+    cdef np.int64_t i, j, k
+    cdef np.int64_t tri[3]
+    for i in range(m.num_elem):
+        for j in range(m.TPE):
+            for k in range(3):
+                tri[k] = indices[i, m.tri_array[j][k]]
+            s.update(tri, i)
+
+    return s.get_exterior_tris()
+    
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def get_vertex_data(np.float64_t[:, ::1] coords,
+                    np.float64_t[:, ::1] data,
+                    np.int64_t[:, ::1] indices):
+
+    '''
+
+    This converts the data array from the shape (num_elem, conn_length)
+    to (num_verts, ).
+
+    '''
+
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
+    cdef np.int64_t num_verts = coords.shape[0]
+    cdef np.float32_t[:] vertex_data = np.zeros(num_verts, dtype="float32")
+        
+    cdef np.int64_t i, j
+    for i in range(m.num_elem):
+        for j in range(m.VPE):
+            vertex_data[indices[i, j]] = data[i, j]
+    return vertex_data
+
 
 @cython.boundscheck(False)
 @cython.wraparound(False)
-@cython.cdivision(True)
-def triangulate_indices(np.ndarray[np.int64_t, ndim=2] indices):
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
+def triangulate_mesh(np.float64_t[:, ::1] coords,
+                     np.ndarray data,
+                     np.int64_t[:, ::1] indices):
+    '''
+
+    This converts a mesh into a flattened triangle array suitable for
+    rendering on the GPU.
+
+    '''
+    cdef np.int64_t[:, ::1] exterior_tris
+    cdef np.int64_t[::1] element_map
+    exterior_tris, element_map = cull_interior_triangles(indices)
+
+    cdef np.int64_t num_tri = exterior_tris.shape[0]
+    cdef np.int64_t num_verts = 3 * num_tri
+    cdef np.int64_t num_coords = 3 * num_verts
     
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
+    cdef np.float32_t[:] vertex_data
+    if data.ndim == 2:
+        vertex_data = get_vertex_data(coords, data, indices)
     else:
-        raise YTElementTypeNotRecognized(3, VPE)
-
-    cdef np.int64_t num_tri = TPE * num_elem
+        vertex_data = data.astype("float32")
+    
+    cdef np.int32_t[:] tri_indices = np.empty(num_verts, dtype=np.int32)
+    cdef np.float32_t[:] tri_data = np.empty(num_verts, dtype=np.float32)
+    cdef np.float32_t[:] tri_coords = np.empty(num_coords, dtype=np.float32)
         
-    cdef np.ndarray[np.int64_t, ndim=2] tri_indices
-    tri_indices = np.empty((num_tri, 3), dtype="int64")
-    
-    cdef np.int64_t *tri_indices_ptr = <np.int64_t*> tri_indices.data
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef np.int64_t i, j, k, offset
-    for i in range(num_elem):
-        for j in range(TPE):
+    cdef np.int64_t vert_index, i, j, k
+    for i in range(num_tri):
+        for j in range(3):
+            vert_index = i*3 + j
+            if data.ndim == 1:
+                tri_data[vert_index] = vertex_data[element_map[i]]
+            else:
+                tri_data[vert_index] = vertex_data[exterior_tris[i, j]]
+            tri_indices[vert_index] = vert_index
             for k in range(3):
-                offset = tri_array[j][k]
-                tri_indices_ptr[i*TPE*3 + 3*j + k] = indices_ptr[i*VPE + offset]
-    return tri_indices
+                tri_coords[vert_index*3 + k] = coords[exterior_tris[i, j], k]
+                
+    return np.array(tri_coords), np.array(tri_data), np.array(tri_indices)
 
 @cython.boundscheck(False)
 @cython.wraparound(False)
-@cython.cdivision(True)
-def triangulate_vertex_data(np.ndarray[np.float64_t, ndim=2] coords,
-                            np.ndarray[np.float64_t, ndim=2] data,
-                            np.ndarray[np.int64_t, ndim=2] indices):
+def triangulate_indices(np.int64_t[:, ::1] indices):
+    '''
 
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
+    This is like triangulate_mesh, except it only considers the
+    connectivity information, instead of also copying the vertex
+    coordinates and the data values.
+
+    '''
     
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
-
-    cdef np.int64_t num_tri = TPE * num_elem
-    cdef np.int64_t num_verts = coords.shape[0]
+    cdef MeshInfoHolder m = MeshInfoHolder(indices)
+    cdef np.int64_t[:, ::1] tri_indices = np.empty((m.num_tri, 3), dtype=np.int64)
     
-    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_tri*3, dtype="int32")
-    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
-    
-    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_verts*3, dtype=np.float32)
-    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
-    
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.zeros(num_verts, dtype="float32")
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-    
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    
-    cdef np.int64_t i, j, k, offset
-    for i in range(num_elem):
-        for j in range(TPE):
+    cdef np.int64_t i, j, k
+    for i in range(m.num_elem):
+        for j in range(m.TPE):
             for k in range(3):
-                offset = tri_array[j][k]
-                tri_indices_ptr[i*TPE*3 + 3*j + k] = indices_ptr[i*VPE + offset]
-        for j in range(VPE):
-            tri_data_ptr[indices_ptr[i*VPE + j]] = data_ptr[i*VPE + j]
-    for i in range(num_verts):
-        for j in range(3):
-            tri_coords_ptr[i*3 + j] = coords_ptr[i*3 + j]
-
-    return tri_coords, tri_data, tri_indices
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-@cython.cdivision(True)
-def triangulate_element_data(np.ndarray[np.float64_t, ndim=2] coords,
-                             np.ndarray[np.float64_t, ndim=1] data,
-                             np.ndarray[np.int64_t, ndim=2] indices):
-
-    cdef np.int64_t num_elem = indices.shape[0]
-    cdef np.int64_t VPE = indices.shape[1]  # num verts per element
-    cdef np.int64_t TPE  # num triangles per element
-    cdef int[MAX_NUM_TRI][3] tri_array
-    
-    if (VPE == 8 or VPE == 20 or VPE == 27):
-        TPE = HEX_NT
-        tri_array = triangulate_hex
-    elif VPE == 4:
-        TPE = TETRA_NT
-        tri_array = triangulate_tetra
-    elif VPE == 6:
-        TPE = WEDGE_NT
-        tri_array = triangulate_wedge
-    else:
-        raise YTElementTypeNotRecognized(3, VPE)
-
-    cdef np.int64_t num_tri = TPE * num_elem
-    cdef np.int64_t num_verts = num_tri*3
-    
-    cdef np.ndarray[np.int32_t, ndim=1] tri_indices
-    tri_indices = np.empty(num_verts, dtype="int32")
-    cdef np.int32_t *tri_indices_ptr = <np.int32_t*> tri_indices.data
-    
-    cdef np.ndarray[np.float32_t, ndim=1] tri_data
-    tri_data = np.empty(num_verts, dtype=np.float32)
-    cdef np.float32_t *tri_data_ptr = <np.float32_t*> tri_data.data
-
-    cdef np.ndarray[np.float32_t, ndim=1] tri_coords
-    tri_coords = np.empty(num_verts*3, dtype=np.float32)
-    cdef np.float32_t *tri_coords_ptr = <np.float32_t*> tri_coords.data
-    
-    cdef np.float64_t *data_ptr = <np.float64_t*> data.data
-    cdef np.float64_t *coords_ptr = <np.float64_t*> coords.data
-    cdef np.int64_t *indices_ptr = <np.int64_t*> indices.data
-
-    cdef np.int64_t i, j, k, l, 
-    cdef np.int64_t coord_index, vert_index
-    for i in range(num_elem):
-        for j in range(TPE):
-            for k in range(3):
-                coord_index = indices_ptr[i*VPE + tri_array[j][k]]*3
-                vert_index = i*TPE*3+j*3+k
-                tri_data_ptr[vert_index] = data_ptr[i]
-                for l in range(3):
-                    tri_coords_ptr[vert_index*3 + l] = coords_ptr[coord_index + l]
-                tri_indices_ptr[vert_index] = vert_index
-
-    return tri_coords, tri_data, tri_indices
+                tri_indices[i*m.TPE + j, k] = indices[i, m.tri_array[j][k]]
+    return np.array(tri_indices)

diff -r bb67af7434bfd0dba16114e1674c03401361dcce -r c540131e2cb401a05896b9f7fc4f8e61e3241964 yt/visualization/volume_rendering/interactive_vr.py
--- a/yt/visualization/volume_rendering/interactive_vr.py
+++ b/yt/visualization/volume_rendering/interactive_vr.py
@@ -28,8 +28,7 @@
     quaternion_mult, \
     quaternion_to_rotation_matrix, \
     rotation_matrix_to_quaternion
-from yt.utilities.lib.mesh_triangulation import triangulate_vertex_data, \
-    triangulate_element_data
+from yt.utilities.lib.mesh_triangulation import triangulate_mesh
 from .shader_objects import known_shaders, ShaderProgram
 
 bbox_vertices = np.array(
@@ -648,10 +647,7 @@
 
         data = data_source[field]
 
-        if len(data.shape) == 1:
-            return triangulate_element_data(vertices, data, indices)
-        elif data.shape[1] == indices.shape[1]:
-            return triangulate_vertex_data(vertices, data, indices)
+        return triangulate_mesh(vertices, data, indices)
 
     def run_program(self):
         """ Renders one frame of the scene. """

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.

