[yt-svn] commit/yt: 2 new changesets

commits-noreply@bitbucket.org
Wed Jul 23 18:30:32 PDT 2014


2 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/0148c810c6bb/
Changeset:   0148c810c6bb
Branch:      yt-3.0
User:        mzingale
Date:        2014-07-24 03:25:33
Summary:     fix some documentation
Affected #:  2 files

diff -r e816afd0f0637f5a4eb1136a01309ad4875a97ce -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 doc/source/cookbook/profile_with_variance.py
--- a/doc/source/cookbook/profile_with_variance.py
+++ b/doc/source/cookbook/profile_with_variance.py
@@ -20,7 +20,7 @@
 
 # Plot the average velocity magnitude.
 plt.loglog(prof.x, prof['gas', 'velocity_magnitude'], label='Mean')
-# Plot the variance of the velocity madnitude.
+# Plot the variance of the velocity magnitude.
 plt.loglog(prof.x, prof.variance['gas', 'velocity_magnitude'],
            label='Standard Deviation')
 plt.xlabel('r [kpc]')

diff -r e816afd0f0637f5a4eb1136a01309ad4875a97ce -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -736,6 +736,8 @@
 :class:`~yt.visualization.profile_plotter.PhasePlot` object.  Much like 1D
 profiles, 2D profiles (phase plots) are best thought of as plotting a
 distribution of points, either taking the average or the accumulation in a bin.
+The default behavior is to average, using the cell mass as the weighting,
+but this behavior can be controlled through the ``weight_field`` parameter.
 For example, to generate a 2D distribution of mass enclosed in density and
 temperature bins, you can do:
 
@@ -749,7 +751,8 @@
    plot.save()
 
 If you would rather see the average value of a field as a function of two other
-fields, you can set the ``weight_field`` parameter to ``None``.  This would look
+fields, leave off the ``weight_field`` argument, and it will average by
+the cell mass.  This would look
 something like:
 
 .. python-script::
@@ -757,8 +760,7 @@
    import yt
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    my_sphere = ds.sphere("c", (50, "kpc"))
-   plot = yt.PhasePlot(my_sphere, "density", "temperature", ["H_fraction"],
-                       weight_field=None)
+   plot = yt.PhasePlot(my_sphere, "density", "temperature", ["H_fraction"])
    plot.save()
 
 Customizing Phase Plots
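
The two behaviors documented in this hunk sit side by side below; a minimal sketch using the same sample dataset and fields as the page (``cell_mass`` as the accumulated field is an assumption mirroring the page's earlier distribution example):

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   my_sphere = ds.sphere("c", (50, "kpc"))

   # weight_field=None accumulates: total cell_mass in each
   # (density, temperature) bin -- a mass distribution.
   dist = yt.PhasePlot(my_sphere, "density", "temperature", ["cell_mass"],
                       weight_field=None)
   dist.save()

   # Omitting weight_field keeps the default: a cell-mass-weighted
   # average of the field in each bin.
   avg = yt.PhasePlot(my_sphere, "density", "temperature", ["H_fraction"])
   avg.save()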


https://bitbucket.org/yt_analysis/yt/commits/9e7fefe0865b/
Changeset:   9e7fefe0865b
Branch:      yt-3.0
User:        mzingale
Date:        2014-07-24 03:26:15
Summary:     merge
Affected #:  113 files

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/_static/agogo_yt.css
--- a/doc/source/_static/agogo_yt.css
+++ /dev/null
@@ -1,41 +0,0 @@
-@import url("agogo.css");
-@import url("http://fonts.googleapis.com/css?family=Crimson+Text");
-@import url("http://fonts.googleapis.com/css?family=Droid+Sans");
-
-div.document ul {
-  margin-left: 1.5em;
-  margin-top: 0.0em;
-  margin-bottom: 1.0em;
-}
-
-div.document li.toctree-l1 {
-  margin-bottom: 0.5em;
-}
-
-table.contentstable {
-  width: 100%;
-}
-
-table.contentstable td {
-  padding: 5px 15px 0px 15px;
-}
-
-table.contentstable tr {
-  border-bottom: 1px solid black;
-}
-
-a.biglink {
-  line-height: 1.2em;
-}
-
-a tt.xref {
-  font-weight: bolder;
-}
-
-table.docutils {
-  width: 100%;
-}
-
-table.docutils td {
-  width: 50%;
-}

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/_static/custom.css
--- /dev/null
+++ b/doc/source/_static/custom.css
@@ -0,0 +1,8 @@
+blockquote {
+    font-size: 16px
+    border-left: none
+}
+
+dd {
+    margin-left: 30px
+}
\ No newline at end of file

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/_templates/layout.html
--- a/doc/source/_templates/layout.html
+++ b/doc/source/_templates/layout.html
@@ -35,3 +35,5 @@
     </div>
 {%- endblock %}
 
+{# Custom CSS overrides #}
+{% set bootswatch_css_custom = ['_static/custom.css'] %}

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -2,185 +2,135 @@
 
 Clump Finding
 =============
-.. sectionauthor:: Britton Smith <britton.smith@colorado.edu>
 
-``yt`` has the ability to identify topologically disconnected structures based in a dataset using 
-any field available.  This is powered by a contouring algorithm that runs in a recursive 
-fashion.  The user specifies the initial data object in which the clump-finding will occur, 
-the field over which the contouring will be done, the upper and lower limits of the 
-initial contour, and the contour increment.
+The clump finder uses a contouring algorithm to identify topologically 
+disconnected structures within a dataset.  This works by first creating a 
+single contour over the full range of the contouring field, then continually 
+increasing the lower value of the contour until it reaches the maximum value 
+of the field.  As disconnected structures are identified as separate contours, 
+the routine continues recursively through each object, creating a hierarchy of 
+clumps.  Individual clumps can be kept or removed from the hierarchy based on 
+the result of user-specified functions, such as checking for gravitational 
+boundedness.  A sample recipe can be found in :ref:`cookbook-find_clumps`.
 
-The clump finder begins by creating a single contour of the specified field over the entire 
-range given.  For every isolated contour identified in the initial iteration, contouring is 
-repeated with the same upper limit as before, but with the lower limit increased by the 
-specified increment.  This repeated for every isolated group until the lower limit is equal 
-to the upper limit.
+The clump finder requires a data container and a field over which the 
+contouring is to be performed.
 
-Often very tiny clumps can appear as groups of only a few cells that happen to be slightly 
-overdense (if contouring over density) with respect to the surrounding gas.  The user may 
-specify criteria that clumps must meet in order to be kept.  The most obvious example is 
-selecting only those clumps that are gravitationally bound.
+.. code:: python
 
-Once the clump-finder has finished, the user can write out a set of quantities for each clump in the 
-index.  Additional info items can also be added.  We also provide a recipe
-for finding clumps in :ref:`cookbook-find_clumps`.
+   import yt
+   from yt.analysis_modules.level_sets.api import *
 
-Treecode Optimization
----------------------
+   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-.. sectionauthor:: Stephen Skory <s at skory.us>
-.. versionadded:: 2.1
+   data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
+                         (8, 'kpc'), (1, 'kpc'))
 
-As mentioned above, the user has the option to limit clumps to those that are
-gravitationally bound.
-The correct and accurate way to calculate if a clump is gravitationally
-bound is to do the full double sum:
+   master_clump = Clump(data_source, ("gas", "density"))
 
-.. math::
+At this point, every isolated contour will be considered a clump, 
+whether this is physical or not.  Validator functions can be added to 
+determine if an individual contour should be considered a real clump.  
+These functions are specified with the ``Clump.add_validator`` function.  
+Currently, two validators exist: a minimum number of cells and gravitational 
+boundedness.
 
-  PE = \Sigma_{i=1}^N \Sigma_{j=i}^N \frac{G M_i M_j}{r_{ij}}
+.. code:: python
 
-where :math:`PE` is the gravitational potential energy of :math:`N` cells,
-:math:`G` is the
-gravitational constant, :math:`M_i` is the mass of cell :math:`i`, 
-and :math:`r_{ij}` is the distance
-between cell :math:`i` and :math:`j`.
-The number of calculations required for this calculation
-grows with the square of :math:`N`. Therefore, for large clumps with many cells, the
-test for boundedness can take a significant amount of time.
+   master_clump.add_validator("min_cells", 20)
 
-An effective way to greatly speed up this calculation with minimal error
-is to use the treecode approximation pioneered by
-`Barnes and Hut (1986) <http://adsabs.harvard.edu/abs/1986Natur.324..446B>`_.
-This method of calculating gravitational potentials works by
-grouping individual masses that are located close together into a larger conglomerated
-mass with a geometric size equal to the distribution of the individual masses.
-For a mass cell that is sufficiently distant from the conglomerated mass,
-the gravitational calculation can be made using the conglomerate, rather than
-each individual mass, which saves time.
+   master_clump.add_validator("gravitationally_bound", use_particles=False)
 
-The decision whether or not to use a conglomerate depends on the accuracy control
-parameter ``opening_angle``. Using the small-angle approximation, a conglomerate
-may be used if its geometric size subtends an angle no greater than the
-``opening_angle`` upon the remote mass. The default value is
-``opening_angle = 1``, which gives errors well under 1%. A value of 
-``opening_angle = 0`` is identical to the full O(N^2) method, and larger values
-will speed up the calculation and sacrifice accuracy (see the figures below).
+As many validators as desired can be added, and a clump is kept only if all 
+of them return True.  If not, the clump is merged back into its parent.  
+Custom validators can easily be added.  A validator function need only 
+accept a ``Clump`` object and return True or False.
 
-The treecode method is iterative. Conglomerates may themselves form larger
-conglomerates. And if a larger conglomerate does not meet the ``opening_angle``
-criterion, the smaller conglomerates are tested as well. This iteration of 
-conglomerates will
-cease once the level of the original masses is reached (this is what happens
-for all pair calculations if ``opening_angle = 0``).
+.. code:: python
 
-Below are some examples of how to control the usage of the treecode.
+   def _minimum_gas_mass(clump, min_mass):
+       return (clump["gas", "cell_mass"].sum() >= min_mass)
+   add_validator("minimum_gas_mass", _minimum_gas_mass)
 
-This example will calculate the ratio of the potential energy to kinetic energy
-for a spherical clump using the treecode method with an opening angle of 2.
-The default opening angle is 1.0:
+The ``add_validator`` function adds the validator to a registry that can 
+be accessed by the clump finder.  Then, the validator can be used in 
+clump finding just like the built-in ones.
 
-.. code-block:: python
-  
-  from yt.mods import *
-  
-  ds = load("DD0000")
-  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
-  
-  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
-      treecode=True, opening_angle=2.0)
+.. code:: python
 
-This example will accomplish the same as the above, but will use the full
-N^2 method.
+   master_clump.add_validator("minimum_gas_mass", ds.quan(1.0, "Msun"))
 
-.. code-block:: python
-  
-  from yt.mods import *
-  
-  ds = load("DD0000")
-  sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
-  
-  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
-      treecode=False)
+The clump finding algorithm accepts the ``Clump`` object, the initial minimum 
+and maximum of the contouring field, and the step size.  The lower contour 
+value will be continually multiplied by the step size until it reaches the maximum.
 
-Here the treecode method is specified for clump finding (this is default).
-Please see the link above for the full example of how to find clumps (the
-trailing backslash is important!):
+.. code:: python
 
-.. code-block:: python
-  
-  function_name = 'self.data.quantities.is_bound(truncate=True, \
-      include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
-  master_clump = amods.level_sets.Clump(data_source, None, field,
-      function=function_name)
+   c_min = data_source["gas", "density"].min()
+   c_max = data_source["gas", "density"].max()
+   step = 2.0
+   find_clumps(master_clump, c_min, c_max, step)
 
-To turn off the treecode, of course one should turn treecode=False in the
-example above.
+After the clump finding has finished, the master clump will represent the top 
+of a hierarchy of clumps.  The ``children`` attribute within a ``Clump`` object 
+contains a list of all sub-clumps.  Each sub-clump is also a ``Clump`` object 
+with its own ``children`` attribute, and so on.
 
-Treecode Speedup and Accuracy Figures
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+A number of helper routines exist for examining the clump hierarchy.
 
-Two datasets are used to make the three figures below. Each is a zoom-in
-simulation with high resolution in the middle with AMR, and then lower
-resolution static grids on the periphery. In this way they are very similar to
-a clump in a full-AMR simulation, where there are many AMR levels stacked
-around a density peak. One dataset has a total of 3 levels of AMR, and
-the other has 10 levels, but in other ways are very similar.
+.. code:: python
 
-The first figure shows the effect of varying the opening angle on the speed
-and accuracy of the treecode. The tests were performed using the L=10 
-dataset on a clump with approximately 118,000 cells. The speedup of up the
-treecode is in green, and the accuracy in blue, with the opening angle
-on the x-axis.
+   # Write a text file of the full hierarchy.
+   write_clump_index(master_clump, 0, "%s_clump_hierarchy.txt" % ds)
 
-With an ``opening_angle`` = 0, the accuracy is perfect, but the treecode is
-less than half as fast as the brute-force method. However, by an
-``opening_angle`` of 1, the treecode is now nearly twice as fast, with
-about 0.2% error. This trend continues to an ``opening_angle`` 8, where
-large opening angles have no effect due to geometry.
+   # Write a text file of only the leaf nodes.
+   write_clumps(master_clump, 0, "%s_clumps.txt" % ds)
 
-.. image:: _images/TreecodeOpeningAngleBig.png
-   :width: 450
-   :height: 400
+   # Get a list of just the leaf nodes.
+   leaf_clumps = get_lowest_clumps(master_clump)
 
-Note that the accuracy is always below 1. The treecode will always underestimate
-the gravitational binding energy of a clump.
+``Clump`` objects can be used like all other data containers.
 
-In this next figure, the ``opening_angle`` is kept constant at 1, but the
-number of cells is varied on the L=3 dataset by slowly expanding a spherical
-region of analysis. Up to about 100,000 cells,
-the treecode is actually slower than the brute-force method. This is due to
-the fact that with fewer cells, smaller geometric distances,
-and a shallow AMR index, the treecode
-method has very little chance to be applied. The calculation is overall
-slower due to the overhead of the treecode method & startup costs. This
-explanation is further strengthened by the fact that the accuracy of the
-treecode method stay perfect for the first couple thousand cells, indicating
-that the treecode method is not being applied over that range.
+.. code:: python
 
-Once the number of cells gets high enough, and the size of the region becomes
-large enough, the treecode method can work its magic and the treecode method
-becomes advantageous.
+   print leaf_clumps[0]["gas", "density"]
+   print leaf_clumps[0].quantities.total_mass()
 
-.. image:: _images/TreecodeCellsSmall.png
-   :width: 450
-   :height: 400
+The writing functions will write out a series of properties about each 
+clump by default.  Additional properties can be appended with the 
+``Clump.add_info_item`` function.
 
-The saving grace to the figure above is that for small clumps, a difference of
-50% in calculation time is on the order of a second or less, which is tiny
-compared to the minutes saved for the larger clumps where the speedup can
-be greater than 3.
+.. code:: python
 
-The final figure is identical to the one above, but for the L=10 dataset.
-Due to the higher number of AMR levels, which translates into more opportunities
-for the treecode method to be applied, the treecode becomes faster than the
-brute-force method at only about 30,000 cells. The accuracy shows a different
-behavior, with a dip and a rise, and overall lower accuracy. However, at all
-times the error is still well under 1%, and the time savings are significant.
+   master_clump.add_info_item("total_cells")
 
-.. image:: _images/TreecodeCellsBig.png
-   :width: 450
-   :height: 400
+Just like the validators, custom info items can be added by defining functions 
+that minimally accept a ``Clump`` object and return a string to be printed.
 
-The figures above show that the treecode method is generally very advantageous,
-and that the error introduced is minimal.
+.. code:: python
+
+   def _mass_weighted_jeans_mass(clump):
+       jeans_mass = clump.data.quantities.weighted_average_quantity(
+           "jeans_mass", ("gas", "cell_mass")).in_units("Msun")
+       return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass
+   add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass)
+
+Then, add it to the list:
+
+.. code:: python
+
+   master_clump.add_info_item("mass_weighted_jeans_mass")
+
+By default, the following info items are activated: **total_cells**, 
+**cell_mass**, **mass_weighted_jeans_mass**, **volume_weighted_jeans_mass**, 
+**max_grid_level**, **min_number_density**, **max_number_density**, and 
+**distance_to_main_clump**.
+
+Clumps can be visualized using the ``annotate_clumps`` callback.
+
+.. code:: python
+
+   prj = yt.ProjectionPlot(ds, 2, ("gas", "density"), 
+                           center='c', width=(20,'kpc'))
+   prj.annotate_clumps(leaf_clumps)
+   prj.save('clumps')
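
Assembled end to end, the code snippets in this rewritten page reduce to a single short script (same names and sample dataset as above):

.. code-block:: python

   import yt
   from yt.analysis_modules.level_sets.api import *

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
                         (8, 'kpc'), (1, 'kpc'))

   # Build the master clump and require at least 20 cells per clump.
   master_clump = Clump(data_source, ("gas", "density"))
   master_clump.add_validator("min_cells", 20)

   # Contour from the field minimum to maximum, multiplying by 2 each step.
   c_min = data_source["gas", "density"].min()
   c_max = data_source["gas", "density"].max()
   find_clumps(master_clump, c_min, c_max, 2.0)

   # Write out the hierarchy and collect the leaf clumps.
   write_clump_index(master_clump, 0, "%s_clump_hierarchy.txt" % ds)
   write_clumps(master_clump, 0, "%s_clumps.txt" % ds)
   leaf_clumps = get_lowest_clumps(master_clump)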

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -97,7 +97,7 @@
 
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
-show_authors = True
+show_authors = False
 
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = 'sphinx'

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/cookbook/amrkdtree_downsampling.py
--- a/doc/source/cookbook/amrkdtree_downsampling.py
+++ b/doc/source/cookbook/amrkdtree_downsampling.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED 
-
 # Using AMRKDTree Homogenized Volumes to examine large datasets
 # at lower resolution.
 
@@ -13,15 +10,15 @@
 import yt
 from yt.utilities.amr_kdtree.api import AMRKDTree
 
-# Load up a dataset
+# Load up a dataset and define the kdtree
 ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
-
 kd = AMRKDTree(ds)
 
 # Print out specifics of KD Tree
 print "Total volume of all bricks = %i" % kd.count_volume()
 print "Total number of cells = %i" % kd.count_cells()
 
+# Define a camera and take a volume rendering.
 tf = yt.ColorTransferFunction((-30, -22))
 cam = ds.camera([0.5, 0.5, 0.5], [0.2, 0.3, 0.4], 0.10, 256,
                   tf, volume=kd)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/cookbook/find_clumps.py
--- a/doc/source/cookbook/find_clumps.py
+++ b/doc/source/cookbook/find_clumps.py
@@ -1,75 +1,50 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import numpy as np
 
 import yt
-from yt.analysis_modules.level_sets.api import (Clump, find_clumps,
-                                                get_lowest_clumps)
+from yt.analysis_modules.level_sets.api import *
 
-fn = "IsolatedGalaxy/galaxy0030/galaxy0030"  # dataset to load
-# this is the field we look for contours over -- we could do
-# this over anything.  Other common choices are 'AveragedDensity'
-# and 'Dark_Matter_Density'.
-field = "density"
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
-step = 2.0  # This is the multiplicative interval between contours.
+data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.], 
+                      (1, 'kpc'), (1, 'kpc'))
 
-ds = yt.load(fn)  # load data
+# the field to be used for contouring
+field = ("gas", "density")
 
-# We want to find clumps over the entire dataset, so we'll just grab the whole
-# thing!  This is a convenience parameter that prepares an object that covers
-# the whole domain.  Note, though, that it will load on demand and not before!
-data_source = ds.disk([0.5, 0.5, 0.5], [0., 0., 1.],
-                      (8., 'kpc'), (1., 'kpc'))
+# This is the multiplicative interval between contours.
+step = 2.0
 
 # Now we set some sane min/max values between which we want to find contours.
 # This is how we tell the clump finder what to look for -- it won't look for
 # contours connected below or above these threshold values.
-c_min = 10**np.floor(np.log10(data_source[field]).min())
-c_max = 10**np.floor(np.log10(data_source[field]).max() + 1)
-
-# keep only clumps with at least 20 cells
-function = 'self.data[\'%s\'].size > 20' % field
+c_min = 10**np.floor(np.log10(data_source[field]).min()  )
+c_max = 10**np.floor(np.log10(data_source[field]).max()+1)
 
 # Now get our 'base' clump -- this one just covers the whole domain.
-master_clump = Clump(data_source, None, field, function=function)
+master_clump = Clump(data_source, field)
 
-# This next command accepts our base clump and we say the range between which
-# we want to contour.  It recursively finds clumps within the master clump, at
-# intervals defined by the step size we feed it.  The current value is
-# *multiplied* by step size, rather than added to it -- so this means if you
-# want to look in log10 space intervals, you would supply step = 10.0.
+# Add a "validator" to weed out clumps with fewer than 20 cells.
+# As many validators as you want can be added.
+master_clump.add_validator("min_cells", 20)
+
+# Begin clump finding.
 find_clumps(master_clump, c_min, c_max, step)
 
-# As it goes, it appends the information about all the sub-clumps to the
-# master-clump.  Among different ways we can examine it, there's a convenience
-# function for outputting the full index to a file.
-f = open('%s_clump_index.txt' % ds, 'w')
-yt.amods.level_sets.write_clump_index(master_clump, 0, f)
-f.close()
+# Write out the full clump hierarchy.
+write_clump_index(master_clump, 0, "%s_clump_hierarchy.txt" % ds)
 
-# We can also output some handy information, as well.
-f = open('%s_clumps.txt' % ds, 'w')
-yt.amods.level_sets.write_clumps(master_clump, 0, f)
-f.close()
+# Write out only the leaf nodes of the hierarchy.
+write_clumps(master_clump, 0, "%s_clumps.txt" % ds)
 
-# We can traverse the clump index to get a list of all of the 'leaf' clumps
+# We can traverse the clump hierarchy to get a list of all of the 'leaf' clumps
 leaf_clumps = get_lowest_clumps(master_clump)
 
 # If you'd like to visualize these clumps, a list of clumps can be supplied to
 # the "clumps" callback on a plot.  First, we create a projection plot:
-prj = yt.ProjectionPlot(ds, 2, field, center='c', width=(20, 'kpc'))
+prj = yt.ProjectionPlot(ds, 2, field, center='c', width=(20,'kpc'))
 
 # Next we annotate the plot with contours on the borders of the clumps
 prj.annotate_clumps(leaf_clumps)
 
 # Lastly, we write the plot to disk.
 prj.save('clumps')
-
-# We can also save the clump object to disk to read in later so we don't have
-# to spend a lot of time regenerating the clump objects.
-ds.save_object(master_clump, 'My_clumps')
-
-# Later, we can read in the clump object like so,
-master_clump = ds.load_object('My_clumps')

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/cookbook/opaque_rendering.py
--- a/doc/source/cookbook/opaque_rendering.py
+++ b/doc/source/cookbook/opaque_rendering.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 import numpy as np
 

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/cookbook/rendering_with_box_and_grids.py
--- a/doc/source/cookbook/rendering_with_box_and_grids.py
+++ b/doc/source/cookbook/rendering_with_box_and_grids.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 import numpy as np
 

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/cookbook/simple_profile.py
--- a/doc/source/cookbook/simple_profile.py
+++ b/doc/source/cookbook/simple_profile.py
@@ -1,6 +1,3 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
 import yt
 
 # Load the dataset.

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -795,6 +795,47 @@
 PyNE Data
 ---------
 
+`PyNE <http://pyne.io/>`_ Hex8 meshes are supported by yt and cared for by the PyNE development team
+(`pyne-dev@googlegroups.com <pyne-dev%40googlegroups.com>`_). 
+PyNE meshes are based on faceted geometries contained in hdf5 files (suffix ".h5m").
+
+To load a PyNE mesh:
+
+.. code-block:: python
+
+  from pyne.mesh import Mesh
+  from pyne.dagmc import load
+
+  from yt.config import ytcfg; ytcfg["yt","suppressStreamLogging"] = "True"
+  from yt.frontends.moab.api import PyneMoabHex8StaticOutput
+  from yt.visualization.plot_window import SlicePlot
+
+  load("faceted_file.h5m")
+  
+Set up parameters for the mesh:
+
+.. code-block:: python
+
+  num_divisions = 50
+  coords0 = linspace(-6, 6, num_divisions)
+  coords1 = linspace(0, 7, num_divisions)
+  coords2 = linspace(-4, 4, num_divisions)
+
+Generate the Hex8 mesh and convert to a yt dataset using PyneMoabHex8StaticOutput:
+
+.. code-block:: python 
+
+  m = Mesh(structured=True, structured_coords=[coords0, coords1, coords2], structured_ordering='zyx')
+  pf = PyneMoabHex8StaticOutput(m)
+
+Any field (tag) data on the mesh can then be viewed just like any other yt dataset!
+
+.. code-block:: python 
+
+  s = SlicePlot(pf, 'z', 'density')
+  s.display()
+
+
 Generic Array Data
 ------------------
 

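The four PyNE snippets above assemble into one script; note that ``linspace`` comes from NumPy, an import the section leaves implicit:

.. code-block:: python

   from numpy import linspace

   from pyne.mesh import Mesh
   from pyne.dagmc import load

   from yt.config import ytcfg; ytcfg["yt", "suppressStreamLogging"] = "True"
   from yt.frontends.moab.api import PyneMoabHex8StaticOutput
   from yt.visualization.plot_window import SlicePlot

   # Load the faceted geometry, then set up a structured Hex8 mesh over it.
   load("faceted_file.h5m")
   num_divisions = 50
   coords0 = linspace(-6, 6, num_divisions)
   coords1 = linspace(0, 7, num_divisions)
   coords2 = linspace(-4, 4, num_divisions)
   m = Mesh(structured=True, structured_coords=[coords0, coords1, coords2],
            structured_ordering='zyx')

   # Wrap the mesh as a yt dataset and slice it.
   pf = PyneMoabHex8StaticOutput(m)
   s = SlicePlot(pf, 'z', 'density')
   s.display()
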
diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d doc/source/reference/command-line.rst
--- a/doc/source/reference/command-line.rst
+++ b/doc/source/reference/command-line.rst
@@ -59,7 +59,6 @@
     help                Print help message
     bootstrap_dev       Bootstrap a yt development environment
     bugreport           Report a bug in yt
-    hop                 Run HOP on one or more datasets
     hub_register        Register a user on the Hub: http://hub.yt-project.org/
     hub_submit          Submit a mercurial repository to the yt Hub
                         (http://hub.yt-project.org/), creating a BitBucket
@@ -67,18 +66,14 @@
     instinfo            Get some information about the yt installation
     version             Get some information about the yt installation
     load                Load a single dataset into an IPython instance
-    mapserver           Serve a plot in a GMaps-style interface
     pastebin            Post a script to an anonymous pastebin
     pastebin_grab       Print an online pastebin to STDOUT for local use.
     upload_notebook     Upload an IPython notebook to hub.yt-project.org.
     plot                Create a set of images
-    render              Create a simple volume rendering
     rpdb                Connect to a currently running (on localhost) rpd
                         session. Commands run with --rpdb will trigger an rpdb
                         session with any uncaught exceptions.
     notebook            Run the IPython Notebook
-    serve               Run the Web GUI Reason
-    reason              Run the Web GUI Reason
     stats               Print stats and max/min value of a given field (if
                         requested), for one or more datasets (default field is
                         Density)
@@ -154,15 +149,6 @@
 
 Help lists all of the various command-line options in yt.
 
-bootstrap_dev
-+++++++++++++
-
-After you have installed yt and you want to do some development, there may 
-be a few more steps to complete.  This subcommand automates building a 
-development environment for you by setting up your hg preferences correctly,
-creating/linking to a bitbucket account for hosting and sharing your code, 
-and setting up a pasteboard for your code snippets.  A full description of 
-how this works can be found in :ref:`bootstrap-dev`.
 
 bugreport         
 +++++++++
@@ -174,16 +160,6 @@
 making a mistake (see :ref:`asking-for-help`), you can submit bug 
 reports using this nice utility.
 
-hop               
-+++
-
-This lets you run the HOP algorithm as a halo-finder on one or more 
-datasets.  It nominally reproduces the behavior of enzohop from the 
-enzo suite.  There are several flags you can use in order to specify
-your threshold, input names, output names, and whether you want to use 
-dark matter or all particles.  To view these flags run help with the 
-hop subcommand.
-
 hub_register and hub_submit
 +++++++++++++++++++++++++++
 
@@ -215,13 +191,6 @@
 This will start the iyt interactive environment with your specified 
 dataset already loaded.  See :ref:`interactive-prompt` for more details.
 
-mapserver
-+++++++++
-
-Ever wanted to interact with your data using the 
-`google maps <http://maps.google.com/>`_ interface?  Now you can by using the
-yt mapserver.  See :ref:`mapserver` for more details.
-
 pastebin and pastebin_grab
 ++++++++++++++++++++++++++
 
@@ -263,14 +232,6 @@
 notebooks will be viewable online, and the appropriate URLs will be returned on
 the command line.
 
-render
-++++++
-
-This command generates a volume rendering for a single dataset.  By specifying
-the center, width, number of pixels, number and thickness of contours, etc.
-(run ``yt help render`` for details),  you can create high-quality volume
-renderings at the command-line before moving on to more involved volume
-rendering scripts.
 
 rpdb
 ++++

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -27,14 +27,15 @@
      ensure_list, is_root
 from yt.utilities.exceptions import YTUnitConversionError
 from yt.utilities.logger import ytLogger as mylog
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_root_only
 from yt.visualization.profile_plotter import \
      PhasePlot
-     
-from .operator_registry import \
-    callback_registry
 
+callback_registry = OperatorRegistry()
+    
 def add_callback(name, function):
     callback_registry[name] =  HaloCallback(function)
 

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py
@@ -27,10 +27,13 @@
      
 from .halo_object import \
      Halo
-from .operator_registry import \
-     callback_registry, \
-     filter_registry, \
-     finding_method_registry, \
+from .halo_callbacks import \
+     callback_registry
+from .halo_filters import \
+     filter_registry
+from .halo_finding_methods import \
+     finding_method_registry
+from .halo_quantities import \
      quantity_registry
 
 class HaloCatalog(ParallelAnalysisInterface):
@@ -103,7 +106,6 @@
                  finder_kwargs=None,
                  output_dir="halo_catalogs/catalog"):
         ParallelAnalysisInterface.__init__(self)
-        halos_ds.index
         self.halos_ds = halos_ds
         self.data_ds = data_ds
         self.output_dir = ensure_dir(output_dir)
@@ -120,17 +122,18 @@
 
         if data_source is None:
             if halos_ds is not None:
+                halos_ds.index
                 data_source = halos_ds.all_data()
             else:
                 data_source = data_ds.all_data()
         self.data_source = data_source
 
+        if finder_kwargs is None:
+            finder_kwargs = {}
         if finder_method is not None:
             finder_method = finding_method_registry.find(finder_method,
                         **finder_kwargs)
         self.finder_method = finder_method            
-        if finder_kwargs is None:
-            finder_kwargs = {}
         
         # all of the analysis actions to be performed: callbacks, filters, and quantities
         self.actions = []
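
The reordering in this hunk fixes an ordering bug: ``finder_kwargs`` was previously splatted into ``finding_method_registry.find`` before being defaulted to ``{}``, so supplying a ``finder_method`` without ``finder_kwargs`` raised a TypeError. A minimal reproduction of the pattern (hypothetical ``find`` standing in for the registry lookup):

.. code-block:: python

   def find(method, **kwargs):
       return method

   def before(finder_method=None, finder_kwargs=None):
       if finder_method is not None:
           # TypeError when finder_kwargs is None:
           # "argument after ** must be a mapping"
           finder_method = find(finder_method, **finder_kwargs)
       if finder_kwargs is None:
           finder_kwargs = {}

   def after(finder_method=None, finder_kwargs=None):
       if finder_kwargs is None:
           finder_kwargs = {}   # defaulted first ...
       if finder_method is not None:
           # ... so the splat is now always safe.
           finder_method = find(finder_method, **finder_kwargs)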

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/halo_filters.py
--- a/yt/analysis_modules/halo_analysis/halo_filters.py
+++ b/yt/analysis_modules/halo_analysis/halo_filters.py
@@ -15,10 +15,13 @@
 
 import numpy as np
 
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 from yt.utilities.spatial import KDTree
 
 from .halo_callbacks import HaloCallback
-from .operator_registry import filter_registry
+
+filter_registry = OperatorRegistry()
 
 def add_filter(name, function):
     filter_registry[name] = HaloFilter(function)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/halo_finding_methods.py
--- a/yt/analysis_modules/halo_analysis/halo_finding_methods.py
+++ b/yt/analysis_modules/halo_analysis/halo_finding_methods.py
@@ -21,10 +21,10 @@
     HaloCatalogDataset
 from yt.frontends.stream.data_structures import \
     load_particles
+from yt.utilities.operator_registry import \
+     OperatorRegistry
 
-from .operator_registry import \
-    finding_method_registry
-
+finding_method_registry = OperatorRegistry()
 
 def add_finding_method(name, function):
     finding_method_registry[name] = HaloFindingMethod(function)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/halo_quantities.py
--- a/yt/analysis_modules/halo_analysis/halo_quantities.py
+++ b/yt/analysis_modules/halo_analysis/halo_quantities.py
@@ -15,8 +15,12 @@
 
 import numpy as np
 
+from yt.utilities.operator_registry import \
+     OperatorRegistry
+
 from .halo_callbacks import HaloCallback
-from .operator_registry import quantity_registry
+
+quantity_registry = OperatorRegistry()
 
 def add_quantity(name, function):
     quantity_registry[name] = HaloQuantity(function)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_analysis/operator_registry.py
--- a/yt/analysis_modules/halo_analysis/operator_registry.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-Operation registry class
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-import copy
-import types
-
-class OperatorRegistry(dict):
-    def find(self, op, *args, **kwargs):
-        if isinstance(op, types.StringTypes):
-            # Lookup, assuming string or hashable object
-            op = copy.deepcopy(self[op])
-            op.args = args
-            op.kwargs = kwargs
-        return op
-
-callback_registry = OperatorRegistry()
-filter_registry = OperatorRegistry()
-finding_method_registry = OperatorRegistry()
-quantity_registry = OperatorRegistry()
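
The deleted class itself is unchanged by this merge; it now lives in ``yt/utilities/operator_registry.py`` and each module instantiates its own registry. A sketch of the lookup contract it implements, with a hypothetical operator class standing in for ``HaloCallback`` and friends:

.. code-block:: python

   import copy

   class OperatorRegistry(dict):
       def find(self, op, *args, **kwargs):
           # String keys are looked up; the stored operator is deep-copied
           # and the call arguments attached for later invocation.
           if isinstance(op, str):  # the original uses Python 2's types.StringTypes
               op = copy.deepcopy(self[op])
               op.args = args
               op.kwargs = kwargs
           return op

   class MinCells(object):  # hypothetical operator class
       def __init__(self, function):
           self.function = function

   registry = OperatorRegistry()
   registry["min_cells"] = MinCells(lambda clump, n: len(clump["ones"]) >= n)
   op = registry.find("min_cells", 20)  # a deep copy carrying args == (20,)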

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -549,22 +549,23 @@
             temp_e2[:,dim] = e2_vector[dim]
         length = np.abs(np.sum(rr * temp_e2, axis = 1) * (1 - \
             np.sum(rr * temp_e0, axis = 1)**2. * mag_A**-2. - \
-            np.sum(rr * temp_e1, axis = 1)**2. * mag_B**-2)**(-0.5))
+            np.sum(rr * temp_e1, axis = 1)**2. * mag_B**-2.)**(-0.5))
         length[length == np.inf] = 0.
         tC_index = np.nanargmax(length)
         mag_C = length[tC_index]
         # tilt is calculated from the rotation about x axis
         # needed to align e1 vector with the y axis
         # after e0 is aligned with x axis
-        # find the t1 angle needed to rotate about z axis to align e0 to x
-        t1 = np.arctan(e0_vector[1] / e0_vector[0])
-        RZ = get_rotation_matrix(-t1, (0, 0, 1)).transpose()
-        r1 = (e0_vector * RZ).sum(axis = 1)
+        # find the t1 angle needed to rotate about z axis to align e0 onto x-z plane
+        t1 = np.arctan(-e0_vector[1] / e0_vector[0])
+        RZ = get_rotation_matrix(t1, (0, 0, 1))
+        r1 = np.dot(RZ, e0_vector)
         # find the t2 angle needed to rotate about y axis to align e0 to x
-        t2 = np.arctan(-r1[2] / r1[0])
-        RY = get_rotation_matrix(-t2, (0, 1, 0)).transpose()
+        t2 = np.arctan(r1[2] / r1[0])
+        RY = get_rotation_matrix(t2, (0, 1, 0))
         r2 = np.dot(RY, np.dot(RZ, e1_vector))
-        tilt = np.arctan(r2[2]/r2[1])
+        # find the tilt angle needed to rotate about x axis to align e1 to y and e2 to z
+        tilt = np.arctan(-r2[2] / r2[1])
         return (mag_A, mag_B, mag_C, e0_vector[0], e0_vector[1],
             e0_vector[2], tilt)
 
@@ -782,13 +783,13 @@
         
         Returns
         -------
-        tuple : (cm, mag_A, mag_B, mag_C, e1_vector, tilt)
+        tuple : (cm, mag_A, mag_B, mag_C, e0_vector, tilt)
             The 6-tuple has in order:
               #. The center of mass as an array.
               #. mag_A as a float.
               #. mag_B as a float.
               #. mag_C as a float.
-              #. e1_vector as an array.
+              #. e0_vector as an array.
               #. tilt as a float.
         
         Examples
@@ -819,7 +820,7 @@
     def __init__(self, ds, id, size=None, CoM=None,
         max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
         rms_vel=None, fnames=None, mag_A=None, mag_B=None, mag_C=None,
-        e1_vec=None, tilt=None, supp=None):
+        e0_vec=None, tilt=None, supp=None):
 
         self.ds = ds
         self.gridsize = (self.ds.domain_right_edge - \
@@ -835,7 +836,7 @@
         self.mag_A = mag_A
         self.mag_B = mag_B
         self.mag_C = mag_C
-        self.e1_vec = e1_vec
+        self.e0_vec = e0_vec
         self.tilt = tilt
         # locs=the names of the h5 files that have particle data for this halo
         self.fnames = fnames
@@ -928,8 +929,8 @@
 
     def _get_ellipsoid_parameters_basic_loadedhalo(self):
         if self.mag_A is not None:
-            return (self.mag_A, self.mag_B, self.mag_C, self.e1_vec[0],
-                self.e1_vec[1], self.e1_vec[2], self.tilt)
+            return (self.mag_A, self.mag_B, self.mag_C, self.e0_vec[0],
+                self.e0_vec[1], self.e0_vec[2], self.tilt)
         else:
             return self._get_ellipsoid_parameters_basic()
 
@@ -943,13 +944,13 @@
 
         Returns
         -------
-        tuple : (cm, mag_A, mag_B, mag_C, e1_vector, tilt)
+        tuple : (cm, mag_A, mag_B, mag_C, e0_vector, tilt)
             The 6-tuple has in order:
               #. The center of mass as an array.
               #. mag_A as a float.
               #. mag_B as a float.
               #. mag_C as a float.
-              #. e1_vector as an array.
+              #. e0_vector as an array.
               #. tilt as a float.
 
         Examples
@@ -1021,7 +1022,7 @@
 
         max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
         rms_vel=None, fnames=None, mag_A=None, mag_B=None, mag_C=None,
-        e1_vec=None, tilt=None, supp=None):
+        e0_vec=None, tilt=None, supp=None):
 
         self.ds = ds
         self.gridsize = (self.ds.domain_right_edge - \
@@ -1037,7 +1038,7 @@
         self.mag_A = mag_A
         self.mag_B = mag_B
         self.mag_C = mag_C
-        self.e1_vec = e1_vec
+        self.e0_vec = e0_vec
         self.tilt = tilt
         self.bin_count = None
         self.overdensity = None
@@ -1181,8 +1182,8 @@
                                "x","y","z", "center-of-mass",
                                "x","y","z",
                                "vx","vy","vz","max_r","rms_v",
-                               "mag_A", "mag_B", "mag_C", "e1_vec0",
-                               "e1_vec1", "e1_vec2", "tilt", "\n"]))
+                               "mag_A", "mag_B", "mag_C", "e0_vec0",
+                               "e0_vec1", "e0_vec2", "tilt", "\n"]))
 
         for group in self:
             f.write("%10i\t" % group.id)
@@ -1494,17 +1495,17 @@
                 mag_A = float(line[15])
                 mag_B = float(line[16])
                 mag_C = float(line[17])
-                e1_vec0 = float(line[18])
-                e1_vec1 = float(line[19])
-                e1_vec2 = float(line[20])
-                e1_vec = np.array([e1_vec0, e1_vec1, e1_vec2])
+                e0_vec0 = float(line[18])
+                e0_vec1 = float(line[19])
+                e0_vec2 = float(line[20])
+                e0_vec = np.array([e0_vec0, e0_vec1, e0_vec2])
                 tilt = float(line[21])
                 self._groups.append(LoadedHalo(self.ds, halo, size = size,
                     CoM = CoM,
                     max_dens_point = max_dens_point,
                     group_total_mass = group_total_mass, max_radius = max_radius,
                     bulk_vel = bulk_vel, rms_vel = rms_vel, fnames = fnames,
-                    mag_A = mag_A, mag_B = mag_B, mag_C = mag_C, e1_vec = e1_vec,
+                    mag_A = mag_A, mag_B = mag_B, mag_C = mag_C, e0_vec = e0_vec,
                     tilt = tilt))
             else:
                 mylog.error("I am unable to parse this line. Too many or too few items. %s" % orig)
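
The sign and transpose changes in this hunk are easiest to check numerically: after the z- and y-rotations, ``e0`` should land on the x axis, and after the tilt rotation about x, ``e1`` should land on the y axis. A self-contained check (with a stand-in for yt's ``get_rotation_matrix`` and arbitrary orthonormal test vectors):

.. code-block:: python

   import numpy as np

   def rot(theta, axis):
       # Right-handed rotation matrix about a coordinate axis (0=x, 1=y, 2=z);
       # a stand-in for yt's get_rotation_matrix, kept here for self-containment.
       c, s = np.cos(theta), np.sin(theta)
       if axis == 2:
           return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
       if axis == 1:
           return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])
       return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])

   # Arbitrary orthonormal stand-ins for the ellipsoid axes e0 and e1.
   e0 = np.array([2., 1., 2.]) / 3.
   e1 = np.array([1., 2., -2.]) / 3.

   t1 = np.arctan(-e0[1] / e0[0])      # rotate about z: e0 onto the x-z plane
   r1 = np.dot(rot(t1, 2), e0)
   t2 = np.arctan(r1[2] / r1[0])       # rotate about y: e0 onto the x axis
   r2 = np.dot(rot(t2, 1), np.dot(rot(t1, 2), e1))
   tilt = np.arctan(-r2[2] / r2[1])    # rotate about x: e1 onto the y axis

   print np.dot(rot(t2, 1), r1)        # ~ [1, 0, 0]: e0 aligned with x
   print np.dot(rot(tilt, 0), r2)      # ~ [0, 1, 0]: e1 aligned with y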

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/level_sets/api.py
--- a/yt/analysis_modules/level_sets/api.py
+++ b/yt/analysis_modules/level_sets/api.py
@@ -21,12 +21,14 @@
     find_clumps, \
     get_lowest_clumps, \
     write_clump_index, \
-    write_clumps, \
-    write_old_clump_index, \
-    write_old_clumps, \
-    write_old_clump_info, \
-    _DistanceToMainClump
+    write_clumps
 
+from .clump_info_items import \
+    add_clump_info
+
+from .clump_validators import \
+    add_validator
+    
 from .clump_tools import \
     recursive_all_clumps, \
     return_all_clumps, \

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/level_sets/clump_handling.py
--- a/yt/analysis_modules/level_sets/clump_handling.py
+++ b/yt/analysis_modules/level_sets/clump_handling.py
@@ -13,50 +13,82 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
+import copy
 import numpy as np
-import copy
+import uuid
 
-from yt.funcs import *
+from yt.fields.derived_field import \
+    ValidateSpatial
+from yt.funcs import mylog
+    
+from .clump_info_items import \
+    clump_info_registry
+from .clump_validators import \
+    clump_validator_registry
+from .contour_finder import \
+    identify_contours
 
-from .contour_finder import identify_contours
+def add_contour_field(ds, contour_key):
+    def _contours(field, data):
+        fd = data.get_field_parameter("contour_slices_%s" % contour_key)
+        vals = data["index", "ones"] * -1
+        if fd is None or fd == 0.0:
+            return vals
+        for sl, v in fd.get(data.id, []):
+            vals[sl] = v
+        return vals
+
+    ds.add_field(("index", "contours_%s" % contour_key),
+                 function=_contours,
+                 validators=[ValidateSpatial(0)],
+                 take_log=False,
+                 display_field=False)
 
 class Clump(object):
     children = None
-    def __init__(self, data, parent, field, cached_fields = None, 
-                 function=None, clump_info=None):
+    def __init__(self, data, field, parent=None,
+                 clump_info=None, validators=None):
+        self.data = data
+        self.field = field
         self.parent = parent
-        self.data = data
         self.quantities = data.quantities
-        self.field = field
         self.min_val = self.data[field].min()
         self.max_val = self.data[field].max()
-        self.cached_fields = cached_fields
 
         # List containing characteristics about clumps that are to be written 
         # out by the write routines.
         if clump_info is None:
             self.set_default_clump_info()
         else:
-            # Clump info will act the same if add_info_item is called before or after clump finding.
+            # Clump info will act the same if add_info_item is called 
+            # before or after clump finding.
             self.clump_info = copy.deepcopy(clump_info)
 
-        # Function determining whether a clump is valid and should be kept.
-        self.default_function = 'self.data.quantities["IsBound"](truncate=True,include_thermal_energy=True) > 1.0'
-        if function is None:
-            self.function = self.default_function
-        else:
-            self.function = function
+        if validators is None:
+            validators = []
+        self.validators = validators
+        # Return value of validity function.
+        self.valid = None
 
-        # Return value of validity function, saved so it does not have to be calculated again.
-        self.function_value = None
-
-    def add_info_item(self,quantity,format):
+    def add_validator(self, validator, *args, **kwargs):
+        """
+        Add a validating function to determine whether the clump should 
+        be kept.
+        """
+        callback = clump_validator_registry.find(validator, *args, **kwargs)
+        self.validators.append(callback)
+        if self.children is None: return
+        for child in self.children:
+            child.add_validator(validator)
+        
+    def add_info_item(self, info_item, *args, **kwargs):
         "Adds an entry to clump_info list and tells children to do the same."
 
-        self.clump_info.append({'quantity':quantity, 'format':format})
+        callback = clump_info_registry.find(info_item, *args, **kwargs)
+        self.clump_info.append(callback)
         if self.children is None: return
         for child in self.children:
-            child.add_info_item(quantity,format)
+            child.add_info_item(info_item)
 
     def set_default_clump_info(self):
         "Defines default entries in the clump_info array."
@@ -64,60 +96,67 @@
         # add_info_item is recursive so this function does not need to be.
         self.clump_info = []
 
-        # Number of cells.
-        self.add_info_item('self.data["CellMassMsun"].size','"Cells: %d" % value')
-        # Gas mass in solar masses.
-        self.add_info_item('self.data["CellMassMsun"].sum()','"Mass: %e Msolar" % value')
-        # Volume-weighted Jeans mass.
-        self.add_info_item('self.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellVolume")',
-                           '"Jeans Mass (vol-weighted): %.6e Msolar" % value')
-        # Mass-weighted Jeans mass.
-        self.add_info_item('self.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellMassMsun")',
-                           '"Jeans Mass (mass-weighted): %.6e Msolar" % value')
-        # Max level.
-        self.add_info_item('self.data["GridLevel"].max()','"Max grid level: %d" % value')
-        # Minimum number density.
-        self.add_info_item('self.data["NumberDensity"].min()','"Min number density: %.6e cm^-3" % value')
-        # Maximum number density.
-        self.add_info_item('self.data["NumberDensity"].max()','"Max number density: %.6e cm^-3" % value')
+        self.add_info_item("total_cells")
+        self.add_info_item("cell_mass")
+        self.add_info_item("mass_weighted_jeans_mass")
+        self.add_info_item("volume_weighted_jeans_mass")
+        self.add_info_item("max_grid_level")
+        self.add_info_item("min_number_density")
+        self.add_info_item("max_number_density")
 
     def clear_clump_info(self):
-        "Clears the clump_info array and passes the instruction to its children."
+        """
+        Clears the clump_info array and passes the instruction to its 
+        children.
+        """
 
         self.clump_info = []
         if self.children is None: return
         for child in self.children:
             child.clear_clump_info()
 
-    def write_info(self,level,f_ptr):
+    def write_info(self, level, f_ptr):
         "Writes information for clump using the list of items in clump_info."
 
         for item in self.clump_info:
-            # Call if callable, otherwise do an eval.
-            if callable(item['quantity']):
-                value = item['quantity']()
-            else:
-                value = eval(item['quantity'])
-            output = eval(item['format'])
-            f_ptr.write("%s%s" % ('\t'*level,output))
-            f_ptr.write("\n")
+            value = item(self)
+            f_ptr.write("%s%s\n" % ('\t'*level, value))
 
     def find_children(self, min_val, max_val = None):
         if self.children is not None:
-            print "Wiping out existing children clumps."
+            mylog.info("Wiping out existing children clumps: %d.",
+                       len(self.children))
         self.children = []
         if max_val is None: max_val = self.max_val
         nj, cids = identify_contours(self.data, self.field, min_val, max_val)
-        for cid in range(nj):
-            new_clump = self.data.cut_region(
-                    ["obj['contours'] == %s" % (cid + 1)],
-                    {'contour_slices': cids})
-            self.children.append(Clump(new_clump, self, self.field,
-                                       self.cached_fields,function=self.function,
-                                       clump_info=self.clump_info))
+        # Here, cids is the set of slices and values, keyed by the
+        # parent_grid_id, that defines the contours.  So we can figure out all
+        # the unique values of the contours by examining the list here.
+        unique_contours = set([])
+        for sl_list in cids.values():
+            for sl, ff in sl_list:
+                unique_contours.update(np.unique(ff))
+        contour_key = uuid.uuid4().hex
+        base_object = getattr(self.data, 'base_object', self.data)
+        add_contour_field(base_object.pf, contour_key)
+        for cid in sorted(unique_contours):
+            if cid == -1: continue
+            new_clump = base_object.cut_region(
+                    ["obj['contours_%s'] == %s" % (contour_key, cid)],
+                    {('contour_slices_%s' % contour_key): cids})
+            if new_clump["ones"].size == 0:
+                # This is to skip possibly duplicate clumps.
+                # Using "ones" here will speed things up.
+                continue
+            self.children.append(Clump(new_clump, self.field, parent=self,
+                                       clump_info=self.clump_info,
+                                       validators=self.validators))
 
     def pass_down(self,operation):
-        "Performs an operation on a clump with an exec and passes the instruction down to clump children."
+        """
+        Performs an operation on a clump with an exec and passes the 
+        instruction down to clump children.
+        """
 
         # Call if callable, otherwise do an exec.
         if callable(operation):
@@ -129,24 +168,32 @@
         for child in self.children:
             child.pass_down(operation)
 
-    def _isValid(self):
-        "Perform user specified function to determine if child clumps should be kept."
+    def _validate(self):
+        "Apply all user specified validator functions."
 
-        # Only call function if it has not been already.
-        if self.function_value is None:
-            self.function_value = eval(self.function)
+        # Only call functions if not done already.
+        if self.valid is not None:
+            return self.valid
 
-        return self.function_value
+        self.valid = True
+        for validator in self.validators:
+            self.valid &= validator(self)
+            if not self.valid:
+                break
+
+        return self.valid
 
     def __reduce__(self):
         return (_reconstruct_clump, 
                 (self.parent, self.field, self.min_val, self.max_val,
-                 self.function_value, self.children, self.data, self.clump_info, self.function))
+                 self.valid, self.children, self.data, self.clump_info, 
+                 self.function))
 
     def __getitem__(self,request):
         return self.data[request]
 
-def _reconstruct_clump(parent, field, mi, ma, function_value, children, data, clump_info, 
+def _reconstruct_clump(parent, field, mi, ma, valid, children, 
+                       data, clump_info, 
         function=None):
     obj = object.__new__(Clump)
     if iterable(parent):
@@ -155,8 +202,9 @@
         except KeyError:
             parent = parent
     if children is None: children = []
-    obj.parent, obj.field, obj.min_val, obj.max_val, obj.function_value, obj.children, obj.clump_info, obj.function = \
-        parent, field, mi, ma, function_value, children, clump_info, function
+    obj.parent, obj.field, obj.min_val, obj.max_val, \
+      obj.valid, obj.children, obj.clump_info, obj.function = \
+        parent, field, mi, ma, valid, children, clump_info, function
     # Now we override, because the parent/child relationship seems a bit
     # unreliable in the unpickling
     for child in children: child.parent = obj
@@ -166,7 +214,8 @@
     return obj
 
 def find_clumps(clump, min_val, max_val, d_clump):
-    print "Finding clumps: min: %e, max: %e, step: %f" % (min_val, max_val, d_clump)
+    mylog.info("Finding clumps: min: %e, max: %e, step: %f" % 
+               (min_val, max_val, d_clump))
     if min_val >= max_val: return
     clump.find_children(min_val)
 
@@ -175,23 +224,28 @@
 
     elif (len(clump.children) > 0):
         these_children = []
-        print "Investigating %d children." % len(clump.children)
+        mylog.info("Investigating %d children." % len(clump.children))
         for child in clump.children:
             find_clumps(child, min_val*d_clump, max_val, d_clump)
             if ((child.children is not None) and (len(child.children) > 0)):
                 these_children.append(child)
-            elif (child._isValid()):
+            elif (child._validate()):
                 these_children.append(child)
             else:
-                print "Eliminating invalid, childless clump with %d cells." % len(child.data["Ones"])
+                mylog.info(("Eliminating invalid, childless clump with " +
+                            "%d cells.") % len(child.data["ones"]))
         if (len(these_children) > 1):
-            print "%d of %d children survived." % (len(these_children),len(clump.children))            
+            mylog.info("%d of %d children survived." %
+                       (len(these_children),len(clump.children)))
             clump.children = these_children
         elif (len(these_children) == 1):
-            print "%d of %d children survived, linking its children to parent." % (len(these_children),len(clump.children))
+            mylog.info(("%d of %d children survived, linking its " +
+                        "children to parent.") % 
+                        (len(these_children),len(clump.children)))
             clump.children = these_children[0].children
         else:
-            print "%d of %d children survived, erasing children." % (len(these_children),len(clump.children))
+            mylog.info("%d of %d children survived, erasing children." %
+                       (len(these_children),len(clump.children)))
             clump.children = []
 
 def get_lowest_clumps(clump, clump_list=None):
@@ -206,88 +260,35 @@
 
     return clump_list
 
-def write_clump_index(clump,level,f_ptr):
+def write_clump_index(clump, level, fh):
+    top = False
+    if not isinstance(fh, file):
+        fh = open(fh, "w")
+        top = True
     for q in range(level):
-        f_ptr.write("\t")
-    f_ptr.write("Clump at level %d:\n" % level)
-    clump.write_info(level,f_ptr)
-    f_ptr.write("\n")
-    f_ptr.flush()
+        fh.write("\t")
+    fh.write("Clump at level %d:\n" % level)
+    clump.write_info(level, fh)
+    fh.write("\n")
+    fh.flush()
     if ((clump.children is not None) and (len(clump.children) > 0)):
         for child in clump.children:
-            write_clump_index(child,(level+1),f_ptr)
+            write_clump_index(child, (level+1), fh)
+    if top:
+        fh.close()
 
-def write_clumps(clump,level,f_ptr):
+def write_clumps(clump, level, fh):
+    top = False
+    if not isinstance(fh, file):
+        fh = open(fh, "w")
+        top = True
     if ((clump.children is None) or (len(clump.children) == 0)):
-        f_ptr.write("%sClump:\n" % ("\t"*level))
-        clump.write_info(level,f_ptr)
-        f_ptr.write("\n")
-        f_ptr.flush()
+        fh.write("%sClump:\n" % ("\t"*level))
+        clump.write_info(level, fh)
+        fh.write("\n")
+        fh.flush()
     if ((clump.children is not None) and (len(clump.children) > 0)):
         for child in clump.children:
-            write_clumps(child,0,f_ptr)
-
-# Old clump info writing routines.
-def write_old_clump_index(clump,level,f_ptr):
-    for q in range(level):
-        f_ptr.write("\t")
-    f_ptr.write("Clump at level %d:\n" % level)
-    clump.write_info(level,f_ptr)
-    write_old_clump_info(clump,level,f_ptr)
-    f_ptr.write("\n")
-    f_ptr.flush()
-    if ((clump.children is not None) and (len(clump.children) > 0)):
-        for child in clump.children:
-            write_clump_index(child,(level+1),f_ptr)
-
-def write_old_clumps(clump,level,f_ptr):
-    if ((clump.children is None) or (len(clump.children) == 0)):
-        f_ptr.write("%sClump:\n" % ("\t"*level))
-        write_old_clump_info(clump,level,f_ptr)
-        f_ptr.write("\n")
-        f_ptr.flush()
-    if ((clump.children is not None) and (len(clump.children) > 0)):
-        for child in clump.children:
-            write_clumps(child,0,f_ptr)
-
-__clump_info_template = \
-"""
-%(tl)sCells: %(num_cells)s
-%(tl)sMass: %(total_mass).6e Msolar
-%(tl)sJeans Mass (vol-weighted): %(jeans_mass_vol).6e Msolar
-%(tl)sJeans Mass (mass-weighted): %(jeans_mass_mass).6e Msolar
-%(tl)sMax grid level: %(max_level)s
-%(tl)sMin number density: %(min_density).6e cm^-3
-%(tl)sMax number density: %(max_density).6e cm^-3
-
-"""
-
-def write_old_clump_info(clump,level,f_ptr):
-    fmt_dict = {'tl':  "\t" * level}
-    fmt_dict['num_cells'] = clump.data["CellMassMsun"].size,
-    fmt_dict['total_mass'] = clump.data["CellMassMsun"].sum()
-    fmt_dict['jeans_mass_vol'] = clump.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellVolume")
-    fmt_dict['jeans_mass_mass'] = clump.data.quantities["WeightedAverageQuantity"]("JeansMassMsun","CellMassMsun")
-    fmt_dict['max_level'] =  clump.data["GridLevel"].max()
-    fmt_dict['min_density'] =  clump.data["NumberDensity"].min()
-    fmt_dict['max_density'] =  clump.data["NumberDensity"].max()
-    f_ptr.write(__clump_info_template % fmt_dict)
-
-# Recipes for various clump calculations.
-recipes = {}
-
-# Distance from clump center of mass to center of mass of top level object.
-def _DistanceToMainClump(master,units='pc'):
-    masterCOM = master.data.quantities['CenterOfMass']()
-    pass_command = "self.masterCOM = [%.10f, %.10f, %.10f]" % (masterCOM[0],
-                                                               masterCOM[1],
-                                                               masterCOM[2])
-    master.pass_down(pass_command)
-    master.pass_down("self.com = self.data.quantities['CenterOfMass']()")
-
-    quantity = "((self.com[0]-self.masterCOM[0])**2 + (self.com[1]-self.masterCOM[1])**2 + (self.com[2]-self.masterCOM[2])**2)**(0.5)*self.data.ds.units['%s']" % units
-    format = "%s%s%s" % ("'Distance from center: %.6e ",units,"' % value")
-
-    master.add_info_item(quantity,format)
-
-recipes['DistanceToMainClump'] = _DistanceToMainClump
+            write_clumps(child, 0, fh)
+    if top:
+        fh.close()
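
The net effect of the clump-handling changes: the pickled state now carries
the validator-driven valid flag instead of the old function_value, progress
output goes through mylog rather than bare print statements, and
write_clump_index/write_clumps accept either an already-open file object or
a filename, opening and closing the handle themselves in the latter case.
A rough usage sketch, assuming the two-argument Clump constructor from this
changeset and the usual level_sets api exports; the dataset is a sample:

    import yt
    from yt.analysis_modules.level_sets.api import \
        Clump, find_clumps, write_clump_index, write_clumps

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # sample dataset
    sp = ds.sphere("c", (20, "kpc"))
    field = ("gas", "density")

    master_clump = Clump(sp, field)
    find_clumps(master_clump, sp[field].min(), sp[field].max(), 2.0)

    # Passing filenames: the writers open and close the handles.
    write_clump_index(master_clump, 0, "clump_index.txt")
    write_clumps(master_clump, 0, "clumps.txt")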

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/level_sets/clump_info_items.py
--- /dev/null
+++ b/yt/analysis_modules/level_sets/clump_info_items.py
@@ -0,0 +1,87 @@
+"""
+ClumpInfoCallback and callbacks.
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+
+from yt.utilities.operator_registry import \
+     OperatorRegistry
+
+clump_info_registry = OperatorRegistry()
+
+def add_clump_info(name, function):
+    clump_info_registry[name] = ClumpInfoCallback(function)
+
+class ClumpInfoCallback(object):
+    r"""
+    A ClumpInfoCallback is a function that takes a clump, computes a 
+    quantity, and returns a string to be printed out for writing clump info.
+    """
+    def __init__(self, function, args=None, kwargs=None):
+        self.function = function
+        self.args = args
+        if self.args is None: self.args = []
+        self.kwargs = kwargs
+        if self.kwargs is None: self.kwargs = {}
+
+    def __call__(self, clump):
+        return self.function(clump, *self.args, **self.kwargs)
+    
+def _total_cells(clump):
+    n_cells = clump.data["index", "ones"].size
+    return "Cells: %d." % n_cells
+add_clump_info("total_cells", _total_cells)
+
+def _cell_mass(clump):
+    cell_mass = clump.data["gas", "cell_mass"].sum().in_units("Msun")
+    return "Mass: %e Msun." % cell_mass
+add_clump_info("cell_mass", _cell_mass)
+
+def _mass_weighted_jeans_mass(clump):
+    jeans_mass = clump.data.quantities.weighted_average_quantity(
+        "jeans_mass", ("gas", "cell_mass")).in_units("Msun")
+    return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass
+add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass)
+
+def _volume_weighted_jeans_mass(clump):
+    jeans_mass = clump.data.quantities.weighted_average_quantity(
+        "jeans_mass", ("index", "cell_volume")).in_units("Msun")
+    return "Jeans Mass (volume-weighted): %.6e Msolar." % jeans_mass
+add_clump_info("volume_weighted_jeans_mass", _volume_weighted_jeans_mass)
+
+def _max_grid_level(clump):
+    max_level = clump.data["index", "grid_level"].max()
+    return "Max grid level: %d." % max_level
+add_clump_info("max_grid_level", _max_grid_level)
+
+def _min_number_density(clump):
+    min_n = clump.data["gas", "number_density"].min().in_units("cm**-3")
+    return "Min number density: %.6e cm^-3." % min_n
+add_clump_info("min_number_density", _min_number_density)
+
+def _max_number_density(clump):
+    max_n = clump.data["gas", "number_density"].max().in_units("cm**-3")
+    return "Max number density: %.6e cm^-3." % max_n
+add_clump_info("max_number_density", _max_number_density)
+
+def _distance_to_main_clump(clump, units="pc"):
+    master = clump
+    while master.parent is not None:
+        master = master.parent
+    master_com = clump.data.ds.arr(master.data.quantities.center_of_mass())
+    my_com = clump.data.ds.arr(clump.data.quantities.center_of_mass())
+    distance = np.sqrt(((master_com - my_com)**2).sum())
+    return "Distance from master center of mass: %.6e %s." % \
+      (distance.in_units(units), units)
+add_clump_info("distance_to_main_clump", _distance_to_main_clump)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/level_sets/clump_validators.py
--- /dev/null
+++ b/yt/analysis_modules/level_sets/clump_validators.py
@@ -0,0 +1,95 @@
+"""
+ClumpValidators and callbacks.
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2014, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+
+from yt.utilities.data_point_utilities import FindBindingEnergy
+from yt.utilities.operator_registry import \
+    OperatorRegistry
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs as G
+
+clump_validator_registry = OperatorRegistry()
+
+def add_validator(name, function):
+    clump_validator_registry[name] = ClumpValidator(function)
+
+class ClumpValidator(object):
+    r"""
+    A ClumpValidator is a function that takes a clump and returns 
+    True or False as to whether the clump is valid and shall be kept.
+    """
+    def __init__(self, function, args=None, kwargs=None):
+        self.function = function
+        self.args = args
+        if self.args is None: self.args = []
+        self.kwargs = kwargs
+        if self.kwargs is None: self.kwargs = {}
+
+    def __call__(self, clump):
+        return self.function(clump, *self.args, **self.kwargs)
+    
+def _gravitationally_bound(clump, use_thermal_energy=True,
+                           use_particles=True, truncate=True):
+    "True if clump is gravitationally bound."
+
+    use_particles &= \
+      ("all", "particle_mass") in clump.data.ds.field_info
+    
+    bulk_velocity = clump.quantities.bulk_velocity(use_particles=use_particles)
+
+    kinetic = 0.5 * (clump["gas", "cell_mass"] *
+        ((bulk_velocity[0] - clump["gas", "velocity_x"])**2 +
+         (bulk_velocity[1] - clump["gas", "velocity_y"])**2 +
+         (bulk_velocity[2] - clump["gas", "velocity_z"])**2)).sum()
+
+    if use_thermal_energy:
+        kinetic += (clump["gas", "cell_mass"] *
+                    clump["gas", "thermal_energy"]).sum()
+
+    if use_particles:
+        kinetic += 0.5 * (clump["all", "particle_mass"] *
+            ((bulk_velocity[0] - clump["all", "particle_velocity_x"])**2 +
+             (bulk_velocity[1] - clump["all", "particle_velocity_y"])**2 +
+             (bulk_velocity[2] - clump["all", "particle_velocity_z"])**2)).sum()
+
+    potential = clump.data.ds.quan(G *
+        FindBindingEnergy(clump["gas", "cell_mass"].in_cgs(),
+                          clump["index", "x"].in_cgs(),
+                          clump["index", "y"].in_cgs(),
+                          clump["index", "z"].in_cgs(),
+                          truncate, (kinetic / G).in_cgs()),
+        kinetic.in_cgs().units)
+    
+    if truncate and potential >= kinetic:
+        return True
+
+    if use_particles:
+        potential += clump.data.ds.quan(G *
+            FindBindingEnergy(
+                clump["all", "particle_mass"].in_cgs(),
+                clump["all", "particle_position_x"].in_cgs(),
+                clump["all", "particle_position_y"].in_cgs(),
+                clump["all", "particle_position_z"].in_cgs(),
+                truncate, ((kinetic - potential) / G).in_cgs()),
+        kinetic.in_cgs().units)
+
+    return potential >= kinetic
+add_validator("gravitationally_bound", _gravitationally_bound)
+
+def _min_cells(clump, n_cells):
+    "True if clump has a minimum number of cells."
+    return (clump["index", "ones"].size >= n_cells)
+add_validator("min_cells", _min_cells)

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/analysis_modules/level_sets/contour_finder.py
--- a/yt/analysis_modules/level_sets/contour_finder.py
+++ b/yt/analysis_modules/level_sets/contour_finder.py
@@ -39,9 +39,9 @@
         node_ids.append(nid)
         values = g[field][sl].astype("float64")
         contour_ids = np.zeros(dims, "int64") - 1
-        gct.identify_contours(values, contour_ids, total_contours)
+        total_contours += gct.identify_contours(values, contour_ids,
+                                                total_contours)
         new_contours = tree.cull_candidates(contour_ids)
-        total_contours += new_contours.shape[0]
         tree.add_contours(new_contours)
         # Now we can create a partitioned grid with the contours.
         LE = (DLE + g.dds * gi).in_units("code_length").ndarray_view()
@@ -51,6 +51,8 @@
             LE, RE, dims.astype("int64"))
         contours[nid] = (g.Level, node.node_ind, pg, sl)
     node_ids = np.array(node_ids)
+    if node_ids.size == 0:
+        return 0, {}
     trunk = data_source.tiles.tree.trunk
     mylog.info("Linking node (%s) contours.", len(contours))
     link_node_contours(trunk, contours, tree, node_ids)
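
Two fixes here: identify_contours now returns the number of contours it
created, so the running total advances directly rather than being inferred
from the culled candidates, and a data source that intersects no grids
returns (0, {}) instead of failing downstream.  The id-offset bookkeeping,
as a runnable toy (illustrative names, not yt internals):

    import numpy as np

    def label_piece(values, threshold, offset):
        # Toy labeler: each supra-threshold cell gets its own id,
        # starting at `offset`; returns the labels and the count used.
        ids = np.zeros(values.size, "int64") - 1
        n_found = 0
        for i, v in enumerate(values):
            if v > threshold:
                ids[i] = offset + n_found
                n_found += 1
        return ids, n_found

    total_contours = 0
    for piece in (np.array([0.1, 0.9]), np.array([0.8, 0.2])):
        ids, n = label_piece(piece, 0.5, total_contours)
        total_contours += n  # ids stay globally unique across pieces
    assert total_contours == 2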

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -225,6 +225,10 @@
         self.weight_field = weight_field
         self._set_center(center)
         if data_source is None: data_source = self.ds.all_data()
+        for k, v in data_source.field_parameters.items():
+            if k not in self.field_parameters or \
+              self._is_default_field_parameter(k):
+                self.set_field_parameter(k, v)
         self.data_source = data_source
         self.weight_field = weight_field
         self.get_data(field)
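
With this, a constructed container (here, a projection) inherits any field
parameter the user set on its data source, while untouched defaults remain
overridable.  A sketch mirroring the test added to test_projection.py later
in this changeset (sample dataset):

    import numpy as np
    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # sample dataset
    dd = ds.all_data()
    dd.set_field_parameter("bulk_velocity", np.array([0, 1, 2]))
    proj = ds.proj(0, "density", data_source=dd)
    # The non-default parameter carried over from the source.
    assert (proj.field_parameters["bulk_velocity"] ==
            dd.field_parameters["bulk_velocity"]).all()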

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -107,10 +107,19 @@
         self.ds.objects.append(weakref.proxy(self))
         mylog.debug("Appending object to %s (type: %s)", self.ds, type(self))
         self.field_data = YTFieldData()
-        if field_parameters is None: field_parameters = {}
+        self._default_field_parameters = {
+            'center': self.ds.arr(np.zeros(3, dtype='float64'), 'cm'),
+            'bulk_velocity': self.ds.arr(np.zeros(3, dtype='float64'), 'cm/s'),
+            'normal': self.ds.arr(np.zeros(3, dtype='float64'), ''),
+        }
+        if field_parameters is None:
+            self.field_parameters = {}
+        else:
+            self.field_parameters = field_parameters
         self._set_default_field_parameters()
-        for key, val in field_parameters.items():
-            mylog.debug("Setting %s to %s", key, val)
+        for key, val in self.field_parameters.items():
+            if not self._is_default_field_parameter(key):
+                mylog.debug("Setting %s to %s", key, val)
             self.set_field_parameter(key, val)
 
     @property
@@ -125,13 +134,14 @@
         return self._index
 
     def _set_default_field_parameters(self):
-        self.field_parameters = {}
-        self.set_field_parameter(
-            "center",self.ds.arr(np.zeros(3,dtype='float64'),'cm'))
-        self.set_field_parameter(
-            "bulk_velocity",self.ds.arr(np.zeros(3,dtype='float64'),'cm/s'))
-        self.set_field_parameter(
-            "normal",np.array([0,0,1],dtype='float64'))
+        for k,v in self._default_field_parameters.items():
+            self.set_field_parameter(k,v)
+
+    def _is_default_field_parameter(self, parameter):
+        if parameter not in self._default_field_parameters:
+            return False
+        return self._default_field_parameters[parameter] is \
+          self.field_parameters[parameter]
 
     def apply_units(self, arr, units):
         return self.ds.arr(arr, input_units = units)
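
The default/user distinction rests on object identity: a parameter counts
as a default only while it is still the very object installed from
_default_field_parameters, so even an equal user-supplied value overrides
it.  The check, reduced to a runnable sketch (names illustrative, not yt's
API):

    import numpy as np

    _defaults = {"center": np.zeros(3)}
    field_parameters = {}
    field_parameters.update(_defaults)    # defaults installed by reference

    def is_default(name):
        return (name in _defaults and
                field_parameters[name] is _defaults[name])

    assert is_default("center")
    field_parameters["center"] = np.zeros(3)  # equal value, new object
    assert not is_default("center")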

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -21,14 +21,12 @@
 
 from yt.config import ytcfg
 from yt.units.yt_array import YTArray, uconcatenate, array_like_field
-from yt.utilities.data_point_utilities import FindBindingEnergy
 from yt.utilities.exceptions import YTFieldNotFound
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, parallel_objects
 from yt.utilities.lib.Octree import Octree
 from yt.utilities.physical_constants import \
     gravitational_constant_cgs, \
-    mass_sun_cgs, \
     HUGE
 from yt.utilities.math_utils import prec_accum
 
@@ -237,14 +235,14 @@
           (("all", "particle_mass") in self.data_source.ds.field_info)
         vals = []
         if use_gas:
-            vals += [(data[ax] * data["cell_mass"]).sum(dtype=np.float64)
+            vals += [(data[ax] * data["gas", "cell_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["cell_mass"].sum(dtype=np.float64))
+            vals.append(data["gas", "cell_mass"].sum(dtype=np.float64))
         if use_particles:
-            vals += [(data["particle_position_%s" % ax] *
-                      data["particle_mass"]).sum(dtype=np.float64)
+            vals += [(data["all", "particle_position_%s" % ax] *
+                      data["all", "particle_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["particle_mass"].sum(dtype=np.float64))
+            vals.append(data["all", "particle_mass"].sum(dtype=np.float64))
         return vals
 
     def reduce_intermediate(self, values):
@@ -261,7 +259,7 @@
             y += values.pop(0).sum(dtype=np.float64)
             z += values.pop(0).sum(dtype=np.float64)
             w += values.pop(0).sum(dtype=np.float64)
-        return [v/w for v in [x, y, z]]
+        return self.data_source.ds.arr([v/w for v in [x, y, z]])
 
 class BulkVelocity(DerivedQuantity):
     r"""
@@ -299,14 +297,15 @@
     def process_chunk(self, data, use_gas = True, use_particles = False):
         vals = []
         if use_gas:
-            vals += [(data["velocity_%s" % ax] * data["cell_mass"]).sum(dtype=np.float64)
+            vals += [(data["gas", "velocity_%s" % ax] * 
+                      data["gas", "cell_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["cell_mass"].sum(dtype=np.float64))
+            vals.append(data["gas", "cell_mass"].sum(dtype=np.float64))
         if use_particles:
-            vals += [(data["particle_velocity_%s" % ax] *
-                      data["particle_mass"]).sum(dtype=np.float64)
+            vals += [(data["all", "particle_velocity_%s" % ax] *
+                      data["all", "particle_mass"]).sum(dtype=np.float64)
                      for ax in 'xyz']
-            vals.append(data["particle_mass"].sum(dtype=np.float64))
+            vals.append(data["all", "particle_mass"].sum(dtype=np.float64))
         return vals
 
     def reduce_intermediate(self, values):
@@ -323,7 +322,7 @@
             y += values.pop(0).sum(dtype=np.float64)
             z += values.pop(0).sum(dtype=np.float64)
             w += values.pop(0).sum(dtype=np.float64)
-        return [v/w for v in [x, y, z]]
+        return self.data_source.ds.arr([v/w for v in [x, y, z]])
 
 class WeightedVariance(DerivedQuantity):
     r"""

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -16,6 +16,7 @@
 
 import types
 import numpy as np
+from contextlib import contextmanager
 
 from yt.funcs import *
 from yt.utilities.lib.alt_ray_tracers import cylindrical_ray_trace
@@ -718,6 +719,22 @@
             self.field_data[field] = self.base_object[field][ind]
 
     @property
+    def blocks(self):
+        # We have to take a slightly different approach here.  Note that all
+        # that .blocks has to yield is a 3D array and a mask.
+        for obj, m in self.base_object.blocks:
+            m = m.copy()
+            with obj._field_parameter_state(self.field_parameters):
+                for cond in self.conditionals:
+                    ss = eval(cond)
+                    m = np.logical_and(m, ss, m)
+            if not np.any(m): continue
+            yield obj, m
+
+    def cut_region(self, *args, **kwargs):
+        raise NotImplementedError
+
+    @property
     def _cond_ind(self):
         ind = None
         obj = self.base_object
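
Cut regions gain a blocks implementation that re-evaluates each conditional
on top of the base object's block masks and skips blocks left with no True
cells; cutting a cut region, on the other hand, now raises
NotImplementedError, matching the tests disabled below.  A usage sketch
(sample dataset; the conditional is illustrative):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # sample dataset
    dd = ds.all_data()
    cr = dd.cut_region(["obj['temperature'] > 1.0e4"])
    for grid, mask in cr.blocks:
        # mask is the base mask AND-ed with each conditional.
        hot_density = grid["density"][mask]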

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/tests/test_boolean_regions.py
--- a/yt/data_objects/tests/test_boolean_regions.py
+++ b/yt/data_objects/tests/test_boolean_regions.py
@@ -256,10 +256,8 @@
     for n in [1, 2, 4, 8]:
         ds = fake_random_ds(64, nprocs=n)
         ds.index
-        ell1 = ds.ellipsoid([0.25]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
-            np.array([0.1]*3))
-        ell2 = ds.ellipsoid([0.75]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
-            np.array([0.1]*3))
+        ell1 = ds.ellipsoid([0.25]*3, 0.05, 0.05, 0.05, np.array([0.1]*3), 0.1)
+        ell2 = ds.ellipsoid([0.75]*3, 0.05, 0.05, 0.05, np.array([0.1]*3), 0.1)
         # Store the original indices
         i1 = ell1['ID']
         i1.sort()
@@ -298,10 +296,8 @@
     for n in [1, 2, 4, 8]:
         ds = fake_random_ds(64, nprocs=n)
         ds.index
-        ell1 = ds.ellipsoid([0.45]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
-            np.array([0.1]*3))
-        ell2 = ds.ellipsoid([0.55]*3, 0.05, 0.05, 0.05, np.array([0.1]*3),
-            np.array([0.1]*3))
+        ell1 = ds.ellipsoid([0.45]*3, 0.05, 0.05, 0.05, np.array([0.1]*3), 0.1)
+        ell2 = ds.ellipsoid([0.55]*3, 0.05, 0.05, 0.05, np.array([0.1]*3), 0.1)
         # Get indices of both.
         i1 = ell1['ID']
         i2 = ell2['ID']

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/tests/test_extract_regions.py
--- a/yt/data_objects/tests/test_extract_regions.py
+++ b/yt/data_objects/tests/test_extract_regions.py
@@ -22,10 +22,12 @@
         yield assert_equal, np.all(r["velocity_x"] > 0.25), True
         yield assert_equal, np.sort(dd["density"][t]), np.sort(r["density"])
         yield assert_equal, np.sort(dd["x"][t]), np.sort(r["x"])
-        r2 = r.cut_region( [ "obj['temperature'] < 0.75" ] )
-        t2 = (r["temperature"] < 0.75)
-        yield assert_equal, np.sort(r2["temperature"]), np.sort(r["temperature"][t2])
-        yield assert_equal, np.all(r2["temperature"] < 0.75), True
+        # We are disabling these, as cutting cut regions does not presently
+        # work
+        #r2 = r.cut_region( [ "obj['temperature'] < 0.75" ] )
+        #t2 = (r["temperature"] < 0.75)
+        #yield assert_equal, np.sort(r2["temperature"]), np.sort(r["temperature"][t2])
+        #yield assert_equal, np.all(r2["temperature"] < 0.75), True
 
         # Now we can test some projections
         dd = ds.all_data()

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/data_objects/tests/test_projection.py
--- a/yt/data_objects/tests/test_projection.py
+++ b/yt/data_objects/tests/test_projection.py
@@ -35,6 +35,12 @@
         rho_tot = dd.quantities["TotalQuantity"]("density")
         coords = np.mgrid[xi:xf:xn*1j, yi:yf:yn*1j, zi:zf:zn*1j]
         uc = [np.unique(c) for c in coords]
+        # test if projections inherit the field parameters of their data sources
+        dd.set_field_parameter("bulk_velocity", np.array([0,1,2]))
+        proj = ds.proj(0, "density", data_source=dd)
+        yield assert_equal, dd.field_parameters["bulk_velocity"], \
+          proj.field_parameters["bulk_velocity"]
+
         # Some simple projection tests with single grids
         for ax, an in enumerate("xyz"):
             xax = ds.coordinates.x_axis[ax]

diff -r 0148c810c6bb92db7d364b06e3bbcde39fb2f682 -r 9e7fefe0865b1cc5bb6d26b25b9d6009d562ef6d yt/fields/geometric_fields.py
--- a/yt/fields/geometric_fields.py
+++ b/yt/fields/geometric_fields.py
@@ -207,18 +207,3 @@
              units="cm",
              display_field=False)
 
-    def _contours(field, data):
-        fd = data.get_field_parameter("contour_slices")
-        vals = data["index", "ones"] * -1
-        if fd is None or fd == 0.0:
-            return vals
-        for sl, v in fd.get(data.id, []):
-            vals[sl] = v
-        return vals
-    
-    registry.add_field(("index", "contours"),
-                       function=_contours,
-                       validators=[ValidateSpatial(0)],
-                       take_log=False,
-                       display_field=False)
-

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/
