[yt-svn] commit/yt: 5 new changesets

commits-noreply at bitbucket.org
Wed Mar 30 14:51:33 PDT 2016


5 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/1230110b2034/
Changeset:   1230110b2034
Branch:      yt
User:        jmt354
Date:        2016-03-24 02:22:39+00:00
Summary:     Edited PhasePlot class to have an annotate_title method and updated all docs that used the old set_title method
Affected #:  2 files

diff -r de823c10fed9f0b5fd4acf0512c45fc7d9cc491a -r 1230110b20345305361a346ed32ed12fa6b64bed doc/source/analyzing/mesh_filter.ipynb
--- a/doc/source/analyzing/mesh_filter.ipynb
+++ b/doc/source/analyzing/mesh_filter.ipynb
@@ -143,13 +143,13 @@
    "source": [
     "ph1 = yt.PhasePlot(ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
     "ph1.set_xlim(3e-31, 3e-27)\n",
-    "ph1.set_title('cell_mass', 'No Cuts')\n",
+    "ph1.annotate_title('No Cuts')\n",
     "ph1.set_figure_size(5)\n",
     "ph1.show()\n",
     "\n",
     "ph1 = yt.PhasePlot(dense_ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
     "ph1.set_xlim(3e-31, 3e-27)\n",
-    "ph1.set_title('cell_mass', 'Dense Gas')\n",
+    "ph1.annotate_title('Dense Gas')\n",
     "ph1.set_figure_size(5)\n",
     "ph1.show()"
    ]

diff -r de823c10fed9f0b5fd4acf0512c45fc7d9cc491a -r 1230110b20345305361a346ed32ed12fa6b64bed yt/visualization/profile_plotter.py
--- a/yt/visualization/profile_plotter.py
+++ b/yt/visualization/profile_plotter.py
@@ -697,7 +697,7 @@
     >>> # Change plot properties.
     >>> plot.set_cmap("cell_mass", "jet")
     >>> plot.set_zlim("cell_mass", 1e8, 1e13)
-    >>> plot.set_title("cell_mass", "This is a phase plot")
+    >>> plot.annotate_title("cell_mass", "This is a phase plot")
 
     """
     x_log = None
@@ -1061,6 +1061,27 @@
         return self
 
     @invalidate_plot
+    def annotate_title(self,title):
+        """Set a title for the plot.
+
+        Parameters
+        ----------
+        title : str
+            The title to add.
+
+        Examples
+        --------
+
+        >>> plot.annotate_title("This is a phase plot")
+
+        """
+        for f in self.profile.field_data:
+            if isinstance(f,tuple):
+                f = f[1]
+            self.plot_title[self.data_source._determine_fields(f)[0]] = title
+        return self
+
+    @invalidate_plot
     def reset_plot(self):
         self.plots = {}
         return self
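
For orientation, a minimal usage sketch of the new method (the dataset path
here is a stand-in for any sample dataset, not part of the commit):

    import yt

    # Load a dataset and build a phase plot (placeholder dataset path).
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ad = ds.all_data()
    plot = yt.PhasePlot(ad, "density", "temperature", "cell_mass",
                        weight_field=None)

    # The new method applies one title to every profiled field at once;
    # the old set_title required naming the field explicitly.
    plot.annotate_title("This is a phase plot")
    plot.save()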


https://bitbucket.org/yt_analysis/yt/commits/c980b573469a/
Changeset:   c980b573469a
Branch:      yt
User:        jmt354
Date:        2016-03-24 02:37:38+00:00
Summary:     Merge
Affected #:  18 files

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -65,12 +65,13 @@
 
 Analysis is done by adding actions to the 
 :class:`~yt.analysis_modules.halo_analysis.halo_catalog.HaloCatalog`.
-Each action is represented by a callback function that will be run on each halo. 
-There are three types of actions:
+Each action is represented by a callback function that will be run on
+each halo.  There are four types of actions:
 
 * Filters
 * Quantities
 * Callbacks
+* Recipes
 
 A list of all available filters, quantities, and callbacks can be found in 
 :ref:`halo_analysis_ref`.  
@@ -213,6 +214,50 @@
    # ...  Later on in your script
    hc.add_callback("my_callback")
 
+Recipes
+^^^^^^^
+
+Recipes allow you to create analysis tasks that consist of a series of
+callbacks, quantities, and filters that are run in succession.  An example
+of this is
+:func:`~yt.analysis_modules.halo_analysis.halo_recipes.calculate_virial_quantities`,
+which calculates virial quantities by first creating a sphere container,
+performing 1D radial profiles, and then interpolating to get values at a
+specified threshold overdensity.  All of these operations are separate
+callbacks, but the recipes allow you to add them to your analysis pipeline
+with one call.  For example,
+
+.. code-block:: python
+
+   hc.add_recipe("calculate_virial_quantities", ["radius", "matter_mass"])
+
+The available recipes are located in
+``yt/analysis_modules/halo_analysis/halo_recipes.py``.  New recipes can be
+created in the following manner:
+
+.. code-block:: python
+
+   def my_recipe(halo_catalog, fields, weight_field=None):
+       # create a sphere
+       halo_catalog.add_callback("sphere")
+       # make profiles
+       halo_catalog.add_callback("profile", ["radius"], fields,
+                                 weight_field=weight_field)
+       # save the profile data
+       halo_catalog.add_callback("save_profiles", output_dir="profiles")
+
+   # add recipe to the registry of recipes
+   add_recipe("profile_and_save", my_recipe)
+
+
+   # ...  Later on in your script
+   hc.add_recipe("profile_and_save", ["density", "temperature"],
+                 weight_field="cell_mass")
+
+Note that, unlike callback, filter, and quantity functions, which take a ``Halo``
+object as the first argument, recipe functions should take a ``HaloCatalog``
+object as the first argument.
+
 Running Analysis
 ----------------
 

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -64,6 +64,24 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
+    "Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "slc._setup_plots()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
     "To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions."
    ]
   },
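
A condensed sketch of the pattern the notebook now demonstrates (dataset and
field names are stand-ins; the ``.cb`` attribute access follows the notebook's
own usage):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # stand-in dataset
    slc = yt.SlicePlot(ds, "x", "density")

    # Force the matplotlib figures to be built; without this the colorbar
    # does not exist yet and custom ticks are lost on the next render.
    slc._setup_plots()

    colorbar = slc.plots["density"].cb
    colorbar.set_ticks([1e-28])
    colorbar.set_ticklabels(["$10^{-28}$"])
    slc.save()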

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 doc/source/cookbook/halo_profiler.py
--- a/doc/source/cookbook/halo_profiler.py
+++ b/doc/source/cookbook/halo_profiler.py
@@ -12,26 +12,16 @@
 # Filter out less massive halos
 hc.add_filter("quantity_value", "particle_mass", ">", 1e14, "Msun")
 
-# attach a sphere object to each halo whose radius extends
-#   to twice the radius of the halo
-hc.add_callback("sphere", factor=2.0)
+# This recipe creates a spherical data container, computes
+# radial profiles, and calculates r_200 and M_200.
+hc.add_recipe("calculate_virial_quantities", ["radius", "matter_mass"])
 
-# use the sphere to calculate radial profiles of gas density
-# weighted by cell volume in terms of the virial radius
-hc.add_callback("profile", ["radius"],
-                [("gas", "overdensity")],
-                weight_field="cell_volume",
-                accumulation=True,
-                storage="virial_quantities_profiles")
-
-
-hc.add_callback("virial_quantities", ["radius"],
-                profile_storage="virial_quantities_profiles")
-hc.add_callback('delete_attribute', 'virial_quantities_profiles')
-
+# Create a sphere container with radius 5x r_200.
 field_params = dict(virial_radius=('quantity', 'radius_200'))
 hc.add_callback('sphere', radius_field='radius_200', factor=5,
                 field_parameters=field_params)
+
+# Compute profiles of T vs. r/r_200
 hc.add_callback('profile', ['virial_radius_fraction'], 
                 [('gas', 'temperature')],
                 storage='virial_profiles',

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -19,7 +19,7 @@
 checking in any code that breaks existing functionality.  To further this goal,
 an automatic buildbot runs the test suite after each code commit to confirm
 that yt hasn't broken recently.  To supplement this effort, we also maintain a
-`continuous integration server <http://tests.yt-project.org>`_ that runs the
+`continuous integration server <https://tests.yt-project.org>`_ that runs the
 tests with each commit to the yt version control repository.
 
 .. _unit_testing:
@@ -476,3 +476,82 @@
 test is more useful if you are finding yourself writing a ton of boilerplate
 code to get your image comparison test working.  The ``GenericImageTest`` is
 more useful if you only need to do a one-off image comparison test.
+
+Enabling Answer Tests on Jenkins
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before any code is added to or modified in the yt codebase, each incoming
+changeset is run against all available unit and answer tests on our `continuous
+integration server <https://tests.yt-project.org>`_. While unit tests are
+autodiscovered by `nose <http://nose.readthedocs.org/en/latest/>`_ itself,
+answer tests require a definition of which set of tests constitutes a given
+answer. Configuration for the integration server is stored in
+*tests/tests_2.7.yaml* in the main yt repository:
+
+.. code-block:: yaml
+
+   answer_tests:
+      local_artio_270:
+         - yt/frontends/artio/tests/test_outputs.py
+   # ...
+   other_tests:
+      unittests:
+         - '-v'
+         - '-s'
+
+Each element under *answer_tests* defines an answer name (*local_artio_270* in
+the above snippet) and specifies a list of files/classes/methods that will be
+validated (*yt/frontends/artio/tests/test_outputs.py* in the above snippet).
+On the testing server this is translated to:
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing --local --local-dir ... --answer-big-data \
+      --answer-name=local_artio_270 \
+      yt/frontends/artio/tests/test_outputs.py
+
+If the answer doesn't exist on the server yet, ``nosetests`` is run twice, and
+during the first pass ``--answer-store`` is added to the command line.
+
+Updating Answers
+~~~~~~~~~~~~~~~~
+
+In order to regenerate answers for a particular set of tests, it is sufficient
+to change the answer name in *tests/tests_2.7.yaml*, e.g.:
+
+.. code-block:: diff
+
+   --- a/tests/tests_2.7.yaml
+   +++ b/tests/tests_2.7.yaml
+   @@ -25,7 +25,7 @@
+        - yt/analysis_modules/halo_finding/tests/test_rockstar.py
+        - yt/frontends/owls_subfind/tests/test_outputs.py
+   
+   -  local_owls_270:
+   +  local_owls_271:
+        - yt/frontends/owls/tests/test_outputs.py
+   
+      local_pw_270:
+
+would regenerate answers for the OWLS frontend.
+
+Adding New Answer Tests
+~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to add a new set of answer tests, it is sufficient to extend the
+*answer_tests* list in *tests/tests_2.7.yaml*, e.g.:
+
+.. code-block:: diff
+
+   --- a/tests/tests_2.7.yaml
+   +++ b/tests/tests_2.7.yaml
+   @@ -60,6 +60,10 @@
+        - yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py:test_absorption_spectrum_non_cosmo
+        - yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py:test_absorption_spectrum_cosmo
+    
+   +  local_gdf_270:
+   +    - yt/frontends/gdf/tests/test_outputs.py
+   +
+   +
+    other_tests:
+      unittests:
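
As a rough illustration of the translation described above, a sketch that maps
the YAML configuration onto nosetests command lines (this helper is
hypothetical, not the actual buildbot code; --local-dir is omitted because its
value is site-specific):

    import yaml  # requires PyYAML

    def build_commands(config_path="tests/tests_2.7.yaml"):
        """Turn each answer_tests entry into a nosetests invocation."""
        with open(config_path) as fh:
            config = yaml.safe_load(fh)
        commands = []
        for answer_name, targets in config["answer_tests"].items():
            commands.append(["nosetests", "--with-answer-testing", "--local",
                             "--answer-big-data",
                             "--answer-name=%s" % answer_name] + list(targets))
        return commands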
+

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -472,6 +472,8 @@
    ~yt.analysis_modules.halo_analysis.halo_quantities.HaloQuantity
    ~yt.analysis_modules.halo_analysis.halo_quantities.bulk_velocity
    ~yt.analysis_modules.halo_analysis.halo_quantities.center_of_mass
+   ~yt.analysis_modules.halo_analysis.halo_recipes.HaloRecipe
+   ~yt.analysis_modules.halo_analysis.halo_recipes.calculate_virial_quantities
 
 Halo Finding
 ^^^^^^^^^^^^

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 setup.py
--- a/setup.py
+++ b/setup.py
@@ -362,7 +362,11 @@
                  "Operating System :: POSIX :: AIX",
                  "Operating System :: POSIX :: Linux",
                  "Programming Language :: C",
-                 "Programming Language :: Python",
+                 "Programming Language :: Python :: 2",
+                 "Programming Language :: Python :: 2.7",
+                 "Programming Language :: Python :: 3",
+                 "Programming Language :: Python :: 3.4",
+                 "Programming Language :: Python :: 3.5",
                  "Topic :: Scientific/Engineering :: Astronomy",
                  "Topic :: Scientific/Engineering :: Physics",
                  "Topic :: Scientific/Engineering :: Visualization"],

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -119,8 +119,9 @@
 
     def make_spectrum(self, input_file, output_file=None,
                       line_list_file=None, output_absorbers_file=None,
-                      use_peculiar_velocity=True, 
-                      subgrid_resolution=10, njobs="auto"):
+                      use_peculiar_velocity=True,
+                      subgrid_resolution=10, observing_redshift=0.,
+                      njobs="auto"):
         """
         Make spectrum from ray data using the line list.
 
@@ -130,33 +131,38 @@
         input_file : string or dataset
            path to input ray data or a loaded ray dataset
         output_file : optional, string
-           Option to save a file containing the wavelength, flux, and optical 
-           depth fields.  File formats are chosen based on the filename extension.  
-           ``.h5`` for hdf5, ``.fits`` for fits, and everything else is ASCII.
+           Option to save a file containing the wavelength, flux, and optical
+           depth fields.  File formats are chosen based on the filename
+           extension. ``.h5`` for hdf5, ``.fits`` for fits, and everything
+           else is ASCII.
            Default: None
         output_absorbers_file : optional, string
-           Option to save a text file containing all of the absorbers and 
+           Option to save a text file containing all of the absorbers and
            corresponding wavelength and redshift information.
            For parallel jobs, combining the lines lists can be slow so it
            is recommended to set to None in such circumstances.
            Default: None
         use_peculiar_velocity : optional, bool
            if True, include peculiar velocity for calculating doppler redshift
-           to shift lines.  Requires similar flag to be set in LightRay 
+           to shift lines.  Requires similar flag to be set in LightRay
            generation.
            Default: True
         subgrid_resolution : optional, int
            When a line is being added that is unresolved (ie its thermal
            width is less than the spectral bin width), the voigt profile of
-           the line is deposited into an array of virtual wavelength bins at 
-           higher resolution.  The optical depth from these virtual bins is 
-           integrated and then added to the coarser spectral wavelength bin.  
-           The subgrid_resolution value determines the ratio between the 
-           thermal width and the bin width of the virtual bins.  Increasing 
-           this value yields smaller virtual bins, which increases accuracy, 
-           but is more expensive.  A value of 10 yields accuracy to the 4th 
+           the line is deposited into an array of virtual wavelength bins at
+           higher resolution.  The optical depth from these virtual bins is
+           integrated and then added to the coarser spectral wavelength bin.
+           The subgrid_resolution value determines the ratio between the
+           thermal width and the bin width of the virtual bins.  Increasing
+           this value yields smaller virtual bins, which increases accuracy,
+           but is more expensive.  A value of 10 yields accuracy to the 4th
            significant digit in tau.
            Default: 10
+        observing_redshift : optional, float
+           This is the redshift at which the observer is observing
+           the absorption spectrum.
+           Default: 0
         njobs : optional, int or "auto"
            the number of process groups into which the loop over
            absorption lines will be divided.  If set to -1, each
@@ -183,6 +189,9 @@
             input_fields.append('redshift_eff')
             field_units["velocity_los"] = "cm/s"
             field_units["redshift_eff"] = ""
+        if observing_redshift != 0.:
+            input_fields.append('redshift_dopp')
+            field_units["redshift_dopp"] = ""
         for feature in self.line_list + self.continuum_list:
             if not feature['field_name'] in input_fields:
                 input_fields.append(feature['field_name'])
@@ -204,8 +213,10 @@
         self._add_lines_to_spectrum(field_data, use_peculiar_velocity,
                                     output_absorbers_file,
                                     subgrid_resolution=subgrid_resolution,
+                                    observing_redshift=observing_redshift,
                                     njobs=njobs)
-        self._add_continua_to_spectrum(field_data, use_peculiar_velocity)
+        self._add_continua_to_spectrum(field_data, use_peculiar_velocity,
+                                       observing_redshift=observing_redshift)
 
         self.flux_field = np.exp(-self.tau_field)
 
@@ -223,20 +234,63 @@
         del field_data
         return (self.lambda_field, self.flux_field)
 
-    def _add_continua_to_spectrum(self, field_data, use_peculiar_velocity):
+    def _apply_observing_redshift(self, field_data, use_peculiar_velocity,
+                                 observing_redshift):
+        """
+        Change the redshifts of individual absorbers to account for the 
+        redshift at which the observer sits.
+
+        The intermediate redshift that is seen by an observer
+        at a redshift other than z=0 is z12, where z1 is the
+        observing redshift and z2 is the emitted photon's redshift
+        Hogg (2000) eq. 13:
+
+        1 + z12 = (1 + z2) / (1 + z1)
+        """
+        if observing_redshift == 0.:
+            # This is already assumed in the generation of the LightRay
+            redshift = field_data['redshift']
+            if use_peculiar_velocity:
+                redshift_eff = field_data['redshift_eff']
+        else:
+            # The intermediate redshift that is seen by an observer
+            # at a redshift other than z=0 is z12, where z1 is the
+            # observing redshift and z2 is the emitted photon's redshift
+            # Hogg (2000) eq. 13:
+            # 1 + z12 = (1 + z2) / (1 + z1)
+            redshift = ((1 + field_data['redshift']) / \
+                        (1 + observing_redshift)) - 1.
+            # Combining cosmological redshift and doppler redshift
+            # into an effective redshift is found in Peacock's
+            # Cosmological Physics eqn 3.75:
+            # 1 + z_eff = (1 + z_cosmo) * (1 + z_doppler)
+            if use_peculiar_velocity:
+                redshift_eff = ((1 + redshift) * \
+                                (1 + field_data['redshift_dopp'])) - 1.
+
+        return redshift, redshift_eff
+
+    def _add_continua_to_spectrum(self, field_data, use_peculiar_velocity,
+                                  observing_redshift=0.):
         """
         Add continuum features to the spectrum.
         """
+        # Change the redshifts of continuum sources to account for the 
+        # redshift at which the observer sits
+        redshift, redshift_eff = self._apply_observing_redshift(field_data, 
+                                 use_peculiar_velocity, observing_redshift)
+
         # Only add continuum features down to tau of 1.e-4.
         min_tau = 1.e-3
 
         for continuum in self.continuum_list:
             column_density = field_data[continuum['field_name']] * field_data['dl']
+
             # redshift_eff field combines cosmological and velocity redshifts
             if use_peculiar_velocity:
-                delta_lambda = continuum['wavelength'] * field_data['redshift_eff']
+                delta_lambda = continuum['wavelength'] * redshift_eff
             else:
-                delta_lambda = continuum['wavelength'] * field_data['redshift']
+                delta_lambda = continuum['wavelength'] * redshift
             this_wavelength = delta_lambda + continuum['wavelength']
             right_index = np.digitize(this_wavelength, self.lambda_field).clip(0, self.n_lambda)
             left_index = np.digitize((this_wavelength *
@@ -259,13 +313,19 @@
             pbar.finish()
 
     def _add_lines_to_spectrum(self, field_data, use_peculiar_velocity,
-                               output_absorbers_file, subgrid_resolution=10, 
-                               njobs=-1):
+                               output_absorbers_file, subgrid_resolution=10,
+                               observing_redshift=0., njobs=-1):
         """
         Add the absorption lines to the spectrum.
         """
-        # Widen wavelength window until optical depth falls below this tau 
-        # value at the ends to assure that the wings of a line have been 
+
+        # Change the redshifts of individual absorbers to account for the 
+        # redshift at which the observer sits
+        redshift, redshift_eff = self._apply_observing_redshift(field_data, 
+                                 use_peculiar_velocity, observing_redshift)
+
+        # Widen wavelength window until optical depth falls below this tau
+        # value at the ends to assure that the wings of a line have been
         # fully resolved.
         min_tau = 1e-3
 
@@ -276,11 +336,11 @@
 
             # redshift_eff field combines cosmological and velocity redshifts
             # so delta_lambda gives the offset in angstroms from the rest frame
-            # wavelength to the observed wavelength of the transition 
+            # wavelength to the observed wavelength of the transition
             if use_peculiar_velocity:
-                delta_lambda = line['wavelength'] * field_data['redshift_eff']
+                delta_lambda = line['wavelength'] * redshift_eff
             else:
-                delta_lambda = line['wavelength'] * field_data['redshift']
+                delta_lambda = line['wavelength'] * redshift
             # lambda_obs is central wavelength of line after redshift
             lambda_obs = line['wavelength'] + delta_lambda
             # the total number of absorbers per transition
@@ -308,7 +368,7 @@
                                   line['atomic_mass'])
 
             # the actual thermal width of the lines
-            thermal_width = (lambda_obs * thermal_b / 
+            thermal_width = (lambda_obs * thermal_b /
                              speed_of_light_cgs).convert_to_units("angstrom")
 
             # Sanitize units for faster runtime of the tau_profile machinery.
@@ -320,20 +380,20 @@
 
             # When we actually deposit the voigt profile, sometimes we will
             # have underresolved lines (ie lines with smaller widths than
-            # the spectral bin size).  Here, we create virtual wavelength bins 
-            # small enough in width to well resolve each line, deposit the 
-            # voigt profile into them, then numerically integrate their tau 
-            # values and sum them to redeposit them into the actual spectral 
+            # the spectral bin size).  Here, we create virtual wavelength bins
+            # small enough in width to well resolve each line, deposit the
+            # voigt profile into them, then numerically integrate their tau
+            # values and sum them to redeposit them into the actual spectral
             # bins.
 
             # virtual bins (vbins) will be:
             # 1) <= the bin_width; assures at least as good as spectral bins
             # 2) <= 1/10th the thermal width; assures resolving voigt profiles
             #   (actually 1/subgrid_resolution value, default is 1/10)
-            # 3) a bin width will be divisible by vbin_width times a power of 
+            # 3) a bin width will be divisible by vbin_width times a power of
             #    10; this will assure we don't get spikes in the deposited
             #    spectra from uneven numbers of vbins per bin
-            resolution = thermal_width / self.bin_width 
+            resolution = thermal_width / self.bin_width
             n_vbins_per_bin = 10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
             vbin_width = self.bin_width.d / n_vbins_per_bin
 
@@ -341,17 +401,17 @@
             if (thermal_width < self.bin_width).any():
                 mylog.info(("%d out of %d line components will be " + \
                             "deposited as unresolved lines.") %
-                           ((thermal_width < self.bin_width).sum(), 
+                           ((thermal_width < self.bin_width).sum(),
                             n_absorbers))
 
             # provide a progress bar with information about lines processed
             pbar = get_pbar("Adding line - %s [%f A]: " % \
                             (line['label'], line['wavelength']), n_absorbers)
 
-            # for a given transition, step through each location in the 
+            # for a given transition, step through each location in the
             # observed spectrum where it occurs and deposit a voigt profile
             for i in parallel_objects(np.arange(n_absorbers), njobs=-1):
-
+ 
                 # the virtual window into which the line is deposited initially 
                 # spans a region of 2 coarse spectral bins 
                 # (one on each side of the center_index) but the window
@@ -362,12 +422,9 @@
                 window_width_in_bins = 2
 
                 while True:
-                    left_index = (center_index[i] - \
-                            window_width_in_bins/2)
-                    right_index = (center_index[i] + \
-                            window_width_in_bins/2)
-                    n_vbins = (right_index - left_index) * \
-                              n_vbins_per_bin[i]
+                    left_index = (center_index[i] - window_width_in_bins/2)
+                    right_index = (center_index[i] + window_width_in_bins/2)
+                    n_vbins = (right_index - left_index) * n_vbins_per_bin[i]
                     
                     # the array of virtual bins in lambda space
                     vbins = \
@@ -384,8 +441,8 @@
 
                     # If tau has not dropped below min tau threshold by the
                     # edges (ie the wings), then widen the wavelength
-                    # window and repeat process. 
-                    if ((vtau[0] < min_tau) and (vtau[-1] < min_tau)):
+                    # window and repeat process.
+                    if (vtau[0] < min_tau and vtau[-1] < min_tau):
                         break
                     window_width_in_bins *= 2
 
@@ -421,8 +478,9 @@
                         += EW[(intersect_left_index - left_index): \
                               (intersect_right_index - left_index)]
 
+
                 # write out absorbers to file if the column density of
-                # an absorber is greater than the specified "label_threshold" 
+                # an absorber is greater than the specified "label_threshold"
                 # of that absorption line
                 if output_absorbers_file and \
                    line['label_threshold'] is not None and \
@@ -436,15 +494,15 @@
                                                 'wavelength': (lambda_0 + dlambda[i]),
                                                 'column_density': column_density[i],
                                                 'b_thermal': thermal_b[i],
-                                                'redshift': field_data['redshift'][i],
-                                                'redshift_eff': field_data['redshift_eff'][i],
+                                                'redshift': redshift[i],
+                                                'redshift_eff': redshift_eff[i],
                                                 'v_pec': peculiar_velocity})
                 pbar.update(i)
             pbar.finish()
 
             del column_density, delta_lambda, lambda_obs, center_index, \
                 thermal_b, thermal_width, cdens, thermb, dlambda, \
-                vlos, resolution, vbin_width, n_vbins_per_bin
+                vlos, resolution, vbin_width, n_vbins, n_vbins_per_bin
 
         comm = _get_comm(())
         self.tau_field = comm.mpi_allreduce(self.tau_field, op="sum")
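
The two relations quoted in the new docstring are easy to check in isolation;
a minimal sketch with plain floats, no yt machinery:

    def observed_redshift(z_absorber, z_observer):
        # Hogg (2000), eq. 13: redshift of a photon emitted at z2 as seen
        # by an observer sitting at z1: 1 + z12 = (1 + z2) / (1 + z1)
        return (1.0 + z_absorber) / (1.0 + z_observer) - 1.0

    def effective_redshift(z_cosmo, z_doppler):
        # Peacock, Cosmological Physics, eq. 3.75: cosmological and doppler
        # redshifts combine multiplicatively.
        return (1.0 + z_cosmo) * (1.0 + z_doppler) - 1.0

    # An absorber at z = 2 seen by an observer at z = 0.5:
    z12 = observed_redshift(2.0, 0.5)        # 1.0
    z_eff = effective_redshift(z12, 1e-4)    # ~1.0002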

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/analysis_modules/halo_analysis/api.py
--- a/yt/analysis_modules/halo_analysis/api.py
+++ b/yt/analysis_modules/halo_analysis/api.py
@@ -15,16 +15,19 @@
 
 
 from .halo_catalog import \
-     HaloCatalog
+    HaloCatalog
 
 from .halo_callbacks import \
-     add_callback
+    add_callback
 
 from .halo_finding_methods import \
-     add_finding_method
+    add_finding_method
 
 from .halo_filters import \
-     add_filter
+    add_filter
      
 from .halo_quantities import \
-     add_quantity
+    add_quantity
+
+from .halo_recipes import \
+    add_recipe

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -76,7 +76,7 @@
     factor : float
         Factor to be multiplied by the base radius for defining 
         the radius of the sphere.
-        Defautl: 1.0.
+        Default: 1.0.
     field_parameters : dict
         Dictionary of field parameters to be set with the sphere 
         created.
@@ -166,8 +166,7 @@
     bin_fields : list of strings
         The binning fields for the profile.
     profile_fields : string or list of strings
-        The fields to be propython
-        filed.
+        The fields to be profiled.
     n_bins : int or list of ints
         The number of bins in each dimension.  If None, 32 bins are
         used for each bin field.

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py
@@ -35,6 +35,8 @@
     finding_method_registry
 from .halo_quantities import \
     quantity_registry
+from .halo_recipes import \
+    recipe_registry
 
 class HaloCatalog(ParallelAnalysisInterface):
     r"""Create a HaloCatalog: an object that allows for the creation and association
@@ -257,6 +259,46 @@
         halo_filter = filter_registry.find(halo_filter, *args, **kwargs)
         self.actions.append(("filter", halo_filter))
 
+    def add_recipe(self, recipe, *args, **kwargs):
+        r"""
+        Add a recipe to the halo catalog action list.
+
+        A recipe is an operation consisting of a series of callbacks, quantities,
+        and/or filters called in succession.  Recipes can be used to store a more
+        complex series of analysis tasks as a single entity.
+
+        Parameters
+        ----------
+        halo_recipe : string
+            The name of the recipe.
+
+        Examples
+        --------
+
+        >>> import yt
+        >>> from yt.analysis_modules.halo_analysis.api import HaloCatalog
+        >>>
+        >>> data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')
+        >>> halos_ds = yt.load('rockstar_halos/halos_0.0.bin')
+        >>> hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
+        >>>
+        >>> # Filter out less massive halos
+        >>> hc.add_filter("quantity_value", "particle_mass", ">", 1e14, "Msun")
+        >>>
+        >>> # Calculate virial radii
+        >>> hc.add_recipe("calculate_virial_quantities", ["radius", "matter_mass"])
+        >>>
+        >>> hc.create()
+
+        Available Recipes
+        -----------------
+        calculate_virial_quantities
+
+        """
+
+        halo_recipe = recipe_registry.find(recipe, *args, **kwargs)
+        halo_recipe(self)
+
     def create(self, save_halos=False, save_catalog=True, njobs=-1, dynamic=False):
         r"""
         Create the halo catalog given the callbacks, quantities, and filters that

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/analysis_modules/halo_analysis/halo_recipes.py
--- /dev/null
+++ b/yt/analysis_modules/halo_analysis/halo_recipes.py
@@ -0,0 +1,107 @@
+"""
+Halo recipe object
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2016, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.utilities.operator_registry import \
+    OperatorRegistry
+
+recipe_registry = OperatorRegistry()
+
+def add_recipe(name, function):
+    recipe_registry[name] = HaloRecipe(function)
+
+class HaloRecipe(object):
+    r"""
+    A HaloRecipe is a function that minimally takes in a Halo object
+    and performs some analysis on it.  This function may attach attributes
+    to the Halo object, write out data, etc, but does not return anything.
+    """
+    def __init__(self, function, args=None, kwargs=None):
+        self.function = function
+        self.args = args
+        if self.args is None: self.args = []
+        self.kwargs = kwargs
+        if self.kwargs is None: self.kwargs = {}
+
+    def __call__(self, halo_catalog):
+        return self.function(halo_catalog, *self.args, **self.kwargs)
+
+def calculate_virial_quantities(hc, fields,
+                                weight_field=None, accumulation=True,
+                                radius_field="virial_radius", factor=2.0,
+                                overdensity_field=("gas", "overdensity"),
+                                critical_overdensity=200):
+    r"""
+    Calculate virial quantities with the following procedure:
+    1. Create a sphere data container.
+    2. Create 1D radial profiles of overdensity and any requested fields.
+    3. Call virial_quantities callback to interpolate profiles for
+       value of critical overdensity.
+    4. Delete profile and sphere objects from halo.
+
+    Parameters
+    ----------
+    hc : HaloCatalog object
+        The HaloCatalog to which the recipe is applied.
+    fields: string or list of strings
+        The fields for which virial values are to be calculated.
+    weight_field : string
+        Weight field for profiling.
+        Default: None.
+    accumulation : bool or list of bools
+        If True, the profile values for a bin n are the cumulative sum of
+        all the values from bin 0 to n.  If -True, the sum is reversed so
+        that the value for bin n is the cumulative sum from bin N (total bins)
+        to n.  If the profile is 2D or 3D, a list of values can be given to
+        control the summation in each dimension independently.
+        Default: True.
+    radius_field : string
+        Field to be retrieved from the quantities dictionary as
+        the basis of the halo radius.
+        Default: "virial_radius".
+    factor : float
+        Factor to be multiplied by the base radius for defining
+        the radius of the sphere.
+        Default: 2.0.
+    overdensity_field : string or tuple of strings
+        The field used as the overdensity from which interpolation is done to
+        calculate virial quantities.
+        Default: ("gas", "overdensity")
+    critical_overdensity : float
+        The value of the overdensity at which to evaluate the virial quantities.
+        Overdensity is with respect to the critical density.
+        Default: 200
+
+    """
+
+    storage = "virial_quantities_profiles"
+    pfields = [field for field in fields if field != "radius"]
+
+    hc.add_callback("sphere", factor=factor)
+    if pfields:
+        hc.add_callback("profile", ["radius"], pfields,
+                        weight_field=weight_field,
+                        accumulation=accumulation,
+                        storage=storage)
+    hc.add_callback("profile", ["radius"], [overdensity_field],
+                    weight_field="cell_volume", accumulation=True,
+                    storage=storage)
+    hc.add_callback("virial_quantities", fields,
+                    overdensity_field=overdensity_field,
+                    critical_overdensity=critical_overdensity,
+                    profile_storage=storage)
+    hc.add_callback("delete_attribute", storage)
+    hc.add_callback("delete_attribute", "data_object")
+
+add_recipe("calculate_virial_quantities", calculate_virial_quantities)
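
For orientation, calling the new recipe is equivalent to queuing these
callbacks by hand; with the defaults above,
hc.add_recipe("calculate_virial_quantities", ["radius", "matter_mass"])
expands to roughly:

    hc.add_callback("sphere", factor=2.0)
    hc.add_callback("profile", ["radius"], ["matter_mass"],
                    weight_field=None, accumulation=True,
                    storage="virial_quantities_profiles")
    hc.add_callback("profile", ["radius"], [("gas", "overdensity")],
                    weight_field="cell_volume", accumulation=True,
                    storage="virial_quantities_profiles")
    hc.add_callback("virial_quantities", ["radius", "matter_mass"],
                    overdensity_field=("gas", "overdensity"),
                    critical_overdensity=200,
                    profile_storage="virial_quantities_profiles")
    hc.add_callback("delete_attribute", "virial_quantities_profiles")
    hc.add_callback("delete_attribute", "data_object")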

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -118,7 +118,7 @@
         mask = self.oct_handler.mask(selector, domain_id = self.domain_id)
         slicer = OctreeSubsetBlockSlice(self)
         for i, sl in slicer:
-            yield sl, mask[i,...]
+            yield sl, np.atleast_3d(mask[i,...])
 
     def select_tcoords(self, dobj):
         # These will not be pre-allocated, which can be a problem for speed and
@@ -129,6 +129,8 @@
             dt, t = dobj.selector.get_dt(sl)
             dts.append(dt)
             ts.append(t)
+        if len(dts) == len(ts) == 0:
+            return np.empty(0, "f8"), np.empty(0, "f8")
         return np.concatenate(dts), np.concatenate(ts)
 
     @property
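
The early return added to select_tcoords guards a genuine corner case:
numpy refuses to concatenate an empty sequence. A short demonstration:

    import numpy as np

    try:
        np.concatenate([])          # no blocks selected -> empty list
    except ValueError:
        # "need at least one array to concatenate" -- hence returning two
        # empty float64 arrays instead of falling through to concatenate.
        dts = ts = np.empty(0, "f8")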

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/utilities/lib/grid_traversal.pyx
--- a/yt/utilities/lib/grid_traversal.pyx
+++ b/yt/utilities/lib/grid_traversal.pyx
@@ -279,6 +279,8 @@
         vertex[1] = corners[1][iv]
         vertex[2] = corners[2][iv]
 
+        cam_width[1] = cam_width[0] * image.nv[1] / image.nv[0]
+
         subtract(vertex, cam_pos, sight_vector)
         fma(cam_width[2], normal_vector, cam_pos, sight_center)
 
@@ -399,7 +401,7 @@
                     vp_pos.shape[1], vp_dir.shape[1], image.shape[1])
                 raise RuntimeError(msg)
 
-            if camera_data is not None and "perspective" in self.lens_type:
+            if camera_data is not None and self.lens_type == 'perspective':
                 self.extent_function = calculate_extent_perspective
             else:
                 self.extent_function = calculate_extent_null
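
The lens check change avoids a substring match: the old test also fired for
the stereo variant, while the new equality test restricts the perspective
extent function to the plain perspective lens (presumably the intent here):

    >>> "perspective" in "stereo-perspective"   # old test: matches both
    True
    >>> "stereo-perspective" == "perspective"   # new test: plain lens only
    False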

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/utilities/lib/pixelization_routines.pyx
--- a/yt/utilities/lib/pixelization_routines.pyx
+++ b/yt/utilities/lib/pixelization_routines.pyx
@@ -261,7 +261,7 @@
                        fabs(ysp - cy) * 0.95 > dysp or \
                        fabs(zsp - cz) * 0.95 > dzsp:
                         continue
-                    mask[i, j] = 1
+                    mask[i, j] += 1
                     my_array[i, j] += dsp
     my_array /= mask
     return my_array
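
Changing mask[i, j] = 1 to mask[i, j] += 1 turns the mask into a per-pixel
hit counter, so the final my_array /= mask averages every overlapping deposit
instead of dividing by one. A toy sketch of the same pattern:

    import numpy as np

    values = np.zeros((2, 2))
    counts = np.zeros((2, 2))

    for dsp in (1.0, 3.0):      # two particles landing on pixel (0, 0)
        values[0, 0] += dsp
        counts[0, 0] += 1       # count every hit, not just the first

    with np.errstate(invalid="ignore"):
        averaged = values / counts   # (0, 0) -> 2.0; empty pixels -> nan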

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/utilities/parallel_tools/parallel_analysis_interface.py
--- a/yt/utilities/parallel_tools/parallel_analysis_interface.py
+++ b/yt/utilities/parallel_tools/parallel_analysis_interface.py
@@ -28,6 +28,7 @@
     ensure_list, iterable
 
 from yt.config import ytcfg
+from yt.data_objects.image_array import ImageArray
 import yt.utilities.logger
 from yt.utilities.lib.quad_tree import \
     QuadTree, merge_quadtrees
@@ -794,15 +795,25 @@
             if self.comm.rank == root:
                 if isinstance(data, YTArray):
                     info = (data.shape, data.dtype, str(data.units), data.units.registry.lut)
+                    if isinstance(data, ImageArray):
+                        info += ('ImageArray',)
+                    else:
+                        info += ('YTArray',)
                 else:
                     info = (data.shape, data.dtype)
             else:
                 info = ()
             info = self.comm.bcast(info, root=root)
             if self.comm.rank != root:
-                if len(info) == 4:
+                if len(info) == 5:
                     registry = UnitRegistry(lut=info[3], add_default_symbols=False)
-                    data = YTArray(np.empty(info[0], dtype=info[1]), info[2], registry=registry)
+                    if info[-1] == "ImageArray":
+                        data = ImageArray(np.empty(info[0], dtype=info[1]),
+                                          input_units=info[2],
+                                          registry=registry)
+                    else:
+                        data = YTArray(np.empty(info[0], dtype=info[1]), 
+                                       info[2], registry=registry)
                 else:
                     data = np.empty(info[0], dtype=info[1])
             mpi_type = get_mpi_type(info[1])
@@ -1008,6 +1019,10 @@
         # communicate type and shape and optionally units
         if isinstance(arr, YTArray):
             unit_metadata = (str(arr.units), arr.units.registry.lut)
+            if isinstance(arr, ImageArray):
+                unit_metadata += ('ImageArray',)
+            else:
+                unit_metadata += ('YTArray',)
         else:
             unit_metadata = ()
         self.comm.send((arr.dtype.str, arr.shape) + unit_metadata, dest=dest, tag=tag)
@@ -1020,9 +1035,13 @@
         if ne is None and dt is None:
             return self.comm.recv(source=source, tag=tag)
         arr = np.empty(ne, dtype=dt)
-        if len(metadata) == 4:
+        if len(metadata) == 5:
             registry = UnitRegistry(lut=metadata[3], add_default_symbols=False)
-            arr = YTArray(arr, metadata[2], registry=registry)
+            if metadata[-1] == "ImageArray":
+                arr = ImageArray(arr, input_units=metadata[2],
+                                 registry=registry)
+            else:
+                arr = YTArray(arr, metadata[2], registry=registry)
         tmp = arr.view(self.__tocast)
         self.comm.Recv([tmp, MPI.CHAR], source=source, tag=tag)
         return arr
@@ -1041,7 +1060,10 @@
         if isinstance(send, YTArray):
             # We assume send.units is consistent with the units
             # on the receiving end.
-            recv = YTArray(recv, send.units)
+            if isinstance(send, ImageArray):
+                recv = ImageArray(recv, input_units=send.units)
+            else:
+                recv = YTArray(recv, send.units)
         recv[offset:offset+send.size] = send[:]
         dtr = send.dtype.itemsize / tmp_send.dtype.itemsize # > 1
         roff = [off * dtr for off in offsets]
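
The metadata scheme is easy to see in miniature: the sender appends a class
tag to the (shape, dtype, units, lut) tuple so the receiver can rebuild the
right array subclass. A sketch with the MPI plumbing elided (TaggedArray is a
stand-in for ImageArray):

    import numpy as np

    class TaggedArray(np.ndarray):
        """Stand-in for an ndarray subclass such as ImageArray."""

    def pack(arr):
        tag = "TaggedArray" if isinstance(arr, TaggedArray) else "ndarray"
        return (arr.shape, arr.dtype, tag)

    def unpack(info):
        data = np.empty(info[0], dtype=info[1])
        if info[-1] == "TaggedArray":
            data = data.view(TaggedArray)
        return data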

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/visualization/streamlines.py
--- a/yt/visualization/streamlines.py
+++ b/yt/visualization/streamlines.py
@@ -168,9 +168,11 @@
         if self.get_magnitude:
             self.magnitudes = self.comm.mpi_allreduce(
                 self.magnitudes, op='sum')
-        
+
     def _integrate_through_brick(self, node, stream, step,
                                  periodic=False, mag=None):
+        LE = self.ds.domain_left_edge.d
+        RE = self.ds.domain_right_edge.d
         while (step > 1):
             self.volume.get_brick_data(node)
             brick = node.data
@@ -183,13 +185,14 @@
                 brick.integrate_streamline(
                     stream[-step+1], self.direction*self.dx, marr)
                 mag[-step+1] = marr[0]
-                
-            if np.any(stream[-step+1,:] <= self.ds.domain_left_edge) | \
-                   np.any(stream[-step+1,:] >= self.ds.domain_right_edge):
+
+            cur_stream = stream[-step+1, :]
+            if np.sum(np.logical_or(cur_stream < LE, cur_stream >= RE)):
                 return 0
 
-            if np.any(stream[-step+1,:] < node.get_left_edge()) | \
-                   np.any(stream[-step+1,:] >= node.get_right_edge()):
+            nLE = node.get_left_edge()
+            nRE = node.get_right_edge()
+            if np.sum(np.logical_or(cur_stream < nLE, cur_stream >= nRE)):
                 return step-1
             step -= 1
         return step
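
The refactored bounds test hoists the unit-stripped domain edges out of the
loop and collapses two np.any calls into one boolean reduction; a standalone
check of the new form:

    import numpy as np

    LE = np.array([0.0, 0.0, 0.0])           # domain_left_edge.d
    RE = np.array([1.0, 1.0, 1.0])           # domain_right_edge.d
    cur_stream = np.array([0.5, 1.2, 0.3])   # y coordinate out of bounds

    out_of_domain = np.sum(np.logical_or(cur_stream < LE, cur_stream >= RE))
    assert out_of_domain   # nonzero -> the streamline has left the box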

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -280,8 +280,10 @@
     def _get_sampler_params(self, render_source):
         lens_params = self.lens._get_sampler_params(self, render_source)
         lens_params.update(width=self.width)
+        pos = self.position.in_units("code_length").d
+        width = self.width.in_units("code_length").d
         lens_params.update(camera_data=np.vstack(
-            (self.position.d, self.width.d, self.unit_vectors.d)))
+            (pos, width, self.unit_vectors.d)))
         return lens_params
 
     def set_lens(self, lens_type):

diff -r 1230110b20345305361a346ed32ed12fa6b64bed -r c980b573469a147a0d3a102d714f7aeb2f81f667 yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -406,7 +406,8 @@
     def _show_mpl(self, im, sigma_clip=None, dpi=100):
         import matplotlib.pyplot as plt
         s = im.shape
-        self._render_figure = plt.figure(1, figsize=(s[1]/dpi, s[0]/dpi))
+        self._render_figure = plt.figure(1, figsize=(s[1]/float(dpi), s[0]/float(dpi)))
+        self._render_figure.clf()
         ax = plt.gca()
         ax.set_position([0, 0, 1, 1])
 


https://bitbucket.org/yt_analysis/yt/commits/05b4216d0f78/
Changeset:   05b4216d0f78
Branch:      yt
User:        jmt354
Date:        2016-03-24 04:18:36+00:00
Summary:     Minor style fixes
Affected #:  1 file

diff -r c980b573469a147a0d3a102d714f7aeb2f81f667 -r 05b4216d0f783fae7eddc7ffa01fe2d97c07c450 yt/visualization/profile_plotter.py
--- a/yt/visualization/profile_plotter.py
+++ b/yt/visualization/profile_plotter.py
@@ -1061,7 +1061,7 @@
         return self
 
     @invalidate_plot
-    def annotate_title(self,title):
+    def annotate_title(self, title):
         """Set a title for the plot.
 
         Parameters
@@ -1076,7 +1076,7 @@
 
         """
         for f in self.profile.field_data:
-            if isinstance(f,tuple):
+            if isinstance(f, tuple):
                 f = f[1]
             self.plot_title[self.data_source._determine_fields(f)[0]] = title
         return self


https://bitbucket.org/yt_analysis/yt/commits/30a1e6a86592/
Changeset:   30a1e6a86592
Branch:      yt
User:        jmt354
Date:        2016-03-24 04:25:58+00:00
Summary:     Fixed error in example usage
Affected #:  1 file

diff -r 05b4216d0f783fae7eddc7ffa01fe2d97c07c450 -r 30a1e6a86592c2a93593abf0749adb9fb8ace40c yt/visualization/profile_plotter.py
--- a/yt/visualization/profile_plotter.py
+++ b/yt/visualization/profile_plotter.py
@@ -697,7 +697,7 @@
     >>> # Change plot properties.
     >>> plot.set_cmap("cell_mass", "jet")
     >>> plot.set_zlim("cell_mass", 1e8, 1e13)
-    >>> plot.annotate_title("cell_mass", "This is a phase plot")
+    >>> plot.annotate_title("This is a phase plot")
 
     """
     x_log = None


https://bitbucket.org/yt_analysis/yt/commits/f027ae00224c/
Changeset:   f027ae00224c
Branch:      yt
User:        MatthewTurk
Date:        2016-03-30 21:51:17+00:00
Summary:     Merged in Jmt354/yt (pull request #2072)

Edited PhasePlot class to have an annotate_title method. Closes #931
Affected #:  2 files

diff -r ac130eab01032636d00b9de431b8e5f1b4b788d6 -r f027ae00224c357e0cf4b49c3ff1afdd9d69bbaa doc/source/analyzing/mesh_filter.ipynb
--- a/doc/source/analyzing/mesh_filter.ipynb
+++ b/doc/source/analyzing/mesh_filter.ipynb
@@ -143,13 +143,13 @@
    "source": [
     "ph1 = yt.PhasePlot(ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
     "ph1.set_xlim(3e-31, 3e-27)\n",
-    "ph1.set_title('cell_mass', 'No Cuts')\n",
+    "ph1.annotate_title('No Cuts')\n",
     "ph1.set_figure_size(5)\n",
     "ph1.show()\n",
     "\n",
     "ph1 = yt.PhasePlot(dense_ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
     "ph1.set_xlim(3e-31, 3e-27)\n",
-    "ph1.set_title('cell_mass', 'Dense Gas')\n",
+    "ph1.annotate_title('Dense Gas')\n",
     "ph1.set_figure_size(5)\n",
     "ph1.show()"
    ]

diff -r ac130eab01032636d00b9de431b8e5f1b4b788d6 -r f027ae00224c357e0cf4b49c3ff1afdd9d69bbaa yt/visualization/profile_plotter.py
--- a/yt/visualization/profile_plotter.py
+++ b/yt/visualization/profile_plotter.py
@@ -697,7 +697,7 @@
     >>> # Change plot properties.
     >>> plot.set_cmap("cell_mass", "jet")
     >>> plot.set_zlim("cell_mass", 1e8, 1e13)
-    >>> plot.set_title("cell_mass", "This is a phase plot")
+    >>> plot.annotate_title("This is a phase plot")
 
     """
     x_log = None
@@ -1061,6 +1061,27 @@
         return self
 
     @invalidate_plot
+    def annotate_title(self, title):
+        """Set a title for the plot.
+
+        Parameters
+        ----------
+        title : str
+            The title to add.
+
+        Examples
+        --------
+
+        >>> plot.annotate_title("This is a phase plot")
+
+        """
+        for f in self.profile.field_data:
+            if isinstance(f, tuple):
+                f = f[1]
+            self.plot_title[self.data_source._determine_fields(f)[0]] = title
+        return self
+
+    @invalidate_plot
     def reset_plot(self):
         self.plots = {}
         return self

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.