[yt-svn] commit/yt: 6 new changesets

commits-noreply at bitbucket.org
Tue Aug 19 10:16:10 PDT 2014


6 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/ec3f892564ff/
Changeset:   ec3f892564ff
Branch:      yt
User:        ChrisMalone
Date:        2014-08-05 02:58:37
Summary:     sync text with actual code example
Affected #:  1 file

diff -r 9e50338a7e3706e73a8406b41bb7379d06a7e96e -r ec3f892564ffc93877e30b4d9f6b0b868eb87377 doc/source/developing/creating_derived_fields.rst
--- a/doc/source/developing/creating_derived_fields.rst
+++ b/doc/source/developing/creating_derived_fields.rst
@@ -114,14 +114,15 @@
                 function=_my_radial_velocity,
                 units="cm/s",
                 take_log=False,
+                display_field=False,
                 validators=[ValidateParameter('center'),
                             ValidateParameter('bulk_velocity')])
 
 Note that we have added a few parameters below the main function; we specify
-that we do not wish to display this field as logged, that we require both
-``bulk_velocity`` and ``center`` to be present in a given data object we wish
-to calculate this for, and we say that it should not be displayed in a
-drop-down box of fields to display. This is done through the parameter
+that we do not wish to display this field as logged, that the field should not be displayed in a
+drop-down box of fields to display within Reason, and that we require both
+``bulk_velocity`` and ``center`` to be present in a given data object for which we wish
+to calculate this field. This is done through the parameter
 *validators*, which accepts a list of :class:`~yt.fields.derived_field.FieldValidator` 
 objects. These objects define the way in which the field is generated, and 
 when it is able to be created. In this case, we mandate that parameters 

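For context, the documentation being edited here describes a radial-velocity derived field. A minimal sketch of the full registration, consistent with the excerpt above (the import path for ValidateParameter is assumed from the cross-reference in the text; the field body follows the documented example):

    import numpy as np
    import yt
    from yt.fields.derived_field import ValidateParameter

    def _my_radial_velocity(field, data):
        # Remove the bulk motion, then project the velocity onto the
        # radius vector relative to the 'center' field parameter.
        bv = data.get_field_parameter("bulk_velocity").in_units("cm/s")
        center = data.get_field_parameter("center")
        xv = data["gas", "velocity_x"] - bv[0]
        yv = data["gas", "velocity_y"] - bv[1]
        zv = data["gas", "velocity_z"] - bv[2]
        x = data["index", "x"] - center[0]
        y = data["index", "y"] - center[1]
        z = data["index", "z"] - center[2]
        r = np.sqrt(x*x + y*y + z*z)
        return (xv*x + yv*y + zv*z)/r

    yt.add_field("my_radial_velocity",
                 function=_my_radial_velocity,
                 units="cm/s",
                 take_log=False,
                 display_field=False,
                 validators=[ValidateParameter('center'),
                             ValidateParameter('bulk_velocity')])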

https://bitbucket.org/yt_analysis/yt/commits/5000cb90c634/
Changeset:   5000cb90c634
Branch:      yt
User:        ChrisMalone
Date:        2014-08-08 19:00:05
Summary:     backing out
Affected #:  1 file

diff -r ec3f892564ffc93877e30b4d9f6b0b868eb87377 -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 doc/source/developing/creating_derived_fields.rst
--- a/doc/source/developing/creating_derived_fields.rst
+++ b/doc/source/developing/creating_derived_fields.rst
@@ -114,15 +114,14 @@
                 function=_my_radial_velocity,
                 units="cm/s",
                 take_log=False,
-                display_field=False,
                 validators=[ValidateParameter('center'),
                             ValidateParameter('bulk_velocity')])
 
 Note that we have added a few parameters below the main function; we specify
-that we do not wish to display this field as logged, that the field should not be displayed in a
-drop-down box of fields to display within Reason, and that we require both
-``bulk_velocity`` and ``center`` to be present in a given data object for which we wish
-to calculate this field. This is done through the parameter
+that we do not wish to display this field as logged, that we require both
+``bulk_velocity`` and ``center`` to be present in a given data object we wish
+to calculate this for, and we say that it should not be displayed in a
+drop-down box of fields to display. This is done through the parameter
 *validators*, which accepts a list of :class:`~yt.fields.derived_field.FieldValidator` 
 objects. These objects define the way in which the field is generated, and 
 when it is able to be created. In this case, we mandate that parameters 


https://bitbucket.org/yt_analysis/yt/commits/a58cf9b644f8/
Changeset:   a58cf9b644f8
Branch:      yt
User:        ChrisMalone
Date:        2014-08-08 19:04:32
Summary:     Merge
Affected #:  7 files

diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -16,7 +16,7 @@
 
 DEST_SUFFIX="yt-`uname -m`"
 DEST_DIR="`pwd`/${DEST_SUFFIX/ /}"   # Installation location
-BRANCH="yt-3.0" # This is the branch to which we will forcibly update.
+BRANCH="yt" # This is the branch to which we will forcibly update.
 
 if [ ${REINST_YT} ] && [ ${REINST_YT} -eq 1 ] && [ -n ${YT_DEST} ]
 then

diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -225,13 +225,13 @@
 
     # accumulate, if necessary
     if accumulation:
-        used = my_profile.used        
+        used = my_profile.used
         for field in my_profile.field_data:
             if weight_field is None:
                 my_profile.field_data[field][used] = \
                     np.cumsum(my_profile.field_data[field][used])
             else:
-                my_weight = my_profile.weight[:, 0]
+                my_weight = my_profile.weight
                 my_profile.field_data[field][used] = \
                   np.cumsum(my_profile.field_data[field][used] * my_weight[used]) / \
                   np.cumsum(my_weight[used])

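The weighted branch above computes a running weighted mean over the used bins; a small self-contained illustration of the same np.cumsum arithmetic (the arrays are hypothetical stand-ins for field_data[field][used] and weight[used]):

    import numpy as np

    values = np.array([1.0, 2.0, 3.0, 4.0])
    weight = np.array([2.0, 1.0, 1.0, 2.0])

    # At bin i this yields the mean of values[:i+1] weighted by weight[:i+1].
    accumulated = np.cumsum(values * weight) / np.cumsum(weight)
    print(accumulated)  # [ 1.      1.3333  1.75    2.5   ]
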
diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -27,7 +27,7 @@
 from yt.extern.six import add_metaclass
 
 from yt.config import ytcfg
-from yt.funcs import mylog
+from yt.funcs import mylog, ensure_dir_exists
 from yt.utilities.performance_counters import \
     time_function, \
     yt_counters

diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
@@ -233,7 +233,6 @@
             fi += 1
         pi += npart
     num_p[0] = local_parts
-    del ds._instantiated_hierarchy
     del ds
 
 cdef class RockstarInterface:

diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/frontends/halo_catalogs/rockstar/definitions.py
--- a/yt/frontends/halo_catalogs/rockstar/definitions.py
+++ b/yt/frontends/halo_catalogs/rockstar/definitions.py
@@ -35,17 +35,25 @@
     ("unused", BINARY_HEADER_SIZE - 4*12 - 4 - 8*6 - 12, "c")
 )
 
-halo_dt = np.dtype([
+# Note the final field here, which is a field for min/max format revision in
+# which the field appears.
+
+KNOWN_REVISIONS=[0, 1]
+
+halo_dt = [
     ('particle_identifier', np.int64),
     ('particle_position_x', np.float32),
     ('particle_position_y', np.float32),
     ('particle_position_z', np.float32),
+    ('particle_mposition_x', np.float32, (0, 0)),
+    ('particle_mposition_y', np.float32, (0, 0)),
+    ('particle_mposition_z', np.float32, (0, 0)),
     ('particle_velocity_x', np.float32),
     ('particle_velocity_y', np.float32),
     ('particle_velocity_z', np.float32),
-    ('particle_corevel_x', np.float32),
-    ('particle_corevel_y', np.float32),
-    ('particle_corevel_z', np.float32),
+    ('particle_corevel_x', np.float32, (1, 100)),
+    ('particle_corevel_y', np.float32, (1, 100)),
+    ('particle_corevel_z', np.float32, (1, 100)),
     ('particle_bulkvel_x', np.float32),
     ('particle_bulkvel_y', np.float32),
     ('particle_bulkvel_z', np.float32),
@@ -75,15 +83,15 @@
     ('Ax', np.float32),
     ('Ay', np.float32),
     ('Az', np.float32),
-    ('b_to_a2', np.float32),
-    ('c_to_a2', np.float32),
-    ('A2x', np.float32),
-    ('A2y', np.float32),
-    ('A2z', np.float32),
+    ('b_to_a2', np.float32, (1, 100)),
+    ('c_to_a2', np.float32, (1, 100)),
+    ('A2x', np.float32, (1, 100)),
+    ('A2y', np.float32, (1, 100)),
+    ('A2z', np.float32, (1, 100)),
     ('bullock_spin', np.float32),
     ('kin_to_pot', np.float32),
-    ('m_pe_b', np.float32),
-    ('m_pe_d', np.float32),
+    ('m_pe_b', np.float32, (1, 100)),
+    ('m_pe_d', np.float32, (1, 100)),
     ('num_p', np.int64),
     ('num_child_particles', np.int64),
     ('p_start', np.int64),
@@ -93,7 +101,20 @@
     ('min_pos_err', np.float32),
     ('min_vel_err', np.float32),
     ('min_bulkvel_err', np.float32),
-], align=True)
+]
+
+halo_dts = {}
+
+for rev in KNOWN_REVISIONS:
+    halo_dts[rev] = []
+    for item in halo_dt:
+        if len(item) == 2:
+            halo_dts[rev].append(item)
+        else:
+            mi, ma = item[2]
+            if (mi <= rev) and (rev <= ma):
+                halo_dts[rev].append(item[:2])
+    halo_dts[rev] = np.dtype(halo_dts[rev], align=True)
 
 particle_dt = np.dtype([
     ('particle_identifier', np.int64),

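The loop above turns the annotated field list into one numpy dtype per known format revision, keeping a three-tuple field only when the revision falls inside its (min, max) range. The same pattern in miniature:

    import numpy as np

    # Two-tuples are unconditional; a third element is a (min_rev, max_rev)
    # validity range, as in halo_dt above.
    spec = [
        ("id", np.int64),
        ("x", np.float32),
        ("corevel_x", np.float32, (1, 100)),  # absent in revision 0
    ]

    dtypes = {}
    for rev in (0, 1):
        fields = [item[:2] for item in spec
                  if len(item) == 2 or item[2][0] <= rev <= item[2][1]]
        dtypes[rev] = np.dtype(fields, align=True)

    print(dtypes[0].names)  # ('id', 'x')
    print(dtypes[1].names)  # ('id', 'x', 'corevel_x')
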
diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/frontends/halo_catalogs/rockstar/io.py
--- a/yt/frontends/halo_catalogs/rockstar/io.py
+++ b/yt/frontends/halo_catalogs/rockstar/io.py
@@ -24,7 +24,7 @@
     BaseIOHandler
 
 import yt.utilities.fortran_utils as fpu
-from .definitions import halo_dt
+from .definitions import halo_dts
 from yt.utilities.lib.geometry_utils import compute_morton
 
 from yt.geometry.oct_container import _ORDER_MAX
@@ -32,6 +32,10 @@
 class IOHandlerRockstarBinary(BaseIOHandler):
     _dataset_type = "rockstar_binary"
 
+    def __init__(self, *args, **kwargs):
+        super(IOHandlerRockstarBinary, self).__init__(*args, **kwargs)
+        self._halo_dt = halo_dts[self.ds.parameters['format_revision']]
+
     def _read_fluid_selection(self, chunks, selector, fields, size):
         raise NotImplementedError
 
@@ -45,11 +49,12 @@
         for chunk in chunks:
             for obj in chunk.objs:
                 data_files.update(obj.data_files)
+        
         for data_file in sorted(data_files):
             pcount = data_file.header['num_halos']
             with open(data_file.filename, "rb") as f:
                 f.seek(data_file._position_offset, os.SEEK_SET)
-                halos = np.fromfile(f, dtype=halo_dt, count = pcount)
+                halos = np.fromfile(f, dtype=self._halo_dt, count = pcount)
                 x = halos['particle_position_x'].astype("float64")
                 y = halos['particle_position_y'].astype("float64")
                 z = halos['particle_position_z'].astype("float64")
@@ -70,7 +75,7 @@
             with open(data_file.filename, "rb") as f:
                 for ptype, field_list in sorted(ptf.items()):
                     f.seek(data_file._position_offset, os.SEEK_SET)
-                    halos = np.fromfile(f, dtype=halo_dt, count = pcount)
+                    halos = np.fromfile(f, dtype=self._halo_dt, count = pcount)
                     x = halos['particle_position_x'].astype("float64")
                     y = halos['particle_position_y'].astype("float64")
                     z = halos['particle_position_z'].astype("float64")
@@ -89,7 +94,7 @@
         ind = 0
         with open(data_file.filename, "rb") as f:
             f.seek(data_file._position_offset, os.SEEK_SET)
-            halos = np.fromfile(f, dtype=halo_dt, count = pcount)
+            halos = np.fromfile(f, dtype=self._halo_dt, count = pcount)
             pos = np.empty((halos.size, 3), dtype="float64")
             # These positions are in Mpc, *not* "code" units
             pos = data_file.ds.arr(pos, "code_length")
@@ -121,6 +126,6 @@
         return {'halos': data_file.header['num_halos']}
 
     def _identify_fields(self, data_file):
-        fields = [("halos", f) for f in halo_dt.fields if
+        fields = [("halos", f) for f in self._halo_dt.fields if
                   "padding" not in f]
         return fields, {}

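With the dtype chosen once per file in __init__, every read site becomes a plain np.fromfile with a structured dtype. A minimal, self-contained sketch of that read pattern (the file name and two-field dtype here are hypothetical):

    import numpy as np

    dt = np.dtype([("particle_identifier", np.int64),
                   ("particle_position_x", np.float32)], align=True)

    # Write a tiny binary file so the read below actually runs.
    rec = np.zeros(3, dtype=dt)
    rec["particle_identifier"] = [1, 2, 3]
    rec.tofile("halos.bin")

    with open("halos.bin", "rb") as f:
        halos = np.fromfile(f, dtype=dt, count=3)
    x = halos["particle_position_x"].astype("float64")
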
diff -r 5000cb90c63444e195632cd1688c7ebf45e9fc57 -r a58cf9b644f8bba90c9429dc7b005b9128527e27 yt/visualization/plot_container.py
--- a/yt/visualization/plot_container.py
+++ b/yt/visualization/plot_container.py
@@ -280,6 +280,7 @@
         name = old_object._type_name
         kwargs = dict((n, getattr(old_object, n))
                       for n in old_object._con_args)
+        kwargs['center'] = getattr(old_object, 'center', None)
         if data_source is not None:
             if name != "proj":
                 raise RuntimeError("The data_source keyword argument "
@@ -289,6 +290,12 @@
         self.ds = new_ds
         self.data_source = new_object
         self._data_valid = self._plot_valid = False
+        for d in 'xyz':
+            lim_name = d+'lim'
+            if hasattr(self, lim_name):
+                lim = getattr(self, lim_name)
+                lim = tuple(new_ds.quan(l.value, str(l.units)) for l in lim)
+                setattr(self, lim_name, lim)
         self._recreate_frb()
         self._setup_plots()
 

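The xyz loop above rebuilds each stored axis limit through new_ds.quan so the quantities carry the new dataset's unit registry; otherwise limits expressed in code units would keep resolving against the old dataset. A hedged sketch of the idea (dataset paths hypothetical):

    import yt

    ds_old = yt.load("DD0010/moving7_0010")  # hypothetical paths
    ds_new = yt.load("DD0020/moving7_0020")

    xlim = (ds_old.quan(0.2, "code_length"), ds_old.quan(0.8, "code_length"))
    # Re-create each limit against ds_new's registry; "code_length" may
    # correspond to a different physical size in the new output.
    xlim = tuple(ds_new.quan(l.value, str(l.units)) for l in xlim)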

https://bitbucket.org/yt_analysis/yt/commits/a061af2dca0b/
Changeset:   a061af2dca0b
Branch:      yt
User:        ChrisMalone
Date:        2014-08-19 17:10:03
Summary:     Merged yt_analysis/yt into yt
Affected #:  33 files

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 doc/source/_static/custom.css
--- a/doc/source/_static/custom.css
+++ b/doc/source/_static/custom.css
@@ -39,6 +39,13 @@
         padding-top: 10px;
         padding-bottom: 10px;
     }
+    /* since 3.1.0 */
+    .navbar-collapse.collapse.in { 
+        display: block!important;
+    }
+    .collapsing {
+        overflow: hidden!important;
+    }
 }
 
 /* 

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 doc/source/analyzing/particle_filter.ipynb
--- a/doc/source/analyzing/particle_filter.ipynb
+++ b/doc/source/analyzing/particle_filter.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:4d705a81671d5692ed6691b3402115edbe9c98af815af5bb160ddf551bf02c76"
+  "signature": "sha256:427da1e1d02deb543246218dc8cce991268b518b25cfdd5944a4a436695f874b"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -40,11 +40,13 @@
      "source": [
       "We will filter these into young stars and old stars by masking on the ('Stars', 'creation_time') field. \n",
       "\n",
-      "In order to do this, we first make a function which applies our desired cut.  This function must accept two arguments: `pfilter` and `data`.  The second argument is a yt data container and is usually the only one used in a filter definition.\n",
+      "In order to do this, we first make a function which applies our desired cut.  This function must accept two arguments: `pfilter` and `data`.  The first argument is a `ParticleFilter` object that contains metadata about the filter its self.  The second argument is a yt data container.\n",
       "\n",
-      "Let's call \"young\" stars only those stars with ages less 5 million years.  Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages.\n",
+      "Let's call \"young\" stars only those stars with ages less 5 million years.  Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages. \n",
       "\n",
-      "Old stars either formed dynamically in the simulation (ages greater than 5 Myr) or were present in the initial conditions (negative ages)."
+      "Conversely, let's define \"old\" stars as those stars formed dynamically in the simulation with ages greater than 5 Myr.  We also include stars with negative ages, since these stars were included in the simulation initial conditions.\n",
+      "\n",
+      "We make use of `pfilter.filtered_type` so that the filter definition will use the same particle type as the one specified in the call to `add_particle_filter` below.  This makes the filter definition usable for arbitrary particle types.  Since we're only filtering the `\"Stars\"` particle type in this example, we could have also replaced `pfilter.filtered_type` with `\"Stars\"` and gotten the same result."
      ]
     },
     {
@@ -52,12 +54,12 @@
      "collapsed": false,
      "input": [
       "def young_stars(pfilter, data):\n",
-      "    age = data.ds.current_time - data[\"Stars\", \"creation_time\"]\n",
+      "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
       "    filter = np.logical_and(age.in_units('Myr') <= 5, age >= 0)\n",
       "    return filter\n",
       "\n",
       "def old_stars(pfilter, data):\n",
-      "    age = data.ds.current_time - data[\"Stars\", \"creation_time\"]\n",
+      "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
       "    filter = np.logical_or(age.in_units('Myr') >= 5, age < 0)\n",
       "    return filter"
      ],
@@ -140,4 +142,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

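The notebook changes above lean on pfilter.filtered_type; for context, a sketch of how such a filter is typically registered and enabled (the dataset path is a placeholder):

    import numpy as np
    import yt
    from yt.data_objects.particle_filters import add_particle_filter

    def young_stars(pfilter, data):
        age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
        return np.logical_and(age.in_units('Myr') <= 5, age >= 0)

    add_particle_filter("young_stars", function=young_stars,
                        filtered_type="Stars", requires=["creation_time"])

    ds = yt.load("TipsyGalaxy/galaxy.00300")  # placeholder dataset path
    ds.add_particle_filter("young_stars")
    ad = ds.all_data()
    print(ad["young_stars", "particle_mass"].shape)
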
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 doc/source/developing/creating_derived_fields.rst
--- a/doc/source/developing/creating_derived_fields.rst
+++ b/doc/source/developing/creating_derived_fields.rst
@@ -179,6 +179,38 @@
      fields or that get aliased to themselves, we can specify a different
      desired output unit than the unit found on disk.
 
+Debugging a Derived Field
+-------------------------
+
+If your derived field is not behaving as you would like, you can insert a call
+to ``data._debug()`` to spawn an interactive interpreter whenever that line is
+reached.  Note that this is slightly different from calling
+``pdb.set_trace()``, as it will *only* trigger when the derived field is being
+called on an actual data object, rather than during the field detection phase.
+The starting position will be one function lower in the stack than you are
+likely interested in, but you can either step through back to the derived field
+function, or simply type ``u`` to go up a level in the stack.
+
+For instance, if you had defined this derived field:
+
+.. code-block:: python
+
+   @yt.derived_field(name = "funthings")
+   def funthings(field, data):
+       return data["sillythings"] + data["humorousthings"]**2.0
+
+And you wanted to debug it, you could do:
+
+.. code-block:: python
+
+   @yt.derived_field(name = "funthings")
+   def funthings(field, data):
+       data._debug()
+       return data["sillythings"] + data["humorousthings"]**2.0
+
+And now, when that derived field is actually used, you will be placed into a
+debugger.
+
 Units for Cosmological Datasets
 -------------------------------
 

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 doc/source/reference/code_support.rst
--- a/doc/source/reference/code_support.rst
+++ b/doc/source/reference/code_support.rst
@@ -20,7 +20,7 @@
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
 | ARTIO                 |     Y      |     Y     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
-| Athena                |     Y      |     N     |      Y     |   Y   |    Y     |    Y     |     N      |   Full   |
+| Athena                |     Y      |     N     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 
 | Castro                |     Y      |     Y     |   Partial  |   Y   |    Y     |    Y     |     N      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+ 

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
--- a/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
+++ b/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:f3e6416e4807e008a016ad8c7dfc8e78cab0d7519498458660554a4c88549c23"
+  "signature": "sha256:5a1547973517987ff047f1b2405277a0e98392e8fd5ffe04521cb2dc372d32d3"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -83,7 +83,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "# Build a transfer function that is a multivariate gaussian in Density\n",
+      "# Build a transfer function that is a multivariate gaussian in temperature\n",
       "tfh = yt.TransferFunctionHelper(ds)\n",
       "tfh.set_field('temperature')\n",
       "tfh.set_log(True)\n",
@@ -180,4 +180,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
@@ -1,8 +1,10 @@
-from scipy import optimize
-import numpy as na
+import numpy as np
 import h5py
 from yt.analysis_modules.absorption_spectrum.absorption_line \
         import voigt
+from yt.utilities.on_demand_imports import _scipy
+
+optimize = _scipy.optimize
 
 def generate_total_fit(x, fluxData, orderFits, speciesDicts, 
         minError=1E-4, complexLim=.995,
@@ -83,7 +85,7 @@
     x0,xRes=x[0],x[1]-x[0]
 
     #Empty fit without any lines
-    yFit = na.ones(len(fluxData))
+    yFit = np.ones(len(fluxData))
 
     #Force the first and last flux pixel to be 1 to prevent OOB
     fluxData[0]=1
@@ -98,10 +100,10 @@
     #Fit all species one at a time in given order from low to high wavelength
     for species in orderFits:
         speciesDict = speciesDicts[species]
-        speciesLines = {'N':na.array([]),
-                        'b':na.array([]),
-                        'z':na.array([]),
-                        'group#':na.array([])}
+        speciesLines = {'N':np.array([]),
+                        'b':np.array([]),
+                        'z':np.array([]),
+                        'group#':np.array([])}
 
         #Set up wavelengths for species
         initWl = speciesDict['wavelength'][0]
@@ -131,7 +133,7 @@
                         yFitBounded,z,speciesDict,
                         minSize,minError)
 
-            if na.size(newLinesP)> 0:
+            if np.size(newLinesP)> 0:
 
                 #Check for EXPLOOOOSIIONNNSSS
                 newLinesP = _check_numerical_instability(x, newLinesP, speciesDict,b)
@@ -150,12 +152,12 @@
 
 
             #Add new group to all fitted lines
-            if na.size(newLinesP)>0:
-                speciesLines['N']=na.append(speciesLines['N'],newLinesP[:,0])
-                speciesLines['b']=na.append(speciesLines['b'],newLinesP[:,1])
-                speciesLines['z']=na.append(speciesLines['z'],newLinesP[:,2])
-                groupNums = b_i*na.ones(na.size(newLinesP[:,0]))
-                speciesLines['group#']=na.append(speciesLines['group#'],groupNums)
+            if np.size(newLinesP)>0:
+                speciesLines['N']=np.append(speciesLines['N'],newLinesP[:,0])
+                speciesLines['b']=np.append(speciesLines['b'],newLinesP[:,1])
+                speciesLines['z']=np.append(speciesLines['z'],newLinesP[:,2])
+                groupNums = b_i*np.ones(np.size(newLinesP[:,0]))
+                speciesLines['group#']=np.append(speciesLines['group#'],groupNums)
 
         allSpeciesLines[species]=speciesLines
 
@@ -226,7 +228,7 @@
             initP[0] = speciesDict['init_N']
         initP[1] = speciesDict['init_b']
         initP[2]=initz
-        initP=na.array([initP])
+        initP=np.array([initP])
 
     linesP = initP
 
@@ -259,7 +261,7 @@
 
 
         #Set results of optimization
-        linesP = na.reshape(fitP,(-1,3))
+        linesP = np.reshape(fitP,(-1,3))
 
         #Generate difference between current best fit and data
         yNewFit=_gen_flux_lines(x,linesP,speciesDict)
@@ -288,7 +290,7 @@
                 break
 
         #If too many lines 
-        if na.shape(linesP)[0]>8 or na.size(linesP)+3>=len(x):
+        if np.shape(linesP)[0]>8 or np.size(linesP)+3>=len(x):
             #If its fitable by flag tools and still bad, use flag tools
             if errSq >1E2*errBound and speciesDict['name']=='HI lya':
                 return [],True
@@ -315,17 +317,17 @@
             newP[0] = speciesDict['init_N']
         newP[1] = speciesDict['init_b']
         newP[2]=(x[dif.argmax()]-wl0)/wl0
-        linesP=na.append(linesP,[newP],axis=0)
+        linesP=np.append(linesP,[newP],axis=0)
 
 
     #Check the parameters of all lines to see if they fall in an
     #   acceptable range, as given in dict ref
     remove=[]
     for i,p in enumerate(linesP):
-        check=_check_params(na.array([p]),speciesDict,x)
+        check=_check_params(np.array([p]),speciesDict,x)
         if check: 
             remove.append(i)
-    linesP = na.delete(linesP,remove,axis=0)
+    linesP = np.delete(linesP,remove,axis=0)
 
     return linesP,flag
 
@@ -377,7 +379,7 @@
     #Iterate through test line guesses
     for initLines in lineTests:
         if initLines[1,0]==0:
-            initLines = na.delete(initLines,1,axis=0)
+            initLines = np.delete(initLines,1,axis=0)
 
         #Do fitting with initLines as first guess
         linesP,flag=_complex_fit(x,yDat,yFit,initz,
@@ -421,7 +423,7 @@
     """
 
     #Set up a bunch of empty lines
-    testP = na.zeros((10,2,3))
+    testP = np.zeros((10,2,3))
 
     testP[0,0,:]=[1E18,20,initz]
     testP[1,0,:]=[1E18,40,initz]
@@ -542,7 +544,7 @@
                 errBound = 10*errBound*len(yb)
 
             #Generate a fit and find the difference to data
-            yFitb=_gen_flux_lines(xb,na.array([p]),speciesDict)
+            yFitb=_gen_flux_lines(xb,np.array([p]),speciesDict)
             dif =yb-yFitb
 
 
@@ -557,7 +559,7 @@
                 break
 
     #Remove all bad line fits
-    linesP = na.delete(linesP,removeLines,axis=0)
+    linesP = np.delete(linesP,removeLines,axis=0)
 
     return linesP 
 
@@ -755,7 +757,7 @@
             if firstLine: 
                 break
 
-    flux = na.exp(-y)
+    flux = np.exp(-y)
     return flux
 
 def _gen_tau(t, p, f, Gamma, lambda_unshifted):
@@ -768,7 +770,7 @@
     a=7.95774715459E-15*Gamma*lambda_unshifted/b
     x=299792.458/b*(lambda_unshifted*(1+z)/t-1)
     
-    H = na.zeros(len(x))
+    H = np.zeros(len(x))
     H = voigt(a,x)
     
     tau = tau_o*H
@@ -910,9 +912,9 @@
 
             # Make the final line parameters. Its annoying because
             # one or both regions may have fit to nothing
-            if na.size(p1)> 0 and na.size(p2)>0:
-                p = na.r_[p1,p2]
-            elif na.size(p1) > 0:
+            if np.size(p1)> 0 and np.size(p2)>0:
+                p = np.r_[p1,p2]
+            elif np.size(p1) > 0:
                 p = p1
             else:
                 p = p2
@@ -952,7 +954,7 @@
             # max and min to prevent boundary errors
 
             flux = _gen_flux_lines(x,[line],speciesDict,firstLine=True)
-            flux = na.r_[flux[:max(b[1]-10,0)], flux[min(b[2]+10,len(x)):]]
+            flux = np.r_[flux[:max(b[1]-10,0)], flux[min(b[2]+10,len(x)):]]
 
             #Find regions that are absorbing outside the region we fit
             flux_dif = 1 - flux
@@ -971,7 +973,7 @@
                 remove_lines.append(i)
     
     if remove_lines:
-        p = na.delete(p, remove_lines, axis=0)
+        p = np.delete(p, remove_lines, axis=0)
 
     return p
 
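This hunk (and the spectral_models change below) moves scipy behind yt.utilities.on_demand_imports so scipy is only imported when these code paths actually run. A minimal sketch of the lazy-import pattern, not necessarily yt's exact implementation:

    class OnDemandScipy(object):
        # Defer "import scipy.optimize" until the attribute is touched.
        _optimize = None

        @property
        def optimize(self):
            if self._optimize is None:
                import scipy.optimize
                self._optimize = scipy.optimize
            return self._optimize

    _scipy = OnDemandScipy()
    optimize = _scipy.optimize  # the real import happens here, on first use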

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/analysis_modules/photon_simulator/spectral_models.py
--- a/yt/analysis_modules/photon_simulator/spectral_models.py
+++ b/yt/analysis_modules/photon_simulator/spectral_models.py
@@ -22,14 +22,10 @@
 except ImportError:
     pass
 
-try:
-    import xspec
-    from scipy.integrate import cumtrapz
-    from scipy import stats        
-except ImportError:
-    pass
-from yt.utilities.on_demand_imports import _astropy
+from yt.utilities.on_demand_imports import _astropy, _scipy
+
 pyfits = _astropy.pyfits
+stats = _scipy.stats
 
 from yt.utilities.physical_constants import hcgs, clight, erg_per_keV, amu_cgs
 
@@ -212,11 +208,14 @@
         try:
             self.line_handle = pyfits.open(self.linefile)
         except IOError:
-            mylog.error("LINE file %s does not exist" % (self.linefile))
+            mylog.error("LINE file %s does not exist" % self.linefile)
+            raise IOError("LINE file %s does not exist" % self.linefile)
         try:
             self.coco_handle = pyfits.open(self.cocofile)
         except IOError:
-            mylog.error("COCO file %s does not exist" % (self.cocofile))
+            mylog.error("COCO file %s does not exist" % self.cocofile)
+            raise IOError("COCO file %s does not exist" % self.cocofile)
+
         self.Tvals = self.line_handle[1].data.field("kT")
         self.dTvals = np.diff(self.Tvals)
         self.minlam = self.wvbins.min()
@@ -224,18 +223,18 @@
     
     def _make_spectrum(self, element, tindex):
         
-        tmpspec = np.zeros((self.nchan))
+        tmpspec = np.zeros(self.nchan)
         
         i = np.where((self.line_handle[tindex].data.field('element') == element) &
                      (self.line_handle[tindex].data.field('lambda') > self.minlam) &
                      (self.line_handle[tindex].data.field('lambda') < self.maxlam))[0]
 
-        vec = np.zeros((self.nchan))
+        vec = np.zeros(self.nchan)
         E0 = hc.value/self.line_handle[tindex].data.field('lambda')[i]
         amp = self.line_handle[tindex].data.field('epsilon')[i]
         ebins = self.ebins.ndarray_view()
         if self.thermal_broad:
-            vec = np.zeros((self.nchan))
+            vec = np.zeros(self.nchan)
             sigma = E0*np.sqrt(self.Tvals[tindex]*erg_per_keV/(self.A[element]*amu_cgs))/clight.value
             for E, sig, a in zip(E0, sigma, amp):
                 cdf = stats.norm(E,sig).cdf(ebins)
@@ -270,10 +269,10 @@
         """
         Get the thermal emission spectrum given a temperature *kT* in keV. 
         """
-        cspec_l = np.zeros((self.nchan))
-        mspec_l = np.zeros((self.nchan))
-        cspec_r = np.zeros((self.nchan))
-        mspec_r = np.zeros((self.nchan))
+        cspec_l = np.zeros(self.nchan)
+        mspec_l = np.zeros(self.nchan)
+        cspec_r = np.zeros(self.nchan)
+        mspec_r = np.zeros(self.nchan)
         tindex = np.searchsorted(self.Tvals, kT)-1
         if tindex >= self.Tvals.shape[0]-1 or tindex < 0:
             return cspec_l*cm3/units.s, mspec_l*cm3/units.s

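The error-handling change above logs and then raises, so a missing table file aborts construction instead of leaving the model half-initialized. The generic pattern (file path hypothetical; a bare raise inside the except block would equally preserve the original traceback):

    import logging
    mylog = logging.getLogger("yt")  # stand-in for yt.funcs.mylog

    linefile = "apec_line.fits"  # hypothetical path
    try:
        line_handle = open(linefile, "rb")
    except IOError:
        mylog.error("LINE file %s does not exist", linefile)
        raise IOError("LINE file %s does not exist" % linefile)
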
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/analysis_modules/setup.py
--- a/yt/analysis_modules/setup.py
+++ b/yt/analysis_modules/setup.py
@@ -6,6 +6,7 @@
     config.make_config_py()  # installs __config__.py
     config.add_subpackage("absorption_spectrum")
     config.add_subpackage("cosmological_observation")
+    config.add_subpackage("halo_analysis")
     config.add_subpackage("halo_finding")
     config.add_subpackage("halo_mass_function")
     config.add_subpackage("level_sets")

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -336,8 +336,8 @@
                                   registry=self.ds.unit_registry)
             if self.weight_field is None and not self._sum_only:
                 u_obj = Unit(units, registry=self.ds.unit_registry)
-                if (u_obj.is_code_unit and not u_obj.is_dimensionless) and \
-                  input_units != units or self.ds.no_cgs_equiv_length:
+                if ((u_obj.is_code_unit or self.ds.no_cgs_equiv_length) and
+                    not u_obj.is_dimensionless) and input_units != units:
                     final_unit = "(%s) * code_length" % units
                     self[field].convert_to_units(final_unit)
         for i in data.keys(): self[i] = data.pop(i)

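The condition rewrite above is in part a precedence fix: "and" binds tighter than "or", so in the old expression the trailing "or self.ds.no_cgs_equiv_length" escaped the other clauses entirely and could force the conversion on its own. In miniature:

    A, B, C = False, False, True

    # "A and B or C" parses as "(A and B) or C", not "A and (B or C)".
    assert (A and B or C) is True
    assert (A and (B or C)) is False
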
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -129,6 +129,15 @@
         self._index = self.ds.index
         return self._index
 
+    def _debug(self):
+        """
+        When called from within a derived field, this will run pdb.  However,
+        during field detection, it will not.  This allows you to more easily
+        debug fields that are being called on actual objects.
+        """
+        import pdb
+        pdb.set_trace()
+
     def _set_default_field_parameters(self):
         self.field_parameters = {}
         for k,v in self._default_field_parameters.items():
@@ -438,8 +447,14 @@
 
     @contextmanager
     def _field_parameter_state(self, field_parameters):
+        # What we're doing here is making a copy of the incoming field
+        # parameters, and then updating it with our own.  This means that we'll
+        # be using our own center, if set, rather than the supplied one.  But
+        # it also means that any additionally set values can override it.
         old_field_parameters = self.field_parameters
-        self.field_parameters = field_parameters
+        new_field_parameters = field_parameters.copy()
+        new_field_parameters.update(old_field_parameters)
+        self.field_parameters = new_field_parameters
         yield
         self.field_parameters = old_field_parameters
 

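The copy-then-update ordering in _field_parameter_state means values already set on the object win over the supplied ones, as the inline comment explains. A standalone sketch of the pattern (with a try/finally added for exception safety, which the hunk above does not include):

    from contextlib import contextmanager

    class Container(object):
        def __init__(self):
            self.field_parameters = {"center": (0.5, 0.5, 0.5)}

        @contextmanager
        def _field_parameter_state(self, field_parameters):
            old = self.field_parameters
            new = field_parameters.copy()
            new.update(old)   # our own values override the incoming ones
            self.field_parameters = new
            try:
                yield
            finally:
                self.field_parameters = old

    c = Container()
    with c._field_parameter_state({"center": (0, 0, 0), "mu": 0.6}):
        print(c.field_parameters["center"])  # (0.5, 0.5, 0.5)
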
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -517,15 +517,15 @@
     Parameters
     ----------
     center : array_like 
-        coordinate to which the normal, radius, and height all reference; in
-        the center of one of the bases of the cylinder
+        coordinate to which the normal, radius, and height all reference
     normal : array_like
         the normal vector defining the direction of lengthwise part of the 
         cylinder
     radius : float
         the radius of the cylinder
     height : float
-        the height of the lengthwise part of the cylinder
+        the distance from the midplane of the cylinder to the top and 
+        bottom planes
     fields : array of fields, optional
         any fields to be pre-loaded in the cylinder object
     ds: Dataset, optional
@@ -543,14 +543,15 @@
     >>> disk = ds.disk(c, [1,0,0], (1, 'kpc'), (10, 'kpc'))
     """
     _type_name = "disk"
-    _con_args = ('center', '_norm_vec', '_radius', '_height')
+    _con_args = ('center', '_norm_vec', 'radius', 'height')
     def __init__(self, center, normal, radius, height, fields=None,
                  ds=None, **kwargs):
         YTSelectionContainer3D.__init__(self, center, fields, ds, **kwargs)
         self._norm_vec = np.array(normal)/np.sqrt(np.dot(normal,normal))
         self.set_field_parameter("normal", self._norm_vec)
-        self._height = fix_length(height, self.ds)
-        self._radius = fix_length(radius, self.ds)
+        self.set_field_parameter("center", self.center)
+        self.height = fix_length(height, self.ds)
+        self.radius = fix_length(radius, self.ds)
         self._d = -1.0 * np.dot(self._norm_vec, self.center)
 
 class YTRegionBase(YTSelectionContainer3D):
@@ -575,7 +576,7 @@
     _con_args = ('center', 'left_edge', 'right_edge')
     def __init__(self, center, left_edge, right_edge, fields = None,
                  ds = None, **kwargs):
-        YTSelectionContainer3D.__init__(self, center, fields, ds, **kwargs)
+        YTSelectionContainer3D.__init__(self, center, ds, **kwargs)
         if not isinstance(left_edge, YTArray):
             self.left_edge = self.ds.arr(left_edge, 'code_length')
         else:
@@ -615,7 +616,7 @@
     >>> import yt
     >>> ds = yt.load("RedshiftOutput0005")
     >>> c = [0.5,0.5,0.5]
-    >>> sphere = ds.sphere(c,1.*ds['kpc'])
+    >>> sphere = ds.sphere(c, (1., "kpc"))
     """
     _type_name = "sphere"
     _con_args = ('center', 'radius')
@@ -627,6 +628,7 @@
             raise YTSphereTooSmall(ds, radius.in_units("code_length"),
                                    self.index.get_smallest_dx().in_units("code_length"))
         self.set_field_parameter('radius',radius)
+        self.set_field_parameter("center", self.center)
         self.radius = radius
 
 class YTEllipsoidBase(YTSelectionContainer3D):

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -572,7 +572,7 @@
         return out
 
     # Now all the object related stuff
-    def all_data(self, find_max=False):
+    def all_data(self, find_max=False, **kwargs):
         """
         all_data is a wrapper to the Region object for creating a region
         which covers the entire simulation domain.
@@ -580,7 +580,7 @@
         if find_max: c = self.find_max("density")[1]
         else: c = (self.domain_right_edge + self.domain_left_edge)/2.0
         return self.region(c,
-            self.domain_left_edge, self.domain_right_edge)
+            self.domain_left_edge, self.domain_right_edge, **kwargs)
 
     def box(self, left_edge, right_edge, **kwargs):
         """

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/fields/field_detector.py
--- a/yt/fields/field_detector.py
+++ b/yt/fields/field_detector.py
@@ -148,6 +148,10 @@
             self[item] = self._read_data(item)
         return self[item]
 
+    def _debug(self):
+        # We allow this to pass through.
+        return
+
     def deposit(self, *args, **kwargs):
         return np.random.random((self.nd, self.nd, self.nd))
 

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/fields/field_functions.py
--- a/yt/fields/field_functions.py
+++ b/yt/fields/field_functions.py
@@ -15,6 +15,9 @@
 
 import numpy as np
 
+from yt.utilities.lib.geometry_utils import \
+    obtain_rvec
+
 def get_radius(data, field_prefix):
     center = data.get_field_parameter("center").in_units("cm")
     DW = (data.ds.domain_right_edge - data.ds.domain_left_edge).in_units("cm")
@@ -43,3 +46,17 @@
     # Alias it, just for clarity.
     radius = radius2
     return radius
+
+def get_periodic_rvec(data):
+    coords = obtain_rvec(data)
+    if sum(data.ds.periodicity) == 0: return coords
+    le = data.ds.domain_left_edge.in_units("code_length").d
+    dw = data.ds.domain_width.in_units("code_length").d
+    for i in range(coords.shape[0]):
+        if not data.ds.periodicity[i]: continue
+        coords[i, ...] -= le[i]
+        coords[i, ...] = np.min([np.abs(np.mod(coords[i, ...],  dw[i])),
+                                 np.abs(np.mod(coords[i, ...], -dw[i]))],
+                                 axis=0)
+        coords[i, ...] += le[i]
+    return coords

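get_periodic_rvec folds each displacement component onto its nearest periodic image by comparing the positive and negative wrapped values. The core arithmetic on a single axis (assuming a domain left edge at 0 and width dw):

    import numpy as np

    dw = 1.0                          # domain width along one periodic axis
    dx = np.array([0.9, -0.9, 0.4])   # raw offsets from the center

    # Keep whichever wrapped image has the smaller magnitude, mirroring the
    # paired np.mod(..., dw) / np.mod(..., -dw) calls above.
    folded = np.min([np.abs(np.mod(dx, dw)),
                     np.abs(np.mod(dx, -dw))], axis=0)
    print(folded)  # [ 0.1  0.1  0.4]
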
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/fields/geometric_fields.py
--- a/yt/fields/geometric_fields.py
+++ b/yt/fields/geometric_fields.py
@@ -22,6 +22,7 @@
     ValidateSpatial
 
 from .field_functions import \
+     get_periodic_rvec, \
      get_radius
 
 from .field_plugin_registry import \
@@ -39,9 +40,6 @@
     get_sph_theta, get_sph_phi, \
     periodic_dist, euclidean_dist
 
-from yt.utilities.lib.geometry_utils import \
-    obtain_rvec
-
 @register_field_plugin
 def setup_geometric_fields(registry, ftype = "gas", slice_info = None):
 
@@ -92,12 +90,8 @@
 
     ### spherical coordinates: r (radius)
     def _spherical_r(field, data):
-        center = data.get_field_parameter("center")
-        coords = data.ds.arr(obtain_rvec(data), "code_length")
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
-        return get_sph_r(coords).in_cgs()
+        coords = get_periodic_rvec(data)
+        return data.ds.arr(get_sph_r(coords), "code_length").in_cgs()
 
     registry.add_field(("index", "spherical_r"),
              function=_spherical_r,
@@ -106,27 +100,19 @@
 
     ### spherical coordinates: theta (angle with respect to normal)
     def _spherical_theta(field, data):
-        center = data.get_field_parameter("center")
         normal = data.get_field_parameter("normal")
-        coords = obtain_rvec(data)
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
+        coords = get_periodic_rvec(data)
         return get_sph_theta(coords, normal)
 
     registry.add_field(("index", "spherical_theta"),
              function=_spherical_theta,
              validators=[ValidateParameter("center"),
-             ValidateParameter("normal")])
+                         ValidateParameter("normal")])
 
     ### spherical coordinates: phi (angle in the plane perpendicular to the normal)
     def _spherical_phi(field, data):
-        center = data.get_field_parameter("center")
         normal = data.get_field_parameter("normal")
-        coords = obtain_rvec(data)
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
+        coords = get_periodic_rvec(data)
         return get_sph_phi(coords, normal)
 
     registry.add_field(("index", "spherical_phi"),
@@ -136,29 +122,21 @@
 
     ### cylindrical coordinates: R (radius in the cylinder's plane)
     def _cylindrical_r(field, data):
-        center = data.get_field_parameter("center")
         normal = data.get_field_parameter("normal")
-        coords = obtain_rvec(data)
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
+        coords = get_periodic_rvec(data)
         return data.ds.arr(get_cyl_r(coords, normal), "code_length").in_cgs()
 
     registry.add_field(("index", "cylindrical_r"),
              function=_cylindrical_r,
              validators=[ValidateParameter("center"),
-                        ValidateParameter("normal")],
+                         ValidateParameter("normal")],
              units="cm")
 
     ### cylindrical coordinates: z (height above the cylinder's plane)
     def _cylindrical_z(field, data):
-        center = data.get_field_parameter("center")
         normal = data.get_field_parameter("normal")
-        coords = data.ds.arr(obtain_rvec(data), "code_length")
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
-        return get_cyl_z(coords, normal).in_cgs()
+        coords = get_periodic_rvec(data)
+        return data.ds.arr(get_cyl_z(coords, normal), "code_length").in_cgs()
 
     registry.add_field(("index", "cylindrical_z"),
              function=_cylindrical_z,
@@ -168,12 +146,8 @@
 
     ### cylindrical coordinates: theta (angle in the cylinder's plane)
     def _cylindrical_theta(field, data):
-        center = data.get_field_parameter("center")
         normal = data.get_field_parameter("normal")
-        coords = obtain_rvec(data)
-        coords[0,...] -= center[0]
-        coords[1,...] -= center[1]
-        coords[2,...] -= center[2]
+        coords = get_periodic_rvec(data)
         return get_cyl_theta(coords, normal)
 
     registry.add_field(("index", "cylindrical_theta"),

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/fields/vector_operations.py
--- a/yt/fields/vector_operations.py
+++ b/yt/fields/vector_operations.py
@@ -116,7 +116,11 @@
                                 "bulk_%s" % basename)
         theta = data['index', 'spherical_theta']
         phi   = data['index', 'spherical_phi']
-        return get_sph_r_component(vectors, theta, phi, normal)
+        rv = get_sph_r_component(vectors, theta, phi, normal)
+        # Now, anywhere that radius is in fact zero, we want to zero out our
+        # return values.
+        rv[np.isnan(theta)] = 0.0
+        return rv
     def _radial_absolute(field, data):
         return np.abs(data[ftype, "radial_%s" % basename])
 

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py
+++ b/yt/frontends/athena/data_structures.py
@@ -107,7 +107,7 @@
         self.directory = os.path.dirname(self.dataset.filename)
         self.dataset_type = dataset_type
         # for now, the index file is the dataset!
-        self.index_filename = self.dataset.filename
+        self.index_filename = os.path.join(os.getcwd(), self.dataset.filename)
         #self.directory = os.path.dirname(self.index_filename)
         self._fhandle = file(self.index_filename,'rb')
         GridIndex.__init__(self, ds, dataset_type)
@@ -366,7 +366,7 @@
         # Unfortunately we now have to mandate that the index gets 
         # instantiated so that we can make sure we have the correct left 
         # and right domain edges.
-        self.h
+        self.index
 
     def _set_code_unit_attributes(self):
         """
@@ -458,14 +458,13 @@
             self.hubble_constant = self.cosmological_simulation = 0.0
         self.parameters['Time'] = self.current_time # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
-        if self.specified_parameters.has_key("gamma") :
+        if self.specified_parameters.has_key("gamma"):
             self.parameters["Gamma"] = self.specified_parameters["gamma"]
-        else :
+        else:
             self.parameters["Gamma"] = 5./3. 
         self.geometry = self.specified_parameters.get("geometry", "cartesian")
         self._handle.close()
 
-
     @classmethod
     def _is_valid(self, *args, **kwargs):
         try:

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/athena/fields.py
--- a/yt/frontends/athena/fields.py
+++ b/yt/frontends/athena/fields.py
@@ -25,6 +25,11 @@
 erg_units = "code_mass * (code_length/code_time)**2"
 rho_units = "code_mass / code_length**3"
 
+def velocity_field(comp):
+    def _velocity(field, data):
+        return data["athena", "momentum_%s" % comp]/data["athena","density"]
+    return _velocity
+
 class AthenaFieldInfo(FieldInfoContainer):
     known_other_fields = (
         ("density", ("code_mass/code_length**3", ["density"], None)),
@@ -41,19 +46,17 @@
     def setup_fluid_fields(self):
         # Add velocity fields
         for comp in "xyz":
-            vel_field = ("athena", "velocity_%s" % (comp))
-            mom_field = ("athena", "momentum_%s" % (comp))
+            vel_field = ("athena", "velocity_%s" % comp)
+            mom_field = ("athena", "momentum_%s" % comp)
             if vel_field in self.field_list:
                 self.add_output_field(vel_field, units="code_length/code_time")
-                self.alias(("gas","velocity_%s" % (comp)), vel_field,
+                self.alias(("gas","velocity_%s" % comp), vel_field,
                            units="cm/s")
             elif mom_field in self.field_list:
                 self.add_output_field(mom_field,
-                                      units="code_mass*code_length/code_time")
-                f = lambda data: data["athena","momentum_%s" % (comp)] / \
-                                 data["athena","density"]
-                self.add_field(("gas","velocity_%s" % (comp)),
-                               function=f, units = "cm/s")
+                                      units="code_mass/code_time/code_length**2")
+                self.add_field(("gas","velocity_%s" % comp),
+                               function=velocity_field(comp), units = "cm/s")
         # Add pressure, energy, and temperature fields
         def ekin1(data):
             return 0.5*(data["athena","momentum_x"]**2 +
@@ -96,6 +99,8 @@
                            function=_total_energy,
                            units="erg/g")
         elif ("athena","total_energy") in self.field_list:
+            self.add_output_field(("athena","total_energy"),
+                                  units=pres_units)
             def _pressure(field, data):
                 return eint_from_etot(data)*(data.ds.gamma-1.0)
             self.add_field(("gas","pressure"), function=_pressure,

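The velocity_field factory above fixes a classic late-binding bug: a lambda defined inside the loop closes over the loop variable itself, so every momentum-derived velocity field would have read the component of the final loop iteration. A minimal illustration:

    # Late binding: all three callables see comp == "z" after the loop.
    bad = [lambda: "momentum_%s" % comp for comp in "xyz"]
    print([f() for f in bad])    # ['momentum_z', 'momentum_z', 'momentum_z']

    # Factory function: each inner closure captures its own comp.
    def velocity_field(comp):
        def _velocity():
            return "momentum_%s" % comp
        return _velocity

    good = [velocity_field(comp) for comp in "xyz"]
    print([f() for f in good])   # ['momentum_x', 'momentum_y', 'momentum_z']
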
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/athena/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/athena/tests/test_outputs.py
@@ -0,0 +1,59 @@
+"""
+Athena frontend tests
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    requires_ds, \
+    small_patch_amr, \
+    big_patch_amr, \
+    data_dir_load
+from yt.frontends.athena.api import AthenaDataset
+
+_fields_cloud = ("scalar[0]", "density", "total_energy")
+
+cloud = "ShockCloud/id0/Cloud.0050.vtk"
+@requires_ds(cloud)
+def test_cloud():
+    ds = data_dir_load(cloud)
+    yield assert_equal, str(ds), "Cloud.0050"
+    for test in small_patch_amr(cloud, _fields_cloud):
+        test_cloud.__name__ = test.description
+        yield test
+
+_fields_blast = ("temperature", "density", "velocity_magnitude")
+
+blast = "MHDBlast/id0/Blast.0100.vtk"
+@requires_ds(blast)
+def test_blast():
+    ds = data_dir_load(blast)
+    yield assert_equal, str(ds), "Blast.0100"
+    for test in small_patch_amr(blast, _fields_blast):
+        test_blast.__name__ = test.description
+        yield test
+
+parameters_stripping = {"time_unit":3.086e14,
+                        "length_unit":8.0236e22,
+                        "mass_unit":9.999e-30*8.0236e22**3}
+
+_fields_stripping = ("temperature", "density", "specific_scalar[0]")
+
+stripping = "RamPressureStripping/id0/rps.0062.vtk"
+@requires_ds(stripping, big_data=True)
+def test_stripping():
+    ds = data_dir_load(stripping, kwargs={"parameters":parameters_stripping})
+    yield assert_equal, str(ds), "rps.0062"
+    for test in small_patch_amr(stripping, _fields_stripping):
+        test_stripping.__name__ = test.description
+        yield test

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/chombo/io.py
--- a/yt/frontends/chombo/io.py
+++ b/yt/frontends/chombo/io.py
@@ -83,16 +83,8 @@
             if not (len(chunks) == len(chunks[0].objs) == 1):
                 raise RuntimeError
             grid = chunks[0].objs[0]
-            lstring = 'level_%i' % grid.Level
-            lev = self._handle[lstring]
-            grid_offset = lev[self._offset_string][grid._level_id]
-            boxsize = grid.ActiveDimensions.prod()
             for ftype, fname in fields:
-                start = grid_offset+self.field_dict[fname]*boxsize
-                stop = start + boxsize
-                data = lev[self._data_string][start:stop]
-                rv[ftype, fname] = data.reshape(grid.ActiveDimensions,
-                                        order='F')
+                rv[ftype, fname] = self._read_data(grid, fname)
             return rv
         if size is None:
             size = sum((g.count(selector) for chunk in chunks
@@ -108,16 +100,10 @@
         ind = 0
         for chunk in chunks:
             for g in chunk.objs:
-                lstring = 'level_%i' % g.Level
-                lev = self._handle[lstring]
-                grid_offset = lev[self._offset_string][g._level_id]
-                boxsize = g.ActiveDimensions.prod()
                 nd = 0
                 for field in fields:
-                    start = grid_offset+self.field_dict[fname]*boxsize
-                    stop = start + boxsize
-                    data = lev[self._data_string][start:stop]
-                    data = data.reshape(g.ActiveDimensions, order='F')
+                    ftype, fname = field
+                    data = self._read_data(g, fname)
                     nd = g.select(selector, data, rv[field], ind) # caches
                 ind += nd
         return rv

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -286,7 +286,7 @@
             active_particles = True
             nap = dict((ap_type, []) for ap_type in 
                 params["Physics"]["ActiveParticles"]["ActiveParticlesEnabled"])
-        elif version > 2.0:
+        elif version == 2.2:
             active_particles = True
             nap = {}
             for type in self.parameters.get("AppendActiveParticleType", []):

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -344,21 +344,26 @@
                         for ptype in self._ptypes):
                 continue
             pos += 4
+            any_ptypes = False
             for ptype in self._ptypes:
                 if field == "Mass" and ptype not in self.var_mass:
                     continue
                 if (ptype, field) not in field_list:
                     continue
                 offsets[(ptype, field)] = pos
+                any_ptypes = True
                 if field in self._vector_fields:
                     pos += 3 * pcount[ptype] * fs
                 else:
                     pos += pcount[ptype] * fs
             pos += 4
+            if not any_ptypes: pos -= 8
         if file_size is not None:
             if file_size != pos:
                 mylog.warning("Your Gadget-2 file may have extra " +
-                              "columns or different precision!")
+                              "columns or different precision!" +
+                              " (%s file vs %s computed)",
+                              file_size, pos)
         return offsets
 
     def _identify_fields(self, domain):

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -785,9 +785,9 @@
         for i in range(3):
             self.norm_vec[i] = dobj._norm_vec[i]
             self.center[i] = _ensure_code(dobj.center[i])
-        self.radius = _ensure_code(dobj._radius)
+        self.radius = _ensure_code(dobj.radius)
         self.radius2 = self.radius * self.radius
-        self.height = _ensure_code(dobj._height)
+        self.height = _ensure_code(dobj.height)
 
     @cython.boundscheck(False)
     @cython.wraparound(False)
@@ -799,12 +799,17 @@
     @cython.wraparound(False)
     @cython.cdivision(True)
     cdef int select_point(self, np.float64_t pos[3]) nogil:
-        cdef np.float64_t h, d, r2, temp
-        cdef int i
+        cdef np.float64_t h, d, r2, temp, spos
+        cdef int i, j, k
         h = d = 0
-        for i in range(3):
-            temp = self.difference(pos[i], self.center[i], i)
-            h += temp * self.norm_vec[i]
+        for ax in range(3):
+            temp = 1e30
+            for i in range(3):
+                if self.periodicity[ax] == 0 and i != 1: continue
+                spos = pos[ax] + (i-1)*self.domain_width[ax]
+                if fabs(spos - self.center[ax]) < fabs(temp):
+                    temp = spos - self.center[ax]
+            h += temp * self.norm_vec[ax]
             d += temp*temp
         r2 = (d - h*h)
         if fabs(h) <= self.height and r2 <= self.radius2: return 1
@@ -831,6 +836,8 @@
     @cython.cdivision(True)
     cdef int select_bbox(self, np.float64_t left_edge[3],
                                np.float64_t right_edge[3]) nogil:
+        # Until we can get our OBB/OBB intersection correct, disable this.
+        return 1
         cdef np.float64_t *arr[2]
         cdef np.float64_t pos[3], H, D, R2, temp
         cdef int i, j, k, n

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/utilities/answer_testing/framework.py
--- a/yt/utilities/answer_testing/framework.py
+++ b/yt/utilities/answer_testing/framework.py
@@ -758,7 +758,7 @@
 
 def big_patch_amr(ds_fn, fields, input_center="max", input_weight="density"):
     if not can_run_ds(ds_fn): return
-    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    dso = [ None, ("sphere", (input_center, (0.1, 'unitary')))]
     yield GridHierarchyTest(ds_fn)
     yield ParentageRelationshipsTest(ds_fn)
     for field in fields:

diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/utilities/fits_image.py
--- a/yt/utilities/fits_image.py
+++ b/yt/utilities/fits_image.py
@@ -252,7 +252,7 @@
 
 axis_wcs = [[1,2],[0,2],[0,1]]
 
-def construct_image(data_source):
+def construct_image(data_source, center=None):
     ds = data_source.ds
     axis = data_source.axis
     if hasattr(ds, "wcs"):
@@ -266,11 +266,14 @@
     else:
         # This is some other kind of dataset
         unit = ds.get_smallest_appropriate_unit(ds.domain_width.max())
+        if center is None:
+            crval = [0.0,0.0]
+        else:
+            crval = [(ds.domain_center-center)[idx].in_units(unit) for idx in axis_wcs[axis]]
         dx = ds.index.get_smallest_dx()
         nx, ny = (ds.domain_width[axis_wcs[axis]]/dx).ndarray_view().astype("int")
         crpix = [0.5*(nx+1), 0.5*(ny+1)]
         cdelt = [dx.in_units(unit)]*2
-        crval = [ds.domain_center[idx].in_units(unit) for idx in axis_wcs[axis]]
         cunit = [unit]*2
         ctype = ["LINEAR"]*2
     frb = data_source.to_frb((1.0,"unitary"), (nx,ny))
@@ -295,7 +298,7 @@
     fields : string or list of strings
         The fields to slice
     center : A sequence of floats, a string, or a tuple.
-         The coordinate of the center of the image. If set to 'c', 'center' or
+         The coordinate of the origin of the image. If set to 'c', 'center' or
          left blank, the plot is centered on the middle of the domain. If set to
          'max' or 'm', the center will be located at the maximum of the
          ('gas', 'density') field. Units can be specified by passing in center
@@ -308,7 +311,7 @@
         axis = fix_axis(axis, ds)
         center = get_sanitized_center(center, ds)
         slc = ds.slice(axis, center[axis], **kwargs)
-        w, frb = construct_image(slc)
+        w, frb = construct_image(slc, center=center)
         super(FITSSlice, self).__init__(frb, fields=fields, wcs=w)
         for i, field in enumerate(fields):
             self[i].header["bunit"] = str(frb[field].units)
@@ -327,12 +330,21 @@
         The fields to project
     weight_field : string
         The field used to weight the projection.
+    center : A sequence of floats, a string, or a tuple.
+        The coordinate of the origin of the image. If set to 'c', 'center' or
+        left blank, the plot is centered on the middle of the domain. If set to
+        'max' or 'm', the center will be located at the maximum of the
+        ('gas', 'density') field. Units can be specified by passing in center
+        as a tuple containing a coordinate and string unit name or by passing
+        in a YTArray.  If a list or unitless array is supplied, code units are
+        assumed.
     """
-    def __init__(self, ds, axis, fields, weight_field=None, **kwargs):
+    def __init__(self, ds, axis, fields, center="c", weight_field=None, **kwargs):
         fields = ensure_list(fields)
         axis = fix_axis(axis, ds)
+        center = get_sanitized_center(center, ds)
         prj = ds.proj(fields[0], axis, weight_field=weight_field, **kwargs)
-        w, frb = construct_image(prj)
+        w, frb = construct_image(prj, center=center)
         super(FITSProjection, self).__init__(frb, fields=fields, wcs=w)
         for i, field in enumerate(fields):
             self[i].header["bunit"] = str(frb[field].units)

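On the fits_image.py changes: for datasets without their own WCS,
construct_image now pins the FITS coordinate system to the requested center
rather than always to the domain center. The reference pixel (crpix) stays at
the image midpoint; the reference value (crval) becomes the offset of the
domain center from the chosen center, or zero when no center is passed. A
sketch of the LINEAR WCS being assembled, assuming astropy is installed
(linear_wcs is a hypothetical name):

    from astropy.wcs import WCS  # assumes astropy is available

    def linear_wcs(nx, ny, dx, crval, unit="kpc"):
        """Assemble the kind of LINEAR WCS construct_image builds (sketch)."""
        w = WCS(naxis=2)
        w.wcs.crpix = [0.5 * (nx + 1), 0.5 * (ny + 1)]  # reference pixel at image center
        w.wcs.cdelt = [dx, dx]                          # cell size per pixel, in `unit`
        w.wcs.crval = crval                             # world coordinate at crpix
        w.wcs.cunit = [unit, unit]
        w.wcs.ctype = ["LINEAR", "LINEAR"]
        return w

With center=None the image midpoint maps to (0, 0); with an explicit center,
world coordinates read as offsets from that point.
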
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/utilities/on_demand_imports.py
--- a/yt/utilities/on_demand_imports.py
+++ b/yt/utilities/on_demand_imports.py
@@ -10,7 +10,20 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
+class NotAModule(object):
+    """
+    A placeholder that raises an informative ImportError if someone tries to
+    use an on-demand import without having the requisite package installed.
+    """
+    def __init__(self, pkg_name):
+        self.pkg_name = pkg_name
+
+    def __getattr__(self, item):
+        raise ImportError("This functionality requires the %s package to be installed."
+                          % self.pkg_name)
+
 class astropy_imports:
+    _name = "astropy"
     _pyfits = None
     @property
     def pyfits(self):
@@ -19,7 +32,7 @@
                 import astropy.io.fits as pyfits
                 self.log
             except ImportError:
-                pyfits = None
+                pyfits = NotAModule(self._name)
             self._pyfits = pyfits
         return self._pyfits
 
@@ -31,7 +44,7 @@
                 import astropy.wcs as pywcs
                 self.log
             except ImportError:
-                pywcs = None
+                pywcs = NotAModule(self._name)
             self._pywcs = pywcs
         return self._pywcs
 
@@ -44,7 +57,7 @@
                 if log.exception_logging_enabled():
                     log.disable_exception_logging()
             except ImportError:
-                log = None
+                log = NotAModule(self._name)
             self._log = log
         return self._log
 
@@ -55,7 +68,7 @@
             try:
                 from astropy import units
             except ImportError:
-                units = None
+                units = NotAModule(self._name)
             self._units = units
         return self._units
 
@@ -67,8 +80,56 @@
                 import astropy.convolution as conv
                 self.log
             except ImportError:
-                conv = None
+                conv = NotAModule(self._name)
             self._conv = conv
         return self._conv
 
-_astropy = astropy_imports()
\ No newline at end of file
+_astropy = astropy_imports()
+
+class scipy_imports:
+    _name = "scipy"
+    _integrate = None
+    @property
+    def integrate(self):
+        if self._integrate is None:
+            try:
+                import scipy.integrate as integrate
+            except ImportError:
+                integrate = NotAModule(self._name)
+            self._integrate = integrate
+        return self._integrate
+
+    _stats = None
+    @property
+    def stats(self):
+        if self._stats is None:
+            try:
+                import scipy.stats as stats
+            except ImportError:
+                stats = NotAModule(self._name)
+            self._stats = stats
+        return self._stats
+
+    _optimize = None
+    @property
+    def optimize(self):
+        if self._optimize is None:
+            try:
+                import scipy.optimize as optimize
+            except ImportError:
+                optimize = NotAModule(self._name)
+            self._optimize = optimize
+        return self._optimize
+
+    _interpolate = None
+    @property
+    def interpolate(self):
+        if self._interpolate is None:
+            try:
+                import scipy.interpolate as interpolate
+            except ImportError:
+                interpolate = NotAModule(self._name)
+            self._interpolate = interpolate
+        return self._interpolate
+
+_scipy = scipy_imports()
\ No newline at end of file

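The NotAModule addition makes optional dependencies fail late and loudly:
instead of leaving the attribute as None (which eventually dies with an opaque
AttributeError or TypeError), a missing package is replaced by a placeholder
whose every attribute access raises a descriptive ImportError. The pattern in
isolation, as a sketch:

    class NotAModule(object):
        """Stand-in for a missing optional package; attribute access raises."""
        def __init__(self, pkg_name):
            self.pkg_name = pkg_name

        def __getattr__(self, item):
            raise ImportError("This functionality requires the %s package "
                              "to be installed." % self.pkg_name)

    try:
        import scipy.integrate as integrate
    except ImportError:
        integrate = NotAModule("scipy")

    # With scipy absent, the failure surfaces only at the point of use:
    #   integrate.quad(lambda x: x * x, 0.0, 1.0)
    #   ImportError: This functionality requires the scipy package to be installed.

Because __getattr__ is only consulted after normal attribute lookup fails,
storing pkg_name on the instance is safe and no recursion occurs.
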
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/visualization/volume_rendering/transfer_function_helper.py
--- a/yt/visualization/volume_rendering/transfer_function_helper.py
+++ b/yt/visualization/volume_rendering/transfer_function_helper.py
@@ -154,7 +154,9 @@
             xmi, xma = np.log10(self.bounds[0]), np.log10(self.bounds[1])
         else:
             xfunc = np.linspace
-            xmi, xma = self.bounds
+            # Need to strip units off of the bounds to avoid a recursion error
+            # in matplotlib 1.3.1
+            xmi, xma = [np.float64(b) for b in self.bounds]
 
         x = xfunc(xmi, xma, tf.nbins)
         y = tf.funcs[3].y
@@ -166,7 +168,7 @@
         canvas = FigureCanvasAgg(fig)
         ax = fig.add_axes([0.2, 0.2, 0.75, 0.75])
         ax.bar(x, tf.funcs[3].y, w, edgecolor=[0.0, 0.0, 0.0, 0.0],
-               log=True, color=colors, bottom=[0])
+               log=self.log, color=colors, bottom=[0])
 
         if profile_field is not None:
             try:
@@ -177,10 +179,12 @@
             if profile_field not in prof.keys():
                 prof.add_fields([profile_field], fractional=False,
                                 weight=profile_weight)
-            ax.plot(prof[self.field], prof[profile_field]*tf.funcs[3].y.max() /
-                    prof[profile_field].max(), color='w', linewidth=3)
-            ax.plot(prof[self.field], prof[profile_field]*tf.funcs[3].y.max() /
-                    prof[profile_field].max(), color='k')
+            # Strip units, if any, for matplotlib 1.3.1
+            xplot = np.array(prof[self.field])
+            yplot = np.array(prof[profile_field]*tf.funcs[3].y.max() /
+                             prof[profile_field].max())
+            ax.plot(xplot, yplot, color='w', linewidth=3)
+            ax.plot(xplot, yplot, color='k')
 
         ax.set_xscale({True: 'log', False: 'linear'}[self.log])
         ax.set_xlim(x.min(), x.max())
@@ -200,7 +204,7 @@
 
     def setup_profile(self, profile_field=None, profile_weight=None):
         if profile_field is None:
-            profile_field = 'CellVolume'
+            profile_field = 'cell_volume'
         prof = BinnedProfile1D(self.ds.all_data(), 128, self.field,
                                self.bounds[0], self.bounds[1],
                                log_space=self.log,

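The common thread in this diff, and in the transfer_functions.py diff below,
is unit stripping: matplotlib 1.3.1 can recurse when handed yt's unit-carrying
arrays, so bounds and profile data are demoted to plain floats and ndarrays
before plotting. Roughly (assuming yt's YTArray, imported from
yt.units.yt_array on this branch):

    import numpy as np
    from yt.units.yt_array import YTArray  # path as of this branch

    bounds = YTArray([1.0e-30, 1.0e-24], "g/cm**3")

    # np.float64() strips units from scalars; np.array() does the same for
    # arrays, leaving values matplotlib 1.3.1 can digest without recursing.
    xmi, xma = [np.float64(b) for b in bounds]
    plain = np.array(bounds)
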
diff -r a58cf9b644f8bba90c9429dc7b005b9128527e27 -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 yt/visualization/volume_rendering/transfer_functions.py
--- a/yt/visualization/volume_rendering/transfer_functions.py
+++ b/yt/visualization/volume_rendering/transfer_functions.py
@@ -48,6 +48,8 @@
     def __init__(self, x_bounds, nbins=256):
         self.pass_through = 0
         self.nbins = nbins
+        # Strip units off of x_bounds, if any
+        x_bounds = [np.float64(xb) for xb in x_bounds]
         self.x_bounds = x_bounds
         self.x = np.linspace(x_bounds[0], x_bounds[1], nbins).astype('float64')
         self.y = np.zeros(nbins, dtype='float64')
@@ -353,6 +355,8 @@
     """
     def __init__(self, x_bounds, nbins=256, grey_opacity = False):
         MultiVariateTransferFunction.__init__(self)
+        # Strip units off of x_bounds, if any
+        x_bounds = [np.float64(xb) for xb in x_bounds]
         self.x_bounds = x_bounds
         self.nbins = nbins
         # This is all compatibility and convenience.
@@ -633,6 +637,7 @@
         >>> tf = ColorTransferFunction( (-10.0, -5.0) )
         >>> tf.sample_colormap(-7.0, 0.01, colormap='algae')
         """
+        v = np.float64(v)
         if col_bounds is None:
             rel = (v - self.x_bounds[0])/(self.x_bounds[1] - self.x_bounds[0])
         else:
@@ -680,7 +685,8 @@
         >>> tf.map_to_colormap(-6.0, -5.0, scale=10.0, colormap='algae',
         ...                    scale_func = linramp)
         """
-
+        mi = np.float64(mi)
+        ma = np.float64(ma)
         rel0 = int(self.nbins*(mi - self.x_bounds[0])/(self.x_bounds[1] -
                                                        self.x_bounds[0]))
         rel1 = int(self.nbins*(ma - self.x_bounds[0])/(self.x_bounds[1] -
@@ -800,6 +806,8 @@
         if n_fields > 3:
             raise NotImplementedError
         MultiVariateTransferFunction.__init__(self)
+        # Strip units off of x_bounds, if any
+        x_bounds = [np.float64(xb) for xb in x_bounds]
         self.x_bounds = x_bounds
         self.nbins = 2
         self.linear_mapping = TransferFunction(x_bounds, 2)


https://bitbucket.org/yt_analysis/yt/commits/c088c22cb968/
Changeset:   c088c22cb968
Branch:      yt
User:        ChrisMalone
Date:        2014-08-19 17:20:27
Summary:     fix reading geometry
Affected #:  1 file

diff -r a061af2dca0b55d2c4673be8bb3999e247a6afd0 -r c088c22cb9682f6a8cf629813dfd72bf0ba2c265 yt/frontends/boxlib/data_structures.py
--- a/yt/frontends/boxlib/data_structures.py
+++ b/yt/frontends/boxlib/data_structures.py
@@ -566,7 +566,9 @@
         # Skip timesteps per level
         header_file.readline()
         self._header_mesh_start = header_file.tell()
-        header_file.next()
+        # Skip the cell size information per level - we'll get this later
+        for i in range(self._max_level): header_file.next()
+        # Get the geometry
         next_line = header_file.next()
         if len(next_line.split()) == 1:
             coordinate_type = int(next_line)


https://bitbucket.org/yt_analysis/yt/commits/c2c505644b4c/
Changeset:   c2c505644b4c
Branch:      yt
User:        ChrisMalone
Date:        2014-08-19 17:38:58
Summary:     data_structures.py edited online with Bitbucket
Affected #:  1 file

diff -r c088c22cb9682f6a8cf629813dfd72bf0ba2c265 -r c2c505644b4cbcdfedeef6a0c15ed7801b151b83 yt/frontends/boxlib/data_structures.py
--- a/yt/frontends/boxlib/data_structures.py
+++ b/yt/frontends/boxlib/data_structures.py
@@ -567,9 +567,9 @@
         header_file.readline()
         self._header_mesh_start = header_file.tell()
         # Skip the cell size information per level - we'll get this later
-        for i in range(self._max_level): header_file.next()
+        for i in range(self._max_level+1): header_file.readline()
         # Get the geometry
-        next_line = header_file.next()
+        next_line = header_file.readline()
         if len(next_line.split()) == 1:
             coordinate_type = int(next_line)
         else:

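Taken together, the last two changesets settle the BoxLib header walk: one
cell-size line must be skipped per level, and since levels run from 0 through
max_level inclusive that is max_level + 1 lines; readline() also replaces
next(), likely because mixing iteration with tell()/readline() on Python 2
file objects is unreliable. A simplified sketch of the corrected logic (the
fallback value in the else branch is a placeholder, not the frontend's actual
handling):

    def read_coordinate_type(header_file, max_level):
        """Sketch of the corrected BoxLib header parsing."""
        # One cell-size line per level, for levels 0 .. max_level inclusive.
        for _ in range(max_level + 1):
            header_file.readline()
        # The geometry line: a lone integer encodes the coordinate type.
        next_line = header_file.readline()
        if len(next_line.split()) == 1:
            return int(next_line)
        return 0  # placeholder fallback; the real frontend handles this differently
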
Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this email because you have the commit notification service enabled
for this repository.


