[yt-svn] commit/yt: 287 new changesets

commits-noreply at bitbucket.org
Mon Jun 3 13:28:28 PDT 2013


287 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/23f418d40647/
Changeset:   23f418d40647
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-15 22:26:38
Summary:     First pass at adding particle type general identification.

This adds the ability to check all particle types when a bare particle field
is added to the field list.  This means a field like "Something" with
particle_type=True will be checked against every known particle type, and its
fieldinfo entry will be duplicated for each one.

Unfortunately, as of right now, get_dependencies does not correctly pass
requests through the __missing__ or __getitem__ calls.  That will be the next step.
Affected #:  1 file
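
The promotion described above amounts to duplicating one fieldinfo entry per
particle type.  A minimal sketch in plain Python, assuming a dict-like
field_info and a list of particle type names (both stand-ins for the yt
internals touched in the diff below):

    def promote_particle_fields(field_info, particle_types):
        # Stage the promoted entries first, then apply them in a single
        # update, so the dict is not resized while we iterate over it.
        staged = {}
        for field, finfo in field_info.items():
            if isinstance(field, tuple):
                continue  # already explicitly typed
            if not getattr(finfo, "particle_type", False):
                continue  # fluid field; checked as-is
            for pt in particle_types:
                staged[(pt, field)] = finfo  # e.g. ("io", "Something")
        field_info.update(staged)

The special "all" pseudo-type is then appended only if the field turns out to
be available for every particle type.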

diff -r a71dffe4bc813fdadc506ccad9efb632e23dc843 -r 23f418d4064701b72c43f52d4d8cc22a89294e0c yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -159,16 +159,43 @@
                 self.parameter_file.field_info[field] = known_fields[field]
 
     def _setup_derived_fields(self):
+        fi = self.parameter_file.field_info
         self.derived_field_list = []
-        for field in self.parameter_file.field_info:
+        # First we construct our list of fields to check
+        fields_to_check = []
+        fields_to_allcheck = []
+        for field in fi:
+            finfo = fi[field]
+            # Explicitly defined
+            if isinstance(field, tuple):
+                fields_to_check.append(field)
+                continue
+            # This one is implicitly defined for all particle or fluid types.
+            # So we check each.
+            if not finfo.particle_type:
+                fields_to_check.append(field)
+                continue
+            # We do a special case for 'all' later
+            new_fields = [(pt, field) for pt in
+                          self.parameter_file.particle_types]
+            fields_to_check += new_fields
+            fi.update( (new_field, fi[field]) for new_field in new_fields )
+            fields_to_allcheck.append(field)
+        for field in fields_to_check:
             try:
-                fd = self.parameter_file.field_info[field].get_dependencies(
-                            pf = self.parameter_file)
+                fd = fi[field].get_dependencies(pf = self.parameter_file)
                 self.parameter_file.field_dependencies[field] = fd
-            except:
+            except Exception as e:
                 continue
             available = np.all([f in self.field_list for f in fd.requested])
             if available: self.derived_field_list.append(field)
+        for base_field in fields_to_allcheck:
+            # Now we expand our field_info with the new fields
+            all_available = all(((pt, field) in self.derived_field_list
+                                 for pt in self.parameter_file.particle_types))
+            if all_available:
+                self.derived_field_list.append( ("all", field) )
+                fi["all", base_field] = fi[base_field]
         for field in self.field_list:
             if field not in self.derived_field_list:
                 self.derived_field_list.append(field)


https://bitbucket.org/yt_analysis/yt/commits/de15a182455b/
Changeset:   de15a182455b
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-15 22:43:04
Summary:     This will now correctly identify particle type fields as existing.
Affected #:  1 file

diff -r 23f418d4064701b72c43f52d4d8cc22a89294e0c -r de15a182455b8f633b5498d3610d94afaade1ccd yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -187,8 +187,14 @@
                 self.parameter_file.field_dependencies[field] = fd
             except Exception as e:
                 continue
-            available = np.all([f in self.field_list for f in fd.requested])
-            if available: self.derived_field_list.append(field)
+            missing = False
+            # This next bit checks that we can't somehow generate everything.
+            for f in fd.requested:
+                if f not in self.field_list and \
+                    (field[0], f) not in self.field_list:
+                    missing = True
+                    break
+            if not missing: self.derived_field_list.append(field)
         for base_field in fields_to_allcheck:
             # Now we expand our field_info with the new fields
             all_available = all(((pt, field) in self.derived_field_list


https://bitbucket.org/yt_analysis/yt/commits/0a386d47521d/
Changeset:   0a386d47521d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-15 23:12:28
Summary:     This changeset finishes particle field detection and broadcasting.

These relatively simple fixes should correct how field dependencies are
calculated and applied during IO.  The change in data_containers corrects
what I believe was a typo.  The change to static_output lets "unknown"
fields fall back to whatever type was requested last (this means that
requesting data["particle_position_x"] from within a particle-typed field
function will work), and the final fix updates field dependencies at
determination time to properly identify field types.
Affected #:  3 files
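
The static_output change is a small caching trick; a hedged sketch of the
idea (not yt's exact class):

    class FieldInfoCache:
        """Sketch of the "unknown"-ftype fallback in _get_field_info."""
        def __init__(self, field_info):
            self.field_info = field_info  # maps (ftype, fname) -> finfo
            self._last_freq = (None, None)
            self._last_finfo = None

        def get(self, ftype, fname):
            # An "unknown" type inherits the most recently resolved
            # type, so a bare access inside a particle-typed field
            # function resolves against that particle type.
            if ftype == "unknown" and self._last_freq[0] is not None:
                ftype = self._last_freq[0]
            field = (ftype, fname)
            if field == self._last_freq:
                return self._last_finfo
            self._last_freq = field
            self._last_finfo = self.field_info[field]
            return self._last_finfo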

diff -r de15a182455b8f633b5498d3610d94afaade1ccd -r 0a386d47521dea1964640bda426547557d49b6cd yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -453,7 +453,7 @@
     def _identify_dependencies(self, fields_to_get):
         inspected = 0
         fields_to_get = fields_to_get[:]
-        for ftype, field in itertools.cycle(fields_to_get):
+        for field in itertools.cycle(fields_to_get):
             if inspected >= len(fields_to_get): break
             inspected += 1
             if field not in self.pf.field_dependencies: continue

diff -r de15a182455b8f633b5498d3610d94afaade1ccd -r 0a386d47521dea1964640bda426547557d49b6cd yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -251,6 +251,8 @@
     _last_freq = (None, None)
     _last_finfo = None
     def _get_field_info(self, ftype, fname):
+        if ftype == "unknown" and self._last_freq[0] != None:
+            ftype = self._last_freq[0]
         field = (ftype, fname)
         if field == self._last_freq or fname == self._last_freq[1]:
             return self._last_finfo

diff -r de15a182455b8f633b5498d3610d94afaade1ccd -r 0a386d47521dea1964640bda426547557d49b6cd yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -184,17 +184,23 @@
         for field in fields_to_check:
             try:
                 fd = fi[field].get_dependencies(pf = self.parameter_file)
-                self.parameter_file.field_dependencies[field] = fd
             except Exception as e:
                 continue
             missing = False
             # This next bit checks that we can't somehow generate everything.
+            # We also manually update the 'requested' attribute
+            requested = []
             for f in fd.requested:
-                if f not in self.field_list and \
-                    (field[0], f) not in self.field_list:
+                if (field[0], f) in self.field_list:
+                    requested.append( (field[0], f) )
+                elif f in self.field_list:
+                    requested.append( f )
+                else:
                     missing = True
                     break
             if not missing: self.derived_field_list.append(field)
+            fd.requested = set(requested)
+            self.parameter_file.field_dependencies[field] = fd
         for base_field in fields_to_allcheck:
             # Now we expand our field_info with the new fields
             all_available = all(((pt, field) in self.derived_field_list


https://bitbucket.org/yt_analysis/yt/commits/0a402c7eebb9/
Changeset:   0a402c7eebb9
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-16 21:25:39
Summary:     Adding ("gas",field) to field_dependencies for gas fields.

Also fixes an __iter__ bug where field_info changed size during iteration
(see the illustration below).
Affected #:  1 file
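
The __iter__ bug is the standard Python pitfall of resizing a dict while
iterating over it; a self-contained illustration of the failure and of the
staged-update fix used in the diff below:

    fi = {"particle_mass": "finfo"}
    try:
        for field in fi:
            fi[("all", field)] = fi[field]  # resizes fi mid-iteration
    except RuntimeError as err:
        print(err)  # dictionary changed size during iteration

    fi = {"particle_mass": "finfo"}
    fields_to_add = [(("all", f), fi[f]) for f in fi]
    fi.update(fields_to_add)  # stage first, then update once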

diff -r 0a386d47521dea1964640bda426547557d49b6cd -r 0a402c7eebb9c27ee1c91c9f480c0701d63a365d yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -164,6 +164,7 @@
         # First we construct our list of fields to check
         fields_to_check = []
         fields_to_allcheck = []
+        fields_to_add = []
         for field in fi:
             finfo = fi[field]
             # Explicitly defined
@@ -179,8 +180,10 @@
             new_fields = [(pt, field) for pt in
                           self.parameter_file.particle_types]
             fields_to_check += new_fields
-            fi.update( (new_field, fi[field]) for new_field in new_fields )
+            fields_to_add.extend( (new_field, fi[field]) for
+                                   new_field in new_fields )
             fields_to_allcheck.append(field)
+        fi.update(fields_to_add)
         for field in fields_to_check:
             try:
                 fd = fi[field].get_dependencies(pf = self.parameter_file)
@@ -201,6 +204,9 @@
             if not missing: self.derived_field_list.append(field)
             fd.requested = set(requested)
             self.parameter_file.field_dependencies[field] = fd
+            if not fi[field].particle_type and not isinstance(field, tuple):
+                # Manually hardcode to 'gas'
+                self.parameter_file.field_dependencies["gas", field] = fd
         for base_field in fields_to_allcheck:
             # Now we expand our field_info with the new fields
             all_available = all(((pt, field) in self.derived_field_list


https://bitbucket.org/yt_analysis/yt/commits/190abd27358b/
Changeset:   190abd27358b
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-18 19:12:48
Summary:     Adding an additional test that "gas" is accessible for fluid fields.
Affected #:  1 file

diff -r 0a402c7eebb9c27ee1c91c9f480c0701d63a365d -r 190abd27358bdf8728519c9473a85a3a431cd3aa yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -69,6 +69,8 @@
         dd2.field_parameters.update(_sample_parameters)
         v1 = dd1[self.field_name]
         conv = field._convert_function(dd1) or 1.0
+        if not field.particle_type:
+            assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
             assert_equal(v1, conv*field._function(field, dd2))
         if not skip_grids:


https://bitbucket.org/yt_analysis/yt/commits/d7a076d0ff01/
Changeset:   d7a076d0ff01
Branch:      yt-3.0
User:        drudd
Date:        2013-03-15 00:52:32
Summary:     Updates SphereSelector to allow for periodicity and turns grid selection back on
Affected #:  2 files
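
The periodicity handling in both hunks below is the minimum-image convention:
any per-axis displacement larger than half the domain width is wrapped back
into the box.  A standalone sketch, assuming everything is in code units:

    import numpy as np

    def minimum_image(delta, width, periodic):
        """Wrap per-axis displacements into [-width/2, width/2]."""
        delta = np.array(delta, dtype="float64")
        for i in range(len(delta)):
            if not periodic[i]:
                continue
            if delta[i] > width[i] / 2.0:
                delta[i] -= width[i]
            elif delta[i] < -width[i] / 2.0:
                delta[i] += width[i]
        return delta

    # In a periodic unit box, x = 0.05 and x = 0.95 are only 0.1 apart:
    print(minimum_image([0.05 - 0.95], [1.0], [True]))  # -> 0.1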

diff -r a71dffe4bc813fdadc506ccad9efb632e23dc843 -r d7a076d0ff01055eaec8d3c7005e1070a0e4ac90 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -928,7 +928,6 @@
             raise YTSphereTooSmall(pf, radius, self.hierarchy.get_smallest_dx())
         self.set_field_parameter('radius',radius)
         self.radius = radius
-        self.DW = self.pf.domain_right_edge - self.pf.domain_left_edge
 
 class YTEllipsoidBase(YTSelectionContainer3D):
     """

diff -r a71dffe4bc813fdadc506ccad9efb632e23dc843 -r d7a076d0ff01055eaec8d3c7005e1070a0e4ac90 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -407,12 +407,22 @@
 cdef class SphereSelector(SelectorObject):
     cdef np.float64_t radius2
     cdef np.float64_t center[3]
+    cdef np.float64_t domain_left_edge[3]
+    cdef np.float64_t domain_right_edge[3]
+    cdef np.float64_t domain_width[3]
+    cdef bint periodicity[3]
 
     def __init__(self, dobj):
         for i in range(3):
             self.center[i] = dobj.center[i]
         self.radius2 = dobj.radius * dobj.radius
 
+        for i in range(3) :
+            self.domain_left_edge[i] = dobj.pf.domain_left_edge[i]
+            self.domain_right_edge[i] = dobj.pf.domain_right_edge[i]
+            self.domain_width[i] = self.domain_right_edge[i] - self.domain_left_edge[i]
+            self.periodicity[i] = dobj.pf.periodicity[i]
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -420,8 +430,7 @@
                                np.float64_t right_edge[3],
                                np.int32_t level) nogil:
         cdef np.float64_t box_center, relcenter, closest, dist, edge
-        return 1
-        cdef int id
+        cdef int i
         if (left_edge[0] <= self.center[0] <= right_edge[0] and
             left_edge[1] <= self.center[1] <= right_edge[1] and
             left_edge[2] <= self.center[2] <= right_edge[2]):
@@ -431,10 +440,15 @@
         for i in range(3):
             box_center = (right_edge[i] + left_edge[i])/2.0
             relcenter = self.center[i] - box_center
+            if self.periodicity[i]:
+                if relcenter > self.domain_width[i]/2.0: 
+                    relcenter -= self.domain_width[i] 
+                elif relcenter < -self.domain_width[i]/2.0: 
+                    relcenter += self.domain_width[i] 
             edge = right_edge[i] - left_edge[i]
             closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0)
             dist += closest * closest
-        if dist < self.radius2: return 1
+        if dist <= self.radius2: return 1
         return 0
 
     @cython.boundscheck(False)
@@ -444,10 +458,20 @@
                          int eterm[3]) nogil:
         cdef np.float64_t dist2, temp
         cdef int i
+        if (pos[0] - 0.5*dds[0] <= self.center[0] <= pos[0]+0.5*dds[0] and
+            pos[1] - 0.5*dds[1] <= self.center[1] <= pos[1]+0.5*dds[1] and
+            pos[2] - 0.5*dds[2] <= self.center[2] <= pos[2]+0.5*dds[2]):
+            return 1
         dist2 = 0
         for i in range(3):
-            temp = (pos[i] - self.center[i])
-            dist2 += temp * temp
+            temp = self.center[i] - pos[i]
+            if self.periodicity[i]:
+                if temp > self.domain_width[i]/2.0:
+                    temp -= self.domain_width[i]
+                elif temp < -self.domain_width[i]/2.0:
+                    temp += self.domain_width[i]
+            temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
+            dist2 += temp*temp
         if dist2 <= self.radius2: return 1
         return 0
 


https://bitbucket.org/yt_analysis/yt/commits/2f14933d1904/
Changeset:   2f14933d1904
Branch:      yt-3.0
User:        drudd
Date:        2013-03-15 21:08:41
Summary:     Re-enabled select_grids for SphereSelector and added periodicity check
Affected #:  2 files

diff -r d7a076d0ff01055eaec8d3c7005e1070a0e4ac90 -r 2f14933d1904ac50f512ae72f2200c5c2dc769ff yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -407,8 +407,6 @@
 cdef class SphereSelector(SelectorObject):
     cdef np.float64_t radius2
     cdef np.float64_t center[3]
-    cdef np.float64_t domain_left_edge[3]
-    cdef np.float64_t domain_right_edge[3]
     cdef np.float64_t domain_width[3]
     cdef bint periodicity[3]
 
@@ -418,11 +416,10 @@
         self.radius2 = dobj.radius * dobj.radius
 
         for i in range(3) :
-            self.domain_left_edge[i] = dobj.pf.domain_left_edge[i]
-            self.domain_right_edge[i] = dobj.pf.domain_right_edge[i]
-            self.domain_width[i] = self.domain_right_edge[i] - self.domain_left_edge[i]
+            self.domain_width[i] = dobj.pf.domain_right_edge[i] - \
+                                   dobj.pf.domain_left_edge[i]
             self.periodicity[i] = dobj.pf.periodicity[i]
-
+        
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)

diff -r d7a076d0ff01055eaec8d3c7005e1070a0e4ac90 -r 2f14933d1904ac50f512ae72f2200c5c2dc769ff yt/utilities/tests/test_selectors.py
--- /dev/null
+++ b/yt/utilities/tests/test_selectors.py
@@ -0,0 +1,31 @@
+from yt.testing import *
+from yt.utilities.math_utils import periodic_dist
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt","__withintesting"] = "True"
+
+def test_sphere_selector():
+    # generate fake data with a number of non-cubical grids
+    pf = fake_random_pf(64,nprocs=51)
+    assert(all(pf.periodicity))
+
+    # aligned tests
+    spheres = [ [0.0,0.0,0.0],
+                [0.5,0.5,0.5],
+                [1.0,1.0,1.0],
+                [0.25,0.75,0.25] ]
+
+    for center in spheres :
+        data = pf.h.sphere(center, 0.25)
+        data.get_data()
+        # WARNING: this value has not been externally verified
+        yield assert_equal, data.size, 19568
+
+        positions = np.array([data[ax] for ax in 'xyz'])
+        centers = np.tile( data.center, data.shape[0] ).reshape(data.shape[0],3).transpose()
+        dist = periodic_dist(positions, centers,
+                         pf.domain_right_edge-pf.domain_left_edge,
+                         pf.periodicity)
+        # WARNING: this value has not been externally verified
+        yield assert_almost_equal, dist.max(), 0.261806188752


https://bitbucket.org/yt_analysis/yt/commits/dba8e90381b5/
Changeset:   dba8e90381b5
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-18 20:28:15
Summary:     Merged in drudd/yt-3.0 (pull request #18)

SphereSelector work
Affected #:  3 files

diff -r 190abd27358bdf8728519c9473a85a3a431cd3aa -r dba8e90381b5be3b028790fc00201cca02fd936c yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -928,7 +928,6 @@
             raise YTSphereTooSmall(pf, radius, self.hierarchy.get_smallest_dx())
         self.set_field_parameter('radius',radius)
         self.radius = radius
-        self.DW = self.pf.domain_right_edge - self.pf.domain_left_edge
 
 class YTEllipsoidBase(YTSelectionContainer3D):
     """

diff -r 190abd27358bdf8728519c9473a85a3a431cd3aa -r dba8e90381b5be3b028790fc00201cca02fd936c yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -407,12 +407,19 @@
 cdef class SphereSelector(SelectorObject):
     cdef np.float64_t radius2
     cdef np.float64_t center[3]
+    cdef np.float64_t domain_width[3]
+    cdef bint periodicity[3]
 
     def __init__(self, dobj):
         for i in range(3):
             self.center[i] = dobj.center[i]
         self.radius2 = dobj.radius * dobj.radius
 
+        for i in range(3) :
+            self.domain_width[i] = dobj.pf.domain_right_edge[i] - \
+                                   dobj.pf.domain_left_edge[i]
+            self.periodicity[i] = dobj.pf.periodicity[i]
+        
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -420,8 +427,7 @@
                                np.float64_t right_edge[3],
                                np.int32_t level) nogil:
         cdef np.float64_t box_center, relcenter, closest, dist, edge
-        return 1
-        cdef int id
+        cdef int i
         if (left_edge[0] <= self.center[0] <= right_edge[0] and
             left_edge[1] <= self.center[1] <= right_edge[1] and
             left_edge[2] <= self.center[2] <= right_edge[2]):
@@ -431,10 +437,15 @@
         for i in range(3):
             box_center = (right_edge[i] + left_edge[i])/2.0
             relcenter = self.center[i] - box_center
+            if self.periodicity[i]:
+                if relcenter > self.domain_width[i]/2.0: 
+                    relcenter -= self.domain_width[i] 
+                elif relcenter < -self.domain_width[i]/2.0: 
+                    relcenter += self.domain_width[i] 
             edge = right_edge[i] - left_edge[i]
             closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0)
             dist += closest * closest
-        if dist < self.radius2: return 1
+        if dist <= self.radius2: return 1
         return 0
 
     @cython.boundscheck(False)
@@ -444,10 +455,20 @@
                          int eterm[3]) nogil:
         cdef np.float64_t dist2, temp
         cdef int i
+        if (pos[0] - 0.5*dds[0] <= self.center[0] <= pos[0]+0.5*dds[0] and
+            pos[1] - 0.5*dds[1] <= self.center[1] <= pos[1]+0.5*dds[1] and
+            pos[2] - 0.5*dds[2] <= self.center[2] <= pos[2]+0.5*dds[2]):
+            return 1
         dist2 = 0
         for i in range(3):
-            temp = (pos[i] - self.center[i])
-            dist2 += temp * temp
+            temp = self.center[i] - pos[i]
+            if self.periodicity[i]:
+                if temp > self.domain_width[i]/2.0:
+                    temp -= self.domain_width[i]
+                elif temp < -self.domain_width[i]/2.0:
+                    temp += self.domain_width[i]
+            temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
+            dist2 += temp*temp
         if dist2 <= self.radius2: return 1
         return 0
 

diff -r 190abd27358bdf8728519c9473a85a3a431cd3aa -r dba8e90381b5be3b028790fc00201cca02fd936c yt/utilities/tests/test_selectors.py
--- /dev/null
+++ b/yt/utilities/tests/test_selectors.py
@@ -0,0 +1,31 @@
+from yt.testing import *
+from yt.utilities.math_utils import periodic_dist
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt","__withintesting"] = "True"
+
+def test_sphere_selector():
+    # generate fake data with a number of non-cubical grids
+    pf = fake_random_pf(64,nprocs=51)
+    assert(all(pf.periodicity))
+
+    # aligned tests
+    spheres = [ [0.0,0.0,0.0],
+                [0.5,0.5,0.5],
+                [1.0,1.0,1.0],
+                [0.25,0.75,0.25] ]
+
+    for center in spheres :
+        data = pf.h.sphere(center, 0.25)
+        data.get_data()
+        # WARNING: this value has not been externally verified
+        yield assert_equal, data.size, 19568
+
+        positions = np.array([data[ax] for ax in 'xyz'])
+        centers = np.tile( data.center, data.shape[0] ).reshape(data.shape[0],3).transpose()
+        dist = periodic_dist(positions, centers,
+                         pf.domain_right_edge-pf.domain_left_edge,
+                         pf.periodicity)
+        # WARNING: this value has not been externally verified
+        yield assert_almost_equal, dist.max(), 0.261806188752


https://bitbucket.org/yt_analysis/yt/commits/6f02ff4f88f0/
Changeset:   6f02ff4f88f0
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 12:14:19
Summary:     Adding a _field_check call to handle non-tuple field names.  Fixes #530.
Affected #:  1 file

diff -r dba8e90381b5be3b028790fc00201cca02fd936c -r 6f02ff4f88f052c518cc7dc56623655784350403 yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -595,7 +595,7 @@
             fields = self.plots.keys()
         else:
             fields = [field]
-        for field in fields:
+        for field in self._field_check(fields):
             if log:
                 self._field_transform[field] = log_transform
             else:
@@ -603,6 +603,7 @@
 
     @invalidate_plot
     def set_transform(self, field, name):
+        field = self._field_check(field)
         if name not in field_transforms: 
             raise KeyError(name)
         self._field_transform[field] = field_transforms[name]
@@ -621,11 +622,11 @@
 
         """
 
-        if field is 'all':
+        if field == 'all':
             fields = self.plots.keys()
         else:
             fields = [field]
-        for field in fields:
+        for field in self._field_check(fields):
             self._colorbar_valid = False
             self._colormaps[field] = cmap_name
 
@@ -659,7 +660,7 @@
             fields = self.plots.keys()
         else:
             fields = [field]
-        for field in fields:
+        for field in self._field_check(fields):
             myzmin = zmin
             myzmax = zmax
             if zmin == 'min':
@@ -773,7 +774,7 @@
     def get_field_units(self, field, strip_mathml = True):
         ds = self._frb.data_source
         pf = self.pf
-        field = self.data_source._determine_fields(field)[0]
+        field = self._check_field(field)
         finfo = self.data_source.pf._get_field_info(*field)
         if ds._type_name in ("slice", "cutting"):
             units = finfo.get_units()
@@ -788,6 +789,12 @@
             units = units.replace(r"\rm{", "").replace("}","")
         return units
 
+    def _field_check(self, field):
+        field = self.data_source._determine_fields(field)
+        if isinstance(field, (list, tuple)):
+            return field
+        else:
+            return field[0]
 
 class PWViewerMPL(PWViewer):
     """Viewer using matplotlib as a backend via the WindowPlotMPL. 
@@ -1011,7 +1018,7 @@
         else:
             fields = [field]
 
-        for field in fields:
+        for field in self._field_check(fields):
             self._colorbar_valid = False
             self._colormaps[field] = cmap
             if isinstance(cmap, types.StringTypes):
@@ -1678,7 +1685,7 @@
 
     @invalidate_data
     def set_current_field(self, field):
-        field = self.data_source._determine_fields(field)[0]
+        field = self._check_field(field)
         self._current_field = field
         self._frb[field]
         finfo = self.data_source.pf._get_field_info(*field)


https://bitbucket.org/yt_analysis/yt/commits/e4852e262442/
Changeset:   e4852e262442
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 12:20:50
Summary:     This should be an "r" for reading particles.
Affected #:  1 file

diff -r 6f02ff4f88f052c518cc7dc56623655784350403 -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -415,7 +415,7 @@
             # attributes in a defined location.
             if last != g.filename:
                 if handle is not None: handle.close()
-                handle = h5py.File(g.filename)
+                handle = h5py.File(g.filename, "r")
             node = handle["/Grid%08i/Particles/" % g.id]
             for ptype in (str(p) for p in node):
                 if ptype not in _fields: continue


https://bitbucket.org/yt_analysis/yt/commits/b56fb7756976/
Changeset:   b56fb7756976
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 12:32:43
Summary:     Resetting cartesian_fields.py to *not* apply offsets or scaling.

Fixes #528.  Needs verification from other frontends.
Affected #:  3 files

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r b56fb77569764f6f5fd8dc1416b6b182170345fe yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -265,7 +265,8 @@
 
     def select_fwidth(self, dobj):
         # Recall domain_dimensions is the number of cells, not octs
-        base_dx = 1.0/self.domain.pf.domain_dimensions
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
         dds = (2**self.ires(dobj))
         for i in range(3):

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r b56fb77569764f6f5fd8dc1416b6b182170345fe yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -92,7 +92,8 @@
 
     def select_fwidth(self, dobj):
         # Recall domain_dimensions is the number of cells, not octs
-        base_dx = 1.0/self.domain.pf.domain_dimensions
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
         dds = (2**self.ires(dobj))
         for i in range(3):

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r b56fb77569764f6f5fd8dc1416b6b182170345fe yt/geometry/cartesian_fields.py
--- a/yt/geometry/cartesian_fields.py
+++ b/yt/geometry/cartesian_fields.py
@@ -41,26 +41,26 @@
 add_cart_field = CartesianFieldInfo.add_field
 
 def _dx(field, data):
-    return data.pf.domain_width[0] * data.fwidth[...,0]
+    return data.fwidth[...,0]
 add_cart_field('dx', function=_dx, display_field=False)
 
 def _dy(field, data):
-    return data.pf.domain_width[1] * data.fwidth[...,1]
+    return data.fwidth[...,1]
 add_cart_field('dy', function=_dy, display_field=False)
 
 def _dz(field, data):
-    return data.pf.domain_width[2] * data.fwidth[...,2]
+    return data.fwidth[...,2]
 add_cart_field('dz', function=_dz, display_field=False)
 
 def _coordX(field, data):
-    return data.pf.domain_left_edge[0] + data.fcoords[...,0]
+    return data.fcoords[...,0]
 add_cart_field('x', function=_coordX, display_field=False)
 
 def _coordY(field, data):
-    return data.pf.domain_left_edge[1] + data.fcoords[...,1]
+    return data.fcoords[...,1]
 add_cart_field('y', function=_coordY, display_field=False)
 
 def _coordZ(field, data):
-    return data.pf.domain_left_edge[2] + data.fcoords[...,2]
+    return data.fcoords[...,2]
 add_cart_field('z', function=_coordZ, display_field=False)
 


https://bitbucket.org/yt_analysis/yt/commits/0e5c5352ce0d/
Changeset:   0e5c5352ce0d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 19:54:05
Summary:     Go back to just_one for data['dx'].  Fixes #532.
Affected #:  2 files
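
The cast was failing because np.float64(...) only converts size-1 arrays to
scalars, while data['dx'] can hold one value per cell; just_one instead picks
a single representative element.  A sketch of the idea (yt's helper may
differ in detail):

    import numpy as np

    def just_one(obj):
        # For a single grid, dx is uniform, so any one element will do.
        if hasattr(obj, "flat"):
            return obj.flat[0]
        return obj

    dx = np.ones(8) * 0.25
    # np.float64(dx) raises TypeError: only size-1 arrays can be
    # converted to Python scalars.
    print(just_one(dx))  # 0.25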

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r 0e5c5352ce0dd97d2b629d0ea21c4172ac54d0aa yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -946,7 +946,7 @@
                  data["particle_position_x"].size,
                  blank, np.array(data.LeftEdge).astype(np.float64),
                  np.array(data.ActiveDimensions).astype(np.int32),
-                 np.float64(data['dx']))
+                 just_one(data['dx']))
     return blank
 add_field("particle_density", function=_pdensity,
           validators=[ValidateGridType()], convert_function=_convertDensity,

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r 0e5c5352ce0dd97d2b629d0ea21c4172ac54d0aa yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -361,7 +361,7 @@
                            np.int64(np.where(filter)[0].size),
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("star_density", function=_spdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -383,7 +383,7 @@
                            num,
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("dm_density", function=_dmpdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -404,7 +404,7 @@
                            data["particle_position_x"].size,
                            top, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -415,7 +415,7 @@
                            data["particle_position_x"].size,
                            bottom, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]
@@ -445,7 +445,7 @@
                           np.int64(np.where(filter)[0].size),
                           top, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -456,7 +456,7 @@
                           np.int64(np.where(filter)[0].size),
                           bottom, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]


https://bitbucket.org/yt_analysis/yt/commits/a8bec4f5e6a3/
Changeset:   a8bec4f5e6a3
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-03-19 19:55:23
Summary:     Adding support for enzo-3.0 datasets that contain no active particles.
Affected #:  1 file

diff -r e4852e262442f29c5b2b30b8c5b8d38a2750ebef -r a8bec4f5e6a346f184da32123b54647d1c07927b yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -288,7 +288,7 @@
         if self.parameter_file.parameters["VersionNumber"] > 2.0:
             active_particles = True
             nap = {}
-            for type in self.parameters["AppendActiveParticleType"]:
+            for type in self.parameters.get("AppendActiveParticleType", []):
                 nap[type] = []
         else:
             active_particles = False
@@ -309,7 +309,7 @@
             if active_particles:
                 ptypes = _next_token_line("PresentParticleTypes", f)
                 counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)]
-                for ptype in self.parameters["AppendActiveParticleType"]:
+                for ptype in self.parameters.get("AppendActiveParticleType", []):
                     if ptype in ptypes:
                         nap[ptype].append(counts[ptypes.index(ptype)])
                     else:


https://bitbucket.org/yt_analysis/yt/commits/ad73d0642064/
Changeset:   ad73d0642064
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-03-19 19:55:39
Summary:     Adding some untracked autogenerated C files to hgignore.
Affected #:  1 file

diff -r a8bec4f5e6a346f184da32123b54647d1c07927b -r ad73d0642064f1147c67dfec7d5d71d76ffefdcd .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -5,11 +5,16 @@
 hdf5.cfg
 png.cfg
 yt_updater.log
+yt/frontends/artio/_artio_caller.c
 yt/frontends/ramses/_ramses_reader.cpp
+yt/frontends/sph/smoothing_kernel.c
+yt/geometry/oct_container.c
+yt/geometry/selection_routines.c
 yt/utilities/amr_utils.c
 yt/utilities/kdtree/forthonf2c.h
 yt/utilities/libconfig_wrapper.c
 yt/utilities/spatial/ckdtree.c
+yt/utilities/lib/alt_ray_tracers.c
 yt/utilities/lib/CICDeposit.c
 yt/utilities/lib/ContourFinding.c
 yt/utilities/lib/DepthFirstOctree.c


https://bitbucket.org/yt_analysis/yt/commits/3f43eabe2b7c/
Changeset:   3f43eabe2b7c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 20:04:30
Summary:     Merged in ngoldbaum/yt-3.0 (pull request #23)

Adding support for enzo-3.0 datasets that contain no active particles.
Affected #:  2 files

diff -r b56fb77569764f6f5fd8dc1416b6b182170345fe -r 3f43eabe2b7c0fb88452542a9ff867830564540c .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -5,11 +5,16 @@
 hdf5.cfg
 png.cfg
 yt_updater.log
+yt/frontends/artio/_artio_caller.c
 yt/frontends/ramses/_ramses_reader.cpp
+yt/frontends/sph/smoothing_kernel.c
+yt/geometry/oct_container.c
+yt/geometry/selection_routines.c
 yt/utilities/amr_utils.c
 yt/utilities/kdtree/forthonf2c.h
 yt/utilities/libconfig_wrapper.c
 yt/utilities/spatial/ckdtree.c
+yt/utilities/lib/alt_ray_tracers.c
 yt/utilities/lib/CICDeposit.c
 yt/utilities/lib/ContourFinding.c
 yt/utilities/lib/DepthFirstOctree.c

diff -r b56fb77569764f6f5fd8dc1416b6b182170345fe -r 3f43eabe2b7c0fb88452542a9ff867830564540c yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -288,7 +288,7 @@
         if self.parameter_file.parameters["VersionNumber"] > 2.0:
             active_particles = True
             nap = {}
-            for type in self.parameters["AppendActiveParticleType"]:
+            for type in self.parameters.get("AppendActiveParticleType", []):
                 nap[type] = []
         else:
             active_particles = False
@@ -309,7 +309,7 @@
             if active_particles:
                 ptypes = _next_token_line("PresentParticleTypes", f)
                 counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)]
-                for ptype in self.parameters["AppendActiveParticleType"]:
+                for ptype in self.parameters.get("AppendActiveParticleType", []):
                     if ptype in ptypes:
                         nap[ptype].append(counts[ptypes.index(ptype)])
                     else:


https://bitbucket.org/yt_analysis/yt/commits/e13c58124602/
Changeset:   e13c58124602
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 20:58:16
Summary:     This backports the periodicity fix to yt-3.0 from yt.  Additionally, it fixes
failing tests by comparing inside/outside cell counts via the Radius field,
using two independent methods of calculating the radius.  Note that this makes
the requirement for cell inclusion in a sphere more strict, mandating that the
cell center be enclosed, not just any of the eight corners.
Affected #:  3 files
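
The universal_fields hunk below fixes the per-axis periodic radius: the
wrapped distance is min(|dx|, ||dx| - W|), whereas the old code subtracted
the signed offset from W before taking any absolute value.  A vectorized
sketch of the corrected form, assuming positions and the domain width DW
share the same units:

    import numpy as np

    def periodic_radius(pos, center, DW, periodicity):
        """pos has shape (3, N); returns distances wrapped per axis."""
        radius2 = np.zeros(pos.shape[1], dtype="float64")
        for i in range(3):
            r = np.abs(pos[i] - center[i])
            if periodicity[i]:
                r = np.minimum(r, np.abs(r - DW[i]))
            radius2 += r * r
        return np.sqrt(radius2)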

diff -r 3f43eabe2b7c0fb88452542a9ff867830564540c -r e13c58124602c20675ac73b5b3e9548b5f5b034b yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -1,4 +1,5 @@
 """
+
 The basic field info container resides here.  These classes, code specific and
 universal, are the means by which we access fields across YT, both derived and
 native.
@@ -754,8 +755,9 @@
     for i, ax in enumerate('xyz'):
         np.subtract(data["%s%s" % (field_prefix, ax)], center[i], r)
         if data.pf.periodicity[i] == True:
-            np.subtract(DW[i], r, rdw)
             np.abs(r, r)
+            np.subtract(r, DW[i], rdw)
+            np.abs(rdw, rdw)
             np.minimum(r, rdw, r)
         np.power(r, 2.0, r)
         np.add(radius, r, radius)

diff -r 3f43eabe2b7c0fb88452542a9ff867830564540c -r e13c58124602c20675ac73b5b3e9548b5f5b034b yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -467,7 +467,7 @@
                     temp -= self.domain_width[i]
                 elif temp < -self.domain_width[i]/2.0:
                     temp += self.domain_width[i]
-            temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
+            #temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
             dist2 += temp*temp
         if dist2 <= self.radius2: return 1
         return 0

diff -r 3f43eabe2b7c0fb88452542a9ff867830564540c -r e13c58124602c20675ac73b5b3e9548b5f5b034b yt/utilities/tests/test_selectors.py
--- a/yt/utilities/tests/test_selectors.py
+++ b/yt/utilities/tests/test_selectors.py
@@ -20,7 +20,10 @@
         data = pf.h.sphere(center, 0.25)
         data.get_data()
         # WARNING: this value has not been externally verified
-        yield assert_equal, data.size, 19568
+        dd = pf.h.all_data()
+        dd.set_field_parameter("center", center)
+        n_outside = (dd["RadiusCode"] >= 0.25).sum()
+        assert_equal( data.size + n_outside, dd.size)
 
         positions = np.array([data[ax] for ax in 'xyz'])
         centers = np.tile( data.center, data.shape[0] ).reshape(data.shape[0],3).transpose()
@@ -28,4 +31,4 @@
                          pf.domain_right_edge-pf.domain_left_edge,
                          pf.periodicity)
         # WARNING: this value has not been externally verified
-        yield assert_almost_equal, dist.max(), 0.261806188752
+        yield assert_array_less, dist, 0.25


https://bitbucket.org/yt_analysis/yt/commits/ac8bc6a91206/
Changeset:   ac8bc6a91206
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 22:02:21
Summary:     Adding suggestion from Nathan for the filter = None fix.
Affected #:  1 file
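
The sentinel matters because indexing with None and with Ellipsis are very
different NumPy operations: arr[None] inserts a new leading axis, while
arr[Ellipsis] returns the array unchanged, which makes Ellipsis the correct
no-op stand-in when no boolean filter is needed:

    import numpy as np

    x = np.arange(4, dtype="float64")
    print(x[None].shape)      # (1, 4) -- None adds an axis
    print(x[Ellipsis].shape)  # (4,)   -- Ellipsis is a no-op selection
    print(x[x > 1.5])         # [ 2.  3.] -- a real filter, for comparison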

diff -r 0e5c5352ce0dd97d2b629d0ea21c4172ac54d0aa -r ac8bc6a9120690f7e8223fa1bc7f32c9eced7805 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -374,7 +374,7 @@
         if not filter.any(): return blank
         num = filter.sum()
     else:
-        filter = None
+        filter = Ellipsis
         num = data["particle_position_x"].size
     amr_utils.CICDeposit_3(data["particle_position_x"][filter].astype(np.float64),
                            data["particle_position_y"][filter].astype(np.float64),


https://bitbucket.org/yt_analysis/yt/commits/5067013852c7/
Changeset:   5067013852c7
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 22:33:54
Summary:     This changes assert_equal in field unit conversion tests to allow 4 nulp.
Affected #:  1 file

diff -r ac8bc6a9120690f7e8223fa1bc7f32c9eced7805 -r 5067013852c78606067c982e8d8c8d22692d0881 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -72,7 +72,7 @@
         if not field.particle_type:
             assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
-            assert_equal(v1, conv*field._function(field, dd2))
+            assert_almost_equal_nulp(v1, conv*field._function(field, dd2), 4)
         if not skip_grids:
             for g in pf.h.grids:
                 g.field_parameters.update(_sample_parameters)


https://bitbucket.org/yt_analysis/yt/commits/b73090c09e55/
Changeset:   b73090c09e55
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 22:57:50
Summary:     My typo broke the build.
Affected #:  1 file
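
For reference, the working function is
numpy.testing.assert_array_almost_equal_nulp, which compares values in units
in the last place (the spacing between adjacent representable floats) rather
than at a fixed decimal tolerance.  A self-contained check of what a 4-nulp
tolerance means:

    import numpy as np
    from numpy.testing import assert_array_almost_equal_nulp

    a = np.array([1.0])
    b = a + 2 * np.spacing(a)  # two representable steps away
    assert_array_almost_equal_nulp(a, b, nulp=4)  # passes: within 4 ULP
    # a + 8 * np.spacing(a) would fail the same check: more than 4 ULP.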

diff -r 5067013852c78606067c982e8d8c8d22692d0881 -r b73090c09e55acb5e69241a626e2bdfa9f14a237 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -72,7 +72,7 @@
         if not field.particle_type:
             assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
-            assert_almost_equal_nulp(v1, conv*field._function(field, dd2), 4)
+            assert_array_almost_equal_nulp(v1, conv*field._function(field, dd2), 4)
         if not skip_grids:
             for g in pf.h.grids:
                 g.field_parameters.update(_sample_parameters)


https://bitbucket.org/yt_analysis/yt/commits/cc46e0009c84/
Changeset:   cc46e0009c84
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-03-19 22:08:20
Summary:     Merged in MatthewTurk/yt-3.0 (pull request #22)

Go back to just_one for data['dx'].  Fixes #532.
Affected #:  2 files

diff -r e13c58124602c20675ac73b5b3e9548b5f5b034b -r cc46e0009c84f96b6a3f85836b97a06a9eabd846 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -948,7 +948,7 @@
                  data["particle_position_x"].size,
                  blank, np.array(data.LeftEdge).astype(np.float64),
                  np.array(data.ActiveDimensions).astype(np.int32),
-                 np.float64(data['dx']))
+                 just_one(data['dx']))
     return blank
 add_field("particle_density", function=_pdensity,
           validators=[ValidateGridType()], convert_function=_convertDensity,

diff -r e13c58124602c20675ac73b5b3e9548b5f5b034b -r cc46e0009c84f96b6a3f85836b97a06a9eabd846 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -361,7 +361,7 @@
                            np.int64(np.where(filter)[0].size),
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("star_density", function=_spdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -374,7 +374,7 @@
         if not filter.any(): return blank
         num = filter.sum()
     else:
-        filter = None
+        filter = Ellipsis
         num = data["particle_position_x"].size
     amr_utils.CICDeposit_3(data["particle_position_x"][filter].astype(np.float64),
                            data["particle_position_y"][filter].astype(np.float64),
@@ -383,7 +383,7 @@
                            num,
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("dm_density", function=_dmpdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -404,7 +404,7 @@
                            data["particle_position_x"].size,
                            top, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -415,7 +415,7 @@
                            data["particle_position_x"].size,
                            bottom, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]
@@ -445,7 +445,7 @@
                           np.int64(np.where(filter)[0].size),
                           top, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -456,7 +456,7 @@
                           np.int64(np.where(filter)[0].size),
                           bottom, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]


https://bitbucket.org/yt_analysis/yt/commits/0776521341cd/
Changeset:   0776521341cd
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 22:41:16
Summary:     Merged in MatthewTurk/yt-3.0 (pull request #24)

This changes assert_equal in field unit conversion tests to allow 4 nulp.
Affected #:  1 file

diff -r cc46e0009c84f96b6a3f85836b97a06a9eabd846 -r 0776521341cd4c318e942af0cd69f47d69341739 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -72,7 +72,7 @@
         if not field.particle_type:
             assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
-            assert_equal(v1, conv*field._function(field, dd2))
+            assert_almost_equal_nulp(v1, conv*field._function(field, dd2), 4)
         if not skip_grids:
             for g in pf.h.grids:
                 g.field_parameters.update(_sample_parameters)


https://bitbucket.org/yt_analysis/yt/commits/b7a4dbc55e0c/
Changeset:   b7a4dbc55e0c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-19 22:58:51
Summary:     Merge
Affected #:  8 files

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -5,11 +5,16 @@
 hdf5.cfg
 png.cfg
 yt_updater.log
+yt/frontends/artio/_artio_caller.c
 yt/frontends/ramses/_ramses_reader.cpp
+yt/frontends/sph/smoothing_kernel.c
+yt/geometry/oct_container.c
+yt/geometry/selection_routines.c
 yt/utilities/amr_utils.c
 yt/utilities/kdtree/forthonf2c.h
 yt/utilities/libconfig_wrapper.c
 yt/utilities/spatial/ckdtree.c
+yt/utilities/lib/alt_ray_tracers.c
 yt/utilities/lib/CICDeposit.c
 yt/utilities/lib/ContourFinding.c
 yt/utilities/lib/DepthFirstOctree.c

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -1,4 +1,5 @@
 """
+
 The basic field info container resides here.  These classes, code specific and
 universal, are the means by which we access fields across YT, both derived and
 native.
@@ -754,8 +755,9 @@
     for i, ax in enumerate('xyz'):
         np.subtract(data["%s%s" % (field_prefix, ax)], center[i], r)
         if data.pf.periodicity[i] == True:
-            np.subtract(DW[i], r, rdw)
             np.abs(r, r)
+            np.subtract(r, DW[i], rdw)
+            np.abs(rdw, rdw)
             np.minimum(r, rdw, r)
         np.power(r, 2.0, r)
         np.add(radius, r, radius)

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -288,7 +288,7 @@
         if self.parameter_file.parameters["VersionNumber"] > 2.0:
             active_particles = True
             nap = {}
-            for type in self.parameters["AppendActiveParticleType"]:
+            for type in self.parameters.get("AppendActiveParticleType", []):
                 nap[type] = []
         else:
             active_particles = False
@@ -309,7 +309,7 @@
             if active_particles:
                 ptypes = _next_token_line("PresentParticleTypes", f)
                 counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)]
-                for ptype in self.parameters["AppendActiveParticleType"]:
+                for ptype in self.parameters.get("AppendActiveParticleType", []):
                     if ptype in ptypes:
                         nap[ptype].append(counts[ptypes.index(ptype)])
                     else:

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -265,7 +265,8 @@
 
     def select_fwidth(self, dobj):
         # Recall domain_dimensions is the number of cells, not octs
-        base_dx = 1.0/self.domain.pf.domain_dimensions
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
         dds = (2**self.ires(dobj))
         for i in range(3):

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -92,7 +92,8 @@
 
     def select_fwidth(self, dobj):
         # Recall domain_dimensions is the number of cells, not octs
-        base_dx = 1.0/self.domain.pf.domain_dimensions
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
         dds = (2**self.ires(dobj))
         for i in range(3):

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/geometry/cartesian_fields.py
--- a/yt/geometry/cartesian_fields.py
+++ b/yt/geometry/cartesian_fields.py
@@ -41,26 +41,26 @@
 add_cart_field = CartesianFieldInfo.add_field
 
 def _dx(field, data):
-    return data.pf.domain_width[0] * data.fwidth[...,0]
+    return data.fwidth[...,0]
 add_cart_field('dx', function=_dx, display_field=False)
 
 def _dy(field, data):
-    return data.pf.domain_width[1] * data.fwidth[...,1]
+    return data.fwidth[...,1]
 add_cart_field('dy', function=_dy, display_field=False)
 
 def _dz(field, data):
-    return data.pf.domain_width[2] * data.fwidth[...,2]
+    return data.fwidth[...,2]
 add_cart_field('dz', function=_dz, display_field=False)
 
 def _coordX(field, data):
-    return data.pf.domain_left_edge[0] + data.fcoords[...,0]
+    return data.fcoords[...,0]
 add_cart_field('x', function=_coordX, display_field=False)
 
 def _coordY(field, data):
-    return data.pf.domain_left_edge[1] + data.fcoords[...,1]
+    return data.fcoords[...,1]
 add_cart_field('y', function=_coordY, display_field=False)
 
 def _coordZ(field, data):
-    return data.pf.domain_left_edge[2] + data.fcoords[...,2]
+    return data.fcoords[...,2]
 add_cart_field('z', function=_coordZ, display_field=False)
 

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -467,7 +467,7 @@
                     temp -= self.domain_width[i]
                 elif temp < -self.domain_width[i]/2.0:
                     temp += self.domain_width[i]
-            temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
+            #temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
             dist2 += temp*temp
         if dist2 <= self.radius2: return 1
         return 0

diff -r b73090c09e55acb5e69241a626e2bdfa9f14a237 -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e yt/utilities/tests/test_selectors.py
--- a/yt/utilities/tests/test_selectors.py
+++ b/yt/utilities/tests/test_selectors.py
@@ -20,7 +20,10 @@
         data = pf.h.sphere(center, 0.25)
         data.get_data()
         # WARNING: this value has not been externally verified
-        yield assert_equal, data.size, 19568
+        dd = pf.h.all_data()
+        dd.set_field_parameter("center", center)
+        n_outside = (dd["RadiusCode"] >= 0.25).sum()
+        assert_equal( data.size + n_outside, dd.size)
 
         positions = np.array([data[ax] for ax in 'xyz'])
         centers = np.tile( data.center, data.shape[0] ).reshape(data.shape[0],3).transpose()
@@ -28,4 +31,4 @@
                          pf.domain_right_edge-pf.domain_left_edge,
                          pf.periodicity)
         # WARNING: this value has not been externally verified
-        yield assert_almost_equal, dist.max(), 0.261806188752
+        yield assert_array_less, dist, 0.25


https://bitbucket.org/yt_analysis/yt/commits/f49fc8747a99/
Changeset:   f49fc8747a99
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-27 14:27:56
Summary:     Fixing RAMSES particle fields.

Previously, RAMSES read the particles completely incorrectly.  This fixes that
by applying validation and splitting the particle IO into a separate routine.

We still need by-type validation and reading, which I think will be eased by a
better structuring of the IO handler.

Example: http://paste.yt-project.org/show/3310/
Affected #:  2 files
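
The restructured reader is a two-pass pattern: pass one reads only the
particle positions and applies the selector to build per-subset masks and a
total count; pass two reads the requested fields and fills preallocated
arrays.  A hedged sketch of its shape (subsets, selector, and read_subset
stand in for the yt objects in the diff below):

    import numpy as np

    def read_particle_selection(subsets, selector, fields, read_subset):
        pos_fields = [("all", "particle_position_%s" % ax) for ax in "xyz"]
        masks, size = {}, 0
        for subset in subsets:  # pass 1: build masks, count particles
            sel = read_subset(subset, pos_fields)
            mask = selector.select_points(*(sel[f] for f in pos_fields))
            if mask is None:
                continue
            masks[id(subset)] = mask
            size += mask.sum()
        tr, filled = {}, 0      # pass 2: fill preallocated arrays
        for subset in subsets:
            mask = masks.pop(id(subset), None)
            if mask is None:
                continue
            data = read_subset(subset, fields)
            count = mask.sum()
            for field in fields:
                if field not in tr:
                    tr[field] = np.empty(size, dtype="float64")
                tr[field][filled:filled + count] = data[field][mask]
            filled += count
        return tr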

diff -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -143,6 +143,7 @@
             fpu.skip(f, 1)
             field_offsets[field] = f.tell()
         self.particle_field_offsets = field_offsets
+        self.particle_field_types = dict(particle_fields)
 
     def _read_amr_header(self):
         hvals = {}

diff -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a yt/frontends/ramses/io.py
--- a/yt/frontends/ramses/io.py
+++ b/yt/frontends/ramses/io.py
@@ -60,33 +60,45 @@
     def _read_particle_selection(self, chunks, selector, fields):
         size = 0
         masks = {}
+        chunks = list(chunks)
+        pos_fields = [("all","particle_position_%s" % ax) for ax in "xyz"]
         for chunk in chunks:
             for subset in chunk.objs:
                 # We read the whole thing, then feed it back to the selector
-                offsets = []
-                f = open(subset.domain.part_fn, "rb")
-                foffsets = subset.domain.particle_field_offsets
-                selection = {}
-                for ax in 'xyz':
-                    field = "particle_position_%s" % ax
-                    f.seek(foffsets[field])
-                    selection[ax] = fpu.read_vector(f, 'd')
-                mask = selector.select_points(selection['x'],
-                            selection['y'], selection['z'])
+                selection = self._read_particle_subset(subset, pos_fields)
+                mask = selector.select_points(
+                    selection["all", "particle_position_x"],
+                    selection["all", "particle_position_y"],
+                    selection["all", "particle_position_z"])
                 if mask is None: continue
+                #print "MASK", mask
                 size += mask.sum()
                 masks[id(subset)] = mask
         # Now our second pass
-        tr = dict((f, np.empty(size, dtype="float64")) for f in fields)
+        tr = {}
+        pos = 0
         for chunk in chunks:
             for subset in chunk.objs:
-                f = open(subset.domain.part_fn, "rb")
+                selection = self._read_particle_subset(subset, fields)
                 mask = masks.pop(id(subset), None)
                 if mask is None: continue
-                for ftype, fname in fields:
-                    offsets.append((foffsets[fname], (ftype,fname)))
-                for offset, field in sorted(offsets):
-                    f.seek(offset)
-                    tr[field] = fpu.read_vector(f, 'd')[mask]
+                count = mask.sum()
+                for field in fields:
+                    ti = selection.pop(field)[mask]
+                    if field not in tr:
+                        dt = subset.domain.particle_field_types[field[1]]
+                        tr[field] = np.empty(size, dt)
+                    tr[field][pos:pos+count] = ti
+                pos += count
         return tr
 
+    def _read_particle_subset(self, subset, fields):
+        f = open(subset.domain.part_fn, "rb")
+        foffsets = subset.domain.particle_field_offsets
+        tr = {}
+        #for field in sorted(fields, key=lambda a:foffsets[a]):
+        for field in fields:
+            f.seek(foffsets[field[1]])
+            dt = subset.domain.particle_field_types[field[1]]
+            tr[field] = fpu.read_vector(f, dt)
+        return tr
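
The refactor above follows a two-pass pattern for selector-driven particle IO:
pass one reads only the position fields and builds a boolean mask per subset,
pass two reads the requested fields and scatters the masked values into
preallocated output arrays.  A simplified, self-contained sketch of that
pattern (read_subset and select_points stand in for the RAMSES reader and the
geometric selector; they are placeholders, not the yt API):

    import numpy as np

    def two_pass_read(subsets, read_subset, select_points, fields, dtypes):
        # Pass 1: mask each subset by position and count selected particles.
        masks, size = {}, 0
        for subset in subsets:
            sel = read_subset(subset, ["x", "y", "z"])
            mask = select_points(sel["x"], sel["y"], sel["z"])
            if mask is None: continue
            masks[id(subset)] = mask
            size += mask.sum()
        # Pass 2: allocate each field once, then fill it contiguously.
        out = dict((f, np.empty(size, dtypes[f])) for f in fields)
        pos = 0
        for subset in subsets:
            mask = masks.pop(id(subset), None)
            if mask is None: continue
            data = read_subset(subset, fields)
            count = mask.sum()
            for f in fields:
                out[f][pos:pos + count] = data[f][mask]
            pos += count
        return out

Allocating after the first pass is what lets each field keep its own on-disk
dtype (via particle_field_types) instead of assuming float64 for everything.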


https://bitbucket.org/yt_analysis/yt/commits/6bdd0c4ba161/
Changeset:   6bdd0c4ba161
Branch:      yt-3.0
User:        xarthisius
Date:        2013-03-20 10:02:32
Summary:     [test_fields] omega_matter needs to be defined for DensityPerturbation
Affected #:  1 file

diff -r b7a4dbc55e0c8e835ff314c1fe0e46e5f592ee7e -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -33,6 +33,7 @@
     pf.conversion_factors.update( dict((f, 1.0) for f in fields) )
     pf.current_redshift = 0.0001
     pf.hubble_constant = 0.7
+    pf.omega_matter = 0.27
     for unit in mpc_conversion:
         pf.units[unit+'h'] = pf.units[unit]
         pf.units[unit+'cm'] = pf.units[unit]
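
For context, DensityPerturbation normalizes the density by the cosmic mean
matter density, which scales as omega_matter times the critical density times
(1+z)^3, so a fake parameter file must define omega_matter alongside
hubble_constant and current_redshift.  Schematically (constants and values
illustrative, not the yt implementation):

    # Mean matter density in g/cm^3; 1.8788e-29 * h^2 is the critical density.
    hubble, omega_matter, z = 0.7, 0.27, 0.0001
    rho_mean = 1.8788e-29 * hubble**2 * omega_matter * (1.0 + z)**3
    # overdensity = density / rho_mean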


https://bitbucket.org/yt_analysis/yt/commits/f68f40f99fd0/
Changeset:   f68f40f99fd0
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 18:50:24
Summary:     updated for explicit endianness
Affected #:  1 file

diff -r 707a83d094b676485304aafe2439d39ad9d748d0 -r f68f40f99fd0722ef7a39b4c4f68c08d9e413742 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -27,7 +27,7 @@
 import numpy as np
 import os
 
-def read_attrs(f, attrs):
+def read_attrs(f, attrs,endian='='):
     r"""This function accepts a file pointer and reads from that file pointer
     according to a definition of attributes, returning a dictionary.
 
@@ -44,6 +44,8 @@
     attrs : iterable of iterables
         This object should be an iterable of the format [ (attr_name, count,
         struct type), ... ].
+    endian : str
+        '=' is native, '>' is big, '<' is little endian
 
     Returns
     -------
@@ -59,7 +61,7 @@
     >>> rv = read_attrs(f, header)
     """
     vv = {}
-    net_format = "="
+    net_format = endian
     for a, n, t in attrs:
         net_format += "".join(["I"] + ([t] * n) + ["I"])
     size = struct.calcsize(net_format)
@@ -70,16 +72,15 @@
         v = [vals.pop(0) for i in range(b)]
         s2 = vals.pop(0)
         if s1 != s2:
-            size = struct.calcsize("=I" + "".join(b*[n]) + "I")
+            size = struct.calcsize(endian "I" + "".join(b*[n]) + "I")
             print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
                     s1, s2, a, b, n, size)
-            raise RuntimeError
         assert(s1 == s2)
         if b == 1: v = v[0]
         vv[a] = v
     return vv
 
-def read_vector(f, d):
+def read_vector(f, d, endian='='):
     r"""This function accepts a file pointer and reads from that file pointer
     a vector of values.
 
@@ -89,6 +90,8 @@
         An open file object.  Should have been opened in mode rb.
     d : data type
         This is the datatype (from the struct module) that we should read.
+    endian : str
+        '=' is native, '>' is big, '<' is little endian
 
     Returns
     -------
@@ -101,9 +104,9 @@
     >>> f = open("fort.3", "rb")
     >>> rv = read_vector(f, 'd')
     """
-    fmt = "=I"
+    fmt = endian+"I"
     ss = struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
-    ds = struct.calcsize("=%s" % d)
+    ds = struct.calcsize(endian+"%s" % d)
     if ss % ds != 0:
         print "fmt = '%s' ; ss = %s ; ds = %s" % (fmt, ss, ds)
         raise RuntimeError
@@ -113,9 +116,10 @@
     assert(vec[-1] == ss)
     return tr
 
-def skip(f, n = 1):
+def skip(f, n=1, endian='='):
     r"""This function accepts a file pointer and skips a Fortran unformatted
-    record.
+    record. Optionally check that the skip was done correctly by checking 
+    the pad bytes.
 
     Parameters
     ----------
@@ -123,6 +127,10 @@
         An open file object.  Should have been opened in mode rb.
     n : int
         Number of records to skip.
+    check : bool
+        Assert that the pad bytes are equal
+    endian : str
+        '=' is native, '>' is big, '<' is little endian
 
     Examples
     --------
@@ -131,11 +139,13 @@
     >>> skip(f, 3)
     """
     for i in range(n):
-        fmt = "=I"
-        ss = struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
-        f.seek(ss + struct.calcsize("=I"), os.SEEK_CUR)
+        fmt = endian+"I"
+        s1= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
+        f.seek(s1+ struct.calcsize(fmt), os.SEEK_CUR)
+        s2= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
+        assert s1==s2 
 
-def read_record(f, rspec):
+def read_record(f, rspec, endian='='):
     r"""This function accepts a file pointer and reads from that file pointer
     a single "record" with different components.
 
@@ -150,6 +160,8 @@
     rspec : iterable of iterables
         This object should be an iterable of the format [ (attr_name, count,
         struct type), ... ].
+    endian : str
+        '=' is native, '>' is big, '<' is little endian
 
     Returns
     -------

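For reference, the record framing these readers rely on: a Fortran unformatted
record is a 4-byte pad word holding the payload length, the payload itself,
then the same pad word again, and the endian argument is simply the struct
module's byte-order prefix.  A minimal sketch of reading one record with an
explicit byte order (the framing is standard; the function name is ours):

    import struct

    def read_fortran_record(f, endian='='):
        # Leading pad word: payload length in bytes.
        pad_fmt = endian + "I"
        pad_size = struct.calcsize(pad_fmt)
        s1 = struct.unpack(pad_fmt, f.read(pad_size))[0]
        payload = f.read(s1)
        # Trailing pad word must repeat the length.
        s2 = struct.unpack(pad_fmt, f.read(pad_size))[0]
        assert s1 == s2
        return payload

Passing endian='>' reads big-endian records (as the ART frontend does below);
'=' keeps the machine's native byte order.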

https://bitbucket.org/yt_analysis/yt/commits/9e50ea351f33/
Changeset:   9e50ea351f33
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 18:56:25
Summary:     skip now returns the number of elements skipped
Affected #:  1 file

diff -r f68f40f99fd0722ef7a39b4c4f68c08d9e413742 -r 9e50ea351f3327189b271e7237ad7c83581c4e03 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -132,18 +132,25 @@
     endian : str
         '=' is native, '>' is big, '<' is little endian
 
+    Returns
+    -------
+    skipped: The number of elements in the skipped array
+
     Examples
     --------
 
     >>> f = open("fort.3", "rb")
     >>> skip(f, 3)
     """
+    skipped = 0
     for i in range(n):
         fmt = endian+"I"
         s1= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
         f.seek(s1+ struct.calcsize(fmt), os.SEEK_CUR)
         s2= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
         assert s1==s2 
+        skipped += s1/struct.calcsize(fmt)
+    return skipped
 
 def read_record(f, rspec, endian='='):
     r"""This function accepts a file pointer and reads from that file pointer


https://bitbucket.org/yt_analysis/yt/commits/e2724afc5bf4/
Changeset:   e2724afc5bf4
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 19:11:59
Summary:     added a more concise struct format
Affected #:  1 file

diff -r 9e50ea351f3327189b271e7237ad7c83581c4e03 -r e2724afc5bf4db83014640d89fa58e65f55f801c yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -42,8 +42,10 @@
     f : File object
         An open file object.  Should have been opened in mode rb.
     attrs : iterable of iterables
-        This object should be an iterable of the format [ (attr_name, count,
-        struct type), ... ].
+        This object should be an iterable of the format 
+        [ (attr_name, count,  struct type), ... ]. 
+        or of format 
+        [ (attr_name, struct type), ... ]. 
     endian : str
         '=' is native, '>' is big, '<' is little endian
 
@@ -62,21 +64,25 @@
     """
     vv = {}
     net_format = endian
-    for a, n, t in attrs:
+    for attr in attrs:
+        a,t = attr[0],attr[-1]
+        n = 1 if len(attr)==2 else attr[1]
         net_format += "".join(["I"] + ([t] * n) + ["I"])
     size = struct.calcsize(net_format)
     vals = list(struct.unpack(net_format, f.read(size)))
     vv = {}
-    for a, b, n in attrs:
+    for attr in attrs:
+        a,t = attr[0],attr[-1]
+        n = 1 if len(attr)==2 else attr[1]
         s1 = vals.pop(0)
-        v = [vals.pop(0) for i in range(b)]
+        v = [vals.pop(0) for i in range(n)]
         s2 = vals.pop(0)
         if s1 != s2:
-            size = struct.calcsize(endian "I" + "".join(b*[n]) + "I")
+            size = struct.calcsize(endian "I" + "".join([t]*n) + "I")
             print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
-                    s1, s2, a, b, n, size)
+                    s1, s2, a, n, t, size)
         assert(s1 == s2)
-        if b == 1: v = v[0]
+        if n == 1: v = v[0]
         vv[a] = v
     return vv
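
With this change read_attrs accepts either (name, count, type) triples or
abbreviated (name, type) pairs with an implied count of one.  The
normalization amounts to the following (a standalone restatement, not the yt
function itself):

    def normalize_attr(attr):
        # (name, type) implies count == 1; (name, count, type) is explicit.
        name, struct_type = attr[0], attr[-1]
        count = 1 if len(attr) == 2 else attr[1]
        return name, count, struct_type

    assert normalize_attr(('aexpn', '>f')) == ('aexpn', 1, '>f')
    assert normalize_attr(('lextra', 2, '>256s')) == ('lextra', 2, '>256s')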
 


https://bitbucket.org/yt_analysis/yt/commits/9a6dee9b1269/
Changeset:   9a6dee9b1269
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 19:15:07
Summary:     switched to using fortran utils
Affected #:  1 file

diff -r e2724afc5bf4db83014640d89fa58e65f55f801c -r 9a6dee9b12699a0c9f15393f9f465e8def7f79c2 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -53,15 +53,11 @@
 import yt.utilities.lib as amr_utils
 
 from .definitions import *
-from .io import _read_frecord
-from .io import _read_record
-from .io import _read_struct
+from yt.utilities.fortran_utils import *
 from .io import _read_art_level_info
 from .io import _read_child_mask_level
 from .io import _read_child_level
 from .io import _read_root_level
-from .io import _read_record_size
-from .io import _skip_record
 from .io import _count_art_octs
 from .io import b2t
 
@@ -322,10 +318,10 @@
         self.parameters.update(constants)
         #read the amr header
         with open(self.file_amr,'rb') as f:
-            amr_header_vals = _read_struct(f,amr_header_struct)
+            amr_header_vals = read_attrs(f,amr_header_struct)
             for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                _skip_record(f)
-            (self.ncell,) = struct.unpack('>l', _read_record(f))
+                skip(f)
+            self.ncell = read_vector(f,'l','>')
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
@@ -337,27 +333,25 @@
             self.root_ncells = self.root_nocts*8
             mylog.debug("Estimating %i cells on a root grid side,"+ \
                         "%i root octs",est,self.root_nocts)
-            self.root_iOctCh = _read_frecord(f,'>i')[:self.root_ncells]
+            self.root_iOctCh = read_vector(f,'i','>')[:self.root_ncells]
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
                  order='F')
             self.root_grid_offset = f.tell()
-            #_skip_record(f) # hvar
-            #_skip_record(f) # var
-            self.root_nhvar = _read_frecord(f,'>f',size_only=True)
-            self.root_nvar  = _read_frecord(f,'>f',size_only=True)
+            self.root_nhvar = skip(f)
+            self.root_nvar  = skip(f)
             #make sure that the number of root variables is a multiple of rootcells
             assert self.root_nhvar%self.root_ncells==0
             assert self.root_nvar%self.root_ncells==0
             self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
                                     self.root_ncells)
-            self.iOctFree, self.nOct = struct.unpack('>ii', _read_record(f))
+            self.iOctFree, self.nOct = read_vector(f,'ii','>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
         #read the particle header
         if not self.skip_particles and self.file_particle_header:
             with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = _read_struct(fh,particle_header_struct)
+                particle_header_vals = read_attrs(fh,particle_header_struct)
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
                 wspecies = np.fromfile(fh,dtype='>f',count=10)


https://bitbucket.org/yt_analysis/yt/commits/b827e6f8af57/
Changeset:   b827e6f8af57
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 19:15:18
Summary:     wiped old fortran utils
Affected #:  1 file

diff -r 9a6dee9b12699a0c9f15393f9f465e8def7f79c2 -r b827e6f8af5705cf09314666682d503185fb0ca9 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -31,6 +31,7 @@
 from yt.utilities.io_handler import \
     BaseIOHandler
 import yt.utilities.lib as au
+from yt.utilities.fortran_utils import *
 from yt.utilities.logger import ytLogger as mylog
 from yt.frontends.art.definitions import *
 
@@ -107,8 +108,7 @@
         #Get the info for this level, skip the rest
         #print "Reading oct tree data for level", Lev
         #print 'offset:',f.tell()
-        Level[Lev], iNOLL[Lev], iHOLL[Lev] = struct.unpack(
-           '>iii', _read_record(f))
+        Level[Lev], iNOLL[Lev], iHOLL[Lev] = read_vector(f,'iii','>')
         #print 'Level %i : '%Lev, iNOLL
         #print 'offset after level record:',f.tell()
         iOct = iHOLL[Lev] - 1
@@ -117,13 +117,13 @@
         ntot = ntot + nLevel
 
         #Skip all the oct hierarchy data
-        ns = _read_record_size(f)
+        ns = skip(f)
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel)
 
         level_child_offsets.append(f.tell())
         #Skip the child vars data
-        ns = _read_record_size(f)
+        ns = skip(f)
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel*nchild)
 
@@ -136,8 +136,7 @@
     pos = f.tell()
     f.seek(level_oct_offsets[level])
     #Get the info for this level, skip the rest
-    junk, nLevel, iOct = struct.unpack(
-       '>iii', _read_record(f))
+    junk, nLevel, iOct = read_vector(f,'iii','>')
     
     #fortran indices start at 1
     
@@ -218,9 +217,9 @@
     with open(file,'rb') as fh:
         for dtype, variables in star_struct:
             if field in variables or dtype=='>d' or dtype=='>d':
-                data[field] = _read_frecord(fh,'>f')
+                data[field] = read_vector(fh,'f','>')
             else:
-                _skip_record(fh)
+                skip(fh,endian='>')
     return data.pop(field),data
 
 def _read_child_mask_level(f, level_child_offsets,level,nLevel,nhydro_vars):
@@ -279,54 +278,13 @@
 def _read_root_level(f,level_offsets,level_info,nhydro_vars=10):
     nocts = level_info[0]
     f.seek(level_offsets[0]) # Ditch the header
-    hvar = _read_frecord(f,'>f')
-    var  = _read_frecord(f,'>f')
+    hvar = read_vector(f,'f','>')
+    var = read_vector(f,'f','>')
     hvar = hvar.reshape((nhydro_vars, nocts*8), order="F")
     var = var.reshape((2, nocts*8), order="F")
     arr = np.concatenate((hvar,var))
     return arr
 
-def _skip_record(f):
-    s = struct.unpack('>i', f.read(struct.calcsize('>i')))
-    f.seek(s[0], 1)
-    s = struct.unpack('>i', f.read(struct.calcsize('>i')))
-
-def _read_frecord(f,fmt,size_only=False):
-    s1 = struct.unpack('>i', f.read(struct.calcsize('>i')))[0]
-    count = s1/np.dtype(fmt).itemsize
-    ss = np.fromfile(f,fmt,count=count)
-    s2 = struct.unpack('>i', f.read(struct.calcsize('>i')))[0]
-    assert s1==s2
-    if size_only:
-        return count
-    return ss
-
-
-def _read_record(f,fmt=None):
-    s = struct.unpack('>i', f.read(struct.calcsize('>i')))[0]
-    ss = f.read(s)
-    s = struct.unpack('>i', f.read(struct.calcsize('>i')))
-    if fmt is not None:
-        return struct.unpack(ss,fmt)
-    return ss
-
-def _read_record_size(f):
-    pos = f.tell()
-    s = struct.unpack('>i', f.read(struct.calcsize('>i')))
-    f.seek(pos)
-    return s[0]
-
-def _read_struct(f,structure,verbose=False):
-    vals = {}
-    for format,name in structure:
-        size = struct.calcsize(format)
-        (val,) = struct.unpack(format,f.read(size))
-        vals[name] = val
-        if verbose: print "%s:\t%s\t (%d B)" %(name,val,f.tell())
-    return vals
-
-
-
 #All of these functions are to convert from hydro time var to 
 #proper time
 sqrt = np.sqrt


https://bitbucket.org/yt_analysis/yt/commits/96cce19fa09f/
Changeset:   96cce19fa09f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 19:37:17
Summary:     reverting changes to read_attrs
Affected #:  1 file

diff -r b827e6f8af5705cf09314666682d503185fb0ca9 -r 96cce19fa09feaaee5de3e04d633b5bee64ffda9 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -42,10 +42,8 @@
     f : File object
         An open file object.  Should have been opened in mode rb.
     attrs : iterable of iterables
-        This object should be an iterable of the format 
-        [ (attr_name, count,  struct type), ... ]. 
-        or of format 
-        [ (attr_name, struct type), ... ]. 
+        This object should be an iterable of the format [ (attr_name, count,
+        struct type), ... ].
     endian : str
         '=' is native, '>' is big, '<' is little endian
 
@@ -64,25 +62,21 @@
     """
     vv = {}
     net_format = endian
-    for attr in attrs:
-        a,t = attr[0],attr[-1]
-        n = 1 if len(attr)==2 else attr[1]
+    for a, n, t in attrs:
         net_format += "".join(["I"] + ([t] * n) + ["I"])
     size = struct.calcsize(net_format)
     vals = list(struct.unpack(net_format, f.read(size)))
     vv = {}
-    for attr in attrs:
-        a,t = attr[0],attr[-1]
-        n = 1 if len(attr)==2 else attr[1]
+    for a, b, n in attrs:
         s1 = vals.pop(0)
-        v = [vals.pop(0) for i in range(n)]
+        v = [vals.pop(0) for i in range(b)]
         s2 = vals.pop(0)
         if s1 != s2:
-            size = struct.calcsize(endian "I" + "".join([t]*n) + "I")
+            size = struct.calcsize(endian "I" + "".join(b*[n]) + "I")
             print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
-                    s1, s2, a, n, t, size)
+                    s1, s2, a, b, n, size)
         assert(s1 == s2)
-        if n == 1: v = v[0]
+        if b == 1: v = v[0]
         vv[a] = v
     return vv
 


https://bitbucket.org/yt_analysis/yt/commits/a795bea9ad00/
Changeset:   a795bea9ad00
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 19:51:32
Summary:     updated the structs to the standard fortran format
Affected #:  1 file

diff -r 96cce19fa09feaaee5de3e04d633b5bee64ffda9 -r a795bea9ad006db6aef3a9533401524b10c5cfe9 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -40,10 +40,10 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
+hydro_struct = [('pad1',1,'>i'),('idc',1,'>i'),('iOctCh',1,'>i')]
 for field in fluid_fields:
-    hydro_struct += (field,'>f'),
-hydro_struct += ('pad2','>i'),
+    hydro_struct += (field,1,'>f'),
+hydro_struct += ('pad2',1,'>i'),
 
 particle_fields= [
     'particle_age',
@@ -87,68 +87,68 @@
 }
 
 amr_header_struct = [
-    ('>i','pad byte'),
-    ('>256s','jname'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','istep'),
-    ('>d','t'),
-    ('>d','dt'),
-    ('>f','aexpn'),
-    ('>f','ainit'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','boxh'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','Omb0'),
-    ('>f','hubble'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','nextras'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','extra1'),
-    ('>f','extra2'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>256s','lextra'),
-    ('>256s','lextra'),
-    ('>i','pad byte'),
-    ('>i', 'pad byte'),
-    ('>i', 'min_level'),
-    ('>i', 'max_level'),
-    ('>i', 'pad byte'),
+    ('pad byte',1,'>i'),
+    ('jname',1,'>256s'),
+    ('pad byte',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('istep',1,'>i'),
+    ('t',1,'>d'),
+    ('dt',1,'>d'),
+    ('aexpn',1,'>f'),
+    ('ainit',1,'>f'),
+    ('pad byte',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('boxh',1,'>f'),
+    ('Om0',1,'>f'),
+    ('Oml0',1,'>f'),
+    ('Omb0',1,'>f'),
+    ('hubble',1,'>f'),
+    ('pad byte',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('nextras',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('extra1',1,'>f'),
+    ('extra2',1,'>f'),
+    ('pad byte',1,'>i'),
+    ('pad byte',1,'>i'),
+    ('lextra',1,'>256s'),
+    ('lextra',1,'>256s'),
+    ('pad byte',1,'>i'),
+    ( 'pad byte',1,'>i'),
+    ( 'min_level',1,'>i'),
+    ( 'max_level',1,'>i'),
+    ( 'pad byte',1,'>i')
 ]
 
 particle_header_struct =[
-    ('>i','pad'),
-    ('45s','header'), 
-    ('>f','aexpn'),
-    ('>f','aexp0'),
-    ('>f','amplt'),
-    ('>f','astep'),
-    ('>i','istep'),
-    ('>f','partw'),
-    ('>f','tintg'),
-    ('>f','Ekin'),
-    ('>f','Ekin1'),
-    ('>f','Ekin2'),
-    ('>f','au0'),
-    ('>f','aeu0'),
-    ('>i','Nrow'),
-    ('>i','Ngridc'),
-    ('>i','Nspecies'),
-    ('>i','Nseed'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','hubble'),
-    ('>f','Wp5'),
-    ('>f','Ocurv'),
-    ('>f','Omb0'),
-    ('>%ds'%(396),'extras'),
-    ('>f','unknown'),
-    ('>i','pad')
+    ('pad',1,'>i'),
+    ('header',1,'45s', 
+    ('aexpn',1,'>f'),
+    ('aexp0',1,'>f'),
+    ('amplt',1,'>f'),
+    ('astep',1,'>f'),
+    ('istep',1,'>i'),
+    ('partw',1,'>f'),
+    ('tintg',1,'>f'),
+    ('Ekin',1,'>f'),
+    ('Ekin1',1,'>f'),
+    ('Ekin2',1,'>f'),
+    ('au0',1,'>f'),
+    ('aeu0',1,'>f'),
+    ('Nrow',1,'>i'),
+    ('Ngridc',1,'>i'),
+    ('Nspecies',1,'>i'),
+    ('Nseed',1,'>i'),
+    ('Om0',1,'>f'),
+    ('Oml0',1,'>f'),
+    ('hubble',1,'>f'),
+    ('Wp5',1,'>f'),
+    ('Ocurv',1,'>f'),
+    ('Omb0',1,'>f'),
+    ('extras',1,'>%ds'%(396)),
+    ('unknown',1,'>f'),
+    ('pad',1,'>i')
 ]
 
 star_struct = [


https://bitbucket.org/yt_analysis/yt/commits/56e250e70da6/
Changeset:   56e250e70da6
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 20:46:06
Summary:     added new formats to read_attrs
Affected #:  1 file

diff -r a795bea9ad006db6aef3a9533401524b10c5cfe9 -r 56e250e70da6345f7250353fcd26cc5169696aa6 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -42,8 +42,10 @@
     f : File object
         An open file object.  Should have been opened in mode rb.
     attrs : iterable of iterables
-        This object should be an iterable of the format [ (attr_name, count,
-        struct type), ... ].
+        This object should be an iterable of one of the formats: 
+        [ (attr_name, count, struct type), ... ].
+        [ ((name1,name2,name3),count, vector type]
+        [ ((name1,name2,name3),count, [type,type,type]]
     endian : str
         '=' is native, '>' is big, '<' is little endian
 
@@ -63,21 +65,32 @@
     vv = {}
     net_format = endian
     for a, n, t in attrs:
+        for end in '@=<>':
+            t = t.replace(end,'')
         net_format += "".join(["I"] + ([t] * n) + ["I"])
     size = struct.calcsize(net_format)
     vals = list(struct.unpack(net_format, f.read(size)))
     vv = {}
-    for a, b, n in attrs:
+    for a, n, t in attrs:
+        for end in '@=<>':
+            t = t.replace(end,'')
+        if type(a)==tuple:
+            n = len(a)
         s1 = vals.pop(0)
-        v = [vals.pop(0) for i in range(b)]
+        v = [vals.pop(0) for i in range(n)]
         s2 = vals.pop(0)
         if s1 != s2:
-            size = struct.calcsize(endian "I" + "".join(b*[n]) + "I")
+            size = struct.calcsize(endian + "I" + "".join(n*[t]) + "I")
             print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
-                    s1, s2, a, b, n, size)
+                    s1, s2, a, n, t, size)
         assert(s1 == s2)
-        if b == 1: v = v[0]
-        vv[a] = v
+        if n == 1: v = v[0]
+        if type(a)==tuple:
+            assert len(a) == len(v)
+            for k,val in zip(a,v):
+                vv[k]=val
+        else:
+            vv[a] = v
     return vv
 
 def read_vector(f, d, endian='='):
@@ -184,7 +197,11 @@
     >>> rv = read_record(f, header)
     """
     vv = {}
-    net_format = "=I" + "".join(["%s%s" % (n, t) for a, n, t in rspec]) + "I"
+    net_format = endian + "I"
+    for a, n, t in rspec:
+        t = t if len(t)==1 else t[-1]
+        net_format += "%s%s"%(n, t)
+    net_format += "I"
     size = struct.calcsize(net_format)
     vals = list(struct.unpack(net_format, f.read(size)))
     vvv = vals[:]
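
The tuple-of-names form added here lets one entry unpack several scalars from
a single run of the record, e.g. (('istep','t','dt','aexpn','ainit'), 1,
'iddff') as used in definitions.py below.  A sketch of how such an entry maps
onto struct (header values are invented, and the pad words are omitted):

    import struct

    names = ("istep", "t", "dt", "aexpn", "ainit")
    fmt = ">iddff"                       # one struct code per name
    packed = struct.pack(fmt, 5, 1.0, 0.01, 0.5, 0.001)
    header = dict(zip(names, struct.unpack(fmt, packed)))
    assert header["istep"] == 5

In the commit above, when the first element is a tuple the count is overridden
by len(a), and any per-field endian characters are stripped so that a single
byte-order prefix applies to the whole concatenated format.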


https://bitbucket.org/yt_analysis/yt/commits/55ff846e350d/
Changeset:   55ff846e350d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 20:51:18
Summary:     changing the format of the parameter structs
Affected #:  1 file

diff -r 56e250e70da6345f7250353fcd26cc5169696aa6 -r 55ff846e350dc99562fca28c9daf24cb0a324a87 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -25,6 +25,9 @@
 
 """
 
+#If not otherwise specified, we are big endian
+endian = '>'
+
 fluid_fields= [ 
     'Density',
     'TotalEnergy',
@@ -87,43 +90,18 @@
 }
 
 amr_header_struct = [
-    ('pad byte',1,'>i'),
-    ('jname',1,'>256s'),
-    ('pad byte',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('istep',1,'>i'),
-    ('t',1,'>d'),
-    ('dt',1,'>d'),
-    ('aexpn',1,'>f'),
-    ('ainit',1,'>f'),
-    ('pad byte',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('boxh',1,'>f'),
-    ('Om0',1,'>f'),
-    ('Oml0',1,'>f'),
-    ('Omb0',1,'>f'),
-    ('hubble',1,'>f'),
-    ('pad byte',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('nextras',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('extra1',1,'>f'),
-    ('extra2',1,'>f'),
-    ('pad byte',1,'>i'),
-    ('pad byte',1,'>i'),
-    ('lextra',1,'>256s'),
-    ('lextra',1,'>256s'),
-    ('pad byte',1,'>i'),
-    ( 'pad byte',1,'>i'),
-    ( 'min_level',1,'>i'),
-    ( 'max_level',1,'>i'),
-    ( 'pad byte',1,'>i')
+    ('jname',1,'256s'),
+    (('istep','i','dt','aexpn','ainit'),1,'iddff'),
+    (('boxh','Om0','Oml0','Omb0','hubble'),5,'f'),
+    ('nextras',1,'i'),
+    (('extra1','extra2'),2,'f'),
+    ('lextra',1,'512s'),
+    (('min_level','max_level'),2,'i')
 ]
 
 particle_header_struct =[
     ('pad',1,'>i'),
-    ('header',1,'45s', 
+    ('header',1,'45s'), 
     ('aexpn',1,'>f'),
     ('aexp0',1,'>f'),
     ('amplt',1,'>f'),


https://bitbucket.org/yt_analysis/yt/commits/fd0a7b20b647/
Changeset:   fd0a7b20b647
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 21:35:31
Summary:     AMR now reads in via the new fortran_utils module
Affected #:  1 file

diff -r 55ff846e350dc99562fca28c9daf24cb0a324a87 -r fd0a7b20b6476badfac37992ce6860476290280d yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -52,7 +52,7 @@
     get_box_grids_level
 import yt.utilities.lib as amr_utils
 
-from .definitions import *
+from yt.frontends.art.definitions import *
 from yt.utilities.fortran_utils import *
 from .io import _read_art_level_info
 from .io import _read_child_mask_level
@@ -318,10 +318,10 @@
         self.parameters.update(constants)
         #read the amr header
         with open(self.file_amr,'rb') as f:
-            amr_header_vals = read_attrs(f,amr_header_struct)
+            amr_header_vals = read_attrs(f,amr_header_struct,'>')
             for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                skip(f)
-            self.ncell = read_vector(f,'l','>')
+                skipped=skip(f,endian='>')
+            (self.ncell) = read_vector(f,'i','>')[0]
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
@@ -337,14 +337,14 @@
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
                  order='F')
             self.root_grid_offset = f.tell()
-            self.root_nhvar = skip(f)
-            self.root_nvar  = skip(f)
+            self.root_nhvar = skip(f,endian='>')
+            self.root_nvar  = skip(f,endian='>')
             #make sure that the number of root variables is a multiple of rootcells
             assert self.root_nhvar%self.root_ncells==0
             assert self.root_nvar%self.root_ncells==0
             self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
                                     self.root_ncells)
-            self.iOctFree, self.nOct = read_vector(f,'ii','>')
+            self.iOctFree, self.nOct = read_vector(f,'i','>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
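
One detail in the header read above: the root grid side length is recovered
from the cell count by a cube root, est = int(np.rint(self.ncell**(1.0/3.0))).
A tiny numeric check of that estimate (the cell count is invented):

    import numpy as np

    ncell = 262144                           # hypothetical root cell count
    est = int(np.rint(ncell ** (1.0 / 3.0)))
    assert est == 64                         # 64**3 == 262144 root cells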


https://bitbucket.org/yt_analysis/yt/commits/798f8ac6d81b/
Changeset:   798f8ac6d81b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 21:35:52
Summary:     updating to clearer names
Affected #:  1 file

diff -r fd0a7b20b6476badfac37992ce6860476290280d -r 798f8ac6d81b704202c0e35ff2964b96b12703d1 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -45,7 +45,7 @@
         This object should be an iterable of one of the formats: 
         [ (attr_name, count, struct type), ... ].
         [ ((name1,name2,name3),count, vector type]
-        [ ((name1,name2,name3),count, [type,type,type]]
+        [ ((name1,name2,name3),count, 'type type type']
     endian : str
         '=' is native, '>' is big, '<' is little endian
 
@@ -117,16 +117,18 @@
     >>> f = open("fort.3", "rb")
     >>> rv = read_vector(f, 'd')
     """
-    fmt = endian+"I"
-    ss = struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
-    ds = struct.calcsize(endian+"%s" % d)
-    if ss % ds != 0:
-        print "fmt = '%s' ; ss = %s ; ds = %s" % (fmt, ss, ds)
+    fmt = endian+"%s" % d
+    size = struct.calcsize(fmt)
+    padfmt = endian + "I"
+    padsize = struct.calcsize(padfmt)
+    length = struct.unpack(padfmt,f.read(padsize))[0]
+    if length % size!= 0:
+        print "fmt = '%s' ; length = %s ; size= %s" % (fmt, length, size)
         raise RuntimeError
-    count = ss / ds
-    tr = np.fromstring(f.read(np.dtype(d).itemsize*count), d, count)
-    vec = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
-    assert(vec[-1] == ss)
+    count = length/ size
+    tr = np.fromfile(f,fmt,count=count)
+    length2= struct.unpack(padfmt,f.read(padsize))[0]
+    assert(length == length2)
     return tr
 
 def skip(f, n=1, endian='='):
@@ -158,9 +160,10 @@
     skipped = 0
     for i in range(n):
         fmt = endian+"I"
-        s1= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
+        size = f.read(struct.calcsize(fmt))
+        s1= struct.unpack(fmt, size)[0]
         f.seek(s1+ struct.calcsize(fmt), os.SEEK_CUR)
-        s2= struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]
+        s2= struct.unpack(fmt, size)[0]
         assert s1==s2 
         skipped += s1/struct.calcsize(fmt)
     return skipped


https://bitbucket.org/yt_analysis/yt/commits/ff16bb17a32f/
Changeset:   ff16bb17a32f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 22:06:01
Summary:     updating particle struct format
Affected #:  2 files

diff -r 798f8ac6d81b704202c0e35ff2964b96b12703d1 -r ff16bb17a32f1f5eff579deff9b133d8eab23087 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -351,7 +351,7 @@
         #read the particle header
         if not self.skip_particles and self.file_particle_header:
             with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = read_attrs(fh,particle_header_struct)
+                particle_header_vals = read_attrs(fh,particle_header_struct,'>')
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
                 wspecies = np.fromfile(fh,dtype='>f',count=10)

diff -r 798f8ac6d81b704202c0e35ff2964b96b12703d1 -r ff16bb17a32f1f5eff579deff9b133d8eab23087 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -100,33 +100,17 @@
 ]
 
 particle_header_struct =[
-    ('pad',1,'>i'),
-    ('header',1,'45s'), 
-    ('aexpn',1,'>f'),
-    ('aexp0',1,'>f'),
-    ('amplt',1,'>f'),
-    ('astep',1,'>f'),
-    ('istep',1,'>i'),
-    ('partw',1,'>f'),
-    ('tintg',1,'>f'),
-    ('Ekin',1,'>f'),
-    ('Ekin1',1,'>f'),
-    ('Ekin2',1,'>f'),
-    ('au0',1,'>f'),
-    ('aeu0',1,'>f'),
-    ('Nrow',1,'>i'),
-    ('Ngridc',1,'>i'),
-    ('Nspecies',1,'>i'),
-    ('Nseed',1,'>i'),
-    ('Om0',1,'>f'),
-    ('Oml0',1,'>f'),
-    ('hubble',1,'>f'),
-    ('Wp5',1,'>f'),
-    ('Ocurv',1,'>f'),
-    ('Omb0',1,'>f'),
-    ('extras',1,'>%ds'%(396)),
-    ('unknown',1,'>f'),
-    ('pad',1,'>i')
+    (('header',
+     'aexpn','aexp0','amplt','astep',
+     'istep',
+     'partw','tintg',
+     'Ekin','Ekin1','Ekin2',
+     'au0','aeu0',
+     'Nrow','Ngridc','Nspecies','Nseed',
+     'Om0','Oml0','hubble','Wp5','Ocurv','Omb0',
+     'extras','unknown'),
+      1,
+     '45sffffi'+'fffffff'+'iiii'+'ffffff'+'396s'+'f')
 ]
 
 star_struct = [


https://bitbucket.org/yt_analysis/yt/commits/d5d137fc13c0/
Changeset:   d5d137fc13c0
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 22:09:17
Summary:     fixed typo: 'i' -> 't' in amr_header_struct
Affected #:  1 file

diff -r ff16bb17a32f1f5eff579deff9b133d8eab23087 -r d5d137fc13c0fa9d44de6df0b9bb07ee641d60cf yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -91,7 +91,7 @@
 
 amr_header_struct = [
     ('jname',1,'256s'),
-    (('istep','i','dt','aexpn','ainit'),1,'iddff'),
+    (('istep','t','dt','aexpn','ainit'),1,'iddff'),
     (('boxh','Om0','Oml0','Omb0','hubble'),5,'f'),
     ('nextras',1,'i'),
     (('extra1','extra2'),2,'f'),


https://bitbucket.org/yt_analysis/yt/commits/75deb32034d4/
Changeset:   75deb32034d4
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 22:29:25
Summary:     IO refactor complete
Affected #:  3 files

diff -r d5d137fc13c0fa9d44de6df0b9bb07ee641d60cf -r 75deb32034d4cd88947efee162b23994db4a5163 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -43,10 +43,10 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1',1,'>i'),('idc',1,'>i'),('iOctCh',1,'>i')]
+hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
 for field in fluid_fields:
-    hydro_struct += (field,1,'>f'),
-hydro_struct += ('pad2',1,'>i'),
+    hydro_struct += (field,'>f'),
+hydro_struct += ('pad2','>i'),
 
 particle_fields= [
     'particle_age',

diff -r d5d137fc13c0fa9d44de6df0b9bb07ee641d60cf -r 75deb32034d4cd88947efee162b23994db4a5163 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -108,7 +108,7 @@
         #Get the info for this level, skip the rest
         #print "Reading oct tree data for level", Lev
         #print 'offset:',f.tell()
-        Level[Lev], iNOLL[Lev], iHOLL[Lev] = read_vector(f,'iii','>')
+        Level[Lev], iNOLL[Lev], iHOLL[Lev] = read_vector(f,'i','>')
         #print 'Level %i : '%Lev, iNOLL
         #print 'offset after level record:',f.tell()
         iOct = iHOLL[Lev] - 1
@@ -117,13 +117,13 @@
         ntot = ntot + nLevel
 
         #Skip all the oct hierarchy data
-        ns = skip(f)
+        ns = peek_record_size(f,endian='>')
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel)
 
         level_child_offsets.append(f.tell())
         #Skip the child vars data
-        ns = skip(f)
+        ns = peek_record_size(f,endian='>')
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel*nchild)
 
@@ -136,7 +136,7 @@
     pos = f.tell()
     f.seek(level_oct_offsets[level])
     #Get the info for this level, skip the rest
-    junk, nLevel, iOct = read_vector(f,'iii','>')
+    junk, nLevel, iOct = read_vector(f,'i','>')
     
     #fortran indices start at 1
     

diff -r d5d137fc13c0fa9d44de6df0b9bb07ee641d60cf -r 75deb32034d4cd88947efee162b23994db4a5163 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -158,6 +158,7 @@
     >>> skip(f, 3)
     """
     skipped = 0
+    pos = f.tell()
     for i in range(n):
         fmt = endian+"I"
         size = f.read(struct.calcsize(fmt))
@@ -168,6 +169,27 @@
         skipped += s1/struct.calcsize(fmt)
     return skipped
 
+def peek_record_size(f,endian='='):
+    r""" This function accept the file handle and returns
+    the size of the next record and then rewinds the file
+    to the previous position.
+
+    Parameters
+    ----------
+    f : File object
+        An open file object.  Should have been opened in mode rb.
+    endian : str
+        '=' is native, '>' is big, '<' is little endian
+
+    Returns
+    -------
+    Number of bytes in the next record
+    """
+    pos = f.tell()
+    s = struct.unpack('>i', f.read(struct.calcsize('>i')))
+    f.seek(pos)
+    return s[0]
+
 def read_record(f, rspec, endian='='):
     r"""This function accepts a file pointer and reads from that file pointer
     a single "record" with different components.

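peek_record_size reads the leading pad word and rewinds, which is what lets
the callers in io.py skip nLevel identically-sized records with a single seek
instead of nLevel separate reads.  A sketch of that skip-many pattern under
the same record framing (the helper name skip_uniform_records is ours):

    import os
    import struct

    def skip_uniform_records(f, n_records, endian='>'):
        pad_fmt = endian + "I"
        pad_size = struct.calcsize(pad_fmt)
        # Peek at the first record's payload size without consuming it.
        pos = f.tell()
        payload = struct.unpack(pad_fmt, f.read(pad_size))[0]
        f.seek(pos)
        # Each record = leading pad + payload + trailing pad.
        f.seek(n_records * (pad_size + payload + pad_size), os.SEEK_CUR)

(Note that the committed peek_record_size hardcodes '>i' regardless of its
endian argument.)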

https://bitbucket.org/yt_analysis/yt/commits/0c06f26e04bd/
Changeset:   0c06f26e04bd
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 23:43:27
Summary:     root mesh still not working
Affected #:  1 file

diff -r 75deb32034d4cd88947efee162b23994db4a5163 -r 0c06f26e04bd1f1e92a0f38529704108d8b6560e yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -461,8 +461,12 @@
                                          self.domain.pf.parameters['ncell0'])
             source= {}
             for i,field in enumerate(fields):
-                source[field] = np.empty((no, 8), dtype="float64")
-                source[field][:,:] = np.reshape(data[i,:],(no,8))
+                if level==0:
+                    temp = np.reshape(data[i,:],(no,8),order='C')
+                else:
+                    temp = np.reshape(data[i,:],(no,8),order='C')
+                temp = temp.astype('float64')
+                source[field] = temp
             level_offset += oct_handler.fill_level(self.domain.domain_id, 
                                    level, dest, source, self.mask, level_offset)
         return dest
@@ -545,6 +549,10 @@
         root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
                           LL[1]:RL[1]:NX[1]*1j,
                           LL[2]:RL[2]:NX[2]*1j ]
+        root_idx = na.arange(np.prod(root_fc.shape))
+        import pdb; pdb.set_trace()
+        #must add in 000,100,200,300,...010,020,...
+        #001,002,003,... xyz order
         root_fc= np.vstack([p.ravel() for p in root_fc]).T
         nocts_check = oct_handler.add(1, 0, root_octs_side**3,
                                       root_fc, self.domain_id)


https://bitbucket.org/yt_analysis/yt/commits/6b507936ae79/
Changeset:   6b507936ae79
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-13 23:45:31
Summary:     removed pdb
Affected #:  1 file

diff -r 0c06f26e04bd1f1e92a0f38529704108d8b6560e -r 6b507936ae7948a9061d2990b8c1669991515196 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -550,7 +550,6 @@
                           LL[1]:RL[1]:NX[1]*1j,
                           LL[2]:RL[2]:NX[2]*1j ]
         root_idx = na.arange(np.prod(root_fc.shape))
-        import pdb; pdb.set_trace()
         #must add in 000,100,200,300,...010,020,...
         #001,002,003,... xyz order
         root_fc= np.vstack([p.ravel() for p in root_fc]).T


https://bitbucket.org/yt_analysis/yt/commits/c6318049078b/
Changeset:   c6318049078b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-14 00:38:33
Summary:     chunk by level works
Affected #:  1 file

diff -r 6b507936ae7948a9061d2990b8c1669991515196 -r c6318049078b719469c8c1002be74b51d6c85360 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -98,7 +98,8 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,1,nv)]
+        self.domains = [ARTDomainFile(self.parameter_file,i+1,nv,l)
+                        for i,l in enumerate(range(self.pf.max_level))]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
         self.oct_handler = RAMSESOctreeContainer(
@@ -134,8 +135,10 @@
             counts = self.oct_handler.count_cells(dobj.selector, mask)
             #For all domains, figure out how many counts we have 
             #and build a subset=mask of domains 
-            subsets = [ARTDomainSubset(d, mask, c)
-                       for d, c in zip(self.domains, counts) if c > 0]
+            subsets = []
+            for d,c in zip(self.domains,counts):
+                if c>0:
+                    subsets += ARTDomainSubset(d,mask,c,d.domain_level),
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -396,11 +399,12 @@
         return False
 
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
+    def __init__(self, domain, mask, cell_count,domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
         self.cell_count = cell_count
+        self.domain_level = domain_level
         level_counts = self.oct_handler.count_levels(
             self.domain.pf.max_level, self.domain.domain_id, mask)
         assert(level_counts.sum() == cell_count)
@@ -447,28 +451,29 @@
         field_idxs = [all_fields.index(f) for f in fields]
         for field in fields:
             dest[field] = np.zeros(self.cell_count, 'float64')
-        for level, offset in enumerate(self.domain.level_offsets):
-            no = self.domain.level_count[level]
+        level = self.domain_level
+        offset = self.domain.level_offsets
+        no = self.domain.level_count[level]
+        if level==0:
+            data = _read_root_level(content,self.domain.level_child_offsets,
+                                   self.domain.level_count)
+            data = data[field_idxs,:]
+        else:
+            data = _read_child_level(content,self.domain.level_child_offsets,
+                                     self.domain.level_offsets,
+                                     self.domain.level_count,level,fields,
+                                     self.domain.pf.domain_dimensions,
+                                     self.domain.pf.parameters['ncell0'])
+        source= {}
+        for i,field in enumerate(fields):
             if level==0:
-                data = _read_root_level(content,self.domain.level_child_offsets,
-                                       self.domain.level_count)
-                data = data[field_idxs,:]
+                temp = np.reshape(data[i,:],(no,8),order='C')
             else:
-                data = _read_child_level(content,self.domain.level_child_offsets,
-                                         self.domain.level_offsets,
-                                         self.domain.level_count,level,fields,
-                                         self.domain.pf.domain_dimensions,
-                                         self.domain.pf.parameters['ncell0'])
-            source= {}
-            for i,field in enumerate(fields):
-                if level==0:
-                    temp = np.reshape(data[i,:],(no,8),order='C')
-                else:
-                    temp = np.reshape(data[i,:],(no,8),order='C')
-                temp = temp.astype('float64')
-                source[field] = temp
-            level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+                temp = np.reshape(data[i,:],(no,8),order='C')
+            temp = temp.astype('float64')
+            source[field] = temp
+        level_offset += oct_handler.fill_level(self.domain.domain_id, 
+                               level, dest, source, self.mask, level_offset)
         return dest
 
 class ARTDomainFile(object):
@@ -481,20 +486,22 @@
     _last_mask = None
     _last_seletor_id = None
 
-    def __init__(self,pf,domain_id,nvar):
+    def __init__(self,pf,domain_id,nvar,level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
+        self.domain_level = level
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
 
+
     @property
     def level_count(self):
         #this is number of *octs*
         if self._level_count is not None: return self._level_count
         self.level_offsets
-        return self._level_count
+        return self._level_count[self.domain_level]
 
     @property
     def level_child_offsets(self):
@@ -538,44 +545,43 @@
         self.level_offsets
         f = open(self.pf.file_amr, "rb")
         #add the root *cell* not *oct* mesh
+        level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
-        LE = np.array([0.0, 0.0, 0.0], dtype='float64')
-        RE = np.array([1.0, 1.0, 1.0], dtype='float64')
-        root_dx = (RE - LE) / NX
-        LL = LE + root_dx/2.0
-        RL = RE - root_dx/2.0
-        #compute floating point centers of root octs
-        root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                          LL[1]:RL[1]:NX[1]*1j,
-                          LL[2]:RL[2]:NX[2]*1j ]
-        root_idx = na.arange(np.prod(root_fc.shape))
-        #must add in 000,100,200,300,...010,020,...
-        #001,002,003,... xyz order
-        root_fc= np.vstack([p.ravel() for p in root_fc]).T
-        nocts_check = oct_handler.add(1, 0, root_octs_side**3,
-                                      root_fc, self.domain_id)
-        assert(oct_handler.nocts == root_fc.shape[0])
-        nocts_added = root_fc.shape[0]
-        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                    root_octs_side**3, 0,nocts_added)
-        for level in xrange(1, self.pf.max_level+1):
+        octs_side = NX*2**level
+        if level == 0:
+            LE = np.array([0.0, 0.0, 0.0], dtype='float64')
+            RE = np.array([1.0, 1.0, 1.0], dtype='float64')
+            root_dx = (RE - LE) / NX
+            LL = LE + root_dx/2.0
+            RL = RE - root_dx/2.0
+            #compute floating point centers of root octs
+            root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                              LL[1]:RL[1]:NX[1]*1j,
+                              LL[2]:RL[2]:NX[2]*1j ]
+            root_fc= np.vstack([p.ravel() for p in root_fc]).T
+            nocts_check = oct_handler.add(self.domain_id, level, 
+                                          root_octs_side**3,
+                                          root_fc, self.domain_id)
+            assert(oct_handler.nocts == root_fc.shape[0])
+            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                        root_octs_side**3, 0,oct_handler.nocts)
+        else:
             left_index, fl, iocts, nocts,root_level = _read_art_level_info(f, 
                 self._level_oct_offsets,level,
                 coarse_grid=self.pf.domain_dimensions[0])
             left_index/=2
             #at least one of the indices should be odd
             #assert np.sum(left_index[:,0]%2==1)>0
-            octs_side = NX*2**level
             float_left_edge = left_index.astype("float64") / octs_side
             float_center = float_left_edge + 0.5*1.0/octs_side
             #all floatin unitary positions should fit inside the domain
             assert np.all(float_center<1.0)
-            nocts_check = oct_handler.add(1,level, nocts, float_left_edge, self.domain_id)
-            nocts_added += nocts
-            assert(oct_handler.nocts == nocts_added)
+            nocts_check = oct_handler.add(self.domain_id,level, nocts, 
+                                          float_left_edge, self.domain_id)
+            assert(nocts_check == nocts)
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level,nocts_added)
+                        nocts, level,oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:


https://bitbucket.org/yt_analysis/yt/commits/bb6e0006016d/
Changeset:   bb6e0006016d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-16 20:38:48
Summary:     root mesh works. not sure about orientation
Affected #:  2 files

diff -r c6318049078b719469c8c1002be74b51d6c85360 -r bb6e0006016db30895a802dfb4e2ee9355afaf8d yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -467,13 +467,18 @@
         source= {}
         for i,field in enumerate(fields):
             if level==0:
-                temp = np.reshape(data[i,:],(no,8),order='C')
+                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+                                  order='C').T
             else:
                 temp = np.reshape(data[i,:],(no,8),order='C')
             temp = temp.astype('float64')
             source[field] = temp
-        level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                               level, dest, source, self.mask, level_offset)
+        if level==0:
+            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
+                                   level, dest, source, self.mask, level_offset)
+        else:
+            level_offset += oct_handler.fill_level(self.domain.domain_id, 
+                                   level, dest, source, self.mask, level_offset)
         return dest
 
 class ARTDomainFile(object):

diff -r c6318049078b719469c8c1002be74b51d6c85360 -r bb6e0006016db30895a802dfb4e2ee9355afaf8d yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -872,6 +872,48 @@
                             local_filled += 1
         return local_filled
 
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def fill_level_from_grid(self, int domain, int level, dest_fields, 
+                             source_fields, 
+                             np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                             int offset):
+        #precisely like fill level, but instead of assuming that the source
+        #order is that of the oct order, we look up the oct position
+        #and fill its children from the the source field
+        #as a result, source is 3D field with 8 times as many
+        #elements as the number of octs on this level in this domain
+        cdef np.ndarray[np.float64_t, ndim=3] source
+        cdef np.ndarray[np.float64_t, ndim=1] dest
+        cdef OctAllocationContainer *dom = self.domains[domain - 1]
+        cdef Oct *o
+        cdef int n
+        cdef int i, j, k, ii
+        cdef int local_pos, local_filled
+        cdef np.float64_t val
+        cdef np.float64_t ox,oy,oz
+        for key in dest_fields:
+            local_filled = 0
+            dest = dest_fields[key]
+            source = source_fields[key]
+            for n in range(dom.n):
+                o = &dom.my_octs[n]
+                if o.level != level: continue
+                for i in range(2):
+                    for j in range(2):
+                        for k in range(2):
+                            ii = ((k*2)+j)*2+i
+                            if mask[o.local_ind, ii] == 0: continue
+                            ox = o.pos[0]*2 + i
+                            oy = o.pos[1]*2 + j
+                            oz = o.pos[2]*2 + k
+                            dest[local_filled + offset] = source[ox,oy,oz]
+                            local_filled += 1
+        return local_filled
+
+
 cdef int compare_octs(void *vo1, void *vo2) nogil:
     cdef Oct *o1 = (<Oct**> vo1)[0]
     cdef Oct *o2 = (<Oct**> vo2)[0]
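
fill_level_from_grid differs from fill_level in that the source is a 3D grid
indexed by absolute child-cell position: each oct's children sit at twice the
oct's integer position plus a 0/1 offset per axis, and the child's flat index
within the oct's mask is ii = ((k*2)+j)*2+i.  A pure-Python rendering of that
index mapping (illustrative only, no Cython):

    # An oct at integer position (px, py, pz) on its level owns the 2x2x2
    # block of cells starting at (2*px, 2*py, 2*pz) on the child grid.
    def child_cells(px, py, pz):
        for k in range(2):
            for j in range(2):
                for i in range(2):
                    ii = ((k * 2) + j) * 2 + i   # flat index into the oct mask
                    yield ii, (2 * px + i, 2 * py + j, 2 * pz + k)

    cells = dict(child_cells(3, 0, 1))
    assert cells[0] == (6, 0, 2)    # ii == 0 is child (0, 0, 0)
    assert cells[7] == (7, 1, 3)    # ii == 7 is child (1, 1, 1)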


https://bitbucket.org/yt_analysis/yt/commits/de3ec92150bc/
Changeset:   de3ec92150bc
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-16 21:04:57
Summary:     choosing this root mesh orientation; still not sure
Affected #:  2 files

diff -r bb6e0006016db30895a802dfb4e2ee9355afaf8d -r de3ec92150bce327448894fe19acd68520815669 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -467,8 +467,8 @@
         source= {}
         for i,field in enumerate(fields):
             if level==0:
-                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
-                                  order='C').T
+                temp1 = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+                                  order='F')
             else:
                 temp = np.reshape(data[i,:],(no,8),order='C')
             temp = temp.astype('float64')

diff -r bb6e0006016db30895a802dfb4e2ee9355afaf8d -r de3ec92150bce327448894fe19acd68520815669 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -873,7 +873,7 @@
         return local_filled
 
 
-    @cython.boundscheck(False)
+    @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level_from_grid(self, int domain, int level, dest_fields, 
@@ -893,7 +893,7 @@
         cdef int i, j, k, ii
         cdef int local_pos, local_filled
         cdef np.float64_t val
-        cdef np.float64_t ox,oy,oz
+        cdef np.int64_t ox,oy,oz
         for key in dest_fields:
             local_filled = 0
             dest = dest_fields[key]
@@ -906,9 +906,9 @@
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
                             if mask[o.local_ind, ii] == 0: continue
-                            ox = o.pos[0]*2 + i
-                            oy = o.pos[1]*2 + j
-                            oz = o.pos[2]*2 + k
+                            ox = (o.pos[0] << 1) + i
+                            oy = (o.pos[1] << 1) + j
+                            oz = (o.pos[2] << 1) + k
                             dest[local_filled + offset] = source[ox,oy,oz]
                             local_filled += 1
         return local_filled
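
Two quick sanity checks on this changeset (illustrative, not from the
commit): for the cubic ART root grid, a C-order reshape followed by a
transpose gives the same array as a Fortran-order reshape, which is why
the first hunk can drop the trailing .T; and the bit shifts in the
second hunk are simply pos * 2 + i on non-negative integers, moved onto
integer types.

    import numpy as np

    # Equivalence check for a cubic grid; dims stands in for the
    # (equal-sided) domain_dimensions of the ART root mesh.
    dims = (4, 4, 4)
    data = np.arange(np.prod(dims), dtype="float64")
    assert np.array_equal(
        np.reshape(data, dims, order="C").T,
        np.reshape(data, dims, order="F"),
    )
    # And (p << 1) + i == p * 2 + i for any non-negative integer p.
    assert all(((p << 1) + 1) == (p * 2 + 1) for p in range(8))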


https://bitbucket.org/yt_analysis/yt/commits/36ed96388a0f/
Changeset:   36ed96388a0f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-16 21:06:13
Summary:     typo
Affected #:  1 file

diff -r de3ec92150bce327448894fe19acd68520815669 -r 36ed96388a0f895c0525e269b97e2f67295bb7cb yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -467,7 +467,7 @@
         source= {}
         for i,field in enumerate(fields):
             if level==0:
-                temp1 = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
                                   order='F')
             else:
                 temp = np.reshape(data[i,:],(no,8),order='C')


https://bitbucket.org/yt_analysis/yt/commits/a5d8d694965e/
Changeset:   a5d8d694965e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 08:04:41
Summary:     first pass at subchunking
Affected #:  2 files

diff -r 36ed96388a0f895c0525e269b97e2f67295bb7cb -r a5d8d694965ee91c2172167871bb36e0ad1b0024 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -90,6 +90,7 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
         self.max_level = pf.max_level
         self.float_type = np.float64
+        self.subchunk_size = long(2e5) #maximum # of octs per chunk
         super(ARTGeometryHandler,self).__init__(pf,data_style)
 
     def _initialize_oct_handler(self):
@@ -136,9 +137,21 @@
             #For all domains, figure out how many counts we have 
             #and build a subset=mask of domains 
             subsets = []
+            def subchunk(count,size):
+                for i in range(0,count,size):
+                    yield i,i+min(size,count-i)
             for d,c in zip(self.domains,counts):
-                if c>0:
+                nocts = d.level_count[d.domain_level]
+                if c<1: continue
+                if d.domain_level > 0:
+                    for noct_range in subchunk(nocts,self.subchunk_size):
+                        subsets += ARTDomainSubset(d,mask,c,d.domain_level,
+                                                   noct_range=noct_range),
+                        mylog.debug("Creating subset of octs %i - %i",
+                                    noct_range[0],noct_range[1])
+                else:
                     subsets += ARTDomainSubset(d,mask,c,d.domain_level),
+
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -399,7 +412,8 @@
         return False
 
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count,domain_level):
+    def __init__(self, domain, mask, cell_count,domain_level,
+                 noct_range=None):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
@@ -411,6 +425,7 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
+        self.noct_range = noct_range
 
     def icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
@@ -455,24 +470,20 @@
         offset = self.domain.level_offsets
         no = self.domain.level_count[level]
         if level==0:
+            source= {}
             data = _read_root_level(content,self.domain.level_child_offsets,
                                    self.domain.level_count)
-            data = data[field_idxs,:]
+            for i in field_idxs:
+                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+                                  order='F').astype('float64')
+                source[field] = temp
         else:
-            data = _read_child_level(content,self.domain.level_child_offsets,
+            source = _read_child_level(content,self.domain.level_child_offsets,
                                      self.domain.level_offsets,
                                      self.domain.level_count,level,fields,
                                      self.domain.pf.domain_dimensions,
-                                     self.domain.pf.parameters['ncell0'])
-        source= {}
-        for i,field in enumerate(fields):
-            if level==0:
-                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
-                                  order='F')
-            else:
-                temp = np.reshape(data[i,:],(no,8),order='C')
-            temp = temp.astype('float64')
-            source[field] = temp
+                                     self.domain.pf.parameters['ncell0'],
+                                     noct_range=self.noct_range)
         if level==0:
             level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
                                    level, dest, source, self.mask, level_offset)

diff -r 36ed96388a0f895c0525e269b97e2f67295bb7cb -r a5d8d694965ee91c2172167871bb36e0ad1b0024 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -250,30 +250,43 @@
 dtyp = np.dtype(">i4,>i8,>i8"+",>%sf4"%(nchem)+ \
                 ",>%sf4"%(2)+",>i4")
 def _read_child_level(f,level_child_offsets,level_oct_offsets,level_info,level,
-                      fields,domain_dimensions,ncell0,nhydro_vars=10,nchild=8):
+                      fields,domain_dimensions,ncell0,nhydro_vars=10,nchild=8,
+                      noct_range=None):
     #emulate the fortran code for reading cell data
     #read ( 19 ) idc, iOctCh(idc), (hvar(i,idc),i=1,nhvar), 
     #    &                 (var(i,idc), i=2,3)
     #contiguous 8-cell sections are for the same oct;
     #ie, we don't write out just the 0 cells, then the 1 cells
+    #optionally, we only read noct_range to save memory
     left_index, fl, octs, nocts,root_level = _read_art_level_info(f, 
         level_oct_offsets,level, coarse_grid=domain_dimensions[0])
-    nocts = level_info[level]
-    ncells = nocts*8
-    f.seek(level_child_offsets[level])
-    arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
-    assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
-    #idc = np.argsort(arr['idc']) #correct fortran indices
-    #translate idc into icell, and then to iOct
-    icell = (arr['idc'] >> 3) << 3
-    iocts = (icell-ncell0)/nchild #without an F correction, there's a +1
-    #assert that the children are read in the same order as the octs
-    assert np.all(octs==iocts[::nchild]) 
-    if len(fields)>1:
-        vars = np.concatenate((arr[field] for field in fields))
+    if noct_range is None:
+        nocts = level_info[level]
+        ncells = nocts*8
+        f.seek(level_child_offsets[level])
+        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
+        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
+        #idc = np.argsort(arr['idc']) #correct fortran indices
+        #translate idc into icell, and then to iOct
+        icell = (arr['idc'] >> 3) << 3
+        iocts = (icell-ncell0)/nchild #without an F correction, there's a +1
+        #assert that the children are read in the same order as the octs
+        assert np.all(octs==iocts[::nchild]) 
     else:
-        vars = arr[fields[0]].reshape((1,arr.shape[0]))
-    return vars
+        start,end = noct_range
+        nocts = min(end-start,level_info[level])
+        end = start + nocts
+        ncells = nocts*8
+        skip = np.dtype(hydro_struct).itemsize*start
+        f.seek(level_child_offsets[level]+skip)
+        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
+        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
+    source = {}
+    for field in fields:
+        sh = (nocts,8)
+        source[field] = np.reshape(arr[field],sh,order='C').astype('float64')
+    return source
+
 
 def _read_root_level(f,level_offsets,level_info,nhydro_vars=10):
     nocts = level_info[0]
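
The subchunk generator and the seek-by-record-size read are the heart of
this changeset. A runnable sketch of the pattern, with a stand-in record
dtype rec in place of hydro_struct (one record per oct here; the names
are illustrative, not the frontend's):

    import numpy as np

    # One fixed-size record per oct, so a (start, end) oct range begins
    # rec.itemsize * start bytes past the level's base offset.
    rec = np.dtype([("idc", ">i4"), ("cells", ">f4", (8,)), ("pad", ">i4")])

    def subchunk(count, size):
        # Half-open (start, end) ranges tiling `count` in steps of `size`,
        # with a short final range instead of overrunning.
        for i in range(0, count, size):
            yield i, i + min(size, count - i)

    def read_oct_range(f, base_offset, start, end):
        f.seek(base_offset + rec.itemsize * start)
        return np.fromfile(f, dtype=rec, count=end - start)

    assert list(subchunk(7, 3)) == [(0, 3), (3, 6), (6, 7)]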


https://bitbucket.org/yt_analysis/yt/commits/693811a27b56/
Changeset:   693811a27b56
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 08:05:33
Summary:     removing bounds checking now that I'm not debugging
Affected #:  1 file

diff -r a5d8d694965ee91c2172167871bb36e0ad1b0024 -r 693811a27b56a78e3cdd9e9f7815615d3f0d710b yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -873,7 +873,7 @@
         return local_filled
 
 
-    @cython.boundscheck(True)
+    @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level_from_grid(self, int domain, int level, dest_fields, 
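
For readers unfamiliar with the decorator being flipped here:
@cython.boundscheck is a per-function toggle, and turning it on while
debugging converts silent out-of-bounds access (often a segfault) into
an IndexError. A minimal pure-Python-mode sketch, assuming the cython
package is installed (hypothetical function, not this file):

    import cython

    @cython.boundscheck(False)  # unchecked indexing once compiled
    @cython.wraparound(False)   # no negative-index translation
    def copy_into(dest, source):
        for n in range(len(source)):
            dest[n] = source[n]  # with boundscheck(True), a bad index raises
        return len(source)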


https://bitbucket.org/yt_analysis/yt/commits/59c31766a6ab/
Changeset:   59c31766a6ab
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 08:37:28
Summary:     working on subchunking
Affected #:  2 files

diff -r 693811a27b56a78e3cdd9e9f7815615d3f0d710b -r 59c31766a6ab42498f7dea64a42465bc0ec87c88 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -489,7 +489,9 @@
                                    level, dest, source, self.mask, level_offset)
         else:
             level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+                                   level, dest, source, self.mask, level_offset,
+                                   skip_start=self.noct_range[0],
+                                   skip_end=self.noct_range[1])
         return dest
 
 class ARTDomainFile(object):

diff -r 693811a27b56a78e3cdd9e9f7815615d3f0d710b -r 59c31766a6ab42498f7dea64a42465bc0ec87c88 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -843,17 +843,18 @@
                         level_counts[o.level] += 1
         return coords
 
-    @cython.boundscheck(False)
+    @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level(self, int domain, int level, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
+                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset,
+                   int skip_start=0, int skip_end=np.iinfo(np.int32()).max):
         cdef np.ndarray[np.float64_t, ndim=2] source
         cdef np.ndarray[np.float64_t, ndim=1] dest
         cdef OctAllocationContainer *dom = self.domains[domain - 1]
         cdef Oct *o
         cdef int n
-        cdef int i, j, k, ii
+        cdef int i, j, k, ii, index
         cdef int local_pos, local_filled
         cdef np.float64_t val
         for key in dest_fields:
@@ -863,12 +864,15 @@
             for n in range(dom.n):
                 o = &dom.my_octs[n]
                 if o.level != level: continue
+                index = o.ind - skip_start
+                if index < 0: continue
+                if index >= skip_end: continue
                 for i in range(2):
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
                             if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.ind, ii]
+                            dest[local_filled + offset] = source[index, ii]
                             local_filled += 1
         return local_filled
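
The new skip_start/skip_end arguments implement a chunk window. A
pure-Python restatement of the test (not the Cython itself); note that
skip_end is compared against the chunk-relative index, so it acts as a
chunk length rather than an absolute end index:

    def chunk_local_index(ind, skip_start, skip_end):
        # Shift a global oct index into the chunk-local frame.
        index = ind - skip_start
        if index < 0 or index >= skip_end:
            return None  # this oct lies outside the current subchunk
        return index     # row into the chunk-local source array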
 


https://bitbucket.org/yt_analysis/yt/commits/0769bc0d9286/
Changeset:   0769bc0d9286
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:22:38
Summary:     removing changes to oct container
Affected #:  1 file

diff -r 59c31766a6ab42498f7dea64a42465bc0ec87c88 -r 0769bc0d9286442cabb5da526d5a69a6b4da57e5 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -843,18 +843,17 @@
                         level_counts[o.level] += 1
         return coords
 
-    @cython.boundscheck(True)
+    @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level(self, int domain, int level, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset,
-                   int skip_start=0, int skip_end=np.iinfo(np.int32()).max):
+                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
         cdef np.ndarray[np.float64_t, ndim=2] source
         cdef np.ndarray[np.float64_t, ndim=1] dest
         cdef OctAllocationContainer *dom = self.domains[domain - 1]
         cdef Oct *o
         cdef int n
-        cdef int i, j, k, ii, index
+        cdef int i, j, k, ii
         cdef int local_pos, local_filled
         cdef np.float64_t val
         for key in dest_fields:
@@ -864,15 +863,12 @@
             for n in range(dom.n):
                 o = &dom.my_octs[n]
                 if o.level != level: continue
-                index = o.ind - skip_start
-                if index < 0: continue
-                if index >= skip_end: continue
                 for i in range(2):
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
                             if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[index, ii]
+                            dest[local_filled + offset] = source[o.ind, ii]
                             local_filled += 1
         return local_filled
 


https://bitbucket.org/yt_analysis/yt/commits/a00c2febec62/
Changeset:   a00c2febec62
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:23:25
Summary:     segfaulting on oct selection
Affected #:  1 file

diff -r 0769bc0d9286442cabb5da526d5a69a6b4da57e5 -r a00c2febec629284ce16c5f83a41416d3525e20b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -75,7 +75,6 @@
     FieldInfoContainer, NullFunc
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
-
 class ARTGeometryHandler(OctreeGeometryHandler):
     def __init__(self,pf,data_style="art"):
         """
@@ -99,8 +98,24 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,i+1,nv,l)
-                        for i,l in enumerate(range(self.pf.max_level))]
+
+        def subchunk(count,size):
+            for i in range(0,count,size):
+                yield i,i+min(size,count-i)
+
+        self.domains = []
+        root = ARTDomainFile(self.parameter_file,1,nv,0,None)
+        count = root.level_count
+        counts = root._level_count
+        self.domains += root,
+        i=2
+        for did,l in enumerate(range(1,self.pf.max_level)):
+            count = counts[l]
+            for cpu,noct_range in enumerate(subchunk(count,self.subchunk_size)):
+                self.domains += ARTDomainFile(self.parameter_file,i,
+                                              nv,l,noct_range),
+                i +=1
+
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
         self.oct_handler = RAMSESOctreeContainer(
@@ -137,21 +152,10 @@
             #For all domains, figure out how many counts we have 
             #and build a subset=mask of domains 
             subsets = []
-            def subchunk(count,size):
-                for i in range(0,count,size):
-                    yield i,i+min(size,count-i)
             for d,c in zip(self.domains,counts):
                 nocts = d.level_count[d.domain_level]
                 if c<1: continue
-                if d.domain_level > 0:
-                    for noct_range in subchunk(nocts,self.subchunk_size):
-                        subsets += ARTDomainSubset(d,mask,c,d.domain_level,
-                                                   noct_range=noct_range),
-                        mylog.debug("Creating subset of octs %i - %i",
-                                    noct_range[0],noct_range[1])
-                else:
-                    subsets += ARTDomainSubset(d,mask,c,d.domain_level),
-
+                subsets += ARTDomainSubset(d,mask,c,d.domain_level),
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -412,8 +416,7 @@
         return False
 
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count,domain_level,
-                 noct_range=None):
+    def __init__(self, domain, mask, cell_count,domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
@@ -425,7 +428,7 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
-        self.noct_range = noct_range
+        self.noct_range = domain.noct_range
 
     def icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
@@ -477,6 +480,8 @@
                 temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
                                   order='F').astype('float64')
                 source[field] = temp
+            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
+                                   level, dest, source, self.mask, level_offset)
         else:
             source = _read_child_level(content,self.domain.level_child_offsets,
                                      self.domain.level_offsets,
@@ -484,14 +489,8 @@
                                      self.domain.pf.domain_dimensions,
                                      self.domain.pf.parameters['ncell0'],
                                      noct_range=self.noct_range)
-        if level==0:
-            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
-        else:
             level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset,
-                                   skip_start=self.noct_range[0],
-                                   skip_end=self.noct_range[1])
+                                level, dest, source, self.mask, level_offset)
         return dest
 
 class ARTDomainFile(object):
@@ -504,7 +503,7 @@
     _last_mask = None
     _last_seletor_id = None
 
-    def __init__(self,pf,domain_id,nvar,level):
+    def __init__(self,pf,domain_id,nvar,level,noct_range=None,cpu=0):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
@@ -512,6 +511,8 @@
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
+        self.noct_range = noct_range
+        self.cpu = cpu #this is where we do our subchunking
 
 
     @property
@@ -595,9 +596,17 @@
             float_center = float_left_edge + 0.5*1.0/octs_side
             #all floating unitary positions should fit inside the domain
             assert np.all(float_center<1.0)
-            nocts_check = oct_handler.add(self.domain_id,level, nocts, 
-                                          float_left_edge, self.domain_id)
-            assert(nocts_check == nocts)
+            if self.noct_range is None:
+                nocts_check = oct_handler.add(self.domain_id,level, nocts, 
+                                              float_left_edge, self.domain_id)
+                assert(nocts_check == nocts)
+            else:
+                s,e= self.noct_range[0],self.noct_range[1]
+                nocts_check = oct_handler.add(self.domain_id,level, e-s,
+                                              float_left_edge[s:e],
+                                              self.domain_id)
+                assert(nocts_check == e-s)
+                nocts = e-s
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         nocts, level,oct_handler.nocts)
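
The layout this changeset attempts is one pseudo-domain per
(level, oct-subchunk) pair, with domain 1 reserved for the root level.
A hedged mock-up with invented names, just to make the id assignment
explicit:

    def build_domains(level_counts, subchunk_size):
        # (domain_id, level, noct_range); domain 1 is the root level.
        domains = [(1, 0, None)]
        next_id = 2
        for level, count in enumerate(level_counts[1:], start=1):
            for start in range(0, count, subchunk_size):
                end = start + min(subchunk_size, count - start)
                domains.append((next_id, level, (start, end)))
                next_id += 1
        return domains

    # build_domains([8, 5], 2) ->
    # [(1, 0, None), (2, 1, (0, 2)), (3, 1, (2, 4)), (4, 1, (4, 5))]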
 


https://bitbucket.org/yt_analysis/yt/commits/b9ec68615fcb/
Changeset:   b9ec68615fcb
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:25:06
Summary:     removed noct_range
Affected #:  1 file

diff -r a00c2febec629284ce16c5f83a41416d3525e20b -r b9ec68615fcb1a34c66cf0c8b0b62c119cfd2dd6 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -108,13 +108,9 @@
         count = root.level_count
         counts = root._level_count
         self.domains += root,
-        i=2
         for did,l in enumerate(range(1,self.pf.max_level)):
-            count = counts[l]
-            for cpu,noct_range in enumerate(subchunk(count,self.subchunk_size)):
-                self.domains += ARTDomainFile(self.parameter_file,i,
-                                              nv,l,noct_range),
-                i +=1
+            self.domains += ARTDomainFile(self.parameter_file,did+1,
+                                          nv,l),
 
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
@@ -428,7 +424,6 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
-        self.noct_range = domain.noct_range
 
     def icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
@@ -487,8 +482,7 @@
                                      self.domain.level_offsets,
                                      self.domain.level_count,level,fields,
                                      self.domain.pf.domain_dimensions,
-                                     self.domain.pf.parameters['ncell0'],
-                                     noct_range=self.noct_range)
+                                     self.domain.pf.parameters['ncell0'])
             level_offset += oct_handler.fill_level(self.domain.domain_id, 
                                 level, dest, source, self.mask, level_offset)
         return dest
@@ -503,7 +497,7 @@
     _last_mask = None
     _last_seletor_id = None
 
-    def __init__(self,pf,domain_id,nvar,level,noct_range=None,cpu=0):
+    def __init__(self,pf,domain_id,nvar,level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
@@ -511,8 +505,6 @@
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
-        self.noct_range = noct_range
-        self.cpu = cpu #this is where we do our subchunking
 
 
     @property
@@ -596,17 +588,9 @@
             float_center = float_left_edge + 0.5*1.0/octs_side
             #all floating unitary positions should fit inside the domain
             assert np.all(float_center<1.0)
-            if self.noct_range is None:
-                nocts_check = oct_handler.add(self.domain_id,level, nocts, 
-                                              float_left_edge, self.domain_id)
-                assert(nocts_check == nocts)
-            else:
-                s,e= self.noct_range[0],self.noct_range[1]
-                nocts_check = oct_handler.add(self.domain_id,level, e-s,
-                                              float_left_edge[s:e],
-                                              self.domain_id)
-                assert(nocts_check == e-s)
-                nocts = e-s
+            nocts_check = oct_handler.add(self.domain_id,level, nocts, 
+                                          float_left_edge, self.domain_id)
+            assert(nocts_check == nocts)
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         nocts, level,oct_handler.nocts)
 


https://bitbucket.org/yt_analysis/yt/commits/cb447924d99b/
Changeset:   cb447924d99b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:28:43
Summary:     still segfaulting
Affected #:  1 file

diff -r b9ec68615fcb1a34c66cf0c8b0b62c119cfd2dd6 -r cb447924d99bc566ce61f5886290f06e8ba2886a yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -104,12 +104,12 @@
                 yield i,i+min(size,count-i)
 
         self.domains = []
-        root = ARTDomainFile(self.parameter_file,1,nv,0,None)
+        root = ARTDomainFile(self.parameter_file,1,nv,0)
         count = root.level_count
         counts = root._level_count
         self.domains += root,
         for did,l in enumerate(range(1,self.pf.max_level)):
-            self.domains += ARTDomainFile(self.parameter_file,did+1,
+            self.domains += ARTDomainFile(self.parameter_file,did+2,
                                           nv,l),
 
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]


https://bitbucket.org/yt_analysis/yt/commits/1036e4343ab6/
Changeset:   1036e4343ab6
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:35:27
Summary:     turning off checks
Affected #:  1 file

diff -r de3ec92150bce327448894fe19acd68520815669 -r 1036e4343ab6f8f78545d14a84bffc28fa357224 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -873,7 +873,7 @@
         return local_filled
 
 
-    @cython.boundscheck(True)
+    @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level_from_grid(self, int domain, int level, dest_fields, 


https://bitbucket.org/yt_analysis/yt/commits/4381be62c1e2/
Changeset:   4381be62c1e2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:39:00
Summary:     cleaning up IO<->data structures
Affected #:  2 files

diff -r 1036e4343ab6f8f78545d14a84bffc28fa357224 -r 4381be62c1e27e844faaa8a8844ce1f13e6f29fd yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -75,7 +75,6 @@
     FieldInfoContainer, NullFunc
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
-
 class ARTGeometryHandler(OctreeGeometryHandler):
     def __init__(self,pf,data_style="art"):
         """
@@ -455,30 +454,23 @@
         offset = self.domain.level_offsets
         no = self.domain.level_count[level]
         if level==0:
+            source= {}
             data = _read_root_level(content,self.domain.level_child_offsets,
                                    self.domain.level_count)
-            data = data[field_idxs,:]
+            for i in field_idxs:
+                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+                                  order='F').astype('float64')
+                source[field] = temp
+            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
+                                   level, dest, source, self.mask, level_offset)
         else:
-            data = _read_child_level(content,self.domain.level_child_offsets,
+            source = _read_child_level(content,self.domain.level_child_offsets,
                                      self.domain.level_offsets,
                                      self.domain.level_count,level,fields,
                                      self.domain.pf.domain_dimensions,
                                      self.domain.pf.parameters['ncell0'])
-        source= {}
-        for i,field in enumerate(fields):
-            if level==0:
-                temp1 = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
-                                  order='F')
-            else:
-                temp = np.reshape(data[i,:],(no,8),order='C')
-            temp = temp.astype('float64')
-            source[field] = temp
-        if level==0:
-            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
-        else:
             level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+                                level, dest, source, self.mask, level_offset)
         return dest
 
 class ARTDomainFile(object):

diff -r 1036e4343ab6f8f78545d14a84bffc28fa357224 -r 4381be62c1e27e844faaa8a8844ce1f13e6f29fd yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -250,30 +250,43 @@
 dtyp = np.dtype(">i4,>i8,>i8"+",>%sf4"%(nchem)+ \
                 ",>%sf4"%(2)+",>i4")
 def _read_child_level(f,level_child_offsets,level_oct_offsets,level_info,level,
-                      fields,domain_dimensions,ncell0,nhydro_vars=10,nchild=8):
+                      fields,domain_dimensions,ncell0,nhydro_vars=10,nchild=8,
+                      noct_range=None):
     #emulate the fortran code for reading cell data
     #read ( 19 ) idc, iOctCh(idc), (hvar(i,idc),i=1,nhvar), 
     #    &                 (var(i,idc), i=2,3)
     #contiguous 8-cell sections are for the same oct;
     #ie, we don't write out just the 0 cells, then the 1 cells
+    #optionally, we only read noct_range to save memory
     left_index, fl, octs, nocts,root_level = _read_art_level_info(f, 
         level_oct_offsets,level, coarse_grid=domain_dimensions[0])
-    nocts = level_info[level]
-    ncells = nocts*8
-    f.seek(level_child_offsets[level])
-    arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
-    assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
-    #idc = np.argsort(arr['idc']) #correct fortran indices
-    #translate idc into icell, and then to iOct
-    icell = (arr['idc'] >> 3) << 3
-    iocts = (icell-ncell0)/nchild #without an F correction, there's a +1
-    #assert that the children are read in the same order as the octs
-    assert np.all(octs==iocts[::nchild]) 
-    if len(fields)>1:
-        vars = np.concatenate((arr[field] for field in fields))
+    if noct_range is None:
+        nocts = level_info[level]
+        ncells = nocts*8
+        f.seek(level_child_offsets[level])
+        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
+        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
+        #idc = np.argsort(arr['idc']) #correct fortran indices
+        #translate idc into icell, and then to iOct
+        icell = (arr['idc'] >> 3) << 3
+        iocts = (icell-ncell0)/nchild #without an F correction, there's a +1
+        #assert that the children are read in the same order as the octs
+        assert np.all(octs==iocts[::nchild]) 
     else:
-        vars = arr[fields[0]].reshape((1,arr.shape[0]))
-    return vars
+        start,end = noct_range
+        nocts = min(end-start,level_info[level])
+        end = start + nocts
+        ncells = nocts*8
+        skip = np.dtype(hydro_struct).itemsize*start
+        f.seek(level_child_offsets[level]+skip)
+        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
+        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
+    source = {}
+    for field in fields:
+        sh = (nocts,8)
+        source[field] = np.reshape(arr[field],sh,order='C').astype('float64')
+    return source
+
 
 def _read_root_level(f,level_offsets,level_info,nhydro_vars=10):
     nocts = level_info[0]
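
After this cleanup, _read_child_level hands back a dict mapping each
field name to a (nocts, 8) float64 array, so the fill routine can index
source[field][oct, cell] directly. In miniature, with a stand-in
structured array:

    import numpy as np

    arr = np.zeros(16, dtype=[("Density", ">f4"), ("TotalEnergy", ">f4")])
    nocts = len(arr) // 8  # eight child cells per oct
    source = {
        f: np.reshape(arr[f], (nocts, 8), order="C").astype("float64")
        for f in ("Density", "TotalEnergy")
    }
    assert source["Density"].shape == (2, 8)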


https://bitbucket.org/yt_analysis/yt/commits/1c528abd1d07/
Changeset:   1c528abd1d07
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 19:41:03
Summary:     Merge
Affected #:  2 files

diff -r 4381be62c1e27e844faaa8a8844ce1f13e6f29fd -r 1c528abd1d0782edd04ef99bc6a5dc622277145b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -136,8 +136,9 @@
             #and build a subset=mask of domains 
             subsets = []
             for d,c in zip(self.domains,counts):
-                if c>0:
-                    subsets += ARTDomainSubset(d,mask,c,d.domain_level),
+                nocts = d.level_count[d.domain_level]
+                if c<1: continue
+                subsets += ARTDomainSubset(d,mask,c,d.domain_level),
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)


https://bitbucket.org/yt_analysis/yt/commits/969d1163ac85/
Changeset:   969d1163ac85
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 20:17:29
Summary:     built the ART octree container; subclasses RAMSES
Affected #:  1 file

diff -r 1c528abd1d0782edd04ef99bc6a5dc622277145b -r 969d1163ac8592415e5c3e55d386920ebcd9b13a yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -873,6 +873,48 @@
         return local_filled
 
 
+
+cdef class ARTOctreeContainer(RAMSESOctreeContainer):
+
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def fill_level(self, int domain, int level, dest_fields, source_fields,
+                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset,
+                   np.int64_t subchunk_offset, np.int64_t subchunk_max):
+        cdef np.ndarray[np.float64_t, ndim=2] source
+        cdef np.ndarray[np.float64_t, ndim=1] dest
+        cdef OctAllocationContainer *dom = self.domains[domain - 1]
+        cdef Oct *o
+        cdef int n
+        cdef int i, j, k, ii
+        cdef int local_pos, local_filled
+        cdef np.float64_t val
+        cdef np.int64_t index
+        for key in dest_fields:
+            local_filled = 0
+            dest = dest_fields[key]
+            source = source_fields[key]
+            for n in range(dom.n):
+                o = &dom.my_octs[n]
+                index = o.ind-subchunk_offset
+                if o.level != level: continue
+                if index < 0: continue
+                if index >= subchunk_max: 
+                    #if we hit the end of the array,
+                    #immediately discontinue
+                    return local_filled
+                for i in range(2):
+                    for j in range(2):
+                        for k in range(2):
+                            ii = ((k*2)+j)*2+i
+                            if mask[o.local_ind, ii] == 0: continue
+                            dest[local_filled + offset] = \
+                                source[index,ii]
+                            local_filled += 1
+        return local_filled
+
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
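
A pure-Python paraphrase of the chunked fill above (not the Cython
itself): source rows are chunk-local, so each oct's global index is
shifted by subchunk_offset, and once an index passes subchunk_max
nothing of this chunk remains. The early exit assumes octs are visited
in index order:

    def fill_level_chunked(octs, level, dest, source, mask, offset,
                           subchunk_offset, subchunk_max):
        # octs: sequence of dicts with "level", "ind", "local_ind" keys
        local_filled = 0
        for o in octs:
            if o["level"] != level:
                continue
            index = o["ind"] - subchunk_offset
            if index < 0:
                continue
            if index >= subchunk_max:
                break  # past the end of this chunk
            for ii in range(8):
                if not mask[o["local_ind"]][ii]:
                    continue
                dest[local_filled + offset] = source[index][ii]
                local_filled += 1
        return local_filled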


https://bitbucket.org/yt_analysis/yt/commits/fba0a8e94d0a/
Changeset:   fba0a8e94d0a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 20:17:50
Summary:     subchunking works
Affected #:  1 file

diff -r 969d1163ac8592415e5c3e55d386920ebcd9b13a -r fba0a8e94d0a17853ea4c1b8e9fbee0e14d306a3 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -39,7 +39,7 @@
 from yt.data_objects.static_output import \
       StaticOutput
 from yt.geometry.oct_container import \
-    RAMSESOctreeContainer
+    ARTOctreeContainer
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, NullFunc
 from .fields import \
@@ -77,11 +77,6 @@
     mass_hydrogen_cgs, sec_per_Gyr
 class ARTGeometryHandler(OctreeGeometryHandler):
     def __init__(self,pf,data_style="art"):
-        """
-        Life is made simpler because we only have one AMR file
-        and one domain. However, we are matching to the RAMSES
-        multi-domain architecture.
-        """
         self.fluid_field_list = fluid_fields
         self.data_style = data_style
         self.parameter_file = weakref.proxy(pf)
@@ -101,7 +96,7 @@
                         for i,l in enumerate(range(self.pf.max_level))]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
-        self.oct_handler = RAMSESOctreeContainer(
+        self.oct_handler = ARTOctreeContainer(
             self.parameter_file.domain_dimensions/2, #dd is # of root cells
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
@@ -465,13 +460,21 @@
             level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
                                    level, dest, source, self.mask, level_offset)
         else:
-            source = _read_child_level(content,self.domain.level_child_offsets,
-                                     self.domain.level_offsets,
-                                     self.domain.level_count,level,fields,
-                                     self.domain.pf.domain_dimensions,
-                                     self.domain.pf.parameters['ncell0'])
-            level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                level, dest, source, self.mask, level_offset)
+            def subchunk(count,size):
+                for i in range(0,count,size):
+                    yield i,i+min(size,count-i)
+            for noct_range in subchunk(no,long(2e5)):
+                source = _read_child_level(content,self.domain.level_child_offsets,
+                                         self.domain.level_offsets,
+                                         self.domain.level_count,level,fields,
+                                         self.domain.pf.domain_dimensions,
+                                         self.domain.pf.parameters['ncell0'],
+                                         noct_range = noct_range)
+                print noct_range
+                nocts_filling = noct_range[1]-noct_range[0]
+                level_offset += oct_handler.fill_level(self.domain.domain_id, 
+                                    level, dest, source, self.mask, level_offset,
+                                    noct_range[0],nocts_filling-1)
         return dest
 
 class ARTDomainFile(object):
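
The driver pattern this changeset lands: read one oct range, fill it,
and advance the destination offset, so peak memory scales with the chunk
size rather than the level size. (One hedged observation: the call
passes nocts_filling - 1 as subchunk_max, and fill_level rejects
index >= subchunk_max, which as written would drop the last oct of each
chunk.) A sketch with read_chunk/fill_chunk standing in for
_read_child_level/fill_level:

    def fill_in_chunks(no, chunk_size, read_chunk, fill_chunk,
                       level_offset=0):
        # no: total octs on this level; chunks are half-open ranges.
        for start in range(0, no, chunk_size):
            end = start + min(chunk_size, no - start)
            source = read_chunk(start, end)
            level_offset += fill_chunk(source, start, end - start)
        return level_offset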


https://bitbucket.org/yt_analysis/yt/commits/3a8469a424cc/
Changeset:   3a8469a424cc
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-17 20:25:59
Summary:     subchunking fill level does not appear to reduce memory consumption
Affected #:  1 file

diff -r fba0a8e94d0a17853ea4c1b8e9fbee0e14d306a3 -r 3a8469a424cc70ebc8c4a271bb488b897ca75f15 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -463,14 +463,13 @@
             def subchunk(count,size):
                 for i in range(0,count,size):
                     yield i,i+min(size,count-i)
-            for noct_range in subchunk(no,long(2e5)):
+            for noct_range in subchunk(no,long(1e5)):
                 source = _read_child_level(content,self.domain.level_child_offsets,
                                          self.domain.level_offsets,
                                          self.domain.level_count,level,fields,
                                          self.domain.pf.domain_dimensions,
                                          self.domain.pf.parameters['ncell0'],
                                          noct_range = noct_range)
-                print noct_range
                 nocts_filling = noct_range[1]-noct_range[0]
                 level_offset += oct_handler.fill_level(self.domain.domain_id, 
                                     level, dest, source, self.mask, level_offset,


https://bitbucket.org/yt_analysis/yt/commits/f23b4482202f/
Changeset:   f23b4482202f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 00:57:07
Summary:     reorder definitions to show diff between stars/dm
Affected #:  1 file

diff -r 3a8469a424cc70ebc8c4a271bb488b897ca75f15 -r f23b4482202ff7daf2ecef0b5138945064d78d88 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -49,22 +49,21 @@
 hydro_struct += ('pad2','>i'),
 
 particle_fields= [
-    'particle_age',
+    'particle_mass', #stars have variable mass
     'particle_index',
-    'particle_mass',
+    'particle_type',
+    'particle_position_x',
+    'particle_position_y',
+    'particle_position_z',
+    'particle_velocity_x',
+    'particle_velocity_y',
+    'particle_velocity_z'
+    'particle_age', #this and below are stellar only fields
     'particle_mass_initial',
     'particle_creation_time',
     'particle_metallicity1',
     'particle_metallicity2',
     'particle_metallicity',
-    'particle_position_x',
-    'particle_position_y',
-    'particle_position_z',
-    'particle_velocity_x',
-    'particle_velocity_y',
-    'particle_velocity_z',
-    'particle_type',
-    'particle_index'
 ]
 
 particle_star_fields = [
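
One thing worth flagging in the reordered list: 'particle_velocity_z'
lost its trailing comma before 'particle_age', and Python concatenates
adjacent string literals, so the list silently gains one fused name and
drops two real field names:

    fields = [
        'particle_velocity_z'
        'particle_age',  # adjacent literals fuse without a comma
    ]
    assert fields == ['particle_velocity_zparticle_age']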


https://bitbucket.org/yt_analysis/yt/commits/4882520a423e/
Changeset:   4882520a423e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 01:19:44
Summary:     skeletal particle support
Affected #:  2 files

diff -r f23b4482202ff7daf2ecef0b5138945064d78d88 -r 4882520a423e78f15a53ead40e4ef0d03ad2bf9f yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -61,7 +61,6 @@
 from .io import _count_art_octs
 from .io import b2t
 
-
 import yt.frontends.ramses._ramses_reader as _ramses_reader
 
 from .fields import ARTFieldInfo, KnownARTFields

diff -r f23b4482202ff7daf2ecef0b5138945064d78d88 -r 4882520a423e78f15a53ead40e4ef0d03ad2bf9f yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -60,36 +60,40 @@
         return tr
 
     def _read_particle_selection(self, chunks, selector, fields):
-        size = 0
+        #ignore chunking; we have no particle chunk system
         masks = {}
-        for chunk in chunks:
-            for subset in chunk.objs:
-                # We read the whole thing, then feed it back to the selector
-                offsets = []
-                f = open(subset.domain.part_fn, "rb")
-                foffsets = subset.domain.particle_field_offsets
-                selection = {}
-                for ax in 'xyz':
-                    field = "particle_position_%s" % ax
-                    f.seek(foffsets[field])
-                    selection[ax] = fpu.read_vector(f, 'd')
-                mask = selector.select_points(selection['x'],
-                            selection['y'], selection['z'])
-                if mask is None: continue
-                size += mask.sum()
-                masks[id(subset)] = mask
-        # Now our second pass
+        pf = chunks[0].objs[0].domain.pf
+        ws,ls = pf.parameters["wspecies"],pf.parameters["lspecies"]
+        file_particle = pf.file_particle_data
+        file_stars = pf.file_particle_stars
+        pos,vel = read_particles(file_particle)
+        del vel
+        mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
+        if mask is None: continue
+        size = mask.sum()
+        if not any(('position' in f for f in fields)):
+            del pos
+        if not any(('velocity' in f for f in fields)):
+            del vel
         tr = dict((f, np.empty(size, dtype="float64")) for f in fields)
-        for chunk in chunks:
-            for subset in chunk.objs:
-                f = open(subset.domain.part_fn, "rb")
-                mask = masks.pop(id(subset), None)
-                if mask is None: continue
-                for ftype, fname in fields:
-                    offsets.append((foffsets[fname], (ftype,fname)))
-                for offset, field in sorted(offsets):
-                    f.seek(offset)
-                    tr[field] = fpu.read_vector(f, 'd')[mask]
+        for field in fields:
+            for ax in 'xyz':
+                if field.startswith("particle_position_%s"%ax):
+                    tr[field]=pos[:,ax][mask]
+                if field.startswith("particle_velocity_%s"%ax):
+                    tr[field]=vel[:,ax][mask]
+            if field == "particle_mass":
+                #replace the stellar masses
+                tr[field]=1.0
+            elif field == "particle_index":
+                tr[field]=1.0
+            elif field == "particle_type":
+                tr[field]=1.0
+            elif field in particle_star_fields:
+                tr[field]=1.0
+            else:
+                raise 
+            tr[field] = fpu.read_vector(f, 'd')[mask]
         return tr
 
 


https://bitbucket.org/yt_analysis/yt/commits/e3311566ea04/
Changeset:   e3311566ea04
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 01:46:36
Summary:     draft of particle support
Affected #:  1 file

diff -r 4882520a423e78f15a53ead40e4ef0d03ad2bf9f -r e3311566ea04f1fb505197edce18cbf3768c0c9f yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -75,7 +75,9 @@
             del pos
         if not any(('velocity' in f for f in fields)):
             del vel
-        tr = dict((f, np.empty(size, dtype="float64")) for f in fields)
+        stara,starb = ls[-2],ls[-1]
+        np = ls[-1]
+        tr = {}
         for field in fields:
             for ax in 'xyz':
                 if field.startswith("particle_position_%s"%ax):
@@ -83,17 +85,30 @@
                 if field.startswith("particle_velocity_%s"%ax):
                     tr[field]=vel[:,ax][mask]
             if field == "particle_mass":
-                #replace the stellar masses
-                tr[field]=1.0
+                a=0
+                data = np.zeros(np,dtype='float64')
+                for b,m in zip(ls,ws):
+                    data[a:b]=(np.ones(size,dtype='float64')*m)
+                    a=b
+                tr[field]=data[mask]
+                #the stellar masses will be updated later
             elif field == "particle_index":
-                tr[field]=1.0
+                tr[field]=np.arange(np)[mask]
             elif field == "particle_type":
-                tr[field]=1.0
-            elif field in particle_star_fields:
-                tr[field]=1.0
-            else:
-                raise 
-            tr[field] = fpu.read_vector(f, 'd')[mask]
+                a=0
+                data = np.zeros(np,dtype='int64')
+                for b,m in zip(ls,ws):
+                    data[a:b]=(np.ones(size,dtype='int64')*i)
+                    a=b
+                tr[field]=data[mask]
+            if field in particle_star_fields:
+                #we possibly update and change the masses here
+                #all other fields are read in and changed once
+                temp= read_star_field(file_stars,field=field)
+                data = np.zeros(np,dtype="float64")
+                data[stara:starb] = temp
+                del temp
+                tr[field]=data[mask]
         return tr
 
 
@@ -224,7 +239,7 @@
                 data[field] = read_vector(fh,'f','>')
             else:
                 skip(fh,endian='>')
-    return data.pop(field),data
+    return data.pop(field)
 
 def _read_child_mask_level(f, level_child_offsets,level,nLevel,nhydro_vars):
     f.seek(level_child_offsets[level])
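
The mass and type loops lean on the NMSU-ART species convention (a
hedged reading: lspecies holds cumulative particle counts per species,
wspecies the per-species masses). Note also that this hunk rebinds np,
the NumPy alias, to a particle count, so the subsequent np.zeros and
np.arange calls would fail as written. The intended fill, standalone:

    import numpy as np

    lspecies = np.array([100, 150, 160])    # illustrative cumulative counts
    wspecies = np.array([1.0, 0.125, 0.0])  # illustrative species masses
    ntot = int(lspecies[-1])
    mass = np.empty(ntot, dtype="float64")
    a = 0
    for b, m in zip(lspecies, wspecies):
        mass[a:b] = m  # every particle in species slice [a, b) gets mass m
        a = b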


https://bitbucket.org/yt_analysis/yt/commits/1016f16f1d30/
Changeset:   1016f16f1d30
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 02:11:03
Summary:     truncating the number of stars read
Affected #:  1 file

diff -r e3311566ea04f1fb505197edce18cbf3768c0c9f -r 1016f16f1d3086398f85f191b7ef4fb9323e4328 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -62,21 +62,22 @@
     def _read_particle_selection(self, chunks, selector, fields):
         #ignore chunking; we have no particle chunk system
         masks = {}
-        pf = chunks[0].objs[0].domain.pf
+        pf = (chunks.next()).objs[0].domain.pf
         ws,ls = pf.parameters["wspecies"],pf.parameters["lspecies"]
+        np = ls[-1]
         file_particle = pf.file_particle_data
         file_stars = pf.file_particle_stars
-        pos,vel = read_particles(file_particle)
-        del vel
+        pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
+                                 total=np,dd=pf.domain_dimensions)
+        pos,vel = pos.astype('float64'), vel.astype('float64')
+        import pdb; pdb.set_trace()
         mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
-        if mask is None: continue
         size = mask.sum()
         if not any(('position' in f for f in fields)):
             del pos
         if not any(('velocity' in f for f in fields)):
             del vel
         stara,starb = ls[-2],ls[-1]
-        np = ls[-1]
         tr = {}
         for field in fields:
             for ax in 'xyz':
@@ -220,7 +221,7 @@
     return le,fl,iocts,nLevel,root_level
 
 
-def read_particles(file,Nrow):
+def read_particles(file,Nrow,total=None,dd=1.0):
     words = 6 # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4 # for file_particle_data; not always true?
     np_per_page = Nrow**2 # defined in ART a_setup.h
@@ -229,7 +230,10 @@
     f = np.fromfile(file, dtype='>f4').astype('float32') # direct access
     pages = np.vsplit(np.reshape(f, (num_pages, words, np_per_page)), num_pages)
     data = np.squeeze(np.dstack(pages)).T # x,y,z,vx,vy,vz
-    return data[:,0:3],data[:,3:]
+    if total is None:
+        return data[:,0:3]/dd,data[:,3:]
+    else:
+        return data[:total,0:3]/dd,data[:total,3:]
 
 def read_star_field(file,field=None):
     data = {}
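
A runnable sketch, on synthetic data, of the page layout read_particles
decodes: the particle file is a sequence of fixed-size pages of Nrow**2
particles, each page holding six contiguous blocks (x, y, z, vx, vy,
vz), which vsplit/dstack re-thread into one (N, 6) table:

    import numpy as np

    Nrow = 4
    np_per_page = Nrow ** 2          # particles per page
    num_pages = 3
    raw = np.arange(num_pages * 6 * np_per_page, dtype="float32")
    pages = np.vsplit(raw.reshape(num_pages, 6, np_per_page), num_pages)
    data = np.squeeze(np.dstack(pages)).T
    assert data.shape == (num_pages * np_per_page, 6)
    pos, vel = data[:, 0:3], data[:, 3:]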


https://bitbucket.org/yt_analysis/yt/commits/b96ae3c8c62d/
Changeset:   b96ae3c8c62d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 02:15:43
Summary:     working particle IO
Affected #:  2 files

diff -r 1016f16f1d3086398f85f191b7ef4fb9323e4328 -r b96ae3c8c62d54ff4af16735d1e483b6f508abea yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -35,6 +35,7 @@
     ValidateGridType
 import yt.data_objects.universal_fields
 import yt.utilities.lib as amr_utils
+from yt.frontends.art.definitions import *
 
 KnownARTFields = FieldInfoContainer()
 add_art_field = KnownARTFields.add_field
@@ -45,17 +46,16 @@
 import numpy as np
 
 #these are just the hydro fields
-known_art_fields = [ 'Density','TotalEnergy',
-                     'XMomentumDensity','YMomentumDensity','ZMomentumDensity',
-                     'Pressure','Gamma','GasEnergy',
-                     'MetalDensitySNII', 'MetalDensitySNIa',
-                     'PotentialNew','PotentialOld']
-
 #Add the fields, then later we'll individually defined units and names
-for f in known_art_fields:
+for f in fluid_fields:
     add_art_field(f, function=NullFunc, take_log=True,
               validators = [ValidateDataField(f)])
 
+for f in particle_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+              validators = [ValidateDataField(f)],
+              particle_type = True)
+
 #Hydro Fields that are verified to be OK unit-wise:
 #Density
 #Temperature

diff -r 1016f16f1d3086398f85f191b7ef4fb9323e4328 -r b96ae3c8c62d54ff4af16735d1e483b6f508abea yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -70,21 +70,21 @@
         pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
                                  total=np,dd=pf.domain_dimensions)
         pos,vel = pos.astype('float64'), vel.astype('float64')
-        import pdb; pdb.set_trace()
         mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
         size = mask.sum()
-        if not any(('position' in f for f in fields)):
+        if not any(('position' in f for f in fields[0])):
             del pos
-        if not any(('velocity' in f for f in fields)):
+        if not any(('velocity' in f for f in fields[0])):
             del vel
         stara,starb = ls[-2],ls[-1]
         tr = {}
-        for field in fields:
-            for ax in 'xyz':
+        for field in fields[0]:
+            import pdb; pdb.set_trace()
+            for i,ax in enumerate('xyz'):
                 if field.startswith("particle_position_%s"%ax):
-                    tr[field]=pos[:,ax][mask]
+                    tr[field]=pos[:,i][mask]
                 if field.startswith("particle_velocity_%s"%ax):
-                    tr[field]=vel[:,ax][mask]
+                    tr[field]=vel[:,i][mask]
             if field == "particle_mass":
                 a=0
                 data = np.zeros(np,dtype='float64')


https://bitbucket.org/yt_analysis/yt/commits/6821f6e7cb6d/
Changeset:   6821f6e7cb6d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 02:24:02
Summary:     removed pdb
Affected #:  1 file

diff -r b96ae3c8c62d54ff4af16735d1e483b6f508abea -r 6821f6e7cb6d2cc6224c6d65163406b3137d2a8d yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -79,7 +79,6 @@
         stara,starb = ls[-2],ls[-1]
         tr = {}
         for field in fields[0]:
-            import pdb; pdb.set_trace()
             for i,ax in enumerate('xyz'):
                 if field.startswith("particle_position_%s"%ax):
                     tr[field]=pos[:,i][mask]


https://bitbucket.org/yt_analysis/yt/commits/cca64f300e6d/
Changeset:   cca64f300e6d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-18 21:20:44
Summary:     fixed up fields in particle io
Affected #:  1 file

diff -r 6821f6e7cb6d2cc6224c6d65163406b3137d2a8d -r cca64f300e6d4bdc2cfb7353ff5d434472daf910 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -72,19 +72,20 @@
         pos,vel = pos.astype('float64'), vel.astype('float64')
         mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
         size = mask.sum()
-        if not any(('position' in f for f in fields[0])):
+        if not any(('position' in n for t,n in fields)):
             del pos
-        if not any(('velocity' in f for f in fields[0])):
+        if not any(('velocity' in n for t,n in fields)):
             del vel
         stara,starb = ls[-2],ls[-1]
         tr = {}
-        for field in fields[0]:
+        for field in fields:
+            ftype,fname = field
             for i,ax in enumerate('xyz'):
-                if field.startswith("particle_position_%s"%ax):
+                if fname.startswith("particle_position_%s"%ax):
                     tr[field]=pos[:,i][mask]
-                if field.startswith("particle_velocity_%s"%ax):
+                if fname.startswith("particle_velocity_%s"%ax):
                     tr[field]=vel[:,i][mask]
-            if field == "particle_mass":
+            if fname == "particle_mass":
                 a=0
                 data = np.zeros(np,dtype='float64')
                 for b,m in zip(ls,ws):
@@ -92,19 +93,19 @@
                     a=b
                 tr[field]=data[mask]
                 #the stellar masses will be updated later
-            elif field == "particle_index":
-                tr[field]=np.arange(np)[mask]
-            elif field == "particle_type":
+            elif fname == "particle_index":
+                tr[field]=np.arange(np)[mask].astype('int64')
+            elif fname == "particle_type":
                 a=0
                 data = np.zeros(np,dtype='int64')
                 for b,m in zip(ls,ws):
                     data[a:b]=(np.ones(size,dtype='int64')*i)
                     a=b
                 tr[field]=data[mask]
-            if field in particle_star_fields:
+            if fname in particle_star_fields:
                 #we possibly update and change the masses here
                 #all other fields are read in and changed once
-                temp= read_star_field(file_stars,field=field)
+                temp= read_star_field(file_stars,field=fname)
                 data = np.zeros(np,dtype="float64")
                 data[stara:starb] = temp
                 del temp
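
The fix reduces to one convention: selection requests arrive as
(ftype, fname) tuples, so name tests unpack fname while the result dict
stays keyed by the full tuple. In miniature:

    fields = [("all", "particle_position_x"), ("all", "particle_mass")]
    tr = {}
    for field in fields:
        ftype, fname = field
        if fname.startswith("particle_position_"):
            tr[field] = "fill from the pos columns"    # keyed by the tuple
        elif fname == "particle_mass":
            tr[field] = "fill from the species masses"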


https://bitbucket.org/yt_analysis/yt/commits/fdfcf188dfa2/
Changeset:   fdfcf188dfa2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-19 05:28:46
Summary:     Adding a bit of documentation to the Cython routines
Affected #:  1 file

diff -r cca64f300e6d4bdc2cfb7353ff5d434472daf910 -r fdfcf188dfa266cc9b03aa4f7d7502f340366c18 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: Columbia University
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
 Homepage: http://yt.enzotools.org/
 License:
   Copyright (C) 2011 Matthew Turk.  All Rights Reserved.
@@ -81,13 +83,14 @@
 #   * Only allocate octs that reside on >= domain
 #   * For all octs, insert into tree, which may require traversing existing
 #     octs
-#   * Note that this doesn ot allow OctAllocationContainer to exactly be a
+#   * Note that this does not allow OctAllocationContainer to exactly be a
 #     chunk, but it is close.  For IO chunking, we can theoretically examine
 #     those octs that live inside a given allocator.
 
 cdef class OctreeContainer:
 
     def __init__(self, oct_domain_dimensions, domain_left_edge, domain_right_edge):
+        # This will just initialize the root mesh octs
         cdef int i, j, k, p
         for i in range(3):
             self.nn[i] = oct_domain_dimensions[i]
@@ -118,6 +121,9 @@
         free(self.root_mesh)
 
     def __iter__(self):
+        #Get the next oct, will traverse domains
+        #Note that oct containers can be sorted 
+        #so that consecutive octs are on the same domain
         cdef OctAllocationContainer *cur = self.cont
         cdef Oct *this
         cdef int i
@@ -137,6 +143,8 @@
     @cython.wraparound(False)
     @cython.cdivision(True)
     cdef Oct *get(self, ppos):
+        #Given a floating point position, retrieve the most
+        #refined oct at that point
         cdef np.int64_t ind[3]
         cdef np.float64_t dds[3], cp[3], pp[3]
         cdef Oct *cur
@@ -185,6 +193,9 @@
     @cython.wraparound(False)
     @cython.cdivision(True)
     cdef void neighbors(self, Oct* o, Oct* neighbors[27]):
+        #Get the 3x3x3 neighbors, where the (1,1,1) oct is the
+        #central one.
+        #Return an array of Octs
         cdef np.int64_t curopos[3]
         cdef np.int64_t curnpos[3]
         cdef np.int64_t npos[3]
@@ -875,13 +886,15 @@
 
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
-
+    #this class is specifically for the NMSU ART code
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def fill_level(self, int domain, int level, dest_fields, source_fields,
                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset,
                    np.int64_t subchunk_offset, np.int64_t subchunk_max):
+        #Only slightly different from the RAMSES version
+        #The source array is in chunks, just stop when we hit the end
         cdef np.ndarray[np.float64_t, ndim=2] source
         cdef np.ndarray[np.float64_t, ndim=1] dest
         cdef OctAllocationContainer *dom = self.domains[domain - 1]
@@ -922,11 +935,12 @@
                              source_fields, 
                              np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                              int offset):
-        #precisely like fill level, but instead of assuming that the source
+        #Fill the level, but instead of assuming that the source
         #order is that of the oct order, we look up the oct position
         #and fill its children from the source field
-        #as a result, source is 3D field with 8 times as many
+        #As a result, source is 3D grid with 8 times as many
         #elements as the number of octs on this level in this domain
+        #and with the shape of an equal-sided cube
         cdef np.ndarray[np.float64_t, ndim=3] source
         cdef np.ndarray[np.float64_t, ndim=1] dest
         cdef OctAllocationContainer *dom = self.domains[domain - 1]
@@ -957,6 +971,9 @@
 
 
 cdef int compare_octs(void *vo1, void *vo2) nogil:
+    #This only compares if the octs live on the
+    #domain, not if they are actually equal
+    #Used to sort octs into consecutive domains
     cdef Oct *o1 = (<Oct**> vo1)[0]
     cdef Oct *o2 = (<Oct**> vo2)[0]
     if o1.domain < o2.domain: return -1
@@ -967,11 +984,16 @@
     elif o1.domain > o2.domain: return 1
 
 cdef class ParticleOctreeContainer(OctreeContainer):
+    #Each ParticleArrays contains an Oct,
+    #a reference to the next ParticleArrays,
+    #its index, and the number of particles
     cdef ParticleArrays *first_sd
     cdef ParticleArrays *last_sd
     cdef Oct** oct_list
-    cdef np.int64_t *dom_offsets
+    #The starting oct index of each domain
+    cdef np.int64_t *dom_offsets 
     cdef public int max_level
+    #How many particles do we keep before refining
     cdef public int n_ref
 
     def allocate_root(self):
@@ -989,6 +1011,8 @@
                     self.root_mesh[i][j][k] = cur
 
     def __dealloc__(self):
+        #Call the freemem ops on every oct
+        #of the root mesh recursively
         cdef i, j, k
         for i in range(self.nn[0]):
             for j in range(self.nn[1]):
@@ -998,6 +1022,7 @@
         free(self.dom_offsets)
 
     cdef void visit_free(self, Oct *o):
+        #Free the memory for this oct recursively
         cdef int i, j, k
         for i in range(2):
             for j in range(2):
@@ -1017,6 +1042,11 @@
     def icoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                 np.int64_t cell_count):
+        #Return the integer positions of the cells
+        #Limited to this domain and within the mask
+        #Positions are binary: aside from the root mesh,
+        #each level shifts the digits left (<< 1) and
+        #appends a 0 or 1 per child, recursively
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
         cdef int oi, i, ci, ii
@@ -1041,6 +1071,7 @@
     def ires(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                 np.int64_t cell_count):
+        #Return the 'resolution' of each cell, i.e. the level
         cdef np.ndarray[np.int64_t, ndim=1] res
         res = np.empty(cell_count, dtype="int64")
         cdef int oi, i, ci
@@ -1060,6 +1091,7 @@
     def fcoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                 np.int64_t cell_count):
+        #Return the floating point unitary position of every cell
         cdef np.ndarray[np.float64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="float64")
         cdef int oi, i, ci
@@ -1102,6 +1134,10 @@
         pass
 
     def finalize(self):
+        #This will sort the octs in the oct list
+        #so that domains appear consecutively
+        #And then find the oct index/offset for
+        #every domain
         cdef int max_level = 0
         self.oct_list = <Oct**> malloc(sizeof(Oct*)*self.nocts)
         cdef np.int64_t i = 0
@@ -1131,6 +1167,9 @@
         self.dom_offsets[cur_dom + 2] = self.nocts
 
     cdef Oct* allocate_oct(self):
+        #Allocate the memory, set to NULL or -1
+        #We reserve space for n_ref particles, but keep
+        #track of how many are used with np initially 0
         self.nocts += 1
         cdef Oct *my_oct = <Oct*> malloc(sizeof(Oct))
         cdef ParticleArrays *sd = <ParticleArrays*> \
@@ -1162,6 +1201,9 @@
         return my_oct
 
     def linearly_count(self):
+        #Without visiting octs and cells,
+        #jump from one particle array to the next,
+        #counting the total # of particles en route
         cdef np.int64_t total = 0
         cdef ParticleArrays *c = self.first_sd
         while c != NULL:
@@ -1190,6 +1232,9 @@
         return level_count
 
     def add(self, np.ndarray[np.float64_t, ndim=2] pos, np.int64_t domain_id):
+        #Add this particle to the root oct
+        #Then if that oct has children, add it to them recursively
+        #If the child needs to be refined because of max particles, do so
         cdef int no = pos.shape[0]
         cdef int p, i, level
         cdef np.float64_t dds[3], cp[3], pp[3]
@@ -1200,6 +1245,10 @@
         for p in range(no):
             level = 0
             for i in range(3):
+                #PP: the unitary position,
+                #DDS: the domain dimensions,
+                #IND: the corresponding integer index on the root octs,
+                #CP: the center point of that oct
                 pp[i] = pos[p, i]
                 dds[i] = (self.DRE[i] + self.DLE[i])/self.nn[i]
                 ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
@@ -1232,6 +1281,11 @@
             cur.sd.np += 1
 
     cdef int _check_refine(self, Oct *cur, np.float64_t cp[3], int domain_id):
+        #Answers: should we refine this oct?
+        #False if it is already refined,
+        #False if it is not refined but doesn't need to be,
+        #True if its particle count requires refinement,
+        #True if it is not in this domain
         if cur.children[0][0][0] != NULL:
             return 0
         elif cur.sd.np >= self.n_ref:
@@ -1241,6 +1295,9 @@
         return 0
 
     cdef void refine_oct(self, Oct *o, np.float64_t pos[3]):
+        #Allocate and initialize child octs
+        #Attach particles to child octs
+        #Remove particles from this oct entirely
         cdef int i, j, k, m, ind[3]
         cdef Oct *noct
         for i in range(2):
@@ -1272,6 +1329,7 @@
         free(o.sd.pos)
 
     def recursively_count(self):
+        #Visit every cell, accumulate the # of cells per level
         cdef int i, j, k
         cdef np.int64_t counts[128]
         for i in range(128): counts[i] = 0
@@ -1297,6 +1355,9 @@
         return
 
     def domain_identify(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        #Return an array of length # of domains
+        #Every element is True if there is at least one
+        #fully refined *cell* in that domain that isn't masked out
         cdef int i, oi, m
         cdef Oct *o
         cdef np.ndarray[np.uint8_t, ndim=1, cast=True] dmask
@@ -1317,6 +1378,7 @@
     @cython.wraparound(False)
     @cython.cdivision(True)
     def count_neighbor_particles(self, ppos):
+        #How many particles are in my neighborhood
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
@@ -1332,6 +1394,7 @@
     @cython.cdivision(True)
     def count_cells(self, SelectorObject selector,
               np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        #Count how many cells per level there are
         cdef int i, j, k, oi
         # pos here is CELL center, not OCT center.
         cdef np.float64_t pos[3]
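
As a rough Python illustration of the n_ref refinement criterion documented above (the real container tracks particles in C structs; N_REF and the helper below are invented):

    N_REF = 64  # hypothetical threshold standing in for the container's n_ref

    def needs_refinement(has_children, n_particles, in_domain=True):
        # Mirrors the documented _check_refine logic: an oct that already
        # has children is never re-refined; one holding >= n_ref particles
        # (or, per the comment, one outside this domain) is flagged.
        if has_children:
            return False
        if n_particles >= N_REF or not in_domain:
            return True
        return False

    assert needs_refinement(False, 64) and not needs_refinement(True, 1000)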


https://bitbucket.org/yt_analysis/yt/commits/1c32f8c13f01/
Changeset:   1c32f8c13f01
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-19 21:17:19
Summary:     fixing a bug in data structures for multiple fields
Affected #:  1 file

diff -r fdfcf188dfa266cc9b03aa4f7d7502f340366c18 -r 1c32f8c13f01a005093b6e3439c47e924fd63af9 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -452,7 +452,7 @@
             source= {}
             data = _read_root_level(content,self.domain.level_child_offsets,
                                    self.domain.level_count)
-            for i in field_idxs:
+            for field,i in zip(fields,field_idxs):
                 temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
                                   order='F').astype('float64')
                 source[field] = temp
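
The bug fixed here: iterating over field_idxs alone left `field` bound to a single stale value, so every column landed under one key. A sketch of the corrected pairing, with invented names and shapes:

    import numpy as np

    all_fields = ["Density", "TotalEnergy", "GasEnergy"]  # invented on-disk order
    fields = ["Density", "GasEnergy"]                     # the requested subset
    field_idxs = [all_fields.index(f) for f in fields]
    data = np.random.random((len(all_fields), 8))         # one row per field

    source = {}
    for field, i in zip(fields, field_idxs):
        source[field] = data[i, :]  # each requested name gets its own row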


https://bitbucket.org/yt_analysis/yt/commits/6d4ceb24b97a/
Changeset:   6d4ceb24b97a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-19 21:38:59
Summary:     fixed subchunking; projections looked good, slices not so much
Affected #:  1 file

diff -r 1c32f8c13f01a005093b6e3439c47e924fd63af9 -r 6d4ceb24b97a969224bce0e89682285332cda39c yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -472,7 +472,7 @@
                 nocts_filling = noct_range[1]-noct_range[0]
                 level_offset += oct_handler.fill_level(self.domain.domain_id, 
                                     level, dest, source, self.mask, level_offset,
-                                    noct_range[0],nocts_filling-1)
+                                    noct_range[0],nocts_filling)
         return dest
 
 class ARTDomainFile(object):
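
A sketch of the subchunk iteration this commit fixes: passing nocts_filling - 1 to fill_level dropped the last oct of every chunk. The generator below mirrors the subchunk helper in data_structures.py; the counts are invented:

    def subchunk(count, size):
        # Yield [start, stop) pairs covering `count` octs in chunks of `size`.
        for i in range(0, count, size):
            yield i, i + min(size, count - i)

    for start, stop in subchunk(7, 3):
        nocts_filling = stop - start  # 3, 3, 1 -- pass this, not this minus
        print(start, nocts_filling)   # one, or each chunk's last oct is lost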


https://bitbucket.org/yt_analysis/yt/commits/6c8ec5b60714/
Changeset:   6c8ec5b60714
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-19 23:26:08
Summary:     subchunking wasn't actually working
Affected #:  1 file

diff -r 6d4ceb24b97a969224bce0e89682285332cda39c -r 6c8ec5b60714b6bd134514ec9ac7ce2ff6e5dcd9 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -300,7 +300,7 @@
         nocts = min(end-start,level_info[level])
         end = start + nocts
         ncells = nocts*8
-        skip = np.dtype(hydro_struct).itemsize*start
+        skip = np.dtype(hydro_struct).itemsize*start*8
         f.seek(level_child_offsets[level]+skip)
         arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
         assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
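
The root cause above: hydro_struct describes a single *cell* record, while `start` counts *octs*, and each oct holds eight cells. A toy version of the offset arithmetic (the struct layout here is invented):

    import numpy as np

    hydro_struct = [("pad1", ">i4"), ("density", ">f4"), ("pad2", ">i4")]
    cell_bytes = np.dtype(hydro_struct).itemsize  # bytes per cell record

    start = 100                     # octs already consumed on this level
    skip = cell_bytes * start * 8   # 8 cells per oct: convert octs to cells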


https://bitbucket.org/yt_analysis/yt/commits/8b6ab02bf49a/
Changeset:   8b6ab02bf49a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 01:38:50
Summary:     generalizing is_valid to check the binary amr header
Affected #:  1 file

diff -r 6c8ec5b60714b6bd134514ec9ac7ce2ff6e5dcd9 -r 8b6ab02bf49a932406af733ea6382b17d17ef87e yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -384,12 +384,15 @@
         Defined for the NMSU file naming scheme.
         This could differ for other formats.
         """
-        fn = ("%s" % (os.path.basename(args[0])))
         f = ("%s" % args[0])
         prefix, suffix = filename_pattern['amr'].split('%s')
-        if fn.endswith(suffix) and fn.startswith(prefix) and\
-                os.path.exists(f): 
+        import pdb; pdb.set_trace()
+        with open(f,'rb') as fh:
+            try:
+                amr_header_vals = read_attrs(fh,amr_header_struct,'>')
                 return True
+            except:
+                return False
         return False
 
 class ARTDomainSubset(object):
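
The generalized check no longer trusts the filename; it attempts to parse the binary AMR header and treats any failure as a mismatch. A stripped-down sketch, with read_attrs and amr_header_struct passed in to stand for yt's ART helpers:

    def is_valid_sketch(path, read_attrs, amr_header_struct):
        # Probe the big-endian AMR header; any parse failure means the
        # file does not belong to this frontend.
        try:
            with open(path, "rb") as fh:
                read_attrs(fh, amr_header_struct, ">")
            return True
        except Exception:
            return False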


https://bitbucket.org/yt_analysis/yt/commits/b8a8e1b0430c/
Changeset:   b8a8e1b0430c
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 01:39:00
Summary:     without the pdb
Affected #:  1 file

diff -r 8b6ab02bf49a932406af733ea6382b17d17ef87e -r b8a8e1b0430cc7f10223128c277a434a25ce49b0 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -386,7 +386,6 @@
         """
         f = ("%s" % args[0])
         prefix, suffix = filename_pattern['amr'].split('%s')
-        import pdb; pdb.set_trace()
         with open(f,'rb') as fh:
             try:
                 amr_header_vals = read_attrs(fh,amr_header_struct,'>')


https://bitbucket.org/yt_analysis/yt/commits/c6a9d60792bd/
Changeset:   c6a9d60792bd
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 02:15:33
Summary:     limit level works now
Affected #:  1 file

diff -r b8a8e1b0430cc7f10223128c277a434a25ce49b0 -r c6a9d60792bd94dfed8b0d65a518b602709c48bf yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -374,7 +374,7 @@
         self.omega_matter = amr_header_vals['Om0']
         self.hubble_constant = amr_header_vals['hubble']
         self.min_level = amr_header_vals['min_level']
-        self.max_level = amr_header_vals['max_level']
+        self.max_level = min(self.limit_level,amr_header_vals['max_level'])
         self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
 


https://bitbucket.org/yt_analysis/yt/commits/364318377994/
Changeset:   364318377994
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 08:16:04
Summary:     starting modifications to make Trujillo-Gomez simulations work
Affected #:  3 files

diff -r c6a9d60792bd94dfed8b0d65a518b602709c48bf -r 36431837799461aedce9ab1308698378e7449a03 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -166,7 +166,8 @@
     def __init__(self,filename,data_style='art',
                  fields = None, storage_filename = None,
                  skip_particles=False,skip_stars=False,
-                 limit_level=None,spread_age=True):
+                 limit_level=None,spread_age=True,
+                 force_max_level=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
@@ -178,6 +179,7 @@
         self.skip_stars = skip_stars
         self.limit_level = limit_level
         self.max_level = limit_level
+        self.force_max_level = force_max_level
         self.spread_age = spread_age
         self.domain_left_edge = np.zeros(3,dtype='float')
         self.domain_right_edge = np.zeros(3,dtype='float')+1.0
@@ -345,6 +347,11 @@
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
+            #estimate the root level
+            float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
+                self._level_oct_offsets,level,
+                coarse_grid=self.pf.domain_dimensions[0],
+                root_level=self.root_level)
         #read the particle header
         if not self.skip_particles and self.file_particle_header:
             with open(self.file_particle_header,"rb") as fh:
@@ -374,9 +381,14 @@
         self.omega_matter = amr_header_vals['Om0']
         self.hubble_constant = amr_header_vals['hubble']
         self.min_level = amr_header_vals['min_level']
-        self.max_level = min(self.limit_level,amr_header_vals['max_level'])
+        self.max_level = amr_header_vals['max_level']
+        if self.limit_level is not None:
+            self.max_level = min(self.limit_level,amr_header_vals['max_level'])
+        if self.force_max_level is not None:
+            self.max_level = self.force_max_level
         self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
+        mylog.info("Max level is %02i",self.max_level)
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -456,7 +468,7 @@
                                    self.domain.level_count)
             for field,i in zip(fields,field_idxs):
                 temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
-                                  order='F').astype('float64')
+                                  order='F').astype('float64').T
                 source[field] = temp
             level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
                                    level, dest, source, self.mask, level_offset)
@@ -568,18 +580,18 @@
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         root_octs_side**3, 0,oct_handler.nocts)
         else:
-            left_index, fl, iocts, nocts,root_level = _read_art_level_info(f, 
+            float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
                 self._level_oct_offsets,level,
-                coarse_grid=self.pf.domain_dimensions[0])
-            left_index/=2
+                coarse_grid=self.pf.domain_dimensions[0],
+                root_level=self.root_level)
             #at least one of the indices should be odd
             #assert np.sum(left_index[:,0]%2==1)>0
-            float_left_edge = left_index.astype("float64") / octs_side
-            float_center = float_left_edge + 0.5*1.0/octs_side
+            #float_left_edge = left_index.astype("float64") / octs_side
+            #float_center = float_left_edge + 0.5*1.0/octs_side
             #all floating unitary positions should fit inside the domain
             assert np.all(float_center<1.0)
             nocts_check = oct_handler.add(self.domain_id,level, nocts, 
-                                          float_left_edge, self.domain_id)
+                                          float_center, self.domain_id)
             assert(nocts_check == nocts)
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         nocts, level,oct_handler.nocts)

diff -r c6a9d60792bd94dfed8b0d65a518b602709c48bf -r 36431837799461aedce9ab1308698378e7449a03 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -152,7 +152,8 @@
     f.seek(offset)
     return nhydrovars, iNOLL, level_oct_offsets, level_child_offsets
 
-def _read_art_level_info(f, level_oct_offsets,level,coarse_grid=128):
+def _read_art_level_info(f, level_oct_offsets,level,coarse_grid=128,
+                         ncell0=None,root_level=None):
     pos = f.tell()
     f.seek(level_oct_offsets[level])
     #Get the info for this level, skip the rest
@@ -179,7 +180,7 @@
         iocts[idxa:idxb] = data[:,-3] 
         idxa=idxa+this_chunk
     del data
-
+    
     #emulate fortran code
     #     do ic1 = 1 , nLevel
     #       read(19) (iOctPs(i,iOct),i=1,3),(iOctNb(i,iOct),i=1,6),
@@ -194,31 +195,64 @@
     iocts[1:]=iocts[:-1] #shift
     iocts = iocts[:nLevel] #chop off the last, unused, index
     iocts[0]=iOct #starting value
-
+    
     #now correct iocts for fortran indices start @ 1
     iocts = iocts-1
-
+    
     assert np.unique(iocts).shape[0] == nLevel
     
     #left edges are expressed as if they were on 
     #level 15, so no matter what level max(le)=2**15 
     #correct to the yt convention
     #le = le/2**(root_level-1-level)-1
+    
+    #try to find the root_level first
+    if root_level is None:
+        root_level=np.floor(np.log2(le.max()*1.0/coarse_grid))
+        root_level = root_level.astype('int64')
+        for i in range(10):
+            d_x= 1.0/(2.0**(root_level))
+            fc = (d_x * le) -1
+            go = np.diff(np.unique(fc)).min()<1.1
+            if go: break
+            root_level+=1
+    
+    #again emulate the fortran code
+    #This is all for calculating child oct locations
+    #iC_ = iC + nbshift
+    #iO = ishft ( iC_ , - ndim )
+    #id = ishft ( 1, MaxLevel - iOctLv(iO) )   
+    #j  = iC_ + 1 - ishft( iO , ndim )
+    #Posx   = d_x * (iOctPs(1,iO) + sign ( id , idelta(j,1) ))
+    #Posy   = d_x * (iOctPs(2,iO) + sign ( id , idelta(j,2) ))
+    #Posz   = d_x * (iOctPs(3,iO) + sign ( id , idelta(j,3) )) 
+    #idelta = [[-1,  1, -1,  1, -1,  1, -1,  1],
+              #[-1, -1,  1,  1, -1, -1,  1,  1],
+              #[-1, -1, -1, -1,  1,  1,  1,  1]]
+    #idelta = np.array(idelta)
+    #if ncell0 is None:
+        #ncell0 = coarse_grid**3
+    #nchild = 8
+    #ndim = 3
+    #nshift = nchild -1
+    #nbshift = nshift - ncell0
+    #iC = iocts #+ nbshift
+    #iO = iC >> ndim #possibly >>
+    #id = 1 << (root_level - level)
+    #j = iC + 1 - ( iO << 3)
+    #delta = np.abs(id)*idelta[:,j-1]
 
-    #try to find the root_level first
-    root_level=np.floor(np.log2(le.max()*1.0/coarse_grid))
-    root_level = root_level.astype('int64')
-
+    
     #try without the -1
-    le = le/2**(root_level+1-level)-1
-
+    #le = le/2**(root_level+1-level)
+    
     #now read the hvars and vars arrays
     #we are looking for iOctCh
     #we record if iOctCh is >0, in which it is subdivided
     #iOctCh  = np.zeros((nLevel+1,8),dtype='bool')
     
     f.seek(pos)
-    return le,fl,iocts,nLevel,root_level
+    return fc,fl,iocts,nLevel,root_level
 
 
 def read_particles(file,Nrow,total=None,dd=1.0):

diff -r c6a9d60792bd94dfed8b0d65a518b602709c48bf -r 36431837799461aedce9ab1308698378e7449a03 yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -1080,7 +1080,7 @@
             and np.all(self.region.left_edge <= LE) \
             and np.all(self.region.right_edge >= RE):
             return self.region
-        self.region = data.pf.h.periodic_region(
+        self.region = data.pf.h.region(
             data.center, LE, RE)
         return self.region
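
The root-level estimate added to io.py above starts from floor(log2(max_edge / coarse_grid)) and bumps the guess until the decoded coordinates fall on a plausible lattice (adjacent unique values closer than 1.1 apart). A standalone rendering of that heuristic, with fabricated left edges:

    import numpy as np

    le = np.array([4096, 8192, 12288, 16384], dtype="int64")  # fabricated
    coarse_grid = 128

    root_level = int(np.floor(np.log2(le.max() * 1.0 / coarse_grid)))
    for _ in range(10):
        d_x = 1.0 / (2.0 ** root_level)
        fc = (d_x * le) - 1
        if np.diff(np.unique(fc)).min() < 1.1:  # spacing looks like a lattice
            break
        root_level += 1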
 


https://bitbucket.org/yt_analysis/yt/commits/219762c84f37/
Changeset:   219762c84f37
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 18:38:42
Summary:     evaluating the root level when reading in just the header
Affected #:  2 files

diff -r 36431837799461aedce9ab1308698378e7449a03 -r 219762c84f37dbfaf21e166031d349c67df63745 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -349,9 +349,10 @@
             self.parameters['ncell0'] = self.parameters['ng']**3
             #estimate the root level
             float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
-                self._level_oct_offsets,level,
-                coarse_grid=self.pf.domain_dimensions[0],
-                root_level=self.root_level)
+                [0,self.child_grid_offset],1,
+                coarse_grid=self.domain_dimensions[0])
+            del float_center, fl, iocts, nocts
+            self.root_level = root_level
         #read the particle header
         if not self.skip_particles and self.file_particle_header:
             with open(self.file_particle_header,"rb") as fh:
@@ -583,7 +584,7 @@
             float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
                 self._level_oct_offsets,level,
                 coarse_grid=self.pf.domain_dimensions[0],
-                root_level=self.root_level)
+                root_level=self.pf.root_level)
             #at least one of the indices should be odd
             #assert np.sum(left_index[:,0]%2==1)>0
             #float_left_edge = left_index.astype("float64") / octs_side

diff -r 36431837799461aedce9ab1308698378e7449a03 -r 219762c84f37dbfaf21e166031d349c67df63745 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -211,11 +211,14 @@
         root_level=np.floor(np.log2(le.max()*1.0/coarse_grid))
         root_level = root_level.astype('int64')
         for i in range(10):
-            d_x= 1.0/(2.0**(root_level))
-            fc = (d_x * le) -1
+            d_x= 1.0/(2.0**(root_level+1))
+            fc = (d_x * le) - 1
             go = np.diff(np.unique(fc)).min()<1.1
             if go: break
             root_level+=1
+    else:
+        d_x= 1.0/(2.0**(root_level+1))
+        fc = (d_x * le) - 1
     
     #again emulate the fortran code
     #This is all for calculating child oct locations


https://bitbucket.org/yt_analysis/yt/commits/055eb92df654/
Changeset:   055eb92df654
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-20 19:09:31
Summary:     Modified left_indices; now seems to work for Trujillo-Gomez datasets
May have had silent problems in Ceverino datasets
Affected #:  2 files

diff -r 219762c84f37dbfaf21e166031d349c67df63745 -r 055eb92df6548ea15c9f59278d91aefbff945a63 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -353,6 +353,7 @@
                 coarse_grid=self.domain_dimensions[0])
             del float_center, fl, iocts, nocts
             self.root_level = root_level
+            mylog.info("Using root level of %02i",self.root_level)
         #read the particle header
         if not self.skip_particles and self.file_particle_header:
             with open(self.file_particle_header,"rb") as fh:
@@ -581,7 +582,7 @@
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         root_octs_side**3, 0,oct_handler.nocts)
         else:
-            float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
+            unitary_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
                 self._level_oct_offsets,level,
                 coarse_grid=self.pf.domain_dimensions[0],
                 root_level=self.pf.root_level)
@@ -590,9 +591,8 @@
             #float_left_edge = left_index.astype("float64") / octs_side
             #float_center = float_left_edge + 0.5*1.0/octs_side
             #all floating unitary positions should fit inside the domain
-            assert np.all(float_center<1.0)
             nocts_check = oct_handler.add(self.domain_id,level, nocts, 
-                                          float_center, self.domain_id)
+                                          unitary_center, self.domain_id)
             assert(nocts_check == nocts)
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                         nocts, level,oct_handler.nocts)

diff -r 219762c84f37dbfaf21e166031d349c67df63745 -r 055eb92df6548ea15c9f59278d91aefbff945a63 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -207,18 +207,22 @@
     #le = le/2**(root_level-1-level)-1
     
     #try to find the root_level first
+    def cfc(root_level,level,le):
+        d_x= 1.0/(2.0**(root_level-level+1))
+        fc = (d_x * le) - 2**(level-1)
+        return fc
     if root_level is None:
         root_level=np.floor(np.log2(le.max()*1.0/coarse_grid))
         root_level = root_level.astype('int64')
         for i in range(10):
-            d_x= 1.0/(2.0**(root_level+1))
-            fc = (d_x * le) - 1
+            fc = cfc(root_level,level,le)
             go = np.diff(np.unique(fc)).min()<1.1
             if go: break
             root_level+=1
     else:
-        d_x= 1.0/(2.0**(root_level+1))
-        fc = (d_x * le) - 1
+        fc = cfc(root_level,level,le)
+    unitary_center = fc/( coarse_grid*2.0**(level-1))
+    assert np.all(unitary_center<1.0)
     
     #again emulate the fortran code
     #This is all for calculating child oct locations
@@ -255,7 +259,7 @@
     #iOctCh  = np.zeros((nLevel+1,8),dtype='bool')
     
     f.seek(pos)
-    return fc,fl,iocts,nLevel,root_level
+    return unitary_center,fl,iocts,nLevel,root_level
 
 
 def read_particles(file,Nrow,total=None,dd=1.0):
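
The new cfc helper rescales the stored integer left edges to the current level's cell width and applies a level-dependent offset before normalizing to unitary coordinates. A worked sketch, with every number invented:

    import numpy as np

    def cfc(root_level, level, le):
        # As committed above: rescale the integer left edges, then
        # subtract the level-dependent offset.
        d_x = 1.0 / (2.0 ** (root_level - level + 1))
        return (d_x * le) - 2 ** (level - 1)

    root_level, level, coarse_grid = 14, 3, 128     # all invented values
    le = np.array([40960.0, 45056.0, 49152.0])

    fc = cfc(root_level, level, le)
    unitary_center = fc / (coarse_grid * 2.0 ** (level - 1))
    assert np.all(unitary_center < 1.0)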


https://bitbucket.org/yt_analysis/yt/commits/8f4c3a968e27/
Changeset:   8f4c3a968e27
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-23 00:50:39
Summary:     confused np with number of particles / numpy
Affected #:  1 file

diff -r 055eb92df6548ea15c9f59278d91aefbff945a63 -r 8f4c3a968e27ace70366a645b44b457df51d19c0 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -64,11 +64,11 @@
         masks = {}
         pf = (chunks.next()).objs[0].domain.pf
         ws,ls = pf.parameters["wspecies"],pf.parameters["lspecies"]
-        np = ls[-1]
+        npa = ls[-1]
         file_particle = pf.file_particle_data
         file_stars = pf.file_particle_stars
         pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
-                                 total=np,dd=pf.domain_dimensions)
+                                 total=npa,dd=pf.domain_dimensions)
         pos,vel = pos.astype('float64'), vel.astype('float64')
         mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
         size = mask.sum()
@@ -87,17 +87,17 @@
                     tr[field]=vel[:,i][mask]
             if fname == "particle_mass":
                 a=0
-                data = np.zeros(np,dtype='float64')
+                data = np.zeros(npa,dtype='float64')
                 for b,m in zip(ls,ws):
                     data[a:b]=(np.ones(size,dtype='float64')*m)
                     a=b
                 tr[field]=data[mask]
                 #the stellar masses will be updated later
             elif fname == "particle_index":
-                tr[field]=np.arange(np)[mask].astype('int64')
+                tr[field]=np.arange(npa)[mask].astype('int64')
             elif fname == "particle_type":
                 a=0
-                data = np.zeros(np,dtype='int64')
+                data = np.zeros(npa,dtype='int64')
                 for b,m in zip(ls,ws):
                     data[a:b]=(np.ones(size,dtype='int64')*i)
                     a=b
@@ -106,7 +106,7 @@
                 #we possibly update and change the masses here
                 #all other fields are read in and changed once
                 temp= read_star_field(file_stars,field=fname)
-                data = np.zeros(np,dtype="float64")
+                data = np.zeros(npa,dtype="float64")
                 data[stara:starb] = temp
                 del temp
                 tr[field]=data[mask]
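
The bug named in this summary is ordinary name shadowing: binding `np` to a particle count hides the numpy module for the rest of the function. A two-function demonstration:

    import numpy as np

    def broken(ls):
        np = ls[-1]          # rebinding np locally shadows the numpy module,
        return np.zeros(np)  # so this raises AttributeError on the int

    def fixed(ls):
        npa = ls[-1]         # a distinct name keeps numpy reachable
        return np.zeros(npa)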


https://bitbucket.org/yt_analysis/yt/commits/e2eb9ebe5004/
Changeset:   e2eb9ebe5004
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-23 01:06:19
Summary:     more fixes to particle IO
Affected #:  2 files

diff -r 8f4c3a968e27ace70366a645b44b457df51d19c0 -r e2eb9ebe5004683fc4694d63fe088d74d3228e2e yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -116,11 +116,11 @@
         ('>d',('tdum','adum')),
         ('>i','nstars'),
         ('>d',('ws_old','ws_oldi')),
-        ('>f','mass'),
-        ('>f','imass'),
-        ('>f','tbirth'),
-        ('>f','metallicity1'),
-        ('>f','metallicity2')
+        ('>f','particle_mass'),
+        ('>f','particle_mass_initial'),
+        ('>f','particle_creation_time'),
+        ('>f','particle_metallicity1'),
+        ('>f','particle_metallicity2')
         ]
 
 star_name_map = {

diff -r 8f4c3a968e27ace70366a645b44b457df51d19c0 -r e2eb9ebe5004683fc4694d63fe088d74d3228e2e yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -89,7 +89,7 @@
                 a=0
                 data = np.zeros(npa,dtype='float64')
                 for b,m in zip(ls,ws):
-                    data[a:b]=(np.ones(size,dtype='float64')*m)
+                    data[a:b]=(np.ones(b-a,dtype='float64')*m)
                     a=b
                 tr[field]=data[mask]
                 #the stellar masses will be updated later
@@ -99,14 +99,16 @@
                 a=0
                 data = np.zeros(npa,dtype='int64')
                 for b,m in zip(ls,ws):
-                    data[a:b]=(np.ones(size,dtype='int64')*i)
+                    data[a:b]=(np.ones(b-a,dtype='int64')*i)
                     a=b
                 tr[field]=data[mask]
             if fname in particle_star_fields:
                 #we possibly update and change the masses here
                 #all other fields are read in and changed once
+                if starb-stara==0: continue
+                import pdb; pdb.set_trace()
                 temp= read_star_field(file_stars,field=fname)
-                data = np.zeros(npa,dtype="float64")
+                data = np.zeros(starb-stara,dtype="float64")
                 data[stara:starb] = temp
                 del temp
                 tr[field]=data[mask]
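
For context: lspecies (ls) holds cumulative per-species end indices and wspecies (ws) the per-species masses, so each contiguous block data[a:b] needs b - a values; filling with `size` (the masked count) was the bug. A sketch with invented species:

    import numpy as np

    ls = np.array([100, 150, 160])    # cumulative species boundaries (invented)
    ws = np.array([8.0, 1.0, 0.125])  # per-species masses (invented)
    npa = ls[-1]

    data = np.zeros(npa, dtype="float64")
    a = 0
    for b, m in zip(ls, ws):
        data[a:b] = m  # each block is b - a long; filling with the masked
        a = b          # count instead was the bug being fixed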


https://bitbucket.org/yt_analysis/yt/commits/9cf0a2d6055f/
Changeset:   9cf0a2d6055f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-23 01:18:09
Summary:     fixes for stellar fields
Affected #:  1 file

diff -r e2eb9ebe5004683fc4694d63fe088d74d3228e2e -r 9cf0a2d6055f04237ea816c820cecc084e95286e yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -106,9 +106,8 @@
                 #we possibly update and change the masses here
                 #all other fields are read in and changed once
                 if starb-stara==0: continue
-                import pdb; pdb.set_trace()
                 temp= read_star_field(file_stars,field=fname)
-                data = np.zeros(starb-stara,dtype="float64")
+                data = np.zeros(npa,dtype="float64")
                 data[stara:starb] = temp
                 del temp
                 tr[field]=data[mask]
@@ -282,8 +281,9 @@
     data = {}
     with open(file,'rb') as fh:
         for dtype, variables in star_struct:
-            if field in variables or dtype=='>d' or dtype=='>d':
-                data[field] = read_vector(fh,'f','>')
+            found = field in variables or field==variables
+            if found:
+                data[field] = read_vector(fh,dtype[1],dtype[0])
             else:
                 skip(fh,endian='>')
     return data.pop(field)
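
The corrected read_star_field walks the record list in order, reading the matching record with its own dtype and endianness and skipping the rest (the old code read every record as 'f'). A sketch, with read_vector and skip passed in to stand for yt's fortran-record helpers:

    def read_star_field_sketch(fh, field, star_struct, read_vector, skip):
        # star_struct entries look like ('>f', 'particle_mass') or
        # ('>d', ('tdum', 'adum')): match on the name(s), else skip
        # the record wholesale.
        data = {}
        for dtype, variables in star_struct:
            if field in variables or field == variables:
                data[field] = read_vector(fh, dtype[1], dtype[0])
            else:
                skip(fh, endian=">")
        return data.pop(field)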


https://bitbucket.org/yt_analysis/yt/commits/4a5f1ac29984/
Changeset:   4a5f1ac29984
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-23 01:48:10
Summary:     added particle mass unit conversion; pf['Time']=1.0
Affected #:  2 files

diff -r 9cf0a2d6055f04237ea816c820cecc084e95286e -r 4a5f1ac299847f96280b97b02864094c6ed8bd83 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -293,6 +293,9 @@
         cf["Potential"] = 1.0
         cf["Entropy"] = S_0
         cf["Temperature"] = tr
+        cf["Time"] = 1.0 
+        cf["particle_mass"] = cf['Mass']
+        cf["particle_mass_initial"] = cf['Mass']
         self.cosmological_simulation = True
         self.conversion_factors = cf
         
@@ -315,6 +318,7 @@
         self.unique_identifier = \
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         self.parameters.update(constants)
+        self.parameters['Time'] = 1.0
         #read the amr header
         with open(self.file_amr,'rb') as f:
             amr_header_vals = read_attrs(f,amr_header_struct,'>')

diff -r 9cf0a2d6055f04237ea816c820cecc084e95286e -r 4a5f1ac299847f96280b97b02864094c6ed8bd83 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -98,7 +98,7 @@
             elif fname == "particle_type":
                 a=0
                 data = np.zeros(npa,dtype='int64')
-                for b,m in zip(ls,ws):
+                for i,(b,m) in enumerate(zip(ls,ws)):
                     data[a:b]=(np.ones(b-a,dtype='int64')*i)
                     a=b
                 tr[field]=data[mask]


https://bitbucket.org/yt_analysis/yt/commits/72f4462b2b4f/
Changeset:   72f4462b2b4f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-24 02:40:40
Summary:     typo in particle field definition for vel-z
Affected #:  2 files

diff -r 4a5f1ac299847f96280b97b02864094c6ed8bd83 -r 72f4462b2b4f626fa21dfaa229746d075a923f90 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -57,7 +57,7 @@
     'particle_position_z',
     'particle_velocity_x',
     'particle_velocity_y',
-    'particle_velocity_z'
+    'particle_velocity_z',
     'particle_age', #this and below are stellar only fields
     'particle_mass_initial',
     'particle_creation_time',

diff -r 4a5f1ac299847f96280b97b02864094c6ed8bd83 -r 72f4462b2b4f626fa21dfaa229746d075a923f90 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -56,6 +56,16 @@
               validators = [ValidateDataField(f)],
               particle_type = True)
 
+add_art_field("particle_mass",function=NullFunc,take_log=True,
+            validators=[ValidateDataField(f)],
+            particle_type = True,
+            convert_function= lambda x: x.convert("particle_mass"))
+
+add_art_field("particle_mass_initial",function=NullFunc,take_log=True,
+            validators=[ValidateDataField(f)],
+            particle_type = True,
+            convert_function= lambda x: x.convert("particle_mass"))
+
 #Hydro Fields that are verified to be OK unit-wise:
 #Density
 #Temperature
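
The vel-z typo deserves a note: without the trailing comma, Python's implicit string-literal concatenation silently fuses two list entries into one bogus field name:

    fields = [
        'particle_velocity_z'    # <-- the missing comma: Python silently
        'particle_age',          #     concatenates these two literals
    ]
    assert fields == ['particle_velocity_zparticle_age']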


https://bitbucket.org/yt_analysis/yt/commits/86c383824be8/
Changeset:   86c383824be8
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-24 03:08:15
Summary:     adding stellar index to pf
Affected #:  1 file

diff -r 72f4462b2b4f626fa21dfaa229746d075a923f90 -r 86c383824be8e4f53095c7e79a80005ece6cc28f yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -369,6 +369,7 @@
             self.parameters['wspecies'] = wspecies[:n]
             self.parameters['lspecies'] = lspecies[:n]
             ls_nonzero = np.diff(lspecies)[:n-1]
+            self.star_type = len(ls_nonzero)
             mylog.info("Discovered %i species of particles",len(ls_nonzero))
             mylog.info("Particle populations: "+'%1.1e '*len(ls_nonzero),
                 *ls_nonzero)


https://bitbucket.org/yt_analysis/yt/commits/cd9799c03742/
Changeset:   cd9799c03742
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-25 21:05:08
Summary:     offset particle positions by 1.0/domain dimension
Affected #:  1 file

diff -r 86c383824be8e4f53095c7e79a80005ece6cc28f -r cd9799c037426dc64800c81234850115cc8bd952 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -70,6 +70,7 @@
         pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
                                  total=npa,dd=pf.domain_dimensions)
         pos,vel = pos.astype('float64'), vel.astype('float64')
+        pos -= 1.0/pf.domain_dimensions[0]
         mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
         size = mask.sum()
         if not any(('position' in n for t,n in fields)):


https://bitbucket.org/yt_analysis/yt/commits/b52a79f8960a/
Changeset:   b52a79f8960a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-02-27 00:32:02
Summary:     subtle subchunking problems; disabling for now
Affected #:  2 files

diff -r cd9799c037426dc64800c81234850115cc8bd952 -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -465,7 +465,7 @@
         filled = pos = level_offset = 0
         field_idxs = [all_fields.index(f) for f in fields]
         for field in fields:
-            dest[field] = np.zeros(self.cell_count, 'float64')
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
         level = self.domain_level
         offset = self.domain.level_offsets
         no = self.domain.level_count[level]
@@ -483,7 +483,7 @@
             def subchunk(count,size):
                 for i in range(0,count,size):
                     yield i,i+min(size,count-i)
-            for noct_range in subchunk(no,long(1e5)):
+            for noct_range in subchunk(no,long(1e8)):
                 source = _read_child_level(content,self.domain.level_child_offsets,
                                          self.domain.level_offsets,
                                          self.domain.level_count,level,fields,

diff -r cd9799c037426dc64800c81234850115cc8bd952 -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -178,23 +178,14 @@
 ####### Derived fields
 
 def _temperature(field, data):
-    dg = data["GasEnergy"] #.astype('float64')
-    dg /= data.pf.conversion_factors["GasEnergy"]
-    dd = data["Density"] #.astype('float64')
-    dd /= data.pf.conversion_factors["Density"]
-    tr = dg/dd*data.pf.conversion_factors['tr']
-    #ghost cells have zero density?
-    tr[np.isnan(tr)] = 0.0
-    #dd[di] = -1.0
-    #if data.id==460:
-    #tr[di] = -1.0 #replace the zero-density points with zero temp
-    #print tr.min()
-    #assert np.all(np.isfinite(tr))
+    tr  = data["GasEnergy"]/data["Density"]
+    tr /= data.pf.conversion_factors["GasEnergy"]
+    tr *= data.pf.conversion_factors["Density"]
+    tr *= data.pf.conversion_factors['tr']
     return tr
+
 def _converttemperature(data):
-    #x = data.pf.conversion_factors["Temperature"]
-    x = 1.0
-    return x
+    return 1.0
 add_field("Temperature", function=_temperature, units = r"\mathrm{K}",take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
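
The rewritten _temperature computes the same quantity as before; the difference is that the old version's in-place divisions modified the cached GasEnergy and Density arrays it was handed, while the new one only mutates its own temporary. A quick numeric check with invented values and conversion factors:

    import numpy as np

    E, rho = np.array([2.0, 4.0]), np.array([1.0, 2.0])  # invented values
    cf_E, cf_rho, cf_tr = 3.0, 5.0, 7.0                  # invented factors

    old = (E / cf_E) / (rho / cf_rho) * cf_tr  # what the deleted code computed
    new = (E / rho) / cf_E * cf_rho * cf_tr    # what the in-place version does
    assert np.allclose(old, new)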


https://bitbucket.org/yt_analysis/yt/commits/cb0a927a3f3a/
Changeset:   cb0a927a3f3a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-04 22:47:35
Summary:     Merge
Affected #:  82 files

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 CREDITS
--- a/CREDITS
+++ b/CREDITS
@@ -1,29 +1,41 @@
 YT is a group effort.
 
-Contributors:                   Matthew Turk (matthewturk at gmail.com)
-                                Britton Smith (brittonsmith at gmail.com)
-                                Jeff Oishi (jsoishi at gmail.com)
-                                Stephen Skory (s at skory.us)
-                                Sam Skillman (samskillman at gmail.com)
-                                Devin Silvia (devin.silvia at gmail.com)
-                                John Wise (jwise at astro.princeton.edu)
-                                David Collins (dcollins at physics.ucsd.edu)
-                                Christopher Moody (cemoody at ucsc.edu)
-                                Oliver Hahn (ohahn at stanford.edu)
-                                John ZuHone (jzuhone at cfa.harvard.edu)
-                                Chris Malone (cmalone at mail.astro.sunysb.edu)
-                                Cameron Hummels (chummels at astro.columbia.edu)
-                                Stefan Klemer (sklemer at phys.uni-goettingen.de)
-                                Tom Abel (tabel at stanford.edu)
-                                Andrew Myers (atmyers at astro.berkeley.edu)
-                                Michael Kuhlen (mqk at astro.berkeley.edu)
-                                Casey Stark (caseywstark at gmail.com)
-                                JC Passy (jcpassy at gmail.com)
-                                Eve Lee (elee at cita.utoronto.ca)
-                                Elizabeth Tasker (tasker at astro1.sci.hokudai.ac.jp)
-                                Kacper Kowalik (xarthisius.kk at gmail.com)
-                                Nathan Goldbaum (goldbaum at ucolick.org)
-                                Anna Rosen (rosen at ucolick.org)
+Contributors:                   Tom Abel (tabel at stanford.edu)
+				David Collins (dcollins at physics.ucsd.edu)
+				Brian Crosby (crosby.bd at gmail.com)
+				Andrew Cunningham (ajcunn at gmail.com)
+				Nathan Goldbaum (goldbaum at ucolick.org)
+				Markus Haider (markus.haider at uibk.ac.at)
+				Cameron Hummels (chummels at gmail.com)
+				Christian Karch (chiffre at posteo.de)
+				Ji-hoon Kim (me at jihoonkim.org)
+				Steffen Klemer (sklemer at phys.uni-goettingen.de)
+				Kacper Kowalik (xarthisius.kk at gmail.com)
+				Michael Kuhlen (mqk at astro.berkeley.edu)
+				Eve Lee (elee at cita.utoronto.ca)
+				Yuan Li (yuan at astro.columbia.edu)
+				Chris Malone (chris.m.malone at gmail.com)
+				Josh Maloney (joshua.moloney at colorado.edu)
+				Chris Moody (cemoody at ucsc.edu)
+				Andrew Myers (atmyers at astro.berkeley.edu)
+				Jeff Oishi (jsoishi at gmail.com)
+				Jean-Claude Passy (jcpassy at uvic.ca)
+				Mark Richardson (Mark.L.Richardson at asu.edu)
+				Thomas Robitaille (thomas.robitaille at gmail.com)
+				Anna Rosen (rosen at ucolick.org)
+				Anthony Scopatz (scopatz at gmail.com)
+				Devin Silvia (devin.silvia at colorado.edu)
+				Sam Skillman (samskillman at gmail.com)
+				Stephen Skory (s at skory.us)
+				Britton Smith (brittonsmith at gmail.com)
+				Geoffrey So (gsiisg at gmail.com)
+				Casey Stark (caseywstark at gmail.com)
+				Elizabeth Tasker (tasker at astro1.sci.hokudai.ac.jp)
+				Stephanie Tonnesen (stonnes at gmail.com)
+				Matthew Turk (matthewturk at gmail.com)
+				Rich Wagner (rwagner at physics.ucsd.edu)
+				John Wise (jwise at physics.gatech.edu)
+				John ZuHone (jzuhone at gmail.com)
 
 We also include the Delaunay Triangulation module written by Robert Kern of
 Enthought, the cmdln.py module by Trent Mick, and the progressbar module by

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -7,8 +7,8 @@
 # There are a few options, but you only need to set *one* of them.  And
 # that's the next one, DEST_DIR.  But, if you want to use an existing HDF5
 # installation you can set HDF5_DIR, or if you want to use some other
-# subversion checkout of YT, you can set YT_DIR, too.  (It'll already
-# check the current directory and one up).
+# subversion checkout of yt, you can set YT_DIR, too.  (It'll already
+# check the current directory and one up).
 #
 # And, feel free to drop me a line: matthewturk at gmail.com
 #
@@ -49,7 +49,7 @@
 INST_ROCKSTAR=0 # Install the Rockstar halo finder?
 INST_SCIPY=0    # Install scipy?
 
-# If you've got YT some other place, set this to point to it.
+# If you've got yt some other place, set this to point to it.
 YT_DIR=""
 
 # If you need to pass anything to matplotlib, do so here.
@@ -230,6 +230,27 @@
             MPL_SUPP_CXXFLAGS="${MPL_SUPP_CXXFLAGS} -mmacosx-version-min=10.7"
         fi
     fi
+    if [ -f /etc/SuSE-release ] && [ `grep --count SUSE /etc/SuSE-release` -gt 0 ]
+    then
+        echo "Looks like you're on an OpenSUSE-compatible machine."
+        echo
+        echo "You need to have these packages installed:"
+        echo
+        echo "  * devel_C_C++"
+        echo "  * libopenssl-devel"
+        echo "  * libuuid-devel"
+        echo "  * zip"
+        echo "  * gcc-c++"
+        echo
+        echo "You can accomplish this by executing:"
+        echo
+        echo "$ sudo zypper install -t pattern devel_C_C++"
+        echo "$ sudo zypper install gcc-c++ libopenssl-devel libuuid-devel zip"
+        echo
+        echo "I am also setting special configure arguments to Python to"
+        echo "specify control lib/lib64 issues."
+        PYCONF_ARGS="--libdir=${DEST_DIR}/lib"
+    fi
     if [ -f /etc/lsb-release ] && [ `grep --count buntu /etc/lsb-release` -gt 0 ]
     then
         echo "Looks like you're on an Ubuntu-compatible machine."
@@ -293,9 +314,9 @@
 echo
 echo "========================================================================"
 echo
-echo "Hi there!  This is the YT installation script.  We're going to download"
+echo "Hi there!  This is the yt installation script.  We're going to download"
 echo "some stuff and install it to create a self-contained, isolated"
-echo "environment for YT to run within."
+echo "environment for yt to run within."
 echo
 echo "Inside the installation script you can set a few variables.  Here's what"
 echo "they're currently set to -- you can hit Ctrl-C and edit the values in "
@@ -476,7 +497,7 @@
 echo 'c68a425bacaa7441037910b9166f25b89e1387776a7749a5350793f89b1690350df5f018060c31d03686e7c3ed2aa848bd2b945c96350dc3b6322e087934783a  hdf5-1.8.9.tar.gz' > hdf5-1.8.9.tar.gz.sha512
 echo 'dbefad00fa34f4f21dca0f1e92e95bd55f1f4478fa0095dcf015b4d06f0c823ff11755cd777e507efaf1c9098b74af18f613ec9000e5c3a5cc1c7554fb5aefb8  libpng-1.5.12.tar.gz' > libpng-1.5.12.tar.gz.sha512
 echo '5b1a0fb52dcb21ca5f0ab71c8a49550e1e8cf633552ec6598dc43f0b32c03422bf5af65b30118c163231ecdddfd40846909336f16da318959106076e80a3fad0  matplotlib-1.2.0.tar.gz' > matplotlib-1.2.0.tar.gz.sha512
-echo '52d1127de2208aaae693d16fef10ffc9b8663081bece83b7597d65706e9568af3b9e56bd211878774e1ebed92e21365ee9c49602a0ff5e48f89f12244d79c161  mercurial-2.4.tar.gz' > mercurial-2.4.tar.gz.sha512
+echo '91693ca5f34934956a7c2c98bb69a5648b2a5660afd2ecf4a05035c5420450d42c194eeef0606d7683e267e4eaaaab414df23f30b34c88219bdd5c1a0f1f66ed  mercurial-2.5.1.tar.gz' > mercurial-2.5.1.tar.gz.sha512
 echo 'de3dd37f753614055dcfed910e9886e03688b8078492df3da94b1ec37be796030be93291cba09e8212fffd3e0a63b086902c3c25a996cf1439e15c5b16e014d9  numpy-1.6.1.tar.gz' > numpy-1.6.1.tar.gz.sha512
 echo '5ad681f99e75849a5ca6f439c7a19bb51abc73d121b50f4f8e4c0da42891950f30407f761a53f0fe51b370b1dbd4c4f5a480557cb2444c8c7c7d5412b328a474  sqlite-autoconf-3070500.tar.gz' > sqlite-autoconf-3070500.tar.gz.sha512
 echo 'edae735960279d92acf58e1f4095c6392a7c2059b8f1d2c46648fc608a0fb06b392db2d073f4973f5762c034ea66596e769b95b3d26ad963a086b9b2d09825f2  zlib-1.2.3.tar.bz2' > zlib-1.2.3.tar.bz2.sha512
@@ -509,7 +530,7 @@
 get_ytproject Python-2.7.3.tgz
 get_ytproject numpy-1.6.1.tar.gz
 get_ytproject matplotlib-1.2.0.tar.gz
-get_ytproject mercurial-2.4.tar.gz
+get_ytproject mercurial-2.5.1.tar.gz
 get_ytproject ipython-0.13.1.tar.gz
 get_ytproject h5py-2.1.0.tar.gz
 get_ytproject Cython-0.17.1.tar.gz
@@ -636,10 +657,10 @@
 
 if [ ! -e Python-2.7.3/done ]
 then
-    echo "Installing Python.  This may take a while, but don't worry.  YT loves you."
+    echo "Installing Python.  This may take a while, but don't worry.  yt loves you."
     [ ! -e Python-2.7.3 ] && tar xfz Python-2.7.3.tgz
     cd Python-2.7.3
-    ( ./configure --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
+    ( ./configure --prefix=${DEST_DIR}/ ${PYCONF_ARGS} 2>&1 ) 1>> ${LOG_FILE} || do_exit
 
     ( make ${MAKE_PROCS} 2>&1 ) 1>> ${LOG_FILE} || do_exit
     ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
@@ -654,7 +675,7 @@
 if [ $INST_HG -eq 1 ]
 then
     echo "Installing Mercurial."
-    do_setup_py mercurial-2.4
+    do_setup_py mercurial-2.5.1
     export HG_EXEC=${DEST_DIR}/bin/hg
 else
     # We assume that hg can be found in the path.

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -36,14 +36,20 @@
 speed_of_light_kms = speed_of_light_cgs * km_per_cm
 
 class AbsorptionSpectrum(object):
+    r"""Create an absorption spectrum object.
+
+    Parameters
+    ----------
+
+    lambda_min : float
+       lower wavelength bound in angstroms.
+    lambda_max : float
+       upper wavelength bound in angstroms.
+    n_lambda : float
+       number of wavelength bins.
+    """
+
     def __init__(self, lambda_min, lambda_max, n_lambda):
-        """
-        Create an absorption spectrum object.
-        :param lambda_min (float): lower wavelength bound in angstroms.
-        :param lambda_max (float): upper wavelength bound in angstroms.
-        :param n_lambda (float): number of wavelength bins.
-        """
-
         self.n_lambda = n_lambda
         self.tau_field = None
         self.flux_field = None
@@ -56,16 +62,24 @@
     def add_line(self, label, field_name, wavelength,
                  f_value, gamma, atomic_mass,
                  label_threshold=None):
+        r"""Add an absorption line to the list of lines included in the spectrum.
+
+        Parameters
+        ----------
+        
+        label : string
+           label for the line.
+        field_name : string
+           field name from ray data for column densities.
+        wavelength : float
+           line rest wavelength in angstroms.
+        f_value  : float
+           line f-value.
+        gamma : float
+           line gamme value.
+        atomic_mass : float
+           mass of atom in amu.
         """
-        Add an absorption line to the list of lines included in the spectrum.
-        :param label (string): label for the line.
-        :param field_name (string): field name from ray data for column densities.
-        :param wavelength (float): line rest wavelength in angstroms.
-        :param f_value (float): line f-value.
-        :param gamma (float): line gamme value.
-        :param atomic_mass (float): mass of atom in amu.
-        """
-
         self.line_list.append({'label': label, 'field_name': field_name,
                                'wavelength': wavelength, 'f_value': f_value,
                                'gamma': gamma, 'atomic_mass': atomic_mass,
@@ -75,11 +89,20 @@
                       normalization, index):
         """
         Add a continuum feature that follows a power-law.
-        :param label (string): label for the feature.
-        :param field_name (string): field name from ray data for column densities.
-        :param wavelength (float): line rest wavelength in angstroms.
-        :param normalization (float): the column density normalization.
-        :param index (float): the power-law index for the wavelength dependence.
+
+        Parameters
+        ----------
+
+        label : string
+           label for the feature.
+        field_name : string
+           field name from ray data for column densities.
+        wavelength : float
+           line rest wavelength in angstroms.
+        normalization : float
+           the column density normalization.
+        index : float
+           the power-law index for the wavelength dependence.
         """
 
         self.continuum_list.append({'label': label, 'field_name': field_name,
@@ -92,14 +115,17 @@
                       use_peculiar_velocity=True):
         """
         Make spectrum from ray data using the line list.
-        :param input_file (string): path to input ray data.
-        :param output_file (string): path for output file.
-               File formats are chosen based on the filename extension.
-                    - .h5: hdf5.
-                    - .fits: fits.
-                    - anything else: ascii.
-        :param use_peculiar_velocity (bool): if True, include line of sight
-        velocity for shifting lines.
+
+        Parameters
+        ----------
+
+        input_file : string
+           path to input ray data.
+        output_file : string
+           path for output file.  File formats are chosen based on the filename extension.
+           ``.h5`` for hdf5, ``.fits`` for fits, and everything else is ASCII.
+        use_peculiar_velocity : bool
+           if True, include line of sight velocity for shifting lines.
         """
 
         input_fields = ['dl', 'redshift', 'Temperature']

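A minimal usage sketch of the API documented above, assuming the module's
api re-exports AbsorptionSpectrum and that a LightRay output such as
'ray.h5' is on hand; the 'HI_NumberDensity' field name is illustrative,
while the line parameters are the standard Lyman-alpha values:

    from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum

    # A 900-1800 angstrom window sampled with 10000 wavelength bins.
    sp = AbsorptionSpectrum(900.0, 1800.0, 10000)
    # Lyman-alpha: rest wavelength 1215.67 A, f-value 0.4164,
    # gamma 6.265e8 s^-1, atomic mass 1.00794 amu.
    sp.add_line('Lya', 'HI_NumberDensity', 1215.67,
                0.4164, 6.265e8, 1.00794)
    # The output format follows from the extension: .h5 -> hdf5.
    sp.make_spectrum('ray.h5', 'spectrum.h5', use_peculiar_velocity=True)
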
diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/api.py
--- a/yt/analysis_modules/api.py
+++ b/yt/analysis_modules/api.py
@@ -106,8 +106,9 @@
     RadialColumnDensity
 
 from .spectral_integrator.api import \
-    SpectralFrequencyIntegrator, \
-    create_table_from_textfiles
+     add_xray_emissivity_field, \
+     add_xray_luminosity_field, \
+     add_xray_photon_emissivity_field
 
 from .star_analysis.api import \
     StarFormationRate, \

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
+++ b/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
@@ -49,6 +49,64 @@
      _light_cone_projection
 
 class LightCone(CosmologySplice):
+    """
+    Initialize a LightCone object.
+
+    Parameters
+    ----------
+    near_redshift : float
+        The near (lowest) redshift for the light cone.
+    far_redshift : float
+        The far (highest) redshift for the light cone.
+    observer_redshift : float
+        The redshift of the observer.
+        Default: 0.0.
+    field_of_view_in_arcminutes : float
+        The field of view of the image in units of arcminutes.
+        Default: 600.0.
+    image_resolution_in_arcseconds : float
+        The size of each image pixel in units of arcseconds.
+        Default: 60.0.
+    use_minimum_datasets : bool
+        If True, the minimum number of datasets is used to connect the initial
+        and final redshift.  If False, the light cone solution will contain
+        as many entries as possible within the redshift interval.
+        Default: True.
+    deltaz_min : float
+        Specifies the minimum :math:`\Delta z` between consecutive datasets in
+        the returned list.
+        Default: 0.0.
+    minimum_coherent_box_fraction : float
+        Used with use_minimum_datasets set to False, this parameter specifies
+        the fraction of the total box size to be traversed before rerandomizing
+        the projection axis and center.  This was invented to allow light cones
+        with thin slices to sample coherent large scale structure, but in
+        practice does not work so well.  Try setting this parameter to 1 and
+        see what happens.
+        Default: 0.0.
+    time_data : bool
+        Whether or not to include time outputs when gathering
+        datasets for time series.
+        Default: True.
+    redshift_data : bool
+        Whether or not to include redshift outputs when gathering
+        datasets for time series.
+        Default: True.
+    find_outputs : bool
+        Whether or not to search for parameter files in the current 
+        directory.
+        Default: False.
+    set_parameters : dict
+        Dictionary of parameters to attach to pf.parameters.
+        Default: None.
+    output_dir : string
+        The directory in which images and data files will be written.
+        Default: 'LC'.
+    output_prefix : string
+        The prefix of all images and data files.
+        Default: 'LightCone'.
+
+    """
     def __init__(self, parameter_filename, simulation_type,
                  near_redshift, far_redshift,
                  observer_redshift=0.0,
@@ -59,64 +117,6 @@
                  time_data=True, redshift_data=True,
                  find_outputs=False, set_parameters=None,
                  output_dir='LC', output_prefix='LightCone'):
-        """
-        Initialize a LightCone object.
-
-        Parameters
-        ----------
-        near_redshift : float
-            The near (lowest) redshift for the light cone.
-        far_redshift : float
-            The far (highest) redshift for the light cone.
-        observer_redshift : float
-            The redshift of the observer.
-            Default: 0.0.
-        field_of_view_in_arcminutes : float
-            The field of view of the image in units of arcminutes.
-            Default: 600.0.
-        image_resolution_in_arcseconds : float
-            The size of each image pixel in units of arcseconds.
-            Default: 60.0.
-        use_minimum_datasets : bool
-            If True, the minimum number of datasets is used to connect the initial
-            and final redshift.  If false, the light cone solution will contain
-            as many entries as possible within the redshift interval.
-            Default: True.
-        deltaz_min : float
-            Specifies the minimum :math:`\Delta z` between consecutive datasets in
-            the returned list.
-            Default: 0.0.
-        minimum_coherent_box_fraction : float
-            Used with use_minimum_datasets set to False, this parameter specifies
-            the fraction of the total box size to be traversed before rerandomizing
-            the projection axis and center.  This was invented to allow light cones
-            with thin slices to sample coherent large scale structure, but in
-            practice does not work so well.  Try setting this parameter to 1 and
-            see what happens.
-            Default: 0.0.
-        time_data : bool
-            Whether or not to include time outputs when gathering
-            datasets for time series.
-            Default: True.
-        redshift_data : bool
-            Whether or not to include redshift outputs when gathering
-            datasets for time series.
-            Default: True.
-        find_outputs : bool
-            Whether or not to search for parameter files in the current 
-            directory.
-            Default: False.
-        set_parameters : dict
-            Dictionary of parameters to attach to pf.parameters.
-            Default: None.
-        output_dir : string
-            The directory in which images and data files will be written.
-            Default: 'LC'.
-        output_prefix : string
-            The prefix of all images and data files.
-            Default: 'LightCone'.
-
-        """
 
         self.near_redshift = near_redshift
         self.far_redshift = far_redshift

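A construction sketch matching the relocated docstring, assuming an Enzo
simulation described by a hypothetical 'simulation.par' and that the
package's api module re-exports the class:

    from yt.analysis_modules.cosmological_observation.light_cone.api import LightCone

    # Stack datasets between z = 0.0 and z = 0.4 into a 600-arcminute,
    # 60-arcsecond-per-pixel light cone, written under 'LC/'.
    lc = LightCone('simulation.par', 'Enzo',
                   near_redshift=0.0, far_redshift=0.4,
                   observer_redshift=0.0,
                   field_of_view_in_arcminutes=600.0,
                   image_resolution_in_arcseconds=60.0,
                   use_minimum_datasets=True,
                   output_dir='LC', output_prefix='LightCone')
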
diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -40,66 +40,66 @@
     parallel_root_only
 
 class LightRay(CosmologySplice):
+    """
+    Create a LightRay object.  A light ray is much like a light cone,
+    in that it stacks together multiple datasets in order to extend a
+    redshift interval.  Unlike a light cone, which does randomly
+    oriented projections for each dataset, a light ray consists of
+    randomly oriented single rays.  The purpose of these is to create
+    synthetic QSO lines of sight.
+
+    Once the LightRay object is set up, use LightRay.make_light_ray to
+    begin making rays.  Different randomizations can be created with a
+    single object by providing different random seeds to make_light_ray.
+
+    Parameters
+    ----------
+    parameter_filename : string
+        The simulation parameter file.
+    simulation_type : string
+        The simulation type.
+    near_redshift : float
+        The near (lowest) redshift for the light ray.
+    far_redshift : float
+        The far (highest) redshift for the light ray.
+    use_minimum_datasets : bool
+        If True, the minimum number of datasets is used to connect the
+        initial and final redshift.  If False, the light ray solution
+        will contain as many entries as possible within the redshift
+        interval.
+        Default: True.
+    deltaz_min : float
+        Specifies the minimum :math:`\Delta z` between consecutive
+        datasets in the returned list.
+        Default: 0.0.
+    minimum_coherent_box_fraction : float
+        Used with use_minimum_datasets set to False, this parameter
+        specifies the fraction of the total box size to be traversed
+        before rerandomizing the projection axis and center.  This
+        was invented to allow light rays with thin slices to sample
+        coherent large scale structure, but in practice does not work
+        so well.  Try setting this parameter to 1 and see what happens.
+        Default: 0.0.
+    time_data : bool
+        Whether or not to include time outputs when gathering
+        datasets for time series.
+        Default: True.
+    redshift_data : bool
+        Whether or not to include redshift outputs when gathering
+        datasets for time series.
+        Default: True.
+    find_outputs : bool
+        Whether or not to search for parameter files in the current 
+        directory.
+        Default: False.
+
+    """
     def __init__(self, parameter_filename, simulation_type,
                  near_redshift, far_redshift,
                  use_minimum_datasets=True, deltaz_min=0.0,
                  minimum_coherent_box_fraction=0.0,
                  time_data=True, redshift_data=True,
                  find_outputs=False):
-        """
-        Create a LightRay object.  A light ray is much like a light cone,
-        in that it stacks together multiple datasets in order to extend a
-        redshift interval.  Unlike a light cone, which does randomly
-        oriented projections for each dataset, a light ray consists of
-        randomly oriented single rays.  The purpose of these is to create
-        synthetic QSO lines of sight.
-
-        Once the LightRay object is set up, use LightRay.make_light_ray to
-        begin making rays.  Different randomizations can be created with a
-        single object by providing different random seeds to make_light_ray.
-
-        Parameters
-        ----------
-        parameter_filename : string
-            The simulation parameter file.
-        simulation_type : string
-            The simulation type.
-        near_redshift : float
-            The near (lowest) redshift for the light ray.
-        far_redshift : float
-            The far (highest) redshift for the light ray.
-        use_minimum_datasets : bool
-            If True, the minimum number of datasets is used to connect the
-            initial and final redshift.  If false, the light ray solution
-            will contain as many entries as possible within the redshift
-            interval.
-            Default: True.
-        deltaz_min : float
-            Specifies the minimum :math:`\Delta z` between consecutive
-            datasets in the returned list.
-            Default: 0.0.
-        minimum_coherent_box_fraction : float
-            Used with use_minimum_datasets set to False, this parameter
-            specifies the fraction of the total box size to be traversed
-            before rerandomizing the projection axis and center.  This
-            was invented to allow light rays with thin slices to sample
-            coherent large scale structure, but in practice does not work
-            so well.  Try setting this parameter to 1 and see what happens.
-            Default: 0.0.
-        time_data : bool
-            Whether or not to include time outputs when gathering
-            datasets for time series.
-            Default: True.
-        redshift_data : bool
-            Whether or not to include redshift outputs when gathering
-            datasets for time series.
-            Default: True.
-        find_outputs : bool
-            Whether or not to search for parameter files in the current 
-            directory.
-            Default: False.
-
-        """
 
         self.near_redshift = near_redshift
         self.far_redshift = far_redshift
@@ -270,47 +270,43 @@
         Examples
         --------
 
-        from yt.mods import *
-        from yt.analysis_modules.halo_profiler.api import *
-        from yt.analysis_modules.cosmological_analysis.light_ray.api import LightRay
-
-        halo_profiler_kwargs = {'halo_list_file': 'HopAnalysis.out'}
-
-        halo_profiler_actions = []
-        # Add a virial filter.
-        halo_profiler_actions.append({'function': add_halo_filter,
-                                      'args': VirialFilter,
-                                      'kwargs': {'overdensity_field': 'ActualOverdensity',
-                                                 'virial_overdensity': 200,
-                                                 'virial_filters': \
-                                                     [['TotalMassMsun','>=','1e14']],
-                                                 'virial_quantities': \
-                                                     ['TotalMassMsun','RadiusMpc']}})
-        # Make the profiles.
-        halo_profiler_actions.append({'function': make_profiles,
-                                      'args': None,
-                                      'kwargs': {'filename': 'VirializedHalos.out'}})
-
-        halo_list = 'filtered'
-
-        halo_profiler_parameters = dict(halo_profiler_kwargs=halo_profiler_kwargs,
-                                        halo_profiler_actions=halo_profiler_actions,
-                                        halo_list=halo_list)
-
-        my_ray = LightRay('simulation.par', 'Enzo', 0., 0.1,
-                          use_minimum_datasets=True,
-                          time_data=False)
-
-        my_ray.make_light_ray(seed=12345,
-                              solution_filename='solution.txt',
-                              data_filename='my_ray.h5',
-                              fields=['Temperature', 'Density'],
-                              get_nearest_halo=True,
-                              nearest_halo_fields=['TotalMassMsun_100',
-                                                   'RadiusMpc_100'],
-                              halo_profiler_parameters=halo_profiler_parameters,
-                              get_los_velocity=True)
-
+        >>> from yt.mods import *
+        >>> from yt.analysis_modules.halo_profiler.api import *
+        >>> from yt.analysis_modules.cosmological_observation.light_ray.api import LightRay
+        >>> halo_profiler_kwargs = {'halo_list_file': 'HopAnalysis.out'}
+        >>> halo_profiler_actions = []
+        >>> # Add a virial filter.
+        >>> halo_profiler_actions.append({'function': add_halo_filter,
+        ...                           'args': VirialFilter,
+        ...                           'kwargs': {'overdensity_field': 'ActualOverdensity',
+        ...                                      'virial_overdensity': 200,
+        ...                                      'virial_filters': [['TotalMassMsun','>=','1e14']],
+        ...                                      'virial_quantities': ['TotalMassMsun','RadiusMpc']}})
+        ...
+        >>> # Make the profiles.
+        >>> halo_profiler_actions.append({'function': make_profiles,
+        ...                           'args': None,
+        ...                           'kwargs': {'filename': 'VirializedHalos.out'}})
+        ...
+        >>> halo_list = 'filtered'
+        >>> halo_profiler_parameters = dict(halo_profiler_kwargs=halo_profiler_kwargs,
+        ...                             halo_profiler_actions=halo_profiler_actions,
+        ...                             halo_list=halo_list)
+        ...
+        >>> my_ray = LightRay('simulation.par', 'Enzo', 0., 0.1,
+        ...                use_minimum_datasets=True,
+        ...                time_data=False)
+        ...
+        >>> my_ray.make_light_ray(seed=12345,
+        ...                   solution_filename='solution.txt',
+        ...                   data_filename='my_ray.h5',
+        ...                   fields=['Temperature', 'Density'],
+        ...                   get_nearest_halo=True,
+        ...                   nearest_halo_fields=['TotalMassMsun_100',
+        ...                                        'RadiusMpc_100'],
+        ...                   halo_profiler_parameters=halo_profiler_parameters,
+        ...                   get_los_velocity=True)
+        
         """
 
         if halo_profiler_parameters is None:

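The docstring example above exercises the halo-profiler path; a minimal
sketch of the simpler case, with 'simulation.par' again hypothetical:

    from yt.analysis_modules.cosmological_observation.light_ray.api import LightRay

    # Splice datasets covering 0 < z < 0.1, then cast a single ray.
    lr = LightRay('simulation.par', 'Enzo', 0.0, 0.1,
                  use_minimum_datasets=True)
    lr.make_light_ray(seed=12345,
                      fields=['Temperature', 'Density'],
                      data_filename='my_ray.h5',
                      get_los_velocity=True)
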
diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -142,18 +142,30 @@
         if self.CoM is not None:
             return self.CoM
         pm = self["ParticleMassMsun"]
-        cx = self["particle_position_x"]
-        cy = self["particle_position_y"]
-        cz = self["particle_position_z"]
-        if isinstance(self, FOFHalo):
-            c_vec = np.array([cx[0], cy[0], cz[0]]) - self.pf.domain_center
-        else:
-            c_vec = self.maximum_density_location() - self.pf.domain_center
-        cx = (cx - c_vec[0])
-        cy = (cy - c_vec[1])
-        cz = (cz - c_vec[2])
-        com = np.array([v - np.floor(v) for v in [cx, cy, cz]])
-        return (com * pm).sum(axis=1) / pm.sum() + c_vec
+        c = {}
+        c[0] = self["particle_position_x"]
+        c[1] = self["particle_position_y"]
+        c[2] = self["particle_position_z"]
+        c_vec = np.zeros(3)
+        com = []
+        for i in range(3):
+            # A halo is likely periodic around a boundary if the distance
+            # between the max and min particle
+            # positions is larger than half the box.
+            # So skip the rest if the converse is true.
+            # Note we might make a change here when periodicity-handling is
+            # fully implemented.
+            if (c[i].max() - c[i].min()) < (self.pf.domain_width[i] / 2.):
+                com.append(c[i])
+                continue
+            # Now we want to flip around only those close to the left boundary.
+            d_left = c[i] - self.pf.domain_left_edge[i]
+            sel = (d_left <= (self.pf.domain_width[i]/2))
+            c[i][sel] += self.pf.domain_width[i]
+            com.append(c[i])
+        com = np.array(com)
+        c = (com * pm).sum(axis=1) / pm.sum()
+        return c % self.pf.domain_width
 
     def maximum_density(self):
         r"""Return the HOP-identified maximum density. Not applicable to
@@ -809,7 +821,6 @@
     _radjust = 1.05
 
     def __init__(self, pf, id, size=None, CoM=None,
-
         max_dens_point=None, group_total_mass=None, max_radius=None, bulk_vel=None,
         rms_vel=None, fnames=None, mag_A=None, mag_B=None, mag_C=None,
         e1_vec=None, tilt=None, supp=None):
@@ -843,6 +854,10 @@
             self.supp = {}
         else:
             self.supp = supp
+        self._saved_fields = {}
+        self._ds_sort = None
+        self._particle_mask = None
+
 
     def __getitem__(self, key):
         # This function will try to get particle data in one of three ways,
@@ -1059,6 +1074,7 @@
         mylog.info("Parsing outputs")
         self._parse_output()
         mylog.debug("Finished. (%s)", len(self))
+        self.redshift = redshift
 
     def __obtain_particles(self):
         if self.dm_only:
@@ -1074,8 +1090,7 @@
             else:
                 self.particle_fields[field] = \
                     self._data_source[field][ii].astype('float64')
-            print 'snl in halo_objects field',self._fields, 'had to remove delete field'
-            #del self._data_source[field]
+            del self._data_source[field]
         self._base_indices = np.arange(tot_part)[ii]
         gc.collect()
 
@@ -1243,6 +1258,7 @@
         else:
             f = open(filename, "w")
         f.write("# HALOS FOUND WITH %s\n" % (self._name))
+        f.write("# REDSHIFT OF OUTPUT = %f\n" % (self.redshift))
 
         if not ellipsoid_data:
             f.write("\t".join(["# Group","Mass","# part","max dens"
@@ -1439,18 +1455,17 @@
         pass
 
 class HOPHaloList(HaloList):
-
+    """
+    Run hop on *data_source* with a given density *threshold*.  If
+    *dm_only* is set, only run it on the dark matter particles, otherwise
+    on all particles.  Returns an iterable collection of *HopGroup* items.
+    """
     _name = "HOP"
     _halo_class = HOPHalo
     _fields = ["particle_position_%s" % ax for ax in 'xyz'] + \
               ["ParticleMassMsun"]
 
     def __init__(self, data_source, threshold=160.0, dm_only=True):
-        """
-        Run hop on *data_source* with a given density *threshold*.  If
-        *dm_only* is set, only run it on the dark matter particles, otherwise
-        on all particles.  Returns an iterable collection of *HopGroup* items.
-        """
         self.threshold = threshold
         mylog.info("Initializing HOP")
         HaloList.__init__(self, data_source, dm_only)
@@ -1488,10 +1503,10 @@
     _name = "FOF"
     _halo_class = FOFHalo
 
-    def __init__(self, data_source, link=0.2, dm_only=True):
+    def __init__(self, data_source, link=0.2, dm_only=True, redshift=-1):
         self.link = link
         mylog.info("Initializing FOF")
-        HaloList.__init__(self, data_source, dm_only)
+        HaloList.__init__(self, data_source, dm_only, redshift=redshift)
 
     def _run_finder(self):
         self.tags = \
@@ -1639,6 +1654,11 @@
 
 
 class parallelHOPHaloList(HaloList, ParallelAnalysisInterface):
+    """
+    Run hop on *data_source* with a given density *threshold*.  If
+    *dm_only* is set, only run it on the dark matter particles, otherwise
+    on all particles.  Returns an iterable collection of *HopGroup* items.
+    """
     _name = "parallelHOP"
     _halo_class = parallelHOPHalo
     _fields = ["particle_position_%s" % ax for ax in 'xyz'] + \
@@ -1647,11 +1667,6 @@
     def __init__(self, data_source, padding, num_neighbors, bounds, total_mass,
         period, threshold=160.0, dm_only=True, rearrange=True, premerge=True,
         tree='F'):
-        """
-        Run hop on *data_source* with a given density *threshold*.  If
-        *dm_only* is set, only run it on the dark matter particles, otherwise
-        on all particles.  Returns an iterable collection of *HopGroup* items.
-        """
         ParallelAnalysisInterface.__init__(self)
         self.threshold = threshold
         self.num_neighbors = num_neighbors
@@ -1993,6 +2008,10 @@
         --------
         >>> halos.write_out("HopAnalysis.out")
         """
+        # if a path is given in filename, ensure it exists
+        if len(filename.split('/')) > 1:
+            mkdir_rec('/'.join(filename.split('/')[:-1]))
+
         f = self.comm.write_on_root(filename)
         HaloList.write_out(self, f, ellipsoid_data)
 
@@ -2012,6 +2031,10 @@
         --------
         >>> halos.write_particle_lists_txt("halo-parts")
         """
+        # if a path is given in prefix, ensure it exists
+        if len(prefix.split('/')) > 1:
+            mkdir_rec('/'.join(prefix.split('/')[:-1]))
+
         f = self.comm.write_on_root("%s.txt" % prefix)
         HaloList.write_particle_lists_txt(self, prefix, fp=f)
 
@@ -2035,6 +2058,10 @@
         --------
         >>> halos.write_particle_lists("halo-parts")
         """
+        # if a path is given in prefix, ensure it exists
+        if len(prefix.split('/')) > 1:
+            mkdir_rec('/'.join(prefix.split('/')[:-1]))
+
         fn = "%s.h5" % self.comm.get_filename(prefix)
         f = h5py.File(fn, "w")
         for halo in self._groups:
@@ -2068,94 +2095,98 @@
         --------
         >>> halos.dump("MyHalos")
         """
+        # if a path is given in basename, ensure it exists
+        if len(basename.split('/')) > 1:
+            mkdir_rec('/'.join(basename.split('/')[:-1]))
+
         self.write_out("%s.out" % basename, ellipsoid_data)
         self.write_particle_lists(basename)
         self.write_particle_lists_txt(basename)
 
 
 class parallelHF(GenericHaloFinder, parallelHOPHaloList):
+    r"""Parallel HOP halo finder.
+
+    Halos are built by:
+    1. Calculating a density for each particle based on a smoothing kernel.
+    2. Recursively linking particles to other particles from lower density
+    particles to higher.
+    3. Identifying geometrically proximate chains.
+    4. Merging chains into final halos following merging rules.
+
+    Lower thresholds generally produce more halos, and the largest halos
+    become larger. Also, halos become more filamentary and over-connected.
+
+    This is very similar to HOP, but it does not produce precisely the
+    same halos due to unavoidable numerical differences.
+
+    Skory et al. "Parallel HOP: A Scalable Halo Finder for Massive
+    Cosmological Data Sets." arXiv (2010) 1001.3411
+
+    Parameters
+    ----------
+    pf : `StaticOutput`
+        The parameter file on which halo finding will be conducted.
+    threshold : float
+        The density threshold used when building halos. Default = 160.0.
+    dm_only : bool
+        If True, only dark matter particles are used when building halos.
+        Default = True.
+    resize : bool
+        Turns load-balancing on or off. Default = True.
+    tree : string
+        Chooses which kD Tree to use. The Fortran one (tree = 'F') is
+        faster, but uses more memory. The Cython one (tree = 'C') is
+        slower but is more memory efficient.
+        Default = 'F'.
+    rearrange : bool
+        Turns on faster nearest neighbor searches at the cost of increased
+        memory usage.
+        This option only applies when using the Fortran tree.
+        Default = True.
+    fancy_padding : bool
+        True calculates padding independently for each face of each
+        subvolume. Default = True.
+    safety : float
+        Due to variances in inter-particle spacing in the volume, the
+        padding may need to be increased above the raw calculation.
+        This number is multiplied to the calculated padding, and values
+        >1 increase the padding. Default = 1.5.
+    premerge : bool
+        True merges chains in two steps (rather than one with False), which
+        can speed up halo finding by 25% or more. However, True can result
+        in small (<<1%) variations in the final halo masses when compared
+        to False. Default = True.
+    sample : float
+        The fraction of the full dataset on which load-balancing is
+        performed. Default = 0.03.
+    total_mass : float
+        If HOP is run on the same dataset multiple times, the total mass
+        of particles in Msun units in the full volume can be supplied here
+        to save time.
+        This must correspond to the particles being operated on, meaning
+        if stars are included in the halo finding, they must be included
+        in this mass as well, and vice versa.
+        If halo finding on a subvolume, this still corresponds with the
+        mass in the entire volume.
+        Default = None, which means the total mass is automatically
+        calculated.
+    num_particles : integer
+        The total number of particles in the volume, in the same fashion
+        as `total_mass` is calculated. Specifying this turns off
+        fancy_padding.
+        Default = None, which means the number of particles is
+        automatically calculated.
+
+    Examples
+    --------
+    >>> pf = load("RedshiftOutput0000")
+    >>> halos = parallelHF(pf)
+    """
     def __init__(self, pf, subvolume=None, threshold=160, dm_only=True, \
         resize=True, rearrange=True,\
         fancy_padding=True, safety=1.5, premerge=True, sample=0.03, \
         total_mass=None, num_particles=None, tree='F'):
-        r"""Parallel HOP halo finder.
-
-        Halos are built by:
-        1. Calculating a density for each particle based on a smoothing kernel.
-        2. Recursively linking particles to other particles from lower density
-        particles to higher.
-        3. Geometrically proximate chains are identified and
-        4. merged into final halos following merging rules.
-
-        Lower thresholds generally produce more halos, and the largest halos
-        become larger. Also, halos become more filamentary and over-connected.
-
-        This is very similar to HOP, but it does not produce precisely the
-        same halos due to unavoidable numerical differences.
-
-        Skory et al. "Parallel HOP: A Scalable Halo Finder for Massive
-        Cosmological Data Sets." arXiv (2010) 1001.3411
-
-        Parameters
-        ----------
-        pf : `StaticOutput`
-            The parameter file on which halo finding will be conducted.
-        threshold : float
-            The density threshold used when building halos. Default = 160.0.
-        dm_only : bool
-            If True, only dark matter particles are used when building halos.
-            Default = False.
-        resize : bool
-            Turns load-balancing on or off. Default = True.
-        kdtree : string
-            Chooses which kD Tree to use. The Fortran one (kdtree = 'F') is
-            faster, but uses more memory. The Cython one (kdtree = 'C') is
-            slower but is more memory efficient.
-            Default = 'F'
-        rearrange : bool
-            Turns on faster nearest neighbor searches at the cost of increased
-            memory usage.
-            This option only applies when using the Fortran tree.
-            Default = True.
-        fancy_padding : bool
-            True calculates padding independently for each face of each
-            subvolume. Default = True.
-        safety : float
-            Due to variances in inter-particle spacing in the volume, the
-            padding may need to be increased above the raw calculation.
-            This number is multiplied to the calculated padding, and values
-            >1 increase the padding. Default = 1.5.
-        premerge : bool
-            True merges chains in two steps (rather than one with False), which
-            can speed up halo finding by 25% or more. However, True can result
-            in small (<<1%) variations in the final halo masses when compared
-            to False. Default = True.
-        sample : float
-            The fraction of the full dataset on which load-balancing is
-            performed. Default = 0.03.
-        total_mass : float
-            If HOP is run on the same dataset mulitple times, the total mass
-            of particles in Msun units in the full volume can be supplied here
-            to save time.
-            This must correspond to the particles being operated on, meaning
-            if stars are included in the halo finding, they must be included
-            in this mass as well, and visa-versa.
-            If halo finding on a subvolume, this still corresponds with the
-            mass in the entire volume.
-            Default = None, which means the total mass is automatically
-            calculated.
-        num_particles : integer
-            The total number of particles in the volume, in the same fashion
-            as `total_mass` is calculated. Specifying this turns off
-            fancy_padding.
-            Default = None, which means the number of particles is
-            automatically calculated.
-
-        Examples
-        -------
-        >>> pf = load("RedshiftOutput0000")
-        >>> halos = parallelHF(pf)
-        """
         if subvolume is not None:
             ds_LE = np.array(subvolume.left_edge)
             ds_RE = np.array(subvolume.right_edge)
@@ -2402,58 +2433,58 @@
 
 
 class HOPHaloFinder(GenericHaloFinder, HOPHaloList):
+    r"""HOP halo finder.
+
+    Halos are built by:
+    1. Calculating a density for each particle based on a smoothing kernel.
+    2. Recursively linking particles to other particles from lower density
+    particles to higher.
+    3. Identifying geometrically proximate chains.
+    4. Merging chains into final halos following merging rules.
+
+    Lower thresholds generally produce more halos, and the largest halos
+    become larger. Also, halos become more filamentary and over-connected.
+
+    Eisenstein and Hut. "HOP: A New Group-Finding Algorithm for N-Body
+    Simulations." ApJ (1998) vol. 498 pp. 137-142
+
+    Parameters
+    ----------
+    pf : `StaticOutput`
+        The parameter file on which halo finding will be conducted.
+    subvolume : `yt.data_objects.api.AMRData`, optional
+        A region over which HOP will be run, which can be used to run HOP
+        on a subvolume of the full volume. Default = None, which defaults
+        to the full volume automatically.
+    threshold : float
+        The density threshold used when building halos. Default = 160.0.
+    dm_only : bool
+        If True, only dark matter particles are used when building halos.
+        Default = True.
+    padding : float
+        When run in parallel, the finder needs to surround each subvolume
+        with duplicated particles for halo finding to work. This number
+        must be no smaller than the radius of the largest halo in the box
+        in code units. Default = 0.02.
+    total_mass : float
+        If HOP is run on the same dataset multiple times, the total mass
+        of particles in Msun units in the full volume can be supplied here
+        to save time.
+        This must correspond to the particles being operated on, meaning
+        if stars are included in the halo finding, they must be included
+        in this mass as well, and vice versa.
+        If halo finding on a subvolume, this still corresponds with the
+        mass in the entire volume.
+        Default = None, which means the total mass is automatically
+        calculated.
+
+    Examples
+    --------
+    >>> pf = load("RedshiftOutput0000")
+    >>> halos = HaloFinder(pf)
+    """
     def __init__(self, pf, subvolume=None, threshold=160, dm_only=True,
             padding=0.02, total_mass=None):
-        r"""HOP halo finder.
-
-        Halos are built by:
-        1. Calculating a density for each particle based on a smoothing kernel.
-        2. Recursively linking particles to other particles from lower density
-        particles to higher.
-        3. Geometrically proximate chains are identified and
-        4. merged into final halos following merging rules.
-
-        Lower thresholds generally produce more halos, and the largest halos
-        become larger. Also, halos become more filamentary and over-connected.
-
-        Eisenstein and Hut. "HOP: A New Group-Finding Algorithm for N-Body
-        Simulations." ApJ (1998) vol. 498 pp. 137-142
-
-        Parameters
-        ----------
-        pf : `StaticOutput`
-            The parameter file on which halo finding will be conducted.
-        subvolume : `yt.data_objects.api.YTDataContainer`, optional
-            A region over which HOP will be run, which can be used to run HOP
-            on a subvolume of the full volume. Default = None, which defaults
-            to the full volume automatically.
-        threshold : float
-            The density threshold used when building halos. Default = 160.0.
-        dm_only : bool
-            If True, only dark matter particles are used when building halos.
-            Default = False.
-        padding : float
-            When run in parallel, the finder needs to surround each subvolume
-            with duplicated particles for halo finidng to work. This number
-            must be no smaller than the radius of the largest halo in the box
-            in code units. Default = 0.02.
-        total_mass : float
-            If HOP is run on the same dataset mulitple times, the total mass
-            of particles in Msun units in the full volume can be supplied here
-            to save time.
-            This must correspond to the particles being operated on, meaning
-            if stars are included in the halo finding, they must be included
-            in this mass as well, and visa-versa.
-            If halo finding on a subvolume, this still corresponds with the
-            mass in the entire volume.
-            Default = None, which means the total mass is automatically
-            calculated.
-
-        Examples
-        --------
-        >>> pf = load("RedshiftOutput0000")
-        >>> halos = HaloFinder(pf)
-        """
         if subvolume is not None:
             ds_LE = np.array(subvolume.left_edge)
             ds_RE = np.array(subvolume.right_edge)
@@ -2507,53 +2538,54 @@
 
 
 class FOFHaloFinder(GenericHaloFinder, FOFHaloList):
+    r"""Friends-of-friends halo finder.
+
+    Halos are found by linking together all pairs of particles closer than
+    some distance from each other. Particles may have multiple links,
+    and halos are found by recursively linking together all such pairs.
+
+    Larger linking lengths produce more halos, and the largest halos
+    become larger. Also, halos become more filamentary and over-connected.
+
+    Davis et al. "The evolution of large-scale structure in a universe
+    dominated by cold dark matter." ApJ (1985) vol. 292 pp. 371-394
+
+    Parameters
+    ----------
+    pf : `StaticOutput`
+        The parameter file on which halo finding will be conducted.
+    subvolume : `yt.data_objects.api.AMRData`, optional
+        A region over which HOP will be run, which can be used to run HOP
+        on a subvolume of the full volume. Default = None, which defaults
+        to the full volume automatically.
+    link : float
+        If positive, the interparticle distance (compared to the overall
+        average) used to build the halos. If negative, this is taken to be
+        the *actual* linking length, and no other calculations will be
+        applied.  Default = 0.2.
+    dm_only : bool
+        If True, only dark matter particles are used when building halos.
+        Default = True.
+    padding : float
+        When run in parallel, the finder needs to surround each subvolume
+        with duplicated particles for halo finding to work. This number
+        must be no smaller than the radius of the largest halo in the box
+        in code units. Default = 0.02.
+
+    Examples
+    --------
+    >>> pf = load("RedshiftOutput0000")
+    >>> halos = FOFHaloFinder(pf)
+    """
     def __init__(self, pf, subvolume=None, link=0.2, dm_only=True,
         padding=0.02):
-        r"""Friends-of-friends halo finder.
-
-        Halos are found by linking together all pairs of particles closer than
-        some distance from each other. Particles may have multiple links,
-        and halos are found by recursively linking together all such pairs.
-
-        Larger linking lengths produce more halos, and the largest halos
-        become larger. Also, halos become more filamentary and over-connected.
-
-        Davis et al. "The evolution of large-scale structure in a universe
-        dominated by cold dark matter." ApJ (1985) vol. 292 pp. 371-394
-
-        Parameters
-        ----------
-        pf : `StaticOutput`
-            The parameter file on which halo finding will be conducted.
-        subvolume : `yt.data_objects.api.YTDataContainer`, optional
-            A region over which HOP will be run, which can be used to run HOP
-            on a subvolume of the full volume. Default = None, which defaults
-            to the full volume automatically.
-        link : float
-            If positive, the interparticle distance (compared to the overall
-            average) used to build the halos. If negative, this is taken to be
-            the *actual* linking length, and no other calculations will be
-            applied.  Default = 0.2.
-        dm_only : bool
-            If True, only dark matter particles are used when building halos.
-            Default = False.
-        padding : float
-            When run in parallel, the finder needs to surround each subvolume
-            with duplicated particles for halo finidng to work. This number
-            must be no smaller than the radius of the largest halo in the box
-            in code units. Default = 0.02.
-
-        Examples
-        --------
-        >>> pf = load("RedshiftOutput0000")
-        >>> halos = FOFHaloFinder(pf)
-        """
         if subvolume is not None:
             ds_LE = np.array(subvolume.left_edge)
             ds_RE = np.array(subvolume.right_edge)
         self.period = pf.domain_right_edge - pf.domain_left_edge
         self.pf = pf
         self.hierarchy = pf.h
+        self.redshift = pf.current_redshift
         self._data_source = pf.h.all_data()
         GenericHaloFinder.__init__(self, pf, self._data_source, dm_only,
             padding)
@@ -2588,7 +2620,8 @@
         #self._reposition_particles((LE, RE))
         # here is where the FOF halo finder is run
         mylog.info("Using a linking length of %0.3e", linking_length)
-        FOFHaloList.__init__(self, self._data_source, linking_length, dm_only)
+        FOFHaloList.__init__(self, self._data_source, linking_length, dm_only,
+                             redshift=self.redshift)
         self._parse_halolist(1.)
         self._join_halolists()
 
@@ -2596,84 +2629,84 @@
 
 
 class LoadHaloes(GenericHaloFinder, LoadedHaloList):
+    r"""Load the full halo data into memory.
+
+    This function takes the output of `GenericHaloFinder.dump` and
+    re-establishes the list of halos in memory. This enables the full set
+    of halo analysis features without running the halo finder again. To
+    be precise, the particle data for each halo is only read in when
+    necessary, so examining a single halo will not require as much memory
+    as is required for halo finding.
+
+    Parameters
+    ----------
+    basename : String
+        The base name of the files that will be read in. This should match
+        what was used when `GenericHaloFinder.dump` was called. Default =
+        "HopAnalysis".
+
+    Examples
+    --------
+    >>> pf = load("data0005")
+    >>> halos = LoadHaloes(pf, "HopAnalysis")
+    """
     def __init__(self, pf, basename):
-        r"""Load the full halo data into memory.
-
-        This function takes the output of `GenericHaloFinder.dump` and
-        re-establishes the list of halos in memory. This enables the full set
-        of halo analysis features without running the halo finder again. To
-        be precise, the particle data for each halo is only read in when
-        necessary, so examining a single halo will not require as much memory
-        as is required for halo finding.
-
-        Parameters
-        ----------
-        basename : String
-            The base name of the files that will be read in. This should match
-            what was used when `GenericHaloFinder.dump` was called. Default =
-            "HopAnalysis".
-
-        Examples
-        --------
-        >>> pf = load("data0005")
-        >>> halos = LoadHaloes(pf, "HopAnalysis")
-        """
         self.basename = basename
         LoadedHaloList.__init__(self, pf, self.basename)
 
 class LoadTextHaloes(GenericHaloFinder, TextHaloList):
+    r"""Load a text file of halos.
+    
+    Like LoadHaloes, but when all that is available is a plain
+    text file. This assumes the text file has the 3-positions of halos
+    along with a radius. The halo objects created are spheres.
+
+    Parameters
+    ----------
+    filename : string
+        The name of the text file to read in.
+    
+    columns : dict
+        A dict listing the column name : column number pairs for data
+        in the text file. It is zero-based (like Python).
+        An example is {'x':0, 'y':1, 'z':2, 'r':3, 'm':4}.
+        Any column name outside of ['x', 'y', 'z', 'r'] will be attached
+        to each halo object in the supplementary dict 'supp'. See
+        example.
+    
+    comment : String
+        If the first character of a line is equal to this, the line is
+        skipped. Default = "#".
+
+    Examples
+    --------
+    >>> pf = load("data0005")
+    >>> halos = LoadTextHaloes(pf, "list.txt",
+    ...     {'x':0, 'y':1, 'z':2, 'r':3, 'm':4},
+    ...     comment = ";")
+    >>> halos[0].supp['m']
+    3.28392048e14
+    """
     def __init__(self, pf, filename, columns, comment = "#"):
-        r"""Load a text file of halos.
-        
-        Like LoadHaloes, but when all that is available is a plain
-        text file. This assumes the text file has the 3-positions of halos
-        along with a radius. The halo objects created are spheres.
-
-        Parameters
-        ----------
-        fname : String
-            The name of the text file to read in.
-        
-        columns : dict
-            A dict listing the column name : column number pairs for data
-            in the text file. It is zero-based (like Python).
-            An example is {'x':0, 'y':1, 'z':2, 'r':3, 'm':4}.
-            Any column name outside of ['x', 'y', 'z', 'r'] will be attached
-            to each halo object in the supplementary dict 'supp'. See
-            example.
-        
-        comment : String
-            If the first character of a line is equal to this, the line is
-            skipped. Default = "#".
-
-        Examples
-        --------
-        >>> pf = load("data0005")
-        >>> halos = LoadTextHaloes(pf, "list.txt",
-            {'x':0, 'y':1, 'z':2, 'r':3, 'm':4},
-            comment = ";")
-        >>> halos[0].supp['m']
-            3.28392048e14
-        """
         TextHaloList.__init__(self, pf, filename, columns, comment)
 
 LoadTextHalos = LoadTextHaloes
 
 class LoadRockstarHalos(GenericHaloFinder, RockstarHaloList):
+    r"""Load Rockstar halos off disk from Rockstar-output format.
+
+    Parameters
+    ----------
+    filename : string
+        The name of the Rockstar file to read in. Default =
+        "rockstar_halos/out_0.list".
+
+    Examples
+    --------
+    >>> pf = load("data0005")
+    >>> halos = LoadRockstarHalos(pf, "other_name.out")
+    """
     def __init__(self, pf, filename = None):
-        r"""Load Rockstar halos off disk from Rockstar-output format.
-
-        Parameters
-        ----------
-        fname : String
-            The name of the Rockstar file to read in. Default = 
-            "rockstar_halos/out_0.list'.
-
-        Examples
-        --------
-        >>> pf = load("data0005")
-        >>> halos = LoadRockstarHalos(pf, "other_name.out")
-        """
         if filename is None:
             filename = 'rockstar_halos/out_0.list'
         RockstarHaloList.__init__(self, pf, filename)

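The reworked center_of_mass above handles halos that straddle a periodic
boundary by shifting particles near the left domain edge before averaging.
A self-contained one-dimensional sketch of the same trick, with fabricated
positions and masses on a unit box:

    import numpy as np

    def periodic_com_1d(x, m, width=1.0, left_edge=0.0):
        # If the particle extent is under half the box, no wrap is needed.
        if x.max() - x.min() < width / 2.0:
            return (x * m).sum() / m.sum()
        # Otherwise shift particles within half a box of the left edge up
        # by one box length, average, then map back into the domain.
        x = x.copy()
        x[(x - left_edge) <= width / 2.0] += width
        return ((x * m).sum() / m.sum()) % width

    # A clump straddling x = 0 on a unit box.
    x = np.array([0.98, 0.99, 0.01, 0.02])
    m = np.ones_like(x)
    com = periodic_com_1d(x, m)  # ~0.0, where a naive average gives 0.5
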
diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/halo_finding/rockstar/rockstar.py
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar.py
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar.py
@@ -114,80 +114,88 @@
         return pool, workgroup
 
 class RockstarHaloFinder(ParallelAnalysisInterface):
+    r"""Spawns the Rockstar Halo finder, distributes dark matter
+    particles and finds halos.
+
+    The halo finder requires dark matter particles of a fixed size.
+    Rockstar has three main processes: reader, writer, and the 
+    server which coordinates reader/writer processes.
+
+    Parameters
+    ----------
+    ts : TimeSeriesData, StaticOutput
+        This is the data source containing the DM particles. Because
+        halo IDs may change from one snapshot to the next, the only
+        way to keep a consistent halo ID across time is to feed
+        Rockstar a set of snapshots, i.e., via TimeSeriesData.
+    num_readers : int
+        The number of readers can be increased from the default
+        of 1 in the event that a single snapshot is split among
+        many files. This can help in cases where performance is
+        IO-limited. Default is 1. If run inline, it is
+        equal to the number of MPI threads.
+    num_writers : int
+        The number of writers determines the number of processing threads
+        as well as the number of threads writing output data.
+        The default is set to comm.size-num_readers-1. If run inline,
+        the default is equal to the number of MPI threads.
+    outbase : str
+        This is where the out*list files that Rockstar makes should be
+        placed. Default is 'rockstar_halos'.
+    dm_type : int
+        In order to exclude stars and other particle types, define
+        the dm_type. Default is 1, as Enzo has the DM particle type=1.
+    force_res : float
+        This parameter specifies the force resolution that Rockstar uses
+        in units of Mpc/h.
+        If no value is provided, this parameter is automatically set to
+        the width of the smallest grid element in the simulation from the
+        last data snapshot (i.e. the one where time has evolved the
+        longest) in the time series:
+        ``pf_last.h.get_smallest_dx() * pf_last['mpch']``.
+    total_particles : int
+        If supplied, this is a pre-calculated total number of dark matter
+        particles present in the simulation. For example, this is useful
+        when analyzing a series of snapshots where the number of dark
+        matter particles should not change and this will save some disk
+        access time. If left unspecified, it will
+        be calculated automatically. Default: ``None``.
+    dm_only : boolean
+        If set to ``True``, it will be assumed that there are only dark
+        matter particles present in the simulation. This can save analysis
+        time if this is indeed the case. Default: ``False``.
+    hires_dm_mass : float
+        If supplied, use only the highest resolution dark matter
+        particles, with a mass less than (1.1*hires_dm_mass), in units
+        of ParticleMassMsun. This is useful for multi-dm-mass
+        simulations. Note that this will only give sensible results for
+        halos that are not "polluted" by lower resolution
+        particles. Default: ``None``.
+        
+    Returns
+    -------
+    None
+
+    Examples
+    --------
+    To use the script below you must run it using MPI:
+    mpirun -np 3 python test_rockstar.py --parallel
+
+    test_rockstar.py:
+
+    from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
+    from yt.mods import *
+    import sys
+
+    ts = TimeSeriesData.from_filenames('/u/cmoody3/data/a*')
+    pm = 7.81769027e+11
+    rh = RockstarHaloFinder(ts)
+    rh.run()
+    """
     def __init__(self, ts, num_readers = 1, num_writers = None,
             outbase="rockstar_halos", dm_type=1, 
-            force_res=None, total_particles=None, dm_only=False):
-        r"""Spawns the Rockstar Halo finder, distributes dark matter
-        particles and finds halos.
-
-        The halo finder requires dark matter particles of a fixed size.
-        Rockstar has three main processes: reader, writer, and the 
-        server which coordinates reader/writer processes.
-
-        Parameters
-        ----------
-        ts   : TimeSeriesData, StaticOutput
-            This is the data source containing the DM particles. Because 
-            halo IDs may change from one snapshot to the next, the only
-            way to keep a consistent halo ID across time is to feed 
-            Rockstar a set of snapshots, ie, via TimeSeriesData.
-        num_readers: int
-            The number of reader can be increased from the default
-            of 1 in the event that a single snapshot is split among
-            many files. This can help in cases where performance is
-            IO-limited. Default is 1. If run inline, it is
-            equal to the number of MPI threads.
-        num_writers: int
-            The number of writers determines the number of processing threads
-            as well as the number of threads writing output data.
-            The default is set to comm.size-num_readers-1. If run inline,
-            the default is equal to the number of MPI threads.
-        outbase: str
-            This is where the out*list files that Rockstar makes should be
-            placed. Default is 'rockstar_halos'.
-        dm_type: 1
-            In order to exclude stars and other particle types, define
-            the dm_type. Default is 1, as Enzo has the DM particle type=1.
-        force_res: float
-            This parameter specifies the force resolution that Rockstar uses
-            in units of Mpc/h.
-            If no value is provided, this parameter is automatically set to
-            the width of the smallest grid element in the simulation from the
-            last data snapshot (i.e. the one where time has evolved the
-            longest) in the time series:
-            ``pf_last.h.get_smallest_dx() * pf_last['mpch']``.
-        total_particles : int
-            If supplied, this is a pre-calculated total number of dark matter
-            particles present in the simulation. For example, this is useful
-            when analyzing a series of snapshots where the number of dark
-            matter particles should not change and this will save some disk
-            access time. If left unspecified, it will
-            be calculated automatically. Default: ``None``.
-        dm_only : boolean
-            If set to ``True``, it will be assumed that there are only dark
-            matter particles present in the simulation. This can save analysis
-            time if this is indeed the case. Default: ``False``.
-            
-        Returns
-        -------
-        None
-
-        Examples
-        --------
-        To use the script below you must run it using MPI:
-        mpirun -np 3 python test_rockstar.py --parallel
-
-        test_rockstar.py:
-
-        from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
-        from yt.mods import *
-        import sys
-
-        ts = TimeSeriesData.from_filenames('/u/cmoody3/data/a*')
-        pm = 7.81769027e+11
-        rh = RockstarHaloFinder(ts)
-        rh.run()
-        """
+            force_res=None, total_particles=None, dm_only=False,
+            hires_dm_mass=None):
         mylog.warning("The citation for the Rockstar halo finder can be found at")
         mylog.warning("http://adsabs.harvard.edu/abs/2013ApJ...762..109B")
         ParallelAnalysisInterface.__init__(self)
@@ -217,6 +225,7 @@
             self.force_res = force_res
         self.total_particles = total_particles
         self.dm_only = dm_only
+        self.hires_dm_mass = hires_dm_mass
         # Setup pool and workgroups.
         self.pool, self.workgroup = self.runner.setup_pool()
         p = self._setup_parameters(ts)
@@ -227,28 +236,51 @@
     def _setup_parameters(self, ts):
         if self.workgroup.name != "readers": return None
         tpf = ts[0]
+
         def _particle_count(field, data):
-            if self.dm_only:
-                return np.prod(data["particle_position_x"].shape)
             try:
-                return (data["particle_type"]==self.dm_type).sum()
+                data["particle_type"]
+                has_particle_type=True
             except KeyError:
-                return np.prod(data["particle_position_x"].shape)
+                has_particle_type=False
+                
+            if (self.dm_only or (not has_particle_type)):
+                if self.hires_dm_mass is None:
+                    return np.prod(data["particle_position_x"].shape)
+                else:
+                    return (data['ParticleMassMsun'] < self.hires_dm_mass*1.1).sum()
+            elif has_particle_type:
+                if self.hires_dm_mass is None:
+                    return (data["particle_type"]==self.dm_type).sum()
+                else:
+                    return ( (data["particle_type"]==self.dm_type) & 
+                             (data['ParticleMassMsun'] < self.hires_dm_mass*1.1) ).sum()
+            else:                
+                raise RuntimeError() # should never get here
+
         add_field("particle_count", function=_particle_count,
                   not_in_all=True, particle_type=True)
         dd = tpf.h.all_data()
         # Get DM particle mass.
         all_fields = set(tpf.h.derived_field_list + tpf.h.field_list)
-        for g in tpf.h._get_objs("grids"):
-            if g.NumberOfParticles == 0: continue
-            if self.dm_only:
-                iddm = Ellipsis
-            elif "particle_type" in all_fields:
-                iddm = g["particle_type"] == self.dm_type
-            else:
-                iddm = Ellipsis
-            particle_mass = g['ParticleMassMsun'][iddm][0] / tpf.hubble_constant
-            break
+        has_particle_type = ("particle_type" in all_fields)
+
+        if self.hires_dm_mass is None:
+            for g in tpf.h._get_objs("grids"):
+                if g.NumberOfParticles == 0: continue
+
+                if (self.dm_only or (not has_particle_type)):
+                    iddm = Ellipsis
+                elif has_particle_type:
+                    iddm = g["particle_type"] == self.dm_type
+                else:                    
+                    iddm = Ellipsis # should never get here
+
+                particle_mass = g['ParticleMassMsun'][iddm][0] / tpf.hubble_constant
+                break
+        else:
+            particle_mass = self.hires_dm_mass / tpf.hubble_constant
+
         p = {}
         if self.total_particles is None:
             # Get total_particles in parallel.
@@ -302,6 +334,7 @@
                     force_res = self.force_res,
                     particle_mass = float(self.particle_mass),
                     dm_only = int(self.dm_only),
+                    hires_only = (self.hires_dm_mass is not None),
                     **kwargs)
         # Make the directory to store the halo lists in.
         if self.comm.rank == 0:
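
A minimal sketch, not from the commit, of the high-resolution dark-matter
selection introduced above; the array names and values are illustrative:

    import numpy as np

    # Illustrative stand-ins for the grid fields read in the diff above.
    particle_type = np.array([0, 0, 1, 0, 1])             # 0 = dark matter here
    ParticleMassMsun = np.array([1e8, 1e8, 1e6, 5e9, 1e6])
    dm_type, hires_dm_mass = 0, 1e8

    # Same criterion as the diff: matching type AND below ~1.1x the high-res mass.
    iddm = (particle_type == dm_type) & (ParticleMassMsun < hires_dm_mass * 1.1)
    n_hires_dm = iddm.sum()                               # -> 2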

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
@@ -163,6 +163,7 @@
     SCALE_NOW = 1.0/(pf.current_redshift+1.0)
     # Now we want to grab data from only a subset of the grids for each reader.
     all_fields = set(pf.h.derived_field_list + pf.h.field_list)
+    has_particle_type = ("particle_type" in all_fields)
 
     # First we need to find out how many this reader is going to read in
     # if the number of readers > 1.
@@ -170,12 +171,19 @@
         local_parts = 0
         for g in pf.h._get_objs("grids"):
             if g.NumberOfParticles == 0: continue
-            if rh.dm_only:
-                iddm = Ellipsis
-            elif "particle_type" in all_fields:
-                iddm = g["particle_type"] == rh.dm_type
+            if (rh.dm_only or (not has_particle_type)):
+                if rh.hires_only:
+                    iddm = (g['ParticleMassMsun'] < PARTICLE_MASS*1.1)
+                else:
+                    iddm = Ellipsis
+            elif has_particle_type:
+                if rh.hires_only:
+                    iddm = ( (g["particle_type"]==rh.dm_type) &
+                             (g['ParticleMassMsun'] < PARTICLE_MASS*1.1) )                    
+                else:
+                    iddm = g["particle_type"] == rh.dm_type
             else:
-                iddm = Ellipsis
+                iddm = Ellipsis # should never get here
             arri = g["particle_index"].astype("int64")
             arri = arri[iddm] #pick only DM
             local_parts += arri.size
@@ -195,12 +203,19 @@
     pi = 0
     for g in pf.h._get_objs("grids"):
         if g.NumberOfParticles == 0: continue
-        if rh.dm_only:
-            iddm = Ellipsis
-        elif "particle_type" in all_fields:
-            iddm = g["particle_type"] == rh.dm_type
-        else:
-            iddm = Ellipsis
+        if (rh.dm_only or (not has_particle_type)):
+            if rh.hires_only:
+                iddm = (g['ParticleMassMsun'] < PARTICLE_MASS*1.1)
+            else:
+                iddm = Ellipsis
+        elif has_particle_type:
+            if rh.hires_only:
+                iddm = ( (g["particle_type"]==rh.dm_type) &
+                         (g['ParticleMassMsun'] < PARTICLE_MASS*1.1) )                    
+            else:
+                iddm = g["particle_type"] == rh.dm_type
+        else:            
+            iddm = Ellipsis # should never get here
         arri = g["particle_index"].astype("int64")
         arri = arri[iddm] #pick only DM
         npart = arri.size
@@ -230,6 +245,7 @@
     cdef public int dm_type
     cdef public int total_particles
     cdef public int dm_only
+    cdef public int hires_only
 
     def __cinit__(self, ts):
         self.ts = ts
@@ -244,7 +260,7 @@
                        int writing_port = -1, int block_ratio = 1,
                        int periodic = 1, force_res=None,
                        int min_halo_size = 25, outbase = "None",
-                       int dm_only = 0):
+                       int dm_only = 0, int hires_only = False):
         global PARALLEL_IO, PARALLEL_IO_SERVER_ADDRESS, PARALLEL_IO_SERVER_PORT
         global FILENAME, FILE_FORMAT, NUM_SNAPS, STARTING_SNAP, h0, Ol, Om
         global BOX_SIZE, PERIODIC, PARTICLE_MASS, NUM_BLOCKS, NUM_READERS
@@ -276,6 +292,7 @@
         TOTAL_PARTICLES = total_particles
         self.block_ratio = block_ratio
         self.dm_only = dm_only
+        self.hires_only = hires_only
         
         tpf = self.ts[0]
         h0 = tpf.hubble_constant
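
A hedged usage sketch for the new hires_dm_mass keyword, patterned on the
example removed from the docstring above (run under MPI as before, e.g.
"mpirun -np 3 python2.7 script.py --parallel"); the path and mass value are
hypothetical:

    from yt.mods import *
    from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder

    ts = TimeSeriesData.from_filenames("/path/to/outputs/a*")
    # Only particles lighter than ~1.1 * hires_dm_mass (in Msun) are kept
    # as high-resolution dark matter.
    rh = RockstarHaloFinder(ts, hires_dm_mass=7.8e8)
    rh.run()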

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/halo_mass_function/halo_mass_function.py
--- a/yt/analysis_modules/halo_mass_function/halo_mass_function.py
+++ b/yt/analysis_modules/halo_mass_function/halo_mass_function.py
@@ -33,52 +33,52 @@
     parallel_blocking_call
 
 class HaloMassFcn(ParallelAnalysisInterface):
+    """
+    Initialize a HaloMassFcn object to analyze the distribution of haloes
+    as a function of mass.
+    :param halo_file (str): The filename of the output of the Halo Profiler.
+    Default=None.
+    :param omega_matter0 (float): The fraction of the universe made up of
+    matter (dark and baryonic). Default=None.
+    :param omega_lambda0 (float): The fraction of the universe made up of
+    dark energy. Default=None.
+    :param omega_baryon0 (float): The fraction of the universe made up of
+    ordinary baryonic matter. This should match the value
+    used to create the initial conditions, using 'inits'. This is 
+    *not* stored in the enzo dataset, so it must be checked by hand.
+    Default=0.05.
+    :param hubble0 (float): The expansion rate of the universe in units of
+    100 km/s/Mpc. Default=None.
+    :param sigma8input (float): The amplitude of the linear power
+    spectrum at z=0 as specified by the rms amplitude of mass-fluctuations
+    in a top-hat sphere of radius 8 Mpc/h. This should match the value
+    used to create the initial conditions, using 'inits'. This is 
+    *not* stored in the enzo dataset, so it must be checked by hand.
+    Default=0.86.
+    :param primordial_index (float): This is the index of the mass power
+    spectrum before modification by the transfer function. A value of 1
+    corresponds to the scale-free primordial spectrum. This should match
+    the value used to make the initial conditions using 'inits'. This is 
+    *not* stored in the enzo dataset, so it must be checked by hand.
+    Default=1.0.
+    :param this_redshift (float): The current redshift. Default=None.
+    :param log_mass_min (float): The log10 of the mass of the minimum of the
+    halo mass range. Default=None.
+    :param log_mass_max (float): The log10 of the mass of the maximum of the
+    halo mass range. Default=None.
+    :param num_sigma_bins (float): The number of bins (points) to use for
+    the calculations and generated fit. Default=360.
+    :param fitting_function (int): Which fitting function to use.
+    1 = Press-Schechter, 2 = Jenkins, 3 = Sheth-Tormen, 4 = Warren,
+    5 = Tinker.
+    Default=4.
+    :param mass_column (int): The column of halo_file that contains the
+    masses of the haloes. Default=5.
+    """
     def __init__(self, pf, halo_file=None, omega_matter0=None, omega_lambda0=None,
     omega_baryon0=0.05, hubble0=None, sigma8input=0.86, primordial_index=1.0,
     this_redshift=None, log_mass_min=None, log_mass_max=None, num_sigma_bins=360,
     fitting_function=4, mass_column=5):
-        """
-        Initalize a HaloMassFcn object to analyze the distribution of haloes
-        as a function of mass.
-        :param halo_file (str): The filename of the output of the Halo Profiler.
-        Default=None.
-        :param omega_matter0 (float): The fraction of the universe made up of
-        matter (dark and baryonic). Default=None.
-        :param omega_lambda0 (float): The fraction of the universe made up of
-        dark energy. Default=None.
-        :param omega_baryon0 (float): The fraction of the universe made up of
-        ordinary baryonic matter. This should match the value
-        used to create the initial conditions, using 'inits'. This is 
-        *not* stored in the enzo datset so it must be checked by hand.
-        Default=0.05.
-        :param hubble0 (float): The expansion rate of the universe in units of
-        100 km/s/Mpc. Default=None.
-        :param sigma8input (float): The amplitude of the linear power
-        spectrum at z=0 as specified by the rms amplitude of mass-fluctuations
-        in a top-hat sphere of radius 8 Mpc/h. This should match the value
-        used to create the initial conditions, using 'inits'. This is 
-        *not* stored in the enzo datset so it must be checked by hand.
-        Default=0.86.
-        :param primoridal_index (float): This is the index of the mass power
-        spectrum before modification by the transfer function. A value of 1
-        corresponds to the scale-free primordial spectrum. This should match
-        the value used to make the initial conditions using 'inits'. This is 
-        *not* stored in the enzo datset so it must be checked by hand.
-        Default=1.0.
-        :param this_redshift (float): The current redshift. Default=None.
-        :param log_mass_min (float): The log10 of the mass of the minimum of the
-        halo mass range. Default=None.
-        :param log_mass_max (float): The log10 of the mass of the maximum of the
-        halo mass range. Default=None.
-        :param num_sigma_bins (float): The number of bins (points) to use for
-        the calculations and generated fit. Default=360.
-        :param fitting_function (int): Which fitting function to use.
-        1 = Press-schechter, 2 = Jenkins, 3 = Sheth-Tormen, 4 = Warren fit
-        5 = Tinker
-        Default=4.
-        :param mass_column (int): The column of halo_file that contains the
-        masses of the haloes. Default=4.
-        """
         ParallelAnalysisInterface.__init__(self)
         self.pf = pf
         self.halo_file = halo_file
@@ -132,7 +132,6 @@
         not stored in enzo datasets, so must be entered by hand.
         sigma8input=%f primordial_index=%f omega_baryon0=%f
         """ % (self.sigma8input, self.primordial_index, self.omega_baryon0))
-        time.sleep(1)
         
         # Do the calculations.
         self.sigmaM()
@@ -544,22 +543,22 @@
 """
 
 class TransferFunction(object):
+    """
+    This routine takes cosmological parameters and a redshift and sets up
+    all the internal scalar quantities needed to compute the transfer function.
+    INPUT: omega_matter -- Density of CDM, baryons, and massive neutrinos,
+                           in units of the critical density.
+           omega_baryon -- Density of baryons, in units of critical.
+           omega_hdm    -- Density of massive neutrinos, in units of critical.
+           degen_hdm    -- (Int) Number of degenerate massive neutrino species.
+           omega_lambda -- Cosmological constant.
+           hubble       -- Hubble constant, in units of 100 km/s/Mpc.
+           redshift     -- The redshift at which to evaluate.
+    OUTPUT: Returns 0 if all is well, 1 if a warning was issued.  Otherwise,
+            sets many global variables for use in TFmdm_onek_mpc().
+    """
     def __init__(self, omega_matter, omega_baryon, omega_hdm,
 	    degen_hdm, omega_lambda, hubble, redshift):
-        """
-        /* This routine takes cosmological parameters and a redshift and sets up
-        all the internal scalar quantities needed to compute the transfer function. */
-        /* INPUT: omega_matter -- Density of CDM, baryons, and massive neutrinos,
-                        in units of the critical density. */
-        /* 	  omega_baryon -- Density of baryons, in units of critical. */
-        /* 	  omega_hdm    -- Density of massive neutrinos, in units of critical */
-        /* 	  degen_hdm    -- (Int) Number of degenerate massive neutrino species */
-        /*        omega_lambda -- Cosmological constant */
-        /* 	  hubble       -- Hubble constant, in units of 100 km/s/Mpc */
-        /*        redshift     -- The redshift at which to evaluate */
-        /* OUTPUT: Returns 0 if all is well, 1 if a warning was issued.  Otherwise,
-            sets many global variables for use in TFmdm_onek_mpc() */
-        """
         self.qwarn = 0;
         self.theta_cmb = 2.728/2.7 # Assuming T_cmb = 2.728 K
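
A hedged construction sketch following the relocated HaloMassFcn docstring;
the dataset name, halo file, and cosmology values are illustrative, and the
api import path is assumed:

    from yt.mods import *
    from yt.analysis_modules.halo_mass_function.api import HaloMassFcn

    pf = load("my_dataset")  # hypothetical dataset
    hmf = HaloMassFcn(pf, halo_file="FilteredQuantities.out",
                      omega_matter0=0.27, omega_lambda0=0.73,
                      omega_baryon0=0.05, hubble0=0.70, sigma8input=0.86,
                      primordial_index=1.0, this_redshift=0.0,
                      log_mass_min=9.0, log_mass_max=15.0,
                      fitting_function=4)  # 4 = Warren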
     

diff -r b52a79f8960ab7fe37870d5a0b2b71c58e48dcf2 -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 yt/analysis_modules/halo_merger_tree/api.py
--- a/yt/analysis_modules/halo_merger_tree/api.py
+++ b/yt/analysis_modules/halo_merger_tree/api.py
@@ -38,5 +38,7 @@
     MergerTreeTextOutput
 
 from .enzofof_merger_tree import \
+    HaloCatalog, \
     find_halo_relationships, \
-    EnzoFOFMergerTree
+    EnzoFOFMergerTree, \
+    plot_halo_evolution

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/c7776fe4712b/
Changeset:   c7776fe4712b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 00:35:33
Summary:     draft of particle age support
Affected #:  3 files

diff -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 -r c7776fe4712bc3d28d7315fa20ebcbd7a772465c yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -58,7 +58,6 @@
     'particle_velocity_x',
     'particle_velocity_y',
     'particle_velocity_z',
-    'particle_age', #this and below are stellar only fields
     'particle_mass_initial',
     'particle_creation_time',
     'particle_metallicity1',
@@ -67,7 +66,6 @@
 ]
 
 particle_star_fields = [
-    'particle_age',
     'particle_mass',
     'particle_mass_initial',
     'particle_creation_time',
@@ -126,7 +124,7 @@
 star_name_map = {
         'particle_mass':'mass',
         'particle_mass_initial':'imass',
-        'particle_age':'tbirth',
+        'particle_creation_time':'tbirth',
         'particle_metallicity1':'metallicity1',
         'particle_metallicity2':'metallicity2',
         'particle_metallicity':'metallicity',

diff -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 -r c7776fe4712bc3d28d7315fa20ebcbd7a772465c yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -243,3 +243,9 @@
 
 
 #Particle fields
+
+def _particle_age(field,data):
+    tr = data["particle_creation_time"]
+    return data.pf.current_time - tr
+add_field("particle_age",function=_particle_age,units=r"\mathrm{s}",
+          take_log=True,particle_type=True)
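
A brief usage sketch for the new derived field, assuming an ART dataset;
the file name is hypothetical:

    from yt.mods import *

    pf = load("10MpcBox_csf512_a0.500.d")  # hypothetical ART output
    dd = pf.h.all_data()
    ages = dd["particle_age"]              # seconds: current_time - creation_time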

diff -r cb0a927a3f3add84ef8229f5654bdbd9c17f36c7 -r c7776fe4712bc3d28d7315fa20ebcbd7a772465c yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -37,6 +37,7 @@
 
 class IOHandlerART(BaseIOHandler):
     _data_style = "art"
+    interp_tb = None
 
     def _read_fluid_selection(self, chunks, selector, fields, size):
         # Chunks in this case will have affiliated domain subset objects
@@ -108,6 +109,11 @@
                 #all other fields are read in and changed once
                 if starb-stara==0: continue
                 temp= read_star_field(file_stars,field=fname)
+                if fname == "particle_creation_time":
+                    if self.interp_tb is None:
+                        self.interp_tb,self.interp_ages = b2t(temp)
+                    temp = np.interp(temp,self.interp_tb,self.interp_ages)
+                    temp *= 1.0e9*365*24*3600
                 data = np.zeros(npa,dtype="float64")
                 data[stara:starb] = temp
                 del temp
@@ -433,9 +439,7 @@
         ages += a2t(b2a(tbi)),
         if logger: logger(i)
     ages = np.array(ages)
-    fb2t = np.interp(tb,tbs,ages)
-    #fb2t = interp1d(tbs,ages)
-    return fb2t
+    return tbs,ages
 
 def spread_ages(ages,logger=None,spread=1.0e7*365*24*3600):
     #stars are formed in lumps; spread out the ages linearly
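
A toy sketch, with made-up numbers, of the lookup-table path that b2t() now
feeds in the reader above:

    import numpy as np

    # Stand-ins for the (tbs, ages) table returned by b2t().
    tbs = np.linspace(-3.0, 0.0, 16)     # code-unit birth times
    ages = np.linspace(0.1, 13.7, 16)    # corresponding ages in Gyr
    temp = np.array([-2.5, -1.0, -0.2])  # raw particle creation times
    temp = np.interp(temp, tbs, ages)    # table lookup, as in the diff
    temp *= 1.0e9 * 365 * 24 * 3600      # Gyr -> seconds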


https://bitbucket.org/yt_analysis/yt/commits/60cf924b1502/
Changeset:   60cf924b1502
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 00:52:36
Summary:     asserting ages are similar between AMR and stars
Affected #:  1 file

diff -r c7776fe4712bc3d28d7315fa20ebcbd7a772465c -r 60cf924b1502736523028e72bb1dbf2c692b78da yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -111,6 +111,11 @@
                 temp= read_star_field(file_stars,field=fname)
                 if fname == "particle_creation_time":
                     if self.interp_tb is None:
+                        self.tdum,self.adum = read_star_field(file_stars,
+                                                              field="tdum")
+                        tdiff = b2t(tdum)-pf.current_time/(3.15569e7*1e9)
+                        #timestamp of file should match amr timestamp
+                        assert np.abs(tdiff) < 1e-4
                         self.interp_tb,self.interp_ages = b2t(temp)
                     temp = np.interp(temp,self.interp_tb,self.interp_ages)
                     temp *= 1.0e9*365*24*3600


https://bitbucket.org/yt_analysis/yt/commits/45cc1f5c6196/
Changeset:   45cc1f5c6196
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 00:54:16
Summary:     typo
Affected #:  1 file

diff -r 60cf924b1502736523028e72bb1dbf2c692b78da -r 45cc1f5c61964f651038a1deddd99053cc767df4 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -113,7 +113,7 @@
                     if self.interp_tb is None:
                         self.tdum,self.adum = read_star_field(file_stars,
                                                               field="tdum")
-                        tdiff = b2t(tdum)-pf.current_time/(3.15569e7*1e9)
+                        tdiff = b2t(self.tdum)-pf.current_time/(3.15569e7*1e9)
                         #timestamp of file should match amr timestamp
                         assert np.abs(tdiff) < 1e-4
                         self.interp_tb,self.interp_ages = b2t(temp)


https://bitbucket.org/yt_analysis/yt/commits/7f994d27829b/
Changeset:   7f994d27829b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 01:20:32
Summary:     adding spread ages field
Affected #:  2 files

diff -r 45cc1f5c61964f651038a1deddd99053cc767df4 -r 7f994d27829b1d1eeaa8dcb88b2afddcd4692707 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -249,3 +249,35 @@
     return data.pf.current_time - tr
 add_field("particle_age",function=_particle_age,units=r"\mathrm{s}",
           take_log=True,particle_type=True)
+
+def spread_ages(ages,spread=1.0e7*365*24*3600):
+    #stars are formed in lumps; spread out the ages linearly
+    da= np.diff(ages)
+    assert np.all(da<=0)
+    #ages should always be decreasing, and ordered so
+    agesd = np.zeros(ages.shape)
+    idx, = np.where(da<0)
+    idx+=1 #mark the right edges
+    #spread this age evenly out to the next age
+    lidx=0
+    lage=0
+    for i in idx:
+        n = i-lidx #n stars affected
+        rage = ages[i]
+        lage = max(rage-spread,0.0)
+        agesd[lidx:i]=np.linspace(lage,rage,n)
+        lidx=i
+        #lage=rage
+    #we didn't get the last iter
+    n = agesd.shape[0]-lidx
+    rage = ages[-1]
+    lage = max(rage-spread,0.0)
+    agesd[lidx:]=np.linspace(lage,rage,n)
+    return agesd
+
+def _particle_age_spread(field,data):
+    tr = data["particle_creation_time"]
+    return spread_ages(data.pf.current_time - tr)
+
+add_field("particle_age_spread",function=_particle_age_spread,
+          particle_type=True,take_log=True,units=r"\rm{s}")
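
A toy demonstration of the spread_ages() helper added above; the input ages
and spread are illustrative:

    import numpy as np
    from yt.frontends.art.fields import spread_ages

    # One "lump" of stars, all formed at the same time (ages in seconds).
    ages = np.array([3.0e8, 3.0e8, 3.0e8, 3.0e8])
    agesd = spread_ages(ages, spread=1.0e8)
    # -> approximately [2.0e8, 2.33e8, 2.67e8, 3.0e8]: a linear ramp that
    #    reaches back at most `spread` seconds from the lump's age.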

diff -r 45cc1f5c61964f651038a1deddd99053cc767df4 -r 7f994d27829b1d1eeaa8dcb88b2afddcd4692707 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -446,28 +446,3 @@
     ages = np.array(ages)
     return tbs,ages
 
-def spread_ages(ages,logger=None,spread=1.0e7*365*24*3600):
-    #stars are formed in lumps; spread out the ages linearly
-    da= np.diff(ages)
-    assert np.all(da<=0)
-    #ages should always be decreasing, and ordered so
-    agesd = np.zeros(ages.shape)
-    idx, = np.where(da<0)
-    idx+=1 #mark the right edges
-    #spread this age evenly out to the next age
-    lidx=0
-    lage=0
-    for i in idx:
-        n = i-lidx #n stars affected
-        rage = ages[i]
-        lage = max(rage-spread,0.0)
-        agesd[lidx:i]=np.linspace(lage,rage,n)
-        lidx=i
-        #lage=rage
-        if logger: logger(i)
-    #we didn't get the last iter
-    n = agesd.shape[0]-lidx
-    rage = ages[-1]
-    lage = max(rage-spread,0.0)
-    agesd[lidx:]=np.linspace(lage,rage,n)
-    return agesd


https://bitbucket.org/yt_analysis/yt/commits/1c8bbce50aa2/
Changeset:   1c8bbce50aa2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 05:56:25
Summary:     turning assert into a warning
Affected #:  1 file

diff -r 7f994d27829b1d1eeaa8dcb88b2afddcd4692707 -r 1c8bbce50aa20ba90c1b99ae43e19855063ba61f yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -115,7 +115,9 @@
                                                               field="tdum")
                         tdiff = b2t(self.tdum)-pf.current_time/(3.15569e7*1e9)
                         #timestamp of file should match amr timestamp
-                        assert np.abs(tdiff) < 1e-4
+                        if np.abs(tdiff) > 1e-4:
+                            mylog.debug("Timestamp mismatch in star\
+                                         particle header")
                         self.interp_tb,self.interp_ages = b2t(temp)
                     temp = np.interp(temp,self.interp_tb,self.interp_ages)
                     temp *= 1.0e9*365*24*3600


https://bitbucket.org/yt_analysis/yt/commits/082e90bcdafb/
Changeset:   082e90bcdafb
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-05 21:55:04
Summary:     fixing super long debug line
Affected #:  1 file

diff -r 1c8bbce50aa20ba90c1b99ae43e19855063ba61f -r 082e90bcdafb2342bf7d95e1375c02c39383aa0a yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -116,8 +116,8 @@
                         tdiff = b2t(self.tdum)-pf.current_time/(3.15569e7*1e9)
                         #timestamp of file should match amr timestamp
                         if np.abs(tdiff) > 1e-4:
-                            mylog.debug("Timestamp mismatch in star\
-                                         particle header")
+                            mylog.debug("Timestamp mismatch in star "+
+                                         "particle header")
                         self.interp_tb,self.interp_ages = b2t(temp)
                     temp = np.interp(temp,self.interp_tb,self.interp_ages)
                     temp *= 1.0e9*365*24*3600


https://bitbucket.org/yt_analysis/yt/commits/208d6a26f1e1/
Changeset:   208d6a26f1e1
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-06 22:20:42
Summary:     fixing ires
Affected #:  1 file

diff -r 082e90bcdafb2342bf7d95e1375c02c39383aa0a -r 208d6a26f1e1f2d54d328b2ddab8a2e0cc707bc5 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -445,7 +445,7 @@
     def select_fwidth(self, dobj):
         base_dx = 1.0/self.domain.pf.domain_dimensions
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
             widths[:,i] = base_dx[i] / dds
         return widths
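
A one-line sketch of the corrected width computation, with illustrative
values:

    import numpy as np

    domain_dimensions = np.array([128, 128, 128])
    base_dx = 1.0 / domain_dimensions  # root-grid cell width per axis
    ires = np.array([0, 1, 2])         # per-cell refinement levels
    widths = base_dx[0] / 2.0 ** ires  # each level halves the cell width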


https://bitbucket.org/yt_analysis/yt/commits/d7ac1f74ed9a/
Changeset:   d7ac1f74ed9a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-07 02:14:28
Summary:     particle file names can now be passed in explicitly
Affected #:  1 file

diff -r 208d6a26f1e1f2d54d328b2ddab8a2e0cc707bc5 -r d7ac1f74ed9aab03c8a003bc5bc8ccc4701a61a4 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -167,13 +167,17 @@
                  fields = None, storage_filename = None,
                  skip_particles=False,skip_stars=False,
                  limit_level=None,spread_age=True,
-                 force_max_level=None):
+                 force_max_level=None,file_particle_header=None,
+                 file_particle_data=None,file_particle_stars=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
         self._find_files(filename)
         self.file_amr = filename
+        self.file_particle_header = file_particle_header
+        self.file_particle_data = file_particle_data
+        self.file_particle_stars = file_particle_stars
         self.parameter_filename = filename
         self.skip_particles = skip_particles
         self.skip_stars = skip_stars
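
A hedged usage sketch of the new keywords; the file names are hypothetical
(they follow common ART naming) and override whatever _find_files() located:

    from yt.mods import *

    pf = load("10MpcBox_csf512_a0.500.d",
              file_particle_header="PMcrd.DAT",
              file_particle_data="PMcrs0.DAT",
              file_particle_stars="stars.dat")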


https://bitbucket.org/yt_analysis/yt/commits/e68458a6757a/
Changeset:   e68458a6757a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-07 19:22:09
Summary:     fixes for particle mass assignment
Affected #:  1 file

diff -r d7ac1f74ed9aab03c8a003bc5bc8ccc4701a61a4 -r e68458a6757ad4416066365b4715bf8865e17168 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -84,17 +84,17 @@
             ftype,fname = field
             for i,ax in enumerate('xyz'):
                 if fname.startswith("particle_position_%s"%ax):
-                    tr[field]=pos[:,i][mask]
+                    tr[field]=pos[:,i]
                 if fname.startswith("particle_velocity_%s"%ax):
-                    tr[field]=vel[:,i][mask]
+                    tr[field]=vel[:,i]
             if fname == "particle_mass":
                 a=0
                 data = np.zeros(npa,dtype='float64')
                 for b,m in zip(ls,ws):
                     data[a:b]=(np.ones(b-a,dtype='float64')*m)
                     a=b
-                tr[field]=data[mask]
                 #the stellar masses will be updated later
+                tr[field] = data
             elif fname == "particle_index":
                 tr[field]=np.arange(npa)[mask].astype('int64')
             elif fname == "particle_type":
@@ -103,7 +103,7 @@
                 for i,(b,m) in enumerate(zip(ls,ws)):
                     data[a:b]=(np.ones(b-a,dtype='int64')*i)
                     a=b
-                tr[field]=data[mask]
+                tr[field] = data
             if fname in particle_star_fields:
                 #we possibly update and change the masses here
                 #all other fields are read in and changed once
@@ -121,10 +121,11 @@
                         self.interp_tb,self.interp_ages = b2t(temp)
                     temp = np.interp(temp,self.interp_tb,self.interp_ages)
                     temp *= 1.0e9*365*24*3600
-                data = np.zeros(npa,dtype="float64")
-                data[stara:starb] = temp
+                if field not in tr.keys():
+                    tr[field] = np.zeros(npa,dtype='f8')
+                tr[field][stara:starb] = temp
                 del temp
-                tr[field]=data[mask]
+            tr[field]=tr[field][mask]
         return tr
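
A minimal sketch, not from the commit, of why masking once at the end keeps
the fields aligned: every field is first built at full particle length, so
star-only slices like [stara:starb] index consistently until the single
masking pass.

    import numpy as np

    npa = 6
    mask = np.array([True, False, True, True, False, True])
    tr = {"particle_mass": np.zeros(npa, dtype="float64")}
    tr["particle_mass"][4:6] = 1.0   # e.g. stellar masses updated later
    for field in tr:
        tr[field] = tr[field][mask]  # one masking pass, as in the diff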
 
 


https://bitbucket.org/yt_analysis/yt/commits/870b580e5226/
Changeset:   870b580e5226
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-07 19:34:57
Summary:     cgs particle masses now work
Affected #:  3 files

diff -r e68458a6757ad4416066365b4715bf8865e17168 -r 870b580e522661a1c206a78aa725a71e188073c7 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -303,8 +303,6 @@
         self.cosmological_simulation = True
         self.conversion_factors = cf
         
-        for particle_field in particle_fields:
-            self.conversion_factors[particle_field] =  1.0
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
         for unit in sec_conversion.keys():

diff -r e68458a6757ad4416066365b4715bf8865e17168 -r 870b580e522661a1c206a78aa725a71e188073c7 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -124,6 +124,7 @@
                 if field not in tr.keys():
                     tr[field] = np.zeros(npa,dtype='f8')
                 tr[field][stara:starb] = temp
+                print "masses in io.py: ", tr[field]
                 del temp
             tr[field]=tr[field][mask]
         return tr

diff -r e68458a6757ad4416066365b4715bf8865e17168 -r 870b580e522661a1c206a78aa725a71e188073c7 yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -353,6 +353,7 @@
                     self._chunk_io(dobj), selector,
                     fields_to_read)
         for field in fields_to_read:
+            import pdb; pdb.set_trace()
             ftype, fname = field
             finfo = self.pf._get_field_info(*field)
             conv_factor = finfo._convert_function(self)


https://bitbucket.org/yt_analysis/yt/commits/0633dbe1826c/
Changeset:   0633dbe1826c
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-07 19:38:17
Summary:     removed pdb
Affected #:  1 file

diff -r 870b580e522661a1c206a78aa725a71e188073c7 -r 0633dbe1826cd23fbed850f01899623c22fe370a yt/geometry/geometry_handler.py
--- a/yt/geometry/geometry_handler.py
+++ b/yt/geometry/geometry_handler.py
@@ -353,7 +353,6 @@
                     self._chunk_io(dobj), selector,
                     fields_to_read)
         for field in fields_to_read:
-            import pdb; pdb.set_trace()
             ftype, fname = field
             finfo = self.pf._get_field_info(*field)
             conv_factor = finfo._convert_function(self)


https://bitbucket.org/yt_analysis/yt/commits/1fb6bb76a0a2/
Changeset:   1fb6bb76a0a2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 19:54:22
Summary:     merged with yt_analysis/yt-3.0
Affected #:  68 files

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -837,16 +837,11 @@
 cd $YT_DIR
 ( ${HG_EXEC} pull 2>1 && ${HG_EXEC} up -C 2>1 ${BRANCH} 2>&1 ) 1>> ${LOG_FILE}
 
-echo "Building Fortran kD-tree module."
-cd yt/utilities/kdtree
-( make 2>&1 ) 1>> ${LOG_FILE}
-cd ../../..
-
 echo "Installing yt"
 echo $HDF5_DIR > hdf5.cfg
 [ $INST_PNG -eq 1 ] && echo $PNG_DIR > png.cfg
 [ $INST_FTYPE -eq 1 ] && echo $FTYPE_DIR > freetype.cfg
-( ${DEST_DIR}/bin/python2.7 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
+( export PATH=$DEST_DIR/bin:$PATH ; ${DEST_DIR}/bin/python2.7 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
 touch done
 cd $MY_PWD
 

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 scripts/iyt
--- a/scripts/iyt
+++ b/scripts/iyt
@@ -2,7 +2,7 @@
 import os, re
 from distutils import version
 from yt.mods import *
-from yt.data_objects.data_containers import AMRData
+from yt.data_objects.data_containers import YTDataContainer
 namespace = locals().copy()
 namespace.pop("__builtins__", None)
 
@@ -116,7 +116,7 @@
         except:
             raise IPython.ipapi.TryNext 
         
-    if isinstance(obj, (AMRData, ) ):
+    if isinstance(obj, (YTDataContainer, ) ):
         #print "COMPLETING ON THIS THING"
         all_fields = [f for f in sorted(
                 obj.pf.h.field_list + obj.pf.h.derived_field_list)]

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,14 +4,61 @@
 import sys
 import time
 import subprocess
+import shutil
+import glob
 import distribute_setup
 distribute_setup.use_setuptools()
 
 from distutils.command.build_py import build_py
 from numpy.distutils.misc_util import appendpath
+from numpy.distutils.command import install_data as np_install_data
 from numpy.distutils import log
 from distutils import version
 
+from distutils.core import Command
+from distutils.spawn import find_executable
+
+
+class BuildForthon(Command):
+
+    """Command for building Forthon modules"""
+
+    description = "Build Forthon modules"
+    user_options = []
+
+    def initialize_options(self):
+
+        """init options"""
+
+        pass
+
+    def finalize_options(self):
+
+        """finalize options"""
+
+        pass
+
+    def run(self):
+
+        """runner"""
+        Forthon_exe = find_executable("Forthon")
+        gfortran_exe = find_executable("gfortran")
+
+        if None in (Forthon_exe, gfortran_exe):
+            sys.stderr.write(
+                "fKDpy.so won't be built due to missing Forthon/gfortran\n"
+            )
+            return
+
+        cwd = os.getcwd()
+        os.chdir(os.path.join(cwd, 'yt/utilities/kdtree'))
+        cmd = [Forthon_exe, "-F", "gfortran", "--compile_first",
+               "fKD_source", "--no2underscores", "--fopt", "'-O3'", "fKD",
+               "fKD_source.f90"]
+        subprocess.check_call(cmd, shell=False)
+        shutil.move(glob.glob('build/lib*/fKDpy.so')[0], os.getcwd())
+        os.chdir(cwd)
+
 REASON_FILES = []
 REASON_DIRS = [
     "",
@@ -36,7 +83,7 @@
     files = []
     for ext in ["js", "html", "css", "png", "ico", "gif"]:
         files += glob.glob("%s/*.%s" % (dir_name, ext))
-    REASON_FILES.append( (dir_name, files) )
+    REASON_FILES.append((dir_name, files))
 
 # Verify that we have Cython installed
 try:
@@ -93,10 +140,10 @@
             language=extension.language, cplus=cplus,
             output_file=target_file)
         cython_result = Cython.Compiler.Main.compile(source,
-                                                   options=options)
+                                                     options=options)
         if cython_result.num_errors != 0:
-            raise DistutilsError("%d errors while compiling %r with Cython" \
-                  % (cython_result.num_errors, source))
+            raise DistutilsError("%d errors while compiling %r with Cython"
+                                 % (cython_result.num_errors, source))
     return target_file
 
 
@@ -109,7 +156,9 @@
 
 VERSION = "3.0dev"
 
-if os.path.exists('MANIFEST'): os.remove('MANIFEST')
+if os.path.exists('MANIFEST'):
+    os.remove('MANIFEST')
+
 
 def get_mercurial_changeset_id(target_dir):
     """adapted from a script by Jason F. Harris, published at
@@ -123,11 +172,11 @@
                                      stdout=subprocess.PIPE,
                                      stderr=subprocess.PIPE,
                                      shell=True)
-        
+
     if (get_changeset.stderr.read() != ""):
         print "Error in obtaining current changeset of the Mercurial repository"
         changeset = None
-        
+
     changeset = get_changeset.stdout.read().strip()
     if (not re.search("^[0-9a-f]{12}", changeset)):
         print "Current changeset of the Mercurial repository is malformed"
@@ -135,12 +184,26 @@
 
     return changeset
 
+
+class my_build_src(build_src.build_src):
+    def run(self):
+        self.run_command("build_forthon")
+        build_src.build_src.run(self)
+
+
+class my_install_data(np_install_data.install_data):
+    def run(self):
+        self.distribution.data_files.append(
+            ('yt/utilities/kdtree', ['yt/utilities/kdtree/fKDpy.so'])
+        )
+        np_install_data.install_data.run(self)
+
 class my_build_py(build_py):
     def run(self):
         # honor the --dry-run flag
         if not self.dry_run:
-            target_dir = os.path.join(self.build_lib,'yt')
-            src_dir =  os.getcwd() 
+            target_dir = os.path.join(self.build_lib, 'yt')
+            src_dir = os.getcwd()
             changeset = get_mercurial_changeset_id(src_dir)
             self.mkpath(target_dir)
             with open(os.path.join(target_dir, '__hg_version__.py'), 'w') as fobj:
@@ -148,6 +211,7 @@
 
             build_py.run(self)
 
+
 def configuration(parent_package='', top_path=None):
     from numpy.distutils.misc_util import Configuration
 
@@ -158,7 +222,7 @@
                        quiet=True)
 
     config.make_config_py()
-    #config.make_svn_version_py()
+    # config.make_svn_version_py()
     config.add_subpackage('yt', 'yt')
     config.add_scripts("scripts/*")
 
@@ -176,25 +240,25 @@
                     + "simulations, focusing on Adaptive Mesh Refinement data "
                       "from Enzo, Orion, FLASH, and others.",
         classifiers=["Development Status :: 5 - Production/Stable",
-            "Environment :: Console",
-            "Intended Audience :: Science/Research",
-            "License :: OSI Approved :: GNU General Public License (GPL)",
-            "Operating System :: MacOS :: MacOS X",
-            "Operating System :: POSIX :: AIX",
-            "Operating System :: POSIX :: Linux",
-            "Programming Language :: C",
-            "Programming Language :: Python",
-            "Topic :: Scientific/Engineering :: Astronomy",
-            "Topic :: Scientific/Engineering :: Physics",
-            "Topic :: Scientific/Engineering :: Visualization"],
-        keywords='astronomy astrophysics visualization ' + \
-            'amr adaptivemeshrefinement',
+                     "Environment :: Console",
+                     "Intended Audience :: Science/Research",
+                     "License :: OSI Approved :: GNU General Public License (GPL)",
+                     "Operating System :: MacOS :: MacOS X",
+                     "Operating System :: POSIX :: AIX",
+                     "Operating System :: POSIX :: Linux",
+                     "Programming Language :: C",
+                     "Programming Language :: Python",
+                     "Topic :: Scientific/Engineering :: Astronomy",
+                     "Topic :: Scientific/Engineering :: Physics",
+                     "Topic :: Scientific/Engineering :: Visualization"],
+        keywords='astronomy astrophysics visualization ' +
+        'amr adaptivemeshrefinement',
         entry_points={'console_scripts': [
-                            'yt = yt.utilities.command_line:run_main',
-                      ],
-                      'nose.plugins.0.10': [
-                            'answer-testing = yt.utilities.answer_testing.framework:AnswerTesting'
-                      ]
+        'yt = yt.utilities.command_line:run_main',
+        ],
+            'nose.plugins.0.10': [
+                'answer-testing = yt.utilities.answer_testing.framework:AnswerTesting'
+            ]
         },
         author="Matthew J. Turk",
         author_email="matthewturk at gmail.com",
@@ -203,8 +267,9 @@
         configuration=configuration,
         zip_safe=False,
         data_files=REASON_FILES,
-        cmdclass = {'build_py': my_build_py},
-        )
+        cmdclass={'build_py': my_build_py, 'build_forthon': BuildForthon,
+                  'build_src': my_build_src, 'install_data': my_install_data},
+    )
     return
 
 if __name__ == '__main__':
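
A hedged usage note: with the cmdclass registration above, the Forthon step
can be run standalone as "python2.7 setup.py build_forthon"; my_build_src
also invokes it before build_src, so the existing "setup.py develop" call in
install_script.sh should pick it up automatically.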

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -244,8 +244,9 @@
             If True, use dynamic load balancing to create the projections.
             Default: False.
 
-        Getting the Nearest Galaxies
-        ----------------------------
+        Notes
+        -----
+
         The light ray tool will use the HaloProfiler to calculate the
         distance and mass of the nearest halo to that pixel.  In order
         to do this, a dictionary called halo_profiler_parameters is used

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
+++ b/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
@@ -454,8 +454,8 @@
         halonum : int
             Halo number at the last output to trace.
 
-        Output
-        ------
+        Returns
+        -------
         output : dict
             Dictionary of redshifts, cycle numbers, and halo numbers
             of the most massive progenitor.  keys = {redshift, cycle,

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/analysis_modules/halo_merger_tree/merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/merger_tree.py
+++ b/yt/analysis_modules/halo_merger_tree/merger_tree.py
@@ -758,17 +758,19 @@
     
     def query(self, string):
         r"""Performs a query of the database and returns the results as a list
-        of tuple(s), even if the result is singular.
+        of tuples, even if the result is singular.
         
         Parameters
         ----------
-        string : String
+        
+        string : str
             The SQL query of the database.
         
         Examples
-        -------
+        --------
+
         >>> results = mtc.query("SELECT GlobalHaloID from Halos where SnapHaloID = 0 and \
-        ... SnapZ = 0;")
+        ...    SnapZ = 0;")
         """
         # Query the database and return a list of tuples.
         if string is None:

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
@@ -430,8 +430,8 @@
         After all the calls to `add_profile`, this will trigger the actual
         calculations and output the profiles to disk.
 
-        Paramters
-        ---------
+        Parameters
+        ----------
 
         filename : str
             If set, a file will be written with all of the filtered halos

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
--- a/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
+++ b/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
@@ -60,9 +60,9 @@
     
     Initialize an EmissivityIntegrator object.
 
-    Keyword Parameters
-    ------------------
-    filename: string
+    Parameters
+    ----------
+    filename: string, default None
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
         emissivity tables for primordial elements and for metals at 
@@ -146,8 +146,8 @@
     e_min: float
         the minimum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
@@ -220,8 +220,8 @@
     e_min: float
         the minimum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
@@ -277,8 +277,8 @@
     e_min: float
         the minimum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -42,7 +42,8 @@
 from .field_info_container import \
     NeedsOriginalGrid
 from yt.utilities.lib import \
-    QuadTree, ghost_zone_interpolate, fill_region
+    QuadTree, ghost_zone_interpolate, fill_region, \
+    march_cubes_grid, march_cubes_grid_flux
 from yt.utilities.data_point_utilities import CombineGrids,\
     DataCubeRefine, DataCubeReplace, FillRegion, FillBuffer
 from yt.utilities.definitions import axis_names, x_dict, y_dict
@@ -387,9 +388,8 @@
 
     Examples
     --------
-    cube = pf.h.covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
-                              right_edge=[1.0, 1.0, 1.0],
-                              dims=[128, 128, 128])
+    >>> cube = pf.h.covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
+    ...                          dims=[128, 128, 128])
     """
     _spatial = True
     _type_name = "covering_grid"
@@ -418,6 +418,10 @@
                     self.pf.domain_left_edge)/self.dds).astype('int64')
         self._setup_data_source()
 
+    @property
+    def shape(self):
+        return tuple(self.ActiveDimensions.tolist())
+
     def _setup_data_source(self):
         self._data_source = self.pf.h.region(
             self.center, self.left_edge, self.right_edge)
@@ -643,6 +647,7 @@
     """
     _type_name = "surface"
     _con_args = ("data_source", "surface_field", "field_value")
+    _container_fields = ("dx", "dy", "dz", "x", "y", "z")
     vertices = None
     def __init__(self, data_source, surface_field, field_value):
         ParallelAnalysisInterface.__init__(self)
@@ -653,7 +658,10 @@
         center = data_source.get_field_parameter("center")
         super(YTSurfaceBase, self).__init__(center = center, pf =
                     data_source.pf )
-        self._grids = self.data_source._grids.copy()
+
+    def _generate_container_field(self, field):
+        self.get_data(field)
+        return self[field]
 
     def get_data(self, fields = None, sample_type = "face"):
         if isinstance(fields, list) and len(fields) > 1:
@@ -662,20 +670,17 @@
         elif isinstance(fields, list):
             fields = fields[0]
         # Now we have a "fields" value that is either a string or None
-        pb = get_pbar("Extracting (sampling: %s)" % fields,
-                      len(list(self._get_grid_objs())))
+        mylog.info("Extracting (sampling: %s)" % (fields,))
         verts = []
         samples = []
-        for i,g in enumerate(self._get_grid_objs()):
-            pb.update(i)
+        for block, mask in parallel_objects(self.data_source.blocks):
             my_verts = self._extract_isocontours_from_grid(
-                            g, self.surface_field, self.field_value,
-                            fields, sample_type)
+                            block, self.surface_field, self.field_value,
+                            mask, fields, sample_type)
             if fields is not None:
                 my_verts, svals = my_verts
                 samples.append(svals)
             verts.append(my_verts)
-        pb.finish()
         verts = np.concatenate(verts).transpose()
         verts = self.comm.par_combine_object(verts, op='cat', datatype='array')
         self.vertices = verts
@@ -688,11 +693,9 @@
             elif sample_type == "vertex":
                 self.vertex_samples[fields] = samples
         
-
     def _extract_isocontours_from_grid(self, grid, field, value,
-                                       sample_values = None,
+                                       mask, sample_values = None,
                                        sample_type = "face"):
-        mask = self.data_source._get_cut_mask(grid) * grid.child_mask
         vals = grid.get_vertex_centered_data(field, no_ghost = False)
         if sample_values is not None:
             svals = grid.get_vertex_centered_data(sample_values)
@@ -759,19 +762,15 @@
         ...     "x-velocity", "y-velocity", "z-velocity", "Metal_Density")
         """
         flux = 0.0
-        pb = get_pbar("Fluxing %s" % fluxing_field,
-                len(list(self._get_grid_objs())))
-        for i, g in enumerate(self._get_grid_objs()):
-            pb.update(i)
-            flux += self._calculate_flux_in_grid(g,
+        mylog.info("Fluxing %s", fluxing_field)
+        for block, mask in parallel_objects(self.data_source.blocks):
+            flux += self._calculate_flux_in_grid(block, mask,
                     field_x, field_y, field_z, fluxing_field)
-        pb.finish()
         flux = self.comm.mpi_allreduce(flux, op="sum")
         return flux
 
-    def _calculate_flux_in_grid(self, grid, 
+    def _calculate_flux_in_grid(self, grid, mask,
                     field_x, field_y, field_z, fluxing_field = None):
-        mask = self.data_source._get_cut_mask(grid) * grid.child_mask
         vals = grid.get_vertex_centered_data(self.surface_field)
         if fluxing_field is None:
             ff = np.ones(vals.shape, dtype="float64")

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -535,18 +535,26 @@
 
     @property
     def icoords(self):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
         return self._current_chunk.icoords
 
     @property
     def fcoords(self):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
         return self._current_chunk.fcoords
 
     @property
     def ires(self):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
         return self._current_chunk.ires
 
     @property
     def fwidth(self):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
         return self._current_chunk.fwidth
 
 class YTSelectionContainer1D(YTSelectionContainer):
@@ -1165,7 +1173,7 @@
         sp1, ")"])
     """
     _type_name = "boolean"
-    _con_args = ("regions")
+    _con_args = ("regions",)
     def __init__(self, regions, fields = None, pf = None, **kwargs):
         # Center is meaningless, but we'll define it all the same.
         YTSelectionContainer3D.__init__(self, [0.5]*3, fields, pf, **kwargs)

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -305,6 +305,42 @@
     def has_field_parameter(self, param): return True
     def convert(self, item): return 1
 
+    @property
+    def fcoords(self):
+        fc = np.array(np.mgrid[0:1:self.nd*1j,
+                               0:1:self.nd*1j,
+                               0:1:self.nd*1j])
+        if self.flat:
+            fc.shape = (self.nd*self.nd*self.nd, 3)
+        else:
+            fc = fc.transpose()
+        return fc
+
+    @property
+    def icoords(self):
+        ic = np.mgrid[0:self.nd-1:self.nd*1j,
+                      0:self.nd-1:self.nd*1j,
+                      0:self.nd-1:self.nd*1j]
+        if self.flat:
+            ic.shape = (self.nd*self.nd*self.nd, 3)
+        else:
+            ic = ic.transpose()
+        return ic
+
+    @property
+    def ires(self):
+        ir = np.ones(self.nd**3, dtype="int64")
+        if not self.flat:
+            ir.shape = (self.nd, self.nd, self.nd)
+        return ir
+
+    @property
+    def fwidth(self):
+        fw = np.ones(self.nd**3, dtype="float64") / self.nd
+        if not self.flat:
+            fw.shape = (self.nd, self.nd, self.nd)
+        return fw
+
 class DerivedField(object):
     """
     This is the base class used to describe a cell-by-cell derived field.

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/tests/test_covering_grid.py
--- a/yt/data_objects/tests/test_covering_grid.py
+++ b/yt/data_objects/tests/test_covering_grid.py
@@ -67,9 +67,9 @@
             dn = pf.refine_by**level 
             cg = pf.h.smoothed_covering_grid(level, [0.0, 0.0, 0.0],
                     dn * pf.domain_dimensions)
-            assert_equal( cg["Ones"].max(), 1.0)
-            assert_equal( cg["Ones"].min(), 1.0)
-            assert_equal( cg["CellVolume"].sum(), pf.domain_width.prod())
+            yield assert_equal, cg["Ones"].max(), 1.0
+            yield assert_equal, cg["Ones"].min(), 1.0
+            yield assert_equal, cg["CellVolume"].sum(), pf.domain_width.prod()
             for g in pf.h.grids:
                 if level != g.Level: continue
                 di = g.get_global_startindex()

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/tests/test_extract_regions.py
--- a/yt/data_objects/tests/test_extract_regions.py
+++ b/yt/data_objects/tests/test_extract_regions.py
@@ -6,6 +6,7 @@
 
 def test_cut_region():
     # We decompose in different ways
+    return #TESTDISABLED
     for nprocs in [1, 2, 4, 8]:
         pf = fake_random_pf(64, nprocs = nprocs,
             fields = ("Density", "Temperature", "x-velocity"))
@@ -29,6 +30,7 @@
 
 def test_extract_region():
     # We decompose in different ways
+    return #TESTDISABLED
     for nprocs in [1, 2, 4, 8]:
         pf = fake_random_pf(64, nprocs = nprocs,
             fields = ("Density", "Temperature", "x-velocity"))

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/tests/test_slice.py
--- a/yt/data_objects/tests/test_slice.py
+++ b/yt/data_objects/tests/test_slice.py
@@ -1,24 +1,60 @@
-from yt.testing import *
+"""
+Tests for AMRSlice
+
+Authors: Samuel Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
+Author: Kacper Kowalik <xarthisius.kk at gmail.com>
+Affiliation: CA UMK
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2012 Samuel Skillman.  All Rights Reserved.
+  Copyright (C) 2013 Kacper Kowalik.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
 import os
+import numpy as np
+from nose.tools import raises
+from yt.testing import \
+    fake_random_pf, assert_equal, assert_array_equal
+from yt.utilities.definitions import \
+    x_dict, y_dict
+from yt.utilities.exceptions import \
+    YTNoDataInObjectError
 
 def setup():
     from yt.config import ytcfg
-    ytcfg["yt","__withintesting"] = "True"
+    ytcfg["yt", "__withintesting"] = "True"
+
 
 def teardown_func(fns):
     for fn in fns:
         os.remove(fn)
 
+
 def test_slice():
     for nprocs in [8, 1]:
         # We want to test both 1 proc and 8 procs, to make sure that
         # parallelism isn't broken
-        pf = fake_random_pf(64, nprocs = nprocs)
+        pf = fake_random_pf(64, nprocs=nprocs)
         dims = pf.domain_dimensions
         xn, yn, zn = pf.domain_dimensions
-        xi, yi, zi = pf.domain_left_edge + 1.0/(pf.domain_dimensions * 2)
-        xf, yf, zf = pf.domain_right_edge - 1.0/(pf.domain_dimensions * 2)
-        coords = np.mgrid[xi:xf:xn*1j, yi:yf:yn*1j, zi:zf:zn*1j]
+        xi, yi, zi = pf.domain_left_edge + 1.0 / (pf.domain_dimensions * 2)
+        xf, yf, zf = pf.domain_right_edge - 1.0 / (pf.domain_dimensions * 2)
+        coords = np.mgrid[xi:xf:xn * 1j, yi:yf:yn * 1j, zi:zf:zn * 1j]
         uc = [np.unique(c) for c in coords]
         slc_pos = 0.5
         # Some simple slice tests with single grids
@@ -33,31 +69,45 @@
                 yield assert_equal, slc["Ones"].max(), 1.0
                 yield assert_equal, np.unique(slc["px"]), uc[xax]
                 yield assert_equal, np.unique(slc["py"]), uc[yax]
-                yield assert_equal, np.unique(slc["pdx"]), 1.0/(dims[xax]*2.0)
-                yield assert_equal, np.unique(slc["pdy"]), 1.0/(dims[yax]*2.0)
+                yield assert_equal, np.unique(slc["pdx"]), 0.5 / dims[xax]
+                yield assert_equal, np.unique(slc["pdy"]), 0.5 / dims[yax]
                 pw = slc.to_pw()
                 fns += pw.save()
-                frb = slc.to_frb((1.0,'unitary'), 64)
+                frb = slc.to_frb((1.0, 'unitary'), 64)
                 for slc_field in ['Ones', 'Density']:
                     yield assert_equal, frb[slc_field].info['data_source'], \
-                            slc.__str__()
+                        slc.__str__()
                     yield assert_equal, frb[slc_field].info['axis'], \
-                            ax
+                        ax
                     yield assert_equal, frb[slc_field].info['field'], \
-                            slc_field
+                        slc_field
                     yield assert_equal, frb[slc_field].info['units'], \
-                            pf.field_info[slc_field].get_units()
+                        pf.field_info[slc_field].get_units()
                     yield assert_equal, frb[slc_field].info['xlim'], \
-                            frb.bounds[:2]
+                        frb.bounds[:2]
                     yield assert_equal, frb[slc_field].info['ylim'], \
-                            frb.bounds[2:]
+                        frb.bounds[2:]
                     yield assert_equal, frb[slc_field].info['length_to_cm'], \
-                            pf['cm']
+                        pf['cm']
                     yield assert_equal, frb[slc_field].info['center'], \
-                            slc.center
+                        slc.center
                     yield assert_equal, frb[slc_field].info['coord'], \
-                            slc_pos
+                        slc_pos
                 teardown_func(fns)
             # wf == None
             yield assert_equal, wf, None
 
+
+def test_slice_over_edges():
+    pf = fake_random_pf(64, nprocs=8, fields=["Density"], negative=[False])
+    slc = pf.h.slice(0, 0.0)
+    slc["Density"]
+    slc = pf.h.slice(1, 0.5)
+    slc["Density"]
+
+
+def test_slice_over_outer_boundary():
+    pf = fake_random_pf(64, nprocs=8, fields=["Density"], negative=[False])
+    slc = pf.h.slice(2, 1.0)
+    slc["Density"]
+    yield assert_equal, slc["Density"].size, 0
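
The pdx/pdy change above is purely algebraic (1.0/(dims*2.0) equals
0.5/dims), while xi/xf/np.mgrid together generate cell-center coordinates. A
small sketch of that construction for one axis, with an arbitrary resolution:

    import numpy as np

    n = 64
    # n cell centers on [0, 1): the first sits at 0.5/n, the last at 1 - 0.5/n.
    centers = np.linspace(0.5 / n, 1.0 - 0.5 / n, n)
    # np.mgrid with an imaginary "step" produces the same n points, inclusive.
    assert np.allclose(centers, np.mgrid[0.5 / n:1.0 - 0.5 / n:n * 1j])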

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -55,7 +55,7 @@
      G, \
      rho_crit_now, \
      speed_of_light_cgs, \
-     km_per_cm
+     km_per_cm, keV_per_K
 
 from yt.utilities.math_utils import \
     get_sph_r_component, \
@@ -169,18 +169,25 @@
            data["Density"] * data["ThermalEnergy"]
 add_field("Pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
 
+def _TempkeV(field, data):
+    return data["Temperature"] * keV_per_K
+add_field("TempkeV", function=_TempkeV, units=r"\rm{keV}",
+          display_name="Temperature")
+
 def _Entropy(field, data):
     if data.has_field_parameter("mu"):
         mw = mh*data.get_field_parameter("mu")
     else :
         mw = mh
+    try:
+        gammam1 = data.pf["Gamma"] - 1.0
+    except:
+        gammam1 = 5./3. - 1.0
     return kboltz * data["Temperature"] / \
-           ((data["Density"]/mw)**(data.pf["Gamma"] - 1.0))
+           ((data["Density"]/mw)**gammam1)
 add_field("Entropy", units=r"\rm{ergs}\ \rm{cm}^{3\gamma-3}",
           function=_Entropy)
 
-
-
 ### spherical coordinates: r (radius)
 def _sph_r(field, data):
     center = data.get_field_parameter("center")
@@ -737,22 +744,28 @@
          units=r"\rm{g}\/\rm{cm}^2/\rm{s}", particle_type=True,
          validators=[ValidateParameter('center')])
 
-def get_radius(positions, data):
-    c = data.get_field_parameter("center")
-    n_tup = tuple([1 for i in range(positions.ndim-1)])
-    center = np.tile(np.reshape(c, (positions.shape[0],)+n_tup),(1,)+positions.shape[1:])
-    periodicity = data.pf.periodicity
-    if any(periodicity):
-        period = data.pf.domain_right_edge - data.pf.domain_left_edge
-        return periodic_dist(positions, center, period, periodicity)
-    else:
-        return euclidean_dist(positions, center)
+def get_radius(data, field_prefix):
+    center = data.get_field_parameter("center")
+    DW = data.pf.domain_right_edge - data.pf.domain_left_edge
+    radius = np.zeros(data[field_prefix+"x"].shape, dtype='float64')
+    r = radius.copy()
+    if any(data.pf.periodicity):
+        rdw = radius.copy()
+    for i, ax in enumerate('xyz'):
+        np.subtract(data["%s%s" % (field_prefix, ax)], center[i], r)
+        if data.pf.periodicity[i] == True:
+            np.subtract(DW[i], r, rdw)
+            np.abs(r, r)
+            np.minimum(r, rdw, r)
+        np.power(r, 2.0, r)
+        np.add(radius, r, radius)
+    np.sqrt(radius, radius)
+    return radius
+
 def _ParticleRadius(field, data):
-    positions = np.array([data["particle_position_%s" % ax] for ax in 'xyz'])
-    return get_radius(positions, data)
+    return get_radius(data, "particle_position_")
 def _Radius(field, data):
-    positions = np.array([data['x'], data['y'], data['z']])
-    return get_radius(positions, data)
+    return get_radius(data, "")
 
 def _ConvertRadiusCGS(data):
     return data.convert("cm")
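
The rewritten get_radius above accumulates the distance axis-by-axis with
in-place ufuncs instead of tiling a center array. A standalone sketch of the
minimum-image idea it is built around, in plain NumPy rather than the yt
data-object API (pos, center, DW, and periodicity are illustrative inputs):

    import numpy as np

    def periodic_radius(pos, center, DW, periodicity):
        # pos: (N, 3) positions; DW: domain width per axis.
        radius2 = np.zeros(pos.shape[0])
        for i in range(3):
            r = np.abs(pos[:, i] - center[i])
            if periodicity[i]:
                # Minimum-image convention: wrap across the periodic boundary.
                r = np.minimum(r, DW[i] - r)
            radius2 += r * r
        return np.sqrt(radius2)

    pos = np.array([[0.95, 0.5, 0.5]])
    r = periodic_radius(pos, np.array([0.05, 0.5, 0.5]),
                        np.ones(3), [True, True, True])
    assert np.allclose(r, 0.1)  # wraps across x = 1.0 rather than spanning 0.9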

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py
+++ b/yt/frontends/athena/data_structures.py
@@ -289,6 +289,11 @@
                      self.parameter_file.domain_right_edge)
         self.parameter_file.domain_dimensions = \
                 np.round(self.parameter_file.domain_width/gdds[0]).astype('int')
+
+        # Need to reset the units in the parameter file based on the correct
+        # domain left/right/dimensions.
+        self.parameter_file._set_units()
+
         if self.parameter_file.dimensionality <= 2 :
             self.parameter_file.domain_dimensions[2] = np.int(1)
         if self.parameter_file.dimensionality == 1 :

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py
+++ b/yt/frontends/stream/data_structures.py
@@ -231,6 +231,7 @@
         """
         
         particle_types = set_particle_types(data[0])
+        ftype = "all"
 
         for key in data[0].keys() :
             if key is "number_of_particles": continue
@@ -241,9 +242,12 @@
         for i, grid in enumerate(self.grids) :
             if data[i].has_key("number_of_particles") :
                 grid.NumberOfParticles = data[i].pop("number_of_particles")
-            for key in data[i].keys() :
-                if key in grid.keys() : grid.field_data.pop(key, None)
-                self.stream_handler.fields[grid.id][key] = data[i][key]
+            for fname in data[i]:
+                if fname in grid.field_data:
+                    grid.field_data.pop(fname, None)
+                elif (ftype, fname) in grid.field_data:
+                    grid.field_data.pop( ("all", fname) )
+                self.stream_handler.fields[grid.id][fname] = data[i][fname]
             
         self._detect_fields()
         self._setup_unknown_fields()

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/frontends/stream/fields.py
--- a/yt/frontends/stream/fields.py
+++ b/yt/frontends/stream/fields.py
@@ -41,3 +41,9 @@
 StreamFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_field = StreamFieldInfo.add_field
 
+add_stream_field("particle_position_x", function = NullFunc, particle_type=True)
+add_stream_field("particle_position_y", function = NullFunc, particle_type=True)
+add_stream_field("particle_position_z", function = NullFunc, particle_type=True)
+add_stream_field("particle_index", function = NullFunc, particle_type=True)
+add_stream_field("particle_gas_density", function = NullFunc, particle_type=True)
+add_stream_field("particle_gas_temperature", function = NullFunc, particle_type=True)

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py
+++ b/yt/frontends/stream/io.py
@@ -88,10 +88,15 @@
         for chunk in chunks:
             for g in chunk.objs:
                 if g.NumberOfParticles == 0: continue
+                gf = self.fields[g.id]
+                # Sometimes the stream operator won't have the 
+                # ("all", "Something") fields, but instead just "Something".
+                pns = []
+                for pn in pfields:
+                    if pn in gf: pns.append(pn)
+                    else: pns.append(pn[1])
                 size += g.count_particles(selector, 
-                    self.fields[g.id][pfields[0]],
-                    self.fields[g.id][pfields[1]],
-                    self.fields[g.id][pfields[2]])
+                    gf[pns[0]], gf[pns[1]], gf[pns[2]])
         for field in fields:
             # TODO: figure out dataset types
             rv[field] = np.empty(size, dtype='float64')
@@ -102,13 +107,20 @@
         for chunk in chunks:
             for g in chunk.objs:
                 if g.NumberOfParticles == 0: continue
+                gf = self.fields[g.id]
+                pns = []
+                for pn in pfields:
+                    if pn in gf: pns.append(pn)
+                    else: pns.append(pn[1])
                 mask = g.select_particles(selector,
-                    self.fields[g.id][pfields[0]],
-                    self.fields[g.id][pfields[1]],
-                    self.fields[g.id][pfields[2]])
+                    gf[pns[0]], gf[pns[1]], gf[pns[2]])
                 if mask is None: continue
                 for field in set(fields):
-                    gdata = self.fields[g.id][field][mask]
+                    if field in gf:
+                        fn = field
+                    else:
+                        fn = field[1]
+                    gdata = gf[fn][mask]
                     rv[field][ind:ind+gdata.size] = gdata
                 ind += gdata.size
         return rv
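
The pns bookkeeping above normalizes field keys: grids fed through the stream
interface may store a bare "Something" rather than ("all", "Something"). The
lookup rule in isolation (gf stands in for the per-grid field dict):

    def resolve_key(gf, pn):
        # Prefer the (ftype, fname) tuple key; fall back to the bare name.
        return pn if pn in gf else pn[1]

    gf = {"particle_position_x": [0.1],
          ("all", "particle_position_y"): [0.2]}
    assert resolve_key(gf, ("all", "particle_position_x")) == \
        "particle_position_x"
    assert resolve_key(gf, ("all", "particle_position_y")) == \
        ("all", "particle_position_y")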

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/geometry/cartesian_fields.py
--- a/yt/geometry/cartesian_fields.py
+++ b/yt/geometry/cartesian_fields.py
@@ -41,26 +41,26 @@
 add_cart_field = CartesianFieldInfo.add_field
 
 def _dx(field, data):
-    return data.pf.domain_width[0] * data.fwidth[:,0]
+    return data.pf.domain_width[0] * data.fwidth[...,0]
 add_cart_field('dx', function=_dx, display_field=False)
 
 def _dy(field, data):
-    return data.pf.domain_width[1] * data.fwidth[:,1]
+    return data.pf.domain_width[1] * data.fwidth[...,1]
 add_cart_field('dy', function=_dy, display_field=False)
 
 def _dz(field, data):
-    return data.pf.domain_width[2] * data.fwidth[:,2]
+    return data.pf.domain_width[2] * data.fwidth[...,2]
 add_cart_field('dz', function=_dz, display_field=False)
 
 def _coordX(field, data):
-    return data.pf.domain_left_edge[0] + data.fcoords[:,0]
+    return data.pf.domain_left_edge[0] + data.fcoords[...,0]
 add_cart_field('x', function=_coordX, display_field=False)
 
 def _coordY(field, data):
-    return data.pf.domain_left_edge[1] + data.fcoords[:,1]
+    return data.pf.domain_left_edge[1] + data.fcoords[...,1]
 add_cart_field('y', function=_coordY, display_field=False)
 
 def _coordZ(field, data):
-    return data.pf.domain_left_edge[2] + data.fcoords[:,2]
+    return data.pf.domain_left_edge[2] + data.fcoords[...,2]
 add_cart_field('z', function=_coordZ, display_field=False)
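
The [:, 0] -> [..., 0] change above makes these definitions shape-agnostic:
an Ellipsis index selects the last axis whether fcoords and fwidth arrive as
flat (N, 3) arrays or as multidimensional blocks. With arbitrary shapes:

    import numpy as np

    flat = np.zeros((10, 3))
    block = np.zeros((4, 4, 4, 3))
    assert flat[..., 0].shape == (10,)
    assert block[..., 0].shape == (4, 4, 4)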
 

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/geometry/grid_geometry_handler.py
--- a/yt/geometry/grid_geometry_handler.py
+++ b/yt/geometry/grid_geometry_handler.py
@@ -98,7 +98,7 @@
         """
         Returns (in code units) the smallest cell size in the simulation.
         """
-        return self.select_grids(self.grid_levels.max())[0].dds[0]
+        return self.select_grids(self.grid_levels.max())[0].dds[:].min()
 
     def _initialize_level_stats(self):
         # Now some statistics:
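
The get_smallest_dx change above matters for anisotropic cells, where the
x-width need not be the smallest. With made-up widths:

    import numpy as np

    dds = np.array([0.02, 0.01, 0.04])  # hypothetical per-axis cell widths
    assert dds.min() == 0.01            # dds[0] would have reported 0.02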

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/geometry/object_finding_mixin.py
--- a/yt/geometry/object_finding_mixin.py
+++ b/yt/geometry/object_finding_mixin.py
@@ -196,8 +196,10 @@
         """
         Gets back all the grids between a left edge and right edge
         """
-        grid_i = np.where((np.all(self.grid_right_edge > left_edge, axis=1)
-                         & np.all(self.grid_left_edge < right_edge, axis=1)) == True)
+        eps = np.finfo(np.float64).eps
+        grid_i = np.where((np.all((self.grid_right_edge - left_edge) > eps, axis=1)
+                         & np.all((right_edge - self.grid_left_edge) > eps, axis=1)) == True)
+
         return self.grids[grid_i], grid_i
 
     def get_periodic_box_grids(self, left_edge, right_edge):
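
The epsilon guard above makes the overlap test strict: grids whose edges
merely touch the query box no longer match. The comparison for a single grid,
with invented edges:

    import numpy as np

    eps = np.finfo(np.float64).eps
    grid_le, grid_re = np.zeros(3), np.array([0.5, 0.5, 0.5])
    box_le, box_re = np.array([0.5, 0.0, 0.0]), np.ones(3)
    overlaps = (np.all((grid_re - box_le) > eps) and
                np.all((box_re - grid_le) > eps))
    assert not overlaps  # the faces touch at x = 0.5 but do not overlap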

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/geometry/tests/test_particle_octree.py
--- a/yt/geometry/tests/test_particle_octree.py
+++ b/yt/geometry/tests/test_particle_octree.py
@@ -15,6 +15,7 @@
         np.clip(pos[:,i], DLE[i], DRE[i], pos[:,i])
     for ndom in [1, 2, 4, 8]:
         octree = ParticleOctreeContainer((NDIM, NDIM, NDIM), DLE, DRE)
+        octree.n_ref = 32
         for dom in range(ndom):
             octree.add(pos[dom::ndom,:], dom)
         octree.finalize()

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/pmods.py
--- a/yt/pmods.py
+++ b/yt/pmods.py
@@ -230,7 +230,7 @@
 def __import_hook__(name, globals=None, locals=None, fromlist=None, level=-1):
     # TODO: handle level parameter correctly. For now, we'll ignore
     # it and try both absolute and relative imports.
-    parent = __determine_parent__(globals)
+    parent = __determine_parent__(globals, level)
     q, tail = __find_head_package__(parent, name)
     m = __load_tail__(q, tail)
     if not fromlist:
@@ -286,7 +286,7 @@
 
 # The remaining functions are taken unmodified (except for the names)
 # from knee.py.
-def __determine_parent__(globals):
+def __determine_parent__(globals, level):
     if not globals or  not globals.has_key("__name__"):
         return None
     pname = globals['__name__']
@@ -295,7 +295,13 @@
         assert globals is parent.__dict__
         return parent
     if '.' in pname:
-        i = pname.rfind('.')
+        if level > 0:
+            end = len(pname)
+            for l in range(level):
+                i = pname.rfind('.', 0, end)
+                end = i
+        else:
+            i = pname.rfind('.')
         pname = pname[:i]
         parent = sys.modules[pname]
         assert parent.__name__ == pname
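
The level handling added to __determine_parent__ strips one trailing dotted
component per level of relative import. The string manipulation in isolation
(module names chosen arbitrarily):

    def parent_name(pname, level):
        # Strip `level` trailing components from a dotted module name.
        end = len(pname)
        for _ in range(level):
            end = pname.rfind('.', 0, end)
        return pname[:end]

    assert parent_name("yt.utilities.amr_kdtree.api", 1) == \
        "yt.utilities.amr_kdtree"
    assert parent_name("yt.utilities.amr_kdtree.api", 2) == "yt.utilities"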

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/testing.py
--- a/yt/testing.py
+++ b/yt/testing.py
@@ -194,8 +194,8 @@
     Returns
     -------
     array of dicts
-        An array of **kwargs dictionaries to be individually passed to
-        the appropriate function matching these kwargs.
+        An array of dictionaries to be individually passed to the appropriate
+        function matching these kwargs.
 
     Examples
     --------

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/amr_kdtree/amr_kdtools.py
--- a/yt/utilities/amr_kdtree/amr_kdtools.py
+++ b/yt/utilities/amr_kdtree/amr_kdtools.py
@@ -57,6 +57,49 @@
     else:
         return False
 
+
+def add_grid(node, gle, gre, gid, rank, size):
+    if not should_i_build(node, rank, size):
+        return
+
+    if kd_is_leaf(node):
+        insert_grid(node, gle, gre, gid, rank, size)
+    else:
+        less_id = gle[node.split.dim] < node.split.pos
+        if less_id:
+            add_grid(node.left, gle, gre,
+                     gid, rank, size)
+
+        greater_id = gre[node.split.dim] > node.split.pos
+        if greater_id:
+            add_grid(node.right, gle, gre,
+                     gid, rank, size)
+
+
+def insert_grid(node, gle, gre, grid_id, rank, size):
+    if not should_i_build(node, rank, size):
+        return
+
+    # If we should continue to split based on parallelism, do so!
+    if should_i_split(node, rank, size):
+        geo_split(node, gle, gre, grid_id, rank, size)
+        return
+
+    if np.all(gle <= node.left_edge) and \
+            np.all(gre >= node.right_edge):
+        node.grid = grid_id
+        assert(node.grid is not None)
+        return
+
+    # Split the grid
+    check = split_grid(node, gle, gre, grid_id, rank, size)
+    # If check is -1, then we have found a place where there are no choices.
+    # Exit out and set the node to None.
+    if check == -1:
+        node.grid = None
+    return
+
+
 def add_grids(node, gles, gres, gids, rank, size):
     if not should_i_build(node, rank, size):
         return
@@ -74,9 +117,36 @@
             add_grids(node.right, gles[greater_ids], gres[greater_ids],
                       gids[greater_ids], rank, size)
 
+
 def should_i_split(node, rank, size):
     return node.id < size
 
+
+def geo_split_grid(node, gle, gre, grid_id, rank, size):
+    big_dim = np.argmax(gre-gle)
+    new_pos = (gre[big_dim] + gle[big_dim])/2.
+    old_gre = gre.copy()
+    new_gle = gle.copy()
+    new_gle[big_dim] = new_pos
+    gre[big_dim] = new_pos
+
+    split = Split(big_dim, new_pos)
+
+    # Create a Split
+    divide(node, split)
+
+    # Populate Left Node
+    #print 'Inserting left node', node.left_edge, node.right_edge
+    insert_grid(node.left, gle, gre,
+                grid_id, rank, size)
+
+    # Populate Right Node
+    #print 'Inserting right node', node.left_edge, node.right_edge
+    insert_grid(node.right, new_gle, old_gre,
+                grid_id, rank, size)
+    return
+
+
 def geo_split(node, gles, gres, grid_ids, rank, size):
     big_dim = np.argmax(gres[0]-gles[0])
     new_pos = (gres[0][big_dim] + gles[0][big_dim])/2.
@@ -128,6 +198,39 @@
         node.grid = None
     return
 
+def split_grid(node, gle, gre, grid_id, rank, size):
+    # Find a Split
+    data = np.array([(gle[:], gre[:])],  copy=False)
+    best_dim, split_pos, less_id, greater_id = \
+        kdtree_get_choices(data, node.left_edge, node.right_edge)
+
+    # If best_dim is -1, then we have found a place where there are no choices.
+    # Exit out and set the node to None.
+    if best_dim == -1:
+        return -1
+
+    split = Split(best_dim, split_pos)
+
+    del data, best_dim, split_pos
+
+    # Create a Split
+    divide(node, split)
+
+    # Populate Left Node
+    #print 'Inserting left node', node.left_edge, node.right_edge
+    if less_id:
+        insert_grid(node.left, gle, gre,
+                     grid_id, rank, size)
+
+    # Populate Right Node
+    #print 'Inserting right node', node.left_edge, node.right_edge
+    if greater_id:
+        insert_grid(node.right, gle, gre,
+                     grid_id, rank, size)
+
+    return
+
+
 def split_grids(node, gles, gres, grid_ids, rank, size):
     # Find a Split
     data = np.array([(gles[i,:], gres[i,:]) for i in
@@ -207,10 +310,10 @@
         return kd_node_check(node.left)+kd_node_check(node.right)
 
 def kd_is_leaf(node):
-    has_l_child = node.left is None
-    has_r_child = node.right is None
-    assert has_l_child == has_r_child
-    return has_l_child
+    no_l_child = node.left is None
+    no_r_child = node.right is None
+    assert no_l_child == no_r_child
+    return no_l_child
 
 def step_depth(current, previous):
     '''

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py
@@ -49,58 +49,63 @@
                   [ 1,  0, -1], [ 1,  0,  0], [ 1,  0,  1],
                   [ 1,  1, -1], [ 1,  1,  0], [ 1,  1,  1] ])
 
+
+def make_vcd(data):
+    new_field = np.zeros(np.array(data.shape) + 1, dtype='float64')
+    of = data
+    new_field[:-1, :-1, :-1] += of
+    new_field[:-1, :-1, 1:] += of
+    new_field[:-1, 1:, :-1] += of
+    new_field[:-1, 1:, 1:] += of
+    new_field[1:, :-1, :-1] += of
+    new_field[1:, :-1, 1:] += of
+    new_field[1:, 1:, :-1] += of
+    new_field[1:, 1:, 1:] += of
+    np.multiply(new_field, 0.125, new_field)
+
+    new_field[:, :, -1] = 2.0*new_field[:, :, -2] - new_field[:, :, -3]
+    new_field[:, :, 0] = 2.0*new_field[:, :, 1] - new_field[:, :, 2]
+    new_field[:, -1, :] = 2.0*new_field[:, -2, :] - new_field[:, -3, :]
+    new_field[:, 0, :] = 2.0*new_field[:, 1, :] - new_field[:, 2, :]
+    new_field[-1, :, :] = 2.0*new_field[-2, :, :] - new_field[-3, :, :]
+    new_field[0, :, :] = 2.0*new_field[1, :, :] - new_field[2, :, :]
+    return new_field
+
 class Tree(object):
-    def __init__(self, pf, comm_rank=0, comm_size=1, left=None, right=None,
-            min_level=None, max_level=None, grids=None):
+    def __init__(self, pf, comm_rank=0, comm_size=1,
+            min_level=None, max_level=None, source=None):
         
         self.pf = pf
+        if source is None:
+            source = pf.h.all_data()
+        self.source = source
         self._id_offset = self.pf.h.grids[0]._id_offset
-        if left is None:
-            left = np.array([-np.inf]*3)
-        if right is None:
-            right = np.array([np.inf]*3)
-
         if min_level is None: min_level = 0
         if max_level is None: max_level = pf.h.max_level
         self.min_level = min_level
         self.max_level = max_level
         self.comm_rank = comm_rank
         self.comm_size = comm_size
+        left_edge = self.source.left_edge
+        right_edge= self.source.right_edge
         self.trunk = Node(None, None, None,
-                left, right, None, 1)
-        if grids is None:
-            source = pf.h.region((left+right)/2., left, right)
-        else:
-            self.grids = grids
-        self.build([g for g, mask in source.blocks])
+                left_edge, right_edge, None, 1)
+        self.build()
 
     def add_grids(self, grids):
+        gles = np.array([g.LeftEdge for g in grids])
+        gres = np.array([g.RightEdge for g in grids])
+        gids = np.array([g.id for g in grids])
+        add_grids(self.trunk, gles, gres, gids, self.comm_rank, self.comm_size)
+        del gles, gres, gids, grids
+
+    def build(self):
         lvl_range = range(self.min_level, self.max_level+1)
-        if grids is None:
-            level_iter = self.pf.hierarchy.get_levels()
-            while True:
-                try:
-                    grids = level_iter.next()
-                except:
-                    break
-                if grids[0].Level not in lvl_range: continue
-                gmask = np.array([g in self.grids for g in grids])
-                gles =  np.array([g.LeftEdge for g in grids])[gmask]
-                gres =  np.array([g.RightEdge for g in grids])[gmask]
-                gids = np.array([g.id for g in grids])[gmask]
-                add_grids(self.trunk, gles, gres, gids, self.comm_rank, self.comm_size)
-                del gles, gres, gids, grids
-        else:
-            gles = np.array([g.LeftEdge for g in grids])
-            gres = np.array([g.RightEdge for g in grids])
-            gids = np.array([g.id for g in grids])
-
-            add_grids(self.trunk, gles, gres, gids, self.comm_rank, self.comm_size)
-            del gles, gres, gids, grids
-
-
-    def build(self, grids = None):
-        self.add_grids(grids)
+        for lvl in lvl_range:
+            #grids = self.source.select_grids(lvl)
+            grids = np.array([b for b, mask in self.source.blocks])
+            if len(grids) == 0: break
+            self.add_grids(grids)
 
     def check_tree(self):
         for node in depth_traverse(self):
@@ -140,58 +145,49 @@
         return cells
 
 class AMRKDTree(ParallelAnalysisInterface):
-    def __init__(self, pf,  l_max=None, le=None, re=None,
-                 fields=None, no_ghost=False, min_level=None, max_level=None,
-                 log_fields=None,
-                 grids=None):
+    fields = None
+    log_fields = None
+    no_ghost = True
+    def __init__(self, pf, min_level=None, max_level=None, source=None):
 
         ParallelAnalysisInterface.__init__(self)
 
         self.pf = pf
-        self.l_max = l_max
-        if max_level is None: max_level = l_max
-        if fields is None: fields = ["Density"]
-        self.fields = ensure_list(fields)
         self.current_vcds = []
         self.current_saved_grids = []
         self.bricks = []
         self.brick_dimensions = []
         self.sdx = pf.h.get_smallest_dx()
-
         self._initialized = False
-        self.no_ghost = no_ghost
-        if log_fields is not None:
-            log_fields = ensure_list(log_fields)
-        else:
-            pf.h
-            log_fields = [self.pf.field_info[field].take_log
-                         for field in self.fields]
-
-        self.log_fields = log_fields
         self._id_offset = pf.h.grids[0]._id_offset
 
-        if le is None:
-            self.le = pf.domain_left_edge
-        else:
-            self.le = np.array(le)
-        if re is None:
-            self.re = pf.domain_right_edge
-        else:
-            self.re = np.array(re)
-
+        #self.add_mask_field()
+        if source is None:
+            source = pf.h.all_data()
+        self.source = source
+    
         mylog.debug('Building AMRKDTree')
         self.tree = Tree(pf, self.comm.rank, self.comm.size,
-                         self.le, self.re, min_level=min_level,
-                         max_level=max_level, grids=grids)
+                         min_level=min_level,
+                         max_level=max_level, source=source)
 
-    def initialize_source(self):
-        if self._initialized : return
+    def set_fields(self, fields, log_fields, no_ghost):
+        self.fields = fields
+        self.log_fields = log_fields
+        self.no_ghost = no_ghost
+        del self.bricks, self.brick_dimensions
+        self.brick_dimensions = []
         bricks = []
         for b in self.traverse():
             bricks.append(b)
         self.bricks = np.array(bricks)
         self.brick_dimensions = np.array(self.brick_dimensions)
-        self._initialized = True
+    
+    def initialize_source(self, fields, log_fields, no_ghost):
+        if fields == self.fields and log_fields == self.log_fields and \
+            no_ghost == self.no_ghost:
+            return
+        self.set_fields(fields, log_fields, no_ghost)
 
     def traverse(self, viewpoint=None):
         if viewpoint is None:
@@ -270,8 +266,12 @@
             dds = self.current_vcds[self.current_saved_grids.index(grid)]
         else:
             dds = []
+            mask = make_vcd(grid.child_mask)
+            mask = np.clip(mask, 0.0, 1.0)
+            mask[mask<1.0] = np.inf
             for i,field in enumerate(self.fields):
-                vcd = grid.get_vertex_centered_data(field,smoothed=True,no_ghost=self.no_ghost).astype('float64')
+                vcd = make_vcd(grid[field])
+                vcd *= mask
                 if self.log_fields[i]: vcd = np.log10(vcd)
                 dds.append(vcd)
                 self.current_saved_grids.append(grid)
@@ -286,7 +286,8 @@
                                 node.right_edge.copy(),
                                 dims.astype('int64'))
         node.data = brick
-        if not self._initialized: self.brick_dimensions.append(dims)
+        if not self._initialized: 
+            self.brick_dimensions.append(dims)
         return brick
 
     def locate_brick(self, position):
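
make_vcd above takes over from get_vertex_centered_data on the KD-tree path:
interior vertices average their eight surrounding cell values and boundary
faces are filled by linear extrapolation, so cell-centered data of shape
(nx, ny, nz) becomes vertex-centered data of shape (nx+1, ny+1, nz+1). A
quick property check (make_vcd here means the function defined in the diff
above):

    import numpy as np

    data = np.ones((8, 8, 8))
    vcd = make_vcd(data)            # as defined in the diff above
    assert vcd.shape == (9, 9, 9)   # cell-centered -> vertex-centered
    assert np.allclose(vcd, 1.0)    # averaging/extrapolation keep constants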

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py
+++ b/yt/utilities/grid_data_format/writer.py
@@ -128,10 +128,11 @@
         # add the field data to the grid group
         # Check if this is a real field or particle data.
         field_obj = pf.field_info[field_name]
+        grid.get_data(field_name)
         if field_obj.particle_type:  # particle data
-            pt_group[field_name] = grid.get_data(field_name)
+            pt_group[field_name] = grid[field_name]
         else:  # a field
-            grid_group[field_name] = grid.get_data(field_name)
+            grid_group[field_name] = grid[field_name]
 
 def _create_new_gdf(pf, gdf_path, data_author=None, data_comment=None,
                    particle_type_name="dark_matter"):

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/lib/geometry_utils.pyx
--- a/yt/utilities/lib/geometry_utils.pyx
+++ b/yt/utilities/lib/geometry_utils.pyx
@@ -40,58 +40,6 @@
     long int lrint(double x) nogil
     double fabs(double x) nogil
 
-@cython.cdivision(True)
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def get_box_grids_level(np.ndarray[np.float64_t, ndim=1] left_edge,
-                        np.ndarray[np.float64_t, ndim=1] right_edge,
-                        int level,
-                        np.ndarray[np.float64_t, ndim=2] left_edges,
-                        np.ndarray[np.float64_t, ndim=2] right_edges,
-                        np.ndarray[np.int32_t, ndim=2] levels,
-                        np.ndarray[np.int32_t, ndim=1] mask,
-                        int min_index = 0):
-    cdef int i, n
-    cdef int nx = left_edges.shape[0]
-    cdef int inside 
-    for i in range(nx):
-        if i < min_index or levels[i,0] != level:
-            mask[i] = 0
-            continue
-        inside = 1
-        for n in range(3):
-            if left_edge[n] >= right_edges[i,n] or \
-               right_edge[n] <= left_edges[i,n]:
-                inside = 0
-                break
-        if inside == 1: mask[i] = 1
-        else: mask[i] = 0
-
-@cython.cdivision(True)
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def get_box_grids_below_level(
-                        np.ndarray[np.float64_t, ndim=1] left_edge,
-                        np.ndarray[np.float64_t, ndim=1] right_edge,
-                        int level,
-                        np.ndarray[np.float64_t, ndim=2] left_edges,
-                        np.ndarray[np.float64_t, ndim=2] right_edges,
-                        np.ndarray[np.int32_t, ndim=2] levels,
-                        np.ndarray[np.int32_t, ndim=1] mask):
-    cdef int i, n
-    cdef int nx = left_edges.shape[0]
-    cdef int inside 
-    for i in range(nx):
-        mask[i] = 0
-        if levels[i,0] <= level:
-            inside = 1
-            for n in range(3):
-                if left_edge[n] >= right_edges[i,n] or \
-                   right_edge[n] <= left_edges[i,n]:
-                    inside = 0
-                    break
-            if inside == 1: mask[i] = 1
-
 # Finally, miscellaneous routines.
 
 @cython.cdivision(True)

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/lib/marching_cubes.pyx
--- a/yt/utilities/lib/marching_cubes.pyx
+++ b/yt/utilities/lib/marching_cubes.pyx
@@ -462,7 +462,7 @@
 @cython.cdivision(True)
 def march_cubes_grid(np.float64_t isovalue,
                      np.ndarray[np.float64_t, ndim=3] values,
-                     np.ndarray[np.int32_t, ndim=3] mask,
+                     np.ndarray[np.uint8_t, ndim=3, cast=True] mask,
                      np.ndarray[np.float64_t, ndim=1] left_edge,
                      np.ndarray[np.float64_t, ndim=1] dxs,
                      obj_sample = None, int sample_type = 1):
@@ -565,7 +565,7 @@
                      np.ndarray[np.float64_t, ndim=3] v2,
                      np.ndarray[np.float64_t, ndim=3] v3,
                      np.ndarray[np.float64_t, ndim=3] flux_field,
-                     np.ndarray[np.int32_t, ndim=3] mask,
+                     np.ndarray[np.uint8_t, ndim=3, cast=True] mask,
                      np.ndarray[np.float64_t, ndim=1] left_edge,
                      np.ndarray[np.float64_t, ndim=1] dxs):
     cdef int dims[3]
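
The mask buffer change above (int32 -> uint8 with cast=True) lets these
routines accept the boolean masks yt's selectors produce: bool and uint8
share an itemsize, so the buffer can be reinterpreted without a copy. The
same reinterpretation in plain NumPy terms:

    import numpy as np

    mask = np.zeros((4, 4, 4), dtype=bool)
    as_u8 = mask.view(np.uint8)     # same bytes, different dtype, no copy
    assert as_u8.dtype == np.uint8 and as_u8.base is mask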

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/lib/misc_utilities.pyx
--- a/yt/utilities/lib/misc_utilities.pyx
+++ b/yt/utilities/lib/misc_utilities.pyx
@@ -118,12 +118,12 @@
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def lines(np.ndarray[np.float64_t, ndim=3] image, 
+def lines(np.ndarray[np.float64_t, ndim=3] image,
           np.ndarray[np.int64_t, ndim=1] xs,
           np.ndarray[np.int64_t, ndim=1] ys,
           np.ndarray[np.float64_t, ndim=2] colors,
           int points_per_color=1):
-    
+
     cdef int nx = image.shape[0]
     cdef int ny = image.shape[1]
     cdef int nl = xs.shape[0]
@@ -145,7 +145,7 @@
             for i in range(3):
                 alpha[i] = colors[j/points_per_color,3]*\
                         colors[j/points_per_color,i]
-        if x0 < x1: 
+        if x0 < x1:
             sx = 1
         else:
             sx = -1
@@ -153,7 +153,7 @@
             sy = 1
         else:
             sy = -1
-        while(1): 
+        while(1):
             if (x0 < 0 and sx == -1): break
             elif (x0 >= nx and sx == 1): break
             elif (y0 < 0 and sy == -1): break
@@ -175,13 +175,13 @@
             if e2 < dx :
                 err = err + dx
                 y0 += sy
-    return 
+    return
 
 def rotate_vectors(np.ndarray[np.float64_t, ndim=3] vecs,
         np.ndarray[np.float64_t, ndim=2] R):
     cdef int nx = vecs.shape[0]
     cdef int ny = vecs.shape[1]
-    rotated = np.empty((nx,ny,3),dtype='float64') 
+    rotated = np.empty((nx,ny,3),dtype='float64')
     for i in range(nx):
         for j in range(ny):
             for k in range(3):
@@ -291,15 +291,16 @@
                         int min_index = 0):
     cdef int i, n
     cdef int nx = left_edges.shape[0]
-    cdef int inside 
+    cdef int inside
+    cdef np.float64_t eps = np.finfo(np.float64).eps
     for i in range(nx):
         if i < min_index or levels[i,0] != level:
             mask[i] = 0
             continue
         inside = 1
         for n in range(3):
-            if left_edge[n] >= right_edges[i,n] or \
-               right_edge[n] <= left_edges[i,n]:
+            if (right_edges[i,n] - left_edge[n]) <= eps or \
+               (right_edge[n] - left_edges[i,n]) <= eps:
                 inside = 0
                 break
         if inside == 1: mask[i] = 1
@@ -319,14 +320,15 @@
                         int min_level = 0):
     cdef int i, n
     cdef int nx = left_edges.shape[0]
-    cdef int inside 
+    cdef int inside
+    cdef np.float64_t eps = np.finfo(np.float64).eps
     for i in range(nx):
         mask[i] = 0
         if levels[i,0] <= level and levels[i,0] >= min_level:
             inside = 1
             for n in range(3):
-                if left_edge[n] >= right_edges[i,n] or \
-                   right_edge[n] <= left_edges[i,n]:
+                if (right_edges[i,n] - left_edge[n]) <= eps or \
+                   (right_edge[n] - left_edges[i,n]) <= eps:
                     inside = 0
                     break
             if inside == 1: mask[i] = 1

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/particle_generator.py
--- a/yt/utilities/particle_generator.py
+++ b/yt/utilities/particle_generator.py
@@ -144,7 +144,7 @@
                                 self.particles[start:end,field_index],
                                 np.int64(self.NumberOfParticles[i]),
                                 cube[gfield], le, dims,
-                                np.float64(grid['dx']))
+                                grid.dds[0])
         pbar.finish()
 
     def apply_to_stream(self, clobber=False) :

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/physical_constants.py
--- a/yt/utilities/physical_constants.py
+++ b/yt/utilities/physical_constants.py
@@ -41,6 +41,7 @@
 mpc_per_rsun  = 2.253962e-14
 mpc_per_miles = 5.21552871e-20
 mpc_per_cm    = 3.24077929e-25
+kpc_per_cm    = mpc_per_cm / mpc_per_kpc
 km_per_pc     = 1.3806504e13
 km_per_m      = 1e-3
 km_per_cm     = 1e-5
@@ -54,17 +55,26 @@
 rsun_per_mpc  = 1.0 / mpc_per_rsun
 miles_per_mpc = 1.0 / mpc_per_miles
 cm_per_mpc    = 1.0 / mpc_per_cm
+cm_per_kpc    = 1.0 / kpc_per_cm
 cm_per_km     = 1.0 / km_per_cm
 pc_per_km     = 1.0 / km_per_pc
 cm_per_pc     = 1.0 / pc_per_cm
+
 # time
 sec_per_Gyr  = 31.5576e15
 sec_per_Myr  = 31.5576e12
+sec_per_kyr  = 31.5576e9
 sec_per_year = 31.5576e6   # "IAU Style Manual" by G.A. Wilkins, Comm. 5, in IAU Transactions XXB (1989)
 sec_per_day  = 86400.0
 sec_per_hr   = 3600.0
 day_per_year = 365.25
 
+# temperature / energy
+erg_per_eV = 1.602176487e-12 # http://goldbook.iupac.org/E02014.html
+erg_per_keV = erg_per_eV * 1.0e3
+K_per_keV = erg_per_keV / boltzmann_constant_cgs
+keV_per_K = 1.0 / K_per_keV
+
 #Short cuts
 G = gravitational_constant_cgs
 me = mass_electron_cgs
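
A quick numerical check of the new temperature/energy constants: taking the
CGS Boltzmann constant as k_B = 1.3806504e-16 erg/K (the value assumed here;
the diff divides by boltzmann_constant_cgs), one keV corresponds to roughly
1.16e7 K:

    erg_per_eV = 1.602176487e-12
    erg_per_keV = erg_per_eV * 1.0e3
    kboltz = 1.3806504e-16               # erg/K, assumed CGS value
    K_per_keV = erg_per_keV / kboltz
    assert abs(K_per_keV / 1.1604e7 - 1.0) < 1e-3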

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/setup.py
--- a/yt/utilities/setup.py
+++ b/yt/utilities/setup.py
@@ -50,7 +50,6 @@
     config.add_subpackage("answer_testing")
     config.add_subpackage("delaunay")  # From SciPy, written by Robert Kern
     config.add_subpackage("kdtree")
-    config.add_data_files(('kdtree', ['kdtree/fKDpy.so']))
     config.add_subpackage("spatial")
     config.add_subpackage("grid_data_format")
     config.add_subpackage("parallel_tools")

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/tests/test_amr_kdtree.py
--- a/yt/utilities/tests/test_amr_kdtree.py
+++ b/yt/utilities/tests/test_amr_kdtree.py
@@ -24,48 +24,47 @@
 """
 
 from yt.utilities.amr_kdtree.api import AMRKDTree
-from yt.utilities.amr_kdtree.amr_kdtools import kd_node_check, depth_traverse
+from yt.utilities.amr_kdtree.amr_kdtools import depth_traverse
 import yt.utilities.initial_conditions as ic
 import yt.utilities.flagging_methods as fm
 from yt.frontends.stream.api import load_uniform_grid, refine_amr
 from yt.testing import assert_equal
 import numpy as np
 
-def test_amr_kdtree():
+
+def test_amr_kdtree_coverage():
+    return #TESTDISABLED
     domain_dims = (32, 32, 32)
     data = np.zeros(domain_dims) + 0.25
-    fo = [ic.CoredSphere(0.05, 0.3, [0.7,0.4,0.75], {"Density": (0.25, 100.0)})]
+    fo = [ic.CoredSphere(0.05, 0.3, [0.7, 0.4, 0.75],
+                         {"Density": (0.25, 100.0)})]
     rc = [fm.flagging_method_registry["overdensity"](8.0)]
     ug = load_uniform_grid({'Density': data}, domain_dims, 1.0)
     pf = refine_amr(ug, rc, fo, 5)
- 
+
     kd = AMRKDTree(pf)
 
-    yield assert_equal, kd.count_volume(), 1.0
-    
-def test_amr_kdtree_coverage():
-    domain_dims = (32, 32, 32)
-    data = np.zeros(domain_dims) + 0.25
-    fo = [ic.CoredSphere(0.05, 0.3, [0.7,0.4,0.75], {"Density": (0.25, 100.0)})]
-    rc = [fm.flagging_method_registry["overdensity"](8.0)]
-    ug = load_uniform_grid({'Density': data}, domain_dims, 1.0)
-    pf = refine_amr(ug, rc, fo, 5)
- 
-    kd = AMRKDTree(pf)
+    volume = kd.count_volume()
+    yield assert_equal, volume, \
+        np.prod(pf.domain_right_edge - pf.domain_left_edge)
 
+    cells = kd.count_cells()
+    true_cells = pf.h.all_data().quantities['TotalQuantity']('Ones')[0]
+    yield assert_equal, cells, true_cells
 
     # This largely reproduces the AMRKDTree.tree.check_tree() functionality
+    tree_ok = True
     for node in depth_traverse(kd.tree):
         if node.grid is None:
             continue
         grid = pf.h.grids[node.grid - kd._id_offset]
         dds = grid.dds
         gle = grid.LeftEdge
-        gre = grid.RightEdge
         li = np.rint((node.left_edge-gle)/dds).astype('int32')
         ri = np.rint((node.right_edge-gle)/dds).astype('int32')
         dims = (ri - li).astype('int32')
-        yield assert_equal, np.all(grid.LeftEdge <= node.left_edge), True
-        yield assert_equal, np.all(grid.RightEdge >= node.right_edge), True
-        yield assert_equal, np.all(dims > 0), True
+        tree_ok *= np.all(grid.LeftEdge <= node.left_edge)
+        tree_ok *= np.all(grid.RightEdge >= node.right_edge)
+        tree_ok *= np.all(dims > 0)
 
+    yield assert_equal, True, tree_ok

diff -r 0633dbe1826cd23fbed850f01899623c22fe370a -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 yt/utilities/tests/test_particle_generator.py
--- a/yt/utilities/tests/test_particle_generator.py
+++ b/yt/utilities/tests/test_particle_generator.py
@@ -105,5 +105,5 @@
     yield assert_equal, particles_per_grid3, particles1.NumberOfParticles+particles2.NumberOfParticles
 
 if __name__=="__main__":
-    for i in test_particle_generator():
+    for n, i in enumerate(test_particle_generator()):
         i[0](*i[1:])

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/4bf42d46e46e/
Changeset:   4bf42d46e46e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 19:59:40
Summary:     reverting to periodic region
Affected #:  1 file

diff -r 1fb6bb76a0a2be6537a6fca6fa073c180bb88f70 -r 4bf42d46e46e96ede8173695e64b55894e5e32ad yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -1103,7 +1103,7 @@
             and np.all(self.region.left_edge <= LE) \
             and np.all(self.region.right_edge >= RE):
             return self.region
-        self.region = data.pf.h.region(
+        self.region = data.pf.h.periodic_region(
             data.center, LE, RE)
         return self.region
 


https://bitbucket.org/yt_analysis/yt/commits/c19a8a845773/
Changeset:   c19a8a845773
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 20:20:26
Summary:     slicing needs the center passed
Affected #:  1 file

diff -r 4bf42d46e46e96ede8173695e64b55894e5e32ad -r c19a8a845773d6e7b6da12de325de50eba3a4917 yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -1215,7 +1215,7 @@
             axes_unit = units
         if field_parameters is None: field_parameters = {}
         slc = pf.h.slice(axis, center[axis],
-            field_parameters = field_parameters)
+            field_parameters = field_parameters, center=center)
         slc.get_data(fields)
         PWViewerMPL.__init__(self, slc, bounds, origin=origin,
                              fontsize=fontsize)


https://bitbucket.org/yt_analysis/yt/commits/447c295527b4/
Changeset:   447c295527b4
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 20:26:29
Summary:     adding ParticleMassMsun
Affected #:  1 file

diff -r c19a8a845773d6e7b6da12de325de50eba3a4917 -r 447c295527b40fa5da373a24b1a49ad17545f456 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -281,3 +281,8 @@
 
 add_field("particle_age_spread",function=_particle_age_spread,
           particle_type=True,take_log=True,units=r"\rm{s}")
+
+def _ParticleMassMsun(field,data):
+    return data["particle_mass"]/1.989e33
+add_field("ParticleMassMsun",function=_ParticleMassMsun,particle_type=True,
+          take_log=True,units=r"\rm{Msun}")


https://bitbucket.org/yt_analysis/yt/commits/67cd1a8b8716/
Changeset:   67cd1a8b8716
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 22:42:07
Summary:     removing print statement
Affected #:  1 file

diff -r 447c295527b40fa5da373a24b1a49ad17545f456 -r 67cd1a8b87169d6535f04cae152759283507fa05 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -124,7 +124,6 @@
                 if field not in tr.keys():
                     tr[field] = np.zeros(npa,dtype='f8')
                 tr[field][stara:starb] = temp
-                print "masses in io.py: ", tr[field]
                 del temp
             tr[field]=tr[field][mask]
         return tr


https://bitbucket.org/yt_analysis/yt/commits/637666973ae0/
Changeset:   637666973ae0
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 22:50:29
Summary:     Stupid enumerate(range())
Affected #:  1 file

diff -r 67cd1a8b87169d6535f04cae152759283507fa05 -r 637666973ae0588fc6f77d3726688af1804d1b4b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -92,7 +92,7 @@
         """
         nv = len(self.fluid_field_list)
         self.domains = [ARTDomainFile(self.parameter_file,i+1,nv,l)
-                        for i,l in enumerate(range(self.pf.max_level))]
+                        for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
         self.oct_handler = ARTOctreeContainer(


https://bitbucket.org/yt_analysis/yt/commits/690660e722ec/
Changeset:   690660e722ec
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 22:58:40
Summary:     deleting fields now determines the field first
Affected #:  2 files

diff -r 637666973ae0588fc6f77d3726688af1804d1b4b -r 690660e722ec0e73898fec89e7bdd90ee7e7611f yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -204,13 +204,15 @@
         """
         Sets a field to be some other value.
         """
-        self.field_data[key] = val
+        f = self._determine_fields(key)[0]
+        self.field_data[f] = val
 
     def __delitem__(self, key):
         """
         Deletes a field
         """
-        del self.field_data[key]
+        f = self._determine_fields(key)[0]
+        del self.field_data[f]
 
     def _generate_field(self, field):
         ftype, fname = field

diff -r 637666973ae0588fc6f77d3726688af1804d1b4b -r 690660e722ec0e73898fec89e7bdd90ee7e7611f yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -91,7 +91,7 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,i+1,nv,l)
+        self.domains = [ARTDomainFile(self.parameter_file,l+1,nv,l)
                         for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)


https://bitbucket.org/yt_analysis/yt/commits/e627ed46b244/
Changeset:   e627ed46b244
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-08 23:53:50
Summary:     first draft of ART testing
Affected #:  3 files

diff -r 690660e722ec0e73898fec89e7bdd90ee7e7611f -r e627ed46b244d65f7a7f9afa219d760287912b55 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -1059,7 +1059,7 @@
 
     _fields = ["particle_position_%s" % ax for ax in 'xyz']
 
-    def __init__(self, data_source, dm_only=True):
+    def __init__(self, data_source, dm_only=True, redshift=-1):
         """
         Run hop on *data_source* with a given density *threshold*.  If
         *dm_only* is set, only run it on the dark matter particles, otherwise

diff -r 690660e722ec0e73898fec89e7bdd90ee7e7611f -r e627ed46b244d65f7a7f9afa219d760287912b55 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -229,7 +229,7 @@
                     setattr(self,"file_"+filetype,None)
 
     def __repr__(self):
-        return self.file_amr.rsplit(".",1)[0]
+        return self.file_amr.split('/')[-1]
 
     def _set_units(self):
         """
@@ -305,6 +305,9 @@
         
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
+        for pt in particle_fields:
+            if pt not in self.conversion_factors.keys():
+                self.conversion_factors[pt] = 1.0
         for unit in sec_conversion.keys():
             self.time_units[unit] = 1.0 / sec_conversion[unit]
 

diff -r 690660e722ec0e73898fec89e7bdd90ee7e7611f -r e627ed46b244d65f7a7f9afa219d760287912b55 yt/frontends/art/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/art/tests/test_outputs.py
@@ -0,0 +1,48 @@
+"""
+ART frontend tests using SFG1 a=0.330
+
+Author: Christopher Erick Moody <chrisemoody at gmail.com>
+Affiliation: University of California Santa Cruz
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    requires_pf, \
+    small_patch_amr, \
+    big_patch_amr, \
+    data_dir_load
+from yt.frontends.art.api import ARTStaticOutput
+
+_fields = ("Density","particle_mass")
+
+sfg1 = "10MpcBox_csf512_a0.330.d"
+@requires_pf(sfg1)
+def test_sfg1():
+    pf = data_dir_load(sfg1)
+    yield assert_equal, str(pf), "sfg1"
+    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    for field in fields:
+        for axis in [0, 1, 2]:
+            for ds in dso:
+                for weight_field in [None, "Density"]:
+                    yield PixelizedProjectionValuesTest(
+                        sfg1, axis, field, weight_field,
+                        ds)


https://bitbucket.org/yt_analysis/yt/commits/2488a46a9a4a/
Changeset:   2488a46a9a4a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 00:20:54
Summary:     answer testing update
Affected #:  1 file

diff -r e627ed46b244d65f7a7f9afa219d760287912b55 -r 2488a46a9a4ab63705412484c847255e6c00c38a yt/frontends/art/tests/test_outputs.py
--- a/yt/frontends/art/tests/test_outputs.py
+++ b/yt/frontends/art/tests/test_outputs.py
@@ -31,15 +31,15 @@
     data_dir_load
 from yt.frontends.art.api import ARTStaticOutput
 
-_fields = ("Density","particle_mass")
+_fields = ("Density","particle_mass",("all","particle_position_x"))
 
 sfg1 = "10MpcBox_csf512_a0.330.d"
-@requires_pf(sfg1)
+@requires_pf(sfg1,big_data=True)
 def test_sfg1():
     pf = data_dir_load(sfg1)
-    yield assert_equal, str(pf), "sfg1"
+    yield assert_equal, str(pf), "10MpcBox_csf512_a0.330.d"
     dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
-    for field in fields:
+    for field in _fields:
         for axis in [0, 1, 2]:
             for ds in dso:
                 for weight_field in [None, "Density"]:


https://bitbucket.org/yt_analysis/yt/commits/e277526d5dbe/
Changeset:   e277526d5dbe
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 00:41:27
Summary:     reverted the field determination
Affected #:  2 files

diff -r 2488a46a9a4ab63705412484c847255e6c00c38a -r e277526d5dbea33aa3d2c8d93eb9a9cfffc90f71 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -204,15 +204,13 @@
         """
         Sets a field to be some other value.
         """
-        f = self._determine_fields(key)[0]
-        self.field_data[f] = val
+        self.field_data[key] = val
 
     def __delitem__(self, key):
         """
         Deletes a field
         """
-        f = self._determine_fields(key)[0]
-        del self.field_data[f]
+        del self.field_data[key]
 
     def _generate_field(self, field):
         ftype, fname = field

diff -r 2488a46a9a4ab63705412484c847255e6c00c38a -r e277526d5dbea33aa3d2c8d93eb9a9cfffc90f71 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -128,6 +128,15 @@
             tr[field]=tr[field][mask]
         return tr
 
+def _determine_field(pf,field):
+    ptfields = ["stars","darkmatter"]
+    ptmax= self.pf.parameters['wspecies'][-1]
+    if type(field) == int:
+        return field
+    if field in ptfields:
+        return field
+    return "all"
+
 
 def _count_art_octs(f, offset, 
                    MinLev, MaxLevelNow):


https://bitbucket.org/yt_analysis/yt/commits/60932471f489/
Changeset:   60932471f489
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 18:29:31
Summary:     change in data container key deletion
Affected #:  1 file

diff -r e277526d5dbea33aa3d2c8d93eb9a9cfffc90f71 -r 60932471f489b63db7dddc216fe2a5ef84f9c483 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -210,6 +210,8 @@
         """
         Deletes a field
         """
+        if key  not in self.field_data:
+            key = self._determine_fields(key)[0]
         del self.field_data[key]
 
     def _generate_field(self, field):
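
The fallback above keeps deletion working whether the caller passes a bare
name or an (ftype, fname) tuple. A self-contained sketch of the behavior,
with _determine_fields stubbed by a hypothetical normalization:

    class Container(dict):
        def _determine_fields(self, key):
            # Hypothetical stand-in for yt's field-key normalization.
            return [("gas", key)]

        def __delitem__(self, key):
            if key not in self:
                key = self._determine_fields(key)[0]
            dict.__delitem__(self, key)

    c = Container()
    c[("gas", "Density")] = 1.0
    del c["Density"]                 # normalized to ("gas", "Density")
    assert ("gas", "Density") not in c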


https://bitbucket.org/yt_analysis/yt/commits/f924ee4cda97/
Changeset:   f924ee4cda97
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:20:06
Summary:     now we accept (particle_type,particle_field)
Affected #:  2 files

diff -r 60932471f489b63db7dddc216fe2a5ef84f9c483 -r f924ee4cda9757b79d83018a81a67fc25a58a161 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -106,8 +106,15 @@
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
-        self.field_list = set(fluid_fields + particle_fields + particle_star_fields)
+        self.field_list = set(fluid_fields + particle_fields + 
+                              particle_star_fields)
         self.field_list = list(self.field_list)
+        #now generate all of the possible particle fields
+        wspecies = self.parameter_file.parameters['wspecies']
+        nspecies  = len(wspecies)
+        self.parameter_file.particle_types = ["all","darkmatter","stars"]
+        for specie in range(nspecies):
+            self.parameter_file.particle_types.append("specie%i"%specie)
     
     def _setup_classes(self):
         dd = self._get_data_reader_dict()

diff -r 60932471f489b63db7dddc216fe2a5ef84f9c483 -r f924ee4cda9757b79d83018a81a67fc25a58a161 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -34,10 +34,11 @@
 from yt.utilities.fortran_utils import *
 from yt.utilities.logger import ytLogger as mylog
 from yt.frontends.art.definitions import *
+from yt.utilities.physical_constants import sec_per_year
 
 class IOHandlerART(BaseIOHandler):
     _data_style = "art"
-    interp_tb = None
+    tb,ages= None,None
 
     def _read_fluid_selection(self, chunks, selector, fields, size):
         # Chunks in this case will have affiliated domain subset objects
@@ -65,23 +66,26 @@
         masks = {}
         pf = (chunks.next()).objs[0].domain.pf
         ws,ls = pf.parameters["wspecies"],pf.parameters["lspecies"]
-        npa = ls[-1]
+        sizes = np.diff(np.concatenate(([0],ls)))
+        ptmax= ws[-1]
+        npt = ls[-1]
+        nstars = ls[-1]-ls[-2]
         file_particle = pf.file_particle_data
         file_stars = pf.file_particle_stars
-        pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
-                                 total=npa,dd=pf.domain_dimensions)
-        pos,vel = pos.astype('float64'), vel.astype('float64')
-        pos -= 1.0/pf.domain_dimensions[0]
-        mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
-        size = mask.sum()
-        if not any(('position' in n for t,n in fields)):
-            del pos
-        if not any(('velocity' in n for t,n in fields)):
-            del vel
-        stara,starb = ls[-2],ls[-1]
         tr = {}
+        ftype_old = None
         for field in fields:
             ftype,fname = field
+            pbool, idxa,idxb= _determine_field_size(pf,ftype,ls,ptmax)
+            npa = idxb-idxa
+            if not ftype_old == ftype:
+                pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
+                                         dd=pf.domain_dimensions,
+                                         idxa=idxa,idxb=idxb)
+                pos,vel = pos.astype('float64'), vel.astype('float64')
+                pos -= 1.0/pf.domain_dimensions[0]
+                mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
+                size = mask.sum()
             for i,ax in enumerate('xyz'):
                 if fname.startswith("particle_position_%s"%ax):
                     tr[field]=pos[:,i]
@@ -89,54 +93,72 @@
                     tr[field]=vel[:,i]
             if fname == "particle_mass":
                 a=0
-                data = np.zeros(npa,dtype='float64')
-                for b,m in zip(ls,ws):
-                    data[a:b]=(np.ones(b-a,dtype='float64')*m)
-                    a=b
-                #the stellar masses will be updated later
+                data = np.zeros(npa,dtype='f8')
+                for ptb,size,m in zip(pbool,sizes,ws):
+                    if ptb:
+                        data[a:a+size]=m
+                        a+=size
                 tr[field] = data
             elif fname == "particle_index":
-                tr[field]=np.arange(npa)[mask].astype('int64')
+                tr[field]=np.arange(idxa,idxb).astype('int64')
             elif fname == "particle_type":
                 a=0
-                data = np.zeros(npa,dtype='int64')
-                for i,(b,m) in enumerate(zip(ls,ws)):
-                    data[a:b]=(np.ones(b-a,dtype='int64')*i)
-                    a=b
+                data = np.zeros(npa,dtype='int')
+                for i,(ptb,size) in enumerate(zip(pbool,sizes)):
+                    if ptb:
+                        data[a:a+size]=i
+                        a+=size
                 tr[field] = data
-            if fname in particle_star_fields:
-                #we possibly update and change the masses here
-                #all other fields are read in and changed once
-                if starb-stara==0: continue
-                temp= read_star_field(file_stars,field=fname)
-                if fname == "particle_creation_time":
-                    if self.interp_tb is None:
-                        self.tdum,self.adum = read_star_field(file_stars,
-                                                              field="tdum")
-                        tdiff = b2t(self.tdum)-pf.current_time/(3.15569e7*1e9)
-                        #timestamp of file should match amr timestamp
-                        if np.abs(tdiff) < 1e-4:
-                            mylog.debug("Timestamp mismatch in star "+
-                                         "particle header")
-                        self.interp_tb,self.interp_ages = b2t(temp)
-                    temp = np.interp(temp,self.interp_tb,self.interp_ages)
-                    temp *= 1.0e9*365*24*3600
-                if field not in tr.keys():
-                    tr[field] = np.zeros(npa,dtype='f8')
-                tr[field][stara:starb] = temp
-                del temp
+            if pbool[-1] and fname in particle_star_fields:
+                data = read_star_field(file_stars,field=fname)
+                temp = tr.get(field,np.zeros(npa,'f8'))
+                temp[-nstars:] = data
+                tr[field] = temp
+            if fname == "particle_creation_time":
+                data = tr.get(field,np.zeros(npa,'f8'))
+                self.tb,self.ages,data = interpolate_ages(tr[field][-nstars:],
+                                                          file_stars,
+                                                          self.tb,
+                                                          self.ages,
+                                                          pf.current_time)
+                tr.get(field,np.zeros(npa,'f8'))[-nstars:] = data
+                del data
             tr[field]=tr[field][mask]
+            ftype_old = ftype
         return tr
 
-def _determine_field(pf,field):
-    ptfields = ["stars","darkmatter"]
-    ptmax = pf.parameters['wspecies'][-1]
-    if type(field) == int:
-        return field
-    if field in ptfields:
-        return field
-    return "all"
+def _determine_field_size(pf,field,lspecies,ptmax):
+    pbool = np.zeros(len(lspecies),dtype="bool")
+    idxas = np.concatenate(([0,],lspecies[:-1]))
+    idxbs = lspecies
+    if "specie" in field:
+        index = int(field.replace("specie",""))
+        pbool[index] = True
+    elif field == "stars":
+        pbool[-1] = True
+    elif field == "darkmatter":
+        pbool[0:-1]=True
+    else:
+        pbool[:]=True
+    idxa,idxb = idxas[pbool][0],idxbs[pbool][-1]
+    return pbool,idxa,idxb
 
+def interpolate_ages(data,file_stars,interp_tb=None,interp_ages=None,
+                    current_time=None):
+    if interp_tb is None:
+        tdum,adum = read_star_field(file_stars,
+                                              field="tdum")
+        #timestamp of file should match amr timestamp
+        if current_time:
+            tdiff = b2t(tdum)-current_time/(sec_per_year*1e9)
+            if np.abs(tdiff) > 1e-4:
+                mylog.info("Timestamp mismatch in star "+
+                             "particle header")
+        mylog.info("Interpolating ages")
+        interp_tb,interp_ages = b2t(data)
+    temp = np.interp(data,interp_tb,interp_ages)
+    temp *= 1.0e9*sec_per_year
+    return interp_tb,interp_ages, temp
 
 def _count_art_octs(f, offset, 
                    MinLev, MaxLevelNow):
@@ -287,7 +309,7 @@
     return unitary_center,fl,iocts,nLevel,root_level
 
 
-def read_particles(file,Nrow,total=None,dd=1.0):
+def read_particles(file,Nrow,dd=1.0,idxa=None,idxb=None):
     words = 6 # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4 # for file_particle_data; not always true?
     np_per_page = Nrow**2 # defined in ART a_setup.h
@@ -296,10 +318,7 @@
     f = np.fromfile(file, dtype='>f4').astype('float32') # direct access
     pages = np.vsplit(np.reshape(f, (num_pages, words, np_per_page)), num_pages)
     data = np.squeeze(np.dstack(pages)).T # x,y,z,vx,vy,vz
-    if total is None:
-        return data[:,0:3]/dd,data[:,3:]
-    else:
-        return data[:total,0:3]/dd,data[:total,3:]
+    return data[idxa:idxb,0:3]/dd,data[idxa:idxb,3:]
 
 def read_star_field(file,field=None):
     data = {}
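
A worked sketch of the new index bookkeeping in `_determine_field_size`,
assuming cumulative species counts lspecies = [100, 150, 160] (values
illustrative):

    # idxas = [0, 100, 150]; idxbs = [100, 150, 160]
    _determine_field_size(pf, "stars", ls, ptmax)       # ([F,F,T], 150, 160)
    _determine_field_size(pf, "darkmatter", ls, ptmax)  # ([T,T,F],   0, 150)
    _determine_field_size(pf, "specie1", ls, ptmax)     # ([F,T,F], 100, 150)
    _determine_field_size(pf, "all", ls, ptmax)         # ([T,T,T],   0, 160)

read_particles then loads only rows idxa:idxb, so a "stars" request
touches just the last 10 particles instead of all 160.  One caveat in
the creation-time branch: `tr.get(field, np.zeros(npa, 'f8'))[-nstars:]
= data` updates the cached array in place only when `field` is already
present in `tr`; otherwise the freshly built array is discarded.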


https://bitbucket.org/yt_analysis/yt/commits/eeb18b957e98/
Changeset:   eeb18b957e98
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:27:17
Summary:     autopep8 on data_structures.py
Affected #:  1 file

diff -r f924ee4cda9757b79d83018a81a67fc25a58a161 -r eeb18b957e98db136c97263cc296ad58fe25c90d yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -37,7 +37,7 @@
 from yt.geometry.geometry_handler import \
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
-      StaticOutput
+    StaticOutput
 from yt.geometry.oct_container import \
     ARTOctreeContainer
 from yt.data_objects.field_info_container import \
@@ -74,8 +74,10 @@
     FieldInfoContainer, NullFunc
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
+
+
 class ARTGeometryHandler(OctreeGeometryHandler):
-    def __init__(self,pf,data_style="art"):
+    def __init__(self, pf, data_style="art"):
         self.fluid_field_list = fluid_fields
         self.data_style = data_style
         self.parameter_file = weakref.proxy(pf)
@@ -83,7 +85,7 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
         self.max_level = pf.max_level
         self.float_type = np.float64
-        super(ARTGeometryHandler,self).__init__(pf,data_style)
+        super(ARTGeometryHandler, self).__init__(pf, data_style)
 
     def _initialize_oct_handler(self):
         """
@@ -91,12 +93,12 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,l+1,nv,l)
+        self.domains = [ARTDomainFile(self.parameter_file, l+1, nv, l)
                         for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
         self.oct_handler = ARTOctreeContainer(
-            self.parameter_file.domain_dimensions/2, #dd is # of root cells
+            self.parameter_file.domain_dimensions/2,  # dd is # of root cells
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         mylog.debug("Allocating %s octs", self.total_octs)
@@ -106,16 +108,16 @@
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
-        self.field_list = set(fluid_fields + particle_fields + 
+        self.field_list = set(fluid_fields + particle_fields +
                               particle_star_fields)
         self.field_list = list(self.field_list)
-        #now generate all of the possible particle fields
+        # now generate all of the possible particle fields
         wspecies = self.parameter_file.parameters['wspecies']
-        nspecies  = len(wspecies)
-        self.parameter_file.particle_types = ["all","darkmatter","stars"]
+        nspecies = len(wspecies)
+        self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
         for specie in range(nspecies):
-            self.parameter_file.particle_types.append("specie%i"%specie)
-    
+            self.parameter_file.particle_types.append("specie%i" % specie)
+
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
         super(ARTGeometryHandler, self)._setup_classes(dd)
@@ -124,22 +126,23 @@
     def _identify_base_chunk(self, dobj):
         """
         Take the passed in data source dobj, and use its embedded selector
-        to calculate the domain mask, build the reduced domain 
+        to calculate the domain mask, build the reduced domain
         subsets and oct counts. Attach this information to dobj.
         """
         if getattr(dobj, "_chunk_info", None) is None:
-            #Get all octs within this oct handler
+            # Get all octs within this oct handler
             mask = dobj.selector.select_octs(self.oct_handler)
-            if mask.sum()==0:
+            if mask.sum() == 0:
                 mylog.debug("Warning: selected zero octs")
             counts = self.oct_handler.count_cells(dobj.selector, mask)
-            #For all domains, figure out how many counts we have 
-            #and build a subset=mask of domains 
+            # For all domains, figure out how many counts we have
+            # and build a subset=mask of domains
             subsets = []
-            for d,c in zip(self.domains,counts):
+            for d, c in zip(self.domains, counts):
                 nocts = d.level_count[d.domain_level]
-                if c<1: continue
-                subsets += ARTDomainSubset(d,mask,c,d.domain_level),
+                if c < 1:
+                    continue
+                subsets += ARTDomainSubset(d, mask, c, d.domain_level),
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -147,8 +150,8 @@
 
     def _chunk_all(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
-        #We pass the chunk both the current chunk and list of chunks,
-        #as well as the referring data source
+        # We pass the chunk both the current chunk and list of chunks,
+        # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
     def _chunk_spatial(self, dobj, ngz):
@@ -157,7 +160,7 @@
     def _chunk_io(self, dobj):
         """
         Since subsets are calculated per domain,
-        i.e. per file, yield each domain at a time to 
+        i.e. per file, yield one domain at a time to
         organize by IO. We will eventually chunk out NMSU ART
         to be level-by-level.
         """
@@ -165,17 +168,18 @@
         for subset in oobjs:
             yield YTDataChunk(dobj, "io", [subset], subset.cell_count)
 
+
 class ARTStaticOutput(StaticOutput):
     _hierarchy_class = ARTGeometryHandler
     _fieldinfo_fallback = ARTFieldInfo
     _fieldinfo_known = KnownARTFields
 
-    def __init__(self,filename,data_style='art',
-                 fields = None, storage_filename = None,
-                 skip_particles=False,skip_stars=False,
-                 limit_level=None,spread_age=True,
-                 force_max_level=None,file_particle_header=None,
-                 file_particle_data=None,file_particle_stars=None):
+    def __init__(self, filename, data_style='art',
+                 fields=None, storage_filename=None,
+                 skip_particles=False, skip_stars=False,
+                 limit_level=None, spread_age=True,
+                 force_max_level=None, file_particle_header=None,
+                 file_particle_data=None, file_particle_stars=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
@@ -192,56 +196,56 @@
         self.max_level = limit_level
         self.force_max_level = force_max_level
         self.spread_age = spread_age
-        self.domain_left_edge = np.zeros(3,dtype='float')
-        self.domain_right_edge = np.zeros(3,dtype='float')+1.0
-        StaticOutput.__init__(self,filename,data_style)
+        self.domain_left_edge = np.zeros(3, dtype='float')
+        self.domain_right_edge = np.zeros(3, dtype='float')+1.0
+        StaticOutput.__init__(self, filename, data_style)
         self.storage_filename = storage_filename
 
-    def _find_files(self,file_amr):
+    def _find_files(self, file_amr):
         """
         Given the AMR base filename, attempt to find the
         particle header, star files, etc.
         """
-        prefix,suffix = filename_pattern['amr'].split('%s')
-        affix = os.path.basename(file_amr).replace(prefix,'')
-        affix = affix.replace(suffix,'')
-        affix = affix.replace('_','')
+        prefix, suffix = filename_pattern['amr'].split('%s')
+        affix = os.path.basename(file_amr).replace(prefix, '')
+        affix = affix.replace(suffix, '')
+        affix = affix.replace('_', '')
         full_affix = affix
         affix = affix[1:-1]
         dirname = os.path.dirname(file_amr)
-        for fp in (filename_pattern_hf,filename_pattern):
+        for fp in (filename_pattern_hf, filename_pattern):
             for filetype, pattern in fp.items():
-                #if this attribute is already set skip it
-                if getattr(self,"file_"+filetype,None) is not None:
+                # if this attribute is already set skip it
+                if getattr(self, "file_"+filetype, None) is not None:
                     continue
-                #sometimes the affix is surrounded by an extraneous _
-                #so check for an extra character on either side
-                check_filename = dirname+'/'+pattern%('?%s?'%affix)
+                # sometimes the affix is surrounded by an extraneous _
+                # so check for an extra character on either side
+                check_filename = dirname+'/'+pattern % ('?%s?' % affix)
                 filenames = glob.glob(check_filename)
-                if len(filenames)>1:
+                if len(filenames) > 1:
                     check_filename_strict = \
-                            dirname+'/'+pattern%('?%s'%full_affix[1:])
+                        dirname+'/'+pattern % ('?%s' % full_affix[1:])
                     filenames = glob.glob(check_filename_strict)
-                
-                if len(filenames)==1:
-                    setattr(self,"file_"+filetype,filenames[0])
-                    mylog.info('discovered %s:%s',filetype,filenames[0])
-                elif len(filenames)>1:
-                    setattr(self,"file_"+filetype,None)
+
+                if len(filenames) == 1:
+                    setattr(self, "file_"+filetype, filenames[0])
+                    mylog.info('discovered %s:%s', filetype, filenames[0])
+                elif len(filenames) > 1:
+                    setattr(self, "file_"+filetype, None)
                     mylog.info("Ambiguous number of files found for %s",
-                            check_filename)
+                               check_filename)
                     for fn in filenames:
                         faffix = float(affix)
                 else:
-                    setattr(self,"file_"+filetype,None)
+                    setattr(self, "file_"+filetype, None)
 
     def __repr__(self):
         return self.file_amr.split('/')[-1]
 
     def _set_units(self):
         """
-        Generates the conversion to various physical units based 
-		on the parameters from the header
+        Generates the conversion to various physical units based
+        on the parameters from the header
         """
         self.units = {}
         self.time_units = {}
@@ -249,9 +253,9 @@
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0
 
-        #spatial units
-        z   = self.current_redshift
-        h   = self.hubble_constant
+        # spatial units
+        z = self.current_redshift
+        h = self.hubble_constant
         boxcm_cal = self.parameters["boxh"]
         boxcm_uncal = boxcm_cal / h
         box_proper = boxcm_uncal/(1+z)
@@ -262,54 +266,54 @@
             self.units[unit+'cm'] = mpc_conversion[unit] * boxcm_uncal
             self.units[unit+'hcm'] = mpc_conversion[unit] * boxcm_cal
 
-        #all other units
+        # all other units
         wmu = self.parameters["wmu"]
         Om0 = self.parameters['Om0']
-        ng  = self.parameters['ng']
+        ng = self.parameters['ng']
         wmu = self.parameters["wmu"]
-        boxh   = self.parameters['boxh'] 
-        aexpn  = self.parameters["aexpn"]
+        boxh = self.parameters['boxh']
+        aexpn = self.parameters["aexpn"]
         hubble = self.parameters['hubble']
 
         cf = defaultdict(lambda: 1.0)
         r0 = boxh/ng
-        P0= 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
-        T_0 = 3.03e5 * r0**2.0 * wmu * Om0 # [K]
+        P0 = 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
+        T_0 = 3.03e5 * r0**2.0 * wmu * Om0  # [K]
         S_0 = 52.077 * wmu**(5.0/3.0)
         S_0 *= hubble**(-4.0/3.0)*Om0**(1.0/3.0)*r0**2.0
-        #v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
+        # v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
         v0 = 50.0*r0*np.sqrt(Om0)
         t0 = r0/v0
         rho1 = 1.8791e-29 * hubble**2.0 * self.omega_matter
         rho0 = 2.776e11 * hubble**2.0 * Om0
-        tr = 2./3. *(3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))     
+        tr = 2./3. * (3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))
         aM0 = rho0 * (boxh/hubble)**3.0 / ng**3.0
-        cf['r0']=r0
-        cf['P0']=P0
-        cf['T_0']=T_0
-        cf['S_0']=S_0
-        cf['v0']=v0
-        cf['t0']=t0
-        cf['rho0']=rho0
-        cf['rho1']=rho1
-        cf['tr']=tr
-        cf['aM0']=aM0
+        cf['r0'] = r0
+        cf['P0'] = P0
+        cf['T_0'] = T_0
+        cf['S_0'] = S_0
+        cf['v0'] = v0
+        cf['t0'] = t0
+        cf['rho0'] = rho0
+        cf['rho1'] = rho1
+        cf['tr'] = tr
+        cf['aM0'] = aM0
 
-        #factors to multiply the native code units to CGS
-        cf['Pressure'] = P0 #already cgs
-        cf['Velocity'] = v0/aexpn*1.0e5 #proper cm/s
+        # factors to multiply the native code units to CGS
+        cf['Pressure'] = P0  # already cgs
+        cf['Velocity'] = v0/aexpn*1.0e5  # proper cm/s
         cf["Mass"] = aM0 * 1.98892e33
         cf["Density"] = rho1*(aexpn**-3.0)
         cf["GasEnergy"] = rho0*v0**2*(aexpn**-5.0)
         cf["Potential"] = 1.0
         cf["Entropy"] = S_0
         cf["Temperature"] = tr
-        cf["Time"] = 1.0 
+        cf["Time"] = 1.0
         cf["particle_mass"] = cf['Mass']
         cf["particle_mass_initial"] = cf['Mass']
         self.cosmological_simulation = True
         self.conversion_factors = cf
-        
+
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
         for pt in particle_fields:
@@ -331,70 +335,74 @@
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         self.parameters.update(constants)
         self.parameters['Time'] = 1.0
-        #read the amr header
-        with open(self.file_amr,'rb') as f:
-            amr_header_vals = read_attrs(f,amr_header_struct,'>')
-            for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                skipped=skip(f,endian='>')
-            (self.ncell) = read_vector(f,'i','>')[0]
+        # read the amr header
+        with open(self.file_amr, 'rb') as f:
+            amr_header_vals = read_attrs(f, amr_header_struct, '>')
+            for to_skip in ['tl', 'dtl', 'tlold', 'dtlold', 'iSO']:
+                skipped = skip(f, endian='>')
+            (self.ncell) = read_vector(f, 'i', '>')[0]
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
             # This is not the same as the number of Octs.
-            #domain dimensions is the number of root *cells*
+            # domain dimensions is the number of root *cells*
             self.domain_dimensions = np.ones(3, dtype='int64')*est
             self.root_grid_mask_offset = f.tell()
             self.root_nocts = self.domain_dimensions.prod()/8
             self.root_ncells = self.root_nocts*8
-            mylog.debug("Estimating %i cells on a root grid side,"+ \
-                        "%i root octs",est,self.root_nocts)
-            self.root_iOctCh = read_vector(f,'i','>')[:self.root_ncells]
+            mylog.debug("Estimating %i cells on a root grid side," +
+                        "%i root octs", est, self.root_nocts)
+            self.root_iOctCh = read_vector(f, 'i', '>')[:self.root_ncells]
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
-                 order='F')
+                                                        order='F')
             self.root_grid_offset = f.tell()
-            self.root_nhvar = skip(f,endian='>')
-            self.root_nvar  = skip(f,endian='>')
-            #make sure that the number of root variables is a multiple of rootcells
-            assert self.root_nhvar%self.root_ncells==0
-            assert self.root_nvar%self.root_ncells==0
-            self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
-                                    self.root_ncells)
-            self.iOctFree, self.nOct = read_vector(f,'i','>')
+            self.root_nhvar = skip(f, endian='>')
+            self.root_nvar = skip(f, endian='>')
+            # make sure that the number of root variables is a multiple of
+            # rootcells
+            assert self.root_nhvar % self.root_ncells == 0
+            assert self.root_nvar % self.root_ncells == 0
+            self.nhydro_variables = ((self.root_nhvar+self.root_nvar) /
+                                     self.root_ncells)
+            self.iOctFree, self.nOct = read_vector(f, 'i', '>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
-            #estimate the root level
-            float_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
-                [0,self.child_grid_offset],1,
+            # estimate the root level
+            float_center, fl, iocts, nocts, root_level = _read_art_level_info(
+                f,
+                [0, self.child_grid_offset], 1,
                 coarse_grid=self.domain_dimensions[0])
             del float_center, fl, iocts, nocts
             self.root_level = root_level
-            mylog.info("Using root level of %02i",self.root_level)
-        #read the particle header
+            mylog.info("Using root level of %02i", self.root_level)
+        # read the particle header
         if not self.skip_particles and self.file_particle_header:
-            with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = read_attrs(fh,particle_header_struct,'>')
+            with open(self.file_particle_header, "rb") as fh:
+                particle_header_vals = read_attrs(
+                    fh, particle_header_struct, '>')
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
-                wspecies = np.fromfile(fh,dtype='>f',count=10)
-                lspecies = np.fromfile(fh,dtype='>i',count=10)
+                wspecies = np.fromfile(fh, dtype='>f', count=10)
+                lspecies = np.fromfile(fh, dtype='>i', count=10)
             self.parameters['wspecies'] = wspecies[:n]
             self.parameters['lspecies'] = lspecies[:n]
             ls_nonzero = np.diff(lspecies)[:n-1]
             self.star_type = len(ls_nonzero)
-            mylog.info("Discovered %i species of particles",len(ls_nonzero))
+            mylog.info("Discovered %i species of particles", len(ls_nonzero))
             mylog.info("Particle populations: "+'%1.1e '*len(ls_nonzero),
-                *ls_nonzero)
-            for k,v in particle_header_vals.items():
+                       *ls_nonzero)
+            for k, v in particle_header_vals.items():
                 if k in self.parameters.keys():
                     if not self.parameters[k] == v:
-                        mylog.info("Inconsistent parameter %s %1.1e  %1.1e",k,v,
-                                   self.parameters[k])
+                        mylog.info(
+                            "Inconsistent parameter %s %1.1e  %1.1e", k, v,
+                            self.parameters[k])
                 else:
-                    self.parameters[k]=v
+                    self.parameters[k] = v
             self.parameters_particles = particle_header_vals
-    
-        #setup standard simulation params yt expects to see
+
+        # setup standard simulation params yt expects to see
         self.current_redshift = self.parameters["aexpn"]**-1.0 - 1.0
         self.omega_lambda = amr_header_vals['Oml0']
         self.omega_matter = amr_header_vals['Om0']
@@ -402,12 +410,13 @@
         self.min_level = amr_header_vals['min_level']
         self.max_level = amr_header_vals['max_level']
         if self.limit_level is not None:
-            self.max_level = min(self.limit_level,amr_header_vals['max_level'])
+            self.max_level = min(
+                self.limit_level, amr_header_vals['max_level'])
         if self.force_max_level is not None:
             self.max_level = self.force_max_level
-        self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
+        self.hubble_time = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
-        mylog.info("Max level is %02i",self.max_level)
+        mylog.info("Max level is %02i", self.max_level)
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -417,16 +426,17 @@
         """
         f = ("%s" % args[0])
         prefix, suffix = filename_pattern['amr'].split('%s')
-        with open(f,'rb') as fh:
+        with open(f, 'rb') as fh:
             try:
-                amr_header_vals = read_attrs(fh,amr_header_struct,'>')
+                amr_header_vals = read_attrs(fh, amr_header_struct, '>')
                 return True
             except:
                 return False
         return False
 
+
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count,domain_level):
+    def __init__(self, domain, mask, cell_count, domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
@@ -459,7 +469,7 @@
         widths = np.empty((self.cell_count, 3), dtype="float64")
         dds = (2**self.select_ires(dobj))
         for i in range(3):
-            widths[:,i] = base_dx[i] / dds
+            widths[:, i] = base_dx[i] / dds
         return widths
 
     def fill(self, content, fields):
@@ -471,9 +481,9 @@
         the order they are in in the octhandler.
         """
         oct_handler = self.oct_handler
-        all_fields  = self.domain.pf.h.fluid_field_list
+        all_fields = self.domain.pf.h.fluid_field_list
         fields = [f for ft, f in fields]
-        dest= {}
+        dest = {}
         filled = pos = level_offset = 0
         field_idxs = [all_fields.index(f) for f in fields]
         for field in fields:
@@ -481,44 +491,47 @@
         level = self.domain_level
         offset = self.domain.level_offsets
         no = self.domain.level_count[level]
-        if level==0:
-            source= {}
-            data = _read_root_level(content,self.domain.level_child_offsets,
-                                   self.domain.level_count)
-            for field,i in zip(fields,field_idxs):
-                temp = np.reshape(data[i,:],self.domain.pf.domain_dimensions,
+        if level == 0:
+            source = {}
+            data = _read_root_level(content, self.domain.level_child_offsets,
+                                    self.domain.level_count)
+            for field, i in zip(fields, field_idxs):
+                temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
                                   order='F').astype('float64').T
                 source[field] = temp
-            level_offset += oct_handler.fill_level_from_grid(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+            level_offset += oct_handler.fill_level_from_grid(
+                self.domain.domain_id,
+                level, dest, source, self.mask, level_offset)
         else:
-            def subchunk(count,size):
-                for i in range(0,count,size):
-                    yield i,i+min(size,count-i)
-            for noct_range in subchunk(no,long(1e8)):
-                source = _read_child_level(content,self.domain.level_child_offsets,
-                                         self.domain.level_offsets,
-                                         self.domain.level_count,level,fields,
-                                         self.domain.pf.domain_dimensions,
-                                         self.domain.pf.parameters['ncell0'],
-                                         noct_range = noct_range)
+            def subchunk(count, size):
+                for i in range(0, count, size):
+                    yield i, i+min(size, count-i)
+            for noct_range in subchunk(no, long(1e8)):
+                source = _read_child_level(
+                    content, self.domain.level_child_offsets,
+                    self.domain.level_offsets,
+                    self.domain.level_count, level, fields,
+                    self.domain.pf.domain_dimensions,
+                    self.domain.pf.parameters['ncell0'],
+                    noct_range=noct_range)
                 nocts_filling = noct_range[1]-noct_range[0]
-                level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                    level, dest, source, self.mask, level_offset,
-                                    noct_range[0],nocts_filling)
+                level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                                       level, dest, source, self.mask, level_offset,
+                                                       noct_range[0], nocts_filling)
         return dest
 
+
 class ARTDomainFile(object):
     """
     Read in the AMR, left/right edges, fill out the octhandler
     """
-    #We already read in the header in static output,
-    #and since these headers are defined in only a single file it's
-    #best to leave them in the static output
+    # We already read in the header in static output,
+    # and since these headers are defined in only a single file it's
+    # best to leave them in the static output
     _last_mask = None
     _last_selector_id = None
 
-    def __init__(self,pf,domain_id,nvar,level):
+    def __init__(self, pf, domain_id, nvar, level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
@@ -527,56 +540,57 @@
         self._level_oct_offsets = None
         self._level_child_offsets = None
 
-
     @property
     def level_count(self):
-        #this is number of *octs*
-        if self._level_count is not None: return self._level_count
+        # this is number of *octs*
+        if self._level_count is not None:
+            return self._level_count
         self.level_offsets
         return self._level_count[self.domain_level]
 
     @property
     def level_child_offsets(self):
-        if self._level_count is not None: return self._level_child_offsets
+        if self._level_count is not None:
+            return self._level_child_offsets
         self.level_offsets
         return self._level_child_offsets
 
     @property
-    def level_offsets(self): 
-        #this is used by the IO operations to find the file offset,
-        #and then start reading to fill values
-        #note that this is called hydro_offset in ramses
-        if self._level_oct_offsets is not None: 
+    def level_offsets(self):
+        # this is used by the IO operations to find the file offset,
+        # and then start reading to fill values
+        # note that this is called hydro_offset in ramses
+        if self._level_oct_offsets is not None:
             return self._level_oct_offsets
         # We now have to open the file and calculate it
         f = open(self.pf.file_amr, "rb")
         nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
             _count_art_octs(f,  self.pf.child_grid_offset, self.pf.min_level,
                             self.pf.max_level)
-        #remember that the root grid is by itself; manually add it back in
+        # remember that the root grid is by itself; manually add it back in
         inoll[0] = self.pf.domain_dimensions.prod()/8
         _level_child_offsets[0] = self.pf.root_grid_offset
         self.nhydrovars = nhydrovars
-        self.inoll = inoll #number of octs
+        self.inoll = inoll  # number of octs
         self._level_oct_offsets = _level_oct_offsets
         self._level_child_offsets = _level_child_offsets
         self._level_count = inoll
         return self._level_oct_offsets
-    
+
     def _read_amr(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
-           For each oct, only the position, index, level and domain 
+           For each oct, only the position, index, level and domain
            are needed - its position in the octree is found automatically.
            The most important part is finding all the information to feed
            oct_handler.add
         """
-        #on the root level we typically have 64^3 octs
-        #giving rise to 128^3 cells
-        #but on level 1 instead of 128^3 octs, we have 256^3 octs
-        #leave this code here instead of static output - it's memory intensive
+        # on the root level we typically have 64^3 octs
+        # giving rise to 128^3 cells
+        # but on level 1 instead of 128^3 octs, we have 256^3 octs
+        # leave this code here instead of static output - it's memory intensive
         self.level_offsets
         f = open(self.pf.file_amr, "rb")
-        #add the root *cell* not *oct* mesh
+        # add the root *cell* not *oct* mesh
         level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
@@ -587,32 +601,33 @@
             root_dx = (RE - LE) / NX
             LL = LE + root_dx/2.0
             RL = RE - root_dx/2.0
-            #compute floating point centers of root octs
-            root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                              LL[1]:RL[1]:NX[1]*1j,
-                              LL[2]:RL[2]:NX[2]*1j ]
-            root_fc= np.vstack([p.ravel() for p in root_fc]).T
-            nocts_check = oct_handler.add(self.domain_id, level, 
+            # compute floating point centers of root octs
+            root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                               LL[1]:RL[1]:NX[1]*1j,
+                               LL[2]:RL[2]:NX[2]*1j]
+            root_fc = np.vstack([p.ravel() for p in root_fc]).T
+            nocts_check = oct_handler.add(self.domain_id, level,
                                           root_octs_side**3,
                                           root_fc, self.domain_id)
             assert(oct_handler.nocts == root_fc.shape[0])
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        root_octs_side**3, 0,oct_handler.nocts)
+                        root_octs_side**3, 0, oct_handler.nocts)
         else:
-            unitary_center, fl, iocts, nocts,root_level = _read_art_level_info(f,
-                self._level_oct_offsets,level,
+            unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
+                f,
+                self._level_oct_offsets, level,
                 coarse_grid=self.pf.domain_dimensions[0],
                 root_level=self.pf.root_level)
-            #at least one of the indices should be odd
-            #assert np.sum(left_index[:,0]%2==1)>0
-            #float_left_edge = left_index.astype("float64") / octs_side
-            #float_center = float_left_edge + 0.5*1.0/octs_side
-            #all floatin unitary positions should fit inside the domain
-            nocts_check = oct_handler.add(self.domain_id,level, nocts, 
+            # at least one of the indices should be odd
+            # assert np.sum(left_index[:,0]%2==1)>0
+            # float_left_edge = left_index.astype("float64") / octs_side
+            # float_center = float_left_edge + 0.5*1.0/octs_side
+            # all floating point unitary positions should fit inside the domain
+            nocts_check = oct_handler.add(self.domain_id, level, nocts,
                                           unitary_center, self.domain_id)
             assert(nocts_check == nocts)
             mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level,oct_handler.nocts)
+                        nocts, level, oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:
@@ -623,8 +638,8 @@
 
     def count(self, selector):
         if id(selector) == self._last_selector_id:
-            if self._last_mask is None: return 0
+            if self._last_mask is None:
+                return 0
             return self._last_mask.sum()
         self.select(selector)
         return self.count(selector)
-
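
The header parsing above infers the root grid size from the total cell
count; a compact sketch of that estimate (numbers illustrative):

    import numpy as np
    ncell = 128**3                            # hypothetical cell count
    est = int(np.rint(ncell ** (1.0 / 3.0)))  # cells per side -> 128
    root_nocts = est**3 // 8                  # 8 cells per oct -> 262144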


https://bitbucket.org/yt_analysis/yt/commits/b7263dd2e2f9/
Changeset:   b7263dd2e2f9
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:27:33
Summary:     autopep8 on io
Affected #:  1 file

diff -r eeb18b957e98db136c97263cc296ad58fe25c90d -r b7263dd2e2f94c91aa65bbdf2f4497c9983fa4c1 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -36,9 +36,10 @@
 from yt.frontends.art.definitions import *
 from yt.utilities.physical_constants import sec_per_year
 
+
 class IOHandlerART(BaseIOHandler):
     _data_style = "art"
-    tb,ages= None,None
+    tb, ages = None, None
 
     def _read_fluid_selection(self, chunks, selector, fields, size):
         # Chunks in this case will have affiliated domain subset objects
@@ -55,19 +56,19 @@
                 rv = subset.fill(f, fields)
                 for ft, f in fields:
                     mylog.debug("Filling %s with %s (%0.3e %0.3e) (%s:%s)",
-                        f, subset.cell_count, rv[f].min(), rv[f].max(),
-                        cp, cp+subset.cell_count)
+                                f, subset.cell_count, rv[f].min(), rv[f].max(),
+                                cp, cp+subset.cell_count)
                     tr[(ft, f)][cp:cp+subset.cell_count] = rv.pop(f)
                 cp += subset.cell_count
         return tr
 
     def _read_particle_selection(self, chunks, selector, fields):
-        #ignore chunking; we have no particle chunk system
+        # ignore chunking; we have no particle chunk system
         masks = {}
         pf = (chunks.next()).objs[0].domain.pf
-        ws,ls = pf.parameters["wspecies"],pf.parameters["lspecies"]
-        sizes = np.diff(np.concatenate(([0],ls)))
-        ptmax= ws[-1]
+        ws, ls = pf.parameters["wspecies"], pf.parameters["lspecies"]
+        sizes = np.diff(np.concatenate(([0], ls)))
+        ptmax = ws[-1]
         npt = ls[-1]
         nstars = ls[-1]-ls[-2]
         file_particle = pf.file_particle_data
@@ -75,405 +76,422 @@
         tr = {}
         ftype_old = None
         for field in fields:
-            ftype,fname = field
-            pbool, idxa,idxb= _determine_field_size(pf,ftype,ls,ptmax)
+            ftype, fname = field
+            pbool, idxa, idxb = _determine_field_size(pf, ftype, ls, ptmax)
             npa = idxb-idxa
             if not ftype_old == ftype:
-                pos,vel = read_particles(file_particle,pf.parameters['Nrow'],
-                                         dd=pf.domain_dimensions,
-                                         idxa=idxa,idxb=idxb)
-                pos,vel = pos.astype('float64'), vel.astype('float64')
+                pos, vel = read_particles(file_particle, pf.parameters['Nrow'],
+                                          dd=pf.domain_dimensions,
+                                          idxa=idxa, idxb=idxb)
+                pos, vel = pos.astype('float64'), vel.astype('float64')
                 pos -= 1.0/pf.domain_dimensions[0]
-                mask = selector.select_points(pos[:,0],pos[:,1],pos[:,2])
+                mask = selector.select_points(pos[:, 0], pos[:, 1], pos[:, 2])
                 size = mask.sum()
-            for i,ax in enumerate('xyz'):
-                if fname.startswith("particle_position_%s"%ax):
-                    tr[field]=pos[:,i]
-                if fname.startswith("particle_velocity_%s"%ax):
-                    tr[field]=vel[:,i]
+            for i, ax in enumerate('xyz'):
+                if fname.startswith("particle_position_%s" % ax):
+                    tr[field] = pos[:, i]
+                if fname.startswith("particle_velocity_%s" % ax):
+                    tr[field] = vel[:, i]
             if fname == "particle_mass":
-                a=0
-                data = np.zeros(npa,dtype='f8')
-                for ptb,size,m in zip(pbool,sizes,ws):
+                a = 0
+                data = np.zeros(npa, dtype='f8')
+                for ptb, size, m in zip(pbool, sizes, ws):
                     if ptb:
-                        data[a:a+size]=m
-                        a+=size
+                        data[a:a+size] = m
+                        a += size
                 tr[field] = data
             elif fname == "particle_index":
-                tr[field]=np.arange(idxa,idxb).astype('int64')
+                tr[field] = np.arange(idxa, idxb).astype('int64')
             elif fname == "particle_type":
-                a=0
-                data = np.zeros(npa,dtype='int')
-                for i,(ptb,size) in enumerate(zip(pbool,sizes)):
+                a = 0
+                data = np.zeros(npa, dtype='int')
+                for i, (ptb, size) in enumerate(zip(pbool, sizes)):
                     if ptb:
-                        data[a:a+size]=i
-                        a+=size
+                        data[a:a+size] = i
+                        a += size
                 tr[field] = data
             if pbool[-1] and fname in particle_star_fields:
-                data = read_star_field(file_stars,field=fname)
-                temp = tr.get(field,np.zeros(npa,'f8'))
+                data = read_star_field(file_stars, field=fname)
+                temp = tr.get(field, np.zeros(npa, 'f8'))
                 temp[-nstars:] = data
                 tr[field] = temp
             if fname == "particle_creation_time":
-                data = tr.get(field,np.zeros(npa,'f8'))
-                self.tb,self.ages,data = interpolate_ages(tr[field][-nstars:],
-                                                          file_stars,
-                                                          self.tb,
-                                                          self.ages,
-                                                          pf.current_time)
-                tr.get(field,np.zeros(npa,'f8'))[-nstars:] = data
+                data = tr.get(field, np.zeros(npa, 'f8'))
+                self.tb, self.ages, data = interpolate_ages(
+                    tr[field][-nstars:],
+                    file_stars,
+                    self.tb,
+                    self.ages,
+                    pf.current_time)
+                tr.get(field, np.zeros(npa, 'f8'))[-nstars:] = data
                 del data
-            tr[field]=tr[field][mask]
+            tr[field] = tr[field][mask]
             ftype_old = ftype
         return tr
 
-def _determine_field_size(pf,field,lspecies,ptmax):
-    pbool = np.zeros(len(lspecies),dtype="bool")
-    idxas = np.concatenate(([0,],lspecies[:-1]))
+
+def _determine_field_size(pf, field, lspecies, ptmax):
+    pbool = np.zeros(len(lspecies), dtype="bool")
+    idxas = np.concatenate(([0, ], lspecies[:-1]))
     idxbs = lspecies
     if "specie" in field:
-        index = int(field.replace("specie",""))
+        index = int(field.replace("specie", ""))
         pbool[index] = True
     elif field == "stars":
         pbool[-1] = True
     elif field == "darkmatter":
-        pbool[0:-1]=True
+        pbool[0:-1] = True
     else:
-        pbool[:]=True
-    idxa,idxb = idxas[pbool][0],idxbs[pbool][-1]
-    return pbool,idxa,idxb
+        pbool[:] = True
+    idxa, idxb = idxas[pbool][0], idxbs[pbool][-1]
+    return pbool, idxa, idxb
 
-def interpolate_ages(data,file_stars,interp_tb=None,interp_ages=None,
-                    current_time=None):
+
+def interpolate_ages(data, file_stars, interp_tb=None, interp_ages=None,
+                     current_time=None):
     if interp_tb is None:
-        tdum,adum = read_star_field(file_stars,
-                                              field="tdum")
-        #timestamp of file should match amr timestamp
+        tdum, adum = read_star_field(file_stars,
+                                     field="tdum")
+        # timestamp of file should match amr timestamp
         if current_time:
             tdiff = b2t(tdum)-current_time/(sec_per_year*1e9)
             if np.abs(tdiff) > 1e-4:
-                mylog.info("Timestamp mismatch in star "+
-                             "particle header")
+                mylog.info("Timestamp mismatch in star " +
+                           "particle header")
         mylog.info("Interpolating ages")
-        interp_tb,interp_ages = b2t(data)
-    temp = np.interp(data,interp_tb,interp_ages)
+        interp_tb, interp_ages = b2t(data)
+    temp = np.interp(data, interp_tb, interp_ages)
     temp *= 1.0e9*sec_per_year
-    return interp_tb,interp_ages, temp
+    return interp_tb, interp_ages, temp
 
-def _count_art_octs(f, offset, 
-                   MinLev, MaxLevelNow):
-    level_oct_offsets= [0,]
-    level_child_offsets= [0,]
+
+def _count_art_octs(f, offset,
+                    MinLev, MaxLevelNow):
+    level_oct_offsets = [0, ]
+    level_child_offsets = [0, ]
     f.seek(offset)
-    nchild,ntot=8,0
+    nchild, ntot = 8, 0
     Level = np.zeros(MaxLevelNow+1 - MinLev, dtype='int64')
     iNOLL = np.zeros(MaxLevelNow+1 - MinLev, dtype='int64')
     iHOLL = np.zeros(MaxLevelNow+1 - MinLev, dtype='int64')
     for Lev in xrange(MinLev + 1, MaxLevelNow+1):
         level_oct_offsets.append(f.tell())
 
-        #Get the info for this level, skip the rest
-        #print "Reading oct tree data for level", Lev
-        #print 'offset:',f.tell()
-        Level[Lev], iNOLL[Lev], iHOLL[Lev] = read_vector(f,'i','>')
-        #print 'Level %i : '%Lev, iNOLL
-        #print 'offset after level record:',f.tell()
+        # Get the info for this level, skip the rest
+        # print "Reading oct tree data for level", Lev
+        # print 'offset:',f.tell()
+        Level[Lev], iNOLL[Lev], iHOLL[Lev] = read_vector(f, 'i', '>')
+        # print 'Level %i : '%Lev, iNOLL
+        # print 'offset after level record:',f.tell()
         iOct = iHOLL[Lev] - 1
         nLevel = iNOLL[Lev]
         nLevCells = nLevel * nchild
         ntot = ntot + nLevel
 
-        #Skip all the oct hierarchy data
-        ns = peek_record_size(f,endian='>')
+        # Skip all the oct hierarchy data
+        ns = peek_record_size(f, endian='>')
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel)
 
         level_child_offsets.append(f.tell())
-        #Skip the child vars data
-        ns = peek_record_size(f,endian='>')
+        # Skip the child vars data
+        ns = peek_record_size(f, endian='>')
         size = struct.calcsize('>i') + ns + struct.calcsize('>i')
         f.seek(f.tell()+size * nLevel*nchild)
 
-        #find nhydrovars
+        # find nhydrovars
         nhydrovars = 8+2
     f.seek(offset)
     return nhydrovars, iNOLL, level_oct_offsets, level_child_offsets
 
-def _read_art_level_info(f, level_oct_offsets,level,coarse_grid=128,
-                         ncell0=None,root_level=None):
+
+def _read_art_level_info(f, level_oct_offsets, level, coarse_grid=128,
+                         ncell0=None, root_level=None):
     pos = f.tell()
     f.seek(level_oct_offsets[level])
-    #Get the info for this level, skip the rest
-    junk, nLevel, iOct = read_vector(f,'i','>')
-    
-    #fortran indices start at 1
-    
-    #Skip all the oct hierarchy data
-    le     = np.zeros((nLevel,3),dtype='int64')
-    fl     = np.ones((nLevel,6),dtype='int64')
-    iocts  = np.zeros(nLevel+1,dtype='int64')
-    idxa,idxb = 0,0
-    chunk = long(1e6) #this is ~111MB for 15 dimensional 64 bit arrays
+    # Get the info for this level, skip the rest
+    junk, nLevel, iOct = read_vector(f, 'i', '>')
+
+    # fortran indices start at 1
+
+    # Skip all the oct hierarchy data
+    le = np.zeros((nLevel, 3), dtype='int64')
+    fl = np.ones((nLevel, 6), dtype='int64')
+    iocts = np.zeros(nLevel+1, dtype='int64')
+    idxa, idxb = 0, 0
+    chunk = long(1e6)  # this is ~111MB for 15 dimensional 64 bit arrays
     left = nLevel
-    while left > 0 :
-        this_chunk = min(chunk,left)
-        idxb=idxa+this_chunk
-        data = np.fromfile(f,dtype='>i',count=this_chunk*15)
-        data=data.reshape(this_chunk,15)
-        left-=this_chunk
-        le[idxa:idxb,:] = data[:,1:4]
-        fl[idxa:idxb,1] = np.arange(idxa,idxb)
-        #pad byte is last, LL2, then ioct right before it
-        iocts[idxa:idxb] = data[:,-3] 
-        idxa=idxa+this_chunk
+    while left > 0:
+        this_chunk = min(chunk, left)
+        idxb = idxa+this_chunk
+        data = np.fromfile(f, dtype='>i', count=this_chunk*15)
+        data = data.reshape(this_chunk, 15)
+        left -= this_chunk
+        le[idxa:idxb, :] = data[:, 1:4]
+        fl[idxa:idxb, 1] = np.arange(idxa, idxb)
+        # pad byte is last, LL2, then ioct right before it
+        iocts[idxa:idxb] = data[:, -3]
+        idxa = idxa+this_chunk
     del data
-    
-    #emulate fortran code
+
+    # emulate fortran code
     #     do ic1 = 1 , nLevel
     #       read(19) (iOctPs(i,iOct),i=1,3),(iOctNb(i,iOct),i=1,6),
-    #&                iOctPr(iOct), iOctLv(iOct), iOctLL1(iOct), 
+    #&                iOctPr(iOct), iOctLv(iOct), iOctLL1(iOct),
     #&                iOctLL2(iOct)
     #       iOct = iOctLL1(iOct)
-    
-    #ioct always represents the index of the next variable
-    #not the current, so shift forward one index
-    #the last index isn't used
+
+    # ioct always represents the index of the next variable
+    # not the current, so shift forward one index
+    # the last index isn't used
     ioctso = iocts.copy()
-    iocts[1:]=iocts[:-1] #shift
-    iocts = iocts[:nLevel] #chop off the last, unused, index
-    iocts[0]=iOct #starting value
-    
-    #now correct iocts for fortran indices start @ 1
+    iocts[1:] = iocts[:-1]  # shift
+    iocts = iocts[:nLevel]  # chop off the last, unused, index
+    iocts[0] = iOct  # starting value
+
+    # now correct iocts for fortran indices start @ 1
     iocts = iocts-1
-    
+
     assert np.unique(iocts).shape[0] == nLevel
-    
-    #left edges are expressed as if they were on 
-    #level 15, so no matter what level max(le)=2**15 
-    #correct to the yt convention
-    #le = le/2**(root_level-1-level)-1
-    
-    #try to find the root_level first
-    def cfc(root_level,level,le):
-        d_x= 1.0/(2.0**(root_level-level+1))
+
+    # left edges are expressed as if they were on
+    # level 15, so no matter what level max(le)=2**15
+    # correct to the yt convention
+    # le = le/2**(root_level-1-level)-1
+
+    # try to find the root_level first
+    def cfc(root_level, level, le):
+        d_x = 1.0/(2.0**(root_level-level+1))
         fc = (d_x * le) - 2**(level-1)
         return fc
     if root_level is None:
-        root_level=np.floor(np.log2(le.max()*1.0/coarse_grid))
+        root_level = np.floor(np.log2(le.max()*1.0/coarse_grid))
         root_level = root_level.astype('int64')
         for i in range(10):
-            fc = cfc(root_level,level,le)
-            go = np.diff(np.unique(fc)).min()<1.1
-            if go: break
-            root_level+=1
+            fc = cfc(root_level, level, le)
+            go = np.diff(np.unique(fc)).min() < 1.1
+            if go:
+                break
+            root_level += 1
     else:
-        fc = cfc(root_level,level,le)
-    unitary_center = fc/( coarse_grid*2.0**(level-1))
-    assert np.all(unitary_center<1.0)
-    
-    #again emulate the fortran code
-    #This is all for calculating child oct locations
-    #iC_ = iC + nbshift
-    #iO = ishft ( iC_ , - ndim )
-    #id = ishft ( 1, MaxLevel - iOctLv(iO) )   
-    #j  = iC_ + 1 - ishft( iO , ndim )
-    #Posx   = d_x * (iOctPs(1,iO) + sign ( id , idelta(j,1) ))
-    #Posy   = d_x * (iOctPs(2,iO) + sign ( id , idelta(j,2) ))
-    #Posz   = d_x * (iOctPs(3,iO) + sign ( id , idelta(j,3) )) 
-    #idelta = [[-1,  1, -1,  1, -1,  1, -1,  1],
+        fc = cfc(root_level, level, le)
+    unitary_center = fc/(coarse_grid*2.0**(level-1))
+    assert np.all(unitary_center < 1.0)
+
+    # again emulate the fortran code
+    # This is all for calculating child oct locations
+    # iC_ = iC + nbshift
+    # iO = ishft ( iC_ , - ndim )
+    # id = ishft ( 1, MaxLevel - iOctLv(iO) )
+    # j  = iC_ + 1 - ishft( iO , ndim )
+    # Posx   = d_x * (iOctPs(1,iO) + sign ( id , idelta(j,1) ))
+    # Posy   = d_x * (iOctPs(2,iO) + sign ( id , idelta(j,2) ))
+    # Posz   = d_x * (iOctPs(3,iO) + sign ( id , idelta(j,3) ))
+    # idelta = [[-1,  1, -1,  1, -1,  1, -1,  1],
               #[-1, -1,  1,  1, -1, -1,  1,  1],
               #[-1, -1, -1, -1,  1,  1,  1,  1]]
-    #idelta = np.array(idelta)
-    #if ncell0 is None:
-        #ncell0 = coarse_grid**3
-    #nchild = 8
-    #ndim = 3
-    #nshift = nchild -1
-    #nbshift = nshift - ncell0
-    #iC = iocts #+ nbshift
-    #iO = iC >> ndim #possibly >>
-    #id = 1 << (root_level - level)
-    #j = iC + 1 - ( iO << 3)
-    #delta = np.abs(id)*idelta[:,j-1]
+    # idelta = np.array(idelta)
+    # if ncell0 is None:
+        # ncell0 = coarse_grid**3
+    # nchild = 8
+    # ndim = 3
+    # nshift = nchild -1
+    # nbshift = nshift - ncell0
+    # iC = iocts #+ nbshift
+    # iO = iC >> ndim #possibly >>
+    # id = 1 << (root_level - level)
+    # j = iC + 1 - ( iO << 3)
+    # delta = np.abs(id)*idelta[:,j-1]
 
-    
-    #try without the -1
-    #le = le/2**(root_level+1-level)
-    
-    #now read the hvars and vars arrays
-    #we are looking for iOctCh
-    #we record if iOctCh is >0, in which it is subdivided
-    #iOctCh  = np.zeros((nLevel+1,8),dtype='bool')
-    
+    # try without the -1
+    # le = le/2**(root_level+1-level)
+    # now read the hvars and vars arrays
+    # we are looking for iOctCh
+    # we record if iOctCh is >0, in which case it is subdivided
+    # iOctCh  = np.zeros((nLevel+1,8),dtype='bool')
     f.seek(pos)
-    return unitary_center,fl,iocts,nLevel,root_level
+    return unitary_center, fl, iocts, nLevel, root_level
 
 
-def read_particles(file,Nrow,dd=1.0,idxa=None,idxb=None):
-    words = 6 # words (reals) per particle: x,y,z,vx,vy,vz
-    real_size = 4 # for file_particle_data; not always true?
-    np_per_page = Nrow**2 # defined in ART a_setup.h
+def read_particles(file, Nrow, dd=1.0, idxa=None, idxb=None):
+    words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
+    real_size = 4  # for file_particle_data; not always true?
+    np_per_page = Nrow**2  # defined in ART a_setup.h
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
 
-    f = np.fromfile(file, dtype='>f4').astype('float32') # direct access
-    pages = np.vsplit(np.reshape(f, (num_pages, words, np_per_page)), num_pages)
-    data = np.squeeze(np.dstack(pages)).T # x,y,z,vx,vy,vz
-    return data[idxa:idxb,0:3]/dd,data[idxa:idxb,3:]
+    f = np.fromfile(file, dtype='>f4').astype('float32')  # direct access
+    pages = np.vsplit(np.reshape(f, (
+        num_pages, words, np_per_page)), num_pages)
+    data = np.squeeze(np.dstack(pages)).T  # x,y,z,vx,vy,vz
+    return data[idxa:idxb, 0:3]/dd, data[idxa:idxb, 3:]
 
-def read_star_field(file,field=None):
+
+def read_star_field(file, field=None):
     data = {}
-    with open(file,'rb') as fh:
+    with open(file, 'rb') as fh:
         for dtype, variables in star_struct:
-            found = field in variables or field==variables
+            found = field in variables or field == variables
             if found:
-                data[field] = read_vector(fh,dtype[1],dtype[0])
+                data[field] = read_vector(fh, dtype[1], dtype[0])
             else:
-                skip(fh,endian='>')
+                skip(fh, endian='>')
     return data.pop(field)
 
-def _read_child_mask_level(f, level_child_offsets,level,nLevel,nhydro_vars):
+
+def _read_child_mask_level(f, level_child_offsets, level, nLevel, nhydro_vars):
     f.seek(level_child_offsets[level])
-    nvals = nLevel * (nhydro_vars + 6) # 2 vars, 2 pads
-    ioctch = np.zeros(nLevel,dtype='uint8')
-    idc = np.zeros(nLevel,dtype='int32')
-    
+    nvals = nLevel * (nhydro_vars + 6)  # 2 vars, 2 pads
+    ioctch = np.zeros(nLevel, dtype='uint8')
+    idc = np.zeros(nLevel, dtype='int32')
+
     chunk = long(1e6)
     left = nLevel
     width = nhydro_vars+6
-    a,b=0,0
+    a, b = 0, 0
     while left > 0:
-        chunk = min(chunk,left)
+        chunk = min(chunk, left)
         b += chunk
         arr = np.fromfile(f, dtype='>i', count=chunk*width)
         arr = arr.reshape((width, chunk), order="F")
-        assert np.all(arr[0,:]==arr[-1,:]) #pads must be equal
-        idc[a:b]    = arr[1,:]-1 #fix fortran indexing
-        ioctch[a:b] = arr[2,:]==0 #if it is above zero, then refined available
-        #zero in the mask means there is refinement available
-        a=b
+        assert np.all(arr[0, :] == arr[-1, :])  # pads must be equal
+        idc[a:b] = arr[1, :]-1  # fix fortran indexing
+        ioctch[a:b] = arr[
+            2, :] == 0  # if it is above zero, then refinement is available
+        # zero in the mask means there is refinement available
+        a = b
         left -= chunk
-    assert left==0
-    return idc,ioctch
-    
-nchem=8+2
-dtyp = np.dtype(">i4,>i8,>i8"+",>%sf4"%(nchem)+ \
-                ",>%sf4"%(2)+",>i4")
-def _read_child_level(f,level_child_offsets,level_oct_offsets,level_info,level,
-                      fields,domain_dimensions,ncell0,nhydro_vars=10,nchild=8,
-                      noct_range=None):
-    #emulate the fortran code for reading cell data
-    #read ( 19 ) idc, iOctCh(idc), (hvar(i,idc),i=1,nhvar), 
+    assert left == 0
+    return idc, ioctch
+
+nchem = 8+2
+dtyp = np.dtype(">i4,>i8,>i8"+",>%sf4" % (nchem) +
+                ",>%sf4" % (2)+",>i4")
+
+
+def _read_child_level(
+    f, level_child_offsets, level_oct_offsets, level_info, level,
+    fields, domain_dimensions, ncell0, nhydro_vars=10, nchild=8,
+        noct_range=None):
+    # emulate the fortran code for reading cell data
+    # read ( 19 ) idc, iOctCh(idc), (hvar(i,idc),i=1,nhvar),
     #    &                 (var(i,idc), i=2,3)
-    #contiguous 8-cell sections are for the same oct;
-    #ie, we don't write out just the 0 cells, then the 1 cells
-    #optionally, we only read noct_range to save memory
-    left_index, fl, octs, nocts,root_level = _read_art_level_info(f, 
-        level_oct_offsets,level, coarse_grid=domain_dimensions[0])
+    # contiguous 8-cell sections are for the same oct;
+    # ie, we don't write out just the 0 cells, then the 1 cells
+    # optionally, we only read noct_range to save memory
+    left_index, fl, octs, nocts, root_level = _read_art_level_info(f,
+                                                                   level_oct_offsets, level, coarse_grid=domain_dimensions[0])
     if noct_range is None:
         nocts = level_info[level]
         ncells = nocts*8
         f.seek(level_child_offsets[level])
-        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
-        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
-        #idc = np.argsort(arr['idc']) #correct fortran indices
-        #translate idc into icell, and then to iOct
+        arr = np.fromfile(f, dtype=hydro_struct, count=ncells)
+        assert np.all(arr['pad1'] == arr['pad2'])  # pads must be equal
+        # idc = np.argsort(arr['idc']) #correct fortran indices
+        # translate idc into icell, and then to iOct
         icell = (arr['idc'] >> 3) << 3
-        iocts = (icell-ncell0)/nchild #without a F correction, theres a +1
-        #assert that the children are read in the same order as the octs
-        assert np.all(octs==iocts[::nchild]) 
+        iocts = (icell-ncell0)/nchild  # without an F correction, there's a +1
+        # assert that the children are read in the same order as the octs
+        assert np.all(octs == iocts[::nchild])
     else:
-        start,end = noct_range
-        nocts = min(end-start,level_info[level])
+        start, end = noct_range
+        nocts = min(end-start, level_info[level])
         end = start + nocts
         ncells = nocts*8
         skip = np.dtype(hydro_struct).itemsize*start*8
         f.seek(level_child_offsets[level]+skip)
-        arr = np.fromfile(f,dtype=hydro_struct,count=ncells)
-        assert np.all(arr['pad1']==arr['pad2']) #pads must be equal
+        arr = np.fromfile(f, dtype=hydro_struct, count=ncells)
+        assert np.all(arr['pad1'] == arr['pad2'])  # pads must be equal
     source = {}
     for field in fields:
-        sh = (nocts,8)
-        source[field] = np.reshape(arr[field],sh,order='C').astype('float64')
+        sh = (nocts, 8)
+        source[field] = np.reshape(arr[field], sh, order='C').astype('float64')
     return source
 
 
-def _read_root_level(f,level_offsets,level_info,nhydro_vars=10):
+def _read_root_level(f, level_offsets, level_info, nhydro_vars=10):
     nocts = level_info[0]
-    f.seek(level_offsets[0]) # Ditch the header
-    hvar = read_vector(f,'f','>')
-    var = read_vector(f,'f','>')
+    f.seek(level_offsets[0])  # Ditch the header
+    hvar = read_vector(f, 'f', '>')
+    var = read_vector(f, 'f', '>')
     hvar = hvar.reshape((nhydro_vars, nocts*8), order="F")
     var = var.reshape((2, nocts*8), order="F")
-    arr = np.concatenate((hvar,var))
+    arr = np.concatenate((hvar, var))
     return arr
 
-#All of these functions are to convert from hydro time var to 
-#proper time
+# All of these functions are to convert from hydro time var to
+# proper time
 sqrt = np.sqrt
 sign = np.sign
 
-def find_root(f,a,b,tol=1e-6):
+
+def find_root(f, a, b, tol=1e-6):
     c = (a+b)/2.0
     last = -np.inf
-    assert(sign(f(a)) != sign(f(b)))  
+    assert(sign(f(a)) != sign(f(b)))
     while np.abs(f(c)-last) > tol:
-        last=f(c)
-        if sign(last)==sign(f(b)):
-            b=c
+        last = f(c)
+        if sign(last) == sign(f(b)):
+            b = c
         else:
-            a=c
+            a = c
         c = (a+b)/2.0
     return c
 
-def quad(fintegrand,xmin,xmax,n=1e4):
-    spacings = np.logspace(np.log10(xmin),np.log10(xmax),n)
+
+def quad(fintegrand, xmin, xmax, n=1e4):
+    spacings = np.logspace(np.log10(xmin), np.log10(xmax), n)
     integrand_arr = fintegrand(spacings)
-    val = np.trapz(integrand_arr,dx=np.diff(spacings))
+    val = np.trapz(integrand_arr, dx=np.diff(spacings))
     return val
 
-def a2b(at,Om0=0.27,Oml0=0.73,h=0.700):
+
+def a2b(at, Om0=0.27, Oml0=0.73, h=0.700):
     def f_a2b(x):
         val = 0.5*sqrt(Om0) / x**3.0
-        val /= sqrt(Om0/x**3.0 +Oml0 +(1.0 - Om0-Oml0)/x**2.0)
+        val /= sqrt(Om0/x**3.0 + Oml0 + (1.0 - Om0-Oml0)/x**2.0)
         return val
-    #val, err = si.quad(f_a2b,1,at)
-    val = quad(f_a2b,1,at)
+    # val, err = si.quad(f_a2b,1,at)
+    val = quad(f_a2b, 1, at)
     return val
 
-def b2a(bt,**kwargs):
-    #converts code time into expansion factor 
-    #if Om0 ==1and OmL == 0 then b2a is (1 / (1-td))**2
-    #if bt < -190.0 or bt > -.10:  raise 'bt outside of range'
-    f_b2a = lambda at: a2b(at,**kwargs)-bt
-    return find_root(f_b2a,1e-4,1.1)
-    #return so.brenth(f_b2a,1e-4,1.1)
-    #return brent.brent(f_b2a)
 
-def a2t(at,Om0=0.27,Oml0=0.73,h=0.700):
-    integrand = lambda x : 1./(x*sqrt(Oml0+Om0*x**-3.0))
-    #current_time,err = si.quad(integrand,0.0,at,epsabs=1e-6,epsrel=1e-6)
-    current_time = quad(integrand,1e-4,at)
-    #spacings = np.logspace(-5,np.log10(at),1e5)
-    #integrand_arr = integrand(spacings)
-    #current_time = np.trapz(integrand_arr,dx=np.diff(spacings))
+def b2a(bt, **kwargs):
+    # converts code time into expansion factor
+    # if Om0 == 1 and OmL == 0 then b2a is (1 / (1-td))**2
+    # if bt < -190.0 or bt > -.10:  raise 'bt outside of range'
+    f_b2a = lambda at: a2b(at, **kwargs)-bt
+    return find_root(f_b2a, 1e-4, 1.1)
+    # return so.brenth(f_b2a,1e-4,1.1)
+    # return brent.brent(f_b2a)
+
+
+def a2t(at, Om0=0.27, Oml0=0.73, h=0.700):
+    integrand = lambda x: 1./(x*sqrt(Oml0+Om0*x**-3.0))
+    # current_time,err = si.quad(integrand,0.0,at,epsabs=1e-6,epsrel=1e-6)
+    current_time = quad(integrand, 1e-4, at)
+    # spacings = np.logspace(-5,np.log10(at),1e5)
+    # integrand_arr = integrand(spacings)
+    # current_time = np.trapz(integrand_arr,dx=np.diff(spacings))
     current_time *= 9.779/h
     return current_time
 
-def b2t(tb,n = 1e2,logger=None,**kwargs):
+
+def b2t(tb, n=1e2, logger=None, **kwargs):
     tb = np.array(tb)
-    if type(tb) == type(1.1): 
+    if isinstance(tb, type(1.1)):
         return a2t(b2a(tb))
-    if tb.shape == (): 
+    if tb.shape == ():
         return a2t(b2a(tb))
-    if len(tb) < n: n= len(tb)
-    age_min = a2t(b2a(tb.max(),**kwargs),**kwargs)
-    age_max = a2t(b2a(tb.min(),**kwargs),**kwargs)
-    tbs  = -1.*np.logspace(np.log10(-tb.min()),
-                          np.log10(-tb.max()),n)
+    if len(tb) < n:
+        n = len(tb)
+    age_min = a2t(b2a(tb.max(), **kwargs), **kwargs)
+    age_max = a2t(b2a(tb.min(), **kwargs), **kwargs)
+    tbs = -1.*np.logspace(np.log10(-tb.min()),
+                          np.log10(-tb.max()), n)
     ages = []
-    for i,tbi in enumerate(tbs):
+    for i, tbi in enumerate(tbs):
         ages += a2t(b2a(tbi)),
-        if logger: logger(i)
+        if logger:
+            logger(i)
     ages = np.array(ages)
-    return tbs,ages
-
+    return tbs, ages
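
For readers reconstructing the time machinery above: b2a inverts a2b by
bisection (find_root), and a2t integrates the expansion history with the
log-spaced trapezoid rule in quad. A minimal usage sketch, assuming the
cosmology defaults in the code (the input value is hypothetical):

    tb = -5.0          # hypothetical ART code-time value
    a_exp = b2a(tb)    # expansion factor, bracketed in (1e-4, 1.1)
    t = a2t(a_exp)     # proper time; the 9.779/h factor makes it Gyr
    print(a_exp, t)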


https://bitbucket.org/yt_analysis/yt/commits/73b9316a3cb2/
Changeset:   73b9316a3cb2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:27:45
Summary:     autopep8 on fields
Affected #:  1 file

diff -r b7263dd2e2f94c91aa65bbdf2f4497c9983fa4c1 -r 73b9316a3cb22d568ce157663fc95dda0ce4f046 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -45,244 +45,276 @@
 
 import numpy as np
 
-#these are just the hydro fields
-#Add the fields, then later we'll individually defined units and names
+# these are just the hydro fields
+# Add the fields, then later we'll individually define units and names
 for f in fluid_fields:
     add_art_field(f, function=NullFunc, take_log=True,
-              validators = [ValidateDataField(f)])
+                  validators=[ValidateDataField(f)])
 
 for f in particle_fields:
     add_art_field(f, function=NullFunc, take_log=True,
-              validators = [ValidateDataField(f)],
-              particle_type = True)
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
 
-add_art_field("particle_mass",function=NullFunc,take_log=True,
-            validators=[ValidateDataField(f)],
-            particle_type = True,
-            convert_function= lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
 
-add_art_field("particle_mass_initial",function=NullFunc,take_log=True,
-            validators=[ValidateDataField(f)],
-            particle_type = True,
-            convert_function= lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
 
-#Hydro Fields that are verified to be OK unit-wise:
-#Density
-#Temperature
-#metallicities
-#MetalDensity SNII + SNia
+# Hydro Fields that are verified to be OK unit-wise:
+# Density
+# Temperature
+# metallicities
+# MetalDensity SNII + SNia
 
-#Hydro Fields that need to be tested:
-#TotalEnergy
-#XYZMomentum
-#Pressure
-#Gamma
-#GasEnergy
-#Potentials
-#xyzvelocity
+# Hydro Fields that need to be tested:
+# TotalEnergy
+# XYZMomentum
+# Pressure
+# Gamma
+# GasEnergy
+# Potentials
+# xyzvelocity
 
-#Particle fields that are tested:
-#particle_position_xyz
-#particle_type
-#particle_index
-#particle_mass
-#particle_mass_initial
-#particle_age
-#particle_velocity
-#particle_metallicity12
+# Particle fields that are tested:
+# particle_position_xyz
+# particle_type
+# particle_index
+# particle_mass
+# particle_mass_initial
+# particle_age
+# particle_velocity
+# particle_metallicity12
 
-#Particle fields that are untested:
-#NONE
+# Particle fields that are untested:
+# NONE
 
-#Other checks:
-#CellMassMsun == Density * CellVolume
+# Other checks:
+# CellMassMsun == Density * CellVolume
+
 
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["Density"]._convert_function=_convertDensity
+KnownARTFields["Density"]._convert_function = _convertDensity
+
 
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
 KnownARTFields["TotalEnergy"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["TotalEnergy"]._projected_units = r"\rm{K}"
-KnownARTFields["TotalEnergy"]._convert_function=_convertTotalEnergy
+KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
+
 
 def _convertXMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
 KnownARTFields["XMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
 KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["XMomentumDensity"]._convert_function=_convertXMomentumDensity
+KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
+
 
 def _convertYMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
 KnownARTFields["YMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
 KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["YMomentumDensity"]._convert_function=_convertYMomentumDensity
+KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
+
 
 def _convertZMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
 KnownARTFields["ZMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
 KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["ZMomentumDensity"]._convert_function=_convertZMomentumDensity
+KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
+
 
 def _convertPressure(data):
     return data.convert("Pressure")
 KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{cm}/\rm{s}^2"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
-KnownARTFields["Pressure"]._convert_function=_convertPressure
+KnownARTFields["Pressure"]._convert_function = _convertPressure
+
 
 def _convertGamma(data):
     return 1.0
 KnownARTFields["Gamma"]._units = r""
 KnownARTFields["Gamma"]._projected_units = r""
-KnownARTFields["Gamma"]._convert_function=_convertGamma
+KnownARTFields["Gamma"]._convert_function = _convertGamma
+
 
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
 KnownARTFields["GasEnergy"]._units = r"\rm{ergs}/\rm{g}"
 KnownARTFields["GasEnergy"]._projected_units = r""
-KnownARTFields["GasEnergy"]._convert_function=_convertGasEnergy
+KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
+
 
 def _convertMetalDensitySNII(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNII"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNII"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNII"]._convert_function=_convertMetalDensitySNII
+KnownARTFields["MetalDensitySNII"]._convert_function = _convertMetalDensitySNII
+
 
 def _convertMetalDensitySNIa(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNIa"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNIa"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNIa"]._convert_function=_convertMetalDensitySNIa
+KnownARTFields["MetalDensitySNIa"]._convert_function = _convertMetalDensitySNIa
+
 
 def _convertPotentialNew(data):
     return data.convert("Potential")
 KnownARTFields["PotentialNew"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialNew"]._convert_function=_convertPotentialNew
+KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
+
 
 def _convertPotentialOld(data):
     return data.convert("Potential")
 KnownARTFields["PotentialOld"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialOld"]._convert_function=_convertPotentialOld
+KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
 
+
 def _temperature(field, data):
-    tr  = data["GasEnergy"]/data["Density"]
+    tr = data["GasEnergy"]/data["Density"]
     tr /= data.pf.conversion_factors["GasEnergy"]
     tr *= data.pf.conversion_factors["Density"]
     tr *= data.pf.conversion_factors['tr']
     return tr
 
+
 def _converttemperature(data):
     return 1.0
-add_field("Temperature", function=_temperature, units = r"\mathrm{K}",take_log=True)
+add_field("Temperature", function=_temperature,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
-#ARTFieldInfo["Temperature"]._convert_function=_converttemperature
+# ARTFieldInfo["Temperature"]._convert_function=_converttemperature
+
 
 def _metallicity_snII(field, data):
-    tr  = data["MetalDensitySNII"] / data["Density"]
+    tr = data["MetalDensitySNII"] / data["Density"]
     return tr
-add_field("Metallicity_SNII", function=_metallicity_snII, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNII", function=_metallicity_snII,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNII"]._units = r""
 ARTFieldInfo["Metallicity_SNII"]._projected_units = r""
 
+
 def _metallicity_snIa(field, data):
-    tr  = data["MetalDensitySNIa"] / data["Density"]
+    tr = data["MetalDensitySNIa"] / data["Density"]
     return tr
-add_field("Metallicity_SNIa", function=_metallicity_snIa, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNIa", function=_metallicity_snIa,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNIa"]._units = r""
 ARTFieldInfo["Metallicity_SNIa"]._projected_units = r""
 
+
 def _metallicity(field, data):
-    tr  = data["Metal_Density"] / data["Density"]
+    tr = data["Metal_Density"] / data["Density"]
     return tr
-add_field("Metallicity", function=_metallicity, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity", function=_metallicity,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity"]._units = r""
 ARTFieldInfo["Metallicity"]._projected_units = r""
 
-def _x_velocity(field,data):
-    tr  = data["XMomentumDensity"]/data["Density"]
+
+def _x_velocity(field, data):
+    tr = data["XMomentumDensity"]/data["Density"]
     return tr
-add_field("x-velocity", function=_x_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("x-velocity", function=_x_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["x-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["x-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _y_velocity(field,data):
-    tr  = data["YMomentumDensity"]/data["Density"]
+
+def _y_velocity(field, data):
+    tr = data["YMomentumDensity"]/data["Density"]
     return tr
-add_field("y-velocity", function=_y_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("y-velocity", function=_y_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["y-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["y-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _z_velocity(field,data):
-    tr  = data["ZMomentumDensity"]/data["Density"]
+
+def _z_velocity(field, data):
+    tr = data["ZMomentumDensity"]/data["Density"]
     return tr
-add_field("z-velocity", function=_z_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("z-velocity", function=_z_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["z-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["z-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
+
 def _metal_density(field, data):
-    tr  = data["MetalDensitySNIa"]
+    tr = data["MetalDensitySNIa"]
     tr += data["MetalDensitySNII"]
     return tr
-add_field("Metal_Density", function=_metal_density, units = r"\mathrm{K}",take_log=True)
+add_field("Metal_Density", function=_metal_density,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metal_Density"]._units = r""
 ARTFieldInfo["Metal_Density"]._projected_units = r""
 
 
-#Particle fields
+# Particle fields
 
-def _particle_age(field,data):
+def _particle_age(field, data):
     tr = data["particle_creation_time"]
     return data.pf.current_time - tr
-add_field("particle_age",function=_particle_age,units=r"\mathrm{s}",
-          take_log=True,particle_type=True)
+add_field("particle_age", function=_particle_age, units=r"\mathrm{s}",
+          take_log=True, particle_type=True)
 
-def spread_ages(ages,spread=1.0e7*365*24*3600):
-    #stars are formed in lumps; spread out the ages linearly
-    da= np.diff(ages)
-    assert np.all(da<=0)
-    #ages should always be decreasing, and ordered so
+
+def spread_ages(ages, spread=1.0e7*365*24*3600):
+    # stars are formed in lumps; spread out the ages linearly
+    da = np.diff(ages)
+    assert np.all(da <= 0)
+    # ages should always be decreasing, and ordered so
     agesd = np.zeros(ages.shape)
-    idx, = np.where(da<0)
-    idx+=1 #mark the right edges
-    #spread this age evenly out to the next age
-    lidx=0
-    lage=0
+    idx, = np.where(da < 0)
+    idx += 1  # mark the right edges
+    # spread this age evenly out to the next age
+    lidx = 0
+    lage = 0
     for i in idx:
-        n = i-lidx #n stars affected
+        n = i-lidx  # n stars affected
         rage = ages[i]
-        lage = max(rage-spread,0.0)
-        agesd[lidx:i]=np.linspace(lage,rage,n)
-        lidx=i
-        #lage=rage
-    #we didn't get the last iter
+        lage = max(rage-spread, 0.0)
+        agesd[lidx:i] = np.linspace(lage, rage, n)
+        lidx = i
+        # lage=rage
+    # we didn't get the last iter
     n = agesd.shape[0]-lidx
     rage = ages[-1]
-    lage = max(rage-spread,0.0)
-    agesd[lidx:]=np.linspace(lage,rage,n)
+    lage = max(rage-spread, 0.0)
+    agesd[lidx:] = np.linspace(lage, rage, n)
     return agesd
 
-def _particle_age_spread(field,data):
+
+def _particle_age_spread(field, data):
     tr = data["particle_creation_time"]
     return spread_ages(data.pf.current_time - tr)
 
-add_field("particle_age_spread",function=_particle_age_spread,
-          particle_type=True,take_log=True,units=r"\rm{s}")
+add_field("particle_age_spread", function=_particle_age_spread,
+          particle_type=True, take_log=True, units=r"\rm{s}")
 
-def _ParticleMassMsun(field,data):
+
+def _ParticleMassMsun(field, data):
     return data["particle_mass"]/1.989e33
-add_field("ParticleMassMsun",function=_ParticleMassMsun,particle_type=True,
-          take_log=True,units=r"\rm{Msun}")
+add_field("ParticleMassMsun", function=_ParticleMassMsun, particle_type=True,
+          take_log=True, units=r"\rm{Msun}")
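
To make spread_ages concrete, a small sketch with a hypothetical age array
(seconds, ordered decreasing as the assertion requires). Note that each lump
of identical ages is replaced by a linear ramp over the spread window that
ends at the age of the next (younger) star:

    import numpy as np

    ages = np.array([3.0e16, 3.0e16, 3.0e16, 1.0e16])
    print(spread_ages(ages, spread=1.0e15))
    # -> [9.0e15, 9.5e15, 1.0e16, 9.0e15] for this input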


https://bitbucket.org/yt_analysis/yt/commits/dd6f2e27bfda/
Changeset:   dd6f2e27bfda
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:28:08
Summary:     autopep8 on definitions
Affected #:  1 file

diff -r 73b9316a3cb22d568ce157663fc95dda0ce4f046 -r dd6f2e27bfda2e4d61a9149d0b11480c7073aca7 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -25,10 +25,10 @@
 
 """
 
-#If not otherwise specified, we are big endian
+# If not otherwise specified, we are big endian
 endian = '>'
 
-fluid_fields= [ 
+fluid_fields = [
     'Density',
     'TotalEnergy',
     'XMomentumDensity',
@@ -43,13 +43,13 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
+hydro_struct = [('pad1', '>i'), ('idc', '>i'), ('iOctCh', '>i')]
 for field in fluid_fields:
-    hydro_struct += (field,'>f'),
-hydro_struct += ('pad2','>i'),
+    hydro_struct += (field, '>f'),
+hydro_struct += ('pad2', '>i'),
 
-particle_fields= [
-    'particle_mass', #stars have variable mass
+particle_fields = [
+    'particle_mass',  # stars have variable mass
     'particle_index',
     'particle_type',
     'particle_position_x',
@@ -74,69 +74,69 @@
     'particle_metallicity',
 ]
 
-filename_pattern = {				
-	'amr':'10MpcBox_csf512_%s.d',
-	'particle_header':'PMcrd%s.DAT',
-	'particle_data':'PMcrs0%s.DAT',
-	'particle_stars':'stars_%s.dat'
+filename_pattern = {
+    'amr': '10MpcBox_csf512_%s.d',
+    'particle_header': 'PMcrd%s.DAT',
+    'particle_data': 'PMcrs0%s.DAT',
+    'particle_stars': 'stars_%s.dat'
 }
 
-filename_pattern_hf = {				
-	'particle_header':'PMcrd_%s.DAT',
-	'particle_data':'PMcrs0_%s.DAT',
+filename_pattern_hf = {
+    'particle_header': 'PMcrd_%s.DAT',
+    'particle_data': 'PMcrs0_%s.DAT',
 }
 
 amr_header_struct = [
-    ('jname',1,'256s'),
-    (('istep','t','dt','aexpn','ainit'),1,'iddff'),
-    (('boxh','Om0','Oml0','Omb0','hubble'),5,'f'),
-    ('nextras',1,'i'),
-    (('extra1','extra2'),2,'f'),
-    ('lextra',1,'512s'),
-    (('min_level','max_level'),2,'i')
+    ('jname', 1, '256s'),
+    (('istep', 't', 'dt', 'aexpn', 'ainit'), 1, 'iddff'),
+    (('boxh', 'Om0', 'Oml0', 'Omb0', 'hubble'), 5, 'f'),
+    ('nextras', 1, 'i'),
+    (('extra1', 'extra2'), 2, 'f'),
+    ('lextra', 1, '512s'),
+    (('min_level', 'max_level'), 2, 'i')
 ]
 
-particle_header_struct =[
+particle_header_struct = [
     (('header',
-     'aexpn','aexp0','amplt','astep',
+     'aexpn', 'aexp0', 'amplt', 'astep',
      'istep',
-     'partw','tintg',
-     'Ekin','Ekin1','Ekin2',
-     'au0','aeu0',
-     'Nrow','Ngridc','Nspecies','Nseed',
-     'Om0','Oml0','hubble','Wp5','Ocurv','Omb0',
-     'extras','unknown'),
-      1,
+     'partw', 'tintg',
+     'Ekin', 'Ekin1', 'Ekin2',
+     'au0', 'aeu0',
+     'Nrow', 'Ngridc', 'Nspecies', 'Nseed',
+     'Om0', 'Oml0', 'hubble', 'Wp5', 'Ocurv', 'Omb0',
+     'extras', 'unknown'),
+     1,
      '45sffffi'+'fffffff'+'iiii'+'ffffff'+'396s'+'f')
 ]
 
 star_struct = [
-        ('>d',('tdum','adum')),
-        ('>i','nstars'),
-        ('>d',('ws_old','ws_oldi')),
-        ('>f','particle_mass'),
-        ('>f','particle_mass_initial'),
-        ('>f','particle_creation_time'),
-        ('>f','particle_metallicity1'),
-        ('>f','particle_metallicity2')
-        ]
+    ('>d', ('tdum', 'adum')),
+    ('>i', 'nstars'),
+    ('>d', ('ws_old', 'ws_oldi')),
+    ('>f', 'particle_mass'),
+    ('>f', 'particle_mass_initial'),
+    ('>f', 'particle_creation_time'),
+    ('>f', 'particle_metallicity1'),
+    ('>f', 'particle_metallicity2')
+]
 
 star_name_map = {
-        'particle_mass':'mass',
-        'particle_mass_initial':'imass',
-        'particle_creation_time':'tbirth',
-        'particle_metallicity1':'metallicity1',
-        'particle_metallicity2':'metallicity2',
-        'particle_metallicity':'metallicity',
-        }
+    'particle_mass': 'mass',
+    'particle_mass_initial': 'imass',
+    'particle_creation_time': 'tbirth',
+    'particle_metallicity1': 'metallicity1',
+    'particle_metallicity2': 'metallicity2',
+    'particle_metallicity': 'metallicity',
+}
 
 constants = {
-    "Y_p":0.245,
-    "gamma":5./3.,
-    "T_CMB0":2.726,
-    "T_min":300.,
-    "ng":128,
-    "wmu":4.0/(8.0-5.0*0.245)
+    "Y_p": 0.245,
+    "gamma": 5./3.,
+    "T_CMB0": 2.726,
+    "T_min": 300.,
+    "ng": 128,
+    "wmu": 4.0/(8.0-5.0*0.245)
 }
 
 seek_extras = 137
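
For context on how these struct lists are consumed: the (name, format) pairs
map directly onto a NumPy record dtype for reading the big-endian files. A
sketch with the field list truncated to a single hydro variable:

    import numpy as np

    hydro_struct = [('pad1', '>i'), ('idc', '>i'), ('iOctCh', '>i'),
                    ('Density', '>f'), ('pad2', '>i')]
    dt = np.dtype(hydro_struct)
    print(dt.itemsize)  # 20 bytes per cell record in this sketch
    # arr = np.fromfile(f, dtype=dt, count=ncells)  # as in _read_child_level
    # The matching pad words let the readers assert arr['pad1'] == arr['pad2'].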


https://bitbucket.org/yt_analysis/yt/commits/97e43b3b81b4/
Changeset:   97e43b3b81b4
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:28:29
Summary:     autopep8 on tests
Affected #:  1 file

diff -r dd6f2e27bfda2e4d61a9149d0b11480c7073aca7 -r 97e43b3b81b4cb3318405cf2e1b1446e50271d54 yt/frontends/art/tests/test_outputs.py
--- a/yt/frontends/art/tests/test_outputs.py
+++ b/yt/frontends/art/tests/test_outputs.py
@@ -31,14 +31,16 @@
     data_dir_load
 from yt.frontends.art.api import ARTStaticOutput
 
-_fields = ("Density","particle_mass",("all","particle_position_x"))
+_fields = ("Density", "particle_mass", ("all", "particle_position_x"))
 
 sfg1 = "10MpcBox_csf512_a0.330.d"
-@requires_pf(sfg1,big_data=True)
+
+
+@requires_pf(sfg1, big_data=True)
 def test_sfg1():
     pf = data_dir_load(sfg1)
     yield assert_equal, str(pf), "10MpcBox_csf512_a0.330.d"
-    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    dso = [None, ("sphere", ("max", (0.1, 'unitary')))]
     for field in _fields:
         for axis in [0, 1, 2]:
             for ds in dso:


https://bitbucket.org/yt_analysis/yt/commits/889003cc1d9c/
Changeset:   889003cc1d9c
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:29:58
Summary:     wrapping to 80 characters
Affected #:  1 file

diff -r 97e43b3b81b4cb3318405cf2e1b1446e50271d54 -r 889003cc1d9cf24732b973da5de5b8cdfc8dde71 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -516,8 +516,10 @@
                     noct_range=noct_range)
                 nocts_filling = noct_range[1]-noct_range[0]
                 level_offset += oct_handler.fill_level(self.domain.domain_id,
-                                                       level, dest, source, self.mask, level_offset,
-                                                       noct_range[0], nocts_filling)
+                                                       level, dest, source,
+                                                       self.mask, level_offset,
+                                                       noct_range[0],
+                                                       nocts_filling)
         return dest
 
 


https://bitbucket.org/yt_analysis/yt/commits/a14ec366cf23/
Changeset:   a14ec366cf23
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:31:46
Summary:     added credits
Affected #:  2 files

diff -r 889003cc1d9cf24732b973da5de5b8cdfc8dde71 -r a14ec366cf23a17788ab90a5901ba1215688aa2b yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: UCSD
+Author: Chris Moody <chrisemoody at gmail.com>
+Affiliation: UCSC
 Homepage: http://yt-project.org/
 License:
   Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.

diff -r 889003cc1d9cf24732b973da5de5b8cdfc8dde71 -r a14ec366cf23a17788ab90a5901ba1215688aa2b yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: KIPAC/SLAC/Stanford
+Author: Chris Moody <chrisemoody at gmail.com>
+Affiliation: UCSC
 Homepage: http://yt-project.org/
 License:
   Copyright (C) 2007-2011 Matthew Turk.  All Rights Reserved.


https://bitbucket.org/yt_analysis/yt/commits/43abc3ca352e/
Changeset:   43abc3ca352e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 21:39:02
Summary:     fixing units on fields
Affected #:  1 file

diff -r a14ec366cf23a17788ab90a5901ba1215688aa2b -r 43abc3ca352e4ec3306059916b0b3f926a3bece9 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -68,37 +68,6 @@
               particle_type=True,
               convert_function=lambda x: x.convert("particle_mass"))
 
-# Hydro Fields that are verified to be OK unit-wise:
-# Density
-# Temperature
-# metallicities
-# MetalDensity SNII + SNia
-
-# Hydro Fields that need to be tested:
-# TotalEnergy
-# XYZMomentum
-# Pressure
-# Gamma
-# GasEnergy
-# Potentials
-# xyzvelocity
-
-# Particle fields that are tested:
-# particle_position_xyz
-# particle_type
-# particle_index
-# particle_mass
-# particle_mass_initial
-# particle_age
-# particle_velocity
-# particle_metallicity12
-
-# Particle fields that are untested:
-# NONE
-
-# Other checks:
-# CellMassMsun == Density * CellVolume
-
 
 def _convertDensity(data):
     return data.convert("Density")
@@ -109,8 +78,8 @@
 
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["TotalEnergy"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["TotalEnergy"]._projected_units = r"\rm{K}"
+KnownARTFields["TotalEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
 
 
@@ -118,8 +87,8 @@
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["XMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{K}"
+KnownARTFields["XMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
 
 
@@ -127,8 +96,8 @@
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["YMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{K}"
+KnownARTFields["YMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
 
 
@@ -136,14 +105,14 @@
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["ZMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{K}"
+KnownARTFields["ZMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
 
 
 def _convertPressure(data):
     return data.convert("Pressure")
-KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{cm}/\rm{s}^2"
+KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{s}^2/\rm{cm}^1"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
 KnownARTFields["Pressure"]._convert_function = _convertPressure
 
@@ -157,8 +126,8 @@
 
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["GasEnergy"]._units = r"\rm{ergs}/\rm{g}"
-KnownARTFields["GasEnergy"]._projected_units = r""
+KnownARTFields["GasEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["GasEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
 
 
@@ -178,15 +147,15 @@
 
 def _convertPotentialNew(data):
     return data.convert("Potential")
-KnownARTFields["PotentialNew"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}/\rm{cm}^2"
+KnownARTFields["PotentialNew"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
 
 
 def _convertPotentialOld(data):
     return data.convert("Potential")
-KnownARTFields["PotentialOld"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}/\rm{cm}^2"
+KnownARTFields["PotentialOld"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
@@ -269,8 +238,8 @@
     return tr
 add_field("Metal_Density", function=_metal_density,
           units=r"\mathrm{K}", take_log=True)
-ARTFieldInfo["Metal_Density"]._units = r""
-ARTFieldInfo["Metal_Density"]._projected_units = r""
+ARTFieldInfo["Metal_Density"]._units = r"\rm{g}/\rm{cm}^3"
+ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
 
 # Particle fields


https://bitbucket.org/yt_analysis/yt/commits/24229c3d19f9/
Changeset:   24229c3d19f9
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-09 22:04:38
Summary:     cleaning up fields some more
Affected #:  1 file

diff -r 43abc3ca352e4ec3306059916b0b3f926a3bece9 -r 24229c3d19f9528f909b246454b07e9bc3acdc78 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -24,7 +24,7 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
-
+import numpy as np
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, \
     FieldInfo, \
@@ -37,18 +37,14 @@
     ValidateGridType
 import yt.data_objects.universal_fields
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import mass_sun_cgs
 from yt.frontends.art.definitions import *
 
 KnownARTFields = FieldInfoContainer()
 add_art_field = KnownARTFields.add_field
-
 ARTFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_field = ARTFieldInfo.add_field
 
-import numpy as np
-
-# these are just the hydro fields
-# Add the fields, then later we'll individually define units and names
 for f in fluid_fields:
     add_art_field(f, function=NullFunc, take_log=True,
                   validators=[ValidateDataField(f)])
@@ -57,32 +53,27 @@
     add_art_field(f, function=NullFunc, take_log=True,
                   validators=[ValidateDataField(f)],
                   particle_type=True)
-
 add_art_field("particle_mass", function=NullFunc, take_log=True,
               validators=[ValidateDataField(f)],
               particle_type=True,
               convert_function=lambda x: x.convert("particle_mass"))
-
 add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
               validators=[ValidateDataField(f)],
               particle_type=True,
               convert_function=lambda x: x.convert("particle_mass"))
 
-
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 KnownARTFields["Density"]._convert_function = _convertDensity
 
-
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
 KnownARTFields["TotalEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
 KnownARTFields["TotalEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
 
-
 def _convertXMomentumDensity(data):
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
@@ -91,7 +82,6 @@
 KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
 
-
 def _convertYMomentumDensity(data):
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
@@ -100,7 +90,6 @@
 KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
 
-
 def _convertZMomentumDensity(data):
     tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
@@ -109,49 +98,42 @@
 KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
 KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
 
-
 def _convertPressure(data):
     return data.convert("Pressure")
 KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{s}^2/\rm{cm}^1"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
 KnownARTFields["Pressure"]._convert_function = _convertPressure
 
-
 def _convertGamma(data):
     return 1.0
 KnownARTFields["Gamma"]._units = r""
 KnownARTFields["Gamma"]._projected_units = r""
 KnownARTFields["Gamma"]._convert_function = _convertGamma
 
-
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
 KnownARTFields["GasEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
 KnownARTFields["GasEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
 
-
 def _convertMetalDensitySNII(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNII"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNII"]._projected_units = r"\rm{g}/\rm{cm}^2"
 KnownARTFields["MetalDensitySNII"]._convert_function = _convertMetalDensitySNII
 
-
 def _convertMetalDensitySNIa(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNIa"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNIa"]._projected_units = r"\rm{g}/\rm{cm}^2"
 KnownARTFields["MetalDensitySNIa"]._convert_function = _convertMetalDensitySNIa
 
-
 def _convertPotentialNew(data):
     return data.convert("Potential")
 KnownARTFields["PotentialNew"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
 KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
 KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
 
-
 def _convertPotentialOld(data):
     return data.convert("Potential")
 KnownARTFields["PotentialOld"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
@@ -159,8 +141,6 @@
 KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
-
-
 def _temperature(field, data):
     tr = data["GasEnergy"]/data["Density"]
     tr /= data.pf.conversion_factors["GasEnergy"]
@@ -168,15 +148,12 @@
     tr *= data.pf.conversion_factors['tr']
     return tr
 
-
 def _converttemperature(data):
     return 1.0
 add_field("Temperature", function=_temperature,
           units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
-# ARTFieldInfo["Temperature"]._convert_function=_converttemperature
-
 
 def _metallicity_snII(field, data):
     tr = data["MetalDensitySNII"] / data["Density"]
@@ -186,7 +163,6 @@
 ARTFieldInfo["Metallicity_SNII"]._units = r""
 ARTFieldInfo["Metallicity_SNII"]._projected_units = r""
 
-
 def _metallicity_snIa(field, data):
     tr = data["MetalDensitySNIa"] / data["Density"]
     return tr
@@ -195,7 +171,6 @@
 ARTFieldInfo["Metallicity_SNIa"]._units = r""
 ARTFieldInfo["Metallicity_SNIa"]._projected_units = r""
 
-
 def _metallicity(field, data):
     tr = data["Metal_Density"] / data["Density"]
     return tr
@@ -204,7 +179,6 @@
 ARTFieldInfo["Metallicity"]._units = r""
 ARTFieldInfo["Metallicity"]._projected_units = r""
 
-
 def _x_velocity(field, data):
     tr = data["XMomentumDensity"]/data["Density"]
     return tr
@@ -213,7 +187,6 @@
 ARTFieldInfo["x-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["x-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-
 def _y_velocity(field, data):
     tr = data["YMomentumDensity"]/data["Density"]
     return tr
@@ -222,7 +195,6 @@
 ARTFieldInfo["y-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["y-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-
 def _z_velocity(field, data):
     tr = data["ZMomentumDensity"]/data["Density"]
     return tr
@@ -231,7 +203,6 @@
 ARTFieldInfo["z-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["z-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-
 def _metal_density(field, data):
     tr = data["MetalDensitySNIa"]
     tr += data["MetalDensitySNII"]
@@ -241,16 +212,13 @@
 ARTFieldInfo["Metal_Density"]._units = r"\rm{g}/\rm{cm}^3"
 ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
-
 # Particle fields
-
 def _particle_age(field, data):
     tr = data["particle_creation_time"]
     return data.pf.current_time - tr
 add_field("particle_age", function=_particle_age, units=r"\mathrm{s}",
           take_log=True, particle_type=True)
 
-
 def spread_ages(ages, spread=1.0e7*365*24*3600):
     # stars are formed in lumps; spread out the ages linearly
     da = np.diff(ages)
@@ -276,7 +244,6 @@
     agesd[lidx:] = np.linspace(lage, rage, n)
     return agesd
 
-
 def _particle_age_spread(field, data):
     tr = data["particle_creation_time"]
     return spread_ages(data.pf.current_time - tr)
@@ -284,8 +251,7 @@
 add_field("particle_age_spread", function=_particle_age_spread,
           particle_type=True, take_log=True, units=r"\rm{s}")
 
-
 def _ParticleMassMsun(field, data):
-    return data["particle_mass"]/1.989e33
+    return data["particle_mass"]/mass_sun_cgs
 add_field("ParticleMassMsun", function=_ParticleMassMsun, particle_type=True,
           take_log=True, units=r"\rm{Msun}")
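
The mass_sun_cgs change above swaps the literal 1.989e33 for the shared
constant imported at the top of the diff; a trivial sketch (the particle
mass value is hypothetical):

    from yt.utilities.physical_constants import mass_sun_cgs

    particle_mass = 3.978e33              # hypothetical mass in grams
    print(particle_mass / mass_sun_cgs)   # ~2.0 solar masses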


https://bitbucket.org/yt_analysis/yt/commits/a061c15f2ce1/
Changeset:   a061c15f2ce1
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-10 03:51:53
Summary:     first pass at find max progenitor, a naive 'halo' finder
Affected #:  4 files

diff -r 24229c3d19f9528f909b246454b07e9bc3acdc78 -r a061c15f2ce1347b21ca16e8a8d29fd4edb5ac3c yt/analysis_modules/halo_finding/fmp/fmp.py
--- /dev/null
+++ b/yt/analysis_modules/halo_finding/fmp/fmp.py
@@ -0,0 +1,86 @@
+"""
+Find a progenitor line by reverse traversing a timeseries
+and finding the max density around the previous timestep
+
+Author: Christopher Moody <chrisemoody at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+#Take the first snapshot, find max
+#Define new position for 2nd snap from old position + bulk velocity
+#Draw sphere around new position
+#Find new max density
+#Or particle approach:
+#Take last snapshot
+#Find highest particle density
+#Keep indices of n=1000 most bound particles
+#Then go to N-1 snapshot, find new positions of particles
+#Find new max particle density 
+#Find new set of n=1000 particles around new center
+
+class FindMaxProgenitor:
+    def __init__(self,ts):
+        self.ts = ts
+    
+    def find_max_field(self,field,radius=0.01)):
+        """
+        Find the maximum of the given field, and iterate backwards through 
+        snapshots finding the maxima within a small radius of the previous 
+        snapshot's center.
+        """
+        centers = []
+        v,c = ts[-1].h.find_max(field)
+        for pf in ts[::-1]:
+            sph = pf.h.sphere(c,radius)
+            vn,c = sph.find_max(field)
+            centers.append(c)
+        return centers
+
+    def find_max_particle(self,radius=0.01,nparticles=1000):
+        """
+        Find the particle at the maximum density and iterate backwards through
+        snapshots, finding the location of the maximum density of the
+        previously closest nparticles.
+        """
+        indices = None
+        dd = ts[-1].h.all_data()
+        domain_dimensions = ts[-1].domain_dimensions
+        sim_unit_to_cm = ts[-1]['cm']
+        v,c = dd.quantities["ParticleDensityCenter"]()
+        centers = [c,]
+        for pf in ts[::-1]:
+            if indices:
+                dd = pf.h.all_data()
+                data = dict(number_of_particles=indices.shape[0])
+                for ax in 'xyz':
+                    data['particle_position_%s'%ax]=\
+                            dd["particle_position_%s"%ax][indices]
+                subselection = load_uniform_grid(data,domain_dimensions,
+                                                 sim_unit_to_cm)
+                ss = subselection.h.all_data()
+                v,c = ss.quantities["ParticleDensityCenter"]()
+            sph = pf.h.sphere(c,radius)
+            rad = sph["particle_radius"]
+            idx = sph["particle_index"]
+            indices = idx[np.argsort(rad)[:nparticles]]
+            centers.append(c)
+        return centers
+            

diff -r 24229c3d19f9528f909b246454b07e9bc3acdc78 -r a061c15f2ce1347b21ca16e8a8d29fd4edb5ac3c yt/analysis_modules/halo_finding/fmp/setup.py
--- /dev/null
+++ b/yt/analysis_modules/halo_finding/fmp/setup.py
@@ -0,0 +1,12 @@
+#!/usr/bin/env python
+import setuptools
+import os
+import sys
+import os.path
+
+def configuration(parent_package='',top_path=None):
+    from numpy.distutils.misc_util import Configuration
+    config = Configuration('fmp',parent_package,top_path)
+    config.make_config_py() # installs __config__.py
+    return config
+

diff -r 24229c3d19f9528f909b246454b07e9bc3acdc78 -r a061c15f2ce1347b21ca16e8a8d29fd4edb5ac3c yt/analysis_modules/halo_finding/setup.py
--- a/yt/analysis_modules/halo_finding/setup.py
+++ b/yt/analysis_modules/halo_finding/setup.py
@@ -12,6 +12,7 @@
     config.add_subpackage("parallel_hop")
     if os.path.exists("rockstar.cfg"):
         config.add_subpackage("rockstar")
+    config.add_subpackage("fmp")
     config.make_config_py() # installs __config__.py
     #config.make_svn_version_py()
     return config
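
Once the stray parenthesis in find_max_field's signature is fixed (see the
4e17843128df changeset below), driving the new class would look roughly like
this sketch; the time-series construction and output glob are hypothetical,
and the import path assumes the package layout added above:

    from yt.mods import TimeSeriesData
    from yt.analysis_modules.halo_finding.fmp.fmp import FindMaxProgenitor

    ts = TimeSeriesData.from_filenames("DD00??/DD00??")  # hypothetical outputs
    fmp = FindMaxProgenitor(ts)
    # Walk the series newest-to-oldest, re-centering on the densest
    # point within `radius` of the previous snapshot's center.
    centers = fmp.find_max_field("Density", radius=0.01)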


https://bitbucket.org/yt_analysis/yt/commits/bf9c4f84f002/
Changeset:   bf9c4f84f002
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-10 04:11:09
Summary:     added center finding for particle density
Affected #:  1 file

diff -r a061c15f2ce1347b21ca16e8a8d29fd4edb5ac3c -r bf9c4f84f002491b97ce87291d7304155d0a39e9 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -678,3 +678,34 @@
     return [np.sum(totals[:,i]) for i in range(n_fields)]
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
+
+def _ParticleDensityCenter(data,field,nbins=3):
+    """
+    Find the center of the particle density
+    by histogramming the particles iteratively.
+    """
+    pos = np.array([data["particle_position_%s"%ax] for ax in "xyz"]).T
+    mas = data["particle_mass"]
+    calc_radius= lambda x,y:np.sqrt(np.sum((x-y)**2.0,axis=1))
+    density = 0
+    while pos.shape[0] > 1:
+        table,bins=np.histogramdd(pos,bins=nbins, weights=mas)
+        bin_size = min((np.max(bins,axis=1)-np.min(bins,axis=1))/nbins)
+        centeridx = np.where(table==table.max())
+        le = np.array([bins[0][centeridx[0][0]],
+                       bins[1][centeridx[1][0]],
+                       bins[2][centeridx[2][0]]])
+        re = np.array([bins[0][centeridx[0][0]+1],
+                       bins[1][centeridx[1][0]+1],
+                       bins[2][centeridx[2][0]+1]])
+        center = 0.5*(le+re)
+        idx = calc_radius(pos,center)<bin_size
+        pos, mas = pos[idx],mas[idx]
+        density = max(density,mas.sum()/bin_size**3.0)
+    return density, center
+def _combParticleDensityCenter(data,densities,centers):
+    i = np.argmax(densities)
+    return densities[i],centers[i]
+
+add_quantity("ParticleDensityCenter",function=_ParticleDensityCenter,
+             combine_function=_combParticleDensityCenter,n_ret=2)
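
The new quantity is invoked like any other derived quantity; a minimal
sketch (pf is a hypothetical loaded parameter file):

    dd = pf.h.all_data()
    # Iteratively histograms the particle positions, zooming in on the
    # densest bin until one particle remains; returns (density, center).
    density, center = dd.quantities["ParticleDensityCenter"]()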


https://bitbucket.org/yt_analysis/yt/commits/bc38946cada3/
Changeset:   bc38946cada3
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-10 04:22:32
Summary:     finished testing ParticleDensityCenter
Affected #:  1 file

diff -r bf9c4f84f002491b97ce87291d7304155d0a39e9 -r bc38946cada32794608c4d64aac6fba27db505ca yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -679,7 +679,7 @@
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
 
-def _ParticleDensityCenter(data,field,nbins=3):
+def _ParticleDensityCenter(data,nbins=3):
     """
     Find the center of the particle density
     by histogramming the particles iteratively.


https://bitbucket.org/yt_analysis/yt/commits/4e17843128df/
Changeset:   4e17843128df
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-10 06:12:20
Summary:     fmp functional, completes without errors
Affected #:  4 files

diff -r bc38946cada32794608c4d64aac6fba27db505ca -r 4e17843128dfc20886ad56d4b050266c87cfd01a yt/analysis_modules/halo_finding/fmp/fmp.py
--- a/yt/analysis_modules/halo_finding/fmp/fmp.py
+++ b/yt/analysis_modules/halo_finding/fmp/fmp.py
@@ -3,7 +3,7 @@
 and finding the max density around the previous timestep
 
 Author: Christopher Moody <chrisemoody at gmail.com>
-Affiliation: Columbia University
+Affiliation: UC Santa Cruz
 Homepage: http://yt.enzotools.org/
 License:
   Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
@@ -23,35 +23,35 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
-
-#Take the first snapshot, find max
-#Define new position for 2nd snap from old position + bulk velocity
-#Draw sphere around new position
-#Find new max density
-#Or particle approach:
-#Take last snapshot
-#Find highest particle density
-#Keep indices of n=1000 most bound particles
-#Then go to N-1 snapshot, find new positions of particles
-#Find new max particle density 
-#Find new set of n=1000 particles around new center
+import numpy as np
+from yt.frontends.stream.api import load_uniform_grid
 
 class FindMaxProgenitor:
     def __init__(self,ts):
         self.ts = ts
     
-    def find_max_field(self,field,radius=0.01)):
+    def find_max_field(self,field="Density",radius=0.01,use_bulk=True):
         """
         Find the maximum of the given field, and iterate backwards through 
         snapshots finding the maxima within a small radius of the previous 
         snapshot's center.
         """
         centers = []
-        v,c = ts[-1].h.find_max(field)
-        for pf in ts[::-1]:
+        v,c = self.ts[-1].h.find_max(field)
+        t_old = None
+        for pf in self.ts[::-1]:
+            t = pf.current_time
+            if t_old:
+                dt = t_old - t
+                c += dt*bv
             sph = pf.h.sphere(c,radius)
-            vn,c = sph.find_max(field)
+            v,i,x,y,z=sph.quantities["MaxLocation"]("Density")
+            c = np.array([x,y,z])
             centers.append(c)
+            bv = sph.quantities["BulkVelocity"]()
+            #bv is in cgs but center is in unitary units
+            bv /= pf['cm']
+            t_old = pf.current_time
         return centers
 
     def find_max_particle(self,radius=0.01,nparticles=1000):
@@ -61,24 +61,29 @@
         previously closest nparticles.
         """
         indices = None
-        dd = ts[-1].h.all_data()
-        domain_dimensions = ts[-1].domain_dimensions
-        sim_unit_to_cm = ts[-1]['cm']
+        dd = self.ts[-1].h.all_data()
+        domain_dimensions = self.ts[-1].domain_dimensions
+        sim_unit_to_cm = self.ts[-1]['cm']
         v,c = dd.quantities["ParticleDensityCenter"]()
-        centers = [c,]
-        for pf in ts[::-1]:
-            if indices:
+        centers = []
+        for pf in self.ts[::-1]:
+            if indices is not None:
                 dd = pf.h.all_data()
                 data = dict(number_of_particles=indices.shape[0])
+                index = dd[('all','particle_index')]
+                order= np.argsort(index)
+                assert np.all(index[order][indices]==indices)
                 for ax in 'xyz':
-                    data['particle_position_%s'%ax]=\
-                            dd["particle_position_%s"%ax][indices]
+                    pos = dd[('all',"particle_position_%s"%ax)][order][indices]
+                    data[('all','particle_position_%s'%ax)]= pos
+                mas = dd[('all',"particle_mass")][order][indices]
+                data[('all','particle_mass')]= mas
                 subselection = load_uniform_grid(data,domain_dimensions,
                                                  sim_unit_to_cm)
                 ss = subselection.h.all_data()
                 v,c = ss.quantities["ParticleDensityCenter"]()
             sph = pf.h.sphere(c,radius)
-            rad = sph["particle_radius"]
+            rad = sph["ParticleRadius"]
             idx = sph["particle_index"]
             indices = idx[np.argsort(rad)[:nparticles]]
             centers.append(c)

diff -r bc38946cada32794608c4d64aac6fba27db505ca -r 4e17843128dfc20886ad56d4b050266c87cfd01a yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -684,10 +684,12 @@
     Find the center of the particle density
     by histogramming the particles iteratively.
     """
-    pos = np.array([data["particle_position_%s"%ax] for ax in "xyz"]).T
-    mas = data["particle_mass"]
+    pos = np.array([data[('all',"particle_position_%s"%ax)] for ax in "xyz"]).T
+    mas = data[('all',"particle_mass")]
     calc_radius= lambda x,y:np.sqrt(np.sum((x-y)**2.0,axis=1))
     density = 0
+    if pos.shape[0]==0:
+        return -1.0,[-1.,-1.,-1.]
     while pos.shape[0] > 1:
         table,bins=np.histogramdd(pos,bins=nbins, weights=mas)
         bin_size = min((np.max(bins,axis=1)-np.min(bins,axis=1))/nbins)

diff -r bc38946cada32794608c4d64aac6fba27db505ca -r 4e17843128dfc20886ad56d4b050266c87cfd01a yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -184,11 +184,11 @@
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
-        self._find_files(filename)
         self.file_amr = filename
         self.file_particle_header = file_particle_header
         self.file_particle_data = file_particle_data
         self.file_particle_stars = file_particle_stars
+        self._find_files(filename)
         self.parameter_filename = filename
         self.skip_particles = skip_particles
         self.skip_stars = skip_stars

diff -r bc38946cada32794608c4d64aac6fba27db505ca -r 4e17843128dfc20886ad56d4b050266c87cfd01a yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -66,8 +66,16 @@
 
     def _read_particle_selection(self, chunks, selector, fields):
         # ignore chunking; we have no particle chunk system
+        chunk = chunks.next()
+        level = chunk.objs[0].domain.domain_level 
+        # only chunk out particles on level zero
+        if level > 0: 
+            tr = {}
+            for field in fields:
+                tr[field] = np.array([],'f8')
+            return tr
+        pf = chunk.objs[0].domain.pf
         masks = {}
-        pf = (chunks.next()).objs[0].domain.pf
         ws, ls = pf.parameters["wspecies"], pf.parameters["lspecies"]
         sizes = np.diff(np.concatenate(([0], ls)))
         ptmax = ws[-1]
@@ -115,17 +123,19 @@
             if pbool[-1] and fname in particle_star_fields:
                 data = read_star_field(file_stars, field=fname)
                 temp = tr.get(field, np.zeros(npa, 'f8'))
-                temp[-nstars:] = data
+                if nstars > 0:
+                    temp[-nstars:] = data
                 tr[field] = temp
             if fname == "particle_creation_time":
-                data = tr.get(field, np.zeros(npa, 'f8'))
                 self.tb, self.ages, data = interpolate_ages(
                     tr[field][-nstars:],
                     file_stars,
                     self.tb,
                     self.ages,
                     pf.current_time)
-                tr.get(field, np.zeros(npa, 'f8'))[-nstars:] = data
+                temp = tr.get(field, np.zeros(npa, 'f8'))
+                temp[-nstars:] = data
+                tr[field]=temp
                 del data
             tr[field] = tr[field][mask]
             ftype_old = ftype
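
The bulk-velocity step in find_max_field is plain dead reckoning between
outputs: convert the sphere's bulk velocity from cgs to code units with
pf['cm'], multiply by the time elapsed between snapshots, and shift the
search center before drawing the next sphere.  A standalone sketch of just
that update (all values are made up; cm_per_code stands in for pf['cm']):

    import numpy as np

    c = np.array([0.52, 0.48, 0.50])         # center, code (unitary) units
    bv = np.array([2.1e7, -1.3e7, 4.0e6])    # bulk velocity, cm/s
    cm_per_code = 3.0e24                     # stand-in for pf['cm']
    t_old, t = 13.5, 13.4                    # later and earlier output times

    bv_code = bv / cm_per_code               # velocity in code units
    dt = t_old - t
    c = c + dt * bv_code                     # guessed center, earlier output
    print c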


https://bitbucket.org/yt_analysis/yt/commits/5e06580e03f4/
Changeset:   5e06580e03f4
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 18:52:34
Summary:     rewrote io to get particle field types working, faster particle loading
Affected #:  1 file

diff -r 4e17843128dfc20886ad56d4b050266c87cfd01a -r 5e06580e03f446dc23158454b743ced92968ea02 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -65,83 +65,82 @@
         return tr
 
     def _read_particle_selection(self, chunks, selector, fields):
-        # ignore chunking; we have no particle chunk system
-        chunk = chunks.next()
-        level = chunk.objs[0].domain.domain_level 
-        # only chunk out particles on level zero
-        if level > 0: 
-            tr = {}
+        tr = {}
+        fields_read = []
+        for chunk in chunks:
+            level = chunk.objs[0].domain.domain_level 
+            pf = chunk.objs[0].domain.pf
+            masks = {}
+            ws, ls = pf.parameters["wspecies"], pf.parameters["lspecies"]
+            sizes = np.diff(np.concatenate(([0], ls)))
+            ptmax = ws[-1]
+            npt = ls[-1]
+            nstars = ls[-1]-ls[-2]
+            file_particle = pf.file_particle_data
+            file_stars = pf.file_particle_stars
+            ftype_old = None
             for field in fields:
-                tr[field] = np.array([],'f8')
-            return tr
-        pf = chunk.objs[0].domain.pf
-        masks = {}
-        ws, ls = pf.parameters["wspecies"], pf.parameters["lspecies"]
-        sizes = np.diff(np.concatenate(([0], ls)))
-        ptmax = ws[-1]
-        npt = ls[-1]
-        nstars = ls[-1]-ls[-2]
-        file_particle = pf.file_particle_data
-        file_stars = pf.file_particle_stars
-        tr = {}
-        ftype_old = None
-        for field in fields:
-            ftype, fname = field
-            pbool, idxa, idxb = _determine_field_size(pf, ftype, ls, ptmax)
-            npa = idxb-idxa
-            if not ftype_old == ftype:
-                pos, vel = read_particles(file_particle, pf.parameters['Nrow'],
-                                          dd=pf.domain_dimensions,
-                                          idxa=idxa, idxb=idxb)
-                pos, vel = pos.astype('float64'), vel.astype('float64')
-                pos -= 1.0/pf.domain_dimensions[0]
-                mask = selector.select_points(pos[:, 0], pos[:, 1], pos[:, 2])
-                size = mask.sum()
-            for i, ax in enumerate('xyz'):
-                if fname.startswith("particle_position_%s" % ax):
-                    tr[field] = pos[:, i]
-                if fname.startswith("particle_velocity_%s" % ax):
-                    tr[field] = vel[:, i]
-            if fname == "particle_mass":
-                a = 0
-                data = np.zeros(npa, dtype='f8')
-                for ptb, size, m in zip(pbool, sizes, ws):
-                    if ptb:
-                        data[a:a+size] = m
-                        a += size
-                tr[field] = data
-            elif fname == "particle_index":
-                tr[field] = np.arange(idxa, idxb).astype('int64')
-            elif fname == "particle_type":
-                a = 0
-                data = np.zeros(npa, dtype='int')
-                for i, (ptb, size) in enumerate(zip(pbool, sizes)):
-                    if ptb:
-                        data[a:a+size] = i
-                        a += size
-                tr[field] = data
-            if pbool[-1] and fname in particle_star_fields:
-                data = read_star_field(file_stars, field=fname)
-                temp = tr.get(field, np.zeros(npa, 'f8'))
-                if nstars > 0:
+                if field in fields_read: continue
+                ftype, fname = field
+                pbool, idxa, idxb = _determine_field_size(pf, ftype, ls, ptmax)
+                npa = idxb-idxa
+                if not ftype_old == ftype:
+                    Nrow = pf.parameters["Nrow"]
+                    rp = lambda ax: read_particles(file_particle,Nrow,idxa=idxa,
+                                                   idxb=idxb,field=ax)
+                    x,y,z = (rp(ax) for ax in 'xyz')
+                    dd = pf.domain_dimensions[0]
+                    off = 1.0/dd
+                    x,y,z = (t.astype('f8')/dd - off  for t in (x,y,z))
+                    mask = selector.select_points(x,y,z)
+                    size = mask.sum()
+                for i, ax in enumerate('xyz'):
+                    if fname.startswith("particle_position_%s" % ax):
+                        tr[field] = vars()[ax]
+                    if fname.startswith("particle_velocity_%s" % ax):
+                        tr[field] = rp('v'+ax)
+                if fname == "particle_mass":
+                    a = 0
+                    data = np.zeros(npa, dtype='f8')
+                    for ptb, size, m in zip(pbool, sizes, ws):
+                        if ptb:
+                            data[a:a+size] = m
+                            a += size
+                    tr[field] = data
+                elif fname == "particle_index":
+                    tr[field] = np.arange(idxa, idxb).astype('int64')
+                elif fname == "particle_type":
+                    a = 0
+                    data = np.zeros(npa, dtype='int')
+                    for i, (ptb, size) in enumerate(zip(pbool, sizes)):
+                        if ptb:
+                            data[a:a+size] = i
+                            a += size
+                    tr[field] = data
+                if pbool[-1] and fname in particle_star_fields:
+                    data = read_star_field(file_stars, field=fname)
+                    temp = tr.get(field, np.zeros(npa, 'f8'))
+                    if nstars > 0:
+                        temp[-nstars:] = data
+                    tr[field] = temp
+                if fname == "particle_creation_time":
+                    self.tb, self.ages, data = interpolate_ages(
+                        tr[field][-nstars:],
+                        file_stars,
+                        self.tb,
+                        self.ages,
+                        pf.current_time)
+                    temp = tr.get(field, np.zeros(npa, 'f8'))
                     temp[-nstars:] = data
-                tr[field] = temp
-            if fname == "particle_creation_time":
-                self.tb, self.ages, data = interpolate_ages(
-                    tr[field][-nstars:],
-                    file_stars,
-                    self.tb,
-                    self.ages,
-                    pf.current_time)
-                temp = tr.get(field, np.zeros(npa, 'f8'))
-                temp[-nstars:] = data
-                tr[field]=temp
-                del data
-            tr[field] = tr[field][mask]
-            ftype_old = ftype
+                    tr[field]=temp
+                    del data
+                tr[field] = tr[field][mask]
+                ftype_old = ftype
+                fields_read.append(field)
+        if tr=={}:
+            tr = dict((f,np.array([])) for f in fields)
         return tr
 
-
 def _determine_field_size(pf, field, lspecies, ptmax):
     pbool = np.zeros(len(lspecies), dtype="bool")
     idxas = np.concatenate(([0, ], lspecies[:-1]))
@@ -324,19 +323,32 @@
     f.seek(pos)
     return unitary_center, fl, iocts, nLevel, root_level
 
-
-def read_particles(file, Nrow, dd=1.0, idxa=None, idxb=None):
+def read_particles(file,Nrow,idxa=None,idxb=None,field=None):
     words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4  # for file_particle_data; not always true?
     np_per_page = Nrow**2  # defined in ART a_setup.h
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
-
-    f = np.fromfile(file, dtype='>f4').astype('float32')  # direct access
-    pages = np.vsplit(np.reshape(f, (
-        num_pages, words, np_per_page)), num_pages)
-    data = np.squeeze(np.dstack(pages)).T  # x,y,z,vx,vy,vz
-    return data[idxa:idxb, 0:3]/dd, data[idxa:idxb, 3:]
-
+    data = np.array([],'f4')
+    fh = open(file,'r')
+    totalp = idxb-idxa
+    left = totalp
+    for page in range(num_pages):
+        for i,fname in enumerate(['x','y','z','vx','vy','vz']):
+            if i==field or fname==field:
+                if idxa is not None:
+                    fh.seek(real_size*idxa,1)
+                    count = min(np_per_page,left)
+                    temp = np.fromfile(fh,count=count,dtype='>f4')
+                    pageleft = np_per_page-count-idxa
+                    fh.seek(real_size*pageleft,1)
+                    left -= count
+                else:
+                    count = np_per_page
+                    temp = np.fromfile(fh,count=count,dtype='>f4')
+                data = np.concatenate((data,temp))
+            else:
+                fh.seek(4*np_per_page,1)
+    return data
 
 def read_star_field(file, field=None):
     data = {}
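
The rewritten read_particles walks the paged ART particle file one field at a
time: each page holds np_per_page reals for each of the six interleaved
fields (x, y, z, vx, vy, vz), so five of the six can be skipped with a seek
instead of being read.  A rough sketch of that layout arithmetic (the file
name and Nrow are placeholders; the real values come from the dataset):

    import os
    import numpy as np

    fname = "PMcrd.DAT"    # placeholder particle data file
    Nrow = 256             # placeholder; pf.parameters["Nrow"] in practice
    real_size = 4          # '>f4' records
    words = 6              # x, y, z, vx, vy, vz per particle
    np_per_page = Nrow ** 2
    num_pages = os.path.getsize(fname) / (real_size * words * np_per_page)

    def read_field(fname, field_index):
        # gather one of the six interleaved fields across all pages
        data = np.array([], dtype='>f4')
        fh = open(fname, 'rb')
        for page in range(num_pages):
            for i in range(words):
                if i == field_index:
                    temp = np.fromfile(fh, count=np_per_page, dtype='>f4')
                    data = np.concatenate((data, temp))
                else:
                    fh.seek(real_size * np_per_page, 1)  # skip this field
        fh.close()
        return data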


https://bitbucket.org/yt_analysis/yt/commits/826b3fcdb1cd/
Changeset:   826b3fcdb1cd
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 18:53:34
Summary:     added option to initialize the center, subselect particle type
Affected #:  1 file

diff -r 5e06580e03f446dc23158454b743ced92968ea02 -r 826b3fcdb1cd4bd388bfff68204b85f875472e16 yt/analysis_modules/halo_finding/fmp/fmp.py
--- a/yt/analysis_modules/halo_finding/fmp/fmp.py
+++ b/yt/analysis_modules/halo_finding/fmp/fmp.py
@@ -54,7 +54,8 @@
             t_old = pf.current_time
         return centers
 
-    def find_max_particle(self,radius=0.01,nparticles=1000):
+    def find_max_particle(self,initial_center=None,radius=0.01,nparticles=1000,
+                          particle_type="all"):
         """
         Find the particle at the maximum density and iterate backwards through
         snapshots, finding the location of the maximum density of the 
@@ -64,19 +65,21 @@
         dd = self.ts[-1].h.all_data()
         domain_dimensions = self.ts[-1].domain_dimensions
         sim_unit_to_cm = self.ts[-1]['cm']
-        v,c = dd.quantities["ParticleDensityCenter"]()
+        c = initial_center
+        if c is None:
+            v,c = dd.quantities["ParticleDensityCenter"](particle_type=\
+                                                         particle_type)
         centers = []
         for pf in self.ts[::-1]:
             if indices is not None:
-                dd = pf.h.all_data()
+                dd = pf.h.sphere(c,radius)
                 data = dict(number_of_particles=indices.shape[0])
-                index = dd[('all','particle_index')]
-                order= np.argsort(index)
-                assert np.all(index[order][indices]==indices)
+                index = dd[(particle_type,'particle_index')]
+                inside = find_index_in_array(index,indices)
                 for ax in 'xyz':
-                    pos = dd[('all',"particle_position_%s"%ax)][order][indices]
+                    pos = dd[(particle_type,"particle_position_%s"%ax)][inside]
                     data[('all','particle_position_%s'%ax)]= pos
-                mas = dd[('all',"particle_mass")][order][indices]
+                mas = dd[(particle_type,"particle_mass")][inside]
                 data[('all','particle_mass')]= mas
                 subselection = load_uniform_grid(data,domain_dimensions,
                                                  sim_unit_to_cm)
@@ -89,3 +92,21 @@
             centers.append(c)
         return centers
             
+def chunks(l, n):
+    #http://stackoverflow.com/questions/312443/how-do-you-split-
+    #a-list-into-evenly-sized-chunks-in-python
+    """ Yield successive n-sized chunks from l.
+    """
+    for i in xrange(0, len(l), n):
+        yield l[i:i+n]
+
+def find_index_in_array(arr1,arr2,size=long(1e6)):
+    #for element in arr2 find corresponding index in arr1
+    #temporary size is arr1.shape x arr2.shape so chunk this out
+    indices = np.array((),'i8')
+    for chunk in chunks(arr1,size):
+        idx = np.where(np.reshape(chunk,(chunk.shape[0],1))==
+                       np.reshape(arr2,(1,arr2.shape[0])))[0]
+        indices = np.concatenate((indices,idx)).astype('i8')
+    return indices
+
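
find_index_in_array compares an (N, 1) reshape of one chunk against a (1, M)
reshape of the tracked indices, so the broadcast temporary scales as N*M
booleans; chunking arr1 caps that at size*M.  A toy run of the comparison
itself:

    import numpy as np

    arr1 = np.array([10, 11, 12, 13, 14], dtype='i8')  # e.g. particle_index
    arr2 = np.array([12, 10], dtype='i8')              # tracked indices

    # (5,1) == (1,2) broadcasts to a (5,2) boolean table; np.where(...)[0]
    # picks the rows of arr1 that match anything in arr2
    hits = np.where(arr1.reshape(-1, 1) == arr2.reshape(1, -1))[0]
    print hits   # [0 2]: positions of 10 and 12 in arr1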


https://bitbucket.org/yt_analysis/yt/commits/57852d850e1d/
Changeset:   57852d850e1d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 18:54:30
Summary:     fixing typos
Affected #:  1 file

diff -r 826b3fcdb1cd4bd388bfff68204b85f875472e16 -r 57852d850e1db56c8a39b1ba1b97b574dc14b7aa yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -679,13 +679,14 @@
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
 
-def _ParticleDensityCenter(data,nbins=3):
+def _ParticleDensityCenter(data,nbins=3,particle_type="all"):
     """
     Find the center of the particle density
     by histogramming the particles iteratively.
     """
-    pos = np.array([data[('all',"particle_position_%s"%ax)] for ax in "xyz"]).T
-    mas = data[('all',"particle_mass")]
+    pos = [data[(particle_type,"particle_position_%s"%ax)] for ax in "xyz"]
+    pos = np.array(pos).T
+    mas = data[(particle_type,"particle_mass")]
     calc_radius= lambda x,y:np.sqrt(np.sum((x-y)**2.0,axis=1))
     density = 0
     if pos.shape[0]==0:
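
With the new keyword the quantity can be restricted to one particle species;
assuming an already-loaded dataset pf whose frontend defines a "stars"
particle type, the call would look like:

    dd = pf.h.all_data()
    v, c = dd.quantities["ParticleDensityCenter"](particle_type="stars")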


https://bitbucket.org/yt_analysis/yt/commits/abfa37edf3cb/
Changeset:   abfa37edf3cb
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 18:54:46
Summary:     cleaner
Affected #:  1 file

diff -r 57852d850e1db56c8a39b1ba1b97b574dc14b7aa -r abfa37edf3cbb081bbe5dcf568a246365acc2e7a yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -142,7 +142,8 @@
                 nocts = d.level_count[d.domain_level]
                 if c < 1:
                     continue
-                subsets += ARTDomainSubset(d, mask, c, d.domain_level),
+                subset = ARTDomainSubset(d, mask, c, d.domain_level)
+                subsets.append(subset)
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)


https://bitbucket.org/yt_analysis/yt/commits/7bbdd187dae0/
Changeset:   7bbdd187dae0
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 19:06:32
Summary:     found np.in1d, sped things up
Affected #:  1 file

diff -r abfa37edf3cbb081bbe5dcf568a246365acc2e7a -r 7bbdd187dae0986f58f3e617c2a2c35cb428826d yt/analysis_modules/halo_finding/fmp/fmp.py
--- a/yt/analysis_modules/halo_finding/fmp/fmp.py
+++ b/yt/analysis_modules/halo_finding/fmp/fmp.py
@@ -24,6 +24,7 @@
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
 import numpy as np
+from yt.funcs import *
 from yt.frontends.stream.api import load_uniform_grid
 
 class FindMaxProgenitor:
@@ -75,38 +76,22 @@
                 dd = pf.h.sphere(c,radius)
                 data = dict(number_of_particles=indices.shape[0])
                 index = dd[(particle_type,'particle_index')]
-                inside = find_index_in_array(index,indices)
+                mylog.info("Collecting particles")
+                inside = np.in1d(indices,index,assume_unique=True)
                 for ax in 'xyz':
                     pos = dd[(particle_type,"particle_position_%s"%ax)][inside]
                     data[('all','particle_position_%s'%ax)]= pos
                 mas = dd[(particle_type,"particle_mass")][inside]
                 data[('all','particle_mass')]= mas
+                mylog.info("Finding center")
                 subselection = load_uniform_grid(data,domain_dimensions,
                                                  sim_unit_to_cm)
                 ss = subselection.h.all_data()
                 v,c = ss.quantities["ParticleDensityCenter"]()
+            mylog.info("Finding central indices")
             sph = pf.h.sphere(c,radius)
             rad = sph["ParticleRadius"]
             idx = sph["particle_index"]
             indices = idx[np.argsort(rad)[:nparticles]]
             centers.append(c)
         return centers
-            
-def chunks(l, n):
-    #http://stackoverflow.com/questions/312443/how-do-you-split-
-    #a-list-into-evenly-sized-chunks-in-python
-    """ Yield successive n-sized chunks from l.
-    """
-    for i in xrange(0, len(l), n):
-        yield l[i:i+n]
-
-def find_index_in_array(arr1,arr2,size=long(1e6)):
-    #for element in arr2 find corresponding index in arr1
-    #temporary size is arr1.shape x arr2.shape so chunk this out
-    indices = np.array((),'i8')
-    for chunk in chunks(arr1,size):
-        idx = np.where(np.reshape(chunk,(chunk.shape[0],1))==
-                       np.reshape(arr2,(1,arr2.shape[0])))[0]
-        indices = np.concatenate((indices,idx)).astype('i8')
-    return indices
-
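
np.in1d(a, b) returns one boolean per element of a, flagging which of them
occur in b, and runs in roughly sort/merge time rather than the quadratic
broadcast it replaces here.  A toy run matching the usage above:

    import numpy as np

    indices = np.array([7, 3, 9])     # ids of the tracked particles
    index = np.array([1, 3, 5, 7])    # ids found in the current sphere
    mask = np.in1d(indices, index, assume_unique=True)
    print mask                        # [ True  True False]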


https://bitbucket.org/yt_analysis/yt/commits/9145a6b40927/
Changeset:   9145a6b40927
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 23:09:17
Summary:     fixing domain_width; smallest dx now works
Affected #:  1 file

diff -r 7bbdd187dae0986f58f3e617c2a2c35cb428826d -r 9145a6b4092782e12842f38d82f1830927dc5211 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -86,6 +86,7 @@
         self.max_level = pf.max_level
         self.float_type = np.float64
         super(ARTGeometryHandler, self).__init__(pf, data_style)
+        pf.domain_width = pf.domain_dimensions.astype('f8')
 
     def _initialize_oct_handler(self):
         """


https://bitbucket.org/yt_analysis/yt/commits/fd133f2921db/
Changeset:   fd133f2921db
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 23:09:32
Summary:     adding more verbosity to level loading
Affected #:  1 file

diff -r 9145a6b4092782e12842f38d82f1830927dc5211 -r fd133f2921db8b75ea3f4d2de19b585643a2df67 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -57,7 +57,8 @@
                 # and pick off the right vectors
                 rv = subset.fill(f, fields)
                 for ft, f in fields:
-                    mylog.debug("Filling %s with %s (%0.3e %0.3e) (%s:%s)",
+                    mylog.debug("Filling L%i %s with %s (%0.3e %0.3e) (%s:%s)",
+                                subset.domain_level,
                                 f, subset.cell_count, rv[f].min(), rv[f].max(),
                                 cp, cp+subset.cell_count)
                     tr[(ft, f)][cp:cp+subset.cell_count] = rv.pop(f)


https://bitbucket.org/yt_analysis/yt/commits/e3c8b836f606/
Changeset:   e3c8b836f606
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-11 23:16:56
Summary:     fixed width to 1.0/dimensions
Affected #:  2 files

diff -r fd133f2921db8b75ea3f4d2de19b585643a2df67 -r e3c8b836f606e07d95c323036b3e99d2623027e6 yt/analysis_modules/halo_finding/fmp/fmp.py
--- a/yt/analysis_modules/halo_finding/fmp/fmp.py
+++ b/yt/analysis_modules/halo_finding/fmp/fmp.py
@@ -26,8 +26,13 @@
 import numpy as np
 from yt.funcs import *
 from yt.frontends.stream.api import load_uniform_grid
+from yt.utilities.parallel_tools import parallel_analysis_interface
+from yt.utilities.parallel_tools.parallel_analysis_interface import \
+     parallel_objects
+from yt.utilities.parallel_tools.parallel_analysis_interface import \
+    ParallelAnalysisInterface, ProcessorPool, Communicator
 
-class FindMaxProgenitor:
+class FindMaxProgenitor(ParallelAnalysisInterface):
     def __init__(self,ts):
         self.ts = ts
     
@@ -70,14 +75,24 @@
         if c is None:
             v,c = dd.quantities["ParticleDensityCenter"](particle_type=\
                                                          particle_type)
-        centers = []
-        for pf in self.ts[::-1]:
-            if indices is not None:
+        centers = {}
+        earliest_c = c
+        for pfs_chunk in chunks(self.ts[::-1]):
+            pf = pfs_chunk[0] #first is the latest in time
+            mylog.info("Finding central indices")
+            sph = pf.h.sphere(earliest_c,radius)
+            rad = sph["ParticleRadius"]
+            idx = sph["particle_index"]
+            indices = idx[np.argsort(rad)[:nparticles]]
+            for sto, pf in parallel_objects(pfs_chunk,storage=centers):
                 dd = pf.h.sphere(c,radius)
                 data = dict(number_of_particles=indices.shape[0])
                 index = dd[(particle_type,'particle_index')]
-                mylog.info("Collecting particles")
                 inside = np.in1d(indices,index,assume_unique=True)
+                mylog.info("Collecting particles %1.1e of %1.1e",inside.sum(),
+                           nparticles)
+                if inside.sum()==0:
+                    mylog.warning("Found no matching indices in %s",str(pf)) 
                 for ax in 'xyz':
                     pos = dd[(particle_type,"particle_position_%s"%ax)][inside]
                     data[('all','particle_position_%s'%ax)]= pos
@@ -88,10 +103,17 @@
                                                  sim_unit_to_cm)
                 ss = subselection.h.all_data()
                 v,c = ss.quantities["ParticleDensityCenter"]()
-            mylog.info("Finding central indices")
-            sph = pf.h.sphere(c,radius)
-            rad = sph["ParticleRadius"]
-            idx = sph["particle_index"]
-            indices = idx[np.argsort(rad)[:nparticles]]
-            centers.append(c)
+                sto.result_id = pf.parameters['aexpn']
+                sto.result = c
+            #last in the chunk is the earliest in time
+            earliest_c = centers[pfs_chunk[-1].parameters['aexpn']]
         return centers
+
+def chunks(l, n):
+    """ Yield successive n-sized chunks from l.
+    """
+    #http://stackoverflow.com/questions/312443/
+    #how-do-you-split-a-list-into-evenly-sized-chunks-in-python
+    for i in xrange(0, len(l), n):
+        yield l[i:i+n]
+

diff -r fd133f2921db8b75ea3f4d2de19b585643a2df67 -r e3c8b836f606e07d95c323036b3e99d2623027e6 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -86,7 +86,7 @@
         self.max_level = pf.max_level
         self.float_type = np.float64
         super(ARTGeometryHandler, self).__init__(pf, data_style)
-        pf.domain_width = pf.domain_dimensions.astype('f8')
+        pf.domain_width = 1.0/pf.domain_dimensions.astype('f8')
 
     def _initialize_oct_handler(self):
         """


https://bitbucket.org/yt_analysis/yt/commits/b837a203a571/
Changeset:   b837a203a571
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 20:29:36
Summary:     Merged yt_analysis/yt-3.0 into yt-3.0
Affected #:  38 files

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -5,11 +5,16 @@
 hdf5.cfg
 png.cfg
 yt_updater.log
+yt/frontends/artio/_artio_caller.c
 yt/frontends/ramses/_ramses_reader.cpp
+yt/frontends/sph/smoothing_kernel.c
+yt/geometry/oct_container.c
+yt/geometry/selection_routines.c
 yt/utilities/amr_utils.c
 yt/utilities/kdtree/forthonf2c.h
 yt/utilities/libconfig_wrapper.c
 yt/utilities/spatial/ckdtree.c
+yt/utilities/lib/alt_ray_tracers.c
 yt/utilities/lib/CICDeposit.c
 yt/utilities/lib/ContourFinding.c
 yt/utilities/lib/DepthFirstOctree.c

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b FUNDING
--- a/FUNDING
+++ b/FUNDING
@@ -1,6 +1,7 @@
 The development of yt has benefited from funding from many different sources
 and institutions.  Here is an incomplete list of these sources:
 
+  * NSF grant OCI-0904484
   * NSF grant OCI-1048505
   * NSF grant AST-0239709 
   * NSF grant AST-0707474
@@ -22,6 +23,7 @@
   * Columbia University
   * Harvard-Smithsonian Center for Astrophysics
   * Institute for Advanced Study
+  * Kavli Institute for Cosmological Physics
   * Kavli Institute for Particle Astrophysics and Cosmology
   * Kavli Institute for Theoretical Physics
   * Los Alamos National Lab
@@ -32,4 +34,7 @@
   * University of California at Berkeley
   * University of California at San Diego
   * University of California at Santa Cruz
+  * University of Chicago Research Computing Center
   * University of Colorado at Boulder
+  * University of Maryland at College Park
+  * Yale University 

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b setup.cfg
--- a/setup.cfg
+++ b/setup.cfg
@@ -6,4 +6,4 @@
 detailed-errors=1
 where=yt
 exclude=answer_testing
-with-xunit=1
\ No newline at end of file
+with-xunit=1

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -455,7 +455,7 @@
     def _identify_dependencies(self, fields_to_get):
         inspected = 0
         fields_to_get = fields_to_get[:]
-        for ftype, field in itertools.cycle(fields_to_get):
+        for field in itertools.cycle(fields_to_get):
             if inspected >= len(fields_to_get): break
             inspected += 1
             if field not in self.pf.field_dependencies: continue

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -928,7 +928,6 @@
             raise YTSphereTooSmall(pf, radius, self.hierarchy.get_smallest_dx())
         self.set_field_parameter('radius',radius)
         self.radius = radius
-        self.DW = self.pf.domain_right_edge - self.pf.domain_left_edge
 
 class YTEllipsoidBase(YTSelectionContainer3D):
     """

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -251,6 +251,8 @@
     _last_freq = (None, None)
     _last_finfo = None
     def _get_field_info(self, ftype, fname):
+        if ftype == "unknown" and self._last_freq[0] != None:
+            ftype = self._last_freq[0]
         field = (ftype, fname)
         if field == self._last_freq or fname == self._last_freq[1]:
             return self._last_finfo

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -69,6 +69,8 @@
         dd2.field_parameters.update(_sample_parameters)
         v1 = dd1[self.field_name]
         conv = field._convert_function(dd1) or 1.0
+        if not field.particle_type:
+            assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
             assert_equal(v1, conv*field._function(field, dd2))
         if not skip_grids:

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/_artio_caller.pyx
--- a/yt/frontends/artio/_artio_caller.pyx
+++ b/yt/frontends/artio/_artio_caller.pyx
@@ -10,17 +10,12 @@
 from libc.stdint cimport int32_t, int64_t
 from libc.stdlib cimport malloc, free
 import  data_structures  
-from yt.geometry.oct_container cimport \
-    OctreeContainer, \
-    ARTIOOctreeContainer, \
-    Oct
-
-cdef extern from "stdlib.h":
-    void *alloca(int)
 
 cdef extern from "artio.h":
     ctypedef struct artio_fileset_handle "artio_fileset" :
         pass
+    ctypedef struct artio_selection "artio_selection" :
+        pass
     ctypedef struct artio_context :
         pass
     cdef extern artio_context *artio_context_global 
@@ -38,6 +33,8 @@
     cdef int ARTIO_TYPE_DOUBLE "ARTIO_TYPE_DOUBLE"
     cdef int ARTIO_TYPE_LONG "ARTIO_TYPE_LONG"
 
+    cdef int ARTIO_MAX_STRING_LENGTH "ARTIO_MAX_STRING_LENGTH"
+
     cdef int ARTIO_PARAMETER_EXHAUSTED "ARTIO_PARAMETER_EXHAUSTED"
 
     # grid read options
@@ -58,14 +55,26 @@
     int artio_fileset_open_grid(artio_fileset_handle *handle) 
     int artio_fileset_close_grid(artio_fileset_handle *handle) 
 
+    # selection functions
+    artio_selection *artio_selection_allocate( artio_fileset_handle *handle )
+    artio_selection *artio_select_all( artio_fileset_handle *handle )
+    artio_selection *artio_select_volume( artio_fileset_handle *handle, double lpos[3], double rpos[3] )
+    int artio_selection_add_root_cell( artio_selection *selection, int coords[3] )
+    int artio_selection_destroy( artio_selection *selection )
+    int artio_selection_iterator( artio_selection *selection,
+            int64_t max_range_size, int64_t *start, int64_t *end )
+    int64_t artio_selection_size( artio_selection *selection )
+    void artio_selection_print( artio_selection *selection )
+
     # parameter functions
     int artio_parameter_iterate( artio_fileset_handle *handle, char *key, int *type, int *length )
     int artio_parameter_get_int_array(artio_fileset_handle *handle, char * key, int length, int32_t *values)
     int artio_parameter_get_float_array(artio_fileset_handle *handle, char * key, int length, float *values)
     int artio_parameter_get_long_array(artio_fileset_handle *handle, char * key, int length, int64_t *values)
     int artio_parameter_get_double_array(artio_fileset_handle *handle, char * key, int length, double *values)
-    int artio_parameter_get_string_array(artio_fileset_handle *handle, char * key, int length, char **values, int max_length)
+    int artio_parameter_get_string_array(artio_fileset_handle *handle, char * key, int length, char **values )
 
+    # grid functions
     int artio_grid_cache_sfc_range(artio_fileset_handle *handle, int64_t start, int64_t end)
     int artio_grid_clear_sfc_cache( artio_fileset_handle *handle ) 
 
@@ -99,34 +108,39 @@
     if status!=ARTIO_SUCCESS :
         callername = sys._getframe().f_code.co_name
         nline = sys._getframe().f_lineno
-        print 'failure with status', status, 'in function',fname,'from caller', callername, nline 
-        sys.exit(1)
+        raise RuntimeError('failure with status', status, 'in function',fname,'from caller', callername, nline)
 
 cdef class artio_fileset :
     cdef public object parameters 
-    cdef ARTIOOctreeContainer oct_handler
     cdef artio_fileset_handle *handle
-    cdef artio_fileset_handle *particle_handle
+
+    # common attributes
+    cdef public int num_grid
     cdef int64_t num_root_cells
     cdef int64_t sfc_min, sfc_max
-    cdef public int num_grid
 
     # grid attributes
-    cdef int min_level, max_level
-    cdef int num_grid_variables
+    cdef public int min_level, max_level
+    cdef public int num_grid_variables
+    cdef int *num_octs_per_level
+    cdef float *grid_variables
 
-    cdef int cpu
-    cdef int domain_id
-
+    # particle attributes
+    cdef public int num_species
+    cdef int *particle_position_index
+    cdef int *num_particles_per_species
+    cdef double *primary_variables
+    cdef float *secondary_variables
+ 
     def __init__(self, char *file_prefix) :
         cdef int artio_type = ARTIO_OPEN_HEADER
         cdef int64_t num_root
 
         self.handle = artio_fileset_open( file_prefix, artio_type, artio_context_global ) 
-        self.particle_handle = artio_fileset_open( file_prefix, artio_type, artio_context_global )
+        if not self.handle :
+            raise RuntimeError
+
         self.read_parameters()
-        print 'print parameters in caller.pyx',self.parameters
-        print 'done reading header parameters'
 
         self.num_root_cells = self.parameters['num_root_cells'][0]
         self.num_grid = 1
@@ -135,31 +149,57 @@
             self.num_grid <<= 1
             num_root >>= 3
 
-        #kln - add particle detection code
-        status = artio_fileset_open_particles( self.particle_handle )
-        check_artio_status(status)
- 
-        # dhr - add grid detection code 
+        self.sfc_min = 0
+        self.sfc_max = self.num_root_cells-1
+
+        # grid detection
+        self.min_level = 0
+        self.max_level = self.parameters['grid_max_level'][0]
+        self.num_grid_variables = self.parameters['num_grid_variables'][0]
+
+        self.num_octs_per_level = <int *>malloc(self.max_level*sizeof(int))
+        self.grid_variables = <float *>malloc(8*self.num_grid_variables*sizeof(float))
+        if (not self.num_octs_per_level) or (not self.grid_variables) :
+            raise MemoryError
+
         status = artio_fileset_open_grid( self.handle )
         check_artio_status(status)
+
+        # particle detection
+        self.num_species = self.parameters['num_particle_species'][0]
+        self.particle_position_index = <int *>malloc(3*sizeof(int)*self.num_species)
+        if not self.particle_position_index :
+            raise MemoryError
+        for ispec in range(self.num_species) :
+            labels = self.parameters["species_%02d_primary_variable_labels"% (ispec,)]
+            try :
+                self.particle_position_index[3*ispec+0] = labels.index('POSITION_X')
+                self.particle_position_index[3*ispec+1] = labels.index('POSITION_Y')
+                self.particle_position_index[3*ispec+2] = labels.index('POSITION_Z')
+            except ValueError :
+                raise RuntimeError("Unable to locate position information for particle species", ispec )
+
+        self.num_particles_per_species =  <int *>malloc(sizeof(int)*self.num_species) 
+        self.primary_variables = <double *>malloc(sizeof(double)*max(self.parameters['num_primary_variables']))  
+        self.secondary_variables = <float *>malloc(sizeof(float)*max(self.parameters['num_secondary_variables']))  
+        if (not self.num_particles_per_species) or (not self.primary_variables) or (not self.secondary_variables) :
+            raise MemoryError
+
+        status = artio_fileset_open_particles( self.handle )
+        check_artio_status(status)
+   
+    # this should possibly be __dealloc__ 
+    def __del__(self) :
+        if self.num_octs_per_level : free(self.num_octs_per_level)
+        if self.grid_variables : free(self.grid_variables)
+
+        if self.particle_position_index : free(self.particle_position_index)
+        if self.num_particles_per_species : free(self.num_particles_per_species)
+        if self.primary_variables : free(self.primary_variables)
+        if self.secondary_variables : free(self.secondary_variables)
+
+        if self.handle : artio_fileset_close(self.handle)
   
-
-        # grid stuff
-        self.oct_handler=None
-
-        self.min_level = 0
-        self.max_level = self.parameters['grid_max_level'][0]
-
-        #snl FIX: the sfc values used should come from "subset" and describe the domain for chunking
-        # note the root level method may force chunking to be done on 0-level ytocts 
-        self.sfc_min = 0
-        self.sfc_max = self.parameters['grid_file_sfc_index'][1]-1
-        self.num_grid_variables = self.parameters['num_grid_variables'][0]
-
-        # these should be fixed to something meaningful
-        self.cpu = 0
-        self.domain_id = 0
-
     def read_parameters(self) :
         cdef char key[64]
         cdef int type
@@ -176,8 +216,8 @@
             if type == ARTIO_TYPE_STRING :
                 char_values = <char **>malloc(length*sizeof(char *))
                 for i in range(length) :
-                    char_values[i] = <char *>malloc( 128*sizeof(char) )
-                artio_parameter_get_string_array( self.handle, key, length, char_values, 128 ) 
+                    char_values[i] = <char *>malloc( ARTIO_MAX_STRING_LENGTH*sizeof(char) )
+                artio_parameter_get_string_array( self.handle, key, length, char_values ) 
                 parameter = [ char_values[i] for i in range(length) ]
                 for i in range(length) :
                     free(char_values[i])
@@ -203,488 +243,266 @@
                 parameter = [ double_values[i] for i in range(length) ]
                 free(double_values)
             else :
-                print "ERROR: invalid type!"
+                raise RuntimeError("ARTIO file corruption detected: invalid type!")
 
             self.parameters[key] = parameter
 
-    def count_refined_octs(self) :
-        cdef int64_t num_total_octs = 0
-
-        # this only works if this domain includes all cells!
-        if self.parameters.has_key("num_octs_per_level") :
-            return self.parameters["num_octs_per_level"].sum()
-
-        status = artio_grid_count_octs_in_sfc_range( self.handle, 
-                self.sfc_min, self.sfc_max, &num_total_octs )
-        check_artio_status(status) 
-
-        # add octs for root cells
-        num_total_octs += (self.sfc_max-self.sfc_min+1)/8
- 
-        return num_total_octs
-
-    def grid_pos_fill(self, ARTIOOctreeContainer oct_handler) :
-        ''' adds all refined octs and a new array of ghost octs for  
-        the "fully refined octs" at level=-1 in ART or 0 in yt convention 
-        so that root level can consist of children
-        '''
-        cdef int64_t sfc
-        cdef int level, ix, iy, iz
-        cdef int num_oct_levels
-        cdef int *num_octs_per_level
-        cdef double dpos[3]
-        cdef int refined[8]
-        cdef int oct_count
-        cdef int ioct
-
-        cdef Oct **next_level_parents=NULL, **cur_level_parents=NULL, **tmp_parents
-        cdef int num_next_parents, num_cur_parents, tmp_size
-        cdef int next_parents_size, cur_parents_size, cur_parent
-
-        print 'start filling oct positions'
-        self.oct_handler = oct_handler
-        oct_count = 0
-
-        for iz in range(self.num_grid/2) :
-            dpos[2] = iz*2+1
-            for iy in range(self.num_grid/2) :
-                dpos[1] = iy*2+1
-                for ix in range(self.num_grid/2) :
-                    dpos[0]=ix*2+1
-                    oct_handler.add_oct(self.cpu+1, NULL, 0, dpos)
-                    oct_count += 1
-
-        print "done filling root oct positions"
-
-        status = artio_grid_cache_sfc_range( self.handle, self.sfc_min, self.sfc_max )
-        check_artio_status(status) 
-
-        num_octs_per_level = <int *>malloc(self.max_level*sizeof(int))
-        if not num_octs_per_level :
-            raise MemoryError
-        next_level_parents = <Oct **>malloc(sizeof(Oct *))
-        if not next_level_parents :
-            raise MemoryError
-        next_parents_size = 1
-        cur_level_parents = <Oct **>malloc(sizeof(Oct *))
-        if not cur_level_parents :
-            raise MemoryError
-        cur_parents_size = 1
-
-        for sfc in range( self.sfc_min, self.sfc_max+1 ) :
-            status = artio_grid_read_root_cell_begin( self.handle, sfc, 
-                dpos, NULL, &num_oct_levels, num_octs_per_level )
-            check_artio_status(status) 
-
-            next_level_parents[0] = oct_handler.get_root_oct(dpos)
-            num_next_parents = 1
-
-            for level in range(1,num_oct_levels+1) :
-                tmp_parents = cur_level_parents
-                tmp_size = cur_parents_size
-
-                cur_level_parents = next_level_parents
-                cur_parents_size = next_parents_size
-                num_cur_parents = num_next_parents
-
-                cur_parent = 0
-                num_next_parents = 0
-                next_parents_size = tmp_size
-                next_level_parents = tmp_parents
-
-                if level < num_oct_levels and \
-                         next_parents_size < num_octs_per_level[level] :
-                    free(next_level_parents)
-                    next_parents_size = num_octs_per_level[level]
-                    next_level_parents = <Oct **>malloc(next_parents_size*sizeof(Oct *))
-                    if not next_level_parents :
-                        raise MemoryError
-
-                status = artio_grid_read_level_begin( self.handle, level )
-                check_artio_status(status) 
-
-                for ioct in range(num_octs_per_level[level-1]) :
-                    status = artio_grid_read_oct( self.handle, dpos, NULL, refined )
-                    check_artio_status(status) 
-
-                    new_oct = oct_handler.add_oct(self.cpu+1, 
-                            cur_level_parents[cur_parent],
-                            level, dpos)
-                    oct_count += 1
-                    cur_parent += 1
-
-                    if level < num_oct_levels :
-                        for i in range(8) :
-                            if refined[i] :
-                                next_level_parents[num_next_parents] = new_oct
-                                num_next_parents += 1
-
-                status = artio_grid_read_level_end( self.handle )
-                check_artio_status(status) 
-
-            status = artio_grid_read_root_cell_end( self.handle )
-            check_artio_status(status) 
-       
-        status = artio_grid_clear_sfc_cache( self.handle )
-        check_artio_status(status)
-
-        free(num_octs_per_level)
-        free(next_level_parents)
-        free(cur_level_parents)
-
-        print 'done filling oct positions', oct_count
-
 #    @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def particle_var_fill(self, accessed_species, masked_particles,SelectorObject selector, fields) :
-        # major issues:
-        # 1) cannot choose which subspecies to access
-        # 2) but what if a species doesnt have a field? make it zeroes
-        # 3) mask size should be calculated and not just num_acc_species   
-        # e.g.   
-        # accessed species = nbody, stars, bh
-        # secondary speces[nbody] = []
-        # secondary speces[stars] = [birth, mass, blah]
-        # secondary speces[bh] = [accretionrate, mass, spin]
-        #
-        cdef double **primary_variables
-        cdef float **secondary_variables
-        cdef int **fieldtoindex
-        cdef int *iacctoispec 
+    def read_particle_chunk(self, SelectorObject selector, int64_t sfc_start, int64_t sfc_end, fields) :
+        cdef int i
         cdef int status
-        cdef np.ndarray[np.float32_t, ndim=1] arr
-        cdef int **mask
-        cdef int *num_particles_per_species 
-        cdef int **pos_index
+        cdef int subspecies
+        cdef int64_t pid
 
-        cdef int *subspecies
-        subspecies = <int*>malloc(sizeof(int))
-        cdef int64_t *pid
-        pid = <int64_t *>malloc(sizeof(int64_t))
+        cdef int num_fields = len(fields)
+        cdef np.float64_t pos[3]
+        cdef np.float64_t dds[3]
+        cdef int eterm[3]
+        for i in range(3) : dds[i] = 0
 
-        cdef int nf = len(fields)
-        cdef int i, j, level
-        cdef np.float64_t dds[3], pos[3]
-        cdef int eterm[3]
+        data = {}
+        accessed_species = np.zeros( self.num_species, dtype="int")
+        selected_mass = [ None for i in range(self.num_species)]
+        selected_pid = [ None for i in range(self.num_species)]
+        selected_species = [ None for i in range(self.num_species)]
+        selected_primary = [ [] for i in range(self.num_species)]
+        selected_secondary = [ [] for i in range(self.num_species)]
 
+        for species,field in fields :
+            if species < 0 or species > self.num_species :
+                raise RuntimeError("Invalid species provided to read_particle_chunk")
+            accessed_species[species] = 1
 
-        if len(accessed_species) != 1 : 
-            print 'multiple particle species access needs serious thought'
-            sys.exit(1)
-            
-        # setup the range for all reads:
-        status = artio_particle_cache_sfc_range( self.particle_handle, 
-                                                 self.sfc_min, self.sfc_max )
+            if self.parameters["num_primary_variables"][species] > 0 and \
+                    field in self.parameters["species_%02u_primary_variable_labels"%(species,)] :
+                selected_primary[species].append((self.parameters["species_%02u_primary_variable_labels"%(species,)].index(field),(species,field)))
+                data[(species,field)] = np.empty(0,dtype="float64")
+            elif self.parameters["num_secondary_variables"][species] > 0 and \
+                    field in self.parameters["species_%02u_secondary_variable_labels"%(species,)] :
+                selected_secondary[species].append((self.parameters["species_%02u_secondary_variable_labels"%(species,)].index(field),(species,field)))
+                data[(species,field)] = np.empty(0,dtype="float32")
+            elif field == "MASS" :
+                selected_mass[species] = (species,field)
+                data[(species,field)] = np.empty(0,dtype="float32")
+            elif field == "PID" :
+                selected_pid[species] = (species,field)
+                data[(species,field)] = np.empty(0,dtype="int64")
+            elif field == "SPECIES" :
+                selected_species[species] = (species,field)
+                data[(species,field)] = np.empty(0,dtype="int8")
+            else :
+                raise RuntimeError("invalid field name provided to read_particle_chunk")
+
+        # cache the range
+        status = artio_particle_cache_sfc_range( self.handle, self.sfc_min, self.sfc_max ) 
         check_artio_status(status)
-	
-        # mask ####################
-        # mask[spec][particle] fields are irrelevant for masking 
-        # -- masking only cares abount position
-        num_acc_species = len(accessed_species)
-        mask = <int**>malloc(sizeof(int*)*num_acc_species)
-        if not mask :
-            raise MemoryError
-        for aspec in range(num_acc_species) :
-             mask[aspec] = <int*>malloc(
-                 self.parameters['particle_species_num'][aspec] 
-                 * sizeof(int))
-             if not mask[aspec] :
-                 raise MemoryError
+
+        for sfc in range( sfc_start, sfc_end+1 ) :
+            status = artio_particle_read_root_cell_begin( self.handle, sfc,
+                    self.num_particles_per_species )
+            check_artio_status(status)	
+
+            for ispec in range(self.num_species) : 
+                if accessed_species[ispec] :
+                    status = artio_particle_read_species_begin(self.handle, ispec);
+                    check_artio_status(status)
  
-        # particle species ##########        
-        num_species = self.parameters['num_particle_species'][0]
-        labels_species = self.parameters['particle_species_labels']
-
-        fieldtoindex = <int**>malloc(sizeof(int*)*num_species)
-        if not fieldtoindex: raise MemoryError
-        pos_index = <int**>malloc(sizeof(int*)*num_species)
-        if not pos_index: raise MemoryError
-        num_particles_per_species =  <int *>malloc(
-            sizeof(int)*num_species) 
-        if not num_particles_per_species : raise MemoryError
-        iacctoispec = <int*>malloc(sizeof(int)*num_acc_species)
-        if not iacctoispec: raise MemoryError
-        for i, spec in enumerate(accessed_species):
-            j = labels_species.index(spec)
-            iacctoispec[i] = j
-            # species of the same type (e.g. N-BODY) MUST be sequential in the label array
-            if i > 0 and iacctoispec[i] == iacctoispec[i-1] :
-                iacctoispec[i] = j+1
-        # check that iacctoispec points to uniq indices
-        for i in range(num_acc_species): 
-            for j in range(i+1,num_acc_species):  
-                if iacctoispec[i]==iacctoispec[j]:
-                    print iacctoispec[i]
-                    print 'some accessed species indices point to the same ispec; exitting'
-                    sys.exit(1)
-            
-        # particle field labels and order ##########        
-        labels_primary={}
-        labels_secondary={}
-        labels_static={}
-        howtoread = {}
-        for ispec in range(num_species) : 
-            fieldtoindex[ispec] = <int*>malloc(nf*sizeof(int))
-            if not fieldtoindex[ispec] : raise MemoryError
-
-        countnbody = 0 
-        for ispec in range(num_species) : 
-            # data_structures converted fields into ART labels
-            # now attribute ART fields to each species primary/secondary/static/empty
-            # so that we know how to read them
-            param_name = "species_%02d_primary_variable_labels" % ispec
-            labels_primary[ispec] = self.parameters[param_name]
-            if self.parameters["num_secondary_variables"][ispec] > 0 :
-                param_name = "species_%02d_secondary_variable_labels" % ispec
-                labels_secondary[ispec] = self.parameters[param_name]
-            else : 
-                labels_secondary[ispec] = []
-
-            #the only static label for now is NBODY mass
-            if labels_species[ispec] == 'N-BODY' :
-                labels_static[ispec] = "particle_species_mass"
-            else : 
-                labels_static[ispec] = [] 
-
-            for i, f in enumerate(fields):
-                if   f in labels_primary[ispec]:
-                    howtoread[ispec,i]= 'primary'
-                    fieldtoindex[ispec][i] = labels_primary[ispec].index(f)
-                elif f in labels_secondary[ispec]:
-                    howtoread[ispec,i]= 'secondary'
-                    fieldtoindex[ispec][i] = labels_secondary[ispec].index(f)
-                elif f in labels_static[ispec]:
-                    howtoread[ispec,i]= 'static'
-                    fieldtoindex[ispec][i] = countnbody
-                    countnbody += 1 #particle_mass happens once per N-BODY species
-                else : 
-                    howtoread[ispec,i]= 'empty'
-                    fieldtoindex[ispec][i] = 9999999
-            #fix pos_index
-            pos_index[ispec] = <int*>malloc(3*sizeof(int))
-            pos_index[ispec][0] = labels_primary[ispec].index('POSITION_X')
-            pos_index[ispec][1] = labels_primary[ispec].index('POSITION_Y')
-            pos_index[ispec][2] = labels_primary[ispec].index('POSITION_Z')
-                                
-                                
-
-        # allocate io pointers ############
-        primary_variables = <double **>malloc(sizeof(double**)*num_acc_species)  
-        secondary_variables = <float **>malloc(sizeof(float**)*num_acc_species)  
-        if (not primary_variables) or (not secondary_variables) : raise MemoryError
-            
-        for aspec in range(num_acc_species) : 
-            primary_variables[aspec]   = <double *>malloc(self.parameters['num_primary_variables'][aspec]*sizeof(double))
-            secondary_variables[aspec] = <float *>malloc(self.parameters['num_secondary_variables'][aspec]*sizeof(float))
-            if (not primary_variables[aspec]) or (not secondary_variables[aspec]) : raise MemoryError
-
-        count_mask = []
-        ipspec = []
-        # counts=0 ##########
-        for aspec in range(num_acc_species) :
-             count_mask.append(0)
-             ipspec.append(0)
-        # mask begin ##########
-        print "generating mask for particles"
-        for sfc in range( self.sfc_min, self.sfc_max+1 ) :
-            status = artio_particle_read_root_cell_begin( 
-                self.particle_handle, sfc,
-                num_particles_per_species )
-            check_artio_status(status)
-            # ispec is the index over all species; aspec is the index over the accessed ones
-            # ispec only needed for num_particles_per_species and 
-            #    artio_particle_read_species_begin
-            for aspec in range(num_acc_species ) :
-                ispec = iacctoispec[aspec]
-                status = artio_particle_read_species_begin(
-                    self.particle_handle, ispec)
-                check_artio_status(status)
-                for particle in range( num_particles_per_species[ispec] ) :
-                    # print 'snl in caller: aspec count_mask count',aspec,ispec, count_mask[aspec], ipspec[aspec]
-                    status = artio_particle_read_particle(
-                        self.particle_handle,
-                        pid, subspecies, primary_variables[aspec],
-                        secondary_variables[aspec])
-                    check_artio_status(status)
-                    pos[0] = primary_variables[aspec][pos_index[aspec][0]]
-                    pos[1] = primary_variables[aspec][pos_index[aspec][1]]
-                    pos[2] = primary_variables[aspec][pos_index[aspec][2]]
-                    mask[aspec][ipspec[aspec]] = selector.select_cell(pos, dds, eterm)
-                    count_mask[aspec] += mask[aspec][count_mask[aspec]]
-                    ipspec[aspec] += 1
-                status = artio_particle_read_species_end( self.particle_handle )
-                check_artio_status(status)
-            status = artio_particle_read_root_cell_end( self.particle_handle )
-            check_artio_status(status)
-        print 'finished masking'
-	##########################################################
-
-        cdef np.float32_t **fpoint
-        fpoint = <np.float32_t**>malloc(sizeof(np.float32_t*)*nf)
-        num_masked_particles = sum(count_mask)
-        if not fpoint : raise MemoryError
-        for i, f in enumerate(fields):
-            masked_particles[f] = np.empty(num_masked_particles,dtype="float32")    
-            arr = masked_particles[f]
-            fpoint[i] = <np.float32_t *>arr.data
-
-	##########################################################
-        #variable begin ##########
-        print "reading in particle variables"
-        for aspec in range(num_acc_species) :
-             count_mask[aspec] = 0
-             ipspec[aspec] = 0
-        ipall = 0
-        for sfc in range( self.sfc_min, self.sfc_max+1 ) :
-                status = artio_particle_read_root_cell_begin( self.particle_handle, sfc,
-                    num_particles_per_species )
-                check_artio_status(status)	
-
-                for aspec in range(num_acc_species) :
-                    ispec = iacctoispec[aspec]
-                    status = artio_particle_read_species_begin(self.particle_handle, ispec);
-                    check_artio_status(status)
-                    for particle in range( num_particles_per_species[ispec] ) :
-
-                        status = artio_particle_read_particle(self.particle_handle,
-                                        pid, subspecies, primary_variables[aspec],
-                                        secondary_variables[aspec])
+                    for particle in range( self.num_particles_per_species[ispec] ) :
+                        status = artio_particle_read_particle(self.handle,
+                                &pid, &subspecies, self.primary_variables,
+                                self.secondary_variables)
                         check_artio_status(status)
 
-                        ########## snl this is not right because of primary overflow
-                        if mask[aspec][ipspec[aspec]] == 1 :
-                             for i in range(nf):
-                                 if   howtoread[ispec,i] == 'primary' : 
-                                     fpoint[i][ipall] = primary_variables[aspec][fieldtoindex[ispec][i]]
-                                 elif howtoread[ispec,i] == 'secondary' : 
-                                     fpoint[i][ipall] = secondary_variables[aspec][fieldtoindex[ispec][i]]
-                                 elif howtoread[ispec,i] == 'static' : 
-                                     fpoint[i][ipall] = self.parameters[labels_static[ispec]][fieldtoindex[ispec][i]]
-                                 elif howtoread[ispec,i] == 'empty' : 
-                                     fpoint[i][ipall] = 0
-                                 else : 
-                                     print 'undefined how to read in caller', howtoread[ispec,i]
-                                     print 'this should be impossible.'
-                                     sys.exit(1)
-                                 # print 'reading into fpoint', ipall,fpoint[i][ipall], fields[i]
-                             ipall += 1
-                        ########## snl this is not right because of primary overflow
-                        ipspec[aspec] += 1
+                        for i in range(3) :
+                            pos[i] = self.primary_variables[self.particle_position_index[3*ispec+i]] 
 
-                    status = artio_particle_read_species_end( self.particle_handle )
+                        if selector.select_cell(pos,dds,eterm) :
+                            # loop over primary variables
+                            for i,field in selected_primary[ispec] :
+                                count = len(data[field])
+                                data[field].resize(count+1)
+                                data[field][count] = self.primary_variables[i]
+                            
+                            # loop over secondary variables
+                            for i,field in selected_secondary[ispec] :
+                                count = len(data[field])
+                                data[field].resize(count+1)
+                                data[field][count] = self.secondary_variables[i]
+
+                            # add particle id
+                            if selected_pid[ispec] :
+                                count = len(data[selected_pid[ispec]])
+                                data[selected_pid[ispec]].resize(count+1)
+                                data[selected_pid[ispec]][count] = pid
+
+                            # add mass if requested
+                            if selected_mass[ispec] :
+                                count = len(data[selected_mass[ispec]])
+                                data[selected_mass[ispec]].resize(count+1)
+                                data[selected_mass[ispec]][count] = self.parameters["particle_species_mass"]
+                        
+                    status = artio_particle_read_species_end( self.handle )
                     check_artio_status(status)
-
-                status = artio_particle_read_root_cell_end( self.particle_handle )
-                check_artio_status(status)
-
-        free(subspecies)
-        free(pid)
-        free(mask)
-        free(pos_index)
-        free(num_particles_per_species)
-        free(iacctoispec)
-        free(primary_variables)
-        free(secondary_variables)
-        free(fpoint)
-        free(fieldtoindex)
-
-        print 'done filling particle variables', ipall
-
-
+                    
+            status = artio_particle_read_root_cell_end( self.handle )
+            check_artio_status(status)
+ 
+        return data
 
     #@cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def grid_var_fill(self, source, fields):
+    def read_grid_chunk(self, SelectorObject selector, int64_t sfc_start, int64_t sfc_end, fields ):
+        cdef int i
+        cdef int level
         cdef int num_oct_levels
-        cdef int *num_octs_per_level
-        cdef float *variables
+        cdef int refined[8]
         cdef int status
-        cdef int root_oct, child, order
-        cdef int ix, iy, iz
-        cdef int cx, cy, cz
+        cdef int64_t count
+        cdef int64_t max_octs
         cdef double dpos[3]
-        cdef np.ndarray[np.float32_t, ndim=1] arr
-        # This relies on the fields being contiguous
-        cdef np.float32_t **fpoint
-        cdef int nf = len(fields)
-        fpoint = <np.float32_t**>malloc(sizeof(np.float32_t*)*nf)
-        forder = <int*>malloc(sizeof(int)*nf)
-        cdef int i, j, level
+        cdef np.float64_t pos[3]
+        cdef np.float64_t dds[3]
+        cdef int eterm[3]
+        cdef int *field_order
+        cdef int num_fields  = len(fields)
+        field_order = <int*>malloc(sizeof(int)*num_fields)
 
         # translate fields from ARTIO names to indices
         var_labels = self.parameters['grid_variable_labels']
         for i, f in enumerate(fields):
-            # It might be better to do this check in the Python code
             if f not in var_labels:
-                print "This field is not known to ARTIO:", f
-                raise RuntimeError
-            j = var_labels.index(f)
-            arr = source[f]
-            fpoint[i] = <np.float32_t *>arr.data
-            forder[i] = j
+                raise RuntimeError("Field",f,"is not known to ARTIO")
+            field_order[i] = var_labels.index(f)
 
+        # dhr - cache the entire domain (replace later)
         status = artio_grid_cache_sfc_range( self.handle, self.sfc_min, self.sfc_max )
         check_artio_status(status) 
 
-        num_octs_per_level = <int *>malloc(self.max_level*sizeof(int))
-        variables = <float *>malloc(8*self.num_grid_variables*sizeof(float))
+        # determine max number of cells we could hit (optimize later)
+        #status = artio_grid_count_octs_in_sfc_range( self.handle, 
+        #        sfc_start, sfc_end, &max_octs )
+        #check_artio_status(status)
+        #max_cells = sfc_end-sfc_start+1 + max_octs*8
 
-        count = self.num_root_cells
+        # allocate space for _fcoords, _icoords, _fwidth, _ires
+        #fcoords = np.empty((max_cells, 3), dtype="float64")
+        #ires = np.empty(max_cells, dtype="int64")
+        fcoords = np.empty((0, 3), dtype="float64")
+        ires = np.empty(0, dtype="int64")
 
-        for sfc in range( self.sfc_min, self.sfc_max+1 ) :
+        #data = [ np.empty(max_cells, dtype="float32") for i in range(num_fields) ]
+        data = [ np.empty(0,dtype="float32") for i in range(num_fields)]
+
+        count = 0
+        for sfc in range( sfc_start, sfc_end+1 ) :
             status = artio_grid_read_root_cell_begin( self.handle, sfc, 
-                    dpos, variables, &num_oct_levels, num_octs_per_level )
+                    dpos, self.grid_variables, &num_oct_levels, self.num_octs_per_level )
             check_artio_status(status) 
 
-            ix = <int>(dpos[0]-0.5) / 2
-            iy = <int>(dpos[1]-0.5) / 2
-            iz = <int>(dpos[2]-0.5) / 2
-
-            cx = 0 if dpos[0] < (2*ix + 1) else 1
-            cy = 0 if dpos[1] < (2*iy + 1) else 1
-            cz = 0 if dpos[2] < (2*iz + 1) else 1
-            
-            root_oct = ix+(self.num_grid/2)*(iy+(self.num_grid/2)*iz)
-            child = cx+2*(cy+2*cz)
-            order = 8*root_oct + child
-
-            assert( root_oct < self.num_root_cells / 8 )
-            assert( child >= 0 and child < 8 )
-            assert( order >= 0 and order < self.num_root_cells )
-
-            for i in range(nf):
-                fpoint[i][order] = variables[forder[i]]
- 
+            if num_oct_levels == 0 :
+                for i in range(num_fields) :
+                    data[i].resize(count+1)
+                    data[i][count] = self.grid_variables[field_order[i]]
+                fcoords.resize((count+1,3))
+                for i in range(3) :
+                    fcoords[count][i] = dpos[i]
+                ires.resize(count+1)
+                ires[count] = 0
+                count += 1
+    
             for level in range(1,num_oct_levels+1) :
                 status = artio_grid_read_level_begin( self.handle, level )
                 check_artio_status(status) 
 
-                for oct in range(num_octs_per_level[level-1]) :
-                    status = artio_grid_read_oct( self.handle, NULL, variables, NULL )
+                for i in range(3) :
+                    dds[i] = 2.**-level
+
+                for oct in range(self.num_octs_per_level[level-1]) :
+                    status = artio_grid_read_oct( self.handle, dpos, self.grid_variables, refined )
                     check_artio_status(status) 
 
                     for child in range(8) :
-                        for i in range(nf):
-                            fpoint[i][count] = variables[self.num_grid_variables*child+forder[i]]
-                        count += 1
- 
+                        if not refined[child] :
+                            for i in range(3) :
+                                pos[i] = dpos[i] + dds[i]*(0.5 if (child & (1<<i)) else -0.5)
+
+                            if selector.select_cell( pos, dds, eterm ) :
+                                fcoords.resize((count+1, 3))
+                                for i in range(3) :
+                                    fcoords[count][i] = pos[i]
+                                ires.resize(count+1)
+                                ires[count] = level
+                                for i in range(num_fields) :
+                                    data[i].resize(count+1)
+                                    data[i][count] = self.grid_variables[self.num_grid_variables*child+field_order[i]]
+                                count += 1 
                 status = artio_grid_read_level_end( self.handle )
                 check_artio_status(status) 
 
             status = artio_grid_read_root_cell_end( self.handle )
             check_artio_status(status) 
         
-        status = artio_grid_clear_sfc_cache( self.handle )
-        check_artio_status(status)
+        free(field_order)
 
-        free(num_octs_per_level) 
-        free(variables)
-        free(fpoint)
-        free(forder)
+        #fcoords.resize((count,3))
+        #ires.resize(count)
+        #    
+        #for i in range(num_fields) :
+        #    data[i].resize(count)
 
-        print 'done filling oct variables', count
+        return (fcoords, ires, data)
+
+    def root_sfc_ranges_all(self) :
+        cdef int max_range_size = 1024
+        cdef int64_t sfc_start, sfc_end
+        cdef artio_selection *selection
+
+        selection = artio_select_all( self.handle )
+        if selection == NULL :
+            raise RuntimeError
+        sfc_ranges = []
+        while artio_selection_iterator(selection, max_range_size, 
+                &sfc_start, &sfc_end) == ARTIO_SUCCESS :
+            sfc_ranges.append([sfc_start, sfc_end])
+        artio_selection_destroy(selection)
+        return sfc_ranges
+
+    def root_sfc_ranges(self, SelectorObject selector) :
+        cdef int max_range_size = 1024
+        cdef int coords[3]
+        cdef int64_t sfc_start, sfc_end
+        cdef np.float64_t pos[3]
+        cdef np.float64_t dds[3]
+        cdef int eterm[3]
+        cdef artio_selection *selection
+        cdef int i, j, k
+        for i in range(3): dds[i] = 1.0
+
+        sfc_ranges=[]
+        selection = artio_selection_allocate(self.handle)
+        for i in range(self.num_grid) :
+            # stupid cython
+            coords[0] = i
+            pos[0] = coords[0] + 0.5
+            for j in range(self.num_grid) :
+                coords[1] = j
+                pos[1] = coords[1] + 0.5
+                for k in range(self.num_grid) :
+                    coords[2] = k 
+                    pos[2] = coords[2] + 0.5
+                    if selector.select_cell(pos, dds, eterm) :
+                        status = artio_selection_add_root_cell(selection, coords)
+                        check_artio_status(status)
+
+        while artio_selection_iterator(selection, max_range_size, 
+                &sfc_start, &sfc_end) == ARTIO_SUCCESS :
+            sfc_ranges.append([sfc_start, sfc_end])
+
+        artio_selection_destroy(selection)
+        return sfc_ranges
 
 ###################################################
 def artio_is_valid( char *file_prefix ) :

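The chunked readers above grow every output array one element at a time with
ndarray.resize(count+1), which keeps the Cython simple but can reallocate
(and copy) on each selected particle or cell. A minimal sketch, assuming
nothing about the ARTIO API, of the usual amortized alternative: accumulate
in a Python list and convert once.

    import numpy as np

    def gather_selected(values, keep):
        # list.append is amortized O(1); resizing an ndarray by one
        # element per append, as read_particle_chunk does above, may
        # trigger a copy every time.
        out = []
        for v, k in zip(values, keep):
            if k:
                out.append(v)
        return np.asarray(out, dtype="float32")

    print gather_selected([0.5, 1.5, 2.5, 3.5], [1, 0, 1, 1])
    # -> [ 0.5  2.5  3.5]
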
diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/api.py
--- a/yt/frontends/artio/api.py
+++ b/yt/frontends/artio/api.py
@@ -29,11 +29,11 @@
 """
 
 from .data_structures import \
-      ARTIOStaticOutput
+    ARTIOStaticOutput
 
 from .fields import \
-      ARTIOFieldInfo, \
-      add_artio_field
+    ARTIOFieldInfo, \
+    add_artio_field
 
 from .io import \
-      IOHandlerARTIO
+    IOHandlerARTIO

diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/artio_headers/artio.c
--- a/yt/frontends/artio/artio_headers/artio.c
+++ b/yt/frontends/artio/artio_headers/artio.c
@@ -5,24 +5,36 @@
  *  Author: Yongen Yu
  */
 
+#include "artio.h"
+#include "artio_internal.h"
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <stdint.h>
 #include <string.h>
 #include <math.h>
 
-#include "artio.h"
-#include "artio_internal.h"
-
 artio_fileset *artio_fileset_allocate( char *file_prefix, int mode,
 		const artio_context *context );
 void artio_fileset_destroy( artio_fileset *handle );
 
+int artio_fh_buffer_size = ARTIO_DEFAULT_BUFFER_SIZE;
+
+int artio_set_buffer_size( int buffer_size ) {
+	if ( buffer_size < 0 ) {
+		return ARTIO_ERR_INVALID_BUFFER_SIZE;
+	}
+
+	artio_fh_buffer_size = buffer_size;
+	return ARTIO_SUCCESS;
+}
+
 artio_fileset *artio_fileset_open(char * file_prefix, int type, const artio_context *context) {
 	artio_fh *head_fh;
 	char filename[256];
 	int ret;
 	int64_t tmp;
+	int artio_major, artio_minor;
 
 	artio_fileset *handle = 
 		artio_fileset_allocate( file_prefix, ARTIO_FILESET_READ, context );
@@ -33,7 +45,7 @@
 	/* open header file */
 	sprintf(filename, "%s.art", handle->file_prefix);
 	head_fh = artio_file_fopen(filename, 
-			ARTIO_MODE_READ | ARTIO_MODE_ACCESS | ARTIO_MODE_DIRECT, context);
+			ARTIO_MODE_READ | ARTIO_MODE_ACCESS, context);
 
 	if ( head_fh == NULL ) {
 		artio_fileset_destroy(handle);
@@ -48,6 +60,23 @@
 
 	artio_file_fclose(head_fh);
 
+	/* check versions */
+	if ( artio_parameter_get_int(handle, "ARTIO_MAJOR_VERSION", &artio_major ) == 
+			ARTIO_ERR_PARAM_NOT_FOUND ) {
+		/* version pre 1.0 */
+		artio_major = 0;
+		artio_minor = 9;
+	} else {
+		artio_parameter_get_int(handle, "ARTIO_MINOR_VERSION", &artio_minor );
+	}
+
+	if ( artio_major > ARTIO_MAJOR_VERSION ) {
+		fprintf(stderr,"ERROR: artio file version newer than library (%u.%u vs %u.%u).\n",
+			artio_major, artio_minor, ARTIO_MAJOR_VERSION, ARTIO_MINOR_VERSION );
+		artio_fileset_destroy(handle);
+		return NULL;
+	}
+	
 	artio_parameter_get_long(handle, "num_root_cells", &handle->num_root_cells);
 	
 	if ( artio_parameter_get_int(handle, "sfc_type", &handle->sfc_type ) != ARTIO_SUCCESS ) {
@@ -60,6 +89,7 @@
 		handle->nBitsPerDim++;
 		tmp >>= 3;
 	}
+	handle->num_grid = 1<<handle->nBitsPerDim;
 
 	/* default to accessing all sfc indices */
 	handle->proc_sfc_begin = 0;
@@ -114,6 +144,9 @@
 
 	artio_parameter_set_long(handle, "num_root_cells", root_cells);
 
+	artio_parameter_set_int(handle, "ARTIO_MAJOR_VERSION", ARTIO_MAJOR_VERSION );
+	artio_parameter_set_int(handle, "ARTIO_MINOR_VERSION", ARTIO_MINOR_VERSION );
+
 	return handle;
 }
 
@@ -138,8 +171,7 @@
 
 		sprintf(header_filename, "%s.art", handle->file_prefix);
 		head_fh = artio_file_fopen(header_filename, 
-				ARTIO_MODE_WRITE | ARTIO_MODE_DIRECT |
-					   ((handle->rank == 0) ? ARTIO_MODE_ACCESS : 0), 
+				ARTIO_MODE_WRITE | ((handle->rank == 0) ? ARTIO_MODE_ACCESS : 0), 
 				handle->context);
 
 		if (head_fh == NULL) {
@@ -220,3 +252,17 @@
 
 	free(handle);
 }
+
+int artio_fileset_has_grid( artio_fileset *handle ) {
+	int num_grid_files = 0;
+	return ( handle->grid != NULL ||
+		( artio_parameter_get_int( handle, "num_grid_files", &num_grid_files ) == ARTIO_SUCCESS &&
+		  num_grid_files > 0 ) );
+}
+
+int artio_fileset_has_particles( artio_fileset *handle ) {
+	int num_particle_files = 0;
+	return ( handle->particle != NULL ||
+			( artio_parameter_get_int( handle, "num_particle_files", &num_particle_files ) == ARTIO_SUCCESS &&
+			  num_particle_files > 0 ) );
+}

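The version check added to artio_fileset_open treats a missing
ARTIO_MAJOR_VERSION parameter as a pre-1.0 file and rejects filesets newer
than the library. The same guard as a hedged Python sketch, with a plain
dict standing in for the ARTIO parameter list:

    LIB_MAJOR, LIB_MINOR = 1, 1  # ARTIO_MAJOR_VERSION, ARTIO_MINOR_VERSION

    def check_fileset_version(parameters):
        if "ARTIO_MAJOR_VERSION" not in parameters:
            major, minor = 0, 9  # version pre 1.0
        else:
            major = parameters["ARTIO_MAJOR_VERSION"]
            minor = parameters.get("ARTIO_MINOR_VERSION", 0)
        if major > LIB_MAJOR:
            raise RuntimeError("artio file version newer than library "
                               "(%d.%d vs %d.%d)"
                               % (major, minor, LIB_MAJOR, LIB_MINOR))
        return major, minor
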
diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/artio_headers/artio.h
--- a/yt/frontends/artio/artio_headers/artio.h
+++ b/yt/frontends/artio/artio_headers/artio.h
@@ -6,11 +6,20 @@
  *  Modified: Jun 6, 2010 - Doug Rudd
  *            Nov 18, 2010 - Doug Rudd
  *            Nov 14, 2012 - Doug Rudd
+ *            Feb 7, 2013 - Doug Rudd - Version 1.0
+ *            March 3, 2013 - Doug Rudd - Version 1.1 (inc. selectors)
  */
 
 #ifndef __ARTIO_H__
 #define __ARTIO_H__
 
+#define ARTIO_MAJOR_VERSION     1
+#define ARTIO_MINOR_VERSION     1
+
+#ifdef ARTIO_MPI
+#include <mpi.h>
+#endif
+
 #include <stdint.h>
 #ifndef int64_t
 #ifdef _WIN32
@@ -47,11 +56,15 @@
 #define ARTIO_SUCCESS                       0
 
 #define ARTIO_ERR_PARAM_NOT_FOUND           1
-#define ARTIO_ERR_PARAM_INVALID_LENGTH      2
-#define ARTIO_ERR_PARAM_TYPE_MISMATCH       3
-#define ARTIO_ERR_PARAM_LENGTH_MISMATCH     4
-#define ARTIO_ERR_PARAM_LENGTH_INVALID      5
-#define ARTIO_ERR_PARAM_DUPLICATE           6
+#define ARTIO_PARAMETER_EXHAUSTED			2
+#define ARTIO_ERR_PARAM_INVALID_LENGTH      3 
+#define ARTIO_ERR_PARAM_TYPE_MISMATCH       4
+#define ARTIO_ERR_PARAM_LENGTH_MISMATCH     5
+#define ARTIO_ERR_PARAM_LENGTH_INVALID      6
+#define ARTIO_ERR_PARAM_DUPLICATE           7
+#define ARTIO_ERR_PARAM_CORRUPTED           8
+#define ARTIO_ERR_PARAM_CORRUPTED_MAGIC     9
+#define ARTIO_ERR_STRING_LENGTH             10
 
 #define ARTIO_ERR_INVALID_FILESET_MODE      100
 #define	ARTIO_ERR_INVALID_FILE_NUMBER       101
@@ -69,24 +82,30 @@
 #define ARTIO_ERR_INVALID_OCT_REFINED       113
 #define ARTIO_ERR_INVALID_HANDLE            114
 #define ARTIO_ERR_INVALID_CELL_TYPES        115
+#define ARTIO_ERR_INVALID_BUFFER_SIZE		116
+#define ARTIO_ERR_INVALID_INDEX				117
 
 #define ARTIO_ERR_DATA_EXISTS               200
 #define ARTIO_ERR_INSUFFICIENT_DATA         201
 #define ARTIO_ERR_FILE_CREATE               202
-#define ARTIO_ERR_PARTICLE_FILE_NOT_FOUND   203
+#define ARTIO_ERR_GRID_DATA_NOT_FOUND       203
 #define ARTIO_ERR_GRID_FILE_NOT_FOUND       204
+#define ARTIO_ERR_PARTICLE_DATA_NOT_FOUND   205
+#define ARTIO_ERR_PARTICLE_FILE_NOT_FOUND   206
+#define ARTIO_ERR_IO_OVERFLOW               207
+#define ARTIO_ERR_IO_WRITE                  208
+#define ARTIO_ERR_IO_READ                   209
+#define ARTIO_ERR_BUFFER_EXISTS             210
 
-#define ARTIO_ERR_PARAM_CORRUPTED           207
-#define ARTIO_ERR_PARAM_CORRUPTED_MAGIC     208
+#define ARTIO_SELECTION_EXHAUSTED           300
+#define ARTIO_ERR_INVALID_SELECTION         301
+#define ARTIO_ERR_INVALID_COORDINATES       302
 
-#define ARTIO_ERR_64_TO_32_BIT_TRUNCATION   209
-#define ARTIO_ERR_MEMORY_ALLOCATION         210
+#define ARTIO_ERR_MEMORY_ALLOCATION         400
 
-#define ARTIO_PARAMETER_EXHAUSTED           300
+#define ARTIO_ERR_VERSION_MISMATCH			500
 
 #ifdef ARTIO_MPI
-#include <mpi.h>
-
 typedef struct {
     MPI_Comm comm;
 } artio_context;
@@ -96,7 +115,10 @@
 } artio_context;
 #endif
 
+#define ARTIO_MAX_STRING_LENGTH				256
+
 typedef struct artio_fileset_struct artio_fileset;
+typedef struct artio_selection_struct artio_selection;
 
 extern const artio_context *artio_context_global;
 
@@ -125,6 +147,9 @@
  */
 int artio_fileset_close(artio_fileset *handle);
 
+int artio_fileset_has_grid( artio_fileset *handle );
+int artio_fileset_has_particles( artio_fileset *handle );
+
 /* public parameter interface */
 int artio_parameter_iterate( artio_fileset *handle, char *key, int *type, int *length );
 int artio_parameter_get_array_length(artio_fileset *handle, const char * key, int *length);
@@ -136,14 +161,18 @@
 		int32_t *values);
 int artio_parameter_get_int_array(artio_fileset *handle, const char * key, int length,
 		int32_t *values);
+int artio_parameter_get_int_array_index(artio_fileset *handle, const char * key, 
+		int index, int32_t *values);
 
 int artio_parameter_set_string(artio_fileset *handle, const char * key, char * value);
-int artio_parameter_get_string(artio_fileset *handle, const char * key, char * value, int max_length);
+int artio_parameter_get_string(artio_fileset *handle, const char * key, char * value );
 
 int artio_parameter_set_string_array(artio_fileset *handle, const char * key,
 		int length, char ** values);
 int artio_parameter_get_string_array(artio_fileset *handle, const char * key,
-		int length, char ** values, int max_length);
+		int length, char ** values );
+int artio_parameter_get_string_array_index(artio_fileset *handle, const char * key,
+		int index, char * values );
 
 int artio_parameter_set_float(artio_fileset *handle, const char * key, float value);
 int artio_parameter_get_float(artio_fileset *handle, const char * key, float * value);
@@ -152,6 +181,8 @@
 		int length, float *values);
 int artio_parameter_get_float_array(artio_fileset *handle, const char * key,
 		int length, float * values);
+int artio_parameter_get_float_array_index(artio_fileset *handle, const char * key,
+		int index, float * values);
 
 int artio_parameter_set_double(artio_fileset *handle, const char * key, double value);
 int  artio_parameter_get_double(artio_fileset *handle, const char * key, double * value);
@@ -160,6 +191,8 @@
 		int length, double * values);
 int artio_parameter_get_double_array(artio_fileset *handle, const char * key,
         int length, double *values);
+int artio_parameter_get_double_array_index(artio_fileset *handle, const char * key,
+		int index, double *values);
 
 int artio_parameter_set_long(artio_fileset *handle, const char * key, int64_t value);
 int artio_parameter_get_long(artio_fileset *handle, const char * key, int64_t *value);
@@ -168,10 +201,12 @@
         int length, int64_t *values);
 int artio_parameter_get_long_array(artio_fileset *handle, const char * key,
         int length, int64_t *values);
+int artio_parameter_get_long_array_index(artio_fileset *handle, const char * key,
+		int index, int64_t *values);
 
 /* public grid interface */
-typedef void (* GridCallBack)( int64_t sfc_index, int level,
-		double *pos, float * variables, int *refined );
+typedef void (* artio_grid_callback)( int64_t sfc_index, int level,
+		double *pos, float * variables, int *refined, void *params );
 
 /*
  * Description:	Add a grid component to a fileset open for writing
@@ -280,16 +315,31 @@
  *  sfc2			the end sfc index
  *  max_level_to_read		max level to read for each oct tree
  *  option			1. refined nodes; 2 leaf nodes; 3 all nodes
- *  callback			callback function
+ *  callback        callback function
+ *  params          a pointer to user-defined data passed to the callback
  */
-int artio_grid_read_sfc_range(artio_fileset *handle, int64_t sfc1, int64_t sfc2, 
+int artio_grid_read_sfc_range_levels(artio_fileset *handle, 
+		int64_t sfc1, int64_t sfc2, 
 		int min_level_to_read, int max_level_to_read, 
-		int options, GridCallBack callback);
+		int options, artio_grid_callback callback,
+		void *params );
 
+int artio_grid_read_sfc_range(artio_fileset *handle,
+        int64_t sfc1, int64_t sfc2, int options,
+        artio_grid_callback callback,
+		void *params );
 
-typedef void (* ParticleCallBack)(int64_t sfc_index,
-		int species, int subspecies, int64_t pid, 
-		double *primary_variables, float *secondary_variables );
+int artio_grid_read_selection(artio_fileset *handle,
+		artio_selection *selection, int options,
+		artio_grid_callback callback,
+		void *params );
+
+int artio_grid_read_selection_levels( artio_fileset *handle,
+		artio_selection *selection, 
+		int min_level_to_read, int max_level_to_read,
+		int options,
+		artio_grid_callback callback,
+		void *params );
 
 /**
  *  header			head file name
@@ -384,6 +434,11 @@
 			double *primary_variables, float *secondary_variables);
 
 int artio_particle_cache_sfc_range(artio_fileset *handle, int64_t sfc_start, int64_t sfc_end);
+int artio_particle_clear_sfc_cache(artio_fileset *handle );                                                          
+
+typedef void (* artio_particle_callback)(int64_t sfc_index,
+		int species, int subspecies, int64_t pid, 
+		double *primary_variables, float *secondary_variables, void *params );
 
 /*
  * Description: Read a segment of particles
@@ -391,13 +446,41 @@
  *  handle			file pointer
  *  sfc1			the start sfc index
  *  sfc2			the end sfc index
- *  start_species		the first particle species to read
- *  end_species			the last particle species to read
- *  callback			callback function
+ *  start_species   the first particle species to read
+ *  end_species     the last particle species to read
+ *  callback        callback function
+ *  params          user defined data passed to the callback function
  */
 int artio_particle_read_sfc_range(artio_fileset *handle, 
 		int64_t sfc1, int64_t sfc2, 
-		int start_species, int end_species,
-		ParticleCallBack callback);
+		artio_particle_callback callback,
+		void *params);
+
+int artio_particle_read_sfc_range_species( artio_fileset *handle, 
+        int64_t sfc1, int64_t sfc2, 
+        int start_species, int end_species,
+        artio_particle_callback callback,
+		void *params);
+
+int artio_particle_read_selection(artio_fileset *handle,
+        artio_selection *selection, artio_particle_callback callback,
+		void *params );
+
+int artio_particle_read_selection_species( artio_fileset *handle,
+        artio_selection *selection, int start_species, int end_species,
+        artio_particle_callback callback,
+		void *params );
+
+artio_selection *artio_selection_allocate( artio_fileset *handle );
+artio_selection *artio_select_all( artio_fileset *handle );
+artio_selection *artio_select_volume( artio_fileset *handle, double lpos[3], double rpos[3] );
+artio_selection *artio_select_cube( artio_fileset *handle, double center[3], double size );
+int artio_selection_add_root_cell( artio_selection *selection, int coords[3] );                   
+int artio_selection_destroy( artio_selection *selection );
+void artio_selection_print( artio_selection *selection );
+int artio_selection_iterator( artio_selection *selection,
+         int64_t max_range_size, int64_t *start, int64_t *end );
+int artio_selection_iterator_reset( artio_selection *selection );
+int64_t artio_selection_size( artio_selection *selection );
 
 #endif /* __ARTIO_H__ */

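The new selection interface replaces a single sfc1..sfc2 read with
selections that iterate as contiguous SFC ranges; root_sfc_ranges_all in
the Cython wrapper earlier in this changeset is one consumer. A sketch of
how a caller might chunk work over those ranges (handle, selector, and
fields are assumed to come from the surrounding yt frontend; this is an
illustration, not the frontend's actual driver loop):

    def read_all_chunks(handle, selector, fields):
        # read_grid_chunk returns (fcoords, ires, data) per the wrapper above
        chunks = []
        for sfc_start, sfc_end in handle.root_sfc_ranges(selector):
            chunks.append(handle.read_grid_chunk(selector, sfc_start,
                                                 sfc_end, fields))
        return chunks
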
diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/artio_headers/artio_grid.c
--- a/yt/frontends/artio/artio_headers/artio_grid.c
+++ b/yt/frontends/artio/artio_headers/artio_grid.c
@@ -4,13 +4,13 @@
  *  Created on: May 10, 2011
  *      Author: Yongen Yu
  */
-#include <math.h>
+#include "artio.h"
+#include "artio_internal.h"
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <stdint.h>
-
-#include "artio.h"
-#include "artio_internal.h"
+#include <math.h>
 
 int artio_grid_find_file(artio_grid_file *ghandle, int start, int end, int64_t sfc);
 artio_grid_file *artio_grid_file_allocate(void);
@@ -50,9 +50,11 @@
 		return ARTIO_ERR_MEMORY_ALLOCATION;
 	}
 
-	/* load grid parameters from header file (should be doing error handling...) */
-	artio_parameter_get_int(handle, "num_grid_variables", &ghandle->num_grid_variables);
-	artio_parameter_get_int(handle, "num_grid_files", &ghandle->num_grid_files);
+	/* load grid parameters from header file */
+	if ( artio_parameter_get_int(handle, "num_grid_files", &ghandle->num_grid_files) != ARTIO_SUCCESS ||
+			artio_parameter_get_int( handle, "num_grid_variables", &ghandle->num_grid_variables ) != ARTIO_SUCCESS ) {
+		return ARTIO_ERR_GRID_DATA_NOT_FOUND;
+	}
 
 	ghandle->file_sfc_index = (int64_t *)malloc(sizeof(int64_t) * (ghandle->num_grid_files + 1));
 	if ( ghandle->file_sfc_index == NULL ) {
@@ -344,6 +346,13 @@
 		ghandle->next_level_pos = NULL;
 		ghandle->cur_level_pos = NULL;
 		ghandle->next_level_oct = -1;
+
+		ghandle->buffer_size = artio_fh_buffer_size;
+		ghandle->buffer = malloc(ghandle->buffer_size);
+		if ( ghandle->buffer == NULL ) {
+			free(ghandle);
+			return NULL;
+		}
     }
 	return ghandle;
 }
@@ -366,6 +375,7 @@
 	if ( ghandle->file_sfc_index != NULL ) free(ghandle->file_sfc_index);
 	if ( ghandle->next_level_pos != NULL ) free(ghandle->next_level_pos);
 	if ( ghandle->cur_level_pos != NULL ) free(ghandle->cur_level_pos);
+	if ( ghandle->buffer != NULL ) free( ghandle->buffer );
 
 	free(ghandle);
 }
@@ -513,10 +523,16 @@
 		return ARTIO_ERR_INVALID_SFC_RANGE;
 	}
 
+	ghandle = handle->grid;
+
+	/* check if we've already cached the range */
+	if ( start >= ghandle->cache_sfc_begin &&
+			end <= ghandle->cache_sfc_end ) {
+		return ARTIO_SUCCESS;
+	}
+
 	artio_grid_clear_sfc_cache(handle);
 
-	ghandle = handle->grid;
-
 	first_file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, start);
 	last_file = artio_grid_find_file(ghandle, first_file, ghandle->num_grid_files, end);
 
@@ -527,11 +543,20 @@
 		return ARTIO_ERR_MEMORY_ALLOCATION;
 	}
 
+	if ( ghandle->cur_file != -1 ) {
+		artio_file_detach_buffer( ghandle->ffh[ghandle->cur_file]);
+		ghandle->cur_file = -1;
+	}
+
 	cur = 0;
 	for (i = first_file; i <= last_file; i++) {
 		first = MAX( 0, start - ghandle->file_sfc_index[i] );
 		count = MIN( ghandle->file_sfc_index[i+1], end+1 )
 				- MAX( start, ghandle->file_sfc_index[i]);
+
+		artio_file_attach_buffer( ghandle->ffh[i], 
+			ghandle->buffer, ghandle->buffer_size );
+
 		ret = artio_file_fseek(ghandle->ffh[i], 
 				sizeof(int64_t) * first, ARTIO_SEEK_SET);
 		if ( ret != ARTIO_SUCCESS ) return ret;
@@ -541,6 +566,7 @@
 				count, ARTIO_TYPE_LONG);
 		if ( ret != ARTIO_SUCCESS ) return ret;
 
+		artio_file_detach_buffer( ghandle->ffh[i] );
 		cur += count;
 	}
 
@@ -608,6 +634,7 @@
 int artio_grid_seek_to_sfc(artio_fileset *handle, int64_t sfc) {
 	int64_t offset;
 	artio_grid_file *ghandle;
+	int file;
 
 	if ( handle == NULL ) {
 		return ARTIO_ERR_INVALID_HANDLE;
@@ -626,7 +653,17 @@
 		return ARTIO_ERR_INVALID_SFC;
 	}
 
-	ghandle->cur_file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, sfc);
+	file = artio_grid_find_file(ghandle, 0, ghandle->num_grid_files, sfc );
+	if ( file != ghandle->cur_file ) {
+		if ( ghandle->cur_file != -1 ) {
+			artio_file_detach_buffer( ghandle->ffh[ghandle->cur_file] );
+		}
+		if ( ghandle->buffer_size > 0 ) {
+			artio_file_attach_buffer( ghandle->ffh[file], 
+					ghandle->buffer, ghandle->buffer_size );
+		}
+		ghandle->cur_file = file;
+	}
 	offset = ghandle->sfc_offset_table[sfc - ghandle->cache_sfc_begin];
 	return artio_file_fseek(ghandle->ffh[ghandle->cur_file], 
 			offset, ARTIO_SEEK_SET);
@@ -1095,11 +1132,12 @@
 	return ARTIO_SUCCESS;
 }
 
-int artio_grid_read_sfc_range(artio_fileset *handle, 
+int artio_grid_read_sfc_range_levels(artio_fileset *handle, 
 		int64_t sfc1, int64_t sfc2,
 		int min_level_to_read, int max_level_to_read, 
 		int options,
-		GridCallBack callback) {
+		artio_grid_callback callback, 
+		void *params ) {
 	int i, j;
 	int64_t sfc;
 	int oct, level;
@@ -1122,13 +1160,13 @@
 		return ARTIO_ERR_INVALID_FILESET_MODE;
 	}
 
-	if ( ( options & ARTIO_RETURN_CELLS && 
+	if ( ( (options & ARTIO_RETURN_CELLS) && 
 				!(options & ARTIO_READ_LEAFS) && 
 				!(options & ARTIO_READ_REFINED)) ||
-			( options & ARTIO_RETURN_OCTS && 
-				(options & ARTIO_READ_LEAFS || 
-				options & ARTIO_READ_REFINED ) &&
-				!(options & ARTIO_READ_ALL) ) ) {
+			( (options & ARTIO_RETURN_OCTS) && 
+				((options & ARTIO_READ_LEAFS) || 
+				(options & ARTIO_READ_REFINED) ) &&
+				!((options & ARTIO_READ_ALL) == ARTIO_READ_ALL ) ) ) {
 		return ARTIO_ERR_INVALID_CELL_TYPES;
 	}
 	
@@ -1163,11 +1201,11 @@
 			return ret;
 		}
 
-		if (min_level_to_read == 0 && (options & ARTIO_READ_ALL || 
-				(options & ARTIO_READ_REFINED && root_tree_levels > 0) || 
+		if (min_level_to_read == 0 && 
+				((options & ARTIO_READ_REFINED && root_tree_levels > 0) || 
 				(options & ARTIO_READ_LEAFS && root_tree_levels == 0)) ) {
 			refined = (root_tree_levels > 0) ? 1 : 0;
-			callback( sfc, 0, pos, variables, &refined );
+			callback( sfc, 0, pos, variables, &refined, params );
 		}
 
 		for (level = MAX(min_level_to_read,1); 
@@ -1188,18 +1226,17 @@
 				}
 
 				if ( options & ARTIO_RETURN_OCTS ) {
-					callback( sfc, level, pos, variables, oct_refined );
+					callback( sfc, level, pos, variables, oct_refined, params );
 				} else {
 					for (i = 0; i < 8; i++) {
-						if (options & ARTIO_READ_ALL || 
-								(options & ARTIO_READ_REFINED && oct_refined[i]) ||
+						if ( (options & ARTIO_READ_REFINED && oct_refined[i]) ||
 								(options & ARTIO_READ_LEAFS && !oct_refined[i]) ) {
 							for ( j = 0; j < 3; j++ ) {
 								cell_pos[j] = pos[j] + ghandle->cell_size_level*oct_pos_offsets[i][j];
 							}
 							callback( sfc, level, cell_pos, 
 									&variables[i * ghandle->num_grid_variables],
-									&oct_refined[i] );
+									&oct_refined[i], params );
 						}
 					}
 				}
@@ -1216,3 +1253,62 @@
 
 	return ARTIO_SUCCESS;
 }
+
+int artio_grid_read_sfc_range(artio_fileset *handle,
+        int64_t sfc1, int64_t sfc2,
+		int options,
+		artio_grid_callback callback,
+		void *params) {
+
+	if ( handle == NULL ) {
+		return ARTIO_ERR_INVALID_HANDLE;
+	}
+
+	if (handle->open_mode != ARTIO_FILESET_READ ||
+			!(handle->open_type & ARTIO_OPEN_GRID) ||
+			handle->grid == NULL ) {
+		return ARTIO_ERR_INVALID_FILESET_MODE;
+	}
+
+	return artio_grid_read_sfc_range_levels( handle, sfc1, sfc2, 
+			0, handle->grid->file_max_level, options, callback, params );
+}
+
+int artio_grid_read_selection(artio_fileset *handle,
+        artio_selection *selection, int options, artio_grid_callback callback,
+		void *params ) {
+	if ( handle == NULL ) {
+		return ARTIO_ERR_INVALID_HANDLE;
+	}
+
+	if (handle->open_mode != ARTIO_FILESET_READ ||
+			!(handle->open_type & ARTIO_OPEN_GRID) ||
+			handle->grid == NULL ) {
+		return ARTIO_ERR_INVALID_FILESET_MODE;
+	}
+
+	return artio_grid_read_selection_levels( handle, selection,
+			0, handle->grid->file_max_level, options, callback, params );
+}	
+
+int artio_grid_read_selection_levels( artio_fileset *handle,
+        artio_selection *selection, 
+        int min_level_to_read, int max_level_to_read,
+		int options,
+        artio_grid_callback callback, void *params ) {
+	int ret;
+	int64_t start, end;
+
+	/* loop over selected ranges */
+	artio_selection_iterator_reset( selection );
+	while ( artio_selection_iterator( selection, 
+			handle->num_root_cells, 
+			&start, &end ) == ARTIO_SUCCESS ) {
+		ret = artio_grid_read_sfc_range_levels( handle, start, end,
+				min_level_to_read, max_level_to_read, options, 
+				callback, params);
+		if ( ret != ARTIO_SUCCESS ) return ret;
+	}
+
+	return ARTIO_SUCCESS;
+}

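artio_grid_cache_sfc_range now returns early when the requested range
already lies inside the cached one, skipping the clear-and-reload. The same
idea in a small, hypothetical Python wrapper (load_range stands in for the
file seek and offset-table read; it is not an ARTIO call):

    class SfcOffsetCache(object):
        def __init__(self, load_range):
            self._load_range = load_range  # callable(start, end) -> offsets
            self.cache_begin = self.cache_end = None
            self.offsets = None

        def ensure(self, start, end):
            # short-circuit if the request is already covered by the cache
            if (self.offsets is not None and
                    self.cache_begin <= start and end <= self.cache_end):
                return
            self.offsets = self._load_range(start, end)
            self.cache_begin, self.cache_end = start, end
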
diff -r e3c8b836f606e07d95c323036b3e99d2623027e6 -r b837a203a571c83ede9b40c5ee547c878dd0345b yt/frontends/artio/artio_headers/artio_internal.h
--- a/yt/frontends/artio/artio_headers/artio_internal.h
+++ b/yt/frontends/artio/artio_headers/artio_internal.h
@@ -9,12 +9,22 @@
 #ifndef __ARTIO_INTERNAL_H__
 #define __ARTIO_INTERNAL_H__
 
+#ifdef ARTIO_MPI
+#include <mpi.h>
+#endif
+
 #include <stdlib.h>
 #include <stdint.h>
+#include <limits.h>
 
-#include "artio.h"
 #include "artio_endian.h"
 
+#ifndef ARTIO_DEFAULT_BUFFER_SIZE
+#define ARTIO_DEFAULT_BUFFER_SIZE	65536
+#endif
+
+extern int artio_fh_buffer_size;
+
 #define nDim			3
 
 #ifndef MIN
@@ -24,14 +34,23 @@
 #define MAX(x,y)        (((x) > (y)) ? (x): (y))
 #endif
 
-#ifdef ARTIO_MPI
-#include <mpi.h>
+/* limit individual writes to 32-bit safe quantity */
+#define ARTIO_IO_MAX	(1<<30)
+
+#ifdef INT64_MAX
+#define ARTIO_INT64_MAX	INT64_MAX
+#else
+#define ARTIO_INT64_MAX 0x7fffffffffffffffLL
 #endif
 
 typedef struct ARTIO_FH artio_fh;
 
 typedef struct artio_particle_file_struct {
 	artio_fh **ffh;
+
+	void *buffer;
+	int buffer_size;
+
 	int num_particle_files;
 	int64_t *file_sfc_index;
 	int64_t cache_sfc_begin;
@@ -51,6 +70,10 @@
 
 typedef struct artio_grid_file_struct {
 	artio_fh **ffh;
+
+	void *buffer;
+	int buffer_size;
+
 	int num_grid_variables;
 	int num_grid_files;
 	int64_t *file_sfc_index;
@@ -109,27 +132,38 @@
 	int64_t num_root_cells;
 	int sfc_type;
 	int nBitsPerDim;
+	int num_grid;
 	
 	parameter_list *parameters;
 	artio_grid_file *grid;
 	artio_particle_file *particle;
 };
 
+struct artio_selection_struct {
+    int64_t *list;
+    int size;
+    int num_ranges; 
+	int cursor;
+	int64_t subcycle;
+	artio_fileset *fileset;
+};
+
 #define ARTIO_FILESET_READ      0
 #define ARTIO_FILESET_WRITE     1
 
 #define ARTIO_MODE_READ         1
 #define ARTIO_MODE_WRITE        2
-#define ARTIO_MODE_DIRECT       4
-#define ARTIO_MODE_ACCESS       8
-#define ARTIO_MODE_ENDIAN_SWAP 16
+#define ARTIO_MODE_ACCESS       4
+#define ARTIO_MODE_ENDIAN_SWAP  8
 
 #define ARTIO_SEEK_SET          0
 #define ARTIO_SEEK_CUR          1
 #define ARTIO_SEEK_END			2
 
 artio_fh *artio_file_fopen( char * filename, int amode, const artio_context *context );
-int artio_file_fwrite(artio_fh *handle, void *buf, int64_t count, int type );
+int artio_file_attach_buffer( artio_fh *handle, void *buf, int buf_size );
+int artio_file_detach_buffer( artio_fh *handle );
+int artio_file_fwrite(artio_fh *handle, const void *buf, int64_t count, int type );
 int artio_file_ftell( artio_fh *handle, int64_t *offset );
 int artio_file_fflush(artio_fh *handle);
 int artio_file_fseek(artio_fh *ffh, int64_t offset, int whence);

This diff is so big that we needed to truncate the remainder.
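
artio_internal.h above now caps any single write at ARTIO_IO_MAX = 1<<30
elements to stay 32-bit safe. The chunking loop such a cap implies, sketched
in Python (write_raw is a stand-in for the underlying fwrite or MPI write,
not an ARTIO function):

    ARTIO_IO_MAX = 1 << 30  # mirrors the new cap in artio_internal.h

    def write_in_chunks(write_raw, buf, count):
        # never issue a single write larger than ARTIO_IO_MAX elements
        done = 0
        while done < count:
            n = min(ARTIO_IO_MAX, count - done)
            write_raw(buf[done:done + n])
            done += n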

https://bitbucket.org/yt_analysis/yt/commits/389b19a43b57/
Changeset:   389b19a43b57
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 21:57:58
Summary:     making art file matching more generic
Affected #:  1 file

diff -r b837a203a571c83ede9b40c5ee547c878dd0345b -r 389b19a43b57e497037ee19fd98616cc00336669 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -74,16 +74,12 @@
     'particle_metallicity',
 ]
 
+
 filename_pattern = {
-    'amr': '10MpcBox_csf512_%s.d',
-    'particle_header': 'PMcrd%s.DAT',
-    'particle_data': 'PMcrs0%s.DAT',
-    'particle_stars': 'stars_%s.dat'
-}
-
-filename_pattern_hf = {
-    'particle_header': 'PMcrd_%s.DAT',
-    'particle_data': 'PMcrs0_%s.DAT',
+    'amr': ['10MpcBox_','.d'], 
+    'particle_header': ['PMcrd','.DAT'], #
+    'particle_data': ['PMcrs','.DAT'],
+    'particle_stars': ['stars','.dat']
 }
 
 amr_header_struct = [


https://bitbucket.org/yt_analysis/yt/commits/ece2c9bc6a3a/
Changeset:   ece2c9bc6a3a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 21:58:16
Summary:     changes to smallest dx
Affected #:  1 file

diff -r 389b19a43b57e497037ee19fd98616cc00336669 -r ece2c9bc6a3a1663914f319438480701ee313b7b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -30,6 +30,8 @@
 import stat
 import weakref
 import cStringIO
+import difflib
+import glob
 
 from yt.funcs import *
 from yt.geometry.oct_geometry_handler import \
@@ -86,7 +88,15 @@
         self.max_level = pf.max_level
         self.float_type = np.float64
         super(ARTGeometryHandler, self).__init__(pf, data_style)
-        pf.domain_width = 1.0/pf.domain_dimensions.astype('f8')
+    
+    def get_smallest_dx(self):
+        """
+        Returns (in code units) the smallest cell size in the simulation.
+        """
+        #Overloaded
+        pf = self.parameter_file
+        return (1.0/pf.domain_dimensions.astype('f8') /
+                (2**self.max_level)).min()
 
     def _initialize_oct_handler(self):
         """
@@ -208,38 +218,20 @@
         Given the AMR base filename, attempt to find the
         particle header, star files, etc.
         """
-        prefix, suffix = filename_pattern['amr'].split('%s')
-        affix = os.path.basename(file_amr).replace(prefix, '')
-        affix = affix.replace(suffix, '')
-        affix = affix.replace('_', '')
-        full_affix = affix
-        affix = affix[1:-1]
-        dirname = os.path.dirname(file_amr)
-        for fp in (filename_pattern_hf, filename_pattern):
-            for filetype, pattern in fp.items():
-                # if this attribute is already set skip it
-                if getattr(self, "file_"+filetype, None) is not None:
-                    continue
-                # sometimes the affix is surrounded by an extraneous _
-                # so check for an extra character on either side
-                check_filename = dirname+'/'+pattern % ('?%s?' % affix)
-                filenames = glob.glob(check_filename)
-                if len(filenames) > 1:
-                    check_filename_strict = \
-                        dirname+'/'+pattern % ('?%s' % full_affix[1:])
-                    filenames = glob.glob(check_filename_strict)
-
-                if len(filenames) == 1:
-                    setattr(self, "file_"+filetype, filenames[0])
-                    mylog.info('discovered %s:%s', filetype, filenames[0])
-                elif len(filenames) > 1:
-                    setattr(self, "file_"+filetype, None)
-                    mylog.info("Ambiguous number of files found for %s",
-                               check_filename)
-                    for fn in filenames:
-                        faffix = float(affix)
-                else:
-                    setattr(self, "file_"+filetype, None)
+        base_prefix, base_suffix = filename_pattern['amr']
+        possibles = glob.glob(os.path.dirname(file_amr)+"/*")
+        for filetype, (prefix, suffix) in filename_pattern.iteritems():
+            # if this attribute is already set skip it
+            if getattr(self, "file_"+filetype, None) is not None:
+                continue
+            stripped = file_amr.replace(base_prefix,prefix)
+            stripped = stripped.replace(base_suffix,suffix)
+            match, = difflib.get_close_matches(stripped,possibles,1,0.6)
+            if match is not None: 
+                mylog.info('discovered %s:%s', filetype, match)
+                setattr(self, "file_"+filetype, match)
+            else:
+                setattr(self, "file_"+filetype, None)
 
     def __repr__(self):
         return self.file_amr.split('/')[-1]
@@ -427,7 +419,7 @@
         This could differ for other formats.
         """
         f = ("%s" % args[0])
-        prefix, suffix = filename_pattern['amr'].split('%s')
+        prefix, suffix = filename_pattern['amr']
         with open(f, 'rb') as fh:
             try:
                 amr_header_vals = read_attrs(fh, amr_header_struct, '>')

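The rewritten _find_files swaps the AMR filename's (prefix, suffix) pair for
each file type and lets difflib.get_close_matches pick the closest file
actually on disk. A standalone sketch with invented filenames; note that
get_close_matches returns an empty list, never None, when nothing clears the
cutoff, so testing the list's length is a safer guard than the
"match is not None" check above:

    import difflib

    patterns = {'amr': ['10MpcBox_', '.d'],
                'particle_header': ['PMcrd', '.DAT']}
    # hypothetical directory listing
    possibles = ['10MpcBox_csf512_a0.500.d', 'PMcrd_a0.500.DAT',
                 'stars_a0.500.dat']
    file_amr = '10MpcBox_csf512_a0.500.d'

    base_prefix, base_suffix = patterns['amr']
    for filetype, (prefix, suffix) in patterns.iteritems():
        candidate = file_amr.replace(base_prefix, prefix)
        candidate = candidate.replace(base_suffix, suffix)
        matches = difflib.get_close_matches(candidate, possibles, 1, 0.6)
        print filetype, (matches[0] if matches else None)
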

https://bitbucket.org/yt_analysis/yt/commits/f96249c8e477/
Changeset:   f96249c8e477
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:04:01
Summary:     fixing case when particles are not loaded in
Affected #:  1 file

diff -r ece2c9bc6a3a1663914f319438480701ee313b7b -r f96249c8e477efc886c7d91060b7cb4af6a1de62 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -123,11 +123,14 @@
                               particle_star_fields)
         self.field_list = list(self.field_list)
         # now generate all of the possible particle fields
-        wspecies = self.parameter_file.parameters['wspecies']
-        nspecies = len(wspecies)
-        self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
-        for specie in range(nspecies):
-            self.parameter_file.particle_types.append("specie%i" % specie)
+        if "wspecies" in self.parameter_file.parameter.keys():
+            wspecies = self.parameter_file.parameters['wspecies']
+            nspecies = len(wspecies)
+            self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
+            for specie in range(nspecies):
+                self.parameter_file.particle_types.append("specie%i" % specie)
+        else:
+            self.parameter_file.particle_types = []
 
     def _setup_classes(self):
         dd = self._get_data_reader_dict()

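With the guard above, particle_types is only populated when the wspecies
parameter exists; otherwise it is left empty. For a two-species file the
resulting list looks like this (the weights are made up):

    wspecies = [1.0, 8.0]  # hypothetical species weights
    particle_types = ["all", "darkmatter", "stars"]
    for specie in range(len(wspecies)):
        particle_types.append("specie%i" % specie)
    # -> ['all', 'darkmatter', 'stars', 'specie0', 'specie1']
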

https://bitbucket.org/yt_analysis/yt/commits/95d450c43218/
Changeset:   95d450c43218
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:05:42
Summary:     fixed typo
Affected #:  1 file

diff -r f96249c8e477efc886c7d91060b7cb4af6a1de62 -r 95d450c432181dd049d78829e9f778fb14c45c32 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -123,7 +123,7 @@
                               particle_star_fields)
         self.field_list = list(self.field_list)
         # now generate all of the possible particle fields
-        if "wspecies" in self.parameter_file.parameter.keys():
+        if "wspecies" in self.parameter_file.parameters.keys():
             wspecies = self.parameter_file.parameters['wspecies']
             nspecies = len(wspecies)
             self.parameter_file.particle_types = ["all", "darkmatter", "stars"]


https://bitbucket.org/yt_analysis/yt/commits/44808dc45298/
Changeset:   44808dc45298
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:17:06
Summary:     changes to _file
Affected #:  1 file

diff -r 95d450c432181dd049d78829e9f778fb14c45c32 -r 44808dc452987ca94b74f6c0d2518823f26b4b08 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -153,7 +153,6 @@
             # and build a subset=mask of domains
             subsets = []
             for d, c in zip(self.domains, counts):
-                nocts = d.level_count[d.domain_level]
                 if c < 1:
                     continue
                 subset = ARTDomainSubset(d, mask, c, d.domain_level)
@@ -199,10 +198,10 @@
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
-        self.file_amr = filename
-        self.file_particle_header = file_particle_header
-        self.file_particle_data = file_particle_data
-        self.file_particle_stars = file_particle_stars
+        self._file_amr = filename
+        self._file_particle_header = file_particle_header
+        self._file_particle_data = file_particle_data
+        self._file_particle_stars = file_particle_stars
         self._find_files(filename)
         self.parameter_filename = filename
         self.skip_particles = skip_particles
@@ -225,19 +224,19 @@
         possibles = glob.glob(os.path.dirname(file_amr)+"/*")
         for filetype, (prefix, suffix) in filename_pattern.iteritems():
             # if this attribute is already set skip it
-            if getattr(self, "file_"+filetype, None) is not None:
+            if getattr(self, "_file_"+filetype, None) is not None:
                 continue
             stripped = file_amr.replace(base_prefix,prefix)
             stripped = stripped.replace(base_suffix,suffix)
             match, = difflib.get_close_matches(stripped,possibles,1,0.6)
             if match is not None: 
                 mylog.info('discovered %s:%s', filetype, match)
-                setattr(self, "file_"+filetype, match)
+                setattr(self, "_file_"+filetype, match)
             else:
-                setattr(self, "file_"+filetype, None)
+                setattr(self, "_file_"+filetype, None)
 
     def __repr__(self):
-        return self.file_amr.split('/')[-1]
+        return self._file_amr.split('/')[-1]
 
     def _set_units(self):
         """
@@ -333,7 +332,7 @@
         self.parameters.update(constants)
         self.parameters['Time'] = 1.0
         # read the amr header
-        with open(self.file_amr, 'rb') as f:
+        with open(self._file_amr, 'rb') as f:
             amr_header_vals = read_attrs(f, amr_header_struct, '>')
             for to_skip in ['tl', 'dtl', 'tlold', 'dtlold', 'iSO']:
                 skipped = skip(f, endian='>')
@@ -374,8 +373,8 @@
             self.root_level = root_level
             mylog.info("Using root level of %02i", self.root_level)
         # read the particle header
-        if not self.skip_particles and self.file_particle_header:
-            with open(self.file_particle_header, "rb") as fh:
+        if not self.skip_particles and self._file_particle_header:
+            with open(self._file_particle_header, "rb") as fh:
                 particle_header_vals = read_attrs(
                     fh, particle_header_struct, '>')
                 fh.seek(seek_extras)
@@ -427,7 +426,7 @@
             try:
                 amr_header_vals = read_attrs(fh, amr_header_struct, '>')
                 return True
-            except:
+            except AssertionError:
                 return False
         return False
 
@@ -562,7 +561,7 @@
         if self._level_oct_offsets is not None:
             return self._level_oct_offsets
         # We now have to open the file and calculate it
-        f = open(self.pf.file_amr, "rb")
+        f = open(self.pf._file_amr, "rb")
         nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
             _count_art_octs(f,  self.pf.child_grid_offset, self.pf.min_level,
                             self.pf.max_level)
@@ -588,7 +587,7 @@
         # but on level 1 instead of 128^3 octs, we have 256^3 octs
         # leave this code here instead of static output - it's memory intensive
         self.level_offsets
-        f = open(self.pf.file_amr, "rb")
+        f = open(self.pf._file_amr, "rb")
         # add the root *cell* not *oct* mesh
         level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2


https://bitbucket.org/yt_analysis/yt/commits/9c66706d483d/
Changeset:   9c66706d483d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:17:54
Summary:     fixing _file in io.py
Affected #:  1 file

diff -r 44808dc452987ca94b74f6c0d2518823f26b4b08 -r 9c66706d483dc604ef648a9ee870327437b590f9 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -52,7 +52,7 @@
         for chunk in chunks:
             for subset in chunk.objs:
                 # Now we read the entire thing
-                f = open(subset.domain.pf.file_amr, "rb")
+                f = open(subset.domain.pf._file_amr, "rb")
                 # This contains the boundary information, so we skim through
                 # and pick off the right vectors
                 rv = subset.fill(f, fields)
@@ -77,8 +77,8 @@
             ptmax = ws[-1]
             npt = ls[-1]
             nstars = ls[-1]-ls[-2]
-            file_particle = pf.file_particle_data
-            file_stars = pf.file_particle_stars
+            file_particle = pf._file_particle_data
+            file_stars = pf._file_particle_stars
             ftype_old = None
             for field in fields:
                 if field in fields_read: continue
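
In the hunk above, ws and ls are the "wspecies" and "lspecies" arrays:
lspecies holds cumulative particle counts per species, so ls[-1] is the total
particle count and ls[-1]-ls[-2] is the size of the last (star) population.
A small sketch, with made-up counts, of how those cumulative totals become
per-species index ranges, the same arithmetic used by _determine_field_size
further down:

    import numpy as np

    # Hypothetical cumulative counts: entry i is one past the index of
    # the last particle of species i.
    lspecies = np.array([128, 192, 224, 240], dtype='int64')

    # Species i occupies the half-open index range [idxa[i], idxb[i]).
    idxa = np.concatenate(([0], lspecies[:-1]))
    idxb = lspecies
    assert list(idxb - idxa) == [128, 64, 32, 16]

    # By the convention in the reader above, the final species holds
    # the stars:
    nstars = lspecies[-1] - lspecies[-2]
    assert nstars == 16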


https://bitbucket.org/yt_analysis/yt/commits/8db3dadfe6ff/
Changeset:   8db3dadfe6ff
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:25:32
Summary:     Split _read_amr into root and by-level reads
Affected #:  1 file

diff -r 9c66706d483dc604ef648a9ee870327437b590f9 -r 8db3dadfe6ff103740f922b4ff46a7e91ee29971 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -115,7 +115,10 @@
         mylog.debug("Allocating %s octs", self.total_octs)
         self.oct_handler.allocate_domains(self.octs_per_domain)
         for domain in self.domains:
-            domain._read_amr(self.oct_handler)
+            if domain.domain_level==0:
+                domain._read_amr_root(self.oct_handler)
+            else:
+                domain._read_amr_level(self.oct_handler)
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
@@ -499,23 +502,20 @@
                 self.domain.domain_id,
                 level, dest, source, self.mask, level_offset)
         else:
-            def subchunk(count, size):
-                for i in range(0, count, size):
-                    yield i, i+min(size, count-i)
-            for noct_range in subchunk(no, long(1e8)):
-                source = _read_child_level(
-                    content, self.domain.level_child_offsets,
-                    self.domain.level_offsets,
-                    self.domain.level_count, level, fields,
-                    self.domain.pf.domain_dimensions,
-                    self.domain.pf.parameters['ncell0'],
-                    noct_range=noct_range)
-                nocts_filling = noct_range[1]-noct_range[0]
-                level_offset += oct_handler.fill_level(self.domain.domain_id,
-                                                       level, dest, source,
-                                                       self.mask, level_offset,
-                                                       noct_range[0],
-                                                       nocts_filling)
+            noct_range = [0,no]
+            source = _read_child_level(
+                content, self.domain.level_child_offsets,
+                self.domain.level_offsets,
+                self.domain.level_count, level, fields,
+                self.domain.pf.domain_dimensions,
+                self.domain.pf.parameters['ncell0'],
+                noct_range=noct_range)
+            nocts_filling = noct_range[1]-noct_range[0]
+            level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                                   level, dest, source,
+                                                   self.mask, level_offset,
+                                                   noct_range[0],
+                                                   nocts_filling)
         return dest
 
 
@@ -575,17 +575,29 @@
         self._level_count = inoll
         return self._level_oct_offsets
 
-    def _read_amr(self, oct_handler):
+
+    def _read_amr_level(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
            For each oct, only the position, index, level and domain
            are needed - its position in the octree is found automatically.
            The most important is finding all the information to feed
            oct_handler.add
         """
-        # on the root level we typically have 64^3 octs
-        # giving rise to 128^3 cells
-        # but on level 1 instead of 128^3 octs, we have 256^3 octs
-        # leave this code here instead of static output - it's memory intensive
+        self.level_offsets
+        f = open(self.pf._file_amr, "rb")
+        level = self.domain_level
+        unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
+            f,
+            self._level_oct_offsets, level,
+            coarse_grid=self.pf.domain_dimensions[0],
+            root_level=self.pf.root_level)
+        nocts_check = oct_handler.add(self.domain_id, level, nocts,
+                                      unitary_center, self.domain_id)
+        assert(nocts_check == nocts)
+        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                    nocts, level, oct_handler.nocts)
+
+    def _read_amr_root(self,oct_handler):
         self.level_offsets
         f = open(self.pf._file_amr, "rb")
         # add the root *cell* not *oct* mesh
@@ -593,39 +605,22 @@
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
         octs_side = NX*2**level
-        if level == 0:
-            LE = np.array([0.0, 0.0, 0.0], dtype='float64')
-            RE = np.array([1.0, 1.0, 1.0], dtype='float64')
-            root_dx = (RE - LE) / NX
-            LL = LE + root_dx/2.0
-            RL = RE - root_dx/2.0
-            # compute floating point centers of root octs
-            root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                               LL[1]:RL[1]:NX[1]*1j,
-                               LL[2]:RL[2]:NX[2]*1j]
-            root_fc = np.vstack([p.ravel() for p in root_fc]).T
-            nocts_check = oct_handler.add(self.domain_id, level,
-                                          root_octs_side**3,
-                                          root_fc, self.domain_id)
-            assert(oct_handler.nocts == root_fc.shape[0])
-            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        root_octs_side**3, 0, oct_handler.nocts)
-        else:
-            unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
-                f,
-                self._level_oct_offsets, level,
-                coarse_grid=self.pf.domain_dimensions[0],
-                root_level=self.pf.root_level)
-            # at least one of the indices should be odd
-            # assert np.sum(left_index[:,0]%2==1)>0
-            # float_left_edge = left_index.astype("float64") / octs_side
-            # float_center = float_left_edge + 0.5*1.0/octs_side
-            # all floating unitary positions should fit inside the domain
-            nocts_check = oct_handler.add(self.domain_id, level, nocts,
-                                          unitary_center, self.domain_id)
-            assert(nocts_check == nocts)
-            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level, oct_handler.nocts)
+        LE = np.array([0.0, 0.0, 0.0], dtype='float64')
+        RE = np.array([1.0, 1.0, 1.0], dtype='float64')
+        root_dx = (RE - LE) / NX
+        LL = LE + root_dx/2.0
+        RL = RE - root_dx/2.0
+        # compute floating point centers of root octs
+        root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                           LL[1]:RL[1]:NX[1]*1j,
+                           LL[2]:RL[2]:NX[2]*1j]
+        root_fc = np.vstack([p.ravel() for p in root_fc]).T
+        nocts_check = oct_handler.add(self.domain_id, level,
+                                      root_octs_side**3,
+                                      root_fc, self.domain_id)
+        assert(oct_handler.nocts == root_fc.shape[0])
+        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                    root_octs_side**3, 0, oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:
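
_read_amr_root adds the root mesh as octs rather than cells: a root grid of
N^3 cells contributes (N/2)^3 octs, whose centers are laid out with np.mgrid.
A standalone sketch of just that center construction in unitary [0, 1]
coordinates, with none of the oct_handler plumbing:

    import numpy as np

    def root_oct_centers(domain_dimensions):
        """Cell-centered positions of the root oct mesh."""
        NX = np.asarray(domain_dimensions) // 2   # octs per axis
        LE = np.zeros(3)                          # left edge
        RE = np.ones(3)                           # right edge
        dx = (RE - LE) / NX
        LL = LE + dx / 2.0    # center of the first oct on each axis
        RL = RE - dx / 2.0    # center of the last oct on each axis
        fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
                      LL[1]:RL[1]:NX[1]*1j,
                      LL[2]:RL[2]:NX[2]*1j]
        # Stack into an (N/2)^3 x 3 array of float centers.
        return np.vstack([p.ravel() for p in fc]).T

    centers = root_oct_centers([8, 8, 8])
    assert centers.shape == (4 ** 3, 3)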


https://bitbucket.org/yt_analysis/yt/commits/b35a20f2e442/
Changeset:   b35a20f2e442
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:30:04
Summary:     Split the hydro fill routines into root and level variants
Affected #:  1 file

diff -r 8db3dadfe6ff103740f922b4ff46a7e91ee29971 -r b35a20f2e4426b08fa16974cd6e5d4005178c3ff yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -471,7 +471,7 @@
             widths[:, i] = base_dx[i] / dds
         return widths
 
-    def fill(self, content, fields):
+    def fill_root(self, content, ftfields):
         """
         This is called from IOHandler. It takes content
         which is a binary stream, reads the requested field
@@ -481,41 +481,48 @@
         """
         oct_handler = self.oct_handler
         all_fields = self.domain.pf.h.fluid_field_list
-        fields = [f for ft, f in fields]
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
+        field_idxs = [all_fields.index(f) for f in fields]
         dest = {}
-        filled = pos = level_offset = 0
-        field_idxs = [all_fields.index(f) for f in fields]
         for field in fields:
             dest[field] = np.zeros(self.cell_count, 'float64')-1.
         level = self.domain_level
-        offset = self.domain.level_offsets
+        source = {}
+        data = _read_root_level(content, self.domain.level_child_offsets,
+                                self.domain.level_count)
+        for field, i in zip(fields, field_idxs):
+            temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
+                              order='F').astype('float64').T
+            source[field] = temp
+        level_offset += oct_handler.fill_level_from_grid(
+            self.domain.domain_id,
+            level, dest, source, self.mask, level_offset)
+        return dest
+
+    def fill_level(self, content, ftfields):
+        oct_handler = self.oct_handler
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
+        dest = {}
+        for field in fields:
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
         no = self.domain.level_count[level]
-        if level == 0:
-            source = {}
-            data = _read_root_level(content, self.domain.level_child_offsets,
-                                    self.domain.level_count)
-            for field, i in zip(fields, field_idxs):
-                temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
-                                  order='F').astype('float64').T
-                source[field] = temp
-            level_offset += oct_handler.fill_level_from_grid(
-                self.domain.domain_id,
-                level, dest, source, self.mask, level_offset)
-        else:
-            noct_range = [0,no]
-            source = _read_child_level(
-                content, self.domain.level_child_offsets,
-                self.domain.level_offsets,
-                self.domain.level_count, level, fields,
-                self.domain.pf.domain_dimensions,
-                self.domain.pf.parameters['ncell0'],
-                noct_range=noct_range)
-            nocts_filling = noct_range[1]-noct_range[0]
-            level_offset += oct_handler.fill_level(self.domain.domain_id,
-                                                   level, dest, source,
-                                                   self.mask, level_offset,
-                                                   noct_range[0],
-                                                   nocts_filling)
+        noct_range = [0,no]
+        source = _read_child_level(
+            content, self.domain.level_child_offsets,
+            self.domain.level_offsets,
+            self.domain.level_count, level, fields,
+            self.domain.pf.domain_dimensions,
+            self.domain.pf.parameters['ncell0'],
+            noct_range=noct_range)
+        nocts_filling = noct_range[1]-noct_range[0]
+        level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                               level, dest, source,
+                                               self.mask, level_offset,
+                                               noct_range[0],
+                                               nocts_filling)
         return dest
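
fill_root above pulls the whole root-level hydro block with _read_root_level
and then unpacks one field at a time via np.reshape(..., order='F').T, which
suggests the on-disk block is column-major and the transpose brings the axis
order to what the oct handler expects.  A toy illustration of just that
reshaping step, on fabricated data rather than an ART file:

    import numpy as np

    dims = (2, 3, 4)
    grid = np.arange(np.prod(dims), dtype='float64').reshape(dims)
    flat = grid.ravel(order='F')            # column-major, as on disk

    # reshape with order='F' recovers the original grid exactly...
    assert np.array_equal(flat.reshape(dims, order='F'), grid)

    # ...and the extra .T reverses the axis order, turning an
    # (nx, ny, nz) block into (nz, ny, nx) indexing.
    assert flat.reshape(dims, order='F').T.shape == dims[::-1]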
 
 


https://bitbucket.org/yt_analysis/yt/commits/59f48705d2a8/
Changeset:   59f48705d2a8
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:31:18
Summary:     Dispatch to the split fill routines from io.py
Affected #:  1 file

diff -r b35a20f2e4426b08fa16974cd6e5d4005178c3ff -r 59f48705d2a883cf3b537aeeb9ec6c3453530095 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -55,7 +55,10 @@
                 f = open(subset.domain.pf._file_amr, "rb")
                 # This contains the boundary information, so we skim through
                 # and pick off the right vectors
-                rv = subset.fill(f, fields)
+                if subset.domain_level == 0:
+                    rv = subset.fill_root(f, fields)
+                else:
+                    rv = subset.fill_level(f, fields)
                 for ft, f in fields:
                     mylog.debug("Filling L%i %s with %s (%0.3e %0.3e) (%s:%s)",
                                 subset.domain_level,


https://bitbucket.org/yt_analysis/yt/commits/ad5e7f75833e/
Changeset:   ad5e7f75833e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:34:28
Summary:     PEP 8 cleanup
Affected #:  3 files

diff -r 59f48705d2a883cf3b537aeeb9ec6c3453530095 -r ad5e7f75833e8ffdd5920961c862d838094a495e yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -88,12 +88,12 @@
         self.max_level = pf.max_level
         self.float_type = np.float64
         super(ARTGeometryHandler, self).__init__(pf, data_style)
-    
+
     def get_smallest_dx(self):
         """
         Returns (in code units) the smallest cell size in the simulation.
         """
-        #Overloaded
+        # Overloaded
         pf = self.parameter_file
         return (1.0/pf.domain_dimensions.astype('f8') /
                 (2**self.max_level)).min()
@@ -115,7 +115,7 @@
         mylog.debug("Allocating %s octs", self.total_octs)
         self.oct_handler.allocate_domains(self.octs_per_domain)
         for domain in self.domains:
-            if domain.domain_level==0:
+            if domain.domain_level == 0:
                 domain._read_amr_root(self.oct_handler)
             else:
                 domain._read_amr_level(self.oct_handler)
@@ -229,10 +229,10 @@
             # if this attribute is already set skip it
             if getattr(self, "_file_"+filetype, None) is not None:
                 continue
-            stripped = file_amr.replace(base_prefix,prefix)
-            stripped = stripped.replace(base_suffix,suffix)
-            match, = difflib.get_close_matches(stripped,possibles,1,0.6)
-            if match is not None: 
+            stripped = file_amr.replace(base_prefix, prefix)
+            stripped = stripped.replace(base_suffix, suffix)
+            match, = difflib.get_close_matches(stripped, possibles, 1, 0.6)
+            if match is not None:
                 mylog.info('discovered %s:%s', filetype, match)
                 setattr(self, "_file_"+filetype, match)
             else:
@@ -509,7 +509,7 @@
             dest[field] = np.zeros(self.cell_count, 'float64')-1.
         level = self.domain_level
         no = self.domain.level_count[level]
-        noct_range = [0,no]
+        noct_range = [0, no]
         source = _read_child_level(
             content, self.domain.level_child_offsets,
             self.domain.level_offsets,
@@ -582,7 +582,6 @@
         self._level_count = inoll
         return self._level_oct_offsets
 
-
     def _read_amr_level(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
            For each oct, only the position, index, level and domain
@@ -604,7 +603,7 @@
         mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
                     nocts, level, oct_handler.nocts)
 
-    def _read_amr_root(self,oct_handler):
+    def _read_amr_root(self, oct_handler):
         self.level_offsets
         f = open(self.pf._file_amr, "rb")
         # add the root *cell* not *oct* mesh

diff -r 59f48705d2a883cf3b537aeeb9ec6c3453530095 -r ad5e7f75833e8ffdd5920961c862d838094a495e yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -76,10 +76,10 @@
 
 
 filename_pattern = {
-    'amr': ['10MpcBox_','.d'], 
-    'particle_header': ['PMcrd','.DAT'], #
-    'particle_data': ['PMcrs','.DAT'],
-    'particle_stars': ['stars','.dat']
+    'amr': ['10MpcBox_', '.d'],
+    'particle_header': ['PMcrd', '.DAT'],
+    'particle_data': ['PMcrs', '.DAT'],
+    'particle_stars': ['stars', '.dat']
 }
 
 amr_header_struct = [

diff -r 59f48705d2a883cf3b537aeeb9ec6c3453530095 -r ad5e7f75833e8ffdd5920961c862d838094a495e yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -72,7 +72,7 @@
         tr = {}
         fields_read = []
         for chunk in chunks:
-            level = chunk.objs[0].domain.domain_level 
+            level = chunk.objs[0].domain.domain_level
             pf = chunk.objs[0].domain.pf
             masks = {}
             ws, ls = pf.parameters["wspecies"], pf.parameters["lspecies"]
@@ -84,19 +84,21 @@
             file_stars = pf._file_particle_stars
             ftype_old = None
             for field in fields:
-                if field in fields_read: continue
+                if field in fields_read:
+                    continue
                 ftype, fname = field
                 pbool, idxa, idxb = _determine_field_size(pf, ftype, ls, ptmax)
                 npa = idxb-idxa
                 if not ftype_old == ftype:
                     Nrow = pf.parameters["Nrow"]
-                    rp = lambda ax: read_particles(file_particle,Nrow,idxa=idxa,
-                                                   idxb=idxb,field=ax)
-                    x,y,z = (rp(ax) for ax in 'xyz')
+                    rp = lambda ax: read_particles(
+                        file_particle, Nrow, idxa=idxa,
+                        idxb=idxb, field=ax)
+                    x, y, z = (rp(ax) for ax in 'xyz')
                     dd = pf.domain_dimensions[0]
                     off = 1.0/dd
-                    x,y,z = (t.astype('f8')/dd - off  for t in (x,y,z))
-                    mask = selector.select_points(x,y,z)
+                    x, y, z = (t.astype('f8')/dd - off for t in (x, y, z))
+                    mask = selector.select_points(x, y, z)
                     size = mask.sum()
                 for i, ax in enumerate('xyz'):
                     if fname.startswith("particle_position_%s" % ax):
@@ -136,15 +138,16 @@
                         pf.current_time)
                     temp = tr.get(field, np.zeros(npa, 'f8'))
                     temp[-nstars:] = data
-                    tr[field]=temp
+                    tr[field] = temp
                     del data
                 tr[field] = tr[field][mask]
                 ftype_old = ftype
                 fields_read.append(field)
-        if tr=={}:
-            tr = dict((f,np.array([])) for f in fields)
+        if tr == {}:
+            tr = dict((f, np.array([])) for f in fields)
         return tr
 
+
 def _determine_field_size(pf, field, lspecies, ptmax):
     pbool = np.zeros(len(lspecies), dtype="bool")
     idxas = np.concatenate(([0, ], lspecies[:-1]))
@@ -327,33 +330,35 @@
     f.seek(pos)
     return unitary_center, fl, iocts, nLevel, root_level
 
-def read_particles(file,Nrow,idxa=None,idxb=None,field=None):
+
+def read_particles(file, Nrow, idxa=None, idxb=None, field=None):
     words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4  # for file_particle_data; not always true?
     np_per_page = Nrow**2  # defined in ART a_setup.h
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
-    data = np.array([],'f4')
-    fh = open(file,'r')
+    data = np.array([], 'f4')
+    fh = open(file, 'r')
     totalp = idxb-idxa
     left = totalp
     for page in range(num_pages):
-        for i,fname in enumerate(['x','y','z','vx','vy','vz']):
-            if i==field or fname==field:
+        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
+            if i == field or fname == field:
                 if idxa is not None:
-                    fh.seek(real_size*idxa,1)
-                    count = min(np_per_page,left)
-                    temp = np.fromfile(fh,count=count,dtype='>f4')
+                    fh.seek(real_size*idxa, 1)
+                    count = min(np_per_page, left)
+                    temp = np.fromfile(fh, count=count, dtype='>f4')
                     pageleft = np_per_page-count-idxa
-                    fh.seek(real_size*pageleft,1)
+                    fh.seek(real_size*pageleft, 1)
                     left -= count
                 else:
                     count = np_per_page
-                    temp = np.fromfile(fh,count=count,dtype='>f4')
-                data = np.concatenate((data,temp))
+                    temp = np.fromfile(fh, count=count, dtype='>f4')
+                data = np.concatenate((data, temp))
             else:
-                fh.seek(4*np_per_page,1)
+                fh.seek(4*np_per_page, 1)
     return data
 
+
 def read_star_field(file, field=None):
     data = {}
     with open(file, 'rb') as fh:
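
read_particles above walks a paged binary layout: each page stores
np_per_page values for each of the six fields x, y, z, vx, vy, vz in
sequence, so reading one field means grabbing its block from every page and
seeking past the other five.  A toy version of that gather, operating on an
in-memory buffer instead of a file:

    import numpy as np

    words = 6        # x, y, z, vx, vy, vz
    np_per_page = 4  # tiny page for illustration; ART uses Nrow**2

    # Fabricate two pages of big-endian float32 "disk" data.
    blob = np.arange(2 * words * np_per_page, dtype='>f4').tobytes()

    def read_field(blob, field_index):
        """Gather one field's block from every page."""
        rec = np.frombuffer(blob, dtype='>f4')
        per_page = words * np_per_page
        out = []
        for page in range(len(rec) // per_page):
            start = page * per_page + field_index * np_per_page
            out.append(rec[start:start + np_per_page])
        return np.concatenate(out)

    x = read_field(blob, 0)    # all x positions
    vz = read_field(blob, 5)   # all vz velocities
    assert x.size == vz.size == 2 * np_per_page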


https://bitbucket.org/yt_analysis/yt/commits/88c3c57398b2/
Changeset:   88c3c57398b2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:35:05
Summary:     Merged yt_analysis/yt-3.0 into yt-3.0
Affected #:  4 files

diff -r ad5e7f75833e8ffdd5920961c862d838094a495e -r 88c3c57398b268852467ac28798340b7d5926380 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -1,4 +1,5 @@
 """
+
 The basic field info container resides here.  These classes, code specific and
 universal, are the means by which we access fields across YT, both derived and
 native.
@@ -754,8 +755,9 @@
     for i, ax in enumerate('xyz'):
         np.subtract(data["%s%s" % (field_prefix, ax)], center[i], r)
         if data.pf.periodicity[i] == True:
-            np.subtract(DW[i], r, rdw)
             np.abs(r, r)
+            np.subtract(r, DW[i], rdw)
+            np.abs(rdw, rdw)
             np.minimum(r, rdw, r)
         np.power(r, 2.0, r)
         np.add(radius, r, radius)
@@ -946,7 +948,7 @@
                  data["particle_position_x"].size,
                  blank, np.array(data.LeftEdge).astype(np.float64),
                  np.array(data.ActiveDimensions).astype(np.int32),
-                 np.float64(data['dx']))
+                 just_one(data['dx']))
     return blank
 add_field("particle_density", function=_pdensity,
           validators=[ValidateGridType()], convert_function=_convertDensity,
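
The universal_fields.py hunk above fixes the periodic radius to use the
minimum-image convention: take the absolute separation first, then compare it
against its wrap-around complement.  A one-axis sketch of the corrected
ordering; illustrative only, since the real field works on arrays over all
three axes:

    import numpy as np

    def periodic_separation(x, center, width):
        """Minimum-image distance along one periodic axis."""
        r = np.abs(x - center)
        return np.minimum(r, width - r)

    # Points near opposite edges of a unit box are actually close:
    assert np.isclose(periodic_separation(0.05, 0.95, 1.0), 0.10)
    # The pre-fix ordering subtracted the width before taking the
    # absolute value, so a separation of -0.9 compared 0.9 against
    # 1.9 and kept 0.9 instead of 0.1.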

diff -r ad5e7f75833e8ffdd5920961c862d838094a495e -r 88c3c57398b268852467ac28798340b7d5926380 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -361,7 +361,7 @@
                            np.int64(np.where(filter)[0].size),
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("star_density", function=_spdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -374,7 +374,7 @@
         if not filter.any(): return blank
         num = filter.sum()
     else:
-        filter = None
+        filter = Ellipsis
         num = data["particle_position_x"].size
     amr_utils.CICDeposit_3(data["particle_position_x"][filter].astype(np.float64),
                            data["particle_position_y"][filter].astype(np.float64),
@@ -383,7 +383,7 @@
                            num,
                            blank, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     return blank
 add_field("dm_density", function=_dmpdensity,
           validators=[ValidateSpatial(0)], convert_function=_convertDensity)
@@ -404,7 +404,7 @@
                            data["particle_position_x"].size,
                            top, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -415,7 +415,7 @@
                            data["particle_position_x"].size,
                            bottom, np.array(data.LeftEdge).astype(np.float64),
                            np.array(data.ActiveDimensions).astype(np.int32), 
-                           np.float64(data['dx']))
+                           just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]
@@ -445,7 +445,7 @@
                           np.int64(np.where(filter)[0].size),
                           top, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     del particle_field_data
 
     bottom = np.zeros(data.ActiveDimensions, dtype='float32')
@@ -456,7 +456,7 @@
                           np.int64(np.where(filter)[0].size),
                           bottom, np.array(data.LeftEdge).astype(np.float64),
                           np.array(data.ActiveDimensions).astype(np.int32), 
-                          np.float64(data['dx']))
+                          just_one(data['dx']))
     top[bottom == 0] = 0.0
     bnz = bottom.nonzero()
     top[bnz] /= bottom[bnz]

diff -r ad5e7f75833e8ffdd5920961c862d838094a495e -r 88c3c57398b268852467ac28798340b7d5926380 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -467,7 +467,7 @@
                     temp -= self.domain_width[i]
                 elif temp < -self.domain_width[i]/2.0:
                     temp += self.domain_width[i]
-            temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
+            #temp = temp - fclip(temp, -dds[i]/2.0, dds[i]/2.0)
             dist2 += temp*temp
         if dist2 <= self.radius2: return 1
         return 0

diff -r ad5e7f75833e8ffdd5920961c862d838094a495e -r 88c3c57398b268852467ac28798340b7d5926380 yt/utilities/tests/test_selectors.py
--- a/yt/utilities/tests/test_selectors.py
+++ b/yt/utilities/tests/test_selectors.py
@@ -20,7 +20,10 @@
         data = pf.h.sphere(center, 0.25)
         data.get_data()
         # WARNING: this value has not be externally verified
-        yield assert_equal, data.size, 19568
+        dd = pf.h.all_data()
+        dd.set_field_parameter("center", center)
+        n_outside = (dd["RadiusCode"] >= 0.25).sum()
+        assert_equal( data.size + n_outside, dd.size)
 
         positions = np.array([data[ax] for ax in 'xyz'])
         centers = np.tile( data.center, data.shape[0] ).reshape(data.shape[0],3).transpose()
@@ -28,4 +31,4 @@
                          pf.domain_right_edge-pf.domain_left_edge,
                          pf.periodicity)
         # WARNING: this value has not been externally verified
-        yield assert_almost_equal, dist.max(), 0.261806188752
+        yield assert_array_less, dist, 0.25
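
The test update in the same merge swaps two unverified magic numbers for
property checks: the cells inside and outside the sphere must partition the
whole dataset, and no selected point may sit farther out than the sphere
radius.  A minimal numpy rendering of that style of check, not the yt test
itself:

    import numpy as np

    rng = np.random.RandomState(0)
    points = rng.random_sample((1000, 3))
    center, radius = np.full(3, 0.5), 0.25

    dist = np.sqrt(((points - center) ** 2).sum(axis=1))
    inside = dist < radius

    # Partition property: inside + outside accounts for every point.
    assert inside.sum() + (~inside).sum() == len(points)
    # Bound property: every selected point lies within the radius.
    assert (dist[inside] < radius).all()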


https://bitbucket.org/yt_analysis/yt/commits/4f194a75accb/
Changeset:   4f194a75accb
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-19 22:42:12
Summary:     Removing the FindMaxProgenitor (FMP) module
Affected #:  4 files

diff -r 88c3c57398b268852467ac28798340b7d5926380 -r 4f194a75accbb4485b0185b95079d0be2a681d2a yt/analysis_modules/halo_finding/fmp/fmp.py
--- a/yt/analysis_modules/halo_finding/fmp/fmp.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""
-Find a progenitor line by reverse traversing a timeseries
-and finding the max density around the previous timestep
-
-Author: Christopher Moody <chrisemoody at gmail.com>
-Affiliation: UC Santa Cruz
-Homepage: http://yt.enzotools.org/
-License:
-  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
-
-  This file is part of yt.
-
-  yt is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 3 of the License, or
-  (at your option) any later version.
-
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY; without even the implied warranty of
-  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-  GNU General Public License for more details.
-
-  You should have received a copy of the GNU General Public License
-  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-"""
-import numpy as np
-from yt.funcs import *
-from yt.frontends.stream.api import load_uniform_grid
-from yt.utilities.parallel_tools import parallel_analysis_interface
-from yt.utilities.parallel_tools.parallel_analysis_interface import \
-     parallel_objects
-from yt.utilities.parallel_tools.parallel_analysis_interface import \
-    ParallelAnalysisInterface, ProcessorPool, Communicator
-
-class FindMaxProgenitor(ParallelAnalysisInterface):
-    def __init__(self,ts):
-        self.ts = ts
-    
-    def find_max_field(self,field="Density",radius=0.01,use_bulk=True):
-        """
-        Find the maximum of the given field, and iterate backwards through 
-        snapshots finding the maxima within a small radius of the previous 
-        snapshot's center.
-        """
-        centers = []
-        v,c = self.ts[-1].h.find_max(field)
-        t_old = None
-        for pf in self.ts[::-1]:
-            t = pf.current_time
-            if t_old:
-                dt = t_old - t
-                c += dt*bv
-            sph = pf.h.sphere(c,radius)
-            v,i,x,y,z=sph.quantities["MaxLocation"]("Density")
-            c = np.array([x,y,z])
-            centers.append(c)
-            bv = sph.quantities["BulkVelocity"]()
-            #bv is in cgs but center is in unitary
-            bv /= pf['cm']
-            t_old = pf.current_time
-        return centers
-
-    def find_max_particle(self,initial_center=None,radius=0.01,nparticles=1000,
-                          particle_type="all"):
-        """
-        Find the particle at the maximum density and iterate backwards through
-        snapshots, finding the location of the maximum density of the 
-        previously closest nparticles.
-        """
-        indices = None
-        dd = self.ts[-1].h.all_data()
-        domain_dimensions = self.ts[-1].domain_dimensions
-        sim_unit_to_cm = self.ts[-1]['cm']
-        c = initial_center
-        if c is None:
-            v,c = dd.quantities["ParticleDensityCenter"](particle_type=\
-                                                         particle_type)
-        centers = {}
-        earliest_c = c
-        for pfs_chunk in chunks(self.ts[::-1]):
-            pf = pfs_chunk[0] #first is the latest in time
-            mylog.info("Finding central indices")
-            sph = pf.h.sphere(earliest_c,radius)
-            rad = sph["ParticleRadius"]
-            idx = sph["particle_index"]
-            indices = idx[np.argsort(rad)[:nparticles]]
-            for sto, pf in parallel_objects(pfs_chunk,storage=centers):
-                dd = pf.h.sphere(c,radius)
-                data = dict(number_of_particles=indices.shape[0])
-                index = dd[(particle_type,'particle_index')]
-                inside = np.in1d(indices,index,assume_unique=True)
-                mylog.info("Collecting particles %1.1e of %1.1e",inside.sum(),
-                           nparticles)
-                if inside.sum()==0:
-                    mylog.warning("Found no matching indices in %s",str(pf)) 
-                for ax in 'xyz':
-                    pos = dd[(particle_type,"particle_position_%s"%ax)][inside]
-                    data[('all','particle_position_%s'%ax)]= pos
-                mas = dd[(particle_type,"particle_mass")][inside]
-                data[('all','particle_mass')]= mas
-                mylog.info("Finding center")
-                subselection = load_uniform_grid(data,domain_dimensions,
-                                                 sim_unit_to_cm)
-                ss = subselection.h.all_data()
-                v,c = ss.quantities["ParticleDensityCenter"]()
-                sto.result_id = pf.parameters['aexpn']
-                sto.result = c
-            #last in the chunk is the earliest in time
-            earliest_c = centers[pf_chunk[-1].parameters['aexpn']]
-        return centers
-
-def chunks(l, n):
-    """ Yield successive n-sized chunks from l.
-    """
-    #http://stackoverflow.com/questions/312443/
-    #how-do-you-split-a-list-into-evenly-sized-chunks-in-python
-    for i in xrange(0, len(l), n):
-        yield l[i:i+n]
-

diff -r 88c3c57398b268852467ac28798340b7d5926380 -r 4f194a75accbb4485b0185b95079d0be2a681d2a yt/analysis_modules/halo_finding/fmp/setup.py
--- a/yt/analysis_modules/halo_finding/fmp/setup.py
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env python
-import setuptools
-import os
-import sys
-import os.path
-
-def configuration(parent_package='',top_path=None):
-    from numpy.distutils.misc_util import Configuration
-    config = Configuration('fmp',parent_package,top_path)
-    config.make_config_py() # installs __config__.py
-    return config
-

diff -r 88c3c57398b268852467ac28798340b7d5926380 -r 4f194a75accbb4485b0185b95079d0be2a681d2a yt/analysis_modules/halo_finding/setup.py
--- a/yt/analysis_modules/halo_finding/setup.py
+++ b/yt/analysis_modules/halo_finding/setup.py
@@ -12,7 +12,6 @@
     config.add_subpackage("parallel_hop")
     if os.path.exists("rockstar.cfg"):
         config.add_subpackage("rockstar")
-    config.add_subpackage("fmp")
     config.make_config_py() # installs __config__.py
     #config.make_svn_version_py()
     return config


https://bitbucket.org/yt_analysis/yt/commits/1ab51a4c88ac/
Changeset:   1ab51a4c88ac
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-27 13:58:28
Summary:     Merged in juxtaposicion/yt-3.0 (pull request #16)

NMSU ART update
Affected #:  14 files

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -1059,7 +1059,7 @@
 
     _fields = ["particle_position_%s" % ax for ax in 'xyz']
 
-    def __init__(self, data_source, dm_only=True):
+    def __init__(self, data_source, dm_only=True, redshift=-1):
         """
         Run hop on *data_source* with a given density *threshold*.  If
         *dm_only* is set, only run it on the dark matter particles, otherwise

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -210,6 +210,8 @@
         """
         Deletes a field
         """
+        if key  not in self.field_data:
+            key = self._determine_fields(key)[0]
         del self.field_data[key]
 
     def _generate_field(self, field):

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -678,3 +678,37 @@
     return [np.sum(totals[:,i]) for i in range(n_fields)]
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
+
+def _ParticleDensityCenter(data,nbins=3,particle_type="all"):
+    """
+    Find the center of the particle density
+    by histogramming the particles iteratively.
+    """
+    pos = [data[(particle_type,"particle_position_%s"%ax)] for ax in "xyz"]
+    pos = np.array(pos).T
+    mas = data[(particle_type,"particle_mass")]
+    calc_radius= lambda x,y:np.sqrt(np.sum((x-y)**2.0,axis=1))
+    density = 0
+    if pos.shape[0]==0:
+        return -1.0,[-1.,-1.,-1.]
+    while pos.shape[0] > 1:
+        table,bins=np.histogramdd(pos,bins=nbins, weights=mas)
+        bin_size = min((np.max(bins,axis=1)-np.min(bins,axis=1))/nbins)
+        centeridx = np.where(table==table.max())
+        le = np.array([bins[0][centeridx[0][0]],
+                       bins[1][centeridx[1][0]],
+                       bins[2][centeridx[2][0]]])
+        re = np.array([bins[0][centeridx[0][0]+1],
+                       bins[1][centeridx[1][0]+1],
+                       bins[2][centeridx[2][0]+1]])
+        center = 0.5*(le+re)
+        idx = calc_radius(pos,center)<bin_size
+        pos, mas = pos[idx],mas[idx]
+        density = max(density,mas.sum()/bin_size**3.0)
+    return density, center
+def _combParticleDensityCenter(data,densities,centers):
+    i = np.argmax(densities)
+    return densities[i],centers[i]
+
+add_quantity("ParticleDensityCenter",function=_ParticleDensityCenter,
+             combine_function=_combParticleDensityCenter,n_ret=2)
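
The new ParticleDensityCenter quantity finds a density peak by iterative
histogram refinement: bin the particles, locate the densest bin, keep only
the particles within one bin width of its center, and repeat until almost
none remain.  A self-contained sketch of that loop, simplified from the
merged code and without the parallel combine step:

    import numpy as np

    def density_center(pos, mass, nbins=3):
        """Iteratively zoom a histogram onto the densest bin."""
        density, center = 0.0, None
        while pos.shape[0] > 1:
            table, edges = np.histogramdd(pos, bins=nbins, weights=mass)
            bin_size = min((e[-1] - e[0]) / nbins for e in edges)
            # Indices of (one of) the densest bin(s).
            i, j, k = [w[0] for w in np.where(table == table.max())]
            le = np.array([edges[0][i], edges[1][j], edges[2][k]])
            re = np.array([edges[0][i+1], edges[1][j+1], edges[2][k+1]])
            center = 0.5 * (le + re)
            keep = np.sqrt(((pos - center) ** 2).sum(axis=1)) < bin_size
            pos, mass = pos[keep], mass[keep]
            density = max(density, mass.sum() / bin_size ** 3)
        return density, center

    rng = np.random.RandomState(0)
    pos = rng.normal(0.5, 0.05, size=(500, 3))
    density, center = density_center(pos, np.ones(500))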

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -30,6 +30,8 @@
 import stat
 import weakref
 import cStringIO
+import difflib
+import glob
 
 from yt.funcs import *
 from yt.geometry.oct_geometry_handler import \
@@ -37,9 +39,9 @@
 from yt.geometry.geometry_handler import \
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
-      StaticOutput
+    StaticOutput
 from yt.geometry.oct_container import \
-    RAMSESOctreeContainer
+    ARTOctreeContainer
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, NullFunc
 from .fields import \
@@ -52,20 +54,15 @@
     get_box_grids_level
 import yt.utilities.lib as amr_utils
 
-from .definitions import *
-from .io import _read_frecord
-from .io import _read_record
-from .io import _read_struct
+from yt.frontends.art.definitions import *
+from yt.utilities.fortran_utils import *
 from .io import _read_art_level_info
 from .io import _read_child_mask_level
 from .io import _read_child_level
 from .io import _read_root_level
-from .io import _read_record_size
-from .io import _skip_record
 from .io import _count_art_octs
 from .io import b2t
 
-
 import yt.frontends.ramses._ramses_reader as _ramses_reader
 
 from .fields import ARTFieldInfo, KnownARTFields
@@ -80,13 +77,9 @@
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
 
+
 class ARTGeometryHandler(OctreeGeometryHandler):
-    def __init__(self,pf,data_style="art"):
-        """
-        Life is made simpler because we only have one AMR file
-        and one domain. However, we are matching to the RAMSES
-        multi-domain architecture.
-        """
+    def __init__(self, pf, data_style="art"):
         self.fluid_field_list = fluid_fields
         self.data_style = data_style
         self.parameter_file = weakref.proxy(pf)
@@ -94,7 +87,16 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
         self.max_level = pf.max_level
         self.float_type = np.float64
-        super(ARTGeometryHandler,self).__init__(pf,data_style)
+        super(ARTGeometryHandler, self).__init__(pf, data_style)
+
+    def get_smallest_dx(self):
+        """
+        Returns (in code units) the smallest cell size in the simulation.
+        """
+        # Overloaded
+        pf = self.parameter_file
+        return (1.0/pf.domain_dimensions.astype('f8') /
+                (2**self.max_level)).min()
 
     def _initialize_oct_handler(self):
         """
@@ -102,23 +104,37 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,1,nv)]
+        self.domains = [ARTDomainFile(self.parameter_file, l+1, nv, l)
+                        for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
-        self.oct_handler = RAMSESOctreeContainer(
-            self.parameter_file.domain_dimensions/2, #dd is # of root cells
+        self.oct_handler = ARTOctreeContainer(
+            self.parameter_file.domain_dimensions/2,  # dd is # of root cells
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         mylog.debug("Allocating %s octs", self.total_octs)
         self.oct_handler.allocate_domains(self.octs_per_domain)
         for domain in self.domains:
-            domain._read_amr(self.oct_handler)
+            if domain.domain_level == 0:
+                domain._read_amr_root(self.oct_handler)
+            else:
+                domain._read_amr_level(self.oct_handler)
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
-        self.field_list = set(fluid_fields + particle_fields + particle_star_fields)
+        self.field_list = set(fluid_fields + particle_fields +
+                              particle_star_fields)
         self.field_list = list(self.field_list)
-    
+        # now generate all of the possible particle fields
+        if "wspecies" in self.parameter_file.parameters.keys():
+            wspecies = self.parameter_file.parameters['wspecies']
+            nspecies = len(wspecies)
+            self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
+            for specie in range(nspecies):
+                self.parameter_file.particle_types.append("specie%i" % specie)
+        else:
+            self.parameter_file.particle_types = []
+
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
         super(ARTGeometryHandler, self)._setup_classes(dd)
@@ -127,19 +143,23 @@
     def _identify_base_chunk(self, dobj):
         """
         Take the passed in data source dobj, and use its embedded selector
-        to calculate the domain mask, build the reduced domain 
+        to calculate the domain mask, build the reduced domain
         subsets and oct counts. Attach this information to dobj.
         """
         if getattr(dobj, "_chunk_info", None) is None:
-            #Get all octs within this oct handler
+            # Get all octs within this oct handler
             mask = dobj.selector.select_octs(self.oct_handler)
-            if mask.sum()==0:
+            if mask.sum() == 0:
                 mylog.debug("Warning: selected zero octs")
             counts = self.oct_handler.count_cells(dobj.selector, mask)
-            #For all domains, figure out how many counts we have 
-            #and build a subset=mask of domains 
-            subsets = [ARTDomainSubset(d, mask, c)
-                       for d, c in zip(self.domains, counts) if c > 0]
+            # For all domains, figure out how many counts we have
+            # and build a subset=mask of domains
+            subsets = []
+            for d, c in zip(self.domains, counts):
+                if c < 1:
+                    continue
+                subset = ARTDomainSubset(d, mask, c, d.domain_level)
+                subsets.append(subset)
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -147,8 +167,8 @@
 
     def _chunk_all(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
-        #We pass the chunk both the current chunk and list of chunks,
-        #as well as the referring data source
+        # We pass the chunk both the current chunk and list of chunks,
+        # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
     def _chunk_spatial(self, dobj, ngz):
@@ -157,7 +177,7 @@
     def _chunk_io(self, dobj):
         """
         Since subsets are calculated per domain,
-        i.e. per file, yield each domain at a time to 
+        i.e. per file, yield each domain at a time to
         organize by IO. We will eventually chunk out NMSU ART
         to be level-by-level.
         """
@@ -165,77 +185,66 @@
         for subset in oobjs:
             yield YTDataChunk(dobj, "io", [subset], subset.cell_count)
 
+
 class ARTStaticOutput(StaticOutput):
     _hierarchy_class = ARTGeometryHandler
     _fieldinfo_fallback = ARTFieldInfo
     _fieldinfo_known = KnownARTFields
 
-    def __init__(self,filename,data_style='art',
-                 fields = None, storage_filename = None,
-                 skip_particles=False,skip_stars=False,
-                 limit_level=None,spread_age=True):
+    def __init__(self, filename, data_style='art',
+                 fields=None, storage_filename=None,
+                 skip_particles=False, skip_stars=False,
+                 limit_level=None, spread_age=True,
+                 force_max_level=None, file_particle_header=None,
+                 file_particle_data=None, file_particle_stars=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
+        self._file_amr = filename
+        self._file_particle_header = file_particle_header
+        self._file_particle_data = file_particle_data
+        self._file_particle_stars = file_particle_stars
         self._find_files(filename)
-        self.file_amr = filename
         self.parameter_filename = filename
         self.skip_particles = skip_particles
         self.skip_stars = skip_stars
         self.limit_level = limit_level
         self.max_level = limit_level
+        self.force_max_level = force_max_level
         self.spread_age = spread_age
-        self.domain_left_edge = np.zeros(3,dtype='float')
-        self.domain_right_edge = np.zeros(3,dtype='float')+1.0
-        StaticOutput.__init__(self,filename,data_style)
+        self.domain_left_edge = np.zeros(3, dtype='float')
+        self.domain_right_edge = np.zeros(3, dtype='float')+1.0
+        StaticOutput.__init__(self, filename, data_style)
         self.storage_filename = storage_filename
 
-    def _find_files(self,file_amr):
+    def _find_files(self, file_amr):
         """
         Given the AMR base filename, attempt to find the
         particle header, star files, etc.
         """
-        prefix,suffix = filename_pattern['amr'].split('%s')
-        affix = os.path.basename(file_amr).replace(prefix,'')
-        affix = affix.replace(suffix,'')
-        affix = affix.replace('_','')
-        full_affix = affix
-        affix = affix[1:-1]
-        dirname = os.path.dirname(file_amr)
-        for fp in (filename_pattern_hf,filename_pattern):
-            for filetype, pattern in fp.items():
-                #if this attribute is already set skip it
-                if getattr(self,"file_"+filetype,None) is not None:
-                    continue
-                #sometimes the affix is surrounded by an extraneous _
-                #so check for an extra character on either side
-                check_filename = dirname+'/'+pattern%('?%s?'%affix)
-                filenames = glob.glob(check_filename)
-                if len(filenames)>1:
-                    check_filename_strict = \
-                            dirname+'/'+pattern%('?%s'%full_affix[1:])
-                    filenames = glob.glob(check_filename_strict)
-                
-                if len(filenames)==1:
-                    setattr(self,"file_"+filetype,filenames[0])
-                    mylog.info('discovered %s:%s',filetype,filenames[0])
-                elif len(filenames)>1:
-                    setattr(self,"file_"+filetype,None)
-                    mylog.info("Ambiguous number of files found for %s",
-                            check_filename)
-                    for fn in filenames:
-                        faffix = float(affix)
-                else:
-                    setattr(self,"file_"+filetype,None)
+        base_prefix, base_suffix = filename_pattern['amr']
+        possibles = glob.glob(os.path.dirname(file_amr)+"/*")
+        for filetype, (prefix, suffix) in filename_pattern.iteritems():
+            # if this attribute is already set skip it
+            if getattr(self, "_file_"+filetype, None) is not None:
+                continue
+            stripped = file_amr.replace(base_prefix, prefix)
+            stripped = stripped.replace(base_suffix, suffix)
+            match, = difflib.get_close_matches(stripped, possibles, 1, 0.6)
+            if match is not None:
+                mylog.info('discovered %s:%s', filetype, match)
+                setattr(self, "_file_"+filetype, match)
+            else:
+                setattr(self, "_file_"+filetype, None)
 
     def __repr__(self):
-        return self.file_amr.rsplit(".",1)[0]
+        return self._file_amr.split('/')[-1]
 
     def _set_units(self):
         """
-        Generates the conversion to various physical units based 
-		on the parameters from the header
+        Generates the conversion to various physical units based
+                on the parameters from the header
         """
         self.units = {}
         self.time_units = {}
@@ -243,9 +252,9 @@
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0
 
-        #spatial units
-        z   = self.current_redshift
-        h   = self.hubble_constant
+        # spatial units
+        z = self.current_redshift
+        h = self.hubble_constant
         boxcm_cal = self.parameters["boxh"]
         boxcm_uncal = boxcm_cal / h
         box_proper = boxcm_uncal/(1+z)
@@ -256,55 +265,59 @@
             self.units[unit+'cm'] = mpc_conversion[unit] * boxcm_uncal
             self.units[unit+'hcm'] = mpc_conversion[unit] * boxcm_cal
 
-        #all other units
+        # all other units
         wmu = self.parameters["wmu"]
         Om0 = self.parameters['Om0']
-        ng  = self.parameters['ng']
+        ng = self.parameters['ng']
         wmu = self.parameters["wmu"]
-        boxh   = self.parameters['boxh'] 
-        aexpn  = self.parameters["aexpn"]
+        boxh = self.parameters['boxh']
+        aexpn = self.parameters["aexpn"]
         hubble = self.parameters['hubble']
 
         cf = defaultdict(lambda: 1.0)
         r0 = boxh/ng
-        P0= 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
-        T_0 = 3.03e5 * r0**2.0 * wmu * Om0 # [K]
+        P0 = 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
+        T_0 = 3.03e5 * r0**2.0 * wmu * Om0  # [K]
         S_0 = 52.077 * wmu**(5.0/3.0)
         S_0 *= hubble**(-4.0/3.0)*Om0**(1.0/3.0)*r0**2.0
-        #v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
+        # v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
         v0 = 50.0*r0*np.sqrt(Om0)
         t0 = r0/v0
         rho1 = 1.8791e-29 * hubble**2.0 * self.omega_matter
         rho0 = 2.776e11 * hubble**2.0 * Om0
-        tr = 2./3. *(3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))     
+        tr = 2./3. * (3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))
         aM0 = rho0 * (boxh/hubble)**3.0 / ng**3.0
-        cf['r0']=r0
-        cf['P0']=P0
-        cf['T_0']=T_0
-        cf['S_0']=S_0
-        cf['v0']=v0
-        cf['t0']=t0
-        cf['rho0']=rho0
-        cf['rho1']=rho1
-        cf['tr']=tr
-        cf['aM0']=aM0
+        cf['r0'] = r0
+        cf['P0'] = P0
+        cf['T_0'] = T_0
+        cf['S_0'] = S_0
+        cf['v0'] = v0
+        cf['t0'] = t0
+        cf['rho0'] = rho0
+        cf['rho1'] = rho1
+        cf['tr'] = tr
+        cf['aM0'] = aM0
 
-        #factors to multiply the native code units to CGS
-        cf['Pressure'] = P0 #already cgs
-        cf['Velocity'] = v0/aexpn*1.0e5 #proper cm/s
+        # factors to multiply the native code units to CGS
+        cf['Pressure'] = P0  # already cgs
+        cf['Velocity'] = v0/aexpn*1.0e5  # proper cm/s
         cf["Mass"] = aM0 * 1.98892e33
         cf["Density"] = rho1*(aexpn**-3.0)
         cf["GasEnergy"] = rho0*v0**2*(aexpn**-5.0)
         cf["Potential"] = 1.0
         cf["Entropy"] = S_0
         cf["Temperature"] = tr
+        cf["Time"] = 1.0
+        cf["particle_mass"] = cf['Mass']
+        cf["particle_mass_initial"] = cf['Mass']
         self.cosmological_simulation = True
         self.conversion_factors = cf
-        
-        for particle_field in particle_fields:
-            self.conversion_factors[particle_field] =  1.0
+
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
+        for pt in particle_fields:
+            if pt not in self.conversion_factors.keys():
+                self.conversion_factors[pt] = 1.0
         for unit in sec_conversion.keys():
             self.time_units[unit] = 1.0 / sec_conversion[unit]
 
@@ -320,72 +333,89 @@
         self.unique_identifier = \
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         self.parameters.update(constants)
-        #read the amr header
-        with open(self.file_amr,'rb') as f:
-            amr_header_vals = _read_struct(f,amr_header_struct)
-            for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                _skip_record(f)
-            (self.ncell,) = struct.unpack('>l', _read_record(f))
+        self.parameters['Time'] = 1.0
+        # read the amr header
+        with open(self._file_amr, 'rb') as f:
+            amr_header_vals = read_attrs(f, amr_header_struct, '>')
+            for to_skip in ['tl', 'dtl', 'tlold', 'dtlold', 'iSO']:
+                skipped = skip(f, endian='>')
+            (self.ncell) = read_vector(f, 'i', '>')[0]
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
             # This is not the same as the number of Octs.
-            #domain dimensions is the number of root *cells*
+            # domain dimensions is the number of root *cells*
             self.domain_dimensions = np.ones(3, dtype='int64')*est
             self.root_grid_mask_offset = f.tell()
             self.root_nocts = self.domain_dimensions.prod()/8
             self.root_ncells = self.root_nocts*8
-            mylog.debug("Estimating %i cells on a root grid side,"+ \
-                        "%i root octs",est,self.root_nocts)
-            self.root_iOctCh = _read_frecord(f,'>i')[:self.root_ncells]
+            mylog.debug("Estimating %i cells on a root grid side," +
+                        "%i root octs", est, self.root_nocts)
+            self.root_iOctCh = read_vector(f, 'i', '>')[:self.root_ncells]
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
-                 order='F')
+                                                        order='F')
             self.root_grid_offset = f.tell()
-            #_skip_record(f) # hvar
-            #_skip_record(f) # var
-            self.root_nhvar = _read_frecord(f,'>f',size_only=True)
-            self.root_nvar  = _read_frecord(f,'>f',size_only=True)
-            #make sure that the number of root variables is a multiple of rootcells
-            assert self.root_nhvar%self.root_ncells==0
-            assert self.root_nvar%self.root_ncells==0
-            self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
-                                    self.root_ncells)
-            self.iOctFree, self.nOct = struct.unpack('>ii', _read_record(f))
+            self.root_nhvar = skip(f, endian='>')
+            self.root_nvar = skip(f, endian='>')
+            # make sure that the number of root variables is a multiple of
+            # rootcells
+            assert self.root_nhvar % self.root_ncells == 0
+            assert self.root_nvar % self.root_ncells == 0
+            self.nhydro_variables = ((self.root_nhvar+self.root_nvar) /
+                                     self.root_ncells)
+            self.iOctFree, self.nOct = read_vector(f, 'i', '>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
-        #read the particle header
-        if not self.skip_particles and self.file_particle_header:
-            with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = _read_struct(fh,particle_header_struct)
+            # estimate the root level
+            float_center, fl, iocts, nocts, root_level = _read_art_level_info(
+                f,
+                [0, self.child_grid_offset], 1,
+                coarse_grid=self.domain_dimensions[0])
+            del float_center, fl, iocts, nocts
+            self.root_level = root_level
+            mylog.info("Using root level of %02i", self.root_level)
+        # read the particle header
+        if not self.skip_particles and self._file_particle_header:
+            with open(self._file_particle_header, "rb") as fh:
+                particle_header_vals = read_attrs(
+                    fh, particle_header_struct, '>')
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
-                wspecies = np.fromfile(fh,dtype='>f',count=10)
-                lspecies = np.fromfile(fh,dtype='>i',count=10)
+                wspecies = np.fromfile(fh, dtype='>f', count=10)
+                lspecies = np.fromfile(fh, dtype='>i', count=10)
             self.parameters['wspecies'] = wspecies[:n]
             self.parameters['lspecies'] = lspecies[:n]
             ls_nonzero = np.diff(lspecies)[:n-1]
-            mylog.info("Discovered %i species of particles",len(ls_nonzero))
+            self.star_type = len(ls_nonzero)
+            mylog.info("Discovered %i species of particles", len(ls_nonzero))
             mylog.info("Particle populations: "+'%1.1e '*len(ls_nonzero),
-                *ls_nonzero)
-            for k,v in particle_header_vals.items():
+                       *ls_nonzero)
+            for k, v in particle_header_vals.items():
                 if k in self.parameters.keys():
                     if not self.parameters[k] == v:
-                        mylog.info("Inconsistent parameter %s %1.1e  %1.1e",k,v,
-                                   self.parameters[k])
+                        mylog.info(
+                            "Inconsistent parameter %s %1.1e  %1.1e", k, v,
+                            self.parameters[k])
                 else:
-                    self.parameters[k]=v
+                    self.parameters[k] = v
             self.parameters_particles = particle_header_vals
-    
-        #setup standard simulation params yt expects to see
+
+        # setup standard simulation params yt expects to see
         self.current_redshift = self.parameters["aexpn"]**-1.0 - 1.0
         self.omega_lambda = amr_header_vals['Oml0']
         self.omega_matter = amr_header_vals['Om0']
         self.hubble_constant = amr_header_vals['hubble']
         self.min_level = amr_header_vals['min_level']
         self.max_level = amr_header_vals['max_level']
-        self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
+        if self.limit_level is not None:
+            self.max_level = min(
+                self.limit_level, amr_header_vals['max_level'])
+        if self.force_max_level is not None:
+            self.max_level = self.force_max_level
+        self.hubble_time = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
+        mylog.info("Max level is %02i", self.max_level)
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -393,20 +423,24 @@
         Defined for the NMSU file naming scheme.
         This could differ for other formats.
         """
-        fn = ("%s" % (os.path.basename(args[0])))
         f = ("%s" % args[0])
-        prefix, suffix = filename_pattern['amr'].split('%s')
-        if fn.endswith(suffix) and fn.startswith(prefix) and\
-                os.path.exists(f): 
+        prefix, suffix = filename_pattern['amr']
+        with open(f, 'rb') as fh:
+            try:
+                amr_header_vals = read_attrs(fh, amr_header_struct, '>')
                 return True
+            except AssertionError:
+                return False
         return False
 
+
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
+    def __init__(self, domain, mask, cell_count, domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
         self.cell_count = cell_count
+        self.domain_level = domain_level
         level_counts = self.oct_handler.count_levels(
             self.domain.pf.max_level, self.domain.domain_id, mask)
         assert(level_counts.sum() == cell_count)
@@ -432,12 +466,12 @@
     def select_fwidth(self, dobj):
         base_dx = 1.0/self.domain.pf.domain_dimensions
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
-            widths[:,i] = base_dx[i] / dds
+            widths[:, i] = base_dx[i] / dds
         return widths
 
-    def fill(self, content, fields):
+    def fill_root(self, content, ftfields):
         """
         This is called from IOHandler. It takes content
         which is a binary stream, reads the requested field
@@ -446,135 +480,153 @@
         the order they appear in the octhandler.
         """
         oct_handler = self.oct_handler
-        all_fields  = self.domain.pf.h.fluid_field_list
-        fields = [f for ft, f in fields]
-        dest= {}
-        filled = pos = level_offset = 0
+        all_fields = self.domain.pf.h.fluid_field_list
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
         field_idxs = [all_fields.index(f) for f in fields]
+        dest = {}
         for field in fields:
-            dest[field] = np.zeros(self.cell_count, 'float64')
-        for level, offset in enumerate(self.domain.level_offsets):
-            no = self.domain.level_count[level]
-            if level==0:
-                data = _read_root_level(content,self.domain.level_child_offsets,
-                                       self.domain.level_count)
-                data = data[field_idxs,:]
-            else:
-                data = _read_child_level(content,self.domain.level_child_offsets,
-                                         self.domain.level_offsets,
-                                         self.domain.level_count,level,fields,
-                                         self.domain.pf.domain_dimensions,
-                                         self.domain.pf.parameters['ncell0'])
-            source= {}
-            for i,field in enumerate(fields):
-                source[field] = np.empty((no, 8), dtype="float64")
-                source[field][:,:] = np.reshape(data[i,:],(no,8))
-            level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        source = {}
+        data = _read_root_level(content, self.domain.level_child_offsets,
+                                self.domain.level_count)
+        for field, i in zip(fields, field_idxs):
+            temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
+                              order='F').astype('float64').T
+            source[field] = temp
+        level_offset += oct_handler.fill_level_from_grid(
+            self.domain.domain_id,
+            level, dest, source, self.mask, level_offset)
         return dest
 
+    def fill_level(self, content, ftfields):
+        oct_handler = self.oct_handler
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
+        dest = {}
+        for field in fields:
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        no = self.domain.level_count[level]
+        noct_range = [0, no]
+        source = _read_child_level(
+            content, self.domain.level_child_offsets,
+            self.domain.level_offsets,
+            self.domain.level_count, level, fields,
+            self.domain.pf.domain_dimensions,
+            self.domain.pf.parameters['ncell0'],
+            noct_range=noct_range)
+        nocts_filling = noct_range[1]-noct_range[0]
+        level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                               level, dest, source,
+                                               self.mask, level_offset,
+                                               noct_range[0],
+                                               nocts_filling)
+        return dest
+
+
 class ARTDomainFile(object):
     """
     Read in the AMR, left/right edges, fill out the octhandler
     """
-    #We already read in the header in static output,
-    #and since these headers are defined in only a single file it's
-    #best to leave them in the static output
+    # We already read in the header in static output,
+    # and since these headers are defined in only a single file it's
+    # best to leave them in the static output
     _last_mask = None
     _last_selector_id = None
 
-    def __init__(self,pf,domain_id,nvar):
+    def __init__(self, pf, domain_id, nvar, level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
+        self.domain_level = level
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
 
     @property
     def level_count(self):
-        #this is number of *octs*
-        if self._level_count is not None: return self._level_count
+        # this is number of *octs*
+        if self._level_count is not None:
+            return self._level_count[self.domain_level]
         self.level_offsets
-        return self._level_count
+        return self._level_count[self.domain_level]
 
     @property
     def level_child_offsets(self):
-        if self._level_count is not None: return self._level_child_offsets
+        if self._level_child_offsets is not None:
+            return self._level_child_offsets
         self.level_offsets
         return self._level_child_offsets
 
     @property
-    def level_offsets(self): 
-        #this is used by the IO operations to find the file offset,
-        #and then start reading to fill values
-        #note that this is called hydro_offset in ramses
-        if self._level_oct_offsets is not None: 
+    def level_offsets(self):
+        # this is used by the IO operations to find the file offset,
+        # and then start reading to fill values
+        # note that this is called hydro_offset in ramses
+        if self._level_oct_offsets is not None:
             return self._level_oct_offsets
         # We now have to open the file and calculate it
-        f = open(self.pf.file_amr, "rb")
+        f = open(self.pf._file_amr, "rb")
         nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
             _count_art_octs(f,  self.pf.child_grid_offset, self.pf.min_level,
                             self.pf.max_level)
-        #remember that the root grid is by itself; manually add it back in
+        # remember that the root grid is by itself; manually add it back in
         inoll[0] = self.pf.domain_dimensions.prod()/8
         _level_child_offsets[0] = self.pf.root_grid_offset
         self.nhydrovars = nhydrovars
-        self.inoll = inoll #number of octs
+        self.inoll = inoll  # number of octs
         self._level_oct_offsets = _level_oct_offsets
         self._level_child_offsets = _level_child_offsets
         self._level_count = inoll
         return self._level_oct_offsets
-    
-    def _read_amr(self, oct_handler):
+
+    def _read_amr_level(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
-           For each oct, only the position, index, level and domain 
+           For each oct, only the position, index, level and domain
            are needed - its position in the octree is found automatically.
            The most important step is finding all the information to feed
            oct_handler.add
         """
-        #on the root level we typically have 64^3 octs
-        #giving rise to 128^3 cells
-        #but on level 1 instead of 128^3 octs, we have 256^3 octs
-        #leave this code here instead of static output - it's memory intensive
         self.level_offsets
-        f = open(self.pf.file_amr, "rb")
-        #add the root *cell* not *oct* mesh
+        f = open(self.pf._file_amr, "rb")
+        level = self.domain_level
+        unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
+            f,
+            self._level_oct_offsets, level,
+            coarse_grid=self.pf.domain_dimensions[0],
+            root_level=self.pf.root_level)
+        nocts_check = oct_handler.add(self.domain_id, level, nocts,
+                                      unitary_center, self.domain_id)
+        assert(nocts_check == nocts)
+        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                    nocts, level, oct_handler.nocts)
+
+    def _read_amr_root(self, oct_handler):
+        self.level_offsets
+        f = open(self.pf._file_amr, "rb")
+        # add the root *cell* not *oct* mesh
+        level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
+        octs_side = NX*2**level
         LE = np.array([0.0, 0.0, 0.0], dtype='float64')
         RE = np.array([1.0, 1.0, 1.0], dtype='float64')
         root_dx = (RE - LE) / NX
         LL = LE + root_dx/2.0
         RL = RE - root_dx/2.0
-        #compute floating point centers of root octs
-        root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                          LL[1]:RL[1]:NX[1]*1j,
-                          LL[2]:RL[2]:NX[2]*1j ]
-        root_fc= np.vstack([p.ravel() for p in root_fc]).T
-        nocts_check = oct_handler.add(1, 0, root_octs_side**3,
+        # compute floating point centers of root octs
+        root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                           LL[1]:RL[1]:NX[1]*1j,
+                           LL[2]:RL[2]:NX[2]*1j]
+        root_fc = np.vstack([p.ravel() for p in root_fc]).T
+        nocts_check = oct_handler.add(self.domain_id, level,
+                                      root_octs_side**3,
                                       root_fc, self.domain_id)
         assert(oct_handler.nocts == root_fc.shape[0])
-        nocts_added = root_fc.shape[0]
         mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                    root_octs_side**3, 0,nocts_added)
-        for level in xrange(1, self.pf.max_level+1):
-            left_index, fl, iocts, nocts,root_level = _read_art_level_info(f, 
-                self._level_oct_offsets,level,
-                coarse_grid=self.pf.domain_dimensions[0])
-            left_index/=2
-            #at least one of the indices should be odd
-            #assert np.sum(left_index[:,0]%2==1)>0
-            octs_side = NX*2**level
-            float_left_edge = left_index.astype("float64") / octs_side
-            float_center = float_left_edge + 0.5*1.0/octs_side
-            #all floatin unitary positions should fit inside the domain
-            assert np.all(float_center<1.0)
-            nocts_check = oct_handler.add(1,level, nocts, float_left_edge, self.domain_id)
-            nocts_added += nocts
-            assert(oct_handler.nocts == nocts_added)
-            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level,nocts_added)
+                    root_octs_side**3, 0, oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:
@@ -585,8 +637,8 @@
 
     def count(self, selector):
         if id(selector) == self._last_selector_id:
-            if self._last_mask is None: return 0
+            if self._last_mask is None:
+                return 0
             return self._last_mask.sum()
         self.select(selector)
         return self.count(selector)
-

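The select_fwidth method in the hunk above derives per-cell widths purely
from refinement level: every level halves the root-grid spacing.  A minimal
NumPy sketch of that computation, with base_dx and levels standing in for
the domain spacing and the per-cell output of select_ires (both hypothetical
names, for illustration only):

    import numpy as np

    def cell_widths(base_dx, levels):
        # one width per cell and axis; each refinement level halves dx
        dds = 2.0 ** levels
        widths = np.empty((levels.size, 3), dtype="float64")
        for i in range(3):
            widths[:, i] = base_dx[i] / dds
        return widths

    # e.g. a 128^3 root grid with cells on levels 0, 1 and 2
    print(cell_widths(np.ones(3) / 128.0, np.array([0, 1, 2])))
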
diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -25,7 +25,10 @@
 
 """
 
-fluid_fields= [ 
+# If not otherwise specified, we are big endian
+endian = '>'
+
+fluid_fields = [
     'Density',
     'TotalEnergy',
     'XMomentumDensity',
@@ -40,32 +43,29 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
+hydro_struct = [('pad1', '>i'), ('idc', '>i'), ('iOctCh', '>i')]
 for field in fluid_fields:
-    hydro_struct += (field,'>f'),
-hydro_struct += ('pad2','>i'),
+    hydro_struct += (field, '>f'),
+hydro_struct += ('pad2', '>i'),
 
-particle_fields= [
-    'particle_age',
+particle_fields = [
+    'particle_mass',  # stars have variable mass
     'particle_index',
-    'particle_mass',
-    'particle_mass_initial',
-    'particle_creation_time',
-    'particle_metallicity1',
-    'particle_metallicity2',
-    'particle_metallicity',
+    'particle_type',
     'particle_position_x',
     'particle_position_y',
     'particle_position_z',
     'particle_velocity_x',
     'particle_velocity_y',
     'particle_velocity_z',
-    'particle_type',
-    'particle_index'
+    'particle_mass_initial',
+    'particle_creation_time',
+    'particle_metallicity1',
+    'particle_metallicity2',
+    'particle_metallicity',
 ]
 
 particle_star_fields = [
-    'particle_age',
     'particle_mass',
     'particle_mass_initial',
     'particle_creation_time',
@@ -74,110 +74,65 @@
     'particle_metallicity',
 ]
 
-filename_pattern = {				
-	'amr':'10MpcBox_csf512_%s.d',
-	'particle_header':'PMcrd%s.DAT',
-	'particle_data':'PMcrs0%s.DAT',
-	'particle_stars':'stars_%s.dat'
-}
 
-filename_pattern_hf = {				
-	'particle_header':'PMcrd_%s.DAT',
-	'particle_data':'PMcrs0_%s.DAT',
+filename_pattern = {
+    'amr': ['10MpcBox_', '.d'],
+    'particle_header': ['PMcrd', '.DAT'],
+    'particle_data': ['PMcrs', '.DAT'],
+    'particle_stars': ['stars', '.dat']
 }
 
 amr_header_struct = [
-    ('>i','pad byte'),
-    ('>256s','jname'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','istep'),
-    ('>d','t'),
-    ('>d','dt'),
-    ('>f','aexpn'),
-    ('>f','ainit'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','boxh'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','Omb0'),
-    ('>f','hubble'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','nextras'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','extra1'),
-    ('>f','extra2'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>256s','lextra'),
-    ('>256s','lextra'),
-    ('>i','pad byte'),
-    ('>i', 'pad byte'),
-    ('>i', 'min_level'),
-    ('>i', 'max_level'),
-    ('>i', 'pad byte'),
+    ('jname', 1, '256s'),
+    (('istep', 't', 'dt', 'aexpn', 'ainit'), 1, 'iddff'),
+    (('boxh', 'Om0', 'Oml0', 'Omb0', 'hubble'), 5, 'f'),
+    ('nextras', 1, 'i'),
+    (('extra1', 'extra2'), 2, 'f'),
+    ('lextra', 1, '512s'),
+    (('min_level', 'max_level'), 2, 'i')
 ]
 
-particle_header_struct =[
-    ('>i','pad'),
-    ('45s','header'), 
-    ('>f','aexpn'),
-    ('>f','aexp0'),
-    ('>f','amplt'),
-    ('>f','astep'),
-    ('>i','istep'),
-    ('>f','partw'),
-    ('>f','tintg'),
-    ('>f','Ekin'),
-    ('>f','Ekin1'),
-    ('>f','Ekin2'),
-    ('>f','au0'),
-    ('>f','aeu0'),
-    ('>i','Nrow'),
-    ('>i','Ngridc'),
-    ('>i','Nspecies'),
-    ('>i','Nseed'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','hubble'),
-    ('>f','Wp5'),
-    ('>f','Ocurv'),
-    ('>f','Omb0'),
-    ('>%ds'%(396),'extras'),
-    ('>f','unknown'),
-    ('>i','pad')
+particle_header_struct = [
+    (('header',
+     'aexpn', 'aexp0', 'amplt', 'astep',
+     'istep',
+     'partw', 'tintg',
+     'Ekin', 'Ekin1', 'Ekin2',
+     'au0', 'aeu0',
+     'Nrow', 'Ngridc', 'Nspecies', 'Nseed',
+     'Om0', 'Oml0', 'hubble', 'Wp5', 'Ocurv', 'Omb0',
+     'extras', 'unknown'),
+     1,
+     '45sffffi'+'fffffff'+'iiii'+'ffffff'+'396s'+'f')
 ]
 
 star_struct = [
-        ('>d',('tdum','adum')),
-        ('>i','nstars'),
-        ('>d',('ws_old','ws_oldi')),
-        ('>f','mass'),
-        ('>f','imass'),
-        ('>f','tbirth'),
-        ('>f','metallicity1'),
-        ('>f','metallicity2')
-        ]
+    ('>d', ('tdum', 'adum')),
+    ('>i', 'nstars'),
+    ('>d', ('ws_old', 'ws_oldi')),
+    ('>f', 'particle_mass'),
+    ('>f', 'particle_mass_initial'),
+    ('>f', 'particle_creation_time'),
+    ('>f', 'particle_metallicity1'),
+    ('>f', 'particle_metallicity2')
+]
 
 star_name_map = {
-        'particle_mass':'mass',
-        'particle_mass_initial':'imass',
-        'particle_age':'tbirth',
-        'particle_metallicity1':'metallicity1',
-        'particle_metallicity2':'metallicity2',
-        'particle_metallicity':'metallicity',
-        }
+    'particle_mass': 'mass',
+    'particle_mass_initial': 'imass',
+    'particle_creation_time': 'tbirth',
+    'particle_metallicity1': 'metallicity1',
+    'particle_metallicity2': 'metallicity2',
+    'particle_metallicity': 'metallicity',
+}
 
 constants = {
-    "Y_p":0.245,
-    "gamma":5./3.,
-    "T_CMB0":2.726,
-    "T_min":300.,
-    "ng":128,
-    "wmu":4.0/(8.0-5.0*0.245)
+    "Y_p": 0.245,
+    "gamma": 5./3.,
+    "T_CMB0": 2.726,
+    "T_min": 300.,
+    "ng": 128,
+    "wmu": 4.0/(8.0-5.0*0.245)
 }
 
 seek_extras = 137

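The rewritten structs above are consumed by yt's fortran_utils readers
(read_attrs, read_vector, skip) instead of hand-rolled struct unpacking.
For background, Fortran sequential unformatted files frame every record
with its byte length on both sides; a minimal sketch of that framing, shown
for orientation only and not taken from yt's implementation:

    import struct

    def read_fortran_record(f, endian='>'):
        # each record: 4-byte length, payload, then the length repeated
        (nbytes,) = struct.unpack(endian + 'i', f.read(4))
        payload = f.read(nbytes)
        (check,) = struct.unpack(endian + 'i', f.read(4))
        assert nbytes == check, "corrupt record framing"
        return payload
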
diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 1ab51a4c88ac448a24633d9d5f766499b875facb yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: UCSD
+Author: Chris Moody <matthewturk at gmail.com>
+Affiliation: UCSC
 Homepage: http://yt-project.org/
 License:
   Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.
@@ -22,7 +24,7 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
-
+import numpy as np
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, \
     FieldInfo, \
@@ -35,210 +37,221 @@
     ValidateGridType
 import yt.data_objects.universal_fields
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import mass_sun_cgs
+from yt.frontends.art.definitions import *
 
 KnownARTFields = FieldInfoContainer()
 add_art_field = KnownARTFields.add_field
-
 ARTFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_field = ARTFieldInfo.add_field
 
-import numpy as np
+for f in fluid_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)])
 
-#these are just the hydro fields
-known_art_fields = [ 'Density','TotalEnergy',
-                     'XMomentumDensity','YMomentumDensity','ZMomentumDensity',
-                     'Pressure','Gamma','GasEnergy',
-                     'MetalDensitySNII', 'MetalDensitySNIa',
-                     'PotentialNew','PotentialOld']
-
-#Add the fields, then later we'll individually defined units and names
-for f in known_art_fields:
+for f in particle_fields:
     add_art_field(f, function=NullFunc, take_log=True,
-              validators = [ValidateDataField(f)])
-
-#Hydro Fields that are verified to be OK unit-wise:
-#Density
-#Temperature
-#metallicities
-#MetalDensity SNII + SNia
-
-#Hydro Fields that need to be tested:
-#TotalEnergy
-#XYZMomentum
-#Pressure
-#Gamma
-#GasEnergy
-#Potentials
-#xyzvelocity
-
-#Particle fields that are tested:
-#particle_position_xyz
-#particle_type
-#particle_index
-#particle_mass
-#particle_mass_initial
-#particle_age
-#particle_velocity
-#particle_metallicity12
-
-#Particle fields that are untested:
-#NONE
-
-#Other checks:
-#CellMassMsun == Density * CellVolume
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField("particle_mass")],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField("particle_mass_initial")],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
 
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["Density"]._convert_function=_convertDensity
+KnownARTFields["Density"]._convert_function = _convertDensity
 
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["TotalEnergy"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["TotalEnergy"]._projected_units = r"\rm{K}"
-KnownARTFields["TotalEnergy"]._convert_function=_convertTotalEnergy
+KnownARTFields["TotalEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
 
 def _convertXMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["XMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["XMomentumDensity"]._convert_function=_convertXMomentumDensity
+KnownARTFields["XMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
 
 def _convertYMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["YMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["YMomentumDensity"]._convert_function=_convertYMomentumDensity
+KnownARTFields["YMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
 
 def _convertZMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["ZMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["ZMomentumDensity"]._convert_function=_convertZMomentumDensity
+KnownARTFields["ZMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
 
 def _convertPressure(data):
     return data.convert("Pressure")
-KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{cm}/\rm{s}^2"
+KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{s}^2/\rm{cm}^1"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
-KnownARTFields["Pressure"]._convert_function=_convertPressure
+KnownARTFields["Pressure"]._convert_function = _convertPressure
 
 def _convertGamma(data):
     return 1.0
 KnownARTFields["Gamma"]._units = r""
 KnownARTFields["Gamma"]._projected_units = r""
-KnownARTFields["Gamma"]._convert_function=_convertGamma
+KnownARTFields["Gamma"]._convert_function = _convertGamma
 
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["GasEnergy"]._units = r"\rm{ergs}/\rm{g}"
-KnownARTFields["GasEnergy"]._projected_units = r""
-KnownARTFields["GasEnergy"]._convert_function=_convertGasEnergy
+KnownARTFields["GasEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["GasEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
 
 def _convertMetalDensitySNII(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNII"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNII"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNII"]._convert_function=_convertMetalDensitySNII
+KnownARTFields["MetalDensitySNII"]._convert_function = _convertMetalDensitySNII
 
 def _convertMetalDensitySNIa(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNIa"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNIa"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNIa"]._convert_function=_convertMetalDensitySNIa
+KnownARTFields["MetalDensitySNIa"]._convert_function = _convertMetalDensitySNIa
 
 def _convertPotentialNew(data):
     return data.convert("Potential")
-KnownARTFields["PotentialNew"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialNew"]._convert_function=_convertPotentialNew
+KnownARTFields["PotentialNew"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
 
 def _convertPotentialOld(data):
     return data.convert("Potential")
-KnownARTFields["PotentialOld"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialOld"]._convert_function=_convertPotentialOld
+KnownARTFields["PotentialOld"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
+def _temperature(field, data):
+    tr = data["GasEnergy"]/data["Density"]
+    tr /= data.pf.conversion_factors["GasEnergy"]
+    tr *= data.pf.conversion_factors["Density"]
+    tr *= data.pf.conversion_factors['tr']
+    return tr
 
-def _temperature(field, data):
-    dg = data["GasEnergy"] #.astype('float64')
-    dg /= data.pf.conversion_factors["GasEnergy"]
-    dd = data["Density"] #.astype('float64')
-    dd /= data.pf.conversion_factors["Density"]
-    tr = dg/dd*data.pf.conversion_factors['tr']
-    #ghost cells have zero density?
-    tr[np.isnan(tr)] = 0.0
-    #dd[di] = -1.0
-    #if data.id==460:
-    #tr[di] = -1.0 #replace the zero-density points with zero temp
-    #print tr.min()
-    #assert np.all(np.isfinite(tr))
-    return tr
 def _converttemperature(data):
-    #x = data.pf.conversion_factors["Temperature"]
-    x = 1.0
-    return x
-add_field("Temperature", function=_temperature, units = r"\mathrm{K}",take_log=True)
+    return 1.0
+add_field("Temperature", function=_temperature,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
-#ARTFieldInfo["Temperature"]._convert_function=_converttemperature
 
 def _metallicity_snII(field, data):
-    tr  = data["MetalDensitySNII"] / data["Density"]
+    tr = data["MetalDensitySNII"] / data["Density"]
     return tr
-add_field("Metallicity_SNII", function=_metallicity_snII, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNII", function=_metallicity_snII,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNII"]._units = r""
 ARTFieldInfo["Metallicity_SNII"]._projected_units = r""
 
 def _metallicity_snIa(field, data):
-    tr  = data["MetalDensitySNIa"] / data["Density"]
+    tr = data["MetalDensitySNIa"] / data["Density"]
     return tr
-add_field("Metallicity_SNIa", function=_metallicity_snIa, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNIa", function=_metallicity_snIa,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNIa"]._units = r""
 ARTFieldInfo["Metallicity_SNIa"]._projected_units = r""
 
 def _metallicity(field, data):
-    tr  = data["Metal_Density"] / data["Density"]
+    tr = data["Metal_Density"] / data["Density"]
     return tr
-add_field("Metallicity", function=_metallicity, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity", function=_metallicity,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity"]._units = r""
 ARTFieldInfo["Metallicity"]._projected_units = r""
 
-def _x_velocity(field,data):
-    tr  = data["XMomentumDensity"]/data["Density"]
+def _x_velocity(field, data):
+    tr = data["XMomentumDensity"]/data["Density"]
     return tr
-add_field("x-velocity", function=_x_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("x-velocity", function=_x_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["x-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["x-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _y_velocity(field,data):
-    tr  = data["YMomentumDensity"]/data["Density"]
+def _y_velocity(field, data):
+    tr = data["YMomentumDensity"]/data["Density"]
     return tr
-add_field("y-velocity", function=_y_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("y-velocity", function=_y_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["y-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["y-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _z_velocity(field,data):
-    tr  = data["ZMomentumDensity"]/data["Density"]
+def _z_velocity(field, data):
+    tr = data["ZMomentumDensity"]/data["Density"]
     return tr
-add_field("z-velocity", function=_z_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("z-velocity", function=_z_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["z-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["z-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
 def _metal_density(field, data):
-    tr  = data["MetalDensitySNIa"]
+    tr = data["MetalDensitySNIa"]
     tr += data["MetalDensitySNII"]
     return tr
-add_field("Metal_Density", function=_metal_density, units = r"\mathrm{K}",take_log=True)
-ARTFieldInfo["Metal_Density"]._units = r""
-ARTFieldInfo["Metal_Density"]._projected_units = r""
+add_field("Metal_Density", function=_metal_density,
+          units=r"\mathrm{K}", take_log=True)
+ARTFieldInfo["Metal_Density"]._units = r"\rm{g}/\rm{cm}^3"
+ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
+# Particle fields
+def _particle_age(field, data):
+    tr = data["particle_creation_time"]
+    return data.pf.current_time - tr
+add_field("particle_age", function=_particle_age, units=r"\mathrm{s}",
+          take_log=True, particle_type=True)
 
-#Particle fields
+def spread_ages(ages, spread=1.0e7*365*24*3600):
+    # stars are formed in lumps; spread out the ages linearly
+    da = np.diff(ages)
+    assert np.all(da <= 0)
+    # ages should always be decreasing, and ordered so
+    agesd = np.zeros(ages.shape)
+    idx, = np.where(da < 0)
+    idx += 1  # mark the right edges
+    # spread this age evenly out to the next age
+    lidx = 0
+    lage = 0
+    for i in idx:
+        n = i-lidx  # n stars affected
+        rage = ages[i]
+        lage = max(rage-spread, 0.0)
+        agesd[lidx:i] = np.linspace(lage, rage, n)
+        lidx = i
+        # lage=rage
+    # we didn't get the last iter
+    n = agesd.shape[0]-lidx
+    rage = ages[-1]
+    lage = max(rage-spread, 0.0)
+    agesd[lidx:] = np.linspace(lage, rage, n)
+    return agesd
+
+def _particle_age_spread(field, data):
+    tr = data["particle_creation_time"]
+    return spread_ages(data.pf.current_time - tr)
+
+add_field("particle_age_spread", function=_particle_age_spread,
+          particle_type=True, take_log=True, units=r"\rm{s}")
+
+def _ParticleMassMsun(field, data):
+    return data["particle_mass"]/mass_sun_cgs
+add_field("ParticleMassMsun", function=_ParticleMassMsun, particle_type=True,
+          take_log=True, units=r"\rm{Msun}")

This diff is so big that we needed to truncate the remainder.

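The spread_ages helper added in the fields.py diff assumes a non-increasing
age array in which stars born in the same step share one recorded age.  A
simplified, self-contained variant of the same idea (lumps detected by
equality rather than np.diff; this is an illustration, not the committed
function):

    import numpy as np

    def spread_lump_ages(ages, spread=1.0e7 * 365 * 24 * 3600):
        # ages: non-increasing; equal entries form one "lump" of stars.
        # Spread each lump linearly over `spread` seconds, ending at the
        # lump's recorded age (clamped at zero).
        out = np.empty_like(ages, dtype="float64")
        start = 0
        for end in range(1, ages.size + 1):
            if end == ages.size or ages[end] != ages[start]:
                rage = ages[start]
                lage = max(rage - spread, 0.0)
                out[start:end] = np.linspace(lage, rage, end - start)
                start = end
        return out

    ages = np.array([5.0e16, 5.0e16, 5.0e16, 1.0e16, 1.0e16])
    print(spread_lump_ages(ages))
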
https://bitbucket.org/yt_analysis/yt/commits/6cf22697b73c/
Changeset:   6cf22697b73c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-27 14:28:51
Summary:     Merge
Affected #:  15 files

diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -1059,7 +1059,7 @@
 
     _fields = ["particle_position_%s" % ax for ax in 'xyz']
 
-    def __init__(self, data_source, dm_only=True):
+    def __init__(self, data_source, dm_only=True, redshift=-1):
         """
         Run hop on *data_source* with a given density *threshold*.  If
         *dm_only* is set, only run it on the dark matter particles, otherwise

diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -210,6 +210,8 @@
         """
         Deletes a field
         """
+        if key not in self.field_data:
+            key = self._determine_fields(key)[0]
         del self.field_data[key]
 
     def _generate_field(self, field):

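The __delitem__ change above lets a bare field name be resolved to its
stored (fieldtype, fieldname) tuple before deletion.  A minimal dict-backed
sketch of that resolution pattern, with _resolve standing in for the real
_determine_fields (hypothetical, for illustration):

    class FieldStore(object):
        def __init__(self):
            self.field_data = {}

        def _resolve(self, key):
            # stand-in for _determine_fields: match a bare name against
            # the stored (fieldtype, fieldname) tuple keys
            return [k for k in self.field_data if k[1] == key]

        def __delitem__(self, key):
            if key not in self.field_data:
                key = self._resolve(key)[0]
            del self.field_data[key]

    store = FieldStore()
    store.field_data[("gas", "Density")] = [1.0]
    del store["Density"]     # resolved to ("gas", "Density")
    print(store.field_data)  # {}
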
diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -678,3 +678,37 @@
     return [np.sum(totals[:,i]) for i in range(n_fields)]
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
+
+def _ParticleDensityCenter(data, nbins=3, particle_type="all"):
+    """
+    Find the center of the particle density
+    by histogramming the particles iteratively.
+    """
+    pos = [data[(particle_type, "particle_position_%s" % ax)] for ax in "xyz"]
+    pos = np.array(pos).T
+    mas = data[(particle_type, "particle_mass")]
+    calc_radius = lambda x, y: np.sqrt(np.sum((x - y)**2.0, axis=1))
+    density = 0
+    if pos.shape[0] <= 1:
+        # zero or one particle: no center to find
+        return -1.0, [-1., -1., -1.]
+    while pos.shape[0] > 1:
+        table, bins = np.histogramdd(pos, bins=nbins, weights=mas)
+        bin_size = min((np.max(bins, axis=1) - np.min(bins, axis=1)) / nbins)
+        centeridx = np.where(table == table.max())
+        le = np.array([bins[0][centeridx[0][0]],
+                       bins[1][centeridx[1][0]],
+                       bins[2][centeridx[2][0]]])
+        re = np.array([bins[0][centeridx[0][0] + 1],
+                       bins[1][centeridx[1][0] + 1],
+                       bins[2][centeridx[2][0] + 1]])
+        center = 0.5 * (le + re)
+        idx = calc_radius(pos, center) < bin_size
+        pos, mas = pos[idx], mas[idx]
+        density = max(density, mas.sum() / bin_size**3.0)
+    return density, center
+
+def _combParticleDensityCenter(data, densities, centers):
+    i = np.argmax(densities)
+    return densities[i], centers[i]
+
+add_quantity("ParticleDensityCenter", function=_ParticleDensityCenter,
+             combine_function=_combParticleDensityCenter, n_ret=2)

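The ParticleDensityCenter quantity above zooms in on the densest histogram
bin, each pass discarding particles more than one bin-width from the current
center.  A standalone sketch of the same loop on synthetic data (argmax-based
bin selection instead of np.where; not the committed code):

    import numpy as np

    def density_center(pos, mass, nbins=3):
        # iteratively histogram the mass and keep only particles within
        # one bin-width of the densest bin, zooming in until few remain
        density, center = 0.0, np.array([-1.0, -1.0, -1.0])
        while pos.shape[0] > 1:
            table, edges = np.histogramdd(pos, bins=nbins, weights=mass)
            bin_size = min((e[-1] - e[0]) / nbins for e in edges)
            if bin_size == 0.0:
                break  # all surviving particles coincide
            i = np.unravel_index(np.argmax(table), table.shape)
            center = np.array([0.5 * (edges[d][j] + edges[d][j + 1])
                               for d, j in enumerate(i)])
            keep = np.sqrt(((pos - center) ** 2).sum(axis=1)) < bin_size
            density = max(density, mass[keep].sum() / bin_size ** 3)
            pos, mass = pos[keep], mass[keep]
        return density, center

    rng = np.random.RandomState(0)
    pos = np.vstack([rng.normal(0.5, 0.02, (500, 3)),
                     rng.uniform(0.0, 1.0, (500, 3))])
    print(density_center(pos, np.ones(len(pos))))
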
diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -33,6 +33,7 @@
     pf.conversion_factors.update( dict((f, 1.0) for f in fields) )
     pf.current_redshift = 0.0001
     pf.hubble_constant = 0.7
+    pf.omega_matter = 0.27
     for unit in mpc_conversion:
         pf.units[unit+'h'] = pf.units[unit]
         pf.units[unit+'cm'] = pf.units[unit]

diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -30,6 +30,8 @@
 import stat
 import weakref
 import cStringIO
+import difflib
+import glob
 
 from yt.funcs import *
 from yt.geometry.oct_geometry_handler import \
@@ -37,9 +39,9 @@
 from yt.geometry.geometry_handler import \
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
-      StaticOutput
+    StaticOutput
 from yt.geometry.oct_container import \
-    RAMSESOctreeContainer
+    ARTOctreeContainer
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, NullFunc
 from .fields import \
@@ -52,20 +54,15 @@
     get_box_grids_level
 import yt.utilities.lib as amr_utils
 
-from .definitions import *
-from .io import _read_frecord
-from .io import _read_record
-from .io import _read_struct
+from yt.frontends.art.definitions import *
+from yt.utilities.fortran_utils import *
 from .io import _read_art_level_info
 from .io import _read_child_mask_level
 from .io import _read_child_level
 from .io import _read_root_level
-from .io import _read_record_size
-from .io import _skip_record
 from .io import _count_art_octs
 from .io import b2t
 
-
 import yt.frontends.ramses._ramses_reader as _ramses_reader
 
 from .fields import ARTFieldInfo, KnownARTFields
@@ -80,13 +77,9 @@
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
 
+
 class ARTGeometryHandler(OctreeGeometryHandler):
-    def __init__(self,pf,data_style="art"):
-        """
-        Life is made simpler because we only have one AMR file
-        and one domain. However, we are matching to the RAMSES
-        multi-domain architecture.
-        """
+    def __init__(self, pf, data_style="art"):
         self.fluid_field_list = fluid_fields
         self.data_style = data_style
         self.parameter_file = weakref.proxy(pf)
@@ -94,7 +87,16 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
         self.max_level = pf.max_level
         self.float_type = np.float64
-        super(ARTGeometryHandler,self).__init__(pf,data_style)
+        super(ARTGeometryHandler, self).__init__(pf, data_style)
+
+    def get_smallest_dx(self):
+        """
+        Returns (in code units) the smallest cell size in the simulation.
+        """
+        # Overloaded
+        pf = self.parameter_file
+        return (1.0/pf.domain_dimensions.astype('f8') /
+                (2**self.max_level)).min()
 
     def _initialize_oct_handler(self):
         """
@@ -102,23 +104,37 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,1,nv)]
+        self.domains = [ARTDomainFile(self.parameter_file, l+1, nv, l)
+                        for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
-        self.oct_handler = RAMSESOctreeContainer(
-            self.parameter_file.domain_dimensions/2, #dd is # of root cells
+        self.oct_handler = ARTOctreeContainer(
+            self.parameter_file.domain_dimensions/2,  # dd is # of root cells
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         mylog.debug("Allocating %s octs", self.total_octs)
         self.oct_handler.allocate_domains(self.octs_per_domain)
         for domain in self.domains:
-            domain._read_amr(self.oct_handler)
+            if domain.domain_level == 0:
+                domain._read_amr_root(self.oct_handler)
+            else:
+                domain._read_amr_level(self.oct_handler)
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
-        self.field_list = set(fluid_fields + particle_fields + particle_star_fields)
+        self.field_list = set(fluid_fields + particle_fields +
+                              particle_star_fields)
         self.field_list = list(self.field_list)
-    
+        # now generate all of the possible particle fields
+        if "wspecies" in self.parameter_file.parameters.keys():
+            wspecies = self.parameter_file.parameters['wspecies']
+            nspecies = len(wspecies)
+            self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
+            for specie in range(nspecies):
+                self.parameter_file.particle_types.append("specie%i" % specie)
+        else:
+            self.parameter_file.particle_types = []
+
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
         super(ARTGeometryHandler, self)._setup_classes(dd)
@@ -127,19 +143,23 @@
     def _identify_base_chunk(self, dobj):
         """
         Take the passed in data source dobj, and use its embedded selector
-        to calculate the domain mask, build the reduced domain 
+        to calculate the domain mask, build the reduced domain
         subsets and oct counts. Attach this information to dobj.
         """
         if getattr(dobj, "_chunk_info", None) is None:
-            #Get all octs within this oct handler
+            # Get all octs within this oct handler
             mask = dobj.selector.select_octs(self.oct_handler)
-            if mask.sum()==0:
+            if mask.sum() == 0:
                 mylog.debug("Warning: selected zero octs")
             counts = self.oct_handler.count_cells(dobj.selector, mask)
-            #For all domains, figure out how many counts we have 
-            #and build a subset=mask of domains 
-            subsets = [ARTDomainSubset(d, mask, c)
-                       for d, c in zip(self.domains, counts) if c > 0]
+            # For all domains, figure out how many counts we have
+            # and build a subset=mask of domains
+            subsets = []
+            for d, c in zip(self.domains, counts):
+                if c < 1:
+                    continue
+                subset = ARTDomainSubset(d, mask, c, d.domain_level)
+                subsets.append(subset)
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -147,8 +167,8 @@
 
     def _chunk_all(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
-        #We pass the chunk both the current chunk and list of chunks,
-        #as well as the referring data source
+        # We pass the chunk both the current chunk and list of chunks,
+        # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
     def _chunk_spatial(self, dobj, ngz):
@@ -157,7 +177,7 @@
     def _chunk_io(self, dobj):
         """
         Since subsets are calculated per domain,
-        i.e. per file, yield each domain at a time to 
+        i.e. per file, yield each domain at a time to
         organize by IO. We will eventually chunk out NMSU ART
         to be level-by-level.
         """
@@ -165,77 +185,66 @@
         for subset in oobjs:
             yield YTDataChunk(dobj, "io", [subset], subset.cell_count)
 
+
 class ARTStaticOutput(StaticOutput):
     _hierarchy_class = ARTGeometryHandler
     _fieldinfo_fallback = ARTFieldInfo
     _fieldinfo_known = KnownARTFields
 
-    def __init__(self,filename,data_style='art',
-                 fields = None, storage_filename = None,
-                 skip_particles=False,skip_stars=False,
-                 limit_level=None,spread_age=True):
+    def __init__(self, filename, data_style='art',
+                 fields=None, storage_filename=None,
+                 skip_particles=False, skip_stars=False,
+                 limit_level=None, spread_age=True,
+                 force_max_level=None, file_particle_header=None,
+                 file_particle_data=None, file_particle_stars=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
+        self._file_amr = filename
+        self._file_particle_header = file_particle_header
+        self._file_particle_data = file_particle_data
+        self._file_particle_stars = file_particle_stars
         self._find_files(filename)
-        self.file_amr = filename
         self.parameter_filename = filename
         self.skip_particles = skip_particles
         self.skip_stars = skip_stars
         self.limit_level = limit_level
         self.max_level = limit_level
+        self.force_max_level = force_max_level
         self.spread_age = spread_age
-        self.domain_left_edge = np.zeros(3,dtype='float')
-        self.domain_right_edge = np.zeros(3,dtype='float')+1.0
-        StaticOutput.__init__(self,filename,data_style)
+        self.domain_left_edge = np.zeros(3, dtype='float')
+        self.domain_right_edge = np.zeros(3, dtype='float')+1.0
+        StaticOutput.__init__(self, filename, data_style)
         self.storage_filename = storage_filename
 
-    def _find_files(self,file_amr):
+    def _find_files(self, file_amr):
         """
         Given the AMR base filename, attempt to find the
         particle header, star files, etc.
         """
-        prefix,suffix = filename_pattern['amr'].split('%s')
-        affix = os.path.basename(file_amr).replace(prefix,'')
-        affix = affix.replace(suffix,'')
-        affix = affix.replace('_','')
-        full_affix = affix
-        affix = affix[1:-1]
-        dirname = os.path.dirname(file_amr)
-        for fp in (filename_pattern_hf,filename_pattern):
-            for filetype, pattern in fp.items():
-                #if this attribute is already set skip it
-                if getattr(self,"file_"+filetype,None) is not None:
-                    continue
-                #sometimes the affix is surrounded by an extraneous _
-                #so check for an extra character on either side
-                check_filename = dirname+'/'+pattern%('?%s?'%affix)
-                filenames = glob.glob(check_filename)
-                if len(filenames)>1:
-                    check_filename_strict = \
-                            dirname+'/'+pattern%('?%s'%full_affix[1:])
-                    filenames = glob.glob(check_filename_strict)
-                
-                if len(filenames)==1:
-                    setattr(self,"file_"+filetype,filenames[0])
-                    mylog.info('discovered %s:%s',filetype,filenames[0])
-                elif len(filenames)>1:
-                    setattr(self,"file_"+filetype,None)
-                    mylog.info("Ambiguous number of files found for %s",
-                            check_filename)
-                    for fn in filenames:
-                        faffix = float(affix)
-                else:
-                    setattr(self,"file_"+filetype,None)
+        base_prefix, base_suffix = filename_pattern['amr']
+        possibles = glob.glob(os.path.dirname(file_amr)+"/*")
+        for filetype, (prefix, suffix) in filename_pattern.iteritems():
+            # if this attribute is already set skip it
+            if getattr(self, "_file_"+filetype, None) is not None:
+                continue
+            stripped = file_amr.replace(base_prefix, prefix)
+            stripped = stripped.replace(base_suffix, suffix)
+            matches = difflib.get_close_matches(stripped, possibles, 1, 0.6)
+            if matches:
+                match = matches[0]
+                mylog.info('discovered %s:%s', filetype, match)
+                setattr(self, "_file_"+filetype, match)
+            else:
+                # unpacking an empty result would raise; record the miss
+                setattr(self, "_file_"+filetype, None)
 
     def __repr__(self):
-        return self.file_amr.rsplit(".",1)[0]
+        return self._file_amr.split('/')[-1]
 
     def _set_units(self):
         """
-        Generates the conversion to various physical units based 
-		on the parameters from the header
+        Generates the conversion to various physical units based
+        on the parameters from the header
         """
         self.units = {}
         self.time_units = {}
@@ -243,9 +252,9 @@
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0
 
-        #spatial units
-        z   = self.current_redshift
-        h   = self.hubble_constant
+        # spatial units
+        z = self.current_redshift
+        h = self.hubble_constant
         boxcm_cal = self.parameters["boxh"]
         boxcm_uncal = boxcm_cal / h
         box_proper = boxcm_uncal/(1+z)
@@ -256,55 +265,59 @@
             self.units[unit+'cm'] = mpc_conversion[unit] * boxcm_uncal
             self.units[unit+'hcm'] = mpc_conversion[unit] * boxcm_cal
 
-        #all other units
+        # all other units
         wmu = self.parameters["wmu"]
         Om0 = self.parameters['Om0']
-        ng  = self.parameters['ng']
+        ng = self.parameters['ng']
         wmu = self.parameters["wmu"]
-        boxh   = self.parameters['boxh'] 
-        aexpn  = self.parameters["aexpn"]
+        boxh = self.parameters['boxh']
+        aexpn = self.parameters["aexpn"]
         hubble = self.parameters['hubble']
 
         cf = defaultdict(lambda: 1.0)
         r0 = boxh/ng
-        P0= 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
-        T_0 = 3.03e5 * r0**2.0 * wmu * Om0 # [K]
+        P0 = 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
+        T_0 = 3.03e5 * r0**2.0 * wmu * Om0  # [K]
         S_0 = 52.077 * wmu**(5.0/3.0)
         S_0 *= hubble**(-4.0/3.0)*Om0**(1.0/3.0)*r0**2.0
-        #v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
+        # v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
         v0 = 50.0*r0*np.sqrt(Om0)
         t0 = r0/v0
         rho1 = 1.8791e-29 * hubble**2.0 * self.omega_matter
         rho0 = 2.776e11 * hubble**2.0 * Om0
-        tr = 2./3. *(3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))     
+        tr = 2./3. * (3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))
         aM0 = rho0 * (boxh/hubble)**3.0 / ng**3.0
-        cf['r0']=r0
-        cf['P0']=P0
-        cf['T_0']=T_0
-        cf['S_0']=S_0
-        cf['v0']=v0
-        cf['t0']=t0
-        cf['rho0']=rho0
-        cf['rho1']=rho1
-        cf['tr']=tr
-        cf['aM0']=aM0
+        cf['r0'] = r0
+        cf['P0'] = P0
+        cf['T_0'] = T_0
+        cf['S_0'] = S_0
+        cf['v0'] = v0
+        cf['t0'] = t0
+        cf['rho0'] = rho0
+        cf['rho1'] = rho1
+        cf['tr'] = tr
+        cf['aM0'] = aM0
 
-        #factors to multiply the native code units to CGS
-        cf['Pressure'] = P0 #already cgs
-        cf['Velocity'] = v0/aexpn*1.0e5 #proper cm/s
+        # factors to multiply the native code units to CGS
+        cf['Pressure'] = P0  # already cgs
+        cf['Velocity'] = v0/aexpn*1.0e5  # proper cm/s
         cf["Mass"] = aM0 * 1.98892e33
         cf["Density"] = rho1*(aexpn**-3.0)
         cf["GasEnergy"] = rho0*v0**2*(aexpn**-5.0)
         cf["Potential"] = 1.0
         cf["Entropy"] = S_0
         cf["Temperature"] = tr
+        cf["Time"] = 1.0
+        cf["particle_mass"] = cf['Mass']
+        cf["particle_mass_initial"] = cf['Mass']
         self.cosmological_simulation = True
         self.conversion_factors = cf
-        
-        for particle_field in particle_fields:
-            self.conversion_factors[particle_field] =  1.0
+
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
+        for pt in particle_fields:
+            if pt not in self.conversion_factors.keys():
+                self.conversion_factors[pt] = 1.0
         for unit in sec_conversion.keys():
             self.time_units[unit] = 1.0 / sec_conversion[unit]
 
@@ -320,72 +333,89 @@
         self.unique_identifier = \
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         self.parameters.update(constants)
-        #read the amr header
-        with open(self.file_amr,'rb') as f:
-            amr_header_vals = _read_struct(f,amr_header_struct)
-            for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                _skip_record(f)
-            (self.ncell,) = struct.unpack('>l', _read_record(f))
+        self.parameters['Time'] = 1.0
+        # read the amr header
+        with open(self._file_amr, 'rb') as f:
+            amr_header_vals = read_attrs(f, amr_header_struct, '>')
+            for to_skip in ['tl', 'dtl', 'tlold', 'dtlold', 'iSO']:
+                skipped = skip(f, endian='>')
+            (self.ncell) = read_vector(f, 'i', '>')[0]
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
             # This is not the same as the number of Octs.
-            #domain dimensions is the number of root *cells*
+            # domain dimensions is the number of root *cells*
             self.domain_dimensions = np.ones(3, dtype='int64')*est
             self.root_grid_mask_offset = f.tell()
             self.root_nocts = self.domain_dimensions.prod()/8
             self.root_ncells = self.root_nocts*8
-            mylog.debug("Estimating %i cells on a root grid side,"+ \
-                        "%i root octs",est,self.root_nocts)
-            self.root_iOctCh = _read_frecord(f,'>i')[:self.root_ncells]
+            mylog.debug("Estimating %i cells on a root grid side," +
+                        "%i root octs", est, self.root_nocts)
+            self.root_iOctCh = read_vector(f, 'i', '>')[:self.root_ncells]
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
-                 order='F')
+                                                        order='F')
             self.root_grid_offset = f.tell()
-            #_skip_record(f) # hvar
-            #_skip_record(f) # var
-            self.root_nhvar = _read_frecord(f,'>f',size_only=True)
-            self.root_nvar  = _read_frecord(f,'>f',size_only=True)
-            #make sure that the number of root variables is a multiple of rootcells
-            assert self.root_nhvar%self.root_ncells==0
-            assert self.root_nvar%self.root_ncells==0
-            self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
-                                    self.root_ncells)
-            self.iOctFree, self.nOct = struct.unpack('>ii', _read_record(f))
+            self.root_nhvar = skip(f, endian='>')
+            self.root_nvar = skip(f, endian='>')
+            # make sure that the number of root variables is a multiple of
+            # rootcells
+            assert self.root_nhvar % self.root_ncells == 0
+            assert self.root_nvar % self.root_ncells == 0
+            self.nhydro_variables = ((self.root_nhvar+self.root_nvar) /
+                                     self.root_ncells)
+            self.iOctFree, self.nOct = read_vector(f, 'i', '>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
-        #read the particle header
-        if not self.skip_particles and self.file_particle_header:
-            with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = _read_struct(fh,particle_header_struct)
+            # estimate the root level
+            float_center, fl, iocts, nocts, root_level = _read_art_level_info(
+                f,
+                [0, self.child_grid_offset], 1,
+                coarse_grid=self.domain_dimensions[0])
+            del float_center, fl, iocts, nocts
+            self.root_level = root_level
+            mylog.info("Using root level of %02i", self.root_level)
+        # read the particle header
+        if not self.skip_particles and self._file_particle_header:
+            with open(self._file_particle_header, "rb") as fh:
+                particle_header_vals = read_attrs(
+                    fh, particle_header_struct, '>')
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
-                wspecies = np.fromfile(fh,dtype='>f',count=10)
-                lspecies = np.fromfile(fh,dtype='>i',count=10)
+                wspecies = np.fromfile(fh, dtype='>f', count=10)
+                lspecies = np.fromfile(fh, dtype='>i', count=10)
             self.parameters['wspecies'] = wspecies[:n]
             self.parameters['lspecies'] = lspecies[:n]
             ls_nonzero = np.diff(lspecies)[:n-1]
-            mylog.info("Discovered %i species of particles",len(ls_nonzero))
+            self.star_type = len(ls_nonzero)
+            mylog.info("Discovered %i species of particles", len(ls_nonzero))
             mylog.info("Particle populations: "+'%1.1e '*len(ls_nonzero),
-                *ls_nonzero)
-            for k,v in particle_header_vals.items():
+                       *ls_nonzero)
+            for k, v in particle_header_vals.items():
                 if k in self.parameters.keys():
                     if not self.parameters[k] == v:
-                        mylog.info("Inconsistent parameter %s %1.1e  %1.1e",k,v,
-                                   self.parameters[k])
+                        mylog.info(
+                            "Inconsistent parameter %s %1.1e  %1.1e", k, v,
+                            self.parameters[k])
                 else:
-                    self.parameters[k]=v
+                    self.parameters[k] = v
             self.parameters_particles = particle_header_vals
-    
-        #setup standard simulation params yt expects to see
+
+        # setup standard simulation params yt expects to see
         self.current_redshift = self.parameters["aexpn"]**-1.0 - 1.0
         self.omega_lambda = amr_header_vals['Oml0']
         self.omega_matter = amr_header_vals['Om0']
         self.hubble_constant = amr_header_vals['hubble']
         self.min_level = amr_header_vals['min_level']
         self.max_level = amr_header_vals['max_level']
-        self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
+        if self.limit_level is not None:
+            self.max_level = min(
+                self.limit_level, amr_header_vals['max_level'])
+        if self.force_max_level is not None:
+            self.max_level = self.force_max_level
+        self.hubble_time = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
+        mylog.info("Max level is %02i", self.max_level)
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -393,20 +423,24 @@
         Defined for the NMSU file naming scheme.
         This could differ for other formats.
         """
-        fn = ("%s" % (os.path.basename(args[0])))
         f = ("%s" % args[0])
-        prefix, suffix = filename_pattern['amr'].split('%s')
-        if fn.endswith(suffix) and fn.startswith(prefix) and\
-                os.path.exists(f): 
+        prefix, suffix = filename_pattern['amr']
+        with open(f, 'rb') as fh:
+            try:
+                amr_header_vals = read_attrs(fh, amr_header_struct, '>')
                 return True
+            except AssertionError:
+                return False
         return False
 
+
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
+    def __init__(self, domain, mask, cell_count, domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
         self.cell_count = cell_count
+        self.domain_level = domain_level
         level_counts = self.oct_handler.count_levels(
             self.domain.pf.max_level, self.domain.domain_id, mask)
         assert(level_counts.sum() == cell_count)
@@ -432,12 +466,12 @@
     def select_fwidth(self, dobj):
         base_dx = 1.0/self.domain.pf.domain_dimensions
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
-            widths[:,i] = base_dx[i] / dds
+            widths[:, i] = base_dx[i] / dds
         return widths
 
-    def fill(self, content, fields):
+    def fill_root(self, content, ftfields):
         """
         This is called from IOHandler. It takes content
         which is a binary stream, reads the requested field
@@ -446,135 +480,153 @@
         the order they appear in the octhandler.
         """
         oct_handler = self.oct_handler
-        all_fields  = self.domain.pf.h.fluid_field_list
-        fields = [f for ft, f in fields]
-        dest= {}
-        filled = pos = level_offset = 0
+        all_fields = self.domain.pf.h.fluid_field_list
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
         field_idxs = [all_fields.index(f) for f in fields]
+        dest = {}
         for field in fields:
-            dest[field] = np.zeros(self.cell_count, 'float64')
-        for level, offset in enumerate(self.domain.level_offsets):
-            no = self.domain.level_count[level]
-            if level==0:
-                data = _read_root_level(content,self.domain.level_child_offsets,
-                                       self.domain.level_count)
-                data = data[field_idxs,:]
-            else:
-                data = _read_child_level(content,self.domain.level_child_offsets,
-                                         self.domain.level_offsets,
-                                         self.domain.level_count,level,fields,
-                                         self.domain.pf.domain_dimensions,
-                                         self.domain.pf.parameters['ncell0'])
-            source= {}
-            for i,field in enumerate(fields):
-                source[field] = np.empty((no, 8), dtype="float64")
-                source[field][:,:] = np.reshape(data[i,:],(no,8))
-            level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        source = {}
+        data = _read_root_level(content, self.domain.level_child_offsets,
+                                self.domain.level_count)
+        for field, i in zip(fields, field_idxs):
+            temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
+                              order='F').astype('float64').T
+            source[field] = temp
+        level_offset += oct_handler.fill_level_from_grid(
+            self.domain.domain_id,
+            level, dest, source, self.mask, level_offset)
         return dest
 
+    def fill_level(self, content, ftfields):
+        oct_handler = self.oct_handler
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
+        dest = {}
+        for field in fields:
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        no = self.domain.level_count[level]
+        noct_range = [0, no]
+        source = _read_child_level(
+            content, self.domain.level_child_offsets,
+            self.domain.level_offsets,
+            self.domain.level_count, level, fields,
+            self.domain.pf.domain_dimensions,
+            self.domain.pf.parameters['ncell0'],
+            noct_range=noct_range)
+        nocts_filling = noct_range[1]-noct_range[0]
+        level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                               level, dest, source,
+                                               self.mask, level_offset,
+                                               noct_range[0],
+                                               nocts_filling)
+        return dest
+
+
 class ARTDomainFile(object):
     """
     Read in the AMR, left/right edges, fill out the octhandler
     """
-    #We already read in the header in static output,
-    #and since these headers are defined in only a single file it's
-    #best to leave them in the static output
+    # We already read in the header in static output,
+    # and since these headers are defined in only a single file it's
+    # best to leave them in the static output
     _last_mask = None
     _last_selector_id = None
 
-    def __init__(self,pf,domain_id,nvar):
+    def __init__(self, pf, domain_id, nvar, level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
+        self.domain_level = level
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
 
     @property
     def level_count(self):
-        #this is number of *octs*
-        if self._level_count is not None: return self._level_count
+        # this is number of *octs*
+        if self._level_count is not None:
+            return self._level_count
         self.level_offsets
-        return self._level_count
+        return self._level_count[self.domain_level]
 
     @property
     def level_child_offsets(self):
-        if self._level_count is not None: return self._level_child_offsets
+        if self._level_count is not None:
+            return self._level_child_offsets
         self.level_offsets
         return self._level_child_offsets
 
     @property
-    def level_offsets(self): 
-        #this is used by the IO operations to find the file offset,
-        #and then start reading to fill values
-        #note that this is called hydro_offset in ramses
-        if self._level_oct_offsets is not None: 
+    def level_offsets(self):
+        # this is used by the IO operations to find the file offset,
+        # and then start reading to fill values
+        # note that this is called hydro_offset in ramses
+        if self._level_oct_offsets is not None:
             return self._level_oct_offsets
         # We now have to open the file and calculate it
-        f = open(self.pf.file_amr, "rb")
+        f = open(self.pf._file_amr, "rb")
         nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
             _count_art_octs(f,  self.pf.child_grid_offset, self.pf.min_level,
                             self.pf.max_level)
-        #remember that the root grid is by itself; manually add it back in
+        # remember that the root grid is by itself; manually add it back in
         inoll[0] = self.pf.domain_dimensions.prod()/8
         _level_child_offsets[0] = self.pf.root_grid_offset
         self.nhydrovars = nhydrovars
-        self.inoll = inoll #number of octs
+        self.inoll = inoll  # number of octs
         self._level_oct_offsets = _level_oct_offsets
         self._level_child_offsets = _level_child_offsets
         self._level_count = inoll
         return self._level_oct_offsets
-    
-    def _read_amr(self, oct_handler):
+
+    def _read_amr_level(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
-           For each oct, only the position, index, level and domain 
+           For each oct, only the position, index, level and domain
            are needed - its position in the octree is found automatically.
            The most important step is finding all the information to feed
            oct_handler.add
         """
-        #on the root level we typically have 64^3 octs
-        #giving rise to 128^3 cells
-        #but on level 1 instead of 128^3 octs, we have 256^3 octs
-        #leave this code here instead of static output - it's memory intensive
         self.level_offsets
-        f = open(self.pf.file_amr, "rb")
-        #add the root *cell* not *oct* mesh
+        f = open(self.pf._file_amr, "rb")
+        level = self.domain_level
+        unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
+            f,
+            self._level_oct_offsets, level,
+            coarse_grid=self.pf.domain_dimensions[0],
+            root_level=self.pf.root_level)
+        nocts_check = oct_handler.add(self.domain_id, level, nocts,
+                                      unitary_center, self.domain_id)
+        assert(nocts_check == nocts)
+        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                    nocts, level, oct_handler.nocts)
+
+    def _read_amr_root(self, oct_handler):
+        self.level_offsets
+        f = open(self.pf._file_amr, "rb")
+        # add the root *cell* not *oct* mesh
+        level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
+        octs_side = NX*2**level
         LE = np.array([0.0, 0.0, 0.0], dtype='float64')
         RE = np.array([1.0, 1.0, 1.0], dtype='float64')
         root_dx = (RE - LE) / NX
         LL = LE + root_dx/2.0
         RL = RE - root_dx/2.0
-        #compute floating point centers of root octs
-        root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                          LL[1]:RL[1]:NX[1]*1j,
-                          LL[2]:RL[2]:NX[2]*1j ]
-        root_fc= np.vstack([p.ravel() for p in root_fc]).T
-        nocts_check = oct_handler.add(1, 0, root_octs_side**3,
+        # compute floating point centers of root octs
+        root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                           LL[1]:RL[1]:NX[1]*1j,
+                           LL[2]:RL[2]:NX[2]*1j]
+        root_fc = np.vstack([p.ravel() for p in root_fc]).T
+        nocts_check = oct_handler.add(self.domain_id, level,
+                                      root_octs_side**3,
                                       root_fc, self.domain_id)
         assert(oct_handler.nocts == root_fc.shape[0])
-        nocts_added = root_fc.shape[0]
         mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                    root_octs_side**3, 0,nocts_added)
-        for level in xrange(1, self.pf.max_level+1):
-            left_index, fl, iocts, nocts,root_level = _read_art_level_info(f, 
-                self._level_oct_offsets,level,
-                coarse_grid=self.pf.domain_dimensions[0])
-            left_index/=2
-            #at least one of the indices should be odd
-            #assert np.sum(left_index[:,0]%2==1)>0
-            octs_side = NX*2**level
-            float_left_edge = left_index.astype("float64") / octs_side
-            float_center = float_left_edge + 0.5*1.0/octs_side
-            #all floatin unitary positions should fit inside the domain
-            assert np.all(float_center<1.0)
-            nocts_check = oct_handler.add(1,level, nocts, float_left_edge, self.domain_id)
-            nocts_added += nocts
-            assert(oct_handler.nocts == nocts_added)
-            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level,nocts_added)
+                    root_octs_side**3, 0, oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:
@@ -585,8 +637,8 @@
 
     def count(self, selector):
         if id(selector) == self._last_selector_id:
-            if self._last_mask is None: return 0
+            if self._last_mask is None:
+                return 0
             return self._last_mask.sum()
         self.select(selector)
         return self.count(selector)
-

diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -25,7 +25,10 @@
 
 """
 
-fluid_fields= [ 
+# If not otherwise specified, we are big endian
+endian = '>'
+
+fluid_fields = [
     'Density',
     'TotalEnergy',
     'XMomentumDensity',
@@ -40,32 +43,29 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
+hydro_struct = [('pad1', '>i'), ('idc', '>i'), ('iOctCh', '>i')]
 for field in fluid_fields:
-    hydro_struct += (field,'>f'),
-hydro_struct += ('pad2','>i'),
+    hydro_struct += (field, '>f'),
+hydro_struct += ('pad2', '>i'),
 
-particle_fields= [
-    'particle_age',
+particle_fields = [
+    'particle_mass',  # stars have variable mass
     'particle_index',
-    'particle_mass',
-    'particle_mass_initial',
-    'particle_creation_time',
-    'particle_metallicity1',
-    'particle_metallicity2',
-    'particle_metallicity',
+    'particle_type',
     'particle_position_x',
     'particle_position_y',
     'particle_position_z',
     'particle_velocity_x',
     'particle_velocity_y',
     'particle_velocity_z',
-    'particle_type',
-    'particle_index'
+    'particle_mass_initial',
+    'particle_creation_time',
+    'particle_metallicity1',
+    'particle_metallicity2',
+    'particle_metallicity',
 ]
 
 particle_star_fields = [
-    'particle_age',
     'particle_mass',
     'particle_mass_initial',
     'particle_creation_time',
@@ -74,110 +74,65 @@
     'particle_metallicity',
 ]
 
-filename_pattern = {				
-	'amr':'10MpcBox_csf512_%s.d',
-	'particle_header':'PMcrd%s.DAT',
-	'particle_data':'PMcrs0%s.DAT',
-	'particle_stars':'stars_%s.dat'
-}
 
-filename_pattern_hf = {				
-	'particle_header':'PMcrd_%s.DAT',
-	'particle_data':'PMcrs0_%s.DAT',
+filename_pattern = {
+    'amr': ['10MpcBox_', '.d'],
+    'particle_header': ['PMcrd', '.DAT'],
+    'particle_data': ['PMcrs', '.DAT'],
+    'particle_stars': ['stars', '.dat']
 }
 
 amr_header_struct = [
-    ('>i','pad byte'),
-    ('>256s','jname'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','istep'),
-    ('>d','t'),
-    ('>d','dt'),
-    ('>f','aexpn'),
-    ('>f','ainit'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','boxh'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','Omb0'),
-    ('>f','hubble'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','nextras'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','extra1'),
-    ('>f','extra2'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>256s','lextra'),
-    ('>256s','lextra'),
-    ('>i','pad byte'),
-    ('>i', 'pad byte'),
-    ('>i', 'min_level'),
-    ('>i', 'max_level'),
-    ('>i', 'pad byte'),
+    ('jname', 1, '256s'),
+    (('istep', 't', 'dt', 'aexpn', 'ainit'), 1, 'iddff'),
+    (('boxh', 'Om0', 'Oml0', 'Omb0', 'hubble'), 5, 'f'),
+    ('nextras', 1, 'i'),
+    (('extra1', 'extra2'), 2, 'f'),
+    ('lextra', 1, '512s'),
+    (('min_level', 'max_level'), 2, 'i')
 ]
 
-particle_header_struct =[
-    ('>i','pad'),
-    ('45s','header'), 
-    ('>f','aexpn'),
-    ('>f','aexp0'),
-    ('>f','amplt'),
-    ('>f','astep'),
-    ('>i','istep'),
-    ('>f','partw'),
-    ('>f','tintg'),
-    ('>f','Ekin'),
-    ('>f','Ekin1'),
-    ('>f','Ekin2'),
-    ('>f','au0'),
-    ('>f','aeu0'),
-    ('>i','Nrow'),
-    ('>i','Ngridc'),
-    ('>i','Nspecies'),
-    ('>i','Nseed'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','hubble'),
-    ('>f','Wp5'),
-    ('>f','Ocurv'),
-    ('>f','Omb0'),
-    ('>%ds'%(396),'extras'),
-    ('>f','unknown'),
-    ('>i','pad')
+particle_header_struct = [
+    (('header',
+     'aexpn', 'aexp0', 'amplt', 'astep',
+     'istep',
+     'partw', 'tintg',
+     'Ekin', 'Ekin1', 'Ekin2',
+     'au0', 'aeu0',
+     'Nrow', 'Ngridc', 'Nspecies', 'Nseed',
+     'Om0', 'Oml0', 'hubble', 'Wp5', 'Ocurv', 'Omb0',
+     'extras', 'unknown'),
+     1,
+     '45sffffi'+'fffffff'+'iiii'+'ffffff'+'396s'+'f')
 ]
 
 star_struct = [
-        ('>d',('tdum','adum')),
-        ('>i','nstars'),
-        ('>d',('ws_old','ws_oldi')),
-        ('>f','mass'),
-        ('>f','imass'),
-        ('>f','tbirth'),
-        ('>f','metallicity1'),
-        ('>f','metallicity2')
-        ]
+    ('>d', ('tdum', 'adum')),
+    ('>i', 'nstars'),
+    ('>d', ('ws_old', 'ws_oldi')),
+    ('>f', 'particle_mass'),
+    ('>f', 'particle_mass_initial'),
+    ('>f', 'particle_creation_time'),
+    ('>f', 'particle_metallicity1'),
+    ('>f', 'particle_metallicity2')
+]
 
 star_name_map = {
-        'particle_mass':'mass',
-        'particle_mass_initial':'imass',
-        'particle_age':'tbirth',
-        'particle_metallicity1':'metallicity1',
-        'particle_metallicity2':'metallicity2',
-        'particle_metallicity':'metallicity',
-        }
+    'particle_mass': 'mass',
+    'particle_mass_initial': 'imass',
+    'particle_creation_time': 'tbirth',
+    'particle_metallicity1': 'metallicity1',
+    'particle_metallicity2': 'metallicity2',
+    'particle_metallicity': 'metallicity',
+}
 
 constants = {
-    "Y_p":0.245,
-    "gamma":5./3.,
-    "T_CMB0":2.726,
-    "T_min":300.,
-    "ng":128,
-    "wmu":4.0/(8.0-5.0*0.245)
+    "Y_p": 0.245,
+    "gamma": 5./3.,
+    "T_CMB0": 2.726,
+    "T_min": 300.,
+    "ng": 128,
+    "wmu": 4.0/(8.0-5.0*0.245)
 }
 
 seek_extras = 137

diff -r f49fc8747a994d9f8c7326d7ba2a25f05430e56a -r 6cf22697b73c1490164684de693252380b7c2d18 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: UCSD
+Author: Chris Moody <matthewturk at gmail.com>
+Affiliation: UCSC
 Homepage: http://yt-project.org/
 License:
   Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.
@@ -22,7 +24,7 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
-
+import numpy as np
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, \
     FieldInfo, \
@@ -35,210 +37,221 @@
     ValidateGridType
 import yt.data_objects.universal_fields
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import mass_sun_cgs
+from yt.frontends.art.definitions import *
 
 KnownARTFields = FieldInfoContainer()
 add_art_field = KnownARTFields.add_field
-
 ARTFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_field = ARTFieldInfo.add_field
 
-import numpy as np
+for f in fluid_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)])
 
-#these are just the hydro fields
-known_art_fields = [ 'Density','TotalEnergy',
-                     'XMomentumDensity','YMomentumDensity','ZMomentumDensity',
-                     'Pressure','Gamma','GasEnergy',
-                     'MetalDensitySNII', 'MetalDensitySNIa',
-                     'PotentialNew','PotentialOld']
-
-#Add the fields, then later we'll individually defined units and names
-for f in known_art_fields:
+for f in particle_fields:
     add_art_field(f, function=NullFunc, take_log=True,
-              validators = [ValidateDataField(f)])
-
-#Hydro Fields that are verified to be OK unit-wise:
-#Density
-#Temperature
-#metallicities
-#MetalDensity SNII + SNia
-
-#Hydro Fields that need to be tested:
-#TotalEnergy
-#XYZMomentum
-#Pressure
-#Gamma
-#GasEnergy
-#Potentials
-#xyzvelocity
-
-#Particle fields that are tested:
-#particle_position_xyz
-#particle_type
-#particle_index
-#particle_mass
-#particle_mass_initial
-#particle_age
-#particle_velocity
-#particle_metallicity12
-
-#Particle fields that are untested:
-#NONE
-
-#Other checks:
-#CellMassMsun == Density * CellVolume
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
 
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["Density"]._convert_function=_convertDensity
+KnownARTFields["Density"]._convert_function = _convertDensity
 
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["TotalEnergy"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["TotalEnergy"]._projected_units = r"\rm{K}"
-KnownARTFields["TotalEnergy"]._convert_function=_convertTotalEnergy
+KnownARTFields["TotalEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
 
 def _convertXMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["XMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["XMomentumDensity"]._convert_function=_convertXMomentumDensity
+KnownARTFields["XMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
 
 def _convertYMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["YMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["YMomentumDensity"]._convert_function=_convertYMomentumDensity
+KnownARTFields["YMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
 
 def _convertZMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["ZMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["ZMomentumDensity"]._convert_function=_convertZMomentumDensity
+KnownARTFields["ZMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
 
 def _convertPressure(data):
     return data.convert("Pressure")
-KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{cm}/\rm{s}^2"
+KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{s}^2/\rm{cm}^1"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
-KnownARTFields["Pressure"]._convert_function=_convertPressure
+KnownARTFields["Pressure"]._convert_function = _convertPressure
 
 def _convertGamma(data):
     return 1.0
 KnownARTFields["Gamma"]._units = r""
 KnownARTFields["Gamma"]._projected_units = r""
-KnownARTFields["Gamma"]._convert_function=_convertGamma
+KnownARTFields["Gamma"]._convert_function = _convertGamma
 
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["GasEnergy"]._units = r"\rm{ergs}/\rm{g}"
-KnownARTFields["GasEnergy"]._projected_units = r""
-KnownARTFields["GasEnergy"]._convert_function=_convertGasEnergy
+KnownARTFields["GasEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["GasEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
 
 def _convertMetalDensitySNII(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNII"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNII"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNII"]._convert_function=_convertMetalDensitySNII
+KnownARTFields["MetalDensitySNII"]._convert_function = _convertMetalDensitySNII
 
 def _convertMetalDensitySNIa(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNIa"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNIa"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNIa"]._convert_function=_convertMetalDensitySNIa
+KnownARTFields["MetalDensitySNIa"]._convert_function = _convertMetalDensitySNIa
 
 def _convertPotentialNew(data):
     return data.convert("Potential")
-KnownARTFields["PotentialNew"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialNew"]._convert_function=_convertPotentialNew
+KnownARTFields["PotentialNew"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
 
 def _convertPotentialOld(data):
     return data.convert("Potential")
-KnownARTFields["PotentialOld"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialOld"]._convert_function=_convertPotentialOld
+KnownARTFields["PotentialOld"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
+def _temperature(field, data):
+    tr = data["GasEnergy"]/data["Density"]
+    tr /= data.pf.conversion_factors["GasEnergy"]
+    tr *= data.pf.conversion_factors["Density"]
+    tr *= data.pf.conversion_factors['tr']
+    return tr
 
-def _temperature(field, data):
-    dg = data["GasEnergy"] #.astype('float64')
-    dg /= data.pf.conversion_factors["GasEnergy"]
-    dd = data["Density"] #.astype('float64')
-    dd /= data.pf.conversion_factors["Density"]
-    tr = dg/dd*data.pf.conversion_factors['tr']
-    #ghost cells have zero density?
-    tr[np.isnan(tr)] = 0.0
-    #dd[di] = -1.0
-    #if data.id==460:
-    #tr[di] = -1.0 #replace the zero-density points with zero temp
-    #print tr.min()
-    #assert np.all(np.isfinite(tr))
-    return tr
 def _converttemperature(data):
-    #x = data.pf.conversion_factors["Temperature"]
-    x = 1.0
-    return x
-add_field("Temperature", function=_temperature, units = r"\mathrm{K}",take_log=True)
+    return 1.0
+add_field("Temperature", function=_temperature,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
-#ARTFieldInfo["Temperature"]._convert_function=_converttemperature
 
 def _metallicity_snII(field, data):
-    tr  = data["MetalDensitySNII"] / data["Density"]
+    tr = data["MetalDensitySNII"] / data["Density"]
     return tr
-add_field("Metallicity_SNII", function=_metallicity_snII, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNII", function=_metallicity_snII,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNII"]._units = r""
 ARTFieldInfo["Metallicity_SNII"]._projected_units = r""
 
 def _metallicity_snIa(field, data):
-    tr  = data["MetalDensitySNIa"] / data["Density"]
+    tr = data["MetalDensitySNIa"] / data["Density"]
     return tr
-add_field("Metallicity_SNIa", function=_metallicity_snIa, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNIa", function=_metallicity_snIa,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNIa"]._units = r""
 ARTFieldInfo["Metallicity_SNIa"]._projected_units = r""
 
 def _metallicity(field, data):
-    tr  = data["Metal_Density"] / data["Density"]
+    tr = data["Metal_Density"] / data["Density"]
     return tr
-add_field("Metallicity", function=_metallicity, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity", function=_metallicity,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity"]._units = r""
 ARTFieldInfo["Metallicity"]._projected_units = r""
 
-def _x_velocity(field,data):
-    tr  = data["XMomentumDensity"]/data["Density"]
+def _x_velocity(field, data):
+    tr = data["XMomentumDensity"]/data["Density"]
     return tr
-add_field("x-velocity", function=_x_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("x-velocity", function=_x_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["x-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["x-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _y_velocity(field,data):
-    tr  = data["YMomentumDensity"]/data["Density"]
+def _y_velocity(field, data):
+    tr = data["YMomentumDensity"]/data["Density"]
     return tr
-add_field("y-velocity", function=_y_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("y-velocity", function=_y_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["y-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["y-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _z_velocity(field,data):
-    tr  = data["ZMomentumDensity"]/data["Density"]
+def _z_velocity(field, data):
+    tr = data["ZMomentumDensity"]/data["Density"]
     return tr
-add_field("z-velocity", function=_z_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("z-velocity", function=_z_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["z-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["z-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
 def _metal_density(field, data):
-    tr  = data["MetalDensitySNIa"]
+    tr = data["MetalDensitySNIa"]
     tr += data["MetalDensitySNII"]
     return tr
-add_field("Metal_Density", function=_metal_density, units = r"\mathrm{K}",take_log=True)
-ARTFieldInfo["Metal_Density"]._units = r""
-ARTFieldInfo["Metal_Density"]._projected_units = r""
+add_field("Metal_Density", function=_metal_density,
+          units=r"\mathrm{K}", take_log=True)
+ARTFieldInfo["Metal_Density"]._units = r"\rm{g}/\rm{cm}^3"
+ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
+# Particle fields
+def _particle_age(field, data):
+    tr = data["particle_creation_time"]
+    return data.pf.current_time - tr
+add_field("particle_age", function=_particle_age, units=r"\mathrm{s}",
+          take_log=True, particle_type=True)
 
-#Particle fields
+def spread_ages(ages, spread=1.0e7*365*24*3600):
+    # stars are formed in lumps; spread out the ages linearly
+    da = np.diff(ages)
+    assert np.all(da <= 0)
+    # ages should always be decreasing, and ordered so
+    agesd = np.zeros(ages.shape)
+    idx, = np.where(da < 0)
+    idx += 1  # mark the right edges
+    # spread this age evenly out to the next age
+    lidx = 0
+    lage = 0
+    for i in idx:
+        n = i-lidx  # n stars affected
+        rage = ages[i]
+        lage = max(rage-spread, 0.0)
+        agesd[lidx:i] = np.linspace(lage, rage, n)
+        lidx = i
+        # lage=rage
+    # we didn't get the last iter
+    n = agesd.shape[0]-lidx
+    rage = ages[-1]
+    lage = max(rage-spread, 0.0)
+    agesd[lidx:] = np.linspace(lage, rage, n)
+    return agesd
+
+def _particle_age_spread(field, data):
+    tr = data["particle_creation_time"]
+    return spread_ages(data.pf.current_time - tr)
+
+add_field("particle_age_spread", function=_particle_age_spread,
+          particle_type=True, take_log=True, units=r"\rm{s}")
+
+def _ParticleMassMsun(field, data):
+    return data["particle_mass"]/mass_sun_cgs
+add_field("ParticleMassMsun", function=_ParticleMassMsun, particle_type=True,
+          take_log=True, units=r"\rm{Msun}")

This diff is so big that we needed to truncate the remainder.
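
For context, the fields.py changes above all follow one registration pattern: a
derived field is a plain function taking (field, data), registered with
add_field.  A minimal sketch of that same pattern follows; the "SoundSpeed"
field and its formula are illustrative and are not part of the diff.

    import numpy as np
    from yt.data_objects.field_info_container import \
        FieldInfoContainer, FieldInfo

    ARTFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
    add_field = ARTFieldInfo.add_field

    def _sound_speed(field, data):
        # illustrative formula: c_s = sqrt(gamma * P / rho)
        return np.sqrt(data["Gamma"] * data["Pressure"] / data["Density"])

    add_field("SoundSpeed", function=_sound_speed,
              units=r"\rm{cm}/\rm{s}", take_log=False)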

https://bitbucket.org/yt_analysis/yt/commits/9bc9be139905/
Changeset:   9bc9be139905
Branch:      yt-3.0
User:        xarthisius
Date:        2013-03-27 15:08:55
Summary:     This changes another assert_equal in field unit conversion tests to allow 4 nulp
Affected #:  1 file

diff -r 6cf22697b73c1490164684de693252380b7c2d18 -r 9bc9be139905a5d03dfeae915661d992671c8310 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -81,7 +81,7 @@
                 v1 = g[self.field_name]
                 g.clear_data()
                 g.field_parameters.update(_sample_parameters)
-                assert_equal(v1, conv*field._function(field, g))
+                assert_equal_almost_equal_nulp(v1, conv*field._function(field, g), 4)
 
 def test_all_fields():
     for field in FieldInfo:
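
For context, "nulp" here is numpy's unit-in-the-last-place comparison:
numpy.testing.assert_array_almost_equal_nulp bounds the allowed difference in
units of floating-point spacing rather than by an absolute tolerance.  A
minimal standalone sketch (not from the diff):

    import numpy as np
    from numpy.testing import assert_array_almost_equal_nulp

    a = np.array([1.0, 2.0])
    b = a + np.spacing(a)                          # each element 1 ulp away
    assert_array_almost_equal_nulp(a, b, nulp=4)   # passes: within 4 ulp
    # a difference of 8 ulp, e.g. a + 8*np.spacing(a), would raise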


https://bitbucket.org/yt_analysis/yt/commits/6c9860ffd783/
Changeset:   6c9860ffd783
Branch:      yt-3.0
User:        xarthisius
Date:        2013-03-27 15:12:42
Summary:     Fix typo in previous changeset
Affected #:  1 file

diff -r 9bc9be139905a5d03dfeae915661d992671c8310 -r 6c9860ffd783d43ff12fcd50c370309795eb2be2 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -81,7 +81,7 @@
                 v1 = g[self.field_name]
                 g.clear_data()
                 g.field_parameters.update(_sample_parameters)
-                assert_equal_almost_equal_nulp(v1, conv*field._function(field, g), 4)
+                assert_array_almost_equal_nulp(v1, conv*field._function(field, g), 4)
 
 def test_all_fields():
     for field in FieldInfo:


https://bitbucket.org/yt_analysis/yt/commits/1d07aeeb15c1/
Changeset:   1d07aeeb15c1
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-28 20:11:33
Summary:     Rewrite _read_vector to work with strings and files.
Affected #:  1 file

diff -r 6c9860ffd783d43ff12fcd50c370309795eb2be2 -r 1d07aeeb15c1f20febd1691ae3047bbc5acca694 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -117,18 +117,21 @@
     >>> f = open("fort.3", "rb")
     >>> rv = read_vector(f, 'd')
     """
-    fmt = endian+"%s" % d
-    size = struct.calcsize(fmt)
-    padfmt = endian + "I"
-    padsize = struct.calcsize(padfmt)
-    length = struct.unpack(padfmt,f.read(padsize))[0]
-    if length % size!= 0:
+    pad_fmt = "%sI" % (endian)
+    pad_size = struct.calcsize(pad_fmt)
+    vec_len = struct.unpack(pad_fmt,f.read(pad_size))[0] # bytes
+    vec_fmt = "%s%s" % (endian, d)
+    vec_size = struct.calcsize(vec_fmt)
+    if vec_len % vec_size != 0:
         print "fmt = '%s' ; length = %s ; size= %s" % (fmt, length, size)
         raise RuntimeError
-    count = length/ size
-    tr = np.fromfile(f,fmt,count=count)
-    length2= struct.unpack(padfmt,f.read(padsize))[0]
-    assert(length == length2)
+    vec_num = vec_len / vec_size
+    if isinstance(f, file): # Needs to be explicitly a file
+        tr = np.fromfile(f, vec_fmt, count=vec_num)
+    else:
+        tr = np.fromstring(f.read(vec_len), vec_fmt, count=vec_num)
+    vec_len2 = struct.unpack(pad_fmt,f.read(pad_size))[0]
+    assert(vec_len == vec_len2)
     return tr
 
 def skip(f, n=1, endian='='):
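
For context, the record framing read_vector relies on: a Fortran unformatted
record is a 4-byte byte count, the payload, then the byte count repeated,
which is why the two pad reads must agree.  A minimal standalone sketch of
writing and re-reading one such record (big-endian, as the ART frontend
assumes; not from the diff):

    import struct
    import numpy as np

    payload = np.arange(4, dtype=">f8").tostring()
    record = (struct.pack(">I", len(payload)) + payload +
              struct.pack(">I", len(payload)))

    n = struct.unpack(">I", record[:4])[0]            # leading pad: byte count
    data = np.fromstring(record[4:4 + n], ">f8")      # payload
    n2 = struct.unpack(">I", record[4 + n:8 + n])[0]  # trailing pad
    assert n == n2                                    # the consistency check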


https://bitbucket.org/yt_analysis/yt/commits/2b47da9b16a9/
Changeset:   2b47da9b16a9
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-28 20:14:08
Summary:     Fixing RAMSES with ires->select_ires.
Affected #:  1 file

diff -r 1d07aeeb15c1f20febd1691ae3047bbc5acca694 -r 2b47da9b16a987f4a48d03afef600548637304cd yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -269,7 +269,7 @@
         base_dx = (self.domain.pf.domain_width /
                    self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
             widths[:,i] = base_dx[i] / dds
         return widths


https://bitbucket.org/yt_analysis/yt/commits/382704af8a3f/
Changeset:   382704af8a3f
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-03-28 21:01:49
Summary:     Adding a note that an AssertionError can come from an incorrect number of RAMSES fields.
Affected #:  1 file

diff -r 2b47da9b16a987f4a48d03afef600548637304cd -r 382704af8a3f86e440e3d6a3bd25d8e4671e3412 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -94,7 +94,12 @@
                              self.amr_header['ncpu']):
                 header = ( ('file_ilevel', 1, 'I'),
                            ('file_ncache', 1, 'I') )
-                hvals = fpu.read_attrs(f, header)
+                try:
+                    hvals = fpu.read_attrs(f, header, "=")
+                except AssertionError:
+                    print "You are running with the wrong number of fields."
+                    print "Please specify these in the load command."
+                    raise
                 if hvals['file_ncache'] == 0: continue
                 assert(hvals['file_ilevel'] == level+1)
                 if cpu + 1 == self.domain_id and level >= min_level:
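
For context, read_attrs validates the Fortran record pads with assertions, so
a field list that does not match what is on disk surfaces as an AssertionError
rather than a read error.  A rough standalone illustration of that failure
mode (hypothetical sizes; this is not the actual read_attrs internals):

    import struct

    payload = struct.pack("=3I", 1, 2, 3)       # record really holds 3 ints
    record = (struct.pack("=I", len(payload)) + payload +
              struct.pack("=I", len(payload)))

    expected = struct.calcsize("=2I")            # caller assumed only 2 ints
    nbytes = struct.unpack("=I", record[:4])[0]
    assert nbytes == expected                    # fails -> AssertionError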


https://bitbucket.org/yt_analysis/yt/commits/962f6c59656b/
Changeset:   962f6c59656b
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-11 18:14:37
Summary:     Fixing RAMSES particle reading when fields are missing.

This will avoid adding particle fields that don't exist.  Note also that I
have added particle_age and particle_metallicity to the known RAMSES particle
fields.
Affected #:  2 files

diff -r 382704af8a3f86e440e3d6a3bd25d8e4671e3412 -r 962f6c59656b134ae9accb72cf739cbb5a06181f yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -116,6 +116,9 @@
             self.particle_field_offsets = {}
             return
         f = open(self.part_fn, "rb")
+        f.seek(0, os.SEEK_END)
+        flen = f.tell()
+        f.seek(0)
         hvals = {}
         attrs = ( ('ncpu', 1, 'I'),
                   ('ndim', 1, 'I'),
@@ -143,12 +146,15 @@
         if hvals["nstar_tot"] > 0:
             particle_fields += [("particle_age", "d"),
                                 ("particle_metallicity", "d")]
-        field_offsets = {particle_fields[0][0]: f.tell()}
-        for field, vtype in particle_fields[1:]:
+        field_offsets = {}
+        _pfields = {}
+        for field, vtype in particle_fields:
+            if f.tell() >= flen: break
+            field_offsets[field] = f.tell()
+            _pfields[field] = vtype
             fpu.skip(f, 1)
-            field_offsets[field] = f.tell()
         self.particle_field_offsets = field_offsets
-        self.particle_field_types = dict(particle_fields)
+        self.particle_field_types = _pfields
 
     def _read_amr_header(self):
         hvals = {}

diff -r 382704af8a3f86e440e3d6a3bd25d8e4671e3412 -r 962f6c59656b134ae9accb72cf739cbb5a06181f yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -91,6 +91,8 @@
     "particle_mass",
     "particle_identifier",
     "particle_refinement_level",
+    "particle_age",
+    "particle_metallicity",
 ]
 
 for f in known_ramses_particle_fields:
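
The offset scan above works because the particle file is a flat sequence of
one Fortran record per field: record the current offset, skip one record, and
stop as soon as the file position reaches the total file length.  A condensed
standalone sketch of that pattern (scan_offsets and skip_record are
hypothetical names; skip_record stands in for fpu.skip):

    import os

    def scan_offsets(f, candidate_fields, skip_record):
        f.seek(0, os.SEEK_END)
        flen = f.tell()                  # total file length in bytes
        f.seek(0)
        offsets = {}
        for field in candidate_fields:
            if f.tell() >= flen:
                break                    # no records left; field is absent
            offsets[field] = f.tell()
            skip_record(f)               # advance past one Fortran record
        return offsets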


https://bitbucket.org/yt_analysis/yt/commits/0057bdaec2ec/
Changeset:   0057bdaec2ec
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-04-10 01:43:14
Summary:     Fixing yt's particle selection to reflect enzo-3.0 PR #40.
Affected #:  1 file

diff -r 382704af8a3f86e440e3d6a3bd25d8e4671e3412 -r 0057bdaec2ecb2f9c7efabed72c7b52afe2571e6 yt/frontends/enzo/io.py
--- a/yt/frontends/enzo/io.py
+++ b/yt/frontends/enzo/io.py
@@ -55,7 +55,7 @@
         ptypes = list(set([ftype for ftype, fname in fields]))
         fields = list(set(fields))
         if len(ptypes) > 1: raise NotImplementedError
-        pfields = [(ptypes[0], "position_%s" % ax) for ax in 'xyz']
+        pfields = [(ptypes[0], "particle_position_%s" % ax) for ax in 'xyz']
         size = 0
         for chunk in chunks:
             data = self._read_chunk_data(chunk, pfields, 'active', 


https://bitbucket.org/yt_analysis/yt/commits/ae0003cdf0a5/
Changeset:   ae0003cdf0a5
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-12 19:21:23
Summary:     Merged in ngoldbaum/yt-3.0 (pull request #29)

Fixing yt's particle selection to reflect enzo-3.0 PR #40.
Affected #:  1 file

diff -r 962f6c59656b134ae9accb72cf739cbb5a06181f -r ae0003cdf0a5c5c11d3722d37796c67b0b84428a yt/frontends/enzo/io.py
--- a/yt/frontends/enzo/io.py
+++ b/yt/frontends/enzo/io.py
@@ -55,7 +55,7 @@
         ptypes = list(set([ftype for ftype, fname in fields]))
         fields = list(set(fields))
         if len(ptypes) > 1: raise NotImplementedError
-        pfields = [(ptypes[0], "position_%s" % ax) for ax in 'xyz']
+        pfields = [(ptypes[0], "particle_position_%s" % ax) for ax in 'xyz']
         size = 0
         for chunk in chunks:
             data = self._read_chunk_data(chunk, pfields, 'active', 


https://bitbucket.org/yt_analysis/yt/commits/0b7d1ec2011c/
Changeset:   0b7d1ec2011c
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-20 23:09:01
Summary:     slight changes to file finding; should work with STG datasets
Affected #:  1 file

diff -r 4f194a75accbb4485b0185b95079d0be2a681d2a -r 0b7d1ec2011c66db46bc740ee74766bd6bfb681e yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -224,17 +224,19 @@
         particle header, star files, etc.
         """
         base_prefix, base_suffix = filename_pattern['amr']
-        possibles = glob.glob(os.path.dirname(file_amr)+"/*")
         for filetype, (prefix, suffix) in filename_pattern.iteritems():
-            # if this attribute is already set skip it
+            if "amr" in filetype: continue
             if getattr(self, "_file_"+filetype, None) is not None:
                 continue
             stripped = file_amr.replace(base_prefix, prefix)
             stripped = stripped.replace(base_suffix, suffix)
-            match, = difflib.get_close_matches(stripped, possibles, 1, 0.6)
-            if match is not None:
-                mylog.info('discovered %s:%s', filetype, match)
-                setattr(self, "_file_"+filetype, match)
+            path = "/%s*%s"%(prefix,suffix)
+            possibles = glob.glob(os.path.dirname(file_amr)+path)
+            matches = difflib.get_close_matches(stripped, possibles, 1, 0.85)
+            if len(matches) == 0: continue
+            if matches[0] is not None:
+                mylog.info('discovered %s:%s', filetype, matches[0])
+                setattr(self, "_file_"+filetype, matches[0])
             else:
                 setattr(self, "_file_"+filetype, None)
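
The lookup above guesses each companion file's name from the AMR file name,
globs for candidates with the right prefix and suffix, and keeps the closest
fuzzy match.  A condensed standalone sketch (find_companion is a hypothetical
helper; the patterns mirror filename_pattern in definitions.py):

    import difflib
    import glob
    import os

    def find_companion(file_amr, base_pattern, pattern):
        base_prefix, base_suffix = base_pattern
        prefix, suffix = pattern
        # build a guess, then fuzzy-match it against files that exist
        guess = file_amr.replace(base_prefix, prefix)
        guess = guess.replace(base_suffix, suffix)
        possibles = glob.glob(os.path.dirname(file_amr) +
                              "/%s*%s" % (prefix, suffix))
        matches = difflib.get_close_matches(guess, possibles, 1, 0.85)
        if len(matches) == 0:
            return None
        return matches[0]

    # e.g. find_companion("/data/10MpcBox_csf512_a0.500.d",
    #                     ["10MpcBox_", ".d"], ["PMcrd", ".DAT"])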
 


https://bitbucket.org/yt_analysis/yt/commits/54a5da1fd2a5/
Changeset:   54a5da1fd2a5
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-03-20 23:09:16
Summary:     ensuring data leaving io.py is 64-bit
Affected #:  1 file

diff -r 0b7d1ec2011c66db46bc740ee74766bd6bfb681e -r 54a5da1fd2a51f8f836d1fd0e1dbf1425ca7c174 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -356,7 +356,7 @@
                 data = np.concatenate((data, temp))
             else:
                 fh.seek(4*np_per_page, 1)
-    return data
+    return data.astype("f8")
 
 
 def read_star_field(file, field=None):
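
Upcasting with astype("f8") as data leaves io.py means later unit conversions
run in double precision even when the on-disk particle data are single
precision.  A one-line illustration:

    import numpy as np

    data = np.arange(3, dtype="f4")     # single precision, as read from disk
    data = data.astype("f8")            # what io.py now returns
    assert data.dtype == np.float64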


https://bitbucket.org/yt_analysis/yt/commits/c78f5ddaa98f/
Changeset:   c78f5ddaa98f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-08 19:15:31
Summary:     replaced tabs with spaces
Affected #:  1 file

diff -r 54a5da1fd2a51f8f836d1fd0e1dbf1425ca7c174 -r c78f5ddaa98f189c80399bb67a4c814672371347 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -312,8 +312,8 @@
         Used for testing the periodic adjustment machinery
         of this derived quantity.
     include_particles : Bool
-	Should we add the mass contribution of particles
-	to calculate binding energy?
+    Should we add the mass contribution of particles
+    to calculate binding energy?
 
     Examples
     --------
@@ -332,13 +332,13 @@
                       (data["z-velocity"] - bv_z)**2)).sum()
 
     if (include_particles):
-	mass_to_use = data["TotalMass"]
+        mass_to_use = data["TotalMass"]
         kinetic += 0.5 * (data["Dark_Matter_Mass"] *
                           ((data["cic_particle_velocity_x"] - bv_x)**2 +
                            (data["cic_particle_velocity_y"] - bv_y)**2 +
                            (data["cic_particle_velocity_z"] - bv_z)**2)).sum()
     else:
-	mass_to_use = data["CellMass"]
+        mass_to_use = data["CellMass"]
     # Add thermal energy to kinetic energy
     if (include_thermal_energy):
         thermal = (data["ThermalEnergy"] * mass_to_use).sum()
@@ -375,8 +375,8 @@
     for label in ["x", "y", "z"]: # Separating CellMass from the for loop
         local_data[label] = data[label]
     local_data["CellMass"] = mass_to_use # Adding CellMass separately
-					 # NOTE: if include_particles = True, local_data["CellMass"]
-					 #       is not the same as data["CellMass"]!!!
+    # NOTE: if include_particles = True, local_data["CellMass"]
+    #       is not the same as data["CellMass"]!!!
     if periodic.any():
         # Adjust local_data to re-center the clump to remove the periodicity
         # by the gap calculated above.
@@ -431,7 +431,7 @@
             thisx = (local_data["x"][sel] / dx).astype('int64') - cover_imin[0] * 2**L
             thisy = (local_data["y"][sel] / dy).astype('int64') - cover_imin[1] * 2**L
             thisz = (local_data["z"][sel] / dz).astype('int64') - cover_imin[2] * 2**L
-	    vals = np.array([local_data["CellMass"][sel]], order='F')
+            vals = np.array([local_data["CellMass"][sel]], order='F')
             octree.add_array_to_tree(L, thisx, thisy, thisz, vals,
                np.ones_like(thisx).astype('float64'), treecode = 1)
         # Now we calculate the binding energy using a treecode.


https://bitbucket.org/yt_analysis/yt/commits/13db4ab10eae/
Changeset:   13db4ab10eae
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-08 19:41:17
Summary:     Merge
Affected #:  4 files

diff -r c78f5ddaa98f189c80399bb67a4c814672371347 -r 13db4ab10eae7872ef876d7d01094b5077a6aba7 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -33,6 +33,7 @@
     pf.conversion_factors.update( dict((f, 1.0) for f in fields) )
     pf.current_redshift = 0.0001
     pf.hubble_constant = 0.7
+    pf.omega_matter = 0.27
     for unit in mpc_conversion:
         pf.units[unit+'h'] = pf.units[unit]
         pf.units[unit+'cm'] = pf.units[unit]
@@ -72,7 +73,7 @@
         if not field.particle_type:
             assert_equal(v1, dd1["gas", self.field_name])
         if not needs_spatial:
-            assert_equal(v1, conv*field._function(field, dd2))
+            assert_array_almost_equal_nulp(v1, conv*field._function(field, dd2), 4)
         if not skip_grids:
             for g in pf.h.grids:
                 g.field_parameters.update(_sample_parameters)
@@ -80,7 +81,7 @@
                 v1 = g[self.field_name]
                 g.clear_data()
                 g.field_parameters.update(_sample_parameters)
-                assert_equal(v1, conv*field._function(field, g))
+                assert_array_almost_equal_nulp(v1, conv*field._function(field, g), 4)
 
 def test_all_fields():
     for field in FieldInfo:

diff -r c78f5ddaa98f189c80399bb67a4c814672371347 -r 13db4ab10eae7872ef876d7d01094b5077a6aba7 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -94,7 +94,12 @@
                              self.amr_header['ncpu']):
                 header = ( ('file_ilevel', 1, 'I'),
                            ('file_ncache', 1, 'I') )
-                hvals = fpu.read_attrs(f, header)
+                try:
+                    hvals = fpu.read_attrs(f, header, "=")
+                except AssertionError:
+                    print "You are running with the wrong number of fields."
+                    print "Please specify these in the load command."
+                    raise
                 if hvals['file_ncache'] == 0: continue
                 assert(hvals['file_ilevel'] == level+1)
                 if cpu + 1 == self.domain_id and level >= min_level:
@@ -143,6 +148,7 @@
             fpu.skip(f, 1)
             field_offsets[field] = f.tell()
         self.particle_field_offsets = field_offsets
+        self.particle_field_types = dict(particle_fields)
 
     def _read_amr_header(self):
         hvals = {}
@@ -268,7 +274,7 @@
         base_dx = (self.domain.pf.domain_width /
                    self.domain.pf.domain_dimensions)
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
             widths[:,i] = base_dx[i] / dds
         return widths

diff -r c78f5ddaa98f189c80399bb67a4c814672371347 -r 13db4ab10eae7872ef876d7d01094b5077a6aba7 yt/frontends/ramses/io.py
--- a/yt/frontends/ramses/io.py
+++ b/yt/frontends/ramses/io.py
@@ -60,33 +60,45 @@
     def _read_particle_selection(self, chunks, selector, fields):
         size = 0
         masks = {}
+        chunks = list(chunks)
+        pos_fields = [("all","particle_position_%s" % ax) for ax in "xyz"]
         for chunk in chunks:
             for subset in chunk.objs:
                 # We read the whole thing, then feed it back to the selector
-                offsets = []
-                f = open(subset.domain.part_fn, "rb")
-                foffsets = subset.domain.particle_field_offsets
-                selection = {}
-                for ax in 'xyz':
-                    field = "particle_position_%s" % ax
-                    f.seek(foffsets[field])
-                    selection[ax] = fpu.read_vector(f, 'd')
-                mask = selector.select_points(selection['x'],
-                            selection['y'], selection['z'])
+                selection = self._read_particle_subset(subset, pos_fields)
+                mask = selector.select_points(
+                    selection["all", "particle_position_x"],
+                    selection["all", "particle_position_y"],
+                    selection["all", "particle_position_z"])
                 if mask is None: continue
+                #print "MASK", mask
                 size += mask.sum()
                 masks[id(subset)] = mask
         # Now our second pass
-        tr = dict((f, np.empty(size, dtype="float64")) for f in fields)
+        tr = {}
+        pos = 0
         for chunk in chunks:
             for subset in chunk.objs:
-                f = open(subset.domain.part_fn, "rb")
+                selection = self._read_particle_subset(subset, fields)
                 mask = masks.pop(id(subset), None)
                 if mask is None: continue
-                for ftype, fname in fields:
-                    offsets.append((foffsets[fname], (ftype,fname)))
-                for offset, field in sorted(offsets):
-                    f.seek(offset)
-                    tr[field] = fpu.read_vector(f, 'd')[mask]
+                count = mask.sum()
+                for field in fields:
+                    ti = selection.pop(field)[mask]
+                    if field not in tr:
+                        dt = subset.domain.particle_field_types[field[1]]
+                        tr[field] = np.empty(size, dt)
+                    tr[field][pos:pos+count] = ti
+                pos += count
         return tr
 
+    def _read_particle_subset(self, subset, fields):
+        f = open(subset.domain.part_fn, "rb")
+        foffsets = subset.domain.particle_field_offsets
+        tr = {}
+        #for field in sorted(fields, key=lambda a:foffsets[a]):
+        for field in fields:
+            f.seek(foffsets[field[1]])
+            dt = subset.domain.particle_field_types[field[1]]
+            tr[field] = fpu.read_vector(f, dt)
+        return tr

diff -r c78f5ddaa98f189c80399bb67a4c814672371347 -r 13db4ab10eae7872ef876d7d01094b5077a6aba7 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -117,18 +117,21 @@
     >>> f = open("fort.3", "rb")
     >>> rv = read_vector(f, 'd')
     """
-    fmt = endian+"%s" % d
-    size = struct.calcsize(fmt)
-    padfmt = endian + "I"
-    padsize = struct.calcsize(padfmt)
-    length = struct.unpack(padfmt,f.read(padsize))[0]
-    if length % size!= 0:
+    pad_fmt = "%sI" % (endian)
+    pad_size = struct.calcsize(pad_fmt)
+    vec_len = struct.unpack(pad_fmt,f.read(pad_size))[0] # bytes
+    vec_fmt = "%s%s" % (endian, d)
+    vec_size = struct.calcsize(vec_fmt)
+    if vec_len % vec_size != 0:
         print "fmt = '%s' ; length = %s ; size= %s" % (fmt, length, size)
         raise RuntimeError
-    count = length/ size
-    tr = np.fromfile(f,fmt,count=count)
-    length2= struct.unpack(padfmt,f.read(padsize))[0]
-    assert(length == length2)
+    vec_num = vec_len / vec_size
+    if isinstance(f, file): # Needs to be explicitly a file
+        tr = np.fromfile(f, vec_fmt, count=vec_num)
+    else:
+        tr = np.fromstring(f.read(vec_len), vec_fmt, count=vec_num)
+    vec_len2 = struct.unpack(pad_fmt,f.read(pad_size))[0]
+    assert(vec_len == vec_len2)
     return tr
 
 def skip(f, n=1, endian='='):
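
As a standalone illustration of the record framing read_vector handles (a
4-byte byte count, the payload, then the same byte count again), here is a
minimal sketch; read_record is a hypothetical helper, not part of yt:

    import struct
    import numpy as np

    def read_record(f, dchar, endian='='):
        # Leading marker: record length in bytes.
        pad_fmt = endian + 'I'
        pad_size = struct.calcsize(pad_fmt)
        nbytes = struct.unpack(pad_fmt, f.read(pad_size))[0]
        data = np.fromstring(f.read(nbytes), dtype=endian + dchar)
        # Trailing marker must match the leading one.
        assert nbytes == struct.unpack(pad_fmt, f.read(pad_size))[0]
        return data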


https://bitbucket.org/yt_analysis/yt/commits/ad356961683b/
Changeset:   ad356961683b
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-08 19:46:13
Summary:     Allowing quantities to use other quantities
Affected #:  1 file

diff -r 13db4ab10eae7872ef876d7d01094b5077a6aba7 -r ad356961683b903b11dc1666ca187bf44c70b091 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -59,6 +59,7 @@
     def __call__(self, *args, **kwargs):
         e = FieldDetector(flat = True)
         e.NumberOfParticles = 1
+        e.quantities = self._data_source.quantities
         fields = e.requested
         self.func(e, *args, **kwargs)
         retvals = [ [] for i in range(self.n_ret)]


https://bitbucket.org/yt_analysis/yt/commits/05d1d1d4d88e/
Changeset:   05d1d1d4d88e
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-08 21:02:24
Summary:     Removing self.func; fake data breaks the IsBound quantity
Affected #:  1 file

diff -r ad356961683b903b11dc1666ca187bf44c70b091 -r 05d1d1d4d88e926746425c3b8731aa3f686798a3 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -59,9 +59,7 @@
     def __call__(self, *args, **kwargs):
         e = FieldDetector(flat = True)
         e.NumberOfParticles = 1
-        e.quantities = self._data_source.quantities
         fields = e.requested
-        self.func(e, *args, **kwargs)
         retvals = [ [] for i in range(self.n_ret)]
         chunks = self._data_source.chunks([], chunking_style="io")
         for ds in parallel_objects(chunks, -1):
@@ -334,7 +332,7 @@
 
     if (include_particles):
         mass_to_use = data["TotalMass"]
-        kinetic += 0.5 * (data["Dark_Matter_Mass"] *
+        kinetic += 0.5 * (data["particle_mass"] *
                           ((data["cic_particle_velocity_x"] - bv_x)**2 +
                            (data["cic_particle_velocity_y"] - bv_y)**2 +
                            (data["cic_particle_velocity_z"] - bv_z)**2)).sum()


https://bitbucket.org/yt_analysis/yt/commits/44991b2d5f00/
Changeset:   44991b2d5f00
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-09 20:08:28
Summary:     First draft of oct deposit
Affected #:  3 files

diff -r 05d1d1d4d88e926746425c3b8731aa3f686798a3 -r 44991b2d5f009aafa4a6dfee00bb7f64ff9c714b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -120,7 +120,7 @@
             else:
                 domain._read_amr_level(self.oct_handler)
 
-    def _detect_fields(self):
+    def _detect_fields_original(self):
         self.particle_field_list = particle_fields
         self.field_list = set(fluid_fields + particle_fields +
                               particle_star_fields)
@@ -135,6 +135,30 @@
         else:
             self.parameter_file.particle_types = []
 
+    def _detect_fields(self):
+        #populate particle_field list and field_list
+        self.field_list = [('gas',f) for f in fluid_fields]
+        if "wspecies" in self.parameter_file.parameters.keys():
+            particle_field_list = [f for f in particle_fields]
+            self.parameter_file.particle_types = ["all", "darkmatter"]
+            if self.parameter_file.file_particle_stars:
+                particle_field_list += particle_star_fields
+                self.parameter_file.particle_types.append("stars")
+            wspecies = self.parameter_file.parameters['wspecies']
+            nspecies = len(wspecies)
+            for specie in range(nspecies):
+                self.parameter_file.particle_types.append("specie%i" % specie)
+            self.particle_field_list = particle_field_list 
+            #maybe change this to (type, name) format?
+        else:
+            self.particle_field_list = []
+            self.parameter_file.particle_types = []
+        for particle_type in self.parameter_file.particle_types:
+            for particle_field in self.particle_field_list:
+                self.field_list.append([particle_type, particle_field])
+                self.field_list.append(["deposit_"+particle_type, 
+                                        particle_field])
+
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
         super(ARTGeometryHandler, self)._setup_classes(dd)
@@ -473,6 +497,23 @@
             widths[:, i] = base_dx[i] / dds
         return widths
 
+    def deposit_particle_fields(self, ppos, pdata):
+        """
+        Given the x,y,z,particle_field data, do a particle deposition
+        using the oct_handler to accumulate values. We look up the particle
+        position again for every field, so this is inefficient
+        """
+        import pdb; pdb.set_trace()
+        fields = pdata.keys()
+        filled = {}
+        for field in fields:
+            dest = np.zeros(self.cell_count, 'float64')-1.
+            level = self.domain_level
+            self.oct_handler.deposit_particle_cumsum(ppos, pdata, self.mask, dest,
+                                                fields, self.domain.domain_id)
+            filled[field] = dest
+        return filled 
+
     def fill_root(self, content, ftfields):
         """
         This is called from IOHandler. It takes content
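
Conceptually, the "cumsum" deposit sums each particle's field value into its
host cell; an equivalent pure-numpy sketch (deposit_sum is a hypothetical
stand-in for the Cython path):

    import numpy as np

    def deposit_sum(cell_index, pdata, ncells):
        # Sum particle values into their host cells.
        return np.bincount(cell_index, weights=pdata, minlength=ncells)

    # Particles in cells [0, 2, 2] with masses [1., 2., 3.] and ncells=4
    # give array([ 1.,  0.,  5.,  0.]).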

diff -r 05d1d1d4d88e926746425c3b8731aa3f686798a3 -r 44991b2d5f009aafa4a6dfee00bb7f64ff9c714b yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -55,16 +55,50 @@
                 f = open(subset.domain.pf._file_amr, "rb")
                 # This contains the boundary information, so we skim through
                 # and pick off the right vectors
-                if subset.domain_level == 0:
-                    rv = subset.fill_root(f, fields)
-                else:
-                    rv = subset.fill_level(f, fields)
+                if ft == 'gas':
+                    if subset.domain_level == 0:
+                        rv = subset.fill_root(f, fields)
+                    else:
+                        rv = subset.fill_level(f, fields)
+                elif ft.startswith('deposit'):
+                    import pdb; pdb.set_trace()
+                    # We will find all particles in this chunk and deposit
+                    # this means the particles will be read again for every 
+                    # hydro chunk, unfortunately
+                    # This is also complicated because particle deposition
+                    # is ideally a derived field formed from the position
+                    # fields plus another particle field, but we are treating
+                    # it like a first class native field
+                    if ft == "deposit":
+                        ft = "particle"
+                    else:
+                        #accept "deposit_stars" field for example
+                        ft = ft.replace("deposit_","")
+                    mylog.debug("Deposit L%i particles", 
+                                subset.domain_level)
+                    coords = [(ft, 'particle_position_%s'%ax ) for ax \
+                                in 'xyz']
+                    fnames = [(ft, f) for oft,f in fields ]
+                    pfields = [c for c in coords]
+                    for f in fnames:
+                        if f in pfields: 
+                            continue
+                        pfields.append(f)
+                    pdata = self._read_particle_selection(chunk,selector,
+                                                          pfields)
+                    x, y, z = (pdata[c] for c in coords)
+                    ppos = np.array([x,y,z]).T
+                    del x,y,z
+                    for c in coords:
+                        if c not in fnames:
+                            del pdata[c]
+                    rv = subset.deposit_particle_fields(ppos, pdata)
                 for ft, f in fields:
-                    mylog.debug("Filling L%i %s with %s (%0.3e %0.3e) (%s:%s)",
-                                subset.domain_level,
-                                f, subset.cell_count, rv[f].min(), rv[f].max(),
-                                cp, cp+subset.cell_count)
-                    tr[(ft, f)][cp:cp+subset.cell_count] = rv.pop(f)
+                        mylog.debug("Fill L%i %s with %s (%0.3e %0.3e) (%s:%s)",
+                                    subset.domain_level, f, subset.cell_count, 
+                                    rv[f].min(), rv[f].max(),
+                                    cp, cp+subset.cell_count)
+                        tr[(ft, f)][cp:cp+subset.cell_count] = rv.pop(f)
                 cp += subset.cell_count
         return tr
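
Schematically, the branch above recovers a particle type from a "deposit"
field type, with a bare "deposit" mapping to the generic "particle" type:

    # Sketch of the field-type unmangling performed above.
    for ft in ("deposit", "deposit_stars", "deposit_darkmatter"):
        ptype = "particle" if ft == "deposit" else ft.replace("deposit_", "")
        # -> "particle", "stars", "darkmatter"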
 

diff -r 05d1d1d4d88e926746425c3b8731aa3f686798a3 -r 44991b2d5f009aafa4a6dfee00bb7f64ff9c714b yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -167,6 +167,44 @@
             cur = cur.children[ind[0]][ind[1]][ind[2]]
         return cur
 
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef Oct *get_octant(self, ppos):
+        # This does a bit more than the built in get() function
+        # by also computing the index of the octant the point is in
+        cdef np.int64_t ind[3]
+        cdef np.float64_t dds[3], cp[3], pp[3]
+        cdef Oct *cur
+        cdef int i
+        cdef int ii
+        for i in range(3):
+            pp[i] = ppos[i] - self.DLE[i]
+            dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
+            cp[i] = (ind[i] + 0.5) * dds[i]
+        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        while cur.children[0][0][0] != NULL:
+            for i in range(3):
+                dds[i] = dds[i] / 2.0
+                if cp[i] > pp[i]:
+                    ind[i] = 0
+                    cp[i] -= dds[i] / 2.0
+                else:
+                    ind[i] = 1
+                    cp[i] += dds[i]/2.0
+            cur = cur.children[ind[0]][ind[1]][ind[2]]
+        for i in range(3):
+            dds[i] = dds[i] / 2.0
+            if cp[i] > pp[i]:
+                ind[i] = 0
+                cp[i] -= dds[i] / 2.0
+            else:
+                ind[i] = 1
+                cp[i] += dds[i]/2.0
+        ii = ((ind[2]*2)+ind[1])*2+ind[0]
+        return cur, ii 
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -887,6 +925,26 @@
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
     #this class is specifically for the NMSU ART
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def deposit_particle_cumsum(np.ndarray[np.float64_t, ndim=3] ppos, 
+                                np.ndarray[np.float64_t, ndim=1] pdata,
+                                np.ndarray[np.float64_t, ndim=1] mask,
+                                np.ndarray[np.float64_t, ndim=1] dest,
+                                fields, int domain):
+        cdef Oct *o
+        cdef OctAllocationContainer *dom = self.domains[domain - 1]
+        cdef np.float64_t pos[3]
+        cdef int ii
+        cdef int no = ppos.shape[0]
+        for n in range(no):
+            pos = ppos[n]
+            *o, ii = dom.get_octant(pos) 
+            if mask[o.local_ind,ii]==0: continue
+            dest[o.ind+ii] += pdata[n]
+        return dest
+
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
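
The closing index computation packs the three per-axis child selections (each
0 or 1) into a single octant id in 0..7; in plain Python:

    def octant_index(ix, iy, iz):
        # Mirrors ii = ((ind[2]*2) + ind[1])*2 + ind[0] above:
        # z supplies the most significant bit, x the least.
        return (iz * 2 + iy) * 2 + ix

    assert octant_index(1, 0, 0) == 1
    assert octant_index(0, 1, 0) == 2
    assert octant_index(0, 0, 1) == 4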


https://bitbucket.org/yt_analysis/yt/commits/48a93884ff89/
Changeset:   48a93884ff89
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 01:28:37
Summary:     Merge
Affected #:  4 files

diff -r ae0003cdf0a5c5c11d3722d37796c67b0b84428a -r 48a93884ff89a3623b166185a943f56182774db7 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -167,6 +167,44 @@
             cur = cur.children[ind[0]][ind[1]][ind[2]]
         return cur
 
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef Oct *get_octant(self, ppos):
+        # This does a bit more than the built in get() function
+        # by also computing the index of the octant the point is in
+        cdef np.int64_t ind[3]
+        cdef np.float64_t dds[3], cp[3], pp[3]
+        cdef Oct *cur
+        cdef int i
+        cdef int ii
+        for i in range(3):
+            pp[i] = ppos[i] - self.DLE[i]
+            dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
+            cp[i] = (ind[i] + 0.5) * dds[i]
+        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        while cur.children[0][0][0] != NULL:
+            for i in range(3):
+                dds[i] = dds[i] / 2.0
+                if cp[i] > pp[i]:
+                    ind[i] = 0
+                    cp[i] -= dds[i] / 2.0
+                else:
+                    ind[i] = 1
+                    cp[i] += dds[i]/2.0
+            cur = cur.children[ind[0]][ind[1]][ind[2]]
+        for i in range(3):
+            dds[i] = dds[i] / 2.0
+            if cp[i] > pp[i]:
+                ind[i] = 0
+                cp[i] -= dds[i] / 2.0
+            else:
+                ind[i] = 1
+                cp[i] += dds[i]/2.0
+        ii = ((ind[2]*2)+ind[1])*2+ind[0]
+        return cur, ii 
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -887,6 +925,26 @@
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
     #this class is specifically for the NMSU ART
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def deposit_particle_cumsum(np.ndarray[np.float64_t, ndim=3] ppos, 
+                                np.ndarray[np.float64_t, ndim=1] pdata,
+                                np.ndarray[np.float64_t, ndim=1] mask,
+                                np.ndarray[np.float64_t, ndim=1] dest,
+                                fields, int domain):
+        cdef Oct *o
+        cdef OctAllocationContainer *dom = self.domains[domain - 1]
+        cdef np.float64_t pos[3]
+        cdef int ii
+        cdef int no = ppos.shape[0]
+        for n in range(no):
+            pos = ppos[n]
+            *o, ii = dom.get_octant(pos) 
+            if mask[o.local_ind,ii]==0: continue
+            dest[o.ind+ii] += pdata[n]
+        return dest
+
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)


https://bitbucket.org/yt_analysis/yt/commits/897796b0b4aa/
Changeset:   897796b0b4aa
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 05:41:57
Summary:     Commenting the SPH kernel code
Affected #:  1 file

diff -r 48a93884ff89a3623b166185a943f56182774db7 -r 897796b0b4aabc85319cc561d1a252898ece2555 yt/frontends/sph/smoothing_kernel.pyx
--- a/yt/frontends/sph/smoothing_kernel.pyx
+++ b/yt/frontends/sph/smoothing_kernel.pyx
@@ -53,21 +53,28 @@
     for p in range(ngas):
         kernel_sum[p] = 0.0
         skip = 0
+        # Find the # of cells of the kernel
         for i in range(3):
             pos[i] = ppos[p, i]
+            # Get particle root grid integer index
             ind[i] = <int>((pos[i] - left_edge[i]) / dds[i])
+            # Number of root-grid cells the smoothing length spans, plus one
             half_len = <int>(hsml[p]/dds[i]) + 1
+            # Left and right integer indices of the smoothing range
+            # If the smoothing length is small, both may fall in the same bin
             ib0[i] = ind[i] - half_len
             ib1[i] = ind[i] + half_len
             #pos[i] = ppos[p, i] - left_edge[i]
             #ind[i] = <int>(pos[i] / dds[i])
             #ib0[i] = <int>((pos[i] - hsml[i]) / dds[i]) - 1
             #ib1[i] = <int>((pos[i] + hsml[i]) / dds[i]) + 1
+            # Skip if outside our root grid
             if ib0[i] >= dims[i] or ib1[i] < 0:
                 skip = 1
             ib0[i] = iclip(ib0[i], 0, dims[i] - 1)
             ib1[i] = iclip(ib1[i], 0, dims[i] - 1)
         if skip == 1: continue
+        # Having found the kernel shape, calculate the kernel weight
         for i from ib0[0] <= i <= ib1[0]:
             idist[0] = (ind[0] - i) * (ind[0] - i) * sdds[0]
             for j from ib0[1] <= j <= ib1[1]:
@@ -75,10 +82,14 @@
                 for k from ib0[2] <= k <= ib1[2]:
                     idist[2] = (ind[2] - k) * (ind[2] - k) * sdds[2]
                     dist = idist[0] + idist[1] + idist[2]
+                    # Calculate distance in multiples of the smoothing length
                     dist = sqrt(dist) / hsml[p]
+                    # Kernel is 3D but save the elements in a 1D array
                     gi = ((i * dims[1] + j) * dims[2]) + k
                     pdist[gi] = sph_kernel(dist)
+                    # Save sum to normalize later
                     kernel_sum[p] += pdist[gi]
+        # Having found the kernel, deposit accordingly into gdata
         for i from ib0[0] <= i <= ib1[0]:
             for j from ib0[1] <= j <= ib1[1]:
                 for k from ib0[2] <= k <= ib1[2]:
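
A one-dimensional walk-through of the span bookkeeping described in the new
comments, using hypothetical values:

    # Hypothetical 1D example of the kernel-span computation above.
    pos, left_edge, dds, hsml, dims = 0.53, 0.0, 0.1, 0.25, 10
    ind = int((pos - left_edge) / dds)   # particle's root-grid cell: 5
    half_len = int(hsml / dds) + 1       # cells spanned per side: 3
    ib0 = max(ind - half_len, 0)         # clipped left bound: 2
    ib1 = min(ind + half_len, dims - 1)  # clipped right bound: 8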


https://bitbucket.org/yt_analysis/yt/commits/39377274e8c8/
Changeset:   39377274e8c8
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 05:42:47
Summary:     Wrote pseudocode for oct deposit
Affected #:  1 file

diff -r 897796b0b4aabc85319cc561d1a252898ece2555 -r 39377274e8c8c41d545f3259bc1ab168670fe31f yt/geometry/oct_deposit.pyx
--- /dev/null
+++ b/yt/geometry/oct_deposit.pyx
@@ -0,0 +1,68 @@
+"""
+Particle Deposition onto Octs
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free, qsort
+from libc.math cimport floor
+cimport numpy as np
+import numpy as np
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+cimport cython
+
+cdef np.float64_t kernel_sph(np.float64_t x) nogil:
+    cdef np.float64_t kernel
+    if x <= 0.5:
+        kernel = 1.-6.*x*x*(1.-x)
+    elif x>0.5 and x<=1.0:
+        kernel = 2.*(1.-x)*(1.-x)*(1.-x)
+    else:
+        kernel = 0.
+    return kernel
+
+#modes = count, sum, diff
+modes = {'count': opt_count, 'sum': opt_sum, 'diff': opt_diff}
+selections = {'direct': select_nearest, 'cic': select_radius}
+kernels = {'unitary': kernel_unitary, 'sph': kernel_sph}
+cdef deposit_direct(oct_handler, 
+        np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
+        np.ndarray[np.float64_t, ndim=2] pd, # particle fields
+        np.ndarray[np.float64_t, ndim=1] pr, # particle radius
+        np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
+        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
+        mode='count', selection='direct', kernel='sph'):
+    fopt = modes[mode]
+    fsel = selections[selection]
+    fker = kernels[kernel]
+    for pi in np.arange(particles):
+        octs = fsel(oct_handler, pr[pi])
+        for oct in octs:
+            w = fker(pr[pi],oct) 
+            weights.append(w)
+        norm = weights.sum()
+        for w, oct in zip(weights, octs):
+            fopt(pd[pi], w/norm, oct.index, data_in, data_out)
+
+


https://bitbucket.org/yt_analysis/yt/commits/c950797b59b9/
Changeset:   c950797b59b9
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 05:44:02
Summary:     Realized octs have cells
Affected #:  1 file

diff -r 39377274e8c8c41d545f3259bc1ab168670fe31f -r c950797b59b91e6ccbd4b1417d3ac8e81f783caa yt/geometry/oct_deposit.pyx
--- a/yt/geometry/oct_deposit.pyx
+++ b/yt/geometry/oct_deposit.pyx
@@ -59,10 +59,12 @@
     for pi in np.arange(particles):
         octs = fsel(oct_handler, pr[pi])
         for oct in octs:
-            w = fker(pr[pi],oct) 
-            weights.append(w)
+            for cell in oct.cells:
+                w = fker(pr[pi],cell) 
+                weights.append(w)
         norm = weights.sum()
         for w, oct in zip(weights, octs):
-            fopt(pd[pi], w/norm, oct.index, data_in, data_out)
+            for cell in oct.cells:
+                fopt(pd[pi], w/norm, oct.index, data_in, data_out)
 
 


https://bitbucket.org/yt_analysis/yt/commits/d298743d7787/
Changeset:   d298743d7787
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 08:05:59
Summary:     Added kernel function, deposit operations
Affected #:  1 file

diff -r c950797b59b91e6ccbd4b1417d3ac8e81f783caa -r d298743d77873b4db7326165b2e5f42bf1e5a885 yt/geometry/oct_deposit.pyx
--- a/yt/geometry/oct_deposit.pyx
+++ b/yt/geometry/oct_deposit.pyx
@@ -25,13 +25,77 @@
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
 
-from libc.stdlib cimport malloc, free, qsort
-from libc.math cimport floor
+from libc.stdlib cimport malloc, free
 cimport numpy as np
 import numpy as np
-from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
 cimport cython
 
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+# Mode functions
+ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)
+cdef np.float64_t opt_count(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += 1.0
+
+cdef np.float64_t opt_sum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata 
+
+cdef np.float64_t opt_diff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) 
+
+cdef np.float64_t opt_wcount(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += weight
+
+cdef np.float64_t opt_wsum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata * weight
+
+cdef np.float64_t opt_wdiff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) * weight
+
+# Selection functions
+ctypedef NOTSURE (*type_sel)(OctreeContainer, 
+                                np.ndarray[np.float64_t, ndim=1],
+                                np.float64_t)
+cdef NOTSURE select_nearest(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return only the nearest oct
+    pass
+
+
+cdef NOTSURE select_radius(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return a list of octs within the radius
+    pass
+    
+
+# Kernel functions
+ctypedef np.float64_t (*type_ker)(np.float64_t)
 cdef np.float64_t kernel_sph(np.float64_t x) nogil:
     cdef np.float64_t kernel
     if x <= 0.5:
@@ -42,22 +106,46 @@
         kernel = 0.
     return kernel
 
-#modes = count, sum, diff
-modes = {'count': opt_count, 'sum': opt_sum, 'diff': opt_diff}
-selections = {'direct': select_nearest, 'cic': select_radius}
-kernels = {'unitary': kernel_unitary, 'sph': kernel_sph}
-cdef deposit_direct(oct_handler, 
+cdef np.float64_t kernel_null(np.float64_t x) nogil: return 1.0
+
+cdef deposit(OctreeContainer oct_handler, 
         np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
         np.ndarray[np.float64_t, ndim=2] pd, # particle fields
         np.ndarray[np.float64_t, ndim=1] pr, # particle radius
         np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
         np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
-        mode='count', selection='direct', kernel='sph'):
-    fopt = modes[mode]
-    fsel = selections[selection]
-    fker = kernels[kernel]
-    for pi in np.arange(particles):
-        octs = fsel(oct_handler, pr[pi])
+        mode='count', selection='nearest', kernel='null'):
+    cdef type_opt fopt
+    cdef type_sel fsel
+    cdef type_ker fker
+    cdef long pi #particle index
+    cdef long nocts #number of octs in selection
+    cdef Oct oct 
+    cdef np.float64_t w
+    # Can we do this with dicts?
+    # Setup the function pointers
+    if mode == 'count':
+        fopt = opt_count
+    elif mode == 'sum':
+        fopt = opt_sum
+    elif mode == 'diff':
+        fopt = opt_diff
+    elif mode == 'wcount':
+        fopt = opt_wcount
+    elif mode == 'wsum':
+        fopt = opt_wsum
+    elif mode == 'wdiff':
+        fopt = opt_wdiff
+    if selection == 'nearest':
+        fsel = select_nearest
+    elif selection == 'radius':
+        fsel = select_radius
+    if kernel == 'null':
+        fker = kernel_null
+    if kernel == 'sph':
+        fker = kernel_sph
+    for pi in range(particles):
+        octs = fsel(oct_handler, ppos[pi], pr[pi])
         for oct in octs:
             for cell in oct.cells:
                 w = fker(pr[pi],cell) 
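
On the "Can we do this with dicts?" question: C-level function pointers cannot
be stored in a Python dict without wrapper objects, so the if/elif chain is
the usual Cython idiom. At pure-Python speed the dict version is
straightforward; a sketch:

    import numpy as np

    # Pure-Python sketch of the mode dispatch; the Cython version keeps
    # C function pointers, which cannot live in a Python dict directly.
    def opt_count(pdata, weight, index, data_out, data_in):
        data_out[index] += 1.0

    def opt_sum(pdata, weight, index, data_out, data_in):
        data_out[index] += pdata

    modes = {'count': opt_count, 'sum': opt_sum}
    fopt = modes['sum']
    data_out = np.zeros(4)
    fopt(3.5, 1.0, 2, data_out, None)    # data_out -> [0., 0., 3.5, 0.]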


https://bitbucket.org/yt_analysis/yt/commits/01481a357802/
Changeset:   01481a357802
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-17 08:10:53
Summary:     Changing in/out order
Affected #:  1 file

diff -r d298743d77873b4db7326165b2e5f42bf1e5a885 -r 01481a3578024cd054fce0a3ca9d5fa5198ea244 yt/geometry/oct_deposit.pyx
--- a/yt/geometry/oct_deposit.pyx
+++ b/yt/geometry/oct_deposit.pyx
@@ -112,8 +112,8 @@
         np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
         np.ndarray[np.float64_t, ndim=2] pd, # particle fields
         np.ndarray[np.float64_t, ndim=1] pr, # particle radius
+        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
         np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
-        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
         mode='count', selection='nearest', kernel='null'):
     cdef type_opt fopt
     cdef type_sel fsel


https://bitbucket.org/yt_analysis/yt/commits/b65117530f45/
Changeset:   b65117530f45
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-17 20:17:19
Summary:     Consolidate get and get_octant and make code compileable.
Affected #:  2 files

diff -r 01481a3578024cd054fce0a3ca9d5fa5198ea244 -r b65117530f45a23fe46db5f16ec0dedfd838df9d yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -54,7 +54,7 @@
     cdef np.float64_t DLE[3], DRE[3]
     cdef public int nocts
     cdef public int max_domain
-    cdef Oct* get(self, ppos)
+    cdef Oct* get(self, np.float64_t ppos[3], int *ii = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
 

diff -r 01481a3578024cd054fce0a3ca9d5fa5198ea244 -r b65117530f45a23fe46db5f16ec0dedfd838df9d yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -142,7 +142,7 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef Oct *get(self, ppos):
+    cdef Oct *get(self, np.float64_t ppos[3], int *ii = NULL):
         #Given a floating point position, retrieve the most
         #refined oct at that time
         cdef np.int64_t ind[3]
@@ -165,45 +165,14 @@
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
             cur = cur.children[ind[0]][ind[1]][ind[2]]
-        return cur
-
-    @cython.boundscheck(True)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *get_octant(self, ppos):
-        # This does a bit more than the built in get() function
-        # by also computing the index of the octant the point is in
-        cdef np.int64_t ind[3]
-        cdef np.float64_t dds[3], cp[3], pp[3]
-        cdef Oct *cur
-        cdef int i
-        cdef int ii
+        if ii != NULL: return cur
         for i in range(3):
-            pp[i] = ppos[i] - self.DLE[i]
-            dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
-            cp[i] = (ind[i] + 0.5) * dds[i]
-        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
-        while cur.children[0][0][0] != NULL:
-            for i in range(3):
-                dds[i] = dds[i] / 2.0
-                if cp[i] > pp[i]:
-                    ind[i] = 0
-                    cp[i] -= dds[i] / 2.0
-                else:
-                    ind[i] = 1
-                    cp[i] += dds[i]/2.0
-            cur = cur.children[ind[0]][ind[1]][ind[2]]
-        for i in range(3):
-            dds[i] = dds[i] / 2.0
             if cp[i] > pp[i]:
                 ind[i] = 0
-                cp[i] -= dds[i] / 2.0
             else:
                 ind[i] = 1
-                cp[i] += dds[i]/2.0
-        ii = ((ind[2]*2)+ind[1])*2+ind[0]
-        return cur, ii 
+        ii[0] = ((ind[2]*2)+ind[1])*2+ind[0]
+        return cur
 
     @cython.boundscheck(False)
     @cython.wraparound(False)
@@ -298,14 +267,17 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def get_neighbor_boundaries(self, ppos):
+    def get_neighbor_boundaries(self, oppos):
+        cdef int i, ii
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
         cdef np.ndarray[np.float64_t, ndim=2] bounds
         cdef np.float64_t corner[3], size[3]
         bounds = np.zeros((27,6), dtype="float64")
-        cdef int i, ii
         tnp = 0
         for i in range(27):
             self.oct_bounds(neighbors[i], corner, size)
@@ -928,7 +900,8 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def deposit_particle_cumsum(np.ndarray[np.float64_t, ndim=3] ppos, 
+    def deposit_particle_cumsum(self,
+                                np.ndarray[np.float64_t, ndim=2] ppos, 
                                 np.ndarray[np.float64_t, ndim=1] pdata,
                                 np.ndarray[np.float64_t, ndim=1] mask,
                                 np.ndarray[np.float64_t, ndim=1] dest,
@@ -939,8 +912,9 @@
         cdef int ii
         cdef int no = ppos.shape[0]
         for n in range(no):
-            pos = ppos[n]
-            *o, ii = dom.get_octant(pos) 
+            for j in range(3):
+                pos[j] = ppos[n,j]
+            o = self.get(pos, &ii) 
             if mask[o.local_ind,ii]==0: continue
             dest[o.ind+ii] += pdata[n]
         return dest
@@ -1435,12 +1409,15 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def count_neighbor_particles(self, ppos):
+    def count_neighbor_particles(self, oppos):
         #How many particles are in my neighborhood
+        cdef int i, ni, dl, tnp
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
-        cdef int i, ni, dl, tnp
         tnp = 0
         for i in range(27):
             if neighbors[i].sd != NULL:


https://bitbucket.org/yt_analysis/yt/commits/3b6dbcc91214/
Changeset:   3b6dbcc91214
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-17 21:20:05
Summary:     Stub of deposit function.
Affected #:  1 file

diff -r b65117530f45a23fe46db5f16ec0dedfd838df9d -r 3b6dbcc91214a4ada89232c502a2fa0690fb95af yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -513,6 +513,11 @@
                         if f not in fields_to_generate:
                             fields_to_generate.append(f)
 
+    def deposit(self, positions, fields, op):
+        assert(self._current_chunk.chunk_type == "spatial")
+        fields = ensure_list(fields)
+        self.hierarchy._deposit_particle_fields(self, positions, fields, op)
+
     @contextmanager
     def _field_lock(self):
         self._locked = True
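
The intended call pattern for the stub (hypothetical, since
_deposit_particle_fields is not defined in this changeset) would be along the
lines of:

    # Hypothetical usage of the deposit stub above, from within a
    # spatial chunk:
    #   dobj.deposit(particle_positions, ["particle_mass"], "sum")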


https://bitbucket.org/yt_analysis/yt/commits/d48a016b4b8c/
Changeset:   d48a016b4b8c
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-04-17 07:09:46
Summary:     Typo fix in particle selection for Enzo 3.0
Affected #:  1 file

diff -r ae0003cdf0a5c5c11d3722d37796c67b0b84428a -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 yt/frontends/enzo/io.py
--- a/yt/frontends/enzo/io.py
+++ b/yt/frontends/enzo/io.py
@@ -50,7 +50,6 @@
         return (exceptions.KeyError, hdf5_light_reader.ReadingError)
 
     def _read_particle_selection_by_type(self, chunks, selector, fields):
-        # Active particles don't have the particle_ prefix.
         rv = {}
         ptypes = list(set([ftype for ftype, fname in fields]))
         fields = list(set(fields))
@@ -94,7 +93,7 @@
         # Now we have to do something unpleasant
         if any((ftype != "all" for ftype, fname in fields)):
             type_fields = [(ftype, fname) for ftype, fname in fields
-                           if ftype != all]
+                           if ftype != "all"]
             rv.update(self._read_particle_selection_by_type(
                       chunks, selector, type_fields))
             if len(rv) == len(fields): return rv
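
The fix matters because the bare name all is the Python builtin function, so
the original comparison held for every ftype:

    >>> "all" != all     # a string compared against the builtin function
    True
    >>> "all" != "all"
    False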


https://bitbucket.org/yt_analysis/yt/commits/c5eb6de911ff/
Changeset:   c5eb6de911ff
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-17 22:58:09
Summary:     Adding OctreeSubset and make ART (not ARTIO) and RAMSES subclass this.

This is Phase 1.  Phase 2 will be making OctreeSubset similar to GridPatch in
that it can return __getitem__ and so on.  Phase 3 will be porting this
behavior to ARTIO and the SPH codes.
Affected #:  4 files

diff -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 -r c5eb6de911ff807044c607a0a01d75f265beb007 yt/data_objects/api.py
--- a/yt/data_objects/api.py
+++ b/yt/data_objects/api.py
@@ -31,6 +31,9 @@
 from grid_patch import \
     AMRGridPatch
 
+from octree_subset import \
+    OctreeSubset
+
 from static_output import \
     StaticOutput
 

diff -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 -r c5eb6de911ff807044c607a0a01d75f265beb007 yt/data_objects/octree_subset.py
--- /dev/null
+++ b/yt/data_objects/octree_subset.py
@@ -0,0 +1,65 @@
+"""
+Subsets of octrees
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+import numpy as np
+
+class OctreeSubset(object):
+    def __init__(self, domain, mask, cell_count):
+        self.mask = mask
+        self.domain = domain
+        self.oct_handler = domain.pf.h.oct_handler
+        self.cell_count = cell_count
+        level_counts = self.oct_handler.count_levels(
+            self.domain.pf.max_level, self.domain.domain_id, mask)
+        assert(level_counts.sum() == cell_count)
+        level_counts[1:] = level_counts[:-1]
+        level_counts[0] = 0
+        self.level_counts = np.add.accumulate(level_counts)
+
+    def select_icoords(self, dobj):
+        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fcoords(self, dobj):
+        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fwidth(self, dobj):
+        # Recall domain_dimensions is the number of cells, not octs
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
+        widths = np.empty((self.cell_count, 3), dtype="float64")
+        dds = (2**self.select_ires(dobj))
+        for i in range(3):
+            widths[:,i] = base_dx[i] / dds
+        return widths
+
+    def select_ires(self, dobj):
+        return self.oct_handler.ires(self.domain.domain_id, self.mask,
+                                     self.cell_count,
+                                     self.level_counts.copy())
+

diff -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 -r c5eb6de911ff807044c607a0a01d75f265beb007 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.geometry.oct_container import \
     ARTOctreeContainer
 from yt.data_objects.field_info_container import \
@@ -434,7 +436,7 @@
         return False
 
 
-class ARTDomainSubset(object):
+class ARTDomainSubset(OctreeSubset):
     def __init__(self, domain, mask, cell_count, domain_level):
         self.mask = mask
         self.domain = domain

diff -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 -r c5eb6de911ff807044c607a0a01d75f265beb007 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -35,6 +35,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 
 from .definitions import ramses_header
 from yt.utilities.definitions import \
@@ -252,43 +254,7 @@
         self.select(selector)
         return self.count(selector)
 
-class RAMSESDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
+class RAMSESDomainSubset(OctreeSubset):
 
     def fill(self, content, fields):
         # Here we get a copy of the file, which we skip through and read the


https://bitbucket.org/yt_analysis/yt/commits/57c67f583c86/
Changeset:   57c67f583c86
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-17 23:50:07
Summary:     Begin the process of subclassing OctreeSubset as YTSelectionContainer
Affected #:  4 files

diff -r c5eb6de911ff807044c607a0a01d75f265beb007 -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -25,10 +25,34 @@
 
 import numpy as np
 
-class OctreeSubset(object):
+from yt.data_objects.data_containers import \
+    YTFieldData, \
+    YTDataContainer, \
+    YTSelectionContainer
+from .field_info_container import \
+    NeedsGridType, \
+    NeedsOriginalGrid, \
+    NeedsDataField, \
+    NeedsProperty, \
+    NeedsParameter
+
+class OctreeSubset(YTSelectionContainer):
+    _spatial = True
+    _num_ghost_zones = 0
+    _num_zones = 2
+    _type_name = 'octree_subset'
+    _skip_add = True
+    _con_args = ('domain', 'mask', 'cell_count')
+    _container_fields = ("dx", "dy", "dz")
+
     def __init__(self, domain, mask, cell_count):
+        self.field_data = YTFieldData()
+        self.field_parameters = {}
         self.mask = mask
+        self.n_oct = mask.shape[0]
         self.domain = domain
+        self.pf = domain.pf
+        self.hierarchy = self.pf.hierarchy
         self.oct_handler = domain.pf.h.oct_handler
         self.cell_count = cell_count
         level_counts = self.oct_handler.count_levels(
@@ -37,6 +61,8 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
+        self._current_particle_type = 'all'
+        self._current_fluid_type = self.pf.default_fluid_type
 
     def select_icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
@@ -63,3 +89,18 @@
                                      self.cell_count,
                                      self.level_counts.copy())
 
+    def __getitem__(self, key):
+        tr = super(OctreeSubset, self).__getitem__(key)
+        import pdb; pdb.set_trace()
+        try:
+            fields = self._determine_fields(key)
+        except YTFieldTypeNotFound:
+            return tr
+        finfo = self.pf._get_field_info(*fields[0])
+        if not finfo.particle_type:
+            nz = self._num_zones + 2*self._num_ghost_zones
+            dest_shape = (nz, nz, nz, self.n_oct)
+            return tr.reshape(dest_shape)
+        return tr
+
+

diff -r c5eb6de911ff807044c607a0a01d75f265beb007 -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -435,43 +435,10 @@
                 return False
         return False
 
-
 class ARTDomainSubset(OctreeSubset):
     def __init__(self, domain, mask, cell_count, domain_level):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
+        super(ARTDomainSubset, self).__init__(domain, mask, cell_count)
         self.domain_level = domain_level
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        base_dx = 1.0/self.domain.pf.domain_dimensions
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:, i] = base_dx[i] / dds
-        return widths
 
     def fill_root(self, content, ftfields):
         """

diff -r c5eb6de911ff807044c607a0a01d75f265beb007 -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -355,8 +355,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
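
_chunk_spatial yields one single-subset chunk per non-empty oct subset;
stripped of the YTDataChunk machinery, the generator pattern is:

    # Stripped-down sketch of the spatial chunking generator above
    # (tuples stand in for YTDataChunk objects).
    def chunk_spatial(subsets):
        for og in subsets:
            size = og.cell_count
            if size == 0:
                continue
            yield ("spatial", [og], size)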

diff -r c5eb6de911ff807044c607a0a01d75f265beb007 -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1098,3 +1098,44 @@
 
 grid_selector = GridSelector
 
+cdef class OctreeSubsetSelector(SelectorObject):
+    # This is a numpy array, which will be a bool of ndim 1
+    cdef object oct_mask
+
+    def __init__(self, dobj):
+        self.oct_mask = dobj.mask
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def select_octs(self, OctreeContainer octree):
+        return self.oct_mask
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void set_bounds(self,
+                         np.float64_t left_edge[3], np.float64_t right_edge[3],
+                         np.float64_t dds[3], int ind[3][2], int *check):
+        check[0] = 0
+        return
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def select_grids(self,
+                     np.ndarray[np.float64_t, ndim=2] left_edges,
+                     np.ndarray[np.float64_t, ndim=2] right_edges,
+                     np.ndarray[np.int32_t, ndim=2] levels):
+        raise RuntimeError
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3],
+                         int eterm[3]) nogil:
+        return 1
+
+
+octree_subset_selector = OctreeSubsetSelector
+


https://bitbucket.org/yt_analysis/yt/commits/1babf3636103/
Changeset:   1babf3636103
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 12:47:07
Summary:     Add an Octree selector that enables spatial chunking for Octrees.
Affected #:  3 files

diff -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 -r 1babf36361036e3b6d4658818aee47c0a1877a7a yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -49,7 +49,6 @@
         self.field_data = YTFieldData()
         self.field_parameters = {}
         self.mask = mask
-        self.n_oct = mask.shape[0]
         self.domain = domain
         self.pf = domain.pf
         self.hierarchy = self.pf.hierarchy
@@ -91,7 +90,6 @@
 
     def __getitem__(self, key):
         tr = super(OctreeSubset, self).__getitem__(key)
-        import pdb; pdb.set_trace()
         try:
             fields = self._determine_fields(key)
         except YTFieldTypeNotFound:
@@ -99,7 +97,8 @@
         finfo = self.pf._get_field_info(*fields[0])
         if not finfo.particle_type:
             nz = self._num_zones + 2*self._num_ghost_zones
-            dest_shape = (nz, nz, nz, self.n_oct)
+            n_oct = tr.shape[0] / (nz**3.0)
+            dest_shape = (nz, nz, nz, n_oct)
             return tr.reshape(dest_shape)
         return tr
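
The reshape views the flat per-cell array as one (nz, nz, nz) block per oct; a
numpy sketch with hypothetical sizes:

    import numpy as np

    # Hypothetical sizes: 8 octs of 2x2x2 zones each.
    nz = 2
    tr = np.arange(64.0)              # flat cell data
    n_oct = tr.shape[0] // nz**3      # -> 8
    blocks = tr.reshape((nz, nz, nz, n_oct))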
 

diff -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 -r 1babf36361036e3b6d4658818aee47c0a1877a7a yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -666,6 +666,20 @@
                 count[cur.my_octs[i - cur.offset].domain - 1] += 1
         return count
 
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n, 
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            for i in range(8):
+                m2[o.local_ind, i] = mask[o.local_ind, i]
+        return m2
+
     def check(self, int curdom):
         cdef int dind, pi
         cdef Oct oct

diff -r 57c67f583c863a4f5bdc1f02185a22b130f5d859 -r 1babf36361036e3b6d4658818aee47c0a1877a7a yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1101,15 +1101,26 @@
 cdef class OctreeSubsetSelector(SelectorObject):
     # This is a numpy array, which will be a bool of ndim 1
     cdef object oct_mask
+    cdef int domain_id
 
     def __init__(self, dobj):
         self.oct_mask = dobj.mask
+        self.domain_id = dobj.domain.domain_id
 
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def select_octs(self, OctreeContainer octree):
-        return self.oct_mask
+        cdef np.ndarray[np.uint8_t, ndim=2] m2
+        m2 = octree.domain_and(self.oct_mask, self.domain_id)
+        cdef int oi, i, a
+        for oi in range(m2.shape[0]):
+            a = 0
+            for i in range(8):
+                if m2[oi, i] == 1: a = 1
+            for i in range(8):
+                m2[oi, i] = a
+        return m2.astype("bool")
 
     @cython.boundscheck(False)
     @cython.wraparound(False)

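For illustration, a minimal NumPy sketch (not yt API; inputs simplified) of what the new select_octs loop above computes: domain_and first restricts the oct mask to one domain, and then any oct with at least one selected cell is promoted to fully selected.

    import numpy as np

    def broadcast_oct_mask(m2):
        # If any of an oct's 8 cells is selected, select all 8,
        # mirroring the oi/i loops in select_octs above.
        m2 = np.asarray(m2, dtype="uint8")
        any_selected = m2.any(axis=1)                  # one flag per oct
        return np.repeat(any_selected, 8).reshape(m2.shape)

    # One selected cell in the first oct selects the whole oct.
    mask = np.zeros((2, 8), dtype="uint8")
    mask[0, 3] = 1
    print(broadcast_oct_mask(mask).astype("uint8"))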

https://bitbucket.org/yt_analysis/yt/commits/77b261f38610/
Changeset:   77b261f38610
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 13:43:28
Summary:     Adding spatial chunking to particle frontends.
Affected #:  3 files

diff -r 1babf36361036e3b6d4658818aee47c0a1877a7a -r 77b261f38610ca068b32241820f144bf82bed38f yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -59,6 +59,7 @@
     particle_types = ("all",)
     geometry = "cartesian"
     coordinates = None
+    max_level = 99
 
     class __metaclass__(type):
         def __init__(cls, name, b, d):

diff -r 1babf36361036e3b6d4658818aee47c0a1877a7a -r 77b261f38610ca068b32241820f144bf82bed38f yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 from .fields import \
@@ -70,40 +72,8 @@
     def _calculate_offsets(self, fields):
         pass
 
-class ParticleDomainSubset(object):
-    def __init__(self, domain, mask, count):
-        self.domain = domain
-        self.mask = mask
-        self.cell_count = count
-        self.oct_handler = domain.pf.h.oct_handler
-        level_counts = self.oct_handler.count_levels(
-            99, self.domain.domain_id, mask)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count)
-
+class ParticleDomainSubset(OctreeSubset):
+    pass
 
 class ParticleGeometryHandler(OctreeGeometryHandler):
 
@@ -170,8 +140,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
@@ -216,6 +194,7 @@
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
         self.cosmological_simulation = 1
+        self.periodicity = (True, True, True)
         self.current_redshift = hvals["Redshift"]
         self.omega_lambda = hvals["OmegaLambda"]
         self.omega_matter = hvals["Omega0"]
@@ -317,6 +296,7 @@
         self.domain_left_edge = np.zeros(3, "float64")
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
 
@@ -411,6 +391,7 @@
         self.domain_left_edge = np.zeros(3, "float64") - 0.5
         self.domain_right_edge = np.ones(3, "float64") + 0.5
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
 

diff -r 1babf36361036e3b6d4658818aee47c0a1877a7a -r 77b261f38610ca068b32241820f144bf82bed38f yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -1055,7 +1055,8 @@
     @cython.cdivision(True)
     def icoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the integer positions of the cells
         #Limited to this domain and within the mask
         #Positions are binary; aside from the root mesh
@@ -1084,7 +1085,8 @@
     @cython.cdivision(True)
     def ires(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the 'resolution' of each cell; ie the level
         cdef np.ndarray[np.int64_t, ndim=1] res
         res = np.empty(cell_count, dtype="int64")
@@ -1104,7 +1106,8 @@
     @cython.cdivision(True)
     def fcoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the floating point unitary position of every cell
         cdef np.ndarray[np.float64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="float64")
@@ -1423,4 +1426,17 @@
                 count[o.domain] += mask[oi,i]
         return count
 
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n, 
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            for i in range(8):
+                m2[o.local_ind, i] = mask[o.local_ind, i]
+        return m2
 

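The _chunk_spatial implementation added above follows a simple pattern: yield one YTDataChunk per octree subset, attach ghost zones when ngz > 0, and skip empty subsets. A hedged Python sketch of that pattern (FakeSubset and the tuple payload are illustrative stand-ins, not yt classes):

    class FakeSubset(object):
        # Hypothetical stand-in for an octree subset; only cell_count matters.
        def __init__(self, cell_count):
            self.cell_count = cell_count

    def chunk_spatial(subsets):
        # One chunk per subset, skipping empty ones; the ghost-zone
        # branch (ngz > 0) above is elided here.
        for og in subsets:
            if og.cell_count == 0:
                continue
            yield ("spatial", [og], og.cell_count)

    for chunk in chunk_spatial([FakeSubset(8), FakeSubset(0), FakeSubset(16)]):
        print(chunk)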

https://bitbucket.org/yt_analysis/yt/commits/702d94e75e9c/
Changeset:   702d94e75e9c
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 00:18:27
Summary:     added fake octree test
Affected #:  3 files

diff -r b65117530f45a23fe46db5f16ec0dedfd838df9d -r 702d94e75e9c58f686f7ffc6e19f69ebee21bf5b yt/geometry/oct_deposit.pyx
--- a/yt/geometry/oct_deposit.pyx
+++ b/yt/geometry/oct_deposit.pyx
@@ -106,7 +106,7 @@
         kernel = 0.
     return kernel
 
-cdef np.float64_t kernel_null(np.float64_t x) nogil: return 1.0
+cdef np.float64_t kernel_null(np.float64_t x) nogil: return 0.0
 
 cdef deposit(OctreeContainer oct_handler, 
         np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z

diff -r b65117530f45a23fe46db5f16ec0dedfd838df9d -r 702d94e75e9c58f686f7ffc6e19f69ebee21bf5b yt/geometry/setup.py
--- a/yt/geometry/setup.py
+++ b/yt/geometry/setup.py
@@ -23,6 +23,13 @@
                 depends=["yt/utilities/lib/fp_utils.pxd",
                          "yt/geometry/oct_container.pxd",
                          "yt/geometry/selection_routines.pxd"])
+    config.add_extension("fake_octree", 
+                ["yt/geometry/fake_octree.pyx"],
+                include_dirs=["yt/utilities/lib/"],
+                libraries=["m"],
+                depends=["yt/utilities/lib/fp_utils.pxd",
+                         "yt/geometry/oct_container.pxd",
+                         "yt/geometry/selection_routines.pxd"])
     config.make_config_py() # installs __config__.py
     #config.make_svn_version_py()
     return config

diff -r b65117530f45a23fe46db5f16ec0dedfd838df9d -r 702d94e75e9c58f686f7ffc6e19f69ebee21bf5b yt/geometry/tests/fake_octree.py
--- /dev/null
+++ b/yt/geometry/tests/fake_octree.py
@@ -0,0 +1,12 @@
+from yt.geometry.fake_octree import create_fake_octree
+import numpy as np
+
+max_leaf = 100
+max_level = 5
+dn = 2
+dd = np.ones(3,dtype='i4')*dn
+dle = np.ones(3,dtype='f8')*0.0
+dre = np.ones(3,dtype='f8')
+fsub = 0.90
+
+octtree = create_fake_octree(max_leaf, max_level, dd, dle, dre, fsub)


https://bitbucket.org/yt_analysis/yt/commits/de92cbbf84f8/
Changeset:   de92cbbf84f8
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 00:18:44
Summary:     fake octree creator; compiles now
Affected #:  1 file

diff -r 702d94e75e9c58f686f7ffc6e19f69ebee21bf5b -r de92cbbf84f841abd203982ec819359b8d43586a yt/geometry/fake_octree.pyx
--- /dev/null
+++ b/yt/geometry/fake_octree.pyx
@@ -0,0 +1,95 @@
+"""
+Make a fake octree, deposit particle at every leaf
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free, rand, RAND_MAX
+cimport numpy as np
+import numpy as np
+cimport cython
+
+from oct_container cimport Oct, RAMSESOctreeContainer
+
+# Defined only by N leaves
+# Randomly decide if a branch should be subdivided; recurse one level if so
+# Once done, create a position array of len(leaves) with smoothing lengths = oct_size
+
+# Note that with this algorithm the octree won't be balanced once you hit
+# the maximum number of desired leaves
+
+# Use next_child(domain, int[3] octant, Oct parent)
+
+def create_fake_octree(long noct,
+                       long max_level,
+                       np.ndarray[np.int32_t, ndim=1] ndd,
+                       np.ndarray[np.float64_t, ndim=1] dle,
+                       np.ndarray[np.float64_t, ndim=1] dre,
+                       float fsubdivide):
+    cdef RAMSESOctreeContainer oct_handler = RAMSESOctreeContainer(ndd,dle,dre)
+    cdef int[3] ind #hold the octant index
+    cdef int[3] dd #hold the octant index
+    cdef long i
+    for i in range(3):
+        ind[i] = 0
+        dd[i] = ndd[i]
+    cdef long total_oct = (dd[0]*dd[1]*dd[2]) + noct
+    print 'starting'
+    print ind[0], ind[1], ind[2]
+    print 'allocate'
+    print total_oct
+    oct_handler.allocate_domains([total_oct])
+    print 'parent'
+    parent = oct_handler.next_root(oct_handler.max_domain, ind)
+    print 'subdiv'
+    subdivide(oct_handler,ind, dd, parent, 0, 0, noct,
+              max_level, fsubdivide)
+    return oct_handler
+
+cdef subdivide(RAMSESOctreeContainer oct_handler, int ind[3], 
+               int dd[3],
+               Oct *parent, long cur_level, long cur_leaf,
+               long noct, long max_level, float fsubdivide):
+    print "entrance"
+    cdef int ddr[3]
+    cdef long i,j,k
+    cdef float rf #random float from 0-1
+    if cur_level >= max_level: 
+        return
+    if cur_leaf >= noct: 
+        return
+    print "loop over cells"
+    for i in range(3):
+        ind[i] = <int> rand() / RAND_MAX * dd[i]
+        ddr[i] = 2
+    rf = rand() / RAND_MAX
+    print ind[0], ind[1], ind[2]
+    print rf
+    if rf > fsubdivide:
+        #this will mark the octant ind as subdivided
+        print 'subdivide'
+        oct = oct_handler.next_child(1, ind, parent)
+        print 'recurse'
+        subdivide(oct_handler, ind, ddr, oct, cur_level + 1, 
+                  cur_leaf + 1, noct, max_level, fsubdivide)

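As a rough pure-Python sketch of the random-walk construction above (simplified node objects stand in for the Cython oct containers; details assumed):

    import random

    class FakeOct(object):
        # Hypothetical stand-in for the Cython Oct struct.
        def __init__(self, level):
            self.level = level
            self.children = {}            # octant tuple -> FakeOct

    def subdivide(node, max_level, fsubdivide, rng):
        # Pick a random octant; with probability (1 - fsubdivide),
        # refine it one level deeper and recurse, as above.
        if node.level >= max_level:
            return
        ind = (rng.randint(0, 1), rng.randint(0, 1), rng.randint(0, 1))
        if rng.random() > fsubdivide:
            child = node.children.setdefault(ind, FakeOct(node.level + 1))
            subdivide(child, max_level, fsubdivide, rng)

    rng = random.Random(0)
    root = FakeOct(0)
    for _ in range(100):
        subdivide(root, max_level=5, fsubdivide=0.25, rng=rng)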

https://bitbucket.org/yt_analysis/yt/commits/13f4fb73e78a/
Changeset:   13f4fb73e78a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 01:44:24
Summary:     now creates a balanced octree
Affected #:  1 file

diff -r de92cbbf84f841abd203982ec819359b8d43586a -r 13f4fb73e78ae85c67e50bd333d65df53e132b87 yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -41,55 +41,54 @@
 
 # Use next_child(domain, int[3] octant, Oct parent)
 
-def create_fake_octree(long noct,
+def create_fake_octree(RAMSESOctreeContainer oct_handler,
+                       long noct,
                        long max_level,
                        np.ndarray[np.int32_t, ndim=1] ndd,
                        np.ndarray[np.float64_t, ndim=1] dle,
                        np.ndarray[np.float64_t, ndim=1] dre,
                        float fsubdivide):
-    cdef RAMSESOctreeContainer oct_handler = RAMSESOctreeContainer(ndd,dle,dre)
     cdef int[3] ind #hold the octant index
     cdef int[3] dd #hold the octant index
     cdef long i
+    cdef long cur_noct = 0
     for i in range(3):
         ind[i] = 0
         dd[i] = ndd[i]
-    cdef long total_oct = (dd[0]*dd[1]*dd[2]) + noct
+    assert dd[0]*dd[1]*dd[2] <= noct
     print 'starting'
     print ind[0], ind[1], ind[2]
     print 'allocate'
-    print total_oct
-    oct_handler.allocate_domains([total_oct])
+    print noct
+    oct_handler.allocate_domains([noct])
+    print 'n_assigned', oct_handler.domains[0].n_assigned
     print 'parent'
     parent = oct_handler.next_root(oct_handler.max_domain, ind)
     print 'subdiv'
-    subdivide(oct_handler,ind, dd, parent, 0, 0, noct,
-              max_level, fsubdivide)
-    return oct_handler
+    while oct_handler.domains[0].n_assigned < noct:
+        cur_noct = subdivide(oct_handler,ind, dd, parent, 0, 0, noct,
+                  max_level, fsubdivide)
 
-cdef subdivide(RAMSESOctreeContainer oct_handler, int ind[3], 
+cdef long subdivide(RAMSESOctreeContainer oct_handler, int ind[3], 
                int dd[3],
-               Oct *parent, long cur_level, long cur_leaf,
+               Oct *parent, long cur_level, long cur_noct,
                long noct, long max_level, float fsubdivide):
-    print "entrance"
+    print cur_level, ' n_assigned ', oct_handler.domains[0].n_assigned, 
+    print ' n', oct_handler.domains[0].n
     cdef int ddr[3]
     cdef long i,j,k
     cdef float rf #random float from 0-1
     if cur_level >= max_level: 
-        return
-    if cur_leaf >= noct: 
-        return
-    print "loop over cells"
+        return cur_noct
+    if oct_handler.domains[0].n_assigned >= noct: 
+        return cur_noct
     for i in range(3):
-        ind[i] = <int> rand() / RAND_MAX * dd[i]
+        ind[i] = <int> ((rand() * 1.0 / RAND_MAX) * dd[i])
         ddr[i] = 2
-    rf = rand() / RAND_MAX
-    print ind[0], ind[1], ind[2]
-    print rf
+    rf = rand() * 1.0 / RAND_MAX
     if rf > fsubdivide:
         #this will mark the octant ind as subdivided
-        print 'subdivide'
         oct = oct_handler.next_child(1, ind, parent)
-        print 'recurse'
         subdivide(oct_handler, ind, ddr, oct, cur_level + 1, 
-                  cur_leaf + 1, noct, max_level, fsubdivide)
+                  cur_noct+ 1, noct, max_level, fsubdivide)
+    return cur_noct


https://bitbucket.org/yt_analysis/yt/commits/ae7f57c8e2d9/
Changeset:   ae7f57c8e2d9
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 01:44:41
Summary:     updating the tests for the octree
Affected #:  1 file

diff -r 13f4fb73e78ae85c67e50bd333d65df53e132b87 -r ae7f57c8e2d9dea4d974c26badf796a53cb67db3 yt/geometry/tests/fake_octree.py
--- a/yt/geometry/tests/fake_octree.py
+++ b/yt/geometry/tests/fake_octree.py
@@ -1,12 +1,25 @@
 from yt.geometry.fake_octree import create_fake_octree
+from yt.geometry.oct_container import RAMSESOctreeContainer
 import numpy as np
 
-max_leaf = 100
-max_level = 5
+nocts= 100
+max_level = 12
 dn = 2
 dd = np.ones(3,dtype='i4')*dn
 dle = np.ones(3,dtype='f8')*0.0
 dre = np.ones(3,dtype='f8')
-fsub = 0.90
+fsub = 0.10
+domain = 0
 
-octtree = create_fake_octree(max_leaf, max_level, dd, dle, dre, fsub)
+oct_handler = RAMSESOctreeContainer(dd,dle,dre)
+create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
+print "filled"
+print oct_handler.check(domain, print_all=1)
+mask = np.ones(nocts,dtype='bool')
+print mask
+cell_count = nocts*8
+level_counts = np.array([nocts]) # not used anyway
+fc = oct_handler.fcoords(domain,mask,cell_count)
+print fc
+print fc.shape
+


https://bitbucket.org/yt_analysis/yt/commits/861231d91808/
Changeset:   861231d91808
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 01:44:58
Summary:     Merge
Affected #:  1 file

diff -r ae7f57c8e2d9dea4d974c26badf796a53cb67db3 -r 861231d918087390f934d55775406563de920dab yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -513,6 +513,11 @@
                         if f not in fields_to_generate:
                             fields_to_generate.append(f)
 
+    def deposit(self, positions, fields, op):
+        assert(self._current_chunk.chunk_type == "spatial")
+        fields = ensure_list(fields)
+        self.hierarchy._deposit_particle_fields(self, positions, fields, op)
+
     @contextmanager
     def _field_lock(self):
         self._locked = True


https://bitbucket.org/yt_analysis/yt/commits/9e8ac81fa888/
Changeset:   9e8ac81fa888
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 01:47:10
Summary:     Merge
Affected #:  6 files

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/data_objects/api.py
--- a/yt/data_objects/api.py
+++ b/yt/data_objects/api.py
@@ -31,6 +31,9 @@
 from grid_patch import \
     AMRGridPatch
 
+from octree_subset import \
+    OctreeSubset
+
 from static_output import \
     StaticOutput
 

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/data_objects/octree_subset.py
--- /dev/null
+++ b/yt/data_objects/octree_subset.py
@@ -0,0 +1,106 @@
+"""
+Subsets of octrees
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+import numpy as np
+
+from yt.data_objects.data_containers import \
+    YTFieldData, \
+    YTDataContainer, \
+    YTSelectionContainer
+from .field_info_container import \
+    NeedsGridType, \
+    NeedsOriginalGrid, \
+    NeedsDataField, \
+    NeedsProperty, \
+    NeedsParameter
+
+class OctreeSubset(YTSelectionContainer):
+    _spatial = True
+    _num_ghost_zones = 0
+    _num_zones = 2
+    _type_name = 'octree_subset'
+    _skip_add = True
+    _con_args = ('domain', 'mask', 'cell_count')
+    _container_fields = ("dx", "dy", "dz")
+
+    def __init__(self, domain, mask, cell_count):
+        self.field_data = YTFieldData()
+        self.field_parameters = {}
+        self.mask = mask
+        self.n_oct = mask.shape[0]
+        self.domain = domain
+        self.pf = domain.pf
+        self.hierarchy = self.pf.hierarchy
+        self.oct_handler = domain.pf.h.oct_handler
+        self.cell_count = cell_count
+        level_counts = self.oct_handler.count_levels(
+            self.domain.pf.max_level, self.domain.domain_id, mask)
+        assert(level_counts.sum() == cell_count)
+        level_counts[1:] = level_counts[:-1]
+        level_counts[0] = 0
+        self.level_counts = np.add.accumulate(level_counts)
+        self._current_particle_type = 'all'
+        self._current_fluid_type = self.pf.default_fluid_type
+
+    def select_icoords(self, dobj):
+        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fcoords(self, dobj):
+        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fwidth(self, dobj):
+        # Recall domain_dimensions is the number of cells, not octs
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
+        widths = np.empty((self.cell_count, 3), dtype="float64")
+        dds = (2**self.select_ires(dobj))
+        for i in range(3):
+            widths[:,i] = base_dx[i] / dds
+        return widths
+
+    def select_ires(self, dobj):
+        return self.oct_handler.ires(self.domain.domain_id, self.mask,
+                                     self.cell_count,
+                                     self.level_counts.copy())
+
+    def __getitem__(self, key):
+        tr = super(OctreeSubset, self).__getitem__(key)
+        import pdb; pdb.set_trace()
+        try:
+            fields = self._determine_fields(key)
+        except YTFieldTypeNotFound:
+            return tr
+        finfo = self.pf._get_field_info(*fields[0])
+        if not finfo.particle_type:
+            nz = self._num_zones + 2*self._num_ghost_zones
+            dest_shape = (nz, nz, nz, self.n_oct)
+            return tr.reshape(dest_shape)
+        return tr
+
+

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.geometry.oct_container import \
     ARTOctreeContainer
 from yt.data_objects.field_info_container import \
@@ -433,43 +435,10 @@
                 return False
         return False
 
-
-class ARTDomainSubset(object):
+class ARTDomainSubset(OctreeSubset):
     def __init__(self, domain, mask, cell_count, domain_level):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
+        super(ARTDomainSubset, self).__init__(domain, mask, cell_count)
         self.domain_level = domain_level
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        base_dx = 1.0/self.domain.pf.domain_dimensions
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:, i] = base_dx[i] / dds
-        return widths
 
     def fill_root(self, content, ftfields):
         """

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/frontends/enzo/io.py
--- a/yt/frontends/enzo/io.py
+++ b/yt/frontends/enzo/io.py
@@ -50,7 +50,6 @@
         return (exceptions.KeyError, hdf5_light_reader.ReadingError)
 
     def _read_particle_selection_by_type(self, chunks, selector, fields):
-        # Active particles don't have the particle_ prefix.
         rv = {}
         ptypes = list(set([ftype for ftype, fname in fields]))
         fields = list(set(fields))
@@ -94,7 +93,7 @@
         # Now we have to do something unpleasant
         if any((ftype != "all" for ftype, fname in fields)):
             type_fields = [(ftype, fname) for ftype, fname in fields
-                           if ftype != all]
+                           if ftype != "all"]
             rv.update(self._read_particle_selection_by_type(
                       chunks, selector, type_fields))
             if len(rv) == len(fields): return rv

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -35,6 +35,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 
 from .definitions import ramses_header
 from yt.utilities.definitions import \
@@ -252,43 +254,7 @@
         self.select(selector)
         return self.count(selector)
 
-class RAMSESDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
+class RAMSESDomainSubset(OctreeSubset):
 
     def fill(self, content, fields):
         # Here we get a copy of the file, which we skip through and read the
@@ -389,8 +355,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)

diff -r 861231d918087390f934d55775406563de920dab -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1098,3 +1098,44 @@
 
 grid_selector = GridSelector
 
+cdef class OctreeSubsetSelector(SelectorObject):
+    # This is a numpy array, which will be a bool of ndim 1
+    cdef object oct_mask
+
+    def __init__(self, dobj):
+        self.oct_mask = dobj.mask
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def select_octs(self, OctreeContainer octree):
+        return self.oct_mask
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void set_bounds(self,
+                         np.float64_t left_edge[3], np.float64_t right_edge[3],
+                         np.float64_t dds[3], int ind[3][2], int *check):
+        check[0] = 0
+        return
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def select_grids(self,
+                     np.ndarray[np.float64_t, ndim=2] left_edges,
+                     np.ndarray[np.float64_t, ndim=2] right_edges,
+                     np.ndarray[np.int32_t, ndim=2] levels):
+        raise RuntimeError
+
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef int select_cell(self, np.float64_t pos[3], np.float64_t dds[3],
+                         int eterm[3]) nogil:
+        return 1
+
+
+octree_subset_selector = OctreeSubsetSelector
+


https://bitbucket.org/yt_analysis/yt/commits/acadab069a03/
Changeset:   acadab069a03
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 02:55:53
Summary:     adding more debug prints
Affected #:  1 file

diff -r 9e8ac81fa888ddce960f8c4b7a07fe20dfb3633a -r acadab069a03a501d1ca1e8e1010e68b69da83aa yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -676,7 +676,7 @@
                 count[cur.my_octs[i - cur.offset].domain - 1] += 1
         return count
 
-    def check(self, int curdom):
+    def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct
         cdef OctAllocationContainer *cont = self.domains[curdom - 1]
@@ -685,6 +685,9 @@
         cdef int unassigned = 0
         for pi in range(cont.n_assigned):
             oct = cont.my_octs[pi]
+            if print_all==1:
+                print pi, oct.level, oct.domain,
+                print oct.pos[0],oct.pos[1],oct.pos[2]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):


https://bitbucket.org/yt_analysis/yt/commits/a69ba0a7a97a/
Changeset:   a69ba0a7a97a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 02:56:27
Summary:     assign domain=1 to octs. fixes segfault
Affected #:  1 file

diff -r acadab069a03a501d1ca1e8e1010e68b69da83aa -r a69ba0a7a97a4060aa6dd14bb8c7227f9aaef601 yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -52,6 +52,7 @@
     cdef int[3] dd #hold the octant index
     cdef long i
     cdef long cur_noct = 0
+    cdef long cur_leaf= 0
     for i in range(3):
         ind[i] = 0
         dd[i] = ndd[i]
@@ -63,18 +64,21 @@
     oct_handler.allocate_domains([noct])
     print 'n_assigned', oct_handler.domains[0].n_assigned
     print 'parent'
-    parent = oct_handler.next_root(oct_handler.max_domain, ind)
+    parent = oct_handler.next_root(1, ind)
+    parent.domain = 1
+    cur_leaf = 8 #we've added one parent...
     print 'subdiv'
     while oct_handler.domains[0].n_assigned < noct:
-        cur_noct = subdivide(oct_handler,ind, dd, parent, 0, 0, noct,
+        cur_noct = subdivide(oct_handler,ind, dd, cur_leaf, parent, 0, 0, noct,
                   max_level, fsubdivide)
 
 cdef long subdivide(RAMSESOctreeContainer oct_handler, int ind[3], 
-               int dd[3],
+               int dd[3], long cur_leaf,
                Oct *parent, long cur_level, long cur_noct,
                long noct, long max_level, float fsubdivide):
-    print cur_level, ' n_assigned ', oct_handler.domains[0].n_assigned, 
-    print ' n', oct_handler.domains[0].n
+    print cur_level, ' na ', oct_handler.domains[0].n_assigned, 
+    print ' n', oct_handler.domains[0].n,
+    print 'pos ', parent.pos[0], parent.pos[1], parent.pos[2]
     cdef int ddr[3]
     cdef long i,j,k
     cdef float rf #random float from 0-1
@@ -89,6 +93,7 @@
     if rf > fsubdivide:
         #this will mark the octant ind as subdivided
         oct = oct_handler.next_child(1, ind, parent)
+        oct.domain = 1
         subdivide(oct_handler, ind, ddr, oct, cur_level + 1, 
                   cur_noct+ 1, noct, max_level, fsubdivide)
     return cur_noct


https://bitbucket.org/yt_analysis/yt/commits/634a0e8835d7/
Changeset:   634a0e8835d7
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 02:57:04
Summary:     tests are OK; the walk needs to stop adding octs based on a leaf-count condition, not an oct-count condition
Affected #:  1 file

diff -r a69ba0a7a97a4060aa6dd14bb8c7227f9aaef601 -r 634a0e8835d7a007a4355a54b4aa2044206ec698 yt/geometry/tests/fake_octree.py
--- a/yt/geometry/tests/fake_octree.py
+++ b/yt/geometry/tests/fake_octree.py
@@ -8,18 +8,23 @@
 dd = np.ones(3,dtype='i4')*dn
 dle = np.ones(3,dtype='f8')*0.0
 dre = np.ones(3,dtype='f8')
-fsub = 0.10
-domain = 0
+fsub = 0.25
+domain = 1
 
 oct_handler = RAMSESOctreeContainer(dd,dle,dre)
 create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
 print "filled"
-print oct_handler.check(domain, print_all=1)
-mask = np.ones(nocts,dtype='bool')
-print mask
+print oct_handler.check(1, print_all=1)
+mask = np.ones((nocts,8),dtype='bool')
 cell_count = nocts*8
-level_counts = np.array([nocts]) # not used anyway
-fc = oct_handler.fcoords(domain,mask,cell_count)
+level_counts = oct_handler.count_levels(max_level, 1, mask)
+print level_counts
+print "fcoords"
+fc = oct_handler.fcoords(domain,mask,cell_count,level_counts)
+print level_counts, level_counts.sum()
+print [np.unique(fc[:,ax]).shape[0] for ax in range(3)]
 print fc
 print fc.shape
+import pdb; pdb.set_trace()
 
+#Now take the particles and recreate the same octree


https://bitbucket.org/yt_analysis/yt/commits/ff7fe030cc38/
Changeset:   ff7fe030cc38
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 04:22:35
Summary:     cleaned; fixed level counts
Affected #:  1 file

diff -r 634a0e8835d7a007a4355a54b4aa2044206ec698 -r ff7fe030cc382be44bb9083efcc64e66847e4aa4 yt/geometry/tests/fake_octree.py
--- a/yt/geometry/tests/fake_octree.py
+++ b/yt/geometry/tests/fake_octree.py
@@ -13,18 +13,10 @@
 
 oct_handler = RAMSESOctreeContainer(dd,dle,dre)
 create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
-print "filled"
-print oct_handler.check(1, print_all=1)
 mask = np.ones((nocts,8),dtype='bool')
 cell_count = nocts*8
-level_counts = oct_handler.count_levels(max_level, 1, mask)
-print level_counts
-print "fcoords"
-fc = oct_handler.fcoords(domain,mask,cell_count,level_counts)
-print level_counts, level_counts.sum()
-print [np.unique(fc[:,ax]).shape[0] for ax in range(3)]
-print fc
-print fc.shape
-import pdb; pdb.set_trace()
+oct_counts = oct_handler.count_levels(max_level, 1, mask)
+level_counts = np.concatenate(([0,],np.cumsum(oct_counts)))
+fc = oct_handler.fcoords(domain,mask,cell_count, level_counts.copy())
 
 #Now take the particles and recreate the same octree


https://bitbucket.org/yt_analysis/yt/commits/7bd2fd1466e2/
Changeset:   7bd2fd1466e2
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 07:37:34
Summary:     added leaf counting
Affected #:  1 file

diff -r ff7fe030cc382be44bb9083efcc64e66847e4aa4 -r 7bd2fd1466e2a92c6a6b4e9961a2de239e58f071 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -196,6 +196,39 @@
                 count[o.domain - 1] += mask[o.local_ind,i]
         return count
 
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def count_leaves(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        # Modified to work when not all octs are assigned
+        cdef int i, j, k, ii
+        cdef np.int64_t oi
+        # pos here is CELL center, not OCT center.
+        cdef np.float64_t pos[3]
+        cdef int n = mask.shape[0]
+        cdef np.ndarray[np.int64_t, ndim=1] count
+        count = np.zeros(self.max_domain, 'int64')
+        # 
+        cur = self.cont
+        for oi in range(n):
+            if oi - cur.offset >= cur.n_assigned:
+                cur = cur.next
+                if cur == NULL:
+                    break
+            o = &cur.my_octs[oi - cur.offset]
+            # skip if unassigned
+            if o == NULL:
+                continue
+            if o.domain == -1: 
+                continue
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if o.children[i][j][k] == NULL:
+                            ii = ((k*2)+j)*2+i
+                            count[o.domain - 1] += mask[o.local_ind,ii]
+        return count
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)

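The new count_leaves above tallies masked cells whose child pointer is NULL; an illustrative NumPy analogue (reduced to a single domain, not the yt implementation):

    import numpy as np

    def count_leaves(has_child, mask):
        # A cell is a leaf when it has no children; count the masked
        # leaves, as the triple i/j/k loop above does per domain.
        has_child = np.asarray(has_child, dtype=bool)
        mask = np.asarray(mask, dtype=bool)
        return int((~has_child & mask).sum())

    # Two octs: the first has one refined cell, the second is all leaves.
    has_child = np.zeros((2, 8), dtype=bool)
    has_child[0, 0] = True
    mask = np.ones((2, 8), dtype=bool)
    print(count_leaves(has_child, mask))   # -> 15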

https://bitbucket.org/yt_analysis/yt/commits/390b77a76c9f/
Changeset:   390b77a76c9f
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 07:38:05
Summary:     cleaned fake_octree code
Affected #:  1 file

diff -r 7bd2fd1466e2a92c6a6b4e9961a2de239e58f071 -r 390b77a76c9fc3b0e7a517b1c7600fb2977746e2 yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -32,68 +32,61 @@
 
 from oct_container cimport Oct, RAMSESOctreeContainer
 
-# Defined only by N leaves
-# Randomly decide if a branch should be subdivided; recurse one level if so
-# Once done, create a position array of len(leaves) with smoothing lengths = oct_size
-
-# Note that with this algorithm the octree won't be balanced once you hit
-# the maximum number of desired leaves
-
-# Use next_child(domain, int[3] octant, Oct parent)
-
+# Create a balanced octree by a random walk that recursively
+# subdivides
 def create_fake_octree(RAMSESOctreeContainer oct_handler,
-                       long noct,
+                       long max_noct,
                        long max_level,
                        np.ndarray[np.int32_t, ndim=1] ndd,
                        np.ndarray[np.float64_t, ndim=1] dle,
                        np.ndarray[np.float64_t, ndim=1] dre,
                        float fsubdivide):
+    cdef int[3] dd #hold the octant index
     cdef int[3] ind #hold the octant index
-    cdef int[3] dd #hold the octant index
     cdef long i
-    cdef long cur_noct = 0
-    cdef long cur_leaf= 0
+    cdef long cur_leaf = 0
+    cdef long leaves = 0
+    cdef np.ndarray[np.uint8_t, ndim=2] mask
     for i in range(3):
         ind[i] = 0
         dd[i] = ndd[i]
-    assert dd[0]*dd[1]*dd[2] <= noct
-    print 'starting'
-    print ind[0], ind[1], ind[2]
-    print 'allocate'
-    print noct
-    oct_handler.allocate_domains([noct])
-    print 'n_assigned', oct_handler.domains[0].n_assigned
-    print 'parent'
+    oct_handler.allocate_domains([max_noct])
     parent = oct_handler.next_root(1, ind)
     parent.domain = 1
     cur_leaf = 8 #we've added one parent...
-    print 'subdiv'
-    while oct_handler.domains[0].n_assigned < noct:
-        cur_noct = subdivide(oct_handler,ind, dd, cur_leaf, parent, 0, 0, noct,
-                  max_level, fsubdivide)
+    mask = np.ones((max_noct,8),dtype='uint8')
+    while oct_handler.domains[0].n_assigned < max_noct:
+        print "root: nocts ", oct_handler.domains[0].n_assigned
+        cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0,
+                             max_noct, max_level, fsubdivide, mask)
+                             
+    leaves = oct_handler.count_leaves(mask)
+    assert cur_leaf == leaves
 
-cdef long subdivide(RAMSESOctreeContainer oct_handler, int ind[3], 
-               int dd[3], long cur_leaf,
-               Oct *parent, long cur_level, long cur_noct,
-               long noct, long max_level, float fsubdivide):
-    print cur_level, ' na ', oct_handler.domains[0].n_assigned, 
-    print ' n', oct_handler.domains[0].n,
-    print 'pos ', parent.pos[0], parent.pos[1], parent.pos[2]
+cdef long subdivide(RAMSESOctreeContainer oct_handler, 
+                    Oct *parent,
+                    int ind[3], int dd[3], 
+                    long cur_leaf, long cur_level, 
+                    long max_noct, long max_level, float fsubdivide,
+                    np.ndarray[np.uint8_t, ndim=2] mask):
+    print "child", parent.ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
     cdef int ddr[3]
     cdef long i,j,k
     cdef float rf #random float from 0-1
     if cur_level >= max_level: 
-        return cur_noct
-    if oct_handler.domains[0].n_assigned >= noct: 
-        return cur_noct
+        return cur_leaf
+    if oct_handler.domains[0].n_assigned >= max_noct:
+        return cur_leaf
     for i in range(3):
         ind[i] = <int> ((rand() * 1.0 / RAND_MAX) * dd[i])
         ddr[i] = 2
     rf = rand() * 1.0 / RAND_MAX
     if rf > fsubdivide:
-        #this will mark the octant ind as subdivided
+        if parent.children[ind[0]][ind[1]][ind[2]] == NULL:
+            cur_leaf += 7 
         oct = oct_handler.next_child(1, ind, parent)
         oct.domain = 1
-        subdivide(oct_handler, ind, ddr, oct, cur_level + 1, 
-                  cur_noct+ 1, noct, max_level, fsubdivide)
-    return cur_noct
+        cur_leaf = subdivide(oct_handler, oct, ind, ddr, cur_leaf, 
+                             cur_level + 1, max_noct, max_level, 
+                             fsubdivide, mask)
+    return cur_leaf

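One accounting detail in the walk above: subdividing a leaf octant replaces that one leaf cell with eight child cells, a net gain of seven, which is why cur_leaf starts at 8 for the root and the code adds 7 on each fresh subdivision. In miniature:

    # Leaf bookkeeping for the fake octree above.
    leaves = 8          # one root oct contributes 8 leaf cells
    leaves += 8 - 1     # subdividing one cell: +8 children, -1 leaf
    assert leaves == 15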

https://bitbucket.org/yt_analysis/yt/commits/fd90ac91d0d8/
Changeset:   fd90ac91d0d8
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-04-18 08:07:43
Summary:     octree is successfully instantiated; particle octree does not yet match
Affected #:  2 files

diff -r 390b77a76c9fc3b0e7a517b1c7600fb2977746e2 -r fd90ac91d0d8ecb8d46f13c7b3d9cb389292e190 yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -45,7 +45,6 @@
     cdef int[3] ind #hold the octant index
     cdef long i
     cdef long cur_leaf = 0
-    cdef long leaves = 0
     cdef np.ndarray[np.uint8_t, ndim=2] mask
     for i in range(3):
         ind[i] = 0
@@ -59,9 +58,8 @@
         print "root: nocts ", oct_handler.domains[0].n_assigned
         cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0,
                              max_noct, max_level, fsubdivide, mask)
+    return cur_leaf
                              
-    leaves = oct_handler.count_leaves(mask)
-    assert cur_leaf == leaves
 
 cdef long subdivide(RAMSESOctreeContainer oct_handler, 
                     Oct *parent,

diff -r 390b77a76c9fc3b0e7a517b1c7600fb2977746e2 -r fd90ac91d0d8ecb8d46f13c7b3d9cb389292e190 yt/geometry/tests/fake_octree.py
--- a/yt/geometry/tests/fake_octree.py
+++ b/yt/geometry/tests/fake_octree.py
@@ -1,8 +1,8 @@
 from yt.geometry.fake_octree import create_fake_octree
-from yt.geometry.oct_container import RAMSESOctreeContainer
+from yt.geometry.oct_container import RAMSESOctreeContainer, ParticleOctreeContainer
 import numpy as np
 
-nocts= 100
+nocts= 3
 max_level = 12
 dn = 2
 dd = np.ones(3,dtype='i4')*dn
@@ -12,11 +12,27 @@
 domain = 1
 
 oct_handler = RAMSESOctreeContainer(dd,dle,dre)
-create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
+leaves = create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
 mask = np.ones((nocts,8),dtype='bool')
 cell_count = nocts*8
 oct_counts = oct_handler.count_levels(max_level, 1, mask)
 level_counts = np.concatenate(([0,],np.cumsum(oct_counts)))
 fc = oct_handler.fcoords(domain,mask,cell_count, level_counts.copy())
+leavesb = oct_handler.count_leaves(mask)
+assert leaves == leavesb
 
-#Now take the particles and recreate the same octree
+#Now take the fcoords, call them particles and recreate the same octree
+print "particle-based recreate"
+oct_handler2 = ParticleOctreeContainer(dd,dle,dre)
+oct_handler2.allocate_domains([nocts])
+oct_handler2.n_ref = 1 #specifically make a maximum of 1 particle per oct
+oct_handler2.add(fc, 1)
+print "added particles"
+cell_count2 = nocts*8
+oct_counts2 = oct_handler.count_levels(max_level, 1, mask)
+level_counts2 = np.concatenate(([0,],np.cumsum(oct_counts)))
+fc2 = oct_handler.fcoords(domain,mask,cell_count, level_counts.copy())
+leaves2 = oct_handler2.count_leaves(mask)
+assert leaves == leaves2
+
+print "success"


https://bitbucket.org/yt_analysis/yt/commits/1b3764566f4c/
Changeset:   1b3764566f4c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 17:31:43
Summary:     Merging in the basics of particle deposition
Affected #:  11 files

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -513,6 +513,11 @@
                         if f not in fields_to_generate:
                             fields_to_generate.append(f)
 
+    def deposit(self, positions, fields, op):
+        assert(self._current_chunk.chunk_type == "spatial")
+        fields = ensure_list(fields)
+        self.hierarchy._deposit_particle_fields(self, positions, fields, op)
+
     @contextmanager
     def _field_lock(self):
         self._locked = True

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/frontends/sph/smoothing_kernel.pyx
--- a/yt/frontends/sph/smoothing_kernel.pyx
+++ b/yt/frontends/sph/smoothing_kernel.pyx
@@ -53,21 +53,28 @@
     for p in range(ngas):
         kernel_sum[p] = 0.0
         skip = 0
+        # Find the number of cells spanned by the kernel
         for i in range(3):
             pos[i] = ppos[p, i]
+            # Get particle root grid integer index
             ind[i] = <int>((pos[i] - left_edge[i]) / dds[i])
+            # Number of root-grid cells spanned by the smoothing length, plus 1
             half_len = <int>(hsml[p]/dds[i]) + 1
+            # Left and right integer indices of the smoothing range
+            # If the smoothing length is small, both may fall in the same bin
             ib0[i] = ind[i] - half_len
             ib1[i] = ind[i] + half_len
             #pos[i] = ppos[p, i] - left_edge[i]
             #ind[i] = <int>(pos[i] / dds[i])
             #ib0[i] = <int>((pos[i] - hsml[i]) / dds[i]) - 1
             #ib1[i] = <int>((pos[i] + hsml[i]) / dds[i]) + 1
+            # Skip if outside our root grid
             if ib0[i] >= dims[i] or ib1[i] < 0:
                 skip = 1
             ib0[i] = iclip(ib0[i], 0, dims[i] - 1)
             ib1[i] = iclip(ib1[i], 0, dims[i] - 1)
         if skip == 1: continue
+        # Having found the kernel shape, calculate the kernel weight
         for i from ib0[0] <= i <= ib1[0]:
             idist[0] = (ind[0] - i) * (ind[0] - i) * sdds[0]
             for j from ib0[1] <= j <= ib1[1]:
@@ -75,10 +82,14 @@
                 for k from ib0[2] <= k <= ib1[2]:
                     idist[2] = (ind[2] - k) * (ind[2] - k) * sdds[2]
                     dist = idist[0] + idist[1] + idist[2]
+                    # Calculate distance in multiples of the smoothing length
                     dist = sqrt(dist) / hsml[p]
+                    # Kernel is 3D but save the elements in a 1D array
                     gi = ((i * dims[1] + j) * dims[2]) + k
                     pdist[gi] = sph_kernel(dist)
+                    # Save sum to normalize later
                     kernel_sum[p] += pdist[gi]
+        # Having found the kernel, deposit accordingly into gdata
         for i from ib0[0] <= i <= ib1[0]:
             for j from ib0[1] <= j <= ib1[1]:
                 for k from ib0[2] <= k <= ib1[2]:

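The commented loops above amount to a short recipe: find the span of root-grid cells covered by each particle's smoothing length, weight each cell by the kernel evaluated at its distance in units of hsml, accumulate the kernel sum, then deposit normalized values. A 1D Python sketch (kernel shape and all names assumed for illustration, not the yt implementation):

    import numpy as np

    def sph_kernel(q):
        # Cubic-spline-style kernel shape (unnormalized), assumed here;
        # zero beyond one smoothing length.
        if q <= 0.5:
            return 1.0 - 6.0 * q * q * (1.0 - q)
        elif q <= 1.0:
            return 2.0 * (1.0 - q) ** 3
        return 0.0

    def deposit_1d(pos, hsml, value, left_edge, dx, nx):
        # 1D reduction of the loop above: locate the cell span covered
        # by hsml, weight cells by the kernel, normalize, deposit.
        grid = np.zeros(nx)
        ind = int((pos - left_edge) / dx)     # particle's root-grid cell
        half_len = int(hsml / dx) + 1         # cells spanned by hsml, plus 1
        i0 = max(ind - half_len, 0)
        i1 = min(ind + half_len, nx - 1)
        weights = np.array([sph_kernel(abs(ind - i) * dx / hsml)
                            for i in range(i0, i1 + 1)])
        if weights.sum() == 0.0:
            return grid
        grid[i0:i1 + 1] = value * weights / weights.sum()
        return grid

    print(deposit_1d(pos=0.52, hsml=0.15, value=1.0,
                     left_edge=0.0, dx=0.1, nx=10))
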
diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/fake_octree.pyx
--- /dev/null
+++ b/yt/geometry/fake_octree.pyx
@@ -0,0 +1,90 @@
+"""
+Make a fake octree, deposit particle at every leaf
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free, rand, RAND_MAX
+cimport numpy as np
+import numpy as np
+cimport cython
+
+from oct_container cimport Oct, RAMSESOctreeContainer
+
+# Create a balanced octree by a random walk that recursively
+# subdivides
+def create_fake_octree(RAMSESOctreeContainer oct_handler,
+                       long max_noct,
+                       long max_level,
+                       np.ndarray[np.int32_t, ndim=1] ndd,
+                       np.ndarray[np.float64_t, ndim=1] dle,
+                       np.ndarray[np.float64_t, ndim=1] dre,
+                       float fsubdivide):
+    cdef int[3] dd #hold the octant index
+    cdef int[3] ind #hold the octant index
+    cdef long i
+    cdef long cur_leaf = 0
+    cdef np.ndarray[np.uint8_t, ndim=2] mask
+    for i in range(3):
+        ind[i] = 0
+        dd[i] = ndd[i]
+    oct_handler.allocate_domains([max_noct])
+    parent = oct_handler.next_root(1, ind)
+    parent.domain = 1
+    cur_leaf = 8 #we've added one parent...
+    mask = np.ones((max_noct,8),dtype='uint8')
+    while oct_handler.domains[0].n_assigned < max_noct:
+        print "root: nocts ", oct_handler.domains[0].n_assigned
+        cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0,
+                             max_noct, max_level, fsubdivide, mask)
+    return cur_leaf
+                             
+
+cdef long subdivide(RAMSESOctreeContainer oct_handler, 
+                    Oct *parent,
+                    int ind[3], int dd[3], 
+                    long cur_leaf, long cur_level, 
+                    long max_noct, long max_level, float fsubdivide,
+                    np.ndarray[np.uint8_t, ndim=2] mask):
+    print "child", parent.ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
+    cdef int ddr[3]
+    cdef long i,j,k
+    cdef float rf #random float from 0-1
+    if cur_level >= max_level: 
+        return cur_leaf
+    if oct_handler.domains[0].n_assigned >= max_noct:
+        return cur_leaf
+    for i in range(3):
+        ind[i] = <int> ((rand() * 1.0 / RAND_MAX) * dd[i])
+        ddr[i] = 2
+    rf = rand() * 1.0 / RAND_MAX
+    if rf > fsubdivide:
+        if parent.children[ind[0]][ind[1]][ind[2]] == NULL:
+            cur_leaf += 7 
+        oct = oct_handler.next_child(1, ind, parent)
+        oct.domain = 1
+        cur_leaf = subdivide(oct_handler, oct, ind, ddr, cur_leaf, 
+                             cur_level + 1, max_noct, max_level, 
+                             fsubdivide, mask)
+    return cur_leaf

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -54,7 +54,7 @@
     cdef np.float64_t DLE[3], DRE[3]
     cdef public int nocts
     cdef public int max_domain
-    cdef Oct* get(self, ppos)
+    cdef Oct* get(self, np.float64_t ppos[3], int *ii = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
 

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -142,7 +142,7 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef Oct *get(self, ppos):
+    cdef Oct *get(self, np.float64_t ppos[3], int *ii = NULL):
         #Given a floating point position, retrieve the most
         #refined oct at that time
         cdef np.int64_t ind[3]
@@ -165,6 +165,13 @@
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
             cur = cur.children[ind[0]][ind[1]][ind[2]]
+        if ii != NULL: return cur
+        for i in range(3):
+            if cp[i] > pp[i]:
+                ind[i] = 0
+            else:
+                ind[i] = 1
+        ii[0] = ((ind[2]*2)+ind[1])*2+ind[0]
         return cur
 
     @cython.boundscheck(False)
@@ -189,6 +196,39 @@
                 count[o.domain - 1] += mask[o.local_ind,i]
         return count
 
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def count_leaves(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        # Modified to work when not all octs are assigned
+        cdef int i, j, k, ii
+        cdef np.int64_t oi
+        # pos here is CELL center, not OCT center.
+        cdef np.float64_t pos[3]
+        cdef int n = mask.shape[0]
+        cdef np.ndarray[np.int64_t, ndim=1] count
+        count = np.zeros(self.max_domain, 'int64')
+        # 
+        cur = self.cont
+        for oi in range(n):
+            if oi - cur.offset >= cur.n_assigned:
+                cur = cur.next
+                if cur == NULL:
+                    break
+            o = &cur.my_octs[oi - cur.offset]
+            # skip if unassigned
+            if o == NULL:
+                continue
+            if o.domain == -1: 
+                continue
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if o.children[i][j][k] == NULL:
+                            ii = ((k*2)+j)*2+i
+                            count[o.domain - 1] += mask[o.local_ind,ii]
+        return count
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -260,14 +300,17 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def get_neighbor_boundaries(self, ppos):
+    def get_neighbor_boundaries(self, oppos):
+        cdef int i, ii
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
         cdef np.ndarray[np.float64_t, ndim=2] bounds
         cdef np.float64_t corner[3], size[3]
         bounds = np.zeros((27,6), dtype="float64")
-        cdef int i, ii
         tnp = 0
         for i in range(27):
             self.oct_bounds(neighbors[i], corner, size)
@@ -680,7 +723,7 @@
                 m2[o.local_ind, i] = mask[o.local_ind, i]
         return m2
 
-    def check(self, int curdom):
+    def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct
         cdef OctAllocationContainer *cont = self.domains[curdom - 1]
@@ -689,6 +732,9 @@
         cdef int unassigned = 0
         for pi in range(cont.n_assigned):
             oct = cont.my_octs[pi]
+            if print_all==1:
+                print pi, oct.level, oct.domain,
+                print oct.pos[0],oct.pos[1],oct.pos[2]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
@@ -901,6 +947,28 @@
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
     #this class is specifically for the NMSU ART
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def deposit_particle_cumsum(self,
+                                np.ndarray[np.float64_t, ndim=2] ppos, 
+                                np.ndarray[np.float64_t, ndim=1] pdata,
+                                np.ndarray[np.float64_t, ndim=2] mask,
+                                np.ndarray[np.float64_t, ndim=1] dest,
+                                fields, int domain):
+        cdef Oct *o
+        cdef OctAllocationContainer *dom = self.domains[domain - 1]
+        cdef np.float64_t pos[3]
+        cdef int ii
+        cdef int no = ppos.shape[0]
+        for n in range(no):
+            for j in range(3):
+                pos[j] = ppos[n,j]
+            o = self.get(pos, &ii) 
+            if mask[o.local_ind,ii]==0: continue
+            dest[o.ind+ii] += pdata[n]
+        return dest
+
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -1394,12 +1462,15 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def count_neighbor_particles(self, ppos):
+    def count_neighbor_particles(self, oppos):
         #How many particles are in my neighborhood
+        cdef int i, ni, dl, tnp
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
-        cdef int i, ni, dl, tnp
         tnp = 0
         for i in range(27):
             if neighbors[i].sd != NULL:

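For reference, get() and count_leaves above share one convention: a cell's (i, j, k) position inside a 2x2x2 oct is packed into a flat index as ((k*2)+j)*2+i, with i varying fastest. A minimal pure-Python sketch of the packing (names illustrative, not from the yt codebase):

    def cell_index(i, j, k):
        # Pack a 2x2x2 (i, j, k) child position into a flat index 0..7,
        # i varying fastest, matching ((ind[2]*2)+ind[1])*2+ind[0] above.
        return ((k * 2) + j) * 2 + i

    # Each of the eight corners maps to a distinct index:
    assert sorted(cell_index(i, j, k)
                  for i in (0, 1)
                  for j in (0, 1)
                  for k in (0, 1)) == list(range(8))
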
diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/oct_deposit.pyx
--- /dev/null
+++ b/yt/geometry/oct_deposit.pyx
@@ -0,0 +1,158 @@
+"""
+Particle Deposition onto Octs
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free
+cimport numpy as np
+import numpy as np
+cimport cython
+
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+# Mode functions
+ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)
+cdef np.float64_t opt_count(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += 1.0
+
+cdef np.float64_t opt_sum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata 
+
+cdef np.float64_t opt_diff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) 
+
+cdef np.float64_t opt_wcount(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += weight
+
+cdef np.float64_t opt_wsum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata * weight
+
+cdef np.float64_t opt_wdiff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) * weight
+
+# Selection functions
+ctypedef NOTSURE (*type_sel)(OctreeContainer, 
+                                np.ndarray[np.float64_t, ndim=1],
+                                np.float64_t)
+cdef NOTSURE select_nearest(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return only the nearest oct
+    pass
+
+
+cdef NOTSURE select_radius(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return a list of octs within the radius
+    pass
+    
+
+# Kernel functions
+ctypedef np.float64_t (*type_ker)(np.float64_t)
+cdef np.float64_t kernel_sph(np.float64_t x) nogil:
+    cdef np.float64_t kernel
+    if x <= 0.5:
+        kernel = 1.-6.*x*x*(1.-x)
+    elif x>0.5 and x<=1.0:
+        kernel = 2.*(1.-x)*(1.-x)*(1.-x)
+    else:
+        kernel = 0.
+    return kernel
+
+cdef np.float64_t kernel_null(np.float64_t x) nogil: return 0.0
+
+cdef deposit(OctreeContainer oct_handler, 
+        np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
+        np.ndarray[np.float64_t, ndim=2] pd, # particle fields
+        np.ndarray[np.float64_t, ndim=1] pr, # particle radius
+        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
+        np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
+        mode='count', selection='nearest', kernel='null'):
+    cdef type_opt fopt
+    cdef type_sel fsel
+    cdef type_ker fker
+    cdef long pi #particle index
+    cdef long nocts #number of octs in selection
+    cdef Oct oct 
+    cdef np.float64_t w
+    # Can we do this with dicts?
+    # Setup the function pointers
+    if mode == 'count':
+        fopt = opt_count
+    elif mode == 'sum':
+        fopt = opt_sum
+    elif mode == 'diff':
+        fopt = opt_diff
+    if mode == 'wcount':
+        fopt = opt_wcount
+    elif mode == 'wsum':
+        fopt = opt_wsum
+    elif mode == 'wdiff':
+        fopt = opt_wdiff
+    if selection == 'nearest':
+        fsel = select_nearest
+    elif selection == 'radius':
+        fsel = select_radius
+    if kernel == 'null':
+        fker = kernel_null
+    if kernel == 'sph':
+        fker = kernel_sph
+    for pi in range(particles):
+        octs = fsel(oct_handler, ppos[pi], pr[pi])
+        for oct in octs:
+            for cell in oct.cells:
+                w = fker(pr[pi],cell) 
+                weights.append(w)
+        norm = weights.sum()
+        for w, oct in zip(weights, octs):
+            for cell in oct.cells:
+                fopt(pd[pi], w/norm, oct.index, data_in, data_out)
+
+

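The kernel_sph draft above is a piecewise cubic-spline profile on x in [0, 1]; normalization constants are omitted, as in the Cython. A pure-Python sketch of the same function:

    def kernel_sph(x):
        # Piecewise cubic spline profile, mirroring the Cython draft above.
        if x <= 0.5:
            return 1.0 - 6.0 * x * x * (1.0 - x)
        elif x <= 1.0:
            return 2.0 * (1.0 - x) ** 3
        return 0.0

    # The two branches join continuously at x = 0.5 and vanish at x = 1:
    assert abs(kernel_sph(0.5) - 0.25) < 1e-12 and kernel_sph(1.0) == 0.0
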
diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/setup.py
--- a/yt/geometry/setup.py
+++ b/yt/geometry/setup.py
@@ -23,6 +23,13 @@
                 depends=["yt/utilities/lib/fp_utils.pxd",
                          "yt/geometry/oct_container.pxd",
                          "yt/geometry/selection_routines.pxd"])
+    config.add_extension("fake_octree", 
+                ["yt/geometry/fake_octree.pyx"],
+                include_dirs=["yt/utilities/lib/"],
+                libraries=["m"],
+                depends=["yt/utilities/lib/fp_utils.pxd",
+                         "yt/geometry/oct_container.pxd",
+                         "yt/geometry/selection_routines.pxd"])
     config.make_config_py() # installs __config__.py
     #config.make_svn_version_py()
     return config

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce yt/geometry/tests/fake_octree.py
--- /dev/null
+++ b/yt/geometry/tests/fake_octree.py
@@ -0,0 +1,38 @@
+from yt.geometry.fake_octree import create_fake_octree
+from yt.geometry.oct_container import RAMSESOctreeContainer, ParticleOctreeContainer
+import numpy as np
+
+nocts = 3
+max_level = 12
+dn = 2
+dd = np.ones(3,dtype='i4')*dn
+dle = np.ones(3,dtype='f8')*0.0
+dre = np.ones(3,dtype='f8')
+fsub = 0.25
+domain = 1
+
+oct_handler = RAMSESOctreeContainer(dd,dle,dre)
+leaves = create_fake_octree(oct_handler, nocts, max_level, dd, dle, dre, fsub)
+mask = np.ones((nocts,8),dtype='bool')
+cell_count = nocts*8
+oct_counts = oct_handler.count_levels(max_level, 1, mask)
+level_counts = np.concatenate(([0,],np.cumsum(oct_counts)))
+fc = oct_handler.fcoords(domain,mask,cell_count, level_counts.copy())
+leavesb = oct_handler.count_leaves(mask)
+assert leaves == leavesb
+
+#Now take the fcoords, call them particles and recreate the same octree
+print "particle-based recreate"
+oct_handler2 = ParticleOctreeContainer(dd,dle,dre)
+oct_handler2.allocate_domains([nocts])
+oct_handler2.n_ref = 1 #specifically make a maximum of 1 particle per oct
+oct_handler2.add(fc, 1)
+print "added particles"
+cell_count2 = nocts*8
+oct_counts2 = oct_handler2.count_levels(max_level, 1, mask)
+level_counts2 = np.concatenate(([0,],np.cumsum(oct_counts2)))
+fc2 = oct_handler2.fcoords(domain,mask,cell_count2, level_counts2.copy())
+leaves2 = oct_handler2.count_leaves(mask)
+assert leaves == leaves2
+
+print "success"

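The test above sets n_ref = 1, so the particle octree refines until no leaf holds more than one particle. A toy NumPy model of that refinement rule (hypothetical helper, not the yt implementation; octant bounds are half-open, and empty octants still count as leaves once their parent splits):

    import numpy as np

    def count_leaves_needed(pos, left, right, n_ref=1, level=0, max_level=12):
        # Split a cubic region into octants until each leaf holds at
        # most n_ref particles, or max_level is reached.
        if len(pos) <= n_ref or level == max_level:
            return 1
        center = 0.5 * (left + right)
        total = 0
        for octant in range(8):
            bits = [octant & 1, octant & 2, octant & 4]
            lo = np.where(bits, center, left)
            hi = np.where(bits, right, center)
            inside = np.all((pos >= lo) & (pos < hi), axis=1)
            total += count_leaves_needed(pos[inside], lo, hi,
                                         n_ref, level + 1, max_level)
        return total

    print count_leaves_needed(np.random.random((16, 3)),
                              np.zeros(3), np.ones(3))
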

https://bitbucket.org/yt_analysis/yt/commits/36e0d142b508/
Changeset:   36e0d142b508
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 21:38:12
Summary:     Spatial chunking within data objects for Octree codes now works.
Affected #:  3 files

diff -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce -r 36e0d142b508b5432051fc45937ffdd0125b32af yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -60,6 +60,8 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
+        self._last_mask = None
+        self._last_selector_id = None
         self._current_particle_type = 'all'
         self._current_fluid_type = self.pf.default_fluid_type
 
@@ -98,8 +100,35 @@
         if not finfo.particle_type:
             nz = self._num_zones + 2*self._num_ghost_zones
             n_oct = tr.shape[0] / (nz**3.0)
-            dest_shape = (nz, nz, nz, n_oct)
-            return tr.reshape(dest_shape)
+            tr.shape = (n_oct, nz, nz, nz)
+            tr = np.rollaxis(tr, 0, 4)
+            return tr
         return tr
 
+    def deposit(self, positions, fields, method):
+        pass
 
+    def select(self, selector):
+        if id(selector) == self._last_selector_id:
+            return self._last_mask
+        self._last_mask = self.oct_handler.domain_mask(
+                self.mask, self.domain.domain_id)
+        # Record the selector id even for an empty mask, or count() would
+        # recompute (and recurse) forever on a selector that matches nothing.
+        self._last_selector_id = id(selector)
+        if self._last_mask.sum() == 0: self._last_mask = None
+        return self._last_mask
+
+    def count(self, selector):
+        if id(selector) == self._last_selector_id:
+            if self._last_mask is None: return 0
+            return self._last_mask.sum()
+        self.select(selector)
+        return self.count(selector)
+
+    def count_particles(self, selector, x, y, z):
+        # We don't cache the selector results
+        count = selector.count_points(x,y,z)
+        return count
+
+    def select_particles(self, selector, x, y, z):
+        mask = selector.select_points(x,y,z)
+        return mask

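select() and count() above cache the most recent mask keyed on id(selector); recording the selector id even when the mask comes back empty is what lets count() return 0 from the cache instead of recomputing. A minimal sketch of the caching pattern (illustrative names):

    class MaskCache(object):
        def __init__(self, compute_mask):
            self._compute_mask = compute_mask
            self._last_mask = None
            self._last_selector_id = None

        def select(self, selector):
            # id() identifies the selector *object*: a new instance, even
            # with identical parameters, triggers a recompute.
            if id(selector) == self._last_selector_id:
                return self._last_mask
            mask = self._compute_mask(selector)
            self._last_selector_id = id(selector)
            self._last_mask = mask if mask.sum() > 0 else None
            return self._last_mask
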
diff -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce -r 36e0d142b508b5432051fc45937ffdd0125b32af yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -721,7 +721,43 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 m2[o.local_ind, i] = mask[o.local_ind, i]
-        return m2
+        return m2 # NOTE: This is uint8_t
+
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.local_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
 
     def check(self, int curdom, int print_all = 0):
         cdef int dind, pi

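In NumPy terms, domain_mask above compresses a global (n_octs, 8) cell mask down to a (2, 2, 2, n_selected) array covering only octs that belong to the domain and have at least one selected cell. A rough vectorized equivalent (array arguments illustrative; the real container walks per-domain allocation containers):

    import numpy as np

    def domain_mask(mask, domains, domain_id):
        # mask: (n_octs, 8) bool; domains: (n_octs,) owner of each oct.
        keep = (domains == domain_id) & mask.any(axis=1)
        # Cells are packed as ii = ((k*2)+j)*2+i (i fastest), so a
        # reshape gives (n, k, j, i); transpose to (i, j, k, n).
        m2 = mask[keep].reshape(-1, 2, 2, 2)
        return np.transpose(m2, (3, 2, 1, 0))
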
diff -r 1b3764566f4c0177f819f842fc7e84f45d2b2cce -r 36e0d142b508b5432051fc45937ffdd0125b32af yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1147,6 +1147,5 @@
                          int eterm[3]) nogil:
         return 1
 
-
 octree_subset_selector = OctreeSubsetSelector
 


https://bitbucket.org/yt_analysis/yt/commits/b7949c2f0f27/
Changeset:   b7949c2f0f27
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 21:38:12
Summary:     Spatial chunking within data objects for Octree codes now works.
Affected #:  3 files

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r b7949c2f0f27d84a367fe70fc2301f4e58a7ef33 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -60,6 +60,8 @@
         level_counts[1:] = level_counts[:-1]
         level_counts[0] = 0
         self.level_counts = np.add.accumulate(level_counts)
+        self._last_mask = None
+        self._last_selector_id = None
         self._current_particle_type = 'all'
         self._current_fluid_type = self.pf.default_fluid_type
 
@@ -98,8 +100,35 @@
         if not finfo.particle_type:
             nz = self._num_zones + 2*self._num_ghost_zones
             n_oct = tr.shape[0] / (nz**3.0)
-            dest_shape = (nz, nz, nz, n_oct)
-            return tr.reshape(dest_shape)
+            tr.shape = (n_oct, nz, nz, nz)
+            tr = np.rollaxis(tr, 0, 4)
+            return tr
         return tr
 
+    def deposit(self, positions, fields, method):
+        pass
 
+    def select(self, selector):
+        if id(selector) == self._last_selector_id:
+            return self._last_mask
+        self._last_mask = self.oct_handler.domain_mask(
+                self.mask, self.domain.domain_id)
+        # Record the selector id even for an empty mask, or count() would
+        # recompute (and recurse) forever on a selector that matches nothing.
+        self._last_selector_id = id(selector)
+        if self._last_mask.sum() == 0: self._last_mask = None
+        return self._last_mask
+
+    def count(self, selector):
+        if id(selector) == self._last_selector_id:
+            if self._last_mask is None: return 0
+            return self._last_mask.sum()
+        self.select(selector)
+        return self.count(selector)
+
+    def count_particles(self, selector, x, y, z):
+        # We don't cache the selector results
+        count = selector.count_points(x,y,z)
+        return count
+
+    def select_particles(self, selector, x, y, z):
+        mask = selector.select_points(x,y,z)
+        return mask

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r b7949c2f0f27d84a367fe70fc2301f4e58a7ef33 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -678,7 +678,43 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 m2[o.local_ind, i] = mask[o.local_ind, i]
-        return m2
+        return m2 # NOTE: This is uint8_t
+
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.local_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
 
     def check(self, int curdom):
         cdef int dind, pi

diff -r 77b261f38610ca068b32241820f144bf82bed38f -r b7949c2f0f27d84a367fe70fc2301f4e58a7ef33 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -1147,6 +1147,5 @@
                          int eterm[3]) nogil:
         return 1
 
-
 octree_subset_selector = OctreeSubsetSelector
 


https://bitbucket.org/yt_analysis/yt/commits/05d417a01bbd/
Changeset:   05d417a01bbd
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-18 21:54:39
Summary:     Adding domain_mask for particle octrees.
Affected #:  1 file

diff -r b7949c2f0f27d84a367fe70fc2301f4e58a7ef33 -r 05d417a01bbd9874ac07ab9a1ee743ee9ada59d0 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -690,7 +690,7 @@
         # Note also that typically when something calls domain_and, they will 
         # use a logical_any along the oct axis.  Here we don't do that.
         # Note also that we change the shape of the returned array.
-        cdef np.int64_t i, j, k, oi, n, nm
+        cdef np.int64_t i, j, k, oi, n, nm, use
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
         n = mask.shape[0]
@@ -1476,3 +1476,39 @@
                 m2[o.local_ind, i] = mask[o.local_ind, i]
         return m2
 
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm, use
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.local_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")


https://bitbucket.org/yt_analysis/yt/commits/94dce22865da/
Changeset:   94dce22865da
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-19 21:13:18
Summary:     Enabled reading PkdGrav output via endian and dtype specification.

Also added an exception that is raised when tipsy-format particles exceed the
bounds of the octree.
Affected #:  3 files

diff -r 05d417a01bbd9874ac07ab9a1ee743ee9ada59d0 -r 94dce22865dacd30b38dfeb8fe05c5ce709bff64 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -351,10 +351,27 @@
                     ('dummy',   'i'))
 
     def __init__(self, filename, data_style="tipsy",
-                 root_dimensions = 64):
+                 root_dimensions = 64, endian = ">",
+                 field_dtypes = None,
+                 domain_left_edge = None,
+                 domain_right_edge = None):
+        self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        if domain_left_edge is None:
+            domain_left_edge = np.zeros(3, "float64") - 0.5
+        if domain_right_edge is None:
+            domain_right_edge = np.ones(3, "float64") + 0.5
+
+        self.domain_left_edge = np.array(domain_left_edge, dtype="float64")
+        self.domain_right_edge = np.array(domain_right_edge, dtype="float64")
+
+        # My understanding is that dtypes are set on a field by field basis,
+        # not on a (particle type, field) basis
+        if field_dtypes is None: field_dtypes = {}
+        self._field_dtypes = field_dtypes
+
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
@@ -373,7 +390,7 @@
         # in the GADGET-2 user guide.
 
         f = open(self.parameter_filename, "rb")
-        hh = ">" + "".join(["%s" % (b) for a,b in self._header_spec])
+        hh = self.endian + "".join(["%s" % (b) for a,b in self._header_spec])
         hvals = dict([(a, c) for (a, b), c in zip(self._header_spec,
                      struct.unpack(hh, f.read(struct.calcsize(hh))))])
         self._header_offset = f.tell()
@@ -388,8 +405,9 @@
         # This may not be correct.
         self.current_time = hvals["time"]
 
-        self.domain_left_edge = np.zeros(3, "float64") - 0.5
-        self.domain_right_edge = np.ones(3, "float64") + 0.5
+        # NOTE: These are now set in the main initializer.
+        #self.domain_left_edge = np.zeros(3, "float64") - 0.5
+        #self.domain_right_edge = np.ones(3, "float64") + 0.5
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
         self.periodicity = (True, True, True)
 

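The endian keyword above is prepended to the struct format string, so one header spec serves both big-endian ('>') tipsy files and little-endian ('<') output. A self-contained sketch of the pattern (hypothetical two-field header spec):

    import struct

    header_spec = (('time', 'd'), ('nbodies', 'i'))  # illustrative subset
    endian = '<'   # '>' for big-endian tipsy output

    hh = endian + ''.join(code for name, code in header_spec)
    raw = struct.pack(hh, 0.5, 128)   # stands in for f.read(struct.calcsize(hh))
    hvals = dict(zip([name for name, code in header_spec],
                     struct.unpack(hh, raw)))
    assert hvals['nbodies'] == 128
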
diff -r 05d417a01bbd9874ac07ab9a1ee743ee9ada59d0 -r 94dce22865dacd30b38dfeb8fe05c5ce709bff64 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -372,6 +372,7 @@
         return rv
 
     def _initialize_octree(self, domain, octree):
+        pf = domain.pf
         with open(domain.domain_filename, "rb") as f:
             f.seek(domain.pf._header_offset)
             for ptype in self._ptypes:
@@ -391,6 +392,11 @@
                             pos[:,1].min(), pos[:,1].max())
                 mylog.debug("Spanning: %0.3e .. %0.3e in z",
                             pos[:,2].min(), pos[:,2].max())
+                if np.any(pos.min(axis=0) < pf.domain_left_edge) or \
+                   np.any(pos.max(axis=0) > pf.domain_right_edge):
+                    raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
+                                           pf.domain_left_edge,
+                                           pf.domain_right_edge)
                 del pp
                 octree.add(pos, domain.domain_id)
 
@@ -412,10 +418,12 @@
         for ptype, field in self._fields:
             pfields = []
             if tp[ptype] == 0: continue
+            dtbase = domain.pf._field_dtypes.get(field, 'f')
+            ff = "%s%s" % (domain.pf.endian, dtbase)
             if field in _vector_fields:
-                dt = (field, [('x', '>f'), ('y', '>f'), ('z', '>f')])
+                dt = (field, [('x', ff), ('y', ff), ('z', ff)])
             else:
-                dt = (field, '>f')
+                dt = (field, ff)
             pds.setdefault(ptype, []).append(dt)
             field_list.append((ptype, field))
         for ptype in pds:

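The octree-initialization change above rejects out-of-bounds particles up front rather than corrupting the octree later. A hedged NumPy sketch of the check (a stand-in ValueError keeps it self-contained; the real code raises YTDomainOverflow, defined in the next hunk):

    import numpy as np

    def check_bounds(pos, dle, dre):
        # pos: (n, 3) particle positions; dle/dre: domain left/right edges.
        if np.any(pos.min(axis=0) < dle) or np.any(pos.max(axis=0) > dre):
            raise ValueError("Particle bounds %s and %s exceed domain "
                             "bounds %s and %s" % (pos.min(axis=0),
                             pos.max(axis=0), dle, dre))

    check_bounds(np.full((4, 3), 0.25), np.zeros(3), np.ones(3))  # in bounds
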
diff -r 05d417a01bbd9874ac07ab9a1ee743ee9ada59d0 -r 94dce22865dacd30b38dfeb8fe05c5ce709bff64 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py
+++ b/yt/utilities/exceptions.py
@@ -249,3 +249,14 @@
 
     def __str__(self):
         return "Data selector '%s' not implemented." % (self.class_name)
+
+class YTDomainOverflow(YTException):
+    def __init__(self, mi, ma, dle, dre):
+        self.mi = mi
+        self.ma = ma
+        self.dle = dle
+        self.dre = dre
+
+    def __str__(self):
+        return "Particle bounds %s and %s exceed domain bounds %s and %s" % (
+            self.mi, self.ma, self.dle, self.dre)


https://bitbucket.org/yt_analysis/yt/commits/fa49ef603bcf/
Changeset:   fa49ef603bcf
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-20 16:26:26
Summary:     Minor fixes for particle octrees for dx and domain size.
Affected #:  2 files

diff -r 94dce22865dacd30b38dfeb8fe05c5ce709bff64 -r fa49ef603bcf478298d6fefd914f7d050304c831 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -362,7 +362,7 @@
         if domain_left_edge is None:
             domain_left_edge = np.zeros(3, "float64") - 0.5
         if domain_right_edge is None:
-            domain_right_edge = np.ones(3, "float64") + 0.5
+            domain_right_edge = np.zeros(3, "float64") + 0.5
 
         self.domain_left_edge = np.array(domain_left_edge, dtype="float64")
         self.domain_right_edge = np.array(domain_right_edge, dtype="float64")

diff -r 94dce22865dacd30b38dfeb8fe05c5ce709bff64 -r fa49ef603bcf478298d6fefd914f7d050304c831 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -1303,7 +1303,7 @@
                 #IND Corresponding integer index on the root octs
                 #CP Center  point of that oct
                 pp[i] = pos[p, i]
-                dds[i] = (self.DRE[i] + self.DLE[i])/self.nn[i]
+                dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
                 ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
                 cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
             cur = self.root_mesh[ind[0]][ind[1]][ind[2]]

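The one-character fix above matters because dds is the root-cell width, (DRE - DLE)/nn; with the tipsy-style [-0.5, 0.5] domain the old sum gives a width of zero. A quick NumPy sketch of the corrected mapping from position to root-cell index:

    import numpy as np

    DLE = np.full(3, -0.5)           # tipsy-style domain edges
    DRE = np.full(3, 0.5)
    nn = np.array([2, 2, 2])         # root mesh dimensions
    pp = np.array([0.4, -0.4, 0.1])  # a particle position

    dds = (DRE - DLE) / nn           # 0.5 per cell; (DRE + DLE)/nn would be 0
    ind = ((pp - DLE) / dds).astype(np.int64)
    assert list(ind) == [1, 0, 1]
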

https://bitbucket.org/yt_analysis/yt/commits/145a6c342daa/
Changeset:   145a6c342daa
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-22 18:15:04
Summary:     When fields are already in field_data for octrees, we don't need to reshape them.
Affected #:  1 file

diff -r fa49ef603bcf478298d6fefd914f7d050304c831 -r 145a6c342daafe2991a40304daa02c09df0d2e5d yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -99,9 +99,12 @@
         finfo = self.pf._get_field_info(*fields[0])
         if not finfo.particle_type:
             nz = self._num_zones + 2*self._num_ghost_zones
-            n_oct = tr.shape[0] / (nz**3.0)
-            tr.shape = (n_oct, nz, nz, nz)
-            tr = np.rollaxis(tr, 0, 4)
+            # We may need to reshape the field, if it is being queried from
+            # field_data.  If it's already cached, it just passes through.
+            if len(tr.shape) < 4: 
+                n_oct = tr.shape[0] / (nz**3.0)
+                tr.shape = (n_oct, nz, nz, nz)
+                tr = np.rollaxis(tr, 0, 4)
             return tr
         return tr
 

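The guard above reshapes the flat per-cell buffer into (nz, nz, nz, n_oct) form only on first access; an already-cached four-dimensional array passes through untouched. A NumPy sketch with illustrative sizes:

    import numpy as np

    nz, n_oct = 2, 5
    tr = np.arange(n_oct * nz ** 3, dtype="float64")  # flat, as read in
    if len(tr.shape) < 4:
        tr.shape = (n_oct, nz, nz, nz)
        tr = np.rollaxis(tr, 0, 4)   # move the oct axis to the end
    assert tr.shape == (nz, nz, nz, n_oct)
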

https://bitbucket.org/yt_analysis/yt/commits/54328cbc3d8d/
Changeset:   54328cbc3d8d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-22 19:45:07
Summary:     Adding a domain_ind function to enable particle deposition.
Affected #:  2 files

diff -r 145a6c342daafe2991a40304daa02c09df0d2e5d -r 54328cbc3d8df9b3c90882f64721d72dcdbe473b yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -108,7 +108,17 @@
             return tr
         return tr
 
+    _domain_ind = None
+
+    @property
+    def domain_ind(self):
+        if self._domain_ind is None:
+            di = self.oct_handler.domain_ind(self.mask, self.domain.domain_id)
+            self._domain_ind = di
+        return self._domain_ind
+
     def deposit(self, positions, fields, method):
+        # Here we perform our particle deposition.
         pass
 
     def select(self, selector):

diff -r 145a6c342daafe2991a40304daa02c09df0d2e5d -r 54328cbc3d8df9b3c90882f64721d72dcdbe473b yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -1490,6 +1490,7 @@
         cdef Oct *o
         n = mask.shape[0]
         nm = 0
+        # This could perhaps be faster if we 
         for oi in range(n):
             o = self.oct_list[oi]
             if o.domain != domain_id: continue
@@ -1512,3 +1513,30 @@
                         use = m2[i, j, k, nm] = 1
             nm += use
         return m2.astype("bool")
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # Here we once again do something similar to the other functions.  We
+        # need a set of indices into the final reduced, masked values.  The
+        # indices will be domain.n long, and will be of type int64.  This way,
+        # we can get the Oct through a .get() call, then use Oct.ind as an
+        # index into this newly created array, then finally use the returned
+        # index into the domain subset array for deposition.
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef Oct *o
+        offset = self.dom_offsets[domain_id]
+        noct = self.dom_offsets[domain_id + 1] - offset
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(noct, 'int64')
+        nm = 0
+        for oi in range(noct):
+            ind[oi] = -1
+            o = self.oct_list[oi + offset]
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            if use == 1:
+                ind[oi] = nm
+            nm += use
+        return ind

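domain_ind above hands every oct of a domain a compacted index, or -1 when the mask selects none of its cells; the compacted index later serves as the offset into the deposition buffer. The same bookkeeping in vectorized NumPy (illustrative two-cell octs):

    import numpy as np

    mask = np.array([[1, 0],
                     [0, 0],
                     [1, 1]], dtype=bool)   # (n_octs, n_cells)
    use = mask.any(axis=1)
    ind = np.where(use, np.cumsum(use) - 1, -1)
    assert list(ind) == [0, -1, 1]
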

https://bitbucket.org/yt_analysis/yt/commits/5177df2099d2/
Changeset:   5177df2099d2
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-22 19:49:47
Summary:     Merging octree spatial work.
Affected #:  5 files

diff -r 36e0d142b508b5432051fc45937ffdd0125b32af -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -99,13 +99,26 @@
         finfo = self.pf._get_field_info(*fields[0])
         if not finfo.particle_type:
             nz = self._num_zones + 2*self._num_ghost_zones
-            n_oct = tr.shape[0] / (nz**3.0)
-            tr.shape = (n_oct, nz, nz, nz)
-            tr = np.rollaxis(tr, 0, 4)
+            # We may need to reshape the field, if it is being queried from
+            # field_data.  If it's already cached, it just passes through.
+            if len(tr.shape) < 4: 
+                n_oct = tr.shape[0] / (nz**3.0)
+                tr.shape = (n_oct, nz, nz, nz)
+                tr = np.rollaxis(tr, 0, 4)
             return tr
         return tr
 
+    _domain_ind = None
+
+    @property
+    def domain_ind(self):
+        if self._domain_ind is None:
+            di = self.oct_handler.domain_ind(self.mask, self.domain.domain_id)
+            self._domain_ind = di
+        return self._domain_ind
+
     def deposit(self, positions, fields, method):
+        # Here we perform our particle deposition.
         pass
 
     def select(self, selector):

diff -r 36e0d142b508b5432051fc45937ffdd0125b32af -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -351,10 +351,27 @@
                     ('dummy',   'i'))
 
     def __init__(self, filename, data_style="tipsy",
-                 root_dimensions = 64):
+                 root_dimensions = 64, endian = ">",
+                 field_dtypes = None,
+                 domain_left_edge = None,
+                 domain_right_edge = None):
+        self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        if domain_left_edge is None:
+            domain_left_edge = np.zeros(3, "float64") - 0.5
+        if domain_right_edge is None:
+            domain_right_edge = np.zeros(3, "float64") + 0.5
+
+        self.domain_left_edge = np.array(domain_left_edge, dtype="float64")
+        self.domain_right_edge = np.array(domain_right_edge, dtype="float64")
+
+        # My understanding is that dtypes are set on a field by field basis,
+        # not on a (particle type, field) basis
+        if field_dtypes is None: field_dtypes = {}
+        self._field_dtypes = field_dtypes
+
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
@@ -373,7 +390,7 @@
         # in the GADGET-2 user guide.
 
         f = open(self.parameter_filename, "rb")
-        hh = ">" + "".join(["%s" % (b) for a,b in self._header_spec])
+        hh = self.endian + "".join(["%s" % (b) for a,b in self._header_spec])
         hvals = dict([(a, c) for (a, b), c in zip(self._header_spec,
                      struct.unpack(hh, f.read(struct.calcsize(hh))))])
         self._header_offset = f.tell()
@@ -388,8 +405,9 @@
         # This may not be correct.
         self.current_time = hvals["time"]
 
-        self.domain_left_edge = np.zeros(3, "float64") - 0.5
-        self.domain_right_edge = np.ones(3, "float64") + 0.5
+        # NOTE: These are now set in the main initializer.
+        #self.domain_left_edge = np.zeros(3, "float64") - 0.5
+        #self.domain_right_edge = np.ones(3, "float64") + 0.5
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
         self.periodicity = (True, True, True)
 

diff -r 36e0d142b508b5432051fc45937ffdd0125b32af -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -372,6 +372,7 @@
         return rv
 
     def _initialize_octree(self, domain, octree):
+        pf = domain.pf
         with open(domain.domain_filename, "rb") as f:
             f.seek(domain.pf._header_offset)
             for ptype in self._ptypes:
@@ -391,6 +392,11 @@
                             pos[:,1].min(), pos[:,1].max())
                 mylog.debug("Spanning: %0.3e .. %0.3e in z",
                             pos[:,2].min(), pos[:,2].max())
+                if np.any(pos.min(axis=0) < pf.domain_left_edge) or \
+                   np.any(pos.max(axis=0) > pf.domain_right_edge):
+                    raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
+                                           pf.domain_left_edge,
+                                           pf.domain_right_edge)
                 del pp
                 octree.add(pos, domain.domain_id)
 
@@ -412,10 +418,12 @@
         for ptype, field in self._fields:
             pfields = []
             if tp[ptype] == 0: continue
+            dtbase = domain.pf._field_dtypes.get(field, 'f')
+            ff = "%s%s" % (domain.pf.endian, dtbase)
             if field in _vector_fields:
-                dt = (field, [('x', '>f'), ('y', '>f'), ('z', '>f')])
+                dt = (field, [('x', ff), ('y', ff), ('z', ff)])
             else:
-                dt = (field, '>f')
+                dt = (field, ff)
             pds.setdefault(ptype, []).append(dt)
             field_list.append((ptype, field))
         for ptype in pds:

diff -r 36e0d142b508b5432051fc45937ffdd0125b32af -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -733,7 +733,7 @@
         # Note also that typically when something calls domain_and, they will 
         # use a logical_any along the oct axis.  Here we don't do that.
         # Note also that we change the shape of the returned array.
-        cdef np.int64_t i, j, k, oi, n, nm
+        cdef np.int64_t i, j, k, oi, n, nm, use
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
         n = mask.shape[0]
@@ -1371,7 +1371,7 @@
                 #IND Corresponding integer index on the root octs
                 #CP Center  point of that oct
                 pp[i] = pos[p, i]
-                dds[i] = (self.DRE[i] + self.DLE[i])/self.nn[i]
+                dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
                 ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
                 cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
             cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
@@ -1547,3 +1547,67 @@
                 m2[o.local_ind, i] = mask[o.local_ind, i]
         return m2
 
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm, use
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        # This could perhaps be faster if we 
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.local_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # Here we once again do something similar to the other functions.  We
+        # need a set of indices into the final reduced, masked values.  The
+        # indices will be domain.n long, and will be of type int64.  This way,
+        # we can get the Oct through a .get() call, then use Oct.ind as an
+        # index into this newly created array, then finally use the returned
+        # index into the domain subset array for deposition.
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef Oct *o
+        offset = self.dom_offsets[domain_id]
+        noct = self.dom_offsets[domain_id + 1] - offset
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(noct, 'int64')
+        nm = 0
+        for oi in range(noct):
+            ind[oi] = -1
+            o = self.oct_list[oi + offset]
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            if use == 1:
+                ind[oi] = nm
+            nm += use
+        return ind

diff -r 36e0d142b508b5432051fc45937ffdd0125b32af -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py
+++ b/yt/utilities/exceptions.py
@@ -249,3 +249,14 @@
 
     def __str__(self):
         return "Data selector '%s' not implemented." % (self.class_name)
+
+class YTDomainOverflow(YTException):
+    def __init__(self, mi, ma, dle, dre):
+        self.mi = mi
+        self.ma = ma
+        self.dle = dle
+        self.dre = dre
+
+    def __str__(self):
+        return "Particle bounds %s and %s exceed domain bounds %s and %s" % (
+            self.mi, self.ma, self.dle, self.dre)


https://bitbucket.org/yt_analysis/yt/commits/71d137cd09df/
Changeset:   71d137cd09df
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-25 12:44:40
Summary:     Particle deposit first draft.

This renames Chris's oct_deposit to particle_deposit and adds the first pass at
a class for creating deposition routines.
Affected #:  4 files

diff -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 -r 71d137cd09df25305b367ea7d33afb9f990fa5ce yt/geometry/oct_deposit.pyx
--- a/yt/geometry/oct_deposit.pyx
+++ /dev/null
@@ -1,158 +0,0 @@
-"""
-Particle Deposition onto Octs
-
-Author: Christopher Moody <chris.e.moody at gmail.com>
-Affiliation: UC Santa Cruz
-Author: Matthew Turk <matthewturk at gmail.com>
-Affiliation: Columbia University
-Homepage: http://yt.enzotools.org/
-License:
-  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
-
-  This file is part of yt.
-
-  yt is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 3 of the License, or
-  (at your option) any later version.
-
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY; without even the implied warranty of
-  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-  GNU General Public License for more details.
-
-  You should have received a copy of the GNU General Public License
-  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-"""
-
-from libc.stdlib cimport malloc, free
-cimport numpy as np
-import numpy as np
-cimport cython
-
-from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
-
-# Mode functions
-ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)
-cdef np.float64_t opt_count(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += 1.0
-
-cdef np.float64_t opt_sum(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += pdata 
-
-cdef np.float64_t opt_diff(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += (data_in[index] - pdata) 
-
-cdef np.float64_t opt_wcount(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += weight
-
-cdef np.float64_t opt_wsum(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += pdata * weight
-
-cdef np.float64_t opt_wdiff(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += (data_in[index] - pdata) * weight
-
-# Selection functions
-ctypedef NOTSURE (*type_sel)(OctreeContainer, 
-                                np.ndarray[np.float64_t, ndim=1],
-                                np.float64_t)
-cdef NOTSURE select_nearest(OctreeContainer oct_handler,
-                            np.ndarray[np.float64_t, ndim=1] pos,
-                            np.float64_t radius):
-    #return only the nearest oct
-    pass
-
-
-cdef NOTSURE select_radius(OctreeContainer oct_handler,
-                            np.ndarray[np.float64_t, ndim=1] pos,
-                            np.float64_t radius):
-    #return a list of octs within the radius
-    pass
-    
-
-# Kernel functions
-ctypedef np.float64_t (*type_ker)(np.float64_t)
-cdef np.float64_t kernel_sph(np.float64_t x) nogil:
-    cdef np.float64_t kernel
-    if x <= 0.5:
-        kernel = 1.-6.*x*x*(1.-x)
-    elif x>0.5 and x<=1.0:
-        kernel = 2.*(1.-x)*(1.-x)*(1.-x)
-    else:
-        kernel = 0.
-    return kernel
-
-cdef np.float64_t kernel_null(np.float64_t x) nogil: return 0.0
-
-cdef deposit(OctreeContainer oct_handler, 
-        np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
-        np.ndarray[np.float64_t, ndim=2] pd, # particle fields
-        np.ndarray[np.float64_t, ndim=1] pr, # particle radius
-        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
-        np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
-        mode='count', selection='nearest', kernel='null'):
-    cdef type_opt fopt
-    cdef type_sel fsel
-    cdef type_ker fker
-    cdef long pi #particle index
-    cdef long nocts #number of octs in selection
-    cdef Oct oct 
-    cdef np.float64_t w
-    # Can we do this with dicts?
-    # Setup the function pointers
-    if mode == 'count':
-        fopt = opt_count
-    elif mode == 'sum':
-        fopt = opt_sum
-    elif mode == 'diff':
-        fopt = opt_diff
-    if mode == 'wcount':
-        fopt = opt_wcount
-    elif mode == 'wsum':
-        fopt = opt_wsum
-    elif mode == 'wdiff':
-        fopt = opt_wdiff
-    if selection == 'nearest':
-        fsel = select_nearest
-    elif selection == 'radius':
-        fsel = select_radius
-    if kernel == 'null':
-        fker = kernel_null
-    if kernel == 'sph':
-        fker = kernel_sph
-    for pi in range(particles):
-        octs = fsel(oct_handler, ppos[pi], pr[pi])
-        for oct in octs:
-            for cell in oct.cells:
-                w = fker(pr[pi],cell) 
-                weights.append(w)
-        norm = weights.sum()
-        for w, oct in zip(weights, octs):
-            for cell in oct.cells:
-                fopt(pd[pi], w/norm, oct.index, data_in, data_out)
-
-

diff -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 -r 71d137cd09df25305b367ea7d33afb9f990fa5ce yt/geometry/particle_deposit.pxd
--- /dev/null
+++ b/yt/geometry/particle_deposit.pxd
@@ -0,0 +1,47 @@
+"""
+Particle Deposition onto Octs
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+cimport numpy as np
+import numpy as np
+from libc.stdlib cimport malloc, free
+cimport cython
+
+from fp_utils cimport *
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+cdef extern from "alloca.h":
+    void *alloca(int)
+
+cdef inline int gind(int i, int j, int k, int dims[3]):
+    return ((k*dims[1])+j)*dims[0]+i
+
+cdef class ParticleDepositOperation:
+    # We assume each will allocate and define their own temporary storage
+    cdef np.int64_t nvals
+    cdef void process(self, int dim[3], np.float64_t left_edge[3],
+                      np.float64_t dds[3], np.int64_t offset,
+                      np.float64_t ppos[3], np.float64_t *fields)

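The gind helper in the new pxd flattens an (i, j, k) cell index for an arbitrary dims[3] grid, generalizing the fixed 2x2x2 packing used for octs. A pure-Python sketch with a round-trip check:

    def gind(i, j, k, dims):
        # Flatten (i, j, k) with i varying fastest, as in the pxd above.
        return ((k * dims[1]) + j) * dims[0] + i

    dims = (4, 3, 2)
    flat = [gind(i, j, k, dims) for k in range(dims[2])
                                for j in range(dims[1])
                                for i in range(dims[0])]
    assert flat == list(range(dims[0] * dims[1] * dims[2]))
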
diff -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 -r 71d137cd09df25305b367ea7d33afb9f990fa5ce yt/geometry/particle_deposit.pyx
--- /dev/null
+++ b/yt/geometry/particle_deposit.pyx
@@ -0,0 +1,232 @@
+"""
+Particle Deposition onto Cells
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+cimport numpy as np
+import numpy as np
+from libc.stdlib cimport malloc, free
+cimport cython
+
+from fp_utils cimport *
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+cdef class ParticleDepositOperation:
+    def __init__(self, nvals):
+        self.nvals = nvals
+
+    def initialize(self, *args):
+        raise NotImplementedError
+
+    def finalize(self, *args):
+        raise NotImplementedError
+
+    def process_octree(self, OctreeContainer octree,
+                     np.ndarray[np.int64_t, ndim=1] dom_ind,
+                     np.ndarray[np.float64_t, ndim=2] positions,
+                     fields = None):
+        raise NotImplementedError
+
+    def process_grid(self, gobj,
+                     np.ndarray[np.float64_t, ndim=2] positions,
+                     fields = None):
+        cdef int nf, i, j
+        if fields is None:
+            fields = []
+        nf = len(fields)
+        cdef np.float64_t **field_pointers, *field_vals, pos[3]
+        cdef np.ndarray[np.float64_t, ndim=1] tarr
+        field_pointers = <np.float64_t**> alloca(sizeof(np.float64_t *) * nf)
+        field_vals = <np.float64_t*>alloca(sizeof(np.float64_t) * nf)
+        for i in range(nf):
+            tarr = fields[i]
+            field_pointers[i] = <np.float64_t *> tarr.data
+        cdef np.float64_t dds[3], left_edge[3]
+        cdef int dims[3]
+        for i in range(3):
+            dds[i] = gobj.dds[i]
+            left_edge[i] = gobj.LeftEdge[i]
+            dims[i] = gobj.ActiveDimensions[i]
+        for i in range(positions.shape[0]):
+            # Now we process
+            for j in range(nf):
+                field_vals[j] = field_pointers[j][i]
+            for j in range(3):
+                pos[j] = positions[i, j]
+            self.process(dims, left_edge, dds, 0, pos, field_vals)
+
+    cdef void process(self, int dim[3], np.float64_t left_edge[3],
+                      np.float64_t dds[3], np.int64_t offset,
+                      np.float64_t ppos[3], np.float64_t *fields):
+        raise NotImplementedError
+
+cdef class CountParticles(ParticleDepositOperation):
+    cdef np.float64_t *count # float, for ease
+    cdef object ocount
+    def initialize(self):
+        self.ocount = np.zeros(self.nvals, dtype="float64")
+        cdef np.ndarray arr = self.ocount
+        self.count = <np.float64_t*> arr.data
+
+    cdef void process(self, int dim[3],
+                      np.float64_t left_edge[3], 
+                      np.float64_t dds[3],
+                      np.int64_t offset, # offset into IO field
+                      np.float64_t ppos[3], # this particle's position
+                      np.float64_t *fields # any other fields we need
+                      ):
+        # here we do our thing; this is the kernel
+        cdef int ii[3], i
+        for i in range(3):
+            ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
+        self.count[gind(ii[0], ii[1], ii[2], dim)] += 1
+        
+    def finalize(self):
+        return self.ocount
+
+"""
+# Mode functions
+ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)
+cdef np.float64_t opt_count(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += 1.0
+
+cdef np.float64_t opt_sum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata 
+
+cdef np.float64_t opt_diff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) 
+
+cdef np.float64_t opt_wcount(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += weight
+
+cdef np.float64_t opt_wsum(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += pdata * weight
+
+cdef np.float64_t opt_wdiff(np.float64_t pdata,
+                            np.float64_t weight,
+                            np.int64_t index,
+                            np.ndarray[np.float64_t, ndim=2] data_out, 
+                            np.ndarray[np.float64_t, ndim=2] data_in):
+    data_out[index] += (data_in[index] - pdata) * weight
+
+# Selection functions
+ctypedef NOTSURE (*type_sel)(OctreeContainer, 
+                                np.ndarray[np.float64_t, ndim=1],
+                                np.float64_t)
+cdef NOTSURE select_nearest(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return only the nearest oct
+    pass
+
+
+cdef NOTSURE select_radius(OctreeContainer oct_handler,
+                            np.ndarray[np.float64_t, ndim=1] pos,
+                            np.float64_t radius):
+    #return a list of octs within the radius
+    pass
+    
+
+# Kernel functions
+ctypedef np.float64_t (*type_ker)(np.float64_t)
+cdef np.float64_t kernel_sph(np.float64_t x) nogil:
+    cdef np.float64_t kernel
+    if x <= 0.5:
+        kernel = 1.-6.*x*x*(1.-x)
+    elif x>0.5 and x<=1.0:
+        kernel = 2.*(1.-x)*(1.-x)*(1.-x)
+    else:
+        kernel = 0.
+    return kernel
+
+cdef np.float64_t kernel_null(np.float64_t x) nogil: return 0.0
+
+cdef deposit(OctreeContainer oct_handler, 
+        np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
+        np.ndarray[np.float64_t, ndim=2] pd, # particle fields
+        np.ndarray[np.float64_t, ndim=1] pr, # particle radius
+        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
+        np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
+        mode='count', selection='nearest', kernel='null'):
+    cdef type_opt fopt
+    cdef type_sel fsel
+    cdef type_ker fker
+    cdef long pi #particle index
+    cdef long nocts #number of octs in selection
+    cdef Oct oct 
+    cdef np.float64_t w
+    # Can we do this with dicts?
+    # Setup the function pointers
+    if mode == 'count':
+        fopt = opt_count
+    elif mode == 'sum':
+        fopt = opt_sum
+    elif mode == 'diff':
+        fopt = opt_diff
+    elif mode == 'wcount':
+        fopt = opt_wcount
+    elif mode == 'wsum':
+        fopt = opt_wsum
+    elif mode == 'wdiff':
+        fopt = opt_wdiff
+    if selection == 'nearest':
+        fsel = select_nearest
+    elif selection == 'radius':
+        fsel = select_radius
+    if kernel == 'null':
+        fker = kernel_null
+    if kernel == 'sph':
+        fker = kernel_sph
+    for pi in range(ppos.shape[0]):
+        octs = fsel(oct_handler, ppos[pi], pr[pi])
+        weights = []
+        for oct in octs:
+            for cell in oct.cells:
+                w = fker(pr[pi], cell)
+                weights.append(w)
+        norm = sum(weights)
+        for w, oct in zip(weights, octs):
+            for cell in oct.cells:
+                fopt(pd[pi], w/norm, oct.index, data_in, data_out)
+"""

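The CountParticles kernel above is nearest-grid-point (NGP) deposition: each particle increments the single cell that contains it, and the commented-out kernel_sph is the unnormalized M4 cubic spline intended for later SPH-weighted modes. A minimal NumPy sketch of the NGP binning for one grid (illustrative only; nothing here is part of the commit):

    import numpy as np

    def ngp_count(positions, left_edge, dds, dims):
        # positions: (N, 3) float64; left_edge, dds, dims: per-axis grid
        # parameters, as passed to process() above.
        counts = np.zeros(dims, dtype="float64")
        # Truncation toward zero matches the <int> cast in the kernel.
        ii = ((positions - left_edge) / dds).astype("int64")
        for i, j, k in ii:
            counts[i, j, k] += 1.0
        return counts
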
diff -r 5177df2099d2a49f8ac3cb2cce448f8cae16b2b1 -r 71d137cd09df25305b367ea7d33afb9f990fa5ce yt/geometry/setup.py
--- a/yt/geometry/setup.py
+++ b/yt/geometry/setup.py
@@ -23,6 +23,13 @@
                 depends=["yt/utilities/lib/fp_utils.pxd",
                          "yt/geometry/oct_container.pxd",
                          "yt/geometry/selection_routines.pxd"])
+    config.add_extension("particle_deposit", 
+                ["yt/geometry/particle_deposit.pyx"],
+                include_dirs=["yt/utilities/lib/"],
+                libraries=["m"],
+                depends=["yt/utilities/lib/fp_utils.pxd",
+                         "yt/geometry/oct_container.pxd",
+                         "yt/geometry/particle_deposit.pxd"])
     config.add_extension("fake_octree", 
                 ["yt/geometry/fake_octree.pyx"],
                 include_dirs=["yt/utilities/lib/"],


https://bitbucket.org/yt_analysis/yt/commits/05a99508b3cd/
Changeset:   05a99508b3cd
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-25 16:25:26
Summary:     First draft of .deposit() for OctreeSubset objects.

This includes a few fixes to how particles are assigned to ParticleOctree
elements.
Affected #:  5 files

diff -r 71d137cd09df25305b367ea7d33afb9f990fa5ce -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -35,6 +35,7 @@
     NeedsDataField, \
     NeedsProperty, \
     NeedsParameter
+import yt.geometry.particle_deposit as particle_deposit
 
 class OctreeSubset(YTSelectionContainer):
     _spatial = True
@@ -98,16 +99,20 @@
             return tr
         finfo = self.pf._get_field_info(*fields[0])
         if not finfo.particle_type:
-            nz = self._num_zones + 2*self._num_ghost_zones
             # We may need to reshape the field, if it is being queried from
             # field_data.  If it's already cached, it just passes through.
-            if len(tr.shape) < 4: 
-                n_oct = tr.shape[0] / (nz**3.0)
-                tr.shape = (n_oct, nz, nz, nz)
-                tr = np.rollaxis(tr, 0, 4)
+            if len(tr.shape) < 4:
+                tr = self._reshape_vals(tr)
             return tr
         return tr
 
+    def _reshape_vals(self, arr):
+        nz = self._num_zones + 2*self._num_ghost_zones
+        n_oct = arr.shape[0] / (nz**3.0)
+        arr.shape = (n_oct, nz, nz, nz)
+        arr = np.rollaxis(arr, 0, 4)
+        return arr
+
     _domain_ind = None
 
     @property
@@ -117,9 +122,17 @@
             self._domain_ind = di
         return self._domain_ind
 
-    def deposit(self, positions, fields, method):
+    def deposit(self, positions, fields = None, method = None):
         # Here we perform our particle deposition.
-        pass
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        nvals = (self.domain_ind >= 0).sum() * 8
+        op = cls(nvals) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_octree(self.oct_handler, self.domain_ind, positions, fields)
+        vals = op.finalize()
+        return self._reshape_vals(vals)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:

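A hedged sketch of how the new deposit() entry point would be driven (the field tuple and container names are assumptions, not from the commit):

    import numpy as np

    # Gather particle positions from an OctreeSubset `subset` and count
    # particles per zone via the deposit_count operation registered below.
    pos = np.column_stack([subset["all", "particle_position_%s" % ax]
                           for ax in "xyz"])
    counts = subset.deposit(pos, method="count")
    # _reshape_vals returns the zones as (nz, nz, nz, n_oct).
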
diff -r 71d137cd09df25305b367ea7d33afb9f990fa5ce -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -39,6 +39,10 @@
     Oct *children[2][2][2]
     Oct *parent
 
+cdef struct OctInfo:
+    np.float64_t left_edge[3]
+    np.float64_t dds[3]
+
 cdef struct OctAllocationContainer
 cdef struct OctAllocationContainer:
     np.int64_t n
@@ -54,7 +58,7 @@
     cdef np.float64_t DLE[3], DRE[3]
     cdef public int nocts
     cdef public int max_domain
-    cdef Oct* get(self, np.float64_t ppos[3], int *ii = ?)
+    cdef Oct* get(self, np.float64_t ppos[3], OctInfo *oinfo = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
 

diff -r 71d137cd09df25305b367ea7d33afb9f990fa5ce -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -142,7 +142,7 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef Oct *get(self, np.float64_t ppos[3], int *ii = NULL):
+    cdef Oct *get(self, np.float64_t ppos[3], OctInfo *oinfo = NULL):
         #Given a floating point position, retrieve the most
         #refined oct at that time
         cdef np.int64_t ind[3]
@@ -150,28 +150,24 @@
         cdef Oct *cur
         cdef int i
         for i in range(3):
-            pp[i] = ppos[i] - self.DLE[i]
             dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
-            cp[i] = (ind[i] + 0.5) * dds[i]
+            ind[i] = <np.int64_t> ((ppos[i] - self.DLE[i])/dds[i])
+            cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
         cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
         while cur.children[0][0][0] != NULL:
             for i in range(3):
                 dds[i] = dds[i] / 2.0
-                if cp[i] > pp[i]:
+                if cp[i] > ppos[i]:
                     ind[i] = 0
                     cp[i] -= dds[i] / 2.0
                 else:
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
             cur = cur.children[ind[0]][ind[1]][ind[2]]
-        if ii != NULL: return cur
+        if oinfo == NULL: return cur
         for i in range(3):
-            if cp[i] > pp[i]:
-                ind[i] = 0
-            else:
-                ind[i] = 1
-        ii[0] = ((ind[2]*2)+ind[1])*2+ind[0]
+            oinfo.dds[i] = dds[i] # Cell width
+            oinfo.left_edge[i] = cp[i] - dds[i]
         return cur
 
     @cython.boundscheck(False)
@@ -982,28 +978,6 @@
 
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
-    #this class is specifically for the NMSU ART
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def deposit_particle_cumsum(self,
-                                np.ndarray[np.float64_t, ndim=2] ppos, 
-                                np.ndarray[np.float64_t, ndim=1] pdata,
-                                np.ndarray[np.float64_t, ndim=1] mask,
-                                np.ndarray[np.float64_t, ndim=1] dest,
-                                fields, int domain):
-        cdef Oct *o
-        cdef OctAllocationContainer *dom = self.domains[domain - 1]
-        cdef np.float64_t pos[3]
-        cdef int ii
-        cdef int no = ppos.shape[0]
-        for n in range(no):
-            for j in range(3):
-                pos[j] = ppos[n,j]
-            o = self.get(pos, &ii) 
-            if mask[o.local_ind,ii]==0: continue
-            dest[o.ind+ii] += pdata[n]
-        return dest
 
     @cython.boundscheck(True)
     @cython.wraparound(False)
@@ -1262,6 +1236,7 @@
         cdef int max_level = 0
         self.oct_list = <Oct**> malloc(sizeof(Oct*)*self.nocts)
         cdef np.int64_t i = 0
+        cdef np.int64_t dom_ind
         cdef ParticleArrays *c = self.first_sd
         while c != NULL:
             self.oct_list[i] = c.oct
@@ -1280,11 +1255,15 @@
         self.dom_offsets = <np.int64_t *>malloc(sizeof(np.int64_t) *
                                                 (self.max_domain + 3))
         self.dom_offsets[0] = 0
+        dom_ind = 0
         for i in range(self.nocts):
             self.oct_list[i].local_ind = i
+            self.oct_list[i].ind = dom_ind
+            dom_ind += 1
             if self.oct_list[i].domain > cur_dom:
                 cur_dom = self.oct_list[i].domain
                 self.dom_offsets[cur_dom + 1] = i
+                dom_ind = 0
         self.dom_offsets[cur_dom + 2] = self.nocts
 
     cdef Oct* allocate_oct(self):
@@ -1597,8 +1576,9 @@
         # index into the domain subset array for deposition.
         cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
         cdef Oct *o
-        offset = self.dom_offsets[domain_id]
-        noct = self.dom_offsets[domain_id + 1] - offset
+        # For particle octrees, domain 0 is special and means non-leaf nodes.
+        offset = self.dom_offsets[domain_id + 1]
+        noct = self.dom_offsets[domain_id + 2] - offset
         cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(noct, 'int64')
         nm = 0
         for oi in range(noct):

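The get() rewrite above is a standard top-down octree descent: start from the root-mesh cell containing the point, then repeatedly halve the cell width and pick the child on the correct side of the running center. A pure-Python sketch of the same walk (illustrative; children are None at leaves here rather than NULL):

    def locate(root_mesh, DLE, DRE, nn, ppos):
        # Root-level cell width and index of the cell containing ppos.
        dds = [(DRE[i] - DLE[i]) / nn[i] for i in range(3)]
        ind = [int((ppos[i] - DLE[i]) / dds[i]) for i in range(3)]
        cp = [(ind[i] + 0.5) * dds[i] + DLE[i] for i in range(3)]
        cur = root_mesh[ind[0]][ind[1]][ind[2]]
        while cur.children[0][0][0] is not None:
            for i in range(3):
                dds[i] /= 2.0
                if cp[i] > ppos[i]:   # point left of center: child 0
                    ind[i] = 0
                    cp[i] -= dds[i] / 2.0
                else:                 # point right of center: child 1
                    ind[i] = 1
                    cp[i] += dds[i] / 2.0
            cur = cur.children[ind[0]][ind[1]][ind[2]]
        return cur
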
diff -r 71d137cd09df25305b367ea7d33afb9f990fa5ce -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -31,7 +31,8 @@
 cimport cython
 
 from fp_utils cimport *
-from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+from oct_container cimport Oct, OctAllocationContainer, \
+    OctreeContainer, OctInfo
 
 cdef class ParticleDepositOperation:
     def __init__(self, nvals):
@@ -47,8 +48,34 @@
                      np.ndarray[np.int64_t, ndim=1] dom_ind,
                      np.ndarray[np.float64_t, ndim=2] positions,
                      fields = None):
-        raise NotImplementedError
-
+        cdef int nf, i, j
+        if fields is None:
+            fields = []
+        nf = len(fields)
+        cdef np.float64_t **field_pointers, *field_vals, pos[3]
+        cdef np.ndarray[np.float64_t, ndim=1] tarr
+        field_pointers = <np.float64_t**> alloca(sizeof(np.float64_t *) * nf)
+        field_vals = <np.float64_t*>alloca(sizeof(np.float64_t) * nf)
+        for i in range(nf):
+            tarr = fields[i]
+            field_pointers[i] = <np.float64_t *> tarr.data
+        cdef int dims[3]
+        dims[0] = dims[1] = dims[2] = 2
+        cdef OctInfo oi
+        cdef np.int64_t offset
+        cdef Oct *oct
+        for i in range(positions.shape[0]):
+            # We should check if particle remains inside the Oct here
+            for j in range(nf):
+                field_vals[j] = field_pointers[j][i]
+            for j in range(3):
+                pos[j] = positions[i, j]
+            oct = octree.get(pos, &oi)
+            #print oct.local_ind, oct.pos[0], oct.pos[1], oct.pos[2]
+            offset = dom_ind[oct.ind]
+            self.process(dims, oi.left_edge, oi.dds,
+                         offset, pos, field_vals)
+        
     def process_grid(self, gobj,
                      np.ndarray[np.float64_t, ndim=2] positions,
                      fields = None):
@@ -84,12 +111,13 @@
 
 cdef class CountParticles(ParticleDepositOperation):
     cdef np.float64_t *count # float, for ease
-    cdef object ocount
+    cdef public object ocount
     def initialize(self):
         self.ocount = np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray arr = self.ocount
         self.count = <np.float64_t*> arr.data
 
+    @cython.cdivision(True)
     cdef void process(self, int dim[3],
                       np.float64_t left_edge[3], 
                       np.float64_t dds[3],
@@ -101,11 +129,15 @@
         cdef int ii[3], i
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
-        self.count[gind(ii[0], ii[1], ii[2], dim)] += 1
+        #print "Depositing into", offset,
+        #print gind(ii[0], ii[1], ii[2], dim)
+        self.count[gind(ii[0], ii[1], ii[2], dim) + offset] += 1
         
     def finalize(self):
         return self.ocount
 
+deposit_count = CountParticles
+
 """
 # Mode functions
 ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)

diff -r 71d137cd09df25305b367ea7d33afb9f990fa5ce -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 yt/utilities/exceptions.py
--- a/yt/utilities/exceptions.py
+++ b/yt/utilities/exceptions.py
@@ -250,6 +250,13 @@
     def __str__(self):
         return "Data selector '%s' not implemented." % (self.class_name)
 
+class YTParticleDepositionNotImplemented(YTException):
+    def __init__(self, class_name):
+        self.class_name = class_name
+
+    def __str__(self):
+        return "Particle deposition method '%s' not implemented." % (self.class_name)
+
 class YTDomainOverflow(YTException):
     def __init__(self, mi, ma, dle, dre):
         self.mi = mi


https://bitbucket.org/yt_analysis/yt/commits/2219b4075dac/
Changeset:   2219b4075dac
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-25 16:59:18
Summary:     The number of octs should be half the number of zones for particle octrees.
Affected #:  1 file

diff -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 -r 2219b4075dacf4f176372adf425174e23eb49a33 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -96,7 +96,7 @@
         total_particles = sum(sum(d.total_particles.values())
                               for d in self.domains)
         self.oct_handler = ParticleOctreeContainer(
-            self.parameter_file.domain_dimensions,
+            self.parameter_file.domain_dimensions/2,
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         self.oct_handler.n_ref = 64

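To make the factor of two concrete (a worked example, not from the commit): each oct spans two zones per axis, so a root mesh of domain_dimensions/2 octs reproduces the full zone count.

    dd = 64                    # zones per axis (illustrative)
    root_octs = dd // 2        # 32 octs per axis, as in the diff above
    assert root_octs**3 * 8 == dd**3   # 2x2x2 = 8 zones per oct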

https://bitbucket.org/yt_analysis/yt/commits/3ee0f2ec3e20/
Changeset:   3ee0f2ec3e20
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-25 19:25:24
Summary:     Adding domain_ind for fluid octrees, fixing an oct/cell width confusion.
Affected #:  1 file

diff -r 2219b4075dacf4f176372adf425174e23eb49a33 -r 3ee0f2ec3e2054f16867d3125aadedcdacc3ecba yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -166,8 +166,8 @@
             cur = cur.children[ind[0]][ind[1]][ind[2]]
         if oinfo == NULL: return cur
         for i in range(3):
-            oinfo.dds[i] = dds[i] # Cell width
-            oinfo.left_edge[i] = cp[i] - dds[i]
+            oinfo.dds[i] = dds[i]/2.0 # Cell width
+            oinfo.left_edge[i] = cp[i] - dds[i]/2.0
         return cur
 
     @cython.boundscheck(False)
@@ -755,6 +755,27 @@
             nm += use
         return m2.astype("bool")
 
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        # For particle octrees, domain 0 is special and means non-leaf nodes.
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n_assigned, 'int64')
+        nm = 0
+        for oi in range(cur.n_assigned):
+            ind[oi] = -1
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.local_ind, i] == 1: use = 1
+            if use == 1:
+                ind[o.ind] = nm
+            nm += use
+        return ind
+
     def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct

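domain_ind builds a compacted index map: every oct with at least one selected cell gets the next consecutive slot in the output array, and everything else maps to -1. The same compaction in NumPy (illustrative):

    import numpy as np

    def compress_indices(selected):
        # selected: bool array, one entry per oct in allocation order.
        ind = np.full(selected.size, -1, dtype="int64")
        ind[selected] = np.arange(selected.sum())
        return ind

    # compress_indices(np.array([True, False, True])) -> [0, -1, 1]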

https://bitbucket.org/yt_analysis/yt/commits/dbc1ac2e558e/
Changeset:   dbc1ac2e558e
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-04-26 12:00:13
Summary:     A few changes to get octree deposition closer to working.
Affected #:  4 files

diff -r 3ee0f2ec3e2054f16867d3125aadedcdacc3ecba -r dbc1ac2e558e1c170a765648c6120555dcf3f79e yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -127,10 +127,11 @@
         cls = getattr(particle_deposit, "deposit_%s" % method, None)
         if cls is None:
             raise YTParticleDepositionNotImplemented(method)
-        nvals = (self.domain_ind >= 0).sum() * 8
+        nvals = self.domain_ind.size * 8
         op = cls(nvals) # We allocate number of zones, not number of octs
         op.initialize()
-        op.process_octree(self.oct_handler, self.domain_ind, positions, fields)
+        op.process_octree(self.oct_handler, self.domain_ind, positions, fields,
+                          self.domain.domain_id)
         vals = op.finalize()
         return self._reshape_vals(vals)
 

diff -r 3ee0f2ec3e2054f16867d3125aadedcdacc3ecba -r dbc1ac2e558e1c170a765648c6120555dcf3f79e yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -96,7 +96,7 @@
           display_field = False)
 
 def _Ones(field, data):
-    return np.ones(data.shape, dtype='float64')
+    return np.ones(data.ires.size, dtype='float64')
 add_field("Ones", function=_Ones,
           projection_conversion="unitary",
           display_field = False)

diff -r 3ee0f2ec3e2054f16867d3125aadedcdacc3ecba -r dbc1ac2e558e1c170a765648c6120555dcf3f79e yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -153,8 +153,10 @@
             dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
             ind[i] = <np.int64_t> ((ppos[i] - self.DLE[i])/dds[i])
             cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
-        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
-        while cur.children[0][0][0] != NULL:
+        next = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        # We want to stop recursing when there's nowhere else to go
+        while next != NULL:
+            cur = next
             for i in range(3):
                 dds[i] = dds[i] / 2.0
                 if cp[i] > ppos[i]:
@@ -163,7 +165,7 @@
                 else:
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
-            cur = cur.children[ind[0]][ind[1]][ind[2]]
+            next = cur.children[ind[0]][ind[1]][ind[2]]
         if oinfo == NULL: return cur
         for i in range(3):
             oinfo.dds[i] = dds[i]/2.0 # Cell width
@@ -707,7 +709,7 @@
 
     def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                    int domain_id):
-        cdef np.int64_t i, oi, n, 
+        cdef np.int64_t i, oi, n,  use
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
         cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
@@ -715,6 +717,7 @@
         n = mask.shape[0]
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
+            use = 0
             for i in range(8):
                 m2[o.local_ind, i] = mask[o.local_ind, i]
         return m2 # NOTE: This is uint8_t
@@ -763,10 +766,9 @@
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
         # For particle octrees, domain 0 is special and means non-leaf nodes.
-        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n_assigned, 'int64')
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n, 'int64') - 1
         nm = 0
-        for oi in range(cur.n_assigned):
-            ind[oi] = -1
+        for oi in range(cur.n):
             o = &cur.my_octs[oi]
             use = 0
             for i in range(8):
@@ -804,6 +806,33 @@
         print "DOMAIN % 3i HAS % 9i MISSED OCTS" % (curdom, nmissed)
         print "DOMAIN % 3i HAS % 9i UNASSIGNED OCTS" % (curdom, unassigned)
 
+    def check_refinement(self, int curdom):
+        cdef int pi, i, j, k, some_refined, some_unrefined
+        cdef Oct *oct
+        cdef int bad = 0
+        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
+        for pi in range(cont.n_assigned):
+            oct = &cont.my_octs[pi]
+            some_unrefined = 0
+            some_refined = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if oct.children[i][j][k] == NULL:
+                            some_unrefined = 1
+                        else:
+                            some_refined = 1
+            if some_unrefined == some_refined == 1:
+                #print "BAD", oct.ind, oct.local_ind
+                bad += 1
+                if curdom == 10 or curdom == 72:
+                    for i in range(2):
+                        for j in range(2):
+                            for k in range(2):
+                                print (oct.children[i][j][k] == NULL),
+                    print
+        print "BAD TOTAL", curdom, bad, cont.n_assigned
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -1535,7 +1564,7 @@
 
     def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                    int domain_id):
-        cdef np.int64_t i, oi, n, 
+        cdef np.int64_t i, oi, n, use
         cdef Oct *o
         cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
                 np.zeros((mask.shape[0], 8), 'uint8')
@@ -1543,6 +1572,7 @@
         for oi in range(n):
             o = self.oct_list[oi]
             if o.domain != domain_id: continue
+            use = 0
             for i in range(8):
                 m2[o.local_ind, i] = mask[o.local_ind, i]
         return m2

diff -r 3ee0f2ec3e2054f16867d3125aadedcdacc3ecba -r dbc1ac2e558e1c170a765648c6120555dcf3f79e yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -47,7 +47,7 @@
     def process_octree(self, OctreeContainer octree,
                      np.ndarray[np.int64_t, ndim=1] dom_ind,
                      np.ndarray[np.float64_t, ndim=2] positions,
-                     fields = None):
+                     fields = None, int domain_id = -1):
         cdef int nf, i, j
         if fields is None:
             fields = []
@@ -72,7 +72,8 @@
                 pos[j] = positions[i, j]
             oct = octree.get(pos, &oi)
             #print oct.local_ind, oct.pos[0], oct.pos[1], oct.pos[2]
-            offset = dom_ind[oct.ind]
+            offset = dom_ind[oct.ind] * 8
+            # Check that we found the oct ...
             self.process(dims, oi.left_edge, oi.dds,
                          offset, pos, field_vals)
         

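With the compacted dom_ind, the destination for a particle is offset = dom_ind[oct.ind] * 8 plus the index of the containing zone within the oct's 2x2x2 block. A sketch of that addressing (the exact flattening lives in gind() in fp_utils.pxd, so the layout below is an assumption):

    def zone_slot(dom_ind, oct_ind, ppos, left_edge, dds):
        # Zone index of the particle within its oct (0 or 1 per axis).
        ii = [int((ppos[i] - left_edge[i]) / dds[i]) for i in range(3)]
        # Assumed flattening of the 2x2x2 zones; gind() defines the real one.
        local = (ii[2] * 2 + ii[1]) * 2 + ii[0]
        return dom_ind[oct_ind] * 8 + local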

https://bitbucket.org/yt_analysis/yt/commits/a86882ab8661/
Changeset:   a86882ab8661
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-01 22:29:56
Summary:     Adding a few comments and container fields for oct subsets.
Affected #:  3 files

diff -r dbc1ac2e558e1c170a765648c6120555dcf3f79e -r a86882ab86610ebc6cfceb10ca9b37b0fd8a8f5f yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -66,6 +66,16 @@
         self._current_particle_type = 'all'
         self._current_fluid_type = self.pf.default_fluid_type
 
+    def _generate_container_field(self, field):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
+        if field == "dx":
+            return self._current_chunk.fwidth[:,0]
+        elif field == "dy":
+            return self._current_chunk.fwidth[:,1]
+        elif field == "dz":
+            return self._current_chunk.fwidth[:,2]
+
     def select_icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
                                         self.cell_count,

diff -r dbc1ac2e558e1c170a765648c6120555dcf3f79e -r a86882ab86610ebc6cfceb10ca9b37b0fd8a8f5f yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -168,8 +168,16 @@
             next = cur.children[ind[0]][ind[1]][ind[2]]
         if oinfo == NULL: return cur
         for i in range(3):
-            oinfo.dds[i] = dds[i]/2.0 # Cell width
-            oinfo.left_edge[i] = cp[i] - dds[i]/2.0
+            # This will happen *after* we quit out, so we need to back out the
+            # last change to cp
+            if ind[i] == 1:
+                cp[i] -= dds[i]/2.0 # Now centered
+            else:
+                cp[i] += dds[i]/2.0
+            # We don't need to change dds[i] as it has been halved from the
+            # oct width, thus making it already the cell width
+            oinfo.dds[i] = dds[i] # Cell width
+            oinfo.left_edge[i] = cp[i] - dds[i] # Center minus dds
         return cur
 
     @cython.boundscheck(False)
@@ -513,7 +521,7 @@
         n = mask.shape[0]
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
-        ci=0
+        ci = 0
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for k in range(2):
@@ -521,6 +529,9 @@
                     for i in range(2):
                         ii = ((k*2)+j)*2+i
                         if mask[o.local_ind, ii] == 0: continue
+                        # Note that we bit shift because o.pos is oct position,
+                        # not cell position, and it is with respect to octs,
+                        # not cells.
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
                         coords[ci, 2] = (o.pos[2] << 1) + k
@@ -636,7 +647,6 @@
                             ii = ((k*2)+j)*2+i
                             if mask[o.local_ind, ii] == 0: continue
                             dest[local_filled + offset] = source[o.local_ind*8+ii]
-                            # print 'oct_container.pyx:sourcemasked',o.level,local_filled, o.local_ind*8+ii, source[o.local_ind*8+ii]
                             local_filled += 1
         return local_filled
 
@@ -765,7 +775,6 @@
         cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
-        # For particle octrees, domain 0 is special and means non-leaf nodes.
         cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n, 'int64') - 1
         nm = 0
         for oi in range(cur.n):

diff -r dbc1ac2e558e1c170a765648c6120555dcf3f79e -r a86882ab86610ebc6cfceb10ca9b37b0fd8a8f5f yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -130,8 +130,6 @@
         cdef int ii[3], i
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
-        #print "Depositing into", offset,
-        #print gind(ii[0], ii[1], ii[2], dim)
         self.count[gind(ii[0], ii[1], ii[2], dim) + offset] += 1
         
     def finalize(self):

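The back-out in get() can be checked with a one-dimensional walk (a worked example, not from the commit). Take the domain [0, 1) with one root cell, a particle at ppos = 0.8, and a root whose right child [0.5, 1.0) is a leaf:

    dds, cp = 1.0, 0.5            # root cell width and center
    dds /= 2.0; cp += dds / 2.0   # level 1: ppos > cp, ind = 1, cp = 0.75
    dds /= 2.0; cp += dds / 2.0   # pass over the leaf: cp = 0.875, dds = 0.25
    cp -= dds / 2.0               # back out (ind == 1): cp = 0.75, oct center
    left_edge = cp - dds          # 0.5, and dds is already the cell width
    assert (left_edge, dds) == (0.5, 0.25)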

https://bitbucket.org/yt_analysis/yt/commits/ff219faca878/
Changeset:   ff219faca878
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-01 22:54:20
Summary:     Sometimes particles belong to ghost zones.

This sidesteps the issue, but it must be addressed.
Affected #:  1 file

diff -r a86882ab86610ebc6cfceb10ca9b37b0fd8a8f5f -r ff219faca878615de3cf9761fd86abbd34af55be yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -71,7 +71,10 @@
             for j in range(3):
                 pos[j] = positions[i, j]
             oct = octree.get(pos, &oi)
-            #print oct.local_ind, oct.pos[0], oct.pos[1], oct.pos[2]
+            # This next line is unfortunate.  Basically it says, sometimes we
+            # might have particles that belong to octs outside our domain.
+            if oct.domain != domain_id: continue
+            #print domain_id, oct.local_ind, oct.ind, oct.domain, oct.pos[0], oct.pos[1], oct.pos[2]
             offset = dom_ind[oct.ind] * 8
             # Check that we found the oct ...
             self.process(dims, oi.left_edge, oi.dds,

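The guard means particles whose containing oct belongs to another domain are simply skipped, so deposition near domain boundaries silently drops those contributions until ghost zones are handled properly. A sketch of the effect (illustrative names):

    def deposit_guarded(particles, octree, dom_ind, domain_id, process):
        # Sketch of the loop above: count how many particles are dropped
        # because their oct is owned by a different domain.
        skipped = 0
        for pos, vals in particles:
            oct = octree.get(pos)
            if oct.domain != domain_id:
                skipped += 1
                continue
            process(dom_ind[oct.ind] * 8, pos, vals)
        return skipped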

https://bitbucket.org/yt_analysis/yt/commits/494770bb6ed2/
Changeset:   494770bb6ed2
Branch:      yt-3.0
User:        sskory
Date:        2013-03-21 20:13:03
Summary:     Speeding up field detection for Enzo.
Affected #:  2 files

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 494770bb6ed2d1689fa64e84da63103c6dfb1787 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -30,6 +30,7 @@
 import stat
 import string
 import re
+import multiprocessing
 
 from itertools import izip
 
@@ -57,6 +58,8 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_blocking_call
 
+from data_structures_helper import get_field_names_helper
+
 class EnzoGrid(AMRGridPatch):
     """
     Class representing a single Enzo Grid instance.
@@ -288,7 +291,7 @@
         if self.parameter_file.parameters["VersionNumber"] > 2.0:
             active_particles = True
             nap = {}
-            for type in self.parameters.get("AppendActiveParticleType", []):
+            for type in self.parameters["AppendActiveParticleType"]:
                 nap[type] = []
         else:
             active_particles = False
@@ -309,7 +312,7 @@
             if active_particles:
                 ptypes = _next_token_line("PresentParticleTypes", f)
                 counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)]
-                for ptype in self.parameters.get("AppendActiveParticleType", []):
+                for ptype in self.parameters["AppendActiveParticleType"]:
                     if ptype in ptypes:
                         nap[ptype].append(counts[ptypes.index(ptype)])
                     else:
@@ -401,29 +404,28 @@
         self.max_level = self.grid_levels.max()
 
     def _detect_active_particle_fields(self):
-        select_grids = np.zeros(len(self.grids), dtype='int32')
-        for ptype in self.parameter_file["AppendActiveParticleType"]:
-            select_grids += self.grid_active_particle_count[ptype].flat
-        gs = self.grids[select_grids > 0]
-        grids = sorted((g for g in gs), key = lambda a: a.filename)
-        handle = last = None
         ap_list = self.parameter_file["AppendActiveParticleType"]
         _fields = dict((ap, []) for ap in ap_list)
         fields = []
-        for g in grids:
-            # We inspect every grid, for now, until we have a list of
-            # attributes in a defined location.
-            if last != g.filename:
-                if handle is not None: handle.close()
-                handle = h5py.File(g.filename, "r")
-            node = handle["/Grid%08i/Particles/" % g.id]
-            for ptype in (str(p) for p in node):
-                if ptype not in _fields: continue
-                for field in (str(f) for f in node[ptype]):
-                    _fields[ptype].append(field)
-                fields += [(ptype, field) for field in _fields.pop(ptype)]
-            if len(_fields) == 0: break
-        if handle is not None: handle.close()
+        for ptype in self.parameter_file["AppendActiveParticleType"]:
+            select_grids = self.grid_active_particle_count[ptype].flat
+            gs = self.grids[select_grids > 0]
+            grids = sorted((g for g in gs), key = lambda a: a.filename)
+            handle = last = None
+            for g in grids:
+                # We inspect every grid, for now, until we have a list of
+                # attributes in a defined location.
+                if last != g.filename:
+                    if handle is not None: handle.close()
+                    handle = h5py.File(g.filename)
+                node = handle["/Grid%08i/Particles/" % g.id]
+                for ptype in (str(p) for p in node):
+                    if ptype not in _fields: continue
+                    for field in (str(f) for f in node[ptype]):
+                        _fields[ptype].append(field)
+                    fields += [(ptype, field) for field in _fields.pop(ptype)]
+                if len(_fields) == 0: break
+            if handle is not None: handle.close()
         return set(fields)
 
     def _setup_derived_fields(self):
@@ -448,15 +450,19 @@
                 mylog.info("Gathering a field list (this may take a moment.)")
                 field_list = set()
                 random_sample = self._generate_random_grids()
+                jobs = []
+                result_queue = multiprocessing.Queue()
                 for grid in random_sample:
                     if not hasattr(grid, 'filename'): continue
-                    try:
-                        gf = self.io._read_field_names(grid)
-                    except self.io._read_exception:
-                        mylog.debug("Grid %s is a bit funky?", grid.id)
-                        continue
-                    mylog.debug("Grid %s has: %s", grid.id, gf)
-                    field_list = field_list.union(gf)
+                    p = multiprocessing.Process(target=get_field_names_helper,
+                        args = (grid.filename, grid.id, result_queue))
+                    jobs.append(p)
+                    p.start()
+                for grid in random_sample:
+                    res = result_queue.get()
+                    mylog.debug(res[1])
+                    if res[0] is not None:
+                        field_list = field_list.union(res[0])
             if "AppendActiveParticleType" in self.parameter_file.parameters:
                 ap_fields = self._detect_active_particle_fields()
                 field_list = list(set(field_list).union(ap_fields))
@@ -998,7 +1004,6 @@
         for p, v in self._conversion_override.items():
             self.conversion_factors[p] = v
         self.refine_by = self.parameters["RefineBy"]
-        self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)
         self.dimensionality = self.parameters["TopGridRank"]
         self.domain_dimensions = self.parameters["TopGridDimensions"]
         self.current_time = self.parameters["InitialTime"]

diff -r 6bdd0c4ba1619072b110248e87d120eb2e14d30f -r 494770bb6ed2d1689fa64e84da63103c6dfb1787 yt/frontends/enzo/data_structures_helper.py
--- /dev/null
+++ b/yt/frontends/enzo/data_structures_helper.py
@@ -0,0 +1,34 @@
+"""
+Data structures helper for Enzo
+
+Author: Stephen Skory <s at skory.us>
+Affiliation: Univ of Colorado
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2007-2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.utilities import hdf5_light_reader
+
+def get_field_names_helper(filename, id, results):
+    try:
+        names= hdf5_light_reader.ReadListOfDatasets(
+                    filename, "/Grid%08i" % id)
+        results.put((names, "Grid %s has: %s" % (grid.id, names)))
+    except:
+        results.put((None, "Grid %s is a bit funky?" % id))


https://bitbucket.org/yt_analysis/yt/commits/1b21c904aae6/
Changeset:   1b21c904aae6
Branch:      yt-3.0
User:        sskory
Date:        2013-03-21 20:16:24
Summary:     Restoring some stuff I accidentally changed (?).
Affected #:  1 file

diff -r 494770bb6ed2d1689fa64e84da63103c6dfb1787 -r 1b21c904aae6d287a00d354f6e83075fceb6aa0d yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -291,7 +291,7 @@
         if self.parameter_file.parameters["VersionNumber"] > 2.0:
             active_particles = True
             nap = {}
-            for type in self.parameters["AppendActiveParticleType"]:
+            for type in self.parameters.get("AppendActiveParticleType", []):
                 nap[type] = []
         else:
             active_particles = False
@@ -312,7 +312,7 @@
             if active_particles:
                 ptypes = _next_token_line("PresentParticleTypes", f)
                 counts = [int(c) for c in _next_token_line("ParticleTypeCounts", f)]
-                for ptype in self.parameters["AppendActiveParticleType"]:
+                for ptype in self.parameters.get("AppendActiveParticleType", []):
                     if ptype in ptypes:
                         nap[ptype].append(counts[ptypes.index(ptype)])
                     else:
@@ -1005,6 +1005,7 @@
             self.conversion_factors[p] = v
         self.refine_by = self.parameters["RefineBy"]
         self.dimensionality = self.parameters["TopGridRank"]
+        self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)
         self.domain_dimensions = self.parameters["TopGridDimensions"]
         self.current_time = self.parameters["InitialTime"]
         if "CurrentTimeIdentifier" in self.parameters:


https://bitbucket.org/yt_analysis/yt/commits/c2667b210a3b/
Changeset:   c2667b210a3b
Branch:      yt-3.0
User:        sskory
Date:        2013-03-22 01:02:31
Summary:     Replaced multiprocessing with threading.
Affected #:  2 files

diff -r 1b21c904aae6d287a00d354f6e83075fceb6aa0d -r c2667b210a3b44c37379ba6fd38420e8793feb81 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -30,7 +30,9 @@
 import stat
 import string
 import re
-import multiprocessing
+
+from threading import Thread
+import Queue
 
 from itertools import izip
 
@@ -58,7 +60,13 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_blocking_call
 
-from data_structures_helper import get_field_names_helper
+def get_field_names_helper(filename, id, results):
+    try:
+        names = hdf5_light_reader.ReadListOfDatasets(
+                    filename, "/Grid%08i" % id)
+        results.put((names, "Grid %s has: %s" % (id, names)))
+    except:
+        results.put((None, "Grid %s is a bit funky?" % id))
 
 class EnzoGrid(AMRGridPatch):
     """
@@ -451,13 +459,17 @@
                 field_list = set()
                 random_sample = self._generate_random_grids()
                 jobs = []
-                result_queue = multiprocessing.Queue()
+                result_queue = Queue.Queue()
+                # Start threads
                 for grid in random_sample:
                     if not hasattr(grid, 'filename'): continue
-                    p = multiprocessing.Process(target=get_field_names_helper,
+                    helper = Thread(target = get_field_names_helper, 
                         args = (grid.filename, grid.id, result_queue))
-                    jobs.append(p)
-                    p.start()
+                    jobs.append(helper)
+                    helper.start()
+                # Here we make sure they're finished.
+                for helper in jobs:
+                    helper.join()
                 for grid in random_sample:
                     res = result_queue.get()
                     mylog.debug(res[1])

diff -r 1b21c904aae6d287a00d354f6e83075fceb6aa0d -r c2667b210a3b44c37379ba6fd38420e8793feb81 yt/frontends/enzo/data_structures_helper.py
--- a/yt/frontends/enzo/data_structures_helper.py
+++ /dev/null
@@ -1,34 +0,0 @@
-"""
-Data structures helper for Enzo
-
-Author: Stephen Skory <s at skory.us>
-Affiliation: Univ of Colorado
-Homepage: http://yt-project.org/
-License:
-  Copyright (C) 2007-2013 Matthew Turk.  All Rights Reserved.
-
-  This file is part of yt.
-
-  yt is free software; you can redistribute it and/or modify
-  it under the terms of the GNU General Public License as published by
-  the Free Software Foundation; either version 3 of the License, or
-  (at your option) any later version.
-
-  This program is distributed in the hope that it will be useful,
-  but WITHOUT ANY WARRANTY; without even the implied warranty of
-  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-  GNU General Public License for more details.
-
-  You should have received a copy of the GNU General Public License
-  along with this program.  If not, see <http://www.gnu.org/licenses/>.
-"""
-
-from yt.utilities import hdf5_light_reader
-
-def get_field_names_helper(filename, id, results):
-    try:
-        names= hdf5_light_reader.ReadListOfDatasets(
-                    filename, "/Grid%08i" % id)
-        results.put((names, "Grid %s has: %s" % (grid.id, names)))
-    except:
-        results.put((None, "Grid %s is a bit funky?" % id))

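The pattern here is the standard Thread-plus-Queue fan-out: launch one worker per grid, join them all, then drain the results. Unlike the multiprocessing version two commits back, this shares memory and avoids pickling, at the cost of contending for the GIL during any pure-Python work. A self-contained Python 2 sketch (names illustrative; the real worker is get_field_names_helper above):

    import Queue
    from threading import Thread

    def worker(name, results):
        # Stand-in for get_field_names_helper: do the I/O, then report.
        results.put((name, "read %s" % name))

    results = Queue.Queue()
    jobs = [Thread(target=worker, args=(n, results)) for n in "abc"]
    for t in jobs:
        t.start()
    for t in jobs:
        t.join()      # make sure every worker finished before draining
    while not results.empty():
        print results.get()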

https://bitbucket.org/yt_analysis/yt/commits/36524214f584/
Changeset:   36524214f584
Branch:      yt-3.0
User:        sskory
Date:        2013-04-24 01:01:05
Summary:     Making field detection threading optional, and off by default.
Affected #:  2 files

diff -r c2667b210a3b44c37379ba6fd38420e8793feb81 -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -64,7 +64,8 @@
     answer_testing_bitwise = 'False',
     gold_standard_filename = 'gold006',
     local_standard_filename = 'local001',
-    sketchfab_api_key = 'None'
+    sketchfab_api_key = 'None',
+    thread_field_detection = 'False'
     )
 # Here is the upgrade.  We're actually going to parse the file in its entirety
 # here.  Then, if it has any of the Forbidden Sections, it will be rewritten

diff -r c2667b210a3b44c37379ba6fd38420e8793feb81 -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -65,7 +65,7 @@
         names = hdf5_light_reader.ReadListOfDatasets(
                     filename, "/Grid%08i" % id)
         results.put((names, "Grid %s has: %s" % (id, names)))
-    except:
+    except (exceptions.KeyError, hdf5_light_reader.ReadingError):
         results.put((None, "Grid %s is a bit funky?" % id))
 
 class EnzoGrid(AMRGridPatch):
@@ -418,22 +418,15 @@
         for ptype in self.parameter_file["AppendActiveParticleType"]:
             select_grids = self.grid_active_particle_count[ptype].flat
             gs = self.grids[select_grids > 0]
-            grids = sorted((g for g in gs), key = lambda a: a.filename)
-            handle = last = None
-            for g in grids:
-                # We inspect every grid, for now, until we have a list of
-                # attributes in a defined location.
-                if last != g.filename:
-                    if handle is not None: handle.close()
-                    handle = h5py.File(g.filename)
-                node = handle["/Grid%08i/Particles/" % g.id]
-                for ptype in (str(p) for p in node):
-                    if ptype not in _fields: continue
-                    for field in (str(f) for f in node[ptype]):
-                        _fields[ptype].append(field)
-                    fields += [(ptype, field) for field in _fields.pop(ptype)]
-                if len(_fields) == 0: break
-            if handle is not None: handle.close()
+            g = gs[0]
+            handle = h5py.File(g.filename)
+            node = handle["/Grid%08i/Particles/" % g.id]
+            for ptype in (str(p) for p in node):
+                if ptype not in _fields: continue
+                for field in (str(f) for f in node[ptype]):
+                    _fields[ptype].append(field)
+                fields += [(ptype, field) for field in _fields.pop(ptype)]
+            handle.close()
         return set(fields)
 
     def _setup_derived_fields(self):
@@ -458,23 +451,35 @@
                 mylog.info("Gathering a field list (this may take a moment.)")
                 field_list = set()
                 random_sample = self._generate_random_grids()
-                jobs = []
-                result_queue = Queue.Queue()
-                # Start threads
-                for grid in random_sample:
-                    if not hasattr(grid, 'filename'): continue
-                    helper = Thread(target = get_field_names_helper, 
-                        args = (grid.filename, grid.id, result_queue))
-                    jobs.append(helper)
-                    helper.start()
-                # Here we make sure they're finished.
-                for helper in jobs:
-                    helper.join()
-                for grid in random_sample:
-                    res = result_queue.get()
-                    mylog.debug(res[1])
-                    if res[0] is not None:
-                        field_list = field_list.union(res[0])
+                tothread = ytcfg.getboolean("yt","thread_field_detection")
+                if tothread:
+                    jobs = []
+                    result_queue = Queue.Queue()
+                    # Start threads
+                    for grid in random_sample:
+                        if not hasattr(grid, 'filename'): continue
+                        helper = Thread(target = get_field_names_helper, 
+                            args = (grid.filename, grid.id, result_queue))
+                        jobs.append(helper)
+                        helper.start()
+                    # Here we make sure they're finished.
+                    for helper in jobs:
+                        helper.join()
+                    for grid in random_sample:
+                        res = result_queue.get()
+                        mylog.debug(res[1])
+                        if res[0] is not None:
+                            field_list = field_list.union(res[0])
+                else:
+                    for grid in random_sample:
+                        if not hasattr(grid, 'filename'): continue
+                        try:
+                            gf = self.io._read_field_names(grid)
+                        except self.io._read_exception:
+                            mylog.debug("Grid %s is a bit funky?", grid.id)
+                            continue
+                        mylog.debug("Grid %s has: %s", grid.id, gf)
+                        field_list = field_list.union(gf)
             if "AppendActiveParticleType" in self.parameter_file.parameters:
                 ap_fields = self._detect_active_particle_fields()
                 field_list = list(set(field_list).union(ap_fields))

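The option is read through the ConfigParser-style ytcfg shown above; a hedged sketch of enabling it for one session (or, equivalently, setting thread_field_detection = True under the [yt] section of the configuration file):

    from yt.config import ytcfg
    # Opt in to threaded field detection; it is off by default.
    ytcfg.set("yt", "thread_field_detection", "True")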

https://bitbucket.org/yt_analysis/yt/commits/5e02fade47bd/
Changeset:   5e02fade47bd
Branch:      yt-3.0
User:        sskory
Date:        2013-04-24 01:01:48
Summary:     Merge.
Affected #:  19 files

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -1059,7 +1059,7 @@
 
     _fields = ["particle_position_%s" % ax for ax in 'xyz']
 
-    def __init__(self, data_source, dm_only=True):
+    def __init__(self, data_source, dm_only=True, redshift=-1):
         """
         Run hop on *data_source* with a given density *threshold*.  If
         *dm_only* is set, only run it on the dark matter particles, otherwise

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -210,6 +210,8 @@
         """
         Deletes a field
         """
+        if key not in self.field_data:
+            key = self._determine_fields(key)[0]
         del self.field_data[key]
 
     def _generate_field(self, field):

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -678,3 +678,37 @@
     return [np.sum(totals[:,i]) for i in range(n_fields)]
 add_quantity("TotalQuantity", function=_TotalQuantity,
                 combine_function=_combTotalQuantity, n_ret=2)
+
+def _ParticleDensityCenter(data,nbins=3,particle_type="all"):
+    """
+    Find the center of the particle density
+    by histogramming the particles iteratively.
+    """
+    pos = [data[(particle_type,"particle_position_%s"%ax)] for ax in "xyz"]
+    pos = np.array(pos).T
+    mas = data[(particle_type,"particle_mass")]
+    calc_radius= lambda x,y:np.sqrt(np.sum((x-y)**2.0,axis=1))
+    density = 0
+    if pos.shape[0]==0:
+        return -1.0,[-1.,-1.,-1.]
+    while pos.shape[0] > 1:
+        table,bins=np.histogramdd(pos,bins=nbins, weights=mas)
+        bin_size = min((np.max(bins,axis=1)-np.min(bins,axis=1))/nbins)
+        centeridx = np.where(table==table.max())
+        le = np.array([bins[0][centeridx[0][0]],
+                       bins[1][centeridx[1][0]],
+                       bins[2][centeridx[2][0]]])
+        re = np.array([bins[0][centeridx[0][0]+1],
+                       bins[1][centeridx[1][0]+1],
+                       bins[2][centeridx[2][0]+1]])
+        center = 0.5*(le+re)
+        idx = calc_radius(pos,center)<bin_size
+        pos, mas = pos[idx],mas[idx]
+        density = max(density,mas.sum()/bin_size**3.0)
+    return density, center
+def _combParticleDensityCenter(data,densities,centers):
+    i = np.argmax(densities)
+    return densities[i],centers[i]
+
+add_quantity("ParticleDensityCenter",function=_ParticleDensityCenter,
+             combine_function=_combParticleDensityCenter,n_ret=2)

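Given the add_quantity registration above, the new quantity is reached through the derived-quantities interface; a hedged usage sketch (the pf/data-object plumbing is illustrative):

    # Locate the densest particle clump in a region (n_ret=2: two values).
    dd = pf.h.all_data()
    density, center = dd.quantities["ParticleDensityCenter"](particle_type="all")
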
diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/data_objects/tests/test_fields.py
--- a/yt/data_objects/tests/test_fields.py
+++ b/yt/data_objects/tests/test_fields.py
@@ -81,7 +81,7 @@
                 v1 = g[self.field_name]
                 g.clear_data()
                 g.field_parameters.update(_sample_parameters)
-                assert_equal(v1, conv*field._function(field, g))
+                assert_array_almost_equal_nulp(v1, conv*field._function(field, g), 4)
 
 def test_all_fields():
     for field in FieldInfo:

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -30,6 +30,8 @@
 import stat
 import weakref
 import cStringIO
+import difflib
+import glob
 
 from yt.funcs import *
 from yt.geometry.oct_geometry_handler import \
@@ -37,9 +39,9 @@
 from yt.geometry.geometry_handler import \
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
-      StaticOutput
+    StaticOutput
 from yt.geometry.oct_container import \
-    RAMSESOctreeContainer
+    ARTOctreeContainer
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, NullFunc
 from .fields import \
@@ -52,20 +54,15 @@
     get_box_grids_level
 import yt.utilities.lib as amr_utils
 
-from .definitions import *
-from .io import _read_frecord
-from .io import _read_record
-from .io import _read_struct
+from yt.frontends.art.definitions import *
+from yt.utilities.fortran_utils import *
 from .io import _read_art_level_info
 from .io import _read_child_mask_level
 from .io import _read_child_level
 from .io import _read_root_level
-from .io import _read_record_size
-from .io import _skip_record
 from .io import _count_art_octs
 from .io import b2t
 
-
 import yt.frontends.ramses._ramses_reader as _ramses_reader
 
 from .fields import ARTFieldInfo, KnownARTFields
@@ -80,13 +77,9 @@
 from yt.utilities.physical_constants import \
     mass_hydrogen_cgs, sec_per_Gyr
 
+
 class ARTGeometryHandler(OctreeGeometryHandler):
-    def __init__(self,pf,data_style="art"):
-        """
-        Life is made simpler because we only have one AMR file
-        and one domain. However, we are matching to the RAMSES
-        multi-domain architecture.
-        """
+    def __init__(self, pf, data_style="art"):
         self.fluid_field_list = fluid_fields
         self.data_style = data_style
         self.parameter_file = weakref.proxy(pf)
@@ -94,7 +87,16 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
         self.max_level = pf.max_level
         self.float_type = np.float64
-        super(ARTGeometryHandler,self).__init__(pf,data_style)
+        super(ARTGeometryHandler, self).__init__(pf, data_style)
+
+    def get_smallest_dx(self):
+        """
+        Returns (in code units) the smallest cell size in the simulation.
+        """
+        # Overloaded
+        pf = self.parameter_file
+        return (1.0/pf.domain_dimensions.astype('f8') /
+                (2**self.max_level)).min()
 
     def _initialize_oct_handler(self):
         """
@@ -102,23 +104,37 @@
         allocate the requisite memory in the oct tree
         """
         nv = len(self.fluid_field_list)
-        self.domains = [ARTDomainFile(self.parameter_file,1,nv)]
+        self.domains = [ARTDomainFile(self.parameter_file, l+1, nv, l)
+                        for l in range(self.pf.max_level)]
         self.octs_per_domain = [dom.level_count.sum() for dom in self.domains]
         self.total_octs = sum(self.octs_per_domain)
-        self.oct_handler = RAMSESOctreeContainer(
-            self.parameter_file.domain_dimensions/2, #dd is # of root cells
+        self.oct_handler = ARTOctreeContainer(
+            self.parameter_file.domain_dimensions/2,  # dd is # of root cells
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         mylog.debug("Allocating %s octs", self.total_octs)
         self.oct_handler.allocate_domains(self.octs_per_domain)
         for domain in self.domains:
-            domain._read_amr(self.oct_handler)
+            if domain.domain_level == 0:
+                domain._read_amr_root(self.oct_handler)
+            else:
+                domain._read_amr_level(self.oct_handler)
 
     def _detect_fields(self):
         self.particle_field_list = particle_fields
-        self.field_list = set(fluid_fields + particle_fields + particle_star_fields)
+        self.field_list = set(fluid_fields + particle_fields +
+                              particle_star_fields)
         self.field_list = list(self.field_list)
-    
+        # now generate all of the possible particle fields
+        if "wspecies" in self.parameter_file.parameters.keys():
+            wspecies = self.parameter_file.parameters['wspecies']
+            nspecies = len(wspecies)
+            self.parameter_file.particle_types = ["all", "darkmatter", "stars"]
+            for specie in range(nspecies):
+                self.parameter_file.particle_types.append("specie%i" % specie)
+        else:
+            self.parameter_file.particle_types = []
+
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
         super(ARTGeometryHandler, self)._setup_classes(dd)
@@ -127,19 +143,23 @@
     def _identify_base_chunk(self, dobj):
         """
         Take the passed in data source dobj, and use its embedded selector
-        to calculate the domain mask, build the reduced domain 
+        to calculate the domain mask, build the reduced domain
         subsets and oct counts. Attach this information to dobj.
         """
         if getattr(dobj, "_chunk_info", None) is None:
-            #Get all octs within this oct handler
+            # Get all octs within this oct handler
             mask = dobj.selector.select_octs(self.oct_handler)
-            if mask.sum()==0:
+            if mask.sum() == 0:
                 mylog.debug("Warning: selected zero octs")
             counts = self.oct_handler.count_cells(dobj.selector, mask)
-            #For all domains, figure out how many counts we have 
-            #and build a subset=mask of domains 
-            subsets = [ARTDomainSubset(d, mask, c)
-                       for d, c in zip(self.domains, counts) if c > 0]
+            # For all domains, figure out how many counts we have
+            # and build a subset=mask of domains
+            subsets = []
+            for d, c in zip(self.domains, counts):
+                if c < 1:
+                    continue
+                subset = ARTDomainSubset(d, mask, c, d.domain_level)
+                subsets.append(subset)
             dobj._chunk_info = subsets
             dobj.size = sum(counts)
             dobj.shape = (dobj.size,)
@@ -147,8 +167,8 @@
 
     def _chunk_all(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
-        #We pass the chunk both the current chunk and list of chunks,
-        #as well as the referring data source
+        # We pass the chunk both the current chunk and list of chunks,
+        # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
     def _chunk_spatial(self, dobj, ngz):
@@ -157,7 +177,7 @@
     def _chunk_io(self, dobj):
         """
         Since subsets are calculated per domain,
-        i.e. per file, yield each domain at a time to 
+        i.e. per file, yield each domain at a time to
         organize by IO. We will eventually chunk out NMSU ART
         to be level-by-level.
         """
@@ -165,77 +185,66 @@
         for subset in oobjs:
             yield YTDataChunk(dobj, "io", [subset], subset.cell_count)
 
+
 class ARTStaticOutput(StaticOutput):
     _hierarchy_class = ARTGeometryHandler
     _fieldinfo_fallback = ARTFieldInfo
     _fieldinfo_known = KnownARTFields
 
-    def __init__(self,filename,data_style='art',
-                 fields = None, storage_filename = None,
-                 skip_particles=False,skip_stars=False,
-                 limit_level=None,spread_age=True):
+    def __init__(self, filename, data_style='art',
+                 fields=None, storage_filename=None,
+                 skip_particles=False, skip_stars=False,
+                 limit_level=None, spread_age=True,
+                 force_max_level=None, file_particle_header=None,
+                 file_particle_data=None, file_particle_stars=None):
         if fields is None:
             fields = fluid_fields
         filename = os.path.abspath(filename)
         self._fields_in_file = fields
+        self._file_amr = filename
+        self._file_particle_header = file_particle_header
+        self._file_particle_data = file_particle_data
+        self._file_particle_stars = file_particle_stars
         self._find_files(filename)
-        self.file_amr = filename
         self.parameter_filename = filename
         self.skip_particles = skip_particles
         self.skip_stars = skip_stars
         self.limit_level = limit_level
         self.max_level = limit_level
+        self.force_max_level = force_max_level
         self.spread_age = spread_age
-        self.domain_left_edge = np.zeros(3,dtype='float')
-        self.domain_right_edge = np.zeros(3,dtype='float')+1.0
-        StaticOutput.__init__(self,filename,data_style)
+        self.domain_left_edge = np.zeros(3, dtype='float')
+        self.domain_right_edge = np.zeros(3, dtype='float')+1.0
+        StaticOutput.__init__(self, filename, data_style)
         self.storage_filename = storage_filename
 
-    def _find_files(self,file_amr):
+    def _find_files(self, file_amr):
         """
         Given the AMR base filename, attempt to find the
         particle header, star files, etc.
         """
-        prefix,suffix = filename_pattern['amr'].split('%s')
-        affix = os.path.basename(file_amr).replace(prefix,'')
-        affix = affix.replace(suffix,'')
-        affix = affix.replace('_','')
-        full_affix = affix
-        affix = affix[1:-1]
-        dirname = os.path.dirname(file_amr)
-        for fp in (filename_pattern_hf,filename_pattern):
-            for filetype, pattern in fp.items():
-                #if this attribute is already set skip it
-                if getattr(self,"file_"+filetype,None) is not None:
-                    continue
-                #sometimes the affix is surrounded by an extraneous _
-                #so check for an extra character on either side
-                check_filename = dirname+'/'+pattern%('?%s?'%affix)
-                filenames = glob.glob(check_filename)
-                if len(filenames)>1:
-                    check_filename_strict = \
-                            dirname+'/'+pattern%('?%s'%full_affix[1:])
-                    filenames = glob.glob(check_filename_strict)
-                
-                if len(filenames)==1:
-                    setattr(self,"file_"+filetype,filenames[0])
-                    mylog.info('discovered %s:%s',filetype,filenames[0])
-                elif len(filenames)>1:
-                    setattr(self,"file_"+filetype,None)
-                    mylog.info("Ambiguous number of files found for %s",
-                            check_filename)
-                    for fn in filenames:
-                        faffix = float(affix)
-                else:
-                    setattr(self,"file_"+filetype,None)
+        base_prefix, base_suffix = filename_pattern['amr']
+        possibles = glob.glob(os.path.dirname(file_amr)+"/*")
+        for filetype, (prefix, suffix) in filename_pattern.iteritems():
+            # if this attribute is already set skip it
+            if getattr(self, "_file_"+filetype, None) is not None:
+                continue
+            stripped = file_amr.replace(base_prefix, prefix)
+            stripped = stripped.replace(base_suffix, suffix)
+            matches = difflib.get_close_matches(stripped, possibles, 1, 0.6)
+            if matches:
+                match = matches[0]
+                mylog.info('discovered %s:%s', filetype, match)
+                setattr(self, "_file_"+filetype, match)
+            else:
+                setattr(self, "_file_"+filetype, None)
 
     def __repr__(self):
-        return self.file_amr.rsplit(".",1)[0]
+        return self._file_amr.split('/')[-1]
 
     def _set_units(self):
         """
-        Generates the conversion to various physical units based 
-		on the parameters from the header
+        Generates the conversion to various physical units based
+                on the parameters from the header
         """
         self.units = {}
         self.time_units = {}
@@ -243,9 +252,9 @@
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0
 
-        #spatial units
-        z   = self.current_redshift
-        h   = self.hubble_constant
+        # spatial units
+        z = self.current_redshift
+        h = self.hubble_constant
         boxcm_cal = self.parameters["boxh"]
         boxcm_uncal = boxcm_cal / h
         box_proper = boxcm_uncal/(1+z)
@@ -256,55 +265,59 @@
             self.units[unit+'cm'] = mpc_conversion[unit] * boxcm_uncal
             self.units[unit+'hcm'] = mpc_conversion[unit] * boxcm_cal
 
-        #all other units
+        # all other units
         wmu = self.parameters["wmu"]
         Om0 = self.parameters['Om0']
-        ng  = self.parameters['ng']
+        ng = self.parameters['ng']
         wmu = self.parameters["wmu"]
-        boxh   = self.parameters['boxh'] 
-        aexpn  = self.parameters["aexpn"]
+        boxh = self.parameters['boxh']
+        aexpn = self.parameters["aexpn"]
         hubble = self.parameters['hubble']
 
         cf = defaultdict(lambda: 1.0)
         r0 = boxh/ng
-        P0= 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
-        T_0 = 3.03e5 * r0**2.0 * wmu * Om0 # [K]
+        P0 = 4.697e-16 * Om0**2.0 * r0**2.0 * hubble**2.0
+        T_0 = 3.03e5 * r0**2.0 * wmu * Om0  # [K]
         S_0 = 52.077 * wmu**(5.0/3.0)
         S_0 *= hubble**(-4.0/3.0)*Om0**(1.0/3.0)*r0**2.0
-        #v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
+        # v0 =  r0 * 50.0*1.0e5 * np.sqrt(self.omega_matter)  #cm/s
         v0 = 50.0*r0*np.sqrt(Om0)
         t0 = r0/v0
         rho1 = 1.8791e-29 * hubble**2.0 * self.omega_matter
         rho0 = 2.776e11 * hubble**2.0 * Om0
-        tr = 2./3. *(3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))     
+        tr = 2./3. * (3.03e5*r0**2.0*wmu*self.omega_matter)*(1.0/(aexpn**2))
         aM0 = rho0 * (boxh/hubble)**3.0 / ng**3.0
-        cf['r0']=r0
-        cf['P0']=P0
-        cf['T_0']=T_0
-        cf['S_0']=S_0
-        cf['v0']=v0
-        cf['t0']=t0
-        cf['rho0']=rho0
-        cf['rho1']=rho1
-        cf['tr']=tr
-        cf['aM0']=aM0
+        cf['r0'] = r0
+        cf['P0'] = P0
+        cf['T_0'] = T_0
+        cf['S_0'] = S_0
+        cf['v0'] = v0
+        cf['t0'] = t0
+        cf['rho0'] = rho0
+        cf['rho1'] = rho1
+        cf['tr'] = tr
+        cf['aM0'] = aM0
 
-        #factors to multiply the native code units to CGS
-        cf['Pressure'] = P0 #already cgs
-        cf['Velocity'] = v0/aexpn*1.0e5 #proper cm/s
+        # factors to multiply the native code units to CGS
+        cf['Pressure'] = P0  # already cgs
+        cf['Velocity'] = v0/aexpn*1.0e5  # proper cm/s
         cf["Mass"] = aM0 * 1.98892e33
         cf["Density"] = rho1*(aexpn**-3.0)
         cf["GasEnergy"] = rho0*v0**2*(aexpn**-5.0)
         cf["Potential"] = 1.0
         cf["Entropy"] = S_0
         cf["Temperature"] = tr
+        cf["Time"] = 1.0
+        cf["particle_mass"] = cf['Mass']
+        cf["particle_mass_initial"] = cf['Mass']
         self.cosmological_simulation = True
         self.conversion_factors = cf
-        
-        for particle_field in particle_fields:
-            self.conversion_factors[particle_field] =  1.0
+
         for ax in 'xyz':
             self.conversion_factors["%s-velocity" % ax] = 1.0
+        for pt in particle_fields:
+            if pt not in self.conversion_factors.keys():
+                self.conversion_factors[pt] = 1.0
         for unit in sec_conversion.keys():
             self.time_units[unit] = 1.0 / sec_conversion[unit]
 
@@ -320,72 +333,89 @@
         self.unique_identifier = \
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         self.parameters.update(constants)
-        #read the amr header
-        with open(self.file_amr,'rb') as f:
-            amr_header_vals = _read_struct(f,amr_header_struct)
-            for to_skip in ['tl','dtl','tlold','dtlold','iSO']:
-                _skip_record(f)
-            (self.ncell,) = struct.unpack('>l', _read_record(f))
+        self.parameters['Time'] = 1.0
+        # read the amr header
+        with open(self._file_amr, 'rb') as f:
+            amr_header_vals = read_attrs(f, amr_header_struct, '>')
+            for to_skip in ['tl', 'dtl', 'tlold', 'dtlold', 'iSO']:
+                skipped = skip(f, endian='>')
+            self.ncell = read_vector(f, 'i', '>')[0]
             # Try to figure out the root grid dimensions
             est = int(np.rint(self.ncell**(1.0/3.0)))
             # Note here: this is the number of *cells* on the root grid.
             # This is not the same as the number of Octs.
-            #domain dimensions is the number of root *cells*
+            # domain dimensions is the number of root *cells*
             self.domain_dimensions = np.ones(3, dtype='int64')*est
             self.root_grid_mask_offset = f.tell()
             self.root_nocts = self.domain_dimensions.prod()/8
             self.root_ncells = self.root_nocts*8
-            mylog.debug("Estimating %i cells on a root grid side,"+ \
-                        "%i root octs",est,self.root_nocts)
-            self.root_iOctCh = _read_frecord(f,'>i')[:self.root_ncells]
+            mylog.debug("Estimating %i cells on a root grid side," +
+                        "%i root octs", est, self.root_nocts)
+            self.root_iOctCh = read_vector(f, 'i', '>')[:self.root_ncells]
             self.root_iOctCh = self.root_iOctCh.reshape(self.domain_dimensions,
-                 order='F')
+                                                        order='F')
             self.root_grid_offset = f.tell()
-            #_skip_record(f) # hvar
-            #_skip_record(f) # var
-            self.root_nhvar = _read_frecord(f,'>f',size_only=True)
-            self.root_nvar  = _read_frecord(f,'>f',size_only=True)
-            #make sure that the number of root variables is a multiple of rootcells
-            assert self.root_nhvar%self.root_ncells==0
-            assert self.root_nvar%self.root_ncells==0
-            self.nhydro_variables = ((self.root_nhvar+self.root_nvar)/ 
-                                    self.root_ncells)
-            self.iOctFree, self.nOct = struct.unpack('>ii', _read_record(f))
+            self.root_nhvar = skip(f, endian='>')
+            self.root_nvar = skip(f, endian='>')
+            # make sure that the number of root variables is a multiple of
+            # rootcells
+            assert self.root_nhvar % self.root_ncells == 0
+            assert self.root_nvar % self.root_ncells == 0
+            self.nhydro_variables = ((self.root_nhvar+self.root_nvar) /
+                                     self.root_ncells)
+            self.iOctFree, self.nOct = read_vector(f, 'i', '>')
             self.child_grid_offset = f.tell()
             self.parameters.update(amr_header_vals)
             self.parameters['ncell0'] = self.parameters['ng']**3
-        #read the particle header
-        if not self.skip_particles and self.file_particle_header:
-            with open(self.file_particle_header,"rb") as fh:
-                particle_header_vals = _read_struct(fh,particle_header_struct)
+            # estimate the root level
+            float_center, fl, iocts, nocts, root_level = _read_art_level_info(
+                f,
+                [0, self.child_grid_offset], 1,
+                coarse_grid=self.domain_dimensions[0])
+            del float_center, fl, iocts, nocts
+            self.root_level = root_level
+            mylog.info("Using root level of %02i", self.root_level)
+        # read the particle header
+        if not self.skip_particles and self._file_particle_header:
+            with open(self._file_particle_header, "rb") as fh:
+                particle_header_vals = read_attrs(
+                    fh, particle_header_struct, '>')
                 fh.seek(seek_extras)
                 n = particle_header_vals['Nspecies']
-                wspecies = np.fromfile(fh,dtype='>f',count=10)
-                lspecies = np.fromfile(fh,dtype='>i',count=10)
+                wspecies = np.fromfile(fh, dtype='>f', count=10)
+                lspecies = np.fromfile(fh, dtype='>i', count=10)
             self.parameters['wspecies'] = wspecies[:n]
             self.parameters['lspecies'] = lspecies[:n]
             ls_nonzero = np.diff(lspecies)[:n-1]
-            mylog.info("Discovered %i species of particles",len(ls_nonzero))
+            self.star_type = len(ls_nonzero)
+            mylog.info("Discovered %i species of particles", len(ls_nonzero))
             mylog.info("Particle populations: "+'%1.1e '*len(ls_nonzero),
-                *ls_nonzero)
-            for k,v in particle_header_vals.items():
+                       *ls_nonzero)
+            for k, v in particle_header_vals.items():
                 if k in self.parameters.keys():
                     if not self.parameters[k] == v:
-                        mylog.info("Inconsistent parameter %s %1.1e  %1.1e",k,v,
-                                   self.parameters[k])
+                        mylog.info(
+                            "Inconsistent parameter %s %1.1e  %1.1e", k, v,
+                            self.parameters[k])
                 else:
-                    self.parameters[k]=v
+                    self.parameters[k] = v
             self.parameters_particles = particle_header_vals
-    
-        #setup standard simulation params yt expects to see
+
+        # setup standard simulation params yt expects to see
         self.current_redshift = self.parameters["aexpn"]**-1.0 - 1.0
         self.omega_lambda = amr_header_vals['Oml0']
         self.omega_matter = amr_header_vals['Om0']
         self.hubble_constant = amr_header_vals['hubble']
         self.min_level = amr_header_vals['min_level']
         self.max_level = amr_header_vals['max_level']
-        self.hubble_time  = 1.0/(self.hubble_constant*100/3.08568025e19)
+        if self.limit_level is not None:
+            self.max_level = min(
+                self.limit_level, amr_header_vals['max_level'])
+        if self.force_max_level is not None:
+            self.max_level = self.force_max_level
+        self.hubble_time = 1.0/(self.hubble_constant*100/3.08568025e19)
         self.current_time = b2t(self.parameters['t']) * sec_per_Gyr
+        mylog.info("Max level is %02i", self.max_level)
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -393,20 +423,24 @@
         Defined for the NMSU file naming scheme.
         This could differ for other formats.
         """
-        fn = ("%s" % (os.path.basename(args[0])))
         f = ("%s" % args[0])
-        prefix, suffix = filename_pattern['amr'].split('%s')
-        if fn.endswith(suffix) and fn.startswith(prefix) and\
-                os.path.exists(f): 
+        prefix, suffix = filename_pattern['amr']
+        with open(f, 'rb') as fh:
+            try:
+                amr_header_vals = read_attrs(fh, amr_header_struct, '>')
                 return True
+            except AssertionError:
+                return False
         return False
 
+
 class ARTDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
+    def __init__(self, domain, mask, cell_count, domain_level):
         self.mask = mask
         self.domain = domain
         self.oct_handler = domain.pf.h.oct_handler
         self.cell_count = cell_count
+        self.domain_level = domain_level
         level_counts = self.oct_handler.count_levels(
             self.domain.pf.max_level, self.domain.domain_id, mask)
         assert(level_counts.sum() == cell_count)
@@ -432,12 +466,12 @@
     def select_fwidth(self, dobj):
         base_dx = 1.0/self.domain.pf.domain_dimensions
         widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
+        dds = (2**self.select_ires(dobj))
         for i in range(3):
-            widths[:,i] = base_dx[i] / dds
+            widths[:, i] = base_dx[i] / dds
         return widths
 
-    def fill(self, content, fields):
+    def fill_root(self, content, ftfields):
         """
         This is called from IOHandler. It takes content
         which is a binary stream, reads the requested field
@@ -446,135 +480,153 @@
         the order they appear in the octhandler.
         """
         oct_handler = self.oct_handler
-        all_fields  = self.domain.pf.h.fluid_field_list
-        fields = [f for ft, f in fields]
-        dest= {}
-        filled = pos = level_offset = 0
+        all_fields = self.domain.pf.h.fluid_field_list
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
         field_idxs = [all_fields.index(f) for f in fields]
+        dest = {}
         for field in fields:
-            dest[field] = np.zeros(self.cell_count, 'float64')
-        for level, offset in enumerate(self.domain.level_offsets):
-            no = self.domain.level_count[level]
-            if level==0:
-                data = _read_root_level(content,self.domain.level_child_offsets,
-                                       self.domain.level_count)
-                data = data[field_idxs,:]
-            else:
-                data = _read_child_level(content,self.domain.level_child_offsets,
-                                         self.domain.level_offsets,
-                                         self.domain.level_count,level,fields,
-                                         self.domain.pf.domain_dimensions,
-                                         self.domain.pf.parameters['ncell0'])
-            source= {}
-            for i,field in enumerate(fields):
-                source[field] = np.empty((no, 8), dtype="float64")
-                source[field][:,:] = np.reshape(data[i,:],(no,8))
-            level_offset += oct_handler.fill_level(self.domain.domain_id, 
-                                   level, dest, source, self.mask, level_offset)
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        source = {}
+        data = _read_root_level(content, self.domain.level_child_offsets,
+                                self.domain.level_count)
+        for field, i in zip(fields, field_idxs):
+            temp = np.reshape(data[i, :], self.domain.pf.domain_dimensions,
+                              order='F').astype('float64').T
+            source[field] = temp
+        level_offset += oct_handler.fill_level_from_grid(
+            self.domain.domain_id,
+            level, dest, source, self.mask, level_offset)
         return dest
 
+    def fill_level(self, content, ftfields):
+        oct_handler = self.oct_handler
+        fields = [f for ft, f in ftfields]
+        level_offset = 0
+        dest = {}
+        for field in fields:
+            dest[field] = np.zeros(self.cell_count, 'float64')-1.
+        level = self.domain_level
+        no = self.domain.level_count[level]
+        noct_range = [0, no]
+        source = _read_child_level(
+            content, self.domain.level_child_offsets,
+            self.domain.level_offsets,
+            self.domain.level_count, level, fields,
+            self.domain.pf.domain_dimensions,
+            self.domain.pf.parameters['ncell0'],
+            noct_range=noct_range)
+        nocts_filling = noct_range[1]-noct_range[0]
+        level_offset += oct_handler.fill_level(self.domain.domain_id,
+                                               level, dest, source,
+                                               self.mask, level_offset,
+                                               noct_range[0],
+                                               nocts_filling)
+        return dest
+
+
 class ARTDomainFile(object):
     """
     Read in the AMR, left/right edges, fill out the octhandler
     """
-    #We already read in the header in static output,
-    #and since these headers are defined in only a single file it's
-    #best to leave them in the static output
+    # We already read in the header in static output,
+    # and since these headers are defined in only a single file it's
+    # best to leave them in the static output
     _last_mask = None
     _last_selector_id = None
 
-    def __init__(self,pf,domain_id,nvar):
+    def __init__(self, pf, domain_id, nvar, level):
         self.nvar = nvar
         self.pf = pf
         self.domain_id = domain_id
+        self.domain_level = level
         self._level_count = None
         self._level_oct_offsets = None
         self._level_child_offsets = None
 
     @property
     def level_count(self):
-        #this is number of *octs*
-        if self._level_count is not None: return self._level_count
+        # this is number of *octs*
+        if self._level_count is not None:
+            return self._level_count
         self.level_offsets
-        return self._level_count
+        return self._level_count[self.domain_level]
 
     @property
     def level_child_offsets(self):
-        if self._level_count is not None: return self._level_child_offsets
+        if self._level_count is not None:
+            return self._level_child_offsets
         self.level_offsets
         return self._level_child_offsets
 
     @property
-    def level_offsets(self): 
-        #this is used by the IO operations to find the file offset,
-        #and then start reading to fill values
-        #note that this is called hydro_offset in ramses
-        if self._level_oct_offsets is not None: 
+    def level_offsets(self):
+        # this is used by the IO operations to find the file offset,
+        # and then start reading to fill values
+        # note that this is called hydro_offset in ramses
+        if self._level_oct_offsets is not None:
             return self._level_oct_offsets
         # We now have to open the file and calculate it
-        f = open(self.pf.file_amr, "rb")
+        f = open(self.pf._file_amr, "rb")
         nhydrovars, inoll, _level_oct_offsets, _level_child_offsets = \
             _count_art_octs(f,  self.pf.child_grid_offset, self.pf.min_level,
                             self.pf.max_level)
-        #remember that the root grid is by itself; manually add it back in
+        # remember that the root grid is by itself; manually add it back in
         inoll[0] = self.pf.domain_dimensions.prod()/8
         _level_child_offsets[0] = self.pf.root_grid_offset
         self.nhydrovars = nhydrovars
-        self.inoll = inoll #number of octs
+        self.inoll = inoll  # number of octs
         self._level_oct_offsets = _level_oct_offsets
         self._level_child_offsets = _level_child_offsets
         self._level_count = inoll
         return self._level_oct_offsets
-    
-    def _read_amr(self, oct_handler):
+
+    def _read_amr_level(self, oct_handler):
         """Open the oct file, read in octs level-by-level.
-           For each oct, only the position, index, level and domain 
+           For each oct, only the position, index, level and domain
            are needed - its position in the octree is found automatically.
            The most important is finding all the information to feed
            oct_handler.add
         """
-        #on the root level we typically have 64^3 octs
-        #giving rise to 128^3 cells
-        #but on level 1 instead of 128^3 octs, we have 256^3 octs
-        #leave this code here instead of static output - it's memory intensive
         self.level_offsets
-        f = open(self.pf.file_amr, "rb")
-        #add the root *cell* not *oct* mesh
+        f = open(self.pf._file_amr, "rb")
+        level = self.domain_level
+        unitary_center, fl, iocts, nocts, root_level = _read_art_level_info(
+            f,
+            self._level_oct_offsets, level,
+            coarse_grid=self.pf.domain_dimensions[0],
+            root_level=self.pf.root_level)
+        nocts_check = oct_handler.add(self.domain_id, level, nocts,
+                                      unitary_center, self.domain_id)
+        assert(nocts_check == nocts)
+        mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
+                    nocts, level, oct_handler.nocts)
+
+    def _read_amr_root(self, oct_handler):
+        self.level_offsets
+        f = open(self.pf._file_amr, "rb")
+        # add the root *cell* not *oct* mesh
+        level = self.domain_level
         root_octs_side = self.pf.domain_dimensions[0]/2
         NX = np.ones(3)*root_octs_side
+        octs_side = NX*2**level
         LE = np.array([0.0, 0.0, 0.0], dtype='float64')
         RE = np.array([1.0, 1.0, 1.0], dtype='float64')
         root_dx = (RE - LE) / NX
         LL = LE + root_dx/2.0
         RL = RE - root_dx/2.0
-        #compute floating point centers of root octs
-        root_fc= np.mgrid[LL[0]:RL[0]:NX[0]*1j,
-                          LL[1]:RL[1]:NX[1]*1j,
-                          LL[2]:RL[2]:NX[2]*1j ]
-        root_fc= np.vstack([p.ravel() for p in root_fc]).T
-        nocts_check = oct_handler.add(1, 0, root_octs_side**3,
+        # compute floating point centers of root octs
+        root_fc = np.mgrid[LL[0]:RL[0]:NX[0]*1j,
+                           LL[1]:RL[1]:NX[1]*1j,
+                           LL[2]:RL[2]:NX[2]*1j]
+        root_fc = np.vstack([p.ravel() for p in root_fc]).T
+        nocts_check = oct_handler.add(self.domain_id, level,
+                                      root_octs_side**3,
                                       root_fc, self.domain_id)
         assert(oct_handler.nocts == root_fc.shape[0])
-        nocts_added = root_fc.shape[0]
         mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                    root_octs_side**3, 0,nocts_added)
-        for level in xrange(1, self.pf.max_level+1):
-            left_index, fl, iocts, nocts,root_level = _read_art_level_info(f, 
-                self._level_oct_offsets,level,
-                coarse_grid=self.pf.domain_dimensions[0])
-            left_index/=2
-            #at least one of the indices should be odd
-            #assert np.sum(left_index[:,0]%2==1)>0
-            octs_side = NX*2**level
-            float_left_edge = left_index.astype("float64") / octs_side
-            float_center = float_left_edge + 0.5*1.0/octs_side
-            #all floatin unitary positions should fit inside the domain
-            assert np.all(float_center<1.0)
-            nocts_check = oct_handler.add(1,level, nocts, float_left_edge, self.domain_id)
-            nocts_added += nocts
-            assert(oct_handler.nocts == nocts_added)
-            mylog.debug("Added %07i octs on level %02i, cumulative is %07i",
-                        nocts, level,nocts_added)
+                    root_octs_side**3, 0, oct_handler.nocts)
 
     def select(self, selector):
         if id(selector) == self._last_selector_id:
@@ -585,8 +637,8 @@
 
     def count(self, selector):
         if id(selector) == self._last_selector_id:
-            if self._last_mask is None: return 0
+            if self._last_mask is None:
+                return 0
             return self._last_mask.sum()
         self.select(selector)
         return self.count(selector)
-
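
The rewritten _find_files above trades the old affix bookkeeping for
fuzzy matching against the directory listing.  The essential trick,
shown with a hypothetical NMSU ART layout (the path below is made up):

    import difflib
    import glob
    import os

    file_amr = "/data/sim/10MpcBox_csf512_a0.500.d"   # hypothetical
    possibles = glob.glob(os.path.dirname(file_amr) + "/*")
    # rewrite the AMR name with the particle-header prefix/suffix...
    target = file_amr.replace("10MpcBox_", "PMcrd").replace(".d", ".DAT")
    # ...then keep the closest real filename, if any is close enough
    matches = difflib.get_close_matches(target, possibles, n=1, cutoff=0.6)
    print(matches[0] if matches else "no particle header found")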

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/frontends/art/definitions.py
--- a/yt/frontends/art/definitions.py
+++ b/yt/frontends/art/definitions.py
@@ -25,7 +25,10 @@
 
 """
 
-fluid_fields= [ 
+# If not otherwise specified, we are big endian
+endian = '>'
+
+fluid_fields = [
     'Density',
     'TotalEnergy',
     'XMomentumDensity',
@@ -40,32 +43,29 @@
     'PotentialOld'
 ]
 
-hydro_struct = [('pad1','>i'),('idc','>i'),('iOctCh','>i')]
+hydro_struct = [('pad1', '>i'), ('idc', '>i'), ('iOctCh', '>i')]
 for field in fluid_fields:
-    hydro_struct += (field,'>f'),
-hydro_struct += ('pad2','>i'),
+    hydro_struct += (field, '>f'),
+hydro_struct += ('pad2', '>i'),
 
-particle_fields= [
-    'particle_age',
+particle_fields = [
+    'particle_mass',  # stars have variable mass
     'particle_index',
-    'particle_mass',
-    'particle_mass_initial',
-    'particle_creation_time',
-    'particle_metallicity1',
-    'particle_metallicity2',
-    'particle_metallicity',
+    'particle_type',
     'particle_position_x',
     'particle_position_y',
     'particle_position_z',
     'particle_velocity_x',
     'particle_velocity_y',
     'particle_velocity_z',
-    'particle_type',
-    'particle_index'
+    'particle_mass_initial',
+    'particle_creation_time',
+    'particle_metallicity1',
+    'particle_metallicity2',
+    'particle_metallicity',
 ]
 
 particle_star_fields = [
-    'particle_age',
     'particle_mass',
     'particle_mass_initial',
     'particle_creation_time',
@@ -74,110 +74,65 @@
     'particle_metallicity',
 ]
 
-filename_pattern = {				
-	'amr':'10MpcBox_csf512_%s.d',
-	'particle_header':'PMcrd%s.DAT',
-	'particle_data':'PMcrs0%s.DAT',
-	'particle_stars':'stars_%s.dat'
-}
 
-filename_pattern_hf = {				
-	'particle_header':'PMcrd_%s.DAT',
-	'particle_data':'PMcrs0_%s.DAT',
+filename_pattern = {
+    'amr': ['10MpcBox_', '.d'],
+    'particle_header': ['PMcrd', '.DAT'],
+    'particle_data': ['PMcrs', '.DAT'],
+    'particle_stars': ['stars', '.dat']
 }
 
 amr_header_struct = [
-    ('>i','pad byte'),
-    ('>256s','jname'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','istep'),
-    ('>d','t'),
-    ('>d','dt'),
-    ('>f','aexpn'),
-    ('>f','ainit'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','boxh'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','Omb0'),
-    ('>f','hubble'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>i','nextras'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>f','extra1'),
-    ('>f','extra2'),
-    ('>i','pad byte'),
-    ('>i','pad byte'),
-    ('>256s','lextra'),
-    ('>256s','lextra'),
-    ('>i','pad byte'),
-    ('>i', 'pad byte'),
-    ('>i', 'min_level'),
-    ('>i', 'max_level'),
-    ('>i', 'pad byte'),
+    ('jname', 1, '256s'),
+    (('istep', 't', 'dt', 'aexpn', 'ainit'), 1, 'iddff'),
+    (('boxh', 'Om0', 'Oml0', 'Omb0', 'hubble'), 5, 'f'),
+    ('nextras', 1, 'i'),
+    (('extra1', 'extra2'), 2, 'f'),
+    ('lextra', 1, '512s'),
+    (('min_level', 'max_level'), 2, 'i')
 ]
 
-particle_header_struct =[
-    ('>i','pad'),
-    ('45s','header'), 
-    ('>f','aexpn'),
-    ('>f','aexp0'),
-    ('>f','amplt'),
-    ('>f','astep'),
-    ('>i','istep'),
-    ('>f','partw'),
-    ('>f','tintg'),
-    ('>f','Ekin'),
-    ('>f','Ekin1'),
-    ('>f','Ekin2'),
-    ('>f','au0'),
-    ('>f','aeu0'),
-    ('>i','Nrow'),
-    ('>i','Ngridc'),
-    ('>i','Nspecies'),
-    ('>i','Nseed'),
-    ('>f','Om0'),
-    ('>f','Oml0'),
-    ('>f','hubble'),
-    ('>f','Wp5'),
-    ('>f','Ocurv'),
-    ('>f','Omb0'),
-    ('>%ds'%(396),'extras'),
-    ('>f','unknown'),
-    ('>i','pad')
+particle_header_struct = [
+    (('header',
+     'aexpn', 'aexp0', 'amplt', 'astep',
+     'istep',
+     'partw', 'tintg',
+     'Ekin', 'Ekin1', 'Ekin2',
+     'au0', 'aeu0',
+     'Nrow', 'Ngridc', 'Nspecies', 'Nseed',
+     'Om0', 'Oml0', 'hubble', 'Wp5', 'Ocurv', 'Omb0',
+     'extras', 'unknown'),
+     1,
+     '45sffffi'+'fffffff'+'iiii'+'ffffff'+'396s'+'f')
 ]
 
 star_struct = [
-        ('>d',('tdum','adum')),
-        ('>i','nstars'),
-        ('>d',('ws_old','ws_oldi')),
-        ('>f','mass'),
-        ('>f','imass'),
-        ('>f','tbirth'),
-        ('>f','metallicity1'),
-        ('>f','metallicity2')
-        ]
+    ('>d', ('tdum', 'adum')),
+    ('>i', 'nstars'),
+    ('>d', ('ws_old', 'ws_oldi')),
+    ('>f', 'particle_mass'),
+    ('>f', 'particle_mass_initial'),
+    ('>f', 'particle_creation_time'),
+    ('>f', 'particle_metallicity1'),
+    ('>f', 'particle_metallicity2')
+]
 
 star_name_map = {
-        'particle_mass':'mass',
-        'particle_mass_initial':'imass',
-        'particle_age':'tbirth',
-        'particle_metallicity1':'metallicity1',
-        'particle_metallicity2':'metallicity2',
-        'particle_metallicity':'metallicity',
-        }
+    'particle_mass': 'mass',
+    'particle_mass_initial': 'imass',
+    'particle_creation_time': 'tbirth',
+    'particle_metallicity1': 'metallicity1',
+    'particle_metallicity2': 'metallicity2',
+    'particle_metallicity': 'metallicity',
+}
 
 constants = {
-    "Y_p":0.245,
-    "gamma":5./3.,
-    "T_CMB0":2.726,
-    "T_min":300.,
-    "ng":128,
-    "wmu":4.0/(8.0-5.0*0.245)
+    "Y_p": 0.245,
+    "gamma": 5./3.,
+    "T_CMB0": 2.726,
+    "T_min": 300.,
+    "ng": 128,
+    "wmu": 4.0/(8.0-5.0*0.245)
 }
 
 seek_extras = 137
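
The header structures above are now consumed by
yt.utilities.fortran_utils rather than the old hand-rolled readers.
Each Fortran unformatted record is a 4-byte length marker, the
payload, then the marker again; a minimal sketch of reading one
record (an illustration, not the fortran_utils API itself):

    import struct

    def read_fortran_record(f, fmt, endian='>'):
        # marker, payload, marker; both markers hold the payload size
        n1, = struct.unpack(endian + 'i', f.read(4))
        size = struct.calcsize(endian + fmt)
        payload = struct.unpack(endian + fmt, f.read(size))
        n2, = struct.unpack(endian + 'i', f.read(4))
        assert n1 == n2 == size
        return payload

    # e.g. the 'nstars' entry of star_struct above would be read as
    # nstars, = read_fortran_record(f, 'i')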

diff -r 36524214f584e412d6d4bf2bc8a81aa5b6df5ba9 -r 5e02fade47bdae5020c890e0d80e4c1a5d2e7db4 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -3,6 +3,8 @@
 
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: UCSD
+Author: Chris Moody <matthewturk at gmail.com>
+Affiliation: UCSC
 Homepage: http://yt-project.org/
 License:
   Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.
@@ -22,7 +24,7 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
-
+import numpy as np
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, \
     FieldInfo, \
@@ -35,210 +37,221 @@
     ValidateGridType
 import yt.data_objects.universal_fields
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import mass_sun_cgs
+from yt.frontends.art.definitions import *
 
 KnownARTFields = FieldInfoContainer()
 add_art_field = KnownARTFields.add_field
-
 ARTFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_field = ARTFieldInfo.add_field
 
-import numpy as np
+for f in fluid_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)])
 
-#these are just the hydro fields
-known_art_fields = [ 'Density','TotalEnergy',
-                     'XMomentumDensity','YMomentumDensity','ZMomentumDensity',
-                     'Pressure','Gamma','GasEnergy',
-                     'MetalDensitySNII', 'MetalDensitySNIa',
-                     'PotentialNew','PotentialOld']
-
-#Add the fields, then later we'll individually defined units and names
-for f in known_art_fields:
+for f in particle_fields:
     add_art_field(f, function=NullFunc, take_log=True,
-              validators = [ValidateDataField(f)])
-
-#Hydro Fields that are verified to be OK unit-wise:
-#Density
-#Temperature
-#metallicities
-#MetalDensity SNII + SNia
-
-#Hydro Fields that need to be tested:
-#TotalEnergy
-#XYZMomentum
-#Pressure
-#Gamma
-#GasEnergy
-#Potentials
-#xyzvelocity
-
-#Particle fields that are tested:
-#particle_position_xyz
-#particle_type
-#particle_index
-#particle_mass
-#particle_mass_initial
-#particle_age
-#particle_velocity
-#particle_metallicity12
-
-#Particle fields that are untested:
-#NONE
-
-#Other checks:
-#CellMassMsun == Density * CellVolume
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField("particle_mass")],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField("particle_mass_initial")],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
 
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["Density"]._convert_function=_convertDensity
+KnownARTFields["Density"]._convert_function = _convertDensity
 
 def _convertTotalEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["TotalEnergy"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["TotalEnergy"]._projected_units = r"\rm{K}"
-KnownARTFields["TotalEnergy"]._convert_function=_convertTotalEnergy
+KnownARTFields["TotalEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["TotalEnergy"]._convert_function = _convertTotalEnergy
 
 def _convertXMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["XMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["XMomentumDensity"]._convert_function=_convertXMomentumDensity
+KnownARTFields["XMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["XMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["XMomentumDensity"]._convert_function = _convertXMomentumDensity
 
 def _convertYMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["YMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["YMomentumDensity"]._convert_function=_convertYMomentumDensity
+KnownARTFields["YMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["YMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["YMomentumDensity"]._convert_function = _convertYMomentumDensity
 
 def _convertZMomentumDensity(data):
-    tr  = data.convert("Mass")*data.convert("Velocity")
+    tr = data.convert("Mass")*data.convert("Velocity")
     tr *= (data.convert("Density")/data.convert("Mass"))
     return tr
-KnownARTFields["ZMomentumDensity"]._units = r"\rm{mg}/\rm{s}/\rm{cm}^3"
-KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{K}"
-KnownARTFields["ZMomentumDensity"]._convert_function=_convertZMomentumDensity
+KnownARTFields["ZMomentumDensity"]._units = r"\rm{g}/\rm{s}/\rm{cm}^3"
+KnownARTFields["ZMomentumDensity"]._projected_units = r"\rm{g}/\rm{s}/\rm{cm}^2"
+KnownARTFields["ZMomentumDensity"]._convert_function = _convertZMomentumDensity
 
 def _convertPressure(data):
     return data.convert("Pressure")
-KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{cm}/\rm{s}^2"
+KnownARTFields["Pressure"]._units = r"\rm{g}/\rm{s}^2/\rm{cm}^1"
 KnownARTFields["Pressure"]._projected_units = r"\rm{g}/\rm{s}^2"
-KnownARTFields["Pressure"]._convert_function=_convertPressure
+KnownARTFields["Pressure"]._convert_function = _convertPressure
 
 def _convertGamma(data):
     return 1.0
 KnownARTFields["Gamma"]._units = r""
 KnownARTFields["Gamma"]._projected_units = r""
-KnownARTFields["Gamma"]._convert_function=_convertGamma
+KnownARTFields["Gamma"]._convert_function = _convertGamma
 
 def _convertGasEnergy(data):
     return data.convert("GasEnergy")
-KnownARTFields["GasEnergy"]._units = r"\rm{ergs}/\rm{g}"
-KnownARTFields["GasEnergy"]._projected_units = r""
-KnownARTFields["GasEnergy"]._convert_function=_convertGasEnergy
+KnownARTFields["GasEnergy"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["GasEnergy"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["GasEnergy"]._convert_function = _convertGasEnergy
 
 def _convertMetalDensitySNII(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNII"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNII"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNII"]._convert_function=_convertMetalDensitySNII
+KnownARTFields["MetalDensitySNII"]._convert_function = _convertMetalDensitySNII
 
 def _convertMetalDensitySNIa(data):
     return data.convert('Density')
 KnownARTFields["MetalDensitySNIa"]._units = r"\rm{g}/\rm{cm}^3"
 KnownARTFields["MetalDensitySNIa"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["MetalDensitySNIa"]._convert_function=_convertMetalDensitySNIa
+KnownARTFields["MetalDensitySNIa"]._convert_function = _convertMetalDensitySNIa
 
 def _convertPotentialNew(data):
     return data.convert("Potential")
-KnownARTFields["PotentialNew"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialNew"]._convert_function=_convertPotentialNew
+KnownARTFields["PotentialNew"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialNew"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialNew"]._convert_function = _convertPotentialNew
 
 def _convertPotentialOld(data):
     return data.convert("Potential")
-KnownARTFields["PotentialOld"]._units = r"\rm{g}/\rm{cm}^3"
-KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}/\rm{cm}^2"
-KnownARTFields["PotentialOld"]._convert_function=_convertPotentialOld
+KnownARTFields["PotentialOld"]._units = r"\rm{g}\rm{cm}^2/\rm{s}^2"
+KnownARTFields["PotentialOld"]._projected_units = r"\rm{g}\rm{cm}^3/\rm{s}^2"
+KnownARTFields["PotentialOld"]._convert_function = _convertPotentialOld
 
 ####### Derived fields
+def _temperature(field, data):
+    tr = data["GasEnergy"]/data["Density"]
+    tr /= data.pf.conversion_factors["GasEnergy"]
+    tr *= data.pf.conversion_factors["Density"]
+    tr *= data.pf.conversion_factors['tr']
+    return tr
 
-def _temperature(field, data):
-    dg = data["GasEnergy"] #.astype('float64')
-    dg /= data.pf.conversion_factors["GasEnergy"]
-    dd = data["Density"] #.astype('float64')
-    dd /= data.pf.conversion_factors["Density"]
-    tr = dg/dd*data.pf.conversion_factors['tr']
-    #ghost cells have zero density?
-    tr[np.isnan(tr)] = 0.0
-    #dd[di] = -1.0
-    #if data.id==460:
-    #tr[di] = -1.0 #replace the zero-density points with zero temp
-    #print tr.min()
-    #assert np.all(np.isfinite(tr))
-    return tr
 def _converttemperature(data):
-    #x = data.pf.conversion_factors["Temperature"]
-    x = 1.0
-    return x
-add_field("Temperature", function=_temperature, units = r"\mathrm{K}",take_log=True)
+    return 1.0
+add_field("Temperature", function=_temperature,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Temperature"]._units = r"\mathrm{K}"
 ARTFieldInfo["Temperature"]._projected_units = r"\mathrm{K}"
-#ARTFieldInfo["Temperature"]._convert_function=_converttemperature
 
 def _metallicity_snII(field, data):
-    tr  = data["MetalDensitySNII"] / data["Density"]
+    tr = data["MetalDensitySNII"] / data["Density"]
     return tr
-add_field("Metallicity_SNII", function=_metallicity_snII, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNII", function=_metallicity_snII,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNII"]._units = r""
 ARTFieldInfo["Metallicity_SNII"]._projected_units = r""
 
 def _metallicity_snIa(field, data):
-    tr  = data["MetalDensitySNIa"] / data["Density"]
+    tr = data["MetalDensitySNIa"] / data["Density"]
     return tr
-add_field("Metallicity_SNIa", function=_metallicity_snIa, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity_SNIa", function=_metallicity_snIa,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity_SNIa"]._units = r""
 ARTFieldInfo["Metallicity_SNIa"]._projected_units = r""
 
 def _metallicity(field, data):
-    tr  = data["Metal_Density"] / data["Density"]
+    tr = data["Metal_Density"] / data["Density"]
     return tr
-add_field("Metallicity", function=_metallicity, units = r"\mathrm{K}",take_log=True)
+add_field("Metallicity", function=_metallicity,
+          units=r"\mathrm{K}", take_log=True)
 ARTFieldInfo["Metallicity"]._units = r""
 ARTFieldInfo["Metallicity"]._projected_units = r""
 
-def _x_velocity(field,data):
-    tr  = data["XMomentumDensity"]/data["Density"]
+def _x_velocity(field, data):
+    tr = data["XMomentumDensity"]/data["Density"]
     return tr
-add_field("x-velocity", function=_x_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("x-velocity", function=_x_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["x-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["x-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _y_velocity(field,data):
-    tr  = data["YMomentumDensity"]/data["Density"]
+def _y_velocity(field, data):
+    tr = data["YMomentumDensity"]/data["Density"]
     return tr
-add_field("y-velocity", function=_y_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("y-velocity", function=_y_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["y-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["y-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
-def _z_velocity(field,data):
-    tr  = data["ZMomentumDensity"]/data["Density"]
+def _z_velocity(field, data):
+    tr = data["ZMomentumDensity"]/data["Density"]
     return tr
-add_field("z-velocity", function=_z_velocity, units = r"\mathrm{cm/s}",take_log=False)
+add_field("z-velocity", function=_z_velocity,
+          units=r"\mathrm{cm/s}", take_log=False)
 ARTFieldInfo["z-velocity"]._units = r"\rm{cm}/\rm{s}"
 ARTFieldInfo["z-velocity"]._projected_units = r"\rm{cm}/\rm{s}"
 
 def _metal_density(field, data):
-    tr  = data["MetalDensitySNIa"]
+    tr = data["MetalDensitySNIa"]
     tr += data["MetalDensitySNII"]
     return tr
-add_field("Metal_Density", function=_metal_density, units = r"\mathrm{K}",take_log=True)
-ARTFieldInfo["Metal_Density"]._units = r""
-ARTFieldInfo["Metal_Density"]._projected_units = r""
+add_field("Metal_Density", function=_metal_density,
+          units=r"\mathrm{K}", take_log=True)
+ARTFieldInfo["Metal_Density"]._units = r"\rm{g}/\rm{cm}^3"
+ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
+# Particle fields
+def _particle_age(field, data):
+    tr = data["particle_creation_time"]
+    return data.pf.current_time - tr
+add_field("particle_age", function=_particle_age, units=r"\mathrm{s}",
+          take_log=True, particle_type=True)
 
-#Particle fields
+def spread_ages(ages, spread=1.0e7*365*24*3600):
+    # stars are formed in lumps; spread out the ages linearly
+    da = np.diff(ages)
+    assert np.all(da <= 0)
+    # ages should always be decreasing, and ordered so
+    agesd = np.zeros(ages.shape)
+    idx, = np.where(da < 0)
+    idx += 1  # mark the right edges
+    # spread this age evenly out to the next age
+    lidx = 0
+    lage = 0
+    for i in idx:
+        n = i-lidx  # n stars affected
+        rage = ages[i]
+        lage = max(rage-spread, 0.0)
+        agesd[lidx:i] = np.linspace(lage, rage, n)
+        lidx = i
+        # lage=rage
+    # we didn't get the last iter
+    n = agesd.shape[0]-lidx
+    rage = ages[-1]
+    lage = max(rage-spread, 0.0)
+    agesd[lidx:] = np.linspace(lage, rage, n)
+    return agesd
+
+def _particle_age_spread(field, data):
+    tr = data["particle_creation_time"]
+    return spread_ages(data.pf.current_time - tr)
+
+add_field("particle_age_spread", function=_particle_age_spread,
+          particle_type=True, take_log=True, units=r"\rm{s}")
+
+def _ParticleMassMsun(field, data):
+    return data["particle_mass"]/mass_sun_cgs
+add_field("ParticleMassMsun", function=_ParticleMassMsun, particle_type=True,
+          take_log=True, units=r"\rm{Msun}")

This diff is so big that we needed to truncate the remainder.
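
The run detection inside spread_ages above is the load-bearing part:
stars formed in one lump share an identical creation time, and
np.diff/np.where locate the edges of each run.  With made-up numbers:

    import numpy as np

    ages = np.array([9., 9., 9., 7., 7., 4.])  # illustrative, seconds
    da = np.diff(ages)        # [ 0.,  0., -2.,  0., -3.]
    idx, = np.where(da < 0)   # a new run begins after indices [2, 4]
    idx += 1                  # right edges of each run: [3, 5]
    # spread_ages then fills each run via np.linspace over a window no
    # wider than `spread` (10 Myr by default) instead of a single value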

https://bitbucket.org/yt_analysis/yt/commits/00cce9aff2cf/
Changeset:   00cce9aff2cf
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-04-29 02:10:23
Summary:     Fixing the docstring for the projection object's data_source keyword.
Affected #:  1 file

diff -r d48a016b4b8ce1c8326e23c308fd90789d0b4ec0 -r 00cce9aff2cff342f2971bee8f578fed9d1deab5 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -190,7 +190,7 @@
     center : array_like, optional
         The 'center' supplied to fields that use it.  Note that this does
         not have to have `coord` as one value.  Strictly optional.
-    source : `yt.data_objects.api.AMRData`, optional
+    data_source : `yt.data_objects.api.AMRData`, optional
         If specified, this will be the data source used for selecting
         regions to project.
     node_name: string, optional


https://bitbucket.org/yt_analysis/yt/commits/346c728780cb/
Changeset:   346c728780cb
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-04-29 02:12:13
Summary:     Merging with bitbucket tip.
Affected #:  2 files

diff -r 00cce9aff2cff342f2971bee8f578fed9d1deab5 -r 346c728780cb0a4607a16608a8706178a51e1bf3 yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -64,7 +64,8 @@
     answer_testing_bitwise = 'False',
     gold_standard_filename = 'gold006',
     local_standard_filename = 'local001',
-    sketchfab_api_key = 'None'
+    sketchfab_api_key = 'None',
+    thread_field_detection = 'False'
     )
 # Here is the upgrade.  We're actually going to parse the file in its entirety
 # here.  Then, if it has any of the Forbidden Sections, it will be rewritten

diff -r 00cce9aff2cff342f2971bee8f578fed9d1deab5 -r 346c728780cb0a4607a16608a8706178a51e1bf3 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -31,6 +31,9 @@
 import string
 import re
 
+from threading import Thread
+import Queue
+
 from itertools import izip
 
 from yt.funcs import *
@@ -57,6 +60,14 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_blocking_call
 
+def get_field_names_helper(filename, id, results):
+    try:
+        names = hdf5_light_reader.ReadListOfDatasets(
+                    filename, "/Grid%08i" % id)
+        results.put((names, "Grid %s has: %s" % (id, names)))
+    except (exceptions.KeyError, hdf5_light_reader.ReadingError):
+        results.put((None, "Grid %s is a bit funky?" % id))
+
 class EnzoGrid(AMRGridPatch):
     """
     Class representing a single Enzo Grid instance.
@@ -401,29 +412,21 @@
         self.max_level = self.grid_levels.max()
 
     def _detect_active_particle_fields(self):
-        select_grids = np.zeros(len(self.grids), dtype='int32')
-        for ptype in self.parameter_file["AppendActiveParticleType"]:
-            select_grids += self.grid_active_particle_count[ptype].flat
-        gs = self.grids[select_grids > 0]
-        grids = sorted((g for g in gs), key = lambda a: a.filename)
-        handle = last = None
         ap_list = self.parameter_file["AppendActiveParticleType"]
         _fields = dict((ap, []) for ap in ap_list)
         fields = []
-        for g in grids:
-            # We inspect every grid, for now, until we have a list of
-            # attributes in a defined location.
-            if last != g.filename:
-                if handle is not None: handle.close()
-                handle = h5py.File(g.filename, "r")
+        for ptype in self.parameter_file["AppendActiveParticleType"]:
+            select_grids = self.grid_active_particle_count[ptype].flat
+            gs = self.grids[select_grids > 0]
+            g = gs[0]
+            handle = h5py.File(g.filename)
             node = handle["/Grid%08i/Particles/" % g.id]
             for ptype in (str(p) for p in node):
                 if ptype not in _fields: continue
                 for field in (str(f) for f in node[ptype]):
                     _fields[ptype].append(field)
                 fields += [(ptype, field) for field in _fields.pop(ptype)]
-            if len(_fields) == 0: break
-        if handle is not None: handle.close()
+            handle.close()
         return set(fields)
 
     def _setup_derived_fields(self):
@@ -448,15 +451,35 @@
                 mylog.info("Gathering a field list (this may take a moment.)")
                 field_list = set()
                 random_sample = self._generate_random_grids()
-                for grid in random_sample:
-                    if not hasattr(grid, 'filename'): continue
-                    try:
-                        gf = self.io._read_field_names(grid)
-                    except self.io._read_exception:
-                        mylog.debug("Grid %s is a bit funky?", grid.id)
-                        continue
-                    mylog.debug("Grid %s has: %s", grid.id, gf)
-                    field_list = field_list.union(gf)
+                tothread = ytcfg.getboolean("yt","thread_field_detection")
+                if tothread:
+                    jobs = []
+                    result_queue = Queue.Queue()
+                    # Start threads
+                    for grid in random_sample:
+                        if not hasattr(grid, 'filename'): continue
+                        helper = Thread(target = get_field_names_helper, 
+                            args = (grid.filename, grid.id, result_queue))
+                        jobs.append(helper)
+                        helper.start()
+                    # Here we make sure they're finished.
+                    for helper in jobs:
+                        helper.join()
+                    for grid in random_sample:
+                        res = result_queue.get()
+                        mylog.debug(res[1])
+                        if res[0] is not None:
+                            field_list = field_list.union(res[0])
+                else:
+                    for grid in random_sample:
+                        if not hasattr(grid, 'filename'): continue
+                        try:
+                            gf = self.io._read_field_names(grid)
+                        except self.io._read_exception:
+                            mylog.debug("Grid %s is a bit funky?", grid.id)
+                            continue
+                        mylog.debug("Grid %s has: %s", grid.id, gf)
+                        field_list = field_list.union(gf)
             if "AppendActiveParticleType" in self.parameter_file.parameters:
                 ap_fields = self._detect_active_particle_fields()
                 field_list = list(set(field_list).union(ap_fields))
@@ -998,8 +1021,8 @@
         for p, v in self._conversion_override.items():
             self.conversion_factors[p] = v
         self.refine_by = self.parameters["RefineBy"]
+        self.dimensionality = self.parameters["TopGridRank"]
         self.periodicity = ensure_tuple(self.parameters["LeftFaceBoundaryCondition"] == 3)
-        self.dimensionality = self.parameters["TopGridRank"]
         self.domain_dimensions = self.parameters["TopGridDimensions"]
         self.current_time = self.parameters["InitialTime"]
         if "CurrentTimeIdentifier" in self.parameters:


https://bitbucket.org/yt_analysis/yt/commits/5bdfa420ed4b/
Changeset:   5bdfa420ed4b
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-06 08:10:57
Summary:     Active particles: Bail on detecting fields when we can't find any particles.
Affected #:  1 file

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 5bdfa420ed4b007621ad600d98a8da5fa8a18ad9 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -417,6 +417,8 @@
         fields = []
         for ptype in self.parameter_file["AppendActiveParticleType"]:
             select_grids = self.grid_active_particle_count[ptype].flat
+            if np.any(select_grids) == False:
+                continue
             gs = self.grids[select_grids > 0]
             g = gs[0]
             handle = h5py.File(g.filename)
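
The new guard skips any particle type whose per-grid counts are all zero, so
gs[0] is never indexed on an empty selection.  A minimal equivalent of the
test, with counts as a hypothetical stand-in for
grid_active_particle_count[ptype]:

    # "np.any(counts) == False" is the same test as the more idiomatic
    # "not np.any(counts)": bail before touching the first grid.
    import numpy as np

    counts = np.array([0, 0, 3, 1])
    if not np.any(counts):
        pass  # no grids carry this particle type; skip it
    else:
        first_grid = np.nonzero(counts)[0][0]  # first populated grid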


https://bitbucket.org/yt_analysis/yt/commits/f2fa53e572bc/
Changeset:   f2fa53e572bc
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-06 08:13:55
Summary:     fortran_utils: remove debug print statement.
Affected #:  1 file

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r f2fa53e572bca4023ecdf3b960880047b74dc131 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -81,8 +81,6 @@
         s2 = vals.pop(0)
         if s1 != s2:
             size = struct.calcsize(endian + "I" + "".join(n*[t]) + "I")
-            print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
-                    s1, s2, a, n, t, size)
         assert(s1 == s2)
         if n == 1: v = v[0]
         if type(a)==tuple:


https://bitbucket.org/yt_analysis/yt/commits/37e213d586c9/
Changeset:   37e213d586c9
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-06 08:14:04
Summary:     Merging.
Affected #:  1 file

diff -r f2fa53e572bca4023ecdf3b960880047b74dc131 -r 37e213d586c9e89550f44941db8804504b757d72 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -417,6 +417,8 @@
         fields = []
         for ptype in self.parameter_file["AppendActiveParticleType"]:
             select_grids = self.grid_active_particle_count[ptype].flat
+            if np.any(select_grids) == False:
+                continue
             gs = self.grids[select_grids > 0]
             g = gs[0]
             handle = h5py.File(g.filename)


https://bitbucket.org/yt_analysis/yt/commits/6d82df72b1bf/
Changeset:   6d82df72b1bf
Branch:      yt-3.0
User:        sleitner
Date:        2013-05-07 06:07:16
Summary:     Answer testing for the non-big_data case
Affected #:  1 file

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 6d82df72b1bf6f2b01f78e03611a8f62a94d2496 yt/frontends/artio/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/artio/tests/test_outputs.py
@@ -0,0 +1,51 @@
+"""
+ARTIO frontend tests 
+
+Author: Samuel Leitner <sam.leitner at gmail.com>
+Affiliation: University of Maryland College Park
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2012 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    requires_pf, \
+    data_dir_load, \
+    PixelizedProjectionValuesTest, \
+    FieldValuesTest
+from yt.frontends.artio.api import ARTIOStaticOutput
+
+_fields = ("Temperature", "Density", "VelocityMagnitude") 
+
+aiso5 = "artio/aiso_a0.9005.art"
+@requires_pf(aiso5)
+def test_aiso5():
+    pf = data_dir_load(aiso5)
+    yield assert_equal, str(pf), "aiso_a0.9005.art"
+    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    for field in _fields:
+        for axis in [0, 1, 2]:
+            for ds in dso:
+                for weight_field in [None, "Density"]:
+                    yield PixelizedProjectionValuesTest(
+                        aiso5, axis, field, weight_field,
+                        ds)
+                yield FieldValuesTest(
+                        aiso5, field, ds)
+
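
These yield-style nose tests expand into one answer test per combination of
field, axis, weight field, and data source, and the requires_pf decorator
skips the whole generator when the aiso5 dataset is not present locally.  A
minimal sketch of the generator pattern, with a hypothetical check_projection
standing in for the answer-test classes:

    # Nose collects each yielded (callable, args...) tuple as its own
    # test case.  check_projection is illustrative, not yt API.
    def check_projection(axis, field):
        assert axis in (0, 1, 2)
        assert field is not None

    def test_all_combinations():
        for field in ("Temperature", "Density"):
            for axis in (0, 1, 2):
                yield check_projection, axis, field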


https://bitbucket.org/yt_analysis/yt/commits/a24afb5197a2/
Changeset:   a24afb5197a2
Branch:      yt-3.0
User:        sleitner
Date:        2013-05-07 06:21:07
Summary:     added empty __init__.py
Affected #:  1 file



https://bitbucket.org/yt_analysis/yt/commits/7245ec83f282/
Changeset:   7245ec83f282
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-09 15:25:30
Summary:     Merged in sleitner/yt-3.0-answertest (pull request #34)

ARTIO answer testing for the non-big_data case
Affected #:  2 files

diff -r 37e213d586c9e89550f44941db8804504b757d72 -r 7245ec83f282b0d9be1a8aabe6abf17c92f0b546 yt/frontends/artio/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/artio/tests/test_outputs.py
@@ -0,0 +1,51 @@
+"""
+ARTIO frontend tests 
+
+Author: Samuel Leitner <sam.leitner at gmail.com>
+Affiliation: University of Maryland College Park
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2012 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    requires_pf, \
+    data_dir_load, \
+    PixelizedProjectionValuesTest, \
+    FieldValuesTest
+from yt.frontends.artio.api import ARTIOStaticOutput
+
+_fields = ("Temperature", "Density", "VelocityMagnitude") 
+
+aiso5 = "artio/aiso_a0.9005.art"
+@requires_pf(aiso5)
+def test_aiso5():
+    pf = data_dir_load(aiso5)
+    yield assert_equal, str(pf), "aiso_a0.9005.art"
+    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    for field in _fields:
+        for axis in [0, 1, 2]:
+            for ds in dso:
+                for weight_field in [None, "Density"]:
+                    yield PixelizedProjectionValuesTest(
+                        aiso5, axis, field, weight_field,
+                        ds)
+                yield FieldValuesTest(
+                        aiso5, field, ds)
+


https://bitbucket.org/yt_analysis/yt/commits/e2c59d797d73/
Changeset:   e2c59d797d73
Branch:      yt-3.0
User:        xarthisius
Date:        2013-04-16 17:57:58
Summary:     [gdf] adding basic support for new data representation
Affected #:  2 files

diff -r ae0003cdf0a5c5c11d3722d37796c67b0b84428a -r e2c59d797d731004107cdd985ef75229e41c23bf yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -39,6 +39,8 @@
            StaticOutput
 from yt.utilities.lib import \
     get_box_grids_level
+from yt.utilities.io_handler import \
+    io_registry
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 
@@ -78,6 +80,10 @@
         if self.pf.dimensionality < 3: self.dds[2] = 1.0
         self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
 
+    @property
+    def filename(self):
+        return None
+
 class GDFHierarchy(GridGeometryHandler):
 
     grid = GDFGrid
@@ -85,19 +91,23 @@
     def __init__(self, pf, data_style='grid_data_format'):
         self.parameter_file = weakref.proxy(pf)
         self.data_style = data_style
+        self.max_level = 10  # FIXME
         # for now, the hierarchy file is the parameter file!
         self.hierarchy_filename = self.parameter_file.parameter_filename
         self.directory = os.path.dirname(self.hierarchy_filename)
-        self._fhandle = h5py.File(self.hierarchy_filename,'r')
-        GridGeometryHandler.__init__(self,pf,data_style)
+#        self._handle = h5py.File(self.hierarchy_filename, 'r')
+        self._handle = pf._handle
+#        import pudb; pudb.set_trace()
+        GridGeometryHandler.__init__(self, pf, data_style)
+        print "!!!!"
 
-        self._fhandle.close()
+#        self._handle.close()
 
     def _initialize_data_storage(self):
         pass
 
     def _detect_fields(self):
-        self.field_list = self._fhandle['field_types'].keys()
+        self.field_list = self._handle['field_types'].keys()
 
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
@@ -105,10 +115,10 @@
         self.object_types.sort()
 
     def _count_grids(self):
-        self.num_grids = self._fhandle['/grid_parent_id'].shape[0]
+        self.num_grids = self._handle['/grid_parent_id'].shape[0]
 
     def _parse_hierarchy(self):
-        f = self._fhandle
+        f = self._handle
         dxs = []
         self.grids = np.empty(self.num_grids, dtype='object')
         levels = (f['grid_level'][:]).copy()
@@ -139,7 +149,7 @@
         for gi, g in enumerate(self.grids):
             g._prepare_grid()
             g._setup_dx()
-
+        return
         for gi, g in enumerate(self.grids):
             g.Children = self._get_grid_children(g)
             for g1 in g.Children:
@@ -165,16 +175,22 @@
         mask[grid_ind] = True
         return [g for g in self.grids[mask] if g.Level == grid.Level + 1]
 
+    def _setup_data_io(self):
+        self.io = io_registry[self.data_style](self.parameter_file)
+
 class GDFStaticOutput(StaticOutput):
     _hierarchy_class = GDFHierarchy
     _fieldinfo_fallback = GDFFieldInfo
     _fieldinfo_known = KnownGDFFields
+    _handle = None
 
     def __init__(self, filename, data_style='grid_data_format',
                  storage_filename = None):
-        StaticOutput.__init__(self, filename, data_style)
+        if self._handle is not None: return
+        self._handle = h5py.File(filename, "r")
         self.storage_filename = storage_filename
         self.filename = filename
+        StaticOutput.__init__(self, filename, data_style)
 
     def _set_units(self):
         """
@@ -208,9 +224,10 @@
             self._fieldinfo_known.add_field(field_name, function=NullFunc, take_log=False,
                    units=current_fields_unit, projected_units="",
                    convert_function=_get_convert(field_name))
-
-        self._handle.close()
-        del self._handle
+        for p, v in self.units.items():
+            self.conversion_factors[p] = v
+#        self._handle.close()
+#        del self._handle
 
     def _parse_parameter_file(self):
         self._handle = h5py.File(self.parameter_filename, "r")
@@ -241,8 +258,8 @@
                 self.hubble_constant = self.cosmological_simulation = 0.0
         self.parameters['Time'] = 1.0 # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
-        self._handle.close()
-        del self._handle
+#        self._handle.close()
+#        del self._handle
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -259,3 +276,5 @@
     def __repr__(self):
         return self.basename.rsplit(".", 1)[0]
 
+    def __del__(self):
+        self._handle.close()

diff -r ae0003cdf0a5c5c11d3722d37796c67b0b84428a -r e2c59d797d731004107cdd985ef75229e41c23bf yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -25,31 +25,67 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
+import numpy as np
+from yt.funcs import \
+    mylog
 from yt.utilities.io_handler import \
-           BaseIOHandler
-import h5py
+    BaseIOHandler
 
+
+def field_dname(grid_id, field_name):
+    return "/data/grid_%010i/%s" % (grid_id, field_name)
+
+
+# TODO all particle bits were removed
 class IOHandlerGDFHDF5(BaseIOHandler):
     _data_style = "grid_data_format"
     _offset_string = 'data:offsets=0'
     _data_string = 'data:datatype=0'
 
-    def _field_dict(self,fhandle):
-        keys = fhandle['field_types'].keys()
-        val = fhandle['field_types'].keys()
-        return dict(zip(keys,val))
+    def __init__(self, pf, *args, **kwargs):
+        # TODO check if _num_per_stride is needed
+        self._num_per_stride = kwargs.pop("num_per_stride", 1000000)
+        BaseIOHandler.__init__(self, *args, **kwargs)
+        self.pf = pf
+        self._handle = pf._handle
 
-    def _read_field_names(self,grid):
-        fhandle = h5py.File(grid.filename,'r')
-        names = fhandle['field_types'].keys()
-        fhandle.close()
-        return names
 
-    def _read_data(self,grid,field):
-        fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
-        data = (fhandle['/data/grid_%010i/'%grid.id+field][:]).copy()
-        fhandle.close()
-        if grid.pf.field_ordering == 1:
-            return data.T
-        else:
-            return data
+    def _read_data_set(self, grid, field):
+        data = self._handle[field_dname(grid.id, field)][:, :, :]
+        # TODO transpose data if needed (grid.pf.field_ordering)
+        return data.astype("float64")
+
+    def _read_data_slice(self, grid, field, axis, coord):
+        slc = [slice(None), slice(None), slice(None)]
+        slc[axis] = slice(coord, coord + 1)
+        # TODO transpose data if needed
+        data = self._handle[field_dname(grid.id, field)][slc]
+        return data.astype("float64")
+
+    def _read_fluid_selection(self, chunks, selector, fields, size):
+        chunks = list(chunks)
+        # TODO ????
+        #if any((ftype != "gas" for ftype, fname in fields)):
+        #    raise NotImplementedError
+        fhandle = self._handle
+        rv = {}
+        for field in fields:
+            ftype, fname = field
+            rv[field] = np.empty(
+                size, dtype=fhandle[field_dname(0, fname)].dtype)
+        ngrids = sum(len(chunk.objs) for chunk in chunks)
+        mylog.debug("Reading %s cells of %s fields in %s blocks",
+                    size, [fname for ftype, fname in fields], ngrids)
+        for field in fields:
+            ftype, fname = field
+            ind = 0
+            for chunk in chunks:
+                for grid in chunk.objs:
+                    mask = grid.select(selector)  # caches
+                    if mask is None:
+                        continue
+                    # TODO transpose if needed
+                    data = fhandle[field_dname(grid.id, fname)][mask]
+                    rv[field][ind:ind + data.size] = data
+                    ind += data.size
+        return rv
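
The new _read_fluid_selection follows the pattern used across the yt-3.0
frontends: preallocate one flat array per requested field, walk the chunks
and their grids, mask each grid with the selector, and copy the selected
values in at a running offset.  A condensed sketch of that flow, with
read_grid as a hypothetical stand-in for the per-grid HDF5 lookup:

    # Preallocate / mask / copy, as in the diff above.
    import numpy as np

    def read_fluid_selection(chunks, selector, fields, size, read_grid):
        rv = dict((field, np.empty(size, dtype="float64"))
                  for field in fields)
        for field in fields:
            ftype, fname = field
            ind = 0
            for chunk in chunks:
                for grid in chunk.objs:
                    mask = grid.select(selector)  # boolean mask or None
                    if mask is None:
                        continue
                    data = read_grid(grid.id, fname)[mask]
                    rv[field][ind:ind + data.size] = data
                    ind += data.size
        return rv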


https://bitbucket.org/yt_analysis/yt/commits/9ff17559b7f6/
Changeset:   9ff17559b7f6
Branch:      yt-3.0
User:        xarthisius
Date:        2013-01-16 20:39:20
Summary:     Remove debug statements
Affected #:  1 file

diff -r e2c59d797d731004107cdd985ef75229e41c23bf -r 9ff17559b7f662be5cfc0ce48436e854de59adc0 yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -97,9 +97,7 @@
         self.directory = os.path.dirname(self.hierarchy_filename)
 #        self._handle = h5py.File(self.hierarchy_filename, 'r')
         self._handle = pf._handle
-#        import pudb; pudb.set_trace()
         GridGeometryHandler.__init__(self, pf, data_style)
-        print "!!!!"
 
 #        self._handle.close()
 


https://bitbucket.org/yt_analysis/yt/commits/96a9135733c7/
Changeset:   96a9135733c7
Branch:      yt-3.0
User:        xarthisius
Date:        2013-04-16 15:08:08
Summary:     [gdf] update I/O; SlicePlot and ProjectionPlot now work for an Enzo dataset written with write_to_gdf
Affected #:  2 files

diff -r 9ff17559b7f662be5cfc0ce48436e854de59adc0 -r 96a9135733c790708dd46705625c527839c69eca yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -95,11 +95,9 @@
         # for now, the hierarchy file is the parameter file!
         self.hierarchy_filename = self.parameter_file.parameter_filename
         self.directory = os.path.dirname(self.hierarchy_filename)
-#        self._handle = h5py.File(self.hierarchy_filename, 'r')
         self._handle = pf._handle
         GridGeometryHandler.__init__(self, pf, data_style)
 
-#        self._handle.close()
 
     def _initialize_data_storage(self):
         pass
@@ -147,7 +145,6 @@
         for gi, g in enumerate(self.grids):
             g._prepare_grid()
             g._setup_dx()
-        return
         for gi, g in enumerate(self.grids):
             g.Children = self._get_grid_children(g)
             for g1 in g.Children:
@@ -167,9 +164,20 @@
     def _setup_derived_fields(self):
         self.derived_field_list = []
 
+    def _get_box_grids(self, left_edge, right_edge):
+        """
+        Gets back all the grids between a left edge and right edge
+        """
+        eps = np.finfo(np.float64).eps
+        grid_i = np.where((np.all((self.grid_right_edge - left_edge) > eps, axis=1) \
+                        &  np.all((right_edge - self.grid_left_edge) > eps, axis=1)) == True)
+
+        return self.grids[grid_i], grid_i
+
+
     def _get_grid_children(self, grid):
         mask = np.zeros(self.num_grids, dtype='bool')
-        grids, grid_ind = self.get_box_grids(grid.LeftEdge, grid.RightEdge)
+        grids, grid_ind = self._get_box_grids(grid.LeftEdge, grid.RightEdge)
         mask[grid_ind] = True
         return [g for g in self.grids[mask] if g.Level == grid.Level + 1]
 
@@ -216,7 +224,7 @@
             except:
                 self.units[field_name] = 1.0
             try:
-                current_fields_unit = current_field.attrs['field_units'][0]
+                current_fields_unit = current_field.attrs['field_units']
             except:
                 current_fields_unit = ""
             self._fieldinfo_known.add_field(field_name, function=NullFunc, take_log=False,
@@ -224,8 +232,6 @@
                    convert_function=_get_convert(field_name))
         for p, v in self.units.items():
             self.conversion_factors[p] = v
-#        self._handle.close()
-#        del self._handle
 
     def _parse_parameter_file(self):
         self._handle = h5py.File(self.parameter_filename, "r")
@@ -256,8 +262,6 @@
                 self.hubble_constant = self.cosmological_simulation = 0.0
         self.parameters['Time'] = 1.0 # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
-#        self._handle.close()
-#        del self._handle
 
     @classmethod
     def _is_valid(self, *args, **kwargs):

diff -r 9ff17559b7f662be5cfc0ce48436e854de59adc0 -r 96a9135733c790708dd46705625c527839c69eca yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -64,9 +64,8 @@
 
     def _read_fluid_selection(self, chunks, selector, fields, size):
         chunks = list(chunks)
-        # TODO ????
-        #if any((ftype != "gas" for ftype, fname in fields)):
-        #    raise NotImplementedError
+        if any((ftype != "gas" for ftype, fname in fields)):
+            raise NotImplementedError
         fhandle = self._handle
         rv = {}
         for field in fields:


https://bitbucket.org/yt_analysis/yt/commits/235a3da4c101/
Changeset:   235a3da4c101
Branch:      yt-3.0
User:        xarthisius
Date:        2013-04-16 21:22:35
Summary:     [gdf] respect the field_ordering variable; SlicePlot and ProjectionPlot now work for Fortran-ordered data
Affected #:  1 file

diff -r 96a9135733c790708dd46705625c527839c69eca -r 235a3da4c101e7946102b9bdb5ab961d5fcb0125 yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -51,15 +51,19 @@
 
 
     def _read_data_set(self, grid, field):
-        data = self._handle[field_dname(grid.id, field)][:, :, :]
-        # TODO transpose data if needed (grid.pf.field_ordering)
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)
+        else:
+            data = self._handle[field_dname(grid.id, field)][:, :, :]
         return data.astype("float64")
 
     def _read_data_slice(self, grid, field, axis, coord):
         slc = [slice(None), slice(None), slice(None)]
         slc[axis] = slice(coord, coord + 1)
-        # TODO transpose data if needed
-        data = self._handle[field_dname(grid.id, field)][slc]
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)[slc]
+        else:
+            data = self._handle[field_dname(grid.id, field)][slc]
         return data.astype("float64")
 
     def _read_fluid_selection(self, chunks, selector, fields, size):
@@ -83,8 +87,10 @@
                     mask = grid.select(selector)  # caches
                     if mask is None:
                         continue
-                    # TODO transpose if needed
-                    data = fhandle[field_dname(grid.id, fname)][mask]
+                    if self.pf.field_ordering == 1:
+                        data = fhandle[field_dname(grid.id, fname)][:].swapaxes(0, 2)[mask]
+                    else:
+                        data = fhandle[field_dname(grid.id, fname)][mask]
                     rv[field][ind:ind + data.size] = data
                     ind += data.size
         return rv
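
The swapaxes(0, 2) calls handle data written with Fortran axis ordering: read
naively in C order, such an array comes back with its axes reversed, and
swapping the first and last axes restores (i, j, k) indexing as a view,
without copying.  A quick demonstration with hypothetical sizes:

    # A (2, 3, 4) Fortran-written array read back in C order appears
    # as (4, 3, 2); swapaxes(0, 2) restores the intended indexing.
    import numpy as np

    fortran_written = np.arange(24).reshape(4, 3, 2)
    restored = fortran_written.swapaxes(0, 2)  # a view, shape (2, 3, 4)
    assert restored.shape == (2, 3, 4)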


https://bitbucket.org/yt_analysis/yt/commits/74c2c00d1078/
Changeset:   74c2c00d1078
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-09 15:33:10
Summary:     Merged in xarthisius/yt-3.0 (pull request #30)

Adding support for GDF I/O
Affected #:  2 files

diff -r 7245ec83f282b0d9be1a8aabe6abf17c92f0b546 -r 74c2c00d1078b5743660abeecdfb359f8266c9bd yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -39,6 +39,8 @@
            StaticOutput
 from yt.utilities.lib import \
     get_box_grids_level
+from yt.utilities.io_handler import \
+    io_registry
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 
@@ -78,6 +80,10 @@
         if self.pf.dimensionality < 3: self.dds[2] = 1.0
         self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
 
+    @property
+    def filename(self):
+        return None
+
 class GDFHierarchy(GridGeometryHandler):
 
     grid = GDFGrid
@@ -85,19 +91,19 @@
     def __init__(self, pf, data_style='grid_data_format'):
         self.parameter_file = weakref.proxy(pf)
         self.data_style = data_style
+        self.max_level = 10  # FIXME
         # for now, the hierarchy file is the parameter file!
         self.hierarchy_filename = self.parameter_file.parameter_filename
         self.directory = os.path.dirname(self.hierarchy_filename)
-        self._fhandle = h5py.File(self.hierarchy_filename,'r')
-        GridGeometryHandler.__init__(self,pf,data_style)
+        self._handle = pf._handle
+        GridGeometryHandler.__init__(self, pf, data_style)
 
-        self._fhandle.close()
 
     def _initialize_data_storage(self):
         pass
 
     def _detect_fields(self):
-        self.field_list = self._fhandle['field_types'].keys()
+        self.field_list = self._handle['field_types'].keys()
 
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
@@ -105,10 +111,10 @@
         self.object_types.sort()
 
     def _count_grids(self):
-        self.num_grids = self._fhandle['/grid_parent_id'].shape[0]
+        self.num_grids = self._handle['/grid_parent_id'].shape[0]
 
     def _parse_hierarchy(self):
-        f = self._fhandle
+        f = self._handle
         dxs = []
         self.grids = np.empty(self.num_grids, dtype='object')
         levels = (f['grid_level'][:]).copy()
@@ -139,7 +145,6 @@
         for gi, g in enumerate(self.grids):
             g._prepare_grid()
             g._setup_dx()
-
         for gi, g in enumerate(self.grids):
             g.Children = self._get_grid_children(g)
             for g1 in g.Children:
@@ -159,22 +164,39 @@
     def _setup_derived_fields(self):
         self.derived_field_list = []
 
+    def _get_box_grids(self, left_edge, right_edge):
+        """
+        Gets back all the grids between a left edge and right edge
+        """
+        eps = np.finfo(np.float64).eps
+        grid_i = np.where((np.all((self.grid_right_edge - left_edge) > eps, axis=1) \
+                        &  np.all((right_edge - self.grid_left_edge) > eps, axis=1)) == True)
+
+        return self.grids[grid_i], grid_i
+
+
     def _get_grid_children(self, grid):
         mask = np.zeros(self.num_grids, dtype='bool')
-        grids, grid_ind = self.get_box_grids(grid.LeftEdge, grid.RightEdge)
+        grids, grid_ind = self._get_box_grids(grid.LeftEdge, grid.RightEdge)
         mask[grid_ind] = True
         return [g for g in self.grids[mask] if g.Level == grid.Level + 1]
 
+    def _setup_data_io(self):
+        self.io = io_registry[self.data_style](self.parameter_file)
+
 class GDFStaticOutput(StaticOutput):
     _hierarchy_class = GDFHierarchy
     _fieldinfo_fallback = GDFFieldInfo
     _fieldinfo_known = KnownGDFFields
+    _handle = None
 
     def __init__(self, filename, data_style='grid_data_format',
                  storage_filename = None):
-        StaticOutput.__init__(self, filename, data_style)
+        if self._handle is not None: return
+        self._handle = h5py.File(filename, "r")
         self.storage_filename = storage_filename
         self.filename = filename
+        StaticOutput.__init__(self, filename, data_style)
 
     def _set_units(self):
         """
@@ -202,15 +224,14 @@
             except:
                 self.units[field_name] = 1.0
             try:
-                current_fields_unit = current_field.attrs['field_units'][0]
+                current_fields_unit = current_field.attrs['field_units']
             except:
                 current_fields_unit = ""
             self._fieldinfo_known.add_field(field_name, function=NullFunc, take_log=False,
                    units=current_fields_unit, projected_units="",
                    convert_function=_get_convert(field_name))
-
-        self._handle.close()
-        del self._handle
+        for p, v in self.units.items():
+            self.conversion_factors[p] = v
 
     def _parse_parameter_file(self):
         self._handle = h5py.File(self.parameter_filename, "r")
@@ -241,8 +262,6 @@
                 self.hubble_constant = self.cosmological_simulation = 0.0
         self.parameters['Time'] = 1.0 # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
-        self._handle.close()
-        del self._handle
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -259,3 +278,5 @@
     def __repr__(self):
         return self.basename.rsplit(".", 1)[0]
 
+    def __del__(self):
+        self._handle.close()

diff -r 7245ec83f282b0d9be1a8aabe6abf17c92f0b546 -r 74c2c00d1078b5743660abeecdfb359f8266c9bd yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -25,31 +25,72 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
+import numpy as np
+from yt.funcs import \
+    mylog
 from yt.utilities.io_handler import \
-           BaseIOHandler
-import h5py
+    BaseIOHandler
 
+
+def field_dname(grid_id, field_name):
+    return "/data/grid_%010i/%s" % (grid_id, field_name)
+
+
+# TODO all particle bits were removed
 class IOHandlerGDFHDF5(BaseIOHandler):
     _data_style = "grid_data_format"
     _offset_string = 'data:offsets=0'
     _data_string = 'data:datatype=0'
 
-    def _field_dict(self,fhandle):
-        keys = fhandle['field_types'].keys()
-        val = fhandle['field_types'].keys()
-        return dict(zip(keys,val))
+    def __init__(self, pf, *args, **kwargs):
+        # TODO check if _num_per_stride is needed
+        self._num_per_stride = kwargs.pop("num_per_stride", 1000000)
+        BaseIOHandler.__init__(self, *args, **kwargs)
+        self.pf = pf
+        self._handle = pf._handle
 
-    def _read_field_names(self,grid):
-        fhandle = h5py.File(grid.filename,'r')
-        names = fhandle['field_types'].keys()
-        fhandle.close()
-        return names
 
-    def _read_data(self,grid,field):
-        fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
-        data = (fhandle['/data/grid_%010i/'%grid.id+field][:]).copy()
-        fhandle.close()
-        if grid.pf.field_ordering == 1:
-            return data.T
+    def _read_data_set(self, grid, field):
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)
         else:
-            return data
+            data = self._handle[field_dname(grid.id, field)][:, :, :]
+        return data.astype("float64")
+
+    def _read_data_slice(self, grid, field, axis, coord):
+        slc = [slice(None), slice(None), slice(None)]
+        slc[axis] = slice(coord, coord + 1)
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)[slc]
+        else:
+            data = self._handle[field_dname(grid.id, field)][slc]
+        return data.astype("float64")
+
+    def _read_fluid_selection(self, chunks, selector, fields, size):
+        chunks = list(chunks)
+        if any((ftype != "gas" for ftype, fname in fields)):
+            raise NotImplementedError
+        fhandle = self._handle
+        rv = {}
+        for field in fields:
+            ftype, fname = field
+            rv[field] = np.empty(
+                size, dtype=fhandle[field_dname(0, fname)].dtype)
+        ngrids = sum(len(chunk.objs) for chunk in chunks)
+        mylog.debug("Reading %s cells of %s fields in %s blocks",
+                    size, [fname for ftype, fname in fields], ngrids)
+        for field in fields:
+            ftype, fname = field
+            ind = 0
+            for chunk in chunks:
+                for grid in chunk.objs:
+                    mask = grid.select(selector)  # caches
+                    if mask is None:
+                        continue
+                    if self.pf.field_ordering == 1:
+                        data = fhandle[field_dname(grid.id, fname)][:].swapaxes(0, 2)[mask]
+                    else:
+                        data = fhandle[field_dname(grid.id, fname)][mask]
+                    rv[field][ind:ind + data.size] = data
+                    ind += data.size
+        return rv


https://bitbucket.org/yt_analysis/yt/commits/0eb93875a7c0/
Changeset:   0eb93875a7c0
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-11 00:35:51
Summary:     Renaming Oct members ind and local_ind to file_ind and domain_ind.

This makes it clearer what each index actually means.  I've added some
comments as well, plus an offset-obtaining function.
Affected #:  4 files

diff -r ff219faca878615de3cf9761fd86abbd34af55be -r 0eb93875a7c0a8bfa7e039bba9a5bd2eae9ee0c9 yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -67,7 +67,7 @@
                     long cur_leaf, long cur_level, 
                     long max_noct, long max_level, float fsubdivide,
                     np.ndarray[np.uint8_t, ndim=2] mask):
-    print "child", parent.ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
+    print "child", parent.file_ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
     cdef int ddr[3]
     cdef long i,j,k
     cdef float rf #random float from 0-1

diff -r ff219faca878615de3cf9761fd86abbd34af55be -r 0eb93875a7c0a8bfa7e039bba9a5bd2eae9ee0c9 yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -30,8 +30,12 @@
 
 cdef struct Oct
 cdef struct Oct:
-    np.int64_t ind          # index
-    np.int64_t local_ind
+    np.int64_t file_ind     # index with respect to the order in which it was
+                            # added
+    np.int64_t domain_ind   # index within the global set of domains
+                            # note that moving to a local index will require
+                            # moving to split-up masks, which is part of a
+                            # bigger refactor
     np.int64_t domain       # (opt) addl int index
     np.int64_t pos[3]       # position in ints
     np.int8_t level
@@ -61,6 +65,9 @@
     cdef Oct* get(self, np.float64_t ppos[3], OctInfo *oinfo = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
+    # This function must return the offset from global-to-local domains; i.e.,
+    # OctAllocationContainer.offset if such a thing exists.
+    cdef np.int64_t get_domain_offset(self, int domain_id)
 
 cdef class ARTIOOctreeContainer(OctreeContainer):
     cdef OctAllocationContainer **domains

diff -r ff219faca878615de3cf9761fd86abbd34af55be -r 0eb93875a7c0a8bfa7e039bba9a5bd2eae9ee0c9 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -56,8 +56,8 @@
     for n in range(n_octs):
         oct = &n_cont.my_octs[n]
         oct.parent = NULL
-        oct.ind = oct.domain = -1
-        oct.local_ind = n + n_cont.offset
+        oct.file_ind = oct.domain = -1
+        oct.domain_ind = n + n_cont.offset
         oct.level = -1
         for i in range(2):
             for j in range(2):
@@ -130,7 +130,7 @@
         while cur != NULL:
             for i in range(cur.n_assigned):
                 this = &cur.my_octs[i]
-                yield (this.ind, this.local_ind, this.domain)
+                yield (this.file_ind, this.domain_ind, this.domain)
             cur = cur.next
 
     cdef void oct_bounds(self, Oct *o, np.float64_t *corner, np.float64_t *size):
@@ -139,6 +139,9 @@
             size[i] = (self.DRE[i] - self.DLE[i]) / (self.nn[i] << o.level)
             corner[i] = o.pos[i] * size[i] + self.DLE[i]
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return 0
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -199,7 +202,7 @@
                 cur = cur.next
             o = &cur.my_octs[oi - cur.offset]
             for i in range(8):
-                count[o.domain - 1] += mask[o.local_ind,i]
+                count[o.domain - 1] += mask[o.domain_ind,i]
         return count
 
     @cython.boundscheck(True)
@@ -232,7 +235,7 @@
                     for k in range(2):
                         if o.children[i][j][k] == NULL:
                             ii = ((k*2)+j)*2+i
-                            count[o.domain - 1] += mask[o.local_ind,ii]
+                            count[o.domain - 1] += mask[o.domain_ind,ii]
         return count
 
     @cython.boundscheck(False)
@@ -493,7 +496,7 @@
             next_oct.pos[i] = pos[i]
         next_oct.domain = curdom
         next_oct.parent = cur
-        next_oct.ind = 1
+        next_oct.file_ind = 1
         next_oct.level = curlevel
         return next_oct
 
@@ -528,7 +531,7 @@
                 for j in range(2):
                     for i in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         # Note that we bit shift because o.pos is oct position,
                         # not cell position, and it is with respect to octs,
                         # not cells.
@@ -574,7 +577,7 @@
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -613,7 +616,7 @@
                 for j in range(2):
                     for i in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
                         coords[ci, 2] = pos[2] + dx[2] * k
@@ -645,13 +648,17 @@
                     for j in range(2):
                         for i in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.local_ind*8+ii]
+                            if mask[o.domain_ind, ii] == 0: continue
+                            dest[local_filled + offset] = source[o.domain_ind*8+ii]
                             local_filled += 1
         return local_filled
 
 cdef class RAMSESOctreeContainer(OctreeContainer):
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        cdef OctAllocationContainer *cont = self.domains[domain_id - 1]
+        return cont.offset
+
     cdef Oct* next_root(self, int domain_id, int ind[3]):
         cdef Oct *next = self.root_mesh[ind[0]][ind[1]][ind[2]]
         if next != NULL: return next
@@ -729,7 +736,7 @@
             o = &cur.my_octs[oi]
             use = 0
             for i in range(8):
-                m2[o.local_ind, i] = mask[o.local_ind, i]
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
         return m2 # NOTE: This is uint8_t
 
     def domain_mask(self,
@@ -751,7 +758,7 @@
             o = &cur.my_octs[oi]
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             nm += use
         cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
                 np.zeros((2, 2, 2, nm), 'uint8')
@@ -763,7 +770,7 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         use = m2[i, j, k, nm] = 1
             nm += use
         return m2.astype("bool")
@@ -781,9 +788,9 @@
             o = &cur.my_octs[oi]
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             if use == 1:
-                ind[o.ind] = nm
+                ind[o.file_ind] = nm
             nm += use
         return ind
 
@@ -832,7 +839,7 @@
                         else:
                             some_refined = 1
             if some_unrefined == some_refined == 1:
-                #print "BAD", oct.ind, oct.local_ind
+                #print "BAD", oct.file_ind, oct.domain_ind
                 bad += 1
                 if curdom == 10 or curdom == 72:
                     for i in range(2):
@@ -890,7 +897,7 @@
             # Now we should be at the right level
             cur.domain = curdom
             if local == 1:
-                cur.ind = p
+                cur.file_ind = p
             cur.level = curlevel
         return cont.n_assigned - initial
 
@@ -914,7 +921,7 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         ci = level_counts[o.level]
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
@@ -959,7 +966,7 @@
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -997,7 +1004,7 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         ci = level_counts[o.level]
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
@@ -1029,8 +1036,8 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.ind, ii]
+                            if mask[o.domain_ind, ii] == 0: continue
+                            dest[local_filled + offset] = source[o.file_ind, ii]
                             local_filled += 1
         return local_filled
 
@@ -1061,7 +1068,7 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                index = o.ind-subchunk_offset
+                index = o.file_ind-subchunk_offset
                 if o.level != level: continue
                 if index < 0: continue
                 if index >= subchunk_max: 
@@ -1072,7 +1079,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             dest[local_filled + offset] = \
                                 source[index,ii]
                             local_filled += 1
@@ -1112,7 +1119,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             ox = (o.pos[0] << 1) + i
                             oy = (o.pos[1] << 1) + j
                             oz = (o.pos[2] << 1) + k
@@ -1316,8 +1323,8 @@
         self.dom_offsets[0] = 0
         dom_ind = 0
         for i in range(self.nocts):
-            self.oct_list[i].local_ind = i
-            self.oct_list[i].ind = dom_ind
+            self.oct_list[i].domain_ind = i
+            self.oct_list[i].file_ind = dom_ind
             dom_ind += 1
             if self.oct_list[i].domain > cur_dom:
                 cur_dom = self.oct_list[i].domain
@@ -1334,8 +1341,8 @@
         cdef ParticleArrays *sd = <ParticleArrays*> \
             malloc(sizeof(ParticleArrays))
         cdef int i, j, k
-        my_oct.ind = my_oct.domain = -1
-        my_oct.local_ind = self.nocts - 1
+        my_oct.file_ind = my_oct.domain = -1
+        my_oct.domain_ind = self.nocts - 1
         my_oct.pos[0] = my_oct.pos[1] = my_oct.pos[2] = -1
         my_oct.level = -1
         my_oct.sd = sd
@@ -1386,7 +1393,7 @@
         for oi in range(ndo):
             o = self.oct_list[oi + doff]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -1583,7 +1590,7 @@
             if o.domain != domain_id: continue
             use = 0
             for i in range(8):
-                m2[o.local_ind, i] = mask[o.local_ind, i]
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
         return m2
 
     def domain_mask(self,
@@ -1606,7 +1613,7 @@
             if o.domain != domain_id: continue
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             nm += use
         cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
                 np.zeros((2, 2, 2, nm), 'uint8')
@@ -1619,7 +1626,7 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         use = m2[i, j, k, nm] = 1
             nm += use
         return m2.astype("bool")
@@ -1631,7 +1638,7 @@
         # Here we once again do something similar to the other functions.  We
         # need a set of indices into the final reduced, masked values.  The
         # indices will be domain.n long, and will be of type int64.  This way,
-        # we can get the Oct through a .get() call, then use Oct.ind as an
+        # we can get the Oct through a .get() call, then use Oct.file_ind as an
         # index into this newly created array, then finally use the returned
         # index into the domain subset array for deposition.
         cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
@@ -1646,7 +1653,7 @@
             o = self.oct_list[oi + offset]
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             if use == 1:
                 ind[oi] = nm
             nm += use

diff -r ff219faca878615de3cf9761fd86abbd34af55be -r 0eb93875a7c0a8bfa7e039bba9a5bd2eae9ee0c9 yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -201,7 +201,7 @@
             this_level = 0
         if res == 0:
             for i in range(8):
-                mask[root.local_ind,i] = 0
+                mask[root.domain_ind,i] = 0
             # If this level *is* being selected (i.e., no early termination)
             # then we know no child zones will be selected.
             if this_level == 1:
@@ -217,11 +217,11 @@
                     ii = ((k*2)+j)*2+i
                     ch = root.children[i][j][k]
                     if next_level == 1 and ch != NULL:
-                        mask[root.local_ind, ii] = 0
+                        mask[root.domain_ind, ii] = 0
                         self.recursively_select_octs(
                             ch, spos, sdds, mask, level + 1)
                     elif this_level == 1:
-                        mask[root.local_ind, ii] = \
+                        mask[root.domain_ind, ii] = \
                             self.select_cell(spos, sdds, eterm)
                     spos[2] += sdds[2]
                 spos[1] += sdds[1]
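
The distinction the rename encodes: file_ind is an oct's position in the
order it was added from its file, while domain_ind is its position in the
global concatenation of all domains, so a per-domain offset converts between
the two.  A toy model of the relationship (illustrative numbers, not yt API):

    # Two domains of 3 and 2 octs: domain_ind runs globally, file_ind
    # restarts per file, and the offset maps one onto the other.
    domain_offsets = {1: 0, 2: 3}  # domain_id -> first domain_ind
    for domain_id, n_octs in ((1, 3), (2, 2)):
        for file_ind in range(n_octs):
            domain_ind = domain_offsets[domain_id] + file_ind
            assert file_ind == domain_ind - domain_offsets[domain_id]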


https://bitbucket.org/yt_analysis/yt/commits/c2c281051ffa/
Changeset:   c2c281051ffa
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-11 00:37:01
Summary:     Try using the actual domain_ind minus the global offset as input to the output oct index.
Affected #:  1 file

diff -r 0eb93875a7c0a8bfa7e039bba9a5bd2eae9ee0c9 -r c2c281051ffadb10da23d52c19c782393bbad7ed yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -62,8 +62,9 @@
         cdef int dims[3]
         dims[0] = dims[1] = dims[2] = 2
         cdef OctInfo oi
-        cdef np.int64_t offset
+        cdef np.int64_t offset, moff
         cdef Oct *oct
+        moff = octree.get_domain_offset(domain_id)
         for i in range(positions.shape[0]):
             # We should check if particle remains inside the Oct here
             for j in range(nf):
@@ -75,7 +76,8 @@
             # might have particles that belong to octs outside our domain.
             if oct.domain != domain_id: continue
             #print domain_id, oct.local_ind, oct.ind, oct.domain, oct.pos[0], oct.pos[1], oct.pos[2]
-            offset = dom_ind[oct.ind] * 8
+            # Note that this has to be our local index, not our in-file index.
+            offset = dom_ind[oct.domain_ind - moff] * 8
             # Check that we found the oct ...
             self.process(dims, oi.left_edge, oi.dds,
                          offset, pos, field_vals)
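
The fix subtracts the domain's global offset before indexing, since
oct.domain_ind is a global index while dom_ind is local to one domain; each
oct then maps to a block of eight output cells.  In sketch form (hypothetical
values):

    # moff is the first global index of this domain's octs; dom_ind
    # maps local oct indices to output oct slots of 8 cells each.
    import numpy as np

    moff = 3                       # get_domain_offset(domain_id), say
    dom_ind = np.array([0, 1, 2])  # local oct -> output oct index
    global_ind = 4                 # oct.domain_ind of some oct here
    offset = dom_ind[global_ind - moff] * 8  # first of its 8 cells
    assert offset == 8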


https://bitbucket.org/yt_analysis/yt/commits/069e11b50b20/
Changeset:   069e11b50b20
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-13 01:28:22
Summary:     Experimental adjustment to make oct-traversal more straightforward for RAMSES.

This works for projections and the like, but I believe it may be slightly more
costly for filling from files.  However, I think we can mitigate this by
eliminating the "level_count" attribute and by further optimizations, such as
splitting up the mask.
Affected #:  1 file

diff -r c2c281051ffadb10da23d52c19c782393bbad7ed -r 069e11b50b204653e98255ef7ec9b88e781b3e57 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -790,7 +790,7 @@
             for i in range(8):
                 if mask[o.domain_ind, i] == 1: use = 1
             if use == 1:
-                ind[o.file_ind] = nm
+                ind[o.domain_ind - cur.offset] = nm
             nm += use
         return ind
 
@@ -915,6 +915,7 @@
         n = mask.shape[0]
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
+        ci = 0
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(2):
@@ -922,11 +923,10 @@
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
                         if mask[o.domain_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
                         coords[ci, 2] = (o.pos[2] << 1) + k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -948,9 +948,8 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 if mask[oi + cur.offset, i] == 0: continue
-                ci = level_counts[o.level]
                 levels[ci] = o.level
-                level_counts[o.level] += 1
+                ci += 1
         return levels
 
     @cython.boundscheck(False)
@@ -991,6 +990,7 @@
             # position.  Note that the positions will also all be offset by
             # dx/2.0.  This is also for *oct grids*, not cells.
             base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+        ci = 0
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for i in range(3):
@@ -1005,11 +1005,10 @@
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
                         if mask[o.domain_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
                         coords[ci, 2] = pos[2] + dx[2] * k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -1031,18 +1030,15 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                if o.level != level: continue
-                for i in range(2):
-                    for j in range(2):
-                        for k in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.domain_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.file_ind, ii]
-                            local_filled += 1
+                for ii in range(8):
+                    # We iterate and check here to keep our counts consistent
+                    # when filling different levels.
+                    if mask[o.domain_ind, ii] == 0: continue
+                    if o.level == level: 
+                        dest[local_filled] = source[o.file_ind, ii]
+                    local_filled += 1
         return local_filled
 
-
-
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
 
     @cython.boundscheck(True)
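
The reworked fill loop advances its running index for every masked cell at
every level, writing only when the oct sits at the requested level; that
keeps destination offsets identical across per-level fill passes.  A
pure-Python model of that logic (illustrative arrays, not yt API):

    # Counting masked cells unconditionally keeps each cell's output
    # slot stable no matter which level is currently being filled.
    import numpy as np

    def fill_level(dest, source, mask, levels, level):
        filled = 0
        for oct_ind in range(mask.shape[0]):
            for ii in range(8):
                if not mask[oct_ind, ii]:
                    continue
                if levels[oct_ind] == level:
                    dest[filled] = source[oct_ind, ii]
                filled += 1
        return filled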


https://bitbucket.org/yt_analysis/yt/commits/88be57c26d09/
Changeset:   88be57c26d09
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-13 20:09:47
Summary:     This change allows spatially-defined fields to match the ordering of
non-spatially defined fields for octrees.
Affected #:  2 files

diff -r 069e11b50b204653e98255ef7ec9b88e781b3e57 -r 88be57c26d09ba78525268c07f52a74d813dfc77 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -249,7 +249,13 @@
                 for i,chunk in enumerate(self.chunks(field, "spatial", ngz = 0)):
                     mask = self._current_chunk.objs[0].select(self.selector)
                     if mask is None: continue
-                    data = self[field][mask]
+                    data = self[field]
+                    if len(data.shape) == 4:
+                        # This is how we keep it consistent between oct ordering
+                        # and grid ordering.
+                        data = data.T[mask.T]
+                    else:
+                        data = data[mask]
                     rv[ind:ind+data.size] = data
                     ind += data.size
         else:
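
The data.T[mask.T] idiom works because transposing both the field array and
the mask lets NumPy's C-order boolean indexing walk the values in Fortran
order of the originals, which is the oct ordering.  A small equivalence
check (shapes are illustrative only):

    import numpy as np

    nz, n_oct = 2, 3
    data = np.arange(nz**3 * n_oct, dtype="float64")
    data = data.reshape((nz, nz, nz, n_oct), order="F")
    mask = data % 2 == 0  # arbitrary boolean mask of the same shape

    # Indexing the transposed views is the same as raveling both
    # arrays in Fortran order first.
    assert np.array_equal(data.T[mask.T],
                          data.ravel(order="F")[mask.ravel(order="F")])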

diff -r 069e11b50b204653e98255ef7ec9b88e781b3e57 -r 88be57c26d09ba78525268c07f52a74d813dfc77 yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -119,8 +119,7 @@
     def _reshape_vals(self, arr):
         nz = self._num_zones + 2*self._num_ghost_zones
         n_oct = arr.shape[0] / (nz**3.0)
-        arr.shape = (n_oct, nz, nz, nz)
-        arr = np.rollaxis(arr, 0, 4)
+        arr = arr.reshape((nz, nz, nz, n_oct), order="F")
         return arr
 
     _domain_ind = None
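
Note that the order="F" reshape is not a pure refactor of the rollaxis
version: the oct index still varies slowest, but within each oct the x
index now varies fastest, which is what lines spatial fields up with the
non-spatial ordering.  A sketch of the difference, assuming a flat buffer
of n_oct * nz**3 values:

    import numpy as np

    nz, n_oct = 2, 2
    flat = np.arange(n_oct * nz**3, dtype="float64")

    # Old path: C-ordered blocks per oct, oct axis moved to the end.
    old = np.rollaxis(flat.reshape((n_oct, nz, nz, nz)), 0, 4)
    # New path: Fortran order, x fastest within each oct.
    new = flat.reshape((nz, nz, nz, n_oct), order="F")

    # The two differ by an x/z transpose within each oct.
    assert np.array_equal(new, old.transpose(2, 1, 0, 3))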


https://bitbucket.org/yt_analysis/yt/commits/54cc305a4ff4/
Changeset:   54cc305a4ff4
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-14 19:53:49
Summary:     Enable domain offsets for Particle octrees.
Affected #:  2 files

diff -r 88be57c26d09ba78525268c07f52a74d813dfc77 -r 54cc305a4ff4df1b0f20fa2ca2a4ad2bb3ad13cc yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -1190,6 +1190,16 @@
                 free(o.sd.pos)
         free(o)
 
+    def __iter__(self):
+        #Get the next oct, will traverse domains
+        #Note that oct containers can be sorted 
+        #so that consecutive octs are on the same domain
+        cdef int oi
+        cdef Oct *o
+        for oi in range(self.nocts):
+            o = self.oct_list[oi]
+            yield (o.file_ind, o.domain_ind, o.domain)
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -1328,6 +1338,9 @@
                 dom_ind = 0
         self.dom_offsets[cur_dom + 2] = self.nocts
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return self.dom_offsets[domain_id + 1]
+
     cdef Oct* allocate_oct(self):
         #Allocate the memory, set to NULL or -1
         #We reserve space for n_ref particles, but keep
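
With __iter__ defined, a particle octree container can be walked directly,
yielding one (file_ind, domain_ind, domain) triple per oct.  Hypothetical
usage (fake_octree is a stand-in list, not a real container):

    from collections import defaultdict

    # Iterating a real ParticleOctreeContainer yields triples like these.
    fake_octree = [(0, 0, 1), (1, 1, 1), (0, 2, 2)]

    per_domain = defaultdict(int)
    for file_ind, domain_ind, domain in fake_octree:
        per_domain[domain] += 1
    print(dict(per_domain))  # {1: 2, 2: 1}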

diff -r 88be57c26d09ba78525268c07f52a74d813dfc77 -r 54cc305a4ff4df1b0f20fa2ca2a4ad2bb3ad13cc yt/geometry/oct_geometry_handler.py
--- a/yt/geometry/oct_geometry_handler.py
+++ b/yt/geometry/oct_geometry_handler.py
@@ -54,7 +54,7 @@
         Returns (in code units) the smallest cell size in the simulation.
         """
         return (self.parameter_file.domain_width /
-                (2**self.max_level)).min()
+                (2**(self.max_level+1))).min()
 
     def convert(self, unit):
         return self.parameter_file.conversion_factors[unit]
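
The extra factor of two corrects for octs versus cells: max_level counts
oct refinements, and each oct is subdivided once more per axis into cells,
so the finest cell width is domain_width / 2**(max_level + 1).  A quick
numerical check, assuming a unit domain:

    import numpy as np

    domain_width = np.array([1.0, 1.0, 1.0])
    max_level = 3
    # Octs at level 3 are 1/8 wide; their cells are half that again.
    smallest = (domain_width / (2 ** (max_level + 1))).min()
    assert smallest == 1.0 / 16.0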


https://bitbucket.org/yt_analysis/yt/commits/29aa47d1b2c1/
Changeset:   29aa47d1b2c1
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-15 00:00:50
Summary:     Adding a deposit method to the FieldDetector.
Affected #:  1 file

diff -r 54cc305a4ff4df1b0f20fa2ca2a4ad2bb3ad13cc -r 29aa47d1b2c1a6f7a8fc7a744e1429a7cc8933e1 yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -286,6 +286,9 @@
         self.requested.append(item)
         return defaultdict.__missing__(self, item)
 
+    def deposit(self, *args, **kwargs):
+        return np.random.random((self.nd, self.nd, self.nd))
+
     def _read_data(self, field_name):
         self.requested.append(field_name)
         FI = getattr(self.pf, "field_info", FieldInfo)
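
The FieldDetector only needs deposit to return something shaped like a
deposited field, so derived-field definitions that call deposit can be
validated without real particle data.  A minimal sketch of the same stub
pattern (FakeDetector is illustrative, not the yt class):

    import numpy as np

    class FakeDetector(object):
        # Any deposit call yields a plausibly-shaped random array,
        # never touching actual particles.
        def __init__(self, nd):
            self.nd = nd

        def deposit(self, *args, **kwargs):
            return np.random.random((self.nd, self.nd, self.nd))

    fd = FakeDetector(16)
    assert fd.deposit("positions", method="count").shape == (16, 16, 16)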


https://bitbucket.org/yt_analysis/yt/commits/1a650e92f63f/
Changeset:   1a650e92f63f
Branch:      yt-3.0
User:        xarthisius
Date:        2013-05-05 13:22:58
Summary:     Use physical_constants module instead of hardcoded values
Affected #:  13 files

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/absorption_spectrum/absorption_line.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_line.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_line.py
@@ -24,6 +24,13 @@
 """
 
 import numpy as np
+from yt.utilities.physical_constants import \
+    charge_proton_cgs, \
+    cm_per_km, \
+    km_per_cm, \
+    mass_electron_cgs, \
+    speed_of_light_cgs
+
 
 def voigt(a,u):
     """
@@ -167,10 +174,10 @@
     """
 
     ## constants
-    me = 1.6726231e-24 / 1836.        # grams mass electron 
-    e = 4.8032e-10                    # esu 
-    c = 2.99792456e5                  # km/s
-    ccgs = c * 1.e5                   # cm/s 
+    me = mass_electron_cgs              # grams mass electron 
+    e = charge_proton_cgs               # esu 
+    c = speed_of_light_cgs * km_per_cm  # km/s
+    ccgs = speed_of_light_cgs           # cm/s 
 
     ## shift lam0 by deltav
     if deltav is not None:
@@ -181,7 +188,7 @@
         lam1 = lam0
 
     ## conversions
-    vdop = vkms * 1.e5                # in cm/s
+    vdop = vkms * cm_per_km           # in cm/s
     lam0cgs = lam0 / 1.e8             # rest wavelength in cm
     lam1cgs = lam1 / 1.e8             # line wavelength in cm
     nu1 = ccgs / lam1cgs              # line freq in Hz
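
These substitutions are straight unit algebra: speed_of_light_cgs is in
cm/s, so multiplying by km_per_cm (1e-5) yields km/s, and vkms * cm_per_km
(1e5) converts back to cm/s.  A sanity check with the standard values
(constants written out here rather than imported):

    speed_of_light_cgs = 2.99792458e10  # cm / s
    km_per_cm = 1.0e-5
    cm_per_km = 1.0e5

    c = speed_of_light_cgs * km_per_cm  # ~2.998e5 km/s
    vdop = 12.0 * cm_per_km             # 12 km/s -> 1.2e6 cm/s
    assert abs(c - 2.99792458e5) < 1e-6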

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -45,7 +45,10 @@
 from yt.utilities.performance_counters import \
     yt_counters, time_function
 from yt.utilities.math_utils import periodic_dist, get_rotation_matrix
-from yt.utilities.physical_constants import rho_crit_now, mass_sun_cgs
+from yt.utilities.physical_constants import \
+    rho_crit_now, \
+    mass_sun_cgs, \
+    TINY
 
 from .hop.EnzoHop import RunHOP
 from .fof.EnzoFOF import RunFOF
@@ -60,8 +63,6 @@
     ParallelAnalysisInterface, \
     parallel_blocking_call
 
-TINY = 1.e-40
-
 class Halo(object):
     """
     A data source that returns particle information about the members of a
@@ -1428,7 +1429,7 @@
         fglob = path.join(basedir, 'halos_%d.*.bin' % n)
         files = glob.glob(fglob)
         halos = self._get_halos_binary(files)
-        #Jc = 1.98892e33/pf['mpchcm']*1e5
+        #Jc = mass_sun_cgs/ pf['mpchcm'] * 1e5
         Jc = 1.0
         length = 1.0 / pf['mpchcm']
         conv = dict(pos = np.array([length, length, length,

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/halo_mass_function/halo_mass_function.py
--- a/yt/analysis_modules/halo_mass_function/halo_mass_function.py
+++ b/yt/analysis_modules/halo_mass_function/halo_mass_function.py
@@ -31,6 +31,11 @@
     ParallelDummy, \
     ParallelAnalysisInterface, \
     parallel_blocking_call
+from yt.utilities.physical_constants import \
+    cm_per_mpc, \
+    mass_sun_cgs, \
+    rho_crit_now
+
 
 class HaloMassFcn(ParallelAnalysisInterface):
     """
@@ -259,7 +264,9 @@
         sigma8_unnorm = math.sqrt(self.sigma_squared_of_R(R));
         sigma_normalization = self.sigma8input / sigma8_unnorm;
 
-        rho0 = self.omega_matter0 * 2.78e+11; # in units of h^2 Msolar/Mpc^3
+        # rho0 in units of h^2 Msolar/Mpc^3
+        rho0 = self.omega_matter0 * \
+                rho_crit_now * cm_per_mpc**3 / mass_sun_cgs
 
         # spacing in mass of our sigma calculation
         dm = (float(self.log_mass_max) - self.log_mass_min)/self.num_sigma_bins;
@@ -294,7 +301,9 @@
     def dndm(self):
         
         # constants - set these before calling any functions!
-        rho0 = self.omega_matter0 * 2.78e+11; # in units of h^2 Msolar/Mpc^3
+        # rho0 in units of h^2 Msolar/Mpc^3
+        rho0 = self.omega_matter0 * \
+                rho_crit_now * cm_per_mpc**3 / mass_sun_cgs
         self.delta_c0 = 1.69;  # critical density for turnaround (Press-Schechter)
         
         nofmz_cum = 0.0;  # keep track of cumulative number density
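
The new rho0 expression reproduces the old hardcoded 2.78e+11:
rho_crit_now (~1.8788e-29 g/cm^3 at h = 1) times cm_per_mpc**3, divided by
mass_sun_cgs, lands on ~2.78e11 h^2 Msolar/Mpc^3.  Checking the arithmetic
with the usual constant values:

    rho_crit_now = 1.8788e-29  # g / cm^3 (h = 1)
    cm_per_mpc = 3.0857e24
    mass_sun_cgs = 1.989e33    # g

    rho0_unit = rho_crit_now * cm_per_mpc**3 / mass_sun_cgs
    assert abs(rho0_unit - 2.78e11) / 2.78e11 < 0.01  # ~2.775e11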

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
@@ -52,6 +52,9 @@
     parallel_blocking_call, \
     parallel_root_only, \
     parallel_objects
+from yt.utilities.physical_constants import \
+    mass_sun_cgs, \
+    rho_crit_now
 from yt.visualization.fixed_resolution import \
     FixedResolutionBuffer
 from yt.visualization.image_writer import write_image
@@ -951,12 +954,11 @@
         if 'ActualOverdensity' in profile.keys():
             return
 
-        rho_crit_now = 1.8788e-29 * self.pf.hubble_constant**2 # g cm^-3
-        Msun2g = 1.989e33
-        rho_crit = rho_crit_now * ((1.0 + self.pf.current_redshift)**3.0)
+        rhocritnow = rho_crit_now * self.pf.hubble_constant**2 # g cm^-3
+        rho_crit = rhocritnow * ((1.0 + self.pf.current_redshift)**3.0)
         if not self.use_critical_density: rho_crit *= self.pf.omega_matter
 
-        profile['ActualOverdensity'] = (Msun2g * profile['TotalMassMsun']) / \
+        profile['ActualOverdensity'] = (mass_sun_cgs * profile['TotalMassMsun']) / \
             profile['CellVolume'] / rho_crit
 
     def _check_for_needed_profile_fields(self):

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
--- a/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
+++ b/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
@@ -37,6 +37,9 @@
 from yt.utilities.exceptions import YTException
 from yt.utilities.linear_interpolators import \
     BilinearFieldInterpolator
+from yt.utilities.physical_constants import \
+    erg_per_eV, \
+    keV_per_Hz
 
 xray_data_version = 1
 
@@ -101,7 +104,7 @@
                   np.power(10, np.concatenate([self.log_E[:-1] - 0.5 * E_diff,
                                                [self.log_E[-1] - 0.5 * E_diff[-1],
                                                 self.log_E[-1] + 0.5 * E_diff[-1]]]))
-        self.dnu = 2.41799e17 * np.diff(self.E_bins)
+        self.dnu = keV_per_Hz * np.diff(self.E_bins)
 
     def _get_interpolator(self, data, e_min, e_max):
         r"""Create an interpolator for total emissivity in a 
@@ -311,7 +314,7 @@
     """
 
     my_si = EmissivityIntegrator(filename=filename)
-    energy_erg = np.power(10, my_si.log_E) * 1.60217646e-9
+    energy_erg = np.power(10, my_si.log_E) * erg_per_eV
 
     em_0 = my_si._get_interpolator((my_si.emissivity_primordial[..., :] / energy_erg),
                                    e_min, e_max)

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/star_analysis/sfr_spectrum.py
--- a/yt/analysis_modules/star_analysis/sfr_spectrum.py
+++ b/yt/analysis_modules/star_analysis/sfr_spectrum.py
@@ -31,9 +31,13 @@
 from yt.utilities.cosmology import \
     Cosmology, \
     EnzoCosmology
+from yt.utilities.physical_constants import \
+    sec_per_year, \
+    speed_of_light_cgs
 
-YEAR = 3.155693e7 # sec / year
-LIGHT = 2.997925e10 # cm / s
+
+YEAR = sec_per_year # sec / year
+LIGHT = speed_of_light_cgs # cm / s
 
 class StarFormationRate(object):
     r"""Calculates the star formation rate for a given population of

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/analysis_modules/sunrise_export/sunrise_exporter.py
--- a/yt/analysis_modules/sunrise_export/sunrise_exporter.py
+++ b/yt/analysis_modules/sunrise_export/sunrise_exporter.py
@@ -35,6 +35,9 @@
 import numpy as np
 from yt.funcs import *
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import \
+    kpc_per_cm, \
+    sec_per_year
 from yt.data_objects.universal_fields import add_field
 from yt.mods import *
 
@@ -524,7 +527,7 @@
                         for ax in 'xyz']).transpose()
         # Velocity is cm/s, we want it to be kpc/yr
         #vel *= (pf["kpc"]/pf["cm"]) / (365*24*3600.)
-        vel *= 1.02268944e-14 
+        vel *= kpc_per_cm * sec_per_year
     if initial_mass is None:
         #in solar masses
         initial_mass = dd["particle_mass_initial"][idx]*pf['Msun']

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -36,6 +36,11 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, parallel_objects
 from yt.utilities.lib import Octree
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs,
+    mass_sun_cgs,
+    HUGE
+
 
 __CUDA_BLOCK_SIZE = 256
 
@@ -263,8 +268,7 @@
     M = m_enc.sum()
     J = np.sqrt(((j_mag.sum(axis=0))**2.0).sum())/W
     E = np.sqrt(e_term_pre.sum()/W)
-    G = 6.67e-8 # cm^3 g^-1 s^-2
-    spin = J * E / (M*1.989e33*G)
+    spin = J * E / (M * mass_sun_cgs * gravitational_constant_cgs)
     return spin
 add_quantity("BaryonSpinParameter", function=_BaryonSpinParameter,
              combine_function=_combBaryonSpinParameter, n_ret=4)
@@ -348,7 +352,7 @@
     # Gravitational potential energy
     # We only divide once here because we have velocity in cgs, but radius is
     # in code.
-    G = 6.67e-8 / data.convert("cm") # cm^3 g^-1 s^-2
+    G = gravitational_constant_cgs / data.convert("cm") # cm^3 g^-1 s^-2
     # Check for periodicity of the clump.
     two_root = 2. * np.array(data.pf.domain_width) / np.array(data.pf.domain_dimensions)
     domain_period = data.pf.domain_right_edge - data.pf.domain_left_edge
@@ -570,15 +574,15 @@
     mins, maxs = [], []
     for field in fields:
         if data[field].size < 1:
-            mins.append(1e90)
-            maxs.append(-1e90)
+            mins.append(HUGE)
+            maxs.append(-HUGE)
             continue
         if filter is None:
             if non_zero:
                 nz_filter = data[field]>0.0
                 if not nz_filter.any():
-                    mins.append(1e90)
-                    maxs.append(-1e90)
+                    mins.append(HUGE)
+                    maxs.append(-HUGE)
                     continue
             else:
                 nz_filter = None
@@ -593,8 +597,8 @@
                 mins.append(np.nanmin(data[field][nz_filter]))
                 maxs.append(np.nanmax(data[field][nz_filter]))
             else:
-                mins.append(1e90)
-                maxs.append(-1e90)
+                mins.append(HUGE)
+                maxs.append(-HUGE)
     return len(fields), mins, maxs
 def _combExtrema(data, n_fields, mins, maxs):
     mins, maxs = np.atleast_2d(mins, maxs)
@@ -626,7 +630,7 @@
     This function returns the location of the maximum of a set
     of fields.
     """
-    ma, maxi, mx, my, mz = -1e90, -1, -1, -1, -1
+    ma, maxi, mx, my, mz = -HUGE, -1, -1, -1, -1
     if data[field].size > 0:
         maxi = np.argmax(data[field])
         ma = data[field][maxi]
@@ -644,7 +648,7 @@
     This function returns the location of the minimum of a set
     of fields.
     """
-    ma, mini, mx, my, mz = 1e90, -1, -1, -1, -1
+    ma, mini, mx, my, mz = HUGE, -1, -1, -1, -1
     if data[field].size > 0:
         mini = np.argmin(data[field])
         ma = data[field][mini]

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -48,15 +48,16 @@
     NeedsParameter
 
 from yt.utilities.physical_constants import \
-     mh, \
-     me, \
-     sigma_thompson, \
-     clight, \
-     kboltz, \
-     G, \
-     rho_crit_now, \
-     speed_of_light_cgs, \
-     km_per_cm, keV_per_K
+    mass_sun_cgs, \
+    mh, \
+    me, \
+    sigma_thompson, \
+    clight, \
+    kboltz, \
+    G, \
+    rho_crit_now, \
+    speed_of_light_cgs, \
+    km_per_cm, keV_per_K
 
 from yt.utilities.math_utils import \
     get_sph_r_component, \
@@ -379,7 +380,7 @@
 def _CellMass(field, data):
     return data["Density"] * data["CellVolume"]
 def _convertCellMassMsun(data):
-    return 5.027854e-34 # g^-1
+    return 1.0 / mass_sun_cgs # g^-1
 add_field("CellMass", function=_CellMass, units=r"\rm{g}")
 add_field("CellMassMsun", units=r"M_{\odot}",
           function=_CellMass,
@@ -458,6 +459,7 @@
     # lens to source
     DLS = data.pf.parameters['cosmology_calculator'].AngularDiameterDistance(
         data.pf.current_redshift, data.pf.parameters['lensing_source_redshift'])
+    # TODO: convert 1.5e14 to constants
     return (((DL * DLS) / DS) * (1.5e14 * data.pf.omega_matter * 
                                 (data.pf.hubble_constant / speed_of_light_cgs)**2 *
                                 (1 + data.pf.current_redshift)))
@@ -520,7 +522,7 @@
     return ((data["Density"].astype('float64')**2.0) \
             *data["Temperature"]**0.5)
 def _convertXRayEmissivity(data):
-    return 2.168e60
+    return 2.168e60 #TODO: convert me to constants
 add_field("XRayEmissivity", function=_XRayEmissivity,
           convert_function=_convertXRayEmissivity,
           projection_conversion="1")
@@ -927,8 +929,8 @@
 add_field("MeanMolecularWeight",function=_MeanMolecularWeight,units=r"")
 
 def _JeansMassMsun(field,data):
-    MJ_constant = (((5*kboltz)/(G*mh))**(1.5)) * \
-    (3/(4*3.1415926535897931))**(0.5) / 1.989e33
+    MJ_constant = (((5.0 * kboltz) / (G * mh)) ** (1.5)) * \
+    (3.0 / (4.0 * np.pi)) ** (0.5) / mass_sun_cgs
 
     return (MJ_constant *
             ((data["Temperature"]/data["MeanMolecularWeight"])**(1.5)) *

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/geometry/object_finding_mixin.py
--- a/yt/geometry/object_finding_mixin.py
+++ b/yt/geometry/object_finding_mixin.py
@@ -32,6 +32,8 @@
 from yt.utilities.lib import \
     MatchPointsToGrids, \
     GridTree
+from yt.utilities.physical_constants import \
+    HUGE
 
 class ObjectFindingMixin(object) :
 
@@ -83,7 +85,7 @@
         Returns (value, center) of location of minimum for a given field
         """
         gI = np.where(self.grid_levels >= 0) # Slow but pedantic
-        minVal = 1e100
+        minVal = HUGE
         for grid in self.grids[gI[0]]:
             mylog.debug("Checking %s (level %s)", grid.id, grid.Level)
             val, coord = grid.find_min(field)

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -25,10 +25,15 @@
 """
 
 import numpy as np
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs, \
+    km_per_cm, \
+    pc_per_mpc, \
+    speed_of_light_cgs
 
-c_kms = 2.99792458e5 # c in km/s
-G = 6.67259e-8 # cgs
-kmPerMpc = 3.08567758e19
+c_kms = speed_of_light_cgs * km_per_cm # c in km/s
+G = gravitational_constant_cgs
+kmPerMpc = km_per_pc * pc_per_mpc
 
 class Cosmology(object):
     def __init__(self, HubbleConstantNow = 71.0,
@@ -162,6 +167,7 @@
         """
         # Changed 2.52e17 to 2.52e19 because H_0 is in km/s/Mpc, 
         # instead of 100 km/s/Mpc.
+        # TODO: Move me to physical_units
         return 2.52e19 / np.sqrt(self.OmegaMatterNow) / \
             self.HubbleConstantNow / np.power(1 + self.InitialRedshift,1.5)
 

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/utilities/physical_constants.py
--- a/yt/utilities/physical_constants.py
+++ b/yt/utilities/physical_constants.py
@@ -85,3 +85,7 @@
 kboltz = boltzmann_constant_cgs
 hcgs = planck_constant_cgs
 sigma_thompson = cross_section_thompson_cgs
+
+# Miscellaneous
+HUGE = 1.0e90
+TINY = 1.0e-40

diff -r 346c728780cb0a4607a16608a8706178a51e1bf3 -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e yt/visualization/volume_rendering/CUDARayCast.py
--- a/yt/visualization/volume_rendering/CUDARayCast.py
+++ b/yt/visualization/volume_rendering/CUDARayCast.py
@@ -29,6 +29,8 @@
 import yt.extensions.HierarchySubset as hs
 import numpy as np
 import h5py, time
+from yt.utilities.physical_constants import \
+    mass_hydrogen_cgs
 
 import matplotlib;matplotlib.use("Agg");import pylab
 
@@ -62,7 +64,7 @@
 
     print "Constructing transfer function."
     if "Data" in fn:
-        mh = np.log10(1.67e-24)
+        mh = np.log10(mass_hydrogen_cgs)
         tf = ColorTransferFunction((7.5+mh, 14.0+mh))
         tf.add_gaussian( 8.25+mh, 0.002, [0.2, 0.2, 0.4, 0.1])
         tf.add_gaussian( 9.75+mh, 0.002, [0.0, 0.0, 0.3, 0.1])


https://bitbucket.org/yt_analysis/yt/commits/557e464b6a22/
Changeset:   557e464b6a22
Branch:      yt-3.0
User:        xarthisius
Date:        2013-05-08 19:32:05
Summary:     Fix import
Affected #:  1 file

diff -r 1a650e92f63f47456a3dcbba4eda4cca65653c2e -r 557e464b6a22cb9a51f842775ec0f8d592312be1 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -37,8 +37,8 @@
     ParallelAnalysisInterface, parallel_objects
 from yt.utilities.lib import Octree
 from yt.utilities.physical_constants import \
-    gravitational_constant_cgs,
-    mass_sun_cgs,
+    gravitational_constant_cgs, \
+    mass_sun_cgs, \
     HUGE
 
 


https://bitbucket.org/yt_analysis/yt/commits/a3bf645f0944/
Changeset:   a3bf645f0944
Branch:      yt-3.0
User:        xarthisius
Date:        2013-05-16 19:36:28
Summary:     merging
Affected #:  6 files

diff -r 557e464b6a22cb9a51f842775ec0f8d592312be1 -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 yt/frontends/artio/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/artio/tests/test_outputs.py
@@ -0,0 +1,51 @@
+"""
+ARTIO frontend tests 
+
+Author: Samuel Leitner <sam.leitner at gmail.com>
+Affiliation: University of Maryland College Park
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2012 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.testing import *
+from yt.utilities.answer_testing.framework import \
+    requires_pf, \
+    data_dir_load, \
+    PixelizedProjectionValuesTest, \
+    FieldValuesTest
+from yt.frontends.artio.api import ARTIOStaticOutput
+
+_fields = ("Temperature", "Density", "VelocityMagnitude") 
+
+aiso5 = "artio/aiso_a0.9005.art"
+@requires_pf(aiso5)
+def test_aiso5():
+    pf = data_dir_load(aiso5)
+    yield assert_equal, str(pf), "aiso_a0.9005.art"
+    dso = [ None, ("sphere", ("max", (0.1, 'unitary')))]
+    for field in _fields:
+        for axis in [0, 1, 2]:
+            for ds in dso:
+                for weight_field in [None, "Density"]:
+                    yield PixelizedProjectionValuesTest(
+                        aiso5, axis, field, weight_field,
+                        ds)
+                yield FieldValuesTest(
+                        aiso5, field, ds)
+

diff -r 557e464b6a22cb9a51f842775ec0f8d592312be1 -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -417,6 +417,8 @@
         fields = []
         for ptype in self.parameter_file["AppendActiveParticleType"]:
             select_grids = self.grid_active_particle_count[ptype].flat
+            if np.any(select_grids) == False:
+                continue
             gs = self.grids[select_grids > 0]
             g = gs[0]
             handle = h5py.File(g.filename)

diff -r 557e464b6a22cb9a51f842775ec0f8d592312be1 -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -39,6 +39,8 @@
            StaticOutput
 from yt.utilities.lib import \
     get_box_grids_level
+from yt.utilities.io_handler import \
+    io_registry
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 
@@ -78,6 +80,10 @@
         if self.pf.dimensionality < 3: self.dds[2] = 1.0
         self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
 
+    @property
+    def filename(self):
+        return None
+
 class GDFHierarchy(GridGeometryHandler):
 
     grid = GDFGrid
@@ -85,19 +91,19 @@
     def __init__(self, pf, data_style='grid_data_format'):
         self.parameter_file = weakref.proxy(pf)
         self.data_style = data_style
+        self.max_level = 10  # FIXME
         # for now, the hierarchy file is the parameter file!
         self.hierarchy_filename = self.parameter_file.parameter_filename
         self.directory = os.path.dirname(self.hierarchy_filename)
-        self._fhandle = h5py.File(self.hierarchy_filename,'r')
-        GridGeometryHandler.__init__(self,pf,data_style)
+        self._handle = pf._handle
+        GridGeometryHandler.__init__(self, pf, data_style)
 
-        self._fhandle.close()
 
     def _initialize_data_storage(self):
         pass
 
     def _detect_fields(self):
-        self.field_list = self._fhandle['field_types'].keys()
+        self.field_list = self._handle['field_types'].keys()
 
     def _setup_classes(self):
         dd = self._get_data_reader_dict()
@@ -105,10 +111,10 @@
         self.object_types.sort()
 
     def _count_grids(self):
-        self.num_grids = self._fhandle['/grid_parent_id'].shape[0]
+        self.num_grids = self._handle['/grid_parent_id'].shape[0]
 
     def _parse_hierarchy(self):
-        f = self._fhandle
+        f = self._handle
         dxs = []
         self.grids = np.empty(self.num_grids, dtype='object')
         levels = (f['grid_level'][:]).copy()
@@ -139,7 +145,6 @@
         for gi, g in enumerate(self.grids):
             g._prepare_grid()
             g._setup_dx()
-
         for gi, g in enumerate(self.grids):
             g.Children = self._get_grid_children(g)
             for g1 in g.Children:
@@ -159,22 +164,39 @@
     def _setup_derived_fields(self):
         self.derived_field_list = []
 
+    def _get_box_grids(self, left_edge, right_edge):
+        """
+        Gets back all the grids between a left edge and right edge
+        """
+        eps = np.finfo(np.float64).eps
+        grid_i = np.where((np.all((self.grid_right_edge - left_edge) > eps, axis=1) \
+                        &  np.all((right_edge - self.grid_left_edge) > eps, axis=1)) == True)
+
+        return self.grids[grid_i], grid_i
+
+
     def _get_grid_children(self, grid):
         mask = np.zeros(self.num_grids, dtype='bool')
-        grids, grid_ind = self.get_box_grids(grid.LeftEdge, grid.RightEdge)
+        grids, grid_ind = self._get_box_grids(grid.LeftEdge, grid.RightEdge)
         mask[grid_ind] = True
         return [g for g in self.grids[mask] if g.Level == grid.Level + 1]
 
+    def _setup_data_io(self):
+        self.io = io_registry[self.data_style](self.parameter_file)
+
 class GDFStaticOutput(StaticOutput):
     _hierarchy_class = GDFHierarchy
     _fieldinfo_fallback = GDFFieldInfo
     _fieldinfo_known = KnownGDFFields
+    _handle = None
 
     def __init__(self, filename, data_style='grid_data_format',
                  storage_filename = None):
-        StaticOutput.__init__(self, filename, data_style)
+        if self._handle is not None: return
+        self._handle = h5py.File(filename, "r")
         self.storage_filename = storage_filename
         self.filename = filename
+        StaticOutput.__init__(self, filename, data_style)
 
     def _set_units(self):
         """
@@ -202,15 +224,14 @@
             except:
                 self.units[field_name] = 1.0
             try:
-                current_fields_unit = current_field.attrs['field_units'][0]
+                current_fields_unit = current_field.attrs['field_units']
             except:
                 current_fields_unit = ""
             self._fieldinfo_known.add_field(field_name, function=NullFunc, take_log=False,
                    units=current_fields_unit, projected_units="",
                    convert_function=_get_convert(field_name))
-
-        self._handle.close()
-        del self._handle
+        for p, v in self.units.items():
+            self.conversion_factors[p] = v
 
     def _parse_parameter_file(self):
         self._handle = h5py.File(self.parameter_filename, "r")
@@ -241,8 +262,6 @@
                 self.hubble_constant = self.cosmological_simulation = 0.0
         self.parameters['Time'] = 1.0 # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
-        self._handle.close()
-        del self._handle
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
@@ -259,3 +278,5 @@
     def __repr__(self):
         return self.basename.rsplit(".", 1)[0]
 
+    def __del__(self):
+        self._handle.close()

diff -r 557e464b6a22cb9a51f842775ec0f8d592312be1 -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -25,31 +25,72 @@
   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 """
+import numpy as np
+from yt.funcs import \
+    mylog
 from yt.utilities.io_handler import \
-           BaseIOHandler
-import h5py
+    BaseIOHandler
 
+
+def field_dname(grid_id, field_name):
+    return "/data/grid_%010i/%s" % (grid_id, field_name)
+
+
+# TODO all particle bits were removed
 class IOHandlerGDFHDF5(BaseIOHandler):
     _data_style = "grid_data_format"
     _offset_string = 'data:offsets=0'
     _data_string = 'data:datatype=0'
 
-    def _field_dict(self,fhandle):
-        keys = fhandle['field_types'].keys()
-        val = fhandle['field_types'].keys()
-        return dict(zip(keys,val))
+    def __init__(self, pf, *args, **kwargs):
+        # TODO check if _num_per_stride is needed
+        self._num_per_stride = kwargs.pop("num_per_stride", 1000000)
+        BaseIOHandler.__init__(self, *args, **kwargs)
+        self.pf = pf
+        self._handle = pf._handle
 
-    def _read_field_names(self,grid):
-        fhandle = h5py.File(grid.filename,'r')
-        names = fhandle['field_types'].keys()
-        fhandle.close()
-        return names
 
-    def _read_data(self,grid,field):
-        fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
-        data = (fhandle['/data/grid_%010i/'%grid.id+field][:]).copy()
-        fhandle.close()
-        if grid.pf.field_ordering == 1:
-            return data.T
+    def _read_data_set(self, grid, field):
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)
         else:
-            return data
+            data = self._handle[field_dname(grid.id, field)][:, :, :]
+        return data.astype("float64")
+
+    def _read_data_slice(self, grid, field, axis, coord):
+        slc = [slice(None), slice(None), slice(None)]
+        slc[axis] = slice(coord, coord + 1)
+        if self.pf.field_ordering == 1:
+            data = self._handle[field_dname(grid.id, field)][:].swapaxes(0, 2)[slc]
+        else:
+            data = self._handle[field_dname(grid.id, field)][slc]
+        return data.astype("float64")
+
+    def _read_fluid_selection(self, chunks, selector, fields, size):
+        chunks = list(chunks)
+        if any((ftype != "gas" for ftype, fname in fields)):
+            raise NotImplementedError
+        fhandle = self._handle
+        rv = {}
+        for field in fields:
+            ftype, fname = field
+            rv[field] = np.empty(
+                size, dtype=fhandle[field_dname(0, fname)].dtype)
+        ngrids = sum(len(chunk.objs) for chunk in chunks)
+        mylog.debug("Reading %s cells of %s fields in %s blocks",
+                    size, [fname for ftype, fname in fields], ngrids)
+        for field in fields:
+            ftype, fname = field
+            ind = 0
+            for chunk in chunks:
+                for grid in chunk.objs:
+                    mask = grid.select(selector)  # caches
+                    if mask is None:
+                        continue
+                    if self.pf.field_ordering == 1:
+                        data = fhandle[field_dname(grid.id, fname)][:].swapaxes(0, 2)[mask]
+                    else:
+                        data = fhandle[field_dname(grid.id, fname)][mask]
+                    rv[field][ind:ind + data.size] = data
+                    ind += data.size
+        return rv
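
field_dname simply builds the zero-padded HDF5 dataset path used throughout
the new IO handler; for example:

    def field_dname(grid_id, field_name):
        return "/data/grid_%010i/%s" % (grid_id, field_name)

    assert field_dname(42, "Density") == "/data/grid_0000000042/Density"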

diff -r 557e464b6a22cb9a51f842775ec0f8d592312be1 -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 yt/utilities/fortran_utils.py
--- a/yt/utilities/fortran_utils.py
+++ b/yt/utilities/fortran_utils.py
@@ -81,8 +81,6 @@
         s2 = vals.pop(0)
         if s1 != s2:
             size = struct.calcsize(endian + "I" + "".join(n*[t]) + "I")
-            print "S1 = %s ; S2 = %s ; %s %s %s = %s" % (
-                    s1, s2, a, n, t, size)
         assert(s1 == s2)
         if n == 1: v = v[0]
         if type(a)==tuple:


https://bitbucket.org/yt_analysis/yt/commits/4dcdcbad4ae3/
Changeset:   4dcdcbad4ae3
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-18 00:42:16
Summary:     Inserting a missing import to fix a NameError caused by the previous PR merge.
Affected #:  1 file

diff -r a3bf645f0944f39b2550d4c445a6d59f4a017b22 -r 4dcdcbad4ae304727cd63a5eca39a0bab7dab952 yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -29,6 +29,7 @@
     gravitational_constant_cgs, \
     km_per_cm, \
     pc_per_mpc, \
+    km_per_pc, \
     speed_of_light_cgs
 
 c_kms = speed_of_light_cgs * km_per_cm # c in km/s


https://bitbucket.org/yt_analysis/yt/commits/6eaa2394b591/
Changeset:   6eaa2394b591
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-21 20:52:30
Summary:     Removing ARTIOOctreeContainer (S.L. authorized.)
Affected #:  2 files

diff -r 29aa47d1b2c1a6f7a8fc7a744e1429a7cc8933e1 -r 6eaa2394b59160d94d04da9ae653ac69c9f64e00 yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -69,13 +69,6 @@
     # OctAllocationContainer.offset if such a thing exists.
     cdef np.int64_t get_domain_offset(self, int domain_id)
 
-cdef class ARTIOOctreeContainer(OctreeContainer):
-    cdef OctAllocationContainer **domains
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3])
-    cdef Oct *next_free_oct( self, int curdom )
-    cdef int valid_domain_oct(self, int curdom, Oct *parent)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, int curlevel, double pp[3])
-
 cdef class RAMSESOctreeContainer(OctreeContainer):
     cdef OctAllocationContainer **domains
     cdef Oct *next_root(self, int domain_id, int ind[3])

diff -r 29aa47d1b2c1a6f7a8fc7a744e1429a7cc8933e1 -r 6eaa2394b59160d94d04da9ae653ac69c9f64e00 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -328,331 +328,6 @@
                 bounds[i, 3+ii] = size[ii]
         return bounds
 
-cdef class ARTIOOctreeContainer(OctreeContainer):
-
-    def allocate_domains(self, domain_counts):
-        cdef int count, i
-        cdef OctAllocationContainer *cur = self.cont
-        assert(cur == NULL)
-        self.max_domain = len(domain_counts) # 1-indexed
-        self.domains = <OctAllocationContainer **> malloc(
-            sizeof(OctAllocationContainer *) * len(domain_counts))
-        for i, count in enumerate(domain_counts):
-            cur = allocate_octs(count, cur)
-            if self.cont == NULL: self.cont = cur
-            self.domains[i] = cur
-        
-    def __dealloc__(self):
-        # This gets called BEFORE the superclass deallocation.  But, both get
-        # called.
-        if self.domains != NULL: free(self.domains)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count(self, np.ndarray[np.uint8_t, ndim=1, cast=True] mask,
-                     split = False):
-        cdef int n = mask.shape[0]
-        cdef int i, dom
-        cdef OctAllocationContainer *cur
-        cdef np.ndarray[np.int64_t, ndim=1] count
-        count = np.zeros(self.max_domain, 'int64')
-        # This is the idiom for iterating over many containers.
-        cur = self.cont
-        for i in range(n):
-            if i - cur.offset >= cur.n: cur = cur.next
-            if mask[i] == 1:
-                count[cur.my_octs[i - cur.offset].domain - 1] += 1
-        return count
-
-    def check(self, int curdom):
-        cdef int dind, pi
-        cdef Oct oct
-        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
-        cdef int nbad = 0
-        for pi in range(cont.n_assigned):
-            oct = cont.my_octs[pi]
-            for i in range(2):
-                for j in range(2):
-                    for k in range(2):
-                        if oct.children[i][j][k] != NULL and \
-                           oct.children[i][j][k].level != oct.level + 1:
-                            if curdom == 61:
-                                print pi, oct.children[i][j][k].level,
-                                print oct.level
-                            nbad += 1
-        print "DOMAIN % 3i HAS % 9i BAD OCTS (%s / %s / %s)" % (curdom, nbad, 
-            cont.n - cont.n_assigned, cont.n_assigned, cont.n)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *next_free_oct( self, int curdom ) :
-        cdef OctAllocationContainer *cont
-        cdef Oct *next_oct
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            print "Error, invalid domain or unallocated domains"
-            raise RuntimeError
-        
-        cont = self.domains[curdom - 1]
-        if cont.n_assigned >= cont.n :
-            print "Error, ran out of octs in domain curdom"
-            raise RuntimeError
-
-        self.nocts += 1
-        next_oct = &cont.my_octs[cont.n_assigned]
-        cont.n_assigned += 1
-        return next_oct
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    cdef int valid_domain_oct(self, int curdom, Oct *parent) :
-        cdef OctAllocationContainer *cont
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            raise RuntimeError
-        cont = self.domains[curdom - 1]
-
-        if parent == NULL or parent < &cont.my_octs[0] or \
-                parent > &cont.my_octs[cont.n_assigned] :
-            return 0
-        else :
-            return 1
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3]):
-        cdef np.int64_t ind[3]
-        cdef np.float64_t dds
-        cdef int i
-        for i in range(3):
-            dds = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> floor((ppos[i]-self.DLE[i])/dds)
-        return self.root_mesh[ind[0]][ind[1]][ind[2]]
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, 
-                    int curlevel, np.float64_t pp[3]):
-
-        cdef int level, i, ind[3]
-        cdef Oct *cur, *next_oct
-        cdef np.int64_t pos[3]
-        cdef np.float64_t dds
-
-        if curlevel < 0 :
-            raise RuntimeError
-        for i in range(3):
-            if pp[i] < self.DLE[i] or pp[i] > self.DRE[i] :
-                raise RuntimeError
-            dds = (self.DRE[i] - self.DLE[i])/(<np.int64_t>self.nn[i])
-            pos[i] = <np.int64_t> floor((pp[i]-self.DLE[i])*<np.float64_t>(1<<curlevel)/dds)
-
-        if curlevel == 0 :
-            cur = NULL
-        elif parent == NULL :
-            cur = self.get_root_oct(pp)
-            assert( cur != NULL )
-
-            # Now we find the location we want
-            for level in range(1,curlevel):
-                # At every level, find the cell this oct lives inside
-                for i in range(3) :
-                    if pos[i] < (2*cur.pos[i]+1)<<(curlevel-level) :
-                        ind[i] = 0
-                    else :
-                        ind[i] = 1
-                cur = cur.children[ind[0]][ind[1]][ind[2]]
-                if cur == NULL:
-                    # in ART we don't allocate down to curlevel 
-                    # if parent doesn't exist
-                    print "Error, no oct exists at that level"
-                    raise RuntimeError
-        else :
-            if not self.valid_domain_oct(curdom,parent) or \
-                    parent.level != curlevel - 1:
-                raise RuntimeError
-            cur = parent
- 
-        next_oct = self.next_free_oct( curdom )
-        if cur == NULL :
-            self.root_mesh[pos[0]][pos[1]][pos[2]] = next_oct
-        else :
-            for i in range(3) :
-                if pos[i] < 2*cur.pos[i]+1 :
-                    ind[i] = 0
-                else :
-                    ind[i] = 1
-            if cur.level != curlevel - 1 or  \
-                    cur.children[ind[0]][ind[1]][ind[2]] != NULL :
-                print "Error in add_oct: child already filled!"
-                raise RuntimeError
-
-            cur.children[ind[0]][ind[1]][ind[2]] = next_oct
-        for i in range(3) :
-            next_oct.pos[i] = pos[i]
-        next_oct.domain = curdom
-        next_oct.parent = cur
-        next_oct.file_ind = 1
-        next_oct.level = curlevel
-        return next_oct
-
-    # ii:mask/art ; ci=ramses loop backward (k<-fast, j ,i<-slow) 
-    # ii=0 000 art 000 ci 000 
-    # ii=1 100 art 100 ci 001 
-    # ii=2 010 art 010 ci 010 
-    # ii=3 110 art 110 ci 011
-    # ii=4 001 art 001 ci 100
-    # ii=5 101 art 011 ci 101
-    # ii=6 011 art 011 ci 110
-    # ii=7 111 art 111 ci 111
-    # keep coords ints so multiply by pow(2,1) when increasing level.
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def icoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii, level
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="int64")
-        ci = 0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.domain_ind, ii] == 0: continue
-                        # Note that we bit shift because o.pos is oct position,
-                        # not cell position, and it is with respect to octs,
-                        # not cells.
-                        coords[ci, 0] = (o.pos[0] << 1) + i
-                        coords[ci, 1] = (o.pos[1] << 1) + j
-                        coords[ci, 2] = (o.pos[2] << 1) + k
-                        ci += 1
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def ires(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=1] levels
-        levels = np.empty(cell_count, dtype="int64")
-        ci = 0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[oi + cur.offset, i] == 0: continue
-                levels[ci] = o.level
-                ci +=1
-        return levels
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count_levels(self, int max_level, int domain_id,
-                     np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
-        cdef np.ndarray[np.int64_t, ndim=1] level_count
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef int oi, i
-        level_count = np.zeros(max_level+1, 'int64')
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[o.domain_ind, i] == 0: continue
-                level_count[o.level] += 1
-        return level_count
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fcoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef np.float64_t pos[3]
-        cdef np.float64_t base_dx[3], dx[3]
-        n = mask.shape[0]
-        cdef np.ndarray[np.float64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="float64")
-        ci =0 
-        for i in range(3):
-            # This is the base_dx, but not the base distance from the center
-            # position.  Note that the positions will also all be offset by
-            # dx/2.0.  This is also for *oct grids*, not cells.
-            base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(3):
-                # This gives the *grid* width for this level
-                dx[i] = base_dx[i] / (1 << o.level)
-                # o.pos is the *grid* index, so pos[i] is the center of the
-                # first cell in the grid
-                pos[i] = self.DLE[i] + o.pos[i]*dx[i] + dx[i]/4.0
-                dx[i] = dx[i] / 2.0 # This is now the *offset* 
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.domain_ind, ii] == 0: continue
-                        coords[ci, 0] = pos[0] + dx[0] * i
-                        coords[ci, 1] = pos[1] + dx[1] * j
-                        coords[ci, 2] = pos[2] + dx[2] * k
-                        ci +=1 
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fill_mask(self, int domain, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
-        cdef np.ndarray[np.float32_t, ndim=1] source
-        cdef np.ndarray[np.float64_t, ndim=1] dest
-        cdef OctAllocationContainer *dom = self.domains[domain - 1]
-        cdef Oct *o
-        cdef int n
-        cdef int i, j, k, ii
-        cdef int local_pos, local_filled
-        cdef np.float64_t val
-        for key in dest_fields:
-            local_filled = 0
-            dest = dest_fields[key]
-            source = source_fields[key]
-            # snl: an alternative to filling level 0 yt-octs is to produce a 
-            # mapping between the mask and the source read order
-            for n in range(dom.n):
-                o = &dom.my_octs[n]
-                for k in range(2):
-                    for j in range(2):
-                        for i in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.domain_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.domain_ind*8+ii]
-                            local_filled += 1
-        return local_filled
-
 cdef class RAMSESOctreeContainer(OctreeContainer):
 
     cdef np.int64_t get_domain_offset(self, int domain_id):


https://bitbucket.org/yt_analysis/yt/commits/1fb97d1d3a5d/
Changeset:   1fb97d1d3a5d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-21 22:25:16
Summary:     We only want to deposit into an array of the correct size.  Also, skip bad offsets.
Affected #:  2 files

diff -r 6eaa2394b59160d94d04da9ae653ac69c9f64e00 -r 1fb97d1d3a5dad27ea29dda7b187404cdd261ecd yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -136,7 +136,7 @@
         cls = getattr(particle_deposit, "deposit_%s" % method, None)
         if cls is None:
             raise YTParticleDepositionNotImplemented(method)
-        nvals = self.domain_ind.size * 8
+        nvals = (self.domain_ind >= 0).sum() * 8
         op = cls(nvals) # We allocate number of zones, not number of octs
         op.initialize()
         op.process_octree(self.oct_handler, self.domain_ind, positions, fields,

diff -r 6eaa2394b59160d94d04da9ae653ac69c9f64e00 -r 1fb97d1d3a5dad27ea29dda7b187404cdd261ecd yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -78,6 +78,7 @@
             #print domain_id, oct.local_ind, oct.ind, oct.domain, oct.pos[0], oct.pos[1], oct.pos[2]
             # Note that this has to be our local index, not our in-file index.
             offset = dom_ind[oct.domain_ind - moff] * 8
+            if offset < 0: continue
             # Check that we found the oct ...
             self.process(dims, oi.left_edge, oi.dds,
                          offset, pos, field_vals)
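
Both halves of this change handle octs outside the selected domain:
domain_ind stores a negative sentinel for them, so the allocation counts
only non-negative entries and the deposit loop skips negative offsets.  A
small sketch of the sentinel arithmetic, assuming -1 marks excluded octs:

    import numpy as np

    domain_ind = np.array([0, 1, -1, 2, -1], dtype="int64")
    nvals = (domain_ind >= 0).sum() * 8  # zones for included octs only
    assert nvals == 24

    for ind in domain_ind:
        offset = ind * 8
        if offset < 0:
            continue  # excluded oct: nothing was allocated for it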


https://bitbucket.org/yt_analysis/yt/commits/85d4b7d54649/
Changeset:   85d4b7d54649
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-08 02:51:21
Summary:     Fixed particle IO when the number of particles exceeds 4096^2.
Affected #:  1 file

diff -r 05a99508b3cd77bd87d83bae2d4a0f850a013e00 -r 85d4b7d546498f40f3951b18c93a3db56132bca2 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -330,32 +330,58 @@
     f.seek(pos)
     return unitary_center, fl, iocts, nLevel, root_level
 
+def get_ranges(skip, count, field, words=6, real_size=4, np_per_page=4096**2, 
+                  num_pages=1):
+    #translate every particle index into a file position ranges
+    ranges = []
+    arr_size = np_per_page * real_size
+    page_size = words * np_per_page * real_size
+    idxa, idxb = 0, 0
+    posa, posb = 0, 0
+    left = count
+    for page in range(num_pages):
+        idxb += np_per_page
+        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
+            posb += arr_size
+            if i == field or fname == field:
+                if skip < np_per_page and count > 0:
+                    left_in_page = np_per_page - skip
+                    this_count = min(left_in_page, count)
+                    count -= this_count
+                    start = posa + skip * real_size
+                    end = posa + this_count * real_size
+                    ranges.append((start, this_count))
+                    skip = 0
+                    assert end <= posb
+                else:
+                    skip -= np_per_page
+            posa += arr_size
+        idxa += np_per_page
+    assert count == 0
+    return ranges
 
-def read_particles(file, Nrow, idxa=None, idxb=None, field=None):
+
+def read_particles(file, Nrow, idxa, idxb, field):
     words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4  # for file_particle_data; not always true?
-    np_per_page = Nrow**2  # defined in ART a_setup.h
+    np_per_page = Nrow**2  # defined in ART a_setup.h, # of particles/page
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
     data = np.array([], 'f4')
     fh = open(file, 'r')
-    totalp = idxb-idxa
-    left = totalp
-    for page in range(num_pages):
-        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
-            if i == field or fname == field:
-                if idxa is not None:
-                    fh.seek(real_size*idxa, 1)
-                    count = min(np_per_page, left)
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                    pageleft = np_per_page-count-idxa
-                    fh.seek(real_size*pageleft, 1)
-                    left -= count
-                else:
-                    count = np_per_page
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                data = np.concatenate((data, temp))
-            else:
-                fh.seek(4*np_per_page, 1)
+    skip, count = idxa, idxb - idxa
+    import pdb; pdb.set_trace()
+    kwargs = dict(words=words, real_size=real_size, 
+                  np_per_page=np_per_page, num_pages=num_pages)
+    ranges = get_ranges(skip, count, field, **kwargs)
+    data = None
+    for seek, this_count in ranges:
+        fh.seek(seek)
+        temp = np.fromfile(fh, count=this_count, dtype='>f4')
+        if data is None:
+            data = temp
+        else:
+            data = np.concatenate((data, temp))
+    fh.close()
     return data
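
get_ranges translates a (skip, count) particle window into absolute file
offsets within the paged layout, where each page stores np_per_page reals
for each of the six words x, y, z, vx, vy, vz in turn.  A simplified
re-implementation with shrunken parameters, for illustration only:

    def toy_ranges(skip, count, field_index, words=6, real_size=4,
                   np_per_page=4, num_pages=3):
        # Walk page by page, field array by field array, emitting
        # (byte offset, particle count) pairs for the wanted field.
        ranges, posa = [], 0
        for page in range(num_pages):
            for i in range(words):
                if i == field_index and skip < np_per_page and count > 0:
                    this_count = min(np_per_page - skip, count)
                    ranges.append((posa + skip * real_size, this_count))
                    count -= this_count
                    skip = 0
                elif i == field_index:
                    skip -= np_per_page
                posa += np_per_page * real_size
        return ranges

    # Particles 3..9 of field 0 ("x"), 4 particles per page:
    print(toy_ranges(3, 7, 0))  # [(12, 1), (96, 4), (192, 2)]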
 
 


https://bitbucket.org/yt_analysis/yt/commits/14d07dcd9760/
Changeset:   14d07dcd9760
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-08 02:51:32
Summary:     Removed a leftover pdb breakpoint.
Affected #:  1 file

diff -r 85d4b7d546498f40f3951b18c93a3db56132bca2 -r 14d07dcd97608a40b3634d065ff47aca3ea9919a yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -369,7 +369,6 @@
     data = np.array([], 'f4')
     fh = open(file, 'r')
     skip, count = idxa, idxb - idxa
-    import pdb; pdb.set_trace()
     kwargs = dict(words=words, real_size=real_size, 
                   np_per_page=np_per_page, num_pages=num_pages)
     ranges = get_ranges(skip, count, field, **kwargs)


https://bitbucket.org/yt_analysis/yt/commits/648cf68c5c16/
Changeset:   648cf68c5c16
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-21 22:40:33
Summary:     Added prototype std and sum particle deposit methods.
Affected #:  1 file
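
For reference, the single-pass variance method the StdParticleField comment
cites (Welford-style updates) keeps a running mean and sum of squared
deviations per zone, so the std needs no second sweep over the particles.
A plain-Python sketch of the update rule (the prototype committed here does
not implement it yet; its process() still just sums):

    import numpy as np

    def welford_std(values):
        # One pass: update count, mean, and sum of squared deviations.
        n, mean, m2 = 0, 0.0, 0.0
        for x in values:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return (m2 / n) ** 0.5 if n else 0.0

    vals = np.random.random(1000)
    assert abs(welford_std(vals) - vals.std()) < 1e-10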

diff -r 14d07dcd97608a40b3634d065ff47aca3ea9919a -r 648cf68c5c16115802f3722a541147dfbf8df23a yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -113,9 +113,11 @@
     cdef np.float64_t *count # float, for ease
     cdef public object ocount
     def initialize(self):
+        # Create a numpy array accessible to python
         self.ocount = np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray arr = self.ocount
-        self.count = <np.float64_t*> arr.data
+        # alias the C-view for use in cython
+        self.count = <np.int64_t*> arr.data
 
     @cython.cdivision(True)
     cdef void process(self, int dim[3],
@@ -138,127 +140,64 @@
 
 deposit_count = CountParticles
 
-"""
-# Mode functions
-ctypedef np.float64_t (*type_opt)(np.float64_t, np.float64_t)
-cdef np.float64_t opt_count(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += 1.0
+cdef class SumParticleField(ParticleDepositOperation):
+    cdef np.float64_t *count # float, for ease
+    cdef public object ocount
+    def initialize(self):
+        self.osum = np.zeros(self.nvals, dtype="float64")
+        cdef np.ndarray arr = self.osum
+        self.sum = <np.float64_t*> arr.data
 
-cdef np.float64_t opt_sum(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += pdata 
+    @cython.cdivision(True)
+    cdef void process(self, int dim[3],
+                      np.float64_t left_edge[3], 
+                      np.float64_t dds[3],
+                      np.int64_t offset, # offset into IO field
+                      np.float64_t ppos[3], # this particle's position
+                      np.float64_t *fields # any other fields we need
+                      ):
+        # here we do our thing; this is the kernel
+        cdef int ii[3], i
+        for i in range(3):
+            ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
+        #print "Depositing into", offset,
+        #print gind(ii[0], ii[1], ii[2], dim)
+        self.sum[gind(ii[0], ii[1], ii[2], dim) + offset] += fields[i]
+        
+    def finalize(self):
+        return self.sum
 
-cdef np.float64_t opt_diff(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += (data_in[index] - pdata) 
+deposit_sum = SumParticleField
 
-cdef np.float64_t opt_wcount(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += weight
+cdef class StdParticleField(ParticleDepositOperation):
+    # Thanks to Britton and MJ Turk for the link
+    # to a single-pass STD
+    # http://www.cs.berkeley.edu/~mhoemmen/cs194/Tutorials/variance.pdf
+    cdef np.float64_t *count # float, for ease
+    cdef public object ocount
+    def initialize(self):
+        self.osum = np.zeros(self.nvals, dtype="float64")
+        cdef np.ndarray arr = self.osum
+        self.sum = <np.float64_t*> arr.data
 
-cdef np.float64_t opt_wsum(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += pdata * weight
+    @cython.cdivision(True)
+    cdef void process(self, int dim[3],
+                      np.float64_t left_edge[3], 
+                      np.float64_t dds[3],
+                      np.int64_t offset, # offset into IO field
+                      np.float64_t ppos[3], # this particle's position
+                      np.float64_t *fields # any other fields we need
+                      ):
+        # here we do our thing; this is the kernel
+        cdef int ii[3], i
+        for i in range(3):
+            ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
+        #print "Depositing into", offset,
+        #print gind(ii[0], ii[1], ii[2], dim)
+        self.sum[gind(ii[0], ii[1], ii[2], dim) + offset] += fields[i]
+        
+    def finalize(self):
+        return self.sum
 
-cdef np.float64_t opt_wdiff(np.float64_t pdata,
-                            np.float64_t weight,
-                            np.int64_t index,
-                            np.ndarray[np.float64_t, ndim=2] data_out, 
-                            np.ndarray[np.float64_t, ndim=2] data_in):
-    data_out[index] += (data_in[index] - pdata) * weight
+deposit_sum = SumParticleField
 
-# Selection functions
-ctypedef NOTSURE (*type_sel)(OctreeContainer, 
-                                np.ndarray[np.float64_t, ndim=1],
-                                np.float64_t)
-cdef NOTSURE select_nearest(OctreeContainer oct_handler,
-                            np.ndarray[np.float64_t, ndim=1] pos,
-                            np.float64_t radius):
-    #return only the nearest oct
-    pass
-
-
-cdef NOTSURE select_radius(OctreeContainer oct_handler,
-                            np.ndarray[np.float64_t, ndim=1] pos,
-                            np.float64_t radius):
-    #return a list of octs within the radius
-    pass
-    
-
-# Kernel functions
-ctypedef np.float64_t (*type_ker)(np.float64_t)
-cdef np.float64_t kernel_sph(np.float64_t x) nogil:
-    cdef np.float64_t kernel
-    if x <= 0.5:
-        kernel = 1.-6.*x*x*(1.-x)
-    elif x>0.5 and x<=1.0:
-        kernel = 2.*(1.-x)*(1.-x)*(1.-x)
-    else:
-        kernel = 0.
-    return kernel
-
-cdef np.float64_t kernel_null(np.float64_t x) nogil: return 0.0
-
-cdef deposit(OctreeContainer oct_handler, 
-        np.ndarray[np.float64_t, ndim=2] ppos, #positions,columns are x,y,z
-        np.ndarray[np.float64_t, ndim=2] pd, # particle fields
-        np.ndarray[np.float64_t, ndim=1] pr, # particle radius
-        np.ndarray[np.float64_t, ndim=2] data_in, #used to calc diff, same shape as data_out
-        np.ndarray[np.float64_t, ndim=2] data_out, #write deposited here
-        mode='count', selection='nearest', kernel='null'):
-    cdef type_opt fopt
-    cdef type_sel fsel
-    cdef type_ker fker
-    cdef long pi #particle index
-    cdef long nocts #number of octs in selection
-    cdef Oct oct 
-    cdef np.float64_t w
-    # Can we do this with dicts?
-    # Setup the function pointers
-    if mode == 'count':
-        fopt = opt_count
-    elif mode == 'sum':
-        fopt = opt_sum
-    elif mode == 'diff':
-        fopt = opt_diff
-    if mode == 'wcount':
-        fopt = opt_count
-    elif mode == 'wsum':
-        fopt = opt_sum
-    elif mode == 'wdiff':
-        fopt = opt_diff
-    if selection == 'nearest':
-        fsel = select_nearest
-    elif selection == 'radius':
-        fsel = select_radius
-    if kernel == 'null':
-        fker = kernel_null
-    if kernel == 'sph':
-        fker = kernel_sph
-    for pi in range(particles):
-        octs = fsel(oct_handler, ppos[pi], pr[pi])
-        for oct in octs:
-            for cell in oct.cells:
-                w = fker(pr[pi],cell) 
-                weights.append(w)
-        norm = weights.sum()
-        for w, oct in zip(weights, octs):
-            for cell in oct.cells:
-                fopt(pd[pi], w/norm, oct.index, data_in, data_out)
-"""
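
The deposit classes in this file all follow the same three-phase contract:
initialize() allocates a flat accumulator (one slot per oct cell),
process() is called once per particle, and finalize() hands the result back
to python. A pure-Python analogue of SumParticleField, purely for
illustration (the real classes are Cython and alias raw float64 pointers
into the numpy buffers):

    import numpy as np

    class SumDepositSketch:
        # Illustrative stand-in for SumParticleField; not the real class.
        def __init__(self, nvals):
            self.nvals = nvals
        def initialize(self):
            # one accumulator per (cell, oct) slot
            self.osum = np.zeros(self.nvals, dtype="float64")
        def process(self, cell_index, value):
            # the Cython kernel derives cell_index from the particle
            # position, the oct's left edge and cell widths, plus an offset
            self.osum[cell_index] += value
        def finalize(self):
            return self.osum

    op = SumDepositSketch(nvals=8)
    op.initialize()
    for ci, v in [(0, 1.5), (0, 2.5), (3, 4.0)]:
        op.process(ci, v)
    op.finalize()  # array([4., 0., 0., 4., 0., 0., 0., 0.])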


https://bitbucket.org/yt_analysis/yt/commits/03eb8bf54a6d/
Changeset:   03eb8bf54a6d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-21 22:59:43
Summary:     fixing std
Affected #:  1 file

diff -r 648cf68c5c16115802f3722a541147dfbf8df23a -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -114,7 +114,7 @@
     cdef public object ocount
     def initialize(self):
         # Create a numpy array accessible to python
-        self.ocount = np.zeros(self.nvals, dtype="float64")
+        self.ocount = np.zeros(self.nvals, dtype="int64")
         cdef np.ndarray arr = self.ocount
         # alias the C-view for use in cython
         self.count = <np.int64_t*> arr.data
@@ -176,9 +176,18 @@
     cdef np.float64_t *count # float, for ease
     cdef public object ocount
     def initialize(self):
-        self.osum = np.zeros(self.nvals, dtype="float64")
-        cdef np.ndarray arr = self.osum
-        self.sum = <np.float64_t*> arr.data
+        # we do this in a single pass, but need two scalar
+        # per cell, M_k, and Q_k and also the number of particles
+        # deposited into each one
+        self.omk= np.zeros(self.nvals, dtype="float64")
+        cdef np.ndarray omkarr= self.omk
+        self.mk= <np.float64_t*> omkarr.data
+        self.oqk= np.zeros(self.nvals, dtype="float64")
+        cdef np.ndarray oqkarr= self.oqk
+        self.qk= <np.float64_t*> oqkarr.data
+        self.oi = np.zeros(self.nvals, dtype="int64")
+        cdef np.ndarray oiarr = self.oi
+        self.qk= <np.float64_t*> oiarr.data
 
     @cython.cdivision(True)
     cdef void process(self, int dim[3],
@@ -189,12 +198,19 @@
                       np.float64_t *fields # any other fields we need
                       ):
         # here we do our thing; this is the kernel
-        cdef int ii[3], i
+        cdef int ii[3], i, cell_index
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
         #print "Depositing into", offset,
         #print gind(ii[0], ii[1], ii[2], dim)
-        self.sum[gind(ii[0], ii[1], ii[2], dim) + offset] += fields[i]
+        cell_index = gind(ii[0], ii[1], ii[2], dim) + offset
+        if self.mk[cell_index] == -1:
+            self.mk[cell_index] = fields[i]
+        else:
+            self.mk[cell_index] = self.mk[cell_index] + (fields[i] - self.mk[cell_index]) / k
+
+
+        if self.sum
         
     def finalize(self):
         return self.sum


https://bitbucket.org/yt_analysis/yt/commits/c8e92b6c8878/
Changeset:   c8e92b6c8878
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-21 23:40:17
Summary:     forgot to commit after the merge; the merge is now mixed in
with particle deposit changes and a spatial chunk update to NMSU ART
Affected #:  12 files

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -249,7 +249,13 @@
                 for i,chunk in enumerate(self.chunks(field, "spatial", ngz = 0)):
                     mask = self._current_chunk.objs[0].select(self.selector)
                     if mask is None: continue
-                    data = self[field][mask]
+                    data = self[field]
+                    if len(data.shape) == 4:
+                        # This is how we keep it consistent between oct ordering
+                        # and grid ordering.
+                        data = data.T[mask.T]
+                    else:
+                        data = data[mask]
                     rv[ind:ind+data.size] = data
                     ind += data.size
         else:

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -286,6 +286,9 @@
         self.requested.append(item)
         return defaultdict.__missing__(self, item)
 
+    def deposit(self, *args, **kwargs):
+        return np.random.random((self.nd, self.nd, self.nd))
+
     def _read_data(self, field_name):
         self.requested.append(field_name)
         FI = getattr(self.pf, "field_info", FieldInfo)

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -66,6 +66,16 @@
         self._current_particle_type = 'all'
         self._current_fluid_type = self.pf.default_fluid_type
 
+    def _generate_container_field(self, field):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
+        if field == "dx":
+            return self._current_chunk.fwidth[:,0]
+        elif field == "dy":
+            return self._current_chunk.fwidth[:,1]
+        elif field == "dz":
+            return self._current_chunk.fwidth[:,2]
+
     def select_icoords(self, dobj):
         return self.oct_handler.icoords(self.domain.domain_id, self.mask,
                                         self.cell_count,
@@ -109,8 +119,7 @@
     def _reshape_vals(self, arr):
         nz = self._num_zones + 2*self._num_ghost_zones
         n_oct = arr.shape[0] / (nz**3.0)
-        arr.shape = (n_oct, nz, nz, nz)
-        arr = np.rollaxis(arr, 0, 4)
+        arr = arr.reshape((nz, nz, nz, n_oct), order="F")
         return arr
 
     _domain_ind = None
@@ -130,7 +139,8 @@
         nvals = (self.domain_ind >= 0).sum() * 8
         op = cls(nvals) # We allocate number of zones, not number of octs
         op.initialize()
-        op.process_octree(self.oct_handler, self.domain_ind, positions, fields)
+        op.process_octree(self.oct_handler, self.domain_ind, positions, fields,
+                          self.domain.domain_id)
         vals = op.finalize()
         return self._reshape_vals(vals)
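
Note that the _reshape_vals change above is not a pure refactor: a
Fortran-order reshape reverses the three zone axes relative to the old
C-order reshape plus rollaxis, which is what makes the transposed masking
(data.T[mask.T]) added to data_containers.py above line up. A quick numpy
check of the relationship (illustrative only):

    import numpy as np

    nz, n_oct = 2, 3
    flat = np.arange(n_oct * nz**3, dtype="float64")

    # old: C-order reshape per oct, then move the oct axis to the end
    old = np.rollaxis(flat.reshape((n_oct, nz, nz, nz)), 0, 4)
    # new: a single Fortran-order reshape
    new = flat.reshape((nz, nz, nz, n_oct), order="F")

    # same buffer, but the zone axes come out reversed
    assert np.all(old.transpose(2, 1, 0, 3) == new)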
 

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -96,7 +96,7 @@
           display_field = False)
 
 def _Ones(field, data):
-    return np.ones(data.shape, dtype='float64')
+    return np.ones(data.ires.size, dtype='float64')
 add_field("Ones", function=_Ones,
           projection_conversion="unitary",
           display_field = False)

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -173,8 +173,16 @@
         # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         """

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -96,7 +96,7 @@
         total_particles = sum(sum(d.total_particles.values())
                               for d in self.domains)
         self.oct_handler = ParticleOctreeContainer(
-            self.parameter_file.domain_dimensions,
+            self.parameter_file.domain_dimensions/2,
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         self.oct_handler.n_ref = 64

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/fake_octree.pyx
--- a/yt/geometry/fake_octree.pyx
+++ b/yt/geometry/fake_octree.pyx
@@ -67,7 +67,7 @@
                     long cur_leaf, long cur_level, 
                     long max_noct, long max_level, float fsubdivide,
                     np.ndarray[np.uint8_t, ndim=2] mask):
-    print "child", parent.ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
+    print "child", parent.file_ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
     cdef int ddr[3]
     cdef long i,j,k
     cdef float rf #random float from 0-1

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -30,8 +30,12 @@
 
 cdef struct Oct
 cdef struct Oct:
-    np.int64_t ind          # index
-    np.int64_t local_ind
+    np.int64_t file_ind     # index with respect to the order in which it was
+                            # added
+    np.int64_t domain_ind   # index within the global set of domains
+                            # note that moving to a local index will require
+                            # moving to split-up masks, which is part of a
+                            # bigger refactor
     np.int64_t domain       # (opt) addl int index
     np.int64_t pos[3]       # position in ints
     np.int8_t level
@@ -61,13 +65,9 @@
     cdef Oct* get(self, np.float64_t ppos[3], OctInfo *oinfo = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
-
-cdef class ARTIOOctreeContainer(OctreeContainer):
-    cdef OctAllocationContainer **domains
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3])
-    cdef Oct *next_free_oct( self, int curdom )
-    cdef int valid_domain_oct(self, int curdom, Oct *parent)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, int curlevel, double pp[3])
+    # This function must return the offset from global-to-local domains; i.e.,
+    # OctAllocationContainer.offset if such a thing exists.
+    cdef np.int64_t get_domain_offset(self, int domain_id)
 
 cdef class RAMSESOctreeContainer(OctreeContainer):
     cdef OctAllocationContainer **domains

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -56,8 +56,8 @@
     for n in range(n_octs):
         oct = &n_cont.my_octs[n]
         oct.parent = NULL
-        oct.ind = oct.domain = -1
-        oct.local_ind = n + n_cont.offset
+        oct.file_ind = oct.domain = -1
+        oct.domain_ind = n + n_cont.offset
         oct.level = -1
         for i in range(2):
             for j in range(2):
@@ -130,7 +130,7 @@
         while cur != NULL:
             for i in range(cur.n_assigned):
                 this = &cur.my_octs[i]
-                yield (this.ind, this.local_ind, this.domain)
+                yield (this.file_ind, this.domain_ind, this.domain)
             cur = cur.next
 
     cdef void oct_bounds(self, Oct *o, np.float64_t *corner, np.float64_t *size):
@@ -139,6 +139,9 @@
             size[i] = (self.DRE[i] - self.DLE[i]) / (self.nn[i] << o.level)
             corner[i] = o.pos[i] * size[i] + self.DLE[i]
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return 0
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -153,8 +156,10 @@
             dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
             ind[i] = <np.int64_t> ((ppos[i] - self.DLE[i])/dds[i])
             cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
-        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
-        while cur.children[0][0][0] != NULL:
+        next = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        # We want to stop recursing when there's nowhere else to go
+        while next != NULL:
+            cur = next
             for i in range(3):
                 dds[i] = dds[i] / 2.0
                 if cp[i] > ppos[i]:
@@ -163,11 +168,19 @@
                 else:
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
-            cur = cur.children[ind[0]][ind[1]][ind[2]]
+            next = cur.children[ind[0]][ind[1]][ind[2]]
         if oinfo == NULL: return cur
         for i in range(3):
+            # This will happen *after* we quit out, so we need to back out the
+            # last change to cp
+            if ind[i] == 1:
+                cp[i] -= dds[i]/2.0 # Now centered
+            else:
+                cp[i] += dds[i]/2.0
+            # We don't need to change dds[i] as it has been halved from the
+            # oct width, thus making it already the cell width
             oinfo.dds[i] = dds[i] # Cell width
-            oinfo.left_edge[i] = cp[i] - dds[i]
+            oinfo.left_edge[i] = cp[i] - dds[i] # Center minus dds
         return cur
 
     @cython.boundscheck(False)
@@ -189,7 +202,7 @@
                 cur = cur.next
             o = &cur.my_octs[oi - cur.offset]
             for i in range(8):
-                count[o.domain - 1] += mask[o.local_ind,i]
+                count[o.domain - 1] += mask[o.domain_ind,i]
         return count
 
     @cython.boundscheck(True)
@@ -222,7 +235,7 @@
                     for k in range(2):
                         if o.children[i][j][k] == NULL:
                             ii = ((k*2)+j)*2+i
-                            count[o.domain - 1] += mask[o.local_ind,ii]
+                            count[o.domain - 1] += mask[o.domain_ind,ii]
         return count
 
     @cython.boundscheck(False)
@@ -315,330 +328,11 @@
                 bounds[i, 3+ii] = size[ii]
         return bounds
 
-cdef class ARTIOOctreeContainer(OctreeContainer):
+cdef class RAMSESOctreeContainer(OctreeContainer):
 
-    def allocate_domains(self, domain_counts):
-        cdef int count, i
-        cdef OctAllocationContainer *cur = self.cont
-        assert(cur == NULL)
-        self.max_domain = len(domain_counts) # 1-indexed
-        self.domains = <OctAllocationContainer **> malloc(
-            sizeof(OctAllocationContainer *) * len(domain_counts))
-        for i, count in enumerate(domain_counts):
-            cur = allocate_octs(count, cur)
-            if self.cont == NULL: self.cont = cur
-            self.domains[i] = cur
-        
-    def __dealloc__(self):
-        # This gets called BEFORE the superclass deallocation.  But, both get
-        # called.
-        if self.domains != NULL: free(self.domains)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count(self, np.ndarray[np.uint8_t, ndim=1, cast=True] mask,
-                     split = False):
-        cdef int n = mask.shape[0]
-        cdef int i, dom
-        cdef OctAllocationContainer *cur
-        cdef np.ndarray[np.int64_t, ndim=1] count
-        count = np.zeros(self.max_domain, 'int64')
-        # This is the idiom for iterating over many containers.
-        cur = self.cont
-        for i in range(n):
-            if i - cur.offset >= cur.n: cur = cur.next
-            if mask[i] == 1:
-                count[cur.my_octs[i - cur.offset].domain - 1] += 1
-        return count
-
-    def check(self, int curdom):
-        cdef int dind, pi
-        cdef Oct oct
-        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
-        cdef int nbad = 0
-        for pi in range(cont.n_assigned):
-            oct = cont.my_octs[pi]
-            for i in range(2):
-                for j in range(2):
-                    for k in range(2):
-                        if oct.children[i][j][k] != NULL and \
-                           oct.children[i][j][k].level != oct.level + 1:
-                            if curdom == 61:
-                                print pi, oct.children[i][j][k].level,
-                                print oct.level
-                            nbad += 1
-        print "DOMAIN % 3i HAS % 9i BAD OCTS (%s / %s / %s)" % (curdom, nbad, 
-            cont.n - cont.n_assigned, cont.n_assigned, cont.n)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *next_free_oct( self, int curdom ) :
-        cdef OctAllocationContainer *cont
-        cdef Oct *next_oct
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            print "Error, invalid domain or unallocated domains"
-            raise RuntimeError
-        
-        cont = self.domains[curdom - 1]
-        if cont.n_assigned >= cont.n :
-            print "Error, ran out of octs in domain curdom"
-            raise RuntimeError
-
-        self.nocts += 1
-        next_oct = &cont.my_octs[cont.n_assigned]
-        cont.n_assigned += 1
-        return next_oct
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    cdef int valid_domain_oct(self, int curdom, Oct *parent) :
-        cdef OctAllocationContainer *cont
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            raise RuntimeError
-        cont = self.domains[curdom - 1]
-
-        if parent == NULL or parent < &cont.my_octs[0] or \
-                parent > &cont.my_octs[cont.n_assigned] :
-            return 0
-        else :
-            return 1
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3]):
-        cdef np.int64_t ind[3]
-        cdef np.float64_t dds
-        cdef int i
-        for i in range(3):
-            dds = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> floor((ppos[i]-self.DLE[i])/dds)
-        return self.root_mesh[ind[0]][ind[1]][ind[2]]
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, 
-                    int curlevel, np.float64_t pp[3]):
-
-        cdef int level, i, ind[3]
-        cdef Oct *cur, *next_oct
-        cdef np.int64_t pos[3]
-        cdef np.float64_t dds
-
-        if curlevel < 0 :
-            raise RuntimeError
-        for i in range(3):
-            if pp[i] < self.DLE[i] or pp[i] > self.DRE[i] :
-                raise RuntimeError
-            dds = (self.DRE[i] - self.DLE[i])/(<np.int64_t>self.nn[i])
-            pos[i] = <np.int64_t> floor((pp[i]-self.DLE[i])*<np.float64_t>(1<<curlevel)/dds)
-
-        if curlevel == 0 :
-            cur = NULL
-        elif parent == NULL :
-            cur = self.get_root_oct(pp)
-            assert( cur != NULL )
-
-            # Now we find the location we want
-            for level in range(1,curlevel):
-                # At every level, find the cell this oct lives inside
-                for i in range(3) :
-                    if pos[i] < (2*cur.pos[i]+1)<<(curlevel-level) :
-                        ind[i] = 0
-                    else :
-                        ind[i] = 1
-                cur = cur.children[ind[0]][ind[1]][ind[2]]
-                if cur == NULL:
-                    # in ART we don't allocate down to curlevel 
-                    # if parent doesn't exist
-                    print "Error, no oct exists at that level"
-                    raise RuntimeError
-        else :
-            if not self.valid_domain_oct(curdom,parent) or \
-                    parent.level != curlevel - 1:
-                raise RuntimeError
-            cur = parent
- 
-        next_oct = self.next_free_oct( curdom )
-        if cur == NULL :
-            self.root_mesh[pos[0]][pos[1]][pos[2]] = next_oct
-        else :
-            for i in range(3) :
-                if pos[i] < 2*cur.pos[i]+1 :
-                    ind[i] = 0
-                else :
-                    ind[i] = 1
-            if cur.level != curlevel - 1 or  \
-                    cur.children[ind[0]][ind[1]][ind[2]] != NULL :
-                print "Error in add_oct: child already filled!"
-                raise RuntimeError
-
-            cur.children[ind[0]][ind[1]][ind[2]] = next_oct
-        for i in range(3) :
-            next_oct.pos[i] = pos[i]
-        next_oct.domain = curdom
-        next_oct.parent = cur
-        next_oct.ind = 1
-        next_oct.level = curlevel
-        return next_oct
-
-    # ii:mask/art ; ci=ramses loop backward (k<-fast, j ,i<-slow) 
-    # ii=0 000 art 000 ci 000 
-    # ii=1 100 art 100 ci 001 
-    # ii=2 010 art 010 ci 010 
-    # ii=3 110 art 110 ci 011
-    # ii=4 001 art 001 ci 100
-    # ii=5 101 art 011 ci 101
-    # ii=6 011 art 011 ci 110
-    # ii=7 111 art 111 ci 111
-    # keep coords ints so multiply by pow(2,1) when increasing level.
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def icoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii, level
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="int64")
-        ci=0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = (o.pos[0] << 1) + i
-                        coords[ci, 1] = (o.pos[1] << 1) + j
-                        coords[ci, 2] = (o.pos[2] << 1) + k
-                        ci += 1
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def ires(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=1] levels
-        levels = np.empty(cell_count, dtype="int64")
-        ci = 0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[oi + cur.offset, i] == 0: continue
-                levels[ci] = o.level
-                ci +=1
-        return levels
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count_levels(self, int max_level, int domain_id,
-                     np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
-        cdef np.ndarray[np.int64_t, ndim=1] level_count
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef int oi, i
-        level_count = np.zeros(max_level+1, 'int64')
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
-                level_count[o.level] += 1
-        return level_count
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fcoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef np.float64_t pos[3]
-        cdef np.float64_t base_dx[3], dx[3]
-        n = mask.shape[0]
-        cdef np.ndarray[np.float64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="float64")
-        ci =0 
-        for i in range(3):
-            # This is the base_dx, but not the base distance from the center
-            # position.  Note that the positions will also all be offset by
-            # dx/2.0.  This is also for *oct grids*, not cells.
-            base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(3):
-                # This gives the *grid* width for this level
-                dx[i] = base_dx[i] / (1 << o.level)
-                # o.pos is the *grid* index, so pos[i] is the center of the
-                # first cell in the grid
-                pos[i] = self.DLE[i] + o.pos[i]*dx[i] + dx[i]/4.0
-                dx[i] = dx[i] / 2.0 # This is now the *offset* 
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = pos[0] + dx[0] * i
-                        coords[ci, 1] = pos[1] + dx[1] * j
-                        coords[ci, 2] = pos[2] + dx[2] * k
-                        ci +=1 
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fill_mask(self, int domain, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
-        cdef np.ndarray[np.float32_t, ndim=1] source
-        cdef np.ndarray[np.float64_t, ndim=1] dest
-        cdef OctAllocationContainer *dom = self.domains[domain - 1]
-        cdef Oct *o
-        cdef int n
-        cdef int i, j, k, ii
-        cdef int local_pos, local_filled
-        cdef np.float64_t val
-        for key in dest_fields:
-            local_filled = 0
-            dest = dest_fields[key]
-            source = source_fields[key]
-            # snl: an alternative to filling level 0 yt-octs is to produce a 
-            # mapping between the mask and the source read order
-            for n in range(dom.n):
-                o = &dom.my_octs[n]
-                for k in range(2):
-                    for j in range(2):
-                        for i in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.local_ind*8+ii]
-                            # print 'oct_container.pyx:sourcemasked',o.level,local_filled, o.local_ind*8+ii, source[o.local_ind*8+ii]
-                            local_filled += 1
-        return local_filled
-
-cdef class RAMSESOctreeContainer(OctreeContainer):
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        cdef OctAllocationContainer *cont = self.domains[domain_id - 1]
+        return cont.offset
 
     cdef Oct* next_root(self, int domain_id, int ind[3]):
         cdef Oct *next = self.root_mesh[ind[0]][ind[1]][ind[2]]
@@ -707,7 +401,7 @@
 
     def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                    int domain_id):
-        cdef np.int64_t i, oi, n, 
+        cdef np.int64_t i, oi, n,  use
         cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
         cdef Oct *o
         cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
@@ -715,8 +409,9 @@
         n = mask.shape[0]
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
+            use = 0
             for i in range(8):
-                m2[o.local_ind, i] = mask[o.local_ind, i]
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
         return m2 # NOTE: This is uint8_t
 
     def domain_mask(self,
@@ -738,7 +433,7 @@
             o = &cur.my_octs[oi]
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             nm += use
         cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
                 np.zeros((2, 2, 2, nm), 'uint8')
@@ -750,11 +445,30 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         use = m2[i, j, k, nm] = 1
             nm += use
         return m2.astype("bool")
 
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n, 'int64') - 1
+        nm = 0
+        for oi in range(cur.n):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            if use == 1:
+                ind[o.domain_ind - cur.offset] = nm
+            nm += use
+        return ind
+
     def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct
@@ -783,6 +497,33 @@
         print "DOMAIN % 3i HAS % 9i MISSED OCTS" % (curdom, nmissed)
         print "DOMAIN % 3i HAS % 9i UNASSIGNED OCTS" % (curdom, unassigned)
 
+    def check_refinement(self, int curdom):
+        cdef int pi, i, j, k, some_refined, some_unrefined
+        cdef Oct *oct
+        cdef int bad = 0
+        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
+        for pi in range(cont.n_assigned):
+            oct = &cont.my_octs[pi]
+            some_unrefined = 0
+            some_refined = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if oct.children[i][j][k] == NULL:
+                            some_unrefined = 1
+                        else:
+                            some_refined = 1
+            if some_unrefined == some_refined == 1:
+                #print "BAD", oct.file_ind, oct.domain_ind
+                bad += 1
+                if curdom == 10 or curdom == 72:
+                    for i in range(2):
+                        for j in range(2):
+                            for k in range(2):
+                                print (oct.children[i][j][k] == NULL),
+                    print
+        print "BAD TOTAL", curdom, bad, cont.n_assigned
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -831,7 +572,7 @@
             # Now we should be at the right level
             cur.domain = curdom
             if local == 1:
-                cur.ind = p
+                cur.file_ind = p
             cur.level = curlevel
         return cont.n_assigned - initial
 
@@ -849,18 +590,18 @@
         n = mask.shape[0]
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
+        ci = 0
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
                         coords[ci, 2] = (o.pos[2] << 1) + k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -882,9 +623,8 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 if mask[oi + cur.offset, i] == 0: continue
-                ci = level_counts[o.level]
                 levels[ci] = o.level
-                level_counts[o.level] += 1
+                ci += 1
         return levels
 
     @cython.boundscheck(False)
@@ -900,7 +640,7 @@
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -925,6 +665,7 @@
             # position.  Note that the positions will also all be offset by
             # dx/2.0.  This is also for *oct grids*, not cells.
             base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+        ci = 0
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for i in range(3):
@@ -938,12 +679,11 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
                         coords[ci, 2] = pos[2] + dx[2] * k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -965,18 +705,15 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                if o.level != level: continue
-                for i in range(2):
-                    for j in range(2):
-                        for k in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.ind, ii]
-                            local_filled += 1
+                for ii in range(8):
+                    # We iterate and check here to keep our counts consistent
+                    # when filling different levels.
+                    if mask[o.domain_ind, ii] == 0: continue
+                    if o.level == level: 
+                        dest[local_filled] = source[o.file_ind, ii]
+                    local_filled += 1
         return local_filled
 
-
-
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):
 
     @cython.boundscheck(True)
@@ -1002,7 +739,7 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                index = o.ind-subchunk_offset
+                index = o.file_ind-subchunk_offset
                 if o.level != level: continue
                 if index < 0: continue
                 if index >= subchunk_max: 
@@ -1013,7 +750,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             dest[local_filled + offset] = \
                                 source[index,ii]
                             local_filled += 1
@@ -1053,7 +790,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             ox = (o.pos[0] << 1) + i
                             oy = (o.pos[1] << 1) + j
                             oz = (o.pos[2] << 1) + k
@@ -1128,6 +865,16 @@
                 free(o.sd.pos)
         free(o)
 
+    def __iter__(self):
+        #Get the next oct, will traverse domains
+        #Note that oct containers can be sorted 
+        #so that consecutive octs are on the same domain
+        cdef int oi
+        cdef Oct *o
+        for oi in range(self.nocts):
+            o = self.oct_list[oi]
+            yield (o.file_ind, o.domain_ind, o.domain)
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -1257,8 +1004,8 @@
         self.dom_offsets[0] = 0
         dom_ind = 0
         for i in range(self.nocts):
-            self.oct_list[i].local_ind = i
-            self.oct_list[i].ind = dom_ind
+            self.oct_list[i].domain_ind = i
+            self.oct_list[i].file_ind = dom_ind
             dom_ind += 1
             if self.oct_list[i].domain > cur_dom:
                 cur_dom = self.oct_list[i].domain
@@ -1266,6 +1013,9 @@
                 dom_ind = 0
         self.dom_offsets[cur_dom + 2] = self.nocts
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return self.dom_offsets[domain_id + 1]
+
     cdef Oct* allocate_oct(self):
         #Allocate the memory, set to NULL or -1
         #We reserve space for n_ref particles, but keep
@@ -1275,8 +1025,8 @@
         cdef ParticleArrays *sd = <ParticleArrays*> \
             malloc(sizeof(ParticleArrays))
         cdef int i, j, k
-        my_oct.ind = my_oct.domain = -1
-        my_oct.local_ind = self.nocts - 1
+        my_oct.file_ind = my_oct.domain = -1
+        my_oct.domain_ind = self.nocts - 1
         my_oct.pos[0] = my_oct.pos[1] = my_oct.pos[2] = -1
         my_oct.level = -1
         my_oct.sd = sd
@@ -1327,7 +1077,7 @@
         for oi in range(ndo):
             o = self.oct_list[oi + doff]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -1514,7 +1264,7 @@
 
     def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
                    int domain_id):
-        cdef np.int64_t i, oi, n, 
+        cdef np.int64_t i, oi, n, use
         cdef Oct *o
         cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
                 np.zeros((mask.shape[0], 8), 'uint8')
@@ -1522,8 +1272,9 @@
         for oi in range(n):
             o = self.oct_list[oi]
             if o.domain != domain_id: continue
+            use = 0
             for i in range(8):
-                m2[o.local_ind, i] = mask[o.local_ind, i]
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
         return m2
 
     def domain_mask(self,
@@ -1546,7 +1297,7 @@
             if o.domain != domain_id: continue
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             nm += use
         cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
                 np.zeros((2, 2, 2, nm), 'uint8')
@@ -1559,7 +1310,7 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
+                        if mask[o.domain_ind, ii] == 0: continue
                         use = m2[i, j, k, nm] = 1
             nm += use
         return m2.astype("bool")
@@ -1571,7 +1322,7 @@
         # Here we once again do something similar to the other functions.  We
         # need a set of indices into the final reduced, masked values.  The
         # indices will be domain.n long, and will be of type int64.  This way,
-        # we can get the Oct through a .get() call, then use Oct.ind as an
+        # we can get the Oct through a .get() call, then use Oct.file_ind as an
         # index into this newly created array, then finally use the returned
         # index into the domain subset array for deposition.
         cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
@@ -1586,7 +1337,7 @@
             o = self.oct_list[oi + offset]
             use = 0
             for i in range(8):
-                if mask[o.local_ind, i] == 1: use = 1
+                if mask[o.domain_ind, i] == 1: use = 1
             if use == 1:
                 ind[oi] = nm
             nm += use

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/oct_geometry_handler.py
--- a/yt/geometry/oct_geometry_handler.py
+++ b/yt/geometry/oct_geometry_handler.py
@@ -54,7 +54,7 @@
         Returns (in code units) the smallest cell size in the simulation.
         """
         return (self.parameter_file.domain_width /
-                (2**self.max_level)).min()
+                (2**(self.max_level+1))).min()
 
     def convert(self, unit):
         return self.parameter_file.conversion_factors[unit]
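
The off-by-one fix above appears to reflect the oct/cell distinction: an
oct at max_level spans domain_width / 2**max_level but holds two cells
along each axis, so the finest cell is half that. A toy check of the
arithmetic:

    domain_width, max_level = 1.0, 5
    oct_width = domain_width / 2**max_level    # finest oct
    cell_width = oct_width / 2                 # each oct is 2 cells across
    assert cell_width == domain_width / 2**(max_level + 1)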

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -47,7 +47,7 @@
     def process_octree(self, OctreeContainer octree,
                      np.ndarray[np.int64_t, ndim=1] dom_ind,
                      np.ndarray[np.float64_t, ndim=2] positions,
-                     fields = None):
+                     fields = None, int domain_id = -1):
         cdef int nf, i, j
         if fields is None:
             fields = []
@@ -62,8 +62,9 @@
         cdef int dims[3]
         dims[0] = dims[1] = dims[2] = 2
         cdef OctInfo oi
-        cdef np.int64_t offset
+        cdef np.int64_t offset, moff
         cdef Oct *oct
+        moff = octree.get_domain_offset(domain_id)
         for i in range(positions.shape[0]):
             # We should check if particle remains inside the Oct here
             for j in range(nf):
@@ -71,8 +72,14 @@
             for j in range(3):
                 pos[j] = positions[i, j]
             oct = octree.get(pos, &oi)
-            #print oct.local_ind, oct.pos[0], oct.pos[1], oct.pos[2]
-            offset = dom_ind[oct.ind]
+            # This next line is unfortunate.  Basically it says, sometimes we
+            # might have particles that belong to octs outside our domain.
+            if oct.domain != domain_id: continue
+            #print domain_id, oct.local_ind, oct.ind, oct.domain, oct.pos[0], oct.pos[1], oct.pos[2]
+            # Note that this has to be our local index, not our in-file index.
+            offset = dom_ind[oct.domain_ind - moff] * 8
+            if offset < 0: continue
+            # Check that we found the oct ...
             self.process(dims, oi.left_edge, oi.dds,
                          offset, pos, field_vals)
         
@@ -110,7 +117,7 @@
         raise NotImplementedError
 
 cdef class CountParticles(ParticleDepositOperation):
-    cdef np.float64_t *count # float, for ease
+    cdef np.int64_t *count # float, for ease
     cdef public object ocount
     def initialize(self):
         # Create a numpy array accessible to python
@@ -131,18 +138,16 @@
         cdef int ii[3], i
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
-        #print "Depositing into", offset,
-        #print gind(ii[0], ii[1], ii[2], dim)
         self.count[gind(ii[0], ii[1], ii[2], dim) + offset] += 1
         
     def finalize(self):
-        return self.ocount
+        return self.ocount.astype('f8')
 
 deposit_count = CountParticles
 
 cdef class SumParticleField(ParticleDepositOperation):
-    cdef np.float64_t *count # float, for ease
-    cdef public object ocount
+    cdef np.float64_t *sum
+    cdef public object osum
     def initialize(self):
         self.osum = np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray arr = self.osum
@@ -152,20 +157,17 @@
     cdef void process(self, int dim[3],
                       np.float64_t left_edge[3], 
                       np.float64_t dds[3],
-                      np.int64_t offset, # offset into IO field
-                      np.float64_t ppos[3], # this particle's position
-                      np.float64_t *fields # any other fields we need
+                      np.int64_t offset, 
+                      np.float64_t ppos[3],
+                      np.float64_t *fields 
                       ):
-        # here we do our thing; this is the kernel
         cdef int ii[3], i
         for i in range(3):
-            ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
-        #print "Depositing into", offset,
-        #print gind(ii[0], ii[1], ii[2], dim)
-        self.sum[gind(ii[0], ii[1], ii[2], dim) + offset] += fields[i]
+            ii[i] = <int>((ppos[i] - left_edge[i]) / dds[i])
+        self.sum[gind(ii[0], ii[1], ii[2], dim) + offset] += fields[0]
         
     def finalize(self):
-        return self.sum
+        return self.osum
 
 deposit_sum = SumParticleField
 
@@ -173,47 +175,56 @@
     # Thanks to Britton and MJ Turk for the link
     # to a single-pass STD
     # http://www.cs.berkeley.edu/~mhoemmen/cs194/Tutorials/variance.pdf
-    cdef np.float64_t *count # float, for ease
-    cdef public object ocount
+    cdef np.float64_t *mk
+    cdef np.float64_t *qk
+    cdef np.float64_t *i
+    cdef public object omk
+    cdef public object oqk
+    cdef public object oi
     def initialize(self):
         # we do this in a single pass, but need two scalar
         # per cell, M_k, and Q_k and also the number of particles
         # deposited into each one
+        # the M_k term
         self.omk= np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray omkarr= self.omk
         self.mk= <np.float64_t*> omkarr.data
+        # the Q_k term
         self.oqk= np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray oqkarr= self.oqk
         self.qk= <np.float64_t*> oqkarr.data
+        # particle count
         self.oi = np.zeros(self.nvals, dtype="int64")
         cdef np.ndarray oiarr = self.oi
-        self.qk= <np.float64_t*> oiarr.data
+        self.i = <np.float64_t*> oiarr.data
 
     @cython.cdivision(True)
     cdef void process(self, int dim[3],
                       np.float64_t left_edge[3], 
                       np.float64_t dds[3],
-                      np.int64_t offset, # offset into IO field
-                      np.float64_t ppos[3], # this particle's position
-                      np.float64_t *fields # any other fields we need
+                      np.int64_t offset,
+                      np.float64_t ppos[3],
+                      np.float64_t *fields
                       ):
-        # here we do our thing; this is the kernel
         cdef int ii[3], i, cell_index
+        cdef float k
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
-        #print "Depositing into", offset,
-        #print gind(ii[0], ii[1], ii[2], dim)
         cell_index = gind(ii[0], ii[1], ii[2], dim) + offset
-        if self.mk[cell_index] == -1:
-            self.mk[cell_index] = fields[i]
+        k = <float> self.i[cell_index]
+        if self.i[cell_index] == 0:
+            # Initialize cell values
+            self.mk[cell_index] = fields[0]
         else:
-            self.mk[cell_index] = self.mk[cell_index] + (fields[i] - self.mk[cell_index]) / k
-
-
-        if self.sum
+            self.mk[cell_index] = self.mk[cell_index] + \
+                                  (fields[0] - self.mk[cell_index]) / k
+            self.qk[cell_index] = self.qk[cell_index] + \
+                                  (k - 1.0) * (fields[0] - 
+                                             self.mk[cell_index]) ** 2.0 / k
+        self.qk[cell_index] += 1
         
     def finalize(self):
         return self.sum
 
-deposit_sum = SumParticleField
+deposit_std = StdParticleField
 

diff -r 03eb8bf54a6dcfc7a951aa50f20d2aec30d56b32 -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed yt/geometry/selection_routines.pyx
--- a/yt/geometry/selection_routines.pyx
+++ b/yt/geometry/selection_routines.pyx
@@ -201,7 +201,7 @@
             this_level = 0
         if res == 0:
             for i in range(8):
-                mask[root.local_ind,i] = 0
+                mask[root.domain_ind,i] = 0
             # If this level *is* being selected (i.e., no early termination)
             # then we know no child zones will be selected.
             if this_level == 1:
@@ -217,11 +217,11 @@
                     ii = ((k*2)+j)*2+i
                     ch = root.children[i][j][k]
                     if next_level == 1 and ch != NULL:
-                        mask[root.local_ind, ii] = 0
+                        mask[root.domain_ind, ii] = 0
                         self.recursively_select_octs(
                             ch, spos, sdds, mask, level + 1)
                     elif this_level == 1:
-                        mask[root.local_ind, ii] = \
+                        mask[root.domain_ind, ii] = \
                             self.select_cell(spos, sdds, eterm)
                     spos[2] += sdds[2]
                 spos[1] += sdds[1]
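
Most of the mechanical churn in this changeset is the ind/local_ind ->
file_ind/domain_ind rename, but the indexing logic added to process_octree
is worth spelling out. Roughly, in Python terms (a sketch of the flow in
the particle_deposit.pyx hunk above; octree.get() and get_domain_offset()
follow the signatures shown in the diff, everything else is illustrative):

    def deposit_sketch(op, octree, dom_ind, positions, domain_id):
        moff = octree.get_domain_offset(domain_id)
        for pos in positions:
            oct = octree.get(pos)
            if oct.domain != domain_id:
                # particles can land in octs owned by another domain
                continue
            # domain_ind is global; subtracting moff makes it local to
            # this domain, and dom_ind then maps it into the compacted
            # list of selected octs (-1, hence offset < 0, means "not
            # selected")
            offset = dom_ind[oct.domain_ind - moff] * 8
            if offset < 0:
                continue
            op.process(offset, pos)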


https://bitbucket.org/yt_analysis/yt/commits/de8847abae8a/
Changeset:   de8847abae8a
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-22 00:09:16
Summary:     ensure returned datatypes are f64
Affected #:  1 file

diff -r c8e92b6c8878a74f03c39f26149b55f1d7af8aed -r de8847abae8a3bca439c47b80e5dc5fb812796fa yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -140,7 +140,7 @@
                     temp[-nstars:] = data
                     tr[field] = temp
                     del data
-                tr[field] = tr[field][mask]
+                tr[field] = tr[field][mask].astype('f8')
                 ftype_old = ftype
                 fields_read.append(field)
         if tr == {}:


https://bitbucket.org/yt_analysis/yt/commits/13b5ec11dfb7/
Changeset:   13b5ec11dfb7
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-22 00:35:11
Summary:     Standard-deviation deposit now works (pending verification)
Affected #:  1 file

diff -r de8847abae8a3bca439c47b80e5dc5fb812796fa -r 13b5ec11dfb7cf67ad05788dc0e808d1a9970c2e yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -194,7 +194,7 @@
         cdef np.ndarray oqkarr= self.oqk
         self.qk= <np.float64_t*> oqkarr.data
         # particle count
-        self.oi = np.zeros(self.nvals, dtype="int64")
+        self.oi = np.zeros(self.nvals, dtype="float64")
         cdef np.ndarray oiarr = self.oi
         self.i = <np.float64_t*> oiarr.data
 
@@ -207,24 +207,28 @@
                       np.float64_t *fields
                       ):
         cdef int ii[3], i, cell_index
-        cdef float k
+        cdef float k, mk, qk
         for i in range(3):
             ii[i] = <int>((ppos[i] - left_edge[i])/dds[i])
         cell_index = gind(ii[0], ii[1], ii[2], dim) + offset
-        k = <float> self.i[cell_index]
-        if self.i[cell_index] == 0:
+        k = self.i[cell_index] 
+        mk = self.mk[cell_index]
+        qk = self.qk[cell_index] 
+        #print k, mk, qk, cell_index
+        if k == 0.0:
             # Initialize cell values
             self.mk[cell_index] = fields[0]
         else:
-            self.mk[cell_index] = self.mk[cell_index] + \
-                                  (fields[0] - self.mk[cell_index]) / k
-            self.qk[cell_index] = self.qk[cell_index] + \
-                                  (k - 1.0) * (fields[0] - 
-                                             self.mk[cell_index]) ** 2.0 / k
-        self.qk[cell_index] += 1
+            self.mk[cell_index] = mk + (fields[0] - mk) / k
+            self.qk[cell_index] = qk + (k - 1.0) * (fields[0] - mk)**2.0 / k
+        self.i[cell_index] += 1
         
     def finalize(self):
-        return self.sum
+        # This is the standard variance
+        # if we want sample variance divide by (self.oi - 1.0)
+        std = self.oqk / self.oi
+        std[~np.isfinite(std)] = 0.0
+        return std
 
 deposit_std = StdParticleField
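
The kernel above is a single-pass (Welford-style) accumulator: per cell,
"i" counts deposited particles, "mk" tracks the running mean, and "qk"
accumulates the sum of squared deviations, so no second pass over the
particles is needed.  A minimal pure-Python sketch of the textbook
recurrence (standalone, not part of the commit; note the textbook update
divides by the post-increment count):

    import numpy as np

    def welford(samples):
        # k: sample count, mk: running mean, qk: sum of squared deviations
        k = mk = qk = 0.0
        for x in samples:
            k += 1.0
            delta = x - mk
            mk += delta / k
            qk += (k - 1.0) * delta ** 2 / k
        return mk, qk / k  # population variance, as in finalize()

    vals = np.random.random(1000)
    mean, var = welford(vals)
    assert np.allclose([mean, var], [vals.mean(), vals.var()])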
 


https://bitbucket.org/yt_analysis/yt/commits/d38104697610/
Changeset:   d38104697610
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-22 01:59:02
Summary:     Added conversion factors for gas and particle velocities
Affected #:  1 file

diff -r 13b5ec11dfb7cf67ad05788dc0e808d1a9970c2e -r d38104697610551cd1ac189704a3c572d618912b yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -324,7 +324,8 @@
         self.conversion_factors = cf
 
         for ax in 'xyz':
-            self.conversion_factors["%s-velocity" % ax] = 1.0
+            self.conversion_factors["%s-velocity" % ax] = cf["Velocity"]
+            self.conversion_factors["particle_velocity_%s" % ax] = cf["Velocity"]
         for pt in particle_fields:
             if pt not in self.conversion_factors.keys():
                 self.conversion_factors[pt] = 1.0


https://bitbucket.org/yt_analysis/yt/commits/55e98d3c0933/
Changeset:   55e98d3c0933
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-22 02:11:23
Summary:     Adding particle velocity fields and their correct conversions
Affected #:  1 file

diff -r d38104697610551cd1ac189704a3c572d618912b -r 55e98d3c0933198e1d3cd04efd35f9bac6477800 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -49,19 +49,6 @@
     add_art_field(f, function=NullFunc, take_log=True,
                   validators=[ValidateDataField(f)])
 
-for f in particle_fields:
-    add_art_field(f, function=NullFunc, take_log=True,
-                  validators=[ValidateDataField(f)],
-                  particle_type=True)
-add_art_field("particle_mass", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
@@ -213,6 +200,24 @@
 ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
 # Particle fields
+for f in particle_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+for ax in "xyz":
+    add_art_field("particle_velocity_%s" % ax, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True,
+                  convert_function=lambda x: x.convert("particle_velocity_%s" % ax))
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+
 def _particle_age(field, data):
     tr = data["particle_creation_time"]
     return data.pf.current_time - tr
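
One subtlety in the particle_velocity loop above: a Python lambda binds
the loop variable ax late, so by the time any convert_function is called,
ax holds its final value "z" and all three components would use the z
conversion.  The usual remedy is a default-argument capture; a sketch,
not part of the commit:

    for ax in "xyz":
        add_art_field("particle_velocity_%s" % ax, function=NullFunc,
                      take_log=True,
                      validators=[ValidateDataField(f)],
                      particle_type=True,
                      convert_function=lambda x, ax=ax:
                          x.convert("particle_velocity_%s" % ax))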


https://bitbucket.org/yt_analysis/yt/commits/440a76cf232d/
Changeset:   440a76cf232d
Branch:      yt-3.0
User:        juxtaposicion
Date:        2013-05-22 02:45:13
Summary:     Return the square root of the variance
Affected #:  1 file

diff -r 55e98d3c0933198e1d3cd04efd35f9bac6477800 -r 440a76cf232df87a3e27304dfd8507f4f965d3f9 yt/geometry/particle_deposit.pyx
--- a/yt/geometry/particle_deposit.pyx
+++ b/yt/geometry/particle_deposit.pyx
@@ -226,9 +226,9 @@
     def finalize(self):
         # This is the standard variance
         # if we want sample variance divide by (self.oi - 1.0)
-        std = self.oqk / self.oi
-        std[~np.isfinite(std)] = 0.0
-        return std
+        std2 = self.oqk / self.oi
+        std2[self.oi == 0.0] = 0.0
+        return np.sqrt(std2)
 
 deposit_std = StdParticleField
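
As the comment in finalize notes, dividing qk by the particle count
gives the population variance; dividing by (count - 1) would give the
sample variance instead.  A quick NumPy check of the two conventions
(illustrative only):

    import numpy as np

    x = np.random.random(100)
    qk = ((x - x.mean()) ** 2).sum()
    assert np.isclose(np.sqrt(qk / x.size), np.std(x))               # population
    assert np.isclose(np.sqrt(qk / (x.size - 1)), np.std(x, ddof=1)) # sample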
 


https://bitbucket.org/yt_analysis/yt/commits/18deb7f7cfbb/
Changeset:   18deb7f7cfbb
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-22 16:19:14
Summary:     Adding grid patch particle deposition.  Allowing failures for preloading data.

The preload issue arises specifically when running something like find_max
on a particle deposition field that relies on a multi-component field such
as Coordinates.  I hope this will change over time once we have proper
vector field support baked in.
Affected #:  3 files

diff -r 440a76cf232df87a3e27304dfd8507f4f965d3f9 -r 18deb7f7cfbb2f87b8ffe24b403b1e2a56f0cc17 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -60,7 +60,10 @@
         e = FieldDetector(flat = True)
         e.NumberOfParticles = 1
         fields = e.requested
-        self.func(e, *args, **kwargs)
+        try:
+            self.func(e, *args, **kwargs)
+        except:
+            mylog.error("Could not preload for quantity %s, IO speed may suffer", self.__name__)
         retvals = [ [] for i in range(self.n_ret)]
         chunks = self._data_source.chunks([], chunking_style="io")
         for ds in parallel_objects(chunks, -1):

diff -r 440a76cf232df87a3e27304dfd8507f4f965d3f9 -r 18deb7f7cfbb2f87b8ffe24b403b1e2a56f0cc17 yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -233,6 +233,7 @@
         if pf is None:
             # required attrs
             pf = fake_parameter_file(lambda: 1)
+            pf["Massarr"] = np.ones(6)
             pf.current_redshift = pf.omega_lambda = pf.omega_matter = \
                 pf.cosmological_simulation = 0.0
             pf.hubble_constant = 0.7

diff -r 440a76cf232df87a3e27304dfd8507f4f965d3f9 -r 18deb7f7cfbb2f87b8ffe24b403b1e2a56f0cc17 yt/data_objects/grid_patch.py
--- a/yt/data_objects/grid_patch.py
+++ b/yt/data_objects/grid_patch.py
@@ -44,6 +44,7 @@
     NeedsProperty, \
     NeedsParameter
 from yt.geometry.selection_routines import convert_mask_to_indices
+import yt.geometry.particle_deposit as particle_deposit
 
 class AMRGridPatch(YTSelectionContainer):
     _spatial = True
@@ -474,6 +475,17 @@
         dt, t = dobj.selector.get_dt(self)
         return dt, t
 
+    def deposit(self, positions, fields = None, method = None):
+        # Here we perform our particle deposition.
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        op = cls(self.ActiveDimensions.prod()) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_grid(self, positions, fields)
+        vals = op.finalize()
+        return vals.reshape(self.ActiveDimensions, order="F")
+
     def select(self, selector):
         if id(selector) == self._last_selector_id:
             return self._last_mask
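
The new deposit method resolves "deposit_<method>" in
yt.geometry.particle_deposit, allocates one slot per zone, and reshapes
the finalized values to the grid's ActiveDimensions.  A hedged usage
sketch, assuming pf is an already-loaded parameter file for a
patch-based frontend whose particle position and mass fields exist under
these names:

    import numpy as np

    grid = pf.h.grids[0]
    positions = np.column_stack(
        [grid["particle_position_%s" % ax] for ax in "xyz"]
    ).astype("float64")
    mass = grid["particle_mass"].astype("float64")
    # "std" resolves to deposit_std, i.e. StdParticleField
    std_mass = grid.deposit(positions, [mass], method="std")
    assert std_mass.shape == tuple(grid.ActiveDimensions)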


https://bitbucket.org/yt_analysis/yt/commits/b8521d5e0e89/
Changeset:   b8521d5e0e89
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-22 20:39:50
Summary:     Merging from octtraversal bookmark
Affected #:  25 files

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/api.py
--- a/yt/data_objects/api.py
+++ b/yt/data_objects/api.py
@@ -31,6 +31,9 @@
 from grid_patch import \
     AMRGridPatch
 
+from octree_subset import \
+    OctreeSubset
+
 from static_output import \
     StaticOutput
 

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -249,7 +249,13 @@
                 for i,chunk in enumerate(self.chunks(field, "spatial", ngz = 0)):
                     mask = self._current_chunk.objs[0].select(self.selector)
                     if mask is None: continue
-                    data = self[field][mask]
+                    data = self[field]
+                    if len(data.shape) == 4:
+                        # This is how we keep it consistent between oct ordering
+                        # and grid ordering.
+                        data = data.T[mask.T]
+                    else:
+                        data = data[mask]
                     rv[ind:ind+data.size] = data
                     ind += data.size
         else:
@@ -513,6 +519,11 @@
                         if f not in fields_to_generate:
                             fields_to_generate.append(f)
 
+    def deposit(self, positions, fields, op):
+        assert(self._current_chunk.chunk_type == "spatial")
+        fields = ensure_list(fields)
+        self.hierarchy._deposit_particle_fields(self, positions, fields, op)
+
     @contextmanager
     def _field_lock(self):
         self._locked = True

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -60,7 +60,10 @@
         e = FieldDetector(flat = True)
         e.NumberOfParticles = 1
         fields = e.requested
-        self.func(e, *args, **kwargs)
+        try:
+            self.func(e, *args, **kwargs)
+        except:
+            mylog.error("Could not preload for quantity %s, IO speed may suffer", self.__name__)
         retvals = [ [] for i in range(self.n_ret)]
         chunks = self._data_source.chunks([], chunking_style="io")
         for ds in parallel_objects(chunks, -1):

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -233,6 +233,7 @@
         if pf is None:
             # required attrs
             pf = fake_parameter_file(lambda: 1)
+            pf["Massarr"] = np.ones(6)
             pf.current_redshift = pf.omega_lambda = pf.omega_matter = \
                 pf.cosmological_simulation = 0.0
             pf.hubble_constant = 0.7
@@ -286,6 +287,9 @@
         self.requested.append(item)
         return defaultdict.__missing__(self, item)
 
+    def deposit(self, *args, **kwargs):
+        return np.random.random((self.nd, self.nd, self.nd))
+
     def _read_data(self, field_name):
         self.requested.append(field_name)
         FI = getattr(self.pf, "field_info", FieldInfo)

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/grid_patch.py
--- a/yt/data_objects/grid_patch.py
+++ b/yt/data_objects/grid_patch.py
@@ -44,6 +44,7 @@
     NeedsProperty, \
     NeedsParameter
 from yt.geometry.selection_routines import convert_mask_to_indices
+import yt.geometry.particle_deposit as particle_deposit
 
 class AMRGridPatch(YTSelectionContainer):
     _spatial = True
@@ -474,6 +475,17 @@
         dt, t = dobj.selector.get_dt(self)
         return dt, t
 
+    def deposit(self, positions, fields = None, method = None):
+        # Here we perform our particle deposition.
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        op = cls(self.ActiveDimensions.prod()) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_grid(self, positions, fields)
+        vals = op.finalize()
+        return vals.reshape(self.ActiveDimensions, order="F")
+
     def select(self, selector):
         if id(selector) == self._last_selector_id:
             return self._last_mask

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/octree_subset.py
--- /dev/null
+++ b/yt/data_objects/octree_subset.py
@@ -0,0 +1,170 @@
+"""
+Subsets of octrees
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+import numpy as np
+
+from yt.data_objects.data_containers import \
+    YTFieldData, \
+    YTDataContainer, \
+    YTSelectionContainer
+from .field_info_container import \
+    NeedsGridType, \
+    NeedsOriginalGrid, \
+    NeedsDataField, \
+    NeedsProperty, \
+    NeedsParameter
+import yt.geometry.particle_deposit as particle_deposit
+
+class OctreeSubset(YTSelectionContainer):
+    _spatial = True
+    _num_ghost_zones = 0
+    _num_zones = 2
+    _type_name = 'octree_subset'
+    _skip_add = True
+    _con_args = ('domain', 'mask', 'cell_count')
+    _container_fields = ("dx", "dy", "dz")
+
+    def __init__(self, domain, mask, cell_count):
+        self.field_data = YTFieldData()
+        self.field_parameters = {}
+        self.mask = mask
+        self.domain = domain
+        self.pf = domain.pf
+        self.hierarchy = self.pf.hierarchy
+        self.oct_handler = domain.pf.h.oct_handler
+        self.cell_count = cell_count
+        level_counts = self.oct_handler.count_levels(
+            self.domain.pf.max_level, self.domain.domain_id, mask)
+        assert(level_counts.sum() == cell_count)
+        level_counts[1:] = level_counts[:-1]
+        level_counts[0] = 0
+        self.level_counts = np.add.accumulate(level_counts)
+        self._last_mask = None
+        self._last_selector_id = None
+        self._current_particle_type = 'all'
+        self._current_fluid_type = self.pf.default_fluid_type
+
+    def _generate_container_field(self, field):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
+        if field == "dx":
+            return self._current_chunk.fwidth[:,0]
+        elif field == "dy":
+            return self._current_chunk.fwidth[:,1]
+        elif field == "dz":
+            return self._current_chunk.fwidth[:,2]
+
+    def select_icoords(self, dobj):
+        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fcoords(self, dobj):
+        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fwidth(self, dobj):
+        # Recall domain_dimensions is the number of cells, not octs
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
+        widths = np.empty((self.cell_count, 3), dtype="float64")
+        dds = (2**self.select_ires(dobj))
+        for i in range(3):
+            widths[:,i] = base_dx[i] / dds
+        return widths
+
+    def select_ires(self, dobj):
+        return self.oct_handler.ires(self.domain.domain_id, self.mask,
+                                     self.cell_count,
+                                     self.level_counts.copy())
+
+    def __getitem__(self, key):
+        tr = super(OctreeSubset, self).__getitem__(key)
+        try:
+            fields = self._determine_fields(key)
+        except YTFieldTypeNotFound:
+            return tr
+        finfo = self.pf._get_field_info(*fields[0])
+        if not finfo.particle_type:
+            # We may need to reshape the field, if it is being queried from
+            # field_data.  If it's already cached, it just passes through.
+            if len(tr.shape) < 4:
+                tr = self._reshape_vals(tr)
+            return tr
+        return tr
+
+    def _reshape_vals(self, arr):
+        nz = self._num_zones + 2*self._num_ghost_zones
+        n_oct = arr.shape[0] / (nz**3.0)
+        arr = arr.reshape((nz, nz, nz, n_oct), order="F")
+        return arr
+
+    _domain_ind = None
+
+    @property
+    def domain_ind(self):
+        if self._domain_ind is None:
+            di = self.oct_handler.domain_ind(self.mask, self.domain.domain_id)
+            self._domain_ind = di
+        return self._domain_ind
+
+    def deposit(self, positions, fields = None, method = None):
+        # Here we perform our particle deposition.
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        nvals = (self.domain_ind >= 0).sum() * 8
+        op = cls(nvals) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_octree(self.oct_handler, self.domain_ind, positions, fields,
+                          self.domain.domain_id)
+        vals = op.finalize()
+        return self._reshape_vals(vals)
+
+    def select(self, selector):
+        if id(selector) == self._last_selector_id:
+            return self._last_mask
+        self._last_mask = self.oct_handler.domain_mask(
+                self.mask, self.domain.domain_id)
+        if self._last_mask.sum() == 0: return None
+        self._last_selector_id = id(selector)
+        return self._last_mask
+
+    def count(self, selector):
+        if id(selector) == self._last_selector_id:
+            if self._last_mask is None: return 0
+            return self._last_mask.sum()
+        self.select(selector)
+        return self.count(selector)
+
+    def count_particles(self, selector, x, y, z):
+        # We don't cache the selector results
+        count = selector.count_points(x,y,z)
+        return count
+
+    def select_particles(self, selector, x, y, z):
+        mask = selector.select_points(x,y,z)
+        return mask
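
OctreeSubset.deposit allocates eight values per selected oct and hands
the flat result to _reshape_vals, which views it as (nz, nz, nz, n_oct)
in Fortran order.  A small standalone sketch of that index arithmetic
(the oct count is hypothetical):

    import numpy as np

    nz, n_oct = 2, 5          # _num_zones; hypothetical number of octs
    flat = np.arange(nz ** 3 * n_oct, dtype="float64")
    cube = flat.reshape((nz, nz, nz, n_oct), order="F")
    # Fortran order: the first axis varies fastest, so consecutive flat
    # entries fill one oct's 2x2x2 cells before moving to the next oct.
    assert cube[1, 0, 0, 0] == 1.0
    assert cube[0, 0, 0, 1] == 8.0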

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -59,6 +59,7 @@
     particle_types = ("all",)
     geometry = "cartesian"
     coordinates = None
+    max_level = 99
 
     class __metaclass__(type):
         def __init__(cls, name, b, d):

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -96,7 +96,7 @@
           display_field = False)
 
 def _Ones(field, data):
-    return np.ones(data.shape, dtype='float64')
+    return np.ones(data.ires.size, dtype='float64')
 add_field("Ones", function=_Ones,
           projection_conversion="unitary",
           display_field = False)

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.geometry.oct_container import \
     ARTOctreeContainer
 from yt.data_objects.field_info_container import \
@@ -171,8 +173,16 @@
         # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         """
@@ -314,7 +324,8 @@
         self.conversion_factors = cf
 
         for ax in 'xyz':
-            self.conversion_factors["%s-velocity" % ax] = 1.0
+            self.conversion_factors["%s-velocity" % ax] = cf["Velocity"]
+            self.conversion_factors["particle_velocity_%s" % ax] = cf["Velocity"]
         for pt in particle_fields:
             if pt not in self.conversion_factors.keys():
                 self.conversion_factors[pt] = 1.0
@@ -433,43 +444,10 @@
                 return False
         return False
 
-
-class ARTDomainSubset(object):
+class ARTDomainSubset(OctreeSubset):
     def __init__(self, domain, mask, cell_count, domain_level):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
+        super(ARTDomainSubset, self).__init__(domain, mask, cell_count)
         self.domain_level = domain_level
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        base_dx = 1.0/self.domain.pf.domain_dimensions
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:, i] = base_dx[i] / dds
-        return widths
 
     def fill_root(self, content, ftfields):
         """

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -49,19 +49,6 @@
     add_art_field(f, function=NullFunc, take_log=True,
                   validators=[ValidateDataField(f)])
 
-for f in particle_fields:
-    add_art_field(f, function=NullFunc, take_log=True,
-                  validators=[ValidateDataField(f)],
-                  particle_type=True)
-add_art_field("particle_mass", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
@@ -213,6 +200,24 @@
 ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
 # Particle fields
+for f in particle_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+for ax in "xyz":
+    add_art_field("particle_velocity_%s" % ax, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True,
+                  convert_function=lambda x: x.convert("particle_velocity_%s" % ax))
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+
 def _particle_age(field, data):
     tr = data["particle_creation_time"]
     return data.pf.current_time - tr

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -140,7 +140,7 @@
                     temp[-nstars:] = data
                     tr[field] = temp
                     del data
-                tr[field] = tr[field][mask]
+                tr[field] = tr[field][mask].astype('f8')
                 ftype_old = ftype
                 fields_read.append(field)
         if tr == {}:
@@ -330,32 +330,57 @@
     f.seek(pos)
     return unitary_center, fl, iocts, nLevel, root_level
 
+def get_ranges(skip, count, field, words=6, real_size=4, np_per_page=4096**2, 
+                  num_pages=1):
+    #translate every particle index into a file position ranges
+    ranges = []
+    arr_size = np_per_page * real_size
+    page_size = words * np_per_page * real_size
+    idxa, idxb = 0, 0
+    posa, posb = 0, 0
+    left = count
+    for page in range(num_pages):
+        idxb += np_per_page
+        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
+            posb += arr_size
+            if i == field or fname == field:
+                if skip < np_per_page and count > 0:
+                    left_in_page = np_per_page - skip
+                    this_count = min(left_in_page, count)
+                    count -= this_count
+                    start = posa + skip * real_size
+                    end = posa + this_count * real_size
+                    ranges.append((start, this_count))
+                    skip = 0
+                    assert end <= posb
+                else:
+                    skip -= np_per_page
+            posa += arr_size
+        idxa += np_per_page
+    assert count == 0
+    return ranges
 
-def read_particles(file, Nrow, idxa=None, idxb=None, field=None):
+
+def read_particles(file, Nrow, idxa, idxb, field):
     words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4  # for file_particle_data; not always true?
-    np_per_page = Nrow**2  # defined in ART a_setup.h
+    np_per_page = Nrow**2  # defined in ART a_setup.h, # of particles/page
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
     data = np.array([], 'f4')
     fh = open(file, 'r')
-    totalp = idxb-idxa
-    left = totalp
-    for page in range(num_pages):
-        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
-            if i == field or fname == field:
-                if idxa is not None:
-                    fh.seek(real_size*idxa, 1)
-                    count = min(np_per_page, left)
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                    pageleft = np_per_page-count-idxa
-                    fh.seek(real_size*pageleft, 1)
-                    left -= count
-                else:
-                    count = np_per_page
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                data = np.concatenate((data, temp))
-            else:
-                fh.seek(4*np_per_page, 1)
+    skip, count = idxa, idxb - idxa
+    kwargs = dict(words=words, real_size=real_size, 
+                  np_per_page=np_per_page, num_pages=num_pages)
+    ranges = get_ranges(skip, count, field, **kwargs)
+    data = None
+    for seek, this_count in ranges:
+        fh.seek(seek)
+        temp = np.fromfile(fh, count=this_count, dtype='>f4')
+        if data is None:
+            data = temp
+        else:
+            data = np.concatenate((data, temp))
+    fh.close()
     return data
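
get_ranges turns a particle index window (skip particles, then read
count of them) into byte offsets within the paged file, where each page
stores np_per_page reals for each of the six fields x, y, z, vx, vy, vz
in sequence.  A worked example with toy page sizes (all numbers
illustrative):

    from yt.frontends.art.io import get_ranges

    # Particles 2..5 of the 'y' field, pages holding 4 particles each:
    ranges = get_ranges(2, 4, 'y', words=6, real_size=4,
                        np_per_page=4, num_pages=2)
    # Page 0's y block starts at byte 16; skipping 2 reals gives offset
    # 24 with 2 reals left in the page.  Page 1's y block starts at 112.
    print ranges  # [(24, 2), (112, 2)]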
 
 

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -35,6 +35,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 
 from .definitions import ramses_header
 from yt.utilities.definitions import \
@@ -252,43 +254,7 @@
         self.select(selector)
         return self.count(selector)
 
-class RAMSESDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
+class RAMSESDomainSubset(OctreeSubset):
 
     def fill(self, content, fields):
         # Here we get a copy of the file, which we skip through and read the
@@ -389,8 +355,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 from .fields import \
@@ -70,40 +72,8 @@
     def _calculate_offsets(self, fields):
         pass
 
-class ParticleDomainSubset(object):
-    def __init__(self, domain, mask, count):
-        self.domain = domain
-        self.mask = mask
-        self.cell_count = count
-        self.oct_handler = domain.pf.h.oct_handler
-        level_counts = self.oct_handler.count_levels(
-            99, self.domain.domain_id, mask)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count)
-
+class ParticleDomainSubset(OctreeSubset):
+    pass
 
 class ParticleGeometryHandler(OctreeGeometryHandler):
 
@@ -126,7 +96,7 @@
         total_particles = sum(sum(d.total_particles.values())
                               for d in self.domains)
         self.oct_handler = ParticleOctreeContainer(
-            self.parameter_file.domain_dimensions,
+            self.parameter_file.domain_dimensions/2,
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         self.oct_handler.n_ref = 64
@@ -170,8 +140,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
@@ -216,6 +194,7 @@
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
         self.cosmological_simulation = 1
+        self.periodicity = (True, True, True)
         self.current_redshift = hvals["Redshift"]
         self.omega_lambda = hvals["OmegaLambda"]
         self.omega_matter = hvals["Omega0"]
@@ -317,6 +296,7 @@
         self.domain_left_edge = np.zeros(3, "float64")
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
 
@@ -371,10 +351,27 @@
                     ('dummy',   'i'))
 
     def __init__(self, filename, data_style="tipsy",
-                 root_dimensions = 64):
+                 root_dimensions = 64, endian = ">",
+                 field_dtypes = None,
+                 domain_left_edge = None,
+                 domain_right_edge = None):
+        self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        if domain_left_edge is None:
+            domain_left_edge = np.zeros(3, "float64") - 0.5
+        if domain_right_edge is None:
+            domain_right_edge = np.zeros(3, "float64") + 0.5
+
+        self.domain_left_edge = np.array(domain_left_edge, dtype="float64")
+        self.domain_right_edge = np.array(domain_right_edge, dtype="float64")
+
+        # My understanding is that dtypes are set on a field by field basis,
+        # not on a (particle type, field) basis
+        if field_dtypes is None: field_dtypes = {}
+        self._field_dtypes = field_dtypes
+
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
@@ -393,7 +390,7 @@
         # in the GADGET-2 user guide.
 
         f = open(self.parameter_filename, "rb")
-        hh = ">" + "".join(["%s" % (b) for a,b in self._header_spec])
+        hh = self.endian + "".join(["%s" % (b) for a,b in self._header_spec])
         hvals = dict([(a, c) for (a, b), c in zip(self._header_spec,
                      struct.unpack(hh, f.read(struct.calcsize(hh))))])
         self._header_offset = f.tell()
@@ -408,9 +405,11 @@
         # This may not be correct.
         self.current_time = hvals["time"]
 
-        self.domain_left_edge = np.zeros(3, "float64") - 0.5
-        self.domain_right_edge = np.ones(3, "float64") + 0.5
+        # NOTE: These are now set in the main initializer.
+        #self.domain_left_edge = np.zeros(3, "float64") - 0.5
+        #self.domain_right_edge = np.ones(3, "float64") + 0.5
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
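
TipsyStaticOutput now takes the endianness, per-field dtypes, and
explicit domain edges at construction time instead of hard-coding them.
A hedged usage sketch (the filename and dtype override are
hypothetical):

    import numpy as np
    from yt.frontends.sph.data_structures import TipsyStaticOutput

    pf = TipsyStaticOutput("galaxy.00300",            # hypothetical file
                           endian="<",                # little-endian data
                           field_dtypes={"Coordinates": "d"},  # 8-byte reals
                           domain_left_edge=np.array([-0.5] * 3),
                           domain_right_edge=np.array([0.5] * 3))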
 

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -372,6 +372,7 @@
         return rv
 
     def _initialize_octree(self, domain, octree):
+        pf = domain.pf
         with open(domain.domain_filename, "rb") as f:
             f.seek(domain.pf._header_offset)
             for ptype in self._ptypes:
@@ -391,6 +392,11 @@
                             pos[:,1].min(), pos[:,1].max())
                 mylog.debug("Spanning: %0.3e .. %0.3e in z",
                             pos[:,2].min(), pos[:,2].max())
+                if np.any(pos.min(axis=0) < pf.domain_left_edge) or \
+                   np.any(pos.max(axis=0) > pf.domain_right_edge):
+                    raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
+                                           pf.domain_left_edge,
+                                           pf.domain_right_edge)
                 del pp
                 octree.add(pos, domain.domain_id)
 
@@ -412,10 +418,12 @@
         for ptype, field in self._fields:
             pfields = []
             if tp[ptype] == 0: continue
+            dtbase = domain.pf._field_dtypes.get(field, 'f')
+            ff = "%s%s" % (domain.pf.endian, dtbase)
             if field in _vector_fields:
-                dt = (field, [('x', '>f'), ('y', '>f'), ('z', '>f')])
+                dt = (field, [('x', ff), ('y', ff), ('z', ff)])
             else:
-                dt = (field, '>f')
+                dt = (field, ff)
             pds.setdefault(ptype, []).append(dt)
             field_list.append((ptype, field))
         for ptype in pds:

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/frontends/sph/smoothing_kernel.pyx
--- a/yt/frontends/sph/smoothing_kernel.pyx
+++ b/yt/frontends/sph/smoothing_kernel.pyx
@@ -53,21 +53,28 @@
     for p in range(ngas):
         kernel_sum[p] = 0.0
         skip = 0
+        # Find the # of cells of the kernel
         for i in range(3):
             pos[i] = ppos[p, i]
+            # Get particle root grid integer index
             ind[i] = <int>((pos[i] - left_edge[i]) / dds[i])
+            # How many root grid cells does the smoothing length span + 1
             half_len = <int>(hsml[p]/dds[i]) + 1
+            # Left and right integer indices of the smoothing range
+            # If smoothing len is small could be inside the same bin
             ib0[i] = ind[i] - half_len
             ib1[i] = ind[i] + half_len
             #pos[i] = ppos[p, i] - left_edge[i]
             #ind[i] = <int>(pos[i] / dds[i])
             #ib0[i] = <int>((pos[i] - hsml[i]) / dds[i]) - 1
             #ib1[i] = <int>((pos[i] + hsml[i]) / dds[i]) + 1
+            # Skip if outside out root grid
             if ib0[i] >= dims[i] or ib1[i] < 0:
                 skip = 1
             ib0[i] = iclip(ib0[i], 0, dims[i] - 1)
             ib1[i] = iclip(ib1[i], 0, dims[i] - 1)
         if skip == 1: continue
+        # Having found the kernel shape, calculate the kernel weight
         for i from ib0[0] <= i <= ib1[0]:
             idist[0] = (ind[0] - i) * (ind[0] - i) * sdds[0]
             for j from ib0[1] <= j <= ib1[1]:
@@ -75,10 +82,14 @@
                 for k from ib0[2] <= k <= ib1[2]:
                     idist[2] = (ind[2] - k) * (ind[2] - k) * sdds[2]
                     dist = idist[0] + idist[1] + idist[2]
+                    # Calculate distance in multiples of the smoothing length
                     dist = sqrt(dist) / hsml[p]
+                    # Kernel is 3D but save the elements in a 1D array
                     gi = ((i * dims[1] + j) * dims[2]) + k
                     pdist[gi] = sph_kernel(dist)
+                    # Save sum to normalize later
                     kernel_sum[p] += pdist[gi]
+        # Having found the kernel, deposit accordingly into gdata
         for i from ib0[0] <= i <= ib1[0]:
             for j from ib0[1] <= j <= ib1[1]:
                 for k from ib0[2] <= k <= ib1[2]:
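
The comments added here spell out the deposition pattern: each
particle's smoothing length is converted into a span of root-grid cells,
the kernel is evaluated at every covered cell as a function of distance
in units of hsml, and kernel_sum is retained so the weights can be
normalized before depositing.  A minimal 1-D sketch of that
normalize-then-deposit pattern (the cubic-spline form of sph_kernel is
an assumption; its normalization constant drops out once the weights are
renormalized):

    import numpy as np

    def sph_kernel(q):
        # Assumed cubic-spline shape; zero beyond one smoothing length.
        if q < 0.5:
            return 1.0 - 6.0 * q ** 2 + 6.0 * q ** 3
        elif q < 1.0:
            return 2.0 * (1.0 - q) ** 3
        return 0.0

    cells = np.arange(10, dtype="float64") + 0.5  # 1-D centers, dx = 1
    pos, hsml, mass = 4.2, 2.5, 1.0               # one particle
    q = np.abs(cells - pos) / hsml                # distance in hsml units
    w = np.array([sph_kernel(qi) for qi in q])
    grid = mass * w / w.sum()                     # normalized deposition
    assert np.isclose(grid.sum(), mass)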

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/geometry/fake_octree.pyx
--- /dev/null
+++ b/yt/geometry/fake_octree.pyx
@@ -0,0 +1,90 @@
+"""
+Make a fake octree, deposit particle at every leaf
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free, rand, RAND_MAX
+cimport numpy as np
+import numpy as np
+cimport cython
+
+from oct_container cimport Oct, RAMSESOctreeContainer
+
+# Create a balanced octree by a random walk that recursively
+# subdivides
+def create_fake_octree(RAMSESOctreeContainer oct_handler,
+                       long max_noct,
+                       long max_level,
+                       np.ndarray[np.int32_t, ndim=1] ndd,
+                       np.ndarray[np.float64_t, ndim=1] dle,
+                       np.ndarray[np.float64_t, ndim=1] dre,
+                       float fsubdivide):
+    cdef int[3] dd #hold the octant index
+    cdef int[3] ind #hold the octant index
+    cdef long i
+    cdef long cur_leaf = 0
+    cdef np.ndarray[np.uint8_t, ndim=2] mask
+    for i in range(3):
+        ind[i] = 0
+        dd[i] = ndd[i]
+    oct_handler.allocate_domains([max_noct])
+    parent = oct_handler.next_root(1, ind)
+    parent.domain = 1
+    cur_leaf = 8 #we've added one parent...
+    mask = np.ones((max_noct,8),dtype='uint8')
+    while oct_handler.domains[0].n_assigned < max_noct:
+        print "root: nocts ", oct_handler.domains[0].n_assigned
+        cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0,
+                             max_noct, max_level, fsubdivide, mask)
+    return cur_leaf
+                             
+
+cdef long subdivide(RAMSESOctreeContainer oct_handler, 
+                    Oct *parent,
+                    int ind[3], int dd[3], 
+                    long cur_leaf, long cur_level, 
+                    long max_noct, long max_level, float fsubdivide,
+                    np.ndarray[np.uint8_t, ndim=2] mask):
+    print "child", parent.file_ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
+    cdef int ddr[3]
+    cdef long i,j,k
+    cdef float rf #random float from 0-1
+    if cur_level >= max_level: 
+        return cur_leaf
+    if oct_handler.domains[0].n_assigned >= max_noct:
+        return cur_leaf
+    for i in range(3):
+        ind[i] = <int> ((rand() * 1.0 / RAND_MAX) * dd[i])
+        ddr[i] = 2
+    rf = rand() * 1.0 / RAND_MAX
+    if rf > fsubdivide:
+        if parent.children[ind[0]][ind[1]][ind[2]] == NULL:
+            cur_leaf += 7 
+        oct = oct_handler.next_child(1, ind, parent)
+        oct.domain = 1
+        cur_leaf = subdivide(oct_handler, oct, ind, ddr, cur_leaf, 
+                             cur_level + 1, max_noct, max_level, 
+                             fsubdivide, mask)
+    return cur_leaf
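
create_fake_octree grows a random, balanced octree for testing: starting
from a single root oct it walks to a random child index and subdivides
with probability 1 - fsubdivide per visit, stopping at max_level or once
max_noct octs have been assigned.  A hedged invocation sketch (the
RAMSESOctreeContainer constructor is assumed to take the dimensions and
domain edges, mirroring its use elsewhere in this changeset):

    import numpy as np
    from yt.geometry.oct_container import RAMSESOctreeContainer
    from yt.geometry.fake_octree import create_fake_octree

    ndd = np.array([1, 1, 1], dtype="int32")  # a single root oct
    dle = np.zeros(3, dtype="float64")
    dre = np.ones(3, dtype="float64")
    oct_handler = RAMSESOctreeContainer(ndd, dle, dre)
    n_leaves = create_fake_octree(oct_handler, 100, 5, ndd, dle, dre, 0.25)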

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -30,8 +30,12 @@
 
 cdef struct Oct
 cdef struct Oct:
-    np.int64_t ind          # index
-    np.int64_t local_ind
+    np.int64_t file_ind     # index with respect to the order in which it was
+                            # added
+    np.int64_t domain_ind   # index within the global set of domains
+                            # note that moving to a local index will require
+                            # moving to split-up masks, which is part of a
+                            # bigger refactor
     np.int64_t domain       # (opt) addl int index
     np.int64_t pos[3]       # position in ints
     np.int8_t level
@@ -39,6 +43,10 @@
     Oct *children[2][2][2]
     Oct *parent
 
+cdef struct OctInfo:
+    np.float64_t left_edge[3]
+    np.float64_t dds[3]
+
 cdef struct OctAllocationContainer
 cdef struct OctAllocationContainer:
     np.int64_t n
@@ -54,16 +62,12 @@
     cdef np.float64_t DLE[3], DRE[3]
     cdef public int nocts
     cdef public int max_domain
-    cdef Oct* get(self, ppos)
+    cdef Oct* get(self, np.float64_t ppos[3], OctInfo *oinfo = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
-
-cdef class ARTIOOctreeContainer(OctreeContainer):
-    cdef OctAllocationContainer **domains
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3])
-    cdef Oct *next_free_oct( self, int curdom )
-    cdef int valid_domain_oct(self, int curdom, Oct *parent)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, int curlevel, double pp[3])
+    # This function must return the offset from global-to-local domains; i.e.,
+    # OctAllocationContainer.offset if such a thing exists.
+    cdef np.int64_t get_domain_offset(self, int domain_id)
 
 cdef class RAMSESOctreeContainer(OctreeContainer):
     cdef OctAllocationContainer **domains

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -56,8 +56,8 @@
     for n in range(n_octs):
         oct = &n_cont.my_octs[n]
         oct.parent = NULL
-        oct.ind = oct.domain = -1
-        oct.local_ind = n + n_cont.offset
+        oct.file_ind = oct.domain = -1
+        oct.domain_ind = n + n_cont.offset
         oct.level = -1
         for i in range(2):
             for j in range(2):
@@ -130,7 +130,7 @@
         while cur != NULL:
             for i in range(cur.n_assigned):
                 this = &cur.my_octs[i]
-                yield (this.ind, this.local_ind, this.domain)
+                yield (this.file_ind, this.domain_ind, this.domain)
             cur = cur.next
 
     cdef void oct_bounds(self, Oct *o, np.float64_t *corner, np.float64_t *size):
@@ -139,10 +139,13 @@
             size[i] = (self.DRE[i] - self.DLE[i]) / (self.nn[i] << o.level)
             corner[i] = o.pos[i] * size[i] + self.DLE[i]
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return 0
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef Oct *get(self, ppos):
+    cdef Oct *get(self, np.float64_t ppos[3], OctInfo *oinfo = NULL):
         #Given a floating point position, retrieve the most
         #refined oct at that time
         cdef np.int64_t ind[3]
@@ -150,21 +153,34 @@
         cdef Oct *cur
         cdef int i
         for i in range(3):
-            pp[i] = ppos[i] - self.DLE[i]
             dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
-            cp[i] = (ind[i] + 0.5) * dds[i]
-        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
-        while cur.children[0][0][0] != NULL:
+            ind[i] = <np.int64_t> ((ppos[i] - self.DLE[i])/dds[i])
+            cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
+        next = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        # We want to stop recursing when there's nowhere else to go
+        while next != NULL:
+            cur = next
             for i in range(3):
                 dds[i] = dds[i] / 2.0
-                if cp[i] > pp[i]:
+                if cp[i] > ppos[i]:
                     ind[i] = 0
                     cp[i] -= dds[i] / 2.0
                 else:
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
-            cur = cur.children[ind[0]][ind[1]][ind[2]]
+            next = cur.children[ind[0]][ind[1]][ind[2]]
+        if oinfo == NULL: return cur
+        for i in range(3):
+            # This will happen *after* we quit out, so we need to back out the
+            # last change to cp
+            if ind[i] == 1:
+                cp[i] -= dds[i]/2.0 # Now centered
+            else:
+                cp[i] += dds[i]/2.0
+            # We don't need to change dds[i] as it has been halved from the
+            # oct width, thus making it already the cell width
+            oinfo.dds[i] = dds[i] # Cell width
+            oinfo.left_edge[i] = cp[i] - dds[i] # Center minus dds
         return cur
 
     @cython.boundscheck(False)
@@ -186,7 +202,40 @@
                 cur = cur.next
             o = &cur.my_octs[oi - cur.offset]
             for i in range(8):
-                count[o.domain - 1] += mask[o.local_ind,i]
+                count[o.domain - 1] += mask[o.domain_ind,i]
+        return count
+
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def count_leaves(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        # Modified to work when not all octs are assigned
+        cdef int i, j, k, ii
+        cdef np.int64_t oi
+        # pos here is CELL center, not OCT center.
+        cdef np.float64_t pos[3]
+        cdef int n = mask.shape[0]
+        cdef np.ndarray[np.int64_t, ndim=1] count
+        count = np.zeros(self.max_domain, 'int64')
+        # 
+        cur = self.cont
+        for oi in range(n):
+            if oi - cur.offset >= cur.n_assigned:
+                cur = cur.next
+                if cur == NULL:
+                    break
+            o = &cur.my_octs[oi - cur.offset]
+            # skip if unassigned
+            if o == NULL:
+                continue
+            if o.domain == -1: 
+                continue
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if o.children[i][j][k] == NULL:
+                            ii = ((k*2)+j)*2+i
+                            count[o.domain - 1] += mask[o.domain_ind,ii]
         return count
 
     @cython.boundscheck(False)
@@ -260,14 +309,17 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def get_neighbor_boundaries(self, ppos):
+    def get_neighbor_boundaries(self, oppos):
+        cdef int i, ii
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
         cdef np.ndarray[np.float64_t, ndim=2] bounds
         cdef np.float64_t corner[3], size[3]
         bounds = np.zeros((27,6), dtype="float64")
-        cdef int i, ii
         tnp = 0
         for i in range(27):
             self.oct_bounds(neighbors[i], corner, size)
@@ -276,330 +328,11 @@
                 bounds[i, 3+ii] = size[ii]
         return bounds
 
-cdef class ARTIOOctreeContainer(OctreeContainer):
+cdef class RAMSESOctreeContainer(OctreeContainer):
 
-    def allocate_domains(self, domain_counts):
-        cdef int count, i
-        cdef OctAllocationContainer *cur = self.cont
-        assert(cur == NULL)
-        self.max_domain = len(domain_counts) # 1-indexed
-        self.domains = <OctAllocationContainer **> malloc(
-            sizeof(OctAllocationContainer *) * len(domain_counts))
-        for i, count in enumerate(domain_counts):
-            cur = allocate_octs(count, cur)
-            if self.cont == NULL: self.cont = cur
-            self.domains[i] = cur
-        
-    def __dealloc__(self):
-        # This gets called BEFORE the superclass deallocation.  But, both get
-        # called.
-        if self.domains != NULL: free(self.domains)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count(self, np.ndarray[np.uint8_t, ndim=1, cast=True] mask,
-                     split = False):
-        cdef int n = mask.shape[0]
-        cdef int i, dom
-        cdef OctAllocationContainer *cur
-        cdef np.ndarray[np.int64_t, ndim=1] count
-        count = np.zeros(self.max_domain, 'int64')
-        # This is the idiom for iterating over many containers.
-        cur = self.cont
-        for i in range(n):
-            if i - cur.offset >= cur.n: cur = cur.next
-            if mask[i] == 1:
-                count[cur.my_octs[i - cur.offset].domain - 1] += 1
-        return count
-
-    def check(self, int curdom):
-        cdef int dind, pi
-        cdef Oct oct
-        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
-        cdef int nbad = 0
-        for pi in range(cont.n_assigned):
-            oct = cont.my_octs[pi]
-            for i in range(2):
-                for j in range(2):
-                    for k in range(2):
-                        if oct.children[i][j][k] != NULL and \
-                           oct.children[i][j][k].level != oct.level + 1:
-                            if curdom == 61:
-                                print pi, oct.children[i][j][k].level,
-                                print oct.level
-                            nbad += 1
-        print "DOMAIN % 3i HAS % 9i BAD OCTS (%s / %s / %s)" % (curdom, nbad, 
-            cont.n - cont.n_assigned, cont.n_assigned, cont.n)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *next_free_oct( self, int curdom ) :
-        cdef OctAllocationContainer *cont
-        cdef Oct *next_oct
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            print "Error, invalid domain or unallocated domains"
-            raise RuntimeError
-        
-        cont = self.domains[curdom - 1]
-        if cont.n_assigned >= cont.n :
-            print "Error, ran out of octs in domain curdom"
-            raise RuntimeError
-
-        self.nocts += 1
-        next_oct = &cont.my_octs[cont.n_assigned]
-        cont.n_assigned += 1
-        return next_oct
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    cdef int valid_domain_oct(self, int curdom, Oct *parent) :
-        cdef OctAllocationContainer *cont
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            raise RuntimeError
-        cont = self.domains[curdom - 1]
-
-        if parent == NULL or parent < &cont.my_octs[0] or \
-                parent > &cont.my_octs[cont.n_assigned] :
-            return 0
-        else :
-            return 1
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3]):
-        cdef np.int64_t ind[3]
-        cdef np.float64_t dds
-        cdef int i
-        for i in range(3):
-            dds = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> floor((ppos[i]-self.DLE[i])/dds)
-        return self.root_mesh[ind[0]][ind[1]][ind[2]]
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, 
-                    int curlevel, np.float64_t pp[3]):
-
-        cdef int level, i, ind[3]
-        cdef Oct *cur, *next_oct
-        cdef np.int64_t pos[3]
-        cdef np.float64_t dds
-
-        if curlevel < 0 :
-            raise RuntimeError
-        for i in range(3):
-            if pp[i] < self.DLE[i] or pp[i] > self.DRE[i] :
-                raise RuntimeError
-            dds = (self.DRE[i] - self.DLE[i])/(<np.int64_t>self.nn[i])
-            pos[i] = <np.int64_t> floor((pp[i]-self.DLE[i])*<np.float64_t>(1<<curlevel)/dds)
-
-        if curlevel == 0 :
-            cur = NULL
-        elif parent == NULL :
-            cur = self.get_root_oct(pp)
-            assert( cur != NULL )
-
-            # Now we find the location we want
-            for level in range(1,curlevel):
-                # At every level, find the cell this oct lives inside
-                for i in range(3) :
-                    if pos[i] < (2*cur.pos[i]+1)<<(curlevel-level) :
-                        ind[i] = 0
-                    else :
-                        ind[i] = 1
-                cur = cur.children[ind[0]][ind[1]][ind[2]]
-                if cur == NULL:
-                    # in ART we don't allocate down to curlevel 
-                    # if parent doesn't exist
-                    print "Error, no oct exists at that level"
-                    raise RuntimeError
-        else :
-            if not self.valid_domain_oct(curdom,parent) or \
-                    parent.level != curlevel - 1:
-                raise RuntimeError
-            cur = parent
- 
-        next_oct = self.next_free_oct( curdom )
-        if cur == NULL :
-            self.root_mesh[pos[0]][pos[1]][pos[2]] = next_oct
-        else :
-            for i in range(3) :
-                if pos[i] < 2*cur.pos[i]+1 :
-                    ind[i] = 0
-                else :
-                    ind[i] = 1
-            if cur.level != curlevel - 1 or  \
-                    cur.children[ind[0]][ind[1]][ind[2]] != NULL :
-                print "Error in add_oct: child already filled!"
-                raise RuntimeError
-
-            cur.children[ind[0]][ind[1]][ind[2]] = next_oct
-        for i in range(3) :
-            next_oct.pos[i] = pos[i]
-        next_oct.domain = curdom
-        next_oct.parent = cur
-        next_oct.ind = 1
-        next_oct.level = curlevel
-        return next_oct
-
-    # ii:mask/art ; ci=ramses loop backward (k<-fast, j ,i<-slow) 
-    # ii=0 000 art 000 ci 000 
-    # ii=1 100 art 100 ci 001 
-    # ii=2 010 art 010 ci 010 
-    # ii=3 110 art 110 ci 011
-    # ii=4 001 art 001 ci 100
-    # ii=5 101 art 011 ci 101
-    # ii=6 011 art 011 ci 110
-    # ii=7 111 art 111 ci 111
-    # keep coords ints so multiply by pow(2,1) when increasing level.
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def icoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii, level
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="int64")
-        ci=0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = (o.pos[0] << 1) + i
-                        coords[ci, 1] = (o.pos[1] << 1) + j
-                        coords[ci, 2] = (o.pos[2] << 1) + k
-                        ci += 1
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def ires(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=1] levels
-        levels = np.empty(cell_count, dtype="int64")
-        ci = 0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[oi + cur.offset, i] == 0: continue
-                levels[ci] = o.level
-                ci +=1
-        return levels
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count_levels(self, int max_level, int domain_id,
-                     np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
-        cdef np.ndarray[np.int64_t, ndim=1] level_count
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef int oi, i
-        level_count = np.zeros(max_level+1, 'int64')
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
-                level_count[o.level] += 1
-        return level_count
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fcoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef np.float64_t pos[3]
-        cdef np.float64_t base_dx[3], dx[3]
-        n = mask.shape[0]
-        cdef np.ndarray[np.float64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="float64")
-        ci =0 
-        for i in range(3):
-            # This is the base_dx, but not the base distance from the center
-            # position.  Note that the positions will also all be offset by
-            # dx/2.0.  This is also for *oct grids*, not cells.
-            base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(3):
-                # This gives the *grid* width for this level
-                dx[i] = base_dx[i] / (1 << o.level)
-                # o.pos is the *grid* index, so pos[i] is the center of the
-                # first cell in the grid
-                pos[i] = self.DLE[i] + o.pos[i]*dx[i] + dx[i]/4.0
-                dx[i] = dx[i] / 2.0 # This is now the *offset* 
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = pos[0] + dx[0] * i
-                        coords[ci, 1] = pos[1] + dx[1] * j
-                        coords[ci, 2] = pos[2] + dx[2] * k
-                        ci +=1 
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fill_mask(self, int domain, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
-        cdef np.ndarray[np.float32_t, ndim=1] source
-        cdef np.ndarray[np.float64_t, ndim=1] dest
-        cdef OctAllocationContainer *dom = self.domains[domain - 1]
-        cdef Oct *o
-        cdef int n
-        cdef int i, j, k, ii
-        cdef int local_pos, local_filled
-        cdef np.float64_t val
-        for key in dest_fields:
-            local_filled = 0
-            dest = dest_fields[key]
-            source = source_fields[key]
-            # snl: an alternative to filling level 0 yt-octs is to produce a 
-            # mapping between the mask and the source read order
-            for n in range(dom.n):
-                o = &dom.my_octs[n]
-                for k in range(2):
-                    for j in range(2):
-                        for i in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.local_ind*8+ii]
-                            # print 'oct_container.pyx:sourcemasked',o.level,local_filled, o.local_ind*8+ii, source[o.local_ind*8+ii]
-                            local_filled += 1
-        return local_filled
-
-cdef class RAMSESOctreeContainer(OctreeContainer):
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        cdef OctAllocationContainer *cont = self.domains[domain_id - 1]
+        return cont.offset
 
     cdef Oct* next_root(self, int domain_id, int ind[3]):
         cdef Oct *next = self.root_mesh[ind[0]][ind[1]][ind[2]]
@@ -666,7 +399,77 @@
                 count[cur.my_octs[i - cur.offset].domain - 1] += 1
         return count
 
-    def check(self, int curdom):
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n,  use
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
+        return m2 # NOTE: This is uint8_t
+
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this from domain_and is that, although the mask
+        # covers the whole domain, the output covers only the much smaller
+        # subset of octs selected by both the given domain *and* the mask.
+        # Note also that callers of domain_and typically apply a logical
+        # "any" along the oct axis; here we do not.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, ii, oi, n, nm, use
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.domain_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
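
The reshape that domain_mask performs can be sketched densely in NumPy.
This assumes the mask rows already belong to the domain in question (the
Cython version also filters by domain and skips unassigned octs);
compressed_domain_mask is a hypothetical name:

    import numpy as np

    def compressed_domain_mask(mask):
        # Keep only octs with at least one selected cell, then unpack the
        # flat 8-cell axis, indexed by ii = ((k*2)+j)*2+i, into explicit
        # (i, j, k, oct) axes.
        kept = mask[mask.any(axis=1)]       # shape (nm, 8)
        m = kept.reshape(-1, 2, 2, 2)       # axes: (oct, k, j, i)
        return m.transpose(3, 2, 1, 0)      # axes: (i, j, k, oct)

    m = np.zeros((3, 8), dtype=bool)
    m[1, 5] = True                          # ii = 5 -> (i, j, k) = (1, 0, 1)
    print(compressed_domain_mask(m).shape)  # (2, 2, 2, 1)
    print(compressed_domain_mask(m)[1, 0, 1, 0])  # True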
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n, 'int64') - 1
+        nm = 0
+        for oi in range(cur.n):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            if use == 1:
+                ind[o.domain_ind - cur.offset] = nm
+            nm += use
+        return ind
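
The same bookkeeping as domain_ind, sketched with a dense NumPy array
(domain_indices is a hypothetical stand-in; the Cython version walks the
allocation container instead):

    import numpy as np

    def domain_indices(mask):
        # Per oct: its running index into the reduced (masked) oct list,
        # or -1 when none of its eight cells is selected.
        used = mask.any(axis=1)
        ind = np.full(mask.shape[0], -1, dtype=np.int64)
        ind[used] = np.arange(used.sum())
        return ind

    m = np.zeros((4, 8), dtype=bool)
    m[0, 3] = m[2, 0] = True
    print(domain_indices(m))                # [ 0 -1  1 -1]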
+
+    def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct
         cdef OctAllocationContainer *cont = self.domains[curdom - 1]
@@ -675,6 +478,9 @@
         cdef int unassigned = 0
         for pi in range(cont.n_assigned):
             oct = cont.my_octs[pi]
+            if print_all==1:
+                print pi, oct.level, oct.domain,
+                print oct.pos[0],oct.pos[1],oct.pos[2]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
@@ -691,6 +497,33 @@
         print "DOMAIN % 3i HAS % 9i MISSED OCTS" % (curdom, nmissed)
         print "DOMAIN % 3i HAS % 9i UNASSIGNED OCTS" % (curdom, unassigned)
 
+    def check_refinement(self, int curdom):
+        cdef int pi, i, j, k, some_refined, some_unrefined
+        cdef Oct *oct
+        cdef int bad = 0
+        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
+        for pi in range(cont.n_assigned):
+            oct = &cont.my_octs[pi]
+            some_unrefined = 0
+            some_refined = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if oct.children[i][j][k] == NULL:
+                            some_unrefined = 1
+                        else:
+                            some_refined = 1
+            if some_unrefined == some_refined == 1:
+                #print "BAD", oct.file_ind, oct.domain_ind
+                bad += 1
+                if curdom == 10 or curdom == 72:
+                    for i in range(2):
+                        for j in range(2):
+                            for k in range(2):
+                                print (oct.children[i][j][k] == NULL),
+                    print
+        print "BAD TOTAL", curdom, bad, cont.n_assigned
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -739,7 +572,7 @@
             # Now we should be at the right level
             cur.domain = curdom
             if local == 1:
-                cur.ind = p
+                cur.file_ind = p
             cur.level = curlevel
         return cont.n_assigned - initial
 
@@ -757,18 +590,18 @@
         n = mask.shape[0]
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
+        ci = 0
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
                         coords[ci, 2] = (o.pos[2] << 1) + k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
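
Each cell's integer coordinates come from doubling the oct's grid index
and adding the child offset, as in the coords assignments above.  A
hypothetical one-oct sketch:

    def cell_icoords(pos):
        # Integer cell coordinates for an oct at *oct* grid index
        # pos = (px, py, pz) on its level: (p << 1) + child offset.
        px, py, pz = pos
        return [((px << 1) + i, (py << 1) + j, (pz << 1) + k)
                for k in range(2) for j in range(2) for i in range(2)]

    print(cell_icoords((3, 0, 1))[:2])      # [(6, 0, 2), (7, 0, 2)]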
 
     @cython.boundscheck(False)
@@ -790,9 +623,8 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 if mask[oi + cur.offset, i] == 0: continue
-                ci = level_counts[o.level]
                 levels[ci] = o.level
-                level_counts[o.level] += 1
+                ci += 1
         return levels
 
     @cython.boundscheck(False)
@@ -808,7 +640,7 @@
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -833,6 +665,7 @@
             # position.  Note that the positions will also all be offset by
             # dx/2.0.  This is also for *oct grids*, not cells.
             base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+        ci = 0
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for i in range(3):
@@ -846,12 +679,11 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
                         coords[ci, 2] = pos[2] + dx[2] * k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
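
The cell centers follow from the oct's grid index: pos starts as the
center of the oct's first cell, and dx, once halved, is the
center-to-center offset.  A one-dimensional worked example with
illustrative numbers:

    DLE, DRE, nn = 0.0, 1.0, 2       # domain edges, root octs per axis
    level, p = 1, 3                  # oct at grid index p on this level
    base_dx = (DRE - DLE) / nn       # root *oct* width
    dx = base_dx / (1 << level)      # oct width on this level
    pos = DLE + p * dx + dx / 4.0    # center of the oct's first cell
    dx = dx / 2.0                    # now the cell-to-cell offset
    print([pos + dx * i for i in range(2)])   # [0.8125, 0.9375]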
 
     @cython.boundscheck(False)
@@ -873,20 +705,17 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                if o.level != level: continue
-                for i in range(2):
-                    for j in range(2):
-                        for k in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.ind, ii]
-                            local_filled += 1
+                for ii in range(8):
+                    # We iterate and check here to keep our counts consistent
+                    # when filling different levels.
+                    if mask[o.domain_ind, ii] == 0: continue
+                    if o.level == level: 
+                        dest[local_filled] = source[o.file_ind, ii]
+                    local_filled += 1
         return local_filled
 
+cdef class ARTOctreeContainer(RAMSESOctreeContainer):
 
-
-cdef class ARTOctreeContainer(RAMSESOctreeContainer):
-    #this class is specifically for the NMSU ART
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -910,7 +739,7 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                index = o.ind-subchunk_offset
+                index = o.file_ind-subchunk_offset
                 if o.level != level: continue
                 if index < 0: continue
                 if index >= subchunk_max: 
@@ -921,7 +750,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             dest[local_filled + offset] = \
                                 source[index,ii]
                             local_filled += 1
@@ -961,7 +790,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             ox = (o.pos[0] << 1) + i
                             oy = (o.pos[1] << 1) + j
                             oz = (o.pos[2] << 1) + k
@@ -1036,12 +865,23 @@
                 free(o.sd.pos)
         free(o)
 
+    def __iter__(self):
+        # Get the next oct; this traverses domains.
+        # Note that oct containers can be sorted
+        # so that consecutive octs are on the same domain.
+        cdef int oi
+        cdef Oct *o
+        for oi in range(self.nocts):
+            o = self.oct_list[oi]
+            yield (o.file_ind, o.domain_ind, o.domain)
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def icoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the integer positions of the cells
         #Limited to this domain and within the mask
         #Positions are binary; aside from the root mesh
@@ -1070,7 +910,8 @@
     @cython.cdivision(True)
     def ires(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the 'resolution' of each cell, i.e. the level
         cdef np.ndarray[np.int64_t, ndim=1] res
         res = np.empty(cell_count, dtype="int64")
@@ -1090,7 +931,8 @@
     @cython.cdivision(True)
     def fcoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the floating point unitary position of every cell
         cdef np.ndarray[np.float64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="float64")
@@ -1141,6 +983,7 @@
         cdef int max_level = 0
         self.oct_list = <Oct**> malloc(sizeof(Oct*)*self.nocts)
         cdef np.int64_t i = 0
+        cdef np.int64_t dom_ind
         cdef ParticleArrays *c = self.first_sd
         while c != NULL:
             self.oct_list[i] = c.oct
@@ -1159,13 +1002,20 @@
         self.dom_offsets = <np.int64_t *>malloc(sizeof(np.int64_t) *
                                                 (self.max_domain + 3))
         self.dom_offsets[0] = 0
+        dom_ind = 0
         for i in range(self.nocts):
-            self.oct_list[i].local_ind = i
+            self.oct_list[i].domain_ind = i
+            self.oct_list[i].file_ind = dom_ind
+            dom_ind += 1
             if self.oct_list[i].domain > cur_dom:
                 cur_dom = self.oct_list[i].domain
                 self.dom_offsets[cur_dom + 1] = i
+                dom_ind = 0
         self.dom_offsets[cur_dom + 2] = self.nocts
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return self.dom_offsets[domain_id + 1]
+
     cdef Oct* allocate_oct(self):
         #Allocate the memory, set to NULL or -1
         #We reserve space for n_ref particles, but keep
@@ -1175,8 +1025,8 @@
         cdef ParticleArrays *sd = <ParticleArrays*> \
             malloc(sizeof(ParticleArrays))
         cdef int i, j, k
-        my_oct.ind = my_oct.domain = -1
-        my_oct.local_ind = self.nocts - 1
+        my_oct.file_ind = my_oct.domain = -1
+        my_oct.domain_ind = self.nocts - 1
         my_oct.pos[0] = my_oct.pos[1] = my_oct.pos[2] = -1
         my_oct.level = -1
         my_oct.sd = sd
@@ -1227,7 +1077,7 @@
         for oi in range(ndo):
             o = self.oct_list[oi + doff]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -1250,7 +1100,7 @@
                 #IND Corresponding integer index on the root octs
                 #CP Center  point of that oct
                 pp[i] = pos[p, i]
-                dds[i] = (self.DRE[i] + self.DLE[i])/self.nn[i]
+                dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
                 ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
                 cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
             cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
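
This hunk also fixes a sign bug: the root cell width is (DRE - DLE)/nn,
not (DRE + DLE)/nn.  The lookup in one dimension:

    DLE, DRE, nn = 0.0, 1.0, 8       # domain edges, root octs per axis
    pp = 0.37                        # particle position
    dds = (DRE - DLE) / nn           # root oct width (the corrected form)
    ind = int((pp - DLE) / dds)      # root oct holding the particle
    cp = (ind + 0.5) * dds + DLE     # that oct's center
    print(ind, cp)                   # 2 0.3125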
@@ -1377,12 +1227,15 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def count_neighbor_particles(self, ppos):
+    def count_neighbor_particles(self, oppos):
         #How many particles are in my neighborhood
+        cdef int i, ni, dl, tnp
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
-        cdef int i, ni, dl, tnp
         tnp = 0
         for i in range(27):
             if neighbors[i].sd != NULL:
@@ -1409,4 +1262,83 @@
                 count[o.domain] += mask[oi,i]
         return count
 
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n, use
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
+        return m2
 
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this from domain_and is that, although the mask
+        # covers the whole domain, the output covers only the much smaller
+        # subset of octs selected by both the given domain *and* the mask.
+        # Note also that callers of domain_and typically apply a logical
+        # "any" along the oct axis; here we do not.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, ii, oi, n, nm, use
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        # This could perhaps be faster if we 
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.domain_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # Here we once again do something similar to the other functions.  We
+        # need a set of indices into the final reduced, masked values.  The
+        # indices will be domain.n long, and will be of type int64.  This way,
+        # we can get the Oct through a .get() call, then use Oct.file_ind as an
+        # index into this newly created array, then finally use the returned
+        # index into the domain subset array for deposition.
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef Oct *o
+        # For particle octrees, domain 0 is special and means non-leaf nodes.
+        offset = self.dom_offsets[domain_id + 1]
+        noct = self.dom_offsets[domain_id + 2] - offset
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(noct, 'int64')
+        nm = 0
+        for oi in range(noct):
+            ind[oi] = -1
+            o = self.oct_list[oi + offset]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            if use == 1:
+                ind[oi] = nm
+            nm += use
+        return ind

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/geometry/oct_geometry_handler.py
--- a/yt/geometry/oct_geometry_handler.py
+++ b/yt/geometry/oct_geometry_handler.py
@@ -54,7 +54,7 @@
         Returns (in code units) the smallest cell size in the simulation.
         """
         return (self.parameter_file.domain_width /
-                (2**self.max_level)).min()
+                (2**(self.max_level+1))).min()
 
     def convert(self, unit):
         return self.parameter_file.conversion_factors[unit]
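
The extra power of two is presumably there because max_level counts oct
refinements while each oct subdivides its cells once more.  A quick
check of the corrected formula with illustrative values:

    import numpy as np

    domain_width = np.array([1.0, 1.0, 1.0])
    max_level = 5
    # One extra halving takes us from the finest oct to its cells.
    print((domain_width / 2**(max_level + 1)).min())   # 0.015625 == 1/64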

diff -r 74c2c00d1078b5743660abeecdfb359f8266c9bd -r b8521d5e0e89939669e98c352ddc933f9d40eb89 yt/geometry/particle_deposit.pxd
--- /dev/null
+++ b/yt/geometry/particle_deposit.pxd
@@ -0,0 +1,47 @@
+"""
+Particle Deposition onto Octs
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+cimport numpy as np
+import numpy as np
+from libc.stdlib cimport malloc, free
+cimport cython
+
+from fp_utils cimport *
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+cdef extern from "alloca.h":
+    void *alloca(int)
+
+cdef inline int gind(int i, int j, int k, int dims[3]):
+    return ((k*dims[1])+j)*dims[0]+i
+
+cdef class ParticleDepositOperation:
+    # We assume each will allocate and define their own temporary storage
+    cdef np.int64_t nvals
+    cdef void process(self, int dim[3], np.float64_t left_edge[3],
+                      np.float64_t dds[3], np.int64_t offset,
+                      np.float64_t ppos[3], np.float64_t *fields)
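
gind flattens an (i, j, k) triple into a linear index with i varying
fastest; in plain Python:

    def gind(i, j, k, dims):
        # dims = (nx, ny, nz); i varies fastest, matching the inline
        # helper above.
        return ((k * dims[1]) + j) * dims[0] + i

    assert gind(1, 2, 3, (4, 5, 6)) == 69   # ((3*5)+2)*4 + 1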

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/092f2338cb85/
Changeset:   092f2338cb85
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-23 16:09:37
Summary:     A few unit tweaks for RAMSES.  Also implemented a Temperature derived field.

Note that this temperature field may require a mu that is not yet specified.
Affected #:  2 files

diff -r 4dcdcbad4ae304727cd63a5eca39a0bab7dab952 -r 092f2338cb8541e88eb0e2ac048130cf3db079ab yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -429,8 +429,10 @@
         self.time_units['1'] = 1
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0 / (self.domain_right_edge - self.domain_left_edge).max()
-        self.conversion_factors["Density"] = self.parameters['unit_d']
+        rho_u = self.parameters['unit_d']
+        self.conversion_factors["Density"] = rho_u
         vel_u = self.parameters['unit_l'] / self.parameters['unit_t']
+        self.conversion_factors["Pressure"] = rho_u*vel_u**2
         self.conversion_factors["x-velocity"] = vel_u
         self.conversion_factors["y-velocity"] = vel_u
         self.conversion_factors["z-velocity"] = vel_u
@@ -499,6 +501,5 @@
     def _is_valid(self, *args, **kwargs):
         if not os.path.basename(args[0]).startswith("info_"): return False
         fn = args[0].replace("info_", "amr_").replace(".txt", ".out00001")
-        print fn
         return os.path.exists(fn)
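
The new Pressure factor added above follows dimensionally from the
density and velocity units, [P] = [rho][v]^2.  A sketch with made-up
values (the real ones are read from the RAMSES info file):

    unit_d = 2.0e-27                 # g / cm^3, illustrative
    unit_l = 3.0857e24               # cm, illustrative
    unit_t = 4.0e15                  # s, illustrative

    vel_u = unit_l / unit_t          # cm / s
    pressure_u = unit_d * vel_u**2   # g cm^-1 s^-2 == dyne / cm^2
    print(vel_u, pressure_u)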
 

diff -r 4dcdcbad4ae304727cd63a5eca39a0bab7dab952 -r 092f2338cb8541e88eb0e2ac048130cf3db079ab yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -34,6 +34,9 @@
     ValidateSpatial, \
     ValidateGridType
 import yt.data_objects.universal_fields
+from yt.utilities.physical_constants import \
+    boltzmann_constant_cgs, \
+    mass_hydrogen_cgs
 
 RAMSESFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo, "RFI")
 add_field = RAMSESFieldInfo.add_field
@@ -73,6 +76,11 @@
 KnownRAMSESFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 KnownRAMSESFields["Density"]._convert_function=_convertDensity
 
+def _convertPressure(data):
+    return data.convert("Pressure")
+KnownRAMSESFields["Pressure"]._units=r"\rm{dyne}/\rm{cm}^{2}/\mu"
+KnownRAMSESFields["Pressure"]._convert_function=_convertPressure
+
 def _convertVelocity(data):
     return data.convert("x-velocity")
 for ax in ['x','y','z']:
@@ -134,3 +142,9 @@
                 just_one(data["CellVolumeCode"].ravel())
     # Note that we mandate grid-type here, so this is okay
     return particles
+
+def _Temperature(field, data):
+    rv = data["Pressure"]/data["Density"]
+    rv *= mass_hydrogen_cgs/boltzmann_constant_cgs
+    return rv
+add_field("Temperature", function=_Temperature, units=r"\rm{K}")


https://bitbucket.org/yt_analysis/yt/commits/1e258e92949d/
Changeset:   1e258e92949d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-23 16:19:29
Summary:     For us the Hubble constant is normalized to 100, i.e. stored as the dimensionless h.
Affected #:  1 file

diff -r 092f2338cb8541e88eb0e2ac048130cf3db079ab -r 1e258e92949d5f699268a1d784101aa19bd6ae90 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -488,13 +488,13 @@
         self.domain_right_edge = np.ones(3, dtype='float64')
         # This is likely not true, but I am not sure how to otherwise
         # distinguish them.
-        mylog.warning("No current mechanism of distinguishing cosmological simulations in RAMSES!")
+        mylog.warning("RAMSES frontend assumes all simulations are cosmological!")
         self.cosmological_simulation = 1
         self.periodicity = (True, True, True)
         self.current_redshift = (1.0 / rheader["aexp"]) - 1.0
         self.omega_lambda = rheader["omega_l"]
         self.omega_matter = rheader["omega_m"]
-        self.hubble_constant = rheader["H0"]
+        self.hubble_constant = rheader["H0"] / 100.0 # This is H100
         self.max_level = rheader['levelmax'] - rheader['levelmin']
 
     @classmethod
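
yt stores the dimensionless h rather than H0 in km/s/Mpc, so the header
value is divided by 100.  For example:

    H0 = 70.4                        # km/s/Mpc, as read from the header
    hubble_constant = H0 / 100.0     # dimensionless h
    print(hubble_constant)           # 0.704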


https://bitbucket.org/yt_analysis/yt/commits/69d81d1f44f2/
Changeset:   69d81d1f44f2
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-23 18:55:19
Summary:     Fixing RAMSES length units and adding particle mass conversions.
Affected #:  2 files

diff -r 1e258e92949d5f699268a1d784101aa19bd6ae90 -r 69d81d1f44f25bd0eb1ed9eee9b23ae04032ef4d yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -436,10 +436,13 @@
         self.conversion_factors["x-velocity"] = vel_u
         self.conversion_factors["y-velocity"] = vel_u
         self.conversion_factors["z-velocity"] = vel_u
+        # Necessary to get the length units in, which are needed for Mass
+        self.conversion_factors['mass'] = rho_u * self.parameters['unit_l']**3
 
     def _setup_nounits_units(self):
+        unit_l_prop = self.parameters['unit_l'] / (1.0 + self.current_redshift)
         for unit in mpc_conversion.keys():
-            self.units[unit] = self.parameters['unit_l'] * mpc_conversion[unit] / mpc_conversion["cm"]
+            self.units[unit] = unit_l_prop * mpc_conversion[unit] / mpc_conversion["cm"]
         for unit in sec_conversion.keys():
             self.time_units[unit] = self.parameters['unit_t'] / sec_conversion[unit]
 

diff -r 1e258e92949d5f699268a1d784101aa19bd6ae90 -r 69d81d1f44f25bd0eb1ed9eee9b23ae04032ef4d yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -109,39 +109,24 @@
                   validators = [ValidateDataField(f)],
                   particle_type = True)
 
-def _ParticleMass(field, data):
-    particles = data["particle_mass"].astype('float64') * \
-                just_one(data["CellVolumeCode"].ravel())
-    # Note that we mandate grid-type here, so this is okay
-    return particles
+for ax in 'xyz':
+    KnownRAMSESFields["particle_velocity_%s" % ax]._convert_function = \
+        _convertVelocity
 
 def _convertParticleMass(data):
-    return data.convert("Density")*(data.convert("cm")**3.0)
-def _IOLevelParticleMass(grid):
-    dd = dict(particle_mass = np.ones(1), CellVolumeCode=grid["CellVolumeCode"])
-    cf = (_ParticleMass(None, dd) * _convertParticleMass(grid))[0]
-    return cf
+    return data.convert("mass")
+
+KnownRAMSESFields["particle_mass"]._convert_function = \
+        _convertParticleMass
+KnownRAMSESFields["particle_mass"]._units = r"\mathrm{g}"
+
 def _convertParticleMassMsun(data):
-    return data.convert("Density")*((data.convert("cm")**3.0)/1.989e33)
-def _IOLevelParticleMassMsun(grid):
-    dd = dict(particle_mass = np.ones(1), CellVolumeCode=grid["CellVolumeCode"])
-    cf = (_ParticleMass(None, dd) * _convertParticleMassMsun(grid))[0]
-    return cf
-add_field("ParticleMass",
-          function=_ParticleMass, validators=[ValidateSpatial(0)],
-          particle_type=True, convert_function=_convertParticleMass,
-          particle_convert_function=_IOLevelParticleMass)
+    return 1.0/1.989e33
+add_field("ParticleMass", function=TranslationFunc("particle_mass"), 
+          particle_type=True)
 add_field("ParticleMassMsun",
-          function=_ParticleMass, validators=[ValidateSpatial(0)],
-          particle_type=True, convert_function=_convertParticleMassMsun,
-          particle_convert_function=_IOLevelParticleMassMsun)
-
-
-def _ParticleMass(field, data):
-    particles = data["particle_mass"].astype('float64') * \
-                just_one(data["CellVolumeCode"].ravel())
-    # Note that we mandate grid-type here, so this is okay
-    return particles
+          function=TranslationFunc("particle_mass"), 
+          particle_type=True, convert_function=_convertParticleMassMsun)
 
 def _Temperature(field, data):
     rv = data["Pressure"]/data["Density"]


https://bitbucket.org/yt_analysis/yt/commits/b20f76ccd3c3/
Changeset:   b20f76ccd3c3
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-24 15:02:39
Summary:     Removing factor of 1+z from length calculation in RAMSES.
Affected #:  1 file

diff -r 69d81d1f44f25bd0eb1ed9eee9b23ae04032ef4d -r b20f76ccd3c34bac9c187272593f9f49b58e7795 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -440,9 +440,10 @@
         self.conversion_factors['mass'] = rho_u * self.parameters['unit_l']**3
 
     def _setup_nounits_units(self):
-        unit_l_prop = self.parameters['unit_l'] / (1.0 + self.current_redshift)
+        # Note that unit_l *already* converts to proper!
+        unit_l = self.parameters['unit_l']
         for unit in mpc_conversion.keys():
-            self.units[unit] = unit_l_prop * mpc_conversion[unit] / mpc_conversion["cm"]
+            self.units[unit] = unit_l * mpc_conversion[unit] / mpc_conversion["cm"]
         for unit in sec_conversion.keys():
             self.time_units[unit] = self.parameters['unit_t'] / sec_conversion[unit]
 


https://bitbucket.org/yt_analysis/yt/commits/50851ef46600/
Changeset:   50851ef46600
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-24 18:00:17
Summary:     Merging from mainline development.
Affected #:  15 files

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/absorption_spectrum/absorption_line.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_line.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_line.py
@@ -24,6 +24,13 @@
 """
 
 import numpy as np
+from yt.utilities.physical_constants import \
+    charge_proton_cgs, \
+    cm_per_km, \
+    km_per_cm, \
+    mass_electron_cgs, \
+    speed_of_light_cgs
+
 
 def voigt(a,u):
     """
@@ -167,10 +174,10 @@
     """
 
     ## constants
-    me = 1.6726231e-24 / 1836.        # grams mass electron 
-    e = 4.8032e-10                    # esu 
-    c = 2.99792456e5                  # km/s
-    ccgs = c * 1.e5                   # cm/s 
+    me = mass_electron_cgs              # grams mass electron 
+    e = charge_proton_cgs               # esu 
+    c = speed_of_light_cgs * km_per_cm  # km/s
+    ccgs = speed_of_light_cgs           # cm/s 
 
     ## shift lam0 by deltav
     if deltav is not None:
@@ -181,7 +188,7 @@
         lam1 = lam0
 
     ## conversions
-    vdop = vkms * 1.e5                # in cm/s
+    vdop = vkms * cm_per_km           # in cm/s
     lam0cgs = lam0 / 1.e8             # rest wavelength in cm
     lam1cgs = lam1 / 1.e8             # line wavelength in cm
     nu1 = ccgs / lam1cgs              # line freq in Hz
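
The named constants are consistent with the literals they replace; a
quick check with approximate CGS values:

    mass_electron_cgs = 9.109e-28    # g, approximate
    speed_of_light_cgs = 2.9979e10   # cm / s, approximate
    km_per_cm = 1.0e-5

    print(1.6726231e-24 / 1836.0)            # old me literal, ~9.11e-28 g
    print(speed_of_light_cgs * km_per_cm)    # old c literal, ~2.9979e5 km/s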

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -45,7 +45,10 @@
 from yt.utilities.performance_counters import \
     yt_counters, time_function
 from yt.utilities.math_utils import periodic_dist, get_rotation_matrix
-from yt.utilities.physical_constants import rho_crit_now, mass_sun_cgs
+from yt.utilities.physical_constants import \
+    rho_crit_now, \
+    mass_sun_cgs, \
+    TINY
 
 from .hop.EnzoHop import RunHOP
 from .fof.EnzoFOF import RunFOF
@@ -60,8 +63,6 @@
     ParallelAnalysisInterface, \
     parallel_blocking_call
 
-TINY = 1.e-40
-
 class Halo(object):
     """
     A data source that returns particle information about the members of a
@@ -1428,7 +1429,7 @@
         fglob = path.join(basedir, 'halos_%d.*.bin' % n)
         files = glob.glob(fglob)
         halos = self._get_halos_binary(files)
-        #Jc = 1.98892e33/pf['mpchcm']*1e5
+        #Jc = mass_sun_cgs/ pf['mpchcm'] * 1e5
         Jc = 1.0
         length = 1.0 / pf['mpchcm']
         conv = dict(pos = np.array([length, length, length,

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/halo_mass_function/halo_mass_function.py
--- a/yt/analysis_modules/halo_mass_function/halo_mass_function.py
+++ b/yt/analysis_modules/halo_mass_function/halo_mass_function.py
@@ -31,6 +31,11 @@
     ParallelDummy, \
     ParallelAnalysisInterface, \
     parallel_blocking_call
+from yt.utilities.physical_constants import \
+    cm_per_mpc, \
+    mass_sun_cgs, \
+    rho_crit_now
+
 
 class HaloMassFcn(ParallelAnalysisInterface):
     """
@@ -259,7 +264,9 @@
         sigma8_unnorm = math.sqrt(self.sigma_squared_of_R(R));
         sigma_normalization = self.sigma8input / sigma8_unnorm;
 
-        rho0 = self.omega_matter0 * 2.78e+11; # in units of h^2 Msolar/Mpc^3
+        # rho0 in units of h^2 Msolar/Mpc^3
+        rho0 = self.omega_matter0 * \
+                rho_crit_now * cm_per_mpc**3 / mass_sun_cgs
 
         # spacing in mass of our sigma calculation
         dm = (float(self.log_mass_max) - self.log_mass_min)/self.num_sigma_bins;
@@ -294,7 +301,9 @@
     def dndm(self):
         
         # constants - set these before calling any functions!
-        rho0 = self.omega_matter0 * 2.78e+11; # in units of h^2 Msolar/Mpc^3
+        # rho0 in units of h^2 Msolar/Mpc^3
+        rho0 = self.omega_matter0 * \
+                rho_crit_now * cm_per_mpc**3 / mass_sun_cgs
         self.delta_c0 = 1.69;  # critical density for turnaround (Press-Schechter)
         
         nofmz_cum = 0.0;  # keep track of cumulative number density
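
The replaced literal is recoverable from the constants:
rho_crit_now * cm_per_mpc**3 / mass_sun_cgs is roughly 2.78e11, the old
hard-coded h^2 Msolar/Mpc^3 value.  With approximate numbers:

    rho_crit_now = 1.8788e-29        # g / cm^3 at h = 1, approximate
    cm_per_mpc = 3.0857e24           # cm, approximate
    mass_sun_cgs = 1.989e33          # g, approximate

    print(rho_crit_now * cm_per_mpc**3 / mass_sun_cgs)   # ~2.78e11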

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
@@ -52,6 +52,9 @@
     parallel_blocking_call, \
     parallel_root_only, \
     parallel_objects
+from yt.utilities.physical_constants import \
+    mass_sun_cgs, \
+    rho_crit_now
 from yt.visualization.fixed_resolution import \
     FixedResolutionBuffer
 from yt.visualization.image_writer import write_image
@@ -951,12 +954,11 @@
         if 'ActualOverdensity' in profile.keys():
             return
 
-        rho_crit_now = 1.8788e-29 * self.pf.hubble_constant**2 # g cm^-3
-        Msun2g = 1.989e33
-        rho_crit = rho_crit_now * ((1.0 + self.pf.current_redshift)**3.0)
+        rhocritnow = rho_crit_now * self.pf.hubble_constant**2 # g cm^-3
+        rho_crit = rhocritnow * ((1.0 + self.pf.current_redshift)**3.0)
         if not self.use_critical_density: rho_crit *= self.pf.omega_matter
 
-        profile['ActualOverdensity'] = (Msun2g * profile['TotalMassMsun']) / \
+        profile['ActualOverdensity'] = (mass_sun_cgs * profile['TotalMassMsun']) / \
             profile['CellVolume'] / rho_crit
 
     def _check_for_needed_profile_fields(self):

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
--- a/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
+++ b/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
@@ -37,6 +37,9 @@
 from yt.utilities.exceptions import YTException
 from yt.utilities.linear_interpolators import \
     BilinearFieldInterpolator
+from yt.utilities.physical_constants import \
+    erg_per_eV, \
+    keV_per_Hz
 
 xray_data_version = 1
 
@@ -101,7 +104,7 @@
                   np.power(10, np.concatenate([self.log_E[:-1] - 0.5 * E_diff,
                                                [self.log_E[-1] - 0.5 * E_diff[-1],
                                                 self.log_E[-1] + 0.5 * E_diff[-1]]]))
-        self.dnu = 2.41799e17 * np.diff(self.E_bins)
+        self.dnu = keV_per_Hz * np.diff(self.E_bins)
 
     def _get_interpolator(self, data, e_min, e_max):
         r"""Create an interpolator for total emissivity in a 
@@ -311,7 +314,7 @@
     """
 
     my_si = EmissivityIntegrator(filename=filename)
-    energy_erg = np.power(10, my_si.log_E) * 1.60217646e-9
+    energy_erg = np.power(10, my_si.log_E) * erg_per_eV
 
     em_0 = my_si._get_interpolator((my_si.emissivity_primordial[..., :] / energy_erg),
                                    e_min, e_max)

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/star_analysis/sfr_spectrum.py
--- a/yt/analysis_modules/star_analysis/sfr_spectrum.py
+++ b/yt/analysis_modules/star_analysis/sfr_spectrum.py
@@ -31,9 +31,13 @@
 from yt.utilities.cosmology import \
     Cosmology, \
     EnzoCosmology
+from yt.utilities.physical_constants import \
+    sec_per_year, \
+    speed_of_light_cgs
 
-YEAR = 3.155693e7 # sec / year
-LIGHT = 2.997925e10 # cm / s
+
+YEAR = sec_per_year # sec / year
+LIGHT = speed_of_light_cgs # cm / s
 
 class StarFormationRate(object):
     r"""Calculates the star formation rate for a given population of

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/analysis_modules/sunrise_export/sunrise_exporter.py
--- a/yt/analysis_modules/sunrise_export/sunrise_exporter.py
+++ b/yt/analysis_modules/sunrise_export/sunrise_exporter.py
@@ -35,6 +35,9 @@
 import numpy as np
 from yt.funcs import *
 import yt.utilities.lib as amr_utils
+from yt.utilities.physical_constants import \
+    kpc_per_cm, \
+    sec_per_year
 from yt.data_objects.universal_fields import add_field
 from yt.mods import *
 
@@ -524,7 +527,7 @@
                         for ax in 'xyz']).transpose()
         # Velocity is cm/s, we want it to be kpc/yr
         #vel *= (pf["kpc"]/pf["cm"]) / (365*24*3600.)
-        vel *= 1.02268944e-14 
+        vel *= kpc_per_cm * sec_per_year
     if initial_mass is None:
         #in solar masses
         initial_mass = dd["particle_mass_initial"][idx]*pf['Msun']

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -36,6 +36,11 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, parallel_objects
 from yt.utilities.lib import Octree
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs, \
+    mass_sun_cgs, \
+    HUGE
+
 
 __CUDA_BLOCK_SIZE = 256
 
@@ -266,8 +271,7 @@
     M = m_enc.sum()
     J = np.sqrt(((j_mag.sum(axis=0))**2.0).sum())/W
     E = np.sqrt(e_term_pre.sum()/W)
-    G = 6.67e-8 # cm^3 g^-1 s^-2
-    spin = J * E / (M*1.989e33*G)
+    spin = J * E / (M * mass_sun_cgs * gravitational_constant_cgs)
     return spin
 add_quantity("BaryonSpinParameter", function=_BaryonSpinParameter,
              combine_function=_combBaryonSpinParameter, n_ret=4)
@@ -351,7 +355,7 @@
     # Gravitational potential energy
     # We only divide once here because we have velocity in cgs, but radius is
     # in code.
-    G = 6.67e-8 / data.convert("cm") # cm^3 g^-1 s^-2
+    G = gravitational_constant_cgs / data.convert("cm") # cm^3 g^-1 s^-2
     # Check for periodicity of the clump.
     two_root = 2. * np.array(data.pf.domain_width) / np.array(data.pf.domain_dimensions)
     domain_period = data.pf.domain_right_edge - data.pf.domain_left_edge
@@ -573,15 +577,15 @@
     mins, maxs = [], []
     for field in fields:
         if data[field].size < 1:
-            mins.append(1e90)
-            maxs.append(-1e90)
+            mins.append(HUGE)
+            maxs.append(-HUGE)
             continue
         if filter is None:
             if non_zero:
                 nz_filter = data[field]>0.0
                 if not nz_filter.any():
-                    mins.append(1e90)
-                    maxs.append(-1e90)
+                    mins.append(HUGE)
+                    maxs.append(-HUGE)
                     continue
             else:
                 nz_filter = None
@@ -596,8 +600,8 @@
                 mins.append(np.nanmin(data[field][nz_filter]))
                 maxs.append(np.nanmax(data[field][nz_filter]))
             else:
-                mins.append(1e90)
-                maxs.append(-1e90)
+                mins.append(HUGE)
+                maxs.append(-HUGE)
     return len(fields), mins, maxs
 def _combExtrema(data, n_fields, mins, maxs):
     mins, maxs = np.atleast_2d(mins, maxs)
@@ -629,7 +633,7 @@
     This function returns the location of the maximum of a set
     of fields.
     """
-    ma, maxi, mx, my, mz = -1e90, -1, -1, -1, -1
+    ma, maxi, mx, my, mz = -HUGE, -1, -1, -1, -1
     if data[field].size > 0:
         maxi = np.argmax(data[field])
         ma = data[field][maxi]
@@ -647,7 +651,7 @@
     This function returns the location of the minimum of a set
     of fields.
     """
-    ma, mini, mx, my, mz = 1e90, -1, -1, -1, -1
+    ma, mini, mx, my, mz = HUGE, -1, -1, -1, -1
     if data[field].size > 0:
         mini = np.argmin(data[field])
         ma = data[field][mini]

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -48,15 +48,16 @@
     NeedsParameter
 
 from yt.utilities.physical_constants import \
-     mh, \
-     me, \
-     sigma_thompson, \
-     clight, \
-     kboltz, \
-     G, \
-     rho_crit_now, \
-     speed_of_light_cgs, \
-     km_per_cm, keV_per_K
+    mass_sun_cgs, \
+    mh, \
+    me, \
+    sigma_thompson, \
+    clight, \
+    kboltz, \
+    G, \
+    rho_crit_now, \
+    speed_of_light_cgs, \
+    km_per_cm, keV_per_K
 
 from yt.utilities.math_utils import \
     get_sph_r_component, \
@@ -379,7 +380,7 @@
 def _CellMass(field, data):
     return data["Density"] * data["CellVolume"]
 def _convertCellMassMsun(data):
-    return 5.027854e-34 # g^-1
+    return 1.0 / mass_sun_cgs # g^-1
 add_field("CellMass", function=_CellMass, units=r"\rm{g}")
 add_field("CellMassMsun", units=r"M_{\odot}",
           function=_CellMass,
@@ -458,6 +459,7 @@
     # lens to source
     DLS = data.pf.parameters['cosmology_calculator'].AngularDiameterDistance(
         data.pf.current_redshift, data.pf.parameters['lensing_source_redshift'])
+    # TODO: convert 1.5e14 to constants
     return (((DL * DLS) / DS) * (1.5e14 * data.pf.omega_matter * 
                                 (data.pf.hubble_constant / speed_of_light_cgs)**2 *
                                 (1 + data.pf.current_redshift)))
@@ -520,7 +522,7 @@
     return ((data["Density"].astype('float64')**2.0) \
             *data["Temperature"]**0.5)
 def _convertXRayEmissivity(data):
-    return 2.168e60
+    return 2.168e60 #TODO: convert me to constants
 add_field("XRayEmissivity", function=_XRayEmissivity,
           convert_function=_convertXRayEmissivity,
           projection_conversion="1")
@@ -927,8 +929,8 @@
 add_field("MeanMolecularWeight",function=_MeanMolecularWeight,units=r"")
 
 def _JeansMassMsun(field,data):
-    MJ_constant = (((5*kboltz)/(G*mh))**(1.5)) * \
-    (3/(4*3.1415926535897931))**(0.5) / 1.989e33
+    MJ_constant = (((5.0 * kboltz) / (G * mh)) ** (1.5)) * \
+    (3.0 / (4.0 * np.pi)) ** (0.5) / mass_sun_cgs
 
     return (MJ_constant *
             ((data["Temperature"]/data["MeanMolecularWeight"])**(1.5)) *

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -403,15 +403,21 @@
         self.time_units['1'] = 1
         self.units['1'] = 1.0
         self.units['unitary'] = 1.0 / (self.domain_right_edge - self.domain_left_edge).max()
-        self.conversion_factors["Density"] = self.parameters['unit_d']
+        rho_u = self.parameters['unit_d']
+        self.conversion_factors["Density"] = rho_u
         vel_u = self.parameters['unit_l'] / self.parameters['unit_t']
+        self.conversion_factors["Pressure"] = rho_u*vel_u**2
         self.conversion_factors["x-velocity"] = vel_u
         self.conversion_factors["y-velocity"] = vel_u
         self.conversion_factors["z-velocity"] = vel_u
+        # Necessary to get the length units in, which are needed for Mass
+        self.conversion_factors['mass'] = rho_u * self.parameters['unit_l']**3
 
     def _setup_nounits_units(self):
+        # Note that unit_l *already* converts to proper!
+        unit_l = self.parameters['unit_l']
         for unit in mpc_conversion.keys():
-            self.units[unit] = self.parameters['unit_l'] * mpc_conversion[unit] / mpc_conversion["cm"]
+            self.units[unit] = unit_l * mpc_conversion[unit] / mpc_conversion["cm"]
         for unit in sec_conversion.keys():
             self.time_units[unit] = self.parameters['unit_t'] / sec_conversion[unit]
 
@@ -460,19 +466,18 @@
         self.domain_right_edge = np.ones(3, dtype='float64')
         # This is likely not true, but I am not sure how to otherwise
         # distinguish them.
-        mylog.warning("No current mechanism of distinguishing cosmological simulations in RAMSES!")
+        mylog.warning("RAMSES frontend assumes all simulations are cosmological!")
         self.cosmological_simulation = 1
         self.periodicity = (True, True, True)
         self.current_redshift = (1.0 / rheader["aexp"]) - 1.0
         self.omega_lambda = rheader["omega_l"]
         self.omega_matter = rheader["omega_m"]
-        self.hubble_constant = rheader["H0"]
+        self.hubble_constant = rheader["H0"] / 100.0 # This is H100
         self.max_level = rheader['levelmax'] - rheader['levelmin']
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
         if not os.path.basename(args[0]).startswith("info_"): return False
         fn = args[0].replace("info_", "amr_").replace(".txt", ".out00001")
-        print fn
         return os.path.exists(fn)
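
Two conventions meet in this diff: RAMSES info files report H0 in km/s/Mpc,
while yt stores hubble_constant as the dimensionless h = H0 / (100 km/s/Mpc),
hence the division by 100. A one-line sketch with a hypothetical header value:

    H0 = 70.4                      # hypothetical rheader["H0"], in km/s/Mpc
    hubble_constant = H0 / 100.0   # yt's dimensionless h, here 0.704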
 

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -34,6 +34,9 @@
     ValidateSpatial, \
     ValidateGridType
 import yt.data_objects.universal_fields
+from yt.utilities.physical_constants import \
+    boltzmann_constant_cgs, \
+    mass_hydrogen_cgs
 
 RAMSESFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo, "RFI")
 add_field = RAMSESFieldInfo.add_field
@@ -73,6 +76,11 @@
 KnownRAMSESFields["Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 KnownRAMSESFields["Density"]._convert_function=_convertDensity
 
+def _convertPressure(data):
+    return data.convert("Pressure")
+KnownRAMSESFields["Pressure"]._units=r"\rm{dyne}/\rm{cm}^{2}/\mu"
+KnownRAMSESFields["Pressure"]._convert_function=_convertPressure
+
 def _convertVelocity(data):
     return data.convert("x-velocity")
 for ax in ['x','y','z']:
@@ -101,36 +109,27 @@
                   validators = [ValidateDataField(f)],
                   particle_type = True)
 
-def _ParticleMass(field, data):
-    particles = data["particle_mass"].astype('float64') * \
-                just_one(data["CellVolumeCode"].ravel())
-    # Note that we mandate grid-type here, so this is okay
-    return particles
+for ax in 'xyz':
+    KnownRAMSESFields["particle_velocity_%s" % ax]._convert_function = \
+        _convertVelocity
 
 def _convertParticleMass(data):
-    return data.convert("Density")*(data.convert("cm")**3.0)
-def _IOLevelParticleMass(grid):
-    dd = dict(particle_mass = np.ones(1), CellVolumeCode=grid["CellVolumeCode"])
-    cf = (_ParticleMass(None, dd) * _convertParticleMass(grid))[0]
-    return cf
+    return data.convert("mass")
+
+KnownRAMSESFields["particle_mass"]._convert_function = \
+        _convertParticleMass
+KnownRAMSESFields["particle_mass"]._units = r"\mathrm{g}"
+
 def _convertParticleMassMsun(data):
-    return data.convert("Density")*((data.convert("cm")**3.0)/1.989e33)
-def _IOLevelParticleMassMsun(grid):
-    dd = dict(particle_mass = np.ones(1), CellVolumeCode=grid["CellVolumeCode"])
-    cf = (_ParticleMass(None, dd) * _convertParticleMassMsun(grid))[0]
-    return cf
-add_field("ParticleMass",
-          function=_ParticleMass, validators=[ValidateSpatial(0)],
-          particle_type=True, convert_function=_convertParticleMass,
-          particle_convert_function=_IOLevelParticleMass)
+    return 1.0/1.989e33
+add_field("ParticleMass", function=TranslationFunc("particle_mass"), 
+          particle_type=True)
 add_field("ParticleMassMsun",
-          function=_ParticleMass, validators=[ValidateSpatial(0)],
-          particle_type=True, convert_function=_convertParticleMassMsun,
-          particle_convert_function=_IOLevelParticleMassMsun)
+          function=TranslationFunc("particle_mass"), 
+          particle_type=True, convert_function=_convertParticleMassMsun)
 
-
-def _ParticleMass(field, data):
-    particles = data["particle_mass"].astype('float64') * \
-                just_one(data["CellVolumeCode"].ravel())
-    # Note that we mandate grid-type here, so this is okay
-    return particles
+def _Temperature(field, data):
+    rv = data["Pressure"]/data["Density"]
+    rv *= mass_hydrogen_cgs/boltzmann_constant_cgs
+    return rv
+add_field("Temperature", function=_Temperature, units=r"\rm{K}")

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/geometry/object_finding_mixin.py
--- a/yt/geometry/object_finding_mixin.py
+++ b/yt/geometry/object_finding_mixin.py
@@ -32,6 +32,8 @@
 from yt.utilities.lib import \
     MatchPointsToGrids, \
     GridTree
+from yt.utilities.physical_constants import \
+    HUGE
 
 class ObjectFindingMixin(object) :
 
@@ -83,7 +85,7 @@
         Returns (value, center) of location of minimum for a given field
         """
         gI = np.where(self.grid_levels >= 0) # Slow but pedantic
-        minVal = 1e100
+        minVal = HUGE
         for grid in self.grids[gI[0]]:
             mylog.debug("Checking %s (level %s)", grid.id, grid.Level)
             val, coord = grid.find_min(field)

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -25,10 +25,16 @@
 """
 
 import numpy as np
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs, \
+    km_per_cm, \
+    pc_per_mpc, \
+    km_per_pc, \
+    speed_of_light_cgs
 
-c_kms = 2.99792458e5 # c in km/s
-G = 6.67259e-8 # cgs
-kmPerMpc = 3.08567758e19
+c_kms = speed_of_light_cgs * km_per_cm # c in km/s
+G = gravitational_constant_cgs
+kmPerMpc = km_per_pc * pc_per_mpc
 
 class Cosmology(object):
     def __init__(self, HubbleConstantNow = 71.0,
@@ -162,6 +168,7 @@
         """
         # Changed 2.52e17 to 2.52e19 because H_0 is in km/s/Mpc, 
         # instead of 100 km/s/Mpc.
+        # TODO: Move me to physical_units
         return 2.52e19 / np.sqrt(self.OmegaMatterNow) / \
             self.HubbleConstantNow / np.power(1 + self.InitialRedshift,1.5)
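
The replacements keep the same numeric values but derive them from
physical_constants; for instance, with the assumed cgs speed of light, c_kms
comes out identical to the old literal:

    speed_of_light_cgs = 2.99792458e10       # cm/s, assumed cgs value
    km_per_cm = 1.0e-5
    c_kms = speed_of_light_cgs * km_per_cm   # 2.99792458e5 km/s, as before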
 

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/utilities/physical_constants.py
--- a/yt/utilities/physical_constants.py
+++ b/yt/utilities/physical_constants.py
@@ -85,3 +85,7 @@
 kboltz = boltzmann_constant_cgs
 hcgs = planck_constant_cgs
 sigma_thompson = cross_section_thompson_cgs
+
+# Miscellaneous
+HUGE = 1.0e90
+TINY = 1.0e-40
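
HUGE and -HUGE are the sentinel values used by the extrema code earlier in this
changeset: an empty chunk contributes (HUGE, -HUGE) as its (min, max), so any
real extremum from another chunk wins the subsequent reduction. A minimal
sketch of the pattern:

    HUGE = 1.0e90
    mins, maxs = [], []
    for chunk in ([], [3.0, 7.0]):     # hypothetical per-chunk field values
        if len(chunk) > 0:
            mins.append(min(chunk))
            maxs.append(max(chunk))
        else:
            mins.append(HUGE)
            maxs.append(-HUGE)
    extrema = (min(mins), max(maxs))   # (3.0, 7.0); the sentinels never win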

diff -r b8521d5e0e89939669e98c352ddc933f9d40eb89 -r 50851ef466000f8b511d6e3295857f4ddb90f612 yt/visualization/volume_rendering/CUDARayCast.py
--- a/yt/visualization/volume_rendering/CUDARayCast.py
+++ b/yt/visualization/volume_rendering/CUDARayCast.py
@@ -29,6 +29,8 @@
 import yt.extensions.HierarchySubset as hs
 import numpy as np
 import h5py, time
+from yt.utilities.physical_constants import \
+    mass_hydrogen_cgs
 
 import matplotlib;matplotlib.use("Agg");import pylab
 
@@ -62,7 +64,7 @@
 
     print "Constructing transfer function."
     if "Data" in fn:
-        mh = np.log10(1.67e-24)
+        mh = np.log10(mass_hydrogen_cgs)
         tf = ColorTransferFunction((7.5+mh, 14.0+mh))
         tf.add_gaussian( 8.25+mh, 0.002, [0.2, 0.2, 0.4, 0.1])
         tf.add_gaussian( 9.75+mh, 0.002, [0.0, 0.0, 0.3, 0.1])


https://bitbucket.org/yt_analysis/yt/commits/88ca63e13e85/
Changeset:   88ca63e13e85
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-24 19:40:58
Summary:     First pass at Tipsy units module.
Affected #:  1 file

diff -r 50851ef466000f8b511d6e3295857f4ddb90f612 -r 88ca63e13e85702d078927e6dcb8549d1cf23d94 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -44,6 +44,10 @@
     OctreeSubset
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
+from yt.utilities.physical_constants import \
+    gravitational_constant_cgs, \
+    km_per_pc
+import yt.utilities.physical_constants as pcons
 from .fields import \
     OWLSFieldInfo, \
     KnownOWLSFields, \
@@ -352,6 +356,7 @@
 
     def __init__(self, filename, data_style="tipsy",
                  root_dimensions = 64, endian = ">",
+                 box_size = None, hubble_constant = None,
                  field_dtypes = None,
                  domain_left_edge = None,
                  domain_right_edge = None):
@@ -428,3 +433,26 @@
     def _is_valid(self, *args, **kwargs):
         # We do not allow load() of these files.
         return False
+
+    @classmethod
+    def calculate_tipsy_units(self, hubble_constant, box_size):
+        # box_size in cm, or else in a unit we can convert to cm
+        # hubble_constant is assumed to be in units scaled to 100 km / s / Mpc
+        hubble_hertz = hubble_constant / (km_per_pc * 1e4)
+        if isinstance(box_size, types.TupleType):
+            if not isinstance(box_size[1], types.StringTypes):
+                raise RuntimeError
+            conversion = getattr(pcons, "cm_per_%s" % box_size[1], None)
+            if conversion is None:
+                raise RuntimeError
+            box_size = box_size[0] * conversion
+        print hubble_hertz, box_size
+        units = {}
+        units['length'] = box_size
+        units['density'] = 3.0 * hubble_hertz**2 / \
+                          (8.0 * np.pi * gravitational_constant_cgs)
+        # density is in g/cm^3
+        units['mass'] = units['density'] * units['length']**3.0
+        units['time'] = 1.0 / np.sqrt(gravitational_constant_cgs * units['density'])
+        units['velocity'] = units['length'] / units['time']
+        return units
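
The unit system here pins the density unit to the cosmological critical
density, rho = 3 H^2 / (8 pi G), from which the mass, time (1/sqrt(G rho)), and
velocity units follow. A sketch of what calculate_tipsy_units computes, with a
hypothetical h and box size and assumed cgs constants:

    import numpy as np
    gravitational_constant_cgs = 6.67428e-8   # assumed cgs value
    km_per_pc = 3.08567758e13
    hubble_constant = 0.7                     # in units of 100 km/s/Mpc
    box_size = 25.0 * 3.08567758e24           # hypothetical 25 Mpc box, in cm
    hubble_hertz = hubble_constant / (km_per_pc * 1e4)  # 100 km/s/Mpc -> 1/s
    density = 3.0 * hubble_hertz ** 2 / \
              (8.0 * np.pi * gravitational_constant_cgs)  # ~9.2e-30 g/cm^3
    units = dict(length=box_size,
                 density=density,
                 mass=density * box_size ** 3,
                 time=1.0 / np.sqrt(gravitational_constant_cgs * density))
    units['velocity'] = units['length'] / units['time']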


https://bitbucket.org/yt_analysis/yt/commits/8e51019f682e/
Changeset:   8e51019f682e
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-24 19:38:18
Summary:     Fixing the incorrect conversion factor from km to parsec.

physical_constants.py edited online with Bitbucket
Affected #:  1 file

diff -r 88ca63e13e85702d078927e6dcb8549d1cf23d94 -r 8e51019f682e67f667788d9a98764ae19bb5d58e yt/utilities/physical_constants.py
--- a/yt/utilities/physical_constants.py
+++ b/yt/utilities/physical_constants.py
@@ -42,7 +42,7 @@
 mpc_per_miles = 5.21552871e-20
 mpc_per_cm    = 3.24077929e-25
 kpc_per_cm    = mpc_per_cm / mpc_per_kpc
-km_per_pc     = 1.3806504e13
+km_per_pc     = 3.08567758e13
 km_per_m      = 1e-3
 km_per_cm     = 1e-5
 pc_per_cm     = 3.24077929e-19
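
The corrected value is easy to cross-check against pc_per_cm, defined a few
lines below in the same file (the replaced 1.3806504e13 appears to carry the
digits of Boltzmann's constant, which looks like a copy-paste slip), and it
also matters for the kmPerMpc definition that cosmology.py now derives from
km_per_pc:

    pc_per_cm = 3.24077929e-19
    km_per_pc = (1.0 / pc_per_cm) / 1.0e5   # cm/pc -> km/pc, ~3.0857e13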


https://bitbucket.org/yt_analysis/yt/commits/1c607a2db728/
Changeset:   1c607a2db728
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-05-24 21:03:16
Summary:     Merged in MatthewTurk/yt-3.0 (pull request #32)

Implement initial spatial chunking for octrees
Affected #:  25 files

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/api.py
--- a/yt/data_objects/api.py
+++ b/yt/data_objects/api.py
@@ -31,6 +31,9 @@
 from grid_patch import \
     AMRGridPatch
 
+from octree_subset import \
+    OctreeSubset
+
 from static_output import \
     StaticOutput
 

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -249,7 +249,13 @@
                 for i,chunk in enumerate(self.chunks(field, "spatial", ngz = 0)):
                     mask = self._current_chunk.objs[0].select(self.selector)
                     if mask is None: continue
-                    data = self[field][mask]
+                    data = self[field]
+                    if len(data.shape) == 4:
+                        # This is how we keep it consistent between oct ordering
+                        # and grid ordering.
+                        data = data.T[mask.T]
+                    else:
+                        data = data[mask]
                     rv[ind:ind+data.size] = data
                     ind += data.size
         else:
@@ -513,6 +519,11 @@
                         if f not in fields_to_generate:
                             fields_to_generate.append(f)
 
+    def deposit(self, positions, fields, op):
+        assert(self._current_chunk.chunk_type == "spatial")
+        fields = ensure_list(fields)
+        self.hierarchy._deposit_particle_fields(self, positions, fields, op)
+
     @contextmanager
     def _field_lock(self):
         self._locked = True
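
The 4D branch above relies on a numpy identity: for a full transpose,
data.T[mask.T] selects the same elements as flattening both arrays in Fortran
order, which is how the oct-ordered blocks are laid out. A minimal sketch with
hypothetical shapes (2x2x2 zones per oct, 3 octs):

    import numpy as np
    data = np.arange(24.0).reshape((2, 2, 2, 3), order="F")
    mask = data % 2 == 0
    a = data.T[mask.T]
    b = data.ravel(order="F")[mask.ravel(order="F")]
    assert np.array_equal(a, b)   # selection walks zones in oct (Fortran) order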

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -65,7 +65,10 @@
         e = FieldDetector(flat = True)
         e.NumberOfParticles = 1
         fields = e.requested
-        self.func(e, *args, **kwargs)
+        try:
+            self.func(e, *args, **kwargs)
+        except:
+            mylog.error("Could not preload for quantity %s, IO speed may suffer", self.__name__)
         retvals = [ [] for i in range(self.n_ret)]
         chunks = self._data_source.chunks([], chunking_style="io")
         for ds in parallel_objects(chunks, -1):

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -233,6 +233,7 @@
         if pf is None:
             # required attrs
             pf = fake_parameter_file(lambda: 1)
+            pf["Massarr"] = np.ones(6)
             pf.current_redshift = pf.omega_lambda = pf.omega_matter = \
                 pf.cosmological_simulation = 0.0
             pf.hubble_constant = 0.7
@@ -286,6 +287,9 @@
         self.requested.append(item)
         return defaultdict.__missing__(self, item)
 
+    def deposit(self, *args, **kwargs):
+        return np.random.random((self.nd, self.nd, self.nd))
+
     def _read_data(self, field_name):
         self.requested.append(field_name)
         FI = getattr(self.pf, "field_info", FieldInfo)

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/grid_patch.py
--- a/yt/data_objects/grid_patch.py
+++ b/yt/data_objects/grid_patch.py
@@ -44,6 +44,7 @@
     NeedsProperty, \
     NeedsParameter
 from yt.geometry.selection_routines import convert_mask_to_indices
+import yt.geometry.particle_deposit as particle_deposit
 
 class AMRGridPatch(YTSelectionContainer):
     _spatial = True
@@ -474,6 +475,17 @@
         dt, t = dobj.selector.get_dt(self)
         return dt, t
 
+    def deposit(self, positions, fields = None, method = None):
+        # Here we perform our particle deposition.
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        op = cls(self.ActiveDimensions.prod()) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_grid(self, positions, fields)
+        vals = op.finalize()
+        return vals.reshape(self.ActiveDimensions, order="F")
+
     def select(self, selector):
         if id(selector) == self._last_selector_id:
             return self._last_mask
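
The deposition entry point resolves a class named deposit_<method> from
yt.geometry.particle_deposit and drives it through an initialize / process /
finalize protocol, reshaping the flat result onto the grid in Fortran order. A
stripped-down sketch of the same protocol with a stand-in operator
(deposit_count here is hypothetical, not a class the module is known to
provide):

    import numpy as np

    class deposit_count(object):
        # Stand-in obeying the initialize/process/finalize protocol.
        def __init__(self, nvals):
            self.nvals = nvals
        def initialize(self):
            self.vals = np.zeros(self.nvals, dtype="float64")
        def process_grid(self, grid, positions, fields):
            self.vals[0] += len(positions)   # toy: count everything in zone 0
        def finalize(self):
            return self.vals

    dims = np.array([2, 2, 2])
    op = deposit_count(dims.prod())
    op.initialize()
    op.process_grid(None, np.zeros((5, 3)), None)
    vals = op.finalize().reshape(dims, order="F")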

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/octree_subset.py
--- /dev/null
+++ b/yt/data_objects/octree_subset.py
@@ -0,0 +1,170 @@
+"""
+Subsets of octrees
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+import numpy as np
+
+from yt.data_objects.data_containers import \
+    YTFieldData, \
+    YTDataContainer, \
+    YTSelectionContainer
+from .field_info_container import \
+    NeedsGridType, \
+    NeedsOriginalGrid, \
+    NeedsDataField, \
+    NeedsProperty, \
+    NeedsParameter
+import yt.geometry.particle_deposit as particle_deposit
+
+class OctreeSubset(YTSelectionContainer):
+    _spatial = True
+    _num_ghost_zones = 0
+    _num_zones = 2
+    _type_name = 'octree_subset'
+    _skip_add = True
+    _con_args = ('domain', 'mask', 'cell_count')
+    _container_fields = ("dx", "dy", "dz")
+
+    def __init__(self, domain, mask, cell_count):
+        self.field_data = YTFieldData()
+        self.field_parameters = {}
+        self.mask = mask
+        self.domain = domain
+        self.pf = domain.pf
+        self.hierarchy = self.pf.hierarchy
+        self.oct_handler = domain.pf.h.oct_handler
+        self.cell_count = cell_count
+        level_counts = self.oct_handler.count_levels(
+            self.domain.pf.max_level, self.domain.domain_id, mask)
+        assert(level_counts.sum() == cell_count)
+        level_counts[1:] = level_counts[:-1]
+        level_counts[0] = 0
+        self.level_counts = np.add.accumulate(level_counts)
+        self._last_mask = None
+        self._last_selector_id = None
+        self._current_particle_type = 'all'
+        self._current_fluid_type = self.pf.default_fluid_type
+
+    def _generate_container_field(self, field):
+        if self._current_chunk is None:
+            self.hierarchy._identify_base_chunk(self)
+        if field == "dx":
+            return self._current_chunk.fwidth[:,0]
+        elif field == "dy":
+            return self._current_chunk.fwidth[:,1]
+        elif field == "dz":
+            return self._current_chunk.fwidth[:,2]
+
+    def select_icoords(self, dobj):
+        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fcoords(self, dobj):
+        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
+                                        self.cell_count,
+                                        self.level_counts.copy())
+
+    def select_fwidth(self, dobj):
+        # Recall domain_dimensions is the number of cells, not octs
+        base_dx = (self.domain.pf.domain_width /
+                   self.domain.pf.domain_dimensions)
+        widths = np.empty((self.cell_count, 3), dtype="float64")
+        dds = (2**self.select_ires(dobj))
+        for i in range(3):
+            widths[:,i] = base_dx[i] / dds
+        return widths
+
+    def select_ires(self, dobj):
+        return self.oct_handler.ires(self.domain.domain_id, self.mask,
+                                     self.cell_count,
+                                     self.level_counts.copy())
+
+    def __getitem__(self, key):
+        tr = super(OctreeSubset, self).__getitem__(key)
+        try:
+            fields = self._determine_fields(key)
+        except YTFieldTypeNotFound:
+            return tr
+        finfo = self.pf._get_field_info(*fields[0])
+        if not finfo.particle_type:
+            # We may need to reshape the field, if it is being queried from
+            # field_data.  If it's already cached, it just passes through.
+            if len(tr.shape) < 4:
+                tr = self._reshape_vals(tr)
+            return tr
+        return tr
+
+    def _reshape_vals(self, arr):
+        nz = self._num_zones + 2*self._num_ghost_zones
+        n_oct = arr.shape[0] / (nz**3.0)
+        arr = arr.reshape((nz, nz, nz, n_oct), order="F")
+        return arr
+
+    _domain_ind = None
+
+    @property
+    def domain_ind(self):
+        if self._domain_ind is None:
+            di = self.oct_handler.domain_ind(self.mask, self.domain.domain_id)
+            self._domain_ind = di
+        return self._domain_ind
+
+    def deposit(self, positions, fields = None, method = None):
+        # Here we perform our particle deposition.
+        cls = getattr(particle_deposit, "deposit_%s" % method, None)
+        if cls is None:
+            raise YTParticleDepositionNotImplemented(method)
+        nvals = (self.domain_ind >= 0).sum() * 8
+        op = cls(nvals) # We allocate number of zones, not number of octs
+        op.initialize()
+        op.process_octree(self.oct_handler, self.domain_ind, positions, fields,
+                          self.domain.domain_id)
+        vals = op.finalize()
+        return self._reshape_vals(vals)
+
+    def select(self, selector):
+        if id(selector) == self._last_selector_id:
+            return self._last_mask
+        self._last_mask = self.oct_handler.domain_mask(
+                self.mask, self.domain.domain_id)
+        if self._last_mask.sum() == 0: return None
+        self._last_selector_id = id(selector)
+        return self._last_mask
+
+    def count(self, selector):
+        if id(selector) == self._last_selector_id:
+            if self._last_mask is None: return 0
+            return self._last_mask.sum()
+        self.select(selector)
+        return self.count(selector)
+
+    def count_particles(self, selector, x, y, z):
+        # We don't cache the selector results
+        count = selector.count_points(x,y,z)
+        return count
+
+    def select_particles(self, selector, x, y, z):
+        mask = selector.select_points(x,y,z)
+        return mask
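
A note on _reshape_vals above: oct data arrives flat with nz**3 zones per oct,
and the Fortran-order reshape makes the first three axes the intra-oct zone
indices, so arr[:, :, :, i] is the full zone block of oct i. A minimal sketch
with hypothetical sizes:

    import numpy as np
    nz, n_oct = 2, 4
    arr = np.arange(nz ** 3 * n_oct, dtype="float64")
    arr = arr.reshape((nz, nz, nz, n_oct), order="F")
    assert np.array_equal(arr[:, :, :, 0].ravel(order="F"), np.arange(8.0))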

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -59,6 +59,7 @@
     particle_types = ("all",)
     geometry = "cartesian"
     coordinates = None
+    max_level = 99
 
     class __metaclass__(type):
         def __init__(cls, name, b, d):

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -97,7 +97,7 @@
           display_field = False)
 
 def _Ones(field, data):
-    return np.ones(data.shape, dtype='float64')
+    return np.ones(data.ires.size, dtype='float64')
 add_field("Ones", function=_Ones,
           projection_conversion="unitary",
           display_field = False)

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.geometry.oct_container import \
     ARTOctreeContainer
 from yt.data_objects.field_info_container import \
@@ -171,8 +173,16 @@
         # as well as the referring data source
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         """
@@ -314,7 +324,8 @@
         self.conversion_factors = cf
 
         for ax in 'xyz':
-            self.conversion_factors["%s-velocity" % ax] = 1.0
+            self.conversion_factors["%s-velocity" % ax] = cf["Velocity"]
+            self.conversion_factors["particle_velocity_%s" % ax] = cf["Velocity"]
         for pt in particle_fields:
             if pt not in self.conversion_factors.keys():
                 self.conversion_factors[pt] = 1.0
@@ -433,43 +444,10 @@
                 return False
         return False
 
-
-class ARTDomainSubset(object):
+class ARTDomainSubset(OctreeSubset):
     def __init__(self, domain, mask, cell_count, domain_level):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
+        super(ARTDomainSubset, self).__init__(domain, mask, cell_count)
         self.domain_level = domain_level
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        base_dx = 1.0/self.domain.pf.domain_dimensions
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:, i] = base_dx[i] / dds
-        return widths
 
     def fill_root(self, content, ftfields):
         """

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/art/fields.py
--- a/yt/frontends/art/fields.py
+++ b/yt/frontends/art/fields.py
@@ -49,19 +49,6 @@
     add_art_field(f, function=NullFunc, take_log=True,
                   validators=[ValidateDataField(f)])
 
-for f in particle_fields:
-    add_art_field(f, function=NullFunc, take_log=True,
-                  validators=[ValidateDataField(f)],
-                  particle_type=True)
-add_art_field("particle_mass", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
-              validators=[ValidateDataField(f)],
-              particle_type=True,
-              convert_function=lambda x: x.convert("particle_mass"))
-
 def _convertDensity(data):
     return data.convert("Density")
 KnownARTFields["Density"]._units = r"\rm{g}/\rm{cm}^3"
@@ -213,6 +200,24 @@
 ARTFieldInfo["Metal_Density"]._projected_units = r"\rm{g}/\rm{cm}^2"
 
 # Particle fields
+for f in particle_fields:
+    add_art_field(f, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True)
+for ax in "xyz":
+    add_art_field("particle_velocity_%s" % ax, function=NullFunc, take_log=True,
+                  validators=[ValidateDataField(f)],
+                  particle_type=True,
+                  convert_function=lambda x: x.convert("particle_velocity_%s" % ax))
+add_art_field("particle_mass", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+add_art_field("particle_mass_initial", function=NullFunc, take_log=True,
+              validators=[ValidateDataField(f)],
+              particle_type=True,
+              convert_function=lambda x: x.convert("particle_mass"))
+
 def _particle_age(field, data):
     tr = data["particle_creation_time"]
     return data.pf.current_time - tr

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/art/io.py
--- a/yt/frontends/art/io.py
+++ b/yt/frontends/art/io.py
@@ -140,7 +140,7 @@
                     temp[-nstars:] = data
                     tr[field] = temp
                     del data
-                tr[field] = tr[field][mask]
+                tr[field] = tr[field][mask].astype('f8')
                 ftype_old = ftype
                 fields_read.append(field)
         if tr == {}:
@@ -330,32 +330,57 @@
     f.seek(pos)
     return unitary_center, fl, iocts, nLevel, root_level
 
+def get_ranges(skip, count, field, words=6, real_size=4, np_per_page=4096**2, 
+                  num_pages=1):
+    # Translate a particle index window into file position ranges
+    ranges = []
+    arr_size = np_per_page * real_size
+    page_size = words * np_per_page * real_size
+    idxa, idxb = 0, 0
+    posa, posb = 0, 0
+    left = count
+    for page in range(num_pages):
+        idxb += np_per_page
+        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
+            posb += arr_size
+            if i == field or fname == field:
+                if skip < np_per_page and count > 0:
+                    left_in_page = np_per_page - skip
+                    this_count = min(left_in_page, count)
+                    count -= this_count
+                    start = posa + skip * real_size
+                    end = posa + this_count * real_size
+                    ranges.append((start, this_count))
+                    skip = 0
+                    assert end <= posb
+                else:
+                    skip -= np_per_page
+            posa += arr_size
+        idxa += np_per_page
+    assert count == 0
+    return ranges
 
-def read_particles(file, Nrow, idxa=None, idxb=None, field=None):
+
+def read_particles(file, Nrow, idxa, idxb, field):
     words = 6  # words (reals) per particle: x,y,z,vx,vy,vz
     real_size = 4  # for file_particle_data; not always true?
-    np_per_page = Nrow**2  # defined in ART a_setup.h
+    np_per_page = Nrow**2  # defined in ART a_setup.h, # of particles/page
     num_pages = os.path.getsize(file)/(real_size*words*np_per_page)
     data = np.array([], 'f4')
     fh = open(file, 'r')
-    totalp = idxb-idxa
-    left = totalp
-    for page in range(num_pages):
-        for i, fname in enumerate(['x', 'y', 'z', 'vx', 'vy', 'vz']):
-            if i == field or fname == field:
-                if idxa is not None:
-                    fh.seek(real_size*idxa, 1)
-                    count = min(np_per_page, left)
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                    pageleft = np_per_page-count-idxa
-                    fh.seek(real_size*pageleft, 1)
-                    left -= count
-                else:
-                    count = np_per_page
-                    temp = np.fromfile(fh, count=count, dtype='>f4')
-                data = np.concatenate((data, temp))
-            else:
-                fh.seek(4*np_per_page, 1)
+    skip, count = idxa, idxb - idxa
+    kwargs = dict(words=words, real_size=real_size, 
+                  np_per_page=np_per_page, num_pages=num_pages)
+    ranges = get_ranges(skip, count, field, **kwargs)
+    data = None
+    for seek, this_count in ranges:
+        fh.seek(seek)
+        temp = np.fromfile(fh, count=this_count, dtype='>f4')
+        if data is None:
+            data = temp
+        else:
+            data = np.concatenate((data, temp))
+    fh.close()
     return data
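
The new get_ranges helper turns a (skip, count) particle window for one of the
six per-page arrays (x, y, z, vx, vy, vz; real_size bytes each) into absolute
(offset, particle_count) file ranges, which read_particles then seeks to
directly. A toy invocation of the helper defined above, with sizes shrunk for
readability (4 particles per page, 2 pages):

    ranges = get_ranges(skip=2, count=5, field='x',
                        words=6, real_size=4, np_per_page=4, num_pages=2)
    # -> [(8, 2), (96, 3)]: the last 2 particles of page 0's x block
    #    (starting at byte 8), then 3 more at the start of page 1's
    #    x block (page 1 begins at byte 6*4*4 = 96).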
 
 

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -35,6 +35,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 
 from .definitions import ramses_header
 from yt.utilities.definitions import \
@@ -252,43 +254,7 @@
         self.select(selector)
         return self.count(selector)
 
-class RAMSESDomainSubset(object):
-    def __init__(self, domain, mask, cell_count):
-        self.mask = mask
-        self.domain = domain
-        self.oct_handler = domain.pf.h.oct_handler
-        self.cell_count = cell_count
-        level_counts = self.oct_handler.count_levels(
-            self.domain.pf.max_level, self.domain.domain_id, mask)
-        assert(level_counts.sum() == cell_count)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count,
-                                        self.level_counts.copy())
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.select_ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count,
-                                     self.level_counts.copy())
+class RAMSESDomainSubset(OctreeSubset):
 
     def fill(self, content, fields):
         # Here we get a copy of the file, which we skip through and read the
@@ -389,8 +355,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
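
This _chunk_spatial is the same pattern now repeated in the ART and SPH
frontends: walk the current chunk's objects, optionally wrap each in ghost
zones, skip empty subsets, and yield one single-object spatial chunk per oct
subset. A stripped-down sketch of that generator with dict stand-ins for the
subsets:

    def chunk_spatial(subsets):
        # Yield (objs, size) pairs, one subset at a time, skipping empties.
        for og in subsets:
            size = og["cell_count"]
            if size == 0:
                continue
            yield ([og], size)

    chunks = list(chunk_spatial([{"cell_count": 0}, {"cell_count": 8}]))
    assert chunks == [([{"cell_count": 8}], 8)]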

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -40,6 +40,8 @@
     GeometryHandler, YTDataChunk
 from yt.data_objects.static_output import \
     StaticOutput
+from yt.data_objects.octree_subset import \
+    OctreeSubset
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 from .fields import \
@@ -70,40 +72,8 @@
     def _calculate_offsets(self, fields):
         pass
 
-class ParticleDomainSubset(object):
-    def __init__(self, domain, mask, count):
-        self.domain = domain
-        self.mask = mask
-        self.cell_count = count
-        self.oct_handler = domain.pf.h.oct_handler
-        level_counts = self.oct_handler.count_levels(
-            99, self.domain.domain_id, mask)
-        level_counts[1:] = level_counts[:-1]
-        level_counts[0] = 0
-        self.level_counts = np.add.accumulate(level_counts)
-
-    def select_icoords(self, dobj):
-        return self.oct_handler.icoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fcoords(self, dobj):
-        return self.oct_handler.fcoords(self.domain.domain_id, self.mask,
-                                        self.cell_count)
-
-    def select_fwidth(self, dobj):
-        # Recall domain_dimensions is the number of cells, not octs
-        base_dx = (self.domain.pf.domain_width /
-                   self.domain.pf.domain_dimensions)
-        widths = np.empty((self.cell_count, 3), dtype="float64")
-        dds = (2**self.ires(dobj))
-        for i in range(3):
-            widths[:,i] = base_dx[i] / dds
-        return widths
-
-    def select_ires(self, dobj):
-        return self.oct_handler.ires(self.domain.domain_id, self.mask,
-                                     self.cell_count)
-
+class ParticleDomainSubset(OctreeSubset):
+    pass
 
 class ParticleGeometryHandler(OctreeGeometryHandler):
 
@@ -126,7 +96,7 @@
         total_particles = sum(sum(d.total_particles.values())
                               for d in self.domains)
         self.oct_handler = ParticleOctreeContainer(
-            self.parameter_file.domain_dimensions,
+            self.parameter_file.domain_dimensions/2,
             self.parameter_file.domain_left_edge,
             self.parameter_file.domain_right_edge)
         self.oct_handler.n_ref = 64
@@ -170,8 +140,16 @@
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
         yield YTDataChunk(dobj, "all", oobjs, dobj.size)
 
-    def _chunk_spatial(self, dobj, ngz):
-        raise NotImplementedError
+    def _chunk_spatial(self, dobj, ngz, sort = None):
+        sobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
+        for i,og in enumerate(sobjs):
+            if ngz > 0:
+                g = og.retrieve_ghost_zones(ngz, [], smoothed=True)
+            else:
+                g = og
+            size = og.cell_count
+            if size == 0: continue
+            yield YTDataChunk(dobj, "spatial", [g], size)
 
     def _chunk_io(self, dobj):
         oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info)
@@ -216,6 +194,7 @@
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
         self.cosmological_simulation = 1
+        self.periodicity = (True, True, True)
         self.current_redshift = hvals["Redshift"]
         self.omega_lambda = hvals["OmegaLambda"]
         self.omega_matter = hvals["Omega0"]
@@ -317,6 +296,7 @@
         self.domain_left_edge = np.zeros(3, "float64")
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
 
@@ -371,10 +351,27 @@
                     ('dummy',   'i'))
 
     def __init__(self, filename, data_style="tipsy",
-                 root_dimensions = 64):
+                 root_dimensions = 64, endian = ">",
+                 field_dtypes = None,
+                 domain_left_edge = None,
+                 domain_right_edge = None):
+        self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        if domain_left_edge is None:
+            domain_left_edge = np.zeros(3, "float64") - 0.5
+        if domain_right_edge is None:
+            domain_right_edge = np.zeros(3, "float64") + 0.5
+
+        self.domain_left_edge = np.array(domain_left_edge, dtype="float64")
+        self.domain_right_edge = np.array(domain_right_edge, dtype="float64")
+
+        # My understanding is that dtypes are set on a field by field basis,
+        # not on a (particle type, field) basis
+        if field_dtypes is None: field_dtypes = {}
+        self._field_dtypes = field_dtypes
+
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
@@ -393,7 +390,7 @@
         # in the GADGET-2 user guide.
 
         f = open(self.parameter_filename, "rb")
-        hh = ">" + "".join(["%s" % (b) for a,b in self._header_spec])
+        hh = self.endian + "".join(["%s" % (b) for a,b in self._header_spec])
         hvals = dict([(a, c) for (a, b), c in zip(self._header_spec,
                      struct.unpack(hh, f.read(struct.calcsize(hh))))])
         self._header_offset = f.tell()
@@ -408,9 +405,11 @@
         # This may not be correct.
         self.current_time = hvals["time"]
 
-        self.domain_left_edge = np.zeros(3, "float64") - 0.5
-        self.domain_right_edge = np.ones(3, "float64") + 0.5
+        # NOTE: These are now set in the main initializer.
+        #self.domain_left_edge = np.zeros(3, "float64") - 0.5
+        #self.domain_right_edge = np.ones(3, "float64") + 0.5
         self.domain_dimensions = np.ones(3, "int32") * self._root_dimensions
+        self.periodicity = (True, True, True)
 
         self.cosmological_simulation = 1
 

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -372,6 +372,7 @@
         return rv
 
     def _initialize_octree(self, domain, octree):
+        pf = domain.pf
         with open(domain.domain_filename, "rb") as f:
             f.seek(domain.pf._header_offset)
             for ptype in self._ptypes:
@@ -391,6 +392,11 @@
                             pos[:,1].min(), pos[:,1].max())
                 mylog.debug("Spanning: %0.3e .. %0.3e in z",
                             pos[:,2].min(), pos[:,2].max())
+                if np.any(pos.min(axis=0) < pf.domain_left_edge) or \
+                   np.any(pos.max(axis=0) > pf.domain_right_edge):
+                    raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
+                                           pf.domain_left_edge,
+                                           pf.domain_right_edge)
                 del pp
                 octree.add(pos, domain.domain_id)
 
@@ -412,10 +418,12 @@
         for ptype, field in self._fields:
             pfields = []
             if tp[ptype] == 0: continue
+            dtbase = domain.pf._field_dtypes.get(field, 'f')
+            ff = "%s%s" % (domain.pf.endian, dtbase)
             if field in _vector_fields:
-                dt = (field, [('x', '>f'), ('y', '>f'), ('z', '>f')])
+                dt = (field, [('x', ff), ('y', ff), ('z', ff)])
             else:
-                dt = (field, '>f')
+                dt = (field, ff)
             pds.setdefault(ptype, []).append(dt)
             field_list.append((ptype, field))
         for ptype in pds:

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/frontends/sph/smoothing_kernel.pyx
--- a/yt/frontends/sph/smoothing_kernel.pyx
+++ b/yt/frontends/sph/smoothing_kernel.pyx
@@ -53,21 +53,28 @@
     for p in range(ngas):
         kernel_sum[p] = 0.0
         skip = 0
+        # Find the number of cells the kernel covers
         for i in range(3):
             pos[i] = ppos[p, i]
+            # Get particle root grid integer index
             ind[i] = <int>((pos[i] - left_edge[i]) / dds[i])
+            # Number of root-grid cells the smoothing length spans, plus one
             half_len = <int>(hsml[p]/dds[i]) + 1
+            # Left and right integer indices of the smoothing range
+            # If the smoothing length is small, both may fall in the same bin
             ib0[i] = ind[i] - half_len
             ib1[i] = ind[i] + half_len
             #pos[i] = ppos[p, i] - left_edge[i]
             #ind[i] = <int>(pos[i] / dds[i])
             #ib0[i] = <int>((pos[i] - hsml[i]) / dds[i]) - 1
             #ib1[i] = <int>((pos[i] + hsml[i]) / dds[i]) + 1
+            # Skip if outside our root grid
             if ib0[i] >= dims[i] or ib1[i] < 0:
                 skip = 1
             ib0[i] = iclip(ib0[i], 0, dims[i] - 1)
             ib1[i] = iclip(ib1[i], 0, dims[i] - 1)
         if skip == 1: continue
+        # Having found the kernel shape, calculate the kernel weight
         for i from ib0[0] <= i <= ib1[0]:
             idist[0] = (ind[0] - i) * (ind[0] - i) * sdds[0]
             for j from ib0[1] <= j <= ib1[1]:
@@ -75,10 +82,14 @@
                 for k from ib0[2] <= k <= ib1[2]:
                     idist[2] = (ind[2] - k) * (ind[2] - k) * sdds[2]
                     dist = idist[0] + idist[1] + idist[2]
+                    # Calculate distance in multiples of the smoothing length
                     dist = sqrt(dist) / hsml[p]
+                    # Kernel is 3D but save the elements in a 1D array
                     gi = ((i * dims[1] + j) * dims[2]) + k
                     pdist[gi] = sph_kernel(dist)
+                    # Save sum to normalize later
                     kernel_sum[p] += pdist[gi]
+        # Having found the kernel, deposit accordingly into gdata
         for i from ib0[0] <= i <= ib1[0]:
             for j from ib0[1] <= j <= ib1[1]:
                 for k from ib0[2] <= k <= ib1[2]:
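
The bookkeeping being commented here: for each particle, the loop accumulates
an unnormalized kernel weight per covered root-grid cell plus a running
kernel_sum, so the final deposition pass can divide by the sum and conserve the
deposited quantity. A 1D sketch of that normalize-then-deposit idea, with a
triangular stand-in for sph_kernel:

    import numpy as np

    def deposit_1d(pos, hsml, value, nbins, width):
        # Deposit one particle's value onto a 1D grid, normalizing the
        # kernel weights exactly as kernel_sum is used above.
        grid = np.zeros(nbins)
        dds = width / nbins
        ind = int(pos / dds)
        half_len = int(hsml / dds) + 1
        i0, i1 = max(ind - half_len, 0), min(ind + half_len, nbins - 1)
        dist = np.abs(np.arange(i0, i1 + 1) - ind) * dds / hsml
        w = np.clip(1.0 - dist, 0.0, None)      # stand-in for sph_kernel(dist)
        grid[i0:i1 + 1] = value * w / w.sum()   # kernel_sum normalization
        return grid

    g = deposit_1d(pos=0.5, hsml=0.2, value=2.0, nbins=8, width=1.0)
    assert np.isclose(g.sum(), 2.0)   # normalization conserves the value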

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/geometry/fake_octree.pyx
--- /dev/null
+++ b/yt/geometry/fake_octree.pyx
@@ -0,0 +1,90 @@
+"""
+Make a fake octree, deposit particle at every leaf
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from libc.stdlib cimport malloc, free, rand, RAND_MAX
+cimport numpy as np
+import numpy as np
+cimport cython
+
+from oct_container cimport Oct, RAMSESOctreeContainer
+
+# Create a balanced octree by a random walk that recursively
+# subdivides
+def create_fake_octree(RAMSESOctreeContainer oct_handler,
+                       long max_noct,
+                       long max_level,
+                       np.ndarray[np.int32_t, ndim=1] ndd,
+                       np.ndarray[np.float64_t, ndim=1] dle,
+                       np.ndarray[np.float64_t, ndim=1] dre,
+                       float fsubdivide):
+    cdef int[3] dd #hold the octant index
+    cdef int[3] ind #hold the octant index
+    cdef long i
+    cdef long cur_leaf = 0
+    cdef np.ndarray[np.uint8_t, ndim=2] mask
+    for i in range(3):
+        ind[i] = 0
+        dd[i] = ndd[i]
+    oct_handler.allocate_domains([max_noct])
+    parent = oct_handler.next_root(1, ind)
+    parent.domain = 1
+    cur_leaf = 8 #we've added one parent...
+    mask = np.ones((max_noct,8),dtype='uint8')
+    while oct_handler.domains[0].n_assigned < max_noct:
+        print "root: nocts ", oct_handler.domains[0].n_assigned
+        cur_leaf = subdivide(oct_handler, parent, ind, dd, cur_leaf, 0,
+                             max_noct, max_level, fsubdivide, mask)
+    return cur_leaf
+                             
+
+cdef long subdivide(RAMSESOctreeContainer oct_handler, 
+                    Oct *parent,
+                    int ind[3], int dd[3], 
+                    long cur_leaf, long cur_level, 
+                    long max_noct, long max_level, float fsubdivide,
+                    np.ndarray[np.uint8_t, ndim=2] mask):
+    print "child", parent.file_ind, ind[0], ind[1], ind[2], cur_leaf, cur_level
+    cdef int ddr[3]
+    cdef long i,j,k
+    cdef float rf #random float from 0-1
+    if cur_level >= max_level: 
+        return cur_leaf
+    if oct_handler.domains[0].n_assigned >= max_noct:
+        return cur_leaf
+    for i in range(3):
+        ind[i] = <int> ((rand() * 1.0 / RAND_MAX) * dd[i])
+        ddr[i] = 2
+    rf = rand() * 1.0 / RAND_MAX
+    if rf > fsubdivide:
+        if parent.children[ind[0]][ind[1]][ind[2]] == NULL:
+            cur_leaf += 7 
+        oct = oct_handler.next_child(1, ind, parent)
+        oct.domain = 1
+        cur_leaf = subdivide(oct_handler, oct, ind, ddr, cur_leaf, 
+                             cur_level + 1, max_noct, max_level, 
+                             fsubdivide, mask)
+    return cur_leaf

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/geometry/oct_container.pxd
--- a/yt/geometry/oct_container.pxd
+++ b/yt/geometry/oct_container.pxd
@@ -30,8 +30,12 @@
 
 cdef struct Oct
 cdef struct Oct:
-    np.int64_t ind          # index
-    np.int64_t local_ind
+    np.int64_t file_ind     # index with respect to the order in which it was
+                            # added
+    np.int64_t domain_ind   # index within the global set of domains
+                            # note that moving to a local index will require
+                            # moving to split-up masks, which is part of a
+                            # bigger refactor
     np.int64_t domain       # (opt) addl int index
     np.int64_t pos[3]       # position in ints
     np.int8_t level
@@ -39,6 +43,10 @@
     Oct *children[2][2][2]
     Oct *parent
 
+cdef struct OctInfo:
+    np.float64_t left_edge[3]
+    np.float64_t dds[3]
+
 cdef struct OctAllocationContainer
 cdef struct OctAllocationContainer:
     np.int64_t n
@@ -54,16 +62,12 @@
     cdef np.float64_t DLE[3], DRE[3]
     cdef public int nocts
     cdef public int max_domain
-    cdef Oct* get(self, ppos)
+    cdef Oct* get(self, np.float64_t ppos[3], OctInfo *oinfo = ?)
     cdef void neighbors(self, Oct *, Oct **)
     cdef void oct_bounds(self, Oct *, np.float64_t *, np.float64_t *)
-
-cdef class ARTIOOctreeContainer(OctreeContainer):
-    cdef OctAllocationContainer **domains
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3])
-    cdef Oct *next_free_oct( self, int curdom )
-    cdef int valid_domain_oct(self, int curdom, Oct *parent)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, int curlevel, double pp[3])
+    # This function must return the offset from global-to-local domains; i.e.,
+    # OctAllocationContainer.offset if such a thing exists.
+    cdef np.int64_t get_domain_offset(self, int domain_id)
 
 cdef class RAMSESOctreeContainer(OctreeContainer):
     cdef OctAllocationContainer **domains

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -56,8 +56,8 @@
     for n in range(n_octs):
         oct = &n_cont.my_octs[n]
         oct.parent = NULL
-        oct.ind = oct.domain = -1
-        oct.local_ind = n + n_cont.offset
+        oct.file_ind = oct.domain = -1
+        oct.domain_ind = n + n_cont.offset
         oct.level = -1
         for i in range(2):
             for j in range(2):
@@ -130,7 +130,7 @@
         while cur != NULL:
             for i in range(cur.n_assigned):
                 this = &cur.my_octs[i]
-                yield (this.ind, this.local_ind, this.domain)
+                yield (this.file_ind, this.domain_ind, this.domain)
             cur = cur.next
 
     cdef void oct_bounds(self, Oct *o, np.float64_t *corner, np.float64_t *size):
@@ -139,10 +139,13 @@
             size[i] = (self.DRE[i] - self.DLE[i]) / (self.nn[i] << o.level)
             corner[i] = o.pos[i] * size[i] + self.DLE[i]
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return 0
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    cdef Oct *get(self, ppos):
+    cdef Oct *get(self, np.float64_t ppos[3], OctInfo *oinfo = NULL):
         #Given a floating point position, retrieve the most
         #refined oct at that time
         cdef np.int64_t ind[3]
@@ -150,21 +153,34 @@
         cdef Oct *cur
         cdef int i
         for i in range(3):
-            pp[i] = ppos[i] - self.DLE[i]
             dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
-            cp[i] = (ind[i] + 0.5) * dds[i]
-        cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
-        while cur.children[0][0][0] != NULL:
+            ind[i] = <np.int64_t> ((ppos[i] - self.DLE[i])/dds[i])
+            cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
+        next = self.root_mesh[ind[0]][ind[1]][ind[2]]
+        # We want to stop recursing when there's nowhere else to go
+        while next != NULL:
+            cur = next
             for i in range(3):
                 dds[i] = dds[i] / 2.0
-                if cp[i] > pp[i]:
+                if cp[i] > ppos[i]:
                     ind[i] = 0
                     cp[i] -= dds[i] / 2.0
                 else:
                     ind[i] = 1
                     cp[i] += dds[i]/2.0
-            cur = cur.children[ind[0]][ind[1]][ind[2]]
+            next = cur.children[ind[0]][ind[1]][ind[2]]
+        if oinfo == NULL: return cur
+        for i in range(3):
+            # This will happen *after* we quit out, so we need to back out the
+            # last change to cp
+            if ind[i] == 1:
+                cp[i] -= dds[i]/2.0 # Now centered
+            else:
+                cp[i] += dds[i]/2.0
+            # We don't need to change dds[i] as it has been halved from the
+            # oct width, thus making it already the cell width
+            oinfo.dds[i] = dds[i] # Cell width
+            oinfo.left_edge[i] = cp[i] - dds[i] # Center minus dds
         return cur
 
     @cython.boundscheck(False)
@@ -186,7 +202,40 @@
                 cur = cur.next
             o = &cur.my_octs[oi - cur.offset]
             for i in range(8):
-                count[o.domain - 1] += mask[o.local_ind,i]
+                count[o.domain - 1] += mask[o.domain_ind,i]
+        return count
+
+    @cython.boundscheck(True)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def count_leaves(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
+        # Modified to work when not all octs are assigned
+        cdef int i, j, k, ii
+        cdef np.int64_t oi
+        # pos here is CELL center, not OCT center.
+        cdef np.float64_t pos[3]
+        cdef int n = mask.shape[0]
+        cdef np.ndarray[np.int64_t, ndim=1] count
+        count = np.zeros(self.max_domain, 'int64')
+        # 
+        cur = self.cont
+        for oi in range(n):
+            if oi - cur.offset >= cur.n_assigned:
+                cur = cur.next
+                if cur == NULL:
+                    break
+            o = &cur.my_octs[oi - cur.offset]
+            # skip if unassigned
+            if o == NULL:
+                continue
+            if o.domain == -1: 
+                continue
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if o.children[i][j][k] == NULL:
+                            ii = ((k*2)+j)*2+i
+                            count[o.domain - 1] += mask[o.domain_ind,ii]
         return count
 
     @cython.boundscheck(False)
@@ -260,14 +309,17 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def get_neighbor_boundaries(self, ppos):
+    def get_neighbor_boundaries(self, oppos):
+        cdef int i, ii
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
         cdef np.ndarray[np.float64_t, ndim=2] bounds
         cdef np.float64_t corner[3], size[3]
         bounds = np.zeros((27,6), dtype="float64")
-        cdef int i, ii
         tnp = 0
         for i in range(27):
             self.oct_bounds(neighbors[i], corner, size)
@@ -276,330 +328,11 @@
                 bounds[i, 3+ii] = size[ii]
         return bounds
 
-cdef class ARTIOOctreeContainer(OctreeContainer):
+cdef class RAMSESOctreeContainer(OctreeContainer):
 
-    def allocate_domains(self, domain_counts):
-        cdef int count, i
-        cdef OctAllocationContainer *cur = self.cont
-        assert(cur == NULL)
-        self.max_domain = len(domain_counts) # 1-indexed
-        self.domains = <OctAllocationContainer **> malloc(
-            sizeof(OctAllocationContainer *) * len(domain_counts))
-        for i, count in enumerate(domain_counts):
-            cur = allocate_octs(count, cur)
-            if self.cont == NULL: self.cont = cur
-            self.domains[i] = cur
-        
-    def __dealloc__(self):
-        # This gets called BEFORE the superclass deallocation.  But, both get
-        # called.
-        if self.domains != NULL: free(self.domains)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count(self, np.ndarray[np.uint8_t, ndim=1, cast=True] mask,
-                     split = False):
-        cdef int n = mask.shape[0]
-        cdef int i, dom
-        cdef OctAllocationContainer *cur
-        cdef np.ndarray[np.int64_t, ndim=1] count
-        count = np.zeros(self.max_domain, 'int64')
-        # This is the idiom for iterating over many containers.
-        cur = self.cont
-        for i in range(n):
-            if i - cur.offset >= cur.n: cur = cur.next
-            if mask[i] == 1:
-                count[cur.my_octs[i - cur.offset].domain - 1] += 1
-        return count
-
-    def check(self, int curdom):
-        cdef int dind, pi
-        cdef Oct oct
-        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
-        cdef int nbad = 0
-        for pi in range(cont.n_assigned):
-            oct = cont.my_octs[pi]
-            for i in range(2):
-                for j in range(2):
-                    for k in range(2):
-                        if oct.children[i][j][k] != NULL and \
-                           oct.children[i][j][k].level != oct.level + 1:
-                            if curdom == 61:
-                                print pi, oct.children[i][j][k].level,
-                                print oct.level
-                            nbad += 1
-        print "DOMAIN % 3i HAS % 9i BAD OCTS (%s / %s / %s)" % (curdom, nbad, 
-            cont.n - cont.n_assigned, cont.n_assigned, cont.n)
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *next_free_oct( self, int curdom ) :
-        cdef OctAllocationContainer *cont
-        cdef Oct *next_oct
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            print "Error, invalid domain or unallocated domains"
-            raise RuntimeError
-        
-        cont = self.domains[curdom - 1]
-        if cont.n_assigned >= cont.n :
-            print "Error, ran out of octs in domain curdom"
-            raise RuntimeError
-
-        self.nocts += 1
-        next_oct = &cont.my_octs[cont.n_assigned]
-        cont.n_assigned += 1
-        return next_oct
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    cdef int valid_domain_oct(self, int curdom, Oct *parent) :
-        cdef OctAllocationContainer *cont
-
-        if curdom < 1 or curdom > self.max_domain or self.domains == NULL  :
-            raise RuntimeError
-        cont = self.domains[curdom - 1]
-
-        if parent == NULL or parent < &cont.my_octs[0] or \
-                parent > &cont.my_octs[cont.n_assigned] :
-            return 0
-        else :
-            return 1
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *get_root_oct(self, np.float64_t ppos[3]):
-        cdef np.int64_t ind[3]
-        cdef np.float64_t dds
-        cdef int i
-        for i in range(3):
-            dds = (self.DRE[i] - self.DLE[i])/self.nn[i]
-            ind[i] = <np.int64_t> floor((ppos[i]-self.DLE[i])/dds)
-        return self.root_mesh[ind[0]][ind[1]][ind[2]]
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    cdef Oct *add_oct(self, int curdom, Oct *parent, 
-                    int curlevel, np.float64_t pp[3]):
-
-        cdef int level, i, ind[3]
-        cdef Oct *cur, *next_oct
-        cdef np.int64_t pos[3]
-        cdef np.float64_t dds
-
-        if curlevel < 0 :
-            raise RuntimeError
-        for i in range(3):
-            if pp[i] < self.DLE[i] or pp[i] > self.DRE[i] :
-                raise RuntimeError
-            dds = (self.DRE[i] - self.DLE[i])/(<np.int64_t>self.nn[i])
-            pos[i] = <np.int64_t> floor((pp[i]-self.DLE[i])*<np.float64_t>(1<<curlevel)/dds)
-
-        if curlevel == 0 :
-            cur = NULL
-        elif parent == NULL :
-            cur = self.get_root_oct(pp)
-            assert( cur != NULL )
-
-            # Now we find the location we want
-            for level in range(1,curlevel):
-                # At every level, find the cell this oct lives inside
-                for i in range(3) :
-                    if pos[i] < (2*cur.pos[i]+1)<<(curlevel-level) :
-                        ind[i] = 0
-                    else :
-                        ind[i] = 1
-                cur = cur.children[ind[0]][ind[1]][ind[2]]
-                if cur == NULL:
-                    # in ART we don't allocate down to curlevel 
-                    # if parent doesn't exist
-                    print "Error, no oct exists at that level"
-                    raise RuntimeError
-        else :
-            if not self.valid_domain_oct(curdom,parent) or \
-                    parent.level != curlevel - 1:
-                raise RuntimeError
-            cur = parent
- 
-        next_oct = self.next_free_oct( curdom )
-        if cur == NULL :
-            self.root_mesh[pos[0]][pos[1]][pos[2]] = next_oct
-        else :
-            for i in range(3) :
-                if pos[i] < 2*cur.pos[i]+1 :
-                    ind[i] = 0
-                else :
-                    ind[i] = 1
-            if cur.level != curlevel - 1 or  \
-                    cur.children[ind[0]][ind[1]][ind[2]] != NULL :
-                print "Error in add_oct: child already filled!"
-                raise RuntimeError
-
-            cur.children[ind[0]][ind[1]][ind[2]] = next_oct
-        for i in range(3) :
-            next_oct.pos[i] = pos[i]
-        next_oct.domain = curdom
-        next_oct.parent = cur
-        next_oct.ind = 1
-        next_oct.level = curlevel
-        return next_oct
-
-    # ii:mask/art ; ci=ramses loop backward (k<-fast, j ,i<-slow) 
-    # ii=0 000 art 000 ci 000 
-    # ii=1 100 art 100 ci 001 
-    # ii=2 010 art 010 ci 010 
-    # ii=3 110 art 110 ci 011
-    # ii=4 001 art 001 ci 100
-    # ii=5 101 art 011 ci 101
-    # ii=6 011 art 011 ci 110
-    # ii=7 111 art 111 ci 111
-    # keep coords ints so multiply by pow(2,1) when increasing level.
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def icoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii, level
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="int64")
-        ci=0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = (o.pos[0] << 1) + i
-                        coords[ci, 1] = (o.pos[1] << 1) + j
-                        coords[ci, 2] = (o.pos[2] << 1) + k
-                        ci += 1
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def ires(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        n = mask.shape[0]
-        cdef np.ndarray[np.int64_t, ndim=1] levels
-        levels = np.empty(cell_count, dtype="int64")
-        ci = 0
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[oi + cur.offset, i] == 0: continue
-                levels[ci] = o.level
-                ci +=1
-        return levels
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def count_levels(self, int max_level, int domain_id,
-                     np.ndarray[np.uint8_t, ndim=2, cast=True] mask):
-        cdef np.ndarray[np.int64_t, ndim=1] level_count
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef int oi, i
-        level_count = np.zeros(max_level+1, 'int64')
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
-                level_count[o.level] += 1
-        return level_count
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fcoords(self, int domain_id,
-                np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count,
-                np.ndarray[np.int64_t, ndim=1] level_counts):
-        # Wham, bam, it's a scam
-        cdef np.int64_t i, j, k, oi, ci, n, ii
-        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
-        cdef Oct *o
-        cdef np.float64_t pos[3]
-        cdef np.float64_t base_dx[3], dx[3]
-        n = mask.shape[0]
-        cdef np.ndarray[np.float64_t, ndim=2] coords
-        coords = np.empty((cell_count, 3), dtype="float64")
-        ci =0 
-        for i in range(3):
-            # This is the base_dx, but not the base distance from the center
-            # position.  Note that the positions will also all be offset by
-            # dx/2.0.  This is also for *oct grids*, not cells.
-            base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
-        for oi in range(cur.n):
-            o = &cur.my_octs[oi]
-            for i in range(3):
-                # This gives the *grid* width for this level
-                dx[i] = base_dx[i] / (1 << o.level)
-                # o.pos is the *grid* index, so pos[i] is the center of the
-                # first cell in the grid
-                pos[i] = self.DLE[i] + o.pos[i]*dx[i] + dx[i]/4.0
-                dx[i] = dx[i] / 2.0 # This is now the *offset* 
-            for k in range(2):
-                for j in range(2):
-                    for i in range(2):
-                        ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        coords[ci, 0] = pos[0] + dx[0] * i
-                        coords[ci, 1] = pos[1] + dx[1] * j
-                        coords[ci, 2] = pos[2] + dx[2] * k
-                        ci +=1 
-        return coords
-
-    @cython.boundscheck(False)
-    @cython.wraparound(False)
-    @cython.cdivision(True)
-    def fill_mask(self, int domain, dest_fields, source_fields,
-                   np.ndarray[np.uint8_t, ndim=2, cast=True] mask, int offset):
-        cdef np.ndarray[np.float32_t, ndim=1] source
-        cdef np.ndarray[np.float64_t, ndim=1] dest
-        cdef OctAllocationContainer *dom = self.domains[domain - 1]
-        cdef Oct *o
-        cdef int n
-        cdef int i, j, k, ii
-        cdef int local_pos, local_filled
-        cdef np.float64_t val
-        for key in dest_fields:
-            local_filled = 0
-            dest = dest_fields[key]
-            source = source_fields[key]
-            # snl: an alternative to filling level 0 yt-octs is to produce a 
-            # mapping between the mask and the source read order
-            for n in range(dom.n):
-                o = &dom.my_octs[n]
-                for k in range(2):
-                    for j in range(2):
-                        for i in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.local_ind*8+ii]
-                            # print 'oct_container.pyx:sourcemasked',o.level,local_filled, o.local_ind*8+ii, source[o.local_ind*8+ii]
-                            local_filled += 1
-        return local_filled
-
-cdef class RAMSESOctreeContainer(OctreeContainer):
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        cdef OctAllocationContainer *cont = self.domains[domain_id - 1]
+        return cont.offset
 
     cdef Oct* next_root(self, int domain_id, int ind[3]):
         cdef Oct *next = self.root_mesh[ind[0]][ind[1]][ind[2]]
@@ -666,7 +399,77 @@
                 count[cur.my_octs[i - cur.offset].domain - 1] += 1
         return count
 
-    def check(self, int curdom):
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n,  use
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
+        return m2 # NOTE: This is uint8_t
+
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm, use
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(cur.n_assigned):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.domain_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef OctAllocationContainer *cur = self.domains[domain_id - 1]
+        cdef Oct *o
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(cur.n, 'int64') - 1
+        nm = 0
+        for oi in range(cur.n):
+            o = &cur.my_octs[oi]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            if use == 1:
+                ind[o.domain_ind - cur.offset] = nm
+            nm += use
+        return ind
+
+    def check(self, int curdom, int print_all = 0):
         cdef int dind, pi
         cdef Oct oct
         cdef OctAllocationContainer *cont = self.domains[curdom - 1]
@@ -675,6 +478,9 @@
         cdef int unassigned = 0
         for pi in range(cont.n_assigned):
             oct = cont.my_octs[pi]
+            if print_all==1:
+                print pi, oct.level, oct.domain,
+                print oct.pos[0],oct.pos[1],oct.pos[2]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
@@ -691,6 +497,33 @@
         print "DOMAIN % 3i HAS % 9i MISSED OCTS" % (curdom, nmissed)
         print "DOMAIN % 3i HAS % 9i UNASSIGNED OCTS" % (curdom, unassigned)
 
+    def check_refinement(self, int curdom):
+        cdef int pi, i, j, k, some_refined, some_unrefined
+        cdef Oct *oct
+        cdef int bad = 0
+        cdef OctAllocationContainer *cont = self.domains[curdom - 1]
+        for pi in range(cont.n_assigned):
+            oct = &cont.my_octs[pi]
+            some_unrefined = 0
+            some_refined = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        if oct.children[i][j][k] == NULL:
+                            some_unrefined = 1
+                        else:
+                            some_refined = 1
+            if some_unrefined == some_refined == 1:
+                #print "BAD", oct.file_ind, oct.domain_ind
+                bad += 1
+                if curdom == 10 or curdom == 72:
+                    for i in range(2):
+                        for j in range(2):
+                            for k in range(2):
+                                print (oct.children[i][j][k] == NULL),
+                    print
+        print "BAD TOTAL", curdom, bad, cont.n_assigned
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -739,7 +572,7 @@
             # Now we should be at the right level
             cur.domain = curdom
             if local == 1:
-                cur.ind = p
+                cur.file_ind = p
             cur.level = curlevel
         return cont.n_assigned - initial
 
@@ -757,18 +590,18 @@
         n = mask.shape[0]
         cdef np.ndarray[np.int64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="int64")
+        ci = 0
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(2):
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = (o.pos[0] << 1) + i
                         coords[ci, 1] = (o.pos[1] << 1) + j
                         coords[ci, 2] = (o.pos[2] << 1) + k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -790,9 +623,8 @@
             o = &cur.my_octs[oi]
             for i in range(8):
                 if mask[oi + cur.offset, i] == 0: continue
-                ci = level_counts[o.level]
                 levels[ci] = o.level
-                level_counts[o.level] += 1
+                ci += 1
         return levels
 
     @cython.boundscheck(False)
@@ -808,7 +640,7 @@
         for oi in range(cur.n_assigned):
             o = &cur.my_octs[oi]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -833,6 +665,7 @@
             # position.  Note that the positions will also all be offset by
             # dx/2.0.  This is also for *oct grids*, not cells.
             base_dx[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
+        ci = 0
         for oi in range(cur.n):
             o = &cur.my_octs[oi]
             for i in range(3):
@@ -846,12 +679,11 @@
                 for j in range(2):
                     for k in range(2):
                         ii = ((k*2)+j)*2+i
-                        if mask[o.local_ind, ii] == 0: continue
-                        ci = level_counts[o.level]
+                        if mask[o.domain_ind, ii] == 0: continue
                         coords[ci, 0] = pos[0] + dx[0] * i
                         coords[ci, 1] = pos[1] + dx[1] * j
                         coords[ci, 2] = pos[2] + dx[2] * k
-                        level_counts[o.level] += 1
+                        ci += 1
         return coords
 
     @cython.boundscheck(False)
@@ -873,20 +705,17 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                if o.level != level: continue
-                for i in range(2):
-                    for j in range(2):
-                        for k in range(2):
-                            ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
-                            dest[local_filled + offset] = source[o.ind, ii]
-                            local_filled += 1
+                for ii in range(8):
+                    # We iterate and check here to keep our counts consistent
+                    # when filling different levels.
+                    if mask[o.domain_ind, ii] == 0: continue
+                    if o.level == level: 
+                        dest[local_filled] = source[o.file_ind, ii]
+                    local_filled += 1
         return local_filled
 
+cdef class ARTOctreeContainer(RAMSESOctreeContainer):
 
-
-cdef class ARTOctreeContainer(RAMSESOctreeContainer):
-    #this class is specifically for the NMSU ART
     @cython.boundscheck(True)
     @cython.wraparound(False)
     @cython.cdivision(True)
@@ -910,7 +739,7 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                index = o.ind-subchunk_offset
+                index = o.file_ind-subchunk_offset
                 if o.level != level: continue
                 if index < 0: continue
                 if index >= subchunk_max: 
@@ -921,7 +750,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             dest[local_filled + offset] = \
                                 source[index,ii]
                             local_filled += 1
@@ -961,7 +790,7 @@
                     for j in range(2):
                         for k in range(2):
                             ii = ((k*2)+j)*2+i
-                            if mask[o.local_ind, ii] == 0: continue
+                            if mask[o.domain_ind, ii] == 0: continue
                             ox = (o.pos[0] << 1) + i
                             oy = (o.pos[1] << 1) + j
                             oz = (o.pos[2] << 1) + k
@@ -1036,12 +865,23 @@
                 free(o.sd.pos)
         free(o)
 
+    def __iter__(self):
+        #Get the next oct, will traverse domains
+        #Note that oct containers can be sorted 
+        #so that consecutive octs are on the same domain
+        cdef int oi
+        cdef Oct *o
+        for oi in range(self.nocts):
+            o = self.oct_list[oi]
+            yield (o.file_ind, o.domain_ind, o.domain)
+
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
     def icoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the integer positions of the cells
         #Limited to this domain and within the mask
         #Positions are binary; aside from the root mesh
@@ -1070,7 +910,8 @@
     @cython.cdivision(True)
     def ires(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the 'resolution' of each cell; ie the level
         cdef np.ndarray[np.int64_t, ndim=1] res
         res = np.empty(cell_count, dtype="int64")
@@ -1090,7 +931,8 @@
     @cython.cdivision(True)
     def fcoords(self, int domain_id,
                 np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
-                np.int64_t cell_count):
+                np.int64_t cell_count,
+                np.ndarray[np.int64_t, ndim=1] level_counts):
         #Return the floating point unitary position of every cell
         cdef np.ndarray[np.float64_t, ndim=2] coords
         coords = np.empty((cell_count, 3), dtype="float64")
@@ -1141,6 +983,7 @@
         cdef int max_level = 0
         self.oct_list = <Oct**> malloc(sizeof(Oct*)*self.nocts)
         cdef np.int64_t i = 0
+        cdef np.int64_t dom_ind
         cdef ParticleArrays *c = self.first_sd
         while c != NULL:
             self.oct_list[i] = c.oct
@@ -1159,13 +1002,20 @@
         self.dom_offsets = <np.int64_t *>malloc(sizeof(np.int64_t) *
                                                 (self.max_domain + 3))
         self.dom_offsets[0] = 0
+        dom_ind = 0
         for i in range(self.nocts):
-            self.oct_list[i].local_ind = i
+            self.oct_list[i].domain_ind = i
+            self.oct_list[i].file_ind = dom_ind
+            dom_ind += 1
             if self.oct_list[i].domain > cur_dom:
                 cur_dom = self.oct_list[i].domain
                 self.dom_offsets[cur_dom + 1] = i
+                dom_ind = 0
         self.dom_offsets[cur_dom + 2] = self.nocts
 
+    cdef np.int64_t get_domain_offset(self, int domain_id):
+        return self.dom_offsets[domain_id + 1]
+
     cdef Oct* allocate_oct(self):
         #Allocate the memory, set to NULL or -1
         #We reserve space for n_ref particles, but keep
@@ -1175,8 +1025,8 @@
         cdef ParticleArrays *sd = <ParticleArrays*> \
             malloc(sizeof(ParticleArrays))
         cdef int i, j, k
-        my_oct.ind = my_oct.domain = -1
-        my_oct.local_ind = self.nocts - 1
+        my_oct.file_ind = my_oct.domain = -1
+        my_oct.domain_ind = self.nocts - 1
         my_oct.pos[0] = my_oct.pos[1] = my_oct.pos[2] = -1
         my_oct.level = -1
         my_oct.sd = sd
@@ -1227,7 +1077,7 @@
         for oi in range(ndo):
             o = self.oct_list[oi + doff]
             for i in range(8):
-                if mask[o.local_ind, i] == 0: continue
+                if mask[o.domain_ind, i] == 0: continue
                 level_count[o.level] += 1
         return level_count
 
@@ -1250,7 +1100,7 @@
                 #IND Corresponding integer index on the root octs
                 #CP Center  point of that oct
                 pp[i] = pos[p, i]
-                dds[i] = (self.DRE[i] + self.DLE[i])/self.nn[i]
+                dds[i] = (self.DRE[i] - self.DLE[i])/self.nn[i]
                 ind[i] = <np.int64_t> ((pp[i] - self.DLE[i])/dds[i])
                 cp[i] = (ind[i] + 0.5) * dds[i] + self.DLE[i]
             cur = self.root_mesh[ind[0]][ind[1]][ind[2]]
@@ -1377,12 +1227,15 @@
     @cython.boundscheck(False)
     @cython.wraparound(False)
     @cython.cdivision(True)
-    def count_neighbor_particles(self, ppos):
+    def count_neighbor_particles(self, oppos):
         #How many particles are in my neighborhood
+        cdef int i, ni, dl, tnp
+        cdef np.float64_t ppos[3]
+        for i in range(3):
+            ppos[i] = oppos[i]
         cdef Oct *main = self.get(ppos)
         cdef Oct* neighbors[27]
         self.neighbors(main, neighbors)
-        cdef int i, ni, dl, tnp
         tnp = 0
         for i in range(27):
             if neighbors[i].sd != NULL:
@@ -1409,4 +1262,83 @@
                 count[o.domain] += mask[oi,i]
         return count
 
+    def domain_and(self, np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                   int domain_id):
+        cdef np.int64_t i, oi, n, use
+        cdef Oct *o
+        cdef np.ndarray[np.uint8_t, ndim=2] m2 = \
+                np.zeros((mask.shape[0], 8), 'uint8')
+        n = mask.shape[0]
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                m2[o.domain_ind, i] = mask[o.domain_ind, i]
+        return m2
 
+    def domain_mask(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # What distinguishes this one from domain_and is that we have a mask,
+        # which covers the whole domain, but our output will only be of a much
+        # smaller subset of octs that belong to a given domain *and* the mask.
+        # Note also that typically when something calls domain_and, they will 
+        # use a logical_any along the oct axis.  Here we don't do that.
+        # Note also that we change the shape of the returned array.
+        cdef np.int64_t i, j, k, oi, n, nm, use
+        cdef Oct *o
+        n = mask.shape[0]
+        nm = 0
+        # This could perhaps be faster if we 
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            nm += use
+        cdef np.ndarray[np.uint8_t, ndim=4] m2 = \
+                np.zeros((2, 2, 2, nm), 'uint8')
+        nm = 0
+        for oi in range(n):
+            o = self.oct_list[oi]
+            if o.domain != domain_id: continue
+            use = 0
+            for i in range(2):
+                for j in range(2):
+                    for k in range(2):
+                        ii = ((k*2)+j)*2+i
+                        if mask[o.domain_ind, ii] == 0: continue
+                        use = m2[i, j, k, nm] = 1
+            nm += use
+        return m2.astype("bool")
+
+    def domain_ind(self,
+                    # mask is the base selector's *global* mask
+                    np.ndarray[np.uint8_t, ndim=2, cast=True] mask,
+                    int domain_id):
+        # Here we once again do something similar to the other functions.  We
+        # need a set of indices into the final reduced, masked values.  The
+        # indices will be domain.n long, and will be of type int64.  This way,
+        # we can get the Oct through a .get() call, then use Oct.file_ind as an
+        # index into this newly created array, then finally use the returned
+        # index into the domain subset array for deposition.
+        cdef np.int64_t i, j, k, oi, noct, n, nm, use, offset
+        cdef Oct *o
+        # For particle octrees, domain 0 is special and means non-leaf nodes.
+        offset = self.dom_offsets[domain_id + 1]
+        noct = self.dom_offsets[domain_id + 2] - offset
+        cdef np.ndarray[np.int64_t, ndim=1] ind = np.zeros(noct, 'int64')
+        nm = 0
+        for oi in range(noct):
+            ind[oi] = -1
+            o = self.oct_list[oi + offset]
+            use = 0
+            for i in range(8):
+                if mask[o.domain_ind, i] == 1: use = 1
+            if use == 1:
+                ind[oi] = nm
+            nm += use
+        return ind
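
The domain_ind routines above compress a per-oct selection mask into a dense
index array: octs with at least one selected cell get consecutive indices, and
everything else gets -1.  A minimal pure-NumPy sketch of that idea (not the yt
API):

    import numpy as np

    # Four octs with 8 cells each; only octs 1 and 3 have selected cells.
    mask = np.array([[0]*8, [1] + [0]*7, [0]*8, [1]*8], dtype="uint8")
    ind = np.zeros(mask.shape[0], "int64") - 1
    nm = 0
    for oi in range(mask.shape[0]):
        use = 1 if mask[oi].any() else 0
        if use == 1:
            ind[oi] = nm
        nm += use
    print(ind)   # -> [-1  0 -1  1]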

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/geometry/oct_geometry_handler.py
--- a/yt/geometry/oct_geometry_handler.py
+++ b/yt/geometry/oct_geometry_handler.py
@@ -54,7 +54,7 @@
         Returns (in code units) the smallest cell size in the simulation.
         """
         return (self.parameter_file.domain_width /
-                (2**self.max_level)).min()
+                (2**(self.max_level+1))).min()
 
     def convert(self, unit):
         return self.parameter_file.conversion_factors[unit]
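
The off-by-one fix above follows from the oct structure: an oct at level L has
width domain_width / 2**L, and its cells are half that again, so the smallest
cell width is domain_width / 2**(max_level + 1).  A quick check of the
arithmetic (assuming that level convention):

    domain_width = 1.0
    max_level = 3
    oct_width = domain_width / 2**max_level
    cell_width = oct_width / 2
    assert cell_width == domain_width / 2**(max_level + 1)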

diff -r b20f76ccd3c34bac9c187272593f9f49b58e7795 -r 1c607a2db7281ee3db313a707b62805b53cfad73 yt/geometry/particle_deposit.pxd
--- /dev/null
+++ b/yt/geometry/particle_deposit.pxd
@@ -0,0 +1,47 @@
+"""
+Particle Deposition onto Octs
+
+Author: Christopher Moody <chris.e.moody at gmail.com>
+Affiliation: UC Santa Cruz
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt.enzotools.org/
+License:
+  Copyright (C) 2013 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+cimport numpy as np
+import numpy as np
+from libc.stdlib cimport malloc, free
+cimport cython
+
+from fp_utils cimport *
+from oct_container cimport Oct, OctAllocationContainer, OctreeContainer
+
+cdef extern from "alloca.h":
+    void *alloca(int)
+
+cdef inline int gind(int i, int j, int k, int dims[3]):
+    return ((k*dims[1])+j)*dims[0]+i
+
+cdef class ParticleDepositOperation:
+    # We assume each will allocate and define their own temporary storage
+    cdef np.int64_t nvals
+    cdef void process(self, int dim[3], np.float64_t left_edge[3],
+                      np.float64_t dds[3], np.int64_t offset,
+                      np.float64_t ppos[3], np.float64_t *fields)
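
The gind() helper above flattens an (i, j, k) cell index into row-major order
with i varying fastest.  A quick sanity check in plain Python:

    def gind(i, j, k, dims):
        return ((k * dims[1]) + j) * dims[0] + i

    dims = (4, 3, 2)
    flat = [gind(i, j, k, dims) for k in range(dims[2])
            for j in range(dims[1]) for i in range(dims[0])]
    assert flat == list(range(4 * 3 * 2))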

This diff is so big that we needed to truncate the remainder.
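
Earlier in this diff, get() was reworked to descend the octree iteratively:
halve the cell width at each level, step the running center toward the query
point, and stop when there is no child to recurse into.  A hedged pure-Python
sketch of that traversal (Node, root_mesh, locate and the return convention
are illustrative stand-ins, not yt's API):

    import numpy as np

    class Node(object):
        def __init__(self):
            self.children = {}   # (i, j, k) octant tuples -> Node

    def locate(root_mesh, ppos, DLE, DRE, nn):
        # root_mesh: dict keyed by integer root-cell indices
        dds = (DRE - DLE) / nn                  # root cell width
        ind = tuple(int(x) for x in (ppos - DLE) // dds)
        cp = (np.array(ind) + 0.5) * dds + DLE  # root cell center
        node = nxt = root_mesh[ind]
        while nxt is not None:
            node = nxt
            octant = [0, 0, 0]
            for i in range(3):
                dds[i] /= 2.0                   # now the cell width
                if cp[i] > ppos[i]:
                    cp[i] -= dds[i] / 2.0       # step to the low-side child
                else:
                    octant[i] = 1
                    cp[i] += dds[i] / 2.0       # step to the high-side child
            nxt = node.children.get(tuple(octant))
        return node, cp, dds    # finest oct, leaf-cell center, cell width

    mesh = {(0, 0, 0): Node()}                  # a single unrefined root oct
    leaf, cc, dx = locate(mesh, np.array([0.3, 0.3, 0.3]),
                          np.zeros(3), np.ones(3), np.ones(3))
    print(cc, dx)   # -> [0.25 0.25 0.25] [0.5 0.5 0.5]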

https://bitbucket.org/yt_analysis/yt/commits/cdcd37b6f2fa/
Changeset:   cdcd37b6f2fa
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-24 22:18:20
Summary:     This fixes broken tests.  However, it relies on a .shape attribute.

In another PR, Doug has started removing the .shape attributes.  I am making a
note that we need a better solution to this -- for instance, we should be able
to query .ires, but unfortunately, because of how covering grids work right
now, we cannot.  I will be addressing this before we better support a
non-global mesh, which is on the roadmap for this summer.
Affected #:  1 file

diff -r 1c607a2db7281ee3db313a707b62805b53cfad73 -r cdcd37b6f2fa39e411569d1fde8972dbad7e737f yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -90,18 +90,18 @@
           display_field=False)
 
 def _Zeros(field, data):
-    return np.zeros(data.ActiveDimensions, dtype='float64')
+    return np.zeros(data.shape, dtype='float64')
 add_field("Zeros", function=_Zeros,
-          validators=[ValidateSpatial(0)],
           projection_conversion="unitary",
           display_field = False)
 
 def _Ones(field, data):
-    return np.ones(data.ires.size, dtype='float64')
+    return np.ones(data.shape, dtype='float64')
 add_field("Ones", function=_Ones,
           projection_conversion="unitary",
           display_field = False)
-add_field("CellsPerBin", function=_Ones, display_field = False)
+add_field("CellsPerBin", function=_Ones,
+          display_field = False)
 
 def _SoundSpeed(field, data):
     if data.pf["EOSType"] == 1:
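
The switch to data.shape reflects that derived fields are no longer always
evaluated on 3-D grid blocks: a spatial chunk hands the field function an
(nx, ny, nz) block, while a flat data container hands it a 1-D array.
Allocating with .shape covers both cases, where ActiveDimensions assumed a
grid.  Illustration only:

    import numpy as np

    for shape in [(8, 8, 8), (512,)]:   # spatial chunk vs. flat container
        ones = np.ones(shape, dtype="float64")
        print(ones.shape)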


https://bitbucket.org/yt_analysis/yt/commits/d1ab7a9f1bd8/
Changeset:   d1ab7a9f1bd8
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 19:55:58
Summary:     Adding support for specifying units to Tipsy and Gadget files.

This is somewhat reluctantly implemented, and will change considerably once
YTEP-0011 is in place.  Examples to follow.

Note also that I have changed the __repr__ for Tipsy and Gadget so that it no
longer splits based on the filename.
Affected #:  1 file

diff -r cdcd37b6f2fa39e411569d1fde8972dbad7e737f -r d1ab7a9f1bd8372ab793566750e78b0110b6bfdd yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -234,7 +234,36 @@
         self.field_offsets = self.io._calculate_field_offsets(
                 field_list, self.total_particles)
 
-class GadgetStaticOutput(StaticOutput):
+class ParticleStaticOutput(StaticOutput):
+    _unit_base = None
+
+    def _set_units(self):
+        self.units = {}
+        self.time_units = {}
+        self.conversion_factors = {}
+        self.units['1'] = 1.0
+        self.units['unitary'] = (self.domain_right_edge -
+                                 self.domain_left_edge).max()
+        # Check 
+        base = None
+        mpch = {}
+        mpch.update(mpc_conversion)
+        unit_base = self._unit_base or {}
+        for unit in mpc_conversion:
+            mpch['%sh' % unit] = mpch[unit] / self.hubble_constant
+            mpch['%shcm' % unit] = (mpch["%sh" % unit] / 
+                    (1 + self.current_redshift))
+            mpch['%scm' % unit] = mpch[unit] / (1 + self.current_redshift)
+        for unit_registry in [mpch, sec_conversion]:
+            for unit in sorted(unit_base):
+                if unit in unit_registry:
+                    base = unit_registry[unit] / unit_registry['mpc'] 
+                    break
+            if base is None: continue
+            for unit in unit_registry:
+                self.units[unit] = unit_registry[unit] / base
+
+class GadgetStaticOutput(ParticleStaticOutput):
     _hierarchy_class = ParticleGeometryHandler
     _domain_class = GadgetBinaryDomainFile
     _fieldinfo_fallback = GadgetFieldInfo
@@ -258,20 +287,17 @@
                     ('unused', 16, 'i') )
 
     def __init__(self, filename, data_style="gadget_binary",
-                 additional_fields = (), root_dimensions = 64):
+                 additional_fields = (), root_dimensions = 64,
+                 unit_base = None):
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        self._unit_base = unit_base
         super(GadgetStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
         return os.path.basename(self.parameter_filename).split(".")[0]
 
-    def _set_units(self):
-        self.units = {}
-        self.time_units = {}
-        self.conversion_factors = {}
-
     def _parse_parameter_file(self):
 
         # The entries in this header are capitalized and named to match Table 4
@@ -337,7 +363,7 @@
         io._create_dtypes(self)
 
 
-class TipsyStaticOutput(StaticOutput):
+class TipsyStaticOutput(ParticleStaticOutput):
     _hierarchy_class = ParticleGeometryHandler
     _domain_class = TipsyDomainFile
     _fieldinfo_fallback = TipsyFieldInfo
@@ -354,7 +380,8 @@
                  root_dimensions = 64, endian = ">",
                  field_dtypes = None,
                  domain_left_edge = None,
-                 domain_right_edge = None):
+                 domain_right_edge = None,
+                 unit_base = None):
         self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
@@ -372,10 +399,11 @@
         if field_dtypes is None: field_dtypes = {}
         self._field_dtypes = field_dtypes
 
+        unit_base = self._unit_base or {}
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
-        return os.path.basename(self.parameter_filename).split(".")[0]
+        return os.path.basename(self.parameter_filename)
 
     def _set_units(self):
         self.units = {}
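
The _set_units() above builds an extended length-unit table from the plain
mpc-based one: each unit gains an 'h' variant scaled by the Hubble parameter
and 'cm' / 'hcm' comoving variants scaled by 1 + z.  A minimal sketch of that
loop with illustrative numbers (note that a later changeset in this series
flips the h scaling from division to multiplication):

    mpc_conversion = {"mpc": 1.0, "kpc": 1e3, "pc": 1e6}  # units per Mpc
    hubble_constant, current_redshift = 0.7, 2.0

    mpch = dict(mpc_conversion)
    for unit in mpc_conversion:
        mpch["%sh" % unit] = mpch[unit] / hubble_constant
        mpch["%shcm" % unit] = mpch["%sh" % unit] / (1 + current_redshift)
        mpch["%scm" % unit] = mpch[unit] / (1 + current_redshift)
    print(sorted(mpch))   # kpc, kpccm, kpch, kpchcm, mpc, ...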


https://bitbucket.org/yt_analysis/yt/commits/b977da78640e/
Changeset:   b977da78640e
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 19:56:24
Summary:     For the case where we have multiple fluids, we need to call _get_field_info
when generating the field.
Affected #:  1 file

diff -r d1ab7a9f1bd8372ab793566750e78b0110b6bfdd -r b977da78640e958de7897fc1aa4535440504f3a8 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -228,17 +228,18 @@
     def _generate_fluid_field(self, field):
         # First we check the validator
         ftype, fname = field
+        finfo = self.pf._get_field_info(ftype, fname)
         if self._current_chunk is None or \
            self._current_chunk.chunk_type != "spatial":
             gen_obj = self
         else:
             gen_obj = self._current_chunk.objs[0]
         try:
-            self.pf.field_info[fname].check_available(gen_obj)
+            finfo.check_available(gen_obj)
         except NeedsGridType as ngt_exception:
             rv = self._generate_spatial_fluid(field, ngt_exception.ghost_zones)
         else:
-            rv = self.pf.field_info[fname](gen_obj)
+            rv = finfo(gen_obj)
         return rv
 
     def _generate_spatial_fluid(self, field, ngz):
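
The reason for routing through _get_field_info: with multiple fluid types,
fields are keyed by (ftype, fname) tuples, and two types can carry the same
field name, so a bare-name lookup in field_info is ambiguous.  A toy registry
(a plain dict, not yt's FieldInfoContainer) to make the point:

    registry = {
        ("gas", "Density"): "gas density handler",
        ("deposit", "Density"): "deposited density handler",
    }

    def get_field_info(ftype, fname):
        # the name alone is ambiguous; the type disambiguates
        return registry[ftype, fname]

    print(get_field_info("gas", "Density"))
    print(get_field_info("deposit", "Density"))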


https://bitbucket.org/yt_analysis/yt/commits/c04157024e47/
Changeset:   c04157024e47
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 19:56:48
Summary:     Adding another field type, "deposit".
Affected #:  1 file

diff -r b977da78640e958de7897fc1aa4535440504f3a8 -r c04157024e47cf52b28130e5d0359e5a3feb9152 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -55,7 +55,7 @@
 class StaticOutput(object):
 
     default_fluid_type = "gas"
-    fluid_types = ("gas",)
+    fluid_types = ("gas","deposit")
     particle_types = ("all",)
     geometry = "cartesian"
     coordinates = None


https://bitbucket.org/yt_analysis/yt/commits/79eeeec29e2c/
Changeset:   79eeeec29e2c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 19:57:30
Summary:     Adding some particle deposit fields for Gadget and Tipsy.

This also adds a Mass field for Gadget, which will grab from Massarr if needed.
Affected #:  1 file

diff -r c04157024e47cf52b28130e5d0359e5a3feb9152 -r 79eeeec29e2cb793b570c3df12544e19cb1987e9 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -54,3 +54,49 @@
 KnownTipsyFields = FieldInfoContainer()
 add_tipsy_field = KnownTipsyFields.add_field
 
+def _particle_functions(ptype, coord_name, mass_name, registry):
+    def particle_count(field, data):
+        pos = data[ptype, coord_name]
+        d = data.deposit(pos, method = "count")
+        return d
+    registry.add_field(("deposit", "%s_count" % ptype),
+             function = particle_count,
+             validators = [ValidateSpatial()],
+             projection_conversion = '1')
+
+    def particle_density(field, data):
+        pos = data[ptype, coord_name]
+        d = data.deposit(pos, [data[ptype, mass_name]], method = "sum")
+        d /= data["CellVolume"]
+        return d
+
+    registry.add_field(("deposit", "%s_density" % ptype),
+             function = particle_density,
+             validators = [ValidateSpatial()],
+             units = r"\mathrm{g}/\mathrm{cm}^{3}",
+             projection_conversion = 'cm')
+
+for ptype in ["Gas", "DarkMatter", "Stars"]:
+    _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)
+   
+# GADGET
+# ======
+
+# Among other things we need to set up Coordinates
+
+_gadget_ptypes = ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")
+
+def _gadget_particle_fields(ptype):
+    def _Mass(field, data):
+        pind = _gadget_ptypes.index(ptype)
+        if data.pf["Massarr"][pind] == 0.0:
+            return data[ptype, "Masses"]
+        mass = np.ones(data[ptype, "Coordinates"].shape[0], dtype="float64")
+        mass *= data.pf["Massarr"][pind]
+        return mass
+    GadgetFieldInfo.add_field((ptype, "Mass"), function=_Mass,
+                              particle_type = True)
+
+for ptype in _gadget_ptypes:
+    _gadget_particle_fields(ptype)
+    _particle_functions(ptype, "Coordinates", "Mass", GadgetFieldInfo)
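
_particle_functions and _gadget_particle_fields both use the same closure
pattern: a factory binds ptype into per-type field functions, so one
definition serves every particle type.  Stripped to its essentials (the
registry is a plain dict and the deposit step a stand-in, not yt's API):

    import numpy as np

    registry = {}

    def make_count_field(ptype):
        def particle_count(data):
            pos = data[ptype, "Coordinates"]
            return np.asarray(pos).shape[0]   # stand-in for data.deposit()
        registry["deposit", "%s_count" % ptype] = particle_count

    for ptype in ("Gas", "DarkMatter", "Stars"):
        make_count_field(ptype)

    data = {("Gas", "Coordinates"): np.zeros((5, 3))}
    print(registry["deposit", "Gas_count"](data))   # -> 5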


https://bitbucket.org/yt_analysis/yt/commits/50470049dd08/
Changeset:   50470049dd08
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 20:02:09
Summary:     Removing Tipsy's _set_units.
Affected #:  1 file

diff -r 79eeeec29e2cb793b570c3df12544e19cb1987e9 -r 50470049dd08944284627e0f1231495534a7e2b1 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -405,13 +405,6 @@
     def __repr__(self):
         return os.path.basename(self.parameter_filename)
 
-    def _set_units(self):
-        self.units = {}
-        self.time_units = {}
-        self.conversion_factors = {}
-        DW = self.domain_right_edge - self.domain_left_edge
-        self.units["unitary"] = 1.0 / DW.max()
-
     def _parse_parameter_file(self):
 
         # The entries in this header are capitalized and named to match Table 4


https://bitbucket.org/yt_analysis/yt/commits/c003ed58ae1d/
Changeset:   c003ed58ae1d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 20:21:43
Summary:     Adding cosmology_parameters to Tipsy.  Fixing the unit ratio for Gadget and Tipsy.
Affected #:  1 file

diff -r 50470049dd08944284627e0f1231495534a7e2b1 -r c003ed58ae1d4146faa96f83320adb009c51d79b yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -257,7 +257,8 @@
         for unit_registry in [mpch, sec_conversion]:
             for unit in sorted(unit_base):
                 if unit in unit_registry:
-                    base = unit_registry[unit] / unit_registry['mpc'] 
+                    ratio = (unit_registry[unit] / unit_registry['mpc'] )
+                    base = unit_base[unit] * ratio
                     break
             if base is None: continue
             for unit in unit_registry:
@@ -381,7 +382,8 @@
                  field_dtypes = None,
                  domain_left_edge = None,
                  domain_right_edge = None,
-                 unit_base = None):
+                 unit_base = None,
+                 cosmology_parameters = None):
         self.endian = endian
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
@@ -399,7 +401,8 @@
         if field_dtypes is None: field_dtypes = {}
         self._field_dtypes = field_dtypes
 
-        unit_base = self._unit_base or {}
+        self._unit_base = unit_base or {}
+        self._cosmology_parameters = cosmology_parameters
         super(TipsyStaticOutput, self).__init__(filename, data_style)
 
     def __repr__(self):
@@ -434,10 +437,16 @@
 
         self.cosmological_simulation = 1
 
-        self.current_redshift = 0.0
-        self.omega_lambda = 0.0
-        self.omega_matter = 0.0
-        self.hubble_constant = 0.0
+        cosm = self._cosmology_parameters or {}
+        dcosm = dict(current_redshift = 0.0,
+                     omega_lambda = 0.0,
+                     omega_matter = 0.0,
+                     hubble_constant = 1.0)
+        for param in ['current_redshift', 'omega_lambda',
+                      'omega_matter', 'hubble_constant']:
+            pval = cosm.get(param, dcosm[param])
+            setattr(self, param, pval)
+
         self.parameters = hvals
 
         self.domain_template = self.parameter_filename
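
The cosmology handling above is a small defaults-merge: user-supplied values
override a dict of defaults, parameter by parameter.  The same pattern in
isolation:

    cosm = {"hubble_constant": 0.7}            # user-supplied subset
    dcosm = dict(current_redshift=0.0, omega_lambda=0.0,
                 omega_matter=0.0, hubble_constant=1.0)
    params = dict((k, cosm.get(k, v)) for k, v in dcosm.items())
    print(params["hubble_constant"])   # 0.7, from the user
    print(params["omega_matter"])      # 0.0, the default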


https://bitbucket.org/yt_analysis/yt/commits/6e369093aeca/
Changeset:   6e369093aeca
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 21:22:51
Summary:     Fix up the time unit setup for particle files.

I have also re-implemented a portion of the Tipsy units module, which I will now
merge with the one I implemented a week ago.

Added several known Tipsy fields.
Affected #:  3 files

diff -r c003ed58ae1d4146faa96f83320adb009c51d79b -r 6e369093aeca7ad1e5be70725939d576eb2c4fe1 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -44,6 +44,9 @@
     OctreeSubset
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
+from yt.utilities.physical_constants import \
+    G
+from yt.utilities.cosmology import Cosmology
 from .fields import \
     OWLSFieldInfo, \
     KnownOWLSFields, \
@@ -254,15 +257,17 @@
             mpch['%shcm' % unit] = (mpch["%sh" % unit] / 
                     (1 + self.current_redshift))
             mpch['%scm' % unit] = mpch[unit] / (1 + self.current_redshift)
-        for unit_registry in [mpch, sec_conversion]:
+        # ud == unit destination
+        # ur == unit registry
+        for ud, ur in [(self.units, mpch), (self.time_units, sec_conversion)]:
             for unit in sorted(unit_base):
-                if unit in unit_registry:
-                    ratio = (unit_registry[unit] / unit_registry['mpc'] )
+                if unit in ur:
+                    ratio = (ur[unit] / ur['mpc'] )
                     base = unit_base[unit] * ratio
                     break
             if base is None: continue
-            for unit in unit_registry:
-                self.units[unit] = unit_registry[unit] / base
+            for unit in ur:
+                ud[unit] = ur[unit] / base
 
 class GadgetStaticOutput(ParticleStaticOutput):
     _hierarchy_class = ParticleGeometryHandler
@@ -454,6 +459,22 @@
 
         f.close()
 
+    def _set_units(self):
+        super(TipsyStaticOutput, self)._set_units()
+        DW = (self.domain_right_edge - self.domain_left_edge).max()
+        cosmo = Cosmology(self.hubble_constant * 100.0,
+                          self.omega_matter, self.omega_lambda)
+        length_unit = DW * self.units['cm'] # Get it in proper cm
+        density_unit = cosmo.CriticalDensity(self.current_redshift)
+        mass_unit = density_unit * length_unit**3
+        time_unit = 1.0 / np.sqrt(G*density_unit)
+        velocity_unit = length_unit / time_unit
+        self.conversion_factors["velocity"] = velocity_unit
+        self.conversion_factors["mass"] = mass_unit
+        self.conversion_factors["density"] = density_unit
+        for u in sec_conversion:
+            self.time_units[u] = time_unit * sec_conversion[u]
+
     @classmethod
     def _is_valid(self, *args, **kwargs):
         # We do not allow load() of these files.

diff -r c003ed58ae1d4146faa96f83320adb009c51d79b -r 6e369093aeca7ad1e5be70725939d576eb2c4fe1 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -33,7 +33,9 @@
     ValidateDataField, \
     ValidateProperty, \
     ValidateSpatial, \
-    ValidateGridType
+    ValidateGridType, \
+    NullFunc, \
+    TranslationFunc
 import yt.data_objects.universal_fields
 
 OWLSFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
@@ -76,8 +78,21 @@
              units = r"\mathrm{g}/\mathrm{cm}^{3}",
              projection_conversion = 'cm')
 
+
+def _get_conv(cf):
+    def _convert(data):
+        return data.convert(cf)
+
 for ptype in ["Gas", "DarkMatter", "Stars"]:
     _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)
+    KnownTipsyFields.add_field((ptype, "Mass"), function=NullFunc,
+        particle_type = True,
+        convert_function=_get_conv("mass"),
+        units = r"\mathrm{g}")
+    KnownTipsyFields.add_field((ptype, "Velocities"), function=NullFunc,
+        particle_type = True,
+        convert_function=_get_conv("velocity"),
+        units = r"\mathrm{cm}/\mathrm{s}")
    
 # GADGET
 # ======

diff -r c003ed58ae1d4146faa96f83320adb009c51d79b -r 6e369093aeca7ad1e5be70725939d576eb2c4fe1 yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -109,6 +109,8 @@
         return (self.AngularDiameterDistance(z_i,z_f) / 648. * np.pi)
 
     def CriticalDensity(self,z):
+        return ( (3.0 * (self.HubbleConstantNow / kmPerMpc)**2.0)
+               / (8.0 * np.pi * G) )
         return (3.0 / 8.0 / np.pi * sqr(self.HubbleConstantNow / kmPerMpc) / G *
                 (self.OmegaLambdaNow + ((1 + z)**3.0) * self.OmegaMatterNow))
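
Two things are worth noting in the units above.  First, the unit chain: with
the box size in cm and the critical density in g/cm**3, the mass, time and
velocity units all follow, and 1/sqrt(G*rho) really is a time (G in cgs is
cm**3 g**-1 s**-2, so G*rho carries s**-2).  Second, the new early return in
CriticalDensity leaves the redshift-dependent expression below it unreachable.
A numeric sketch of the unit chain (values illustrative):

    import numpy as np

    G = 6.674e-8                   # cgs: cm^3 g^-1 s^-2
    length_unit = 3.0857e26        # e.g. a 100 Mpc box, in cm
    density_unit = 1.88e-29        # ~critical density for h = 1, in g/cm^3
    mass_unit = density_unit * length_unit**3
    time_unit = 1.0 / np.sqrt(G * density_unit)
    velocity_unit = length_unit / time_unit
    print(time_unit / 3.15e16)     # -> ~28, i.e. about 28 Gyr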
 


https://bitbucket.org/yt_analysis/yt/commits/b94173eb9dc1/
Changeset:   b94173eb9dc1
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 21:25:10
Summary:     Merging, including the calculate_tipsy_units function
Affected #:  2 files

diff -r 6e369093aeca7ad1e5be70725939d576eb2c4fe1 -r b94173eb9dc11c1bdab3e385fd52838753416443 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -45,7 +45,9 @@
 from yt.utilities.definitions import \
     mpc_conversion, sec_conversion
 from yt.utilities.physical_constants import \
-    G
+    G, \
+    gravitational_constant_cgs, \
+    km_per_pc
 from yt.utilities.cosmology import Cosmology
 from .fields import \
     OWLSFieldInfo, \
@@ -479,3 +481,26 @@
     def _is_valid(self, *args, **kwargs):
         # We do not allow load() of these files.
         return False
+
+    @classmethod
+    def calculate_tipsy_units(self, hubble_constant, box_size):
+        # box_size in cm, or else in a unit we can convert to cm
+        # hubble_constant is assumed to be in units scaled to 100 km / s / Mpc
+        hubble_hertz = hubble_constant / (km_per_pc * 1e4)
+        if isinstance(box_size, types.TupleType):
+            if not isinstance(box_size[1], types.StringTypes):
+                raise RuntimeError
+            conversion = getattr(pcons, "cm_per_%s" % box_size[1], None)
+            if conversion is None:
+                raise RuntimeError
+            box_size = box_size[0] * conversion
+        print hubble_hertz, box_size
+        units = {}
+        units['length'] = box_size
+        units['density'] = 3.0 * hubble_hertz**2 / \
+                          (8.0 * np.pi * gravitational_constant_cgs)
+        # density is in g/cm^3
+        units['mass'] = units['density'] * units['length']**3.0
+        units['time'] = 1.0 / np.sqrt(gravitational_constant_cgs * units['density'])
+        units['velocity'] = units['length'] / units['time']
+        return units

diff -r 6e369093aeca7ad1e5be70725939d576eb2c4fe1 -r b94173eb9dc11c1bdab3e385fd52838753416443 yt/utilities/physical_constants.py
--- a/yt/utilities/physical_constants.py
+++ b/yt/utilities/physical_constants.py
@@ -42,7 +42,7 @@
 mpc_per_miles = 5.21552871e-20
 mpc_per_cm    = 3.24077929e-25
 kpc_per_cm    = mpc_per_cm / mpc_per_kpc
-km_per_pc     = 1.3806504e13
+km_per_pc     = 3.08567758e13
 km_per_m      = 1e-3
 km_per_cm     = 1e-5
 pc_per_cm     = 3.24077929e-19
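
As a sanity check on the corrected constant: 1 pc = 3.08567758e18 cm =
3.08567758e13 km, consistent with km_per_cm above.  The old value appears to
have been the digits of Boltzmann's constant (1.3806504e-16 erg/K) with the
wrong exponent.

    # Quick consistency check, assuming only the constants shown above:
    cm_per_pc = 3.08567758e18
    km_per_cm = 1e-5
    print km_per_cm * cm_per_pc   # 3.08567758e13, matching km_per_pc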


https://bitbucket.org/yt_analysis/yt/commits/2b801eff5ff8/
Changeset:   2b801eff5ff8
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 21:37:23
Summary:     Removed calculate_tipsy_units and fixed /h length units
Affected #:  1 file

diff -r b94173eb9dc11c1bdab3e385fd52838753416443 -r 2b801eff5ff81e00890781251a1a200616058894 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -255,7 +255,7 @@
         mpch.update(mpc_conversion)
         unit_base = self._unit_base or {}
         for unit in mpc_conversion:
-            mpch['%sh' % unit] = mpch[unit] / self.hubble_constant
+            mpch['%sh' % unit] = mpch[unit] * self.hubble_constant
             mpch['%shcm' % unit] = (mpch["%sh" % unit] / 
                     (1 + self.current_redshift))
             mpch['%scm' % unit] = mpch[unit] / (1 + self.current_redshift)
@@ -481,26 +481,3 @@
     def _is_valid(self, *args, **kwargs):
         # We do not allow load() of these files.
         return False
-
-    @classmethod
-    def calculate_tipsy_units(self, hubble_constant, box_size):
-        # box_size in cm, or else in a unit we can convert to cm
-        # hubble_constant is assumed to be in units scaled to 100 km / s / Mpc
-        hubble_hertz = hubble_constant / (km_per_pc * 1e4)
-        if isinstance(box_size, types.TupleType):
-            if not isinstance(box_size[1], types.StringTypes):
-                raise RuntimeError
-            conversion = getattr(pcons, "cm_per_%s" % box_size[1], None)
-            if conversion is None:
-                raise RuntimeError
-            box_size = box_size[0] * conversion
-        print hubble_hertz, box_size
-        units = {}
-        units['length'] = box_size
-        units['density'] = 3.0 * hubble_hertz**2 / \
-                          (8.0 * np.pi * gravitational_constant_cgs)
-        # density is in g/cm^3
-        units['mass'] = units['density'] * units['length']**3.0
-        units['time'] = 1.0 / np.sqrt(gravitational_constant_cgs * units['density'])
-        units['velocity'] = units['length'] / units['time']
-        return units
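
For reference on the direction of the /h fix above: a length of X Mpc
expressed in Mpc/h is X * h, since 1 Mpc/h = (1/h) Mpc, hence the conversion
factor is now multiplied by the hubble constant rather than divided.

    # Minimal numeric check of the /h convention:
    h = 0.7
    length_mpc = 100.0
    print length_mpc * h   # 70.0 Mpc/h for a 100 Mpc length at h = 0.7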


https://bitbucket.org/yt_analysis/yt/commits/a7f1fe1a3873/
Changeset:   a7f1fe1a3873
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 21:43:44
Summary:     Have to return the _convert function
Affected #:  1 file

diff -r 2b801eff5ff81e00890781251a1a200616058894 -r a7f1fe1a38730dbbe55f836fee5dbacb5809f878 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -82,6 +82,7 @@
 def _get_conv(cf):
     def _convert(data):
         return data.convert(cf)
+    return _convert
 
 for ptype in ["Gas", "DarkMatter", "Stars"]:
     _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)


https://bitbucket.org/yt_analysis/yt/commits/f1ff005e52c7/
Changeset:   f1ff005e52c7
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-30 22:40:12
Summary:     Setting up Gadget units, with a few reasonable defaults.
Affected #:  2 files

diff -r a7f1fe1a38730dbbe55f836fee5dbacb5809f878 -r f1ff005e52c7d447270790dbb5a7c21e421ff189 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -300,6 +300,8 @@
         self._root_dimensions = root_dimensions
         # Set up the template for domain files
         self.storage_filename = None
+        if unit_base is not None and "UnitLength_in_cm" in unit_base:
+            unit_base['cm'] = unit_base["UnitLength_in_cm"]
         self._unit_base = unit_base
         super(GadgetStaticOutput, self).__init__(filename, data_style)
 
@@ -352,6 +354,22 @@
 
         f.close()
 
+    def _set_units(self):
+        super(GadgetStaticOutput, self)._set_units()
+        length_unit = self.units['cm']
+        unit_base = self._unit_base or {}
+        velocity_unit = unit_base.get("velocity", 1e5)
+        velocity_unit = unit_base.get("UnitVelocity_in_cm_per_s", velocity_unit)
+        mass_unit = unit_base.get("g", 1.989e43 / self.hubble_constant)
+        mass_unit = unit_base.get("UnitMass_in_g", mass_unit)
+        time_unit = length_unit / velocity_unit
+        self.conversion_factors["velocity"] = velocity_unit
+        self.conversion_factors["mass"] = mass_unit
+        self.conversion_factors["density"] = mass_unit / length_unit**3
+        #import pdb; pdb.set_trace()
+        for u in sec_conversion:
+            self.time_units[u] = time_unit * sec_conversion[u]
+
     @classmethod
     def _is_valid(self, *args, **kwargs):
         # We do not allow load() of these files.

diff -r a7f1fe1a38730dbbe55f836fee5dbacb5809f878 -r f1ff005e52c7d447270790dbb5a7c21e421ff189 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -37,6 +37,8 @@
     NullFunc, \
     TranslationFunc
 import yt.data_objects.universal_fields
+from yt.utilities.physical_constants import \
+    mass_sun_cgs
 
 OWLSFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
 add_owls_field = OWLSFieldInfo.add_field
@@ -64,6 +66,7 @@
     registry.add_field(("deposit", "%s_count" % ptype),
              function = particle_count,
              validators = [ValidateSpatial()],
+             display_name = "\\mathrm{%s Count}" % ptype,
              projection_conversion = '1')
 
     def particle_density(field, data):
@@ -75,9 +78,28 @@
     registry.add_field(("deposit", "%s_density" % ptype),
              function = particle_density,
              validators = [ValidateSpatial()],
+             display_name = "\\mathrm{%s Density}" % ptype,
              units = r"\mathrm{g}/\mathrm{cm}^{3}",
+             projected_units = r"\mathrm{g}/\mathrm{cm}^{-2}",
              projection_conversion = 'cm')
 
+    # Now some translation functions.
+
+    registry.add_field((ptype, "ParticleMass"),
+            function = TranslationFunc((ptype, mass_name)),
+            particle_type = True,
+            units = r"\mathrm{g}")
+
+    def _ParticleMassMsun(field, data):
+        return data[ptype, mass_name].copy()
+    def _conv_Msun(data):
+        return 1.0/mass_sun_cgs
+
+    registry.add_field((ptype, "ParticleMassMsun"),
+            function = _ParticleMassMsun,
+            convert_function = _conv_Msun,
+            particle_type = True,
+            units = r"\mathrm{M}_\odot")
 
 def _get_conv(cf):
     def _convert(data):
@@ -85,7 +107,6 @@
     return _convert
 
 for ptype in ["Gas", "DarkMatter", "Stars"]:
-    _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)
     KnownTipsyFields.add_field((ptype, "Mass"), function=NullFunc,
         particle_type = True,
         convert_function=_get_conv("mass"),
@@ -94,7 +115,10 @@
         particle_type = True,
         convert_function=_get_conv("velocity"),
         units = r"\mathrm{cm}/\mathrm{s}")
-   
+    # Note that we have to do this last so that TranslationFunc operates
+    # correctly.
+    _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)
+
 # GADGET
 # ======
 
@@ -106,13 +130,23 @@
     def _Mass(field, data):
         pind = _gadget_ptypes.index(ptype)
         if data.pf["Massarr"][pind] == 0.0:
-            return data[ptype, "Masses"]
+            return data[ptype, "Masses"].copy()
         mass = np.ones(data[ptype, "Coordinates"].shape[0], dtype="float64")
-        mass *= data.pf["Massarr"][pind]
+        # Note that this is an alias, which is why we need to apply conversion
+        # here.  Otherwise we'd have an asymmetry.
+        mass *= data.pf["Massarr"][pind] * data.convert("mass")
         return mass
     GadgetFieldInfo.add_field((ptype, "Mass"), function=_Mass,
                               particle_type = True)
 
 for ptype in _gadget_ptypes:
     _gadget_particle_fields(ptype)
+    KnownGadgetFields.add_field((ptype, "Masses"), function=NullFunc,
+        particle_type = True,
+        convert_function=_get_conv("mass"),
+        units = r"\mathrm{g}")
+    KnownGadgetFields.add_field((ptype, "Velocities"), function=NullFunc,
+        particle_type = True,
+        convert_function=_get_conv("velocity"),
+        units = r"\mathrm{cm}/\mathrm{s}")
     _particle_functions(ptype, "Coordinates", "Mass", GadgetFieldInfo)
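
A rough sketch of how the unit_base fallbacks in _set_units resolve; the
snapshot filename here is hypothetical.  The Gadget-convention keys
("UnitVelocity_in_cm_per_s", "UnitMass_in_g") take precedence over the
generic ones ("velocity", "g"), and anything not supplied falls back to the
defaults above (1e5 cm/s for velocity, 1e10 Msun/h for mass).

    unit_base = {"UnitLength_in_cm": 3.08567758e24,   # 1 Mpc
                 "UnitVelocity_in_cm_per_s": 1e5,     # 1 km/s
                 "UnitMass_in_g": 1.989e43}           # 1e10 Msun
    pf = GadgetStaticOutput("snapshot_010", unit_base=unit_base)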


https://bitbucket.org/yt_analysis/yt/commits/469b511ff712/
Changeset:   469b511ff712
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 00:28:11
Summary:     Adding an "all" mass field for Gadget.

This also includes a fix for getting the most recently used field info, field
detection for tuple-based selection, and a slight improvement to the speed of
IO for particle masses.  A more generic solution is needed for the "all"
particle type, which should probably get pushed into the particle IO handler at
a higher level.
Affected #:  5 files

diff -r f1ff005e52c7d447270790dbb5a7c21e421ff189 -r 469b511ff71298d9fb0019ca08428a1b0bf5651c yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -265,15 +265,22 @@
                 + 1e-4*np.random.random((nd * nd * nd)))
 
     def __missing__(self, item):
-        FI = getattr(self.pf, "field_info", FieldInfo)
-        if FI.has_key(item) and FI[item]._function.func_name != 'NullFunc':
+        if hasattr(self.pf, "field_info") and isinstance(item, tuple):
+            finfo = self.pf._get_field_info(*item)
+        else:
+            FI = getattr(self.pf, "field_info", FieldInfo)
+            if item in FI:
+                finfo = FI[item]
+            else:
+                finfo = None
+        if finfo is not None and finfo._function.func_name != 'NullFunc':
             try:
-                vv = FI[item](self)
+                vv = finfo(self)
             except NeedsGridType as exc:
                 ngz = exc.ghost_zones
                 nfd = FieldDetector(self.nd + ngz * 2)
                 nfd._num_ghost_zones = ngz
-                vv = FI[item](nfd)
+                vv = finfo(nfd)
                 if ngz > 0: vv = vv[ngz:-ngz, ngz:-ngz, ngz:-ngz]
                 for i in nfd.requested:
                     if i not in self.requested: self.requested.append(i)

diff -r f1ff005e52c7d447270790dbb5a7c21e421ff189 -r 469b511ff71298d9fb0019ca08428a1b0bf5651c yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -255,12 +255,14 @@
         if ftype == "unknown" and self._last_freq[0] != None:
             ftype = self._last_freq[0]
         field = (ftype, fname)
-        if field == self._last_freq or fname == self._last_freq[1]:
+        if field == self._last_freq:
             return self._last_finfo
         if field in self.field_info:
             self._last_freq = field
             self._last_finfo = self.field_info[(ftype, fname)]
             return self._last_finfo
+        if fname == self._last_freq[1]:
+            return self._last_finfo
         if fname in self.field_info:
             self._last_freq = field
             self._last_finfo = self.field_info[fname]

diff -r f1ff005e52c7d447270790dbb5a7c21e421ff189 -r 469b511ff71298d9fb0019ca08428a1b0bf5651c yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -124,6 +124,7 @@
         self.field_list = pfl
         pf = self.parameter_file
         pf.particle_types = tuple(set(pt for pt, pf in pfl))
+        pf.particle_types += ('all',)
     
     def _setup_classes(self):
         dd = self._get_data_reader_dict()

diff -r f1ff005e52c7d447270790dbb5a7c21e421ff189 -r 469b511ff71298d9fb0019ca08428a1b0bf5651c yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -126,12 +126,12 @@
 
 _gadget_ptypes = ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")
 
-def _gadget_particle_fields(ptype):
+def _gadget_particle_fields(_ptype):
     def _Mass(field, data):
         pind = _gadget_ptypes.index(ptype)
         if data.pf["Massarr"][pind] == 0.0:
             return data[ptype, "Masses"].copy()
-        mass = np.ones(data[ptype, "Coordinates"].shape[0], dtype="float64")
+        mass = np.ones(data[ptype, "ParticleIDs"].shape[0], dtype="float64")
         # Note that this is an alias, which is why we need to apply conversion
         # here.  Otherwise we'd have an asymmetry.
         mass *= data.pf["Massarr"][pind] * data.convert("mass")
@@ -139,6 +139,16 @@
     GadgetFieldInfo.add_field((ptype, "Mass"), function=_Mass,
                               particle_type = True)
 
+def _AllGadgetMass(field, data):
+    v = []
+    for ptype in data.pf.particle_types:
+        if ptype == "all": continue
+        v.append(data[ptype, "Mass"].copy())
+    masses = np.concatenate(v)
+    return masses
+GadgetFieldInfo.add_field(("all", "Mass"), function=_AllGadgetMass,
+        particle_type = True, units = r"\mathrm{g}")
+
 for ptype in _gadget_ptypes:
     _gadget_particle_fields(ptype)
     KnownGadgetFields.add_field((ptype, "Masses"), function=NullFunc,
@@ -150,3 +160,5 @@
         convert_function=_get_conv("velocity"),
         units = r"\mathrm{cm}/\mathrm{s}")
     _particle_functions(ptype, "Coordinates", "Mass", GadgetFieldInfo)
+#_gadget_particle_fields("all")
+_particle_functions("all", "Coordinates", "Mass", GadgetFieldInfo)

diff -r f1ff005e52c7d447270790dbb5a7c21e421ff189 -r 469b511ff71298d9fb0019ca08428a1b0bf5651c yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -163,12 +163,32 @@
         rv = {}
         # We first need a set of masks for each particle type
         ptf = defaultdict(list)
+        ptall = []
         psize = defaultdict(lambda: 0)
         chunks = list(chunks)
+        pf = chunks[0].objs[0].domain.pf
+        aptypes = [ptype for ptype in pf.particle_types if ptype != "all"]
         ptypes = set()
         for ftype, fname in fields:
-            ptf[ftype].append(fname)
-            ptypes.add(ftype)
+            if ftype == "all":
+                # We simply read everything and concatenate them. 
+                ptall += [(ptype, fname) for ptype in aptypes]
+            else:
+                ptf[ftype].append(fname)
+                ptypes.add(ftype)
+        if len(ptall) > 0:
+            fv = self._read_particle_selection(chunks, selector, ptall)
+            # Now we have a dict of the return values, and we need to
+            # concatenate
+            fields = set(fname for ftype, fname in fv)
+            for field in fields:
+                v = []
+                for ptype in aptypes:
+                    va = fv.pop((ptype, field), None)
+                    if va is None: continue
+                    v.append(va)
+                rv["all", field] = np.concatenate(v, axis=0)
+        if len(ptf) == 0: return rv
         ptypes = list(ptypes)
         ptypes.sort(key = lambda a: self._ptypes.index(a))
         for chunk in chunks:


https://bitbucket.org/yt_analysis/yt/commits/baf520758359/
Changeset:   baf520758359
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 00:33:47
Summary:     Adding "all" fields for other Gadget fields.

This will ultimately need to be streamlined so that new columns can be added to
Gadget outputs.
Affected #:  1 file

diff -r 469b511ff71298d9fb0019ca08428a1b0bf5651c -r baf520758359c5ff1237964815625e4276ac9afc yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -139,15 +139,22 @@
     GadgetFieldInfo.add_field((ptype, "Mass"), function=_Mass,
                               particle_type = True)
 
-def _AllGadgetMass(field, data):
-    v = []
-    for ptype in data.pf.particle_types:
-        if ptype == "all": continue
-        v.append(data[ptype, "Mass"].copy())
-    masses = np.concatenate(v)
-    return masses
-GadgetFieldInfo.add_field(("all", "Mass"), function=_AllGadgetMass,
-        particle_type = True, units = r"\mathrm{g}")
+def _field_concat(fname):
+    def _AllFields(field, data):
+        v = []
+        for ptype in data.pf.particle_types:
+            if ptype == "all": continue
+            v.append(data[ptype, fname].copy())
+        rv = np.concatenate(v, axis=0)
+        return rv
+    return _AllFields
+
+for fname in ["Coordinates", "Velocities", "ParticleIDs",
+              # Note: Mass, not Masses
+              "Mass"]:
+    func = _field_concat(fname)
+    GadgetFieldInfo.add_field(("all", fname), function=func,
+            particle_type = True)
 
 for ptype in _gadget_ptypes:
     _gadget_particle_fields(ptype)
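
A short usage sketch of the closure pattern above: _field_concat(fname)
returns a field function bound to that single field name, so reading an
"all" field concatenates it across every concrete particle type.  The
dataset handle here is illustrative.

    dd = pf.h.all_data()
    ids = dd["all", "ParticleIDs"]   # Gas, Halo, Disk, ... concatenated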


https://bitbucket.org/yt_analysis/yt/commits/379df93d70a7/
Changeset:   379df93d70a7
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 00:57:42
Summary:     We were missing the crucial check of whether or not a field was a
particle field during detection.  I have also removed the IO-level particle
field concatenation; this is actually much simpler to do higher up, although
it comes at a somewhat higher cost when field dependencies are not
pre-calculated.  (Fortunately they are, so this should not hit the disk more
frequently.)
Affected #:  4 files

diff -r baf520758359c5ff1237964815625e4276ac9afc -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -291,6 +291,10 @@
                 if not self.flat: self[item] = vv
                 else: self[item] = vv.ravel()
                 return self[item]
+        elif finfo is not None and finfo.particle_type:
+            self[item] = np.ones(self.NumberOfParticles)
+            self.requested.append(item)
+            return self[item]
         self.requested.append(item)
         return defaultdict.__missing__(self, item)
 

diff -r baf520758359c5ff1237964815625e4276ac9afc -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -367,7 +367,6 @@
         self.conversion_factors["velocity"] = velocity_unit
         self.conversion_factors["mass"] = mass_unit
         self.conversion_factors["density"] = mass_unit / length_unit**3
-        #import pdb; pdb.set_trace()
         for u in sec_conversion:
             self.time_units[u] = time_unit * sec_conversion[u]
 

diff -r baf520758359c5ff1237964815625e4276ac9afc -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -126,7 +126,7 @@
 
 _gadget_ptypes = ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")
 
-def _gadget_particle_fields(_ptype):
+def _gadget_particle_fields(ptype):
     def _Mass(field, data):
         pind = _gadget_ptypes.index(ptype)
         if data.pf["Massarr"][pind] == 0.0:
@@ -157,11 +157,11 @@
             particle_type = True)
 
 for ptype in _gadget_ptypes:
-    _gadget_particle_fields(ptype)
     KnownGadgetFields.add_field((ptype, "Masses"), function=NullFunc,
         particle_type = True,
         convert_function=_get_conv("mass"),
         units = r"\mathrm{g}")
+    _gadget_particle_fields(ptype)
     KnownGadgetFields.add_field((ptype, "Velocities"), function=NullFunc,
         particle_type = True,
         convert_function=_get_conv("velocity"),

diff -r baf520758359c5ff1237964815625e4276ac9afc -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -167,28 +167,10 @@
         psize = defaultdict(lambda: 0)
         chunks = list(chunks)
         pf = chunks[0].objs[0].domain.pf
-        aptypes = [ptype for ptype in pf.particle_types if ptype != "all"]
         ptypes = set()
         for ftype, fname in fields:
-            if ftype == "all":
-                # We simply read everything and concatenate them. 
-                ptall += [(ptype, fname) for ptype in aptypes]
-            else:
-                ptf[ftype].append(fname)
-                ptypes.add(ftype)
-        if len(ptall) > 0:
-            fv = self._read_particle_selection(chunks, selector, ptall)
-            # Now we have a dict of the return values, and we need to
-            # concatenate
-            fields = set(fname for ftype, fname in fv)
-            for field in fields:
-                v = []
-                for ptype in aptypes:
-                    va = fv.pop((ptype, field), None)
-                    if va is None: continue
-                    v.append(va)
-                rv["all", field] = np.concatenate(v, axis=0)
-        if len(ptf) == 0: return rv
+            ptf[ftype].append(fname)
+            ptypes.add(ftype)
         ptypes = list(ptypes)
         ptypes.sort(key = lambda a: self._ptypes.index(a))
         for chunk in chunks:


https://bitbucket.org/yt_analysis/yt/commits/6e84de91f77b/
Changeset:   6e84de91f77b
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 16:32:02
Summary:     Adding translations for particle_position and particle_velocity.

The field detection logic is getting more convoluted, and this speaks to really
needing to address vector fields.  However, with this change I can now run HOP
on a Gadget dataset.  That being said, it also over-reads data, since we're
not yet batching particle fields or preloading in the halo finder: each of
the x, y, and z components triggers a separate read of the full Coordinates
array.
Affected #:  2 files

diff -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd -r 6e84de91f77bf2c190a8e2f7c2317138881c1abc yt/data_objects/field_info_container.py
--- a/yt/data_objects/field_info_container.py
+++ b/yt/data_objects/field_info_container.py
@@ -292,7 +292,13 @@
                 else: self[item] = vv.ravel()
                 return self[item]
         elif finfo is not None and finfo.particle_type:
-            self[item] = np.ones(self.NumberOfParticles)
+            if item == "Coordinates" or item[1] == "Coordinates" or \
+               item == "Velocities" or item[1] == "Velocities":
+                # A vector
+                self[item] = np.ones((self.NumberOfParticles, 3))
+            else:
+                # Not a vector
+                self[item] = np.ones(self.NumberOfParticles)
             self.requested.append(item)
             return self[item]
         self.requested.append(item)

diff -r 379df93d70a7e2b7e462ed5122c6dcbcc65e1ecd -r 6e84de91f77bf2c190a8e2f7c2317138881c1abc yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -69,6 +69,19 @@
              display_name = "\\mathrm{%s Count}" % ptype,
              projection_conversion = '1')
 
+    def particle_mass(field, data):
+        pos = data[ptype, coord_name]
+        d = data.deposit(pos, [data[ptype, mass_name]], method = "sum")
+        return d
+
+    registry.add_field(("deposit", "%s_mass" % ptype),
+             function = particle_mass,
+             validators = [ValidateSpatial()],
+             display_name = "\\mathrm{%s Mass}" % ptype,
+             units = r"\mathrm{g}",
+             projected_units = r"\mathrm{g}\/\mathrm{cm}",
+             projection_conversion = 'cm')
+
     def particle_density(field, data):
         pos = data[ptype, coord_name]
         d = data.deposit(pos, [data[ptype, mass_name]], method = "sum")
@@ -101,6 +114,28 @@
             particle_type = True,
             units = r"\mathrm{M}_\odot")
 
+    # For 'all', which is a special field, we skip adding a few types.
+    
+    if ptype == "all": return
+
+    # Now we have to set up the various velocity and coordinate things.  In the
+    # future, we'll actually invert this and use the 3-component items
+    # elsewhere, and stop using these.
+    
+    # Note that we pass in _ptype here so that it's defined inside the closure.
+    def _get_coord_funcs(axi, _ptype):
+        def _particle_velocity(field, data):
+            return data[_ptype, "Velocities"][:,axi]
+        def _particle_position(field, data):
+            return data[_ptype, "Coordinates"][:,axi]
+        return _particle_velocity, _particle_position
+    for axi, ax in enumerate("xyz"):
+        v, p = _get_coord_funcs(axi, ptype)
+        registry.add_field((ptype, "particle_velocity_%s" % ax),
+            particle_type = True, function = v)
+        registry.add_field((ptype, "particle_position_%s" % ax),
+            particle_type = True, function = p)
+    
 def _get_conv(cf):
     def _convert(data):
         return data.convert(cf)
@@ -156,6 +191,7 @@
     GadgetFieldInfo.add_field(("all", fname), function=func,
             particle_type = True)
 
+
 for ptype in _gadget_ptypes:
     KnownGadgetFields.add_field((ptype, "Masses"), function=NullFunc,
         particle_type = True,
@@ -167,5 +203,28 @@
         convert_function=_get_conv("velocity"),
         units = r"\mathrm{cm}/\mathrm{s}")
     _particle_functions(ptype, "Coordinates", "Mass", GadgetFieldInfo)
+    KnownGadgetFields.add_field((ptype, "Coordinates"), function=NullFunc,
+        particle_type = True)
 #_gadget_particle_fields("all")
 _particle_functions("all", "Coordinates", "Mass", GadgetFieldInfo)
+
+# Now we have to manually apply the splits for "all", since we don't want to
+# use the splits defined above.
+
+def _field_concat_slice(fname, axi):
+    def _AllFields(field, data):
+        v = []
+        for ptype in data.pf.particle_types:
+            if ptype == "all": continue
+            v.append(data[ptype, fname][:,axi])
+        rv = np.concatenate(v, axis=0)
+        return rv
+    return _AllFields
+
+for iname, oname in [("Coordinates", "particle_position_"),
+                     ("Velocities", "particle_velocity_")]:
+    for axi, ax in enumerate("xyz"):
+        func = _field_concat_slice(iname, axi)
+        GadgetFieldInfo.add_field(("all", oname + ax), function=func,
+                particle_type = True)
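
The reason _get_coord_funcs takes axi and _ptype as arguments, rather than
closing over the loop variables directly, is Python's late binding: closures
capture variables, not values, so functions defined inside the loop would
all see the final loop values.  A minimal illustration of the pitfall:

    fns = [lambda: ax for ax in "xyz"]
    print fns[0]()            # prints 'z', not 'x'
    fns = [lambda ax=ax: ax for ax in "xyz"]
    print fns[0]()            # prints 'x', as intended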
+


https://bitbucket.org/yt_analysis/yt/commits/d1648a800a1d/
Changeset:   d1648a800a1d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 17:29:21
Summary:     Adding guessing for "all" to _get_field_info.  Fixing weighting-by-tuples in filename creation for PlotWindow.
Affected #:  2 files

diff -r 6e84de91f77bf2c190a8e2f7c2317138881c1abc -r d1648a800a1dcc457ee62c23b4f68e5555f03ef0 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -252,8 +252,10 @@
     _last_freq = (None, None)
     _last_finfo = None
     def _get_field_info(self, ftype, fname):
+        guessing_type = False
         if ftype == "unknown" and self._last_freq[0] != None:
             ftype = self._last_freq[0]
+            guessing_type = True
         field = (ftype, fname)
         if field == self._last_freq:
             return self._last_finfo
@@ -267,6 +269,12 @@
             self._last_freq = field
             self._last_finfo = self.field_info[fname]
             return self._last_finfo
+        # We also should check "all" for particles, which can show up if you're
+        # mixing deposition/gas fields with particle fields.
+        if guessing_type and ("all", fname) in self.field_info:
+            self._last_freq = ("all", fname)
+            self._last_finfo = self.field_info["all", fname]
+            return self._last_finfo
         raise YTFieldNotFound((ftype, fname), self)
 
 def _reconstruct_pf(*args, **kwargs):

diff -r 6e84de91f77bf2c190a8e2f7c2317138881c1abc -r d1648a800a1dcc457ee62c23b4f68e5555f03ef0 yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -1069,6 +1069,8 @@
                 # for cutting planes
                 n = "%s_%s_%s" % (name, type, k)
             if weight:
+                if isinstance(weight, tuple):
+                    weight = weight[1]
                 n += "_%s" % (weight)
             names.append(v.save(n,mpl_kwargs))
         return names


https://bitbucket.org/yt_analysis/yt/commits/2adf4b12e727/
Changeset:   2adf4b12e727
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 20:07:27
Summary:     Re-enabling MIP projections.
Affected #:  2 files

diff -r d1648a800a1dcc457ee62c23b4f68e5555f03ef0 -r 2adf4b12e72793f95e86b148374695c04d15fc83 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -228,10 +228,8 @@
         self.proj_style = style
         if style == "mip":
             self.func = np.max
-            op = "max"
         elif style == "integrate":
             self.func = np.sum # for the future
-            op = "sum"
         else:
             raise NotImplementedError(style)
         self.weight_field = weight_field
@@ -299,7 +297,7 @@
         # TODO: Add the combine operation
         ox = self.pf.domain_left_edge[x_dict[self.axis]]
         oy = self.pf.domain_left_edge[y_dict[self.axis]]
-        px, py, pdx, pdy, nvals, nwvals = tree.get_all(False)
+        px, py, pdx, pdy, nvals, nwvals = tree.get_all(False, merge_style)
         nvals = self.comm.mpi_allreduce(nvals, op=op)
         nwvals = self.comm.mpi_allreduce(nwvals, op=op)
         np.multiply(px, self.pf.domain_width[x_dict[self.axis]], px)

diff -r d1648a800a1dcc457ee62c23b4f68e5555f03ef0 -r 2adf4b12e72793f95e86b148374695c04d15fc83 yt/utilities/lib/QuadTree.pyx
--- a/yt/utilities/lib/QuadTree.pyx
+++ b/yt/utilities/lib/QuadTree.pyx
@@ -342,10 +342,11 @@
 
     @cython.boundscheck(False)
     @cython.wraparound(False)
-    def get_all(self, int count_only = 0):
+    def get_all(self, int count_only = 0, int style = 1):
         cdef int i, j, vi
         cdef int total = 0
         vals = []
+        self.merged = style
         for i in range(self.top_grid_dims[0]):
             for j in range(self.top_grid_dims[1]):
                 total += self.count(self.root_nodes[i][j])


https://bitbucket.org/yt_analysis/yt/commits/27d0013f48e8/
Changeset:   27d0013f48e8
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 20:28:56
Summary:     Fixing "unitary" for SPH.
Affected #:  1 file

diff -r 2adf4b12e72793f95e86b148374695c04d15fc83 -r 27d0013f48e8e98983603ca3351c575289cfbbad yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -248,8 +248,8 @@
         self.time_units = {}
         self.conversion_factors = {}
         self.units['1'] = 1.0
-        self.units['unitary'] = (self.domain_right_edge -
-                                 self.domain_left_edge).max()
+        DW = self.domain_right_edge - self.domain_left_edge
+        self.units["unitary"] = 1.0 / DW.max()
         # Check 
         base = None
         mpch = {}


https://bitbucket.org/yt_analysis/yt/commits/c4405c868b90/
Changeset:   c4405c868b90
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 20:54:09
Summary:     Re-enable .shape on data objects so that the Ones field works on
patch datasets being projected.
Affected #:  1 file

diff -r 27d0013f48e8e98983603ca3351c575289cfbbad -r c4405c868b90c264de30896a8eabeda8613a94d5 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -539,11 +539,11 @@
         old_size, self.size = self.size, chunk.data_size
         old_chunk, self._current_chunk = self._current_chunk, chunk
         old_locked, self._locked = self._locked, False
-        #self.shape = (self.size,)
+        self.shape = (self.size,)
         yield
         self.field_data = old_field_data
         self.size = old_size
-        #self.shape = (old_size,)
+        self.shape = (old_size,)
         self._current_chunk = old_chunk
         self._locked = old_locked
 


https://bitbucket.org/yt_analysis/yt/commits/af89e30c12b5/
Changeset:   af89e30c12b5
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 20:54:52
Summary:     Allow 0-particle octs in different domains to be flagged as not needing
refinement.  This ensures all oct children eventually survive, and that
volume is correctly set globally for the particles on the mesh.
Affected #:  1 file

diff -r c4405c868b90c264de30896a8eabeda8613a94d5 -r af89e30c12b57346e4cbef691ff60ef1530ec084 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -1138,6 +1138,8 @@
         #True if not in domain
         if cur.children[0][0][0] != NULL:
             return 0
+        elif cur.sd.np == 0:
+            return 0
         elif cur.sd.np >= self.n_ref:
             return 1
         elif cur.domain >= 0 and cur.domain != domain_id:
@@ -1154,6 +1156,7 @@
             for j in range(2):
                 for k in range(2):
                     noct = self.allocate_oct()
+                    noct.domain = o.domain
                     noct.level = o.level + 1
                     noct.pos[0] = (o.pos[0] << 1) + i
                     noct.pos[1] = (o.pos[1] << 1) + j


https://bitbucket.org/yt_analysis/yt/commits/e77960d08012/
Changeset:   e77960d08012
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 21:43:08
Summary:     Fixing periodicity checks for halo center of mass.  (mirrored in 75a6021)
Affected #:  1 file

diff -r af89e30c12b57346e4cbef691ff60ef1530ec084 -r e77960d08012bc074717c8d9309697897ad33960 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -144,10 +144,10 @@
             return self.CoM
         pm = self["ParticleMassMsun"]
         c = {}
-        c[0] = self["particle_position_x"]
-        c[1] = self["particle_position_y"]
-        c[2] = self["particle_position_z"]
-        c_vec = np.zeros(3)
+        # We shift into a box where the origin is the left edge
+        c[0] = self["particle_position_x"] - self.pf.domain_left_edge[0]
+        c[1] = self["particle_position_y"] - self.pf.domain_left_edge[1]
+        c[2] = self["particle_position_z"] - self.pf.domain_left_edge[2]
         com = []
         for i in range(3):
             # A halo is likely periodic around a boundary if the distance 
@@ -160,13 +160,12 @@
                 com.append(c[i])
                 continue
             # Now we want to flip around only those close to the left boundary.
-            d_left = c[i] - self.pf.domain_left_edge[i]
-            sel = (d_left <= (self.pf.domain_width[i]/2))
+            sel = (c[i] <= (self.pf.domain_width[i]/2))
             c[i][sel] += self.pf.domain_width[i]
             com.append(c[i])
         com = np.array(com)
         c = (com * pm).sum(axis=1) / pm.sum()
-        return c%self.pf.domain_width
+        return c % self.pf.domain_width + self.pf.domain_left_edge
 
     def maximum_density(self):
         r"""Return the HOP-identified maximum density. Not applicable to


https://bitbucket.org/yt_analysis/yt/commits/c62e4f9ae6f8/
Changeset:   c62e4f9ae6f8
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 21:43:28
Summary:     Adding fields for Tipsy to match those in Gadget.  HOP now runs.
Affected #:  1 file

diff -r e77960d08012bc074717c8d9309697897ad33960 -r c62e4f9ae6f8e4397f0662439d0f0889936b5c42 yt/frontends/sph/fields.py
--- a/yt/frontends/sph/fields.py
+++ b/yt/frontends/sph/fields.py
@@ -135,12 +135,37 @@
             particle_type = True, function = v)
         registry.add_field((ptype, "particle_position_%s" % ax),
             particle_type = True, function = p)
-    
+
+# Here are helper functions for things like vector fields and so on.
+
 def _get_conv(cf):
     def _convert(data):
         return data.convert(cf)
     return _convert
 
+def _field_concat(fname):
+    def _AllFields(field, data):
+        v = []
+        for ptype in data.pf.particle_types:
+            if ptype == "all": continue
+            v.append(data[ptype, fname].copy())
+        rv = np.concatenate(v, axis=0)
+        return rv
+    return _AllFields
+
+def _field_concat_slice(fname, axi):
+    def _AllFields(field, data):
+        v = []
+        for ptype in data.pf.particle_types:
+            if ptype == "all": continue
+            v.append(data[ptype, fname][:,axi])
+        rv = np.concatenate(v, axis=0)
+        return rv
+    return _AllFields
+
+# TIPSY
+# =====
+
 for ptype in ["Gas", "DarkMatter", "Stars"]:
     KnownTipsyFields.add_field((ptype, "Mass"), function=NullFunc,
         particle_type = True,
@@ -153,6 +178,20 @@
     # Note that we have to do this last so that TranslationFunc operates
     # correctly.
     _particle_functions(ptype, "Coordinates", "Mass", TipsyFieldInfo)
+_particle_functions("all", "Coordinates", "Mass", TipsyFieldInfo)
+
+for fname in ["Coordinates", "Velocities", "ParticleIDs", "Mass",
+              "Epsilon", "Phi"]:
+    func = _field_concat(fname)
+    TipsyFieldInfo.add_field(("all", fname), function=func,
+            particle_type = True)
+
+for iname, oname in [("Coordinates", "particle_position_"),
+                     ("Velocities", "particle_velocity_")]:
+    for axi, ax in enumerate("xyz"):
+        func = _field_concat_slice(iname, axi)
+        TipsyFieldInfo.add_field(("all", oname + ax), function=func,
+                particle_type = True)
 
 # GADGET
 # ======
@@ -161,6 +200,8 @@
 
 _gadget_ptypes = ("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")
 
+# This has to be done manually for Gadget, because some of the particles will
+# have uniform mass
 def _gadget_particle_fields(ptype):
     def _Mass(field, data):
         pind = _gadget_ptypes.index(ptype)
@@ -174,16 +215,6 @@
     GadgetFieldInfo.add_field((ptype, "Mass"), function=_Mass,
                               particle_type = True)
 
-def _field_concat(fname):
-    def _AllFields(field, data):
-        v = []
-        for ptype in data.pf.particle_types:
-            if ptype == "all": continue
-            v.append(data[ptype, fname].copy())
-        rv = np.concatenate(v, axis=0)
-        return rv
-    return _AllFields
-
 for fname in ["Coordinates", "Velocities", "ParticleIDs",
               # Note: Mass, not Masses
               "Mass"]:
@@ -191,7 +222,6 @@
     GadgetFieldInfo.add_field(("all", fname), function=func,
             particle_type = True)
 
-
 for ptype in _gadget_ptypes:
     KnownGadgetFields.add_field((ptype, "Masses"), function=NullFunc,
         particle_type = True,
@@ -205,22 +235,11 @@
     _particle_functions(ptype, "Coordinates", "Mass", GadgetFieldInfo)
     KnownGadgetFields.add_field((ptype, "Coordinates"), function=NullFunc,
         particle_type = True)
-#_gadget_particle_fields("all")
 _particle_functions("all", "Coordinates", "Mass", GadgetFieldInfo)
 
 # Now we have to manually apply the splits for "all", since we don't want to
 # use the splits defined above.
 
-def _field_concat_slice(fname, axi):
-    def _AllFields(field, data):
-        v = []
-        for ptype in data.pf.particle_types:
-            if ptype == "all": continue
-            v.append(data[ptype, fname][:,axi])
-        rv = np.concatenate(v, axis=0)
-        return rv
-    return _AllFields
-
 for iname, oname in [("Coordinates", "particle_position_"),
                      ("Velocities", "particle_velocity_")]:
     for axi, ax in enumerate("xyz"):


https://bitbucket.org/yt_analysis/yt/commits/0b21630b2c16/
Changeset:   0b21630b2c16
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-05-31 22:31:16
Summary:     This fixes a test failure.  Note that this once again touches .shape, which
*will* need to go away, but also reinforces that shape will need to be removed
carefully.
Affected #:  1 file

diff -r c62e4f9ae6f8e4397f0662439d0f0889936b5c42 -r 0b21630b2c160f55d856f2d43a37881fa020897b yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -539,11 +539,13 @@
         old_size, self.size = self.size, chunk.data_size
         old_chunk, self._current_chunk = self._current_chunk, chunk
         old_locked, self._locked = self._locked, False
-        self.shape = (self.size,)
+        if not self._spatial:
+            self.shape = (self.size,)
         yield
         self.field_data = old_field_data
         self.size = old_size
-        self.shape = (old_size,)
+        if not self._spatial:
+            self.shape = (old_size,)
         self._current_chunk = old_chunk
         self._locked = old_locked
 


https://bitbucket.org/yt_analysis/yt/commits/75692f91b4b8/
Changeset:   75692f91b4b8
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-02 22:00:48
Summary:     Changing hand-coded Msun to use physical_constants.py.
Affected #:  1 file

diff -r 0b21630b2c160f55d856f2d43a37881fa020897b -r 75692f91b4b85f5dc13cf9ba61b9deb94ed8cc14 yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -47,7 +47,8 @@
 from yt.utilities.physical_constants import \
     G, \
     gravitational_constant_cgs, \
-    km_per_pc
+    km_per_pc, \
+    mass_sun_cgs
 from yt.utilities.cosmology import Cosmology
 from .fields import \
     OWLSFieldInfo, \
@@ -361,7 +362,8 @@
         unit_base = self._unit_base or {}
         velocity_unit = unit_base.get("velocity", 1e5)
         velocity_unit = unit_base.get("UnitVelocity_in_cm_per_s", velocity_unit)
-        mass_unit = unit_base.get("g", 1.989e43 / self.hubble_constant)
+        msun10 = mass_sun_cgs * 1e10
+        mass_unit = unit_base.get("g", msun10 / self.hubble_constant)
         mass_unit = unit_base.get("UnitMass_in_g", mass_unit)
         time_unit = length_unit / velocity_unit
         self.conversion_factors["velocity"] = velocity_unit


https://bitbucket.org/yt_analysis/yt/commits/b4ce5d2b3d4d/
Changeset:   b4ce5d2b3d4d
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-02 22:07:28
Summary:     Reverting a change to cosmology.py.
Affected #:  1 file

diff -r 75692f91b4b85f5dc13cf9ba61b9deb94ed8cc14 -r b4ce5d2b3d4d040a1d5d9dd292b1288c54887c95 yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -109,8 +109,6 @@
         return (self.AngularDiameterDistance(z_i,z_f) / 648. * np.pi)
 
     def CriticalDensity(self,z):
-        return ( (3.0 * (self.HubbleConstantNow / kmPerMpc)**2.0)
-               / (8.0 * np.pi * G) )
         return (3.0 / 8.0 / np.pi * sqr(self.HubbleConstantNow / kmPerMpc) / G *
                 (self.OmegaLambdaNow + ((1 + z)**3.0) * self.OmegaMatterNow))
 


https://bitbucket.org/yt_analysis/yt/commits/8b1db060b7dc/
Changeset:   8b1db060b7dc
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-02 23:07:07
Summary:     I think this addresses the suggestions for how to figure out units.

It's not entirely clear to me that this does the right thing, but it *does*
change it so that the "UnitLength_in_cm" is comoving.  But note that this may
lead to confusion if someone were to assume that passing unit_base['cm'] sets
the comoving centimeters unit.
Affected #:  1 file

diff -r b4ce5d2b3d4d040a1d5d9dd292b1288c54887c95 -r 8b1db060b7dc5a6538bc4f8c17776d6a15f2735d yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -303,7 +303,9 @@
         # Set up the template for domain files
         self.storage_filename = None
         if unit_base is not None and "UnitLength_in_cm" in unit_base:
-            unit_base['cm'] = unit_base["UnitLength_in_cm"]
+            # We assume this is comoving, because in the absence of comoving
+            # integration the redshift will be zero.
+            unit_base['cmcm'] = unit_base["UnitLength_in_cm"]
         self._unit_base = unit_base
         super(GadgetStaticOutput, self).__init__(filename, data_style)
 
@@ -342,6 +344,17 @@
         self.omega_lambda = hvals["OmegaLambda"]
         self.omega_matter = hvals["Omega0"]
         self.hubble_constant = hvals["HubbleParam"]
+        # According to the Gadget manual, OmegaLambda will be zero for
+        # non-cosmological datasets.  However, it may be the case that
+        # individuals are running cosmological simulations *without* Lambda, in
+        # which case we may be doing something incorrect here.
+        # It may be possible to deduce whether ComovingIntegration is on
+        # somehow, but opinions on this vary.
+        if self.omega_lambda == 0.0:
+            mylog.info("Omega Lambda is 0.0, so we are turning off Cosmology.")
+            self.hubble_constant = 1.0 # So that scaling comes out correct
+            self.cosmological_simulation = 0
+            self.current_redshift = 0.0
         self.parameters = hvals
 
         prefix = self.parameter_filename.split(".", 1)[0]
@@ -362,8 +375,9 @@
         unit_base = self._unit_base or {}
         velocity_unit = unit_base.get("velocity", 1e5)
         velocity_unit = unit_base.get("UnitVelocity_in_cm_per_s", velocity_unit)
-        msun10 = mass_sun_cgs * 1e10
-        mass_unit = unit_base.get("g", msun10 / self.hubble_constant)
+        # We set hubble_constant = 1.0 for non-cosmology
+        msun10 = mass_sun_cgs * 1e10 / self.hubble_constant
+        mass_unit = unit_base.get("g", msun10)
         mass_unit = unit_base.get("UnitMass_in_g", mass_unit)
         time_unit = length_unit / velocity_unit
         self.conversion_factors["velocity"] = velocity_unit


https://bitbucket.org/yt_analysis/yt/commits/ada7c2c608f1/
Changeset:   ada7c2c608f1
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-02 23:35:02
Summary:     "Time" is different for ComovingIntegration.  So we attempt to handle it
differently.

Note also that this disables all unit conversions for time, as these are
typically not used very often and furthermore I am not 100% sure of the
correct way to set them up.  Once I have some simulations with star
particles, and
FormationTime fields for them, we can re-investigate this, but it seems to me
that if Time is set the same way as FormationTime, this will be a non-trivial
problem to address as it will not be a linear scaling for conversions.
Affected #:  1 file

diff -r 8b1db060b7dc5a6538bc4f8c17776d6a15f2735d -r ada7c2c608f1bb724bfa274539a0321b76df503c yt/frontends/sph/data_structures.py
--- a/yt/frontends/sph/data_structures.py
+++ b/yt/frontends/sph/data_structures.py
@@ -330,8 +330,6 @@
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         # Set standard values
 
-        # This may not be correct.
-        self.current_time = hvals["Time"] * sec_conversion["Gyr"]
 
         self.domain_left_edge = np.zeros(3, "float64")
         self.domain_right_edge = np.ones(3, "float64") * hvals["BoxSize"]
@@ -355,6 +353,18 @@
             self.hubble_constant = 1.0 # So that scaling comes out correct
             self.cosmological_simulation = 0
             self.current_redshift = 0.0
+            # This may not be correct.
+            self.current_time = hvals["Time"] * sec_conversion["Gyr"]
+        else:
+            # Now we calculate our time based on the cosmology, because in
+            # ComovingIntegration hvals["Time"] will in fact be the expansion
+            # factor, not the actual integration time, so we re-calculate
+            # global time from our Cosmology.
+            cosmo = Cosmology(self.hubble_constant * 100.0,
+                        self.omega_matter, self.omega_lambda)
+            self.current_time = cosmo.UniverseAge(self.current_redshift)
+            mylog.info("Calculating time from %0.3e to be %0.3e seconds",
+                       hvals["Time"], self.current_time)
         self.parameters = hvals
 
         prefix = self.parameter_filename.split(".", 1)[0]
@@ -379,12 +389,15 @@
         msun10 = mass_sun_cgs * 1e10 / self.hubble_constant
         mass_unit = unit_base.get("g", msun10)
         mass_unit = unit_base.get("UnitMass_in_g", mass_unit)
-        time_unit = length_unit / velocity_unit
         self.conversion_factors["velocity"] = velocity_unit
         self.conversion_factors["mass"] = mass_unit
         self.conversion_factors["density"] = mass_unit / length_unit**3
-        for u in sec_conversion:
-            self.time_units[u] = time_unit * sec_conversion[u]
+        # Currently, setting time_units is disabled.  The current_time is
+        # accurately set, but until a time that we can confirm how
+        # FormationTime for stars is set I am disabling these.
+        #time_unit = length_unit / velocity_unit
+        #for u in sec_conversion:
+        #    self.time_units[u] = time_unit / sec_conversion[u]
 
     @classmethod
     def _is_valid(self, *args, **kwargs):


https://bitbucket.org/yt_analysis/yt/commits/bcd7cd3dd612/
Changeset:   bcd7cd3dd612
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 19:41:47
Summary:     Missed a masking operation here; without it we would always read the whole field.
Affected #:  1 file

diff -r ada7c2c608f1bb724bfa274539a0321b76df503c -r bcd7cd3dd612baf132c4598fada9ce87953a28e4 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -341,7 +341,7 @@
             else:
                 rv[field] = np.empty(size, dtype="float64")
                 if size == 0: continue
-                rv[field][:] = vals[field]
+                rv[field][:] = vals[field][mask]
         return rv
 
     def _read_particle_selection(self, chunks, selector, fields):


https://bitbucket.org/yt_analysis/yt/commits/72b584eb3706/
Changeset:   72b584eb3706
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 20:29:51
Summary:     Adding particle deposition fields, comoving & h units, to RAMSES.
Affected #:  2 files

diff -r bcd7cd3dd612baf132c4598fada9ce87953a28e4 -r 72b584eb3706c3fe8c7e5c59e6c801140db55cd7 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -418,6 +418,9 @@
         unit_l = self.parameters['unit_l']
         for unit in mpc_conversion.keys():
             self.units[unit] = unit_l * mpc_conversion[unit] / mpc_conversion["cm"]
+            self.units['%sh' % unit] = self.units[unit] * self.hubble_constant
+            self.units['%shcm' % unit] = (self.units['%sh' % unit] /
+                                          (1 + self.current_redshift))
         for unit in sec_conversion.keys():
             self.time_units[unit] = self.parameters['unit_t'] / sec_conversion[unit]
 

diff -r bcd7cd3dd612baf132c4598fada9ce87953a28e4 -r 72b584eb3706c3fe8c7e5c59e6c801140db55cd7 yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -36,7 +36,9 @@
 import yt.data_objects.universal_fields
 from yt.utilities.physical_constants import \
     boltzmann_constant_cgs, \
-    mass_hydrogen_cgs
+    mass_hydrogen_cgs, \
+    mass_sun_cgs
+import numpy as np
 
 RAMSESFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo, "RFI")
 add_field = RAMSESFieldInfo.add_field
@@ -121,7 +123,7 @@
 KnownRAMSESFields["particle_mass"]._units = r"\mathrm{g}"
 
 def _convertParticleMassMsun(data):
-    return 1.0/1.989e33
+    return 1.0/mass_sun_cgs
 add_field("ParticleMass", function=TranslationFunc("particle_mass"), 
           particle_type=True)
 add_field("ParticleMassMsun",
@@ -133,3 +135,46 @@
     rv *= mass_hydrogen_cgs/boltzmann_constant_cgs
     return rv
 add_field("Temperature", function=_Temperature, units=r"\rm{K}")
+
+
+# We now set up a couple particle fields.  This should eventually be abstracted
+# into a single particle field function that adds them all on and is used
+# across frontends, but that will need to wait until moving to using
+# Coordinates, or vector fields.
+
+def particle_count(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, method = "count")
+    return d
+RAMSESFieldInfo.add_field(("deposit", "%s_count" % "all"),
+         function = particle_count,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Count}" % "all",
+         projection_conversion = '1')
+
+def particle_mass(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    return d
+
+RAMSESFieldInfo.add_field(("deposit", "%s_mass" % "all"),
+         function = particle_mass,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Mass}" % "all",
+         units = r"\mathrm{g}",
+         projected_units = r"\mathrm{g}\/\mathrm{cm}",
+         projection_conversion = 'cm')
+
+def particle_density(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    d /= data["CellVolume"]
+    return d
+
+RAMSESFieldInfo.add_field(("deposit", "%s_density" % "all"),
+         function = particle_density,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Density}" % "all",
+         units = r"\mathrm{g}/\mathrm{cm}^{3}",
+         projected_units = r"\mathrm{g}/\mathrm{cm}^{-2}",
+         projection_conversion = 'cm')
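
A hedged usage sketch of the new deposition fields, assuming the usual yt
load() entry point; the output name is hypothetical.

    pf = load("output_00080")
    dd = pf.h.all_data()
    print dd["deposit", "all_density"].max()   # g/cm^3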


https://bitbucket.org/yt_analysis/yt/commits/e527800d99b2/
Changeset:   e527800d99b2
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 21:02:13
Summary:     This corrects a pernicious bug that crept into the RAMSES oct filling.

Note that this is not precisely the same as before, because we now iterate in
Fortran order, whereas before we implicitly iterated in C order.
While we could in principle write a single "for ii" loop, we do not do that
here so that we can remain clear about the order of filling, and because we
would need to have a reverse-ordering test anyway.
Affected #:  1 file

diff -r 72b584eb3706c3fe8c7e5c59e6c801140db55cd7 -r e527800d99b2fa73e2f73a97b6939c6126e21f00 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -705,13 +705,15 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                for ii in range(8):
-                    # We iterate and check here to keep our counts consistent
-                    # when filling different levels.
-                    if mask[o.domain_ind, ii] == 0: continue
-                    if o.level == level: 
-                        dest[local_filled] = source[o.file_ind, ii]
-                    local_filled += 1
+                for i in range(2):
+                    for j in range(2):
+                        for k in range(2):
+                            ii = ((k*2)+j)*2+i
+                            if mask[o.domain_ind, ii] == 0: continue
+                            if o.level == level:
+                                dest[local_filled] = \
+                                    source[o.file_ind, ii]
+                            local_filled += 1
         return local_filled
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):


https://bitbucket.org/yt_analysis/yt/commits/2de0b1d9c4dd/
Changeset:   2de0b1d9c4dd
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 21:24:56
Summary:     Adding Enzo particle deposition fields.
Affected #:  1 file

diff -r e527800d99b2fa73e2f73a97b6939c6126e21f00 -r 2de0b1d9c4dd42949d103bf4b350f293d6fe749d yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -653,3 +653,39 @@
                function=TranslationFunc(("CenOstriker","position_%s" % ax)),
                particle_type = True)
 
+def particle_count(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, method = "count")
+    return d
+EnzoFieldInfo.add_field(("deposit", "%s_count" % "all"),
+         function = particle_count,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Count}" % "all",
+         projection_conversion = '1')
+
+def particle_mass(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    return d
+
+EnzoFieldInfo.add_field(("deposit", "%s_mass" % "all"),
+         function = particle_mass,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Mass}" % "all",
+         units = r"\mathrm{g}",
+         projected_units = r"\mathrm{g}\/\mathrm{cm}",
+         projection_conversion = 'cm')
+
+def particle_density(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    d /= data["CellVolume"]
+    return d
+
+EnzoFieldInfo.add_field(("deposit", "%s_density" % "all"),
+         function = particle_density,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Density}" % "all",
+         units = r"\mathrm{g}/\mathrm{cm}^{3}",
+         projected_units = r"\mathrm{g}/\mathrm{cm}^{-2}",
+         projection_conversion = 'cm')
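
The three fields above share one pattern: "count" tallies particles per cell,
"sum" accumulates a particle field per cell, and the density field divides the
deposited mass by the cell volume.  A minimal sketch of that pattern on a
single uniform grid, using np.histogramdd as a stand-in for data.deposit (pos,
mass, and edges are illustrative names here, not yt API):

    import numpy as np

    pos   = np.random.random((1000, 3))           # particle positions in [0, 1)
    mass  = np.ones(1000)                         # particle masses in g
    edges = [np.linspace(0.0, 1.0, 9)] * 3        # an 8^3 grid

    count, _ = np.histogramdd(pos, bins=edges)                # method = "count"
    msum,  _ = np.histogramdd(pos, bins=edges, weights=mass)  # method = "sum"

    cell_volume = (1.0 / 8) ** 3                  # uniform cells
    density = msum / cell_volume                  # the "all_density" field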


https://bitbucket.org/yt_analysis/yt/commits/49f1241d1fd8/
Changeset:   49f1241d1fd8
Branch:      yt-3.0
User:        samskillman
Date:        2013-06-03 21:01:37
Summary:     Fixing insertion of blocks based on levels. This will not work for datasets that do not have b.Level, but at least it no longer inserts each grid Nlvl times. Also renaming source to data_source.
Affected #:  2 files

diff -r ada7c2c608f1bb724bfa274539a0321b76df503c -r 49f1241d1fd80b4294a520be53e790d1db72fe54 yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py
@@ -73,12 +73,12 @@
 
 class Tree(object):
     def __init__(self, pf, comm_rank=0, comm_size=1,
-            min_level=None, max_level=None, source=None):
+            min_level=None, max_level=None, data_source=None):
         
         self.pf = pf
-        if source is None:
-            source = pf.h.all_data()
-        self.source = source
+        if data_source is None:
+            data_source = pf.h.all_data()
+        self.data_source = data_source
         self._id_offset = self.pf.h.grids[0]._id_offset
         if min_level is None: min_level = 0
         if max_level is None: max_level = pf.h.max_level
@@ -86,8 +86,8 @@
         self.max_level = max_level
         self.comm_rank = comm_rank
         self.comm_size = comm_size
-        left_edge = self.source.left_edge
-        right_edge= self.source.right_edge
+        left_edge = self.data_source.left_edge
+        right_edge= self.data_source.right_edge
         self.trunk = Node(None, None, None,
                 left_edge, right_edge, None, 1)
         self.build()
@@ -102,8 +102,8 @@
     def build(self):
         lvl_range = range(self.min_level, self.max_level+1)
         for lvl in lvl_range:
-            #grids = self.source.select_grids(lvl)
-            grids = np.array([b for b, mask in self.source.blocks])
+            #grids = self.data_source.select_grids(lvl)
+            grids = np.array([b for b, mask in self.data_source.blocks if b.Level == lvl])
             if len(grids) == 0: break
             self.add_grids(grids)
 
@@ -148,7 +148,7 @@
     fields = None
     log_fields = None
     no_ghost = True
-    def __init__(self, pf, min_level=None, max_level=None, source=None):
+    def __init__(self, pf, min_level=None, max_level=None, data_source=None):
 
         ParallelAnalysisInterface.__init__(self)
 
@@ -159,17 +159,20 @@
         self.brick_dimensions = []
         self.sdx = pf.h.get_smallest_dx()
         self._initialized = False
-        self._id_offset = pf.h.grids[0]._id_offset
+        try: 
+            self._id_offset = pf.h.grids[0]._id_offset
+        except:
+            self._id_offset = 0
 
         #self.add_mask_field()
-        if source is None:
-            source = pf.h.all_data()
-        self.source = source
+        if data_source is None:
+            data_source = pf.h.all_data()
+        self.data_source = data_source
     
         mylog.debug('Building AMRKDTree')
         self.tree = Tree(pf, self.comm.rank, self.comm.size,
                          min_level=min_level,
-                         max_level=max_level, source=source)
+                         max_level=max_level, data_source=data_source)
 
     def set_fields(self, fields, log_fields, no_ghost):
         self.fields = fields
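
The substantive change in build() above is the b.Level filter; a sketch of the
difference (assuming data_source.blocks yields (block, mask) pairs, as in the
diff):

    # Old: every block is gathered on every pass of the level loop,
    # so each grid ends up in the tree Nlvl times.
    grids = [b for b, mask in data_source.blocks]

    # New: a block is only gathered on the pass matching its own
    # refinement level, so each grid is inserted exactly once.
    grids = [b for b, mask in data_source.blocks if b.Level == lvl]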

diff -r ada7c2c608f1bb724bfa274539a0321b76df503c -r 49f1241d1fd80b4294a520be53e790d1db72fe54 yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -164,7 +164,7 @@
                  log_fields = None,
                  sub_samples = 5, pf = None,
                  min_level=None, max_level=None, no_ghost=True,
-                 source=None,
+                 data_source=None,
                  use_light=False):
         ParallelAnalysisInterface.__init__(self)
         if pf is not None: self.pf = pf
@@ -196,13 +196,13 @@
         if self.no_ghost:
             mylog.info('Warning: no_ghost is currently True (default). This may lead to artifacts at grid boundaries.')
 
-        if source is None:
-            source = self.pf.h.all_data()
-        self.source = source
+        if data_source is None:
+            data_source = self.pf.h.all_data()
+        self.data_source = data_source
 
         if volume is None:
             volume = AMRKDTree(self.pf, min_level=min_level, 
-                               max_level=max_level, source=self.source)
+                               max_level=max_level, data_source=self.data_source)
         self.volume = volume        
 
     def _setup_box_properties(self, width, center, unit_vectors):


https://bitbucket.org/yt_analysis/yt/commits/d540495296df/
Changeset:   d540495296df
Branch:      yt-3.0
User:        ngoldbaum
Date:        2013-06-03 21:52:29
Summary:     Merged in MatthewTurk/yt-3.0 (pull request #40)

Fields and RAMSES fix
Affected #:  5 files

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r d540495296df584372b5c2ab42e90b43c6bdf058 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -653,3 +653,39 @@
                function=TranslationFunc(("CenOstriker","position_%s" % ax)),
                particle_type = True)
 
+def particle_count(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, method = "count")
+    return d
+EnzoFieldInfo.add_field(("deposit", "%s_count" % "all"),
+         function = particle_count,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Count}" % "all",
+         projection_conversion = '1')
+
+def particle_mass(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    return d
+
+EnzoFieldInfo.add_field(("deposit", "%s_mass" % "all"),
+         function = particle_mass,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Mass}" % "all",
+         units = r"\mathrm{g}",
+         projected_units = r"\mathrm{g}\/\mathrm{cm}",
+         projection_conversion = 'cm')
+
+def particle_density(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    d /= data["CellVolume"]
+    return d
+
+EnzoFieldInfo.add_field(("deposit", "%s_density" % "all"),
+         function = particle_density,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Density}" % "all",
+         units = r"\mathrm{g}/\mathrm{cm}^{3}",
+         projected_units = r"\mathrm{g}/\mathrm{cm}^{-2}",
+         projection_conversion = 'cm')

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r d540495296df584372b5c2ab42e90b43c6bdf058 yt/frontends/ramses/data_structures.py
--- a/yt/frontends/ramses/data_structures.py
+++ b/yt/frontends/ramses/data_structures.py
@@ -418,6 +418,9 @@
         unit_l = self.parameters['unit_l']
         for unit in mpc_conversion.keys():
             self.units[unit] = unit_l * mpc_conversion[unit] / mpc_conversion["cm"]
+            self.units['%sh' % unit] = self.units[unit] * self.hubble_constant
+            self.units['%shcm' % unit] = (self.units['%sh' % unit] /
+                                          (1 + self.current_redshift))
         for unit in sec_conversion.keys():
             self.time_units[unit] = self.parameters['unit_t'] / sec_conversion[unit]
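
The new entries mirror yt's usual unit naming: a trailing "h" folds the Hubble
parameter into the conversion, and "hcm" additionally divides by (1 + z) for
comoving coordinates.  A sketch of the arithmetic, with illustrative numbers
(not from any dataset):

    hubble_constant  = 0.7        # hypothetical values, for illustration only
    current_redshift = 2.0

    units = {"kpc": 3.0857e21}    # cm per kpc
    units["kpch"]   = units["kpc"] * hubble_constant
    units["kpchcm"] = units["kpch"] / (1 + current_redshift)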
 

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r d540495296df584372b5c2ab42e90b43c6bdf058 yt/frontends/ramses/fields.py
--- a/yt/frontends/ramses/fields.py
+++ b/yt/frontends/ramses/fields.py
@@ -36,7 +36,9 @@
 import yt.data_objects.universal_fields
 from yt.utilities.physical_constants import \
     boltzmann_constant_cgs, \
-    mass_hydrogen_cgs
+    mass_hydrogen_cgs, \
+    mass_sun_cgs
+import numpy as np
 
 RAMSESFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo, "RFI")
 add_field = RAMSESFieldInfo.add_field
@@ -121,7 +123,7 @@
 KnownRAMSESFields["particle_mass"]._units = r"\mathrm{g}"
 
 def _convertParticleMassMsun(data):
-    return 1.0/1.989e33
+    return 1.0/mass_sun_cgs
 add_field("ParticleMass", function=TranslationFunc("particle_mass"), 
           particle_type=True)
 add_field("ParticleMassMsun",
@@ -133,3 +135,46 @@
     rv *= mass_hydrogen_cgs/boltzmann_constant_cgs
     return rv
 add_field("Temperature", function=_Temperature, units=r"\rm{K}")
+
+
+# We now set up a couple particle fields.  This should eventually be abstracted
+# into a single particle field function that adds them all on and is used
+# across frontends, but that will need to wait until moving to using
+# Coordinates, or vector fields.
+
+def particle_count(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, method = "count")
+    return d
+RAMSESFieldInfo.add_field(("deposit", "%s_count" % "all"),
+         function = particle_count,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Count}" % "all",
+         projection_conversion = '1')
+
+def particle_mass(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    return d
+
+RAMSESFieldInfo.add_field(("deposit", "%s_mass" % "all"),
+         function = particle_mass,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Mass}" % "all",
+         units = r"\mathrm{g}",
+         projected_units = r"\mathrm{g}\/\mathrm{cm}",
+         projection_conversion = 'cm')
+
+def particle_density(field, data):
+    pos = np.column_stack([data["particle_position_%s" % ax] for ax in 'xyz'])
+    d = data.deposit(pos, [data["ParticleMass"]], method = "sum")
+    d /= data["CellVolume"]
+    return d
+
+RAMSESFieldInfo.add_field(("deposit", "%s_density" % "all"),
+         function = particle_density,
+         validators = [ValidateSpatial()],
+         display_name = "\\mathrm{%s Density}" % "all",
+         units = r"\mathrm{g}/\mathrm{cm}^{3}",
+         projected_units = r"\mathrm{g}/\mathrm{cm}^{-2}",
+         projection_conversion = 'cm')

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r d540495296df584372b5c2ab42e90b43c6bdf058 yt/frontends/sph/io.py
--- a/yt/frontends/sph/io.py
+++ b/yt/frontends/sph/io.py
@@ -341,7 +341,7 @@
             else:
                 rv[field] = np.empty(size, dtype="float64")
                 if size == 0: continue
-                rv[field][:] = vals[field]
+                rv[field][:] = vals[field][mask]
         return rv
 
     def _read_particle_selection(self, chunks, selector, fields):
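
The one-line fix above applies the selector mask before copying: rv[field] is
allocated at the selected size, while vals[field] holds every particle read
from disk, so copying without the mask raises a shape error whenever the
selector drops particles.  A plain numpy sketch (names illustrative):

    import numpy as np

    vals_field = np.arange(10.0)              # every particle read from disk
    mask = vals_field > 6.0                   # selector keeps three of them
    size = int(mask.sum())

    rv_field = np.empty(size, dtype="float64")
    rv_field[:] = vals_field[mask]            # fixed: copy only selected ones
    # rv_field[:] = vals_field                # old: broadcast error (10 -> 3)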

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r d540495296df584372b5c2ab42e90b43c6bdf058 yt/geometry/oct_container.pyx
--- a/yt/geometry/oct_container.pyx
+++ b/yt/geometry/oct_container.pyx
@@ -705,13 +705,15 @@
             source = source_fields[key]
             for n in range(dom.n):
                 o = &dom.my_octs[n]
-                for ii in range(8):
-                    # We iterate and check here to keep our counts consistent
-                    # when filling different levels.
-                    if mask[o.domain_ind, ii] == 0: continue
-                    if o.level == level: 
-                        dest[local_filled] = source[o.file_ind, ii]
-                    local_filled += 1
+                for i in range(2):
+                    for j in range(2):
+                        for k in range(2):
+                            ii = ((k*2)+j)*2+i
+                            if mask[o.domain_ind, ii] == 0: continue
+                            if o.level == level:
+                                dest[local_filled] = \
+                                    source[o.file_ind, ii]
+                            local_filled += 1
         return local_filled
 
 cdef class ARTOctreeContainer(RAMSESOctreeContainer):


https://bitbucket.org/yt_analysis/yt/commits/f2995289fe1a/
Changeset:   f2995289fe1a
Branch:      yt-3.0
User:        samskillman
Date:        2013-06-03 22:11:08
Summary:     Quick fix for VR. More work needs to be done to get data_source and
non-grid-based datasets working nicely.
Affected #:  2 files

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r f2995289fe1af1a07a7d2d12802410e374f2a873 yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py
@@ -50,7 +50,7 @@
                   [ 1,  1, -1], [ 1,  1,  0], [ 1,  1,  1] ])
 
 
-def make_vcd(data):
+def make_vcd(data, log=False):
     new_field = np.zeros(np.array(data.shape) + 1, dtype='float64')
     of = data
     new_field[:-1, :-1, :-1] += of
@@ -62,6 +62,8 @@
     new_field[1:, 1:, :-1] += of
     new_field[1:, 1:, 1:] += of
     np.multiply(new_field, 0.125, new_field)
+    if log:
+        new_field = np.log10(new_field)
 
     new_field[:, :, -1] = 2.0*new_field[:, :, -2] - new_field[:, :, -3]
     new_field[:, :, 0] = 2.0*new_field[:, :, 1] - new_field[:, :, 2]
@@ -69,6 +71,9 @@
     new_field[:, 0, :] = 2.0*new_field[:, 1, :] - new_field[:, 2, :]
     new_field[-1, :, :] = 2.0*new_field[-2, :, :] - new_field[-3, :, :]
     new_field[0, :, :] = 2.0*new_field[1, :, :] - new_field[2, :, :]
+
+    if log: 
+        np.power(10.0, new_field, new_field)
     return new_field
 
 class Tree(object):
@@ -269,12 +274,12 @@
             dds = self.current_vcds[self.current_saved_grids.index(grid)]
         else:
             dds = []
-            mask = make_vcd(grid.child_mask)
-            mask = np.clip(mask, 0.0, 1.0)
-            mask[mask<1.0] = np.inf
+            #mask = make_vcd(grid.child_mask)
+            #mask = np.clip(mask, 0.0, 1.0)
+            #mask[mask<1.0] = np.inf
             for i,field in enumerate(self.fields):
-                vcd = make_vcd(grid[field])
-                vcd *= mask
+                vcd = make_vcd(grid[field], log=self.log_fields[i])
+                #vcd *= mask
                 if self.log_fields[i]: vcd = np.log10(vcd)
                 dds.append(vcd)
                 self.current_saved_grids.append(grid)
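
Averaging in log10 space and exponentiating back, as the new log=True path
does, yields the geometric rather than arithmetic mean of the neighboring
cells, which behaves much better for fields spanning many orders of magnitude.
A one-variable sketch of the distinction (illustrative, not the yt routine):

    import numpy as np

    cells = np.array([1.0, 10.0, 100.0, 1000.0])

    arithmetic = cells.mean()                    # 277.75
    geometric  = 10.0 ** np.log10(cells).mean()  # ~31.6, what log=True computes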

diff -r 49f1241d1fd80b4294a520be53e790d1db72fe54 -r f2995289fe1af1a07a7d2d12802410e374f2a873 yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -185,7 +185,7 @@
             transfer_function = ProjectionTransferFunction()
         self.transfer_function = transfer_function
         self.log_fields = log_fields
-        dd = pf.h.all_data()
+        dd = self.pf.h.all_data()
         efields = dd._determine_fields(self.fields)
         if self.log_fields is None:
             self.log_fields = [self.pf._get_field_info(*f).take_log for f in efields]


https://bitbucket.org/yt_analysis/yt/commits/954d1ffcbf04/
Changeset:   954d1ffcbf04
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 22:13:16
Summary:     Merged in samskillman/yt-3.0 (pull request #41)

More VR fixes
Affected #:  2 files

diff -r d540495296df584372b5c2ab42e90b43c6bdf058 -r 954d1ffcbf04c3d1b394c2ea05324d903a9a07cf yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py
@@ -50,7 +50,7 @@
                   [ 1,  1, -1], [ 1,  1,  0], [ 1,  1,  1] ])
 
 
-def make_vcd(data):
+def make_vcd(data, log=False):
     new_field = np.zeros(np.array(data.shape) + 1, dtype='float64')
     of = data
     new_field[:-1, :-1, :-1] += of
@@ -62,6 +62,8 @@
     new_field[1:, 1:, :-1] += of
     new_field[1:, 1:, 1:] += of
     np.multiply(new_field, 0.125, new_field)
+    if log:
+        new_field = np.log10(new_field)
 
     new_field[:, :, -1] = 2.0*new_field[:, :, -2] - new_field[:, :, -3]
     new_field[:, :, 0] = 2.0*new_field[:, :, 1] - new_field[:, :, 2]
@@ -69,6 +71,9 @@
     new_field[:, 0, :] = 2.0*new_field[:, 1, :] - new_field[:, 2, :]
     new_field[-1, :, :] = 2.0*new_field[-2, :, :] - new_field[-3, :, :]
     new_field[0, :, :] = 2.0*new_field[1, :, :] - new_field[2, :, :]
+
+    if log: 
+        np.power(10.0, new_field, new_field)
     return new_field
 
 class Tree(object):
@@ -269,12 +274,12 @@
             dds = self.current_vcds[self.current_saved_grids.index(grid)]
         else:
             dds = []
-            mask = make_vcd(grid.child_mask)
-            mask = np.clip(mask, 0.0, 1.0)
-            mask[mask<1.0] = np.inf
+            #mask = make_vcd(grid.child_mask)
+            #mask = np.clip(mask, 0.0, 1.0)
+            #mask[mask<1.0] = np.inf
             for i,field in enumerate(self.fields):
-                vcd = make_vcd(grid[field])
-                vcd *= mask
+                vcd = make_vcd(grid[field], log=self.log_fields[i])
+                #vcd *= mask
                 if self.log_fields[i]: vcd = np.log10(vcd)
                 dds.append(vcd)
                 self.current_saved_grids.append(grid)

diff -r d540495296df584372b5c2ab42e90b43c6bdf058 -r 954d1ffcbf04c3d1b394c2ea05324d903a9a07cf yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -185,7 +185,7 @@
             transfer_function = ProjectionTransferFunction()
         self.transfer_function = transfer_function
         self.log_fields = log_fields
-        dd = pf.h.all_data()
+        dd = self.pf.h.all_data()
         efields = dd._determine_fields(self.fields)
         if self.log_fields is None:
             self.log_fields = [self.pf._get_field_info(*f).take_log for f in efields]


https://bitbucket.org/yt_analysis/yt/commits/46a03cedb89e/
Changeset:   46a03cedb89e
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 22:17:12
Summary:     Added tag yt-3.0a2 for changeset 954d1ffcbf04
Affected #:  1 file

diff -r 954d1ffcbf04c3d1b394c2ea05324d903a9a07cf -r 46a03cedb89efd230f33bdf2abc2e9595ce2a55e .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -5156,3 +5156,4 @@
 0000000000000000000000000000000000000000 mpi-opaque
 f15825659f5af3ce64aaad30062aff3603cbfb66 hop callback
 0000000000000000000000000000000000000000 hop callback
+954d1ffcbf04c3d1b394c2ea05324d903a9a07cf yt-3.0a2


https://bitbucket.org/yt_analysis/yt/commits/479b20ef010f/
Changeset:   479b20ef010f
Branch:      yt-3.0
User:        MatthewTurk
Date:        2013-06-03 22:23:05
Summary:     Merging in our tag commit from a while back.
Affected #:  1 file

diff -r 46a03cedb89efd230f33bdf2abc2e9595ce2a55e -r 479b20ef010f5f444c2b35754a30576736cd8b75 .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -5156,4 +5156,5 @@
 0000000000000000000000000000000000000000 mpi-opaque
 f15825659f5af3ce64aaad30062aff3603cbfb66 hop callback
 0000000000000000000000000000000000000000 hop callback
+a71dffe4bc813fdadc506ccad9efb632e23dc843 yt-3.0a1
 954d1ffcbf04c3d1b394c2ea05324d903a9a07cf yt-3.0a2

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


