[yt-svn] commit/yt: 100 new changesets

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Thu Mar 14 14:40:18 PDT 2013


100 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/6ad17911e36d/
changeset:   6ad17911e36d
branch:      yt-3.0
user:        MatthewTurk
date:        2012-04-04 13:57:14
summary:     Branching into yt-3.0.
affected #:  1 file

diff -r 2af699ff04a4d28d0494cd18f727a3a0b7f6a2e5 -r 6ad17911e36d824dbbe6c11d294bc9671492d2b9 setup.py
--- a/setup.py
+++ b/setup.py
@@ -78,7 +78,7 @@
 
 import setuptools
 
-VERSION = "2.4dev"
+VERSION = "3.0dev"
 
 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
 


https://bitbucket.org/yt_analysis/yt/commits/284135e173b5/
changeset:   284135e173b5
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-01 18:03:59
summary:     Merging from the volume_refactor bookmark.
affected #:  75 files

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 setup.py
--- a/setup.py
+++ b/setup.py
@@ -76,7 +76,7 @@
 
 import setuptools
 
-VERSION = "2.3dev"
+VERSION = "2.4dev"
 
 if os.path.exists('MANIFEST'): os.remove('MANIFEST')
 

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -32,6 +32,7 @@
 import numpy as na
 import random
 import sys
+import os.path as path
 from collections import defaultdict
 
 from yt.funcs import *
@@ -1360,15 +1361,16 @@
         # The halos are listed in order in the file.
         lines = file("%s.txt" % self.basename)
         locations = []
+        realpath = path.realpath("%s.txt" % self.basename)
         for line in lines:
             line = line.split()
             # Prepend the hdf5 file names with the full path.
             temp = []
             for item in line[1:]:
-                if item[0] == "/":
-                    temp.append(item)
-                else:
-                    temp.append(self.pf.fullpath + '/' + item)
+                # This assumes that the .txt file is in the same place as
+                # the h5 files, which is a reasonable assumption.
+                item = item.split("/")
+                temp.append(path.join(path.dirname(realpath), item[-1]))
             locations.append(temp)
         lines.close()
         return locations

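The path-handling change above anchors the .h5 file locations to the real location of the .txt halo list rather than trusting whatever directory prefix was written into it. The same idiom in isolation (hypothetical helper and file names, assuming, as the commit's comment notes, that the .h5 files sit next to the .txt file):

    import os.path as path

    def locate_sibling_files(txt_filename, listed_items):
        # Resolve symlinks so entries are anchored to the real location
        # of the .txt file, not to the current working directory.
        base_dir = path.dirname(path.realpath(txt_filename))
        # Discard whatever directory prefix was recorded in the file and
        # rebase each basename onto the directory holding the .txt file.
        return [path.join(base_dir, item.split("/")[-1])
                for item in listed_items]

    # e.g. locate_sibling_files("halos.txt", ["/stale/prefix/halos_0.h5"])
    # -> ["<real dir of halos.txt>/halos_0.h5"]
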
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/analysis_modules/halo_merger_tree/merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/merger_tree.py
+++ b/yt/analysis_modules/halo_merger_tree/merger_tree.py
@@ -86,6 +86,9 @@
 "ChildHaloID3", "ChildHaloFrac3",
 "ChildHaloID4", "ChildHaloFrac4"]
 
+NumNeighbors = 15
+NumDB = 5
+
 class DatabaseFunctions(object):
     # Common database functions so they don't have to be repeated.
     def _open_database(self):
@@ -366,9 +369,9 @@
         child_points = na.array(child_points)
         fKD.pos = na.asfortranarray(child_points.T)
         fKD.qv = na.empty(3, dtype='float64')
-        fKD.dist = na.empty(5, dtype='float64')
-        fKD.tags = na.empty(5, dtype='int64')
-        fKD.nn = 5
+        fKD.dist = na.empty(NumNeighbors, dtype='float64')
+        fKD.tags = na.empty(NumNeighbors, dtype='int64')
+        fKD.nn = NumNeighbors
         fKD.sort = True
         fKD.rearrange = True
         create_tree(0)
@@ -395,7 +398,7 @@
                 nIDs.append(n)
             # We need to fill in fake halos if there aren't enough halos,
             # which can happen at high redshifts.
-            while len(nIDs) < 5:
+            while len(nIDs) < NumNeighbors:
                 nIDs.append(-1)
             candidates[row[0]] = nIDs
         
@@ -405,12 +408,12 @@
         self.candidates = candidates
         
         # This stores the masses contributed to each child candidate.
-        self.child_mass_arr = na.zeros(len(candidates)*5, dtype='float64')
+        self.child_mass_arr = na.zeros(len(candidates)*NumNeighbors, dtype='float64')
         # Records where to put the entries in the above array.
         self.child_mass_loc = defaultdict(dict)
         for i,halo in enumerate(sorted(candidates)):
             for j, child in enumerate(candidates[halo]):
-                self.child_mass_loc[halo][child] = i*5 + j
+                self.child_mass_loc[halo][child] = i*NumNeighbors + j
 
     def _build_h5_refs(self, filename):
         # For this snapshot, add lists of file names that contain the
@@ -618,8 +621,8 @@
         result = self.cursor.fetchone()
         while result:
             mass = result[0]
-            self.child_mass_arr[mark:mark+5] /= mass
-            mark += 5
+            self.child_mass_arr[mark:mark+NumNeighbors] /= mass
+            mark += NumNeighbors
             result = self.cursor.fetchone()
         
         # Get the global ID for the SnapHaloID=0 from the child, this will
@@ -642,14 +645,15 @@
                 # We need to get the GlobalHaloID for this child.
                 child_globalID = baseChildID + child
                 child_indexes.append(child_globalID)
-                child_per.append(self.child_mass_arr[i*5 + j])
+                child_per.append(self.child_mass_arr[i*NumNeighbors + j])
             # Sort by percentages, descending.
             child_per, child_indexes = zip(*sorted(zip(child_per, child_indexes), reverse=True))
             values = []
-            for pair in zip(child_indexes, child_per):
+            for pair_count, pair in enumerate(zip(child_indexes, child_per)):
+                if pair_count == NumDB: break
                 values.extend([int(pair[0]), float(pair[1])])
             #values.extend([parent_currt, parent_halo])
-            # This has the child ID, child percent listed five times, followed
+            # This has the child ID, child percent listed NumDB times, followed
             # by the currt and this parent halo ID (SnapHaloID).
             #values = tuple(values)
             self.write_values.append(values)
@@ -841,7 +845,7 @@
          [1609, 0.0]]
         """
         parents = []
-        for i in range(5):
+        for i in range(NumDB):
             string = "SELECT GlobalHaloID, ChildHaloFrac%d FROM Halos\
             WHERE ChildHaloID%d=%d;" % (i, i, GlobalHaloID)
             self.cursor.execute(string)

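The merger-tree change replaces hard-coded fives with two named constants, decoupling how many candidate children are examined (NumNeighbors) from how many fit the fixed database schema (NumDB). A minimal sketch of that pattern, with a plain sort standing in for the Fortran kD-tree query (the constants mirror the commit; the helper and data are hypothetical):

    NumNeighbors = 15  # candidate children examined per parent halo
    NumDB = 5          # candidates actually written to the database

    def top_candidates(fractions):
        # fractions: mass fraction contributed to each candidate child;
        # pad with fake (-1) IDs when too few candidates exist.
        ids = list(range(len(fractions))) + [-1] * (NumNeighbors - len(fractions))
        fracs = list(fractions) + [0.0] * (NumNeighbors - len(fractions))
        # Sort by contributed fraction, descending, then keep only as
        # many entries as the database schema has columns for.
        return sorted(zip(fracs, ids), reverse=True)[:NumDB]

    print top_candidates([0.1, 0.6, 0.05, 0.2])
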
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/analysis_modules/halo_profiler/api.py
--- a/yt/analysis_modules/halo_profiler/api.py
+++ b/yt/analysis_modules/halo_profiler/api.py
@@ -34,5 +34,5 @@
 from .multi_halo_profiler import \
     HaloProfiler, \
     FakeProfile, \
-    shift_projections, \
+    get_halo_sphere, \
     standard_fields

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
@@ -46,7 +46,8 @@
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, \
     parallel_blocking_call, \
-    parallel_root_only
+    parallel_root_only, \
+    parallel_objects
 from yt.visualization.fixed_resolution import \
     FixedResolutionBuffer
 from yt.visualization.image_writer import write_image
@@ -66,7 +67,7 @@
                  recenter = None,
                  profile_output_dir='radial_profiles', projection_output_dir='projections',
                  projection_width=8.0, projection_width_units='mpc', project_at_level='max',
-                 velocity_center=['bulk', 'halo'], filter_quantities=['id','center'], 
+                 velocity_center=['bulk', 'halo'], filter_quantities=['id', 'center', 'r_max'], 
                  use_critical_density=False):
         r"""Initialize a Halo Profiler object.
         
@@ -184,7 +185,6 @@
         self._halo_filters = []
         self.all_halos = []
         self.filtered_halos = []
-        self._projection_halo_list = []
 
         # Create output directory if specified
         if self.output_dir is not None:
@@ -351,7 +351,8 @@
             
         """
 
-        self.profile_fields.append({'field':field, 'weight_field':weight_field, 'accumulation':accumulation})
+        self.profile_fields.append({'field':field, 'weight_field':weight_field, 
+                                    'accumulation':accumulation})
 
     def add_projection(self, field, weight_field=None, cmap='algae'):
         r"""Make a projection of the specified field.
@@ -453,7 +454,7 @@
 
         # Profile all halos.
         updated_halos = []
-        for halo in self._get_objs('all_halos', round_robin=True):
+        for halo in parallel_objects(self.all_halos, -1):
             # Apply prefilters to avoid profiling unwanted halos.
             filter_result = True
             haloQuantities = {}
@@ -509,7 +510,7 @@
 
     def _get_halo_profile(self, halo, filename, virial_filter=True,
             force_write=False):
-        """Profile a single halo and write profile data to a file.
+        r"""Profile a single halo and write profile data to a file.
         If the file already exists, read profile data from the file.
         Return a dictionary of id, center, and virial quantities if virial_filter is True.
         """
@@ -527,39 +528,9 @@
                 mylog.error("Skipping halo with r_max / r_min = %f." % (halo['r_max']/r_min))
                 return None
 
-            sphere = self.pf.h.sphere(halo['center'], halo['r_max']/self.pf.units['mpc'])
-            if len(sphere._grids) == 0: return None
-            new_sphere = False
-
-            if self.recenter:
-                old = halo['center']
-                if self.recenter in centering_registry:
-                    new_x, new_y, new_z = \
-                        centering_registry[self.recenter](sphere)
-                else:
-                    # user supplied function
-                    new_x, new_y, new_z = self.recenter(sphere)
-                if new_x < self.pf.domain_left_edge[0] or \
-                        new_y < self.pf.domain_left_edge[1] or \
-                        new_z < self.pf.domain_left_edge[2]:
-                    mylog.info("Recentering rejected, skipping halo %d" % \
-                        halo['id'])
-                    return None
-                halo['center'] = [new_x, new_y, new_z]
-                d = self.pf['kpc'] * periodic_dist(old, halo['center'],
-                    self.pf.domain_right_edge - self.pf.domain_left_edge)
-                mylog.info("Recentered halo %d %1.3e kpc away." % (halo['id'], d))
-                # Expand the halo to account for recentering. 
-                halo['r_max'] += d / 1000 # d is in kpc -> want mpc
-                new_sphere = True
-
-            if new_sphere:
-                # Temporary solution to memory leak.
-                for g in self.pf.h.grids:
-                    g.clear_data()
-                sphere.clear_data()
-                del sphere
-                sphere = self.pf.h.sphere(halo['center'], halo['r_max']/self.pf.units['mpc'])
+            # get a sphere object to profile
+            sphere = get_halo_sphere(halo, self.pf, recenter=self.recenter)
+            if sphere is None: return None
 
             if self._need_bulk_velocity:
                 # Set bulk velocity to zero out radial velocity profiles.
@@ -567,7 +538,9 @@
                     if self.velocity_center[1] == 'halo':
                         sphere.set_field_parameter('bulk_velocity', halo['velocity'])
                     elif self.velocity_center[1] == 'sphere':
-                        sphere.set_field_parameter('bulk_velocity', sphere.quantities['BulkVelocity'](lazy_reader=False, preload=False))
+                        sphere.set_field_parameter('bulk_velocity', 
+                                                   sphere.quantities['BulkVelocity'](lazy_reader=False, 
+                                                                                     preload=False))
                     else:
                         mylog.error("Invalid parameter: VelocityCenter.")
                 elif self.velocity_center[0] == 'max':
@@ -645,18 +618,18 @@
 
         # Get list of halos for projecting.
         if halo_list == 'filtered':
-            self._halo_projection_list = self.filtered_halos
+            halo_projection_list = self.filtered_halos
         elif halo_list == 'all':
-            self._halo_projection_list = self.all_halos
+            halo_projection_list = self.all_halos
         elif isinstance(halo_list, types.StringType):
-            self._halo_projection_list = self._read_halo_list(halo_list)
+            halo_projection_list = self._read_halo_list(halo_list)
         elif isinstance(halo_list, types.ListType):
-            self._halo_projection_list = halo_list
+            halo_projection_list = halo_list
         else:
             mylog.error("Keyword, halo_list', must be 'filtered', 'all', a filename, or an actual list.")
             return
 
-        if len(self._halo_projection_list) == 0:
+        if len(halo_projection_list) == 0:
             mylog.error("Halo list for projections is empty.")
             return
 
@@ -665,7 +638,8 @@
             proj_level = self.pf.h.max_level
         else:
             proj_level = int(self.project_at_level)
-        proj_dx = self.pf.units[self.projection_width_units] / self.pf.parameters['TopGridDimensions'][0] / \
+        proj_dx = self.pf.units[self.projection_width_units] / \
+            self.pf.parameters['TopGridDimensions'][0] / \
             (self.pf.parameters['RefineBy']**proj_level)
         projectionResolution = int(self.projection_width / proj_dx)
 
@@ -678,21 +652,25 @@
             my_output_dir = "%s/%s" % (self.pf.fullpath, self.projection_output_dir)
         self.__check_directory(my_output_dir)
 
-        center = [0.5 * (self.pf.parameters['DomainLeftEdge'][w] + self.pf.parameters['DomainRightEdge'][w])
+        center = [0.5 * (self.pf.parameters['DomainLeftEdge'][w] + 
+                         self.pf.parameters['DomainRightEdge'][w])
                   for w in range(self.pf.parameters['TopGridRank'])]
 
-        for halo in self._get_objs('_halo_projection_list', round_robin=True):
+        for halo in parallel_objects(halo_projection_list, -1):
             if halo is None:
                 continue
             # Check if region will overlap domain edge.
             # Using non-periodic regions is faster than using periodic ones.
-            leftEdge = [(halo['center'][w] - 0.5 * self.projection_width/self.pf.units[self.projection_width_units])
+            leftEdge = [(halo['center'][w] - 
+                         0.5 * self.projection_width/self.pf.units[self.projection_width_units])
                         for w in range(len(halo['center']))]
-            rightEdge = [(halo['center'][w] + 0.5 * self.projection_width/self.pf.units[self.projection_width_units])
+            rightEdge = [(halo['center'][w] + 
+                          0.5 * self.projection_width/self.pf.units[self.projection_width_units])
                          for w in range(len(halo['center']))]
 
             mylog.info("Projecting halo %04d in region: [%f, %f, %f] to [%f, %f, %f]." %
-                       (halo['id'], leftEdge[0], leftEdge[1], leftEdge[2], rightEdge[0], rightEdge[1], rightEdge[2]))
+                       (halo['id'], leftEdge[0], leftEdge[1], leftEdge[2], 
+                        rightEdge[0], rightEdge[1], rightEdge[2]))
 
             need_per = False
             for w in range(len(halo['center'])):
@@ -719,13 +697,13 @@
                 for hp in self.projection_fields:
                     projections.append(self.pf.h.proj(w, hp['field'], 
                                                       weight_field=hp['weight_field'], 
-                                                      data_source=region, center=halo['center'],
+                                                      source=region, center=halo['center'],
                                                       serialize=False))
                 
                 # Set x and y limits, shift image if it overlaps domain boundary.
                 if need_per:
                     pw = self.projection_width/self.pf.units[self.projection_width_units]
-                    #shift_projections(self.pf, projections, halo['center'], center, w)
+                    _shift_projections(self.pf, projections, halo['center'], center, w)
                     # Projection has now been shifted to center of box.
                     proj_left = [center[x_axis]-0.5*pw, center[y_axis]-0.5*pw]
                     proj_right = [center[x_axis]+0.5*pw, center[y_axis]+0.5*pw]
@@ -756,11 +734,85 @@
                         if save_images:
                             filename = "%s/Halo_%04d_%s_%s.png" % (my_output_dir, halo['id'], 
                                                                    dataset_name, axis_labels[w])
-                            write_image(na.log10(frb[hp['field']]), filename, cmap_name=hp['cmap'])
+                            if (frb[hp['field']] != 0).any():
+                                write_image(na.log10(frb[hp['field']]), filename, cmap_name=hp['cmap'])
+                            else:
+                                mylog.info('Projection of %s for halo %d is all zeros, skipping image.' %
+                                            (hp['field'], halo['id']))
                     if save_cube: output.close()
 
             del region
 
+    @parallel_blocking_call
+    def analyze_halo_spheres(self, analysis_function, halo_list='filtered',
+                             analysis_output_dir=None):
+        r"""Perform custom analysis on all halos.
+        
+        This will loop through all halos on the HaloProfiler's list,
+        creating a sphere object for each halo and passing that sphere 
+        to the provided analysis function.
+        
+        Parameters
+        ----------
+        analysis_function : function
+            A function taking two arguments, the halo dictionary, and a 
+            sphere object.
+            Example function to calculate total mass of halo:
+                def my_analysis(halo, sphere):
+                    total_mass = sphere.quantities['TotalMass']()
+                    print total_mass
+        halo_list : {'filtered', 'all'}
+            Which set of halos to analyze, either ones passed by the
+            halo filters (if enabled/added), or all halos.
+            Default='filtered'.
+        analysis_output_dir : string, optional
+            If specified, this directory will be created inside the dataset's
+            directory to contain any output from the analysis function.  Default: None.
+
+        Examples
+        --------
+        >>> hp.analyze_halo_spheres(my_analysis, halo_list="filtered",
+                                    analysis_output_dir='special_analysis')
+        
+        """
+
+        # Get list of halos for analysis.
+        if halo_list == 'filtered':
+            halo_analysis_list = self.filtered_halos
+        elif halo_list == 'all':
+            halo_analysis_list = self.all_halos
+        elif isinstance(halo_list, types.StringType):
+            halo_analysis_list = self._read_halo_list(halo_list)
+        elif isinstance(halo_list, types.ListType):
+            halo_analysis_list = halo_list
+        else:
+            mylog.error("Keyword 'halo_list' must be 'filtered', 'all', a filename, or an actual list.")
+            return
+
+        if len(halo_analysis_list) == 0:
+            mylog.error("Halo list for analysis is empty.")
+            return
+
+        # Create output directory.
+        if analysis_output_dir is not None:
+            if self.output_dir is not None:
+                self.__check_directory("%s/%s" % (self.output_dir, self.pf.directory))
+                my_output_dir = "%s/%s/%s" % (self.output_dir, self.pf.directory, 
+                                              analysis_output_dir)
+            else:
+                my_output_dir = "%s/%s" % (self.pf.fullpath, analysis_output_dir)
+            self.__check_directory(my_output_dir)
+
+        for halo in parallel_objects(halo_analysis_list, -1):
+            if halo is None: continue
+
+            # Get a sphere object to analyze.
+            sphere = get_halo_sphere(halo, self.pf, recenter=self.recenter)
+            if sphere is None: continue
+
+            # Call the given analysis function.
+            analysis_function(halo, sphere)
+
     def _add_actual_overdensity(self, profile):
         "Calculate overdensity from TotalMassMsun and CellVolume fields."
 
@@ -917,7 +969,8 @@
     def _run_hop(self, hop_file):
         "Run hop to get halos."
 
-        hop_results = self.halo_finder_function(self.pf, *self.halo_finder_args, **self.halo_finder_kwargs)
+        hop_results = self.halo_finder_function(self.pf, *self.halo_finder_args, 
+                                                **self.halo_finder_kwargs)
         hop_results.write_out(hop_file)
 
         del hop_results
@@ -989,7 +1042,95 @@
         else:
             os.mkdir(my_output_dir)
 
-def shift_projections(pf, projections, oldCenter, newCenter, axis):
+def get_halo_sphere(halo, pf, recenter=None):
+    r"""Returns a sphere object for a given halo.
+        
+    With a dictionary containing halo properties, such as center 
+    and r_max, this creates a sphere object and optionally 
+    recenters and recreates the sphere using a recentering function.
+    This is to be used primarily to make spheres for a set of halos 
+    loaded by the HaloProfiler.
+    
+    Parameters
+    ----------
+    halo : dict, required
+        The dictionary containing halo properties used to make the sphere.
+        Required entries:
+            center : list with center coordinates.
+            r_max : sphere radius in Mpc.
+    pf : parameter file object, required
+        The parameter file from which the sphere will be made.
+    recenter : {None, string or function}
+        The exact location of the sphere center can significantly affect 
+        radial profiles.  The halo center loaded by the HaloProfiler will 
+        typically be the dark matter center of mass calculated by a halo 
+        finder.  However, this may not be the best location for centering 
+        profiles of baryon quantities.  For example, one may want to center 
+        on the maximum density.
+        If recenter is given as a string, one of the existing recentering 
+        functions will be used:
+            Min_Dark_Matter_Density : location of minimum dark matter density
+            Max_Dark_Matter_Density : location of maximum dark matter density
+            CoM_Dark_Matter_Density : dark matter center of mass
+            Min_Gas_Density : location of minimum gas density
+            Max_Gas_Density : location of maximum gas density
+            CoM_Gas_Density : gas center of mass
+            Min_Total_Density : location of minimum total density
+            Max_Total_Density : location of maximum total density
+            CoM_Total_Density : total center of mass
+            Min_Temperature : location of minimum temperature
+            Max_Temperature : location of maximum temperature
+        Alternately, a function can be supplied for custom recentering.
+        The function should take only one argument, a sphere object.
+            Example function:
+                def my_center_of_mass(data):
+                   my_x, my_y, my_z = data.quantities['CenterOfMass']()
+                   return (my_x, my_y, my_z)
+
+        Examples: this should primarily be used with the halo list of the HaloProfiler.
+        This is an example with an abstract halo, assuming a pre-defined pf.
+        >>> halo = {'center': [0.5, 0.5, 0.5], 'r_max': 1.0}
+        >>> my_sphere = get_halo_sphere(halo, pf, recenter='Max_Gas_Density')
+        >>> # Assuming the above example function has been defined.
+        >>> my_sphere = get_halo_sphere(halo, pf, recenter=my_center_of_mass)
+    """
+        
+    sphere = pf.h.sphere(halo['center'], halo['r_max']/pf.units['mpc'])
+    if len(sphere._grids) == 0: return None
+    new_sphere = False
+
+    if recenter:
+        old = halo['center']
+        if recenter in centering_registry:
+            new_x, new_y, new_z = \
+                centering_registry[recenter](sphere)
+        else:
+            # user supplied function
+            new_x, new_y, new_z = recenter(sphere)
+        if new_x < pf.domain_left_edge[0] or \
+                new_y < pf.domain_left_edge[1] or \
+                new_z < pf.domain_left_edge[2]:
+            mylog.info("Recentering rejected, skipping halo %d" % \
+                halo['id'])
+            return None
+        halo['center'] = [new_x, new_y, new_z]
+        d = pf['kpc'] * periodic_dist(old, halo['center'],
+            pf.domain_right_edge - pf.domain_left_edge)
+        mylog.info("Recentered halo %d %1.3e kpc away." % (halo['id'], d))
+        # Expand the halo to account for recentering. 
+        halo['r_max'] += d / 1000 # d is in kpc -> want mpc
+        new_sphere = True
+
+    if new_sphere:
+        # Temporary solution to memory leak.
+        for g in pf.h.grids:
+            g.clear_data()
+        sphere.clear_data()
+        del sphere
+        sphere = pf.h.sphere(halo['center'], halo['r_max']/pf.units['mpc'])
+    return sphere
+
+def _shift_projections(pf, projections, oldCenter, newCenter, axis):
     """
     Shift projection data around.
     This is necessary when projecting a periodic region.
@@ -1059,14 +1200,19 @@
         add2_y_weight_field = plot['weight_field'][plot['py'] - 0.5 * plot['pdy'] < 0]
 
         # Add the hanging cells back to the projection data.
-        plot.field_data['px'] = na.concatenate([plot['px'], add_x_px, add_y_px, add2_x_px, add2_y_px])
-        plot.field_data['py'] = na.concatenate([plot['py'], add_x_py, add_y_py, add2_x_py, add2_y_py])
-        plot.field_data['pdx'] = na.concatenate([plot['pdx'], add_x_pdx, add_y_pdx, add2_x_pdx, add2_y_pdx])
-        plot.field_data['pdy'] = na.concatenate([plot['pdy'], add_x_pdy, add_y_pdy, add2_x_pdy, add2_y_pdy])
-        plot.field_data[field] = na.concatenate([plot[field], add_x_field, add_y_field, add2_x_field, add2_y_field])
+        plot.field_data['px'] = na.concatenate([plot['px'], add_x_px, add_y_px, 
+                                                add2_x_px, add2_y_px])
+        plot.field_data['py'] = na.concatenate([plot['py'], add_x_py, add_y_py, 
+                                                add2_x_py, add2_y_py])
+        plot.field_data['pdx'] = na.concatenate([plot['pdx'], add_x_pdx, add_y_pdx, 
+                                                 add2_x_pdx, add2_y_pdx])
+        plot.field_data['pdy'] = na.concatenate([plot['pdy'], add_x_pdy, add_y_pdy, 
+                                                 add2_x_pdy, add2_y_pdy])
+        plot.field_data[field] = na.concatenate([plot[field], add_x_field, add_y_field, 
+                                                 add2_x_field, add2_y_field])
         plot.field_data['weight_field'] = na.concatenate([plot['weight_field'],
-                                                    add_x_weight_field, add_y_weight_field, 
-                                                    add2_x_weight_field, add2_y_weight_field])
+                                                          add_x_weight_field, add_y_weight_field, 
+                                                          add2_x_weight_field, add2_y_weight_field])
 
         # Delete original copies of hanging cells.
         del add_x_px, add_y_px, add2_x_px, add2_y_px

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/analysis_modules/star_analysis/sfr_spectrum.py
--- a/yt/analysis_modules/star_analysis/sfr_spectrum.py
+++ b/yt/analysis_modules/star_analysis/sfr_spectrum.py
@@ -96,6 +96,8 @@
             self._pf.current_redshift) # seconds
         # Build the distribution.
         self.build_dist()
+        # Attach some convenience arrays.
+        self.attach_arrays()
 
     def build_dist(self):
         """
@@ -127,6 +129,47 @@
         # We will want the time taken between bins.
         self.time_bins_dt = self.time_bins[1:] - self.time_bins[:-1]
     
+    def attach_arrays(self):
+        """
+        Attach convenience arrays to the class for easy access.
+        """
+        if self.mode == 'data_source':
+            try:
+                vol = self._data_source.volume('mpc')
+            except AttributeError:
+                # If we're here, this is probably a HOPHalo object, and we
+                # can get the volume this way.
+                ds = self._data_source.get_sphere()
+                vol = ds.volume('mpc')
+        elif self.mode == 'provided':
+            vol = self.volume
+        tc = self._pf["Time"]
+        self.time = []
+        self.lookback_time = []
+        self.redshift = []
+        self.Msol_yr = []
+        self.Msol_yr_vol = []
+        self.Msol = []
+        self.Msol_cumulative = []
+        # Use the center of the time_bin, not the left edge.
+        for i, time in enumerate((self.time_bins[1:] + self.time_bins[:-1])/2.):
+            self.time.append(time * tc / YEAR)
+            self.lookback_time.append((self.time_now - time * tc)/YEAR)
+            self.redshift.append(self.cosm.ComputeRedshiftFromTime(time * tc))
+            self.Msol_yr.append(self.mass_bins[i] / \
+                (self.time_bins_dt[i] * tc / YEAR))
+            self.Msol_yr_vol.append(self.mass_bins[i] / \
+                (self.time_bins_dt[i] * tc / YEAR) / vol)
+            self.Msol.append(self.mass_bins[i])
+            self.Msol_cumulative.append(self.cum_mass_bins[i])
+        self.time = na.array(self.time)
+        self.lookback_time = na.array(self.lookback_time)
+        self.redshift = na.array(self.redshift)
+        self.Msol_yr = na.array(self.Msol_yr)
+        self.Msol_yr_vol = na.array(self.Msol_yr_vol)
+        self.Msol = na.array(self.Msol)
+        self.Msol_cumulative = na.array(self.Msol_cumulative)
+    
     def write_out(self, name="StarFormationRate.out"):
         r"""Write out the star analysis to a text file *name*. The columns are in
         order.
@@ -150,31 +193,21 @@
         >>> sfr.write_out("stars-SFR.out")
         """
         fp = open(name, "w")
-        if self.mode == 'data_source':
-            try:
-                vol = self._data_source.volume('mpc')
-            except AttributeError:
-                # If we're here, this is probably a HOPHalo object, and we
-                # can get the volume this way.
-                ds = self._data_source.get_sphere()
-                vol = ds.volume('mpc')
-        elif self.mode == 'provided':
-            vol = self.volume
-        tc = self._pf["Time"]
-        # Use the center of the time_bin, not the left edge.
         fp.write("#time\tlookback\tredshift\tMsol/yr\tMsol/yr/Mpc3\tMsol\tcumMsol\t\n")
-        for i, time in enumerate((self.time_bins[1:] + self.time_bins[:-1])/2.):
+        for i, time in enumerate(self.time):
             line = "%1.5e %1.5e %1.5e %1.5e %1.5e %1.5e %1.5e\n" % \
-            (time * tc / YEAR, # Time
-            (self.time_now - time * tc)/YEAR, # Lookback time
-            self.cosm.ComputeRedshiftFromTime(time * tc), # Redshift
-            self.mass_bins[i] / (self.time_bins_dt[i] * tc / YEAR), # Msol/yr
-            self.mass_bins[i] / (self.time_bins_dt[i] * tc / YEAR) / vol, # Msol/yr/vol
-            self.mass_bins[i], # Msol in bin
-            self.cum_mass_bins[i]) # cumulative
+            (time, # Time
+            self.lookback_time[i], # Lookback time
+            self.redshift[i], # Redshift
+            self.Msol_yr[i], # Msol/yr
+            self.Msol_yr_vol[i], # Msol/yr/vol
+            self.Msol[i], # Msol in bin
+            self.Msol_cumulative[i]) # cumulative
             fp.write(line)
         fp.close()
 
+#### Begin Synthetic Spectrum Stuff. ####
+
 CHABRIER = {
 "Z0001" : "bc2003_hr_m22_chab_ssp.ised.h5", #/* 0.5% */
 "Z0004" : "bc2003_hr_m32_chab_ssp.ised.h5", #/* 2% */

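With the new attach_arrays method, the star formation history is available as numpy arrays on the object itself instead of only through write_out. A short usage sketch, assuming a pre-defined pf and sphere data source; the import path below follows the yt 2.x analysis-module layout and is an assumption:

    from yt.analysis_modules.star_analysis.api import StarFormationRate

    # 'pf' and 'sp' (a sphere data source) are assumed to exist already.
    sfr = StarFormationRate(pf, data_source=sp)

    # The attached arrays line up bin for bin: time and lookback_time in
    # years, redshift, Msol_yr, Msol_yr_vol, Msol, and Msol_cumulative.
    peak = sfr.Msol_yr.argmax()
    print sfr.redshift[peak], sfr.Msol_yr[peak]
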
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -42,6 +42,7 @@
     __global_parallel_size = '1',
     __topcomm_parallel_rank = '0',
     __topcomm_parallel_size = '1',
+    __command_line = 'False',
     storeparameterfiles = 'False',
     parameterfilestore = 'parameter_files.csv',
     maximumstoredpfs = '500',

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -54,6 +54,8 @@
     TrilinearFieldInterpolator
 from yt.utilities.parameter_file_storage import \
     ParameterFileStore
+from yt.utilities.minimal_representation import \
+    MinimalProjectionData
 
 from .derived_quantities import DerivedQuantityCollection
 from .field_info_container import \
@@ -89,6 +91,20 @@
         return tr
     return save_state
 
+def restore_field_information_state(func):
+    """
+    A decorator that takes a function with the API of (self, grid, field)
+    and ensures that after the function is called, the field_parameters will
+    be returned to normal.
+    """
+    def save_state(self, grid, field=None, *args, **kwargs):
+        old_params = grid.field_parameters
+        grid.field_parameters = self.field_parameters
+        tr = func(self, grid, field, *args, **kwargs)
+        grid.field_parameters = old_params
+        return tr
+    return save_state
+
 def cache_mask(func):
     """
     For computationally intensive indexing operations, we can cache
@@ -212,7 +228,7 @@
         self._point_indices = {}
         self._vc_data = {}
         for key, val in kwargs.items():
-            mylog.info("Setting %s to %s", key, val)
+            mylog.debug("Setting %s to %s", key, val)
             self.set_field_parameter(key, val)
 
     def __set_default_field_parameters(self):
@@ -382,9 +398,10 @@
                      [self.field_parameters])
         return (_reconstruct_object, args)
 
-    def __repr__(self):
+    def __repr__(self, clean = False):
         # We'll do this the slow way to be clear what's going on
-        s = "%s (%s): " % (self.__class__.__name__, self.pf)
+        if clean: s = "%s: " % (self.__class__.__name__)
+        else: s = "%s (%s): " % (self.__class__.__name__, self.pf)
         s += ", ".join(["%s=%s" % (i, getattr(self,i))
                        for i in self._con_args])
         return s
@@ -811,6 +828,38 @@
             self[field] = temp_data[field]
 
     def to_frb(self, width, resolution, center = None):
+        r"""This function returns a FixedResolutionBuffer generated from this
+        object.
+
+        A FixedResolutionBuffer is an object that accepts a variable-resolution
+        2D object and transforms it into an NxM bitmap that can be plotted,
+        examined or processed.  This is a convenience function to return an FRB
+        directly from an existing 2D data object.
+
+        Parameters
+        ----------
+        width : width specifier
+            This can either be a floating point value, in the native domain
+            units of the simulation, or a tuple of the (value, unit) style.
+            This will be the width of the FRB.
+        resolution : int or tuple of ints
+            The number of pixels on a side of the final FRB.
+        center : array-like of floats, optional
+            The center of the FRB.  If not specified, defaults to the center of
+            the current object.
+
+        Returns
+        -------
+        frb : :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`
+            A fixed resolution buffer, which can be queried for fields.
+
+        Examples
+        --------
+
+        >>> proj = pf.h.proj(0, "Density")
+        >>> frb = proj.to_frb( (100.0, 'kpc'), 1024)
+        >>> write_image(na.log10(frb["Density"]), 'density_100kpc.png')
+        """
         if center is None:
             center = self.get_field_parameter("center")
             if center is None:
@@ -1221,6 +1270,52 @@
         return "%s/c%s_L%s" % \
             (self._top_node, cen_name, L_name)
 
+    def to_frb(self, width, resolution):
+        r"""This function returns an ObliqueFixedResolutionBuffer generated
+        from this object.
+
+        An ObliqueFixedResolutionBuffer is an object that accepts a
+        variable-resolution 2D object and transforms it into an NxM bitmap that
+        can be plotted, examined or processed.  This is a convenience function
+        to return an FRB directly from an existing 2D data object.  Unlike the
+        corresponding to_frb function for other AMR2DData objects, this does
+        not accept a 'center' parameter as it is assumed to be centered at the
+        center of the cutting plane.
+
+        Parameters
+        ----------
+        width : width specifier
+            This can either be a floating point value, in the native domain
+            units of the simulation, or a tuple of the (value, unit) style.
+            This will be the width of the FRB.
+        resolution : int or tuple of ints
+            The number of pixels on a side of the final FRB.
+
+        Returns
+        -------
+        frb : :class:`~yt.visualization.fixed_resolution.ObliqueFixedResolutionBuffer`
+            A fixed resolution buffer, which can be queried for fields.
+
+        Examples
+        --------
+
+        >>> v, c = pf.h.find_max("Density")
+        >>> sp = pf.h.sphere(c, (100.0, 'au'))
+        >>> L = sp.quantities["AngularMomentumVector"]()
+        >>> cutting = pf.h.cutting(L, c)
+        >>> frb = cutting.to_frb( (1.0, 'pc'), 1024)
+        >>> write_image(na.log10(frb["Density"]), 'density_1pc.png')
+        """
+        if iterable(width):
+            w, u = width
+            width = w/self.pf[u]
+        if not iterable(resolution):
+            resolution = (resolution, resolution)
+        from yt.visualization.fixed_resolution import ObliqueFixedResolutionBuffer
+        bounds = (-width/2.0, width/2.0, -width/2.0, width/2.0)
+        frb = ObliqueFixedResolutionBuffer(self, bounds, resolution)
+        return frb
+
 class AMRFixedResCuttingPlaneBase(AMR2DData):
     """
     AMRFixedResCuttingPlaneBase is an oblique plane through the data,
@@ -1516,6 +1611,10 @@
         self._refresh_data()
         if self._okay_to_serialize and self.serialize: self._serialize(node_name=self._node_name)
 
+    @property
+    def _mrep(self):
+        return MinimalProjectionData(self)
+
     def _convert_field_name(self, field):
         if field == "weight_field": return "weight_field_%s" % self._weight
         if field in self._key_fields: return field
@@ -2443,14 +2542,8 @@
         verts = []
         samples = []
         for i, g in enumerate(self._get_grid_objs()):
-            mask = self._get_cut_mask(g) * g.child_mask
-            vals = g.get_vertex_centered_data(field)
-            if sample_values is not None:
-                svals = g.get_vertex_centered_data(sample_values)
-            else:
-                svals = None
-            my_verts = march_cubes_grid(value, vals, mask, g.LeftEdge, g.dds,
-                                        svals)
+            my_verts = self._extract_isocontours_from_grid(
+                            g, field, value, sample_values)
             if sample_values is not None:
                 my_verts, svals = my_verts
                 samples.append(svals)
@@ -2477,6 +2570,20 @@
             return verts, samples
         return verts
 
+
+    @restore_grid_state
+    def _extract_isocontours_from_grid(self, grid, field, value,
+                                       sample_values = None):
+        mask = self._get_cut_mask(grid) * grid.child_mask
+        vals = grid.get_vertex_centered_data(field)
+        if sample_values is not None:
+            svals = grid.get_vertex_centered_data(sample_values)
+        else:
+            svals = None
+        my_verts = march_cubes_grid(value, vals, mask, grid.LeftEdge,
+                                    grid.dds, svals)
+        return my_verts
+
     def calculate_isocontour_flux(self, field, value,
                     field_x, field_y, field_z, fluxing_field = None):
         r"""This identifies isocontours on a cell-by-cell basis, with no
@@ -2543,19 +2650,25 @@
         """
         flux = 0.0
         for g in self._get_grid_objs():
-            mask = self._get_cut_mask(g) * g.child_mask
-            vals = g.get_vertex_centered_data(field)
-            if fluxing_field is None:
-                ff = na.ones(vals.shape, dtype="float64")
-            else:
-                ff = g.get_vertex_centered_data(fluxing_field)
-            xv, yv, zv = [g.get_vertex_centered_data(f) for f in 
-                         [field_x, field_y, field_z]]
-            flux += march_cubes_grid_flux(value, vals, xv, yv, zv,
-                        ff, mask, g.LeftEdge, g.dds)
+            flux += self._calculate_flux_in_grid(g, field, value,
+                    field_x, field_y, field_z, fluxing_field)
         flux = self.comm.mpi_allreduce(flux, op="sum")
         return flux
 
+    @restore_grid_state
+    def _calculate_flux_in_grid(self, grid, field, value,
+                    field_x, field_y, field_z, fluxing_field = None):
+        mask = self._get_cut_mask(grid) * grid.child_mask
+        vals = grid.get_vertex_centered_data(field)
+        if fluxing_field is None:
+            ff = na.ones(vals.shape, dtype="float64")
+        else:
+            ff = grid.get_vertex_centered_data(fluxing_field)
+        xv, yv, zv = [grid.get_vertex_centered_data(f) for f in 
+                     [field_x, field_y, field_z]]
+        return march_cubes_grid_flux(value, vals, xv, yv, zv,
+                    ff, mask, grid.LeftEdge, grid.dds)
+
     def extract_connected_sets(self, field, num_levels, min_val, max_val,
                                 log_space=True, cumulative=True, cache=False):
         """
@@ -2855,12 +2968,6 @@
                  & (r <= self._radius))
         return cm
 
-    def volume(self, unit="unitary"):
-        """
-        Return the volume of the cylinder in units of *unit*.
-        """
-        return math.pi * (self._radius)**2. * self._height * pf[unit]**3
-
 class AMRInclinedBox(AMR3DData):
     _type_name="inclined_box"
     _con_args = ('origin','box_vectors')
@@ -3430,7 +3537,7 @@
                                    output_field, output_left)
             self.field_data[field] = output_field
 
-    @restore_grid_state
+    @restore_field_information_state
     def _get_data_from_grid(self, grid, fields):
         fields = ensure_list(fields)
         g_fields = [grid[field].astype("float64") for field in fields]
@@ -3523,6 +3630,19 @@
                     self._some_overlap.append(grid)
                     continue
     
+    def __repr__(self):
+        # We'll do this the slow way to be clear what's going on
+        s = "%s (%s): " % (self.__class__.__name__, self.pf)
+        s += "["
+        for i, region in enumerate(self.regions):
+            if region in ["OR", "AND", "NOT", "(", ")"]:
+                s += region
+            else:
+                s += region.__repr__(clean = True)
+            if i < (len(self.regions) - 1): s += ", "
+        s += "]"
+        return s
+    
     def _is_fully_enclosed(self, grid):
         return (grid in self._all_overlap)
 

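The new restore_field_information_state decorator mirrors the existing restore_grid_state: stash an attribute of the grid, substitute the data object's own value for the duration of the call, then put the original back. The pattern generalizes to any swapped-in attribute; a standalone sketch with hypothetical names (this version adds a try/finally, which the committed decorators omit, so the attribute is restored even if the call raises):

    from functools import wraps

    def restore_attribute(attr):
        # Decorator factory: swap self's value of `attr` onto the target
        # object for the duration of the call, then restore the original.
        def decorator(func):
            @wraps(func)
            def save_state(self, target, *args, **kwargs):
                old = getattr(target, attr)
                setattr(target, attr, getattr(self, attr))
                try:
                    return func(self, target, *args, **kwargs)
                finally:
                    setattr(target, attr, old)
            return save_state
        return decorator

    class Grid(object):
        field_parameters = {'center': None}

    class DataObject(object):
        field_parameters = {'center': (0.5, 0.5, 0.5)}

        @restore_attribute('field_parameters')
        def read(self, grid):
            return grid.field_parameters['center']

    print DataObject().read(Grid())  # (0.5, 0.5, 0.5); grid state restored
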
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/data_objects/profiles.py
--- a/yt/data_objects/profiles.py
+++ b/yt/data_objects/profiles.py
@@ -133,7 +133,6 @@
             if weight:
                 f[u] /= w[u]
             self[field] = f
-        self["myweight"] = w
         self["UsedBins"] = u
 
     def add_fields(self, fields, weight = "CellMassMsun", accumulation = False, fractional=False):

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -37,6 +37,8 @@
     output_type_registry
 from yt.data_objects.field_info_container import \
     FieldInfoContainer, NullFunc
+from yt.utilities.minimal_representation import \
+    MinimalStaticOutput
 
 # We want to support the movie format in the future.
 # When such a thing comes to pass, I'll move all the stuff that is constant up
@@ -115,6 +117,10 @@
         except ImportError:
             return s.replace(";", "*")
 
+    @property
+    def _mrep(self):
+        return MinimalStaticOutput(self)
+
     @classmethod
     def _is_valid(cls, *args, **kwargs):
         return False

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -882,6 +882,8 @@
 def _convertVorticitySquared(data):
     return data.convert("cm")**-2.0
 add_field("VorticitySquared", function=_VorticitySquared,
-          validators=[ValidateSpatial(1)],
+          validators=[ValidateSpatial(1,
+              ["x-velocity","y-velocity","z-velocity"])],
           units=r"\rm{s}^{-2}",
           convert_function=_convertVorticitySquared)
+

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py
+++ b/yt/frontends/enzo/data_structures.py
@@ -484,6 +484,15 @@
         if self.num_grids > 40:
             starter = na.random.randint(0, 20)
             random_sample = na.mgrid[starter:len(self.grids)-1:20j].astype("int32")
+            # We also add in a bit to make sure that some of the grids have
+            # particles
+            gwp = self.grid_particle_count > 0
+            if na.any(gwp) and not na.any(gwp[(random_sample,)]):
+                # We just add one grid.  This is not terribly efficient.
+                first_grid = na.where(gwp)[0][0]
+                random_sample.resize((21,))
+                random_sample[-1] = first_grid
+                mylog.debug("Added additional grid %s", first_grid)
             mylog.debug("Checking grids: %s", random_sample.tolist())
         else:
             random_sample = na.mgrid[0:max(len(self.grids)-1,1)].astype("int32")

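The Enzo hierarchy change guards the field-detection sample: grids are still sampled evenly across the hierarchy, but if no sampled grid contains particles while some grid does, one such grid is added so particle fields are not missed. The logic in isolation (hypothetical helper; na.append stands in for the in-place resize used in the commit):

    import numpy as na

    def sample_grids(n_grids, particle_counts):
        starter = na.random.randint(0, 20)
        # Twenty evenly spaced grid indices across the hierarchy.
        random_sample = na.mgrid[starter:n_grids - 1:20j].astype("int32")
        # Make sure at least one sampled grid has particles, so particle
        # fields are detected even when such grids are rare.
        gwp = particle_counts > 0
        if na.any(gwp) and not na.any(gwp[(random_sample,)]):
            first_grid = na.where(gwp)[0][0]
            random_sample = na.append(random_sample, first_grid)
        return random_sample

    print sample_grids(100, na.array([0] * 99 + [5]))
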
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/enzo/fields.py
--- a/yt/frontends/enzo/fields.py
+++ b/yt/frontends/enzo/fields.py
@@ -296,11 +296,12 @@
 def _dmpdensity(field, data):
     blank = na.zeros(data.ActiveDimensions, dtype='float32')
     if data.NumberOfParticles == 0: return blank
-    if 'creation_time' in data.keys():
+    if 'creation_time' in data.pf.field_info:
         filter = data['creation_time'] <= 0.0
         if not filter.any(): return blank
     else:
         filter = na.ones(data.NumberOfParticles, dtype='bool')
+    if not filter.any(): return blank
     amr_utils.CICDeposit_3(data["particle_position_x"][filter].astype(na.float64),
                            data["particle_position_y"][filter].astype(na.float64),
                            data["particle_position_z"][filter].astype(na.float64),

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/flash/data_structures.py
--- a/yt/frontends/flash/data_structures.py
+++ b/yt/frontends/flash/data_structures.py
@@ -316,6 +316,13 @@
             self.current_time = \
                 float(self._find_parameter("real", "time", scalar=True))
 
+        if self._flash_version == 7:
+            self.parameters['timestep'] = float(
+                self._handle["simulation parameters"]["timestep"])
+        else:
+            self.parameters['timestep'] = \
+                float(self._find_parameter("real", "dt", scalar=True))
+
         try:
             use_cosmo = self._find_parameter("logical", "usecosmology") 
         except:

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/gdf/api.py
--- a/yt/frontends/gdf/api.py
+++ b/yt/frontends/gdf/api.py
@@ -1,13 +1,14 @@
 """
-API for yt.frontends.chombo
+API for yt.frontends.gdf
 
+Author: Samuel W. Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
 Author: Matthew Turk <matthewturk at gmail.com>
 Affiliation: UCSD
 Author: J.S. Oishi <jsoishi at gmail.com>
 Affiliation: KIPAC/SLAC/Stanford
 Author: Britton Smith <brittonsmith at gmail.com>
 Affiliation: MSU
-Homepage: http://yt.Chombotools.org/
 License:
   Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.
 

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/gdf/data_structures.py
--- a/yt/frontends/gdf/data_structures.py
+++ b/yt/frontends/gdf/data_structures.py
@@ -1,12 +1,15 @@
 """
-Data structures for Chombo.
+Data structures for GDF.
 
+Author: Samuel W. Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
 Author: Matthew Turk <matthewturk at gmail.com>
 Author: J. S. Oishi <jsoishi at gmail.com>
 Affiliation: KIPAC/SLAC/Stanford
 Homepage: http://yt-project.org/
 License:
-  Copyright (C) 2008-2011 Matthew Turk, J. S. Oishi.  All Rights Reserved.
+  Copyright (C) 2008-2011 Samuel W. Skillman, Matthew Turk, J. S. Oishi.  
+  All Rights Reserved.
 
   This file is part of yt.
 
@@ -76,7 +79,7 @@
         # for now, the hierarchy file is the parameter file!
         self.hierarchy_filename = self.parameter_file.parameter_filename
         self.directory = os.path.dirname(self.hierarchy_filename)
-        self._fhandle = h5py.File(self.hierarchy_filename)
+        self._fhandle = h5py.File(self.hierarchy_filename,'r')
         AMRHierarchy.__init__(self,pf,data_style)
 
         self._fhandle.close()
@@ -94,31 +97,31 @@
 
     def _count_grids(self):
         self.num_grids = self._fhandle['/grid_parent_id'].shape[0]
-        
+       
     def _parse_hierarchy(self):
         f = self._fhandle 
-        
-        # this relies on the first Group in the H5 file being
-        # 'Chombo_global'
-        levels = f.listnames()[1:]
         dxs=[]
         self.grids = na.empty(self.num_grids, dtype='object')
-        for i, grid in enumerate(f['data'].keys()):
-            self.grids[i] = self.grid(i, self, f['grid_level'][i],
-                                      f['grid_left_index'][i],
-                                      f['grid_dimensions'][i])
-            self.grids[i]._level_id = f['grid_level'][i]
+        levels = (f['grid_level'][:]).copy()
+        glis = (f['grid_left_index'][:]).copy()
+        gdims = (f['grid_dimensions'][:]).copy()
+        for i in range(levels.shape[0]):
+            self.grids[i] = self.grid(i, self, levels[i],
+                                      glis[i],
+                                      gdims[i])
+            self.grids[i]._level_id = levels[i]
 
             dx = (self.parameter_file.domain_right_edge-
                   self.parameter_file.domain_left_edge)/self.parameter_file.domain_dimensions
-            dx = dx/self.parameter_file.refine_by**(f['grid_level'][i])
+            dx = dx/self.parameter_file.refine_by**(levels[i])
             dxs.append(dx)
         dx = na.array(dxs)
-        self.grid_left_edge = self.parameter_file.domain_left_edge + dx*f['grid_left_index'][:]
-        self.grid_dimensions = f['grid_dimensions'][:].astype("int32")
+        self.grid_left_edge = self.parameter_file.domain_left_edge + dx*glis
+        self.grid_dimensions = gdims.astype("int32")
         self.grid_right_edge = self.grid_left_edge + dx*self.grid_dimensions
         self.grid_particle_count = f['grid_particle_count'][:]
-
+        del levels, glis, gdims
+ 
     def _populate_grid_objects(self):
         for g in self.grids:
             g._prepare_grid()
@@ -130,9 +133,6 @@
                 g1.Parent.append(g)
         self.max_level = self.grid_levels.max()
 
-    def _setup_unknown_fields(self):
-        pass
-
     def _setup_derived_fields(self):
         self.derived_field_list = []
 
@@ -171,7 +171,11 @@
         # This should be improved.
         self._handle = h5py.File(self.parameter_filename, "r")
         for field_name in self._handle["/field_types"]:
-            self.units[field_name] = self._handle["/field_types/%s" % field_name].attrs['field_to_cgs']
+            try:
+                self.units[field_name] = self._handle["/field_types/%s" % field_name].attrs['field_to_cgs']
+            except:
+                self.units[field_name] = 1.0
+
         self._handle.close()
         del self._handle
         
@@ -181,7 +185,9 @@
         self.domain_left_edge = sp["domain_left_edge"][:]
         self.domain_right_edge = sp["domain_right_edge"][:]
         self.domain_dimensions = sp["domain_dimensions"][:]
-        self.refine_by = sp["refine_by"]
+        refine_by = sp["refine_by"]
+        if refine_by is None: refine_by = 2
+        self.refine_by = refine_by 
         self.dimensionality = sp["dimensionality"]
         self.current_time = sp["current_time"]
         self.unique_identifier = sp["unique_identifier"]
@@ -198,6 +204,7 @@
         else:
             self.current_redshift = self.omega_lambda = self.omega_matter = \
                 self.hubble_constant = self.cosmological_simulation = 0.0
+        self.parameters['Time'] = 1.0 # Hardcode time conversion for now.
         self.parameters["HydroMethod"] = 0 # Hardcode for now until field staggering is supported.
         self._handle.close()
         del self._handle

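The rewritten _parse_hierarchy reads each hierarchy dataset from the HDF5 file once, up front, instead of indexing f['grid_level'][i] and friends on every pass through the per-grid loop. The same idiom in isolation, with a hypothetical file name and the dataset names from the GDF layout:

    import h5py

    f = h5py.File("data.gdf", "r")   # hypothetical file name
    # One bulk read per dataset; .copy() detaches the arrays from the
    # file handle so the (slow) per-grid loop never touches the file.
    levels = (f['grid_level'][:]).copy()
    glis = (f['grid_left_index'][:]).copy()
    gdims = (f['grid_dimensions'][:]).copy()
    f.close()

    for i in range(levels.shape[0]):
        # ... construct grid i from levels[i], glis[i], gdims[i] ...
        pass
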
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/gdf/fields.py
--- a/yt/frontends/gdf/fields.py
+++ b/yt/frontends/gdf/fields.py
@@ -1,11 +1,14 @@
 """
 GDF-specific fields
 
+Author: Samuel W. Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
 Author: J. S. Oishi <jsoishi at gmail.com>
 Affiliation: KIPAC/SLAC/Stanford
 Homepage: http://yt-project.org/
 License:
-  Copyright (C) 2009-2011 J. S. Oishi, Matthew Turk.  All Rights Reserved.
+  Copyright (C) 2008-2011 Samuel W. Skillman, Matthew Turk, J. S. Oishi.  
+  All Rights Reserved.
 
   This file is part of yt.
 
@@ -53,40 +56,31 @@
 add_gdf_field = KnownGDFFields.add_field
 
 add_gdf_field("density", function=NullFunc, take_log=True,
-          validators = [ValidateDataField("density")],
           units=r"\rm{g}/\rm{cm}^3",
           projected_units =r"\rm{g}/\rm{cm}^2")
 
 add_gdf_field("specific_energy", function=NullFunc, take_log=True,
-          validators = [ValidateDataField("specific_energy")],
           units=r"\rm{erg}/\rm{g}")
 
 add_gdf_field("pressure", function=NullFunc, take_log=True,
-          validators = [ValidateDataField("pressure")],
           units=r"\rm{erg}/\rm{g}")
 
-add_gdf_field("velocity_x", function=NullFunc, take_log=True,
-          validators = [ValidateDataField("velocity_x")],
+add_gdf_field("velocity_x", function=NullFunc, take_log=False,
           units=r"\rm{cm}/\rm{s}")
 
 add_gdf_field("velocity_y", function=NullFunc, take_log=False,
-          validators = [ValidateDataField("velocity_y")],
           units=r"\rm{cm}/\rm{s}")
 
 add_gdf_field("velocity_z", function=NullFunc, take_log=False,
-          validators = [ValidateDataField("velocity_z")],
           units=r"\rm{cm}/\rm{s}")
 
 add_gdf_field("mag_field_x", function=NullFunc, take_log=False,
-          validators = [ValidateDataField("mag_field_x")],
           units=r"\rm{cm}/\rm{s}")
 
 add_gdf_field("mag_field_y", function=NullFunc, take_log=False,
-          validators = [ValidateDataField("mag_field_y")],
           units=r"\rm{cm}/\rm{s}")
 
 add_gdf_field("mag_field_z", function=NullFunc, take_log=False,
-          validators = [ValidateDataField("mag_field_z")],
           units=r"\rm{cm}/\rm{s}")
 
 for f,v in log_translation_dict.items():

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/frontends/gdf/io.py
--- a/yt/frontends/gdf/io.py
+++ b/yt/frontends/gdf/io.py
@@ -1,6 +1,8 @@
 """
 The data-file handling functions
 
+Author: Samuel W. Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
 Author: Matthew Turk <matthewturk at gmail.com>
 Author: J. S. Oishi <jsoishi at gmail.com>
 Affiliation: KIPAC/SLAC/Stanford
@@ -35,38 +37,33 @@
     def _field_dict(self,fhandle):
         keys = fhandle['field_types'].keys()
         val = fhandle['field_types'].keys()
-        # ncomp = int(fhandle['/'].attrs['num_components'])
-        # temp =  fhandle['/'].attrs.listitems()[-ncomp:]
-        # val, keys = zip(*temp)
-        # val = [int(re.match('component_(\d+)',v).groups()[0]) for v in val]
         return dict(zip(keys,val))
         
     def _read_field_names(self,grid):
         fhandle = h5py.File(grid.filename,'r')
-        return fhandle['field_types'].keys()
+        names = fhandle['field_types'].keys()
+        fhandle.close()
+        return names
     
     def _read_data_set(self,grid,field):
         fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
-        return fhandle['/data/grid_%010i/'%grid.id+field][:]
-        # field_dict = self._field_dict(fhandle)
-        # lstring = 'level_%i' % grid.Level
-        # lev = fhandle[lstring]
-        # dims = grid.ActiveDimensions
-        # boxsize = dims.prod()
-        
-        # grid_offset = lev[self._offset_string][grid._level_id]
-        # start = grid_offset+field_dict[field]*boxsize
-        # stop = start + boxsize
-        # data = lev[self._data_string][start:stop]
-
-        # return data.reshape(dims, order='F')
-                                          
+        data = (fhandle['/data/grid_%010i/'%grid.id+field][:]).copy()
+        fhandle.close()
+        if grid.pf.field_ordering == 1:
+            return data.T
+        else:
+            return data
 
     def _read_data_slice(self, grid, field, axis, coord):
         sl = [slice(None), slice(None), slice(None)]
         sl[axis] = slice(coord, coord + 1)
+        if grid.pf.field_ordering == 1:
+            sl.reverse()
         fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
-        return fhandle['/data/grid_%010i/'%grid.id+field][:][sl]
+        data = (fhandle['/data/grid_%010i/'%grid.id+field][:][sl]).copy()
+        fhandle.close()
+        if grid.pf.field_ordering == 1:
+            return data.T
+        else:
+            return data
 
-    # return self._read_data_set(grid,field)[sl]
-

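For Fortran-ordered GDF files, the transpose above converts data read back
as a C-ordered array into the (x, y, z) layout yt expects.  A minimal NumPy
illustration (shapes illustrative):

    import numpy as np

    # Data written by a Fortran-ordered code and read back with C-ordered
    # axes arrives reversed; .T restores the intended (x, y, z) layout.
    data = np.empty((32, 16, 8))        # as read from disk: (z, y, x)
    assert data.T.shape == (8, 16, 32)  # after .T: (x, y, z)
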
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/funcs.py
--- a/yt/funcs.py
+++ b/yt/funcs.py
@@ -24,42 +24,15 @@
 """
 
 import time, types, signal, inspect, traceback, sys, pdb, os
-import warnings, struct
+import warnings, struct, subprocess
 from math import floor, ceil
 
 from yt.utilities.exceptions import *
 from yt.utilities.logger import ytLogger as mylog
 import yt.utilities.progressbar as pb
 import yt.utilities.rpdb as rpdb
-
-# Some compatibility functions.  In the long run, these *should* disappear as
-# we move toward newer python versions.  Most were implemented to get things
-# running on DataStar.
-
-# If we're running on python2.4, we need a 'wraps' function
-def blank_wrapper(f):
-    return lambda a: a
-
-try:
-    from functools import wraps
-except ImportError:
-    wraps = blank_wrapper
-
-# We need to ensure that we have a defaultdict implementation
-
-class __defaultdict(dict):
-    def __init__(self, func):
-        self.__func = func
-        dict.__init__(self)
-    def __getitem__(self, key):
-        if not self.has_key(key):
-            self.__setitem__(key, self.__func())
-        return dict.__getitem__(self, key)
-
-try:
-    from collections import defaultdict
-except ImportError:
-    defaultdict = __defaultdict
+from collections import defaultdict
+from functools import wraps
 
 # Some functions for handling sequences and other types
 
@@ -78,7 +51,7 @@
     string to a list, for instance ensuring the *fields* as an argument is a
     list.
     """
-    if obj == None:
+    if obj is None:
         return [obj]
     if not isinstance(obj, types.ListType):
         return [obj]
@@ -385,18 +358,6 @@
 def signal_ipython(signo, frame):
     insert_ipython(2)
 
-# We use two signals, SIGUSR1 and SIGUSR2.  In a non-threaded environment,
-# we set up handlers to process these by printing the current stack and to
-# raise a RuntimeError.  The latter can be used, inside pdb, to catch an error
-# and then examine the current stack.
-try:
-    signal.signal(signal.SIGUSR1, signal_print_traceback)
-    mylog.debug("SIGUSR1 registered for traceback printing")
-    signal.signal(signal.SIGUSR2, signal_ipython)
-    mylog.debug("SIGUSR2 registered for IPython Insertion")
-except ValueError:  # Not in main thread
-    pass
-
 def paste_traceback(exc_type, exc, tb):
     """
     This is a traceback handler that knows how to paste to the pastebin.
@@ -450,29 +411,6 @@
     dec_s = ''.join([ chr(ord(a) ^ ord(b)) for a, b in zip(enc_s, itertools.cycle(key)) ])
     print dec_s
 
-# If we recognize one of the arguments on the command line as indicating a
-# different mechanism for handling tracebacks, we attach one of those handlers
-# and remove the argument from sys.argv.
-#
-# This fallback is for Paraview:
-if not hasattr(sys, 'argv') or sys.argv is None: sys.argv = []
-# Now, we check.
-if "--paste" in sys.argv:
-    sys.excepthook = paste_traceback
-    del sys.argv[sys.argv.index("--paste")]
-elif "--paste-detailed" in sys.argv:
-    sys.excepthook = paste_traceback_detailed
-    del sys.argv[sys.argv.index("--paste-detailed")]
-elif "--detailed" in sys.argv:
-    import cgitb; cgitb.enable(format="text")
-    del sys.argv[sys.argv.index("--detailed")]
-elif "--rpdb" in sys.argv:
-    sys.excepthook = rpdb.rpdb_excepthook
-    del sys.argv[sys.argv.index("--rpdb")]
-elif "--detailed" in sys.argv:
-    import cgitb; cgitb.enable(format="text")
-    del sys.argv[sys.argv.index("--detailed")]
-
 #
 # Some exceptions
 #
@@ -482,3 +420,103 @@
 
 class YTEmptyClass(object):
     pass
+
+def update_hg(path, skip_rebuild = False):
+    from mercurial import hg, ui, commands
+    f = open(os.path.join(path, "yt_updater.log"), "a")
+    u = ui.ui()
+    u.pushbuffer()
+    config_fn = os.path.join(path, ".hg", "hgrc")
+    print "Reading configuration from ", config_fn
+    u.readconfig(config_fn)
+    repo = hg.repository(u, path)
+    commands.pull(u, repo)
+    f.write(u.popbuffer())
+    f.write("\n\n")
+    u.pushbuffer()
+    commands.identify(u, repo)
+    if "+" in u.popbuffer():
+        print "Can't rebuild modules by myself."
+        print "You will have to do this yourself.  Here's a sample commands:"
+        print
+        print "    $ cd %s" % (path)
+        print "    $ hg up"
+        print "    $ %s setup.py develop" % (sys.executable)
+        return 1
+    print "Updating the repository"
+    f.write("Updating the repository\n\n")
+    commands.update(u, repo, check=True)
+    if skip_rebuild: return
+    f.write("Rebuilding modules\n\n")
+    p = subprocess.Popen([sys.executable, "setup.py", "build_ext", "-i"], cwd=path,
+                        stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
+    stdout, stderr = p.communicate()
+    f.write(stdout)
+    f.write("\n\n")
+    if p.returncode:
+        print "BROKEN: See %s" % (os.path.join(path, "yt_updater.log"))
+        sys.exit(1)
+    f.write("Successful!\n")
+    print "Updated successfully."
+
+def get_hg_version(path):
+    from mercurial import hg, ui, commands
+    u = ui.ui()
+    u.pushbuffer()
+    repo = hg.repository(u, path)
+    commands.identify(u, repo)
+    return u.popbuffer()
+
+def get_yt_version():
+    import pkg_resources
+    yt_provider = pkg_resources.get_provider("yt")
+    path = os.path.dirname(yt_provider.module_path)
+    version = get_hg_version(path)[:12]
+    return version
+
+# This code snippet is modified from Georg Brandl
+def bb_apicall(endpoint, data, use_pass = True):
+    import urllib, urllib2, getpass, base64
+    uri = 'https://api.bitbucket.org/1.0/%s/' % endpoint
+    # since bitbucket doesn't return the required WWW-Authenticate header when
+    # making a request without Authorization, we cannot use the standard urllib2
+    # auth handlers; we have to add the requisite header from the start
+    if data is not None:
+        data = urllib.urlencode(data)
+    req = urllib2.Request(uri, data)
+    if use_pass:
+        username = raw_input("Bitbucket Username? ")
+        password = getpass.getpass()
+        upw = '%s:%s' % (username, password)
+        req.add_header('Authorization', 'Basic %s' % base64.b64encode(upw).strip())
+    return urllib2.urlopen(req).read()
+
+def get_yt_supp():
+    supp_path = os.path.join(os.environ["YT_DEST"], "src",
+                             "yt-supplemental")
+    # Now we check that the supplemental repository is checked out.
+    if not os.path.isdir(supp_path):
+        print
+        print "*** The yt-supplemental repository is not checked ***"
+        print "*** out.  I can do this for you, but because this ***"
+        print "*** is a delicate act, I require you to respond   ***"
+        print "*** to the prompt with the word 'yes'.            ***"
+        print
+        response = raw_input("Do you want me to try to check it out? ")
+        if response != "yes":
+            print
+            print "Okay, I understand.  You can check it out yourself."
+            print "This command will do it:"
+            print
+            print "$ hg clone http://hg.yt-project.org/yt-supplemental/ ",
+            print "%s" % (supp_path)
+            print
+            sys.exit(1)
+        from mercurial import commands, ui
+        uu = ui.ui()
+        rv = commands.clone(uu,
+                "http://hg.yt-project.org/yt-supplemental/", supp_path)
+        if rv:
+            print "Something has gone wrong.  Quitting."
+            sys.exit(1)
+    # Now we think we have our supplemental repository.
+    return supp_path
+

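For reference, the updater above is meant to be pointed at a Mercurial
checkout of yt; a sketch of the intended usage (the path is illustrative):

    from yt.funcs import update_hg, get_hg_version

    # Pull, update, and rebuild the checkout, logging to yt_updater.log,
    # then report the current revision hash.
    update_hg("/path/to/yt-hg")
    print get_hg_version("/path/to/yt-hg")
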
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/3d.png
Binary file yt/gui/reason/html/images/3d.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/3d_tab.png
Binary file yt/gui/reason/html/images/3d_tab.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/binary.png
Binary file yt/gui/reason/html/images/binary.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/blockdevice.png
Binary file yt/gui/reason/html/images/blockdevice.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/blockdevice_tab.png
Binary file yt/gui/reason/html/images/blockdevice_tab.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/console.png
Binary file yt/gui/reason/html/images/console.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_down.png
Binary file yt/gui/reason/html/images/double_down.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_down_sm.png
Binary file yt/gui/reason/html/images/double_down_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_left.png
Binary file yt/gui/reason/html/images/double_left.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_left_sm.png
Binary file yt/gui/reason/html/images/double_left_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_right.png
Binary file yt/gui/reason/html/images/double_right.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_right_sm.png
Binary file yt/gui/reason/html/images/double_right_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_up.png
Binary file yt/gui/reason/html/images/double_up.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/double_up_sm.png
Binary file yt/gui/reason/html/images/double_up_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/graph.png
Binary file yt/gui/reason/html/images/graph.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/kivio_flw.png
Binary file yt/gui/reason/html/images/kivio_flw.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_down.png
Binary file yt/gui/reason/html/images/single_down.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_down_sm.png
Binary file yt/gui/reason/html/images/single_down_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_left.png
Binary file yt/gui/reason/html/images/single_left.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_left_sm.png
Binary file yt/gui/reason/html/images/single_left_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_right.png
Binary file yt/gui/reason/html/images/single_right.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_right_sm.png
Binary file yt/gui/reason/html/images/single_right_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_up.png
Binary file yt/gui/reason/html/images/single_up.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/single_up_sm.png
Binary file yt/gui/reason/html/images/single_up_sm.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/images/upload.png
Binary file yt/gui/reason/html/images/upload.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/leaflet/images/marker-shadow.png
Binary file yt/gui/reason/html/leaflet/images/marker-shadow.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/leaflet/images/marker.png
Binary file yt/gui/reason/html/leaflet/images/marker.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/leaflet/images/popup-close.png
Binary file yt/gui/reason/html/leaflet/images/popup-close.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/leaflet/images/zoom-in.png
Binary file yt/gui/reason/html/leaflet/images/zoom-in.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/gui/reason/html/leaflet/images/zoom-out.png
Binary file yt/gui/reason/html/leaflet/images/zoom-out.png has changed

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/mods.py
--- a/yt/mods.py
+++ b/yt/mods.py
@@ -35,6 +35,13 @@
 import numpy as na # For historical reasons
 import numpy # In case anyone wishes to use it by name
 
+# This next item will handle most of the actual startup procedures, but it will
+# also attempt to parse the command line and set up the global state of various
+# operations.
+
+import yt.startup_tasks as __startup_tasks
+unparsed_args = __startup_tasks.unparsed_args
+
 from yt.funcs import *
 from yt.utilities.logger import ytLogger as mylog
 from yt.utilities.performance_counters import yt_counters, time_function
@@ -108,7 +115,7 @@
     PlotCollection, PlotCollectionInteractive, \
     get_multi_plot, FixedResolutionBuffer, ObliqueFixedResolutionBuffer, \
     callback_registry, write_bitmap, write_image, annotate_image, \
-    apply_colormap, scale_image
+    apply_colormap, scale_image, write_projection
 
 from yt.visualization.volume_rendering.api import \
     ColorTransferFunction, PlanckTransferFunction, ProjectionTransferFunction, \
@@ -122,6 +129,10 @@
 
 from yt.convenience import all_pfs, max_spheres, load, projload
 
+# Import some helpful math utilities
+from yt.utilities.math_utils import \
+    ortho_find, quartiles
+
 
 # We load plugins.  Keep in mind, this can be fairly dangerous -
 # the primary purpose is to allow people to have a set of functions

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/startup_tasks.py
--- /dev/null
+++ b/yt/startup_tasks.py
@@ -0,0 +1,144 @@
+"""
+Startup tasks: parse the command line, configure exception handling, and
+set up parallelism before the rest of yt is imported.
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: Columbia University
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2011 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+# This handles the command line.
+
+import argparse, os, sys
+
+from yt.config import ytcfg
+from yt.funcs import *
+
+exe_name = os.path.basename(sys.executable)
+# At import time, we determine whether or not we're being run in parallel.
+def turn_on_parallelism():
+    try:
+        from mpi4py import MPI
+        parallel_capable = (MPI.COMM_WORLD.size > 1)
+    except ImportError:
+        parallel_capable = False
+    if parallel_capable:
+        mylog.info("Global parallel computation enabled: %s / %s",
+                   MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size)
+        ytcfg["yt","__global_parallel_rank"] = str(MPI.COMM_WORLD.rank)
+        ytcfg["yt","__global_parallel_size"] = str(MPI.COMM_WORLD.size)
+        ytcfg["yt","__parallel"] = "True"
+        if exe_name == "embed_enzo" or \
+            ("_parallel" in dir(sys) and sys._parallel == True):
+            ytcfg["yt","inline"] = "True"
+        # I believe we do not need to turn this off manually
+        #ytcfg["yt","StoreParameterFiles"] = "False"
+        # Now let's make sure we have the right options set.
+        if MPI.COMM_WORLD.rank > 0:
+            if ytcfg.getboolean("yt","LogFile"):
+                ytcfg["yt","LogFile"] = "False"
+                yt.utilities.logger.disable_file_logging()
+    return parallel_capable
+
+# We use two signals, SIGUSR1 and SIGUSR2.  In a non-threaded environment,
+# we set up handlers to process these by printing the current stack and to
+# raise a RuntimeError.  The latter can be used, inside pdb, to catch an error
+# and then examine the current stack.
+try:
+    signal.signal(signal.SIGUSR1, signal_print_traceback)
+    mylog.debug("SIGUSR1 registered for traceback printing")
+    signal.signal(signal.SIGUSR2, signal_ipython)
+    mylog.debug("SIGUSR2 registered for IPython Insertion")
+except ValueError:  # Not in main thread
+    pass
+
+class SetExceptionHandling(argparse.Action):
+    def __call__(self, parser, namespace, values, option_string = None):
+        # If we recognize one of the arguments on the command line as indicating a
+        # different mechanism for handling tracebacks, we attach one of those handlers
+        # and remove the argument from sys.argv.
+        #
+        if self.dest == "paste":
+            sys.excepthook = paste_traceback
+            mylog.debug("Enabling traceback pasting")
+        elif self.dest == "paste-detailed":
+            sys.excepthook = paste_traceback_detailed
+            mylog.debug("Enabling detailed traceback pasting")
+        elif self.dest == "detailed":
+            import cgitb; cgitb.enable(format="text")
+            mylog.debug("Enabling detailed traceback reporting")
+        elif self.dest == "rpdb":
+            sys.excepthook = rpdb.rpdb_excepthook
+            mylog.debug("Enabling remote debugging")
+
+class SetConfigOption(argparse.Action):
+    def __call__(self, parser, namespace, values, option_string = None):
+        param, val = values.split("=")
+        mylog.debug("Overriding config: %s = %s", param, val)
+        ytcfg["yt",param] = val
+
+parser = argparse.ArgumentParser(description = 'yt command line arguments')
+parser.add_argument("--config", action=SetConfigOption,
+    help = "Set configuration option, in the form param=value")
+parser.add_argument("--paste", action=SetExceptionHandling,
+    help = "Paste traceback to paste.yt-project.org", nargs = 0)
+parser.add_argument("--paste-detailed", action=SetExceptionHandling,
+    help = "Paste a detailed traceback with local variables to " +
+           "paste.yt-project.org", nargs = 0)
+parser.add_argument("--detailed", action=SetExceptionHandling,
+    help = "Display detailed traceback.", nargs = 0)
+parser.add_argument("--rpdb", action=SetExceptionHandling,
+    help = "Enable remote pdb interaction (for parallel debugging).", nargs = 0)
+parser.add_argument("--parallel", action="store_true", default=False,
+    dest = "parallel",
+    help = "Run in MPI-parallel mode (must be launched as an MPI task)")
+# This fallback is for Paraview:
+if not hasattr(sys, 'argv') or sys.argv is None: sys.argv = []
+
+unparsed_args = []
+
+parallel_capable = False
+if not ytcfg.getboolean("yt","__command_line"):
+    opts, unparsed_args = parser.parse_known_args()
+    # THIS IS NOT SUCH A GOOD IDEA:
+    #sys.argv = [a for a in unparsed_args]
+    if opts.parallel:
+        parallel_capable = turn_on_parallelism()
+else:
+    subparsers = parser.add_subparsers(title="subcommands",
+                        dest='subcommands',
+                        description="Valid subcommands",)
+    def print_help(*args, **kwargs):
+        parser.print_help()
+    help_parser = subparsers.add_parser("help", help="Print help message")
+    help_parser.set_defaults(func=print_help)
+
+
+if parallel_capable == True:
+    pass
+elif exe_name in \
+        ["mpi4py", "embed_enzo",
+         "python"+sys.version[:3]+"-mpi"] \
+    or '_parallel' in dir(sys) \
+    or any(["ipengine" in arg for arg in sys.argv]):
+    parallel_capable = turn_on_parallelism()
+else:
+    parallel_capable = False

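For reference, the flags registered above can be combined on the command
line of any script that imports yt; anything unrecognized is passed through
in unparsed_args.  A sample invocation (script name and config parameter
are illustrative):

    $ mpirun -np 4 python my_script.py --parallel --config loglevel=10
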
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/utilities/_amr_utils/FixedInterpolator.c
--- a/yt/utilities/_amr_utils/FixedInterpolator.c
+++ b/yt/utilities/_amr_utils/FixedInterpolator.c
@@ -128,7 +128,7 @@
 }
 
 void eval_gradient(int ds[3], npy_float64 dp[3],
-				  npy_float64 *data, npy_float64 grad[3])
+				  npy_float64 *data, npy_float64 *grad)
 {
     // We just take some small value
 

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/utilities/_amr_utils/FixedInterpolator.h
--- a/yt/utilities/_amr_utils/FixedInterpolator.h
+++ b/yt/utilities/_amr_utils/FixedInterpolator.h
@@ -41,7 +41,7 @@
 npy_float64 trilinear_interpolate(int ds[3], int ci[3], npy_float64 dp[3],
 				  npy_float64 *data);
 
-void eval_gradient(int ds[3], npy_float64 dp[3], npy_float64 *data, npy_float64 grad[3]);
+void eval_gradient(int ds[3], npy_float64 dp[3], npy_float64 *data, npy_float64 *grad);
 
 void vertex_interp(npy_float64 v1, npy_float64 v2, npy_float64 isovalue,
                    npy_float64 vl[3], npy_float64 dds[3],

diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/utilities/_amr_utils/VolumeIntegrator.pyx
--- a/yt/utilities/_amr_utils/VolumeIntegrator.pyx
+++ b/yt/utilities/_amr_utils/VolumeIntegrator.pyx
@@ -65,6 +65,9 @@
     double log2(double x)
     long int lrint(double x)
     double fabs(double x)
+    double cos(double x)
+    double sin(double x)
+    double asin(double x)
 
 cdef struct Triangle:
     Triangle *next
@@ -238,6 +241,33 @@
         tr[i] = ipnest
     return tr
 
+def arr_fisheye_vectors(int resolution, np.float64_t fov):
+    # We now follow figures 4-7 of:
+    # http://paulbourke.net/miscellaneous/domefisheye/fisheye/
+    # ...but all in Cython.
+    cdef np.ndarray[np.float64_t, ndim=3] vp
+    cdef int i, j, k
+    cdef np.float64_t r, phi, theta, px, py
+    cdef np.float64_t pi = 3.1415926
+    cdef np.float64_t fov_rad = fov * pi / 180.0
+    vp = np.zeros((resolution, resolution, 3), dtype="float64")
+    for i in range(resolution):
+        px = 2.0 * i / (resolution) - 1.0
+        for j in range(resolution):
+            py = 2.0 * j / (resolution) - 1.0
+            r = (px*px + py*py)**0.5
+            if r == 0.0:
+                phi = 0.0
+            elif px < 0:
+                phi = pi - asin(py / r)
+            else:
+                phi = asin(py / r)
+            theta = r * fov_rad / 2.0
+            vp[i,j,0] = sin(theta) * cos(phi)
+            vp[i,j,1] = sin(theta) * sin(phi)
+            vp[i,j,2] = cos(theta)
+    return vp
+
 cdef class star_kdtree_container:
     cdef kdtree_utils.kdtree *tree
     cdef public np.float64_t sigma

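arr_fisheye_vectors maps each pixel's normalized (px, py) position to a
unit view vector through r, phi, and theta, following Bourke's figures.  A
rough vectorized NumPy equivalent under the same conventions (function name
illustrative):

    import numpy as np

    def fisheye_vectors_np(resolution, fov):
        # Normalized pixel coordinates in [-1, 1), matching the loops above.
        lin = 2.0 * np.arange(resolution) / resolution - 1.0
        px, py = np.meshgrid(lin, lin, indexing="ij")
        r = np.sqrt(px * px + py * py)
        # phi = asin(py/r), zero where r == 0, reflected where px < 0.
        r_safe = np.where(r > 0, r, 1.0)
        sin_arg = np.where(r > 0, py / r_safe, 0.0)
        phi = np.where(px < 0, np.pi - np.arcsin(sin_arg), np.arcsin(sin_arg))
        theta = r * np.radians(fov) / 2.0
        return np.dstack((np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)))
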
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/utilities/_amr_utils/field_interpolation_tables.pxd
--- a/yt/utilities/_amr_utils/field_interpolation_tables.pxd
+++ b/yt/utilities/_amr_utils/field_interpolation_tables.pxd
@@ -25,7 +25,7 @@
 
 cimport cython
 cimport numpy as np
-from fp_utils cimport imax, fmax, imin, fmin, iclip, fclip
+from fp_utils cimport imax, fmax, imin, fmin, iclip, fclip, fabs
 
 cdef struct FieldInterpolationTable:
     # Note that we make an assumption about retaining a reference to values
@@ -91,3 +91,30 @@
     for i in range(3):
         ta = fmax((1.0 - dt*trgba[i+3]), 0.0)
         rgba[i] = dt*trgba[i] + ta * rgba[i]
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline void FIT_eval_transfer_with_light(np.float64_t dt, np.float64_t *dvs, 
+        np.float64_t *grad, np.float64_t *l_dir, np.float64_t *l_rgba,
+        np.float64_t *rgba, int n_fits,
+        FieldInterpolationTable fits[6],
+        int field_table_ids[6]) nogil:
+    cdef int i, fid, use
+    cdef np.float64_t ta, tf, istorage[6], trgba[6], dot_prod
+    dot_prod = 0.0
+    for i in range(3):
+        dot_prod += l_dir[i]*grad[i]
+    #dot_prod = fmax(0.0, dot_prod)
+    for i in range(6): istorage[i] = 0.0
+    for i in range(n_fits):
+        istorage[i] = FIT_get_value(&fits[i], dvs)
+    for i in range(n_fits):
+        fid = fits[i].weight_table_id
+        if fid != -1: istorage[i] *= istorage[fid]
+    for i in range(6):
+        trgba[i] = istorage[field_table_ids[i]]
+    for i in range(3):
+        ta = fmax((1.0 - dt*trgba[i+3]), 0.0)
+        rgba[i] = dt*trgba[i] + ta * rgba[i] + dt*dot_prod*l_rgba[i]*trgba[i]*l_rgba[3] #(trgba[0]+trgba[1]+trgba[2])
+

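The lighting variant above augments the usual over-compositing with an
emission term scaled by the dot product of the light direction and the
local gradient.  A schematic NumPy restatement of the per-sample update,
with the table lookups elided so trgba is taken as input (argument names
follow the Cython signature):

    import numpy as np

    def composite_with_light(rgba, trgba, dt, grad, l_dir, l_rgba):
        # Lambertian-style factor: light direction dotted with the gradient.
        dot_prod = float(np.dot(l_dir, grad))
        for i in range(3):
            ta = max(1.0 - dt * trgba[i + 3], 0.0)
            rgba[i] = (dt * trgba[i] + ta * rgba[i]
                       + dt * dot_prod * l_rgba[i] * trgba[i] * l_rgba[3])
        return rgba
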
diff -r 2d81eb7134195d431da234bf1fb9c8c8ccf743a9 -r 284135e173b5f634347025aadb1fba4acedca0b4 yt/utilities/_amr_utils/fp_utils.pxd
--- a/yt/utilities/_amr_utils/fp_utils.pxd
+++ b/yt/utilities/_amr_utils/fp_utils.pxd
@@ -42,6 +42,10 @@
     if f0 < f1: return f0
     return f1
 
+cdef inline np.float64_t fabs(np.float64_t f0) nogil:
+    if f0 < 0.0: return -f0
+    return f0
+
 cdef inline int iclip(int i, int a, int b) nogil:
     if i < a: return a
     if i > b: return b

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/985572664992/
changeset:   985572664992
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-01 20:45:10
summary:     Converting disk to cython selection routines
affected #:  1 file

diff -r 284135e173b5f634347025aadb1fba4acedca0b4 -r 985572664992f545220de5375f1172429b6bd2b0 yt/utilities/_amr_utils/geometry_utils.pyx
--- a/yt/utilities/_amr_utils/geometry_utils.pyx
+++ b/yt/utilities/_amr_utils/geometry_utils.pyx
@@ -205,6 +205,105 @@
             y += dds[1]
         x += dds[0]
     return mask
+
+# Disk
+
+def disk_grids(dobj, np.ndarray[np.float64_t, ndim=2] left_edges,
+                     np.ndarray[np.float64_t, ndim=2] right_edges):
+    cdef int i, j, k, xi, yi, zi
+    cdef int ng = left_edges.shape[0]
+    cdef np.ndarray[np.int32_t, ndim=1] gridi = np.zeros(ng, dtype='int32')
+    cdef np.float64_t *arr[2]
+    arr[0] = <np.float64_t *> left_edges.data
+    arr[1] = <np.float64_t *> right_edges.data
+    cdef np.float64_t x, y, z
+    cdef np.float64_t norm_vec[3], center[3]
+    cdef np.float64_t d = dobj._d # offset to center
+    cdef np.float64_t rs = dobj._radius
+    cdef np.float64_t height = dobj._height
+    cdef np.float64_t H, D, R
+    cdef int cond[4]
+    # * H < height
+    # * R < radius
+    # * not ( all(H > 0) or all(H < 0) )
+    for i in range(3):
+        norm_vec[i] = dobj._norm_vec[i]
+        center[i] = dobj.center[i]
+    for i in range(ng):
+        cond[0] = cond[1] = 0
+        cond[2] = cond[3] = 1
+        for xi in range(2):
+            x = arr[xi][i * 3 + 0]
+            for yi in range(2):
+                y = arr[yi][i * 3 + 1]
+                for zi in range(2):
+                    z = arr[zi][i * 3 + 2]
+                    H = ( x * norm_vec[0]
+                        + y * norm_vec[1]
+                        + z * norm_vec[2]) + d
+                    D = ((x - center[0])**2
+                       + (y - center[1])**2
+                       + (z - center[2])**2)
+                    R = (D - H*H)**0.5
+                    if cond[0] == 0 and H < height: cond[0] = 1
+                    if cond[1] == 0 and R < rs: cond[1] = 1
+                    if cond[2] == 1 and H < 0: cond[2] = 0
+                    if cond[3] == 1 and H > 0: cond[3] = 0
+        if cond[0] == cond[1] == 1 and not (cond[2] == 1 or cond[3] == 1):
+            gridi[i] = 1
+    return gridi
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline int disk_cell(
+                        np.float64_t x, np.float64_t y, np.float64_t z,
+                        np.float64_t norm_vec[3], np.float64_t obj_c[3],
+                        np.float64_t obj_d, np.float64_t obj_r,
+                        np.float64_t obj_h):
+    cdef np.float64_t h, d, r
+    h = x * norm_vec[0] + y * norm_vec[1] + z * norm_vec[2] + obj_d
+    d = ( (x - obj_c[0])**2
+        + (y - obj_c[1])**2
+        + (z - obj_c[2])**2)**0.5
+    r = (d*d - h*h)**0.5
+    if fabs(h) <= obj_h and r <= obj_r: return 1
+    return 0
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+def disk_cells(dobj, gobj):
+    cdef np.ndarray[np.int32_t, ndim=3] mask 
+    cdef np.ndarray[np.float64_t, ndim=1] left_edge = gobj.LeftEdge
+    cdef np.ndarray[np.float64_t, ndim=1] dds = gobj.dds
+    cdef int i, j, k
+    cdef np.float64_t x, y, z, dist
+    cdef np.float64_t norm_vec[3], obj_c[3]
+    cdef np.float64_t obj_d = dobj._d
+    cdef np.float64_t obj_r = dobj._radius
+    cdef np.float64_t obj_h = dobj._height
+    for i in range(3):
+        norm_vec[i] = dobj._norm_vec[i]
+        obj_c[i] = dobj.center[i]
+    mask = np.zeros(gobj.ActiveDimensions, dtype='int32')
+    x = left_edge[0] + dds[0] * 0.5
+    for i in range(mask.shape[0]):
+        y = left_edge[1] + dds[1] * 0.5
+        for j in range(mask.shape[1]):
+            z = left_edge[2] + dds[2] * 0.5
+            for k in range(mask.shape[2]):
+                mask[i,j,k] = disk_cell(x, y, z, norm_vec, obj_c,
+                                    obj_d, obj_r, obj_h)
+                z += dds[2]
+            y += dds[1]
+        x += dds[0]
+    return mask
+
+# Inclined Box
+# Rectangular Prism
+# Sphere
+# Ellipse
                 
 @cython.boundscheck(False)
 @cython.wraparound(False)

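The disk_cell test above computes the height h of a point along the disk
normal (offset by d) and the in-plane radius r, accepting the point when
both lie within the cylinder.  The same check in plain NumPy, as a rough
sketch (function name illustrative):

    import numpy as np

    def point_in_disk(p, center, normal, d, radius, height):
        # Height above the disk midplane, along the unit normal.
        h = np.dot(p, normal) + d
        # In-plane distance: total distance with the normal part removed.
        dist2 = np.sum((np.asarray(p) - center) ** 2)
        r = np.sqrt(max(dist2 - h * h, 0.0))
        return abs(h) <= height and r <= radius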

https://bitbucket.org/yt_analysis/yt/commits/5afde6d52ccf/
changeset:   5afde6d52ccf
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-01 21:49:36
summary:     First pass at rectangular prism and sphere selection.
affected #:  1 file

diff -r 985572664992f545220de5375f1172429b6bd2b0 -r 5afde6d52ccf89262bc08d16151d9b60901c384c yt/utilities/_amr_utils/geometry_utils.pyx
--- a/yt/utilities/_amr_utils/geometry_utils.pyx
+++ b/yt/utilities/_amr_utils/geometry_utils.pyx
@@ -27,6 +27,7 @@
 cimport numpy as np
 cimport cython
 from stdlib cimport malloc, free
+from fp_utils cimport fclip
 
 cdef extern from "math.h":
     double exp(double x) nogil
@@ -302,7 +303,132 @@
 
 # Inclined Box
 # Rectangular Prism
+
+def rprism_grids(dobj, np.ndarray[np.float64_t, ndim=2] left_edges,
+                     np.ndarray[np.float64_t, ndim=2] right_edges):
+    cdef int i, n
+    cdef int ng = left_edges.shape[0]
+    cdef np.ndarray[np.int32_t, ndim=1] gridi = np.zeros(ng, dtype='int32')
+    cdef np.ndarray[np.float64_t, ndim=1] rp_left = dobj.left_edge
+    cdef np.ndarray[np.float64_t, ndim=1] rp_right = dobj.right_edge
+    for n in range(ng):
+        inside = 1
+        for i in range(3):
+            if rp_left[i] >= right_edges[n,i] or \
+               rp_right[i] <= left_edges[n,i]:
+                inside = 0
+                break
+        if inside == 1: gridi[n] = 1
+    return gridi
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline int rprism_cell(
+                        np.float64_t x, np.float64_t y, np.float64_t z,
+                        np.float64_t LE[3], np.float64_t RE[3]):
+    if LE[0] > x or RE[0] < x: return 0
+    if LE[1] > y or RE[1] < y: return 0
+    if LE[2] > z or RE[2] < z: return 0
+    return 1
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+def rprism_cells(dobj, gobj):
+    cdef int i, j, k
+    cdef np.ndarray[np.int32_t, ndim=3] mask 
+    cdef np.ndarray[np.float64_t, ndim=1] left_edge = gobj.LeftEdge
+    cdef np.ndarray[np.float64_t, ndim=1] right_edge = gobj.RightEdge
+    cdef np.ndarray[np.float64_t, ndim=1] dds = gobj.dds
+    cdef np.float64_t LE[3], RE[3]
+    for i in range(3):
+        LE[i] = dobj.LeftEdge[i]
+        RE[i] = dobj.RightEdge[i]
+    # TODO: Implement strict and periodicity ...
+    cdef np.float64_t x, y, z
+    mask = np.zeros(gobj.ActiveDimensions, dtype='int32')
+    x = left_edge[0] + dds[0] * 0.5
+    for i in range(mask.shape[0]):
+        y = left_edge[1] + dds[1] * 0.5
+        for j in range(mask.shape[1]):
+            z = left_edge[2] + dds[2] * 0.5
+            for k in range(mask.shape[2]):
+                mask[i,j,k] = rprism_cell(x, y, z, LE, RE)
+                z += dds[2]
+            y += dds[1]
+        x += dds[0]
+    return mask
+
 # Sphere
+
+def sphere_grids(dobj, np.ndarray[np.float64_t, ndim=2] left_edges,
+                     np.ndarray[np.float64_t, ndim=2] right_edges):
+    cdef int i, n
+    cdef int ng = left_edges.shape[0]
+    cdef np.ndarray[np.int32_t, ndim=1] gridi = np.zeros(ng, dtype='int32')
+    cdef np.float64_t center[3], box_center, relcenter, closest, dist
+    cdef np.float64_t edge
+    for i in range(3):
+        center[i] = dobj.center[i]
+    cdef np.float64_t radius2 = dobj._radius * dobj._radius
+    for n in range(ng):
+        # Check if the sphere is inside the grid
+        if (left_edges[n,0] <= center[0] <= right_edges[n,0] and
+            left_edges[n,1] <= center[1] <= right_edges[n,1] and
+            left_edges[n,2] <= center[2] <= right_edges[n,2]):
+            gridi[n] = 1
+            continue
+        # http://www.gamedev.net/topic/335465-is-this-the-simplest-sphere-aabb-collision-test/
+        dist = 0
+        for i in range(3):
+            box_center = (right_edges[n,i] + left_edges[n,i])/2.0
+            relcenter = center[i] - box_center
+            edge = right_edges[n,i] - left_edges[n,i]
+            closest = relcenter - fclip(relcenter, -edge/2.0, edge/2.0)
+            dist += closest * closest
+        if dist < radius2: gridi[n] = 1
+    return gridi
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+cdef inline int sphere_cell(
+                        np.float64_t x, np.float64_t y, np.float64_t z,
+                        np.float64_t center[3], np.float64_t radius2):
+    cdef np.float64_t dist2
+    dist2 = ( (x - center[0])*(x - center[0])
+            + (y - center[1])*(y - center[1])
+            + (z - center[2])*(z - center[2]) )
+    if dist2 < radius2: return 1
+    return 0
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+@cython.cdivision(True)
+def sphere_cells(dobj, gobj):
+    cdef np.ndarray[np.int32_t, ndim=3] mask 
+    cdef np.ndarray[np.float64_t, ndim=1] dds = gobj.dds
+    cdef np.float64_t radius2 = dobj._radius * dobj._radius
+    cdef np.float64_t center[3]
+    cdef np.ndarray[np.float64_t, ndim=1] left_edge = gobj.LeftEdge
+    cdef np.ndarray[np.float64_t, ndim=1] right_edge = gobj.RightEdge
+    cdef int i, j, k
+    for i in range(3): center[i] = dobj.center[i]
+    cdef np.float64_t x, y, z
+    mask = np.zeros(gobj.ActiveDimensions, dtype='int32')
+    x = left_edge[0] + dds[0] * 0.5
+    for i in range(mask.shape[0]):
+        y = left_edge[1] + dds[1] * 0.5
+        for j in range(mask.shape[1]):
+            z = left_edge[2] + dds[2] * 0.5
+            for k in range(mask.shape[2]):
+                mask[i,j,k] = sphere_cell(x, y, z, center, radius2)
+                z += dds[2]
+            y += dds[1]
+        x += dds[0]
+    return mask
+
 # Ellipse
                 
 @cython.boundscheck(False)

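sphere_grids above uses the standard sphere/AABB overlap test from the
linked thread: clamp the sphere center into the box and compare the
residual distance against the radius.  A compact NumPy restatement
(function name illustrative):

    import numpy as np

    def sphere_overlaps_box(center, radius, left_edge, right_edge):
        # Closest point of the box to the sphere center, per axis.
        closest = np.clip(center, left_edge, right_edge)
        dist2 = np.sum((np.asarray(center) - closest) ** 2)
        return dist2 < radius * radius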

https://bitbucket.org/yt_analysis/yt/commits/d32fdfa3ff2d/
changeset:   d32fdfa3ff2d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 19:14:36
summary:     Rename AMRData to YTDataContainer, and do the same for its subclasses.
affected #:  8 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/6ea3fd7ba2a1/
changeset:   6ea3fd7ba2a1
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 19:34:06
summary:     More refactoring, to change all the AMR stuff to YT stuff.
affected #:  8 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/d8a1442320d8/
changeset:   d8a1442320d8
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 19:54:54
summary:     Fixing returning of the grid index mask from Cython to use bools
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/0778f5c8388e/
changeset:   0778f5c8388e
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 20:22:25
summary:     Initial import of the refactored selection & construction data
containers, not yet tested or corrected for formatting.
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/1e2d7229706e/
changeset:   1e2d7229706e
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 21:09:27
summary:     Fixing imports, moving some items back to data_containers.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/9bcc6eb8fb7d/
changeset:   9bcc6eb8fb7d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 21:19:19
summary:     PyCharm ignores
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/df31fedc310d/
changeset:   df31fedc310d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 21:23:06
summary:     Moving the AMRHierarchy object into a new file, grid_geometry_handler.py
affected #:  18 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/9ceeaa7f4adf/
changeset:   9ceeaa7f4adf
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 21:30:16
summary:     Refactoring the name AMRHierarchy to GridGeometryHandler.  Fixing a few
imports.
affected #:  20 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/2fb3b499d947/
changeset:   2fb3b499d947
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 22:12:21
summary:     Refactoring and extracting a superclass for the GeometryHandler.
affected #:  9 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/547116c3a5f8/
changeset:   547116c3a5f8
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-02 23:10:16
summary:     Starting to implement moving the data IO into the mesh/geometry.
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7da381c53a68/
changeset:   7da381c53a68
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 00:12:21
summary:     Nearly correct answers for spheres.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e02e9933d4f1/
changeset:   e02e9933d4f1
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 03:48:37
summary:     This implements using masks and dataspaces directly to get a final, full-on
set of points to grab from a dataspace.  The selection all occurs right in the
GridGeometryHandler, and then the dataspace is created by the IO handler.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/c0387376bdf7/
changeset:   c0387376bdf7
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 03:53:13
summary:     Make child masks all bools, which get casted to uint8_t inside cython routines.
affected #:  5 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/29f72eb92a9d/
changeset:   29f72eb92a9d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 17:16:42
summary:     Refactoring the selection routines into classes in Cython, mostly not using the
GIL.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e59d2baee7fe/
changeset:   e59d2baee7fe
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 18:17:07
summary:     Identical cells are selected for non-periodic spheres from this as from the old
method.  The sum of the arrays is not yet identical, so more work needs to be
done on the IO.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/ea67934dbbce/
changeset:   ea67934dbbce
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 18:37:30
summary:     Using the correct dataspaces for memory versus file gave massive slowdowns.
This implements a strategy of reading everything from disk and discarding
whatever is not selected, but at least the number of simultaneous temporary
arrays is kept low.
Note also that this speeds things up substantially compared to the old way.
affected #:  2 files
Diff not available.
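A minimal h5py sketch of the read-everything-then-mask strategy described
here, assuming a simple file layout (all names illustrative):

    import h5py

    def read_and_mask(filename, dataset, mask):
        # One contiguous read of the whole dataset, then discard the
        # unselected values; temporaries stay short-lived.
        f = h5py.File(filename, "r")
        data = f[dataset][:]
        f.close()
        return data[mask]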

https://bitbucket.org/yt_analysis/yt/commits/400ad9dcc934/
changeset:   400ad9dcc934
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 21:58:32
summary:     Adding rectangular prisms, removing dead code
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/a216ae4af4ba/
changeset:   a216ae4af4ba
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 22:14:49
summary:     Merging from yt mainline tip.
affected #:  113 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7e3d11da31c3/
changeset:   7e3d11da31c3
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 22:16:06
summary:     Fixing a missing import in the merge.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/0610040a14dd/
changeset:   0610040a14dd
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 23:22:29
summary:     Switching YTCylinderBase to YTDiskBase, moving the disk_* into its selector
class.  Correct results are found.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e7311d02e494/
changeset:   e7311d02e494
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-03 23:50:25
summary:     Moving Cutting Plane into Cython selectors, but one cell is not selected that
previously had been.  More to come.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/4c3941c69342/
changeset:   4c3941c69342
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-04 14:33:03
summary:     Converting slicing to the new method.  This is the first routine that gets
slower, and it gets slower by a good amount.  These results aren't surprising,
and hopefully we'll be able to add in special cases for this sort of thing.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/16584691eee7/
changeset:   16584691eee7
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-04 14:56:45
summary:     Sped up slices a bit, added ortho rays to the selection routines.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7cb91ba45071/
changeset:   7cb91ba45071
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-04 19:06:06
summary:     Speed things up somewhat for slicing.  Still poor results.  Going to try
returning a slice object next.  Note also that the old method subtracted off
the index from the domain left, then the integer startindex of the grid.  Not
sure why.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/3212c14dff1d/
changeset:   3212c14dff1d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-06 16:10:00
summary:     Adding a subclass override for ortho ray counting and selecting.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/d0f2b3c99a58/
changeset:   d0f2b3c99a58
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-06 19:56:01
summary:     Adding early termination and an all-inclusive region check.  Current results
are all faster except for region, which is 20% slower.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/f5875731e5dd/
changeset:   f5875731e5dd
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-22 16:37:58
summary:     Field generation works now; silly mistake in how I updated gen_field_data
(never making generated fields available in subsequent passes) now fixed.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/1fd8074fe03c/
changeset:   1fd8074fe03c
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-22 23:32:09
summary:     Data objects now have a size and a shape.  They only need to be counted once.
affected #:  7 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/555769ecee2c/
changeset:   555769ecee2c
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 14:10:19
summary:     Removing prange, which was giving non-deterministic results.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5fa76948d5dd/
changeset:   5fa76948d5dd
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 15:14:45
summary:     First pass at chunking IO.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/00b0504fa90d/
changeset:   00b0504fa90d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 16:34:51
summary:     Moving DerivedQuantities to use chunking rather than _get_data_from_grid.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/99e3b2eeee51/
changeset:   99e3b2eeee51
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 18:35:50
summary:     Removed lazy_reader from profiles.  If you don't ask for any non-spatial
fields, it works.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/9709c842ec6b/
changeset:   9709c842ec6b
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 20:54:20
summary:     Can now generate spatial data, although this is feeling somewhat more hacky.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/3f39fb8c6e7d/
changeset:   3f39fb8c6e7d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 21:04:25
summary:     A bit of cleanup, and also changing "grid" to "spatial" in the chunking styles.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/d68909eb0c34/
changeset:   d68909eb0c34
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 21:51:35
summary:     Attempting to implement the grid collection object, and in doing so, have
identified a few problems with the previous chunking method.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/39e470349c8f/
changeset:   39e470349c8f
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 22:07:20
summary:     This fixes up finding the maximum location, and also adds a sorting-for-IO
feature.  I'm really not terribly keen on how the dependencies are currently
calculated and recalculated, which adds a substantial amount of time.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/b5d597dc1648/
changeset:   b5d597dc1648
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 22:12:35
summary:     Adding a field_dependencies property to static outputs.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/779215133824/
changeset:   779215133824
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 23:10:27
summary:     Convert parallel_objects to use itertools.  Remove finalize_parallel from
derived quantities.  Make DQs work with distributed IO.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/31b9f0a8c271/
changeset:   31b9f0a8c271
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-23 23:49:24
summary:     Have added the ability to generate ghost zones, sort of.  Ghost zone generation
is currently stuck until I start dealing with the constructed data objects.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/eabfe4e87dc6/
changeset:   eabfe4e87dc6
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-24 17:17:45
summary:     Starting on particle data.  Removing calls to refresh_data.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e1b9294dce57/
changeset:   e1b9294dce57
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-24 20:39:38
summary:     Make a chunk an actual class.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/673dd4bae0cc/
changeset:   673dd4bae0cc
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-24 20:58:20
summary:     Fixing CellMassMsun.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/2a40ca28b230/
changeset:   2a40ca28b230
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-24 22:12:05
summary:     First attempt at adding spatial information to chunks.
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e10640a70771/
changeset:   e10640a70771
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-24 22:31:44
summary:     This fixes issues with icoords.  Check this out:

http://paste.yt-project.org/show/2190/
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/2fe8aef23b53/
changeset:   2fe8aef23b53
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 06:07:56
summary:     Move selection into (in this case) grids.  Next up is refactoring chunks.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/045a36641511/
changeset:   045a36641511
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 06:19:58
summary:     Moving counting to the grids as well, and moving the chunking specifics to
the subclass of the geometry handler, while leaving dispatching inside the base
class.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/6550568916e8/
changeset:   6550568916e8
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 06:26:21
summary:     Adding floating point coordinates and fixing a bug with integer coordinates.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7725bd7bea9a/
changeset:   7725bd7bea9a
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 06:29:11
summary:     Coordinate dtype fix.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/b4d5041f2970/
changeset:   b4d5041f2970
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 14:04:31
summary:     Getting rid of a bunch of cruft.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/06fa9826c8d4/
changeset:   06fa9826c8d4
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 14:18:45
summary:     Start setting up base chunks and the like more clearly.  We call the
identification routine in get_data() because it's the only time we *know* the
entire object has been fully created.  Note also that this fixes a bug that
was drastically overcounting.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/56b4b120c4f8/
changeset:   56b4b120c4f8
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-25 14:28:59
summary:     Fixing a dumb counting error, which was not evaluating count_selection
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/3df28ffabdb1/
changeset:   3df28ffabdb1
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-26 23:25:19
summary:     Adding in container-specific fields, useful for things like coordinates in
slices and whatnot.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/f4aa4ee5adcc/
changeset:   f4aa4ee5adcc
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-27 01:56:54
summary:     Not entirely satisfied (need to refactor sorting into a code-specific handler)
but the grids always *always* need to be sorted in the same order regardless of
the chunking scheme.  Now slicing and plotting work!
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/c76c96c628e0/
changeset:   c76c96c628e0
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-27 04:05:08
summary:     Adding coordinate generation for cutting planes.  They now plot as well.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5207ff7141cf/
changeset:   5207ff7141cf
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-27 17:40:10
summary:     Generate coordinates all in one go.  This should be somewhat faster, and
since we typically want all three axes anyway, it's a better approach.
affected #:  3 files
Diff not available.
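
One way to picture "all in one go": build the cell-center coordinates for
all three axes in a single vectorized pass instead of once per axis.  The
function below is a hedged sketch, not yt's actual fcoords code; the grid
left edge, cell widths, and dimensions are assumed inputs.

    import numpy as np

    def fcoords_all_axes(left_edge, dds, dims):
        # idx has shape (3, nx, ny, nz): integer indices for every axis at once.
        idx = np.indices(dims, dtype="float64")
        centers = (left_edge[:, None, None, None]
                   + (idx + 0.5) * dds[:, None, None, None])
        return centers.reshape(3, -1).T  # one (ncells, 3) array, all axes

    coords = fcoords_all_axes(np.zeros(3), np.full(3, 0.25), (4, 4, 4))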

https://bitbucket.org/yt_analysis/yt/commits/b7e0f4f7bd91/
changeset:   b7e0f4f7bd91
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-27 22:24:46
summary:     Optimizing fcoords and icoords and adding them as attributes to data sources.
Removing some older cruft.
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/a95eb08e8d65/
changeset:   a95eb08e8d65
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 17:42:32
summary:     Starting to thread through the ability to set and use multiple fluids.  Removed
some particle functionality for now.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/89ff6baad89f/
changeset:   89ff6baad89f
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 17:48:45
summary:     Removing pdb
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/ee962bf5b45c/
changeset:   ee962bf5b45c
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 19:40:58
summary:     Removing a bunch of the logic for calculating which fields still have to be
generated; this is much simpler.  Note that coordinates have suddenly broken!
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/9b942469621a/
changeset:   9b942469621a
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 19:59:47
summary:     Minor change.  Also, coordinates *do* work.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/147961f46bf0/
changeset:   147961f46bf0
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 22:35:14
summary:     Progress on particle reading.  Mostly implemented the stride&count/stride&read
dance.
affected #:  8 files
Diff not available.
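
The "stride&count/stride&read dance" presumably amounts to a two-pass
pattern: stride over the particle data once to count what the selector
accepts, allocate a buffer of exactly that size, then stride again to
read.  A minimal sketch with invented names:

    import numpy as np

    def two_pass_read(chunks, selector):
        # Pass 1: count selected particles so we allocate exactly once.
        count = sum(selector(pos).sum() for pos in chunks)
        out = np.empty(count, dtype="float64")
        # Pass 2: read only the selected values into the buffer.
        n = 0
        for pos in chunks:
            mask = selector(pos)
            out[n:n + mask.sum()] = pos[mask]
            n += mask.sum()
        return out

    chunks = [np.linspace(0, 1, 10), np.linspace(1, 2, 10)]
    vals = two_pass_read(chunks, lambda x: x < 0.75)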

https://bitbucket.org/yt_analysis/yt/commits/21f1565040a0/
changeset:   21f1565040a0
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 23:10:02
summary:     Generated particle fields -- like ParticleMassMsun -- now work.
affected #:  2 files
Diff not available.
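
For reference, a "generated" particle field is a function of other fields,
registered by name and evaluated on demand.  The registry below is invented
for illustration (yt's real mechanism is its add_field machinery), and the
sketch assumes particle masses are stored in grams:

    MSUN_G = 1.989e33  # grams per solar mass (approximate)

    derived_fields = {}

    def add_derived_field(name, function):
        derived_fields[name] = function

    def _particle_mass_msun(data):
        # Assumes the on-disk particle_mass field is in grams.
        return data["particle_mass"] / MSUN_G

    add_derived_field("ParticleMassMsun", _particle_mass_msun)

    data = {"particle_mass": 3.978e33}
    print(derived_fields["ParticleMassMsun"](data))  # ~2.0 solar masses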

https://bitbucket.org/yt_analysis/yt/commits/627f16ff93fe/
changeset:   627f16ff93fe
branch:      geometry_handling
user:        MatthewTurk
date:        2012-02-29 23:23:08
summary:     Don't use an assert here, use a NotImplementedError.
affected #:  1 file
Diff not available.
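
The distinction matters because an assert vanishes entirely under
"python -O" and raises only a generic AssertionError, while
NotImplementedError always fires and tells the caller that a subclass
must override the method.  An illustrative (not verbatim) example:

    class SelectorBase:
        def select_cells(self, grid):
            # assert False           # silently skipped under python -O
            raise NotImplementedError  # always raised; subclass must override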

https://bitbucket.org/yt_analysis/yt/commits/969fc3547a8a/
changeset:   969fc3547a8a
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-08 01:05:30
summary:     Beginning conversion of RAMSES to octree geometry handling.  Have refactored
some of the RAMSES routines, which will not work presently.  Moving toward
direct handling of Octs.
affected #:  6 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/14b391598044/
changeset:   14b391598044
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-08 01:15:37
summary:     Add a few cython decorators, which (sadly) have little impact on performance in
our particular use case.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/0529297937d4/
changeset:   0529297937d4
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-08 07:02:52
summary:     A few minor fixes to the hilbert index utility functions.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/9f2986852cf0/
changeset:   9f2986852cf0
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-08 07:03:52
summary:     Adding a forgotten setup.py and removing a ValidateSpatial from
particle_mass in Enzo.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/c801248056e2/
changeset:   c801248056e2
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-08 07:08:48
summary:     Restore compilation temporarily
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5c3fa8be2fe4/
changeset:   5c3fa8be2fe4
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-30 22:56:23
summary:     Merging from tip of dev branch, to keep up to date.
affected #:  77 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/3927db4f4f73/
changeset:   3927db4f4f73
branch:      geometry_handling
user:        MatthewTurk
date:        2012-03-30 23:04:33
summary:     A few fixes to get things to import.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/d67375d582fa/
changeset:   d67375d582fa
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-03 16:27:33
summary:     Initial import of an oct container.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/66a3c59a2e10/
changeset:   66a3c59a2e10
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-03 16:52:20
summary:     Fixed an off-by-one error.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7a20b742953c/
changeset:   7a20b742953c
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-03 17:22:23
summary:     Splitting Oct definitions into multiple files, attempting a selection routine.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/b08234b2f569/
changeset:   b08234b2f569
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-03 17:54:34
summary:     A few modifications to the oct geometry container, so it now allocates exactly
the correct number of octs in single-domain calculations.  Some of the math for
identifying children was backwards, too.
affected #:  2 files
Diff not available.
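
For context on the "backwards" math: which of an oct's eight children
contains a point is decided by one comparison per axis against the oct
center, each contributing one bit of the child index, so a reversed
comparison silently mirrors the tree along that axis.  A sketch with an
assumed bit ordering:

    def child_index(center, point):
        # One bit per axis: 0 below the oct center, 1 at or above it.
        i = int(point[0] >= center[0])
        j = int(point[1] >= center[1])
        k = int(point[2] >= center[2])
        return (i << 2) | (j << 1) | k  # index into the 2x2x2 child array

    assert child_index((0.5, 0.5, 0.5), (0.75, 0.25, 0.9)) == 0b101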

https://bitbucket.org/yt_analysis/yt/commits/5f2961b5f1db/
changeset:   5f2961b5f1db
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-03 19:36:53
summary:     Octree geometry selection seems to work now.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/f7ad6b82b6ca/
changeset:   f7ad6b82b6ca
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-09 16:02:09
summary:     Adding some utilities for handling fortran files.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/ebe3983044e9/
changeset:   ebe3983044e9
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-09 19:02:01
summary:     Cleanup of a substantial portion of the octree reading code for RAMSES.  Use
the oct_container now; the code is mostly, if not completely, pure Python.  Add
a fortran_utils module for reading Fortran records.
affected #:  6 files
Diff not available.
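
Fortran unformatted sequential files frame every record with byte-count
markers, which is what a fortran_utils module has to decode.  The sketch
below assumes 4-byte native-endian markers (in reality these vary by
compiler) and is not yt's actual implementation:

    import struct

    def read_fortran_record(f, dtype_char="i"):
        # Record layout: [n] [n bytes of payload] [n]
        (n,) = struct.unpack("=i", f.read(4))
        payload = f.read(n)
        (n2,) = struct.unpack("=i", f.read(4))
        if n != n2:
            raise IOError("mismatched record markers: %d != %d" % (n, n2))
        count = n // struct.calcsize("=" + dtype_char)
        return struct.unpack("=%d%s" % (count, dtype_char), payload)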

https://bitbucket.org/yt_analysis/yt/commits/f7b55b1e4c29/
changeset:   f7b55b1e4c29
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-09 21:30:03
summary:     RAMSES tree generation and oct handling now works; selectors can be applied (by
hand) to the Octree handler.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/dbdd8d2377ab/
changeset:   dbdd8d2377ab
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-09 23:44:09
summary:     Split the RAMSES octree container into a subclass and add decorators for
selection routines for octs.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/115aa5ba1835/
changeset:   115aa5ba1835
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-10 22:38:27
summary:     Set up initial chunking, counting (of octs, domain-octs, and cells) and so on
for RAMSES data.  Octs now have positions associated with them, but I think we
might be able to remove that at some point.  The tricky part is that we don't
necessarily want to walk from the root for things like counting.
affected #:  8 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/338ca8d05439/
changeset:   338ca8d05439
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-11 12:06:06
summary:     [octree] Moving count_cells to base class
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/3becefcbe524/
changeset:   3becefcbe524
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-11 13:22:01
summary:     [octree] Octrees for RAMSES are now allocated on a domain-by-domain basis.
This means that we also have to keep a local versus global counter when
iterating over local_ind (which we could perhaps make more elegant somehow).
Additionally, we insert any previously uninserted components of the mesh
during the reading of a single domain.  So we can construct the entire tree,
regardless of which domain we are currently reading.  In practice this *could*
lead to better load balancing, but that is further off.
affected #:  3 files
Diff not available.
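
The local-versus-global bookkeeping can be pictured as translating a
tree-wide oct index into a (domain, offset-within-domain) pair, since each
domain now owns a contiguous block of octs.  A sketch with invented
structures:

    domain_counts = [100, 250, 80]   # octs allocated per domain
    offsets, total = [], 0
    for c in domain_counts:
        offsets.append(total)
        total += c

    def local_ind(global_ind):
        # Find the domain whose block contains this global index.
        for dom, off in enumerate(offsets):
            if global_ind < off + domain_counts[dom]:
                return dom, global_ind - off
        raise IndexError(global_ind)

    assert local_ind(120) == (1, 20)  # 21st oct of the second domain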

https://bitbucket.org/yt_analysis/yt/commits/654bd3a1ac57/
changeset:   654bd3a1ac57
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-11 16:39:49
summary:     Adding a RAMSESDomainSubset for chunking data and storing indices.  Starting to
rethink the RAMSES data chunking scheme.  icoords doesn't quite work because it
doesn't apply a selector.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/8dfb9a2e3d2c/
changeset:   8dfb9a2e3d2c
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-11 23:05:08
summary:     [octree] Fix a pernicious segfault.  Removed a bunch of the domain indices
stuff.  A base chunk is now identified by a single mask that is shared between
chunks.  Add ires and icoords.  Projections now actually work in manual mode,
and give the correct (1.0 everywhere) answer for RAMSES data.
affected #:  2 files
Diff not available.
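
A rough sketch of the "single shared mask" idea: the base chunk computes
one boolean selection mask, and every derived chunk applies a view of that
same mask instead of recomputing or copying it.  All names are
illustrative:

    import numpy as np

    class BaseChunk:
        def __init__(self, mask):
            self.mask = mask  # the one shared boolean mask

    class SubChunk:
        def __init__(self, base, sl):
            self.base = base
            self.sl = sl      # this chunk's slice of the shared mask

        def icoords(self, all_icoords):
            # Every chunk selects through the same mask, so results agree.
            return all_icoords[self.sl][self.base.mask[self.sl]]

    base = BaseChunk(np.array([True, False, True, True]))
    chunk = SubChunk(base, slice(0, 2))
    print(chunk.icoords(np.arange(4)))  # -> [0]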

https://bitbucket.org/yt_analysis/yt/commits/327226a1acf6/
changeset:   327226a1acf6
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-11 23:19:41
summary:     Adding fcoords, which reveals that the coordinates all seem to run from
0..0.5 in the test data, indicating an off-by-one offset in the level
information.
affected #:  2 files
Diff not available.
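
The 0..0.5 range is exactly what an off-by-one in the level produces when
cell centers are computed as (i + 0.5) / 2**level: using level + 1 doubles
the divisor and halves the whole range.  A quick demonstration:

    import numpy as np

    level = 3
    i = np.arange(2 ** level)
    right = (i + 0.5) / 2.0 ** level        # spans (0, 1), as expected
    wrong = (i + 0.5) / 2.0 ** (level + 1)  # spans (0, 0.5) -- the symptom
    print(right.max(), wrong.max())         # 0.9375 vs 0.46875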

https://bitbucket.org/yt_analysis/yt/commits/24f6992132f2/
changeset:   24f6992132f2
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-14 02:20:12
summary:     Setting up some structures for RAMSES hydro var reading.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/f8c3af7cf2e9/
changeset:   f8c3af7cf2e9
branch:      geometry_handling
user:        juxtaposicion
date:        2012-04-13 00:37:11
summary:     Nothing works yet; this is a first pass at refactoring the ART frontend.
affected #:  5 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/b900f9e1ed90/
changeset:   b900f9e1ed90
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-14 02:28:57
summary:     Merging in Chris's changes.
affected #:  5 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/a1ad577f3977/
changeset:   a1ad577f3977
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-14 05:12:52
summary:     Attempting to prototype hydro field reading in RAMSES.

Disabled ART importing.
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/2dbc099e53fb/
changeset:   2dbc099e53fb
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-16 13:34:06
summary:     Identifying octs to fill is still incorrect; however, this changeset correctly
feeds the results of the fill operation back to the user.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/0a550f7ded2d/
changeset:   0a550f7ded2d
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-16 16:10:07
summary:     Although density queries still return the incorrect answer, this changeset
gets closer to correctly associating oct indices with positions in the file.
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7038c8a78cc4/
changeset:   7038c8a78cc4
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-16 16:27:30
summary:     Reversing the iteration order of fields and cells seems to correct the disparity.
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/b4dc04e5c0fe/
changeset:   b4dc04e5c0fe
branch:      geometry_handling
user:        MatthewTurk
date:        2012-04-18 16:44:00
summary:     Attempting to deal with boundary issues.  This is still not functional.
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5a1509def2d4/
changeset:   5a1509def2d4
branch:      yt-3.0
user:        MatthewTurk
date:        2012-06-07 20:46:16
summary:     Merging changes from geometry_handling
affected #:  90 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/7ec116a4d2a1/
changeset:   7ec116a4d2a1
branch:      yt-3.0
user:        MatthewTurk
date:        2012-06-07 21:58:28
summary:     Merging from volume_refactor into 3.0.
affected #:  183 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/f632bfe0211e/
changeset:   f632bfe0211e
branch:      yt-3.0
user:        MatthewTurk
date:        2012-06-07 23:35:09
summary:     A few lingering fixes for imports, after the lib change and the new
orientation stuff.
affected #:  4 files
Diff not available.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


