[yt-svn] commit/yt: 5 new changesets

commits-noreply at bitbucket.org
Mon Mar 11 07:25:53 PDT 2013


5 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/eb57083b1568/
changeset:   eb57083b1568
branch:      yt
user:        atmyers
date:        2013-02-19 01:44:26
summary:     change to enable parallel computation in the field save function
affected #:  1 file

diff -r 9823f77dfcbbb2c286da9cc872d63d06a1791a6a -r eb57083b1568b6bb8c417d902b2b77d23a8502f3 yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py
+++ b/yt/utilities/grid_data_format/writer.py
@@ -57,7 +57,7 @@
     # don't forget to close the file.
     f.close()
 
-def save_field(pf, field_name):
+def save_field(pf, field_name, data=None):
     """
     Write a single field associated with the parameter file pf to the
     backup file.
@@ -85,12 +85,12 @@
                        particle_type_name="dark_matter")
 
     # now save the field
-    _write_field_to_gdf(pf, f, field_name, particle_type_name="dark_matter")
+    _write_field_to_gdf(pf, f, field_name, particle_type_name="dark_matter", data=data)
 
     # don't forget to close the file.
     f.close()
         
-def _write_field_to_gdf(pf, fhandle, field_name, particle_type_name):
+def _write_field_to_gdf(pf, fhandle, field_name, particle_type_name, data=None):
 
     # add field info to field_types group
     g = fhandle["field_types"]
@@ -131,7 +131,10 @@
         if field_obj.particle_type:  # particle data
             pt_group[field_name] = grid.get_data(field_name)
         else:  # a field
-            grid_group[field_name] = grid.get_data(field_name)
+            if data is not None:
+                grid_group[field_name] = data[str(grid.id)]
+            else:
+                grid_group[field_name] = grid.get_data(field_name)
 
 def _create_new_gdf(pf, gdf_path, data_author=None, data_comment=None,
                    particle_type_name="dark_matter"):

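The new "data" keyword lets the caller hand save_field a set of precomputed grid
values -- for instance, values assembled during a parallel computation -- instead
of having _write_field_to_gdf read every grid with grid.get_data. A minimal
sketch of how the keyword might be used (the dataset name and field are made up;
the only requirement taken from the diff is that the dict be keyed by the string
form of each grid id):

    from yt.mods import *
    from yt.utilities.grid_data_format.writer import save_field

    pf = load("my_dataset")  # hypothetical dataset

    # One array per grid, keyed by str(grid.id), matching the
    # data[str(grid.id)] lookup in _write_field_to_gdf.
    precomputed = dict((str(g.id), g["Density"]) for g in pf.h.grids)

    # Omitting data= falls back to grid.get_data(field_name).
    save_field(pf, "Density", data=precomputed)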

https://bitbucket.org/yt_analysis/yt/commits/052135b61bdc/
changeset:   052135b61bdc
branch:      yt
user:        atmyers
date:        2013-02-19 01:44:46
summary:     merging in new pmods from Matt
affected #:  1 file

diff -r eb57083b1568b6bb8c417d902b2b77d23a8502f3 -r 052135b61bdc98c611bb8371af4e2722f3b55f74 yt/pmods.py
--- a/yt/pmods.py
+++ b/yt/pmods.py
@@ -17,343 +17,381 @@
 #####
 
 
-# This code is derived from knee.py, which was included in the Python
-# 2.6 distribution.
-#
-# The modifications to this code are copyright (c) 2011, Lawrence
-# Livermore National Security, LLC. Produced at the Lawrence Livermore
-# National Laboratory. Written by Tim Kadich and Asher Langton
-# <langton2 at llnl.gov>. Released as LLNL-CODE-522751 under the name
-# SmartImport.py, version 1.0. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# - Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the disclaimer below.
-#
-# - Redistributions in binary form must reproduce the above copyright
-#   notice, this list of conditions and the disclaimer (as noted below)
-#   in the documentation and/or other materials provided with the
-#   distribution.
-#
-# - Neither the name of the LLNS/LLNL nor the names of its contributors
-#   may be used to endorse or promote products derived from this
-#   software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL LAWRENCE
-# LIVERMORE NATIONAL SECURITY, LLC, THE U.S. DEPARTMENT OF ENERGY OR
-# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
-# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
-# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
-# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
-# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
-# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# Additional BSD Notice
-#
-# 1. This notice is required to be provided under our contract with the
-# U.S. Department of Energy (DOE). This work was produced at Lawrence
-# Livermore National Laboratory under Contract No. DE-AC52-07NA27344
-# with the DOE.
-#
-# 2. Neither the United States Government nor Lawrence Livermore
-# National Security, LLC nor any of their employees, makes any warranty,
-# express or implied, or assumes any liability or responsibility for the
-# accuracy, completeness, or usefulness of any information, apparatus,
-# product, or process disclosed, or represents that its use would not
-# infringe privately-owned rights.
-#
-# 3. Also, reference herein to any specific commercial products,
-# process, or services by trade name, trademark, manufacturer or
-# otherwise does not necessarily constitute or imply its endorsement,
-# recommendation, or favoring by the United States Government or
-# Lawrence Livermore National Security, LLC. The views and opinions of
-# authors expressed herein do not necessarily state or reflect those of
-# the United States Government or Lawrence Livermore National Security,
-# LLC, and shall not be used for advertising or product endorsement
-# purposes.
+"""This is an initial implementation of the finder/loader discussed at:
+http://mail.scipy.org/pipermail/numpy-discussion/2012-March/061160.html
 
-"""MPI_Import defines an mpi-aware import hook. The standard use of
-this module is as follows:
+This is intended to take the place of MPI_Import.py. This version has
+only been tested minimally, and is being made available primarily for
+testing and preliminary benchmarking.
 
-   from MPI_Import import mpi_import
-   with mpi_import():
-      import foo
-      import bar
+Known issues:
+- Modules loaded via the Windows registry may be incorrectly hidden by
+  a module of the same name in sys.path.
+- If a file is added to a directory on sys.path, it won't be cached, so
+  there may be precedence issues. If a file disappears or its permissions
+  change, the import will fail.
 
-Within the with block, the standard import statement is replaced by an
-MPI-aware import statement. The rank 0 process finds the location of
-each module to import, broadcasts the location, then all of the
-processes load that module.
+Update (3/16/12): I've merged in a new version, simple_finder, described
+below.
 
-One CRITICAL detail: any code inside the mpi_import block must be
-executed exactly the same on all of the MPI ranks. For example,
-consider this:
+To use the finder, start a script off with the following:
 
-def foo():
-   import mpi
-   if mpi.rank == 0:
-      bar = someFunction()
-   bar = mpi.bcast(bar,root=0)
+import sys
+from cached_import import finder
+sys.meta_path.append(finder())
 
-def someFunction():
-   import os
-   return os.name
+There are also variants of the finder that use MPI. The rank 0 process
+builds the cache and then broadcasts it. For these, replace finder
+with either pympi_finder or mpi4py_finder.
 
-If foo() is called during the import process, then things may go very
-wrong. If the os module hasn't been loaded, then the rank 0 process
-will find os and broadcast its location. Since there's no
-corresponding bcast for rank > 0, the other processes will receive
-that broadcast instead of the broadcast for bar, resulting in
-undefined behavior. Similarly, if rank >0 process encounters an import
-that rank 0 does not encounter, that process will either hang waiting
-for the bcast, or it will receive an out-of-order bcast.
+This finder works by building a cache mapping module names to
+locations. The expensive parts of this process are the calls that
+result in a stat. For that reason, we don't, by default, check whether
+a module file is readable.
 
-The import hook provides a way to test whether we're using this
-importer, which can be used to disable rank-asymmetric behavior in a
-module import:
-
-import __builtin__
-hasattr(__builtin__.__import__,"mpi_import")
-
-This evaluates to True only when we're in an mpi_import() context
-manager.
-
-There are some situations where rank-dependent code may be necessary.
-One such example is pyMPI's synchronizeQueuedOutput function, which
-tends to cause deadlocks when it is executed inside an mpi_imported
-module. In that case, we provide a hook to execute a function after
-the mpi_import hook has been replaced by the standard import hook.
-Here is an example showing the use of this feature:
-
-# encapsulate the rank-asymmetric code in a function
-def f():
-    if mpi.rank == 0:
-        doOneThing()
-    else:
-        doSomethingElse()
-
-# Either importer is None (standard import) or it's a reference to
-# the mpi_import object that owns the current importer.
-import __builtin__
-importer = getattr(__builtin__.__import__,"mpi_import",None)
-if importer:
-    importer.callAfterImport(f)
-else:
-    # If we're using the standard import, then we'll execute the
-    # code in f immediately
-    f()
-
-WARNING: the callAfterImport feature is not intended for casual use.
-Usually it will be sufficient (and preferable) to either remove the
-rank-asymmetric code or explicitly move it outside of the 'with
-mpi_import' block. callAfterImport is provided for the (hopefully
-rare!) cases where this does not suffice.
-
-
-Some implementation details:
-
--This code is based on knee.py, which is an example of a pure Python
- hierarchical import that was included with Python 2.6 distributions.
-
--Python PEP 302 defines another way to override import by using finder
- and loader objects, which behave similarly to the imp.find_module and
- imp.load_module functions in __import_module__ below. Unfortunately,
- the implementation of PEP 302 is such that the path for the module
- has already been found by the time that the "finder" object is
- constructed, so it's not suitable for our purposes.
-
--This module uses pyMPI. It was originally designed with mpi4py, and
- switching back to mpi4py requires only minor modifications. To
- quickly substitute mpi4py for pyMPI, the 'import mpi' line below can
- be replaced with the following wrapper:
-
-from mpi4py import MPI
-class mpi(object):
-    rank = MPI.COMM_WORLD.Get_rank()
-    @staticmethod
-    def bcast(obj=None,root=0):
-        return MPI.COMM_WORLD.bcast(obj,root)
-
--An alternate version of this module had rank 0 perform all of the
- lookups, and then broadcast the locations all-at-once when that
- process reached the end of the context manager. This was somewhat
- faster than the current implementation, but was prone to deadlock
- when loading modules containing MPI synchronization points.
-
--The 'level' parameter to the import hook is not handled correctly; we
- treat it as if it were -1 (try relative and absolute imports). For
- more information about the level parameter, run 'help(__import__)'.
+Since calls like os.isfile are expensive, I've added an alternate
+version called simple_finder. Instead of figuring out where all of the
+modules in sys.path are located, we just cache the contents of
+directories on sys.path and use the standard probing algorithm for the
+imports. This is much cheaper at startup and easier to maintain. It
+appears to be a bit faster than the MPI-enabled finders, though that
+will depend on the number of modules in sys.path as well as the number
+of modules actually imported.
 """
 
-import sys, imp, __builtin__,types
-from mpi4py import MPI
-class mpi(object):
-    rank = MPI.COMM_WORLD.Get_rank()
-    @staticmethod
-    def bcast(obj=None,root=0):
-        return MPI.COMM_WORLD.bcast(obj,root)
+import sys,os,imp
 
-class mpi_import(object):
-    def __enter__(self):
-        imp.acquire_lock()
-        __import_hook__.mpi_import = self
-        self.__funcs = []
-        self.original_import = __builtin__.__import__
-        __builtin__.__import__ = __import_hook__
+class finder(object):
+    def __init__(self,skip_checks=True,build=True):
+        """Build a finder object.
 
-    def __exit__(self,type,value,traceback):
-        __builtin__.__import__ = self.original_import
-        __import_hook__.mpi_import = None
-        imp.release_lock()
-        for f in self.__funcs:
-            f()
+        Arguments:
+        - skip_checks: Don't test whether modules are readable while building
+                       the cache. This improves performance, but can cause an
+                       unreadable file that looks like a Python module to
+                       shadow a readable module with the same name later
+                       in sys.path.
+        -build: if set, build the cache now. This is used in the mpi4py_finder
+                and pympi_finder extensions
+        """
+        # Store some suffix and module description information
+        t = imp.get_suffixes()
+        self.skip_checks = skip_checks
+        self._suffixes = [x[0] for x in t] # in order of precedence
+        self._rsuffixes = self._suffixes[::-1] # and in reverse order
+        self._suffix_tuples = dict((x[0],tuple(x)) for x in t)
 
-    def callAfterImport(self,f):
-        "Add f to the list of functions to call on exit"
-        if type(f) != types.FunctionType:
-            raise TypeError("Argument must be a function!")
-        self.__funcs.append(f)
+        # We store the value of sys.path in _syspath so we can keep track
+        # of changes. _cache is a dictionary mapping module names to tuples
+        # containing the information needed to load the module (path and
+        # module description).
+        if build:
+            self._syspath = list(sys.path)
+            self._build_cache()
+        else: # For some subclasses
+            self._syspath = []
+            self._cache = {}
 
+    def _build_cache(self):
+        """Traverse sys.path, building (or re-building) the cache."""
+        import os
+        self._cache = {}
+        for d in self._syspath:
+            self._process_dir(os.path.realpath(d))
 
-# The remaining code is for internal use only. Do not explicitly call
-# call any of the following functions.
+    def find_module(self,fullname,path=None):
+        """Return self if 'fullname' is in sys.path (and isn't a builtin or
+        frozen module)."""
+        # First, make sure our cache is up-to-date. (We could combine
+        # the append/prepend cases and more generally handle the case where
+        # self._syspath is a sublist of the new sys.path, but is that worth
+        # the effort? It's only beneficial if we encounter code where sys.path
+        # is both prepended to and appended to, and there isn't an import
+        # statement in between.
+        if sys.path != self._syspath:
+            stored_length = len(self._syspath)
+            real_length = len(sys.path)
+            rebuild = False
+            # If sys.path isn't bigger, we need to rebuild the cache
+            # but not before we update self._syspath.
+            if real_length <= stored_length:
+                rebuild = True
+            # Some directories were prepended to the path, so add them.
+            elif self._syspath == sys.path[-stored_length:]:
+                for d in sys.path[real_length-stored_length-1::-1]:
+                    self._process_dir(os.path.realpath(d),prepend=True)
+            # Directories appended to the path.
+            elif self._syspath == sys.path[:len(self._syspath)]:
+                for d in sys.path[stored_length-real_length:]:
+                    self._process_dir(os.path.realpath(d))
+            # Path otherwise modified, so we need to rebuild the cache.
+            else:
+                rebuild = True
 
-# Replacement for __import__(). Taken from knee.py; unmodified except for the
-# (unused) level parameter.
-def __import_hook__(name, globals=None, locals=None, fromlist=None, level=-1):
-    # TODO: handle level parameter correctly. For now, we'll ignore
-    # it and try both absolute and relative imports.
-    parent = __determine_parent__(globals)
-    q, tail = __find_head_package__(parent, name)
-    m = __load_tail__(q, tail)
-    if not fromlist:
-        return q
-    if hasattr(m, "__path__"):
-        __ensure_fromlist__(m, fromlist)
-    return m
+            # Now update self._syspath
+            self._syspath = list(sys.path)
+            if rebuild:
+                self._build_cache()
+            
+        # Don't override builtin/frozen modules. TODO: Windows registry?
+        if (fullname not in sys.builtin_module_names and
+            not imp.is_frozen(fullname) and
+            fullname in self._cache):
+            #print "__IMPORTING ",fullname
+            return self
+        return None
 
-# __import_module__ is the only part of knee.py with non-trivial changes.
-# The MPI rank 0 process handles the lookup and broadcasts the location to
-# the others. This must be called synchronously, at least in the case that
-# 'fqname' is not already in sys.modules.
-def __import_module__(partname, fqname, parent):
-    fqname = fqname.rstrip(".")
-    try:
-        return sys.modules[fqname]
-    except KeyError:
-        pass
-    fp = None         # module's file
-    pathname = None   # module's location
-    stuff = None      # tuple of (suffix,mode,type) for the module
-    ierror = False    # are we propagating an import error from rank 0?
+    def load_module(self,fullname):
+        """Load the module fullname using cached path."""
+        if fullname in self._cache:
+            if fullname in sys.modules:
+                return sys.modules[fullname]
+            pathname,desc = self._cache[fullname]
+            #print "__LOADING ",fullname,pathname
+            if os.path.isfile(pathname):
+                # (If we're loading a PY_SOURCE file, the interpreter will
+                # automatically check for a compiled (.py[c|o]) file.)
+                with open(pathname,desc[1]) as f:
+                    mod = imp.load_module(fullname,f,pathname,desc)
+            # Not a file, so it's a package directory
+            else:
+                mod = imp.load_module(fullname,None,pathname,desc)
+            mod.__loader__ = self
+            return mod
+        raise ImportError("This shouldn't happen!")
 
-    # Start with the lookup on rank 0. The other processes will be waiting
-    # on a broadcast, so we need to send one even if we're bailing out due
-    # to an import error.
-    if mpi.rank == 0:
+
+    # Build up a dict of modules (including package directories) found in a
+    # directory. If this directory has been prepended to the path, we need to
+    # overwrite any conflicting entries in the cache. To make sure precedence
+    # is correct, we'll reverse the list of suffixes when we're prepending.
+    #
+    # Rather than add a lot of checks here to make sure we don't stomp on a
+    # builtin module, we'll just reject these in find_module
+    def _process_dir(self,dir,parent=None,prepend=False,visited=None):
+        """Process a directory dir, looking for valid modules.
+
+        Arguments:
+        dir -- (an absolute, real path to a directory)
+        parent -- parent module, in the case where dir is a package directory
+        prepend -- True if dir has just been prepended to sys.path. In that
+                   case, we'll replace existing cached entries with the same
+                   module name.
+        visited -- list of the real paths of visited directories. Used to
+                   prevent infinite recursion in the case of symlink cycles
+                   in package subdirectories.
+        """
+        import stat
+        
+        # Avoid symlink cycles in a package.
+        if not visited:
+            visited = [dir]
+        elif dir not in visited:
+            visited.append(dir)
+        else:
+            return
+
+        # All files and subdirs. Store the name and the path.
         try:
-            fp, pathname, stuff = imp.find_module(partname,
-                                                  parent and parent.__path__)
-        except ImportError:
-            ierror = True
-            return None
-        finally:
-            pathname,stuff,ierror = mpi.bcast((pathname,stuff,ierror))
-    else:
-        pathname,stuff,ierror = mpi.bcast((pathname,stuff,ierror))
-        if ierror:
-            return None
-        # If imp.find_module returned an open file to rank 0, then we should
-        # open the corresponding file for this process too.
-        if stuff and stuff[1]:
-            fp = open(pathname,stuff[1])
+            contents = dict((x,os.path.join(dir,x))
+                            for x in os.listdir(dir))
+        # Unreadable directory, so skip
+        except OSError:
+            return
 
-    try:
-        m = imp.load_module(fqname, fp, pathname, stuff)
-    finally:
-        if fp: fp.close()
-    if parent:
-        setattr(parent, partname, m)
-    return m
+        # If this is a possible package directory with no __init__.py, bail
+        # out. If __init__.py is there, we need to see if there's an existing
+        # module by that name. 
+        if parent:
+            if "__init__.py" not in contents:
+                return
+            if not (self.skip_checks or
+                    os.access(os.path.join(dir,"__init__.py"),os.R_OK)):
+                return
+            if parent in self._cache and not prepend:
+                return
+            # Okay, this is a valid, non-duplicate module.
+            self._cache[parent] = (dir,('','',imp.PKG_DIRECTORY))
+            
+        # Split contents into files & subdirs (only stat each one once)
+        files = {}
+        subdirs = {}
+        for entry in contents:
+            try:
+                mode = os.stat(contents[entry]).st_mode
+            except OSError:
+                continue # couldn't read!
+            if stat.S_ISDIR(mode) and (self.skip_checks or
+                                       os.access(contents[entry],os.R_OK)):
+                subdirs[entry] = contents[entry]
+            elif stat.S_ISREG(mode) and (self.skip_checks or
+                                         os.access(contents[entry],os.R_OK)):
+                files[entry] = contents[entry]
 
+        # Package directories have the highest precedence. But when prepend is
+        # True, we need to reverse the order here. We'll do this with these
+        # nested functions.
+        def process_subdirs():
+            for d in subdirs:
+                fqname = parent+"."+d if parent else d # fully qualified name
+                self._process_dir(os.path.join(dir,d),fqname,prepend,visited)
 
-# The remaining functions are taken unmodified (except for the names)
-# from knee.py.
-def __determine_parent__(globals):
-    if not globals or  not globals.has_key("__name__"):
+        def process_files():
+            ordered_suffixes = self._rsuffixes if prepend else self._suffixes
+            for s in ordered_suffixes:
+                l = len(s)
+                for f in files:
+                    # Check for matching suffix.
+                    if f[-l:] == s:
+                        fqname = parent+"."+f[:-l] if parent else f[:-l]
+                        if fqname not in self._cache or prepend:
+                                self._cache[fqname] = (files[f],
+                                                       self._suffix_tuples[s])
+
+        if prepend:
+            process_files()
+            process_subdirs()
+        else:
+            process_subdirs()
+            process_files()
+
+                                
+"""Finder that lets one MPI process do all of the initial caching.
+"""
+class pympi_finder(finder):        
+    def __init__(self,skip_checks=True):
+        import mpi
+        if mpi.rank == 0:
+            finder.__init__(self,skip_checks)
+        else:
+            finder.__init__(self,skip_checks,False)
+        self._syspath,self._cache = mpi.bcast((self._syspath,self._cache))
+
+"""Finder that lets one MPI process do all of the initial caching.
+"""
+class mpi4py_finder(finder):        
+    def __init__(self,skip_checks=True):
+        from mpi4py import MPI
+        comm = MPI.COMM_WORLD
+        rank = comm.Get_rank()
+        if rank == 0:
+            finder.__init__(self,skip_checks)
+        else:
+            finder.__init__(self,skip_checks,False)
+        self._syspath,self._cache = comm.bcast((self._syspath,self._cache))
+
+"""
+Alternate version of cached_import. Instead of caching locations,
+just cache directory contents. Then mimic the standard import probing
+algorithm.
+
+This has not been thoroughly tested!
+"""
+
+class simple_finder(object):    
+    def __init__(self):
+        # _contents is a representation of the files located in
+        # sys.path (including, in the case of module packages, any
+        # subdirectories that are encountered in the import process).
+        # For each string in sys.path or subdirectory visited,
+        # _contents contains a dict mapping the filenames in the
+        # directory to full paths. If the string doesn't represent a
+        # valid directory, then the dict is empty.
+        self._contents = {}
+        for d in sys.path:
+            self._process_dir(d)
+
+    # Search for a module 'name' in the cached directory listing for 'path'
+    def _search(self,name,path):
+        # If we haven't cached the directory, do so now.
+        if path not in self._contents:
+            self._process_dir(path)
+        listing = self._contents[path]
+        # First check for a package directory.
+        try:
+            if (name in listing and
+                os.path.isdir(listing[name]) and
+                os.path.isfile(os.path.join(listing[name],
+                                            "__init__.py"))):
+                return listing[name],('','',imp.PKG_DIRECTORY)
+        except OSError:
+            pass
+        # Now check probe for name.so, namemodule.so, name.py, etc.
+        for suffix in imp.get_suffixes():
+            s = name+suffix[0]
+            if (s in listing and
+                os.path.isfile(listing[s]) and
+                os.access(listing[s],os.R_OK)):
+                return listing[s],suffix
+        return None,None
+    
+    # Don't use this directly. We need more state than the load_module
+    # signature allows, so we'll return a loader object for any module
+    # that we have found.
+    class _loader(object):
+        def __init__(self,fullname,path,desc,finder):
+            self._fullname = fullname
+            self._path = path
+            self._desc = desc
+            self._finder = finder
+
+        def load_module(self,fullname):
+            """Load the module fullname using cached path."""
+            if fullname != self._fullname:
+                raise ImportError 
+            if os.path.isfile(self._path): # check desc instead?
+                with open(self._path,self._desc[1]) as f:
+                    mod = imp.load_module(fullname,f,self._path,self._desc)
+            # Not a file, so it's a package directory
+            else:
+                mod = imp.load_module(fullname,None,self._path,self._desc)
+            mod.__loader__ = self._finder
+            return mod
+
+    # "Loader" for modules that have already been imported.
+    class _null_loader(object):
+        def load_module(self,fullname):
+            return sys.modules[fullname]
+
+    def find_module(self,fullname,path=None):
+        """Return self if 'fullname' is in sys.path (and isn't a builtin or
+        frozen module)."""
+        if fullname in sys.modules:
+            return simple_finder._null_loader()
+        # Don't override builtin/frozen modules. TODO: Windows registry?
+        if (fullname not in sys.builtin_module_names and
+            not imp.is_frozen(fullname)):
+            if path:
+                iterpath = path
+                name = fullname.split('.')[-1]
+            else:
+                iterpath = sys.path
+                name = fullname
+            for dir in iterpath:
+                loadpath,desc = self._search(name,dir)
+                if loadpath:
+                    break
+            #print "__IMPORTING ",fullname
+            if loadpath:
+                return simple_finder._loader(fullname,loadpath,desc,self)
         return None
-    pname = globals['__name__']
-    if globals.has_key("__path__"):
-        parent = sys.modules[pname]
-        assert globals is parent.__dict__
-        return parent
-    if '.' in pname:
-        i = pname.rfind('.')
-        pname = pname[:i]
-        parent = sys.modules[pname]
-        assert parent.__name__ == pname
-        return parent
-    return None
 
-def __find_head_package__(parent, name):
-    if '.' in name:
-        i = name.find('.')
-        head = name[:i]
-        tail = name[i+1:]
-    else:
-        head = name
-        tail = ""
-    if parent:
-        qname = "%s.%s" % (parent.__name__, head)
-    else:
-        qname = head
-    q = __import_module__(head, qname, parent)
-    if q: return q, tail
-    if parent:
-        qname = head
-        parent = None
-        q = __import_module__(head, qname, parent)
-        if q: return q, tail
-    raise ImportError, "No module named " + qname
+    def _process_dir(self,dir):
+        """
+        Arguments:
+        dir -- path of the directory whose listing should be cached
+        """
+        # All files and subdirs. Store the name and the path.
+        try:
+            contents = dict((x,os.path.join(dir,x))
+                            for x in os.listdir(dir))
+        # Unreadable directory, so skip
+        except OSError:
+            contents = {}
 
-def __load_tail__(q, tail):
-    m = q
-    while tail:
-        i = tail.find('.')
-        if i < 0: i = len(tail)
-        head, tail = tail[:i], tail[i+1:]
-        mname = "%s.%s" % (m.__name__, head)
-        m = __import_module__(head, mname, m)
-        if not m:
-            raise ImportError, "No module named " + mname
-    return m
-
-def __ensure_fromlist__(m, fromlist, recursive=0):
-    for sub in fromlist:
-        if sub == "*":
-            if not recursive:
-                try:
-                    all = m.__all__
-                except AttributeError:
-                    pass
-                else:
-                    __ensure_fromlist__(m, all, 1)
-            continue
-        if sub != "*" and not hasattr(m, sub):
-            subname = "%s.%s" % (m.__name__, sub)
-            submod = __import_module__(sub, subname, m)
-            if not submod:
-                raise ImportError, "No module named " + subname
+        self._contents[dir] = contents
 
 # Now we import all the yt.mods items.
-with mpi_import():
-    if MPI.COMM_WORLD.rank == 0: print "Beginning parallel import block."
-    from yt.mods import *
-    if MPI.COMM_WORLD.rank == 0: print "Ending parallel import block."
+import sys
+sys.meta_path.append(mpi4py_finder())
+from yt.mods import *

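The replacement pmods.py drops the mpi_import() context manager in favor of a
PEP 302 meta-path finder: rank 0 walks sys.path once and builds a cache of module
locations, and the mpi4py_finder/pympi_finder subclasses broadcast that cache so
the other ranks never touch the filesystem during the lookup. A minimal sketch of
the broadcast step those constructors rely on (the cache entry shown is only
illustrative, a (path, description) pair of the kind _build_cache stores):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    if comm.Get_rank() == 0:
        # Rank 0 pays for the expensive directory traversal...
        cache = {"some_module": ("/path/to/some_module.py", (".py", "U", 1))}
    else:
        cache = None

    # ...and every rank ends up with the same dictionary.
    cache = comm.bcast(cache, root=0)

For a user script run under mpirun, importing through yt.pmods ("from yt.pmods
import *") should be all that is needed to pick this up, since the module now
appends an mpi4py_finder to sys.meta_path before pulling in yt.mods.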

https://bitbucket.org/yt_analysis/yt/commits/a1ff99289b02/
changeset:   a1ff99289b02
branch:      yt
user:        atmyers
date:        2013-03-07 22:38:49
summary:     adding a frontend for Pluto
affected #:  12 files

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/chombo/data_structures.py
--- a/yt/frontends/chombo/data_structures.py
+++ b/yt/frontends/chombo/data_structures.py
@@ -39,8 +39,8 @@
      ST_CTIME
 
 from .definitions import \
-     pluto2enzoDict, \
-     yt2plutoFieldsDict, \
+     chombo2enzoDict, \
+     yt2chomboFieldsDict, \
      parameterDict \
 
 from yt.funcs import *
@@ -250,7 +250,7 @@
         seconds = 1 #self["Time"]
         for unit in sec_conversion.keys():
             self.time_units[unit] = seconds / sec_conversion[unit]
-        for key in yt2plutoFieldsDict:
+        for key in yt2chomboFieldsDict:
             self.conversion_factors[key] = 1.0
 
     def _setup_nounits_units(self):
@@ -270,29 +270,22 @@
 
     def _parse_parameter_file(self):
         """
-        Check to see whether a 'pluto.ini' or 'orion2.ini' file
+        Check to see whether an 'orion2.ini' file
         exists in the plot file directory. If one does, attempt to parse it.
-        Otherwise, assume the left edge starts at 0 and get the right edge
-        from the hdf5 file.
+        Otherwise grab the dimensions from the hdf5 file.
         """
-        if os.path.isfile('pluto.ini'):
-            self._parse_pluto_file('pluto.ini')
-        else:
-            if os.path.isfile('orion2.ini'): self._parse_pluto_file('orion2.ini')
-            self.unique_identifier = \
-                int(os.stat(self.parameter_filename)[ST_CTIME])
-            self.domain_left_edge = self.__calc_left_edge()
-            self.domain_right_edge = self.__calc_right_edge()
-            self.domain_dimensions = self.__calc_domain_dimensions()
-            self.dimensionality = 3
-            self.refine_by = self._handle['/level_0'].attrs['ref_ratio']
+        
+        if os.path.isfile('orion2.ini'): self._parse_inputs_file('orion2.ini')
+        self.unique_identifier = \
+                               int(os.stat(self.parameter_filename)[ST_CTIME])
+        self.domain_left_edge = self.__calc_left_edge()
+        self.domain_right_edge = self.__calc_right_edge()
+        self.domain_dimensions = self.__calc_domain_dimensions()
+        self.dimensionality = 3
+        self.refine_by = self._handle['/level_0'].attrs['ref_ratio']
         self.periodicity = (True, True, True)
 
-    def _parse_pluto_file(self, ini_filename):
-        """
-        Reads in an inputs file in the 'pluto.ini' format. Probably not
-        especially robust at the moment.
-        """
+    def _parse_inputs_file(self, ini_filename):
         self.fullplotdir = os.path.abspath(self.parameter_filename)
         self.ini_filename = self._localize( \
             self.ini_filename, ini_filename)
@@ -305,8 +298,8 @@
                 param, sep, vals = map(rstrip,line.partition(' '))
             except ValueError:
                 mylog.error("ValueError: '%s'", line)
-            if pluto2enzoDict.has_key(param):
-                paramName = pluto2enzoDict[param]
+            if chombo2enzoDict.has_key(param):
+                paramName = chombo2enzoDict[param]
                 t = map(parameterDict[paramName], vals.split())
                 if len(t) == 1:
                     self.parameters[paramName] = t[0]
@@ -336,13 +329,14 @@
 
     @classmethod
     def _is_valid(self, *args, **kwargs):
-        try:
-            fileh = h5py.File(args[0],'r')
-            valid = "Chombo_global" in fileh["/"]
-            fileh.close()
-            return valid
-        except:
-            pass
+        if not os.path.isfile('pluto.ini'):
+            try:
+                fileh = h5py.File(args[0],'r')
+                valid = "Chombo_global" in fileh["/"]
+                fileh.close()
+                return valid
+            except:
+                pass
         return False
 
     @parallel_root_only

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/chombo/definitions.py
--- a/yt/frontends/chombo/definitions.py
+++ b/yt/frontends/chombo/definitions.py
@@ -56,10 +56,10 @@
                  "NumberOfParticleAttributes": int,
                                  }
 
-pluto2enzoDict = {"GAMMA": "Gamma",
+chombo2enzoDict = {"GAMMA": "Gamma",
                   "Ref_ratio": "RefineBy"
                                     }
 
-yt2plutoFieldsDict = {}
-pluto2ytFieldsDict = {}
+yt2chomboFieldsDict = {}
+chombo2ytFieldsDict = {}
 

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/api.py
--- /dev/null
+++ b/yt/frontends/pluto/api.py
@@ -0,0 +1,41 @@
+"""
+API for yt.frontends.pluto
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Affiliation: UCSD
+Author: J.S. Oishi <jsoishi at gmail.com>
+Affiliation: KIPAC/SLAC/Stanford
+Author: Britton Smith <brittonsmith at gmail.com>
+Affiliation: MSU
+Homepage: http://yt.Chombotools.org/
+License:
+  Copyright (C) 2010-2011 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+"""
+
+from .data_structures import \
+      PlutoGrid, \
+      PlutoHierarchy, \
+      PlutoStaticOutput
+
+from .fields import \
+      PlutoFieldInfo, \
+      add_pluto_field
+
+from .io import \
+      IOHandlerChomboHDF5

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/data_structures.py
--- /dev/null
+++ b/yt/frontends/pluto/data_structures.py
@@ -0,0 +1,307 @@
+"""
+Data structures for Pluto.
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Author: J. S. Oishi <jsoishi at gmail.com>
+Affiliation: KIPAC/SLAC/Stanford
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2008-2011 Matthew Turk, J. S. Oishi.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+import h5py
+import re
+import os
+import weakref
+import numpy as np
+
+from collections import \
+     defaultdict
+from string import \
+     strip, \
+     rstrip
+from stat import \
+     ST_CTIME
+
+from .definitions import \
+     pluto2enzoDict, \
+     yt2plutoFieldsDict, \
+     parameterDict \
+
+from yt.funcs import *
+from yt.data_objects.grid_patch import \
+     AMRGridPatch
+from yt.data_objects.hierarchy import \
+     AMRHierarchy
+from yt.data_objects.static_output import \
+     StaticOutput
+from yt.utilities.definitions import \
+     mpc_conversion, sec_conversion
+from yt.utilities.parallel_tools.parallel_analysis_interface import \
+     parallel_root_only
+from yt.utilities.io_handler import \
+    io_registry
+
+from yt.data_objects.field_info_container import \
+    FieldInfoContainer, NullFunc
+from .fields import PlutoFieldInfo, KnownPlutoFields
+
+class PlutoGrid(AMRGridPatch):
+    _id_offset = 0
+    __slots__ = ["_level_id", "stop_index"]
+    def __init__(self, id, hierarchy, level, start, stop):
+        AMRGridPatch.__init__(self, id, filename = hierarchy.hierarchy_filename,
+                              hierarchy = hierarchy)
+        self.Parent = []
+        self.Children = []
+        self.Level = level
+        self.ActiveDimensions = stop - start + 1
+
+    def get_global_startindex(self):
+        """
+        Return the integer starting index for each dimension at the current
+        level.
+
+        """
+        if self.start_index != None:
+            return self.start_index
+        if self.Parent == []:
+            iLE = self.LeftEdge - self.pf.domain_left_edge
+            start_index = iLE / self.dds
+            return np.rint(start_index).astype('int64').ravel()
+        pdx = self.Parent[0].dds
+        start_index = (self.Parent[0].get_global_startindex()) + \
+            np.rint((self.LeftEdge - self.Parent[0].LeftEdge)/pdx)
+        self.start_index = (start_index*self.pf.refine_by).astype('int64').ravel()
+        return self.start_index
+
+    def _setup_dx(self):
+        # has already been read in and stored in hierarchy
+        self.dds = self.hierarchy.dds_list[self.Level]
+        self.field_data['dx'], self.field_data['dy'], self.field_data['dz'] = self.dds
+
+class PlutoHierarchy(AMRHierarchy):
+
+    grid = PlutoGrid
+
+    def __init__(self,pf,data_style='chombo_hdf5'):
+        self.domain_left_edge = pf.domain_left_edge
+        self.domain_right_edge = pf.domain_right_edge
+        self.data_style = data_style
+        self.field_indexes = {}
+        self.parameter_file = weakref.proxy(pf)
+        # for now, the hierarchy file is the parameter file!
+        self.hierarchy_filename = os.path.abspath(
+            self.parameter_file.parameter_filename)
+        self.directory = pf.fullpath
+        self._handle = pf._handle
+
+        self.float_type = self._handle['/level_0']['data:datatype=0'].dtype.name
+        self._levels = self._handle.keys()[2:]
+        AMRHierarchy.__init__(self,pf,data_style)
+
+    def _detect_fields(self):
+        ncomp = int(self._handle['/'].attrs['num_components'])
+        self.field_list = [c[1] for c in self._handle['/'].attrs.items()[-ncomp:]]
+          
+    def _setup_classes(self):
+        dd = self._get_data_reader_dict()
+        AMRHierarchy._setup_classes(self, dd)
+        self.object_types.sort()
+
+    def _count_grids(self):
+        self.num_grids = 0
+        for lev in self._levels:
+            self.num_grids += self._handle[lev]['Processors'].len()
+
+    def _parse_hierarchy(self):
+        f = self._handle # shortcut
+
+        # this relies on the first Group in the H5 file being
+        # 'Chombo_global' and the second 'Expressions'
+        levels = f.keys()[2:]
+        grids = []
+        self.dds_list = []
+        i = 0
+        for lev in levels:
+            level_number = int(re.match('level_(\d+)',lev).groups()[0])
+            boxes = f[lev]['boxes'].value
+            dx = f[lev].attrs['dx']
+            self.dds_list.append(dx * np.ones(3))
+            for level_id, box in enumerate(boxes):
+                si = np.array([box['lo_%s' % ax] for ax in 'ijk'])
+                ei = np.array([box['hi_%s' % ax] for ax in 'ijk'])
+                pg = self.grid(len(grids),self,level=level_number,
+                               start = si, stop = ei)
+                grids.append(pg)
+                grids[-1]._level_id = level_id
+                self.grid_left_edge[i] = dx*si.astype(self.float_type) + self.domain_left_edge
+                self.grid_right_edge[i] = dx*(ei.astype(self.float_type)+1) + self.domain_left_edge
+                self.grid_particle_count[i] = 0
+                self.grid_dimensions[i] = ei - si + 1
+                i += 1
+        self.grids = np.empty(len(grids), dtype='object')
+        for gi, g in enumerate(grids): self.grids[gi] = g
+#        self.grids = np.array(self.grids, dtype='object')
+
+    def _populate_grid_objects(self):
+        for g in self.grids:
+            g._prepare_grid()
+            g._setup_dx()
+
+        for g in self.grids:
+            g.Children = self._get_grid_children(g)
+            for g1 in g.Children:
+                g1.Parent.append(g)
+        self.max_level = self.grid_levels.max()
+
+    def _setup_derived_fields(self):
+        self.derived_field_list = []
+
+    def _get_grid_children(self, grid):
+        mask = np.zeros(self.num_grids, dtype='bool')
+        grids, grid_ind = self.get_box_grids(grid.LeftEdge, grid.RightEdge)
+        mask[grid_ind] = True
+        return [g for g in self.grids[mask] if g.Level == grid.Level + 1]
+
+    def _setup_data_io(self):
+        self.io = io_registry[self.data_style](self.parameter_file)
+
+class PlutoStaticOutput(StaticOutput):
+    _hierarchy_class = PlutoHierarchy
+    _fieldinfo_fallback = PlutoFieldInfo
+    _fieldinfo_known = KnownPlutoFields
+
+    def __init__(self, filename, data_style='chombo_hdf5',
+                 storage_filename = None, ini_filename = None):
+        self._handle = h5py.File(filename,'r')
+        self.current_time = self._handle.attrs['time']
+        self.ini_filename = ini_filename
+        self.fullplotdir = os.path.abspath(filename)
+        StaticOutput.__init__(self,filename,data_style)
+        self.storage_filename = storage_filename
+        self.cosmological_simulation = False
+
+        # These are parameters that I very much wish to get rid of.
+        self.parameters["HydroMethod"] = 'chombo' # always PPM DE
+        self.parameters["DualEnergyFormalism"] = 0 
+        self.parameters["EOSType"] = -1 # default
+
+    def __del__(self):
+        self._handle.close()
+
+    def _set_units(self):
+        """
+        Generates the conversion to various physical _units based on the parameter file
+        """
+        self.units = {}
+        self.time_units = {}
+        if len(self.parameters) == 0:
+            self._parse_parameter_file()
+        self._setup_nounits_units()
+        self.conversion_factors = defaultdict(lambda: 1.0)
+        self.time_units['1'] = 1
+        self.units['1'] = 1.0
+        self.units['unitary'] = 1.0 / (self.domain_right_edge - self.domain_left_edge).max()
+        seconds = 1 #self["Time"]
+        for unit in sec_conversion.keys():
+            self.time_units[unit] = seconds / sec_conversion[unit]
+        for key in yt2plutoFieldsDict:
+            self.conversion_factors[key] = 1.0
+
+    def _setup_nounits_units(self):
+        z = 0
+        mylog.warning("Setting 1.0 in code units to be 1.0 cm")
+        if not self.has_key("TimeUnits"):
+            mylog.warning("No time units.  Setting 1.0 = 1 second.")
+            self.conversion_factors["Time"] = 1.0
+        for unit in mpc_conversion.keys():
+            self.units[unit] = mpc_conversion[unit] / mpc_conversion["cm"]
+
+
+    def _localize(self, f, default):
+        if f is None:
+            return os.path.join(self.directory, default)
+        return f
+
+    def _parse_parameter_file(self):
+        """
+        Reads in an inputs file in the 'pluto.ini' format. Probably not
+        especially robust at the moment.
+        """
+
+        ini_filename = 'pluto.ini'
+        self.fullplotdir = os.path.abspath(self.parameter_filename)
+        self.ini_filename = self._localize( \
+            self.ini_filename, ini_filename)
+        self.unique_identifier = \
+                               int(os.stat(self.parameter_filename)[ST_CTIME])
+        lines = open(self.ini_filename).readlines()
+        # read the file line by line, storing important parameters
+        for lineI, line in enumerate(lines):
+            try:
+                param, sep, vals = map(rstrip,line.partition(' '))
+            except ValueError:
+                mylog.error("ValueError: '%s'", line)
+            if pluto2enzoDict.has_key(param):
+                paramName = pluto2enzoDict[param]
+                t = map(parameterDict[paramName], vals.split())
+                if len(t) == 1:
+                    self.parameters[paramName] = t[0]
+                else:
+                    if paramName == "RefineBy":
+                        self.parameters[paramName] = t[0]
+                    else:
+                        self.parameters[paramName] = t
+
+            # assumes 3D for now
+            elif param.startswith("X1-grid"):
+                t = vals.split()
+                low1 = float(t[1])
+                high1 = float(t[4])
+                N1 = int(t[2])
+            elif param.startswith("X2-grid"):
+                t = vals.split()
+                low2 = float(t[1])
+                high2 = float(t[4])
+                N2 = int(t[2])
+            elif param.startswith("X3-grid"):
+                t = vals.split()
+                low3 = float(t[1])
+                high3 = float(t[4])
+                N3 = int(t[2])
+            
+        self.dimensionality = 3
+        self.domain_left_edge = np.array([low1,low2,low3])
+        self.domain_right_edge = np.array([high1,high2,high3])
+        self.domain_dimensions = np.array([N1,N2,N3])
+        self.refine_by = self.parameters["RefineBy"]
+            
+    @classmethod
+    def _is_valid(self, *args, **kwargs):
+        return os.path.isfile('pluto.ini')
+
+    @parallel_root_only
+    def print_key_parameters(self):
+        for a in ["current_time", "domain_dimensions", "domain_left_edge",
+                  "domain_right_edge"]:
+            if not hasattr(self, a):
+                mylog.error("Missing %s in parameter file definition!", a)
+                continue
+            v = getattr(self, a)
+            mylog.info("Parameters: %-25s = %s", a, v)

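The Pluto hierarchy still reads grid boxes from the Chombo-format HDF5 file, but
the domain geometry now comes from the Xn-grid lines of pluto.ini. As a rough
worked example of the indexing in _parse_parameter_file (the values are made up,
assuming Pluto's usual "patches, left edge, points, spacing, right edge" layout
for a uniform grid):

    >>> line = "X1-grid    1    0.0    128    u    1.0"
    >>> param, sep, vals = line.partition(' ')
    >>> t = vals.split()
    >>> float(t[1]), int(t[2]), float(t[4])
    (0.0, 128, 1.0)

so low1 = 0.0, N1 = 128 and high1 = 1.0 for that axis.
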
diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/definitions.py
--- /dev/null
+++ b/yt/frontends/pluto/definitions.py
@@ -0,0 +1,65 @@
+"""
+Various definitions for various other modules and routines
+
+Author: J. S. Oishi <jsoishi at gmail.com>
+Affiliation: KIPAC/SLAC/Stanford
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2008-2011 J.S. Oishi.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+"""
+
+parameterDict = {"CosmologyCurrentRedshift": float,
+                 "CosmologyComovingBoxSize": float,
+                 "CosmologyOmegaMatterNow": float,
+                 "CosmologyOmegaLambdaNow": float,
+                 "CosmologyHubbleConstantNow": float,
+                 "CosmologyInitialRedshift": float,
+                 "DualEnergyFormalismEta1": float,
+                 "DualEnergyFormalismEta2": float,
+                 "MetaDataString": str,
+                 "HydroMethod": int,
+                 "DualEnergyFormalism": int,
+                 "InitialTime": float,
+                 "ComovingCoordinates": int,
+                 "DensityUnits": float,
+                 "LengthUnits": float,
+                 "LengthUnit": float,
+                 "TemperatureUnits": float,
+                 "TimeUnits": float,
+                 "GravitationalConstant": float,
+                 "Gamma": float,
+                 "MultiSpecies": int,
+                 "CompilerPrecision": str,
+                 "CurrentTimeIdentifier": int,
+                 "RefineBy": int,
+                 "BoundaryConditionName": str,
+                 "TopGridRank": int,
+                 "TopGridDimensions": int,
+                 "EOSSoundSpeed": float,
+                 "EOSType": int,
+                 "NumberOfParticleAttributes": int,
+                                 }
+
+pluto2enzoDict = {"GAMMA": "Gamma",
+                  "Ref_ratio": "RefineBy"
+                                    }
+
+yt2plutoFieldsDict = {}
+pluto2ytFieldsDict = {}
+

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/fields.py
--- /dev/null
+++ b/yt/frontends/pluto/fields.py
@@ -0,0 +1,97 @@
+"""
+Pluto-specific fields
+
+Author: J. S. Oishi <jsoishi at gmail.com>
+Affiliation: KIPAC/SLAC/Stanford
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2009-2011 J. S. Oishi, Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+
+from yt.data_objects.field_info_container import \
+    FieldInfoContainer, \
+    FieldInfo, \
+    NullFunc, \
+    ValidateParameter, \
+    ValidateDataField, \
+    ValidateProperty, \
+    ValidateSpatial, \
+    ValidateGridType
+import yt.data_objects.universal_fields
+import numpy as np
+
+KnownPlutoFields = FieldInfoContainer()
+add_pluto_field = KnownPlutoFields.add_field
+
+PlutoFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
+add_field = PlutoFieldInfo.add_field
+
+add_pluto_field("rho", function=NullFunc, take_log=True,
+                 validators = [ValidateDataField("density")],
+                 units=r"\rm{g}/\rm{cm}^3")
+
+KnownPlutoFields["rho"]._projected_units =r"\rm{g}/\rm{cm}^2"
+
+add_pluto_field("vx1", function=NullFunc, take_log=False,
+                 validators = [ValidateDataField("X-Momentum")],
+                 units=r"",display_name=r"M_x")
+KnownPlutoFields["vx1"]._projected_units=r""
+
+add_pluto_field("vx2", function=NullFunc, take_log=False,
+                 validators = [ValidateDataField("Y-Momentum")],
+                 units=r"",display_name=r"M_y")
+KnownPlutoFields["vx2"]._projected_units=r""
+
+add_pluto_field("vx3", function=NullFunc, take_log=False,
+                 validators = [ValidateDataField("Z-Momentum")],
+                 units=r"",display_name=r"M_z")
+KnownPlutoFields["vx3"]._projected_units=r""
+
+add_pluto_field("prs", function=NullFunc, take_log=True,
+                 validators = [ValidateDataField("energy-density")],
+                 units=r"\rm{erg}/\rm{cm}^3")
+
+def _Density(field,data):
+    """A duplicate of the density field. This is needed because when you try 
+    to instantiate a PlotCollection without passing in a center, the code
+    will try to generate one for you using the "Density" field, which gives an error 
+    if it isn't defined.
+
+    """
+    return data["rho"]
+add_field("Density",function=_Density, take_log=True,
+          units=r'\rm{g}/\rm{cm^3}')
+
+def _Xmomentum(field, data):
+    """ Generate x-momentum. """
+    return data["vx1"]*data["density"]
+add_field("X-momentum",function=_Xmomentum, take_log=False,
+          units=r'\rm{g}/\rm{cm^2 s}')
+
+def _Ymomentum(field, data):
+    """ Generate y-momentum  """
+    return data["vx2"]*data["density"]
+add_field("Y-momentum",function=_Ymomentum, take_log=False,
+          units=r'\rm{g}/\rm{cm^2 s}')
+
+def _Zmomentum(field,data):
+    """ Generate z-momentum"""
+    return data["vx3"]*data["density"]
+add_field("Z-Momentum",function=_Zmomentum, take_log=False,
+          units=r'\rm{g}/\rm{cm^2 s}')
+

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/io.py
--- /dev/null
+++ b/yt/frontends/pluto/io.py
@@ -0,0 +1,73 @@
+"""
+The data-file handling functions
+
+Author: Matthew Turk <matthewturk at gmail.com>
+Author: J. S. Oishi <jsoishi at gmail.com>
+Affiliation: KIPAC/SLAC/Stanford
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2007-2011 Matthew Turk.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+import h5py
+import os
+import re
+import numpy as np
+
+from yt.utilities.io_handler import \
+           BaseIOHandler
+
+class IOHandlerChomboHDF5(BaseIOHandler):
+    _data_style = "chombo_hdf5"
+    _offset_string = 'data:offsets=0'
+    _data_string = 'data:datatype=0'
+
+    def __init__(self, pf, *args, **kwargs):
+        BaseIOHandler.__init__(self, *args, **kwargs)
+        self.pf = pf
+        self._handle = pf._handle
+
+    _field_dict = None
+    @property
+    def field_dict(self):
+        if self._field_dict is not None:
+            return self._field_dict
+        ncomp = int(self._handle['/'].attrs['num_components'])
+        temp =  self._handle['/'].attrs.items()[-ncomp:]
+        val, keys = zip(*temp)
+        val = [int(re.match('component_(\d+)',v).groups()[0]) for v in val]
+        self._field_dict = dict(zip(keys,val))
+        return self._field_dict
+        
+    def _read_field_names(self,grid):
+        ncomp = int(self._handle['/'].attrs['num_components'])
+
+        fns = [c[1] for c in self._handle['/'].attrs.items()[-ncomp-1:-1]]
+    
+    def _read_data(self,grid,field):
+
+        lstring = 'level_%i' % grid.Level
+        lev = self._handle[lstring]
+        dims = grid.ActiveDimensions
+        boxsize = dims.prod()
+        
+        grid_offset = lev[self._offset_string][grid._level_id]
+        start = grid_offset+self.field_dict[field]*boxsize
+        stop = start + boxsize
+        data = lev[self._data_string][start:stop]
+        
+        return data.reshape(dims, order='F')
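
The read path above assumes the Chombo-style layout: each level stores a single
flat 1-D dataset in which every grid packs all of its components contiguously,
so a field is recovered by starting at the grid's offset and skipping whole
boxes for the preceding components.  A rough standalone sketch of that indexing
with placeholder sizes (not part of the changeset):

    import numpy as np

    dims = np.array([8, 8, 8])      # grid.ActiveDimensions (placeholder)
    boxsize = dims.prod()           # cells per component in this grid
    grid_offset = 0                 # taken from the 'data:offsets=0' dataset
    component = 2                   # the field's index in field_dict
    start = grid_offset + component * boxsize
    stop = start + boxsize
    # data = lev['data:datatype=0'][start:stop].reshape(dims, order='F')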

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/pluto/setup.py
--- /dev/null
+++ b/yt/frontends/pluto/setup.py
@@ -0,0 +1,13 @@
+#!/usr/bin/env python
+import setuptools
+import os
+import sys
+import os.path
+
+
+def configuration(parent_package='', top_path=None):
+    from numpy.distutils.misc_util import Configuration
+    config = Configuration('pluto', parent_package, top_path)
+    config.make_config_py()  # installs __config__.py
+    #config.make_svn_version_py()
+    return config

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/frontends/setup.py
--- a/yt/frontends/setup.py
+++ b/yt/frontends/setup.py
@@ -20,4 +20,5 @@
     config.add_subpackage("maestro")
     config.add_subpackage("castro")
     config.add_subpackage("stream")
+    config.add_subpackage("pluto")
     return config

diff -r 052135b61bdc98c611bb8371af4e2722f3b55f74 -r a1ff99289b028229a1570481842295a9fb28b31b yt/mods.py
--- a/yt/mods.py
+++ b/yt/mods.py
@@ -102,6 +102,9 @@
 from yt.frontends.art.api import \
     ARTStaticOutput, ARTFieldInfo, add_art_field
 
+from yt.frontends.pluto.api import \
+     PlutoStaticOutput, PlutoFieldInfo, add_pluto_field
+
 #from yt.frontends.maestro.api import \
 #    MaestroStaticOutput, MaestroFieldInfo, add_maestro_field
 


https://bitbucket.org/yt_analysis/yt/commits/b59c9b89ad06/
changeset:   b59c9b89ad06
branch:      yt
user:        atmyers
date:        2013-03-07 22:55:00
summary:     merging
affected #:  63 files

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -837,16 +837,11 @@
 cd $YT_DIR
 ( ${HG_EXEC} pull 2>1 && ${HG_EXEC} up -C 2>1 ${BRANCH} 2>&1 ) 1>> ${LOG_FILE}
 
-echo "Building Fortran kD-tree module."
-cd yt/utilities/kdtree
-( make 2>&1 ) 1>> ${LOG_FILE}
-cd ../../..
-
 echo "Installing yt"
 echo $HDF5_DIR > hdf5.cfg
 [ $INST_PNG -eq 1 ] && echo $PNG_DIR > png.cfg
 [ $INST_FTYPE -eq 1 ] && echo $FTYPE_DIR > freetype.cfg
-( ${DEST_DIR}/bin/python2.7 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
+( export PATH=$DEST_DIR/bin:$PATH ; ${DEST_DIR}/bin/python2.7 setup.py develop 2>&1 ) 1>> ${LOG_FILE} || do_exit
 touch done
 cd $MY_PWD
 

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,14 +4,61 @@
 import sys
 import time
 import subprocess
+import shutil
+import glob
 import distribute_setup
 distribute_setup.use_setuptools()
 
 from distutils.command.build_py import build_py
 from numpy.distutils.misc_util import appendpath
+from numpy.distutils.command import install_data as np_install_data
 from numpy.distutils import log
 from distutils import version
 
+from distutils.core import Command
+from distutils.spawn import find_executable
+
+
+class BuildForthon(Command):
+
+    """Command for building Forthon modules"""
+
+    description = "Build Forthon modules"
+    user_options = []
+
+    def initialize_options(self):
+
+        """init options"""
+
+        pass
+
+    def finalize_options(self):
+
+        """finalize options"""
+
+        pass
+
+    def run(self):
+
+        """runner"""
+        Forthon_exe = find_executable("Forthon")
+        gfortran_exe = find_executable("gfortran")
+
+        if None in (Forthon_exe, gfortran_exe):
+            sys.stderr.write(
+                "fKDpy.so won't be built due to missing Forthon/gfortran\n"
+            )
+            return
+
+        cwd = os.getcwd()
+        os.chdir(os.path.join(cwd, 'yt/utilities/kdtree'))
+        cmd = [Forthon_exe, "-F", "gfortran", "--compile_first",
+               "fKD_source", "--no2underscores", "--fopt", "'-O3'", "fKD",
+               "fKD_source.f90"]
+        subprocess.check_call(cmd, shell=False)
+        shutil.move(glob.glob('build/lib*/fKDpy.so')[0], os.getcwd())
+        os.chdir(cwd)
+
 REASON_FILES = []
 REASON_DIRS = [
     "",
@@ -36,7 +83,7 @@
     files = []
     for ext in ["js", "html", "css", "png", "ico", "gif"]:
         files += glob.glob("%s/*.%s" % (dir_name, ext))
-    REASON_FILES.append( (dir_name, files) )
+    REASON_FILES.append((dir_name, files))
 
 # Verify that we have Cython installed
 try:
@@ -93,10 +140,10 @@
             language=extension.language, cplus=cplus,
             output_file=target_file)
         cython_result = Cython.Compiler.Main.compile(source,
-                                                   options=options)
+                                                     options=options)
         if cython_result.num_errors != 0:
-            raise DistutilsError("%d errors while compiling %r with Cython" \
-                  % (cython_result.num_errors, source))
+            raise DistutilsError("%d errors while compiling %r with Cython"
+                                 % (cython_result.num_errors, source))
     return target_file
 
 
@@ -109,7 +156,9 @@
 
 VERSION = "2.5dev"
 
-if os.path.exists('MANIFEST'): os.remove('MANIFEST')
+if os.path.exists('MANIFEST'):
+    os.remove('MANIFEST')
+
 
 def get_mercurial_changeset_id(target_dir):
     """adapted from a script by Jason F. Harris, published at
@@ -123,11 +172,11 @@
                                      stdout=subprocess.PIPE,
                                      stderr=subprocess.PIPE,
                                      shell=True)
-        
+
     if (get_changeset.stderr.read() != ""):
         print "Error in obtaining current changeset of the Mercurial repository"
         changeset = None
-        
+
     changeset = get_changeset.stdout.read().strip()
     if (not re.search("^[0-9a-f]{12}", changeset)):
         print "Current changeset of the Mercurial repository is malformed"
@@ -135,12 +184,26 @@
 
     return changeset
 
+
+class my_build_src(build_src.build_src):
+    def run(self):
+        self.run_command("build_forthon")
+        build_src.build_src.run(self)
+
+
+class my_install_data(np_install_data.install_data):
+    def run(self):
+        self.distribution.data_files.append(
+            ('yt/utilities/kdtree', ['yt/utilities/kdtree/fKDpy.so'])
+        )
+        np_install_data.install_data.run(self)
+
 class my_build_py(build_py):
     def run(self):
         # honor the --dry-run flag
         if not self.dry_run:
-            target_dir = os.path.join(self.build_lib,'yt')
-            src_dir =  os.getcwd() 
+            target_dir = os.path.join(self.build_lib, 'yt')
+            src_dir = os.getcwd()
             changeset = get_mercurial_changeset_id(src_dir)
             self.mkpath(target_dir)
             with open(os.path.join(target_dir, '__hg_version__.py'), 'w') as fobj:
@@ -148,6 +211,7 @@
 
             build_py.run(self)
 
+
 def configuration(parent_package='', top_path=None):
     from numpy.distutils.misc_util import Configuration
 
@@ -158,7 +222,7 @@
                        quiet=True)
 
     config.make_config_py()
-    #config.make_svn_version_py()
+    # config.make_svn_version_py()
     config.add_subpackage('yt', 'yt')
     config.add_scripts("scripts/*")
 
@@ -176,25 +240,25 @@
                     + "simulations, focusing on Adaptive Mesh Refinement data "
                       "from Enzo, Orion, FLASH, and others.",
         classifiers=["Development Status :: 5 - Production/Stable",
-            "Environment :: Console",
-            "Intended Audience :: Science/Research",
-            "License :: OSI Approved :: GNU General Public License (GPL)",
-            "Operating System :: MacOS :: MacOS X",
-            "Operating System :: POSIX :: AIX",
-            "Operating System :: POSIX :: Linux",
-            "Programming Language :: C",
-            "Programming Language :: Python",
-            "Topic :: Scientific/Engineering :: Astronomy",
-            "Topic :: Scientific/Engineering :: Physics",
-            "Topic :: Scientific/Engineering :: Visualization"],
-        keywords='astronomy astrophysics visualization ' + \
-            'amr adaptivemeshrefinement',
+                     "Environment :: Console",
+                     "Intended Audience :: Science/Research",
+                     "License :: OSI Approved :: GNU General Public License (GPL)",
+                     "Operating System :: MacOS :: MacOS X",
+                     "Operating System :: POSIX :: AIX",
+                     "Operating System :: POSIX :: Linux",
+                     "Programming Language :: C",
+                     "Programming Language :: Python",
+                     "Topic :: Scientific/Engineering :: Astronomy",
+                     "Topic :: Scientific/Engineering :: Physics",
+                     "Topic :: Scientific/Engineering :: Visualization"],
+        keywords='astronomy astrophysics visualization ' +
+        'amr adaptivemeshrefinement',
         entry_points={'console_scripts': [
-                            'yt = yt.utilities.command_line:run_main',
-                      ],
-                      'nose.plugins.0.10': [
-                            'answer-testing = yt.utilities.answer_testing.framework:AnswerTesting'
-                      ]
+        'yt = yt.utilities.command_line:run_main',
+        ],
+            'nose.plugins.0.10': [
+                'answer-testing = yt.utilities.answer_testing.framework:AnswerTesting'
+            ]
         },
         author="Matthew J. Turk",
         author_email="matthewturk at gmail.com",
@@ -203,8 +267,9 @@
         configuration=configuration,
         zip_safe=False,
         data_files=REASON_FILES,
-        cmdclass = {'build_py': my_build_py},
-        )
+        cmdclass={'build_py': my_build_py, 'build_forthon': BuildForthon,
+                  'build_src': my_build_src, 'install_data': my_install_data},
+    )
     return
 
 if __name__ == '__main__':
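
Together with the install_script.sh change above, the Fortran kD-tree module is
no longer built by a hand-run make step: BuildForthon shells out to Forthon and
gfortran and drops fKDpy.so next to the kdtree sources, my_build_src chains it
into the normal build, and my_install_data ships the resulting library.
Assuming Forthon and gfortran are on the PATH, it could be exercised as:

    python setup.py build_forthon   # run only the Forthon step
    python setup.py develop         # full build; build_src triggers build_forthon

If either executable is missing, BuildForthon.run() warns on stderr and skips
fKDpy.so instead of aborting the install.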

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -244,8 +244,9 @@
             If True, use dynamic load balancing to create the projections.
             Default: False.
 
-        Getting the Nearest Galaxies
-        ----------------------------
+        Notes
+        -----
+
         The light ray tool will use the HaloProfiler to calculate the
         distance and mass of the nearest halo to that pixel.  In order
         to do this, a dictionary called halo_profiler_parameters is used

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/halo_finding/rockstar/rockstar.py
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar.py
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar.py
@@ -164,6 +164,13 @@
         If set to ``True``, it will be assumed that there are only dark
         matter particles present in the simulation. This can save analysis
         time if this is indeed the case. Default: ``False``.
+    hires_dm_mass : float
+        If supplied, use only the highest resolution dark matter
+        particles, with a mass less than (1.1*hires_dm_mass), in units
+        of ParticleMassMsun. This is useful for multi-dm-mass
+        simulations. Note that this will only give sensible results for
+        halos that are not "polluted" by lower resolution
+        particles. Default: ``None``.
         
     Returns
     -------
@@ -187,7 +194,8 @@
     """
     def __init__(self, ts, num_readers = 1, num_writers = None,
             outbase="rockstar_halos", dm_type=1, 
-            force_res=None, total_particles=None, dm_only=False):
+            force_res=None, total_particles=None, dm_only=False,
+            hires_dm_mass=None):
         mylog.warning("The citation for the Rockstar halo finder can be found at")
         mylog.warning("http://adsabs.harvard.edu/abs/2013ApJ...762..109B")
         ParallelAnalysisInterface.__init__(self)
@@ -217,6 +225,7 @@
             self.force_res = force_res
         self.total_particles = total_particles
         self.dm_only = dm_only
+        self.hires_dm_mass = hires_dm_mass
         # Setup pool and workgroups.
         self.pool, self.workgroup = self.runner.setup_pool()
         p = self._setup_parameters(ts)
@@ -227,28 +236,51 @@
     def _setup_parameters(self, ts):
         if self.workgroup.name != "readers": return None
         tpf = ts[0]
+
         def _particle_count(field, data):
-            if self.dm_only:
-                return np.prod(data["particle_position_x"].shape)
             try:
-                return (data["particle_type"]==self.dm_type).sum()
+                data["particle_type"]
+                has_particle_type=True
             except KeyError:
-                return np.prod(data["particle_position_x"].shape)
+                has_particle_type=False
+                
+            if (self.dm_only or (not has_particle_type)):
+                if self.hires_dm_mass is None:
+                    return np.prod(data["particle_position_x"].shape)
+                else:
+                    return (data['ParticleMassMsun'] < self.hires_dm_mass*1.1).sum()
+            elif has_particle_type:
+                if self.hires_dm_mass is None:
+                    return (data["particle_type"]==self.dm_type).sum()
+                else:
+                    return ( (data["particle_type"]==self.dm_type) & 
+                             (data['ParticleMassMsun'] < self.hires_dm_mass*1.1) ).sum()
+            else:                
+                raise RuntimeError() # should never get here
+
         add_field("particle_count", function=_particle_count,
                   not_in_all=True, particle_type=True)
         dd = tpf.h.all_data()
         # Get DM particle mass.
         all_fields = set(tpf.h.derived_field_list + tpf.h.field_list)
-        for g in tpf.h._get_objs("grids"):
-            if g.NumberOfParticles == 0: continue
-            if self.dm_only:
-                iddm = Ellipsis
-            elif "particle_type" in all_fields:
-                iddm = g["particle_type"] == self.dm_type
-            else:
-                iddm = Ellipsis
-            particle_mass = g['ParticleMassMsun'][iddm][0] / tpf.hubble_constant
-            break
+        has_particle_type = ("particle_type" in all_fields)
+
+        if self.hires_dm_mass is None:
+            for g in tpf.h._get_objs("grids"):
+                if g.NumberOfParticles == 0: continue
+
+                if (self.dm_only or (not has_particle_type)):
+                    iddm = Ellipsis
+                elif has_particle_type:
+                    iddm = g["particle_type"] == self.dm_type
+                else:                    
+                    iddm = Ellipsis # should never get here
+
+                particle_mass = g['ParticleMassMsun'][iddm][0] / tpf.hubble_constant
+                break
+        else:
+            particle_mass = self.hires_dm_mass / tpf.hubble_constant
+
         p = {}
         if self.total_particles is None:
             # Get total_particles in parallel.
@@ -302,6 +334,7 @@
                     force_res = self.force_res,
                     particle_mass = float(self.particle_mass),
                     dm_only = int(self.dm_only),
+                    hires_only = (self.hires_dm_mass is not None),
                     **kwargs)
         # Make the directory to store the halo lists in.
         if self.comm.rank == 0:
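
A hedged usage sketch of the new hires_dm_mass keyword; the RockstarHaloFinder
class name, the TimeSeriesData loader, and the dataset paths below are
assumptions and placeholders, not part of this changeset:

    from yt.mods import *
    from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder

    ts = TimeSeriesData.from_filenames("DD????/DD????")  # placeholder outputs
    rh = RockstarHaloFinder(ts, num_readers=1, num_writers=1,
                            outbase="rockstar_halos",
                            hires_dm_mass=1.0e7)  # keep particles < 1.1 * 1e7 Msun
    rh.run()

As the docstring notes, the results are only sensible for halos that are not
polluted by lower-resolution particles.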

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_interface.pyx
@@ -163,6 +163,7 @@
     SCALE_NOW = 1.0/(pf.current_redshift+1.0)
     # Now we want to grab data from only a subset of the grids for each reader.
     all_fields = set(pf.h.derived_field_list + pf.h.field_list)
+    has_particle_type = ("particle_type" in all_fields)
 
     # First we need to find out how many this reader is going to read in
     # if the number of readers > 1.
@@ -170,12 +171,19 @@
         local_parts = 0
         for g in pf.h._get_objs("grids"):
             if g.NumberOfParticles == 0: continue
-            if rh.dm_only:
-                iddm = Ellipsis
-            elif "particle_type" in all_fields:
-                iddm = g["particle_type"] == rh.dm_type
+            if (rh.dm_only or (not has_particle_type)):
+                if rh.hires_only:
+                    iddm = (g['ParticleMassMsun'] < PARTICLE_MASS*1.1)
+                else:
+                    iddm = Ellipsis
+            elif has_particle_type:
+                if rh.hires_only:
+                    iddm = ( (g["particle_type"]==rh.dm_type) &
+                             (g['ParticleMassMsun'] < PARTICLE_MASS*1.1) )                    
+                else:
+                    iddm = g["particle_type"] == rh.dm_type
             else:
-                iddm = Ellipsis
+                iddm = Ellipsis # should never get here
             arri = g["particle_index"].astype("int64")
             arri = arri[iddm] #pick only DM
             local_parts += arri.size
@@ -195,12 +203,19 @@
     pi = 0
     for g in pf.h._get_objs("grids"):
         if g.NumberOfParticles == 0: continue
-        if rh.dm_only:
-            iddm = Ellipsis
-        elif "particle_type" in all_fields:
-            iddm = g["particle_type"] == rh.dm_type
-        else:
-            iddm = Ellipsis
+        if (rh.dm_only or (not has_particle_type)):
+            if rh.hires_only:
+                iddm = (g['ParticleMassMsun'] < PARTICLE_MASS*1.1)
+            else:
+                iddm = Ellipsis
+        elif has_particle_type:
+            if rh.hires_only:
+                iddm = ( (g["particle_type"]==rh.dm_type) &
+                         (g['ParticleMassMsun'] < PARTICLE_MASS*1.1) )                    
+            else:
+                iddm = g["particle_type"] == rh.dm_type
+        else:            
+            iddm = Ellipsis # should never get here
         arri = g["particle_index"].astype("int64")
         arri = arri[iddm] #pick only DM
         npart = arri.size
@@ -230,6 +245,7 @@
     cdef public int dm_type
     cdef public int total_particles
     cdef public int dm_only
+    cdef public int hires_only
 
     def __cinit__(self, ts):
         self.ts = ts
@@ -244,7 +260,7 @@
                        int writing_port = -1, int block_ratio = 1,
                        int periodic = 1, force_res=None,
                        int min_halo_size = 25, outbase = "None",
-                       int dm_only = 0):
+                       int dm_only = 0, int hires_only = False):
         global PARALLEL_IO, PARALLEL_IO_SERVER_ADDRESS, PARALLEL_IO_SERVER_PORT
         global FILENAME, FILE_FORMAT, NUM_SNAPS, STARTING_SNAP, h0, Ol, Om
         global BOX_SIZE, PERIODIC, PARTICLE_MASS, NUM_BLOCKS, NUM_READERS
@@ -276,6 +292,7 @@
         TOTAL_PARTICLES = total_particles
         self.block_ratio = block_ratio
         self.dm_only = dm_only
+        self.hires_only = hires_only
         
         tpf = self.ts[0]
         h0 = tpf.hubble_constant
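
Both loops above build the same selection: with hires_only set, a particle is
kept only if its ParticleMassMsun lies below 1.1 * PARTICLE_MASS and, when a
particle_type field exists, only if it also matches dm_type.  A condensed NumPy
illustration with placeholder arrays (not part of the changeset):

    import numpy as np

    particle_mass = np.array([1.0e7, 1.0e7, 8.0e8, 1.0e7])  # ParticleMassMsun
    particle_type = np.array([1, 1, 1, 2])                   # dm_type == 1
    PARTICLE_MASS = 1.0e7
    iddm = (particle_type == 1) & (particle_mass < PARTICLE_MASS * 1.1)
    # iddm -> [ True,  True, False, False]: only high-resolution DM survives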

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
+++ b/yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py
@@ -454,8 +454,8 @@
         halonum : int
             Halo number at the last output to trace.
 
-        Output
-        ------
+        Returns
+        -------
         output : dict
             Dictionary of redshifts, cycle numbers, and halo numbers
             of the most massive progenitor.  keys = {redshift, cycle,
@@ -810,6 +810,6 @@
         ax.set_xscale("log")
     if y_log:
         ax.set_yscale("log")
-    ofn = "%s_%s_%s.png" % (basename, fields[0], fields[1])
+    ofn = "%s/%s_%s_%s.png" % (FOF_directory, basename, fields[0], fields[1])
     plt.savefig(ofn)
     plt.clf()

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/halo_merger_tree/merger_tree.py
--- a/yt/analysis_modules/halo_merger_tree/merger_tree.py
+++ b/yt/analysis_modules/halo_merger_tree/merger_tree.py
@@ -758,17 +758,19 @@
     
     def query(self, string):
         r"""Performs a query of the database and returns the results as a list
-        of tuple(s), even if the result is singular.
+        of tuples, even if the result is singular.
         
         Parameters
         ----------
-        string : String
+        
+        string : str
             The SQL query of the database.
         
         Examples
-        -------
+        --------
+
         >>> results = mtc.query("SELECT GlobalHaloID from Halos where SnapHaloID = 0 and \
-        ... SnapZ = 0;")
+        ...    SnapZ = 0;")
         """
         # Query the database and return a list of tuples.
         if string is None:

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/halo_profiler/multi_halo_profiler.py
--- a/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
+++ b/yt/analysis_modules/halo_profiler/multi_halo_profiler.py
@@ -430,8 +430,8 @@
         After all the calls to `add_profile`, this will trigger the actual
         calculations and output the profiles to disk.
 
-        Paramters
-        ---------
+        Parameters
+        ----------
 
         filename : str
             If set, a file will be written with all of the filtered halos

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
--- a/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
+++ b/yt/analysis_modules/spectral_integrator/spectral_frequency_integrator.py
@@ -60,9 +60,9 @@
     
     Initialize an EmissivityIntegrator object.
 
-    Keyword Parameters
-    ------------------
-    filename: string
+    Parameters
+    ----------
+    filename: string, default None
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
         emissivity tables for primordial elements and for metals at 
@@ -146,8 +146,8 @@
     e_min: float
         the maximum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
@@ -220,8 +220,8 @@
     e_min: float
         the maximum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 
@@ -277,8 +277,8 @@
     e_min: float
         the maximum energy in keV for the energy band.
 
-    Keyword Parameters
-    ------------------
+    Other Parameters
+    ----------------
     filename: string
         Path to data file containing emissivity values.  If None,
         a file called xray_emissivity.h5 is used.  This file contains 

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -178,7 +178,7 @@
         self.child_mask = 1
         self.ActiveDimensions = self.field_data['x'].shape
         self.DW = grid.pf.domain_right_edge - grid.pf.domain_left_edge
-        
+
     def __getitem__(self, field):
         if field not in self.field_data.keys():
             if field == "RadiusCode":
@@ -424,7 +424,7 @@
         return grids
 
     def select_grid_indices(self, level):
-        return np.where(self.grid_levels == level)
+        return np.where(self.grid_levels[:,0] == level)
 
     def __get_grid_left_edge(self):
         if self.__grid_left_edge == None:
@@ -461,6 +461,7 @@
     def __get_grid_levels(self):
         if self.__grid_levels == None:
             self.__grid_levels = np.array([g.Level for g in self._grids])
+            self.__grid_levels.shape = (self.__grid_levels.size, 1)
         return self.__grid_levels
 
     def __del_grid_levels(self):
@@ -474,7 +475,6 @@
     grid_levels = property(__get_grid_levels, __set_grid_levels,
                              __del_grid_levels)
 
-
     def __get_grid_dimensions(self):
         if self.__grid_dimensions == None:
             self.__grid_dimensions = np.array([g.ActiveDimensions for g in self._grids])
@@ -491,6 +491,19 @@
     grid_dimensions = property(__get_grid_dimensions, __set_grid_dimensions,
                              __del_grid_dimensions)
 
+    @property
+    def grid_corners(self):
+        return np.array([
+          [self.grid_left_edge[:,0], self.grid_left_edge[:,1], self.grid_left_edge[:,2]],
+          [self.grid_right_edge[:,0], self.grid_left_edge[:,1], self.grid_left_edge[:,2]],
+          [self.grid_right_edge[:,0], self.grid_right_edge[:,1], self.grid_left_edge[:,2]],
+          [self.grid_left_edge[:,0], self.grid_right_edge[:,1], self.grid_left_edge[:,2]],
+          [self.grid_left_edge[:,0], self.grid_left_edge[:,1], self.grid_right_edge[:,2]],
+          [self.grid_right_edge[:,0], self.grid_left_edge[:,1], self.grid_right_edge[:,2]],
+          [self.grid_right_edge[:,0], self.grid_right_edge[:,1], self.grid_right_edge[:,2]],
+          [self.grid_left_edge[:,0], self.grid_right_edge[:,1], self.grid_right_edge[:,2]],
+        ], dtype='float64')
+
 
 class AMR1DData(AMRData, GridPropertiesMixin):
     _spatial = False
@@ -530,7 +543,7 @@
             # generated it above.  This way, fields that are grabbed from the
             # grids are sorted properly.
             self[field] = self[field][self._sortkey]
-       
+
 class AMROrthoRayBase(AMR1DData):
     """
     This is an orthogonal ray cast through the entire domain, at a specific
@@ -673,9 +686,9 @@
             vs = self._get_line_at_coord(RE[:,i], i)
             p = p | ( ( (LE[:,i1] <= vs[:,i1]) & (RE[:,i1] >= vs[:,i1]) ) \
                     & ( (LE[:,i2] <= vs[:,i2]) & (RE[:,i2] >= vs[:,i2]) ) )
-        p = p | ( np.all( LE <= self.start_point, axis=1 ) 
+        p = p | ( np.all( LE <= self.start_point, axis=1 )
                 & np.all( RE >= self.start_point, axis=1 ) )
-        p = p | ( np.all( LE <= self.end_point,   axis=1 ) 
+        p = p | ( np.all( LE <= self.end_point,   axis=1 )
                 & np.all( RE >= self.end_point,   axis=1 ) )
         self._grids = self.hierarchy.grids[p]
 
@@ -695,7 +708,7 @@
         if not iterable(gf):
             gf = gf * np.ones(grid.child_mask.shape)
         return gf[mask]
-        
+
     @cache_mask
     def _get_cut_mask(self, grid):
         mask = np.zeros(grid.ActiveDimensions, dtype='int')
@@ -738,11 +751,11 @@
     --------
 
     >>> from yt.visualization.api import Streamlines
-    >>> streamlines = Streamlines(pf, [0.5]*3) 
+    >>> streamlines = Streamlines(pf, [0.5]*3)
     >>> streamlines.integrate_through_volume()
     >>> stream = streamlines.path(0)
     >>> matplotlib.pylab.semilogy(stream['t'], stream['Density'], '-x')
-    
+
     """
     _type_name = "streamline"
     _con_args = ('positions')
@@ -775,16 +788,16 @@
     @restore_grid_state
     def _get_data_from_grid(self, grid, field):
         # No child masking here; it happens inside the mask cut
-        mask = self._get_cut_mask(grid) 
+        mask = self._get_cut_mask(grid)
         if field == 'dts': return self._dts[grid.id]
         if field == 't': return self._ts[grid.id]
         return grid[field].flat[mask]
-        
+
     @cache_mask
     def _get_cut_mask(self, grid):
         #pdb.set_trace()
         points_in_grid = np.all(self.positions > grid.LeftEdge, axis=1) & \
-                         np.all(self.positions <= grid.RightEdge, axis=1) 
+                         np.all(self.positions <= grid.RightEdge, axis=1)
         pids = np.where(points_in_grid)[0]
         mask = np.zeros(points_in_grid.sum(), dtype='int')
         dts = np.zeros(points_in_grid.sum(), dtype='float64')
@@ -819,7 +832,7 @@
         AMRData.__init__(self, pf, fields, **kwargs)
         self.field = ensure_list(fields)[0]
         self.set_field_parameter("axis",axis)
-        
+
     def _convert_field_name(self, field):
         return field
 
@@ -838,7 +851,6 @@
             fields_to_get = self.fields[:]
         else:
             fields_to_get = ensure_list(fields)
-        temp_data = {}
         for field in fields_to_get:
             if self.field_data.has_key(field): continue
             if field not in self.hierarchy.field_list:
@@ -848,18 +860,13 @@
             # we're going to have to set the same thing several times
             data = [self._get_data_from_grid(grid, field)
                     for grid in self._get_grids()]
-            if len(data) == 0: data = np.array([])
-            else: data = np.concatenate(data)
-            temp_data[field] = data
+            if len(data) == 0:
+                data = np.array([])
+            else:
+                data = np.concatenate(data)
             # Now the next field can use this field
-            self[field] = temp_data[field] 
-        # We finalize
-        if temp_data != {}:
-            temp_data = self.comm.par_combine_object(temp_data,
-                    datatype='dict', op='cat')
-        # And set, for the next group
-        for field in temp_data.keys():
-            self[field] = temp_data[field]
+            self[field] = self.comm.par_combine_object(data, op='cat',
+                                                       datatype='array')
 
     def _get_pw(self, fields, center, width, origin, axes_unit, plot_type):
         axis = self.axis
@@ -874,7 +881,7 @@
         (bounds, center, units) = GetWindowParameters(axis, center, width, self.pf)
         if axes_unit is None and units != ('1', '1'):
             axes_unit = units
-        pw = PWViewerMPL(self, bounds, origin=origin, frb_generator=FixedResolutionBuffer, 
+        pw = PWViewerMPL(self, bounds, origin=origin, frb_generator=FixedResolutionBuffer,
                          plot_type=plot_type)
         pw.set_axes_unit(axes_unit)
         return pw
@@ -980,7 +987,7 @@
         for field in fields:
             #mylog.debug("Trying to obtain %s from node %s",
                 #self._convert_field_name(field), node_name)
-            fdata=self.hierarchy.get_data(node_name, 
+            fdata=self.hierarchy.get_data(node_name,
                 self._convert_field_name(field))
             if fdata is not None:
                 #mylog.debug("Got %s from node %s", field, node_name)
@@ -1138,7 +1145,7 @@
         t = points * ind[cm] * dx + (grid.LeftEdge[xaxis] + 0.5 * dx)
         # calculate ypoints array
         ind = cmI[1, :].ravel()   # yind
-        del cmI   # no longer needed 
+        del cmI   # no longer needed
         t = np.vstack( (t, points * ind[cm] * dy + \
                 (grid.LeftEdge[yaxis] + 0.5 * dy))
             )
@@ -1197,7 +1204,7 @@
     def hub_upload(self):
         self._mrep.upload()
 
-    def to_pw(self, fields=None, center='c', width=None, axes_unit=None, 
+    def to_pw(self, fields=None, center='c', width=None, axes_unit=None,
                origin='center-window'):
         r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from this
         object.
@@ -1477,7 +1484,7 @@
         self.dims = dims
         self.dds = self.width / self.dims
         self.bounds = np.array([0.0,1.0,0.0,1.0])
-        
+
         self.set_field_parameter('center', center)
         # Let's set up our plane equation
         # ax + by + cz + d = 0
@@ -1563,7 +1570,7 @@
 
             # Mark these pixels to speed things up
             self._pixelmask[pointI] = 0
-            
+
             return
         else:
             raise SyntaxError("Making a fixed resolution slice with "
@@ -1651,7 +1658,7 @@
         L_name = ("%s" % self._norm_vec).replace(" ","_")[1:-1]
         return "%s/c%s_L%s" % \
             (self._top_node, cen_name, L_name)
-        
+
 class AMRQuadTreeProjBase(AMR2DData):
     """
     This is a data object corresponding to a line integral through the
@@ -1809,7 +1816,7 @@
             convs[:] = 1.0
         return dls, convs
 
-    def to_pw(self, fields=None, center='c', width=None, axes_unit=None, 
+    def to_pw(self, fields=None, center='c', width=None, axes_unit=None,
                origin='center-window'):
         r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from this
         object.
@@ -1850,7 +1857,7 @@
                                  if g.Level == level],
                               self.get_dependencies(fields), self.hierarchy.io)
             self._add_level_to_tree(tree, level, fields)
-            mylog.debug("End of projecting level level %s, memory usage %0.3e", 
+            mylog.debug("End of projecting level %s, memory usage %0.3e",
                         level, get_memory_usage()/1024.)
         # Note that this will briefly double RAM usage
         if self.proj_style == "mip":
@@ -1942,7 +1949,7 @@
         xpoints = (xind + (start_index[x_dict[self.axis]])).astype('int64')
         ypoints = (yind + (start_index[y_dict[self.axis]])).astype('int64')
         to_add = np.array([d[used_points].ravel() for d in full_proj], order='F')
-        tree.add_array_to_tree(grid.Level, xpoints, ypoints, 
+        tree.add_array_to_tree(grid.Level, xpoints, ypoints,
                     to_add, weight_proj[used_points].ravel())
 
     def _add_level_to_tree(self, tree, level, fields):
@@ -2068,6 +2075,7 @@
                  source=None, node_name = None, field_cuts = None,
                  preload_style='level', serialize=True,**kwargs):
         AMR2DData.__init__(self, axis, field, pf, node_name = None, **kwargs)
+        self.proj_style = "integrate"
         self.weight_field = weight_field
         self._field_cuts = field_cuts
         self.serialize = serialize
@@ -2282,7 +2290,7 @@
                 del self.__retval_coords[grid.id]
                 del self.__retval_fields[grid.id]
                 del self.__overlap_masks[grid.id]
-            mylog.debug("End of projecting level level %s, memory usage %0.3e", 
+            mylog.debug("End of projecting level %s, memory usage %0.3e",
                         level, get_memory_usage()/1024.)
         coord_data = np.concatenate(coord_data, axis=1)
         field_data = np.concatenate(field_data, axis=1)
@@ -2313,7 +2321,7 @@
     def add_fields(self, fields, weight = "CellMassMsun"):
         pass
 
-    def to_pw(self, fields=None, center='c', width=None, axes_unit=None, 
+    def to_pw(self, fields=None, center='c', width=None, axes_unit=None,
                origin='center-window'):
         r"""Create a :class:`~yt.visualization.plot_window.PWViewerMPL` from this
         object.
@@ -2521,7 +2529,7 @@
         ref_ratio = self.pf.refine_by**(self.level - grid.Level)
         FillBuffer(ref_ratio,
             grid.get_global_startindex(), self.global_startindex,
-            c_fields, g_fields, 
+            c_fields, g_fields,
             self.ActiveDimensions, grid.ActiveDimensions,
             grid.child_mask, self.domain_width, dls[grid.Level],
             self.axis)
@@ -2682,9 +2690,9 @@
     def cut_region(self, field_cuts):
         """
         Return an InLineExtractedRegion, where the grid cells are cut on the
-        fly with a set of field_cuts.  It is very useful for applying 
+        fly with a set of field_cuts.  It is very useful for applying
         conditions to the fields in your data object.
-        
+
         Examples
         --------
         To find the total mass of gas above 10^6 K in your volume:
@@ -2725,7 +2733,7 @@
         useful for calculating, for instance, total isocontour area, or
         visualizing in an external program (such as `MeshLab
         <http://meshlab.sf.net>`_.)
-        
+
         Parameters
         ----------
         field : string
@@ -2839,7 +2847,7 @@
 
         Additionally, the returned flux is defined as flux *into* the surface,
         not flux *out of* the surface.
-        
+
         Parameters
         ----------
         field : string
@@ -2896,7 +2904,7 @@
             ff = np.ones(vals.shape, dtype="float64")
         else:
             ff = grid.get_vertex_centered_data(fluxing_field)
-        xv, yv, zv = [grid.get_vertex_centered_data(f) for f in 
+        xv, yv, zv = [grid.get_vertex_centered_data(f) for f in
                      [field_x, field_y, field_z]]
         return march_cubes_grid_flux(value, vals, xv, yv, zv,
                     ff, mask, grid.LeftEdge, grid.dds)
@@ -2989,7 +2997,7 @@
     ----------------
     force_refresh : bool
        Force a refresh of the data. Defaults to True.
-    
+
     Examples
     --------
     """
@@ -3229,7 +3237,7 @@
         if self._grids is not None: return
         GLE = self.pf.h.grid_left_edge
         GRE = self.pf.h.grid_right_edge
-        goodI = find_grids_in_inclined_box(self.box_vectors, self.center, 
+        goodI = find_grids_in_inclined_box(self.box_vectors, self.center,
                                            GLE, GRE)
         cgrids = self.pf.h.grids[goodI.astype('bool')]
        # find_grids_in_inclined_box seems to be broken.
@@ -3237,13 +3245,13 @@
         grids = []
         for i,grid in enumerate(cgrids):
             v = grid_points_in_volume(self.box_lengths, self.origin,
-                                      self._rot_mat, grid.LeftEdge, 
+                                      self._rot_mat, grid.LeftEdge,
                                       grid.RightEdge, grid.dds,
                                       grid.child_mask, 1)
             if v: grids.append(grid)
         self._grids = np.empty(len(grids), dtype='object')
         for gi, g in enumerate(grids): self._grids[gi] = g
-            
+
 
     def _is_fully_enclosed(self, grid):
         # This should be written at some point.
@@ -3256,10 +3264,10 @@
             return True
         pm = np.zeros(grid.ActiveDimensions, dtype='int32')
         grid_points_in_volume(self.box_lengths, self.origin,
-                              self._rot_mat, grid.LeftEdge, 
+                              self._rot_mat, grid.LeftEdge,
                               grid.RightEdge, grid.dds, pm, 0)
         return pm
-        
+
 
 class AMRRegionBase(AMR3DData):
     """A 3D region of data with an arbitrary center.
@@ -3395,9 +3403,9 @@
     _dx_pad = 0.0
     def __init__(self, center, left_edge, right_edge, fields = None,
                  pf = None, **kwargs):
-        AMRPeriodicRegionBase.__init__(self, center, left_edge, right_edge, 
+        AMRPeriodicRegionBase.__init__(self, center, left_edge, right_edge,
                                        fields = None, pf = None, **kwargs)
-    
+
 
 class AMRGridCollectionBase(AMR3DData):
     """
@@ -3564,7 +3572,7 @@
         self._C = C
         self._e0 = e0 = e0 / (e0**2.0).sum()**0.5
         self._tilt = tilt
-        
+
         # find the t1 angle needed to rotate about z axis to align e0 to x
         t1 = np.arctan(e0[1] / e0[0])
         # rotate e0 by -t1
@@ -3574,7 +3582,7 @@
         t2 = np.arctan(-r1[2] / r1[0])
         """
         calculate the original e1
-        given the tilt about the x axis when e0 was aligned 
+        given the tilt about the x axis when e0 was aligned
         to x after t1, t2 rotations about z, y
         """
         RX = get_rotation_matrix(-tilt, (1,0,0)).transpose()
@@ -3598,7 +3606,7 @@
         self._refresh_data()
 
         """
-        Having another function find_ellipsoid_grids is too much work, 
+        Having another function find_ellipsoid_grids is too much work,
         can just use the sphere one and forget about checking orientation
         but feed in the A parameter for radius
         """
@@ -3686,7 +3694,7 @@
 class AMRCoveringGridBase(AMR3DData):
     """A 3D region with all data extracted to a single, specified
     resolution.
-    
+
     Parameters
     ----------
     level : int
@@ -3784,7 +3792,7 @@
             n_bad = np.where(self[obtain_fields[0]]==-999)[0].size
             mylog.error("Covering problem: %s cells are uncovered", n_bad)
             raise KeyError(n_bad)
-            
+
     def _generate_field(self, field):
         if self.pf.field_info.has_key(field):
             # First we check the validator; this might even raise!
@@ -3812,13 +3820,13 @@
     def _get_data_from_grid(self, grid, fields):
         ll = int(grid.Level == self.level)
         ref_ratio = self.pf.refine_by**(self.level - grid.Level)
-        g_fields = [gf.astype("float64") 
+        g_fields = [gf.astype("float64")
                     if gf.dtype != "float64"
                     else gf for gf in (grid[field] for field in fields)]
         c_fields = [self[field] for field in fields]
         count = FillRegion(ref_ratio,
             grid.get_global_startindex(), self.global_startindex,
-            c_fields, g_fields, 
+            c_fields, g_fields,
             self.ActiveDimensions, grid.ActiveDimensions,
             grid.child_mask, self.domain_width, ll, 0)
         return count
@@ -3834,7 +3842,7 @@
         c_fields = [self[field] for field in fields]
         FillRegion(ref_ratio,
             grid.get_global_startindex(), self.global_startindex,
-            c_fields, g_fields, 
+            c_fields, g_fields,
             self.ActiveDimensions, grid.ActiveDimensions,
             grid.child_mask, self.domain_width, ll, 1)
 
@@ -3855,7 +3863,7 @@
     fill the region to level 1, replacing any cells actually
     covered by level 1 data, and then recursively repeating this
     process until it reaches the specified `level`.
-    
+
     Parameters
     ----------
     level : int
@@ -3867,10 +3875,11 @@
     fields : array_like, optional
         A list of fields that you'd like pre-generated for your object
 
-    Example
-    -------
-    cube = pf.h.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
-                              dims=[128, 128, 128])
+    Examples
+    --------
+
+    >>> cube = pf.h.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], \
+    ...                          dims=[128, 128, 128])
     """
     _type_name = "smoothed_covering_grid"
     def __init__(self, *args, **kwargs):
@@ -3975,7 +3984,7 @@
     def _refine(self, dlevel, fields):
         rf = float(self.pf.refine_by**dlevel)
 
-        input_left = (self._old_global_startindex + 0.5) * rf 
+        input_left = (self._old_global_startindex + 0.5) * rf
         dx = np.fromiter((self['cd%s' % ax] for ax in 'xyz'), count=3, dtype='float64')
         output_dims = np.rint((self.ActiveDimensions*self.dds)/dx+0.5).astype('int32') + 2
         self._cur_dims = output_dims
@@ -3989,13 +3998,13 @@
 
     @restore_field_information_state
     def _get_data_from_grid(self, grid, fields):
-        g_fields = [gf.astype("float64") 
+        g_fields = [gf.astype("float64")
                     if gf.dtype != "float64"
                     else gf for gf in (grid[field] for field in fields)]
         c_fields = [self.field_data[field] for field in fields]
         count = FillRegion(1,
             grid.get_global_startindex(), self.global_startindex,
-            c_fields, g_fields, 
+            c_fields, g_fields,
             self._cur_dims, grid.ActiveDimensions,
             grid.child_mask, self.domain_width, 1, 0)
         return count
@@ -4007,14 +4016,14 @@
     """
     This will build a hybrid region based on the boolean logic
     of the regions.
-    
+
     Parameters
     ----------
     regions : list
         A list of region objects and strings describing the boolean logic
         to use when building the hybrid region. The boolean logic can be
         nested using parentheses.
-    
+
     Examples
     --------
     >>> re1 = pf.h.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4],
@@ -4027,7 +4036,7 @@
         sp1, ")"])
     """
     _type_name = "boolean"
-    _con_args = ("regions")
+    _con_args = ("regions",)
     def __init__(self, regions, fields = None, pf = None, **kwargs):
         # Center is meaningless, but we'll define it all the same.
         AMR3DData.__init__(self, [0.5]*3, fields, pf, **kwargs)
@@ -4039,7 +4048,7 @@
         self._get_all_regions()
         self._make_overlaps()
         self._get_list_of_grids()
-    
+
     def _get_all_regions(self):
         # Before anything, we simply find out which regions are involved in all
         # of this process, uniquely.
@@ -4049,7 +4058,7 @@
             # So cut_masks don't get messed up.
             item._boolean_touched = True
         self._all_regions = np.unique(self._all_regions)
-    
+
     def _make_overlaps(self):
         # Using the processed cut_masks, we'll figure out what grids
         # are left in the hybrid region.
@@ -4083,7 +4092,7 @@
                     continue
             pbar.update(i)
         pbar.finish()
-    
+
     def __repr__(self):
         # We'll do this the slow way to be clear what's going on
         s = "%s (%s): " % (self.__class__.__name__, self.pf)
@@ -4096,7 +4105,7 @@
             if i < (len(self.regions) - 1): s += ", "
         s += "]"
         return s
-    
+
     def _is_fully_enclosed(self, grid):
         return (grid in self._all_overlap)
 
@@ -4183,7 +4192,7 @@
     <http://meshlab.sf.net>`_.)  The object has the properties .vertices
     and will sample values if a field is requested.  The values are
     interpolated to the center of a given face.
-    
+
     Parameters
     ----------
     data_source : AMR3DDataObject
@@ -4258,7 +4267,7 @@
                 self[fields] = samples
             elif sample_type == "vertex":
                 self.vertex_samples[fields] = samples
-        
+
 
     @restore_grid_state
     def _extract_isocontours_from_grid(self, grid, field, value,
@@ -4295,7 +4304,7 @@
 
         Additionally, the returned flux is defined as flux *into* the surface,
         not flux *out of* the surface.
-        
+
         Parameters
         ----------
         field_x : string
@@ -4342,7 +4351,7 @@
         return flux
 
     @restore_grid_state
-    def _calculate_flux_in_grid(self, grid, 
+    def _calculate_flux_in_grid(self, grid,
                     field_x, field_y, field_z, fluxing_field = None):
         mask = self.data_source._get_cut_mask(grid) * grid.child_mask
         vals = grid.get_vertex_centered_data(self.surface_field)
@@ -4350,7 +4359,7 @@
             ff = np.ones(vals.shape, dtype="float64")
         else:
             ff = grid.get_vertex_centered_data(fluxing_field)
-        xv, yv, zv = [grid.get_vertex_centered_data(f) for f in 
+        xv, yv, zv = [grid.get_vertex_centered_data(f) for f in
                      [field_x, field_y, field_z]]
         return march_cubes_grid_flux(self.field_value, vals, xv, yv, zv,
                     ff, mask, grid.LeftEdge, grid.dds)
@@ -4468,7 +4477,7 @@
             w = bounds[i][1] - bounds[i][0]
             np.divide(tmp, w, tmp)
             np.subtract(tmp, 0.5, tmp) # Center at origin.
-            v[ax][:] = tmp 
+            v[ax][:] = tmp
         f.write("end_header\n")
         v.tofile(f)
         arr["ni"][:] = 3
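
The new grid_corners property enumerates the eight corners of every grid by
combining the left- and right-edge coordinates, giving an array of shape
(8, 3, number_of_grids).  A small standalone check of that layout with two
fabricated grids (placeholder values, not part of the changeset):

    import numpy as np

    LE = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])  # grid_left_edge
    RE = np.array([[0.5, 0.5, 0.5], [1.0, 1.0, 1.0]])  # grid_right_edge
    corners = np.array([
        [LE[:,0], LE[:,1], LE[:,2]],
        [RE[:,0], LE[:,1], LE[:,2]],
        [RE[:,0], RE[:,1], LE[:,2]],
        [LE[:,0], RE[:,1], LE[:,2]],
        [LE[:,0], LE[:,1], RE[:,2]],
        [RE[:,0], LE[:,1], RE[:,2]],
        [RE[:,0], RE[:,1], RE[:,2]],
        [LE[:,0], RE[:,1], RE[:,2]],
    ], dtype='float64')
    print corners.shape   # (8, 3, 2): 8 corners, 3 coordinates, 2 grids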

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/hierarchy.py
--- a/yt/data_objects/hierarchy.py
+++ b/yt/data_objects/hierarchy.py
@@ -209,7 +209,7 @@
         pf = self.parameter_file
         if find_max: c = self.find_max("Density")[1]
         else: c = (pf.domain_right_edge + pf.domain_left_edge)/2.0
-        return self.region(c, 
+        return self.region(c,
             pf.domain_left_edge, pf.domain_right_edge)
 
     def clear_all_data(self):
@@ -308,7 +308,7 @@
             self.save_data = self._save_data
         else:
             self.save_data = parallel_splitter(self._save_data, self._reload_data_file)
-    
+
     save_data = parallel_splitter(_save_data, _reload_data_file)
 
     def save_object(self, obj, name):
@@ -367,7 +367,7 @@
         """
         Returns (in code units) the smallest cell size in the simulation.
         """
-        return self.select_grids(self.grid_levels.max())[0].dds[0]
+        return self.select_grids(self.grid_levels.max())[0].dds[:].min()
 
     def _add_object_class(self, name, class_name, base, dd):
         self.object_types.append(name)
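
The get_smallest_dx change above returns the smallest cell spacing along any
axis instead of the x-spacing only, which matters for grids whose cells are not
cubic.  For instance (placeholder spacings, not part of the changeset):

    import numpy as np

    dds = np.array([0.01, 0.005, 0.02])  # per-axis spacings of a finest-level grid
    print dds[0]         # old behaviour: 0.01 (x-spacing only)
    print dds[:].min()   # new behaviour: 0.005 (smallest spacing on any axis)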

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/image_array.py
--- a/yt/data_objects/image_array.py
+++ b/yt/data_objects/image_array.py
@@ -71,12 +71,12 @@
 
     >>> im = np.zeros([64,128,3])
     >>> for i in xrange(im.shape[0]):
-    >>>     for k in xrange(im.shape[2]):
-    >>>         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
+    ...     for k in xrange(im.shape[2]):
+    ...         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
 
     >>> myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-    >>>     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-    >>>     'width':0.245, 'units':'cm', 'type':'rendering'}
+    ...     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
+    ...     'width':0.245, 'units':'cm', 'type':'rendering'}
 
     >>> im_arr = ImageArray(im, info=myinfo)
     >>> im_arr.save('test_ImageArray')
@@ -112,12 +112,12 @@
         -------- 
         >>> im = np.zeros([64,128,3])
         >>> for i in xrange(im.shape[0]):
-        >>>     for k in xrange(im.shape[2]):
-        >>>         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
+        ...     for k in xrange(im.shape[2]):
+        ...         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
 
         >>> myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-        >>>     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-        >>>     'width':0.245, 'units':'cm', 'type':'rendering'}
+        ...     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
+        ...     'width':0.245, 'units':'cm', 'type':'rendering'}
 
         >>> im_arr = ImageArray(im, info=myinfo)
         >>> im_arr.write_hdf5('test_ImageArray.h5')
@@ -133,38 +133,191 @@
             d.attrs.create(k, v)
         f.close()
 
-    def write_png(self, filename, clip_ratio=None):
+    def add_background_color(self, background='black', inline=True):
+        r"""Adds a background color to a 4-channel ImageArray
+
+        This adds a background color to a 4-channel ImageArray, by default
+        doing so inline.  The ImageArray must already be normalized to the
+        [0,1] range.
+
+        Parameters
+        ----------
+        background: 
+            This can be used to set a background color for the image, and can
+            take several types of values:
+
+               * ``white``: white background, opaque
+               * ``black``: black background, opaque
+               * ``None``: transparent background
+               * 4-element array [r,g,b,a]: arbitrary rgba setting.
+
+            Default: 'black'
+        inline: boolean, optional
+            If True, original ImageArray is modified. If False, a copy is first
+            created, then modified. Default: True
+
+        Returns
+        -------
+        out: ImageArray
+            The modified ImageArray with a background color added.
+       
+        Examples
+        --------
+        >>> im = np.zeros([64,128,4])
+        >>> for i in xrange(im.shape[0]):
+        ...     for k in xrange(im.shape[2]):
+        ...         im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
+
+        >>> im_arr = ImageArray(im)
+        >>> im_arr.rescale()
+        >>> new_im = im_arr.add_background_color([1.,0.,0.,1.], inline=False)
+        >>> new_im.write_png('red_bg.png')
+        >>> im_arr.add_background_color('black')
+        >>> im_arr.write_png('black_bg.png')
+        """
+        assert(self.shape[-1] == 4)
+        
+        if background == None:
+            background = (0., 0., 0., 0.)
+        elif background == 'white':
+            background = (1., 1., 1., 1.)
+        elif background == 'black':
+            background = (0., 0., 0., 1.)
+
+        # Alpha blending to background
+        if inline:
+            out = self
+        else:
+            out = self.copy()
+
+        for i in range(3):
+            out[:,:,i] = self[:,:,i]*self[:,:,3] + \
+                    background[i]*background[3]*(1.0-self[:,:,3])
+        out[:,:,3] = self[:,:,3] + background[3]*(1.0-self[:,:,3]) 
+        return out 
+
+
+    def rescale(self, cmax=None, amax=None, inline=True):
+        r"""Rescales the image to be in [0,1] range.
+
+        Parameters
+        ----------
+        cmax: float, optional
+            Normalization value to use for the rgb channels. Defaults to None,
+            in which case the maximum of the summed rgb channels is used.
+        amax: float, optional
+            Normalization value to use for alpha channel. Defaults to None,
+            corresponding to using the maximum value in the alpha channel.
+        inline: boolean, optional
+            Specifies whether or not the rescaling is done inline. If False,
+            a new copy of the ImageArray is created and returned.
+            Default: True.
+
+        Returns
+        -------
+        out: ImageArray
+            The rescaled ImageArray, clipped to the [0,1] range.
+
+        Notes
+        -----
+        This requires the shape of the ImageArray to have a length of 3,
+        and the third dimension to be >= 3.  If the third dimension has
+        a size of 4, the alpha channel will also be rescaled.
+       
+        Examples
+        -------- 
+        >>> im = np.zeros([64,128,4])
+        >>> for i in xrange(im.shape[0]):
+        ...     for k in xrange(im.shape[2]):
+        ...         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
+        >>> im_arr = ImageArray(im)
+        >>> im_arr.write_png('original.png')
+        >>> im_arr.rescale()
+        >>> im_arr.write_png('normalized.png')
+
+        """
+        assert(len(self.shape) == 3)
+        assert(self.shape[2] >= 3)
+        if inline:
+            out = self
+        else:
+            out = self.copy()
+        if cmax is None: 
+            cmax = self[:,:,:3].sum(axis=2).max()
+
+        np.multiply(self[:,:,:3], 1./cmax, out[:,:,:3])
+
+        if self.shape[2] == 4:
+            if amax is None:
+                amax = self[:,:,3].max()
+            if amax > 0.0:
+                np.multiply(self[:,:,3], 1./amax, out[:,:,3])
+        
+        np.clip(out, 0.0, 1.0, out)
+        return out
+
+    def write_png(self, filename, clip_ratio=None, background='black',
+                 rescale=True):
         r"""Writes ImageArray to png file.
 
         Parameters
         ----------
         filename: string
             Note: '.png' will be appended to the filename if it does not
             already end with it.
+        clip_ratio: float, optional
+            If set, the image is clipped at the mean plus clip_ratio times
+            the standard deviation of its non-zero values before saving.
+            Useful for enhancing faint features. Default: None
+        background: string or 4-element array, optional
+            This can be used to set a background color for the image, and can
+            take several types of values:
+
+               * ``white``: white background, opaque
+               * ``black``: black background, opaque
+               * ``None``: transparent background
+               * 4-element array [r,g,b,a]: arbitrary rgba setting.
+
+            Default: 'black'
+        rescale: boolean, optional
+            If True, will write out a rescaled image (without modifying the
+            original image). Default: True
        
         Examples
         --------
-        
-        >>> im = np.zeros([64,128,3])
+        >>> im = np.zeros([64,128,4])
         >>> for i in xrange(im.shape[0]):
-        >>>     for k in xrange(im.shape[2]):
-        >>>         im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
+        ...     for k in xrange(im.shape[2]):
+        ...         im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
 
-        >>> myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-        >>>     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-        >>>     'width':0.245, 'units':'cm', 'type':'rendering'}
-
-        >>> im_arr = ImageArray(im, info=myinfo)
-        >>> im_arr.write_png('test_ImageArray.png')
+        >>> im_arr = ImageArray(im)
+        >>> im_arr.write_png('standard.png')
+        >>> im_arr.write_png('non-scaled.png', rescale=False)
+        >>> im_arr.write_png('black_bg.png', background='black')
+        >>> im_arr.write_png('white_bg.png', background='white')
+        >>> im_arr.write_png('green_bg.png', background=[0,1,0,1])
+        >>> im_arr.write_png('transparent_bg.png', background=None)
 
         """
+        if rescale:
+            scaled = self.rescale(inline=False)
+        else:
+            scaled = self
+
+        if self.shape[-1] == 4:
+            out = scaled.add_background_color(background, inline=False)
+        else:
+            out = scaled
+
         if filename[-4:] != '.png': 
             filename += '.png'
 
         if clip_ratio is not None:
-            return write_bitmap(self.swapaxes(0, 1), filename,
-                                clip_ratio * self.std())
+            nz = out[:,:,:3][out[:,:,:3].nonzero()]
+            return write_bitmap(out.swapaxes(0, 1), filename,
+                                nz.mean() + \
+                                clip_ratio * nz.std())
         else:
-            return write_bitmap(self.swapaxes(0, 1), filename)
+            return write_bitmap(out.swapaxes(0, 1), filename)
 
     def write_image(self, filename, color_bounds=None, channel=None,  cmap_name="algae", func=lambda x: x):
         r"""Writes a single channel of the ImageArray to a png file.
@@ -197,11 +350,11 @@
         
         >>> im = np.zeros([64,128])
         >>> for i in xrange(im.shape[0]):
-        >>>     im[i,:] = np.linspace(0.,0.3*k, im.shape[1])
+        ...     im[i,:] = np.linspace(0.,0.3*i, im.shape[1])
 
         >>> myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-        >>>     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-        >>>     'width':0.245, 'units':'cm', 'type':'rendering'}
+        ...     'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
+        ...     'width':0.245, 'units':'cm', 'type':'rendering'}
 
         >>> im_arr = ImageArray(im, info=myinfo)
         >>> im_arr.write_image('test_ImageArray.png')
@@ -245,27 +398,3 @@
 
     __doc__ += np.ndarray.__doc__
 
-if __name__ == "__main__":
-    im = np.zeros([64,128,3])
-    for i in xrange(im.shape[0]):
-        for k in xrange(im.shape[2]):
-            im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
-
-    myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-        'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-        'width':0.245, 'units':'cm', 'type':'rendering'}
-
-    im_arr = ImageArray(im, info=myinfo)
-    im_arr.save('test_3d_ImageArray')
-
-    im = np.zeros([64,128])
-    for i in xrange(im.shape[0]):
-        im[i,:] = np.linspace(0.,0.3*k, im.shape[1])
-
-    myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
-        'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
-        'width':0.245, 'units':'cm', 'type':'rendering'}
-
-    im_arr = ImageArray(im, info=myinfo)
-    im_arr.save('test_2d_ImageArray')
-

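The blend carried out in add_background_color above is the standard "over"
compositing step. For reference, a minimal standalone NumPy sketch of the same
arithmetic (the function and argument names are illustrative only, not part of
the commit):

    import numpy as np

    def blend_over(image, background):
        # image: (N, M, 4) rgba array with values in [0, 1]
        # background: sequence of 4 floats (r, g, b, a)
        out = image.copy()
        alpha = image[:, :, 3]
        for i in range(3):
            # rgb_out = rgb_fg * a_fg + rgb_bg * a_bg * (1 - a_fg)
            out[:, :, i] = image[:, :, i] * alpha + \
                background[i] * background[3] * (1.0 - alpha)
        # a_out = a_fg + a_bg * (1 - a_fg)
        out[:, :, 3] = alpha + background[3] * (1.0 - alpha)
        return out

With an opaque background (a_bg = 1) the output alpha is 1 everywhere, which is
why the 'black' and 'white' settings produce fully opaque images.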
diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/object_finding_mixin.py
--- a/yt/data_objects/object_finding_mixin.py
+++ b/yt/data_objects/object_finding_mixin.py
@@ -198,8 +198,10 @@
         """
         Gets back all the grids between a left edge and right edge
         """
-        grid_i = np.where((np.all(self.grid_right_edge > left_edge, axis=1)
-                         & np.all(self.grid_left_edge < right_edge, axis=1)) == True)
+        eps = np.finfo(np.float64).eps
+        grid_i = np.where((np.all((self.grid_right_edge - left_edge) > eps, axis=1)
+                         & np.all((right_edge - self.grid_left_edge) > eps, axis=1)) == True)
+
         return self.grids[grid_i], grid_i
 
     def get_periodic_box_grids(self, left_edge, right_edge):

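The change to get_box_grids above replaces strict '>' comparisons with a
comparison against machine epsilon, so overlaps that amount only to
floating-point round-off no longer pull in a neighbouring grid. A small
doctest-style illustration of the failure mode (the numbers are illustrative,
not taken from the commit):

    >>> import numpy as np
    >>> edge = 0.3
    >>> coord = 0.1 + 0.2                  # intended to be exactly 0.3
    >>> edge < coord                       # strict test reports an overlap
    True
    >>> (coord - edge) > np.finfo(np.float64).eps
    False

The test_slice_over_edges test added later in this changeset exercises slicing
exactly along grid boundaries.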
diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/tests/test_image_array.py
--- /dev/null
+++ b/yt/data_objects/tests/test_image_array.py
@@ -0,0 +1,130 @@
+from yt.testing import *
+from yt.data_objects.image_array import ImageArray
+import numpy as np
+import os
+import tempfile
+import shutil
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt","__withintesting"] = "True"
+    np.seterr(all = 'ignore')
+
+def test_rgba_rescale():
+    im = np.zeros([64,128,4])
+    for i in xrange(im.shape[0]):
+        for k in xrange(im.shape[2]):
+            im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
+    im_arr = ImageArray(im)
+
+    new_im = im_arr.rescale(inline=False)
+    yield assert_equal, im_arr[:,:,:3].max(), 2*10.
+    yield assert_equal, im_arr[:,:,3].max(), 3*10.
+    yield assert_equal, new_im[:,:,:3].sum(axis=2).max(), 1.0 
+    yield assert_equal, new_im[:,:,3].max(), 1.0
+
+    im_arr.rescale()
+    yield assert_equal, im_arr[:,:,:3].sum(axis=2).max(), 1.0
+    yield assert_equal, im_arr[:,:,3].max(), 1.0
+
+def test_image_array_hdf5():
+    # Perform I/O in safe place instead of yt main dir
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    im = np.zeros([64,128,3])
+    for i in xrange(im.shape[0]):
+        for k in xrange(im.shape[2]):
+            im[i,:,k] = np.linspace(0.,0.3*k, im.shape[1])
+
+    myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
+        'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
+        'width':0.245, 'units':'cm', 'type':'rendering'}
+
+    im_arr = ImageArray(im, info=myinfo)
+    im_arr.save('test_3d_ImageArray')
+
+    im = np.zeros([64,128])
+    for i in xrange(im.shape[0]):
+        im[i,:] = np.linspace(0.,0.3*k, im.shape[1])
+
+    myinfo = {'field':'dinosaurs', 'east_vector':np.array([1.,0.,0.]), 
+        'north_vector':np.array([0.,0.,1.]), 'normal_vector':np.array([0.,1.,0.]),  
+        'width':0.245, 'units':'cm', 'type':'rendering'}
+
+    im_arr = ImageArray(im, info=myinfo)
+    im_arr.save('test_2d_ImageArray')
+
+    os.chdir(curdir)
+    # clean up
+    shutil.rmtree(tmpdir)
+
+def test_image_array_rgb_png():
+    # Perform I/O in safe place instead of yt main dir
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    im = np.zeros([64,128,3])
+    for i in xrange(im.shape[0]):
+        for k in xrange(im.shape[2]):
+            im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
+
+    im_arr = ImageArray(im)
+    im_arr.write_png('standard.png')
+
+    os.chdir(curdir)
+    # clean up
+    shutil.rmtree(tmpdir)
+
+def test_image_array_rgba_png():
+    # Perform I/O in safe place instead of yt main dir
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    im = np.zeros([64,128,4])
+    for i in xrange(im.shape[0]):
+        for k in xrange(im.shape[2]):
+            im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
+
+    im_arr = ImageArray(im)
+    im_arr.write_png('standard.png')
+    im_arr.write_png('non-scaled.png', rescale=False)
+    im_arr.write_png('black_bg.png', background='black')
+    im_arr.write_png('white_bg.png', background='white')
+    im_arr.write_png('green_bg.png', background=[0.,1.,0.,1.])
+    im_arr.write_png('transparent_bg.png', background=None)
+
+    os.chdir(curdir)
+    # clean up
+    shutil.rmtree(tmpdir)
+
+
+def test_image_array_background():
+    # Perform I/O in safe place instead of yt main dir
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    im = np.zeros([64,128,4])
+    for i in xrange(im.shape[0]):
+        for k in xrange(im.shape[2]):
+            im[i,:,k] = np.linspace(0.,10.*k, im.shape[1])
+
+    im_arr = ImageArray(im)
+    im_arr.rescale()
+    new_im = im_arr.add_background_color([1.,0.,0.,1.], inline=False)
+    new_im.write_png('red_bg.png')
+    im_arr.add_background_color('black')
+    im_arr.write_png('black_bg2.png')
+ 
+    os.chdir(curdir)
+    # clean up
+    shutil.rmtree(tmpdir)
+
+
+
+
+
+
+
+
+
+
+
+
+

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/tests/test_slice.py
--- a/yt/data_objects/tests/test_slice.py
+++ b/yt/data_objects/tests/test_slice.py
@@ -1,24 +1,60 @@
-from yt.testing import *
+"""
+Tests for AMRSlice
+
+Authors: Samuel Skillman <samskillman at gmail.com>
+Affiliation: University of Colorado at Boulder
+Author: Kacper Kowalik <xarthisius.kk at gmail.com>
+Affiliation: CA UMK
+Homepage: http://yt-project.org/
+License:
+  Copyright (C) 2012 Samuel Skillman.  All Rights Reserved.
+  Copyright (C) 2013 Kacper Kowalik.  All Rights Reserved.
+
+  This file is part of yt.
+
+  yt is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 3 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
 import os
+import numpy as np
+from nose.tools import raises
+from yt.testing import \
+    fake_random_pf, assert_equal, assert_array_equal
+from yt.utilities.definitions import \
+    x_dict, y_dict
+from yt.utilities.exceptions import \
+    YTNoDataInObjectError
 
 def setup():
     from yt.config import ytcfg
-    ytcfg["yt","__withintesting"] = "True"
+    ytcfg["yt", "__withintesting"] = "True"
+
 
 def teardown_func(fns):
     for fn in fns:
         os.remove(fn)
 
+
 def test_slice():
     for nprocs in [8, 1]:
         # We want to test both 1 proc and 8 procs, to make sure that
         # parallelism isn't broken
-        pf = fake_random_pf(64, nprocs = nprocs)
+        pf = fake_random_pf(64, nprocs=nprocs)
         dims = pf.domain_dimensions
         xn, yn, zn = pf.domain_dimensions
-        xi, yi, zi = pf.domain_left_edge + 1.0/(pf.domain_dimensions * 2)
-        xf, yf, zf = pf.domain_right_edge - 1.0/(pf.domain_dimensions * 2)
-        coords = np.mgrid[xi:xf:xn*1j, yi:yf:yn*1j, zi:zf:zn*1j]
+        xi, yi, zi = pf.domain_left_edge + 1.0 / (pf.domain_dimensions * 2)
+        xf, yf, zf = pf.domain_right_edge - 1.0 / (pf.domain_dimensions * 2)
+        coords = np.mgrid[xi:xf:xn * 1j, yi:yf:yn * 1j, zi:zf:zn * 1j]
         uc = [np.unique(c) for c in coords]
         slc_pos = 0.5
         # Some simple slice tests with single grids
@@ -33,31 +69,45 @@
                 yield assert_equal, slc["Ones"].max(), 1.0
                 yield assert_equal, np.unique(slc["px"]), uc[xax]
                 yield assert_equal, np.unique(slc["py"]), uc[yax]
-                yield assert_equal, np.unique(slc["pdx"]), 1.0/(dims[xax]*2.0)
-                yield assert_equal, np.unique(slc["pdy"]), 1.0/(dims[yax]*2.0)
+                yield assert_equal, np.unique(slc["pdx"]), 0.5 / dims[xax]
+                yield assert_equal, np.unique(slc["pdy"]), 0.5 / dims[yax]
                 pw = slc.to_pw()
                 fns += pw.save()
-                frb = slc.to_frb((1.0,'unitary'), 64)
+                frb = slc.to_frb((1.0, 'unitary'), 64)
                 for slc_field in ['Ones', 'Density']:
                     yield assert_equal, frb[slc_field].info['data_source'], \
-                            slc.__str__()
+                        slc.__str__()
                     yield assert_equal, frb[slc_field].info['axis'], \
-                            ax
+                        ax
                     yield assert_equal, frb[slc_field].info['field'], \
-                            slc_field
+                        slc_field
                     yield assert_equal, frb[slc_field].info['units'], \
-                            pf.field_info[slc_field].get_units()
+                        pf.field_info[slc_field].get_units()
                     yield assert_equal, frb[slc_field].info['xlim'], \
-                            frb.bounds[:2]
+                        frb.bounds[:2]
                     yield assert_equal, frb[slc_field].info['ylim'], \
-                            frb.bounds[2:]
+                        frb.bounds[2:]
                     yield assert_equal, frb[slc_field].info['length_to_cm'], \
-                            pf['cm']
+                        pf['cm']
                     yield assert_equal, frb[slc_field].info['center'], \
-                            slc.center
+                        slc.center
                     yield assert_equal, frb[slc_field].info['coord'], \
-                            slc_pos
+                        slc_pos
                 teardown_func(fns)
             # wf == None
             yield assert_equal, wf, None
 
+
+def test_slice_over_edges():
+    pf = fake_random_pf(64, nprocs=8, fields=["Density"], negative=[False])
+
+    slc = pf.h.slice(0, 0.0, "Density")
+    yield assert_array_equal, slc.grid_left_edge[:, 0], np.zeros((4))
+    slc = pf.h.slice(1, 0.5, "Density")
+    yield assert_array_equal, slc.grid_left_edge[:, 1], np.ones((4)) * 0.5
+
+
+@raises(YTNoDataInObjectError)
+def test_slice_over_outer_boundary():
+    pf = fake_random_pf(64, nprocs=8, fields=["Density"], negative=[False])
+    slc = pf.h.slice(2, 1.0, "Density")

diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/data_objects/universal_fields.py
--- a/yt/data_objects/universal_fields.py
+++ b/yt/data_objects/universal_fields.py
@@ -55,7 +55,7 @@
      G, \
      rho_crit_now, \
      speed_of_light_cgs, \
-     km_per_cm
+     km_per_cm, keV_per_K
 
 from yt.utilities.math_utils import \
     get_sph_r_component, \
@@ -216,18 +216,25 @@
            data["Density"] * data["ThermalEnergy"]
 add_field("Pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
 
+def _TempkeV(field, data):
+    return data["Temperature"] * keV_per_K
+add_field("TempkeV", function=_TempkeV, units=r"\rm{keV}",
+          display_name="Temperature")
+
 def _Entropy(field, data):
     if data.has_field_parameter("mu"):
         mw = mh*data.get_field_parameter("mu")
     else :
         mw = mh
+    try:
+        gammam1 = data.pf["Gamma"] - 1.0
+    except:
+        gammam1 = 5./3. - 1.0
     return kboltz * data["Temperature"] / \
-           ((data["Density"]/mw)**(data.pf["Gamma"] - 1.0))
+           ((data["Density"]/mw)**gammam1)
 add_field("Entropy", units=r"\rm{ergs}\ \rm{cm}^{3\gamma-3}",
           function=_Entropy)
 
-
-
 ### spherical coordinates: r (radius)
 def _sph_r(field, data):
     center = data.get_field_parameter("center")
@@ -784,22 +791,28 @@
          units=r"\rm{g}\/\rm{cm}^2/\rm{s}", particle_type=True,
          validators=[ValidateParameter('center')])
 
-def get_radius(positions, data):
-    c = data.get_field_parameter("center")
-    n_tup = tuple([1 for i in range(positions.ndim-1)])
-    center = np.tile(np.reshape(c, (positions.shape[0],)+n_tup),(1,)+positions.shape[1:])
-    periodicity = data.pf.periodicity
-    if any(periodicity):
-        period = data.pf.domain_right_edge - data.pf.domain_left_edge
-        return periodic_dist(positions, center, period, periodicity)
-    else:
-        return euclidean_dist(positions, center)
+def get_radius(data, field_prefix):
+    center = data.get_field_parameter("center")
+    DW = data.pf.domain_right_edge - data.pf.domain_left_edge
+    radius = np.zeros(data[field_prefix+"x"].shape, dtype='float64')
+    r = radius.copy()
+    if any(data.pf.periodicity):
+        rdw = radius.copy()
+    for i, ax in enumerate('xyz'):
+        np.subtract(data["%s%s" % (field_prefix, ax)], center[i], r)
+        if data.pf.periodicity[i] == True:
+            np.subtract(DW[i], r, rdw)
+            np.abs(r, r)
+            np.minimum(r, rdw, r)
+        np.power(r, 2.0, r)
+        np.add(radius, r, radius)
+    np.sqrt(radius, radius)
+    return radius
+
 def _ParticleRadius(field, data):
-    positions = np.array([data["particle_position_%s" % ax] for ax in 'xyz'])
-    return get_radius(positions, data)
+    return get_radius(data, "particle_position_")
 def _Radius(field, data):
-    positions = np.array([data['x'], data['y'], data['z']])
-    return get_radius(positions, data)
+    return get_radius(data, "")
 
 def _ConvertRadiusCGS(data):
     return data.convert("cm")

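The rewritten get_radius above accumulates the radius axis by axis with
in-place NumPy ufuncs rather than tiling a full positions array, wrapping each
periodic axis as it goes. For reference, the component-wise minimum-image
convention takes, per periodic axis of width L, the shorter of the direct and
wrapped separations, d = min(|x - c|, L - |x - c|). A minimal standalone sketch
of that convention (names and signature are illustrative, not the yt API):

    import numpy as np

    def periodic_radius(pos, center, width, periodic):
        # pos:      (N, 3) array of positions
        # center:   (3,) reference point
        # width:    (3,) domain width per axis
        # periodic: (3,) booleans, one per axis
        d = np.abs(pos - center)
        for i in range(3):
            if periodic[i]:
                # minimum image: shorter of direct and wrapped separations
                d[:, i] = np.minimum(d[:, i], width[i] - d[:, i])
        return np.sqrt((d ** 2).sum(axis=-1))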
diff -r a1ff99289b028229a1570481842295a9fb28b31b -r b59c9b89ad062e0bd2df49df6b5466b0e7f90865 yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py
+++ b/yt/frontends/athena/data_structures.py
@@ -289,6 +289,11 @@
                      self.parameter_file.domain_right_edge)
         self.parameter_file.domain_dimensions = \
                 np.round(self.parameter_file.domain_width/gdds[0]).astype('int')
+
+        # Need to reset the units in the parameter file based on the correct
+        # domain left/right/dimensions.
+        self.parameter_file._set_units()
+
         if self.parameter_file.dimensionality <= 2 :
             self.parameter_file.domain_dimensions[2] = np.int(1)
         if self.parameter_file.dimensionality == 1 :

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/34b95297062b/
changeset:   34b95297062b
branch:      yt
user:        atmyers
date:        2013-03-07 23:02:12
summary:     undoing a couple of changes I made by accident
affected #:  2 files
Diff not available.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


