[yt-svn] commit/yt: 4 new changesets

commits-noreply at bitbucket.org
Fri Jul 10 16:49:38 PDT 2015


4 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/848720c0c1c6/
Changeset:   848720c0c1c6
Branch:      yt
User:        jzuhone
Date:        2015-07-09 17:24:01+00:00
Summary:     Merge
Affected #:  22 files

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be doc/README
--- a/doc/README
+++ b/doc/README
@@ -7,4 +7,4 @@
 Because the documentation requires a number of dependencies, we provide
 pre-built versions online, accessible here:
 
-http://yt-project.org/docs/dev-3.0/
+http://yt-project.org/docs/dev/

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -104,7 +104,11 @@
 -----------
 
 Athena 4.x VTK data is *mostly* supported and cared for by John
-ZuHone. Both uniform grid and SMR datasets are supported.
+ZuHone. Both uniform grid and SMR datasets are supported. 
+
+.. note::
+   yt also recognizes Fargo3D data written to VTK files as
+   Athena data, but support for Fargo3D data is preliminary.
 
 Loading Athena datasets is slightly different depending on whether
 your dataset came from a serial or a parallel run. If the data came

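For illustration, a minimal sketch of loading an Athena VTK dataset in
Python ("kh.0010.vtk" is a hypothetical file name; units_override is
assumed as the way to supply unit conversions, since Athena data carry
no unit metadata):

    import yt

    ds = yt.load("kh.0010.vtk",
                 units_override={"length_unit": (1.0, "Mpc"),
                                 "time_unit": (1.0, "Myr"),
                                 "mass_unit": (1.0e14, "Msun")})
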
diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -39,6 +39,28 @@
  have the necessary compilers installed (e.g. the ``build-essential``
  package on Debian and Ubuntu).
 
+.. _branches-of-yt:
+
+Branches of yt: ``yt``, ``stable``, and ``yt-2.x``
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Before you install yt, you must decide which branch (i.e. version) of the code 
+you prefer to use:
+
+* ``yt`` -- The most up-to-date *development* version with the most current features but sometimes unstable (yt-3.x)
+* ``stable`` -- The latest stable release of yt-3.x
+* ``yt-2.x`` -- The latest stable release of yt-2.x
+
+If this is your first time using the code, we recommend using ``stable``, 
+unless you specifically need some piece of brand-new functionality only 
+available in ``yt`` or need to run an old script developed for ``yt-2.x``.
+There were major API and functionality changes made in yt after version 2.7
+in moving to version 3.0.  For a detailed description of the changes
+between versions 2.x (e.g. branch ``yt-2.x``) and 3.x (e.g. branches ``yt`` and 
+``stable``) see :ref:`yt3differences`.  Lastly, don't feel like you're locked 
+into one branch when you install yt, because you can easily change the active
+branch by following the instructions in :ref:`switching-between-yt-versions`.
+
 .. _install-script:
 
 All-in-One Installation Script
@@ -60,16 +82,22 @@
 its dependencies will be removed from your system (no scattered files remaining
 throughout your system).
 
+.. _installing-yt:
+
 Running the Install Script
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To get the installation script, download it from:
+To get the installation script for the ``stable`` branch of the code, 
+download it from:
 
 .. code-block:: bash
 
   wget http://bitbucket.org/yt_analysis/yt/raw/stable/doc/install_script.sh
 
-.. _installing-yt:
+If you wish to install a different version of yt (see 
+:ref:`above <branches-of-yt>`), replace ``stable`` with the appropriate 
+branch name (e.g. ``yt``, ``yt-2.x``) in the path above to get the correct 
+install script.
 
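For example, a sketch of fetching the development branch's copy of the
script (the same path with ``stable`` replaced by ``yt``):

    wget http://bitbucket.org/yt_analysis/yt/raw/yt/doc/install_script.sh
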
 By default, the bash install script will install an array of items, but there
 are additional packages that can be downloaded and installed (e.g. SciPy, enzo,
@@ -329,8 +357,8 @@
 
 .. _switching-between-yt-versions:
 
-Switching between yt-2.x and yt-3.x
------------------------------------
+Switching versions of yt: yt-2.x, yt-3.x, stable, and dev
+---------------------------------------------------------
 
 With the release of version 3.0 of yt, development of the legacy yt 2.x series
 has been relegated to bugfixes.  That said, we will continue supporting the 2.x
@@ -356,12 +384,8 @@
   hg update <desired-version>
   python setup.py develop
 
-Valid versions to jump to are:
+Valid versions to jump to are described in :ref:`branches-of-yt`.
 
-* ``yt`` -- The latest *dev* changes in yt-3.x (can be unstable)
-* ``stable`` -- The latest stable release of yt-3.x
-* ``yt-2.x`` -- The latest stable release of yt-2.x
-    
 You can check which version of yt you have installed by invoking ``yt version``
 at the command line.  If you encounter problems, see :ref:`update-errors`.
 
@@ -387,11 +411,7 @@
   hg update <desired-version>
   python setup.py install --user --prefix=
 
-Valid versions to jump to are:
-
-* ``yt`` -- The latest *dev* changes in yt-3.x (can be unstable)
-* ``stable`` -- The latest stable release of yt-3.x
-* ``yt-2.x`` -- The latest stable release of yt-2.x
+Valid versions to jump to are described in :ref:`branches-of-yt`.
     
 You can check which version of yt you have installed by invoking ``yt version``
 at the command line.  If you encounter problems, see :ref:`update-errors`.

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -274,8 +274,8 @@
                                                     'b_thermal': thermal_b[lixel],
                                                     'redshift': field_data['redshift'][lixel],
                                                     'v_pec': peculiar_velocity})
-                    pbar.update(i)
-                pbar.finish()
+                pbar.update(i)
+            pbar.finish()
 
             del column_density, delta_lambda, thermal_b, \
                 center_bins, width_ratio, left_index, right_index

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -400,7 +400,7 @@
         The field used as the overdensity from which interpolation is done to 
         calculate virial quantities.
         Default: ("gas", "overdensity")
-    critical_density : float
+    critical_overdensity : float
         The value of the overdensity at which to evaluate the virial quantities.  
         Overdensity is with respect to the critical density.
         Default: 200

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -96,6 +96,10 @@
     r"""
     Calculates the weight average of a field or fields.
 
+    Returns a YTQuantity for each field requested; if one field is
+    requested, it returns a single YTQuantity; if many, it returns a list
+    of YTQuantities in the order of the listed fields.
+
     Where f is the field and w is the weight, the weighted average is
     Sum_i(f_i \* w_i) / Sum_i(w_i).
 
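As a usage sketch of the return behavior described above (sample
dataset path assumed):

    import yt

    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    ad = ds.all_data()
    # one field requested -> a single YTQuantity
    avg_T = ad.quantities.weighted_average_quantity(
        ("gas", "temperature"), weight=("gas", "cell_mass"))
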
@@ -173,8 +177,9 @@
 
 class TotalMass(TotalQuantity):
     r"""
-    Calculates the total mass in gas and particles. Returns a tuple where the
-    first part is total gas mass and the second part is total particle mass.
+    Calculates the total mass of the object. Returns a YTArray where the
+    first element is total gas mass and the second element is total particle 
+    mass.
 
     Examples
     --------
@@ -189,11 +194,14 @@
         fi = self.data_source.ds.field_info
         fields = []
         if ("gas", "cell_mass") in fi:
-            fields.append(("gas", "cell_mass"))
+            gas = super(TotalMass, self).__call__([('gas', 'cell_mass')])
+        else:
+            gas = self.data_source.ds.arr([0], 'g')
         if ("all", "particle_mass") in fi:
-            fields.append(("all", "particle_mass"))
-        rv = super(TotalMass, self).__call__(fields)
-        return rv
+            part = super(TotalMass, self).__call__([('all', 'particle_mass')])
+        else:
+            part = self.data_source.ds.arr([0], 'g')
+        return self.data_source.ds.arr([gas, part])
 
 class CenterOfMass(DerivedQuantity):
     r"""
@@ -330,7 +338,10 @@
 class WeightedVariance(DerivedQuantity):
     r"""
     Calculates the weighted variance and weighted mean for a field
-    or list of fields.
+    or list of fields. Returns a YTArray for each field requested; if one
+    field is requested, it returns a single YTArray; if many, a list of
+    YTArrays in the order of the listed fields.  The first element of each
+    YTArray is the weighted variance, and the second is the weighted mean.
 
     Where f is the field, w is the weight, and <f_w> is the weighted mean,
     the weighted variance is
@@ -384,10 +395,10 @@
             my_mean = values[i]
             my_var2 = values[i + int(len(values) / 2)]
             all_mean = (my_weight * my_mean).sum(dtype=np.float64) / all_weight
-            rvals.append(np.sqrt((my_weight * (my_var2 +
-                                               (my_mean - all_mean)**2)).sum(dtype=np.float64) /
-                                               all_weight))
-            rvals.append(all_mean)
+            rvals.append(self.data_source.ds.arr([(np.sqrt((my_weight * 
+                                                 (my_var2 + (my_mean - 
+                                                  all_mean)**2)).sum(dtype=np.float64) 
+                                                  / all_weight)), all_mean]))
         return rvals
 
 class AngularMomentumVector(DerivedQuantity):
@@ -395,6 +406,7 @@
     Calculates the angular momentum vector, using gas and/or particles.
 
     The angular momentum vector is the mass-weighted mean specific angular momentum.
+    Returns a YTArray of the vector.
 
     Parameters
     ----------
@@ -416,10 +428,6 @@
 
     """
     def count_values(self, use_gas=True, use_particles=True):
-        use_gas &= \
-          (("gas", "cell_mass") in self.data_source.ds.field_info)
-        use_particles &= \
-          (("all", "particle_mass") in self.data_source.ds.field_info)
         num_vals = 0
         if use_gas: num_vals += 4
         if use_particles: num_vals += 4
@@ -453,11 +461,15 @@
             jy += values.pop(0).sum(dtype=np.float64)
             jz += values.pop(0).sum(dtype=np.float64)
             m  += values.pop(0).sum(dtype=np.float64)
-        return (jx / m, jy / m, jz / m)
+        return self.data_source.ds.arr([jx / m, jy / m, jz / m])
 
 class Extrema(DerivedQuantity):
     r"""
     Calculates the min and max value of a field or list of fields.
+    Returns a YTArray for each field requested.  If one field is requested,
+    a single YTArray is returned; if many, a list of YTArrays in the order
+    of the field list.  The first element of each YTArray is the minimum of
+    the field and the second is the maximum.
 
     Parameters
     ----------
@@ -500,7 +512,7 @@
 
     def reduce_intermediate(self, values):
         # The values get turned into arrays here.
-        return [(mis.min(), mas.max() )
+        return [self.data_source.ds.arr([mis.min(), mas.max()])
                 for mis, mas in zip(values[::2], values[1::2])]
 
 class MaxLocation(DerivedQuantity):

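A usage sketch of the Extrema return values (sample dataset path
assumed):

    import yt

    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
    ad = ds.all_data()
    # one field -> a single YTArray of [min, max]
    rho_min, rho_max = ad.quantities.extrema(("gas", "density"))
    # several fields -> a list of such YTArrays, in field order
    ext = ad.quantities.extrema([("gas", "density"), ("gas", "temperature")])
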
diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/fields/field_info_container.py
--- a/yt/fields/field_info_container.py
+++ b/yt/fields/field_info_container.py
@@ -126,7 +126,7 @@
         else:
             sml_name = None
         new_aliases = []
-        for ptype2, alias_name in self.keys():
+        for ptype2, alias_name in list(self):
             if ptype2 != ptype:
                 continue
             if alias_name not in sph_whitelist_fields:

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/athena/data_structures.py
--- a/yt/frontends/athena/data_structures.py
+++ b/yt/frontends/athena/data_structures.py
@@ -24,18 +24,50 @@
     GridIndex
 from yt.data_objects.static_output import \
            Dataset
-from yt.utilities.definitions import \
-    mpc_conversion, sec_conversion
 from yt.utilities.lib.misc_utilities import \
     get_box_grids_level
 from yt.geometry.geometry_handler import \
     YTDataChunk
+from yt.extern.six import PY3
 
 from .fields import AthenaFieldInfo
 from yt.units.yt_array import YTQuantity
 from yt.utilities.decompose import \
     decompose_array, get_psize
 
+def chk23(strin):
+    if PY3:
+        return strin.encode('utf-8')
+    else:
+        return strin
+
+def str23(strin):
+    if PY3:
+        if isinstance(strin, list):
+            return [s.decode('utf-8') for s in strin]
+        else:
+            return strin.decode('utf-8')
+    else:
+        return strin
+
+def check_readline(fl):
+    line = fl.readline()
+    chk = chk23("SCALARS")
+    if chk in line and not line.startswith(chk):
+        line = line[line.find(chk):]
+    chk = chk23("VECTORS")
+    if chk in line and not line.startswith(chk):
+        line = line[line.find(chk):]
+    return line
+
+def check_break(line):
+    splitup = line.strip().split()
+    do_break = chk23('SCALAR') in splitup
+    do_break = (chk23('VECTOR') in splitup) | do_break
+    do_break = (chk23('TABLE') in splitup) | do_break
+    do_break = (len(line) == 0) | do_break
+    return do_break
+
 def _get_convert(fname):
     def _conv(data):
         return data.convert(fname)
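
A sketch of how the helpers above are meant to be used: the VTK header
is read from a file opened in binary mode, so on Python 3 readline()
yields bytes.  chk23 converts a literal to the comparable type, and
str23 converts what was read back to a native str:

    line = b"SCALARS density float\n"   # what readline() gives on Python 3
    splitup = line.strip().split()
    if chk23("SCALARS") in splitup:
        field = str23(splitup[1])       # -> "density"
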
@@ -77,92 +109,51 @@
         return "AthenaGrid_%04i (%s)" % (self.id, self.ActiveDimensions)
 
 def parse_line(line, grid):
-    from sys import version
     # grid is a dictionary
-    from sys import version
     splitup = line.strip().split()
-    if version < '3':
-        if "vtk" in splitup:
-            grid['vtk_version'] = splitup[-1]
-        elif "time=" in splitup:
-            time_index = splitup.index("time=")
-            grid['time'] = float(splitup[time_index+1].rstrip(','))
-            grid['level'] = int(splitup[time_index+3].rstrip(','))
-            grid['domain'] = int(splitup[time_index+5].rstrip(','))                        
-        elif "DIMENSIONS" in splitup:
-            grid['dimensions'] = np.array(splitup[-3:]).astype('int')
-        elif "ORIGIN" in splitup:
-            grid['left_edge'] = np.array(splitup[-3:]).astype('float64')
-        elif "SPACING" in splitup:
-            grid['dds'] = np.array(splitup[-3:]).astype('float64')
-        elif "CELL_DATA" in splitup:
-            grid["ncells"] = int(splitup[-1])
-        elif "SCALARS" in splitup:
-            field = splitup[1]
-            grid['read_field'] = field
-            grid['read_type'] = 'scalar'
-        elif "VECTORS" in splitup:
-            field = splitup[1]
-            grid['read_field'] = field
-            grid['read_type'] = 'vector'
-    else:
-        if b"vtk" in splitup:
-            grid['vtk_version'] = splitup[-1].decode('utf-8')
-        elif b"time=" in splitup:
-            time_index = splitup.index(b"time=")
-            field = splitup[time_index+1].decode('utf-8')
-            field = field.rstrip(',')
-            grid['time'] = float(field)
-            field = splitup[time_index+3].decode('utf-8')
-            field = field.rstrip(',')
-            grid['level'] = int(field)
-            field = splitup[time_index+5].decode('utf-8')
-            field = field.rstrip(',')
-            grid['domain'] = int(field)                        
-        elif b"DIMENSIONS" in splitup:
-            field = splitup[-3:]
-            for i in range(0,len(field)):
-                field[i] = field[i].decode('utf-8')
-            grid['dimensions'] = np.array(field).astype('int')
-        elif b"ORIGIN" in splitup:
-            field = splitup[-3:]
-            for i in range(0,len(field)):
-                field[i] = field[i].decode('utf-8')
-            grid['left_edge'] = np.array(field).astype('float64')
-        elif b"SPACING" in splitup:
-            field = splitup[-3:]
-            for i in range(0,len(field)):
-                field[i] = field[i].decode('utf-8')
-            grid['dds'] = np.array(field).astype('float64')
-        elif b"CELL_DATA" in splitup:
-            grid["ncells"] = int(splitup[-1].decode('utf-8'))
-        elif b"SCALARS" in splitup:
-            field = splitup[1].decode('utf-8')
-            grid['read_field'] = field
-            grid['read_type'] = 'scalar'
-        elif b"VECTORS" in splitup:
-            field = splitup[1].decode('utf-8')
-            grid['read_field'] = field
-            grid['read_type'] = 'vector'
-
+    if chk23("vtk") in splitup:
+        grid['vtk_version'] = str23(splitup[-1])
+    elif chk23("time=") in splitup:
+        time_index = splitup.index(chk23("time="))
+        grid['time'] = float(str23(splitup[time_index+1]).rstrip(','))
+        grid['level'] = int(str23(splitup[time_index+3]).rstrip(','))
+        grid['domain'] = int(str23(splitup[time_index+5]).rstrip(','))
+    elif chk23("DIMENSIONS") in splitup:
+        grid['dimensions'] = np.array(str23(splitup[-3:])).astype('int')
+    elif chk23("ORIGIN") in splitup:
+        grid['left_edge'] = np.array(str23(splitup[-3:])).astype('float64')
+    elif chk23("SPACING") in splitup:
+        grid['dds'] = np.array(str23(splitup[-3:])).astype('float64')
+    elif chk23("CELL_DATA") in splitup or chk23("POINT_DATA") in splitup:
+        grid["ncells"] = int(str23(splitup[-1]))
+    elif chk23("SCALARS") in splitup:
+        field = str23(splitup[1])
+        grid['read_field'] = field
+        grid['read_type'] = 'scalar'
+    elif chk23("VECTORS") in splitup:
+        field = str23(splitup[1])
+        grid['read_field'] = field
+        grid['read_type'] = 'vector'
+    elif chk23("time") in splitup:
+        time_index = splitup.index(chk23("time"))
+        grid['time'] = float(str23(splitup[time_index+1]))
+    
 class AthenaHierarchy(GridIndex):
 
     grid = AthenaGrid
     _dataset_type='athena'
     _data_file = None
-    
+
     def __init__(self, ds, dataset_type='athena'):
-        from sys import version
         self.dataset = weakref.proxy(ds)
         self.directory = os.path.dirname(self.dataset.filename)
         self.dataset_type = dataset_type
         # for now, the index file is the dataset!
         self.index_filename = os.path.join(os.getcwd(), self.dataset.filename)
-        #self.directory = os.path.dirname(self.index_filename)
-        if version < '3':
+        if not PY3:
             self._fhandle = file(self.index_filename,'rb')
         else:
-            self._fhandle = open(self.index_filename,'rb')            
+            self._fhandle = open(self.index_filename,'rb')
         GridIndex.__init__(self, ds, dataset_type)
 
         self._fhandle.close()
@@ -170,39 +161,19 @@
     def _detect_output_fields(self):
         field_map = {}
         f = open(self.index_filename,'rb')
-        from sys import version
-        def chk23(strin):
-            if version < '3':
-                return strin
-            else:
-                return strin.encode('utf-8')
-        def check_readline(fl):
-            line = fl.readline()
-            chk = chk23("SCALARS")
-            if chk in line and not line.startswith(chk):
-                line = line[line.find(chk):]
-            chk = chk23("VECTORS")
-            if chk in line and not line.startswith(chk):
-                line = line[line.find(chk):]
-            return line
         line = check_readline(f)
         chkwhile = chk23('')
         while line != chkwhile:
             splitup = line.strip().split()
             chkd = chk23("DIMENSIONS")
             chkc = chk23("CELL_DATA")
+            chkp = chk23("POINT_DATA")
             if chkd in splitup:
-                field = splitup[-3:]
-                if version >= '3':
-                    for i in range(0,len(field)):
-                        field[i] = field[i].decode('utf-8')
+                field = str23(splitup[-3:])
                 grid_dims = np.array(field).astype('int')
                 line = check_readline(f)
-            elif chkc in splitup:
-                if version < '3':
-                    grid_ncells = int(splitup[-1])
-                else:
-                    grid_ncells = int(splitup[-1].decode('utf-8'))
+            elif chkc in splitup or chkp in splitup:
+                grid_ncells = int(str23(splitup[-1]))
                 line = check_readline(f)
                 if np.prod(grid_dims) != grid_ncells:
                     grid_dims -= 1
@@ -221,32 +192,23 @@
             chks = chk23('SCALARS')
             chkv = chk23('VECTORS')
             if chks in line and chks not in splitup:
-                splitup = line[line.find(chks):].strip().split()
-                if version >='3':
-                    splitup = splitup.decode('utf-8')
+                splitup = str23(line[line.find(chks):].strip().split())
             if chkv in line and chkv not in splitup:
-                splitup = line[line.find(chkv):].strip().split()
-                if version >='3':
-                    splitup = splitup.decode('utf-8')
+                splitup = str23(line[line.find(chkv):].strip().split())
             if chks in splitup:
-                if version < '3':
-                    field = ("athena", splitup[1])
-                else:
-                    field = ("athena", splitup[1].decode('utf-8'))
+                field = ("athena", str23(splitup[1]))
+                dtype = str23(splitup[-1]).lower()
                 if not read_table:
                     line = check_readline(f) # Read the lookup table line
                     read_table = True
-                field_map[field] = ('scalar', f.tell() - read_table_offset)
+                field_map[field] = ('scalar', f.tell() - read_table_offset, dtype)
                 read_table=False
-
             elif chkv in splitup:
-                if version < '3':
-                    field = splitup[1]
-                else:
-                    field = splitup[1].decode('utf-8')
+                field = str23(splitup[1])
+                dtype = str23(splitup[-1]).lower()
                 for ax in 'xyz':
                     field_map[("athena","%s_%s" % (field, ax))] =\
-                            ('vector', f.tell() - read_table_offset)
+                            ('vector', f.tell() - read_table_offset, dtype)
             line = check_readline(f)
 
         f.close()
@@ -266,13 +228,7 @@
         line = f.readline()
         while grid['read_field'] is None:
             parse_line(line, grid)
-            if "SCALAR" in line.strip().split():
-                break
-            if "VECTOR" in line.strip().split():
-                break
-            if 'TABLE' in line.strip().split():
-                break
-            if len(line) == 0: break
+            if check_break(line): break
             line = f.readline()
         f.close()
 
@@ -282,7 +238,7 @@
             grid['dimensions'] -= 1
             grid['dimensions'][grid['dimensions']==0]=1
         if np.prod(grid['dimensions']) != grid['ncells']:
-            mylog.error('product of dimensions %i not equal to number of cells %i' % 
+            mylog.error('product of dimensions %i not equal to number of cells %i' %
                   (np.prod(grid['dimensions']), grid['ncells']))
             raise TypeError
 
@@ -292,10 +248,10 @@
         if dataset_dir.endswith("id0"):
             dname = "id0/"+dname
             dataset_dir = dataset_dir[:-3]
-                        
+
         gridlistread = glob.glob(os.path.join(dataset_dir, 'id*/%s-id*%s' % (dname[4:-9],dname[-9:])))
         gridlistread.insert(0,self.index_filename)
-        if 'id0' in dname :
+        if 'id0' in dname:
             gridlistread += glob.glob(os.path.join(dataset_dir, 'id*/lev*/%s*-lev*%s' % (dname[4:-9],dname[-9:])))
         else :
             gridlistread += glob.glob(os.path.join(dataset_dir, 'lev*/%s*-lev*%s' % (dname[:-9],dname[-9:])))
@@ -318,26 +274,35 @@
             line = f.readline()
             while gridread['read_field'] is None:
                 parse_line(line, gridread)
-                if "SCALAR" in line.strip().split():
-                    break
-                if "VECTOR" in line.strip().split():
-                    break 
-                if 'TABLE' in line.strip().split():
-                    break
-                if len(line) == 0: break
+                splitup = line.strip().split()
+                if chk23('X_COORDINATES') in splitup:
+                    gridread['left_edge'] = np.zeros(3)
+                    gridread['dds'] = np.zeros(3)
+                    v = np.fromfile(f, dtype='>f8', count=2)
+                    gridread['left_edge'][0] = v[0]-0.5*(v[1]-v[0])
+                    gridread['dds'][0] = v[1]-v[0]
+                if chk23('Y_COORDINATES') in splitup:
+                    v = np.fromfile(f, dtype='>f8', count=2)
+                    gridread['left_edge'][1] = v[0]-0.5*(v[1]-v[0])
+                    gridread['dds'][1] = v[1]-v[0]
+                if chk23('Z_COORDINATES') in splitup:
+                    v = np.fromfile(f, dtype='>f8', count=2)
+                    gridread['left_edge'][2] = v[0]-0.5*(v[1]-v[0])
+                    gridread['dds'][2] = v[1]-v[0]
+                if check_break(line): break
                 line = f.readline()
             f.close()
-            levels[j] = gridread['level']
+            levels[j] = gridread.get('level', 0)
             glis[j,0] = gridread['left_edge'][0]
             glis[j,1] = gridread['left_edge'][1]
             glis[j,2] = gridread['left_edge'][2]
-            # It seems some datasets have a mismatch between ncells and 
+            # It seems some datasets have a mismatch between ncells and
             # the actual grid dimensions.
             if np.prod(gridread['dimensions']) != gridread['ncells']:
                 gridread['dimensions'] -= 1
                 gridread['dimensions'][gridread['dimensions']==0]=1
             if np.prod(gridread['dimensions']) != gridread['ncells']:
-                mylog.error('product of dimensions %i not equal to number of cells %i' % 
+                mylog.error('product of dimensions %i not equal to number of cells %i' %
                       (np.prod(gridread['dimensions']), gridread['ncells']))
                 raise TypeError
             gdims[j,0] = gridread['dimensions'][0]
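
A worked instance of the X/Y/Z_COORDINATES logic above: the file stores
cell-center coordinates, so the first two values give the cell width
and, shifted half a cell back, the domain edge:

    import numpy as np

    v = np.array([0.5, 1.5])        # centers of the first two cells
    dds = v[1] - v[0]               # cell width: 1.0
    left_edge = v[0] - 0.5 * dds    # left edge of the domain: 0.0
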
@@ -346,7 +311,7 @@
             # Setting dds=1 for non-active dimensions in 1D/2D datasets
             gridread['dds'][gridread['dimensions']==1] = 1.
             gdds[j,:] = gridread['dds']
-            
+
             j=j+1
 
         gres = glis + gdims*gdds
@@ -357,10 +322,10 @@
         new_dre = np.max(gres,axis=0)
         self.dataset.domain_right_edge[:] = np.round(new_dre, decimals=12)[:]
         self.dataset.domain_width = \
-                (self.dataset.domain_right_edge - 
+                (self.dataset.domain_right_edge -
                  self.dataset.domain_left_edge)
         self.dataset.domain_center = \
-                0.5*(self.dataset.domain_left_edge + 
+                0.5*(self.dataset.domain_left_edge +
                      self.dataset.domain_right_edge)
         self.dataset.domain_dimensions = \
                 np.round(self.dataset.domain_width/gdds[0]).astype('int')
@@ -434,7 +399,6 @@
                                                         dx*self.grid_dimensions,
                                                         decimals=12),
                                                "code_length")
-        
         if self.dataset.dimensionality <= 2:
             self.grid_right_edge[:,2] = dre[2]
         if self.dataset.dimensionality == 1:
@@ -543,13 +507,22 @@
         line = self._handle.readline()
         while grid['read_field'] is None:
             parse_line(line, grid)
-            if "SCALAR" in line.strip().split():
-                break
-            if "VECTOR" in line.strip().split():
-                break
-            if 'TABLE' in line.strip().split():
-                break
-            if len(line) == 0: break
+            splitup = line.strip().split()
+            if chk23('X_COORDINATES') in splitup:
+                grid['left_edge'] = np.zeros(3)
+                grid['dds'] = np.zeros(3)
+                v = np.fromfile(self._handle, dtype='>f8', count=2)
+                grid['left_edge'][0] = v[0]-0.5*(v[1]-v[0])
+                grid['dds'][0] = v[1]-v[0]
+            if chk23('Y_COORDINATES') in splitup:
+                v = np.fromfile(self._handle, dtype='>f8', count=2)
+                grid['left_edge'][1] = v[0]-0.5*(v[1]-v[0])
+                grid['dds'][1] = v[1]-v[0]
+            if chk23('Z_COORDINATES') in splitup:
+                v = np.fromfile(self._handle, dtype='>f8', count=2)
+                grid['left_edge'][2] = v[0]-0.5*(v[1]-v[0])
+                grid['dds'][2] = v[1]-v[0]
+            if check_break(line): break
             line = self._handle.readline()
 
         self.domain_left_edge = grid['left_edge']
@@ -607,7 +580,7 @@
         if "gamma" in self.specified_parameters:
             self.parameters["Gamma"] = self.specified_parameters["gamma"]
         else:
-            self.parameters["Gamma"] = 5./3. 
+            self.parameters["Gamma"] = 5./3.
         self.geometry = self.specified_parameters.get("geometry", "cartesian")
         self._handle.close()
 

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/athena/io.py
--- a/yt/frontends/athena/io.py
+++ b/yt/frontends/athena/io.py
@@ -16,8 +16,11 @@
            BaseIOHandler
 import numpy as np
 from yt.funcs import mylog, defaultdict
+from .data_structures import chk23
 
-float_size = np.dtype(">f4").itemsize
+float_size = {"float":np.dtype(">f4").itemsize,
+              "double":np.dtype(">f8").itemsize}
+
 axis_list = ["_x","_y","_z"]
 
 class IOHandlerAthena(BaseIOHandler):
@@ -48,24 +51,28 @@
             grid0_ncells = np.prod(grid.index.grids[0].read_dims)
             read_table_offset = get_read_table_offset(f)
             for field in fields:
-                dtype, offsetr = grid.index._field_map[field]
+                ftype, offsetr, dtype = grid.index._field_map[field]
                 if grid_ncells != grid0_ncells:
                     offset = offsetr + ((grid_ncells-grid0_ncells) * (offsetr//grid0_ncells))
                 if grid_ncells == grid0_ncells:
                     offset = offsetr
                 offset = int(offset) # Casting to be certain.
-                file_offset = grid.file_offset[2]*read_dims[0]*read_dims[1]*float_size
+                file_offset = grid.file_offset[2]*read_dims[0]*read_dims[1]*float_size[dtype]
                 xread = slice(grid.file_offset[0],grid.file_offset[0]+grid_dims[0])
                 yread = slice(grid.file_offset[1],grid.file_offset[1]+grid_dims[1])
                 f.seek(read_table_offset+offset+file_offset)
-                if dtype == 'scalar':
+                if dtype == 'float':
+                    dt = '>f4'
+                elif dtype == 'double':
+                    dt = '>f8'
+                if ftype == 'scalar':
                     f.seek(read_table_offset+offset+file_offset)
-                    v = np.fromfile(f, dtype='>f4',
+                    v = np.fromfile(f, dtype=dt,
                                     count=grid_ncells).reshape(read_dims,order='F')
-                if dtype == 'vector':
+                if ftype == 'vector':
                     vec_offset = axis_list.index(field[-1][-2:])
                     f.seek(read_table_offset+offset+3*file_offset)
-                    v = np.fromfile(f, dtype='>f4', count=3*grid_ncells)
+                    v = np.fromfile(f, dtype=dt, count=3*grid_ncells)
                     v = v[vec_offset::3].reshape(read_dims,order='F')
                 if grid.ds.field_ordering == 1:
                     data[grid.id][field] = v[xread,yread,:].T.astype("float64")
@@ -104,15 +111,12 @@
         return rv
 
 def get_read_table_offset(f):
-    from sys import version
     line = f.readline()
     while True:
         splitup = line.strip().split()
-        if version < '3':
-            chk = 'CELL_DATA'
-        else:
-            chk = b'CELL_DATA'
-        if chk in splitup:
+        chkc = chk23('CELL_DATA')
+        chkp = chk23('POINT_DATA')
+        if chkc in splitup or chkp in splitup:
             f.readline()
             read_table_offset = f.tell()
             break

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/chombo/api.py
--- a/yt/frontends/chombo/api.py
+++ b/yt/frontends/chombo/api.py
@@ -33,7 +33,6 @@
     PlutoFieldInfo
 
 from .io import \
-    IOHandlerChomboHDF5,\
-    IOHandlerPlutoHDF5
+    IOHandlerChomboHDF5
 
 from . import tests

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/chombo/data_structures.py
--- a/yt/frontends/chombo/data_structures.py
+++ b/yt/frontends/chombo/data_structures.py
@@ -47,6 +47,7 @@
 class ChomboGrid(AMRGridPatch):
     _id_offset = 0
     __slots__ = ["_level_id", "stop_index"]
+
     def __init__(self, id, index, level, start, stop):
         AMRGridPatch.__init__(self, id, filename = index.index_filename,
                               index = index)
@@ -99,12 +100,6 @@
         self.domain_left_edge = ds.domain_left_edge
         self.domain_right_edge = ds.domain_right_edge
         self.dataset_type = dataset_type
-
-        if ds.dimensionality == 1:
-            self.dataset_type = "chombo1d_hdf5"
-        if ds.dimensionality == 2:
-            self.dataset_type = "chombo2d_hdf5"
-
         self.field_indexes = {}
         self.dataset = weakref.proxy(ds)
         # for now, the index file is the dataset!
@@ -114,7 +109,7 @@
         self._handle = ds._handle
 
         tr = self._handle['Chombo_global'].attrs.get("testReal", "float32")
-            
+
         self._levels = [key for key in self._handle.keys() if key.startswith('level')]
         GridIndex.__init__(self, ds, dataset_type)
 
@@ -256,17 +251,10 @@
         self._handle = HDF5FileHandler(filename)
         self.dataset_type = dataset_type
 
-        # look up the dimensionality of the dataset
-        D = self._handle['Chombo_global/'].attrs['SpaceDim']
-        if D == 1:
-            self.dataset_type = 'chombo1d_hdf5'
-        if D == 2:
-            self.dataset_type = 'chombo2d_hdf5'
-
         self.geometry = "cartesian"
         self.ini_filename = ini_filename
         self.fullplotdir = os.path.abspath(filename)
-        Dataset.__init__(self,filename, self.dataset_type,
+        Dataset.__init__(self, filename, self.dataset_type,
                          units_override=units_override)
         self.storage_filename = storage_filename
         self.cosmological_simulation = False
@@ -398,7 +386,7 @@
 
 class PlutoHierarchy(ChomboHierarchy):
 
-    def __init__(self, ds, dataset_type="pluto_chombo_native"):
+    def __init__(self, ds, dataset_type="chombo_hdf5"):
         ChomboHierarchy.__init__(self, ds, dataset_type)
 
     def _parse_index(self):
@@ -456,7 +444,7 @@
     _index_class = PlutoHierarchy
     _field_info_class = PlutoFieldInfo
 
-    def __init__(self, filename, dataset_type='pluto_chombo_native',
+    def __init__(self, filename, dataset_type='chombo_hdf5',
                  storage_filename = None, ini_filename = None,
                  units_override=None):
 

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/chombo/fields.py
--- a/yt/frontends/chombo/fields.py
+++ b/yt/frontends/chombo/fields.py
@@ -51,8 +51,11 @@
         ("X-magnfield", (b_units, ["magnetic_field_x"], None)),
         ("Y-magnfield", (b_units, ["magnetic_field_y"], None)),
         ("Z-magnfield", (b_units, ["magnetic_field_z"], None)),
-    )
-
+        ("directrad-dedt-density", (eden_units, ["directrad-dedt-density"], None)),
+        ("directrad-dpxdt-density", (mom_units, ["directrad-dpxdt-density"], None)),
+        ("directrad-dpydt-density", (mom_units, ["directrad-dpydt-density"], None)),
+        ("directrad-dpzdt-density", (mom_units, ["directrad-dpzdt-density"], None)),
+    )
     known_particle_fields = (
         ("particle_mass", ("code_mass", [], None)),
         ("particle_position_x", ("code_length", [], None)),

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/chombo/io.py
--- a/yt/frontends/chombo/io.py
+++ b/yt/frontends/chombo/io.py
@@ -75,7 +75,7 @@
         field_dict = {}
         for key, val in self._handle.attrs.items():
             if key.startswith('component_'):
-                comp_number = int(re.match('component_(\d)', key).groups()[0])
+                comp_number = int(re.match('component_(\d+)', key).groups()[0])
                 field_dict[val] = comp_number
         self._field_dict = field_dict
         return self._field_dict
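
Why the ``\d+`` matters: with the old pattern, a component index of two
or more digits was silently truncated:

    import re

    key = "component_12"
    # the old pattern 'component_(\d)' would capture only "1"
    comp_number = int(re.match(r'component_(\d+)', key).groups()[0])  # 12
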
@@ -93,15 +93,11 @@
         self._particle_field_index = field_dict
         return self._particle_field_index
 
-    def _read_field_names(self, grid):
-        ncomp = int(self._handle.attrs['num_components'])
-        fns = [c[1] for c in f.attrs.items()[-ncomp-1:-1]]
-
     def _read_data(self, grid, field):
         lstring = 'level_%i' % grid.Level
         lev = self._handle[lstring]
         dims = grid.ActiveDimensions
-        shape = grid.ActiveDimensions + 2*self.ghost
+        shape = dims + 2*self.ghost
         boxsize = shape.prod()
 
         if self._offsets is not None:
@@ -112,7 +108,7 @@
         stop = start + boxsize
         data = lev[self._data_string][start:stop]
         data_no_ghost = data.reshape(shape, order='F')
-        ghost_slice = [slice(g, d-g, None) for g, d in zip(self.ghost, grid.ActiveDimensions)]
+        ghost_slice = [slice(g, d-g, None) for g, d in zip(self.ghost, dims)]
         ghost_slice = ghost_slice[0:self.dim]
         return data_no_ghost[ghost_slice]
 
@@ -201,43 +197,6 @@
         return np.asarray(data[field_index::items_per_particle], dtype=np.float64, order='F')
 
 
-class IOHandlerChombo2DHDF5(IOHandlerChomboHDF5):
-    _dataset_type = "chombo2d_hdf5"
-    _offset_string = 'data:offsets=0'
-    _data_string = 'data:datatype=0'
-
-    def __init__(self, ds, *args, **kwargs):
-        BaseIOHandler.__init__(self, ds, *args, **kwargs)
-        self.ds = ds
-        self._handle = ds._handle
-        self.dim = 2
-        self._read_ghost_info()
-
-
-class IOHandlerChombo1DHDF5(IOHandlerChomboHDF5):
-    _dataset_type = "chombo1d_hdf5"
-    _offset_string = 'data:offsets=0'
-    _data_string = 'data:datatype=0'
-
-    def __init__(self, ds, *args, **kwargs):
-        BaseIOHandler.__init__(self, ds, *args, **kwargs)
-        self.ds = ds
-        self.dim = 1
-        self._handle = ds._handle
-        self._read_ghost_info()
-
-
-class IOHandlerPlutoHDF5(IOHandlerChomboHDF5):
-    _dataset_type = "pluto_chombo_native"
-    _offset_string = 'data:offsets=0'
-    _data_string = 'data:datatype=0'
-
-    def __init__(self, ds, *args, **kwargs):
-        BaseIOHandler.__init__(self, ds, *args, **kwargs)
-        self.ds = ds
-        self._handle = ds._handle
-
-
 def parse_orion_sinks(fn):
     '''
 

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/rockstar/data_structures.py
--- a/yt/frontends/rockstar/data_structures.py
+++ b/yt/frontends/rockstar/data_structures.py
@@ -73,7 +73,7 @@
             int(os.stat(self.parameter_filename)[stat.ST_CTIME])
         prefix = ".".join(self.parameter_filename.rsplit(".", 2)[:-2])
         self.filename_template = "%s.%%(num)s%s" % (prefix, self._suffix)
-        self.file_count = len(glob.glob(prefix + "*" + self._suffix))
+        self.file_count = len(glob.glob(prefix + ".*" + self._suffix))
         
         # Now we can set up things we already know.
         self.cosmological_simulation = 1

diff -r 8b398054b8fac7adf74d677330d6916ccf73f168 -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be yt/frontends/rockstar/io.py
--- a/yt/frontends/rockstar/io.py
+++ b/yt/frontends/rockstar/io.py
@@ -91,6 +91,7 @@
         morton = np.empty(pcount, dtype='uint64')
         mylog.debug("Initializing index % 5i (% 7i particles)",
                     data_file.file_id, pcount)
+        if pcount == 0: return morton
         ind = 0
         with open(data_file.filename, "rb") as f:
             f.seek(data_file._position_offset, os.SEEK_SET)
@@ -108,7 +109,7 @@
             # domain edges.  This helps alleviate that.
             np.clip(pos, self.ds.domain_left_edge + dx,
                          self.ds.domain_right_edge - dx, pos)
-            #del halos
+            del halos
             if np.any(pos.min(axis=0) < self.ds.domain_left_edge) or \
                np.any(pos.max(axis=0) > self.ds.domain_right_edge):
                 raise YTDomainOverflow(pos.min(axis=0),

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/eba1fe3fbcb2/
Changeset:   eba1fe3fbcb2
Branch:      yt
User:        jzuhone
Date:        2015-07-10 18:09:05+00:00
Summary:     Making yt update and yt instinfo work with Python 3
Affected #:  2 files

diff -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be -r eba1fe3fbcb274698521c5df139ccbe995fd58c8 yt/funcs.py
--- a/yt/funcs.py
+++ b/yt/funcs.py
@@ -479,10 +479,6 @@
     pass
 
 def update_hg(path, skip_rebuild = False):
-    if sys.version_info >= (3,0,0):
-        print("python-hglib does not currently work with Python 3,")
-        print("so this function is currently disabled.")
-        return -1
     try:
         import hglib
     except ImportError:
@@ -492,7 +488,7 @@
     f = open(os.path.join(path, "yt_updater.log"), "a")
     repo = hglib.open(path)
     repo.pull()
-    ident = repo.identify()
+    ident = repo.identify().decode("utf-8")
     if "+" in ident:
         print("Can't rebuild modules by myself.")
         print("You will have to do this yourself.  Here's a sample commands:")
@@ -519,10 +515,6 @@
     print("Updated successfully.")
 
 def get_hg_version(path):
-    if sys.version_info >= (3,0,0):
-        print("python-hglib does not currently work with Python 3,")
-        print("so this function is currently disabled.")
-        return -1
     try:
         import hglib
     except ImportError:

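A minimal sketch of the pattern above (assuming python-hglib is
installed): under Python 3 hglib returns bytes, so the output of
identify() must be decoded before string tests such as the "+"
dirty-working-copy check:

    import hglib

    repo = hglib.open(".")   # repository in the current directory
    ident = repo.identify().decode("utf-8")
    if "+" in ident:
        print("Working copy has uncommitted changes.")
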
diff -r 848720c0c1c65a9a0eecc1a6d1a5cde2174376be -r eba1fe3fbcb274698521c5df139ccbe995fd58c8 yt/utilities/command_line.py
--- a/yt/utilities/command_line.py
+++ b/yt/utilities/command_line.py
@@ -602,7 +602,7 @@
             print()
             print("---")
             print("Version = %s" % yt.__version__)
-            print("Changeset = %s" % vstring.strip())
+            print("Changeset = %s" % vstring.strip().decode("utf-8"))
             print("---")
             print()
             if "site-packages" not in path:


https://bitbucket.org/yt_analysis/yt/commits/50f6cf165b2d/
Changeset:   50f6cf165b2d
Branch:      yt
User:        jzuhone
Date:        2015-07-10 18:13:11+00:00
Summary:     A few more changes
Affected #:  2 files

diff -r eba1fe3fbcb274698521c5df139ccbe995fd58c8 -r 50f6cf165b2dd7b4f0472b937e65e923e29c2599 yt/funcs.py
--- a/yt/funcs.py
+++ b/yt/funcs.py
@@ -506,7 +506,7 @@
     p = subprocess.Popen([sys.executable, "setup.py", "build_ext", "-i"], cwd=path,
                         stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
     stdout, stderr = p.communicate()
-    f.write(stdout)
+    f.write(stdout.decode('utf-8'))
     f.write("\n\n")
     if p.returncode:
         print("BROKEN: See %s" % (os.path.join(path, "yt_updater.log")))

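The same pattern in isolation: subprocess pipes yield bytes on Python 3,
so the output must be decoded before being written to a text-mode log
file (log name taken from the diff above):

    import subprocess
    import sys

    p = subprocess.Popen([sys.executable, "-c", "print('ok')"],
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    stdout, _ = p.communicate()
    with open("yt_updater.log", "a") as f:
        f.write(stdout.decode("utf-8"))
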
diff -r eba1fe3fbcb274698521c5df139ccbe995fd58c8 -r 50f6cf165b2dd7b4f0472b937e65e923e29c2599 yt/utilities/command_line.py
--- a/yt/utilities/command_line.py
+++ b/yt/utilities/command_line.py
@@ -1019,7 +1019,7 @@
             print()
             print("---")
             print("Version = %s" % yt.__version__)
-            print("Changeset = %s" % vstring.strip())
+            print("Changeset = %s" % vstring.strip().decode("utf-8"))
             print("---")
             print()
             print("This installation CAN be automatically updated.")


https://bitbucket.org/yt_analysis/yt/commits/f06003cb29f0/
Changeset:   f06003cb29f0
Branch:      yt
User:        MatthewTurk
Date:        2015-07-10 23:49:31+00:00
Summary:     Merged in jzuhone/yt (pull request #1633)

Making yt update and yt instinfo work with Python 3
Affected #:  2 files

diff -r 7d4877bae33a4739b0f46c78a11eac5b7db711c1 -r f06003cb29f0f743c02cd8543da8385787151677 yt/funcs.py
--- a/yt/funcs.py
+++ b/yt/funcs.py
@@ -479,10 +479,6 @@
     pass
 
 def update_hg(path, skip_rebuild = False):
-    if sys.version_info >= (3,0,0):
-        print("python-hglib does not currently work with Python 3,")
-        print("so this function is currently disabled.")
-        return -1
     try:
         import hglib
     except ImportError:
@@ -492,7 +488,7 @@
     f = open(os.path.join(path, "yt_updater.log"), "a")
     repo = hglib.open(path)
     repo.pull()
-    ident = repo.identify()
+    ident = repo.identify().decode("utf-8")
     if "+" in ident:
         print("Can't rebuild modules by myself.")
         print("You will have to do this yourself.  Here's a sample commands:")
@@ -510,7 +506,7 @@
     p = subprocess.Popen([sys.executable, "setup.py", "build_ext", "-i"], cwd=path,
                         stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
     stdout, stderr = p.communicate()
-    f.write(stdout)
+    f.write(stdout.decode('utf-8'))
     f.write("\n\n")
     if p.returncode:
         print("BROKEN: See %s" % (os.path.join(path, "yt_updater.log")))
@@ -519,10 +515,6 @@
     print("Updated successfully.")
 
 def get_hg_version(path):
-    if sys.version_info >= (3,0,0):
-        print("python-hglib does not currently work with Python 3,")
-        print("so this function is currently disabled.")
-        return -1
     try:
         import hglib
     except ImportError:

diff -r 7d4877bae33a4739b0f46c78a11eac5b7db711c1 -r f06003cb29f0f743c02cd8543da8385787151677 yt/utilities/command_line.py
--- a/yt/utilities/command_line.py
+++ b/yt/utilities/command_line.py
@@ -602,7 +602,7 @@
             print()
             print("---")
             print("Version = %s" % yt.__version__)
-            print("Changeset = %s" % vstring.strip())
+            print("Changeset = %s" % vstring.strip().decode("utf-8"))
             print("---")
             print()
             if "site-packages" not in path:
@@ -1019,7 +1019,7 @@
             print()
             print("---")
             print("Version = %s" % yt.__version__)
-            print("Changeset = %s" % vstring.strip())
+            print("Changeset = %s" % vstring.strip().decode("utf-8"))
             print("---")
             print()
             print("This installation CAN be automatically updated.")

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


