[yt-svn] commit/yt: 31 new changesets

Bitbucket commits-noreply at bitbucket.org
Thu Feb 14 10:45:59 PST 2013


31 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/31361cabdc34/
changeset:   31361cabdc34
branch:      yt
user:        atmyers
date:        2013-02-06 21:34:24
summary:     setting the 'cosmological simulation' parameter in the Chombo frontend to False by default
affected #:  1 file

diff -r 5a379eb6dc53e51731a82be30308812d8676d68c -r 31361cabdc3488e4fb692afd40ab013771d13c8b yt/frontends/chombo/data_structures.py
--- a/yt/frontends/chombo/data_structures.py
+++ b/yt/frontends/chombo/data_structures.py
@@ -220,6 +220,7 @@
         self.fullplotdir = os.path.abspath(filename)
         StaticOutput.__init__(self,filename,data_style)
         self.storage_filename = storage_filename
+        self.cosmological_simulation = False
         fileh.close()
 
     def _set_units(self):


https://bitbucket.org/yt_analysis/yt/commits/adc5ab9dc397/
changeset:   adc5ab9dc397
branch:      yt
user:        atmyers
date:        2013-02-06 22:44:19
summary:     refactoring the gdf writer a bit and adding the option to write single fields to backup files
affected #:  1 file

diff -r 31361cabdc3488e4fb692afd40ab013771d13c8b -r adc5ab9dc397ffedad5d56b683b46d8c339ee35d yt/utilities/grid_data_format/writer.py
--- a/yt/utilities/grid_data_format/writer.py
+++ b/yt/utilities/grid_data_format/writer.py
@@ -46,6 +46,74 @@
         The path of the file to output.
 
     """
+
+    f = _create_new_gdf(pf, gdf_path, data_author, data_comment,
+                       particle_type_name)
+
+    # now add the fields one-by-one
+    for field_name in pf.h.field_list:
+        _write_field_to_gdf(pf, f, field_name, particle_type_name)
+
+    # don't forget to close the file.
+    f.close()
+
+def save_field(pf, field_name):
+    backup_filename = pf.backup_filename
+    if os.path.exists(backup_filename):
+        # backup file already exists, open it
+        f = h5py.File(backup_filename, "a")
+    else:
+        # backup file does not exist, create it
+        f = _create_new_gdf(pf, backup_filename, data_author=None, data_comment=None,
+                       particle_type_name="dark_matter")
+
+    # now save the field
+    _write_field_to_gdf(pf, f, field_name, particle_type_name="dark_matter")
+
+    # don't forget to close the file.
+    f.close()
+        
+def _write_field_to_gdf(pf, fhandle, field_name, particle_type_name):
+
+    # add field info to field_types group
+    g = fhandle["field_types"]
+    # create the subgroup with the field's name
+    sg = g.create_group(field_name)
+
+    # grab the display name and units from the field info container.
+    display_name = pf.field_info[field_name].display_name
+    units = pf.field_info[field_name].get_units()
+
+    # check that they actually contain something...
+    if display_name:
+        sg.attrs["field_name"] = display_name
+    else:
+        sg.attrs["field_name"] = field_name
+    if units:
+        sg.attrs["field_units"] = units
+    else:
+        sg.attrs["field_units"] = "None"
+    # @todo: the values must be in CGS already right?
+    sg.attrs["field_to_cgs"] = 1.0
+    # @todo: is this always true?
+    sg.attrs["staggering"] = 0
+
+    # now add actual data, grid by grid
+    g = fhandle["data"]     
+    for grid in pf.h.grids:
+        grid_group = g["grid_%010i" % grid.id]
+        particles_group = grid_group["particles"]
+        pt_group = particles_group[particle_type_name]
+        # add the field data to the grid group
+        # Check if this is a real field or particle data.
+        field_obj = pf.field_info[field_name]
+        if field_obj.particle_type:  # particle data
+            pt_group[field_name] = grid.get_data(field_name)
+        else:  # a field
+            grid_group[field_name] = grid.get_data(field_name)
+
+def _create_new_gdf(pf, gdf_path, data_author=None, data_comment=None,
+                   particle_type_name="dark_matter"):
     # Make sure we have the absolute path to the file first
     gdf_path = os.path.abspath(gdf_path)
 
@@ -100,29 +168,6 @@
     ###
     g = f.create_group("field_types")
 
-    # Which field list should we iterate over?
-    for field_name in pf.h.field_list:
-        # create the subgroup with the field's name
-        sg = g.create_group(field_name)
-
-        # grab the display name and units from the field info container.
-        display_name = pf.field_info[field_name].display_name
-        units = pf.field_info[field_name].get_units()
-
-        # check that they actually contain something...
-        if display_name:
-            sg.attrs["field_name"] = display_name
-        else:
-            sg.attrs["field_name"] = field_name
-        if units:
-            sg.attrs["field_units"] = units
-        else:
-            sg.attrs["field_units"] = "None"
-        # @todo: the values must be in CGS already right?
-        sg.attrs["field_to_cgs"] = 1.0
-        # @todo: is this always true?
-        sg.attrs["staggering"] = 0
-
     ###
     # "particle_types" group
     ###
@@ -147,25 +192,14 @@
     ###
     # "data" group -- where we should spend the most time
     ###
+    
     g = f.create_group("data")
-
     for grid in pf.h.grids:
         # add group for this grid
-
         grid_group = g.create_group("grid_%010i" % grid.id)
         # add group for the particles on this grid
         particles_group = grid_group.create_group("particles")
         pt_group = particles_group.create_group(particle_type_name)
 
-        # add the field data to the grid group
-        for field_name in pf.h.field_list:
-            # Check if this is a real field or particle data.
-            field_obj = pf.field_info[field_name]
+    return f
 
-            if field_obj.particle_type:  # particle data
-                pt_group[field_name] = grid.get_data(field_name)
-            else:  # a field
-                grid_group[field_name] = grid.get_data(field_name)
-
-    # don't forget to close the file.
-    f.close()


https://bitbucket.org/yt_analysis/yt/commits/2124bcf32368/
changeset:   2124bcf32368
branch:      yt
user:        atmyers
date:        2013-02-06 22:45:08
summary:     adding a backup_filename parameter to the chombo frontend
affected #:  1 file

diff -r adc5ab9dc397ffedad5d56b683b46d8c339ee35d -r 2124bcf3236800b20ce10a21e3b11aa232e957e3 yt/frontends/chombo/data_structures.py
--- a/yt/frontends/chombo/data_structures.py
+++ b/yt/frontends/chombo/data_structures.py
@@ -220,6 +220,7 @@
         self.fullplotdir = os.path.abspath(filename)
         StaticOutput.__init__(self,filename,data_style)
         self.storage_filename = storage_filename
+        self.backup_filename  = self.fullplotdir + '_backup.gdf'
         self.cosmological_simulation = False
         fileh.close()
 


https://bitbucket.org/yt_analysis/yt/commits/2b305e8d52e1/
changeset:   2b305e8d52e1
branch:      yt
user:        atmyers
date:        2013-02-07 00:57:45
summary:     modifying the io routines for the Chombo frontend to check the backup file
affected #:  1 file

diff -r 2124bcf3236800b20ce10a21e3b11aa232e957e3 -r 2b305e8d52e1c2e5725901bcd47a38b8ce8ba892 yt/frontends/chombo/io.py
--- a/yt/frontends/chombo/io.py
+++ b/yt/frontends/chombo/io.py
@@ -50,20 +50,30 @@
         fhandle.close()
     
     def _read_data_set(self,grid,field):
-        fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
+        # try to read from backup file first
+        try:
+            backup_filename = grid.pf.backup_filename
+            fhandle = h5py.File(backup_filename, 'r')
+            g = fhandle["data"]
+            grid_group = g["grid_%010i" % grid.id]
+            data = grid_group[field][:]
+            fhandle.close()
+            return data
+        except:
+            fhandle = h5py.File(grid.hierarchy.hierarchy_filename,'r')
 
-        field_dict = self._field_dict(fhandle)
-        lstring = 'level_%i' % grid.Level
-        lev = fhandle[lstring]
-        dims = grid.ActiveDimensions
-        boxsize = dims.prod()
+            field_dict = self._field_dict(fhandle)
+            lstring = 'level_%i' % grid.Level
+            lev = fhandle[lstring]
+            dims = grid.ActiveDimensions
+            boxsize = dims.prod()
         
-        grid_offset = lev[self._offset_string][grid._level_id]
-        start = grid_offset+field_dict[field]*boxsize
-        stop = start + boxsize
-        data = lev[self._data_string][start:stop]
+            grid_offset = lev[self._offset_string][grid._level_id]
+            start = grid_offset+field_dict[field]*boxsize
+            stop = start + boxsize
+            data = lev[self._data_string][start:stop]
 
-        fhandle.close()
+            fhandle.close()
         return data.reshape(dims, order='F')
 
     def _read_data_slice(self, grid, field, axis, coord):
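The change above makes `_read_data_set` try the backup sidecar first and fall back to the native Chombo file only when that lookup fails. A toy sketch of the fall-back pattern (plain dicts stand in for the two HDF5 files; note that the commit's bare `except:` would also swallow unrelated I/O errors, so this sketch catches only the missing-key failure):

```python
def read_field(field, backup, native):
    # Prefer the backup/sidecar store; fall back to the native file
    # when the field is not present there.
    try:
        return backup[field]
    except KeyError:
        return native[field]

backup = {"Entropy": [5.0]}        # stands in for the backup .gdf file
native = {"Density": [1.0, 2.0]}   # stands in for the Chombo HDF5 file
from_backup = read_field("Entropy", backup, native)   # found in backup
from_native = read_field("Density", backup, native)   # falls back
```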


https://bitbucket.org/yt_analysis/yt/commits/cd792f88aca4/
changeset:   cd792f88aca4
branch:      yt
user:        atmyers
date:        2013-02-07 00:58:50
summary:     detecting fields in the backup as well for the Chombo frontend
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/ec04e80b9ece/
changeset:   ec04e80b9ece
branch:      yt
user:        atmyers
date:        2013-02-07 02:23:20
summary:     adding a docstring to the save_field function
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/84f10af3293d/
changeset:   84f10af3293d
branch:      yt
user:        atmyers
date:        2013-02-08 02:07:27
summary:     some refactoring of the io functions to support saving of derived fields across all frontends
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/600e45161059/
changeset:   600e45161059
branch:      yt
user:        atmyers
date:        2013-02-08 02:33:38
summary:     catching an exception for when you try to write a field with a name that already exists
affected #:  1 file
Diff not available.
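The duplicate-name case this commit guards against can be sketched as follows; `add_field_group` is a hypothetical stand-in for the `create_group` call in `_write_field_to_gdf`, which (in h5py) raises `ValueError` when a group with that name already exists:

```python
def add_field_group(field_types, field_name):
    # Mirrors h5py's behaviour: creating a group whose name is
    # already taken raises ValueError rather than overwriting it.
    if field_name in field_types:
        raise ValueError("unable to create group (name already exists)")
    field_types[field_name] = {}
    return field_types[field_name]

field_types = {}
add_field_group(field_types, "Entropy")
try:
    add_field_group(field_types, "Entropy")
    saved_twice = True
except ValueError:
    # the duplicate save is skipped instead of aborting the write
    saved_twice = False
```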

https://bitbucket.org/yt_analysis/yt/commits/14efec3bd683/
changeset:   14efec3bd683
branch:      yt
user:        atmyers
date:        2013-02-08 02:34:50
summary:     Merged yt_analysis/yt into yt
affected #:  21 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/4b71796d20a0/
changeset:   4b71796d20a0
branch:      yt
user:        atmyers
date:        2013-02-08 02:41:00
summary:     don't need this import anymore
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/50d895cf7cb2/
changeset:   50d895cf7cb2
branch:      yt
user:        atmyers
date:        2013-02-08 02:45:22
summary:     merging
affected #:  21 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/df9196aa7996/
changeset:   df9196aa7996
branch:      yt
user:        atmyers
date:        2013-02-08 07:18:56
summary:     further refactoring of the derived field saving mechanism. Supports the Enzo frontend now
affected #:  4 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/00407b021b37/
changeset:   00407b021b37
branch:      yt
user:        atmyers
date:        2013-02-08 08:13:55
summary:     adding support for orion data
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/4ada4f91eaf5/
changeset:   4ada4f91eaf5
branch:      yt
user:        atmyers
date:        2013-02-08 09:21:01
summary:     flash
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/73eeba46c80b/
changeset:   73eeba46c80b
branch:      yt
user:        atmyers
date:        2013-02-08 21:12:07
summary:     replacing clunky function name '_read_data_set_from_pf' with '_read_data'
affected #:  5 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/43cd92f9d706/
changeset:   43cd92f9d706
branch:      yt
user:        atmyers
date:        2013-02-08 21:16:00
summary:     forgot one place
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/c00f33f71331/
changeset:   c00f33f71331
branch:      yt
user:        atmyers
date:        2013-02-08 21:25:13
summary:     making _read_data_set --> _read_data change in all of the other frontends
affected #:  11 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/ce9625180cfe/
changeset:   ce9625180cfe
branch:      yt
user:        atmyers
date:        2013-02-08 21:41:40
summary:     set the backup_filename in the base StaticOutput class rather than in the individual frontends
affected #:  5 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/da27899e8fd7/
changeset:   da27899e8fd7
branch:      yt
user:        atmyers
date:        2013-02-08 21:57:52
summary:     catching a couple of exceptions and changing the default backup file location
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5cd51067274f/
changeset:   5cd51067274f
branch:      yt
user:        atmyers
date:        2013-02-08 23:05:57
summary:     fixing some changes I somehow accidentally made to the periodicity checking in the flash, orion, and enzo frontends
affected #:  3 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/47caf837151a/
changeset:   47caf837151a
branch:      yt
user:        atmyers
date:        2013-02-08 23:08:28
summary:     missed this one
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/4dc9fe209517/
changeset:   4dc9fe209517
branch:      yt
user:        atmyers
date:        2013-02-08 23:09:57
summary:     maybe this time
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/84d25f6ceac9/
changeset:   84d25f6ceac9
branch:      yt
user:        atmyers
date:        2013-02-10 03:45:20
summary:     fixing grid offset bug in this version too
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/fa56e6874158/
changeset:   fa56e6874158
branch:      yt
user:        atmyers
date:        2013-02-10 07:34:10
summary:     resolving conflict
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/15cc336494aa/
changeset:   15cc336494aa
branch:      yt
user:        atmyers
date:        2013-02-10 09:14:21
summary:     fixing a couple of bugs
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5e7383e0ce54/
changeset:   5e7383e0ce54
branch:      yt
user:        atmyers
date:        2013-02-14 03:08:41
summary:     Merged yt_analysis/yt into yt
affected #:  9 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/043815a1ec25/
changeset:   043815a1ec25
branch:      yt
user:        atmyers
date:        2013-02-14 06:56:34
summary:     only check disk for presence of data file once
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/4ff66fe9aa72/
changeset:   4ff66fe9aa72
branch:      yt
user:        atmyers
date:        2013-02-14 18:28:39
summary:     simplifying the detect fields code somewhat
affected #:  1 file
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/c237cc8de828/
changeset:   c237cc8de828
branch:      yt
user:        atmyers
date:        2013-02-14 19:03:35
summary:     warn people if they try to save particle fields
affected #:  2 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/e1b7792e6807/
changeset:   e1b7792e6807
branch:      yt
user:        MatthewTurk
date:        2013-02-14 19:25:39
summary:     Adjusting IO frontends for the new sidecar file.
affected #:  15 files
Diff not available.

https://bitbucket.org/yt_analysis/yt/commits/5027f592349e/
changeset:   5027f592349e
branch:      yt
user:        MatthewTurk
date:        2013-02-14 19:45:49
summary:     Merged in atmyers/yt (pull request #416)

Saving derived fields to disk
affected #:  24 files
Diff not available.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
