[Yt-svn] yt-commit r1789 - in trunk/yt: . extensions extensions/enzo_test lagos raven

mturk at wrangler.dreamhost.com
Fri Jul 2 14:22:26 PDT 2010


Author: mturk
Date: Fri Jul  2 14:22:25 2010
New Revision: 1789
URL: http://yt.enzotools.org/changeset/1789

Log:
Backporting from hg:
  * Fixes to opengl viewer of grids
  * Importing the enzo test runner
  * Jeff's fixes for Orion's time
  * Jeff's time annotation
  * Mag-field callback from Jeff



Added:
   trunk/yt/extensions/enzo_test/README
   trunk/yt/extensions/enzo_test/halo_tests.py
   trunk/yt/extensions/enzo_test/hydro_tests.py
   trunk/yt/extensions/enzo_test/output_tests.py
   trunk/yt/extensions/enzo_test/particle_tests.py
   trunk/yt/extensions/enzo_test/run_tests.py
   trunk/yt/extensions/enzo_test/runner.py
Removed:
   trunk/yt/extensions/enzo_test/DataProviders.py
   trunk/yt/extensions/enzo_test/ProblemSetup.py
   trunk/yt/extensions/enzo_test/ProblemVerification.py
   trunk/yt/extensions/enzo_test/SimulationTests.py
   trunk/yt/extensions/enzo_test/__init__.py
Modified:
   trunk/yt/command_line.py
   trunk/yt/extensions/opengl_image_viewer.py
   trunk/yt/lagos/OutputTypes.py
   trunk/yt/raven/Callbacks.py
   trunk/yt/raven/plot_collection.py

Modified: trunk/yt/command_line.py
==============================================================================
--- trunk/yt/command_line.py	(original)
+++ trunk/yt/command_line.py	Fri Jul  2 14:22:25 2010
@@ -142,6 +142,10 @@
                    action="store_true",
                    dest="grids", default=False,
                    help="Show the grid boundaries"),
+    time    = dict(short="", long="--time",
+                   action="store_true",
+                   dest="time", default=False,
+                   help="Print time in years on image"),
     halos   = dict(short="", long="--halos",
                    action="store", type="string",
                    dest="halos",default="multiple",
@@ -440,7 +444,7 @@
 
     @add_cmd_options(["width", "unit", "bn", "proj", "center",
                       "zlim", "axis", "field", "weight", "skip",
-                      "cmap", "output", "grids"])
+                      "cmap", "output", "grids", "time"])
     @check_args
     def do_plot(self, subcmd, opts, arg):
         """
@@ -466,6 +470,9 @@
                                     weight_field=opts.weight, center=center)
             else: pc.add_slice(opts.field, ax, center=center)
             if opts.grids: pc.plots[-1].modify["grids"]()
+            if opts.time: 
+                time = pf['InitialTime']*pf['Time']*pf['years']
+                pc.plots[-1].modify["text"]((0.2,0.8), 't = %5.2f yr'%time)
         pc.set_width(opts.width, opts.unit)
         pc.set_cmap(opts.cmap)
         if opts.zlim: pc.set_zlim(*opts.zlim)

Added: trunk/yt/extensions/enzo_test/README
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/README	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,275 @@
+Enzo Regression Test Runner
+===========================
+
+This is an evolving sketch of how Enzo regression tests might work.  They will
+be based on a Python test runner, called from LCA test, that will output images
+as well as success/failure for a set of tests.
+
+The interface is still evolving, but we're working to create something that
+is clear, fun, and easy to write tests for.
+
+This is still a work in progress!  Things might change without notice!
+
+What Is A Test And How To Write One
+-----------------------------------
+
+A test, at its most fundamental level, computes some value from one or more
+outputs of a simulation and then compares that value against the value
+computed from a previous simulation.
+
+Each test follows a fixed interface, but we're trying to provide a couple
+mechanisms to make test writing easier.  To implement a test, you have to
+define a python class that subclasses from a particular type of test case.
+
+Your new test must implement the following interface, or it will fail; a
+minimal skeleton is sketched after this list:
+
+    name
+        All tests have to have the variable "name" defined in the class
+        definition.  This is a unique key that identifies the test, and it is
+        used to self-register every test in a global registry.  This will play
+        into filenames, so it's for the best if it doesn't contain spaces or
+        other unacceptable filename characters.
+
+    setup(self)
+        If you subclass from the YT test case or another test case base
+        class that implements setup, this may not be necessary.  This is
+        where all the pre-testing operation occurs, and is useful if you
+        want to write a bunch of tests that have the same setup.  Not return
+        value is needed.
+
+    run(self)
+        This is where the testing occurs and some value is generated -- this
+        value can be an array, a number, a string, or any Python or NumPy base
+        type.  (For various reasons, YT objects can't be considered results,
+        only their base components.)  When this value is prepared, it needs to
+        be stored as the property "result" on the object -- for example, you
+        might do self.result = some_time_average.  No return value is needed.
+
+    compare(self, old_result)
+        This routine compares the newly computed result against the result
+        from a previous run.  It can be assumed that the "old_result" was
+        constructed in an identical "run" function, so direct comparison can be
+        made.  No return value is needed, but instead it is assumed that in
+        case of failure an exception that subclasses from
+        RegressionTestException will be raised -- however, the usage of
+        operations like compare_array_delta and compare_value_delta is
+        encouraged because they will handle the appropriate exception raising.
+
+    plot(self)
+        This function is optional, but it is used to generate an image from a
+        test.  The return value is the filename of the created image.
+
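+Putting this together, a minimal skeleton looks something like the following
+(a hypothetical test, for illustration only):
+
+    class TestSomething(SingleOutputTest):
+        name = "something_test"
+
+        def setup(self):
+            # Open files, load data, and so on.
+            pass
+
+        def run(self):
+            # Any Python or NumPy base type works as a result.
+            self.result = 1.0
+
+        def compare(self, old_result):
+            # Raises a subclass of RegressionTestException on failure.
+            self.compare_value_delta(self.result, old_result, 0.0)
+
+        def plot(self):
+            # No images for this test.
+            return []
+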
+Helpful Functions For Test Writing
+----------------------------------
+
+All test cases supply several base sets of operations:
+
+  * compare_array_delta(array1, array2, tolerance)
+        This computes
+            max(abs(array1-array2)/(array1+array2))
+        and fails if that is greater than tolerance.  Set tolerance to 0.0 for
+        an exact comparison.
+            
+  * compare_value_delta(value1, value2, tolerance)
+        This computes
+            abs(value1-value2)/(value1+value2)
+        and fails if that is greater than tolerance.  Set tolerance to 0.0 for
+        an exact comparison.
+
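+For example, compare_value_delta(1.0, 1.0 + 1e-9, 1e-7) computes a relative
+difference of about 5e-10 and passes, while the same call with a tolerance of
+1e-10 would raise a ValueDelta exception.
+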
+Currently, a few test case base classes exist (an illustration of pixelize
+follows this list):
+
+    SingleOutputTest
+        This is a test case designed to handle a single test.
+
+        Additional Attributes:
+          * filename => The parameter file to test
+
+        Additional Methods:
+          * None
+        
+    MultipleOutputTest
+        This is a test case designed to handle multiple tests.
+
+        Additional Attributes:
+          * io_log => The IO log from the simulation
+
+        Additional Methods:
+          * __iter__ => You can iterate over the test case:
+                 for filename in self:
+                     ...
+                to have it return all the filenames in the IO log.
+        
+    YTStaticOutputTest
+        This test case is designed to work with YT, and provides a couple
+        additional things that YT can provide.
+
+        Additional Attributes:
+          * sim_center => The center of the simulation, from the domain left
+                          and right edges.
+          * max_dens_location => The point of highest density.
+
+          * entire_simulation => A data object containing the entire
+                                 simulation.
+
+        Additional Methods:
+          * pixelize(data_source, field, edges, dims) =>
+                This returns a (dims[0], dims[1]) array constructed from the
+                variable resolution (projection or slice) data object.  Edges
+                are in code units, (px_min, px_max, py_min, py_max) and default
+                to the entire domain.  dims is a tuple, (Nx, Ny).
+
+          * compare_data_arrays(d1, d2, tolerance) =>
+                yt often stores arrays hanging off dictionaries.  This accepts
+                d1 and d2, which are dictionaries with arrays as values, and
+                compares all the arrays using compare_array_delta, with
+                given tolerance.
+
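+As an illustration of pixelize, here is a hypothetical test (note that
+pixelize reads the axis from self.axis, so the test must define it):
+
+    class TestDensitySlice(YTStaticOutputTest):
+        name = "density_slice_test"
+        axis = 0
+
+        def run(self):
+            # Slice through the domain center along our axis.
+            slc = self.pf.h.slice(self.axis, self.sim_center[self.axis])
+            frb = self.pixelize(slc, "Density", dims=(512, 512))
+            self.result = frb["Density"]
+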
+Sample Tests
+------------
+
+There are some example tests in the distribution, but a simple test case is
+also worked through here.  This test case uses yt to find the maximum density
+in the simulation.  Note that we don't have to provide a setup function, as
+that's taken care of in the base class (YTStaticOutputTest).
+
+    class TestMaximumDensity(YTStaticOutputTest):
+        name = "maximum_density"
+
+        def run(self):
+            # self.pf already exists
+            value, center = self.pf.h.find_max("Density")
+            self.result = (value, center)
+
+        def compare(self, old_result):
+            value, center = self.result
+            old_value, old_center = old_result
+
+            # We want our old max density to agree with our new max density to
+            # a relative difference of 1e-7.
+            self.compare_value_delta(value, old_value, 1e-7)
+
+            # Now we check if our center has moved.
+            self.compare_array_delta(center, old_center, 1e-7)
+
+        def plot(self):
+            # There's not much to plot, so we just return an empty list.
+            return []
+
+Running Tests
+-------------
+
+Subclasses of RegressionTest are *self-registering*, which means the test
+runner can look them up and run them by name.  Two classes are provided for
+running tests.  One is the test runner itself, and the other is a thin
+results store that pickles each result to disk.  To run a series of tests,
+you need to instantiate a RegressionTestRunner and then tell it which tests
+to run.
+
+If the runner has a set of results against which to compare, it will do so.
+For every test, it will perform the following actions:
+
+    1. setup()
+    2. run()
+    3. plot(), store list of filenames in self.plot_list[test_name]
+    4. store test.result
+    5. test.compare(old_results), if a compare_id is supplied
+
+If a test is of type SingleOutputTest, or a subclass, it will be run once for
+every output in the IO log.  If it is a MultipleOutputTest, only one instance
+of the test will be executed.
+
+The RegressionTestRunner has a public interface:
+
+    RegressionTestRunner:
+        __init__(id, compare_id, results_path, io_log)
+            The id is the unique id for this test case, which will be used for
+            the name of the results database.  The compare_id (optional) is the
+            id of the results database against which we will compare.  The
+            results_path is the path to the directory in which results sets are
+            stored, defaulting to the current directory.  io_log, defaulting to
+            "OutputLog", is the IO log from Enzo that lists all of the outputs.
+
+        run_test(name):
+            The test corresponding to that test name is run.
+
+        run_all_tests()
+            This runs all of the tests that have been registered.  Every time a
+            test is defined, it is registered -- so this list can get quite
+            long!  But, by selectively importing 'plugin' modules, the full
+            list of tests can be controlled.
+
+        run_tests_from_file(filename):
+            Every line in a filename is parsed, and if it matches a test name
+            in the test registry, it will be run.
+
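+For example, a file passed to run_tests_from_file might look like this (the
+second name below assumes a test created via create_test, which prefixes the
+base class name):
+
+    # One test name per line; unrecognized lines starting with # are skipped.
+    maximum_density
+    TestProjection_projection_test_0_Density
+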
+The included sample script run_tests.py will instantiate a test runner, run it
+once on a set of outputs, and then run it again comparing against the results
+from the first run.  This should always succeed, but it gives an idea of how to
+go about running tests.
+
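+Stripped of the test-module imports, that script boils down to (assuming a
+results set named "first" gets created by the first runner):
+
+    from runner import RegressionTestRunner
+
+    first_runner = RegressionTestRunner("first")
+    first_runner.run_all_tests()
+    second_runner = RegressionTestRunner("second", "first")
+    second_runner.run_all_tests()
+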
+Test Creation Convenience Functions
+-----------------------------------
+
+Because of the self-registering nature of the tests, we can very conveniently
+create new ones just by subclassing.  But, subclassing a lot of tests can be a
+bit annoying!  So the create_test function has been created.
+
+Going back to our example of the maximum density location function, we could
+rewrite it slightly to make it work with the create_test function.  We remove
+the name attribute and declare our parameter, field, as a class attribute,
+but we don't set it.
+
+    class TestMaximumValue(YTStaticOutputTest):
+
+        field = None
+
+        def run(self):
+            # self.pf already exists
+            value, center = self.pf.h.find_max(self.field)
+            self.result = (value, center)
+
+        def compare(self, old_result):
+            value, center = self.result
+            old_value, old_center = old_result
+
+            # We want our old maximum value to agree with our new maximum
+            # value to a relative difference of 1e-7.
+            self.compare_value_delta(value, old_value, 1e-7)
+
+            # Now we check if our center has moved.
+            self.compare_array_delta(center, old_center, 1e-7)
+
+        def plot(self):
+            # There's not much to plot, so we just return an empty list.
+            return []
+
+Note that it's mostly the same, but we are using self.field to find the
+maximum value instead of hard-coding the field to Density.  We also don't
+specify 'name', so this base class won't be registered.  We can now use
+create_test to make a bunch of tests, setting "field" to anything we want
+and naming them anything we want:
+
+    for field in ["Temperature", "x-velocity", "y-velocity", "z-velocity"]:
+        create_test(TestMaximumValue, "maximum_%s_test" % field,
+                    field = field)
+
+This makes and then registers tests of the name format given, which are then
+accessible through the test runner.  See the projection and gas distribution
+test creations in hydro_tests.py for a few more examples of how to use this.
+
+TODO
+====
+
+This is still fairly bare bones!  There are some fun areas we can expand into:
+
+    * We need more tests!  More than that, we need tests that know something
+      about the different test *problems*.  We'll need lists of tests to run
+      for every single problem type.
+    * Sometimes the results database acts oddly and can't add a new value.
+    * The source tree needs to be re-organized and this README file turned into
+      documentation that includes every test in the main distribution.
+    * Doc strings need to be added to all functions and classes.  Comments for
+      all tests need to be included.
+    * More explicit test naming and running.
+    * Generation of HTML pages including all the plots and the results, along
+      with download links.  This should be done with LCA test.
+    * Plots should be zipped up and removed from the file system.  The zipfile
+      module would work great for this.
+    * And lots more ...

Added: trunk/yt/extensions/enzo_test/halo_tests.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/halo_tests.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,48 @@
+from yt.mods import *
+import matplotlib; matplotlib.use("Agg")
+import pylab
+from output_tests import SingleOutputTest, YTStaticOutputTest, create_test, \
+    RegressionTestException
+
+class TestHaloCount(YTStaticOutputTest):
+    threshold = 80.0
+
+    def run(self):
+        # Find the haloes using vanilla HOP.
+        haloes = HaloFinder(self.pf, threshold=self.threshold, dm_only=False)
+        # We only care about the number of haloes.
+        self.result = len(haloes)
+                    
+    def compare(self, old_result):
+        # The new value should be identical to the old one.
+        self.compare_value_delta(self.result, old_result, 0)
+
+    def plot(self):
+        return []
+
+create_test(TestHaloCount, "halo_count_test", threshold=80.0)
+
+class TestHaloComposition(YTStaticOutputTest):
+    threshold=80.0
+    
+    def run(self):
+        # Find the haloes using vanilla HOP.
+        haloes = HaloFinder(self.pf, threshold=self.threshold, dm_only=False)
+        # The result is a list of the particle IDs, stored
+        # as sets for easy comparison.
+        IDs = []
+        for halo in haloes:
+            IDs.append(set(halo["particle_index"]))
+        self.result = IDs
+    
+    def compare(self, old_result):
+        # All the sets should be identical; raise, rather than return, on
+        # failure so the test runner registers it.
+        for new_ids, old_ids in zip(self.result, old_result):
+            if len(new_ids - old_ids) != 0:
+                raise RegressionTestException("halo membership changed")
+    
+    def plot(self):
+        return []
+
+create_test(TestHaloComposition, "halo_composition_test", threshold=80.0)

Added: trunk/yt/extensions/enzo_test/hydro_tests.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/hydro_tests.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,77 @@
+from yt.mods import *
+import matplotlib; matplotlib.use("Agg")
+import pylab
+from output_tests import SingleOutputTest, YTStaticOutputTest, create_test
+
+class TestProjection(YTStaticOutputTest):
+
+    field = None
+    axis = None
+
+    def run(self):
+        # First we get our flattened projection -- this is the
+        # Density, px, py, pdx, and pdy
+        proj = self.pf.h.proj(self.axis, self.field)
+        # Now let's stick it in a buffer
+        pixelized_proj = self.pixelize(proj, self.field)
+        # We just want the values, so this can be stored
+        # independently of the parameter file.
+        # The .data attributes strip out everything other than the actual array
+        # values.
+        self.result = (proj.data, pixelized_proj.data)
+
+    def compare(self, old_result):
+        proj, pixelized_proj = self.result
+        oproj, opixelized_proj = old_result
+
+        self.compare_data_arrays(proj, oproj)
+        self.compare_array_delta(
+            pixelized_proj[self.field],
+            opixelized_proj[self.field],
+            1e-7)
+
+    def plot(self):
+        pylab.clf()
+        pylab.imshow(self.result[1][self.field],
+            interpolation='nearest', origin='lower')
+        fn = "%s_%s_projection.png" % (self.pf, self.field)
+        pylab.savefig(fn)
+        return [fn]
+
+# Now we create all our tests.  We are using the create_test
+# function, which is a relatively simple function that takes the base class,
+# a name, and any parameters that the test requires.
+for axis in range(3):
+    for field in ["Density", "Temperature"]:
+        create_test(TestProjection, "projection_test_%s_%s" % (axis, field),
+                    field = field, axis = axis)
+
+class TestGasDistribution(YTStaticOutputTest):
+    field_x = None
+    field_y = None
+    weight = "CellMassMsun"
+    n_bins = 32
+
+    def run(self):
+        # We're NOT going to use the low-level profiling API here,
+        # because we are avoiding the calculations of min/max,
+        # as those should be tested in another test.
+        pc = PlotCollection(self.pf, center=self.sim_center)
+        p = pc.add_profile_object(self.entire_simulation,
+            [self.field_x, self.field_y], x_bins = self.n_bins,
+            weight=self.weight)
+        # The arrays are all stored in a dictionary hanging off the profile
+        # object
+        self.result = p.data._data
+                    
+    def compare(self, old_result):
+        self.compare_data_arrays(
+            self.result, old_result)
+
+    def plot(self):
+        return []
+
+# Now we create all our tests, but we're only going to check the binning
+# against Density for now.
+for field in ["Temperature", "x-velocity"]:
+    create_test(TestGasDistribution, "profile_density_test_%s" % field,
+                field_x = "Density", field_y = field)

Added: trunk/yt/extensions/enzo_test/output_tests.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/output_tests.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,117 @@
+from yt.mods import *
+
+test_registry = {}
+
+class RegressionTestException(Exception):
+    pass
+
+class ValueDelta(RegressionTestException):
+    def __init__(self, delta, acceptable):
+        self.delta = delta
+        self.acceptable = acceptable
+
+    def __repr__(self):
+        return "ValueDelta: Delta %0.5e, max of %0.5e" % (
+            self.delta, self.acceptable)
+
+class ArrayDelta(ValueDelta):
+    def __repr__(self):
+        return "ArrayDelta: Delta %0.5e, max of %0.5e" % (
+            self.delta, self.acceptable)
+
+class RegressionTest(object):
+    name = None
+    result = None
+    output_type = None
+
+    class __metaclass__(type):
+        # This ensures that all the tests are auto-registered if they have a
+        # name.
+
+        def __init__(cls, name, b, d):
+            type.__init__(cls, name, b, d)
+            if cls.name is not None:
+                test_registry[cls.name] = cls
+
+    def setup(self):
+        pass
+
+    def run(self):
+        pass
+
+    def compare(self, old_result):
+        pass
+
+    def plot(self):
+        pass
+
+    def compare_array_delta(self, a1, a2, acceptable):
+        delta = na.abs(a1 - a2)/(a1 + a2)
+        if delta.max() > acceptable:
+            raise ArrayDelta(delta, acceptable)
+        return True
+
+    def compare_value_delta(self, v1, v2, acceptable):
+        delta = na.abs(v1 - v2)/(v1 + v2)
+        if delta > acceptable:
+            raise ValueDelta(delta, acceptable)
+        return True
+
+class SingleOutputTest(RegressionTest):
+    output_type = 'single'
+
+    def __init__(self, filename):
+        self.filename = filename
+
+class MultipleOutputTest(RegressionTest):
+    output_type = 'multiple'
+
+    io_log_header = "DATASET WRITTEN"
+
+    def __init__(self, io_log):
+        self.io_log = io_log
+
+    def __iter__(self):
+        for line in open(self.io_log):
+            yield line[len(self.io_log_header):].strip()
+
+def create_test(base, new_name, **attrs):
+    new_name = "%s_%s" % (base.__name__, new_name)
+    attrs['name'] = new_name
+    return type(new_name, (base,), attrs)
+
+class YTStaticOutputTest(SingleOutputTest):
+
+    def setup(self):
+        self.pf = load(self.filename)
+
+    def pixelize(self, data, field, edges = None, dims = (512, 512)):
+        xax = lagos.x_dict[self.axis]
+        yax = lagos.y_dict[self.axis]
+        
+        if edges is None:
+            edges = (self.pf["DomainLeftEdge"][xax],
+                     self.pf["DomainRightEdge"][xax],
+                     self.pf["DomainLeftEdge"][yax],
+                     self.pf["DomainRightEdge"][yax])
+        frb = raven.FixedResolutionBuffer(data, edges, dims)
+        frb[field] # To make the pixelization
+        return frb
+
+    def compare_data_arrays(self, d1, d2, tol = 1e-7):
+        for field in d1.keys():
+            self.compare_array_delta(d1[field], d2[field], tol)
+
+    @property
+    def sim_center(self):
+        return 0.5*(self.pf["DomainRightEdge"] + self.pf["DomainLeftEdge"])
+
+    @property
+    def max_dens_location(self):
+        return self.pf.h.find_max("Density")[1]
+
+    @property
+    def entire_simulation(self):
+        return self.pf.h.all_data()
+        
+

Added: trunk/yt/extensions/enzo_test/particle_tests.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/particle_tests.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,59 @@
+from yt.mods import *
+import matplotlib; matplotlib.use("Agg")
+import pylab
+from output_tests import SingleOutputTest, YTStaticOutputTest, create_test, \
+    RegressionTestException
+
+class TestParticleUniqueIDs(YTStaticOutputTest):
+
+    def run(self):
+        # Test to make sure that all the particles have unique IDs.
+        all = self.pf.h.all_data()
+        IDs = all["particle_index"]
+        # Make sure the order is the same every time.
+        IDs = IDs[IDs.argsort()]
+        self.result = IDs
+                    
+    def compare(self, old_result):
+        # Two things: there should be no repeats in either the new or
+        # the old, and the two sets should be the same.  Raise, rather
+        # than return, on failure so the test runner registers it.
+        if len(old_result) != len(set(old_result)):
+            raise RegressionTestException("repeated IDs in old result")
+        if len(self.result) != len(set(self.result)):
+            raise RegressionTestException("repeated IDs in new result")
+        if (self.result != old_result).any():
+            raise RegressionTestException("particle IDs changed")
+
+    def plot(self):
+        return []
+
+create_test(TestParticleUniqueIDs, "particle_unique_ids_test")
+
+class TestParticleExtrema(YTStaticOutputTest):
+
+    def run(self):
+        # Tests to make sure that particle positions aren't changing
+        # drastically.  This is very unlikely to be a problem.
+        all = self.pf.h.all_data()
+        min = na.empty(3,dtype='float64')
+        max = min.copy()
+        dims = ["particle_position_x","particle_position_y",
+            "particle_position_z"]
+        for i in xrange(3):
+            min[i] = na.min(all[dims[i]])
+            max[i] = na.max(all[dims[i]])
+        self.result = (min,max)
+    
+    def compare(self, old_result):
+        min, max = self.result
+        old_min, old_max = old_result
+        # The extrema should be very similar.
+        self.compare_array_delta(min, old_min, 1e-7)
+        self.compare_array_delta(max, old_max, 1e-7)
+        # Also, the extrema shouldn't be outside the domain boundaries;
+        # raise, rather than return, so the test runner registers failure.
+        if (min < self.pf['DomainLeftEdge']).any():
+            raise RegressionTestException("particles outside left edge")
+        if (max > self.pf['DomainRightEdge']).any():
+            raise RegressionTestException("particles outside right edge")
+    
+    def plot(self):
+        return []
+
+create_test(TestParticleExtrema, "particle_extrema_test")
+

Added: trunk/yt/extensions/enzo_test/run_tests.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/run_tests.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,13 @@
+from yt.config import ytcfg
+ytcfg["yt","loglevel"] = '50'
+ytcfg["yt","suppressStreamLogging"] = 'True'
+
+import hydro_tests # Just importing will register the tests!
+import halo_tests
+import particle_tests
+from runner import RegressionTestRunner
+
+first_runner = RegressionTestRunner("first")
+first_runner.run_all_tests()
+second_runner = RegressionTestRunner("second", "first")
+second_runner.run_all_tests()

Added: trunk/yt/extensions/enzo_test/runner.py
==============================================================================
--- (empty file)
+++ trunk/yt/extensions/enzo_test/runner.py	Fri Jul  2 14:22:25 2010
@@ -0,0 +1,112 @@
+import os, cPickle, sys
+from output_tests import test_registry, MultipleOutputTest, \
+                         RegressionTestException
+
+class RegressionTestStorage(object):
+    def __init__(self, id, path = "."):
+        self.id = id
+        self._path = os.path.join(path, "results_%s" % self.id)
+        if os.path.isfile(self._path): raise RuntimeError
+        if not os.path.isdir(self._path): os.mkdir(self._path)
+
+    def _fn(self, tn):
+        return os.path.join(self._path, tn)
+
+    def __setitem__(self, test_name, result):
+        # We pickle each result to its own file and close the file handle
+        # explicitly, rather than relying on a destructor to flush it.
+        f = open(self._fn(test_name), "wb")
+        cPickle.dump(result, f, protocol=-1)
+        f.close()
+
+    def __getitem__(self, test_name):
+        f = open(self._fn(test_name), "rb")
+        tr = cPickle.load(f)
+        f.close()
+        return tr
+
+class RegressionTestRunner(object):
+    def __init__(self, id, compare_id = None,
+                 results_path = ".", io_log = "OutputLog"):
+        # This test runner assumes it has been launched with the current
+        # working directory that of the test case itself.
+        self.io_log = io_log
+        self.id = id
+        if compare_id is not None:
+            self.old_results = RegressionTestStorage(
+                                    compare_id, path=results_path)
+        else:
+            self.old_results = None
+        self.results = RegressionTestStorage(id, path=results_path)
+        self.plot_list = {}
+        self.passed_tests = {}
+
+    def run_all_tests(self):
+        for name in sorted(test_registry):
+            self.run_test(name)
+        return self.plot_list
+
+    def run_test(self, name):
+        # We'll also need to call the "compare" operation,
+        # but for that we'll need a data store.
+        test = test_registry[name]
+        if test.output_type == 'single':
+            mot = MultipleOutputTest(self.io_log)
+            for i,fn in enumerate(mot):
+                # Uncommenting this next line caps how many outputs are
+                # tested, which keeps the results store from gobbling disk.
+                #if i > 5: break
+                test_instance = test(fn)
+                test_instance.name = "%s_%s" % (
+                    os.path.basename(fn), test_instance.name )
+                self._run(test_instance)
+
+        elif test.output_type == 'multiple':
+            test_instance = test(self.io_log)
+            self._run(test_instance)
+
+    def _run(self, test):
+        print self.id, "Running", test.name,
+        test.setup()
+        test.run()
+        self.plot_list[test.name] = test.plot()
+        self.results[test.name] = test.result
+        success = self._compare(test)
+        if success == True: print "SUCCEEDED"
+        else: print "FAILED"
+        self.passed_tests[test.name] = success
+
+    def _compare(self, test):
+        if self.old_results is None:
+            return True
+        old_result = self.old_results[test.name]
+        try:
+            test.compare(old_result)
+        except RegressionTestException as exc:
+            return str(exc)
+        return True
+
+    def run_tests_from_file(self, filename):
+        for line in open(filename):
+            test_name = line.strip()
+            if test_name not in test_registry:
+                if test_name and not test_name.startswith("#"):
+                    print "Test '%s' not recognized, skipping" % (test_name)
+                continue
+            print "Running '%s'" % (test_name)
+            self.run_test(line.strip())
+
+def run():
+    # This should be made to work with the optparse library
+    if sys.argv[-1] == "-f":
+        first_runner = RegressionTestRunner("first")
+        first_runner.run_all_tests()
+    else:
+        second_runner = RegressionTestRunner("second", "first")
+        second_runner.run_all_tests()
+
+if __name__ == "__main__":
+    run()

Modified: trunk/yt/extensions/opengl_image_viewer.py
==============================================================================
--- trunk/yt/extensions/opengl_image_viewer.py	(original)
+++ trunk/yt/extensions/opengl_image_viewer.py	Fri Jul  2 14:22:25 2010
@@ -279,10 +279,12 @@
     _title = "Grids"
 
     def _get_grid_vertices(self, offset):
+        DLE, DRE = self.pf['DomainLeftEdge'], self.pf['DomainRightEdge']
+        DW = DRE - DLE
         k = 0
         self._grid_offsets = {}
         for g in self.pf.h.grids:
-            vs = (g.LeftEdge, g.RightEdge)
+            vs = ((g.LeftEdge-DLE)/DW, (g.RightEdge-DLE)/DW)
             self._grid_offsets[g.id] = k
             for vert in _verts:
                 for i,v in enumerate(vert):

Modified: trunk/yt/lagos/OutputTypes.py
==============================================================================
--- trunk/yt/lagos/OutputTypes.py	(original)
+++ trunk/yt/lagos/OutputTypes.py	Fri Jul  2 14:22:25 2010
@@ -491,7 +491,7 @@
 
         # These should maybe not be hardcoded?
         self.parameters["HydroMethod"] = 'orion' # always PPM DE
-        self.parameters["InitialTime"] = 0. # FIX ME!!!
+        self.parameters["Time"] = 1. # default unit is 1...
         self.parameters["DualEnergyFormalism"] = 0 # always off.
         self.parameters["EOSType"] = -1 # default
         if self.fparameters.has_key("mu"):
@@ -586,7 +586,7 @@
         lines = header_file.readlines()
         header_file.close()
         n_fields = int(lines[1])
-        self.parameters["Time"] = float(lines[3+n_fields])
+        self.parameters["InitialTime"] = float(lines[3+n_fields])
 
                 
     def _set_units(self):

Modified: trunk/yt/raven/Callbacks.py
==============================================================================
--- trunk/yt/raven/Callbacks.py	(original)
+++ trunk/yt/raven/Callbacks.py	Fri Jul  2 14:22:25 2010
@@ -74,6 +74,26 @@
             qcb = QuiverCallback(xv, yv, self.factor)
         return qcb(plot)
 
+class MagFieldCallback(PlotCallback):
+    _type_name = "magnetic_field"
+    def __init__(self, factor=16):
+        """
+        Adds a 'quiver' plot of magnetic field to the plot, skipping all but
+        every *factor* datapoint
+        """
+        PlotCallback.__init__(self)
+        self.factor = factor
+
+    def __call__(self, plot):
+        # Instantiation of these is cheap
+        if plot._type_name == "CuttingPlane":
+            print "WARNING: Magnetic field on Cutting Plane Not implemented."
+        else:
+            xv = "B%s" % (lagos.x_names[plot.data.axis])
+            yv = "B%s" % (lagos.y_names[plot.data.axis])
+            qcb = QuiverCallback(xv, yv, self.factor)
+            return qcb(plot)
+
 class QuiverCallback(PlotCallback):
     _type_name = "quiver"
     def __init__(self, field_x, field_y, factor):

Modified: trunk/yt/raven/plot_collection.py
==============================================================================
--- trunk/yt/raven/plot_collection.py	(original)
+++ trunk/yt/raven/plot_collection.py	Fri Jul  2 14:22:25 2010
@@ -926,7 +926,7 @@
             generation.
         fields : list of strings
             The first element of this list is the field by which we will bin
-            into the y-axis, the second is the field by which we will bin onto
+            into the x-axis, the second is the field by which we will bin onto
             the y-axis.  All subsequent fields will be binned and their
             profiles added to the underlying `BinnedProfile2D`.
         cmap : string, optional


