[yt-svn] commit/yt-doc: 3 new changesets

commits-noreply at bitbucket.org
Sat Mar 30 03:41:27 PDT 2013


3 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/dc230d231d93/
Changeset:   dc230d231d93
User:        ngoldbaum
Date:        2013-03-30 11:38:25
Summary:     Updating the testing docs with a bit more answer testing information.
Affected #:  1 file

diff -r 5e5790c04a5123ce810c3812da408fd3003cb295 -r dc230d231d9369b6ea09c59f49e227cd988ef70e source/advanced/testing.rst
--- a/source/advanced/testing.rst
+++ b/source/advanced/testing.rst
@@ -5,12 +5,12 @@
 =======
 
 yt includes a testing suite which one can run on the codebase to assure that
-no major functional breaks have occurred.  This testing suite is based on 
+no major functional breaks have occurred.  This testing suite is based on
 python nosetests_.  It consists of unit testing, a basic level of testing
 where we confirm that the units in functions make sense and that these functions
 will run without failure.  The testing suite also includes more rigorous
 tests, like answer tests, which involve generating output from yt functions,
-and comparing and matching those results against outputs of the same code in 
+and comparing and matching those results against outputs of the same code in
 previous versions of yt for consistency in results.
 
 .. _nosetests: https://nose.readthedocs.org/en/latest/
@@ -39,7 +39,7 @@
 ^^^^^^^^^^^^^^^^^^^^^
 
 One can run unit tests in a similar way to running answer tests.  First
-follow the setup instructions on `running answer testing`__, then simply 
+follow the setup instructions in `running answer testing`__, then simply
 execute this at the command line to run all unit tests:
 
 __ run_answer_testing_
@@ -105,8 +105,8 @@
 What do Answer Tests Do
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-Answer tests test **actual data**, and many operations on that data, to make 
-sure that answers don't drift over time.  This is how we will be testing 
+Answer tests test **actual data**, and many operations on that data, to make
+sure that answers don't drift over time.  This is how we will be testing
 frontends, as opposed to operations, in yt.
 
 .. _run_answer_testing:
@@ -121,7 +121,6 @@
  * ``IsolatedGalaxy/galaxy0030/galaxy0030``
  * ``WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030``
  * ``GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0300``
- * ``GalaxyClusterMerger/fiducial_1to3_b0.273d_hdf5_plt_cnt_0175``
  * ``TurbBoxLowRes/data.0005.3d.hdf5``
  * ``GaussianCloud/data.0077.3d.hdf5``
  * ``RadAdvect/plt00000``
@@ -129,9 +128,9 @@
 
 These datasets are available at http://yt-project.org/data/.
 
-Next, modify the file ``~/.yt/config`` to include a section ``[yt]`` 
+Next, modify the file ``~/.yt/config`` to include a section ``[yt]``
 with the parameter ``test_data_dir``.  Set this to point to the
-directory with the test data you want to compare.  Here is an example 
+directory with the test data you want to compare.  Here is an example
 config file:
 
 .. code-block:: bash
@@ -139,7 +138,7 @@
    [yt]
    test_data_dir = /Users/tomservo/src/yt-data
 
-More data will be added over time.  To run a comparison, you must first run 
+More data will be added over time.  To run a comparison, you must first run
 "develop" so that the new nose plugin becomes available:
 
 .. code-block:: bash
@@ -153,22 +152,29 @@
 
    $ nosetests --with-answer-testing
 
-The current gold standard results will be downloaded from the amazon cloud 
-and compared to what is generated locally.  The results from a nose testing 
-session are pretty straightforward to understand, the results for each test 
-are printed directly to STDOUT. If a test passes, nose prints a period, F if 
-a test fails, and E if the test encounters an exception or errors out for 
-some reason.  If you want to also run tests for the 'big' datasets, then in 
+The current gold standard results will be downloaded from the Amazon cloud
+and compared to what is generated locally.  The results from a nose testing
+session are straightforward to understand; the result of each test is
+printed directly to STDOUT.  If a test passes, nose prints a period, an F if
+a test fails, and an E if the test encounters an exception or errors out for
+some reason.  If you also want to run tests for the 'big' datasets, then in
 the yt directory,
 
 .. code-block:: bash
 
    $ nosetests --with-answer-testing --answer-big-data
 
+It's also possible to run the answer tests for only one frontend.  For example,
+to run only the Enzo answer tests, one can do:
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing yt.frontends.enzo.tests.test_outputs
+
 How to Write Answer Tests
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Tests can be added in the file ``yt/utilities/answer_testing/framework.py`` .  
+Tests can be added in the file ``yt/utilities/answer_testing/framework.py``.
 You can find examples there of how to write a test.  Here is a trivial example:
 
 .. code-block:: python
@@ -180,7 +186,7 @@
        def __init__(self, pf_fn, field):
            super(MaximumValue, self).__init__(pf_fn)
            self.field = field
-   
+
        def run(self):
            v, c = self.pf.h.find_max(self.field)
            result = np.empty(4, dtype="float64")
@@ -191,57 +197,60 @@
        def compare(self, new_result, old_result):
            assert_equal(new_result, old_result)
 
-What this does is calculate the location and value of the maximum of a 
-field.  It then puts that into the variable result, returns that from 
+This test calculates the location and value of the maximum of a
+field.  It stores them in the variable ``result``, returns that from
 ``run`` and then in ``compare`` makes sure that all are exactly equal.
 
 To write a new test:
 
  * Subclass ``AnswerTestingTest``
- * Add the attributes ``_type_name`` (a string) and ``_attrs`` 
-   (a tuple of strings, one for each attribute that defines the test -- 
+ * Add the attributes ``_type_name`` (a string) and ``_attrs``
+   (a tuple of strings, one for each attribute that defines the test --
    see how this is done for projections, for instance)
- * Implement the two routines ``run`` and ``compare``  The first 
-   should return a result and the second should compare a result to an old 
-   result.  Neither should yield, but instead actually return.  If you need 
+ * Implement the two routines ``run`` and ``compare``.  The first
+   should return a result and the second should compare a result to an old
+   result.  Neither should yield, but instead actually return.  If you need
    additional arguments to the test, implement an ``__init__`` routine.
- * Keep in mind that *everything* returned from ``run`` will be stored.  
-   So if you are going to return a huge amount of data, please ensure that 
-   the test only gets run for small data.  If you want a fast way to 
-   measure something as being similar or different, either an md5 hash 
-   (see the grid values test) or a sum and std of an array act as good proxies.
- * Typically for derived values, we compare to 10 or 12 decimal places.  
+ * Keep in mind that *everything* returned from ``run`` will be stored.  So if
+   you are going to return a huge amount of data, please ensure that the test
+   only gets run for small data.  If you want a fast way to measure something as
+   being similar or different, either an md5 hash (see the grid values test) or
+   a sum and std of an array act as good proxies.  If you must store a large
+   amount of data for some reason, try serializing the data to a string
+   (e.g. using ``numpy.ndarray.dumps``), and then compressing the data stream
+   using ``zlib.compress``.
+ * Typically for derived values, we compare to 10 or 12 decimal places.
    For exact values, we compare exactly.
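The storage advice in the bullets above (use a hash or a sum/std as a cheap proxy, and compress anything large before storing it) can be sketched with standard-library stand-ins.  This is only an illustration: ``pickle.dumps`` plays the role of ``numpy.ndarray.dumps`` mentioned above, and the data list is hypothetical.

```python
import hashlib
import pickle
import zlib

# Hypothetical large result; a plain list stands in for a NumPy array.
data = [float(i) for i in range(10000)]

# Proxy 1: an MD5 hash of the serialized data -- cheap to store and compare.
digest = hashlib.md5(pickle.dumps(data)).hexdigest()

# Proxy 2: a sum and standard deviation summarize the array in two numbers.
n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
summary = (sum(data), std)

# If the full data really must be stored, serialize it to a byte string
# and compress the stream before storage, as the text suggests.
raw = pickle.dumps(data)
compressed = zlib.compress(raw)
```

Comparing two runs then reduces to comparing the two digests or the two summary tuples, rather than shipping the full arrays around.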
 
 How to add data to the testing suite
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To add data to the testing suite, first write a new set of tests for the data.  
-The Enzo example in ``yt/frontends/enzo/tests/test_outputs.py`` is 
+To add data to the testing suite, first write a new set of tests for the data.
+The Enzo example in ``yt/frontends/enzo/tests/test_outputs.py`` is
 considered canonical.  Do these things:
 
  * Create a new directory, ``tests`` inside the frontend's directory.
 
- * Create a new file, ``test_outputs.py`` in the frontend's ``tests`` 
+ * Create a new file, ``test_outputs.py`` in the frontend's ``tests``
    directory.
 
- * Create a new routine that operates similarly to the routines you can see 
+ * Create a new routine that operates similarly to the routines you can see
    in Enzo's outputs.
 
    * This routine should test a number of different fields and data objects.
 
-   * The test routine itself should be decorated with 
-     ``@requires_pf(file_name)``  This decorate can accept the argument 
+   * The test routine itself should be decorated with
+     ``@requires_pf(file_name)``.  This decorator can accept the argument
+     ``big_data`` if the data is too big to run all the time.
 
-   * There are ``small_patch_amr`` and ``big_patch_amr`` routines that 
-     you can yield from to execute a bunch of standard tests.  This is where 
-     you should start, and then yield additional tests that stress the 
+   * There are ``small_patch_amr`` and ``big_patch_amr`` routines that
+     you can yield from to execute a bunch of standard tests.  This is where
+     you should start, and then yield additional tests that stress the
      outputs in whatever ways are necessary to ensure functionality.
 
    * **All tests should be yielded!**
 
-If you are adding to a frontend that has a few tests already, skip the first 
+If you are adding to a frontend that has a few tests already, skip the first
 two steps.
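The routine structure described in the bullets above (a decorated generator that yields every test) can be sketched as follows.  Note that ``requires_pf`` and ``small_patch_amr`` here are simplified stand-ins written for illustration, not the real helpers from ``yt.utilities.answer_testing``, and the field and test names are hypothetical.

```python
# Structural sketch of a frontend's test_outputs.py routine.
g30 = "IsolatedGalaxy/galaxy0030/galaxy0030"
AVAILABLE = {g30}                       # datasets found locally (stand-in)

def requires_pf(pf_fn, big_data=False):
    """Stand-in decorator: yield the wrapped tests only when the dataset exists."""
    def decorator(func):
        def wrapper():
            if pf_fn in AVAILABLE:      # otherwise the tests are skipped
                yield from func()
        return wrapper
    return decorator

def small_patch_amr(pf_fn, field):
    """Stand-in helper: yield (name, check) pairs for a battery of standard tests."""
    for test_name in ("grid_hierarchy", "field_values", "projection"):
        yield ("%s_%s_%s" % (pf_fn, field, test_name), lambda: True)

_fields = ("Density", "Temperature")

@requires_pf(g30)
def test_galaxy0030():
    # All tests should be yielded, never executed inline.
    for field in _fields:
        for test in small_patch_amr(g30, field):
            yield test

# nose would iterate the generator itself; here we drive it by hand.
results = [(name, check()) for name, check in test_galaxy0030()]
```

The key point the sketch captures is that the routine is a generator: nose collects one test per yielded pair, and the decorator can silently skip the whole battery when the dataset is absent.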
 
 How to Upload Answers
@@ -253,22 +262,21 @@
 
    $ nosetests --with-answer-testing frontends/enzo/ --answer-store --answer-name=whatever
 
-The current version of the gold standard can be found in the variable 
-``_latest`` inside ``yt/utilities/answer_testing/framework.py``  As of 
-the time of this writing, it is ``gold001``  Note that the name of the 
-suite of results is now disconnected from the parameter file's name, so you 
+The current version of the gold standard can be found in the variable
+``_latest`` inside ``yt/utilities/answer_testing/framework.py``.  As of
+the time of this writing, it is ``gold007``.  Note that the name of the
+suite of results is now disconnected from the parameter file's name, so you
 can upload multiple outputs with the same name and not collide.
 
-To upload answers, you **must** have the package boto installed, and you 
+To upload answers, you **must** have the package boto installed, and you
 **must** have an Amazon key provided by Matt.  Contact Matt for these keys.
 
 What Needs to be Done
 ^^^^^^^^^^^^^^^^^^^^^
 
- * Many of the old answer tests need to be converted.  This includes tests 
-   for halos, volume renderings, data object access, and profiles.  These 
-   will require taking the old tests and converting them over, but this 
+ * Many of the old answer tests need to be converted.  This includes tests
+   for halos, volume renderings, data object access, and profiles.  These
+   will require taking the old tests and converting them over, but this
    process should be straightforward.
- * We need to have data for Orion, Nyx, and FLASH and any other codes that 
-   want to be tested
- * Tests need to be written for Orion, Nyx, FLASH
+ * We need to have data for all supported codes.  We currently test Enzo, FLASH,
+   Orion, and Chombo.


https://bitbucket.org/yt_analysis/yt-doc/commits/7b79f9ad4cc9/
Changeset:   7b79f9ad4cc9
User:        ngoldbaum
Date:        2013-03-30 11:39:07
Summary:     Merging to tip.
Affected #:  2 files

diff -r dc230d231d9369b6ea09c59f49e227cd988ef70e -r 7b79f9ad4cc9d2298c9153b9a48cda61e95685a4 source/cookbook/aligned_cutting_plane.py
--- a/source/cookbook/aligned_cutting_plane.py
+++ b/source/cookbook/aligned_cutting_plane.py
@@ -6,7 +6,7 @@
 # Create a 1 kpc radius sphere, centered on the max density.  Note that this
 # sphere is very small compared to the size of our final plot, and it has a
 # non-axially aligned L vector.
-sp = pf.h.sphere("max", (15.0, "kpc"))
+sp = pf.h.sphere("center", (15.0, "kpc"))
 
 # Get the angular momentum vector for the sphere.
 L = sp.quantities["AngularMomentumVector"]()

diff -r dc230d231d9369b6ea09c59f49e227cd988ef70e -r 7b79f9ad4cc9d2298c9153b9a48cda61e95685a4 source/cookbook/simple_off_axis_projection.py
--- a/source/cookbook/simple_off_axis_projection.py
+++ b/source/cookbook/simple_off_axis_projection.py
@@ -6,7 +6,7 @@
 # Create a 1 kpc radius sphere, centered on the max density.  Note that this
 # sphere is very small compared to the size of our final plot, and it has a
 # non-axially aligned L vector.
-sp = pf.h.sphere("max", (15.0, "kpc"))
+sp = pf.h.sphere("center", (15.0, "kpc"))
 
 # Get the angular momentum vector for the sphere.
 L = sp.quantities["AngularMomentumVector"]()
@@ -14,5 +14,5 @@
 print "Angular momentum vector: %s" % (L)
 
 # Create an OffAxisSlicePlot on the object with the L vector as its normal
-p = OffAxisProjectionPlot(pf, L, "Density", [0.5,0.5,0.5], (25, "kpc"))
+p = OffAxisProjectionPlot(pf, L, "Density", sp.center, (25, "kpc"))
 p.save()


https://bitbucket.org/yt_analysis/yt-doc/commits/26462bde1cd6/
Changeset:   26462bde1cd6
User:        ngoldbaum
Date:        2013-03-30 11:40:20
Summary:     Merging to tip.
Affected #:  0 files

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.



More information about the yt-svn mailing list