[yt-svn] commit/yt-doc: 3 new changesets

Bitbucket commits-noreply at bitbucket.org
Wed Feb 13 12:36:34 PST 2013


3 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/b52582340793/
changeset:   b52582340793
user:        chummels
date:        2013-02-08 08:16:18
summary:     Created testing.rst to describe unit and answer testing in the docs.  Taken largely from the bitbucket wiki, but added/modified text to bring it up to speed.
affected #:  2 files

diff -r bcb26c76ff8c007e2e8e6c84c0b6164cb562a2bc -r b5258234079312baa346196bf20b0ae45ab199cd source/advanced/index.rst
--- a/source/advanced/index.rst
+++ b/source/advanced/index.rst
@@ -16,4 +16,5 @@
    debugdrive
    external_analysis
    developing
+   testing
    reason_architecture

diff -r bcb26c76ff8c007e2e8e6c84c0b6164cb562a2bc -r b5258234079312baa346196bf20b0ae45ab199cd source/advanced/testing.rst
--- /dev/null
+++ b/source/advanced/testing.rst
@@ -0,0 +1,221 @@
+.. _testing:
+
+=======
+Testing
+=======
+
+yt includes a testing suite that one can run on the codebase to ensure that
+no major functionality has broken.  The suite is based on the Python
+nosetests_ framework.  It includes unit tests, a basic level of testing in
+which we confirm that functions return sensible values and run without
+failure.  It also includes more rigorous answer tests, which generate output
+from yt functions and compare those results against the output of the same
+code in previous versions of yt, to ensure that results remain consistent.
+
+.. _nosetests: https://nose.readthedocs.org/en/latest/
+
+Developers should run the testing suite locally to make sure they aren't
+checking in any code that breaks existing functionality.  To further this
+goal, an automatic buildbot runs the test suite after each commit to confirm
+that recent changes have not broken anything.
+
+.. _unit_testing:
+
+Unit Testing
+------------
+
+What Do Unit Tests Do
+^^^^^^^^^^^^^^^^^^^^^
+
+How to Run Unit Tests
+^^^^^^^^^^^^^^^^^^^^^
+
+Unit tests are run in much the same way as answer tests.  First follow the
+setup instructions for `running answer testing`__, then simply execute this
+at the command line to run all unit tests:
+
+__ run_answer_testing_
+
+.. code-block:: bash
+
+   $ nosetests
+
+If you want to run a single unit test (rather than the entire suite), you
+can do so by specifying the path of the test relative to the
+``$YT_DEST/src/yt-hg/yt`` directory.  For example, to run the
+``plot_window`` tests, you'd run:
+
+.. code-block:: bash
+
+   $ nosetests visualization/tests/test_plotwindow.py
+
+How to Write Unit Tests
+^^^^^^^^^^^^^^^^^^^^^^^
+
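+A minimal nose-discoverable unit test might look like the following sketch.
+It builds a small in-memory parameter file with helpers from ``yt.testing``,
+so no external data is needed (the helper and field names here are
+assumptions drawn from the yt testing module, not a definitive recipe):
+
+.. code-block:: python
+
+   import numpy as np
+   from yt.testing import fake_random_pf, assert_equal
+
+   def test_ones_field():
+       # Build a small, randomly-filled in-memory dataset.
+       pf = fake_random_pf(16)
+       dd = pf.h.all_data()
+       # The "Ones" derived field should be exactly 1.0 everywhere.
+       assert_equal(dd["Ones"], np.ones(dd["Ones"].shape))
+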
+.. _answer_testing:
+
+Answer Testing
+--------------
+
+What Do Answer Tests Do
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Answer tests exercise **actual data**, and many operations on that data, to
+make sure that answers don't drift over time.  This is how we test
+frontends, as opposed to individual operations, in yt.
+
+.. _run_answer_testing:
+
+How to Run Answer Tests
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The very first step is to make a directory and copy over the data against which
+you want to test.  Currently, we test:
+
+ * ``DD0010/moving7_0010`` (available in ``tests/`` in the yt distribution)
+ * ``IsolatedGalaxy/galaxy0030/galaxy0030`` (available here: http://yt-project.org/data/ )
+
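+For example, one might fetch and unpack the ``IsolatedGalaxy`` dataset like
+this (the exact tarball name is an assumption; see
+http://yt-project.org/data/ for the current listing):
+
+.. code-block:: bash
+
+   $ mkdir -p ~/src/yt-data
+   $ cd ~/src/yt-data
+   $ wget http://yt-project.org/data/IsolatedGalaxy.tar.gz
+   $ tar xzf IsolatedGalaxy.tar.gz
+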
+Next, modify the file ``~/.yt/config`` to include a section ``[yt]`` 
+with the parameter ``test_data_dir``.  Set this to point to the
+directory with the test data you want to compare.  Here is an example 
+config file:
+
+.. code-block:: ini
+
+   [yt]
+   test_data_dir = /Users/tomservo/src/yt-data
+
+More data will be added over time.  To run a comparison, you must first run
+``develop`` so that the new nose plugin becomes available:
+
+.. code-block:: bash
+
+   $ cd $YT_DEST/src/yt-hg
+   $ python setup.py develop
+
+Then, in the same directory,
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing
+
+The current gold standard results will be downloaded from the Amazon cloud
+and compared to what is generated locally.  The results from a nose testing
+session are straightforward to understand: the result for each test is
+printed directly to STDOUT.  If a test passes, nose prints a period; if it
+fails, an ``F``; and if it encounters an exception or errors out for some
+reason, an ``E``.  If you also want to run tests for the 'big' datasets,
+then in the yt directory,
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing --answer-big-data
+
+How to Write Answer Tests
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Tests can be added in the file ``yt/utilities/answer_testing/framework.py``.
+You can find examples there of how to write a test.  Here is a trivial example:
+
+.. code-block:: python
+
+   # When adding a test inside framework.py, AnswerTestingTest, np, and
+   # assert_equal should already be in scope; shown here for clarity.
+   import numpy as np
+   from yt.testing import assert_equal
+
+   class MaximumValue(AnswerTestingTest):
+       # _type_name identifies the test in the stored answers; _attrs lists
+       # the attributes (beyond the parameter file) that define the test.
+       _type_name = "MaximumValue"
+       _attrs = ("field",)
+
+       def __init__(self, pf_fn, field):
+           super(MaximumValue, self).__init__(pf_fn)
+           self.field = field
+
+       def run(self):
+           # Find the maximum value of the field and its location.
+           v, c = self.pf.h.find_max(self.field)
+           result = np.empty(4, dtype="float64")
+           result[0] = v
+           result[1:] = c
+           return result
+
+       def compare(self, new_result, old_result):
+           assert_equal(new_result, old_result)
+
+This test calculates the location and value of the maximum of a field.  It
+packs both into the array ``result``, returns that from ``run``, and in
+``compare`` verifies that the new and old results are exactly equal.
+
+To write a new test:
+
+ * Subclass ``AnswerTestingTest``
+ * Add the attributes ``_type_name`` (a string) and ``_attrs`` 
+   (a tuple of strings, one for each attribute that defines the test -- 
+   see how this is done for projections, for instance)
+ * Implement the two routines ``run`` and ``compare``.  The first
+   should return a result and the second should compare a result to an old 
+   result.  Neither should yield, but instead actually return.  If you need 
+   additional arguments to the test, implement an ``__init__`` routine.
+ * Keep in mind that *everything* returned from ``run`` will be stored.  
+   So if you are going to return a huge amount of data, please ensure that 
+   the test only gets run for small data.  If you want a fast way to
+   measure something as being similar or different, either an md5 hash
+   (see the grid values test, and the sketch after this list) or a sum and
+   std of an array act as good proxies.
+ * Typically for derived values, we compare to 10 or 12 decimal places.  
+   For exact values, we compare exactly.
+
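+For instance, a test that stores an md5 hash rather than a full array might
+look like the following sketch (``FieldHashTest`` is a hypothetical example,
+not an existing class in the framework):
+
+.. code-block:: python
+
+   import hashlib
+
+   class FieldHashTest(AnswerTestingTest):
+       _type_name = "FieldHash"
+       _attrs = ("field",)
+
+       def __init__(self, pf_fn, field):
+           super(FieldHashTest, self).__init__(pf_fn)
+           self.field = field
+
+       def run(self):
+           # Store only a small digest of the field, not the field itself.
+           dd = self.pf.h.all_data()
+           return hashlib.md5(dd[self.field].tostring()).hexdigest()
+
+       def compare(self, new_result, old_result):
+           assert_equal(new_result, old_result)
+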
+How to Add Data to the Testing Suite
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add data to the testing suite, first write a new set of tests for the data.  
+The Enzo example in ``yt/frontends/enzo/tests/test_outputs.py`` is 
+considered canonical.  Do these things:
+
+ * Create a new directory, ``tests`` inside the frontend's directory.
+
+ * Create a new file, ``test_outputs.py`` in the frontend's ``tests`` 
+   directory.
+
+ * Create a new routine that operates similarly to the routines you can see 
+   in Enzo's outputs.
+
+   * This routine should test a number of different fields and data objects.
+
+   * The test routine itself should be decorated with
+     ``@requires_pf(file_name)``.  This decorator can accept the argument
+     ``big_data`` if the data is too big to run all the time.
+
+   * There are ``small_patch_amr`` and ``big_patch_amr`` routines that
+     you can yield from to execute a bunch of standard tests (see the
+     sketch after this list).  This is where you should start, and then
+     yield additional tests that stress the outputs in whatever ways are
+     necessary to ensure functionality.
+
+   * **All tests should be yielded!**
+
+If you are adding to a frontend that has a few tests already, skip the first 
+two steps.
+
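+As a concrete illustration, a frontend's ``test_outputs.py`` might look like
+the following sketch, modeled on the Enzo example (the dataset path and
+field names are placeholders; adapt them to your frontend):
+
+.. code-block:: python
+
+   from yt.testing import *
+   from yt.utilities.answer_testing.framework import \
+       requires_pf, small_patch_amr, data_dir_load
+
+   _fields = ("Temperature", "Density", "VelocityMagnitude")
+
+   m7 = "DD0010/moving7_0010"
+
+   @requires_pf(m7)
+   def test_moving7():
+       pf = data_dir_load(m7)
+       # Yield every test; nose collects and runs each one individually.
+       yield assert_equal, str(pf), "moving7_0010"
+       for test in small_patch_amr(m7, _fields):
+           yield test
+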
+How to Upload Answers
+^^^^^^^^^^^^^^^^^^^^^
+
+To upload answers, you can execute this command:
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing frontends/enzo/ --answer-store --answer-name=whatever
+
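+Once stored, it may be possible to compare against that named set by running
+the same command without ``--answer-store`` (this usage is inferred from the
+options above, so treat it as an assumption):
+
+.. code-block:: bash
+
+   $ nosetests --with-answer-testing frontends/enzo/ --answer-name=whatever
+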
+The current version of the gold standard can be found in the variable
+``_latest`` inside ``yt/utilities/answer_testing/framework.py``.  As of
+the time of this writing, it is ``gold001``.  Note that the name of the
+suite of results is now disconnected from the parameter file's name, so you
+can upload multiple outputs with the same name and not collide.
+
+To upload answers, you **must** have the package ``boto`` installed, and you
+**must** have an Amazon key provided by Matt.  Contact Matt for these keys.
+
+What Needs to be Done
+^^^^^^^^^^^^^^^^^^^^^
+
+ * Many of the old answer tests need to be converted.  This includes tests
+   for halos, volume renderings, data object access, and profiles.  These
+   will require taking the old tests and converting them over, but the
+   process should be straightforward.
+ * We need data for Orion, Nyx, FLASH, and any other codes that want to be
+   tested.
+ * Tests need to be written for Orion, Nyx, and FLASH.


https://bitbucket.org/yt_analysis/yt-doc/commits/330aff3969ee/
changeset:   330aff3969ee
user:        chummels
date:        2013-02-08 08:28:20
summary:     Linking Sam's bounding box and grid volume rendering overlay python script in the cookbook docs.
affected #:  1 file

diff -r b5258234079312baa346196bf20b0ae45ab199cd -r 330aff3969eef326ec8db8d789ebdacade5bd95c source/cookbook/complex_plots.rst
--- a/source/cookbook/complex_plots.rst
+++ b/source/cookbook/complex_plots.rst
@@ -148,6 +148,15 @@
 
 .. yt_cookbook:: amrkdtree_downsampling.py
 
+Volume Rendering with Bounding Box and Overlaid Grids
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This recipe demonstrates how to overplot a bounding box on a volume
+rendering, as well as grids representing the level of refinement achieved in
+different regions of the simulation.
+
+.. yt_cookbook:: rendering_with_box_and_grids.py
+
 Plotting Streamlines
 ~~~~~~~~~~~~~~~~~~~~
 


https://bitbucket.org/yt_analysis/yt-doc/commits/521bc319171e/
changeset:   521bc319171e
user:        MatthewTurk
date:        2013-02-13 21:36:29
summary:     Merged in chummels/yt-doc (pull request #71)

Adding Answer Testing Docs to Documentation
affected #:  3 files

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


