[yt-svn] commit/yt-doc: 4 new changesets

commits-noreply at bitbucket.org
Tue Jun 11 09:42:04 PDT 2013


4 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/f22fb71a18bc/
Changeset:   f22fb71a18bc
User:        ngoldbaum
Date:        2013-06-07 21:52:50
Summary:     Adjusting the documentation to reflect the new way to run the unit and answer tests.
Affected #:  1 file

diff -r aa42eb07c72160460f230d98f43f2398aa19546e -r f22fb71a18bca22946f5dbd1b7a1eb3bce656cb0 source/advanced/testing.rst
--- a/source/advanced/testing.rst
+++ b/source/advanced/testing.rst
@@ -4,14 +4,15 @@
 Testing
 =======
 
-yt includes a testing suite which one can run on the codebase to assure that
-no major functional breaks have occurred.  This testing suite is based on
-python nosetests_.  It consists of unit testing, a basic level of testing
-where we confirm that the units in functions make sense and that these functions
-will run without failure.  The testing suite also includes more rigorous
-tests, like answer tests, which involve generating output from yt functions,
-and comparing and matching those results against outputs of the same code in
-previous versions of yt for consistency in results.
+yt includes a testing suite which one can run on the codebase to assure that no
+breaks in functionality have occurred.  This testing suite the Nose_ testing
+framework.  The suite consists of two components, unit tests and answer
+tests. Unit tests confirm that an isolated piece of functionality runs without
+failure for inputs with known correct outputs.  Answer tests verify the
+integration and compatibility of individual code units by generating output
+from user-visible yt functions and comparing the results against outputs of
+the same functions produced using older versions of the yt codebase.
+This ensures consistency in results as development proceeds.
 
 .. _Nose: https://nose.readthedocs.org/en/latest/
 
@@ -35,17 +36,25 @@
 assertions, and Nose identifies those scripts, runs them, and verifies that the
 assertions are true.
 
-How to Run Unit Tests
-^^^^^^^^^^^^^^^^^^^^^
+How to Run the Tests
+^^^^^^^^^^^^^^^^^^^
 
-One can run unit tests in a similar way to running answer tests.  First
-follow the setup instructions on `running answer testing`__, then simply
-execute this at the command line to run all unit tests:
+One can run the unit tests very straightforwardly from any python interpreter
+that can import the yt module:
 
-__ run_answer_testing_
+.. code-block:: python
+
+   >>> import yt
+   >>> yt.run_nose()
+
+If you are developing new functionality, it is sometimes more convenient to use
+the Nose command line interface, ``nosetests``. You can run the unit tests
+using ``nosetests`` by navigating to the base directory of the yt mercurial
+repository and invoking ``nosetests``:
 
 .. code-block:: bash
 
+   $ cd $YT_HG
    $ nosetests
 
 If you want to specify a specific unit test to run (and not run the entire
@@ -66,20 +75,24 @@
 document, as in some cases they belong to other packages.  However, a few come
 in handy:
 
- * :func:`yt.testing.fake_random_pf` provides the ability to create a random parameter file,
-   with several fields and divided into several different grids, that can be
-   operated on.
+ * :func:`yt.testing.fake_random_pf` provides the ability to create a random
+   parameter file, with several fields and divided into several different
+   grids, that can be operated on.
  * :func:`yt.testing.assert_equal` can operate on arrays.
- * :func:`yt.testing.assert_almost_equal` can operate on arrays and accepts a relative
-   allowable difference.
- * :func:`yt.testing.amrspace` provides the ability to create AMR grid structures.
- * :func:`~yt.testing.expand_keywords` provides the ability to iterate over many values for
-   keywords.
+ * :func:`yt.testing.assert_almost_equal` can operate on arrays and accepts a
+   relative allowable difference.
+ * :func:`yt.testing.amrspace` provides the ability to create AMR grid
+   structures.
+ * :func:`~yt.testing.expand_keywords` provides the ability to iterate over
+   many values for keywords.
 
 To create new unit tests:
 
  #. Create a new ``tests/`` directory next to the file containing the
-    functionality you want to test.
+    functionality you want to test.  Be sure to add this new directory as a
+    subpackage in the ``setup.py`` script of the directory that contains the
+    new ``tests/`` folder.  This ensures that the tests will be included in
+    yt source and binary distributions.
  #. Inside that directory, create a new python file prefixed with ``test_`` and
     including the name of the functionality.
  #. Inside that file, create one or more routines prefixed with ``test_`` that
@@ -138,27 +151,38 @@
    [yt]
    test_data_dir = /Users/tomservo/src/yt-data
 
-More data will be added over time.  To run a comparison, you must first run
-"develop" so that the new nose plugin becomes available:
+More data will be added over time.  To run the tests, you can import the yt
+module and invoke ``yt.run_nose()`` with a new keyword argument:
+
+.. code-block:: python
+
+   >>> import yt
+   >>> yt.run_nose(run_answer_tests=True)
+
+If you have installed yt using ``python setup.py develop`` you can also
+optionally invoke nose using the ``nosetests`` command line interface:
 
 .. code-block:: bash
 
-   $ cd $YT_DEST/src/yt-hg
-   $ python setup.py develop
-
-Then, in the same directory,
-
-.. code-block:: bash
-
+   $ cd $YT_HG
    $ nosetests --with-answer-testing
 
-The current gold standard results will be downloaded from the amazon cloud
-and compared to what is generated locally.  The results from a nose testing
-session are pretty straightforward to understand, the results for each test
-are printed directly to STDOUT. If a test passes, nose prints a period, F if
-a test fails, and E if the test encounters an exception or errors out for
-some reason.  If you want to also run tests for the 'big' datasets, then in
-the yt directory,
+In either case, the current gold standard results will be downloaded from the
+Amazon cloud and compared to what is generated locally.  The results from a
+nose testing session are straightforward to understand: the result for each
+test is printed directly to STDOUT.  If a test passes, nose prints a period;
+if a test fails, it prints an F; and if the test encounters an exception or
+errors out for some reason, it prints an E.  If you want to also run tests
+for the 'big' datasets, then you can use the ``answer_big_data`` keyword argument:
+
+.. code-block:: python
+
+   >>> import yt
+   >>> yt.run_nose(run_answer_tests=True, answer_big_data=True)
+
+or, in the base directory of the yt mercurial repository:
 
 .. code-block:: bash
 
@@ -169,7 +193,7 @@
 
 .. code-block:: bash
 
-   $ nosetests --with-answer-testing yt.frontends.enzo.tests.test_outputs
+   $ nosetests --with-answer-testing yt.frontends.enzo
 
 How to Write Answer Tests
 ^^^^^^^^^^^^^^^^^^^^^^^^^

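An aside on the conventions this changeset documents: Nose collects any file and
callable prefixed with ``test_`` and checks the assertions inside.  A minimal,
self-contained sketch of such a test file (``scale_density`` is a hypothetical
stand-in for the yt functionality under test, not a real yt function):

```python
# Hypothetical sketch of a Nose-style unit test file, e.g.
# tests/test_scaling.py.  Nose discovers files and callables whose
# names start with ``test_`` and verifies the assertions inside.

def scale_density(values, factor):
    # Stand-in for the piece of yt functionality under test.
    return [v * factor for v in values]

def test_scale_density_doubles():
    # Input with a known correct output, as the changeset describes.
    assert scale_density([1.0, 2.0, 3.0], 2.0) == [2.0, 4.0, 6.0]

def test_scale_density_identity():
    # Scaling by 1.0 should leave the values unchanged.
    assert scale_density([5.0], 1.0) == [5.0]

if __name__ == "__main__":
    # Nose would discover and run these automatically; run them
    # directly here for a quick manual check.
    test_scale_density_doubles()
    test_scale_density_identity()
```

Saved under a ``tests/`` directory, both routines would be picked up by
``nosetests`` automatically, with a period printed per passing test, an F per
failure, and an E per error, as described above.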

https://bitbucket.org/yt_analysis/yt-doc/commits/f14664da0922/
Changeset:   f14664da0922
User:        ngoldbaum
Date:        2013-06-07 22:48:52
Summary:     Fixing a title formatting issue.

testing.rst edited online with Bitbucket
Affected #:  1 file

diff -r f22fb71a18bca22946f5dbd1b7a1eb3bce656cb0 -r f14664da0922d9f3b27ecc1671bc3012e6b78115 source/advanced/testing.rst
--- a/source/advanced/testing.rst
+++ b/source/advanced/testing.rst
@@ -37,7 +37,7 @@
 assertions are true.
 
 How to Run the Tests
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^
 
 One can run the unit tests very straightforwardly from any python interpreter
 that can import the yt module:

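For context on this one-character fix: reStructuredText requires a section
underline to be at least as long as the title text; a shorter underline makes
docutils emit a "Title underline too short" warning.  A minimal illustration:

```rst
How to Run the Tests
^^^^^^^^^^^^^^^^^^^

.. The 19-character underline above is one short of the 20-character
   title, so docutils warns "Title underline too short."  Matching the
   title length fixes it:

How to Run the Tests
^^^^^^^^^^^^^^^^^^^^
```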

https://bitbucket.org/yt_analysis/yt-doc/commits/31a8bf9304ef/
Changeset:   31a8bf9304ef
User:        ngoldbaum
Date:        2013-06-07 23:20:49
Summary:     Addressing Cameron's comments.
Affected #:  1 file

diff -r f22fb71a18bca22946f5dbd1b7a1eb3bce656cb0 -r 31a8bf9304eff85e66872d00800288bd57ee37b6 source/advanced/testing.rst
--- a/source/advanced/testing.rst
+++ b/source/advanced/testing.rst
@@ -5,8 +5,8 @@
 =======
 
 yt includes a testing suite which one can run on the codebase to assure that no
-breaks in functionality have occurred.  This testing suite the Nose_ testing
-framework.  The suite consists of two components, unit tests and answer
+breaks in functionality have occurred.  This testing suite is based on the Nose_
+testing framework.  The suite consists of two components, unit tests and answer
 tests. Unit tests confirm that an isolated piece of functionality runs without
 failure for inputs with known correct outputs.  Answer tests verify the
 integration and compatibility of individual code units by generating output
@@ -36,8 +36,8 @@
 assertions, and Nose identifies those scripts, runs them, and verifies that the
 assertions are true.
 
-How to Run the Tests
-^^^^^^^^^^^^^^^^^^^
+How to Run the Unit Tests
+^^^^^^^^^^^^^^^^^^^^^^^^^
 
 One can run the unit tests very straightforwardly from any python interpreter
 that can import the yt module:
@@ -124,8 +124,8 @@
 
 .. _run_answer_testing:
 
-How to Run Answer Tests
-^^^^^^^^^^^^^^^^^^^^^^^
+How to Run the Answer Tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The very first step is to make a directory and copy over the data against which
 you want to test.  Currently, we test:
@@ -246,7 +246,7 @@
  * Typically for derived values, we compare to 10 or 12 decimal places.
    For exact values, we compare exactly.
 
-How to add data to the testing suite
+How to Add Data to the Testing Suite
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 To add data to the testing suite, first write a new set of tests for the data.


https://bitbucket.org/yt_analysis/yt-doc/commits/d4865fc6147b/
Changeset:   d4865fc6147b
User:        ngoldbaum
Date:        2013-06-07 23:21:57
Summary:     Merging.
Affected #:  1 file

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


