[yt-svn] commit/yt: bwkeller: Merged in ngoldbaum/yt (pull request #1848)

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Mon Nov 16 11:15:57 PST 2015


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/cba5e95e12bd/
Changeset:   cba5e95e12bd
Branch:      yt
User:        bwkeller
Date:        2015-11-16 19:15:45+00:00
Summary:     Merged in ngoldbaum/yt (pull request #1848)

Linting yt.data_objects, yt.utilities, and top-level yt.* submodules. Adding a flake8 test
Affected #:  167 files

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be coding_styleguide.txt
--- /dev/null
+++ b/coding_styleguide.txt
@@ -0,0 +1,101 @@
+Style Guide for Coding in yt
+============================
+
+Coding Style Guide
+------------------
+
+ * In general, follow PEP-8 guidelines.
+   http://www.python.org/dev/peps/pep-0008/
+ * Classes are ``ConjoinedCapitals``, methods and functions are
+   ``lowercase_with_underscores``.
+ * Use 4 spaces, not tabs, to represent indentation.
+ * Line widths should not be more than 80 characters.
+ * Do not use nested classes unless you have a very good reason to, such as
+   requiring a namespace or class-definition modification.  Classes should live
+   at the top level.  ``__metaclass__`` is exempt from this.
+ * Do not use unnecessary parentheses in conditionals.  ``if((something) and
+   (something_else))`` should be rewritten as
+   ``if something and something_else``. Python is more forgiving than C.
+ * Avoid copying memory when possible. For example, don't do
+   ``a = a.reshape(3,4)`` when ``a.shape = (3,4)`` will do, and ``a = a * 3``
+   should be ``np.multiply(a, 3, a)``.
+ * In general, avoid all double-underscore method names: ``__something`` is
+   usually unnecessary.
+ * When writing a subclass, use the ``super`` built-in to access the superclass
+   rather than naming it explicitly. Ex: ``super(SpecialGridSubclass, self).__init__()``
+   rather than ``SpecialGrid.__init__(self)``.
+ * Docstrings should describe input, output, behavior, and any state changes
+   that occur on an object.  See the file ``doc/docstring_example.txt`` for a
+   fiducial example of a docstring.
+ * Use only one top-level import per line. Unless there is a good reason not to,
+   imports should happen at the top of the file, after the copyright blurb.
+ * Never compare with ``True`` or ``False`` using ``==`` or ``!=``, always use
+   ``is`` or ``is not``.
+ * If you are comparing with a numpy boolean array, just refer to the array.
+   Ex: do ``np.all(array)`` instead of ``np.all(array == True)``.
+ * Never compare with None using ``==`` or ``!=``, use ``is None`` or
+   ``is not None``.
+ * Use ``statement is not True`` instead of ``not statement is True``.
+ * Only one statement per line, do not use semicolons to put two or more
+   statements on a single line.
+ * Only declare local variables if they will be used later. If you do not use the
+   return value of a function, do not store it in a variable.
+ * Add tests for new functionality. When fixing a bug, consider adding a test to
+   prevent the bug from recurring.
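
Several of the bullets above (in-place numpy operations, ``is None``, boolean
arrays, and the ``super`` built-in) can be condensed into one short Python
sketch; the class names here are invented for illustration and are not yt APIs:

```python
import numpy as np

# In-place operations avoid allocating a new array.
a = np.arange(12, dtype=np.float64)
a.shape = (3, 4)      # reshape in place rather than a = a.reshape(3, 4)
np.multiply(a, 3, a)  # triple in place rather than a = a * 3

# Prefer ``is``/``is not`` for None, and plain truthiness for boolean arrays.
mask = a > -1.0
assert np.all(mask)    # not np.all(mask == True)
result = None
assert result is None  # not result == None

# Use the super built-in instead of naming the parent class explicitly.
class Grid:
    def __init__(self):
        self.level = 0

class SpecialGridSubclass(Grid):
    def __init__(self):
        super(SpecialGridSubclass, self).__init__()
        self.level += 1
```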
+
+API Guide
+---------
+
+ * Do not use ``from some_module import *``
+ * Internally, only import from source files directly -- instead of:
+
+     ``from yt.visualization.api import ProjectionPlot``
+
+   do:
+
+     ``from yt.visualization.plot_window import ProjectionPlot``
+
+ * Import symbols from the module where they are defined, avoid transitive
+   imports.
+ * Import standard library modules, functions, and classes directly from the
+   standard library; do not import them from other yt files.
+ * Numpy is to be imported as ``np``.
+ * Do not use too many keyword arguments.  If you have a lot of keyword
+   arguments, then you are doing too much in ``__init__`` and not enough via
+   parameter setting.
+ * In function arguments, place spaces after commas.  ``def something(a,b,c)``
+   should be ``def something(a, b, c)``.
+ * Don't create a new class to replicate the functionality of an old class --
+   replace the old class.  Too many options make for a confusing user
+   experience.
+ * Parameter files external to yt are a last resort.
+ * The usage of the ``**kwargs`` construction should be avoided.  If they cannot
+   be avoided, they must be explained, even if they are only to be passed on to
+   a nested function.
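
The ``**kwargs`` rule above can be sketched as follows; ``ProfilePlot`` is a
hypothetical class for illustration, not a yt API:

```python
# Instead of a constructor that swallows ``**kwargs``, spell out the few
# keyword arguments you actually support, and let further configuration
# happen through parameter setting after construction.
class ProfilePlot:  # hypothetical, for illustration only
    def __init__(self, field, axis=0):
        self.field = field
        self.axis = axis
        self.colormap = "gray"  # configured afterwards, not via **kwargs

plot = ProfilePlot("density", axis=2)
plot.colormap = "magma"
```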
+
+Variable Names and Enzo-isms
+----------------------------
+Avoid Enzo-isms.  This includes but is not limited to:
+
+ * Hard-coding parameter names that are the same as those in Enzo.  The
+   following translation table should be of some help.  Note that the
+   parameters are now properties on a ``Dataset`` subclass: you access them
+   like ``ds.refine_by``.
+
+    - ``RefineBy`` => ``refine_by``
+    - ``TopGridRank`` => ``dimensionality``
+    - ``TopGridDimensions`` => ``domain_dimensions``
+    - ``InitialTime`` => ``current_time``
+    - ``DomainLeftEdge`` => ``domain_left_edge``
+    - ``DomainRightEdge`` => ``domain_right_edge``
+    - ``CurrentTimeIdentifier`` => ``unique_identifier``
+    - ``CosmologyCurrentRedshift`` => ``current_redshift``
+    - ``ComovingCoordinates`` => ``cosmological_simulation``
+    - ``CosmologyOmegaMatterNow`` => ``omega_matter``
+    - ``CosmologyOmegaLambdaNow`` => ``omega_lambda``
+    - ``CosmologyHubbleConstantNow`` => ``hubble_constant``
+
+ * Do not assume that the domain runs from 0 to 1.  This is not true for many
+   codes and datasets.
+ * Variable names should be short but descriptive.
+ * No globals!
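
The Enzo-to-yt translation table above can be captured as a plain mapping,
e.g. when porting old scripts; this dict and helper are illustrative, not part
of the yt API:

```python
# Mapping from Enzo parameter names to yt Dataset attribute names,
# transcribed from the translation table in the style guide.
ENZO_TO_YT = {
    "RefineBy": "refine_by",
    "TopGridRank": "dimensionality",
    "TopGridDimensions": "domain_dimensions",
    "InitialTime": "current_time",
    "DomainLeftEdge": "domain_left_edge",
    "DomainRightEdge": "domain_right_edge",
    "CurrentTimeIdentifier": "unique_identifier",
    "CosmologyCurrentRedshift": "current_redshift",
    "ComovingCoordinates": "cosmological_simulation",
    "CosmologyOmegaMatterNow": "omega_matter",
    "CosmologyOmegaLambdaNow": "omega_lambda",
    "CosmologyHubbleConstantNow": "hubble_constant",
}

def to_yt_name(enzo_name):
    """Return the yt Dataset attribute name for an Enzo parameter name."""
    return ENZO_TO_YT[enzo_name]
```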

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be doc/coding_styleguide.txt
--- a/doc/coding_styleguide.txt
+++ /dev/null
@@ -1,80 +0,0 @@
-Style Guide for Coding in yt
-============================
-
-Coding Style Guide
-------------------
-
- * In general, follow PEP-8 guidelines.
-   http://www.python.org/dev/peps/pep-0008/
- * Classes are ConjoinedCapitals, methods and functions are
-   lowercase_with_underscores.
- * Use 4 spaces, not tabs, to represent indentation.
- * Line widths should not be more than 80 characters.
- * Do not use nested classes unless you have a very good reason to, such as
-   requiring a namespace or class-definition modification.  Classes should live
-   at the top level.  __metaclass__ is exempt from this.
- * Do not use unnecessary parenthesis in conditionals.  if((something) and
-   (something_else)) should be rewritten as if something and something_else.
-   Python is more forgiving than C.
- * Avoid copying memory when possible. For example, don't do 
-   "a = a.reshape(3,4)" when "a.shape = (3,4)" will do, and "a = a * 3" should
-   be "np.multiply(a, 3, a)".
- * In general, avoid all double-underscore method names: __something is usually
-   unnecessary.
- * When writing a subclass, use the super built-in to access the super class,
-   rather than explicitly. Ex: "super(SpecialGrid, self).__init__()" rather than
-   "SpecialGrid.__init__()".
- * Doc strings should describe input, output, behavior, and any state changes
-   that occur on an object.  See the file `doc/docstring_example.txt` for a
-   fiducial example of a docstring.
-
-API Guide
----------
-
- * Do not import "*" from anything other than "yt.funcs".
- * Internally, only import from source files directly -- instead of:
-
-   from yt.visualization.api import ProjectionPlot
-
-   do:
-
-   from yt.visualization.plot_window import ProjectionPlot
-
- * Numpy is to be imported as "np", after a long time of using "na".
- * Do not use too many keyword arguments.  If you have a lot of keyword
-   arguments, then you are doing too much in __init__ and not enough via
-   parameter setting.
- * In function arguments, place spaces before commas.  def something(a,b,c)
-   should be def something(a, b, c).
- * Don't create a new class to replicate the functionality of an old class --
-   replace the old class.  Too many options makes for a confusing user
-   experience.
- * Parameter files external to yt are a last resort.
- * The usage of the **kwargs construction should be avoided.  If they cannot
-   be avoided, they must be explained, even if they are only to be passed on to
-   a nested function.
-
-Variable Names and Enzo-isms
-----------------------------
-
- * Avoid Enzo-isms.  This includes but is not limited to:
-   * Hard-coding parameter names that are the same as those in Enzo.  The
-     following translation table should be of some help.  Note that the
-     parameters are now properties on a Dataset subclass: you access them
-     like ds.refine_by .
-     * RefineBy => refine_by
-     * TopGridRank => dimensionality
-     * TopGridDimensions => domain_dimensions
-     * InitialTime => current_time
-     * DomainLeftEdge => domain_left_edge
-     * DomainRightEdge => domain_right_edge
-     * CurrentTimeIdentifier => unique_identifier
-     * CosmologyCurrentRedshift => current_redshift
-     * ComovingCoordinates => cosmological_simulation
-     * CosmologyOmegaMatterNow => omega_matter
-     * CosmologyOmegaLambdaNow => omega_lambda
-     * CosmologyHubbleConstantNow => hubble_constant
-   * Do not assume that the domain runs from 0 .. 1.  This is not true
-     everywhere.
- * Variable names should be short but descriptive.
- * No globals!

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -59,7 +59,7 @@
   from yt.analysis_modules.halo_finding.api import *
 
   ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')
-  halo_list = parallelHF(ds)
+  halo_list = HaloFinder(ds)
   halo_list.dump('MyHaloList')
 
 Ellipsoid Parameters

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -501,11 +501,7 @@
 subtle art in estimating the amount of memory needed for halo finding, but a
 rule of thumb is that the HOP halo finder is the most memory intensive
 (:func:`HaloFinder`), and Friends of Friends (:func:`FOFHaloFinder`) being the
-most memory-conservative.  It has been found that :func:`parallelHF` needs
-roughly 1 MB of memory per 5,000 particles, although recent work has improved
-this and the memory requirement is now smaller than this. But this is a good
-starting point for beginning to calculate the memory required for halo-finding.
-For more information, see :ref:`halo_finding`.
+most memory-conservative. For more information, see :ref:`halo_finding`.
 
 **Volume Rendering**
 

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -494,80 +494,4 @@
 
 .. _code-style-guide:
 
-Code Style Guide
-----------------
-
-To keep things tidy, we try to stick with a couple simple guidelines.
-
-General Guidelines
-++++++++++++++++++
-
-* In general, follow `PEP-8 <http://www.python.org/dev/peps/pep-0008/>`_ guidelines.
-* Classes are ConjoinedCapitals, methods and functions are
-  ``lowercase_with_underscores.``
-* Use 4 spaces, not tabs, to represent indentation.
-* Line widths should not be more than 80 characters.
-* Do not use nested classes unless you have a very good reason to, such as
-  requiring a namespace or class-definition modification.  Classes should live
-  at the top level.  ``__metaclass__`` is exempt from this.
-* Do not use unnecessary parentheses in conditionals.  ``if((something) and
-  (something_else))`` should be rewritten as ``if something and
-  something_else``.  Python is more forgiving than C.
-* Avoid copying memory when possible. For example, don't do ``a =
-  a.reshape(3,4)`` when ``a.shape = (3,4)`` will do, and ``a = a * 3`` should be
-  ``np.multiply(a, 3, a)``.
-* In general, avoid all double-underscore method names: ``__something`` is
-  usually unnecessary.
-* Doc strings should describe input, output, behavior, and any state changes
-  that occur on an object.  See the file `doc/docstring_example.txt` for a
-  fiducial example of a docstring.
-
-API Guide
-+++++++++
-
-* Do not import "*" from anything other than ``yt.funcs``.
-* Internally, only import from source files directly; instead of: ``from
-  yt.visualization.api import SlicePlot`` do
-  ``from yt.visualization.plot_window import SlicePlot``.
-* Numpy is to be imported as ``np``.
-* Do not use too many keyword arguments.  If you have a lot of keyword
-  arguments, then you are doing too much in ``__init__`` and not enough via
-  parameter setting.
-* In function arguments, place spaces before commas.  ``def something(a,b,c)``
-  should be ``def something(a, b, c)``.
-* Don't create a new class to replicate the functionality of an old class --
-  replace the old class.  Too many options makes for a confusing user
-  experience.
-* Parameter files external to yt are a last resort.
-* The usage of the ``**kwargs`` construction should be avoided.  If they
-  cannot be avoided, they must be explained, even if they are only to be
-  passed on to a nested function.
-* Constructor APIs should be kept as *simple* as possible.
-* Variable names should be short but descriptive.
-* No global variables!
-
-Variable Names and Enzo-isms
-++++++++++++++++++++++++++++
-
-* Avoid Enzo-isms.  This includes but is not limited to:
-
-  + Hard-coding parameter names that are the same as those in Enzo.  The
-    following translation table should be of some help.  Note that the
-    parameters are now properties on a Dataset subclass: you access them
-    like ``ds.refine_by`` .
-
-    - ``RefineBy `` => `` refine_by``
-    - ``TopGridRank `` => `` dimensionality``
-    - ``TopGridDimensions `` => `` domain_dimensions``
-    - ``InitialTime `` => `` current_time``
-    - ``DomainLeftEdge `` => `` domain_left_edge``
-    - ``DomainRightEdge `` => `` domain_right_edge``
-    - ``CurrentTimeIdentifier `` => `` unique_identifier``
-    - ``CosmologyCurrentRedshift `` => `` current_redshift``
-    - ``ComovingCoordinates `` => `` cosmological_simulation``
-    - ``CosmologyOmegaMatterNow `` => `` omega_matter``
-    - ``CosmologyOmegaLambdaNow `` => `` omega_lambda``
-    - ``CosmologyHubbleConstantNow `` => `` hubble_constant``
-
-  + Do not assume that the domain runs from 0 to 1.  This is not true
-    for many codes and datasets.
+.. include:: ../../../coding_styleguide.txt

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be setup.cfg
--- a/setup.cfg
+++ b/setup.cfg
@@ -9,7 +9,11 @@
 with-xunit=1
 
 [flake8]
-# if we include api.py files, we get tons of spurious "imported but unused" errors
-exclude = */api.py,*/__config__.py,yt/visualization/_mpl_imports.py
+# we exclude:
+#      api.py and __init__.py files to avoid spurious unused import errors
+#      _mpl_imports.py for the same reason
+#      autogenerated __config__.py files
+#      vendored libraries
+exclude = */api.py,*/__init__.py,*/__config__.py,yt/visualization/_mpl_imports.py,yt/utilities/lodgeit.py,yt/utilities/poster/*,yt/extern/*,yt/mods.py
 max-line-length=999
-ignore = E111,E121,E122,E123,E124,E125,E126,E127,E128,E129,E131,E201,E202,E211,E221,E222,E228,E241,E301,E203,E225,E226,E231,E251,E261,E262,E265,E302,E303,E401,E502,E701,E703,W291,W293,W391
\ No newline at end of file
+ignore = E111,E121,E122,E123,E124,E125,E126,E127,E128,E129,E131,E201,E202,E211,E221,E222,E227,E228,E241,E301,E203,E225,E226,E231,E251,E261,E262,E265,E266,E302,E303,E402,E502,E701,E703,E731,W291,W293,W391,W503
\ No newline at end of file

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/README
--- /dev/null
+++ b/tests/README
@@ -0,0 +1,3 @@
+This directory contains two tiny enzo cosmological datasets. 
+
+They were added a long time ago and are provided for testing purposes.
\ No newline at end of file

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/boolean_regions.py
--- a/tests/boolean_regions.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from yt.utilities.answer_testing.output_tests import \
-    SingleOutputTest, create_test
-from yt.utilities.answer_testing.boolean_region_tests import \
-    TestBooleanANDGridQuantity, TestBooleanORGridQuantity, \
-    TestBooleanNOTGridQuantity, TestBooleanANDParticleQuantity, \
-    TestBooleanORParticleQuantity, TestBooleanNOTParticleQuantity
-
-create_test(TestBooleanANDGridQuantity, "BooleanANDGrid")
-
-create_test(TestBooleanORGridQuantity, "BooleanORGrid")
-
-create_test(TestBooleanNOTGridQuantity, "BooleanNOTGrid")
-
-create_test(TestBooleanANDParticleQuantity, "BooleanANDParticle")
-
-create_test(TestBooleanORParticleQuantity, "BooleanORParticle")
-
-create_test(TestBooleanNOTParticleQuantity, "BooleanNOTParticle")

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/fields_to_test.py
--- a/tests/fields_to_test.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# We want to test several things.  We need to be able to run the
-
-field_list = ["Density", "Temperature", "x-velocity", "y-velocity",
-    "z-velocity",
-    # Now some derived fields
-    "Pressure", "SoundSpeed", "particle_density", "Entropy",
-    # Ghost zones
-    "AveragedDensity", "DivV"]
-
-particle_field_list = ["particle_position_x", "ParticleMassMsun"]

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/halos.py
--- a/tests/halos.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from yt.utilities.answer_testing.output_tests import \
-    SingleOutputTest, create_test
-from yt.utilities.answer_testing.halo_tests import \
-    TestHaloCountHOP, TestHaloCountFOF, TestHaloCountPHOP
-
-create_test(TestHaloCountHOP, "halo_count_HOP", threshold=80.0)
-
-create_test(TestHaloCountFOF, "halo_count_FOF", link=0.2, padding=0.02)
-
-create_test(TestHaloCountPHOP, "halo_count_PHOP", threshold=80.0)

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/hierarchy_consistency.py
--- a/tests/hierarchy_consistency.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import numpy as na
-
-from yt.utilities.answer_testing.output_tests import \
-    YTDatasetTest, RegressionTestException
-from yt.funcs import ensure_list
-
-
-class HierarchyInconsistent(RegressionTestException):
-    pass
-
-
-class HierarchyConsistency(YTDatasetTest):
-    name = "index_consistency"
-
-    def run(self):
-        self.result = \
-            all(g in ensure_list(c.Parent) for g in self.ds.index.grids
-                                            for c in g.Children)
-
-    def compare(self, old_result):
-        if not(old_result and self.result): raise HierarchyInconsistent()
-
-
-class GridLocationsProperties(YTDatasetTest):
-    name = "level_consistency"
-
-    def run(self):
-        self.result = dict(grid_left_edge=self.ds.grid_left_edge,
-                           grid_right_edge=self.ds.grid_right_edge,
-                           grid_levels=self.ds.grid_levels,
-                           grid_particle_count=self.ds.grid_particle_count,
-                           grid_dimensions=self.ds.grid_dimensions)
-
-    def compare(self, old_result):
-        # We allow now difference between these values
-        self.compare_data_arrays(self.result, old_result, 0.0)
-
-
-class GridRelationshipsChanged(RegressionTestException):
-    pass
-
-
-class GridRelationships(YTDatasetTest):
-
-    name = "grid_relationships"
-
-    def run(self):
-        self.result = [[p.id for p in ensure_list(g.Parent) \
-            if g.Parent is not None]
-            for g in self.ds.index.grids]
-
-    def compare(self, old_result):
-        if len(old_result) != len(self.result):
-            raise GridRelationshipsChanged()
-        for plist1, plist2 in zip(old_result, self.result):
-            if len(plist1) != len(plist2): raise GridRelationshipsChanged()
-            if not all((p1 == p2 for p1, p2 in zip(plist1, plist2))):
-                raise GridRelationshipsChanged()
-
-
-class GridGlobalIndices(YTDatasetTest):
-    name = "global_startindex"
-
-    def run(self):
-        self.result = na.array([g.get_global_startindex()
-                                for g in self.ds.index.grids])
-
-    def compare(self, old_result):
-        self.compare_array_delta(old_result, self.result, 0.0)

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/object_field_values.py
--- a/tests/object_field_values.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import hashlib
-import numpy as na
-
-from yt.utilities.answer_testing.output_tests import \
-    YTDatasetTest, RegressionTestException, create_test
-from yt.funcs import ensure_list, iterable
-from fields_to_test import field_list, particle_field_list
-
-
-class FieldHashesDontMatch(RegressionTestException):
-    pass
-
-known_objects = {}
-
-
-def register_object(func):
-    known_objects[func.func_name] = func
-    return func
-
-
- at register_object
-def centered_sphere(tobj):
-    center = 0.5 * (tobj.ds.domain_right_edge + tobj.ds.domain_left_edge)
-    width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
-    tobj.data_object = tobj.ds.sphere(center, width / 0.25)
-
-
- at register_object
-def off_centered_sphere(tobj):
-    center = 0.5 * (tobj.ds.domain_right_edge + tobj.ds.domain_left_edge)
-    width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
-    tobj.data_object = tobj.ds.sphere(center - 0.25 * width, width / 0.25)
-
-
- at register_object
-def corner_sphere(tobj):
-    width = (tobj.ds.domain_right_edge - tobj.ds.domain_left_edge).max()
-    tobj.data_object = tobj.ds.sphere(tobj.ds.domain_left_edge, width / 0.25)
-
-
- at register_object
-def disk(self):
-    center = (self.ds.domain_right_edge + self.ds.domain_left_edge) / 2.
-    radius = (self.ds.domain_right_edge - self.ds.domain_left_edge).max() / 10.
-    height = (self.ds.domain_right_edge - self.ds.domain_left_edge).max() / 10.
-    normal = na.array([1.] * 3)
-    self.data_object = self.ds.disk(center, normal, radius, height)
-
-
- at register_object
-def all_data(self):
-    self.data_object = self.ds.all_data()
-
-_new_known_objects = {}
-for field in ["Density"]:  # field_list:
-    for object_name in known_objects:
-
-        def _rfunc(oname, fname):
-
-            def func(tobj):
-                known_objects[oname](tobj)
-                tobj.orig_data_object = tobj.data_object
-                avg_value = tobj.orig_data_object.quantities[
-                        "WeightedAverageQuantity"](fname, "Density")
-                tobj.data_object = tobj.orig_data_object.cut_region(
-                        ["grid['%s'] > %s" % (fname, avg_value)])
-            return func
-        _new_known_objects["%s_cut_region_%s" % (object_name, field)] = \
-                _rfunc(object_name, field)
-known_objects.update(_new_known_objects)
-
-
-class YTFieldValuesTest(YTDatasetTest):
-
-    def run(self):
-        vals = self.data_object[self.field].copy()
-        vals.sort()
-        self.result = hashlib.sha256(vals.tostring()).hexdigest()
-
-    def compare(self, old_result):
-        if self.result != old_result: raise FieldHashesDontMatch
-
-    def setup(self):
-        YTDatasetTest.setup(self)
-        known_objects[self.object_name](self)
-
-
-class YTExtractIsocontoursTest(YTFieldValuesTest):
-
-    def run(self):
-        val = self.data_object.quantities["WeightedAverageQuantity"](
-            "Density", "Density")
-        rset = self.data_object.extract_isocontours("Density",
-            val, rescale=False, sample_values="Temperature")
-        self.result = rset
-
-    def compare(self, old_result):
-        if self.result[0].size == 0 and old_result[0].size == 0:
-            return True
-        self.compare_array_delta(self.result[0].ravel(),
-                                 old_result[0].ravel(), 1e-7)
-        self.compare_array_delta(self.result[1], old_result[1], 1e-7)
-
-
-class YTIsocontourFluxTest(YTFieldValuesTest):
-
-    def run(self):
-        val = self.data_object.quantities["WeightedAverageQuantity"](
-            "Density", "Density")
-        flux = self.data_object.calculate_isocontour_flux(
-           "Density", val, "x-velocity", "y-velocity", "z-velocity")
-        self.result = flux
-
-    def compare(self, old_result):
-        self.compare_value_delta(self.result, old_result, 1e-7)
-
-for object_name in known_objects:
-    for field in field_list + particle_field_list:
-        if "cut_region" in object_name and field in particle_field_list:
-            continue
-        create_test(YTFieldValuesTest, "%s_%s" % (object_name, field),
-                    field=field, object_name=object_name)
-    create_test(YTExtractIsocontoursTest, "%s" % (object_name),
-                object_name=object_name)
-    create_test(YTIsocontourFluxTest, "%s" % (object_name),
-                object_name=object_name)
-
-
-class YTDerivedQuantityTest(YTDatasetTest):
-
-    def setup(self):
-        YTDatasetTest.setup(self)
-        known_objects[self.object_name](self)
-
-    def compare(self, old_result):
-        if hasattr(self.result, 'tostring'):
-            self.compare_array_delta(self.result, old_result, 1e-7)
-            return
-        elif iterable(self.result):
-            a1 = na.array(self.result)
-            a2 = na.array(old_result)
-            self.compare_array_delta(a1, a2, 1e-7)
-        else:
-            if self.result != old_result: raise FieldHashesDontMatch
-
-    def run(self):
-        # This only works if it takes no arguments
-        self.result = self.data_object.quantities[self.dq_name]()
-
-dq_names = ["TotalMass", "AngularMomentumVector", "CenterOfMass",
-            "BulkVelocity", "BaryonSpinParameter", "ParticleSpinParameter"]
-
-# Extrema, WeightedAverageQuantity, TotalQuantity, MaxLocation,
-# MinLocation
-
-for object_name in known_objects:
-    for dq in dq_names:
-        # Some special exceptions
-        if "cut_region" in object_name and (
-            "SpinParameter" in dq or
-            "TotalMass" in dq):
-            continue
-        create_test(YTDerivedQuantityTest, "%s_%s" % (object_name, dq),
-                    dq_name=dq, object_name=object_name)
-
-
-class YTDerivedQuantityTestField(YTDerivedQuantityTest):
-
-    def run(self):
-        self.result = self.data_object.quantities[self.dq_name](
-            self.field_name)
-
-for object_name in known_objects:
-    for field in field_list:
-        for dq in ["Extrema", "TotalQuantity", "MaxLocation", "MinLocation"]:
-            create_test(YTDerivedQuantityTestField,
-                        "%s_%s" % (object_name, field),
-                        field_name=field, dq_name=dq,
-                        object_name=object_name)
-
-
-class YTDerivedQuantityTest_WeightedAverageQuantity(YTDerivedQuantityTest):
-
-    def run(self):
-        self.result = self.data_object.quantities["WeightedAverageQuantity"](
-            self.field_name, weight="CellMassMsun")
-
-for object_name in known_objects:
-    for field in field_list:
-        create_test(YTDerivedQuantityTest_WeightedAverageQuantity,
-                    "%s_%s" % (object_name, field),
-                    field_name=field,
-                    object_name=object_name)

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/projections.py
--- a/tests/projections.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from yt.utilities.answer_testing.output_tests import \
-    SingleOutputTest, create_test
-from yt.utilities.answer_testing.hydro_tests import \
-    TestProjection, TestOffAxisProjection, TestSlice, \
-    TestRay, TestGasDistribution, Test2DGasDistribution
-
-from fields_to_test import field_list
-
-for field in field_list:
-    create_test(TestRay, "%s" % field, field=field)
-
-for axis in range(3):
-    for field in field_list:
-        create_test(TestSlice, "%s_%s" % (axis, field),
-                    field=field, axis=axis)
-
-for axis in range(3):
-    for field in field_list:
-        create_test(TestProjection, "%s_%s" % (axis, field),
-                    field=field, axis=axis)
-        create_test(TestProjection, "%s_%s_Density" % (axis, field),
-                    field=field, axis=axis, weight_field="Density")
-
-for field in field_list:
-    create_test(TestOffAxisProjection, "%s_%s" % (axis, field),
-                field=field, axis=axis)
-    create_test(TestOffAxisProjection, "%s_%s_Density" % (axis, field),
-                field=field, axis=axis, weight_field="Density")
-
-for field in field_list:
-    if field != "Density":
-        create_test(TestGasDistribution, "density_%s" % field,
-                    field_x="Density", field_y=field)
-    if field not in ("x-velocity", "Density"):
-        create_test(Test2DGasDistribution, "density_x-vel_%s" % field,
-                    field_x="Density", field_y="x-velocity", field_z=field,
-                    weight="CellMassMsun")

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/runall.py
--- a/tests/runall.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import matplotlib
-matplotlib.use('Agg')
-from yt.config import ytcfg
-ytcfg["yt", "loglevel"] = "50"
-ytcfg["yt", "serialize"] = "False"
-
-from yt.utilities.answer_testing.api import \
-    RegressionTestRunner, clear_registry, create_test, \
-    TestFieldStatistics, TestAllProjections, registry_entries, \
-    Xunit
-from yt.utilities.command_line import get_yt_version
-
-from yt.mods import *
-import fnmatch
-import imp
-import optparse
-import itertools
-import time
-
-#
-# We assume all tests are to be run, unless explicitly given the name of a
-# single test or something that can be run through fnmatch.
-#
-# Keep in mind that we use a different nomenclature here than is used in the
-# Enzo testing system.  Our 'tests' are actually tests that are small and that
-# run relatively quickly on a single dataset; in Enzo's system, a 'test'
-# encompasses both the creation and the examination of data.  Here we assume
-# the data is kept constant.
-#
-
-cwd = os.path.dirname(globals().get("__file__", os.getcwd()))
-
-
-def load_tests(iname, idir):
-    f, filename, desc = imp.find_module(iname, [idir])
-    tmod = imp.load_module(iname, f, filename, desc)
-    return tmod
-
-
-def find_and_initialize_tests():
-    mapping = {}
-    for f in glob.glob(os.path.join(cwd, "*.py")):
-        clear_registry()
-        iname = os.path.basename(f[:-3])
-        try:
-            load_tests(iname, cwd)
-            mapping[iname] = registry_entries()
-            #print "Associating %s with" % (iname)
-            #print "\n    ".join(registry_entries())
-        except ImportError:
-            pass
-    return mapping
-
-if __name__ == "__main__":
-    clear_registry()
-    mapping = find_and_initialize_tests()
-    test_storage_directory = ytcfg.get("yt", "test_storage_dir")
-    try:
-        my_hash = get_yt_version()
-    except:
-        my_hash = "UNKNOWN%s" % (time.time())
-    parser = optparse.OptionParser()
-    parser.add_option("-f", "--parameter-file", dest="parameter_file",
-        default=os.path.join(cwd, "DD0010/moving7_0010"),
-        help="The parameter file value to feed to 'load' to test against")
-    parser.add_option("-l", "--list", dest="list_tests", action="store_true",
-        default=False, help="List all tests and then exit")
-    parser.add_option("-t", "--tests", dest="test_pattern", default="*",
-        help="The test name pattern to match.  Can include wildcards.")
-    parser.add_option("-o", "--output", dest="storage_dir",
-        default=test_storage_directory,
-        help="Base directory for storing test output.")
-    parser.add_option("-c", "--compare", dest="compare_name",
-        default=None,
-        help="The name against which we will compare")
-    parser.add_option("-n", "--name", dest="this_name",
-        default=my_hash,
-        help="The name we'll call this set of tests")
-    opts, args = parser.parse_args()
-
-    if opts.list_tests:
-        tests_to_run = []
-        for m, vals in mapping.items():
-            new_tests = fnmatch.filter(vals, opts.test_pattern)
-            if len(new_tests) == 0: continue
-            load_tests(m, cwd)
-            keys = set(registry_entries())
-            tests_to_run += [t for t in new_tests if t in keys]
-        tests = list(set(tests_to_run))
-        print ("\n    ".join(tests))
-        sys.exit(0)
-
-    # Load the test ds and make sure it's good.
-    ds = load(opts.parameter_file)
-    if ds is None:
-        print "Couldn't load the specified parameter file."
-        sys.exit(1)
-
-    # Now we modify our compare name and self name to include the ds.
-    compare_id = opts.compare_name
-    watcher = None
-    if compare_id is not None:
-        compare_id += "_%s_%s" % (ds, ds._hash())
-        watcher = Xunit()
-    this_id = opts.this_name + "_%s_%s" % (ds, ds._hash())
-
-    rtr = RegressionTestRunner(this_id, compare_id,
-                               results_path=opts.storage_dir,
-                               compare_results_path=opts.storage_dir,
-                               io_log=[opts.parameter_file])
-
-    rtr.watcher = watcher
-    tests_to_run = []
-    for m, vals in mapping.items():
-        new_tests = fnmatch.filter(vals, opts.test_pattern)
-
-        if len(new_tests) == 0: continue
-        load_tests(m, cwd)
-        keys = set(registry_entries())
-        tests_to_run += [t for t in new_tests if t in keys]
-    for test_name in sorted(tests_to_run):
-        print "RUNNING TEST", test_name
-        rtr.run_test(test_name)
-    if watcher is not None:
-        rtr.watcher.report()
-    failures = 0
-    passes = 1
-    for test_name, result in sorted(rtr.passed_tests.items()):
-        if not result:
-            print "TEST %s: %s" % (test_name, result)
-            print "    %s" % rtr.test_messages[test_name]
-        if result: passes += 1
-        else: failures += 1
-    print "Number of passes  : %s" % passes
-    print "Number of failures: %s" % failures

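Editor's note: the deleted runner above selected tests by matching registry entries against a wildcard pattern and de-duplicating the result. A minimal sketch of that selection logic, using a hypothetical registry mapping (the module and test names below are illustrative, not from yt):

```python
import fnmatch

def select_tests(mapping, pattern):
    # Collect every registry entry matching the wildcard pattern,
    # then de-duplicate and sort, as the deleted runner did.
    tests_to_run = []
    for module_name, entries in mapping.items():
        tests_to_run += fnmatch.filter(entries, pattern)
    return sorted(set(tests_to_run))

# Hypothetical registry: module name -> registered test names.
mapping = {
    "projections": ["projection_x", "projection_y"],
    "halos": ["halo_count", "projection_halo"],
}
print(select_tests(mapping, "projection*"))
# ['projection_halo', 'projection_x', 'projection_y']
```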
diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be tests/volume_rendering.py
--- a/tests/volume_rendering.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from yt.mods import *
-import numpy as na
-
-from yt.utilities.answer_testing.output_tests import \
-    YTDatasetTest, RegressionTestException
-from yt.funcs import ensure_list
-
-
-class VolumeRenderingInconsistent(RegressionTestException):
-    pass
-
-
-class VolumeRenderingConsistency(YTDatasetTest):
-    name = "volume_rendering_consistency"
-
-    def run(self):
-        c = (self.ds.domain_right_edge + self.ds.domain_left_edge) / 2.
-        W = na.sqrt(3.) * (self.ds.domain_right_edge - \
-            self.ds.domain_left_edge)
-        N = 512
-        n_contours = 5
-        cmap = 'algae'
-        field = 'Density'
-        mi, ma = self.ds.all_data().quantities['Extrema'](field)[0]
-        mi, ma = na.log10(mi), na.log10(ma)
-        contour_width = (ma - mi) / 100.
-        L = na.array([1.] * 3)
-        tf = ColorTransferFunction((mi - 2, ma + 2))
-        tf.add_layers(n_contours, w=contour_width,
-                      col_bounds=(mi * 1.001, ma * 0.999),
-                      colormap=cmap, alpha=na.logspace(-1, 0, n_contours))
-        cam = self.ds.camera(c, L, W, (N, N), transfer_function=tf,
-            no_ghost=True)
-        image = cam.snapshot()
-        # image = cam.snapshot('test_rendering_%s.png'%field)
-        self.result = image
-
-    def compare(self, old_result):
-        # Compare the deltas; give a leeway of 1e-8
-        delta = na.nanmax(na.abs(self.result - old_result) /
-                                 (self.result + old_result))
-        if delta > 1e-9: raise VolumeRenderingInconsistent()

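Editor's note: the deleted volume-rendering test compared snapshots via a normalized maximum pixel difference. A sketch of that comparison with synthetic arrays standing in for actual camera snapshots (note the deleted code's comment said a leeway of 1e-8 while the check actually used 1e-9):

```python
import numpy as np

def max_relative_delta(new, old):
    # Largest per-pixel |new - old| / (new + old), ignoring NaN pixels,
    # as in the deleted compare() method.
    return np.nanmax(np.abs(new - old) / (new + old))

# Synthetic 4x4 "images" standing in for camera snapshots.
old = np.full((4, 4), 2.0)
new = old.copy()
new[0, 0] += 2e-9                   # perturb one pixel slightly
delta = max_relative_delta(new, old)
assert delta < 1e-9                 # within the deleted test's tolerance
```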
diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/__init__.py
--- a/yt/__init__.py
+++ b/yt/__init__.py
@@ -121,7 +121,6 @@
     derived_field
 
 from yt.data_objects.api import \
-    BinnedProfile1D, BinnedProfile2D, BinnedProfile3D, \
     DatasetSeries, ImageArray, \
     particle_filter, add_particle_filter, \
     create_profile, Profile1D, Profile2D, Profile3D, \

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
@@ -281,8 +281,6 @@
         errSq=sum(dif**2)
 
         if any(linesP[:,1]==speciesDict['init_b']):
-         #   linesP = prevLinesP
-
             flag = True
             break
             

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -30,10 +30,10 @@
     get_rotation_matrix, \
     periodic_dist
 from yt.utilities.physical_constants import \
-    mass_sun_cgs, \
+    mass_sun_cgs
+from yt.utilities.physical_ratios import \
+    rho_crit_g_cm3_h2, \
     TINY
-from yt.utilities.physical_ratios import \
-    rho_crit_g_cm3_h2
 
 from .hop.EnzoHop import RunHOP
 from .fof.EnzoFOF import RunFOF

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/photon_simulator/tests/test_beta_model.py
--- a/yt/analysis_modules/photon_simulator/tests/test_beta_model.py
+++ b/yt/analysis_modules/photon_simulator/tests/test_beta_model.py
@@ -14,9 +14,7 @@
     XSpecThermalModel, XSpecAbsorbModel, \
     ThermalPhotonModel, PhotonList
 from yt.config import ytcfg
-from yt.utilities.answer_testing.framework import \
-    requires_module
-from yt.testing import requires_file
+from yt.testing import requires_file, requires_module
 import numpy as np
 from yt.utilities.physical_ratios import \
     K_per_keV, mass_hydrogen_grams

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/photon_simulator/tests/test_sloshing.py
--- a/yt/analysis_modules/photon_simulator/tests/test_sloshing.py
+++ b/yt/analysis_modules/photon_simulator/tests/test_sloshing.py
@@ -17,7 +17,6 @@
 from yt.testing import requires_file
 from yt.utilities.answer_testing.framework import requires_ds, \
     GenericArrayTest, data_dir_load
-import numpy as np
 from numpy.random import RandomState
 import os
 

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/photon_simulator/tests/test_spectra.py
--- a/yt/analysis_modules/photon_simulator/tests/test_spectra.py
+++ b/yt/analysis_modules/photon_simulator/tests/test_spectra.py
@@ -1,9 +1,8 @@
 from yt.analysis_modules.photon_simulator.api import \
     TableApecModel, XSpecThermalModel
-import numpy as np
 from yt.testing import requires_module, fake_random_ds
 from yt.utilities.answer_testing.framework import \
-    GenericArrayTest, data_dir_load
+    GenericArrayTest
 from yt.config import ytcfg
 
 def setup():

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/sunyaev_zeldovich/projection.py
--- a/yt/analysis_modules/sunyaev_zeldovich/projection.py
+++ b/yt/analysis_modules/sunyaev_zeldovich/projection.py
@@ -19,11 +19,11 @@
 #-----------------------------------------------------------------------------
 
 from yt.utilities.physical_constants import sigma_thompson, clight, hcgs, kboltz, mh, Tcmb
-from yt.units.yt_array import YTQuantity
-from yt.funcs import fix_axis, mylog, iterable, get_pbar
-from yt.visualization.volume_rendering.api import off_axis_projection
+from yt.funcs import fix_axis, mylog, get_pbar
+from yt.visualization.volume_rendering.off_axis_projection import \
+    off_axis_projection
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
-     communication_system, parallel_root_only
+    communication_system, parallel_root_only
 from yt import units
 from yt.utilities.on_demand_imports import _astropy
 

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
--- a/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
+++ b/yt/analysis_modules/sunyaev_zeldovich/tests/test_projection.py
@@ -12,8 +12,17 @@
 
 from yt.frontends.stream.api import load_uniform_grid
 from yt.funcs import get_pbar
-from yt.utilities.physical_constants import cm_per_kpc, K_per_keV, \
-    mh, cm_per_km, kboltz, Tcmb, hcgs, clight, sigma_thompson
+from yt.utilities.physical_ratios import \
+    cm_per_kpc, \
+    K_per_keV, \
+    cm_per_km
+from yt.utilities.physical_constants import \
+    mh, \
+    kboltz, \
+    Tcmb, \
+    hcgs, \
+    clight, \
+    sigma_thompson
 from yt.testing import requires_module, assert_almost_equal
 from yt.utilities.answer_testing.framework import requires_ds, \
     GenericArrayTest, data_dir_load, GenericImageTest

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/analysis_modules/two_point_functions/two_point_functions.py
--- a/yt/analysis_modules/two_point_functions/two_point_functions.py
+++ b/yt/analysis_modules/two_point_functions/two_point_functions.py
@@ -26,7 +26,9 @@
 except ImportError:
     mylog.debug("The Fortran kD-Tree did not import correctly.")
 
-import math, inspect, time
+import math
+import inspect
+import time
 from collections import defaultdict
 
 sep = 12

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -16,7 +16,6 @@
 #-----------------------------------------------------------------------------
 
 import os
-import types
 from yt.extern.six.moves import configparser
 
 ytcfg_defaults = dict(

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/convenience.py
--- a/yt/convenience.py
+++ b/yt/convenience.py
@@ -13,16 +13,19 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-import os, os.path, types
+import os
 
 # Named imports
 from yt.extern.six import string_types
-from yt.funcs import *
 from yt.config import ytcfg
+from yt.funcs import mylog
 from yt.utilities.parameter_file_storage import \
     output_type_registry, \
     simulation_time_series_registry, \
     EnzoRunDatabase
+from yt.utilities.exceptions import \
+    YTOutputNotIdentified, \
+    YTSimulationNotIdentified
 from yt.utilities.hierarchy_inspection import find_lowest_subclasses
 
 def load(*args ,**kwargs):

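Editor's note: many hunks in this changeset, like the `yt/convenience.py` one above, replace `from yt.funcs import *` with explicit imports so flake8 can track name usage (star imports trigger F403 and hide unused-import F401 warnings). A toy sketch of how a star-import check can be written; this is not the actual flake8 test added by the PR:

```python
import ast

def find_star_imports(source):
    # Walk the AST and report modules pulled in via "from X import *",
    # the pattern flake8 flags as F403.
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            if any(alias.name == "*" for alias in node.names):
                offenders.append(node.module)
    return offenders

code = (
    "import os\n"
    "from yt.funcs import *\n"
    "from yt.config import ytcfg\n"
)
print(find_star_imports(code))  # ['yt.funcs']
```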
diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/analyzer_objects.py
--- a/yt/data_objects/analyzer_objects.py
+++ b/yt/data_objects/analyzer_objects.py
@@ -15,7 +15,6 @@
 
 import inspect
 
-from yt.funcs import *
 from yt.extern.six import add_metaclass
 
 analysis_task_registry = {}
@@ -23,7 +22,7 @@
 class RegisteredTask(type):
     def __init__(cls, name, b, d):
         type.__init__(cls, name, b, d)
-        if hasattr(cls, "skip") and cls.skip == False:
+        if hasattr(cls, "skip") and cls.skip is False:
             return
         analysis_task_registry[cls.__name__] = cls
 
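Editor's note: the `analyzer_objects.py` hunk above replaces an equality comparison against `False` with an identity check, one of the two fixes flake8 suggests for E712. The two forms are not interchangeable, as this small sketch shows:

```python
# "skip == False" compares by value: integer 0 also passes, because
# bool is a subclass of int and 0 == False in Python.
skip = 0
assert skip == False
# "skip is False" compares identity: only the False singleton passes,
# so a flag of 0 no longer matches.
assert not (skip is False)
flag = False
assert flag is False
```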

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/api.py
--- a/yt/data_objects/api.py
+++ b/yt/data_objects/api.py
@@ -27,11 +27,6 @@
     particle_handler_registry
 
 from .profiles import \
-    YTEmptyProfileData, \
-    BinnedProfile, \
-    BinnedProfile1D, \
-    BinnedProfile2D, \
-    BinnedProfile3D, \
     create_profile, \
     Profile1D, \
     Profile2D, \

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -15,21 +15,29 @@
 #-----------------------------------------------------------------------------
 
 import numpy as np
-import math
-import weakref
-import itertools
-import shelve
 from functools import wraps
 import fileinput
 from re import finditer
+from tempfile import TemporaryFile
 import os
+import zipfile
 
 from yt.config import ytcfg
-from yt.funcs import *
-from yt.utilities.logger import ytLogger
-from .data_containers import \
-    YTSelectionContainer1D, YTSelectionContainer2D, YTSelectionContainer3D, \
-    restore_field_information_state, YTFieldData
+from yt.data_objects.data_containers import \
+    YTSelectionContainer1D, \
+    YTSelectionContainer2D, \
+    YTSelectionContainer3D, \
+    YTFieldData
+from yt.funcs import \
+    ensure_list, \
+    mylog, \
+    get_memory_usage, \
+    iterable, \
+    only_on_root
+from yt.utilities.exceptions import \
+    YTParticleDepositionNotImplemented, \
+    YTNoAPIKey, \
+    YTTooManyVertices
 from yt.utilities.lib.QuadTree import \
     QuadTree
 from yt.utilities.lib.Interpolators import \
@@ -38,8 +46,6 @@
     fill_region
 from yt.utilities.lib.marching_cubes import \
     march_cubes_grid, march_cubes_grid_flux
-from yt.utilities.data_point_utilities import CombineGrids,\
-    DataCubeRefine, DataCubeReplace, FillRegion, FillBuffer
 from yt.utilities.minimal_representation import \
     MinimalProjectionData
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
@@ -47,16 +53,10 @@
 from yt.units.unit_object import Unit
 import yt.geometry.particle_deposit as particle_deposit
 from yt.utilities.grid_data_format.writer import write_to_gdf
+from yt.fields.field_exceptions import \
+    NeedsOriginalGrid
 from yt.frontends.stream.api import load_uniform_grid
 
-from yt.fields.field_exceptions import \
-    NeedsGridType,\
-    NeedsOriginalGrid,\
-    NeedsDataField,\
-    NeedsProperty,\
-    NeedsParameter
-from yt.fields.derived_field import \
-    TranslationFunc
 
 class YTStreamline(YTSelectionContainer1D):
     """
@@ -369,14 +369,13 @@
         data['pdy'] = self.ds.arr(pdy, code_length)
         data['fields'] = nvals
         # Now we run the finalizer, which is ignored if we don't need it
-        fd = data['fields']
         field_data = np.hsplit(data.pop('fields'), len(fields))
         for fi, field in enumerate(fields):
-            finfo = self.ds._get_field_info(*field)
             mylog.debug("Setting field %s", field)
             input_units = self._projected_units[field]
             self[field] = self.ds.arr(field_data[fi].ravel(), input_units)
-        for i in list(data.keys()): self[i] = data.pop(i)
+        for i in list(data.keys()):
+            self[i] = data.pop(i)
         mylog.info("Projection completed")
         self.tree = tree
 
@@ -939,7 +938,6 @@
         ls.current_level += 1
         ls.current_dx = ls.base_dx / \
             self.ds.relative_refinement(0, ls.current_level)
-        LL = ls.left_edge - ls.domain_left_edge
         ls.old_global_startindex = ls.global_startindex
         ls.global_startindex, end_index, ls.current_dims = \
             self._minimal_box(ls.current_dx)
@@ -1509,11 +1507,8 @@
                     color_log = True, emit_log = True, plot_index = None,
                     color_field_max = None, color_field_min = None,
                     emit_field_max = None, emit_field_min = None):
-        import io
-        from sys import version
         if plot_index is None:
             plot_index = 0
-            vmax=0
         ftype = [("cind", "uint8"), ("emit", "float")]
         vtype = [("x","float"),("y","float"), ("z","float")]
         #(0) formulate vertices
@@ -1552,7 +1547,7 @@
                 tmp = self.vertices[i,:]
                 np.divide(tmp, dist_fac, tmp)
                 v[ax][:] = tmp
-        return  v, lut, transparency, emiss, f['cind']
+        return v, lut, transparency, emiss, f['cind']
 
 
     def export_ply(self, filename, bounds = None, color_field = None,
@@ -1734,8 +1729,6 @@
         api_key = api_key or ytcfg.get("yt","sketchfab_api_key")
         if api_key in (None, "None"):
             raise YTNoAPIKey("SketchFab.com", "sketchfab_api_key")
-        import zipfile, json
-        from tempfile import TemporaryFile
 
         ply_file = TemporaryFile()
         self.export_ply(ply_file, bounds, color_field, color_map, color_log,

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -13,32 +13,39 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-import h5py
 import itertools
-import os
-import types
 import uuid
-from yt.extern.six import string_types
-
-data_object_registry = {}
 
 import numpy as np
 import weakref
 import shelve
+
+from collections import defaultdict
 from contextlib import contextmanager
 
-from yt.funcs import get_output_filename
-from yt.funcs import *
-
 from yt.data_objects.particle_io import particle_handler_registry
 from yt.frontends.ytdata.utilities import \
     save_as_dataset
+from yt.funcs import \
+    get_output_filename, \
+    mylog, \
+    ensure_list, \
+    fix_axis, \
+    iterable
 from yt.units.unit_object import UnitParseError
+from yt.units.yt_array import \
+    YTArray, \
+    YTQuantity
 from yt.utilities.exceptions import \
     YTUnitConversionError, \
     YTFieldUnitError, \
     YTFieldUnitParseError, \
-    YTSpatialFieldUnitError
+    YTSpatialFieldUnitError, \
+    YTCouldNotGenerateField, \
+    YTFieldNotParseable, \
+    YTFieldNotFound, \
+    YTFieldTypeNotFound, \
+    YTDataSelectorNotImplemented
 from yt.utilities.lib.marching_cubes import \
     march_cubes_grid, march_cubes_grid_flux
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
@@ -55,9 +62,10 @@
     compose_selector
 from yt.extern.six import add_metaclass, string_types
 
+data_object_registry = {}
+
 def force_array(item, shape):
     try:
-        sh = item.shape
         return item.copy()
     except AttributeError:
         if item:
@@ -189,7 +197,7 @@
         elif isinstance(center, string_types):
             if center.lower() in ("c", "center"):
                 self.center = self.ds.domain_center
-             # is this dangerous for race conditions?
+            # is this dangerous for race conditions?
             elif center.lower() in ("max", "m"):
                 self.center = self.ds.find_max(("gas", "density"))[1]
             elif center.startswith("max_"):
@@ -831,7 +839,7 @@
             fields_to_get.append(field)
         if len(fields_to_get) == 0 and len(fields_to_generate) == 0:
             return
-        elif self._locked == True:
+        elif self._locked is True:
             raise GenerationInProgress(fields)
         # Track which ones we want in the end
         ofields = set(list(self.field_data.keys())
@@ -1407,7 +1415,7 @@
         with child cells are left untouched.
         """
         for grid in self._grids:
-            if default_value != None:
+            if default_value is not None:
                 grid[field] = np.ones(grid.ActiveDimensions)*default_value
             grid[field][self._get_point_indices(grid)] = value
 
@@ -1474,167 +1482,3 @@
     obj = cls(*new_args)
     obj.field_parameters.update(field_parameters)
     return ReconstructedObject((ds, obj))
-
-class YTBooleanRegionBase(YTSelectionContainer3D):
-    """
-    This will build a hybrid region based on the boolean logic
-    of the regions.
-
-    Parameters
-    ----------
-    regions : list
-        A list of region objects and strings describing the boolean logic
-        to use when building the hybrid region. The boolean logic can be
-        nested using parentheses.
-
-    Examples
-    --------
-    >>> re1 = ds.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4],
-        [0.6, 0.6, 0.6])
-    >>> re2 = ds.region([0.5, 0.5, 0.5], [0.45, 0.45, 0.45],
-        [0.55, 0.55, 0.55])
-    >>> sp1 = ds.sphere([0.575, 0.575, 0.575], .03)
-    >>> toroid_shape = ds.boolean([re1, "NOT", re2])
-    >>> toroid_shape_with_hole = ds.boolean([re1, "NOT", "(", re2, "OR",
-        sp1, ")"])
-    """
-    _type_name = "boolean"
-    _con_args = ("regions",)
-    def __init__(self, regions, fields = None, ds = None, field_parameters = None, data_source = None):
-        # Center is meaningless, but we'll define it all the same.
-        YTSelectionContainer3D.__init__(self, [0.5]*3, fields, ds, field_parameters, data_source)
-        self.regions = regions
-        self._all_regions = []
-        self._some_overlap = []
-        self._all_overlap = []
-        self._cut_masks = {}
-        self._get_all_regions()
-        self._make_overlaps()
-        self._get_list_of_grids()
-
-    def _get_all_regions(self):
-        # Before anything, we simply find out which regions are involved in all
-        # of this process, uniquely.
-        for item in self.regions:
-            if isinstance(item, bytes): continue
-            self._all_regions.append(item)
-            # So cut_masks don't get messed up.
-            item._boolean_touched = True
-        self._all_regions = np.unique(self._all_regions)
-
-    def _make_overlaps(self):
-        # Using the processed cut_masks, we'll figure out what grids
-        # are left in the hybrid region.
-        pbar = get_pbar("Building boolean", len(self._all_regions))
-        for i, region in enumerate(self._all_regions):
-            try:
-                region._get_list_of_grids() # This is no longer supported.
-                alias = region
-            except AttributeError:
-                alias = region.data         # This is no longer supported.
-            for grid in alias._grids:
-                if grid in self._some_overlap or grid in self._all_overlap:
-                    continue
-                # Get the cut_mask for this grid in this region, and see
-                # if there's any overlap with the overall cut_mask.
-                overall = self._get_cut_mask(grid)
-                local = force_array(alias._get_cut_mask(grid),
-                    grid.ActiveDimensions)
-                # Below we don't want to match empty masks.
-                if overall.sum() == 0 and local.sum() == 0: continue
-                # The whole grid is in the hybrid region if a) its cut_mask
-                # in the original region is identical to the new one and b)
-                # the original region cut_mask is all ones.
-                if (local == np.bitwise_and(overall, local)).all() and \
-                        (local == True).all():
-                    self._all_overlap.append(grid)
-                    continue
-                if (overall == local).any():
-                    # Some of local is in overall
-                    self._some_overlap.append(grid)
-                    continue
-            pbar.update(i)
-        pbar.finish()
-
-    def __repr__(self):
-        # We'll do this the slow way to be clear what's going on
-        s = "%s (%s): " % (self.__class__.__name__, self.ds)
-        s += "["
-        for i, region in enumerate(self.regions):
-            if region in ["OR", "AND", "NOT", "(", ")"]:
-                s += region
-            else:
-                s += region.__repr__()
-            if i < (len(self.regions) - 1): s += ", "
-        s += "]"
-        return s
-
-    def _is_fully_enclosed(self, grid):
-        return (grid in self._all_overlap)
-
-    def _get_list_of_grids(self):
-        self._grids = np.array(self._some_overlap + self._all_overlap,
-            dtype='object')
-
-    def _get_cut_mask(self, grid, field=None):
-        if self._is_fully_enclosed(grid):
-            return True # We do not want child masking here
-        if grid.id in self._cut_masks:
-            return self._cut_masks[grid.id]
-        # If we get this far, we have to generate the cut_mask.
-        return self._get_level_mask(self.regions, grid)
-
-    def _get_level_mask(self, ops, grid):
-        level_masks = []
-        end = 0
-        for i, item in enumerate(ops):
-            if end > 0 and i < end:
-                # We skip over things inside parentheses on this level.
-                continue
-            if isinstance(item, YTDataContainer):
-                # Add this regions cut_mask to level_masks
-                level_masks.append(force_array(item._get_cut_mask(grid),
-                    grid.ActiveDimensions))
-            elif item == "AND" or item == "NOT" or item == "OR":
-                level_masks.append(item)
-            elif item == "(":
-                # recurse down, and we'll append the results, which
-                # should be a single cut_mask
-                open_count = 0
-                for ii, item in enumerate(ops[i + 1:]):
-                    # We look for the matching closing parentheses to find
-                    # where we slice ops.
-                    if item == "(":
-                        open_count += 1
-                    if item == ")" and open_count > 0:
-                        open_count -= 1
-                    elif item == ")" and open_count == 0:
-                        end = i + ii + 1
-                        break
-                level_masks.append(force_array(self._get_level_mask(ops[i + 1:end],
-                    grid), grid.ActiveDimensions))
-                end += 1
-            elif isinstance(item.data, AMRData):
-                level_masks.append(force_array(item.data._get_cut_mask(grid),
-                    grid.ActiveDimensions))
-            else:
-                mylog.error("Item in the boolean construction unidentified.")
-        # Now we do the logic on our level_mask.
-        # There should be no nested logic anymore.
-        # The first item should be a cut_mask,
-        # so that will be our starting point.
-        this_cut_mask = level_masks[0]
-        for i, item in enumerate(level_masks):
-            # I could use a slice above, but I'll keep i consistent instead.
-            if i == 0: continue
-            if item == "AND":
-                # So, the next item in level_masks we want to AND.
-                np.bitwise_and(this_cut_mask, level_masks[i+1], this_cut_mask)
-            if item == "NOT":
-                # It's convenient to remember that NOT == AND NOT
-                np.bitwise_and(this_cut_mask, np.invert(level_masks[i+1]),
-                    this_cut_mask)
-            if item == "OR":
-                np.bitwise_or(this_cut_mask, level_masks[i+1], this_cut_mask)
-        self._cut_masks[grid.id] = this_cut_mask
-        return this_cut_mask

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -17,18 +17,15 @@
 
 import numpy as np
 
-from yt.funcs import *
-
-from yt.config import ytcfg
-from yt.units.yt_array import YTArray, uconcatenate, array_like_field
-from yt.utilities.exceptions import YTFieldNotFound
+from yt.funcs import \
+    camelcase_to_underscore, \
+    ensure_list
+from yt.units.yt_array import array_like_field
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     ParallelAnalysisInterface, parallel_objects
-from yt.utilities.lib.Octree import Octree
 from yt.utilities.physical_constants import \
-    gravitational_constant_cgs, \
-    HUGE
-from yt.utilities.math_utils import prec_accum
+    gravitational_constant_cgs
+from yt.utilities.physical_ratios import HUGE
 from yt.extern.six import add_metaclass
 
 derived_quantity_registry = {}
@@ -202,7 +199,6 @@
     def __call__(self):
         self.data_source.ds.index
         fi = self.data_source.ds.field_info
-        fields = []
         if ("gas", "cell_mass") in fi:
             gas = super(TotalMass, self).__call__([('gas', 'cell_mass')])
         else:

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/grid_patch.py
--- a/yt/data_objects/grid_patch.py
+++ b/yt/data_objects/grid_patch.py
@@ -13,25 +13,17 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-import pdb
 import weakref
-import itertools
 import numpy as np
 
-from yt.funcs import *
-
 from yt.data_objects.data_containers import \
     YTFieldData, \
-    YTDataContainer, \
     YTSelectionContainer
-from yt.fields.field_exceptions import \
-    NeedsGridType, \
-    NeedsOriginalGrid, \
-    NeedsDataField, \
-    NeedsProperty, \
-    NeedsParameter
 from yt.geometry.selection_routines import convert_mask_to_indices
 import yt.geometry.particle_deposit as particle_deposit
+from yt.utilities.exceptions import \
+    YTFieldTypeNotFound, \
+    YTParticleDepositionNotImplemented
 from yt.utilities.lib.Interpolators import \
     ghost_zone_interpolate
 
@@ -234,15 +226,12 @@
         # We will attempt this by creating a datacube that is exactly bigger
         # than the grid by nZones*dx in each direction
         nl = self.get_global_startindex() - n_zones
-        nr = nl + self.ActiveDimensions + 2 * n_zones
         new_left_edge = nl * self.dds + self.ds.domain_left_edge
-        new_right_edge = nr * self.dds + self.ds.domain_left_edge
 
         # Something different needs to be done for the root grid, though
         level = self.Level
         if all_levels:
             level = self.index.max_level + 1
-        args = (level, new_left_edge, new_right_edge)
         kwargs = {'dims': self.ActiveDimensions + 2*n_zones,
                   'num_ghost_zones':n_zones,
                   'use_pbar':False, 'fields':fields}

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/octree_subset.py
--- a/yt/data_objects/octree_subset.py
+++ b/yt/data_objects/octree_subset.py
@@ -18,23 +18,20 @@
 
 from yt.data_objects.data_containers import \
     YTFieldData, \
-    YTDataContainer, \
     YTSelectionContainer
-from yt.fields.field_exceptions import \
-    NeedsGridType, \
-    NeedsOriginalGrid, \
-    NeedsDataField, \
-    NeedsProperty, \
-    NeedsParameter
 import yt.geometry.particle_deposit as particle_deposit
 import yt.geometry.particle_smooth as particle_smooth
-from yt.funcs import *
+
+from yt.funcs import mylog
 from yt.utilities.lib.geometry_utils import compute_morton
 from yt.geometry.particle_oct_container import \
     ParticleOctreeContainer
 from yt.units.yt_array import YTArray
 from yt.units.dimensions import length
-from yt.utilities.exceptions import YTInvalidPositionArray
+from yt.utilities.exceptions import \
+    YTInvalidPositionArray, \
+    YTFieldTypeNotFound, \
+    YTParticleDepositionNotImplemented
 
 def cell_count_cache(func):
     def cc_cache_func(self, dobj):

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/particle_filters.py
--- a/yt/data_objects/particle_filters.py
+++ b/yt/data_objects/particle_filters.py
@@ -14,16 +14,14 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-import numpy as np
 import copy
+from collections import defaultdict
 
 from contextlib import contextmanager
-from functools import wraps
 
 from yt.fields.field_info_container import \
     NullFunc, TranslationFunc
 from yt.utilities.exceptions import YTIllDefinedFilter
-from yt.funcs import *
 
 # One to many mapping
 filter_registry = defaultdict(list)

diff -r 2f1010e43b87c85b89ad01865f2469084cd59f9f -r cba5e95e12bdd42c2a6f1b7f540ea524d73785be yt/data_objects/particle_io.py
--- a/yt/data_objects/particle_io.py
+++ b/yt/data_objects/particle_io.py
@@ -15,7 +15,11 @@
 
 import numpy as np
 
-from yt.funcs import *
+from collections import defaultdict
+
+from yt.funcs import \
+    ensure_list, \
+    mylog
 from yt.extern.six import add_metaclass
 
 particle_handler_registry = defaultdict()

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/



