[yt-svn] commit/yt: 12 new changesets

commits-noreply at bitbucket.org
Sun May 11 13:13:16 PDT 2014


12 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/4b2e3618a519/
Changeset:   4b2e3618a519
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 06:50:31
Summary:     It turns out the old way of doing the PlotWindow axes for slices/projections along the y axis is the right way to go for ppv_coordinates.
Affected #:  1 file

diff -r 721b37f6b4378a5487d0208dfd0897eace5db5a1 -r 4b2e3618a519da149b09944480610e91a037346c yt/geometry/ppv_coordinates.py
--- a/yt/geometry/ppv_coordinates.py
+++ b/yt/geometry/ppv_coordinates.py
@@ -45,24 +45,18 @@
             if axis == 0:
                 self.x_axis[axis] = 1
                 self.x_axis[lower_ax] = 1
-                self.x_axis[axis_name] = 1
                 self.y_axis[axis] = 2
                 self.y_axis[lower_ax] = 2
-                self.y_axis[axis_name] = 2
             elif axis == 1:
-                self.x_axis[axis] = 2
-                self.x_axis[lower_ax] = 2
-                self.x_axis[axis_name] = 2
-                self.y_axis[axis] = 0
-                self.y_axis[lower_ax] = 0
-                self.y_axis[axis_name] = 0
+                self.x_axis[axis] = 0
+                self.x_axis[lower_ax] = 0
+                self.y_axis[axis] = 2
+                self.y_axis[lower_ax] = 2
             elif axis == 2:
                 self.x_axis[axis] = 0
                 self.x_axis[lower_ax] = 0
-                self.x_axis[axis_name] = 0
                 self.y_axis[axis] = 1
                 self.y_axis[lower_ax] = 1
-                self.y_axis[axis_name] = 1
 
         self.default_unit_label = {}
         self.default_unit_label[pf.lon_axis] = "pixel"

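In user-facing terms, this restores the PlotWindow convention for cuts along the y axis of a position-position-velocity cube: image-x is axis 0 and image-y is axis 2. A hedged usage sketch, reusing the dataset and field names from the notebook later in this email:

    import yt

    ds = yt.load("radio_fits/m33_hi.fits", nan_mask=0.0)
    # A slice along the y (latitude) axis now orients its image axes the
    # same way PlotWindow does for ordinary datasets.
    slc = yt.SlicePlot(ds, "y", ["intensity"])
    slc.save()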

https://bitbucket.org/yt_analysis/yt/commits/5cc719fa502a/
Changeset:   5cc719fa502a
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 06:53:14
Summary:     Missed this one
Affected #:  1 file

diff -r 4b2e3618a519da149b09944480610e91a037346c -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py
@@ -66,13 +66,13 @@
         if isinstance(outputs, DatasetSeries):
             self.data_series = outputs
         else:
-            self.data_series = DatasetSeries.from_filenames(outputs)
+            self.data_series = DatasetSeries(outputs)
         self.masks = []
         self.sorts = []
         self.array_indices = []
         self.indices = indices
         self.num_indices = len(indices)
-        self.num_steps = len(filenames)
+        self.num_steps = len(outputs)
         self.times = []
 
         # Default fields 

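With this change, ParticleTrajectories accepts either a DatasetSeries object or a plain list of output filenames, since DatasetSeries(outputs) replaces the old DatasetSeries.from_filenames classmethod (and the stale reference to an undefined ``filenames`` is gone). A minimal sketch, with illustrative filenames:

    from yt.data_objects.time_series import DatasetSeries

    # A plain list of dataset filenames works directly in the constructor.
    fns = ["DD%04d/DD%04d" % (i, i) for i in (40, 41, 42)]
    ts = DatasetSeries(fns)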

https://bitbucket.org/yt_analysis/yt/commits/0783d1f3ae3e/
Changeset:   0783d1f3ae3e
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 17:02:46
Summary:     Merge
Affected #:  19 files

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/analyzing/particles.rst
--- a/doc/source/analyzing/particles.rst
+++ b/doc/source/analyzing/particles.rst
@@ -28,8 +28,7 @@
 the quantities (:ref:`derived-quantities`) in those objects will operate on
 particle fields.
 
-(For information on halo finding, see :ref:`cookbook-halo_finding` and
-:ref:`cookbook-halo_mass_info`.)
+(For information on halo finding, see :ref:`cookbook-halo_finding`.)
 
 .. warning:: If you use the built-in methods of interacting with particles, you
              should be well off.  Otherwise, there are caveats!

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/cosmological_analysis.rst
--- a/doc/source/cookbook/cosmological_analysis.rst
+++ b/doc/source/cookbook/cosmological_analysis.rst
@@ -4,14 +4,6 @@
 These scripts demonstrate some basic and more advanced analysis that can be 
 performed on cosmological simulations.
 
-.. _cookbook-halo_finding:
-
-Simple Halo Finding
-~~~~~~~~~~~~~~~~~~~
-This script shows how to create a halo catalog for a single dataset.
-
-.. yt_cookbook:: halo_finding.py
-
 Plotting Halos
 ~~~~~~~~~~~~~~
 This is a mechanism for plotting circles representing identified particle halos
@@ -19,20 +11,7 @@
 
 .. yt_cookbook:: halo_plotting.py
 
-Plotting Halo Particles
-~~~~~~~~~~~~~~~~~~~~~~~
-This is a simple mechanism for overplotting the particles belonging only to
-halos.
-
-.. yt_cookbook:: halo_particle_plotting.py
-
-.. _cookbook-halo_mass_info:
-
-Halo Information
-~~~~~~~~~~~~~~~~
-This recipe finds halos and then prints out information about them.
-
-.. yt_cookbook:: halo_mass_info.py
+.. _cookbook-halo_finding:
 
 Halo Profiling and Custom Analysis
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/halo_finding.py
--- a/doc/source/cookbook/halo_finding.py
+++ /dev/null
@@ -1,10 +0,0 @@
-"""
-This script shows the simplest way of getting halo information.  For more
-information, see :ref:`halo_finding`.
-"""
-import yt
-
-ds = yt.load("Enzo_64/DD0043/data0043")
-
-halos = yt.HaloFinder(ds)
-halos.write_out("%s_halos.txt" % ds)

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/halo_mass_info.py
--- a/doc/source/cookbook/halo_mass_info.py
+++ /dev/null
@@ -1,34 +0,0 @@
-"""
-Title: Halo Mass Info
-Description: This recipe finds halos and then prints out information about
-             them.  Note that this recipe will take advantage of multiple CPUs
-             if executed with mpirun and supplied the --parallel command line
-             argument.  
-Outputs: [RedshiftOutput0006_halo_info.txt]
-"""
-from yt.mods import *
-
-fn = "Enzo_64/RD0006/RedshiftOutput0006" # parameter file to load
-pf = load(fn) # load data
-
-# First we run our halo finder to identify all the halos in the dataset.  This
-# can take arguments, but the default are pretty sane.
-halos = HaloFinder(pf)
-
-f = open("%s_halo_info.txt" % pf, "w")
-
-# Now, for every halo, we get the baryon data and examine it.
-for halo in halos:
-    # The halo has a property called 'get_sphere' that obtains a sphere
-    # centered on the point of maximum density (or the center of mass, if that
-    # argument is supplied) and with the radius the maximum particle radius of
-    # that halo.
-    sphere = halo.get_sphere()
-    # We use the quantities[] method to get the total mass in baryons and in
-    # particles.
-    baryon_mass, particle_mass = sphere.quantities["TotalQuantity"](
-            ["cell_mass", "particle_mass"])
-    # Now we print out this information, along with the ID.
-    f.write("Total mass in HOP group %s is %0.5e (gas = %0.5e / particles = %0.5e)\n" % \
-            (halo.id, baryon_mass + particle_mass, baryon_mass, particle_mass))
-f.close()

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/halo_particle_plotting.py
--- a/doc/source/cookbook/halo_particle_plotting.py
+++ /dev/null
@@ -1,14 +0,0 @@
-"""
-This is a simple mechanism for overplotting the particles belonging only to
-halos.  For more information, see :ref:`halo_finding`.
-"""
-from yt.mods import * # set up our namespace
-
-pf = load("Enzo_64/DD0043/data0043")
-
-halos = HaloFinder(pf)
-
-p = ProjectionPlot(pf, "x", "density")
-p.annotate_hop_circles(halos)
-p.annotate_hop_particles(halos, max_number=100)
-p.save()

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/halo_plotting.py
--- a/doc/source/cookbook/halo_plotting.py
+++ b/doc/source/cookbook/halo_plotting.py
@@ -4,10 +4,13 @@
 """
 from yt.mods import * # set up our namespace
 
-pf = load("Enzo_64/DD0043/data0043")
+data_pf = load("Enzo_64/RD0006/RedshiftOutput0006")
 
-halos = HaloFinder(pf)
+halo_pf = load('rockstar_halos/halos_0.0.bin')
 
-p = ProjectionPlot(pf, "z", "density")
-p.annotate_hop_circles(halos)
+hc = HaloCatalog(halos_pf = halo_pf)
+hc.load()
+
+p = ProjectionPlot(data_pf, "x", "density")
+p.annotate_halos(hc)
 p.save()

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/index.rst
--- a/doc/source/cookbook/index.rst
+++ b/doc/source/cookbook/index.rst
@@ -21,9 +21,8 @@
 If you want to take a look at more complex recipes, or submit your own,
 check out the `yt Hub <http://hub.yt-project.org>`_.
 
-.. note:: To contribute your own recipes, please 
-   `fork <http://bitbucket.org/yt_analysis/yt-doc/fork>`_
-   the documentation repository!
+.. note:: To contribute your own recipes, please follow the instructions 
+    on how to contribute documentation code: :ref:`writing_documentation`.
 
 Example Scripts
 ---------------

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/simple_contour_in_slice.py
--- a/doc/source/cookbook/simple_contour_in_slice.py
+++ b/doc/source/cookbook/simple_contour_in_slice.py
@@ -4,20 +4,20 @@
 pf = load("Sedov_3d/sedov_hdf5_chk_0002")
 
 # Make a traditional slice plot.
-sp = SlicePlot(pf,"x","dens")
+sp = SlicePlot(pf,"x","density")
 
 # Overlay the slice plot with thick red contours of density.
-sp.annotate_contour("dens", ncont=3, clim=(1e-2,1e-1), label=True,
+sp.annotate_contour("density", ncont=3, clim=(1e-2,1e-1), label=True,
                     plot_args={"colors": "red",
                                "linewidths": 2})
 
 # What about some nice temperature contours in blue?
-sp.annotate_contour("temp", ncont=3, clim=(1e-8,1e-6), label=True,
+sp.annotate_contour("temperature", ncont=3, clim=(1e-8,1e-6), label=True,
                     plot_args={"colors": "blue",
                                "linewidths": 2})
 
 # This is the plot object.
-po = sp.plots["dens"]
+po = sp.plots["density"]
 
 # Turn off the colormap image, leaving just the contours.
 po.axes.images[0].set_visible(False)

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/simple_off_axis_projection.py
--- a/doc/source/cookbook/simple_off_axis_projection.py
+++ b/doc/source/cookbook/simple_off_axis_projection.py
@@ -11,7 +11,7 @@
 # Get the angular momentum vector for the sphere.
 L = sp.quantities["AngularMomentumVector"]()
 
-print "Angular momentum vector: %s" % (L)
+print "Angular momentum vector: {0}".format(L)
 
 # Create an OffAxisSlicePlot on the object with the L vector as its normal
 p = OffAxisProjectionPlot(pf, L, "density", sp.center, (25, "kpc"))

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/simple_slice_with_multiple_fields.py
--- a/doc/source/cookbook/simple_slice_with_multiple_fields.py
+++ b/doc/source/cookbook/simple_slice_with_multiple_fields.py
@@ -4,5 +4,5 @@
 pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
 # Create density slices of several fields along the x axis
-SlicePlot(pf, 'x', ['density','temperature','Pressure','VorticitySquared'], 
+SlicePlot(pf, 'x', ['density','temperature','pressure','vorticity_squared'], 
           width = (800.0, 'kpc')).save()

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/cookbook/thin_slice_projection.py
--- a/doc/source/cookbook/thin_slice_projection.py
+++ b/doc/source/cookbook/thin_slice_projection.py
@@ -17,10 +17,9 @@
 right_corner = pf.domain_right_edge
 
 # Now adjust the size of the region along the line of sight (x axis).
-depth = 10.0 # in Mpc
-left_corner[0] = center[0] - 0.5 * depth / pf.units['mpc']
-left_corner[0] = center[0] + 0.5 * depth / pf.units['mpc']
-
+depth = pf.quan(10.0,'Mpc') 
+left_corner[0] = center[0] - 0.5 * depth 
+right_corner[0] = center[0] + 0.5 * depth 
 # Create the region
 region = pf.region(center, left_corner, right_corner)
 

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -1,8 +1,8 @@
 .. _docs_build:
 
-=================
-Building the Docs
-=================
+==========================
+Building the Documentation
+==========================
 
 The yt documentation makes heavy use of the sphinx documentation automation
 suite.  Sphinx, written in python, was originally created for the documentation
@@ -11,13 +11,64 @@
 
 While much of the yt documentation is static text, we make heavy use of
 cross-referencing with API documentation that is automatically generated at
-build time by sphinx.  We also use sphinx to run code snippets and embed
-resulting images and example data.
+build time by sphinx.  We also use sphinx to run code snippets (e.g. the 
+cookbook and the notebooks) and embed resulting images and example data.
 
-yt Sphinx extensions
---------------------
+Quick versus full documentation builds
+--------------------------------------
 
-The documentation makes heavy use of custom sphinx extensions to transform
+Building the entire set of yt documentation is a laborious task, since you 
+need to have a large number of packages in order to successfully execute
+and render all of the notebooks and yt recipes drawing from every corner
+of the yt source.  As a quick alternative, one can do a ``quick`` build
+of the documentation, which eschews the need for downloading all of these
+dependencies, but it only produces the static docs.  The static docs do 
+not include the cookbook outputs and the notebooks, but they are usually
+good enough for checking whether your documentation contributions look
+OK before submitting them to the yt repository.
+
+If you want to create the full documentation locally, then you'll need
+to follow the instructions for building the ``full`` docs, so that you can
+dynamically execute and render the cookbook recipes, the notebooks, etc.
+
+Building the docs (quick)
+-------------------------
+
+You will need to have the yt repository available on your computer, which
+is done by default if you have yt installed.  In addition, you need a 
+current version of the Sphinx_ (1.1.3) documentation software installed.
+
+In order to tell sphinx not to do all of the dynamic building, you must
+set the ``$READTHEDOCS`` environment variable to True by typing this at 
+the command line:
+
+.. code-block:: bash
+
+   export READTHEDOCS=True  # for bash
+   setenv READTHEDOCS True  # for csh
+
+This variable is set for automated builds on the free ReadTheDocs service but
+can be used by anyone to force a quick, minimal build.
+
+Now all you need to do is execute sphinx on the yt doc source.  Go to the 
+documentation directory and build the docs:
+
+.. code-block:: bash
+
+   cd $YT_DEST/src/yt-hg/doc
+   make html
+
+This will produce an html version of the documentation locally in the 
+``$YT_DEST/src/yt-hg/doc/build/html`` directory.  You can now go there and open
+up ``index.html`` or whatever file you wish in your web browser.
+
+Building the docs (full)
+------------------------
+
+As alluded to earlier, building the full documentation is a bit more involved
+than simply building the static documentation.  
+
+The full documentation makes heavy use of custom sphinx extensions to transform
 recipes, notebooks, and inline code snippets into python scripts, IPython_
 notebooks, or notebook cells that are executed when the docs are built.
 
@@ -30,12 +81,9 @@
 .. _runipy: https://github.com/paulgb/runipy
 .. _IPython: http://ipython.org/
 
-Dependencies
-------------
-
-To build the docs, you will need yt, IPython, runipy, and all supplementary yt
-analysis modules installed. The following dependencies were used to generate the
-yt documentation during the release of yt 2.6 in late 2013.
+To build the full documentation, you will need yt, IPython, runipy, and all 
+supplementary yt analysis modules installed. The following dependencies were 
+used to generate the yt documentation during the release of yt 2.6 in late 2013.
 
 - Sphinx_ 1.1.3
 - IPython_ 1.1
@@ -58,49 +106,32 @@
 <http://yt-project.org/data/>`_, including the larger datasets that are not used
 in the answer tests.
 
-Building the docs
------------------
-
-First, you will need to ensure that your testing configuration is properly
+You will need to ensure that your testing configuration is properly
 configured and that all of the yt test data is in the testing directory.  See
 :ref:`run_answer_testing` for more details on how to set up the testing
 configuration.
 
-Next, clone the yt-doc repository, navigate to the root of the repository, and
-do :code:`make html`.
+Now that you have everything set up properly, go to the documentation directory
+and build it using sphinx:
 
 .. code-block:: bash
 
-   hg clone https://bitbucket.org/yt_analysis/yt-doc ./yt-doc
-   cd yt-doc
+   cd $YT_DEST/src/yt-hg/doc
    make html
 
 If all of the dependencies are installed and all of the test data is in the
-testing directory, this should churn away for a while and eventually generate a
-docs build.  This process is lengthy but shouldn't last more than an hour.  We
-suggest setting :code:`suppressStreamLogging = True` in your yt configuration
-(See :ref:`configuration-file`) to suppress large amounts of debug output from
+testing directory, this should churn away for a while (~ 1 hour) and 
+eventually generate a docs build.  We suggest setting 
+:code:`suppressStreamLogging = True` in your yt configuration (See 
+:ref:`configuration-file`) to suppress large amounts of debug output from
 yt.
 
 To clean the docs build, use :code:`make clean`.  By default, :code:`make clean`
 will not delete the autogenerated API docs, so use :code:`make fullclean` to
 delete those as well.
 
-
-Quick docs builds
------------------
-
-Clearly, building the complete docs is something of an undertaking.  If you are
-adding new static content building the complete docs build is probably
-overkill.  To skip some of the lengthier operations, you can do the following
-from the bash prompt:
-
-.. code-block:: bash
-
-   export READTHEDOCS=True
-
-This variable is set for automated builds on the free ReadTheDocs service but
-can be used by anyone to force a quick, minimal build.
+Building the docs (hybrid)
+--------------------------
 
 It's also possible to create a custom sphinx build that builds a restricted set
 of notebooks or scripts.  This can be accomplished by editing the Sphinx

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -211,16 +211,22 @@
   #. Update your pull request by visiting
      https://bitbucket.org/YourUsername/yt/pull-request/new
 
+.. _writing_documentation:
+
 How to Write Documentation
 ++++++++++++++++++++++++++
 
 The process for writing documentation is identical to the above, except that
-instead of ``yt_analysis/yt`` you should be forking and pushing to
-``yt_analysis/yt-doc``.  All the source for the documentation is written in
+you're modifying source files in the doc directory (i.e. ``$YT_DEST/src/yt-hg/doc``) 
+instead of the src directory (i.e. ``$YT_DEST/src/yt-hg/yt``) of the yt repository.
+All the source for the documentation is written in 
 `Sphinx <http://sphinx-doc.org/>`_, which uses ReST for markup.
 
 Cookbook recipes go in ``source/cookbook/`` and must be added to one of the
-``.rst`` files in that directory.
+``.rst`` files in that directory.  
+
+For more information on how to build the documentation to make sure it looks
+the way you expect it to after modifying it, see :ref:`docs_build`.
 
 How To Get The Source Code For Editing
 --------------------------------------

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/developing/intro.rst
--- a/doc/source/developing/intro.rst
+++ b/doc/source/developing/intro.rst
@@ -66,10 +66,8 @@
 typo or grammatical fixes, adding a FAQ, or increasing coverage of
 functionality, it would be very helpful if you wanted to help out.
 
-The easiest way to help out is to fork the repository:
-
-http://hg.yt-project.org/yt-doc/fork
-
+The easiest way to help out is to fork the main yt repository (where 
+the documentation lives in the ``$YT_DEST/src/yt-hg/doc`` directory),
 and then make your changes in your own fork.  When you are done, issue a pull
 request through the website for your new fork, and we can comment back and
 forth and eventually accept your changes.

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d doc/source/yt3differences.rst
--- a/doc/source/yt3differences.rst
+++ b/doc/source/yt3differences.rst
@@ -27,7 +27,7 @@
     FieldName)``.
   * Previously, yt would use "Enzo-isms" for field names.  We now very
     specifically define fields as lowercase with underscores.  For instance,
-    what used to be ``VelocityMagnitude`` would not be ``velocity_magnitude``.
+    what used to be ``VelocityMagnitude`` would now be ``velocity_magnitude``.
   * Particles are either named by their type or default to the type ``io``.
   * Axis names are now at the *end* of field names, not the beginning.
     ``x-velocity`` is now ``velocity_x``.

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar_groupies.pyx
@@ -6,6 +6,10 @@
 from libc.stdlib cimport malloc, free
 import sys
 
+ctypedef fused anyfloat:
+    np.float32_t
+    np.float64_t
+
 # Importing relevant rockstar data types particle, fof halo, halo
 
 cdef import from "particle.h":
@@ -235,8 +239,8 @@
     @cython.wraparound(False)
     def make_rockstar_fof(self, np.ndarray[np.int64_t, ndim=1] pind,
                                 np.ndarray[np.int64_t, ndim=1] fof_tags,
-                                np.ndarray[np.float64_t, ndim=2] pos,
-                                np.ndarray[np.float64_t, ndim=2] vel):
+                                np.ndarray[anyfloat, ndim=2] pos,
+                                np.ndarray[anyfloat, ndim=2] vel):
 
         # Define fof object
 

diff -r 5cc719fa502abbc0420f38a12937f669a8e51cc4 -r 0783d1f3ae3e4df181345c7526da0d66d773711d yt/utilities/lib/ContourFinding.pyx
--- a/yt/utilities/lib/ContourFinding.pyx
+++ b/yt/utilities/lib/ContourFinding.pyx
@@ -751,7 +751,6 @@
                 c1 = container[offset]
                 if c1 == NULL: continue
                 c0 = contour_find(c1)
-                offset = pind[offset]
                 if c0.count < minimum_count:
                     contour_ids[offset] = -1
         free(container)


https://bitbucket.org/yt_analysis/yt/commits/ab4009abe4b9/
Changeset:   ab4009abe4b9
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 18:06:23
Summary:     Accept a FITS HDU that is already instantiated in memory (for use with the ALMA tools). Do data decomposition differently so that grids span the x-y plane along the z-axis if it's a 3D cube. This results in faster slices and projections along the z-axis, which is the axis on which we'll usually be doing this anyway.
Affected #:  1 file

diff -r 0783d1f3ae3e4df181345c7526da0d66d773711d -r ab4009abe4b92582b14e070a5344cdd51a949eb2 yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py
+++ b/yt/frontends/fits/data_structures.py
@@ -17,6 +17,7 @@
 import weakref
 import warnings
 import re
+import uuid
 
 from yt.config import ytcfg
 from yt.funcs import *
@@ -200,21 +201,23 @@
             self.parameter_file.field_units[k] = self.parameter_file.field_units[primary_fname]
 
     def _count_grids(self):
-        self.num_grids = self.pf.nprocs
+        self.num_grids = self.pf.parameters["nprocs"]
 
     def _parse_index(self):
         f = self._handle # shortcut
         pf = self.parameter_file # shortcut
 
         # If nprocs > 1, decompose the domain into virtual grids
-        if pf.nprocs > 1:
+        if self.num_grids > 1:
             bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
                                                        pf.domain_right_edge)])
             dims = np.array(pf.domain_dimensions)
             # If we are creating a dataset of lines, only decompose along the position axes
             if len(pf.line_database) > 0:
                 dims[pf.vel_axis] = 1
-            psize = get_psize(dims, pf.nprocs)
+            elif self.pf.dimensionality == 3:
+                dims[:2] = 1
+            psize = get_psize(dims, self.num_grids)
             gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
             self.grid_left_edge = self.pf.arr(gle, "code_length")
             self.grid_right_edge = self.pf.arr(gre, "code_length")
@@ -224,18 +227,23 @@
                 self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
                 self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
                 self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
+            elif self.pf.dimensionality == 3:
+                self.grid_left_edge[:,:2] = pf.domain_left_edge[:2]
+                self.grid_right_edge[:,:2] = pf.domain_right_edge[:2]
+                self.grid_dimensions[:,:2] = pf.domain_dimensions[:2]
 
         else:
             self.grid_left_edge[0,:] = pf.domain_left_edge
             self.grid_right_edge[0,:] = pf.domain_right_edge
             self.grid_dimensions[0] = pf.domain_dimensions
 
-        if self.pf.events_data:
+        if pf.events_data:
             try:
                 self.grid_particle_count[:] = pf.primary_header["naxis2"]
             except KeyError:
                 self.grid_particle_count[:] = 0.0
             self._particle_indices = np.zeros(self.num_grids + 1, dtype='int64')
+            print self.grid_particle_count, self.grid_particle_count.squeeze()
             self._particle_indices[1] = self.grid_particle_count.squeeze()
 
         self.grid_levels.flat[:] = 0
@@ -297,6 +305,7 @@
 
         if parameters is None:
             parameters = {}
+        parameters["nprocs"] = nprocs
         self.specified_parameters = parameters
 
         if line_width is not None:
@@ -322,11 +331,15 @@
             self.nan_mask = {"all":nan_mask}
         elif isinstance(nan_mask, dict):
             self.nan_mask = nan_mask
-        self.nprocs = nprocs
-        self._handle = _astropy.pyfits.open(self.filenames[0],
-                                      memmap=True,
-                                      do_not_scale_image_data=True,
-                                      ignore_blank=True)
+        if isinstance(self.filenames[0], _astropy.pyfits.PrimaryHDU):
+            self._handle = _astropy.pyfits.HDUList(self.filenames[0])
+            fn = "InMemoryFITSImage_%s" % (uuid.uuid4().hex)
+        else:
+            self._handle = _astropy.pyfits.open(self.filenames[0],
+                                                memmap=True,
+                                                do_not_scale_image_data=True,
+                                                ignore_blank=True)
+            fn = self.filenames[0]
         self._fits_files = [self._handle]
         if self.num_files > 1:
             for fits_file in auxiliary_files:
@@ -387,7 +400,7 @@
 
         self.refine_by = 2
 
-        Dataset.__init__(self, filename, dataset_type)
+        Dataset.__init__(self, fn, dataset_type)
         self.storage_filename = storage_filename
 
     def _set_code_unit_attributes(self):
@@ -435,8 +448,11 @@
 
     def _parse_parameter_file(self):
 
-        self.unique_identifier = \
-            int(os.stat(self.parameter_filename)[stat.ST_CTIME])
+        if self.parameter_filename.startswith("InMemory"):
+            self.unique_identifier = time.time()
+        else:
+            self.unique_identifier = \
+                int(os.stat(self.parameter_filename)[stat.ST_CTIME])
 
         # Determine dimensionality
 
@@ -472,14 +488,17 @@
         self.current_redshift = self.omega_lambda = self.omega_matter = \
             self.hubble_constant = self.cosmological_simulation = 0.0
 
-        # If this is a 2D events file, no need to decompose
-        if self.events_data: self.nprocs = 1
-
         # If nprocs is None, do some automatic decomposition of the domain
-        if self.nprocs is None:
-            self.nprocs = np.around(np.prod(self.domain_dimensions) /
-                                    32**self.dimensionality).astype("int")
-            self.nprocs = max(min(self.nprocs, 512), 1)
+        if self.specified_parameters["nprocs"] is None:
+            if len(self.line_database) > 0 or self.dimensionality == 2:
+                nprocs = np.around(np.prod(self.domain_dimensions[:2])/32*32).astype("int")
+            else:
+                nprocs = np.around(self.domain_dimensions[2]/32).astype("int")
+            self.parameters["nprocs"] = max(min(nprocs, 512), 1)
+        elif self.events_data:
+            self.parameters["nprocs"] = 1
+        else:
+            self.parameters["nprocs"] = self.specified_parameters["nprocs"]
 
         self.reversed = False
 

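A hedged sketch of the new in-memory path, assuming FITSDataset is importable from yt.frontends.fits.api as in yt-3.0; the array shape is illustrative. The dataset gets a generated InMemoryFITSImage_<uuid> name in place of a filename, and its unique_identifier falls back to the current time:

    import numpy as np
    from astropy.io import fits
    from yt.frontends.fits.api import FITSDataset

    # Build a FITS image entirely in memory and hand the HDU to yt directly,
    # with no file on disk (useful for the ALMA tools mentioned above).
    data = np.random.random((64, 64))
    hdu = fits.PrimaryHDU(data)
    ds = FITSDataset(hdu, nan_mask=0.0)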

https://bitbucket.org/yt_analysis/yt/commits/bd2e87796416/
Changeset:   bd2e87796416
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:18:26
Summary:     Simplifying this
Affected #:  1 file

diff -r ab4009abe4b92582b14e070a5344cdd51a949eb2 -r bd2e877964162080b8bd217b2ecfa0aabb73edbd yt/geometry/ppv_coordinates.py
--- a/yt/geometry/ppv_coordinates.py
+++ b/yt/geometry/ppv_coordinates.py
@@ -25,8 +25,6 @@
 
         self.axis_name = {}
         self.axis_id = {}
-        self.x_axis = {}
-        self.y_axis = {}
 
         for axis, axis_name in zip([pf.lon_axis, pf.lat_axis, pf.vel_axis],
                                    ["Image\ x", "Image\ y", pf.vel_name]):
@@ -42,22 +40,6 @@
             self.axis_id[axis] = axis
             self.axis_id[axis_name] = axis
 
-            if axis == 0:
-                self.x_axis[axis] = 1
-                self.x_axis[lower_ax] = 1
-                self.y_axis[axis] = 2
-                self.y_axis[lower_ax] = 2
-            elif axis == 1:
-                self.x_axis[axis] = 0
-                self.x_axis[lower_ax] = 0
-                self.y_axis[axis] = 2
-                self.y_axis[lower_ax] = 2
-            elif axis == 2:
-                self.x_axis[axis] = 0
-                self.x_axis[lower_ax] = 0
-                self.y_axis[axis] = 1
-                self.y_axis[lower_ax] = 1
-
         self.default_unit_label = {}
         self.default_unit_label[pf.lon_axis] = "pixel"
         self.default_unit_label[pf.lat_axis] = "pixel"
@@ -69,3 +51,8 @@
     def convert_from_cylindrical(self, coord):
         raise NotImplementedError
 
+    x_axis = { 'x' : 1, 'y' : 0, 'z' : 0,
+                0  : 1,  1  : 0,  2  : 0}
+
+    y_axis = { 'x' : 2, 'y' : 2, 'z' : 1,
+                0  : 2,  1  : 2,  2  : 1}

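A short sketch of what these class-level lookups encode: keyed by either axis name or axis number, they give the image x- and y-axes for a cut along each axis, with the y-axis case (0, 2) matching the PlotWindow convention restored at the start of this series:

    x_axis = {'x': 1, 'y': 0, 'z': 0, 0: 1, 1: 0, 2: 0}
    y_axis = {'x': 2, 'y': 2, 'z': 1, 0: 2, 1: 2, 2: 1}

    for ax in ('x', 'y', 'z'):
        print("%s -> image axes (%d, %d)" % (ax, x_axis[ax], y_axis[ax]))
    # x -> image axes (1, 2); y -> image axes (0, 2); z -> image axes (0, 1)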

https://bitbucket.org/yt_analysis/yt/commits/7f62cbb82dad/
Changeset:   7f62cbb82dad
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:18:39
Summary:     This was actually a bug
Affected #:  1 file

diff -r bd2e877964162080b8bd217b2ecfa0aabb73edbd -r 7f62cbb82dad3cd8068b74f9c36f09f2021526f8 yt/frontends/fits/io.py
--- a/yt/frontends/fits/io.py
+++ b/yt/frontends/fits/io.py
@@ -88,7 +88,7 @@
             for chunk in chunks:
                 for g in chunk.objs:
                     start = ((g.LeftEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
-                    end = ((g.RightEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
+                    end = start + g.ActiveDimensions
                     if self.line_db is not None and fname in self.line_db:
                         my_off = self.line_db.get(fname).in_units(self.pf.vel_unit).value
                         my_off = my_off - 0.5*self.pf.line_width

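A minimal illustration of the rounding hazard this fixes (values are made up): deriving the integer end index from a floating-point division can land one cell short, whereas start + g.ActiveDimensions is exact by construction:

    import numpy as np

    dx = 0.1
    right_edge = 0.7                    # exactly 7 cells from the left edge
    print(np.array(right_edge / dx).astype("int"))  # 6: 0.7/0.1 == 6.999...
    print(0 + 7)                        # start + ActiveDimensions: always 7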

https://bitbucket.org/yt_analysis/yt/commits/810baa4e6c1e/
Changeset:   810baa4e6c1e
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:22:32
Summary:     Missed the unit label the last time
Affected #:  1 file

diff -r 7f62cbb82dad3cd8068b74f9c36f09f2021526f8 -r 810baa4e6c1e7e78996c835f72ab676952200d35 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -329,7 +329,7 @@
             self[field] = YTArray(field_data[fi].ravel(),
                                   input_units=input_units,
                                   registry=self.pf.unit_registry)
-            if self.weight_field is None:
+            if self.weight_field is None and not self._sum_only:
                 u_obj = Unit(units, registry=self.pf.unit_registry)
                 if u_obj.is_code_unit and input_units != units \
                     or self.pf.no_cgs_equiv_length:

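In usage terms, this guard keeps summed projections from picking up a path-length (cm) factor in their unit labels; a hedged sketch mirroring the notebook later in this email (the dataset and field names come from there):

    import yt

    ds = yt.load("radio_fits/m33_hi.fits", nan_mask=0.0)
    # proj_style="sum" adds cells up instead of integrating along a path
    # length, so the result keeps the field's own units.
    prj = yt.ProjectionPlot(ds, "z", ["intensity"], origin="native",
                            proj_style="sum")
    prj.save()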

https://bitbucket.org/yt_analysis/yt/commits/142cce89e4e6/
Changeset:   142cce89e4e6
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:24:39
Summary:     Making domain decomposition along the z-axis more intelligent, and optional.
Affected #:  1 file

diff -r 810baa4e6c1e7e78996c835f72ab676952200d35 -r 142cce89e4e6ee72eecfab28f2620191a282dc4f yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py
+++ b/yt/frontends/fits/data_structures.py
@@ -209,29 +209,35 @@
 
         # If nprocs > 1, decompose the domain into virtual grids
         if self.num_grids > 1:
-            bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
-                                                       pf.domain_right_edge)])
-            dims = np.array(pf.domain_dimensions)
-            # If we are creating a dataset of lines, only decompose along the position axes
-            if len(pf.line_database) > 0:
-                dims[pf.vel_axis] = 1
-            elif self.pf.dimensionality == 3:
-                dims[:2] = 1
-            psize = get_psize(dims, self.num_grids)
-            gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
-            self.grid_left_edge = self.pf.arr(gle, "code_length")
-            self.grid_right_edge = self.pf.arr(gre, "code_length")
-            self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
-            # If we are creating a dataset of lines, only decompose along the position axes
-            if len(pf.line_database) > 0:
-                self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
-                self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
-                self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
-            elif self.pf.dimensionality == 3:
+            if self.pf.z_axis_decomp:
+                dz = (pf.domain_width/pf.domain_dimensions)[2]
+                self.grid_dimensions[:,2] = np.around(float(pf.domain_dimensions[2])/
+                                                            self.num_grids).astype("int")
+                self.grid_dimensions[-1,2] += (pf.domain_dimensions[2] % self.num_grids)
+                self.grid_left_edge[0,2] = pf.domain_left_edge[2]
+                self.grid_left_edge[1:,2] = pf.domain_left_edge[2] + \
+                                            np.cumsum(self.grid_dimensions[:-1,2])*dz
+                self.grid_right_edge[:,2] = self.grid_left_edge[:,2]+self.grid_dimensions[:,2]*dz
                 self.grid_left_edge[:,:2] = pf.domain_left_edge[:2]
                 self.grid_right_edge[:,:2] = pf.domain_right_edge[:2]
                 self.grid_dimensions[:,:2] = pf.domain_dimensions[:2]
-
+            else:
+                bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
+                                                           pf.domain_right_edge)])
+                dims = np.array(pf.domain_dimensions)
+                # If we are creating a dataset of lines, only decompose along the position axes
+                if len(pf.line_database) > 0:
+                    dims[pf.vel_axis] = 1
+                psize = get_psize(dims, self.num_grids)
+                gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
+                self.grid_left_edge = self.pf.arr(gle, "code_length")
+                self.grid_right_edge = self.pf.arr(gre, "code_length")
+                self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
+                # If we are creating a dataset of lines, only decompose along the position axes
+                if len(pf.line_database) > 0:
+                    self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
+                    self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
+                    self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
         else:
             self.grid_left_edge[0,:] = pf.domain_left_edge
             self.grid_right_edge[0,:] = pf.domain_right_edge
@@ -298,6 +304,7 @@
                  nprocs = None,
                  storage_filename = None,
                  nan_mask = None,
+                 z_axis_decomp = False,
                  line_database = None,
                  line_width = None,
                  suppress_astropy_warnings = True,
@@ -308,6 +315,8 @@
         parameters["nprocs"] = nprocs
         self.specified_parameters = parameters
 
+        self.z_axis_decomp = z_axis_decomp
+
         if line_width is not None:
             self.line_width = YTQuantity(line_width[0], line_width[1])
             self.line_units = line_width[1]
@@ -488,12 +497,21 @@
         self.current_redshift = self.omega_lambda = self.omega_matter = \
             self.hubble_constant = self.cosmological_simulation = 0.0
 
+        if self.dimensionality == 2 and self.z_axis_decomp:
+            mylog.warning("You asked to decompose along the z-axis, but this is a 2D dataset. " +
+                          "Ignoring.")
+            self.z_axis_decomp = False
+
         # If nprocs is None, do some automatic decomposition of the domain
         if self.specified_parameters["nprocs"] is None:
-            if len(self.line_database) > 0 or self.dimensionality == 2:
-                nprocs = np.around(np.prod(self.domain_dimensions[:2])/32*32).astype("int")
+            if len(self.line_database) > 0:
+                dims = 2
             else:
-                nprocs = np.around(self.domain_dimensions[2]/32).astype("int")
+                dims = self.dimensionality
+            if self.z_axis_decomp:
+                nprocs = np.around(self.domain_dimensions[2]/8).astype("int")
+            else:
+                nprocs = np.around(np.prod(self.domain_dimensions)/32**dims).astype("int")
             self.parameters["nprocs"] = max(min(nprocs, 512), 1)
         elif self.events_data:
             self.parameters["nprocs"] = 1

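A numpy-only sketch of the z-axis slab decomposition this changeset introduces, using floor division plus a remainder on the last slab for clarity (the changeset itself uses np.around with the same remainder correction); the function name and defaults are illustrative:

    import numpy as np

    def z_slabs(nz, num_grids, z_left=0.0, dz=1.0):
        # Split nz cells into num_grids slabs along z; x-y is untouched.
        dims = np.full(num_grids, nz // num_grids, dtype="int")
        dims[-1] += nz % num_grids          # remainder goes to the last slab
        left = z_left + np.concatenate(([0], np.cumsum(dims[:-1]))) * dz
        right = left + dims * dz
        return dims, left, right

    print(z_slabs(100, 8)[0])   # [12 12 12 12 12 12 12 16]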

https://bitbucket.org/yt_analysis/yt/commits/b65d783b219b/
Changeset:   b65d783b219b
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:29:48
Summary:     Missed this
Affected #:  1 file

diff -r 142cce89e4e6ee72eecfab28f2620191a282dc4f -r b65d783b219b842a80f5c19e5451e0aba79f3f57 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -317,7 +317,7 @@
             finfo = self.pf._get_field_info(*field)
             mylog.debug("Setting field %s", field)
             units = finfo.units
-            if self.weight_field is None:
+            if self.weight_field is None and not self._sum_only:
                 # See _handle_chunk where we mandate cm
                 if units == '':
                     input_units = "cm"


https://bitbucket.org/yt_analysis/yt/commits/46ca65aa0b99/
Changeset:   46ca65aa0b99
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:43:33
Summary:     Bug fix
Affected #:  1 file

diff -r b65d783b219b842a80f5c19e5451e0aba79f3f57 -r 46ca65aa0b99d74d75a25f9d180820bda73cbf22 yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py
+++ b/yt/frontends/fits/data_structures.py
@@ -249,7 +249,6 @@
             except KeyError:
                 self.grid_particle_count[:] = 0.0
             self._particle_indices = np.zeros(self.num_grids + 1, dtype='int64')
-            print self.grid_particle_count, self.grid_particle_count.squeeze()
             self._particle_indices[1] = self.grid_particle_count.squeeze()
 
         self.grid_levels.flat[:] = 0
@@ -502,6 +501,8 @@
                           "Ignoring.")
             self.z_axis_decomp = False
 
+        if self.events_data: self.specified_parameters["nprocs"] = 1
+
         # If nprocs is None, do some automatic decomposition of the domain
         if self.specified_parameters["nprocs"] is None:
             if len(self.line_database) > 0:
@@ -513,8 +514,6 @@
             else:
                 nprocs = np.around(np.prod(self.domain_dimensions)/32**dims).astype("int")
             self.parameters["nprocs"] = max(min(nprocs, 512), 1)
-        elif self.events_data:
-            self.parameters["nprocs"] = 1
         else:
             self.parameters["nprocs"] = self.specified_parameters["nprocs"]
 


https://bitbucket.org/yt_analysis/yt/commits/b785710c4107/
Changeset:   b785710c4107
Branch:      yt-3.0
User:        jzuhone
Date:        2014-05-09 19:44:12
Summary:     Fixed up notebook
Affected #:  1 file

diff -r 46ca65aa0b99d74d75a25f9d180820bda73cbf22 -r b785710c410756663601d72535060577d2df5895 doc/source/cookbook/fits_radio_cubes.ipynb
--- a/doc/source/cookbook/fits_radio_cubes.ipynb
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:2f774139560d94508c2c51b70930d46941d9ceef7228655de32a69634f6c6d83"
+  "signature": "sha256:dbc41f6f836cdeb88a549d85e389d6e4e43d163d8c4c267baea8cce0ebdbf441"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -45,7 +45,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0)"
+      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0, z_axis_decomp=True)"
      ],
      "language": "python",
      "metadata": {},
@@ -179,6 +179,31 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
+      "We can also make a projection of all the emission along the line of sight:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], origin=\"native\", proj_style=\"sum\")\n",
+      "prj.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Since we're not doing an integration along a path length, we needed to specify `proj_style = \"sum\"`. "
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
       "We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:"
      ]
     },


https://bitbucket.org/yt_analysis/yt/commits/788d2640d9f4/
Changeset:   788d2640d9f4
Branch:      yt-3.0
User:        ngoldbaum
Date:        2014-05-11 22:13:10
Summary:     Merged in jzuhone/yt-3.x/yt-3.0 (pull request #891)

Fixing two bugs related to the FITS PRs
Affected #:  6 files

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 doc/source/cookbook/fits_radio_cubes.ipynb
--- a/doc/source/cookbook/fits_radio_cubes.ipynb
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:2f774139560d94508c2c51b70930d46941d9ceef7228655de32a69634f6c6d83"
+  "signature": "sha256:dbc41f6f836cdeb88a549d85e389d6e4e43d163d8c4c267baea8cce0ebdbf441"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -45,7 +45,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0)"
+      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0, z_axis_decomp=True)"
      ],
      "language": "python",
      "metadata": {},
@@ -179,6 +179,31 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
+      "We can also make a projection of all the emission along the line of sight:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], origin=\"native\", proj_style=\"sum\")\n",
+      "prj.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Since we're not doing an integration along a path length, we needed to specify `proj_style = \"sum\"`. "
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
       "We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:"
      ]
     },

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py
@@ -66,13 +66,13 @@
         if isinstance(outputs, DatasetSeries):
             self.data_series = outputs
         else:
-            self.data_series = DatasetSeries.from_filenames(outputs)
+            self.data_series = DatasetSeries(outputs)
         self.masks = []
         self.sorts = []
         self.array_indices = []
         self.indices = indices
         self.num_indices = len(indices)
-        self.num_steps = len(filenames)
+        self.num_steps = len(outputs)
         self.times = []
 
         # Default fields 

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -317,7 +317,7 @@
             finfo = self.pf._get_field_info(*field)
             mylog.debug("Setting field %s", field)
             units = finfo.units
-            if self.weight_field is None:
+            if self.weight_field is None and not self._sum_only:
                 # See _handle_chunk where we mandate cm
                 if units == '':
                     input_units = "cm"
@@ -329,7 +329,7 @@
             self[field] = YTArray(field_data[fi].ravel(),
                                   input_units=input_units,
                                   registry=self.pf.unit_registry)
-            if self.weight_field is None:
+            if self.weight_field is None and not self._sum_only:
                 u_obj = Unit(units, registry=self.pf.unit_registry)
                 if u_obj.is_code_unit and input_units != units \
                     or self.pf.no_cgs_equiv_length:

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py
+++ b/yt/frontends/fits/data_structures.py
@@ -17,6 +17,7 @@
 import weakref
 import warnings
 import re
+import uuid
 
 from yt.config import ytcfg
 from yt.funcs import *
@@ -200,37 +201,49 @@
             self.parameter_file.field_units[k] = self.parameter_file.field_units[primary_fname]
 
     def _count_grids(self):
-        self.num_grids = self.pf.nprocs
+        self.num_grids = self.pf.parameters["nprocs"]
 
     def _parse_index(self):
         f = self._handle # shortcut
         pf = self.parameter_file # shortcut
 
         # If nprocs > 1, decompose the domain into virtual grids
-        if pf.nprocs > 1:
-            bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
-                                                       pf.domain_right_edge)])
-            dims = np.array(pf.domain_dimensions)
-            # If we are creating a dataset of lines, only decompose along the position axes
-            if len(pf.line_database) > 0:
-                dims[pf.vel_axis] = 1
-            psize = get_psize(dims, pf.nprocs)
-            gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
-            self.grid_left_edge = self.pf.arr(gle, "code_length")
-            self.grid_right_edge = self.pf.arr(gre, "code_length")
-            self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
-            # If we are creating a dataset of lines, only decompose along the position axes
-            if len(pf.line_database) > 0:
-                self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
-                self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
-                self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
-
+        if self.num_grids > 1:
+            if self.pf.z_axis_decomp:
+                dz = (pf.domain_width/pf.domain_dimensions)[2]
+                self.grid_dimensions[:,2] = np.around(float(pf.domain_dimensions[2])/
+                                                            self.num_grids).astype("int")
+                self.grid_dimensions[-1,2] += (pf.domain_dimensions[2] % self.num_grids)
+                self.grid_left_edge[0,2] = pf.domain_left_edge[2]
+                self.grid_left_edge[1:,2] = pf.domain_left_edge[2] + \
+                                            np.cumsum(self.grid_dimensions[:-1,2])*dz
+                self.grid_right_edge[:,2] = self.grid_left_edge[:,2]+self.grid_dimensions[:,2]*dz
+                self.grid_left_edge[:,:2] = pf.domain_left_edge[:2]
+                self.grid_right_edge[:,:2] = pf.domain_right_edge[:2]
+                self.grid_dimensions[:,:2] = pf.domain_dimensions[:2]
+            else:
+                bbox = np.array([[le,re] for le, re in zip(pf.domain_left_edge,
+                                                           pf.domain_right_edge)])
+                dims = np.array(pf.domain_dimensions)
+                # If we are creating a dataset of lines, only decompose along the position axes
+                if len(pf.line_database) > 0:
+                    dims[pf.vel_axis] = 1
+                psize = get_psize(dims, self.num_grids)
+                gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
+                self.grid_left_edge = self.pf.arr(gle, "code_length")
+                self.grid_right_edge = self.pf.arr(gre, "code_length")
+                self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
+                # If we are creating a dataset of lines, only decompose along the position axes
+                if len(pf.line_database) > 0:
+                    self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
+                    self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
+                    self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
         else:
             self.grid_left_edge[0,:] = pf.domain_left_edge
             self.grid_right_edge[0,:] = pf.domain_right_edge
             self.grid_dimensions[0] = pf.domain_dimensions
 
-        if self.pf.events_data:
+        if pf.events_data:
             try:
                 self.grid_particle_count[:] = pf.primary_header["naxis2"]
             except KeyError:
@@ -290,6 +303,7 @@
                  nprocs = None,
                  storage_filename = None,
                  nan_mask = None,
+                 z_axis_decomp = False,
                  line_database = None,
                  line_width = None,
                  suppress_astropy_warnings = True,
@@ -297,8 +311,11 @@
 
         if parameters is None:
             parameters = {}
+        parameters["nprocs"] = nprocs
         self.specified_parameters = parameters
 
+        self.z_axis_decomp = z_axis_decomp
+
         if line_width is not None:
             self.line_width = YTQuantity(line_width[0], line_width[1])
             self.line_units = line_width[1]
@@ -322,11 +339,15 @@
             self.nan_mask = {"all":nan_mask}
         elif isinstance(nan_mask, dict):
             self.nan_mask = nan_mask
-        self.nprocs = nprocs
-        self._handle = _astropy.pyfits.open(self.filenames[0],
-                                      memmap=True,
-                                      do_not_scale_image_data=True,
-                                      ignore_blank=True)
+        if isinstance(self.filenames[0], _astropy.pyfits.PrimaryHDU):
+            self._handle = _astropy.pyfits.HDUList(self.filenames[0])
+            fn = "InMemoryFITSImage_%s" % (uuid.uuid4().hex)
+        else:
+            self._handle = _astropy.pyfits.open(self.filenames[0],
+                                                memmap=True,
+                                                do_not_scale_image_data=True,
+                                                ignore_blank=True)
+            fn = self.filenames[0]
         self._fits_files = [self._handle]
         if self.num_files > 1:
             for fits_file in auxiliary_files:
@@ -387,7 +408,7 @@
 
         self.refine_by = 2
 
-        Dataset.__init__(self, filename, dataset_type)
+        Dataset.__init__(self, fn, dataset_type)
         self.storage_filename = storage_filename
 
     def _set_code_unit_attributes(self):
@@ -435,8 +456,11 @@
 
     def _parse_parameter_file(self):
 
-        self.unique_identifier = \
-            int(os.stat(self.parameter_filename)[stat.ST_CTIME])
+        if self.parameter_filename.startswith("InMemory"):
+            self.unique_identifier = time.time()
+        else:
+            self.unique_identifier = \
+                int(os.stat(self.parameter_filename)[stat.ST_CTIME])
 
         # Determine dimensionality
 
@@ -472,14 +496,26 @@
         self.current_redshift = self.omega_lambda = self.omega_matter = \
             self.hubble_constant = self.cosmological_simulation = 0.0
 
-        # If this is a 2D events file, no need to decompose
-        if self.events_data: self.nprocs = 1
+        if self.dimensionality == 2 and self.z_axis_decomp:
+            mylog.warning("You asked to decompose along the z-axis, but this is a 2D dataset. " +
+                          "Ignoring.")
+            self.z_axis_decomp = False
+
+        if self.events_data: self.specified_parameters["nprocs"] = 1
 
         # If nprocs is None, do some automatic decomposition of the domain
-        if self.nprocs is None:
-            self.nprocs = np.around(np.prod(self.domain_dimensions) /
-                                    32**self.dimensionality).astype("int")
-            self.nprocs = max(min(self.nprocs, 512), 1)
+        if self.specified_parameters["nprocs"] is None:
+            if len(self.line_database) > 0:
+                dims = 2
+            else:
+                dims = self.dimensionality
+            if self.z_axis_decomp:
+                nprocs = np.around(self.domain_dimensions[2]/8).astype("int")
+            else:
+                nprocs = np.around(np.prod(self.domain_dimensions)/32**dims).astype("int")
+            self.parameters["nprocs"] = max(min(nprocs, 512), 1)
+        else:
+            self.parameters["nprocs"] = self.specified_parameters["nprocs"]
 
         self.reversed = False
 

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 yt/frontends/fits/io.py
--- a/yt/frontends/fits/io.py
+++ b/yt/frontends/fits/io.py
@@ -88,7 +88,7 @@
             for chunk in chunks:
                 for g in chunk.objs:
                     start = ((g.LeftEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
-                    end = ((g.RightEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
+                    end = start + g.ActiveDimensions
                     if self.line_db is not None and fname in self.line_db:
                         my_off = self.line_db.get(fname).in_units(self.pf.vel_unit).value
                         my_off = my_off - 0.5*self.pf.line_width

diff -r b19ce8c82e07be0ebc07f8e8943ac56c1bf40a21 -r 788d2640d9f4ba392b0fb20d95a1da3085192355 yt/geometry/ppv_coordinates.py
--- a/yt/geometry/ppv_coordinates.py
+++ b/yt/geometry/ppv_coordinates.py
@@ -25,8 +25,6 @@
 
         self.axis_name = {}
         self.axis_id = {}
-        self.x_axis = {}
-        self.y_axis = {}
 
         for axis, axis_name in zip([pf.lon_axis, pf.lat_axis, pf.vel_axis],
                                    ["Image\ x", "Image\ y", pf.vel_name]):
@@ -42,28 +40,6 @@
             self.axis_id[axis] = axis
             self.axis_id[axis_name] = axis
 
-            if axis == 0:
-                self.x_axis[axis] = 1
-                self.x_axis[lower_ax] = 1
-                self.x_axis[axis_name] = 1
-                self.y_axis[axis] = 2
-                self.y_axis[lower_ax] = 2
-                self.y_axis[axis_name] = 2
-            elif axis == 1:
-                self.x_axis[axis] = 2
-                self.x_axis[lower_ax] = 2
-                self.x_axis[axis_name] = 2
-                self.y_axis[axis] = 0
-                self.y_axis[lower_ax] = 0
-                self.y_axis[axis_name] = 0
-            elif axis == 2:
-                self.x_axis[axis] = 0
-                self.x_axis[lower_ax] = 0
-                self.x_axis[axis_name] = 0
-                self.y_axis[axis] = 1
-                self.y_axis[lower_ax] = 1
-                self.y_axis[axis_name] = 1
-
         self.default_unit_label = {}
         self.default_unit_label[pf.lon_axis] = "pixel"
         self.default_unit_label[pf.lat_axis] = "pixel"
@@ -75,3 +51,8 @@
     def convert_from_cylindrical(self, coord):
         raise NotImplementedError
 
+    x_axis = { 'x' : 1, 'y' : 0, 'z' : 0,
+                0  : 1,  1  : 0,  2  : 0}
+
+    y_axis = { 'x' : 2, 'y' : 2, 'z' : 1,
+                0  : 2,  1  : 2,  2  : 1}

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


