[yt-svn] commit/yt: 27 new changesets

commits-noreply at bitbucket.org
Thu Jul 16 09:43:21 PDT 2015


27 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/9b7d916c9727/
Changeset:   9b7d916c9727
Branch:      yt
User:        MatthewTurk
Date:        2015-06-30 21:31:19+00:00
Summary:     Remove seismodome information.
Affected #:  1 file

diff -r b597f0783b1e4104896f741d84717cd2b7d88458 -r 9b7d916c9727ab3e75dfd4dfbe6ce0884e400826 doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -469,18 +469,3 @@
 
 For an in-depth example, please see the cookbook example on opaque renders here: 
 :ref:`cookbook-opaque_rendering`.
-
-Making Production Quality Movies
---------------------------------
-
-There are a number of aspects of generating movies at production quality that
-can be tweaked and adjusted.  At least two planetarium shows have been created
-that have in part used the new volume rendering framework in yt 3.2, including
-Seismodome and Solar Superstorms.  The scripts used to create Seismodome can be
-found in this repository: https://bitbucket.org/seismodome/movie_scripts , but
-may now be out of date as they were updated during the course of production
-before the new volume rendering API was stabilized.  Among other things, these
-scripts show how to use multiple lenses from a single source, how to rotate
-objects and cameras (sometimes by hand), and how to transform between
-coordinate systems.  A low-resolution 360 movie generated with this script can
-be seen here: https://www.youtube.com/watch?v=0J1colZzOCk .


https://bitbucket.org/yt_analysis/yt/commits/ec7a9efa56ca/
Changeset:   ec7a9efa56ca
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:49:01+00:00
Summary:     Merged in MatthewTurk/yt (pull request #16)

Remove seismodome references
Affected #:  1 file

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -476,18 +476,3 @@
 
 For an in-depth example, please see the cookbook example on opaque renders here: 
 :ref:`cookbook-opaque_rendering`.
-
-Making Production Quality Movies
---------------------------------
-
-There are a number of aspects of generating movies at production quality that
-can be tweaked and adjusted.  At least two planetarium shows have been created
-that have in part used the new volume rendering framework in yt 3.2, including
-Seismodome and Solar Superstorms.  The scripts used to create Seismodome can be
-found in this repository: https://bitbucket.org/seismodome/movie_scripts , but
-may now be out of date as they were updated during the course of production
-before the new volume rendering API was stabilized.  Among other things, these
-scripts show how to use multiple lenses from a single source, how to rotate
-objects and cameras (sometimes by hand), and how to transform between
-coordinate systems.  A low-resolution 360 movie generated with this script can
-be seen here: https://www.youtube.com/watch?v=0J1colZzOCk .


https://bitbucket.org/yt_analysis/yt/commits/f413e80b8024/
Changeset:   f413e80b8024
Branch:      yt
User:        chummels
Date:        2015-07-05 19:14:07+00:00
Summary:     GridsSource -> GridSource; PointsSource -> PointSource; to match convention of LineSource, BoxSource, and CoordinateVectorSource
Affected #:  6 files

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -629,10 +629,10 @@
    :toctree: generated/
 
    ~yt.visualization.volume_rendering.api.VolumeSource
-   ~yt.visualization.volume_rendering.api.PointsSource
+   ~yt.visualization.volume_rendering.api.PointSource
    ~yt.visualization.volume_rendering.api.LineSource
    ~yt.visualization.volume_rendering.api.BoxSource
-   ~yt.visualization.volume_rendering.api.GridsSource
+   ~yt.visualization.volume_rendering.api.GridSource
    ~yt.visualization.volume_rendering.api.CoordinateVectorSource
 
 Streamlining

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -226,7 +226,7 @@
 versions of yt surfaces and texture mapped objects will be included.
 
 The primary objects now available for hard and opaque objects are 
-:class:`~yt.visualization.volume_rendering.api.PointsSource` and
+:class:`~yt.visualization.volume_rendering.api.PointSource` and
 :class:`~yt.visualization.volume_rendering.api.LineSource`.  These are useful
 if you want to annotate points, for instance by splatting a set of particles
 onto an image, or if you want to draw lines connecting different regions or
@@ -239,7 +239,7 @@
 By annotating a visualization, additional information can be drawn out.  yt
 provides three annotations:
 :class:`~yt.visualization.volume_rendering.api.BoxSource`,
-:class:`~yt.visualization.volume_rendering.api.GridsSource`, and
+:class:`~yt.visualization.volume_rendering.api.GridSource`, and
 :class:`~yt.visualization.volume_rendering.api.CoordinateVectorSource`.  These
 annotations will operate in data space and can draw boxes, grid information,
 and also provide a vector orientation within the image.

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 yt/visualization/volume_rendering/api.py
--- a/yt/visualization/volume_rendering/api.py
+++ b/yt/visualization/volume_rendering/api.py
@@ -31,5 +31,5 @@
 from .off_axis_projection import off_axis_projection
 from .scene import Scene
 from .render_source import VolumeSource, OpaqueSource, LineSource, \
-    BoxSource, PointsSource, CoordinateVectorSource, GridsSource
+    BoxSource, PointSource, CoordinateVectorSource, GridSource
 from .zbuffer_array import ZBuffer

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -224,7 +224,7 @@
         return disp
 
 
-class PointsSource(OpaqueSource):
+class PointSource(OpaqueSource):
 
     _image = None
     data_source = None
@@ -249,7 +249,7 @@
 
         Examples
         --------
-        >>> source = PointsSource(particle_positions)
+        >>> source = PointSource(particle_positions)
 
         """
         self.positions = positions
@@ -385,7 +385,7 @@
         super(BoxSource, self).__init__(vertices, color, color_stride=24)
 
 
-class GridsSource(LineSource):
+class GridSource(LineSource):
     def __init__(self, data_source, alpha=0.3, cmap='alage',
                  min_level=None, max_level=None):
         r"""A render source for drawing grids in a scene.
@@ -409,7 +409,7 @@
         Examples
         --------
         >>> dd = ds.sphere("c", (0.1, "unitary"))
-        >>> source = GridsSource(dd, alpha=1.0)
+        >>> source = GridSource(dd, alpha=1.0)
 
         """
         data_source = data_source_or_all(data_source)
@@ -454,7 +454,7 @@
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
 
-        super(GridsSource, self).__init__(vertices, colors, color_stride=24)
+        super(GridSource, self).__init__(vertices, colors, color_stride=24)
 
 
 class CoordinateVectorSource(OpaqueSource):

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -17,7 +17,7 @@
 from yt.extern.six import iteritems, itervalues
 from .camera import Camera
 from .render_source import OpaqueSource, BoxSource, CoordinateVectorSource, \
-    GridsSource
+    GridSource
 from .zbuffer_array import ZBuffer
 
 
@@ -243,7 +243,7 @@
 
     def annotate_grids(self, data_source, alpha=0.3, cmap='algae',
                        min_level=None, max_level=None):
-        grids = GridsSource(data_source, alpha=alpha, cmap=cmap,
+        grids = GridSource(data_source, alpha=alpha, cmap=cmap,
                             min_level=min_level, max_level=max_level)
         self.add_source(grids)
         return self

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r f413e80b8024dbb0d7307de3f0228bf53a38ba07 yt/visualization/volume_rendering/tests/test_points.py
--- a/yt/visualization/volume_rendering/tests/test_points.py
+++ b/yt/visualization/volume_rendering/tests/test_points.py
@@ -13,7 +13,7 @@
 import yt
 from yt.testing import fake_random_ds
 from yt.visualization.volume_rendering.api import Scene, Camera, ZBuffer, \
-    VolumeSource, OpaqueSource, LineSource, BoxSource, PointsSource
+    VolumeSource, OpaqueSource, LineSource, BoxSource, PointSource
 from yt.utilities.lib.misc_utilities import lines
 from yt.data_objects.api import ImageArray
 import numpy as np
@@ -43,7 +43,7 @@
     colors = np.random.random([npoints, 4])
     colors[:,3] = 0.10
 
-    points_source = PointsSource(vertices, colors=colors)
+    points_source = PointSource(vertices, colors=colors)
     sc.add_source(points_source)
     im = sc.render()
     im.write_png("points.png")
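
A minimal sketch of the renamed classes in use, following the updated test
above.  It assumes a dataset ds and a Scene sc (with a camera) have already
been set up; those steps are not part of this diff.

    import numpy as np
    from yt.visualization.volume_rendering.api import PointSource, GridSource

    # Splat 1000 random points into the scene (new name: PointSource).
    npoints = 1000
    vertices = np.random.random([npoints, 3])
    colors = np.random.random([npoints, 4])
    colors[:, 3] = 0.10                       # alpha channel
    sc.add_source(PointSource(vertices, colors=colors))

    # Overlay grid edges for a sphere of data (new name: GridSource).
    dd = ds.sphere("c", (0.1, "unitary"))
    sc.add_source(GridSource(dd, alpha=1.0))

    im = sc.render()
    im.write_png("points_and_grids.png")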


https://bitbucket.org/yt_analysis/yt/commits/f9176a102c5b/
Changeset:   f9176a102c5b
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:53:03+00:00
Summary:     Merged in chummels/yt-vr-refactor (pull request #24)

GridsSource -> GridSource; PointsSource -> PointSource; to match convention of LineSource, BoxSource, and CoordinateVectorSource
Affected #:  6 files

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -629,10 +629,10 @@
    :toctree: generated/
 
    ~yt.visualization.volume_rendering.api.VolumeSource
-   ~yt.visualization.volume_rendering.api.PointsSource
+   ~yt.visualization.volume_rendering.api.PointSource
    ~yt.visualization.volume_rendering.api.LineSource
    ~yt.visualization.volume_rendering.api.BoxSource
-   ~yt.visualization.volume_rendering.api.GridsSource
+   ~yt.visualization.volume_rendering.api.GridSource
    ~yt.visualization.volume_rendering.api.CoordinateVectorSource
 
 Streamlining

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -226,7 +226,7 @@
 versions of yt surfaces and texture mapped objects will be included.
 
 The primary objects now available for hard and opaque objects are 
-:class:`~yt.visualization.volume_rendering.api.PointsSource` and
+:class:`~yt.visualization.volume_rendering.api.PointSource` and
 :class:`~yt.visualization.volume_rendering.api.LineSource`.  These are useful
 if you want to annotate points, for instance by splatting a set of particles
 onto an image, or if you want to draw lines connecting different regions or
@@ -239,7 +239,7 @@
 By annotating a visualization, additional information can be drawn out.  yt
 provides three annotations:
 :class:`~yt.visualization.volume_rendering.api.BoxSource`,
-:class:`~yt.visualization.volume_rendering.api.GridsSource`, and
+:class:`~yt.visualization.volume_rendering.api.GridSource`, and
 :class:`~yt.visualization.volume_rendering.api.CoordinateVectorSource`.  These
 annotations will operate in data space and can draw boxes, grid information,
 and also provide a vector orientation within the image.

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd yt/visualization/volume_rendering/api.py
--- a/yt/visualization/volume_rendering/api.py
+++ b/yt/visualization/volume_rendering/api.py
@@ -31,5 +31,5 @@
 from .off_axis_projection import off_axis_projection
 from .scene import Scene
 from .render_source import VolumeSource, OpaqueSource, LineSource, \
-    BoxSource, PointsSource, CoordinateVectorSource, GridsSource
+    BoxSource, PointSource, CoordinateVectorSource, GridSource
 from .zbuffer_array import ZBuffer

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -224,7 +224,7 @@
         return disp
 
 
-class PointsSource(OpaqueSource):
+class PointSource(OpaqueSource):
 
     _image = None
     data_source = None
@@ -249,7 +249,7 @@
 
         Examples
         --------
-        >>> source = PointsSource(particle_positions)
+        >>> source = PointSource(particle_positions)
 
         """
         self.positions = positions
@@ -385,7 +385,7 @@
         super(BoxSource, self).__init__(vertices, color, color_stride=24)
 
 
-class GridsSource(LineSource):
+class GridSource(LineSource):
     def __init__(self, data_source, alpha=0.3, cmap='alage',
                  min_level=None, max_level=None):
         r"""A render source for drawing grids in a scene.
@@ -409,7 +409,7 @@
         Examples
         --------
         >>> dd = ds.sphere("c", (0.1, "unitary"))
-        >>> source = GridsSource(dd, alpha=1.0)
+        >>> source = GridSource(dd, alpha=1.0)
 
         """
         data_source = data_source_or_all(data_source)
@@ -454,7 +454,7 @@
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
 
-        super(GridsSource, self).__init__(vertices, colors, color_stride=24)
+        super(GridSource, self).__init__(vertices, colors, color_stride=24)
 
 
 class CoordinateVectorSource(OpaqueSource):

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -17,7 +17,7 @@
 from yt.extern.six import iteritems, itervalues
 from .camera import Camera
 from .render_source import OpaqueSource, BoxSource, CoordinateVectorSource, \
-    GridsSource
+    GridSource
 from .zbuffer_array import ZBuffer
 
 
@@ -243,7 +243,7 @@
 
     def annotate_grids(self, data_source, alpha=0.3, cmap='algae',
                        min_level=None, max_level=None):
-        grids = GridsSource(data_source, alpha=alpha, cmap=cmap,
+        grids = GridSource(data_source, alpha=alpha, cmap=cmap,
                             min_level=min_level, max_level=max_level)
         self.add_source(grids)
         return self

diff -r ec7a9efa56ca87efde6f302f41807ad1f2bc4316 -r f9176a102c5b62133455726f93f4d7dfd6da96cd yt/visualization/volume_rendering/tests/test_points.py
--- a/yt/visualization/volume_rendering/tests/test_points.py
+++ b/yt/visualization/volume_rendering/tests/test_points.py
@@ -13,7 +13,7 @@
 import yt
 from yt.testing import fake_random_ds
 from yt.visualization.volume_rendering.api import Scene, Camera, ZBuffer, \
-    VolumeSource, OpaqueSource, LineSource, BoxSource, PointsSource
+    VolumeSource, OpaqueSource, LineSource, BoxSource, PointSource
 from yt.utilities.lib.misc_utilities import lines
 from yt.data_objects.api import ImageArray
 import numpy as np
@@ -43,7 +43,7 @@
     colors = np.random.random([npoints, 4])
     colors[:,3] = 0.10
 
-    points_source = PointsSource(vertices, colors=colors)
+    points_source = PointSource(vertices, colors=colors)
     sc.add_source(points_source)
     im = sc.render()
     im.write_png("points.png")


https://bitbucket.org/yt_analysis/yt/commits/48f9b41d221a/
Changeset:   48f9b41d221a
Branch:      yt
User:        chummels
Date:        2015-07-05 19:19:43+00:00
Summary:     Removing some old todo files.
Affected #:  3 files

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r 48f9b41d221a3649d4377ab92e0f2ce8f673b0f8 vr_refactor_todo.markdown
--- a/vr_refactor_todo.markdown
+++ /dev/null
@@ -1,31 +0,0 @@
-Todo
-----
-
-Known Issues:
-
-* ~~FRB Off-axis projections are broken I think. Currently should raise not-implemented error.~~
-* Parallelism
-  * Need to write parallel z-buffer reduce.
-  * Need to verify brick ordering
-* Alpha blending level for opaque sources such as grid lines/domains/etc may
-  not currently be ideal. Difficult to get it right when the transparent VRs
-  have wildly different levels. One approach would be to normalize the transfer
-  function such that the integral of the TF multiplied by the depth of the 
-  rendering is equal to 1. With grey opacity on, all of these things get a bit
-  easier, in my opinion
-
-Documentation:
-
-* ~~Scene~~
-* ~~Camera~~
-* Lens
-* Narrative
-  * Have started, but more work to do. Replaced at least the tutorial
-    rendering, which saves a number of lines!
-* Cookbooks
-  * All relevant cookbooks have been updated
-* Parallelism
-* OpaqueSource
-* RenderSource
-* Narrative Developer Documentation
-* Diagram of Camera, Scene, Lens, Source

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r 48f9b41d221a3649d4377ab92e0f2ce8f673b0f8 yt/visualization/volume_rendering/notes.md
--- a/yt/visualization/volume_rendering/notes.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-Overview of Volume Rendering
-============================
-
-In 3.0, we have moved away from the "god class" that was Camera, and have
-attempted to break down the VR system into a hierarchy of classes.  So far
-we are at:
-
-1. Scene 
-2. Camera
-3. Lens 
-4. Source
-
-For now, a scene only has one camera, i.e. one viewpoint. I would like this to be
-extended to multiple cameras at some point, but not in this pass.
-
-A Camera can have many lenses. When taking a snapshot, the Camera will loop 
-over the lenses that have been added by the user.  We should come up with a
-naming convention and storage system.
-
-
-A Lens defines how the vectors are oriented pointing outward from the camera
-position.  Plane-parallel, Perspective, Fisheye are the first set that need to
-be implemented. As much of the Lens as possible will be set up using defaults 
-derived from the scene, such as the width/depth/etc.
-
-A Source is a data source with intent on how to visualize it.  For example, a
-VolumeSource should be treated volumetrically, with a transfer function defined
-for a given field or set of fields.  A generic OpaqueSource should define
-a method for pixelizing a ZBuffer object, carrying information about both the
-color and depth of the surface/streamline/annotation. These will be used for
-compositing later.
-
-
-sc = Scene(data_source)
-cam = sc.add_camera(cam) // triggers cam.set_defaults_from_data_source(data_source)
-lens = PlaneParallelLens()
-cam.set_lens(lens) # This sets up lens based on camera.
-

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r 48f9b41d221a3649d4377ab92e0f2ce8f673b0f8 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -272,7 +272,7 @@
             empty = zbuffer.rgba
             z = zbuffer.z
 
-        # DRAW SOME LINES
+        # DRAW SOME POINTS
         camera.lens.setup_box_properties(camera)
         px, py, dz = camera.lens.project_to_plane(camera, vertices)
         zpoints(empty, z, px.d, py.d, dz.d, self.colors, self.color_stride)
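
For context, a minimal sketch of the Scene/Camera/Lens/Source wiring that the
removed notes.md described, using only calls that appear elsewhere in this
series of commits.  The cam and source objects are assumed to exist already,
and Scene() taking no arguments is an assumption, since its signature is not
shown in these diffs.

    from yt.visualization.volume_rendering.api import Scene

    # Scene -> Camera -> Lens, plus any number of Sources attached to the Scene.
    sc = Scene()                      # assumed: no data source passed here
    sc.set_camera(cam)                # cam: a Camera instance built elsewhere
    cam.set_lens('plane-parallel')    # lenses are selected by name on the camera
    sc.add_source(source)             # source: any RenderSource subclass
    im = sc.render()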


https://bitbucket.org/yt_analysis/yt/commits/6f297ca38d2e/
Changeset:   6f297ca38d2e
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:54:11+00:00
Summary:     Merged in chummels/yt-vr-refactor (pull request #25)

Removing some old todo files.
Affected #:  3 files

diff -r f9176a102c5b62133455726f93f4d7dfd6da96cd -r 6f297ca38d2ef598c590a4ba4b15ee56dab69791 vr_refactor_todo.markdown
--- a/vr_refactor_todo.markdown
+++ /dev/null
@@ -1,31 +0,0 @@
-Todo
-----
-
-Known Issues:
-
-* ~~FRB Off-axis projections are broken I think. Currently should raise not-implemented error.~~
-* Parallelism
-  * Need to write parallel z-buffer reduce.
-  * Need to verify brick ordering
-* Alpha blending level for opaque sources such as grid lines/domains/etc may
-  not currently be ideal. Difficult to get it right when the transparent VRs
-  have wildly different levels. One approach would be to normalize the transfer
-  function such that the integral of the TF multiplied by the depth of the 
-  rendering is equal to 1. With grey opacity on, all of these things get a bit
-  easier, in my opinion
-
-Documentation:
-
-* ~~Scene~~
-* ~~Camera~~
-* Lens
-* Narrative
-  * Have started, but more work to do. Replaced at least the tutorial
-    rendering, which saves a number of lines!
-* Cookbooks
-  * All relevant cookbooks have been updated
-* Parallelism
-* OpaqueSource
-* RenderSource
-* Narrative Developer Documentation
-* Diagram of Camera, Scene, Lens, Source

diff -r f9176a102c5b62133455726f93f4d7dfd6da96cd -r 6f297ca38d2ef598c590a4ba4b15ee56dab69791 yt/visualization/volume_rendering/notes.md
--- a/yt/visualization/volume_rendering/notes.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-Overview of Volume Rendering
-============================
-
-In 3.0, we have moved away from the "god class" that was Camera, and have
-attempted to break down the VR system into a hierarchy of classes.  So far
-we are at:
-
-1. Scene 
-2. Camera
-3. Lens 
-4. Source
-
-For now, a scene only has one camera, i.e. one viewpoint. I would like this to be
-extended to multiple cameras at some point, but not in this pass.
-
-A Camera can have many lenses. When taking a snapshot, the Camera will loop 
-over the lenses that have been added by the user.  We should come up with a
-naming convention and storage system.
-
-
-A Lens defines how the vectors are oriented pointing outward from the camera
-position.  Plane-parallel, Perspective, Fisheye are the first set that need to
-be implemented. As much of the Lens as possible will be set up using defaults 
-derived from the scene, such as the width/depth/etc.
-
-A Source is a data source with intent on how to visualize it.  For example, a
-VolumeSource should be treated volumetrically, with a transfer function defined
-for a given field or set of fields.  A generic OpaqueSource should define
-a method for pixelizing a ZBuffer object, carrying information about both the
-color and depth of the surface/streamline/annotation. These will be used for
-compositing later.
-
-
-sc = Scene(data_source)
-cam = sc.add_camera(cam) // triggers cam.set_defaults_from_data_source(data_source)
-lens = PlaneParallelLens()
-cam.set_lens(lens) # This sets up lens based on camera.
-

diff -r f9176a102c5b62133455726f93f4d7dfd6da96cd -r 6f297ca38d2ef598c590a4ba4b15ee56dab69791 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -272,7 +272,7 @@
             empty = zbuffer.rgba
             z = zbuffer.z
 
-        # DRAW SOME LINES
+        # DRAW SOME POINTS
         camera.lens.setup_box_properties(camera)
         px, py, dz = camera.lens.project_to_plane(camera, vertices)
         zpoints(empty, z, px.d, py.d, dz.d, self.colors, self.color_stride)


https://bitbucket.org/yt_analysis/yt/commits/2f2320972022/
Changeset:   2f2320972022
Branch:      yt
User:        chummels
Date:        2015-07-05 19:00:01+00:00
Summary:     Making note about VR-annotated recipe using old VR infrastructure.
Affected #:  1 file

diff -r f256e9d8a954023a3096d086bff35f9b9aa3a905 -r 2f23209720223ec5fc4cc59fa96823fd8c830b2d doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -250,8 +250,10 @@
 
 This recipe demonstrates how to write the simulation time, show an
 axis triad indicating the direction of the coordinate system, and show
-the transfer function on a volume rendering.
-See :ref:`volume_rendering` for more information.
+the transfer function on a volume rendering.  Please note that this 
+recipe relies on the old volume rendering interface.  While one can
+continue to use this interface, it may be incompatible with some of the
+new developments and the infrastructure described in :ref:`volume_rendering`.
 
 .. yt_cookbook:: vol-annotated.py
 


https://bitbucket.org/yt_analysis/yt/commits/ad526aef1ee2/
Changeset:   ad526aef1ee2
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:54:53+00:00
Summary:     Merged in chummels/yt-vr-refactor (pull request #23)

Making note about VR-annotated recipe using old VR infrastructure.
Affected #:  1 file

diff -r 6f297ca38d2ef598c590a4ba4b15ee56dab69791 -r ad526aef1ee296130ae58453e7c50aa213fbb30b doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -250,8 +250,10 @@
 
 This recipe demonstrates how to write the simulation time, show an
 axis triad indicating the direction of the coordinate system, and show
-the transfer function on a volume rendering.
-See :ref:`volume_rendering` for more information.
+the transfer function on a volume rendering.  Please note that this 
+recipe relies on the old volume rendering interface.  While one can
+continue to use this interface, it may be incompatible with some of the
+new developments and the infrastructure described in :ref:`volume_rendering`.
 
 .. yt_cookbook:: vol-annotated.py
 


https://bitbucket.org/yt_analysis/yt/commits/eeadc9b12217/
Changeset:   eeadc9b12217
Branch:      yt
User:        MatthewTurk
Date:        2015-07-01 15:51:00+00:00
Summary:     Fix get_source by making sources an OrderedDict
Affected #:  1 file

diff -r 0e6e36ce3dd5b7227860cf2a243ee0d9a7f81357 -r eeadc9b12217b49815a06de04bbb93411c6d9fe4 yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -13,6 +13,7 @@
 
 
 import numpy as np
+from collections import OrderedDict
 from yt.funcs import mylog
 from yt.extern.six import iteritems, itervalues
 from .camera import Camera
@@ -75,7 +76,7 @@
 
         """
         super(Scene, self).__init__()
-        self.sources = {}
+        self.sources = OrderedDict()
         self.camera = None
 
     def get_source(self, source_num):
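
The fix works because an OrderedDict remembers insertion order, which is what
get_source(source_num) needs in order to index sources by position.  A small
illustration of that behavior (the key names here are purely illustrative):

    from collections import OrderedDict

    sources = OrderedDict()
    sources['source_00'] = 'volume'
    sources['source_01'] = 'points'

    # The equivalent of get_source(1): pick the value at an insertion index.
    key = list(sources.keys())[1]
    print(sources[key])               # -> 'points'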


https://bitbucket.org/yt_analysis/yt/commits/54610678a9b4/
Changeset:   54610678a9b4
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:55:55+00:00
Summary:     Merged in MatthewTurk/yt (pull request #22)

Use ordereddict in scene objects
Affected #:  1 file

diff -r ad526aef1ee296130ae58453e7c50aa213fbb30b -r 54610678a9b4afb3dfd345e0e89e00d45cc0445c yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -13,6 +13,7 @@
 
 
 import numpy as np
+from collections import OrderedDict
 from yt.funcs import mylog
 from yt.extern.six import iteritems, itervalues
 from .camera import Camera
@@ -75,7 +76,7 @@
 
         """
         super(Scene, self).__init__()
-        self.sources = {}
+        self.sources = OrderedDict()
         self.camera = None
 
     def get_source(self, source_num):


https://bitbucket.org/yt_analysis/yt/commits/a71c51d3130f/
Changeset:   a71c51d3130f
Branch:      yt
User:        atmyers
Date:        2015-06-30 21:09:44+00:00
Summary:     filling in some docstrings
Affected #:  1 file

diff -r b597f0783b1e4104896f741d84717cd2b7d88458 -r a71c51d3130f018b25db50135c7ea45c6c919da6 yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -234,7 +234,21 @@
         return handle
 
     def annotate_domain(self, ds, color=None):
-        """docstring for annotate_domain"""
+        r"""
+
+        Modifies this scene by drawing the edges of the computational domain.
+        This adds a new BoxSource to the scene corresponding to the domain
+        boundaries and returns the modified scene object.
+
+        Parameters
+        ----------
+
+        ds : :class:`yt.data_objects.api.Dataset`
+            This is the dataset object corresponding to the
+            simulation being rendered. Used to get the domain bounds.
+
+
+        """
         box_source = BoxSource(ds.domain_left_edge,
                                ds.domain_right_edge,
                                color=None)
@@ -249,7 +263,20 @@
         return self
 
     def annotate_axes(self, colors=None, alpha=1.0):
-        """docstring for annotate_axes"""
+        r"""
+
+        Modifies this scene by drawing the coordinate axes.
+        This adds a new CoordinateVectorSource to the scene
+        and returns the modified scene object.
+
+        Parameters
+        ----------
+        colors: array-like, shape (3,4), optional
+            The x, y, z RGBA values to use to draw the axes.
+        alpha : float, optional
+            The opacity of the vectors.
+
+        """
         coords = CoordinateVectorSource(colors, alpha)
         self.add_source(coords)
         return self
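
A minimal usage sketch for the two annotation helpers documented above,
assuming a dataset ds and a Scene sc already exist:

    # Draw the domain edges and a coordinate triad; both methods return the
    # scene object, so the calls can be chained if desired.
    sc.annotate_domain(ds)
    sc.annotate_axes(alpha=0.8)
    im = sc.render()
    im.write_png("annotated.png")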


https://bitbucket.org/yt_analysis/yt/commits/8469da194482/
Changeset:   8469da194482
Branch:      yt
User:        atmyers
Date:        2015-06-30 21:31:57+00:00
Summary:     removing the SceneHandler
Affected #:  1 file

diff -r a71c51d3130f018b25db50135c7ea45c6c919da6 -r 8469da194482dbcc9505cb459330a78554dccd73 yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -21,29 +21,6 @@
 from .zbuffer_array import ZBuffer
 
 
-class SceneHandle(object):
-    """docstring for SceneHandle"""
-    def __init__(self, scene, camera, source, lens):
-        mylog.debug("Entering %s" % str(self))
-        self.scene = scene
-        self.camera = camera
-        self.source = source
-        self.lens = lens
-
-    def __repr__(self):
-        desc = super(SceneHandle, self).__repr__()
-        desc += str(self)
-        return desc
-
-    def __str__(self):
-        desc = "Scene Handler\n"
-        desc += ".scene: " + self.scene.__repr__() + "\n"
-        desc += ".camera: " + self.camera.__repr__() + "\n"
-        desc += ".source: " + self.source.__repr__() + "\n"
-        desc += ".lens: " + self.lens.__repr__() + "\n"
-        return desc
-
-
 class Scene(object):
 
     """The Scene Class
@@ -217,22 +194,22 @@
         return locals()
     camera = property(**camera())
 
-    # Are these useful?
     def set_camera(self, camera):
+        r"""
+
+        Set the camera to be used by this scene.
+
+        """
         self.camera = camera
 
     def get_camera(self, camera):
+        r"""
+
+        Get the camera currently used by this scene.
+
+        """
         return self.camera
 
-    def get_handle(self, key=None):
-        """docstring for get_handle"""
-
-        if key is None:
-            key = self.sources.keys()[0]
-        handle = SceneHandle(self, self.camera, self.sources[key],
-                             self.sources[key].lens)
-        return handle
-
     def annotate_domain(self, ds, color=None):
         r"""
 


https://bitbucket.org/yt_analysis/yt/commits/baae8e72d784/
Changeset:   baae8e72d784
Branch:      yt
User:        atmyers
Date:        2015-06-30 21:44:29+00:00
Summary:     some more docstrings for the camera class
Affected #:  1 file

diff -r 8469da194482dbcc9505cb459330a78554dccd73 -r baae8e72d784d3ac408ee642c3fd6e0a6e63d96c yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -22,7 +22,17 @@
 
 class Camera(Orientation):
 
-    r"""    """
+    r"""    
+
+    The Camera class. A Camera represents of point of view into a
+    Scene. It is defined by a position (the location of the camera
+    in the simulation domain,), a focus (the point at which the
+    camera is pointed), a width (the width of the snapshot that will
+    be taken, a resolution (the number of pixels in the image), and
+    a north_vector (the "up" direction in the resulting image). A
+    camera can use a variety of different Lens objects.
+
+    """
 
     _moved = True
     _width = None
@@ -79,7 +89,8 @@
         self.lens.setup_box_properties(self)
 
     def position():
-        doc = "The position property."
+        doc = '''The position is the location of the camera in
+               the coordinate system of the simulation.'''
 
         def fget(self):
             return self._position
@@ -94,7 +105,7 @@
     position = property(**position())
 
     def width():
-        doc = "The width property."
+        doc = '''The width of the image that will be produced. '''
 
         def fget(self):
             return self._width
@@ -110,7 +121,7 @@
     width = property(**width())
 
     def focus():
-        doc = "The focus property."
+        doc = '''The focus defines the point the Camera is pointed at. '''
 
         def fget(self):
             return self._focus
@@ -125,7 +136,8 @@
     focus = property(**focus())
 
     def resolution():
-        doc = "The resolution property."
+        doc = '''The resolution is the number of pixels in the image that
+               will be produced. '''
 
         def fget(self):
             return self._resolution
@@ -197,9 +209,9 @@
         self.switch_orientation()
 
     def set_position(self, position, north_vector=None):
-          self.position = position
-          self.switch_orientation(normal_vector=self.focus - self.position,
-                                  north_vector=north_vector)
+        self.position = position
+        self.switch_orientation(normal_vector=self.focus - self.position,
+                                north_vector=north_vector)
 
     def switch_orientation(self, normal_vector=None, north_vector=None):
         r"""
@@ -264,6 +276,7 @@
         --------
 
         >>> cam.rotate(np.pi/4)
+
         """
         rotate_all = rot_vector is not None
         if rot_vector is None:
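
A short sketch of the camera state that the new docstrings describe, assuming
a Camera instance cam has already been created:

    # The documented pieces of camera state, read back directly.
    print(cam.position)     # where the camera sits in the simulation domain
    print(cam.focus)        # the point the camera is aimed at
    print(cam.width)        # the width of the region the snapshot covers
    print(cam.resolution)   # the number of pixels in the produced image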


https://bitbucket.org/yt_analysis/yt/commits/017deaef72ca/
Changeset:   017deaef72ca
Branch:      yt
User:        atmyers
Date:        2015-06-30 21:59:19+00:00
Summary:     more complete docstrings for the camera
Affected #:  2 files

diff -r baae8e72d784d3ac408ee642c3fd6e0a6e63d96c -r 017deaef72ca5f5ebc299e51236c666855a743bd yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -161,6 +161,23 @@
         return lens_params
 
     def set_lens(self, lens_type):
+        r'''
+
+        Set the lens to be used with this camera. 
+
+        Parameters
+        ----------
+
+        lens_type : string
+            Must be one of the following:
+            'plane-parallel'
+            'perspective'
+            'stereo-perspective'
+            'fisheye'
+            'spherical'
+            'stereo-spherical'
+
+        '''
         if lens_type not in lenses:
             mylog.error("Lens type not available")
             raise RuntimeError()
@@ -198,7 +215,17 @@
         self._moved = True
 
     def set_width(self, width):
-        """This must have been created using ds.arr"""
+        r"""
+
+        Set the width of the image that will be produced by this camera.
+        This must be a YTQuantity.
+
+        Parameters
+        ----------
+
+        width : :class:`yt.units.yt_array.YTQuantity`
+
+        """
         assert isinstance(width, YTArray), 'Width must be created with ds.arr'
         if isinstance(width, YTArray):
             width = width.in_units('code_length')
@@ -209,6 +236,21 @@
         self.switch_orientation()
 
     def set_position(self, position, north_vector=None):
+        r"""
+
+        Set the position of the camera.
+
+        Parameters
+        ----------
+
+        position : array_like
+            The new position
+        north_vector : array_like, optional
+            The 'up' direction for the plane of rays.  If not specific,
+            calculated automatically.
+
+        """
+
         self.position = position
         self.switch_orientation(normal_vector=self.focus - self.position,
                                 north_vector=north_vector)

diff -r baae8e72d784d3ac408ee642c3fd6e0a6e63d96c -r 017deaef72ca5f5ebc299e51236c666855a743bd yt/visualization/volume_rendering/lens.py
--- a/yt/visualization/volume_rendering/lens.py
+++ b/yt/visualization/volume_rendering/lens.py
@@ -682,4 +682,4 @@
           'stereo-perspective': StereoPerspectiveLens,
           'fisheye': FisheyeLens,
           'spherical': SphericalLens,
-          'stereo-spherical':StereoSphericalLens}
+          'stereo-spherical': StereoSphericalLens}
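
A sketch pulling together the three setters documented above, assuming cam is
a Camera and ds is a loaded dataset; whether set_position accepts a plain list
or requires a unitful array is not settled by these docstrings, so that line
is an assumption.

    # Choose a lens by name; the accepted strings are listed in the set_lens
    # docstring above.
    cam.set_lens('perspective')

    # set_width insists on an array created with ds.arr and converts it to
    # code_length internally.
    cam.set_width(ds.arr(0.5, 'unitary'))

    # Move the camera; the view direction is recomputed toward the focus.
    cam.set_position([0.4, 0.6, 0.8])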


https://bitbucket.org/yt_analysis/yt/commits/c232346dc435/
Changeset:   c232346dc435
Branch:      yt
User:        atmyers
Date:        2015-06-30 22:11:19+00:00
Summary:     adding some docstrings for the lens objects
Affected #:  1 file

diff -r 017deaef72ca5f5ebc299e51236c666855a743bd -r c232346dc43582e154e9909ba643ba30edaafa14 yt/visualization/volume_rendering/lens.py
--- a/yt/visualization/volume_rendering/lens.py
+++ b/yt/visualization/volume_rendering/lens.py
@@ -27,7 +27,13 @@
 
 class Lens(ParallelAnalysisInterface):
 
-    """docstring for Lens"""
+    """
+
+    A base class for setting up Lens objects. A Lens,
+    along with a Camera, is used to defined the set of
+    rays that will be used for rendering.
+
+    """
 
     def __init__(self, ):
         super(Lens, self).__init__()
@@ -75,7 +81,12 @@
 
 class PlaneParallelLens(Lens):
 
-    """docstring for PlaneParallelLens"""
+    r'''
+
+    This lens type is the standard type used for orthographic projections. 
+    All rays emerge parallel to each other, arranged along a plane.
+
+    '''
 
     def __init__(self, ):
         super(PlaneParallelLens, self).__init__()
@@ -125,7 +136,12 @@
 
 class PerspectiveLens(Lens):
 
-    """docstring for PerspectiveLens"""
+    r'''
+
+    This lens type adjusts for an opening view angle, so that the scene will 
+    have an element of perspective to it.
+
+    '''
 
     def __init__(self):
         super(PerspectiveLens, self).__init__()
@@ -239,6 +255,7 @@
             (self.viewpoint)
         return disp
 
+
 class StereoPerspectiveLens(Lens):
 
     """docstring for StereoPerspectiveLens"""
@@ -409,9 +426,18 @@
             (self.viewpoint)
         return disp
 
+
 class FisheyeLens(Lens):
 
-    """docstring for FisheyeLens"""
+    r"""
+
+    This lens type accepts a field-of-view property, fov, that describes how wide 
+    an angle the fisheye can see. Fisheye images are typically used for dome-based 
+    presentations; the Hayden planetarium for instance has a field of view of 194.6. 
+    The images returned by this camera will be flat pixel images that can and should 
+    be reshaped to the resolution.    
+
+    """
 
     def __init__(self):
         super(FisheyeLens, self).__init__()
@@ -461,7 +487,7 @@
 
     def set_viewpoint(self, camera):
         """
-        For a PerspectiveLens, the viewpoint is the front center.
+        For a FisheyeLens, the viewpoint is the front center.
         """
         self.viewpoint = camera.position
 
@@ -502,7 +528,12 @@
 
 class SphericalLens(Lens):
 
-    """docstring for SphericalLens"""
+    r"""
+
+    This is a cylindrical-spherical projection. Movies rendered in this way 
+    can be displayed in head-tracking devices or in YouTube 360 view.
+    
+    """
 
     def __init__(self):
         super(SphericalLens, self).__init__()
@@ -595,6 +626,7 @@
         py = (u * np.rint(py)).astype("int64")
         return px, py, dz
 
+
 class StereoSphericalLens(Lens):
 
     """docstring for StereoSphericalLens"""


https://bitbucket.org/yt_analysis/yt/commits/cddccd4c0884/
Changeset:   cddccd4c0884
Branch:      yt
User:        atmyers
Date:        2015-06-30 22:16:54+00:00
Summary:     filling in some docstrings for the render sources
Affected #:  1 file

diff -r c232346dc43582e154e9909ba643ba30edaafa14 -r cddccd4c0884c42dcaecdcc0e31baee8f3ef6820 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -30,8 +30,12 @@
 
 class RenderSource(ParallelAnalysisInterface):
 
-    """Base Class for Render Sources. Will be inherited for volumes,
-       streamlines, etc"""
+    """
+
+    Base Class for Render Sources. Will be inherited for volumes,
+    streamlines, etc.
+
+    """
 
     def __init__(self):
         super(RenderSource, self).__init__()
@@ -46,7 +50,12 @@
 
 
 class OpaqueSource(RenderSource):
-    """docstring for OpaqueSource"""
+    """
+
+    A base class for opaque render sources. Will be inherited from
+    for LineSources, BoxSources, etc.
+
+    """
     def __init__(self):
         super(OpaqueSource, self).__init__()
         self.opaque = True
@@ -65,7 +74,14 @@
 
 class VolumeSource(RenderSource):
 
-    """docstring for VolumeSource"""
+    """
+
+    A VolumeSource is a class for rendering data from
+    an arbitrary volumetric data source, e.g. a sphere,
+    cylinder, or the entire computational domain.
+
+
+    """
     _image = None
     data_source = None
 


https://bitbucket.org/yt_analysis/yt/commits/1f7f5a8cb7e1/
Changeset:   1f7f5a8cb7e1
Branch:      yt
User:        atmyers
Date:        2015-06-30 22:23:48+00:00
Summary:     remove references to a look_at
Affected #:  1 file

diff -r cddccd4c0884c42dcaecdcc0e31baee8f3ef6820 -r 1f7f5a8cb7e119f4f112344de85f5c8f72a9773a yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -22,7 +22,7 @@
 
 class Camera(Orientation):
 
-    r"""    
+    r"""
 
     The Camera class. A Camera represents of point of view into a
     Scene. It is defined by a position (the location of the camera
@@ -403,7 +403,7 @@
         theta : float, in radians
             Angle (in radians) by which to rotate the view.
         n_steps : int
-            The number of look_at snapshots to make.
+            The number of snapshots to make.
         rot_vector  : array_like, optional
             Specify the rotation vector around which rotation will
             occur.  Defaults to None, which sets rotation around the
@@ -422,7 +422,7 @@
             yield i
 
     def move_to(self, final, n_steps, exponential=False):
-        r"""Loop over a look_at
+        r"""Loop over a movement, creating a zoom or pan.
 
         This will yield `n_steps` until the current view has been
         moved to a final center of `final`.
@@ -432,7 +432,7 @@
         final : YTArray
             The final center to move to after `n_steps`
         n_steps : int
-            The number of look_at snapshots to make.
+            The number of snapshots to make.
         exponential : boolean
             Specifies whether the move/zoom transition follows an
             exponential path toward the destination or linear


https://bitbucket.org/yt_analysis/yt/commits/e81694926d5e/
Changeset:   e81694926d5e
Branch:      yt
User:        atmyers
Date:        2015-07-01 02:53:28+00:00
Summary:     changing the input format for the LineSource, as Cameron suggested
Affected #:  1 file

diff -r 1f7f5a8cb7e119f4f112344de85f5c8f72a9773a -r e81694926d5ecd40400bafe986bc54628aa3ae22 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -314,9 +314,12 @@
 
         Parameters
         ----------
-        positions: array, shape (N, 3)
-            These positions, in data-space coordinates, are the points to be
-            connected with lines.
+        positions: array, shape (N, 2, 3)
+            These positions, in data-space coordinates, are the starting and
+            stopping points for each pair of lines. For example,
+            positions[0][0] and positions[0][1] would give the (x, y, z)
+            coordinates of the beginning and end points of the first line,
+            respectively.
         colors : array, shape (N, 4), optional
             The colors of the points, including an alpha channel, in floating
             point running from 0..1.  Note that they correspond to the line
@@ -333,7 +336,12 @@
         """
 
         super(LineSource, self).__init__()
-        self.positions = positions
+
+        assert(positions.shape[1] == 2)
+        assert(positions.shape[2] == 3)
+        N = positions.shape[0]
+        self.positions = positions.reshape((2*N, 3))
+
         # If colors aren't individually set, make black with full opacity
         if colors is None:
             colors = np.ones((len(positions), 4))
@@ -398,6 +406,8 @@
         vertices = np.empty([24, 3])
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
+        vertices = vertices.reshape((12, 2, 3))
+
         super(BoxSource, self).__init__(vertices, color, color_stride=24)
 
 
@@ -469,6 +479,7 @@
         vertices = np.empty([corners.shape[2]*2*12, 3])
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
+        vertices = vertices.reshape((12, 2, 3))
 
         super(GridsSource, self).__init__(vertices, colors, color_stride=24)
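
A sketch of the new LineSource input layout, assuming a Scene sc already
exists: positions[i][0] is the start of segment i and positions[i][1] is its
end, so the array has shape (N, 2, 3), with one RGBA row per segment.

    import numpy as np
    from yt.visualization.volume_rendering.api import LineSource

    # Three segments fanning out from one corner of the domain.
    positions = np.array([[[0.1, 0.1, 0.1], [0.9, 0.1, 0.1]],
                          [[0.1, 0.1, 0.1], [0.1, 0.9, 0.1]],
                          [[0.1, 0.1, 0.1], [0.1, 0.1, 0.9]]])
    colors = np.ones((3, 4))
    colors[:, 3] = 0.8                # per-segment alpha

    sc.add_source(LineSource(positions, colors=colors))
    im = sc.render()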
 


https://bitbucket.org/yt_analysis/yt/commits/6df440103bb0/
Changeset:   6df440103bb0
Branch:      yt
User:        atmyers
Date:        2015-07-01 02:54:03+00:00
Summary:     removing the project_to_plane method of camera, that goes in the lenses
Affected #:  1 file

diff -r e81694926d5ecd40400bafe986bc54628aa3ae22 -r 6df440103bb0ea7631607ab359d54f5bec2d8a71 yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -502,17 +502,6 @@
             self.zoom(f)
             yield i
 
-    def project_to_plane(self, pos, res=None):
-        if res is None:
-            res = self.resolution
-        dx = np.dot(pos - self.position.d, self.unit_vectors[1])
-        dy = np.dot(pos - self.position.d, self.unit_vectors[0])
-        dz = np.dot(pos - self.position.d, self.unit_vectors[2])
-        # Transpose into image coords.
-        py = (res[0]/2 + res[0]*(dx/self.width[0].d)).astype('int')
-        px = (res[1]/2 + res[1]*(dy/self.width[1].d)).astype('int')
-        return px, py, dz
-
     def __repr__(self):
         disp = ("<Camera Object>:\n\tposition:%s\n\tfocus:%s\n\t" +
                 "north_vector:%s\n\twidth:%s\n\tlight:%s\n\tresolution:%s\n") \


https://bitbucket.org/yt_analysis/yt/commits/7de1ae02dc6d/
Changeset:   7de1ae02dc6d
Branch:      yt
User:        atmyers
Date:        2015-07-01 03:01:07+00:00
Summary:     merging
Affected #:  5 files

diff -r 6df440103bb0ea7631607ab359d54f5bec2d8a71 -r 7de1ae02dc6deaea143bf4bcbec1f4eba932f77d doc/source/cookbook/camera_movement.py
--- a/doc/source/cookbook/camera_movement.py
+++ b/doc/source/cookbook/camera_movement.py
@@ -13,16 +13,16 @@
 
 frame = 0
 # Move to the maximum density location over 5 frames
-for _ in cam.move_to(max_c, 5):
+for _ in cam.iter_move(max_c, 5):
     sc.render('camera_movement_%04i.png' % frame, clip_ratio=8.0)
     frame += 1
 
 # Zoom in by a factor of 10 over 5 frames
-for _ in cam.zoomin(10.0, 5):
+for _ in cam.iter_zoom(10.0, 5):
     sc.render('camera_movement_%04i.png' % frame, clip_ratio=8.0)
     frame += 1
 
 # Do a rotation over 5 frames
-for _ in cam.rotation(np.pi, 5):
+for _ in cam.iter_rotate(np.pi, 5):
     sc.render('camera_movement_%04i.png' % frame, clip_ratio=8.0)
     frame += 1

diff -r 6df440103bb0ea7631607ab359d54f5bec2d8a71 -r 7de1ae02dc6deaea143bf4bcbec1f4eba932f77d doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -380,8 +380,15 @@
 .. image:: _images/vr_sample.jpg
    :width: 512
 
+Parallelism
+-----------
+
+yt can utilize both MPI and OpenMP parallelism for volume rendering.  Both, and
+their combination, are described below.
+
 MPI Parallelization
--------------------
++++++++++++++++++++
+
 Currently the volume renderer is parallelized using MPI to decompose the volume
 by attempting to split up the
 :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` in a balanced way.  This
@@ -414,7 +421,7 @@
 For more information about enabling parallelism, see :ref:`parallel-computation`.
 
 OpenMP Parallelization
-----------------------
+++++++++++++++++++++++
 
 The volume rendering also parallelized using the OpenMP interface in Cython.
 While the MPI parallelization is done using domain decomposition, the OpenMP
@@ -430,7 +437,7 @@
 by default by modifying the environment variable OMP_NUM_THREADS. 
 
 Running in Hybrid MPI + OpenMP
-------------------------------
+++++++++++++++++++++++++++++++
 
 The two methods for volume rendering parallelization can be used together to
 leverage large supercomputing resources.  When choosing how to balance the

diff -r 6df440103bb0ea7631607ab359d54f5bec2d8a71 -r 7de1ae02dc6deaea143bf4bcbec1f4eba932f77d yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -392,7 +392,7 @@
         """
         self.rotate(theta, rot_vector=self.unit_vectors[2])
 
-    def rotation(self, theta, n_steps, rot_vector=None):
+    def iter_rotate(self, theta, n_steps, rot_vector=None):
         r"""Loop over rotate, creating a rotation
 
         This will rotate `n_steps` until the current view has been
@@ -412,7 +412,7 @@
         Examples
         --------
 
-        >>> for i in cam.rotation(np.pi, 10):
+        >>> for i in cam.iter_rotate(np.pi, 10):
         ...     im = sc.render("rotation_%04i.png" % i)
         """
 
@@ -421,8 +421,8 @@
             self.rotate(dtheta, rot_vector=rot_vector)
             yield i
 
-    def move_to(self, final, n_steps, exponential=False):
-        r"""Loop over a movement, creating a zoom or pan.
+    def iter_move(self, final, n_steps, exponential=False):
+        r"""Loop over an iter_move and return snapshots along the way.
 
         This will yield `n_steps` until the current view has been
         moved to a final center of `final`.
@@ -440,7 +440,7 @@
         Examples
         --------
 
-        >>> for i in cam.move_to([0.2,0.3,0.6], 10):
+        >>> for i in cam.iter_move([0.2,0.3,0.6], 10):
         ...     sc.render("move_%04i.png" % i)
         """
         assert isinstance(final, YTArray)
@@ -477,8 +477,8 @@
         """
         self.set_width(self.width / factor)
 
-    def zoomin(self, final, n_steps):
-        r"""Loop over a zoomin and return snapshots along the way.
+    def iter_zoom(self, final, n_steps):
+        r"""Loop over a iter_zoom and return snapshots along the way.
 
         This will yield `n_steps` snapshots until the current view has been
         zooming in to a final factor of `final`.
@@ -494,7 +494,7 @@
         Examples
         --------
 
-        >>> for i in cam.zoomin(100.0, 10):
+        >>> for i in cam.iter_zoom(100.0, 10):
         ...     sc.render("zoom_%04i.png" % i)
         """
         f = final**(1.0/n_steps)

diff -r 6df440103bb0ea7631607ab359d54f5bec2d8a71 -r 7de1ae02dc6deaea143bf4bcbec1f4eba932f77d yt/visualization/volume_rendering/tests/test_vr_cameras.py
--- a/yt/visualization/volume_rendering/tests/test_vr_cameras.py
+++ b/yt/visualization/volume_rendering/tests/test_vr_cameras.py
@@ -141,24 +141,24 @@
         cam = ds.camera(self.c, self.L, self.W, self.N, transfer_function=tf,
                         log_fields=[False], north_vector=[0., 0., 1.0])
         cam.zoom(0.5)
-        for snap in cam.zoomin(2.0, 3):
+        for snap in cam.iter_zoom(2.0, 3):
             snap
-        for snap in cam.move_to(np.array(self.c) + 0.1, 3,
+        for snap in cam.iter_move(np.array(self.c) + 0.1, 3,
                                 final_width=None, exponential=False):
             snap
-        for snap in cam.move_to(np.array(self.c) - 0.1, 3,
+        for snap in cam.iter_move(np.array(self.c) - 0.1, 3,
                                 final_width=2.0*self.W, exponential=False):
             snap
-        for snap in cam.move_to(np.array(self.c), 3,
+        for snap in cam.iter_move(np.array(self.c), 3,
                                 final_width=1.0*self.W, exponential=True):
             snap
         cam.rotate(np.pi/10)
         cam.pitch(np.pi/10)
         cam.yaw(np.pi/10)
         cam.roll(np.pi/10)
-        for snap in cam.rotation(np.pi, 3, rot_vector=None):
+        for snap in cam.iter_rotate(np.pi, 3, rot_vector=None):
             snap
-        for snap in cam.rotation(np.pi, 3, rot_vector=np.random.random(3)):
+        for snap in cam.iter_rotate(np.pi, 3, rot_vector=np.random.random(3)):
             snap
         cam.snapshot('final.png')
         assert_fname('final.png')
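
The renamed iterators in one place, in the style of the updated
camera_movement.py recipe above; sc and cam are assumed to exist, and the
rotation axis here is just an example.

    import numpy as np

    # A simple fly-through: one full turn around the z-axis, then a 4x zoom.
    frame = 0
    for _ in cam.iter_rotate(2.0 * np.pi, 36, rot_vector=np.array([0., 0., 1.])):
        sc.render('flythrough_%04i.png' % frame, clip_ratio=8.0)
        frame += 1
    for _ in cam.iter_zoom(4.0, 12):
        sc.render('flythrough_%04i.png' % frame, clip_ratio=8.0)
        frame += 1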


https://bitbucket.org/yt_analysis/yt/commits/1559f6c83e50/
Changeset:   1559f6c83e50
Branch:      yt
User:        atmyers
Date:        2015-07-01 03:03:42+00:00
Summary:     add a short explanatory note
Affected #:  1 file

diff -r 7de1ae02dc6deaea143bf4bcbec1f4eba932f77d -r 1559f6c83e506fd8846eee59032d3939545befb0 yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -339,6 +339,8 @@
 
         assert(positions.shape[1] == 2)
         assert(positions.shape[2] == 3)
+
+        # convert the positions to the shape expected by zlines, below
         N = positions.shape[0]
         self.positions = positions.reshape((2*N, 3))
 


https://bitbucket.org/yt_analysis/yt/commits/afdaceaa6c06/
Changeset:   afdaceaa6c06
Branch:      yt
User:        samskillman
Date:        2015-07-12 23:59:31+00:00
Summary:     Merged in atmyers/yt (pull request #18)

Making some changes suggested by Cameron to the new VR code
Affected #:  4 files

diff -r 54610678a9b4afb3dfd345e0e89e00d45cc0445c -r afdaceaa6c06e0409e41802e480f113c9c377c8a yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -22,7 +22,17 @@
 
 class Camera(Orientation):
 
-    r"""    """
+    r"""
+
+    The Camera class. A Camera represents of point of view into a
+    Scene. It is defined by a position (the location of the camera
+    in the simulation domain,), a focus (the point at which the
+    camera is pointed), a width (the width of the snapshot that will
+    be taken, a resolution (the number of pixels in the image), and
+    a north_vector (the "up" direction in the resulting image). A
+    camera can use a variety of different Lens objects.
+
+    """
 
     _moved = True
     _width = None
@@ -79,7 +89,8 @@
         self.lens.setup_box_properties(self)
 
     def position():
-        doc = "The position property."
+        doc = '''The position is the location of the camera in
+               the coordinate system of the simulation.'''
 
         def fget(self):
             return self._position
@@ -94,7 +105,7 @@
     position = property(**position())
 
     def width():
-        doc = "The width property."
+        doc = '''The width of the image that will be produced. '''
 
         def fget(self):
             return self._width
@@ -110,7 +121,7 @@
     width = property(**width())
 
     def focus():
-        doc = "The focus property."
+        doc = '''The focus defines the point the Camera is pointed at. '''
 
         def fget(self):
             return self._focus
@@ -125,7 +136,8 @@
     focus = property(**focus())
 
     def resolution():
-        doc = "The resolution property."
+        doc = '''The resolution is the number of pixels in the image that
+               will be produced. '''
 
         def fget(self):
             return self._resolution
@@ -149,6 +161,23 @@
         return lens_params
 
     def set_lens(self, lens_type):
+        r'''
+
+        Set the lens to be used with this camera. 
+
+        Parameters
+        ----------
+
+        lens_type : string
+            Must be one of the following:
+            'plane-parallel'
+            'perspective'
+            'stereo-perspective'
+            'fisheye'
+            'spherical'
+            'stereo-spherical'
+
+        '''
         if lens_type not in lenses:
             mylog.error("Lens type not available")
             raise RuntimeError()
@@ -186,7 +215,17 @@
         self._moved = True
 
     def set_width(self, width):
-        """This must have been created using ds.arr"""
+        r"""
+
+        Set the width of the image that will be produced by this camera.
+        This must be a YTQuantity.
+
+        Parameters
+        ----------
+
+        width : :class:`yt.units.yt_array.YTQuantity`
+
+        """
         assert isinstance(width, YTArray), 'Width must be created with ds.arr'
         if isinstance(width, YTArray):
             width = width.in_units('code_length')
@@ -197,9 +236,24 @@
         self.switch_orientation()
 
     def set_position(self, position, north_vector=None):
-          self.position = position
-          self.switch_orientation(normal_vector=self.focus - self.position,
-                                  north_vector=north_vector)
+        r"""
+
+        Set the position of the camera.
+
+        Parameters
+        ----------
+
+        position : array_like
+            The new position
+        north_vector : array_like, optional
+            The 'up' direction for the plane of rays.  If not specified,
+            calculated automatically.
+
+        """
+
+        self.position = position
+        self.switch_orientation(normal_vector=self.focus - self.position,
+                                north_vector=north_vector)
 
     def switch_orientation(self, normal_vector=None, north_vector=None):
         r"""
@@ -264,6 +318,7 @@
         --------
 
         >>> cam.rotate(np.pi/4)
+
         """
         rotate_all = rot_vector is not None
         if rot_vector is None:
@@ -348,7 +403,7 @@
         theta : float, in radians
             Angle (in radians) by which to rotate the view.
         n_steps : int
-            The number of look_at snapshots to make.
+            The number of snapshots to make.
         rot_vector  : array_like, optional
             Specify the rotation vector around which rotation will
             occur.  Defaults to None, which sets rotation around the
@@ -367,7 +422,7 @@
             yield i
 
     def iter_move(self, final, n_steps, exponential=False):
-        r"""Loop over a look_at
+        r"""Loop over a camera move and return snapshots along the way.
 
         This will yield `n_steps` until the current view has been
         moved to a final center of `final`.
@@ -377,7 +432,7 @@
         final : YTArray
             The final center to move to after `n_steps`
         n_steps : int
-            The number of look_at snapshots to make.
+            The number of snapshots to make.
         exponential : boolean
             Specifies whether the move/zoom transition follows an
             exponential path toward the destination or linear
@@ -447,17 +502,6 @@
             self.zoom(f)
             yield i
 
-    def project_to_plane(self, pos, res=None):
-        if res is None:
-            res = self.resolution
-        dx = np.dot(pos - self.position.d, self.unit_vectors[1])
-        dy = np.dot(pos - self.position.d, self.unit_vectors[0])
-        dz = np.dot(pos - self.position.d, self.unit_vectors[2])
-        # Transpose into image coords.
-        py = (res[0]/2 + res[0]*(dx/self.width[0].d)).astype('int')
-        px = (res[1]/2 + res[1]*(dy/self.width[1].d)).astype('int')
-        return px, py, dz
-
     def __repr__(self):
         disp = ("<Camera Object>:\n\tposition:%s\n\tfocus:%s\n\t" +
                 "north_vector:%s\n\twidth:%s\n\tlight:%s\n\tresolution:%s\n") \

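Taken together, the docstrings added above describe how a Camera is configured: a position, a focus, a width, a resolution, and a lens chosen from the six listed strings. A hedged sketch of the pattern, mirroring how off_axis_projection.py (later in this digest) builds its camera; the dataset, sphere, and numeric values are illustrative, and the import paths are those of this changeset:

    import numpy as np
    import yt
    from yt.visualization.volume_rendering.scene import Scene
    from yt.visualization.volume_rendering.camera import Camera

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
    sphere = ds.sphere("c", (100.0, "kpc"))               # data source to render

    sc = Scene()
    cam = Camera(sphere)
    cam.set_lens("plane-parallel")          # any of the six documented lens strings
    cam.set_width(ds.arr(200.0, "kpc"))     # width must be created with ds.arr
    cam.switch_orientation(normal_vector=np.array([1.0, 1.0, 0.0]),
                           north_vector=np.array([0.0, 0.0, 1.0]))
    cam.focus = ds.domain_center
    sc.camera = cam
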
diff -r 54610678a9b4afb3dfd345e0e89e00d45cc0445c -r afdaceaa6c06e0409e41802e480f113c9c377c8a yt/visualization/volume_rendering/lens.py
--- a/yt/visualization/volume_rendering/lens.py
+++ b/yt/visualization/volume_rendering/lens.py
@@ -27,7 +27,13 @@
 
 class Lens(ParallelAnalysisInterface):
 
-    """docstring for Lens"""
+    """
+
+    A base class for setting up Lens objects. A Lens,
+    along with a Camera, is used to define the set of
+    rays that will be used for rendering.
+
+    """
 
     def __init__(self, ):
         super(Lens, self).__init__()
@@ -75,7 +81,12 @@
 
 class PlaneParallelLens(Lens):
 
-    """docstring for PlaneParallelLens"""
+    r'''
+
+    This lens type is the standard type used for orthographic projections. 
+    All rays emerge parallel to each other, arranged along a plane.
+
+    '''
 
     def __init__(self, ):
         super(PlaneParallelLens, self).__init__()
@@ -125,7 +136,12 @@
 
 class PerspectiveLens(Lens):
 
-    """docstring for PerspectiveLens"""
+    r'''
+
+    This lens type adjusts for an opening view angle, so that the scene will 
+    have an element of perspective to it.
+
+    '''
 
     def __init__(self):
         super(PerspectiveLens, self).__init__()
@@ -239,6 +255,7 @@
             (self.viewpoint)
         return disp
 
+
 class StereoPerspectiveLens(Lens):
 
     """docstring for StereoPerspectiveLens"""
@@ -409,9 +426,18 @@
             (self.viewpoint)
         return disp
 
+
 class FisheyeLens(Lens):
 
-    """docstring for FisheyeLens"""
+    r"""
+
+    This lens type accepts a field-of-view property, fov, that describes how wide 
+    an angle the fisheye can see. Fisheye images are typically used for dome-based 
+    presentations; the Hayden planetarium for instance has a field of view of 194.6 degrees. 
+    The images returned by this camera will be flat pixel images that can and should 
+    be reshaped to the resolution.    
+
+    """
 
     def __init__(self):
         super(FisheyeLens, self).__init__()
@@ -461,7 +487,7 @@
 
     def set_viewpoint(self, camera):
         """
-        For a PerspectiveLens, the viewpoint is the front center.
+        For a FisheyeLens, the viewpoint is the front center.
         """
         self.viewpoint = camera.position
 
@@ -502,7 +528,12 @@
 
 class SphericalLens(Lens):
 
-    """docstring for SphericalLens"""
+    r"""
+
+    This is a cylindrical-spherical projection. Movies rendered in this way 
+    can be displayed in head-tracking devices or in YouTube 360 view.
+    
+    """
 
     def __init__(self):
         super(SphericalLens, self).__init__()
@@ -595,6 +626,7 @@
         py = (u * np.rint(py)).astype("int64")
         return px, py, dz
 
+
 class StereoSphericalLens(Lens):
 
     """docstring for StereoSphericalLens"""
@@ -682,4 +714,4 @@
           'stereo-perspective': StereoPerspectiveLens,
           'fisheye': FisheyeLens,
           'spherical': SphericalLens,
-          'stereo-spherical':StereoSphericalLens}
+          'stereo-spherical': StereoSphericalLens}

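The registry at the bottom of lens.py maps the string names accepted by Camera.set_lens() to the Lens subclasses documented above. A small hedged illustration; the fov value comes straight from the FisheyeLens docstring:

    from yt.visualization.volume_rendering.lens import lenses, FisheyeLens

    lens = lenses["fisheye"]()   # the class Camera.set_lens("fisheye") would select
    assert isinstance(lens, FisheyeLens)
    lens.fov = 194.6             # dome field of view, in degrees, per the docstring
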
diff -r 54610678a9b4afb3dfd345e0e89e00d45cc0445c -r afdaceaa6c06e0409e41802e480f113c9c377c8a yt/visualization/volume_rendering/render_source.py
--- a/yt/visualization/volume_rendering/render_source.py
+++ b/yt/visualization/volume_rendering/render_source.py
@@ -30,8 +30,12 @@
 
 class RenderSource(ParallelAnalysisInterface):
 
-    """Base Class for Render Sources. Will be inherited for volumes,
-       streamlines, etc"""
+    """
+
+    Base Class for Render Sources. Will be inherited for volumes,
+    streamlines, etc.
+
+    """
 
     def __init__(self):
         super(RenderSource, self).__init__()
@@ -46,7 +50,12 @@
 
 
 class OpaqueSource(RenderSource):
-    """docstring for OpaqueSource"""
+    """
+
+    A base class for opaque render sources. Will be inherited by
+    LineSources, BoxSources, etc.
+
+    """
     def __init__(self):
         super(OpaqueSource, self).__init__()
         self.opaque = True
@@ -65,7 +74,14 @@
 
 class VolumeSource(RenderSource):
 
-    """docstring for VolumeSource"""
+    """
+
+    A VolumeSource is a class for rendering data from
+    an arbitrary volumetric data source, e.g. a sphere,
+    cylinder, or the entire computational domain.
+
+
+    """
     _image = None
     data_source = None
 
@@ -298,9 +314,12 @@
 
         Parameters
         ----------
-        positions: array, shape (N, 3)
-            These positions, in data-space coordinates, are the points to be
-            connected with lines.
+        positions: array, shape (N, 2, 3)
+            These positions, in data-space coordinates, are the starting and
+            stopping points for each line. For example,
+            positions[0][0] and positions[0][1] would give the (x, y, z)
+            coordinates of the beginning and end points of the first line,
+            respectively.
         colors : array, shape (N, 4), optional
             The colors of the points, including an alpha channel, in floating
             point running from 0..1.  Note that they correspond to the line
@@ -317,7 +336,14 @@
         """
 
         super(LineSource, self).__init__()
-        self.positions = positions
+
+        assert(positions.shape[1] == 2)
+        assert(positions.shape[2] == 3)
+
+        # convert the positions to the shape expected by zlines, below
+        N = positions.shape[0]
+        self.positions = positions.reshape((2*N, 3))
+
         # If colors aren't individually set, make black with full opacity
         if colors is None:
             colors = np.ones((len(positions), 4))
@@ -382,6 +408,8 @@
         vertices = np.empty([24, 3])
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
+        vertices = vertices.reshape((12, 2, 3))
+
         super(BoxSource, self).__init__(vertices, color, color_stride=24)
 
 
@@ -453,6 +481,7 @@
         vertices = np.empty([corners.shape[2]*2*12, 3])
         for i in range(3):
             vertices[:, i] = corners[order, i, ...].ravel(order='F')
+        vertices = vertices.reshape((12, 2, 3))
 
         super(GridSource, self).__init__(vertices, colors, color_stride=24)
 

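With the (N, 2, 3) positions convention documented above, constructing a LineSource looks roughly like the following hedged sketch; the endpoints (in data-space coordinates) and RGBA colors are illustrative:

    import numpy as np
    from yt.visualization.volume_rendering.render_source import LineSource

    # Each row is one line: [start_point, end_point] in (x, y, z).
    positions = np.array([[[0.25, 0.25, 0.25], [0.75, 0.75, 0.75]],
                          [[0.25, 0.75, 0.25], [0.75, 0.25, 0.75]]])
    colors = np.array([[1.0, 0.0, 0.0, 1.0],   # first line: opaque red
                       [0.0, 0.0, 1.0, 1.0]])  # second line: opaque blue
    lines = LineSource(positions, colors)
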
diff -r 54610678a9b4afb3dfd345e0e89e00d45cc0445c -r afdaceaa6c06e0409e41802e480f113c9c377c8a yt/visualization/volume_rendering/scene.py
--- a/yt/visualization/volume_rendering/scene.py
+++ b/yt/visualization/volume_rendering/scene.py
@@ -22,29 +22,6 @@
 from .zbuffer_array import ZBuffer
 
 
-class SceneHandle(object):
-    """docstring for SceneHandle"""
-    def __init__(self, scene, camera, source, lens):
-        mylog.debug("Entering %s" % str(self))
-        self.scene = scene
-        self.camera = camera
-        self.source = source
-        self.lens = lens
-
-    def __repr__(self):
-        desc = super(SceneHandle, self).__repr__()
-        desc += str(self)
-        return desc
-
-    def __str__(self):
-        desc = "Scene Handler\n"
-        desc += ".scene: " + self.scene.__repr__() + "\n"
-        desc += ".camera: " + self.camera.__repr__() + "\n"
-        desc += ".source: " + self.source.__repr__() + "\n"
-        desc += ".lens: " + self.lens.__repr__() + "\n"
-        return desc
-
-
 class Scene(object):
 
     """The Scene Class
@@ -218,24 +195,38 @@
         return locals()
     camera = property(**camera())
 
-    # Are these useful?
     def set_camera(self, camera):
+        r"""
+
+        Set the camera to be used by this scene.
+
+        """
         self.camera = camera
 
     def get_camera(self, camera):
+        r"""
+
+        Get the camera currently used by this scene.
+
+        """
         return self.camera
 
-    def get_handle(self, key=None):
-        """docstring for get_handle"""
+    def annotate_domain(self, ds, color=None):
+        r"""
 
-        if key is None:
-            key = self.sources.keys()[0]
-        handle = SceneHandle(self, self.camera, self.sources[key],
-                             self.sources[key].lens)
-        return handle
+        Modifies this scene by drawing the edges of the computational domain.
+        This adds a new BoxSource to the scene corresponding to the domain
+        boundaries and returns the modified scene object.
 
-    def annotate_domain(self, ds, color=None):
-        """docstring for annotate_domain"""
+        Parameters
+        ----------
+
+        ds : :class:`yt.data_objects.api.Dataset`
+            This is the dataset object corresponding to the
+            simulation being rendered. Used to get the domain bounds.
+
+
+        """
         box_source = BoxSource(ds.domain_left_edge,
                                ds.domain_right_edge,
                                color=None)
@@ -250,7 +241,20 @@
         return self
 
     def annotate_axes(self, colors=None, alpha=1.0):
-        """docstring for annotate_axes"""
+        r"""
+
+        Modifies this scene by drawing the coordinate axes.
+        This adds a new CoordinateVectorSource to the scene
+        and returns the modified scene object.
+
+        Parameters
+        ----------
+        colors: array-like, shape (3,4), optional
+            The x, y, z RGBA values to use to draw the axes.
+        alpha : float, optional
+            The opacity of the vectors.
+
+        """
         coords = CoordinateVectorSource(colors, alpha)
         self.add_source(coords)
         return self

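Both annotation helpers documented above add a new source and return the modified scene, so they can be called back to back on an existing Scene. A hedged usage sketch (the dataset is illustrative):

    import yt
    from yt.visualization.volume_rendering.scene import Scene

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
    sc = Scene()

    sc = sc.annotate_domain(ds)       # BoxSource along the domain edges
    sc = sc.annotate_axes(alpha=0.5)  # CoordinateVectorSource for the x, y, z axes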

https://bitbucket.org/yt_analysis/yt/commits/3bc8fda7710a/
Changeset:   3bc8fda7710a
Branch:      yt
User:        jisuoqing
Date:        2015-06-30 23:56:05+00:00
Summary:     Fixed the orientation of off-axis plot
Affected #:  1 file

diff -r b597f0783b1e4104896f741d84717cd2b7d88458 -r 3bc8fda7710ac6ac447792f20a8c7822ed2fe504 yt/visualization/volume_rendering/off_axis_projection.py
--- a/yt/visualization/volume_rendering/off_axis_projection.py
+++ b/yt/visualization/volume_rendering/off_axis_projection.py
@@ -134,6 +134,8 @@
     else:
         vol.set_fields([item, weight])
     camera = Camera(data_source)
+    camera.normal_vector = normal_vector
+    camera.north_vector = north_vector
     camera.set_width(width)
     camera.focus = center
     sc.camera = camera


https://bitbucket.org/yt_analysis/yt/commits/ab5e33f432b0/
Changeset:   ab5e33f432b0
Branch:      yt
User:        jisuoqing
Date:        2015-07-01 03:07:51+00:00
Summary:     Fixed the colorbar scale and weight field in off-axis projection
Affected #:  1 file

diff -r 3bc8fda7710ac6ac447792f20a8c7822ed2fe504 -r ab5e33f432b0ffd768b58d8dd0933ec4f2336a6e yt/visualization/volume_rendering/off_axis_projection.py
--- a/yt/visualization/volume_rendering/off_axis_projection.py
+++ b/yt/visualization/volume_rendering/off_axis_projection.py
@@ -132,7 +132,22 @@
     if weight is None:
         vol.set_fields([item])
     else:
-        vol.set_fields([item, weight])
+        # This is a temporary field, which we will remove at the end.
+        weightfield = ("index", "temp_weightfield")
+        def _make_wf(f, w):
+            def temp_weightfield(a, b):
+                tr = b[f].astype("float64") * b[w]
+                return b.apply_units(tr, a.units)
+                return tr
+            return temp_weightfield
+        data_source.ds.field_info.add_field(weightfield,
+            function=_make_wf(item, weight))
+        # Now we have to tell the dataset to add it and to calculate
+        # its dependencies..
+        deps, _ = data_source.ds.field_info.check_derived_fields([weightfield])
+        data_source.ds.field_dependencies.update(deps)
+        fields = [weightfield, weight]
+        vol.set_fields(fields)
     camera = Camera(data_source)
     camera.normal_vector = normal_vector
     camera.north_vector = north_vector
@@ -187,7 +202,7 @@
 
     if method == "integrate":
         if weight is None:
-            dl = width[2]
+            dl = width[2].in_units("cm")
             image *= dl
         else:
             image[:,:,0] /= image[:,:,1]

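The weighted projection above works by registering a temporary derived field built from a closure, so one factory can bind any (field, weight) pair. A hedged standalone sketch of the same pattern using ds.add_field, as the cookbook scripts later in this digest do; the field choice and units are illustrative:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset

    def make_weighted_field(field, weight):
        # Factory returning a derived-field function that computes field * weight,
        # the same closure trick used for ("index", "temp_weightfield") above.
        def _weighted(fieldobj, data):
            tr = data[field].astype("float64") * data[weight]
            return data.apply_units(tr, fieldobj.units)
        return _weighted

    ds.add_field(("gas", "density_times_temperature"),
                 function=make_weighted_field("density", "temperature"),
                 units="g*K/cm**3")

    ad = ds.all_data()
    print(ad["gas", "density_times_temperature"].max())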

https://bitbucket.org/yt_analysis/yt/commits/f35352ec81fe/
Changeset:   f35352ec81fe
Branch:      yt
User:        jisuoqing
Date:        2015-07-01 03:23:53+00:00
Summary:     A better way to change the orientation
Affected #:  1 file

diff -r ab5e33f432b0ffd768b58d8dd0933ec4f2336a6e -r f35352ec81feb1f6b7fe1e03109ffb8fac8a5b75 yt/visualization/volume_rendering/off_axis_projection.py
--- a/yt/visualization/volume_rendering/off_axis_projection.py
+++ b/yt/visualization/volume_rendering/off_axis_projection.py
@@ -149,9 +149,9 @@
         fields = [weightfield, weight]
         vol.set_fields(fields)
     camera = Camera(data_source)
-    camera.normal_vector = normal_vector
-    camera.north_vector = north_vector
     camera.set_width(width)
+    camera.switch_orientation(normal_vector=normal_vector,
+                              north_vector=north_vector)
     camera.focus = center
     sc.camera = camera
     sc.add_source(vol)


https://bitbucket.org/yt_analysis/yt/commits/94c8b783f90f/
Changeset:   94c8b783f90f
Branch:      yt
User:        samskillman
Date:        2015-07-13 00:04:58+00:00
Summary:     Merged in jisuoqing/yt_sam (pull request #19)

A few bugs fixed in off-axis projection
Affected #:  1 file

diff -r afdaceaa6c06e0409e41802e480f113c9c377c8a -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 yt/visualization/volume_rendering/off_axis_projection.py
--- a/yt/visualization/volume_rendering/off_axis_projection.py
+++ b/yt/visualization/volume_rendering/off_axis_projection.py
@@ -132,9 +132,26 @@
     if weight is None:
         vol.set_fields([item])
     else:
-        vol.set_fields([item, weight])
+        # This is a temporary field, which we will remove at the end.
+        weightfield = ("index", "temp_weightfield")
+        def _make_wf(f, w):
+            def temp_weightfield(a, b):
+                tr = b[f].astype("float64") * b[w]
+                return b.apply_units(tr, a.units)
+                return tr
+            return temp_weightfield
+        data_source.ds.field_info.add_field(weightfield,
+            function=_make_wf(item, weight))
+        # Now we have to tell the dataset to add it and to calculate
+        # its dependencies..
+        deps, _ = data_source.ds.field_info.check_derived_fields([weightfield])
+        data_source.ds.field_dependencies.update(deps)
+        fields = [weightfield, weight]
+        vol.set_fields(fields)
     camera = Camera(data_source)
     camera.set_width(width)
+    camera.switch_orientation(normal_vector=normal_vector,
+                              north_vector=north_vector)
     camera.focus = center
     sc.camera = camera
     sc.add_source(vol)
@@ -185,7 +202,7 @@
 
     if method == "integrate":
         if weight is None:
-            dl = width[2]
+            dl = width[2].in_units("cm")
             image *= dl
         else:
             image[:,:,0] /= image[:,:,1]


https://bitbucket.org/yt_analysis/yt/commits/7709f96fe34a/
Changeset:   7709f96fe34a
Branch:      yt
User:        MatthewTurk
Date:        2015-07-16 16:42:40+00:00
Summary:     Merging
Affected #:  58 files

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/README
--- a/doc/README
+++ b/doc/README
@@ -7,4 +7,4 @@
 Because the documentation requires a number of dependencies, we provide
 pre-built versions online, accessible here:
 
-http://yt-project.org/docs/dev-3.0/
+http://yt-project.org/docs/dev/

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/analyzing/analysis_modules/SZ_projections.ipynb
--- a/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
+++ b/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:2cc168b2c1737c67647aa29892c0213e7a58233fa53c809f9cd975a4306e9bc8"
+  "signature": "sha256:487383ec23a092310522ec25bd02ad2eb16a3402c5ed3d2b103d33fe17697b3c"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -70,6 +70,13 @@
      ]
     },
     {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "<font color='red'>**NOTE**</font>: Currently, use of the SZpack library to create S-Z projections in yt is limited to Python 2.x."
+     ]
+    },
+    {
      "cell_type": "heading",
      "level": 2,
      "metadata": {},

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -10,6 +10,10 @@
 simulated X-ray photon lists of events from datasets that yt is able
 to read. The simulated events then can be exported to X-ray telescope
 simulators to produce realistic observations or can be analyzed in-line.
+
+For detailed information about the design of the algorithm in yt, check 
+out `the SciPy 2014 Proceedings. <http://conference.scipy.org/proceedings/scipy2014/zuhone.html>`_.
+
 The algorithm is based off of that implemented in
 `PHOX <http://www.mpa-garching.mpg.de/~kdolag/Phox/>`_ for SPH datasets
 by Veronica Biffi and Klaus Dolag. There are two relevant papers:
@@ -139,6 +143,12 @@
 the optional keyword ``thermal_broad`` is set to ``True``, the spectral
 lines will be thermally broadened.
 
+.. note:: 
+
+   ``SpectralModel`` objects based on XSPEC models (both the thermal 
+   emission and Galactic absorption models mentioned below) only work 
+   in Python 2.7, since currently PyXspec only works with Python 2.x. 
+   
 Now that we have our ``SpectralModel`` that gives us a spectrum, we need
 to connect this model to a ``PhotonModel`` that will connect the field
 data in the ``data_source`` to the spectral model to actually generate
@@ -148,7 +158,8 @@
 .. code:: python
 
     thermal_model = ThermalPhotonModel(apec_model, X_H=0.75, Zmet=0.3,
-                                       photons_per_chunk=100000000)
+                                       photons_per_chunk=100000000,
+                                       method="invert_cdf")
 
 Where we pass in the ``SpectralModel``, and can optionally set values for
 the hydrogen mass fraction ``X_H`` and metallicity ``Z_met``. If
@@ -165,6 +176,18 @@
 this parameter needs to be set higher, or if you are looking to decrease memory
 usage, you might set this parameter lower.
 
+The ``method`` keyword argument is also optional, and determines how the individual
+photon energies are generated from the spectrum. It may be set to one of two values:
+
+* ``method="invert_cdf"``: Construct the cumulative distribution function of the spectrum and invert
+  it, using uniformly drawn random numbers to determine the photon energies (fast, but relies
+  on construction of the CDF and interpolation between the points, so for some spectra it
+  may not be accurate enough). 
+* ``method="accept_reject"``: Generate the photon energies from the spectrum using an acceptance-rejection
+  technique (accurate, but likely to be slow). 
+
+``method="invert_cdf"`` (the default) should be sufficient for most cases. 
+
 Next, we need to specify "fiducial" values for the telescope collecting
 area, exposure time, and cosmological redshift. Remember, the initial
 photon generation will act as a source for Monte-Carlo sampling for more
@@ -191,12 +214,29 @@
 By default, the angular diameter distance to the object is determined
 from the ``cosmology`` and the cosmological ``redshift``. If a
 ``Cosmology`` instance is not provided, one will be made from the
-default cosmological parameters. If your source is local to the galaxy,
-you can set its distance directly, using a tuple, e.g.
-``dist=(30, "kpc")``. In this case, the ``redshift`` and ``cosmology``
-will be ignored. Finally, if the photon generating function accepts any
-parameters, they can be passed to ``from_scratch`` via a ``parameters``
-dictionary.
+default cosmological parameters. The ``center`` keyword argument specifies
+the center of the photon distribution, and the photon positions will be 
+rescaled with this value as the origin. This argument accepts the following
+values:
+
+* A NumPy array or list corresponding to the coordinates of the center in
+  units of code length. 
+* A ``YTArray`` corresponding to the coordinates of the center in some
+  length units. 
+* ``"center"`` or ``"c"`` corresponds to the domain center. 
+* ``"max"`` or ``"m"`` corresponds to the location of the maximum gas density. 
+* A two-element tuple specifying the max or min of a specific field, e.g.,
+  ``("min","gravitational_potential")``, ``("max","dark_matter_density")``
+
+If ``center`` is not specified, ``from_scratch`` will attempt to use the 
+``"center"`` field parameter of the ``data_source``. 
+
+``from_scratch`` takes a few other optional keyword arguments. If your 
+source is local to the galaxy, you can set its distance directly, using 
+a tuple, e.g. ``dist=(30, "kpc")``. In this case, the ``redshift`` and 
+``cosmology`` will be ignored. Finally, if the photon generating 
+function accepts any parameters, they can be passed to ``from_scratch`` 
+via a ``parameters`` dictionary.
 
 At this point, the ``photons`` are distributed in the three-dimensional
 space of the ``data_source``, with energies in the rest frame of the
@@ -265,7 +305,7 @@
     abs_model = TableAbsorbModel("tbabs_table.h5", 0.1)
 
 Now we're ready to project the photons. First, we choose a line-of-sight
-vector ``L``. Second, we'll adjust the exposure time and the redshift.
+vector ``normal``. Second, we'll adjust the exposure time and the redshift.
 Third, we'll pass in the absorption ``SpectrumModel``. Fourth, we'll
 specify a ``sky_center`` in RA,DEC on the sky in degrees.
 
@@ -274,26 +314,40 @@
 course far short of a full simulation of a telescope ray-trace, but it's
 a quick-and-dirty way to get something close to the real thing. We'll
 discuss how to get your simulated events into a format suitable for
-reading by telescope simulation codes later.
+reading by telescope simulation codes later. If you just want to convolve 
+the photons with an ARF, you may specify that as the only response, but some
+ARFs are unnormalized and still require the RMF for normalization. Check with
+the documentation associated with these files for details. If we are using the
+RMF to convolve energies, we must set ``convolve_energies=True``. 
 
 .. code:: python
 
     ARF = "chandra_ACIS-S3_onaxis_arf.fits"
     RMF = "chandra_ACIS-S3_onaxis_rmf.fits"
-    L = [0.0,0.0,1.0]
-    events = photons.project_photons(L, exp_time_new=2.0e5, redshift_new=0.07, absorb_model=abs_model,
-                                     sky_center=(187.5,12.333), responses=[ARF,RMF])
+    normal = [0.0,0.0,1.0]
+    events = photons.project_photons(normal, exp_time_new=2.0e5, redshift_new=0.07, dist_new=None, 
+                                     absorb_model=abs_model, sky_center=(187.5,12.333), responses=[ARF,RMF], 
+                                     convolve_energies=True, no_shifting=False, north_vector=None,
+                                     psf_sigma=None)
 
-Also, the optional keyword ``psf_sigma`` specifies a Gaussian standard
-deviation to scatter the photon sky positions around with, providing a
-crude representation of a PSF.
+In this case, we chose a three-vector ``normal`` to specify an arbitrary 
+line-of-sight, but ``"x"``, ``"y"``, or ``"z"`` could also be chosen to 
+project along one of those axes. 
 
-.. warning::
+``project_photons`` takes several other optional keyword arguments. 
 
-   The binned images that result, even if you convolve with responses,
-   are still of the same resolution as the finest cell size of the
-   simulation dataset. If you want a more accurate simulation of a
-   particular X-ray telescope, you should check out `Storing events for future use and for reading-in by telescope simulators`_.
+* ``no_shifting`` (default ``False``) controls whether or not Doppler 
+  shifting of photon energies is turned on. 
+* ``dist_new`` is a (value, unit) tuple that is used to set a new
+  angular diameter distance by hand instead of having it determined
+  by the cosmology and the value of the redshift. Should only be used
+  for simulations of nearby objects. 
+* For off-axis ``normal`` vectors,  the ``north_vector`` argument can 
+  be used to control what vector corresponds to the "up" direction in 
+  the resulting event list. 
+* ``psf_sigma`` may be specified to provide a crude representation of 
+  a PSF, and corresponds to the standard deviation (in degrees) of a 
+  Gaussian PSF model. 
 
 Let's just take a quick look at the raw events object:
 
@@ -343,19 +397,27 @@
 
 Which is starting to look like a real observation!
 
+.. warning::
+
+   The binned images that result, even if you convolve with responses,
+   are still of the same resolution as the finest cell size of the
+   simulation dataset. If you want a more accurate simulation of a
+   particular X-ray telescope, you should check out `Storing events for future use and for reading-in by telescope simulators`_.
+
 We can also bin up the spectrum into energy bins, and write it to a FITS
 table file. This is an example where we've binned up the spectrum
 according to the unconvolved photon energy:
 
 .. code:: python
 
-    events.write_spectrum("virgo_spec.fits", energy_bins=True, emin=0.1, emax=10.0, nchan=2000, clobber=True)
+    events.write_spectrum("virgo_spec.fits", bin_type="energy", emin=0.1, emax=10.0, nchan=2000, clobber=True)
 
-If we don't set ``energy_bins=True``, and we have convolved our events
+We can also set ``bin_type="channel"``. If we have convolved our events
 with response files, then any other keywords will be ignored and it will
 try to make a spectrum from the channel information that is contained
-within the RMF, suitable for analyzing in XSPEC. For now, we'll stick
-with the energy spectrum, and plot it up:
+within the RMF. Otherwise, the channels will be determined from the ``emin``, 
+``emax``, and ``nchan`` keywords, and will be numbered from 1 to ``nchan``. 
+For now, we'll stick with the energy spectrum, and plot it up:
 
 .. code:: python
 

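The new ``method`` documentation above contrasts CDF inversion with acceptance-rejection sampling for drawing photon energies. A minimal NumPy sketch of the inversion idea only, independent of the photon_simulator internals; the power-law spectrum is made up:

    import numpy as np

    rng = np.random.RandomState(42)
    energies = np.linspace(0.1, 10.0, 2000)        # keV, illustrative spectral grid
    spectrum = energies ** -1.5                    # illustrative power-law spectrum

    cdf = np.cumsum(spectrum)
    cdf /= cdf[-1]                                 # normalize to a cumulative distribution
    u = rng.uniform(size=100000)                   # uniform random draws
    photon_energies = np.interp(u, cdf, energies)  # invert the CDF by interpolation
    print(photon_energies.mean())
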
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/analyzing/fields.rst
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -174,7 +174,7 @@
 
 Field plugins can be loaded dynamically, although at present this is not
 particularly useful.  Plans for extending field plugins to dynamically load, to
-enable simple definition of common types (gradient, divergence, etc), and to
+enable simple definition of common types (divergence, curl, etc), and to
 more verbosely describe available fields, have been put in place for future
 versions.
 

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/cookbook/fit_spectrum.py
--- a/doc/source/cookbook/fit_spectrum.py
+++ b/doc/source/cookbook/fit_spectrum.py
@@ -10,10 +10,10 @@
 def _OVI_number_density(field, data):
     return data['H_number_density']*2.0
 
-# Define a function that will accept a ds and add the new field 
+# Define a function that will accept a ds and add the new field
 # defined above.  This will be given to the LightRay below.
 def setup_ds(ds):
-    ds.add_field("O_p5_number_density", 
+    ds.add_field(("gas","O_p5_number_density"),
                  function=_OVI_number_density,
                  units="cm**-3")
 
@@ -62,7 +62,7 @@
 
 # Get all fields that need to be added to the light ray
 fields = ['temperature']
-for s, params in species_dicts.iteritems():
+for s, params in species_dicts.items():
     fields.append(params['field'])
 
 # Make a light ray, and set njobs to -1 to use one core
@@ -79,7 +79,7 @@
 sp = AbsorptionSpectrum(900.0, 1400.0, 50000)
 
 # Iterate over species
-for s, params in species_dicts.iteritems():
+for s, params in species_dicts.items():
     # Iterate over transitions for a single species
     for i in range(params['numLines']):
         # Add the lines to the spectrum

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ /dev/null
@@ -1,105 +0,0 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
-import numpy as np
-import yt
-# Need to grab the proton mass from the constants database
-from yt.utilities.physical_constants import mp
-
-exit()
-# Define the emission field
-
-keVtoerg = 1.602e-9  # Convert energy in keV to energy in erg
-KtokeV = 8.617e-08  # Convert degrees Kelvin to degrees keV
-sqrt3 = np.sqrt(3.)
-expgamma = 1.78107241799  # Exponential of Euler's constant
-
-
-def _FreeFree_Emission(field, data):
-
-    if data.has_field_parameter("Z"):
-        Z = data.get_field_parameter("Z")
-    else:
-        Z = 1.077  # Primordial H/He plasma
-
-    if data.has_field_parameter("mue"):
-        mue = data.get_field_parameter("mue")
-    else:
-        mue = 1./0.875  # Primordial H/He plasma
-
-    if data.has_field_parameter("mui"):
-        mui = data.get_field_parameter("mui")
-    else:
-        mui = 1./0.8125  # Primordial H/He plasma
-
-    if data.has_field_parameter("Ephoton"):
-        Ephoton = data.get_field_parameter("Ephoton")
-    else:
-        Ephoton = 1.0  # in keV
-
-    if data.has_field_parameter("photon_emission"):
-        photon_emission = data.get_field_parameter("photon_emission")
-    else:
-        photon_emission = False  # Flag for energy or photon emission
-
-    n_e = data["density"]/(mue*mp)
-    n_i = data["density"]/(mui*mp)
-    kT = data["temperature"]*KtokeV
-
-    # Compute the Gaunt factor
-
-    g_ff = np.zeros(kT.shape)
-    g_ff[Ephoton/kT > 1.] = np.sqrt((3./np.pi)*kT[Ephoton/kT > 1.]/Ephoton)
-    g_ff[Ephoton/kT < 1.] = (sqrt3/np.pi)*np.log((4./expgamma) *
-                                                 kT[Ephoton/kT < 1.]/Ephoton)
-
-    eps_E = 1.64e-20*Z*Z*n_e*n_i/np.sqrt(data["temperature"]) * \
-        np.exp(-Ephoton/kT)*g_ff
-
-    if photon_emission:
-        eps_E /= (Ephoton*keVtoerg)
-
-    return eps_E
-
-yt.add_field("FreeFree_Emission", function=_FreeFree_Emission)
-
-# Define the luminosity derived quantity
-def _FreeFreeLuminosity(data):
-    return (data["FreeFree_Emission"]*data["cell_volume"]).sum()
-
-
-def _combFreeFreeLuminosity(data, luminosity):
-    return luminosity.sum()
-
-yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
-                combine_function=_combFreeFreeLuminosity, n_ret=1)
-
-ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
-
-sphere = ds.sphere(ds.domain_center, (100., "kpc"))
-
-# Print out the total luminosity at 1 keV for the sphere
-
-print("L_E (1 keV, primordial) = ", sphere.quantities["FreeFree_Luminosity"]())
-
-# The defaults for the field assume a H/He primordial plasma.
-# Let's set the appropriate parameters for a pure hydrogen plasma.
-
-sphere.set_field_parameter("mue", 1.0)
-sphere.set_field_parameter("mui", 1.0)
-sphere.set_field_parameter("Z", 1.0)
-
-print("L_E (1 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]())
-
-# Now let's print the luminosity at an energy of E = 10 keV
-
-sphere.set_field_parameter("Ephoton", 10.0)
-
-print("L_E (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]())
-
-# Finally, let's set the flag for photon emission, to get the total number
-# of photons emitted at this energy:
-
-sphere.set_field_parameter("photon_emission", True)
-
-print("L_ph (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]())

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/cookbook/simulation_analysis.py
--- a/doc/source/cookbook/simulation_analysis.py
+++ b/doc/source/cookbook/simulation_analysis.py
@@ -2,11 +2,11 @@
 yt.enable_parallelism()
 import collections
 
-# Enable parallelism in the script (assuming it was called with 
+# Enable parallelism in the script (assuming it was called with
 # `mpirun -np <n_procs>` )
 yt.enable_parallelism()
 
-# By using wildcards such as ? and * with the load command, we can load up a 
+# By using wildcards such as ? and * with the load command, we can load up a
 # Time Series containing all of these datasets simultaneously.
 ts = yt.load('enzo_tiny_cosmology/DD????/DD????')
 
@@ -16,7 +16,7 @@
 # Create an empty dictionary
 data = {}
 
-# Iterate through each dataset in the Time Series (using piter allows it 
+# Iterate through each dataset in the Time Series (using piter allows it
 # to happen in parallel automatically across available processors)
 for ds in ts.piter():
     ad = ds.all_data()
@@ -31,6 +31,6 @@
 # Print out all the values we calculated.
 print("Dataset      Redshift        Density Min      Density Max")
 print("---------------------------------------------------------")
-for key, val in od.iteritems(): 
+for key, val in od.items(): 
     print("%s       %05.3f          %5.3g g/cm^3   %5.3g g/cm^3" % \
            (key, val[1], val[0][0], val[0][1]))

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/cookbook/time_series.py
--- a/doc/source/cookbook/time_series.py
+++ b/doc/source/cookbook/time_series.py
@@ -12,7 +12,7 @@
 
 storage = {}
 
-# By using the piter() function, we can iterate on every dataset in 
+# By using the piter() function, we can iterate on every dataset in
 # the TimeSeries object.  By using the storage keyword, we can populate
 # a dictionary where the dataset is the key, and sto.result is the value
 # for later use when the loop is complete.
@@ -25,13 +25,13 @@
     sphere = ds.sphere("c", (100., "kpc"))
     # Calculate the entropy within that sphere
     entr = sphere["entropy"].sum()
-    # Store the current time and sphere entropy for this dataset in our 
+    # Store the current time and sphere entropy for this dataset in our
     # storage dictionary as a tuple
     store.result = (ds.current_time.in_units('Gyr'), entr)
 
 # Convert the storage dictionary values to a Nx2 array, so the can be easily
 # plotted
-arr = np.array(storage.values())
+arr = np.array(list(storage.values()))
 
 # Plot up the results: time versus entropy
 plt.semilogy(arr[:,0], arr[:,1], 'r-')

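The ``list(storage.values())`` change above is the recurring Python 3 fix in this merge: dictionary views are no longer plain lists, so they are materialized before being handed to NumPy. A tiny illustration with made-up values:

    import numpy as np

    storage = {"DD0000": (0.5, 1.2e-28), "DD0001": (0.6, 3.4e-28)}

    # On Python 3, np.array(storage.values()) wraps the dict view in a 0-d
    # object array; converting to a list first gives the intended Nx2 array.
    arr = np.array(list(storage.values()))
    print(arr.shape)  # (2, 2)
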
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -104,7 +104,11 @@
 -----------
 
 Athena 4.x VTK data is *mostly* supported and cared for by John
-ZuHone. Both uniform grid and SMR datasets are supported.
+ZuHone. Both uniform grid and SMR datasets are supported. 
+
+.. note:: 
+   yt also recognizes Fargo3D data written to VTK files as 
+   Athena data, but support for Fargo3D data is preliminary. 
 
 Loading Athena datasets is slightly different depending on whether
 your dataset came from a serial or a parallel run. If the data came

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -39,6 +39,28 @@
   have the the necessary compilers installed (e.g. the ``build-essentials``
   package on debian and ubuntu).
 
+.. _branches-of-yt:
+
+Branches of yt: ``yt``, ``stable``, and ``yt-2.x``
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Before you install yt, you must decide which branch (i.e. version) of the code 
+you prefer to use:
+
+* ``yt`` -- The most up-to-date *development* version with the most current features but sometimes unstable (yt-3.x)
+* ``stable`` -- The latest stable release of yt-3.x
+* ``yt-2.x`` -- The latest stable release of yt-2.x
+
+If this is your first time using the code, we recommend using ``stable``, 
+unless you specifically need some piece of brand-new functionality only 
+available in ``yt`` or need to run an old script developed for ``yt-2.x``.
+There were major API and functionality changes made in yt after version 2.7
+in moving to version 3.0.  For a detailed description of the changes
+between versions 2.x (e.g. branch ``yt-2.x``) and 3.x (e.g. branches ``yt`` and 
+``stable``) see :ref:`yt3differences`.  Lastly, don't feel like you're locked 
+into one branch when you install yt, because you can easily change the active
+branch by following the instructions in :ref:`switching-between-yt-versions`.
+
 .. _install-script:
 
 All-in-One Installation Script
@@ -60,16 +82,22 @@
 its dependencies will be removed from your system (no scattered files remaining
 throughout your system).
 
+.. _installing-yt:
+
 Running the Install Script
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To get the installation script, download it from:
+To get the installation script for the ``stable`` branch of the code, 
+download it from:
 
 .. code-block:: bash
 
   wget http://bitbucket.org/yt_analysis/yt/raw/stable/doc/install_script.sh
 
-.. _installing-yt:
+If you wish to install a different version of yt (see 
+:ref:`above <branches-of-yt>`), replace ``stable`` with the appropriate 
+branch name (e.g. ``yt``, ``yt-2.x``) in the path above to get the correct 
+install script.
 
 By default, the bash install script will install an array of items, but there
 are additional packages that can be downloaded and installed (e.g. SciPy, enzo,
@@ -329,8 +357,8 @@
 
 .. _switching-between-yt-versions:
 
-Switching between yt-2.x and yt-3.x
------------------------------------
+Switching versions of yt: yt-2.x, yt-3.x, stable, and dev
+---------------------------------------------------------
 
 With the release of version 3.0 of yt, development of the legacy yt 2.x series
 has been relegated to bugfixes.  That said, we will continue supporting the 2.x
@@ -356,12 +384,8 @@
   hg update <desired-version>
   python setup.py develop
 
-Valid versions to jump to are:
+Valid versions to jump to are described in :ref:`branches-of-yt`.
 
-* ``yt`` -- The latest *dev* changes in yt-3.x (can be unstable)
-* ``stable`` -- The latest stable release of yt-3.x
-* ``yt-2.x`` -- The latest stable release of yt-2.x
-    
 You can check which version of yt you have installed by invoking ``yt version``
 at the command line.  If you encounter problems, see :ref:`update-errors`.
 
@@ -387,11 +411,7 @@
   hg update <desired-version>
   python setup.py install --user --prefix=
 
-Valid versions to jump to are:
-
-* ``yt`` -- The latest *dev* changes in yt-3.x (can be unstable)
-* ``stable`` -- The latest stable release of yt-3.x
-* ``yt-2.x`` -- The latest stable release of yt-2.x
+Valid versions to jump to are described in :ref:`branches-of-yt`.
     
 You can check which version of yt you have installed by invoking ``yt version``
 at the command line.  If you encounter problems, see :ref:`update-errors`.

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -227,8 +227,6 @@
    ~yt.frontends.chombo.data_structures.Orion2Hierarchy
    ~yt.frontends.chombo.data_structures.Orion2Dataset
    ~yt.frontends.chombo.io.IOHandlerChomboHDF5
-   ~yt.frontends.chombo.io.IOHandlerChombo2DHDF5
-   ~yt.frontends.chombo.io.IOHandlerChombo1DHDF5
    ~yt.frontends.chombo.io.IOHandlerOrion2HDF5
 
 Enzo

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 doc/source/visualizing/callbacks.rst
--- a/doc/source/visualizing/callbacks.rst
+++ b/doc/source/visualizing/callbacks.rst
@@ -483,7 +483,7 @@
    import yt
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    s = yt.SlicePlot(ds, 'z', 'density', center='max', width=(10, 'kpc'))
-   s.annotate_text((2, 2), coord_system='plot', 'Galaxy!')
+   s.annotate_text((2, 2), 'Galaxy!', coord_system='plot')
    s.save()
 
 .. _annotate-title:

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,19 +49,18 @@
     REASON_FILES.append((dir_name, files))
 
 # Verify that we have Cython installed
+REQ_CYTHON = '0.22'
 try:
     import Cython
-    if version.LooseVersion(Cython.__version__) < version.LooseVersion('0.16'):
-        needs_cython = True
-    else:
-        needs_cython = False
+    needs_cython = \
+        version.LooseVersion(Cython.__version__) < version.LooseVersion(REQ_CYTHON)
 except ImportError as e:
     needs_cython = True
 
 if needs_cython:
     print("Cython is a build-time requirement for the source tree of yt.")
     print("Please either install yt from a provided, release tarball,")
-    print("or install Cython (version 0.16 or higher).")
+    print("or install Cython (version %s or higher)." % REQ_CYTHON)
     print("You may be able to accomplish this by typing:")
     print("     pip install -U Cython")
     sys.exit(1)

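The setup.py change above centralizes the minimum Cython version in REQ_CYTHON and compares against it with distutils' LooseVersion. A hedged sketch of that comparison in isolation; the installed version string is made up:

    from distutils import version

    REQ_CYTHON = '0.22'
    installed = '0.21.2'  # illustrative installed Cython version

    needs_cython = \
        version.LooseVersion(installed) < version.LooseVersion(REQ_CYTHON)
    print(needs_cython)   # True: the install would abort with the message above
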
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/absorption_spectrum/absorption_line.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_line.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_line.py
@@ -18,8 +18,16 @@
     charge_proton_cgs, \
     mass_electron_cgs, \
     speed_of_light_cgs
+from yt.utilities.on_demand_imports import _scipy, NotAModule
 
-def voigt(a,u):
+special = _scipy.special
+
+def voigt_scipy(a, u):
+    x = np.asarray(u).astype(np.float64)
+    y = np.asarray(a).astype(np.float64)
+    return special.wofz(x + 1j * y).real
+
+def voigt_old(a, u):
     """
     NAME:
         VOIGT 
@@ -209,3 +217,8 @@
     tauphi = (tau0 * phi).in_units("")               # profile scaled with tau0
 
     return (lambda_bins, tauphi)
+
+if isinstance(special, NotAModule):
+    voigt = voigt_old
+else:
+    voigt = voigt_scipy

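The fallback above keeps the legacy Voigt implementation available when SciPy is absent and otherwise uses scipy.special.wofz. A hedged check that the two agree, mirroring the test added later in this changeset (requires SciPy):

    import numpy as np
    from yt.analysis_modules.absorption_spectrum.absorption_line import \
        voigt_old, voigt_scipy

    a = 1.7e-4
    u = np.linspace(-3.6, 5.0, 60)
    # The new test_voigt_profiles asserts agreement to within 1e-8.
    print(np.max(np.abs(voigt_old(a, u) - voigt_scipy(a, u))))
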
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -274,8 +274,8 @@
                                                     'b_thermal': thermal_b[lixel],
                                                     'redshift': field_data['redshift'][lixel],
                                                     'v_pec': peculiar_velocity})
-                    pbar.update(i)
-                pbar.finish()
+                pbar.update(i)
+            pbar.finish()
 
             del column_density, delta_lambda, thermal_b, \
                 center_bins, width_ratio, left_index, right_index

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum_fit.py
@@ -1011,7 +1011,7 @@
 
     """
     f = h5py.File(file_name, 'w')
-    for ion, params in lineDic.iteritems():
+    for ion, params in lineDic.items():
         f.create_dataset("{0}/N".format(ion),data=params['N'])
         f.create_dataset("{0}/b".format(ion),data=params['b'])
         f.create_dataset("{0}/z".format(ion),data=params['z'])

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
@@ -10,8 +10,11 @@
 # The full license is in the file COPYING.txt, distributed with this software.
 #-----------------------------------------------------------------------------
 
-import yt
-from yt.testing import *
+import numpy as np
+from yt.testing import \
+    assert_allclose, requires_file, requires_module
+from yt.analysis_modules.absorption_spectrum.absorption_line import \
+    voigt_old, voigt_scipy
 from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum
 from yt.analysis_modules.cosmological_observation.api import LightRay
 import tempfile
@@ -20,6 +23,7 @@
 
 COSMO_PLUS = "enzo_cosmology_plus/AMRCosmology.enzo"
 
+
 @requires_file(COSMO_PLUS)
 def test_absorption_spectrum():
     """
@@ -44,22 +48,24 @@
 
     my_label = 'HI Lya'
     field = 'H_number_density'
-    wavelength = 1215.6700 # Angstromss
+    wavelength = 1215.6700  # Angstromss
     f_value = 4.164E-01
     gamma = 6.265e+08
     mass = 1.00794
 
-    sp.add_line(my_label, field, wavelength, f_value, gamma, mass, label_threshold=1.e10)
+    sp.add_line(my_label, field, wavelength, f_value,
+                gamma, mass, label_threshold=1.e10)
 
     my_label = 'HI Lya'
     field = 'H_number_density'
-    wavelength = 912.323660 # Angstroms
+    wavelength = 912.323660  # Angstroms
     normalization = 1.6e17
     index = 3.0
 
     sp.add_continuum(my_label, field, wavelength, normalization, index)
 
-    wavelength, flux = sp.make_spectrum('lightray.h5', output_file='spectrum.txt',
+    wavelength, flux = sp.make_spectrum('lightray.h5',
+                                        output_file='spectrum.txt',
                                         line_list_file='lines.txt',
                                         use_peculiar_velocity=True)
 
@@ -93,25 +99,34 @@
 
     my_label = 'HI Lya'
     field = 'H_number_density'
-    wavelength = 1215.6700 # Angstromss
+    wavelength = 1215.6700  # Angstromss
     f_value = 4.164E-01
     gamma = 6.265e+08
     mass = 1.00794
 
-    sp.add_line(my_label, field, wavelength, f_value, gamma, mass, label_threshold=1.e10)
+    sp.add_line(my_label, field, wavelength, f_value,
+                gamma, mass, label_threshold=1.e10)
 
     my_label = 'HI Lya'
     field = 'H_number_density'
-    wavelength = 912.323660 # Angstroms
+    wavelength = 912.323660  # Angstroms
     normalization = 1.6e17
     index = 3.0
 
     sp.add_continuum(my_label, field, wavelength, normalization, index)
 
-    wavelength, flux = sp.make_spectrum('lightray.h5', output_file='spectrum.fits',
+    wavelength, flux = sp.make_spectrum('lightray.h5',
+                                        output_file='spectrum.fits',
                                         line_list_file='lines.txt',
                                         use_peculiar_velocity=True)
 
     # clean up
     os.chdir(curdir)
     shutil.rmtree(tmpdir)
+
+
+@requires_module("scipy")
+def test_voigt_profiles():
+    a = 1.7e-4
+    x = np.linspace(5.0, -3.6, 60)
+    yield assert_allclose, voigt_old(a, x), voigt_scipy(a, x), 1e-8

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
+++ b/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
@@ -343,7 +343,7 @@
             del output["object"]
 
         # Combine results from each slice.
-        all_slices = all_storage.keys()
+        all_slices = list(all_storage.keys())
         all_slices.sort()
         for my_slice in all_slices:
             if save_slice_images:

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/halo_analysis/halo_callbacks.py
--- a/yt/analysis_modules/halo_analysis/halo_callbacks.py
+++ b/yt/analysis_modules/halo_analysis/halo_callbacks.py
@@ -400,7 +400,7 @@
         The field used as the overdensity from which interpolation is done to 
         calculate virial quantities.
         Default: ("gas", "overdensity")
-    critical_density : float
+    critical_overdensity : float
         The value of the overdensity at which to evaulate the virial quantities.  
         Overdensity is with respect to the critical density.
         Default: 200

diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/halo_finding/halo_objects.py
--- a/yt/analysis_modules/halo_finding/halo_objects.py
+++ b/yt/analysis_modules/halo_finding/halo_objects.py
@@ -22,6 +22,7 @@
 import glob
 import os
 import os.path as path
+from functools import cmp_to_key
 from collections import defaultdict
 from yt.extern.six import add_metaclass
 from yt.extern.six.moves import zip as izip
@@ -39,7 +40,7 @@
     TINY
 from yt.utilities.physical_ratios import \
      rho_crit_g_cm3_h2
-    
+
 from .hop.EnzoHop import RunHOP
 from .fof.EnzoFOF import RunFOF
 
@@ -138,9 +139,9 @@
         c[2] = self["particle_position_z"] - self.ds.domain_left_edge[2]
         com = []
         for i in range(3):
-            # A halo is likely periodic around a boundary if the distance 
+            # A halo is likely periodic around a boundary if the distance
             # between the max and min particle
-            # positions are larger than half the box. 
+            # positions are larger than half the box.
             # So skip the rest if the converse is true.
             # Note we might make a change here when periodicity-handling is
             # fully implemented.
@@ -444,7 +445,7 @@
         Msun2g = mass_sun_cgs
         rho_crit = rho_crit * ((1.0 + z) ** 3.0)
         # Get some pertinent information about the halo.
-        self.mass_bins = self.ds.arr(np.zeros(self.bin_count + 1, 
+        self.mass_bins = self.ds.arr(np.zeros(self.bin_count + 1,
                                               dtype='float64'),'Msun')
         dist = np.empty(thissize, dtype='float64')
         cen = self.center_of_mass()
@@ -475,7 +476,7 @@
         self.overdensity = self.mass_bins * Msun2g / \
             (4./3. * math.pi * rho_crit * \
             (self.radial_bins )**3.0)
-        
+
     def _get_ellipsoid_parameters_basic(self):
         np.seterr(all='ignore')
         # check if there are 4 particles to form an ellipsoid
@@ -501,7 +502,7 @@
         for axis in range(np.size(DW)):
             cases = np.array([position[axis],
                                 position[axis] + DW[axis],
-                              position[axis] - DW[axis]])        
+                              position[axis] - DW[axis]])
             # pick out the smallest absolute distance from com
             position[axis] = np.choose(np.abs(cases).argmin(axis=0), cases)
         # find the furthest particle's index
@@ -571,7 +572,7 @@
     _name = "RockstarHalo"
     # See particle_mask
     _radjust = 4.
-    
+
     def maximum_density(self):
         r"""Not implemented."""
         return -1
@@ -635,11 +636,11 @@
     def get_ellipsoid_parameters(self):
         r"""Calculate the parameters that describe the ellipsoid of
         the particles that constitute the halo.
-        
+
         Parameters
         ----------
         None
-        
+
         Returns
         -------
         tuple : (cm, mag_A, mag_B, mag_C, e0_vector, tilt)
@@ -650,7 +651,7 @@
               #. mag_C as a float.
               #. e0_vector as an array.
               #. tilt as a float.
-        
+
         Examples
         --------
         >>> params = halos[0].get_ellipsoid_parameters()
@@ -662,22 +663,22 @@
             basic_parameters[4], basic_parameters[5]]), basic_parameters[6]]
         toreturn.extend(updated)
         return tuple(toreturn)
-    
+
     def get_ellipsoid(self):
         r"""Returns an ellipsoidal data object.
-        
+
         This will generate a new, empty ellipsoidal data object for this
         halo.
-        
+
         Parameters
         ----------
         None.
-        
+
         Returns
         -------
         ellipsoid : `yt.data_objects.data_containers.YTEllipsoidBase`
             The ellipsoidal data object.
-        
+
         Examples
         --------
         >>> ell = halos[0].get_ellipsoid()
@@ -686,7 +687,7 @@
         ell = self.data.ds.ellipsoid(ep[0], ep[1], ep[2], ep[3],
             ep[4], ep[5])
         return ell
-    
+
 class HOPHalo(Halo):
     _name = "HOPHalo"
     pass
@@ -763,14 +764,14 @@
             self.size, key)
         if field_data is not None:
             if key == 'particle_index':
-                #this is an index for turning data sorted by particle index 
+                #this is an index for turning data sorted by particle index
                 #into the same order as the fields on disk
                 self._pid_sort = field_data.argsort().argsort()
             #convert to YTArray using the data from disk
             if key == 'particle_mass':
                 field_data = self.ds.arr(field_data, 'Msun')
             else:
-                field_data = self.ds.arr(field_data, 
+                field_data = self.ds.arr(field_data,
                     self.ds._get_field_info('unknown',key).units)
             self._saved_fields[key] = field_data
             return self._saved_fields[key]
@@ -856,21 +857,21 @@
             basic_parameters[4], basic_parameters[5]]), basic_parameters[6]]
         toreturn.extend(updated)
         return tuple(toreturn)
-    
+
     def get_ellipsoid(self):
-        r"""Returns an ellipsoidal data object.        
+        r"""Returns an ellipsoidal data object.
         This will generate a new, empty ellipsoidal data object for this
         halo.
-        
+
         Parameters
         ----------
         None.
-        
+
         Returns
         -------
         ellipsoid : `yt.data_objects.data_containers.YTEllipsoidBase`
             The ellipsoidal data object.
-        
+
         Examples
         --------
         >>> ell = halos[0].get_ellipsoid()
@@ -947,11 +948,11 @@
     def maximum_density(self):
         r"""Undefined for text halos."""
         return -1
-    
+
     def maximum_density_location(self):
         r"""Undefined, default to CoM"""
         return self.center_of_mass()
-    
+
     def get_size(self):
         # Have to just get it from the sphere.
         return self["particle_position_x"].size
@@ -964,8 +965,8 @@
     def __init__(self, data_source, dm_only=True, redshift=-1):
         """
         Run hop on *data_source* with a given density *threshold*.  If
-        *dm_only* is True (default), only run it on the dark matter particles, 
-        otherwise on all particles.  Returns an iterable collection of 
+        *dm_only* is True (default), only run it on the dark matter particles,
+        otherwise on all particles.  Returns an iterable collection of
         *HopGroup* items.
         """
         self._data_source = data_source
@@ -1051,7 +1052,7 @@
         ellipsoid_data : bool.
             Whether to print the ellipsoidal information to the file.
             Default = False.
-        
+
         Examples
         --------
         >>> halos.write_out("HopAnalysis.out")
@@ -1144,10 +1145,10 @@
     _halo_dt = np.dtype([('id', np.int64), ('pos', (np.float32, 6)),
         ('corevel', (np.float32, 3)), ('bulkvel', (np.float32, 3)),
         ('m', np.float32), ('r', np.float32), ('child_r', np.float32),
-        ('vmax_r', np.float32), 
+        ('vmax_r', np.float32),
         ('mgrav', np.float32), ('vmax', np.float32),
         ('rvmax', np.float32), ('rs', np.float32),
-        ('klypin_rs', np.float32), 
+        ('klypin_rs', np.float32),
         ('vrms', np.float32), ('J', (np.float32, 3)),
         ('energy', np.float32), ('spin', np.float32),
         ('alt_m', (np.float32, 4)), ('Xoff', np.float32),
@@ -1221,9 +1222,9 @@
         """
         Read the out_*.list text file produced
         by Rockstar into memory."""
-        
+
         ds = self.ds
-        # In order to read the binary data, we need to figure out which 
+        # In order to read the binary data, we need to figure out which
         # binary files belong to this output.
         basedir = os.path.dirname(self.out_list)
         s = self.out_list.split('_')[-1]
@@ -1523,12 +1524,14 @@
                 id += 1
 
         def haloCmp(h1, h2):
+            def cmp(a, b):
+                return (a > b) - (a < b)
             c = cmp(h1.total_mass(), h2.total_mass())
             if c != 0:
                 return -1 * c
             if c == 0:
                 return cmp(h1.center_of_mass()[0], h2.center_of_mass()[0])
-        self._groups.sort(haloCmp)
+        self._groups.sort(key=cmp_to_key(haloCmp))
         sorted_max_dens = {}
         for i, halo in enumerate(self._groups):
             if halo.id in self._max_dens:
@@ -1873,7 +1876,7 @@
 
 class LoadTextHaloes(GenericHaloFinder, TextHaloList):
     r"""Load a text file of halos.
-    
+
     Like LoadHaloes, but when all that is available is a plain
     text file. This assumes the text file has the 3-positions of halos
     along with a radius. The halo objects created are spheres.
@@ -1882,7 +1885,7 @@
     ----------
     fname : String
         The name of the text file to read in.
-    
+
     columns : dict
         A dict listing the column name : column number pairs for data
         in the text file. It is zero-based (like Python).
@@ -1890,7 +1893,7 @@
         Any column name outside of ['x', 'y', 'z', 'r'] will be attached
         to each halo object in the supplementary dict 'supp'. See
         example.
-    
+
     comment : String
         If the first character of a line is equal to this, the line is
         skipped. Default = "#".
@@ -1915,7 +1918,7 @@
     Parameters
     ----------
     fname : String
-        The name of the Rockstar file to read in. Default = 
+        The name of the Rockstar file to read in. Default =
         "rockstar_halos/out_0.list'.
 
     Examples

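The haloCmp hunk above is a Python 3 port: list.sort() no longer accepts a bare
comparison function and the cmp() builtin is gone, so the comparator is kept
and wrapped with functools.cmp_to_key. A minimal, self-contained sketch of the
same pattern (the Halo class below is a hypothetical stand-in, not yt's):

    from functools import cmp_to_key

    class Halo(object):
        # Hypothetical stand-in for illustration only.
        def __init__(self, mass, x):
            self.mass = mass
            self.x = x

    def halo_cmp(h1, h2):
        # Python 3 dropped cmp(); (a > b) - (a < b) reproduces it.
        def cmp(a, b):
            return (a > b) - (a < b)
        c = cmp(h1.mass, h2.mass)
        if c != 0:
            return -c           # heaviest halos sort first
        return cmp(h1.x, h2.x)  # break ties on center-of-mass x

    halos = [Halo(1e12, 0.3), Halo(5e12, 0.1), Halo(1e12, 0.2)]
    halos.sort(key=cmp_to_key(halo_cmp))   # pass key=, not a comparator
    print([(h.mass, h.x) for h in halos])
    # heaviest halo first; the equal-mass halos ordered by ascending x
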
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/level_sets/clump_handling.py
--- a/yt/analysis_modules/level_sets/clump_handling.py
+++ b/yt/analysis_modules/level_sets/clump_handling.py
@@ -20,7 +20,8 @@
 from yt.fields.derived_field import \
     ValidateSpatial
 from yt.funcs import mylog
-    
+from yt.extern.six import string_types
+
 from .clump_info_items import \
     clump_info_registry
 from .clump_validators import \
@@ -268,7 +269,7 @@
 
 def write_clump_index(clump, level, fh):
     top = False
-    if not isinstance(fh, file):
+    if isinstance(fh, string_types):
         fh = open(fh, "w")
         top = True
     for q in range(level):
@@ -285,7 +286,7 @@
 
 def write_clumps(clump, level, fh):
     top = False
-    if not isinstance(fh, file):
+    if isinstance(fh, string_types):
         fh = open(fh, "w")
         top = True
     if ((clump.children is None) or (len(clump.children) == 0)):

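The clump_handling change swaps isinstance(fh, file), which only works on
Python 2 where file is a builtin, for isinstance(fh, string_types), so
write_clump_index and write_clumps accept either a filename or an already-open
file object. A small sketch of that accept-a-path-or-a-handle pattern (the
write_report function is made up for illustration):

    try:
        from six import string_types      # mirrors the yt.extern.six import
    except ImportError:
        string_types = (str,)             # plain Python 3 fallback

    def write_report(fh, lines):
        # fh may be a filename or an open file object.
        opened_here = False
        if isinstance(fh, string_types):  # got a path, open it ourselves
            fh = open(fh, "w")
            opened_here = True
        for line in lines:
            fh.write(line + "\n")
        if opened_here:                   # only close what we opened
            fh.close()

    write_report("clumps.txt", ["clump 0: 42 cells", "clump 1: 7 cells"])
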
diff -r 94c8b783f90f30ff0e4ea6e01e73d1b2d323edc6 -r 7709f96fe34aabce6ade8b00bac40e866d218c08 yt/analysis_modules/photon_simulator/photon_models.py
--- a/yt/analysis_modules/photon_simulator/photon_models.py
+++ b/yt/analysis_modules/photon_simulator/photon_models.py
@@ -25,7 +25,7 @@
 from yt.extern.six import string_types
 import numpy as np
 from yt.funcs import *
-from yt.utilities.physical_constants import mp, kboltz
+from yt.utilities.physical_constants import mp
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
      parallel_objects
 from yt.units.yt_array import uconcatenate
@@ -34,6 +34,12 @@
 kT_min = 8.08e-2
 kT_max = 50.
 
+photon_units = {"Energy":"keV",
+                "dx":"kpc"}
+for ax in "xyz":
+    photon_units[ax] = "kpc"
+    photon_units["v"+ax] = "km/s"
+
 class PhotonModel(object):
 
     def __init__(self):
@@ -46,7 +52,7 @@
 class ThermalPhotonModel(PhotonModel):
     r"""
     Initialize a ThermalPhotonModel from a thermal spectrum. 
-    
+
     Parameters
     ----------
 
@@ -61,30 +67,37 @@
     photons_per_chunk : integer
         The maximum number of photons that are allocated per chunk. Increase or decrease
         as needed.
+    method : string, optional
+        The method used to generate the photon energies from the spectrum:
+        "invert_cdf": Invert the cumulative distribution function of the spectrum.
+        "accept_reject": Acceptance-rejection method using the spectrum. 
+        The first method should be sufficient for most cases. 
     """
-    def __init__(self, spectral_model, X_H=0.75, Zmet=0.3, photons_per_chunk=10000000):
+    def __init__(self, spectral_model, X_H=0.75, Zmet=0.3, 
+                 photons_per_chunk=10000000, method="invert_cdf"):
         self.X_H = X_H
         self.Zmet = Zmet
         self.spectral_model = spectral_model
         self.photons_per_chunk = photons_per_chunk
+        self.method = method
 
     def __call__(self, data_source, parameters):
-        
+
         ds = data_source.ds
 
         exp_time = parameters["FiducialExposureTime"]
         area = parameters["FiducialArea"]
         redshift = parameters["FiducialRedshift"]
         D_A = parameters["FiducialAngularDiameterDistance"].in_cgs()
-        dist_fac = 1.0/(4.*np.pi*D_A.value*D_A.value*(1.+redshift)**3)
+        dist_fac = 1.0/(4.*np.pi*D_A.value*D_A.value*(1.+redshift)**2)
         src_ctr = parameters["center"]
 
-        vol_scale = 1.0/np.prod(ds.domain_width.in_cgs().to_ndarray())
-
         my_kT_min, my_kT_max = data_source.quantities.extrema("kT")
 
-        self.spectral_model.prepare()
-        energy = self.spectral_model.ebins
+        self.spectral_model.prepare_spectrum(redshift)
+        emid = self.spectral_model.emid
+        ebins = self.spectral_model.ebins
+        nchan = len(emid)
 
         citer = data_source.chunks([], "io")
 
@@ -99,7 +112,13 @@
         photons["Energy"] = []
         photons["NumberOfPhotons"] = []
 
-        spectral_norm = area.v*exp_time.v*dist_fac/vol_scale
+        spectral_norm = area.v*exp_time.v*dist_fac
+
+        tot_num_cells = data_source.ires.shape[0]
+
+        pbar = get_pbar("Generating photons ", tot_num_cells)
+
+        cell_counter = 0
 
         for chunk in parallel_objects(citer):
 
@@ -114,7 +133,7 @@
             if isinstance(self.Zmet, string_types):
                 metalZ = chunk[self.Zmet].v
             else:
-                metalZ = self.Zmet
+                metalZ = self.Zmet*np.ones(num_cells)
 
             idxs = np.argsort(kT)
 
@@ -133,7 +152,7 @@
                 n += bcount
             kT_idxs = np.unique(kT_idxs)
 
-            cell_em = EM[idxs]*vol_scale
+            cell_em = EM[idxs]*spectral_norm
 
             number_of_photons = np.zeros(num_cells, dtype="uint64")
             energies = np.zeros(self.photons_per_chunk)
@@ -141,8 +160,6 @@
             start_e = 0
             end_e = 0
 
-            pbar = get_pbar("Generating photons for chunk ", num_cells)
-
             for ibegin, iend, ikT in zip(bcell, ecell, kT_idxs):
 
                 kT = kT_bins[ikT] + 0.5*dkT
@@ -151,58 +168,57 @@
 
                 cem = cell_em[ibegin:iend]
 
-                em_sum_c = cem.sum()
-                if isinstance(self.Zmet, string_types):
-                    em_sum_m = (metalZ[ibegin:iend]*cem).sum()
-                else:
-                    em_sum_m = metalZ*em_sum_c
-
                 cspec, mspec = self.spectral_model.get_spectrum(kT)
 
-                cumspec_c = np.cumsum(cspec.d)
-                tot_ph_c = cumspec_c[-1]*spectral_norm*em_sum_c
-                cumspec_c /= cumspec_c[-1]
-                cumspec_c = np.insert(cumspec_c, 0, 0.0)
-
-                cumspec_m = np.cumsum(mspec.d)
-                tot_ph_m = cumspec_m[-1]*spectral_norm*em_sum_m
-                cumspec_m /= cumspec_m[-1]
-                cumspec_m = np.insert(cumspec_m, 0, 0.0)
+                tot_ph_c = cspec.d.sum()
+                tot_ph_m = mspec.d.sum()
 
                 u = np.random.random(size=n_current)
 
-                cell_norm_c = tot_ph_c*cem/em_sum_c
-                cell_n_c = np.uint64(cell_norm_c) + np.uint64(np.modf(cell_norm_c)[0] >= u)
-            
-                if isinstance(self.Zmet, string_types):
-                    cell_norm_m = tot_ph_m*metalZ[ibegin:iend]*cem/em_sum_m
-                else:
-                    cell_norm_m = tot_ph_m*metalZ*cem/em_sum_m
-                cell_n_m = np.uint64(cell_norm_m) + np.uint64(np.modf(cell_norm_m)[0] >= u)
-            
-                number_of_photons[ibegin:iend] = cell_n_c + cell_n_m
+                cell_norm_c = tot_ph_c*cem
+                cell_norm_m = tot_ph_m*metalZ[ibegin:iend]*cem
+                cell_norm = np.modf(cell_norm_c + cell_norm_m)
+                cell_n = np.uint64(cell_norm[1]) + np.uint64(cell_norm[0] >= u)
 
-                end_e += int((cell_n_c+cell_n_m).sum())
+                number_of_photons[ibegin:iend] = cell_n
+
+                end_e += int(cell_n.sum())
 
                 if end_e > self.photons_per_chunk:
                     raise RuntimeError("Number of photons generated for this chunk "+
                                        "exceeds photons_per_chunk (%d)! " % self.photons_per_chunk +
                                        "Increase photons_per_chunk!")
 
-                energies[start_e:end_e] = _generate_energies(cell_n_c, cell_n_m, cumspec_c, cumspec_m, energy)
-            
+                if self.method == "invert_cdf":
+                    cumspec_c = np.cumsum(cspec.d)
+                    cumspec_m = np.cumsum(mspec.d)
+                    cumspec_c = np.insert(cumspec_c, 0, 0.0)
+                    cumspec_m = np.insert(cumspec_m, 0, 0.0)
+
+                ei = start_e
+                for cn, Z in zip(number_of_photons[ibegin:iend], metalZ[ibegin:iend]):
+                    if cn == 0: continue
+                    if self.method == "invert_cdf":
+                        cumspec = cumspec_c + Z*cumspec_m
+                        cumspec /= cumspec[-1]
+                        randvec = np.random.uniform(size=cn)
+                        randvec.sort()
+                        cell_e = np.interp(randvec, cumspec, ebins)
+                    elif self.method == "accept_reject":
+                        tot_spec = cspec.d+Z*mspec.d
+                        tot_spec /= tot_spec.sum()
+                        eidxs = np.random.choice(nchan, size=cn, p=tot_spec)
+                        cell_e = emid[eidxs]
+                    energies[ei:ei+cn] = cell_e
+                    cell_counter += 1
+                    pbar.update(cell_counter)
+                    ei += cn
+
                 start_e = end_e
 
-                pbar.update(iend)
-
-            pbar.finish()
-
             active_cells = number_of_photons > 0
             idxs = idxs[active_cells]
 
-            mylog.info("Number of photons generated for this chunk: %d" % int(number_of_photons.sum()))
-            mylog.info("Number of cells with photons: %d" % int(active_cells.sum()))
-
             photons["NumberOfPhotons"].append(number_of_photons[active_cells])
             photons["Energy"].append(ds.arr(energies[:end_e].copy(), "keV"))
             photons["x"].append((chunk["x"][idxs]-src_ctr[0]).in_units("kpc"))
@@ -213,23 +229,17 @@
             photons["vz"].append(chunk["velocity_z"][idxs].in_units("km/s"))
             photons["dx"].append(chunk["dx"][idxs].in_units("kpc"))
 
+        pbar.finish()
+
         for key in photons:
             if len(photons[key]) > 0:
                 photons[key] = uconcatenate(photons[key])
+            elif key == "NumberOfPhotons":
+                photons[key] = np.array([])
+            else:
+                photons[key] = YTArray([], photon_units[key])
+
+        mylog.info("Number of photons generated: %d" % int(np.sum(photons["NumberOfPhotons"])))
+        mylog.info("Number of cells with photons: %d" % len(photons["x"]))
 
         return photons
-
-def _generate_energies(cell_n_c, cell_n_m, counts_c, counts_m, energy):
-    energies = np.array([])
-    for cn_c, cn_m in zip(cell_n_c, cell_n_m):
-        if cn_c > 0:
-            randvec_c = np.random.uniform(size=cn_c)
-            randvec_c.sort()
-            cell_e_c = np.interp(randvec_c, counts_c, energy)
-            energies = np.append(energies, cell_e_c)
-        if cn_m > 0: 
-            randvec_m = np.random.uniform(size=cn_m)
-            randvec_m.sort()
-            cell_e_m = np.interp(randvec_m, counts_m, energy)
-            energies = np.append(energies, cell_e_m)
-    return energies

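The photon_models rework replaces the old _generate_energies helper with an
in-loop sampler selected by the new method keyword: "invert_cdf" draws photon
energies by inverting the cumulative distribution of the binned spectrum,
while "accept_reject" draws channel indices with probability proportional to
the spectrum (the diff implements that branch with np.random.choice rather
than an explicit rejection loop). The per-cell photon count itself comes from
stochastically rounding the expected count via np.modf. A minimal NumPy sketch
of those three steps, using a toy exponential spectrum for a single cell
rather than the cspec/mspec pair returned by the spectral model:

    import numpy as np

    # Toy binned spectrum (an assumption, not APEC; the real code combines the
    # cosmic and metal spectra with the cell metallicity before normalizing).
    ebins = np.linspace(0.1, 10.0, 513)      # channel edges in keV
    emid = 0.5 * (ebins[1:] + ebins[:-1])    # channel centers
    spec = np.exp(-emid / 2.0)               # arbitrary falling spectrum

    # Stochastic rounding of a fractional expected photon count: keep the
    # integer part and add one extra photon with probability equal to the
    # fractional part, as in the cell_norm / np.modf step above.
    expected = 1000.37
    frac, whole = np.modf(expected)
    n_photons = int(whole) + int(frac >= np.random.random())

    # method="invert_cdf": invert the cumulative distribution of the spectrum.
    cumspec = np.insert(np.cumsum(spec), 0, 0.0)
    cumspec /= cumspec[-1]                   # normalize to [0, 1]
    randvec = np.sort(np.random.uniform(size=n_photons))
    energies_cdf = np.interp(randvec, cumspec, ebins)

    # method="accept_reject": pick channels in proportion to the spectrum and
    # take the channel-center energies.
    eidxs = np.random.choice(len(emid), size=n_photons, p=spec / spec.sum())
    energies_ar = emid[eidxs]

    print(n_photons, energies_cdf.mean(), energies_ar.mean())
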
This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving it
because you have the service enabled and are the recipient of this email.


