[yt-svn] commit/yt: 6 new changesets

commits-noreply at bitbucket.org
Wed May 24 07:27:41 PDT 2017


6 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/baaf6482ca18/
Changeset:   baaf6482ca18
User:        jisuoqing
Date:        2017-05-21 08:17:01+00:00
Summary:     Make aspect ratio consistent with plot window
Affected #:  1 file

diff -r ea18a2a782327bbcd0a55eb6d60d29566853be86 -r baaf6482ca187493a3ce4b908a0d87393f465626 yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -2357,7 +2357,7 @@
 
         if self.const_alpha:
             plot._axes.imshow(lic_data_clip, extent=extent, cmap=self.cmap,
-                              alpha=self.alpha, origin='lower')
+                              alpha=self.alpha, origin='lower', aspect="auto")
         else:
             lic_data_rgba = cm.ScalarMappable(norm=None, cmap=self.cmap).\
                             to_rgba(lic_data_clip)
@@ -2365,7 +2365,7 @@
                                     / (self.lim[1] - self.lim[0])
             lic_data_rgba[...,3] = lic_data_clip_rescale * self.alpha
             plot._axes.imshow(lic_data_rgba, extent=extent, cmap=self.cmap,
-                              origin='lower')
+                              origin='lower', aspect="auto")
 
         return plot
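
For context, a minimal sketch (not part of the changeset) of the behavior this
fixes: by default imshow forces square data pixels, so an image drawn over a
non-square extent does not fill the plot window. The data and extent below are
made up.

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.random((64, 64))
    extent = [0.0, 2.0, 0.0, 1.0]  # x range is twice the y range

    fig, (ax1, ax2) = plt.subplots(1, 2)
    # Default aspect ("equal" for imshow) keeps data pixels square,
    # distorting the axes box relative to the plot window.
    ax1.imshow(data, extent=extent, origin='lower')
    # aspect="auto" stretches the image to fill the axes instead.
    ax2.imshow(data, extent=extent, origin='lower', aspect="auto")
    fig.savefig("aspect_demo.png")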
 


https://bitbucket.org/yt_analysis/yt/commits/33dba5d78a2d/
Changeset:   33dba5d78a2d
User:        jisuoqing
Date:        2017-05-22 00:01:20+00:00
Summary:     Merge github.com:yt-project/yt
Affected #:  71 files

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 .gitignore
--- a/.gitignore
+++ b/.gitignore
@@ -63,7 +63,6 @@
 yt/utilities/lib/png_writer.c
 yt/utilities/lib/points_in_volume.c
 yt/utilities/lib/quad_tree.c
-yt/utilities/lib/ray_integrators.c
 yt/utilities/lib/ragged_arrays.c
 yt/utilities/lib/cosmology_time.c
 yt/utilities/lib/grid_traversal.c

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 .travis.yml
--- a/.travis.yml
+++ b/.travis.yml
@@ -10,9 +10,22 @@
     packages:
       - libhdf5-serial-dev
 
+env:
+  global:
+    NUMPY=numpy
+    CYTHON=cython
+    MATPLOTLIB=matplotlib
+    SYMPY=sympy
+    H5PY=h5py
+    SCIPY=scipy
+    IPYTHON=ipython
+    FASTCACHE=fastcache
+
 matrix:
   include:
     - python: 2.7
+      env: NUMPY=numpy==1.10.4 CYTHON=cython==0.24 MATPLOTLIB=matplotlib==1.5.3 SYMPY=sympy==1.0 H5PY= SCIPY= FASTCACHE= IPYTHON=ipython==1.0
+    - python: 2.7
     - python: 3.4
     - python: 3.5
     - python: 3.6
@@ -52,7 +65,7 @@
     pip install --upgrade wheel
     pip install --upgrade setuptools
     # Install dependencies
-    pip install mock numpy scipy cython matplotlib sympy fastcache nose flake8 h5py ipython nose-timer
+    pip install mock $NUMPY $SCIPY $H5PY $CYTHON $MATPLOTLIB $SYMPY $FASTCACHE $IPYTHON nose flake8 nose-timer
     # install yt
     pip install -e .
 

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 README
--- a/README
+++ /dev/null
@@ -1,24 +0,0 @@
-Hi there!  You've just downloaded yt, an analysis tool for scientific
-datasets, generated on a variety of data platforms.  It's written in 
-python and heavily leverages NumPy, Matplotlib, SymPy and Cython for a variety
-of tasks.
-
-Full documentation and a user community can be found at:
-
-http://yt-project.org/
-
-http://yt-project.org/doc/
-
-If you have used Python before, and are comfortable with installing packages,
-you should find the setup.py script fairly straightforward: simply execute
-"python setup.py install".
-
-If you would rather a more automated installation, you can use the script
-doc/install_script.sh .  You will have to set the destination directory, and
-there are options available, but it should be straightforward.
-
-For more information on installation, what to do if you run into problems, or 
-ways to help development, please visit our website.
-
-Enjoy!
-

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 README.md
--- /dev/null
+++ b/README.md
@@ -0,0 +1,112 @@
+# The yt Project
+
+[![Users' Mailing List](https://img.shields.io/badge/Users-List-lightgrey.svg)](http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org/)
+[![Devel Mailing List](https://img.shields.io/badge/Devel-List-lightgrey.svg)](http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org/)
+[![Build Status](https://img.shields.io/travis/yt-project/yt.svg?branch=master)](https://travis-ci.org/yt-project/yt)
+[![Latest Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](http://yt-project.org/docs/dev/)
+[![Data Hub](https://img.shields.io/badge/data-hub-orange.svg)](https://hub.yt/)
+                
+<a href="http://yt-project.org"><img src="doc/source/_static/yt_logo.png" width="300"></a>
+
+yt is an open-source, permissively-licensed python package for analyzing and
+visualizing volumetric data.
+
+yt supports structured, variable-resolution meshes, unstructured meshes, and
+discrete or sampled data such as particles. Focused on driving
+physically-meaningful inquiry, yt has been applied in domains such as
+astrophysics, seismology, nuclear engineering, molecular dynamics, and
+oceanography. Composed of a friendly community of users and developers, we want
+to make it easy to use and develop - we'd love it if you got involved!
+
+We've written a [method
+paper](http://adsabs.harvard.edu/abs/2011ApJS..192....9T) you may be interested
+in; if you use yt in the preparation of a publication, please consider citing
+it.
+
+## Installation
+
+If you're using conda with [conda-forge](http://conda-forge.github.io/), you
+can install the most recent stable version by running:
+
+```
+conda install -c conda-forge yt
+```
+
+or by doing:
+
+```
+pip install yt
+```
+
+If you want the latest nightly build, you can manually install from our
+repository:
+
+```
+conda install -c http://use.yt/with_conda yt
+```
+
+To get set up with a development version, you can clone this repository and
+install like this:
+
+```
+git clone https://github.com/yt-project/yt yt-git
+cd yt-git
+python setup.py develop
+```
+
+To set up yt in a virtualenv (and there are [many good
+reasons](https://packaging.python.org/installing/#creating-virtual-environments)
+to do so!) you can follow this prescription:
+
+```
+# Assuming you have cd'd into yt-git
+# It is conventional to create virtualenvs at ~/.virtualenv/
+$ mkdir -p ~/.virtualenv
+# Assuming your version of Python 3 is 3.4 or higher,
+# create a virtualenv named yt
+$ python3 -m venv ~/.virtualenv/yt
+# Activate it
+$ source ~/.virtualenv/yt/bin/activate
+# Make sure you run the latest version of pip
+$ pip install --upgrade pip
+$ pip install -e .
+# Output installed packages
+$ pip freeze
+```
+
+## Getting Started
+
+yt is designed to provide meaningful analysis of data.  We have some Quickstart
+example notebooks in the repository:
+
+ * [Introduction](doc/source/quickstart/1\)_Introduction.ipynb)
+ * [Data Inspection](doc/source/quickstart/2\)_Data_Inspection.ipynb)
+ * [Simple Visualization](doc/source/quickstart/3\)_Simple_Visualization.ipynb)
+ * [Data Objects and Time Series](doc/source/quickstart/4\)_Data_Objects_and_Time_Series.ipynb)
+ * [Derived Fields and Profiles](doc/source/quickstart/5\)_Derived_Fields_and_Profiles.ipynb)
+ * [Volume Rendering](doc/source/quickstart/6\)_Volume_Rendering.ipynb)
+
+If you'd like to try these online, you can visit our [yt Hub](https://hub.yt/)
+and run a notebook next to some of our example data.
+
+## Contributing
+
+We love contributions!  yt is open source, built on open source, and we'd love
+to have you hang out in our community.
+
+We have developed some [guidelines](CONTRIBUTING.rst) for contributing to yt.
+
+## Resources
+
+We have some community and documentation resources available.
+
+ * Our latest documentation is always at http://yt-project.org/docs/dev/ and it
+   includes recipes, tutorials, and API documentation
+ * The [discussion mailing
+   list](http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org/)
+   should be your first stop for general questions
+ * The [development mailing
+   list](http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org/) is
+   better suited for more development issues
+ * You can also join us on Slack at yt-project.slack.com ([request an
+   invite](http://yt-project.org/slack.html))

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 appveyor.yml
--- a/appveyor.yml
+++ b/appveyor.yml
@@ -4,19 +4,18 @@
 environment:
 
   global:
-      PYTHON: "C:\\Miniconda-x64"
+      PYTHON: "C:\\Miniconda3-x64"
 
   matrix:
-
-      - PYTHON_VERSION: "2.7"
-
-      - PYTHON_VERSION: "3.5"
-
+      - PYTHON_VERSION: "3.6"
 
 platform:
     -x64
 
 install:
+    - "if not exist \"%userprofile%\\.config\\yt\" mkdir %userprofile%\\.config\\yt"
+    - "echo [yt] > %userprofile%\\.config\\yt\\ytrc"
+    - "echo suppressStreamLogging = True >> %userprofile%\\.config\\yt\\ytrc"
     - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
 
     # Install the build and runtime dependencies of the project.
@@ -28,11 +27,11 @@
     - "python --version"
 
     # Install specified version of numpy and dependencies
-    - "conda install -q --yes numpy nose setuptools ipython Cython sympy h5py matplotlib"
-    - "python setup.py develop"
+    - "conda install -q --yes -c conda-forge numpy scipy nose setuptools ipython Cython sympy fastcache h5py matplotlib flake8 "
+    - "pip install -e ."
 
 # Not a .NET project
 build: false
 
 test_script:
-  - "nosetests -e test_all_fields ."
+  - "nosetests --nologcapture -sv yt"

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/Makefile
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -49,7 +49,8 @@
 ifneq ($(READTHEDOCS),True)
 	SPHINX_APIDOC_OPTIONS=members,undoc-members,inherited-members,show-inheritance sphinx-apidoc \
         -o source/reference/api/ \
-        -e ../yt ../yt/extern/* $(shell find ../yt -name "*tests*" -type d) ../yt/utilities/voropp*
+        -e ../yt ../yt/extern* $(shell find ../yt -name "*tests*" -type d) ../yt/utilities/voropp* \
+           ../yt/analysis_modules/halo_finding/{fof,hop}
 endif
 	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
 	@echo

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/helper_scripts/show_fields.py
--- a/doc/helper_scripts/show_fields.py
+++ b/doc/helper_scripts/show_fields.py
@@ -171,7 +171,7 @@
         u = field[1][0]
         if len(u) > 0:
             self.units = ":math:`\mathrm{%s}`" % fix_units(u)
-        a = ["``%s``" % f for f in field[1][1]]
+        a = ["``%s``" % f for f in field[1][1] if f]
         self.aliases = " ".join(a)
         self.dname = ""
         if field[1][2] is not None:

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -1072,7 +1072,7 @@
     # Install setuptools
     do_setup_py $SETUPTOOLS
 
-    if type -p git &>/dev/null
+    if type -P git &>/dev/null
     then
         GIT_EXE="git"
     else
@@ -1087,7 +1087,7 @@
             YT_DIR="$ORIG_PWD"
         elif [ -e $ORIG_PWD/../yt/mods.py ]
         then
-            YT_DIR=`dirname $ORIG_PWD`
+            YT_DIR=$(dirname $ORIG_PWD)
         elif [ ! -e yt-git ]
         then
             echo "Cloning yt"
@@ -1420,8 +1420,19 @@
     fi
     
     log_cmd ${DEST_DIR}/bin/conda update --yes conda
-    
-    GIT_EXE=${DEST_DIR}/bin/git
+
+    if [ $INST_GIT -eq 1 ]
+    then
+        GIT_EXE=${DEST_DIR}/bin/git
+    else
+        if type -P git &>/dev/null
+        then
+            GIT_EXE="git"
+        else
+            echo "Cannot find git. Please install git or set INST_GIT=1."
+            do_exit
+        fi
+    fi
 
     log_cmd echo "DEPENDENCIES" ${YT_DEPS[@]}
     for YT_DEP in "${YT_DEPS[@]}"; do
@@ -1494,9 +1505,27 @@
         log_cmd ${DEST_DIR}/bin/conda install -c conda-forge --yes yt
     else
         echo "Building yt from source"
-        YT_DIR="${DEST_DIR}/src/yt-git"
-        log_cmd ${GIT_EXE} clone https://github.com/yt-project/yt ${YT_DIR}
-        log_cmd ${GIT_EXE} -C ${YT_DIR} checkout ${BRANCH}
+        if [ -z "$YT_DIR" ]
+        then
+            if [ -e $ORIG_PWD/yt/mods.py ]
+            then
+                YT_DIR="$ORIG_PWD"
+            elif [ -e $ORIG_PWD/../yt/mods.py ]
+            then
+                YT_DIR=$(dirname $ORIG_PWD)
+            else
+                YT_DIR="${DEST_DIR}/src/yt-git"
+                log_cmd ${GIT_EXE} clone https://github.com/yt-project/yt ${YT_DIR}
+                log_cmd ${GIT_EXE} -C ${YT_DIR} checkout ${BRANCH}
+            fi
+            echo Setting YT_DIR=${YT_DIR}
+        else
+            if [ ! -e $YT_DIR/.git ]
+            then
+                echo "$YT_DIR is not a clone of the yt git repository, exiting"
+                do_exit
+            fi
+        fi
         if [ $INST_EMBREE -eq 1 ]
         then
             echo $DEST_DIR > ${YT_DIR}/embree.cfg

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/_static/yt_logo.png
Binary file doc/source/_static/yt_logo.png has changed

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/_static/yt_logo.svg
--- /dev/null
+++ b/doc/source/_static/yt_logo.svg
@@ -0,0 +1,37 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 19.2.1, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+	 viewBox="0 0 416.7 360" style="enable-background:new 0 0 416.7 360;" xml:space="preserve">
+<style type="text/css">
+	.st0{opacity:0.55;fill:#00A3BB;}
+	.st1{opacity:0.55;fill:#F26222;}
+	.st2{opacity:0.55;}
+	.st3{fill:#EF3A24;}
+	.st4{fill:#EFE929;}
+	.st5{fill:#72BF44;}
+	.st6{fill:#020202;}
+</style>
+<g>
+	<path class="st0" d="M397.8,290c-45.6,69.6-187.6,17.9-247.1-21.1S-6.7,118.4,38.9,48.8S206.8,44.1,266.2,83S443.4,220.4,397.8,290
+		z"/>
+	<path class="st1" d="M373.6,276.6c-36,54.9-155.7,9-206.3-24.1S31.9,126.4,67.9,71.5s139.7,1.1,190.3,34.2S409.6,221.7,373.6,276.6
+		z"/>
+	<g class="st2">
+		<path class="st3" d="M250.9,134.4c89.1,67.2,125,135.2,83,147.3c-27.6,8-87.2-1.2-155.6-53.6S68.5,114.7,86,91.8
+			c9.9-12.9,63.6-26.2,97.4-12.3C209.1,90,214.5,107,250.9,134.4z"/>
+		<path class="st4" d="M247.1,139.3c79.3,59.8,111.3,120.3,73.9,131.2c-24.6,7.1-77.6-1.1-138.5-47.7s-97.7-101-82.2-121.3
+			c8.8-11.5,56.7-23.3,86.7-11C210,99.8,214.8,114.9,247.1,139.3z"/>
+		<path class="st5" d="M241.1,147.2c63.6,48,89.2,96.4,59.2,105.1c-19.7,5.7-62.2-0.8-111-38.2s-78.3-80.9-65.8-97.2
+			c7.1-9.2,45.4-18.7,69.4-8.8C211.3,115.5,215.2,127.6,241.1,147.2z"/>
+	</g>
+	<g>
+		<path class="st6" d="M58.7,351c-8.7,0-16.6-0.9-23.7-2.7c-7.1-1.8-14.2-4.4-21.3-8l10.6-25c5.2,2.7,10.3,4.7,15.3,5.9
+			c5,1.2,11,1.8,17.8,1.8c9.5,0,17.6-2.9,24.1-8.6c6.5-5.7,12.9-16.1,19.2-31.1L4.7,72.8h34.8l76.9,176.3l67.9-176.3h33.5L129,291.2
+			c-9,21.8-19,37.2-30.1,46.2C87.9,346.5,74.5,351,58.7,351z"/>
+		<path class="st6" d="M338.4,288c-8.5,0-16.4-1.1-23.7-3.3c-7.4-2.2-13.7-5.7-19-10.4c-5.3-4.8-9.5-11-12.5-18.6
+			c-3-7.6-4.5-16.8-4.5-27.4V100.6h-29.5V72.8h29.5V9h31.5v63.8h67.1v27.8h-67.1v123.2c0,13.1,3.3,22.2,9.8,27.3
+			c6.5,5.2,15.1,7.8,25.8,7.8c5.4,0,10.6-0.5,15.3-1.5c4.8-1,9.9-2.8,15.3-5.4v27c-5.5,3-11.2,5.3-17.4,6.7
+			C353,287.2,346.1,288,338.4,288z"/>
+	</g>
+</g>
+</svg>

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/analyzing/analysis_modules/star_analysis.rst
--- a/doc/source/analyzing/analysis_modules/star_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/star_analysis.rst
@@ -1,11 +1,11 @@
-.. _star_analysis:
-
 .. note::
 
    This module has been deprecated as it is unmaintained.  The code has been
    moved to the `yt attic <https://github.com/yt-project/yt_attic>`__.
    If you'd like to take it over, please do!
 
+.. _star_analysis:
+
 Star Particle Analysis
 ======================
 .. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst
@@ -1,11 +1,11 @@
-.. _two_point_functions:
-
 .. note::
 
    This module has been deprecated as it is unmaintained.  The code has been
    moved to the `yt attic <https://github.com/yt-project/yt_attic>`__.
    If you'd like to take it over, please do!
 
+.. _two_point_functions:
+
 Two Point Functions
 ===================
 .. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -199,6 +199,21 @@
 
 .. _available-objects:
 
+Making Image Buffers
+^^^^^^^^^^^^^^^^^^^^
+
+Using the slicing syntax above for choosing a slice, if you also provide an
+imaginary step value you can obtain a
+:class:`~yt.visualization.api.FixedResolutionBuffer` of the chosen resolution.
+
+For instance, to obtain a 1024 by 1024 buffer covering the entire
+domain but centered at 0.5 in code units, you can do::
+
+   frb = ds.r[0.5, ::1024j, ::1024j]
+
+This ``frb`` object can then be queried like a normal fixed resolution buffer,
+and it will return arrays of shape (1024, 1024).
+
 Available Objects
 -----------------
 
@@ -650,7 +665,7 @@
    cutout = sp1 - sp3
    sp4 = sp1 ^ sp2
    sp5 = sp1 & sp2
-   
+
 
 Note that the ``+`` operation and the ``|`` operation are identical.  For when
 multiple objects are to be combined in an intersection or a union, there are
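
As a quick illustration of the image buffer syntax documented above (the
dataset name here is just an example):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # sample dataset
    # Slicing at x = 0.5; the imaginary steps request a 1024x1024 buffer.
    frb = ds.r[0.5, ::1024j, ::1024j]
    print(frb["density"].shape)  # (1024, 1024)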

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/cookbook/power_spectrum_example.py
--- a/doc/source/cookbook/power_spectrum_example.py
+++ b/doc/source/cookbook/power_spectrum_example.py
@@ -40,7 +40,7 @@
 
     nindex_rho = 1./3.
 
-    Kk = np.zeros( (nx/2+1, ny/2+1, nz/2+1))
+    Kk = np.zeros( (nx//2+1, ny//2+1, nz//2+1))
 
     for vel in [("gas", "velocity_x"), ("gas", "velocity_y"),
                 ("gas", "velocity_z")]:
@@ -106,7 +106,7 @@
     # the first half of the axes -- that's what we keep.  Our
     # normalization has an '8' to account for this clipping to one
     # octant.
-    ru = np.fft.fftn(rho**nindex_rho * u)[0:nx/2+1,0:ny/2+1,0:nz/2+1]
+    ru = np.fft.fftn(rho**nindex_rho * u)[0:nx//2+1,0:ny//2+1,0:nz//2+1]
     ru = 8.0*ru/(nx*ny*nz)
 
     return np.abs(ru)**2
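
The / to // changes above matter because under Python 3 (or with
"from __future__ import division") / is true division and returns floats,
which cannot be used as array shapes or slice bounds; a minimal illustration:

    import numpy as np

    nx = 64
    print(nx / 2 + 1)    # 33.0 -- a float under true division
    print(nx // 2 + 1)   # 33   -- usable as a shape or slice bound

    Kk = np.zeros((nx // 2 + 1,))      # fine
    ru = np.arange(nx)[0:nx // 2 + 1]  # fine; a float bound here would fail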

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -340,6 +340,7 @@
 defined, with the "raw" field type:
 
 .. code-block:: python
+
     import yt
     ds = yt.load("Laser/plt00015/")
     print(ds.field_list)
@@ -445,6 +446,7 @@
 --------------
 
 .. note::
+
    To load Exodus II data, you need to have the `netcdf4 <http://unidata.github.io/
    netcdf4-python/>`_ python interface installed.
 
@@ -1009,10 +1011,10 @@
    ds = yt.load("snapshot_061.hdf5", index_ptype="PartType0")
 
 By default, ``index_ptype`` is set to ``"all"``, which means all the particles.
-Currently this feature only works for the Gadget HDF5 and OWLS datasets. To
-bring the feature to other frontends, it's recommended to refer to this
-`PR <https://bitbucket.org/yt_analysis/yt/pull-requests/1985/add-particle-type-aware-octree/diff>`_
-for implementation details.
+For Gadget binary outputs, ``index_ptype`` should be set using the particle type
+names yt uses internally (e.g. ``'Gas'``, ``'Halo'``, ``'Disk'``, etc). For
+Gadget HDF5 outputs the particle type names come from the HDF5 output and so
+should be referred to using names like ``'PartType0'``.
 
 .. _gadget-field-spec:
 
@@ -1206,23 +1208,28 @@
    import yt
 
    grid_data = [
-       dict(left_edge = [0.0, 0.0, 0.0],
-            right_edge = [1.0, 1.0, 1.],
-            level = 0,
-            dimensions = [32, 32, 32],
-            number_of_particles = 0)
-       dict(left_edge = [0.25, 0.25, 0.25],
-            right_edge = [0.75, 0.75, 0.75],
-            level = 1,
-            dimensions = [32, 32, 32],
-            number_of_particles = 0)
+       dict(left_edge=[0.0, 0.0, 0.0],
+            right_edge=[1.0, 1.0, 1.0],
+            level=0,
+            dimensions=[32, 32, 32],
+            number_of_particles=0),
+       dict(left_edge=[0.25, 0.25, 0.25],
+            right_edge=[0.75, 0.75, 0.75],
+            level=1,
+            dimensions=[32, 32, 32],
+            number_of_particles=0)
    ]
 
    for g in grid_data:
-       g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
+       g["density"] = np.random.random(g["dimensions"]) * 2 ** g["level"]
 
    ds = yt.load_amr_grids(grid_data, [32, 32, 32], 1.0)
 
+.. note::
+
+   yt only supports a block structure where the grid edges on the ``n``-th
+   refinement level are aligned with the cell edges on the ``n-1``-th level.
+
 Particle fields are supported by adding 1-dimensional arrays and
 setting the ``number_of_particles`` key to each ``grid``'s dict:
 

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 scripts/iyt
--- a/scripts/iyt
+++ b/scripts/iyt
@@ -2,7 +2,6 @@
 from __future__ import print_function
 import os
 import re
-from distutils.version import LooseVersion
 from yt.mods import *
 from yt.data_objects.data_containers import YTDataContainer
 namespace = locals().copy()
@@ -25,38 +24,11 @@
     code.interact(doc, None, namespace)
     sys.exit()
 
-if LooseVersion(IPython.__version__) <= LooseVersion('0.10'):
-    api_version = '0.10'
-elif LooseVersion(IPython.__version__) <= LooseVersion('1.0'):
-    api_version = '0.11'
-else:
-    api_version = '1.0'
-
-if api_version == "0.10" and "DISPLAY" in os.environ:
-    from matplotlib import rcParams
-    ipbackends = dict(Qt4 = IPython.Shell.IPShellMatplotlibQt4,
-                      WX  = IPython.Shell.IPShellMatplotlibWX,
-                      GTK = IPython.Shell.IPShellMatplotlibGTK,
-                      Qt  = IPython.Shell.IPShellMatplotlibQt)
-    bend = (rcParams["backend"]).rstrip('Agg')
-
-    try:
-        ip_shell = ipbackends[bend](user_ns=namespace)
-    except KeyError:
-        ip_shell = IPython.Shell.IPShellMatplotlib(user_ns=namespace)
-elif api_version == "0.10":
-    ip_shell = IPython.Shell.IPShellMatplotlib(user_ns=namespace)
-else:
-    if api_version == "0.11":
-        from IPython.frontend.terminal.interactiveshell import \
-            TerminalInteractiveShell
-    elif api_version == "1.0":
-        from IPython.terminal.interactiveshell import TerminalInteractiveShell
-    else:
-        raise RuntimeError
-    ip_shell = TerminalInteractiveShell(user_ns=namespace, banner1 = doc,
-                    display_banner = True)
-    if "DISPLAY" in os.environ: ip_shell.enable_pylab(import_all=False)
+from IPython.terminal.interactiveshell import TerminalInteractiveShell
+ip_shell = TerminalInteractiveShell(user_ns=namespace, banner1 = doc,
+                                    display_banner = True)
+if "DISPLAY" in os.environ:
+    ip_shell.enable_pylab(import_all=False)
 
 
 # The rest is a modified version of the IPython default profile code
@@ -82,14 +54,9 @@
 # Most of your config files and extensions will probably start with this import
 
 #import IPython.ipapi
-if api_version == "0.10":
-    ip = ip_shell.IP.getapi()
-    try_next = IPython.ipapi.TryNext
-    kwargs = dict(sys_exit=1, banner=doc)
-elif api_version in ("0.11", "1.0"):
-    ip = ip_shell
-    try_next = IPython.core.error.TryNext
-    kwargs = dict()
+ip = ip_shell
+try_next = IPython.core.error.TryNext
+kwargs = dict()
 
 ip.ex("from yt.mods import *")
 ip.ex("import yt")

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,17 +6,19 @@
 from setuptools.extension import Extension
 from setuptools.command.build_ext import build_ext as _build_ext
 from setuptools.command.sdist import sdist as _sdist
-from setuptools.command.build_py import build_py as _build_py
 from setupext import \
-    check_for_openmp, check_for_pyembree, read_embree_location, \
-    get_mercurial_changeset_id, in_conda_env
+    check_for_openmp, \
+    check_for_pyembree, \
+    read_embree_location, \
+    in_conda_env
 from distutils.version import LooseVersion
 import pkg_resources
 
 
-if sys.version_info < (2, 7):
-    print("yt currently requires Python version 2.7")
-    print("certain features may fail unexpectedly and silently with older versions.")
+if sys.version_info < (2, 7) or (3, 0) < sys.version_info < (3, 4):
+    print("yt currently supports Python 2.7 or Python 3.4 and newer")
+    print("certain features may fail unexpectedly and silently with older "
+          "versions.")
     sys.exit(1)
 
 try:
@@ -37,6 +39,12 @@
 if os.path.exists('MANIFEST'):
     os.remove('MANIFEST')
 
+try:
+    import pypandoc
+    long_description = pypandoc.convert_file('README.md', 'rst')
+except (ImportError, IOError):
+    with open('README.md') as file:
+        long_description = file.read()
 
 if check_for_openmp() is True:
     omp_args = ['-fopenmp']
@@ -179,7 +187,7 @@
 lib_exts = [
     "particle_mesh_operations", "depth_first_octree", "fortran_reader",
     "interpolators", "misc_utilities", "basic_octree", "image_utilities",
-    "points_in_volume", "quad_tree", "ray_integrators", "mesh_utilities",
+    "points_in_volume", "quad_tree", "mesh_utilities",
     "amr_kdtools", "lenses", "distance_queue", "allocation_container"
 ]
 for ext_name in lib_exts:
@@ -284,31 +292,30 @@
                   library_dirs=[ldir],
                   include_dirs=[idir]))
 
-class build_py(_build_py):
-    def run(self):
-        # honor the --dry-run flag
-        if not self.dry_run:
-            target_dir = os.path.join(self.build_lib, 'yt')
-            src_dir = os.getcwd()
-            changeset = get_mercurial_changeset_id(src_dir)
-            self.mkpath(target_dir)
-            with open(os.path.join(target_dir, '__hg_version__.py'), 'w') as fobj:
-                fobj.write("hg_version = '%s'\n" % changeset)
-        _build_py.run(self)
-
-    def get_outputs(self):
-        # http://bitbucket.org/yt_analysis/yt/issues/1296
-        outputs = _build_py.get_outputs(self)
-        outputs.append(
-            os.path.join(self.build_lib, 'yt', '__hg_version__.py')
-        )
-        return outputs
-
-
 class build_ext(_build_ext):
     # subclass setuptools extension builder to avoid importing cython and numpy
     # at top level in setup.py. See http://stackoverflow.com/a/21621689/1382869
     def finalize_options(self):
+        try:
+            import cython
+            import numpy
+        except ImportError:
+            raise ImportError(
+"""Could not import cython or numpy. Building yt from source requires
+cython and numpy to be installed. Please install these packages using
+the appropriate package manager for your python environment.""")
+        if LooseVersion(cython.__version__) < LooseVersion('0.24'):
+            raise RuntimeError(
+"""Building yt from source requires Cython 0.24 or newer but
+Cython %s is installed. Please update Cython using the appropriate
+package manager for your python environment.""" %
+                cython.__version__)
+        if LooseVersion(numpy.__version__) < LooseVersion('1.10.4'):
+            raise RuntimeError(
+"""Building yt from source requires NumPy 1.10.4 or newer but
+NumPy %s is installed. Please update NumPy using the appropriate
+package manager for your python environment.""" %
+                numpy.__version__)
         from Cython.Build import cythonize
         self.distribution.ext_modules[:] = cythonize(
                 self.distribution.ext_modules)
@@ -321,14 +328,28 @@
             __builtins__["__NUMPY_SETUP__"] = False
         else:
             __builtins__.__NUMPY_SETUP__ = False
-        import numpy
         self.include_dirs.append(numpy.get_include())
 
 class sdist(_sdist):
     # subclass setuptools source distribution builder to ensure cython
     # generated C files are included in source distribution.
     # See http://stackoverflow.com/a/18418524/1382869
+    # subclass setuptools source distribution builder to ensure cython
+    # generated C files are included in source distribution and readme
+    # is converted from markdown to restructured text.  See
+    # http://stackoverflow.com/a/18418524/1382869
     def run(self):
+        # Make sure the compiled Cython files in the distribution are
+        # up-to-date
+
+        try:
+            import pypandoc
+        except ImportError:
+            raise RuntimeError(
+                'Trying to create a source distribution without pypandoc. '
+                'The readme will not render correctly on pypi without '
+                'pypandoc so we are exiting.'
+            )
         # Make sure the compiled Cython files in the distribution are up-to-date
         from Cython.Build import cythonize
         cythonize(cython_extensions)
@@ -338,6 +359,7 @@
     name="yt",
     version=VERSION,
     description="An analysis and visualization toolkit for volumetric data",
+    long_description = long_description,
     classifiers=["Development Status :: 5 - Production/Stable",
                  "Environment :: Console",
                  "Intended Audience :: Science/Research",
@@ -365,26 +387,21 @@
     },
     packages=find_packages(),
     include_package_data = True,
-    setup_requires=[
-        'numpy',
-        'cython>=0.24',
-    ],
     install_requires=[
-        'matplotlib',
+        'matplotlib>=1.5.3',
         'setuptools>=19.6',
-        'sympy',
-        'numpy',
-        'IPython',
-        'cython',
+        'sympy>=1.0',
+        'numpy>=1.10.4',
+        'IPython>=1.0',
     ],
     extras_require = {
         'hub':  ["girder_client"]
     },
-    cmdclass={'sdist': sdist, 'build_ext': build_ext, 'build_py': build_py},
+    cmdclass={'sdist': sdist, 'build_ext': build_ext},
     author="The yt project",
     author_email="yt-dev at lists.spacepope.org",
     url="http://yt-project.org/",
-    license="BSD",
+    license="BSD 3-Clause",
     zip_safe=False,
     scripts=["scripts/iyt"],
     ext_modules=cython_extensions + extensions,

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 setupext.py
--- a/setupext.py
+++ b/setupext.py
@@ -142,20 +142,3 @@
         shutil.rmtree(tmpdir)
 
     return rd
-
-
-def get_mercurial_changeset_id(target_dir):
-    '''
-    Returns changeset and branch using hglib
-    '''
-    try:
-        import hglib
-    except ImportError:
-        return None
-    try:
-        with hglib.open(target_dir) as repo:
-            changeset = repo.identify(
-                id=True, branch=True).strip().decode('utf8')
-    except hglib.error.ServerError:
-        return None
-    return changeset

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/__init__.py
--- a/yt/__init__.py
+++ b/yt/__init__.py
@@ -1,66 +1,10 @@
 """
-YT is a package written primarily in Python designed to make the task of
-running Enzo easier.  It contains facilities for creating Enzo data (currently
-in prototype form) as well as runnning Enzo simulations, simulating the actions
-of Enzo on various existing data, and analyzing output from Enzo in a
-wide-variety of methods.
-
-An ever-growing collection of documentation is also available at
-http://yt-project.org/doc/ . Additionally, there is a
-project site at http://yt-project.org/ with recipes, a wiki, a variety of
-ways of peering into the version control, and a bug-reporting system.
-
-YT is divided into several packages.
-
-frontends
----------
-
-This is where interfaces to codes are created.  Within each subdirectory of
-yt/frontends/ there must exist the following files, even if empty:
-
-* data_structures.py, where subclasses of AMRGridPatch, Dataset and
-  AMRHierarchy are defined.
-* io.py, where a subclass of IOHandler is defined.
-* misc.py, where any miscellaneous functions or classes are defined.
-* definitions.py, where any definitions specific to the frontend are
-  defined.  (i.e., header formats, etc.)
-
-visualization
--------------
+yt is a toolkit for analyzing and visualizing volumetric data.
 
-This is where all visualization modules are stored.  This includes plot
-collections, the volume rendering interface, and pixelization frontends.
-
-data_objects
-------------
-
-All objects that handle data, processed or unprocessed, not explicitly
-defined as visualization are located in here.  This includes the base
-classes for data regions, covering grids, time series, and so on.  This
-also includes derived fields and derived quantities.
-
-analysis_modules
-----------------
-
-This is where all mechanisms for processing data live.  This includes
-things like clump finding, halo profiling, halo finding, and so on.  This
-is something of a catchall, but it serves as a level of greater
-abstraction that simply data selection and modification.
-
-gui
----
-
-This is where all GUI components go.  Typically this will be some small
-tool used for one or two things, which contains a launching mechanism on
-the command line.
-
-utilities
----------
-
-All broadly useful code that doesn't clearly fit in one of the other
-categories goes here.
-
-
+* Website: http://yt-project.org
+* Documentation: http://yt-project.org/doc
+* Data hub: http://hub.yt
+* Contribute: http://github.com/yt-project/yt
 
 """
 

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
+++ b/yt/analysis_modules/cosmological_observation/light_cone/light_cone.py
@@ -298,7 +298,7 @@
                              "or a tuple of type (float, str).")
         
         # Calculate number of pixels on a side.
-        pixels = (field_of_view / image_resolution).in_units("")
+        pixels = int((field_of_view / image_resolution).in_units(""))
 
         # Clear projection stack.
         projection_stack = []

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -53,7 +53,7 @@
 
     Parameters
     ----------
-    parameter_filename : string or :class:`yt.data_objects.static_output.Dataset`
+    parameter_filename : string or :class:`~yt.data_objects.static_output.Dataset`
         For simple rays, one may pass either a loaded dataset object or
         the filename of a dataset.
         For compound rays, one must pass the filename of the simulation

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/analysis_modules/halo_analysis/enzofof_merger_tree.py
--- a/yt/analysis_modules/halo_analysis/enzofof_merger_tree.py
+++ b/yt/analysis_modules/halo_analysis/enzofof_merger_tree.py
@@ -713,8 +713,8 @@
         The filename to which you saved the hdf5 data from save_halo_evolution
     halo_id : int
         The halo in 'filename' that you want to follow
-    x_quantity, y_quantity : str, optional
-        The quantity that you want to plot as the x_coord (or y_coords).
+    x_quantity : str, optional
+        The quantity that you want to plot as the x_coord.
         Valid options are:
 
            * cycle
@@ -732,8 +732,12 @@
            * COM_vy
            * COM_vz
 
-    x_log, y_log : bool, optional
-        Do you want the x(y)-axis to be in log or linear?
+    y_quantity : str, optional
+        The quantity that you want to plot as the y_coord.
+    x_log : bool, optional
+        Do you want the x-axis to be in log or linear?
+    y_log : bool, optional
+        Do you want the y-axis to be in log or linear?
     FOF_directory : str, optional
         Directory where FOF files (and hdf file) are located
 

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/analysis_modules/halo_finding/rockstar/rockstar.py
--- a/yt/analysis_modules/halo_finding/rockstar/rockstar.py
+++ b/yt/analysis_modules/halo_finding/rockstar/rockstar.py
@@ -114,29 +114,29 @@
 
     Parameters
     ----------
-    ts: DatasetSeries, Dataset
+    ts : DatasetSeries, ~yt.data_objects.static_output.Dataset
         This is the data source containing the DM particles. Because 
         halo IDs may change from one snapshot to the next, the only
         way to keep a consistent halo ID across time is to feed 
         Rockstar a set of snapshots, ie, via DatasetSeries.
-    num_readers: int
+    num_readers : int
         The number of reader can be increased from the default
         of 1 in the event that a single snapshot is split among
         many files. This can help in cases where performance is
         IO-limited. Default is 1. If run inline, it is
         equal to the number of MPI threads.
-    num_writers: int
+    num_writers : int
         The number of writers determines the number of processing threads
         as well as the number of threads writing output data.
         The default is set to comm.size-num_readers-1. If run inline,
         the default is equal to the number of MPI threads.
-    outbase: str
+    outbase : str
         This is where the out*list files that Rockstar makes should be
         placed. Default is 'rockstar_halos'.
-    particle_type: str
+    particle_type : str
         This is the "particle type" that can be found in the data.  This can be
         a filtered particle or an inherent type.
-    force_res: float
+    force_res : float
         This parameter specifies the force resolution that Rockstar uses
         in units of Mpc/h.
         If no value is provided, this parameter is automatically set to

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/convenience.py
--- a/yt/convenience.py
+++ b/yt/convenience.py
@@ -32,9 +32,9 @@
     """
     This function attempts to determine the base data type of a filename or
     other set of arguments by calling
-    :meth:`yt.data_objects.api.Dataset._is_valid` until it finds a
+    :meth:`yt.data_objects.static_output.Dataset._is_valid` until it finds a
     match, at which point it returns an instance of the appropriate
-    :class:`yt.data_objects.api.Dataset` subclass.
+    :class:`yt.data_objects.static_output.Dataset` subclass.
     """
     candidates = []
     args = [os.path.expanduser(arg) if isinstance(arg, string_types)

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/data_containers.py
--- a/yt/data_objects/data_containers.py
+++ b/yt/data_objects/data_containers.py
@@ -132,7 +132,7 @@
     def __init__(self, ds, field_parameters):
         """
         Typically this is never called directly, but only due to inheritance.
-        It associates a :class:`~yt.data_objects.api.Dataset` with the class,
+        It associates a :class:`~yt.data_objects.static_output.Dataset` with the class,
         sets its initial set of fields, and the remainder of the arguments
         are passed as field_parameters.
         """
@@ -495,16 +495,16 @@
     def save_as_dataset(self, filename=None, fields=None):
         r"""Export a data object to a reloadable yt dataset.
 
-        This function will take a data object and output a dataset 
-        containing either the fields presently existing or fields 
+        This function will take a data object and output a dataset
+        containing either the fields presently existing or fields
         given in the ``fields`` list.  The resulting dataset can be
         reloaded as a yt dataset.
 
         Parameters
         ----------
         filename : str, optional
-            The name of the file to be written.  If None, the name 
-            will be a combination of the original dataset and the type 
+            The name of the file to be written.  If None, the name
+            will be a combination of the original dataset and the type
             of data container.
         fields : list of string or tuple field names, optional
             If this is supplied, it is the list of fields to be saved to
@@ -1628,6 +1628,8 @@
         elif iterable(height):
             h, u = height
             height = self.ds.quan(h, input_units = u)
+        elif not isinstance(height, YTArray):
+            height = self.ds.quan(height, 'code_length')
         if not iterable(resolution):
             resolution = (resolution, resolution)
         from yt.visualization.fixed_resolution import FixedResolutionBuffer
@@ -1857,7 +1859,7 @@
 
     def _calculate_flux_in_grid(self, grid, mask, field, value,
                     field_x, field_y, field_z, fluxing_field = None):
-        
+
         vc_fields = [field, field_x, field_y, field_z]
         if fluxing_field is not None:
             vc_fields.append(fluxing_field)
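
The new elif branch above means to_frb now accepts a bare float for height,
interpreted in code_length just like width; a hedged usage sketch (the dataset
name is illustrative):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # example dataset
    sl = ds.slice("z", 0.5)
    # height as a plain float is now treated as code_length, matching the
    # existing handling of width.
    frb = sl.to_frb(width=0.5, height=0.25, resolution=(512, 256))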

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/derived_quantities.py
--- a/yt/data_objects/derived_quantities.py
+++ b/yt/data_objects/derived_quantities.py
@@ -156,8 +156,8 @@
 
     Parameters
     ----------
-    fields : field or list of fields
-        The field to be summed.
+    fields
+        The field or list of fields to be summed.
 
     Examples
     --------
@@ -487,8 +487,9 @@
 
     Parameters
     ----------
-    fields : field or list of fields
-        The field over which the extrema are to be calculated.
+    fields
+        The field or list of fields over which the extrema are to be
+        calculated.
     non_zero : bool
         If True, only positive values are considered in the calculation.
         Default: False

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/particle_trajectories.py
--- a/yt/data_objects/particle_trajectories.py
+++ b/yt/data_objects/particle_trajectories.py
@@ -29,7 +29,7 @@
 
     Parameters
     ----------
-    outputs : `yt.data_objects.time_series.DatasetSeries`
+    outputs : ~yt.data_objects.time_series.DatasetSeries
         DatasetSeries object from which to draw the particles.
     indices : array_like
         An integer array of particle indices whose trajectories we

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/region_expression.py
--- a/yt/data_objects/region_expression.py
+++ b/yt/data_objects/region_expression.py
@@ -38,10 +38,16 @@
         if isinstance(item, tuple) and isinstance(item[1], string_types):
             return self.all_data[item]
         if isinstance(item, slice):
+            # This is for the case where we give a slice as an index; one
+            # possible use case of this would be where we supply something
+            # like ds.r[::256j].  This would be expanded implicitly into
+            # ds.r[::256j, ::256j, ::256j].  Other cases would be if we do
+            # ds.r[0.1:0.9] where it will be expanded along three dimensions.
             item = (item, item, item)
         if len(item) != self.ds.dimensionality:
             # Not the right specification, and we don't want to do anything
-            # implicitly.
+            # implicitly.  Note that this happens *after* the implicit expansion
+            # of a single slice.
             raise YTDimensionalityError(len(item), self.ds.dimensionality)
         if self.ds.dimensionality != 3:
             # We'll pass on this for the time being.
@@ -49,14 +55,14 @@
 
         # OK, now we need to look at our slices.  How many are a specific
         # coordinate?
-        
+
         if not all(isinstance(v, slice) for v in item):
             return self._create_slice(item)
         else:
             if all(s.start is s.stop is s.step is None for s in item):
                 return self.all_data
             return self._create_region(item)
-            
+
     def _spec_to_value(self, input_tuple):
         if not isinstance(input_tuple, tuple):
             # We now assume that it's in code_length
@@ -66,6 +72,9 @@
         return value
 
     def _create_slice(self, slice_tuple):
+        # This is somewhat more complex because we want to allow for slicing
+        # in one dimension but also *not* using the entire domain; for instance
+        # this means we allow something like ds.r[0.5, 0.1:0.4, 0.1:0.4].
         axis = None
         new_slice = []
         for ax, v in enumerate(slice_tuple):
@@ -79,6 +88,23 @@
         # This new slice doesn't need to be a tuple
         source = self._create_region(new_slice)
         sl = self.ds.slice(axis, coord, data_source = source)
+        # Now, the slice specification may also include steps, which signal
+        # a request for a fixed resolution buffer rather than a plain slice.
+        # We check for that by looking for exactly two imaginary step
+        # components.
+        xax = self.ds.coordinates.x_axis[axis]
+        yax = self.ds.coordinates.y_axis[axis]
+        if getattr(new_slice[xax].step, "imag", 0.0) != 0.0 and \
+           getattr(new_slice[yax].step, "imag", 0.0) != 0.0:
+            # We now need to convert to a fixed res buffer.
+            # We'll do this by getting the x/y axes, and then using that.
+            width = source.right_edge[xax] - source.left_edge[xax]
+            height = source.right_edge[yax] - source.left_edge[yax]
+            # Make a resolution tuple from the imaginary parts of the steps.
+            resolution = (int(new_slice[xax].step.imag),
+                          int(new_slice[yax].step.imag))
+            sl = sl.to_frb(width = width, resolution = resolution,
+                           height = height)
         return sl
 
     def _slice_to_edges(self, ax, val):
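
The imaginary-step convention used here mirrors NumPy's np.mgrid, where a
complex step means "this many points" rather than a spacing:

    import numpy as np

    # Real step = spacing; imaginary step = number of points (inclusive).
    print(np.mgrid[0.0:1.0:0.25])  # [0.   0.25 0.5  0.75]
    print(np.mgrid[0.0:1.0:5j])    # [0.   0.25 0.5  0.75 1.  ]

    # ds.r reuses the same trick: the requested resolution rides along in
    # slice.step.imag.
    s = slice(None, None, 1024j)
    print(int(s.step.imag))        # 1024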

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/selection_data_containers.py
--- a/yt/data_objects/selection_data_containers.py
+++ b/yt/data_objects/selection_data_containers.py
@@ -49,7 +49,7 @@
         periodic its position will be corrected to lie inside
         the range [DLE,DRE) to ensure one and only one cell may
         match that point
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
         A dictionary of field parameters than can be accessed by derived
@@ -95,7 +95,7 @@
         that this is in the plane coordinates: so if you are casting along
         x, this will be (y, z).  If you are casting along y, this will be
         (z, x).  If you are casting along z, this will be (x, y).
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
          A dictionary of field parameters than can be accessed by derived
@@ -162,7 +162,7 @@
         The place where the ray starts.
     end_point : array-like set of 3 floats
         The place where the ray ends.
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
          A dictionary of field parameters than can be accessed by derived
@@ -248,7 +248,7 @@
     center : array_like, optional
         The 'center' supplied to fields that use it.  Note that this does
         not have to have `coord` as one value.  optional.
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
          A dictionary of field parameters than can be accessed by derived
@@ -355,7 +355,7 @@
     north_vector: array_like, optional
         An optional vector to describe the north-facing direction in the resulting
         plane.
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
          A dictionary of field parameters than can be accessed by derived
@@ -556,7 +556,7 @@
         bottom planes
     fields : array of fields, optional
         any fields to be pre-loaded in the cylinder object
-    ds: Dataset, optional
+    ds: ~yt.data_objects.static_output.Dataset, optional
         An optional dataset to use rather than self.ds
     field_parameters : dictionary
          A dictionary of field parameters than can be accessed by derived

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -1399,7 +1399,7 @@
         ----------
         symbol : string
             The symbol for the new unit.
-        value : (value, unit) tuple or YTQuantity
+        value : tuple or ~yt.units.yt_array.YTQuantity
             The definition of the new unit in terms of some other units. For example,
             one would define a new "mph" unit with (1.0, "mile/hr") 
         tex_repr : string, optional
@@ -1407,7 +1407,7 @@
             be generated automatically based on the symbol string.
         offset : float, optional
             The default offset for the unit. If not set, an offset of 0 is assumed.
-        prefixable : boolean, optional
+        prefixable : bool, optional
             Whether or not the new unit can use SI prefixes. Default: False
 
         Examples

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/data_objects/tests/test_dataset_access.py
--- a/yt/data_objects/tests/test_dataset_access.py
+++ b/yt/data_objects/tests/test_dataset_access.py
@@ -66,6 +66,16 @@
     assert_equal(dd["density"]*2.0, ds.r["density"])
     assert_equal(dd["gas", "density"]*2.0, ds.r["gas", "density"])
 
+def test_slice_from_r():
+    ds = fake_amr_ds(fields = ["density"])
+    sl1 = ds.r[0.5, :, :]
+    sl2 = ds.slice("x", 0.5)
+    assert_equal(sl1["density"], sl2["density"])
+
+    frb1 = sl1.to_frb(width = 1.0, height = 1.0, resolution = (1024, 512))
+    frb2 = ds.r[0.5, ::1024j, ::512j]
+    assert_equal(frb1["density"], frb2["density"])
+
 def test_particle_counts():
     ds = fake_random_ds(16, particles=100)
     assert ds.particle_type_counts == {'io': 100}

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/extern/tqdm/_tqdm.py
--- a/yt/extern/tqdm/_tqdm.py
+++ b/yt/extern/tqdm/_tqdm.py
@@ -217,7 +217,7 @@
         """
         Parameters
         ----------
-        iterable  : iterable, optional
+        iterable  : :obj:`!iterable`, optional
             Iterable to decorate with a progressbar.
             Leave blank [default: None] to manually manage the updates.
         desc  : str, optional

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/fields/astro_fields.py
--- a/yt/fields/astro_fields.py
+++ b/yt/fields/astro_fields.py
@@ -97,8 +97,8 @@
             X_H = data.get_field_parameter("X_H")
         else:
             X_H = 0.76
-        nenh = data["density"]*data["density"]
-        nenh /= mh*mh
+        nenh = data["density"]/mh
+        nenh *= nenh
         nenh *= 0.5*(1.+X_H)*X_H*data["cell_volume"]
         return nenh
     
@@ -119,7 +119,9 @@
     def _mazzotta_weighting(field, data):
         # Spectroscopic-like weighting field for galaxy clusters
         # Only useful as a weight_field for temperature, metallicity, velocity
-        return data["density"]*data["density"]*data["kT"]**-0.25/mh/mh
+        ret = data["density"]/mh
+        ret *= ret*data["kT"]**-0.25
+        return ret
 
     registry.add_field((ftype,"mazzotta_weighting"), sampling_type="cell", 
                        function=_mazzotta_weighting,
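
Both rewrites above divide by mh before squaring. The motivation (an
assumption on my part, but consistent with the change) is floating-point
range: for float32 field data, density**2 at typical gas densities underflows
to zero, while (density/mh)**2 stays near unity:

    import numpy as np

    mh = 1.6726219e-24                           # proton mass in g
    rho = np.array([1.0e-24], dtype=np.float32)  # ~1 particle/cm**3, in g/cm**3

    # Squaring first underflows float32 (smallest normal ~1.2e-38):
    print(rho * rho)        # [0.]
    # Dividing by mh first keeps the intermediate well inside range:
    print((rho / mh) ** 2)  # [~0.357]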

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/fields/xray_emission_fields.py
--- a/yt/fields/xray_emission_fields.py
+++ b/yt/fields/xray_emission_fields.py
@@ -159,7 +159,7 @@
         The maximum energy in keV for the energy band.
     redshift : float, optional
         The cosmological redshift of the source of the field. Default: 0.0.
-    metallicity : field or float, optional
+    metallicity : str or tuple of str or float, optional
         Either the name of a metallicity field or a single floating-point
         number specifying a spatially constant metallicity. Must be in
         solar units. If set to None, no metals will be assumed. Default: 

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/fits/misc.py
--- a/yt/frontends/fits/misc.py
+++ b/yt/frontends/fits/misc.py
@@ -49,7 +49,7 @@
 
     Parameters
     ----------
-    ds : `~yt.data_objects.static_output.Dataset`
+    ds : ~yt.data_objects.static_output.Dataset
         The FITS events file dataset to add the counts fields to.
     ebounds : list of tuples
         A list of tuples, one for each field, with (emin, emax) as the

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/gadget/data_structures.py
--- a/yt/frontends/gadget/data_structures.py
+++ b/yt/frontends/gadget/data_structures.py
@@ -37,34 +37,45 @@
 from .definitions import \
     gadget_header_specs, \
     gadget_field_specs, \
-    gadget_ptype_specs
+    gadget_ptype_specs, \
+    SNAP_FORMAT_2_OFFSET
 
 from .fields import \
     GadgetFieldInfo
 
+
 def _fix_unit_ordering(unit):
     if isinstance(unit[0], string_types):
         unit = unit[1], unit[0]
     return unit
-    
+
+
 def _get_gadget_format(filename):
-    # check and return gadget binary format
-    f = open(filename, 'rb')
-    (rhead,) = struct.unpack('<I',f.read(4))
-    f.close()
-    if (rhead == 134217728) | (rhead == 8):
-        return 2
-    elif (rhead == 65536) | (rhead == 256):
-        return 1
+    # check and return gadget binary format with file endianness
+    ff = open(filename, 'rb')
+    (rhead,) = struct.unpack('<I', ff.read(4))
+    ff.close()
+    if (rhead == 134217728):
+        return 2, '>'
+    elif (rhead == 8):
+        return 2, '<'
+    elif (rhead == 65536):
+        return 1, '>'
+    elif (rhead == 256):
+        return 1, '<'
     else:
         raise RuntimeError("Incorrect Gadget format %s!" % str(rhead))
 
+
 class GadgetBinaryFile(ParticleFile):
     def __init__(self, ds, io, filename, file_id):
+        gformat = _get_gadget_format(filename)
         with open(filename, "rb") as f:
-            if _get_gadget_format(filename) == 2:
-                f.seek(f.tell()+16)
-            self.header = read_record(f, ds._header_spec)
+            if gformat[0] == 2:
+                f.seek(f.tell() + SNAP_FORMAT_2_OFFSET)
+            self.header = read_record(f, ds._header_spec, endian=gformat[1])
+            if gformat[0] == 2:
+                f.seek(f.tell() + SNAP_FORMAT_2_OFFSET)
             self._position_offset = f.tell()
             f.seek(0, os.SEEK_END)
             self._file_size = f.tell()
@@ -76,6 +87,7 @@
             field_list, self.total_particles,
             self._position_offset, self._file_size)
 
+
 class GadgetDataset(SPHDataset):
     _index_class = ParticleIndex
     _file_class = GadgetBinaryFile
@@ -91,16 +103,17 @@
                  over_refine_factor=1,
                  kernel_name=None,
                  index_ptype="all",
-                 bounding_box = None,
-                 header_spec = "default",
-                 field_spec = "default",
-                 ptype_spec = "default",
+                 bounding_box=None,
+                 header_spec="default",
+                 field_spec="default",
+                 ptype_spec="default",
                  units_override=None,
                  unit_system="cgs",
                  use_dark_factor = False,
                  w_0 = -1.0,
                  w_a = 0.0):
-        if self._instantiated: return
+        if self._instantiated:
+            return
         self._header_spec = self._setup_binary_spec(
             header_spec, gadget_header_specs)
         self._field_spec = self._setup_binary_spec(
@@ -118,12 +131,12 @@
             bbox = np.array(bounding_box, dtype="float64")
             if bbox.shape == (2, 3):
                 bbox = bbox.transpose()
-            self.domain_left_edge = bbox[:,0]
-            self.domain_right_edge = bbox[:,1]
+            self.domain_left_edge = bbox[:, 0]
+            self.domain_right_edge = bbox[:, 1]
         else:
             self.domain_left_edge = self.domain_right_edge = None
         if units_override is not None:
-            raise RuntimeError("units_override is not supported for GadgetDataset. "+
+            raise RuntimeError("units_override is not supported for GadgetDataset. " +
                                "Use unit_base instead.")
 
         # Set dark energy parameters before cosmology object is created
@@ -158,11 +171,11 @@
     def _get_hvals(self):
         # The entries in this header are capitalized and named to match Table 4
         # in the GADGET-2 user guide.
-
+        gformat = _get_gadget_format(self.parameter_filename)
         f = open(self.parameter_filename, 'rb')
-        if _get_gadget_format(self.parameter_filename) == 2:
-            f.seek(f.tell()+16)
-        hvals = read_record(f, self._header_spec)
+        if gformat[0] == 2:
+            f.seek(f.tell() + SNAP_FORMAT_2_OFFSET)
+        hvals = read_record(f, self._header_spec, endian=gformat[1])
         for i in hvals:
             if len(hvals[i]) == 1:
                 hvals[i] = hvals[i][0]
@@ -200,7 +213,8 @@
         # It may be possible to deduce whether ComovingIntegration is on
         # somehow, but opinions on this vary.
         if self.omega_lambda == 0.0:
-            only_on_root(mylog.info, "Omega Lambda is 0.0, so we are turning off Cosmology.")
+            only_on_root(
+                mylog.info, "Omega Lambda is 0.0, so we are turning off Cosmology.")
             self.hubble_constant = 1.0  # So that scaling comes out correct
             self.cosmological_simulation = 0
             self.current_redshift = 0.0
@@ -230,14 +244,17 @@
         self.file_count = hvals["NumFiles"]
 
     def _set_code_unit_attributes(self):
-        # If no units passed in by user, set a sane default (Gadget-2 users guide).
+        # If no units passed in by user, set a sane default (Gadget-2 users
+        # guide).
         if self._unit_base is None:
             if self.cosmological_simulation == 1:
-                only_on_root(mylog.info, "Assuming length units are in kpc/h (comoving)")
-                self._unit_base = dict(length = (1.0, "kpccm/h"))
+                only_on_root(
+                    mylog.info, "Assuming length units are in kpc/h (comoving)")
+                self._unit_base = dict(length=(1.0, "kpccm/h"))
             else:
-                only_on_root(mylog.info, "Assuming length units are in kpc (physical)")
-                self._unit_base = dict(length = (1.0, "kpc"))
+                only_on_root(
+                    mylog.info, "Assuming length units are in kpc (physical)")
+                self._unit_base = dict(length=(1.0, "kpc"))
 
         # If units passed in by user, decide what to do about
         # co-moving and factors of h
@@ -314,10 +331,10 @@
         is a Gadget binary file, and endianswap is the endianness character '>' or '<'.
         '''
         try:
-            f = open(filename,'rb')
+            f = open(filename, 'rb')
         except IOError:
             try:
-                f = open(filename+".0")
+                f = open(filename + ".0")
             except IOError:
                 return False, 1
 
@@ -327,7 +344,7 @@
         # The int32 following the header (first 4+256 bytes) must equal this
         # number.
         try:
-            (rhead,) = struct.unpack('<I',f.read(4))
+            (rhead,) = struct.unpack('<I', f.read(4))
         except struct.error:
             f.close()
             return False, 1
@@ -339,25 +356,25 @@
         # Enabled Format2 here
         elif rhead == 8:
             f.close()
-            return True, 'float32'
+            return True, 'f8'
         elif rhead == 134217728:
             f.close()
-            return True, 'float32'
+            return True, 'f4'
         else:
             f.close()
             return False, 1
         # Read in particle number from header
-        np0 = sum(struct.unpack(endianswap+'IIIIII',f.read(6*4)))
+        np0 = sum(struct.unpack(endianswap + 'IIIIII', f.read(6 * 4)))
         # Read in size of position block. It should be 4 bytes per float,
         # with 3 coordinates (x,y,z) per particle. (12 bytes per particle)
-        f.seek(4+256+4,0)
-        np1 = struct.unpack(endianswap+'I',f.read(4))[0]/(4*3)
+        f.seek(4 + 256 + 4, 0)
+        np1 = struct.unpack(endianswap + 'I', f.read(4))[0] / (4 * 3)
         f.close()
         # Compare
         if np0 == np1:
-            return True, 'float32'
+            return True, 'f4'
         elif np1 == 2*np0:
-            return True, 'float64'
+            return True, 'f8'
         else:
             return False, 1
 
@@ -366,6 +383,7 @@
         # First 4 bytes used to check load
         return GadgetDataset._validate_header(args[0])[0]
 
+
 class GadgetHDF5Dataset(GadgetDataset):
     _file_class = ParticleFile
     _field_info_class = GadgetFieldInfo
@@ -373,17 +391,17 @@
     _suffix = ".hdf5"
 
     def __init__(self, filename, dataset_type="gadget_hdf5",
-                 unit_base = None, n_ref=64,
+                 unit_base=None, n_ref=64,
                  over_refine_factor=1,
                  kernel_name=None,
                  index_ptype="all",
-                 bounding_box = None,
+                 bounding_box=None,
                  units_override=None,
                  unit_system="cgs"):
         self.storage_filename = None
         filename = os.path.abspath(filename)
         if units_override is not None:
-            raise RuntimeError("units_override is not supported for GadgetHDF5Dataset. "+
+            raise RuntimeError("units_override is not supported for GadgetHDF5Dataset. " +
                                "Use unit_base instead.")
         super(GadgetHDF5Dataset, self).__init__(
             filename, dataset_type, unit_base=unit_base, n_ref=n_ref,
@@ -408,8 +426,6 @@
         handle.close()
         return uvals
 
-
-
     def _set_owls_eagle(self):
 
         self.dimensionality = 3
@@ -428,7 +444,8 @@
 
         if self.domain_left_edge is None:
             self.domain_left_edge = np.zeros(3, "float64")
-            self.domain_right_edge = np.ones(3, "float64") * self.parameters["BoxSize"]
+            self.domain_right_edge = np.ones(
+                3, "float64") * self.parameters["BoxSize"]
 
         nz = 1 << self.over_refine_factor
         self.domain_dimensions = np.ones(3, "int32") * nz
@@ -452,9 +469,11 @@
 
         # note the contents of the HDF5 Units group are in _unit_base
         # note the velocity stored on disk is sqrt(a) dx/dt
-        self.length_unit = self.quan(self._unit_base["UnitLength_in_cm"], 'cmcm/h')
+        self.length_unit = self.quan(
+            self._unit_base["UnitLength_in_cm"], 'cmcm/h')
         self.mass_unit = self.quan(self._unit_base["UnitMass_in_g"], 'g/h')
-        self.velocity_unit = self.quan(self._unit_base["UnitVelocity_in_cm_per_s"], 'cm/s')
+        self.velocity_unit = self.quan(
+            self._unit_base["UnitVelocity_in_cm_per_s"], 'cm/s')
         self.time_unit = self.quan(self._unit_base["UnitTime_in_s"], 's/h')
 
     @classmethod
@@ -465,7 +484,7 @@
         try:
             fh = h5py.File(args[0], mode='r')
             valid = all(ng in fh["/"] for ng in need_groups) and \
-              not any(vg in fh["/"] for vg in veto_groups)
+                not any(vg in fh["/"] for vg in veto_groups)
             fh.close()
         except:
             valid = False
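
A note on the magic numbers in _get_gadget_format above: the first four bytes of a snapshot hold the Fortran record marker of the first block, which is 8 for SNAP format 2 (the block-name record) and 256 for format 1 (the 256-byte header). Read with the wrong byte order, those values become 134217728 and 65536, which is the whole detection trick. A quick self-contained check (illustrative only, not part of the changeset):

    import struct

    # 8 (format 2's block-name record) and 256 (format 1's header record)
    # byte-swap into the other two magic values the function checks for:
    assert struct.unpack('>I', struct.pack('<I', 8))[0] == 134217728
    assert struct.unpack('>I', struct.pack('<I', 256))[0] == 65536

    # SNAP_FORMAT_2_OFFSET = 16 then reads as the size of the name record
    # format 2 wraps around each block: 4-byte marker + 8-byte payload +
    # 4-byte marker (an inference from the seeks above, not stated in the diff).
    assert 4 + 8 + 4 == 16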

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/gadget/definitions.py
--- a/yt/frontends/gadget/definitions.py
+++ b/yt/frontends/gadget/definitions.py
@@ -93,3 +93,5 @@
     "PartType4",
     "PartType5"
 )
+
+SNAP_FORMAT_2_OFFSET = 16

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/gadget/io.py
--- a/yt/frontends/gadget/io.py
+++ b/yt/frontends/gadget/io.py
@@ -30,7 +30,8 @@
     _get_gadget_format
 
 from .definitions import \
-    gadget_hdf5_ptypes
+    gadget_hdf5_ptypes, \
+    SNAP_FORMAT_2_OFFSET
 
 
 class IOHandlerGadgetHDF5(BaseIOHandler):
@@ -39,8 +40,7 @@
     _known_ptypes = gadget_hdf5_ptypes
     _var_mass = None
     _element_names = ('Hydrogen', 'Helium', 'Carbon', 'Nitrogen', 'Oxygen',
-                       'Neon', 'Magnesium', 'Silicon', 'Iron' )
-
+                      'Neon', 'Magnesium', 'Silicon', 'Iron')
 
     @property
     def var_mass(self):
@@ -68,9 +68,9 @@
             for ptype, field_list in sorted(ptf.items()):
                 if data_file.total_particles[ptype] == 0:
                     continue
-                x = f["/%s/Coordinates" % ptype][:,0].astype("float64")
-                y = f["/%s/Coordinates" % ptype][:,1].astype("float64")
-                z = f["/%s/Coordinates" % ptype][:,2].astype("float64")
+                x = f["/%s/Coordinates" % ptype][:, 0].astype("float64")
+                y = f["/%s/Coordinates" % ptype][:, 1].astype("float64")
+                z = f["/%s/Coordinates" % ptype][:, 2].astype("float64")
                 yield ptype, (x, y, z)
             f.close()
 
@@ -88,28 +88,29 @@
                 g = f["/%s" % ptype]
                 coords = g["Coordinates"][:].astype("float64")
                 mask = selector.select_points(
-                            coords[:,0], coords[:,1], coords[:,2], 0.0)
+                    coords[:, 0], coords[:, 1], coords[:, 2], 0.0)
                 del coords
-                if mask is None: continue
+                if mask is None:
+                    continue
                 for field in field_list:
 
                     if field in ("Mass", "Masses") and \
-                        ptype not in self.var_mass:
+                            ptype not in self.var_mass:
                         data = np.empty(mask.sum(), dtype="float64")
                         ind = self._known_ptypes.index(ptype)
                         data[:] = self.ds["Massarr"][ind]
 
                     elif field in self._element_names:
                         rfield = 'ElementAbundance/' + field
-                        data = g[rfield][:][mask,...]
+                        data = g[rfield][:][mask, ...]
                     elif field.startswith("Metallicity_"):
                         col = int(field.rsplit("_", 1)[-1])
-                        data = g["Metallicity"][:,col][mask]
+                        data = g["Metallicity"][:, col][mask]
                     elif field.startswith("Chemistry_"):
                         col = int(field.rsplit("_", 1)[-1])
-                        data = g["ChemistryAbundances"][:,col][mask]
+                        data = g["ChemistryAbundances"][:, col][mask]
                     else:
-                        data = g[field][:][mask,...]
+                        data = g[field][:][mask, ...]
 
                     yield (ptype, field), data
             f.close()
@@ -127,16 +128,18 @@
         morton = np.empty(pcount, dtype='uint64')
         ind = 0
         for key in keys:
-            if not key.startswith("PartType"): continue
-            if "Coordinates" not in f[key]: continue
+            if not key.startswith("PartType"):
+                continue
+            if "Coordinates" not in f[key]:
+                continue
             ds = f[key]["Coordinates"]
-            dt = ds.dtype.newbyteorder("N") # Native
+            dt = ds.dtype.newbyteorder("N")  # Native
             pos = np.empty(ds.shape, dtype=dt)
             pos[:] = ds
             regions.add_data_file(pos, data_file.file_id,
                                   data_file.ds.filter_bbox)
-            morton[ind:ind+pos.shape[0]] = compute_morton(
-                pos[:,0], pos[:,1], pos[:,2],
+            morton[ind:ind + pos.shape[0]] = compute_morton(
+                pos[:, 0], pos[:, 1], pos[:, 2],
                 data_file.ds.domain_left_edge,
                 data_file.ds.domain_right_edge,
                 data_file.ds.filter_bbox)
@@ -151,7 +154,6 @@
         npart = dict(("PartType%s" % (i), v) for i, v in enumerate(pcount))
         return npart
 
-
     def _identify_fields(self, data_file):
         f = h5py.File(data_file.filename, "r")
         fields = []
@@ -205,8 +207,10 @@
         f.close()
         return fields, {}
 
+
 ZeroMass = object()
 
+
 class IOHandlerGadgetBinary(BaseIOHandler):
     _dataset_type = "gadget_binary"
     _vector_fields = (("Coordinates", 3),
@@ -231,13 +235,17 @@
     #   TSTP    (only if enabled in makefile)
 
     _var_mass = None
+    _format = None
 
     def __init__(self, ds, *args, **kwargs):
         self._vector_fields = dict(self._vector_fields)
         self._fields = ds._field_spec
         self._ptypes = ds._ptype_spec
         self.data_files = set([])
-        self._format =  _get_gadget_format(ds.parameter_filename)#default gadget format 1
+        gformat = _get_gadget_format(ds.parameter_filename)
+        # gadget format 1 original, 2 with block name
+        self._format = gformat[0]
+        self._endian = gformat[1]
         super(IOHandlerGadgetBinary, self).__init__(ds, *args, **kwargs)
 
     @property
@@ -266,8 +274,8 @@
                 # This is where we could implement sub-chunking
                 f.seek(poff[ptype, "Coordinates"], os.SEEK_SET)
                 pos = self._read_field_from_file(f,
-                            tp[ptype], "Coordinates")
-                yield ptype, (pos[:,0], pos[:,1], pos[:,2])
+                                                 tp[ptype], "Coordinates")
+                yield ptype, (pos[:, 0], pos[:, 1], pos[:, 2])
             f.close()
 
     def _read_particle_fields(self, chunks, ptf, selector):
@@ -282,11 +290,12 @@
             for ptype, field_list in sorted(ptf.items()):
                 f.seek(poff[ptype, "Coordinates"], os.SEEK_SET)
                 pos = self._read_field_from_file(f,
-                            tp[ptype], "Coordinates")
+                                                 tp[ptype], "Coordinates")
                 mask = selector.select_points(
-                    pos[:,0], pos[:,1], pos[:,2], 0.0)
+                    pos[:, 0], pos[:, 1], pos[:, 2], 0.0)
                 del pos
-                if mask is None: continue
+                if mask is None:
+                    continue
                 for field in field_list:
                     if field == "Mass" and ptype not in self.var_mass:
                         data = np.empty(mask.sum(), dtype="float64")
@@ -297,66 +306,90 @@
                         continue
                     f.seek(poff[ptype, field], os.SEEK_SET)
                     data = self._read_field_from_file(f, tp[ptype], field)
-                    data = data[mask,...]
+                    data = data[mask, ...]
                     yield (ptype, field), data
             f.close()
 
     def _read_field_from_file(self, f, count, name):
-        if count == 0: return
+        if count == 0:
+            return
         if name == "ParticleIDs":
-            dt = "uint32"
+            dt = self._endian + "u4"
         else:
-            dt = self._float_type
+            dt = self._endian + self._float_type
         if name in self._vector_fields:
             count *= self._vector_fields[name]
-        arr = np.fromfile(f, dtype=dt, count = count)
+        arr = np.fromfile(f, dtype=dt, count=count)
         if name in self._vector_fields:
             factor = self._vector_fields[name]
-            arr = arr.reshape((count//factor, factor), order="C")
-        return arr.astype("float64")
+            arr = arr.reshape((count // factor, factor), order="C")
+        return arr.astype(self._float_type)
+
+    def _get_morton_from_position(self, data_file, count, offset_count,
+                                  regions, DLE, DRE):
+        with open(data_file.filename, "rb") as f:
+            # We add on an additionally 4 for the first record.
+            f.seek(data_file._position_offset + 4 + offset_count * 12)
+            # The first total_particles * 3 values are positions
+            pp = np.fromfile(f, dtype=self._endian + self._float_type,
+                             count=count * 3)
+            pp.shape = (count, 3)
+            pp = pp.astype(self._float_type)
+        regions.add_data_file(pp, data_file.file_id,
+                              data_file.ds.filter_bbox)
+        morton = compute_morton(pp[:, 0], pp[:, 1], pp[:, 2], DLE, DRE,
+                                data_file.ds.filter_bbox)
+        return morton
 
     def _initialize_index(self, data_file, regions):
-        count = sum(data_file.total_particles.values())
         DLE = data_file.ds.domain_left_edge
         DRE = data_file.ds.domain_right_edge
         self._float_type = data_file.ds._validate_header(data_file.filename)[1]
-        self._field_size = np.dtype(self._float_type).itemsize
-        with open(data_file.filename, "rb") as f:
-            # We add on an additionally 4 for the first record.
-            f.seek(data_file._position_offset + 4)
-            # The first total_particles * 3 values are positions
-            pp = np.fromfile(f, dtype=self._float_type, count=count*3)
-            pp.shape = (count, 3)
-        regions.add_data_file(pp, data_file.file_id, data_file.ds.filter_bbox)
-        morton = compute_morton(pp[:,0], pp[:,1], pp[:,2], DLE, DRE,
-                                data_file.ds.filter_bbox)
-        return morton
+        if self.index_ptype == "all":
+            count = sum(data_file.total_particles.values())
+            return self._get_morton_from_position(
+                data_file, count, 0, regions, DLE, DRE)
+        else:
+            idpos = self._ptypes.index(self.index_ptype)
+            count = data_file.total_particles.get(self.index_ptype)
+            account = [0] + [data_file.total_particles.get(ptype)
+                             for ptype in self._ptypes]
+            account = np.cumsum(account)
+            return self._get_morton_from_position(
+                data_file, count, account[idpos], regions, DLE, DRE)
 
     def _count_particles(self, data_file):
         npart = dict((self._ptypes[i], v)
-            for i, v in enumerate(data_file.header["Npart"]))
+                     for i, v in enumerate(data_file.header["Npart"]))
         return npart
 
     # header is 256, but we have 4 at beginning and end for ints
+    _field_size = 4
     def _calculate_field_offsets(self, field_list, pcount,
-                                 offset, file_size = None):
+                                 offset, file_size=None):
         # field_list is (ftype, fname) but the blocks are ordered
         # (fname, ftype) in the file.
-        pos = offset
+        if self._format == 2:
+            # Need to subtract offset due to extra header block
+            pos = offset - SNAP_FORMAT_2_OFFSET
+        else:
+            pos = offset
         fs = self._field_size
         offsets = {}
+
         for field in self._fields:
             if not isinstance(field, string_types):
                 field = field[0]
-            if not any( (ptype, field) in field_list
-                        for ptype in self._ptypes):
+            if not any((ptype, field) in field_list
+                       for ptype in self._ptypes):
                 continue
             if self._format == 2:
-                pos += 20 #skip block header
+                pos += 20  # skip block header
             elif self._format == 1:
                 pos += 4
             else:
-                raise RuntimeError("incorrect Gadget format %s!" % str(self._format))
+                raise RuntimeError(
+                    "incorrect Gadget format %s!" % str(self._format))
             any_ptypes = False
             for ptype in self._ptypes:
                 if field == "Mass" and ptype not in self.var_mass:
@@ -370,9 +403,10 @@
                 else:
                     pos += pcount[ptype] * fs
             pos += 4
-            if not any_ptypes: pos -= 8
+            if not any_ptypes:
+                pos -= 8
         if file_size is not None:
-            if (file_size != pos) & (self._format == 1): #ignore the rest of format 2 
+            if (file_size != pos) & (self._format == 1):  # ignore the rest of format 2
                 mylog.warning("Your Gadget-2 file may have extra " +
                               "columns or different precision!" +
                               " (%s file vs %s computed)",
@@ -385,13 +419,15 @@
         tp = domain.total_particles
         for i, ptype in enumerate(self._ptypes):
             count = tp[ptype]
-            if count == 0: continue
+            if count == 0:
+                continue
             m = domain.header["Massarr"][i]
             for field in self._fields:
                 if isinstance(field, tuple):
                     field, req = field
                     if req is ZeroMass:
-                        if m > 0.0 : continue
+                        if m > 0.0:
+                            continue
                     elif isinstance(req, tuple) and ptype in req:
                         pass
                     elif req != ptype:
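
The endianness plumbing in this file works because NumPy accepts byte-order prefixes directly in dtype strings, so self._endian + self._float_type composes into strings like '>f4'. A minimal round-trip sketch (file path and values invented for illustration):

    import numpy as np

    endian, float_type = '>', 'f4'    # as detected from the record marker
    dt = endian + float_type          # '>f4': big-endian 32-bit float

    # Write big-endian floats, read them back with the endian-qualified
    # dtype, then convert to native order as _read_field_from_file does.
    np.arange(6, dtype=dt).tofile('/tmp/be_block.bin')
    arr = np.fromfile('/tmp/be_block.bin', dtype=dt).reshape(2, 3)
    arr = arr.astype(float_type)      # native byte order for analysis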

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/gadget/tests/test_outputs.py
--- a/yt/frontends/gadget/tests/test_outputs.py
+++ b/yt/frontends/gadget/tests/test_outputs.py
@@ -25,6 +25,7 @@
 
 isothermal_h5 = "IsothermalCollapse/snap_505.hdf5"
 isothermal_bin = "IsothermalCollapse/snap_505"
+BE_Gadget = "BigEndianGadgetBinary/BigEndianGadgetBinary"
 
 # This maps from field names to weight field names to use for projections
 iso_fields = OrderedDict(
@@ -41,13 +42,17 @@
 )
 iso_kwargs = dict(bounding_box=[[-3, 3], [-3, 3], [-3, 3]])
 
+
 @requires_file(isothermal_h5)
 @requires_file(isothermal_bin)
+@requires_file(BE_Gadget)
 def test_GadgetDataset():
     assert isinstance(data_dir_load(isothermal_h5, kwargs=iso_kwargs),
                       GadgetHDF5Dataset)
     assert isinstance(data_dir_load(isothermal_bin, kwargs=iso_kwargs),
                       GadgetDataset)
+    assert isinstance(data_dir_load(BE_Gadget, kwargs=''),
+                      GadgetDataset)
 
 
 @requires_ds(isothermal_h5)

diff -r baaf6482ca187493a3ce4b908a0d87393f465626 -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 yt/frontends/gadget_fof/data_structures.py
--- a/yt/frontends/gadget_fof/data_structures.py
+++ b/yt/frontends/gadget_fof/data_structures.py
@@ -462,8 +462,8 @@
 class GagdetFOFHaloContainer(YTSelectionContainer):
     """
     Create a data container to get member particles and individual
-    values from halos and subhalos.  Halo mass, position, and
-    velocity are set as attributes.  Halo IDs are accessible
+    values from halos and subhalos. Halo mass, position, and
+    velocity are set as attributes. Halo IDs are accessible
     through the field, "member_ids".  Other fields that are one
     value per halo are accessible as normal.  The field list for
     halo objects can be seen in `ds.halos_field_list`.
@@ -473,13 +473,13 @@
     ptype : string
         The type of halo, either "Group" for the main halo or
         "Subhalo" for subhalos.
-    particle_identifier : int or tuple of (int, int)
+    particle_identifier : int or tuple of ints
         The halo or subhalo id.  If requesting a subhalo, the id
         can also be given as a tuple of the main halo id and
         subgroup id, such as (1, 4) for subgroup 4 of halo 1.
 
-    Halo Container Attributes
-    -------------------------
+    Attributes
+    ----------
     particle_identifier : int
         The id of the halo or subhalo.
     group_identifier : int
@@ -496,14 +496,13 @@
     velocity : array of floats
         Halo velocity.
 
-    Relevant Fields
-    ---------------
-    particle_number :
-        number of particles
-    subhalo_number :
-        number of subhalos
-    group_identifier :
-        id of parent group for subhalos
+    Note
+    ----
+    Relevant Fields:
+
+     * particle_number - number of particles
+     * subhalo_number - number of subhalos
+     * group_identifier - id of parent group for subhalos
 
     Examples
     --------

This diff is so big that we needed to truncate the remainder.
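
For orientation, the container documented above is normally reached through the dataset's halo factory; a rough sketch of the access pattern the docstring describes (the file name is hypothetical):

    import yt

    ds = yt.load("fof_subhalo_tab_042.0.hdf5")   # hypothetical file name
    halo = ds.halo("Subhalo", (1, 4))            # subgroup 4 of halo 1
    # Scalar halo properties are attributes; member particles come
    # through the "member_ids" field, as the docstring notes.
    print(halo.mass, halo.position, halo.velocity)
    print(halo["member_ids"])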

https://bitbucket.org/yt_analysis/yt/commits/c3e5bb90b931/
Changeset:   c3e5bb90b931
User:        jisuoqing
Date:        2017-05-24 02:52:55+00:00
Summary:     Add minorticks for colorbar when symlog is used
Affected #:  1 file

diff -r 33dba5d78a2d0ceac9d6cc089e6415a0669cd461 -r c3e5bb90b9312de01b87df045a65c8a57f852a9e yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -982,7 +982,8 @@
             # colorbar minorticks
             if f not in self._cbar_minorticks:
                 self._cbar_minorticks[f] = True
-            if self._cbar_minorticks[f] is True and MPL_VERSION < LooseVersion('2.0.0'):
+            if (self._cbar_minorticks[f] is True and MPL_VERSION < LooseVersion('2.0.0')) \
+                                                 or self._field_transform[f] == symlog_transform:
                 if self._field_transform[f] == linear_transform:
                     self.plots[f].cax.minorticks_on()
                 else:
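
Context for this changeset: a field's colorbar uses symlog scaling when set_log is given a linear threshold, which is the case the added condition catches. A small usage sketch (dataset path and threshold are illustrative):

    import yt

    ds = yt.load("mydata0010")                     # placeholder dataset
    slc = yt.SlicePlot(ds, "z", "density")
    # A nonzero linthresh switches the density colorbar to symlog,
    # the case this changeset adds minor ticks for.
    slc.set_log("density", True, linthresh=1e-27)
    slc.save()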


https://bitbucket.org/yt_analysis/yt/commits/d102eb460a5c/
Changeset:   d102eb460a5c
User:        jisuoqing
Date:        2017-05-24 02:56:14+00:00
Summary:     Revert to original file
Affected #:  1 file

diff -r c3e5bb90b9312de01b87df045a65c8a57f852a9e -r d102eb460a5c2c901ce757bebff76d8c78006c51 yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -982,8 +982,7 @@
             # colorbar minorticks
             if f not in self._cbar_minorticks:
                 self._cbar_minorticks[f] = True
-            if (self._cbar_minorticks[f] is True and MPL_VERSION < LooseVersion('2.0.0')) \
-                                                 or self._field_transform[f] == symlog_transform:
+            if self._cbar_minorticks[f] is True and MPL_VERSION < LooseVersion('2.0.0'):
                 if self._field_transform[f] == linear_transform:
                     self.plots[f].cax.minorticks_on()
                 else:


https://bitbucket.org/yt_analysis/yt/commits/b635bfc5734e/
Changeset:   b635bfc5734e
User:        jisuoqing
Date:        2017-05-24 03:02:34+00:00
Summary:     Merge github.com:yt-project/yt
Affected #:  3 files

diff -r d102eb460a5c2c901ce757bebff76d8c78006c51 -r b635bfc5734e84bdfea7deda675b94df8db28198 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -93,14 +93,14 @@
    "outputs": [],
    "source": [
     "dens = np.zeros((nx,ny,nz))\n",
-    "dens[:,:,nz/2-3:nz/2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
+    "dens[:,:,nz//2-3:nz//2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
     "temp = np.zeros((nx,ny,nz))\n",
-    "temp[:,:,nz/2-3:nz/2+3] = 1.0e5 # Isothermal\n",
+    "temp[:,:,nz//2-3:nz//2+3] = 1.0e5 # Isothermal\n",
     "vel_theta = 100.*r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
     "velx = np.zeros((nx,ny,nz))\n",
     "vely = np.zeros((nx,ny,nz))\n",
-    "velx[:,:,nz/2-3:nz/2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
-    "vely[:,:,nz/2-3:nz/2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "velx[:,:,nz//2-3:nz//2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "vely[:,:,nz//2-3:nz//2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
     "dens[r > R] = 0.0\n",
     "temp[r > R] = 0.0\n",
     "velx[r > R] = 0.0\n",

diff -r d102eb460a5c2c901ce757bebff76d8c78006c51 -r b635bfc5734e84bdfea7deda675b94df8db28198 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Particle_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -19,7 +19,7 @@
    "source": [
     "import numpy as np\n",
     "\n",
-    "n_particles = 5e6\n",
+    "n_particles = 5000000\n",
     "\n",
     "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
     "\n",

diff -r d102eb460a5c2c901ce757bebff76d8c78006c51 -r b635bfc5734e84bdfea7deda675b94df8db28198 yt/frontends/art/data_structures.py
--- a/yt/frontends/art/data_structures.py
+++ b/yt/frontends/art/data_structures.py
@@ -277,7 +277,7 @@
             # domain dimensions is the number of root *cells*
             self.domain_dimensions = np.ones(3, dtype='int64')*est
             self.root_grid_mask_offset = f.tell()
-            self.root_nocts = self.domain_dimensions.prod()/8
+            self.root_nocts = self.domain_dimensions.prod() // 8
             self.root_ncells = self.root_nocts*8
             mylog.debug("Estimating %i cells on a root grid side," +
                         "%i root octs", est, self.root_nocts)
@@ -705,7 +705,7 @@
             tr[field] = np.zeros(cell_count, 'float64')
         data = _read_root_level(content, self.domain.level_child_offsets,
                                 self.domain.level_count)
-        ns = (self.domain.ds.domain_dimensions.prod() / 8, 8)
+        ns = (self.domain.ds.domain_dimensions.prod() // 8, 8)
         for field, fi in zip(fields, field_idxs):
             source[field] = np.empty(ns, dtype="float64", order="C")
             dt = data[fi,:].reshape(self.domain.ds.domain_dimensions,
@@ -781,7 +781,7 @@
             self._count_art_octs(f,  self.ds.child_grid_offset,
                 self.ds.min_level, self.ds.max_level)
         # remember that the root grid is by itself; manually add it back in
-        inoll[0] = self.ds.domain_dimensions.prod()/8
+        inoll[0] = self.ds.domain_dimensions.prod() // 8
         _level_child_offsets[0] = self.ds.root_grid_offset
         self.nhydrovars = nhydrovars
         self.inoll = inoll  # number of octs


https://bitbucket.org/yt_analysis/yt/commits/b477a656ca44/
Changeset:   b477a656ca44
User:        ngoldbaum
Date:        2017-05-24 14:27:27+00:00
Summary:     Merge pull request #1417 from jisuoqing/master

Make aspect ratio consistent with plot window when annotate_line_integral_convolution is called
Affected #:  2 files

diff -r 1dbd74959b56ee16018f20d0deecd5ac1d27e688 -r b477a656ca440f55267d6d556ed6758726272405 yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -2370,7 +2370,7 @@
 
         if self.const_alpha:
             plot._axes.imshow(lic_data_clip, extent=extent, cmap=self.cmap,
-                              alpha=self.alpha, origin='lower')
+                              alpha=self.alpha, origin='lower', aspect="auto")
         else:
             lic_data_rgba = cm.ScalarMappable(norm=None, cmap=self.cmap).\
                             to_rgba(lic_data_clip)
@@ -2378,7 +2378,7 @@
                                     / (self.lim[1] - self.lim[0])
             lic_data_rgba[...,3] = lic_data_clip_rescale * self.alpha
             plot._axes.imshow(lic_data_rgba, extent=extent, cmap=self.cmap,
-                              origin='lower')
+                              origin='lower', aspect="auto")
 
         return plot
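
A usage sketch for the callback this pull request fixes (dataset path is a placeholder; the field pair follows the callback's existing signature):

    import yt

    ds = yt.load("mydata0010")                   # placeholder dataset
    slc = yt.SlicePlot(ds, "z", "density")
    # With aspect="auto" on the imshow calls above, the LIC overlay now
    # fills the axes exactly like the plot-window image beneath it.
    slc.annotate_line_integral_convolution("velocity_x", "velocity_y")
    slc.save()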

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
it because you have the service enabled and are a recipient of
this email.


