[yt-svn] commit/yt: 31 new changesets

commits-noreply at bitbucket.org
Wed Jan 27 19:26:25 PST 2016


31 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/0e562f7ae0ee/
Changeset:   0e562f7ae0ee
Branch:      yt
User:        ngoldbaum
Date:        2015-10-20 21:35:06+00:00
Summary:     Converting quickstart notebooks to nbformat version 4
Affected #:  6 files

diff -r 35741b799d57ac96134c4f869cfa92e8c4f9e2be -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb doc/source/quickstart/1)_Introduction.ipynb
--- a/doc/source/quickstart/1)_Introduction.ipynb
+++ b/doc/source/quickstart/1)_Introduction.ipynb
@@ -1,72 +1,83 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Welcome to the yt quickstart!\n",
+    "\n",
+    "In this brief tutorial, we'll go over how to load up data, analyze things, inspect your data, and make some visualizations.\n",
+    "\n",
+    "Our documentation page can provide information on a variety of the commands that are used here, both in narrative documentation as well as recipes for specific functionality in our cookbook.  The documentation exists at http://yt-project.org/doc/.  If you encounter problems, look for help here: http://yt-project.org/doc/help/index.html.\n",
+    "\n",
+    "## Acquiring the datasets for this tutorial\n",
+    "\n",
+    "If you are executing these tutorials interactively, you need some sample datasets on which to run the code.  You can download these datasets at http://yt-project.org/data/.  The datasets necessary for each lesson are noted next to the corresponding tutorial.\n",
+    "\n",
+    "## What's Next?\n",
+    "\n",
+    "The Notebooks are meant to be explored in this order:\n",
+    "\n",
+    "1. Introduction\n",
+    "2. Data Inspection (IsolatedGalaxy dataset)\n",
+    "3. Simple Visualization (enzo_tiny_cosmology & Enzo_64 datasets)\n",
+    "4. Data Objects and Time Series (IsolatedGalaxy dataset)\n",
+    "5. Derived Fields and Profiles (IsolatedGalaxy dataset)\n",
+    "6. Volume Rendering (IsolatedGalaxy dataset)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The following code will download the data needed for this tutorial automatically using `curl`. It may take some time so please wait when the kernel is busy. You will need to set `download_datasets` to True before using it."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "download_datasets = False\n",
+    "if download_datasets:\n",
+    "    !curl -sSO http://yt-project.org/data/enzo_tiny_cosmology.tar.gz\n",
+    "    print \"Got enzo_tiny_cosmology\"\n",
+    "    !tar xzf enzo_tiny_cosmology.tar.gz\n",
+    "    \n",
+    "    !curl -sSO http://yt-project.org/data/Enzo_64.tar.gz\n",
+    "    print \"Got Enzo_64\"\n",
+    "    !tar xzf Enzo_64.tar.gz\n",
+    "    \n",
+    "    !curl -sSO http://yt-project.org/data/IsolatedGalaxy.tar.gz\n",
+    "    print \"Got IsolatedGalaxy\"\n",
+    "    !tar xzf IsolatedGalaxy.tar.gz\n",
+    "    \n",
+    "    print \"All done!\""
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:59d4f454218415689bef43d45dfcd59bf9913cfbeb416efa596fa99ff5c44856"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.4.3"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Welcome to the yt quickstart!\n",
-      "\n",
-      "In this brief tutorial, we'll go over how to load up data, analyze things, inspect your data, and make some visualizations.\n",
-      "\n",
-      "Our documentation page can provide information on a variety of the commands that are used here, both in narrative documentation as well as recipes for specific functionality in our cookbook.  The documentation exists at http://yt-project.org/doc/.  If you encounter problems, look for help here: http://yt-project.org/doc/help/index.html.\n",
-      "\n",
-      "## Acquiring the datasets for this tutorial\n",
-      "\n",
-      "If you are executing these tutorials interactively, you need some sample datasets on which to run the code.  You can download these datasets at http://yt-project.org/data/.  The datasets necessary for each lesson are noted next to the corresponding tutorial.\n",
-      "\n",
-      "## What's Next?\n",
-      "\n",
-      "The Notebooks are meant to be explored in this order:\n",
-      "\n",
-      "1. Introduction\n",
-      "2. Data Inspection (IsolatedGalaxy dataset)\n",
-      "3. Simple Visualization (enzo_tiny_cosmology & Enzo_64 datasets)\n",
-      "4. Data Objects and Time Series (IsolatedGalaxy dataset)\n",
-      "5. Derived Fields and Profiles (IsolatedGalaxy dataset)\n",
-      "6. Volume Rendering (IsolatedGalaxy dataset)"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The following code will download the data needed for this tutorial automatically using `curl`. It may take some time so please wait when the kernel is busy. You will need to set `download_datasets` to True before using it."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "download_datasets = False\n",
-      "if download_datasets:\n",
-      "    !curl -sSO http://yt-project.org/data/enzo_tiny_cosmology.tar.gz\n",
-      "    print \"Got enzo_tiny_cosmology\"\n",
-      "    !tar xzf enzo_tiny_cosmology.tar.gz\n",
-      "    \n",
-      "    !curl -sSO http://yt-project.org/data/Enzo_64.tar.gz\n",
-      "    print \"Got Enzo_64\"\n",
-      "    !tar xzf Enzo_64.tar.gz\n",
-      "    \n",
-      "    !curl -sSO http://yt-project.org/data/IsolatedGalaxy.tar.gz\n",
-      "    print \"Got IsolatedGalaxy\"\n",
-      "    !tar xzf IsolatedGalaxy.tar.gz\n",
-      "    \n",
-      "    print \"All done!\""
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

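The hunk above is a mechanical upgrade of the notebook JSON from nbformat 3 (cells nested under "worksheets", code held in "input") to nbformat 4 (a top-level "cells" list with "source" and "execution_count"). As a minimal sketch of how such a conversion can be scripted with the nbformat library (not necessarily how this changeset was produced):

    import nbformat

    # Reading with as_version=4 upgrades the v3 structure in memory;
    # writing it back out produces an nbformat 4 file like the one in this diff.
    nb = nbformat.read("doc/source/quickstart/1)_Introduction.ipynb", as_version=4)
    nbformat.write(nb, "doc/source/quickstart/1)_Introduction.ipynb")
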
diff -r 35741b799d57ac96134c4f869cfa92e8c4f9e2be -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb doc/source/quickstart/2)_Data_Inspection.ipynb
--- a/doc/source/quickstart/2)_Data_Inspection.ipynb
+++ b/doc/source/quickstart/2)_Data_Inspection.ipynb
@@ -1,384 +1,418 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Starting Out and Loading Data\n",
+    "\n",
+    "We're going to get started by loading up yt.  This next command brings all of the libraries into memory and sets up our environment."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we've loaded yt, we can load up some data.  Let's load the `IsolatedGalaxy` dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Fields and Facts\n",
+    "\n",
+    "When you call the `load` function, yt tries to do very little -- this is designed to be a fast operation, just setting up some information about the simulation.  Now, the first time you access the \"index\" it will read and load the mesh and then determine where data is placed in the physical domain and on disk.  Once it knows that, yt can tell you some statistics about the simulation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.print_stats()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt can also tell you the fields it found on disk:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.field_list"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And, all of the fields it thinks it knows how to generate:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.derived_field_list"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt can also transparently generate fields.  However, we encourage you to examine exactly what yt is doing when it generates those fields.  To see, you can ask for the source of a given field."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ds.field_info[\"gas\", \"vorticity_x\"].get_source()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt stores information about the domain of the simulation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ds.domain_width"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt can also convert this into various units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ds.domain_width.in_units(\"kpc\")\n",
+    "print ds.domain_width.in_units(\"au\")\n",
+    "print ds.domain_width.in_units(\"mile\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Mesh Structure\n",
+    "\n",
+    "If you're using a simulation type that has grids (for instance, here we're using an Enzo simulation) you can examine the structure of the mesh.  For the most part, you probably won't have to use this unless you're debugging a simulation or examining in detail what is going on."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ds.index.grid_left_edge"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "But, you may have to access information about individual grid objects!  Each grid object mediates accessing data from the disk and has a number of attributes that tell you about it.  The index (`ds.index` here) has an attribute `grids` which is all of the grid objects."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ds.index.grids[1]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g = ds.index.grids[1]\n",
+    "print g"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Grids have dimensions, extents, level, and even a list of Child grids."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g.ActiveDimensions"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g.LeftEdge, g.RightEdge"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g.Level"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g.Children"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Advanced Grid Inspection\n",
+    "\n",
+    "If we want to examine grids only at a given level, we can!  Not only that, but we can load data and take a look at various fields.\n",
+    "\n",
+    "*This section can be skipped!*"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "gs = ds.index.select_grids(ds.index.max_level)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "g2 = gs[0]\n",
+    "print g2\n",
+    "print g2.Parent\n",
+    "print g2.get_global_startindex()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print g2[\"density\"][:,:,0]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (g2.Parent.child_mask == 0).sum() * 8\n",
+    "print g2.ActiveDimensions.prod()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "for f in ds.field_list:\n",
+    "    fv = g[f]\n",
+    "    if fv.size == 0: continue\n",
+    "    print f, fv.min(), fv.max()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Examining Data in Regions\n",
+    "\n",
+    "yt provides data object selectors.  In subsequent notebooks we'll examine these in more detail, but we can select a sphere of data and perform a number of operations on it.  yt makes it easy to operate on fluid fields in an object in *bulk*, but you can also examine individual field values.\n",
+    "\n",
+    "This creates a sphere selector positioned at the most dense point in the simulation that has a radius of 10 kpc."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sp = ds.sphere(\"max\", (10, 'kpc'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print sp"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can calculate a bunch of bulk quantities.  Here's that list, but there's a list in the docs, too!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print sp.quantities.keys()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's look at the total mass.  This is how you call a given quantity.  yt calls these \"Derived Quantities\".  We'll talk about a few in a later notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print sp.quantities.total_mass()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:a8fe78715c1f3900c37c675d84320fe65f0ba8734abba60fd12e74d957e5d8ee"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.4.3"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Starting Out and Loading Data\n",
-      "\n",
-      "We're going to get started by loading up yt.  This next command brings all of the libraries into memory and sets up our environment."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we've loaded yt, we can load up some data.  Let's load the `IsolatedGalaxy` dataset."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "## Fields and Facts\n",
-      "\n",
-      "When you call the `load` function, yt tries to do very little -- this is designed to be a fast operation, just setting up some information about the simulation.  Now, the first time you access the \"index\" it will read and load the mesh and then determine where data is placed in the physical domain and on disk.  Once it knows that, yt can tell you some statistics about the simulation:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.print_stats()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt can also tell you the fields it found on disk:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "And, all of the fields it thinks it knows how to generate:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.derived_field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt can also transparently generate fields.  However, we encourage you to examine exactly what yt is doing when it generates those fields.  To see, you can ask for the source of a given field."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print ds.field_info[\"gas\", \"vorticity_x\"].get_source()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt stores information about the domain of the simulation:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print ds.domain_width"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt can also convert this into various units:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print ds.domain_width.in_units(\"kpc\")\n",
-      "print ds.domain_width.in_units(\"au\")\n",
-      "print ds.domain_width.in_units(\"mile\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Mesh Structure\n",
-      "\n",
-      "If you're using a simulation type that has grids (for instance, here we're using an Enzo simulation) you can examine the structure of the mesh.  For the most part, you probably won't have to use this unless you're debugging a simulation or examining in detail what is going on."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print ds.index.grid_left_edge"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "But, you may have to access information about individual grid objects!  Each grid object mediates accessing data from the disk and has a number of attributes that tell you about it.  The index (`ds.index` here) has an attribute `grids` which is all of the grid objects."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print ds.index.grids[1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g = ds.index.grids[1]\n",
-      "print g"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Grids have dimensions, extents, level, and even a list of Child grids."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g.ActiveDimensions"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g.LeftEdge, g.RightEdge"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g.Level"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g.Children"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "## Advanced Grid Inspection\n",
-      "\n",
-      "If we want to examine grids only at a given level, we can!  Not only that, but we can load data and take a look at various fields.\n",
-      "\n",
-      "*This section can be skipped!*"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "gs = ds.index.select_grids(ds.index.max_level)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "g2 = gs[0]\n",
-      "print g2\n",
-      "print g2.Parent\n",
-      "print g2.get_global_startindex()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print g2[\"density\"][:,:,0]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print (g2.Parent.child_mask == 0).sum() * 8\n",
-      "print g2.ActiveDimensions.prod()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "for f in ds.field_list:\n",
-      "    fv = g[f]\n",
-      "    if fv.size == 0: continue\n",
-      "    print f, fv.min(), fv.max()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Examining Data in Regions\n",
-      "\n",
-      "yt provides data object selectors.  In subsequent notebooks we'll examine these in more detail, but we can select a sphere of data and perform a number of operations on it.  yt makes it easy to operate on fluid fields in an object in *bulk*, but you can also examine individual field values.\n",
-      "\n",
-      "This creates a sphere selector positioned at the most dense point in the simulation that has a radius of 10 kpc."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sp = ds.sphere(\"max\", (10, 'kpc'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sp"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can calculate a bunch of bulk quantities.  Here's that list, but there's a list in the docs, too!"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sp.quantities.keys()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's look at the total mass.  This is how you call a given quantity.  yt calls these \"Derived Quantities\".  We'll talk about a few in a later notebook."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sp.quantities.total_mass()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

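The data-inspection cells above also work as a standalone script. A minimal sketch, assuming the IsolatedGalaxy sample dataset from http://yt-project.org/data/ sits in the working directory:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ds.print_stats()                    # reads the index lazily, then reports mesh statistics
    sp = ds.sphere("max", (10, 'kpc'))  # sphere centered on the densest point
    print(sp.quantities.total_mass())   # a derived quantity computed in bulk
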
diff -r 35741b799d57ac96134c4f869cfa92e8c4f9e2be -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb doc/source/quickstart/3)_Simple_Visualization.ipynb
--- a/doc/source/quickstart/3)_Simple_Visualization.ipynb
+++ b/doc/source/quickstart/3)_Simple_Visualization.ipynb
@@ -1,275 +1,301 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Simple Visualizations of Data\n",
+    "\n",
+    "Just like in our first notebook, we have to load yt and then some data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For this notebook, we'll load up a cosmology dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+    "print \"Redshift =\", ds.current_redshift"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In the terms that yt uses, a projection is a line integral through the domain.  This can either be unweighted (in which case a column density is returned) or weighted, in which case an average value is returned.  Projections are, like all other data objects in yt, full-fledged data objects that churn through data and present that to you.  However, we also provide a simple method of creating Projections and plotting them in a single step.  This is called a Plot Window, here specifically known as a `ProjectionPlot`.  One thing to note is that in yt, we project all the way through the entire domain at a single time.  This means that the first call to projecting can be somewhat time consuming, but panning, zooming and plotting are all quite fast.\n",
+    "\n",
+    "yt is designed to make it easy to make nice plots and straightforward to modify those plots directly.  The cookbook in the documentation includes detailed examples of this."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p = yt.ProjectionPlot(ds, \"y\", \"density\")\n",
+    "p.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `show` command simply sends the plot to the IPython notebook.  You can also call `p.save()` which will save the plot to the file system.  This function accepts an argument, which will be pre-prended to the filename and can be used to name it based on the width or to supply a location.\n",
+    "\n",
+    "Now we'll zoom and pan a bit."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.zoom(2.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.pan_rel((0.1, 0.0))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.zoom(10.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.pan_rel((-0.25, -0.5))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.zoom(0.1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we specify multiple fields, each time we call `show` we get multiple plots back.  Same for `save`!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p = yt.ProjectionPlot(ds, \"z\", [\"density\", \"temperature\"], weight_field=\"density\")\n",
+    "p.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can adjust the colormap on a field-by-field basis."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p.set_cmap(\"temperature\", \"hot\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And, we can re-center the plot on different locations.  One possible use of this would be to make a single `ProjectionPlot` which you move around to look at different regions in your simulation, saving at each one."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "v, c = ds.find_max(\"density\")\n",
+    "p.set_center((c[0], c[1]))\n",
+    "p.zoom(10)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Okay, let's load up a bigger simulation (from `Enzo_64` this time) and make a slice plot."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"Enzo_64/DD0043/data0043\")\n",
+    "s = yt.SlicePlot(ds, \"z\", [\"density\", \"velocity_magnitude\"], center=\"max\")\n",
+    "s.set_cmap(\"velocity_magnitude\", \"kamae\")\n",
+    "s.zoom(10.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can adjust the logging of various fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s.set_log(\"velocity_magnitude\", True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt provides many different annotations for your plots.  You can see all of these in the documentation, or if you type `s.annotate_` and press tab, a list will show up here.  We'll annotate with velocity arrows."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s.annotate_velocity()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Contours can also be overlaid:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = yt.SlicePlot(ds, \"x\", [\"density\"], center=\"max\")\n",
+    "s.annotate_contour(\"temperature\")\n",
+    "s.zoom(2.5)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we can save out to the file system."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s.save()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:c00ba7fdbbd9ea957d06060ad70f06f629b1fd4ebf5379c1fdad2697ab0a4cd6"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.4.3"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Simple Visualizations of Data\n",
-      "\n",
-      "Just like in our first notebook, we have to load yt and then some data."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "For this notebook, we'll load up a cosmology dataset."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-      "print \"Redshift =\", ds.current_redshift"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In the terms that yt uses, a projection is a line integral through the domain.  This can either be unweighted (in which case a column density is returned) or weighted, in which case an average value is returned.  Projections are, like all other data objects in yt, full-fledged data objects that churn through data and present that to you.  However, we also provide a simple method of creating Projections and plotting them in a single step.  This is called a Plot Window, here specifically known as a `ProjectionPlot`.  One thing to note is that in yt, we project all the way through the entire domain at a single time.  This means that the first call to projecting can be somewhat time consuming, but panning, zooming and plotting are all quite fast.\n",
-      "\n",
-      "yt is designed to make it easy to make nice plots and straightforward to modify those plots directly.  The cookbook in the documentation includes detailed examples of this."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p = yt.ProjectionPlot(ds, \"y\", \"density\")\n",
-      "p.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `show` command simply sends the plot to the IPython notebook.  You can also call `p.save()` which will save the plot to the file system.  This function accepts an argument, which will be pre-prended to the filename and can be used to name it based on the width or to supply a location.\n",
-      "\n",
-      "Now we'll zoom and pan a bit."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.zoom(2.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.pan_rel((0.1, 0.0))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.zoom(10.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.pan_rel((-0.25, -0.5))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.zoom(0.1)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If we specify multiple fields, each time we call `show` we get multiple plots back.  Same for `save`!"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p = yt.ProjectionPlot(ds, \"z\", [\"density\", \"temperature\"], weight_field=\"density\")\n",
-      "p.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can adjust the colormap on a field-by-field basis."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p.set_cmap(\"temperature\", \"hot\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "And, we can re-center the plot on different locations.  One possible use of this would be to make a single `ProjectionPlot` which you move around to look at different regions in your simulation, saving at each one."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "v, c = ds.find_max(\"density\")\n",
-      "p.set_center((c[0], c[1]))\n",
-      "p.zoom(10)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Okay, let's load up a bigger simulation (from `Enzo_64` this time) and make a slice plot."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"Enzo_64/DD0043/data0043\")\n",
-      "s = yt.SlicePlot(ds, \"z\", [\"density\", \"velocity_magnitude\"], center=\"max\")\n",
-      "s.set_cmap(\"velocity_magnitude\", \"kamae\")\n",
-      "s.zoom(10.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can adjust the logging of various fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s.set_log(\"velocity_magnitude\", True)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt provides many different annotations for your plots.  You can see all of these in the documentation, or if you type `s.annotate_` and press tab, a list will show up here.  We'll annotate with velocity arrows."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s.annotate_velocity()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Contours can also be overlaid:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s = yt.SlicePlot(ds, \"x\", [\"density\"], center=\"max\")\n",
-      "s.annotate_contour(\"temperature\")\n",
-      "s.zoom(2.5)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we can save out to the file system."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s.save()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

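Likewise, the plotting calls in this notebook compose into a short script. A sketch under the same assumption that the sample data has been downloaded:

    import yt

    ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
    p = yt.ProjectionPlot(ds, "y", "density")  # integrates through the full domain once
    p.zoom(2.0)                                # panning/zooming reuses the cached projection
    p.save()                                   # writes a PNG named after the dataset and field
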
This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/4663e9f8170e/
Changeset:   4663e9f8170e
Branch:      yt
User:        ngoldbaum
Date:        2015-10-20 23:57:55+00:00
Summary:     Updating quickstart notebooks to be py3 compatible
Affected #:  5 files

diff -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb -r 4663e9f8170e508289ffc35354ea2e8f773158f9 doc/source/quickstart/1)_Introduction.ipynb
--- a/doc/source/quickstart/1)_Introduction.ipynb
+++ b/doc/source/quickstart/1)_Introduction.ipynb
@@ -44,18 +44,18 @@
     "download_datasets = False\n",
     "if download_datasets:\n",
     "    !curl -sSO http://yt-project.org/data/enzo_tiny_cosmology.tar.gz\n",
-    "    print \"Got enzo_tiny_cosmology\"\n",
+    "    print (\"Got enzo_tiny_cosmology\")\n",
     "    !tar xzf enzo_tiny_cosmology.tar.gz\n",
     "    \n",
     "    !curl -sSO http://yt-project.org/data/Enzo_64.tar.gz\n",
-    "    print \"Got Enzo_64\"\n",
+    "    print (\"Got Enzo_64\")\n",
     "    !tar xzf Enzo_64.tar.gz\n",
     "    \n",
     "    !curl -sSO http://yt-project.org/data/IsolatedGalaxy.tar.gz\n",
-    "    print \"Got IsolatedGalaxy\"\n",
+    "    print (\"Got IsolatedGalaxy\")\n",
     "    !tar xzf IsolatedGalaxy.tar.gz\n",
     "    \n",
-    "    print \"All done!\""
+    "    print (\"All done!\")"
    ]
   }
  ],

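The parenthesized print calls above run identically under Python 2 and Python 3 for single-argument prints. In a plain script, the usual way to get the same portability is the __future__ import, which makes print a function in both interpreters:

    from __future__ import print_function

    # Identical behavior under Python 2 and 3, including multi-argument prints.
    print("Got enzo_tiny_cosmology")
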
diff -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb -r 4663e9f8170e508289ffc35354ea2e8f773158f9 doc/source/quickstart/2)_Data_Inspection.ipynb
--- a/doc/source/quickstart/2)_Data_Inspection.ipynb
+++ b/doc/source/quickstart/2)_Data_Inspection.ipynb
@@ -109,7 +109,7 @@
    },
    "outputs": [],
    "source": [
-    "print ds.field_info[\"gas\", \"vorticity_x\"].get_source()"
+    "print(ds.field_info[\"gas\", \"vorticity_x\"].get_source())"
    ]
   },
   {
@@ -127,7 +127,7 @@
    },
    "outputs": [],
    "source": [
-    "print ds.domain_width"
+    "ds.domain_width"
    ]
   },
   {
@@ -145,9 +145,9 @@
    },
    "outputs": [],
    "source": [
-    "print ds.domain_width.in_units(\"kpc\")\n",
-    "print ds.domain_width.in_units(\"au\")\n",
-    "print ds.domain_width.in_units(\"mile\")"
+    "print (ds.domain_width.in_units(\"kpc\"))\n",
+    "print (ds.domain_width.in_units(\"au\"))\n",
+    "print (ds.domain_width.in_units(\"mile\"))"
    ]
   },
   {
@@ -167,7 +167,7 @@
    },
    "outputs": [],
    "source": [
-    "print ds.index.grid_left_edge"
+    "print (ds.index.grid_left_edge)"
    ]
   },
   {
@@ -185,7 +185,7 @@
    },
    "outputs": [],
    "source": [
-    "print ds.index.grids[1]"
+    "ds.index.grids[1]"
    ]
   },
   {
@@ -197,7 +197,7 @@
    "outputs": [],
    "source": [
     "g = ds.index.grids[1]\n",
-    "print g"
+    "print(g)"
    ]
   },
   {
@@ -282,9 +282,9 @@
    "outputs": [],
    "source": [
     "g2 = gs[0]\n",
-    "print g2\n",
-    "print g2.Parent\n",
-    "print g2.get_global_startindex()"
+    "print (g2)\n",
+    "print (g2.Parent)\n",
+    "print (g2.get_global_startindex())"
    ]
   },
   {
@@ -295,7 +295,7 @@
    },
    "outputs": [],
    "source": [
-    "print g2[\"density\"][:,:,0]"
+    "g2[\"density\"][:,:,0]"
    ]
   },
   {
@@ -306,8 +306,8 @@
    },
    "outputs": [],
    "source": [
-    "print (g2.Parent.child_mask == 0).sum() * 8\n",
-    "print g2.ActiveDimensions.prod()"
+    "print ((g2.Parent.child_mask == 0).sum() * 8)\n",
+    "print (g2.ActiveDimensions.prod())"
    ]
   },
   {
@@ -321,7 +321,7 @@
     "for f in ds.field_list:\n",
     "    fv = g[f]\n",
     "    if fv.size == 0: continue\n",
-    "    print f, fv.min(), fv.max()"
+    "    print (f, fv.min(), fv.max())"
    ]
   },
   {
@@ -354,7 +354,7 @@
    },
    "outputs": [],
    "source": [
-    "print sp"
+    "sp"
    ]
   },
   {
@@ -372,7 +372,7 @@
    },
    "outputs": [],
    "source": [
-    "print sp.quantities.keys()"
+    "list(sp.quantities.keys())"
    ]
   },
   {
@@ -390,7 +390,7 @@
    },
    "outputs": [],
    "source": [
-    "print sp.quantities.total_mass()"
+    "sp.quantities.total_mass()"
    ]
   }
  ],

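Several hunks above wrap keys() in list() because Python 3's dict.keys() returns a view object rather than a list. A tiny illustration of the difference:

    d = {"total_mass": 1, "extrema": 2}
    print(d.keys())        # Python 3: dict_keys(['total_mass', 'extrema'])
    print(list(d.keys()))  # a plain list, printing the same way on Python 2 and 3
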
diff -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb -r 4663e9f8170e508289ffc35354ea2e8f773158f9 doc/source/quickstart/3)_Simple_Visualization.ipynb
--- a/doc/source/quickstart/3)_Simple_Visualization.ipynb
+++ b/doc/source/quickstart/3)_Simple_Visualization.ipynb
@@ -36,7 +36,7 @@
    "outputs": [],
    "source": [
     "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-    "print \"Redshift =\", ds.current_redshift"
+    "print (\"Redshift =\", ds.current_redshift)"
    ]
   },
   {

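One caveat on the hunk above: under plain Python 2, a multi-argument print with parentheses prints a tuple; the new form reads as intended only on Python 3 (which these notebooks' kernelspec declares) or with the print_function import:

    # Python 2:  print ("Redshift =", 0.1)   prints ('Redshift =', 0.1) -- a tuple
    # Python 3:  print("Redshift =", 0.1)    prints Redshift = 0.1
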
diff -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb -r 4663e9f8170e508289ffc35354ea2e8f773158f9 doc/source/quickstart/4)_Data_Objects_and_Time_Series.ipynb
--- a/doc/source/quickstart/4)_Data_Objects_and_Time_Series.ipynb
+++ b/doc/source/quickstart/4)_Data_Objects_and_Time_Series.ipynb
@@ -43,7 +43,7 @@
    },
    "outputs": [],
    "source": [
-    "ts = yt.DatasetSeries(\"enzo_tiny_cosmology/*/*.hierarchy\")"
+    "ts = yt.load(\"enzo_tiny_cosmology/DD????/DD????\")"
    ]
   },
   {
@@ -66,6 +66,7 @@
     "rho_ex = []\n",
     "times = []\n",
     "for ds in ts:\n",
+    "    print (ds)\n",
     "    dd = ds.all_data()\n",
     "    rho_ex.append(dd.quantities.extrema(\"density\"))\n",
     "    times.append(ds.current_time.in_units(\"Gyr\"))\n",
@@ -190,7 +191,7 @@
    },
    "outputs": [],
    "source": [
-    "print ray[\"dts\"]"
+    "print (ray[\"dts\"])"
    ]
   },
   {
@@ -201,7 +202,7 @@
    },
    "outputs": [],
    "source": [
-    "print ray[\"t\"]"
+    "print (ray[\"t\"])"
    ]
   },
   {
@@ -212,7 +213,7 @@
    },
    "outputs": [],
    "source": [
-    "print ray[\"x\"]"
+    "print (ray[\"x\"])"
    ]
   },
   {
@@ -234,11 +235,11 @@
    "source": [
     "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
     "v, c = ds.find_max(\"density\")\n",
-    "sl = ds.slice(0, c[0])\n",
-    "print sl[\"index\", \"x\"]\n",
-    "print sl[\"index\", \"z\"]\n",
-    "print sl[\"pdx\"]\n",
-    "print sl[\"gas\", \"density\"].shape"
+    "sl = ds.slice(2, c[0])\n",
+    "print (sl[\"index\", \"x\"])\n",
+    "print (sl[\"index\", \"z\"])\n",
+    "print (sl[\"pdx\"])\n",
+    "print (sl[\"gas\", \"density\"].shape)"
    ]
   },
   {
@@ -257,7 +258,7 @@
    "outputs": [],
    "source": [
     "frb = sl.to_frb((50.0, 'kpc'), 1024)\n",
-    "print frb[\"gas\", \"density\"].shape"
+    "print (frb[\"gas\", \"density\"].shape)"
    ]
   },
   {
@@ -277,7 +278,7 @@
    "source": [
     "yt.write_image(np.log10(frb[\"gas\", \"density\"]), \"temp.png\")\n",
     "from IPython.display import Image\n",
-    "Image(filename = \"temp.png\")"
+    "Image(filename=\"temp.png\")"
    ]
   },
   {
@@ -300,7 +301,7 @@
    "outputs": [],
    "source": [
     "cp = ds.cutting([0.2, 0.3, 0.5], \"max\")\n",
-    "pw = cp.to_pw(fields = [(\"gas\", \"density\")])"
+    "pw = cp.to_pw(fields=[(\"gas\", \"density\")])"
    ]
   },
   {
@@ -322,6 +323,17 @@
    ]
   },
   {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pw.zoom(10)"
+   ]
+  },
+  {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
@@ -338,7 +350,7 @@
    "source": [
     "pws = sl.to_pw(fields=[\"density\"])\n",
     "#pws.show()\n",
-    "print pws.plots.keys()"
+    "print (list(pws.plots.keys()))"
    ]
   },
   {
@@ -363,7 +375,7 @@
    "outputs": [],
    "source": [
     "cg = ds.covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)\n",
-    "print cg[\"density\"].shape"
+    "print (cg[\"density\"].shape)"
    ]
   },
   {
@@ -382,7 +394,7 @@
    "outputs": [],
    "source": [
     "scg = ds.smoothed_covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)\n",
-    "print scg[\"density\"].shape"
+    "print (scg[\"density\"].shape)"
    ]
   }
  ],

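The substantive change above replaces yt.DatasetSeries("...*.hierarchy") with yt.load on a glob pattern; when the pattern matches multiple outputs, load returns a time series that can be iterated, as the notebook's loop then does. A minimal sketch, assuming the enzo_tiny_cosmology outputs are present:

    import yt

    ts = yt.load("enzo_tiny_cosmology/DD????/DD????")  # matches every DD output
    for ds in ts:
        print(ds, ds.current_time.in_units("Gyr"))
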
diff -r 0e562f7ae0ee7e375f171e909fb57318e7a3acdb -r 4663e9f8170e508289ffc35354ea2e8f773158f9 doc/source/quickstart/5)_Derived_Fields_and_Profiles.ipynb
--- a/doc/source/quickstart/5)_Derived_Fields_and_Profiles.ipynb
+++ b/doc/source/quickstart/5)_Derived_Fields_and_Profiles.ipynb
@@ -63,7 +63,7 @@
    "source": [
     "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
     "dd = ds.all_data()\n",
-    "print dd.quantities.keys()"
+    "print (list(dd.quantities.keys()))"
    ]
   },
   {
@@ -81,7 +81,7 @@
    },
    "outputs": [],
    "source": [
-    "print dd.quantities.extrema(\"dinosaurs\")"
+    "print (dd.quantities.extrema(\"dinosaurs\"))"
    ]
   },
   {
@@ -99,7 +99,7 @@
    },
    "outputs": [],
    "source": [
-    "print dd.quantities.weighted_average_quantity(\"dinosaurs\", weight=\"temperature\")"
+    "print (dd.quantities.weighted_average_quantity(\"dinosaurs\", weight=\"temperature\"))"
    ]
   },
   {
@@ -123,9 +123,9 @@
     "bv = sp.quantities.bulk_velocity()\n",
     "L = sp.quantities.angular_momentum_vector()\n",
     "rho_min, rho_max = sp.quantities.extrema(\"density\")\n",
-    "print bv\n",
-    "print L\n",
-    "print rho_min, rho_max"
+    "print (bv)\n",
+    "print (L)\n",
+    "print (rho_min, rho_max)"
    ]
   },
   {
@@ -245,9 +245,9 @@
     "sp.set_field_parameter(\"bulk_velocity\", bv)\n",
     "rv2 = sp.quantities.extrema(\"radial_velocity\")\n",
     "\n",
-    "print bv\n",
-    "print rv1\n",
-    "print rv2"
+    "print (bv)\n",
+    "print (rv1)\n",
+    "print (rv2)"
    ]
   }
  ],

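The final hunk above prints radial-velocity extrema before and after registering a bulk velocity. The same pattern as a compact script, again assuming the IsolatedGalaxy dataset:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sp = ds.sphere("max", (10, 'kpc'))
    bv = sp.quantities.bulk_velocity()
    sp.set_field_parameter("bulk_velocity", bv)  # radial_velocity now subtracts bv
    print(sp.quantities.extrema("radial_velocity"))
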

https://bitbucket.org/yt_analysis/yt/commits/a3dc58449d7b/
Changeset:   a3dc58449d7b
Branch:      yt
User:        ngoldbaum
Date:        2015-10-21 17:50:15+00:00
Summary:     Updating visualizing notebooks to nbformat4
Affected #:  2 files

diff -r 4663e9f8170e508289ffc35354ea2e8f773158f9 -r a3dc58449d7b5f4084c195a3040d829e21b44b31 doc/source/visualizing/FITSImageData.ipynb
--- a/doc/source/visualizing/FITSImageData.ipynb
+++ b/doc/source/visualizing/FITSImageData.ipynb
@@ -1,409 +1,438 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt has capabilities for writing 2D and 3D uniformly gridded data generated from datasets to FITS files. This is via the `FITSImageData` class. We'll test these capabilities out on an Athena dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "from yt.utilities.fits_image import FITSImageData, FITSProjection"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"MHDSloshing/virgo_low_res.0054.vtk\", parameters={\"length_unit\":(1.0,\"Mpc\"),\n",
+    "                                                               \"mass_unit\":(1.0e14,\"Msun\"),\n",
+    "                                                               \"time_unit\":(1.0,\"Myr\")})"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Creating FITS images from Slices and Projections"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There are several ways to make a `FITSImageData` instance. The most intuitive ways are to use the `FITSSlice`, `FITSProjection`, `FITSOffAxisSlice`, and `FITSOffAxisProjection` classes to write slices and projections directly to FITS. To demonstrate a useful example of creating a FITS file, let's first make a `ProjectionPlot`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], weight_field=\"density\", width=(500.,\"kpc\"))\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Suppose that we wanted to write this projection to a FITS file for analysis and visualization in other programs, such as ds9. We can do that using `FITSProjection`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits = FITSProjection(ds, \"z\", [\"temperature\"], weight_field=\"density\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "which took the same parameters as `ProjectionPlot` except the width, because `FITSProjection` and `FITSSlice` always make slices and projections of the width of the domain size, at the finest resolution available in the simulation, in a unit determined to be appropriate for the physical size of the dataset."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Because `FITSImageData` inherits from the [AstroPy `HDUList`](http://astropy.readthedocs.org/en/latest/io/fits/api/hdulists.html) class, we can call its methods. For example, `info` shows us the contents of the virtual FITS file:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits.info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also look at the header for a particular field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits[\"temperature\"].header"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where we can see that the temperature units are in Kelvin and the cell widths are in kiloparsecs. If we want the raw image data with units, we can call `get_data`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits.get_data(\"temperature\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can use the `set_unit` method to change the units of a particular field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits.set_unit(\"temperature\",\"R\")\n",
+    "prj_fits.get_data(\"temperature\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The image can be written to disk using the `writeto` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits.writeto(\"sloshing.fits\", clobber=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Since yt can read FITS image files, it can be loaded up just like any other dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds2 = yt.load(\"sloshing.fits\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "and we can make a `SlicePlot` of the 2D image, which shows the same data as the previous image:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc2 = yt.SlicePlot(ds2, \"z\", [\"temperature\"], width=(500.,\"kpc\"))\n",
+    "slc2.set_log(\"temperature\", True)\n",
+    "slc2.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Using `FITSImageData` directly"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you want more fine-grained control over what goes into the FITS file, you can call `FITSImageData` directly, with various kinds of inputs. For example, you could use a `FixedResolutionBuffer`, and specify you want the units in parsecs instead:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc3 = ds.slice(0, 0.0)\n",
+    "frb = slc3.to_frb((500.,\"kpc\"), 800)\n",
+    "fid_frb = FITSImageData(frb, fields=[\"density\",\"temperature\"], units=\"pc\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A 3D FITS cube can also be created from a covering grid:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cvg = ds.covering_grid(ds.index.max_level, [-0.5,-0.5,-0.5], [64, 64, 64], fields=[\"density\",\"temperature\"])\n",
+    "fid_cvg = FITSImageData(cvg, fields=[\"density\",\"temperature\"], units=\"Mpc\")"
+   ]
+  },
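+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Calling `info` on the result, just as we did for the projection earlier, is a quick way to confirm that the new image is a three-dimensional cube:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fid_cvg.info()"
+   ]
+  },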
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Other `FITSImageData` Methods"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A `FITSImageData` instance can be generated from one previously written to disk using the `from_file` classmethod:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fid = FITSImageData.from_file(\"sloshing.fits\")\n",
+    "fid.info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Multiple `FITSImageData` can be combined to create a new one, provided that the coordinate information is the same:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits2 = FITSProjection(ds, \"z\", [\"density\"])\n",
+    "prj_fits3 = FITSImageData.from_images([prj_fits, prj_fits2])\n",
+    "prj_fits3.info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Alternatively, individual fields can be popped as well:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dens_fits = prj_fits3.pop(\"density\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dens_fits.info()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits3.info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "So far, the FITS images we have shown have linear spatial coordinates. One may want to take a projection of an object and make a crude mock observation out of it, with celestial coordinates. For this, we can use the `create_sky_wcs` method. Specify a center (RA, Dec) coordinate in degrees, as well as a linear scale in terms of angle per distance:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sky_center = [30.,45.] # in degrees\n",
+    "sky_scale = (2.5, \"arcsec/kpc\") # could also use a YTQuantity\n",
+    "prj_fits.create_sky_wcs(sky_center, sky_scale, ctype=[\"RA---TAN\",\"DEC--TAN\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "By the default, a tangent RA/Dec projection is used, but one could also use another projection using the `ctype` keyword. We can now look at the header and see it has the appropriate WCS:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj_fits[\"temperature\"].header"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we can add header keywords to a single field or for all fields in the FITS image using `update_header`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fid_frb.update_header(\"all\", \"time\", 0.1) # Update all the fields\n",
+    "fid_frb.update_header(\"temperature\", \"scale\", \"Rankine\") # Update just one field"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print fid_frb[\"density\"].header[\"time\"]\n",
+    "print fid_frb[\"temperature\"].header[\"scale\"]"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:c7de5ef190feaa2289595aec7eaa05db02fd535e408e0d04aa54088b0bd3ebae"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.4.3"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt has capabilities for writing 2D and 3D uniformly gridded data generated from datasets to FITS files. This is via the `FITSImageData` class. We'll test these capabilities out on an Athena dataset."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "from yt.utilities.fits_image import FITSImageData, FITSProjection"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"MHDSloshing/virgo_low_res.0054.vtk\", parameters={\"length_unit\":(1.0,\"Mpc\"),\n",
-      "                                                               \"mass_unit\":(1.0e14,\"Msun\"),\n",
-      "                                                               \"time_unit\":(1.0,\"Myr\")})"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Creating FITS images from Slices and Projections"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "There are several ways to make a `FITSImageData` instance. The most intuitive ways are to use the `FITSSlice`, `FITSProjection`, `FITSOffAxisSlice`, and `FITSOffAxisProjection` classes to write slices and projections directly to FITS. To demonstrate a useful example of creating a FITS file, let's first make a `ProjectionPlot`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], weight_field=\"density\", width=(500.,\"kpc\"))\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Suppose that we wanted to write this projection to a FITS file for analysis and visualization in other programs, such as ds9. We can do that using `FITSProjection`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits = FITSProjection(ds, \"z\", [\"temperature\"], weight_field=\"density\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "which took the same parameters as `ProjectionPlot` except the width, because `FITSProjection` and `FITSSlice` always make slices and projections of the width of the domain size, at the finest resolution available in the simulation, in a unit determined to be appropriate for the physical size of the dataset."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Because `FITSImageData` inherits from the [AstroPy `HDUList`](http://astropy.readthedocs.org/en/latest/io/fits/api/hdulists.html) class, we can call its methods. For example, `info` shows us the contents of the virtual FITS file:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also look at the header for a particular field:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits[\"temperature\"].header"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where we can see that the temperature units are in Kelvin and the cell widths are in kiloparsecs. If we want the raw image data with units, we can call `get_data`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits.get_data(\"temperature\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can use the `set_unit` method to change the units of a particular field:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits.set_unit(\"temperature\",\"R\")\n",
-      "prj_fits.get_data(\"temperature\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The image can be written to disk using the `writeto` method:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits.writeto(\"sloshing.fits\", clobber=True)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Since yt can read FITS image files, it can be loaded up just like any other dataset:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds2 = yt.load(\"sloshing.fits\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "and we can make a `SlicePlot` of the 2D image, which shows the same data as the previous image:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc2 = yt.SlicePlot(ds2, \"z\", [\"temperature\"], width=(500.,\"kpc\"))\n",
-      "slc2.set_log(\"temperature\", True)\n",
-      "slc2.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Using `FITSImageData` directly"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If you want more fine-grained control over what goes into the FITS file, you can call `FITSImageData` directly, with various kinds of inputs. For example, you could use a `FixedResolutionBuffer`, and specify you want the units in parsecs instead:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc3 = ds.slice(0, 0.0)\n",
-      "frb = slc3.to_frb((500.,\"kpc\"), 800)\n",
-      "fid_frb = FITSImageData(frb, fields=[\"density\",\"temperature\"], units=\"pc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "A 3D FITS cube can also be created from a covering grid:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cvg = ds.covering_grid(ds.index.max_level, [-0.5,-0.5,-0.5], [64, 64, 64], fields=[\"density\",\"temperature\"])\n",
-      "fid_cvg = FITSImageData(cvg, fields=[\"density\",\"temperature\"], units=\"Mpc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Other `FITSImageData` Methods"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "A `FITSImageData` instance can be generated from one previously written to disk using the `from_file` classmethod:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fid = FITSImageData.from_file(\"sloshing.fits\")\n",
-      "fid.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Multiple `FITSImageData` can be combined to create a new one, provided that the coordinate information is the same:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits2 = FITSProjection(ds, \"z\", [\"density\"])\n",
-      "prj_fits3 = FITSImageData.from_images([prj_fits, prj_fits2])\n",
-      "prj_fits3.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Alternatively, individual fields can be popped as well:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dens_fits = prj_fits3.pop(\"density\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dens_fits.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits3.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "So far, the FITS images we have shown have linear spatial coordinates. One may want to take a projection of an object and make a crude mock observation out of it, with celestial coordinates. For this, we can use the `create_sky_wcs` method. Specify a center (RA, Dec) coordinate in degrees, as well as a linear scale in terms of angle per distance:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sky_center = [30.,45.] # in degrees\n",
-      "sky_scale = (2.5, \"arcsec/kpc\") # could also use a YTQuantity\n",
-      "prj_fits.create_sky_wcs(sky_center, sky_scale, ctype=[\"RA---TAN\",\"DEC--TAN\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "By the default, a tangent RA/Dec projection is used, but one could also use another projection using the `ctype` keyword. We can now look at the header and see it has the appropriate WCS:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj_fits[\"temperature\"].header"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we can add header keywords to a single field or for all fields in the FITS image using `update_header`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fid_frb.update_header(\"all\", \"time\", 0.1) # Update all the fields\n",
-      "fid_frb.update_header(\"temperature\", \"scale\", \"Rankine\") # Update just one field"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print fid_frb[\"density\"].header[\"time\"]\n",
-      "print fid_frb[\"temperature\"].header[\"scale\"]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

diff -r 4663e9f8170e508289ffc35354ea2e8f773158f9 -r a3dc58449d7b5f4084c195a3040d829e21b44b31 doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
--- a/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
+++ b/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
@@ -1,183 +1,200 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions.  TransferFunctionHelper is a utility class that makes it easy to visualize he probability density functions of yt fields that you might want to volume render.  This makes it easier to choose a nice transfer function that highlights interesting physical regimes.\n",
+    "\n",
+    "First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook.  Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np\n",
+    "from IPython.core.display import Image\n",
+    "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
+    "\n",
+    "def showme(im):\n",
+    "    # screen out NaNs\n",
+    "    im[im != im] = 0.0\n",
+    "    \n",
+    "    # Create an RGBA bitmap to display\n",
+    "    imb = yt.write_bitmap(im, None)\n",
+    "    return Image(imb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we load up a low resolution Enzo cosmological simulation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load('Enzo_64/DD0043/data0043')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh = yt.TransferFunctionHelper(ds)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`TransferFunctionHelpler` will intelligently choose transfer function bounds based on the data values.  Use the `plot()` method to take a look at the transfer function."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Build a transfer function that is a multivariate gaussian in temperature\n",
+    "tfh = yt.TransferFunctionHelper(ds)\n",
+    "tfh.set_field('temperature')\n",
+    "tfh.set_log(True)\n",
+    "tfh.set_bounds()\n",
+    "tfh.build_transfer_function()\n",
+    "tfh.tf.add_layers(5)\n",
+    "tfh.plot()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's also look at the probability density function of the `cell_mass` field as a function of `temperature`.  This might give us an idea where there is a lot of structure. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh.plot(profile_field='cell_mass')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "It looks like most of the gas is hot but there is still a lot of low-density cool gas.  Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh = yt.TransferFunctionHelper(ds)\n",
+    "tfh.set_field('temperature')\n",
+    "tfh.set_bounds()\n",
+    "tfh.set_log(True)\n",
+    "tfh.build_transfer_function()\n",
+    "tfh.tf.add_layers(8, w=0.01, mi=4.0, ma=8.0, col_bounds=[4.,8.], alpha=np.logspace(-1,2,7), colormap='RdBu_r')\n",
+    "tfh.tf.map_to_colormap(6.0, 8.0, colormap='Reds', scale=10.0)\n",
+    "tfh.tf.map_to_colormap(-1.0, 6.0, colormap='Blues_r', scale=1.)\n",
+    "\n",
+    "tfh.plot(profile_field='cell_mass')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, let's take a look at the volume rendering."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "L = [-0.1, -1.0, -0.1]\n",
+    "c = ds.domain_center\n",
+    "W = 1.5*ds.domain_width\n",
+    "Npixels = 512 \n",
+    "cam = ds.camera(c, L, W, Npixels, tfh.tf, fields=['temperature'],\n",
+    "                  north_vector=[1.,0.,0.], steady_north=True, \n",
+    "                  sub_samples=5, no_ghost=False)\n",
+    "\n",
+    "# Here we substitute the TransferFunction we constructed earlier.\n",
+    "cam.transfer_function = tfh.tf\n",
+    "\n",
+    "\n",
+    "im = cam.snapshot()\n",
+    "showme(im[:,:,:3])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids."
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:5a1547973517987ff047f1b2405277a0e98392e8fd5ffe04521cb2dc372d32d3"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.4.3"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions.  TransferFunctionHelper is a utility class that makes it easy to visualize he probability density functions of yt fields that you might want to volume render.  This makes it easier to choose a nice transfer function that highlights interesting physical regimes.\n",
-      "\n",
-      "First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook.  Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np\n",
-      "from IPython.core.display import Image\n",
-      "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
-      "\n",
-      "def showme(im):\n",
-      "    # screen out NaNs\n",
-      "    im[im != im] = 0.0\n",
-      "    \n",
-      "    # Create an RGBA bitmap to display\n",
-      "    imb = yt.write_bitmap(im, None)\n",
-      "    return Image(imb)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we load up a low resolution Enzo cosmological simulation."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load('Enzo_64/DD0043/data0043')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh = yt.TransferFunctionHelper(ds)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`TransferFunctionHelpler` will intelligently choose transfer function bounds based on the data values.  Use the `plot()` method to take a look at the transfer function."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Build a transfer function that is a multivariate gaussian in temperature\n",
-      "tfh = yt.TransferFunctionHelper(ds)\n",
-      "tfh.set_field('temperature')\n",
-      "tfh.set_log(True)\n",
-      "tfh.set_bounds()\n",
-      "tfh.build_transfer_function()\n",
-      "tfh.tf.add_layers(5)\n",
-      "tfh.plot()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's also look at the probability density function of the `cell_mass` field as a function of `temperature`.  This might give us an idea where there is a lot of structure. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh.plot(profile_field='cell_mass')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "It looks like most of the gas is hot but there is still a lot of low-density cool gas.  Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh = yt.TransferFunctionHelper(ds)\n",
-      "tfh.set_field('temperature')\n",
-      "tfh.set_bounds()\n",
-      "tfh.set_log(True)\n",
-      "tfh.build_transfer_function()\n",
-      "tfh.tf.add_layers(8, w=0.01, mi=4.0, ma=8.0, col_bounds=[4.,8.], alpha=np.logspace(-1,2,7), colormap='RdBu_r')\n",
-      "tfh.tf.map_to_colormap(6.0, 8.0, colormap='Reds', scale=10.0)\n",
-      "tfh.tf.map_to_colormap(-1.0, 6.0, colormap='Blues_r', scale=1.)\n",
-      "\n",
-      "tfh.plot(profile_field='cell_mass')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, let's take a look at the volume rendering."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "L = [-0.1, -1.0, -0.1]\n",
-      "c = ds.domain_center\n",
-      "W = 1.5*ds.domain_width\n",
-      "Npixels = 512 \n",
-      "cam = ds.camera(c, L, W, Npixels, tfh.tf, fields=['temperature'],\n",
-      "                  north_vector=[1.,0.,0.], steady_north=True, \n",
-      "                  sub_samples=5, no_ghost=False)\n",
-      "\n",
-      "# Here we substitute the TransferFunction we constructed earlier.\n",
-      "cam.transfer_function = tfh.tf\n",
-      "\n",
-      "\n",
-      "im = cam.snapshot()\n",
-      "showme(im[:,:,:3])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids."
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}


https://bitbucket.org/yt_analysis/yt/commits/c59f89099460/
Changeset:   c59f89099460
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 16:19:43+00:00
Summary:     Merging
Affected #:  367 files

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 .hgchurn
--- a/.hgchurn
+++ b/.hgchurn
@@ -22,4 +22,21 @@
 ngoldbau at ucsc.edu = goldbaum at ucolick.org
 biondo at wisc.edu = Biondo at wisc.edu
 samgeen at googlemail.com = samgeen at gmail.com
-fbogert = fbogert at ucsc.edu
\ No newline at end of file
+fbogert = fbogert at ucsc.edu
+bwoshea = oshea at msu.edu
+mornkr at slac.stanford.edu = me at jihoonkim.org
+kbarrow = kssbarrow at gatech.edu
+kssbarrow at gmail.com = kssbarrow at gatech.edu
+kassbarrow at gmail.com = kssbarrow at gatech.edu
+antoine.strugarek at cea.fr = strugarek at astro.umontreal.ca
+rosen at ucolick.org = alrosen at ucsc.edu
+jzuhone = jzuhone at gmail.com
+karraki at nmsu.edu = karraki at gmail.com
+hckr at eml.cc = astrohckr at gmail.com
+julian3 at illinois.edu = astrohckr at gmail.com
+cosmosquark = bthompson2090 at gmail.com
+chris.m.malone at lanl.gov = chris.m.malone at gmail.com
+jnaiman at ucolick.org = jnaiman
+migueld.deval = miguel at archlinux.net
+slevy at ncsa.illinois.edu = salevy at illinois.edu
+malzraa at gmail.com = kellerbw at mcmaster.ca
\ No newline at end of file

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -32,6 +32,7 @@
 yt/utilities/lib/CICDeposit.c
 yt/utilities/lib/ContourFinding.c
 yt/utilities/lib/DepthFirstOctree.c
+yt/utilities/lib/element_mappings.c
 yt/utilities/lib/FixedInterpolator.c
 yt/utilities/lib/fortran_reader.c
 yt/utilities/lib/freetype_writer.c
@@ -56,6 +57,11 @@
 yt/utilities/lib/marching_cubes.c
 yt/utilities/lib/png_writer.h
 yt/utilities/lib/write_array.c
+yt/utilities/lib/element_mappings.c
+yt/utilities/lib/mesh_construction.cpp
+yt/utilities/lib/mesh_samplers.cpp
+yt/utilities/lib/mesh_traversal.cpp
+yt/utilities/lib/mesh_intersection.cpp
 syntax: glob
 *.pyc
 .*.swp

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 CONTRIBUTING.rst
--- /dev/null
+++ b/CONTRIBUTING.rst
@@ -0,0 +1,970 @@
+.. This document is rendered in HTML with cross-reference links filled in at
+   http://yt-project.org/doc/developing/
+
+.. _getting-involved:
+
+Getting Involved
+================
+
+There are *lots* of ways to get involved with yt, as a community and as a
+technical system -- and not all of them involve contributing code. You can
+also participate in the community, help us design the websites, add
+documentation, and share your scripts with others.
+
+Coding is only one way to be involved!
+
+Communication Channels
+----------------------
+
+There are five main communication channels for yt:
+
+ * We have an IRC channel, on ``irc.freenode.net`` in ``#yt``.
+   You can connect through our web
+   gateway without any special client, at http://yt-project.org/irc.html .
+   *IRC is the first stop for conversation!*
+ * Many yt developers participate in the yt Slack community. Slack is a free 
+   chat service that many teams use to organize their work. You can get an
+   invite to yt's Slack organization by clicking the "Join us @ Slack" button
+   on this page: http://yt-project.org/community.html
+ * `yt-users <http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org>`_
+   is a relatively high-traffic mailing list where people are encouraged to ask
+   questions about the code, figure things out and so on.
+ * `yt-dev <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_ is
+   a much lower-traffic mailing list designed to focus on discussions of
+   improvements to the code, ideas about planning, development issues, and so
+   on.
+ * `yt-svn <http://lists.spacepope.org/listinfo.cgi/yt-svn-spacepope.org>`_ is
+   the (now-inaccurately titled) mailing list where all pushes to the primary
+   repository are sent.
+
+The easiest way to get involved with yt is to read the mailing lists, hang out
+in IRC or slack chat, and participate.  If someone asks a question you know the
+answer to (or have your own question about!) write back and answer it.
+
+If you have an idea about something, suggest it!  We not only welcome
+participation, we encourage it.
+
+Documentation
+-------------
+
+The yt documentation is constantly being updated, and it is a task we would very
+much appreciate assistance with.  Whether that is adding a section, updating an
+outdated section, contributing typo or grammatical fixes, adding a FAQ, or
+increasing coverage of functionality, it would be very helpful if you wanted to
+help out.
+
+The easiest way to help out is to fork the main yt repository (where the
+documentation lives in the ``doc`` directory in the root of the yt mercurial
+repository) and then make your changes in your own fork.  When you are done,
+issue a pull request through the website for your new fork, and we can comment
+back and forth and eventually accept your changes. See :ref:`sharing-changes` for
+more information about contributing your changes to yt on bitbucket.
+
+Gallery Images and Videos
+-------------------------
+
+If you have an image or video you'd like to display in the image or video
+galleries, getting it included is easy!  You can either fork the `yt homepage
+repository <http://bitbucket.org/yt_analysis/website>`_ and add it there, or
+email it to us and we'll add it to the `Gallery
+<http://yt-project.org/gallery.html>`_.
+
+We're eager to show off the images and movies you make with yt, so please feel
+free to drop `us <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_
+a line and let us know if you've got something great!
+
+Technical Contributions
+-----------------------
+
+Contributing code is another excellent way to participate -- whether it's
+bug fixes, new features, analysis modules, or a new code frontend.  See
+:ref:`creating_frontend` for more details.
+
+The process is pretty simple: fork on BitBucket, make changes, issue a pull
+request.  We can then go back and forth with comments in the pull request, but
+usually we end up accepting.
+
+For more information, see :ref:`contributing-code`, where we spell out how to
+get up and running with a development environment, how to commit, and how to
+use BitBucket.
+
+Online Presence
+---------------
+
+Some of these fall under the other items, but if you'd like to help out with
+the website or any of the other ways yt is presented online, please feel free!
+Almost everything is kept in hg repositories on BitBucket, and it is very easy
+to fork and contribute back changes.
+
+Please feel free to dig in and contribute changes.
+
+Word of Mouth
+-------------
+
+If you're using yt and it has increased your productivity, please feel
+encouraged to share that information.  Cite our `paper
+<http://adsabs.harvard.edu/abs/2011ApJS..192....9T>`_, tell your colleagues,
+and just spread word of mouth.  By telling people about your successes, you'll
+help bring more eyes and hands to the table -- in this manner, by increasing
+participation, collaboration, and simply spreading the limits of what the code
+is asked to do, we hope to help scale the utility and capability of yt with the
+community size.
+
+Feel free to `blog <http://blog.yt-project.org/>`_ about, `tweet
+<http://twitter.com/yt_astro>`_ about and talk about what you are up to!
+
+Long-Term Projects
+------------------
+
+There are some wild-eyed, out-there ideas that have been bandied about for the
+future directions of yt -- some of them even written into the mission
+statement.  The ultimate goal is to move past simple analysis and visualization
+of data and begin to approach it from the other side, of generating data,
+running solvers.  We also hope to increase its ability to act as an in situ
+analysis code, by presenting a unified protocol.  Other projects include
+interfacing with ParaView and VisIt, creating a web GUI for running
+simulations, creating a run-tracker that follows simulations in progress, a
+federated database for simulation outputs, and so on and so forth.
+
+yt is an ambitious project.  Let's be ambitious together.
+
+yt Community Code of Conduct
+----------------------------
+
+The community of participants in open source
+scientific projects is made up of members from around the
+globe with a diverse set of skills, personalities, and
+experiences. It is through these differences that our
+community experiences success and continued growth. We
+expect everyone in our community to follow these guidelines
+when interacting with others both inside and outside of our
+community. Our goal is to keep ours a positive, inclusive,
+successful, and growing community.
+
+As members of the community,
+
+- We pledge to treat all people with respect and
+  provide a harassment- and bullying-free environment,
+  regardless of sex, sexual orientation and/or gender
+  identity, disability, physical appearance, body size,
+  race, nationality, ethnicity, and religion. In
+  particular, sexual language and imagery, sexist,
+  racist, or otherwise exclusionary jokes are not
+  appropriate.
+
+- We pledge to respect the work of others by
+  recognizing acknowledgment/citation requests of
+  original authors. As authors, we pledge to be explicit
+  about how we want our own work to be cited or
+  acknowledged.
+
+- We pledge to welcome those interested in joining the
+  community, and realize that including people with a
+  variety of opinions and backgrounds will only serve to
+  enrich our community. In particular, discussions
+  relating to pros/cons of various technologies,
+  programming languages, and so on are welcome, but
+  these should be done with respect, taking proactive
+  measure to ensure that all participants are heard and
+  feel confident that they can freely express their
+  opinions.
+
+- We pledge to welcome questions and answer them
+  respectfully, paying particular attention to those new
+  to the community. We pledge to provide respectful
+  criticisms and feedback in forums, especially in
+  discussion threads resulting from code
+  contributions.
+
+- We pledge to be conscientious of the perceptions of
+  the wider community and to respond to criticism
+  respectfully. We will strive to model behaviors that
+  encourage productive debate and disagreement, both
+  within our community and where we are criticized. We
+  will treat those outside our community with the same
+  respect as people within our community.
+
+- We pledge to help the entire community follow the
+  code of conduct, and to not remain silent when we see
+  violations of the code of conduct. We will take action
+  when members of our community violate this code such as
+  contacting confidential at yt-project.org (all emails sent to
+  this address will be treated with the strictest
+  confidence) or talking privately with the person.
+
+This code of conduct applies to all
+community situations online and offline, including mailing
+lists, forums, social media, conferences, meetings,
+associated social events, and one-to-one interactions.
+
+The yt Community Code of Conduct was adapted from the
+`Astropy Community Code of Conduct
+<http://www.astropy.org/about.html#codeofconduct>`_,
+which was partially inspired by the PSF code of conduct.
+
+.. _contributing-code:
+
+How to Develop yt
+=================
+
+yt is a community project!
+
+We are very happy to accept patches, features, and bugfixes from any member of
+the community!  yt is developed using mercurial, primarily because it enables
+very easy and straightforward submission of changesets.  We're eager to hear
+from you, and if you are developing yt, we encourage you to subscribe to the
+`developer mailing list
+<http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_. Please feel
+free to hack around, commit changes, and send them upstream.
+
+.. note:: If you already know how to use the `mercurial version control system
+   <http://mercurial-scm.org>`_ and are comfortable with handling it yourself,
+   the quickest way to contribute to yt is to `fork us on BitBucket
+   <http://bitbucket.org/yt_analysis/yt/fork>`_, make your changes, push the
+   changes to your fork and issue a `pull request
+   <http://bitbucket.org/yt_analysis/yt/pull-requests>`_.  The rest of this
+   document is just an explanation of how to do that.
+
+See :ref:`code-style-guide` for more information about coding style in yt and
+:ref:`docstrings` for an example docstring.  Please read them before hacking on
+the codebase, and feel free to email any of the mailing lists for help with the
+codebase.
+
+Keep in touch, and happy hacking!
+
+.. _open-issues:
+
+Open Issues
+-----------
+
+If you're interested in participating in yt development, take a look at the
+`issue tracker on bitbucket
+<https://bitbucket.org/yt_analysis/yt/issues?milestone=easy?status=new>`_.
+Issues are marked with a milestone of "easy", "moderate", or "difficult"
+depending on the estimated level of difficulty for fixing the issue. While we
+try to triage the issue tracker regularly, it may be the case that issues marked
+"moderate" are actually easier than their milestone label indicates since that
+is the default value.
+
+Here are some predefined issue searches that might be useful:
+
+* Unresolved issues `marked "easy" <https://bitbucket.org/yt_analysis/yt/issues?milestone=easy&status=open&status=new>`_.
+* Unresolved issues `marked "easy" or "moderate" <https://bitbucket.org/yt_analysis/yt/issues?milestone=easy&milestone=moderate&status=open&status=new>`_
+* `All unresolved issues <https://bitbucket.org/yt_analysis/yt/issues?status=open&status=new>`_
+
+Submitting Changes
+------------------
+
+We provide a brief introduction to submitting changes here.  yt thrives on the
+strength of its communities (http://arxiv.org/abs/1301.7064 has further
+discussion) and we encourage contributions from any user.  While we do not
+discuss version control, mercurial or the advanced usage of BitBucket in detail
+here, we do provide an outline of how to submit changes and we are happy to
+provide further assistance or guidance.
+
+Licensing
++++++++++
+
+yt is `licensed <http://blog.yt-project.org/post/Relicensing.html>`_ under the
+BSD 3-clause license.  Versions previous to yt-2.6 were released under the GPLv3.
+
+All contributed code must be BSD-compatible.  If you'd rather not license in
+this manner, but still want to contribute, please consider creating an external
+package, which we'll happily link to.
+
+How To Get The Source Code For Editing
+++++++++++++++++++++++++++++++++++++++
+
+yt is hosted on BitBucket, and you can see all of the yt repositories at
+http://bitbucket.org/yt_analysis/.  With the yt installation script you should have a
+copy of Mercurial for checking out pieces of code.  Make sure you have followed
+the steps above for bootstrapping your development (to assure you have a
+bitbucket account, etc.)
+
+In order to modify the source code for yt, we ask that you make a "fork" of the
+main yt repository on bitbucket.  A fork is simply an exact copy of the main
+repository (along with its history) that you will now own and can make
+modifications as you please.  You can create a personal fork by visiting the yt
+bitbucket webpage at https://bitbucket.org/yt_analysis/yt/ .  After logging in,
+you should see an option near the top right labeled "fork".  Click this option,
+and then click the fork repository button on the subsequent page.  You now have
+a forked copy of the yt repository for your own personal modification.
+
+This forked copy exists on the bitbucket repository, so in order to access
+it locally, follow the instructions at the top of that webpage for that
+forked repository, namely run at a local command line:
+
+.. code-block:: bash
+
+   $ hg clone http://bitbucket.org/<USER>/<REPOSITORY_NAME>
+
+This downloads that new forked repository to your local machine, so that you
+can access it, read it, make modifications, etc.  It will put the repository in
+a local directory of the same name as the repository in the current working
+directory.  You can see any past state of the code by using the ``hg log`` command.
+For example, the following command would show you the last 5 changesets
+(modifications to the code) that were submitted to that repository.
+
+.. code-block:: bash
+
+   $ cd <REPOSITORY_NAME>
+   $ hg log -l 5
+
+Using the revision specifier (the number or hash identifier next to each
+changeset), you can update the local repository to any past state of the
+code (a previous changeset or version) by executing the command:
+
+.. code-block:: bash
+
+   $ hg up revision_specifier
+
+Lastly, if you want to use this new downloaded version of your yt repository as
+the *active* version of yt on your computer (i.e. the one which is executed when
+you run yt from the command line or the one that is loaded when you do ``import
+yt``), then you must "activate" it using the following commands from within the
+repository directory.
+
+.. code-block:: bash
+
+   $ cd <REPOSITORY_NAME>
+   $ python2.7 setup.py develop
+
+This will rebuild all C modules as well.
+
+.. _reading-source:
+
+How To Read The Source Code
++++++++++++++++++++++++++++
+
+If you just want to *look* at the source code, you may already have it on your
+computer.  If you build yt using the install script, the source is available at
+``$YT_DEST/src/yt-hg``.  See :ref:`source-installation` for more details about
+how to obtain the yt source code if you did not build yt using the install
+script.
+
+The root directory of the yt mercurial repository contains a number of
+subdirectories with different components of the code.  Most of the yt source
+code is contained in the yt subdirectory.  This directory itself contains
+the following subdirectories:
+
+``frontends``
+   This is where interfaces to codes are created.  Within each subdirectory of
+   yt/frontends/ there must exist the following files, even if empty:
+
+   * ``data_structures.py``, where subclasses of AMRGridPatch, Dataset
+     and AMRHierarchy are defined.
+   * ``io.py``, where a subclass of IOHandler is defined.
+   * ``fields.py``, where fields we expect to find in datasets are defined.
+   * ``misc.py``, where any miscellaneous functions or classes are defined.
+   * ``definitions.py``, where any definitions specific to the frontend are
+     defined.  (i.e., header formats, etc.)
+
+``fields``
+   This is where all of the derived fields that ship with yt are defined.
+
+``geometry``
+   This is where geometric helper routines are defined. Handlers
+   for grid and oct data, as well as helpers for coordinate transformations
+   can be found here.
+
+``visualization``
+   This is where all visualization modules are stored.  This includes plot
+   collections, the volume rendering interface, and pixelization frontends.
+
+``data_objects``
+   All objects that handle data, processed or unprocessed, not explicitly
+   defined as visualization are located in here.  This includes the base
+   classes for data regions, covering grids, time series, and so on.  This
+   also includes derived fields and derived quantities.
+
+``analysis_modules``
+   This is where all mechanisms for processing data live.  This includes
+   things like clump finding, halo profiling, halo finding, and so on.  This
+   is something of a catchall, but it serves as a level of greater
+   abstraction that simply data selection and modification.
+
+``gui``
+   This is where all GUI components go.  Typically this will be some small
+   tool used for one or two things, which contains a launching mechanism on
+   the command line.
+
+``utilities``
+   All broadly useful code that doesn't clearly fit in one of the other
+   categories goes here.
+
+``extern``
+   Bundled external modules (i.e. code that was not written by one of
+   the yt authors but that yt depends on) lives here.
+
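+As a minimal sketch (``<FRONTEND_NAME>`` is a placeholder; a real frontend may
+carry additional files beyond the required ones), the frontend layout described
+above looks like:
+
+.. code-block:: bash
+
+   $ ls yt/frontends/<FRONTEND_NAME>
+   data_structures.py  definitions.py  fields.py  io.py  misc.py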
+
+If you're looking for a specific file or function in the yt source code, use
+the unix find command:
+
+.. code-block:: bash
+
+   $ find <DIRECTORY_TREE_TO_SEARCH> -name '<FILENAME>'
+
+The above command will find the FILENAME in any subdirectory in the
+DIRECTORY_TREE_TO_SEARCH.  Alternatively, if you're looking for a function
+call or a keyword in an unknown file in a directory tree, try:
+
+.. code-block:: bash
+
+   $ grep -R <KEYWORD_TO_FIND><DIRECTORY_TREE_TO_SEARCH>
+
+This can be very useful for tracking down functions in the yt source.
+
+.. _building-yt:
+
+Building yt
++++++++++++
+
+If you have made changes to any C or Cython (``.pyx``) modules, you have to
+rebuild yt.  If your changes have exclusively been to Python modules, you will
+not need to re-build, but (see below) you may need to re-install.
+
+If you are running from a clone that is executable in-place (i.e., has been
+installed via the installation script or you have run ``setup.py develop``) you
+can rebuild these modules by executing:
+
+.. code-block:: bash
+
+  $ python2.7 setup.py develop
+
+If you have previously "installed" via ``setup.py install`` you have to
+re-install:
+
+.. code-block:: bash
+
+  $ python2.7 setup.py install
+
+Only one of these two options is needed.
+
+.. _windows-developing:
+
+Developing yt on Windows
+------------------------
+
+If you plan to develop yt on Windows, it is necessary to use the `MinGW
+<http://www.mingw.org/>`_ gcc compiler that can be installed using the `Anaconda
+Python Distribution <https://store.continuum.io/cshop/anaconda/>`_. The libpython package must be
+installed from Anaconda as well. These can both be installed with a single command:
+
+.. code-block:: bash
+
+  $ conda install libpython mingw
+
+Additionally, the syntax for the setup command is slightly different; you must type:
+
+.. code-block:: bash
+
+  $ python2.7 setup.py build --compiler=mingw32 develop
+
+or
+
+.. code-block:: bash
+
+  $ python2.7 setup.py build --compiler=mingw32 install
+
+.. _requirements-for-code-submission:
+
+Requirements for Code Submission
+--------------------------------
+
+Modifications to the code typically fall into one of three categories, each of
+which have different requirements for acceptance into the code base.  These
+requirements are in place for a few reasons -- to make sure that the code is
+maintainable, testable, and that we can easily include information about
+changes in changelogs during the release procedure.  (See `YTEP-0008
+<https://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0008.html>`_ for more
+detail.)
+
+* New Features
+
+  * New unit tests (possibly new answer tests) (See :ref:`testing`)
+  * Docstrings in the source code for the public API
+  * Addition of new feature to the narrative documentation (See :ref:`writing_documentation`)
+  * Addition of cookbook recipe (See :ref:`writing_documentation`)
+  * Issue created on issue tracker, to ensure this is added to the changelog
+
+* Extension or Breakage of API in Existing Features
+
+  * Update existing narrative docs and docstrings (See :ref:`writing_documentation`)
+  * Update existing cookbook recipes (See :ref:`writing_documentation`)
+  * Modify or create new unit tests (See :ref:`testing`)
+  * Issue created on issue tracker, to ensure this is added to the changelog
+
+* Bug fixes
+
+  * Unit test is encouraged, to ensure breakage does not happen again in the
+    future. (See :ref:`testing`)
+  * Issue created on issue tracker, to ensure this is added to the changelog
+
+When submitting, you will be asked to make sure that your changes meet all of
+these requirements.  They are pretty easy to meet, and we're also happy to help
+out with them.  In :ref:`code-style-guide` there is a list of handy tips for
+how to structure and write your code.
+
+.. _mercurial-with-yt:
+
+How to Use Mercurial with yt
+----------------------------
+
+If you're new to Mercurial, these three resources are pretty great for learning
+the ins and outs:
+
+* http://hginit.com/
+* http://hgbook.red-bean.com/read/
+* http://mercurial-scm.org/
+* http://mercurial-scm.org/wiki
+
+The commands that are essential for using mercurial include the following
+(a brief example session putting them together appears after the list):
+
+* ``hg help`` which provides help for any mercurial command. For example, you
+  can learn more about the ``log`` command by doing ``hg help log``. Other useful
+  topics to use with ``hg help`` are ``hg help glossary``, ``hg help config``,
+  ``hg help extensions``, and ``hg help revsets``.
+* ``hg commit`` which commits changes in the working directory to the
+  repository, creating a new "changeset object."
+* ``hg add`` which adds a new file to be tracked by mercurial.  This does
+  not change the working directory.
+* ``hg pull`` which pulls (from an optional path specifier) changeset
+  objects from a remote source.  The working directory is not modified.
+* ``hg push`` which sends (to an optional path specifier) changeset objects
+  to a remote source.  The working directory is not modified.
+* ``hg log`` which shows a log of all changeset objects in the current
+  repository.  Use ``-G`` to show a graph of changeset objects and their
+  relationship.
+* ``hg update`` which (with an optional "revision" specifier) updates the
+  state of the working directory to match a changeset object in the
+  repository.
+* ``hg merge`` which combines two changesets to make a union of their lines
+  of development.  This updates the working directory.
+
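+As a minimal sketch of how these commands fit together (the file name is
+hypothetical, and the URLs assume the main repository and a fork owned by
+``YourUsername``; substitute your own), a typical edit-commit-push cycle
+looks like:
+
+.. code-block:: bash
+
+   $ hg pull https://bitbucket.org/yt_analysis/yt    # fetch upstream changesets
+   $ hg update                                       # refresh the working directory
+   $ hg add new_feature.py                           # track a new (hypothetical) file
+   $ hg commit -m "Add new_feature"                  # record a changeset locally
+   $ hg push https://bitbucket.org/YourUsername/yt   # send it to your fork
+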
+We are happy to answer questions about mercurial use on IRC, in the slack
+chat, or on the mailing list, and to walk you through any troubles you might have.
+Here are some general suggestions for using mercurial with yt:
+
+* Named branches are to be avoided.  Try using bookmarks (see ``hg help
+  bookmark``) to track work.  (`More info about bookmarks is available on the
+  mercurial wiki <http://mercurial-scm.org/wiki/Bookmarks>`_)
+* Make sure you set a username in your ``~/.hgrc`` before you commit any
+  changes!  All of the tutorials above will describe how to do this as one of
+  the very first steps.
+* When contributing changes, you might be asked to make a handful of
+  modifications to your source code.  We'll work through how to do this with
+  you, and try to make it as painless as possible.
+* Your code may fail automated style checks. See :ref:`code-style-guide` for
+  more information about automatically verifying your code style.
+* Please avoid deleting your yt forks, as that deletes the pull request
+  discussion from BitBucket's website, even if your pull request
+  is merged.
+* You should only need one fork.  To keep it in sync, you can sync from the
+  website. See Bitbucket's `Blog Post
+  <https://blog.bitbucket.org/2013/02/04/syncing-and-merging-come-to-bitbucket/>`_
+  about this. See :ref:`sharing-changes` for a description of the basic workflow
+  and :ref:`multiple-PRs` for a discussion about what to do when you want to
+  have multiple open pull requests at the same time.
+* If you run into any troubles, stop by IRC (see :ref:`irc`) or the mailing
+  list.
+
+.. _sharing-changes:
+
+Making and Sharing Changes
+--------------------------
+
+The simplest way to submit changes to yt is to do the following:
+
+* Build yt from the mercurial repository
+* Navigate to the root of the yt repository
+* Make some changes and commit them
+* Fork the `yt repository on BitBucket <https://bitbucket.org/yt_analysis/yt>`_
+* Push the changesets to your fork
+* Issue a pull request.
+
+Here's a more detailed flowchart of how to submit changes.
+
+#. If you have used the installation script, the source code for yt can be
+   found in ``$YT_DEST/src/yt-hg``.  Alternatively see
+   :ref:`source-installation` for instructions on how to build yt from the
+   mercurial repository. (Below, in :ref:`reading-source`, we describe how to
+   find items of interest.)
+#. Edit the source file you are interested in and
+   test your changes.  (See :ref:`testing` for more information.)
+#. Fork yt on BitBucket.  (This step only has to be done once.)  You can do
+   this at: https://bitbucket.org/yt_analysis/yt/fork.  Call this repository
+   yt.
+#. Create a bookmark to track your work. For example: ``hg bookmark
+   my-first-pull-request``
+#. Commit these changes, using ``hg commit``.  This can take a list of
+   filenames as arguments if you have some changes you do not want
+   to commit.
+#. Remember that this is a large development effort; to keep the code
+   accessible to everyone, good documentation is a must.  Add source code
+   comments explaining what you are doing.  Add docstrings
+   if you are adding a new function or class or keyword to a function.
+   Add documentation to the appropriate section of the online docs so that
+   people other than yourself know how to use your new code.
+#. If your changes include new functionality or cover an untested area of the
+   code, add a test.  (See :ref:`testing` for more information.)  Commit
+   these changes as well.
+#. Push your changes to your new fork using the command::
+
+      hg push -B my-first-pull-request https://bitbucket.org/YourUsername/yt/
+
+   Here you should substitute the name of the bookmark you are working on for
+   ``my-first-pull-request``. If you end up doing considerable development, you
+   can set an alias in the file ``.hg/hgrc`` to point to this path.
+
+   .. note::
+     Note that the above approach uses HTTPS as the transfer protocol
+     between your machine and BitBucket.  If you prefer to use SSH - or
+     perhaps you're behind a proxy that doesn't play well with SSL via
+     HTTPS - you may want to set up an `SSH key`_ on BitBucket.  Then, you use
+     the syntax ``ssh://hg@bitbucket.org/YourUsername/yt``, or equivalent, in
+     place of ``https://bitbucket.org/YourUsername/yt`` in Mercurial commands.
+     For consistency, all commands we list in this document will use the HTTPS
+     protocol.
+
+     .. _SSH key: https://confluence.atlassian.com/display/BITBUCKET/Set+up+SSH+for+Mercurial
+
+#. Issue a pull request at
+   https://bitbucket.org/YourUsername/yt/pull-request/new.
+   A pull request is essentially just asking people to review and accept the
+   modifications you have made to your personal version of the code.
+
+
+During the course of your pull request you may be asked to make changes.  These
+changes may be related to style issues, correctness issues, or even requesting
+tests.  The process for responding to pull request code review is relatively
+straightforward.
+
+#. Make requested changes, or leave a comment indicating why you don't think
+   they should be made.
+#. Commit those changes to your local repository.
+#. Push the changes to your fork::
+
+      hg push https://bitbucket.org/YourUsername/yt/
+
+#. Your pull request will be automatically updated.
+
+.. _multiple-PRs:
+
+Working with Multiple BitBucket Pull Requests
++++++++++++++++++++++++++++++++++++++++++++++
+
+Once you become active developing for yt, you may be working on
+various aspects of the code or bugfixes at the same time.  Currently,
+BitBucket's *modus operandi* for pull requests automatically updates
+your active pull request with every ``hg push`` of commits that are a
+descendant of the head of your pull request.  In a normal workflow,
+this means that if you have an active pull request, make some changes
+locally for, say, an unrelated bugfix, then push those changes back to
+your fork in the hopes of creating a *new* pull request, you'll
+actually end up updating your current pull request!
+
+There are a few ways around this feature of BitBucket that will allow
+for multiple pull requests to coexist; we outline one such method
+below.  We assume that you have a fork of yt at
+``https://bitbucket.org/YourUsername/Your_yt`` (see
+:ref:`sharing-changes` for instructions on creating a fork) and that
+you have an active pull request to the main repository.
+
+The main issue with starting another pull request is making sure that
+your push to BitBucket doesn't go to the same head as your
+existing pull request and trigger BitBucket's auto-update feature.
+Here's how to get your local repository away from your current pull
+request head using `revsets <http://www.selenic.com/hg/help/revsets>`_
+and your ``hgrc`` file:
+
+#. Set up a Mercurial path for the main yt repository (note this is a convenience
+   step and only needs to be done once).  Add the following to your
+   ``Your_yt/.hg/hgrc``::
+
+     [paths]
+     upstream = https://bitbucket.org/yt_analysis/yt
+
+   This will create a path called ``upstream`` that is aliased to the URL of the
+   main yt repository.
+#. Now we'll use revsets_ to update your local repository to the tip of the
+   ``upstream`` path:
+
+   .. code-block:: bash
+
+      $ hg pull upstream
+      $ hg update -r "remote(yt, 'upstream')"
+
+After the above steps, your local repository should be at the current head of
+the ``yt`` branch in the main yt repository.  If you find yourself doing this a
+lot, it may be worth aliasing this task in your ``hgrc`` file by adding
+something like::
+
+  [alias]
+  ytupdate = update -r "remote(yt, 'upstream')"
+
+And then you can just issue ``hg ytupdate`` to get at the current head of the
+``yt`` branch in the main yt repository.
+
+Make sure you are on the branch you want to be on, and then you can make changes
+and ``hg commit`` them.  If you prefer working with `bookmarks
+<http://mercurial-scm.org/wiki/Bookmarks>`_, you may want to make a bookmark
+before committing your changes, such as ``hg bookmark mybookmark``.
+
+To push your changes on a bookmark to BitBucket, you can issue the following
+command:
+
+.. code-block:: bash
+
+    $ hg push -B mybookmark https://bitbucket.org/YourUsername/Your_yt
+
+The ``-B`` means "publish my bookmark, the changeset the bookmark is pointing
+at, and any ancestors of that changeset that aren't already on the remote
+server".
+
+To push to your fork on BitBucket if you didn't use a bookmark, you issue the
+following:
+
+.. code-block:: bash
+
+  $ hg push -r . -f https://bitbucket.org/YourUsername/Your_yt
+
+The ``-r .`` means "push only the commit I'm standing on and any ancestors."
+The ``-f`` is to force Mercurial to do the push since we are creating a new
+remote head without a bookmark.
+
+You can then go to the BitBucket interface and issue a new pull request based on
+your last changes, as usual.
+
+.. _code-style-guide:
+
+Coding Style Guide
+==================
+
+Automatically checking code style
+---------------------------------
+
+Below is a list of rules for coding style in yt. Some of these rules are
+suggestions and are not explicitly enforced, while others are enforced via
+automated testing. The yt project uses a subset of the rules checked by
+``flake8`` to verify our code. The ``flake8`` tool is a combination of the
+``pyflakes`` and ``pep8`` tools. To check the coding style of your
+contributions locally, you will need to install the ``flake8`` tool with ``pip``:
+
+.. code-block:: bash
+
+    $ pip install flake8
+
+And then navigate to the root of the yt repository and run ``flake8`` on the
+``yt`` folder:
+
+.. code-block:: bash
+
+    $ cd $YT_HG
+    $ flake8 ./yt
+
+This will print out any ``flake8`` errors or warnings that your newly added
+code triggers. The errors will be in your newly added code because we have
+already cleaned the rest of the yt codebase of the errors and warnings
+detected by the ``flake8`` tool. Note that this will only trigger a subset of
+the `full flake8
+error and warning list
+<http://flake8.readthedocs.org/en/latest/warnings.html>`_, since we explicitly
+blacklist a large number of the rules that ``flake8`` checks
+by default.
+
+Source code style guide
+-----------------------
+
+ * In general, follow PEP-8 guidelines.
+   http://www.python.org/dev/peps/pep-0008/
+ * Classes are ``ConjoinedCapitals``, methods and functions are
+   ``lowercase_with_underscores``.
+ * Use 4 spaces, not tabs, to represent indentation.
+ * Line widths should not be more than 80 characters.
+ * Do not use nested classes unless you have a very good reason to, such as
+   requiring a namespace or class-definition modification.  Classes should live
+   at the top level.  ``__metaclass__`` is exempt from this.
+ * Do not use unnecessary parentheses in conditionals.  ``if((something) and
+   (something_else))`` should be rewritten as
+   ``if something and something_else``. Python is more forgiving than C.
+ * Avoid copying memory when possible. For example, don't do
+   ``a = a.reshape(3,4)`` when ``a.shape = (3,4)`` will do, and ``a = a * 3``
+   should be ``np.multiply(a, 3, a)``.
+ * In general, avoid all double-underscore method names: ``__something`` is
+   usually unnecessary.
+ * When writing a subclass, use the super built-in to access the superclass,
+   rather than naming it explicitly.
+   Ex: ``super(SpecialGridSubclass, self).__init__()`` rather than
+   ``SpecialGrid.__init__(self)``.
+ * Docstrings should describe input, output, behavior, and any state changes
+   that occur on an object.  See the file ``doc/docstring_example.txt`` for a
+   fiducial example of a docstring.
+ * Use only one top-level import per line. Unless there is a good reason not to,
+   imports should happen at the top of the file, after the copyright blurb.
+ * Never compare with ``True`` or ``False`` using ``==`` or ``!=``, always use
+   ``is`` or ``is not``.
+ * If you are comparing with a numpy boolean array, just refer to the array.
+   Ex: do ``np.all(array)`` instead of ``np.all(array == True)``.
+ * Never compare with None using ``==`` or ``!=``, use ``is None`` or
+   ``is not None`` (see the short example after this list).
+ * Use ``statement is not True`` instead of ``not statement is True``.
+ * Only one statement per line; do not use semicolons to put two or more
+   statements on a single line.
+ * Only declare local variables if they will be used later. If you do not use the
+   return value of a function, do not store it in a variable.
+ * Add tests for new functionality. When fixing a bug, consider adding a test to
+   prevent the bug from recurring.
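+
+To make a few of these rules concrete, here is a small sketch that follows
+them (the function name and error message are invented for this example):
+
+.. code-block:: python
+
+   import numpy as np
+
+   def triple_density(density):
+       """Multiply ``density`` by 3 in place and return it."""
+       if density is None:  # compare with None using ``is``
+           raise ValueError("density may not be None")
+       np.multiply(density, 3, density)  # in place; avoids copying memory
+       return density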
+
+API Style Guide
+---------------
+
+ * Do not use ``from some_module import *``
+ * Internally, only import from source files directly -- instead of:
+
+     ``from yt.visualization.api import ProjectionPlot``
+
+   do:
+
+     ``from yt.visualization.plot_window import ProjectionPlot``
+
+ * Import symbols from the module where they are defined; avoid transitive
+   imports.
+ * Import standard-library modules, functions, and classes directly from the
+   standard library; do not import them from other yt files.
+ * Numpy is to be imported as ``np``.
+ * Do not use too many keyword arguments.  If you have a lot of keyword
+   arguments, then you are doing too much in ``__init__`` and not enough via
+   parameter setting.
+ * In function arguments, place spaces after commas.  ``def something(a,b,c)``
+   should be ``def something(a, b, c)``.
+ * Don't create a new class to replicate the functionality of an old class --
+   replace the old class.  Too many options make for a confusing user
+   experience.
+ * Parameter files external to yt are a last resort.
+ * The usage of the ``**kwargs`` construction should be avoided.  If it cannot
+   be avoided, the keyword arguments must be explained, even if they are only
+   passed on to a nested function.  (A brief sketch follows this list.)
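+
+As a brief sketch of the import and keyword-argument guidance above (the
+``make_density_projection`` helper is hypothetical, not part of yt):
+
+.. code-block:: python
+
+   # Import from the module where the symbol is defined, not from an api module.
+   from yt.visualization.plot_window import ProjectionPlot
+
+   # Prefer a few explicit keyword arguments over ``**kwargs``.
+   def make_density_projection(ds, axis="x", field="density", width=None):
+       return ProjectionPlot(ds, axis, field, width=width)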
+
+.. _docstrings:
+
+Docstrings
+----------
+
+The following is an example docstring. You can use it as a template for
+docstrings in your code and as a guide for how we expect docstrings to look and
+the level of detail we are looking for. Note that we use NumPy style docstrings
+written in `Sphinx restructured text format <http://sphinx-doc.org/rest.html>`_.
+
+.. code-block:: rest
+
+    r"""A one-line summary that does not use variable names or the
+    function name.
+
+    Several sentences providing an extended description. Refer to
+    variables using back-ticks, e.g. ``var``.
+
+    Parameters
+    ----------
+    var1 : array_like
+        Array_like means all those objects -- lists, nested lists, etc. --
+        that can be converted to an array.  We can also refer to
+        variables like ``var1``.
+    var2 : int
+        The type above can either refer to an actual Python type
+        (e.g. ``int``), or describe the type of the variable in more
+        detail, e.g. ``(N,) ndarray`` or ``array_like``.
+    Long_variable_name : {'hi', 'ho'}, optional
+        Choices in brackets, default first when optional.
+
+    Returns
+    -------
+    describe : type
+        Explanation
+    output : type
+        Explanation
+    tuple : type
+        Explanation
+    items : type
+        even more explaining
+
+    Other Parameters
+    ----------------
+    only_seldom_used_keywords : type
+        Explanation
+    common_parameters_listed_above : type
+        Explanation
+
+    Raises
+    ------
+    BadException
+        Because you shouldn't have done that.
+
+    See Also
+    --------
+    otherfunc : relationship (optional)
+    newfunc : Relationship (optional), which could be fairly long, in which
+              case the line wraps here.
+    thirdfunc, fourthfunc, fifthfunc
+
+    Notes
+    -----
+    Notes about the implementation algorithm (if needed).
+
+    This can have multiple paragraphs.
+
+    You may include some math:
+
+    .. math:: X(e^{j\omega } ) = x(n)e^{ - j\omega n}
+
+    And even use a greek symbol like :math:`\omega` inline.
+
+    References
+    ----------
+    Cite the relevant literature, e.g. [1]_.  You may also cite these
+    references in the notes section above.
+
+    .. [1] O. McNoleg, "The integration of GIS, remote sensing,
+       expert systems and adaptive co-kriging for environmental habitat
+       modelling of the Highland Haggis using object-oriented, fuzzy-logic
+       and neural-network techniques," Computers & Geosciences, vol. 22,
+       pp. 585-588, 1996.
+
+    Examples
+    --------
+    These are written in doctest format, and should illustrate how to
+    use the function.  Use the variables 'ds' for the dataset, 'pc' for
+    a plot collection, 'c' for a center, and 'L' for a vector.
+
+    >>> a=[1,2,3]
+    >>> print [x + 3 for x in a]
+    [4, 5, 6]
+    >>> print "a\n\nb"
+    a
+    b
+
+    """
+
+Variable Names and Enzo-isms
+----------------------------
+Avoid Enzo-isms.  This includes but is not limited to:
+
+ * Hard-coding parameter names that are the same as those in Enzo.  The
+   following translation table should be of some help.  Note that the
+   parameters are now properties on a ``Dataset`` subclass: you access them
+   like ``ds.refine_by`` (see the short example after this list).
+
+    - ``RefineBy`` => ``refine_by``
+    - ``TopGridRank`` => ``dimensionality``
+    - ``TopGridDimensions`` => ``domain_dimensions``
+    - ``InitialTime`` => ``current_time``
+    - ``DomainLeftEdge`` => ``domain_left_edge``
+    - ``DomainRightEdge`` => ``domain_right_edge``
+    - ``CurrentTimeIdentifier`` => ``unique_identifier``
+    - ``CosmologyCurrentRedshift`` => ``current_redshift``
+    - ``ComovingCoordinates`` => ``cosmological_simulation``
+    - ``CosmologyOmegaMatterNow`` => ``omega_matter``
+    - ``CosmologyOmegaLambdaNow`` => ``omega_lambda``
+    - ``CosmologyHubbleConstantNow`` => ``hubble_constant``
+
+ * Do not assume that the domain runs from 0 .. 1.  This is not true
+   everywhere.
+ * Variable names should be short but descriptive.
+ * No globals!
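+
+For example, with a loaded dataset the generic properties are used like this
+(using the ``IsolatedGalaxy`` sample dataset referenced elsewhere in the
+docs):
+
+.. code-block:: python
+
+   import yt
+
+   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
+   print(ds.refine_by)          # rather than Enzo's ``RefineBy``
+   print(ds.domain_dimensions)  # rather than ``TopGridDimensions``
+   print(ds.current_time)       # rather than ``InitialTime``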

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 CREDITS
--- a/CREDITS
+++ b/CREDITS
@@ -4,20 +4,30 @@
                 Tom Abel (tabel at stanford.edu)
                 Gabriel Altay (gabriel.altay at gmail.com)
                 Kenza Arraki (karraki at gmail.com)
+                Kirk Barrow (kssbarrow at gatech.edu)
+                Ricarda Beckmann (Ricarda.Beckmann at astro.ox.ac.uk)
                 Elliott Biondo (biondo at wisc.edu)
                 Alex Bogert (fbogert at ucsc.edu)
+                André-Patrick Bubel (code at andre-bubel.de)
                 Pengfei Chen (madcpf at gmail.com)
                 David Collins (dcollins4096 at gmail.com)
                 Brian Crosby (crosby.bd at gmail.com)
                 Andrew Cunningham (ajcunn at gmail.com)
                 Miguel de Val-Borro (miguel.deval at gmail.com)
+                Bili Dong (qobilidop at gmail.com)
+                Nicholas Earl (nchlsearl at gmail.com)
                 Hilary Egan (hilaryye at gmail.com)
+                Daniel Fenn (df11c at my.fsu.edu)
                 John Forces (jforbes at ucolick.org)
+                Adam Ginsburg (keflavich at gmail.com)
                 Sam Geen (samgeen at gmail.com)
                 Nathan Goldbaum (goldbaum at ucolick.org)
+                William Gray (graywilliamj at gmail.com)
                 Markus Haider (markus.haider at uibk.ac.at)
                 Eric Hallman (hallman13 at gmail.com)
                 Cameron Hummels (chummels at gmail.com)
+                Anni Järvenpää (anni.jarvenpaa at gmail.com)
+                Allyson Julian (astrohckr at gmail.com)
                 Christian Karch (chiffre at posteo.de)
                 Ben W. Keller (kellerbw at mcmaster.ca)
                 Ji-hoon Kim (me at jihoonkim.org)
@@ -25,11 +35,15 @@
                 Kacper Kowalik (xarthisius.kk at gmail.com)
                 Mark Krumholz (mkrumhol at ucsc.edu)
                 Michael Kuhlen (mqk at astro.berkeley.edu)
+                Meagan Lang (langmm.astro at gmail.com)
+                Doris Lee (dorislee at berkeley.edu)
                 Eve Lee (elee at cita.utoronto.ca)
                 Sam Leitner (sam.leitner at gmail.com)
+                Stuart Levy (salevy at illinois.edu)
                 Yuan Li (yuan at astro.columbia.edu)
                 Chris Malone (chris.m.malone at gmail.com)
                 Josh Maloney (joshua.moloney at colorado.edu)
+                Jonah Miller (jonah.maxwell.miller at gmail.com)
                 Chris Moody (cemoody at ucsc.edu)
                 Stuart Mumford (stuart at mumford.me.uk)
                 Andrew Myers (atmyers at astro.berkeley.edu)
@@ -44,6 +58,7 @@
                 Mark Richardson (Mark.L.Richardson at asu.edu)
                 Thomas Robitaille (thomas.robitaille at gmail.com)
                 Anna Rosen (rosen at ucolick.org)
+                Chuck Rozhon (rozhon2 at illinois.edu)
                 Douglas Rudd (drudd at uchicago.edu)
                 Anthony Scopatz (scopatz at gmail.com)
                 Noel Scudder (noel.scudder at stonybrook.edu)
@@ -59,6 +74,7 @@
                 Ji Suoqing (jisuoqing at gmail.com)
                 Elizabeth Tasker (tasker at astro1.sci.hokudai.ac.jp)
                 Benjamin Thompson (bthompson2090 at gmail.com)
+                Robert Thompson (rthompsonj at gmail.com)
                 Stephanie Tonnesen (stonnes at gmail.com)
                 Matthew Turk (matthewturk at gmail.com)
                 Rich Wagner (rwagner at physics.ucsd.edu)

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/coding_styleguide.txt
--- a/doc/coding_styleguide.txt
+++ /dev/null
@@ -1,80 +0,0 @@
-Style Guide for Coding in yt
-============================
-
-Coding Style Guide
-------------------
-
- * In general, follow PEP-8 guidelines.
-   http://www.python.org/dev/peps/pep-0008/
- * Classes are ConjoinedCapitals, methods and functions are
-   lowercase_with_underscores.
- * Use 4 spaces, not tabs, to represent indentation.
- * Line widths should not be more than 80 characters.
- * Do not use nested classes unless you have a very good reason to, such as
-   requiring a namespace or class-definition modification.  Classes should live
-   at the top level.  __metaclass__ is exempt from this.
- * Do not use unnecessary parenthesis in conditionals.  if((something) and
-   (something_else)) should be rewritten as if something and something_else.
-   Python is more forgiving than C.
- * Avoid copying memory when possible. For example, don't do 
-   "a = a.reshape(3,4)" when "a.shape = (3,4)" will do, and "a = a * 3" should
-   be "np.multiply(a, 3, a)".
- * In general, avoid all double-underscore method names: __something is usually
-   unnecessary.
- * When writing a subclass, use the super built-in to access the super class,
-   rather than explicitly. Ex: "super(SpecialGrid, self).__init__()" rather than
-   "SpecialGrid.__init__()".
- * Doc strings should describe input, output, behavior, and any state changes
-   that occur on an object.  See the file `doc/docstring_example.txt` for a
-   fiducial example of a docstring.
-
-API Guide
----------
-
- * Do not import "*" from anything other than "yt.funcs".
- * Internally, only import from source files directly -- instead of:
-
-   from yt.visualization.api import ProjectionPlot
-
-   do:
-
-   from yt.visualization.plot_window import ProjectionPlot
-
- * Numpy is to be imported as "np", after a long time of using "na".
- * Do not use too many keyword arguments.  If you have a lot of keyword
-   arguments, then you are doing too much in __init__ and not enough via
-   parameter setting.
- * In function arguments, place spaces before commas.  def something(a,b,c)
-   should be def something(a, b, c).
- * Don't create a new class to replicate the functionality of an old class --
-   replace the old class.  Too many options makes for a confusing user
-   experience.
- * Parameter files external to yt are a last resort.
- * The usage of the **kwargs construction should be avoided.  If they cannot
-   be avoided, they must be explained, even if they are only to be passed on to
-   a nested function.
-
-Variable Names and Enzo-isms
-----------------------------
-
- * Avoid Enzo-isms.  This includes but is not limited to:
-   * Hard-coding parameter names that are the same as those in Enzo.  The
-     following translation table should be of some help.  Note that the
-     parameters are now properties on a Dataset subclass: you access them
-     like ds.refine_by .
-     * RefineBy => refine_by
-     * TopGridRank => dimensionality
-     * TopGridDimensions => domain_dimensions
-     * InitialTime => current_time
-     * DomainLeftEdge => domain_left_edge
-     * DomainRightEdge => domain_right_edge
-     * CurrentTimeIdentifier => unique_identifier
-     * CosmologyCurrentRedshift => current_redshift
-     * ComovingCoordinates => cosmological_simulation
-     * CosmologyOmegaMatterNow => omega_matter
-     * CosmologyOmegaLambdaNow => omega_lambda
-     * CosmologyHubbleConstantNow => hubble_constant
-   * Do not assume that the domain runs from 0 .. 1.  This is not true
-     everywhere.
- * Variable names should be short but descriptive.
- * No globals!

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/docstring_example.txt
--- a/doc/docstring_example.txt
+++ b/doc/docstring_example.txt
@@ -1,86 +0,0 @@
-    r"""A one-line summary that does not use variable names or the
-    function name.
-
-    Several sentences providing an extended description. Refer to
-    variables using back-ticks, e.g. `var`.
-
-    Parameters
-    ----------
-    var1 : array_like
-        Array_like means all those objects -- lists, nested lists, etc. --
-        that can be converted to an array.  We can also refer to
-        variables like `var1`.
-    var2 : int
-        The type above can either refer to an actual Python type
-        (e.g. ``int``), or describe the type of the variable in more
-        detail, e.g. ``(N,) ndarray`` or ``array_like``.
-    Long_variable_name : {'hi', 'ho'}, optional
-        Choices in brackets, default first when optional.
-
-    Returns
-    -------
-    describe : type
-        Explanation
-    output : type
-        Explanation
-    tuple : type
-        Explanation
-    items : type
-        even more explaining
-
-    Other Parameters
-    ----------------
-    only_seldom_used_keywords : type
-        Explanation
-    common_parameters_listed_above : type
-        Explanation
-
-    Raises
-    ------
-    BadException
-        Because you shouldn't have done that.
-
-    See Also
-    --------
-    otherfunc : relationship (optional)
-    newfunc : Relationship (optional), which could be fairly long, in which
-              case the line wraps here.
-    thirdfunc, fourthfunc, fifthfunc
-
-    Notes
-    -----
-    Notes about the implementation algorithm (if needed).
-
-    This can have multiple paragraphs.
-
-    You may include some math:
-
-    .. math:: X(e^{j\omega } ) = x(n)e^{ - j\omega n}
-
-    And even use a greek symbol like :math:`omega` inline.
-
-    References
-    ----------
-    Cite the relevant literature, e.g. [1]_.  You may also cite these
-    references in the notes section above.
-
-    .. [1] O. McNoleg, "The integration of GIS, remote sensing,
-       expert systems and adaptive co-kriging for environmental habitat
-       modelling of the Highland Haggis using object-oriented, fuzzy-logic
-       and neural-network techniques," Computers & Geosciences, vol. 22,
-       pp. 585-588, 1996.
-
-    Examples
-    --------
-    These are written in doctest format, and should illustrate how to
-    use the function.  Use the variables 'ds' for the dataset, 'pc' for
-    a plot collection, 'c' for a center, and 'L' for a vector. 
-
-    >>> a=[1,2,3]
-    >>> print [x + 3 for x in a]
-    [4, 5, 6]
-    >>> print "a\n\nb"
-    a
-    b
-
-    """

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/get_yt.sh
--- a/doc/get_yt.sh
+++ b/doc/get_yt.sh
@@ -23,7 +23,7 @@
 DEST_SUFFIX="yt-conda"
 DEST_DIR="`pwd`/${DEST_SUFFIX/ /}"   # Installation location
 BRANCH="yt" # This is the branch to which we will forcibly update.
-INST_YT_SOURCE=1 # Do we do a source install of yt?
+INST_YT_SOURCE=0 # Do we do a source install of yt?
 
 ##################################################################
 #                                                                #
@@ -37,7 +37,7 @@
 # ( SOMECOMMAND 2>&1 ) 1>> ${LOG_FILE} || do_exit
 
 MINICONDA_URLBASE="http://repo.continuum.io/miniconda"
-MINICONDA_VERSION="1.9.1"
+MINICONDA_VERSION="latest"
 YT_RECIPE_REPO="https://bitbucket.org/yt_analysis/yt_conda/raw/default"
 
 function do_exit
@@ -61,12 +61,14 @@
     ( $* 2>&1 ) 1>> ${LOG_FILE} || do_exit
 }
 
-function get_ytproject
-{
-    [ -e $1 ] && return
-    echo "Downloading $1 from yt-project.org"
-    ${GETFILE} "http://yt-project.org/dependencies/$1" || do_exit
-    ( ${SHASUM} -c $1.sha512 2>&1 ) 1>> ${LOG_FILE} || do_exit
+# These are needed to prevent pushd and popd from printing to stdout
+
+function pushd () {
+    command pushd "$@" > /dev/null
+}
+
+function popd () {
+    command popd "$@" > /dev/null
 }
 
 function get_ytdata
@@ -101,122 +103,125 @@
 echo "This will install Miniconda from Continuum Analytics, the necessary"
 echo "packages to run yt, and create a self-contained environment for you to"
 echo "use yt.  Additionally, Conda itself provides the ability to install"
-echo "many other packages that can be used for other purposes."
+echo "many other packages that can be used for other purposes using the"
+echo "'conda install' command."
 echo
 MYOS=`uname -s`       # A guess at the OS
-if [ "${MYOS##Darwin}" != "${MYOS}" ]
+if [ $INST_YT_SOURCE -ne 0 ]
 then
-  echo "Looks like you're running on Mac OSX."
-  echo
-  echo "NOTE: you must have the Xcode command line tools installed."
-  echo
-  echo "The instructions for obtaining these tools varies according"
-  echo "to your exact OS version.  On older versions of OS X, you"
-  echo "must register for an account on the apple developer tools"
-  echo "website: https://developer.apple.com/downloads to obtain the"
-  echo "download link."
-  echo
-  echo "We have gathered some additional instructions for each"
-  echo "version of OS X below. If you have trouble installing yt"
-  echo "after following these instructions, don't hesitate to contact"
-  echo "the yt user's e-mail list."
-  echo
-  echo "You can see which version of OSX you are running by clicking"
-  echo "'About This Mac' in the apple menu on the left hand side of"
-  echo "menu bar.  We're assuming that you've installed all operating"
-  echo "system updates; if you have an older version, we suggest"
-  echo "running software update and installing all available updates."
-  echo
-  echo "OS X 10.5.8: search for and download Xcode 3.1.4 from the"
-  echo "Apple developer tools website."
-  echo
-  echo "OS X 10.6.8: search for and download Xcode 3.2 from the Apple"
-  echo "developer tools website.  You can either download the"
-  echo "Xcode 3.2.2 Developer Tools package (744 MB) and then use"
-  echo "Software Update to update to XCode 3.2.6 or"
-  echo "alternatively, you can download the Xcode 3.2.6/iOS SDK"
-  echo "bundle (4.1 GB)."
-  echo
-  echo "OS X 10.7.5: download Xcode 4.2 from the mac app store"
-  echo "(search for Xcode)."
-  echo "Alternatively, download the Xcode command line tools from"
-  echo "the Apple developer tools website."
-  echo
-  echo "OS X 10.8.2: download Xcode 4.6.1 from the mac app store."
-  echo "(search for Xcode)."
-  echo "Additionally, you will have to manually install the Xcode"
-  echo "command line tools, see:"
-  echo "http://stackoverflow.com/questions/9353444"
-  echo "Alternatively, download the Xcode command line tools from"
-  echo "the Apple developer tools website."
-  echo
-  echo "NOTE: It's possible that the installation will fail, if so,"
-  echo "please set the following environment variables, remove any"
-  echo "broken installation tree, and re-run this script verbatim."
-  echo
-  echo "$ export CC=gcc"
-  echo "$ export CXX=g++"
-  echo
-  MINICONDA_OS="MacOSX-x86_64"
+    if [ "${MYOS##Darwin}" != "${MYOS}" ]
+    then
+        echo "Looks like you're running on Mac OSX."
+        echo
+        echo "NOTE: you must have the Xcode command line tools installed."
+        echo
+        echo "The instructions for obtaining these tools varies according"
+        echo "to your exact OS version.  On older versions of OS X, you"
+        echo "must register for an account on the apple developer tools"
+        echo "website: https://developer.apple.com/downloads to obtain the"
+        echo "download link."
+        echo
+        echo "We have gathered some additional instructions for each"
+        echo "version of OS X below. If you have trouble installing yt"
+        echo "after following these instructions, don't hesitate to contact"
+        echo "the yt user's e-mail list."
+        echo
+        echo "You can see which version of OSX you are running by clicking"
+        echo "'About This Mac' in the apple menu on the left hand side of"
+        echo "menu bar.  We're assuming that you've installed all operating"
+        echo "system updates; if you have an older version, we suggest"
+        echo "running software update and installing all available updates."
+        echo
+        echo "OS X 10.5.8: search for and download Xcode 3.1.4 from the"
+        echo "Apple developer tools website."
+        echo
+        echo "OS X 10.6.8: search for and download Xcode 3.2 from the Apple"
+        echo "developer tools website.  You can either download the"
+        echo "Xcode 3.2.2 Developer Tools package (744 MB) and then use"
+        echo "Software Update to update to XCode 3.2.6 or"
+        echo "alternatively, you can download the Xcode 3.2.6/iOS SDK"
+        echo "bundle (4.1 GB)."
+        echo
+        echo "OS X 10.7.5: download Xcode 4.2 from the mac app store"
+        echo "(search for Xcode)."
+        echo "Alternatively, download the Xcode command line tools from"
+        echo "the Apple developer tools website."
+        echo
+        echo "OS X 10.8.4, 10.9, 10.10, and 10.11:"
+        echo "download the appropriate version of Xcode from the"
+        echo "mac app store (search for Xcode)."
+        echo
+        echo "Additionally, you will have to manually install the Xcode"
+        echo "command line tools."
+        echo
+        echo "For OS X 10.8, see:"
+        echo "http://stackoverflow.com/questions/9353444"
+        echo
+        echo "For OS X 10.9 and newer the command line tools can be installed"
+        echo "with the following command:"
+        echo "    xcode-select --install"
+    fi
+    if [ "${MYOS##Linux}" != "${MYOS}" ]
+    then
+        echo "Looks like you're on Linux."
+        echo
+        echo "Please make sure you have the developer tools for your OS "
+        echo "installed."
+        echo
+        if [ -f /etc/SuSE-release ] && [ `grep --count SUSE /etc/SuSE-release` -gt 0 ]
+        then
+            echo "Looks like you're on an OpenSUSE-compatible machine."
+            echo
+            echo "You need to have these packages installed:"
+            echo
+            echo "  * devel_C_C++"
+            echo "  * libuuid-devel"
+            echo "  * gcc-c++"
+            echo "  * chrpath"
+            echo
+            echo "You can accomplish this by executing:"
+            echo
+            echo "$ sudo zypper install -t pattern devel_C_C++"
+            echo "$ sudo zypper install gcc-c++ libuuid-devel zip"
+            echo "$ sudo zypper install chrpath"
+        fi
+        if [ -f /etc/lsb-release ] && [ `grep --count buntu /etc/lsb-release` -gt 0 ]
+        then
+            echo "Looks like you're on an Ubuntu-compatible machine."
+            echo
+            echo "You need to have these packages installed:"
+            echo
+            echo "  * libssl-dev"
+            echo "  * build-essential"
+            echo "  * libncurses5"
+            echo "  * libncurses5-dev"
+            echo "  * uuid-dev"
+            echo "  * chrpath"
+            echo
+            echo "You can accomplish this by executing:"
+            echo
+            echo "$ sudo apt-get install libssl-dev build-essential libncurses5 libncurses5-dev zip uuid-dev chrpath"
+            echo
+        fi
+        echo
+        echo "If you are running on a supercomputer or other module-enabled"
+        echo "system, please make sure that the GNU module has been loaded."
+        echo
+    fi
 fi
-if [ "${MYOS##Linux}" != "${MYOS}" ]
+if [ "${MYOS##x86_64}" != "${MYOS}" ]
 then
-  echo "Looks like you're on Linux."
-  echo
-  echo "Please make sure you have the developer tools for your OS installed."
-  echo
-  if [ -f /etc/SuSE-release ] && [ `grep --count SUSE /etc/SuSE-release` -gt 0 ]
-  then
-    echo "Looks like you're on an OpenSUSE-compatible machine."
-    echo
-    echo "You need to have these packages installed:"
-    echo
-    echo "  * devel_C_C++"
-    echo "  * libopenssl-devel"
-    echo "  * libuuid-devel"
-    echo "  * zip"
-    echo "  * gcc-c++"
-    echo "  * chrpath"
-    echo
-    echo "You can accomplish this by executing:"
-    echo
-    echo "$ sudo zypper install -t pattern devel_C_C++"
-    echo "$ sudo zypper install gcc-c++ libopenssl-devel libuuid-devel zip"
-    echo "$ sudo zypper install chrpath"
-  fi
-  if [ -f /etc/lsb-release ] && [ `grep --count buntu /etc/lsb-release` -gt 0 ]
-  then
-    echo "Looks like you're on an Ubuntu-compatible machine."
-    echo
-    echo "You need to have these packages installed:"
-    echo
-    echo "  * libssl-dev"
-    echo "  * build-essential"
-    echo "  * libncurses5"
-    echo "  * libncurses5-dev"
-    echo "  * zip"
-    echo "  * uuid-dev"
-    echo "  * chrpath"
-    echo
-    echo "You can accomplish this by executing:"
-    echo
-    echo "$ sudo apt-get install libssl-dev build-essential libncurses5 libncurses5-dev zip uuid-dev chrpath"
-    echo
-  fi
-  echo
-  echo "If you are running on a supercomputer or other module-enabled"
-  echo "system, please make sure that the GNU module has been loaded."
-  echo
-  if [ "${MYOS##x86_64}" != "${MYOS}" ]
-  then
     MINICONDA_OS="Linux-x86_64"
-  elif [ "${MYOS##i386}" != "${MYOS}" ]
-  then
+elif [ "${MYOS##i386}" != "${MYOS}" ]
+then
     MINICONDA_OS="Linux-x86"
-  else
-    echo "Not sure which type of Linux you're on.  Going with x86_64."
+elif [ "${MYOS##Darwin}" != "${MYOS}" ]
+then
+    MINICONDA_OS="MacOSX-x86_64"
+else
+    echo "Not sure which Linux distro you are running."
+    echo "Going with x86_64 architecture."
     MINICONDA_OS="Linux-x86_64"
-  fi
 fi
 echo
 echo "If you'd rather not continue, hit Ctrl-C."
@@ -233,7 +238,7 @@
 if type -P wget &>/dev/null
 then
     echo "Using wget"
-    export GETFILE="wget -nv"
+    export GETFILE="wget -nv -nc"
 else
     echo "Using curl"
     export GETFILE="curl -sSO"
@@ -250,9 +255,6 @@
 
 log_cmd bash ./${MINICONDA_PKG} -b -p $DEST_DIR
 
-# I don't think we need OR want this anymore:
-#export LD_LIBRARY_PATH=${DEST_DIR}/lib:$LD_LIBRARY_PATH
-
 # This we *do* need.
 export PATH=${DEST_DIR}/bin:$PATH
 
@@ -261,51 +263,40 @@
 
 declare -a YT_DEPS
 YT_DEPS+=('python')
-YT_DEPS+=('distribute')
-YT_DEPS+=('libpng')
+YT_DEPS+=('setuptools')
 YT_DEPS+=('numpy')
-YT_DEPS+=('pygments')
-YT_DEPS+=('jinja2')
-YT_DEPS+=('tornado')
-YT_DEPS+=('pyzmq')
+YT_DEPS+=('jupyter')
 YT_DEPS+=('ipython')
 YT_DEPS+=('sphinx')
 YT_DEPS+=('h5py')
 YT_DEPS+=('matplotlib')
 YT_DEPS+=('cython')
 YT_DEPS+=('nose')
+YT_DEPS+=('conda-build')
+YT_DEPS+=('mercurial')
+YT_DEPS+=('sympy')
 
 # Here is our dependency list for yt
-log_cmd conda config --system --add channels http://repo.continuum.io/pkgs/free
-log_cmd conda config --system --add channels http://repo.continuum.io/pkgs/dev
-log_cmd conda config --system --add channels http://repo.continuum.io/pkgs/gpl
 log_cmd conda update --yes conda
 
-echo "Current dependencies: ${YT_DEPS[@]}"
 log_cmd echo "DEPENDENCIES" ${YT_DEPS[@]}
-log_cmd conda install --yes ${YT_DEPS[@]}
-
-echo "Installing mercurial."
-get_ytrecipe mercurial
+for YT_DEP in "${YT_DEPS[@]}"; do
+    echo "Installing $YT_DEP"
+    log_cmd conda install --yes ${YT_DEP}
+done
 
 if [ $INST_YT_SOURCE -eq 0 ]
 then
-  echo "Installing yt as a package."
-  get_ytrecipe yt
+  echo "Installing yt"
+  log_cmd conda install --yes yt
 else
-  # We do a source install.
-  YT_DIR="${DEST_DIR}/src/yt-hg"
-  export PNG_DIR=${DEST_DIR}
-  export FTYPE_DIR=${DEST_DIR}
-  export HDF5_DIR=${DEST_DIR}
-  log_cmd hg clone -r ${BRANCH} https://bitbucket.org/yt_analysis/yt ${YT_DIR}
-  pushd ${YT_DIR}
-  log_cmd python setup.py develop
-  popd
-  log_cmd cp ${YT_DIR}/doc/activate ${DEST_DIR}/bin/activate 
-  log_cmd sed -i.bak -e "s,__YT_DIR__,${DEST_DIR}," ${DEST_DIR}/bin/activate
-  log_cmd cp ${YT_DIR}/doc/activate.csh ${DEST_DIR}/bin/activate.csh
-  log_cmd sed -i.bak -e "s,__YT_DIR__,${DEST_DIR}," ${DEST_DIR}/bin/activate.csh
+    # We do a source install.
+    echo "Installing yt from source"
+    YT_DIR="${DEST_DIR}/src/yt-hg"
+    log_cmd hg clone -r ${BRANCH} https://bitbucket.org/yt_analysis/yt ${YT_DIR}
+    pushd ${YT_DIR}
+    log_cmd python setup.py develop
+    popd
 fi
 
 echo
@@ -314,34 +305,26 @@
 echo
 echo "yt and the Conda system are now installed in $DEST_DIR ."
 echo
-if [ $INST_YT_SOURCE -eq 0 ]
-then
-  echo "You must now modify your PATH variable by prepending:"
-  echo 
-  echo "   $DEST_DIR/bin"
-  echo
-  echo "For example, if you use bash, place something like this at the end"
-  echo "of your ~/.bashrc :"
-  echo
-  echo "   export PATH=$DEST_DIR/bin:$PATH"
-else
-  echo "To run from this new installation, use the activate script for this "
-  echo "environment."
-  echo
-  echo "    $ source $DEST_DIR/bin/activate"
-  echo
-  echo "This modifies the environment variables YT_DEST, PATH, PYTHONPATH, and"
-  echo "LD_LIBRARY_PATH to match your new yt install.  If you use csh, just"
-  echo "append .csh to the above."
-fi
+echo "You must now modify your PATH variable by prepending:"
+echo 
+echo "   $DEST_DIR/bin"
+echo
+echo "On Bash-style shells you can copy/paste the following command to "
+echo "temporarily activate the yt installtion:"
+echo
+echo "    export PATH=$DEST_DIR/bin:\$PATH"
+echo
+echo "and on csh-style shells:"
+echo
+echo "    setenv PATH $DEST_DIR/bin:\$PATH"
+echo
+echo "You can also update the init file appropriate for your shell to include"
+echo "the same command."
 echo
 echo "To get started with yt, check out the orientation:"
 echo
 echo "    http://yt-project.org/doc/orientation/"
 echo
-echo "or just activate your environment and run 'yt serve' to bring up the"
-echo "yt GUI."
-echo
 echo "For support, see the website and join the mailing list:"
 echo
 echo "    http://yt-project.org/"

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -233,53 +233,61 @@
         echo
         echo "NOTE: you must have the Xcode command line tools installed."
         echo
-	echo "The instructions for obtaining these tools varies according"
-	echo "to your exact OS version.  On older versions of OS X, you"
-	echo "must register for an account on the apple developer tools"
-	echo "website: https://developer.apple.com/downloads to obtain the"
-	echo "download link."
-	echo
-	echo "We have gathered some additional instructions for each"
-	echo "version of OS X below. If you have trouble installing yt"
-	echo "after following these instructions, don't hesitate to contact"
-	echo "the yt user's e-mail list."
-	echo
-	echo "You can see which version of OSX you are running by clicking"
-	echo "'About This Mac' in the apple menu on the left hand side of"
-	echo "menu bar.  We're assuming that you've installed all operating"
-	echo "system updates; if you have an older version, we suggest"
-	echo "running software update and installing all available updates."
-	echo
+        echo "The instructions for obtaining these tools varies according"
+        echo "to your exact OS version.  On older versions of OS X, you"
+        echo "must register for an account on the apple developer tools"
+        echo "website: https://developer.apple.com/downloads to obtain the"
+        echo "download link."
+        echo
+        echo "We have gathered some additional instructions for each"
+        echo "version of OS X below. If you have trouble installing yt"
+        echo "after following these instructions, don't hesitate to contact"
+        echo "the yt user's e-mail list."
+        echo
+        echo "You can see which version of OSX you are running by clicking"
+        echo "'About This Mac' in the apple menu on the left hand side of"
+        echo "menu bar.  We're assuming that you've installed all operating"
+        echo "system updates; if you have an older version, we suggest"
+        echo "running software update and installing all available updates."
+        echo
         echo "OS X 10.5.8: search for and download Xcode 3.1.4 from the"
-	echo "Apple developer tools website."
+        echo "Apple developer tools website."
         echo
         echo "OS X 10.6.8: search for and download Xcode 3.2 from the Apple"
-	echo "developer tools website.  You can either download the"
-	echo "Xcode 3.2.2 Developer Tools package (744 MB) and then use"
-	echo "Software Update to update to XCode 3.2.6 or"
-	echo "alternatively, you can download the Xcode 3.2.6/iOS SDK"
-	echo "bundle (4.1 GB)."
+        echo "developer tools website.  You can either download the"
+        echo "Xcode 3.2.2 Developer Tools package (744 MB) and then use"
+        echo "Software Update to update to XCode 3.2.6 or"
+        echo "alternatively, you can download the Xcode 3.2.6/iOS SDK"
+        echo "bundle (4.1 GB)."
         echo
         echo "OS X 10.7.5: download Xcode 4.2 from the mac app store"
-	echo "(search for Xcode)."
+        echo "(search for Xcode)."
         echo "Alternatively, download the Xcode command line tools from"
         echo "the Apple developer tools website."
         echo
-	echo "OS X 10.8.4, 10.9, and 10.10: download the appropriate version of"
-	echo "Xcode from the mac app store (search for Xcode)."
-    echo
-	echo "Additionally, you will have to manually install the Xcode"
-	echo "command line tools."
-    echo
-    echo "For OS X 10.8, see:"
-   	echo "http://stackoverflow.com/questions/9353444"
-	echo
-    echo "For OS X 10.9 and 10.10, the command line tools can be installed"
-    echo "with the following command:"
-    echo "    xcode-select --install"
-    echo
-    OSX_VERSION=`sw_vers -productVersion`
-    if [ "${OSX_VERSION##10.8}" != "${OSX_VERSION}" ]
+        echo "OS X 10.8.4, 10.9, 10.10, and 10.11:"
+        echo "download the appropriate version of Xcode from the"
+        echo "mac app store (search for Xcode)."
+        echo
+        echo "Additionally, you will have to manually install the Xcode"
+        echo "command line tools."
+        echo
+        echo "For OS X 10.8, see:"
+        echo "http://stackoverflow.com/questions/9353444"
+        echo
+        echo "For OS X 10.9 and newer the command line tools can be installed"
+        echo "with the following command:"
+        echo "    xcode-select --install"
+        echo
+        echo "For OS X 10.11, you will additionally need to install the OpenSSL"
+        echo "library using a package manager like homebrew or macports."
+        echo "If you install fails with a message like"
+        echo "    ImportError: cannot import HTTPSHandler"
+        echo "then you do not have the OpenSSL headers available in a location"
+        echo "visible to your C compiler. Consider installing yt using the"
+        echo "get_yt.sh script instead, as that bundles OpenSSL."
+        OSX_VERSION=`sw_vers -productVersion`
+        if [ "${OSX_VERSION##10.8}" != "${OSX_VERSION}" ]
         then
             MPL_SUPP_CFLAGS="${MPL_SUPP_CFLAGS} -mmacosx-version-min=10.7"
             MPL_SUPP_CXXFLAGS="${MPL_SUPP_CXXFLAGS} -mmacosx-version-min=10.7"
@@ -358,17 +366,17 @@
     fi
     if [ $INST_SCIPY -eq 1 ]
     then
-	echo
-	echo "Looks like you've requested that the install script build SciPy."
-	echo
-	echo "If the SciPy build fails, please uncomment one of the the lines"
-	echo "at the top of the install script that sets NUMPY_ARGS, delete"
-	echo "any broken installation tree, and re-run the install script"
-	echo "verbatim."
-	echo
-	echo "If that doesn't work, don't hesitate to ask for help on the yt"
-	echo "user's mailing list."
-	echo
+    echo
+    echo "Looks like you've requested that the install script build SciPy."
+    echo
+    echo "If the SciPy build fails, please uncomment one of the the lines"
+    echo "at the top of the install script that sets NUMPY_ARGS, delete"
+    echo "any broken installation tree, and re-run the install script"
+    echo "verbatim."
+    echo
+    echo "If that doesn't work, don't hesitate to ask for help on the yt"
+    echo "user's mailing list."
+    echo
     fi
     if [ ! -z "${CFLAGS}" ]
     then
@@ -490,9 +498,9 @@
 
 if [ $INST_PY3 -eq 1 ]
 then
-	 PYTHON_EXEC='python3.4'
+     PYTHON_EXEC='python3.4'
 else 
-	 PYTHON_EXEC='python2.7'
+     PYTHON_EXEC='python2.7'
 fi
 
 function do_setup_py
@@ -899,28 +907,28 @@
 else
     if [ ! -e $SCIPY/done ]
     then
-	if [ ! -e BLAS/done ]
-	then
-	    tar xfz blas.tar.gz
-	    echo "Building BLAS"
-	    cd BLAS
-	    gfortran -O2 -fPIC -fno-second-underscore -c *.f
-	    ( ar r libfblas.a *.o 2>&1 ) 1>> ${LOG_FILE}
-	    ( ranlib libfblas.a 2>&1 ) 1>> ${LOG_FILE}
-	    rm -rf *.o
-	    touch done
-	    cd ..
-	fi
-	if [ ! -e $LAPACK/done ]
-	then
-	    tar xfz $LAPACK.tar.gz
-	    echo "Building LAPACK"
-	    cd $LAPACK/
-	    cp INSTALL/make.inc.gfortran make.inc
-	    ( make lapacklib OPTS="-fPIC -O2" NOOPT="-fPIC -O0" CFLAGS=-fPIC LDFLAGS=-fPIC 2>&1 ) 1>> ${LOG_FILE} || do_exit
-	    touch done
-	    cd ..
-	fi
+    if [ ! -e BLAS/done ]
+    then
+        tar xfz blas.tar.gz
+        echo "Building BLAS"
+        cd BLAS
+        gfortran -O2 -fPIC -fno-second-underscore -c *.f
+        ( ar r libfblas.a *.o 2>&1 ) 1>> ${LOG_FILE}
+        ( ranlib libfblas.a 2>&1 ) 1>> ${LOG_FILE}
+        rm -rf *.o
+        touch done
+        cd ..
+    fi
+    if [ ! -e $LAPACK/done ]
+    then
+        tar xfz $LAPACK.tar.gz
+        echo "Building LAPACK"
+        cd $LAPACK/
+        cp INSTALL/make.inc.gfortran make.inc
+        ( make lapacklib OPTS="-fPIC -O2" NOOPT="-fPIC -O0" CFLAGS=-fPIC LDFLAGS=-fPIC 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        touch done
+        cd ..
+    fi
     fi
     export BLAS=$PWD/BLAS/libfblas.a
     export LAPACK=$PWD/$LAPACK/liblapack.a
@@ -1030,7 +1038,7 @@
 cd $MY_PWD
 
 if !( ( ${DEST_DIR}/bin/${PYTHON_EXEC} -c "import readline" 2>&1 )>> ${LOG_FILE}) || \
-	[[ "${MYOS##Darwin}" != "${MYOS}" && $INST_PY3 -eq 1 ]] 
+    [[ "${MYOS##Darwin}" != "${MYOS}" && $INST_PY3 -eq 1 ]] 
 then
     if !( ( ${DEST_DIR}/bin/${PYTHON_EXEC} -c "import gnureadline" 2>&1 )>> ${LOG_FILE})
     then

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/source/analyzing/analysis_modules/absorption_spectrum.rst
--- a/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
+++ b/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
@@ -11,8 +11,8 @@
 with the path length of the ray through the cell.  Line profiles are 
 generated using a voigt profile based on the temperature field.  The lines 
 are then shifted according to the redshift recorded by the light ray tool 
-and (optionally) the line of sight peculiar velocity.  Inclusion of the 
-peculiar velocity requires setting ``get_los_velocity`` to True in the call to 
+and (optionally) the peculiar velocity of gas along the ray.  Inclusion of the 
+peculiar velocity requires setting ``use_peculiar_velocity`` to True in the call to 
 :meth:`~yt.analysis_modules.cosmological_observation.light_ray.light_ray.LightRay.make_light_ray`.
 
 The spectrum generator will output a file containing the wavelength and 

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -59,7 +59,7 @@
   from yt.analysis_modules.halo_finding.api import *
 
   ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')
-  halo_list = parallelHF(ds)
+  halo_list = HaloFinder(ds)
   halo_list.dump('MyHaloList')
 
 Ellipsoid Parameters

diff -r a3dc58449d7b5f4084c195a3040d829e21b44b31 -r c59f8909946037508870b6650c1a9de3b3f4a761 doc/source/analyzing/analysis_modules/light_ray_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_ray_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_ray_generator.rst
@@ -79,7 +79,7 @@
 
   lr.make_light_ray(seed=8675309,
                     fields=['temperature', 'density'],
-                    get_los_velocity=True)
+                    use_peculiar_velocity=True)
 
 The keyword arguments are:
 
@@ -107,8 +107,10 @@
 * ``data_filename`` (*string*): Path to output file for ray data.  
   Default: None.
 
-* ``get_los_velocity`` (*bool*): If True, the line of sight velocity is 
-  calculated for each point in the ray.  Default: True.
+* ``use_peculiar_velocity`` (*bool*): If True, the Doppler redshift from
+  the peculiar velocity of gas along the ray is calculated and added to the
+  cosmological redshift as the "effective" redshift.
+  Default: True.
 
 * ``redshift`` (*float*): Used with light rays made from single datasets to 
   specify a starting redshift for the ray.  If not used, the starting 

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/112deffd68da/
Changeset:   112deffd68da
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 18:38:53+00:00
Summary:     Update VR quickstart notebook to use new VR interface
Affected #:  1 file

diff -r c59f8909946037508870b6650c1a9de3b3f4a761 -r 112deffd68da8db44af99f3e9cd3b67b4b895135 doc/source/quickstart/6)_Volume_Rendering.ipynb
--- a/doc/source/quickstart/6)_Volume_Rendering.ipynb
+++ b/doc/source/quickstart/6)_Volume_Rendering.ipynb
@@ -1,96 +1,128 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# A Brief Demo of Volume Rendering\n",
+    "\n",
+    "This shows a small amount of volume rendering.  Really, just enough to get your feet wet!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To create a volume rendering, we need a camera and a transfer function.  We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function.  This means behavior for data outside these values is undefined.\n",
+    "\n",
+    "We then add on \"layers\" like an onion.  This function can accept a width (here specified) in data units, and also a color map.  Here we add on four layers.\n",
+    "\n",
+    "Finally, we create a camera.  The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function.  Once we've done that, we call `show` to actually cast our rays and display them inline."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc = yt.create_scene(ds)\n",
+    "\n",
+    "sc.camera.set_width(ds.quan(20, 'kpc'))\n",
+    "\n",
+    "source = sc.sources['source_00']\n",
+    "\n",
+    "tf = yt.ColorTransferFunction((-28, -24))\n",
+    "tf.add_layers(4, w=0.01)\n",
+    "\n",
+    "source.set_transfer_function(tf)\n",
+    "\n",
+    "sc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we want to apply a clipping, we can specify the `sigma_clip`.  This will clip the upper bounds to this value times the standard deviation of the values in the image array."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.show(sigma_clip=4)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There are several other options we can specify.  Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our gaussian layers."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc = yt.create_scene(ds)\n",
+    "\n",
+    "sc.camera.set_width(ds.quan(20, 'kpc'))\n",
+    "\n",
+    "source = sc.sources['source_00']\n",
+    "\n",
+    "source.set_fields('density', no_ghost=False)\n",
+    "\n",
+    "tf = yt.ColorTransferFunction((-28, -25))\n",
+    "tf.add_layers(4, w=0.03)\n",
+    "\n",
+    "source.set_transfer_function(tf)\n",
+    "\n",
+    "sc.show(sigma_clip=4.0)"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:2a24bbe82955f9d948b39cbd1b1302968ff57f62f73afb2c7a5c4953393d00ae"
+  "kernelspec": {
+   "display_name": "Python 2",
+   "language": "python",
+   "name": "python2"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 2
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython2",
+   "version": "2.7.10"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# A Brief Demo of Volume Rendering\n",
-      "\n",
-      "This shows a small amount of volume rendering.  Really, just enough to get your feet wet!"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To create a volume rendering, we need a camera and a transfer function.  We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function.  This means behavior for data outside these values is undefined.\n",
-      "\n",
-      "We then add on \"layers\" like an onion.  This function can accept a width (here specified) in data units, and also a color map.  Here we add on four layers.\n",
-      "\n",
-      "Finally, we create a camera.  The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function.  Once we've done that, we call `show` to actually cast our rays and display them inline."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tf = yt.ColorTransferFunction((-28, -24))\n",
-      "tf.add_layers(4, w=0.01)\n",
-      "cam = ds.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], (20, 'kpc'), 512, tf, fields=[\"density\"])\n",
-      "cam.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If we want to apply a clipping, we can specify the `sigma_clip`.  This will clip the upper bounds to this value times the standard deviation of the values in the image array."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cam.show(sigma_clip=4)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "There are several other options we can specify.  Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our gaussian layers."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tf = yt.ColorTransferFunction((-28, -25))\n",
-      "tf.add_layers(4, w=0.03)\n",
-      "cam = ds.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], (20.0, 'kpc'), 512, tf, no_ghost=False)\n",
-      "cam.show(sigma_clip=4.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
+ "nbformat": 4,
+ "nbformat_minor": 0
 }
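
The metadata hunk above is the gist of the nbformat 3 to 4 conversion: the
`worksheets` wrapper goes away in favor of a flat `cells` list, and
`kernelspec`/`language_info` replace the old `name`/`signature` fields. For
reference, the `nbformat` library can perform the same upgrade
programmatically; a minimal sketch (the filename is a placeholder):

    import nbformat

    # Read a v3 notebook, upgrading it to v4 in memory, then write it
    # back out in the new format.
    nb = nbformat.read("notebook.ipynb", as_version=4)
    nbformat.write(nb, "notebook.ipynb")
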


https://bitbucket.org/yt_analysis/yt/commits/90feb570e03e/
Changeset:   90feb570e03e
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 19:52:11+00:00
Summary:     Update particle trajectories notebook
Affected #:  1 file

diff -r 112deffd68da8db44af99f3e9cd3b67b4b895135 -r 90feb570e03e87bb05ff181a40440580349ad8a2 doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ b/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -1,357 +1,385 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import yt\n",
+    "import glob\n",
+    "from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
+    "from yt.config import ytcfg\n",
+    "path = ytcfg.get(\"yt\", \"test_data_dir\")\n",
+    "import matplotlib.pyplot as plt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+    "my_fns.sort()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(my_fns[0])\n",
+    "dd = ds.all_data()\n",
+    "indices = dd[\"particle_index\"].astype(\"int\")\n",
+    "print (indices)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (trajs[\"particle_position_x\"])\n",
+    "print (trajs[\"particle_position_x\"].shape)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.plot(trajs[\"particle_position_x\"][0].ndarray_view(), trajs[\"particle_position_y\"][0].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_position_x\"][1].ndarray_view(), trajs[\"particle_position_y\"][1].ndarray_view())"
+   ]
+  },
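
A concrete reading of the layout described above (continuing from the cells
in this notebook, so `trajs` is assumed to already exist):

    # trajs["particle_position_x"] has shape (n_particles, n_steps):
    # axis 0 selects a particle, axis 1 walks along its trajectory.
    x_all = trajs["particle_position_x"]
    x_p0 = x_all[0]        # full trajectory of the first particle
    x_p0_t0 = x_all[0, 0]  # its position at the first output time
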
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And we can plot the velocity fields as well:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.plot(trajs[\"particle_velocity_x\"][0].ndarray_view(), trajs[\"particle_velocity_y\"][0].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_velocity_x\"][1].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_x\"][1].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "particle1 = trajs.trajectory_from_index(1)\n",
+    "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_x\"].ndarray_view())\n",
+    "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_y\"].ndarray_view())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. We'll find the maximum density in the domain and obtain the indices of the particles within some radius of the center. But first, let's have a look at what we're getting:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+    "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles (identified by `\"particle_type == 1\"`) within a sphere of radius 0.5 Mpc:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n",
+    "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+    "my_fns.sort()\n",
+    "trajs = ParticleTrajectories(my_fns, indices)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import matplotlib.pyplot as plt\n",
+    "from mpl_toolkits.mplot3d import Axes3D\n",
+    "fig = plt.figure(figsize=(8.0, 8.0))\n",
+    "ax = fig.add_subplot(111, projection='3d')\n",
+    "ax.plot(trajs[\"particle_position_x\"][100].ndarray_view(), trajs[\"particle_position_y\"][100].ndarray_view(), \n",
+    "        trajs[\"particle_position_z\"][100].ndarray_view())\n",
+    "ax.plot(trajs[\"particle_position_x\"][8].ndarray_view(), trajs[\"particle_position_y\"][8].ndarray_view(), \n",
+    "        trajs[\"particle_position_z\"][8].ndarray_view())\n",
+    "ax.plot(trajs[\"particle_position_x\"][25].ndarray_view(), trajs[\"particle_position_y\"][25].ndarray_view(), \n",
+    "        trajs[\"particle_position_z\"][25].ndarray_view())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][100].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][8].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][25].ndarray_view())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "trajs.add_fields([\"density\"])"
+   ]
+  },
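
The interpolation described above is conceptually just sampling the grid
field at each particle's position. A one-dimensional sketch of the idea
(yt's actual implementation works on the full 3D AMR grids; all names here
are illustrative):

    import numpy as np

    grid_x = np.linspace(0.0, 1.0, 64)         # cell-center coordinates
    grid_density = np.exp(-(grid_x - 0.5)**2)  # some grid field
    particle_x = np.array([0.12, 0.47, 0.93])  # particle positions
    # Linearly interpolate the grid field to the particle positions
    particle_density = np.interp(particle_x, grid_x, grid_density)
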
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][100].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][8].ndarray_view())\n",
+    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][25].ndarray_view())\n",
+    "plt.yscale(\"log\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "trajs.write_out(\"halo_trajectories\") # This will write a separate file for each trajectory\n",
+    "trajs.write_out_h5(\"halo_trajectories.h5\") # This will write all trajectories to a single file"
+   ]
+  }
+ ],
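
If you want to inspect the HDF5 output outside of yt, `h5py` works; since
the internal layout isn't spelled out here, a safe sketch just lists
whatever groups and datasets were written:

    import h5py

    # Print the name of every group and dataset in the trajectory file
    with h5py.File("halo_trajectories.h5", "r") as f:
        f.visit(print)
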
  "metadata": {
-  "name": "",
-  "signature": "sha256:5ab80c6b33a115cb88c36fde8659434d14a852dd43b0b419f2bb0c04acf66278"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import yt\n",
-      "import glob\n",
-      "from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
-      "from yt.config import ytcfg\n",
-      "path = ytcfg.get(\"yt\", \"test_data_dir\")\n",
-      "import matplotlib.pyplot as plt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
-      "my_fns.sort()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(my_fns[0])\n",
-      "dd = ds.all_data()\n",
-      "indices = dd[\"particle_index\"].astype(\"int\")\n",
-      "print indices"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print trajs[\"particle_position_x\"]\n",
-      "print trajs[\"particle_position_x\"].shape"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plt.plot(trajs[\"particle_position_x\"][0].ndarray_view(), trajs[\"particle_position_y\"][0].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_position_x\"][1].ndarray_view(), trajs[\"particle_position_y\"][1].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "And we can plot the velocity fields as well:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plt.plot(trajs[\"particle_velocity_x\"][0].ndarray_view(), trajs[\"particle_velocity_y\"][0].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_velocity_x\"][1].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_x\"][1].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "particle1 = trajs.trajectory_from_index(1)\n",
-      "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_x\"].ndarray_view())\n",
-      "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_y\"].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-      "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n",
-      "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next we'll get the list of datasets we want, and create trajectories for these particles:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
-      "my_fns.sort()\n",
-      "trajs = ParticleTrajectories(my_fns, indices)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import matplotlib.pyplot as plt\n",
-      "from mpl_toolkits.mplot3d import Axes3D\n",
-      "fig = plt.figure(figsize=(8.0, 8.0))\n",
-      "ax = fig.add_subplot(111, projection='3d')\n",
-      "ax.plot(trajs[\"particle_position_x\"][100].ndarray_view(), trajs[\"particle_position_z\"][100].ndarray_view(), \n",
-      "        trajs[\"particle_position_z\"][100].ndarray_view())\n",
-      "ax.plot(trajs[\"particle_position_x\"][8].ndarray_view(), trajs[\"particle_position_z\"][8].ndarray_view(), \n",
-      "        trajs[\"particle_position_z\"][8].ndarray_view())\n",
-      "ax.plot(trajs[\"particle_position_x\"][25].ndarray_view(), trajs[\"particle_position_z\"][25].ndarray_view(), \n",
-      "        trajs[\"particle_position_z\"][25].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][100].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][8].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][25].ndarray_view())"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "trajs.add_fields([\"density\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][100].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][8].ndarray_view())\n",
-      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][25].ndarray_view())\n",
-      "plt.yscale(\"log\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "trajs.write_out(\"halo_trajectories\") # This will write a separate file for each trajectory\n",
-      "trajs.write_out_h5(\"halo_trajectories.h5\") # This will write all trajectories to a single file"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
+ "nbformat": 4,
+ "nbformat_minor": 0
 }


https://bitbucket.org/yt_analysis/yt/commits/751beb0d4abe/
Changeset:   751beb0d4abe
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 21:28:12+00:00
Summary:     Updating the PPVCube analysis module.

I've also decreased the resolution of the in-memory test dataset somewhat to make
the notebook run substantially faster.
Affected #:  1 file

diff -r 90feb570e03e87bb05ff181a40440580349ad8a2 -r 751beb0d4abe64b3845d50282b6956a2e420d47e doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -1,423 +1,455 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Detailed spectra of astrophysical objects sometimes allow one to determine how much of the gas is moving with a certain velocity along the line of sight, thanks to Doppler shifting of spectral lines. This enables \"data cubes\" to be created in RA, Dec, and line-of-sight velocity space. In yt, we can use the `PPVCube` analysis module to project fields along a given line of sight traveling at different line-of-sight velocities, to \"mock up\" what would be seen in observations."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.config import ytcfg\n",
+    "\n",
+    "import yt\n",
+    "import numpy as np\n",
+    "from yt.analysis_modules.ppv_cube.api import PPVCube\n",
+    "import yt.units as u"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To demonstrate this functionality, we'll create a simple unigrid dataset of a rotating disk from scratch. We create a thin disk in the x-y midplane of the domain, three cells in height on either side of the midplane, with a radius of 10 kpc. The density and azimuthal velocity profiles of the disk as a function of radius will be given by the following functions:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Density: $\\rho(r) \\propto r^{\\alpha}$"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Velocity: $v_{\\theta}(r) \\propto \\frac{r}{1+(r/r_0)^{\\beta}}$"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where for simplicity we won't worry about the normalizations of these profiles. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, we'll set up the grid and the parameters of the profiles:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# increasing the resolution will make the images in this notebook more visually appealing\n",
+    "nx,ny,nz = (64, 64, 64) # domain dimensions\n",
+    "R = 10. # outer radius of disk, kpc\n",
+    "r_0 = 3. # scale radius, kpc\n",
+    "beta = 1.4 # for the tangential velocity profile\n",
+    "alpha = -1. # for the radial density profile\n",
+    "x, y = np.mgrid[-R:R:nx*1j,-R:R:ny*1j] # cartesian coordinates of x-y plane of disk\n",
+    "r = np.sqrt(x*x+y*y) # polar coordinates\n",
+    "theta = np.arctan2(y, x) # polar coordinates"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Second, we'll construct the data arrays for the density, temperature, and velocity of the disk. Since we have the tangential velocity profile, we have to use the polar coordinates we derived earlier to compute `velx` and `vely`. Everywhere outside the disk, all fields are set to zero.  "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dens = np.zeros((nx,ny,nz))\n",
+    "dens[:,:,nz//2-3:nz//2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
+    "temp = np.zeros((nx,ny,nz))\n",
+    "temp[:,:,nz//2-3:nz//2+3] = 1.0e5 # Isothermal\n",
+    "vel_theta = 100.*r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
+    "velx = np.zeros((nx,ny,nz))\n",
+    "vely = np.zeros((nx,ny,nz))\n",
+    "velx[:,:,nz//2-3:nz//2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "vely[:,:,nz//2-3:nz//2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "dens[r > R] = 0.0\n",
+    "temp[r > R] = 0.0\n",
+    "velx[r > R] = 0.0\n",
+    "vely[r > R] = 0.0"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we'll package these data arrays up into a dictionary, which will then be shipped off to `load_uniform_grid`. We'll define the width of the grid to be `2*R` kpc, which will be equal to 1  `code_length`. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = {}\n",
+    "data[\"density\"] = (dens,\"g/cm**3\")\n",
+    "data[\"temperature\"] = (temp, \"K\")\n",
+    "data[\"velocity_x\"] = (velx, \"km/s\")\n",
+    "data[\"velocity_y\"] = (vely, \"km/s\")\n",
+    "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
+    "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
+    "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To get a sense of what the data looks like, we'll take a slice through the middle of the disk:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc.set_log(\"velocity_x\", False)\n",
+    "slc.set_log(\"velocity_y\", False)\n",
+    "slc.set_log(\"velocity_magnitude\", False)\n",
+    "slc.set_unit(\"velocity_magnitude\", \"km/s\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This shows a rotating disk with a specific density and velocity profile. Now, suppose we wanted to look at this disk galaxy from a certain orientation angle, and simulate a 3D FITS data cube where we can see the gas that is emitting at different velocities along the line of sight. We can do this using the `PPVCube` class. First, let's assume we rotate our viewing angle 60 degrees from face-on, from along the z-axis into the x-axis. We'll create a normal vector:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "i = 60.*np.pi/180.\n",
+    "L = [np.sin(i),0.0,np.cos(i)]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we need to specify a field that will serve as the \"intensity\" of the emission that we see. For simplicity, we'll choose the gas density as this field, though it could be any field (including derived fields) in principle. We also need to choose the bounds in line-of-sight velocity that the data will be binned into, given as a 4-tuple of the form `(vmin, vmax, nbins, units)`, which specifies a linear range of `nbins` velocity bins from `vmin` to `vmax` in units of `units`. We may also optionally specify the dimensions of the data cube with the `dims` argument."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false,
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "cube = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, method=\"sum\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Following this, we can now write this cube to a FITS file. The x and y axes of the file can be in length units, which can be optionally specified by `length_unit`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube.write_fits(\"cube.fits\", clobber=True, length_unit=\"kpc\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or one can use the `sky_scale` and `sky_center` keywords to set up the coordinates in RA and Dec:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sky_scale = (1.0, \"arcsec/kpc\")\n",
+    "sky_center = (30., 45.) # RA, Dec in degrees\n",
+    "cube.write_fits(\"cube_sky.fits\", clobber=True, sky_scale=sky_scale, sky_center=sky_center)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we'll load the FITS dataset into yt and look at different slices along the velocity axis, which is the \"z\" axis:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_cube = yt.load(\"cube.fits\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Specifying no center gives us the center slice\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"])\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Picking different velocities for the slices\n",
+    "new_center = ds_cube.domain_center\n",
+    "new_center[2] = ds_cube.spec2pixel(-100.*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube.spec2pixel(70.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube.spec2pixel(-30.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we project all the emission at all the different velocities along the z-axis, we recover the entire disk:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds_cube, \"z\", [\"density\"], method=\"sum\")\n",
+    "prj.set_log(\"density\", True)\n",
+    "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `thermal_broad` keyword allows one to simulate thermal line broadening based on the temperature, and the `atomic_weight` argument is used to specify the atomic weight of the particle that is doing the emitting."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube2 = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, thermal_broad=True, \n",
+    "                atomic_weight=12.0, method=\"sum\")\n",
+    "cube2.write_fits(\"cube2.fits\", clobber=True, length_unit=\"kpc\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Taking a slice of this cube shows:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_cube2 = yt.load(\"cube2.fits\")\n",
+    "new_center = ds_cube2.domain_center\n",
+    "new_center[2] = ds_cube2.spec2pixel(70.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube2.spec2pixel(-100.*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where we can see the emission has been smeared into this velocity slice from neighboring slices due to the thermal broadening. \n",
+    "\n",
+    "Finally, the \"velocity\" or \"spectral\" axis of the cube can be changed to a different unit, such as wavelength, frequency, or energy: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (cube2.vbins[0], cube2.vbins[-1])\n",
+    "cube2.transform_spectral_axis(400.0,\"nm\")\n",
+    "print (cube2.vbins[0], cube2.vbins[-1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If a FITS file is now written from the cube, the spectral axis will be in the new units. To reset the spectral axis back to the original velocity units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube2.reset_spectral_axis()\n",
+    "print (cube2.vbins[0], cube2.vbins[-1])"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:67e4297cbc32716b2481c71659305687cb5bdadad648a0acf6b48960267bb069"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Detailed spectra of astrophysical objects sometimes allow for determinations of how much of the gas is moving with a certain velocity along the line of sight, thanks to Doppler shifting of spectral lines. This enables \"data cubes\" to be created in RA, Dec, and line-of-sight velocity space. In yt, we can use the `PPVCube` analysis module to project fields along a given line of sight traveling at different line-of-sight velocities, to \"mock-up\" what would be seen in observations."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.config import ytcfg\n",
-      "ytcfg[\"yt\",\"loglevel\"] = 30\n",
-      "\n",
-      "import yt\n",
-      "import numpy as np\n",
-      "from yt.analysis_modules.ppv_cube.api import PPVCube\n",
-      "import yt.units as u"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To demonstrate this functionality, we'll create a simple unigrid dataset from scratch of a rotating disk. We create a thin disk in the x-y midplane of the domain of three cells in height in either direction, and a radius of 10 kpc. The density and azimuthal velocity profiles of the disk as a function of radius will be given by the following functions:"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Density: $\\rho(r) \\propto r^{\\alpha}$"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Velocity: $v_{\\theta}(r) \\propto \\frac{r}{1+(r/r_0)^{\\beta}}$"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where for simplicity we won't worry about the normalizations of these profiles. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First, we'll set up the grid and the parameters of the profiles:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "nx,ny,nz = (256,256,256) # domain dimensions\n",
-      "R = 10. # outer radius of disk, kpc\n",
-      "r_0 = 3. # scale radius, kpc\n",
-      "beta = 1.4 # for the tangential velocity profile\n",
-      "alpha = -1. # for the radial density profile\n",
-      "x, y = np.mgrid[-R:R:nx*1j,-R:R:ny*1j] # cartesian coordinates of x-y plane of disk\n",
-      "r = np.sqrt(x*x+y*y) # polar coordinates\n",
-      "theta = np.arctan2(y, x) # polar coordinates"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Second, we'll construct the data arrays for the density, temperature, and velocity of the disk. Since we have the tangential velocity profile, we have to use the polar coordinates we derived earlier to compute `velx` and `vely`. Everywhere outside the disk, all fields are set to zero.  "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dens = np.zeros((nx,ny,nz))\n",
-      "dens[:,:,nz/2-3:nz/2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
-      "temp = np.zeros((nx,ny,nz))\n",
-      "temp[:,:,nz/2-3:nz/2+3] = 1.0e5 # Isothermal\n",
-      "vel_theta = 100.*r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
-      "velx = np.zeros((nx,ny,nz))\n",
-      "vely = np.zeros((nx,ny,nz))\n",
-      "velx[:,:,nz/2-3:nz/2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
-      "vely[:,:,nz/2-3:nz/2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
-      "dens[r > R] = 0.0\n",
-      "temp[r > R] = 0.0\n",
-      "velx[r > R] = 0.0\n",
-      "vely[r > R] = 0.0"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we'll package these data arrays up into a dictionary, which will then be shipped off to `load_uniform_grid`. We'll define the width of the grid to be `2*R` kpc, which will be equal to 1  `code_length`. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = {}\n",
-      "data[\"density\"] = (dens,\"g/cm**3\")\n",
-      "data[\"temperature\"] = (temp, \"K\")\n",
-      "data[\"velocity_x\"] = (velx, \"km/s\")\n",
-      "data[\"velocity_y\"] = (vely, \"km/s\")\n",
-      "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
-      "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
-      "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To get a sense of what the data looks like, we'll take a slice through the middle of the disk:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc.set_log(\"velocity_x\", False)\n",
-      "slc.set_log(\"velocity_y\", False)\n",
-      "slc.set_log(\"velocity_magnitude\", False)\n",
-      "slc.set_unit(\"velocity_magnitude\", \"km/s\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Which shows a rotating disk with a specific density and velocity profile. Now, suppose we wanted to look at this disk galaxy from a certain orientation angle, and simulate a 3D FITS data cube where we can see the gas that is emitting at different velocities along the line of sight. We can do this using the `PPVCube` class. First, let's assume we rotate our viewing angle 60 degrees from face-on, from along the z-axis into the x-axis. We'll create a normal vector:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "i = 60.*np.pi/180.\n",
-      "L = [np.sin(i),0.0,np.cos(i)]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we need to specify a field that will serve as the \"intensity\" of the emission that we see. For simplicity, we'll simply choose the gas density as this field, though it could be any field (including derived fields) in principle. We also need to choose the bounds in line-of-sight velocity that the data will be binned into, which is a 4-tuple in the shape of `(vmin, vmax, nbins, units)`, which specifies a linear range of `nbins` velocity bins from `vmin` to `vmax` in units of `units`. We may also optionally specify the dimensions of the data cube with the `dims` argument."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, method=\"sum\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Following this, we can now write this cube to a FITS file. The x and y axes of the file can be in length units, which can be optionally specified by `length_unit`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube.write_fits(\"cube.fits\", clobber=True, length_unit=\"kpc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Or one can use the `sky_scale` and `sky_center` keywords to set up the coordinates in RA and Dec:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sky_scale = (1.0, \"arcsec/kpc\")\n",
-      "sky_center = (30., 45.) # RA, Dec in degrees\n",
-      "cube.write_fits(\"cube_sky.fits\", clobber=True, sky_scale=sky_scale, sky_center=sky_center)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, we'll look at the FITS dataset in yt and look at different slices along the velocity axis, which is the \"z\" axis:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds_cube = yt.load(\"cube.fits\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Specifying no center gives us the center slice\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"])\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Picking different velocities for the slices\n",
-      "new_center = ds_cube.domain_center\n",
-      "new_center[2] = ds_cube.spec2pixel(-100.*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube.spec2pixel(70.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube.spec2pixel(-30.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If we project all the emission at all the different velocities along the z-axis, we recover the entire disk:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds_cube, \"z\", [\"density\"], method=\"sum\")\n",
-      "prj.set_log(\"density\", True)\n",
-      "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `thermal_broad` keyword allows one to simulate thermal line broadening based on the temperature, and the `atomic_weight` argument is used to specify the atomic weight of the particle that is doing the emitting."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube2 = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, thermal_broad=True, \n",
-      "                atomic_weight=12.0, method=\"sum\")\n",
-      "cube2.write_fits(\"cube2.fits\", clobber=True, length_unit=\"kpc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Taking a slice of this cube shows:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds_cube2 = yt.load(\"cube2.fits\")\n",
-      "new_center = ds_cube2.domain_center\n",
-      "new_center[2] = ds_cube2.spec2pixel(70.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube2.spec2pixel(-100.*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where we can see the emission has been smeared into this velocity slice from neighboring slices due to the thermal broadening. \n",
-      "\n",
-      "Finally, the \"velocity\" or \"spectral\" axis of the cube can be changed to a different unit, such as wavelength, frequency, or energy: "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print cube2.vbins[0], cube2.vbins[-1]\n",
-      "cube2.transform_spectral_axis(400.0,\"nm\")\n",
-      "print cube2.vbins[0], cube2.vbins[-1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If a FITS file is now written from the cube, the spectral axis will be in the new units. To reset the spectral axis back to the original velocity units:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube2.reset_spectral_axis()\n",
-      "print cube2.vbins[0], cube2.vbins[-1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}


https://bitbucket.org/yt_analysis/yt/commits/9a1103248e8b/
Changeset:   9a1103248e8b
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 21:32:28+00:00
Summary:     Updating the SZ example notebook to nbformat4
Affected #:  1 file

diff -r 751beb0d4abe64b3845d50282b6956a2e420d47e -r 9a1103248e8b5be8a0935e43715861bfe561742a doc/source/analyzing/analysis_modules/SZ_projections.ipynb
--- a/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
+++ b/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -1,233 +1,245 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The change in the CMB intensity due to Compton scattering of CMB\n",
+    "photons off of thermal electrons in galaxy clusters, otherwise known as the\n",
+    "Sunyaev-Zeldovich (S-Z) effect, can to a reasonable approximation be represented by a\n",
+    "projection of the pressure field of a cluster. However, the *full* S-Z signal is a combination of thermal and kinetic\n",
+    "contributions, and for large frequencies and high temperatures\n",
+    "relativistic effects are important. For computing the full S-Z signal\n",
+    "incorporating all of these effects, there is a library:\n",
+    "SZpack ([Chluba et al 2012](http://adsabs.harvard.edu/abs/2012MNRAS.426..510C)). \n",
+    "\n",
+    "The `sunyaev_zeldovich` analysis module in yt makes it possible\n",
+    "to project the full S-Z signal given the properties of the\n",
+    "thermal gas in the simulation using SZpack. SZpack has several different options for computing the S-Z signal, from full\n",
+    "integrations to very good approximations.  Since a full or even a\n",
+    "partial integration of the signal for each cell in the projection\n",
+    "would be prohibitively expensive, we use the method outlined in\n",
+    "[Chluba et al 2013](http://adsabs.harvard.edu/abs/2013MNRAS.430.3054C) to expand the\n",
+    "total S-Z signal in terms of moments of the projected optical depth $\\tau$, projected electron temperature $T_e$, and\n",
+    "velocities $\\beta_{c,\\parallel}$ and $\\beta_{c,\\perp}$ (their equation 18):"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "$$S(\\tau, T_{e},\\beta_{c,\\parallel},\\beta_{\\rm c,\\perp}) \\approx S_{\\rm iso}^{(0)} + S_{\\rm iso}^{(2)}\\omega^{(1)} + C_{\\rm iso}^{(1)}\\sigma^{(1)} + D_{\\rm iso}^{(2)}\\kappa^{(1)} + E_{\\rm iso}^{(2)}\\beta_{\\rm c,\\perp,SZ}^2 +~...$$\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt makes projections of the various moments needed for the\n",
+    "calculation, and then the resulting projected fields are used to\n",
+    "compute the S-Z signal. In our implementation, the expansion is carried out to first-order\n",
+    "terms in $T_e$ and zeroth-order terms in $\\beta_{c,\\parallel}$ by default, but terms up to second-order in can be optionally\n",
+    "included. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Installing SZpack"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "SZpack can be downloaded [here](http://www.cita.utoronto.ca/~jchluba/Science_Jens/SZpack/SZpack.html). Make\n",
+    "sure you install a version later than v1.1.1. For computing the S-Z\n",
+    "integrals, SZpack requires the [GNU Scientific Library](http://www.gnu.org/software/gsl/). For compiling\n",
+    "the Python module, you need to have a recent version of [swig](http://www.swig.org>) installed. After running `make` in the top-level SZpack directory, you'll need to run it in the `python` subdirectory, which is the\n",
+    "location of the `SZpack` module. You may have to include this location in the `PYTHONPATH` environment variable.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "<font color='red'>**NOTE**</font>: Currently, use of the SZpack library to create S-Z projections in yt is limited to Python 2.x."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Creating S-Z Projections"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Once you have SZpack installed, making S-Z projections from yt\n",
+    "datasets is fairly straightforward:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import yt\n",
+    "from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection\n",
+    "\n",
+    "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+    "\n",
+    "freqs = [90.,180.,240.]\n",
+    "szprj = SZProjection(ds, freqs)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`freqs` is a list or array of frequencies in GHz at which the signal\n",
+    "is to be computed. The `SZProjection` constructor also accepts the\n",
+    "optional keywords, `mue` (mean molecular weight for computing the\n",
+    "electron number density, 1.143 is the default) and `high_order` (set\n",
+    "to True to compute terms in the S-Z signal expansion up to\n",
+    "second-order in $T_{e,SZ}$ and $\\beta$). "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Once you have created the `SZProjection` object, you can use it to\n",
+    "make on-axis and off-axis projections:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# An on-axis projection along the z-axis with width 10 Mpc, centered on the gas density maximum\n",
+    "szprj.on_axis(\"z\", center=\"max\", width=(10.0, \"Mpc\"), nx=400)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To make an off-axis projection, `szprj.off_axis` is called in the same way, except that the first argument is a three-component normal vector. \n",
+    "\n",
+    "Currently, only one projection can be in memory at once. These methods\n",
+    "create images of the projected S-Z signal at each requested frequency,\n",
+    "which can be accessed dict-like from the projection object (e.g.,\n",
+    "`szprj[\"90_GHz\"]`). Projections of other quantities may also be\n",
+    "accessed; to see what fields are available call `szprj.keys()`. The methods also accept standard yt\n",
+    "keywords for projections such as `center`, `width`, and `source`. The image buffer size can be controlled by setting `nx`.  \n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Writing out the S-Z Projections"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You may want to output the S-Z images to figures suitable for\n",
+    "inclusion in a paper, or save them to disk for later use. There are a\n",
+    "few methods included for this purpose. For PNG figures with a colorbar\n",
+    "and axes, use `write_png`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "szprj.write_png(\"SZ_example\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For simple output of the image data to disk, call `write_hdf5`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "szprj.write_hdf5(\"SZ_example.h5\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, for output to FITS files which can be opened or analyzed\n",
+    "using other programs (such as ds9), call `export_fits`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "szprj.write_fits(\"SZ_example.fits\", clobber=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "which would write all of the projections to a single FITS file,\n",
+    "including coordinate information in kpc. The optional keyword\n",
+    "`clobber` allows a previous file to be overwritten. \n"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:487383ec23a092310522ec25bd02ad2eb16a3402c5ed3d2b103d33fe17697b3c"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The change in the CMB intensity due to Compton scattering of CMB\n",
-      "photons off of thermal electrons in galaxy clusters, otherwise known as the\n",
-      "Sunyaev-Zeldovich (S-Z) effect, can to a reasonable approximation be represented by a\n",
-      "projection of the pressure field of a cluster. However, the *full* S-Z signal is a combination of thermal and kinetic\n",
-      "contributions, and for large frequencies and high temperatures\n",
-      "relativistic effects are important. For computing the full S-Z signal\n",
-      "incorporating all of these effects, there is a library:\n",
-      "SZpack ([Chluba et al 2012](http://adsabs.harvard.edu/abs/2012MNRAS.426..510C)). \n",
-      "\n",
-      "The `sunyaev_zeldovich` analysis module in yt makes it possible\n",
-      "to make projections of the full S-Z signal given the properties of the\n",
-      "thermal gas in the simulation using SZpack. SZpack has several different options for computing the S-Z signal, from full\n",
-      "integrations to very good approximations.  Since a full or even a\n",
-      "partial integration of the signal for each cell in the projection\n",
-      "would be prohibitively expensive, we use the method outlined in\n",
-      "[Chluba et al 2013](http://adsabs.harvard.edu/abs/2013MNRAS.430.3054C) to expand the\n",
-      "total S-Z signal in terms of moments of the projected optical depth $\\tau$, projected electron temperature $T_e$, and\n",
-      "velocities $\\beta_{c,\\parallel}$ and $\\beta_{c,\\perp}$ (their equation 18):"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "$$S(\\tau, T_{e},\\beta_{c,\\parallel},\\beta_{\\rm c,\\perp}) \\approx S_{\\rm iso}^{(0)} + S_{\\rm iso}^{(2)}\\omega^{(1)} + C_{\\rm iso}^{(1)}\\sigma^{(1)} + D_{\\rm iso}^{(2)}\\kappa^{(1)} + E_{\\rm iso}^{(2)}\\beta_{\\rm c,\\perp,SZ}^2 +~...$$\n"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt makes projections of the various moments needed for the\n",
-      "calculation, and then the resulting projected fields are used to\n",
-      "compute the S-Z signal. In our implementation, the expansion is carried out to first-order\n",
-      "terms in $T_e$ and zeroth-order terms in $\\beta_{c,\\parallel}$ by default, but terms up to second-order in can be optionally\n",
-      "included. "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Installing SZpack"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "SZpack can be downloaded [here](http://www.cita.utoronto.ca/~jchluba/Science_Jens/SZpack/SZpack.html). Make\n",
-      "sure you install a version later than v1.1.1. For computing the S-Z\n",
-      "integrals, SZpack requires the [GNU Scientific Library](http://www.gnu.org/software/gsl/). For compiling\n",
-      "the Python module, you need to have a recent version of [swig](http://www.swig.org>) installed. After running `make` in the top-level SZpack directory, you'll need to run it in the `python` subdirectory, which is the\n",
-      "location of the `SZpack` module. You may have to include this location in the `PYTHONPATH` environment variable.\n"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "<font color='red'>**NOTE**</font>: Currently, use of the SZpack library to create S-Z projections in yt is limited to Python 2.x."
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Creating S-Z Projections"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Once you have SZpack installed, making S-Z projections from yt\n",
-      "datasets is fairly straightforward:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import yt\n",
-      "from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection\n",
-      "\n",
-      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-      "\n",
-      "freqs = [90.,180.,240.]\n",
-      "szprj = SZProjection(ds, freqs)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`freqs` is a list or array of frequencies in GHz at which the signal\n",
-      "is to be computed. The `SZProjection` constructor also accepts the\n",
-      "optional keywords, `mue` (mean molecular weight for computing the\n",
-      "electron number density, 1.143 is the default) and `high_order` (set\n",
-      "to True to compute terms in the S-Z signal expansion up to\n",
-      "second-order in $T_{e,SZ}$ and $\\beta$). "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Once you have created the `SZProjection` object, you can use it to\n",
-      "make on-axis and off-axis projections:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# An on-axis projection along the z-axis with width 10 Mpc, centered on the gas density maximum\n",
-      "szprj.on_axis(\"z\", center=\"max\", width=(10.0, \"Mpc\"), nx=400)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To make an off-axis projection, `szprj.off_axis` is called in the same way, except that the first argument is a three-component normal vector. \n",
-      "\n",
-      "Currently, only one projection can be in memory at once. These methods\n",
-      "create images of the projected S-Z signal at each requested frequency,\n",
-      "which can be accessed dict-like from the projection object (e.g.,\n",
-      "`szprj[\"90_GHz\"]`). Projections of other quantities may also be\n",
-      "accessed; to see what fields are available call `szprj.keys()`. The methods also accept standard yt\n",
-      "keywords for projections such as `center`, `width`, and `source`. The image buffer size can be controlled by setting `nx`.  \n"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Writing out the S-Z Projections"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "You may want to output the S-Z images to figures suitable for\n",
-      "inclusion in a paper, or save them to disk for later use. There are a\n",
-      "few methods included for this purpose. For PNG figures with a colorbar\n",
-      "and axes, use `write_png`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "szprj.write_png(\"SZ_example\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "For simple output of the image data to disk, call `write_hdf5`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "szprj.write_hdf5(\"SZ_example.h5\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, for output to FITS files which can be opened or analyzed\n",
-      "using other programs (such as ds9), call `export_fits`."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "szprj.write_fits(\"SZ_example.fits\", clobber=True)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "which would write all of the projections to a single FITS file,\n",
-      "including coordinate information in kpc. The optional keyword\n",
-      "`clobber` allows a previous file to be overwritten. \n"
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
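
Read end to end, the converted notebook amounts to a short script. A condensed sketch using only calls that appear in the cells above (SZpack and the enzo_tiny_cosmology sample dataset are assumed to be available, and per the note above this currently requires Python 2.x):

    import yt
    from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection

    ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
    szprj = SZProjection(ds, [90., 180., 240.])        # frequencies in GHz

    # On-axis projection along z, 10 Mpc wide, centered on the density maximum
    szprj.on_axis("z", center="max", width=(10.0, "Mpc"), nx=400)

    print(szprj.keys())                                # available projected fields
    image_90 = szprj["90_GHz"]                         # dict-like access per frequency

    szprj.write_png("SZ_example")                      # PNG with colorbar and axes
    szprj.write_hdf5("SZ_example.h5")                  # raw image data
    szprj.write_fits("SZ_example.fits", clobber=True)  # FITS with kpc coordinates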


https://bitbucket.org/yt_analysis/yt/commits/1ca19ab5f395/
Changeset:   1ca19ab5f395
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 21:48:53+00:00
Summary:     Updating the example notebooks for the filtering section
Affected #:  2 files

diff -r 9a1103248e8b5be8a0935e43715861bfe561742a -r 1ca19ab5f395f5cc1a6a21d0f218f00d7a38af02 doc/source/analyzing/mesh_filter.ipynb
--- a/doc/source/analyzing/mesh_filter.ipynb
+++ b/doc/source/analyzing/mesh_filter.ipynb
@@ -1,171 +1,179 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let us demonstrate this with an example using the same dataset as we used with the boolean masks."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "ds = yt.load(\"Enzo_64/DD0042/data0042\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The only argument to a cut region is a conditional on field output from a data object.  The only catch is that you *must* denote the data object in the conditional as \"obj\" regardless of the actual object's name.  \n",
+    "\n",
+    "Here we create three new data objects which are copies of the all_data object (a region object covering the entire spatial domain of the simulation), but we've filtered on just \"hot\" material, the \"dense\" material, and the \"overpressure and fast\" material."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad = ds.all_data()\n",
+    "hot_ad = ad.cut_region([\"obj['temperature'] > 1e6\"])\n",
+    "dense_ad = ad.cut_region(['obj[\"density\"] > 5e-30'])\n",
+    "\n",
+    "# you can chain cut regions in two ways:\n",
+    "dense_and_cool_ad = dense_ad.cut_region([\"obj['temperature'] < 1e5\"])\n",
+    "overpressure_and_fast_ad = ad.cut_region(['(obj[\"pressure\"] > 1e-14) & (obj[\"velocity_magnitude\"].in_units(\"km/s\") > 1e2)'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Upon inspection of our \"hot_ad\" object, we can still get the same results as we got with the boolean masks example above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (\"Temperature of all cells:\\n ad['temperature'] = \\n%s\\n\" % ad[\"temperature\"])\n",
+    "print (\"Temperatures of all \\\"hot\\\" cells:\\n hot_ad['temperature'] = \\n%s\" % hot_ad['temperature'])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (\"Density of dense, cool material:\\n dense_and_cool_ad['density'] = \\n%s\\n\" % dense_and_cool_ad['density'])\n",
+    "print (\"Temperature of dense, cool material:\\n dense_and_cool_ad['temperature'] = \\n%s\" % dense_and_cool_ad['temperature'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we've constructed a `cut_region`, we can use it as a data source for further analysis. To create a plot based on a `cut_region`, use the `data_source` keyword argument provided by yt's plotting objects.\n",
+    "\n",
+    "Here's an example using projections:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "proj1 = yt.ProjectionPlot(ds, 'x', \"density\", weight_field=\"density\")\n",
+    "proj1.annotate_title('No Cuts')\n",
+    "proj1.set_figure_size(5)\n",
+    "proj1.show()\n",
+    "\n",
+    "proj2 = yt.ProjectionPlot(ds, 'x', \"density\", weight_field=\"density\", data_source=hot_ad)\n",
+    "proj2.annotate_title('Hot Gas')\n",
+    "proj2.set_zlim(\"density\", 3e-31, 3e-27)\n",
+    "proj2.set_figure_size(5)\n",
+    "proj2.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `data_source` keyword argument is also accepted by `SlicePlot`, `ProfilePlot` and `PhasePlot`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc1 = yt.SlicePlot(ds, 'x', \"density\", center='m')\n",
+    "slc1.set_zlim('density', 3e-31, 3e-27)\n",
+    "slc1.annotate_title('No Cuts')\n",
+    "slc1.set_figure_size(5)\n",
+    "slc1.show()\n",
+    "\n",
+    "slc2 = yt.SlicePlot(ds, 'x', \"density\", center='m', data_source=dense_ad)\n",
+    "slc2.set_zlim('density', 3e-31, 3e-27)\n",
+    "slc2.annotate_title('Dense Gas')\n",
+    "slc2.set_figure_size(5)\n",
+    "slc2.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ph1 = yt.PhasePlot(ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
+    "ph1.set_xlim(3e-31, 3e-27)\n",
+    "ph1.set_title('cell_mass', 'No Cuts')\n",
+    "ph1.set_figure_size(5)\n",
+    "ph1.show()\n",
+    "\n",
+    "ph1 = yt.PhasePlot(dense_ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
+    "ph1.set_xlim(3e-31, 3e-27)\n",
+    "ph1.set_title('cell_mass', 'Dense Gas')\n",
+    "ph1.set_figure_size(5)\n",
+    "ph1.show()"
+   ]
+  }
+ ],
  "metadata": {
   "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
    },
-   "display_name": "IPython (Python 2)",
-   "language": "python",
-   "name": "python2"
-  },
-  "name": "",
-  "signature": "sha256:e7a3de4de9be6a2c653bc45ed2c7ef5fa19da15aafca165d331a3125befe4d6c"
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let us demonstrate this with an example using the same dataset as we used with the boolean masks."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "ds = yt.load(\"Enzo_64/DD0042/data0042\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The only argument to a cut region is a conditional on field output from a data object.  The only catch is that you *must* denote the data object in the conditional as \"obj\" regardless of the actual object's name.  \n",
-      "\n",
-      "Here we create three new data objects which are copies of the all_data object (a region object covering the entire spatial domain of the simulation), but we've filtered on just \"hot\" material, the \"dense\" material, and the \"overpressure and fast\" material."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad = ds.all_data()\n",
-      "hot_ad = ad.cut_region([\"obj['temperature'] > 1e6\"])\n",
-      "dense_ad = ad.cut_region(['obj[\"density\"] > 5e-30'])\n",
-      "\n",
-      "# you can chain cut regions in two ways:\n",
-      "dense_and_cool_ad = dense_ad.cut_region([\"obj['temperature'] < 1e5\"])\n",
-      "overpressure_and_fast_ad = ad.cut_region(['(obj[\"pressure\"] > 1e-14) & (obj[\"velocity_magnitude\"].in_units(\"km/s\") > 1e2)'])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Upon inspection of our \"hot_ad\" object, we can still get the same results as we got with the boolean masks example above:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print \"Temperature of all cells:\\n ad['temperature'] = \\n%s\\n\" % ad[\"temperature\"] \n",
-      "print \"Temperatures of all \\\"hot\\\" cells:\\n hot_ad['temperature'] = \\n%s\" % hot_ad['temperature']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print \"Density of dense, cool material:\\n dense_and_cool_ad['density'] = \\n%s\\n\" % dense_and_cool_ad['density']\n",
-      "print \"Temperature of dense, cool material:\\n dense_and_cool_ad['temperature'] = \\n%s\" % dense_and_cool_ad['temperature']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we've constructed a `cut_region`, we can use it as a data source for further analysis. To create a plot based on a `cut_region`, use the `data_source` keyword argument provided by yt's plotting objects.\n",
-      "\n",
-      "Here's an example using projections:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "proj1 = yt.ProjectionPlot(ds, 'x', \"density\", weight_field=\"density\")\n",
-      "proj1.annotate_title('No Cuts')\n",
-      "proj1.set_figure_size(5)\n",
-      "proj1.show()\n",
-      "\n",
-      "proj2 = yt.ProjectionPlot(ds, 'x', \"density\", weight_field=\"density\", data_source=hot_ad)\n",
-      "proj2.annotate_title('Hot Gas')\n",
-      "proj2.set_zlim(\"density\", 3e-31, 3e-27)\n",
-      "proj2.set_figure_size(5)\n",
-      "proj2.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `data_source` keyword argument is also accepted by `SlicePlot`, `ProfilePlot` and `PhasePlot`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc1 = yt.SlicePlot(ds, 'x', \"density\", center='m')\n",
-      "slc1.set_zlim('density', 3e-31, 3e-27)\n",
-      "slc1.annotate_title('No Cuts')\n",
-      "slc1.set_figure_size(5)\n",
-      "slc1.show()\n",
-      "\n",
-      "slc2 = yt.SlicePlot(ds, 'x', \"density\", center='m', data_source=dense_ad)\n",
-      "slc2.set_zlim('density', 3e-31, 3e-27)\n",
-      "slc2.annotate_title('Dense Gas')\n",
-      "slc2.set_figure_size(5)\n",
-      "slc2.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ph1 = yt.PhasePlot(ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
-      "ph1.set_xlim(3e-31, 3e-27)\n",
-      "ph1.set_title('cell_mass', 'No Cuts')\n",
-      "ph1.set_figure_size(5)\n",
-      "ph1.show()\n",
-      "\n",
-      "ph1 = yt.PhasePlot(dense_ad, 'density', 'temperature', 'cell_mass', weight_field=None)\n",
-      "ph1.set_xlim(3e-31, 3e-27)\n",
-      "ph1.set_title('cell_mass', 'Dense Gas')\n",
-      "ph1.set_figure_size(5)\n",
-      "ph1.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
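
For reference, the cut-region workflow in this notebook condenses to the following sketch; apart from the final save(), every call is taken from the cells above (the Enzo_64 sample dataset is assumed to be present):

    import yt

    ds = yt.load("Enzo_64/DD0042/data0042")
    ad = ds.all_data()

    # cut_region conditionals must refer to the data object as "obj"
    hot_ad = ad.cut_region(["obj['temperature'] > 1e6"])

    # any yt plot can draw from the filtered object via data_source
    proj = yt.ProjectionPlot(ds, 'x', "density", weight_field="density",
                             data_source=hot_ad)
    proj.annotate_title('Hot Gas')
    proj.save()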

diff -r 9a1103248e8b5be8a0935e43715861bfe561742a -r 1ca19ab5f395f5cc1a6a21d0f218f00d7a38af02 doc/source/analyzing/particle_filter.ipynb
--- a/doc/source/analyzing/particle_filter.ipynb
+++ b/doc/source/analyzing/particle_filter.ipynb
@@ -1,144 +1,159 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let us go through a full worked example.  Here we have a Tipsy SPH dataset.  By general\n",
+    "inspection, we see that there are stars present in the dataset, since\n",
+    "there are fields with field type: `Stars` in the `ds.field_list`. Let's look \n",
+    "at the `derived_field_list` for all of the `Stars` fields. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np\n",
+    "\n",
+    "ds = yt.load(\"TipsyGalaxy/galaxy.00300\")\n",
+    "for field in ds.derived_field_list:\n",
+    "    if field[0] == 'Stars':\n",
+    "        print (field)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will filter these into young stars and old stars by masking on the ('Stars', 'creation_time') field. \n",
+    "\n",
+    "In order to do this, we first make a function which applies our desired cut.  This function must accept two arguments: `pfilter` and `data`.  The first argument is a `ParticleFilter` object that contains metadata about the filter its self.  The second argument is a yt data container.\n",
+    "\n",
+    "Let's call \"young\" stars only those stars with ages less 5 million years.  Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages. \n",
+    "\n",
+    "Conversely, let's define \"old\" stars as those stars formed dynamically in the simulation with ages greater than 5 Myr.  We also include stars with negative ages, since these stars were included in the simulation initial conditions.\n",
+    "\n",
+    "We make use of `pfilter.filtered_type` so that the filter definition will use the same particle type as the one specified in the call to `add_particle_filter` below.  This makes the filter definition usable for arbitrary particle types.  Since we're only filtering the `\"Stars\"` particle type in this example, we could have also replaced `pfilter.filtered_type` with `\"Stars\"` and gotten the same result."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "def young_stars(pfilter, data):\n",
+    "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
+    "    filter = np.logical_and(age.in_units('Myr') <= 5, age >= 0)\n",
+    "    return filter\n",
+    "\n",
+    "def old_stars(pfilter, data):\n",
+    "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
+    "    filter = np.logical_or(age.in_units('Myr') >= 5, age < 0)\n",
+    "    return filter"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we define these as particle filters within the yt universe with the\n",
+    "`add_particle_filter()` function."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.add_particle_filter(\"young_stars\", function=young_stars, filtered_type='Stars', requires=[\"creation_time\"])\n",
+    "\n",
+    "yt.add_particle_filter(\"old_stars\", function=old_stars, filtered_type='Stars', requires=[\"creation_time\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let us now apply these filters specifically to our dataset.\n",
+    "\n",
+    "Let's double check that it worked by looking at the derived_field_list for any new fields created by our filter."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.add_particle_filter('young_stars')\n",
+    "ds.add_particle_filter('old_stars')\n",
+    "\n",
+    "for field in ds.derived_field_list:\n",
+    "    if \"young_stars\" in field or \"young_stars\" in field[1]:\n",
+    "        print (field)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We see all of the new `young_stars` fields as well as the 4 deposit fields.  These deposit fields are `mesh` fields generated by depositing particle fields on the grid.  Let's generate a couple of projections of where the young and old stars reside in this simulation by accessing some of these new fields."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "p = yt.ProjectionPlot(ds, 'z', [('deposit', 'young_stars_cic'), ('deposit', 'old_stars_cic')], width=(40, 'kpc'), center='m')\n",
+    "p.set_figure_size(5)\n",
+    "p.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We see that young stars are concentrated in regions of active star formation, while old stars are more spatially extended."
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:427da1e1d02deb543246218dc8cce991268b518b25cfdd5944a4a436695f874b"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let us go through a full worked example.  Here we have a Tipsy SPH dataset.  By general\n",
-      "inspection, we see that there are stars present in the dataset, since\n",
-      "there are fields with field type: `Stars` in the `ds.field_list`. Let's look \n",
-      "at the `derived_field_list` for all of the `Stars` fields. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np\n",
-      "\n",
-      "ds = yt.load(\"TipsyGalaxy/galaxy.00300\")\n",
-      "for field in ds.derived_field_list:\n",
-      "    if field[0] == 'Stars':\n",
-      "        print field"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We will filter these into young stars and old stars by masking on the ('Stars', 'creation_time') field. \n",
-      "\n",
-      "In order to do this, we first make a function which applies our desired cut.  This function must accept two arguments: `pfilter` and `data`.  The first argument is a `ParticleFilter` object that contains metadata about the filter its self.  The second argument is a yt data container.\n",
-      "\n",
-      "Let's call \"young\" stars only those stars with ages less 5 million years.  Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages. \n",
-      "\n",
-      "Conversely, let's define \"old\" stars as those stars formed dynamically in the simulation with ages greater than 5 Myr.  We also include stars with negative ages, since these stars were included in the simulation initial conditions.\n",
-      "\n",
-      "We make use of `pfilter.filtered_type` so that the filter definition will use the same particle type as the one specified in the call to `add_particle_filter` below.  This makes the filter definition usable for arbitrary particle types.  Since we're only filtering the `\"Stars\"` particle type in this example, we could have also replaced `pfilter.filtered_type` with `\"Stars\"` and gotten the same result."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "def young_stars(pfilter, data):\n",
-      "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
-      "    filter = np.logical_and(age.in_units('Myr') <= 5, age >= 0)\n",
-      "    return filter\n",
-      "\n",
-      "def old_stars(pfilter, data):\n",
-      "    age = data.ds.current_time - data[pfilter.filtered_type, \"creation_time\"]\n",
-      "    filter = np.logical_or(age.in_units('Myr') >= 5, age < 0)\n",
-      "    return filter"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we define these as particle filters within the yt universe with the\n",
-      "`add_particle_filter()` function."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.add_particle_filter(\"young_stars\", function=young_stars, filtered_type='Stars', requires=[\"creation_time\"])\n",
-      "\n",
-      "yt.add_particle_filter(\"old_stars\", function=old_stars, filtered_type='Stars', requires=[\"creation_time\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let us now apply these filters specifically to our dataset.\n",
-      "\n",
-      "Let's double check that it worked by looking at the derived_field_list for any new fields created by our filter."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.add_particle_filter('young_stars')\n",
-      "ds.add_particle_filter('old_stars')\n",
-      "\n",
-      "for field in ds.derived_field_list:\n",
-      "    if \"young_stars\" in field or \"young_stars\" in field[1]:\n",
-      "        print field"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We see all of the new `young_stars` fields as well as the 4 deposit fields.  These deposit fields are `mesh` fields generated by depositing particle fields on the grid.  Let's generate a couple of projections of where the young and old stars reside in this simulation by accessing some of these new fields."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "p = yt.ProjectionPlot(ds, 'z', [('deposit', 'young_stars_cic'), ('deposit', 'old_stars_cic')], width=(40, 'kpc'), center='m')\n",
-      "p.set_figure_size(5)\n",
-      "p.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We see that young stars are concentrated in regions of active star formation, while old stars are more spatially extended."
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
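
Condensed, the particle-filter workflow above looks like this; the filter function, the 5 Myr threshold, and the field names all come from the cells in the diff (the TipsyGalaxy sample dataset is assumed to be present, and the final save() is the only call not shown above):

    import yt
    import numpy as np

    def young_stars(pfilter, data):
        # age relative to the current simulation time; negative ages mark
        # stars from the initial conditions, which we exclude here
        age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
        return np.logical_and(age.in_units('Myr') <= 5, age >= 0)

    yt.add_particle_filter("young_stars", function=young_stars,
                           filtered_type='Stars', requires=["creation_time"])

    ds = yt.load("TipsyGalaxy/galaxy.00300")
    ds.add_particle_filter('young_stars')   # activate the filter on this dataset

    p = yt.ProjectionPlot(ds, 'z', ('deposit', 'young_stars_cic'),
                          width=(40, 'kpc'), center='m')
    p.save()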


https://bitbucket.org/yt_analysis/yt/commits/2e32465a8a9b/
Changeset:   2e32465a8a9b
Branch:      yt
User:        ngoldbaum
Date:        2016-01-11 22:34:13+00:00
Summary:     Updating example notebooks for the unit system
Affected #:  6 files
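
The diffs below update the unit-system notebooks. As a compact preview of the API the first one (1)_Symbolic_Units.ipynb) exercises, here is a sketch built only from calls that appear in its updated cells:

    import numpy as np
    from yt.units import kilogram, meter, second, joule
    from yt.units.yt_array import YTArray, YTQuantity

    # dimensional equality: kg*m**2/s**2 is a joule
    print(kilogram * meter**2 / second**2 == joule)

    # mixed units are promoted consistently in arithmetic
    a = YTQuantity(3, 'cm')
    b = YTQuantity(3, 'm')
    print((a + b).in_units('ft'))

    # whole arrays carry units and convert in one call
    dens = YTArray(np.random.random(10), 'Msun/kpc**3')
    print(dens.in_cgs())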

diff -r 1ca19ab5f395f5cc1a6a21d0f218f00d7a38af02 -r 2e32465a8a9be582723a03c6a3d6d696699c2583 doc/source/analyzing/units/1)_Symbolic_Units.ipynb
--- a/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
+++ b/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
@@ -1,707 +1,744 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Dimensional analysis"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The fastest way to get into the unit system is to explore the quantities that live in the `yt.units` namespace:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import meter, gram, kilogram, second, joule\n",
+    "print (kilogram*meter**2/second**2 == joule)\n",
+    "print (kilogram*meter**2/second**2)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import m, kg, s, W\n",
+    "kg*m**2/s**3 == W"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import kilometer\n",
+    "three_kilometers = 3*kilometer\n",
+    "print (three_kilometers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import gram, kilogram\n",
+    "print (gram+kilogram)\n",
+    "\n",
+    "print (kilogram+gram)\n",
+    "\n",
+    "print (kilogram/gram)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "These unit symbols are all instances of a new class we've added to yt 3.0, `YTQuantity`. `YTQuantity` is useful for storing a single data point."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "type(kilogram)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also provide `YTArray`, which can store arrays of quantities:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "arr = [3,4,5]*kilogram\n",
+    "\n",
+    "print (arr)\n",
+    "\n",
+    "print (type(arr))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Creating arrays and quantities"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Most people will interact with the new unit system using `YTArray` and `YTQuantity`.  These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`.  `YTArray` is intended to store array data, while `YTQuantitity` is intended to store scalars in a particular unit system.\n",
+    "\n",
+    "There are two ways to create arrays and quantities. The first is to explicitly create it by calling the class constructor and supplying a unit string:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units.yt_array import YTArray\n",
+    "\n",
+    "sample_array = YTArray([1,2,3], 'g/cm**3')\n",
+    "\n",
+    "print (sample_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The unit string can be an arbitrary combination of metric unit names.  Just a few examples:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units.yt_array import YTQuantity\n",
+    "from yt.utilities.physical_constants import kboltz\n",
+    "from numpy.random import random\n",
+    "import numpy as np\n",
+    "\n",
+    "print (\"Length:\")\n",
+    "print (YTQuantity(random(), 'm'))\n",
+    "print (YTQuantity(random(), 'cm'))\n",
+    "print (YTQuantity(random(), 'Mpc'))\n",
+    "print (YTQuantity(random(), 'AU'))\n",
+    "print ('')\n",
+    "\n",
+    "print (\"Time:\")\n",
+    "print (YTQuantity(random(), 's'))\n",
+    "print (YTQuantity(random(), 'min'))\n",
+    "print (YTQuantity(random(), 'hr'))\n",
+    "print (YTQuantity(random(), 'day'))\n",
+    "print (YTQuantity(random(), 'yr'))\n",
+    "print ('')\n",
+    "\n",
+    "print (\"Mass:\")\n",
+    "print (YTQuantity(random(), 'g'))\n",
+    "print (YTQuantity(random(), 'kg'))\n",
+    "print (YTQuantity(random(), 'Msun'))\n",
+    "print ('')\n",
+    "\n",
+    "print (\"Energy:\")\n",
+    "print (YTQuantity(random(), 'erg'))\n",
+    "print (YTQuantity(random(), 'g*cm**2/s**2'))\n",
+    "print (YTQuantity(random(), 'eV'))\n",
+    "print (YTQuantity(random(), 'J'))\n",
+    "print ('')\n",
+    "\n",
+    "print (\"Temperature:\")\n",
+    "print (YTQuantity(random(), 'K'))\n",
+    "print ((YTQuantity(random(), 'eV')/kboltz).in_cgs())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Dimensional arrays and quantities can also be created by multiplication with another array or quantity:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import kilometer\n",
+    "print (kilometer)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "three_kilometers = 3*kilometer\n",
+    "print (three_kilometers)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "When working with a YTArray with complicated units, you can use `unit_array` and `unit_quantity` to conveniently apply units to data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "test_array = YTArray(np.random.random(20), 'erg/s')\n",
+    "\n",
+    "print (test_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`unit_quantity` returns a `YTQuantity` with a value of 1.0 and the same units as the array it is a attached to."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (test_array.unit_quantity)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`unit_array` returns a `YTArray` with the same units and shape as the array it is a attached to and with all values set to 1.0."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (test_array.unit_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "These are useful when doing arithmetic:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (test_array + 1.0*test_array.unit_quantity)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (test_array + np.arange(20)*test_array.unit_array)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For convenience, `unit_quantity` is also available via `uq` and `unit_array` is available via `ua`.  You can use these arrays to create dummy arrays with the same units as another array - this is sometimes easier than manually creating a new array or quantity."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (test_array.uq)\n",
+    "\n",
+    "print (test_array.unit_quantity == test_array.uq)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from numpy import array_equal\n",
+    "\n",
+    "print (test_array.ua)\n",
+    "\n",
+    "print (array_equal(test_array.ua, test_array.unit_array))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Unit metadata is encoded in the `units` attribute that hangs off of `YTArray` or `YTQuantity` instances:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import kilometer, erg\n",
+    "\n",
+    "print (\"kilometer's units:\", kilometer.units)\n",
+    "print (\"kilometer's dimensions:\", kilometer.units.dimensions)\n",
+    "\n",
+    "print ('')\n",
+    "\n",
+    "print (\"erg's units:\", erg.units)\n",
+    "print (\"erg's dimensions: \", erg.units.dimensions)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Arithmetic with `YTQuantity` and `YTArray`"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Of course it wouldn't be very useful if all we could do is create data with units.  The real power of the new unit system is that we can add, subtract, mutliply, and divide using quantities and dimensional arrays:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "a = YTQuantity(3, 'cm')\n",
+    "b = YTQuantity(3, 'm')\n",
+    "\n",
+    "print (a+b)\n",
+    "print (b+a)\n",
+    "print ('')\n",
+    "\n",
+    "print ((a+b).in_units('ft'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "a = YTQuantity(42, 'mm')\n",
+    "b = YTQuantity(1, 's')\n",
+    "\n",
+    "print (a/b)\n",
+    "print ((a/b).in_cgs())\n",
+    "print ((a/b).in_mks())\n",
+    "print ((a/b).in_units('km/s'))\n",
+    "print ('')\n",
+    "\n",
+    "print (a*b)\n",
+    "print ((a*b).in_cgs())\n",
+    "print ((a*b).in_mks())"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "m = YTQuantity(35, 'g')\n",
+    "a = YTQuantity(9.8, 'm/s**2')\n",
+    "\n",
+    "print (m*a)\n",
+    "print ((m*a).in_units('dyne'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.utilities.physical_constants import G, kboltz\n",
+    "\n",
+    "print (\"Newton's constant: \", G)\n",
+    "print (\"Newton's constant in MKS: \", G.in_mks(), \"\\n\")\n",
+    "\n",
+    "print (\"Boltzmann constant: \", kboltz)\n",
+    "print (\"Boltzmann constant in MKS: \", kboltz.in_mks())"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "rho = YTQuantity(1, 'g/cm**3')\n",
+    "t_ff = (G*rho)**(-0.5)\n",
+    "\n",
+    "print (t_ff)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "An exception is raised if we try to do a unit operation that doesn't make any sense:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.utilities.exceptions import YTUnitOperationError\n",
+    "\n",
+    "a = YTQuantity(3, 'm')\n",
+    "b = YTQuantity(5, 'erg')\n",
+    "\n",
+    "try:\n",
+    "    print (a+b)\n",
+    "except YTUnitOperationError as e:\n",
+    "    print (e)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A plain `ndarray` or a `YTArray` created with empty units is treated as a dimensionless quantity and can be used in situations where unit consistency allows it to be used: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "a = YTArray([1.,2.,3.], 'm')\n",
+    "b = np.array([2.,2.,2.])\n",
+    "\n",
+    "print (\"a:   \", a)\n",
+    "print (\"b:   \", b)\n",
+    "print (\"a*b: \", a*b)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "c = YTArray([2,2,2])\n",
+    "\n",
+    "print (\"c:    \", c)\n",
+    "print (\"a*c:  \", a*c)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Saving and Loading `YTArray`s to/from disk"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`YTArray`s can be written to disk, to be loaded again to be used in yt or in a different context later. There are two formats that can be written to/read from: HDF5 and ASCII.  "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### HDF5"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To write to HDF5, use `write_hdf5`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_dens = YTArray(np.random.random(10), 'Msun/kpc**3')\n",
+    "my_temp = YTArray(np.random.random(10), 'K')\n",
+    "my_dens.write_hdf5(\"my_data.h5\", dataset_name=\"density\")\n",
+    "my_temp.write_hdf5(\"my_data.h5\", dataset_name=\"temperature\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Where we used the `dataset_name` keyword argument to create a separate dataset for each array in the same file.\n",
+    "\n",
+    "We can use the `from_hdf5` classmethod to read the data back in:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "read_dens = YTArray.from_hdf5(\"my_data.h5\", dataset_name=\"density\")\n",
+    "print (read_dens)\n",
+    "print (my_dens)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can use the `info` keyword argument to `write_hdf5` to write some additional data to the file, which will be stored as attributes of the dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_vels = YTArray(np.random.normal(10), 'km/s')\n",
+    "info = {\"source\":\"galaxy cluster\",\"user\":\"jzuhone\"}\n",
+    "my_vels.write_hdf5(\"my_data.h5\", dataset_name=\"velocity\", info=info)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you want to read/write a dataset from/to a specific group within the HDF5 file, use the `group_name` keyword:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_vels.write_hdf5(\"data_in_group.h5\", dataset_name=\"velocity\", info=info, group_name=\"/data/fields\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where we have used the standard HDF5 slash notation for writing a group hierarchy (e.g., group within a group):"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### ASCII"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To write one or more `YTArray`s to an ASCII text file, use `yt.savetxt`, which works a lot like NumPy's `savetxt`, except with units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "a = YTArray(np.random.random(size=10), \"cm\")\n",
+    "b = YTArray(np.random.random(size=10), \"g\")\n",
+    "c = YTArray(np.random.random(size=10), \"s\")\n",
+    "yt.savetxt(\"my_data.dat\", [a,b,c], header='My cool data', footer='Data is over', delimiter=\"\\t\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The file we wrote can then be easily used in other contexts, such as plotting in Gnuplot, or loading into a spreadsheet, or just for causal examination. We can quickly check it here:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%%bash \n",
+    "more my_data.dat"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can see that the header comes first, and then right before the data we have a subheader marking the units of each column. The footer comes after the data. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`yt.loadtxt` can be used to read the same data with units back in, or read data that has been generated from some other source. Just make sure it's in the format above. `loadtxt` can also selectively read from particular columns in the file with the `usecols` keyword argument:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "bb, cc = yt.loadtxt(\"my_data.dat\", usecols=(1,2), delimiter=\"\\t\")\n",
+    "print (bb)\n",
+    "print (b)\n",
+    "print ('')\n",
+    "print (cc)\n",
+    "print (c)"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:6d823c3543f4183db8d28ad5003183515a69ce533fcfff00d92db0372afc3930"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Dimensional analysis"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The fastest way to get into the unit system is to explore the quantities that live in the `yt.units` namespace:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import meter, gram, kilogram, second, joule\n",
-      "print kilogram*meter**2/second**2 == joule\n",
-      "print kilogram*meter**2/second**2"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import m, kg, s, W\n",
-      "kg*m**2/s**3 == W"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import kilometer\n",
-      "three_kilometers = 3*kilometer\n",
-      "print three_kilometers"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import gram, kilogram\n",
-      "print gram+kilogram\n",
-      "\n",
-      "print kilogram+gram\n",
-      "\n",
-      "print kilogram/gram"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "These unit symbols are all instances of a new class we've added to yt 3.0, `YTQuantity`. `YTQuantity` is useful for storing a single data point."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "type(kilogram)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We also provide `YTArray`, which can store arrays of quantities:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "arr = [3,4,5]*kilogram\n",
-      "\n",
-      "print arr\n",
-      "\n",
-      "print type(arr)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Creating arrays and quantities"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Most people will interact with the new unit system using `YTArray` and `YTQuantity`.  These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangeably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`.  `YTArray` is intended to store array data, while `YTQuantity` is intended to store scalars in a particular unit system.\n",
-      "\n",
-      "There are two ways to create arrays and quantities. The first is to explicitly create it by calling the class constructor and supplying a unit string:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units.yt_array import YTArray\n",
-      "\n",
-      "sample_array = YTArray([1,2,3], 'g/cm**3')\n",
-      "\n",
-      "print sample_array"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The unit string can be an arbitrary combination of unit names.  Just a few examples:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units.yt_array import YTQuantity\n",
-      "from yt.utilities.physical_constants import kboltz\n",
-      "from numpy.random import random\n",
-      "import numpy as np\n",
-      "\n",
-      "print \"Length:\"\n",
-      "print YTQuantity(random(), 'm')\n",
-      "print YTQuantity(random(), 'cm')\n",
-      "print YTQuantity(random(), 'Mpc')\n",
-      "print YTQuantity(random(), 'AU')\n",
-      "print ''\n",
-      "\n",
-      "print \"Time:\"\n",
-      "print YTQuantity(random(), 's')\n",
-      "print YTQuantity(random(), 'min')\n",
-      "print YTQuantity(random(), 'hr')\n",
-      "print YTQuantity(random(), 'day')\n",
-      "print YTQuantity(random(), 'yr')\n",
-      "print ''\n",
-      "\n",
-      "print \"Mass:\"\n",
-      "print YTQuantity(random(), 'g')\n",
-      "print YTQuantity(random(), 'kg')\n",
-      "print YTQuantity(random(), 'Msun')\n",
-      "print ''\n",
-      "\n",
-      "print \"Energy:\"\n",
-      "print YTQuantity(random(), 'erg')\n",
-      "print YTQuantity(random(), 'g*cm**2/s**2')\n",
-      "print YTQuantity(random(), 'eV')\n",
-      "print YTQuantity(random(), 'J')\n",
-      "print ''\n",
-      "\n",
-      "print \"Temperature:\"\n",
-      "print YTQuantity(random(), 'K')\n",
-      "print (YTQuantity(random(), 'eV')/kboltz).in_cgs()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Dimensional arrays and quantities can also be created by multiplication with another array or quantity:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import kilometer\n",
-      "print kilometer"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "three_kilometers = 3*kilometer\n",
-      "print three_kilometers"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "When working with a YTArray with complicated units, you can use `unit_array` and `unit_quantity` to conveniently apply units to data:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "test_array = YTArray(np.random.random(20), 'erg/s')\n",
-      "\n",
-      "print test_array"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`unit_quantity` returns a `YTQuantity` with a value of 1.0 and the same units as the array it is attached to."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print test_array.unit_quantity"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`unit_array` returns a `YTArray` with the same units and shape as the array it is attached to, with all values set to 1.0."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print test_array.unit_array"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "These are useful when doing arithmetic:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print test_array + 1.0*test_array.unit_quantity"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print test_array + np.arange(20)*test_array.unit_array"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "For convenience, `unit_quantity` is also available via `uq` and `unit_array` is available via `ua`.  You can use these arrays to create dummy arrays with the same units as another array - this is sometimes easier than manually creating a new array or quantity."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print test_array.uq\n",
-      "\n",
-      "print test_array.unit_quantity == test_array.uq"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from numpy import array_equal\n",
-      "\n",
-      "print test_array.ua\n",
-      "\n",
-      "print array_equal(test_array.ua, test_array.unit_array)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Unit metadata is encoded in the `units` attribute that hangs off of `YTArray` or `YTQuantity` instances:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.units import kilometer, erg\n",
-      "\n",
-      "print \"kilometer's units:\", kilometer.units\n",
-      "print \"kilometer's dimensions:\", kilometer.units.dimensions\n",
-      "\n",
-      "print ''\n",
-      "\n",
-      "print \"erg's units:\", erg.units\n",
-      "print \"erg's dimensions: \", erg.units.dimensions"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Arithmetic with `YTQuantity` and `YTArray`"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Of course it wouldn't be very useful if all we could do is create data with units.  The real power of the new unit system is that we can add, subtract, multiply, and divide using quantities and dimensional arrays:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "a = YTQuantity(3, 'cm')\n",
-      "b = YTQuantity(3, 'm')\n",
-      "\n",
-      "print a+b\n",
-      "print b+a\n",
-      "print ''\n",
-      "\n",
-      "print (a+b).in_units('ft')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "a = YTQuantity(42, 'mm')\n",
-      "b = YTQuantity(1, 's')\n",
-      "\n",
-      "print a/b\n",
-      "print (a/b).in_cgs()\n",
-      "print (a/b).in_mks()\n",
-      "print (a/b).in_units('km/s')\n",
-      "print ''\n",
-      "\n",
-      "print a*b\n",
-      "print (a*b).in_cgs()\n",
-      "print (a*b).in_mks()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "m = YTQuantity(35, 'g')\n",
-      "a = YTQuantity(9.8, 'm/s**2')\n",
-      "\n",
-      "print m*a\n",
-      "print (m*a).in_units('dyne')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.utilities.physical_constants import G, kboltz\n",
-      "\n",
-      "print \"Newton's constant: \", G\n",
-      "print \"Newton's constant in MKS: \", G.in_mks(), \"\\n\"\n",
-      "\n",
-      "print \"Boltzmann constant: \", kboltz\n",
-      "print \"Boltzmann constant in MKS: \", kboltz.in_mks()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "rho = YTQuantity(1, 'g/cm**3')\n",
-      "t_ff = (G*rho)**(-0.5)\n",
-      "\n",
-      "print t_ff"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "An exception is raised if we try to do a unit operation that doesn't make any sense:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.utilities.exceptions import YTUnitOperationError\n",
-      "\n",
-      "a = YTQuantity(3, 'm')\n",
-      "b = YTQuantity(5, 'erg')\n",
-      "\n",
-      "try:\n",
-      "    print a+b\n",
-      "except YTUnitOperationError as e:\n",
-      "    print e"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "A plain `ndarray` or a `YTArray` created with empty units is treated as a dimensionless quantity and can be used wherever unit consistency allows: "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "a = YTArray([1.,2.,3.], 'm')\n",
-      "b = np.array([2.,2.,2.])\n",
-      "\n",
-      "print \"a:   \", a\n",
-      "print \"b:   \", b\n",
-      "print \"a*b: \", a*b"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "c = YTArray([2,2,2])\n",
-      "\n",
-      "print \"c:    \", c\n",
-      "print \"a*c:  \", a*c"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Saving and Loading `YTArray`s to/from disk"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`YTArray`s can be written to disk, to be loaded again later in yt or in a different context. There are two formats that can be written to/read from: HDF5 and ASCII.  "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 4,
-     "metadata": {},
-     "source": [
-      "HDF5"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To write to HDF5, use `write_hdf5`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "my_dens = YTArray(np.random.random(10), 'Msun/kpc**3')\n",
-      "my_temp = YTArray(np.random.random(10), 'K')\n",
-      "my_dens.write_hdf5(\"my_data.h5\", dataset_name=\"density\")\n",
-      "my_temp.write_hdf5(\"my_data.h5\", dataset_name=\"temperature\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here we used the `dataset_name` keyword argument to create a separate dataset for each array in the same file.\n",
-      "\n",
-      "We can use the `from_hdf5` classmethod to read the data back in:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "read_dens = YTArray.from_hdf5(\"my_data.h5\", dataset_name=\"density\")\n",
-      "print read_dens\n",
-      "print my_dens"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can use the `info` keyword argument to `write_hdf5` to write some additional data to the file, which will be stored as attributes of the dataset:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "my_vels = YTArray(np.random.normal(size=10), 'km/s')\n",
-      "info = {\"source\":\"galaxy cluster\",\"user\":\"jzuhone\"}\n",
-      "my_vels.write_hdf5(\"my_data.h5\", dataset_name=\"velocity\", info=info)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If you want to read/write a dataset from/to a specific group within the HDF5 file, use the `group_name` keyword:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "my_vels.write_hdf5(\"data_in_group.h5\", dataset_name=\"velocity\", info=info, group_name=\"/data/fields\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where we have used the standard HDF5 slash notation for writing a group hierarchy (i.e., a group within a group)."
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 4,
-     "metadata": {},
-     "source": [
-      "ASCII"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To write one or more `YTArray`s to an ASCII text file, use `yt.savetxt`, which works a lot like NumPy's `savetxt`, except with units:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "a = YTArray(np.random.random(size=10), \"cm\")\n",
-      "b = YTArray(np.random.random(size=10), \"g\")\n",
-      "c = YTArray(np.random.random(size=10), \"s\")\n",
-      "yt.savetxt(\"my_data.dat\", [a,b,c], header='My cool data', footer='Data is over', delimiter=\"\\t\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The file we wrote can then be easily used in other contexts, such as plotting in Gnuplot, loading into a spreadsheet, or just casual examination. We can quickly check it here:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%%bash \n",
-      "more my_data.dat"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "You can see that the header comes first, and then right before the data we have a subheader marking the units of each column. The footer comes after the data. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`yt.loadtxt` can be used to read the same data with units back in, or read data that has been generated from some other source. Just make sure it's in the format above. `loadtxt` can also selectively read from particular columns in the file with the `usecols` keyword argument:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "bb, cc = yt.loadtxt(\"my_data.dat\", usecols=(1,2), delimiter=\"\\t\")\n",
-      "print bb\n",
-      "print b\n",
-      "print\n",
-      "print cc\n",
-      "print c"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

This diff is so big that we needed to truncate the remainder.
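
For reference, the unit-aware HDF5 round trip exercised by the converted cells above can be run standalone. Below is a minimal sketch, assuming yt is installed and that, as the cell text states, `group_name` is honored by both `write_hdf5` and `from_hdf5`; the file name is arbitrary, and the h5py inspection at the end is an optional aside:

    import numpy as np
    from yt.units.yt_array import YTArray

    my_vels = YTArray(np.random.normal(size=10), 'km/s')
    info = {"source": "galaxy cluster", "user": "jzuhone"}

    # Write into a group hierarchy; the `info` dict becomes attributes
    # of the "velocity" dataset.
    my_vels.write_hdf5("sketch.h5", dataset_name="velocity", info=info,
                       group_name="/data/fields")

    # Read it back from the same group; the units come back with the values.
    read_vels = YTArray.from_hdf5("sketch.h5", dataset_name="velocity",
                                  group_name="/data/fields")
    print(read_vels.units)

    # Optionally inspect the stored attributes directly with h5py.
    import h5py
    with h5py.File("sketch.h5", "r") as f:
        print(dict(f["/data/fields/velocity"].attrs))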

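The ASCII path from the same notebook is just as short. Here is a sketch mirroring the `yt.savetxt`/`yt.loadtxt` cells (file name arbitrary; only yt and NumPy required):

    import numpy as np
    import yt
    from yt.units.yt_array import YTArray

    a = YTArray(np.random.random(size=10), "cm")
    b = YTArray(np.random.random(size=10), "g")

    # The header and footer become comment lines; a units subheader is
    # written just above the data.
    yt.savetxt("sketch.dat", [a, b], header="My cool data",
               footer="Data is over", delimiter="\t")

    # Read both columns back; the units are recovered from the subheader.
    aa, bb = yt.loadtxt("sketch.dat", usecols=(0, 1), delimiter="\t")
    print(aa.units, bb.units)
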
https://bitbucket.org/yt_analysis/yt/commits/09adb299b57f/
Changeset:   09adb299b57f
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 20:12:14+00:00
Summary:     Updating embedded_webm_animation notebook.

Fixing a py2/py3 incompatibility issue as well.
Affected #:  1 file

diff -r 2e32465a8a9be582723a03c6a3d6d696699c2583 -r 09adb299b57f3a182f5aa6f2b2ef9e5a43ef7291 doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -1,122 +1,137 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This example shows how to embed an animation produced by `matplotlib` into an IPython notebook.  It makes use of `matplotlib`'s [animation toolkit](http://matplotlib.org/api/animation_api.html) to transform individual frames into a final rendered movie.  \n",
+    "\n",
+    "Matplotlib uses [`ffmpeg`](http://www.ffmpeg.org/) to generate the movie, so you must install `ffmpeg` for this example to work correctly.  Usually the best way to install `ffmpeg` is using your system's package manager."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "from matplotlib import animation"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, we need to construct a function that will embed the video produced by ffmpeg directly into the notebook document. This makes use of the [HTML5 video tag](http://www.w3schools.com/html/html5_video.asp) and the WebM video format.  WebM is supported by Chrome, Firefox, and Opera, but not by Safari or Internet Explorer.  If you have trouble viewing the video you may need to use a different video format.  Since this uses `libvpx` to construct the frames, you will need to ensure that ffmpeg has been compiled with `libvpx` support."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from tempfile import NamedTemporaryFile\n",
+    "import base64\n",
+    "\n",
+    "VIDEO_TAG = \"\"\"<video controls>\n",
+    " <source src=\"data:video/x-webm;base64,{0}\" type=\"video/webm\">\n",
+    " Your browser does not support the video tag.\n",
+    "</video>\"\"\"\n",
+    "\n",
+    "def anim_to_html(anim):\n",
+    "    if not hasattr(anim, '_encoded_video'):\n",
+    "        with NamedTemporaryFile(suffix='.webm') as f:\n",
+    "            anim.save(f.name, fps=6, extra_args=['-vcodec', 'libvpx'])\n",
+    "            video = open(f.name, \"rb\").read()\n",
+    "        anim._encoded_video = base64.b64encode(video)\n",
+    "    \n",
+    "    return VIDEO_TAG.format(anim._encoded_video.decode('ascii'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we define a function to actually display the video inline in the notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from IPython.display import HTML\n",
+    "\n",
+    "def display_animation(anim):\n",
+    "    plt.close(anim._fig)\n",
+    "    return HTML(anim_to_html(anim))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we set up the animation itself.  We use yt to load the data and create each frame, and matplotlib to stitch the frames together.  Note that we customize the plot a bit by calling the `set_zlim` function.  Customizations only need to be applied to the first frame - they will carry through to the rest.\n",
+    "\n",
+    "This may take a while to run, so be patient."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import matplotlib.pyplot as plt\n",
+    "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
+    "\n",
+    "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+    "prj.set_zlim('density',1e-32,1e-26)\n",
+    "fig = prj.plots['density'].figure\n",
+    "\n",
+    "# animation function.  This is called sequentially\n",
+    "def animate(i):\n",
+    "    ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+    "    prj._switch_ds(ds)\n",
+    "\n",
+    "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
+    "anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
+    "\n",
+    "# call our new function to display the animation\n",
+    "display_animation(anim)"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:b400f12ff9e27ff6a3ddd13f2f8fc3f88bd857fa6083fad6808f00d771312db7"
+  "kernelspec": {
+   "display_name": "Python 2",
+   "language": "python",
+   "name": "python2"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 2
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython2",
+   "version": "2.7.10"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This example shows how to embed an animation produced by `matplotlib` into an IPython notebook.  It makes use of `matplotlib`'s [animation toolkit](http://matplotlib.org/api/animation_api.html) to transform individual frames into a final rendered movie.  \n",
-      "\n",
-      "Matplotlib uses [`ffmpeg`](http://www.ffmpeg.org/) to generate the movie, so you must install `ffmpeg` for this example to work correctly.  Usually the best way to install `ffmpeg` is using your system's package manager."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "from matplotlib import animation"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First, we need to construct a function that will embed the video produced by ffmpeg directly into the notebook document. This makes use of the [HTML5 video tag](http://www.w3schools.com/html/html5_video.asp) and the WebM video format.  WebM is supported by Chrome, Firefox, and Opera, but not by Safari or Internet Explorer.  If you have trouble viewing the video you may need to use a different video format.  Since this uses `libvpx` to construct the frames, you will need to ensure that ffmpeg has been compiled with `libvpx` support."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from tempfile import NamedTemporaryFile\n",
-      "\n",
-      "VIDEO_TAG = \"\"\"<video controls>\n",
-      " <source src=\"data:video/x-webm;base64,{0}\" type=\"video/webm\">\n",
-      " Your browser does not support the video tag.\n",
-      "</video>\"\"\"\n",
-      "\n",
-      "def anim_to_html(anim):\n",
-      "    if not hasattr(anim, '_encoded_video'):\n",
-      "        with NamedTemporaryFile(suffix='.webm') as f:\n",
-      "            anim.save(f.name, fps=6, extra_args=['-vcodec', 'libvpx'])\n",
-      "            video = open(f.name, \"rb\").read()\n",
-      "        anim._encoded_video = video.encode(\"base64\")\n",
-      "    \n",
-      "    return VIDEO_TAG.format(anim._encoded_video)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we define a function to actually display the video inline in the notebook."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from IPython.display import HTML\n",
-      "\n",
-      "def display_animation(anim):\n",
-      "    plt.close(anim._fig)\n",
-      "    return HTML(anim_to_html(anim))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we set up the animation itself.  We use yt to load the data and create each frame, and matplotlib to stitch the frames together.  Note that we customize the plot a bit by calling the `set_zlim` function.  Customizations only need to be applied to the first frame - they will carry through to the rest.\n",
-      "\n",
-      "This may take a while to run, so be patient."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import matplotlib.pyplot as plt\n",
-      "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
-      "\n",
-      "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
-      "prj.set_zlim('density',1e-32,1e-26)\n",
-      "fig = prj.plots['density'].figure\n",
-      "\n",
-      "# animation function.  This is called sequentially\n",
-      "def animate(i):\n",
-      "    ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
-      "    prj._switch_ds(ds)\n",
-      "\n",
-      "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
-      "anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
-      "\n",
-      "# call our new function to display the animation\n",
-      "display_animation(anim)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
+ "nbformat": 4,
+ "nbformat_minor": 0
 }
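
The py2/py3 fix in this changeset is the base64 step: `video.encode("base64")` only works on Python 2 strings, while `base64.b64encode` followed by an ASCII decode works on both. The embedding trick itself can be sketched without yt at all, assuming some existing WebM file (the file name here is hypothetical):

    import base64

    VIDEO_TAG = """<video controls>
     <source src="data:video/x-webm;base64,{0}" type="video/webm">
     Your browser does not support the video tag.
    </video>"""

    # Base64-encode the raw bytes and inline them as a data URI.
    with open("movie.webm", "rb") as f:
        encoded = base64.b64encode(f.read()).decode('ascii')

    # In a notebook, hand this string to IPython.display.HTML.
    html = VIDEO_TAG.format(encoded)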

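The frame-stitching half can likewise be exercised on synthetic data. A sketch assuming ffmpeg with libvpx support is installed, using the same `FuncAnimation` and `save` parameters as the notebook:

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render off-screen
    import matplotlib.pyplot as plt
    from matplotlib import animation

    fig, ax = plt.subplots()
    ax.set_xlim(0, 2 * np.pi)
    ax.set_ylim(-1, 1)
    line, = ax.plot([], [])
    x = np.linspace(0, 2 * np.pi, 200)

    # Called once per frame, playing the role of the notebook's animate(),
    # which swaps datasets into the ProjectionPlot via prj._switch_ds(ds).
    def animate(i):
        line.set_data(x, np.sin(x + 0.2 * i))

    anim = animation.FuncAnimation(fig, animate, frames=44, interval=200,
                                   blit=False)
    anim.save("sketch.webm", fps=6, extra_args=['-vcodec', 'libvpx'])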

https://bitbucket.org/yt_analysis/yt/commits/627ad1222266/
Changeset:   627ad1222266
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 20:14:12+00:00
Summary:     Updating custom_colorbar_tickmarks notebook
Affected #:  1 file

diff -r 09adb299b57f3a182f5aa6f2b2ef9e5a43ef7291 -r 627ad12222668b240092edd19641942a58d962c5 doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -1,90 +1,105 @@
 {
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+    "slc = yt.SlicePlot(ds, 'x', 'density')\n",
+    "slc"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`PlotWindow` plots are containers for plots, keyed to field names.  Below, we get a copy of the plot for the `density` field."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plot = slc.plots['density']"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The plot has a few attributes that point to underlying `matplotlib` plot primitives.  For example, the `colorbar` object corresponds to the `cb` attribute of the plot."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "colorbar = plot.cb"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "colorbar.set_ticks([1e-28])\n",
+    "colorbar.set_ticklabels(['$10^{-28}$'])\n",
+    "slc"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:e8fd07931e339dc67b9d84b0fbc6abc84d3957d885544c24da7aa550f9427a1f"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
-      "slc = yt.SlicePlot(ds, 'x', 'density')\n",
-      "slc"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`PlotWindow` plots are containers for plots, keyed to field names.  Below, we get a copy of the plot for the `density` field."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "plot = slc.plots['density']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The plot has a few attributes that point to underlying `matplotlib` plot primitives.  For example, the `colorbar` object corresponds to the `cb` attribute of the plot."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "colorbar = plot.cb"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "colorbar.set_ticks([1e-28])\n",
-      "colorbar.set_ticklabels(['$10^{-28}$'])\n",
-      "slc"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
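
Collected into a single script, the tick-customization recipe above reads as the sketch below. The `_setup_plots` call is an assumption for non-notebook use: displaying `slc` in a notebook builds the matplotlib figure implicitly, whereas a script has to force that before `plots['density'].cb` exists.

    import yt

    ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
    slc = yt.SlicePlot(ds, 'x', 'density')

    # Force the underlying matplotlib figures to be built (a notebook
    # does this when `slc` is displayed).
    slc._setup_plots()

    colorbar = slc.plots['density'].cb
    colorbar.set_ticks([1e-28])
    colorbar.set_ticklabels([r'$10^{-28}$'])

    slc.save('custom_colorbar_tickmarks.png')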


https://bitbucket.org/yt_analysis/yt/commits/a17a3b37e36e/
Changeset:   a17a3b37e36e
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:52:30+00:00
Summary:     Updating the Halo_analysis notebook
Affected #:  1 file
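
The notebook updated in the diff below queues analysis steps on a HaloCatalog and runs them with a single `create()` call. Condensed to its skeleton (same datasets, filter, and callbacks as the notebook cells; nothing here is new API), the pipeline is:

    import os
    import tempfile

    import yt
    from yt.analysis_modules.halo_analysis.api import HaloCatalog

    tmpdir = tempfile.mkdtemp()
    data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')
    halos_ds = yt.load('rockstar_halos/halos_0.0.bin')

    hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds,
                     output_dir=os.path.join(tmpdir, 'halo_catalog'))

    # The filter is added first, so every later step skips the small halos.
    hc.add_filter("quantity_value", "particle_mass", ">", 1e14, "Msun")

    # Sphere -> overdensity profile -> virial radius, then clean up.
    hc.add_callback("sphere", factor=2.0)
    hc.add_callback("profile", ["radius"], [("gas", "overdensity")],
                    weight_field="cell_volume", accumulation=True,
                    storage="virial_quantities_profiles")
    hc.add_callback("virial_quantities", ["radius"],
                    profile_storage="virial_quantities_profiles")
    hc.add_callback("delete_attribute", "virial_quantities_profiles")

    # Nothing has executed yet; create() runs the queued steps in order.
    hc.create()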

diff -r 627ad12222668b240092edd19641942a58d962c5 -r a17a3b37e36e58dd4b8decd3f1876a98df8e1671 doc/source/cookbook/Halo_Analysis.ipynb
--- a/doc/source/cookbook/Halo_Analysis.ipynb
+++ b/doc/source/cookbook/Halo_Analysis.ipynb
@@ -1,412 +1,434 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Full Halo Analysis"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Creating a Catalog"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here we put everything together to perform some realistic analysis. First we load a full simulation dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "from yt.analysis_modules.halo_analysis.api import *\n",
+    "import tempfile\n",
+    "import shutil\n",
+    "import os\n",
+    "\n",
+    "# Create temporary directory for storing files\n",
+    "tmpdir = tempfile.mkdtemp()\n",
+    "\n",
+    "# Load the data set with the full simulation information\n",
+    "data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we load a Rockstar halo binary file. This is the output from running the Rockstar halo finder on the dataset loaded above. It is also possible to require the HaloCatalog to find the halos in the full simulation dataset at runtime by specifying a `finder_method` keyword."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Load the rockstar data files\n",
+    "halos_ds = yt.load('rockstar_halos/halos_0.0.bin')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "From these two loaded datasets we create a halo catalog object. No analysis is done at this point; we are simply defining an object to which we can add analysis tasks. These analysis tasks will be run in the order they are added to the halo catalog object."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Instantiate a catalog using those two parameter files\n",
+    "hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds, \n",
+    "                 output_dir=os.path.join(tmpdir, 'halo_catalog'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first analysis task we add is a filter for the most massive halos; those with masses greater than $10^{14}~M_\odot$. Note that all following analysis will only be performed on these massive halos and we will not waste computational time calculating quantities for halos we are not interested in. This is a result of adding this filter first. If we had called `add_filter` after some other `add_quantity` or `add_callback` to the halo catalog, the quantity and callback calculations would have been performed for all halos, not just those which pass the filter."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": true
+   },
+   "outputs": [],
+   "source": [
+    "# Filter out less massive halos\n",
+    "hc.add_filter(\"quantity_value\", \"particle_mass\", \">\", 1e14, \"Msun\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Finding Radial Profiles"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Our first analysis goal is going to be constructing radial profiles for our halos. We would like these profiles to be in terms of the virial radius. Unfortunately, we have no guarantee that the center and virial radius values recorded by the halo finder are actually physical. Therefore we should recalculate these quantities ourselves using the values recorded by the halo finder as a starting point."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first step is to create a sphere object along which we will compute radial profiles. This attaches a sphere data object to every halo left in the catalog."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# attach a sphere object to each halo whose radius extends to twice the radius of the halo\n",
+    "hc.add_callback(\"sphere\", factor=2.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next we find the radial profile of the gas overdensity along the sphere object in order to find the virial radius. `radius` is the axis along which we make bins for the radial profiles. `[(\"gas\",\"overdensity\")]` is the quantity that we are profiling. This is a list so we can profile as many quantities as we want. The `weight_field` indicates how the cells should be weighted, but note that this is not a list, so all quantities will be weighted in the same way. The `accumulation` keyword indicates whether the profile should be cumulative; this is useful for calculating profiles such as enclosed mass. The `storage` keyword indicates the name of the attribute of a halo where these profiles will be stored. Setting the storage keyword to \"virial_quantities_profiles\" means that the profiles will be stored in a dictionary that can be accessed by `halo.virial_quantities_profiles`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# use the sphere to calculate radial profiles of gas density weighted by cell volume in terms of the virial radius\n",
+    "hc.add_callback(\"profile\", [\"radius\"],\n",
+    "                [(\"gas\", \"overdensity\")],\n",
+    "                weight_field=\"cell_volume\", \n",
+    "                accumulation=True,\n",
+    "                storage=\"virial_quantities_profiles\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we calculate the virial radius of the halo using the sphere object. As this is a callback, not a quantity, the virial radius will not be written out with the rest of the halo properties in the final halo catalog. This also has a `profile_storage` keyword to specify where the radial profiles are stored, allowing the callback to calculate the relevant virial quantities. We supply this keyword with the same string we gave to `storage` in the last `profile` callback."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Define a virial radius for the halo.\n",
+    "hc.add_callback(\"virial_quantities\", [\"radius\"], \n",
+    "                profile_storage = \"virial_quantities_profiles\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have calculated the virial radius, we delete the profiles we used to find it."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc.add_callback('delete_attribute','virial_quantities_profiles')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have calculated virial quantities we can add a new sphere that is aware of the virial radius we calculated above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc.add_callback('sphere', radius_field='radius_200', factor=5,\n",
+    "                field_parameters=dict(virial_radius=('quantity', 'radius_200')))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Using this new sphere, we calculate a gas temperature profile along the virial radius, weighted by the cell mass."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc.add_callback('profile', 'virial_radius_fraction', [('gas','temperature')],\n",
+    "                storage='virial_profiles',\n",
+    "                weight_field='cell_mass', \n",
+    "                accumulation=False, output_dir='profiles')\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Because profiles are not quantities, they will not automatically be written out in the halo catalog; to be reloadable, they must be written out explicitly with the `save_profiles` callback. This makes sense because they have an extra dimension for each halo along the profile axis. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Save the profiles\n",
+    "hc.add_callback(\"save_profiles\", storage=\"virial_profiles\", output_dir=\"profiles\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We then create the halo catalog. Remember, no analysis is done before this call to `create`. By adding callbacks and filters we have simply been queuing up actions; they all run now."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc.create()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Reloading HaloCatalogs"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we load these profiles back in and make a pretty plot. It is not strictly necessary to reload the profiles in this notebook, but we show this process here to illustrate that this step may be performed completely separately from the rest of the script. This workflow allows you to create a single script that performs all of the analysis requiring the full dataset. The output can then be saved in a compact form where only the necessary halo quantities are stored. You can then download this smaller dataset to a local computer and run any further, computationally inexpensive analysis and design the appropriate plots."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can load a previously saved halo catalog by using the `load` command. We then create a `HaloCatalog` object from just this dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "halos_ds =  yt.load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
+    "\n",
+    "hc_reloaded = HaloCatalog(halos_ds=halos_ds,\n",
+    "                          output_dir=os.path.join(tmpdir, 'halo_catalog'))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Just as profiles are saved separately through the `save_profiles` callback, they must also be loaded separately using the `load_profiles` callback."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc_reloaded.add_callback('load_profiles', storage='virial_profiles',\n",
+    "                         output_dir='profiles')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Calling `load` is the equivalent of calling `create` earlier, but defaults to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "hc_reloaded.load()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Plotting Radial Profiles"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In the future ProfilePlot will be able to properly interpret the loaded profiles of `Halo` and `HaloCatalog` objects, but this functionality is not yet implemented. In the meantime, we show a quick method of viewing a profile for a single halo."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The individual `Halo` objects contained in the `HaloCatalog` can be accessed through the `halo_list` attribute. This gives us access to the dictionary attached to each halo where we stored the radial profiles."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "halo = hc_reloaded.halo_list[0]\n",
+    "\n",
+    "radius = halo.virial_profiles[u\"('index', 'virial_radius_fraction')\"]\n",
+    "temperature = halo.virial_profiles[u\"('gas', 'temperature')\"]\n",
+    "\n",
+    "# Remove output files, that are no longer needed\n",
+    "shutil.rmtree(tmpdir)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here we quickly use matplotlib to create a basic plot of the radial profile of this halo. When `ProfilePlot` is properly configured to accept Halos and HaloCatalogs, the full range of yt plotting tools will be accessible."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np\n",
+    "\n",
+    "plt.plot(np.array(radius), np.array(temperature))\n",
+    "\n",
+    "plt.semilogy()\n",
+    "plt.xlabel(r'$\\rm{R/R_{vir}}$')\n",
+    "plt.ylabel(r'$\\rm{Temperature\\/\\/(K)}$')\n",
+    "\n",
+    "plt.show()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:cb4a8114d92def67d5f948ec8326e8af0b20a5340dc8fd54d8d583e9c646886e"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 1,
-     "metadata": {},
-     "source": [
-      "Full Halo Analysis"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Creating a Catalog"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here we put everything together to perform some realistic analysis. First we load a full simulation dataset."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "from yt.analysis_modules.halo_analysis.api import *\n",
-      "import tempfile\n",
-      "import shutil\n",
-      "import os\n",
-      "\n",
-      "# Create temporary directory for storing files\n",
-      "tmpdir = tempfile.mkdtemp()\n",
-      "\n",
-      "# Load the data set with the full simulation information\n",
-      "data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we load a Rockstar halo binary file. This is the output from running the Rockstar halo finder on the dataset loaded above. It is also possible to require the HaloCatalog to find the halos in the full simulation dataset at runtime by specifying a `finder_method` keyword."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Load the rockstar data files\n",
-      "halos_ds = yt.load('rockstar_halos/halos_0.0.bin')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "From these two loaded datasets we create a halo catalog object. No analysis is done at this point; we are simply defining an object to which we can add analysis tasks. These analysis tasks will be run in the order they are added to the halo catalog object."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Instantiate a catalog using those two parameter files\n",
-      "hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds, \n",
-      "                 output_dir=os.path.join(tmpdir, 'halo_catalog'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The first analysis task we add is a filter for the most massive halos; those with masses greater than $10^{14}~M_\odot$. Note that all following analysis will only be performed on these massive halos and we will not waste computational time calculating quantities for halos we are not interested in. This is a result of adding this filter first. If we had called `add_filter` after some other `add_quantity` or `add_callback` to the halo catalog, the quantity and callback calculations would have been performed for all halos, not just those which pass the filter."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": true,
-     "input": [
-      "# Filter out less massive halos\n",
-      "hc.add_filter(\"quantity_value\", \"particle_mass\", \">\", 1e14, \"Msun\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Finding Radial Profiles"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Our first analysis goal is going to be constructing radial profiles for our halos. We would like these profiles to be in terms of the virial radius. Unfortunately, we have no guarantee that the center and virial radius values recorded by the halo finder are actually physical. Therefore we should recalculate these quantities ourselves using the values recorded by the halo finder as a starting point."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The first step is to create a sphere object along which we will compute radial profiles. This attaches a sphere data object to every halo left in the catalog."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# attach a sphere object to each halo whose radius extends to twice the radius of the halo\n",
-      "hc.add_callback(\"sphere\", factor=2.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next we find the radial profile of the gas overdensity along the sphere object in order to find the virial radius. `radius` is the axis along which we make bins for the radial profiles. `[(\"gas\",\"overdensity\")]` is the quantity that we are profiling. This is a list so we can profile as many quantities as we want. The `weight_field` indicates how the cells should be weighted, but note that this is not a list, so all quantities will be weighted in the same way. The `accumulation` keyword indicates whether the profile should be cumulative; this is useful for calculating profiles such as enclosed mass. The `storage` keyword indicates the name of the attribute of a halo where these profiles will be stored. Setting the storage keyword to \"virial_quantities_profiles\" means that the profiles will be stored in a dictionary that can be accessed by `halo.virial_quantities_profiles`."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# use the sphere to calculate radial profiles of gas density weighted by cell volume in terms of the virial radius\n",
-      "hc.add_callback(\"profile\", [\"radius\"],\n",
-      "                [(\"gas\", \"overdensity\")],\n",
-      "                weight_field=\"cell_volume\", \n",
-      "                accumulation=True,\n",
-      "                storage=\"virial_quantities_profiles\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we calculate the virial radius of the halo using the sphere object. As this is a callback, not a quantity, the virial radius will not be written out with the rest of the halo properties in the final halo catalog. This also has a `profile_storage` keyword to specify where the radial profiles are stored, allowing the callback to calculate the relevant virial quantities. We supply this keyword with the same string we gave to `storage` in the last `profile` callback."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Define a virial radius for the halo.\n",
-      "hc.add_callback(\"virial_quantities\", [\"radius\"], \n",
-      "                profile_storage = \"virial_quantities_profiles\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we have calculated the virial radius, we delete the profiles we used to find it."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "hc.add_callback('delete_attribute','virial_quantities_profiles')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we have calculated virial quantities we can add a new sphere that is aware of the virial radius we calculated above."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "hc.add_callback('sphere', radius_field='radius_200', factor=5,\n",
-      "                field_parameters=dict(virial_radius=('quantity', 'radius_200')))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Using this new sphere, we calculate a gas temperature profile along the virial radius, weighted by the cell mass."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "hc.add_callback('profile', 'virial_radius_fraction', [('gas','temperature')],\n",
-      "                storage='virial_profiles',\n",
-      "                weight_field='cell_mass', \n",
-      "                accumulation=False, output_dir='profiles')\n"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "As profiles are not quantities they will not automatically be written out in the halo catalog; thus in order to be reloadable we must write them out explicitly through a callback of `save_profiles`. This makes sense because they have an extra dimension for each halo along the profile axis. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Save the profiles\n",
-      "hc.add_callback(\"save_profiles\", storage=\"virial_profiles\", output_dir=\"profiles\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We then create the halo catalog. Remember, no analysis is done before this call to create. By adding callbacks and filters we are simply queuing up the actions we want to take that will all run now."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": true,
-     "input": [
-      "hc.create()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Reloading HaloCatalogs"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally we load these profiles back in and make a pretty plot. It is not strictly necessary to reload the profiles in this notebook, but we show this process here to illustrate that this step may be performed completely separately from the rest of the script. This workflow allows you to create a single script that will allow you to perform all of the analysis that requires the full dataset. The output can then be saved in a compact form where only the necessarily halo quantities are stored. You can then download this smaller dataset to a local computer and run any further non-computationally intense analysis and design the appropriate plots."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can load a previously saved halo catalog by using the `load` command. We then create a `HaloCatalog` object from just this dataset."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "halos_ds =  yt.load(os.path.join(tmpdir, 'halo_catalog/halo_catalog.0.h5'))\n",
-      "\n",
-      "hc_reloaded = HaloCatalog(halos_ds=halos_ds,\n",
-      "                          output_dir=os.path.join(tmpdir, 'halo_catalog'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      " Just as profiles are saved seperately throught the `save_profiles` callback they also must be loaded separately using the `load_profiles` callback."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "hc_reloaded.add_callback('load_profiles', storage='virial_profiles',\n",
-      "                         output_dir='profiles')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Calling `load` is the equivalent of calling `create` earlier, but defaults to to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": true,
-     "input": [
-      "hc_reloaded.load()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Plotting Radial Profiles"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In the future ProfilePlot will be able to properly interpret the loaded profiles of `Halo` and `HaloCatalog` objects, but this functionality is not yet implemented. In the meantime, we show a quick method of viewing a profile for a single halo."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The individual `Halo` objects contained in the `HaloCatalog` can be accessed through the `halo_list` attribute. This gives us access to the dictionary attached to each halo where we stored the radial profiles."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "halo = hc_reloaded.halo_list[0]\n",
-      "\n",
-      "radius = halo.virial_profiles[u\"('index', 'virial_radius_fraction')\"]\n",
-      "temperature = halo.virial_profiles[u\"('gas', 'temperature')\"]\n",
-      "\n",
-      "# Remove output files, that are no longer needed\n",
-      "shutil.rmtree(tmpdir)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here we quickly use matplotlib to create a basic plot of the radial profile of this halo. When `ProfilePlot` is properly configured to accept Halos and HaloCatalogs the full range of yt plotting tools will be accessible."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import matplotlib.pyplot as plt\n",
-      "import numpy as np\n",
-      "\n",
-      "plt.plot(np.array(radius), np.array(temperature))\n",
-      "\n",
-      "plt.semilogy()\n",
-      "plt.xlabel(r'$\\rm{R/R_{vir}}$')\n",
-      "plt.ylabel(r'$\\rm{Temperature\\/\\/(K)}$')\n",
-      "\n",
-      "plt.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
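
Condensed into one standalone sketch, the callback pipeline from the notebook
above looks like the following. This is only a sketch: it assumes yt 3.x with
the halo_analysis module installed, and the dataset paths are placeholders for
the usual sample data, not part of this changeset.

    import yt
    from yt.analysis_modules.halo_analysis.api import HaloCatalog

    # Placeholder paths: a simulation output and a matching halo finder output.
    data_ds = yt.load("Enzo_64/DD0043/data0043")
    halos_ds = yt.load("rockstar_halos/halos_0.0.bin")
    hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)

    # Queue the callbacks; nothing runs until create() is called.
    hc.add_callback("sphere", factor=2.0)
    hc.add_callback("profile", ["radius"], [("gas", "overdensity")],
                    weight_field="cell_volume", accumulation=True,
                    storage="virial_quantities_profiles")
    hc.add_callback("virial_quantities", ["radius"],
                    profile_storage="virial_quantities_profiles")
    hc.add_callback("delete_attribute", "virial_quantities_profiles")
    hc.create()  # all queued callbacks run here and the catalog is written out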


https://bitbucket.org/yt_analysis/yt/commits/495a6d33906b/
Changeset:   495a6d33906b
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:52:49+00:00
Summary:     Updating the fits_radio_cubes notebook
Affected #:  1 file

diff -r a17a3b37e36e58dd4b8decd3f1876a98df8e1671 -r 495a6d33906bc945fb9d6efdbbcab091bf30bdc0 doc/source/cookbook/fits_radio_cubes.ipynb
--- a/doc/source/cookbook/fits_radio_cubes.ipynb
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -1,466 +1,500 @@
 {
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This notebook demonstrates some of the capabilties of yt on some FITS \"position-position-spectrum\" cubes of radio data.\n",
+    "\n",
+    "Note that it depends on some external dependencies, including `astropy`, `wcsaxes`, and `pyregion`."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## M33 VLA Image"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The dataset `\"m33_hi.fits\"` has `NaN`s in it, so we'll mask them out by setting `nan_mask` = 0:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0, z_axis_decomp=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, we'll take a slice of the data along the z-axis, which is the velocity axis of the FITS cube:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], origin=\"native\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The x and y axes are in units of the image pixel. When making plots of FITS data, to see the image coordinates as they are in the file, it is helpful to set the keyword `origin = \"native\"`. If you want to see the celestial coordinates along the axes, you can import the `PlotWindowWCS` class and feed it the `SlicePlot`. For this to work, the [WCSAxes](http://wcsaxes.readthedocs.org/en/latest/) package needs to be installed."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.frontends.fits.misc import PlotWindowWCS\n",
+    "PlotWindowWCS(slc)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Generally, it is best to get the plot in the shape you want it before feeding it to `PlotWindowWCS`. Once it looks the way you want, you can save it just like a normal `PlotWindow` plot:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc.save()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. To pick specific velocity values for slices, we will need to use the dataset's `spec2pixel` method to determine which pixels to slice on:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt.units as u\n",
+    "new_center = ds.domain_center\n",
+    "new_center[2] = ds.spec2pixel(-250000.*u.m/u.s)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can use this new center to create a new slice:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can do this a few more times for different values of the velocity:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds.spec2pixel(-100000.*u.m/u.s)\n",
+    "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds.spec2pixel(-150000.*u.m/u.s)\n",
+    "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "These slices demonstrate the intensity of the radio emission at different line-of-sight velocities. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also make a projection of all the emission along the line of sight. Since we're not doing an integration along a path length, we needed to specify `method = \"sum\"`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], origin=\"native\")\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"x\", [\"intensity\"], origin=\"native\", window_size=(8,8))\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"y\", [\"intensity\"], origin=\"native\", window_size=(8,8))\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In these cases, we needed to explicitly declare a square `window_size` to get a figure that looks good. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## $^{13}$CO GRS Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This next example uses one of the cubes from the [Boston University Galactic Ring Survey](http://www.bu.edu/galacticring/new_index.htm). "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"radio_fits/grs-50-cube.fits\", nan_mask=0.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can use the `quantities` methods to determine derived quantities of the dataset. For example, we could find the maximum and minimum temperature:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dd = ds.all_data() # A region containing the entire dataset\n",
+    "extrema = dd.quantities.extrema(\"temperature\")\n",
+    "print (extrema)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can compute the average temperature along the \"velocity\" axis for all positions by making a `ProjectionPlot`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], origin=\"native\", \n",
+    "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
+    "prj.set_log(\"temperature\", True)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also make a histogram of the temperature field of this region:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pplot = yt.ProfilePlot(dd, \"temperature\", [\"ones\"], weight_field=None, n_bins=128)\n",
+    "pplot.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can see from this histogram and our calculation of the dataset's extrema that there is a lot of noise. Suppose we wanted to make a projection, but instead make it only of the cells which had a positive temperature value. We can do this by doing a \"field cut\" on the data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fc = dd.cut_region([\"obj['temperature'] > 0\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now let's check the extents of this region:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (fc.quantities.extrema(\"temperature\"))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Looks like we were successful in filtering out the negative temperatures. To compute the average temperature of this new region:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fc.quantities.weighted_average_quantity(\"temperature\", \"ones\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, let's make a projection of the dataset, using the field cut `fc` as a `data_source`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], data_source=fc, origin=\"native\", \n",
+    "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
+    "prj.set_log(\"temperature\", True)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\" as well, using `ds9_region` (the [pyregion](http://leejjoon.github.io/pyregion/) package needs to be installed for this):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.frontends.fits.misc import ds9_region"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For this example we'll create a ds9 region from scratch and load it up:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "region = 'galactic;box(+49:26:35.150,-0:30:04.410,1926.1927\",1483.3701\",0.0)'\n",
+    "box_reg = ds9_region(ds, region)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This region may now be used to compute derived quantities:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (box_reg.quantities.extrema(\"temperature\"))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or in projections:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], origin=\"native\", \n",
+    "                        data_source=box_reg, weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
+    "prj.set_zlim(\"temperature\", 1.0e-2, 1.5)\n",
+    "prj.set_log(\"temperature\", True)\n",
+    "prj.show()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:bfa15149bd8b2d2f52fd758fbbc27a67846c8c01038fdd2bd366656b50fdd7ba"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This notebook demonstrates some of the capabilties of yt on some FITS \"position-position-spectrum\" cubes of radio data. "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "M33 VLA Image"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The dataset `\"m33_hi.fits\"` has `NaN`s in it, so we'll mask them out by setting `nan_mask` = 0:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0, z_axis_decomp=True)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First, we'll take a slice of the data along the z-axis, which is the velocity axis of the FITS cube:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], origin=\"native\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The x and y axes are in units of the image pixel. When making plots of FITS data, to see the image coordinates as they are in the file, it is helpful to set the keyword `origin = \"native\"`. If you want to see the celestial coordinates along the axes, you can import the `PlotWindowWCS` class and feed it the `SlicePlot`. For this to work, the [WCSAxes](http://wcsaxes.readthedocs.org/en/latest/) package needs to be installed."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.frontends.fits.misc import PlotWindowWCS\n",
-      "PlotWindowWCS(slc)\n"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Generally, it is best to get the plot in the shape you want it before feeding it to `PlotWindowWCS`. Once it looks the way you want, you can save it just like a normal `PlotWindow` plot:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc.save()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. To pick specific velocity values for slices, we will need to use the dataset's `spec2pixel` method to determine which pixels to slice on:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt.units as u\n",
-      "new_center = ds.domain_center\n",
-      "new_center[2] = ds.spec2pixel(-250000.*u.m/u.s)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we can use this new center to create a new slice:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can do this a few more times for different values of the velocity:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds.spec2pixel(-100000.*u.m/u.s)\n",
-      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds.spec2pixel(-150000.*u.m/u.s)\n",
-      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "These slices demonstrate the intensity of the radio emission at different line-of-sight velocities. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also make a projection of all the emission along the line of sight. Since we're not doing an integration along a path length, we needed to specify `method = \"sum\"`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], origin=\"native\")\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"x\", [\"intensity\"], origin=\"native\", window_size=(8,8))\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"y\", [\"intensity\"], origin=\"native\", window_size=(8,8))\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In these cases, we needed to explicitly declare a square `window_size` to get a figure that looks good. "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "$^{13}$CO GRS Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This next example uses one of the cubes from the [Boston University Galactic Ring Survey](http://www.bu.edu/galacticring/new_index.htm). "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"radio_fits/grs-50-cube.fits\", nan_mask=0.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can use the `quantities` methods to determine derived quantities of the dataset. For example, we could find the maximum and minimum temperature:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dd = ds.all_data() # A region containing the entire dataset\n",
-      "extrema = dd.quantities.extrema(\"temperature\")\n",
-      "print extrema"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can compute the average temperature along the \"velocity\" axis for all positions by making a `ProjectionPlot`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], origin=\"native\", \n",
-      "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
-      "prj.set_log(\"temperature\", True)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also make a histogram of the temperature field of this region:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "pplot = yt.ProfilePlot(dd, \"temperature\", [\"ones\"], weight_field=None, n_bins=128)\n",
-      "pplot.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can see from this histogram and our calculation of the dataset's extrema that there is a lot of noise. Suppose we wanted to make a projection, but instead make it only of the cells which had a positive temperature value. We can do this by doing a \"field cut\" on the data:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fc = dd.cut_region([\"obj['temperature'] > 0\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now let's check the extents of this region:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print fc.quantities.extrema(\"temperature\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Looks like we were successful in filtering out the negative temperatures. To compute the average temperature of this new region:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fc.quantities.weighted_average_quantity(\"temperature\", \"ones\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, let's make a projection of the dataset, using the field cut `fc` as a `data_source`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], data_source=fc, origin=\"native\", \n",
-      "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
-      "prj.set_log(\"temperature\", True)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\" as well, using `ds9_region` (the [pyregion](http://leejjoon.github.io/pyregion/) package needs to be installed for this):"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.frontends.fits.misc import ds9_region"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "For this example we'll create a ds9 region from scratch and load it up:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "region = 'galactic;box(+49:26:35.150,-0:30:04.410,1926.1927\",1483.3701\",0.0)'\n",
-      "box_reg = ds9_region(ds, region)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This region may now be used to compute derived quantities:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print box_reg.quantities.extrema(\"temperature\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Or in projections:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], origin=\"native\", \n",
-      "                        data_source=box_reg, weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
-      "prj.set_zlim(\"temperature\", 1.0e-2, 1.5)\n",
-      "prj.set_log(\"temperature\", True)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
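
Condensed from the notebook above, a minimal standalone sketch of the
velocity-slicing and field-cut steps; it assumes the radio_fits sample data is
available locally and yt's FITS frontend (which itself needs astropy):

    import yt
    import yt.units as u

    ds = yt.load("radio_fits/m33_hi.fits", nan_mask=0.0, z_axis_decomp=True)

    # Slice at several line-of-sight velocities by converting each velocity
    # into a pixel coordinate along the spectral ("z") axis.
    for v in (-250000., -150000., -100000.):
        center = ds.domain_center.copy()
        center[2] = ds.spec2pixel(v * u.m / u.s)
        slc = yt.SlicePlot(ds, "z", ["intensity"], center=center, origin="native")
        slc.save("m33_v%i" % v)

    # For the GRS cube, keep only cells with positive temperature before
    # computing derived quantities.
    ds2 = yt.load("radio_fits/grs-50-cube.fits", nan_mask=0.0)
    fc = ds2.all_data().cut_region(["obj['temperature'] > 0"])
    print(fc.quantities.extrema("temperature"))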


https://bitbucket.org/yt_analysis/yt/commits/4bbce8322f9d/
Changeset:   4bbce8322f9d
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:53:06+00:00
Summary:     Updating the fits_xray_images notebook
Affected #:  1 file

diff -r 495a6d33906bc945fb9d6efdbbcab091bf30bdc0 -r 4bbce8322f9d86e66bb1a8757530430e48d38e56 doc/source/cookbook/fits_xray_images.ipynb
--- a/doc/source/cookbook/fits_xray_images.ipynb
+++ b/doc/source/cookbook/fits_xray_images.ipynb
@@ -1,405 +1,431 @@
 {
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import yt\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This notebook shows how to use yt to make plots and examine FITS X-ray images and events files. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Sloshing, Shocks, and Bubbles in Abell 2052"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](http://adsabs.harvard.edu/cgi-bin/bib_query?2011ApJ...737...99B). They consist of two files, a \"flux map\" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits\", \n",
+    "             auxiliary_files=[\"xray_fits/A2052_core_tmap_b1_m2000_.fits\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the \"flux\" file) as a master file, and pass in the \"temperature\" file with the `auxiliary_files` keyword to `load`. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, let's derive some new fields for the number of counts, the \"pseudo-pressure\", and the \"pseudo-entropy\":"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "def _counts(field, data):\n",
+    "    exposure_time = data.get_field_parameter(\"exposure_time\")\n",
+    "    return data[\"flux\"]*data[\"pixel\"]*exposure_time\n",
+    "ds.add_field((\"gas\",\"counts\"), function=_counts, units=\"counts\", take_log=False)\n",
+    "\n",
+    "def _pp(field, data):\n",
+    "    return np.sqrt(data[\"counts\"])*data[\"projected_temperature\"]\n",
+    "ds.add_field((\"gas\",\"pseudo_pressure\"), function=_pp, units=\"sqrt(counts)*keV\", take_log=False)\n",
+    "\n",
+    "def _pe(field, data):\n",
+    "    return data[\"projected_temperature\"]*data[\"counts\"]**(-1./3.)\n",
+    "ds.add_field((\"gas\",\"pseudo_entropy\"), function=_pe, units=\"keV*(counts)**(-1/3)\", take_log=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we're deriving a \"counts\" field from the \"flux\" field by passing it a `field_parameter` for the exposure time of the time and multiplying by the pixel scale. Second, we use the fact that the surface brightness is strongly dependent on density ($S_X \\propto \\rho^2$) to use the counts in each pixel as a \"stand-in\". Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "exposure_time = ds.quan(ds.primary_header[\"exposure\"], \"s\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", \n",
+    "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
+    "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time})\n",
+    "slc.set_log(\"flux\",True)\n",
+    "slc.set_log(\"pseudo_pressure\",False)\n",
+    "slc.set_log(\"pseudo_entropy\",False)\n",
+    "slc.set_width(250.)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To add the celestial coordinates to the image, we can use `PlotWindowWCS`, if you have the [WCSAxes](http://wcsaxes.readthedocs.org/en/latest/) package installed:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.frontends.fits.misc import PlotWindowWCS\n",
+    "wcs_slc = PlotWindowWCS(slc)\n",
+    "wcs_slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can make use of yt's facilities for profile plotting as well."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "v, c = ds.find_max(\"flux\") # Find the maxmimum flux and its center\n",
+    "my_sphere = ds.sphere(c, (100.,\"code_length\")) # Radius of 150 pixels\n",
+    "my_sphere.set_field_parameter(\"exposure_time\", exposure_time)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Such as a radial profile plot:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "radial_profile = yt.ProfilePlot(my_sphere, \"radius\", \n",
+    "                                [\"counts\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
+    "                                n_bins=30, weight_field=\"ones\")\n",
+    "radial_profile.set_log(\"counts\", True)\n",
+    "radial_profile.set_log(\"pseudo_pressure\", True)\n",
+    "radial_profile.set_log(\"pseudo_entropy\", True)\n",
+    "radial_profile.set_xlim(3,100.)\n",
+    "radial_profile.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or a phase plot:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "phase_plot = yt.PhasePlot(my_sphere, \"pseudo_pressure\", \"pseudo_entropy\", [\"counts\"], weight_field=None)\n",
+    "phase_plot.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\", using `ds9_region` (the [pyregion](http://leejjoon.github.io/pyregion/) package needs to be installed for this):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.frontends.fits.misc import ds9_region\n",
+    "reg_file = [\"# Region file format: DS9 version 4.1\\n\",\n",
+    "            \"global color=green dashlist=8 3 width=3 include=1 source=1 fk5\\n\",\n",
+    "            \"circle(15:16:44.817,+7:01:19.62,34.6256\\\")\"]\n",
+    "f = open(\"circle.reg\",\"w\")\n",
+    "f.writelines(reg_file)\n",
+    "f.close()\n",
+    "circle_reg = ds9_region(ds, \"circle.reg\", field_parameters={\"exposure_time\":exposure_time})"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This region may now be used to compute derived quantities:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (circle_reg.quantities.weighted_average_quantity(\"projected_temperature\", \"counts\"))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or used in projections:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", \n",
+    "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
+    "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time},\n",
+    "                   data_source=circle_reg,\n",
+    "                   method=\"sum\")\n",
+    "prj.set_log(\"flux\",True)\n",
+    "prj.set_log(\"pseudo_pressure\",False)\n",
+    "prj.set_log(\"pseudo_entropy\",False)\n",
+    "prj.set_width(250.)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## The Bullet Cluster"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This example uses an events table file from a ~100 ks exposure of the \"Bullet Cluster\" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in yt. However, you can make images of the object in different energy bands using the `setup_counts_fields` function. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.frontends.fits.api import setup_counts_fields"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`load` will handle the events file as FITS image files, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution. by setting the `\"reblock\"` parameter in the `parameters` dictionary in `load`. `\"reblock\"` must be a power of 2. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds2 = yt.load(\"xray_fits/acisf05356N003_evt2.fits.gz\", parameters={\"reblock\":2})"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`setup_counts_fields` will take a list of energy bounds (emin, emax) in keV and create a new field from each where the photons in that energy range will be deposited onto the image grid. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ebounds = [(0.1,2.0),(2.0,5.0)]\n",
+    "setup_counts_fields(ds2, ebounds)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The \"x\", \"y\", \"energy\", and \"time\" fields in the events table are loaded as particle fields. Each one has a name given by \"event\\_\" plus the name of the field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dd = ds2.all_data()\n",
+    "print (dd[\"event_x\"])\n",
+    "print (dd[\"event_y\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\")\n",
+    "slc.pan((100.,100.))\n",
+    "slc.set_width(500.)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The counts fields can take the field parameter `\"sigma\"` and use [AstroPy's convolution routines](http://astropy.readthedocs.org/en/latest/convolution/) to smooth the data with a Gaussian:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\",\n",
+    "                   field_parameters={\"sigma\":2.}) # This value is in pixel scale\n",
+    "slc.pan((100.,100.))\n",
+    "slc.set_width(500.)\n",
+    "slc.set_zlim(\"counts_0.1-2.0\", 0.01, 100.)\n",
+    "slc.set_zlim(\"counts_2.0-5.0\", 0.01, 50.)\n",
+    "slc.show()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:73eb48af6c5628619b5cab88840f6ac2b3eb278ae057c34d567f4a127181b8fb"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import yt\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This notebook shows how to use yt to make plots and examine FITS X-ray images and events files. "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Sloshing, Shocks, and Bubbles in Abell 2052"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](http://adsabs.harvard.edu/cgi-bin/bib_query?2011ApJ...737...99B). They consist of two files, a \"flux map\" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load(\"xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits\", \n",
-      "             auxiliary_files=[\"xray_fits/A2052_core_tmap_b1_m2000_.fits\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the \"flux\" file) as a master file, and pass in the \"temperature\" file with the `auxiliary_files` keyword to `load`. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, let's derive some new fields for the number of counts, the \"pseudo-pressure\", and the \"pseudo-entropy\":"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "def _counts(field, data):\n",
-      "    exposure_time = data.get_field_parameter(\"exposure_time\")\n",
-      "    return data[\"flux\"]*data[\"pixel\"]*exposure_time\n",
-      "ds.add_field((\"gas\",\"counts\"), function=_counts, units=\"counts\", take_log=False)\n",
-      "\n",
-      "def _pp(field, data):\n",
-      "    return np.sqrt(data[\"counts\"])*data[\"projected_temperature\"]\n",
-      "ds.add_field((\"gas\",\"pseudo_pressure\"), function=_pp, units=\"sqrt(counts)*keV\", take_log=False)\n",
-      "\n",
-      "def _pe(field, data):\n",
-      "    return data[\"projected_temperature\"]*data[\"counts\"]**(-1./3.)\n",
-      "ds.add_field((\"gas\",\"pseudo_entropy\"), function=_pe, units=\"keV*(counts)**(-1/3)\", take_log=False)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here, we're deriving a \"counts\" field from the \"flux\" field by passing it a `field_parameter` for the exposure time of the time and multiplying by the pixel scale. Second, we use the fact that the surface brightness is strongly dependent on density ($S_X \\propto \\rho^2$) to use the counts in each pixel as a \"stand-in\". Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "exposure_time = ds.quan(ds.primary_header[\"exposure\"], \"s\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", \n",
-      "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
-      "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time})\n",
-      "slc.set_log(\"flux\",True)\n",
-      "slc.set_log(\"pseudo_pressure\",False)\n",
-      "slc.set_log(\"pseudo_entropy\",False)\n",
-      "slc.set_width(250.)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To add the celestial coordinates to the image, we can use `PlotWindowWCS`, if you have the [WCSAxes](http://wcsaxes.readthedocs.org/en/latest/) package installed:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.frontends.fits.misc import PlotWindowWCS\n",
-      "wcs_slc = PlotWindowWCS(slc)\n",
-      "wcs_slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can make use of yt's facilities for profile plotting as well."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "v, c = ds.find_max(\"flux\") # Find the maxmimum flux and its center\n",
-      "my_sphere = ds.sphere(c, (100.,\"code_length\")) # Radius of 150 pixels\n",
-      "my_sphere.set_field_parameter(\"exposure_time\", exposure_time)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Such as a radial profile plot:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "radial_profile = yt.ProfilePlot(my_sphere, \"radius\", \n",
-      "                                [\"counts\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
-      "                                n_bins=30, weight_field=\"ones\")\n",
-      "radial_profile.set_log(\"counts\", True)\n",
-      "radial_profile.set_log(\"pseudo_pressure\", True)\n",
-      "radial_profile.set_log(\"pseudo_entropy\", True)\n",
-      "radial_profile.set_xlim(3,100.)\n",
-      "radial_profile.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Or a phase plot:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "phase_plot = yt.PhasePlot(my_sphere, \"pseudo_pressure\", \"pseudo_entropy\", [\"counts\"], weight_field=None)\n",
-      "phase_plot.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a \"cut region\", using `ds9_region` (the [pyregion](http://leejjoon.github.io/pyregion/) package needs to be installed for this):"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.frontends.fits.misc import ds9_region\n",
-      "reg_file = [\"# Region file format: DS9 version 4.1\\n\",\n",
-      "            \"global color=green dashlist=8 3 width=3 include=1 source=1 fk5\\n\",\n",
-      "            \"circle(15:16:44.817,+7:01:19.62,34.6256\\\")\"]\n",
-      "f = open(\"circle.reg\",\"w\")\n",
-      "f.writelines(reg_file)\n",
-      "f.close()\n",
-      "circle_reg = ds9_region(ds, \"circle.reg\", field_parameters={\"exposure_time\":exposure_time})"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This region may now be used to compute derived quantities:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print circle_reg.quantities.weighted_average_quantity(\"projected_temperature\", \"counts\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Or used in projections:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", \n",
-      "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
-      "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time},\n",
-      "                   data_source=circle_reg,\n",
-      "                   method=\"sum\")\n",
-      "prj.set_log(\"flux\",True)\n",
-      "prj.set_log(\"pseudo_pressure\",False)\n",
-      "prj.set_log(\"pseudo_entropy\",False)\n",
-      "prj.set_width(250.)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "The Bullet Cluster"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This example uses an events table file from a ~100 ks exposure of the \"Bullet Cluster\" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in yt. However, you can make images of the object in different energy bands using the `setup_counts_fields` function. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.frontends.fits.api import setup_counts_fields"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`load` will handle the events file as a FITS image file, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution by setting the `\"reblock\"` parameter in the `parameters` dictionary in `load`. `\"reblock\"` must be a power of 2. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds2 = yt.load(\"xray_fits/acisf05356N003_evt2.fits.gz\", parameters={\"reblock\":2})"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`setup_counts_fields` will take a list of energy bounds (emin, emax) in keV and create a new field for each band, in which the photons in that energy range are deposited onto the image grid. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ebounds = [(0.1,2.0),(2.0,5.0)]\n",
-      "setup_counts_fields(ds2, ebounds)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The \"x\", \"y\", \"energy\", and \"time\" fields in the events table are loaded as particle fields. Each one has a name given by \"event\\_\" plus the name of the field:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dd = ds2.all_data()\n",
-      "print dd[\"event_x\"]\n",
-      "print dd[\"event_y\"]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\")\n",
-      "slc.pan((100.,100.))\n",
-      "slc.set_width(500.)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The counts fields can take the field parameter `\"sigma\"` and use [AstroPy's convolution routines](http://astropy.readthedocs.org/en/latest/convolution/) to smooth the data with a Gaussian:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\",\n",
-      "                   field_parameters={\"sigma\":2.}) # This value is in pixel scale\n",
-      "slc.pan((100.,100.))\n",
-      "slc.set_width(500.)\n",
-      "slc.set_zlim(\"counts_0.1-2.0\", 0.01, 100.)\n",
-      "slc.set_zlim(\"counts_2.0-5.0\", 0.01, 50.)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
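
As a quick reference, the X-ray events workflow in the notebook above boils down to a few calls. A condensed, runnable sketch -- assuming the same xray_fits sample data is present and AstroPy is installed -- might look like:

    import yt
    from yt.frontends.fits.api import setup_counts_fields

    # Load the Chandra events table; "reblock" coarsens the image grid
    # by the given power of 2.
    ds2 = yt.load("xray_fits/acisf05356N003_evt2.fits.gz",
                  parameters={"reblock": 2})

    # Deposit the photons in two energy bands (keV) onto the grid,
    # creating the "counts_0.1-2.0" and "counts_2.0-5.0" fields.
    setup_counts_fields(ds2, [(0.1, 2.0), (2.0, 5.0)])

    # Plot both bands, smoothed with a 2-pixel Gaussian, and save the
    # images to disk rather than showing them inline.
    slc = yt.SlicePlot(ds2, "z", ["counts_0.1-2.0", "counts_2.0-5.0"],
                       origin="native", field_parameters={"sigma": 2.0})
    slc.save()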


https://bitbucket.org/yt_analysis/yt/commits/896f6b217ee1/
Changeset:   896f6b217ee1
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:53:18+00:00
Summary:     Updating the tipsy_and_yt notebook
Affected #:  1 file

diff -r 4bbce8322f9d86e66bb1a8757530430e48d38e56 -r 896f6b217ee1af6da5fb33b15d9d64bd8c9711e7 doc/source/cookbook/tipsy_and_yt.ipynb
--- a/doc/source/cookbook/tipsy_and_yt.ipynb
+++ b/doc/source/cookbook/tipsy_and_yt.ipynb
@@ -1,197 +1,205 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading Files"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Alright, let's start with some basics.  Before we do anything, we will need to load a snapshot.  You can do this using the ```load``` convenience function.  yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will be looking at a fairly low-resolution dataset.  In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on.  The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8.  Just passing the argument `n_ref=8` to `load` does this for us."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    ">This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We now have a `TipsyDataset` object called `ds`.  Let's see what fields it has."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.field_list"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt also defines so-called \"derived\" fields.  These fields are functions of the on-disk fields that live in the `field_list`.  There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.derived_field_list"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "All of the fields in the `field_list` are arrays containing the values for the associated particles.  These haven't been smoothed or gridded in any way. We can grab the array data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dd = ds.all_data()\n",
+    "xcoord = dd['Gas','Coordinates'][:,0].v\n",
+    "ycoord = dd['Gas','Coordinates'][:,1].v\n",
+    "logT = np.log10(dd['Gas','Temperature'])\n",
+    "plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)\n",
+    "plt.xlim(-20,20)\n",
+    "plt.ylim(-20,20)\n",
+    "cb = plt.colorbar()\n",
+    "cb.set_label('$\\log_{10}$ Temperature')\n",
+    "plt.gcf().set_size_inches(15,10)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Making Smoothed Images"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt will automatically generate smoothed versions of these fields that you can use to plot.  Let's make a density slice and a density projection."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Not only are the values in the tipsy snapshot read and automatically smoothed, but the auxiliary files that have physical significance are also smoothed.  Let's look at a slice of the iron mass fraction."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')"
+   ]
+  }
+ ],
  "metadata": {
   "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
    },
-   "display_name": "IPython (Python 2)",
-   "language": "python",
-   "name": "python2"
-  },
-  "name": "",
-  "signature": "sha256:1f6e5cf50123ad75676f035a2a36cd60f4987832462907b9cb78cb25548d8afd"
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Loading Files"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Alright, let's start with some basics.  Before we do anything, we will need to load a snapshot.  You can do this using the ```load``` convenience function.  yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We will be looking at a fairly low-resolution dataset.  In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on.  The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8.  Just passing the argument `n_ref=8` to `load` does this for us."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      ">This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB)."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We now have a `TipsyDataset` object called `ds`.  Let's see what fields it has."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt also defines so-called \"derived\" fields.  These fields are functions of the on-disk fields that live in the `field_list`.  There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.derived_field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "All of the fields in the `field_list` are arrays containing the values for the associated particles.  These haven't been smoothed or gridded in any way. We can grab the array data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import matplotlib.pyplot as plt\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dd = ds.all_data()\n",
-      "xcoord = dd['Gas','Coordinates'][:,0].v\n",
-      "ycoord = dd['Gas','Coordinates'][:,1].v\n",
-      "logT = np.log10(dd['Gas','Temperature'])\n",
-      "plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)\n",
-      "plt.xlim(-20,20)\n",
-      "plt.ylim(-20,20)\n",
-      "cb = plt.colorbar()\n",
-      "cb.set_label('$\\log_{10}$ Temperature')\n",
-      "plt.gcf().set_size_inches(15,10)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Making Smoothed Images"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "yt will automatically generate smoothed versions of these fields that you can use to plot.  Let's make a temperature slice and a density projection."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Not only are the values in the tipsy snapshot read and automatically smoothed, but the auxiliary files that have physical significance are also smoothed.  Let's look at a slice of the iron mass fraction."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
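
For running the scatter-plot cell above as a standalone script, a minimal sketch -- assuming the TipsyGalaxy data has been unpacked into the working directory, and using a non-interactive matplotlib backend so it works without a display -- might be:

    import yt
    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # headless backend; an assumption for script use
    import matplotlib.pyplot as plt

    ds = yt.load("TipsyGalaxy/galaxy.00300", n_ref=8)
    dd = ds.all_data()

    # Color the gas particles by log10(temperature), as in the notebook,
    # but save the figure to a file instead of displaying it inline.
    x = dd["Gas", "Coordinates"][:, 0].v
    y = dd["Gas", "Coordinates"][:, 1].v
    logT = np.log10(dd["Gas", "Temperature"])
    sc = plt.scatter(x, y, c=logT, s=2 * logT, marker="o",
                     edgecolor="none", vmin=2, vmax=6)
    cb = plt.colorbar(sc)
    cb.set_label(r"$\log_{10}$ Temperature")
    plt.xlim(-20, 20)
    plt.ylim(-20, 20)
    plt.savefig("tipsy_gas_temperature.png")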


https://bitbucket.org/yt_analysis/yt/commits/b5215025c959/
Changeset:   b5215025c959
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:53:35+00:00
Summary:     Updating the yt_gadget_analysis notebook
Affected #:  1 file

diff -r 896f6b217ee1af6da5fb33b15d9d64bd8c9711e7 -r b5215025c95931ce6208517c44bcb011e106df38 doc/source/cookbook/yt_gadget_analysis.ipynb
--- a/doc/source/cookbook/yt_gadget_analysis.ipynb
+++ b/doc/source/cookbook/yt_gadget_analysis.ipynb
@@ -1,263 +1,274 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading the data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First we set up our imports:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np\n",
+    "import yt.units as units\n",
+    "import pylab"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next we load the dataset, specifying the length, mass, and velocity units, as well as the size of the bounding box (which should encapsulate all the particles in the dataset).\n",
+    "\n",
+    "At the end, we flatten the data into \"ad\" in case we want access to the raw simulation data."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    ">This dataset is available for download at http://yt-project.org/data/GadgetDiskGalaxy.tar.gz (430 MB)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fname = 'GadgetDiskGalaxy/snapshot_200.hdf5'\n",
+    "\n",
+    "unit_base = {'UnitLength_in_cm'         : 3.08568e+21,\n",
+    "             'UnitMass_in_g'            :   1.989e+43,\n",
+    "             'UnitVelocity_in_cm_per_s' :      100000}\n",
+    "\n",
+    "bbox_lim = 1e5 #kpc\n",
+    "\n",
+    "bbox = [[-bbox_lim,bbox_lim],\n",
+    "        [-bbox_lim,bbox_lim],\n",
+    "        [-bbox_lim,bbox_lim]]\n",
+    " \n",
+    "ds = yt.load(fname,unit_base=unit_base,bounding_box=bbox)\n",
+    "ds.index\n",
+    "ad= ds.all_data()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's make a projection plot to look at the entire volume"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "px = yt.ProjectionPlot(ds, 'x', ('gas', 'density'))\n",
+    "px.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's print some quantities describing the extent of the domain:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ('left edge: ',ds.domain_left_edge)\n",
+    "print ('right edge: ',ds.domain_right_edge)\n",
+    "print ('center: ',ds.domain_center)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also see the fields that are available to query in the dataset"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sorted(ds.field_list)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's create a data object that represents the full simulation domain, and find the total mass in gas and dark matter particles contained in it:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad = ds.all_data()\n",
+    "\n",
+    "# total_mass returns a list, representing the total gas and dark matter + stellar mass, respectively\n",
+    "print ([tm.in_units('Msun') for tm in ad.quantities.total_mass()])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now let's say we want to zoom in on the box (since clearly the bounding box we chose initially is much larger than the volume containing the gas particles!), and center on the highest gas density peak.  First, let's find this peak:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "density = ad[(\"PartType0\",\"density\")]\n",
+    "wdens = np.where(density == np.max(density))\n",
+    "coordinates = ad[(\"PartType0\",\"Coordinates\")]\n",
+    "center = coordinates[wdens][0]\n",
+    "print ('center = ',center)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Set up the box to zoom into"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_box_size = ds.quan(250,'code_length')\n",
+    "\n",
+    "left_edge = center - new_box_size/2\n",
+    "right_edge = center + new_box_size/2\n",
+    "\n",
+    "print (new_box_size.in_units('Mpc'))\n",
+    "print (left_edge.in_units('Mpc'))\n",
+    "print (right_edge.in_units('Mpc'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad2 = ds.region(center=center, left_edge=left_edge, right_edge=right_edge)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Using this new data object, let's confirm that we're only looking at a subset of the domain by first calculating the total mass in gas and particles contained in the subvolume:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print ([tm.in_units('Msun') for tm in ad2.quantities.total_mass()])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And then by visualizing what the new zoomed region looks like"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "px = yt.ProjectionPlot(ds, 'x', ('gas', 'density'), center=center, width=new_box_size)\n",
+    "px.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Cool - there's a disk galaxy there!"
+   ]
+  }
+ ],
  "metadata": {
   "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 2
+    "version": 3
    },
-   "display_name": "IPython (Python 2)",
-   "language": "python",
-   "name": "python2"
-  },
-  "name": "",
-  "signature": "sha256:42e2b7cc4c70a501432f24bc0d62d0723605d50196399148dd365d28387dd55d"
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Loading the data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First we set up our imports:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np\n",
-      "import yt.units as units\n",
-      "import pylab"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next we load the dataset, specifying the length, mass, and velocity units, as well as the size of the bounding box (which should encapsulate all the particles in the dataset).\n",
-      "\n",
-      "At the end, we flatten the data into \"ad\" in case we want access to the raw simulation data."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      ">This dataset is available for download at http://yt-project.org/data/GadgetDiskGalaxy.tar.gz (430 MB)."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fname = 'GadgetDiskGalaxy/snapshot_200.hdf5'\n",
-      "\n",
-      "unit_base = {'UnitLength_in_cm'         : 3.08568e+21,\n",
-      "             'UnitMass_in_g'            :   1.989e+43,\n",
-      "             'UnitVelocity_in_cm_per_s' :      100000}\n",
-      "\n",
-      "bbox_lim = 1e5 #kpc\n",
-      "\n",
-      "bbox = [[-bbox_lim,bbox_lim],\n",
-      "        [-bbox_lim,bbox_lim],\n",
-      "        [-bbox_lim,bbox_lim]]\n",
-      " \n",
-      "ds = yt.load(fname,unit_base=unit_base,bounding_box=bbox)\n",
-      "ds.index\n",
-      "ad= ds.all_data()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's make a projection plot to look at the entire volume"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "px = yt.ProjectionPlot(ds, 'x', ('gas', 'density'))\n",
-      "px.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's print some quantities about the domain, as well as the physical properties of the simulation\n"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print 'left edge: ',ds.domain_left_edge\n",
-      "print 'right edge: ',ds.domain_right_edge\n",
-      "print 'center: ',ds.domain_center"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also see the fields that are available to query in the dataset"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sorted(ds.field_list)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's create a data object that represents the full simulation domain, and find the total mass in gas and dark matter particles contained in it:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad = ds.all_data()\n",
-      "\n",
-      "# total_mass returns a list, representing the total gas and dark matter + stellar mass, respectively\n",
-      "print [tm.in_units('Msun') for tm in ad.quantities.total_mass()]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now let's say we want to zoom in on the box (since clearly the bounding box we chose initially is much larger than the volume containing the gas particles!), and center on the highest gas density peak.  First, let's find this peak:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "density = ad[(\"PartType0\",\"density\")]\n",
-      "wdens = np.where(density == np.max(density))\n",
-      "coordinates = ad[(\"PartType0\",\"Coordinates\")]\n",
-      "center = coordinates[wdens][0]\n",
-      "print 'center = ',center"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Set up the box to zoom into"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_box_size = ds.quan(250,'code_length')\n",
-      "\n",
-      "left_edge = center - new_box_size/2\n",
-      "right_edge = center + new_box_size/2\n",
-      "\n",
-      "print new_box_size.in_units('Mpc')\n",
-      "print left_edge.in_units('Mpc')\n",
-      "print right_edge.in_units('Mpc')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad2= ds.region(center=center, left_edge=left_edge, right_edge=right_edge)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Using this new data object, let's confirm that we're only looking at a subset of the domain by first calculating the total mass in gas and particles contained in the subvolume:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print [tm.in_units('Msun') for tm in ad.quantities.total_mass()]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "And then by visualizing what the new zoomed region looks like"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "px = yt.ProjectionPlot(ds, 'x', ('gas', 'density'), center=center, width=new_box_size)\n",
-      "px.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Cool - there's a disk galaxy there!"
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
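
The peak-finding cell in the notebook above uses np.where; an equivalent and slightly more direct formulation with np.argmax -- assuming the same GadgetDiskGalaxy snapshot and unit_base -- would be:

    import yt
    import numpy as np

    unit_base = {'UnitLength_in_cm': 3.08568e+21,
                 'UnitMass_in_g': 1.989e+43,
                 'UnitVelocity_in_cm_per_s': 100000}
    bbox_lim = 1e5  # kpc
    bbox = [[-bbox_lim, bbox_lim]] * 3

    ds = yt.load('GadgetDiskGalaxy/snapshot_200.hdf5',
                 unit_base=unit_base, bounding_box=bbox)
    ad = ds.all_data()

    # np.argmax returns the index of the densest gas particle directly,
    # avoiding the np.where round-trip.
    density = ad["PartType0", "density"]
    center = ad["PartType0", "Coordinates"][np.argmax(density)]

    # Carve out a 250 code-length box around the peak and measure the
    # total mass inside it; note that the subvolume (ad2) is queried,
    # not the full domain (ad).
    half_width = ds.quan(250, 'code_length') / 2
    ad2 = ds.region(center=center,
                    left_edge=center - half_width,
                    right_edge=center + half_width)
    print([tm.in_units('Msun') for tm in ad2.quantities.total_mass()])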


https://bitbucket.org/yt_analysis/yt/commits/8ac513cd3816/
Changeset:   8ac513cd3816
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 21:53:48+00:00
Summary:     Updating the yt_gadget_owls_analysis notebook
Affected #:  1 file

diff -r b5215025c95931ce6208517c44bcb011e106df38 -r 8ac513cd38165aab35a60483d07c9174115a9d6a doc/source/cookbook/yt_gadget_owls_analysis.ipynb
--- a/doc/source/cookbook/yt_gadget_owls_analysis.ipynb
+++ b/doc/source/cookbook/yt_gadget_owls_analysis.ipynb
@@ -1,269 +1,288 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# OWLS Examples"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Setup"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first thing you will need to run these examples is a working installation of yt.  The author of these examples followed the instructions under \"Get yt: from source\" at http://yt-project.org/ to install an up-to-date development version of yt.\n",
+    "\n",
+    "Next you should set the default ``test_data_dir`` in the ``.yt/config`` file in your home directory.  Note that you may have to create the directory and file if they don't already exist.\n",
+    "\n",
+    "> [yt]\n",
+    "\n",
+    "> test_data_dir=/home/galtay/yt-data\n",
+    "\n",
+    "Now you should create the directory referenced above (in this case ``/home/galtay/yt-data``).  The first time you request ion fields from yt it will download a supplementary data file to this directory. \n",
+    "\n",
+    "Next we will get an example OWLS snapshot.  There is a snapshot available on the yt data site, http://yt-project.org/data/snapshot_033.tar.gz (269 MB).  Save this tar.gz file in the same directory as this notebook, then unzip and untar it.  "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we will tell the notebook that we want figures produced inline. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we will load the snapshot.  See the docs (http://yt-project.org/docs/dev/examining/loading_data.html#indexing-criteria) for a description of ``n_ref`` and ``over_refine_factor``."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fname = 'snapshot_033/snap_033.0.hdf5'\n",
+    "ds = yt.load(fname, n_ref=64, over_refine_factor=1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Set a ``YTRegion`` that contains all the data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad = ds.all_data()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Inspecting "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The dataset can tell us what fields it knows about, "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.field_list"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.derived_field_list"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that the ion fields follow the naming convention described in YTEP-0003 http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0003.html#molecular-and-atomic-species-names"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Accessing Particle Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The raw particle data can be accessed using the particle types.  This corresponds directly with what is in the HDF5 snapshots. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad['PartType0', 'Coordinates']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad['PartType4', 'IronFromSNIa']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad['PartType1', 'ParticleIDs']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad['PartType0', 'Hydrogen']"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Projection Plots"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The projection plots make use of derived fields that store the smoothed particle data (particles smoothed onto an oct-tree).  Below we make a projection of all hydrogen gas followed by only the neutral hydrogen gas. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pz = yt.ProjectionPlot(ds, 'z', ('gas', 'H_density'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pz.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pz = yt.ProjectionPlot(ds, 'z', ('gas', 'H_p0_density'))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "pz.show()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": ""
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 1,
-     "metadata": {},
-     "source": [
-      "OWLS Examples"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Setup"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The first thing you will need to run these examples is a working installation of yt.  The author of these examples followed the instructions under \"Get yt: from source\" at http://yt-project.org/ to install an up-to-date development version of yt.\n",
-      "\n",
-      "Next you should set the default ``test_data_dir`` in the ``.yt/config`` file in your home directory.  Note that you may have to create the directory and file if they don't already exist.\n",
-      "\n",
-      "> [yt]\n",
-      "\n",
-      "> test_data_dir=/home/galtay/yt-data\n",
-      "\n",
-      "Now you should create the directory referenced above (in this case ``/home/galtay/yt-data``).  The first time you request ion fields from yt it will download a supplementary data file to this directory. \n",
-      "\n",
-      "Next we will get an example OWLS snapshot.  There is a snapshot available on the yt data site, http://yt-project.org/data/snapshot_033.tar.gz (269 MB).  Save this tar.gz file in the same directory as this notebook, then unzip and untar it.  "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we will tell the notebook that we want figures produced inline. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Loading"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we will load the snapshot.  See the docs (http://yt-project.org/docs/dev/examining/loading_data.html#indexing-criteria) for a description of ``n_ref`` and ``over_refine_factor``."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "fname = 'snapshot_033/snap_033.0.hdf5'\n",
-      "ds = yt.load(fname, n_ref=64, over_refine_factor=1)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Set a ``YTRegion`` that contains all the data."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad = ds.all_data()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Inspecting "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The dataset can tell us what fields it knows about, "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.derived_field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Note that the ion fields follow the naming convention described in YTEP-0003 http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0003.html#molecular-and-atomic-species-names"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Accessing Particle Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The raw particle data can be accessed using the particle types.  This corresponds directly with what is in the hdf5 snapshots. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad['PartType0', 'Coordinates']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad['PartType4', 'IronFromSNIa']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad['PartType1', 'ParticleIDs']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad['PartType0', 'Hydrogen']"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Projection Plots"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The projection plots make use of derived fields that store the smoothed particle data (particles smoothed onto an oct-tree).  Below we make a projection of all hydrogen gas followed by only the neutral hydrogen gas. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "pz = yt.ProjectionPlot(ds, 'z', ('gas', 'H_density'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "pz.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "pz = yt.ProjectionPlot(ds, 'z', ('gas', 'H_p0_density'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "pz.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
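
The configuration step at the top of this notebook can also be done from Python. A short sketch -- where the ~/yt-data path is an example rather than a requirement, and a ytcfg.set call only affects the running session (editing ~/.yt/config makes it permanent) -- might look like:

    import os
    from yt.config import ytcfg

    # Point yt at the directory holding the sample data and the
    # supplementary ion-balance tables (example path).
    data_dir = os.path.expanduser("~/yt-data")
    os.makedirs(data_dir, exist_ok=True)
    ytcfg.set("yt", "test_data_dir", data_dir)

    import yt

    # Project total and neutral hydrogen density and save the images.
    ds = yt.load("snapshot_033/snap_033.0.hdf5",
                 n_ref=64, over_refine_factor=1)
    for field in [("gas", "H_density"), ("gas", "H_p0_density")]:
        yt.ProjectionPlot(ds, "z", field).save()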


https://bitbucket.org/yt_analysis/yt/commits/9161e77f420b/
Changeset:   9161e77f420b
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:07:44+00:00
Summary:     Updating the Loading_Generic_Array_Data notebook
Affected #:  1 file

diff -r 8ac513cd38165aab35a60483d07c9174115a9d6a -r 9161e77f420b98342fc22f804afffd3119e018b3 doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,653 +1,683 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Even if your data is not strictly related to fields commonly used in\n",
+    "astrophysical codes or your code is not supported yet, you can still feed it to\n",
+    "yt to use its advanced visualization and analysis facilities. The only\n",
+    "requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Generic Unigrid Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Data generated \"on-the-fly\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The most common example is that of data that is generated in memory from the currently running script or notebook. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this example, we'll just create a 3-D array of random floating-point data using NumPy:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "arr = np.random.random(size=(64,64,64))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To load this data into yt, we need to associate it with a field. The `data` dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call `load_uniform_grid`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = dict(density = (arr, \"g/cm**3\"))\n",
+    "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
+    "ds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`load_uniform_grid` takes the following arguments and optional keywords:\n",
+    "\n",
+    "* `data` : This is a dict of numpy arrays, where the keys are the field names\n",
+    "* `domain_dimensions` : The domain dimensions of the unigrid\n",
+    "* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number\n",
+    "* `bbox` : Size of computational domain in units of `code_length`\n",
+    "* `nprocs` : If greater than 1, will create this number of subarrays out of data\n",
+    "* `sim_time` : The simulation time in seconds\n",
+    "* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n",
+    "* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n",
+    "* `velocity_unit` : The unit that corresponds to `code_velocity`\n",
+    "* `magnetic_unit` : The unit that corresponds to `code_magnetic`, i.e. the internal units used to represent magnetic field strengths.\n",
+    "* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n",
+    "\n",
+    "This example creates a yt-native dataset `ds` that will treat your array as a\n",
+    "density field in a cubic domain of 3 Mpc edge size and simultaneously divide the \n",
+    "domain into `nprocs` = 64 chunks, so that you can take advantage\n",
+    "of the underlying parallelism. \n",
+    "\n",
+    "The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:\n",
+    "* A string, e.g. `length_unit=\"Mpc\"`\n",
+    "* A tuple, e.g. `mass_unit=(1.0e14, \"Msun\")`\n",
+    "* A floating-point value, e.g. `time_unit=3.1557e13`\n",
+    "\n",
+    "In the latter case, the unit is assumed to be cgs. \n",
+    "\n",
+    "The resulting `ds` functions exactly like any other dataset yt can handle--it can be sliced, and we can show the grid boundaries:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
+    "slc.set_cmap(\"density\", \"Blues\")\n",
+    "slc.annotate_grids(cmap=None)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Particle fields are detected as one-dimensional fields. The number of\n",
+    "particles is set by the `number_of_particles` key in\n",
+    "`data`. Particle fields are then added as one-dimensional arrays in\n",
+    "a similar manner as the three-dimensional grid fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
+    "posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
+    "posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
+    "data = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n",
+    "            number_of_particles = 10000,\n",
+    "            particle_position_x = (posx_arr, 'code_length'), \n",
+    "            particle_position_y = (posy_arr, 'code_length'),\n",
+    "            particle_position_z = (posz_arr, 'code_length'))\n",
+    "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
+    "ds = yt.load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
+    "                       bbox=bbox, nprocs=4)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this example only the particle position fields have been assigned. `number_of_particles` must equal the length of the particle\n",
+    "arrays. If no particle arrays are supplied, then `number_of_particles` is assumed to be zero. Take a slice, and overlay the particle positions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
+    "slc.set_cmap(\"density\", \"Blues\")\n",
+    "slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### HDF5 data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "HDF5 is a convenient format to store data. If you have unigrid data stored in an HDF5 file, it is possible to load it into memory and then use `load_uniform_grid` to get it into yt:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import h5py\n",
+    "from yt.config import ytcfg\n",
+    "data_dir = ytcfg.get('yt','test_data_dir')\n",
+    "from yt.utilities.physical_ratios import cm_per_kpc\n",
+    "f = h5py.File(data_dir+\"/UnigridData/turb_vels.h5\", \"r\") # Read-only access to the file"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The HDF5 file handle's keys correspond to the datasets stored in the file:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (f.keys())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. In this case, the units are simply cgs:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "units = [\"gauss\",\"gauss\",\"gauss\", \"g/cm**3\", \"erg/cm**3\", \"K\", \n",
+    "         \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = {k:(v.value,u) for (k,v), u in zip(f.items(),units)}\n",
+    "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
+    "                       periodicity=(False,False,False))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this case, the data came from a simulation which was 250 kpc on a side. An example projection of three of the fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
+    "prj.set_log(\"z-velocity\", False)\n",
+    "prj.set_log(\"Bx\", False)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Volume Rendering Loaded Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Volume rendering requires defining a `TransferFunction` to map data to color and opacity and a `camera` to create a viewport and render the image."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Find the min and max of the field\n",
+    "mi, ma = ds.all_data().quantities.extrema('Temperature')\n",
+    "# Reduce the dynamic range\n",
+    "mi = mi.value + 1.5e7\n",
+    "ma = ma.value - 0.81e7"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Create a Transfer Function that goes from the minimum to the maximum of the data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tf = yt.ColorTransferFunction((mi, ma), grey_opacity=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Define the properties and size of the `camera` viewport:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Choose a vector representing the viewing direction.\n",
+    "L = [0.5, 0.5, 0.5]\n",
+    "# Define the center of the camera to be the domain center\n",
+    "c = ds.domain_center\n",
+    "# Define the width of the image\n",
+    "W = 1.5*ds.domain_width[0]\n",
+    "# Define the number of pixels to render\n",
+    "Npixels = 512 "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Create a `camera` object and map the transfer function onto a colormap:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cam = ds.camera(c, L, W, Npixels, tf, fields=['Temperature'],\n",
+    "                north_vector=[0,0,1], steady_north=True, \n",
+    "                sub_samples=5, log_fields=[False])\n",
+    "\n",
+    "cam.transfer_function.map_to_colormap(mi,ma, \n",
+    "                                      scale=15.0, colormap='algae')"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cam.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### FITS image data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The FITS file format is a common astronomical format for 2-D images, but it can store three-dimensional data as well. The [AstroPy](http://www.astropy.org) project has modules for FITS reading and writing, which were incorporated from the [PyFITS](http://www.stsci.edu/institute/software_hardware/pyfits) library."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import astropy.io.fits as pyfits\n",
+    "# Or, just import pyfits if that's what you have installed"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Using `pyfits` we can open a FITS file. If we call `info()` on the file handle, we can figure out some information about the file's contents. The file in this example has a primary HDU (header-data-unit) with no data, and three HDUs with 3-D data. In this case, the data consists of three velocity fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits\")\n",
+    "f.info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = {}\n",
+    "for hdu in f:\n",
+    "    name = hdu.name.lower()\n",
+    "    data[name] = (hdu.data,\"km/s\")\n",
+    "print (data.keys())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The velocity field names in this case are slightly different than the standard yt field names for velocity fields, so we will reassign the field names:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data[\"velocity_x\"] = data.pop(\"x-velocity\")\n",
+    "data[\"velocity_y\"] = data.pop(\"y-velocity\")\n",
+    "data[\"velocity_z\"] = data.pop(\"z-velocity\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we load the data into yt. Let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
+    "slc = yt.SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
+    "for ax in \"xyz\":\n",
+    "    slc.set_log(\"velocity_%s\" % (ax), False)\n",
+    "slc.annotate_velocity()\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Generic AMR Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (`level == 0`) covering the entire domain and a subgrid at `level == 1`. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "grid_data = [\n",
+    "    dict(left_edge = [0.0, 0.0, 0.0],\n",
+    "         right_edge = [1.0, 1.0, 1.0],\n",
+    "         level = 0,\n",
+    "         dimensions = [32, 32, 32]), \n",
+    "    dict(left_edge = [0.25, 0.25, 0.25],\n",
+    "         right_edge = [0.75, 0.75, 0.75],\n",
+    "         level = 1,\n",
+    "         dimensions = [32, 32, 32])\n",
+    "   ]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We'll just fill each grid with random density data, with a scaling with the grid refinement level."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "for g in grid_data: \n",
+    "    g[\"density\"] = (np.random.random(g[\"dimensions\"]) * 2**g[\"level\"], \"g/cm**3\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Particle fields are supported by adding 1-dimensional arrays to each `grid` and\n",
+    "setting the `number_of_particles` key in each `grid`'s dict. If a grid has no particles, set `number_of_particles = 0`, but the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "grid_data[0][\"number_of_particles\"] = 0 # Set no particles in the top-level grid\n",
+    "grid_data[0][\"particle_position_x\"] = (np.array([]), \"code_length\") # No particles, so set empty arrays\n",
+    "grid_data[0][\"particle_position_y\"] = (np.array([]), \"code_length\")\n",
+    "grid_data[0][\"particle_position_z\"] = (np.array([]), \"code_length\")\n",
+    "grid_data[1][\"number_of_particles\"] = 1000\n",
+    "grid_data[1][\"particle_position_x\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\n",
+    "grid_data[1][\"particle_position_y\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\n",
+    "grid_data[1][\"particle_position_z\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Then, call `load_amr_grids`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load_amr_grids(grid_data, [32, 32, 32])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
+    "slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n",
+    "slc.show()"
+   ]
+  },
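+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As noted above, `load_amr_grids` accepts the same optional keywords as `load_uniform_grid`. The following cell is an illustrative sketch, not part of the original example: it builds a fresh minimal one-grid list and passes arbitrary values for `bbox`, `sim_time`, and the unit keywords."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch only: the grid list and keyword values here are\n",
+    "# arbitrary choices, not taken from the original example.\n",
+    "sketch_grids = [dict(left_edge=[0.0, 0.0, 0.0], right_edge=[1.0, 1.0, 1.0],\n",
+    "                     level=0, dimensions=[32, 32, 32],\n",
+    "                     density=(np.random.random((32, 32, 32)), \"g/cm**3\"))]\n",
+    "ds_sketch = yt.load_amr_grids(sketch_grids, [32, 32, 32],\n",
+    "                              bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),\n",
+    "                              sim_time=0.0, length_unit=(1.0, \"Mpc\"),\n",
+    "                              mass_unit=(1.0, \"Msun\"))"
+   ]
+  },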
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Caveats for Loading Generic Array Data"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "* Units will be incorrect unless the data has already been converted to cgs.\n",
+    "* Particles may be difficult to integrate.\n",
+    "* Data must already reside in memory before loading it in to yt, whether it is generated at runtime or loaded from disk. \n",
+    "* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\n",
+    "* No consistency checks are performed on the hierarchy\n",
+    "* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data. "
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:5a62a9f151e691e242c1f5043e9211e166d70fd35a83f61278083c361fb07f12"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Even if your data is not strictly related to fields commonly used in\n",
-      "astrophysical codes or your code is not supported yet, you can still feed it to\n",
-      "yt to use its advanced visualization and analysis facilities. The only\n",
-      "requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful. "
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Generic Unigrid Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Data generated \"on-the-fly\""
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The most common example is that of data that is generated in memory from the currently running script or notebook. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In this example, we'll just create a 3-D array of random floating-point data using NumPy:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "arr = np.random.random(size=(64,64,64))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To load this data into yt, we need associate it with a field. The `data` dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call `load_uniform_grid`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = dict(density = (arr, \"g/cm**3\"))\n",
-      "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "ds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`load_uniform_grid` takes the following arguments and optional keywords:\n",
-      "\n",
-      "* `data` : This is a dict of numpy arrays, where the keys are the field names\n",
-      "* `domain_dimensions` : The domain dimensions of the unigrid\n",
-      "* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number\n",
-      "* `bbox` : Size of computational domain in units of `code_length`\n",
-      "* `nprocs` : If greater than 1, will create this number of subarrays out of data\n",
-      "* `sim_time` : The simulation time in seconds\n",
-      "* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n",
-      "* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n",
-      "* `velocity_unit` : The unit that corresponds to `code_velocity`\n",
-      "* `magnetic_unit` : The unit that corresponds to `code_magnetic`, i.e. the internal units used to represent magnetic field strengths.\n",
-      "* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n",
-      "\n",
-      "This example creates a yt-native dataset `ds` that will treat your array as a\n",
-      "density field in cubic domain of 3 Mpc edge size and simultaneously divide the \n",
-      "domain into `nprocs` = 64 chunks, so that you can take advantage\n",
-      "of the underlying parallelism. \n",
-      "\n",
-      "The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:\n",
-      "* A string, e.g. `length_unit=\"Mpc\"`\n",
-      "* A tuple, e.g. `mass_unit=(1.0e14, \"Msun\")`\n",
-      "* A floating-point value, e.g. `time_unit=3.1557e13`\n",
-      "\n",
-      "In the latter case, the unit is assumed to be cgs. \n",
-      "\n",
-      "The resulting `ds` functions exactly like a dataset like any other yt can handle--it can be sliced, and we can show the grid boundaries:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
-      "slc.set_cmap(\"density\", \"Blues\")\n",
-      "slc.annotate_grids(cmap=None)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Particle fields are detected as one-dimensional fields. The number of\n",
-      "particles is set by the `number_of_particles` key in\n",
-      "`data`. Particle fields are then added as one-dimensional arrays in\n",
-      "a similar manner as the three-dimensional grid fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
-      "posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
-      "posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
-      "data = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n",
-      "            number_of_particles = 10000,\n",
-      "            particle_position_x = (posx_arr, 'code_length'), \n",
-      "            particle_position_y = (posy_arr, 'code_length'),\n",
-      "            particle_position_z = (posz_arr, 'code_length'))\n",
-      "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "ds = yt.load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
-      "                       bbox=bbox, nprocs=4)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In this example only the particle position fields have been assigned. `number_of_particles` must be the same size as the particle\n",
-      "arrays. If no particle arrays are supplied then `number_of_particles` is assumed to be zero. Take a slice, and overlay particle positions:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
-      "slc.set_cmap(\"density\", \"Blues\")\n",
-      "slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "HDF5 data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "HDF5 is a convenient format to store data. If you have unigrid data stored in an HDF5 file, it is possible to load it into memory and then use `load_uniform_grid` to get it into yt:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import h5py\n",
-      "from yt.config import ytcfg\n",
-      "data_dir = ytcfg.get('yt','test_data_dir')\n",
-      "from yt.utilities.physical_ratios import cm_per_kpc\n",
-      "f = h5py.File(data_dir+\"/UnigridData/turb_vels.h5\", \"r\") # Read-only access to the file"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The HDF5 file handle's keys correspond to the datasets stored in the file:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print f.keys()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. In this case, the units are simply cgs:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "units = [\"gauss\",\"gauss\",\"gauss\", \"g/cm**3\", \"erg/cm**3\", \"K\", \n",
-      "         \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\"]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = {k:(v.value,u) for (k,v), u in zip(f.items(),units)}\n",
-      "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
-      "                       periodicity=(False,False,False))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In this case, the data came from a simulation which was 250 kpc on a side. An example projection of two fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
-      "prj.set_log(\"z-velocity\", False)\n",
-      "prj.set_log(\"Bx\", False)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "Volume Rendering Loaded Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Volume rendering requires defining a `TransferFunction` to map data to color and opacity and a `camera` to create a viewport and render the image."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "#Find the min and max of the field\n",
-      "mi, ma = ds.all_data().quantities.extrema('Temperature')\n",
-      "#Reduce the dynamic range\n",
-      "mi = mi.value + 1.5e7\n",
-      "ma = ma.value - 0.81e7"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Create a Transfer Function that goes from the minimum to the maximum of the data:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tf = yt.ColorTransferFunction((mi, ma), grey_opacity=False)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Define the properties and size of the `camera` viewport:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Choose a vector representing the viewing direction.\n",
-      "L = [0.5, 0.5, 0.5]\n",
-      "# Define the center of the camera to be the domain center\n",
-      "c = ds.domain_center[0]\n",
-      "# Define the width of the image\n",
-      "W = 1.5*ds.domain_width[0]\n",
-      "# Define the number of pixels to render\n",
-      "Npixels = 512 "
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Create a `camera` object and "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cam = ds.camera(c, L, W, Npixels, tf, fields=['Temperature'],\n",
-      "                north_vector=[0,0,1], steady_north=True, \n",
-      "                sub_samples=5, log_fields=[False])\n",
-      "\n",
-      "cam.transfer_function.map_to_colormap(mi,ma, \n",
-      "                                      scale=15.0, colormap='algae')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cam.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 3,
-     "metadata": {},
-     "source": [
-      "FITS image data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The FITS file format is a common astronomical format for 2-D images, but it can store three-dimensional data as well. The [AstroPy](http://www.astropy.org) project has modules for FITS reading and writing, which were incorporated from the [PyFITS](http://www.stsci.edu/institute/software_hardware/pyfits) library."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import astropy.io.fits as pyfits\n",
-      "# Or, just import pyfits if that's what you have installed"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Using `pyfits` we can open a FITS file. If we call `info()` on the file handle, we can figure out some information about the file's contents. The file in this example has a primary HDU (header-data-unit) with no data, and three HDUs with 3-D data. In this case, the data consists of three velocity fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits\")\n",
-      "f.info()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = {}\n",
-      "for hdu in f:\n",
-      "    name = hdu.name.lower()\n",
-      "    data[name] = (hdu.data,\"km/s\")\n",
-      "print data.keys()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The velocity field names in this case are slightly different than the standard yt field names for velocity fields, so we will reassign the field names:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data[\"velocity_x\"] = data.pop(\"x-velocity\")\n",
-      "data[\"velocity_y\"] = data.pop(\"y-velocity\")\n",
-      "data[\"velocity_z\"] = data.pop(\"z-velocity\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we load the data into yt. Let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
-      "slc = yt.SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
-      "for ax in \"xyz\":\n",
-      "    slc.set_log(\"velocity_%s\" % (ax), False)\n",
-      "slc.annotate_velocity()\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Generic AMR Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (`level == 0`) covering the entire domain and a subgrid at `level == 1`. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "grid_data = [\n",
-      "    dict(left_edge = [0.0, 0.0, 0.0],\n",
-      "         right_edge = [1.0, 1.0, 1.0],\n",
-      "         level = 0,\n",
-      "         dimensions = [32, 32, 32]), \n",
-      "    dict(left_edge = [0.25, 0.25, 0.25],\n",
-      "         right_edge = [0.75, 0.75, 0.75],\n",
-      "         level = 1,\n",
-      "         dimensions = [32, 32, 32])\n",
-      "   ]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We'll just fill each grid with random density data, with a scaling with the grid refinement level."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "for g in grid_data: \n",
-      "    g[\"density\"] = (np.random.random(g[\"dimensions\"]) * 2**g[\"level\"], \"g/cm**3\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Particle fields are supported by adding 1-dimensional arrays to each `grid` and\n",
-      "setting the `number_of_particles` key in each `grid`'s dict. If a grid has no particles, set `number_of_particles = 0`, but the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "grid_data[0][\"number_of_particles\"] = 0 # Set no particles in the top-level grid\n",
-      "grid_data[0][\"particle_position_x\"] = (np.array([]), \"code_length\") # No particles, so set empty arrays\n",
-      "grid_data[0][\"particle_position_y\"] = (np.array([]), \"code_length\")\n",
-      "grid_data[0][\"particle_position_z\"] = (np.array([]), \"code_length\")\n",
-      "grid_data[1][\"number_of_particles\"] = 1000\n",
-      "grid_data[1][\"particle_position_x\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\n",
-      "grid_data[1][\"particle_position_y\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\n",
-      "grid_data[1][\"particle_position_z\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Then, call `load_amr_grids`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load_amr_grids(grid_data, [32, 32, 32])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
-      "slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Caveats for Loading Generic Array Data"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "* Units will be incorrect unless the data has already been converted to cgs.\n",
-      "* Particles may be difficult to integrate.\n",
-      "* Data must already reside in memory before loading it in to yt, whether it is generated at runtime or loaded from disk. \n",
-      "* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\n",
-      "* No consistency checks are performed on the hierarchy\n",
-      "* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data. "
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}


https://bitbucket.org/yt_analysis/yt/commits/a22184a8b94e/
Changeset:   a22184a8b94e
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:07:57+00:00
Summary:     Updating the Loading_Generic_Particle_Data notebook
Affected #:  1 file

diff -r 9161e77f420b98342fc22f804afffd3119e018b3 -r a22184a8b94ec7411e79e8cd76ee09ad10013bfe doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Particle_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -1,156 +1,173 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n",
+    "\n",
+    "Our \"fake\" dataset will be numpy arrays filled with normally distributed randoml particle positions and uniform particle masses.  Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "n_particles = 5e6\n",
+    "\n",
+    "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
+    "\n",
+    "ppm = np.ones(n_particles)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = {'particle_position_x': ppx,\n",
+    "        'particle_position_y': ppy,\n",
+    "        'particle_position_z': ppz,\n",
+    "        'particle_mass': ppm}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends."
+   ]
+  },
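+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For instance, other recognized particle fields can be added with the same naming convention. The sketch below is purely illustrative: the velocity values are made up and the copied dictionary is not used in the rest of this notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Hypothetical sketch: attach particle velocities using the standard\n",
+    "# yt field names.  These values are made up for illustration and the\n",
+    "# resulting dictionary is unused below.\n",
+    "data_with_velocities = dict(data)\n",
+    "data_with_velocities['particle_velocity_x'] = np.random.normal(scale=1e5, size=n_particles)"
+   ]
+  },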
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "from yt.units import parsec, Msun\n",
+    "\n",
+    "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]])\n",
+    "\n",
+    "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS.  I've arbitrarily chosen one parsec and 10^8 Msun for this example. \n",
+    "\n",
+    "The `n_ref` parameter controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement.  Larger `n_ref` will decrease poisson noise at the cost of resolution in the octree.  \n",
+    "\n",
+    "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles.  This is used to set the size of the base octree block."
+   ]
+  },
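+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an aside, the unit keywords also accept the `(value, unit)` tuple form used by `load_uniform_grid`, so the call above could equivalently be written as below. This is an illustrative sketch of the same load, assuming the stream frontend's usual unit handling, not a new analysis step."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Sketch: the same load as above, with (value, unit) tuples instead\n",
+    "# of yt.units quantities.\n",
+    "ds = yt.load_particles(data, length_unit=(1.0, 'pc'), mass_unit=(1e8, 'Msun'),\n",
+    "                       n_ref=256, bbox=bbox)"
+   ]
+  },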
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This new dataset acts like any other yt `Dataset` object, and can be used to create data objects and query for yt fields.  This example shows how to access \"deposit\" fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ad = ds.all_data()\n",
+    "\n",
+    "# This is generated with \"cloud-in-cell\" interpolation.\n",
+    "cic_density = ad[\"deposit\", \"all_cic\"]\n",
+    "\n",
+    "# These three are based on nearest-neighbor cell deposition\n",
+    "nn_density = ad[\"deposit\", \"all_density\"]\n",
+    "nn_deposited_mass = ad[\"deposit\", \"all_mass\"]\n",
+    "particle_count_per_cell = ad[\"deposit\", \"all_count\"]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.field_list"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds.derived_field_list"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
+    "slc.set_width((8, 'Mpc'))"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:6da8ec00f414307f27544fbdbc6b4fa476e5e96809003426279b2a1c898b4546"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n",
-      "\n",
-      "Our \"fake\" dataset will be numpy arrays filled with normally distributed randoml particle positions and uniform particle masses.  Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import numpy as np\n",
-      "\n",
-      "n_particles = 5e6\n",
-      "\n",
-      "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
-      "\n",
-      "ppm = np.ones(n_particles)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = {'particle_position_x': ppx,\n",
-      "        'particle_position_y': ppy,\n",
-      "        'particle_position_z': ppz,\n",
-      "        'particle_mass': ppm}"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "from yt.units import parsec, Msun\n",
-      "\n",
-      "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]])\n",
-      "\n",
-      "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS.  I've arbitrarily chosen one parsec and 10^8 Msun for this example. \n",
-      "\n",
-      "The `n_ref` parameter controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement.  Larger `n_ref` will decrease poisson noise at the cost of resolution in the octree.  \n",
-      "\n",
-      "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles.  This is used to set the size of the base octree block."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This new dataset acts like any other yt `Dataset` object, and can be used to create data objects and query for yt fields.  This example shows how to access \"deposit\" fields:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ad = ds.all_data()\n",
-      "\n",
-      "# This is generated with \"cloud-in-cell\" interpolation.\n",
-      "cic_density = ad[\"deposit\", \"all_cic\"]\n",
-      "\n",
-      "# These three are based on nearest-neighbor cell deposition\n",
-      "nn_density = ad[\"deposit\", \"all_density\"]\n",
-      "nn_deposited_mass = ad[\"deposit\", \"all_mass\"]\n",
-      "particle_count_per_cell = ad[\"deposit\", \"all_count\"]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.derived_field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
-      "slc.set_width((8, 'Mpc'))"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
+ "nbformat": 4,
+ "nbformat_minor": 0
 }


https://bitbucket.org/yt_analysis/yt/commits/ee56ec8990c6/
Changeset:   ee56ec8990c6
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:08:22+00:00
Summary:     Updating the Loading_Spherical_Data notebook
Affected #:  1 file

diff -r a22184a8b94ec7411e79e8cd76ee09ad10013bfe -r ee56ec8990c6cbd184beedf936437a8620162f5a doc/source/examining/Loading_Spherical_Data.ipynb
--- a/doc/source/examining/Loading_Spherical_Data.ipynb
+++ b/doc/source/examining/Loading_Spherical_Data.ipynb
@@ -1,188 +1,206 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Loading Spherical Data\n",
+    "\n",
+    "With version 3.0 of yt, it has gained the ability to load data from non-Cartesian systems.  This support is still being extended, but here is an example of how to load spherical data from a regularly-spaced grid.  For irregularly spaced grids, a similar setup can be used, but the `load_hexahedral_mesh` method will have to be used instead.\n",
+    "\n",
+    "Note that in yt, \"spherical\" means that it is ordered $r$, $\\theta$, $\\phi$, where $\\theta$ is the declination from the azimuth (running from $0$ to $\\pi$ and $\\phi$ is the angle around the zenith (running from $0$ to $2\\pi$).\n",
+    "\n",
+    "We first start out by loading yt."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import yt"
+   ]
+  },
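+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check on this ordering (an illustrative snippet, not part of the original notebook), we can convert a single $(r, \\theta, \\phi)$ point to Cartesian coordinates by hand; the derived fields defined below apply the same formulas to whole arrays."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# theta is measured down from the zenith, so theta = pi/2 with phi = 0\n",
+    "# should land on the +x axis.\n",
+    "r, theta, phi = 1.0, np.pi/2, 0.0\n",
+    "print(r*np.sin(theta)*np.cos(phi),\n",
+    "      r*np.sin(theta)*np.sin(phi),\n",
+    "      r*np.cos(theta))  # expect approximately (1, 0, 0)"
+   ]
+  },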
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, we create a few derived fields.  The first three are just straight translations of the Cartesian coordinates, so that we can see where we are located in the data, and understand what we're seeing.  The final one is just a fun field that is some combination of the three coordinates, and will vary in all dimensions."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "@yt.derived_field(name = \"sphx\", units = \"cm\", take_log=False)\n",
+    "def sphx(field, data):\n",
+    "    return np.cos(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
+    "@yt.derived_field(name = \"sphy\", units = \"cm\", take_log=False)\n",
+    "def sphy(field, data):\n",
+    "    return np.sin(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
+    "@yt.derived_field(name = \"sphz\", units = \"cm\", take_log=False)\n",
+    "def sphz(field, data):\n",
+    "    return np.cos(data[\"theta\"])*data[\"r\"]\n",
+    "@yt.derived_field(name = \"funfield\", units=\"cm\", take_log=False)\n",
+    "def funfield(field, data):\n",
+    "    return (np.sin(data[\"phi\"])**2 + np.cos(data[\"theta\"])**2) * (1.0*data[\"r\"].uq+data[\"r\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading Data\n",
+    "\n",
+    "Now we can actually load our data.  We use the `load_uniform_grid` function here.  Normally, the first argument would be a dictionary of field data, where the keys were the field names and the values the field data arrays.  Here, we're just going to look at derived fields, so we supply an empty one.\n",
+    "\n",
+    "The next few arguments are the number of dimensions, the bounds, and we then specify the geometry as spherical."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load_uniform_grid({}, [128, 128, 128],\n",
+    "                          bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2*np.pi]]),\n",
+    "                          geometry=\"spherical\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Looking at Data\n",
+    "\n",
+    "Now we can take slices.  The first thing we will try is making a slice of data along the \"phi\" axis, here $\\pi/2$, which will be along the y axis in the positive direction.  We use the `.slice` attribute, which creates a slice, and then we convert this into a plot window.  Note that here 2 is used to indicate the third axis (0-indexed) which for spherical data is $\\phi$.\n",
+    "\n",
+    "This is the manual way of creating a plot -- below, we'll use the standard, automatic ways.  Note that the coordinates run from $-r$ to $r$ along the $z$ axis and from $0$ to $r$ along the $R$ axis.  We use the capital $R$ to indicate that it's the $R$ along the $x-y$ plane."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = ds.slice(2, np.pi/2)\n",
+    "p = s.to_pw(\"funfield\", origin=\"native\")\n",
+    "p.set_zlim(\"all\", 0.0, 4.0)\n",
+    "p.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also slice along $r$.  For now, this creates a regular grid with *incorrect* units for phi and theta.  We are currently exploring two other options -- a simple aitoff projection, and fixing it to use the correct units as-is."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = yt.SlicePlot(ds, \"r\", \"funfield\")\n",
+    "s.set_zlim(\"all\", 0.0, 4.0)\n",
+    "s.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also slice at constant $\\theta$.  But, this is a weird thing!  We're slicing at a constant declination from the azimuth.  What this means is that when thought of in a Cartesian domain, this slice is actually a cone.  The axes have been labeled appropriately, to indicate that these are not exactly the $x$ and $y$ axes, but instead differ by a factor of $\\sin(\\theta))$."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = yt.SlicePlot(ds, \"theta\", \"funfield\")\n",
+    "s.set_zlim(\"all\", 0.0, 4.0)\n",
+    "s.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We've seen lots of the `funfield` plots, but we can also look at the Cartesian axes.  This next plot plots the Cartesian $x$, $y$ and $z$ values on a $\\theta$ slice.  Because we're not supplying an argument to the `center` parameter, yt will place it at the center of the $\\theta$ axis, which will be at $\\pi/2$, where it will be aligned with the $x-y$ plane.  The slight change in `sphz` results from the cells themselves migrating, and plotting the center of those cells."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = yt.SlicePlot(ds, \"theta\", [\"sphx\", \"sphy\", \"sphz\"])\n",
+    "s.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can do the same with the $\\phi$ axis."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "s = yt.SlicePlot(ds, \"phi\", [\"sphx\", \"sphy\", \"sphz\"])\n",
+    "s.show()"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:88ed88ce8d8f4a359052f287aea17a7cbed435ff960e195097b440191ce6c2ab"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "# Loading Spherical Data\n",
-      "\n",
-      "With version 3.0 of yt, it has gained the ability to load data from non-Cartesian systems.  This support is still being extended, but here is an example of how to load spherical data from a regularly-spaced grid.  For irregularly spaced grids, a similar setup can be used, but the `load_hexahedral_mesh` method will have to be used instead.\n",
-      "\n",
-      "Note that in yt, \"spherical\" means that it is ordered $r$, $\\theta$, $\\phi$, where $\\theta$ is the declination from the azimuth (running from $0$ to $\\pi$ and $\\phi$ is the angle around the zenith (running from $0$ to $2\\pi$).\n",
-      "\n",
-      "We first start out by loading yt."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import numpy as np\n",
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, we create a few derived fields.  The first three are just straight translations of the Cartesian coordinates, so that we can see where we are located in the data, and understand what we're seeing.  The final one is just a fun field that is some combination of the three coordinates, and will vary in all dimensions."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "@yt.derived_field(name = \"sphx\", units = \"cm\", take_log=False)\n",
-      "def sphx(field, data):\n",
-      "    return np.cos(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
-      "@yt.derived_field(name = \"sphy\", units = \"cm\", take_log=False)\n",
-      "def sphy(field, data):\n",
-      "    return np.sin(data[\"phi\"]) * np.sin(data[\"theta\"])*data[\"r\"]\n",
-      "@yt.derived_field(name = \"sphz\", units = \"cm\", take_log=False)\n",
-      "def sphz(field, data):\n",
-      "    return np.cos(data[\"theta\"])*data[\"r\"]\n",
-      "@yt.derived_field(name = \"funfield\", units=\"cm\", take_log=False)\n",
-      "def funfield(field, data):\n",
-      "    return (np.sin(data[\"phi\"])**2 + np.cos(data[\"theta\"])**2) * (1.0*data[\"r\"].uq+data[\"r\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "## Loading Data\n",
-      "\n",
-      "Now we can actually load our data.  We use the `load_uniform_grid` function here.  Normally, the first argument would be a dictionary of field data, where the keys were the field names and the values the field data arrays.  Here, we're just going to look at derived fields, so we supply an empty one.\n",
-      "\n",
-      "The next few arguments are the number of dimensions, the bounds, and we then specify the geometry as spherical."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load_uniform_grid({}, [128, 128, 128],\n",
-      "                          bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2*np.pi]]),\n",
-      "                          geometry=\"spherical\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "## Looking at Data\n",
-      "\n",
-      "Now we can take slices.  The first thing we will try is making a slice of data along the \"phi\" axis, here $\\pi/2$, which will be along the y axis in the positive direction.  We use the `.slice` attribute, which creates a slice, and then we convert this into a plot window.  Note that here 2 is used to indicate the third axis (0-indexed) which for spherical data is $\\phi$.\n",
-      "\n",
-      "This is the manual way of creating a plot -- below, we'll use the standard, automatic ways.  Note that the coordinates run from $-r$ to $r$ along the $z$ axis and from $0$ to $r$ along the $R$ axis.  We use the capital $R$ to indicate that it's the $R$ along the $x-y$ plane."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s = ds.slice(2, np.pi/2)\n",
-      "p = s.to_pw(\"funfield\", origin=\"native\")\n",
-      "p.set_zlim(\"all\", 0.0, 4.0)\n",
-      "p.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also slice along $r$.  For now, this creates a regular grid with *incorrect* units for phi and theta.  We are currently exploring two other options -- a simple aitoff projection, and fixing it to use the correct units as-is."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s = yt.SlicePlot(ds, \"r\", \"funfield\")\n",
-      "s.set_zlim(\"all\", 0.0, 4.0)\n",
-      "s.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can also slice at constant $\\theta$.  But, this is a weird thing!  We're slicing at a constant declination from the azimuth.  What this means is that when thought of in a Cartesian domain, this slice is actually a cone.  The axes have been labeled appropriately, to indicate that these are not exactly the $x$ and $y$ axes, but instead differ by a factor of $\\sin(\\theta))$."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s = yt.SlicePlot(ds, \"theta\", \"funfield\")\n",
-      "s.set_zlim(\"all\", 0.0, 4.0)\n",
-      "s.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We've seen lots of the `funfield` plots, but we can also look at the Cartesian axes.  This next plot plots the Cartesian $x$, $y$ and $z$ values on a $\\theta$ slice.  Because we're not supplying an argument to the `center` parameter, yt will place it at the center of the $\\theta$ axis, which will be at $\\pi/2$, where it will be aligned with the $x-y$ plane.  The slight change in `sphz` results from the cells themselves migrating, and plotting the center of those cells."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "s = yt.SlicePlot(ds, \"theta\", [\"sphx\", \"sphy\", \"sphz\"])\n",
-      "s.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can do the same with the $\\phi$ axis."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": true,
-     "input": [
-      "s = yt.SlicePlot(ds, \"phi\", [\"sphx\", \"sphy\", \"sphz\"])\n",
-      "s.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}


https://bitbucket.org/yt_analysis/yt/commits/2b88a36695a2/
Changeset:   2b88a36695a2
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:33:26+00:00
Summary:     Updating the FITSImageData notebook
Affected #:  1 file

diff -r ee56ec8990c6cbd184beedf936437a8620162f5a -r 2b88a36695a2b00125fa069c45ed7ae102229ad8 doc/source/visualizing/FITSImageData.ipynb
--- a/doc/source/visualizing/FITSImageData.ipynb
+++ b/doc/source/visualizing/FITSImageData.ipynb
@@ -409,8 +409,8 @@
    },
    "outputs": [],
    "source": [
-    "print fid_frb[\"density\"].header[\"time\"]\n",
-    "print fid_frb[\"temperature\"].header[\"scale\"]"
+    "print (fid_frb[\"density\"].header[\"time\"])\n",
+    "print (fid_frb[\"temperature\"].header[\"scale\"])"
    ]
   }
  ],
@@ -430,7 +430,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.4.3"
+   "version": "3.5.1"
   }
  },
  "nbformat": 4,


https://bitbucket.org/yt_analysis/yt/commits/5257e3b5721e/
Changeset:   5257e3b5721e
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:33:40+00:00
Summary:     Update the TransferFunctionHelper_Tutorial notebook
Affected #:  1 file

diff -r 2b88a36695a2b00125fa069c45ed7ae102229ad8 -r 5257e3b5721ef3405c20da9df7ac1118577368b7 doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
--- a/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
+++ b/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
@@ -1,178 +1,195 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions.  TransferFunctionHelper is a utility class that makes it easy to visualize he probability density functions of yt fields that you might want to volume render.  This makes it easier to choose a nice transfer function that highlights interesting physical regimes.\n",
+    "\n",
+    "First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook.  Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np\n",
+    "from IPython.core.display import Image\n",
+    "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
+    "from yt.visualization.volume_rendering.render_source import VolumeSource\n",
+    "from yt.visualization.volume_rendering.camera import Camera\n",
+    "\n",
+    "def showme(im):\n",
+    "    # screen out NaNs\n",
+    "    im[im != im] = 0.0\n",
+    "    \n",
+    "    # Create an RGBA bitmap to display\n",
+    "    imb = yt.write_bitmap(im, None)\n",
+    "    return Image(imb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we load up a low resolution Enzo cosmological simulation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load('Enzo_64/DD0043/data0043')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh = TransferFunctionHelper(ds)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`TransferFunctionHelpler` will intelligently choose transfer function bounds based on the data values.  Use the `plot()` method to take a look at the transfer function."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Build a transfer function that is a multivariate gaussian in temperature\n",
+    "tfh = TransferFunctionHelper(ds)\n",
+    "tfh.set_field('temperature')\n",
+    "tfh.set_log(True)\n",
+    "tfh.set_bounds()\n",
+    "tfh.build_transfer_function()\n",
+    "tfh.tf.add_layers(5)\n",
+    "tfh.plot()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's also look at the probability density function of the `cell_mass` field as a function of `temperature`.  This might give us an idea where there is a lot of structure. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh.plot(profile_field='cell_mass')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "It looks like most of the gas is hot but there is still a lot of low-density cool gas.  Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "tfh = TransferFunctionHelper(ds)\n",
+    "tfh.set_field('temperature')\n",
+    "tfh.set_bounds()\n",
+    "tfh.set_log(True)\n",
+    "tfh.build_transfer_function()\n",
+    "tfh.tf.add_layers(8, w=0.01, mi=4.0, ma=8.0, col_bounds=[4.,8.], alpha=np.logspace(-1,2,7), colormap='RdBu_r')\n",
+    "tfh.tf.map_to_colormap(6.0, 8.0, colormap='Reds', scale=10.0)\n",
+    "tfh.tf.map_to_colormap(-1.0, 6.0, colormap='Blues_r', scale=1.)\n",
+    "\n",
+    "tfh.plot(profile_field='cell_mass')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, let's take a look at the volume rendering. First use the helper function to create a default rendering, then we override this with the transfer function we just created."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "im, sc = yt.volume_render(ds, ['temperature'])\n",
+    "\n",
+    "source = sc.get_source(0)\n",
+    "source.set_transfer_function(tfh.tf)\n",
+    "im2 = sc.render()\n",
+    "\n",
+    "showme(im2[:,:,:3])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids."
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:ed09405c56bab51abd351d107a4354726709d289b965f274106f4451b387f5ba"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions.  TransferFunctionHelper is a utility class that makes it easy to visualize he probability density functions of yt fields that you might want to volume render.  This makes it easier to choose a nice transfer function that highlights interesting physical regimes.\n",
-      "\n",
-      "First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook.  Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np\n",
-      "from IPython.core.display import Image\n",
-      "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
-      "from yt.visualization.volume_rendering.render_source import VolumeSource\n",
-      "from yt.visualization.volume_rendering.camera import Camera\n",
-      "\n",
-      "def showme(im):\n",
-      "    # screen out NaNs\n",
-      "    im[im != im] = 0.0\n",
-      "    \n",
-      "    # Create an RGBA bitmap to display\n",
-      "    imb = yt.write_bitmap(im, None)\n",
-      "    return Image(imb)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we load up a low resolution Enzo cosmological simulation."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load('Enzo_64/DD0043/data0043')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh = TransferFunctionHelper(ds)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`TransferFunctionHelpler` will intelligently choose transfer function bounds based on the data values.  Use the `plot()` method to take a look at the transfer function."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Build a transfer function that is a multivariate gaussian in temperature\n",
-      "tfh = TransferFunctionHelper(ds)\n",
-      "tfh.set_field('temperature')\n",
-      "tfh.set_log(True)\n",
-      "tfh.set_bounds()\n",
-      "tfh.build_transfer_function()\n",
-      "tfh.tf.add_layers(5)\n",
-      "tfh.plot()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Let's also look at the probability density function of the `cell_mass` field as a function of `temperature`.  This might give us an idea where there is a lot of structure. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh.plot(profile_field='cell_mass')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "It looks like most of the gas is hot but there is still a lot of low-density cool gas.  Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "tfh = TransferFunctionHelper(ds)\n",
-      "tfh.set_field('temperature')\n",
-      "tfh.set_bounds()\n",
-      "tfh.set_log(True)\n",
-      "tfh.build_transfer_function()\n",
-      "tfh.tf.add_layers(8, w=0.01, mi=4.0, ma=8.0, col_bounds=[4.,8.], alpha=np.logspace(-1,2,7), colormap='RdBu_r')\n",
-      "tfh.tf.map_to_colormap(6.0, 8.0, colormap='Reds', scale=10.0)\n",
-      "tfh.tf.map_to_colormap(-1.0, 6.0, colormap='Blues_r', scale=1.)\n",
-      "\n",
-      "tfh.plot(profile_field='cell_mass')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, let's take a look at the volume rendering. First use the helper function to create a default rendering, then we override this with the transfer function we just created."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "im, sc = yt.volume_render(ds, ['temperature'])\n",
-      "\n",
-      "source = sc.get_source(0)\n",
-      "source.set_transfer_function(tfh.tf)\n",
-      "im2 = sc.render()\n",
-      "\n",
-      "showme(im2[:,:,:3])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids."
-     ]
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
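
Condensed into a standalone script, the TransferFunctionHelper workflow from
this notebook looks roughly like the sketch below.  It assumes the Enzo_64
sample dataset from http://yt-project.org/data/ has been unpacked in the
working directory, and writes the plot to a file instead of showing it
inline:

    import yt
    from yt.visualization.volume_rendering.transfer_function_helper import \
        TransferFunctionHelper

    ds = yt.load('Enzo_64/DD0043/data0043')

    tfh = TransferFunctionHelper(ds)
    tfh.set_field('temperature')       # field the transfer function maps
    tfh.set_log(True)                  # sample the field in log space
    tfh.set_bounds()                   # infer bounds from the data extrema
    tfh.build_transfer_function()
    tfh.tf.add_layers(5)               # five evenly spaced Gaussian layers
    tfh.plot('transfer_function.png')  # save the figure to disk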


https://bitbucket.org/yt_analysis/yt/commits/e6c3fc3e21d6/
Changeset:   e6c3fc3e21d6
Branch:      yt
User:        ngoldbaum
Date:        2016-01-12 22:33:56+00:00
Summary:     Updating the Volume_Rendering_Tutorial notebook
Affected #:  1 file

diff -r 5257e3b5721ef3405c20da9df7ac1118577368b7 -r e6c3fc3e21d6838f8bbfdbbf81a6233ad9cbf294 doc/source/visualizing/Volume_Rendering_Tutorial.ipynb
--- a/doc/source/visualizing/Volume_Rendering_Tutorial.ipynb
+++ b/doc/source/visualizing/Volume_Rendering_Tutorial.ipynb
@@ -1,272 +1,294 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. To begin, we load up a dataset and use the yt.create_scene method to set up a basic Scene. We store the Scene in a variable called 'sc' and render the default ('gas', 'density') field."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "import numpy as np\n",
+    "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
+    "from yt.visualization.volume_rendering.api import Scene, Camera, VolumeSource\n",
+    "\n",
+    "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
+    "sc = yt.create_scene(ds)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now we can look at some information about the Scene we just created using the python print keyword:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (sc)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (sc.get_source(0))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can see that the yt.create_source has created a VolumeSource with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling sc.show(). "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.camera.zoom(3.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (sc)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call sc.show() here - we can just have Ipython evaluate the Scene and that will display it automatically."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.render()\n",
+    "sc"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "That's better! The image looks a little washed-out though, so we use the sigma_clip argument to sc.show() to improve the contrast:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.show(sigma_clip=4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the \"gist_rainbow\" colormap, and then re-create the image as follows:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Set up a custom transfer function using the TransferFunctionHelper. \n",
+    "# We use 10 Gaussians evenly spaced logarithmically between the min and max\n",
+    "# field values.\n",
+    "tfh = TransferFunctionHelper(ds)\n",
+    "tfh.set_field('density')\n",
+    "tfh.set_log(True)\n",
+    "tfh.set_bounds()\n",
+    "tfh.build_transfer_function()\n",
+    "tfh.tf.add_layers(10, colormap='gist_rainbow')\n",
+    "\n",
+    "# Grab the first render source and set it to use the new transfer function\n",
+    "render_source = sc.get_source(0)\n",
+    "render_source.transfer_function = tfh.tf\n",
+    "\n",
+    "sc.render()\n",
+    "sc.show(sigma_clip=4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cam = Camera(ds, lens_type='perspective')\n",
+    "\n",
+    "# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle\n",
+    "# specified by camera width) along the positive x direction.\n",
+    "cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')\n",
+    "\n",
+    "normal_vector = [1., 0., 0.]\n",
+    "north_vector = [0., 0., 1.]\n",
+    "cam.switch_orientation(normal_vector=normal_vector,\n",
+    "                       north_vector=north_vector)\n",
+    "\n",
+    "# The width determines the opening angle\n",
+    "cam.set_width(ds.domain_width * 0.5)\n",
+    "\n",
+    "sc.camera = cam\n",
+    "print (sc.camera)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The resulting image looks like:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sc.render()\n",
+    "sc.show(sigma_clip=4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source  that shows the axes of the simulation coordinate system."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# set the lens type back to plane-parallel\n",
+    "sc.camera.set_lens('plane-parallel')\n",
+    "\n",
+    "# move the camera to the left edge of the domain\n",
+    "sc.camera.set_position(ds.domain_left_edge)\n",
+    "sc.camera.switch_orientation()\n",
+    "\n",
+    "# reset the transfer function to the default\n",
+    "render_source = sc.get_source(0)\n",
+    "render_source.build_default_transfer_function()\n",
+    "\n",
+    "# add an opaque source to the scene\n",
+    "sc.annotate_axes()\n",
+    "\n",
+    "sc.render()\n",
+    "sc.show(sigma_clip=4.0)"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:16b0b0137841594b135cb8e07f6029e0cd646630816929435e584cbbf10555b4"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. To begin, we load up a dataset and use the yt.create_scene method to set up a basic Scene. We store the Scene in a variable called 'sc' and render the default ('gas', 'density') field."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt\n",
-      "import numpy as np\n",
-      "from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
-      "from yt.visualization.volume_rendering.api import Scene, Camera, VolumeSource\n",
-      "\n",
-      "ds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n",
-      "sc = yt.create_scene(ds)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now we can look at some information about the Scene we just created using the python print keyword:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sc"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sc.get_source(0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We can see that the yt.create_source has created a VolumeSource with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling sc.show(). "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sc.camera.zoom(3.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print sc"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call sc.show() here - we can just have Ipython evaluate the Scene and that will display it automatically."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sc.render()\n",
-      "sc"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "That's better! The image looks a little washed-out though, so we use the sigma_clip argument to sc.show() to improve the contrast:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sc.show(sigma_clip=4.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the \"gist_rainbow\" colormap, and then re-create the image as follows:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Set up a custom transfer function using the TransferFunctionHelper. \n",
-      "# We use 10 Gaussians evenly spaced logarithmically between the min and max\n",
-      "# field values.\n",
-      "tfh = TransferFunctionHelper(ds)\n",
-      "tfh.set_field('density')\n",
-      "tfh.set_log(True)\n",
-      "tfh.set_bounds()\n",
-      "tfh.build_transfer_function()\n",
-      "tfh.tf.add_layers(10, colormap='gist_rainbow')\n",
-      "\n",
-      "# Grab the first render source and set it to use the new transfer function\n",
-      "render_source = sc.get_source(0)\n",
-      "render_source.transfer_function = tfh.tf\n",
-      "\n",
-      "sc.render()\n",
-      "sc.show(sigma_clip=4.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cam = Camera(ds, lens_type='perspective')\n",
-      "\n",
-      "# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle\n",
-      "# specified by camera width) along the positive x direction.\n",
-      "cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')\n",
-      "\n",
-      "normal_vector = [1., 0., 0.]\n",
-      "north_vector = [0., 0., 1.]\n",
-      "cam.switch_orientation(normal_vector=normal_vector,\n",
-      "                       north_vector=north_vector)\n",
-      "\n",
-      "# The width determines the opening angle\n",
-      "cam.set_width(ds.domain_width * 0.5)\n",
-      "\n",
-      "sc.camera = cam\n",
-      "print sc.camera"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The resulting image looks like:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sc.render()\n",
-      "sc.show(sigma_clip=4.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source  that shows the axes of the simulation coordinate system."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# set the lens type back to plane-parallel\n",
-      "sc.camera.set_lens('plane-parallel')\n",
-      "\n",
-      "# move the camera to the left edge of the domain\n",
-      "sc.camera.set_position(ds.domain_left_edge)\n",
-      "sc.camera.switch_orientation()\n",
-      "\n",
-      "# reset the transfer function to the default\n",
-      "render_source = sc.get_source(0)\n",
-      "render_source.build_default_transfer_function()\n",
-      "\n",
-      "# add an opaque source to the scene\n",
-      "sc.annotate_axes()\n",
-      "\n",
-      "sc.render()\n",
-      "sc.show(sigma_clip=4.0)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
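
The same Scene workflow works outside the notebook as a plain script.  A
minimal sketch, assuming the IsolatedGalaxy sample dataset is unpacked
locally and using a placeholder output filename:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds)   # default VolumeSource, Camera, and Lens

    sc.camera.zoom(3.0)        # narrow the field of view by a factor of 3
    sc.render()                # re-render after modifying the camera
    sc.save("rendering.png")   # write the image (show() is notebook-only)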


https://bitbucket.org/yt_analysis/yt/commits/3ae4d4671768/
Changeset:   3ae4d4671768
Branch:      yt
User:        ngoldbaum
Date:        2016-01-14 23:21:30+00:00
Summary:     Updating the docs build to use the RunNotebook module
Affected #:  2 files

diff -r e6c3fc3e21d6838f8bbfdbbf81a6233ad9cbf294 -r 3ae4d467176843f8952a68521629e91b8b37081f doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -38,11 +38,11 @@
     extensions.append('pythonscript_sphinxext')
 
 try:
-    import runipy
-    import IPython.nbconvert.utils.pandoc
+    import RunNotebook
+    import nbconvert
     if not on_rtd:
-        extensions.append('notebook_sphinxext')
-        extensions.append('notebookcell_sphinxext')
+        extensions.append('RunNotebook.notebook_sphinxext')
+        extensions.append('RunNotebook.notebookcell_sphinxext')
 except ImportError:
     pass
 

diff -r e6c3fc3e21d6838f8bbfdbbf81a6233ad9cbf294 -r 3ae4d467176843f8952a68521629e91b8b37081f doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -154,22 +154,17 @@
 recipes, notebooks, and inline code snippets into python scripts, IPython_
 notebooks, or notebook cells that are executed when the docs are built.
 
-To do this, we use IPython's nbconvert module to transform notebooks into
+To do this, we use Jupyter's nbconvert module to transform notebooks into
 HTML. To simplify versioning of the notebook JSON format, we store notebooks in
-an unevaluated state.  To generate evaluated notebooks, which could include
-arbitrary output (text, images, HTML), we make use of runipy_, which provides
-facilities to script notebook evaluation.
+an unevaluated state.
 
-.. _runipy: https://github.com/paulgb/runipy
-.. _IPython: http://ipython.org/
-
-To build the full documentation, you will need yt, IPython, runipy, and all 
-supplementary yt analysis modules installed. The following dependencies were 
+To build the full documentation, you will need yt, jupyter, and all dependencies
+needed for yt's analysis modules installed. The following dependencies were 
 used to generate the yt documentation during the release of yt 3.2 in 2015.
 
 * Sphinx_ 1.3.1
-* IPython_ 2.4.1
-* runipy_ 0.1.3
+* Jupyter 1.0.0
+* RunNotebook 0.1
 * pandoc_ 1.13.2
 * Rockstar halo finder 0.99.6
 * SZpack_ 1.1.1
@@ -200,7 +195,7 @@
    make html
 
 If all of the dependencies are installed and all of the test data is in the
-testing directory, this should churn away for a while (~ 1 hour) and 
+testing directory, this should churn away for a while (several hours) and 
 eventually generate a docs build.  We suggest setting 
 :code:`suppressStreamLogging = True` in your yt configuration (See 
 :ref:`configuration-file`) to suppress large amounts of debug output from
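
RunNotebook itself lives outside this repository; under the hood, evaluating
a stored, unevaluated notebook with Jupyter's tooling presumably looks close
to the following sketch (example.ipynb is a placeholder filename):

    import nbformat
    from nbconvert.preprocessors import ExecutePreprocessor

    nb = nbformat.read("example.ipynb", as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
    ep.preprocess(nb, {"metadata": {"path": "."}})   # runs every code cell
    nbformat.write(nb, "example_evaluated.ipynb")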


https://bitbucket.org/yt_analysis/yt/commits/45da94413605/
Changeset:   45da94413605
Branch:      yt
User:        ngoldbaum
Date:        2016-01-14 23:23:40+00:00
Summary:     Removing the notebook extensions from the repo
Affected #:  2 files

diff -r 3ae4d467176843f8952a68521629e91b8b37081f -r 45da944136058bb47888d09ce85d21edab66b254 doc/extensions/notebook_sphinxext.py
--- a/doc/extensions/notebook_sphinxext.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import errno
-import os
-import shutil
-import string
-import re
-import tempfile
-import uuid
-from sphinx.util.compat import Directive
-from docutils import nodes
-from docutils.parsers.rst import directives
-from IPython.config import Config
-from IPython.nbconvert import html, python
-from IPython.nbformat import current as nbformat
-from runipy.notebook_runner import NotebookRunner, NotebookError
-
-class NotebookDirective(Directive):
-    """Insert an evaluated notebook into a document
-
-    This uses runipy and nbconvert to transform a path to an unevaluated notebook
-    into html suitable for embedding in a Sphinx document.
-    """
-    required_arguments = 1
-    optional_arguments = 1
-    option_spec = {'skip_exceptions': directives.flag}
-    final_argument_whitespace = True
-
-    def run(self): # check if there are spaces in the notebook name
-        nb_path = self.arguments[0]
-        if ' ' in nb_path: raise ValueError(
-            "Due to issues with docutils stripping spaces from links, white "
-            "space is not allowed in notebook filenames '{0}'".format(nb_path))
-        # check if raw html is supported
-        if not self.state.document.settings.raw_enabled:
-            raise self.warning('"%s" directive disabled.' % self.name)
-
-        cwd = os.getcwd()
-        tmpdir = tempfile.mkdtemp()
-        os.chdir(tmpdir)
-
-        # get path to notebook
-        nb_filename = self.arguments[0]
-        nb_basename = os.path.basename(nb_filename)
-        rst_file = self.state_machine.document.attributes['source']
-        rst_dir = os.path.abspath(os.path.dirname(rst_file))
-        nb_abs_path = os.path.abspath(os.path.join(rst_dir, nb_filename))
-
-        # Move files around.
-        rel_dir = os.path.relpath(rst_dir, setup.confdir)
-        dest_dir = os.path.join(setup.app.builder.outdir, rel_dir)
-        dest_path = os.path.join(dest_dir, nb_basename)
-
-        image_dir, image_rel_dir = make_image_dir(setup, rst_dir)
-
-        # Ensure destination build directory exists
-        thread_safe_mkdir(os.path.dirname(dest_path))
-
-        # Copy unevaluated notebook
-        shutil.copyfile(nb_abs_path, dest_path)
-
-        # Construct paths to versions getting copied over
-        dest_path_eval = string.replace(dest_path, '.ipynb', '_evaluated.ipynb')
-        dest_path_script = string.replace(dest_path, '.ipynb', '.py')
-        rel_path_eval = string.replace(nb_basename, '.ipynb', '_evaluated.ipynb')
-        rel_path_script = string.replace(nb_basename, '.ipynb', '.py')
-
-        # Create python script version
-        script_text = nb_to_python(nb_abs_path)
-        f = open(dest_path_script, 'w')
-        f.write(script_text.encode('utf8'))
-        f.close()
-
-        skip_exceptions = 'skip_exceptions' in self.options
-
-        ret = evaluate_notebook(
-            nb_abs_path, dest_path_eval, skip_exceptions=skip_exceptions)
-
-        try:
-            evaluated_text, resources = ret
-            evaluated_text = write_notebook_output(
-                resources, image_dir, image_rel_dir, evaluated_text)
-        except ValueError:
-            # This happens when a notebook raises an unhandled exception
-            evaluated_text = ret
-
-        # Create link to notebook and script files
-        link_rst = "(" + \
-                   formatted_link(nb_basename) + "; " + \
-                   formatted_link(rel_path_eval) + "; " + \
-                   formatted_link(rel_path_script) + \
-                   ")"
-
-        self.state_machine.insert_input([link_rst], rst_file)
-
-        # create notebook node
-        attributes = {'format': 'html', 'source': 'nb_path'}
-        nb_node = notebook_node('', evaluated_text, **attributes)
-        (nb_node.source, nb_node.line) = \
-            self.state_machine.get_source_and_line(self.lineno)
-
-        # add dependency
-        self.state.document.settings.record_dependencies.add(nb_abs_path)
-
-        # clean up
-        os.chdir(cwd)
-        shutil.rmtree(tmpdir, True)
-
-        return [nb_node]
-
-
-class notebook_node(nodes.raw):
-    pass
-
-def nb_to_python(nb_path):
-    """convert notebook to python script"""
-    exporter = python.PythonExporter()
-    output, resources = exporter.from_filename(nb_path)
-    return output
-
-def nb_to_html(nb_path):
-    """convert notebook to html"""
-    c = Config({'ExtractOutputPreprocessor':{'enabled':True}})
-
-    exporter = html.HTMLExporter(template_file='full', config=c)
-    notebook = nbformat.read(open(nb_path), 'json')
-    output, resources = exporter.from_notebook_node(notebook)
-    header = output.split('<head>', 1)[1].split('</head>',1)[0]
-    body = output.split('<body>', 1)[1].split('</body>',1)[0]
-
-    # http://imgur.com/eR9bMRH
-    header = header.replace('<style', '<style scoped="scoped"')
-    header = header.replace('body {\n  overflow: visible;\n  padding: 8px;\n}\n',
-                            '')
-    header = header.replace("code,pre{", "code{")
-
-    # Filter out styles that conflict with the sphinx theme.
-    filter_strings = [
-        'navbar',
-        'body{',
-        'alert{',
-        'uneditable-input{',
-        'collapse{',
-    ]
-
-    filter_strings.extend(['h%s{' % (i+1) for i in range(6)])
-
-    line_begin = [
-        'pre{',
-        'p{margin'
-    ]
-
-    filterfunc = lambda x: not any([s in x for s in filter_strings])
-    header_lines = filter(filterfunc, header.split('\n'))
-
-    filterfunc = lambda x: not any([x.startswith(s) for s in line_begin])
-    header_lines = filter(filterfunc, header_lines)
-
-    header = '\n'.join(header_lines)
-
-    # concatenate raw html lines
-    lines = ['<div class="ipynotebook">']
-    lines.append(header)
-    lines.append(body)
-    lines.append('</div>')
-    return '\n'.join(lines), resources
-
-def evaluate_notebook(nb_path, dest_path=None, skip_exceptions=False):
-    # Create evaluated version and save it to the dest path.
-    notebook = nbformat.read(open(nb_path), 'json')
-    nb_runner = NotebookRunner(notebook, pylab=False)
-    try:
-        nb_runner.run_notebook(skip_exceptions=skip_exceptions)
-    except NotebookError as e:
-        print('')
-        print(e)
-        # Return the traceback, filtering out ANSI color codes.
-        # http://stackoverflow.com/questions/13506033/filtering-out-ansi-escape-sequences
-        return "Notebook conversion failed with the " \
-               "following traceback: \n%s" % \
-            re.sub(r'\\033[\[\]]([0-9]{1,2}([;@][0-9]{0,2})*)*[mKP]?', '',
-                   str(e))
-
-    if dest_path is None:
-        dest_path = 'temp_evaluated.ipynb'
-    nbformat.write(nb_runner.nb, open(dest_path, 'w'), 'json')
-    ret = nb_to_html(dest_path)
-    if dest_path == 'temp_evaluated.ipynb':
-        os.remove(dest_path)
-    return ret
-
-def formatted_link(path):
-    return "`%s <%s>`__" % (os.path.basename(path), path)
-
-def visit_notebook_node(self, node):
-    self.visit_raw(node)
-
-def depart_notebook_node(self, node):
-    self.depart_raw(node)
-
-def setup(app):
-    setup.app = app
-    setup.config = app.config
-    setup.confdir = app.confdir
-
-    app.add_node(notebook_node,
-                 html=(visit_notebook_node, depart_notebook_node))
-
-    app.add_directive('notebook', NotebookDirective)
-
-    retdict = dict(
-        version='0.1',
-        parallel_read_safe=True,
-        parallel_write_safe=True
-    )
-
-    return retdict
-
-def make_image_dir(setup, rst_dir):
-    image_dir = setup.app.builder.outdir + os.path.sep + '_images'
-    rel_dir = os.path.relpath(setup.confdir, rst_dir)
-    image_rel_dir = rel_dir + os.path.sep + '_images'
-    thread_safe_mkdir(image_dir)
-    return image_dir, image_rel_dir
-
-def write_notebook_output(resources, image_dir, image_rel_dir, evaluated_text):
-    my_uuid = uuid.uuid4().hex
-
-    for output in resources['outputs']:
-        new_name = image_dir + os.path.sep + my_uuid + output
-        new_relative_name = image_rel_dir + os.path.sep + my_uuid + output
-        evaluated_text = evaluated_text.replace(output, new_relative_name)
-        with open(new_name, 'wb') as f:
-            f.write(resources['outputs'][output])
-    return evaluated_text
-
-def thread_safe_mkdir(dirname):
-    try:
-        os.makedirs(dirname)
-    except OSError as e:
-        if e.errno != errno.EEXIST:
-            raise
-        pass

diff -r 3ae4d467176843f8952a68521629e91b8b37081f -r 45da944136058bb47888d09ce85d21edab66b254 doc/extensions/notebookcell_sphinxext.py
--- a/doc/extensions/notebookcell_sphinxext.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import os
-import shutil
-import io
-import tempfile
-from sphinx.util.compat import Directive
-from docutils.parsers.rst import directives
-from IPython.nbformat import current
-from notebook_sphinxext import \
-    notebook_node, visit_notebook_node, depart_notebook_node, \
-    evaluate_notebook, make_image_dir, write_notebook_output
-
-
-class NotebookCellDirective(Directive):
-    """Insert an evaluated notebook cell into a document
-
-    This uses runipy and nbconvert to transform an inline python
-    script into html suitable for embedding in a Sphinx document.
-    """
-    required_arguments = 0
-    optional_arguments = 1
-    has_content = True
-    option_spec = {'skip_exceptions': directives.flag}
-
-    def run(self):
-        # check if raw html is supported
-        if not self.state.document.settings.raw_enabled:
-            raise self.warning('"%s" directive disabled.' % self.name)
-
-        cwd = os.getcwd()
-        tmpdir = tempfile.mkdtemp()
-        os.chdir(tmpdir)
-
-        rst_file = self.state_machine.document.attributes['source']
-        rst_dir = os.path.abspath(os.path.dirname(rst_file))
-
-        image_dir, image_rel_dir = make_image_dir(setup, rst_dir)
-
-        # Construct notebook from cell content
-        content = "\n".join(self.content)
-        with open("temp.py", "w") as f:
-            f.write(content)
-
-        convert_to_ipynb('temp.py', 'temp.ipynb')
-
-        skip_exceptions = 'skip_exceptions' in self.options
-
-        evaluated_text, resources = evaluate_notebook(
-            'temp.ipynb', skip_exceptions=skip_exceptions)
-
-        evaluated_text = write_notebook_output(
-            resources, image_dir, image_rel_dir, evaluated_text)
-
-        # create notebook node
-        attributes = {'format': 'html', 'source': 'nb_path'}
-        nb_node = notebook_node('', evaluated_text, **attributes)
-        (nb_node.source, nb_node.line) = \
-            self.state_machine.get_source_and_line(self.lineno)
-
-        # clean up
-        os.chdir(cwd)
-        shutil.rmtree(tmpdir, True)
-
-        return [nb_node]
-
-def setup(app):
-    setup.app = app
-    setup.config = app.config
-    setup.confdir = app.confdir
-
-    app.add_node(notebook_node,
-                 html=(visit_notebook_node, depart_notebook_node))
-
-    app.add_directive('notebook-cell', NotebookCellDirective)
-
-    retdict = dict(
-        version='0.1',
-        parallel_read_safe=True,
-        parallel_write_safe=True
-    )
-
-    return retdict
-
-def convert_to_ipynb(py_file, ipynb_file):
-    with io.open(py_file, 'r', encoding='utf-8') as f:
-        notebook = current.reads(f.read(), format='py')
-    with io.open(ipynb_file, 'w', encoding='utf-8') as f:
-        current.write(notebook, f, format='ipynb')
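
Both removed extensions follow the standard Sphinx pattern: subclass
Directive, build docutils nodes in run(), and register everything in
setup().  Stripped to its skeleton, with illustrative names, the pattern is:

    from docutils import nodes
    from docutils.parsers.rst import Directive  # modern home of Directive

    class HelloDirective(Directive):
        """Insert a fixed paragraph wherever `.. hello::` appears."""
        has_content = False

        def run(self):
            return [nodes.paragraph(text="Hello from a custom directive.")]

    def setup(app):
        app.add_directive("hello", HelloDirective)
        return dict(version="0.1",
                    parallel_read_safe=True,
                    parallel_write_safe=True)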


https://bitbucket.org/yt_analysis/yt/commits/c83bfab9bffe/
Changeset:   c83bfab9bffe
Branch:      yt
User:        ngoldbaum
Date:        2016-01-15 01:30:54+00:00
Summary:     Lose dependency on the removed extensions in the python script extension
Affected #:  1 file

diff -r 45da944136058bb47888d09ce85d21edab66b254 -r c83bfab9bffe8f2b691fad533840c19480812c17 doc/extensions/pythonscript_sphinxext.py
--- a/doc/extensions/pythonscript_sphinxext.py
+++ b/doc/extensions/pythonscript_sphinxext.py
@@ -6,7 +6,6 @@
 import uuid
 from sphinx.util.compat import Directive
 from docutils import nodes
-from notebook_sphinxext import make_image_dir
 
 
 class PythonScriptDirective(Directive):
@@ -82,3 +81,20 @@
     shutil.move(filename, image_dir + os.path.sep + my_uuid + filename)
     relative_filename = image_rel_dir + os.path.sep + my_uuid + filename
     return '<img src="%s" width="600"><br>' % relative_filename
+
+
+def make_image_dir(setup, rst_dir):
+    image_dir = setup.app.builder.outdir + os.path.sep + '_images'
+    rel_dir = os.path.relpath(setup.confdir, rst_dir)
+    image_rel_dir = rel_dir + os.path.sep + '_images'
+    thread_safe_mkdir(image_dir)
+    return image_dir, image_rel_dir
+
+
+def thread_safe_mkdir(dirname):
+    try:
+        os.makedirs(dirname)
+    except OSError as e:
+        if e.errno != errno.EEXIST:
+            raise
+        pass
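
The errno.EEXIST guard in thread_safe_mkdir tolerates the race where a
parallel build process creates the directory first.  On Python 3.2 and
later, the standard library covers the same case directly:

    import os

    def thread_safe_mkdir(dirname):
        # exist_ok=True swallows only the "already exists" case,
        # matching the errno.EEXIST check above.
        os.makedirs(dirname, exist_ok=True)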


https://bitbucket.org/yt_analysis/yt/commits/07f3159d32e4/
Changeset:   07f3159d32e4
Branch:      yt
User:        ngoldbaum
Date:        2016-01-15 04:05:13+00:00
Summary:     Add missing import
Affected #:  1 file

diff -r c83bfab9bffe8f2b691fad533840c19480812c17 -r 07f3159d32e41e9807ec3c31d02e1fd022c9cf27 doc/extensions/pythonscript_sphinxext.py
--- a/doc/extensions/pythonscript_sphinxext.py
+++ b/doc/extensions/pythonscript_sphinxext.py
@@ -4,6 +4,7 @@
 import shutil
 import subprocess
 import uuid
+import errno
 from sphinx.util.compat import Directive
 from docutils import nodes
 


https://bitbucket.org/yt_analysis/yt/commits/c348106475f5/
Changeset:   c348106475f5
Branch:      yt
User:        ngoldbaum
Date:        2016-01-15 18:37:24+00:00
Summary:     Install jupyter (and IPython) via pip
Affected #:  1 file

diff -r 07f3159d32e41e9807ec3c31d02e1fd022c9cf27 -r c348106475f588f5966beda6127f8506f80475f4 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -627,7 +627,6 @@
 FREETYPE_VER='freetype-2.4.12' 
 H5PY='h5py-2.5.0'
 HDF5='hdf5-1.8.14' 
-IPYTHON='ipython-2.4.1'
 LAPACK='lapack-3.4.2'
 PNG=libpng-1.6.3
 MATPLOTLIB='matplotlib-1.4.3'
@@ -635,13 +634,12 @@
 NOSE='nose-1.3.6'
 NUMPY='numpy-1.9.2'
 PYTHON_HGLIB='python-hglib-1.6'
-PYZMQ='pyzmq-14.5.0'
 ROCKSTAR='rockstar-0.99.6'
 SCIPY='scipy-0.15.1'
 SQLITE='sqlite-autoconf-3071700'
 SYMPY='sympy-0.7.6'
-TORNADO='tornado-4.0.2'
-ZEROMQ='zeromq-4.0.5'
+PYZMQ='pyzmq-15.2.0'
+ZEROMQ='zeromq-4.1.4'
 ZLIB='zlib-1.2.8'
 SETUPTOOLS='setuptools-18.0.1'
 
@@ -655,7 +653,6 @@
 echo '609a68a3675087e0cc95268574f31e104549daa48efe15a25a33b8e269a93b4bd160f4c3e8178dca9c950ef5ca514b039d6fd1b45db6af57f25342464d0429ce  freetype-2.4.12.tar.gz' > freetype-2.4.12.tar.gz.sha512
 echo '4a83f9ae1855a7fad90133b327d426201c8ccfd2e7fbe9f39b2d61a2eee2f3ebe2ea02cf80f3d4e1ad659f8e790c173df8cc99b87d0b7ce63d34aa88cfdc7939  h5py-2.5.0.tar.gz' > h5py-2.5.0.tar.gz.sha512
 echo '4073fba510ccadaba41db0939f909613c9cb52ba8fb6c1062fc9118edc601394c75e102310be1af4077d07c9b327e6bbb1a6359939a7268dc140382d0c1e0199  hdf5-1.8.14.tar.gz' > hdf5-1.8.14.tar.gz.sha512
-echo 'a9cffc08ba10c47b0371b05664e55eee0562a30ef0d4bbafae79e52e5b9727906c45840c0918122c06c5672ac65e6eb381399f103e1a836aca003eda81b2acde  ipython-2.4.1.tar.gz' > ipython-2.4.1.tar.gz.sha512
 echo '8770214491e31f0a7a3efaade90eee7b0eb20a8a6ab635c5f854d78263f59a1849133c14ef5123d01023f0110cbb9fc6f818da053c01277914ae81473430a952  lapack-3.4.2.tar.gz' > lapack-3.4.2.tar.gz.sha512
 echo '887582e5a22e4cde338aa8fec7a89f6dd31f2f02b8842735f00f970f64582333fa03401cea6d01704083403c7e8b7ebc26655468ce930165673b33efa4bcd586  libpng-1.6.3.tar.gz' > libpng-1.6.3.tar.gz.sha512
 echo '51b0f58b2618b47b653e17e4f6b6a1215d3a3b0f1331ce3555cc7435e365d9c75693f289ce12fe3bf8f69fd57b663e545f0f1c2c94e81eaa661cac0689e125f5  matplotlib-1.4.3.tar.gz' > matplotlib-1.4.3.tar.gz.sha512
@@ -663,12 +660,11 @@
 echo 'd0cede08dc33a8ac0af0f18063e57f31b615f06e911edb5ca264575174d8f4adb4338448968c403811d9dcc60f38ade3164662d6c7b69b499f56f0984bb6283c  nose-1.3.6.tar.gz' > nose-1.3.6.tar.gz.sha512
 echo '70470ebb9afef5dfd0c83ceb7a9d5f1b7a072b1a9b54b04f04f5ed50fbaedd5b4906bd500472268d478f94df9e749a88698b1ff30f2d80258e7f3fec040617d9  numpy-1.9.2.tar.gz' > numpy-1.9.2.tar.gz.sha512
 echo 'bfd10455e74e30df568c4c4827140fb6cc29893b0e062ce1764bd52852ec7487a70a0f5ea53c3fca7886f5d36365c9f4db52b8c93cad35fb67beeb44a2d56f2d  python-hglib-1.6.tar.gz' > python-hglib-1.6.tar.gz.sha512
-echo '20164f7b05c308e0f089c07fc46b1c522094f3ac136f2e0bba84f19cb63dfd36152a2465df723dd4d93c6fbd2de4f0d94c160e2bbc353a92cfd680eb03cbdc87  pyzmq-14.5.0.tar.gz' > pyzmq-14.5.0.tar.gz.sha512
+echo '28541b095b5486b662fe33a24994af5a465989a2391091ec8b693579124fdd600c3b0721853377c7551430d55b13c9116a1eebdced74678598d78c01fa7431c7  pyzmq-15.2.0.tar.gz' > pyzmq-15.2.0.tar.gz.sha512
 echo 'fff4412d850c431a1b4e6ee3b17958ee5ab3beb81e6cb8a8e7d56d368751eaa8781d7c3e69d932dc002d718fddc66a72098acfe74cfe29ec80b24e6736317275  scipy-0.15.1.tar.gz' > scipy-0.15.1.tar.gz.sha512
 echo '96f3e51b46741450bc6b63779c10ebb4a7066860fe544385d64d1eda52592e376a589ef282ace2e1df73df61c10eab1a0d793abbdaf770e60289494d4bf3bcb4  sqlite-autoconf-3071700.tar.gz' > sqlite-autoconf-3071700.tar.gz.sha512
 echo 'ce0f1a17ac01eb48aec31fc0ad431d9d7ed9907f0e8584a6d79d0ffe6864fe62e203fe3f2a3c3e4e3d485809750ce07507a6488e776a388a7a9a713110882fcf  sympy-0.7.6.tar.gz' > sympy-0.7.6.tar.gz.sha512
-echo '93591068dc63af8d50a7925d528bc0cccdd705232c529b6162619fe28dddaf115e8a460b1842877d35160bd7ed480c1bd0bdbec57d1f359085bd1814e0c1c242  tornado-4.0.2.tar.gz' > tornado-4.0.2.tar.gz.sha512
-echo '0d928ed688ed940d460fa8f8d574a9819dccc4e030d735a8c7db71b59287ee50fa741a08249e356c78356b03c2174f2f2699f05aa7dc3d380ed47d8d7bab5408  zeromq-4.0.5.tar.gz' > zeromq-4.0.5.tar.gz.sha512
+echo '8a8cf4f52ad78dddfff104bfba0f80bbc12566920906a0fafb9fc340aa92f5577c2923cb2e5346c69835cd2ea1609647a8893c2883cd22c1f0340a720511460c  zeromq-4.1.4.tar.gz' > zeromq-4.1.4.tar.gz.sha512
 echo 'ece209d4c7ec0cb58ede791444dc754e0d10811cbbdebe3df61c0fd9f9f9867c1c3ccd5f1827f847c005e24eef34fb5bf87b5d3f894d75da04f1797538290e4a  zlib-1.2.8.tar.gz' > zlib-1.2.8.tar.gz.sha512
 echo '9b318ce2ee2cf787929dcb886d76c492b433e71024fda9452d8b4927652a298d6bd1bdb7a4c73883a98e100024f89b46ea8aa14b250f896e549e6dd7e10a6b41  setuptools-18.0.1.tar.gz' > setuptools-18.0.1.tar.gz.sha512
 # Individual processes
@@ -681,7 +677,6 @@
 [ $INST_PYX -eq 1 ] && get_ytproject $PYX.tar.gz
 [ $INST_0MQ -eq 1 ] && get_ytproject $ZEROMQ.tar.gz
 [ $INST_0MQ -eq 1 ] && get_ytproject $PYZMQ.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $TORNADO.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $SCIPY.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject blas.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $LAPACK.tar.gz
@@ -690,7 +685,6 @@
 get_ytproject $PYTHON2.tgz
 get_ytproject $NUMPY.tar.gz
 get_ytproject $MATPLOTLIB.tar.gz
-get_ytproject $IPYTHON.tar.gz
 get_ytproject $H5PY.tar.gz
 get_ytproject $CYTHON.tar.gz
 get_ytproject $NOSE.tar.gz
@@ -984,17 +978,18 @@
         [ ! -e $ZEROMQ ] && tar xfz $ZEROMQ.tar.gz
         echo "Installing ZeroMQ"
         cd $ZEROMQ
-        ( ./configure --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
+        ( ./configure --without-libsodium --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
         ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
         ( make clean 2>&1) 1>> ${LOG_FILE} || do_exit
         touch done
         cd ..
     fi
     do_setup_py $PYZMQ --zmq=${DEST_DIR}
-    do_setup_py $TORNADO
 fi
 
-do_setup_py $IPYTHON
+echo "Installing Jupyter"
+( ${DEST_DIR}/bin/pip install "jupyter<2.0.0" 2>&1 ) 1>> ${LOG_FILE}
+
 do_setup_py $CYTHON
 do_setup_py $H5PY
 do_setup_py $NOSE


https://bitbucket.org/yt_analysis/yt/commits/1d7a3b560891/
Changeset:   1d7a3b560891
Branch:      yt
User:        ngoldbaum
Date:        2016-01-25 03:43:54+00:00
Summary:     Removing zeromq and pyzeromq from the install script, pip manages this now
Affected #:  1 file

diff -r c348106475f588f5966beda6127f8506f80475f4 -r 1d7a3b5608910fa21d2512e3e22c2ca265506abc doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -440,10 +440,6 @@
 get_willwont ${INST_SCIPY}
 echo "be installing scipy"
 
-printf "%-15s = %s so I " "INST_0MQ" "${INST_0MQ}"
-get_willwont ${INST_0MQ}
-echo "be installing ZeroMQ"
-
 printf "%-15s = %s so I " "INST_ROCKSTAR" "${INST_ROCKSTAR}"
 get_willwont ${INST_ROCKSTAR}
 echo "be installing Rockstar"
@@ -638,8 +634,6 @@
 SCIPY='scipy-0.15.1'
 SQLITE='sqlite-autoconf-3071700'
 SYMPY='sympy-0.7.6'
-PYZMQ='pyzmq-15.2.0'
-ZEROMQ='zeromq-4.1.4'
 ZLIB='zlib-1.2.8'
 SETUPTOOLS='setuptools-18.0.1'
 
@@ -660,11 +654,9 @@
 echo 'd0cede08dc33a8ac0af0f18063e57f31b615f06e911edb5ca264575174d8f4adb4338448968c403811d9dcc60f38ade3164662d6c7b69b499f56f0984bb6283c  nose-1.3.6.tar.gz' > nose-1.3.6.tar.gz.sha512
 echo '70470ebb9afef5dfd0c83ceb7a9d5f1b7a072b1a9b54b04f04f5ed50fbaedd5b4906bd500472268d478f94df9e749a88698b1ff30f2d80258e7f3fec040617d9  numpy-1.9.2.tar.gz' > numpy-1.9.2.tar.gz.sha512
 echo 'bfd10455e74e30df568c4c4827140fb6cc29893b0e062ce1764bd52852ec7487a70a0f5ea53c3fca7886f5d36365c9f4db52b8c93cad35fb67beeb44a2d56f2d  python-hglib-1.6.tar.gz' > python-hglib-1.6.tar.gz.sha512
-echo '28541b095b5486b662fe33a24994af5a465989a2391091ec8b693579124fdd600c3b0721853377c7551430d55b13c9116a1eebdced74678598d78c01fa7431c7  pyzmq-15.2.0.tar.gz' > pyzmq-15.2.0.tar.gz.sha512
 echo 'fff4412d850c431a1b4e6ee3b17958ee5ab3beb81e6cb8a8e7d56d368751eaa8781d7c3e69d932dc002d718fddc66a72098acfe74cfe29ec80b24e6736317275  scipy-0.15.1.tar.gz' > scipy-0.15.1.tar.gz.sha512
 echo '96f3e51b46741450bc6b63779c10ebb4a7066860fe544385d64d1eda52592e376a589ef282ace2e1df73df61c10eab1a0d793abbdaf770e60289494d4bf3bcb4  sqlite-autoconf-3071700.tar.gz' > sqlite-autoconf-3071700.tar.gz.sha512
 echo 'ce0f1a17ac01eb48aec31fc0ad431d9d7ed9907f0e8584a6d79d0ffe6864fe62e203fe3f2a3c3e4e3d485809750ce07507a6488e776a388a7a9a713110882fcf  sympy-0.7.6.tar.gz' > sympy-0.7.6.tar.gz.sha512
-echo '8a8cf4f52ad78dddfff104bfba0f80bbc12566920906a0fafb9fc340aa92f5577c2923cb2e5346c69835cd2ea1609647a8893c2883cd22c1f0340a720511460c  zeromq-4.1.4.tar.gz' > zeromq-4.1.4.tar.gz.sha512
 echo 'ece209d4c7ec0cb58ede791444dc754e0d10811cbbdebe3df61c0fd9f9f9867c1c3ccd5f1827f847c005e24eef34fb5bf87b5d3f894d75da04f1797538290e4a  zlib-1.2.8.tar.gz' > zlib-1.2.8.tar.gz.sha512
 echo '9b318ce2ee2cf787929dcb886d76c492b433e71024fda9452d8b4927652a298d6bd1bdb7a4c73883a98e100024f89b46ea8aa14b250f896e549e6dd7e10a6b41  setuptools-18.0.1.tar.gz' > setuptools-18.0.1.tar.gz.sha512
 # Individual processes
@@ -675,8 +667,6 @@
 [ $INST_FTYPE -eq 1 ] && get_ytproject $FREETYPE_VER.tar.gz
 [ $INST_SQLITE3 -eq 1 ] && get_ytproject $SQLITE.tar.gz
 [ $INST_PYX -eq 1 ] && get_ytproject $PYX.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $ZEROMQ.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $PYZMQ.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $SCIPY.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject blas.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $LAPACK.tar.gz
@@ -970,23 +960,6 @@
 [ -n "${OLD_CXXFLAGS}" ] && export CXXFLAGS=${OLD_CXXFLAGS}
 [ -n "${OLD_CFLAGS}" ] && export CFLAGS=${OLD_CFLAGS}
 
-# Now we do our IPython installation, which has two optional dependencies.
-if [ $INST_0MQ -eq 1 ]
-then
-    if [ ! -e $ZEROMQ/done ]
-    then
-        [ ! -e $ZEROMQ ] && tar xfz $ZEROMQ.tar.gz
-        echo "Installing ZeroMQ"
-        cd $ZEROMQ
-        ( ./configure --without-libsodium --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
-        ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
-        ( make clean 2>&1) 1>> ${LOG_FILE} || do_exit
-        touch done
-        cd ..
-    fi
-    do_setup_py $PYZMQ --zmq=${DEST_DIR}
-fi
-
 echo "Installing Jupyter"
 ( ${DEST_DIR}/bin/pip install "jupyter<2.0.0" 2>&1 ) 1>> ${LOG_FILE}
 


https://bitbucket.org/yt_analysis/yt/commits/bc0cc00c150c/
Changeset:   bc0cc00c150c
Branch:      yt
User:        xarthisius
Date:        2016-01-28 03:26:16+00:00
Summary:     Merged in ngoldbaum/yt (pull request #1935)

Update documentation notebooks to nbformat4 and py3 compatibility
Affected #:  37 files

diff -r 481d5a937fb8e369949bae0073a7777d71cc952f -r bc0cc00c150c6e5fca1b85310935dfe9ccd905a2 doc/extensions/notebook_sphinxext.py
--- a/doc/extensions/notebook_sphinxext.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import errno
-import os
-import shutil
-import string
-import re
-import tempfile
-import uuid
-from sphinx.util.compat import Directive
-from docutils import nodes
-from docutils.parsers.rst import directives
-from IPython.config import Config
-from IPython.nbconvert import html, python
-from IPython.nbformat import current as nbformat
-from runipy.notebook_runner import NotebookRunner, NotebookError
-
-class NotebookDirective(Directive):
-    """Insert an evaluated notebook into a document
-
-    This uses runipy and nbconvert to transform a path to an unevaluated notebook
-    into html suitable for embedding in a Sphinx document.
-    """
-    required_arguments = 1
-    optional_arguments = 1
-    option_spec = {'skip_exceptions': directives.flag}
-    final_argument_whitespace = True
-
-    def run(self): # check if there are spaces in the notebook name
-        nb_path = self.arguments[0]
-        if ' ' in nb_path: raise ValueError(
-            "Due to issues with docutils stripping spaces from links, white "
-            "space is not allowed in notebook filenames '{0}'".format(nb_path))
-        # check if raw html is supported
-        if not self.state.document.settings.raw_enabled:
-            raise self.warning('"%s" directive disabled.' % self.name)
-
-        cwd = os.getcwd()
-        tmpdir = tempfile.mkdtemp()
-        os.chdir(tmpdir)
-
-        # get path to notebook
-        nb_filename = self.arguments[0]
-        nb_basename = os.path.basename(nb_filename)
-        rst_file = self.state_machine.document.attributes['source']
-        rst_dir = os.path.abspath(os.path.dirname(rst_file))
-        nb_abs_path = os.path.abspath(os.path.join(rst_dir, nb_filename))
-
-        # Move files around.
-        rel_dir = os.path.relpath(rst_dir, setup.confdir)
-        dest_dir = os.path.join(setup.app.builder.outdir, rel_dir)
-        dest_path = os.path.join(dest_dir, nb_basename)
-
-        image_dir, image_rel_dir = make_image_dir(setup, rst_dir)
-
-        # Ensure destination build directory exists
-        thread_safe_mkdir(os.path.dirname(dest_path))
-
-        # Copy unevaluated notebook
-        shutil.copyfile(nb_abs_path, dest_path)
-
-        # Construct paths to versions getting copied over
-        dest_path_eval = string.replace(dest_path, '.ipynb', '_evaluated.ipynb')
-        dest_path_script = string.replace(dest_path, '.ipynb', '.py')
-        rel_path_eval = string.replace(nb_basename, '.ipynb', '_evaluated.ipynb')
-        rel_path_script = string.replace(nb_basename, '.ipynb', '.py')
-
-        # Create python script version
-        script_text = nb_to_python(nb_abs_path)
-        f = open(dest_path_script, 'w')
-        f.write(script_text.encode('utf8'))
-        f.close()
-
-        skip_exceptions = 'skip_exceptions' in self.options
-
-        ret = evaluate_notebook(
-            nb_abs_path, dest_path_eval, skip_exceptions=skip_exceptions)
-
-        try:
-            evaluated_text, resources = ret
-            evaluated_text = write_notebook_output(
-                resources, image_dir, image_rel_dir, evaluated_text)
-        except ValueError:
-            # This happens when a notebook raises an unhandled exception
-            evaluated_text = ret
-
-        # Create link to notebook and script files
-        link_rst = "(" + \
-                   formatted_link(nb_basename) + "; " + \
-                   formatted_link(rel_path_eval) + "; " + \
-                   formatted_link(rel_path_script) + \
-                   ")"
-
-        self.state_machine.insert_input([link_rst], rst_file)
-
-        # create notebook node
-        attributes = {'format': 'html', 'source': 'nb_path'}
-        nb_node = notebook_node('', evaluated_text, **attributes)
-        (nb_node.source, nb_node.line) = \
-            self.state_machine.get_source_and_line(self.lineno)
-
-        # add dependency
-        self.state.document.settings.record_dependencies.add(nb_abs_path)
-
-        # clean up
-        os.chdir(cwd)
-        shutil.rmtree(tmpdir, True)
-
-        return [nb_node]
-
-
-class notebook_node(nodes.raw):
-    pass
-
-def nb_to_python(nb_path):
-    """convert notebook to python script"""
-    exporter = python.PythonExporter()
-    output, resources = exporter.from_filename(nb_path)
-    return output
-
-def nb_to_html(nb_path):
-    """convert notebook to html"""
-    c = Config({'ExtractOutputPreprocessor':{'enabled':True}})
-
-    exporter = html.HTMLExporter(template_file='full', config=c)
-    notebook = nbformat.read(open(nb_path), 'json')
-    output, resources = exporter.from_notebook_node(notebook)
-    header = output.split('<head>', 1)[1].split('</head>',1)[0]
-    body = output.split('<body>', 1)[1].split('</body>',1)[0]
-
-    # http://imgur.com/eR9bMRH
-    header = header.replace('<style', '<style scoped="scoped"')
-    header = header.replace('body {\n  overflow: visible;\n  padding: 8px;\n}\n',
-                            '')
-    header = header.replace("code,pre{", "code{")
-
-    # Filter out styles that conflict with the sphinx theme.
-    filter_strings = [
-        'navbar',
-        'body{',
-        'alert{',
-        'uneditable-input{',
-        'collapse{',
-    ]
-
-    filter_strings.extend(['h%s{' % (i+1) for i in range(6)])
-
-    line_begin = [
-        'pre{',
-        'p{margin'
-    ]
-
-    filterfunc = lambda x: not any([s in x for s in filter_strings])
-    header_lines = filter(filterfunc, header.split('\n'))
-
-    filterfunc = lambda x: not any([x.startswith(s) for s in line_begin])
-    header_lines = filter(filterfunc, header_lines)
-
-    header = '\n'.join(header_lines)
-
-    # concatenate raw html lines
-    lines = ['<div class="ipynotebook">']
-    lines.append(header)
-    lines.append(body)
-    lines.append('</div>')
-    return '\n'.join(lines), resources
-
-def evaluate_notebook(nb_path, dest_path=None, skip_exceptions=False):
-    # Create evaluated version and save it to the dest path.
-    notebook = nbformat.read(open(nb_path), 'json')
-    nb_runner = NotebookRunner(notebook, pylab=False)
-    try:
-        nb_runner.run_notebook(skip_exceptions=skip_exceptions)
-    except NotebookError as e:
-        print('')
-        print(e)
-        # Return the traceback, filtering out ANSI color codes.
-        # http://stackoverflow.com/questions/13506033/filtering-out-ansi-escape-sequences
-        return "Notebook conversion failed with the " \
-               "following traceback: \n%s" % \
-            re.sub(r'\\033[\[\]]([0-9]{1,2}([;@][0-9]{0,2})*)*[mKP]?', '',
-                   str(e))
-
-    if dest_path is None:
-        dest_path = 'temp_evaluated.ipynb'
-    nbformat.write(nb_runner.nb, open(dest_path, 'w'), 'json')
-    ret = nb_to_html(dest_path)
-    if dest_path == 'temp_evaluated.ipynb':
-        os.remove(dest_path)
-    return ret
-
-def formatted_link(path):
-    return "`%s <%s>`__" % (os.path.basename(path), path)
-
-def visit_notebook_node(self, node):
-    self.visit_raw(node)
-
-def depart_notebook_node(self, node):
-    self.depart_raw(node)
-
-def setup(app):
-    setup.app = app
-    setup.config = app.config
-    setup.confdir = app.confdir
-
-    app.add_node(notebook_node,
-                 html=(visit_notebook_node, depart_notebook_node))
-
-    app.add_directive('notebook', NotebookDirective)
-
-    retdict = dict(
-        version='0.1',
-        parallel_read_safe=True,
-        parallel_write_safe=True
-    )
-
-    return retdict
-
-def make_image_dir(setup, rst_dir):
-    image_dir = setup.app.builder.outdir + os.path.sep + '_images'
-    rel_dir = os.path.relpath(setup.confdir, rst_dir)
-    image_rel_dir = rel_dir + os.path.sep + '_images'
-    thread_safe_mkdir(image_dir)
-    return image_dir, image_rel_dir
-
-def write_notebook_output(resources, image_dir, image_rel_dir, evaluated_text):
-    my_uuid = uuid.uuid4().hex
-
-    for output in resources['outputs']:
-        new_name = image_dir + os.path.sep + my_uuid + output
-        new_relative_name = image_rel_dir + os.path.sep + my_uuid + output
-        evaluated_text = evaluated_text.replace(output, new_relative_name)
-        with open(new_name, 'wb') as f:
-            f.write(resources['outputs'][output])
-    return evaluated_text
-
-def thread_safe_mkdir(dirname):
-    try:
-        os.makedirs(dirname)
-    except OSError as e:
-        if e.errno != errno.EEXIST:
-            raise
-        pass
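
The deleted extension evaluated notebooks through runipy, which is
unmaintained; a rough sketch of the same evaluate_notebook step using
nbconvert's ExecutePreprocessor (an assumed replacement, not code from this
repository) looks like:

    import nbformat
    from nbconvert.preprocessors import ExecutePreprocessor

    nb = nbformat.read("PPVCube.ipynb", as_version=4)   # illustrative path
    ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
    ep.preprocess(nb, {"metadata": {"path": "."}})      # run all cells in place
    nbformat.write(nb, "PPVCube_evaluated.ipynb")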

diff -r 481d5a937fb8e369949bae0073a7777d71cc952f -r bc0cc00c150c6e5fca1b85310935dfe9ccd905a2 doc/extensions/notebookcell_sphinxext.py
--- a/doc/extensions/notebookcell_sphinxext.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import os
-import shutil
-import io
-import tempfile
-from sphinx.util.compat import Directive
-from docutils.parsers.rst import directives
-from IPython.nbformat import current
-from notebook_sphinxext import \
-    notebook_node, visit_notebook_node, depart_notebook_node, \
-    evaluate_notebook, make_image_dir, write_notebook_output
-
-
-class NotebookCellDirective(Directive):
-    """Insert an evaluated notebook cell into a document
-
-    This uses runipy and nbconvert to transform an inline python
-    script into html suitable for embedding in a Sphinx document.
-    """
-    required_arguments = 0
-    optional_arguments = 1
-    has_content = True
-    option_spec = {'skip_exceptions': directives.flag}
-
-    def run(self):
-        # check if raw html is supported
-        if not self.state.document.settings.raw_enabled:
-            raise self.warning('"%s" directive disabled.' % self.name)
-
-        cwd = os.getcwd()
-        tmpdir = tempfile.mkdtemp()
-        os.chdir(tmpdir)
-
-        rst_file = self.state_machine.document.attributes['source']
-        rst_dir = os.path.abspath(os.path.dirname(rst_file))
-
-        image_dir, image_rel_dir = make_image_dir(setup, rst_dir)
-
-        # Construct notebook from cell content
-        content = "\n".join(self.content)
-        with open("temp.py", "w") as f:
-            f.write(content)
-
-        convert_to_ipynb('temp.py', 'temp.ipynb')
-
-        skip_exceptions = 'skip_exceptions' in self.options
-
-        evaluated_text, resources = evaluate_notebook(
-            'temp.ipynb', skip_exceptions=skip_exceptions)
-
-        evaluated_text = write_notebook_output(
-            resources, image_dir, image_rel_dir, evaluated_text)
-
-        # create notebook node
-        attributes = {'format': 'html', 'source': 'nb_path'}
-        nb_node = notebook_node('', evaluated_text, **attributes)
-        (nb_node.source, nb_node.line) = \
-            self.state_machine.get_source_and_line(self.lineno)
-
-        # clean up
-        os.chdir(cwd)
-        shutil.rmtree(tmpdir, True)
-
-        return [nb_node]
-
-def setup(app):
-    setup.app = app
-    setup.config = app.config
-    setup.confdir = app.confdir
-
-    app.add_node(notebook_node,
-                 html=(visit_notebook_node, depart_notebook_node))
-
-    app.add_directive('notebook-cell', NotebookCellDirective)
-
-    retdict = dict(
-        version='0.1',
-        parallel_read_safe=True,
-        parallel_write_safe=True
-    )
-
-    return retdict
-
-def convert_to_ipynb(py_file, ipynb_file):
-    with io.open(py_file, 'r', encoding='utf-8') as f:
-        notebook = current.reads(f.read(), format='py')
-    with io.open(ipynb_file, 'w', encoding='utf-8') as f:
-        current.write(notebook, f, format='ipynb')
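
The convert_to_ipynb helper above depended on IPython.nbformat.current's
'py' reader, which no longer exists; an approximate replacement (a sketch
under that assumption, not code from this repository) builds the notebook
explicitly with nbformat's v4 constructors:

    import nbformat
    from nbformat.v4 import new_code_cell, new_notebook

    def convert_to_ipynb(py_file, ipynb_file):
        # Wrap the entire script in a single code cell of a fresh v4 notebook.
        with open(py_file, encoding="utf-8") as f:
            nb = new_notebook(cells=[new_code_cell(f.read())])
        nbformat.write(nb, ipynb_file)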

diff -r 481d5a937fb8e369949bae0073a7777d71cc952f -r bc0cc00c150c6e5fca1b85310935dfe9ccd905a2 doc/extensions/pythonscript_sphinxext.py
--- a/doc/extensions/pythonscript_sphinxext.py
+++ b/doc/extensions/pythonscript_sphinxext.py
@@ -4,9 +4,9 @@
 import shutil
 import subprocess
 import uuid
+import errno
 from sphinx.util.compat import Directive
 from docutils import nodes
-from notebook_sphinxext import make_image_dir
 
 
 class PythonScriptDirective(Directive):
@@ -82,3 +82,20 @@
     shutil.move(filename, image_dir + os.path.sep + my_uuid + filename)
     relative_filename = image_rel_dir + os.path.sep + my_uuid + filename
     return '<img src="%s" width="600"><br>' % relative_filename
+
+
+def make_image_dir(setup, rst_dir):
+    image_dir = setup.app.builder.outdir + os.path.sep + '_images'
+    rel_dir = os.path.relpath(setup.confdir, rst_dir)
+    image_rel_dir = rel_dir + os.path.sep + '_images'
+    thread_safe_mkdir(image_dir)
+    return image_dir, image_rel_dir
+
+
+def thread_safe_mkdir(dirname):
+    try:
+        os.makedirs(dirname)
+    except OSError as e:
+        if e.errno != errno.EEXIST:
+            raise
+        pass
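
The try/except idiom above also runs on Python 2; if only Python 3 mattered
(an assumption, since the install script still builds Python 2), it reduces
to a single call:

    import os

    def thread_safe_mkdir(dirname):
        # exist_ok=True tolerates a concurrent or prior create, matching
        # the EEXIST handling above (Python >= 3.2).
        os.makedirs(dirname, exist_ok=True)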

diff -r 481d5a937fb8e369949bae0073a7777d71cc952f -r bc0cc00c150c6e5fca1b85310935dfe9ccd905a2 doc/install_script.sh
--- a/doc/install_script.sh
+++ b/doc/install_script.sh
@@ -440,10 +440,6 @@
 get_willwont ${INST_SCIPY}
 echo "be installing scipy"
 
-printf "%-15s = %s so I " "INST_0MQ" "${INST_0MQ}"
-get_willwont ${INST_0MQ}
-echo "be installing ZeroMQ"
-
 printf "%-15s = %s so I " "INST_ROCKSTAR" "${INST_ROCKSTAR}"
 get_willwont ${INST_ROCKSTAR}
 echo "be installing Rockstar"
@@ -627,7 +623,6 @@
 FREETYPE_VER='freetype-2.4.12' 
 H5PY='h5py-2.5.0'
 HDF5='hdf5-1.8.14' 
-IPYTHON='ipython-2.4.1'
 LAPACK='lapack-3.4.2'
 PNG=libpng-1.6.3
 MATPLOTLIB='matplotlib-1.4.3'
@@ -635,13 +630,10 @@
 NOSE='nose-1.3.6'
 NUMPY='numpy-1.9.2'
 PYTHON_HGLIB='python-hglib-1.6'
-PYZMQ='pyzmq-14.5.0'
 ROCKSTAR='rockstar-0.99.6'
 SCIPY='scipy-0.15.1'
 SQLITE='sqlite-autoconf-3071700'
 SYMPY='sympy-0.7.6'
-TORNADO='tornado-4.0.2'
-ZEROMQ='zeromq-4.0.5'
 ZLIB='zlib-1.2.8'
 SETUPTOOLS='setuptools-18.0.1'
 
@@ -655,7 +647,6 @@
 echo '609a68a3675087e0cc95268574f31e104549daa48efe15a25a33b8e269a93b4bd160f4c3e8178dca9c950ef5ca514b039d6fd1b45db6af57f25342464d0429ce  freetype-2.4.12.tar.gz' > freetype-2.4.12.tar.gz.sha512
 echo '4a83f9ae1855a7fad90133b327d426201c8ccfd2e7fbe9f39b2d61a2eee2f3ebe2ea02cf80f3d4e1ad659f8e790c173df8cc99b87d0b7ce63d34aa88cfdc7939  h5py-2.5.0.tar.gz' > h5py-2.5.0.tar.gz.sha512
 echo '4073fba510ccadaba41db0939f909613c9cb52ba8fb6c1062fc9118edc601394c75e102310be1af4077d07c9b327e6bbb1a6359939a7268dc140382d0c1e0199  hdf5-1.8.14.tar.gz' > hdf5-1.8.14.tar.gz.sha512
-echo 'a9cffc08ba10c47b0371b05664e55eee0562a30ef0d4bbafae79e52e5b9727906c45840c0918122c06c5672ac65e6eb381399f103e1a836aca003eda81b2acde  ipython-2.4.1.tar.gz' > ipython-2.4.1.tar.gz.sha512
 echo '8770214491e31f0a7a3efaade90eee7b0eb20a8a6ab635c5f854d78263f59a1849133c14ef5123d01023f0110cbb9fc6f818da053c01277914ae81473430a952  lapack-3.4.2.tar.gz' > lapack-3.4.2.tar.gz.sha512
 echo '887582e5a22e4cde338aa8fec7a89f6dd31f2f02b8842735f00f970f64582333fa03401cea6d01704083403c7e8b7ebc26655468ce930165673b33efa4bcd586  libpng-1.6.3.tar.gz' > libpng-1.6.3.tar.gz.sha512
 echo '51b0f58b2618b47b653e17e4f6b6a1215d3a3b0f1331ce3555cc7435e365d9c75693f289ce12fe3bf8f69fd57b663e545f0f1c2c94e81eaa661cac0689e125f5  matplotlib-1.4.3.tar.gz' > matplotlib-1.4.3.tar.gz.sha512
@@ -663,12 +654,9 @@
 echo 'd0cede08dc33a8ac0af0f18063e57f31b615f06e911edb5ca264575174d8f4adb4338448968c403811d9dcc60f38ade3164662d6c7b69b499f56f0984bb6283c  nose-1.3.6.tar.gz' > nose-1.3.6.tar.gz.sha512
 echo '70470ebb9afef5dfd0c83ceb7a9d5f1b7a072b1a9b54b04f04f5ed50fbaedd5b4906bd500472268d478f94df9e749a88698b1ff30f2d80258e7f3fec040617d9  numpy-1.9.2.tar.gz' > numpy-1.9.2.tar.gz.sha512
 echo 'bfd10455e74e30df568c4c4827140fb6cc29893b0e062ce1764bd52852ec7487a70a0f5ea53c3fca7886f5d36365c9f4db52b8c93cad35fb67beeb44a2d56f2d  python-hglib-1.6.tar.gz' > python-hglib-1.6.tar.gz.sha512
-echo '20164f7b05c308e0f089c07fc46b1c522094f3ac136f2e0bba84f19cb63dfd36152a2465df723dd4d93c6fbd2de4f0d94c160e2bbc353a92cfd680eb03cbdc87  pyzmq-14.5.0.tar.gz' > pyzmq-14.5.0.tar.gz.sha512
 echo 'fff4412d850c431a1b4e6ee3b17958ee5ab3beb81e6cb8a8e7d56d368751eaa8781d7c3e69d932dc002d718fddc66a72098acfe74cfe29ec80b24e6736317275  scipy-0.15.1.tar.gz' > scipy-0.15.1.tar.gz.sha512
 echo '96f3e51b46741450bc6b63779c10ebb4a7066860fe544385d64d1eda52592e376a589ef282ace2e1df73df61c10eab1a0d793abbdaf770e60289494d4bf3bcb4  sqlite-autoconf-3071700.tar.gz' > sqlite-autoconf-3071700.tar.gz.sha512
 echo 'ce0f1a17ac01eb48aec31fc0ad431d9d7ed9907f0e8584a6d79d0ffe6864fe62e203fe3f2a3c3e4e3d485809750ce07507a6488e776a388a7a9a713110882fcf  sympy-0.7.6.tar.gz' > sympy-0.7.6.tar.gz.sha512
-echo '93591068dc63af8d50a7925d528bc0cccdd705232c529b6162619fe28dddaf115e8a460b1842877d35160bd7ed480c1bd0bdbec57d1f359085bd1814e0c1c242  tornado-4.0.2.tar.gz' > tornado-4.0.2.tar.gz.sha512
-echo '0d928ed688ed940d460fa8f8d574a9819dccc4e030d735a8c7db71b59287ee50fa741a08249e356c78356b03c2174f2f2699f05aa7dc3d380ed47d8d7bab5408  zeromq-4.0.5.tar.gz' > zeromq-4.0.5.tar.gz.sha512
 echo 'ece209d4c7ec0cb58ede791444dc754e0d10811cbbdebe3df61c0fd9f9f9867c1c3ccd5f1827f847c005e24eef34fb5bf87b5d3f894d75da04f1797538290e4a  zlib-1.2.8.tar.gz' > zlib-1.2.8.tar.gz.sha512
 echo '9b318ce2ee2cf787929dcb886d76c492b433e71024fda9452d8b4927652a298d6bd1bdb7a4c73883a98e100024f89b46ea8aa14b250f896e549e6dd7e10a6b41  setuptools-18.0.1.tar.gz' > setuptools-18.0.1.tar.gz.sha512
 # Individual processes
@@ -679,9 +667,6 @@
 [ $INST_FTYPE -eq 1 ] && get_ytproject $FREETYPE_VER.tar.gz
 [ $INST_SQLITE3 -eq 1 ] && get_ytproject $SQLITE.tar.gz
 [ $INST_PYX -eq 1 ] && get_ytproject $PYX.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $ZEROMQ.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $PYZMQ.tar.gz
-[ $INST_0MQ -eq 1 ] && get_ytproject $TORNADO.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $SCIPY.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject blas.tar.gz
 [ $INST_SCIPY -eq 1 ] && get_ytproject $LAPACK.tar.gz
@@ -690,7 +675,6 @@
 get_ytproject $PYTHON2.tgz
 get_ytproject $NUMPY.tar.gz
 get_ytproject $MATPLOTLIB.tar.gz
-get_ytproject $IPYTHON.tar.gz
 get_ytproject $H5PY.tar.gz
 get_ytproject $CYTHON.tar.gz
 get_ytproject $NOSE.tar.gz
@@ -976,25 +960,9 @@
 [ -n "${OLD_CXXFLAGS}" ] && export CXXFLAGS=${OLD_CXXFLAGS}
 [ -n "${OLD_CFLAGS}" ] && export CFLAGS=${OLD_CFLAGS}
 
-# Now we do our IPython installation, which has two optional dependencies.
-if [ $INST_0MQ -eq 1 ]
-then
-    if [ ! -e $ZEROMQ/done ]
-    then
-        [ ! -e $ZEROMQ ] && tar xfz $ZEROMQ.tar.gz
-        echo "Installing ZeroMQ"
-        cd $ZEROMQ
-        ( ./configure --prefix=${DEST_DIR}/ 2>&1 ) 1>> ${LOG_FILE} || do_exit
-        ( make install 2>&1 ) 1>> ${LOG_FILE} || do_exit
-        ( make clean 2>&1) 1>> ${LOG_FILE} || do_exit
-        touch done
-        cd ..
-    fi
-    do_setup_py $PYZMQ --zmq=${DEST_DIR}
-    do_setup_py $TORNADO
-fi
+echo "Installing Jupyter"
+( ${DEST_DIR}/bin/pip install "jupyter<2.0.0" 2>&1 ) 1>> ${LOG_FILE}
 
-do_setup_py $IPYTHON
 do_setup_py $CYTHON
 do_setup_py $H5PY
 do_setup_py $NOSE
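
The new Jupyter step replaces four source builds (ZeroMQ, pyzmq, tornado,
IPython) with one pip call into the install prefix; pip pulls in pyzmq,
tornado, and IPython as Jupyter dependencies, which is why their tarballs
could be dropped. The same step rendered in Python (DEST_DIR below is a
hypothetical stand-in for the script's prefix):

    import subprocess

    dest_dir = "/opt/yt-x86_64"  # hypothetical value of ${DEST_DIR}
    subprocess.check_call([dest_dir + "/bin/pip", "install", "jupyter<2.0.0"])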

diff -r 481d5a937fb8e369949bae0073a7777d71cc952f -r bc0cc00c150c6e5fca1b85310935dfe9ccd905a2 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -1,423 +1,455 @@
 {
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Detailed spectra of astrophysical objects sometimes allow for determinations of how much of the gas is moving with a certain velocity along the line of sight, thanks to Doppler shifting of spectral lines. This enables \"data cubes\" to be created in RA, Dec, and line-of-sight velocity space. In yt, we can use the `PPVCube` analysis module to project fields along a given line of sight, binned by line-of-sight velocity, to \"mock up\" what would be seen in observations."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.config import ytcfg\n",
+    "\n",
+    "import yt\n",
+    "import numpy as np\n",
+    "from yt.analysis_modules.ppv_cube.api import PPVCube\n",
+    "import yt.units as u"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To demonstrate this functionality, we'll create a simple unigrid dataset from scratch of a rotating disk. We create a thin disk in the x-y midplane of the domain of three cells in height in either direction, and a radius of 10 kpc. The density and azimuthal velocity profiles of the disk as a function of radius will be given by the following functions:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Density: $\\rho(r) \\propto r^{\\alpha}$"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Velocity: $v_{\\theta}(r) \\propto \\frac{r}{1+(r/r_0)^{\\beta}}$"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where for simplicity we won't worry about the normalizations of these profiles. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, we'll set up the grid and the parameters of the profiles:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# increasing the resolution will make the images in this notebook more visually appealing\n",
+    "nx,ny,nz = (64, 64, 64) # domain dimensions\n",
+    "R = 10. # outer radius of disk, kpc\n",
+    "r_0 = 3. # scale radius, kpc\n",
+    "beta = 1.4 # for the tangential velocity profile\n",
+    "alpha = -1. # for the radial density profile\n",
+    "x, y = np.mgrid[-R:R:nx*1j,-R:R:ny*1j] # cartesian coordinates of x-y plane of disk\n",
+    "r = np.sqrt(x*x+y*y) # polar coordinates\n",
+    "theta = np.arctan2(y, x) # polar coordinates"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Second, we'll construct the data arrays for the density, temperature, and velocity of the disk. Since we have the tangential velocity profile, we have to use the polar coordinates we derived earlier to compute `velx` and `vely`. Everywhere outside the disk, all fields are set to zero.  "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "dens = np.zeros((nx,ny,nz))\n",
+    "dens[:,:,nz//2-3:nz//2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
+    "temp = np.zeros((nx,ny,nz))\n",
+    "temp[:,:,nz//2-3:nz//2+3] = 1.0e5 # Isothermal\n",
+    "vel_theta = 100.*r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
+    "velx = np.zeros((nx,ny,nz))\n",
+    "vely = np.zeros((nx,ny,nz))\n",
+    "velx[:,:,nz//2-3:nz//2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "vely[:,:,nz//2-3:nz//2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+    "dens[r > R] = 0.0\n",
+    "temp[r > R] = 0.0\n",
+    "velx[r > R] = 0.0\n",
+    "vely[r > R] = 0.0"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, we'll package these data arrays up into a dictionary, which will then be shipped off to `load_uniform_grid`. We'll define the width of the grid to be `2*R` kpc, which will be equal to 1  `code_length`. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "data = {}\n",
+    "data[\"density\"] = (dens,\"g/cm**3\")\n",
+    "data[\"temperature\"] = (temp, \"K\")\n",
+    "data[\"velocity_x\"] = (velx, \"km/s\")\n",
+    "data[\"velocity_y\"] = (vely, \"km/s\")\n",
+    "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
+    "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
+    "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To get a sense of what the data looks like, we'll take a slice through the middle of the disk:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc.set_log(\"velocity_x\", False)\n",
+    "slc.set_log(\"velocity_y\", False)\n",
+    "slc.set_log(\"velocity_magnitude\", False)\n",
+    "slc.set_unit(\"velocity_magnitude\", \"km/s\")\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This shows a rotating disk with a specific density and velocity profile. Now, suppose we want to look at this disk galaxy from a certain orientation angle and simulate a 3D FITS data cube in which we can see the gas emitting at different velocities along the line of sight. We can do this using the `PPVCube` class. First, let's assume we rotate our viewing angle 60 degrees from face-on, from along the z-axis into the x-axis. We'll create a normal vector:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "i = 60.*np.pi/180.\n",
+    "L = [np.sin(i),0.0,np.cos(i)]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we need to specify a field that will serve as the \"intensity\" of the emission that we see. For simplicity, we'll choose the gas density as this field, though in principle it could be any field (including derived fields). We also need to choose the bounds in line-of-sight velocity that the data will be binned into, given as a 4-tuple of the form `(vmin, vmax, nbins, units)`: a linear range of `nbins` velocity bins from `vmin` to `vmax` in units of `units`. We may also optionally specify the dimensions of the data cube with the `dims` argument."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false,
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "cube = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, method=\"sum\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Following this, we can now write this cube to a FITS file. The x and y axes of the file can be in length units, which can be optionally specified by `length_unit`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube.write_fits(\"cube.fits\", clobber=True, length_unit=\"kpc\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or one can use the `sky_scale` and `sky_center` keywords to set up the coordinates in RA and Dec:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sky_scale = (1.0, \"arcsec/kpc\")\n",
+    "sky_center = (30., 45.) # RA, Dec in degrees\n",
+    "cube.write_fits(\"cube_sky.fits\", clobber=True, sky_scale=sky_scale, sky_center=sky_center)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, we'll look at the FITS dataset in yt and look at different slices along the velocity axis, which is the \"z\" axis:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_cube = yt.load(\"cube.fits\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Specifying no center gives us the center slice\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"])\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "# Picking different velocities for the slices\n",
+    "new_center = ds_cube.domain_center\n",
+    "new_center[2] = ds_cube.spec2pixel(-100.*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube.spec2pixel(70.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube.spec2pixel(-30.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we project all the emission at all the different velocities along the z-axis, we recover the entire disk:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "prj = yt.ProjectionPlot(ds_cube, \"z\", [\"density\"], method=\"sum\")\n",
+    "prj.set_log(\"density\", True)\n",
+    "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
+    "prj.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `thermal_broad` keyword allows one to simulate thermal line broadening based on the temperature, and the `atomic_weight` argument is used to specify the atomic weight of the particle that is doing the emitting."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube2 = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, thermal_broad=True, \n",
+    "                atomic_weight=12.0, method=\"sum\")\n",
+    "cube2.write_fits(\"cube2.fits\", clobber=True, length_unit=\"kpc\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Taking a slice of this cube shows:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_cube2 = yt.load(\"cube2.fits\")\n",
+    "new_center = ds_cube2.domain_center\n",
+    "new_center[2] = ds_cube2.spec2pixel(70.0*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "new_center[2] = ds_cube2.spec2pixel(-100.*u.km/u.s)\n",
+    "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where we can see the emission has been smeared into this velocity slice from neighboring slices due to the thermal broadening. \n",
+    "\n",
+    "Finally, the \"velocity\" or \"spectral\" axis of the cube can be changed to a different unit, such as wavelength, frequency, or energy: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (cube2.vbins[0], cube2.vbins[-1])\n",
+    "cube2.transform_spectral_axis(400.0,\"nm\")\n",
+    "print (cube2.vbins[0], cube2.vbins[-1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If a FITS file is now written from the cube, the spectral axis will be in the new units. To reset the spectral axis back to the original velocity units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "cube2.reset_spectral_axis()\n",
+    "print (cube2.vbins[0], cube2.vbins[-1])"
+   ]
+  }
+ ],
  "metadata": {
-  "name": "",
-  "signature": "sha256:67e4297cbc32716b2481c71659305687cb5bdadad648a0acf6b48960267bb069"
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
  },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Detailed spectra of astrophysical objects sometimes allow for determinations of how much of the gas is moving with a certain velocity along the line of sight, thanks to Doppler shifting of spectral lines. This enables \"data cubes\" to be created in RA, Dec, and line-of-sight velocity space. In yt, we can use the `PPVCube` analysis module to project fields along a given line of sight traveling at different line-of-sight velocities, to \"mock-up\" what would be seen in observations."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "from yt.config import ytcfg\n",
-      "ytcfg[\"yt\",\"loglevel\"] = 30\n",
-      "\n",
-      "import yt\n",
-      "import numpy as np\n",
-      "from yt.analysis_modules.ppv_cube.api import PPVCube\n",
-      "import yt.units as u"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To demonstrate this functionality, we'll create a simple unigrid dataset from scratch of a rotating disk. We create a thin disk in the x-y midplane of the domain of three cells in height in either direction, and a radius of 10 kpc. The density and azimuthal velocity profiles of the disk as a function of radius will be given by the following functions:"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Density: $\\rho(r) \\propto r^{\\alpha}$"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Velocity: $v_{\\theta}(r) \\propto \\frac{r}{1+(r/r_0)^{\\beta}}$"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where for simplicity we won't worry about the normalizations of these profiles. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "First, we'll set up the grid and the parameters of the profiles:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "nx,ny,nz = (256,256,256) # domain dimensions\n",
-      "R = 10. # outer radius of disk, kpc\n",
-      "r_0 = 3. # scale radius, kpc\n",
-      "beta = 1.4 # for the tangential velocity profile\n",
-      "alpha = -1. # for the radial density profile\n",
-      "x, y = np.mgrid[-R:R:nx*1j,-R:R:ny*1j] # cartesian coordinates of x-y plane of disk\n",
-      "r = np.sqrt(x*x+y*y) # polar coordinates\n",
-      "theta = np.arctan2(y, x) # polar coordinates"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Second, we'll construct the data arrays for the density, temperature, and velocity of the disk. Since we have the tangential velocity profile, we have to use the polar coordinates we derived earlier to compute `velx` and `vely`. Everywhere outside the disk, all fields are set to zero.  "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dens = np.zeros((nx,ny,nz))\n",
-      "dens[:,:,nz/2-3:nz/2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
-      "temp = np.zeros((nx,ny,nz))\n",
-      "temp[:,:,nz/2-3:nz/2+3] = 1.0e5 # Isothermal\n",
-      "vel_theta = 100.*r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
-      "velx = np.zeros((nx,ny,nz))\n",
-      "vely = np.zeros((nx,ny,nz))\n",
-      "velx[:,:,nz/2-3:nz/2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
-      "vely[:,:,nz/2-3:nz/2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
-      "dens[r > R] = 0.0\n",
-      "temp[r > R] = 0.0\n",
-      "velx[r > R] = 0.0\n",
-      "vely[r > R] = 0.0"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Finally, we'll package these data arrays up into a dictionary, which will then be shipped off to `load_uniform_grid`. We'll define the width of the grid to be `2*R` kpc, which will be equal to 1  `code_length`. "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "data = {}\n",
-      "data[\"density\"] = (dens,\"g/cm**3\")\n",
-      "data[\"temperature\"] = (temp, \"K\")\n",
-      "data[\"velocity_x\"] = (velx, \"km/s\")\n",
-      "data[\"velocity_y\"] = (vely, \"km/s\")\n",
-      "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
-      "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
-      "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "To get a sense of what the data looks like, we'll take a slice through the middle of the disk:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "slc.set_log(\"velocity_x\", False)\n",
-      "slc.set_log(\"velocity_y\", False)\n",
-      "slc.set_log(\"velocity_magnitude\", False)\n",
-      "slc.set_unit(\"velocity_magnitude\", \"km/s\")\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Which shows a rotating disk with a specific density and velocity profile. Now, suppose we wanted to look at this disk galaxy from a certain orientation angle, and simulate a 3D FITS data cube where we can see the gas that is emitting at different velocities along the line of sight. We can do this using the `PPVCube` class. First, let's assume we rotate our viewing angle 60 degrees from face-on, from along the z-axis into the x-axis. We'll create a normal vector:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "i = 60.*np.pi/180.\n",
-      "L = [np.sin(i),0.0,np.cos(i)]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Next, we need to specify a field that will serve as the \"intensity\" of the emission that we see. For simplicity, we'll simply choose the gas density as this field, though it could be any field (including derived fields) in principle. We also need to choose the bounds in line-of-sight velocity that the data will be binned into, which is a 4-tuple in the shape of `(vmin, vmax, nbins, units)`, which specifies a linear range of `nbins` velocity bins from `vmin` to `vmax` in units of `units`. We may also optionally specify the dimensions of the data cube with the `dims` argument."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, method=\"sum\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Following this, we can now write this cube to a FITS file. The x and y axes of the file can be in length units, which can be optionally specified by `length_unit`:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube.write_fits(\"cube.fits\", clobber=True, length_unit=\"kpc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Or one can use the `sky_scale` and `sky_center` keywords to set up the coordinates in RA and Dec:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "sky_scale = (1.0, \"arcsec/kpc\")\n",
-      "sky_center = (30., 45.) # RA, Dec in degrees\n",
-      "cube.write_fits(\"cube_sky.fits\", clobber=True, sky_scale=sky_scale, sky_center=sky_center)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Now, we'll look at the FITS dataset in yt and look at different slices along the velocity axis, which is the \"z\" axis:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds_cube = yt.load(\"cube.fits\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Specifying no center gives us the center slice\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"])\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "# Picking different velocities for the slices\n",
-      "new_center = ds_cube.domain_center\n",
-      "new_center[2] = ds_cube.spec2pixel(-100.*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube.spec2pixel(70.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube.spec2pixel(-30.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If we project all the emission at all the different velocities along the z-axis, we recover the entire disk:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "prj = yt.ProjectionPlot(ds_cube, \"z\", [\"density\"], method=\"sum\")\n",
-      "prj.set_log(\"density\", True)\n",
-      "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
-      "prj.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The `thermal_broad` keyword allows one to simulate thermal line broadening based on the temperature, and the `atomic_weight` argument is used to specify the atomic weight of the particle that is doing the emitting."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube2 = PPVCube(ds, L, \"density\", (-150.,150.,50,\"km/s\"), dims=200, thermal_broad=True, \n",
-      "                atomic_weight=12.0, method=\"sum\")\n",
-      "cube2.write_fits(\"cube2.fits\", clobber=True, length_unit=\"kpc\")"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Taking a slice of this cube shows:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds_cube2 = yt.load(\"cube2.fits\")\n",
-      "new_center = ds_cube2.domain_center\n",
-      "new_center[2] = ds_cube2.spec2pixel(70.0*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "new_center[2] = ds_cube2.spec2pixel(-100.*u.km/u.s)\n",
-      "slc = yt.SlicePlot(ds_cube2, \"z\", [\"density\"], center=new_center)\n",
-      "slc.show()"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "where we can see the emission has been smeared into this velocity slice from neighboring slices due to the thermal broadening. \n",
-      "\n",
-      "Finally, the \"velocity\" or \"spectral\" axis of the cube can be changed to a different unit, such as wavelength, frequency, or energy: "
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "print cube2.vbins[0], cube2.vbins[-1]\n",
-      "cube2.transform_spectral_axis(400.0,\"nm\")\n",
-      "print cube2.vbins[0], cube2.vbins[-1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "If a FITS file is now written from the cube, the spectral axis will be in the new units. To reset the spectral axis back to the original velocity units:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "cube2.reset_spectral_axis()\n",
-      "print cube2.vbins[0], cube2.vbins[-1]"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file
+ "nbformat": 4,
+ "nbformat_minor": 0
+}

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled and are the addressed
recipient of this email.

