[yt-svn] commit/yt: MatthewTurk: Merged in jzuhone/yt-3.x/yt-3.0 (pull request #859)

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Mon May 5 06:29:59 PDT 2014


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/85cf74b2256c/
Changeset:   85cf74b2256c
Branch:      yt-3.0
User:        MatthewTurk
Date:        2014-05-05 15:29:46
Summary:     Merged in jzuhone/yt-3.x/yt-3.0 (pull request #859)

FITS frontend work, PPVCube analysis module, updates to Particle Trajectories
Affected #:  48 files

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- /dev/null
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -0,0 +1,237 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:3a720e0a18272564522f9fc23553431908d6f2b4f3e3e7dfe5b3e690e2e37677"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Detailed spectra of astrophysical objects sometimes allow for determinations of how much of the gas is moving with a certain velocity along the line of sight, thanks to Doppler shifting of spectral lines. This enables \"data cubes\" to be created in RA, Dec, and line-of-sight velocity space. In yt, we can use the `PPVCube` analysis module to project fields along a given line of sight traveling at different line-of-sight velocities, to \"mock-up\" what would be seen in observations."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To demonstrate this functionality, we'll create a simple unigrid dataset of a rotating disk galaxy from scratch. The disk is thin, lying in the x-y midplane of the domain, extending three cells above and below the midplane, with a radius of 10 kpc. The density and azimuthal velocity profiles of the disk as a function of radius are given by the following functions:"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Density: $\\rho(r) \\propto r^{\\alpha}$"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Velocity: $v_{\\theta}(r) \\propto \\frac{r}{1+(r/r_0)^{\\beta}}$"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "where for simplicity we won't worry about the normalizations of these profiles. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "from yt.mods import *\n",
+      "from yt.analysis_modules.api import PPVCube"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "data = {}\n",
+      "nx,ny,nz = (256,256,256)\n",
+      "R = 10. # kpc\n",
+      "r_0 = 3. # kpc\n",
+      "beta = 1.4\n",
+      "alpha = -1.\n",
+      "x, y = np.mgrid[-R:R:nx*1j,-R:R:ny*1j] # cartesian coordinates\n",
+      "r = np.sqrt(x*x+y*y) # polar coordinates\n",
+      "theta = np.arctan2(y, x) # polar coordinates\n",
+      "dens = np.zeros((nx,ny,nz))\n",
+      "dens[:,:,nz/2-3:nz/2+3] = (r**alpha).reshape(nx,ny,1) # the density profile of the disk\n",
+      "vel_theta = r/(1.+(r/r_0)**beta) # the azimuthal velocity profile of the disk\n",
+      "velx = np.zeros((nx,ny,nz))\n",
+      "vely = np.zeros((nx,ny,nz))\n",
+      "velx[:,:,nz/2-3:nz/2+3] = (-vel_theta*np.sin(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+      "vely[:,:,nz/2-3:nz/2+3] = (vel_theta*np.cos(theta)).reshape(nx,ny,1) # convert polar to cartesian\n",
+      "data[\"density\"] = (dens,\"g/cm**3\")\n",
+      "data[\"velocity_x\"] = (velx, \"km/s\")\n",
+      "data[\"velocity_y\"] = (vely, \"km/s\")\n",
+      "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
+      "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]])\n",
+      "ds = load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To get a sense of what the data looks like, we'll take a slice through the middle of the disk:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc.set_log(\"velocity_x\", False)\n",
+      "slc.set_log(\"velocity_y\", False)\n",
+      "slc.set_log(\"velocity_magnitude\", False)\n",
+      "slc.set_unit(\"velocity_magnitude\", \"km/s\")\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This shows a rotating disk with the density and velocity profiles we specified. Now, suppose we want to view this disk galaxy from a certain orientation angle and simulate a 3D FITS data cube, in which we can see the gas emitting at different velocities along the line of sight. We can do this using the `PPVCube` class. First, let's assume we rotate our viewing angle 60 degrees from face-on, tilting the line of sight from the z-axis toward the y-axis. We'll create the corresponding normal vector:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "i = 60.*np.pi/180.\n",
+      "L = [0.0,np.sin(i),np.cos(i)] # normal vector, 60 degrees from the z-axis toward the y-axis"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next, we need to specify a field that will serve as the \"intensity\" of the emission that we see. For simplicity, we'll simply choose the gas density as this field, though it could be any field (including derived fields) in principle. We also need to specify the dimensions of the data cube, and optionally we may choose the bounds in line-of-sight velocity that the data will be binned into. Otherwise, the bounds will simply be set to the negative and positive of the largest speed in the dataset."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "cube = PPVCube(ds, L, \"density\", dims=(200,100,50), velocity_bounds=(-0.5,0.5,\"km/s\"))"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Following this, we can now write this cube to a FITS file:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "cube.write_fits(\"cube.fits\", clobber=True, length_unit=(5.0,\"deg\"))"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, we'll look at the FITS dataset in yt and look at different slices along the velocity axis, which is the \"z\" axis:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = load(\"cube.fits\")\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=\"c\") # sliced at the center of the domain\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "# To figure out what the domain center and width is in pixel (code length) units:\n",
+      "print ds.domain_center\n",
+      "print ds.domain_width"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=[100.5,50.5,-250.0]) # \"z\" slice is in m/s\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = SlicePlot(ds, \"z\", [\"density\"], center=[100.5,50.5,300.0])\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ b/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:874e85c86cd80a516bb61775b566cd46766c60bdf8f865336bf9dd3505f83821"
+  "signature": "sha256:e4b5ea69687eb79452c16385b3a6f795b4572518dfa7f9d8a8125bd75b5fea85"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -21,9 +21,11 @@
      "input": [
       "%matplotlib inline\n",
       "from yt.mods import *\n",
-      "from yt.analysis_modules.api import ParticleTrajectories\n",
+      "import glob\n",
+      "from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
       "from yt.config import ytcfg\n",
-      "path = ytcfg.get(\"yt\", \"test_data_dir\")"
+      "path = ytcfg.get(\"yt\", \"test_data_dir\")\n",
+      "import matplotlib.pyplot as plt"
      ],
      "language": "python",
      "metadata": {},
@@ -75,8 +77,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load(my_fns[0])\n",
-      "dd = pf.h.all_data()\n",
+      "ds = load(my_fns[0])\n",
+      "dd = ds.all_data()\n",
       "indices = dd[\"particle_index\"].astype(\"int\")\n",
       "print indices"
      ],
@@ -130,8 +132,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
-      "pylab.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+      "plt.plot(trajs[\"particle_position_x\"][0].ndarray_view(), trajs[\"particle_position_y\"][0].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_position_x\"][1].ndarray_view(), trajs[\"particle_position_y\"][1].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -148,8 +150,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
-      "pylab.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+      "plt.plot(trajs[\"particle_velocity_x\"][0].ndarray_view(), trajs[\"particle_velocity_y\"][0].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_velocity_x\"][1].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -166,8 +168,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_x\"][1].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -185,8 +187,8 @@
      "collapsed": false,
      "input": [
       "particle1 = trajs.trajectory_from_index(1)\n",
-      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
-      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+      "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_x\"].ndarray_view())\n",
+      "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_y\"].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -203,8 +205,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-      "slc = SlicePlot(pf, \"x\", [\"Density\",\"Dark_Matter_Density\"], center=\"max\", width=(3.0, \"mpc\"))\n",
+      "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
       "slc.show()"
      ],
      "language": "python",
@@ -222,7 +224,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "sp = pf.sphere(\"max\", (0.5, \"mpc\"))\n",
+      "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n",
       "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
      ],
      "language": "python",
@@ -240,7 +242,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.index\")\n",
+      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
       "my_fns.sort()\n",
       "trajs = ParticleTrajectories(my_fns, indices)"
      ],
@@ -263,9 +265,12 @@
       "from mpl_toolkits.mplot3d import Axes3D\n",
       "fig = plt.figure(figsize=(8.0, 8.0))\n",
       "ax = fig.add_subplot(111, projection='3d')\n",
-      "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_z\"][100], trajs[\"particle_position_z\"][100])\n",
-      "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_z\"][8], trajs[\"particle_position_z\"][8])\n",
-      "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_z\"][25], trajs[\"particle_position_z\"][25])"
+      "ax.plot(trajs[\"particle_position_x\"][100].ndarray_view(), trajs[\"particle_position_z\"][100].ndarray_view(), \n",
+      "        trajs[\"particle_position_z\"][100].ndarray_view())\n",
+      "ax.plot(trajs[\"particle_position_x\"][8].ndarray_view(), trajs[\"particle_position_z\"][8].ndarray_view(), \n",
+      "        trajs[\"particle_position_z\"][8].ndarray_view())\n",
+      "ax.plot(trajs[\"particle_position_x\"][25].ndarray_view(), trajs[\"particle_position_z\"][25].ndarray_view(), \n",
+      "        trajs[\"particle_position_z\"][25].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -282,9 +287,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][100].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][8].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][25].ndarray_view())"
      ],
      "language": "python",
      "metadata": {},
@@ -301,7 +306,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "trajs.add_fields([\"Density\"])"
+      "trajs.add_fields([\"density\"])"
      ],
      "language": "python",
      "metadata": {},
@@ -311,17 +316,17 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We also could have included `\"Density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+      "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][100])\n",
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][8])\n",
-      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][25])\n",
-      "pylab.yscale(\"log\")"
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][100].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][8].ndarray_view())\n",
+      "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][25].ndarray_view())\n",
+      "plt.yscale(\"log\")"
      ],
      "language": "python",
      "metadata": {},
@@ -338,29 +343,12 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "trajs.write_out(\"halo_trajectories.txt\")\n",
-      "trajs.write_out_h5(\"halo_trajectories.h5\")"
+      "trajs.write_out(\"halo_trajectories\") # This will write a separate file for each trajectory\n",
+      "trajs.write_out_h5(\"halo_trajectories.h5\") # This will write all trajectories to a single file"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Important Caveats"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "* Parallelization is not yet implemented.\n",
-      "* For large datasets, constructing trajectories can be very slow. We are working on optimizing the algorithm for a future release. \n",
-      "* At the moment, trajectories are limited for particles that exist in every dataset. Therefore, for codes like FLASH that allow for particles to exit the domain (and hence the simulation) for certain types of boundary conditions, you need to insure that the particles you wish to examine exist in all datasets in the time series from the beginning to the end. If this is not the case, `ParticleTrajectories` will throw an error. This is a limitation we hope to relax in a future release. "
-     ]
     }
    ],
    "metadata": {}

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/analyzing/analysis_modules/ppv_cubes.rst
--- /dev/null
+++ b/doc/source/analyzing/analysis_modules/ppv_cubes.rst
@@ -0,0 +1,4 @@
+Creating Position-Position-Velocity FITS Cubes
+-------------------------------------------------
+
+.. notebook:: PPVCube.ipynb

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/analyzing/analysis_modules/synthetic_observation.rst
--- a/doc/source/analyzing/analysis_modules/synthetic_observation.rst
+++ b/doc/source/analyzing/analysis_modules/synthetic_observation.rst
@@ -18,3 +18,4 @@
    sunyaev_zeldovich
    radial_column_density
    photon_simulator
+   ppv_cubes

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/cookbook/fits_radio_cubes.ipynb
--- /dev/null
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -0,0 +1,306 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:ded7d47bf5a74c9ea5431a37b6d371a631909d2b95214cd8053617762f62e2e4"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "import yt"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This notebook demonstrates some of the capabilities of `yt` for working with FITS \"position-position-velocity\" cubes of radio data. "
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "M33 VLA Image"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The dataset `\"m33_hi.fits\"` has `NaN`s in it, so we'll mask them out by setting `nan_mask` = 0:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load(\"radio_fits/m33_hi.fits\", nan_mask=0.0)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, we'll take a slice of the data along the z-axis, which is the velocity axis of the FITS cube:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], origin=\"native\")\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Note that the x and y axes are in units of \"code length\", which for FITS datasets is the width of one pixel. Currently, the `yt` plotting routines don't understand datasets with non-length units on the axes (such as RA, Dec, velocity, etc.), so they default to the pixel scale. This will be changed in a future release. When making plots of FITS data, it is helpful to set the keyword `origin = \"native\"` to see the image coordinates as they are in the file."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. First, we'll check what the value along the velocity axis at the domain center is, as well as the range of possible values. This is the third value of each array. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print ds.domain_left_edge[2], ds.domain_center[2], ds.domain_right_edge[2]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, we'll choose a few values for the velocity within this range:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "new_center = ds.domain_center \n",
+      "new_center[2] = -250000.\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "new_center = ds.domain_center \n",
+      "new_center[2] = -100000.\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "new_center = ds.domain_center \n",
+      "new_center[2] = -150000.\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "These slices demonstrate the intensity of the radio emission at different line-of-sight velocities. "
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "$^{13}$CO GRS Data"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This next example uses one of the cubes from the [Boston University Galactic Ring Survey](http://www.bu.edu/galacticring/new_index.htm). "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load(\"radio_fits/grs-50-cube.fits\", nan_mask=0.0)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can use the `quantities` methods to determine derived quantities of the dataset. For example, we could find the maximum and minimum temperature:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "dd = ds.all_data() # A region containing the entire dataset\n",
+      "extrema = dd.quantities.extrema(\"temperature\")\n",
+      "print extrema"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can compute the average temperature along the \"velocity\" axis for all positions by making a `ProjectionPlot`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], origin=\"native\", \n",
+      "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
+      "prj.set_log(\"temperature\", True)\n",
+      "prj.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can also make a histogram of the temperature field of this region:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pplot = yt.ProfilePlot(dd, \"temperature\", [\"ones\"], weight_field=None, n_bins=128)\n",
+      "pplot.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can see from this histogram and our calculation of the dataset's extrema that there is a lot of noise. Suppose we wanted to make a projection, but only of the cells with a positive temperature value. We can do this by applying a \"field cut\" to the data:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fc = dd.cut_region([\"obj['temperature'] > 0\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now let's check the extrema of this region:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print fc.quantities.extrema(\"temperature\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Looks like we were successful in filtering out the negative temperatures. To compute the average temperature of this new region:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fc.quantities.weighted_average_quantity(\"temperature\", \"ones\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, let's make a projection of the dataset, using the field cut `fc` as a `data_source`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], data_source=fc, origin=\"native\", \n",
+      "                        weight_field=\"ones\") # \"ones\" weights each cell by 1\n",
+      "prj.set_log(\"temperature\", True)\n",
+      "prj.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/cookbook/fits_radio_cubes.rst
--- /dev/null
+++ b/doc/source/cookbook/fits_radio_cubes.rst
@@ -0,0 +1,6 @@
+.. _radio_cubes:
+
+FITS Radio Cubes in yt
+----------------------
+
+.. notebook:: fits_radio_cubes.ipynb
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/cookbook/fits_xray_images.ipynb
--- /dev/null
+++ b/doc/source/cookbook/fits_xray_images.ipynb
@@ -0,0 +1,259 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:564cb1986609d8bb76397a18219974504231b260f912bed483b87c1f896e92ac"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "import yt\n",
+      "import numpy as np"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This notebook shows how to use `yt` to make plots and examine FITS X-ray images and events files. "
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Sloshing, Shocks, and Bubbles in Abell 2052"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](http://adsabs.harvard.edu/cgi-bin/bib_query?2011ApJ...737...99B). They consist of two files, a \"flux map\" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load(\"xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits\", \n",
+      "             auxiliary_files=[\"xray_fits/A2052_core_tmap_b1_m2000_.fits\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the \"flux\" file) as a master file, and pass in the \"temperature\" file with the `auxiliary_files` keyword to `load`. "
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next, let's derive some new fields for the number of counts, the \"pseudo-pressure\", and the \"pseudo-entropy\":"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds.index\n",
+      "def _counts(field, data):\n",
+      "    exposure_time = data.get_field_parameter(\"exposure_time\")\n",
+      "    return data[\"flux\"]*data[\"pixel\"]*exposure_time\n",
+      "ds.field_info.add_field(name=\"counts\", function=_counts, units=\"counts\")\n",
+      "\n",
+      "def _pp(field, data):\n",
+      "    return np.sqrt(data[\"counts\"])*data[\"projected_temperature\"]\n",
+      "ds.field_info.add_field(name=\"pseudo_pressure\", function=_pp, units=\"sqrt(counts)*keV\")\n",
+      "\n",
+      "def _pe(field, data):\n",
+      "    return data[\"projected_temperature\"]*data[\"counts\"]**(-1./3.)\n",
+      "ds.field_info.add_field(name=\"pseudo_entropy\", function=_pe, units=\"keV*(counts)**(-1/3)\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Here, we're deriving a \"counts\" field from the \"flux\" field by passing it a `field_parameter` for the exposure time of the observation and multiplying by the pixel scale. Second, since the X-ray surface brightness is strongly dependent on density ($S_X \\propto \\rho^2$), we use the counts in each pixel as a \"stand-in\" for the gas density. Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "exposure_time = ds.quan(ds.primary_header[\"exposure\"], \"s\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = yt.SlicePlot(ds, \"z\", \n",
+      "                   [\"flux\",\"projected_temperature\",\"pseudo_pressure\",\"pseudo_entropy\"], \n",
+      "                   origin=\"native\", field_parameters={\"exposure_time\":exposure_time})\n",
+      "slc.set_log(\"flux\",True)\n",
+      "slc.set_log(\"pseudo_pressure\",False)\n",
+      "slc.set_log(\"pseudo_entropy\",False)\n",
+      "slc.set_width(250.)\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "The Bullet Cluster"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This example uses an events table file from a ~100 ks exposure of the \"Bullet Cluster\" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in `yt`. However, you can make images of the object in different energy bands using the `setup_counts_fields` function. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.frontends.fits.api import setup_counts_fields"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`setup_counts_fields` takes a list of energy bounds (emin, emax) in keV and creates a new field for each band, into which the photons in that energy range are deposited onto the image grid. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ebounds = [(0.1,2.0),(2.0,5.0)]\n",
+      "setup_counts_fields(ebounds)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`load` will handle the events file like a FITS image file, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution by setting the `\"reblock\"` parameter in the `parameters` dictionary in `load`. `\"reblock\"` must be a power of 2. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds2 = yt.load(\"xray_fits/acisf05356N003_evt2.fits.gz\", parameters={\"reblock\":2})"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The \"x\", \"y\", \"energy\", and \"time\" fields in the events table are loaded as particle fields. Each one has a name given by \"event\\_\" plus the name of the field:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "dd = ds2.all_data()\n",
+      "print dd[\"event_x\"]\n",
+      "print dd[\"event_y\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\")\n",
+      "slc.pan((100.,100.))\n",
+      "slc.set_width(500.)\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The counts fields can take the field parameter `\"sigma\"` and use [AstroPy's convolution routines](http://astropy.readthedocs.org/en/latest/convolution/) to smooth the data with a Gaussian:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = yt.SlicePlot(ds2, \"z\", [\"counts_0.1-2.0\",\"counts_2.0-5.0\"], origin=\"native\",\n",
+      "                   field_parameters={\"sigma\":2.}) # This value is in pixel scale\n",
+      "slc.pan((100.,100.))\n",
+      "slc.set_width(500.)\n",
+      "slc.set_zlim(\"counts_0.1-2.0\", 0.01, 100.)\n",
+      "slc.set_zlim(\"counts_2.0-5.0\", 0.01, 50.)\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/cookbook/fits_xray_images.rst
--- /dev/null
+++ b/doc/source/cookbook/fits_xray_images.rst
@@ -0,0 +1,6 @@
+.. _xray_fits:
+
+FITS X-ray Images in yt
+-----------------------
+
+.. notebook:: fits_xray_images.ipynb
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/cookbook/index.rst
--- a/doc/source/cookbook/index.rst
+++ b/doc/source/cookbook/index.rst
@@ -46,4 +46,5 @@
    embedded_javascript_animation
    embedded_webm_animation
    ../analyzing/analysis_modules/sunyaev_zeldovich
-   
+   fits_radio_cubes
+   fits_xray_images

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -459,6 +459,243 @@
 
    pf = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
 
+.. _loading_athena_data:
+
+Athena Data
+-----------
+
+Athena 4.x VTK data is *mostly* supported and cared for by John
+ZuHone. Both uniform grid and SMR datasets are supported.
+
+Loading Athena datasets is slightly different depending on whether
+your dataset came from a serial or a parallel run. If the data came
+from a serial run or you have joined the VTK files together using the
+Athena tool ``join_vtk``, you can load the data like this:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("kh.0010.vtk")
+
+The filename corresponds to the file on SMR level 0. If there
+are multiple levels, the corresponding files will be picked up
+automatically, assuming they are laid out in ``lev*`` subdirectories
+under the directory where the base file is located.
+
+For parallel datasets, yt assumes that they are laid out in
+directories named ``id*``, one for each processor number, each with
+``lev*`` subdirectories for additional refinement levels. To load this
+data, call ``load`` with the base file in the ``id0`` directory:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("id0/kh.0010.vtk")
+
+which will pick up all of the files in the different ``id*`` directories for
+the entire dataset.
+
+yt works in cgs ("Gaussian") units by default, but Athena data is not
+normally stored in these units. If you would like to convert data to
+cgs units, you may supply conversions for length, time, and mass to ``load``:
+
+.. code-block:: python
+
+   from yt.mods import *
+   pf = load("id0/cluster_merger.0250.vtk",
+             parameters={"length_unit":(1.0,"Mpc"),
+                         "time_unit":(1.0,"Myr"),
+                         "mass_unit":(1.0e14,"Msun")})
+
+This means that the yt fields, e.g. ``("gas","density")``, ``("gas","x-velocity")``,
+``("gas","magnetic_field_x")``, will be in cgs units, but the Athena fields, e.g.,
+``("athena","density")``, ``("athena","velocity_x")``, ``("athena","cell_centered_B_x")``, will be
+in code units.
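+
+As a quick, illustrative check of this distinction (a minimal sketch using the
+dataset loaded above), you can compare a converted yt field with the
+corresponding raw Athena field on a data object:
+
+.. code-block:: python
+
+   dd = pf.all_data()
+   print dd["gas", "density"]      # converted to cgs units (g/cm**3)
+   print dd["athena", "density"]   # code units, as stored on disk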
+
+.. rubric:: Caveats
+
+* yt primarily works with primitive variables. If the Athena
+  dataset contains conservative variables, the yt primitive fields will be generated from the
+  conserved variables on disk.
+* Domains may be visualized assuming periodicity.
+* Particle list data is currently unsupported.
+
+.. _loading-fits-data:
+
+FITS Data
+---------
+
+FITS data is *mostly* supported and cared for by John ZuHone. In order to
+read FITS data, `AstroPy <http://www.astropy.org>`_ must be installed. FITS
+data cubes can be loaded in the same way by yt as other datasets. yt
+can read FITS image files that have the following (case-insensitive) suffixes:
+
+* fits
+* fts
+* fits.gz
+* fts.gz
+
+yt can read two kinds of FITS files: FITS image files and FITS binary table files containing
+positions, times, and energies of X-ray events.
+
+.. note::
+
+  AstroPy is necessary due to the requirements of both FITS file reading and
+  WCS coordinates. Since new releases of
+  `PyFITS <http://www.stsci.edu/institute/software_hardware/pyfits>`_ are to be
+  discontinued, individual installations of this package and the
+  `PyWCS <http://stsdas.stsci.edu/astrolib/pywcs/>`_ package are not supported.
+
+Though a FITS image is stored as a single data cube in the FITS file,
+upon being loaded into yt it is automatically decomposed into grids:
+
+.. code-block:: python
+
+  from yt.mods import *
+  ds = load("m33_hi.fits")
+  ds.print_stats()
+
+.. parsed-literal::
+
+  level	  # grids	    # cells	   # cells^3
+  ----------------------------------------------
+    0	     512	  981940800	         994
+  ----------------------------------------------
+             512	  981940800
+
+yt will generate its own domain decomposition, but the number of grids can be
+set manually by passing the ``nprocs`` parameter to the ``load`` call:
+
+.. code-block:: python
+
+  ds = load("m33_hi.fits", nprocs=1024)
+
+Making the Most of `yt` for FITS Data
++++++++++++++++++++++++++++++++++++++
+
+yt will load data even if the WCS information and/or some header keywords are missing, but the
+resulting field information will necessarily be incomplete. For example, field names may not be
+descriptive, and units will not be correct. To get the most out of yt for FITS files, make sure
+that for each image the following header keywords have sensible values:
+
+* ``CDELTx``: The pixel width along axis ``x``
+* ``CRVALx``: The coordinate value at the reference position along axis ``x``
+* ``CRPIXx``: The reference pixel along axis ``x``
+* ``CTYPEx``: The projection type of axis ``x``
+* ``CUNITx``: The units of the coordinate along axis ``x``
+* ``BTYPE``: The type of the image
+* ``BUNIT``: The units of the image
+
+FITS header keywords can easily be updated using AstroPy. For example,
+to set the ``BTYPE`` and ``BUNIT`` keywords:
+
+.. code-block:: python
+
+    import astropy.io.fits as pyfits
+    f = pyfits.open("xray_flux_image.fits", mode="update")
+    f[0].header["BUNIT"] = "cts/s/pixel"
+    f[0].header["BTYPE"] = "flux"
+    f.flush()
+    f.close()
+
+FITS Coordinates
+++++++++++++++++
+
+For FITS datasets, the unit of ``code_length`` is always the width of one
+pixel. yt will attempt to use the WCS information in the FITS header to
+construct information about the coordinate system, and provides support for
+the following dataset types:
+
+1. Rectilinear 2D/3D images with length units (e.g., Mpc, AU,
+   etc.) defined in the ``CUNITx`` keywords
+2. 2D images in some celestial coordinate systems (RA/Dec,
+   galactic latitude/longitude, defined in the ``CTYPEx``
+   keywords), and X-ray binary table event files
+3. 3D images with celestial coordinates and a third axis for another
+   quantity, such as velocity, frequency, wavelength, etc.
+4. 4D images with the first three axes like Case 3, where the slices
+   along the 4th axis are interpreted as different fields.
+
+If your data is of the first case, yt will determine the length units based
+on the information in the header. If your data is of the second or third
+cases, no length units will be assigned, but the world coordinate information
+about the axes will be stored in separate fields. If your data is of the fourth
+type, the coordinates of the first three axes will be determined according to
+cases 1-3.
+
+.. note::
+
+  Linear length-based coordinates (Case 1 above) are only supported if all dimensions
+  have the same value for ``CUNITx``. WCS coordinates are only supported for Cases 2-4.
+
+Fields in FITS Datasets
++++++++++++++++++++++++
+
+Multiple fields can be included in a FITS dataset in several different ways.
+The first way, and the simplest, is if more than one image HDU is
+contained within the same file. The field names will be determined by the
+value of ``BTYPE`` in the header, and the field units will be determined by
+the value of ``BUNIT``. The second way is if a dataset has a fourth axis,
+with each slice along this axis corresponding to a different field. In this
+case, the field names will be determined by the value of the ``CTYPE4`` keyword
+and the index of the slice. So, for example, if ``BTYPE`` = ``"intensity"`` and
+``CTYPE4`` = ``"stokes"``, then the fields will be named
+``"intensity_stokes_1"``, ``"intensity_stokes_2"``, and so on.
+
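+As an illustrative check (the filename here is hypothetical), you can list the
+fields yt created after loading such a multi-field file:
+
+.. code-block:: python
+
+    ds = load("stokes_cube.fits")  # hypothetical 4D cube with CTYPE4 = "stokes"
+    print ds.field_list            # includes the "intensity_stokes_*" fields described above
+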
+The third way is if auxiliary files are included along with the main file, like so:
+
+.. code-block:: python
+
+    ds = load("flux.fits", auxiliary_files=["temp.fits","metal.fits"])
+
+The image blocks in each of these files will be loaded as a separate field,
+provided they have the same dimensions as the image blocks in the main file.
+
+Additionally, fields corresponding to the WCS coordinates will be generated
+based on the corresponding ``CTYPEx`` keywords. When queried, these fields
+will be generated from the pixel coordinates in the file using the WCS
+transformations provided by AstroPy.
+
+X-ray event data will be loaded as particle fields in yt, but a grid will be constructed from the
+WCS information in the FITS header. There is a helper function, ``setup_counts_fields``,
+which may be used to make deposited image fields from the event data for different energy bands
+(for an example see :ref:`xray_fits`).
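+
+A minimal sketch of this workflow, following the X-ray events notebook linked
+above (the event filename here is illustrative):
+
+.. code-block:: python
+
+    from yt.frontends.fits.api import setup_counts_fields
+    ebounds = [(0.1, 2.0), (2.0, 5.0)]   # energy bands in keV
+    setup_counts_fields(ebounds)         # creates "counts_0.1-2.0" and "counts_2.0-5.0"
+    ds = load("evt2.fits")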
+
+.. note::
+
+  Each FITS image from a single dataset, whether from one file or from one of
+  multiple files, must have the same dimensions and WCS information as the
+  first image in the primary file. If this is not the case,
+  yt will raise a warning and will not load this field.
+
+Additional Options
+++++++++++++++++++
+
+FITS image data may include ``NaNs``. If you wish to mask this data out,
+you may supply a ``nan_mask`` parameter to ``load``, which may either be a
+single floating-point number (applies to all fields) or a Python dictionary
+containing different mask values for different fields:
+
+.. code-block:: python
+
+  # passing a single float
+  ds = load("m33_hi.fits", nan_mask=0.0)
+
+  # passing a dict
+  ds = load("m33_hi.fits", nan_mask={"intensity":-1.0,"temperature":0.0})
+
+Generally, AstroPy may generate a lot of warnings about individual FITS
+files, many of which you may want to ignore. If you want to see these
+warnings, set ``suppress_astropy_warnings = False`` in the call to ``load``.
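+
+For example:
+
+.. code-block:: python
+
+  ds = load("m33_hi.fits", suppress_astropy_warnings=False)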
+
+Examples of Using FITS Data
++++++++++++++++++++++++++++
+
+The following IPython notebooks show examples of working with FITS data in yt:
+
+* :ref:`radio_cubes`
+* :ref:`xray_fits`
+
 .. _loading-moab-data:
 
 MOAB Data

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/visualizing/FITSImageBuffer.ipynb
--- /dev/null
+++ b/doc/source/visualizing/FITSImageBuffer.ipynb
@@ -0,0 +1,205 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:872f7525edd3c1ee09c67f6ecdd8552218df05ebe5ab73bcab55654edf0ac2bb"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "yt has capabilities for writing 2D and 3D uniformly gridded data generated from datasets to FITS files. This is done via the `FITSImageBuffer` class, which has the subclasses `FITSSlice` and `FITSProjection` for writing slices and projections directly to FITS. We'll test this out on an Athena dataset."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "import yt\n",
+      "from yt.utilities.fits_image import FITSImageBuffer, FITSSlice, FITSProjection"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load(\"MHDSloshing/virgo_low_res.0054.vtk\", parameters={\"length_unit\":(1.0,\"Mpc\"),\n",
+      "                                                               \"mass_unit\":(1.0e14,\"Msun\"),\n",
+      "                                                               \"time_unit\":(1.0,\"Myr\")})"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To demonstrate a useful example of creating a FITS file, let's first make a `ProjectionPlot`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"temperature\"], weight_field=\"density\", width=(500.,\"kpc\"))\n",
+      "prj.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Suppose that we wanted to write this projection to a FITS file for analysis and visualization in other programs, such as ds9. We can do that using `FITSProjection`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj_fits = FITSProjection(ds, \"z\", [\"temperature\"], weight_field=\"density\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which takes the same parameters as `ProjectionPlot`, except for the width: `FITSProjection` and `FITSSlice` always make slices and projections spanning the full width of the domain, at the finest resolution available in the simulation, in a unit determined to be appropriate for the physical size of the dataset. `prj_fits` is a full-fledged FITS file in memory, specifically an [AstroPy `HDUList`](http://astropy.readthedocs.org/en/latest/io/fits/api/hdulists.html) object. This means that we can use all of the methods inherited from `HDUList`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj_fits.info()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`info` shows us the contents of the virtual FITS file. We can also look at the header for the `\"temperature\"` image, like so:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj_fits[\"temperature\"].header"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "where we can see that the temperature units are in Kelvin and the cell widths are in kiloparsecs. The projection can be written to disk using the `writeto` method:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "prj_fits.writeto(\"sloshing.fits\", clobber=True)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Since yt can read FITS image files, it can be loaded up just like any other dataset:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds2 = yt.load(\"sloshing.fits\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "and we can make a `SlicePlot` of the 2D image, which shows the same data as the previous image:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc2 = yt.SlicePlot(ds2, \"z\", [\"temperature\"], width=(500.,\"kpc\"))\n",
+      "slc2.set_log(\"temperature\", True)\n",
+      "slc2.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "If you want more fine-grained control over what goes into the FITS file, you can call `FITSImageBuffer` directly, with various kinds of inputs. For example, you could use a `FixedResolutionBuffer`, and specify you want the units in parsecs instead:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc3 = ds.slice(0, 0.0)\n",
+      "frb = slc3.to_frb((500.,\"kpc\"), 800)\n",
+      "fib = FITSImageBuffer(frb, fields=[\"density\",\"temperature\"], units=\"pc\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, a 3D FITS cube can be created from a covering grid:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "cvg = ds.covering_grid(ds.index.max_level, [-0.5,-0.5,-0.5], [64, 64, 64], fields=[\"density\",\"temperature\"])\n",
+      "fib = FITSImageBuffer(cvg, fields=[\"density\",\"temperature\"], units=\"Mpc\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/visualizing/index.rst
--- a/doc/source/visualizing/index.rst
+++ b/doc/source/visualizing/index.rst
@@ -14,3 +14,4 @@
    mapserver
    streamlines
    colormaps/index
+   writing_fits_images

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f doc/source/visualizing/writing_fits_images.rst
--- /dev/null
+++ b/doc/source/visualizing/writing_fits_images.rst
@@ -0,0 +1,6 @@
+.. _writing_fits_images:
+
+Writing FITS Images
+==========================
+
+.. notebook:: FITSImageBuffer.ipynb
\ No newline at end of file

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/api.py
--- a/yt/analysis_modules/api.py
+++ b/yt/analysis_modules/api.py
@@ -114,3 +114,6 @@
      TableAbsorbModel, \
      PhotonModel, \
      ThermalPhotonModel
+
+from .ppv_cube.api import \
+    PPVCube

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py
@@ -13,7 +13,12 @@
 from yt.data_objects.data_containers import YTFieldData
 from yt.data_objects.time_series import DatasetSeries
 from yt.utilities.lib.CICDeposit import CICSample_3
+from yt.utilities.parallel_tools.parallel_analysis_interface import \
+    parallel_root_only
 from yt.funcs import *
+from yt.units.yt_array import array_like_field
+from yt.config import ytcfg
+from collections import OrderedDict
 
 import numpy as np
 import h5py
@@ -47,30 +52,21 @@
     >>> fields = ["particle_position_x", "particle_position_y",
     >>>           "particle_position_z", "particle_velocity_x",
     >>>           "particle_velocity_y", "particle_velocity_z"]
-    >>> pf = load(my_fns[0])
-    >>> init_sphere = pf.sphere(pf.domain_center, (.5, "unitary"))
+    >>> ds = load(my_fns[0])
+    >>> init_sphere = ds.sphere(ds.domain_center, (.5, "unitary"))
     >>> indices = init_sphere["particle_index"].astype("int")
     >>> trajs = ParticleTrajectories(my_fns, indices, fields=fields)
     >>> for t in trajs :
     >>>     print t["particle_velocity_x"].max(), t["particle_velocity_x"].min()
-
-    Notes
-    -----
-    As of this time only particle trajectories that are complete over the
-    set of specified parameter files are supported. If any particle's history
-    ends for some reason (e.g. leaving the simulation domain or being actively
-    destroyed), the whole trajectory collection of which it is a set must end
-    at or before the particle's last timestep. This is a limitation we hope to
-    lift at some point in the future.     
     """
     def __init__(self, filenames, indices, fields=None) :
 
         indices.sort() # Just in case the caller wasn't careful
-        
         self.field_data = YTFieldData()
-        self.pfs = DatasetSeries.from_filenames(filenames)
+        self.data_series = DatasetSeries.from_filenames(filenames)
         self.masks = []
         self.sorts = []
+        self.array_indices = []
         self.indices = indices
         self.num_indices = len(indices)
         self.num_steps = len(filenames)
@@ -79,54 +75,44 @@
         # Default fields 
         
         if fields is None: fields = []
+        fields.append("particle_position_x")
+        fields.append("particle_position_y")
+        fields.append("particle_position_z")
+        fields = list(OrderedDict.fromkeys(fields))
 
-        # Must ALWAYS have these fields
-        
-        fields = fields + ["particle_position_x",
-                           "particle_position_y",
-                           "particle_position_z"]
-
-        # Set up the derived field list and the particle field list
-        # so that if the requested field is a particle field, we'll
-        # just copy the field over, but if the field is a grid field,
-        # we will first interpolate the field to the particle positions
-        # and then return the field. 
-
-        pf = self.pfs[0]
-        self.derived_field_list = pf.derived_field_list
-        self.particle_fields = [field for field in self.derived_field_list
-                                if pf.field_info[field].particle_type]
-
-        """
-        The following loops through the parameter files
-        and performs two tasks. The first is to isolate
-        the particles with the correct indices, and the
-        second is to create a sorted list of these particles.
-        We also make a list of the current time from each file. 
-        Right now, the code assumes (and checks for) the
-        particle indices existing in each dataset, a limitation I
-        would like to lift at some point since some codes
-        (e.g., FLASH) destroy particles leaving the domain.
-        """
-        
-        for pf in self.pfs:
-            dd = pf.h.all_data()
-            newtags = dd["particle_index"].astype("int")
-            if not np.all(np.in1d(indices, newtags, assume_unique=True)):
-                print "Not all requested particle ids contained in this dataset!"
-                raise IndexError
+        old_level = int(ytcfg.get("yt","loglevel"))
+        mylog.setLevel(40)
+        my_storage = {}
+        pbar = get_pbar("Constructing trajectory information", len(self.data_series))
+        for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
+            dd = ds.all_data()
+            idx_field = dd._determine_fields("particle_index")[0]
+            newtags = dd[idx_field].ndarray_view().astype("int64")
             mask = np.in1d(newtags, indices, assume_unique=True)
             sorts = np.argsort(newtags[mask])
-            self.masks.append(mask)            
+            self.array_indices.append(np.where(np.in1d(indices, newtags, assume_unique=True))[0])
+            self.masks.append(mask)
             self.sorts.append(sorts)
-            self.times.append(pf.current_time)
+            sto.result_id = ds.parameter_filename
+            sto.result = ds.current_time
+            pbar.update(i)
+        pbar.finish()
 
-        self.times = np.array(self.times)
+        mylog.setLevel(old_level)
 
-        # Now instantiate the requested fields 
+        times = []
+        for fn, time in sorted(my_storage.items()):
+            times.append(time)
+
+        self.times = self.data_series[0].arr([time for time in times], times[0].units)
+
+        self.particle_fields = []
+
+        # Instantiate fields the caller requested
+
         for field in fields:
             self._get_data(field)
-            
+
     def has_key(self, key):
         return (key in self.field_data)
     
@@ -135,8 +121,7 @@
 
     def __getitem__(self, key):
         """
-        Get the field associated with key,
-        checking to make sure it is a particle field.
+        Get the field associated with key.
         """
         if key == "particle_time":
             return self.times
@@ -203,33 +188,48 @@
         with shape (num_indices, num_steps)
         """
         if not self.field_data.has_key(field):
-            particles = np.empty((0))
+            old_level = int(ytcfg.get("yt","loglevel"))
+            mylog.setLevel(40)
+            dd_first = self.data_series[0].all_data()
+            fd = dd_first._determine_fields(field)[0]
+            if field not in self.particle_fields:
+                if self.data_series[0].field_info[fd].particle_type:
+                    self.particle_fields.append(field)
+            particles = np.empty((self.num_indices,self.num_steps)) * np.nan
             step = int(0)
-            for pf, mask, sort in zip(self.pfs, self.masks, self.sorts):
+            pbar = get_pbar("Generating field %s in trajectories." % (field), self.num_steps)
+            my_storage={}
+            for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
+                mask = self.masks[i]
+                sort = self.sorts[i]
                 if field in self.particle_fields:
                     # This is easy... just get the particle fields
-                    dd = pf.h.all_data()
-                    pfield = dd[field][mask]
-                    particles = np.append(particles, pfield[sort])
+                    dd = ds.all_data()
+                    pfield = dd[fd].ndarray_view()[mask][sort]
                 else:
                     # This is hard... must loop over grids
                     pfield = np.zeros((self.num_indices))
-                    x = self["particle_position_x"][:,step]
-                    y = self["particle_position_y"][:,step]
-                    z = self["particle_position_z"][:,step]
-                    particle_grids, particle_grid_inds = pf.h.find_points(x,y,z)
+                    x = self["particle_position_x"][:,step].ndarray_view()
+                    y = self["particle_position_y"][:,step].ndarray_view()
+                    z = self["particle_position_z"][:,step].ndarray_view()
+                    particle_grids, particle_grid_inds = ds.index.find_points(x,y,z)
                     for grid in particle_grids:
-                        cube = grid.retrieve_ghost_zones(1, [field])
+                        cube = grid.retrieve_ghost_zones(1, [fd])
                         CICSample_3(x,y,z,pfield,
                                     self.num_indices,
-                                    cube[field],
+                                    cube[fd],
                                     np.array(grid.LeftEdge).astype(np.float64),
                                     np.array(grid.ActiveDimensions).astype(np.int32),
-                                    np.float64(grid['dx']))
-                    particles = np.append(particles, pfield)
+                                    grid.dds[0])
+                sto.result_id = ds.parameter_filename
+                sto.result = (self.array_indices[i], pfield)
+                pbar.update(step)
                 step += 1
-            self[field] = particles.reshape(self.num_steps,
-                                            self.num_indices).transpose()
+            pbar.finish()
+            for i, (fn, (indices, pfield)) in enumerate(sorted(my_storage.items())):
+                particles[indices,i] = pfield
+            self.field_data[field] = array_like_field(dd_first, particles, fd)
+            mylog.setLevel(old_level)
         return self.field_data[field]
 
     def trajectory_from_index(self, index):
@@ -269,6 +269,7 @@
             traj[field] = self[field][mask,:][0]
         return traj
 
+    @parallel_root_only
     def write_out(self, filename_base):
         """
         Write out particle trajectories to tab-separated ASCII files (one
@@ -299,7 +300,8 @@
             fid.writelines(outlines)
             fid.close()
             del fid
-            
+
+    @parallel_root_only
     def write_out_h5(self, filename):
         """
         Write out all the particle trajectories to a single HDF5 file
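
Combined with the docstring example above, a minimal sketch of the refactored workflow, assuming ParticleTrajectories is importable from the particle_trajectories analysis module's api (the dataset glob and output names are illustrative):

    # Hedged sketch of the updated ParticleTrajectories usage; the glob
    # pattern and output prefixes are illustrative only.
    import glob
    from yt.mods import *
    from yt.analysis_modules.particle_trajectories.api import \
        ParticleTrajectories

    my_fns = sorted(glob.glob("DD*/output_*"))   # hypothetical filenames
    fields = ["particle_velocity_x", "particle_velocity_y",
              "particle_velocity_z"]

    ds = load(my_fns[0])
    init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary"))
    indices = init_sphere["particle_index"].astype("int")

    trajs = ParticleTrajectories(my_fns, indices, fields=fields)
    trajs.write_out("trajectory")        # now runs on the root processor only
    trajs.write_out_h5("trajectories.h5")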

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/ppv_cube/api.py
--- /dev/null
+++ b/yt/analysis_modules/ppv_cube/api.py
@@ -0,0 +1,12 @@
+"""
+API for ppv_cube
+"""
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from ppv_cube import PPVCube

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/ppv_cube/ppv_cube.py
--- /dev/null
+++ b/yt/analysis_modules/ppv_cube/ppv_cube.py
@@ -0,0 +1,164 @@
+"""
+Generating PPV FITS cubes
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+from yt.frontends.fits.data_structures import ap
+from yt.utilities.orientation import Orientation
+from yt.utilities.fits_image import FITSImageBuffer
+from yt.visualization.volume_rendering.camera import off_axis_projection
+from yt.funcs import get_pbar
+
+def create_intensity(vmin, vmax, ifield):
+    def _intensity(field, data):
+        idxs = (data["v_los"] >= vmin) & (data["v_los"] < vmax)
+        f = np.zeros(data[ifield].shape)
+        f[idxs] = data[ifield][idxs]
+        return f
+    return _intensity
+
+def create_vlos(z_hat):
+    def _v_los(field, data):
+        vz = data["velocity_x"]*z_hat[0] + \
+             data["velocity_y"]*z_hat[1] + \
+             data["velocity_z"]*z_hat[2]
+        return -vz
+    return _v_los
+
+class PPVCube(object):
+    def __init__(self, ds, normal, field, width=(1.0,"unitary"),
+                 dims=(100,100,100), velocity_bounds=None):
+        r""" Initialize a PPVCube object.
+
+        Parameters
+        ----------
+        ds : dataset
+            The dataset.
+        normal : array_like
+            The normal vector along which to make the projections.
+        field : string
+            The field to project.
+        width : float or tuple, optional
+            The width of the projection in length units. Specify a float
+            for code_length units or a tuple (value, units).
+        dims : tuple, optional
+            A 3-tuple of dimensions (nx,ny,nv) for the cube.
+        velocity_bounds : tuple, optional
+            A 3-tuple of (vmin, vmax, units) for the velocity bounds to
+            integrate over. If None, the largest velocity of the
+            dataset will be used, e.g. velocity_bounds = (-v.max(), v.max())
+
+        Examples
+        --------
+        >>> i = 60*np.pi/180.
+        >>> L = [0.0,np.sin(i),np.cos(i)]
+        >>> cube = PPVCube(ds, L, "density", width=(10.,"kpc"),
+        ...                velocity_bounds=(-5.,4.,"km/s"))
+        """
+        self.ds = ds
+        self.field = field
+        self.width = width
+
+        self.nx = dims[0]
+        self.ny = dims[1]
+        self.nv = dims[2]
+
+        normal = np.array(normal)
+        normal /= np.sqrt(np.dot(normal, normal))
+        vecs = np.identity(3)
+        t = np.cross(normal, vecs).sum(axis=1)
+        ax = t.argmax()
+        north = np.cross(normal, vecs[ax,:]).ravel()
+        orient = Orientation(normal, north_vector=north)
+
+        dd = ds.all_data()
+
+        fd = dd._determine_fields(field)[0]
+
+        self.field_units = ds._get_field_info(fd).units
+
+        if velocity_bounds is None:
+            vmin, vmax = dd.quantities.extrema("velocity_magnitude")
+            self.v_bnd = -vmax, vmax
+        else:
+            self.v_bnd = (ds.quan(velocity_bounds[0], velocity_bounds[2]),
+                     ds.quan(velocity_bounds[1], velocity_bounds[2]))
+
+        vbins = np.linspace(self.v_bnd[0], self.v_bnd[1], num=self.nv+1)
+
+        _vlos = create_vlos(orient.unit_vectors[2])
+        ds.field_info.add_field(("gas","v_los"), function=_vlos, units="cm/s")
+
+        self.data = ds.arr(np.zeros((self.nx,self.ny,self.nv)), self.field_units)
+        pbar = get_pbar("Generating cube.", self.nv)
+        for i in xrange(self.nv):
+            v1 = vbins[i]
+            v2 = vbins[i+1]
+            _intensity = create_intensity(v1, v2, field)
+            ds.field_info.add_field(("gas","intensity"),
+                                    function=_intensity, units=self.field_units)
+            prj = off_axis_projection(ds, ds.domain_center, normal, width,
+                                      (self.nx, self.ny), "intensity")
+            self.data[:,:,i] = prj[:,:]
+            ds.field_info.pop(("gas","intensity"))
+            pbar.update(i)
+
+        pbar.finish()
+
+    def write_fits(self, filename, clobber=True, length_unit=(10.0, "kpc"),
+                   sky_center=(30.,45.)):
+        r""" Write the PPVCube to a FITS file.
+
+        Parameters
+        ----------
+        filename : string
+            The name of the file to write.
+        clobber : boolean
+            Whether or not to clobber an existing file with the same name.
+        length_unit : tuple, optional
+            The length that corresponds to the width of the projection in
+            (value, unit) form. Accepts a length unit or 'deg'.
+        sky_center : tuple, optional
+            The (RA, Dec) coordinate in degrees of the central pixel if
+            *length_unit* is 'deg'.
+
+        Examples
+        --------
+        >>> cube.write_fits("my_cube.fits", clobber=False, length_unit=(5,"deg"))
+        """
+        if length_unit[1] == "deg":
+            center = sky_center
+            types = ["RA---SIN","DEC--SIN"]
+        else:
+            center = [0.0,0.0]
+            types = ["LINEAR","LINEAR"]
+
+        v_center = 0.5*(self.v_bnd[0]+self.v_bnd[1]).in_units("m/s").value
+
+        dx = length_unit[0]/self.nx
+        dy = length_unit[0]/self.ny
+        dv = (self.v_bnd[1]-self.v_bnd[0]).in_units("m/s").value/self.nv
+
+        if length_unit[1] == "deg":
+            dx *= -1.
+
+        w = ap.pywcs.WCS(naxis=3)
+        w.wcs.crpix = [0.5*(self.nx+1), 0.5*(self.ny+1), 0.5*(self.nv+1)]
+        w.wcs.cdelt = [dx,dy,dv]
+        w.wcs.crval = [center[0], center[1], v_center]
+        w.wcs.cunit = [length_unit[1],length_unit[1],"m/s"]
+        w.wcs.ctype = [types[0],types[1],"VELO-LSR"]
+
+        fib = FITSImageBuffer(self.data.transpose(), fields=self.field, wcs=w)
+        fib[0].header["bunit"] = self.field_units
+        fib[0].header["btype"] = self.field
+
+        fib.writeto(filename, clobber=clobber)
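
Putting the constructor and write_fits examples from the docstrings together, a minimal end-to-end sketch, assuming a rotating-disk dataset has been loaded (the dataset path, cube dimensions, and output filename are illustrative):

    # Hedged sketch combining the PPVCube and write_fits docstring examples;
    # the dataset path and numeric choices are illustrative only.
    import numpy as np
    from yt.mods import *
    from yt.analysis_modules.api import PPVCube

    ds = load("disk_galaxy/galaxy0030")            # hypothetical dataset
    i = 60*np.pi/180.                              # 60 degree inclination
    L = [0.0, np.sin(i), np.cos(i)]                # line-of-sight normal

    cube = PPVCube(ds, L, "density", width=(10., "kpc"),
                   dims=(200, 100, 50),
                   velocity_bounds=(-1.5, 1.5, "km/s"))
    cube.write_fits("cube.fits", clobber=True, length_unit=(5., "deg"))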

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/ppv_cube/setup.py
--- /dev/null
+++ b/yt/analysis_modules/ppv_cube/setup.py
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+import setuptools
+import os
+import sys
+import os.path
+
+
+def configuration(parent_package='', top_path=None):
+    from numpy.distutils.misc_util import Configuration
+    config = Configuration('ppv_cube', parent_package, top_path)
+    #config.add_subpackage("tests")
+    config.make_config_py()  # installs __config__.py
+    #config.make_svn_version_py()
+    return config

diff -r 4ce853670d87f1463676208c2340e6af1c5e7916 -r 85cf74b2256cfe93057f6858619f5f2fdbf4e55f yt/analysis_modules/setup.py
--- a/yt/analysis_modules/setup.py
+++ b/yt/analysis_modules/setup.py
@@ -23,4 +23,5 @@
     config.add_subpackage("sunyaev_zeldovich")
     config.add_subpackage("particle_trajectories")
     config.add_subpackage("photon_simulator")
+    config.add_subpackage("ppv_cube")
     return config

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/
