[yt-svn] commit/yt: xarthisius: Merged in jzuhone/yt (pull request #2496)

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Thu Feb 9 06:49:46 PST 2017


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/441ff5e2dcd3/
Changeset:   441ff5e2dcd3
Branch:      yt
User:        xarthisius
Date:        2017-02-09 14:49:38+00:00
Summary:     Merged in jzuhone/yt (pull request #2496)

Move particle trajectories to data_objects and make them a DatasetSeries method.

Approved-by: Nathan Goldbaum
Approved-by: Andrew Myers
Approved-by: Kacper Kowalik
Affected #:  13 files
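The summary above describes an API move: trajectories are now created from a `DatasetSeries` rather than by instantiating `ParticleTrajectories` directly. A sketch of the before/after call pattern, using a stand-in class (the stub below is illustrative only; it mirrors the new method's signature, not yt's implementation):

```python
# Old (deprecated): trajs = ParticleTrajectories(my_fns, indices, fields=fields)
# New:              trajs = ts.particle_trajectories(indices, fields=fields)

# Stub standing in for yt.DatasetSeries, to show the call shape only.
class DatasetSeriesStub:
    def particle_trajectories(self, indices, fields=None, suppress_logging=False):
        # A real DatasetSeries would iterate over its datasets and gather
        # the requested fields; here we just echo the arguments back.
        return {"indices": sorted(indices), "fields": list(fields or [])}

ts = DatasetSeriesStub()
trajs = ts.particle_trajectories([1, 0], fields=["particle_velocity_x"])
```

Note that, as in the real class, indices are sorted on construction.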

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/Particle_Trajectories.ipynb
--- /dev/null
+++ b/doc/source/analyzing/Particle_Trajectories.ipynb
@@ -0,0 +1,390 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One can create particle trajectories from a `DatasetSeries` object for a specified list of particles identified by their unique indices using the `particle_trajectories` method. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "%matplotlib inline\n",
+    "import yt\n",
+    "import glob\n",
+    "from yt.config import ytcfg\n",
+    "path = ytcfg.get(\"yt\", \"test_data_dir\")\n",
+    "import matplotlib.pyplot as plt\n",
+    "from mpl_toolkits.mplot3d import Axes3D"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+    "my_fns.sort()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(my_fns[0])\n",
+    "dd = ds.all_data()\n",
+    "indices = dd[\"particle_index\"].astype(\"int\")\n",
+    "print (indices)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "which is what we expected them to be. Now we're ready to create a `DatasetSeries` object and use it to create particle trajectories: "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ts = yt.DatasetSeries(my_fns)\n",
+    "# suppress_logging=True cuts down on a lot of noise\n",
+    "trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (trajs[\"particle_position_x\"])\n",
+    "print (trajs[\"particle_position_x\"].shape)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.figure(figsize=(6, 6))\n",
+    "plt.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
+    "plt.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And we can plot the velocity fields as well:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.figure(figsize=(6, 6))\n",
+    "plt.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
+    "plt.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.figure(figsize=(6, 6))\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "particle1 = trajs.trajectory_from_index(1)\n",
+    "plt.figure(figsize=(6, 6))\n",
+    "plt.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
+    "plt.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. We'll find the maximum density in the domain and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+    "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n",
+    "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+    "my_fns.sort()\n",
+    "ts = yt.DatasetSeries(my_fns)\n",
+    "trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "fig = plt.figure(figsize=(8.0, 8.0))\n",
+    "ax = fig.add_subplot(111, projection='3d')\n",
+    "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_y\"][100], trajs[\"particle_position_z\"][100])\n",
+    "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_y\"][8], trajs[\"particle_position_z\"][8])\n",
+    "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_y\"][25], trajs[\"particle_position_z\"][25])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.figure(figsize=(6,6))\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "trajs.add_fields([\"density\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "plt.figure(figsize=(6,6))\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"density\"][100])\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"density\"][8])\n",
+    "plt.plot(trajs[\"particle_time\"], trajs[\"density\"][25])\n",
+    "plt.yscale(\"log\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "trajs.write_out(\"halo_trajectories\") # This will write a separate file for each trajectory\n",
+    "trajs.write_out_h5(\"halo_trajectories.h5\") # This will write all trajectories to a single file"
+   ]
+  }
+ ],
+ "metadata": {
+  "anaconda-cloud": {},
+  "kernelspec": {
+   "display_name": "Python [default]",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
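As the new notebook notes, each trajectory field is a 2D array indexed first by particle and then by time step. A minimal stdlib-only sketch of that dictionary-like access pattern (the class and method names here are illustrative, not yt's actual implementation):

```python
# Illustrative stand-in for the dictionary-like ParticleTrajectories
# access pattern described in the notebook above.
class TrajectoryFields:
    def __init__(self, num_particles, num_steps):
        self.num_particles = num_particles
        self.num_steps = num_steps
        # one 2D table per field: rows = particles, columns = time steps
        self.field_data = {}

    def set_step(self, field, step, values):
        """Record one value per particle for a single time step."""
        table = self.field_data.setdefault(
            field, [[None] * self.num_steps for _ in range(self.num_particles)])
        for p, v in enumerate(values):
            table[p][step] = v

    def __getitem__(self, field):
        return self.field_data[field]

trajs = TrajectoryFields(num_particles=2, num_steps=3)
for step, xs in enumerate([[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]]):
    trajs.set_step("particle_position_x", step, xs)
```

Indexing `trajs["particle_position_x"][0]` then yields particle 0's x-position across all time steps, matching the `[particle, step]` layout the notebook describes.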

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ /dev/null
@@ -1,385 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "%matplotlib inline\n",
-    "import yt\n",
-    "import glob\n",
-    "from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
-    "from yt.config import ytcfg\n",
-    "path = ytcfg.get(\"yt\", \"test_data_dir\")\n",
-    "import matplotlib.pyplot as plt"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
-    "my_fns.sort()"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "ds = yt.load(my_fns[0])\n",
-    "dd = ds.all_data()\n",
-    "indices = dd[\"particle_index\"].astype(\"int\")\n",
-    "print (indices)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "print (trajs[\"particle_position_x\"])\n",
-    "print (trajs[\"particle_position_x\"].shape)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "plt.plot(trajs[\"particle_position_x\"][0].ndarray_view(), trajs[\"particle_position_y\"][0].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_position_x\"][1].ndarray_view(), trajs[\"particle_position_y\"][1].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "And we can plot the velocity fields as well:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "plt.plot(trajs[\"particle_velocity_x\"][0].ndarray_view(), trajs[\"particle_velocity_y\"][0].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_velocity_x\"][1].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_x\"][1].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_velocity_y\"][1].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "particle1 = trajs.trajectory_from_index(1)\n",
-    "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_x\"].ndarray_view())\n",
-    "plt.plot(particle1[\"particle_time\"].ndarray_view(), particle1[\"particle_position_y\"].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-    "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
-    "slc.show()"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "sp = ds.sphere(\"max\", (0.5, \"Mpc\"))\n",
-    "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Next we'll get the list of datasets we want, and create trajectories for these particles:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
-    "my_fns.sort()\n",
-    "trajs = ParticleTrajectories(my_fns, indices)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "import matplotlib.pyplot as plt\n",
-    "from mpl_toolkits.mplot3d import Axes3D\n",
-    "fig = plt.figure(figsize=(8.0, 8.0))\n",
-    "ax = fig.add_subplot(111, projection='3d')\n",
-    "ax.plot(trajs[\"particle_position_x\"][100].ndarray_view(), trajs[\"particle_position_z\"][100].ndarray_view(), \n",
-    "        trajs[\"particle_position_z\"][100].ndarray_view())\n",
-    "ax.plot(trajs[\"particle_position_x\"][8].ndarray_view(), trajs[\"particle_position_z\"][8].ndarray_view(), \n",
-    "        trajs[\"particle_position_z\"][8].ndarray_view())\n",
-    "ax.plot(trajs[\"particle_position_x\"][25].ndarray_view(), trajs[\"particle_position_z\"][25].ndarray_view(), \n",
-    "        trajs[\"particle_position_z\"][25].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][100].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][8].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"particle_position_x\"][25].ndarray_view())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "trajs.add_fields([\"density\"])"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We also could have included `\"density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][100].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][8].ndarray_view())\n",
-    "plt.plot(trajs[\"particle_time\"].ndarray_view(), trajs[\"density\"][25].ndarray_view())\n",
-    "plt.yscale(\"log\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": [],
-   "source": [
-    "trajs.write_out(\"halo_trajectories\") # This will write a separate file for each trajectory\n",
-    "trajs.write_out_h5(\"halo_trajectories.h5\") # This will write all trajectories to a single file"
-   ]
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.5.1"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 0
-}

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/analysis_modules/index.rst
--- a/doc/source/analyzing/analysis_modules/index.rst
+++ b/doc/source/analyzing/analysis_modules/index.rst
@@ -18,4 +18,3 @@
    exporting
    two_point_functions
    clump_finding
-   particle_trajectories

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/analysis_modules/particle_trajectories.rst
--- a/doc/source/analyzing/analysis_modules/particle_trajectories.rst
+++ /dev/null
@@ -1,6 +0,0 @@
-.. _particle-trajectories:
-
-Particle Trajectories
----------------------
-
-.. notebook:: Particle_Trajectories.ipynb

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/index.rst
--- a/doc/source/analyzing/index.rst
+++ b/doc/source/analyzing/index.rst
@@ -22,4 +22,5 @@
    generating_processed_data
    saving_data
    time_series_analysis
+   particle_trajectories
    parallel_computation

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/analyzing/particle_trajectories.rst
--- /dev/null
+++ b/doc/source/analyzing/particle_trajectories.rst
@@ -0,0 +1,6 @@
+.. _particle-trajectories:
+
+Particle Trajectories
+---------------------
+
+.. notebook:: Particle_Trajectories.ipynb

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -130,6 +130,7 @@
    ~yt.data_objects.time_series.DatasetSeriesObject
    ~yt.data_objects.time_series.TimeSeriesQuantitiesContainer
    ~yt.data_objects.time_series.AnalysisTaskProxy
+   ~yt.data_objects.particle_trajectories.ParticleTrajectories
 
 Geometry Handlers
 -----------------

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b tests/tests.yaml
--- a/tests/tests.yaml
+++ b/tests/tests.yaml
@@ -104,6 +104,9 @@
   local_axialpix_001:
     - yt/geometry/coordinates/tests/test_axial_pixelization.py:test_axial_pixelization
 
+  local_particle_trajectory_000:
+    - yt/data_objects/tests/test_particle_trajectories.py
+
 other_tests:
   unittests:
      - '--exclude=test_mesh_slices'  # disable randomly failing test

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b yt/analysis_modules/particle_trajectories/api.py
--- a/yt/analysis_modules/particle_trajectories/api.py
+++ b/yt/analysis_modules/particle_trajectories/api.py
@@ -1,13 +1,6 @@
-"""
-API for particle_trajectories
-"""
-from __future__ import absolute_import
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
+from yt.funcs import issue_deprecation_warning
 
-from .particle_trajectories import ParticleTrajectories
+issue_deprecation_warning("Particle trajectories are now available from DatasetSeries "
+                          "objects as ts.particle_trajectories. The ParticleTrajectories "
+                          "analysis module is deprecated.")
+from yt.data_objects.particle_trajectories import ParticleTrajectories
\ No newline at end of file
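The shim above warns callers at import time and then re-exports the relocated class. The same deprecation-shim pattern can be sketched with the standard library's `warnings` module (the function name below is illustrative, not yt's `issue_deprecation_warning`):

```python
import warnings

def issue_deprecation_warning_sketch(msg):
    # stacklevel=2 attributes the warning to the importing caller,
    # not to this shim module itself.
    warnings.warn(msg, DeprecationWarning, stacklevel=2)

# A deprecated alias module would call this at import time and then
# re-export the relocated name, e.g.:
#     issue_deprecation_warning_sketch("ParticleTrajectories has moved ...")
#     from new_location import ParticleTrajectories
issue_deprecation_warning_sketch("example: this module has moved")
```

Note that `DeprecationWarning` is hidden by default outside of `__main__` and test runners, so users typically only see it when running with `-W default` or under a test framework.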

diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ /dev/null
@@ -1,383 +0,0 @@
-"""
-Particle trajectories
-"""
-from __future__ import print_function
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-from yt.data_objects.field_data import YTFieldData
-from yt.data_objects.time_series import DatasetSeries
-from yt.utilities.lib.particle_mesh_operations import CICSample_3
-from yt.utilities.parallel_tools.parallel_analysis_interface import \
-    parallel_root_only
-from yt.funcs import mylog, get_pbar
-from yt.units.yt_array import array_like_field
-from yt.config import ytcfg
-from collections import OrderedDict
-
-import numpy as np
-from yt.utilities.on_demand_imports import _h5py as h5py
-
-class ParticleTrajectories(object):
-    r"""A collection of particle trajectories in time over a series of
-    datasets. 
-
-    The ParticleTrajectories object contains a collection of
-    particle trajectories for a specified set of particle indices. 
-    
-    Parameters
-    ----------
-    outputs : `yt.data_objects.time_series.DatasetSeries` or list of strings
-        DatasetSeries object, or a time-sorted list of filenames to
-        construct a new DatasetSeries object.
-    indices : array_like
-        An integer array of particle indices whose trajectories we
-        want to track. If they are not sorted they will be sorted.
-    fields : list of strings, optional
-        A set of fields that is retrieved when the trajectory
-        collection is instantiated.
-        Default : None (will default to the fields 'particle_position_x',
-        'particle_position_y', 'particle_position_z')
-    suppress_logging : boolean
-        Suppress yt's logging when iterating over the simulation time
-        series.
-        Default : False
-
-    Examples
-    ________
-    >>> from yt.mods import *
-    >>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
-    >>> my_fns.sort()
-    >>> fields = ["particle_position_x", "particle_position_y",
-    >>>           "particle_position_z", "particle_velocity_x",
-    >>>           "particle_velocity_y", "particle_velocity_z"]
-    >>> ds = load(my_fns[0])
-    >>> init_sphere = ds.sphere(ds.domain_center, (.5, "unitary"))
-    >>> indices = init_sphere["particle_index"].astype("int")
-    >>> trajs = ParticleTrajectories(my_fns, indices, fields=fields)
-    >>> for t in trajs :
-    >>>     print t["particle_velocity_x"].max(), t["particle_velocity_x"].min()
-    """
-    def __init__(self, outputs, indices, fields=None, suppress_logging=False):
-
-        indices.sort() # Just in case the caller wasn't careful
-        self.field_data = YTFieldData()
-        if isinstance(outputs, DatasetSeries):
-            self.data_series = outputs
-        else:
-            self.data_series = DatasetSeries(outputs)
-        self.masks = []
-        self.sorts = []
-        self.array_indices = []
-        self.indices = indices
-        self.num_indices = len(indices)
-        self.num_steps = len(outputs)
-        self.times = []
-        self.suppress_logging = suppress_logging
-
-        if fields is None: fields = []
-        fields = list(OrderedDict.fromkeys(fields))
-
-        if self.suppress_logging:
-            old_level = int(ytcfg.get("yt","loglevel"))
-            mylog.setLevel(40)
-        
-        fds = {}
-        ds_first = self.data_series[0]
-        dd_first = ds_first.all_data()
-        idx_field = dd_first._determine_fields("particle_index")[0]
-        for field in ("particle_position_%s" % ax for ax in "xyz"):
-            fds[field] = dd_first._determine_fields(field)[0]
-
-        my_storage = {}
-        pbar = get_pbar("Constructing trajectory information", len(self.data_series))
-        for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
-            dd = ds.all_data()
-            newtags = dd[idx_field].d.astype("int64")
-            mask = np.in1d(newtags, indices, assume_unique=True)
-            sort = np.argsort(newtags[mask])
-            array_indices = np.where(np.in1d(indices, newtags, assume_unique=True))[0]
-            self.array_indices.append(array_indices)
-            self.masks.append(mask)
-            self.sorts.append(sort)
-
-            pfields = {}
-            for field in ("particle_position_%s" % ax for ax in "xyz"):
-                pfields[field] = dd[fds[field]].ndarray_view()[mask][sort]
-
-            sto.result_id = ds.parameter_filename
-            sto.result = (ds.current_time, array_indices, pfields)
-            pbar.update(i)
-        pbar.finish()
-
-        if self.suppress_logging:
-            mylog.setLevel(old_level)
-
-        times = []
-        for fn, (time, indices, pfields) in sorted(my_storage.items()):
-            times.append(time)
-        self.times = self.data_series[0].arr([time for time in times], times[0].units)
-
-        self.particle_fields = []
-        output_field = np.empty((self.num_indices, self.num_steps))
-        output_field.fill(np.nan)
-        for field in ("particle_position_%s" % ax for ax in "xyz"):
-            for i, (fn, (time, indices, pfields)) in enumerate(sorted(my_storage.items())):
-                output_field[indices, i] = pfields[field]
-            self.field_data[field] = array_like_field(
-                dd_first, output_field.copy(), fds[field])
-            self.particle_fields.append(field)
-
-        # Instantiate fields the caller requested
-        self._get_data(fields)
-
-    def has_key(self, key):
-        return (key in self.field_data)
-    
-    def keys(self):
-        return self.field_data.keys()
-
-    def __getitem__(self, key):
-        """
-        Get the field associated with key.
-        """
-        if key == "particle_time":
-            return self.times
-        if key not in self.field_data:
-            self._get_data([key])
-        return self.field_data[key]
-    
-    def __setitem__(self, key, val):
-        """
-        Sets a field to be some other value.
-        """
-        self.field_data[key] = val
-                        
-    def __delitem__(self, key):
-        """
-        Delete the field from the trajectory
-        """
-        del self.field_data[key]
-
-    def __iter__(self):
-        """
-        This iterates over the trajectories for
-        the different particles, returning dicts
-        of fields for each trajectory
-        """
-        for idx in range(self.num_indices):
-            traj = {}
-            traj["particle_index"] = self.indices[idx]
-            traj["particle_time"] = self.times
-            for field in self.field_data.keys():
-                traj[field] = self[field][idx,:]
-            yield traj
-            
-    def __len__(self):
-        """
-        The number of individual trajectories
-        """
-        return self.num_indices
-
-    def add_fields(self, fields):
-        """
-        Add a list of fields to an existing trajectory
-
-        Parameters
-        ----------
-        fields : list of strings
-            A list of fields to be added to the current trajectory
-            collection.
-
-        Examples
-        --------
-        >>> trajs = ParticleTrajectories(my_fns, indices)
-        >>> trajs.add_fields(["particle_mass", "particle_gpot"])
-        """
-        self._get_data(fields)
-                
-    def _get_data(self, fields):
-        """
-        Get a list of fields to include in the trajectory collection.
-        The trajectory collection itself is a dict of 2D numpy arrays,
-        with shape (num_indices, num_steps)
-        """
-
-        missing_fields = [field for field in fields
-                          if field not in self.field_data]
-        if not missing_fields:
-            return
-
-        if self.suppress_logging:
-            old_level = int(ytcfg.get("yt","loglevel"))
-            mylog.setLevel(40)
-        ds_first = self.data_series[0]
-        dd_first = ds_first.all_data()
-
-        fds = {}
-        new_particle_fields = []
-        for field in missing_fields:
-            fds[field] = dd_first._determine_fields(field)[0]
-            if field not in self.particle_fields:
-                if self.data_series[0]._get_field_info(*fds[field]).particle_type:
-                    self.particle_fields.append(field)
-                    new_particle_fields.append(field)
-                    
-
-        grid_fields = [field for field in missing_fields
-                       if field not in self.particle_fields]
-        step = int(0)
-        pbar = get_pbar("Generating [%s] fields in trajectories." %
-                        ", ".join(missing_fields), self.num_steps)
-        my_storage = {}
-        
-        for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
-            mask = self.masks[i]
-            sort = self.sorts[i]
-            pfield = {}
-
-            if new_particle_fields:  # there's at least one particle field
-                dd = ds.all_data()
-                for field in new_particle_fields:
-                    # This is easy... just get the particle fields
-                    pfield[field] = dd[fds[field]].d[mask][sort]
-
-            if grid_fields:
-                # This is hard... must loop over grids
-                for field in grid_fields:
-                    pfield[field] = np.zeros(self.num_indices)
-                x = self["particle_position_x"][:,step].d
-                y = self["particle_position_y"][:,step].d
-                z = self["particle_position_z"][:,step].d
-                particle_grids, particle_grid_inds = ds.index._find_points(x,y,z)
-
-                # This will fail for non-grid index objects
-                for grid in particle_grids:
-                    cube = grid.retrieve_ghost_zones(1, grid_fields)
-                    for field in grid_fields:
-                        CICSample_3(x, y, z, pfield[field],
-                                    self.num_indices,
-                                    cube[fds[field]],
-                                    np.array(grid.LeftEdge).astype(np.float64),
-                                    np.array(grid.ActiveDimensions).astype(np.int32),
-                                    grid.dds[0])
-            sto.result_id = ds.parameter_filename
-            sto.result = (self.array_indices[i], pfield)
-            pbar.update(step)
-            step += 1
-        pbar.finish()
-
-        output_field = np.empty((self.num_indices,self.num_steps))
-        output_field.fill(np.nan)
-        for field in missing_fields:
-            fd = fds[field]
-            for i, (fn, (indices, pfield)) in enumerate(sorted(my_storage.items())):
-                output_field[indices, i] = pfield[field]
-            self.field_data[field] = array_like_field(dd_first, output_field.copy(), fd)
-
-        if self.suppress_logging:
-            mylog.setLevel(old_level)
-
-    def trajectory_from_index(self, index):
-        """
-        Retrieve a single trajectory corresponding to a specific particle
-        index
-
-        Parameters
-        ----------
-        index : int
-            This defines which particle trajectory from the
-            ParticleTrajectories object will be returned.
-
-        Returns
-        -------
-        A dictionary corresponding to the particle's trajectory and the
-        fields along that trajectory
-
-        Examples
-        --------
-        >>> from yt.mods import *
-        >>> import matplotlib.pylab as pl
-        >>> trajs = ParticleTrajectories(my_fns, indices)
-        >>> traj = trajs.trajectory_from_index(indices[0])
-        >>> pl.plot(traj["particle_time"], traj["particle_position_x"], "-x")
-        >>> pl.savefig("orbit")
-        """
-        mask = np.in1d(self.indices, (index,), assume_unique=True)
-        if not np.any(mask):
-            print("The particle index %d is not in the list!" % (index))
-            raise IndexError
-        fields = [field for field in sorted(self.field_data.keys())]
-        traj = {}
-        traj["particle_time"] = self.times
-        traj["particle_index"] = index
-        for field in fields:
-            traj[field] = self[field][mask,:][0]
-        return traj
-
-    @parallel_root_only
-    def write_out(self, filename_base):
-        """
-        Write out particle trajectories to tab-separated ASCII files (one
-        for each trajectory) with the field names in the file header. Each
-        file is named with a basename and the index number.
-
-        Parameters
-        ----------
-        filename_base : string
-            The prefix for the outputted ASCII files.
-
-        Examples
-        --------
-        >>> from yt.mods import *
-        >>> trajs = ParticleTrajectories(my_fns, indices)
-        >>> trajs.write_out("orbit_trajectory")       
-        """
-        fields = [field for field in sorted(self.field_data.keys())]
-        num_fields = len(fields)
-        first_str = "# particle_time\t" + "\t".join(fields)+"\n"
-        template_str = "%g\t"*num_fields+"%g\n"
-        for ix in range(self.num_indices):
-            outlines = [first_str]
-            for it in range(self.num_steps):
-                outlines.append(template_str %
-                                tuple([self.times[it]]+[self[field][ix,it] for field in fields]))
-            fid = open(filename_base + "_%d.dat" % self.indices[ix], "w")
-            fid.writelines(outlines)
-            fid.close()
-            del fid
-
-    @parallel_root_only
-    def write_out_h5(self, filename):
-        """
-        Write out all the particle trajectories to a single HDF5 file
-        that contains the indices, the times, and the 2D array for each
-        field individually
-
-        Parameters
-        ----------
-
-        filename : string
-            The output filename for the HDF5 file
-
-        Examples
-        --------
-
-        >>> from yt.mods import *
-        >>> trajs = ParticleTrajectories(my_fns, indices)
-        >>> trajs.write_out_h5("orbit_trajectories")                
-        """
-        fid = h5py.File(filename, "w")
-        fid.create_dataset("particle_indices", dtype=np.int64,
-                           data=self.indices)
-        fid.close()
-        self.times.write_hdf5(filename, dataset_name="particle_times")
-        fields = [field for field in sorted(self.field_data.keys())]
-        for field in fields:
-            self[field].write_hdf5(filename, dataset_name="%s" % field)

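The grid-field branch of `_get_data` above interpolates mesh fields to the particle positions with the Cython routine `CICSample_3`. A minimal 1-D cloud-in-cell sketch in plain NumPy (hypothetical field values, not yt's actual implementation) shows the idea: each sampled value is a linear interpolation between the two nearest cell centers.

```python
import numpy as np

def cic_sample_1d(x, field, left_edge, dx):
    # Fractional position measured from the first cell center
    s = (x - left_edge) / dx - 0.5
    i = np.floor(s).astype(int)
    w = s - i                              # weight of the right-hand cell
    i = np.clip(i, 0, len(field) - 2)      # stay inside the grid
    return (1.0 - w) * field[i] + w * field[i + 1]

# Cell centers at 0.05, 0.15, 0.25, 0.35 for left_edge=0.0, dx=0.1
field = np.array([0.0, 1.0, 2.0, 3.0])
# A particle on a cell center recovers that cell's value; one halfway
# between two centers gets their average.
vals = cic_sample_1d(np.array([0.15, 0.2]), field, left_edge=0.0, dx=0.1)
```

The real routine does the same trilinearly in 3-D, using the grid's `LeftEdge`, `ActiveDimensions`, and cell spacing `dds`, with one layer of ghost zones so particles near grid edges still have neighbors to interpolate from.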
diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b yt/data_objects/particle_trajectories.py
--- /dev/null
+++ b/yt/data_objects/particle_trajectories.py
@@ -0,0 +1,371 @@
+"""
+Particle trajectories
+"""
+from __future__ import print_function
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.data_objects.field_data import YTFieldData
+from yt.utilities.lib.particle_mesh_operations import CICSample_3
+from yt.utilities.parallel_tools.parallel_analysis_interface import \
+    parallel_root_only
+from yt.funcs import mylog, get_pbar
+from yt.units.yt_array import array_like_field
+from yt.config import ytcfg
+from collections import OrderedDict
+
+import numpy as np
+from yt.utilities.on_demand_imports import _h5py as h5py
+
+class ParticleTrajectories(object):
+    r"""A collection of particle trajectories in time over a series of
+    datasets. 
+
+    Parameters
+    ----------
+    outputs : `yt.data_objects.time_series.DatasetSeries`
+        DatasetSeries object from which to draw the particles.
+    indices : array_like
+        An integer array of particle indices whose trajectories we
+        want to track. If they are not sorted they will be sorted.
+    fields : list of strings, optional
+        A set of fields that is retrieved when the trajectory
+        collection is instantiated. Default: None (will default
+        to the fields 'particle_position_x', 'particle_position_y',
+        'particle_position_z')
+    suppress_logging : boolean
+        Suppress yt's logging when iterating over the simulation time
+        series. Default: False
+
+    Examples
+    --------
+    >>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
+    >>> my_fns.sort()
+    >>> fields = ["particle_position_x", "particle_position_y",
+    ...           "particle_position_z", "particle_velocity_x",
+    ...           "particle_velocity_y", "particle_velocity_z"]
+    >>> ds = load(my_fns[0])
+    >>> init_sphere = ds.sphere(ds.domain_center, (.5, "unitary"))
+    >>> indices = init_sphere["particle_index"].astype("int")
+    >>> ts = DatasetSeries(my_fns)
+    >>> trajs = ts.particle_trajectories(indices, fields=fields)
+    >>> for t in trajs:
+    ...     print(t["particle_velocity_x"].max(), t["particle_velocity_x"].min())
+    """
+    def __init__(self, outputs, indices, fields=None, suppress_logging=False):
+
+        indices.sort() # Just in case the caller wasn't careful
+        self.field_data = YTFieldData()
+        self.data_series = outputs
+        self.masks = []
+        self.sorts = []
+        self.array_indices = []
+        self.indices = indices
+        self.num_indices = len(indices)
+        self.num_steps = len(outputs)
+        self.times = []
+        self.suppress_logging = suppress_logging
+
+        if fields is None: fields = []
+        fields = list(OrderedDict.fromkeys(fields))
+
+        if self.suppress_logging:
+            old_level = int(ytcfg.get("yt","loglevel"))
+            mylog.setLevel(40)
+        
+        fds = {}
+        ds_first = self.data_series[0]
+        dd_first = ds_first.all_data()
+        idx_field = dd_first._determine_fields("particle_index")[0]
+        for field in ("particle_position_%s" % ax for ax in "xyz"):
+            fds[field] = dd_first._determine_fields(field)[0]
+
+        my_storage = {}
+        pbar = get_pbar("Constructing trajectory information", len(self.data_series))
+        for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
+            dd = ds.all_data()
+            newtags = dd[idx_field].d.astype("int64")
+            mask = np.in1d(newtags, indices, assume_unique=True)
+            sort = np.argsort(newtags[mask])
+            array_indices = np.where(np.in1d(indices, newtags, assume_unique=True))[0]
+            self.array_indices.append(array_indices)
+            self.masks.append(mask)
+            self.sorts.append(sort)
+
+            pfields = {}
+            for field in ("particle_position_%s" % ax for ax in "xyz"):
+                pfields[field] = dd[fds[field]].ndarray_view()[mask][sort]
+
+            sto.result_id = ds.parameter_filename
+            sto.result = (ds.current_time, array_indices, pfields)
+            pbar.update(i)
+        pbar.finish()
+
+        if self.suppress_logging:
+            mylog.setLevel(old_level)
+
+        times = []
+        for fn, (time, indices, pfields) in sorted(my_storage.items()):
+            times.append(time)
+        self.times = self.data_series[0].arr([time for time in times], times[0].units)
+
+        self.particle_fields = []
+        output_field = np.empty((self.num_indices, self.num_steps))
+        output_field.fill(np.nan)
+        for field in ("particle_position_%s" % ax for ax in "xyz"):
+            for i, (fn, (time, indices, pfields)) in enumerate(sorted(my_storage.items())):
+                output_field[indices, i] = pfields[field]
+            self.field_data[field] = array_like_field(
+                dd_first, output_field.copy(), fds[field])
+            self.particle_fields.append(field)
+
+        # Instantiate fields the caller requested
+        self._get_data(fields)
+
+    def has_key(self, key):
+        return (key in self.field_data)
+    
+    def keys(self):
+        return self.field_data.keys()
+
+    def __getitem__(self, key):
+        """
+        Get the field associated with key.
+        """
+        if key == "particle_time":
+            return self.times
+        if key not in self.field_data:
+            self._get_data([key])
+        return self.field_data[key]
+    
+    def __setitem__(self, key, val):
+        """
+        Sets a field to be some other value.
+        """
+        self.field_data[key] = val
+                        
+    def __delitem__(self, key):
+        """
+        Delete the field from the trajectory
+        """
+        del self.field_data[key]
+
+    def __iter__(self):
+        """
+        This iterates over the trajectories for
+        the different particles, returning dicts
+        of fields for each trajectory
+        """
+        for idx in range(self.num_indices):
+            traj = {}
+            traj["particle_index"] = self.indices[idx]
+            traj["particle_time"] = self.times
+            for field in self.field_data.keys():
+                traj[field] = self[field][idx,:]
+            yield traj
+            
+    def __len__(self):
+        """
+        The number of individual trajectories
+        """
+        return self.num_indices
+
+    def add_fields(self, fields):
+        """
+        Add a list of fields to an existing trajectory
+
+        Parameters
+        ----------
+        fields : list of strings
+            A list of fields to be added to the current trajectory
+            collection.
+
+        Examples
+        --------
+        >>> trajs = ParticleTrajectories(my_fns, indices)
+        >>> trajs.add_fields(["particle_mass", "particle_gpot"])
+        """
+        self._get_data(fields)
+                
+    def _get_data(self, fields):
+        """
+        Get a list of fields to include in the trajectory collection.
+        The trajectory collection itself is a dict of 2D numpy arrays,
+        with shape (num_indices, num_steps)
+        """
+
+        missing_fields = [field for field in fields
+                          if field not in self.field_data]
+        if not missing_fields:
+            return
+
+        if self.suppress_logging:
+            old_level = int(ytcfg.get("yt","loglevel"))
+            mylog.setLevel(40)
+        ds_first = self.data_series[0]
+        dd_first = ds_first.all_data()
+
+        fds = {}
+        new_particle_fields = []
+        for field in missing_fields:
+            fds[field] = dd_first._determine_fields(field)[0]
+            if field not in self.particle_fields:
+                if self.data_series[0]._get_field_info(*fds[field]).particle_type:
+                    self.particle_fields.append(field)
+                    new_particle_fields.append(field)
+                    
+
+        grid_fields = [field for field in missing_fields
+                       if field not in self.particle_fields]
+        step = 0
+        pbar = get_pbar("Generating [%s] fields in trajectories" %
+                        ", ".join(missing_fields), self.num_steps)
+        my_storage = {}
+        
+        for i, (sto, ds) in enumerate(self.data_series.piter(storage=my_storage)):
+            mask = self.masks[i]
+            sort = self.sorts[i]
+            pfield = {}
+
+            if new_particle_fields:  # there's at least one particle field
+                dd = ds.all_data()
+                for field in new_particle_fields:
+                    # This is easy... just get the particle fields
+                    pfield[field] = dd[fds[field]].d[mask][sort]
+
+            if grid_fields:
+                # This is hard... must loop over grids
+                for field in grid_fields:
+                    pfield[field] = np.zeros(self.num_indices)
+                x = self["particle_position_x"][:,step].d
+                y = self["particle_position_y"][:,step].d
+                z = self["particle_position_z"][:,step].d
+                particle_grids, particle_grid_inds = ds.index._find_points(x,y,z)
+
+                # This will fail for non-grid index objects
+                for grid in particle_grids:
+                    cube = grid.retrieve_ghost_zones(1, grid_fields)
+                    for field in grid_fields:
+                        CICSample_3(x, y, z, pfield[field],
+                                    self.num_indices,
+                                    cube[fds[field]],
+                                    np.array(grid.LeftEdge).astype(np.float64),
+                                    np.array(grid.ActiveDimensions).astype(np.int32),
+                                    grid.dds[0])
+            sto.result_id = ds.parameter_filename
+            sto.result = (self.array_indices[i], pfield)
+            pbar.update(step)
+            step += 1
+        pbar.finish()
+
+        output_field = np.empty((self.num_indices,self.num_steps))
+        output_field.fill(np.nan)
+        for field in missing_fields:
+            fd = fds[field]
+            for i, (fn, (indices, pfield)) in enumerate(sorted(my_storage.items())):
+                output_field[indices, i] = pfield[field]
+            self.field_data[field] = array_like_field(dd_first, output_field.copy(), fd)
+
+        if self.suppress_logging:
+            mylog.setLevel(old_level)
+
+    def trajectory_from_index(self, index):
+        """
+        Retrieve a single trajectory corresponding to a specific particle
+        index
+
+        Parameters
+        ----------
+        index : int
+            This defines which particle trajectory from the
+            ParticleTrajectories object will be returned.
+
+        Returns
+        -------
+        A dictionary corresponding to the particle's trajectory and the
+        fields along that trajectory
+
+        Examples
+        --------
+        >>> from yt.mods import *
+        >>> import matplotlib.pylab as pl
+        >>> trajs = ParticleTrajectories(my_fns, indices)
+        >>> traj = trajs.trajectory_from_index(indices[0])
+        >>> pl.plot(traj["particle_time"], traj["particle_position_x"], "-x")
+        >>> pl.savefig("orbit")
+        """
+        mask = np.in1d(self.indices, (index,), assume_unique=True)
+        if not np.any(mask):
+            print("The particle index %d is not in the list!" % (index))
+            raise IndexError
+        fields = [field for field in sorted(self.field_data.keys())]
+        traj = {}
+        traj["particle_time"] = self.times
+        traj["particle_index"] = index
+        for field in fields:
+            traj[field] = self[field][mask,:][0]
+        return traj
+
+    @parallel_root_only
+    def write_out(self, filename_base):
+        """
+        Write out particle trajectories to tab-separated ASCII files (one
+        for each trajectory) with the field names in the file header. Each
+        file is named with a basename and the index number.
+
+        Parameters
+        ----------
+        filename_base : string
+            The prefix for the outputted ASCII files.
+
+        Examples
+        --------
+        >>> trajs = ParticleTrajectories(my_fns, indices)
+        >>> trajs.write_out("orbit_trajectory")       
+        """
+        fields = [field for field in sorted(self.field_data.keys())]
+        num_fields = len(fields)
+        first_str = "# particle_time\t" + "\t".join(fields)+"\n"
+        template_str = "%g\t"*num_fields+"%g\n"
+        for ix in range(self.num_indices):
+            outlines = [first_str]
+            for it in range(self.num_steps):
+                outlines.append(template_str %
+                                tuple([self.times[it]]+[self[field][ix,it] for field in fields]))
+            fid = open(filename_base + "_%d.dat" % self.indices[ix], "w")
+            fid.writelines(outlines)
+            fid.close()
+            del fid
+
+    @parallel_root_only
+    def write_out_h5(self, filename):
+        """
+        Write out all the particle trajectories to a single HDF5 file
+        that contains the indices, the times, and the 2D array for each
+        field individually
+
+        Parameters
+        ----------
+
+        filename : string
+            The output filename for the HDF5 file
+
+        Examples
+        --------
+        >>> trajs = ParticleTrajectories(my_fns, indices)
+        >>> trajs.write_out_h5("orbit_trajectories")                
+        """
+        fid = h5py.File(filename, "w")
+        fid.create_dataset("particle_indices", dtype=np.int64,
+                           data=self.indices)
+        fid.close()
+        self.times.write_hdf5(filename, dataset_name="particle_times")
+        fields = [field for field in sorted(self.field_data.keys())]
+        for field in fields:
+            self[field].write_hdf5(filename, dataset_name="%s" % field)

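The bookkeeping in `ParticleTrajectories.__init__` above — match each output's particle tags against the requested indices, re-sort into a canonical order, and scatter the values into a `(num_indices, num_steps)` array pre-filled with NaN — can be sketched in plain NumPy. Hypothetical tag and position values; `np.isin` is the modern name for the `np.in1d` used in the code.

```python
import numpy as np

# Sketch of the per-output matching done in ParticleTrajectories.__init__.
# Hypothetical data: track particles 2, 5, and 9 across two outputs.
requested = np.array([2, 5, 9])   # sorted indices to track
num_steps = 2
output = np.full((len(requested), num_steps), np.nan)

def scatter(tags, values, step):
    # Select the tracked particles, put them in requested order,
    # and write into the matching rows of the trajectory table.
    mask = np.isin(tags, requested, assume_unique=True)
    sort = np.argsort(tags[mask])
    rows = np.where(np.isin(requested, tags, assume_unique=True))[0]
    output[rows, step] = values[mask][sort]

# Output 0: all three particles present, stored in arbitrary order.
scatter(np.array([9, 2, 5]), np.array([0.9, 0.2, 0.5]), 0)
# Output 1: particle 5 is absent, so its column entry stays NaN.
scatter(np.array([2, 9]), np.array([0.25, 0.95]), 1)
```

Row `i` of `output` is the trajectory of `requested[i]`; NaN marks steps where that particle was not found, which is how missing particles appear in the real field arrays as well.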
diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b yt/data_objects/tests/test_particle_trajectories.py
--- /dev/null
+++ b/yt/data_objects/tests/test_particle_trajectories.py
@@ -0,0 +1,45 @@
+import glob
+import os
+
+from yt.config import ytcfg
+from yt.data_objects.time_series import DatasetSeries
+from yt.utilities.answer_testing.framework import \
+    requires_ds, \
+    GenericArrayTest
+
+def setup():
+    ytcfg["yt","__withintesting"] = "True"
+
+data_path = ytcfg.get("yt", "test_data_dir")
+
+pfields = ["particle_position_x", "particle_position_y", "particle_position_z"]
+vfields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"]
+
+@requires_ds("Orbit/orbit_hdf5_chk_0000")
+def test_orbit_traj():
+    fields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"]
+    my_fns = glob.glob(os.path.join(data_path, "Orbit/orbit_hdf5_chk_00[0-9][0-9]"))
+    my_fns.sort()
+    ts = DatasetSeries(my_fns)
+    ds = ts[0]
+    traj = ts.particle_trajectories([1, 2], fields=fields, suppress_logging=True)
+    for field in pfields+vfields:
+        def field_func(name):
+            return traj[name]
+        yield GenericArrayTest(ds, field_func, args=[field])
+
+@requires_ds("enzo_tiny_cosmology/DD0000/DD0000")
+def test_etc_traj():
+    fields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"]
+    my_fns = glob.glob(os.path.join(data_path, "enzo_tiny_cosmology/DD000[0-9]/*.hierarchy"))
+    my_fns.sort()
+    ts = DatasetSeries(my_fns)
+    ds = ts[0]
+    sp = ds.sphere("max", (0.5, "Mpc"))
+    indices = sp["particle_index"][sp["particle_type"] == 1][:5]
+    traj = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)
+    traj.add_fields(["density"])
+    for field in pfields+vfields+["density"]:
+        def field_func(name):
+            return traj[name]
+        yield GenericArrayTest(ds, field_func, args=[field])

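The yield-style tests above pass the field through `args` because of Python's late-binding closures: a closure over the loop variable would see only its final value by the time the deferred tests actually run. A small self-contained illustration of the pitfall:

```python
fields = ["px", "py", "pz"]

# A closure over the loop variable is late-bound: every call made after
# the loop finishes sees the final value of f.
late_bound = [lambda: f for f in fields]

# Binding the current value as a default argument (or passing it through
# args, as the tests above do) captures it at definition time.
passed_in = [lambda name=f: name for f in fields]

late = [fn() for fn in late_bound]     # every element is "pz"
early = [fn() for fn in passed_in]     # one element per field
```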
diff -r 13c1ee22f924f988c6dfc68c9b64d6e9acdebfd2 -r 441ff5e2dcd342526152309d260e912300fdcd3b yt/data_objects/time_series.py
--- a/yt/data_objects/time_series.py
+++ b/yt/data_objects/time_series.py
@@ -32,6 +32,8 @@
     create_quantity_proxy, \
     analysis_task_registry, \
     AnalysisTask
+from yt.data_objects.particle_trajectories import \
+    ParticleTrajectories
 from yt.funcs import \
     iterable, \
     ensure_list, \
@@ -399,6 +401,41 @@
         self._dataset_cls = ds.__class__
         return ds
 
+    def particle_trajectories(self, indices, fields=None, suppress_logging=False):
+        r"""Create a collection of particle trajectories in time over a series of
+        datasets.
+
+        Parameters
+        ----------
+        indices : array_like
+            An integer array of particle indices whose trajectories we
+            want to track. If they are not sorted they will be sorted.
+        fields : list of strings, optional
+            A set of fields that is retrieved when the trajectory
+            collection is instantiated. Default: None (will default
+            to the fields 'particle_position_x', 'particle_position_y',
+            'particle_position_z')
+        suppress_logging : boolean
+            Suppress yt's logging when iterating over the simulation time
+            series. Default: False
+
+        Examples
+        --------
+        >>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
+        >>> my_fns.sort()
+        >>> fields = ["particle_position_x", "particle_position_y",
+        ...           "particle_position_z", "particle_velocity_x",
+        ...           "particle_velocity_y", "particle_velocity_z"]
+        >>> ds = load(my_fns[0])
+        >>> init_sphere = ds.sphere(ds.domain_center, (.5, "unitary"))
+        >>> indices = init_sphere["particle_index"].astype("int")
+        >>> ts = DatasetSeries(my_fns)
+        >>> trajs = ts.particle_trajectories(indices, fields=fields)
+        >>> for t in trajs:
+        ...     print(t["particle_velocity_x"].max(), t["particle_velocity_x"].min())
+        """
+        return ParticleTrajectories(self, indices, fields=fields, suppress_logging=suppress_logging)
+
 class TimeSeriesQuantitiesContainer(object):
     def __init__(self, data_object, quantities):
         self.data_object = data_object

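The `DatasetSeries.particle_trajectories` method above is a thin delegation: the series passes itself as the `outputs` argument so the trajectory object can reuse the already-constructed series. A toy sketch of the pattern (hypothetical class names, no yt dependency):

```python
class Trajectories:
    def __init__(self, outputs, indices):
        self.data_series = outputs        # the series object itself
        self.indices = sorted(indices)    # sorted, as in the real class

class Series:
    def __init__(self, filenames):
        self.filenames = list(filenames)

    def particle_trajectories(self, indices):
        # Hand `self` over rather than raw filenames, mirroring
        # DatasetSeries.particle_trajectories.
        return Trajectories(self, indices)

ts = Series(["out_0000", "out_0001"])
trajs = ts.particle_trajectories([3, 1, 2])
```

This is why the committed `ParticleTrajectories.__init__` can drop the old `isinstance(outputs, DatasetSeries)` check: as a method, it is always handed a fully built series.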
Repository URL: https://bitbucket.org/yt_analysis/yt/
