[yt-svn] commit/yt-doc: 5 new changesets

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Sun Nov 3 17:17:57 PST 2013


5 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/5b3db29d129a/
Changeset:   5b3db29d129a
User:        jzuhone
Date:        2013-11-03 00:58:39
Summary:     Particle trajectory docs
Affected #:  3 files

diff -r 64fe0d129a4d1f06feb5cb1b8a20aee03cca9794 -r 5b3db29d129aaa61339e413219655b77a49119e5 source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- /dev/null
+++ b/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -0,0 +1,367 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from yt.analysis_modules.api import ParticleTrajectories\n",
+      "from yt.config import ytcfg\n",
+      "path = ytcfg.get(\"yt\", \"test_data_dir\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+      "my_fns.sort()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(my_fns[0])\n",
+      "dd = pf.h.all_data()\n",
+      "indices = dd[\"particle_index\"].astype(\"int\")\n",
+      "print indices"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print trajs[\"particle_position_x\"]\n",
+      "print trajs[\"particle_position_x\"].shape"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And we can plot the velocity fields as well:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "particle1 = trajs.trajectory_from_index(1)\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = SlicePlot(pf, \"x\", [\"Density\",\"Dark_Matter_Density\"], center=\"max\", width=(3.0, \"mpc\"))\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "sp = pf.h.sphere(\"max\", (0.5, \"mpc\"))\n",
+      "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+      "my_fns.sort()\n",
+      "trajs = ParticleTrajectories(my_fns, indices)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from mpl_toolkits.mplot3d import Axes3D\n",
+      "fig = plt.figure(figsize=(8.0, 8.0))\n",
+      "ax = fig.add_subplot(111, projection='3d')\n",
+      "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_z\"][100], trajs[\"particle_position_z\"][100])\n",
+      "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_z\"][8], trajs[\"particle_position_z\"][8])\n",
+      "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_z\"][25], trajs[\"particle_position_z\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, `yt` will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.add_fields([\"Density\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We also could have included `\"Density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][25])\n",
+      "pylab.yscale(\"log\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.write_out(\"halo_trajectories.txt\")\n",
+      "trajs.write_out_h5(\"halo_trajectories.h5\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Important Caveats"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "* Parallelization is not yet implemented.\n",
+      "* For large datasets, constructing trajectories can be very slow. We are working on optimizing the algorithm for a future release. \n",
+      "* At the moment, trajectories are limited for particles that exist in every dataset. Therefore, for codes like FLASH that allow for particles to exit the domain (and hence the simulation) for certain types of boundary conditions, you need to insure that the particles you wish to examine exist in all datasets in the time series from the beginning to the end. If this is not the case, `ParticleTrajectories` will throw an error. This is a limitation we hope to relax in a future release. "
+     ]
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file
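
For reference, a minimal sketch (not part of this changeset) of one way to
pre-filter particle indices so that only particles present in every dataset are
handed to `ParticleTrajectories`, per the caveat above. It reuses the yt 2.x
calls shown in the notebook (`load`, `pf.h.all_data()`, the `particle_index`
field); the helper name and the NumPy set intersection are illustrative
assumptions:

    import glob
    import numpy as np
    from yt.mods import load

    def indices_in_all_datasets(filenames, candidates):
        # Intersect the candidate indices with the particle indices found
        # in each dataset; anything absent from any output is dropped,
        # avoiding the error described in the caveats above.
        common = np.asarray(candidates)
        for fn in filenames:
            pf = load(fn)
            dd = pf.h.all_data()
            common = np.intersect1d(common, dd["particle_index"].astype("int"))
        return common

    my_fns = sorted(glob.glob("orbit_hdf5_chk_00[0-9][0-9]"))
    candidates = load(my_fns[0]).h.all_data()["particle_index"].astype("int")
    safe_indices = indices_in_all_datasets(my_fns, candidates)
    # safe_indices can now be passed to ParticleTrajectories(my_fns, safe_indices)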

diff -r 64fe0d129a4d1f06feb5cb1b8a20aee03cca9794 -r 5b3db29d129aaa61339e413219655b77a49119e5 source/analyzing/analysis_modules/index.rst
--- a/source/analyzing/analysis_modules/index.rst
+++ b/source/analyzing/analysis_modules/index.rst
@@ -38,3 +38,4 @@
 
    two_point_functions
    clump_finding
+   particle_trajectories

diff -r 64fe0d129a4d1f06feb5cb1b8a20aee03cca9794 -r 5b3db29d129aaa61339e413219655b77a49119e5 source/analyzing/analysis_modules/particle_trajectories.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/particle_trajectories.rst
@@ -0,0 +1,4 @@
+Particle Trajectories
+-----------------------------------------
+
+.. notebook:: Particle_Trajectories.ipynb


https://bitbucket.org/yt_analysis/yt-doc/commits/086639a9f5d2/
Changeset:   086639a9f5d2
User:        jzuhone
Date:        2013-11-03 17:25:42
Summary:     Merged chummels/yt-doc into default
Affected #:  13 files

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/custom_colorbar_tickmarks.ipynb
--- /dev/null
+++ b/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -0,0 +1,90 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "from yt.mods import *"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "slc = SlicePlot(pf, 'x', 'Density')\n",
+      "slc"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`PlotWindow` plots are containers for plots, keyed to field names.  Below, we get a copy of the plot for the `Density` field."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "plot = slc.plots['Density']"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The plot has a few attributes that point to underlying `matplotlib` plot primites.  For example, the `colorbar` object corresponds to the `cb` attribute of the plot."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "colorbar = plot.cb"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "colorbar.set_ticks([1e-28])\n",
+      "colorbar.set_ticklabels(['$10^{-28}$'])\n",
+      "slc"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/custom_colorbar_tickmarks.rst
--- /dev/null
+++ b/source/cookbook/custom_colorbar_tickmarks.rst
@@ -0,0 +1,4 @@
+Custom Colorbar Tickmarks
+-------------------------
+
+.. notebook:: custom_colorbar_tickmarks.ipynb
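
As a quick follow-up to the recipe above, a minimal standalone sketch (not part
of this changeset) that sets several ticks at once. It uses the same dataset and
the same `slc.plots['Density'].cb` handle as the notebook; the tick values are
arbitrary and purely illustrative, and the figure is saved through the same
Agg-canvas pattern used in the animation recipes further down:

    from yt.mods import *
    from matplotlib.backends.backend_agg import FigureCanvasAgg

    pf = load('IsolatedGalaxy/galaxy0030/galaxy0030')
    slc = SlicePlot(pf, 'x', 'Density')

    # The matplotlib colorbar attached to the Density plot.
    colorbar = slc.plots['Density'].cb

    # Place ticks at a few illustrative density values and label them in TeX.
    colorbar.set_ticks([1e-30, 1e-28, 1e-26])
    colorbar.set_ticklabels(['$10^{-30}$', '$10^{-28}$', '$10^{-26}$'])

    # Save through the underlying figure so the custom ticks are preserved.
    fig = slc.plots['Density'].figure
    fig.canvas = FigureCanvasAgg(fig)
    fig.savefig('custom_ticks.png')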

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/embedded_javascript_animation.ipynb
--- /dev/null
+++ b/source/cookbook/embedded_javascript_animation.ipynb
@@ -0,0 +1,70 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This example shows how to embed an animation produced by `matplotlib` into an IPython notebook.  This example makes use of `matplotlib`'s [animation toolkit](http://matplotlib.org/api/animation_api.html) to transform individual frames into a final rendered movie.  \n",
+      "\n",
+      "Additionally, this uses Jake VanderPlas' [`JSAnimation`](https://github.com/jakevdp/JSAnimation) library to embed the movie as a javascript widget, directly in the notebook.  This does not use `ffmpeg` to stitch the frames together and thus does not require `ffmpeg`.  However, you must have `JSAnimation` installed.\n",
+      "\n",
+      "To do so, clone to git repostiory and run `python setup.py install` in the root of the repository."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from JSAnimation import IPython_display\n",
+      "from matplotlib import animation"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Here we set up the animation.  We use `yt` to load the data and create each frame and use matplotlib to stitch the frames together.  Note that we customize the plot a bit by calling the `set_zlim` function.  Customizations only need to be applied to the first frame - they will carry through to the rest.\n",
+      "\n",
+      "This may take a while to run, be patient."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
+      "\n",
+      "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'Density', weight_field='Density',width=(180,'mpccm'))\n",
+      "prj.set_zlim('Density',1e-32,1e-26)\n",
+      "fig = prj.plots['Density'].figure\n",
+      "fig.canvas = FigureCanvasAgg(fig)\n",
+      "\n",
+      "# animation function.  This is called sequentially\n",
+      "def animate(i):\n",
+      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_pf(pf)\n",
+      "\n",
+      "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
+      "animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/embedded_javascript_animation.rst
--- /dev/null
+++ b/source/cookbook/embedded_javascript_animation.rst
@@ -0,0 +1,4 @@
+Making a javascript animation widget using JSAnimation
+------------------------------------------------------
+
+.. notebook:: embedded_javascript_animation.ipynb

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/embedded_webm_animation.ipynb
--- /dev/null
+++ b/source/cookbook/embedded_webm_animation.ipynb
@@ -0,0 +1,122 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This example shows how to embed an animation produced by `matplotlib` into an IPython notebook.  This example makes use of `matplotlib`'s [animation toolkit](http://matplotlib.org/api/animation_api.html) to transform individual frames into a final rendered movie.  \n",
+      "\n",
+      "Matplotlib uses [`ffmpeg`](http://www.ffmpeg.org/) to generate the movie, so you must install `ffmpeg` for this example to work correctly.  Usually the best way to install `ffmpeg` is using your system's package manager."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from matplotlib import animation"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, we need to construct a function that will embed the video produced by ffmpeg directly into the notebook document. This makes use of the [HTML5 video tag](http://www.w3schools.com/html/html5_video.asp) and the WebM video format.  WebM is supported by Chrome, Firefox, and Opera, but not Safari and Internet Explorer.  If you have trouble viewing the video you may need to use a different video format.  Since this uses `libvpx` to construct the frames, you will need to ensure that ffmpeg has been compiled with `libvpx` support."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from tempfile import NamedTemporaryFile\n",
+      "\n",
+      "VIDEO_TAG = \"\"\"<video controls>\n",
+      " <source src=\"data:video/x-webm;base64,{0}\" type=\"video/webm\">\n",
+      " Your browser does not support the video tag.\n",
+      "</video>\"\"\"\n",
+      "\n",
+      "def anim_to_html(anim):\n",
+      "    if not hasattr(anim, '_encoded_video'):\n",
+      "        with NamedTemporaryFile(suffix='.webm') as f:\n",
+      "            anim.save(f.name, fps=6, extra_args=['-vcodec', 'libvpx'])\n",
+      "            video = open(f.name, \"rb\").read()\n",
+      "        anim._encoded_video = video.encode(\"base64\")\n",
+      "    \n",
+      "    return VIDEO_TAG.format(anim._encoded_video)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next, we define a function to actually display the video inline in the notebook."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from IPython.display import HTML\n",
+      "\n",
+      "def display_animation(anim):\n",
+      "    plt.close(anim._fig)\n",
+      "    return HTML(anim_to_html(anim))"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, we set up the animation itsself.  We use `yt` to load the data and create each frame and use matplotlib to stitch the frames together.  Note that we customize the plot a bit by calling the `set_zlim` function.  Customizations only need to be applied to the first frame - they will carry through to the rest.\n",
+      "\n",
+      "This may take a while to run, be patient."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
+      "\n",
+      "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'Density', weight_field='Density',width=(180,'mpccm'))\n",
+      "prj.set_zlim('Density',1e-32,1e-26)\n",
+      "fig = prj.plots['Density'].figure\n",
+      "fig.canvas = FigureCanvasAgg(fig)\n",
+      "\n",
+      "# animation function.  This is called sequentially\n",
+      "def animate(i):\n",
+      "    pf = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    prj._switch_pf(pf)\n",
+      "\n",
+      "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
+      "anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=False)\n",
+      "\n",
+      "# call our new function to display the animation\n",
+      "display_animation(anim)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/embedded_webm_animation.rst
--- /dev/null
+++ b/source/cookbook/embedded_webm_animation.rst
@@ -0,0 +1,4 @@
+Making animations using matplotlib and ffmpeg
+---------------------------------------------
+
+.. notebook:: embedded_webm_animation.ipynb

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/index.rst
--- a/source/cookbook/index.rst
+++ b/source/cookbook/index.rst
@@ -42,4 +42,8 @@
    :maxdepth: 1
 
    notebook_tutorial
+   custom_colorbar_tickmarks
+   embedded_javascript_animation
+   embedded_webm_animation
    ../analyzing/analysis_modules/sunyaev_zeldovich
+   

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/cookbook/notebook_tutorial.rst
--- a/source/cookbook/notebook_tutorial.rst
+++ b/source/cookbook/notebook_tutorial.rst
@@ -1,4 +1,33 @@
 Notebook Tutorial
 -----------------
 
-This is a placeholder for the badass notebook tutorial that Nathan is going to write.
+The IPython notebook is a powerful system for literate coding - a style of
+writing code that embeds input, output, and explanatory text into one document.
+
+yt has deep integration with the IPython notebook, explained in-depth in the
+other example notebooks and the rest of the yt documentation.  This page is here
+to give a brief introduction to the notebook itself.
+
+To start the notebook, enter the following command at the bash command line:
+
+.. code-block:: bash
+
+   $ ipython notebook
+
+Depending on your default web browser and system setup this will open a web
+browser and direct you to the notebook dashboard.  If it does not, you might
+need to connect to the notebook manually.  See the `IPython documentation
+<http://ipython.org/ipython-doc/stable/interactive/notebook.html#starting-the-notebook-server>`_
+for more details.
+
+For the notebook tutorial, we rely on example notebooks that are part of the
+IPython documentation.  We link to static nbviewer renderings of the evaluated
+versions of these example notebooks.  If you would like to run them locally on
+your own computer, simply download the notebook by clicking the 'Download
+Notebook' link in the top right corner of each page.
+
+1. `Running Code in the IPython Notebook <http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Part%201%20-%20Running%20Code.ipynb>`_
+2. `Basic Output <http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Part%202%20-%20Basic%20Output.ipynb>`_
+3. `Plotting with matplotlib <http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Part%203%20-%20Plotting%20with%20Matplotlib.ipynb>`_
+4. `Markdown Cells <http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Part%204%20-%20Markdown%20Cells.ipynb>`_
+5. `IPython's rich display system <http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Part%205%20-%20Rich%20Display%20System.ipynb>`_

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/developing/building_the_docs.rst
--- a/source/developing/building_the_docs.rst
+++ b/source/developing/building_the_docs.rst
@@ -43,6 +43,8 @@
 - pandoc 1.11.1
 - Rockstar halo finder 0.99.6
 - SZpack_ 1.1.1
+- ffmpeg 1.2.4 (compiled with libvpx support)
+- JSAnimation (git hash 1b95cb3a3a)
 
 .. _SZpack: http://www.cita.utoronto.ca/~jchluba/Science_Jens/SZpack/SZpack.html
 

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/installing.rst
--- a/source/installing.rst
+++ b/source/installing.rst
@@ -53,7 +53,7 @@
 the entire installation process, so it is usually quite cumbersome.  By looking 
 at the last few hundred lines (i.e. ``tail -500 yt_install.log``), you can 
 potentially figure out what went wrong.  If you have problems, though, do not 
-hesitate to :ref:`contact us asking-for-help` for assistance.
+hesitate to :ref:`contact us <asking-for-help>` for assistance.
 
 .. _activating-yt:
 
@@ -115,7 +115,7 @@
 yt, which means you have successfully installed yt.  Congratulations!  
 
 If you get an error, follow the instructions it gives you to debug the problem.  
-Do not hesitate to :ref:`contact us asking-for-help` so we can help you 
+Do not hesitate to :ref:`contact us <asking-for-help>` so we can help you 
 figure it out.
 
 .. _updating-yt:

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/reference/api/api.rst
--- a/source/reference/api/api.rst
+++ b/source/reference/api/api.rst
@@ -376,7 +376,10 @@
 
 Absorption spectra fitting:
 
-.. autofunction:: yt.analysis_modules.absorption_spectrum.absorption_spectrum_fit.generate_total_fit
+.. autosummary:: 
+   :toctree: generated/
+
+   ~yt.analysis_modules.absorption_spectrum.absorption_spectrum_fit.generate_total_fit
 
 Sunrise exporting:
 
@@ -583,9 +586,8 @@
    ~yt.utilities.parallel_tools.parallel_analysis_interface.ObjectIterator
    ~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelAnalysisInterface
    ~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelObjectIterator
-
-.. autoclass:: yt.analysis_modules.hierarchy_subset.hierarchy_subset.ConstructedRootGrid
-.. autoclass:: yt.analysis_modules.hierarchy_subset.hierarchy_subset.ExtractedHierarchy
+   ~yt.analysis_modules.hierarchy_subset.hierarchy_subset.ConstructedRootGrid
+   ~yt.analysis_modules.hierarchy_subset.hierarchy_subset.ExtractedHierarchy
 
 
 Testing Infrastructure

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/visualizing/manual_plotting.rst
--- a/source/visualizing/manual_plotting.rst
+++ b/source/visualizing/manual_plotting.rst
@@ -32,16 +32,16 @@
 generate an FRB is to use the ``.to_frb(width, resolution, center=None)`` method
 of any two-dimensional data object:
 
-.. code-block:: python
+.. python-script::
    
    import pylab as P
    from yt.mods import *
-   pf = load("RedshiftOutput0005")
+   pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
 
    c = pf.h.find_max('Density')[1]
    proj = pf.h.proj(0, 'Density')
 
-   width = 1.5/pf['mpc'] # we want a 1.5 mpc view
+   width = 10/pf['kpc'] # we want a 10 kpc view
    res = [1000, 1000] # create an image with 1000x1000 pixels
    frb = proj.to_frb(width, res, center=c)
 
@@ -66,20 +66,20 @@
 
 This is perhaps the simplest thing to do. ``yt`` provides a number of one dimensional objects, and these return a 1-D numpy array of their contents with direct dictionary access. As a simple example, take a :class:`~yt.data_objects.data_containers.AMROrthoRayBase` object, which can be created from a hierarchy by calling ``pf.h.ortho_ray(axis, center)``. 
 
-.. code-block:: python
+.. python-script::
 
    from yt.mods import *
    import pylab as P
-   pf = load("RedshiftOutput0005")
+   pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
    c = pf.h.find_max("Density")[1]
    ax = 0 # take a line cut along the x axis
    ray = pf.h.ortho_ray(ax, (c[1], c[2])) # cutting through the y0,z0 such that we hit the max density
 
    P.subplot(211)
-   P.plot(ray['x'], ray['Density'])
+   P.semilogy(ray['x'], ray['Density'])
    P.ylabel('Density')
    P.subplot(212)
-   P.plot(ray['x'], ray['Temperature'])
+   P.semilogy(ray['x'], ray['Temperature'])
    P.xlabel('x')
    P.ylabel('Temperature')
 
@@ -99,21 +99,20 @@
 :class:`yt.visualization.profile_plotter.PhasePlotter` object, giving it a data
 source and three fields: the x-axis field, the y-axis field, and the z field (that is, the color of the cells). 
 
-.. code-block:: python
+.. python-script::
    
    from yt.mods import *
    import yt.visualization.profile_plotter as pp
    import pylab as P
    
-   pf = load("RedshiftOutput0005")
+   pf = load("Enzo_64/DD0043/data0043")
    c = pf.h.find_max("Density")[1]
-   radius = 1.5/pf['mpc']
+   radius = 10/pf['mpc']
    sph = pf.h.sphere(c,radius)
    
    phase = pp.PhasePlotter(sph,'Density', 'Temperature','CellMassMsun')
    
    fig, ax = phase.plot.to_mpl()
-   # sorry this is convoluted!
    from yt.visualization._mpl_imports import FigureCanvasAgg
    
    canvas = FigureCanvasAgg(fig)
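
Following up on the ``to_frb`` recipe in the hunk above, a minimal sketch (not
part of this changeset) of what one might do with the buffer once it exists.
The FRB behaves like a dictionary of 2D NumPy arrays keyed by field name; the
dataset and field names match the recipe, and the rest is illustrative:

    import numpy as np
    import pylab as P
    from yt.mods import *

    pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
    c = pf.h.find_max('Density')[1]
    proj = pf.h.proj(0, 'Density')

    width = 10/pf['kpc']   # 10 kpc field of view
    res = [1000, 1000]     # 1000x1000 pixel image
    frb = proj.to_frb(width, res, center=c)

    # frb['Density'] is a plain 1000x1000 NumPy array, ready for matplotlib.
    P.imshow(np.log10(frb['Density']), origin='lower')
    cb = P.colorbar()
    cb.set_label('log10 projected Density')
    P.savefig('density_frb.png')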

diff -r 5b3db29d129aaa61339e413219655b77a49119e5 -r 086639a9f5d2e91132fee1103110b8334dad4ed9 source/visualizing/plots.rst
--- a/source/visualizing/plots.rst
+++ b/source/visualizing/plots.rst
@@ -38,7 +38,7 @@
 is requested of it -- for instance, when the width or field is changed
 -- this high-resolution data is then pixelized and placed in a buffer
 of fixed size. This is accomplished behind the scenes using
-:class:`yt.visualization.fixed_resolution.FixedResolutionBuffer``
+:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`.
 ``PlotWindow`` exposes the underlying matplotlib ``figure`` and
 ``axes`` objects, making it easy to customize your plots and 
 add new annotations.
@@ -283,6 +283,167 @@
 
 __ :class:`~yt.visualization.plot_window.OffAxisProjectionPlot`
 
+Plot Customization
+------------------
+
+You can customize each of the four plot types above in identical ways.  We'll go
+over each of the customization methods below.  For each example we
+will modify the following plot.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.save()
+
+Panning and zooming
+~~~~~~~~~~~~~~~~~~~
+
+There are three methods to dynamically pan around the data.  
+
+:class:`~yt.visualization.plot_window.SlicePlot.pan` accepts x and y deltas in code
+units.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.pan((2/pf['kpc'],2/pf['kpc']))
+   slc.save()
+
+:class:`~yt.visualization.plot_window.SlicePlot.pan_rel` accepts deltas in units relative
+to the field of view of the plot.  
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.pan_rel((0.1, -0.1))
+   slc.save()
+
+:class:`~yt.visualization.plot_window.SlicePlot.zoom` accepts a factor to zoom in by.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.zoom(2)
+   slc.save()
+
+Set axes units
+~~~~~~~~~~~~~~
+
+:class:`~yt.visualization.plot_window.SlicePlot.set_axes_unit` allows the customization of
+the axes unit labels.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_axes_unit('Mpc')
+   slc.save()
+
+Set the plot center
+~~~~~~~~~~~~~~~~~~~
+
+The :class:`~yt.visualization.plot_window.SlicePlot.set_center` function accepts a new
+center for the plot, in code units.  New centers must be two-element tuples.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_center((0.53, 0.53))
+   slc.save()
+
+Fonts
+~~~~~
+
+:class:`~yt.visualization.plot_window.SlicePlot.set_font` allows font customization.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_font({'family': 'sans-serif', 'style': 'italic','weight': 'bold', 'size': 24})
+   slc.save()
+
+Colormaps
+~~~~~~~~~
+
+Each of these functions accepts two arguments.  In all cases the first argument
+is a field name.  This makes it possible to use different custom colormaps for
+different fields tracked by the plot object.
+
+To change the colormap for the plot, call the
+:class:`~yt.visualization.plot_window.SlicePlot.set_cmap` function.  Use any of the
+colormaps listed in the :ref:`colormaps` section.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_cmap('VorticitySquared', 'RdBu_r')
+   slc.save()
+
+The :class:`~yt.visualization.plot_window.SlicePlot.set_log` function accepts a field name
+and a boolean.  If the boolean is :code:`True`, the colormap for the field will
+be log scaled.  If it is :code:`False`, the colormap will be linear.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_log('VorticitySquared', False)
+   slc.save()
+
+Lastly, the :class:`~yt.visualization.plot_window.SlicePlot.set_zlim` function makes it
+possible to set a custom colormap range.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_zlim('VorticitySquared', 1e-30, 1e-25)
+   slc.save()
+
+Set the size of the plot
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set the size of the plot, use the
+:class:`~yt.visualization.plot_window.SlicePlot.set_window_size` function.  The argument
+is the size of the longest edge of the plot in inches.  View the full resolution
+image to see the difference more clearly.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_window_size(10)
+   slc.save()
+
+To change the resolution of the image, call the
+:class:`~yt.visualization.plot_window.SlicePlot.set_buff_size` function.
+
+.. python-script::
+
+   from yt.mods import *
+   pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
+   slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')
+   slc.set_buff_size(1600)
+   slc.save()
+
 Quantitative Analysis and Visualization
 ---------------------------------------
 
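The text in the hunk above notes that ``PlotWindow`` exposes the underlying
matplotlib ``figure`` and ``axes`` objects, so here is a minimal sketch (not
part of this changeset) of customizing a plot at the matplotlib level. The
``.figure`` attribute is the one used in the animation recipes; treating the
plot's ``.axes`` the same way, and the annotation text itself, are illustrative
assumptions:

    from yt.mods import *
    from matplotlib.backends.backend_agg import FigureCanvasAgg

    pf = load("HiresIsolatedGalaxy/DD0044/DD0044")
    slc = SlicePlot(pf, 'z', 'VorticitySquared', width=(10,'kpc'), center='max')

    # Reach through to the matplotlib primitives for this field's plot.
    plot = slc.plots['VorticitySquared']
    fig = plot.figure   # matplotlib Figure
    ax = plot.axes      # matplotlib Axes holding the image

    # From here any matplotlib call is available.
    ax.set_title('Vorticity squared, central 10 kpc')
    ax.text(0.05, 0.95, 'annotated via the raw axes', transform=ax.transAxes,
            color='white', va='top')

    # Save through the figure so the manual changes are preserved.
    fig.canvas = FigureCanvasAgg(fig)
    fig.savefig('customized_slice.png')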


https://bitbucket.org/yt_analysis/yt-doc/commits/fc1dadb6b876/
Changeset:   fc1dadb6b876
User:        jzuhone
Date:        2013-11-03 17:30:57
Summary:     Merging
Affected #:  1 file

diff -r aa482e4881748677e642ab64369bc601b2ad442e -r fc1dadb6b876b7f61ac3d1762af5e1790b3f7bd2 source/examining/supported_frontends_data.rst
--- a/source/examining/supported_frontends_data.rst
+++ b/source/examining/supported_frontends_data.rst
@@ -125,7 +125,6 @@
   positions will not be.
 * Domains may be visualized assuming periodicity.
 
-<<<<<<< local
 .. _loading-ramses-data:
 
 RAMSES Data


https://bitbucket.org/yt_analysis/yt-doc/commits/3c8695e699de/
Changeset:   3c8695e699de
User:        jzuhone
Date:        2013-11-03 17:31:11
Summary:     Merging
Affected #:  3 files

diff -r fc1dadb6b876b7f61ac3d1762af5e1790b3f7bd2 -r 3c8695e699ded335d9b8c79581b513bada392079 source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- /dev/null
+++ b/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -0,0 +1,367 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from yt.analysis_modules.api import ParticleTrajectories\n",
+      "from yt.config import ytcfg\n",
+      "path = ytcfg.get(\"yt\", \"test_data_dir\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+      "my_fns.sort()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(my_fns[0])\n",
+      "dd = pf.h.all_data()\n",
+      "indices = dd[\"particle_index\"].astype(\"int\")\n",
+      "print indices"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print trajs[\"particle_position_x\"]\n",
+      "print trajs[\"particle_position_x\"].shape"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And we can plot the velocity fields as well:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "particle1 = trajs.trajectory_from_index(1)\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = SlicePlot(pf, \"x\", [\"Density\",\"Dark_Matter_Density\"], center=\"max\", width=(3.0, \"mpc\"))\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `\"particle_type == 1\"`):"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "sp = pf.h.sphere(\"max\", (0.5, \"mpc\"))\n",
+      "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+      "my_fns.sort()\n",
+      "trajs = ParticleTrajectories(my_fns, indices)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from mpl_toolkits.mplot3d import Axes3D\n",
+      "fig = plt.figure(figsize=(8.0, 8.0))\n",
+      "ax = fig.add_subplot(111, projection='3d')\n",
+      "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_z\"][100], trajs[\"particle_position_z\"][100])\n",
+      "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_z\"][8], trajs[\"particle_position_z\"][8])\n",
+      "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_z\"][25], trajs[\"particle_position_z\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, `yt` will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.add_fields([\"Density\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We also could have included `\"Density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][25])\n",
+      "pylab.yscale(\"log\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.write_out(\"halo_trajectories.txt\")\n",
+      "trajs.write_out_h5(\"halo_trajectories.h5\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Important Caveats"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "* Parallelization is not yet implemented.\n",
+      "* For large datasets, constructing trajectories can be very slow. We are working on optimizing the algorithm for a future release. \n",
+      "* At the moment, trajectories are limited for particles that exist in every dataset. Therefore, for codes like FLASH that allow for particles to exit the domain (and hence the simulation) for certain types of boundary conditions, you need to insure that the particles you wish to examine exist in all datasets in the time series from the beginning to the end. If this is not the case, `ParticleTrajectories` will throw an error. This is a limitation we hope to relax in a future release. "
+     ]
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r fc1dadb6b876b7f61ac3d1762af5e1790b3f7bd2 -r 3c8695e699ded335d9b8c79581b513bada392079 source/analyzing/analysis_modules/index.rst
--- a/source/analyzing/analysis_modules/index.rst
+++ b/source/analyzing/analysis_modules/index.rst
@@ -38,3 +38,4 @@
 
    two_point_functions
    clump_finding
+   particle_trajectories

diff -r fc1dadb6b876b7f61ac3d1762af5e1790b3f7bd2 -r 3c8695e699ded335d9b8c79581b513bada392079 source/analyzing/analysis_modules/particle_trajectories.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/particle_trajectories.rst
@@ -0,0 +1,4 @@
+Particle Trajectories
+-----------------------------------------
+
+.. notebook:: Particle_Trajectories.ipynb


https://bitbucket.org/yt_analysis/yt-doc/commits/02502401d183/
Changeset:   02502401d183
User:        ngoldbaum
Date:        2013-11-04 02:17:56
Summary:     Merged in jzuhone/yt-doc (pull request #111)

Particle Trajectory Docs
Affected #:  4 files

diff -r 4c30777c6ada6e32ae197a94ffb417478103b389 -r 02502401d183eb3d121da1a00a6c3513a98e490d source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- /dev/null
+++ b/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -0,0 +1,367 @@
+{
+ "metadata": {
+  "name": ""
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `particle_trajectories` analysis module enables the construction of particle trajectories from a time series of datasets for a specified list of particles identified by their unique indices. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "from yt.imods import *\n",
+      "from yt.analysis_modules.api import ParticleTrajectories\n",
+      "from yt.config import ytcfg\n",
+      "path = ytcfg.get(\"yt\", \"test_data_dir\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/Orbit/orbit_hdf5_chk_00[0-9][0-9]\")\n",
+      "my_fns.sort()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "fields = [\"particle_velocity_x\", \"particle_velocity_y\", \"particle_velocity_z\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(my_fns[0])\n",
+      "dd = pf.h.all_data()\n",
+      "indices = dd[\"particle_index\"].astype(\"int\")\n",
+      "print indices"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "which is what we expected them to be. Now we're ready to create a `ParticleTrajectories` object:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs = ParticleTrajectories(my_fns, indices, fields=fields)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "print trajs[\"particle_position_x\"]\n",
+      "print trajs[\"particle_position_x\"].shape"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_position_x\"][0], trajs[\"particle_position_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_position_x\"][1], trajs[\"particle_position_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "And we can plot the velocity fields as well:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_velocity_x\"][0], trajs[\"particle_velocity_y\"][0])\n",
+      "pylab.plot(trajs[\"particle_velocity_x\"][1], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "If we want to access the time along the trajectory, we use the key `\"particle_time\"`:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_x\"][1])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_velocity_y\"][1])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "particle1 = trajs.trajectory_from_index(1)\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_x\"])\n",
+      "pylab.plot(particle1[\"particle_time\"], particle1[\"particle_position_y\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset, find the location of maximum density in the domain, and obtain the indices of the particles within some radius of that point. First, let's have a look at what we're working with:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pf = load(path+\"/enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = SlicePlot(pf, \"x\", [\"Density\",\"Dark_Matter_Density\"], center=\"max\", width=(3.0, \"mpc\"))\n",
+      "slc.show()"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "So far, so good: it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles (identified by `\"particle_type == 1\"`) within a sphere of radius 0.5 Mpc:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "sp = pf.h.sphere(\"max\", (0.5, \"mpc\"))\n",
+      "indices = sp[\"particle_index\"][sp[\"particle_type\"] == 1]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Next we'll get the list of datasets we want, and create trajectories for these particles:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "my_fns = glob.glob(path+\"/enzo_tiny_cosmology/DD*/*.hierarchy\")\n",
+      "my_fns.sort()\n",
+      "trajs = ParticleTrajectories(my_fns, indices)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Matplotlib can make 3D plots, so let's pick three of the particle trajectories (here, indices 100, 8, and 25) and look at them in the volume:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import matplotlib.pyplot as plt\n",
+      "from mpl_toolkits.mplot3d import Axes3D\n",
+      "fig = plt.figure(figsize=(8.0, 8.0))\n",
+      "ax = fig.add_subplot(111, projection='3d')\n",
+      "ax.plot(trajs[\"particle_position_x\"][100], trajs[\"particle_position_y\"][100], trajs[\"particle_position_z\"][100])\n",
+      "ax.plot(trajs[\"particle_position_x\"][8], trajs[\"particle_position_y\"][8], trajs[\"particle_position_z\"][8])\n",
+      "ax.plot(trajs[\"particle_position_x\"][25], trajs[\"particle_position_y\"][25], trajs[\"particle_position_z\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "It looks like these three different particles fell into the cluster along different filaments. We can also look at just their x-positions as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"particle_position_x\"][25])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, `yt` will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.add_fields([\"Density\"])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We also could have included `\"Density\"` in our original field list. Now, plot up the gas density for each particle as a function of time:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][100])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][8])\n",
+      "pylab.plot(trajs[\"particle_time\"], trajs[\"Density\"][25])\n",
+      "pylab.yscale(\"log\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "trajs.write_out(\"halo_trajectories.txt\")\n",
+      "trajs.write_out_h5(\"halo_trajectories.h5\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Important Caveats"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "* Parallelization is not yet implemented.\n",
+      "* For large datasets, constructing trajectories can be very slow. We are working on optimizing the algorithm for a future release. \n",
+      "* At the moment, trajectories are limited to particles that exist in every dataset. Therefore, for codes like FLASH that allow particles to exit the domain (and hence the simulation) under certain boundary conditions, you need to ensure that the particles you wish to examine exist in all datasets in the time series, from beginning to end. If this is not the case, `ParticleTrajectories` will throw an error. This is a limitation we hope to relax in a future release."
+     ]
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file
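The HDF5 output written by the last code cell above can be inspected directly. A minimal sketch, assuming h5py is available; it only lists the top-level members of the file and makes no assumption about the internal layout that write_out_h5 produced:

    import h5py

    # Peek inside the HDF5 trajectory file written by write_out_h5 above.
    # This simply prints the top-level member names; it does not assume
    # any particular dataset layout.
    f = h5py.File("halo_trajectories.h5", "r")
    for name in f:
        print name
    f.close()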

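On the last caveat in the notebook: one way to avoid the error is to pare the candidate index list down to particles that are present in every dataset before constructing the trajectories. A minimal sketch, assuming the same yt 2.x-style load and "particle_index" field used in the notebook; common_particle_indices is an illustrative helper name, not a yt API:

    import numpy as np
    from yt.mods import load

    def common_particle_indices(filenames, candidate_indices):
        # Keep only the particle indices that appear in every dataset, so
        # that ParticleTrajectories will not fail on particles that leave
        # the domain. Note this loads particle indices from each dataset,
        # so it can be slow for long time series.
        common = set(int(i) for i in candidate_indices)
        for fn in filenames:
            pf = load(fn)
            dd = pf.h.all_data()
            common &= set(int(i) for i in dd["particle_index"])
        return np.array(sorted(common))

The result can then be passed to ParticleTrajectories in place of the raw selection, e.g. indices = common_particle_indices(my_fns, sp["particle_index"]).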
diff -r 4c30777c6ada6e32ae197a94ffb417478103b389 -r 02502401d183eb3d121da1a00a6c3513a98e490d source/analyzing/analysis_modules/index.rst
--- a/source/analyzing/analysis_modules/index.rst
+++ b/source/analyzing/analysis_modules/index.rst
@@ -38,3 +38,4 @@
 
    two_point_functions
    clump_finding
+   particle_trajectories

diff -r 4c30777c6ada6e32ae197a94ffb417478103b389 -r 02502401d183eb3d121da1a00a6c3513a98e490d source/analyzing/analysis_modules/particle_trajectories.rst
--- /dev/null
+++ b/source/analyzing/analysis_modules/particle_trajectories.rst
@@ -0,0 +1,4 @@
+Particle Trajectories
+-----------------------------------------
+
+.. notebook:: Particle_Trajectories.ipynb

diff -r 4c30777c6ada6e32ae197a94ffb417478103b389 -r 02502401d183eb3d121da1a00a6c3513a98e490d source/examining/supported_frontends_data.rst
--- a/source/examining/supported_frontends_data.rst
+++ b/source/examining/supported_frontends_data.rst
@@ -125,7 +125,6 @@
   positions will not be.
 * Domains may be visualized assuming periodicity.
 
-<<<<<<< local
 .. _loading-ramses-data:
 
 RAMSES Data

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


