[Yt-svn] commit/yt-doc: 4 new changesets

Bitbucket commits-noreply at bitbucket.org
Tue Apr 5 19:35:32 PDT 2011


4 new changesets in yt-doc:

http://bitbucket.org/yt_analysis/yt-doc/changeset/1a90e6d0cac5/
changeset:   r57:1a90e6d0cac5
user:        samskillman
date:        2011-03-29 03:23:43
summary:     Adding first go at streamline docs.
affected #:  4 files (4.4 KB)

--- a/source/reference/api/data_sources.rst	Thu Mar 10 13:04:20 2011 -0800
+++ b/source/reference/api/data_sources.rst	Mon Mar 28 21:23:43 2011 -0400
@@ -23,6 +23,7 @@
    ~yt.data_objects.data_containers.AMRGridCollectionBase
    ~yt.data_objects.data_containers.AMRRayBase
    ~yt.data_objects.data_containers.AMROrthoRayBase
+   ~yt.data_objects.data_containers.AMRStreamlineBase
    ~yt.data_objects.data_containers.AMRProjBase
    ~yt.data_objects.data_containers.AMRRegionBase
    ~yt.data_objects.data_containers.AMRSliceBase


--- a/source/reference/api/extension_types.rst	Thu Mar 10 13:04:20 2011 -0800
+++ b/source/reference/api/extension_types.rst	Mon Mar 28 21:23:43 2011 -0400
@@ -50,6 +50,7 @@
    ~yt.visualization.volume_rendering.grid_partitioner.HomogenizedVolume
    ~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction
    ~yt.utilities.amr_utils.PartitionedGrid
+   ~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree
    ~yt.visualization.volume_rendering.camera.PerspectiveCamera
    ~yt.visualization.volume_rendering.transfer_functions.PlanckTransferFunction
    ~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction
@@ -59,6 +60,21 @@
 
 .. _image_writer:
 
+Streamlining
+----------------
+
+See also :ref:`streamlines`.
+
+.. py:module:: yt.visualization.streamlines
+
+.. autosummary::
+   :toctree: generated/
+
+   ~yt.visualization.streamlines.Streamlines
+   ~yt.visualization.streamlines.Streamlines.integrate_through_volume
+   ~yt.visualization.streamlines.Streamlines.path
+   ~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree
+
 Image Writing
 -------------
 


--- a/source/visualizing/index.rst	Thu Mar 10 13:04:20 2011 -0800
+++ b/source/visualizing/index.rst	Mon Mar 28 21:23:43 2011 -0400
@@ -8,3 +8,5 @@
    callbacks
    volume_rendering
    image_panner
+   streamlines
+


--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/source/visualizing/streamlines.rst	Mon Mar 28 21:23:43 2011 -0400
@@ -0,0 +1,115 @@
+.. _streamlines:
+
+Streamlining
+================
+.. versionadded:: 2.1
+
+Streamlines, as implemented in ``yt``, are defined as being parallel to a
+vector field at all points.  While commonly used to follow the
+velocity flow or magnetic field lines, they can be defined to follow
+any three-dimensional vector field.  Once an initial condition and
+total length of the streamline are specified, the streamline is
+uniquely defined.    
+
+Method
+----------------
+
+Streamlining through a volume is useful for a variety of analysis
+tasks.  By specifying a set of starting positions, the user receives
+a set of 3D positions that can, in turn, be used to visualize
+the 3D path of the streamlines.  Additionally, individual streamlines
+can be converted into
+:class:`~yt.data_objects.data_containers.AMRStreamlineBase` objects,
+and queried for all the available fields along the streamline.
+
+The implementation of streamlining in ``yt`` is described below.
+
+#. Decompose the volume into a set of non-overlapping bricks that fully
+   tile the domain, using the
+   :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` homogenized
+   volume.
+#. For every streamline starting position:
+
+   #. While the length of the streamline is less than the requested
+      length:
+
+      #. Find the brick that contains the current position.
+      #. If not already present, generate vertex-centered data for
+         the vector fields defining the streamline.
+      #. While inside the brick:
+
+         #. Integrate the streamline path using a fourth-order
+            Runge-Kutta (RK4) method and the vertex-centered data.
+         #. During the intermediate steps of each RK4 step, if the
+            position is updated to outside the current brick,
+            interrupt the integration and locate a new brick at the
+            intermediate position.
+
+#. The set of streamline positions is stored in the
+   :obj:`~yt.visualization.streamlines.Streamlines.streamlines` object.
+
+Example Script
+++++++++++++++++
+
+.. code-block:: python
+
+    from yt.mods import *
+    from yt.visualization.api import Streamlines
+    
+    pf = load('DD1701')  # Load the parameter file
+    # Seed N starting positions in a cube of side `scale` around the center.
+    c = na.array([0.5] * 3)
+    N = 100
+    scale = 1.0
+    pos_dx = na.random.random((N, 3)) * scale - scale / 2.
+    pos = c + pos_dx
+    
+    streamlines = Streamlines(pf, pos, 'x-velocity', 'y-velocity',
+                              'z-velocity', length=1.0)
+    streamlines.integrate_through_volume()
+    
+    import matplotlib.pylab as pl
+    from mpl_toolkits.mplot3d import Axes3D
+    fig=pl.figure() 
+    ax = Axes3D(fig)
+    for stream in streamlines.streamlines:
+        # Mask out the unused, zero-padded entries before plotting.
+        stream = stream[na.all(stream != 0.0, axis=1)]
+        ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1)
+    pl.savefig('streamlines.png')
+
+
+Data Access Along the Streamline
+--------------------------------
+
+Once the streamlines are found, an
+:class:`~yt.data_objects.data_containers.AMRStreamlineBase` object can
+be created using the
+:meth:`~yt.visualization.streamlines.Streamlines.path` function, which
+takes as input the index of the streamline requested. This conversion
+is done by creating a mask that defines where the streamline is, and
+creating 't' and 'dts' fields that define the dimensionless streamline
+integration coordinate and integration step size. Once defined, fields
+can be accessed in the standard manner.
+
+Example Script
+++++++++++++++++
+
+.. code-block:: python
+
+    from yt.mods import *
+    from yt.visualization.api import Streamlines
+    import matplotlib.pylab as pl
+    
+    pf = load('DD1701')  # Load the parameter file
+    streamlines = Streamlines(pf, [0.5] * 3)
+    streamlines.integrate_through_volume()
+    stream = streamlines.path(0)
+    pl.semilogy(stream['t'], stream['Density'], '-x')
+
+
+Running in Parallel
+--------------------
+
+The integration of the streamline paths is trivially parallelized by
+splitting the streamlines up between the processors.  Upon completion,
+each processor has access to all of the streamlines through the use of
+a reduction operation.
+
+Parallel usage is specified using the standard `--parallel` flag.
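
The numbered method above reduces to a short piece of arithmetic at its
core.  Here is a minimal, self-contained sketch of a single RK4 update; it
is not yt's implementation -- the ``velocity_at`` callback and the fixed
step size ``h`` stand in for yt's per-brick vertex-centered interpolation
and step control:

    import numpy as np

    def rk4_step(pos, velocity_at, h):
        # One fourth-order Runge-Kutta update of a streamline position.
        # `velocity_at` maps a 3D position to the local vector field value.
        k1 = velocity_at(pos)
        k2 = velocity_at(pos + 0.5 * h * k1)
        k3 = velocity_at(pos + 0.5 * h * k2)
        k4 = velocity_at(pos + h * k3)
        return pos + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    # Toy field: solid-body rotation about the z-axis, whose streamlines
    # are circles -- a quick sanity check for the integrator.
    def rotation(p):
        return np.array([-p[1], p[0], 0.0])

    pos = np.array([1.0, 0.0, 0.0])
    path = [pos]
    for _ in range(200):
        pos = rk4_step(pos, rotation, h=0.05)
        path.append(pos)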

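The 't' and 'dts' fields described above also make simple path integrals
easy to write down.  A hedged illustration follows, assuming (as the new
document states) that 'dts' holds the per-step integration step size along
the streamline:

    from yt.mods import *
    from yt.visualization.api import Streamlines

    pf = load('DD1701')
    streamlines = Streamlines(pf, [0.5] * 3)
    streamlines.integrate_through_volume()
    stream = streamlines.path(0)

    # Approximate the path integral of Density along the streamline as
    # a Riemann sum over the per-step sizes 'dts'.
    column = (stream['Density'] * stream['dts']).sum()
    print column
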

http://bitbucket.org/yt_analysis/yt-doc/changeset/3f0450e17041/
changeset:   r58:3f0450e17041
user:        samskillman
date:        2011-03-29 03:27:53
summary:     Merging
affected #:  27 files (17.9 KB)

--- a/helper_scripts/update_recipes.py	Mon Mar 28 21:23:43 2011 -0400
+++ b/helper_scripts/update_recipes.py	Mon Mar 28 21:27:53 2011 -0400
@@ -23,7 +23,7 @@
     recipes = cStringIO.StringIO()
 recipes.write(header)
 
-url = "here: http://hg.enzotools.org/cookbook/raw-file/tip/%s ."
+url = "here: http://hg.enzotools.org/cookbook/raw/tip/%s ."
 
 def cond_output(f, v):
     if not v:


--- a/source/advanced/debugdrive.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/advanced/debugdrive.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -56,7 +56,7 @@
 
 .. code-block:: bash
 
-   $ python2.6 some_problematic_script.py --paste
+   $ python2.7 some_problematic_script.py --paste
 
 The ``--paste`` option has to come after the name of the script.  When the
 script dies and prints its error, it will also submit that error to the
@@ -109,7 +109,7 @@
 
 .. code-block:: bash
 
-   $ mpirun -np 4 python2.6 some_script.py --parallel --rpdb
+   $ mpirun -np 4 python2.7 some_script.py --parallel --rpdb
 
 and it reaches an error or an exception, it will launch the debugger.
 Additionally, instructions will be printed for connecting to the debugger.


--- a/source/advanced/developing.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/advanced/developing.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -34,6 +34,120 @@
 still want to contribute, just drop me a line and I'll put a link on the main
 wiki page to wherever you like!
 
+.. _bootstrap-dev:
+
+Bootstrapping Your Development Environment
+------------------------------------------
+
+Getting up and running with developing yt can be somewhat daunting.  To assist
+with that, yt provides a 'bootstrap' script that handles a couple of the more
+annoying items on the checklist -- getting set up on BitBucket, creating a
+pasteboard, and adding a couple handy extensions to Mercurial.  As time goes
+on, we hope that we will be able to use the extensions added during this
+process to both issue forks and pull requests to BitBucket, enabling much more
+rapid and easy development.  To run the script, on the command line type::
+
+   $ yt bootstrap_dev
+
+.. note:: Although the bootstrap script will manipulate and modify your
+   ``~/.hgrc`` and possibly your BitBucket repositories, it will ask before
+   doing anything.  You should feel free to Ctrl-C out at any time.  If you
+   wish to inspect the source code of the bootstrap script, it is located in
+   ``yt/utilities/command_line.py`` in the function ``do_bootstrap``.
+
+Here is the list of items that the script will attempt to accomplish, along
+with a brief motivation of each.  
+
+ #. **Ensure that the yt-supplemental repository is checked out into
+    ``YT_DEST``**.  To make sure that the extensions we're going to use to
+    facilitate mercurial usage are checked out and ready to go, we optionally
+    clone the repository here.  If you've run with a recent install script,
+    this won't be necessary.
+ #. **Create an ``~/.hgrc`` if it does not exist, and add your username**.
+    Because Mercurial's changesets are all signed with a username, we make sure
+    that your username is set in your ``~/.hgrc``.  The script will prompt you
+    for what you would like it to be.  When committing to yt, we strongly
+    prefer you set it to be of the form "Firstname Lastname
+    <email at address.com>".  If you want to skip this step, simply set the
+    configuration value yourself in ``~/.hgrc``.  Any of the above-listed
+    tutorials on hg can help with this.
+ #. **Create a BitBucket user if you do not have one**.  Because yt is developed
+    on the source code hosting site `BitBucket <http://bitbucket.org/>`_, we
+    make sure that you're set up to have a username there.  You should not feel
+    obliged to do this step if you do not want to, but it provides a much more
+    convenient mechanism for sharing changes, reporting issues, and
+    contributing to the yt wiki.  It also provides a location to host an
+    unlimited number of publicly accessible repositories, if you wish to share
+    other pieces of code with other users.  (See :ref:`included-hg-extensions`
+    for more information about this.)
+ #. **Turn on the ``hgbb`` and ``cedit`` extensions in ``~/.hgrc``**.  This sets
+    up these extensions, described below.  It amounts to adding them to the
+    ``[extensions]`` section and adding your BitBucket username to the ``[bb]``
+    section.
+ #. **Create a pasteboard repository**.  This is the step that is probably the
+    most fun.  yt now comes with pasteboard facilities.  A pasteboard is like a
+    pastebin, except designed to be more persistent -- it's a versioned
+    repository that contains scripts with descriptions, which are automatically
+    posted to the web.  You can download from your pasteboard programmatically
+    using the ``yt pasteboard`` command, and you can download from other
+    pasteboards using the ``yt pastegrab`` command.  For more information, see
+    :ref:`pasteboards`.  This repository will be created on BitBucket, and will
+    be of the name ``your_username.bitbucket.org``, which is also the web
+    address it will be hosted at.
+
+And that's it!  If you run into any trouble, please email ``yt-dev`` with your
+concerns, questions or error messages.  This should put you in a good place to
+start developing yt efficiently.
+
+.. _included-hg-extensions:
+ 
+Included hg Extensions
+^^^^^^^^^^^^^^^^^^^^^^
+
+Mercurial is written in Python, and as such is easily extensible by scripts.
+It comes with a number of extensions, descriptions of which you can find on
+the Mercurial wiki under `UsingExtensions
+<http://mercurial.selenic.com/wiki/UsingExtensions>`_; some of my favorites
+are transplant, extdiff, color and progress.  yt now comes bundled with a few
+additional extensions, which should make interacting with other repositories
+and BitBucket a bit easier.
+
+The first of these is ``hgbb``, a Mercurial extension that interacts
+with the public-facing BitBucket API.  It adds several commands, and you can
+get information about these commands by typing: ::
+
+   $ hg help COMMANDNAME
+
+It also adds the URL-specifier ``bb://USERNAME/reponame`` for convenience; this
+means you can reference ``sskory/yt`` to see Stephen's yt fork, for instance.
+
+The most fun of these commands are:
+
+``bbcreate``
+   This creates a new repository on BitBucket and clones it locally.  This is
+   really cool and very convenient when developing.
+``bbforks``
+   This shows the status of all known forks of a given repository, and can show
+   the incoming and outgoing changesets.  You can use this to see what
+   changesets are different between yours and another repository.
+
+As time goes on, and as the BitBucket API is expanded to cover things like
+forking and pull requests, we hope that this extension will also expand.
+
+The other extension that is currently bundled with yt is the ``cedit``
+extension.  This adds the ability to add, remove and set configuration options
+from the command line.  This brings with it the ability to add new sources for
+Mercurial repositories -- for instance, if you become aware of a different
+source repository you want to be able to pull from, you can add it as a source
+and then pull from it directly.
+
+The new commands you may be interested in are:
+
+``cedit``
+   Set an option in either the local or the global configuration file.
+``addsource``
   Add a Mercurial repository to the ``[paths]`` section of the local repository.
+
 How To Get The Source Code
 --------------------------
 
@@ -57,7 +171,7 @@
 
 .. code-block:: bash
 
-   $ python2.6 setup.py develop
+   $ python2.7 setup.py develop
 
 This will rebuild all C modules as well.
 
@@ -248,6 +362,8 @@
  * Variable names should be short but descriptive.
  * No globals!
 
+.. _project-ideas:
+
 Project Ideas
 -------------
 


--- a/source/advanced/external_analysis.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/advanced/external_analysis.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -186,7 +186,7 @@
 
 .. code-block:: bash
 
-   $ python2.6 axes_calculator_setup.py build_ext -i
+   $ python2.7 axes_calculator_setup.py build_ext -i
 
 Note that since we don't yet have an ``axes_calculator.pyx``, this will fail.
 But once we have it, it ought to run.


--- a/source/advanced/installing.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/advanced/installing.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -107,7 +107,7 @@
 
 .. code-block:: bash
 
-   $ python2.6 setup.py install
+   $ python2.7 setup.py install
 
 from the ``yt-hg`` directory.  Alternately, you can replace ``install`` with
 ``develop`` if you anticipate making any modifications to the code; ``develop``


--- a/source/advanced/parallel_computation.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/advanced/parallel_computation.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -55,7 +55,7 @@
 
 .. code-block:: bash
 
-   $ mpirun -np 16 python2.6 my_script.py --parallel
+   $ mpirun -np 16 python2.7 my_script.py --parallel
 
 if you wanted it to run in parallel.  If you run into problems, then you can use
 :ref:`remote-debugging` to examine what went wrong.


--- a/source/analysis_modules/clump_finding.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/analysis_modules/clump_finding.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -24,3 +24,94 @@
 Once the clump-finder has finished, the user can write out a set of quantities for each clump in the 
 hierarchy.  Additional info items can also be added.  We also provide a recipe
 for finding clumps in :ref:`cookbook-find_clumps`.
+
+Treecode Optimization
+---------------------
+
+.. sectionauthor:: Stephen Skory <s at skory.us>
+.. versionadded:: 2.1
+
+As mentioned above, the user has the option to limit clumps to those that are
+gravitationally bound.
+The correct and accurate way to calculate if a clump is gravitationally
+bound is to do the full double sum:
+
+.. math::
+
+  PE = \sum_{i=1}^N \sum_{j=i+1}^N \frac{G M_i M_j}{r_{ij}}
+
+where :math:`PE` is the gravitational potential energy of the :math:`N` cells,
+:math:`G` is the gravitational constant, :math:`M_i` is the mass of cell
+:math:`i`, and :math:`r_{ij}` is the distance between cells :math:`i` and
+:math:`j`. The number of calculations required grows with the square of
+:math:`N`, so for large clumps with many cells the test for boundedness can
+take a significant amount of time.
+
+An effective way to greatly speed up this calculation with minimal error
+is to use the treecode approximation pioneered by
+`Barnes and Hut (1986) <http://adsabs.harvard.edu/abs/1986Natur.324..446B>`_.
+This method of calculating gravitational potentials works by grouping
+individual masses that are located close together into a larger, conglomerated
+mass with a geometric size that spans the distribution of the individual
+masses.  For a mass cell that is sufficiently distant from the conglomerated
+mass, the gravitational calculation can be made using the conglomerate rather
+than each individual mass, which saves time.
+
+The decision whether or not to use a conglomerate depends on the accuracy control
+parameter ``opening_angle``. Using the small-angle approximation, a conglomerate
+may be used if its geometric size subtends an angle no greater than the
+``opening_angle`` upon the remote mass. The default value is
+``opening_angle = 1``, which gives errors well under 1%. A value of
+``opening_angle = 0`` is identical to the full :math:`O(N^2)` method, and
+larger values will speed up the calculation at the cost of accuracy.
+
+The treecode method is iterative: conglomerates may themselves form larger
+conglomerates, and if a larger conglomerate does not meet the ``opening_angle``
+criterion, the smaller conglomerates it contains are tested as well. This
+iteration ceases once the level of the original masses is reached (which is
+what happens when ``opening_angle = 0``).
+
+Below are some examples of how to control the usage of the treecode.
+
+This example will calculate the ratio of the potential energy to kinetic energy
+for a spherical clump using the treecode method with an opening angle of 2.
+The default opening angle is 1.0:
+
+.. code-block:: python
+  
+  from yt.mods import *
+  
+  pf = load("DD0000")
+  sp = pf.h.sphere([0.5, 0.5, 0.5], radius=0.1)
+  
+  ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+      treecode=True, opening_angle=2.0)
+
+This example will accomplish the same as the above, but will use the full
+:math:`O(N^2)` method.
+
+.. code-block:: python
+  
+  from yt.mods import *
+  
+  pf = load("DD0000")
+  sp = pf.h.sphere([0.5, 0.5, 0.5], radius=0.1)
+  
+  ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+      treecode=False)
+
+Here the treecode method is specified for clump finding (this is the default).
+Please see the link above for the full example of how to find clumps; note
+that the trailing backslash in the function string is important:
+
+.. code-block:: python
+  
+  function_name = 'self.data.quantities["IsBound"](truncate=True, \
+      include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
+  master_clump = amods.level_sets.Clump(data_source, None, field,
+      function=function_name)
+
+To turn off the treecode, simply set ``treecode=False`` in the
+example above.
+
+
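
To make the double sum and the opening-angle criterion above concrete,
here is a minimal sketch.  It is not yt's implementation -- the ``Node``
layout is hypothetical -- but the two functions mirror the full O(N^2)
pair sum and the small-angle test described in the text:

    import numpy as np

    def direct_pe(m, r, G=1.0):
        # The full double sum over unique pairs -- what you get with
        # opening_angle = 0.
        pe = 0.0
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                pe += G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
        return pe

    class Node(object):
        # Hypothetical conglomerate: total mass, center of mass, geometric
        # size, and sub-conglomerates (an empty list marks a single cell).
        def __init__(self, mass, center, size, children=()):
            self.mass, self.center = mass, np.asarray(center)
            self.size, self.children = size, list(children)

    def potential_at(point, node, opening_angle=1.0, G=1.0):
        d = np.linalg.norm(np.asarray(point) - node.center)
        # Small-angle approximation: the node subtends ~size/d radians.
        # If that is within opening_angle, the conglomerate stands in
        # for all of its members.
        if not node.children or node.size / d <= opening_angle:
            return -G * node.mass / d
        # Too close: descend into the smaller conglomerates.
        return sum(potential_at(point, c, opening_angle, G)
                   for c in node.children)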


--- a/source/analyzing/index.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/analyzing/index.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -9,3 +9,4 @@
    particles
    creating_derived_fields
    generating_processed_data
+   time_series_analysis


--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/source/analyzing/time_series_analysis.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -0,0 +1,121 @@
+.. _time-series-analysis:
+
+Time Series Analysis
+====================
+
+Often, one wants to analyze a continuous set of outputs from a simulation in a
+uniform manner.  A simple example would be to calculate the peak density in a
+set of outputs that were written out.  The problem with time series analysis
+in yt has generally been one of verbosity and clunkiness. Typically, unless
+using the :class:`~yt.analysis_modules.simulation_handling.EnzoSimulation`
+class (which is currently available only for Enzo), one sets up a loop:
+
+.. code-block:: python
+
+   for pfi in range(30):
+       fn = "DD%04i/DD%04i" % (pfi, pfi)
+       pf = load(fn)
+       process_output(pf)
+
+This is not very nice, and it ends up requiring a lot of maintenance.
+The :class:`~yt.data_objects.time_series.TimeSeriesData` object has been
+designed to remove some of this clunkiness and present an easier, more unified
+approach to analyzing sets of data.  Furthermore, future versions of yt will
+automatically parallelize operations conducted on time series of data.
+
+The idea behind the current implementation of time series analysis is that
+the underlying data and the operators that act on that data can and should be
+distinct.  There are several operators provided, as well as facilities for
+creating your own, and these operators can be applied either to datasets on the
+whole or to subregions of individual datasets.
+
+The simplest mechanism for creating a ``TimeSeriesData`` object is to use the
+class method
+:meth:`~yt.data_objects.time_series.TimeSeriesData.from_filenames`.  This
+method accepts a list of strings that can be supplied to ``load``.  For
+example:
+
+.. code-block:: python
+
+   from yt.mods import *
+   filenames = ["DD0030/output_0030", "DD0040/output_0040"]
+   ts = TimeSeriesData.from_filenames(filenames)
+
+This will create a new time series, populated with the output files ``DD0030``
+and ``DD0040``.  This object, here called ``ts``, can now be analyzed in bulk.
+
+Simple Analysis Tasks
+---------------------
+
+The available tasks that come built-in can be seen by looking at the output of
+``ts.tasks.keys()``.  For instance, one of the simplest ones is the
+``MaximumValue`` task.  We can execute this task by calling it with the field whose
+maximum value we want to evaluate:
+
+.. code-block:: python
+
+   from yt.mods import *
+   import glob
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   max_rho = ts.tasks["MaximumValue"]("Density")
+
+When we call the task, the time series object executes the task on each
+component parameter file.  The results are then returned to the user.  More
+complex, multi-task evaluations can be conducted by using the
+:meth:`~yt.data_objects.time_series.TimeSeriesData.eval` call, which accepts a
+list of analysis tasks.
+
+Analysis Tasks Applied to Objects
+---------------------------------
+
+Just as some tasks can be applied to datasets as a whole, one can also apply
+the creation of objects to datasets.  This means that you can construct a
+generalized "sphere" operator that will be created inside all datasets, from
+which you can then calculate derived quantities (see :ref:`derived-quantities`).
+
+For instance, imagine that you wanted to create a sphere that is centered on
+the most dense point in the simulation and that is 1 pc in radius, and then
+calculate the angular momentum vector on this sphere.  You could do that with
+this script:
+
+.. code-block:: python
+
+   from yt.mods import *
+   import glob
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   sphere = ts.sphere("max", (1.0, "pc"))
+   L_vecs = sphere.quantities["AngularMomentumVector"]()
+
+Note that we have specified the units differently than usual -- the time series
+objects allow units as a tuple, so that in cases where units may change over
+the course of several outputs they are correctly set at all times.  This script
+simply sets up the time series object, creates a sphere, and then runs
+quantities on it.  It is designed to look very similar to the code that would
+conduct this analysis on a single output.
+
+All of the objects listed in :ref:`available-objects` are made available in
+the same manner as "sphere" was used above.
+
+Creating Analysis Tasks
+-----------------------
+
+If you wanted to look at the mass in star particles as a function of time, you
+would write a function that accepts ``params`` and ``pf`` and then decorate it
+with ``analysis_task``. Here we have done so:
+
+.. code-block:: python
+
+   @analysis_task(('particle_type',))
+   def MassInParticleType(params, pf):
+       dd = pf.h.all_data()
+       ptype = (dd["particle_type"] == params.particle_type)
+       return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
+
+   ms = ts.tasks["MassInParticleType"](4)
+   print ms
+
+This allows you to create your own analysis tasks that will be then available
+to time series data objects.  In the future, this will allow for transparent
+parallelization.
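
The task interface above can also be phrased as a plain loop over the
series, which is handy for one-off analyses.  This sketch assumes that
``TimeSeriesData`` is iterable over its component parameter files; if it
is not in your version, loop over the filename list and call ``load``
directly:

    from yt.mods import *
    import glob

    all_files = glob.glob("*/*.hierarchy")
    all_files.sort()
    ts = TimeSeriesData.from_filenames(all_files)

    # Record the time and peak density of each output; find_max returns
    # the maximum value and its location.
    peaks = []
    for pf in ts:
        value, location = pf.h.find_max("Density")
        peaks.append((pf.current_time, value))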


--- a/source/conf.py	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/conf.py	Mon Mar 28 21:27:53 2011 -0400
@@ -51,9 +51,9 @@
 # built documents.
 #
 # The short X.Y version.
-version = '2.0'
+version = '2.1'
 # The full version, including alpha/beta/rc tags.
-release = '2.0'
+release = '2.1beta'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.


--- a/source/cookbook/arbitrary_vectors_on_slice.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/arbitrary_vectors_on_slice.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -19,14 +19,19 @@
    pf = load(fn) # load data
    pc = PlotCollection(pf) # defaults to center at most dense point
    p = pc.add_slice("Density", ax) 
-   v1 = "magnetic_field_%s" % (axis_names[x_dict[ax]])
-   v2 = "magnetic_field_%s" % (axis_names[y_dict[ax]])
-   p.modify["quiver"](v1, v2) # This takes a few arguments, but we'll use the defaults
-                              # here.  You can control the 'skip' factor in the
-                              # vectors.
+   v1 = "%s-velocity" % (axis_names[x_dict[ax]])
+   v2 = "%s-velocity" % (axis_names[y_dict[ax]])
+   # This takes a few arguments, but we'll use the defaults here.  The third
+   # argument is the 'skip' factor -- every how-many pixels to put a vector.
+   p.modify["quiver"](v1, v2, 16)
    pc.set_width(2.5, 'mpc') # change width of all plots in pc
    pc.save(fn) # save all plots
    
 
+.. rubric:: Sample Output
 
+.. image:: _arbitrary_vectors_on_slice/arbitrary_vectors_on_slice_RedshiftOutput0005_Slice_x_Density.png
+   :width: 240
+   :target: ../_images/arbitrary_vectors_on_slice_RedshiftOutput0005_Slice_x_Density.png
 
+


--- a/source/cookbook/find_clumps.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/find_clumps.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -35,25 +35,25 @@
    c_max = 10**na.floor(na.log10(data_source[field]).max()+1)
    
   # Now get our 'base' clump -- this one just covers the whole domain.
-   master_clump = Clump(data_source, None, field)
+   master_clump = amods.level_sets.Clump(data_source, None, field)
    
    # This next command accepts our base clump and we say the range between which
    # we want to contour.  It recursively finds clumps within the master clump, at
    # intervals defined by the step size we feed it.  The current value is
    # *multiplied* by step size, rather than added to it -- so this means if you
    # want to look in log10 space intervals, you would supply step = 10.0.
-   find_clumps(master_clump, c_min, c_max, step)
+   amods.level_sets.find_clumps(master_clump, c_min, c_max, step)
    
    # As it goes, it appends the information about all the sub-clumps to the
    # master-clump.  Among different ways we can examine it, there's a convenience
    # function for outputting the full hierarchy to a file.
    f = open('%s_clump_hierarchy.txt' % pf,'w')
-   write_clump_hierarchy(master_clump,0,f)
+   amods.level_sets.write_clump_hierarchy(master_clump,0,f)
    f.close()
    
    # We can also output some handy information, as well.
    f = open('%s_clumps.txt' % pf,'w')
-   write_clumps(master_clump,0,f)
+   amods.level_sets.write_clumps(master_clump,0,f)
    f.close()
    # If you'd like to visualize these clumps, a list of clumps can be supplied to
    # the "clumps" callback on a plot.


--- a/source/cookbook/global_phase_plots.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/global_phase_plots.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -21,20 +21,30 @@
    pc = PlotCollection(pf) # defaults to center at most dense point
    
    # We plot the average x-velocity (mass-weighted) in our object as a function of
-   # Electron_Density and Temperature
-   plot=pc.add_phase_object(dd, ["Electron_Density","Temperature","x-velocity"]
+   # Density and Temperature
+   plot=pc.add_phase_object(dd, ["Density","Temperature","x-velocity"],
                    lazy_reader = True)
    
    # We now plot the average value of x-velocity as a function of temperature
    plot=pc.add_profile_object(dd, ["Temperature", "x-velocity"],
                    lazy_reader = True)
    
-   # Finally, the average electron density as a function of the magnitude of the
-   # velocity
-   plot=pc.add_profile_object(dd, ["Electron_Density", "VelocityMagnitude"],
+   # Finally, the velocity magnitude as a function of density
+   plot=pc.add_profile_object(dd, ["Density", "VelocityMagnitude"],
                    lazy_reader = True)
    pc.save() # save all plots
    
 
+.. rubric:: Sample Output
 
+.. image:: _global_phase_plots/global_phase_plots_RedshiftOutput0005_Profile1D_1_Temperature_x-velocity.png
+   :width: 240
+   :target: ../_images/global_phase_plots_RedshiftOutput0005_Profile1D_1_Temperature_x-velocity.png
+.. image:: _global_phase_plots/global_phase_plots_RedshiftOutput0005_Profile1D_2_Density_VelocityMagnitude.png
+   :width: 240
+   :target: ../_images/global_phase_plots_RedshiftOutput0005_Profile1D_2_Density_VelocityMagnitude.png
+.. image:: _global_phase_plots/global_phase_plots_RedshiftOutput0005_Profile2D_0_Density_Temperature_x-velocity.png
+   :width: 240
+   :target: ../_images/global_phase_plots_RedshiftOutput0005_Profile2D_0_Density_Temperature_x-velocity.png
 
+


--- a/source/cookbook/light_cone_halo_mask.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/light_cone_halo_mask.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -15,11 +15,11 @@
 .. code-block:: python
 
    
-   import yt.extensions.lightcone as LC
-   import yt.extensions.HaloProfiler as HP
+   from yt.mods import *
    
    # Instantiate a light cone object as usual.
-   lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4, 
+   lc = amods.light_cone.LightCone(
+                     "128Mpc256grid_SFFB.param", initial_redshift=0.4, 
                      final_redshift=0.0, observer_redshift=0.0,
                      field_of_view_in_arcminutes=600.0, 
                      image_resolution_in_arcseconds=60.0,
@@ -42,8 +42,8 @@
    # be called ("function"), the arguments of the function ("args"), and the 
    # keyword arguments of the function ("kwargs").
    # This item will add a virial filter.
-   halo_profiler_actions.append({'function': HP.HaloProfiler.add_halo_filter,
-                                 'args': [HP.VirialFilter],
+   halo_profiler_actions.append({'function': amods.halo_profiler.HaloProfiler.add_halo_filter,
+                                 'args': [amods.halo_profiler.VirialFilter],
                                  'kwargs': {'must_be_virialized':True, 
                                             'overdensity_field':'ActualOverdensity',
                                             'virial_overdensity':200,
@@ -51,7 +51,7 @@
                                             'virial_quantities':['TotalMassMsun','RadiusMpc']}})
    
    # This item will call the make_profile method to get the filtered halo list.
-   halo_profiler_actions.append({'function': HP.HaloProfiler.make_profiles,
+   halo_profiler_actions.append({'function': amods.halo_profiler.HaloProfiler.make_profiles,
                                  'kwargs': {'filename': "virial_filter.out"}})
    
    # Specify the desired halo list is the filtered list.
@@ -77,3 +77,4 @@
    
 
 
+


--- a/source/cookbook/make_light_cone.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/make_light_cone.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -10,16 +10,17 @@
 
 .. code-block:: python
 
-   import yt.extensions.lightcone as LC
+   from yt.mods import *
+   from yt.analysis_modules.light_cone.api import *
    
    # All of the light cone parameters are given as keyword arguments at instantiation.
-   lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4, 
-                     final_redshift=0.0, observer_redshift=0.0,
-                     field_of_view_in_arcminutes=450.0, 
-                     image_resolution_in_arcseconds=60.0,
-                     use_minimum_datasets=True, deltaz_min=0.0, 
-                     minimum_coherent_box_fraction=0.0,
-                     output_dir='LC', output_prefix='LightCone')
+   lc = LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4, 
+                  final_redshift=0.0, observer_redshift=0.0,
+                  field_of_view_in_arcminutes=450.0, 
+                  image_resolution_in_arcseconds=60.0,
+                  use_minimum_datasets=True, deltaz_min=0.0, 
+                  minimum_coherent_box_fraction=0.0,
+                  output_dir='LC', output_prefix='LightCone')
    
    # Calculate a light cone solution and write out a text file with the details 
    # of the solution.
@@ -30,12 +31,8 @@
    
    # Make the light cone projection, save individual images of each slice 
    # and of the projection as well as an hdf5 file with the full data cube.
-   # Add a label of the slice redshift to each individual image.
-   # The return value is the PlotCollection that holds the image data for 
-   # the final projection, allowing for additional customization of the 
-   # final image.
-   pc = lc.project_light_cone(field ,save_stack=True, save_slice_images=True, use_colorbar=False, 
-                              add_redshift_label=True)
+   lc.project_light_cone(field, save_stack=True, save_slice_images=True)
    
 
 
+


--- a/source/cookbook/multi_plot.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/multi_plot.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -5,7 +5,7 @@
 
 This is a simple recipe to show how to open a dataset and then plot a slice
 through it, centered at its most dense point.  For more information, see
-:func:`~yt.raven.get_multi_plot`.
+:func:`~yt.visualization.plot_collection.get_multi_plot`.
 
 The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/multi_plot.py .
 
@@ -26,7 +26,7 @@
    #   Number of plots on the x-axis, number of plots on the y-axis, and how we
   #   want our colorbars oriented.  (This governs where they will go, too.)
    #   bw is the base-width in inches, but 4 is about right for most cases.
-   fig, axes, colorbars = raven.get_multi_plot( 2, 1, colorbar=orient, bw = 4)
+   fig, axes, colorbars = get_multi_plot( 2, 1, colorbar=orient, bw = 4)
    
    # We'll use a plot collection, just for convenience's sake
    pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])


--- a/source/cookbook/multi_plot_3x2.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/multi_plot_3x2.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -5,7 +5,7 @@
 
 This is a simple recipe to show how to open a dataset and then plot a slice
 through it, centered at its most dense point.  For more information, see
-:func:`~yt.raven.get_multi_plot`.
+:func:`~yt.visualization.plot_collection.get_multi_plot`.
 
 The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/multi_plot_3x2.py .
 
@@ -26,7 +26,7 @@
    #   Number of plots on the x-axis, number of plots on the y-axis, and how we
   #   want our colorbars oriented.  (This governs where they will go, too.)
    #   bw is the base-width in inches, but 4 is about right for most cases.
-   fig, axes, colorbars = raven.get_multi_plot( 2, 3, colorbar=orient, bw = 4)
+   fig, axes, colorbars = get_multi_plot( 2, 3, colorbar=orient, bw = 4)
    
    # We'll use a plot collection, just for convenience's sake
    pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])


--- a/source/cookbook/offaxis_projection.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/offaxis_projection.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -15,8 +15,6 @@
 .. code-block:: python
 
    from yt.mods import * # set up our namespace
-   import yt.extensions.volume_rendering as vr
-   import yt.extensions.image_writer as iw
    
    fn = "RedshiftOutput0005" # parameter file to load
    
@@ -24,7 +22,7 @@
    
    # This operates on a pass-through basis, so you should not need to specify
    # limits.
-   tf = vr.ProjectionTransferFunction()
+   tf = ProjectionTransferFunction()
    
    # We don't want to take the log of Density, so we need to disable that here.
    # Note that if using the Camera interface, this does not need to be done.
@@ -60,7 +58,7 @@
    image = na.log10(vp.image[:,:,0]) 
    
    # And now, we call our direct image saver.  We save the log of the result.
-   iw.write_image(image, "%s_offaxis_projection.png" % pf)
+   write_image(image, "%s_offaxis_projection.png" % pf)
    
 
 .. rubric:: Sample Output


--- a/source/cookbook/overplot_particles.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/overplot_particles.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -18,8 +18,7 @@
    pf = load(fn) # load data
    pc = PlotCollection(pf, center=[0.5,0.5,0.5]) # defaults to center at most dense point
    p = pc.add_projection("Density", 0) # 0 = x-axis
-   # "nparticles" is slightly more efficient than "particles"
-   p.modify["nparticles"](1.0) # 1.0 is the 'width' we want for our slab of
+   p.modify["particles"](1.0) # 1.0 is the 'width' we want for our slab of
                                # particles -- this governs the allowable locations
                                # of particles that show up on the image
                                # NOTE: we can also supply a *ptype* to cut based


--- a/source/cookbook/run_halo_profiler.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/run_halo_profiler.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -11,15 +11,15 @@
 
 .. code-block:: python
 
-   import yt.extensions.HaloProfiler as HP
+   from yt.mods import *
    
    # Instantiate HaloProfiler for this dataset.
-   hp = HP.HaloProfiler("DD0242/DD0242")
+   hp = amods.halo_profiler.HaloProfiler("DD0242/DD0242")
    
    # Add a filter to remove halos that have no profile points with overdensity 
    # above 200, and with virial masses less than 1e14 solar masses.
    # Also, return the virial mass and radius to be written out to a file.
-   hp.add_halo_filter(HP.VirialFilter,must_be_virialized=True,
+   hp.add_halo_filter(amods.halo_profiler.VirialFilter,must_be_virialized=True,
                       overdensity_field='ActualOverdensity',
                       virial_overdensity=200,
                       virial_filters=[['TotalMassMsun','>=','1e14']],
@@ -46,3 +46,4 @@
    
 
 
+


--- a/source/cookbook/simple_pdf.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/simple_pdf.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -24,5 +24,10 @@
    pc.save(fn) # save all plots
    
 
+.. rubric:: Sample Output
 
+.. image:: _simple_pdf/simple_pdf_RedshiftOutput0005_Profile2D_0_Density_Temperature_CellMassMsun.png
+   :width: 240
+   :target: ../_images/simple_pdf_RedshiftOutput0005_Profile2D_0_Density_Temperature_CellMassMsun.png
 
+


--- a/source/cookbook/simple_projection.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/simple_projection.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -26,14 +26,14 @@
 
 .. rubric:: Sample Output
 
-.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_x_Density.png
+.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_x_Density_Density.png
    :width: 240
-   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_x_Density.png
-.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_y_Density.png
+   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_x_Density_Density.png
+.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_y_Density_Density.png
    :width: 240
-   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_y_Density.png
-.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_z_Density.png
+   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_y_Density_Density.png
+.. image:: _simple_projection/simple_projection_RedshiftOutput0005_Projection_z_Density_Density.png
    :width: 240
-   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_z_Density.png
+   :target: ../_images/simple_projection_RedshiftOutput0005_Projection_z_Density_Density.png
 
 


--- a/source/cookbook/simple_volume_rendering.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/simple_volume_rendering.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -5,7 +5,9 @@
 
 This recipe shows how to volume render a dataset.  There are a number of
 twiddles, and rough edges, and the process is still very much in beta.
-See :ref:`volume_rendering` for more information.
+See :ref:`volume_rendering` for more information.  In particular, this
+interface will do some things very easily, but it provides almost no
+customizability.  The Camera interface is recommended.
 
 Additionally, for the purposes of the recipe, we have simplified the image
 considerably.
@@ -15,8 +17,6 @@
 .. code-block:: python
 
    from yt.mods import * # set up our namespace
-   import yt.extensions.volume_rendering as vr
-   import yt.extensions.image_writer as iw
    
    fn = "RedshiftOutput0005" # parameter file to load
    
@@ -29,7 +29,7 @@
    
    # We supply the min/max we want the function to cover, in log.
    # For this dataset it's -31 and -27.
-   tf = vr.ColorTransferFunction((na.log10(mi), na.log10(ma)))
+   tf = ColorTransferFunction((na.log10(mi), na.log10(ma)))
    
    # Now we add some Gaussians on.  Work is underway to transform this into a
    # graphical user interface, and the initial steps can be found in
@@ -66,7 +66,7 @@
    vp.ray_cast()
    
    # And now, we call our direct image saver.  
-   iw.write_bitmap(vp.image, "%s_volume_rendered.png" % pf)
+   write_bitmap(vp.image, "%s_volume_rendered.png" % pf)
    
 
 .. rubric:: Sample Output


--- a/source/cookbook/simulation_halo_profiler.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/simulation_halo_profiler.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -10,19 +10,19 @@
 
 .. code-block:: python
 
-   import yt.extensions.EnzoSimulation as ES
-   import yt.extensions.HaloProfiler as HP
+   from yt.mods import *
    
-   es = ES.EnzoSimulation("simulation_parameter_file", initial_redshift=10, final_redshift=0)
+   es = amods.simulation_handler.EnzoSimulation(
+           "simulation_parameter_file", initial_redshift=10, final_redshift=0)
    
    # Loop over all dataset in the requested time interval.
    for output in es.allOutputs:
    
        # Instantiate HaloProfiler for this dataset.
-       hp = HP.HaloProfiler(output['filename'])
+       hp = amods.halo_profiler.HaloProfiler(output['filename'])
        
        # Add a virialization filter.
-       hp.add_halo_filter(HP.VirialFilter,must_be_virialized=True,
+       hp.add_halo_filter(amods.halo_profiler.VirialFilter,must_be_virialized=True,
                           overdensity_field='ActualOverdensity',
                           virial_overdensity=200,
                           virial_filters=[['TotalMassMsun','>=','1e14']],
@@ -49,3 +49,4 @@
    
 
 
+


--- a/source/cookbook/unique_light_cones.inc	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/cookbook/unique_light_cones.inc	Mon Mar 28 21:27:53 2011 -0400
@@ -10,10 +10,11 @@
 
 .. code-block:: python
 
-   import yt.extensions.lightcone as LC
+   from yt.mods import *
    
    # Instantiate a light cone object as usual.
-   lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4, 
+   lc = amods.light_cone.LightCone(
+                     "128Mpc256grid_SFFB.param", initial_redshift=0.4, 
                      final_redshift=0.0, observer_redshift=0.0,
                      field_of_view_in_arcminutes=120.0, 
                      image_resolution_in_arcseconds=60.0,
@@ -39,3 +40,4 @@
    
 
 
+


--- a/source/reference/api/data_sources.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/reference/api/data_sources.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -30,6 +30,20 @@
    ~yt.data_objects.data_containers.AMRSmoothedCoveringGridBase
    ~yt.data_objects.data_containers.AMRSphereBase
 
+Time Series Objects
+-------------------
+
+These are objects that either contain and represent or operate on series of
+datasets.
+
+.. autosummary::
+   :toctree: generated/
+
+   ~yt.data_objects.time_series.TimeSeriesData
+   ~yt.data_objects.time_series.TimeSeriesDataObject
+   ~yt.data_objects.time_series.TimeSeriesQuantitiesContainer
+   ~yt.data_objects.time_series.AnalysisTaskProxy
+
 Frontends
 ---------
 


--- a/source/reference/changelog.rst	Mon Mar 28 21:23:43 2011 -0400
+++ b/source/reference/changelog.rst	Mon Mar 28 21:27:53 2011 -0400
@@ -3,6 +3,37 @@
 
 This is a non-comprehensive log of changes to the code.
 
+Version 2.1
+-----------
+
+ * HEALpix-based volume rendering for 4pi, all-sky renderings
+ * libconfig is now included
+ * SQLite3 and Forthon now included by default in the install script
+ * Development guide has been lengthened substantially and a development
+   bootstrap script (:ref:`bootstrap-dev`) is now included.
+ * Installation script now installs Python 2.7 and HDF5 1.8.6
+ * iyt now tab-completes field names
+ * Halos can now be stored on-disk much more easily between HaloFinding runs.
+ * Halos found inline in Enzo can be loaded and merger trees calculated
+ * Support for CASTRO particles has been added
+ * Chombo support updated and fixed
+ * New code contributions 
+ * Contour finder has been sped up by a factor of a few
+ * Constrained two-point functions are now possible, for LOS power spectra
+ * Time series analysis (:ref:`time-series-analysis`) now much easier
+ * Streamlines are now a supported 1D data type (:class:`AMRStreamlineBase`)
+ * Streamlines can now be calculated and plotted (:ref:`streamlines`)
+ * In situ Enzo visualization now much faster
+ * "gui" source directory reorganized and cleaned up
+ * Cython now a compile-time dependency, reducing the size of source tree
+   updates substantially
+ * ``yt-supplemental`` repository now checked out by default, containing
+   cookbook, documentation, handy mercurial extensions, and advanced plotting
+   examples and helper scripts.
+ * Pasteboards now supported and available (:ref:`pasteboards`)
+ * Parallel yt efficiency improved by removal of barriers and improvement of
+   collective operations
+
 Version 2.0
 -----------
 


http://bitbucket.org/yt_analysis/yt-doc/changeset/169ec8151f9e/
changeset:   r59:169ec8151f9e
user:        samskillman
date:        2011-04-06 02:19:43
summary:     Merging
affected #:  4 files (4.4 KB)

--- a/source/reference/api/data_sources.rst	Sun Apr 03 20:18:43 2011 -0400
+++ b/source/reference/api/data_sources.rst	Tue Apr 05 20:19:43 2011 -0400
@@ -23,6 +23,7 @@
    ~yt.data_objects.data_containers.AMRGridCollectionBase
    ~yt.data_objects.data_containers.AMRRayBase
    ~yt.data_objects.data_containers.AMROrthoRayBase
+   ~yt.data_objects.data_containers.AMRStreamlineBase
    ~yt.data_objects.data_containers.AMRProjBase
    ~yt.data_objects.data_containers.AMRRegionBase
    ~yt.data_objects.data_containers.AMRSliceBase


--- a/source/reference/api/extension_types.rst	Sun Apr 03 20:18:43 2011 -0400
+++ b/source/reference/api/extension_types.rst	Tue Apr 05 20:19:43 2011 -0400
@@ -50,6 +50,7 @@
    ~yt.visualization.volume_rendering.grid_partitioner.HomogenizedVolume
    ~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction
    ~yt.utilities.amr_utils.PartitionedGrid
+   ~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree
    ~yt.visualization.volume_rendering.camera.PerspectiveCamera
    ~yt.visualization.volume_rendering.transfer_functions.PlanckTransferFunction
    ~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction
@@ -59,6 +60,21 @@
 
 .. _image_writer:
 
+Streamlining
+----------------
+
+See also :ref:`streamlines`.
+
+.. py:module:: yt.visualization.streamlines
+
+.. autosummary::
+   :toctree: generated/
+
+   ~yt.visualization.streamlines.Streamlines
+   ~yt.visualization.streamlines.Streamlines.integrate_through_volume
+   ~yt.visualization.streamlines.Streamlines.path
+   ~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree
+
 Image Writing
 -------------
 


--- a/source/visualizing/index.rst	Sun Apr 03 20:18:43 2011 -0400
+++ b/source/visualizing/index.rst	Tue Apr 05 20:19:43 2011 -0400
@@ -8,3 +8,5 @@
    callbacks
    volume_rendering
    image_panner
+   streamlines
+


--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/source/visualizing/streamlines.rst	Tue Apr 05 20:19:43 2011 -0400
@@ -0,0 +1,115 @@
+.. _streamlines:
+
+Streamlining
+================
+.. versionadded:: 2.1
+
+Streamlines, as implemented in ``yt``, are defined as being parallel to a
+vector field at all points.  While commonly used to follow the
+velocity flow or magnetic field lines, they can be defined to follow
+any three-dimensional vector field.  Once an initial condition and
+total length of the streamline are specified, the streamline is
+uniquely defined.    
+
+Method
+----------------
+
+Streamlining through a volume is useful for a variety of analysis
+tasks.  By specifying a set of starting positions, the user receives
+a set of 3D positions that can, in turn, be used to visualize
+the 3D path of the streamlines.  Additionally, individual streamlines
+can be converted into
+:class:`~yt.data_objects.data_containers.AMRStreamlineBase` objects,
+and queried for all the available fields along the streamline.
+
+The implementation of streamlining in ``yt`` is described below.
+
+#. Decompose the volume into a set of non-overlapping bricks that fully
+   tile the domain, using the
+   :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` homogenized
+   volume.
+#. For every streamline starting position:
+
+   #. While the length of the streamline is less than the requested
+      length:
+
+      #. Find the brick that contains the current position.
+      #. If not already present, generate vertex-centered data for
+         the vector fields defining the streamline.
+      #. While inside the brick:
+
+         #. Integrate the streamline path using a fourth-order
+            Runge-Kutta (RK4) method and the vertex-centered data.
+         #. During the intermediate steps of each RK4 step, if the
+            position is updated to outside the current brick,
+            interrupt the integration and locate a new brick at the
+            intermediate position.
+
+#. The set of streamline positions is stored in the
+   :obj:`~yt.visualization.streamlines.Streamlines.streamlines` object.
+
+Example Script
+++++++++++++++++
+
+.. code-block:: python
+
+    from yt.mods import *
+    from yt.visualization.api import Streamlines
+    
+    pf = load('DD1701')  # Load the parameter file
+    # Seed N starting positions in a cube of side `scale` around the center.
+    c = na.array([0.5] * 3)
+    N = 100
+    scale = 1.0
+    pos_dx = na.random.random((N, 3)) * scale - scale / 2.
+    pos = c + pos_dx
+    
+    streamlines = Streamlines(pf, pos, 'x-velocity', 'y-velocity',
+                              'z-velocity', length=1.0)
+    streamlines.integrate_through_volume()
+    
+    import matplotlib.pylab as pl
+    from mpl_toolkits.mplot3d import Axes3D
+    fig=pl.figure() 
+    ax = Axes3D(fig)
+    for stream in streamlines.streamlines:
+        # Mask out the unused, zero-padded entries before plotting.
+        stream = stream[na.all(stream != 0.0, axis=1)]
+        ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1)
+    pl.savefig('streamlines.png')
+
+
+Data Access Along the Streamline
+--------------------------------
+
+Once the streamlines are found, an
+:class:`~yt.data_objects.data_containers.AMRStreamlineBase` object can
+be created using the
+:meth:`~yt.visualization.streamlines.Streamlines.path` function, which
+takes as input the index of the streamline requested. This conversion
+is done by creating a mask that defines where the streamline is, and
+creating 't' and 'dts' fields that define the dimensionless streamline
+integration coordinate and integration step size. Once defined, fields
+can be accessed in the standard manner.
+
+Example Script
+++++++++++++++++
+
+.. code-block:: python
+
+    from yt.mods import *
+    from yt.visualization.api import Streamlines
+    import matplotlib.pylab as pl
+    
+    pf = load('DD1701')  # Load the parameter file
+    streamlines = Streamlines(pf, [0.5] * 3)
+    streamlines.integrate_through_volume()
+    stream = streamlines.path(0)
+    pl.semilogy(stream['t'], stream['Density'], '-x')
+
+
+Running in Parallel
+--------------------
+
+The integration of the streamline paths is trivially parallelized by
+splitting the streamlines up between the processors.  Upon completion,
+each processor has access to all of the streamlines through the use of
+a reduction operation.
+
+Parallel usage is specified using the standard `--parallel` flag.


http://bitbucket.org/yt_analysis/yt-doc/changeset/31a36e045dc7/
changeset:   r60:31a36e045dc7
user:        samskillman
date:        2011-04-06 04:26:24
summary:     A few changes here and there.
affected #:  2 files (9 bytes)

--- a/source/visualizing/streamlines.rst	Tue Apr 05 20:19:43 2011 -0400
+++ b/source/visualizing/streamlines.rst	Tue Apr 05 22:26:24 2011 -0400
@@ -107,9 +107,9 @@
 Running in Parallel
 --------------------
 
-The integration of the streamline paths is trivially parallelized by
+The integration of the streamline paths is "embarrassingly" parallelized by
 splitting the streamlines up between the processors.  Upon completion,
 each processor has access to all of the streamlines through the use of
 a reduction operation.
 
-Parallel usage is specified using the standard `--parallel` flag.
+Parallel usage is specified using the standard ``--parallel`` flag.


--- a/source/visualizing/volume_rendering.rst	Tue Apr 05 20:19:43 2011 -0400
+++ b/source/visualizing/volume_rendering.rst	Tue Apr 05 22:26:24 2011 -0400
@@ -180,7 +180,7 @@
 either an instance of
 :class:`~yt.visualization.volume_rendering.grid_partitioner.HomogenizedVolume`
 or an instance of :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` that
-has already been initialized.  If oneis not supplied, the camera will generate
+has already been initialized.  If one is not supplied, the camera will generate
 one itself.  This can also be specified if you wish to save bricks between
 repeated calls, thus saving considerable amounts of time.

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


