[Yt-svn] commit/yt-doc: 3 new changesets

Bitbucket commits-noreply at bitbucket.org
Tue Mar 22 17:35:57 PDT 2011


3 new changesets in yt-doc:

http://bitbucket.org/yt_analysis/yt-doc/changeset/436d9082242e/
changeset:   r47:436d9082242e
user:        MatthewTurk
date:        2011-03-22 22:32:54
summary:     Adding a bit of time series analysis discussion
affected #:  2 files (3.7 KB)

--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/source/analysis_modules/time_series_analysis.rst	Tue Mar 22 17:32:54 2011 -0400
@@ -0,0 +1,89 @@
+.. _time-series-analysis:
+
+Time Series Analysis
+====================
+
+Often, one wants to analyze a continuous set of outputs from a simulation in a
+uniform manner.  A simple example would be to calculate the peak density in a
+set of outputs that were written out.  The problem with time series analysis in
+yt has generally been one of verbosity and clunkiness.  Typically, unless using
+the :class:`~yt.analysis_modules.simulation_handling.EnzoSimulation` class
+(which is currently available only for Enzo), one sets up a loop:
+
+.. code-block:: python
+
+   for pfi in range(30):
+       fn = "DD%04i/DD%04i" % (pfi, pfi)
+       pf = load(fn)
+       process_output(pf)
+
+But this is not very elegant, and it ends up requiring a lot of maintenance.
+The :class:`~yt.data_objects.time_series.TimeSeriesData` object has been
+designed to remove some of this clunkiness and present an easier, more unified
+approach to analyzing sets of data.  Furthermore, future versions of yt will
+automatically parallelize operations conducted on time series of data.
+
+The idea behind the current implementation of time series analysis is that
+the underlying data and the operators that act on that data can and should be
+distinct.  There are several operators provided, as well as facilities for
+creating your own, and these operators can be applied either to datasets on the
+whole or to subregions of individual datasets.
+
+The simplest mechanism for creating a ``TimeSeriesData`` object is to use the
+class method
+:meth:`~yt.data_objects.time_series.TimeSeriesData.from_filenames`.  This
+method accepts a list of strings that can be supplied to ``load``.  For
+example:
+
+.. code-block:: python
+
+   from yt.mods import *
+   filenames = ["DD0030/output_0030", "DD0040/output_0040"]
+   ts = TimeSeriesData.from_filenames(filenames)
+
+This will create a new time series, populated with the output files ``DD0030``
+and ``DD0040``.  This object, here called ``ts``, can now be analyzed in bulk.
+
+Simple Analysis Tasks
+---------------------
+
+The available tasks that come built-in can be seen by looking at the output of
+``ts.tasks.keys()``.  For instance, one of the simplest ones is the
+``MaximumValue`` task.  We can execute this task by calling it with the field
+whose maximum value we want to evaluate:
+
+.. code-block:: python
+
+   from yt.mods import *
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   max_rho = ts.tasks["MaximumValue"]("Density")
+
+When we call the task, the time series object executes the task on each
+component parameter file.  The results are then returned to the user.  More
+complex, multi-task evaluations can be conducted by using the
+:meth:`~yt.data_objects.time_series.TimeSeriesData.eval` call, which accepts a
+list of analysis tasks.
+
+Analysis Tasks Applied to Objects
+---------------------------------
+
+
+Creating Analysis Tasks
+-----------------------
+
+If you wanted to look at the mass in star particles as a function of time, you
+would write a function that accepts ``params`` and ``pf`` and then decorate it
+with ``analysis_task``.  Here we have done so:
+
+.. code-block:: python
+
+   @analysis_task(('particle_type',))
+   def MassInParticleType(params, pf):
+       dd = pf.h.all_data()
+       ptype = (dd["particle_type"] == params.particle_type)
+       return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
+
+   ms = ts.tasks["MassInParticleType"](4)
+   print ms

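The ``analysis_task`` decorator shown above is essentially a task-registry
pattern: a decorator records a function under its name, together with the
parameter names it expects, so the time series object can later run it over
every output.  A rough, self-contained sketch of that idea follows; the
``tasks`` dict, ``analysis_task`` decorator, and ``CountParticleType`` task
here are illustrative stand-ins written for this example, not yt's actual
implementation.

```python
from collections import namedtuple

# Hypothetical stand-in for yt's task registry, for illustration only.
tasks = {}

def analysis_task(param_names):
    """Register a function as a named task taking the given parameters."""
    def decorator(func):
        Params = namedtuple("Params", param_names)
        def runner(datasets, *args):
            params = Params(*args)
            # Apply the task to every dataset, collecting one result each.
            return [func(params, ds) for ds in datasets]
        tasks[func.__name__] = runner
        return runner
    return decorator

@analysis_task(("particle_type",))
def CountParticleType(params, ds):
    # Here "ds" is any mapping from field name to a list of values.
    return sum(1 for p in ds["particle_type"] if p == params.particle_type)

datasets = [{"particle_type": [4, 4, 2]}, {"particle_type": [4, 2]}]
print(tasks["CountParticleType"](datasets, 4))  # [2, 1]
```

The registry lookup (``tasks["CountParticleType"]``) mirrors the
``ts.tasks["MassInParticleType"](4)`` call in the documentation above.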

--- a/source/reference/api/data_sources.rst	Tue Mar 22 09:09:57 2011 -0700
+++ b/source/reference/api/data_sources.rst	Tue Mar 22 17:32:54 2011 -0400
@@ -29,6 +29,20 @@
    ~yt.data_objects.data_containers.AMRSmoothedCoveringGridBase
    ~yt.data_objects.data_containers.AMRSphereBase
 
+Time Series Objects
+-------------------
+
+These objects either contain and represent, or operate on, series of
+datasets.
+
+.. autosummary::
+   :toctree: generated/
+
+   ~yt.data_objects.time_series.TimeSeriesData
+   ~yt.data_objects.time_series.TimeSeriesDataObject
+   ~yt.data_objects.time_series.TimeSeriesQuantitiesContainer
+   ~yt.data_objects.time_series.AnalysisTaskProxy
+
 Frontends
 ---------
 


http://bitbucket.org/yt_analysis/yt-doc/changeset/97ac85b952c4/
changeset:   r48:97ac85b952c4
user:        MatthewTurk
date:        2011-03-22 22:44:49
summary:     More on time series data
affected #:  2 files (1.5 KB)

--- a/source/analysis_modules/time_series_analysis.rst	Tue Mar 22 17:32:54 2011 -0400
+++ b/source/analysis_modules/time_series_analysis.rst	Tue Mar 22 17:44:49 2011 -0400
@@ -69,6 +69,34 @@
 Analysis Tasks Applied to Objects
 ---------------------------------
 
+Just as some tasks can be applied to datasets as a whole, one can also apply
+the creation of objects to datasets.  This means that you can construct a
+generalized "sphere" operator that will be created inside all datasets, from
+which you can then calculate derived quantities (see :ref:`derived-quantities`).
+
+For instance, imagine that you wanted to create a sphere that is centered on
+the most dense point in the simulation and that is 1 pc in radius, and then
+calculate the angular momentum vector on this sphere.  You could do that with
+this script:
+
+.. code-block:: python
+
+   from yt.mods import *
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   sphere = ts.sphere("max", (1.0, "pc"))
+   L_vecs = sphere.quantities["AngularMomentumVector"]()
+
+Note that we have specified the units differently than usual -- the time series
+objects allow units as a tuple, so that in cases where units may change over
+the course of several outputs they are correctly set at all times.  This script
+simply sets up the time series object, creates a sphere, and then runs
+quantities on it.  It is designed to look very similar to the code that would
+conduct this analysis on a single output.
+
+All of the objects listed in :ref:`available-objects` are made available in
+the same manner as "sphere" was used above.
 
 Creating Analysis Tasks
 -----------------------
@@ -87,3 +115,7 @@
 
    ms = ts.tasks["MassInParticleType"](4)
    print ms
+
+This allows you to create your own analysis tasks that will then be available
+to time series data objects.  In the future, this will allow for transparent
+parallelization.


--- a/source/analyzing/index.rst	Tue Mar 22 17:32:54 2011 -0400
+++ b/source/analyzing/index.rst	Tue Mar 22 17:44:49 2011 -0400
@@ -9,3 +9,4 @@
    particles
    creating_derived_fields
    generating_processed_data
+   time_series_analysis

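The ``(1.0, "pc")`` unit tuple described in the r48 hunk above exists so that
the physical size is resolved against each output's own conversion factors
rather than baked in once.  The sketch below isolates that idea; the
``resolve_units`` helper and the per-output ``cm_per_code_length`` dicts are
assumptions made for this example, not yt API.

```python
# Illustrative sketch: each "output" carries its own unit conversions
# (centimeters per code length unit), which can differ between outputs.
PC_IN_CM = 3.0857e18  # one parsec in centimeters

def resolve_units(quantity, output):
    """Convert a (value, unit_name) tuple into this output's code units."""
    value, unit = quantity
    return value * output["cm_per_unit"][unit] / output["cm_per_code_length"]

output_a = {"cm_per_unit": {"pc": PC_IN_CM}, "cm_per_code_length": 1.0e21}
output_b = {"cm_per_unit": {"pc": PC_IN_CM}, "cm_per_code_length": 5.0e20}

radius = (1.0, "pc")
# The same physical radius maps to a different code-unit radius per output.
r_a = resolve_units(radius, output_a)
r_b = resolve_units(radius, output_b)
print(r_a, r_b)
```

Resolving the tuple per output is what keeps a "1 pc sphere" physically
identical across a series whose code units drift (as in cosmological runs).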

http://bitbucket.org/yt_analysis/yt-doc/changeset/38bfdb3c2d49/
changeset:   r49:38bfdb3c2d49
user:        MatthewTurk
date:        2011-03-22 22:45:48
summary:     Oops, created this file in the wrong place ... tab-completion!
affected #:  2 files (4.8 KB)

--- a/source/analysis_modules/time_series_analysis.rst	Tue Mar 22 17:44:49 2011 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,121 +0,0 @@
-.. _time-series-analysis:
-
-Time Series Analysis
-====================
-
-Often, one wants to analyze a continuous set of outputs from a simulation in a
-uniform manner.  A simple example would be to calculate the peak density in a
-set of outputs that were written out.  The problem with time series analysis in
-yt has generally been one of verbosity and clunkiness.  Typically, unless using
-the :class:`~yt.analysis_modules.simulation_handling.EnzoSimulation` class
-(which is currently available only for Enzo), one sets up a loop:
-
-.. code-block:: python
-
-   for pfi in range(30):
-       fn = "DD%04i/DD%04i" % (pfi, pfi)
-       pf = load(fn)
-       process_output(pf)
-
-But this is not very elegant, and it ends up requiring a lot of maintenance.
-The :class:`~yt.data_objects.time_series.TimeSeriesData` object has been
-designed to remove some of this clunkiness and present an easier, more unified
-approach to analyzing sets of data.  Furthermore, future versions of yt will
-automatically parallelize operations conducted on time series of data.
-
-The idea behind the current implementation of time series analysis is that
-the underlying data and the operators that act on that data can and should be
-distinct.  There are several operators provided, as well as facilities for
-creating your own, and these operators can be applied either to datasets on the
-whole or to subregions of individual datasets.
-
-The simplest mechanism for creating a ``TimeSeriesData`` object is to use the
-class method
-:meth:`~yt.data_objects.time_series.TimeSeriesData.from_filenames`.  This
-method accepts a list of strings that can be supplied to ``load``.  For
-example:
-
-.. code-block:: python
-
-   from yt.mods import *
-   filenames = ["DD0030/output_0030", "DD0040/output_0040"]
-   ts = TimeSeriesData.from_filenames(filenames)
-
-This will create a new time series, populated with the output files ``DD0030``
-and ``DD0040``.  This object, here called ``ts``, can now be analyzed in bulk.
-
-Simple Analysis Tasks
----------------------
-
-The available tasks that come built-in can be seen by looking at the output of
-``ts.tasks.keys()``.  For instance, one of the simplest ones is the
-``MaximumValue`` task.  We can execute this task by calling it with the field
-whose maximum value we want to evaluate:
-
-.. code-block:: python
-
-   from yt.mods import *
-   all_files = glob.glob("*/*.hierarchy")
-   all_files.sort()
-   ts = TimeSeriesData.from_filenames(all_files)
-   max_rho = ts.tasks["MaximumValue"]("Density")
-
-When we call the task, the time series object executes the task on each
-component parameter file.  The results are then returned to the user.  More
-complex, multi-task evaluations can be conducted by using the
-:meth:`~yt.data_objects.time_series.TimeSeriesData.eval` call, which accepts a
-list of analysis tasks.
-
-Analysis Tasks Applied to Objects
----------------------------------
-
-Just as some tasks can be applied to datasets as a whole, one can also apply
-the creation of objects to datasets.  This means that you can construct a
-generalized "sphere" operator that will be created inside all datasets, from
-which you can then calculate derived quantities (see :ref:`derived-quantities`).
-
-For instance, imagine that you wanted to create a sphere that is centered on
-the most dense point in the simulation and that is 1 pc in radius, and then
-calculate the angular momentum vector on this sphere.  You could do that with
-this script:
-
-.. code-block:: python
-
-   from yt.mods import *
-   all_files = glob.glob("*/*.hierarchy")
-   all_files.sort()
-   ts = TimeSeriesData.from_filenames(all_files)
-   sphere = ts.sphere("max", (1.0, "pc"))
-   L_vecs = sphere.quantities["AngularMomentumVector"]()
-
-Note that we have specified the units differently than usual -- the time series
-objects allow units as a tuple, so that in cases where units may change over
-the course of several outputs they are correctly set at all times.  This script
-simply sets up the time series object, creates a sphere, and then runs
-quantities on it.  It is designed to look very similar to the code that would
-conduct this analysis on a single output.
-
-All of the objects listed in :ref:`available-objects` are made available in
-the same manner as "sphere" was used above.
-
-Creating Analysis Tasks
------------------------
-
-If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts ``params`` and ``pf`` and then decorate it
-with ``analysis_task``.  Here we have done so:
-
-.. code-block:: python
-
-   @analysis_task(('particle_type',))
-   def MassInParticleType(params, pf):
-       dd = pf.h.all_data()
-       ptype = (dd["particle_type"] == params.particle_type)
-       return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
-
-   ms = ts.tasks["MassInParticleType"](4)
-   print ms
-
-This allows you to create your own analysis tasks that will then be available
-to time series data objects.  In the future, this will allow for transparent
-parallelization.


--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/source/analyzing/time_series_analysis.rst	Tue Mar 22 17:45:48 2011 -0400
@@ -0,0 +1,121 @@
+.. _time-series-analysis:
+
+Time Series Analysis
+====================
+
+Often, one wants to analyze a continuous set of outputs from a simulation in a
+uniform manner.  A simple example would be to calculate the peak density in a
+set of outputs that were written out.  The problem with time series analysis in
+yt has generally been one of verbosity and clunkiness.  Typically, unless using
+the :class:`~yt.analysis_modules.simulation_handling.EnzoSimulation` class
+(which is currently available only for Enzo), one sets up a loop:
+
+.. code-block:: python
+
+   for pfi in range(30):
+       fn = "DD%04i/DD%04i" % (pfi, pfi)
+       pf = load(fn)
+       process_output(pf)
+
+But this is not very elegant, and it ends up requiring a lot of maintenance.
+The :class:`~yt.data_objects.time_series.TimeSeriesData` object has been
+designed to remove some of this clunkiness and present an easier, more unified
+approach to analyzing sets of data.  Furthermore, future versions of yt will
+automatically parallelize operations conducted on time series of data.
+
+The idea behind the current implementation of time series analysis is that
+the underlying data and the operators that act on that data can and should be
+distinct.  There are several operators provided, as well as facilities for
+creating your own, and these operators can be applied either to datasets on the
+whole or to subregions of individual datasets.
+
+The simplest mechanism for creating a ``TimeSeriesData`` object is to use the
+class method
+:meth:`~yt.data_objects.time_series.TimeSeriesData.from_filenames`.  This
+method accepts a list of strings that can be supplied to ``load``.  For
+example:
+
+.. code-block:: python
+
+   from yt.mods import *
+   filenames = ["DD0030/output_0030", "DD0040/output_0040"]
+   ts = TimeSeriesData.from_filenames(filenames)
+
+This will create a new time series, populated with the output files ``DD0030``
+and ``DD0040``.  This object, here called ``ts``, can now be analyzed in bulk.
+
+Simple Analysis Tasks
+---------------------
+
+The available tasks that come built-in can be seen by looking at the output of
+``ts.tasks.keys()``.  For instance, one of the simplest ones is the
+``MaximumValue`` task.  We can execute this task by calling it with the field
+whose maximum value we want to evaluate:
+
+.. code-block:: python
+
+   from yt.mods import *
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   max_rho = ts.tasks["MaximumValue"]("Density")
+
+When we call the task, the time series object executes the task on each
+component parameter file.  The results are then returned to the user.  More
+complex, multi-task evaluations can be conducted by using the
+:meth:`~yt.data_objects.time_series.TimeSeriesData.eval` call, which accepts a
+list of analysis tasks.
+
+Analysis Tasks Applied to Objects
+---------------------------------
+
+Just as some tasks can be applied to datasets as a whole, one can also apply
+the creation of objects to datasets.  This means that you can construct a
+generalized "sphere" operator that will be created inside all datasets, from
+which you can then calculate derived quantities (see :ref:`derived-quantities`).
+
+For instance, imagine that you wanted to create a sphere that is centered on
+the most dense point in the simulation and that is 1 pc in radius, and then
+calculate the angular momentum vector on this sphere.  You could do that with
+this script:
+
+.. code-block:: python
+
+   from yt.mods import *
+   all_files = glob.glob("*/*.hierarchy")
+   all_files.sort()
+   ts = TimeSeriesData.from_filenames(all_files)
+   sphere = ts.sphere("max", (1.0, "pc"))
+   L_vecs = sphere.quantities["AngularMomentumVector"]()
+
+Note that we have specified the units differently than usual -- the time series
+objects allow units as a tuple, so that in cases where units may change over
+the course of several outputs they are correctly set at all times.  This script
+simply sets up the time series object, creates a sphere, and then runs
+quantities on it.  It is designed to look very similar to the code that would
+conduct this analysis on a single output.
+
+All of the objects listed in :ref:`available-objects` are made available in
+the same manner as "sphere" was used above.
+
+Creating Analysis Tasks
+-----------------------
+
+If you wanted to look at the mass in star particles as a function of time, you
+would write a function that accepts ``params`` and ``pf`` and then decorate it
+with ``analysis_task``.  Here we have done so:
+
+.. code-block:: python
+
+   @analysis_task(('particle_type',))
+   def MassInParticleType(params, pf):
+       dd = pf.h.all_data()
+       ptype = (dd["particle_type"] == params.particle_type)
+       return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
+
+   ms = ts.tasks["MassInParticleType"](4)
+   print ms
+
+This allows you to create your own analysis tasks that will then be available
+to time series data objects.  In the future, this will allow for transparent
+parallelization.

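The multi-task ``eval`` call mentioned in the documentation amounts to mapping
a list of task callables over every output and collecting one result list per
output.  A minimal, self-contained sketch of that pattern follows; the
``MiniTimeSeries`` class and the two task functions are hypothetical
illustrations written for this example, not yt's actual classes.

```python
class MiniTimeSeries:
    """Toy stand-in for a time series container: a list of datasets."""
    def __init__(self, datasets):
        self.datasets = datasets

    def eval(self, task_list):
        # For each dataset, run every task and collect its result,
        # yielding one list of results per output.
        return [[task(ds) for task in task_list] for ds in self.datasets]

def max_density(ds):
    return max(ds["Density"])

def total_mass(ds):
    return sum(ds["CellMass"])

ts = MiniTimeSeries([
    {"Density": [1.0, 5.0], "CellMass": [2.0, 3.0]},
    {"Density": [2.0, 4.0], "CellMass": [1.0, 1.0]},
])
print(ts.eval([max_density, total_mass]))  # [[5.0, 5.0], [4.0, 2.0]]
```

Because each task sees one output at a time, this structure is also what makes
the promised transparent parallelization natural: outputs can be distributed
across processors independently.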
Repository URL: https://bitbucket.org/yt_analysis/yt-doc/
