[yt-svn] commit/yt: jzuhone: Merged in ngoldbaum/yt/yt-3.0 (pull request #1068)

commits-noreply at bitbucket.org
Thu Jul 24 09:15:55 PDT 2014


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/eceab32b33c8/
Changeset:   eceab32b33c8
Branch:      yt-3.0
User:        jzuhone
Date:        2014-07-24 18:15:45
Summary:     Merged in ngoldbaum/yt/yt-3.0 (pull request #1068)

Updates for "analyzing" and "examining" sections.
Affected #:  20 files

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst
+++ b/doc/source/analyzing/analysis_modules/radial_column_density.rst
@@ -5,6 +5,14 @@
 .. sectionauthor:: Stephen Skory <s at skory.us>
 .. versionadded:: 2.3
 
+.. note:: 
+
+    As of :code:`yt-3.0`, the radial column density analysis module is not
+    currently functional.  This functionality is still available in
+    :code:`yt-2.x`.  If you would like to use these features in :code:`yt-3.x`,
+    help is needed to port them over.  Contact the yt-users mailing list if you
+    are interested in doing this.
+
 This module allows the calculation of column densities around a point over a
 field such as ``NumberDensity`` or ``Density``.
 This uses :ref:`healpix_volume_rendering` to interpolate column densities

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/analysis_modules/radmc3d_export.rst
--- a/doc/source/analyzing/analysis_modules/radmc3d_export.rst
+++ b/doc/source/analyzing/analysis_modules/radmc3d_export.rst
@@ -6,6 +6,14 @@
 .. sectionauthor:: Andrew Myers <atmyers2 at gmail.com>
 .. versionadded:: 2.6
 
+.. note:: 
+
+    As of :code:`yt-3.0`, the RADMC-3D export module is not currently
+    functional.  This functionality is still available in :code:`yt-2.x`.  If
+    you would like to use these features in :code:`yt-3.x`, help is needed to
+    port them over.  Contact the yt-users mailing list if you are interested in
+    doing this.
+
 `RADMC-3D
 <http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/>`_ is a three-dimensional Monte-Carlo radiative transfer code
 that is capable of handling both line and continuum emission. The :class:`~yt.analysis_modules.radmc3d_export.RadMC3DInterface.RadMC3DWriter`

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/analysis_modules/star_analysis.rst
--- a/doc/source/analyzing/analysis_modules/star_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/star_analysis.rst
@@ -5,6 +5,14 @@
 .. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>
 .. versionadded:: 1.6
 
+.. note:: 
+
+    As of :code:`yt-3.0`, the star particle analysis module is not currently
+    functional.  This functionality is still available in :code:`yt-2.x`.  If
+    you would like to use these features in :code:`yt-3.x`, help is needed to
+    port them over.  Contact the yt-users mailing list if you are interested in
+    doing this.
+
 This document describes tools in yt for analyzing star particles.
 The Star Formation Rate tool bins stars by time to produce star formation
 statistics over several metrics.

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/analysis_modules/sunrise_export.rst
--- a/doc/source/analyzing/analysis_modules/sunrise_export.rst
+++ b/doc/source/analyzing/analysis_modules/sunrise_export.rst
@@ -6,6 +6,13 @@
 .. sectionauthor:: Christopher Moody <cemoody at ucsc.edu>
 .. versionadded:: 1.8
 
+.. note:: 
+
+    As of :code:`yt-3.0`, the sunrise exporter is not currently functional.
+    This functionality is still available in :code:`yt-2.x`.  If you would like
+    to use these features in :code:`yt-3.x`, help is needed to port them over.
+    Contact the yt-users mailing list if you are interested in doing this.
+
 The yt-Sunrise exporter essentially takes grid cell data and translates it into a binary octree format, attaches star particles, and saves the output to a FITS file Sunrise can read. For every cell, the gas mass, metals mass (a fraction of which is later assumed to be in the form of dust), and the temperature are saved. Star particles are defined entirely by their mass, position, metallicity, and a 'radius.' This guide outlines the steps to exporting the data, troubleshoots common problems, and reviews recommended sanity checks. 
 
 Simple Export

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/analysis_modules/two_point_functions.rst
--- a/doc/source/analyzing/analysis_modules/two_point_functions.rst
+++ b/doc/source/analyzing/analysis_modules/two_point_functions.rst
@@ -5,6 +5,14 @@
 .. sectionauthor:: Stephen Skory <sskory at physics.ucsd.edu>
 .. versionadded:: 1.7
 
+.. note:: 
+
+    As of :code:`yt-3.0`, the two point function analysis module is not
+    currently functional.  This functionality is still available in
+    :code:`yt-2.x`.  If you would like to use these features in :code:`yt-3.x`,
+    help is needed to port them over.  Contact the yt-users mailing list if you
+    are interested in doing this.
+
 The Two Point Functions framework (TPF) is capable of running several
 multi-dimensional two point functions simultaneously on a dataset using
 memory and workload parallelism.

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/fields.rst
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -1,19 +1,22 @@
 Particle Fields
-====================================
+===============
+
 Naturally, particle fields contain properties of particles rather than
 grid cells.  Many of these fields have corresponding grid fields that
 can be populated by "depositing" the particle values onto a yt grid.
 
 General Particle Fields
-------------------------------------
+-----------------------
+
 Every particle will contain both a ``particle_position`` and ``particle_velocity``
 that tracks the position and velocity (respectively) in code units.
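+
+For example (a minimal sketch; the dataset name and the ``all`` particle type
+are placeholders, so substitute whatever your dataset actually provides):
+
+.. code-block:: python
+
+    import yt
+
+    ds = yt.load("output_0001")   # placeholder dataset name
+    ad = ds.all_data()
+
+    # the positions and velocities carried by every particle
+    print ad["all", "particle_position"]
+    print ad["all", "particle_velocity"]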
 
 
 SPH Fields
-------------------------------------
+----------
+
 For gas particles from SPH simulations, each particle will typically carry
-a field for the smoothing length `h`, which is roughly equivalent to 
-`(m/\rho)^{1/3}`, where `m` and `rho` are the particle mass and density 
+a field for the smoothing length ``h``, which is roughly equivalent to 
+``(m/\rho)^{1/3}``, where ``m`` and ``rho`` are the particle mass and density 
 respectively.  This can be useful for doing neighbour finding.
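+
+As a rough numerical illustration of the relation above (a sketch only; the
+particle masses and densities below are made-up values in code units):
+
+.. code-block:: python
+
+    import numpy as np
+
+    # hypothetical particle masses and densities
+    m = np.array([1.0, 1.0, 2.0])
+    rho = np.array([10.0, 80.0, 2.5])
+
+    # h is roughly (m / rho)**(1/3)
+    h_estimate = (m / rho) ** (1.0 / 3.0)
+    print h_estimate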
 

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -84,6 +84,7 @@
 
    import yt
    yt.enable_parallelism()
+   
    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max("density")
    print v, c
@@ -125,6 +126,8 @@
 .. code-block:: python
 
    import yt
+   yt.enable_parallelism()
+
    ds = yt.load("RD0035/RedshiftOutput0035")
    v, c = ds.find_max("density")
    p = yt.ProjectionPlot(ds, "x", "density")
@@ -143,6 +146,7 @@
 .. code-block:: python
 
    import yt
+   yt.enable_parallelism()
 
    def print_and_save_plot(v, c, plot, print=True):
        if print:
@@ -216,6 +220,7 @@
    
    # As always...
    import yt
+   yt.enable_parallelism()
    
    import glob
    
@@ -314,6 +319,8 @@
 .. code-block:: python
 
    import yt
+   yt.enable_parallelism()
+
    ts = yt.DatasetSeries("DD*/output_*", parallel = 4)
    
    for ds in ts.piter():
@@ -478,6 +485,8 @@
        import yt
        import time
 
+       yt.enable_parallelism()
+
        ds = yt.load("DD0152")
        t0 = time.time()
        bigstuff, hugestuff = StuffFinder(ds)

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/cookbook/index.rst
--- a/doc/source/cookbook/index.rst
+++ b/doc/source/cookbook/index.rst
@@ -44,3 +44,4 @@
    ../analyzing/analysis_modules/sunyaev_zeldovich
    fits_radio_cubes
    fits_xray_images
+   tipsy_notebook

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/cookbook/tipsy_and_yt.ipynb
--- /dev/null
+++ b/doc/source/cookbook/tipsy_and_yt.ipynb
@@ -0,0 +1,196 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:2ae8b1599fa35495fa1bb8deb1c67094e3529e70093b30e20354122cd9403d9d"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "heading",
+     "level": 1,
+     "metadata": {},
+     "source": [
+      "Using yt to view and analyze Tipsy outputs from Gasoline"
+     ]
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Loading Files"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Alright, let's start with some basics.  Before we do anything, we will need to load a snapshot.  You can do this using the ```load``` convenience function.  yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import yt"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We will be looking at a fairly low resolution dataset.  In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on.  The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8.  Just passing the argument `n_ref=8` to load does this for us."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      ">This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB)."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We now have a `TipsyDataset` object called `ds`.  Let's see what fields it has."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds.field_list"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`yt` also defines so-called \"derived\" fields.  These fields are functions of the on-disk fields that live in the `field_list`.  There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds.derived_field_list"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "All of the fields in the `field_list` are arrays containing the values for the associated particles.  These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "%matplotlib inline\n",
+      "import matplotlib.pyplot as plt\n",
+      "import numpy as np"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "dd = ds.all_data()\n",
+      "xcoord = dd['Gas','Coordinates'][:,0].v\n",
+      "ycoord = dd['Gas','Coordinates'][:,1].v\n",
+      "logT = np.log10(dd['Gas','Temperature'])\n",
+      "plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)\n",
+      "plt.xlim(-20,20)\n",
+      "plt.ylim(-20,20)\n",
+      "cb = plt.colorbar()\n",
+      "cb.set_label('$\\log_{10}$ Temperature')\n",
+      "plt.gcf().set_size_inches(15,10)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "heading",
+     "level": 2,
+     "metadata": {},
+     "source": [
+      "Making Smoothed Images"
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "`yt` will automatically generate smoothed versions of these fields that you can use to plot.  Let's make a temperature slice and a density projection."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Not only are the values in the tipsy snapshot read and automatically smoothed, but the auxiliary files that have physical significance are also smoothed.  Let's look at a slice of the iron mass fraction."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "yt.SlicePlot(ds, 'z', ('gas', 'FeMassFrac'), width=(40, 'kpc'), center='m')"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/cookbook/tipsy_notebook.rst
--- /dev/null
+++ b/doc/source/cookbook/tipsy_notebook.rst
@@ -0,0 +1,7 @@
+.. _tipsy-notebook:
+
+Using yt to view and analyze Tipsy outputs from Gasoline
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+.. notebook:: tipsy_and_yt.ipynb
+

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/examining/generic_particle_data.rst
--- /dev/null
+++ b/doc/source/examining/generic_particle_data.rst
@@ -0,0 +1,6 @@
+.. _generic-particle-data:
+
+Loading Generic Particle Data
+-----------------------------
+
+.. notebook:: Loading_Generic_Particle_Data.ipynb

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -391,10 +391,7 @@
 
 These will be used set the units, if they are specified.
 
-Using yt to view and analyze Tipsy outputs from Gasoline
-++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-.. notebook:: tipsy_and_yt.ipynb
+See :ref:`tipsy-notebook` for more details.
 
 .. _loading-artio-data:
 
@@ -839,7 +836,8 @@
 Generic Array Data
 ------------------
 
-See :ref:`loading-numpy-array` for more detail.
+See :ref:`loading-numpy-array` and
+:func:`~yt.frontends.stream.data_structures.load_uniform_grid` for more detail.
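+
+For a quick orientation (a sketch only; the ``length_unit`` keyword is assumed
+to behave like the analogous ``load_particles`` keyword described further down
+this page):
+
+.. code-block:: python
+
+    import numpy as np
+    import yt
+
+    # a random 3D "density" array standing in for your own data
+    arr = np.random.random(size=(64, 64, 64))
+
+    data = dict(density=arr)
+    ds = yt.load_uniform_grid(data, arr.shape, length_unit="cm")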
 
 Even if your data is not strictly related to fields commonly used in
 astrophysical codes or your code is not supported yet, you can still feed it to
@@ -892,7 +890,8 @@
 Generic AMR Data
 ----------------
 
-See :ref:`loading-numpy-array` for more detail.
+See :ref:`loading-numpy-array` and
+:func:`~yt.frontends.stream.data_structures.load_amr_grids` for more detail.
 
 It is possible to create native ``yt`` dataset from Python's dictionary
 that describes set of rectangular patches of data of possibly varying
@@ -944,21 +943,72 @@
 Generic Particle Data
 ---------------------
 
-.. notebook:: Loading_Generic_Particle_Data.ipynb
+See :ref:`generic-particle-data` and
+:func:`~yt.frontends.stream.data_structures.load_particles` for more detail.
+
+You can also load generic particle data using the same ``stream`` functionality
+discussed above to load in-memory grid data.  For example, if your particle
+positions and masses are stored in ``positions`` and ``masses``, an (N, 3)
+array of particle x, y, and z positions, and a 1D array of particle masses
+respectively, you would load them like this:
+
+.. code-block:: python
+
+    import yt
+
+    data = dict(particle_position=positions, particle_mass=masses)
+    ds = yt.load_particles(data)
+
+You can also load data using 1D x, y, and z position arrays:
+
+.. code-block:: python
+
+    import yt
+
+    data = dict(particle_position_x=posx,
+                particle_position_y=posy,
+                particle_position_z=posz,
+                particle_mass=masses)
+    ds = yt.load_particles(data)
+
+The ``load_particles`` function also accepts the following keyword parameters:
+
+    ``length_unit``
+      The units used for particle positions.
+
+    ``mass_unit``
+      The units of the particle masses.
+
+    ``time_unit``
+      The units used to represent times.  This is optional and is only used if
+      your data contains a ``creation_time`` field or a ``particle_velocity``
+      field.
+
+    ``velocity_unit``
+      The units used to represent velocities.  This is optional and is only
+      used if you supply a velocity field.  If this is not supplied, it is
+      inferred from the length and time units.
+
+    ``bbox``
+      The bounding box for the particle positions.
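+
+As a sketch of how these fit together (the unit strings and bounding-box values
+below are arbitrary illustrative choices, not library defaults):
+
+.. code-block:: python
+
+    import numpy as np
+    import yt
+
+    n_particles = 1000
+
+    # an (N, 3) array of positions and a 1D array of masses, in code units
+    positions = np.random.uniform(-5.0, 5.0, size=(n_particles, 3))
+    masses = np.ones(n_particles)
+
+    data = dict(particle_position=positions, particle_mass=masses)
+    bbox = np.array([[-5.0, 5.0], [-5.0, 5.0], [-5.0, 5.0]])
+
+    ds = yt.load_particles(data, length_unit="Mpc", mass_unit="Msun", bbox=bbox)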
 
 .. _loading_sph_data:
 
 SPH Particle Data
 -----------------
-For all of the SPH frontends, yt uses a cython-based SPH to created deposit
-mesh fields from individual particle fields.  This uses a standard M4 smoothing
-kernel and the ``SmoothingLength`` field to calculate SPH sums, filling in the
-mesh fields.  This gives you the ability to both track individual particles
-(useful for tasks like following contiguous clouds of gas that would be require
-a clump finder in grid data) as well as doing standard grid-based analysis.
-The ``SmoothingLength`` variable is also useful for determining which particles
+
+For all of the SPH frontends, yt uses Cython-based SPH smoothing onto an
+in-memory octree to create deposited mesh fields from individual SPH particle
+fields.
+
+This uses a standard M4 smoothing kernel and the ``smoothing_length``
+field to calculate SPH sums, filling in the mesh fields.  This gives you the
+ability to both track individual particles (useful for tasks like following
+contiguous clouds of gas that would require a clump finder in grid data) as
+well as doing standard grid-based analysis (e.g. slices, projections, and profiles).
+
+The ``smoothing_length`` variable is also useful for determining which particles
 can interact with each other, since particles more distant than twice the
 smoothing length do not typically see each other in SPH simulations.  By
-changing the value of the ``SmoothingLength`` and then re-depositing particles
+changing the value of the ``smoothing_length`` and then re-depositing particles
 onto the grid, you can also effectively mimic what your data would look like at
 lower resolution.
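+
+For instance (a sketch; the Tipsy galaxy snapshot from the cookbook notebook is
+used here as a stand-in for any SPH output), the smoothed mesh fields can be
+used just like grid fields:
+
+.. code-block:: python
+
+    import yt
+
+    ds = yt.load("TipsyGalaxy/galaxy.00300")
+
+    # the deposited/smoothed gas density behaves like an ordinary mesh field
+    yt.ProjectionPlot(ds, "z", ("gas", "density")).save()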

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/examining/tipsy_and_yt.ipynb
--- a/doc/source/examining/tipsy_and_yt.ipynb
+++ /dev/null
@@ -1,196 +0,0 @@
-{
- "metadata": {
-  "name": "",
-  "signature": "sha256:2ae8b1599fa35495fa1bb8deb1c67094e3529e70093b30e20354122cd9403d9d"
- },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
-  {
-   "cells": [
-    {
-     "cell_type": "heading",
-     "level": 1,
-     "metadata": {},
-     "source": [
-      "Using yt to view and analyze Tipsy outputs from Gasoline"
-     ]
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Loading Files"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Alright, let's start with some basics.  Before we do anything, we will need to load a snapshot.  You can do this using the ```load``` convenience function.  yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "import yt"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We will be looking at a fairly low resolution dataset.  In the next cell, the `ds` object has an atribute called `n_ref` that tells the oct-tree how many particles to refine on.  The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8.  Just passing the argument `n_ref=8` to load does this for us."
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      ">This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB)."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "We now have a `TipsyDataset` object called `ds`.  Let's see what fields it has."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`yt` also defines so-called \"derived\" fields.  These fields are functions of the on-disk fields that live in the `field_list`.  There is a `derived_field_list` attribute attached to the `Dataset` object - let's take look at the derived fields in this dataset:"
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "ds.derived_field_list"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "All of the field in the `field_list` are arrays containing the values for the associated particles.  These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "%matplotlib inline\n",
-      "import matplotlib.pyplot as plt\n",
-      "import numpy as np"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "dd = ds.all_data()\n",
-      "xcoord = dd['Gas','Coordinates'][:,0].v\n",
-      "ycoord = dd['Gas','Coordinates'][:,1].v\n",
-      "logT = np.log10(dd['Gas','Temperature'])\n",
-      "plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)\n",
-      "plt.xlim(-20,20)\n",
-      "plt.ylim(-20,20)\n",
-      "cb = plt.colorbar()\n",
-      "cb.set_label('$\\log_{10}$ Temperature')\n",
-      "plt.gcf().set_size_inches(15,10)"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "heading",
-     "level": 2,
-     "metadata": {},
-     "source": [
-      "Making Smoothed Images"
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "`yt` will automatically generate smoothed versions of these fields that you can use to plot.  Let's make a temperature slice and a density projection."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed.  Let's look at a slice of Iron mass fraction."
-     ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [
-      "yt.SlicePlot(ds, 'z', ('gas', 'FeMassFrac'), width=(40, 'kpc'), center='m')"
-     ],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
-    }
-   ],
-   "metadata": {}
-  }
- ]
-}
\ No newline at end of file

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -1,3 +1,5 @@
+.. _api-reference:
+
 API Reference
 =============
 

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f doc/source/yt3differences.rst
--- a/doc/source/yt3differences.rst
+++ b/doc/source/yt3differences.rst
@@ -21,29 +21,42 @@
 
 Here's a quick reference for how to update your code to work with yt-3.0.
 
-  * Importing yt is now as simple as ``import yt``.  The docs have been
-    extensively updated to reflect this new style.  ``from yt.mods import *``
-    still works, but we are discouraging its use going forward.
+  * We have reworked yt's import system so that most commonly-used yt functions
+    and classes live in the top-level ``yt`` namespace.  That means you can now
+    import yt with ``import yt``, load a dataset with ``ds = yt.load(filename)``,
+    and create a plot with ``yt.SlicePlot`` (a short example follows this
+    list).  See :ref:`api-reference` for a full API listing.
+  * Fields and metadata for data objects and datasets now have units.  The unit
+    system keeps you from doing physically meaningless things like adding
+    ``ergs`` to ``g``, and it can handle conversions like ``g`` + ``kg`` or
+    ``kg*m/s**2 == Newton``.  See :ref:`units` for more information.
+  * Previously, yt would use "Enzo-isms" for field names. We now very
+    specifically define fields as lowercase with underscores.  For instance,
+    what used to be ``VelocityMagnitude`` would now be ``velocity_magnitude``.
+    Axis names are now at the *end* of field names, not the beginning.
+    ``x-velocity`` is now ``velocity_x``.
   * Fields can be accessed by a name, but are named internally as ``(fluid_type,
     fluid_name)``.
-  * Fields on-disk will be in code units, and will be named ``(code_name,
+  * Mesh fields on-disk will be in code units, and will be named ``(code_name,
     FieldName)``.
-  * Previously, yt would use "Enzo-isms" for field names.  We now very
-    specifically define fields as lowercase with underscores.  For instance,
-    what used to be ``VelocityMagnitude`` would now be ``velocity_magnitude``.
-  * Particles are either named by their type or default to the type ``io``.
-  * Axis names are now at the *end* of field names, not the beginning.
-    ``x-velocity`` is now ``velocity_x``.
-  * Any derived quantities that *always* returned lists (like ``Extrema``,
-    which would return a list even if you only ask for one field) now only
-    return a single tuple if you only ask for one field.
-  * Units can be tricky, and they try to keep you from making weird things like
-    ``ergs`` + ``g``.  See :ref:`units` for more information.
+  * Particle fields on-disk will also be in code units, and will be named
+    ``(particle_type, FieldName)``.  If there is only one particle type in the
+    output file, the particle type for all particles will be ``io``.
   * Previously, yt would capture command line arguments when being imported.
     This no longer happens.  As a side effect, it is no longer necessary to
     specify ``--parallel`` at the command line when running a parallel 
     computation. Use ``yt.enable_parallelism()`` instead.  See 
     :ref:`parallel-computation` for more detail.
+  * Any derived quantities that *always* returned lists (like ``Extrema``,
+    which would return a list even if you only asked for one field) now return
+    only a single result if you ask for one field.  Results for particle and
+    mesh fields will be returned separately.
+  * Derived quantities are now accessed via functions that hang off of the
+    ``quantities`` attribute of data objects.  Instead of
+    ``dd.quantities['TotalMass']``, you would now use
+    ``dd.quantities.total_mass()`` to do the same thing.
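+
+A minimal sketch tying the points in the list above together (the dataset name
+is the Enzo output used elsewhere in these docs; substitute your own):
+
+.. code-block:: python
+
+    import yt
+
+    ds = yt.load("RD0035/RedshiftOutput0035")
+
+    # new-style field names: lowercase with underscores, axis at the end,
+    # and an optional field type ("gas" here)
+    v, c = ds.find_max(("gas", "density"))
+    print v, c
+
+    p = yt.SlicePlot(ds, "x", ("gas", "density"))
+    p.save()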
 
 Cool New Things
 ---------------
@@ -92,9 +105,10 @@
 Units
 +++++
 
-yt now has units.  This is one of the bigger features, and in essence it means
-that you can convert units between anything.  See :ref:`units` for more
-information.
+yt now has a unit system.  This is one of the bigger features, and in essence it
+means that you can convert data between any compatible units.  In practice, it
+makes it much easier to define fields and convert data between different unit
+systems.  See :ref:`units` for more information.
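+
+For example (a sketch; the ``g`` and ``kg`` unit symbols are assumed to be
+importable from ``yt.units`` as described in the units docs):
+
+.. code-block:: python
+
+    from yt.units import g, kg
+
+    mass = 1.0 * kg + 500.0 * g   # mixed but compatible units are fine
+    print mass.in_units("g")      # 1500.0 g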
 
 Non-Cartesian Coordinates
 +++++++++++++++++++++++++
@@ -119,10 +133,11 @@
 
    my_object["gas", "density"]
 
-will return the gas field density.  This extends to particle types as well.  By
-default you do *not* need to use the field "type" key, but in case of ambiguity
-it will utilize the default value in its place.  This should therefore be
-identical to::
+will return the gas field density.  In this example "gas" is the field type and
+"density" is the field name.  Field types are a bit like a namespace.  This
+system extends to particle types as well.  By default you do *not* need to use
+the field "type" key, but in case of ambiguity it will utilize the default value
+in its place.  This should therefore be identical to::
 
    my_object["density"]
 
@@ -133,20 +148,6 @@
 along with it units.  This means that if you want to manipulate fields, you
 have to modify them in a unitful way.
 
-Field Info
-++++++++++
-
-In the past, the object ``ds`` (or ``ds``) had a ``field_info`` object which
-was a dictionary leading to derived field definitions.  At the present time,
-because of the field naming changes (i.e., access-by-tuple) it is better to
-utilize the function ``_get_field_info`` than to directly access the
-``field_info`` dictionary.  For example::
-
-   finfo = ds._get_field_info("gas", "density")
-
-This function respects the special "field type" ``unknown`` and will search all
-field types for the field name.
-
 Parameter Files are Now Datasets
 ++++++++++++++++++++++++++++++++
 
@@ -154,6 +155,29 @@
 (i.e., ``ds``) with the term "dataset."  Future revisions will change most of
 the ``ds`` atrributes of objects into ``ds`` or ``dataset`` attributes.
 
+Hierarchy is Now Index
+++++++++++++++++++++++
+
+The hierarchy object (``pf.h``) is now referred to as an index (``ds.index``).
+It is no longer necessary to directly refer to the ``index`` as often, since
+data objects are now attached to the ``dataset`` object.  Before, you would say
+``pf.h.sphere()``; now you can say ``ds.sphere()``.
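+
+For example (a sketch; the ``"c"`` center shorthand and the ``(10, "kpc")``
+radius are illustrative values):
+
+.. code-block:: python
+
+    import yt
+
+    ds = yt.load("RD0035/RedshiftOutput0035")
+
+    # data objects now hang directly off of the dataset
+    sp = ds.sphere("c", (10, "kpc"))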
+
+Field Info
+++++++++++
+
+In previous versions of yt, the ``dataset`` object (what we used to call a
+parameter file) had a ``field_info`` attribute which was a dictionary leading to
+derived field definitions.  At the present time, because of the field naming
+changes (i.e., access-by-tuple) it is better to utilize the function
+``_get_field_info`` than to directly access the ``field_info`` dictionary.  For
+example::
+
+   finfo = ds._get_field_info("gas", "density")
+
+This function respects the special "field type" ``unknown`` and will search all
+field types for the field name.
+
 Projection Argument Order
 +++++++++++++++++++++++++
 
@@ -173,7 +197,8 @@
 Nearly all internal objects have been renamed.  Typically this means either
 removing ``AMR`` from the prefix or replacing it with ``YT``.  All names of
 objects remain the same for the purposes of selecting data and creating them;
-i.e., you will not need to change ``ds.sphere`` to something else.
+i.e., ``sphere`` objects are still called ``sphere``; you can create one via
+``ds.sphere``.
 
 Boolean Regions
 +++++++++++++++

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f yt/frontends/stream/data_structures.py
--- a/yt/frontends/stream/data_structures.py
+++ b/yt/frontends/stream/data_structures.py
@@ -498,7 +498,7 @@
     for field in data:
         if isinstance(field, tuple): 
             new_field = field
-        elif len(data[field].shape) == 1:
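+        # 1D arrays and 2D arrays (e.g. (N, 3) stacked positions) are particle fields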
+        elif len(data[field].shape) in (1, 2):
             new_field = ("io", field)
         elif len(data[field].shape) == 3:
             new_field = ("gas", field)

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f yt/frontends/stream/fields.py
--- a/yt/frontends/stream/fields.py
+++ b/yt/frontends/stream/fields.py
@@ -68,6 +68,7 @@
         ("particle_mass", ("code_mass", [], None)),
         ("smoothing_length", ("code_length", [], None)),
         ("density", ("code_mass/code_length**3", [], None)),
+        ("creation_time", ("code_time", [], None)),
     )
 
     def setup_fluid_fields(self):

diff -r 5a10dea0299bf9cf1587b4365fd8b73688636a8e -r eceab32b33c86b14ba1a3746b6bfc82d66bad00f yt/frontends/stream/io.py
--- a/yt/frontends/stream/io.py
+++ b/yt/frontends/stream/io.py
@@ -167,9 +167,11 @@
         # self.fields[g.id][fname] is the pattern here
         morton = []
         for ptype in self.ds.particle_types_raw:
-            pos = np.column_stack(self.fields[data_file.filename][
-                                  (ptype, "particle_position_%s" % ax)]
-                                  for ax in 'xyz')
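+            # Positions may be stored either as separate per-axis fields
+            # (ptype, "particle_position_x/y/z") or as a single stacked
+            # (ptype, "particle_position") array; handle both layouts.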
+            try:
+                pos = np.column_stack(self.fields[data_file.filename][
+                    (ptype, "particle_position_%s" % ax)] for ax in 'xyz')
+            except KeyError:
+                pos = self.fields[data_file.filename][ptype, "particle_position"]
             if np.any(pos.min(axis=0) < data_file.ds.domain_left_edge) or \
                np.any(pos.max(axis=0) > data_file.ds.domain_right_edge):
                 raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
@@ -186,7 +188,10 @@
         pcount = {}
         for ptype in self.ds.particle_types_raw:
             d = self.fields[data_file.filename]
-            pcount[ptype] = d[ptype, "particle_position_x"].size
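+            # Fall back to the stacked particle_position array when the
+            # per-axis position fields are not present.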
+            try:
+                pcount[ptype] = d[ptype, "particle_position_x"].size
+            except KeyError:
+                pcount[ptype] = d[ptype, "particle_position"].shape[0]
         return pcount
 
     def _identify_fields(self, data_file):

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


