[yt-svn] commit/yt: chummels: Merged in ngoldbaum/yt/yt-3.0 (pull request #1044)

commits-noreply at bitbucket.org
Mon Jul 21 13:33:28 PDT 2014


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/c97c0f6168fd/
Changeset:   c97c0f6168fd
Branch:      yt-3.0
User:        chummels
Date:        2014-07-21 22:33:21
Summary:     Merged in ngoldbaum/yt/yt-3.0 (pull request #1044)

Doc fix patchbomb
Affected #:  45 files

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/cheatsheet.tex
--- a/doc/cheatsheet.tex
+++ b/doc/cheatsheet.tex
@@ -3,7 +3,7 @@
 \usepackage{calc}
 \usepackage{ifthen}
 \usepackage[landscape]{geometry}
-\usepackage[colorlinks = true, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref}
+\usepackage[hyphens]{url}
 
 % To make this come out properly in landscape mode, do one of the following
 % 1.
@@ -101,9 +101,13 @@
 Documentation \url{http://yt-project.org/doc/index.html}.
 Need help? Start here \url{http://yt-project.org/doc/help/} and then
 try the IRC chat room \url{http://yt-project.org/irc.html},
-or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}.
-{\bf Installing yt:} The easiest way to install yt is to use the installation script
-found on the yt homepage or the docs linked above.
+or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}. \\
+
+\subsection{Installing yt} The easiest way to install yt is to use the
+installation script found on the yt homepage or the docs linked above.  If you
+already have Python set up with \texttt{numpy}, \texttt{scipy},
+\texttt{matplotlib}, \texttt{h5py}, and \texttt{cython}, you can also use
+\texttt{pip install yt}.
 
 \subsection{Command Line yt}
 yt, and its convenience functions, are launched from a command line prompt.
@@ -118,9 +122,8 @@
 \texttt{yt stats} {\it dataset} \textemdash\ Print stats of a dataset. \\
 \texttt{yt update} \textemdash\ Update yt to most recent version.\\
 \texttt{yt update --all} \textemdash\ Update yt and dependencies to most recent version. \\
-\texttt{yt instinfo} \textemdash\ yt installation information. \\
+\texttt{yt version} \textemdash\ yt installation information. \\
 \texttt{yt notebook} \textemdash\ Run the IPython notebook server. \\
-\texttt{yt serve} ({\it dataset}) \textemdash\  Run yt-specific web GUI ({\it dataset} is optional).\\
 \texttt{yt upload\_image} {\it image.png} \textemdash\ Upload PNG image to imgur.com. \\
 \texttt{yt upload\_notebook} {\it notebook.nb} \textemdash\ Upload IPython notebook to hub.yt-project.org.\\
 \texttt{yt plot} {\it dataset} \textemdash\ Create a set of images.\\
@@ -132,16 +135,8 @@
  paste.yt-project.org. \\ 
 \texttt{yt pastebin\_grab} {\it identifier} \textemdash\ Print content of pastebin to
  STDOUT. \\
- \texttt{yt hub\_register} \textemdash\ Register with
-hub.yt-project.org. \\
-\texttt{yt hub\_submit} \textemdash\ Submit hg repo to
-hub.yt-project.org. \\
-\texttt{yt bootstrap\_dev} \textemdash\ Bootstrap a yt 
-development environment. \\
 \texttt{yt bugreport} \textemdash\ Report a yt bug. \\
 \texttt{yt hop} {\it dataset} \textemdash\  Run hop on a dataset. \\
-\texttt{yt rpdb} \textemdash\ Connect to running rpd 
- session. 
 
 \subsection{yt Imports}
 In order to use yt, Python must load the relevant yt modules into memory.
@@ -149,37 +144,40 @@
 used as part of a script.
 \newlength{\MyLen}
 \settowidth{\MyLen}{\texttt{letterpaper}/\texttt{a4paper} \ }
-\texttt{from yt.mods import \textasteriskcentered}  \textemdash\ 
-Load base yt  modules. \\
+\texttt{import yt}  \textemdash\ 
+Load yt. \\
 \texttt{from yt.config import ytcfg}  \textemdash\ 
 Used to set yt configuration options.
- If used, must be called before importing any other module.\\
-\texttt{from yt.analysis\_modules.api import \textasteriskcentered}   \textemdash\ 
-Load all yt analysis modules. \\
+If used, must be called before importing any other module.\\
 \texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered}  \textemdash\ 
 Load halo finding modules. Other modules
 are loaded in a similar way by swapping the 
 {\em emphasized} text.
 See the \textbf{Analysis Modules} section for a listing and short descriptions of each.
 
-\subsection{Numpy Arrays}
-Simulation data in yt is returned in Numpy arrays. The Numpy package provides a wealth of built-in
-functions that operate on Numpy arrays. Here is a very brief list of some useful ones.
-Please see \url{http://docs.scipy.org/doc/numpy/reference/} for the full
-numpy documentation.\\
-\settowidth{\MyLen}{\texttt{multicol} }
+\subsection{YTArray}
+Simulation data in yt is returned as a YTArray.  YTArray is a numpy array that
+has unit data attached to it and can automatically handle unit conversions and
+detect unit errors. Just like a numpy array, YTArray provides a wealth of
+built-in functions to calculate properties of the data in the array. Here is a
+very brief list of some useful ones.
+\settowidth{\MyLen}{\texttt{multicol} }\\
+\texttt{v = a.in\_cgs()} \textemdash\ Return the array in CGS units. \\
+\texttt{v = a.in\_units('Msun/pc**3')} \textemdash\ Return the array in solar masses per cubic parsec. \\
 \texttt{v = a.max(), a.min()} \textemdash\ Return maximum, minimum of \texttt{a}. \\
-\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max, 
+\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max,
 min value of \texttt{a}.\\
 \texttt{v = a[}{\it index}\texttt{]} \textemdash\ Select a single value from \texttt{a} at location {\it index}.\\
-\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between
+\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from
+\texttt{a} between
 locations {\it i} to {\it j-1} saved to a new Numpy array \texttt{b} with length {\it j-i}. \\
-\texttt{sel = (a > const)}  \textemdash\ Create a new boolean Numpy array \texttt{sel}, of the same shape as \texttt{a},
+\texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array
+\texttt{sel}, of the same shape as \texttt{a},
 that marks which values of \texttt{a > const}. Other operators (e.g. \textless, !=, \%) work as well.\\
-\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of elements from \texttt{a} that correspond to elements of \texttt{sel}
+\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of
+elements from \texttt{a} that correspond to elements of \texttt{sel}
 that are {\it True}. In the above example \texttt{b} would be all elements of \texttt{a} that are greater than \texttt{const}.\\
-\texttt{a.dump({\it filename.dat})} \textemdash\ Save \texttt{a} to the binary file {\it filename.dat}.\\
-\texttt{a = np.load({\it filename.dat})} \textemdash\ Load the contents of {\it filename.dat} into \texttt{a}.
+\texttt{a.write\_hdf5({\it filename.h5})} \textemdash\ Save \texttt{a} to the HDF5 file {\it filename.h5}.\\
 
 \subsection{IPython Tips}
 \settowidth{\MyLen}{\texttt{multicol} }
@@ -196,6 +194,7 @@
 \texttt{\%hist} \textemdash\ Print recent command history.\\
 \texttt{\%quickref} \textemdash\ Print IPython quick reference.\\
 \texttt{\%pdb} \textemdash\ Automatically enter the Python debugger at an exception.\\
+\texttt{\%debug} \textemdash\ Drop into a debugger at the location of the last unhandled exception. \\
 \texttt{\%time, \%timeit} \textemdash\ Find running time of expressions for benchmarking.\\
 \texttt{\%lsmagic} \textemdash\ List all available IPython magics. Hint: \texttt{?} works with magics.\\
 
@@ -208,10 +207,10 @@
 After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
 that define a region of simulation space from which data should be selected.
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
+\texttt{ds = yt.load(}{\it dataset}\texttt{)} \textemdash\   Reference a single snapshot.\\
 \texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
-\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
-numpy array \texttt{a}. Similarly for other data containers.\\
+\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Copies the contents of {\it field} into the
+YTArray \texttt{a}. Similarly for other data containers.\\
 \texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
 \texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
 in the snapshot. \\
@@ -231,45 +230,29 @@
 direction set by {\it normal}, with total length
  2$\times${\it height} and with radius {\it radius}. \\
  
- \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
- container. {\it constructor} is a list of pre-defined non-boolean 
- data containers with nested boolean logic using the
- ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
- {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
- by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
- 
 \texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
 \texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
 
 
-\subsection{Defining New Fields \& Quantities}
-\texttt{yt} expects on-disk fields, fields generated on-demand and in-memory. Quantities reduce a field (e.g. "Density") defined over an object (e.g. "sphere") to get a single value (e.g. "Mass"). \\
-\texttt{def \_MetalMassMsun({\it field},{\it data})}\\
-\texttt{\hspace{4 mm} return data["Metallicity"]*data["CellMassMsun"]}\\
-\texttt{add\_field("MetalMassMsun",function=\_MetalMassMsun)}\\
-Define a new quantity; note the first function operates on grids and data objects and the second on the results of the first. \\
-\texttt{def \_TotalMass(data): }\\
-\texttt{\hspace{4 mm} baryon\_mass = data["CellMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} particle\_mass = data["ParticleMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} return baryon\_mass, particle\_mass}\\
-\texttt{def \_combTotalMass(data, baryon\_mass, particle\_mass):}\\
-\texttt{\hspace{4 mm} return baryon\_mass.sum() + particle\_mass.sum()}\\
-\texttt{add\_quantity("TotalMass", function=\_TotalMass,}\\
-\texttt{\hspace{4 mm} combine\_function=\_combTotalMass, n\_ret = 2)}\\
-
-
+\subsection{Defining New Fields}
+\texttt{yt} supports on-disk fields, fields generated on-demand, and in-memory fields. 
+Fields can either be created before a dataset is loaded using \texttt{add\_field}:
+\texttt{def \_metal\_mass({\it field},{\it data})}\\
+\texttt{\hspace{4 mm} return data["metallicity"]*data["cell\_mass"]}\\
+\texttt{add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
+Or added to an existing dataset using \texttt{ds.add\_field}:
+\texttt{ds.add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
 
 \subsection{Slices and Projections}
 \settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
-perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with 
-{\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
+\texttt{slc = yt.SlicePlot(ds, {\it axis or normal vector}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+perpendicular to {\it axis} (specified as 'x', 'y', or 'z') or to a normal vector for an off-axis slice, of {\it field} weighted by {\it weight\_field}, at {\it center} (in code units), with 
+{\it width} in code units or a (value, unit) tuple. Hint: try {\it yt.SlicePlot?} in IPython to see additional parameters.\\
 \texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
 \texttt{.save()} works similarly for the commands below.\\
 
-\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = yt.ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = yt.OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis projection. Note this takes an array of fields. \\
 
 \subsection{Plot Annotations}
 \settowidth{\MyLen}{\texttt{multicol} }
@@ -299,51 +282,37 @@
 The \texttt{my\_plugins.py} file \textemdash\ Add functions, derived fields, constants, or other commonly-used Python code to yt.
 
 
-
-
 \subsection{Analysis Modules}
 \settowidth{\MyLen}{\texttt{multicol}}
 The import name for each module is listed at the end of each description (see \textbf{yt Imports}).
 
 \texttt{Absorption Spectrum} \textemdash\ (\texttt{absorption\_spectrum}). \\
 \texttt{Clump Finder} \textemdash\ Find clumps defined by density thresholds (\texttt{level\_sets}). \\
-\texttt{Coordinate Transformation} \textemdash\ (\texttt{coordinate\_transformation}). \\
 \texttt{Halo Finding} \textemdash\ Locate halos of dark matter particles (\texttt{halo\_finding}). \\
-\texttt{Halo Mass Function} \textemdash\ Find halo mass functions from data and from theory (\texttt{halo\_mass\_function}). \\
-\texttt{Halo Profiling} \textemdash\ Profile and project multiple halos (\texttt{halo\_profiler}). \\
-\texttt{Halo Merger Tree} \textemdash\ Create a database of halo mergers (\texttt{halo\_merger\_tree}). \\
 \texttt{Light Cone Generator} \textemdash\ Stitch datasets together to perform analysis over cosmological volumes. \\
 \texttt{Light Ray Generator} \textemdash\ Analyze the path of light rays.\\
-\texttt{Radial Column Density} \textemdash\ Calculate column densities around a point (\texttt{radial\_column\_density}). \\
 \texttt{Rockstar Halo Finding} \textemdash\ Locate halos of dark matter using the Rockstar halo finder (\texttt{halo\_finding.rockstar}). \\
 \texttt{Star Particle Analysis} \textemdash\ Analyze star formation history and assemble spectra (\texttt{star\_analysis}). \\
 \texttt{Sunrise Exporter} \textemdash\ Export data to the sunrise visualization format (\texttt{sunrise\_export}). \\
-\texttt{Two Point Functions} \textemdash\ Two point correlations (\texttt{two\_point\_functions}). \\
 
 
 \subsection{Parallel Analysis}
-\settowidth{\MyLen}{\texttt{multicol}}
-Nearly all of yt is parallelized using MPI.
-The {\it mpi4py} package must be installed for parallelism in yt.
-To install {\it pip install mpi4py} on the command line usually works.
+\settowidth{\MyLen}{\texttt{multicol}} 
+Nearly all of yt is parallelized using
+MPI.  The {\it mpi4py} package must be installed for parallelism in yt.  To
+install it, running {\it pip install mpi4py} on the command line usually works.
 Execute python in parallel similar to this:\\
-{\it mpirun -n 12 python script.py --parallel}\\
-This command may differ for each system on which you use yt;
-please consult the system documentation for details on how to run parallel applications.
+{\it mpirun -n 12 python script.py}\\
+The file \texttt{script.py} must call \texttt{yt.enable\_parallelism()} to
+turn on yt's parallelism.  If this doesn't happen, all cores will execute the
+same serial yt script.  This command may differ for each system on which you use
+yt; please consult the system documentation for details on how to run parallel
+applications.
 
-\texttt{from yt.pmods import *} \textemdash\ Load yt faster when in parallel.
-This replaces the usual \texttt{from yt.mods import *}.\\
 \texttt{parallel\_objects()} \textemdash\ A way to parallelize analysis over objects
 (such as halos or clumps).\\
 
 
-\subsection{Pre-Installed Versions}
-\settowidth{\MyLen}{\texttt{multicol}}
-yt is pre-installed on several supercomputer systems.
-
-\textbf{NICS Kraken} \textemdash\ {\it module load yt} \\
-
-
 \subsection{Mercurial}
 \settowidth{\MyLen}{\texttt{multicol}}
 Please see \url{http://mercurial.selenic.com/} for the full Mercurial documentation.
@@ -365,8 +334,7 @@
 \subsection{FAQ}
 \settowidth{\MyLen}{\texttt{multicol}}
 
-\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{ds.index} before this command. \\
+\texttt{slc.set\_log('field', False)} \textemdash\ When plotting \texttt{field}, use linear scaling instead of log scaling.
 
 
 %\rule{0.3\linewidth}{0.25pt}

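For reference, the YTArray behavior summarized in the cheatsheet above, as a
short runnable sketch (the IsolatedGalaxy path is the sample dataset used
elsewhere in these docs; the 1e-29 threshold is illustrative):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    dd = ds.all_data()

    rho = dd["density"]           # a YTArray with units attached
    print rho.units               # g/cm**3
    print rho.in_cgs().max()      # convert to CGS, then reduce
    print rho.in_units("Msun/pc**3").min()

    sel = rho > 1e-29             # boolean masking works as with plain numpy
    print rho[sel].size
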
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/_dq_docstrings.inc
--- a/doc/source/analyzing/_dq_docstrings.inc
+++ b/doc/source/analyzing/_dq_docstrings.inc
@@ -1,43 +1,20 @@
 
 
-.. function:: Action(action, combine_action, filter=None):
+.. function:: angular_momentum_vector()
 
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._Action`.)
-   This function evals the string given by the action arg and uses 
-   the function thrown with the combine_action to combine the values.  
-   A filter can be thrown to be evaled to short-circuit the calculation 
-   if some criterion is not met.
-   :param action: a string containing the desired action to be evaled.
-   :param combine_action: the function used to combine the answers when done lazily.
-   :param filter: a string to be evaled to serve as a data filter.
-
-
-
-.. function:: AngularMomentumVector():
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._AngularMomentumVector`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.AngularMomentumVector`.)
    This function returns the mass-weighted average angular momentum vector.
 
 
+.. function:: bulk_velocity():
 
-.. function:: BaryonSpinParameter():
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._BaryonSpinParameter`.)
-   This function returns the spin parameter for the baryons, but it uses
-   the particles in calculating enclosed mass.
-
-
-
-.. function:: BulkVelocity():
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._BulkVelocity`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.BulkVelocity`.)
    This function returns the mass-weighted average velocity in the object.
 
 
+.. function:: center_of_mass(use_cells=True, use_particles=False):
 
-.. function:: CenterOfMass(use_cells=True, use_particles=False):
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._CenterOfMass`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.CenterOfMass`.)
    This function returns the location of the center
    of mass. By default, it computes of the *non-particle* data in the object. 
    
@@ -51,112 +28,64 @@
 
 
 
-.. function:: Extrema(fields, non_zero=False, filter=None):
+.. function:: extrema(fields, non_zero=False, filter=None):
 
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._Extrema`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.Extrema`.)
    This function returns the extrema of a set of fields
    
    :param fields: A field name, or a list of field names
    :param filter: a string to be evaled to serve as a data filter.
 
 
+.. function:: max_location(field):
 
-.. function:: IsBound(truncate=True, include_thermal_energy=False, treecode=True, opening_angle=1.0, periodic_test=False, include_particles=True):
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._IsBound`.)
-   This returns whether or not the object is gravitationally bound. If this
-   returns a value greater than one, it is bound, and otherwise not.
-   
-   Parameters
-   ----------
-   truncate : Bool
-       Should the calculation stop once the ratio of
-       gravitational:kinetic is 1.0?
-   include_thermal_energy : Bool
-       Should we add the energy from ThermalEnergy
-       on to the kinetic energy to calculate 
-       binding energy?
-   treecode : Bool
-       Whether or not to use the treecode.
-   opening_angle : Float 
-       The maximal angle a remote node may subtend in order
-       for the treecode method of mass conglomeration may be
-       used to calculate the potential between masses.
-   periodic_test : Bool 
-       Used for testing the periodic adjustment machinery
-       of this derived quantity.
-   include_particles : Bool
-       Should we add the mass contribution of particles
-       to calculate binding energy?
-   
-   Examples
-   --------
-   >>> sp.quantities["IsBound"](truncate=False,
-   ... include_thermal_energy=True, treecode=False, opening_angle=2.0)
-   0.32493
-
-
-
-.. function:: MaxLocation(field):
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._MaxLocation`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.MaxLocation`.)
    This function returns the location of the maximum of a set
    of fields.
 
 
+.. function:: min_location(field):
 
-.. function:: MinLocation(field):
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._MinLocation`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.MinLocation`.)
    This function returns the location of the minimum of a set
    of fields.
 
 
 
-.. function:: ParticleSpinParameter():
+.. function:: spin_parameter(use_gas=True, use_particles=True):
 
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._ParticleSpinParameter`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.SpinParameter`.)
    This function returns the spin parameter for the baryons, but it uses
    the particles in calculating enclosed mass.
 
 
+.. function:: total_mass():
 
-.. function:: StarAngularMomentumVector():
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._StarAngularMomentumVector`.)
-   This function returns the mass-weighted average angular momentum vector 
-   for stars.
-
-
-
-.. function:: TotalMass():
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalMass`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalMass`.)
    This function takes no arguments and returns the sum of cell masses and
    particle masses in the object.
 
 
+.. function:: total_quantity(fields):
 
-.. function:: TotalQuantity(fields):
-
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalQuantity`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalQuantity`.)
    This function sums up a given field over the entire region
    
    :param fields: The fields to sum up
 
 
 
-.. function:: WeightedAverageQuantity(field, weight):
+.. function:: weighted_average_quantity(field, weight):
 
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedAverageQuantity`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedAverageQuantity`.)
    This function returns an averaged quantity.
    
    :param field: The field to average
    :param weight: The field to weight by
 
-.. function:: WeightedVariance(field, weight):
+.. function:: weighted_variance(field, weight):
 
-   (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedVariance`.)
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedVariance`.)
     This function returns the variance of a field.
 
     :param field: The target field

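The renamed derived quantities documented above can be exercised with a
minimal sketch like this (dataset path and sphere radius are illustrative,
reused from the notebooks in this changeset):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sp = ds.sphere("max", (50.0, "kpc"))

    # snake_case methods replace the old dict-style quantity lookup
    print sp.quantities.angular_momentum_vector()
    print sp.quantities.bulk_velocity()
    print sp.quantities.extrema("density")
    print sp.quantities.total_quantity("cell_mass")
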
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
@@ -1,6 +1,7 @@
 {
  "metadata": {
-  "name": ""
+  "name": "",
+  "signature": "sha256:e792ad188f59161aa3ff4cdbb32cad75142b2e6b4062dfa1d8c12b3172fcf4e9"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -34,7 +35,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
+      "import yt\n",
       "from yt.analysis_modules.halo_analysis.api import *\n",
       "import tempfile\n",
       "import shutil\n",
@@ -44,7 +45,7 @@
       "tmpdir = tempfile.mkdtemp()\n",
       "\n",
       "# Load the data set with the full simulation information\n",
-      "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
+      "data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')"
      ],
      "language": "python",
      "metadata": {},
@@ -62,7 +63,7 @@
      "collapsed": false,
      "input": [
       "# Load the rockstar data files\n",
-      "halos_ds = load('rockstar_halos/halos_0.0.bin')"
+      "halos_ds = yt.load('rockstar_halos/halos_0.0.bin')"
      ],
      "language": "python",
      "metadata": {},
@@ -407,4 +408,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:ba8b6a53571695ae1d0c236ad43875823746e979a329a9d35ab0a8b899cebbba"
+  "signature": "sha256:56a8d72735e3cc428ff04b241d4b2ce6f653019818c6fc7a4148840d99030c85"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -19,8 +19,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "%matplotlib inline\n",
-      "from yt.mods import *\n",
+      "import yt\n",
+      "import numpy as np\n",
+      "\n",
       "from yt.analysis_modules.ppv_cube.api import PPVCube"
      ],
      "language": "python",
@@ -122,7 +123,7 @@
       "data[\"velocity_y\"] = (vely, \"km/s\")\n",
       "data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
       "bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
-      "ds = load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+      "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
      ],
      "language": "python",
      "metadata": {},
@@ -139,7 +140,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+      "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
      ],
      "language": "python",
      "metadata": {},
@@ -222,7 +223,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"cube.fits\")"
+      "ds = yt.load(\"cube.fits\")"
      ],
      "language": "python",
      "metadata": {},
@@ -233,7 +234,7 @@
      "collapsed": false,
      "input": [
       "# Specifying no center gives us the center slice\n",
-      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
       "slc.show()"
      ],
      "language": "python",
@@ -248,7 +249,7 @@
       "# Picking different velocities for the slices\n",
       "new_center = ds.domain_center\n",
       "new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
-      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -260,7 +261,7 @@
      "collapsed": false,
      "input": [
       "new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
-      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -272,7 +273,7 @@
      "collapsed": false,
      "input": [
       "new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
-      "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+      "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
       "slc.show()"
      ],
      "language": "python",
@@ -290,7 +291,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
+      "prj = yt.ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
       "prj.set_log(\"density\", True)\n",
       "prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
       "prj.show()"
@@ -303,4 +304,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

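The yt.load_uniform_grid call used in this notebook follows the pattern below
(a sketch with a random density field; shapes and units are illustrative):

    import yt
    import numpy as np

    nx, ny, nz = 64, 64, 64
    data = {"density": (np.random.random((nx, ny, nz)), "g/cm**3")}
    bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])
    ds = yt.load_uniform_grid(data, (nx, ny, nz),
                              length_unit=(1.0, "kpc"), bbox=bbox)

    slc = yt.SlicePlot(ds, "z", "density")
    slc.save()
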
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ b/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:e4b5ea69687eb79452c16385b3a6f795b4572518dfa7f9d8a8125bd75b5fea85"
+  "signature": "sha256:5ab80c6b33a115cb88c36fde8659434d14a852dd43b0b419f2bb0c04acf66278"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -20,7 +20,7 @@
      "collapsed": false,
      "input": [
       "%matplotlib inline\n",
-      "from yt.mods import *\n",
+      "import yt\n",
       "import glob\n",
       "from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
       "from yt.config import ytcfg\n",
@@ -77,7 +77,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(my_fns[0])\n",
+      "ds = yt.load(my_fns[0])\n",
       "dd = ds.all_data()\n",
       "indices = dd[\"particle_index\"].astype(\"int\")\n",
       "print indices"
@@ -205,8 +205,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
-      "slc = SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
+      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
       "slc.show()"
      ],
      "language": "python",

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/SZ_projections.ipynb
--- a/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
+++ b/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:4745a15abb6512547b50280b92c22567f89255189fd968ca706ef7c39d48024f"
+  "signature": "sha256:e4db171b795d155870280ddbe8986f55f9a94ffb10783abf9d4cc2de3ec24894"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -89,11 +89,10 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "%matplotlib inline\n",
-      "from yt.mods import *\n",
+      "import yt\n",
       "from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection\n",
       "\n",
-      "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+      "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
       "\n",
       "freqs = [90.,180.,240.]\n",
       "szprj = SZProjection(ds, freqs)"
@@ -218,14 +217,6 @@
       "including coordinate information in kpc. The optional keyword\n",
       "`clobber` allows a previous file to be overwritten. \n"
      ]
-    },
-    {
-     "cell_type": "code",
-     "collapsed": false,
-     "input": [],
-     "language": "python",
-     "metadata": {},
-     "outputs": []
     }
    ],
    "metadata": {}

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -87,7 +87,7 @@
   ds = load("DD0000")
   sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
-  ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
       treecode=True, opening_angle=2.0)
 
 This example will accomplish the same as the above, but will use the full
@@ -100,7 +100,7 @@
   ds = load("DD0000")
   sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
   
-  ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+  ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
       treecode=False)
 
 Here the treecode method is specified for clump finding (this is default).
@@ -109,7 +109,7 @@
 
 .. code-block:: python
   
-  function_name = 'self.data.quantities["IsBound"](truncate=True, \
+  function_name = 'self.data.quantities.is_bound(truncate=True, \
       include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
   master_clump = amods.level_sets.Clump(data_source, None, field,
       function=function_name)

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -31,35 +31,15 @@
 derived fields.  If it finds nothing there, it then defaults to examining the
 global set of derived fields.
 
-To add a field to the list of fields that you know should exist in a particular
-frontend, call the function ``add_frontend_field`` where you replace
-``frontend`` with the name of the frontend.  Below is an example for adding
-``Cooling_Time`` to Enzo:
+To add a derived field, which is not expected to necessarily exist on disk, use
+the standard construction:
 
 .. code-block:: python
 
-   add_enzo_field("Cooling_Time", units=r"\rm{s}",
-                  function=NullFunc,
-                  validators=ValidateDataField("Cooling_Time"))
+   add_field("specific_thermal_energy", function=_specific_thermal_energy,
+             units="ergs/g")
 
-Note that we used the ``NullFunc`` function here.  To add a derived field,
-which is not expected to necessarily exist on disk, use the standard
-construction:
-
-.. code-block:: python
-
-   add_field("thermal_energy", function=_ThermalEnergy,
-             units=r"\rm{ergs}/\rm{g}")
-
-To add a translation from one field to another, use the ``TranslationFunc`` as
-the function for reading the field.  For instance, this code appears in the Nyx
-frontend:
-
-.. code-block:: python
-
-   add_field("density", function=TranslationFunc("density"), take_log=True,
-             units=r"\rm{g} / \rm{cm}^3",
-             projected_units =r"\rm{g} / \rm{cm}^2")
+where ``_specific_thermal_energy`` is a Python function that defines the field.
 
 .. _accessing-fields:
 
@@ -105,7 +85,7 @@
 
 .. code-block:: python
 
-   ds = load("my_data")
+   ds = yt.load("my_data")
    print ds.field_list
    print ds.derived_field_list
 
@@ -115,7 +95,7 @@
 
 .. code-block:: python
 
-   ds = load("my_data")
+   ds = yt.load("my_data")
    print ds.field_info["pressure"].get_units()
 
 This is a fast way to examine the units of a given field, and additionally you
@@ -141,8 +121,8 @@
 
 .. code-block:: python
 
-   from yt.mods import *
-   ds = load("RedshiftOutput0005")
+   import yt
+   ds = yt.load("RedshiftOutput0005")
    reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
 
 .. include:: _obj_docstrings.inc
@@ -199,7 +179,7 @@
 
    ds = load("my_data")
    dd = ds.all_data()
-   dd.quantities["AngularMomentumVector"]()
+   dd.quantities.angular_momentum_vector()
 
 The following quantities are available via the ``quantities`` interface.
 
@@ -246,8 +226,8 @@
 
 .. notebook-cell::
 
-   from yt.mods import *
-   ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+   import yt
+   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
    ad = ds.all_data()
    total_mass = ad.quantities.total_quantity('cell_mass')
    # now select only gas with 1e5 K < T < 1e7 K.
@@ -268,12 +248,12 @@
 
 .. python-script::
 
-   from yt.mods import *
-   ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+   import yt
+   ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
    ad = ds.all_data()
    new_region = ad.cut_region(['obj["density"] > 1e-29'])
-   plot = ProjectionPlot(ds, "x", "density", weight_field="density",
-                         data_source=new_region)
+   plot = yt.ProjectionPlot(ds, "x", "density", weight_field="density",
+                            data_source=new_region)
    plot.save()
 
 .. _extracting-connected-sets:
@@ -311,10 +291,6 @@
 
 Extracting Isocontour Information
 ---------------------------------
-.. versionadded:: 2.3
-
-.. warning::
-   This is still beta!
 
 ``yt`` contains an implementation of the `Marching Cubes
 <http://en.wikipedia.org/wiki/Marching_cubes>`_ algorithm, which can operate on
@@ -378,8 +354,8 @@
 
 .. code-block:: python
 
-   from yt.mods import *
-   ds = load("my_data")
+   import yt
+   ds = yt.load("my_data")
    sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
    ds.save_object(sp, "sphere_to_analyze_later")
@@ -390,9 +366,9 @@
 
 .. code-block:: python
 
-   from yt.mods import *
+   import yt
 
-   ds = load("my_data")
+   ds = yt.load("my_data")
    sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
 
 Additionally, if we want to store the object independent of the ``.yt`` file,
@@ -400,9 +376,9 @@
 
 .. code-block:: python
 
-   from yt.mods import *
+   import yt
 
-   ds = load("my_data")
+   ds = yt.load("my_data")
    sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
 
    sp.save_object("my_sphere", "my_storage_file.cpkl")
@@ -416,10 +392,10 @@
 
 .. code-block:: python
 
-   from yt.mods import *
+   import yt
    import shelve
 
-   ds = load("my_data") # not necessary if storeparameterfiles is on
+   ds = yt.load("my_data") # not necessary if storeparameterfiles is on
 
    obj_file = shelve.open("my_storage_file.cpkl")
    ds, obj = obj_file["my_sphere"]

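Tying the field examples above together, a self-contained version of the
metal_mass field from the cheatsheet, added to a loaded dataset (the
"my_data" path is the placeholder used throughout this document):

    import yt

    def _metal_mass(field, data):
        # product of two existing fields, as in the cheatsheet example
        return data["metallicity"] * data["cell_mass"]

    ds = yt.load("my_data")
    ds.add_field("metal_mass", units="g", function=_metal_mass)

    dd = ds.all_data()
    print dd["metal_mass"].in_units("Msun").sum()
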
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -33,27 +33,23 @@
 creating your own, and these operators can be applied either to datasets on the
 whole or to subregions of individual datasets.
 
-The simplest mechanism for creating a ``DatasetSeries`` object is to use the
-class method
-:meth:`~yt.data_objects.time_series.DatasetSeries.from_filenames`.  This
-method accepts a list of strings that can be supplied to ``load``.  For
-example:
+The simplest mechanism for creating a ``DatasetSeries`` object is to pass a glob
+pattern to the ``yt.load`` function.
 
 .. code-block:: python
 
-   from yt.mods import *
-   filenames = ["DD0030/output_0030", "DD0040/output_0040"]
-   ts = DatasetSeries.from_filenames(filenames)
+   import yt
+   ts = yt.load("DD????/DD????")
 
-This will create a new time series, populated with the output files ``DD0030``
-and ``DD0040``.  This object, here called ``ts``, can now be analyzed in bulk.
-Alternately, you can specify a pattern that is supplied to :mod:`glob`, and
-those filenames will be sorted and returned.  Here is an example:
+This will create a new time series, populated with all datasets that match the
+pattern "DD" followed by four digits.  This object, here called ``ts``, can now
+be analyzed in bulk.  Alternatively, you can specify an already formatted list of
+filenames directly to the ``DatasetSeries`` initializer:
 
 .. code-block:: python
 
-   from yt.mods import *
-   ts = DatasetSeries.from_filenames("*/*.index")
+   import yt
+   ts = yt.DatasetSeries(["DD0030/DD0030", "DD0040/DD0040"])
 
 Analyzing Each Dataset In Sequence
 ----------------------------------
@@ -64,8 +60,8 @@
 
 .. code-block:: python
 
-   from yt.mods import *
-   ts = DatasetSeries.from_filenames("*/*.index")
+   import yt
+   ts = yt.load("*/*.index")
    for ds in ts:
        print ds.current_time
 
@@ -77,87 +73,6 @@
  * The cookbook recipe for :ref:`cookbook-time-series-analysis`
  * :class:`~yt.data_objects.time_series.DatasetSeries`
 
-Prepared Time Series Analysis
------------------------------
-
-A few handy functions for treating time series data as a uniform, single object
-are also available.
-
-.. warning:: The future of these functions is uncertain: they may be removed in
-   the future!
-
-Simple Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~
-
-The available tasks that come built-in can be seen by looking at the output of
-``ts.tasks.keys()``.  For instance, one of the simplest ones is the
-``MaxValue`` task.  We can execute this task by calling it with the field whose
-maximum value we want to evaluate:
-
-.. code-block:: python
-
-   from yt.mods import *
-   ts = TimeSeries.from_filenames("*/*.index")
-   max_rho = ts.tasks["MaximumValue"]("density")
-
-When we call the task, the time series object executes the task on each
-component dataset.  The results are then returned to the user.  More
-complex, multi-task evaluations can be conducted by using the
-:meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
-list of analysis tasks.
-
-Analysis Tasks Applied to Objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Just as some tasks can be applied to datasets as a whole, one can also apply
-the creation of objects to datasets.  This means that you are able to construct
-a generalized "sphere" operator that will be created inside all datasets, which
-you can then calculate derived quantities (see :ref:`derived-quantities`) from.
-
-For instance, imagine that you wanted to create a sphere that is centered on
-the most dense point in the simulation and that is 1 pc in radius, and then
-calculate the angular momentum vector on this sphere.  You could do that with
-this script:
-
-.. code-block:: python
-
-   from yt.mods import *
-   ts = TimeSeries.from_filenames("*/*.index")
-   sphere = ts.sphere("max", (1.0, "pc"))
-   L_vecs = sphere.quantities["AngularMomentumVector"]()
-
-Note that we have specified the units differently than usual -- the time series
-objects allow units as a tuple, so that in cases where units may change over
-the course of several outputs they are correctly set at all times.  This script
-simply sets up the time series object, creates a sphere, and then runs
-quantities on it.  It is designed to look very similar to the code that would
-conduct this analysis on a single output.
-
-All of the objects listed in :ref:`available-objects` are made available in
-the same manner as "sphere" was used above.
-
-Creating Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~~~
-
-If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and ds and then decorate it with
-analysis_task. Here we have done so:
-
-.. code-block:: python
-
-   @analysis_task(('particle_type',))
-   def MassInParticleType(params, ds):
-       dd = ds.all_data()
-       ptype = (dd["particle_type"] == params.particle_type)
-       return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
-
-   ms = ts.tasks["MassInParticleType"](4)
-   print ms
-
-This allows you to create your own analysis tasks that will be then available
-to time series data objects.  Since ``DatasetSeries`` objects iterate over
-filenames in parallel by default, this allows for transparent parallelization. 
-
 .. _analyzing-an-entire-simulation:
 
 Analyzing an Entire Simulation
@@ -175,9 +90,9 @@
 
 .. code-block:: python
 
-  from yt.mods import *
-  my_sim = simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
-                      find_outputs=False)
+  import yt
+  my_sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
+                         find_outputs=False)
 
 Then, create a ``DatasetSeries`` object with the :meth:`get_time_series` 
 function.  With no additional keywords, the time series will include every 
@@ -198,7 +113,7 @@
 
  for ds in my_sim.piter():
       all_data = ds.all_data()
-      print all_data.quantities['Extrema']('density')
+      print all_data.quantities.extrema('density')
  
 Additional keywords can be given to :meth:`get_time_series` to select a subset
 of the total data:

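Putting the pieces above together, a minimal time-series loop in the new
style (the DD????/DD???? glob is the one shown above; extrema comes from the
derived-quantities interface):

    import yt

    ts = yt.load("DD????/DD????")
    for ds in ts:
        ad = ds.all_data()
        print ds.current_time, ad.quantities.extrema("density")
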
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
--- a/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
+++ b/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:882b31591c60bfe6ad4cb0f8842953d2e94fb8a12ce742be831a65642eea72c9"
+  "signature": "sha256:2faff88abc93fe2bc9d91467db786a8b69ec3ece6783a7055942ecc7c47a0817"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -34,8 +34,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "import yt\n",
+      "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
       "          \n",
       "dd = ds.all_data()\n",
       "maxval, maxloc = ds.find_max('density')\n",
@@ -324,6 +324,8 @@
      "collapsed": false,
      "input": [
       "from astropy import units as u\n",
+      "from yt import YTQuantity, YTArray\n",
+      "\n",
       "x = 42.0 * u.meter\n",
       "y = YTQuantity.from_astropy(x) "
      ],

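The astropy interoperability cell above, as a standalone snippet (requires
astropy; the unit names are the ones shown in the notebook):

    from astropy import units as u
    from yt import YTQuantity

    x = 42.0 * u.meter              # an astropy Quantity
    y = YTQuantity.from_astropy(x)  # now a yt quantity with yt units
    print y, y.in_units("km")
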
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
--- a/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
+++ b/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:242d7005d45a82744713bfe6389e49d47f39b524d1e7fcbf5ceb2e65dc473e68"
+  "signature": "sha256:8ba193cc3867e2185133bbf3952bd5834e6c63993208635c71cf55fa6f27b491"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -34,8 +34,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "ds = load('Enzo_64/DD0043/data0043')"
+      "import yt\n",
+      "ds = yt.load('Enzo_64/DD0043/data0043')"
      ],
      "language": "python",
      "metadata": {},
@@ -208,7 +208,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
+      "slc = yt.SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
       "slc.set_figure_size(6)"
      ],
      "language": "python",
@@ -234,6 +234,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
+      "from yt import YTQuantity\n",
+      "\n",
       "a = YTQuantity(3, 'cm')\n",
       "\n",
       "print a.units.registry.keys()"

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
--- a/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
+++ b/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:448380e74a746d19dc1eecfe222c0e798a87a4ac285e4f50e2598316086c5ee8"
+  "signature": "sha256:273a23e3a20b277a9e5ea7117b48cf19013c331d0893e6e9d21896e97f59aceb"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -22,9 +22,9 @@
      "collapsed": false,
      "input": [
       "# A high redshift output from z ~ 8\n",
-      "from yt.mods import *\n",
+      "import yt\n",
       "\n",
-      "ds1 = load('Enzo_64/DD0002/data0002')\n",
+      "ds1 = yt.load('Enzo_64/DD0002/data0002')\n",
       "print \"z = %s\" % ds1.current_redshift\n",
       "print \"Internal length units = %s\" % ds1.length_unit\n",
       "print \"Internal length units in cgs = %s\" % ds1.length_unit.in_cgs()"
@@ -38,7 +38,7 @@
      "collapsed": false,
      "input": [
       "# A low redshift output from z ~ 0\n",
-      "ds2 = load('Enzo_64/DD0043/data0043')\n",
+      "ds2 = yt.load('Enzo_64/DD0043/data0043')\n",
       "print \"z = %s\" % ds2.current_redshift\n",
       "print \"Internal length units = %s\" % ds2.length_unit\n",
       "print \"Internal length units in cgs = %s\" % ds2.length_unit.in_cgs()"
@@ -94,9 +94,10 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
+      "import yt\n",
+      "yt.enable_parallelism()\n",
       "\n",
-      "ts = DatasetSeries.from_filenames(\"Enzo_64/DD????/data????\")\n",
+      "ts = yt.load(\"Enzo_64/DD????/data????\")\n",
       "\n",
       "storage = {}\n",
       "\n",
@@ -104,7 +105,7 @@
       "    sto.result_id = ds.current_time\n",
       "    sto.result = ds.length_unit\n",
       "\n",
-      "if is_root():\n",
+      "if yt.is_root():\n",
       "    for t in sorted(storage.keys()):\n",
       "        print t.in_units('Gyr'), storage[t].in_units('Mpc')"
      ],

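The storage-dict cells above correspond to this standalone parallel script
(a sketch; the piter loop header is assumed from the standard DatasetSeries
API, and the script is launched with mpirun as described in the cheatsheet):

    import yt
    yt.enable_parallelism()

    ts = yt.load("Enzo_64/DD????/data????")
    storage = {}

    # each processor fills its slot; results are collected on the root
    for sto, ds in ts.piter(storage=storage):
        sto.result_id = ds.current_time
        sto.result = ds.length_unit

    if yt.is_root():
        for t in sorted(storage.keys()):
            print t.in_units('Gyr'), storage[t].in_units('Mpc')
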
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/5)_Units_and_plotting.ipynb
--- a/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
+++ b/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:981baca6958c75f0d84bbc24be7d2b75af5957d36aa3eb4ba725d9e47a85f80d"
+  "signature": "sha256:3deac8455c3bbd85e3cefc0f8905be509fba0050f67f69a7faed0505b4d8dbad"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -28,9 +28,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
-      "slc = SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
+      "import yt\n",
+      "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "slc = yt.SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
       "slc.set_figure_size(6)"
      ],
      "language": "python",
@@ -107,7 +107,7 @@
      "collapsed": false,
      "input": [
       "dd = ds.all_data()\n",
-      "plot = ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
+      "plot = yt.ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
       "plot.show()"
      ],
      "language": "python",
@@ -142,7 +142,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "plot = PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
+      "plot = yt.PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
       "plot.set_figure_size(6)"
      ],
      "language": "python",

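For completeness, the plotting calls updated in this notebook as one short
script (dataset path taken from the notebook cells above):

    import yt

    ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
    slc = yt.SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5],
                       width=(15, 'kpc'))
    slc.save()

    dd = ds.all_data()
    plot = yt.PhasePlot(dd, 'density', 'temperature', 'cell_mass')
    plot.save()
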
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
--- a/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
+++ b/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
@@ -230,14 +230,14 @@
      "collapsed": false,
      "input": [
       "sp_small = ds.sphere(\"max\", (50.0, 'kpc'))\n",
-      "bv = sp_small.quantities[\"BulkVelocity\"]()\n",
+      "bv = sp_small.quantities.bulk_velocity()\n",
       "\n",
       "sp = ds.sphere(\"max\", (0.1, 'Mpc'))\n",
-      "rv1 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+      "rv1 = sp.quantities.extrema(\"radial_velocity\")\n",
       "\n",
       "sp.clear_data()\n",
       "sp.set_field_parameter(\"bulk_velocity\", bv)\n",
-      "rv2 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+      "rv2 = sp.quantities.extrema(\"radial_velocity\")\n",
       "\n",
       "print bv\n",
       "print rv1\n",
@@ -251,4 +251,4 @@
    "metadata": {}
   }
  ]
-}
\ No newline at end of file
+}

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import numpy as np
-import yt
-
-#This is an example of how to map an amr data set
-#to a uniform grid. In this case the highest
-#level of refinement is mapped into a 1024x1024x1024 cube
-
-#first the amr data is loaded
-ds = yt.load("~/pfs/galaxy/new_tests/feedback_8bz/DD0021/DD0021")
-
-#next we get the maxium refinement level
-lmax = ds.parameters['MaximumRefinementLevel']
-
-#calculate the center of the domain
-domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
-
-#determine the cellsize in the highest refinement level
-cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
-
-#calculate the left edge of the new grid
-left_edge = domain_center - 512*cell_size
-
-#the number of cells per side of the new grid
-ncells = 1024
-
-#ask yt for the specified covering grid
-cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
-
-#get a map of the density into the new grid
-density_map = cgrid["density"].astype(dtype="float32")
-
-#save the file as a numpy array for convenient future processing
-np.save("/pfs/goldbaum/galaxy/new_tests/feedback_8bz/gas_density_DD0021_log_densities.npy", density_map)

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/constructing_data_objects.rst
--- a/doc/source/cookbook/constructing_data_objects.rst
+++ b/doc/source/cookbook/constructing_data_objects.rst
@@ -25,6 +25,8 @@
 
 .. yt_cookbook:: find_clumps.py
 
+.. _extract_frb:
+
 Extracting Fixed Resolution Data
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -1,6 +1,7 @@
 {
  "metadata": {
-  "name": ""
+  "name": "",
+  "signature": "sha256:e8fd07931e339dc67b9d84b0fbc6abc84d3957d885544c24da7aa550f9427a1f"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -11,8 +12,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "%matplotlib inline\n",
-      "from yt.mods import *"
+      "import yt"
      ],
      "language": "python",
      "metadata": {},
@@ -22,8 +22,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
-      "slc = SlicePlot(ds, 'x', 'density')\n",
+      "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+      "slc = yt.SlicePlot(ds, 'x', 'density')\n",
       "slc"
      ],
      "language": "python",
@@ -87,4 +87,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:4f7d409d15ecc538096d15212923312e2cb4a911ebf5a9cf7edc9bd63a8335e9"
+  "signature": "sha256:bed79f0227742715a8753a98f2ad54175767a7c9ded19b14976ee6c8ff255f04"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -23,7 +23,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
+      "import yt\n",
       "from JSAnimation import IPython_display\n",
       "from matplotlib import animation"
      ],
@@ -47,14 +47,14 @@
       "import matplotlib.pyplot as plt\n",
       "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
       "\n",
-      "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+      "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
       "prj.set_figure_size(5)\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
       "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
@@ -68,4 +68,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file
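
The hunk keeps the notebook's animation pattern intact: build one
ProjectionPlot up front, then swap each dataset into it inside animate()
via prj._switch_ds(ds). The animator call itself sits outside the hunk; a
hedged sketch of what such a call typically looks like, where the frame
count and interval are assumptions and fig/animate come from the notebook:

    from matplotlib import animation

    # blit=True re-draws only the parts of the figure that have changed,
    # matching the comment in the notebook source
    anim = animation.FuncAnimation(fig, animate, frames=44, interval=200, blit=True)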

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:0090176ae6299b2310bf613404cbfbb42a54e19a03d1469d1429a01170a63aa0"
+  "signature": "sha256:b400f12ff9e27ff6a3ddd13f2f8fc3f88bd857fa6083fad6808f00d771312db7"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -21,7 +21,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
+      "import yt\n",
       "from matplotlib import animation"
      ],
      "language": "python",
@@ -96,13 +96,13 @@
       "import matplotlib.pyplot as plt\n",
       "from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
       "\n",
-      "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+      "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
       "prj.set_zlim('density',1e-32,1e-26)\n",
       "fig = prj.plots['density'].figure\n",
       "\n",
       "# animation function.  This is called sequentially\n",
       "def animate(i):\n",
-      "    ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+      "    ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
       "    prj._switch_ds(ds)\n",
       "\n",
       "# call the animator.  blit=True means only re-draw the parts that have changed.\n",
@@ -119,4 +119,4 @@
    "metadata": {}
   }
  ]
-}
+}
\ No newline at end of file
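
The WebM variant differs from the JavaScript one mainly in how the finished
animation is encoded for embedding. Those cells fall outside this hunk; one
plausible route, assuming an ffmpeg build with libvpx on the PATH (the
filename, codec, and frame rate are assumptions):

    # encode the animation object built above to WebM via ffmpeg/libvpx
    anim.save('animation.webm', writer='ffmpeg', codec='libvpx', fps=15)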

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/ffmpeg_volume_rendering.py
--- a/doc/source/cookbook/ffmpeg_volume_rendering.py
+++ /dev/null
@@ -1,99 +0,0 @@
-#This is an example of how to make videos of 
-#uniform grid data using Theia and ffmpeg
-
-#The Scene object to hold the ray caster and view camera
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-
-#GPU based raycasting algorithm to use 
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-
-#These will be used to define how to color the data
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-#This will be used to launch ffmpeg
-import subprocess as sp
-
-#Of course we need numpy for math magic
-import numpy as np
-
-#Opacity scaling function
-def scale_func(v, mi, ma):
-      return  np.minimum(1.0, (v-mi)/(ma-mi) + 0.0)
-
-#load the uniform grid from a numpy array file
-bolshoi = "/home/bogert/log_densities_1024.npy"
-density_grid = np.load(bolshoi)
-
-#Set the TheiaScene to use the density_grid and 
-#setup the raycaster for a resulting 1080p image
-ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (1920,1080) ))
-
-#the min and max values in the data to color
-mi, ma = 0.0, 3.6
-
-#setup colortransferfunction
-bins = 5000
-tf = ColorTransferFunction( (mi, ma), bins)
-tf.map_to_colormap(0.5, ma, colormap="spring", scale_func = scale_func)
-
-#pass the transfer function to the ray caster
-ts.source.raycaster.set_transfer(tf)
-
-#Initial configuration for start of video
-#set initial opacity and brightness values
-#then zoom into the center of the data 30%
-ts.source.raycaster.set_opacity(0.03)
-ts.source.raycaster.set_brightness(2.3)
-ts.camera.zoom(30.0)
-
-#path to ffmpeg executable
-FFMPEG_BIN = "/usr/local/bin/ffmpeg"
-
-pipe = sp.Popen([ FFMPEG_BIN,
-        '-y', # (optional) overwrite the output file if it already exists
-        #This must be set to rawvideo because the image is an array
-        '-f', 'rawvideo',
-        #The codec is also rawvideo, since raw frames are piped in
-        '-vcodec','rawvideo',
-        #The size of the image array and resulting video
-        '-s', '1920x1080',
-        #This must be rgba to match array format (uint32)
-        '-pix_fmt', 'rgba',
-        #frame rate of video
-        '-r', '29.97',
-        #Indicate that the input to ffmpeg comes from a pipe
-        '-i', '-',
-        # Tells FFMPEG not to expect any audio
-        '-an',
-        #Setup video encoder
-        #Use any encoder you like that is available from ffmpeg
-        '-vcodec', 'libx264', '-preset', 'ultrafast', '-qp', '0',
-        '-pix_fmt', 'yuv420p',
-        #Name of the output
-        'bolshoiplanck2.mkv' ],
-        stdin=sp.PIPE, stdout=sp.PIPE)
-
-
-#Now we loop and produce 500 frames
-for k in range(0, 500):
-    #update the scene resulting in a new image
-    ts.update()
-
-    #get the image array from the ray caster
-    array = ts.source.get_results()
-
-    #send the image array to ffmpeg
-    array.tofile(pipe.stdin)
-
-    #rotate the scene by 0.01 rads in x,y & z
-    ts.camera.rotateX(0.01)
-    ts.camera.rotateZ(0.01)
-    ts.camera.rotateY(0.01)
-
-    #zoom in 0.01% for a total of a 5% zoom
-    ts.camera.zoom(0.01)
-
-
-#Close ffmpeg's input and wait for it to finish writing the file;
-#terminating the process here could truncate the video
-pipe.stdin.close()
-pipe.wait()
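
Stripped of the Theia specifics, the removed script demonstrates a general
pattern: stream raw frames into an ffmpeg child process over stdin. A
compact, self-contained sketch of that pattern, where the resolution, frame
count, frame contents, and output name are arbitrary placeholders:

    import subprocess as sp
    import numpy as np

    pipe = sp.Popen(
        ["ffmpeg", "-y",
         "-f", "rawvideo", "-vcodec", "rawvideo",  # input is a raw byte stream
         "-s", "640x480", "-pix_fmt", "rgba",      # must match the array layout
         "-r", "30", "-i", "-",                    # read frames from stdin
         "-an", "-vcodec", "libx264", "out.mkv"],
        stdin=sp.PIPE)

    for _ in range(60):
        frame = np.zeros((480, 640, 4), dtype=np.uint8)  # placeholder frame data
        frame.tofile(pipe.stdin)

    pipe.stdin.close()
    pipe.wait()  # let ffmpeg flush and finalize the container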

diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/opengl_stereo_volume_rendering.py
--- a/doc/source/cookbook/opengl_stereo_volume_rendering.py
+++ /dev/null
@@ -1,370 +0,0 @@
-from OpenGL.GL import *
-from OpenGL.GLUT import *
-from OpenGL.GLU import *
-from OpenGL.GL.ARB.vertex_buffer_object import *
-
-import sys, time
-import numpy as np
-import pycuda.driver as cuda_driver
-import pycuda.gl as cuda_gl
-
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-import numexpr as ne
-
-window = None     # Number of the glut window.
-rot_enabled = True
-
-#Theia Scene
-ts = None
-
-#RAY CASTING values
-c_tbrightness = 1.0
-c_tdensity = 0.05
-
-output_texture = None # pointer to offscreen render target
-
-leftButton = False
-middleButton = False
-rightButton = False
-
-#Screen width and height
-width = 1920
-height = 1080
-
-eyesep = 0.1
-
-(pbo, pycuda_pbo) = [None]*2
-(rpbo, rpycuda_pbo) = [None]*2
-
-#create two PBOs for stereoscopic rendering
-def create_PBO(w, h):
-    global pbo, pycuda_pbo, rpbo, rpycuda_pbo
-    num_texels = w*h
-    array = np.zeros((num_texels, 3),np.float32)
-
-    pbo = glGenBuffers(1)
-    glBindBuffer(GL_ARRAY_BUFFER, pbo)
-    glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
-    glBindBuffer(GL_ARRAY_BUFFER, 0)
-    pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
-
-    rpbo = glGenBuffers(1)
-    glBindBuffer(GL_ARRAY_BUFFER, rpbo)
-    glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
-    glBindBuffer(GL_ARRAY_BUFFER, 0)
-    rpycuda_pbo = cuda_gl.RegisteredBuffer(long(rpbo))
-
-def destroy_PBO():
-    global pbo, pycuda_pbo, rpbo, rpycuda_pbo
-    glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
-    glDeleteBuffers(1, long(pbo));
-    glBindBuffer(GL_ARRAY_BUFFER, 0)
-    pbo,pycuda_pbo = [None]*2
-
-    glBindBuffer(GL_ARRAY_BUFFER, long(rpbo))
-    glDeleteBuffers(1, long(rpbo));
-    glBindBuffer(GL_ARRAY_BUFFER, 0)
-    rpbo,rpycuda_pbo = [None]*2
-
-#consistent with C initPixelBuffer()
-def create_texture(w,h):
-    global output_texture
-    output_texture = glGenTextures(1)
-    glBindTexture(GL_TEXTURE_2D, output_texture)
-    # set basic parameters
-    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
-    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
-    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
-    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
-    # buffer data
-    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
-                 w, h, 0, GL_RGB, GL_FLOAT, None)
-
-#consistent with C initPixelBuffer()
-def destroy_texture():
-    global output_texture
-    glDeleteTextures(output_texture);
-    output_texture = None
-
-def init_gl(w = 512 , h = 512):
-    Width, Height = (w, h)
-
-    glClearColor(0.1, 0.1, 0.5, 1.0)
-    glDisable(GL_DEPTH_TEST)
-
-    #matrix functions
-    glViewport(0, 0, Width, Height)
-    glMatrixMode(GL_PROJECTION);
-    glLoadIdentity();
-
-    #matrix functions
-    gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
-    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-def resize(Width, Height):
-    global width, height
-    (width, height) = Width, Height
-    glViewport(0, 0, Width, Height)        # Reset The Current Viewport And Perspective Transformation
-    glMatrixMode(GL_PROJECTION)
-    glLoadIdentity()
-    gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
-
-
-def do_tick():
-    global time_of_last_titleupdate, frame_counter, frames_per_second
-    if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
-        frames_per_second = frame_counter                   # Save The FPS
-        frame_counter = 0  # Reset The FPS Counter
-        szTitle = "%d FPS" % (frames_per_second )
-        glutSetWindowTitle ( szTitle )
-        time_of_last_titleupdate = time.clock () * 1000.0
-    frame_counter += 1
-
-oldMousePos = [ 0, 0 ]
-def mouseButton( button, mode, x, y ):
-    """Callback function (mouse button pressed or released).
-
-    The current and old mouse positions are stored in
-    a global renderParam and a global list respectively"""
-
-    global leftButton, middleButton, rightButton, oldMousePos
-
-    if button == GLUT_LEFT_BUTTON:
-        if mode == GLUT_DOWN:
-            leftButton = True
-        else:
-            leftButton = False
-
-    if button == GLUT_MIDDLE_BUTTON:
-        if mode == GLUT_DOWN:
-            middleButton = True
-        else:
-            middleButton = False
-
-    if button == GLUT_RIGHT_BUTTON:
-        if mode == GLUT_DOWN:
-            rightButton = True
-        else:
-            rightButton = False
-
-    oldMousePos[0], oldMousePos[1] = x, y
-    glutPostRedisplay()
-
-def mouseMotion( x, y ):
-    """Callback function (mouse moved while button is pressed).
-
-    The current and old mouse positions are stored in
-    a global renderParam and a global list respectively.
-    The global translation vector is updated according to
-    the movement of the mouse pointer."""
-
-    global ts, leftButton, middleButton, rightButton, oldMousePos
-    deltaX = x - oldMousePos[0]
-    deltaY = y - oldMousePos[1]
-
-    factor = 0.001
-
-    if leftButton == True:
-        ts.camera.rotateX(-deltaY * factor)
-        ts.camera.rotateY(-deltaX * factor)
-    if middleButton == True:
-        ts.camera.translateX(deltaX * 2.0 * factor)
-        ts.camera.translateY(-deltaY * 2.0 * factor)
-    if rightButton == True:
-        ts.camera.scale += deltaY * factor
-
-    oldMousePos[0], oldMousePos[1] = x, y
-    glutPostRedisplay()
-
-def keyPressed(*args):
-    global c_tbrightness, c_tdensity, eyesep
-    # If escape is pressed, kill everything.
-    if args[0] == '\033':
-        print 'Closing..'
-        destroy_PBO()
-        destroy_texture()
-        exit()
-
-    #change the brightness of the scene
-    elif args[0] == ']':
-        c_tbrightness += 0.025
-    elif args[0] == '[':
-        c_tbrightness -= 0.025
-
-    #change the density scale
-    elif args[0] == ';':
-        c_tdensity -= 0.001
-    elif args[0] == '\'':
-        c_tdensity += 0.001 
-
-    #change the eye separation for the stereo effect
-    elif args[0] == '-':
-        eyesep -= 0.01
-    elif args[0] == '=':
-        eyesep += 0.01 
-
-def idle():
-    glutPostRedisplay()
-
-def display():
-    try:
-        #process left eye
-        process_image()
-        display_image()
-
-        #process right eye
-        process_image(eye = False)
-        display_image(eye = False)
-
-
-        glutSwapBuffers()
-
-    except:
-        from traceback import print_exc
-        print_exc()
-        from os import _exit
-        _exit(0)
-
-def process(eye = True):
-    """ Use PyCuda """
-    global ts, pycuda_pbo, rpycuda_pbo, eyesep, c_tbrightness, c_tdensity
-
-    ts.get_raycaster().set_opacity(c_tdensity)
-    ts.get_raycaster().set_brightness(c_tbrightness)
-
-    if (eye) :
-        ts.camera.translateX(-eyesep)
-        dest_mapping = pycuda_pbo.map()
-        (dev_ptr, size) = dest_mapping.device_ptr_and_size()
-        ts.get_raycaster().surface.device_ptr = dev_ptr
-        ts.update()
-        dest_mapping.unmap()
-        ts.camera.translateX(eyesep)
-    else :
-        ts.camera.translateX(eyesep)
-        dest_mapping = rpycuda_pbo.map()
-        (dev_ptr, size) = dest_mapping.device_ptr_and_size()
-        ts.get_raycaster().surface.device_ptr = dev_ptr
-        ts.update()
-        dest_mapping.unmap()
-        ts.camera.translateX(-eyesep)
-
-
-def process_image(eye = True):
-    """ copy image and process using CUDA """
-    global output_texture, pbo, rpbo, width, height
-    # run the Cuda kernel
-    process(eye)
-    # download texture from PBO
-    if (eye) : 
-        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
-        glBindTexture(GL_TEXTURE_2D, output_texture)
-
-        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
-                 width, height, 0,
-                 GL_RGB, GL_FLOAT, None)
-    else :
-        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(rpbo))
-        glBindTexture(GL_TEXTURE_2D, output_texture)
-
-        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
-                 width, height, 0,
-                 GL_RGB, GL_FLOAT, None)
-
-def display_image(eye = True):
-    """ render a screen sized quad """
-    global width, height
-    glDisable(GL_DEPTH_TEST)
-    glDisable(GL_LIGHTING)
-    glEnable(GL_TEXTURE_2D)
-    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
-
-    #matix functions should be moved
-    glMatrixMode(GL_PROJECTION)
-    glPushMatrix()
-    glLoadIdentity()
-    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
-    glMatrixMode( GL_MODELVIEW)
-    glLoadIdentity()
-    glViewport(0, 0, width, height)
-
-    if (eye) :
-        glDrawBuffer(GL_BACK_LEFT)
-    else :
-        glDrawBuffer(GL_BACK_RIGHT)
-
-    glBegin(GL_QUADS)
-    glTexCoord2f(0.0, 0.0)
-    glVertex3f(-1.0, -1.0, 0.5)
-    glTexCoord2f(1.0, 0.0)
-    glVertex3f(1.0, -1.0, 0.5)
-    glTexCoord2f(1.0, 1.0)
-    glVertex3f(1.0, 1.0, 0.5)
-    glTexCoord2f(0.0, 1.0)
-    glVertex3f(-1.0, 1.0, 0.5)
-    glEnd()
-
-    glMatrixMode(GL_PROJECTION)
-    glPopMatrix()
-
-    glDisable(GL_TEXTURE_2D)
-    glBindTexture(GL_TEXTURE_2D, 0)
-    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
-    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
-
-
-#note we may need to init cuda_gl here and pass it to camera
-def main():
-    global window, ts, width, height
-    (width, height) = (1920, 1080)
-
-    glutInit(sys.argv)
-    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH | GLUT_STEREO)
-    glutInitWindowSize(width, height)
-    glutInitWindowPosition(0, 0)
-    window = glutCreateWindow("Stereo Volume Rendering")
-
-
-    glutDisplayFunc(display)
-    glutIdleFunc(idle)
-    glutReshapeFunc(resize)
-    glutMouseFunc( mouseButton )
-    glutMotionFunc( mouseMotion )
-    glutKeyboardFunc(keyPressed)
-    init_gl(width, height)
-
-    # create texture for blitting to screen
-    create_texture(width, height)
-
-    import pycuda.gl.autoinit
-    import pycuda.gl
-    cuda_gl = pycuda.gl
-
-    create_PBO(width, height)
-    # ----- Load and Set Volume Data -----
-
-    density_grid = np.load("/home/bogert/dd150_log_densities.npy")
-
-    mi, ma = 21.5, 24.5
-    bins = 5000
-    tf = ColorTransferFunction( (mi, ma), bins)
-    tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
-
-    ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
-
-    ts.get_raycaster().set_sample_size(0.01)
-    ts.get_raycaster().set_max_samples(5000)
-
-    glutMainLoop()
-
-def scale_func(v, mi, ma):
-    return  np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
-
-# Print usage to console, and kick off the main loop to get it rolling.
-if __name__ == "__main__":
-    print "Hit ESC to quit; [ / ] adjust brightness, ; / ' adjust density, - / = adjust eye separation"
-    main()
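
The heart of the removed viewer is the per-eye render in process(): shift
the camera by -eyesep, render into the left PBO, undo the shift, then
mirror the offset for the right eye, with display_image() directing each
result to GL_BACK_LEFT or GL_BACK_RIGHT of a quad-buffered context. Reduced
to a sketch that ignores the PBO plumbing (the camera and raycaster method
names follow the deleted code above):

    def render_stereo_frame(ts, eyesep):
        """Render a left/right image pair by offsetting the camera along X."""
        # left eye: shift the camera, render, then restore it
        ts.camera.translateX(-eyesep)
        ts.update()
        left = ts.source.get_results()
        ts.camera.translateX(eyesep)

        # right eye: mirror the offset
        ts.camera.translateX(eyesep)
        ts.update()
        right = ts.source.get_results()
        ts.camera.translateX(-eyesep)

        return left, right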

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


