[yt-svn] commit/yt: 9 new changesets
commits-noreply at bitbucket.org
Mon Jul 21 13:33:27 PDT 2014
9 new commits in yt:
https://bitbucket.org/yt_analysis/yt/commits/959d7f69b3d6/
Changeset: 959d7f69b3d6
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:09:01
Summary: First pass at updating the cheat sheet.
Affected #: 1 file
diff -r 3c2909dfdf7ab1918119fff79f048dbef3c3caf3 -r 959d7f69b3d60ab7910eb6de36cec6701232d586 doc/cheatsheet.tex
--- a/doc/cheatsheet.tex
+++ b/doc/cheatsheet.tex
@@ -3,7 +3,7 @@
\usepackage{calc}
\usepackage{ifthen}
\usepackage[landscape]{geometry}
-\usepackage[colorlinks = true, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref}
+\usepackage[hyphens]{url}
% To make this come out properly in landscape mode, do one of the following
% 1.
@@ -101,9 +101,13 @@
Documentation \url{http://yt-project.org/doc/index.html}.
Need help? Start here \url{http://yt-project.org/doc/help/} and then
try the IRC chat room \url{http://yt-project.org/irc.html},
-or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}.
-{\bf Installing yt:} The easiest way to install yt is to use the installation script
-found on the yt homepage or the docs linked above.
+or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}. \\
+
+\subsection{Installing yt} The easiest way to install yt is to use the
+installation script found on the yt homepage or the docs linked above. If you
+already have python set up with \texttt{numpy}, \texttt{scipy},
+\texttt{matplotlib}, \texttt{h5py}, and \texttt{cython}, you can also use
+\texttt{pip install yt}.
\subsection{Command Line yt}
yt, and its convenience functions, are launched from a command line prompt.
@@ -118,9 +122,8 @@
\texttt{yt stats} {\it dataset} \textemdash\ Print stats of a dataset. \\
\texttt{yt update} \textemdash\ Update yt to most recent version.\\
\texttt{yt update --all} \textemdash\ Update yt and dependencies to most recent version. \\
-\texttt{yt instinfo} \textemdash\ yt installation information. \\
+\texttt{yt version} \textemdash\ yt installation information. \\
\texttt{yt notebook} \textemdash\ Run the IPython notebook server. \\
-\texttt{yt serve} ({\it dataset}) \textemdash\ Run yt-specific web GUI ({\it dataset} is optional).\\
\texttt{yt upload\_image} {\it image.png} \textemdash\ Upload PNG image to imgur.com. \\
\texttt{yt upload\_notebook} {\it notebook.nb} \textemdash\ Upload IPython notebook to hub.yt-project.org.\\
\texttt{yt plot} {\it dataset} \textemdash\ Create a set of images.\\
@@ -132,16 +135,8 @@
paste.yt-project.org. \\
\texttt{yt pastebin\_grab} {\it identifier} \textemdash\ Print content of pastebin to
STDOUT. \\
- \texttt{yt hub\_register} \textemdash\ Register with
-hub.yt-project.org. \\
-\texttt{yt hub\_submit} \textemdash\ Submit hg repo to
-hub.yt-project.org. \\
-\texttt{yt bootstrap\_dev} \textemdash\ Bootstrap a yt
-development environment. \\
\texttt{yt bugreport} \textemdash\ Report a yt bug. \\
\texttt{yt hop} {\it dataset} \textemdash\ Run hop on a dataset. \\
-\texttt{yt rpdb} \textemdash\ Connect to running rpd
- session.
\subsection{yt Imports}
In order to use yt, Python must load the relevant yt modules into memory.
@@ -149,37 +144,40 @@
used as part of a script.
\newlength{\MyLen}
\settowidth{\MyLen}{\texttt{letterpaper}/\texttt{a4paper} \ }
-\texttt{from yt.mods import \textasteriskcentered} \textemdash\
-Load base yt modules. \\
+\texttt{import yt} \textemdash\
+Load yt. \\
\texttt{from yt.config import ytcfg} \textemdash\
Used to set yt configuration options.
- If used, must be called before importing any other module.\\
-\texttt{from yt.analysis\_modules.api import \textasteriskcentered} \textemdash\
-Load all yt analysis modules. \\
+If used, must be called before importing any other module.\\
\texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered} \textemdash\
Load halo finding modules. Other modules
are loaded in a similar way by swapping the
{\em emphasized} text.
See the \textbf{Analysis Modules} section for a listing and short descriptions of each.
-\subsection{Numpy Arrays}
-Simulation data in yt is returned in Numpy arrays. The Numpy package provides a wealth of built-in
-functions that operate on Numpy arrays. Here is a very brief list of some useful ones.
-Please see \url{http://docs.scipy.org/doc/numpy/reference/} for the full
-numpy documentation.\\
-\settowidth{\MyLen}{\texttt{multicol} }
+\subsection{YTArray}
+Simulation data in yt is returned as a YTArray. YTArray is a numpy array that
+has unit data attached to it and can automatically handle unit conversions and
+detect unit errors. Just like a numpy array, YTArray provides a wealth of
+built-in functions to calculate properties of the data in the array. Here is a
+very brief list of some useful ones.
+\settowidth{\MyLen}{\texttt{multicol} }\\
+\texttt{v = a.in\_cgs()} \textemdash\ Return the array in CGS units. \\
+\texttt{v = a.in\_units('Msun/pc**3')} \textemdash\ Return the array in solar masses per cubic parsec. \\
\texttt{v = a.max(), a.min()} \textemdash\ Return maximum, minimum of \texttt{a}. \\
-\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max,
+\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max,
min value of \texttt{a}.\\
\texttt{v = a[}{\it index}\texttt{]} \textemdash\ Select a single value from \texttt{a} at location {\it index}.\\
-\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between
+\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from
+\texttt{a} between
locations {\it i} to {\it j-1} saved to a new Numpy array \texttt{b} with length {\it j-i}. \\
-\texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array \texttt{sel}, of the same shape as \texttt{a},
+\texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array
+\texttt{sel}, of the same shape as \texttt{a},
that marks which values of \texttt{a > const}. Other operators (e.g. \textless, !=, \%) work as well.\\
-\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of elements from \texttt{a} that correspond to elements of \texttt{sel}
+\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of
+elements from \texttt{a} that correspond to elements of \texttt{sel}
that are {\it True}. In the above example \texttt{b} would be all elements of \texttt{a} that are greater than \texttt{const}.\\
-\texttt{a.dump({\it filename.dat})} \textemdash\ Save \texttt{a} to the binary file {\it filename.dat}.\\
-\texttt{a = np.load({\it filename.dat})} \textemdash\ Load the contents of {\it filename.dat} into \texttt{a}.
+\texttt{a.write\_hdf5({\it filename.h5})} \textemdash\ Save \texttt{a} to the hdf5 file {\it filename.h5}.\\
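[Editor's note: for quick reference, the unit behavior summarized above looks like this in practice; a minimal sketch, with made-up array values:]

    import numpy as np
    from yt import YTArray

    a = YTArray(np.random.random(10), 'g/cm**3')  # numpy array with units attached
    v = a.in_cgs()                     # convert to CGS units
    v = a.in_units('Msun/pc**3')       # convert to solar masses per cubic parsec
    print a.max(), a.argmax()          # numpy-style reductions still work
    a.write_hdf5('my_array.h5')        # save to an hdf5 file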
\subsection{IPython Tips}
\settowidth{\MyLen}{\texttt{multicol} }
@@ -196,6 +194,7 @@
\texttt{\%hist} \textemdash\ Print recent command history.\\
\texttt{\%quickref} \textemdash\ Print IPython quick reference.\\
\texttt{\%pdb} \textemdash\ Automatically enter the Python debugger at an exception.\\
+\texttt{\%debug} \textemdash\ Drop into a debugger at the location of the last unhandled exception. \\
\texttt{\%time, \%timeit} \textemdash\ Find running time of expressions for benchmarking.\\
\texttt{\%lsmagic} \textemdash\ List all available IPython magics. Hint: \texttt{?} works with magics.\\
@@ -208,10 +207,10 @@
After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
that define a region of simulation space from which data should be selected.
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
+\texttt{ds = yt.load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
\texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
-\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
-numpy array \texttt{a}. Similarly for other data containers.\\
+\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Copies the contents of {\it field} into the
+YTArray \texttt{a}. Similarly for other data containers.\\
\texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
\texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
in the snapshot. \\
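[Editor's note: a minimal sketch of the loading and field-access pattern described in this hunk; the dataset path is illustrative, borrowed from examples elsewhere in these docs:]

    import yt

    ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
    dd = ds.all_data()                 # data container covering the full domain
    a = dd['density']                  # copies the field into a YTArray
    print ds.field_list                # fields present on disk
    print ds.derived_field_list       # fields yt can compute from them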
@@ -231,45 +230,29 @@
direction set by {\it normal},with total length
2$\times${\it height} and with radius {\it radius}. \\
- \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
- container. {\it constructor} is a list of pre-defined non-boolean
- data containers with nested boolean logic using the
- ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
- {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
- by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
-
\texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
\texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
-\subsection{Defining New Fields \& Quantities}
-\texttt{yt} expects on-disk fields, fields generated on-demand and in-memory. Quantities reduce a field (e.g. "Density") defined over an object (e.g. "sphere") to get a single value (e.g. "Mass"). \\
-\texttt{def \_MetalMassMsun({\it field},{\it data})}\\
-\texttt{\hspace{4 mm} return data["Metallicity"]*data["CellMassMsun"]}\\
-\texttt{add\_field("MetalMassMsun",function=\_MetalMassMsun)}\\
-Define a new quantity; note the first function operates on grids and data objects and the second on the results of the first. \\
-\texttt{def \_TotalMass(data): }\\
-\texttt{\hspace{4 mm} baryon\_mass = data["CellMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} particle\_mass = data["ParticleMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} return baryon\_mass, particle\_mass}\\
-\texttt{def \_combTotalMass(data, baryon\_mass, particle\_mass):}\\
-\texttt{\hspace{4 mm} return baryon\_mass.sum() + particle\_mass.sum()}\\
-\texttt{add\_quantity("TotalMass", function=\_TotalMass,}\\
-\texttt{\hspace{4 mm} combine\_function=\_combTotalMass, n\_ret = 2)}\\
-
-
+\subsection{Defining New Fields}
+\texttt{yt} supports on-disk fields, fields generated on-demand, and in-memory fields.
+Fields can either be created before a dataset is loaded using \texttt{add\_field}:
+\texttt{def \_metal\_mass({\it field},{\it data})}\\
+\texttt{\hspace{4 mm} return data["metallicity"]*data["cell\_mass"]}\\
+\texttt{add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
+Or added to an existing dataset using \texttt{ds.add\_field}:
+\texttt{ds.add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
\subsection{Slices and Projections}
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
-perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with
-{\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
+\texttt{slc = yt.SlicePlot(ds, {\it axis or normal vector}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+perpendicular to {\it axis} (specified as 'x', 'y', or 'z'), or to a normal vector for an off-axis slice, of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with
+{\it width} in code units or a (value, unit) tuple. Hint: try {\it yt.SlicePlot?} in IPython to see additional parameters.\\
\texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
\texttt{.save()} works similarly for the commands below.\\
-\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = yt.ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = yt.OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
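[Editor's note: to make the new calls concrete, a short sketch; the dataset path and plot parameters are illustrative:]

    import yt

    ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')

    # on-axis slice; 'z' can be replaced by a normal vector for off-axis
    slc = yt.SlicePlot(ds, 'z', 'density', center='max', width=(15, 'kpc'))
    slc.save('galaxy')                 # png name gets a descriptive suffix

    prj = yt.ProjectionPlot(ds, 'x', 'density', weight_field='density')
    prj.save()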
\subsection{Plot Annotations}
\settowidth{\MyLen}{\texttt{multicol} }
@@ -299,51 +282,37 @@
The \texttt{my\_plugins.py} file \textemdash\ Add functions, derived fields, constants, or other commonly-used Python code to yt.
-
-
\subsection{Analysis Modules}
\settowidth{\MyLen}{\texttt{multicol}}
The import name for each module is listed at the end of each description (see \textbf{yt Imports}).
\texttt{Absorption Spectrum} \textemdash\ (\texttt{absorption\_spectrum}). \\
\texttt{Clump Finder} \textemdash\ Find clumps defined by density thresholds (\texttt{level\_sets}). \\
-\texttt{Coordinate Transformation} \textemdash\ (\texttt{coordinate\_transformation}). \\
\texttt{Halo Finding} \textemdash\ Locate halos of dark matter particles (\texttt{halo\_finding}). \\
-\texttt{Halo Mass Function} \textemdash\ Find halo mass functions from data and from theory (\texttt{halo\_mass\_function}). \\
-\texttt{Halo Profiling} \textemdash\ Profile and project multiple halos (\texttt{halo\_profiler}). \\
-\texttt{Halo Merger Tree} \textemdash\ Create a database of halo mergers (\texttt{halo\_merger\_tree}). \\
\texttt{Light Cone Generator} \textemdash\ Stitch datasets together to perform analysis over cosmological volumes. \\
\texttt{Light Ray Generator} \textemdash\ Analyze the path of light rays.\\
-\texttt{Radial Column Density} \textemdash\ Calculate column densities around a point (\texttt{radial\_column\_density}). \\
\texttt{Rockstar Halo Finding} \textemdash\ Locate halos of dark matter using the Rockstar halo finder (\texttt{halo\_finding.rockstar}). \\
\texttt{Star Particle Analysis} \textemdash\ Analyze star formation history and assemble spectra (\texttt{star\_analysis}). \\
\texttt{Sunrise Exporter} \textemdash\ Export data to the sunrise visualization format (\texttt{sunrise\_export}). \\
-\texttt{Two Point Functions} \textemdash\ Two point correlations (\texttt{two\_point\_functions}). \\
\subsection{Parallel Analysis}
-\settowidth{\MyLen}{\texttt{multicol}}
-Nearly all of yt is parallelized using MPI.
-The {\it mpi4py} package must be installed for parallelism in yt.
-To install {\it pip install mpi4py} on the command line usually works.
+\settowidth{\MyLen}{\texttt{multicol}}
+Nearly all of yt is parallelized using
+MPI. The {\it mpi4py} package must be installed for parallelism in yt. To
+install it, running {\it pip install mpi4py} on the command line usually works.
Execute python in parallel similar to this:\\
-{\it mpirun -n 12 python script.py --parallel}\\
-This command may differ for each system on which you use yt;
-please consult the system documentation for details on how to run parallel applications.
+{\it mpirun -n 12 python script.py}\\
+The file \texttt{script.py} must call \texttt{yt.enable\_parallelism()} to
+turn on yt's parallelism. If this doesn't happen, all cores will execute the
+same serial yt script. This command may differ for each system on which you use
+yt; please consult the system documentation for details on how to run parallel
+applications.
-\texttt{from yt.pmods import *} \textemdash\ Load yt faster when in parallel.
-This replaces the usual \texttt{from yt.mods import *}.\\
\texttt{parallel\_objects()} \textemdash\ A way to parallelize analysis over objects
(such as halos or clumps).\\
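[Editor's note: a hedged sketch of a complete parallel script following the recipe above; the dataset glob is illustrative, and it would be run with something like mpirun -n 12 python script.py:]

    import yt
    yt.enable_parallelism()            # without this every core runs the same serial script

    ts = yt.load('Enzo_64/DD????/data????')
    for ds in ts.piter():              # datasets are farmed out to MPI tasks
        dd = ds.all_data()
        print dd.quantities.extrema('density')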
-\subsection{Pre-Installed Versions}
-\settowidth{\MyLen}{\texttt{multicol}}
-yt is pre-installed on several supercomputer systems.
-
-\textbf{NICS Kraken} \textemdash\ {\it module load yt} \\
-
-
\subsection{Mercurial}
\settowidth{\MyLen}{\texttt{multicol}}
Please see \url{http://mercurial.selenic.com/} for the full Mercurial documentation.
@@ -365,8 +334,7 @@
\subsection{FAQ}
\settowidth{\MyLen}{\texttt{multicol}}
-\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{ds.index} before this command. \\
+\texttt{slc.set\_log('field', False)} \textemdash\ When plotting \texttt{field}, use linear scaling instead of log scaling.
%\rule{0.3\linewidth}{0.25pt}
https://bitbucket.org/yt_analysis/yt/commits/7effaf0eae9e/
Changeset: 7effaf0eae9e
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:15:38
Summary: Replacing "from yt.mods import *" with "import yt" in the docs.
Affected #: 31 files
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/_dq_docstrings.inc
--- a/doc/source/analyzing/_dq_docstrings.inc
+++ b/doc/source/analyzing/_dq_docstrings.inc
@@ -1,43 +1,20 @@
-.. function:: Action(action, combine_action, filter=None):
+.. function:: angular_momentum_vector()
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._Action`.)
- This function evals the string given by the action arg and uses
- the function thrown with the combine_action to combine the values.
- A filter can be thrown to be evaled to short-circuit the calculation
- if some criterion is not met.
- :param action: a string containing the desired action to be evaled.
- :param combine_action: the function used to combine the answers when done lazily.
- :param filter: a string to be evaled to serve as a data filter.
-
-
-
-.. function:: AngularMomentumVector():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._AngularMomentumVector`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.AngularMomentumVector`.)
This function returns the mass-weighted average angular momentum vector.
+.. function:: bulk_velocity():
-.. function:: BaryonSpinParameter():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._BaryonSpinParameter`.)
- This function returns the spin parameter for the baryons, but it uses
- the particles in calculating enclosed mass.
-
-
-
-.. function:: BulkVelocity():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._BulkVelocity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.BulkVelocity`.)
This function returns the mass-weighted average velocity in the object.
+.. function:: center_of_mass(use_cells=True, use_particles=False):
-.. function:: CenterOfMass(use_cells=True, use_particles=False):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._CenterOfMass`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.CenterOfMass`.)
This function returns the location of the center
 of mass. By default, it is computed from the *non-particle* data in the object.
@@ -51,112 +28,64 @@
-.. function:: Extrema(fields, non_zero=False, filter=None):
+.. function:: extrema(fields, non_zero=False, filter=None):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._Extrema`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.Extrema`.)
This function returns the extrema of a set of fields
:param fields: A field name, or a list of field names
:param filter: a string to be evaled to serve as a data filter.
+.. function:: max_location(field):
-.. function:: IsBound(truncate=True, include_thermal_energy=False, treecode=True, opening_angle=1.0, periodic_test=False, include_particles=True):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._IsBound`.)
- This returns whether or not the object is gravitationally bound. If this
- returns a value greater than one, it is bound, and otherwise not.
-
- Parameters
- ----------
- truncate : Bool
- Should the calculation stop once the ratio of
- gravitational:kinetic is 1.0?
- include_thermal_energy : Bool
- Should we add the energy from ThermalEnergy
- on to the kinetic energy to calculate
- binding energy?
- treecode : Bool
- Whether or not to use the treecode.
- opening_angle : Float
- The maximal angle a remote node may subtend in order
- for the treecode method of mass conglomeration may be
- used to calculate the potential between masses.
- periodic_test : Bool
- Used for testing the periodic adjustment machinery
- of this derived quantity.
- include_particles : Bool
- Should we add the mass contribution of particles
- to calculate binding energy?
-
- Examples
- --------
- >>> sp.quantities["IsBound"](truncate=False,
- ... include_thermal_energy=True, treecode=False, opening_angle=2.0)
- 0.32493
-
-
-
-.. function:: MaxLocation(field):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._MaxLocation`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.max_location`.)
This function returns the location of the maximum of a set
of fields.
+.. function:: min_location(field):
-.. function:: MinLocation(field):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._MinLocation`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.MinLocation`.)
This function returns the location of the minimum of a set
of fields.
-.. function:: ParticleSpinParameter():
+.. function:: spin_parameter(use_gas=True, use_particles=True):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._ParticleSpinParameter`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.SpinParameter`.)
This function returns the spin parameter for the baryons, but it uses
the particles in calculating enclosed mass.
+.. function:: total_mass():
-.. function:: StarAngularMomentumVector():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._StarAngularMomentumVector`.)
- This function returns the mass-weighted average angular momentum vector
- for stars.
-
-
-
-.. function:: TotalMass():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalMass`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalMass`.)
This function takes no arguments and returns the sum of cell masses and
particle masses in the object.
+.. function:: total_quantity(fields):
-.. function:: TotalQuantity(fields):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalQuantity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalQuantity`.)
This function sums up a given field over the entire region
:param fields: The fields to sum up
-.. function:: WeightedAverageQuantity(field, weight):
+.. function:: weighted_average_quantity(field, weight):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedAverageQuantity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedAverageQuantity`.)
This function returns an averaged quantity.
:param field: The field to average
:param weight: The field to weight by
-.. function:: WeightedVariance(field, weight):
+.. function:: weighted_variance(field, weight):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedVariance`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedVariance`.)
This function returns the variance of a field.
:param field: The target field
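[Editor's note: since every derived quantity in this file moves from CamelCase dict lookup to a snake_case method, a small sketch of the new call style; the dataset path is illustrative:]

    import yt

    ds = yt.load('Enzo_64/DD0043/data0043')
    ad = ds.all_data()
    print ad.quantities.extrema('density')
    print ad.quantities.angular_momentum_vector()
    print ad.quantities.weighted_average_quantity('temperature', 'cell_mass')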
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
@@ -1,6 +1,7 @@
{
"metadata": {
- "name": ""
+ "name": "",
+ "signature": "sha256:e792ad188f59161aa3ff4cdbb32cad75142b2e6b4062dfa1d8c12b3172fcf4e9"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,7 +35,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from yt.analysis_modules.halo_analysis.api import *\n",
"import tempfile\n",
"import shutil\n",
@@ -44,7 +45,7 @@
"tmpdir = tempfile.mkdtemp()\n",
"\n",
"# Load the data set with the full simulation information\n",
- "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
+ "data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')"
],
"language": "python",
"metadata": {},
@@ -62,7 +63,7 @@
"collapsed": false,
"input": [
"# Load the rockstar data files\n",
- "halos_ds = load('rockstar_halos/halos_0.0.bin')"
+ "halos_ds = yt.load('rockstar_halos/halos_0.0.bin')"
],
"language": "python",
"metadata": {},
@@ -407,4 +408,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:ba8b6a53571695ae1d0c236ad43875823746e979a329a9d35ab0a8b899cebbba"
+ "signature": "sha256:56a8d72735e3cc428ff04b241d4b2ce6f653019818c6fc7a4148840d99030c85"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -19,8 +19,9 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
+ "import numpy as np\n",
+ "\n",
"from yt.analysis_modules.ppv_cube.api import PPVCube"
],
"language": "python",
@@ -122,7 +123,7 @@
"data[\"velocity_y\"] = (vely, \"km/s\")\n",
"data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
"bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
- "ds = load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+ "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
],
"language": "python",
"metadata": {},
@@ -139,7 +140,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
],
"language": "python",
"metadata": {},
@@ -222,7 +223,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(\"cube.fits\")"
+ "ds = yt.load(\"cube.fits\")"
],
"language": "python",
"metadata": {},
@@ -233,7 +234,7 @@
"collapsed": false,
"input": [
"# Specifying no center gives us the center slice\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.show()"
],
"language": "python",
@@ -248,7 +249,7 @@
"# Picking different velocities for the slices\n",
"new_center = ds.domain_center\n",
"new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -260,7 +261,7 @@
"collapsed": false,
"input": [
"new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -272,7 +273,7 @@
"collapsed": false,
"input": [
"new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -290,7 +291,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
+ "prj = yt.ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
"prj.set_log(\"density\", True)\n",
"prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
"prj.show()"
@@ -303,4 +304,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
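[Editor's note: for context, the in-memory loading used throughout this notebook boils down to roughly the following sketch; the array contents are made up:]

    import yt
    import numpy as np

    data = dict(density=(np.random.random((64, 64, 64)), 'g/cm**3'))
    bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])
    ds = yt.load_uniform_grid(data, (64, 64, 64), length_unit='Mpc', bbox=bbox)
    slc = yt.SlicePlot(ds, 'z', 'density')
    slc.save()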
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ b/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:e4b5ea69687eb79452c16385b3a6f795b4572518dfa7f9d8a8125bd75b5fea85"
+ "signature": "sha256:5ab80c6b33a115cb88c36fde8659434d14a852dd43b0b419f2bb0c04acf66278"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -20,7 +20,7 @@
"collapsed": false,
"input": [
"%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
"import glob\n",
"from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
"from yt.config import ytcfg\n",
@@ -77,7 +77,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(my_fns[0])\n",
+ "ds = yt.load(my_fns[0])\n",
"dd = ds.all_data()\n",
"indices = dd[\"particle_index\"].astype(\"int\")\n",
"print indices"
@@ -205,8 +205,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
- "slc = SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
+ "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+ "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
"slc.show()"
],
"language": "python",
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/analysis_modules/SZ_projections.ipynb
--- a/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
+++ b/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:4745a15abb6512547b50280b92c22567f89255189fd968ca706ef7c39d48024f"
+ "signature": "sha256:e4db171b795d155870280ddbe8986f55f9a94ffb10783abf9d4cc2de3ec24894"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -89,11 +89,10 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
"from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection\n",
"\n",
- "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+ "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
"\n",
"freqs = [90.,180.,240.]\n",
"szprj = SZProjection(ds, freqs)"
@@ -218,14 +217,6 @@
"including coordinate information in kpc. The optional keyword\n",
"`clobber` allows a previous file to be overwritten. \n"
]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [],
- "language": "python",
- "metadata": {},
- "outputs": []
}
],
"metadata": {}
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -87,7 +87,7 @@
ds = load("DD0000")
sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
- ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+ ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
treecode=True, opening_angle=2.0)
This example will accomplish the same as the above, but will use the full
@@ -100,7 +100,7 @@
ds = load("DD0000")
sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
- ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+ ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
treecode=False)
Here the treecode method is specified for clump finding (this is default).
@@ -109,7 +109,7 @@
.. code-block:: python
- function_name = 'self.data.quantities["IsBound"](truncate=True, \
+ function_name = 'self.data.quantities.is_bound(truncate=True, \
include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
master_clump = amods.level_sets.Clump(data_source, None, field,
function=function_name)
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -31,35 +31,15 @@
derived fields. If it finds nothing there, it then defaults to examining the
global set of derived fields.
-To add a field to the list of fields that you know should exist in a particular
-frontend, call the function ``add_frontend_field`` where you replace
-``frontend`` with the name of the frontend. Below is an example for adding
-``Cooling_Time`` to Enzo:
-
-.. code-block:: python
-
- add_enzo_field("Cooling_Time", units=r"\rm{s}",
- function=NullFunc,
- validators=ValidateDataField("Cooling_Time"))
-
-Note that we used the ``NullFunc`` function here. To add a derived field,
-which is not expected to necessarily exist on disk, use the standard
-construction:
+To add a derived field, which is not expected to necessarily exist on disk, use
+the standard construction:
.. code-block:: python
add_field("thermal_energy", function=_ThermalEnergy,
- units=r"\rm{ergs}/\rm{g}")
+ units="ergs/g")
-To add a translation from one field to another, use the ``TranslationFunc`` as
-the function for reading the field. For instance, this code appears in the Nyx
-frontend:
-
-.. code-block:: python
-
- add_field("density", function=TranslationFunc("density"), take_log=True,
- units=r"\rm{g} / \rm{cm}^3",
- projected_units =r"\rm{g} / \rm{cm}^2")
+where ``_ThermalEnergy`` is a Python function that defines the field.
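[Editor's note: a self-contained sketch of that standard construction; the field definition itself is illustrative and assumes a hypothetical 'thermal_energy_density' field exists on the dataset:]

    import yt

    def _thermal_energy(field, data):
        # illustrative only: specific thermal energy from a hypothetical
        # 'thermal_energy_density' on-disk field
        return data['thermal_energy_density'] / data['density']

    yt.add_field('thermal_energy', function=_thermal_energy, units='ergs/g')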
.. _accessing-fields:
@@ -105,7 +85,7 @@
.. code-block:: python
- ds = load("my_data")
+ ds = yt.load("my_data")
print ds.field_list
print ds.derived_field_list
@@ -115,7 +95,7 @@
.. code-block:: python
- ds = load("my_data")
+ ds = yt.load("my_data")
print ds.field_info["pressure"].get_units()
This is a fast way to examine the units of a given field, and additionally you
@@ -141,8 +121,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("RedshiftOutput0005")
+ import yt
+ ds = yt.load("RedshiftOutput0005")
reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
.. include:: _obj_docstrings.inc
@@ -199,7 +179,7 @@
ds = load("my_data")
dd = ds.all_data()
- dd.quantities["AngularMomentumVector"]()
+ dd.quantities.angular_momentum_vector()
The following quantities are available via the ``quantities`` interface.
@@ -246,8 +226,8 @@
.. notebook-cell::
- from yt.mods import *
- ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ import yt
+ ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
total_mass = ad.quantities.total_quantity('cell_mass')
# now select only gas with 1e5 K < T < 1e7 K.
@@ -268,12 +248,12 @@
.. python-script::
- from yt.mods import *
- ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ import yt
+ ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
new_region = ad.cut_region(['obj["density"] > 1e-29'])
- plot = ProjectionPlot(ds, "x", "density", weight_field="density",
- data_source=new_region)
+ plot = yt.ProjectionPlot(ds, "x", "density", weight_field="density",
+ data_source=new_region)
plot.save()
.. _extracting-connected-sets:
@@ -311,10 +291,6 @@
Extracting Isocontour Information
---------------------------------
-.. versionadded:: 2.3
-
-.. warning::
- This is still beta!
``yt`` contains an implementation of the `Marching Cubes
<http://en.wikipedia.org/wiki/Marching_cubes>`_ algorithm, which can operate on
@@ -378,8 +354,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("my_data")
+ import yt
+ ds = yt.load("my_data")
sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
ds.save_object(sp, "sphere_to_analyze_later")
@@ -390,9 +366,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("my_data")
+ ds = yt.load("my_data")
sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
Additionally, if we want to store the object independent of the ``.yt`` file,
@@ -400,9 +376,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("my_data")
+ ds = yt.load("my_data")
sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
sp.save_object("my_sphere", "my_storage_file.cpkl")
@@ -416,10 +392,10 @@
.. code-block:: python
- from yt.mods import *
+ import yt
import shelve
- ds = load("my_data") # not necessary if storeparameterfiles is on
+ ds = yt.load("my_data") # not necessary if storeparameterfiles is on
obj_file = shelve.open("my_storage_file.cpkl")
ds, obj = obj_file["my_sphere"]
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -33,27 +33,23 @@
creating your own, and these operators can be applied either to datasets on the
whole or to subregions of individual datasets.
-The simplest mechanism for creating a ``DatasetSeries`` object is to use the
-class method
-:meth:`~yt.data_objects.time_series.DatasetSeries.from_filenames`. This
-method accepts a list of strings that can be supplied to ``load``. For
-example:
+The simplest mechanism for creating a ``DatasetSeries`` object is to pass a glob
+pattern to the ``yt.load`` function.
.. code-block:: python
- from yt.mods import *
- filenames = ["DD0030/output_0030", "DD0040/output_0040"]
- ts = DatasetSeries.from_filenames(filenames)
+ import yt
+ ts = yt.load("DD????/DD????")
-This will create a new time series, populated with the output files ``DD0030``
-and ``DD0040``. This object, here called ``ts``, can now be analyzed in bulk.
-Alternately, you can specify a pattern that is supplied to :mod:`glob`, and
-those filenames will be sorted and returned. Here is an example:
+This will create a new time series, populated with all datasets that match the
+pattern "DD" followed by four digits. This object, here called ``ts``, can now
+be analyzed in bulk. Alternately, you can specify an already formatted list of
+filenames directly to the ``DatasetSeries`` initializer:
.. code-block:: python
- from yt.mods import *
- ts = DatasetSeries.from_filenames("*/*.index")
+ import yt
+   ts = yt.DatasetSeries(["DD0030/DD0030", "DD0040/DD0040"])
Analyzing Each Dataset In Sequence
----------------------------------
@@ -64,8 +60,8 @@
.. code-block:: python
- from yt.mods import *
- ts = DatasetSeries.from_filenames("*/*.index")
+ import yt
+ ts = yt.load("*/*.index")
for ds in ts:
print ds.current_time
@@ -77,87 +73,6 @@
* The cookbook recipe for :ref:`cookbook-time-series-analysis`
* :class:`~yt.data_objects.time_series.DatasetSeries`
-Prepared Time Series Analysis
------------------------------
-
-A few handy functions for treating time series data as a uniform, single object
-are also available.
-
-.. warning:: The future of these functions is uncertain: they may be removed in
- the future!
-
-Simple Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~
-
-The available tasks that come built-in can be seen by looking at the output of
-``ts.tasks.keys()``. For instance, one of the simplest ones is the
-``MaxValue`` task. We can execute this task by calling it with the field whose
-maximum value we want to evaluate:
-
-.. code-block:: python
-
- from yt.mods import *
- ts = TimeSeries.from_filenames("*/*.index")
- max_rho = ts.tasks["MaximumValue"]("density")
-
-When we call the task, the time series object executes the task on each
-component dataset. The results are then returned to the user. More
-complex, multi-task evaluations can be conducted by using the
-:meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
-list of analysis tasks.
-
-Analysis Tasks Applied to Objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Just as some tasks can be applied to datasets as a whole, one can also apply
-the creation of objects to datasets. This means that you are able to construct
-a generalized "sphere" operator that will be created inside all datasets, which
-you can then calculate derived quantities (see :ref:`derived-quantities`) from.
-
-For instance, imagine that you wanted to create a sphere that is centered on
-the most dense point in the simulation and that is 1 pc in radius, and then
-calculate the angular momentum vector on this sphere. You could do that with
-this script:
-
-.. code-block:: python
-
- from yt.mods import *
- ts = TimeSeries.from_filenames("*/*.index")
- sphere = ts.sphere("max", (1.0, "pc"))
- L_vecs = sphere.quantities["AngularMomentumVector"]()
-
-Note that we have specified the units differently than usual -- the time series
-objects allow units as a tuple, so that in cases where units may change over
-the course of several outputs they are correctly set at all times. This script
-simply sets up the time series object, creates a sphere, and then runs
-quantities on it. It is designed to look very similar to the code that would
-conduct this analysis on a single output.
-
-All of the objects listed in :ref:`available-objects` are made available in
-the same manner as "sphere" was used above.
-
-Creating Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~~~
-
-If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and ds and then decorate it with
-analysis_task. Here we have done so:
-
-.. code-block:: python
-
- @analysis_task(('particle_type',))
- def MassInParticleType(params, ds):
- dd = ds.all_data()
- ptype = (dd["particle_type"] == params.particle_type)
- return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
-
- ms = ts.tasks["MassInParticleType"](4)
- print ms
-
-This allows you to create your own analysis tasks that will be then available
-to time series data objects. Since ``DatasetSeries`` objects iterate over
-filenames in parallel by default, this allows for transparent parallelization.
-
.. _analyzing-an-entire-simulation:
Analyzing an Entire Simulation
@@ -175,9 +90,9 @@
.. code-block:: python
- from yt.mods import *
- my_sim = simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
- find_outputs=False)
+ import yt
+ my_sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
+ find_outputs=False)
Then, create a ``DatasetSeries`` object with the :meth:`get_time_series`
function. With no additional keywords, the time series will include every
@@ -198,7 +113,7 @@
for ds in my_sim.piter():
all_data = ds.all_data()
- print all_data.quantities['Extrema']('density')
+ print all_data.quantities.extrema('density')
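[Editor's note: for reference, the complete pattern behind this snippet reads roughly as follows; a sketch using the parameter file from the example above:]

    import yt

    my_sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
                           find_outputs=False)
    my_sim.get_time_series()           # no keywords: include every output
    for ds in my_sim.piter():
        all_data = ds.all_data()
        print all_data.quantities.extrema('density')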
Additional keywords can be given to :meth:`get_time_series` to select a subset
of the total data:
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
--- a/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
+++ b/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:882b31591c60bfe6ad4cb0f8842953d2e94fb8a12ce742be831a65642eea72c9"
+ "signature": "sha256:2faff88abc93fe2bc9d91467db786a8b69ec3ece6783a7055942ecc7c47a0817"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,8 +34,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "import yt\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
" \n",
"dd = ds.all_data()\n",
"maxval, maxloc = ds.find_max('density')\n",
@@ -324,6 +324,8 @@
"collapsed": false,
"input": [
"from astropy import units as u\n",
+ "from yt import YTQuantity, YTArray\n",
+ "\n",
"x = 42.0 * u.meter\n",
"y = YTQuantity.from_astropy(x) "
],
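[Editor's note: the conversion in this cell makes the astropy quantity fully unit-aware on the yt side; a tiny sketch extending it, with values taken from the cell above:]

    from astropy import units as u
    from yt import YTQuantity

    x = 42.0 * u.meter
    y = YTQuantity.from_astropy(x)
    print y.in_units('km')             # 4.2e-2 km
    print y + YTQuantity(100.0, 'cm')  # unit-aware arithmetic: 43.0 m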
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
--- a/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
+++ b/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:242d7005d45a82744713bfe6389e49d47f39b524d1e7fcbf5ceb2e65dc473e68"
+ "signature": "sha256:8ba193cc3867e2185133bbf3952bd5834e6c63993208635c71cf55fa6f27b491"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,8 +34,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('Enzo_64/DD0043/data0043')"
+ "import yt\n",
+ "ds = yt.load('Enzo_64/DD0043/data0043')"
],
"language": "python",
"metadata": {},
@@ -208,7 +208,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
+ "slc = yt.SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
"slc.set_figure_size(6)"
],
"language": "python",
@@ -234,6 +234,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
+ "from yt import YTQuantity\n",
+ "\n",
"a = YTQuantity(3, 'cm')\n",
"\n",
"print a.units.registry.keys()"
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
--- a/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
+++ b/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:448380e74a746d19dc1eecfe222c0e798a87a4ac285e4f50e2598316086c5ee8"
+ "signature": "sha256:273a23e3a20b277a9e5ea7117b48cf19013c331d0893e6e9d21896e97f59aceb"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -22,9 +22,9 @@
"collapsed": false,
"input": [
"# A high redshift output from z ~ 8\n",
- "from yt.mods import *\n",
+ "import yt\n",
"\n",
- "ds1 = load('Enzo_64/DD0002/data0002')\n",
+ "ds1 = yt.load('Enzo_64/DD0002/data0002')\n",
"print \"z = %s\" % ds1.current_redshift\n",
"print \"Internal length units = %s\" % ds1.length_unit\n",
"print \"Internal length units in cgs = %s\" % ds1.length_unit.in_cgs()"
@@ -38,7 +38,7 @@
"collapsed": false,
"input": [
"# A low redshift output from z ~ 0\n",
- "ds2 = load('Enzo_64/DD0043/data0043')\n",
+ "ds2 = yt.load('Enzo_64/DD0043/data0043')\n",
"print \"z = %s\" % ds2.current_redshift\n",
"print \"Internal length units = %s\" % ds2.length_unit\n",
"print \"Internal length units in cgs = %s\" % ds2.length_unit.in_cgs()"
@@ -94,9 +94,10 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
+ "yt.enable_parallelism()\n",
"\n",
- "ts = DatasetSeries.from_filenames(\"Enzo_64/DD????/data????\")\n",
+ "ts = yt.load(\"Enzo_64/DD????/data????\")\n",
"\n",
"storage = {}\n",
"\n",
@@ -104,7 +105,7 @@
" sto.result_id = ds.current_time\n",
" sto.result = ds.length_unit\n",
"\n",
- "if is_root():\n",
+ "if yt.is_root():\n",
" for t in sorted(storage.keys()):\n",
" print t.in_units('Gyr'), storage[t].in_units('Mpc')"
],
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/analyzing/units/5)_Units_and_plotting.ipynb
--- a/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
+++ b/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:981baca6958c75f0d84bbc24be7d2b75af5957d36aa3eb4ba725d9e47a85f80d"
+ "signature": "sha256:3deac8455c3bbd85e3cefc0f8905be509fba0050f67f69a7faed0505b4d8dbad"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -28,9 +28,9 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
- "slc = SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
+ "import yt\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "slc = yt.SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
"slc.set_figure_size(6)"
],
"language": "python",
@@ -107,7 +107,7 @@
"collapsed": false,
"input": [
"dd = ds.all_data()\n",
- "plot = ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
+ "plot = yt.ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
"plot.show()"
],
"language": "python",
@@ -142,7 +142,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "plot = PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
+ "plot = yt.PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
"plot.set_figure_size(6)"
],
"language": "python",
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
--- a/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
+++ b/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
@@ -230,14 +230,14 @@
"collapsed": false,
"input": [
"sp_small = ds.sphere(\"max\", (50.0, 'kpc'))\n",
- "bv = sp_small.quantities[\"BulkVelocity\"]()\n",
+ "bv = sp_small.quantities.bulk_velocity()\n",
"\n",
"sp = ds.sphere(\"max\", (0.1, 'Mpc'))\n",
- "rv1 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+ "rv1 = sp.quantities.extrema(\"radial_velocity\")\n",
"\n",
"sp.clear_data()\n",
"sp.set_field_parameter(\"bulk_velocity\", bv)\n",
- "rv2 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+ "rv2 = sp.quantities.extrema(\"radial_velocity\")\n",
"\n",
"print bv\n",
"print rv1\n",
@@ -251,4 +251,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/constructing_data_objects.rst
--- a/doc/source/cookbook/constructing_data_objects.rst
+++ b/doc/source/cookbook/constructing_data_objects.rst
@@ -16,6 +16,8 @@
.. yt_cookbook:: find_clumps.py
+.. _extract_frb:
+
Extracting Fixed Resolution Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -1,6 +1,7 @@
{
"metadata": {
- "name": ""
+ "name": "",
+ "signature": "sha256:e8fd07931e339dc67b9d84b0fbc6abc84d3957d885544c24da7aa550f9427a1f"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -11,8 +12,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *"
+ "import yt"
],
"language": "python",
"metadata": {},
@@ -22,8 +22,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
- "slc = SlicePlot(ds, 'x', 'density')\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "slc = yt.SlicePlot(ds, 'x', 'density')\n",
"slc"
],
"language": "python",
@@ -87,4 +87,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:4f7d409d15ecc538096d15212923312e2cb4a911ebf5a9cf7edc9bd63a8335e9"
+ "signature": "sha256:bed79f0227742715a8753a98f2ad54175767a7c9ded19b14976ee6c8ff255f04"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -23,7 +23,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from JSAnimation import IPython_display\n",
"from matplotlib import animation"
],
@@ -47,14 +47,14 @@
"import matplotlib.pyplot as plt\n",
"from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
"\n",
- "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+ "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
"prj.set_figure_size(5)\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
" prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
@@ -68,4 +68,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:0090176ae6299b2310bf613404cbfbb42a54e19a03d1469d1429a01170a63aa0"
+ "signature": "sha256:b400f12ff9e27ff6a3ddd13f2f8fc3f88bd857fa6083fad6808f00d771312db7"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -21,7 +21,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from matplotlib import animation"
],
"language": "python",
@@ -96,13 +96,13 @@
"import matplotlib.pyplot as plt\n",
"from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
"\n",
- "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+ "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
" prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
@@ -119,4 +119,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/profile_with_variance.py
--- a/doc/source/cookbook/profile_with_variance.py
+++ b/doc/source/cookbook/profile_with_variance.py
@@ -8,7 +8,7 @@
sp = ds.sphere("max", (1, "Mpc"))
# Calculate and store the bulk velocity for the sphere.
-bulk_velocity = sp.quantities['BulkVelocity']()
+bulk_velocity = sp.quantities.bulk_velocity()
sp.set_field_parameter('bulk_velocity', bulk_velocity)
# Create a 1D profile object for profiles over radius
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/rad_velocity.py
--- a/doc/source/cookbook/rad_velocity.py
+++ b/doc/source/cookbook/rad_velocity.py
@@ -7,7 +7,7 @@
sp0 = ds.sphere(ds.domain_center, (500., "kpc"))
# Compute the bulk velocity from the cells in this sphere
-bulk_vel = sp0.quantities["BulkVelocity"]()
+bulk_vel = sp0.quantities.bulk_velocity()
# Get the second sphere
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/cookbook/simple_off_axis_projection.py
--- a/doc/source/cookbook/simple_off_axis_projection.py
+++ b/doc/source/cookbook/simple_off_axis_projection.py
@@ -7,7 +7,7 @@
sp = ds.sphere("center", (15.0, "kpc"))
# Get the angular momentum vector for the sphere.
-L = sp.quantities["AngularMomentumVector"]()
+L = sp.quantities.angular_momentum_vector()
print "Angular momentum vector: {0}".format(L)
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -339,10 +339,10 @@
$ hg up revision_specifier
-Lastly, if you want to use this new downloaded version of your yt repository
-as the *active* version of yt on your computer (i.e. the one which is executed
-when you run yt from the command line or ``from yt.mods import *``),
-then you must "activate" it using the following commands from within the
+Lastly, if you want to use this new downloaded version of your yt repository as
+the *active* version of yt on your computer (i.e. the one which is executed when
+you run yt from the command line or the one that is loaded when you do ``import
+yt``), then you must "activate" it using the following commands from within the
repository directory.
In order to do this for the first time with a new repository, you have to
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:5fc7783d6c99659c353a35348bb21210fcb7572d5357f32dd61755d4a7f8fe6c"
+ "signature": "sha256:0d8d5fd49877ae68c53b6efec37e2c41a66935f70e5bb77065fe55fa9e82309b"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -52,8 +52,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *"
+ "import yt\n",
+ "import numpy as np"
],
"language": "python",
"metadata": {},
@@ -89,7 +89,7 @@
"input": [
"data = dict(density = (arr, \"g/cm**3\"))\n",
"bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
- "ds = load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
+ "ds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
],
"language": "python",
"metadata": {},
@@ -110,6 +110,7 @@
"* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n",
"* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n",
"* `velocity_unit` : The unit that corresponds to `code_velocity`\n",
+ "* `magnetic_unit` : The unit that corresponds to `code_magnetic`, i.e. the internal units used to represent magnetic field strengths.\n",
"* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n",
"\n",
"This example creates a `yt`-native dataset `ds` that will treat your array as a\n",
@@ -131,7 +132,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.set_cmap(\"density\", \"Blues\")\n",
"slc.annotate_grids(cmap=None)\n",
"slc.show()"
@@ -159,11 +160,11 @@
"posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
"data = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n",
" number_of_particles = 10000,\n",
- " particle_position_x = (posx_arr, \"code_length\"), \n",
- "\t particle_position_y = (posy_arr, \"code_length\"),\n",
- "\t particle_position_z = (posz_arr, \"code_length\"))\n",
+ " particle_position_x = (posx_arr, 'code_length'), \n",
+ " particle_position_y = (posy_arr, 'code_length'),\n",
+ " particle_position_z = (posz_arr, 'code_length'))\n",
"bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
- "ds = load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
+ "ds = yt.load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
" bbox=bbox, nprocs=4)"
],
"language": "python",
@@ -182,7 +183,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.set_cmap(\"density\", \"Blues\")\n",
"slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n",
"slc.show()"
@@ -277,7 +278,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
+ "ds = yt.load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
" periodicity=(False,False,False))"
],
"language": "python",
@@ -295,7 +296,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "prj = ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
+ "prj = yt.ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
"prj.set_log(\"z-velocity\", False)\n",
"prj.set_log(\"Bx\", False)\n",
"prj.show()"
@@ -324,7 +325,7 @@
"collapsed": false,
"input": [
"#Find the min and max of the field\n",
- "mi, ma = ds.all_data().quantities[\"Extrema\"]('Temperature')\n",
+ "mi, ma = ds.all_data().quantities.extrema('Temperature')\n",
"#Reduce the dynamic range\n",
"mi = mi.value + 1.5e7\n",
"ma = ma.value - 0.81e7"
@@ -344,7 +345,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "tf = ColorTransferFunction((mi, ma), grey_opacity=False)"
+ "tf = yt.ColorTransferFunction((mi, ma), grey_opacity=False)"
],
"language": "python",
"metadata": {},
@@ -501,8 +502,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
- "slc = SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
+ "ds = yt.load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
+ "slc = yt.SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
"for ax in \"xyz\":\n",
" slc.set_log(\"velocity_%s\" % (ax), False)\n",
"slc.annotate_velocity()\n",
@@ -619,7 +620,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load_amr_grids(grid_data, [32, 32, 32], field_units=field_units)"
+ "ds = yt.load_amr_grids(grid_data, [32, 32, 32], field_units=field_units)"
],
"language": "python",
"metadata": {},
@@ -636,7 +637,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n",
"slc.show()"
],
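
The unit keywords enumerated in the notebook above, including the new ``magnetic_unit``, can be exercised together like this (a sketch; the particular unit choices, including passing gauss for ``magnetic_unit``, are illustrative):

    import yt
    import numpy as np

    arr = np.random.random(size=(64, 64, 64))
    data = dict(density=(arr, "g/cm**3"))
    bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])

    # each *_unit keyword maps the corresponding code unit; strings,
    # (value, "unit") tuples, and plain floats (cgs) are all accepted
    ds = yt.load_uniform_grid(data, arr.shape,
                              length_unit="Mpc",
                              magnetic_unit=(1.0, "gauss"),  # illustrative
                              periodicity=(False, False, False),
                              bbox=bbox, nprocs=64)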
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -31,8 +31,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("DD0010/data0010")
+ import yt
+ ds = yt.load("DD0010/data0010")
.. rubric:: Caveats
@@ -78,8 +78,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("pltgmlcs5600")
+ import yt
+ ds = yt.load("pltgmlcs5600")
.. _loading-flash-data:
@@ -101,8 +101,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("cosmoSim_coolhdf5_chk_0026")
+ import yt
+ ds = yt.load("cosmoSim_coolhdf5_chk_0026")
If you have a FLASH particle file that was created at the same time as
a plotfile or checkpoint file (therefore having particle data
@@ -111,8 +111,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+ import yt
+ ds = yt.load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
.. rubric:: Caveats
@@ -142,8 +142,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("output_00007/info_00007.txt")
+ import yt
+ ds = yt.load("output_00007/info_00007.txt")
yt will attempt to guess the fields in the file. You may also specify a list
of fields by supplying the ``fields`` keyword in your call to ``load``.
@@ -162,16 +162,16 @@
.. code-block:: python
- from yt.mods import *
- ds = load("snapshot_061.hdf5")
+ import yt
+ ds = yt.load("snapshot_061.hdf5")
However, yt cannot detect raw-binary Gadget data, and so you must specify the
format as being Gadget:
.. code-block:: python
- from yt.mods import *
- ds = GadgetDataset("snapshot_061")
+ import yt
+ ds = yt.GadgetDataset("snapshot_061")
.. _particle-bbox:
@@ -461,9 +461,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d")
+ ds = yt.load("SFG1/10MpcBox_csf512_a0.460.d")
.. _loading_athena_data:
@@ -480,8 +480,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("kh.0010.vtk")
+ import yt
+ ds = yt.load("kh.0010.vtk")
The filename corresponds to the file on SMR level 0, whereas if there
are multiple levels the corresponding files will be picked up
@@ -495,8 +495,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("id0/kh.0010.vtk")
+ import yt
+ ds = yt.load("id0/kh.0010.vtk")
which will pick up all of the files in the different ``id*`` directories for
the entire dataset.
@@ -507,11 +507,11 @@
.. code-block:: python
- from yt.mods import *
- ds = load("id0/cluster_merger.0250.vtk",
- parameters={"length_unit":(1.0,"Mpc"),
- "time_unit"(1.0,"Myr"),
- "mass_unit":(1.0e14,"Msun")})
+ import yt
+ ds = yt.load("id0/cluster_merger.0250.vtk",
+ parameters={"length_unit":(1.0,"Mpc"),
+              "time_unit":(1.0,"Myr"),
+ "mass_unit":(1.0e14,"Msun")})
This means that the yt fields, e.g. ``("gas","density")``, ``("gas","x-velocity")``,
``("gas","magnetic_field_x")``, will be in cgs units, but the Athena fields, e.g.,
@@ -557,8 +557,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("m33_hi.fits")
+ import yt
+ ds = yt.load("m33_hi.fits")
ds.print_stats()
.. parsed-literal::
@@ -809,11 +809,11 @@
.. code-block:: python
- from yt.frontends.stream.api import load_uniform_grid
+ import yt
data = dict(Density = arr)
   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
- ds = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+ ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
will create a ``yt``-native dataset ``ds`` that will treat your array as a
density field in a cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and
@@ -827,7 +827,7 @@
.. code-block:: python
- from yt.frontends.stream.api import load_uniform_grid
+ import yt
data = dict(Density = dens,
number_of_particles = 1000000,
@@ -835,7 +835,7 @@
particle_position_y = posy_arr,
particle_position_z = posz_arr)
   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
- ds = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+ ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
where in this example the particle position fields have been assigned. ``number_of_particles`` must be the same size as the particle
arrays. If no particle arrays are supplied then ``number_of_particles`` is assumed to be zero.
@@ -859,7 +859,7 @@
.. code-block:: python
- from yt.frontends.stream.api import load_amr_grids
+ import yt
grid_data = [
dict(left_edge = [0.0, 0.0, 0.0],
@@ -877,7 +877,7 @@
for g in grid_data:
g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
- ds = load_amr_grids(grid_data, [32, 32, 32], 1.0)
+ ds = yt.load_amr_grids(grid_data, [32, 32, 32], 1.0)
Particle fields are supported by adding 1-dimensional arrays and
setting the ``number_of_particles`` key to each ``grid``'s dict:
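
The hunk ends just before that example; a sketch of what attaching particles to each grid presumably looks like (grid layout copied from the snippet above, particle counts and positions illustrative):

    import yt
    import numpy as np

    grid_data = [
        dict(left_edge=[0.0, 0.0, 0.0], right_edge=[1.0, 1.0, 1.0],
             level=0, dimensions=[32, 32, 32]),
        dict(left_edge=[0.25, 0.25, 0.25], right_edge=[0.75, 0.75, 0.75],
             level=1, dimensions=[32, 32, 32]),
    ]

    for g in grid_data:
        g["density"] = np.random.random(g["dimensions"]) * 2**g["level"]
        # particle fields are 1D arrays, plus a per-grid particle count
        g["number_of_particles"] = 100
        for i, ax in enumerate("xyz"):
            g["particle_position_%s" % ax] = np.random.uniform(
                g["left_edge"][i], g["right_edge"][i], 100)

    ds = yt.load_amr_grids(grid_data, [32, 32, 32], 1.0)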
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -142,8 +142,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load('Enzo_64/DD0043/data0043')
+ import yt
+ ds = yt.load('Enzo_64/DD0043/data0043')
all_data_level_0 = ds.covering_grid(level=0, left_edge=[0,0.0,0.0],
dims=[64, 64, 64])
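
For context, the covering grid built above is then indexed like an ordinary array (a sketch assuming the ``Enzo_64`` dataset, whose root grid is 64^3):

    import yt

    ds = yt.load('Enzo_64/DD0043/data0043')
    all_data_level_0 = ds.covering_grid(level=0, left_edge=[0.0, 0.0, 0.0],
                                        dims=[64, 64, 64])

    # fields are resampled onto the fixed-resolution grid on first access
    density = all_data_level_0["density"]
    print density.shape   # (64, 64, 64)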
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/reference/api/api.rst
--- a/doc/source/reference/api/api.rst
+++ b/doc/source/reference/api/api.rst
@@ -638,14 +638,17 @@
~yt.data_objects.derived_quantities.DerivedQuantity
~yt.data_objects.derived_quantities.DerivedQuantityCollection
~yt.data_objects.derived_quantities.WeightedAverageQuantity
- ~yt.data_objects.derived_quantities.TotalQuantity
- ~yt.data_objects.derived_quantities.TotalMass
+ ~yt.data_objects.derived_quantities.AngularMomentumVector
+ ~yt.data_objects.derived_quantities.BulkVelocity
~yt.data_objects.derived_quantities.CenterOfMass
- ~yt.data_objects.derived_quantities.BulkVelocity
- ~yt.data_objects.derived_quantities.AngularMomentumVector
~yt.data_objects.derived_quantities.Extrema
~yt.data_objects.derived_quantities.MaxLocation
~yt.data_objects.derived_quantities.MinLocation
+ ~yt.data_objects.derived_quantities.SpinParameter
+ ~yt.data_objects.derived_quantities.TotalMass
+ ~yt.data_objects.derived_quantities.TotalQuantity
+ ~yt.data_objects.derived_quantities.WeightedAverageQuantity
+ ~yt.data_objects.derived_quantities.WeightedVariance
.. _callback-api:
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
--- a/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
+++ b/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:d75e416150ccb017cfdf89973f8d4463e780da4d9bdc9a3783001d22021d9081"
+ "signature": "sha256:0b3811c163a3c9d35de8d103a38328e5f4d3ae481327542d2ed178ddcc718f5e"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -23,14 +23,17 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
+ "import numpy as np\n",
"from IPython.core.display import Image\n",
"from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper\n",
"\n",
- "def showme(np_im):\n",
- " np_im[np_im != np_im] = 0.0\n",
- " imb = write_bitmap(np_im, None)\n",
+ "def showme(im):\n",
+ " # screen out NaNs\n",
+ " im[im != im] = 0.0\n",
+ " \n",
+ " # Create an RGBA bitmap to display\n",
+ " imb = yt.write_bitmap(im, None)\n",
" return Image(imb)"
],
"language": "python",
@@ -48,7 +51,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load('Enzo_64/DD0043/data0043')"
+ "ds = yt.load('Enzo_64/DD0043/data0043')"
],
"language": "python",
"metadata": {},
@@ -65,7 +68,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "tfh = TransferFunctionHelper(ds)"
+ "tfh = yt.TransferFunctionHelper(ds)"
],
"language": "python",
"metadata": {},
@@ -83,7 +86,7 @@
"collapsed": false,
"input": [
"# Build a transfer function that is a multivariate gaussian in Density\n",
- "tfh = TransferFunctionHelper(ds)\n",
+ "tfh = yt.TransferFunctionHelper(ds)\n",
"tfh.set_field('temperature')\n",
"tfh.set_log(True)\n",
"tfh.set_bounds()\n",
@@ -99,7 +102,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Let's also look at the probability density function of the `CellMass` field as a function of `Temperature`. This might give us an idea where there is a lot of structure. "
+      "Let's also look at the probability density function of the `cell_mass` field as a function of `temperature`. This might give us an idea of where there is a lot of structure. "
]
},
{
@@ -123,7 +126,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "tfh = TransferFunctionHelper(ds)\n",
+ "tfh = yt.TransferFunctionHelper(ds)\n",
"tfh.set_field('temperature')\n",
"tfh.set_bounds()\n",
"tfh.set_log(True)\n",
@@ -179,4 +182,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
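
Taken together, the retooled notebook cells follow this workflow (a sketch; ``build_transfer_function`` and the ``'tf.png'`` filename are assumptions beyond what the hunks show):

    import yt

    ds = yt.load('Enzo_64/DD0043/data0043')

    tfh = yt.TransferFunctionHelper(ds)
    tfh.set_field('temperature')   # field the transfer function maps
    tfh.set_log(True)              # sample the field in log space
    tfh.set_bounds()               # default to the field extrema
    tfh.build_transfer_function()
    tfh.plot('tf.png', profile_field='cell_mass')  # overlay the mass PDF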
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/visualizing/colormaps/cmap_images.py
--- a/doc/source/visualizing/colormaps/cmap_images.py
+++ b/doc/source/visualizing/colormaps/cmap_images.py
@@ -1,11 +1,13 @@
-from yt.mods import *
+import yt
+import os
+from yt.config import ytcfg
import matplotlib.cm as cm
# Load the dataset.
-ds = load(os.path.join(ytcfg.get("yt", "test_data_dir"), "IsolatedGalaxy/galaxy0030/galaxy0030"))
+ds = yt.load(os.path.join(ytcfg.get("yt", "test_data_dir"), "IsolatedGalaxy/galaxy0030/galaxy0030"))
# Create projections using each colormap available.
-p = ProjectionPlot(ds, "z", "density", weight_field = "density", width=0.4)
+p = yt.ProjectionPlot(ds, "z", "density", weight_field = "density", width=0.4)
for cmap in cm.datad:
if cmap.startswith("idl"):
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/visualizing/colormaps/index.rst
--- a/doc/source/visualizing/colormaps/index.rst
+++ b/doc/source/visualizing/colormaps/index.rst
@@ -34,15 +34,15 @@
.. code-block:: python
- from yt.mods import *
- show_colormaps()
+ import yt
+ yt.show_colormaps()
or to output just the colormaps native to yt to an image file, try:
.. code-block:: python
- from yt.mods import *
- show_colormaps(subset = "yt_native", filename = "yt_native.png")
+ import yt
+ yt.show_colormaps(subset = "yt_native", filename = "yt_native.png")
Applying a Colormap to your Rendering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -52,7 +52,7 @@
.. code-block:: python
- write_image(im, "output.png", cmap_name = 'jet')
+ yt.write_image(im, "output.png", cmap_name = 'jet')
If you're using the Plot Window interface (e.g. SlicePlot, ProjectionPlot,
etc.), it's even easier than that. Simply create your rendering, and you
@@ -61,8 +61,8 @@
.. code-block:: python
- ds = load("IsolatedGalaxy/galaxy0030/galaxy0030")
- p = ProjectionPlot(ds, "z", "density")
+ ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
+ p = yt.ProjectionPlot(ds, "z", "density")
p.set_cmap(field="density", cmap='jet')
p.save('proj_with_jet_cmap.png')
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/visualizing/sketchfab.rst
--- a/doc/source/visualizing/sketchfab.rst
+++ b/doc/source/visualizing/sketchfab.rst
@@ -33,8 +33,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ import yt
+ ds = yt.load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
sphere = ds.sphere("max", (1.0, "mpc"))
surface = ds.surface(sphere, "density", 1e-27)
@@ -92,8 +92,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("redshift0058")
+ import yt
+ ds = yt.load("redshift0058")
dd = ds.sphere("max", (200, "kpc"))
rho = 5e-27
@@ -144,9 +144,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = yt.load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = './surfaces'
@@ -216,22 +216,22 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
+ ds = yt.load("/data/workshop2012/IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = './surfaces'
- def _Emissivity(field, data):
+ def emissivity(field, data):
return (data['density']*data['density']*np.sqrt(data['temperature']))
- add_field("Emissivity", function=_Emissivity, units=r"\rm{g K}/\rm{cm}^{6}")
+   yt.add_field("emissivity", function=emissivity, units=r"g*K/cm**6")
sphere = ds.sphere("max", (1.0, "mpc"))
for i,r in enumerate(rho):
surf = ds.surface(sphere, 'density', r)
surf.export_obj(filename, transparency = trans[i],
- color_field='temperature', emit_field = 'Emissivity',
+ color_field='temperature', emit_field = 'emissivity',
plot_index = i)
will output the same OBJ and MTL as in our previous example, but it will scale
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f doc/source/visualizing/streamlines.rst
--- a/doc/source/visualizing/streamlines.rst
+++ b/doc/source/visualizing/streamlines.rst
@@ -49,14 +49,14 @@
:obj:`~yt.visualization.streamlines.Streamlines.streamlines` object.
Example Script
-++++++++++++++++
+++++++++++++++
.. code-block:: python
- from yt.mods import *
+ import yt
from yt.visualization.api import Streamlines
- ds = load('DD1701') # Load ds
+ ds = yt.load('DD1701') # Load ds
c = np.array([0.5]*3)
N = 100
scale = 1.0
@@ -94,10 +94,10 @@
.. code-block:: python
- from yt.mods import *
+ import yt
from yt.visualization.api import Streamlines
- ds = load('DD1701') # Load ds
+ ds = yt.load('DD1701') # Load ds
streamlines = Streamlines(ds, [0.5]*3)
streamlines.integrate_through_volume()
stream = streamlines.path(0)
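
With the new imports in place, the first streamlines example reads end to end roughly as follows (a sketch; the velocity field names and the ``length`` value are assumptions beyond what the hunk shows):

    import yt
    import numpy as np
    from yt.visualization.api import Streamlines

    ds = yt.load('DD1701')            # Load ds
    c = np.array([0.5] * 3)           # seed streamlines near the center
    N = 100
    scale = 1.0
    pos_dx = np.random.random((N, 3)) * scale - scale / 2.
    pos = c + pos_dx

    streamlines = Streamlines(ds, pos, 'velocity_x', 'velocity_y',
                              'velocity_z', length=1.0)
    streamlines.integrate_through_volume()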
diff -r 959d7f69b3d60ab7910eb6de36cec6701232d586 -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f yt/analysis_modules/halo_analysis/halo_catalog.py
--- a/yt/analysis_modules/halo_analysis/halo_catalog.py
+++ b/yt/analysis_modules/halo_analysis/halo_catalog.py
@@ -103,6 +103,7 @@
finder_kwargs=None,
output_dir="halo_catalogs/catalog"):
ParallelAnalysisInterface.__init__(self)
+        halos_ds.index  # touching .index forces the halo dataset's index to build up front
self.halos_ds = halos_ds
self.data_ds = data_ds
self.output_dir = ensure_dir(output_dir)
@@ -204,7 +205,6 @@
>>> hc.add_quantity("mass_squared")
"""
-
if "field_type" in kwargs:
field_type = kwargs.pop("field_type")
else:
@@ -361,6 +361,7 @@
if self.halos_ds is None:
# Find the halos and make a dataset of them
self.halos_ds = self.finder_method(self.data_ds)
+            self.halos_ds.index  # again, force index creation for the newly found halos
if self.halos_ds is None:
mylog.warning('No halos were found for {0}'.format(\
self.data_ds.basename))
https://bitbucket.org/yt_analysis/yt/commits/d9bccb3e0ecd/
Changeset: d9bccb3e0ecd
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:16:46
Summary: Moving the hardware VR stuff into its own section.
Affected #: 10 files
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import numpy as np
-import yt
-
-#This is an example of how to map an amr data set
-#to a uniform grid. In this case the highest
-#level of refinement is mapped into a 1024x1024x1024 cube
-
-#first the amr data is loaded
-ds = yt.load("~/pfs/galaxy/new_tests/feedback_8bz/DD0021/DD0021")
-
-#next we get the maxium refinement level
-lmax = ds.parameters['MaximumRefinementLevel']
-
-#calculate the center of the domain
-domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
-
-#determine the cellsize in the highest refinement level
-cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
-
-#calculate the left edge of the new grid
-left_edge = domain_center - 512*cell_size
-
-#the number of cells per side of the new grid
-ncells = 1024
-
-#ask yt for the specified covering grid
-cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
-
-#get a map of the density into the new grid
-density_map = cgrid["density"].astype(dtype="float32")
-
-#save the file as a numpy array for convenient future processing
-np.save("/pfs/goldbaum/galaxy/new_tests/feedback_8bz/gas_density_DD0021_log_densities.npy", density_map)
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/cookbook/ffmpeg_volume_rendering.py
--- a/doc/source/cookbook/ffmpeg_volume_rendering.py
+++ /dev/null
@@ -1,99 +0,0 @@
-#This is an example of how to make videos of
-#uniform grid data using Theia and ffmpeg
-
-#The Scene object to hold the ray caster and view camera
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-
-#GPU based raycasting algorithm to use
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-
-#These will be used to define how to color the data
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-#This will be used to launch ffmpeg
-import subprocess as sp
-
-#Of course we need numpy for math magic
-import numpy as np
-
-#Opacity scaling function
-def scale_func(v, mi, ma):
- return np.minimum(1.0, (v-mi)/(ma-mi) + 0.0)
-
-#load the uniform grid from a numpy array file
-bolshoi = "/home/bogert/log_densities_1024.npy"
-density_grid = np.load(bolshoi)
-
-#Set the TheiaScene to use the density_grid and
-#setup the raycaster for a resulting 1080p image
-ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (1920,1080) ))
-
-#the min and max values in the data to color
-mi, ma = 0.0, 3.6
-
-#setup colortransferfunction
-bins = 5000
-tf = ColorTransferFunction( (mi, ma), bins)
-tf.map_to_colormap(0.5, ma, colormap="spring", scale_func = scale_func)
-
-#pass the transfer function to the ray caster
-ts.source.raycaster.set_transfer(tf)
-
-#Initial configuration for start of video
-#set initial opacity and brightness values
-#then zoom into the center of the data 30%
-ts.source.raycaster.set_opacity(0.03)
-ts.source.raycaster.set_brightness(2.3)
-ts.camera.zoom(30.0)
-
-#path to ffmpeg executable
-FFMPEG_BIN = "/usr/local/bin/ffmpeg"
-
-pipe = sp.Popen([ FFMPEG_BIN,
- '-y', # (optional) overwrite the output file if it already exists
- #This must be set to rawvideo because the image is an array
- '-f', 'rawvideo',
- #This must be set to rawvideo because the image is an array
- '-vcodec','rawvideo',
- #The size of the image array and resulting video
- '-s', '1920x1080',
- #This must be rgba to match array format (uint32)
- '-pix_fmt', 'rgba',
- #frame rate of video
- '-r', '29.97',
- #Indicate that the input to ffmpeg comes from a pipe
- '-i', '-',
- # Tells FFMPEG not to expect any audio
- '-an',
- #Setup video encoder
- #Use any encoder you life available from ffmpeg
- '-vcodec', 'libx264', '-preset', 'ultrafast', '-qp', '0',
- '-pix_fmt', 'yuv420p',
- #Name of the output
- 'bolshoiplanck2.mkv' ],
- stdin=sp.PIPE,stdout=sp.PIPE)
-
-
-#Now we loop and produce 500 frames
-for k in range (0,500) :
- #update the scene resulting in a new image
- ts.update()
-
- #get the image array from the ray caster
- array = ts.source.get_results()
-
- #send the image array to ffmpeg
- array.tofile(pipe.stdin)
-
- #rotate the scene by 0.01 rads in x,y & z
- ts.camera.rotateX(0.01)
- ts.camera.rotateZ(0.01)
- ts.camera.rotateY(0.01)
-
- #zoom in 0.01% for a total of a 5% zoom
- ts.camera.zoom(0.01)
-
-
-#Close the pipe to ffmpeg
-pipe.terminate()
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/cookbook/opengl_stereo_volume_rendering.py
--- a/doc/source/cookbook/opengl_stereo_volume_rendering.py
+++ /dev/null
@@ -1,370 +0,0 @@
-from OpenGL.GL import *
-from OpenGL.GLUT import *
-from OpenGL.GLU import *
-from OpenGL.GL.ARB.vertex_buffer_object import *
-
-import sys, time
-import numpy as np
-import pycuda.driver as cuda_driver
-import pycuda.gl as cuda_gl
-
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-import numexpr as ne
-
-window = None # Number of the glut window.
-rot_enabled = True
-
-#Theia Scene
-ts = None
-
-#RAY CASTING values
-c_tbrightness = 1.0
-c_tdensity = 0.05
-
-output_texture = None # pointer to offscreen render target
-
-leftButton = False
-middleButton = False
-rightButton = False
-
-#Screen width and height
-width = 1920
-height = 1080
-
-eyesep = 0.1
-
-(pbo, pycuda_pbo) = [None]*2
-(rpbo, rpycuda_pbo) = [None]*2
-
-#create 2 PBO for stereo scopic rendering
-def create_PBO(w, h):
- global pbo, pycuda_pbo, rpbo, rpycuda_pbo
- num_texels = w*h
- array = np.zeros((num_texels, 3),np.float32)
-
- pbo = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, pbo)
- glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
-
- rpbo = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, rpbo)
- glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- rpycuda_pbo = cuda_gl.RegisteredBuffer(long(rpbo))
-
-def destroy_PBO(self):
- global pbo, pycuda_pbo, rpbo, rpycuda_pbo
- glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
- glDeleteBuffers(1, long(pbo));
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pbo,pycuda_pbo = [None]*2
-
- glBindBuffer(GL_ARRAY_BUFFER, long(rpbo))
- glDeleteBuffers(1, long(rpbo));
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- rpbo,rpycuda_pbo = [None]*2
-
-#consistent with C initPixelBuffer()
-def create_texture(w,h):
- global output_texture
- output_texture = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, output_texture)
- # set basic parameters
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- # buffer data
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- w, h, 0, GL_RGB, GL_FLOAT, None)
-
-#consistent with C initPixelBuffer()
-def destroy_texture():
- global output_texture
- glDeleteTextures(output_texture);
- output_texture = None
-
-def init_gl(w = 512 , h = 512):
- Width, Height = (w, h)
-
- glClearColor(0.1, 0.1, 0.5, 1.0)
- glDisable(GL_DEPTH_TEST)
-
- #matrix functions
- glViewport(0, 0, Width, Height)
- glMatrixMode(GL_PROJECTION);
- glLoadIdentity();
-
- #matrix functions
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
- glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-def resize(Width, Height):
- global width, height
- (width, height) = Width, Height
- glViewport(0, 0, Width, Height) # Reset The Current Viewport And Perspective Transformation
- glMatrixMode(GL_PROJECTION)
- glLoadIdentity()
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
-
-
-def do_tick():
- global time_of_last_titleupdate, frame_counter, frames_per_second
- if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
- frames_per_second = frame_counter # Save The FPS
- frame_counter = 0 # Reset The FPS Counter
- szTitle = "%d FPS" % (frames_per_second )
- glutSetWindowTitle ( szTitle )
- time_of_last_titleupdate = time.clock () * 1000.0
- frame_counter += 1
-
-oldMousePos = [ 0, 0 ]
-def mouseButton( button, mode, x, y ):
- """Callback function (mouse button pressed or released).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively"""
-
- global leftButton, middleButton, rightButton, oldMousePos
-
- if button == GLUT_LEFT_BUTTON:
- if mode == GLUT_DOWN:
- leftButton = True
- else:
- leftButton = False
-
- if button == GLUT_MIDDLE_BUTTON:
- if mode == GLUT_DOWN:
- middleButton = True
- else:
- middleButton = False
-
- if button == GLUT_RIGHT_BUTTON:
- if mode == GLUT_DOWN:
- rightButton = True
- else:
- rightButton = False
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def mouseMotion( x, y ):
- """Callback function (mouse moved while button is pressed).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively.
- The global translation vector is updated according to
- the movement of the mouse pointer."""
-
- global ts, leftButton, middleButton, rightButton, oldMousePos
- deltaX = x - oldMousePos[ 0 ]
- deltaY = y - oldMousePos[ 1 ]
-
- factor = 0.001
-
- if leftButton == True:
- ts.camera.rotateX( - deltaY * factor)
- ts.camera.rotateY( - deltaX * factor)
- if middleButton == True:
- ts.camera.translateX( deltaX* 2.0 * factor)
- ts.camera.translateY( - deltaY* 2.0 * factor)
- if rightButton == True:
- ts.camera.scale += deltaY * factor
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def keyPressed(*args):
- global c_tbrightness, c_tdensity, eyesep
- # If escape is pressed, kill everything.
- if args[0] == '\033':
- print 'Closing..'
- destroy_PBOs()
- destroy_texture()
- exit()
-
- #change the brightness of the scene
- elif args[0] == ']':
- c_tbrightness += 0.025
- elif args[0] == '[':
- c_tbrightness -= 0.025
-
- #change the density scale
- elif args[0] == ';':
- c_tdensity -= 0.001
- elif args[0] == '\'':
- c_tdensity += 0.001
-
- #change the transfer scale
- elif args[0] == '-':
- eyesep -= 0.01
- elif args[0] == '=':
- eyesep += 0.01
-
-def idle():
- glutPostRedisplay()
-
-def display():
- try:
- #process left eye
- process_image()
- display_image()
-
- #process right eye
- process_image(eye = False)
- display_image(eye = False)
-
-
- glutSwapBuffers()
-
- except:
- from traceback import print_exc
- print_exc()
- from os import _exit
- _exit(0)
-
-def process(eye = True):
- global ts, pycuda_pbo, rpycuda_pbo, eyesep, c_tbrightness, c_tdensity
- """ Use PyCuda """
-
- ts.get_raycaster().set_opacity(c_tdensity)
- ts.get_raycaster().set_brightness(c_tbrightness)
-
- if (eye) :
- ts.camera.translateX(-eyesep)
- dest_mapping = pycuda_pbo.map()
- (dev_ptr, size) = dest_mapping.device_ptr_and_size()
- ts.get_raycaster().surface.device_ptr = dev_ptr
- ts.update()
- dest_mapping.unmap()
- ts.camera.translateX(eyesep)
- else :
- ts.camera.translateX(eyesep)
- dest_mapping = rpycuda_pbo.map()
- (dev_ptr, size) = dest_mapping.device_ptr_and_size()
- ts.get_raycaster().surface.device_ptr = dev_ptr
- ts.update()
- dest_mapping.unmap()
- ts.camera.translateX(-eyesep)
-
-
-def process_image(eye = True):
- global output_texture, pbo, rpbo, width, height
- """ copy image and process using CUDA """
- # run the Cuda kernel
- process(eye)
- # download texture from PBO
- if (eye) :
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
- glBindTexture(GL_TEXTURE_2D, output_texture)
-
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- width, height, 0,
- GL_RGB, GL_FLOAT, None)
- else :
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(rpbo))
- glBindTexture(GL_TEXTURE_2D, output_texture)
-
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- width, height, 0,
- GL_RGB, GL_FLOAT, None)
-
-def display_image(eye = True):
- global width, height
- """ render a screen sized quad """
- glDisable(GL_DEPTH_TEST)
- glDisable(GL_LIGHTING)
- glEnable(GL_TEXTURE_2D)
- glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
-
- #matix functions should be moved
- glMatrixMode(GL_PROJECTION)
- glPushMatrix()
- glLoadIdentity()
- glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
- glMatrixMode( GL_MODELVIEW)
- glLoadIdentity()
- glViewport(0, 0, width, height)
-
- if (eye) :
- glDrawBuffer(GL_BACK_LEFT)
- else :
- glDrawBuffer(GL_BACK_RIGHT)
-
- glBegin(GL_QUADS)
- glTexCoord2f(0.0, 0.0)
- glVertex3f(-1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 0.0)
- glVertex3f(1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 1.0)
- glVertex3f(1.0, 1.0, 0.5)
- glTexCoord2f(0.0, 1.0)
- glVertex3f(-1.0, 1.0, 0.5)
- glEnd()
-
- glMatrixMode(GL_PROJECTION)
- glPopMatrix()
-
- glDisable(GL_TEXTURE_2D)
- glBindTexture(GL_TEXTURE_2D, 0)
- glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
-
-
-#note we may need to init cuda_gl here and pass it to camera
-def main():
- global window, ts, width, height
- (width, height) = (1920, 1080)
-
- glutInit(sys.argv)
- glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH | GLUT_STEREO)
- glutInitWindowSize(*initial_size)
- glutInitWindowPosition(0, 0)
- window = glutCreateWindow("Stereo Volume Rendering")
-
-
- glutDisplayFunc(display)
- glutIdleFunc(idle)
- glutReshapeFunc(resize)
- glutMouseFunc( mouseButton )
- glutMotionFunc( mouseMotion )
- glutKeyboardFunc(keyPressed)
- init_gl(width, height)
-
- # create texture for blitting to screen
- create_texture(width, height)
-
- import pycuda.gl.autoinit
- import pycuda.gl
- cuda_gl = pycuda.gl
-
- create_PBO(width, height)
- # ----- Load and Set Volume Data -----
-
- density_grid = np.load("/home/bogert/dd150_log_densities.npy")
-
- mi, ma= 21.5, 24.5
- bins = 5000
- tf = ColorTransferFunction( (mi, ma), bins)
- tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
-
- ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
-
- ts.get_raycaster().set_sample_size(0.01)
- ts.get_raycaster().set_max_samples(5000)
-
- glutMainLoop()
-
-def scale_func(v, mi, ma):
- return np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
-
-# Print message to console, and kick off the main to get it rolling.
-if __name__ == "__main__":
- print "Hit ESC key to quit, 'a' to toggle animation, and 'e' to toggle cuda"
- main()
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/cookbook/opengl_volume_rendering.py
--- a/doc/source/cookbook/opengl_volume_rendering.py
+++ /dev/null
@@ -1,322 +0,0 @@
-from OpenGL.GL import *
-from OpenGL.GLUT import *
-from OpenGL.GLU import *
-from OpenGL.GL.ARB.vertex_buffer_object import *
-
-import sys, time
-import numpy as np
-import pycuda.driver as cuda_driver
-import pycuda.gl as cuda_gl
-
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-import numexpr as ne
-
-window = None # Number of the glut window.
-rot_enabled = True
-
-#Theia Scene
-ts = None
-
-#RAY CASTING values
-c_tbrightness = 1.0
-c_tdensity = 0.05
-
-output_texture = None # pointer to offscreen render target
-
-leftButton = False
-middleButton = False
-rightButton = False
-
-#Screen width and height
-width = 1024
-height = 1024
-
-eyesep = 0.1
-
-(pbo, pycuda_pbo) = [None]*2
-
-def create_PBO(w, h):
- global pbo, pycuda_pbo
- num_texels = w*h
- array = np.zeros((w,h,3),np.uint32)
-
- pbo = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, pbo)
- glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
-
-def destroy_PBO(self):
- global pbo, pycuda_pbo
- glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
- glDeleteBuffers(1, long(pbo));
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pbo,pycuda_pbo = [None]*2
-
-#consistent with C initPixelBuffer()
-def create_texture(w,h):
- global output_texture
- output_texture = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, output_texture)
- # set basic parameters
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- # buffer data
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
- w, h, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, None)
-
-#consistent with C initPixelBuffer()
-def destroy_texture():
- global output_texture
- glDeleteTextures(output_texture);
- output_texture = None
-
-def init_gl(w = 512 , h = 512):
- Width, Height = (w, h)
-
- glClearColor(0.1, 0.1, 0.5, 1.0)
- glDisable(GL_DEPTH_TEST)
-
- #matrix functions
- glViewport(0, 0, Width, Height)
- glMatrixMode(GL_PROJECTION);
- glLoadIdentity();
-
- #matrix functions
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
- glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-def resize(Width, Height):
- global width, height
- (width, height) = Width, Height
- glViewport(0, 0, Width, Height) # Reset The Current Viewport And Perspective Transformation
- glMatrixMode(GL_PROJECTION)
- glLoadIdentity()
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
-
-
-def do_tick():
- global time_of_last_titleupdate, frame_counter, frames_per_second
- if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
- frames_per_second = frame_counter # Save The FPS
- frame_counter = 0 # Reset The FPS Counter
- szTitle = "%d FPS" % (frames_per_second )
- glutSetWindowTitle ( szTitle )
- time_of_last_titleupdate = time.clock () * 1000.0
- frame_counter += 1
-
-oldMousePos = [ 0, 0 ]
-def mouseButton( button, mode, x, y ):
- """Callback function (mouse button pressed or released).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively"""
-
- global leftButton, middleButton, rightButton, oldMousePos
-
- if button == GLUT_LEFT_BUTTON:
- if mode == GLUT_DOWN:
- leftButton = True
- else:
- leftButton = False
-
- if button == GLUT_MIDDLE_BUTTON:
- if mode == GLUT_DOWN:
- middleButton = True
- else:
- middleButton = False
-
- if button == GLUT_RIGHT_BUTTON:
- if mode == GLUT_DOWN:
- rightButton = True
- else:
- rightButton = False
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def mouseMotion( x, y ):
- """Callback function (mouse moved while button is pressed).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively.
- The global translation vector is updated according to
- the movement of the mouse pointer."""
-
- global ts, leftButton, middleButton, rightButton, oldMousePos
- deltaX = x - oldMousePos[ 0 ]
- deltaY = y - oldMousePos[ 1 ]
-
- factor = 0.001
-
- if leftButton == True:
- ts.camera.rotateX( - deltaY * factor)
- ts.camera.rotateY( - deltaX * factor)
- if middleButton == True:
- ts.camera.translateX( deltaX* 2.0 * factor)
- ts.camera.translateY( - deltaY* 2.0 * factor)
- if rightButton == True:
- ts.camera.scale += deltaY * factor
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def keyPressed(*args):
- global c_tbrightness, c_tdensity
- # If escape is pressed, kill everything.
- if args[0] == '\033':
- print 'Closing..'
- destroy_PBOs()
- destroy_texture()
- exit()
-
- #change the brightness of the scene
- elif args[0] == ']':
- c_tbrightness += 0.025
- elif args[0] == '[':
- c_tbrightness -= 0.025
-
- #change the density scale
- elif args[0] == ';':
- c_tdensity -= 0.001
- elif args[0] == '\'':
- c_tdensity += 0.001
-
-def idle():
- glutPostRedisplay()
-
-def display():
- try:
- #process left eye
- process_image()
- display_image()
-
- glutSwapBuffers()
-
- except:
- from traceback import print_exc
- print_exc()
- from os import _exit
- _exit(0)
-
-def process(eye = True):
- global ts, pycuda_pbo, eyesep, c_tbrightness, c_tdensity
-
- ts.get_raycaster().set_opacity(c_tdensity)
- ts.get_raycaster().set_brightness(c_tbrightness)
-
- dest_mapping = pycuda_pbo.map()
- (dev_ptr, size) = dest_mapping.device_ptr_and_size()
- ts.get_raycaster().surface.device_ptr = dev_ptr
- ts.update()
- # ts.get_raycaster().cast()
- dest_mapping.unmap()
-
-
-def process_image():
- global output_texture, pbo, width, height
- """ copy image and process using CUDA """
- # run the Cuda kernel
- process()
- # download texture from PBO
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
- glBindTexture(GL_TEXTURE_2D, output_texture)
-
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
- width, height, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, None)
-
-def display_image(eye = True):
- global width, height
- """ render a screen sized quad """
- glDisable(GL_DEPTH_TEST)
- glDisable(GL_LIGHTING)
- glEnable(GL_TEXTURE_2D)
- glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
-
- #matix functions should be moved
- glMatrixMode(GL_PROJECTION)
- glPushMatrix()
- glLoadIdentity()
- glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
- glMatrixMode( GL_MODELVIEW)
- glLoadIdentity()
- glViewport(0, 0, width, height)
-
- glBegin(GL_QUADS)
- glTexCoord2f(0.0, 0.0)
- glVertex3f(-1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 0.0)
- glVertex3f(1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 1.0)
- glVertex3f(1.0, 1.0, 0.5)
- glTexCoord2f(0.0, 1.0)
- glVertex3f(-1.0, 1.0, 0.5)
- glEnd()
-
- glMatrixMode(GL_PROJECTION)
- glPopMatrix()
-
- glDisable(GL_TEXTURE_2D)
- glBindTexture(GL_TEXTURE_2D, 0)
- glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
-
-
-#note we may need to init cuda_gl here and pass it to camera
-def main():
- global window, ts, width, height
- (width, height) = (1024, 1024)
-
- glutInit(sys.argv)
- glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH )
- glutInitWindowSize(width, height)
- glutInitWindowPosition(0, 0)
- window = glutCreateWindow("Stereo Volume Rendering")
-
-
- glutDisplayFunc(display)
- glutIdleFunc(idle)
- glutReshapeFunc(resize)
- glutMouseFunc( mouseButton )
- glutMotionFunc( mouseMotion )
- glutKeyboardFunc(keyPressed)
- init_gl(width, height)
-
- # create texture for blitting to screen
- create_texture(width, height)
-
- import pycuda.gl.autoinit
- import pycuda.gl
- cuda_gl = pycuda.gl
-
- create_PBO(width, height)
- # ----- Load and Set Volume Data -----
-
- density_grid = np.load("/home/bogert/dd150_log_densities.npy")
-
- mi, ma= 21.5, 24.5
- bins = 5000
- tf = ColorTransferFunction( (mi, ma), bins)
- tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
-
- ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
-
- ts.get_raycaster().set_sample_size(0.01)
- ts.get_raycaster().set_max_samples(5000)
- ts.update()
-
- glutMainLoop()
-
-def scale_func(v, mi, ma):
- return np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
-
-# Print message to console, and kick off the main to get it rolling.
-if __name__ == "__main__":
- print "Hit ESC key to quit, 'a' to toggle animation, and 'e' to toggle cuda"
- main()
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/visualizing/ffmpeg_volume_rendering.py
--- /dev/null
+++ b/doc/source/visualizing/ffmpeg_volume_rendering.py
@@ -0,0 +1,99 @@
+#This is an example of how to make videos of
+#uniform grid data using Theia and ffmpeg
+
+#The Scene object to hold the ray caster and view camera
+from yt.visualization.volume_rendering.theia.scene import TheiaScene
+
+#GPU based raycasting algorithm to use
+from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
+
+#These will be used to define how to color the data
+from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
+from yt.visualization.color_maps import *
+
+#This will be used to launch ffmpeg
+import subprocess as sp
+
+#Of course we need numpy for math magic
+import numpy as np
+
+#Opacity scaling function
+def scale_func(v, mi, ma):
+ return np.minimum(1.0, (v-mi)/(ma-mi) + 0.0)
+
+#load the uniform grid from a numpy array file
+bolshoi = "/home/bogert/log_densities_1024.npy"
+density_grid = np.load(bolshoi)
+
+#Set the TheiaScene to use the density_grid and
+#setup the raycaster for a resulting 1080p image
+ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (1920,1080) ))
+
+#the min and max values in the data to color
+mi, ma = 0.0, 3.6
+
+#setup colortransferfunction
+bins = 5000
+tf = ColorTransferFunction( (mi, ma), bins)
+tf.map_to_colormap(0.5, ma, colormap="spring", scale_func = scale_func)
+
+#pass the transfer function to the ray caster
+ts.source.raycaster.set_transfer(tf)
+
+#Initial configuration for start of video
+#set initial opacity and brightness values
+#then zoom into the center of the data 30%
+ts.source.raycaster.set_opacity(0.03)
+ts.source.raycaster.set_brightness(2.3)
+ts.camera.zoom(30.0)
+
+#path to ffmpeg executable
+FFMPEG_BIN = "/usr/local/bin/ffmpeg"
+
+pipe = sp.Popen([ FFMPEG_BIN,
+ '-y', # (optional) overwrite the output file if it already exists
+ #This must be set to rawvideo because the image is an array
+ '-f', 'rawvideo',
+ #This must be set to rawvideo because the image is an array
+ '-vcodec','rawvideo',
+ #The size of the image array and resulting video
+ '-s', '1920x1080',
+ #This must be rgba to match array format (uint32)
+ '-pix_fmt', 'rgba',
+ #frame rate of video
+ '-r', '29.97',
+ #Indicate that the input to ffmpeg comes from a pipe
+ '-i', '-',
+ # Tells FFMPEG not to expect any audio
+ '-an',
+ #Setup video encoder
+        #Use any encoder you like that is available from ffmpeg
+ '-vcodec', 'libx264', '-preset', 'ultrafast', '-qp', '0',
+ '-pix_fmt', 'yuv420p',
+ #Name of the output
+ 'bolshoiplanck2.mkv' ],
+ stdin=sp.PIPE,stdout=sp.PIPE)
+
+
+#Now we loop and produce 500 frames
+for k in range (0,500) :
+ #update the scene resulting in a new image
+ ts.update()
+
+ #get the image array from the ray caster
+ array = ts.source.get_results()
+
+ #send the image array to ffmpeg
+ array.tofile(pipe.stdin)
+
+ #rotate the scene by 0.01 rads in x,y & z
+ ts.camera.rotateX(0.01)
+ ts.camera.rotateZ(0.01)
+ ts.camera.rotateY(0.01)
+
+ #zoom in 0.01% for a total of a 5% zoom
+ ts.camera.zoom(0.01)
+
+
+#Close the pipe to ffmpeg
+pipe.terminate()
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/visualizing/hardware_volume_rendering.rst
--- /dev/null
+++ b/doc/source/visualizing/hardware_volume_rendering.rst
@@ -0,0 +1,86 @@
+.. _hardware_volume_rendering:
+
+Hardware Volume Rendering on NVidia Graphics Cards
+--------------------------------------------------
+
+Theia is a hardware volume renderer that takes advantage of NVidia's CUDA
+platform to perform ray casting on GPUs instead of the CPU.
+
+Only unigrid rendering is supported, but yt provides a grid mapping function
+to get unigrid data from AMR or SPH formats; see :ref:`extract_frb`.
+
+System Requirements
++++++++++++++++++++
+
+* An NVidia graphics card. The memory limit of the graphics card sets the
+  limit on the size of the data source.
+* CUDA 5 or later.
+* The environment variable ``CUDA_SAMPLES`` must be set, pointing to the
+  ``common/inc`` samples shipped with CUDA. The following shows an example
+  in bash with CUDA 5.5 installed in ``/usr/local``::
+
+    export CUDA_SAMPLES=/usr/local/cuda-5.5/samples/common/inc
+
+* PyCUDA must also be installed to use Theia. It can be installed following
+  these instructions::
+
+    git clone --recursive http://git.tiker.net/trees/pycuda.git
+    cd pycuda
+    python configure.py
+    python setup.py install
+
+
+Tutorial
+++++++++
+
+Currently rendering only works on uniform grids. Here is an example
+on a 1024x1024x1024 cube of float32 scalars.
+
+.. code-block:: python
+
+ from yt.visualization.volume_rendering.theia.scene import TheiaScene
+    from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
+ import numpy as np
+
+ #load 3D numpy array of float32
+ volume = np.load("/home/bogert/log_densities_1024.npy")
+
+ scene = TheiaScene( volume = volume, raycaster = FrontToBackRaycaster() )
+
+ scene.camera.rotateX(1.0)
+ scene.update()
+
+ surface = scene.get_results()
+    #surface now contains a 2D image array of int32 rgba values
+
+.. _the-theiascene-interface:
+
+The TheiaScene Interface
+++++++++++++++++++++++++
+
+A TheiaScene object provides a high-level entry point for controlling the
+raycaster's view onto the data. The class
+:class:`~yt.visualization.volume_rendering.theia.TheiaScene` encapsulates a
+Camera object and a TheiaSource that in turn encapsulates a volume. The
+:class:`~yt.visualization.volume_rendering.theia.Camera` provides controls for
+rotating, translating, and zooming into the volume. Using the
+:class:`~yt.visualization.volume_rendering.theia.TheiaSource` automatically
+transfers the volume to the graphics card's texture memory.
+
+Example Cookbooks
++++++++++++++++++
+
+OpenGL Example for interactive volume rendering:
+
+.. literalinclude:: opengl_volume_rendering.py
+
+.. warning:: Frame rate will suffer significantly from stereoscopic rendering,
+   which is roughly 2x slower since the volume must be rendered twice.
+
+OpenGL Stereoscopic Example:
+
+.. literalinclude:: opengl_stereo_volume_rendering.py
+
+Pseudo-Realtime video rendering with ffmpeg:
+
+.. literalinclude:: ffmpeg_volume_rendering.py
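
Putting the camera controls from the interface section together (a sketch that assumes a CUDA-capable card; the random stand-in volume and the render size are illustrative):

    import numpy as np
    from yt.visualization.volume_rendering.theia.scene import TheiaScene
    from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster

    # stand-in unigrid data; any float32 cube works
    volume = np.random.random((64, 64, 64)).astype(np.float32)
    scene = TheiaScene(volume=volume,
                       raycaster=FrontToBackRaycaster(size=(512, 512)))

    # rotate, translate, and zoom the camera, then re-render
    scene.camera.rotateX(0.5)
    scene.camera.translateX(0.1)
    scene.camera.zoom(10.0)
    scene.update()

    image = scene.get_results()   # 2D array of rgba pixel values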
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/visualizing/index.rst
--- a/doc/source/visualizing/index.rst
+++ b/doc/source/visualizing/index.rst
@@ -10,6 +10,7 @@
callbacks
manual_plotting
volume_rendering
+ hardware_volume_rendering
sketchfab
mapserver
streamlines
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/visualizing/opengl_stereo_volume_rendering.py
--- /dev/null
+++ b/doc/source/visualizing/opengl_stereo_volume_rendering.py
@@ -0,0 +1,370 @@
+from OpenGL.GL import *
+from OpenGL.GLUT import *
+from OpenGL.GLU import *
+from OpenGL.GL.ARB.vertex_buffer_object import *
+
+import sys, time
+import numpy as np
+import pycuda.driver as cuda_driver
+import pycuda.gl as cuda_gl
+
+from yt.visualization.volume_rendering.theia.scene import TheiaScene
+from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
+from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
+from yt.visualization.color_maps import *
+
+import numexpr as ne
+
+window = None # Number of the glut window.
+rot_enabled = True
+
+#Theia Scene
+ts = None
+
+#RAY CASTING values
+c_tbrightness = 1.0
+c_tdensity = 0.05
+
+output_texture = None # pointer to offscreen render target
+
+leftButton = False
+middleButton = False
+rightButton = False
+
+#Screen width and height
+width = 1920
+height = 1080
+
+eyesep = 0.1
+
+(pbo, pycuda_pbo) = [None]*2
+(rpbo, rpycuda_pbo) = [None]*2
+
+#create 2 PBO for stereo scopic rendering
+def create_PBO(w, h):
+ global pbo, pycuda_pbo, rpbo, rpycuda_pbo
+ num_texels = w*h
+ array = np.zeros((num_texels, 3),np.float32)
+
+ pbo = glGenBuffers(1)
+ glBindBuffer(GL_ARRAY_BUFFER, pbo)
+ glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
+ pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
+
+ rpbo = glGenBuffers(1)
+ glBindBuffer(GL_ARRAY_BUFFER, rpbo)
+ glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
+ rpycuda_pbo = cuda_gl.RegisteredBuffer(long(rpbo))
+
+def destroy_PBO(self):
+ global pbo, pycuda_pbo, rpbo, rpycuda_pbo
+ glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
+ glDeleteBuffers(1, long(pbo));
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
+ pbo,pycuda_pbo = [None]*2
+
+ glBindBuffer(GL_ARRAY_BUFFER, long(rpbo))
+ glDeleteBuffers(1, long(rpbo));
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
+ rpbo,rpycuda_pbo = [None]*2
+
+#consistent with C initPixelBuffer()
+def create_texture(w,h):
+ global output_texture
+ output_texture = glGenTextures(1)
+ glBindTexture(GL_TEXTURE_2D, output_texture)
+ # set basic parameters
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
+ # buffer data
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
+ w, h, 0, GL_RGB, GL_FLOAT, None)
+
+#consistent with C initPixelBuffer()
+def destroy_texture():
+ global output_texture
+ glDeleteTextures(output_texture);
+ output_texture = None
+
+def init_gl(w = 512 , h = 512):
+ Width, Height = (w, h)
+
+ glClearColor(0.1, 0.1, 0.5, 1.0)
+ glDisable(GL_DEPTH_TEST)
+
+ #matrix functions
+ glViewport(0, 0, Width, Height)
+ glMatrixMode(GL_PROJECTION);
+ glLoadIdentity();
+
+ #matrix functions
+ gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
+ glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
+
+def resize(Width, Height):
+ global width, height
+ (width, height) = Width, Height
+ glViewport(0, 0, Width, Height) # Reset The Current Viewport And Perspective Transformation
+ glMatrixMode(GL_PROJECTION)
+ glLoadIdentity()
+ gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
+
+
+def do_tick():
+ global time_of_last_titleupdate, frame_counter, frames_per_second
+ if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
+ frames_per_second = frame_counter # Save The FPS
+ frame_counter = 0 # Reset The FPS Counter
+ szTitle = "%d FPS" % (frames_per_second )
+ glutSetWindowTitle ( szTitle )
+ time_of_last_titleupdate = time.clock () * 1000.0
+ frame_counter += 1
+
+oldMousePos = [ 0, 0 ]
+def mouseButton( button, mode, x, y ):
+ """Callback function (mouse button pressed or released).
+
+ The current and old mouse positions are stored in
+ a global renderParam and a global list respectively"""
+
+ global leftButton, middleButton, rightButton, oldMousePos
+
+ if button == GLUT_LEFT_BUTTON:
+ if mode == GLUT_DOWN:
+ leftButton = True
+ else:
+ leftButton = False
+
+ if button == GLUT_MIDDLE_BUTTON:
+ if mode == GLUT_DOWN:
+ middleButton = True
+ else:
+ middleButton = False
+
+ if button == GLUT_RIGHT_BUTTON:
+ if mode == GLUT_DOWN:
+ rightButton = True
+ else:
+ rightButton = False
+
+ oldMousePos[0], oldMousePos[1] = x, y
+ glutPostRedisplay( )
+
+def mouseMotion( x, y ):
+ """Callback function (mouse moved while button is pressed).
+
+ The current and old mouse positions are stored in
+ a global renderParam and a global list respectively.
+ The global translation vector is updated according to
+ the movement of the mouse pointer."""
+
+ global ts, leftButton, middleButton, rightButton, oldMousePos
+ deltaX = x - oldMousePos[ 0 ]
+ deltaY = y - oldMousePos[ 1 ]
+
+ factor = 0.001
+
+ if leftButton == True:
+ ts.camera.rotateX( - deltaY * factor)
+ ts.camera.rotateY( - deltaX * factor)
+ if middleButton == True:
+ ts.camera.translateX( deltaX* 2.0 * factor)
+ ts.camera.translateY( - deltaY* 2.0 * factor)
+ if rightButton == True:
+ ts.camera.scale += deltaY * factor
+
+ oldMousePos[0], oldMousePos[1] = x, y
+ glutPostRedisplay( )
+
+def keyPressed(*args):
+ global c_tbrightness, c_tdensity, eyesep
+ # If escape is pressed, kill everything.
+ if args[0] == '\033':
+ print 'Closing..'
+ destroy_PBOs()
+ destroy_texture()
+ exit()
+
+ #change the brightness of the scene
+ elif args[0] == ']':
+ c_tbrightness += 0.025
+ elif args[0] == '[':
+ c_tbrightness -= 0.025
+
+ #change the density scale
+ elif args[0] == ';':
+ c_tdensity -= 0.001
+ elif args[0] == '\'':
+ c_tdensity += 0.001
+
+ #change the transfer scale
+ elif args[0] == '-':
+ eyesep -= 0.01
+ elif args[0] == '=':
+ eyesep += 0.01
+
+def idle():
+ glutPostRedisplay()
+
+def display():
+ try:
+ #process left eye
+ process_image()
+ display_image()
+
+ #process right eye
+ process_image(eye = False)
+ display_image(eye = False)
+
+
+ glutSwapBuffers()
+
+ except:
+ from traceback import print_exc
+ print_exc()
+ from os import _exit
+ _exit(0)
+
+def process(eye = True):
+ """ Use PyCuda """
+ global ts, pycuda_pbo, rpycuda_pbo, eyesep, c_tbrightness, c_tdensity
+
+ ts.get_raycaster().set_opacity(c_tdensity)
+ ts.get_raycaster().set_brightness(c_tbrightness)
+
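+ # shift the camera by eyesep for this eye, render into that eye's PBO, then shift back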
+ if (eye) :
+ ts.camera.translateX(-eyesep)
+ dest_mapping = pycuda_pbo.map()
+ (dev_ptr, size) = dest_mapping.device_ptr_and_size()
+ ts.get_raycaster().surface.device_ptr = dev_ptr
+ ts.update()
+ dest_mapping.unmap()
+ ts.camera.translateX(eyesep)
+ else :
+ ts.camera.translateX(eyesep)
+ dest_mapping = rpycuda_pbo.map()
+ (dev_ptr, size) = dest_mapping.device_ptr_and_size()
+ ts.get_raycaster().surface.device_ptr = dev_ptr
+ ts.update()
+ dest_mapping.unmap()
+ ts.camera.translateX(-eyesep)
+
+
+def process_image(eye = True):
+ """ copy image and process using CUDA """
+ global output_texture, pbo, rpbo, width, height
+ # run the Cuda kernel
+ process(eye)
+ # download texture from PBO
+ if (eye) :
+ glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
+ glBindTexture(GL_TEXTURE_2D, output_texture)
+
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
+ width, height, 0,
+ GL_RGB, GL_FLOAT, None)
+ else :
+ glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(rpbo))
+ glBindTexture(GL_TEXTURE_2D, output_texture)
+
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
+ width, height, 0,
+ GL_RGB, GL_FLOAT, None)
+
+def display_image(eye = True):
+ """ render a screen sized quad """
+ global width, height
+ glDisable(GL_DEPTH_TEST)
+ glDisable(GL_LIGHTING)
+ glEnable(GL_TEXTURE_2D)
+ glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
+
+ #matrix functions should be moved
+ glMatrixMode(GL_PROJECTION)
+ glPushMatrix()
+ glLoadIdentity()
+ glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
+ glMatrixMode( GL_MODELVIEW)
+ glLoadIdentity()
+ glViewport(0, 0, width, height)
+
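+ # quad-buffered stereo: draw the left-eye quad into GL_BACK_LEFT, the right-eye quad into GL_BACK_RIGHT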
+ if (eye) :
+ glDrawBuffer(GL_BACK_LEFT)
+ else :
+ glDrawBuffer(GL_BACK_RIGHT)
+
+ glBegin(GL_QUADS)
+ glTexCoord2f(0.0, 0.0)
+ glVertex3f(-1.0, -1.0, 0.5)
+ glTexCoord2f(1.0, 0.0)
+ glVertex3f(1.0, -1.0, 0.5)
+ glTexCoord2f(1.0, 1.0)
+ glVertex3f(1.0, 1.0, 0.5)
+ glTexCoord2f(0.0, 1.0)
+ glVertex3f(-1.0, 1.0, 0.5)
+ glEnd()
+
+ glMatrixMode(GL_PROJECTION)
+ glPopMatrix()
+
+ glDisable(GL_TEXTURE_2D)
+ glBindTexture(GL_TEXTURE_2D, 0)
+ glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
+ glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
+
+
+#note we may need to init cuda_gl here and pass it to camera
+def main():
+ global window, ts, width, height
+ (width, height) = (1920, 1080)
+
+ glutInit(sys.argv)
+ glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH | GLUT_STEREO)
+ glutInitWindowSize(width, height)
+ glutInitWindowPosition(0, 0)
+ window = glutCreateWindow("Stereo Volume Rendering")
+
+
+ glutDisplayFunc(display)
+ glutIdleFunc(idle)
+ glutReshapeFunc(resize)
+ glutMouseFunc( mouseButton )
+ glutMotionFunc( mouseMotion )
+ glutKeyboardFunc(keyPressed)
+ init_gl(width, height)
+
+ # create texture for blitting to screen
+ create_texture(width, height)
+
+ import pycuda.gl.autoinit
+ import pycuda.gl
+ cuda_gl = pycuda.gl
+
+ create_PBO(width, height)
+ # ----- Load and Set Volume Data -----
+
+ density_grid = np.load("/home/bogert/dd150_log_densities.npy")
+
+ mi, ma = 21.5, 24.5
+ bins = 5000
+ tf = ColorTransferFunction( (mi, ma), bins)
+ tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
+
+ ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
+
+ ts.get_raycaster().set_sample_size(0.01)
+ ts.get_raycaster().set_max_samples(5000)
+
+ glutMainLoop()
+
+def scale_func(v, mi, ma):
+ return np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
+
+# Print message to console, and kick off the main to get it rolling.
+if __name__ == "__main__":
+ print "Hit ESC key to quit, 'a' to toggle animation, and 'e' to toggle cuda"
+ main()
diff -r 7effaf0eae9e78eb984b28e0e4bc2902d11c230f -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 doc/source/visualizing/opengl_volume_rendering.py
--- /dev/null
+++ b/doc/source/visualizing/opengl_volume_rendering.py
@@ -0,0 +1,322 @@
+from OpenGL.GL import *
+from OpenGL.GLUT import *
+from OpenGL.GLU import *
+from OpenGL.GL.ARB.vertex_buffer_object import *
+
+import sys, time
+import numpy as np
+import pycuda.driver as cuda_driver
+import pycuda.gl as cuda_gl
+
+from yt.visualization.volume_rendering.theia.scene import TheiaScene
+from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
+from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
+from yt.visualization.color_maps import *
+
+import numexpr as ne
+
+window = None # Number of the glut window.
+rot_enabled = True
+
+#Theia Scene
+ts = None
+
+#RAY CASTING values
+c_tbrightness = 1.0
+c_tdensity = 0.05
+
+output_texture = None # pointer to offscreen render target
+
+leftButton = False
+middleButton = False
+rightButton = False
+
+#Screen width and height
+width = 1024
+height = 1024
+
+#FPS bookkeeping used by do_tick()
+time_of_last_titleupdate = 0.0
+frame_counter = 0
+frames_per_second = 0
+
+eyesep = 0.1
+
+(pbo, pycuda_pbo) = [None]*2
+
+def create_PBO(w, h):
+ global pbo, pycuda_pbo
+ num_texels = w*h
+ array = np.zeros((w,h,3),np.uint32)
+
+ pbo = glGenBuffers(1)
+ glBindBuffer(GL_ARRAY_BUFFER, pbo)
+ glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
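+ # register the GL buffer with CUDA so the raycaster can write straight into it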
+ pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
+
+def destroy_PBO():
+ global pbo, pycuda_pbo
+ glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
+ glDeleteBuffers(1, long(pbo))
+ glBindBuffer(GL_ARRAY_BUFFER, 0)
+ pbo,pycuda_pbo = [None]*2
+
+#consistent with C initPixelBuffer()
+def create_texture(w,h):
+ global output_texture
+ output_texture = glGenTextures(1)
+ glBindTexture(GL_TEXTURE_2D, output_texture)
+ # set basic parameters
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
+ # buffer data
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
+ w, h, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, None)
+
+#consistent with C initPixelBuffer()
+def destroy_texture():
+ global output_texture
+ glDeleteTextures(output_texture)
+ output_texture = None
+
+def init_gl(w=512, h=512):
+ Width, Height = (w, h)
+
+ glClearColor(0.1, 0.1, 0.5, 1.0)
+ glDisable(GL_DEPTH_TEST)
+
+ #matrix functions
+ glViewport(0, 0, Width, Height)
+ glMatrixMode(GL_PROJECTION)
+ glLoadIdentity()
+
+ #matrix functions
+ gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
+ glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
+
+def resize(Width, Height):
+ global width, height
+ (width, height) = Width, Height
+ glViewport(0, 0, Width, Height) # Reset The Current Viewport And Perspective Transformation
+ glMatrixMode(GL_PROJECTION)
+ glLoadIdentity()
+ gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
+
+
+def do_tick():
+ global time_of_last_titleupdate, frame_counter, frames_per_second
+ if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
+ frames_per_second = frame_counter # Save The FPS
+ frame_counter = 0 # Reset The FPS Counter
+ szTitle = "%d FPS" % (frames_per_second )
+ glutSetWindowTitle ( szTitle )
+ time_of_last_titleupdate = time.clock () * 1000.0
+ frame_counter += 1
+
+oldMousePos = [ 0, 0 ]
+def mouseButton( button, mode, x, y ):
+ """Callback function (mouse button pressed or released).
+
+ The current and old mouse positions are stored in
+ a global renderParam and a global list respectively"""
+
+ global leftButton, middleButton, rightButton, oldMousePos
+
+ if button == GLUT_LEFT_BUTTON:
+ if mode == GLUT_DOWN:
+ leftButton = True
+ else:
+ leftButton = False
+
+ if button == GLUT_MIDDLE_BUTTON:
+ if mode == GLUT_DOWN:
+ middleButton = True
+ else:
+ middleButton = False
+
+ if button == GLUT_RIGHT_BUTTON:
+ if mode == GLUT_DOWN:
+ rightButton = True
+ else:
+ rightButton = False
+
+ oldMousePos[0], oldMousePos[1] = x, y
+ glutPostRedisplay( )
+
+def mouseMotion( x, y ):
+ """Callback function (mouse moved while button is pressed).
+
+ The current and old mouse positions are stored in
+ a global renderParam and a global list respectively.
+ The global translation vector is updated according to
+ the movement of the mouse pointer."""
+
+ global ts, leftButton, middleButton, rightButton, oldMousePos
+ deltaX = x - oldMousePos[ 0 ]
+ deltaY = y - oldMousePos[ 1 ]
+
+ factor = 0.001
+
+ if leftButton == True:
+ ts.camera.rotateX( - deltaY * factor)
+ ts.camera.rotateY( - deltaX * factor)
+ if middleButton == True:
+ ts.camera.translateX( deltaX* 2.0 * factor)
+ ts.camera.translateY( - deltaY* 2.0 * factor)
+ if rightButton == True:
+ ts.camera.scale += deltaY * factor
+
+ oldMousePos[0], oldMousePos[1] = x, y
+ glutPostRedisplay( )
+
+def keyPressed(*args):
+ global c_tbrightness, c_tdensity
+ # If escape is pressed, kill everything.
+ if args[0] == '\033':
+ print 'Closing..'
+ destroy_PBO()
+ destroy_texture()
+ exit()
+
+ #change the brightness of the scene
+ elif args[0] == ']':
+ c_tbrightness += 0.025
+ elif args[0] == '[':
+ c_tbrightness -= 0.025
+
+ #change the density scale
+ elif args[0] == ';':
+ c_tdensity -= 0.001
+ elif args[0] == '\'':
+ c_tdensity += 0.001
+
+def idle():
+ glutPostRedisplay()
+
+def display():
+ try:
+ #process left eye
+ process_image()
+ display_image()
+
+ glutSwapBuffers()
+
+ except:
+ from traceback import print_exc
+ print_exc()
+ from os import _exit
+ _exit(0)
+
+def process(eye = True):
+ global ts, pycuda_pbo, eyesep, c_tbrightness, c_tdensity
+
+ ts.get_raycaster().set_opacity(c_tdensity)
+ ts.get_raycaster().set_brightness(c_tbrightness)
+
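+ # map the GL pixel buffer into CUDA's address space and point the raycaster's output surface at it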
+ dest_mapping = pycuda_pbo.map()
+ (dev_ptr, size) = dest_mapping.device_ptr_and_size()
+ ts.get_raycaster().surface.device_ptr = dev_ptr
+ ts.update()
+ # ts.get_raycaster().cast()
+ dest_mapping.unmap()
+
+
+def process_image():
+ """ copy image and process using CUDA """
+ global output_texture, pbo, width, height
+ # run the Cuda kernel
+ process()
+ # download texture from PBO
+ glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
+ glBindTexture(GL_TEXTURE_2D, output_texture)
+
+ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
+ width, height, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, None)
+
+def display_image(eye = True):
+ """ render a screen sized quad """
+ global width, height
+ glDisable(GL_DEPTH_TEST)
+ glDisable(GL_LIGHTING)
+ glEnable(GL_TEXTURE_2D)
+ glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
+
+ #matrix functions should be moved
+ glMatrixMode(GL_PROJECTION)
+ glPushMatrix()
+ glLoadIdentity()
+ glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
+ glMatrixMode( GL_MODELVIEW)
+ glLoadIdentity()
+ glViewport(0, 0, width, height)
+
+ glBegin(GL_QUADS)
+ glTexCoord2f(0.0, 0.0)
+ glVertex3f(-1.0, -1.0, 0.5)
+ glTexCoord2f(1.0, 0.0)
+ glVertex3f(1.0, -1.0, 0.5)
+ glTexCoord2f(1.0, 1.0)
+ glVertex3f(1.0, 1.0, 0.5)
+ glTexCoord2f(0.0, 1.0)
+ glVertex3f(-1.0, 1.0, 0.5)
+ glEnd()
+
+ glMatrixMode(GL_PROJECTION)
+ glPopMatrix()
+
+ glDisable(GL_TEXTURE_2D)
+ glBindTexture(GL_TEXTURE_2D, 0)
+ glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
+ glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
+
+
+#note we may need to init cuda_gl here and pass it to camera
+def main():
+ global window, ts, width, height
+ (width, height) = (1024, 1024)
+
+ glutInit(sys.argv)
+ glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH )
+ glutInitWindowSize(width, height)
+ glutInitWindowPosition(0, 0)
+ window = glutCreateWindow("Stereo Volume Rendering")
+
+
+ glutDisplayFunc(display)
+ glutIdleFunc(idle)
+ glutReshapeFunc(resize)
+ glutMouseFunc( mouseButton )
+ glutMotionFunc( mouseMotion )
+ glutKeyboardFunc(keyPressed)
+ init_gl(width, height)
+
+ # create texture for blitting to screen
+ create_texture(width, height)
+
+ import pycuda.gl.autoinit
+ import pycuda.gl
+ cuda_gl = pycuda.gl
+
+ create_PBO(width, height)
+ # ----- Load and Set Volume Data -----
+
+ density_grid = np.load("/home/bogert/dd150_log_densities.npy")
+
+ mi, ma = 21.5, 24.5
+ bins = 5000
+ tf = ColorTransferFunction( (mi, ma), bins)
+ tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
+
+ ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
+
+ ts.get_raycaster().set_sample_size(0.01)
+ ts.get_raycaster().set_max_samples(5000)
+ ts.update()
+
+ glutMainLoop()
+
+def scale_func(v, mi, ma):
+ return np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
+
+# Print message to console, and kick off the main to get it rolling.
+if __name__ == "__main__":
+ print "Hit ESC key to quit, 'a' to toggle animation, and 'e' to toggle cuda"
+ main()
This diff is so big that we needed to truncate the remainder.
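The non-interactive core of the script above reduces to roughly the following sketch. The OpenGL/PBO interop is elided, the volume path and bounds are placeholders, and the pycuda.autoinit import is an assumption about how a CUDA context is created when the GL variant is not used:

    import numpy as np
    import pycuda.autoinit  # assumed: a CUDA context must exist before casting

    from yt.visualization.volume_rendering.theia.scene import TheiaScene
    from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
    from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction

    density_grid = np.load("my_log_densities.npy")  # placeholder volume

    mi, ma = 21.5, 24.5
    tf = ColorTransferFunction((mi, ma), 5000)
    tf.map_to_colormap(mi, ma, colormap="algae")

    ts = TheiaScene(volume=density_grid,
                    raycaster=FrontToBackRaycaster(size=(512, 512), tf=tf))
    ts.get_raycaster().set_sample_size(0.01)
    ts.get_raycaster().set_max_samples(5000)
    ts.update()  # cast rays; the image lands in the raycaster's surface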
https://bitbucket.org/yt_analysis/yt/commits/61c6d3c0c55a/
Changeset: 61c6d3c0c55a
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:19:22
Summary: Do not overwrite unit information in the Cosmology object.
"fixes" an issue added in 59a415b, still not 100% sure what's going on here.
Affected #: 1 file
diff -r d9bccb3e0ecde8a5069f2be20e18c2b23ee84ea5 -r 61c6d3c0c55a5d9eace24c695a9041cf85cdc714 yt/utilities/cosmology.py
--- a/yt/utilities/cosmology.py
+++ b/yt/utilities/cosmology.py
@@ -67,13 +67,13 @@
self.omega_curvature = omega_curvature
if unit_registry is None:
unit_registry = UnitRegistry()
+ unit_registry.modify("h", hubble_constant)
+ for my_unit in ["m", "pc", "AU", "au"]:
+ new_unit = "%scm" % my_unit
+ # technically not true, but distances here are actually comoving
+ unit_registry.add(new_unit, unit_registry.lut[my_unit][0],
+ dimensions.length, "\\rm{%s}/(1+z)" % my_unit)
self.unit_registry = unit_registry
- self.unit_registry.modify("h", hubble_constant)
- for my_unit in ["m", "pc", "AU", "au"]:
- new_unit = "%scm" % my_unit
- # technically not true, but distances here are actually comoving
- self.unit_registry.add(new_unit, self.unit_registry.lut[my_unit][0],
- dimensions.length, "\\rm{%s}/(1+z)" % my_unit)
self.hubble_constant = self.quan(hubble_constant, "100*km/s/Mpc")
def hubble_distance(self):
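For context, a minimal Cosmology usage sketch (the value is a placeholder; the constructor keywords are read off the surrounding diff):

    from yt.utilities.cosmology import Cosmology

    co = Cosmology(hubble_constant=0.7)
    # "h" and comoving length units such as "pccm" are now defined on the
    # registry exactly once, before it is attached to the object
    print co.hubble_distance()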
https://bitbucket.org/yt_analysis/yt/commits/4608b0ef8358/
Changeset: 4608b0ef8358
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:20:33
Summary: Fixing ds/pf confusion in the base plot container type. Closes #860.
Affected #: 1 file
diff -r 61c6d3c0c55a5d9eace24c695a9041cf85cdc714 -r 4608b0ef83589bc4c3e29f042a3085e1d12762f1 yt/visualization/plot_container.py
--- a/yt/visualization/plot_container.py
+++ b/yt/visualization/plot_container.py
@@ -268,17 +268,18 @@
pass
def _switch_ds(self, new_ds, data_source=None):
- ds = self.data_source
- name = ds._type_name
- kwargs = dict((n, getattr(ds, n)) for n in ds._con_args)
+ old_object = self.data_source
+ name = old_object._type_name
+ kwargs = dict((n, getattr(old_object, n))
+ for n in old_object._con_args)
if data_source is not None:
if name != "proj":
raise RuntimeError("The data_source keyword argument "
"is only defined for projections.")
kwargs['data_source'] = data_source
- new_ds = getattr(new_ds, name)(**kwargs)
+ new_object = getattr(new_ds, name)(**kwargs)
self.ds = new_ds
- self.data_source = new_ds
+ self.data_source = new_object
self._data_valid = self._plot_valid = False
self._recreate_frb()
self._setup_plots()
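The effect, as a hypothetical sketch (_switch_ds is an internal helper, normally exercised when stepping a plot through a time series; the output names are placeholders):

    import yt

    slc = yt.SlicePlot(yt.load("DD0030/DD0030"), "z", "density")
    slc.save("t0")
    # previously data_source was overwritten with the new *dataset*;
    # now an equivalent data object is rebuilt on the new dataset
    slc._switch_ds(yt.load("DD0040/DD0040"))
    slc.save("t1")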
https://bitbucket.org/yt_analysis/yt/commits/16d1ce358f40/
Changeset: 16d1ce358f40
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 08:20:58
Summary: Draw all grids in a dataset, if it has any. This is hacky.
Affected #: 1 file
diff -r 4608b0ef83589bc4c3e29f042a3085e1d12762f1 -r 16d1ce358f40742f4c55c1e597c84be1ff371401 yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -267,10 +267,9 @@
>>> write_bitmap(im, 'render_with_grids.png')
"""
- region = self.ds.region((self.re + self.le) / 2.0,
- self.le, self.re)
- corners = region.grid_corners
- levels = region.grid_levels[:,0]
+ index = self.ds.index
+ corners = index.grid_corners
+ levels = index.grid_levels[:,0]
if max_level is not None:
subset = levels <= max_level
https://bitbucket.org/yt_analysis/yt/commits/ed9657b8c624/
Changeset: ed9657b8c624
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-20 19:10:51
Summary: Responding to PR comment.
Affected #: 1 file
diff -r 16d1ce358f40742f4c55c1e597c84be1ff371401 -r ed9657b8c6249bee85aaaf73fece6ef3de1095f4 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -36,10 +36,10 @@
.. code-block:: python
- add_field("thermal_energy", function=_ThermalEnergy,
+ add_field("specific_thermal_energy", function=_specific_thermal_energy,
units="ergs/g")
-where ``_ThermalEnergy`` is a python function that defines the field.
+where ``_specific_thermal_energy`` is a python function that defines the field.
.. _accessing-fields:
https://bitbucket.org/yt_analysis/yt/commits/f3196548231f/
Changeset: f3196548231f
Branch: yt-3.0
User: ngoldbaum
Date: 2014-07-21 21:30:33
Summary: Only get grid corners for blocks of the camera's region.
Affected #: 1 file
diff -r ed9657b8c6249bee85aaaf73fece6ef3de1095f4 -r f3196548231fe5e61b6a31730f928d369b1ca00f yt/visualization/volume_rendering/camera.py
--- a/yt/visualization/volume_rendering/camera.py
+++ b/yt/visualization/volume_rendering/camera.py
@@ -267,9 +267,24 @@
>>> write_bitmap(im, 'render_with_grids.png')
"""
- index = self.ds.index
- corners = index.grid_corners
- levels = index.grid_levels[:,0]
+ region = self.data_source
+ corners = []
+ levels = []
+ for block, mask in region.blocks:
+ block_corners = np.array([
+ [block.LeftEdge[0], block.LeftEdge[1], block.LeftEdge[2]],
+ [block.RightEdge[0], block.LeftEdge[1], block.LeftEdge[2]],
+ [block.RightEdge[0], block.RightEdge[1], block.LeftEdge[2]],
+ [block.LeftEdge[0], block.RightEdge[1], block.LeftEdge[2]],
+ [block.LeftEdge[0], block.LeftEdge[1], block.RightEdge[2]],
+ [block.RightEdge[0], block.LeftEdge[1], block.RightEdge[2]],
+ [block.RightEdge[0], block.RightEdge[1], block.RightEdge[2]],
+ [block.LeftEdge[0], block.RightEdge[1], block.RightEdge[2]],
+ ], dtype='float64')
+ corners.append(block_corners)
+ levels.append(block.Level)
+ corners = np.dstack(corners)
+ levels = np.array(levels)
if max_level is not None:
subset = levels <= max_level
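A rough usage sketch for the method being touched here (the dataset path, transfer-function bounds, and the ds.camera spelling follow the yt-3.0 docstrings of the time; treat them as assumptions): grid outlines are now built only from the blocks of the camera's own data source, and max_level still filters them.

    import yt

    ds = yt.load("Enzo_64/RD0006/RedshiftOutput0006")  # placeholder dataset
    tf = yt.ColorTransferFunction((-30.0, -22.0))      # placeholder bounds
    cam = ds.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], 0.7, 512, tf)
    im = cam.snapshot()
    cam.draw_grids(im, max_level=2)  # outline only blocks with level <= 2
    yt.write_bitmap(im, "render_with_grids.png")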
https://bitbucket.org/yt_analysis/yt/commits/c97c0f6168fd/
Changeset: c97c0f6168fd
Branch: yt-3.0
User: chummels
Date: 2014-07-21 22:33:21
Summary: Merged in ngoldbaum/yt/yt-3.0 (pull request #1044)
Doc fix patchbomb
Affected #: 45 files
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/cheatsheet.tex
--- a/doc/cheatsheet.tex
+++ b/doc/cheatsheet.tex
@@ -3,7 +3,7 @@
\usepackage{calc}
\usepackage{ifthen}
\usepackage[landscape]{geometry}
-\usepackage[colorlinks = true, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref}
+\usepackage[hyphens]{url}
% To make this come out properly in landscape mode, do one of the following
% 1.
@@ -101,9 +101,13 @@
Documentation \url{http://yt-project.org/doc/index.html}.
Need help? Start here \url{http://yt-project.org/doc/help/} and then
try the IRC chat room \url{http://yt-project.org/irc.html},
-or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}.
-{\bf Installing yt:} The easiest way to install yt is to use the installation script
-found on the yt homepage or the docs linked above.
+or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}. \\
+
+\subsection{Installing yt} The easiest way to install yt is to use the
+installation script found on the yt homepage or the docs linked above. If you
+already have python set up with \texttt{numpy}, \texttt{scipy},
+\texttt{matplotlib}, \texttt{h5py}, and \texttt{cython}, you can also use
+\texttt{pip install yt}
\subsection{Command Line yt}
yt, and its convenience functions, are launched from a command line prompt.
@@ -118,9 +122,8 @@
\texttt{yt stats} {\it dataset} \textemdash\ Print stats of a dataset. \\
\texttt{yt update} \textemdash\ Update yt to most recent version.\\
\texttt{yt update --all} \textemdash\ Update yt and dependencies to most recent version. \\
-\texttt{yt instinfo} \textemdash\ yt installation information. \\
+\texttt{yt version} \textemdash\ yt installation information. \\
\texttt{yt notebook} \textemdash\ Run the IPython notebook server. \\
-\texttt{yt serve} ({\it dataset}) \textemdash\ Run yt-specific web GUI ({\it dataset} is optional).\\
\texttt{yt upload\_image} {\it image.png} \textemdash\ Upload PNG image to imgur.com. \\
\texttt{yt upload\_notebook} {\it notebook.nb} \textemdash\ Upload IPython notebook to hub.yt-project.org.\\
\texttt{yt plot} {\it dataset} \textemdash\ Create a set of images.\\
@@ -132,16 +135,8 @@
paste.yt-project.org. \\
\texttt{yt pastebin\_grab} {\it identifier} \textemdash\ Print content of pastebin to
STDOUT. \\
- \texttt{yt hub\_register} \textemdash\ Register with
-hub.yt-project.org. \\
-\texttt{yt hub\_submit} \textemdash\ Submit hg repo to
-hub.yt-project.org. \\
-\texttt{yt bootstrap\_dev} \textemdash\ Bootstrap a yt
-development environment. \\
\texttt{yt bugreport} \textemdash\ Report a yt bug. \\
\texttt{yt hop} {\it dataset} \textemdash\ Run hop on a dataset. \\
-\texttt{yt rpdb} \textemdash\ Connect to running rpd
- session.
\subsection{yt Imports}
In order to use yt, Python must load the relevant yt modules into memory.
@@ -149,37 +144,40 @@
used as part of a script.
\newlength{\MyLen}
\settowidth{\MyLen}{\texttt{letterpaper}/\texttt{a4paper} \ }
-\texttt{from yt.mods import \textasteriskcentered} \textemdash\
-Load base yt modules. \\
+\texttt{import yt} \textemdash\
+Load yt. \\
\texttt{from yt.config import ytcfg} \textemdash\
Used to set yt configuration options.
- If used, must be called before importing any other module.\\
-\texttt{from yt.analysis\_modules.api import \textasteriskcentered} \textemdash\
-Load all yt analysis modules. \\
+If used, must be called before importing any other module.\\
\texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered} \textemdash\
Load halo finding modules. Other modules
are loaded in a similar way by swapping the
{\em emphasized} text.
See the \textbf{Analysis Modules} section for a listing and short descriptions of each.
-\subsection{Numpy Arrays}
-Simulation data in yt is returned in Numpy arrays. The Numpy package provides a wealth of built-in
-functions that operate on Numpy arrays. Here is a very brief list of some useful ones.
-Please see \url{http://docs.scipy.org/doc/numpy/reference/} for the full
-numpy documentation.\\
-\settowidth{\MyLen}{\texttt{multicol} }
+\subsection{YTArray}
+Simulation data in yt is returned as a YTArray. YTArray is a numpy array that
+has unit data attached to it and can automatically handle unit conversions and
+detect unit errors. Just like a numpy array, YTArray provides a wealth of
+built-in functions to calculate properties of the data in the array. Here is a
+very brief list of some useful ones.
+\settowidth{\MyLen}{\texttt{multicol} }\\
+\texttt{v = a.in\_cgs()} \textemdash\ Return the array in CGS units \\
+\texttt{v = a.in\_units('Msun/pc**3')} \textemdash\ Return the array in solar masses per cubic parsec \\
\texttt{v = a.max(), a.min()} \textemdash\ Return maximum, minimum of \texttt{a}. \\
-\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max,
+\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max,
min value of \texttt{a}.\\
\texttt{v = a[}{\it index}\texttt{]} \textemdash\ Select a single value from \texttt{a} at location {\it index}.\\
-\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between
+\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from
+\texttt{a} between
locations {\it i} to {\it j-1} saved to a new Numpy array \texttt{b} with length {\it j-i}. \\
-\texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array \texttt{sel}, of the same shape as \texttt{a},
+\texttt{sel = (a > const)} \textemdash\ Create a new boolean Numpy array
+\texttt{sel}, of the same shape as \texttt{a},
that marks which values of \texttt{a > const}. Other operators (e.g. \textless, !=, \%) work as well.\\
-\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of elements from \texttt{a} that correspond to elements of \texttt{sel}
+\texttt{b = a[sel]} \textemdash\ Create a new Numpy array \texttt{b} made up of
+elements from \texttt{a} that correspond to elements of \texttt{sel}
that are {\it True}. In the above example \texttt{b} would be all elements of \texttt{a} that are greater than \texttt{const}.\\
-\texttt{a.dump({\it filename.dat})} \textemdash\ Save \texttt{a} to the binary file {\it filename.dat}.\\
-\texttt{a = np.load({\it filename.dat})} \textemdash\ Load the contents of {\it filename.dat} into \texttt{a}.
+\texttt{a.write\_hdf5({\it filename.h5})} \textemdash\ Save \texttt{a} to the hdf5 file {\it filename.h5}.\\
\subsection{IPython Tips}
\settowidth{\MyLen}{\texttt{multicol} }
@@ -196,6 +194,7 @@
\texttt{\%hist} \textemdash\ Print recent command history.\\
\texttt{\%quickref} \textemdash\ Print IPython quick reference.\\
\texttt{\%pdb} \textemdash\ Automatically enter the Python debugger at an exception.\\
+\texttt{\%debug} \textemdash\ Drop into a debugger at the location of the last unhandled exception. \\
\texttt{\%time, \%timeit} \textemdash\ Find running time of expressions for benchmarking.\\
\texttt{\%lsmagic} \textemdash\ List all available IPython magics. Hint: \texttt{?} works with magics.\\
@@ -208,10 +207,10 @@
After that, simulation data is generally accessed in yt using {\it Data Containers} which are Python objects
that define a region of simulation space from which data should be selected.
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{ds = load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
+\texttt{ds = yt.load(}{\it dataset}\texttt{)} \textemdash\ Reference a single snapshot.\\
\texttt{dd = ds.all\_data()} \textemdash\ Select the entire volume.\\
-\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
-numpy array \texttt{a}. Similarly for other data containers.\\
+\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Copies the contents of {\it field} into the
+YTArray \texttt{a}. Similarly for other data containers.\\
\texttt{ds.field\_list} \textemdash\ A list of available fields in the snapshot. \\
\texttt{ds.derived\_field\_list} \textemdash\ A list of available derived fields
in the snapshot. \\
@@ -231,45 +230,29 @@
direction set by {\it normal},with total length
2$\times${\it height} and with radius {\it radius}. \\
- \texttt{bl = ds.boolean({\it constructor})} \textemdash\ Create a boolean data
- container. {\it constructor} is a list of pre-defined non-boolean
- data containers with nested boolean logic using the
- ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
- {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
- by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
-
\texttt{ds.save\_object(sp, {\it ``sp\_for\_later''})} \textemdash\ Save an object (\texttt{sp}) for later use.\\
\texttt{sp = ds.load\_object({\it ``sp\_for\_later''})} \textemdash\ Recover a saved object.\\
-\subsection{Defining New Fields \& Quantities}
-\texttt{yt} expects on-disk fields, fields generated on-demand and in-memory. Quantities reduce a field (e.g. "Density") defined over an object (e.g. "sphere") to get a single value (e.g. "Mass"). \\
-\texttt{def \_MetalMassMsun({\it field},{\it data})}\\
-\texttt{\hspace{4 mm} return data["Metallicity"]*data["CellMassMsun"]}\\
-\texttt{add\_field("MetalMassMsun",function=\_MetalMassMsun)}\\
-Define a new quantity; note the first function operates on grids and data objects and the second on the results of the first. \\
-\texttt{def \_TotalMass(data): }\\
-\texttt{\hspace{4 mm} baryon\_mass = data["CellMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} particle\_mass = data["ParticleMassMsun"].sum()}\\
-\texttt{\hspace{4 mm} return baryon\_mass, particle\_mass}\\
-\texttt{def \_combTotalMass(data, baryon\_mass, particle\_mass):}\\
-\texttt{\hspace{4 mm} return baryon\_mass.sum() + particle\_mass.sum()}\\
-\texttt{add\_quantity("TotalMass", function=\_TotalMass,}\\
-\texttt{\hspace{4 mm} combine\_function=\_combTotalMass, n\_ret = 2)}\\
-
-
+\subsection{Defining New Fields}
+\texttt{yt} supports on-disk fields, fields generated on-demand, and in-memory fields.
+Fields can either be created before a dataset is loaded using \texttt{add\_field}:
+\texttt{def \_metal\_mass({\it field},{\it data}):}\\
+\texttt{\hspace{4 mm} return data["metallicity"]*data["cell\_mass"]}\\
+\texttt{add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
+Or added to an existing dataset using \texttt{ds.add\_field}:
+\texttt{ds.add\_field("metal\_mass", units='g', function=\_metal\_mass)}\\
\subsection{Slices and Projections}
\settowidth{\MyLen}{\texttt{multicol} }
-\texttt{slc = SlicePlot(ds, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
-perpendicular to {\it axis} of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with
-{\it width} in code units or a (value, unit) tuple. Hint: try {\it SlicePlot?} in IPython to see additional parameters.\\
+\texttt{slc = yt.SlicePlot(ds, {\it axis or normal vector}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+perpendicular to {\it axis} (specified via 'x', 'y', or 'z') or a normal vector for an off-axis slice of {\it field} weighted by {\it weight\_field} at (code-units) {\it center} with
+{\it width} in code units or a (value, unit) tuple. Hint: try {\it yt.SlicePlot?} in IPython to see additional parameters.\\
\texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
\texttt{.save()} works similarly for the commands below.\\
-\texttt{prj = ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
-\texttt{prj = OffAxisSlicePlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off-axis slice. Note this takes an array of fields. \\
-\texttt{prj = OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
+\texttt{prj = yt.ProjectionPlot(ds, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection. \\
+\texttt{prj = yt.OffAxisProjectionPlot(ds, {\it normal}, {\it fields}, {\it center=}, {\it width=}, {\it depth=},{\it north\_vector=},{\it weight\_field=})} \textemdash Make an off axis projection. Note this takes an array of fields. \\
\subsection{Plot Annotations}
\settowidth{\MyLen}{\texttt{multicol} }
@@ -299,51 +282,37 @@
The \texttt{my\_plugins.py} file \textemdash\ Add functions, derived fields, constants, or other commonly-used Python code to yt.
-
-
\subsection{Analysis Modules}
\settowidth{\MyLen}{\texttt{multicol}}
The import name for each module is listed at the end of each description (see \textbf{yt Imports}).
\texttt{Absorption Spectrum} \textemdash\ (\texttt{absorption\_spectrum}). \\
\texttt{Clump Finder} \textemdash\ Find clumps defined by density thresholds (\texttt{level\_sets}). \\
-\texttt{Coordinate Transformation} \textemdash\ (\texttt{coordinate\_transformation}). \\
\texttt{Halo Finding} \textemdash\ Locate halos of dark matter particles (\texttt{halo\_finding}). \\
-\texttt{Halo Mass Function} \textemdash\ Find halo mass functions from data and from theory (\texttt{halo\_mass\_function}). \\
-\texttt{Halo Profiling} \textemdash\ Profile and project multiple halos (\texttt{halo\_profiler}). \\
-\texttt{Halo Merger Tree} \textemdash\ Create a database of halo mergers (\texttt{halo\_merger\_tree}). \\
\texttt{Light Cone Generator} \textemdash\ Stitch datasets together to perform analysis over cosmological volumes. \\
\texttt{Light Ray Generator} \textemdash\ Analyze the path of light rays.\\
-\texttt{Radial Column Density} \textemdash\ Calculate column densities around a point (\texttt{radial\_column\_density}). \\
\texttt{Rockstar Halo Finding} \textemdash\ Locate halos of dark matter using the Rockstar halo finder (\texttt{halo\_finding.rockstar}). \\
\texttt{Star Particle Analysis} \textemdash\ Analyze star formation history and assemble spectra (\texttt{star\_analysis}). \\
\texttt{Sunrise Exporter} \textemdash\ Export data to the sunrise visualization format (\texttt{sunrise\_export}). \\
-\texttt{Two Point Functions} \textemdash\ Two point correlations (\texttt{two\_point\_functions}). \\
\subsection{Parallel Analysis}
-\settowidth{\MyLen}{\texttt{multicol}}
-Nearly all of yt is parallelized using MPI.
-The {\it mpi4py} package must be installed for parallelism in yt.
-To install {\it pip install mpi4py} on the command line usually works.
+\settowidth{\MyLen}{\texttt{multicol}}
+Nearly all of yt is parallelized using
+MPI. The {\it mpi4py} package must be installed for parallelism in yt. To
+install it, running {\it pip install mpi4py} on the command line usually works.
Execute python in parallel similar to this:\\
-{\it mpirun -n 12 python script.py --parallel}\\
-This command may differ for each system on which you use yt;
-please consult the system documentation for details on how to run parallel applications.
+{\it mpirun -n 12 python script.py}\\
+The file \texttt{script.py} must call \texttt{yt.enable\_parallelism()} to
+turn on yt's parallelism. If this doesn't happen, all cores will execute the
+same serial yt script. This command may differ for each system on which you use
+yt; please consult the system documentation for details on how to run parallel
+applications.
-\texttt{from yt.pmods import *} \textemdash\ Load yt faster when in parallel.
-This replaces the usual \texttt{from yt.mods import *}.\\
\texttt{parallel\_objects()} \textemdash\ A way to parallelize analysis over objects
(such as halos or clumps).\\
-\subsection{Pre-Installed Versions}
-\settowidth{\MyLen}{\texttt{multicol}}
-yt is pre-installed on several supercomputer systems.
-
-\textbf{NICS Kraken} \textemdash\ {\it module load yt} \\
-
-
\subsection{Mercurial}
\settowidth{\MyLen}{\texttt{multicol}}
Please see \url{http://mercurial.selenic.com/} for the full Mercurial documentation.
@@ -365,8 +334,7 @@
\subsection{FAQ}
\settowidth{\MyLen}{\texttt{multicol}}
-\texttt{ds.field\_info[`field'].take\_log = False} \textemdash\ When plotting \texttt{field}, do not take log.
-Must enter \texttt{ds.index} before this command. \\
+\texttt{slc.set\_log('field', False)} \textemdash\ When plotting \texttt{field}, use linear scaling instead of log scaling.
%\rule{0.3\linewidth}{0.25pt}
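Taken together, the updated cheat sheet boils down to idioms like the following sketch (the dataset path and field names are placeholders):

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # placeholder dataset

    def _metal_mass(field, data):
        return data["metallicity"] * data["cell_mass"]
    ds.add_field("metal_mass", function=_metal_mass, units="g")

    ad = ds.all_data()
    a = ad["density"]                     # a YTArray: numpy array plus units
    print a.in_cgs().max()                # peak density in CGS
    print a.in_units("Msun/pc**3").min()  # same data, different units

    slc = yt.SlicePlot(ds, "z", "density")
    slc.set_log("density", False)         # linear scaling, per the new FAQ entry
    slc.save()

And the parallel pattern it now describes, launched as "mpirun -n 12 python script.py" (a sketch; piter is one way to distribute a time series over processors):

    import yt
    yt.enable_parallelism()  # without this, every MPI task runs the same serial script

    ts = yt.load("DD????/DD????")  # placeholder glob
    for ds in ts.piter():
        print ds.current_time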
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/_dq_docstrings.inc
--- a/doc/source/analyzing/_dq_docstrings.inc
+++ b/doc/source/analyzing/_dq_docstrings.inc
@@ -1,43 +1,20 @@
-.. function:: Action(action, combine_action, filter=None):
+.. function:: angular_momentum_vector()
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._Action`.)
- This function evals the string given by the action arg and uses
- the function thrown with the combine_action to combine the values.
- A filter can be thrown to be evaled to short-circuit the calculation
- if some criterion is not met.
- :param action: a string containing the desired action to be evaled.
- :param combine_action: the function used to combine the answers when done lazily.
- :param filter: a string to be evaled to serve as a data filter.
-
-
-
-.. function:: AngularMomentumVector():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._AngularMomentumVector`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.AngularMomentumVector`.)
This function returns the mass-weighted average angular momentum vector.
+.. function:: bulk_velocity():
-.. function:: BaryonSpinParameter():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._BaryonSpinParameter`.)
- This function returns the spin parameter for the baryons, but it uses
- the particles in calculating enclosed mass.
-
-
-
-.. function:: BulkVelocity():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._BulkVelocity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.BulkVelocity`.)
This function returns the mass-weighted average velocity in the object.
+.. function:: center_of_mass(use_cells=True, use_particles=False):
-.. function:: CenterOfMass(use_cells=True, use_particles=False):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._CenterOfMass`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.CenterOfMass`.)
This function returns the location of the center
of mass. By default, it computes of the *non-particle* data in the object.
@@ -51,112 +28,64 @@
-.. function:: Extrema(fields, non_zero=False, filter=None):
+.. function:: extrema(fields, non_zero=False, filter=None):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._Extrema`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.Extrema`.)
This function returns the extrema of a set of fields
:param fields: A field name, or a list of field names
:param filter: a string to be evaled to serve as a data filter.
+.. function:: max_location(field):
-.. function:: IsBound(truncate=True, include_thermal_energy=False, treecode=True, opening_angle=1.0, periodic_test=False, include_particles=True):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._IsBound`.)
- This returns whether or not the object is gravitationally bound. If this
- returns a value greater than one, it is bound, and otherwise not.
-
- Parameters
- ----------
- truncate : Bool
- Should the calculation stop once the ratio of
- gravitational:kinetic is 1.0?
- include_thermal_energy : Bool
- Should we add the energy from ThermalEnergy
- on to the kinetic energy to calculate
- binding energy?
- treecode : Bool
- Whether or not to use the treecode.
- opening_angle : Float
- The maximal angle a remote node may subtend in order
- for the treecode method of mass conglomeration may be
- used to calculate the potential between masses.
- periodic_test : Bool
- Used for testing the periodic adjustment machinery
- of this derived quantity.
- include_particles : Bool
- Should we add the mass contribution of particles
- to calculate binding energy?
-
- Examples
- --------
- >>> sp.quantities["IsBound"](truncate=False,
- ... include_thermal_energy=True, treecode=False, opening_angle=2.0)
- 0.32493
-
-
-
-.. function:: MaxLocation(field):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._MaxLocation`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.max_location`.)
This function returns the location of the maximum of a set
of fields.
+.. function:: min_location(field):
-.. function:: MinLocation(field):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._MinLocation`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.MinLocation`.)
This function returns the location of the minimum of a set
of fields.
-.. function:: ParticleSpinParameter():
+.. function:: spin_parameter(use_gas=True, use_particles=True):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._ParticleSpinParameter`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.SpinParameter`.)
This function returns the spin parameter for the baryons, but it uses
the particles in calculating enclosed mass.
+.. function:: total_mass():
-.. function:: StarAngularMomentumVector():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._StarAngularMomentumVector`.)
- This function returns the mass-weighted average angular momentum vector
- for stars.
-
-
-
-.. function:: TotalMass():
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalMass`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalMass`.)
This function takes no arguments and returns the sum of cell masses and
particle masses in the object.
+.. function:: total_quantity(fields):
-.. function:: TotalQuantity(fields):
-
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._TotalQuantity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.TotalQuantity`.)
This function sums up a given field over the entire region
:param fields: The fields to sum up
-.. function:: WeightedAverageQuantity(field, weight):
+.. function:: weighted_average_quantity(field, weight):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedAverageQuantity`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedAverageQuantity`.)
This function returns an averaged quantity.
:param field: The field to average
:param weight: The field to weight by
-.. function:: WeightedVariance(field, weight):
+.. function:: weighted_variance(field, weight):
- (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedVariance`.)
+ (This is a proxy for :func:`~yt.data_objects.derived_quantities.WeightedVariance`.)
This function returns the variance of a field.
:param field: The target field
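For reference, the renamed quantities are reached through the quantities interface on a data container; a brief sketch, reusing the sample dataset named elsewhere in these docs:

    import yt

    ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
    ad = ds.all_data()

    print ad.quantities.angular_momentum_vector()
    print ad.quantities.extrema("density")
    print ad.quantities.weighted_average_quantity("temperature", "cell_mass")
    print ad.quantities.total_mass()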
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
--- a/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
+++ b/doc/source/analyzing/analysis_modules/Halo_Analysis.ipynb
@@ -1,6 +1,7 @@
{
"metadata": {
- "name": ""
+ "name": "",
+ "signature": "sha256:e792ad188f59161aa3ff4cdbb32cad75142b2e6b4062dfa1d8c12b3172fcf4e9"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,7 +35,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from yt.analysis_modules.halo_analysis.api import *\n",
"import tempfile\n",
"import shutil\n",
@@ -44,7 +45,7 @@
"tmpdir = tempfile.mkdtemp()\n",
"\n",
"# Load the data set with the full simulation information\n",
- "data_ds = load('Enzo_64/RD0006/RedshiftOutput0006')"
+ "data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')"
],
"language": "python",
"metadata": {},
@@ -62,7 +63,7 @@
"collapsed": false,
"input": [
"# Load the rockstar data files\n",
- "halos_ds = load('rockstar_halos/halos_0.0.bin')"
+ "halos_ds = yt.load('rockstar_halos/halos_0.0.bin')"
],
"language": "python",
"metadata": {},
@@ -407,4 +408,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/PPVCube.ipynb
--- a/doc/source/analyzing/analysis_modules/PPVCube.ipynb
+++ b/doc/source/analyzing/analysis_modules/PPVCube.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:ba8b6a53571695ae1d0c236ad43875823746e979a329a9d35ab0a8b899cebbba"
+ "signature": "sha256:56a8d72735e3cc428ff04b241d4b2ce6f653019818c6fc7a4148840d99030c85"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -19,8 +19,9 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
+ "import numpy as np\n",
+ "\n",
"from yt.analysis_modules.ppv_cube.api import PPVCube"
],
"language": "python",
@@ -122,7 +123,7 @@
"data[\"velocity_y\"] = (vely, \"km/s\")\n",
"data[\"velocity_z\"] = (np.zeros((nx,ny,nz)), \"km/s\") # zero velocity in the z-direction\n",
"bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]]) # bbox of width 1 on a side with center (0,0,0)\n",
- "ds = load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
+ "ds = yt.load_uniform_grid(data, (nx,ny,nz), length_unit=(2*R,\"kpc\"), nprocs=1, bbox=bbox)"
],
"language": "python",
"metadata": {},
@@ -139,7 +140,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\",\"velocity_x\",\"velocity_y\",\"velocity_magnitude\"])"
],
"language": "python",
"metadata": {},
@@ -222,7 +223,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(\"cube.fits\")"
+ "ds = yt.load(\"cube.fits\")"
],
"language": "python",
"metadata": {},
@@ -233,7 +234,7 @@
"collapsed": false,
"input": [
"# Specifying no center gives us the center slice\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"])\n",
"slc.show()"
],
"language": "python",
@@ -248,7 +249,7 @@
"# Picking different velocities for the slices\n",
"new_center = ds.domain_center\n",
"new_center[2] = ds.spec2pixel(-1.0*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -260,7 +261,7 @@
"collapsed": false,
"input": [
"new_center[2] = ds.spec2pixel(0.7*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -272,7 +273,7 @@
"collapsed": false,
"input": [
"new_center[2] = ds.spec2pixel(-0.3*u.km/u.s)\n",
- "slc = SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"density\"], center=new_center)\n",
"slc.show()"
],
"language": "python",
@@ -290,7 +291,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "prj = ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
+ "prj = yt.ProjectionPlot(ds, \"z\", [\"density\"], proj_style=\"sum\")\n",
"prj.set_log(\"density\", True)\n",
"prj.set_zlim(\"density\", 1.0e-3, 0.2)\n",
"prj.show()"
@@ -303,4 +304,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
--- a/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
+++ b/doc/source/analyzing/analysis_modules/Particle_Trajectories.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:e4b5ea69687eb79452c16385b3a6f795b4572518dfa7f9d8a8125bd75b5fea85"
+ "signature": "sha256:5ab80c6b33a115cb88c36fde8659434d14a852dd43b0b419f2bb0c04acf66278"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -20,7 +20,7 @@
"collapsed": false,
"input": [
"%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
"import glob\n",
"from yt.analysis_modules.particle_trajectories.api import ParticleTrajectories\n",
"from yt.config import ytcfg\n",
@@ -77,7 +77,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(my_fns[0])\n",
+ "ds = yt.load(my_fns[0])\n",
"dd = ds.all_data()\n",
"indices = dd[\"particle_index\"].astype(\"int\")\n",
"print indices"
@@ -205,8 +205,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
- "slc = SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
+ "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+ "slc = yt.SlicePlot(ds, \"x\", [\"density\",\"dark_matter_density\"], center=\"max\", width=(3.0, \"Mpc\"))\n",
"slc.show()"
],
"language": "python",
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/SZ_projections.ipynb
--- a/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
+++ b/doc/source/analyzing/analysis_modules/SZ_projections.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:4745a15abb6512547b50280b92c22567f89255189fd968ca706ef7c39d48024f"
+ "signature": "sha256:e4db171b795d155870280ddbe8986f55f9a94ffb10783abf9d4cc2de3ec24894"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -89,11 +89,10 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *\n",
+ "import yt\n",
"from yt.analysis_modules.sunyaev_zeldovich.api import SZProjection\n",
"\n",
- "ds = load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
+ "ds = yt.load(\"enzo_tiny_cosmology/DD0046/DD0046\")\n",
"\n",
"freqs = [90.,180.,240.]\n",
"szprj = SZProjection(ds, freqs)"
@@ -218,14 +217,6 @@
"including coordinate information in kpc. The optional keyword\n",
"`clobber` allows a previous file to be overwritten. \n"
]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [],
- "language": "python",
- "metadata": {},
- "outputs": []
}
],
"metadata": {}
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -87,7 +87,7 @@
ds = load("DD0000")
sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
- ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+ ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
treecode=True, opening_angle=2.0)
This example will accomplish the same as the above, but will use the full
@@ -100,7 +100,7 @@
ds = load("DD0000")
sp = ds.sphere([0.5, 0.5, 0.5], radius=0.1)
- ratio = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
+ ratio = sp.quantities.is_bound(truncate=False, include_thermal_energy=True,
treecode=False)
Here the treecode method is specified for clump finding (this is default).
@@ -109,7 +109,7 @@
.. code-block:: python
- function_name = 'self.data.quantities["IsBound"](truncate=True, \
+ function_name = 'self.data.quantities.is_bound(truncate=True, \
include_thermal_energy=True, treecode=True, opening_angle=2.0) > 1.0'
master_clump = amods.level_sets.Clump(data_source, None, field,
function=function_name)
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -31,35 +31,15 @@
derived fields. If it finds nothing there, it then defaults to examining the
global set of derived fields.
-To add a field to the list of fields that you know should exist in a particular
-frontend, call the function ``add_frontend_field`` where you replace
-``frontend`` with the name of the frontend. Below is an example for adding
-``Cooling_Time`` to Enzo:
+To add a derived field, which is not expected to necessarily exist on disk, use
+the standard construction:
.. code-block:: python
- add_enzo_field("Cooling_Time", units=r"\rm{s}",
- function=NullFunc,
- validators=ValidateDataField("Cooling_Time"))
+ add_field("specific_thermal_energy", function=_specific_thermal_energy,
+ units="ergs/g")
-Note that we used the ``NullFunc`` function here. To add a derived field,
-which is not expected to necessarily exist on disk, use the standard
-construction:
-
-.. code-block:: python
-
- add_field("thermal_energy", function=_ThermalEnergy,
- units=r"\rm{ergs}/\rm{g}")
-
-To add a translation from one field to another, use the ``TranslationFunc`` as
-the function for reading the field. For instance, this code appears in the Nyx
-frontend:
-
-.. code-block:: python
-
- add_field("density", function=TranslationFunc("density"), take_log=True,
- units=r"\rm{g} / \rm{cm}^3",
- projected_units =r"\rm{g} / \rm{cm}^2")
+where ``_specific_thermal_energy`` is a python function that defines the field.
.. _accessing-fields:
@@ -105,7 +85,7 @@
.. code-block:: python
- ds = load("my_data")
+ ds = yt.load("my_data")
print ds.field_list
print ds.derived_field_list
@@ -115,7 +95,7 @@
.. code-block:: python
- ds = load("my_data")
+ ds = yt.load("my_data")
print ds.field_info["pressure"].get_units()
This is a fast way to examine the units of a given field, and additionally you
@@ -141,8 +121,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("RedshiftOutput0005")
+ import yt
+ ds = yt.load("RedshiftOutput0005")
reg = ds.region([0.5, 0.5, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
.. include:: _obj_docstrings.inc
@@ -199,7 +179,7 @@
ds = load("my_data")
dd = ds.all_data()
- dd.quantities["AngularMomentumVector"]()
+ dd.quantities.angular_momentum_vector()
The following quantities are available via the ``quantities`` interface.
@@ -246,8 +226,8 @@
.. notebook-cell::
- from yt.mods import *
- ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ import yt
+ ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
total_mass = ad.quantities.total_quantity('cell_mass')
# now select only gas with 1e5 K < T < 1e7 K.
@@ -268,12 +248,12 @@
.. python-script::
- from yt.mods import *
- ds = load("enzo_tiny_cosmology/DD0046/DD0046")
+ import yt
+ ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
new_region = ad.cut_region(['obj["density"] > 1e-29'])
- plot = ProjectionPlot(ds, "x", "density", weight_field="density",
- data_source=new_region)
+ plot = yt.ProjectionPlot(ds, "x", "density", weight_field="density",
+ data_source=new_region)
plot.save()
.. _extracting-connected-sets:
@@ -311,10 +291,6 @@
Extracting Isocontour Information
---------------------------------
-.. versionadded:: 2.3
-
-.. warning::
- This is still beta!
``yt`` contains an implementation of the `Marching Cubes
<http://en.wikipedia.org/wiki/Marching_cubes>`_ algorithm, which can operate on
@@ -378,8 +354,8 @@
.. code-block:: python
- from yt.mods import *
- ds = load("my_data")
+ import yt
+ ds = yt.load("my_data")
sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
ds.save_object(sp, "sphere_to_analyze_later")
@@ -390,9 +366,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("my_data")
+ ds = yt.load("my_data")
sphere_to_analyze = ds.load_object("sphere_to_analyze_later")
Additionally, if we want to store the object independent of the ``.yt`` file,
@@ -400,9 +376,9 @@
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load("my_data")
+ ds = yt.load("my_data")
sp = ds.sphere([0.5, 0.5, 0.5], 10.0/ds['kpc'])
sp.save_object("my_sphere", "my_storage_file.cpkl")
@@ -416,10 +392,10 @@
.. code-block:: python
- from yt.mods import *
+ import yt
import shelve
- ds = load("my_data") # not necessary if storeparameterfiles is on
+ ds = yt.load("my_data") # not necessary if storeparameterfiles is on
obj_file = shelve.open("my_storage_file.cpkl")
ds, obj = obj_file["my_sphere"]
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -33,27 +33,23 @@
creating your own, and these operators can be applied either to datasets on the
whole or to subregions of individual datasets.
-The simplest mechanism for creating a ``DatasetSeries`` object is to use the
-class method
-:meth:`~yt.data_objects.time_series.DatasetSeries.from_filenames`. This
-method accepts a list of strings that can be supplied to ``load``. For
-example:
+The simplest mechanism for creating a ``DatasetSeries`` object is to pass a glob
+pattern to the ``yt.load`` function.
.. code-block:: python
- from yt.mods import *
- filenames = ["DD0030/output_0030", "DD0040/output_0040"]
- ts = DatasetSeries.from_filenames(filenames)
+ import yt
+ ts = yt.load("DD????/DD????")
-This will create a new time series, populated with the output files ``DD0030``
-and ``DD0040``. This object, here called ``ts``, can now be analyzed in bulk.
-Alternately, you can specify a pattern that is supplied to :mod:`glob`, and
-those filenames will be sorted and returned. Here is an example:
+This will create a new time series, populated with all datasets that match the
+pattern "DD" followed by four digits. This object, here called ``ts``, can now
+be analyzed in bulk. Alternately, you can specify an already formatted list of
+filenames directly to the ``DatasetSeries`` initializer:
.. code-block:: python
- from yt.mods import *
- ts = DatasetSeries.from_filenames("*/*.index")
+ import yt
+ ts = yt.DatasetSeries(["DD0030/DD0030", "DD0040/DD0040"])
Analyzing Each Dataset In Sequence
----------------------------------
@@ -64,8 +60,8 @@
.. code-block:: python
- from yt.mods import *
- ts = DatasetSeries.from_filenames("*/*.index")
+ import yt
+ ts = yt.load("*/*.index")
for ds in ts:
print ds.current_time
@@ -77,87 +73,6 @@
* The cookbook recipe for :ref:`cookbook-time-series-analysis`
* :class:`~yt.data_objects.time_series.DatasetSeries`
-Prepared Time Series Analysis
------------------------------
-
-A few handy functions for treating time series data as a uniform, single object
-are also available.
-
-.. warning:: The future of these functions is uncertain: they may be removed in
- the future!
-
-Simple Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~
-
-The available tasks that come built-in can be seen by looking at the output of
-``ts.tasks.keys()``. For instance, one of the simplest ones is the
-``MaxValue`` task. We can execute this task by calling it with the field whose
-maximum value we want to evaluate:
-
-.. code-block:: python
-
- from yt.mods import *
- ts = TimeSeries.from_filenames("*/*.index")
- max_rho = ts.tasks["MaximumValue"]("density")
-
-When we call the task, the time series object executes the task on each
-component dataset. The results are then returned to the user. More
-complex, multi-task evaluations can be conducted by using the
-:meth:`~yt.data_objects.time_series.DatasetSeries.eval` call, which accepts a
-list of analysis tasks.
-
-Analysis Tasks Applied to Objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Just as some tasks can be applied to datasets as a whole, one can also apply
-the creation of objects to datasets. This means that you are able to construct
-a generalized "sphere" operator that will be created inside all datasets, which
-you can then calculate derived quantities (see :ref:`derived-quantities`) from.
-
-For instance, imagine that you wanted to create a sphere that is centered on
-the most dense point in the simulation and that is 1 pc in radius, and then
-calculate the angular momentum vector on this sphere. You could do that with
-this script:
-
-.. code-block:: python
-
- from yt.mods import *
- ts = TimeSeries.from_filenames("*/*.index")
- sphere = ts.sphere("max", (1.0, "pc"))
- L_vecs = sphere.quantities["AngularMomentumVector"]()
-
-Note that we have specified the units differently than usual -- the time series
-objects allow units as a tuple, so that in cases where units may change over
-the course of several outputs they are correctly set at all times. This script
-simply sets up the time series object, creates a sphere, and then runs
-quantities on it. It is designed to look very similar to the code that would
-conduct this analysis on a single output.
-
-All of the objects listed in :ref:`available-objects` are made available in
-the same manner as "sphere" was used above.
-
-Creating Analysis Tasks
-~~~~~~~~~~~~~~~~~~~~~~~
-
-If you wanted to look at the mass in star particles as a function of time, you
-would write a function that accepts params and ds and then decorate it with
-analysis_task. Here we have done so:
-
-.. code-block:: python
-
- @analysis_task(('particle_type',))
- def MassInParticleType(params, ds):
- dd = ds.all_data()
- ptype = (dd["particle_type"] == params.particle_type)
- return (ptype.sum(), dd["ParticleMassMsun"][ptype].sum())
-
- ms = ts.tasks["MassInParticleType"](4)
- print ms
-
-This allows you to create your own analysis tasks that will be then available
-to time series data objects. Since ``DatasetSeries`` objects iterate over
-filenames in parallel by default, this allows for transparent parallelization.
-
.. _analyzing-an-entire-simulation:
Analyzing an Entire Simulation
@@ -175,9 +90,9 @@
.. code-block:: python
- from yt.mods import *
- my_sim = simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
- find_outputs=False)
+ import yt
+ my_sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
+ find_outputs=False)
Then, create a ``DatasetSeries`` object with the :meth:`get_time_series`
function. With no additional keywords, the time series will include every
@@ -198,7 +113,7 @@
for ds in my_sim.piter():
    all_data = ds.all_data()
- print all_data.quantities['Extrema']('density')
+ print all_data.quantities.extrema('density')
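Read together, the updated simulation-analysis flow is roughly the sketch
below; the bare ``get_time_series()`` call (no keywords, so every output is
included) is taken from the surrounding prose.

.. code-block:: python

    import yt

    my_sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo',
                           find_outputs=False)
    my_sim.get_time_series()  # no keywords: include every output

    for ds in my_sim.piter():
        all_data = ds.all_data()
        print all_data.quantities.extrema('density')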
Additional keywords can be given to :meth:`get_time_series` to select a subset
of the total data:
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
--- a/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
+++ b/doc/source/analyzing/units/2)_Data_Selection_and_fields.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:882b31591c60bfe6ad4cb0f8842953d2e94fb8a12ce742be831a65642eea72c9"
+ "signature": "sha256:2faff88abc93fe2bc9d91467db786a8b69ec3ece6783a7055942ecc7c47a0817"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,8 +34,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "import yt\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
" \n",
"dd = ds.all_data()\n",
"maxval, maxloc = ds.find_max('density')\n",
@@ -324,6 +324,8 @@
"collapsed": false,
"input": [
"from astropy import units as u\n",
+ "from yt import YTQuantity, YTArray\n",
+ "\n",
"x = 42.0 * u.meter\n",
"y = YTQuantity.from_astropy(x) "
],
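Filled out into a self-contained snippet (astropy is assumed to be installed;
the trailing ``print`` is added for illustration):

.. code-block:: python

    from astropy import units as u

    from yt import YTQuantity

    x = 42.0 * u.meter              # an astropy quantity
    y = YTQuantity.from_astropy(x)  # the equivalent yt quantity
    print y, y.units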
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
--- a/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
+++ b/doc/source/analyzing/units/3)_Comoving_units_and_code_units.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:242d7005d45a82744713bfe6389e49d47f39b524d1e7fcbf5ceb2e65dc473e68"
+ "signature": "sha256:8ba193cc3867e2185133bbf3952bd5834e6c63993208635c71cf55fa6f27b491"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -34,8 +34,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('Enzo_64/DD0043/data0043')"
+ "import yt\n",
+ "ds = yt.load('Enzo_64/DD0043/data0043')"
],
"language": "python",
"metadata": {},
@@ -208,7 +208,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
+ "slc = yt.SlicePlot(ds, 0, 'density', width=(128, 'Mpccm/h'))\n",
"slc.set_figure_size(6)"
],
"language": "python",
@@ -234,6 +234,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
+ "from yt import YTQuantity\n",
+ "\n",
"a = YTQuantity(3, 'cm')\n",
"\n",
"print a.units.registry.keys()"
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
--- a/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
+++ b/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:448380e74a746d19dc1eecfe222c0e798a87a4ac285e4f50e2598316086c5ee8"
+ "signature": "sha256:273a23e3a20b277a9e5ea7117b48cf19013c331d0893e6e9d21896e97f59aceb"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -22,9 +22,9 @@
"collapsed": false,
"input": [
"# A high redshift output from z ~ 8\n",
- "from yt.mods import *\n",
+ "import yt\n",
"\n",
- "ds1 = load('Enzo_64/DD0002/data0002')\n",
+ "ds1 = yt.load('Enzo_64/DD0002/data0002')\n",
"print \"z = %s\" % ds1.current_redshift\n",
"print \"Internal length units = %s\" % ds1.length_unit\n",
"print \"Internal length units in cgs = %s\" % ds1.length_unit.in_cgs()"
@@ -38,7 +38,7 @@
"collapsed": false,
"input": [
"# A low redshift output from z ~ 0\n",
- "ds2 = load('Enzo_64/DD0043/data0043')\n",
+ "ds2 = yt.load('Enzo_64/DD0043/data0043')\n",
"print \"z = %s\" % ds2.current_redshift\n",
"print \"Internal length units = %s\" % ds2.length_unit\n",
"print \"Internal length units in cgs = %s\" % ds2.length_unit.in_cgs()"
@@ -94,9 +94,10 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
+ "yt.enable_parallelism()\n",
"\n",
- "ts = DatasetSeries.from_filenames(\"Enzo_64/DD????/data????\")\n",
+ "ts = yt.load(\"Enzo_64/DD????/data????\")\n",
"\n",
"storage = {}\n",
"\n",
@@ -104,7 +105,7 @@
" sto.result_id = ds.current_time\n",
" sto.result = ds.length_unit\n",
"\n",
- "if is_root():\n",
+ "if yt.is_root():\n",
" for t in sorted(storage.keys()):\n",
" print t.in_units('Gyr'), storage[t].in_units('Mpc')"
],
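The fragments above stitch into the parallel sketch below; the
``ts.piter(storage=storage)`` loop header is implied by the ``sto``/``ds``
pair in the hunk and is the standard yt idiom.

.. code-block:: python

    import yt
    yt.enable_parallelism()

    ts = yt.load("Enzo_64/DD????/data????")

    storage = {}
    # each process records a (result_id, result) pair; yt gathers them
    for sto, ds in ts.piter(storage=storage):
        sto.result_id = ds.current_time
        sto.result = ds.length_unit

    if yt.is_root():
        for t in sorted(storage.keys()):
            print t.in_units('Gyr'), storage[t].in_units('Mpc')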
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/analyzing/units/5)_Units_and_plotting.ipynb
--- a/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
+++ b/doc/source/analyzing/units/5)_Units_and_plotting.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:981baca6958c75f0d84bbc24be7d2b75af5957d36aa3eb4ba725d9e47a85f80d"
+ "signature": "sha256:3deac8455c3bbd85e3cefc0f8905be509fba0050f67f69a7faed0505b4d8dbad"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -28,9 +28,9 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
- "slc = SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
+ "import yt\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "slc = yt.SlicePlot(ds, 2, 'density', center=[0.5, 0.5, 0.5], width=(15, 'kpc'))\n",
"slc.set_figure_size(6)"
],
"language": "python",
@@ -107,7 +107,7 @@
"collapsed": false,
"input": [
"dd = ds.all_data()\n",
- "plot = ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
+ "plot = yt.ProfilePlot(dd, 'density', 'temperature', weight_field='cell_mass')\n",
"plot.show()"
],
"language": "python",
@@ -142,7 +142,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "plot = PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
+ "plot = yt.PhasePlot(dd, 'density', 'temperature', 'cell_mass')\n",
"plot.set_figure_size(6)"
],
"language": "python",
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
--- a/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
+++ b/doc/source/bootcamp/5)_Derived_Fields_and_Profiles.ipynb
@@ -230,14 +230,14 @@
"collapsed": false,
"input": [
"sp_small = ds.sphere(\"max\", (50.0, 'kpc'))\n",
- "bv = sp_small.quantities[\"BulkVelocity\"]()\n",
+ "bv = sp_small.quantities.bulk_velocity()\n",
"\n",
"sp = ds.sphere(\"max\", (0.1, 'Mpc'))\n",
- "rv1 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+ "rv1 = sp.quantities.extrema(\"radial_velocity\")\n",
"\n",
"sp.clear_data()\n",
"sp.set_field_parameter(\"bulk_velocity\", bv)\n",
- "rv2 = sp.quantities[\"Extrema\"](\"radial_velocity\")\n",
+ "rv2 = sp.quantities.extrema(\"radial_velocity\")\n",
"\n",
"print bv\n",
"print rv1\n",
@@ -251,4 +251,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
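Assembled into a standalone script (the ``yt.load`` line is assumed from the
notebook's earlier cells, which use the IsolatedGalaxy sample dataset):

.. code-block:: python

    import yt

    ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')

    sp_small = ds.sphere("max", (50.0, 'kpc'))
    bv = sp_small.quantities.bulk_velocity()

    sp = ds.sphere("max", (0.1, 'Mpc'))
    rv1 = sp.quantities.extrema("radial_velocity")

    # re-derive radial velocity relative to the bulk motion
    sp.clear_data()
    sp.set_field_parameter("bulk_velocity", bv)
    rv2 = sp.quantities.extrema("radial_velocity")

    print bv
    print rv1
    print rv2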
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/amrkdtree_to_uniformgrid.py
--- a/doc/source/cookbook/amrkdtree_to_uniformgrid.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import numpy as np
-import yt
-
-#This is an example of how to map an amr data set
-#to a uniform grid. In this case the highest
-#level of refinement is mapped into a 1024x1024x1024 cube
-
-#first the amr data is loaded
-ds = yt.load("~/pfs/galaxy/new_tests/feedback_8bz/DD0021/DD0021")
-
-#next we get the maxium refinement level
-lmax = ds.parameters['MaximumRefinementLevel']
-
-#calculate the center of the domain
-domain_center = (ds.domain_right_edge - ds.domain_left_edge)/2
-
-#determine the cellsize in the highest refinement level
-cell_size = ds.domain_width/(ds.domain_dimensions*2**lmax)
-
-#calculate the left edge of the new grid
-left_edge = domain_center - 512*cell_size
-
-#the number of cells per side of the new grid
-ncells = 1024
-
-#ask yt for the specified covering grid
-cgrid = ds.covering_grid(lmax, left_edge, np.array([ncells,]*3))
-
-#get a map of the density into the new grid
-density_map = cgrid["density"].astype(dtype="float32")
-
-#save the file as a numpy array for convenient future processing
-np.save("/pfs/goldbaum/galaxy/new_tests/feedback_8bz/gas_density_DD0021_log_densities.npy", density_map)
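Although the recipe above is being removed, the covering-grid pattern it
demonstrated still works; here is a trimmed sketch with generic placeholder
paths and an assumed refinement level in place of the hard-coded values.

.. code-block:: python

    import numpy as np

    import yt

    ds = yt.load("my_data")  # placeholder path
    level = 2                # refinement level to sample at (assumed)

    # a uniform grid covering the whole domain at the chosen level
    dims = ds.domain_dimensions * 2**level
    cg = ds.covering_grid(level, left_edge=ds.domain_left_edge, dims=dims)

    density_map = cg["density"].astype("float32")
    np.save("gas_density.npy", density_map)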
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/constructing_data_objects.rst
--- a/doc/source/cookbook/constructing_data_objects.rst
+++ b/doc/source/cookbook/constructing_data_objects.rst
@@ -25,6 +25,8 @@
.. yt_cookbook:: find_clumps.py
+.. _extract_frb:
+
Extracting Fixed Resolution Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/custom_colorbar_tickmarks.ipynb
--- a/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
+++ b/doc/source/cookbook/custom_colorbar_tickmarks.ipynb
@@ -1,6 +1,7 @@
{
"metadata": {
- "name": ""
+ "name": "",
+ "signature": "sha256:e8fd07931e339dc67b9d84b0fbc6abc84d3957d885544c24da7aa550f9427a1f"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -11,8 +12,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "%matplotlib inline\n",
- "from yt.mods import *"
+ "import yt"
],
"language": "python",
"metadata": {},
@@ -22,8 +22,8 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
- "slc = SlicePlot(ds, 'x', 'density')\n",
+ "ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')\n",
+ "slc = yt.SlicePlot(ds, 'x', 'density')\n",
"slc"
],
"language": "python",
@@ -87,4 +87,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/embedded_javascript_animation.ipynb
--- a/doc/source/cookbook/embedded_javascript_animation.ipynb
+++ b/doc/source/cookbook/embedded_javascript_animation.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:4f7d409d15ecc538096d15212923312e2cb4a911ebf5a9cf7edc9bd63a8335e9"
+ "signature": "sha256:bed79f0227742715a8753a98f2ad54175767a7c9ded19b14976ee6c8ff255f04"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -23,7 +23,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from JSAnimation import IPython_display\n",
"from matplotlib import animation"
],
@@ -47,14 +47,14 @@
"import matplotlib.pyplot as plt\n",
"from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
"\n",
- "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+ "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
"prj.set_figure_size(5)\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
" prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
@@ -68,4 +68,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/embedded_webm_animation.ipynb
--- a/doc/source/cookbook/embedded_webm_animation.ipynb
+++ b/doc/source/cookbook/embedded_webm_animation.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:0090176ae6299b2310bf613404cbfbb42a54e19a03d1469d1429a01170a63aa0"
+ "signature": "sha256:b400f12ff9e27ff6a3ddd13f2f8fc3f88bd857fa6083fad6808f00d771312db7"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -21,7 +21,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "from yt.mods import *\n",
+ "import yt\n",
"from matplotlib import animation"
],
"language": "python",
@@ -96,13 +96,13 @@
"import matplotlib.pyplot as plt\n",
"from matplotlib.backends.backend_agg import FigureCanvasAgg\n",
"\n",
- "prj = ProjectionPlot(load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
+ "prj = yt.ProjectionPlot(yt.load('Enzo_64/DD0000/data0000'), 0, 'density', weight_field='density',width=(180,'Mpccm'))\n",
"prj.set_zlim('density',1e-32,1e-26)\n",
"fig = prj.plots['density'].figure\n",
"\n",
"# animation function. This is called sequentially\n",
"def animate(i):\n",
- " ds = load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
+ " ds = yt.load('Enzo_64/DD%04i/data%04i' % (i,i))\n",
" prj._switch_ds(ds)\n",
"\n",
"# call the animator. blit=True means only re-draw the parts that have changed.\n",
@@ -119,4 +119,4 @@
"metadata": {}
}
]
-}
+}
\ No newline at end of file
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/ffmpeg_volume_rendering.py
--- a/doc/source/cookbook/ffmpeg_volume_rendering.py
+++ /dev/null
@@ -1,99 +0,0 @@
-#This is an example of how to make videos of
-#uniform grid data using Theia and ffmpeg
-
-#The Scene object to hold the ray caster and view camera
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-
-#GPU based raycasting algorithm to use
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-
-#These will be used to define how to color the data
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-#This will be used to launch ffmpeg
-import subprocess as sp
-
-#Of course we need numpy for math magic
-import numpy as np
-
-#Opacity scaling function
-def scale_func(v, mi, ma):
- return np.minimum(1.0, (v-mi)/(ma-mi) + 0.0)
-
-#load the uniform grid from a numpy array file
-bolshoi = "/home/bogert/log_densities_1024.npy"
-density_grid = np.load(bolshoi)
-
-#Set the TheiaScene to use the density_grid and
-#setup the raycaster for a resulting 1080p image
-ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (1920,1080) ))
-
-#the min and max values in the data to color
-mi, ma = 0.0, 3.6
-
-#setup colortransferfunction
-bins = 5000
-tf = ColorTransferFunction( (mi, ma), bins)
-tf.map_to_colormap(0.5, ma, colormap="spring", scale_func = scale_func)
-
-#pass the transfer function to the ray caster
-ts.source.raycaster.set_transfer(tf)
-
-#Initial configuration for start of video
-#set initial opacity and brightness values
-#then zoom into the center of the data 30%
-ts.source.raycaster.set_opacity(0.03)
-ts.source.raycaster.set_brightness(2.3)
-ts.camera.zoom(30.0)
-
-#path to ffmpeg executable
-FFMPEG_BIN = "/usr/local/bin/ffmpeg"
-
-pipe = sp.Popen([ FFMPEG_BIN,
- '-y', # (optional) overwrite the output file if it already exists
- #This must be set to rawvideo because the image is an array
- '-f', 'rawvideo',
- #This must be set to rawvideo because the image is an array
- '-vcodec','rawvideo',
- #The size of the image array and resulting video
- '-s', '1920x1080',
- #This must be rgba to match array format (uint32)
- '-pix_fmt', 'rgba',
- #frame rate of video
- '-r', '29.97',
- #Indicate that the input to ffmpeg comes from a pipe
- '-i', '-',
- # Tells FFMPEG not to expect any audio
- '-an',
- #Setup video encoder
- #Use any encoder you life available from ffmpeg
- '-vcodec', 'libx264', '-preset', 'ultrafast', '-qp', '0',
- '-pix_fmt', 'yuv420p',
- #Name of the output
- 'bolshoiplanck2.mkv' ],
- stdin=sp.PIPE,stdout=sp.PIPE)
-
-
-#Now we loop and produce 500 frames
-for k in range (0,500) :
- #update the scene resulting in a new image
- ts.update()
-
- #get the image array from the ray caster
- array = ts.source.get_results()
-
- #send the image array to ffmpeg
- array.tofile(pipe.stdin)
-
- #rotate the scene by 0.01 rads in x,y & z
- ts.camera.rotateX(0.01)
- ts.camera.rotateZ(0.01)
- ts.camera.rotateY(0.01)
-
- #zoom in 0.01% for a total of a 5% zoom
- ts.camera.zoom(0.01)
-
-
-#Close the pipe to ffmpeg
-pipe.terminate()
diff -r e47aaa3b97697882c6653fadc2619f92d0e1bc36 -r c97c0f6168fd2c6f1e35b6f63cbcaec18cced1f5 doc/source/cookbook/opengl_stereo_volume_rendering.py
--- a/doc/source/cookbook/opengl_stereo_volume_rendering.py
+++ /dev/null
@@ -1,370 +0,0 @@
-from OpenGL.GL import *
-from OpenGL.GLUT import *
-from OpenGL.GLU import *
-from OpenGL.GL.ARB.vertex_buffer_object import *
-
-import sys, time
-import numpy as np
-import pycuda.driver as cuda_driver
-import pycuda.gl as cuda_gl
-
-from yt.visualization.volume_rendering.theia.scene import TheiaScene
-from yt.visualization.volume_rendering.theia.algorithms.front_to_back import FrontToBackRaycaster
-from yt.visualization.volume_rendering.transfer_functions import ColorTransferFunction
-from yt.visualization.color_maps import *
-
-import numexpr as ne
-
-window = None # Number of the glut window.
-rot_enabled = True
-
-#Theia Scene
-ts = None
-
-#RAY CASTING values
-c_tbrightness = 1.0
-c_tdensity = 0.05
-
-output_texture = None # pointer to offscreen render target
-
-leftButton = False
-middleButton = False
-rightButton = False
-
-#Screen width and height
-width = 1920
-height = 1080
-
-eyesep = 0.1
-
-(pbo, pycuda_pbo) = [None]*2
-(rpbo, rpycuda_pbo) = [None]*2
-
-#create 2 PBO for stereo scopic rendering
-def create_PBO(w, h):
- global pbo, pycuda_pbo, rpbo, rpycuda_pbo
- num_texels = w*h
- array = np.zeros((num_texels, 3),np.float32)
-
- pbo = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, pbo)
- glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pycuda_pbo = cuda_gl.RegisteredBuffer(long(pbo))
-
- rpbo = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, rpbo)
- glBufferData(GL_ARRAY_BUFFER, array, GL_DYNAMIC_DRAW)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- rpycuda_pbo = cuda_gl.RegisteredBuffer(long(rpbo))
-
-def destroy_PBO(self):
- global pbo, pycuda_pbo, rpbo, rpycuda_pbo
- glBindBuffer(GL_ARRAY_BUFFER, long(pbo))
- glDeleteBuffers(1, long(pbo));
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- pbo,pycuda_pbo = [None]*2
-
- glBindBuffer(GL_ARRAY_BUFFER, long(rpbo))
- glDeleteBuffers(1, long(rpbo));
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- rpbo,rpycuda_pbo = [None]*2
-
-#consistent with C initPixelBuffer()
-def create_texture(w,h):
- global output_texture
- output_texture = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, output_texture)
- # set basic parameters
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- # buffer data
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- w, h, 0, GL_RGB, GL_FLOAT, None)
-
-#consistent with C initPixelBuffer()
-def destroy_texture():
- global output_texture
- glDeleteTextures(output_texture);
- output_texture = None
-
-def init_gl(w = 512 , h = 512):
- Width, Height = (w, h)
-
- glClearColor(0.1, 0.1, 0.5, 1.0)
- glDisable(GL_DEPTH_TEST)
-
- #matrix functions
- glViewport(0, 0, Width, Height)
- glMatrixMode(GL_PROJECTION);
- glLoadIdentity();
-
- #matrix functions
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
- glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-def resize(Width, Height):
- global width, height
- (width, height) = Width, Height
- glViewport(0, 0, Width, Height) # Reset The Current Viewport And Perspective Transformation
- glMatrixMode(GL_PROJECTION)
- glLoadIdentity()
- gluPerspective(60.0, Width/float(Height), 0.1, 10.0)
-
-
-def do_tick():
- global time_of_last_titleupdate, frame_counter, frames_per_second
- if ((time.clock () * 1000.0) - time_of_last_titleupdate >= 1000.):
- frames_per_second = frame_counter # Save The FPS
- frame_counter = 0 # Reset The FPS Counter
- szTitle = "%d FPS" % (frames_per_second )
- glutSetWindowTitle ( szTitle )
- time_of_last_titleupdate = time.clock () * 1000.0
- frame_counter += 1
-
-oldMousePos = [ 0, 0 ]
-def mouseButton( button, mode, x, y ):
- """Callback function (mouse button pressed or released).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively"""
-
- global leftButton, middleButton, rightButton, oldMousePos
-
- if button == GLUT_LEFT_BUTTON:
- if mode == GLUT_DOWN:
- leftButton = True
- else:
- leftButton = False
-
- if button == GLUT_MIDDLE_BUTTON:
- if mode == GLUT_DOWN:
- middleButton = True
- else:
- middleButton = False
-
- if button == GLUT_RIGHT_BUTTON:
- if mode == GLUT_DOWN:
- rightButton = True
- else:
- rightButton = False
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def mouseMotion( x, y ):
- """Callback function (mouse moved while button is pressed).
-
- The current and old mouse positions are stored in
- a global renderParam and a global list respectively.
- The global translation vector is updated according to
- the movement of the mouse pointer."""
-
- global ts, leftButton, middleButton, rightButton, oldMousePos
- deltaX = x - oldMousePos[ 0 ]
- deltaY = y - oldMousePos[ 1 ]
-
- factor = 0.001
-
- if leftButton == True:
- ts.camera.rotateX( - deltaY * factor)
- ts.camera.rotateY( - deltaX * factor)
- if middleButton == True:
- ts.camera.translateX( deltaX* 2.0 * factor)
- ts.camera.translateY( - deltaY* 2.0 * factor)
- if rightButton == True:
- ts.camera.scale += deltaY * factor
-
- oldMousePos[0], oldMousePos[1] = x, y
- glutPostRedisplay( )
-
-def keyPressed(*args):
- global c_tbrightness, c_tdensity, eyesep
- # If escape is pressed, kill everything.
- if args[0] == '\033':
- print 'Closing..'
- destroy_PBOs()
- destroy_texture()
- exit()
-
- #change the brightness of the scene
- elif args[0] == ']':
- c_tbrightness += 0.025
- elif args[0] == '[':
- c_tbrightness -= 0.025
-
- #change the density scale
- elif args[0] == ';':
- c_tdensity -= 0.001
- elif args[0] == '\'':
- c_tdensity += 0.001
-
- #change the transfer scale
- elif args[0] == '-':
- eyesep -= 0.01
- elif args[0] == '=':
- eyesep += 0.01
-
-def idle():
- glutPostRedisplay()
-
-def display():
- try:
- #process left eye
- process_image()
- display_image()
-
- #process right eye
- process_image(eye = False)
- display_image(eye = False)
-
-
- glutSwapBuffers()
-
- except:
- from traceback import print_exc
- print_exc()
- from os import _exit
- _exit(0)
-
-def process(eye = True):
- global ts, pycuda_pbo, rpycuda_pbo, eyesep, c_tbrightness, c_tdensity
- """ Use PyCuda """
-
- ts.get_raycaster().set_opacity(c_tdensity)
- ts.get_raycaster().set_brightness(c_tbrightness)
-
- if (eye) :
- ts.camera.translateX(-eyesep)
- dest_mapping = pycuda_pbo.map()
- (dev_ptr, size) = dest_mapping.device_ptr_and_size()
- ts.get_raycaster().surface.device_ptr = dev_ptr
- ts.update()
- dest_mapping.unmap()
- ts.camera.translateX(eyesep)
- else :
- ts.camera.translateX(eyesep)
- dest_mapping = rpycuda_pbo.map()
- (dev_ptr, size) = dest_mapping.device_ptr_and_size()
- ts.get_raycaster().surface.device_ptr = dev_ptr
- ts.update()
- dest_mapping.unmap()
- ts.camera.translateX(-eyesep)
-
-
-def process_image(eye = True):
- global output_texture, pbo, rpbo, width, height
- """ copy image and process using CUDA """
- # run the Cuda kernel
- process(eye)
- # download texture from PBO
- if (eye) :
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(pbo))
- glBindTexture(GL_TEXTURE_2D, output_texture)
-
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- width, height, 0,
- GL_RGB, GL_FLOAT, None)
- else :
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, np.uint64(rpbo))
- glBindTexture(GL_TEXTURE_2D, output_texture)
-
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
- width, height, 0,
- GL_RGB, GL_FLOAT, None)
-
-def display_image(eye = True):
- global width, height
- """ render a screen sized quad """
- glDisable(GL_DEPTH_TEST)
- glDisable(GL_LIGHTING)
- glEnable(GL_TEXTURE_2D)
- glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)
-
- #matix functions should be moved
- glMatrixMode(GL_PROJECTION)
- glPushMatrix()
- glLoadIdentity()
- glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
- glMatrixMode( GL_MODELVIEW)
- glLoadIdentity()
- glViewport(0, 0, width, height)
-
- if (eye) :
- glDrawBuffer(GL_BACK_LEFT)
- else :
- glDrawBuffer(GL_BACK_RIGHT)
-
- glBegin(GL_QUADS)
- glTexCoord2f(0.0, 0.0)
- glVertex3f(-1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 0.0)
- glVertex3f(1.0, -1.0, 0.5)
- glTexCoord2f(1.0, 1.0)
- glVertex3f(1.0, 1.0, 0.5)
- glTexCoord2f(0.0, 1.0)
- glVertex3f(-1.0, 1.0, 0.5)
- glEnd()
-
- glMatrixMode(GL_PROJECTION)
- glPopMatrix()
-
- glDisable(GL_TEXTURE_2D)
- glBindTexture(GL_TEXTURE_2D, 0)
- glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
- glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)
-
-
-#note we may need to init cuda_gl here and pass it to camera
-def main():
- global window, ts, width, height
- (width, height) = (1920, 1080)
-
- glutInit(sys.argv)
- glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH | GLUT_STEREO)
- glutInitWindowSize(*initial_size)
- glutInitWindowPosition(0, 0)
- window = glutCreateWindow("Stereo Volume Rendering")
-
-
- glutDisplayFunc(display)
- glutIdleFunc(idle)
- glutReshapeFunc(resize)
- glutMouseFunc( mouseButton )
- glutMotionFunc( mouseMotion )
- glutKeyboardFunc(keyPressed)
- init_gl(width, height)
-
- # create texture for blitting to screen
- create_texture(width, height)
-
- import pycuda.gl.autoinit
- import pycuda.gl
- cuda_gl = pycuda.gl
-
- create_PBO(width, height)
- # ----- Load and Set Volume Data -----
-
- density_grid = np.load("/home/bogert/dd150_log_densities.npy")
-
- mi, ma= 21.5, 24.5
- bins = 5000
- tf = ColorTransferFunction( (mi, ma), bins)
- tf.map_to_colormap(mi, ma, colormap="algae", scale_func = scale_func)
-
- ts = TheiaScene(volume = density_grid, raycaster = FrontToBackRaycaster(size = (width, height), tf = tf))
-
- ts.get_raycaster().set_sample_size(0.01)
- ts.get_raycaster().set_max_samples(5000)
-
- glutMainLoop()
-
-def scale_func(v, mi, ma):
- return np.minimum(1.0, np.abs((v)-ma)/np.abs(mi-ma) + 0.0)
-
-# Print message to console, and kick off the main to get it rolling.
-if __name__ == "__main__":
- print "Hit ESC key to quit, 'a' to toggle animation, and 'e' to toggle cuda"
- main()
This diff is so big that we needed to truncate the remainder.
Repository URL: https://bitbucket.org/yt_analysis/yt/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.