[yt-svn] commit/yt: 10 new changesets
commits-noreply at bitbucket.org
Thu Jul 24 04:50:04 PDT 2014
10 new commits in yt:
https://bitbucket.org/yt_analysis/yt/commits/0b5355d4b047/
Changeset: 0b5355d4b047
Branch: yt-3.0
User: jzuhone
Date: 2014-07-11 18:44:52
Summary: Beginning to work on derived field docs.
Affected #: 1 file
diff -r 14dfbdf28b07dd18e112abafce364d04a4864829 -r 0b5355d4b04751551e3b8ef9e27340d47fa6833c doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -19,12 +19,12 @@
.. code-block:: python
- def _Pressure(field, data):
- return (data.pf["Gamma"] - 1.0) * \
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
-Note that we do a couple different things here. We access the "Gamma"
-parameter from the parameter file, we access the "density" field and we access
+Note that we do a couple different things here. We access the "gamma"
+parameter from the dataset, we access the "density" field and we access
the "thermal_energy" field. "thermal_energy" is, in fact, another derived field!
("thermal_energy" deals with the distinction in storage of energy between dual
energy formalism and non-DEF.) We don't do any loops, we don't do any
@@ -37,7 +37,7 @@
.. code-block:: python
- add_field("pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
+ add_field("pressure", function=_pressure, units="dyne/cm**2")
We feed it the name of the field, the name of the function, and the
units. Note that the units parameter is a "raw" string, with some
@@ -53,53 +53,11 @@
We suggest that you name the function that creates a derived field
with the intended field name prefixed by a single underscore, as in
-the ``_Pressure`` example above.
+the ``_pressure`` example above.
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
-.. _conversion-factors:
-
-Conversion Factors
-~~~~~~~~~~~~~~~~~~
-
-When creating a derived field, ``yt`` does not by default do unit
-conversion. All of the fields fed into the field are pre-supposed to
-be in CGS. If the field does not need any constants applied after
-that, you are done. If it does, you should define a second function
-that applies the proper multiple in order to return the desired units
-and use the argument ``convert_function`` to ``add_field`` to point to
-it.
-
-The argument that you pass to ``convert_function`` will be dependent on
-what fields are input into your derived field, and in what form they
-are passed from their native format. For enzo fields, nearly all the
-native on-disk fields are in CGS units already (except for ``dx``, ``dy``,
-and ``dz`` fields), so you typically only need to convert for
-off-standard fields taking into account where those fields are
-used in the final output derived field. For other codes, it can vary.
-
-You can check to see the units associated with any field in a dataset
-from any code by using the ``_units`` attribute. Here is an example
-with one of our sample FLASH datasets available publicly at
-http://yt-project.org/data :
-
-.. code-block:: python
-
- >>> from yt.mods import *
- >>> pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
- >>> pf.field_list
- ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
- >>> pf.field_info['dens']._units
- '\\rm{g}/\\rm{cm}^{3}'
- >>> pf.field_info['temp']._units
- '\\rm{K}'
- >>> pf.field_info['velx']._units
- '\\rm{cm}/\\rm{s}'
-
-Thus if you were using any of these fields as input to your derived field, you
-wouldn't have to worry about unit conversion because they're already in CGS.
-
Some More Complicated Examples
------------------------------
@@ -157,11 +115,9 @@
r_vec = coords - np.reshape(center,new_shape)
v_vec = np.array([xv,yv,zv], dtype='float64')
return np.cross(r_vec, v_vec, axis=0)
- def _convertSpecificAngularMomentum(data):
- return data.convert("cm")
- add_field("SpecificAngularMomentum",
- convert_function=_convertSpecificAngularMomentum, vector_field=True,
- units=r"\rm{cm}^2/\rm{s}", validators=[ValidateParameter('center')])
+ add_field("specific_angular_momentum",
+ vector_field=True, units=r"\rm{cm}^2/\rm{s}",
+ validators=[ValidateParameter('center')])
Here we define the SpecificAngularMomentum field, optionally taking a
``bulk_velocity``, and returning a vector field that needs conversion by the
@@ -234,15 +190,16 @@
.. code-block:: python
+ import yt
from yt.mods import *
from yt.utilities.grid_data_format import writer
- def _Entropy(field, data) :
+ def _entropy(field, data) :
return data["temperature"]*data["density"]**(-2./3.)
add_field("Entr", function=_entropy)
pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(pf, "Entr")
+ writer.save_field(pf, "entropy")
This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
@@ -267,17 +224,11 @@
``name``
This is the name of the field -- how you refer to it. For instance,
- ``Pressure`` or ``H2I_Fraction``.
+ ``pressure`` or ``magnetic_field_strength``.
``function``
This is a function handle that defines the field
- ``convert_function``
- This is the function that converts the field to CGS. All inputs to this
- function are mandated to already *be* in CGS.
``units``
- This is a mathtext (LaTeX-like) string that describes the units.
- ``projected_units``
- This is a mathtext (LaTeX-like) string that describes the units if the
- field has been projected without a weighting.
+ This is a string that describes the units.
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
@@ -355,23 +306,3 @@
``pch``
Proper parsecs, normalized by the scaled hubble constant, :math:`\rm{pc}/h`.
-
-Which Enzo Field names Does ``yt`` Know About?
-----------------------------------------------
-
-These are the names of primitive fields in the Enzo AMR code. ``yt`` was originally
-written to analyze Enzo data so the default field names used by the various
-frontends are the same as Enzo fields.
-
-.. note::
-
- Enzo field names are *universal* yt fields. All frontends define conversions
- to Enzo fields. Enzo fields are always in CGS.
-
-* Density
-* Temperature
-* Gas Energy
-* Total Energy
-* [xyz]-velocity
-* Species fields: HI, HII, Electron, HeI, HeII, HeIII, HM, H2I, H2II, DI, DII, HDI
-* Particle mass, velocity,
https://bitbucket.org/yt_analysis/yt/commits/a9af0cdb2f19/
Changeset: a9af0cdb2f19
Branch: yt-3.0
User: jzuhone
Date: 2014-07-14 21:29:32
Summary: Further derived field doc updates
Affected #: 1 file
diff -r 0b5355d4b04751551e3b8ef9e27340d47fa6833c -r a9af0cdb2f19362c594c89ebba3df77f503d6137 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -11,7 +11,7 @@
So once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
-collection of data, neutral to its size, shape, and type. (All fields should
+collection of data, neutral to its size, shape, and type. (All fields should
be provided as 64-bit floats.)
A simple example of this is the pressure field, which demonstrates the ease of
@@ -19,6 +19,8 @@
.. code-block:: python
+ import yt
+
def _pressure(field, data):
return (data.pf.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
@@ -37,12 +39,12 @@
.. code-block:: python
- add_field("pressure", function=_pressure, units="dyne/cm**2")
+ yt.add_field("pressure", function=_pressure, units="dyne/cm**2")
We feed it the name of the field, the name of the function, and the
-units. Note that the units parameter is a "raw" string, with some
-LaTeX-style formatting -- Matplotlib actually has a MathText rendering
-engine, so if you include LaTeX it will be rendered appropriately.
+units. Note that the units parameter is a "raw" string, in the format that ``yt`` uses
+in its `symbolic units implementation <units>`_ (e.g., employing only unit names, numbers,
+and mathematical operators in the string, and using ``"**"`` for exponentiation).
.. One very important thing to note about the call to ``add_field`` is
.. that it **does not** need to specify the function name **if** the
@@ -55,6 +57,34 @@
with the intended field name prefixed by a single underscore, as in
the ``_pressure`` example above.
+:func:`add_field` can be invoked in two other ways. The first is by the function
+decorator :func:`derived_field`. The following code is equivalent to the previous
+example:
+
+.. code-block:: python
+
+ from yt import derived_field
+
+ @derived_field(name="pressure", units="dyne/cm**2")
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
+ data["density"] * data["thermal_energy"]
+
+The :func:`derived_field` decorator takes the same arguments as :func:`add_field`,
+and is often a more convenient shorthand in cases where you want to quickly set up
+a new field.
+
+Defining derived fields in the above fashion must be done before a dataset is loaded,
+in order for the dataset to recognize them. If you want to set up a derived field after you
+have loaded a dataset, or if you only want to set up a derived field for a particular
+dataset, there is an :meth:`add_field` method that hangs off dataset objects. The calling
+syntax is the same:
+
+.. code-block:: python
+
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ ds.add_field("pressure", function=_pressure, units="dyne/cm**2")
+
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
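[Editor's sketch] The decorator form described in the hunk above can be sketched in miniature. This ``derived_field`` is a hedged stand-in for the pattern (a decorator factory that forwards to a registration call), not yt's implementation, and the field name is invented for illustration:

```python
# Minimal sketch of a @derived_field-style decorator (not yt's code).
registry = {}

def derived_field(name, units):
    # A decorator factory: register the wrapped function under `name`,
    # analogous to how the documented decorator forwards to add_field.
    def decorator(func):
        registry[name] = {"function": func, "units": units}
        return func  # leave the function usable on its own
    return decorator

@derived_field(name="pseudo_entropy", units="K*cm**2/g**(2/3)")
def _pseudo_entropy(field, data):
    # Same shape as the docs' entropy example: T * rho**(-2/3)
    return data["temperature"] * data["density"] ** (-2.0 / 3.0)

# The registered function can be looked up and called like any field function.
s = registry["pseudo_entropy"]["function"](None, {"temperature": 100.0, "density": 8.0})
```

Because the decorator returns ``func`` unchanged, the same function can still be passed explicitly to an ``add_field``-style call when that is more convenient.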
@@ -69,7 +99,7 @@
.. code-block:: python
- def _DiskAngle(field, data):
+ def _disk_angle(field, data):
# We make both r_vec and h_vec into unit vectors
center = data.get_field_parameter("center")
r_vec = np.array([data["x"] - center[0],
@@ -81,10 +111,10 @@
+ r_vec[1,:] * h_vec[1] \
+ r_vec[2,:] * h_vec[2]
return np.arccos(dp)
- add_field("DiskAngle", take_log=False,
- validators=[ValidateParameter("height_vector"),
- ValidateParameter("center")],
- display_field=False)
+ yt.add_field("disk_angle", take_log=False,
+ validators=[ValidateParameter("height_vector"),
+ ValidateParameter("center")],
+ display_field=False)
Note that we have added a few parameters below the main function; we specify
that we do not wish to display this field as logged, that we require both
@@ -102,10 +132,11 @@
.. code-block:: python
- def _SpecificAngularMomentum(field, data):
+ def _specific_angular_momentum(field, data):
if data.has_field_parameter("bulk_velocity"):
bv = data.get_field_parameter("bulk_velocity")
- else: bv = np.zeros(3, dtype='float64')
+ else:
+ bv = np.zeros(3, dtype='float64')
xv = data["velocity_x"] - bv[0]
yv = data["velocity_y"] - bv[1]
zv = data["velocity_z"] - bv[2]
@@ -116,12 +147,11 @@
v_vec = np.array([xv,yv,zv], dtype='float64')
return np.cross(r_vec, v_vec, axis=0)
add_field("specific_angular_momentum",
- vector_field=True, units=r"\rm{cm}^2/\rm{s}",
+ vector_field=True, units="cm**2/s",
validators=[ValidateParameter('center')])
-Here we define the SpecificAngularMomentum field, optionally taking a
-``bulk_velocity``, and returning a vector field that needs conversion by the
-function ``_convertSpecificAngularMomentum``.
+Here we define the ``specific_angular_momentum`` field, optionally taking a
+``bulk_velocity``, and returning a vector field.
It is also possible to define fields that depend on spatial derivatives of
other fields. Calculating the derivative for a single grid cell requires
@@ -186,30 +216,30 @@
using the Grid Data Format. The next time you start yt, it will check this file
and your field will be treated as native if present.
-The code below creates a new derived field called "Entr" and saves it to disk:
+The code below creates a new derived field called "dinosaurs" and saves it to disk:
.. code-block:: python
import yt
- from yt.mods import *
from yt.utilities.grid_data_format import writer
+ import numpy as np
- def _entropy(field, data) :
- return data["temperature"]*data["density"]**(-2./3.)
- add_field("Entr", function=_Entropy)
+ def _dinosaurs(field, data) :
+ return data["temperature"]*np.sqrt(data["density"])
+ yt.add_field("dinosaurs", function=_dinosaurs, units="K*sqrt(g)/sqrt(cm**3)")
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(pf, "entropy")
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ writer.save_field(ds, "dinosaurs")
This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
.. code-block:: python
- from yt.mods import *
+ import yt
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- data = pf.h.all_data()
- print data["Entr"]
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ dd = ds.all_data()
+ print dd["dinosaurs"]
you can work with the field exactly as before, without having to recompute it.
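[Editor's sketch] The save-and-reload round trip can be illustrated with a plain JSON sidecar standing in for yt's GDF writer; the helper names and the "_backup" file naming below are assumptions for illustration, not yt's API:

```python
import json
import os
import tempfile

# Sketch of the "_backup" sidecar idea, using JSON instead of yt's GDF writer.
def save_field(dump_path, name, values):
    # Write the field next to the datadump, as writer.save_field does with GDF.
    sidecar = dump_path + "_backup.json"
    with open(sidecar, "w") as f:
        json.dump({name: values}, f)

def load_field(dump_path, name):
    # On reload, the saved field is available without recomputation.
    with open(dump_path + "_backup.json") as f:
        return json.load(f)[name]

dump = os.path.join(tempfile.mkdtemp(), "sloshing_nomag2_hdf5_plt_cnt_0100")
save_field(dump, "dinosaurs", [1.5, 2.5, 3.5])
reloaded = load_field(dump, "dinosaurs")
```

The design point is the same either way: the sidecar lives next to the datadump, so the next load can pick the field up as if it were native.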
@@ -219,7 +249,7 @@
The arguments to :func:`add_field` are passed on to the constructor of
:class:`DerivedField`. :func:`add_field` takes care of finding the arguments
`function` and `convert_function` if it can, however. There are a number of
-options available, but the only mandatory ones are ``name`` and possibly
+options available, but the only mandatory ones are ``name``, ``units``, and possibly
``function``.
``name``
@@ -228,7 +258,8 @@
``function``
This is a function handle that defines the field
``units``
- This is a string that describes the units.
+ This is a string that describes the units. Powers must be in
+ python syntax (** instead of ^).
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
@@ -240,43 +271,14 @@
``validators``
(*Advanced*) This is a list of :class:`FieldValidator` objects, for instance to mandate
spatial data.
- ``vector_field``
- (*Advanced*) Is this field more than one value per cell?
``display_field``
(*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
(*Advanced*) If this is *True*, the field may not be in all the grids.
-
-How Do Units Work?
-------------------
-
-The best way to understand yt's unit system is to keep in mind that ``yt`` is really
-handling *two* unit systems: the internal unit system of the dataset and the
-physical (usually CGS) unit system. For simulation codes like FLASH and ORION
-that do all computations in CGS units internally, these two unit systems are the
-same. Most other codes do their calculations in a non-dimensionalized unit
-system chosen so that most primitive variables are as close to unity as
-possible. ``yt`` allows data access both in code units and physical units by
-providing a set of standard yt fields defined by all frontends.
-
-When a dataset is loaded, ``yt`` reads the conversion factors necessary to convert the
-data to CGS units from the datafile itself or from a dictionary passed to the
-``load`` command. Raw on-disk fields are presented to the user via the string
-names used in the dataset. For a full enumeration of the known field names for
-each of the different frontends, see the :ref:`field-list`. In general, no
-conversion factors are applied to on-disk fields.
-
-To access data in physical CGS units, yt recognizes a number of 'universal'
-field names. All primitive fields (density, pressure, magnetic field strength,
-etc.) are mapped to Enzo field names, listed in the :ref:`enzo-field-names`.
-The reason Enzo field names are used here is because ``yt`` was originally written
-to only read Enzo data. In the future we will switch to a new system of
-universal field names - this will also make it much easier to access raw on-disk
-Enzo data!
-
-In addition to primitive fields, yt provides an extensive list of "universal"
-derived fields that are accessible from any of the frontends. For a full
-listing of the universal derived fields, see :ref:`universal-field-list`.
+ ``output_units``
+ (*Advanced*) For fields that exist on disk, which we may want to convert to other
+ fields or that get aliased to themselves, we can specify a different
+ desired output unit than the unit found on disk.
Units for Cosmological Datasets
-------------------------------
https://bitbucket.org/yt_analysis/yt/commits/39b3e75f4526/
Changeset: 39b3e75f4526
Branch: yt-3.0
User: jzuhone
Date: 2014-07-22 21:17:26
Summary: Removed other obsolete references
Affected #: 3 files
diff -r 7f05dc865802e5ed16ff88941b386a0a944d43c2 -r 39b3e75f4526e507734679099cc2fe52656c4244 doc/source/analyzing/analysis_modules/halo_profiling.rst
--- a/doc/source/analyzing/analysis_modules/halo_profiling.rst
+++ /dev/null
@@ -1,451 +0,0 @@
-.. _halo_profiling:
-
-Halo Profiling
-==============
-.. sectionauthor:: Britton Smith <brittonsmith at gmail.com>,
- Stephen Skory <s at skory.us>
-
-The ``HaloProfiler`` provides a means of performing analysis on multiple halos
-in a parallel-safe way.
-
-The halo profiler performs three primary functions: radial profiles,
-projections, and custom analysis. See the cookbook for a recipe demonstrating
-all of these features.
-
-Configuring the Halo Profiler
------------------------------
-
-The only argument required to create a ``HaloProfiler`` object is the path
-to the dataset.
-
-.. code-block:: python
-
- from yt.analysis_modules.halo_profiler.api import *
- hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046")
-
-Most of the halo profiler's options are configured with additional keyword
-arguments:
-
- * **output_dir** (*str*): if specified, all output will be put into this path
- instead of in the dataset directories. Default: None.
-
- * **halos** (*str*): "multiple" for profiling more than one halo. In this mode
- halos are read in from a list or identified with a
- `halo finder <../cookbook/running_halofinder.html>`_. In "single" mode, the
- one and only halo center is identified automatically as the location of the
- peak in the density field. Default: "multiple".
-
- * **halo_list_file** (*str*): name of file containing the list of halos.
- The halo profiler will look for this file in the data directory.
- Default: "HopAnalysis.out".
-
- * **halo_list_format** (*str* or *dict*): the format of the halo list file.
- "yt_hop" for the format given by yt's halo finders. "enzo_hop" for the
- format written by enzo_hop. This keyword can also be given in the form of a
- dictionary specifying the column in which various properties can be found.
- For example, {"id": 0, "center": [1, 2, 3], "mass": 4, "radius": 5}.
- Default: "yt_hop".
-
- * **halo_finder_function** (*function*): If halos is set to multiple and the
- file given by halo_list_file does not exist, the halo finding function
- specified here will be called. Default: HaloFinder (yt_hop).
-
- * **halo_finder_args** (*tuple*): args given with call to halo finder function.
- Default: None.
-
- * **halo_finder_kwargs** (*dict*): kwargs given with call to halo finder
- function. Default: None.
-
- * **recenter** (*string* or function name): The name of a function
- that will be used to move the center of the halo for the purposes of
- analysis. See explanation and examples, below. Default: None, which
- is equivalent to the center of mass of the halo as output by the halo
- finder.
-
- * **halo_radius** (*float*): if no halo radii are provided in the halo list
- file, this parameter is used to specify the radius out to which radial
- profiles will be made. This keyword is also used when halos is set to
- single. Default: 0.1.
-
- * **radius_units** (*str*): the units of **halo_radius**.
- Default: "1" (code units).
-
- * **n_profile_bins** (*int*): the number of bins in the radial profiles.
- Default: 50.
-
- * **profile_output_dir** (*str*): the subdirectory, inside the data directory,
- in which radial profile output files will be created. The directory will be
- created if it does not exist. Default: "radial_profiles".
-
- * **projection_output_dir** (*str*): the subdirectory, inside the data
- directory, in which projection output files will be created. The directory
- will be created if it does not exist. Default: "projections".
-
- * **projection_width** (*float*): the width of halo projections.
- Default: 8.0.
-
- * **projection_width_units** (*str*): the units of projection_width.
- Default: "mpc".
-
- * **project_at_level** (*int* or "max"): the maximum refinement level to be
- included in projections. Default: "max" (maximum level within the dataset).
-
- * **velocity_center** (*list*): the method in which the halo bulk velocity is
- calculated (used for calculation of radial and tangential velocities). Valid
- options are:
- - ["bulk", "halo"] (Default): the velocity provided in the halo list
- - ["bulk", "sphere"]: the bulk velocity of the sphere centered on the halo center.
- - ["max", field]: the velocity of the cell that is the location of the maximum of the field specified.
-
- * **filter_quantities** (*list*): quantities from the original halo list
- file to be written out in the filtered list file. Default: ['id','center'].
-
- * **use_critical_density** (*bool*): if True, the definition of overdensity
- for virial quantities is calculated with respect to the critical
- density. If False, overdensity is with respect to mean matter density,
- which is lower by a factor of Omega_M. Default: False.
-
-Profiles
---------
-
-Once the halo profiler object has been instantiated, fields can be added for
-profiling with the :meth:`add_profile` method:
-
-.. code-block:: python
-
- hp.add_profile('cell_volume', weight_field=None, accumulation=True)
- hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
- hp.add_profile('density', weight_field=None, accumulation=False)
- hp.add_profile('temperature', weight_field='cell_mass', accumulation=False)
- hp.make_profiles(njobs=-1, prefilters=["halo['mass'] > 1e13"],
- filename='VirialQuantities.h5')
-
-The :meth:`make_profiles` method will begin the profiling. Use the
-**njobs** keyword to control the number of jobs over which the
-profiling is divided. Setting to -1 results in a single processor per
-halo. Setting to 1 results in all available processors working on the
-same halo. The prefilters keyword tells the profiler to skip all halos with
-masses (as loaded from the halo finder) less than a given amount. See below
-for more information. Additional keyword arguments are:
-
- * **filename** (*str*): If set, a file will be written with all of the
- filtered halos and the quantities returned by the filter functions.
- Default: None.
-
- * **prefilters** (*list*): A single dataset can contain thousands or tens of
- thousands of halos. Significant time can be saved by not profiling halos
- that are certain to not pass any filter functions in place. Simple filters
- based on quantities provided in the initial halo list can be used to filter
- out unwanted halos using this parameter. Default: None.
-
- * **njobs** (*int*): The number of jobs over which to split the profiling.
- Set to -1 so that each halo is done by a single processor. Default: -1.
-
- * **dynamic** (*bool*): If True, distribute halos using a task queue. If
- False, distribute halos evenly over all jobs. Default: False.
-
- * **profile_format** (*str*): The file format for the radial profiles,
- 'ascii' or 'hdf5'. Default: 'ascii'.
-
-.. image:: _images/profiles.png
- :width: 500
-
-Radial profiles of Overdensity (left) and Temperature (right) for five halos.
-
-Projections
------------
-
-The process of making projections is similar to that of profiles:
-
-.. code-block:: python
-
- hp.add_projection('density', weight_field=None)
- hp.add_projection('temperature', weight_field='density')
- hp.add_projection('metallicity', weight_field='density')
- hp.make_projections(axes=[0, 1, 2], save_cube=True, save_images=True,
- halo_list="filtered", njobs=-1)
-
-If **save_cube** is set to True, the projection data
-will be written to a set of hdf5 files
-in the directory given by **projection_output_dir**.
-The keyword, **halo_list**, can be
-used to select between the full list of halos ("all"),
-the filtered list ("filtered"), or
-an entirely new list given in the form of a file name.
-See :ref:`filter_functions` for a
-discussion of filtering halos. Use the **njobs** keyword to control
-the number of jobs over which the profiling is divided. Setting to -1
-results in a single processor per halo. Setting to 1 results in all
-available processors working on the same halo. The keyword arguments are:
-
- * **axes** (*list*): A list of the axes to project along, using the usual
- 0,1,2 convention. Default=[0,1,2].
-
- * **halo_list** (*str*) {'filtered', 'all'}: Which set of halos to make
- profiles of, either ones passed by the halo filters (if enabled/added), or
- all halos. Default='filtered'.
-
- * **save_images** (*bool*): Whether or not to save images of the projections.
- Default=False.
-
- * **save_cube** (*bool*): Whether or not to save the HDF5 files of the halo
- projections. Default=True.
-
- * **njobs** (*int*): The number of jobs over which to split the projections.
- Set to -1 so that each halo is done by a single processor. Default: -1.
-
- * **dynamic** (*bool*): If True, distribute halos using a task queue. If
- False, distribute halos evenly over all jobs. Default: False.
-
-.. image:: _images/projections.png
- :width: 500
-
-Projections of Density (top) and Temperature,
-weighted by Density (bottom), in the x (left),
-y (middle), and z (right) directions for a single halo with a width of 8 Mpc.
-
-Halo Filters
-------------
-
-Filters can be added to create a refined list of
-halos based on their profiles or to avoid
-profiling halos altogether based on information
-given in the halo list file.
-
-.. _filter_functions:
-
-Filter Functions
-^^^^^^^^^^^^^^^^
-
-It is often the case that one is looking to
-identify halos with a specific set of
-properties. This can be accomplished through the creation
-of filter functions. A filter
-function can take as many args and kwargs as you like,
-as long as the first argument is a
-profile object, or at least a dictionary which contains
-the profile arrays for each field.
-Filter functions must return a list of two things.
-The first is a True or False indicating
-whether the halo passed the filter.
-The second is a dictionary containing quantities
-calculated for that halo that will be written to a
-file if the halo passes the filter.
-A sample filter function based on virial quantities can be found in
-``yt/analysis_modules/halo_profiler/halo_filters.py``.
-
-Halo filtering takes place during the call to :meth:`make_profiles`.
-The :meth:`add_halo_filter` method is used to add a filter to be used
-during the profiling:
-
-.. code-block:: python
-
- hp.add_halo_filter(HP.VirialFilter, must_be_virialized=True,
- overdensity_field='ActualOverdensity',
- virial_overdensity=200,
- virial_filters=[['TotalMassMsun','>=','1e14']],
- virial_quantities=['TotalMassMsun','RadiusMpc'],
- use_log=True)
-
-The addition above will calculate and return virial quantities,
-mass and radius, for an
-overdensity of 200. In order to pass the filter, at least one
-point in the profile must be
-above the specified overdensity and the virial mass must be at
-least 1e14 solar masses. The **use_log** keyword indicates that interpolation
-should be done in log space. If
-the VirialFilter function has been added to the filter list,
-the halo profiler will make
-sure that the fields necessary for calculating virial quantities are added.
-As many filters as desired can be added. If filters have been added,
-the next call to :meth:`make_profiles` will filter by all of
-the added filter functions:
-
-.. code-block:: python
-
- hp.make_profiles(filename="FilteredQuantities.out")
-
-If the **filename** keyword is set, a file will be written with all of the
-filtered halos and the quantities returned by the filter functions.
-
-.. note:: If the profiles have already been run, the halo profiler will read
- in the previously created output files instead of re-running the profiles.
- The halo profiler will check to make sure the output file contains all of
- the requested halo fields. If not, the profile will be made again from
- scratch.
-
-.. _halo_profiler_pre_filters:
-
-Pre-filters
-^^^^^^^^^^^
-
-A single dataset can contain thousands or tens of thousands of halos.
-Significant time can
-be saved by not profiling halos that are certain to not pass any filter
-functions in place.
-Simple filters based on quantities provided in the initial halo list
-can be used to filter
-out unwanted halos using the **prefilters** keyword:
-
-.. code-block:: python
-
- hp.make_profiles(filename="FilteredQuantities.out",
- prefilters=["halo['mass'] > 1e13"])
-
-Arguments provided with the **prefilters** keyword should be given
-as a list of strings.
-Each string in the list will be evaluated with an *eval*.
-
-.. note:: If a VirialFilter function has been added with a filter based
- on mass (as in the example above), a prefilter will be automatically
- added to filter out halos with masses greater or less than (depending
- on the conditional of the filter) a factor of ten of the specified
- virial mass.
-
-Recentering the Halo For Analysis
----------------------------------
-
-It is possible to move the center of the halo to a new point using an
-arbitrary function for making profiles.
-By default, the center is provided by the halo finder,
-which outputs the center of mass of the particles. For the purposes of
-analysis, it may be important to recenter onto a gas density maximum,
-or a temperature minimum.
-
-There are a number of built-in functions to do this, listed below.
-Each of the functions uses mass-weighted fields for the calculations
-of new center points.
-To use
-them, supply the HaloProfiler with the ``recenter`` option and
-the name of the function, as in the example below.
-
-.. code-block:: python
-
- hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046",
- recenter="Max_Dark_Matter_Density")
-
-Additional options are:
-
- * *Min_Dark_Matter_Density* - Recenter on the point of minimum dark matter
- density in the halo.
-
- * *Max_Dark_Matter_Density* - Recenter on the point of maximum dark matter
- density in the halo.
-
- * *CoM_Dark_Matter_Density* - Recenter on the center of mass of the dark
- matter density field. This will be very similar to what the halo finder
- provides, but not precisely similar.
-
- * *Min_Gas_Density* - Recenter on the point of minimum gas density in the
- halo.
-
- * *Max_Gas_Density* - Recenter on the point of maximum gas density in the
- halo.
-
- * *CoM_Gas_Density* - Recenter on the center of mass of the gas density field
- in the halo.
-
- * *Min_Total_Density* - Recenter on the point of minimum total (gas + dark
- matter) density in the halo.
-
- * *Max_Total_Density* - Recenter on the point of maximum total density in the
- halo.
-
- * *CoM_Total_Density* - Recenter on the center of mass for the total density
- in the halo.
-
- * *Min_Temperature* - Recenter on the point of minimum temperature in the
- halo.
-
- * *Max_Temperature* - Recenter on the point of maximum temperature in the
- halo.
-
-It is also possible to supply a user-defined function to the HaloProfiler.
-This can be used if the pre-defined functions above are not sufficient.
-The function takes a single argument, a data container for the halo,
-which is a sphere. The function returns a 3-list with the new center.
-
-In this example below, a function is used such that the halos will be
-re-centered on the point of absolute minimum temperature, that is not
-mass weighted.
-
-.. code-block:: python
-
- from yt.mods import *
-
- def find_min_temp(sphere):
- ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
- return [mx,my,mz]
-
- hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046", recenter=find_min_temp)
-
-It is possible to make more complicated functions. This example below extends
-the example above to include a distance control that prevents the center from
-being moved too far. If the recenter moves too far, ``[-1, -1, -1]`` is
-returned which will prevent the halo from being profiled.
-Any triplet of values less than the ``domain_left_edge`` will suffice.
-There will be a note made in the output (stderr) showing which halos were
-skipped.
-
-.. code-block:: python
-
- from yt.mods import *
- from yt.utilities.math_utils import periodic_dist
-
- def find_min_temp_dist(sphere):
- old = sphere.center
- ma, mini, mx, my, mz, mg = sphere.quantities['MinLocation']('temperature')
- d = sphere.pf['kpc'] * periodic_dist(old, [mx, my, mz],
- sphere.pf.domain_right_edge - sphere.pf.domain_left_edge)
- # If new center farther than 5 kpc away, don't recenter
- if d > 5.: return [-1, -1, -1]
- return [mx,my,mz]
-
- hp = HaloProfiler("enzo_tiny_cosmology/DD0046/DD0046",
- recenter=find_min_temp_dist)
-
-Custom Halo Analysis
---------------------
-
-Besides radial profiles and projections, the halo profiler has the
-ability to run custom analysis functions on each halo. Custom halo
-analysis functions take two arguments: a halo dictionary containing
-the id, center, etc; and a sphere object. The example function shown
-below creates a 2D profile of the total mass in bins of density and
-temperature for a given halo.
-
-.. code-block:: python
-
- from yt.mods import *
- from yt.data_objects.profiles import BinnedProfile2D
-
- def halo_2D_profile(halo, sphere):
- "Make a 2D profile for a halo."
- my_profile = BinnedProfile2D(sphere,
- 128, 'density', 1e-30, 1e-24, True,
- 128, 'temperature', 1e2, 1e7, True,
- end_collect=False)
- my_profile.add_fields('cell_mass', weight=None, fractional=False)
- my_filename = os.path.join(sphere.pf.fullpath, '2D_profiles',
- 'Halo_%04d.h5' % halo['id'])
- my_profile.write_out_h5(my_filename)
-
-Using the :meth:`analyze_halo_spheres` function, the halo profiler
-will create a sphere centered on each halo, and perform the analysis
-from the custom routine.
-
-.. code-block:: python
-
- hp.analyze_halo_sphere(halo_2D_profile, halo_list='filtered',
- analysis_output_dir='2D_profiles',
- njobs=-1, dynamic=False)
-
-Just like with the :meth:`make_projections` function, the keyword,
-**halo_list**, can be used to select between the full list of halos
-("all"), the filtered list ("filtered"), or an entirely new list given
-in the form of a file name. If the **analysis_output_dir** keyword is
-set, the halo profiler will make sure the desired directory exists in
-a parallel-safe manner. Use the **njobs** keyword to control the
-number of jobs over which the profiling is divided. Setting to -1
-results in a single processor per halo. Setting to 1 results in all
-available processors working on the same halo.
diff -r 7f05dc865802e5ed16ff88941b386a0a944d43c2 -r 39b3e75f4526e507734679099cc2fe52656c4244 doc/source/analyzing/analysis_modules/radial_column_density.rst
--- a/doc/source/analyzing/analysis_modules/radial_column_density.rst
+++ /dev/null
@@ -1,85 +0,0 @@
-.. _radial-column-density:
-
-Radial Column Density
-=====================
-.. sectionauthor:: Stephen Skory <s at skory.us>
-.. versionadded:: 2.3
-
-This module allows the calculation of column densities around a point over a
-field such as ``NumberDensity`` or ``Density``.
-This uses :ref:`healpix_volume_rendering` to interpolate column densities
-on the grid cells.
-
-Details
--------
-
-This module allows the calculation of column densities around a single point.
-For example, this is useful for looking at the gas around a radiating source.
-Briefly summarized, the calculation is performed by first creating a number
-of HEALPix shells around the central point.
-Next, the value of the column density at cell centers is found by
-linearly interpolating the values on the inner and outer shell.
-This is added as derived field, which can be used like any other derived field.
-
-Basic Example
--------------
-
-In this simple example below, the radial column density for the field
-``NumberDensity`` is calculated and added as a derived field named
-``RCDNumberDensity``.
-The calculations will use the starting point of (x, y, z) = (0.5, 0.5, 0.5) and
-go out to a maximum radius of 0.5 in code units.
-Due to the way normalization is handled in HEALPix, the column density
-calculation can extend out only as far as the nearest face of the volume.
-For example, with a center point of (0.2, 0.3, 0.4), the column density
-is calculated out to only a radius of 0.2.
-The column density will be output as zero (0.0) outside the maximum radius.
-Just like a real number column density, when the derived is added using
-``add_field``, we give the units as :math:`1/\rm{cm}^2`.
-
-.. code-block:: python
-
- from yt.mods import *
- from yt.analysis_modules.radial_column_density.api import *
- pf = load("data0030")
-
- rcdnumdens = RadialColumnDensity(pf, 'NumberDensity', [0.5, 0.5, 0.5],
- max_radius = 0.5)
- def _RCDNumberDensity(field, data, rcd = rcdnumdens):
- return rcd._build_derived_field(data)
- add_field('RCDNumberDensity', _RCDNumberDensity, units=r'1/\rm{cm}^2')
-
- dd = pf.h.all_data()
- print dd['RCDNumberDensity']
-
-The field ``RCDNumberDensity`` can be used just like any other derived field
-in yt.
-
-Additional Parameters
----------------------
-
-Each of these parameters is added to the call to ``RadialColumnDensity()``,
-just like ``max_radius`` is used above.
-
- * ``steps`` : integer - Because this implementation uses linear
- interpolation to calculate the column
- density at each cell, the accuracy of the solution goes up as the number of
- HEALPix surfaces is increased.
- The ``steps`` parameter controls the number of HEALPix surfaces, and a larger
- number is more accurate, but slower. Default = 10.
-
- * ``base`` : string - This controls where the surfaces are placed, with
- linear "lin" or logarithmic "log" spacing. The inner-most
- surface is always set to the size of the smallest cell.
- Default = "lin".
-
- * ``Nside`` : int
- The resolution of column density calculation as performed by
- HEALPix. Higher numbers mean higher quality. Max = 8192.
- Default = 32.
-
- * ``ang_divs`` : imaginary integer
- This number controls the gridding of the HEALPix projection onto
- the spherical surfaces. Higher numbers mean higher quality.
- Default = 800j.
-
diff -r 7f05dc865802e5ed16ff88941b386a0a944d43c2 -r 39b3e75f4526e507734679099cc2fe52656c4244 doc/source/analyzing/analysis_modules/synthetic_observation.rst
--- a/doc/source/analyzing/analysis_modules/synthetic_observation.rst
+++ b/doc/source/analyzing/analysis_modules/synthetic_observation.rst
@@ -16,6 +16,5 @@
star_analysis
xray_emission_fields
sunyaev_zeldovich
- radial_column_density
photon_simulator
ppv_cubes
https://bitbucket.org/yt_analysis/yt/commits/587382526d7a/
Changeset: 587382526d7a
Branch: yt-3.0
User: jzuhone
Date: 2014-07-22 21:17:50
Summary: Merge
Affected #: 1 file
diff -r 39b3e75f4526e507734679099cc2fe52656c4244 -r 587382526d7ae98c8db0d1db89aada029c9ffd18 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -11,7 +11,7 @@
So once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
-collection of data, neutral to its size, shape, and type. (All fields should
+collection of data, neutral to its size, shape, and type. (All fields should
be provided as 64-bit floats.)
A simple example of this is the pressure field, which demonstrates the ease of
@@ -19,12 +19,14 @@
.. code-block:: python
- def _Pressure(field, data):
- return (data.pf["Gamma"] - 1.0) * \
+ import yt
+
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
-Note that we do a couple different things here. We access the "Gamma"
-parameter from the parameter file, we access the "density" field and we access
+Note that we do a couple different things here. We access the "gamma"
+parameter from the dataset, we access the "density" field and we access
the "thermal_energy" field. "thermal_energy" is, in fact, another derived field!
("thermal_energy" deals with the distinction in storage of energy between dual
energy formalism and non-DEF.) We don't do any loops, we don't do any
@@ -37,12 +39,12 @@
.. code-block:: python
- add_field("pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
+ yt.add_field("pressure", function=_pressure, units="dyne/cm**2")
We feed it the name of the field, the name of the function, and the
-units. Note that the units parameter is a "raw" string, with some
-LaTeX-style formatting -- Matplotlib actually has a MathText rendering
-engine, so if you include LaTeX it will be rendered appropriately.
+units. Note that the units parameter is a "raw" string, in the format that ``yt`` uses
+in its `symbolic units implementation <units>`_ (e.g., employing only unit names, numbers,
+and mathematical operators in the string, and using ``"**"`` for exponentiation).
.. One very important thing to note about the call to ``add_field`` is
.. that it **does not** need to specify the function name **if** the
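Independent of yt, the array operation inside ``_pressure`` can be sanity-checked with plain NumPy; the gamma value and sample arrays below are illustrative stand-ins for what the ``data`` object would supply, not values from any dataset:

```python
import numpy as np

# Illustrative stand-ins for data["density"] and data["thermal_energy"];
# in yt these arrive as 64-bit float arrays from the data object.
gamma = 5.0 / 3.0
density = np.array([1.0e-26, 2.0e-26], dtype="float64")       # g/cm**3
thermal_energy = np.array([1.0e14, 5.0e13], dtype="float64")  # erg/g

# The same expression as the _pressure field function above.
pressure = (gamma - 1.0) * density * thermal_energy
```

The point of the field-function convention is exactly this shape-neutrality: the expression works unchanged whether ``density`` holds two values or two million.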
@@ -53,53 +55,39 @@
We suggest that you name the function that creates a derived field
with the intended field name prefixed by a single underscore, as in
-the ``_Pressure`` example above.
+the ``_pressure`` example above.
+
+:func:`add_field` can be invoked in two other ways. The first is by the function
+decorator :func:`derived_field`. The following code is equivalent to the previous
+example:
+
+.. code-block:: python
+
+ from yt import derived_field
+
+ @derived_field(name="pressure", units="dyne/cm**2")
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
+ data["density"] * data["thermal_energy"]
+
+The :func:`derived_field` decorator takes the same arguments as :func:`add_field`,
+and is often a more convenient shorthand in cases where you want to quickly set up
+a new field.
+
+Defining derived fields in the above fashion must be done before a dataset is loaded,
+in order for the dataset to recognize them. If you want to set up a derived field after you
+have loaded a dataset, or if you only want to set up a derived field for a particular
+dataset, there is an :meth:`add_field` method that hangs off dataset objects. The calling
+syntax is the same:
+
+.. code-block:: python
+
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ ds.add_field("pressure", function=_pressure, units="dyne/cm**2")
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
-.. _conversion-factors:
-
-Conversion Factors
-~~~~~~~~~~~~~~~~~~
-
-When creating a derived field, ``yt`` does not by default do unit
-conversion. All of the fields fed into the field are pre-supposed to
-be in CGS. If the field does not need any constants applied after
-that, you are done. If it does, you should define a second function
-that applies the proper multiple in order to return the desired units
-and use the argument ``convert_function`` to ``add_field`` to point to
-it.
-
-The argument that you pass to ``convert_function`` will be dependent on
-what fields are input into your derived field, and in what form they
-are passed from their native format. For enzo fields, nearly all the
-native on-disk fields are in CGS units already (except for ``dx``, ``dy``,
-and ``dz`` fields), so you typically only need to convert for
-off-standard fields taking into account where those fields are
-used in the final output derived field. For other codes, it can vary.
-
-You can check to see the units associated with any field in a dataset
-from any code by using the ``_units`` attribute. Here is an example
-with one of our sample FLASH datasets available publicly at
-http://yt-project.org/data :
-
-.. code-block:: python
-
- >>> from yt.mods import *
- >>> pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
- >>> pf.field_list
- ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
- >>> pf.field_info['dens']._units
- '\\rm{g}/\\rm{cm}^{3}'
- >>> pf.field_info['temp']._units
- '\\rm{K}'
- >>> pf.field_info['velx']._units
- '\\rm{cm}/\\rm{s}'
-
-Thus if you were using any of these fields as input to your derived field, you
-wouldn't have to worry about unit conversion because they're already in CGS.
-
Some More Complicated Examples
------------------------------
@@ -111,7 +99,7 @@
.. code-block:: python
- def _DiskAngle(field, data):
+ def _disk_angle(field, data):
# We make both r_vec and h_vec into unit vectors
center = data.get_field_parameter("center")
r_vec = np.array([data["x"] - center[0],
@@ -123,10 +111,10 @@
+ r_vec[1,:] * h_vec[1] \
+ r_vec[2,:] * h_vec[2]
return np.arccos(dp)
- add_field("DiskAngle", take_log=False,
- validators=[ValidateParameter("height_vector"),
- ValidateParameter("center")],
- display_field=False)
+ yt.add_field("disk_angle", take_log=False,
+ validators=[ValidateParameter("height_vector"),
+ ValidateParameter("center")],
+ display_field=False)
Note that we have added a few parameters below the main function; we specify
that we do not wish to display this field as logged, that we require both
@@ -144,10 +132,11 @@
.. code-block:: python
- def _SpecificAngularMomentum(field, data):
+ def _specific_angular_momentum(field, data):
if data.has_field_parameter("bulk_velocity"):
bv = data.get_field_parameter("bulk_velocity")
- else: bv = np.zeros(3, dtype='float64')
+ else:
+ bv = np.zeros(3, dtype='float64')
xv = data["velocity_x"] - bv[0]
yv = data["velocity_y"] - bv[1]
zv = data["velocity_z"] - bv[2]
@@ -157,15 +146,12 @@
r_vec = coords - np.reshape(center,new_shape)
v_vec = np.array([xv,yv,zv], dtype='float64')
return np.cross(r_vec, v_vec, axis=0)
- def _convertSpecificAngularMomentum(data):
- return data.convert("cm")
- add_field("SpecificAngularMomentum",
- convert_function=_convertSpecificAngularMomentum, vector_field=True,
- units=r"\rm{cm}^2/\rm{s}", validators=[ValidateParameter('center')])
+ add_field("specific_angular_momentum",
+ vector_field=True, units="cm**2/s",
+ validators=[ValidateParameter('center')])
-Here we define the SpecificAngularMomentum field, optionally taking a
-``bulk_velocity``, and returning a vector field that needs conversion by the
-function ``_convertSpecificAngularMomentum``.
+Here we define the ``specific_angular_momentum`` field, optionally taking a
+``bulk_velocity``, and returning a vector field.
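The heart of the field above is just r × v evaluated per cell. A self-contained NumPy sketch of that step, with made-up positions and bulk-subtracted velocities shaped ``(3, N)`` as in the field function:

```python
import numpy as np

# Two sample cells: rows are (x, y, z) components, columns are cells.
r_vec = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]], dtype="float64")  # positions relative to center
v_vec = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]], dtype="float64")  # velocities minus bulk velocity

# Specific angular momentum per cell: r x v, components along axis 0.
l_vec = np.cross(r_vec, v_vec, axis=0)
```

For the first cell, a radius along x crossed with a velocity along y gives a unit z component, as expected.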
It is also possible to define fields that depend on spatial derivatives of
other fields. Calculating the derivative for a single grid cell requires
@@ -230,29 +216,30 @@
using the Grid Data Format. The next time you start yt, it will check this file
and your field will be treated as native if present.
-The code below creates a new derived field called "Entr" and saves it to disk:
+The code below creates a new derived field called "dinosaurs" and saves it to disk:
.. code-block:: python
- from yt.mods import *
+ import yt
from yt.utilities.grid_data_format import writer
+ import numpy as np
- def _Entropy(field, data) :
- return data["temperature"]*data["density"]**(-2./3.)
- add_field("Entr", function=_Entropy)
+ def _dinosaurs(field, data) :
+ return data["temperature"]*np.sqrt(data["density"])
+ yt.add_field("dinosaurs", units="K*sqrt(g)/sqrt(cm**3)")
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(pf, "Entr")
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ writer.save_field(ds, "dinosaurs")
This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
.. code-block:: python
- from yt.mods import *
+ import yt
- pf = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- data = pf.h.all_data()
- print data["Entr"]
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ dd = ds.all_data()
+ print dd["entropy"]
you can work with the field exactly as before, without having to recompute it.
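The round-trip idea behind ``save_field`` can be sketched in miniature with NumPy's own file format; this only illustrates caching a computed field to disk and reloading it instead of recomputing, not the GDF backup file itself (the file name below is made up):

```python
import os
import tempfile

import numpy as np

# A "derived field" computed once from illustrative sample arrays...
temperature = np.array([1.0e6, 4.0e6], dtype="float64")  # K
density = np.array([1.0e-26, 4.0e-26], dtype="float64")  # g/cm**3
dinosaurs = temperature * np.sqrt(density)

# ...saved next to the data, then read back rather than recomputed.
backup = os.path.join(tempfile.gettempdir(), "dinosaurs_backup.npy")
np.save(backup, dinosaurs)
restored = np.load(backup)
```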
@@ -262,22 +249,17 @@
The arguments to :func:`add_field` are passed on to the constructor of
:class:`DerivedField`. :func:`add_field` takes care of finding the arguments
`function` and `convert_function` if it can, however. There are a number of
-options available, but the only mandatory ones are ``name`` and possibly
+options available, but the only mandatory ones are ``name``, ``units``, and possibly
``function``.
``name``
This is the name of the field -- how you refer to it. For instance,
- ``Pressure`` or ``H2I_Fraction``.
+ ``pressure`` or ``magnetic_field_strength``.
``function``
This is a function handle that defines the field
- ``convert_function``
- This is the function that converts the field to CGS. All inputs to this
- function are mandated to already *be* in CGS.
``units``
- This is a mathtext (LaTeX-like) string that describes the units.
- ``projected_units``
- This is a mathtext (LaTeX-like) string that describes the units if the
- field has been projected without a weighting.
+ This is a string that describes the units. Powers must be in
+ Python syntax (** instead of ^).
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
@@ -289,43 +271,14 @@
``validators``
(*Advanced*) This is a list of :class:`FieldValidator` objects, for instance to mandate
spatial data.
- ``vector_field``
- (*Advanced*) Is this field more than one value per cell?
``display_field``
(*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
(*Advanced*) If this is *True*, the field may not be in all the grids.
-
-How Do Units Work?
-------------------
-
-The best way to understand yt's unit system is to keep in mind that ``yt`` is really
-handling *two* unit systems: the internal unit system of the dataset and the
-physical (usually CGS) unit system. For simulation codes like FLASH and ORION
-that do all computations in CGS units internally, these two unit systems are the
-same. Most other codes do their calculations in a non-dimensionalized unit
-system chosen so that most primitive variables are as close to unity as
-possible. ``yt`` allows data access both in code units and physical units by
-providing a set of standard yt fields defined by all frontends.
-
-When a dataset is loaded, ``yt`` reads the conversion factors necessary convert the
-data to CGS units from the datafile itself or from a dictionary passed to the
-``load`` command. Raw on-disk fields are presented to the user via the string
-names used in the dataset. For a full enumeration of the known field names for
-each of the different frontends, see the :ref:`field-list`. In general, no
-conversion factors are applied to on-disk fields.
-
-To access data in physical CGS units, yt recognizes a number of 'universal'
-field names. All primitive fields (density, pressure, magnetic field strength,
-etc.) are mapped to Enzo field names, listed in the :ref:`enzo-field-names`.
-The reason Enzo field names are used here is because ``yt`` was originally written
-to only read Enzo data. In the future we will switch to a new system of
-universal field names - this will also make it much easier to access raw on-disk
-Enzo data!
-
-In addition to primitive fields, yt provides an extensive list of "universal"
-derived fields that are accessible from any of the frontends. For a full
-listing of the universal derived fields, see :ref:`universal-field-list`.
+ ``output_units``
+ (*Advanced*) For fields that exist on disk, which we may want to convert to other
+ fields or that get aliased to themselves, we can specify a different
+ desired output unit than the unit found on disk.
Units for Cosmological Datasets
-------------------------------
@@ -355,23 +308,3 @@
``pch``
Proper parsecs, normalized by the scaled hubble constant, :math:`\rm{pc}/h`.
-
-Which Enzo Field names Does ``yt`` Know About?
-----------------------------------------------
-
-These are the names of primitive fields in the Enzo AMR code. ``yt`` was originally
-written to analyze Enzo data so the default field names used by the various
-frontends are the same as Enzo fields.
-
-.. note::
-
- Enzo field names are *universal* yt fields. All frontends define conversions
- to Enzo fields. Enzo fields are always in CGS.
-
-* Density
-* Temperature
-* Gas Energy
-* Total Energy
-* [xyz]-velocity
-* Species fields: HI, HII, Electron, HeI, HeII, HeIII, HM, H2I, H2II, DI, DII, HDI
-* Particle mass, velocity,
https://bitbucket.org/yt_analysis/yt/commits/4f7533cb36b6/
Changeset: 4f7533cb36b6
Branch: yt-3.0
User: jzuhone
Date: 2014-07-22 21:21:05
Summary: Merge
Affected #: 2 files
diff -r e816afd0f0637f5a4eb1136a01309ad4875a97ce -r 4f7533cb36b644a4d5cf296d80f7ab97f8558299 doc/source/analyzing/analysis_modules/synthetic_observation.rst
--- a/doc/source/analyzing/analysis_modules/synthetic_observation.rst
+++ b/doc/source/analyzing/analysis_modules/synthetic_observation.rst
@@ -16,6 +16,5 @@
star_analysis
xray_emission_fields
sunyaev_zeldovich
- radial_column_density
photon_simulator
ppv_cubes
diff -r e816afd0f0637f5a4eb1136a01309ad4875a97ce -r 4f7533cb36b644a4d5cf296d80f7ab97f8558299 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -11,7 +11,7 @@
So once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
-collection of data, neutral to its size, shape, and type. (All fields should
+collection of data, neutral to its size, shape, and type. (All fields should
be provided as 64-bit floats.)
A simple example of this is the pressure field, which demonstrates the ease of
@@ -19,11 +19,13 @@
.. code-block:: python
- def _Pressure(field, data):
- return (data.ds.gamma - 1.0) * \
+ import yt
+
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
-Note that we do a couple different things here. We access the "Gamma"
+Note that we do a couple different things here. We access the "gamma"
parameter from the dataset, we access the "density" field and we access
the "thermal_energy" field. "thermal_energy" is, in fact, another derived field!
("thermal_energy" deals with the distinction in storage of energy between dual
@@ -37,12 +39,12 @@
.. code-block:: python
- add_field("pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
+ yt.add_field("pressure", function=_pressure, units="dyne/cm**2")
We feed it the name of the field, the name of the function, and the
-units. Note that the units parameter is a "raw" string, with some
-LaTeX-style formatting -- Matplotlib actually has a MathText rendering
-engine, so if you include LaTeX it will be rendered appropriately.
+units. Note that the units parameter is a "raw" string, in the format that ``yt`` uses
+in its `symbolic units implementation <units>`_ (e.g., employing only unit names, numbers,
+and mathematical operators in the string, and using ``"**"`` for exponentiation).
.. One very important thing to note about the call to ``add_field`` is
.. that it **does not** need to specify the function name **if** the
@@ -53,53 +55,39 @@
We suggest that you name the function that creates a derived field
with the intended field name prefixed by a single underscore, as in
-the ``_Pressure`` example above.
+the ``_pressure`` example above.
+
+:func:`add_field` can be invoked in two other ways. The first is by the function
+decorator :func:`derived_field`. The following code is equivalent to the previous
+example:
+
+.. code-block:: python
+
+ from yt import derived_field
+
+ @derived_field(name="pressure", units="dyne/cm**2")
+ def _pressure(field, data):
+ return (data.pf.gamma - 1.0) * \
+ data["density"] * data["thermal_energy"]
+
+The :func:`derived_field` decorator takes the same arguments as :func:`add_field`,
+and is often a more convenient shorthand in cases where you want to quickly set up
+a new field.
+
+Defining derived fields in the above fashion must be done before a dataset is loaded,
+in order for the dataset to recognize them. If you want to set up a derived field after you
+have loaded a dataset, or if you only want to set up a derived field for a particular
+dataset, there is an :meth:`add_field` method that hangs off dataset objects. The calling
+syntax is the same:
+
+.. code-block:: python
+
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ ds.add_field("pressure", function=_pressure, units="dyne/cm**2")
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
-.. _conversion-factors:
-
-Conversion Factors
-~~~~~~~~~~~~~~~~~~
-
-When creating a derived field, ``yt`` does not by default do unit
-conversion. All of the fields fed into the field are pre-supposed to
-be in CGS. If the field does not need any constants applied after
-that, you are done. If it does, you should define a second function
-that applies the proper multiple in order to return the desired units
-and use the argument ``convert_function`` to ``add_field`` to point to
-it.
-
-The argument that you pass to ``convert_function`` will be dependent on
-what fields are input into your derived field, and in what form they
-are passed from their native format. For enzo fields, nearly all the
-native on-disk fields are in CGS units already (except for ``dx``, ``dy``,
-and ``dz`` fields), so you typically only need to convert for
-off-standard fields taking into account where those fields are
-used in the final output derived field. For other codes, it can vary.
-
-You can check to see the units associated with any field in a dataset
-from any code by using the ``_units`` attribute. Here is an example
-with one of our sample FLASH datasets available publicly at
-http://yt-project.org/data :
-
-.. code-block:: python
-
- >>> from yt.mods import *
- >>> ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
- >>> ds.field_list
- ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
- >>> ds.field_info['dens']._units
- '\\rm{g}/\\rm{cm}^{3}'
- >>> ds.field_info['temp']._units
- '\\rm{K}'
- >>> ds.field_info['velx']._units
- '\\rm{cm}/\\rm{s}'
-
-Thus if you were using any of these fields as input to your derived field, you
-wouldn't have to worry about unit conversion because they're already in CGS.
-
Some More Complicated Examples
------------------------------
@@ -111,7 +99,7 @@
.. code-block:: python
- def _DiskAngle(field, data):
+ def _disk_angle(field, data):
# We make both r_vec and h_vec into unit vectors
center = data.get_field_parameter("center")
r_vec = np.array([data["x"] - center[0],
@@ -123,10 +111,10 @@
+ r_vec[1,:] * h_vec[1] \
+ r_vec[2,:] * h_vec[2]
return np.arccos(dp)
- add_field("DiskAngle", take_log=False,
- validators=[ValidateParameter("height_vector"),
- ValidateParameter("center")],
- display_field=False)
+ yt.add_field("disk_angle", take_log=False,
+ validators=[ValidateParameter("height_vector"),
+ ValidateParameter("center")],
+ display_field=False)
Note that we have added a few parameters below the main function; we specify
that we do not wish to display this field as logged, that we require both
@@ -144,10 +132,11 @@
.. code-block:: python
- def _SpecificAngularMomentum(field, data):
+ def _specific_angular_momentum(field, data):
if data.has_field_parameter("bulk_velocity"):
bv = data.get_field_parameter("bulk_velocity")
- else: bv = np.zeros(3, dtype='float64')
+ else:
+ bv = np.zeros(3, dtype='float64')
xv = data["velocity_x"] - bv[0]
yv = data["velocity_y"] - bv[1]
zv = data["velocity_z"] - bv[2]
@@ -157,15 +146,12 @@
r_vec = coords - np.reshape(center,new_shape)
v_vec = np.array([xv,yv,zv], dtype='float64')
return np.cross(r_vec, v_vec, axis=0)
- def _convertSpecificAngularMomentum(data):
- return data.convert("cm")
- add_field("SpecificAngularMomentum",
- convert_function=_convertSpecificAngularMomentum, vector_field=True,
- units=r"\rm{cm}^2/\rm{s}", validators=[ValidateParameter('center')])
+ add_field("specific_angular_momentum",
+ vector_field=True, units="cm**2/s",
+ validators=[ValidateParameter('center')])
-Here we define the SpecificAngularMomentum field, optionally taking a
-``bulk_velocity``, and returning a vector field that needs conversion by the
-function ``_convertSpecificAngularMomentum``.
+Here we define the ``specific_angular_momentum`` field, optionally taking a
+``bulk_velocity``, and returning a vector field.
It is also possible to define fields that depend on spatial derivatives of
other fields. Calculating the derivative for a single grid cell requires
@@ -178,7 +164,7 @@
def _DivV(field, data):
# We need to set up stencils
- if data.ds["HydroMethod"] == 2:
+ if data.pf["HydroMethod"] == 2:
sl_left = slice(None,-2,None)
sl_right = slice(1,-1,None)
div_fac = 1.0
@@ -189,11 +175,11 @@
ds = div_fac * data['dx'].flat[0]
f = data["velocity_x"][sl_right,1:-1,1:-1]/ds
f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
- if data.ds.dimensionality > 1:
+ if data.pf.dimensionality > 1:
ds = div_fac * data['dy'].flat[0]
f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
- if data.ds.dimensionality > 2:
+ if data.pf.dimensionality > 2:
ds = div_fac * data['dz'].flat[0]
f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
@@ -230,29 +216,30 @@
using the Grid Data Format. The next time you start yt, it will check this file
and your field will be treated as native if present.
-The code below creates a new derived field called "Entr" and saves it to disk:
+The code below creates a new derived field called "dinosaurs" and saves it to disk:
.. code-block:: python
- from yt.mods import *
+ import yt
from yt.utilities.grid_data_format import writer
+ import numpy as np
- def _Entropy(field, data) :
- return data["temperature"]*data["density"]**(-2./3.)
- add_field("Entr", function=_Entropy)
+ def _dinosaurs(field, data) :
+ return data["temperature"]*np.sqrt(data["density"])
+ yt.add_field("dinosaurs", units="K*sqrt(g)/sqrt(cm**3)")
- ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(ds, "Entr")
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ writer.save_field(ds, "dinosaurs")
This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
.. code-block:: python
- from yt.mods import *
+ import yt
- ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- data = ds.all_data()
- print data["Entr"]
+ ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
+ dd = ds.all_data()
+ print dd["entropy"]
you can work with the field exactly as before, without having to recompute it.
@@ -262,22 +249,17 @@
The arguments to :func:`add_field` are passed on to the constructor of
:class:`DerivedField`. :func:`add_field` takes care of finding the arguments
`function` and `convert_function` if it can, however. There are a number of
-options available, but the only mandatory ones are ``name`` and possibly
+options available, but the only mandatory ones are ``name``, ``units``, and possibly
``function``.
``name``
This is the name of the field -- how you refer to it. For instance,
- ``Pressure`` or ``H2I_Fraction``.
+ ``pressure`` or ``magnetic_field_strength``.
``function``
This is a function handle that defines the field
- ``convert_function``
- This is the function that converts the field to CGS. All inputs to this
- function are mandated to already *be* in CGS.
``units``
- This is a mathtext (LaTeX-like) string that describes the units.
- ``projected_units``
- This is a mathtext (LaTeX-like) string that describes the units if the
- field has been projected without a weighting.
+ This is a string that describes the units. Powers must be in
+ python syntax (** instead of ^).
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
@@ -289,43 +271,14 @@
``validators``
(*Advanced*) This is a list of :class:`FieldValidator` objects, for instance to mandate
spatial data.
- ``vector_field``
- (*Advanced*) Is this field more than one value per cell?
``display_field``
(*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
(*Advanced*) If this is *True*, the field may not be in all the grids.
-
-How Do Units Work?
-------------------
-
-The best way to understand yt's unit system is to keep in mind that ``yt`` is really
-handling *two* unit systems: the internal unit system of the dataset and the
-physical (usually CGS) unit system. For simulation codes like FLASH and ORION
-that do all computations in CGS units internally, these two unit systems are the
-same. Most other codes do their calculations in a non-dimensionalized unit
-system chosen so that most primitive variables are as close to unity as
-possible. ``yt`` allows data access both in code units and physical units by
-providing a set of standard yt fields defined by all frontends.
-
-When a dataset is loaded, ``yt`` reads the conversion factors necessary to convert the
-data to CGS units from the datafile itself or from a dictionary passed to the
-``load`` command. Raw on-disk fields are presented to the user via the string
-names used in the dataset. For a full enumeration of the known field names for
-each of the different frontends, see the :ref:`field-list`. In general, no
-conversion factors are applied to on-disk fields.
-
-To access data in physical CGS units, yt recognizes a number of 'universal'
-field names. All primitive fields (density, pressure, magnetic field strength,
-etc.) are mapped to Enzo field names, listed in the :ref:`enzo-field-names`.
-The reason Enzo field names are used here is because ``yt`` was originally written
-to only read Enzo data. In the future we will switch to a new system of
-universal field names - this will also make it much easier to access raw on-disk
-Enzo data!
-
-In addition to primitive fields, yt provides an extensive list of "universal"
-derived fields that are accessible from any of the frontends. For a full
-listing of the universal derived fields, see :ref:`universal-field-list`.
+ ``output_units``
+ (*Advanced*) For fields that exist on disk, which we may want to convert to other
+ fields or that get aliased to themselves, we can specify a different
+ desired output unit than the unit found on disk.
Units for Cosmological Datasets
-------------------------------
@@ -355,23 +308,3 @@
``pch``
Proper parsecs, normalized by the scaled Hubble constant, :math:`\rm{pc}/h`.
-
-Which Enzo Field names Does ``yt`` Know About?
-----------------------------------------------
-
-These are the names of primitive fields in the Enzo AMR code. ``yt`` was originally
-written to analyze Enzo data so the default field names used by the various
-frontends are the same as Enzo fields.
-
-.. note::
-
- Enzo field names are *universal* yt fields. All frontends define conversions
- to Enzo fields. Enzo fields are always in CGS.
-
-* Density
-* Temperature
-* Gas Energy
-* Total Energy
-* [xyz]-velocity
-* Species fields: HI, HII, Electron, HeI, HeII, HeIII, HM, H2I, H2II, DI, DII, HDI
-* Particle mass, velocity,
https://bitbucket.org/yt_analysis/yt/commits/d47e3c285348/
Changeset: d47e3c285348
Branch: yt-3.0
User: jzuhone
Date: 2014-07-22 22:22:17
Summary: This looks to be in good shape at the moment. Removed vector and gradient field examples, possibly to be added in later, as well as backup field.
Affected #: 1 file
diff -r 4f7533cb36b644a4d5cf296d80f7ab97f8558299 -r d47e3c285348154d1de9e2961252155003400b60 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -22,7 +22,7 @@
import yt
def _pressure(field, data):
- return (data.pf.gamma - 1.0) * \
+ return (data.ds.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
Note that we do a couple different things here. We access the "gamma"
@@ -44,18 +44,9 @@
We feed it the name of the field, the name of the function, and the
units. Note that the units parameter is a "raw" string, in the format that ``yt`` uses
in its `symbolic units implementation <units>`_ (e.g., employing only unit names, numbers,
-and mathematical operators in the string, and using ``"**"`` for exponentiation).
-
-.. One very important thing to note about the call to ``add_field`` is
-.. that it **does not** need to specify the function name **if** the
-.. function is the name of the field prefixed with an underscore. If it
-.. is not -- and it won't be for fields in different units (such as
-.. "cell_mass") -- then you need to specify it with the argument
-.. ``function``.
-
-We suggest that you name the function that creates a derived field
-with the intended field name prefixed by a single underscore, as in
-the ``_pressure`` example above.
+and mathematical operators in the string, and using ``"**"`` for exponentiation). We suggest
+that you name the function that creates a derived field with the intended field name prefixed
+by a single underscore, as in the ``_pressure`` example above.
:func:`add_field` can be invoked in two other ways. The first is by the function
decorator :func:`derived_field`. The following code is equivalent to the previous
@@ -67,7 +58,7 @@
@derived_field(name="pressure", units="dyne/cm**2")
def _pressure(field, data):
- return (data.pf.gamma - 1.0) * \
+ return (data.ds.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
The :func:`derived_field` decorator takes the same arguments as :func:`add_field`,
@@ -88,169 +79,71 @@
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
-Some More Complicated Examples
-------------------------------
+A More Complicated Example
+--------------------------
-But what if we want to do some more fancy stuff? Here's an example of getting
+But what if we want to do something a bit more fancy? Here's an example of getting
parameters from the data object and using those to define the field;
-specifically, here we obtain the ``center`` and ``height_vector`` parameters
-and use those to define an angle of declination of a point with respect to a
-disk.
+specifically, here we obtain the ``center`` and ``bulk_velocity`` parameters
+and use those to define a field for radial velocity (there is already a ``"radial_velocity"``
+field in ``yt``, but we create this one here just as a transparent and simple example).
.. code-block:: python
- def _disk_angle(field, data):
- # We make both r_vec and h_vec into unit vectors
- center = data.get_field_parameter("center")
- r_vec = np.array([data["x"] - center[0],
- data["y"] - center[1],
- data["z"] - center[2]])
- r_vec = r_vec/np.sqrt((r_vec**2.0).sum(axis=0))
- h_vec = np.array(data.get_field_parameter("height_vector"))
- dp = r_vec[0,:] * h_vec[0] \
- + r_vec[1,:] * h_vec[1] \
- + r_vec[2,:] * h_vec[2]
- return np.arccos(dp)
- yt.add_field("disk_angle", take_log=False,
- validators=[ValidateParameter("height_vector"),
- ValidateParameter("center")],
- display_field=False)
+ from yt.fields.api import ValidateParameter
+ import numpy as np
+
+ def _my_radial_velocity(field, data):
+ if data.has_field_parameter("bulk_velocity"):
+ bv = data.get_field_parameter("bulk_velocity").in_units("cm/s")
+ else:
+ bv = data.ds.arr(np.zeros(3), "cm/s")
+ xv = data["gas","velocity_x"] - bv[0]
+ yv = data["gas","velocity_y"] - bv[1]
+ zv = data["gas","velocity_z"] - bv[2]
+ center = data.get_field_parameter('center')
+ x_hat = data["x"] - center[0]
+ y_hat = data["y"] - center[1]
+ z_hat = data["z"] - center[2]
+ r = np.sqrt(x_hat*x_hat+y_hat*y_hat+z_hat*z_hat)
+ x_hat /= r
+ y_hat /= r
+ z_hat /= r
+ return xv*x_hat + yv*y_hat + zv*z_hat
+ yt.add_field("my_radial_velocity",
+ function=_my_radial_velocity,
+ units="cm/s",
+ take_log=False,
+ validators=[ValidateParameter('center'),
+ ValidateParameter('bulk_velocity')])
Note that we have added a few parameters below the main function; we specify
that we do not wish to display this field as logged, that we require both
-``height_vector`` and ``center`` to be present in a given data object we wish
+``bulk_velocity`` and ``center`` to be present in a given data object we wish
to calculate this for, and we say that it should not be displayed in a
-drop-down box of fields to display. This is done through the parameter
-*validators*, which accepts a list of :class:`FieldValidator` objects. These
+drop-down box of fields to display. This is done through the parameter
+*validators*, which accepts a list of :class:`FieldValidator` objects. These
objects define the way in which the field is generated, and when it is able to
-be created. In this case, we mandate that parameters *center* and
-*height_vector* are set before creating the field. These are set via
+be created. In this case, we mandate that parameters *center* and
+*bulk_velocity* are set before creating the field. These are set via
:meth:`~yt.data_objects.data_containers.set_field_parameter`, which can
-be called on any object that has fields.
-
-We can also define vector fields.
+be called on any object that has fields:
.. code-block:: python
- def _specific_angular_momentum(field, data):
- if data.has_field_parameter("bulk_velocity"):
- bv = data.get_field_parameter("bulk_velocity")
- else:
- bv = np.zeros(3, dtype='float64')
- xv = data["velocity_x"] - bv[0]
- yv = data["velocity_y"] - bv[1]
- zv = data["velocity_z"] - bv[2]
- center = data.get_field_parameter('center')
- coords = np.array([data['x'],data['y'],data['z']], dtype='float64')
- new_shape = tuple([3] + [1]*(len(coords.shape)-1))
- r_vec = coords - np.reshape(center,new_shape)
- v_vec = np.array([xv,yv,zv], dtype='float64')
- return np.cross(r_vec, v_vec, axis=0)
- add_field("specific_angular_momentum",
- vector_field=True, units="cm**2/s",
- validators=[ValidateParameter('center')])
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ sp = ds.sphere("max", (200.,"kpc"))
+ sp.set_field_parameter("bulk_velocity", yt.YTArray([-100.,200.,300.], "km/s"))
-Here we define the ``specific_angular_momentum`` field, optionally taking a
-``bulk_velocity``, and returning a vector field.
-
-It is also possible to define fields that depend on spatial derivatives of
-other fields. Calculating the derivative for a single grid cell requires
-information about neighboring grid cells. Therefore, properly calculating
-a derivative for a cell on the edge of the grid will require cell values from
-neighboring grids. Below is an example of a field that is the divergence of the
-velocity.
-
-.. code-block:: python
-
- def _DivV(field, data):
- # We need to set up stencils
- if data.pf["HydroMethod"] == 2:
- sl_left = slice(None,-2,None)
- sl_right = slice(1,-1,None)
- div_fac = 1.0
- else:
- sl_left = slice(None,-2,None)
- sl_right = slice(2,None,None)
- div_fac = 2.0
- ds = div_fac * data['dx'].flat[0]
- f = data["velocity_x"][sl_right,1:-1,1:-1]/ds
- f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
- if data.pf.dimensionality > 1:
- ds = div_fac * data['dy'].flat[0]
- f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
- f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
- if data.pf.dimensionality > 2:
- ds = div_fac * data['dz'].flat[0]
- f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
- f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
- new_field = np.zeros(data["velocity_x"].shape, dtype='float64')
- new_field[1:-1,1:-1,1:-1] = f
- return new_field
- def _convertDivV(data):
- return data.convert("cm")**-1.0
- add_field("DivV", function=_DivV,
- validators=[ValidateSpatial(ghost_zones=1,
- fields=["velocity_x","velocity_y","velocity_z"])],
- units=r"\rm{s}^{-1}", take_log=False,
- convert_function=_convertDivV)
-
-Note that *slice* is simply a native Python object used for taking slices of
-arrays or lists. Another :class:`FieldValidator` object, ``ValidateSpatial``
-is given in the list of *validators* in the call to ``add_field`` with
-*ghost_zones* = 1, specifying that the original grid be padded with one additional
-cell from the neighboring grids on all sides. The *fields* keyword simply
-mandates that the listed fields be present. With one ghost zone added to all sides
-of the grid, the data fields (data["velocity_x"], data["velocity_y"], and
-data["velocity_z"]) will have a shape of (NX+2, NY+2, NZ+2) inside of this function,
-where the original grid has dimension (NX, NY, NZ). However, when the final field
-data is returned, the ghost zones will be removed and the shape will again be
-(NX, NY, NZ).
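
The stencil logic described above can be summarized with a plain NumPy sketch of the same central-difference divergence (a simplified, uniform-spacing illustration only, not yt's actual implementation):

```python
import numpy as np

def divergence(vx, vy, vz, dx):
    """Central-difference divergence on the interior of a 3-D grid.

    vx, vy, vz: velocity components padded with one ghost zone on each
    side, shape (NX+2, NY+2, NZ+2); dx: uniform cell width. The slicing
    trims the ghost zones, so the result has shape (NX, NY, NZ), just as
    described above.
    """
    div = (vx[2:, 1:-1, 1:-1] - vx[:-2, 1:-1, 1:-1]) / (2.0 * dx)
    div += (vy[1:-1, 2:, 1:-1] - vy[1:-1, :-2, 1:-1]) / (2.0 * dx)
    div += (vz[1:-1, 1:-1, 2:] - vz[1:-1, 1:-1, :-2]) / (2.0 * dx)
    return div
```

For the linear field v = (x, y, z) this returns 3 everywhere, a handy sanity check for any such stencil.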
-
-.. _derived-field-options:
-
-Saving Derived Fields
----------------------
-
-Complex fields can be time-consuming to generate, especially on large datasets.
-To mitigate this, ``yt`` provides a mechanism for saving fields to a backup file
-using the Grid Data Format. The next time you start yt, it will check this file
-and your field will be treated as native if present.
-
-The code below creates a new derived field called "dinosaurs" and saves it to disk:
-
-.. code-block:: python
-
- import yt
- from yt.utilities.grid_data_format import writer
- import numpy as np
-
- def _dinosaurs(field, data):
- return data["temperature"]*np.sqrt(data["density"])
- yt.add_field("dinosaurs", function=_dinosaurs, units="K*sqrt(g)/sqrt(cm**3)")
-
- ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(ds, "dinosaurs")
-
-This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
-
-.. code-block:: python
-
- import yt
-
- ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- dd = ds.all_data()
- print dd["dinosaurs"]
-
-you can work with the field exactly as before, without having to recompute it.
+In this case, we already know what the *center* of the sphere is, so we do not set it. Also,
+note that *center* and *bulk_velocity* need to be :class:`YTArray` objects with units.
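
For reference, the projection that ``_my_radial_velocity`` performs reduces to the following unit-free NumPy sketch (illustrative only; the real field should go through yt so that units are tracked):

```python
import numpy as np

def radial_velocity(pos, vel, center, bulk_velocity):
    """Project bulk-corrected velocities onto outward unit vectors.

    pos, vel: float arrays of shape (N, 3); center, bulk_velocity:
    length-3 arrays. Mirrors the arithmetic in _my_radial_velocity
    above, minus yt's unit handling and field-parameter machinery.
    """
    r_hat = pos - center                              # radius vectors
    r_hat /= np.linalg.norm(r_hat, axis=1)[:, None]   # normalize to unit vectors
    return np.sum((vel - bulk_velocity) * r_hat, axis=1)
```

Purely tangential motion projects to zero, and motion directly away from (toward) the center comes out positive (negative).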
Field Options
-------------
-The arguments to :func:`add_field` are passed on to the constructor of
-:class:`DerivedField`. :func:`add_field` takes care of finding the arguments
-`function` and `convert_function` if it can, however. There are a number of
-options available, but the only mandatory ones are ``name``, ``units``, and possibly
-``function``.
+The arguments to :func:`add_field` are passed on to the constructor of :class:`DerivedField`.
+There are a number of options available, but the only mandatory ones are ``name``,
+``units``, and ``function``.
``name``
This is the name of the field -- how you refer to it. For instance,
@@ -286,13 +179,13 @@
``yt`` has additional capabilities to handle the comoving coordinate system used
internally in cosmological simulations. In simulations that use comoving
coordinates, all length units have three other counterparts corresponding to
-comoving units, scaled comoving units, and scaled proper units. In all cases
+comoving units, scaled comoving units, and scaled proper units. In all cases
'scaled' units refer to scaling by the reduced Hubble constant - i.e. the length
unit is what it would be in a universe where Hubble's constant is 100 km/s/Mpc.
-To access these different units, yt has a common naming system. Scaled units
-are denoted by appending ``h`` to the end of the unit name. Comoving units are
-denoted by appending ``cm`` to the end of the unit name. If both are used, the
+To access these different units, yt has a common naming system. Scaled units
+are denoted by appending ``h`` to the end of the unit name. Comoving units are
+denoted by appending ``cm`` to the end of the unit name. If both are used, the
strings should be appended in that order: 'Mpchcm', *but not* 'Mpccmh'.
Using the parsec as an example,
https://bitbucket.org/yt_analysis/yt/commits/abbe32cb7aad/
Changeset: abbe32cb7aad
Branch: yt-3.0
User: jzuhone
Date: 2014-07-22 22:28:23
Summary: Add a note about the photon simulator only working with grid-based data.
Affected #: 1 file
diff -r d47e3c285348154d1de9e2961252155003400b60 -r abbe32cb7aad6e2af1970c096e193d19d24a944e doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -35,6 +35,11 @@
We'll demonstrate the functionality on a realistic dataset of a galaxy
cluster to get you started.
+.. note::
+
+ Currently, the ``photon_simulator`` analysis module only works with grid-based
+ data.
+
Creating an X-ray observation of a dataset on disk
++++++++++++++++++++++++++++++++++++++++++++++++++
https://bitbucket.org/yt_analysis/yt/commits/635b39910fb5/
Changeset: 635b39910fb5
Branch: yt-3.0
User: jzuhone
Date: 2014-07-23 00:05:16
Summary: Putting these in backticks.
Affected #: 1 file
diff -r abbe32cb7aad6e2af1970c096e193d19d24a944e -r 635b39910fb5223e49550c1fe62283270480cb13 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -152,7 +152,7 @@
This is a function handle that defines the field
``units``
This is a string that describes the units. Powers must be in
- python syntax (** instead of ^).
+ Python syntax (``**`` instead of ``^``).
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
https://bitbucket.org/yt_analysis/yt/commits/5f5111f63806/
Changeset: 5f5111f63806
Branch: yt-3.0
User: jzuhone
Date: 2014-07-24 05:57:24
Summary: Adding links to other sections in the docs that reference the same issues. Updated documentation on cosmological units.
Affected #: 2 files
diff -r 635b39910fb5223e49550c1fe62283270480cb13 -r 5f5111f638067b443d948862efc2b13de4163917 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -138,6 +138,9 @@
In this case, we already know what the *center* of the sphere is, so we do not set it. Also,
note that *center* and *bulk_velocity* need to be :class:`YTArray` objects with units.
+Other examples for creating derived fields can be found in the cookbook recipes
+:ref:`cookbook-simple-derived-fields` and :ref:`cookbook-complex-derived-fields`.
+
Field Options
-------------
@@ -180,13 +183,12 @@
internally in cosmological simulations. In simulations that use comoving
coordinates, all length units have three other counterparts corresponding to
comoving units, scaled comoving units, and scaled proper units. In all cases
-'scaled' units refer to scaling by the reduced Hubble constant - i.e. the length
-unit is what it would be in a universe where Hubble's constant is 100 km/s/Mpc.
+'scaled' units refer to scaling by the reduced Hubble parameter - i.e. the length
+unit is what it would be in a universe where the Hubble parameter is 100 km/s/Mpc.
-To access these different units, yt has a common naming system. Scaled units
-are denoted by appending ``h`` to the end of the unit name. Comoving units are
-denoted by appending ``cm`` to the end of the unit name. If both are used, the
-strings should be appended in that order: 'Mpchcm', *but not* 'Mpccmh'.
+To access these different units, yt has a common naming system. Scaled units are denoted by
+dividing by the reduced Hubble parameter ``h`` (which is itself a unit). Comoving
+units are denoted by appending ``cm`` to the end of the unit name.
Using the parsec as an example,
@@ -196,8 +198,10 @@
``pccm``
Comoving parsecs, :math:`\rm{pc}/(1+z)`.
-``pchcm``
+``pccm/h``
Comoving parsecs normalized by the scaled Hubble constant, :math:`\rm{pc}/h/(1+z)`.
-``pch``
+``pc/h``
Proper parsecs, normalized by the scaled Hubble constant, :math:`\rm{pc}/h`.
+
+Further examples of this functionality are shown in :ref:`comoving_units_and_code_units`.
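
The numerical relationships among these unit variants can be sketched in plain Python (an illustration of the conventions above with made-up values for ``z`` and ``h``; in practice yt's ``in_units`` method handles these conversions automatically):

```python
def parsec_variants(length_pc, z, h):
    """Express a proper length (given in pc) in the unit variants above.

    Because the *unit sizes* are pccm = pc/(1+z) and pc/h = pc divided
    by h, the numerical values scale the opposite way. Illustrative
    only; z and h would normally come from the dataset.
    """
    return {
        "pc": length_pc,
        "pccm": length_pc * (1.0 + z),        # comoving parsecs
        "pc/h": length_pc * h,                # scaled proper parsecs
        "pccm/h": length_pc * (1.0 + z) * h,  # scaled comoving parsecs
    }
```

So at z = 1 with h = 0.7, a proper length of 100 pc is 200 pccm, 70 pc/h, and 140 pccm/h.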
diff -r 635b39910fb5223e49550c1fe62283270480cb13 -r 5f5111f638067b443d948862efc2b13de4163917 doc/source/cookbook/calculating_information.rst
--- a/doc/source/cookbook/calculating_information.rst
+++ b/doc/source/cookbook/calculating_information.rst
@@ -58,6 +58,8 @@
.. yt_cookbook:: time_series.py
+.. _cookbook-simple-derived-fields:
+
Simple Derived Fields
~~~~~~~~~~~~~~~~~~~~~
@@ -66,6 +68,8 @@
.. yt_cookbook:: derived_field.py
+.. _cookbook-complex-derived-fields:
+
Complex Derived Fields
~~~~~~~~~~~~~~~~~~~~~~
https://bitbucket.org/yt_analysis/yt/commits/1ba67463a525/
Changeset: 1ba67463a525
Branch: yt-3.0
User: MatthewTurk
Date: 2014-07-24 13:49:55
Summary: Merged in jzuhone/yt/yt-3.0 (pull request #1053)
Derived field documentation
Affected #: 4 files
diff -r f8f6cf1c1415a65032655b9b85f7e52eff1afef4 -r 1ba67463a5253d18d1767f1f06e96180078aca62 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -35,6 +35,11 @@
We'll demonstrate the functionality on a realistic dataset of a galaxy
cluster to get you started.
+.. note::
+
+ Currently, the ``photon_simulator`` analysis module only works with grid-based
+ data.
+
Creating an X-ray observation of a dataset on disk
++++++++++++++++++++++++++++++++++++++++++++++++++
diff -r f8f6cf1c1415a65032655b9b85f7e52eff1afef4 -r 1ba67463a5253d18d1767f1f06e96180078aca62 doc/source/analyzing/analysis_modules/synthetic_observation.rst
--- a/doc/source/analyzing/analysis_modules/synthetic_observation.rst
+++ b/doc/source/analyzing/analysis_modules/synthetic_observation.rst
@@ -16,6 +16,5 @@
star_analysis
xray_emission_fields
sunyaev_zeldovich
- radial_column_density
photon_simulator
ppv_cubes
diff -r f8f6cf1c1415a65032655b9b85f7e52eff1afef4 -r 1ba67463a5253d18d1767f1f06e96180078aca62 doc/source/analyzing/creating_derived_fields.rst
--- a/doc/source/analyzing/creating_derived_fields.rst
+++ b/doc/source/analyzing/creating_derived_fields.rst
@@ -11,7 +11,7 @@
So once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
-collection of data, neutral to its size, shape, and type. (All fields should
+collection of data, neutral to its size, shape, and type. (All fields should
be provided as 64-bit floats.)
A simple example of this is the pressure field, which demonstrates the ease of
@@ -19,11 +19,13 @@
.. code-block:: python
- def _Pressure(field, data):
+ import yt
+
+ def _pressure(field, data):
return (data.ds.gamma - 1.0) * \
data["density"] * data["thermal_energy"]
-Note that we do a couple different things here. We access the "Gamma"
+Note that we do a couple different things here. We access the "gamma"
parameter from the dataset, we access the "density" field and we access
the "thermal_energy" field. "thermal_energy" is, in fact, another derived field!
("thermal_energy" deals with the distinction in storage of energy between dual
@@ -37,247 +39,123 @@
.. code-block:: python
- add_field("pressure", function=_Pressure, units=r"\rm{dyne}/\rm{cm}^{2}")
+ yt.add_field("pressure", function=_pressure, units="dyne/cm**2")
We feed it the name of the field, the name of the function, and the
-units. Note that the units parameter is a "raw" string, with some
-LaTeX-style formatting -- Matplotlib actually has a MathText rendering
-engine, so if you include LaTeX it will be rendered appropriately.
+units. Note that the units parameter is a "raw" string, in the format that ``yt`` uses
+in its `symbolic units implementation <units>`_ (e.g., employing only unit names, numbers,
+and mathematical operators in the string, and using ``"**"`` for exponentiation). We suggest
+that you name the function that creates a derived field with the intended field name prefixed
+by a single underscore, as in the ``_pressure`` example above.
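
As a quick sanity check of the formula, here is the same ideal-gas relation evaluated with plain NumPy on made-up CGS values (``thermal_energy`` here is specific thermal energy in erg/g, so the product comes out in erg/cm**3 = dyne/cm**2):

```python
import numpy as np

# Stand-in sample values, chosen only for illustration.
gamma = 5.0 / 3.0                              # ideal monatomic gas
density = np.array([1.0e-26, 2.0e-26])         # g/cm**3
thermal_energy = np.array([1.0e14, 1.5e14])    # erg/g (specific)

# Same arithmetic as the _pressure field function above.
pressure = (gamma - 1.0) * density * thermal_energy  # dyne/cm**2
```

Inside an actual derived field, yt performs this arithmetic on unit-carrying arrays, so the declared ``units="dyne/cm**2"`` is checked against the result.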
-.. One very important thing to note about the call to ``add_field`` is
-.. that it **does not** need to specify the function name **if** the
-.. function is the name of the field prefixed with an underscore. If it
-.. is not -- and it won't be for fields in different units (such as
-.. "cell_mass") -- then you need to specify it with the argument
-.. ``function``.
+:func:`add_field` can be invoked in two other ways. The first is by the function
+decorator :func:`derived_field`. The following code is equivalent to the previous
+example:
-We suggest that you name the function that creates a derived field
-with the intended field name prefixed by a single underscore, as in
-the ``_Pressure`` example above.
+.. code-block:: python
+
+ from yt import derived_field
+
+ @derived_field(name="pressure", units="dyne/cm**2")
+ def _pressure(field, data):
+ return (data.ds.gamma - 1.0) * \
+ data["density"] * data["thermal_energy"]
+
+The :func:`derived_field` decorator takes the same arguments as :func:`add_field`,
+and is often a more convenient shorthand in cases where you want to quickly set up
+a new field.
+
+Defining derived fields in the above fashion must be done before a dataset is loaded,
+in order for the dataset to recognize it. If you want to set up a derived field after you
+have loaded a dataset, or if you only want to set up a derived field for a particular
+dataset, there is an :meth:`add_field` method that hangs off dataset objects. The calling
+syntax is the same:
+
+.. code-block:: python
+
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ ds.add_field("pressure", function=_pressure, units="dyne/cm**2")
If you find yourself using the same custom-defined fields over and over, you
should put them in your plugins file as described in :ref:`plugin-file`.
-.. _conversion-factors:
+A More Complicated Example
+--------------------------
-Conversion Factors
-~~~~~~~~~~~~~~~~~~
-
-When creating a derived field, ``yt`` does not by default do unit
-conversion. All of the fields fed into the field are pre-supposed to
-be in CGS. If the field does not need any constants applied after
-that, you are done. If it does, you should define a second function
-that applies the proper multiple in order to return the desired units
-and use the argument ``convert_function`` to ``add_field`` to point to
-it.
-
-The argument that you pass to ``convert_function`` will be dependent on
-what fields are input into your derived field, and in what form they
-are passed from their native format. For enzo fields, nearly all the
-native on-disk fields are in CGS units already (except for ``dx``, ``dy``,
-and ``dz`` fields), so you typically only need to convert for
-off-standard fields taking into account where those fields are
-used in the final output derived field. For other codes, it can vary.
-
-You can check to see the units associated with any field in a dataset
-from any code by using the ``_units`` attribute. Here is an example
-with one of our sample FLASH datasets available publicly at
-http://yt-project.org/data :
+But what if we want to do something a bit more fancy? Here's an example of getting
+parameters from the data object and using those to define the field;
+specifically, here we obtain the ``center`` and ``bulk_velocity`` parameters
+and use those to define a field for radial velocity (there is already a ``"radial_velocity"``
+field in ``yt``, but we create this one here just as a transparent and simple example).
.. code-block:: python
- >>> from yt.mods import *
- >>> ds = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
- >>> ds.field_list
- ['dens', 'temp', 'pres', 'gpot', 'divb', 'velx', 'vely', 'velz', 'magx', 'magy', 'magz', 'magp']
- >>> ds.field_info['dens']._units
- '\\rm{g}/\\rm{cm}^{3}'
- >>> ds.field_info['temp']._units
- '\\rm{K}'
- >>> ds.field_info['velx']._units
- '\\rm{cm}/\\rm{s}'
+ from yt.fields.api import ValidateParameter
+ import numpy as np
-Thus if you were using any of these fields as input to your derived field, you
-wouldn't have to worry about unit conversion because they're already in CGS.
+ def _my_radial_velocity(field, data):
+ if data.has_field_parameter("bulk_velocity"):
+ bv = data.get_field_parameter("bulk_velocity").in_units("cm/s")
+ else:
+ bv = data.ds.arr(np.zeros(3), "cm/s")
+ xv = data["gas","velocity_x"] - bv[0]
+ yv = data["gas","velocity_y"] - bv[1]
+ zv = data["gas","velocity_z"] - bv[2]
+ center = data.get_field_parameter('center')
+ x_hat = data["x"] - center[0]
+ y_hat = data["y"] - center[1]
+ z_hat = data["z"] - center[2]
+ r = np.sqrt(x_hat*x_hat+y_hat*y_hat+z_hat*z_hat)
+ x_hat /= r
+ y_hat /= r
+ z_hat /= r
+ return xv*x_hat + yv*y_hat + zv*z_hat
+ yt.add_field("my_radial_velocity",
+ function=_my_radial_velocity,
+ units="cm/s",
+ take_log=False,
+ validators=[ValidateParameter('center'),
+ ValidateParameter('bulk_velocity')])
-Some More Complicated Examples
-------------------------------
-
-But what if we want to do some more fancy stuff? Here's an example of getting
-parameters from the data object and using those to define the field;
-specifically, here we obtain the ``center`` and ``height_vector`` parameters
-and use those to define an angle of declination of a point with respect to a
-disk.
+Note that we have added a few parameters below the main function; we specify
+that we do not wish to display this field as logged, that we require both
+``bulk_velocity`` and ``center`` to be present in a given data object we wish
+to calculate this for, and we say that it should not be displayed in a
+drop-down box of fields to display. This is done through the parameter
+*validators*, which accepts a list of :class:`FieldValidator` objects. These
+objects define the way in which the field is generated, and when it is able to
+be created. In this case, we mandate that parameters *center* and
+*bulk_velocity* are set before creating the field. These are set via
+:meth:`~yt.data_objects.data_containers.set_field_parameter`, which can
+be called on any object that has fields:
.. code-block:: python
- def _DiskAngle(field, data):
- # We make both r_vec and h_vec into unit vectors
- center = data.get_field_parameter("center")
- r_vec = np.array([data["x"] - center[0],
- data["y"] - center[1],
- data["z"] - center[2]])
- r_vec = r_vec/np.sqrt((r_vec**2.0).sum(axis=0))
- h_vec = np.array(data.get_field_parameter("height_vector"))
- dp = r_vec[0,:] * h_vec[0] \
- + r_vec[1,:] * h_vec[1] \
- + r_vec[2,:] * h_vec[2]
- return np.arccos(dp)
- add_field("DiskAngle", take_log=False,
- validators=[ValidateParameter("height_vector"),
- ValidateParameter("center")],
- display_field=False)
+ ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
+ sp = ds.sphere("max", (200.,"kpc"))
+ sp.set_field_parameter("bulk_velocity", yt.YTArray([-100.,200.,300.], "km/s"))
-Note that we have added a few parameters below the main function; we specify
-that we do not wish to display this field as logged, that we require both
-``height_vector`` and ``center`` to be present in a given data object we wish
-to calculate this for, and we say that it should not be displayed in a
-drop-down box of fields to display. This is done through the parameter
-*validators*, which accepts a list of :class:`FieldValidator` objects. These
-objects define the way in which the field is generated, and when it is able to
-be created. In this case, we mandate that parameters *center* and
-*height_vector* are set before creating the field. These are set via
-:meth:`~yt.data_objects.data_containers.set_field_parameter`, which can
-be called on any object that has fields.
+In this case, we already know what the *center* of the sphere is, so we do not set it. Also,
+note that *center* and *bulk_velocity* need to be :class:`YTArray` objects with units.
-We can also define vector fields.
-
-.. code-block:: python
-
- def _SpecificAngularMomentum(field, data):
- if data.has_field_parameter("bulk_velocity"):
- bv = data.get_field_parameter("bulk_velocity")
- else: bv = np.zeros(3, dtype='float64')
- xv = data["velocity_x"] - bv[0]
- yv = data["velocity_y"] - bv[1]
- zv = data["velocity_z"] - bv[2]
- center = data.get_field_parameter('center')
- coords = np.array([data['x'],data['y'],data['z']], dtype='float64')
- new_shape = tuple([3] + [1]*(len(coords.shape)-1))
- r_vec = coords - np.reshape(center,new_shape)
- v_vec = np.array([xv,yv,zv], dtype='float64')
- return np.cross(r_vec, v_vec, axis=0)
- def _convertSpecificAngularMomentum(data):
- return data.convert("cm")
- add_field("SpecificAngularMomentum",
- convert_function=_convertSpecificAngularMomentum, vector_field=True,
- units=r"\rm{cm}^2/\rm{s}", validators=[ValidateParameter('center')])
-
-Here we define the SpecificAngularMomentum field, optionally taking a
-``bulk_velocity``, and returning a vector field that needs conversion by the
-function ``_convertSpecificAngularMomentum``.
-
-It is also possible to define fields that depend on spatial derivatives of
-other fields. Calculating the derivative for a single grid cell requires
-information about neighboring grid cells. Therefore, properly calculating
-a derivative for a cell on the edge of the grid will require cell values from
-neighboring grids. Below is an example of a field that is the divergence of the
-velocity.
-
-.. code-block:: python
-
- def _DivV(field, data):
- # We need to set up stencils
- if data.ds["HydroMethod"] == 2:
- sl_left = slice(None,-2,None)
- sl_right = slice(1,-1,None)
- div_fac = 1.0
- else:
- sl_left = slice(None,-2,None)
- sl_right = slice(2,None,None)
- div_fac = 2.0
- ds = div_fac * data['dx'].flat[0]
- f = data["velocity_x"][sl_right,1:-1,1:-1]/ds
- f -= data["velocity_x"][sl_left ,1:-1,1:-1]/ds
- if data.ds.dimensionality > 1:
- ds = div_fac * data['dy'].flat[0]
- f += data["velocity_y"][1:-1,sl_right,1:-1]/ds
- f -= data["velocity_y"][1:-1,sl_left ,1:-1]/ds
- if data.ds.dimensionality > 2:
- ds = div_fac * data['dz'].flat[0]
- f += data["velocity_z"][1:-1,1:-1,sl_right]/ds
- f -= data["velocity_z"][1:-1,1:-1,sl_left ]/ds
- new_field = np.zeros(data["velocity_x"].shape, dtype='float64')
- new_field[1:-1,1:-1,1:-1] = f
- return new_field
- def _convertDivV(data):
- return data.convert("cm")**-1.0
- add_field("DivV", function=_DivV,
- validators=[ValidateSpatial(ghost_zones=1,
- fields=["velocity_x","velocity_y","velocity_z"])],
- units=r"\rm{s}^{-1}", take_log=False,
- convert_function=_convertDivV)
-
-Note that *slice* is simply a native Python object used for taking slices of
-arrays or lists. Another :class:`FieldValidator` object, ``ValidateSpatial``
-is given in the list of *validators* in the call to ``add_field`` with
-*ghost_zones* = 1, specifying that the original grid be padded with one additional
-cell from the neighboring grids on all sides. The *fields* keyword simply
-mandates that the listed fields be present. With one ghost zone added to all sides
-of the grid, the data fields (data["velocity_x"], data["velocity_y"], and
-data["velocity_z"]) will have a shape of (NX+2, NY+2, NZ+2) inside of this function,
-where the original grid has dimension (NX, NY, NZ). However, when the final field
-data is returned, the ghost zones will be removed and the shape will again be
-(NX, NY, NZ).
-
-.. _derived-field-options:
-
-Saving Derived Fields
----------------------
-
-Complex fields can be time-consuming to generate, especially on large datasets.
-To mitigate this, ``yt`` provides a mechanism for saving fields to a backup file
-using the Grid Data Format. The next time you start yt, it will check this file
-and your field will be treated as native if present.
-
-The code below creates a new derived field called "Entr" and saves it to disk:
-
-.. code-block:: python
-
- from yt.mods import *
- from yt.utilities.grid_data_format import writer
-
- def _Entropy(field, data) :
- return data["temperature"]*data["density"]**(-2./3.)
- add_field("Entr", function=_Entropy)
-
- ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- writer.save_field(ds, "Entr")
-
-This creates a "_backup.gdf" file next to your datadump. If you load up the dataset again:
-
-.. code-block:: python
-
- from yt.mods import *
-
- ds = load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100')
- data = ds.all_data()
- print data["Entr"]
-
-you can work with the field exactly as before, without having to recompute it.
+Other examples for creating derived fields can be found in the cookbook recipes
+:ref:`cookbook-simple-derived-fields` and :ref:`cookbook-complex-derived-fields`.
Field Options
-------------
-The arguments to :func:`add_field` are passed on to the constructor of
-:class:`DerivedField`. :func:`add_field` takes care of finding the arguments
-`function` and `convert_function` if it can, however. There are a number of
-options available, but the only mandatory ones are ``name`` and possibly
-``function``.
+The arguments to :func:`add_field` are passed on to the constructor of :class:`DerivedField`.
+There are a number of options available, but the only mandatory ones are ``name``,
+``units``, and ``function``.
``name``
This is the name of the field -- how you refer to it. For instance,
- ``Pressure`` or ``H2I_Fraction``.
+ ``pressure`` or ``magnetic_field_strength``.
``function``
This is a function handle that defines the field.
- ``convert_function``
- This is the function that converts the field to CGS. All inputs to this
- function are mandated to already *be* in CGS.
``units``
- This is a mathtext (LaTeX-like) string that describes the units.
- ``projected_units``
- This is a mathtext (LaTeX-like) string that describes the units if the
- field has been projected without a weighting.
+ This is a string that describes the units. Powers must be in
+ Python syntax (``**`` instead of ``^``).
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
@@ -289,43 +167,14 @@
``validators``
(*Advanced*) This is a list of :class:`FieldValidator` objects, for instance to mandate
spatial data.
- ``vector_field``
- (*Advanced*) Is this field more than one value per cell?
``display_field``
(*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
(*Advanced*) If this is *True*, the field may not be in all the grids.
-
-How Do Units Work?
-------------------
-
-The best way to understand yt's unit system is to keep in mind that ``yt`` is really
-handling *two* unit systems: the internal unit system of the dataset and the
-physical (usually CGS) unit system. For simulation codes like FLASH and ORION
-that do all computations in CGS units internally, these two unit systems are the
-same. Most other codes do their calculations in a non-dimensionalized unit
-system chosen so that most primitive variables are as close to unity as
-possible. ``yt`` allows data access both in code units and physical units by
-providing a set of standard yt fields defined by all frontends.
-
-When a dataset is loaded, ``yt`` reads the conversion factors necessary to convert the
-data to CGS units from the datafile itself or from a dictionary passed to the
-``load`` command. Raw on-disk fields are presented to the user via the string
-names used in the dataset. For a full enumeration of the known field names for
-each of the different frontends, see the :ref:`field-list`. In general, no
-conversion factors are applied to on-disk fields.
-
-To access data in physical CGS units, yt recognizes a number of 'universal'
-field names. All primitive fields (density, pressure, magnetic field strength,
-etc.) are mapped to Enzo field names, listed in the :ref:`enzo-field-names`.
-The reason Enzo field names are used here is because ``yt`` was originally written
-to only read Enzo data. In the future we will switch to a new system of
-universal field names - this will also make it much easier to access raw on-disk
-Enzo data!
-
-In addition to primitive fields, yt provides an extensive list of "universal"
-derived fields that are accessible from any of the frontends. For a full
-listing of the universal derived fields, see :ref:`universal-field-list`.
+ ``output_units``
+ (*Advanced*) For fields that exist on disk and are aliased to themselves or
+ converted to other fields, this specifies a desired output unit different
+ from the unit found on disk.
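The ``units`` option above requires Python expression syntax for powers. A quick check of why ``**`` (and never ``^``) must appear in a units string: in Python, ``^`` is bitwise XOR, not exponentiation.

```python
# ** is exponentiation; ^ is bitwise XOR on the integer bit patterns.
assert 10 ** 2 == 100  # what "cm**2" in a units string means
assert 10 ^ 2 == 8     # 0b1010 ^ 0b0010 == 0b1000 -- not a power!
```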
Units for Cosmological Datasets
-------------------------------
@@ -333,14 +182,13 @@
``yt`` has additional capabilities to handle the comoving coordinate system used
internally in cosmological simulations. In simulations that use comoving
coordinates, all length units have three other counterparts corresponding to
-comoving units, scaled comoving units, and scaled proper units. In all cases
-'scaled' units refer to scaling by the reduced Hubble constant - i.e. the length
-unit is what it would be in a universe where Hubble's constant is 100 km/s/Mpc.
+comoving units, scaled comoving units, and scaled proper units. In all cases
+'scaled' units refer to scaling by the reduced Hubble parameter - i.e. the length
+unit is what it would be in a universe where the Hubble parameter is 100 km/s/Mpc.
-To access these different units, yt has a common naming system. Scaled units
-are denoted by appending ``h`` to the end of the unit name. Comoving units are
-denoted by appending ``cm`` to the end of the unit name. If both are used, the
-strings should be appended in that order: 'Mpchcm', *but not* 'Mpccmh'.
+To access these different units, yt has a common naming system. Scaled units are denoted by
+dividing by the reduced Hubble parameter ``h`` (which is itself a unit). Comoving
+units are denoted by appending ``cm`` to the end of the unit name.
Using the parsec as an example,
@@ -350,28 +198,10 @@
``pccm``
Comoving parsecs, :math:`\rm{pc}/(1+z)`.
-``pchcm``
+``pccm/h``
Comoving parsecs normalized by the reduced Hubble parameter, :math:`\rm{pc}/h/(1+z)`.
-``pch``
+``pc/h``
Proper parsecs normalized by the reduced Hubble parameter, :math:`\rm{pc}/h`.
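The relationships in the list above reduce to simple arithmetic. A hedged sketch, using *assumed* values ``h = 0.7`` and ``z = 1.0`` (these numbers are illustrative, not from the docs): a proper length of ``L`` parsecs expressed in each unit variant is

```python
# Illustrative values only: reduced Hubble parameter h and redshift z.
h = 0.7
z = 1.0
L = 140.0  # a proper length, in pc

# Each unit variant rescales the same proper length:
pc_h = L * h                # value in 'pc/h'   (proper, scaled by h)
pccm = L * (1.0 + z)        # value in 'pccm'   (comoving, pc/(1+z) per unit)
pccm_h = L * (1.0 + z) * h  # value in 'pccm/h' (comoving and scaled)

assert abs(pc_h - 98.0) < 1e-9
assert abs(pccm - 280.0) < 1e-9
assert abs(pccm_h - 196.0) < 1e-9
```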
-Which Enzo Field names Does ``yt`` Know About?
-----------------------------------------------
-
-These are the names of primitive fields in the Enzo AMR code. ``yt`` was originally
-written to analyze Enzo data so the default field names used by the various
-frontends are the same as Enzo fields.
-
-.. note::
-
- Enzo field names are *universal* yt fields. All frontends define conversions
- to Enzo fields. Enzo fields are always in CGS.
-
-* Density
-* Temperature
-* Gas Energy
-* Total Energy
-* [xyz]-velocity
-* Species fields: HI, HII, Electron, HeI, HeII, HeIII, HM, H2I, H2II, DI, DII, HDI
-* Particle mass, velocity,
+Further examples of this functionality are shown in :ref:`comoving_units_and_code_units`.
diff -r f8f6cf1c1415a65032655b9b85f7e52eff1afef4 -r 1ba67463a5253d18d1767f1f06e96180078aca62 doc/source/cookbook/calculating_information.rst
--- a/doc/source/cookbook/calculating_information.rst
+++ b/doc/source/cookbook/calculating_information.rst
@@ -58,6 +58,8 @@
.. yt_cookbook:: time_series.py
+.. _cookbook-simple-derived-fields:
+
Simple Derived Fields
~~~~~~~~~~~~~~~~~~~~~
@@ -66,6 +68,8 @@
.. yt_cookbook:: derived_field.py
+.. _cookbook-complex-derived-fields:
+
Complex Derived Fields
~~~~~~~~~~~~~~~~~~~~~~
Repository URL: https://bitbucket.org/yt_analysis/yt/