[yt-svn] commit/yt: 9 new changesets

commits-noreply at bitbucket.org
Tue Mar 15 08:38:20 PDT 2016


9 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/03639487bf4c/
Changeset:   03639487bf4c
Branch:      yt
User:        migueldvb
Date:        2016-03-12 14:31:44+00:00
Summary:     Remove broken link to email address

The email address in this version is converted into a "mailto" link in
the HTML docs

Affected #:  1 file

diff -r 9c26b97915111f171177465e79d6381aae16d4f0 -r 03639487bf4cb20abca8959260b0156c299715fa doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -1386,8 +1386,8 @@
 ---------
 
 `PyNE <http://pyne.io/>`_ is an open source nuclear engineering toolkit
-maintained by the PyNE developement team (`pyne-dev at googlegroups.com
-<pyne-dev%40googlegroups.com>`_). PyNE meshes utilize the Mesh-Oriented datABase
+maintained by the PyNE developement team (pyne-dev at googlegroups.com).
+PyNE meshes utilize the Mesh-Oriented datABase
 `(MOAB) <http://trac.mcs.anl.gov/projects/ITAPS/wiki/MOAB/>`_ and can be
 Cartesian or tetrahedral. In addition to field data, pyne meshes store pyne
 Material objects which provide a rich set of capabilities for nuclear


https://bitbucket.org/yt_analysis/yt/commits/a7fc13bc868b/
Changeset:   a7fc13bc868b
Branch:      yt
User:        migueldvb
Date:        2016-03-12 14:35:35+00:00
Summary:     minor corrections in loading data docs

Affected #:  1 file

diff -r 03639487bf4cb20abca8959260b0156c299715fa -r a7fc13bc868bbc38cd10b316f7685a2a9dc62cae doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -369,7 +369,7 @@
 
 This particular dataset has two meshes in it, both of which are made of 8-node hexes.
 yt uses a field name convention to access these different meshes in plots and data
-objects. To see all the fields found in a particlular dataset, you can do:
+objects. To see all the fields found in a particular dataset, you can do:
 
 .. code-block:: python
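
   # (hedged reconstruction -- the diff hunk cuts the code block off here;
   # the sample file name is hypothetical)
   import yt

   ds = yt.load("MOOSE_sample_data/out.e-s010")
   print(ds.field_list)  # on-disk fields, keyed as (mesh_name, field_name)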
     
@@ -540,7 +540,7 @@
 
 * ``CDELTx``: The pixel width in along axis ``x``
 * ``CRVALx``: The coordinate value at the reference position along axis ``x``
-* ``CRPIXx``: The the reference pixel along axis ``x``
+* ``CRPIXx``: The reference pixel along axis ``x``
 * ``CTYPEx``: The projection type of axis ``x``
 * ``CUNITx``: The units of the coordinate along axis ``x``
 * ``BTYPE``: The type of the image
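
Taken together these keywords define a linear world coordinate system; as a
worked illustration (ignoring the projection set by ``CTYPEx``), the world
coordinate of pixel ``i`` along axis ``x`` is:

.. code-block:: python

   # 1-based pixel convention, linear WCS terms only
   world_x = CRVALx + (i - CRPIXx) * CDELTx
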
@@ -870,7 +870,7 @@
 ``over_refine_factor``.  They are weak proxies for each other.  The first,
 ``n_ref``, governs how many particles in an oct results in that oct being
 refined into eight child octs.  Lower values mean higher resolution; the
-default is 64.  The secon parameter, ``over_refine_factor``, governs how many
+default is 64.  The second parameter, ``over_refine_factor``, governs how many
 cells are in a given oct; the default value of 1 corresponds to 8 cells.
 The number of cells in an oct is defined by the expression
 ``2**(3*over_refine_factor)``.
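
A minimal sketch of how these two parameters are passed at load time (the
snapshot name is hypothetical):

.. code-block:: python

   import yt

   # lower n_ref refines octs sooner; over_refine_factor=2 gives
   # 2**(3*2) = 64 cells per oct instead of the default 8
   ds = yt.load("snapshot_033.hdf5", n_ref=32, over_refine_factor=2)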


https://bitbucket.org/yt_analysis/yt/commits/87991cf2a429/
Changeset:   87991cf2a429
Branch:      yt
User:        migueldvb
Date:        2016-03-12 15:15:57+00:00
Summary:     fix typos in absorption spectrum docs

Affected #:  1 file

diff -r a7fc13bc868bbc38cd10b316f7685a2a9dc62cae -r 87991cf2a42938fef719646c79e2807f4a391c5b doc/source/analyzing/analysis_modules/absorption_spectrum.rst
--- a/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
+++ b/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
@@ -204,7 +204,7 @@
 --------------------------
 
 After loading a spectrum and specifying the properties of the species
-used to generate the spectrum, an apporpriate fit can be generated. 
+used to generate the spectrum, an appropriate fit can be generated. 
 
 .. code-block:: python
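
   # (hedged reconstruction -- the hunk elides the original block; the
   # wavelength/flux arrays and the orderFits/speciesDicts inputs are
   # set up earlier in that document)
   from yt.analysis_modules.absorption_spectrum.api import generate_total_fit

   fitted_lines, fitted_flux = generate_total_fit(wavelength, flux,
                                                  orderFits, speciesDicts)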
 
@@ -232,7 +232,7 @@
 as all lines with the same group number as ``group#[i]``.
 
 The ``fitted_flux`` is an ndarray of the same size as ``flux`` and 
-``wavelength`` that contains the cummulative absorption spectrum generated 
+``wavelength`` that contains the cumulative absorption spectrum generated
 by the lines contained in ``fitted_lines``.
 
 Saving a Spectrum Fit


https://bitbucket.org/yt_analysis/yt/commits/65acb0ac1067/
Changeset:   65acb0ac1067
Branch:      yt
User:        migueldvb
Date:        2016-03-12 21:10:26+00:00
Summary:     Update reference to docstring example and remove empty file

Affected #:  2 files

diff -r 87991cf2a42938fef719646c79e2807f4a391c5b -r 65acb0ac10675e5100e132e1d0877ca852d3a97a CONTRIBUTING.rst
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -795,8 +795,8 @@
    rather than explicitly. Ex: ``super(SpecialGridSubclass, self).__init__()``
    rather than ``SpecialGrid.__init__()``.
  * Docstrings should describe input, output, behavior, and any state changes
-   that occur on an object.  See the file ``doc/docstring_example.txt`` for a
-   fiducial example of a docstring.
+   that occur on an object.  See :ref:`docstrings` below for a fiducial example
+   of a docstring.
  * Use only one top-level import per line. Unless there is a good reason not to,
    imports should happen at the top of the file, after the copyright blurb.
  * Never compare with ``True`` or ``False`` using ``==`` or ``!=``, always use
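
The ``super()`` rule quoted above, as a minimal sketch (class bodies are
hypothetical):

.. code-block:: python

   class SpecialGrid(object):
       def __init__(self):
           self.initialized = True

   class SpecialGridSubclass(SpecialGrid):
       def __init__(self):
           # implicit parent lookup, rather than SpecialGrid.__init__(self)
           super(SpecialGridSubclass, self).__init__()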


https://bitbucket.org/yt_analysis/yt/commits/a892c2edf3a8/
Changeset:   a892c2edf3a8
Branch:      yt
User:        migueldvb
Date:        2016-03-13 21:58:10+00:00
Summary:     Update copyright notice in sphinx docs

Affected #:  1 file

diff -r 65acb0ac10675e5100e132e1d0877ca852d3a97a -r a892c2edf3a8cdc5164e7fb6fa58f4e5bb3c42c0 doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -60,7 +60,7 @@
 
 # General information about the project.
 project = u'The yt Project'
-copyright = u'2013, the yt Project'
+copyright = u'2013-2016, the yt Project'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the


https://bitbucket.org/yt_analysis/yt/commits/1148919f26a6/
Changeset:   1148919f26a6
Branch:      yt
User:        migueldvb
Date:        2016-03-13 22:06:51+00:00
Summary:     Fix more typos in yt 3.0 differences

Affected #:  1 file

diff -r a892c2edf3a8cdc5164e7fb6fa58f4e5bb3c42c0 -r 1148919f26a665f06caf9d7466d68b335e832b27 doc/source/yt3differences.rst
--- a/doc/source/yt3differences.rst
+++ b/doc/source/yt3differences.rst
@@ -84,7 +84,7 @@
   external code**
   Mesh fields that exist on-disk in an output file can be read in using whatever
   name is used by the output file.  On-disk fields are always returned in code
-  units.  The full field name will be will be ``(code_name, field_name)``. See
+  units.  The full field name will be ``(code_name, field_name)``. See
   :ref:`field-list`.
 * **Particle fields are now more obviously different than mesh fields**
   Particle fields on-disk will also be in code units, and will be named
@@ -247,8 +247,8 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Wherever possible, we have attempted to replace the term "parameter file"
-(i.e., ``pf``) with the term "dataset."  In yt-3.0, all of the 
-the ``pf`` atrributes of objects are now ``ds`` or ``dataset`` attributes.
+(i.e., ``pf``) with the term "dataset."  In yt-3.0, all of
+the ``pf`` attributes of objects are now ``ds`` or ``dataset`` attributes.
 
 Hierarchy is Now Index
 ^^^^^^^^^^^^^^^^^^^^^^
@@ -262,7 +262,7 @@
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Derived quantities can now be accessed via a function that hangs off of the
-``quantities`` atribute of data objects. Instead of
+``quantities`` attribute of data objects. Instead of
 ``dd.quantities['TotalMass']()``, you can now use ``dd.quantities.total_mass()``
 to do the same thing. All derived quantities can be accessed via a function that
 hangs off of the `quantities` attribute of data objects.
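
A short illustration of the renamed interface (the sample dataset path is
hypothetical):

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   dd = ds.all_data()
   # formerly dd.quantities['TotalMass']()
   total_mass = dd.quantities.total_mass()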


https://bitbucket.org/yt_analysis/yt/commits/9695466e7191/
Changeset:   9695466e7191
Branch:      yt
User:        migueldvb
Date:        2016-03-13 22:45:47+00:00
Summary:     minor changes in installation instructions

Affected #:  1 file

diff -r 1148919f26a665f06caf9d7466d68b335e832b27 -r 9695466e719130ac4e36364e422d5a6acca55d21 doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -36,7 +36,7 @@
   <http://brew.sh>`_ or `MacPorts <http://www.macports.org/>`_ this choice will
   let you install yt using the python installed by the package manager. Similarly
   for python environments set up via Linux package managers so long as you
-  have the the necessary compilers installed (e.g. the ``build-essentials``
+  have the necessary compilers installed (e.g. the ``build-essentials``
   package on Debian and Ubuntu).
 
 .. note::
@@ -417,7 +417,7 @@
 
 With the release of version 3.0 of yt, development of the legacy yt 2.x series
 has been relegated to bugfixes.  That said, we will continue supporting the 2.x
-series for the forseeable future.  This makes it easy to use scripts written
+series for the foreseeable future.  This makes it easy to use scripts written
 for older versions of yt without substantially updating them to support the
 new field naming or unit systems in yt version 3.
 
@@ -431,7 +431,7 @@
 You already have the mercurial repository, so you simply need to switch
 which version you're using.  Navigate to the root of the yt mercurial
 repository, update to the desired version, and rebuild the source (some of the
-c code requires a compilation step for big changes like this):
+C code requires a compilation step for big changes like this):
 
 .. code-block:: bash
 
@@ -439,7 +439,7 @@
   hg update <desired-version>
   python setup.py develop
 
-Valid versions to jump to are described in :ref:`branches-of-yt`).
+Valid versions to jump to are described in :ref:`branches-of-yt`.
 
 You can check which version of yt you have installed by invoking ``yt version``
 at the command line.  If you encounter problems, see :ref:`update-errors`.
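
A concrete instance of the recipe above (the clone path is hypothetical, and
``yt-2.x`` is assumed to be among the valid branch names):

.. code-block:: bash

   cd yt-hg                   # root of the yt mercurial clone
   hg update yt-2.x
   python setup.py develop
   yt version                 # confirm which version is now active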


https://bitbucket.org/yt_analysis/yt/commits/93cdcbfd73f5/
Changeset:   93cdcbfd73f5
Branch:      yt
User:        migueldvb
Date:        2016-03-13 23:56:50+00:00
Summary:     Several other typo corrections

Affected #:  14 files

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -7,7 +7,7 @@
 disconnected structures within a dataset.  This works by first creating a 
 single contour over the full range of the contouring field, then continually 
 increasing the lower value of the contour until it reaches the maximum value 
-of the field.  As disconnected structures are identified as separate contoures, 
+of the field.  As disconnected structures are identified as separate contours, 
 the routine continues recursively through each object, creating a hierarchy of 
 clumps.  Individual clumps can be kept or removed from the hierarchy based on 
 the result of user-specified functions, such as checking for gravitational 
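
A minimal sketch of the recursive contouring this describes, using the
level-set API of that era (dataset path and density bounds hypothetical):

.. code-block:: python

   import yt
   from yt.analysis_modules.level_sets.api import Clump, find_clumps

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0],
                         (8, "kpc"), (1, "kpc"))
   master_clump = Clump(data_source, ("gas", "density"))
   # raise the lower contour from c_min toward c_max in factors of 2,
   # recursing into each disconnected structure found along the way
   c_min, c_max = 5e-27, 5e-24
   find_clumps(master_clump, c_min, c_max, 2.0)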

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -93,7 +93,7 @@
 ellipsoid's semi-principle axes. "e0" is the largest semi-principle
 axis vector direction that would have magnitude A but normalized.  
 The "tilt" is an angle measured in radians.  It can be best described
-as after the rotation about the z-axis to allign e0 to x in the x-y
+as after the rotation about the z-axis to align e0 to x in the x-y
 plane, and then rotating about the y-axis to align e0 completely to
 the x-axis, the angle remaining to rotate about the x-axis to align
 both e1 to the y-axis and e2 to the z-axis.

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -236,7 +236,7 @@
 All callbacks, quantities, and filters are stored in an actions list, 
 meaning that they are executed in the same order in which they were added. 
 This enables the use of simple, reusable, single action callbacks that 
-depend on each other. This also prevents unecessary computation by allowing 
+depend on each other. This also prevents unnecessary computation by allowing 
 the user to add filters at multiple stages to skip remaining analysis if it 
 is not warranted.
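
A sketch of the ordered pipeline this describes (``data_ds`` and ``halos_ds``
are assumed to be already-loaded datasets):

.. code-block:: python

   from yt.analysis_modules.halo_analysis.api import HaloCatalog

   hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)
   # actions run in insertion order, so this filter spares the later
   # steps for any halo that fails the mass cut
   hc.add_filter("quantity_value", "particle_mass", ">", 1e13, "Msun")
   hc.add_callback("sphere", factor=2.0)
   hc.add_quantity("center_of_mass")
   hc.create()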
 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -13,7 +13,7 @@
 
 A halo mass function can be created for the halos identified in a cosmological 
 simulation, as well as analytic fits using any arbitrary set of cosmological
-paramters. In order to create a mass function for simulated halos, they must
+parameters. In order to create a mass function for simulated halos, they must
 first be identified (using HOP, FOF, or Rockstar, see 
 :ref:`halo_catalog`) and loaded as a halo dataset object. The distribution of
 halo masses will then be found, and can be compared to the analytic prediction
@@ -78,7 +78,7 @@
   my_halos = load("rockstar_halos/halos_0.0.bin")
   hmf = HaloMassFcn(halos_ds=my_halos)
 
-A simulation dataset can be passed along with additonal cosmological parameters 
+A simulation dataset can be passed along with additional cosmological parameters 
 to create an analytic mass function.
 
 .. code-block:: python
@@ -106,7 +106,7 @@
 -----------------
 
 * **simulation_ds** (*Simulation dataset object*)
-  The loaded simulation dataset, used to set cosmological paramters.
+  The loaded simulation dataset, used to set cosmological parameters.
   Default : None.
 
 * **halos_ds** (*Halo dataset object*)
@@ -130,7 +130,7 @@
 
 * **omega_baryon0**  (*float*)
   The fraction of the universe made up of baryonic matter. This is not 
-  always stored in the datset and should be checked by hand.
+  always stored in the dataset and should be checked by hand.
   Default : 0.0456.
 
 * **hubble0** (*float*)
@@ -140,14 +140,14 @@
 * **sigma8** (*float*)
   The amplitude of the linear power spectrum at z=0 as specified by 
   the rms amplitude of mass-fluctuations in a top-hat sphere of radius 
-  8 Mpc/h. This is not always stored in the datset and should be 
+  8 Mpc/h. This is not always stored in the dataset and should be 
   checked by hand.
   Default : 0.86.
 
 * **primoridal_index** (*float*)
   This is the index of the mass power spectrum before modification by 
   the transfer function. A value of 1 corresponds to the scale-free 
-  primordial spectrum. This is not always stored in the datset and 
+  primordial spectrum. This is not always stored in the dataset and 
   should be checked by hand.
   Default : 1.0.
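
A sketch of an analytic-only mass function built from these parameters
(values are the defaults listed above; ``my_ds`` is assumed loaded):

.. code-block:: python

   from yt.analysis_modules.halo_mass_function.api import HaloMassFcn

   # omega_baryon0 and sigma8 are not always stored on disk, so they
   # are set explicitly here
   hmf = HaloMassFcn(simulation_ds=my_ds, omega_baryon0=0.0456, sigma8=0.86)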
 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/halo_transition.rst
--- a/doc/source/analyzing/analysis_modules/halo_transition.rst
+++ b/doc/source/analyzing/analysis_modules/halo_transition.rst
@@ -40,7 +40,7 @@
 the full halo catalog documentation for further information about
 how to add these quantities and what quantities are available.
 
-You no longer have to iteratre over halos in the ``halo_list``.
+You no longer have to iterate over halos in the ``halo_list``.
 Now a halo dataset can be treated as a regular dataset and 
 all quantities are available by accessing ``all_data``.
 Specifically, all quantities can be accessed as shown:
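
The example the hunk cuts off presumably resembles the following (the halo
catalog path is the one used elsewhere in these docs):

.. code-block:: python

   import yt

   halos_ds = yt.load("rockstar_halos/halos_0.0.bin")
   ad = halos_ds.all_data()
   # halo quantities behave like ordinary dataset fields
   masses = ad["halos", "particle_mass"]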

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst
@@ -50,7 +50,7 @@
   ``use_minimum_datasets`` set to False, this parameter specifies the 
   fraction of the total box size to be traversed before rerandomizing the 
   projection axis and center.  This was invented to allow light cones with 
-  thin slices to sample coherent large cale structure, but in practice does 
+  thin slices to sample coherent large scale structure, but in practice does 
   not work so well.  Try setting this parameter to 1 and see what happens.  
   Default: 0.0.
 
@@ -74,7 +74,7 @@
 
 A light cone solution consists of a list of datasets spanning a redshift 
 interval with a random orientation for each dataset.  A new solution 
-is calcuated with the 
+is calculated with the 
 :func:`~yt.analysis_modules.cosmological_observation.light_cone.light_cone.LightCone.calculate_light_cone_solution`
 function:
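
A hedged sketch of that call (the seed value is hypothetical, and ``lc`` is
the LightCone instance constructed earlier in that document):

.. code-block:: python

   # fixing the random seed makes the solution reproducible
   lc.calculate_light_cone_solution(seed=123456789)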
 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -347,7 +347,7 @@
   be used to control what vector corresponds to the "up" direction in 
   the resulting event list. 
 * ``psf_sigma`` may be specified to provide a crude representation of 
-  a PSF, and corresponds to the standard deviation (in degress) of a 
+  a PSF, and corresponds to the standard deviation (in degrees) of a 
   Gaussian PSF model. 
 
 Let's just take a quick look at the raw events object:

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -313,7 +313,7 @@
     | A ``cut_region`` is a filter which can be applied to any other data 
       object.  The filter is defined by the conditionals present, which 
       apply cuts to the data in the object.  A ``cut_region`` will work
-      for either particle fields or mesh fields, but not on both simulaneously.
+      for either particle fields or mesh fields, but not on both simultaneously.
       For more detailed information and examples, see :ref:`cut-regions`.
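
A one-line illustration of the conditional syntax (field and threshold
hypothetical):

.. code-block:: python

   ad = ds.all_data()
   dense = ad.cut_region(['obj["gas", "density"] > 1e-27'])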
 
 **Collection of Data Objects** 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -49,7 +49,7 @@
 
     $ conda install mpi4py
 
-This will install `MPICH2 <https://www.mpich.org/>`_ and will interefere with
+This will install `MPICH2 <https://www.mpich.org/>`_ and will interfere with
 other MPI libraries that are already installed. Therefore, it is preferable to
 use the ``pip`` installation method.
 
@@ -103,7 +103,7 @@
    p.save()
 
 If this script is run in parallel, two of the most expensive operations -
-finding of the maximum density and the projection will be calulcated in
+finding of the maximum density and the projection will be calculated in
 parallel.  If we save the script as ``my_script.py``, we would run it on 16 MPI
 processes using the following Bash command:
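
The elided command is presumably the standard MPI launcher invocation:

.. code-block:: bash

   mpirun -np 16 python my_script.py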
 
@@ -121,7 +121,7 @@
 
 You can set the ``communicator`` keyword in the 
 :func:`~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism` 
-call to a specific MPI communicator to specify a subset of availble MPI 
+call to a specific MPI communicator to specify a subset of available MPI 
 processes.  If none is specified, it defaults to ``COMM_WORLD``.
 
 Creating Parallel and Serial Sections in a Script
@@ -251,7 +251,7 @@
 You may define an empty dictionary and include it as the keyword argument 
 ``storage`` to ``piter()``.  Then, during the processing step, you can access
 this dictionary as the ``sto`` object.  After the 
-loop is finished, the dictionary is re-aggragated from all of the processors, 
+loop is finished, the dictionary is re-aggregated from all of the processors, 
 and you can access the contents:
 
 .. code-block:: python
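
   # (hedged sketch of the elided example; the glob pattern is hypothetical)
   import yt
   yt.enable_parallelism()

   ts = yt.DatasetSeries("DD????/DD????")
   storage = {}
   for sto, ds in ts.piter(storage=storage):
       # each task fills its own slot; piter re-aggregates across processors
       sto.result_id = str(ds)
       sto.result = ds.all_data().quantities.total_mass()
   if yt.is_root():
       print(storage)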

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/analyzing/units/comoving_units_and_code_units.rst
--- a/doc/source/analyzing/units/comoving_units_and_code_units.rst
+++ b/doc/source/analyzing/units/comoving_units_and_code_units.rst
@@ -12,7 +12,7 @@
 
 yt has additional capabilities to handle the comoving coordinate system used
 internally in cosmological simulations. Simulations that use comoving
-coordinates, all length units have three other counterparts correspoding to
+coordinates, all length units have three other counterparts corresponding to
 comoving units, scaled comoving units, and scaled proper units. In all cases
 'scaled' units refer to scaling by the reduced Hubble parameter - i.e. the length
 unit is what it would be in a universe where Hubble's parameter is 100 km/s/Mpc.
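
For a cosmological dataset, the four counterparts look like this (unit names
follow yt's comoving/scaled suffix convention):

.. code-block:: python

   dd = ds.all_data()
   print(dd["index", "x"].in_units("Mpc"))      # proper
   print(dd["index", "x"].in_units("Mpccm"))    # comoving
   print(dd["index", "x"].in_units("Mpccm/h"))  # scaled comoving
   print(dd["index", "x"].in_units("Mpc/h"))    # scaled proper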

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/cookbook/cosmological_analysis.rst
--- a/doc/source/cookbook/cosmological_analysis.rst
+++ b/doc/source/cookbook/cosmological_analysis.rst
@@ -65,7 +65,7 @@
 
 .. yt_cookbook:: light_ray.py 
 
-This script demontrates how to make a light ray from a single dataset.
+This script demonstrates how to make a light ray from a single dataset.
 
 .. _cookbook-single-dataset-light-ray:
 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/cookbook/notebook_tutorial.rst
--- a/doc/source/cookbook/notebook_tutorial.rst
+++ b/doc/source/cookbook/notebook_tutorial.rst
@@ -17,7 +17,7 @@
    $ ipython notebook
 
 Depending on your default web browser and system setup this will open a web
-browser and direct you to the notebook dahboard.  If it does not,  you might
+browser and direct you to the notebook dashboard.  If it does not,  you might
 need to connect to the notebook manually.  See the `IPython documentation
 <http://ipython.org/ipython-doc/stable/notebook/notebook.html#starting-the-notebook-server>`_
 for more details.

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -158,7 +158,7 @@
 HTML. to simplify versioning of the notebook JSON format, we store notebooks in
 an unevaluated state.
 
-To build the full documentation, you will need yt, jupyter, and all depedencies 
+To build the full documentation, you will need yt, jupyter, and all dependencies 
 needed for yt's analysis modules installed. The following dependencies were 
 used to generate the yt documentation during the release of yt 3.2 in 2015.
 

diff -r 9695466e719130ac4e36364e422d5a6acca55d21 -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 doc/source/developing/creating_derived_fields.rst
--- a/doc/source/developing/creating_derived_fields.rst
+++ b/doc/source/developing/creating_derived_fields.rst
@@ -65,7 +65,7 @@
 data in a dimensionally equivalent unit (e.g. a ``"dyne"`` versus a ``"N"``), the
 field data will be converted to the units specified in ``add_field`` before
 being returned in a data object selection. If the field function returns data
-with dimensions that are incompatibible with units specified in ``add_field``,
+with dimensions that are incompatible with units specified in ``add_field``,
 you will see an error. To clear this error, you must ensure that your field
 function returns data in the correct units. Often, this means applying units to
 a dimensionless float or array.
@@ -75,7 +75,7 @@
 to get a predefined version of the constant with the correct units. If you know
 the units your data is supposed to have ahead of time, you can import unit
 symbols like ``g`` or ``cm`` from the ``yt.units`` namespace and multiply the
-return value of your field function by the appropriate compbination of unit
+return value of your field function by the appropriate combination of unit
 symbols for your field's units. You can also convert floats or NumPy arrays into
 :class:`~yt.units.yt_array.YTArray` or :class:`~yt.units.yt_array.YTQuantity`
 instances by making use of the
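
A minimal sketch of the unit-symbol approach described above (field name
hypothetical):

.. code-block:: python

   import yt
   from yt.units import g, cm

   def _density_squared(field, data):
       # multiply the raw array by unit symbols so the result matches
       # the units declared in add_field below
       return data["gas", "density"].d ** 2 * g**2 / cm**6

   yt.add_field(("gas", "density_squared"), function=_density_squared,
                units="g**2/cm**6")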


https://bitbucket.org/yt_analysis/yt/commits/81d517a65dfc/
Changeset:   81d517a65dfc
Branch:      yt
User:        migueldvb
Date:        2016-03-14 00:23:39+00:00
Summary:     A few corrections in visualizing docs

Affected #:  7 files

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -471,7 +471,7 @@
 Another good example of an image comparison test is the
 ``PlotWindowAttributeTest`` defined in the answer testing framework and used in
 ``yt/visualization/tests/test_plotwindow.py``. This test shows how a new answer
-test subclass can be used to programitically test a variety of different methods
+test subclass can be used to programmatically test a variety of different methods
 of a complicated class using the same test class. This sort of image comparison
 test is more useful if you are finding yourself writing a ton of boilerplate
 code to get your image comparison test working.  The ``GenericImageTest`` is

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -1118,8 +1118,10 @@
    bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [1.5, 1.5]])
    ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
 
-where in this exampe the particle position fields have been assigned. ``number_of_particles`` must be the same size as the particle
-arrays. If no particle arrays are supplied then ``number_of_particles`` is assumed to be zero. 
+where in this example the particle position fields have been assigned.
+``number_of_particles`` must be the same size as the particle arrays. If no
+particle arrays are supplied then ``number_of_particles`` is assumed to be
+zero. 
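
A self-contained sketch of the particle fields described above (sizes and
values hypothetical):

.. code-block:: python

   import numpy as np
   import yt

   n_p = 1000
   shape = (64, 64, 64)
   data = dict(density=np.random.random(shape),
               number_of_particles=n_p,
               particle_position_x=np.random.random(n_p),
               particle_position_y=np.random.random(n_p),
               particle_position_z=np.random.random(n_p))
   bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
   ds = yt.load_uniform_grid(data, shape, 3.08e24, bbox=bbox)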
 
 .. rubric:: Caveats
 
@@ -1153,7 +1155,7 @@
    coordinates,connectivity = yt.hexahedral_connectivity(xgrid,ygrid,zgrid)
 
 will define the (x,y,z) coordinates of the hexahedral cells and
-information about that cell's neighbors such that the celll corners
+information about that cell's neighbors such that the cell corners
 will be a grid of points constructed as the Cartesion product of
 xgrid, ygrid, and zgrid.
 

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -176,7 +176,7 @@
 cells from the parent grid will be duplicated (appropriately) to fill the 
 covering grid.
 
-Let's say we now want to look at that entire data volume and sample it at the 
+Let's say we now want to look at that entire data volume and sample it at
 a higher resolution (i.e. level 2).  As stated above, we'll be oversampling
 under-refined regions, but that's OK.  We must also increase the resolution 
 of our output array by a factor of 2^2 in each direction to hold this new 
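
A sketch of the covering grid described here (sample dataset hypothetical;
``dims`` grows by 2^2 per axis to match level 2):

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   cg = ds.covering_grid(level=2,
                         left_edge=ds.domain_left_edge,
                         dims=ds.domain_dimensions * 2**2)
   density = cg["gas", "density"]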

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/help/index.rst
--- a/doc/source/help/index.rst
+++ b/doc/source/help/index.rst
@@ -141,7 +141,7 @@
   $ grep -r SlicePlot *         (or $ grin SlicePlot)
 
 This will print a number of locations in the yt source tree where ``SlicePlot``
-is mentioned.  You can now followup on this and open up the files that have
+is mentioned.  You can now follow-up on this and open up the files that have
 references to ``SlicePlot`` (particularly the one that defines SlicePlot) and
 inspect their contents for problems or clarification.
 

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/reference/python_introduction.rst
--- a/doc/source/reference/python_introduction.rst
+++ b/doc/source/reference/python_introduction.rst
@@ -315,7 +315,7 @@
 Let's try this out with a for loop.  First type ``for i in range(10):`` and
 press enter.  This will change the prompt to be three periods, instead of three
 greater-than signs, and you will be expected to hit the tab key to indent.
-Then type "print i", press enter, and then instead of indenting again, press
+Then type "print(i)", press enter, and then instead of indenting again, press
 enter again.  The entire entry should look like this::
 
    >>> for i in range(10):
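    ...     print(i)
    ...

(the loop then echoes the integers 0 through 9, one per line; the hunk cuts
the example off after its first line)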

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/visualizing/colormaps/index.rst
--- a/doc/source/visualizing/colormaps/index.rst
+++ b/doc/source/visualizing/colormaps/index.rst
@@ -47,7 +47,7 @@
 store them in your :ref:`plugin-file` for access to them in every future yt 
 session.  The example below creates two custom colormaps, one that has
 three equally spaced bars of blue, white and red, and the other that 
-interpolates in increasing lengthed intervals from black to red, to green, 
+interpolates in increasing lengthen intervals from black to red, to green, 
 to blue.  These will be accessible for the rest of the yt session as 
 'french_flag' and 'weird'.  See 
 :func:`~yt.visualization.color_maps.make_colormap` and 
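
The first of the two colormaps that example builds, sketched with the
function's (color, interval) tuple list (interval widths hypothetical):

.. code-block:: python

   from yt.visualization.color_maps import make_colormap

   # three equally spaced bars of blue, white, and red
   make_colormap([("blue", 20), ("white", 20), ("red", 20)],
                 name="french_flag", interpolate=False)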

diff -r 93cdcbfd73f53ed78ad88dc5bbd1c458a0923508 -r 81d517a65dfc0046e3eddc24c86a987bc9cf0f43 doc/source/visualizing/unstructured_mesh_rendering.rst
--- a/doc/source/visualizing/unstructured_mesh_rendering.rst
+++ b/doc/source/visualizing/unstructured_mesh_rendering.rst
@@ -355,7 +355,7 @@
 ^^^^^^^^^^^^^
 
 Here are a couple of example scripts that show how to create image frames that 
-can later be stiched together into a movie. In the first example, we look at a 
+can later be stitched together into a movie. In the first example, we look at a 
 single dataset at a fixed time, but we move the camera around to get a different
 vantage point. We call the rotate() method 300 times, saving a new image to the 
 disk each time.

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


