[yt-svn] commit/yt: 2 new changesets
commits-noreply at bitbucket.org
Fri Aug 18 11:43:43 PDT 2017
2 new commits in yt:
https://bitbucket.org/yt_analysis/yt/commits/146c671ce565/
Changeset: 146c671ce565
User: xarthisius
Date: 2017-08-18 14:42:36+00:00
Summary: Fix typos in docs
Affected #: 25 files
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/analysis_modules/absorption_spectrum.rst
--- a/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
+++ b/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
@@ -431,6 +431,6 @@
stuck in a local minimum. A set of hard coded initial parameter guesses
for Lyman alpha lines is given by the function
:func:`~yt.analysis_modules.absorption_spectrum.absorption_spectrum_fit.get_test_lines`.
-Also included in these parameter guesses is an an initial guess of a high
-column cool line overlapping a lower column warm line, indictive of a
+Also included in these parameter guesses is an initial guess of a high
+column cool line overlapping a lower column warm line, indicative of a
broad Lyman alpha (BLA) absorber.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/analysis_modules/cosmology_calculator.rst
--- a/doc/source/analyzing/analysis_modules/cosmology_calculator.rst
+++ b/doc/source/analyzing/analysis_modules/cosmology_calculator.rst
@@ -39,7 +39,7 @@
# comoving volume
print("comoving volume", co.comoving_volume(0, 0.5).in_units("Gpccm**3"))
- # angulare diameter distance
+ # angular diameter distance
print("angular diameter distance", co.angular_diameter_distance(0, 0.5).in_units("Mpc/h"))
# angular scale
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -151,7 +151,7 @@
checked by hand.
Default : 0.86.
-* **primoridal_index** (*float*)
+* **primordial_index** (*float*)
This is the index of the mass power spectrum before modification by
the transfer function. A value of 1 corresponds to the scale-free
primordial spectrum. This is not always stored in the dataset and
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/analysis_modules/halo_transition.rst
--- a/doc/source/analyzing/analysis_modules/halo_transition.rst
+++ b/doc/source/analyzing/analysis_modules/halo_transition.rst
@@ -3,7 +3,7 @@
Transitioning From yt-2 to yt-3
===============================
-If you're used to halo analysis in yt-2.x, heres a guide to
+If you're used to halo analysis in yt-2.x, here's a guide to
how to update your analysis pipeline to take advantage of
the new halo catalog infrastructure. If you're starting
from scratch, see :ref:`halo_catalog`.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/analysis_modules/light_ray_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_ray_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_ray_generator.rst
@@ -205,8 +205,8 @@
option should be used with caution as it will lead to the creation
of disconnected ray segments within a single dataset.
-I want a continous trajectory over the entire ray.
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+I want a continuous trajectory over the entire ray.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Set the ``minimum_coherent_box_fraction`` keyword argument to a very
large number, like infinity (`numpy.inf`).
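The tip in the hunk above can be sketched in plain Python. This is a hedged illustration only: the ``minimum_coherent_box_fraction`` keyword name comes from the documentation being patched, while the surrounding ``LightRay`` call is assumed and left commented out.

```python
import math

# Sketch only: assemble keyword arguments for a hypothetical LightRay call.
# The argument name is taken from the docs above; the call itself is an
# assumption and requires yt, so it stays commented out.
ray_kwargs = {"minimum_coherent_box_fraction": math.inf}

# lr = LightRay(..., **ray_kwargs)  # assumed usage

# Any finite box fraction is below the value we set, so the coherence
# threshold can never force a break in the trajectory.
assert all(f < ray_kwargs["minimum_coherent_box_fraction"]
           for f in (0.0, 0.5, 1.0, 1e9))
```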
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/filtering.rst
--- a/doc/source/analyzing/filtering.rst
+++ b/doc/source/analyzing/filtering.rst
@@ -157,7 +157,7 @@
This is equivalent to our use of the ``particle_filter`` decorator above. The
choice to use either the ``particle_filter`` decorator or the
-``add_particle_fitler`` function is a purely stylistic choice.
+``add_particle_filter`` function is a purely stylistic choice.
.. notebook:: particle_filter.ipynb
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -261,7 +261,7 @@
for sto, dataset in dataset_series.piter(storage=my_dictionary):
<process>
sto.result = <some information processed for this dataset>
- sto.result_id = <some identfier for this dataset>
+ sto.result_id = <some identifier for this dataset>
print(my_dictionary)
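The ``piter(storage=...)`` pattern in the hunk above can be mimicked without yt as a plain dictionary fill. The dataset names and the "processing" step below are made up for illustration; only the result/result_id shape mirrors the documented API.

```python
# Stand-in for: for sto, dataset in dataset_series.piter(storage=my_dictionary)
# Each iteration computes a result and files it under an identifier,
# mirroring sto.result / sto.result_id. All concrete values are illustrative.
my_dictionary = {}
for dataset_name in ["DD0000", "DD0010", "DD0020"]:
    result = dataset_name.lower()   # <some information processed for this dataset>
    result_id = dataset_name        # <some identifier for this dataset>
    my_dictionary[result_id] = result

print(my_dictionary)
```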
@@ -491,7 +491,7 @@
++++++++++++++++++++
The various types of analysis that utilize domain decomposition use them in
-different enough ways that they are be discussed separately.
+different enough ways that they are discussed separately.
**Halo-Finding**
@@ -605,7 +605,7 @@
16 processors assigned to each output in the time series.
#. Creating a big cube that will hold our results for this set of processors.
Note that this will be only for each output considered by this processor,
- and this cube will not necessarily be filled in in every cell.
+ and this cube will not necessarily be filled in every cell.
#. For each output, distribute the grids to each of the sixteen processors
working on that output. Each of these takes the max of the ionized
redshift in their zone versus the accumulation cube.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/analyzing/units/1)_Symbolic_Units.ipynb
--- a/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
+++ b/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Most people will interact with the new unit system using `YTArray` and `YTQuantity`. These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`. `YTArray` is intended to store array data, while `YTQuantitity` is intended to store scalars in a particular unit system.\n",
+ "Most people will interact with the new unit system using `YTArray` and `YTQuantity`. These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangeably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`. `YTArray` is intended to store array data, while `YTQuantitity` is intended to store scalars in a particular unit system.\n",
"\n",
"There are two ways to create arrays and quantities. The first is to explicitly create it by calling the class constructor and supplying a unit string:"
]
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/cookbook/Halo_Analysis.ipynb
--- a/doc/source/cookbook/Halo_Analysis.ipynb
+++ b/doc/source/cookbook/Halo_Analysis.ipynb
@@ -310,7 +310,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- " Just as profiles are saved seperately throught the `save_profiles` callback they also must be loaded separately using the `load_profiles` callback."
+ " Just as profiles are saved separately through the `save_profiles` callback they also must be loaded separately using the `load_profiles` callback."
]
},
{
@@ -329,7 +329,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Calling `load` is the equivalent of calling `create` earlier, but defaults to to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
+ "Calling `load` is the equivalent of calling `create` earlier, but defaults to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
]
},
{
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -36,8 +36,8 @@
.. yt_cookbook:: multiplot_2x2_time_series.py
-Mutiple Slice Multipanel
-~~~~~~~~~~~~~~~~~~~~~~~~
+Multiple Slice Multipanel
+~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to create a multipanel plot of slices along the coordinate
axes. To focus on what's happening in the x-y plane, we make an additional
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/cookbook/notebook_tutorial.rst
--- a/doc/source/cookbook/notebook_tutorial.rst
+++ b/doc/source/cookbook/notebook_tutorial.rst
@@ -3,7 +3,7 @@
Notebook Tutorial
-----------------
-The IPython notebook is a powerful system for literate codoing - a style of
+The IPython notebook is a powerful system for literate coding - a style of
writing code that embeds input, output, and explanatory text into one document.
yt has deep integration with the IPython notebook, explained in-depth in the
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/cookbook/streamlines.py
--- a/doc/source/cookbook/streamlines.py
+++ b/doc/source/cookbook/streamlines.py
@@ -24,7 +24,7 @@
length=1.0*Mpc, get_magnitude=True)
streamlines.integrate_through_volume()
-# Create a 3D plot, trace the streamlines throught the 3D volume of the plot
+# Create a 3D plot, trace the streamlines through the 3D volume of the plot
fig=pl.figure()
ax = Axes3D(fig)
for stream in streamlines.streamlines:
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/developing/releasing.rst
--- a/doc/source/developing/releasing.rst
+++ b/doc/source/developing/releasing.rst
@@ -20,7 +20,7 @@
These releases happen when new features are deemed ready to be merged into the
``stable`` branch and should not happen on a regular schedule. Minor releases
can also include fixes for bugs if the fix is determined to be too invasive
- for a bugfix release. Minor releases should *not* inlucde
+ for a bugfix release. Minor releases should *not* include
backwards-incompatible changes and should not change APIs. If an API change
is deemed to be necessary, the old API should continue to function but might
trigger deprecation warnings. Minor releases should happen by merging the
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -543,7 +543,7 @@
it is considered best practice to first submit a pull request adding the tests WITHOUT incrementing
the version number. Then, allow the tests to run (resulting in "no old answer" errors for the missing
answers). If no other failures are present, you can then increment the version number to regenerate
-the answers. This way, we can avoid accidently covering up test breakages.
+the answers. This way, we can avoid accidentally covering up test breakages.
Adding New Answer Tests
~~~~~~~~~~~~~~~~~~~~~~~
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -49,7 +49,7 @@
file containing particle time steps is not loaded by yt.
You also have the option of gridding particles and assigning them onto the
-meshes. This process is in beta, and for the time being it's probably best to
+meshes. This process is in beta, and for the time being, it's probably best to
leave ``do_grid_particles=False`` as the default.
To speed up the loading of an ART file, you have a few options. You can turn
@@ -1420,7 +1420,7 @@
will define the (x,y,z) coordinates of the hexahedral cells and
information about that cell's neighbors such that the cell corners
-will be a grid of points constructed as the Cartesion product of
+will be a grid of points constructed as the Cartesian product of
xgrid, ygrid, and zgrid.
Then, to load your data, which should be defined on the interiors of
@@ -1556,7 +1556,7 @@
You can also load generic particle data using the same ``stream`` functionality
discussed above to load in-memory grid data. For example, if your particle
-positions and masses are stored in ``positions`` and ``massess``, a
+positions and masses are stored in ``positions`` and ``masses``, a
vertically-stacked array of particle x,y, and z positions, and a 1D array of
particle masses respectively, you would load them like this:
@@ -1647,7 +1647,7 @@
AHF halo catalogs are loaded by providing the path to the .parameter files.
The corresponding .log and .AHF_halos files must exist for data loading to
-succeed. The field type for all fields is "halos". Some fields of note avaible
+succeed. The field type for all fields is "halos". Some fields of note available
from AHF are:
+----------------+---------------------------+
@@ -1913,7 +1913,7 @@
---------
`PyNE <http://pyne.io/>`_ is an open source nuclear engineering toolkit
-maintained by the PyNE developement team (pyne-dev at googlegroups.com).
+maintained by the PyNE development team (pyne-dev at googlegroups.com).
PyNE meshes utilize the Mesh-Oriented datABase
`(MOAB) <http://trac.mcs.anl.gov/projects/ITAPS/wiki/MOAB/>`_ and can be
Cartesian or tetrahedral. In addition to field data, pyne meshes store pyne
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -264,6 +264,6 @@
will only have a particle named ``io``.
Finally, one can see the number of each particle type by inspecting
-``ds.particle_type_counts``. This will be a dictionary mappying the names of
+``ds.particle_type_counts``. This will be a dictionary mapping the names of
particle types in ``ds.particle_types_raw`` to the number of each particle type
in a simulation output.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -20,7 +20,7 @@
python packages, or are working on a supercomputer or cluster computer, you
will probably want to use the bash all-in-one installation script. This
creates a python environment using the `miniconda python
- distrubtion <http://conda.pydata.org/miniconda.html>`_ and the
+ distribution <http://conda.pydata.org/miniconda.html>`_ and the
`conda <http://conda.pydata.org/docs/>`_ package manager inside of a single
folder in your home directory. See :ref:`install-script` for more details.
@@ -107,7 +107,7 @@
$ curl -OL https://raw.githubusercontent.com/yt-project/yt/master/doc/install_script.sh
By default, the bash install script will create a python environment based on
-the `miniconda python distrubtion <http://conda.pydata.org/miniconda.html>`_,
+the `miniconda python distribution <http://conda.pydata.org/miniconda.html>`_,
and will install yt's dependencies using the `conda
<http://conda.pydata.org/docs/>`_ package manager. To avoid needing a
compilation environment to run the install script, yt itself will also be
@@ -300,7 +300,7 @@
Installing Support for the Rockstar Halo Finder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The easiest way to set rockstar up in a conda-based python envrionment is to run
+The easiest way to set rockstar up in a conda-based python environment is to run
the install script with ``INST_ROCKSTAR=1``.
If you want to do this manually, you will need to follow these
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/intro/index.rst
--- a/doc/source/intro/index.rst
+++ b/doc/source/intro/index.rst
@@ -111,7 +111,7 @@
<light-cone-generator>`, :ref:`cosmological light rays <light-ray-generator>`,
:ref:`synthetic absorption spectra <absorption_spectrum>`, :ref:`spectral
emission distributions (SEDS) <synthetic_spectrum>`, :ref:`star formation
-rates <star_analysis>`, :ref:`synthetic x-ray obserservations
+rates <star_analysis>`, :ref:`synthetic x-ray observations
<xray_emission_fields>`, and :ref:`synthetic sunyaev-zeldovich effect
observations <sunyaev-zeldovich>`), :ref:`two-point correlations functions
<two_point_functions>`, :ref:`identification of overdensities in arbitrary
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/quickstart/2)_Data_Inspection.ipynb
--- a/doc/source/quickstart/2)_Data_Inspection.ipynb
+++ b/doc/source/quickstart/2)_Data_Inspection.ipynb
@@ -174,7 +174,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For this dataset, we see that there are two particle types defined, (`io` and `all`), but that only one of these particle types in in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n",
+ "For this dataset, we see that there are two particle types defined, (`io` and `all`), but that only one of these particle types in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n",
"\n",
"We can also see that there are a bit more than 1.1 million particles in this simulation. Only particle types in `ds.particle_types_raw` will appear in the `ds.particle_type_counts` dictionary."
]
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/quickstart/index.rst
--- a/doc/source/quickstart/index.rst
+++ b/doc/source/quickstart/index.rst
@@ -4,7 +4,7 @@
=============
The quickstart is a series of worked examples of how to use much of the
-funtionality of yt. These are simple, short introductions to give you a taste
+functionality of yt. These are simple, short introductions to give you a taste
of what the code can do and are not meant to be detailed walkthroughs.
There are two ways in which you can go through the quickstart: interactively and
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/reference/changelog.rst
--- a/doc/source/reference/changelog.rst
+++ b/doc/source/reference/changelog.rst
@@ -364,8 +364,8 @@
* Fixed bugs related to compatibility issues with newer versions of numpy
* Added the ability to export data objects to a Pandas dataframe
* Added support for the fabs ufunc to YTArray
-* Fixed two licensings issues
-* Fixed a number of bugs related to Windows compatability.
+* Fixed two licensing issues
+* Fixed a number of bugs related to Windows compatibility.
* We now avoid hard-to-decipher tracebacks when loading empty files or
directories
* Fixed a bug related to ART star particle creation time field
@@ -441,7 +441,7 @@
* Made PlotWindow show/hide helpers for axes and colorbar return self
* Made Profile objects store field metadata.
* Ensured GDF unit names are strings
-* Tought off_axis_projection about its resolution keyword.
+* Taught off_axis_projection about its resolution keyword.
* Reintroduced sanitize_width for polar/cyl coordinates.
* We now fail early when load_uniform_grid is passed data with an incorrect shape
* Replaced progress bar with tqdm
@@ -472,7 +472,7 @@
* Patched ParticlePlot to work with filtered particle fields.
* Fixed a couple corner cases in gadget_fof frontend
* We now properly normalise all normal vectors in functions that take a normal
- vector (for e.g get_sph_theta)
+ vector (for e.g. get_sph_theta)
* Fixed a bug where the transfer function features were not always getting
cleared properly.
* Made the Chombo frontend is_valid method smarter.
@@ -489,7 +489,7 @@
* Ensured that mpi operations retain ImageArray type instead of downgrading to
YTArray parent class
* Added a call to _setup_plots in the custom colorbar tickmark example
-* Fixed two minor bugs in save_annocated
+* Fixed two minor bugs in save_annotated
* Added ability to specify that DatasetSeries is not a mixed data type
* Fixed a memory leak in ARTIO
* Fixed copy/paste error in to_frb method.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/visualizing/callbacks.rst
--- a/doc/source/visualizing/callbacks.rst
+++ b/doc/source/visualizing/callbacks.rst
@@ -740,7 +740,7 @@
Adds a line representing the projected path of a ray across the plot. The
ray can be either a
:class:`~yt.data_objects.selection_data_containers.YTOrthoRay`,
- :class:`~yt.data_objects.selection_data_contaners.YTRay`, or a
+ :class:`~yt.data_objects.selection_data_containers.YTRay`, or a
:class:`~yt.analysis_modules.cosmological_observation.light_ray.light_ray.LightRay`
object. annotate_ray() will properly account for periodic rays across the
volume.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/visualizing/mapserver.rst
--- a/doc/source/visualizing/mapserver.rst
+++ b/doc/source/visualizing/mapserver.rst
@@ -22,7 +22,7 @@
field, projection, weight and axis can all be specified on the command line.
When you do this, it will spawn a micro-webserver on your localhost, and output
-the URL to connect to to standard output. You can connect to it (or create an
+the URL to connect to standard output. You can connect to it (or create an
SSH tunnel to connect to it) and explore your data. Double-clicking zooms, and
dragging drags.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -376,8 +376,7 @@
Here, ``W`` is the width of the projection in the x, y, *and* z
directions.
-One can also generate generate annotated off axis projections
-using
+One can also generate annotated off axis projections using
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`. These
plots can be created in much the same way as an
``OffAxisSlicePlot``, requiring only an open dataset, a direction
@@ -403,7 +402,7 @@
------------------------
Unstructured Mesh datasets can be sliced using the same syntax as above.
-Here is an example script using a publically available MOOSE dataset:
+Here is an example script using a publicly available MOOSE dataset:
.. python-script::
@@ -647,7 +646,7 @@
~~~~~
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_font` allows font
-costomization.
+customization.
.. python-script::
@@ -691,7 +690,7 @@
slc.save()
Specifically, a field containing both positive and negative values can be plotted
-with symlog scale, by seting the boolean to be ``True`` and providing an extra
+with symlog scale, by setting the boolean to be ``True`` and providing an extra
parameter ``linthresh``. In the region around zero (when the log scale approaches
to infinity), the linear scale will be applied to the region ``(-linthresh, linthresh)``
and stretched relative to the logarithmic range. You can also plot a positive field
@@ -889,7 +888,7 @@
Note that because we have specified the weighting field to be ``None``, the
profile plot will display the accumulated cell mass as a function of temperature
rather than the average. Also note the use of a ``(value, unit)`` tuple. These
-can be used interchangably with units explicitly imported from ``yt.units`` when
+can be used interchangeably with units explicitly imported from ``yt.units`` when
creating yt plots.
We can also accumulate along the bin field of a ``ProfilePlot`` (the bin field
@@ -1110,7 +1109,7 @@
If working in a Jupyter Notebook, ``LinePlot`` also has the ``show()`` method.
-You can can add a legend to a 1D sampling plot. The legend process takes two steps:
+You can add a legend to a 1D sampling plot. The legend process takes two steps:
1. When instantiating the ``LinePlot``, pass a dictionary of
labels with keys corresponding to the field names
@@ -1426,7 +1425,7 @@
p.save()
Finally, with 1D and 2D Profiles, you can create a :class:`~yt.data_objects.profiles.ParticleProfile`
-object seperately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it
+object separately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it
create a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` object using the
:meth:`~yt.visualization.particle_plots.ParticlePhasePlot.from_profile` method. In this example,
we have also used the ``weight_field`` argument to compute the average ``particle_mass`` in each
@@ -1645,7 +1644,7 @@
Publication-ready Figures
-------------------------
-While the routines above give a convienent method to inspect and
+While the routines above give a convenient method to inspect and
visualize your data, publishers often require figures to be in PDF or
EPS format. While the matplotlib supports vector graphics and image
compression in PDF formats, it does not support compression in EPS
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r 146c671ce565c26e8983cfaa54c5b8903921890e doc/source/visualizing/streamlines.rst
--- a/doc/source/visualizing/streamlines.rst
+++ b/doc/source/visualizing/streamlines.rst
@@ -45,7 +45,7 @@
interrupt the integration and locate a new brick at the
intermediate position.
-#. The set set of streamline positions are stored in the
+#. The set of streamline positions are stored in the
:class:`~yt.visualization.streamlines.Streamlines` object.
Example Script
@@ -79,7 +79,7 @@
length=1.0*Mpc, get_magnitude=True)
streamlines.integrate_through_volume()
- # Create a 3D plot, trace the streamlines throught the 3D volume of the plot
+ # Create a 3D plot, trace the streamlines through the 3D volume of the plot
fig=pl.figure()
ax = Axes3D(fig)
for stream in streamlines.streamlines:
@@ -95,7 +95,7 @@
.. note::
- This functionality has not been implemented yet in in the 3.x series of
+ This functionality has not been implemented yet in the 3.x series of
yt. If you are interested in working on this and have questions, please
let us know on the yt-dev mailing list.
https://bitbucket.org/yt_analysis/yt/commits/ce4b77403514/
Changeset: ce4b77403514
User: ngoldbaum
Date: 2017-08-18 18:43:01+00:00
Summary: Merge pull request #1542 from Xarthisius/doc_fixes
Fix typos in docs
Affected #: 25 files
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/analysis_modules/absorption_spectrum.rst
--- a/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
+++ b/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
@@ -431,6 +431,6 @@
stuck in a local minimum. A set of hard coded initial parameter guesses
for Lyman alpha lines is given by the function
:func:`~yt.analysis_modules.absorption_spectrum.absorption_spectrum_fit.get_test_lines`.
-Also included in these parameter guesses is an an initial guess of a high
-column cool line overlapping a lower column warm line, indictive of a
+Also included in these parameter guesses is an initial guess of a high
+column cool line overlapping a lower column warm line, indicative of a
broad Lyman alpha (BLA) absorber.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/analysis_modules/cosmology_calculator.rst
--- a/doc/source/analyzing/analysis_modules/cosmology_calculator.rst
+++ b/doc/source/analyzing/analysis_modules/cosmology_calculator.rst
@@ -39,7 +39,7 @@
# comoving volume
print("comoving volume", co.comoving_volume(0, 0.5).in_units("Gpccm**3"))
- # angulare diameter distance
+ # angular diameter distance
print("angular diameter distance", co.angular_diameter_distance(0, 0.5).in_units("Mpc/h"))
# angular scale
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -151,7 +151,7 @@
checked by hand.
Default : 0.86.
-* **primoridal_index** (*float*)
+* **primordial_index** (*float*)
This is the index of the mass power spectrum before modification by
the transfer function. A value of 1 corresponds to the scale-free
primordial spectrum. This is not always stored in the dataset and
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/analysis_modules/halo_transition.rst
--- a/doc/source/analyzing/analysis_modules/halo_transition.rst
+++ b/doc/source/analyzing/analysis_modules/halo_transition.rst
@@ -3,7 +3,7 @@
Transitioning From yt-2 to yt-3
===============================
-If you're used to halo analysis in yt-2.x, heres a guide to
+If you're used to halo analysis in yt-2.x, here's a guide to
how to update your analysis pipeline to take advantage of
the new halo catalog infrastructure. If you're starting
from scratch, see :ref:`halo_catalog`.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/analysis_modules/light_ray_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_ray_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_ray_generator.rst
@@ -205,8 +205,8 @@
option should be used with caution as it will lead to the creation
of disconnected ray segments within a single dataset.
-I want a continous trajectory over the entire ray.
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+I want a continuous trajectory over the entire ray.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Set the ``minimum_coherent_box_fraction`` keyword argument to a very
large number, like infinity (`numpy.inf`).
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/filtering.rst
--- a/doc/source/analyzing/filtering.rst
+++ b/doc/source/analyzing/filtering.rst
@@ -157,7 +157,7 @@
This is equivalent to our use of the ``particle_filter`` decorator above. The
choice to use either the ``particle_filter`` decorator or the
-``add_particle_fitler`` function is a purely stylistic choice.
+``add_particle_filter`` function is a purely stylistic choice.
.. notebook:: particle_filter.ipynb
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -261,7 +261,7 @@
for sto, dataset in dataset_series.piter(storage=my_dictionary):
<process>
sto.result = <some information processed for this dataset>
- sto.result_id = <some identfier for this dataset>
+ sto.result_id = <some identifier for this dataset>
print(my_dictionary)
@@ -491,7 +491,7 @@
++++++++++++++++++++
The various types of analysis that utilize domain decomposition use them in
-different enough ways that they are be discussed separately.
+different enough ways that they are discussed separately.
**Halo-Finding**
@@ -605,7 +605,7 @@
16 processors assigned to each output in the time series.
#. Creating a big cube that will hold our results for this set of processors.
Note that this will be only for each output considered by this processor,
- and this cube will not necessarily be filled in in every cell.
+ and this cube will not necessarily be filled in every cell.
#. For each output, distribute the grids to each of the sixteen processors
working on that output. Each of these takes the max of the ionized
redshift in their zone versus the accumulation cube.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/analyzing/units/1)_Symbolic_Units.ipynb
--- a/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
+++ b/doc/source/analyzing/units/1)_Symbolic_Units.ipynb
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Most people will interact with the new unit system using `YTArray` and `YTQuantity`. These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`. `YTArray` is intended to store array data, while `YTQuantitity` is intended to store scalars in a particular unit system.\n",
+ "Most people will interact with the new unit system using `YTArray` and `YTQuantity`. These are both subclasses of numpy's fast array type, `ndarray`, and can be used interchangeably with other NumPy arrays. These new classes make use of the unit system to append unit metadata to the underlying `ndarray`. `YTArray` is intended to store array data, while `YTQuantity` is intended to store scalars in a particular unit system.\n",
"\n",
"There are two ways to create arrays and quantities. The first is to explicitly create it by calling the class constructor and supplying a unit string:"
]
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/cookbook/Halo_Analysis.ipynb
--- a/doc/source/cookbook/Halo_Analysis.ipynb
+++ b/doc/source/cookbook/Halo_Analysis.ipynb
@@ -310,7 +310,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- " Just as profiles are saved seperately throught the `save_profiles` callback they also must be loaded separately using the `load_profiles` callback."
+ " Just as profiles are saved separately through the `save_profiles` callback they also must be loaded separately using the `load_profiles` callback."
]
},
{
@@ -329,7 +329,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Calling `load` is the equivalent of calling `create` earlier, but defaults to to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
+ "Calling `load` is the equivalent of calling `create` earlier, but defaults to not saving new information. This means that the callback to `load_profiles` is not run until we call `load` here."
]
},
{
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -36,8 +36,8 @@
.. yt_cookbook:: multiplot_2x2_time_series.py
-Mutiple Slice Multipanel
-~~~~~~~~~~~~~~~~~~~~~~~~
+Multiple Slice Multipanel
+~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to create a multipanel plot of slices along the coordinate
axes. To focus on what's happening in the x-y plane, we make an additional
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/cookbook/notebook_tutorial.rst
--- a/doc/source/cookbook/notebook_tutorial.rst
+++ b/doc/source/cookbook/notebook_tutorial.rst
@@ -3,7 +3,7 @@
Notebook Tutorial
-----------------
-The IPython notebook is a powerful system for literate codoing - a style of
+The IPython notebook is a powerful system for literate coding - a style of
writing code that embeds input, output, and explanatory text into one document.
yt has deep integration with the IPython notebook, explained in-depth in the
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/cookbook/streamlines.py
--- a/doc/source/cookbook/streamlines.py
+++ b/doc/source/cookbook/streamlines.py
@@ -24,7 +24,7 @@
length=1.0*Mpc, get_magnitude=True)
streamlines.integrate_through_volume()
-# Create a 3D plot, trace the streamlines throught the 3D volume of the plot
+# Create a 3D plot, trace the streamlines through the 3D volume of the plot
fig=pl.figure()
ax = Axes3D(fig)
for stream in streamlines.streamlines:
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/developing/releasing.rst
--- a/doc/source/developing/releasing.rst
+++ b/doc/source/developing/releasing.rst
@@ -20,7 +20,7 @@
These releases happen when new features are deemed ready to be merged into the
``stable`` branch and should not happen on a regular schedule. Minor releases
can also include fixes for bugs if the fix is determined to be too invasive
- for a bugfix release. Minor releases should *not* inlucde
+ for a bugfix release. Minor releases should *not* include
backwards-incompatible changes and should not change APIs. If an API change
is deemed to be necessary, the old API should continue to function but might
trigger deprecation warnings. Minor releases should happen by merging the
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -543,7 +543,7 @@
it is considered best practice to first submit a pull request adding the tests WITHOUT incrementing
the version number. Then, allow the tests to run (resulting in "no old answer" errors for the missing
answers). If no other failures are present, you can then increment the version number to regenerate
-the answers. This way, we can avoid accidently covering up test breakages.
+the answers. This way, we can avoid accidentally covering up test breakages.
Adding New Answer Tests
~~~~~~~~~~~~~~~~~~~~~~~
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -49,7 +49,7 @@
file containing particle time steps is not loaded by yt.
You also have the option of gridding particles and assigning them onto the
-meshes. This process is in beta, and for the time being it's probably best to
+meshes. This process is in beta, and for the time being, it's probably best to
leave ``do_grid_particles=False`` as the default.
To speed up the loading of an ART file, you have a few options. You can turn
@@ -1420,7 +1420,7 @@
will define the (x,y,z) coordinates of the hexahedral cells and
information about that cell's neighbors such that the cell corners
-will be a grid of points constructed as the Cartesion product of
+will be a grid of points constructed as the Cartesian product of
xgrid, ygrid, and zgrid.
Then, to load your data, which should be defined on the interiors of
@@ -1556,7 +1556,7 @@
You can also load generic particle data using the same ``stream`` functionality
discussed above to load in-memory grid data. For example, if your particle
-positions and masses are stored in ``positions`` and ``massess``, a
+positions and masses are stored in ``positions`` and ``masses``, a
vertically-stacked array of particle x,y, and z positions, and a 1D array of
particle masses respectively, you would load them like this:
@@ -1647,7 +1647,7 @@
AHF halo catalogs are loaded by providing the path to the .parameter files.
The corresponding .log and .AHF_halos files must exist for data loading to
-succeed. The field type for all fields is "halos". Some fields of note avaible
+succeed. The field type for all fields is "halos". Some fields of note available
from AHF are:
+----------------+---------------------------+
@@ -1913,7 +1913,7 @@
---------
`PyNE <http://pyne.io/>`_ is an open source nuclear engineering toolkit
-maintained by the PyNE developement team (pyne-dev at googlegroups.com).
+maintained by the PyNE development team (pyne-dev at googlegroups.com).
PyNE meshes utilize the Mesh-Oriented datABase
`(MOAB) <http://trac.mcs.anl.gov/projects/ITAPS/wiki/MOAB/>`_ and can be
Cartesian or tetrahedral. In addition to field data, pyne meshes store pyne
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/examining/low_level_inspection.rst
--- a/doc/source/examining/low_level_inspection.rst
+++ b/doc/source/examining/low_level_inspection.rst
@@ -264,6 +264,6 @@
will only have a particle named ``io``.
Finally, one can see the number of each particle type by inspecting
-``ds.particle_type_counts``. This will be a dictionary mappying the names of
+``ds.particle_type_counts``. This will be a dictionary mapping the names of
particle types in ``ds.particle_types_raw`` to the number of each particle type
in a simulation output.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -20,7 +20,7 @@
python packages, or are working on a supercomputer or cluster computer, you
will probably want to use the bash all-in-one installation script. This
creates a python environment using the `miniconda python
- distrubtion <http://conda.pydata.org/miniconda.html>`_ and the
+ distribution <http://conda.pydata.org/miniconda.html>`_ and the
`conda <http://conda.pydata.org/docs/>`_ package manager inside of a single
folder in your home directory. See :ref:`install-script` for more details.
@@ -107,7 +107,7 @@
$ curl -OL https://raw.githubusercontent.com/yt-project/yt/master/doc/install_script.sh
By default, the bash install script will create a python environment based on
-the `miniconda python distrubtion <http://conda.pydata.org/miniconda.html>`_,
+the `miniconda python distribution <http://conda.pydata.org/miniconda.html>`_,
and will install yt's dependencies using the `conda
<http://conda.pydata.org/docs/>`_ package manager. To avoid needing a
compilation environment to run the install script, yt itself will also be
@@ -300,7 +300,7 @@
Installing Support for the Rockstar Halo Finder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The easiest way to set rockstar up in a conda-based python envrionment is to run
+The easiest way to set rockstar up in a conda-based python environment is to run
the install script with ``INST_ROCKSTAR=1``.
If you want to do this manually, you will need to follow these
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/intro/index.rst
--- a/doc/source/intro/index.rst
+++ b/doc/source/intro/index.rst
@@ -111,7 +111,7 @@
<light-cone-generator>`, :ref:`cosmological light rays <light-ray-generator>`,
:ref:`synthetic absorption spectra <absorption_spectrum>`, :ref:`spectral
emission distributions (SEDS) <synthetic_spectrum>`, :ref:`star formation
-rates <star_analysis>`, :ref:`synthetic x-ray obserservations
+rates <star_analysis>`, :ref:`synthetic x-ray observations
<xray_emission_fields>`, and :ref:`synthetic sunyaev-zeldovich effect
observations <sunyaev-zeldovich>`), :ref:`two-point correlations functions
<two_point_functions>`, :ref:`identification of overdensities in arbitrary
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/quickstart/2)_Data_Inspection.ipynb
--- a/doc/source/quickstart/2)_Data_Inspection.ipynb
+++ b/doc/source/quickstart/2)_Data_Inspection.ipynb
@@ -174,7 +174,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For this dataset, we see that there are two particle types defined, (`io` and `all`), but that only one of these particle types in in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n",
+ "For this dataset, we see that there are two particle types defined, (`io` and `all`), but that only one of these particle types in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.\n",
"\n",
"We can also see that there are a bit more than 1.1 million particles in this simulation. Only particle types in `ds.particle_types_raw` will appear in the `ds.particle_type_counts` dictionary."
]
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/quickstart/index.rst
--- a/doc/source/quickstart/index.rst
+++ b/doc/source/quickstart/index.rst
@@ -4,7 +4,7 @@
=============
The quickstart is a series of worked examples of how to use much of the
-funtionality of yt. These are simple, short introductions to give you a taste
+functionality of yt. These are simple, short introductions to give you a taste
of what the code can do and are not meant to be detailed walkthroughs.
There are two ways in which you can go through the quickstart: interactively and
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/reference/changelog.rst
--- a/doc/source/reference/changelog.rst
+++ b/doc/source/reference/changelog.rst
@@ -364,8 +364,8 @@
* Fixed bugs related to compatibility issues with newer versions of numpy
* Added the ability to export data objects to a Pandas dataframe
* Added support for the fabs ufunc to YTArray
-* Fixed two licensings issues
-* Fixed a number of bugs related to Windows compatability.
+* Fixed two licensing issues
+* Fixed a number of bugs related to Windows compatibility.
* We now avoid hard-to-decipher tracebacks when loading empty files or
directories
* Fixed a bug related to ART star particle creation time field
@@ -441,7 +441,7 @@
* Made PlotWindow show/hide helpers for axes and colorbar return self
* Made Profile objects store field metadata.
* Ensured GDF unit names are strings
-* Tought off_axis_projection about its resolution keyword.
+* Taught off_axis_projection about its resolution keyword.
* Reintroduced sanitize_width for polar/cyl coordinates.
* We now fail early when load_uniform_grid is passed data with an incorrect shape
* Replaced progress bar with tqdm
@@ -472,7 +472,7 @@
* Patched ParticlePlot to work with filtered particle fields.
* Fixed a couple corner cases in gadget_fof frontend
* We now properly normalise all normal vectors in functions that take a normal
- vector (for e.g get_sph_theta)
+ vector (for e.g. get_sph_theta)
* Fixed a bug where the transfer function features were not always getting
cleared properly.
* Made the Chombo frontend is_valid method smarter.
@@ -489,7 +489,7 @@
* Ensured that mpi operations retain ImageArray type instead of downgrading to
YTArray parent class
* Added a call to _setup_plots in the custom colorbar tickmark example
-* Fixed two minor bugs in save_annocated
+* Fixed two minor bugs in save_annotated
* Added ability to specify that DatasetSeries is not a mixed data type
* Fixed a memory leak in ARTIO
* Fixed copy/paste error in to_frb method.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/visualizing/callbacks.rst
--- a/doc/source/visualizing/callbacks.rst
+++ b/doc/source/visualizing/callbacks.rst
@@ -740,7 +740,7 @@
Adds a line representing the projected path of a ray across the plot. The
ray can be either a
:class:`~yt.data_objects.selection_data_containers.YTOrthoRay`,
- :class:`~yt.data_objects.selection_data_contaners.YTRay`, or a
+ :class:`~yt.data_objects.selection_data_containers.YTRay`, or a
:class:`~yt.analysis_modules.cosmological_observation.light_ray.light_ray.LightRay`
object. annotate_ray() will properly account for periodic rays across the
volume.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/visualizing/mapserver.rst
--- a/doc/source/visualizing/mapserver.rst
+++ b/doc/source/visualizing/mapserver.rst
@@ -22,7 +22,7 @@
field, projection, weight and axis can all be specified on the command line.
When you do this, it will spawn a micro-webserver on your localhost, and output
-the URL to connect to to standard output. You can connect to it (or create an
+the URL to connect to standard output. You can connect to it (or create an
SSH tunnel to connect to it) and explore your data. Double-clicking zooms, and
dragging drags.
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/visualizing/plots.rst
--- a/doc/source/visualizing/plots.rst
+++ b/doc/source/visualizing/plots.rst
@@ -376,8 +376,7 @@
Here, ``W`` is the width of the projection in the x, y, *and* z
directions.
-One can also generate generate annotated off axis projections
-using
+One can also generate annotated off axis projections using
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`. These
plots can be created in much the same way as an
``OffAxisSlicePlot``, requiring only an open dataset, a direction
@@ -403,7 +402,7 @@
------------------------
Unstructured Mesh datasets can be sliced using the same syntax as above.
-Here is an example script using a publically available MOOSE dataset:
+Here is an example script using a publicly available MOOSE dataset:
.. python-script::
@@ -647,7 +646,7 @@
~~~~~
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_font` allows font
-costomization.
+customization.
.. python-script::
@@ -691,7 +690,7 @@
slc.save()
Specifically, a field containing both positive and negative values can be plotted
-with symlog scale, by seting the boolean to be ``True`` and providing an extra
+with symlog scale, by setting the boolean to be ``True`` and providing an extra
parameter ``linthresh``. In the region around zero (when the log scale approaches
to infinity), the linear scale will be applied to the region ``(-linthresh, linthresh)``
and stretched relative to the logarithmic range. You can also plot a positive field
@@ -889,7 +888,7 @@
Note that because we have specified the weighting field to be ``None``, the
profile plot will display the accumulated cell mass as a function of temperature
rather than the average. Also note the use of a ``(value, unit)`` tuple. These
-can be used interchangably with units explicitly imported from ``yt.units`` when
+can be used interchangeably with units explicitly imported from ``yt.units`` when
creating yt plots.
We can also accumulate along the bin field of a ``ProfilePlot`` (the bin field
@@ -1110,7 +1109,7 @@
If working in a Jupyter Notebook, ``LinePlot`` also has the ``show()`` method.
-You can can add a legend to a 1D sampling plot. The legend process takes two steps:
+You can add a legend to a 1D sampling plot. The legend process takes two steps:
1. When instantiating the ``LinePlot``, pass a dictionary of
labels with keys corresponding to the field names
@@ -1426,7 +1425,7 @@
p.save()
Finally, with 1D and 2D Profiles, you can create a :class:`~yt.data_objects.profiles.ParticleProfile`
-object seperately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it
+object separately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it
to create a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` object using the
:meth:`~yt.visualization.particle_plots.ParticlePhasePlot.from_profile` method. In this example,
we have also used the ``weight_field`` argument to compute the average ``particle_mass`` in each
@@ -1645,7 +1644,7 @@
Publication-ready Figures
-------------------------
-While the routines above give a convienent method to inspect and
+While the routines above give a convenient method to inspect and
visualize your data, publishers often require figures to be in PDF or
EPS format. While the matplotlib supports vector graphics and image
compression in PDF formats, it does not support compression in EPS
diff -r 1d3486a454521d5062615fbb2ec42d86a8d18a9d -r ce4b774035140f64e6cc92907a9bb2c736b250b7 doc/source/visualizing/streamlines.rst
--- a/doc/source/visualizing/streamlines.rst
+++ b/doc/source/visualizing/streamlines.rst
@@ -45,7 +45,7 @@
interrupt the integration and locate a new brick at the
intermediate position.
-#. The set set of streamline positions are stored in the
+#. The set of streamline positions are stored in the
:class:`~yt.visualization.streamlines.Streamlines` object.
Example Script
@@ -79,7 +79,7 @@
length=1.0*Mpc, get_magnitude=True)
streamlines.integrate_through_volume()
- # Create a 3D plot, trace the streamlines throught the 3D volume of the plot
+ # Create a 3D plot, trace the streamlines through the 3D volume of the plot
fig=pl.figure()
ax = Axes3D(fig)
for stream in streamlines.streamlines:
@@ -95,7 +95,7 @@
.. note::
- This functionality has not been implemented yet in in the 3.x series of
+ This functionality has not been implemented yet in the 3.x series of
yt. If you are interested in working on this and have questions, please
let us know on the yt-dev mailing list.
Repository URL: https://bitbucket.org/yt_analysis/yt/