[yt-svn] commit/yt: MatthewTurk: Merged in jzuhone/yt-3.x/yt-3.0 (pull request #905)
commits-noreply at bitbucket.org
Tue May 20 05:06:57 PDT 2014
1 new commit in yt:
https://bitbucket.org/yt_analysis/yt/commits/8ce1487c06c6/
Changeset: 8ce1487c06c6
Branch: yt-3.0
User: MatthewTurk
Date: 2014-05-20 14:06:48
Summary: Merged in jzuhone/yt-3.x/yt-3.0 (pull request #905)
FITS frontend refactor, and some docs
Affected #: 15 files
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -36,20 +36,20 @@
.. code:: python
from yt.mods import *
- from yt.analysis_modules.api import *
+ from yt.analysis_modules.photon_simulator.api import *
from yt.utilities.cosmology import Cosmology
We're going to load up an Athena dataset of a galaxy cluster core:
.. code:: python
- pf = load("MHDSloshing/virgo_low_res.0054.vtk",
- parameters={"TimeUnits":3.1557e13,
- "LengthUnits":3.0856e24,
- "DensityUnits":6.770424595218825e-27})
+    ds = load("MHDSloshing/virgo_low_res.0054.vtk",
+ parameters={"time_unit":(1.0,"Myr"),
+ "length_unit":(1.0,"Mpc"),
+ "mass_unit":(1.0e14,"Msun")})
First, to get a sense of what the resulting image will look like, let's
-make a new ``yt`` field called ``"DensitySquared"``, since the X-ray
+make a new ``yt`` field called ``"density_squared"``, since the X-ray
emission is proportional to :math:`\rho^2`, and a weak function of
temperature and metallicity.
@@ -57,14 +57,14 @@
def _density_squared(field, data):
return data["density"]**2
- add_field("DensitySquared", function=_density_squared)
+ add_field("density_squared", function=_density_squared)
Then we'll project this field along the z-axis.
.. code:: python
- prj = ProjectionPlot(pf, "z", ["DensitySquared"], width=(500., "kpc"))
- prj.set_cmap("DensitySquared", "gray_r")
+ prj = ProjectionPlot(ds, "z", ["density_squared"], width=(500., "kpc"))
+ prj.set_cmap("density_squared", "gray_r")
prj.show()
.. image:: _images/dsquared.png
@@ -89,7 +89,7 @@
.. code:: python
- sp = pf.sphere("c", (250., "kpc"))
+ sp = ds.sphere("c", (250., "kpc"))
This will serve as our ``data_source`` that we will use later. Next, we
need to create the ``SpectralModel`` instance that will determine how
@@ -258,11 +258,6 @@
events = photons.project_photons(L, exp_time_new=2.0e5, redshift_new=0.07, absorb_model=abs_model,
sky_center=(187.5,12.333), responses=[ARF,RMF])
-.. parsed-literal::
-
- WARNING:yt:This routine has not been tested to work with all RMFs. YMMV.
-
-
Also, the optional keyword ``psf_sigma`` specifies a Gaussian standard
deviation to scatter the photon sky positions around with, providing a
crude representation of a PSF.
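A minimal numpy sketch of that kind of scatter (the array values, seed, and sigma are illustrative, not taken from the yt source):

```python
import numpy as np

# Hedged sketch: emulate a crude PSF by scattering photon sky positions
# with a Gaussian of standard deviation psf_sigma (in degrees).
rng = np.random.RandomState(24)
psf_sigma = 0.0005  # degrees; illustrative value

xsky = np.array([187.5178, 187.4888, 187.5073])
ysky = np.array([12.3352, 12.3544, 12.3275])

# Each photon position is shifted independently in x and y.
xsky_obs = xsky + rng.normal(scale=psf_sigma, size=xsky.size)
ysky_obs = ysky + rng.normal(scale=psf_sigma, size=ysky.size)
```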
@@ -282,17 +277,17 @@
.. code:: python
- {'eobs': array([ 0.32086522, 0.32271389, 0.32562708, ..., 8.90600621,
- 9.73534237, 10.21614256]),
- 'xsky': array([ 187.5177707 , 187.4887825 , 187.50733609, ..., 187.5059345 ,
- 187.49897546, 187.47307048]),
- 'ysky': array([ 12.33519996, 12.3544496 , 12.32750903, ..., 12.34907707,
- 12.33327653, 12.32955225]),
- 'ypix': array([ 133.85374195, 180.68583074, 115.14110561, ..., 167.61447493,
- 129.17278711, 120.11508562]),
+ {'eobs': YTArray([ 0.32086522, 0.32271389, 0.32562708, ..., 8.90600621,
+ 9.73534237, 10.21614256]) keV,
+ 'xsky': YTArray([ 187.5177707 , 187.4887825 , 187.50733609, ..., 187.5059345 ,
+ 187.49897546, 187.47307048]) degree,
+ 'ysky': YTArray([ 12.33519996, 12.3544496 , 12.32750903, ..., 12.34907707,
+ 12.33327653, 12.32955225]) degree,
+ 'ypix': YTArray([ 133.85374195, 180.68583074, 115.14110561, ..., 167.61447493,
+ 129.17278711, 120.11508562]) (dimensionless),
'PI': array([ 27, 15, 25, ..., 609, 611, 672]),
- 'xpix': array([ 86.26331108, 155.15934197, 111.06337043, ..., 114.39586907,
- 130.93509652, 192.50639633])}
+ 'xpix': YTArray([ 86.26331108, 155.15934197, 111.06337043, ..., 114.39586907,
+ 130.93509652, 192.50639633]) (dimensionless)}
We can bin up the events into an image and save it to a FITS file. The
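As a rough sketch of what binning events into a counts image involves (yt's event machinery does this internally; the names and values here are assumptions for illustration):

```python
import numpy as np

# Sketch: bin event pixel coordinates into a 2D counts image.
xpix = np.array([86.26, 155.16, 111.06, 114.40, 130.94])
ypix = np.array([133.85, 180.69, 115.14, 167.61, 129.17])

nx = ny = 64
image, xedges, yedges = np.histogram2d(xpix, ypix,
                                       bins=(nx, ny),
                                       range=[[0.5, 256.5], [0.5, 256.5]])
# Every event falls inside the range, so the image sums to the event count.
```

The resulting array could then be written out with astropy, e.g. ``PrimaryHDU(image).writeto(...)``.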
@@ -436,7 +431,7 @@
bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]])
- pf = load_uniform_grid(data, ddims, 2*R*cm_per_kpc, bbox=bbox)
+ ds = load_uniform_grid(data, ddims, 2*R*cm_per_kpc, bbox=bbox)
where for simplicity we have set the velocities to zero, though we
could have created a realistic velocity field as well. Now, we
@@ -445,7 +440,7 @@
.. code:: python
- sphere = pf.sphere(pf.domain_center, 1.0/pf["mpc"])
+    sphere = ds.sphere(ds.domain_center, (1.0,"Mpc"))
A = 6000.
exp_time = 2.0e5
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d doc/source/cookbook/fits_radio_cubes.ipynb
--- a/doc/source/cookbook/fits_radio_cubes.ipynb
+++ b/doc/source/cookbook/fits_radio_cubes.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:dbc41f6f836cdeb88a549d85e389d6e4e43d163d8c4c267baea8cce0ebdbf441"
+ "signature": "sha256:af10462a2a656015309ffc74e415bade3910ba7c7ccca5e15cfa98eca7ccadf4"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -23,7 +23,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This notebook demonstrates some of the capabilties of `yt` on some FITS \"position-position-velocity\" cubes of radio data. "
+    "This notebook demonstrates some of the capabilities of `yt` on some FITS \"position-position-spectrum\" cubes of radio data. "
]
},
{
@@ -82,7 +82,7 @@
"input": [
"from yt.frontends.fits.misc import PlotWindowWCS\n",
"wcs_slc = PlotWindowWCS(slc)\n",
- "wcs_slc.show()"
+ "wcs_slc[\"intensity\"]"
],
"language": "python",
"metadata": {},
@@ -109,14 +109,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. First, we'll check what the value along the velocity axis at the domain center is, as well as the range of possible values. This is the third value of each array. "
+ "We can also take slices of this dataset at a few different values along the \"z\" axis (corresponding to the velocity), so let's try a few. To pick specific velocity values for slices, we will need to use the dataset's `spec2pixel` method to determine which pixels to slice on:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "print ds.domain_left_edge[2], ds.domain_center[2], ds.domain_right_edge[2]"
+ "import astropy.units as u\n",
+ "new_center = ds.domain_center\n",
+ "new_center[2] = ds.spec2pixel(-250000.*u.m/u.s)"
],
"language": "python",
"metadata": {},
@@ -126,15 +128,33 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, we'll choose a few values for the velocity within this range:"
+ "Now we can use this new center to create a new slice:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "new_center = ds.domain_center \n",
- "new_center[2] = -250000.\n",
+ "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
+ "slc.show()"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can do this a few more times for different values of the velocity:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "new_center[2] = ds.spec2pixel(-100000.*u.m/u.s)\n",
+ "print new_center[2]\n",
"slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
"slc.show()"
],
@@ -146,21 +166,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "new_center = ds.domain_center \n",
- "new_center[2] = -100000.\n",
- "slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
- "slc.show()"
- ],
- "language": "python",
- "metadata": {},
- "outputs": []
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "new_center = ds.domain_center \n",
- "new_center[2] = -150000.\n",
+ "new_center[2] = ds.spec2pixel(-150000.*u.m/u.s)\n",
"slc = yt.SlicePlot(ds, \"z\", [\"intensity\"], center=new_center, origin=\"native\")\n",
"slc.show()"
],
@@ -179,14 +185,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also make a projection of all the emission along the line of sight:"
+    "We can also make a projection of all the emission along the line of sight. Since we're not doing an integration along a path length, we need to specify `proj_style = \"sum\"`:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], origin=\"native\", proj_style=\"sum\")\n",
+ "prj = yt.ProjectionPlot(ds, \"z\", [\"intensity\"], proj_style=\"sum\", origin=\"native\")\n",
"prj.show()"
],
"language": "python",
@@ -197,13 +203,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Since we're not doing an integration along a path length, we needed to specify `proj_style = \"sum\"`. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
"We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:"
]
},
@@ -211,8 +210,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = yt.SlicePlot(ds, \"x\", [\"intensity\"], origin=\"native\", \n",
- " aspect=\"auto\", window_size=(8.0,8.0))\n",
+ "slc = yt.SlicePlot(ds, \"x\", [\"intensity\"], origin=\"native\")\n",
"slc.show()"
],
"language": "python",
@@ -223,8 +221,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "slc = yt.SlicePlot(ds, \"y\", [\"intensity\"], origin=\"native\", \n",
- " aspect=\"auto\", window_size=(8.0,8.0))\n",
+ "slc = yt.SlicePlot(ds, \"y\", [\"intensity\"], origin=\"native\")\n",
"slc.show()"
],
"language": "python",
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d doc/source/developing/developing.rst
--- a/doc/source/developing/developing.rst
+++ b/doc/source/developing/developing.rst
@@ -111,6 +111,8 @@
out with them. In :ref:`code-style-guide` there is a list of handy tips for
how to structure and write your code.
+.. _mercurial-with-yt:
+
How to Use Mercurial with yt
++++++++++++++++++++++++++++
@@ -135,6 +137,8 @@
* If you run into any troubles, stop by IRC (see :ref:`irc`) or the mailing
list.
+.. _building-yt:
+
Building yt
+++++++++++
@@ -148,19 +152,31 @@
.. code-block:: bash
- python2.7 setup.py develop
+ $ python2.7 setup.py develop
If you have previously "installed" via ``setup.py install`` you have to
re-install:
.. code-block:: bash
- python2.7 setup.py install
+ $ python2.7 setup.py install
-Only one of these two options is needed. yt may require you to specify the
-location to libpng and hdf5. This can be done through files named ``png.cfg``
-and ``hdf5.cfg``. If you are using the installation script, these will already
-exist.
+Only one of these two options is needed.
+
+If you plan to develop yt on Windows, we recommend using the `MinGW <http://www.mingw.org/>`_ gcc
+compiler, which can be installed with the `Anaconda Python
+Distribution <https://store.continuum.io/cshop/anaconda/>`_. The syntax for the
+setup command is also slightly different; you must type:
+
+.. code-block:: bash
+
+ $ python2.7 setup.py build --compiler=mingw32 develop
+
+or
+
+.. code-block:: bash
+
+ $ python2.7 setup.py build --compiler=mingw32 install
Making and Sharing Changes
++++++++++++++++++++++++++
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -676,8 +676,14 @@
Additional Options
++++++++++++++++++
+The following are additional options that may be passed to the ``load`` command when analyzing
+FITS data:
+
+``nan_mask``
+~~~~~~~~~~~~
+
FITS image data may include ``NaNs``. If you wish to mask this data out,
-you may supply a ``nan_mask`` parameter to ``load``, which may either be a
+you may supply a ``nan_mask`` parameter, which may either be a
single floating-point number (applies to all fields) or a Python dictionary
containing different mask values for different fields:
@@ -689,9 +695,27 @@
# passing a dict
ds = load("m33_hi.fits", nan_mask={"intensity":-1.0,"temperature":0.0})
+``suppress_astropy_warnings``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Generally, AstroPy may generate a lot of warnings about individual FITS
files, many of which you may want to ignore. If you want to see these
-warnings, set ``suppress_astropy_warnings = False`` in the call to ``load``.
+warnings, set ``suppress_astropy_warnings = False``.
+
+``z_axis_decomp``
+~~~~~~~~~~~~~~~~~
+
+For some applications, decomposing 3D FITS data into grids that span the x-y plane with short
+strides along the z-axis may result in a significant improvement in I/O speed. To enable this feature, set ``z_axis_decomp=True``.
+
+``spectral_factor``
+~~~~~~~~~~~~~~~~~~~
+
+Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt sets the pixel
+scale as the ``code_length``, certain visualizations (such as volume renderings) may look extended
+or distended in ways that are undesirable. To adjust the width in ``code_length`` of the spectral
+axis, set ``spectral_factor`` equal to a constant which gives the desired scaling, or set it to
+``"auto"`` to make the width the same as the largest axis in the sky plane.
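The ``"auto"`` behavior amounts to a simple ratio of dimensions; a plain-numpy sketch with made-up dimensions:

```python
import numpy as np

# Sketch of the "auto" spectral_factor computation: make the spectral
# axis span the same code_length width as the largest sky-plane axis.
domain_dimensions = np.array([512, 512, 64])  # (lon, lat, spec); illustrative
spectral_factor = float(domain_dimensions[:2].max()) / domain_dimensions[2]
```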
Miscellaneous Tools for Use with FITS Data
++++++++++++++++++++++++++++++++++++++++++
@@ -703,7 +727,6 @@
from yt.frontends.fits.misc import setup_counts_fields, PlotWindowWCS, ds9_region
-
``setup_counts_fields``
~~~~~~~~~~~~~~~~~~~~~~~
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d doc/source/installing.rst
--- a/doc/source/installing.rst
+++ b/doc/source/installing.rst
@@ -14,8 +14,7 @@
be time-consuming, yt provides an installation script which downloads and builds
a fully-isolated Python + NumPy + Matplotlib + HDF5 + Mercurial installation.
yt supports Linux and OSX deployment, with the possibility of deployment on
-other Unix-like systems (XSEDE resources, clusters, etc.). Windows is not
-supported.
+other Unix-like systems (XSEDE resources, clusters, etc.).
Since the install is fully-isolated, if you get tired of having yt on your
system, you can just delete its directory, and yt and all of its dependencies
@@ -83,14 +82,73 @@
will also need to set ``LD_LIBRARY_PATH`` and ``PYTHONPATH`` to contain
``$YT_DEST/lib`` and ``$YT_DEST/python2.7/site-packages``, respectively.
+.. _testing-installation:
+
+Testing Your Installation
+-------------------------
+
+To test to make sure everything is installed properly, try running yt at
+the command line:
+
+.. code-block:: bash
+
+ $ yt --help
+
+If this works, you should get a list of the various command-line options for
+yt, which means you have successfully installed yt. Congratulations!
+
+If you get an error, follow the instructions it gives you to debug the problem.
+Do not hesitate to :ref:`contact us <asking-for-help>` so we can help you
+figure it out.
+
+If you like, this might be a good time :ref:`to run the test suite <testing>`.
+
+.. _updating-yt:
+
+Updating yt and its dependencies
+--------------------------------
+
+With many active developers, code development sometimes occurs at a furious
+pace in yt. To make sure you're using the latest version of the code, run
+this command at a command-line:
+
+.. code-block:: bash
+
+ $ yt update
+
+Additionally, if you want to make sure you have the latest dependencies
+associated with yt and update the codebase simultaneously, type this:
+
+.. code-block:: bash
+
+ $ yt update --all
+
+.. _removing-yt:
+
+Removing yt and its dependencies
+--------------------------------
+
+Because yt and its dependencies are installed in an isolated directory when
+you use the script installer, you can easily remove yt and all of its
+dependencies cleanly. Simply remove the install directory and its
+subdirectories and you're done. If you *really* had problems with the
+code, this is a last resort: remove it and then fully
+:ref:`re-install <installing-yt>` from the install script again.
+
+.. _alternative-installation:
+
Alternative Installation Methods
--------------------------------
+.. _pip-installation:
+
+Installing yt Using pip or from Source
+++++++++++++++++++++++++++++++++++++++
+
If you want to forego the use of the install script, you need to make sure you
have yt's dependencies installed on your system. These include: a C compiler,
-``HDF5``, ``Freetype``, ``libpng``, ``python``, ``cython``, ``NumPy``, and
-``matplotlib``. From here, you can use ``pip`` (which comes with ``Python``) to
-install yt as:
+``HDF5``, ``python``, ``cython``, ``NumPy``, ``matplotlib``, and ``h5py``. From here,
+you can use ``pip`` (which comes with ``Python``) to install yt as:
.. code-block:: bash
@@ -110,67 +168,46 @@
It will install yt into ``$HOME/.local/lib64/python2.7/site-packages``.
Please refer to ``setuptools`` documentation for the additional options.
-Provided that the required dependencies are in a predictable location, yt should
-be able to find them automatically. However, you can manually specify prefix used
-for installation of ``HDF5``, ``Freetype`` and ``libpng`` by using ``hdf5.cfg``,
-``freetype.cfg``, ``png.cfg`` or setting ``HDF5_DIR``, ``FTYPE_DIR``, ``PNG_DIR``
-environmental variables respectively, e.g.
+If you choose this installation method, you do not need to run the activation
+script as it is unnecessary.
+
+.. _anaconda-installation:
+
+Installing yt Using Anaconda
+++++++++++++++++++++++++++++
+
+Perhaps the quickest way to get yt up and running is to install it using the `Anaconda Python
+Distribution <https://store.continuum.io/cshop/anaconda/>`_, which will provide you with an
+easy-to-use environment for installing Python packages. To install a bare-bones Python
+installation with yt, first visit http://repo.continuum.io/miniconda/ and download a recent
+version of the ``Miniconda-x.y.z`` script (corresponding to Python 2.7) for your platform and
+system architecture. Next, run the script, e.g.:
.. code-block:: bash
- $ echo '/usr/local' > hdf5.cfg
- $ export FTYPE_DIR=/opt/freetype
+ $ bash Miniconda-3.3.0-Linux-x86_64.sh
-If you choose this installation method, you do not need to run the activation
-script as it is unnecessary.
-
-.. _testing-installation:
-
-Testing Your Installation
--------------------------
-
-To test to make sure everything is installed properly, try running yt at
-the command line:
+Make sure that the Anaconda ``bin`` directory is in your path, and then issue:
.. code-block:: bash
- $ yt --help
+ $ conda install yt
-If this works, you should get a list of the various command-line options for
-yt, which means you have successfully installed yt. Congratulations!
+which will install yt along with all of its dependencies.
-If you get an error, follow the instructions it gives you to debug the problem.
-Do not hesitate to :ref:`contact us <asking-for-help>` so we can help you
-figure it out.
+.. _windows-installation:
-.. _updating-yt:
+Installing yt on Windows
+++++++++++++++++++++++++
-Updating yt and its dependencies
---------------------------------
+Installation on Microsoft Windows is only supported for Windows XP Service Pack 3 and
+higher (both 32-bit and 64-bit) using Anaconda.
-With many active developers, code development sometimes occurs at a furious
-pace in yt. To make sure you're using the latest version of the code, run
-this command at a command-line:
+Keeping yt Updated via Mercurial
+++++++++++++++++++++++++++++++++
-.. code-block:: bash
+If you want to maintain your yt installation via updates straight from the Bitbucket repository,
+or if you want to do some development on your own, we suggest you check out some of the
+:ref:`development docs <contributing-code>`, especially the sections on :ref:`Mercurial
+<mercurial-with-yt>` and :ref:`building yt from source <building-yt>`.
- $ yt update
-
-Additionally, if you want to make sure you have the latest dependencies
-associated with yt and update the codebase simultaneously, type this:
-
-.. code-block:: bash
-
- $ yt update --all
-
-.. _removing-yt:
-
-Removing yt and its dependencies
---------------------------------
-
-Because yt and its dependencies are installed in an isolated directory when
-you use the script installer, you can easily remove yt and all of its
-dependencies cleanly. Simply remove the install directory and its
-subdirectories and you're done. If you *really* had problems with the
-code, this is a last defense for solving: remove and then fully
-:ref:`re-install <installing-yt>` from the install script again.
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/data_objects/static_output.py
--- a/yt/data_objects/static_output.py
+++ b/yt/data_objects/static_output.py
@@ -55,8 +55,8 @@
SphericalCoordinateHandler
from yt.geometry.geographic_coordinates import \
GeographicCoordinateHandler
-from yt.geometry.ppv_coordinates import \
- PPVCoordinateHandler
+from yt.geometry.spec_cube_coordinates import \
+ SpectralCubeCoordinateHandler
# We want to support the movie format in the future.
# When such a thing comes to pass, I'll move all the stuff that is contant up
@@ -361,8 +361,8 @@
self.coordinates = SphericalCoordinateHandler(self)
elif self.geometry == "geographic":
self.coordinates = GeographicCoordinateHandler(self)
- elif self.geometry == "ppv":
- self.coordinates = PPVCoordinateHandler(self)
+ elif self.geometry == "spectral_cube":
+ self.coordinates = SpectralCubeCoordinateHandler(self)
else:
raise YTGeometryNotSupported(self.geometry)
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/frontends/fits/data_structures.py
--- a/yt/frontends/fits/data_structures.py
+++ b/yt/frontends/fits/data_structures.py
@@ -44,11 +44,15 @@
lon_prefixes = ["X","RA","GLON"]
lat_prefixes = ["Y","DEC","GLAT"]
-vel_prefixes = ["V","ENER","FREQ","WAV"]
delimiters = ["*", "/", "-", "^"]
delimiters += [str(i) for i in xrange(10)]
regex_pattern = '|'.join(re.escape(_) for _ in delimiters)
+spec_names = {"V":"Velocity",
+ "FREQ":"Frequency",
+ "ENER":"Energy",
+ "WAV":"Wavelength"}
+
field_from_unit = {"Jy":"intensity",
"K":"temperature"}
@@ -136,6 +140,7 @@
self._file_map = {}
self._ext_map = {}
self._scale_map = {}
+ dup_field_index = {}
# Since FITS header keywords are case-insensitive, we only pick a subset of
# prefixes, ones that we expect to end up in headers.
known_units = dict([(unit.lower(),unit) for unit in self.pf.unit_registry.lut])
@@ -162,13 +167,19 @@
if fname is None: fname = "image_%d" % (j)
if self.pf.num_files > 1 and fname.startswith("image"):
fname += "_file_%d" % (i)
+ if ("fits", fname) in self.field_list:
+ if fname in dup_field_index:
+ dup_field_index[fname] += 1
+ else:
+ dup_field_index[fname] = 1
+                    mylog.warning("This field has the same name as a previously loaded " +
+                                  "field. Changing the name from %s to %s_%d. To avoid " %
+                                  (fname, fname, dup_field_index[fname]) +
+                                  "this, change one of the BTYPE header keywords.")
+                    fname += "_%d" % (dup_field_index[fname])
for k in xrange(naxis4):
if naxis4 > 1:
fname += "_%s_%d" % (hdu.header["CTYPE4"], k+1)
- if fname in self.field_list:
- mylog.error("You have two fields with the same name. Change one of " +
- "the names in the BTYPE header keyword to distinguish " +
- "them.")
self._axis_map[fname] = k
self._file_map[fname] = fits_file
self._ext_map[fname] = j
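The renaming scheme introduced here can be sketched as a standalone helper (hypothetical, not part of this changeset):

```python
def deduplicate_field_name(fname, field_list, dup_field_index):
    """Return fname, renamed with a numeric suffix if it already exists.

    dup_field_index maps a base name to the number of duplicates seen
    so far, mirroring the dup_field_index dict in the hunk above.
    """
    if ("fits", fname) in field_list:
        dup_field_index[fname] = dup_field_index.get(fname, 0) + 1
        fname = "%s_%d" % (fname, dup_field_index[fname])
    return fname

counts = {}
fields = [("fits", "intensity")]
name = deduplicate_field_name("intensity", fields, counts)
# A second field named "intensity" becomes "intensity_1".
```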
@@ -210,7 +221,7 @@
# If nprocs > 1, decompose the domain into virtual grids
if self.num_grids > 1:
if self.pf.z_axis_decomp:
- dz = (pf.domain_width/pf.domain_dimensions)[2]
+ dz = pf.quan(1.0, "code_length")*pf.spectral_factor
self.grid_dimensions[:,2] = np.around(float(pf.domain_dimensions[2])/
self.num_grids).astype("int")
self.grid_dimensions[-1,2] += (pf.domain_dimensions[2] % self.num_grids)
@@ -227,7 +238,7 @@
dims = np.array(pf.domain_dimensions)
# If we are creating a dataset of lines, only decompose along the position axes
if len(pf.line_database) > 0:
- dims[pf.vel_axis] = 1
+ dims[pf.spec_axis] = 1
psize = get_psize(dims, self.num_grids)
gle, gre, shapes, slices = decompose_array(dims, psize, bbox)
self.grid_left_edge = self.pf.arr(gle, "code_length")
@@ -235,9 +246,9 @@
self.grid_dimensions = np.array([shape for shape in shapes], dtype="int32")
# If we are creating a dataset of lines, only decompose along the position axes
if len(pf.line_database) > 0:
- self.grid_left_edge[:,pf.vel_axis] = pf.domain_left_edge[pf.vel_axis]
- self.grid_right_edge[:,pf.vel_axis] = pf.domain_right_edge[pf.vel_axis]
- self.grid_dimensions[:,pf.vel_axis] = pf.domain_dimensions[pf.vel_axis]
+ self.grid_left_edge[:,pf.spec_axis] = pf.domain_left_edge[pf.spec_axis]
+ self.grid_right_edge[:,pf.spec_axis] = pf.domain_right_edge[pf.spec_axis]
+ self.grid_dimensions[:,pf.spec_axis] = pf.domain_dimensions[pf.spec_axis]
else:
self.grid_left_edge[0,:] = pf.domain_left_edge
self.grid_right_edge[0,:] = pf.domain_right_edge
@@ -303,6 +314,7 @@
nprocs = None,
storage_filename = None,
nan_mask = None,
+ spectral_factor = 1.0,
z_axis_decomp = False,
line_database = None,
line_width = None,
@@ -315,10 +327,13 @@
self.specified_parameters = parameters
self.z_axis_decomp = z_axis_decomp
+ self.spectral_factor = spectral_factor
if line_width is not None:
self.line_width = YTQuantity(line_width[0], line_width[1])
self.line_units = line_width[1]
+ mylog.info("For line folding, spectral_factor = 1.0")
+ self.spectral_factor = 1.0
else:
self.line_width = None
@@ -356,8 +371,8 @@
else:
fn = os.path.join(ytcfg.get("yt","test_data_dir"),fits_file)
f = _astropy.pyfits.open(fn, memmap=True,
- do_not_scale_image_data=True,
- ignore_blank=True)
+ do_not_scale_image_data=True,
+ ignore_blank=True)
self._fits_files.append(f)
if len(self._handle) > 1 and self._handle[1].name == "EVENTS":
@@ -399,13 +414,23 @@
self.events_data = False
self.first_image = 0
self.primary_header = self._handle[self.first_image].header
- self.wcs = _astropy.pywcs.WCS(header=self.primary_header)
self.naxis = self.primary_header["naxis"]
self.axis_names = [self.primary_header["ctype%d" % (i+1)]
for i in xrange(self.naxis)]
self.dims = [self.primary_header["naxis%d" % (i+1)]
for i in xrange(self.naxis)]
+ wcs = _astropy.pywcs.WCS(header=self.primary_header)
+ if self.naxis == 4:
+ self.wcs = _astropy.pywcs.WCS(naxis=3)
+ self.wcs.wcs.crpix = wcs.wcs.crpix[:3]
+ self.wcs.wcs.cdelt = wcs.wcs.cdelt[:3]
+ self.wcs.wcs.crval = wcs.wcs.crval[:3]
+ self.wcs.wcs.cunit = [str(unit) for unit in wcs.wcs.cunit][:3]
+            self.wcs.wcs.ctype = list(wcs.wcs.ctype)[:3]
+ else:
+ self.wcs = wcs
+
self.refine_by = 2
Dataset.__init__(self, fn, dataset_type)
@@ -441,11 +466,12 @@
self.time_unit = self.quan(1.0, "s")
self.velocity_unit = self.quan(1.0, "cm/s")
if "beam_size" in self.specified_parameters:
+ beam_size = self.specified_parameters["beam_size"]
beam_size = self.quan(beam_size[0], beam_size[1]).in_cgs().value
else:
beam_size = 1.0
self.unit_registry.add("beam",beam_size,dimensions=dimensions.solid_angle)
- if self.ppv_data:
+ if self.spec_cube:
units = self.wcs_2d.wcs.cunit[0]
if units == "deg": units = "degree"
if units == "rad": units = "radian"
@@ -520,17 +546,17 @@
self.reversed = False
# Check to see if this data is in some kind of (Lat,Lon,Vel) format
- self.ppv_data = False
+ self.spec_cube = False
x = 0
- for p in lon_prefixes+lat_prefixes+vel_prefixes:
+ for p in lon_prefixes+lat_prefixes+spec_names.keys():
y = np_char.startswith(self.axis_names[:self.dimensionality], p)
x += y.sum()
- if x == self.dimensionality: self._setup_ppv()
+ if x == self.dimensionality: self._setup_spec_cube()
- def _setup_ppv(self):
+ def _setup_spec_cube(self):
- self.ppv_data = True
- self.geometry = "ppv"
+ self.spec_cube = True
+ self.geometry = "spectral_cube"
end = min(self.dimensionality+1,4)
if self.events_data:
@@ -556,11 +582,11 @@
if self.wcs.naxis > 2:
- self.vel_axis = np.zeros((end-1), dtype="bool")
- for p in vel_prefixes:
- self.vel_axis += np_char.startswith(ctypes, p)
- self.vel_axis = np.where(self.vel_axis)[0][0]
- self.vel_name = ctypes[self.vel_axis].split("-")[0].lower()
+ self.spec_axis = np.zeros((end-1), dtype="bool")
+ for p in spec_names.keys():
+ self.spec_axis += np_char.startswith(ctypes, p)
+ self.spec_axis = np.where(self.spec_axis)[0][0]
+            prefix = ctypes[self.spec_axis].split("-")[0]
+            self.spec_name = [v for k, v in spec_names.items()
+                              if prefix.startswith(k)][0]
self.wcs_2d = _astropy.pywcs.WCS(naxis=2)
self.wcs_2d.wcs.crpix = self.wcs.wcs.crpix[[self.lon_axis, self.lat_axis]]
@@ -571,41 +597,60 @@
self.wcs_2d.wcs.ctype = [self.wcs.wcs.ctype[self.lon_axis],
self.wcs.wcs.ctype[self.lat_axis]]
- x0 = self.wcs.wcs.crpix[self.vel_axis]
- dz = self.wcs.wcs.cdelt[self.vel_axis]
- z0 = self.wcs.wcs.crval[self.vel_axis]
- self.vel_unit = str(self.wcs.wcs.cunit[self.vel_axis])
-
- if dz < 0.0:
- self.reversed = True
- le = self.dims[self.vel_axis]+0.5
- re = 0.5
- else:
- le = 0.5
- re = self.dims[self.vel_axis]+0.5
- self.domain_left_edge[self.vel_axis] = (le-x0)*dz + z0
- self.domain_right_edge[self.vel_axis] = (re-x0)*dz + z0
- if self.reversed: dz *= -1
+ self._p0 = self.wcs.wcs.crpix[self.spec_axis]
+ self._dz = self.wcs.wcs.cdelt[self.spec_axis]
+ self._z0 = self.wcs.wcs.crval[self.spec_axis]
+ self.spec_unit = str(self.wcs.wcs.cunit[self.spec_axis])
if self.line_width is not None:
- self.line_width = self.line_width.in_units(self.vel_unit)
- self.freq_begin = self.domain_left_edge[self.vel_axis]
- nz = np.rint(self.line_width.value/dz).astype("int")
- self.line_width = dz*nz
- self.domain_left_edge[self.vel_axis] = -self.line_width/2.
- self.domain_right_edge[self.vel_axis] = self.line_width/2.
- self.domain_dimensions[self.vel_axis] = nz
-
+ if self._dz < 0.0:
+ self.reversed = True
+ le = self.dims[self.spec_axis]+0.5
+ else:
+ le = 0.5
+ self.line_width = self.line_width.in_units(self.spec_unit)
+ self.freq_begin = (le-self._p0)*self._dz + self._z0
+ # We now reset these so that they are consistent
+ # with the new setup
+ self._dz = np.abs(self._dz)
+ self._p0 = 0.0
+ self._z0 = 0.0
+ nz = np.rint(self.line_width.value/self._dz).astype("int")
+ self.line_width = self._dz*nz
+ self.domain_left_edge[self.spec_axis] = -0.5*float(nz)
+ self.domain_right_edge[self.spec_axis] = 0.5*float(nz)
+ self.domain_dimensions[self.spec_axis] = nz
+ else:
+ if self.spectral_factor == "auto":
+ self.spectral_factor = float(max(self.domain_dimensions[[self.lon_axis,
+ self.lat_axis]]))
+ self.spectral_factor /= self.domain_dimensions[self.spec_axis]
+ mylog.info("Setting the spectral factor to %f" % (self.spectral_factor))
+ Dz = self.domain_right_edge[self.spec_axis]-self.domain_left_edge[self.spec_axis]
+ self.domain_right_edge[self.spec_axis] = self.domain_left_edge[self.spec_axis] + \
+ self.spectral_factor*Dz
+ self._dz /= self.spectral_factor
+ self._p0 = (self._p0-0.5)*self.spectral_factor + 0.5
else:
self.wcs_2d = self.wcs
- self.vel_axis = 2
- self.vel_name = "z"
- self.vel_unit = "code length"
+ self.spec_axis = 2
+ self.spec_name = "z"
+ self.spec_unit = "code length"
+
+ def spec2pixel(self, spec_value):
+ sv = self.arr(spec_value).in_units(self.spec_unit)
+ return self.arr((sv.v-self._z0)/self._dz+self._p0,
+ "code_length")
+
+ def pixel2spec(self, pixel_value):
+ pv = self.arr(pixel_value, "code_length")
+ return self.arr((pv.v-self._p0)*self._dz+self._z0,
+ self.spec_unit)
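These two methods are the two directions of one linear WCS transform; a float-only sketch of the round trip (reference values are illustrative):

```python
# p0: reference pixel (crpix), dz: increment (cdelt), z0: reference
# spectral value (crval), as plain floats instead of yt arrays.
p0, dz, z0 = 1.0, 1000.0, -300000.0

def spec2pixel(sv):
    return (sv - z0) / dz + p0

def pixel2spec(pv):
    return (pv - p0) * dz + z0

# The two transforms invert each other exactly.
assert pixel2spec(spec2pixel(-250000.0)) == -250000.0
```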
def __del__(self):
- for file in self._fits_files:
- file.close()
+ for f in self._fits_files:
+ f.close()
-        del file
self._handle.close()
del self._handle
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/frontends/fits/fields.py
--- a/yt/frontends/fits/fields.py
+++ b/yt/frontends/fits/fields.py
@@ -23,7 +23,7 @@
for field in pf.field_list:
if field[0] == "fits": self[field].take_log = False
- def _setup_ppv_fields(self):
+ def _setup_spec_cube_fields(self):
def _get_2d_wcs(data, axis):
w_coords = data.pf.wcs_2d.wcs_pix2world(data["x"], data["y"], 1)
@@ -42,17 +42,18 @@
self.add_field(("fits",name), function=world_f(axis, unit), units=unit)
if self.pf.dimensionality == 3:
- def _vel_los(field, data):
- axis = "xyz"[data.pf.vel_axis]
- return data.pf.arr(data[axis].ndarray_view(),data.pf.vel_unit)
- self.add_field(("fits",self.pf.vel_name), function=_vel_los,
- units=self.pf.vel_unit)
+ def _spec(field, data):
+ axis = "xyz"[data.pf.spec_axis]
+ sp = (data[axis].ndarray_view()-self.pf._p0)*self.pf._dz + self.pf._z0
+ return data.pf.arr(sp, data.pf.spec_unit)
+ self.add_field(("fits","spectral"), function=_spec,
+ units=self.pf.spec_unit, display_name=self.pf.spec_name)
def setup_fluid_fields(self):
- if self.pf.ppv_data:
+ if self.pf.spec_cube:
def _pixel(field, data):
return data.pf.arr(data["ones"], "pixel")
self.add_field(("fits","pixel"), function=_pixel, units="pixel")
- self._setup_ppv_fields()
+ self._setup_spec_cube_fields()
return
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/frontends/fits/io.py
--- a/yt/frontends/fits/io.py
+++ b/yt/frontends/fits/io.py
@@ -26,8 +26,10 @@
self._handle = pf._handle
if self.pf.line_width is not None:
self.line_db = self.pf.line_database
+ self.dz = self.pf.line_width/self.domain_dimensions[self.pf.spec_axis]
else:
self.line_db = None
+ self.dz = 1.
def _read_particles(self, fields_to_read, type, args, grid_list,
count_list, conv_factors):
@@ -90,19 +92,19 @@
start = ((g.LeftEdge-self.pf.domain_left_edge)/dx).to_ndarray().astype("int")
end = start + g.ActiveDimensions
if self.line_db is not None and fname in self.line_db:
- my_off = self.line_db.get(fname).in_units(self.pf.vel_unit).value
+ my_off = self.line_db.get(fname).in_units(self.pf.spec_unit).value
my_off = my_off - 0.5*self.pf.line_width
- my_off = int((my_off-self.pf.freq_begin)/dx[self.pf.vel_axis].value)
+ my_off = int((my_off-self.pf.freq_begin)/self.dz)
my_off = max(my_off, 0)
- my_off = min(my_off, self.pf.dims[self.pf.vel_axis]-1)
- start[self.pf.vel_axis] += my_off
- end[self.pf.vel_axis] += my_off
+ my_off = min(my_off, self.pf.dims[self.pf.spec_axis]-1)
+ start[self.pf.spec_axis] += my_off
+ end[self.pf.spec_axis] += my_off
mylog.debug("Reading from " + str(start) + str(end))
slices = [slice(start[i],end[i]) for i in xrange(3)]
if self.pf.reversed:
- new_start = self.pf.dims[self.pf.vel_axis]-1-start[self.pf.vel_axis]
- new_end = max(self.pf.dims[self.pf.vel_axis]-1-end[self.pf.vel_axis],0)
- slices[self.pf.vel_axis] = slice(new_start,new_end,-1)
+ new_start = self.pf.dims[self.pf.spec_axis]-1-start[self.pf.spec_axis]
+ new_end = max(self.pf.dims[self.pf.spec_axis]-1-end[self.pf.spec_axis],0)
+ slices[self.pf.spec_axis] = slice(new_start,new_end,-1)
if self.pf.dimensionality == 2:
nx, ny = g.ActiveDimensions[:2]
nz = 1
@@ -114,8 +116,8 @@
else:
data = ds.data[slices[2],slices[1],slices[0]].transpose()
if self.line_db is not None:
- nz1 = data.shape[self.pf.vel_axis]
- nz2 = g.ActiveDimensions[self.pf.vel_axis]
+ nz1 = data.shape[self.pf.spec_axis]
+ nz2 = g.ActiveDimensions[self.pf.spec_axis]
if nz1 != nz2:
old_data = data.copy()
data = np.zeros(g.ActiveDimensions)
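
The io.py hunk above computes where a spectral line's window starts in the cube: take the line center from the line database, step back half the line width, convert to a pixel offset, and clamp to the cube's spectral axis. A standalone sketch of that clamped offset, with hypothetical numbers (not yt's internals):

```python
def line_offset(line_center, line_width, freq_begin, dz, nz):
    # Left edge of the line's spectral window.
    off = line_center - 0.5 * line_width
    # Convert to an integer pixel offset from the start of the cube.
    off = int((off - freq_begin) / dz)
    # Clamp to the valid pixel range of the spectral axis.
    return min(max(off, 0), nz - 1)
```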
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/frontends/fits/misc.py
--- a/yt/frontends/fits/misc.py
+++ b/yt/frontends/fits/misc.py
@@ -14,7 +14,6 @@
from yt.fields.derived_field import ValidateSpatial
from yt.utilities.on_demand_imports import _astropy
from yt.funcs import mylog, get_image_suffix
-from yt.visualization._mpl_imports import FigureCanvasAgg
import os
def _make_counts(emin, emax):
@@ -130,26 +129,17 @@
raise NotImplementedError("WCS axes are not implemented for oblique plots.")
if not hasattr(pw.pf, "wcs_2d"):
raise NotImplementedError("WCS axes are not implemented for this dataset.")
- if pw.data_source.axis != pw.pf.vel_axis:
+ if pw.data_source.axis != pw.pf.spec_axis:
raise NotImplementedError("WCS axes are not implemented for this axis.")
- self.pf = pw.pf
+ self.plots = {}
self.pw = pw
- self.plots = {}
- self.wcs_axes = []
for f in pw.plots:
rect = pw.plots[f]._get_best_layout()[1]
fig = pw.plots[f].figure
- ax = WCSAxes(fig, rect, wcs=pw.pf.wcs_2d, frameon=False)
- fig.add_axes(ax)
- self.wcs_axes.append(ax)
- self._setup_plots()
-
- def _setup_plots(self):
- pw = self.pw
- for f, ax in zip(pw.plots, self.wcs_axes):
- wcs = ax.wcs.wcs
- pw.plots[f].axes.get_xaxis().set_visible(False)
- pw.plots[f].axes.get_yaxis().set_visible(False)
+ ax = fig.axes[0]
+ wcs_ax = WCSAxes(fig, rect, wcs=pw.pf.wcs_2d, frameon=False)
+ fig.add_axes(wcs_ax)
+ wcs = pw.pf.wcs_2d.wcs
xax = pw.pf.coordinates.x_axis[pw.data_source.axis]
yax = pw.pf.coordinates.y_axis[pw.data_source.axis]
xlabel = "%s (%s)" % (wcs.ctype[xax].split("-")[0],
@@ -157,18 +147,18 @@
ylabel = "%s (%s)" % (wcs.ctype[yax].split("-")[0],
wcs.cunit[yax])
fp = pw._font_properties
- ax.coords[0].set_axislabel(xlabel, fontproperties=fp)
- ax.coords[1].set_axislabel(ylabel, fontproperties=fp)
- ax.set_xlim(pw.xlim[0].value, pw.xlim[1].value)
- ax.set_ylim(pw.ylim[0].value, pw.ylim[1].value)
- ax.coords[0].ticklabels.set_fontproperties(fp)
- ax.coords[1].ticklabels.set_fontproperties(fp)
- self.plots[f] = pw.plots[f]
- self.pw = pw
- self.pf = self.pw.pf
-
- def refresh(self):
- self._setup_plots(self)
+ wcs_ax.coords[0].set_axislabel(xlabel, fontproperties=fp)
+ wcs_ax.coords[1].set_axislabel(ylabel, fontproperties=fp)
+ wcs_ax.coords[0].ticklabels.set_fontproperties(fp)
+ wcs_ax.coords[1].ticklabels.set_fontproperties(fp)
+ ax.xaxis.set_visible(False)
+ ax.yaxis.set_visible(False)
+ wcs_ax.set_xlim(pw.xlim[0].value, pw.xlim[1].value)
+ wcs_ax.set_ylim(pw.ylim[0].value, pw.ylim[1].value)
+ wcs_ax.coords.frame._update_cache = []
+ ax.xaxis.set_visible(False)
+ ax.yaxis.set_visible(False)
+ self.plots[f] = fig
def keys(self):
return self.plots.keys()
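
The axis labels in the misc.py hunk are built by splitting FITS `CTYPE` strings at the first dash and pairing the coordinate name with its unit. A small sketch of that formatting (assumed example values, mirroring the `"%s (%s)"` pattern above):

```python
def axis_label(ctype, cunit):
    # FITS CTYPE values look like "RA---SIN" or "GLON-CAR": the coordinate
    # name, padded with dashes, followed by the projection code.  Keep only
    # the name and pair it with the axis unit.
    return "%s (%s)" % (ctype.split("-")[0], cunit)
```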
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/geometry/ppv_coordinates.py
--- a/yt/geometry/ppv_coordinates.py
+++ /dev/null
@@ -1,58 +0,0 @@
-"""
-Cartesian fields
-
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-import numpy as np
-from .cartesian_coordinates import \
- CartesianCoordinateHandler
-
-class PPVCoordinateHandler(CartesianCoordinateHandler):
-
- def __init__(self, pf):
- super(PPVCoordinateHandler, self).__init__(pf)
-
- self.axis_name = {}
- self.axis_id = {}
-
- for axis, axis_name in zip([pf.lon_axis, pf.lat_axis, pf.vel_axis],
- ["Image\ x", "Image\ y", pf.vel_name]):
- lower_ax = "xyz"[axis]
- upper_ax = lower_ax.upper()
-
- self.axis_name[axis] = axis_name
- self.axis_name[lower_ax] = axis_name
- self.axis_name[upper_ax] = axis_name
- self.axis_name[axis_name] = axis_name
-
- self.axis_id[lower_ax] = axis
- self.axis_id[axis] = axis
- self.axis_id[axis_name] = axis
-
- self.default_unit_label = {}
- self.default_unit_label[pf.lon_axis] = "pixel"
- self.default_unit_label[pf.lat_axis] = "pixel"
- self.default_unit_label[pf.vel_axis] = pf.vel_unit
-
- def convert_to_cylindrical(self, coord):
- raise NotImplementedError
-
- def convert_from_cylindrical(self, coord):
- raise NotImplementedError
-
- x_axis = { 'x' : 1, 'y' : 0, 'z' : 0,
- 0 : 1, 1 : 0, 2 : 0}
-
- y_axis = { 'x' : 2, 'y' : 2, 'z' : 1,
- 0 : 2, 1 : 2, 2 : 1}
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/geometry/spec_cube_coordinates.py
--- /dev/null
+++ b/yt/geometry/spec_cube_coordinates.py
@@ -0,0 +1,65 @@
+"""
+Cartesian fields
+
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+import numpy as np
+from .cartesian_coordinates import \
+ CartesianCoordinateHandler
+
+class SpectralCubeCoordinateHandler(CartesianCoordinateHandler):
+
+ def __init__(self, pf):
+ super(SpectralCubeCoordinateHandler, self).__init__(pf)
+
+ self.axis_name = {}
+ self.axis_id = {}
+
+ for axis, axis_name in zip([pf.lon_axis, pf.lat_axis, pf.spec_axis],
+ ["Image\ x", "Image\ y", pf.spec_name]):
+ lower_ax = "xyz"[axis]
+ upper_ax = lower_ax.upper()
+
+ self.axis_name[axis] = axis_name
+ self.axis_name[lower_ax] = axis_name
+ self.axis_name[upper_ax] = axis_name
+ self.axis_name[axis_name] = axis_name
+
+ self.axis_id[lower_ax] = axis
+ self.axis_id[axis] = axis
+ self.axis_id[axis_name] = axis
+
+ self.default_unit_label = {}
+ self.default_unit_label[pf.lon_axis] = "pixel"
+ self.default_unit_label[pf.lat_axis] = "pixel"
+ self.default_unit_label[pf.spec_axis] = pf.spec_unit
+
+ def _spec_axis(ax, x, y):
+ p = (x,y)[ax]
+ return [self.pf.pixel2spec(pp).v for pp in p]
+
+ self.axis_field = {}
+ self.axis_field[self.pf.spec_axis] = _spec_axis
+
+ def convert_to_cylindrical(self, coord):
+ raise NotImplementedError
+
+ def convert_from_cylindrical(self, coord):
+ raise NotImplementedError
+
+ x_axis = { 'x' : 1, 'y' : 0, 'z' : 0,
+ 0 : 1, 1 : 0, 2 : 0}
+
+ y_axis = { 'x' : 2, 'y' : 2, 'z' : 1,
+ 0 : 2, 1 : 2, 2 : 1}
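
The class-level `x_axis`/`y_axis` tables at the end of the new coordinate handler map a cut axis (by name or index) to the two data axes spanning the image plane. A standalone sketch of the same lookup:

```python
x_axis = {'x': 1, 'y': 0, 'z': 0, 0: 1, 1: 0, 2: 0}
y_axis = {'x': 2, 'y': 2, 'z': 1, 0: 2, 1: 2, 2: 1}

def image_axes(axis):
    # For a cut along `axis`, return the (horizontal, vertical) data axes
    # that span the resulting image plane.
    return x_axis[axis], y_axis[axis]
```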
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/utilities/fits_image.py
--- a/yt/utilities/fits_image.py
+++ b/yt/utilities/fits_image.py
@@ -20,7 +20,12 @@
pyfits = _astropy.pyfits
pywcs = _astropy.pywcs
-class FITSImageBuffer(pyfits.HDUList):
+if pyfits is None:
+ HDUList = object
+else:
+ HDUList = pyfits.HDUList
+
+class FITSImageBuffer(HDUList):
def __init__(self, data, fields=None, units="cm",
center=None, scale=None, wcs=None):
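
The fits_image.py hunk guards against astropy being unavailable by substituting `object` as the base class, so the class definition itself never raises at import time. The same pattern in isolation, with a deliberately fake module name standing in for the optional dependency:

```python
try:
    import not_a_real_module  # hypothetical optional dependency
except ImportError:
    not_a_real_module = None

# Fall back to `object` so the class statement always succeeds, even
# when the optional package is missing; methods that need the real base
# class can still raise a helpful error at call time.
if not_a_real_module is None:
    HDUListBase = object
else:
    HDUListBase = not_a_real_module.HDUList

class ImageBuffer(HDUListBase):
    pass
```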
diff -r 78d4ad6d25422503bf323322ab2ff8609c8da6ae -r 8ce1487c06c6efc8041f9c8bd4e6f80b5782a08d yt/visualization/plot_window.py
--- a/yt/visualization/plot_window.py
+++ b/yt/visualization/plot_window.py
@@ -163,7 +163,7 @@
return center
def get_window_parameters(axis, center, width, pf):
- if pf.geometry == "cartesian" or pf.geometry == "ppv":
+ if pf.geometry == "cartesian" or pf.geometry == "spectral_cube":
width = get_sanitized_width(axis, width, None, pf)
center = get_sanitized_center(center, pf)
elif pf.geometry in ("polar", "cylindrical"):
@@ -742,7 +742,7 @@
else:
(unit_x, unit_y) = self._axes_unit_names
- # For some plots we may set aspect by hand, such as for PPV data.
+ # For some plots we may set aspect by hand, such as for spectral cube data.
# This will likely be replaced at some point by the coordinate handler
# setting plot aspect.
if self.aspect is None:
@@ -832,12 +832,27 @@
axis_names = self.pf.coordinates.axis_name
xax = self.pf.coordinates.x_axis[axis_index]
yax = self.pf.coordinates.y_axis[axis_index]
+
if hasattr(self.pf.coordinates, "axis_default_unit_label"):
axes_unit_labels = [self.pf.coordinates.axis_default_unit_name[xax],
self.pf.coordinates.axis_default_unit_name[yax]]
labels = [r'$\rm{'+axis_names[xax]+axes_unit_labels[0] + r'}$',
r'$\rm{'+axis_names[yax]+axes_unit_labels[1] + r'}$']
+ if hasattr(self.pf.coordinates, "axis_field"):
+ if xax in self.pf.coordinates.axis_field:
+ xmin, xmax = self.pf.coordinates.axis_field[xax](0,
+ self.xlim, self.ylim)
+ else:
+ xmin, xmax = [float(x) for x in extentx]
+ if yax in self.pf.coordinates.axis_field:
+ ymin, ymax = self.pf.coordinates.axis_field[yax](1,
+ self.xlim, self.ylim)
+ else:
+ ymin, ymax = [float(y) for y in extenty]
+ self.plots[f].image.set_extent((xmin,xmax,ymin,ymax))
+ self.plots[f].axes.set_aspect("auto")
+
self.plots[f].axes.set_xlabel(labels[0],fontproperties=fp)
self.plots[f].axes.set_ylabel(labels[1],fontproperties=fp)
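
The plot_window.py hunk lets a coordinate handler register an `axis_field` callback per axis; when one exists, the plot limits are run through it (converting, e.g., pixel limits to spectral values) before being handed to `set_extent`. A sketch of that dispatch with a toy callback standing in for `pixel2spec` (hypothetical names, same `(axis, xlim, ylim)` signature as above):

```python
def remap_extent(xlim, ylim, axis_field, xax, yax):
    # If a callback is registered for an axis, let it convert the pixel
    # limits into physical values; otherwise pass the limits through.
    if xax in axis_field:
        xmin, xmax = axis_field[xax](0, xlim, ylim)
    else:
        xmin, xmax = xlim
    if yax in axis_field:
        ymin, ymax = axis_field[yax](1, xlim, ylim)
    else:
        ymin, ymax = ylim
    return (xmin, xmax, ymin, ymax)

# Toy callback: scale the relevant limits by 2 (stand-in for pixel2spec).
double = lambda ax, xlim, ylim: [2.0 * v for v in (xlim, ylim)[ax]]
```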
Repository URL: https://bitbucket.org/yt_analysis/yt/