[yt-svn] commit/yt: 14 new changesets
commits-noreply at bitbucket.org
Sat Jun 28 09:40:42 PDT 2014
14 new commits in yt:
https://bitbucket.org/yt_analysis/yt/commits/084683f27ec7/
Changeset: 084683f27ec7
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 02:45:28
Summary: Fixing many minor issues that are causing doc build errors.
Affected #: 7 files
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -36,7 +36,7 @@
axes. To focus on what's happening in the x-y plane, we make an additional
Temperature slice for the bottom-right subpanel.
-.. yt-cookbook:: multiplot_2x2_coordaxes_slice.py
+.. yt_cookbook:: multiplot_2x2_coordaxes_slice.py
Multi-Plot Slice and Projections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/cookbook/fits_xray_images.rst
--- a/doc/source/cookbook/fits_xray_images.rst
+++ b/doc/source/cookbook/fits_xray_images.rst
@@ -1,6 +1,6 @@
.. _xray_fits:
FITS X-ray Images in yt
-----------------------
+-----------------------
-.. notebook:: fits_xray_images.ipynb
\ No newline at end of file
+.. notebook:: fits_xray_images.ipynb
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -51,7 +51,7 @@
If you are developing new functionality, it is sometimes more convenient to use
the Nose command line interface, ``nosetests``. You can run the unit tests
-using `no`qsetets` by navigating to the base directory of the yt mercurial
+using ``nose`` by navigating to the base directory of the yt mercurial
repository and invoking ``nosetests``:
.. code-block:: bash
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -711,11 +711,13 @@
``spectral_factor``
~~~~~~~~~~~~~~~~~~~
-Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt sets the pixel
-scale as the ``code_length``, certain visualizations (such as volume renderings) may look extended
-or distended in ways that are undesirable. To adjust the width in ``code_length`` of the spectral
- axis, set ``spectral_factor`` equal to a constant which gives the desired scaling,
- or set it to ``"auto"`` to make the width the same as the largest axis in the sky plane.
+Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt
+sets the pixel scale as the ``code_length``, certain visualizations (such as
+volume renderings) may look extended or distended in ways that are
+undesirable. To adjust the width in ``code_length`` of the spectral axis, set
+``spectral_factor`` equal to a constant which gives the desired scaling, or set
+it to ``"auto"`` to make the width the same as the largest axis in the sky
+plane.
Miscellaneous Tools for Use with FITS Data
++++++++++++++++++++++++++++++++++++++++++
@@ -792,11 +794,11 @@
PyNE Data
---------
-.. _loading-numpy-array:
-
Generic Array Data
------------------
+See :ref:`loading-numpy-array` for more detail.
+
Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
``yt`` to use its advanced visualization and analysis facilities. The only
@@ -848,6 +850,8 @@
Generic AMR Data
----------------
+See :ref:`loading-numpy-array` for more detail.
+
It is possible to create native ``yt`` parameter file from Python's dictionary
that describes set of rectangular patches of data of possibly varying
resolution.
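The ``spectral_factor`` behavior documented in the hunk above can be sketched numerically. The cube dimensions below are hypothetical, and the ``"auto"`` rule shown is an interpretation of the documented behavior, not yt's exact implementation:

```python
# Hypothetical cube dimensions: two sky-plane axes and one spectral axis.
nx, ny, nv = 512, 256, 64

# "auto": choose the factor so the spectral axis width (in code_length)
# matches the largest sky-plane axis.
spectral_factor = max(nx, ny) / float(nv)
spectral_width = nv * spectral_factor

print(spectral_factor, spectral_width)  # -> 8.0 512.0
```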
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/visualizing/_cb_docstrings.inc
--- a/doc/source/visualizing/_cb_docstrings.inc
+++ b/doc/source/visualizing/_cb_docstrings.inc
@@ -120,6 +120,8 @@
.. python-script::
from yt.mods import *
+ from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog
+
data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
halos_pf = load('rockstar_halos/halos_0.0.bin')
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/visualizing/_images/mapserver.png
Binary file doc/source/visualizing/_images/mapserver.png has changed
diff -r 35cecc8a0a24bbf074956ef320f70e0e686478a7 -r 084683f27ec781da6336b39e25eae760f88473b0 doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -478,8 +478,7 @@
:ref:`cookbook-amrkdtree_to_uniformgrid`.
System Requirements
--------------------
-.. versionadded:: 3.0
++++++++++++++++++++
Nvidia graphics card - The memory limit of the graphics card sets the limit
on the size of the data source.
@@ -490,7 +489,7 @@
the common/inc samples shipped with CUDA. The following shows an example
in bash with CUDA 5.5 installed in /usr/local :
-export CUDA_SAMPLES=/usr/local/cuda-5.5/samples/common/inc
+ export CUDA_SAMPLES=/usr/local/cuda-5.5/samples/common/inc
PyCUDA must also be installed to use Theia.
@@ -503,13 +502,13 @@
Tutorial
---------
-.. versionadded:: 3.0
+++++++++
Currently rendering only works on uniform grids. Here is an example
on a 1024 cube of float32 scalars.
.. code-block:: python
+
from yt.visualization.volume_rendering.theia.scene import TheiaScene
from yt.visualization.volume_rendering.algorithms.front_to_back import FrontToBackRaycaster
import numpy as np
@@ -528,28 +527,27 @@
.. _the-theiascene-interface:
The TheiaScene Interface
---------------------
-.. versionadded:: 3.0
+++++++++++++++++++++++++
A TheiaScene object has been created to provide a high level entry point for
-controlling the raycaster's view onto the data. The class
-:class:`~yt.visualization.volume_rendering.theia.TheiaScene` encapsulates
- a Camera object and a TheiaSource that intern encapsulates
-a volume. The :class:`~yt.visualization.volume_rendering.theia.Camera`
-provides controls for rotating, translating, and zooming into the volume.
-Using the :class:`~yt.visualization.volume_rendering.theia.TheiaSource`
-automatically transfers the volume to the graphic's card texture memory.
+controlling the raycaster's view onto the data. The class
+:class:`~yt.visualization.volume_rendering.theia.TheiaScene` encapsulates a
+Camera object and a TheiaSource that intern encapsulates a volume. The
+:class:`~yt.visualization.volume_rendering.theia.Camera` provides controls for
+rotating, translating, and zooming into the volume. Using the
+:class:`~yt.visualization.volume_rendering.theia.TheiaSource` automatically
+transfers the volume to the graphic's card texture memory.
Example Cookbooks
----------------
++++++++++++++++++
OpenGL Example for interactive volume rendering:
:ref:`cookbook-opengl_volume_rendering`.
-OpenGL Stereoscopic Example :
.. warning:: Frame rate will suffer significantly from stereoscopic rendering.
~2x slower since the volume must be rendered twice.
-:ref:`cookbook-opengl_stereo_volume_rendering`.
+
+OpenGL Stereoscopic Example: :ref:`cookbook-opengl_stereo_volume_rendering`.
Pseudo-Realtime video rendering with ffmpeg :
:ref:`cookbook-ffmpeg_volume_rendering`.
https://bitbucket.org/yt_analysis/yt/commits/30f2141e18c0/
Changeset: 30f2141e18c0
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 02:45:59
Summary: Preventing the dataset from going out of scope, causing a ReferenceError.
Affected #: 1 file
diff -r 084683f27ec781da6336b39e25eae760f88473b0 -r 30f2141e18c07c924cd93073279b44b10848c4bd yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py
@@ -201,7 +201,8 @@
if self.suppress_logging:
old_level = int(ytcfg.get("yt","loglevel"))
mylog.setLevel(40)
- dd_first = self.data_series[0].all_data()
+ ds_first = self.data_series[0]
+ dd_first = ds_first.all_data()
fd = dd_first._determine_fields(field)[0]
if field not in self.particle_fields:
if self.data_series[0].field_info[fd].particle_type:
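The ReferenceError this fixes arises because internal objects hold weak references back to the dataset: if the dataset returned by ``self.data_series[0]`` is never bound to a name, CPython can collect it while its data object is still in use. A minimal sketch of the pitfall (the ``Dataset`` class is a stand-in, not yt's actual class; the collection timing assumes CPython's reference counting):

```python
import weakref

class Dataset(object):
    """Stand-in for a yt dataset; not yt's actual class."""
    def all_data(self):
        return "all_data object"

def fragile():
    # The temporary Dataset() has no strong reference once this
    # expression finishes, so the proxy is already dead on return.
    return weakref.proxy(Dataset())

def robust():
    ds = Dataset()            # the local name keeps the object alive
    dd = weakref.proxy(ds)
    return dd.all_data()      # safe while ds is still in scope

try:
    fragile().all_data()
    raised = False
except ReferenceError:
    raised = True
```

Binding ``ds_first`` before calling ``all_data()``, as in the diff, is exactly the ``robust`` pattern.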
https://bitbucket.org/yt_analysis/yt/commits/fdb1493c826e/
Changeset: fdb1493c826e
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 02:46:25
Summary: Fixing a minor units issue in the PPV cube analysis module.
Affected #: 1 file
diff -r 30f2141e18c07c924cd93073279b44b10848c4bd -r fdb1493c826e5a9235144f7fe5c7e2d27b4bcdb0 yt/analysis_modules/ppv_cube/ppv_cube.py
--- a/yt/analysis_modules/ppv_cube/ppv_cube.py
+++ b/yt/analysis_modules/ppv_cube/ppv_cube.py
@@ -156,7 +156,8 @@
def _create_intensity(self, i):
def _intensity(field, data):
- w = np.abs(data["v_los"]-self.vmid[i])/self.dv
+ vlos = data["v_los"]
+ w = np.abs(vlos-self.vmid[i])/self.dv.in_units(vlos.units)
w = 1.-w
w[w < 0.0] = 0.0
return data[self.field]*w
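The unit conversion added above guards against dividing velocities stored in one unit by a channel width stored in another. A plain-float sketch of the pitfall (values are illustrative only):

```python
# |v_los - vmid| in km/s, channel width dv stored in m/s.
vlos_kms = 5.0
dv_ms = 2000.0

# A naive ratio silently mixes units and is off by a factor of 1000:
naive_w = vlos_kms / dv_ms            # 0.0025

# Converting dv into the velocity's units first -- the role played by
# dv.in_units(vlos.units) in the fix -- gives the intended
# dimensionless weight:
dv_kms = dv_ms / 1000.0
w = vlos_kms / dv_kms                 # 2.5
```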
https://bitbucket.org/yt_analysis/yt/commits/491e1d4498e6/
Changeset: 491e1d4498e6
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 02:46:52
Summary: Fixing a syntax error in the streamline callback.
Affected #: 1 file
diff -r fdb1493c826e5a9235144f7fe5c7e2d27b4bcdb0 -r 491e1d4498e68f66fb9f999fa25b51e6af2e7b5d yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -501,7 +501,7 @@
streamplot_args = {'x': X, 'y': Y, 'u':pixX, 'v': pixY,
'density': self.dens}
streamplot_args.update(self.plot_args)
- plot._axes.streamplot(**self.streamplot_args)
+ plot._axes.streamplot(**streamplot_args)
plot._axes.set_xlim(xx0,xx1)
plot._axes.set_ylim(yy0,yy1)
plot._axes.hold(False)
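The one-character fix matters because ``streamplot_args`` is a local dict, not an attribute: the method builds default keyword arguments, lets user-supplied ``plot_args`` override them, and unpacks the result into the plotting call. A sketch of that pattern (``fake_streamplot`` stands in for matplotlib's ``Axes.streamplot``):

```python
def fake_streamplot(**kwargs):
    """Stand-in for matplotlib's Axes.streamplot."""
    return kwargs

# Build base arguments, then let user-supplied plot_args override them.
streamplot_args = {'density': 1.0, 'color': 'k'}
plot_args = {'color': 'r', 'linewidth': 2}
streamplot_args.update(plot_args)

result = fake_streamplot(**streamplot_args)
```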
https://bitbucket.org/yt_analysis/yt/commits/35f49950e54b/
Changeset: 35f49950e54b
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 02:47:19
Summary: Decreasing the direness of these warnings, which will happen for many flash datasets.
Affected #: 1 file
diff -r 491e1d4498e68f66fb9f999fa25b51e6af2e7b5d -r 35f49950e54b90f710e6b60efd1c5ac733183471 yt/frontends/flash/data_structures.py
--- a/yt/frontends/flash/data_structures.py
+++ b/yt/frontends/flash/data_structures.py
@@ -283,7 +283,8 @@
else :
pval = val
if vn in self.parameters and self.parameters[vn] != pval:
- mylog.warning("{0} {1} overwrites a simulation scalar of the same name".format(hn[:-1],vn))
+ mylog.info("{0} {1} overwrites a simulation "
+ "scalar of the same name".format(hn[:-1],vn))
self.parameters[vn] = pval
if self._flash_version == 7:
for hn in hns:
@@ -300,7 +301,8 @@
else :
pval = val
if vn in self.parameters and self.parameters[vn] != pval:
- mylog.warning("{0} {1} overwrites a simulation scalar of the same name".format(hn[:-1],vn))
+ mylog.info("{0} {1} overwrites a simulation "
+ "scalar of the same name".format(hn[:-1],vn))
self.parameters[vn] = pval
# Determine block size
@@ -363,7 +365,7 @@
try:
self.gamma = self.parameters["gamma"]
except:
- mylog.warning("Cannot find Gamma")
+ mylog.info("Cannot find Gamma")
pass
# Get the simulation time
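Demoting these messages from ``warning`` to ``info`` means they vanish whenever the logger's threshold sits at WARNING (level 30) or above. The standard-library analogue, using a list-backed handler so the effect is observable:

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects emitted records so the filtering is visible."""
    def emit(self, record):
        records.append((record.levelname, record.getMessage()))

log = logging.getLogger("flash_demo")
log.addHandler(ListHandler())
log.setLevel(logging.WARNING)   # WARNING is 30; INFO (20) is filtered out

log.info("runtime parameter overwrites a simulation scalar")  # suppressed
log.warning("Cannot find Gamma")                              # still emitted
```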
https://bitbucket.org/yt_analysis/yt/commits/6c941b3b41f1/
Changeset: 6c941b3b41f1
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-25 23:32:53
Summary: Fixing an issue with the generic array data notebook.
Affected #: 1 file
diff -r 35f49950e54b90f710e6b60efd1c5ac733183471 -r 6c941b3b41f1fd3acb1035277ca68bc0a22bba36 doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:cd145d8cadbf1a0065d0f9fb4ea107c215fcd53245b3bb7d29303af46f063552"
+ "signature": "sha256:5fc7783d6c99659c353a35348bb21210fcb7572d5357f32dd61755d4a7f8fe6c"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -443,7 +443,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits.gz\")\n",
+ "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits\")\n",
"f.info()"
],
"language": "python",
@@ -462,7 +462,7 @@
"collapsed": false,
"input": [
"data = {}\n",
- "for hdu in f[1:]:\n",
+ "for hdu in f:\n",
" name = hdu.name.lower()\n",
" data[name] = (hdu.data,\"km/s\")\n",
"print data.keys()"
https://bitbucket.org/yt_analysis/yt/commits/14858c252cdb/
Changeset: 14858c252cdb
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 04:45:04
Summary: Removing the free-free field because it's currently not possible to add custom
derived quantities.
Affected #: 1 file
diff -r 6c941b3b41f1fd3acb1035277ca68bc0a22bba36 -r 14858c252cdb66a9d7401124b77620cac05340fe doc/source/cookbook/free_free_field.py
--- a/doc/source/cookbook/free_free_field.py
+++ /dev/null
@@ -1,104 +0,0 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
-import numpy as np
-import yt
-# Need to grab the proton mass from the constants database
-from yt.utilities.physical_constants import mp
-
-# Define the emission field
-
-keVtoerg = 1.602e-9 # Convert energy in keV to energy in erg
-KtokeV = 8.617e-08 # Convert degrees Kelvin to degrees keV
-sqrt3 = np.sqrt(3.)
-expgamma = 1.78107241799 # Exponential of Euler's constant
-
-
-def _FreeFree_Emission(field, data):
-
- if data.has_field_parameter("Z"):
- Z = data.get_field_parameter("Z")
- else:
- Z = 1.077 # Primordial H/He plasma
-
- if data.has_field_parameter("mue"):
- mue = data.get_field_parameter("mue")
- else:
- mue = 1./0.875 # Primordial H/He plasma
-
- if data.has_field_parameter("mui"):
- mui = data.get_field_parameter("mui")
- else:
- mui = 1./0.8125 # Primordial H/He plasma
-
- if data.has_field_parameter("Ephoton"):
- Ephoton = data.get_field_parameter("Ephoton")
- else:
- Ephoton = 1.0 # in keV
-
- if data.has_field_parameter("photon_emission"):
- photon_emission = data.get_field_parameter("photon_emission")
- else:
- photon_emission = False # Flag for energy or photon emission
-
- n_e = data["density"]/(mue*mp)
- n_i = data["density"]/(mui*mp)
- kT = data["temperature"]*KtokeV
-
- # Compute the Gaunt factor
-
- g_ff = np.zeros(kT.shape)
- g_ff[Ephoton/kT > 1.] = np.sqrt((3./np.pi)*kT[Ephoton/kT > 1.]/Ephoton)
- g_ff[Ephoton/kT < 1.] = (sqrt3/np.pi)*np.log((4./expgamma) *
- kT[Ephoton/kT < 1.]/Ephoton)
-
- eps_E = 1.64e-20*Z*Z*n_e*n_i/np.sqrt(data["temperature"]) * \
- np.exp(-Ephoton/kT)*g_ff
-
- if photon_emission:
- eps_E /= (Ephoton*keVtoerg)
-
- return eps_E
-
-yt.add_field("FreeFree_Emission", function=_FreeFree_Emission)
-
-# Define the luminosity derived quantity
-def _FreeFreeLuminosity(data):
- return (data["FreeFree_Emission"]*data["cell_volume"]).sum()
-
-
-def _combFreeFreeLuminosity(data, luminosity):
- return luminosity.sum()
-
-yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
- combine_function=_combFreeFreeLuminosity, n_ret=1)
-
-pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
-
-sphere = pf.sphere(pf.domain_center, (100., "kpc"))
-
-# Print out the total luminosity at 1 keV for the sphere
-
-print "L_E (1 keV, primordial) = ", sphere.quantities["FreeFree_Luminosity"]()
-
-# The defaults for the field assume a H/He primordial plasma.
-# Let's set the appropriate parameters for a pure hydrogen plasma.
-
-sphere.set_field_parameter("mue", 1.0)
-sphere.set_field_parameter("mui", 1.0)
-sphere.set_field_parameter("Z", 1.0)
-
-print "L_E (1 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
-
-# Now let's print the luminosity at an energy of E = 10 keV
-
-sphere.set_field_parameter("Ephoton", 10.0)
-
-print "L_E (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
-
-# Finally, let's set the flag for photon emission, to get the total number
-# of photons emitted at this energy:
-
-sphere.set_field_parameter("photon_emission", True)
-
-print "L_ph (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
https://bitbucket.org/yt_analysis/yt/commits/9acc67029bc7/
Changeset: 9acc67029bc7
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 04:47:19
Summary: Preventing the smoothed covering grid from creating regions that overflow the domain.
Affected #: 1 file
diff -r 14858c252cdb66a9d7401124b77620cac05340fe -r 9acc67029bc740e0b81dfa6accbd6648da7b49d6 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -684,10 +684,12 @@
if level_state is None: return
# We need a buffer region to allow for zones that contribute to the
# interpolation but are not directly inside our bounds
+ left_edge = self.left_edge - level_state.current_dx
+ right_edge = self.right_edge + level_state.current_dx
+ left_edge = np.maximum(left_edge, self.pf.domain_left_edge)
+ right_edge = np.minimum(right_edge, self.pf.domain_right_edge)
level_state.data_source = self.pf.region(
- self.center,
- self.left_edge - level_state.current_dx,
- self.right_edge + level_state.current_dx)
+ self.center, left_edge, right_edge)
level_state.data_source.min_level = level_state.current_level
level_state.data_source.max_level = level_state.current_level
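The clamping added above is elementwise: grow the region by one cell width for the interpolation buffer, then pin each edge back inside the domain with ``np.maximum``/``np.minimum``. A standalone sketch with hypothetical bounds:

```python
import numpy as np

# Hypothetical domain bounds and requested region, in code_length.
domain_left = np.array([0.0, 0.0, 0.0])
domain_right = np.array([1.0, 1.0, 1.0])
dx = 0.05                                # one-cell buffer width

left_edge = np.array([0.1, 0.0, 0.5]) - dx
right_edge = np.array([0.3, 0.2, 1.0]) + dx

# Clamp the buffered region so it never overflows the domain.
left_edge = np.maximum(left_edge, domain_left)
right_edge = np.minimum(right_edge, domain_right)
```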
https://bitbucket.org/yt_analysis/yt/commits/4b7cb6e906fa/
Changeset: 4b7cb6e906fa
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 04:47:50
Summary: Making flat behave properly for YTArray and subclasses.
Affected #: 2 files
diff -r 9acc67029bc740e0b81dfa6accbd6648da7b49d6 -r 4b7cb6e906fad08d8de8b0c635ee8b55c8b66053 yt/units/tests/test_ytarray.py
--- a/yt/units/tests/test_ytarray.py
+++ b/yt/units/tests/test_ytarray.py
@@ -756,6 +756,15 @@
yield assert_array_equal, yt_arr, YTArray(yt_arr.to_astropy())
yield assert_equal, yt_quan, YTQuantity(yt_quan.to_astropy())
+def test_flatiter():
+ a = YTArray(np.arange(10), 'km/hr')
+
+ yield assert_equal, a, a.flat[:]
+ yield assert_equal, a[1:3], a.flat[1:3]
+
+ yield assert_isinstance, a.flat[:], YTArray
+ yield assert_isinstance, a.flat[1:3], YTArray
+ yield assert_isinstance, a.flat[0], YTQuantity
def test_subclass():
diff -r 9acc67029bc740e0b81dfa6accbd6648da7b49d6 -r 4b7cb6e906fad08d8de8b0c635ee8b55c8b66053 yt/units/yt_array.py
--- a/yt/units/yt_array.py
+++ b/yt/units/yt_array.py
@@ -176,6 +176,25 @@
fmax, fmin, copysign, nextafter, fmod,
)
+class YTArrayIterator(object):
+ def __init__(self, arr):
+ self.arr = arr
+ self.iter = self.arr.view(np.ndarray).flat
+
+ def __iter__(self):
+ return self
+
+ def __getitem__(self, indx):
+ out = self.iter[indx]
+ return out*self.arr.uq
+
+ def __setitem__(self, index, value):
+ self.iter[index] = value
+
+ def __next__(self):
+ out = next(self.iter)
+ return out*self.arr.uq
+
class YTArray(np.ndarray):
"""
An ndarray subclass that attaches a symbolic unit object to the array data.
@@ -654,6 +673,21 @@
ua = unit_array
+ @property
+ def flat(self):
+ """A 1D iterator over the YTArray.
+
+ This returns a ``YTArrayIterator`` instance, which behaves the same as
+ the ``~np.flatiter`` instance returned by ``~np.ndarray.flat``, and is
+ similar to, but not a subclass of, Python's built-in iterator object.
+ """
+ return YTArrayIterator(self)
+
+ @flat.setter
+ def flat(self, value):
+ y = self.ravel()
+ y[:] = value
+
#
# Start operation methods
#
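The iterator above works by delegating to the raw ndarray's ``flat`` view and re-attaching units on the way out. A standalone sketch of the same pattern with a toy unit-tagged array (``UnitArray`` is illustrative, not yt's ``YTArray``):

```python
import numpy as np

class UnitArray(np.ndarray):
    """Toy stand-in for YTArray: tags an ndarray with a unit string."""
    def __new__(cls, data, unit):
        obj = np.asarray(data).view(cls)
        obj.unit = unit
        return obj

class UnitArrayIterator(object):
    """Mirrors the YTArrayIterator pattern: iterate over the raw
    ndarray's flatiter, re-attaching units to whatever comes out."""
    def __init__(self, arr):
        self.arr = arr
        self.iter = arr.view(np.ndarray).flat

    def __iter__(self):
        return self

    def __next__(self):
        return UnitArray(next(self.iter), self.arr.unit)

    def __getitem__(self, index):
        return UnitArray(self.iter[index], self.arr.unit)

    def __setitem__(self, index, value):
        self.iter[index] = value

a = UnitArray(np.arange(10), 'km/hr')
flat = UnitArrayIterator(a)
```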
https://bitbucket.org/yt_analysis/yt/commits/ee2a140fa42a/
Changeset: ee2a140fa42a
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 04:48:11
Summary: Fixing the hse_field recipe. Adding it to the toctree.
Affected #: 2 files
diff -r 4b7cb6e906fad08d8de8b0c635ee8b55c8b66053 -r ee2a140fa42a5caf3e55abdcd393bd64c34ddb9c doc/source/cookbook/calculating_information.rst
--- a/doc/source/cookbook/calculating_information.rst
+++ b/doc/source/cookbook/calculating_information.rst
@@ -57,3 +57,12 @@
serial the operation ``for pf in ts:`` would also have worked identically.
.. yt_cookbook:: time_series.py
+
+Complex Derived Fields
+~~~~~~~~~~~~~~~~~~~~~~
+
+This recipe estimates the ratio of gravitational and pressure forces in a galaxy
+cluster simulation. This shows how to create and work with vector derived
+fields.
+
+.. yt_cookbook:: hse_field.py
diff -r 4b7cb6e906fad08d8de8b0c635ee8b55c8b66053 -r ee2a140fa42a5caf3e55abdcd393bd64c34ddb9c doc/source/cookbook/hse_field.py
--- a/doc/source/cookbook/hse_field.py
+++ b/doc/source/cookbook/hse_field.py
@@ -7,8 +7,10 @@
# Define the components of the gravitational acceleration vector field by
# taking the gradient of the gravitational potential
-@yt.derived_field(name='grav_accel_x', units='cm/s**2', take_log=False)
-def grav_accel_x(field, data):
+@yt.derived_field(name='gravitational_acceleration_x',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_x(field, data):
# We need to set up stencils
@@ -22,14 +24,16 @@
gx -= data["gravitational_potential"][sl_left, 1:-1, 1:-1]/dx
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gx.uq
new_field[1:-1, 1:-1, 1:-1] = -gx
return new_field
-@yt.derived_field(name='grav_accel_y', units='cm/s**2', take_log=False)
-def grav_accel_y(field, data):
+@yt.derived_field(name='gravitational_acceleration_y',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_y(field, data):
# We need to set up stencils
@@ -43,14 +47,17 @@
gy -= data["gravitational_potential"][1:-1, sl_left, 1:-1]/dy
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gy.uq
+
new_field[1:-1, 1:-1, 1:-1] = -gy
return new_field
-@yt.derived_field(name='grav_accel_z', units='cm/s**2', take_log=False)
-def grav_accel_z(field, data):
+@yt.derived_field(name='gravitational_acceleration_z',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_z(field, data):
# We need to set up stencils
@@ -64,7 +71,7 @@
gz -= data["gravitational_potential"][1:-1, 1:-1, sl_left]/dz
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gz.uq
new_field[1:-1, 1:-1, 1:-1] = -gz
return new_field
@@ -73,7 +80,8 @@
# Define the components of the pressure gradient field
-@yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False)
+@yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_x(field, data):
# We need to set up stencils
@@ -87,13 +95,14 @@
px = data["pressure"][sl_right, 1:-1, 1:-1]/dx
px -= data["pressure"][sl_left, 1:-1, 1:-1]/dx
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.uq
new_field[1:-1, 1:-1, 1:-1] = px
return new_field
-@yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False)
+@yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_y(field, data):
# We need to set up stencils
@@ -107,13 +116,14 @@
py = data["pressure"][1:-1, sl_right, 1:-1]/dy
py -= data["pressure"][1:-1, sl_left, 1:-1]/dy
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*py.uq
new_field[1:-1, 1:-1, 1:-1] = py
return new_field
-@yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False)
+@yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_z(field, data):
# We need to set up stencils
@@ -127,7 +137,7 @@
pz = data["pressure"][1:-1, 1:-1, sl_right]/dz
pz -= data["pressure"][1:-1, 1:-1, sl_left]/dz
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*pz.uq
new_field[1:-1, 1:-1, 1:-1] = pz
return new_field
@@ -135,49 +145,29 @@
# Define the "degree of hydrostatic equilibrium" field
-@yt.derived_field(name='HSE', units=None, take_log=False)
+@yt.derived_field(name='HSE', units=None, take_log=False,
+ display_name='Hydrostatic Equilibrium')
def HSE(field, data):
- gx = data["density"]*data["Grav_Accel_x"]
- gy = data["density"]*data["Grav_Accel_y"]
- gz = data["density"]*data["Grav_Accel_z"]
+ gx = data["density"]*data["gravitational_acceleration_x"]
+ gy = data["density"]*data["gravitational_acceleration_y"]
+ gz = data["density"]*data["gravitational_acceleration_z"]
- hx = data["Grad_Pressure_x"] - gx
- hy = data["Grad_Pressure_y"] - gy
- hz = data["Grad_Pressure_z"] - gz
+ hx = data["grad_pressure_x"] - gx
+ hy = data["grad_pressure_y"] - gy
+ hz = data["grad_pressure_z"] - gz
- h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))*gx.unit_array
+ h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
return h
-# Open two files, one at the beginning and the other at a later time when
-# there's a lot of sloshing going on.
+# Open a dataset from when there's a lot of sloshing going on.
-dsi = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0000")
-dsf = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")
+ds = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")
-# Sphere objects centered at the cluster potential minimum with a radius
-# of 200 kpc
-sphere_i = dsi.sphere(dsi.domain_center, (200, "kpc"))
-sphere_f = dsf.sphere(dsf.domain_center, (200, "kpc"))
+# Take a slice through the center of the domain
+slc = yt.SlicePlot(ds, 2, ["density", "HSE"], width=(1, 'Mpc'))
-# Average "degree of hydrostatic equilibrium" in these spheres
-
-hse_i = sphere_i.quantities["WeightedAverageQuantity"]("HSE", "cell_mass")
-hse_f = sphere_f.quantities["WeightedAverageQuantity"]("HSE", "cell_mass")
-
-print "Degree of hydrostatic equilibrium initially: ", hse_i
-print "Degree of hydrostatic equilibrium later: ", hse_f
-
-# Just for good measure, take slices through the center of the domains
-# of the two files
-
-slc_i = yt.SlicePlot(dsi, 2, ["density", "HSE"], center=dsi.domain_center,
- width=(1.0, "Mpc"))
-slc_f = yt.SlicePlot(dsf, 2, ["density", "HSE"], center=dsf.domain_center,
- width=(1.0, "Mpc"))
-
-slc_i.save("initial")
-slc_f.save("final")
+slc.save("hse")
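The stencil edits above all follow the same central-difference pattern, which can be checked on a field with a known gradient. The grid size and spacing below are arbitrary:

```python
import numpy as np

dx = 0.5
div_fac = 2.0

# A potential that increases linearly along x: phi = x, so the
# acceleration -d(phi)/dx should be -1 everywhere in the interior.
x = np.arange(8) * dx
phi = np.broadcast_to(x[:, None, None], (8, 8, 8)).copy()

sl_left = slice(None, -2, None)
sl_right = slice(2, None, None)

# Central difference over the interior cells, as in the recipe.
gx = (phi[sl_right, 1:-1, 1:-1] - phi[sl_left, 1:-1, 1:-1]) / (div_fac * dx)

# Zero-padded result: the one-cell border has no stencil support,
# which is what ValidateSpatial(1, ...) accounts for.
grav_accel_x = np.zeros_like(phi)
grav_accel_x[1:-1, 1:-1, 1:-1] = -gx
```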
https://bitbucket.org/yt_analysis/yt/commits/11f216bfc793/
Changeset: 11f216bfc793
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 05:09:45
Summary: Re-adding the free_free_field recipe.
Affected #: 1 file
diff -r ee2a140fa42a5caf3e55abdcd393bd64c34ddb9c -r 11f216bfc793fccb63d26ecdb89267b406da4ff4 doc/source/cookbook/free_free_field.py
--- /dev/null
+++ b/doc/source/cookbook/free_free_field.py
@@ -0,0 +1,104 @@
+### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
+### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+
+import numpy as np
+import yt
+# Need to grab the proton mass from the constants database
+from yt.utilities.physical_constants import mp
+
+# Define the emission field
+
+keVtoerg = 1.602e-9 # Convert energy in keV to energy in erg
+KtokeV = 8.617e-08 # Convert degrees Kelvin to degrees keV
+sqrt3 = np.sqrt(3.)
+expgamma = 1.78107241799 # Exponential of Euler's constant
+
+
+def _FreeFree_Emission(field, data):
+
+ if data.has_field_parameter("Z"):
+ Z = data.get_field_parameter("Z")
+ else:
+ Z = 1.077 # Primordial H/He plasma
+
+ if data.has_field_parameter("mue"):
+ mue = data.get_field_parameter("mue")
+ else:
+ mue = 1./0.875 # Primordial H/He plasma
+
+ if data.has_field_parameter("mui"):
+ mui = data.get_field_parameter("mui")
+ else:
+ mui = 1./0.8125 # Primordial H/He plasma
+
+ if data.has_field_parameter("Ephoton"):
+ Ephoton = data.get_field_parameter("Ephoton")
+ else:
+ Ephoton = 1.0 # in keV
+
+ if data.has_field_parameter("photon_emission"):
+ photon_emission = data.get_field_parameter("photon_emission")
+ else:
+ photon_emission = False # Flag for energy or photon emission
+
+ n_e = data["density"]/(mue*mp)
+ n_i = data["density"]/(mui*mp)
+ kT = data["temperature"]*KtokeV
+
+ # Compute the Gaunt factor
+
+ g_ff = np.zeros(kT.shape)
+ g_ff[Ephoton/kT > 1.] = np.sqrt((3./np.pi)*kT[Ephoton/kT > 1.]/Ephoton)
+ g_ff[Ephoton/kT < 1.] = (sqrt3/np.pi)*np.log((4./expgamma) *
+ kT[Ephoton/kT < 1.]/Ephoton)
+
+ eps_E = 1.64e-20*Z*Z*n_e*n_i/np.sqrt(data["temperature"]) * \
+ np.exp(-Ephoton/kT)*g_ff
+
+ if photon_emission:
+ eps_E /= (Ephoton*keVtoerg)
+
+ return eps_E
+
+yt.add_field("FreeFree_Emission", function=_FreeFree_Emission)
+
+# Define the luminosity derived quantity
+def _FreeFreeLuminosity(data):
+ return (data["FreeFree_Emission"]*data["cell_volume"]).sum()
+
+
+def _combFreeFreeLuminosity(data, luminosity):
+ return luminosity.sum()
+
+yt.add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
+ combine_function=_combFreeFreeLuminosity, n_ret=1)
+
+pf = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
+
+sphere = pf.sphere(pf.domain_center, (100., "kpc"))
+
+# Print out the total luminosity at 1 keV for the sphere
+
+print "L_E (1 keV, primordial) = ", sphere.quantities["FreeFree_Luminosity"]()
+
+# The defaults for the field assume a H/He primordial plasma.
+# Let's set the appropriate parameters for a pure hydrogen plasma.
+
+sphere.set_field_parameter("mue", 1.0)
+sphere.set_field_parameter("mui", 1.0)
+sphere.set_field_parameter("Z", 1.0)
+
+print "L_E (1 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
+
+# Now let's print the luminosity at an energy of E = 10 keV
+
+sphere.set_field_parameter("Ephoton", 10.0)
+
+print "L_E (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
+
+# Finally, let's set the flag for photon emission, to get the total number
+# of photons emitted at this energy:
+
+sphere.set_field_parameter("photon_emission", True)
+
+print "L_ph (10 keV, pure hydrogen) = ", sphere.quantities["FreeFree_Luminosity"]()
https://bitbucket.org/yt_analysis/yt/commits/2f502e2aeb2d/
Changeset: 2f502e2aeb2d
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 20:52:21
Summary: Backing out 9acc670
Affected #: 1 file
diff -r 11f216bfc793fccb63d26ecdb89267b406da4ff4 -r 2f502e2aeb2d8c44e349a187662c87def7868090 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -684,12 +684,10 @@
if level_state is None: return
# We need a buffer region to allow for zones that contribute to the
# interpolation but are not directly inside our bounds
- left_edge = self.left_edge - level_state.current_dx
- right_edge = self.right_edge + level_state.current_dx
- left_edge = np.maximum(left_edge, self.pf.domain_left_edge)
- right_edge = np.minimum(right_edge, self.pf.domain_right_edge)
level_state.data_source = self.pf.region(
- self.center, left_edge, right_edge)
+ self.center,
+ self.left_edge - level_state.current_dx,
+ self.right_edge + level_state.current_dx)
level_state.data_source.min_level = level_state.current_level
level_state.data_source.max_level = level_state.current_level
https://bitbucket.org/yt_analysis/yt/commits/0475e3fd0d58/
Changeset: 0475e3fd0d58
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-26 22:54:27
Summary: Removing the flat iterator since it introduces performance bottlenecks on numpy 1.7.
Affected #: 3 files
diff -r 2f502e2aeb2d8c44e349a187662c87def7868090 -r 0475e3fd0d584c6b1c750031d44f73ab518b069b doc/source/cookbook/hse_field.py
--- a/doc/source/cookbook/hse_field.py
+++ b/doc/source/cookbook/hse_field.py
@@ -18,7 +18,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dx = div_fac * data['dx'].flat[0]
+ dx = div_fac * data['dx'][0]
gx = data["gravitational_potential"][sl_right, 1:-1, 1:-1]/dx
gx -= data["gravitational_potential"][sl_left, 1:-1, 1:-1]/dx
@@ -41,7 +41,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dy = div_fac * data['dy'].flat[0]
+ dy = div_fac * data['dy'].flatten()[0]
gy = data["gravitational_potential"][1:-1, sl_right, 1:-1]/dy
gy -= data["gravitational_potential"][1:-1, sl_left, 1:-1]/dy
@@ -65,7 +65,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dz = div_fac * data['dz'].flat[0]
+ dz = div_fac * data['dz'].flatten()[0]
gz = data["gravitational_potential"][1:-1, 1:-1, sl_right]/dz
gz -= data["gravitational_potential"][1:-1, 1:-1, sl_left]/dz
@@ -90,7 +90,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dx = div_fac * data['dx'].flat[0]
+ dx = div_fac * data['dx'].flatten()[0]
px = data["pressure"][sl_right, 1:-1, 1:-1]/dx
px -= data["pressure"][sl_left, 1:-1, 1:-1]/dx
@@ -111,7 +111,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dy = div_fac * data['dy'].flat[0]
+ dy = div_fac * data['dy'].flatten()[0]
py = data["pressure"][1:-1, sl_right, 1:-1]/dy
py -= data["pressure"][1:-1, sl_left, 1:-1]/dy
@@ -132,7 +132,7 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dz = div_fac * data['dz'].flat[0]
+ dz = div_fac * data['dz'].flatten()[0]
pz = data["pressure"][1:-1, 1:-1, sl_right]/dz
pz -= data["pressure"][1:-1, 1:-1, sl_left]/dz
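For context on the replacements above: `arr.flat` is a lazy 1-D iterator over an ndarray, while `arr.flatten()` materializes a flattened copy. For a plain ndarray both spellings pick out the same first element; the change matters here because YTArray's custom `flat` iterator (removed in this changeset) was slow on numpy 1.7. A small sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# .flat indexes into a lazy flat iterator; .flatten() builds a new 1-D copy.
first_via_flat = a.flat[0]
first_via_flatten = a.flatten()[0]
print(first_via_flat, first_via_flatten)  # 0 0

# Both walk the array in C (row-major) order, so any index agrees:
print(a.flat[4] == a.flatten()[4])  # True
```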
diff -r 2f502e2aeb2d8c44e349a187662c87def7868090 -r 0475e3fd0d584c6b1c750031d44f73ab518b069b yt/units/tests/test_ytarray.py
--- a/yt/units/tests/test_ytarray.py
+++ b/yt/units/tests/test_ytarray.py
@@ -756,16 +756,6 @@
yield assert_array_equal, yt_arr, YTArray(yt_arr.to_astropy())
yield assert_equal, yt_quan, YTQuantity(yt_quan.to_astropy())
-def test_flatiter():
- a = YTArray(np.arange(10), 'km/hr')
-
- yield assert_equal, a, a.flat[:]
- yield assert_equal, a[1:3], a.flat[1:3]
-
- yield assert_isinstance, a.flat[:], YTArray
- yield assert_isinstance, a.flat[1:3], YTArray
- yield assert_isinstance, a.flat[0], YTQuantity
-
def test_subclass():
class YTASubclass(YTArray):
diff -r 2f502e2aeb2d8c44e349a187662c87def7868090 -r 0475e3fd0d584c6b1c750031d44f73ab518b069b yt/units/yt_array.py
--- a/yt/units/yt_array.py
+++ b/yt/units/yt_array.py
@@ -176,25 +176,6 @@
fmax, fmin, copysign, nextafter, fmod,
)
-class YTArrayIterator(object):
- def __init__(self, arr):
- self.arr = arr
- self.iter = self.arr.view(np.ndarray).flat
-
- def __iter__(self):
- return self
-
- def __getitem__(self, indx):
- out = self.iter[indx]
- return out*self.arr.uq
-
- def __setitem__(self, index, value):
- self.iter[index] = value
-
- def __next__(self):
- out = next(self.iter)
- return out*self.arr.uq
-
class YTArray(np.ndarray):
"""
An ndarray subclass that attaches a symbolic unit object to the array data.
@@ -673,21 +654,6 @@
ua = unit_array
- @property
- def flat(self):
- """A 1D iterator over the YTArray.
-
- This returns a ``YTArrayIterator`` instance, which behaves the same as
- the ``~np.flatiter`` instance returned by ``~np.ndarray.flat``, and is
- similar to, but not a subclass of, Python's built-in iterator object.
- """
- return YTArrayIterator(self)
-
- @flat.setter
- def flat(self, value):
- y = self.ravel()
- y[:] = value
-
#
# Start operation methods
#
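The deleted `YTArrayIterator` wrapped `ndarray.flat` so each element came back with the array's unit reattached (as a `YTQuantity`, via `out*self.arr.uq`). A minimal sketch of that wrapping pattern, using a plain `(value, unit)` tuple instead of yt's unit machinery (the class and names here are illustrative, not yt's API):

```python
import numpy as np

class UnitFlatIter:
    """Wrap ndarray.flat so every element is returned tagged with a unit."""

    def __init__(self, values, unit):
        self._iter = np.asarray(values).flat
        self.unit = unit

    def __iter__(self):
        return self

    def __next__(self):
        # Advance the underlying flat iterator and reattach the unit.
        return (next(self._iter), self.unit)

    def __getitem__(self, idx):
        # Flat (row-major) indexing, again with the unit reattached.
        return (self._iter[idx], self.unit)

it = UnitFlatIter(np.arange(4).reshape(2, 2), "km/hr")
print(it[3])  # (3, 'km/hr')
```

The removed implementation delegated to `np.ndarray.flat` in exactly this way; the cost of one Python-level call per element is why it became a bottleneck.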
https://bitbucket.org/yt_analysis/yt/commits/15bf4f304a0e/
Changeset: 15bf4f304a0e
Branch: yt-3.0
User: jzuhone
Date: 2014-06-28 18:40:36
Summary: Merged in ngoldbaum/yt/yt-3.0 (pull request #979)
Fixing a number of docs build issues.
Affected #: 18 files
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/cookbook/calculating_information.rst
--- a/doc/source/cookbook/calculating_information.rst
+++ b/doc/source/cookbook/calculating_information.rst
@@ -57,3 +57,12 @@
serial the operation ``for pf in ts:`` would also have worked identically.
.. yt_cookbook:: time_series.py
+
+Complex Derived Fields
+~~~~~~~~~~~~~~~~~~~~~~
+
+This recipe estimates the ratio of gravitational and pressure forces in a galaxy
+cluster simulation. This shows how to create and work with vector derived
+fields.
+
+.. yt_cookbook:: hse_field.py
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -36,7 +36,7 @@
axes. To focus on what's happening in the x-y plane, we make an additional
Temperature slice for the bottom-right subpanel.
-.. yt-cookbook:: multiplot_2x2_coordaxes_slice.py
+.. yt_cookbook:: multiplot_2x2_coordaxes_slice.py
Multi-Plot Slice and Projections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/cookbook/fits_xray_images.rst
--- a/doc/source/cookbook/fits_xray_images.rst
+++ b/doc/source/cookbook/fits_xray_images.rst
@@ -1,6 +1,6 @@
.. _xray_fits:
FITS X-ray Images in yt
-----------------------
+-----------------------
-.. notebook:: fits_xray_images.ipynb
\ No newline at end of file
+.. notebook:: fits_xray_images.ipynb
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/cookbook/hse_field.py
--- a/doc/source/cookbook/hse_field.py
+++ b/doc/source/cookbook/hse_field.py
@@ -7,8 +7,10 @@
# Define the components of the gravitational acceleration vector field by
# taking the gradient of the gravitational potential
- at yt.derived_field(name='grav_accel_x', units='cm/s**2', take_log=False)
-def grav_accel_x(field, data):
+ at yt.derived_field(name='gravitational_acceleration_x',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_x(field, data):
# We need to set up stencils
@@ -16,20 +18,22 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dx = div_fac * data['dx'].flat[0]
+ dx = div_fac * data['dx'][0]
gx = data["gravitational_potential"][sl_right, 1:-1, 1:-1]/dx
gx -= data["gravitational_potential"][sl_left, 1:-1, 1:-1]/dx
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gx.uq
new_field[1:-1, 1:-1, 1:-1] = -gx
return new_field
- at yt.derived_field(name='grav_accel_y', units='cm/s**2', take_log=False)
-def grav_accel_y(field, data):
+ at yt.derived_field(name='gravitational_acceleration_y',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_y(field, data):
# We need to set up stencils
@@ -37,20 +41,23 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dy = div_fac * data['dy'].flat[0]
+ dy = div_fac * data['dy'].flatten()[0]
gy = data["gravitational_potential"][1:-1, sl_right, 1:-1]/dy
gy -= data["gravitational_potential"][1:-1, sl_left, 1:-1]/dy
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gy.uq
+
new_field[1:-1, 1:-1, 1:-1] = -gy
return new_field
- at yt.derived_field(name='grav_accel_z', units='cm/s**2', take_log=False)
-def grav_accel_z(field, data):
+ at yt.derived_field(name='gravitational_acceleration_z',
+ units='cm/s**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["gravitational_potential"])])
+def gravitational_acceleration_z(field, data):
# We need to set up stencils
@@ -58,13 +65,13 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dz = div_fac * data['dz'].flat[0]
+ dz = div_fac * data['dz'].flatten()[0]
gz = data["gravitational_potential"][1:-1, 1:-1, sl_right]/dz
gz -= data["gravitational_potential"][1:-1, 1:-1, sl_left]/dz
new_field = np.zeros(data["gravitational_potential"].shape,
- dtype='float64')*gx.unit_array
+ dtype='float64')*gz.uq
new_field[1:-1, 1:-1, 1:-1] = -gz
return new_field
@@ -73,7 +80,8 @@
# Define the components of the pressure gradient field
- at yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False)
+ at yt.derived_field(name='grad_pressure_x', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_x(field, data):
# We need to set up stencils
@@ -82,18 +90,19 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dx = div_fac * data['dx'].flat[0]
+ dx = div_fac * data['dx'].flatten()[0]
px = data["pressure"][sl_right, 1:-1, 1:-1]/dx
px -= data["pressure"][sl_left, 1:-1, 1:-1]/dx
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.uq
new_field[1:-1, 1:-1, 1:-1] = px
return new_field
- at yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False)
+ at yt.derived_field(name='grad_pressure_y', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_y(field, data):
# We need to set up stencils
@@ -102,18 +111,19 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dy = div_fac * data['dy'].flat[0]
+ dy = div_fac * data['dy'].flatten()[0]
py = data["pressure"][1:-1, sl_right, 1:-1]/dy
py -= data["pressure"][1:-1, sl_left, 1:-1]/dy
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*py.uq
new_field[1:-1, 1:-1, 1:-1] = py
return new_field
- at yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False)
+ at yt.derived_field(name='grad_pressure_z', units='g/(cm*s)**2', take_log=False,
+ validators=[yt.ValidateSpatial(1,["pressure"])])
def grad_pressure_z(field, data):
# We need to set up stencils
@@ -122,12 +132,12 @@
sl_right = slice(2, None, None)
div_fac = 2.0
- dz = div_fac * data['dz'].flat[0]
+ dz = div_fac * data['dz'].flatten()[0]
pz = data["pressure"][1:-1, 1:-1, sl_right]/dz
pz -= data["pressure"][1:-1, 1:-1, sl_left]/dz
- new_field = np.zeros(data["pressure"].shape, dtype='float64')*px.unit_array
+ new_field = np.zeros(data["pressure"].shape, dtype='float64')*pz.uq
new_field[1:-1, 1:-1, 1:-1] = pz
return new_field
@@ -135,49 +145,29 @@
# Define the "degree of hydrostatic equilibrium" field
- at yt.derived_field(name='HSE', units=None, take_log=False)
+ at yt.derived_field(name='HSE', units=None, take_log=False,
+ display_name='Hydrostatic Equilibrium')
def HSE(field, data):
- gx = data["density"]*data["Grav_Accel_x"]
- gy = data["density"]*data["Grav_Accel_y"]
- gz = data["density"]*data["Grav_Accel_z"]
+ gx = data["density"]*data["gravitational_acceleration_x"]
+ gy = data["density"]*data["gravitational_acceleration_y"]
+ gz = data["density"]*data["gravitational_acceleration_z"]
- hx = data["Grad_Pressure_x"] - gx
- hy = data["Grad_Pressure_y"] - gy
- hz = data["Grad_Pressure_z"] - gz
+ hx = data["grad_pressure_x"] - gx
+ hy = data["grad_pressure_y"] - gy
+ hz = data["grad_pressure_z"] - gz
- h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))*gx.unit_array
+ h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
return h
-# Open two files, one at the beginning and the other at a later time when
-# there's a lot of sloshing going on.
+# Open a dataset from when there's a lot of sloshing going on.
-dsi = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0000")
-dsf = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")
+ds = yt.load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")
-# Sphere objects centered at the cluster potential minimum with a radius
-# of 200 kpc
-sphere_i = dsi.sphere(dsi.domain_center, (200, "kpc"))
-sphere_f = dsf.sphere(dsf.domain_center, (200, "kpc"))
+# Take a slice through the center of the domain
+slc = yt.SlicePlot(ds, 2, ["density", "HSE"], width=(1, 'Mpc'))
-# Average "degree of hydrostatic equilibrium" in these spheres
-
-hse_i = sphere_i.quantities["WeightedAverageQuantity"]("HSE", "cell_mass")
-hse_f = sphere_f.quantities["WeightedAverageQuantity"]("HSE", "cell_mass")
-
-print "Degree of hydrostatic equilibrium initially: ", hse_i
-print "Degree of hydrostatic equilibrium later: ", hse_f
-
-# Just for good measure, take slices through the center of the domains
-# of the two files
-
-slc_i = yt.SlicePlot(dsi, 2, ["density", "HSE"], center=dsi.domain_center,
- width=(1.0, "Mpc"))
-slc_f = yt.SlicePlot(dsf, 2, ["density", "HSE"], center=dsf.domain_center,
- width=(1.0, "Mpc"))
-
-slc_i.save("initial")
-slc_f.save("final")
+slc.save("hse")
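Each gradient component in the recipe above uses the same second-order central-difference stencil (the `sl_left`/`sl_right` slices with `div_fac = 2.0`). A self-contained sketch of that stencil, checked against a linear potential where the exact gradient is 1 everywhere in the interior:

```python
import numpy as np

def central_gradient_x(phi, dx):
    """Central difference d(phi)/dx on the interior of a 3D array."""
    sl_left = slice(None, -2, None)
    sl_right = slice(2, None, None)
    grad = np.zeros_like(phi)
    grad[1:-1, 1:-1, 1:-1] = (phi[sl_right, 1:-1, 1:-1]
                              - phi[sl_left, 1:-1, 1:-1]) / (2.0 * dx)
    return grad

# For phi = x (linear in x), the interior gradient should be exactly 1.
x = np.linspace(0.0, 1.0, 8)
phi = np.broadcast_to(x[:, None, None], (8, 8, 8)).copy()
dx = x[1] - x[0]
g = central_gradient_x(phi, dx)
print(np.allclose(g[1:-1, 1:-1, 1:-1], 1.0))  # True
```

The one-cell border left at zero is why the derived fields in the recipe need `yt.ValidateSpatial(1, ...)`: a ghost zone of width one supplies the neighbors the stencil consumes.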
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -51,7 +51,7 @@
If you are developing new functionality, it is sometimes more convenient to use
the Nose command line interface, ``nosetests``. You can run the unit tests
-using `no`qsetets` by navigating to the base directory of the yt mercurial
+using ``nose`` by navigating to the base directory of the yt mercurial
repository and invoking ``nosetests``:
.. code-block:: bash
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:cd145d8cadbf1a0065d0f9fb4ea107c215fcd53245b3bb7d29303af46f063552"
+ "signature": "sha256:5fc7783d6c99659c353a35348bb21210fcb7572d5357f32dd61755d4a7f8fe6c"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -443,7 +443,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits.gz\")\n",
+ "f = pyfits.open(data_dir+\"/UnigridData/velocity_field_20.fits\")\n",
"f.info()"
],
"language": "python",
@@ -462,7 +462,7 @@
"collapsed": false,
"input": [
"data = {}\n",
- "for hdu in f[1:]:\n",
+ "for hdu in f:\n",
" name = hdu.name.lower()\n",
" data[name] = (hdu.data,\"km/s\")\n",
"print data.keys()"
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -711,11 +711,13 @@
``spectral_factor``
~~~~~~~~~~~~~~~~~~~
-Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt sets the pixel
-scale as the ``code_length``, certain visualizations (such as volume renderings) may look extended
-or distended in ways that are undesirable. To adjust the width in ``code_length`` of the spectral
- axis, set ``spectral_factor`` equal to a constant which gives the desired scaling,
- or set it to ``"auto"`` to make the width the same as the largest axis in the sky plane.
+Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt
+sets the pixel scale as the ``code_length``, certain visualizations (such as
+volume renderings) may look extended or distended in ways that are
+undesirable. To adjust the width in ``code_length`` of the spectral axis, set
+``spectral_factor`` equal to a constant which gives the desired scaling, or set
+it to ``"auto"`` to make the width the same as the largest axis in the sky
+plane.
Miscellaneous Tools for Use with FITS Data
++++++++++++++++++++++++++++++++++++++++++
@@ -792,11 +794,11 @@
PyNE Data
---------
-.. _loading-numpy-array:
-
Generic Array Data
------------------
+See :ref:`loading-numpy-array` for more detail.
+
Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
``yt`` to use its advanced visualization and analysis facilities. The only
@@ -848,6 +850,8 @@
Generic AMR Data
----------------
+See :ref:`loading-numpy-array` for more detail.
+
It is possible to create native ``yt`` parameter file from Python's dictionary
that describes set of rectangular patches of data of possibly varying
resolution.
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/visualizing/_cb_docstrings.inc
--- a/doc/source/visualizing/_cb_docstrings.inc
+++ b/doc/source/visualizing/_cb_docstrings.inc
@@ -120,6 +120,8 @@
.. python-script::
from yt.mods import *
+ from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog
+
data_pf = load('Enzo_64/RD0006/RedshiftOutput0006')
halos_pf = load('rockstar_halos/halos_0.0.bin')
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/visualizing/_images/mapserver.png
Binary file doc/source/visualizing/_images/mapserver.png has changed
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb doc/source/visualizing/volume_rendering.rst
--- a/doc/source/visualizing/volume_rendering.rst
+++ b/doc/source/visualizing/volume_rendering.rst
@@ -478,8 +478,7 @@
:ref:`cookbook-amrkdtree_to_uniformgrid`.
System Requirements
--------------------
-.. versionadded:: 3.0
++++++++++++++++++++
Nvidia graphics card - The memory limit of the graphics card sets the limit
on the size of the data source.
@@ -490,7 +489,7 @@
the common/inc samples shipped with CUDA. The following shows an example
in bash with CUDA 5.5 installed in /usr/local :
-export CUDA_SAMPLES=/usr/local/cuda-5.5/samples/common/inc
+ export CUDA_SAMPLES=/usr/local/cuda-5.5/samples/common/inc
PyCUDA must also be installed to use Theia.
@@ -503,13 +502,13 @@
Tutorial
---------
-.. versionadded:: 3.0
+++++++++
Currently rendering only works on uniform grids. Here is an example
on a 1024 cube of float32 scalars.
.. code-block:: python
+
from yt.visualization.volume_rendering.theia.scene import TheiaScene
from yt.visualization.volume_rendering.algorithms.front_to_back import FrontToBackRaycaster
import numpy as np
@@ -528,28 +527,27 @@
.. _the-theiascene-interface:
The TheiaScene Interface
---------------------
-.. versionadded:: 3.0
+++++++++++++++++++++++++
A TheiaScene object has been created to provide a high level entry point for
-controlling the raycaster's view onto the data. The class
-:class:`~yt.visualization.volume_rendering.theia.TheiaScene` encapsulates
- a Camera object and a TheiaSource that intern encapsulates
-a volume. The :class:`~yt.visualization.volume_rendering.theia.Camera`
-provides controls for rotating, translating, and zooming into the volume.
-Using the :class:`~yt.visualization.volume_rendering.theia.TheiaSource`
-automatically transfers the volume to the graphic's card texture memory.
+controlling the raycaster's view onto the data. The class
+:class:`~yt.visualization.volume_rendering.theia.TheiaScene` encapsulates a
+Camera object and a TheiaSource that intern encapsulates a volume. The
+:class:`~yt.visualization.volume_rendering.theia.Camera` provides controls for
+rotating, translating, and zooming into the volume. Using the
+:class:`~yt.visualization.volume_rendering.theia.TheiaSource` automatically
+transfers the volume to the graphic's card texture memory.
Example Cookbooks
----------------
++++++++++++++++++
OpenGL Example for interactive volume rendering:
:ref:`cookbook-opengl_volume_rendering`.
-OpenGL Stereoscopic Example :
.. warning:: Frame rate will suffer significantly from stereoscopic rendering.
~2x slower since the volume must be rendered twice.
-:ref:`cookbook-opengl_stereo_volume_rendering`.
+
+OpenGL Stereoscopic Example: :ref:`cookbook-opengl_stereo_volume_rendering`.
Pseudo-Realtime video rendering with ffmpeg :
:ref:`cookbook-ffmpeg_volume_rendering`.
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb yt/analysis_modules/particle_trajectories/particle_trajectories.py
--- a/yt/analysis_modules/particle_trajectories/particle_trajectories.py
+++ b/yt/analysis_modules/particle_trajectories/particle_trajectories.py
@@ -201,7 +201,8 @@
if self.suppress_logging:
old_level = int(ytcfg.get("yt","loglevel"))
mylog.setLevel(40)
- dd_first = self.data_series[0].all_data()
+ ds_first = self.data_series[0]
+ dd_first = ds_first.all_data()
fd = dd_first._determine_fields(field)[0]
if field not in self.particle_fields:
if self.data_series[0].field_info[fd].particle_type:
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb yt/analysis_modules/ppv_cube/ppv_cube.py
--- a/yt/analysis_modules/ppv_cube/ppv_cube.py
+++ b/yt/analysis_modules/ppv_cube/ppv_cube.py
@@ -156,7 +156,8 @@
def _create_intensity(self, i):
def _intensity(field, data):
- w = np.abs(data["v_los"]-self.vmid[i])/self.dv
+ vlos = data["v_los"]
+ w = np.abs(vlos-self.vmid[i])/self.dv.in_units(vlos.units)
w = 1.-w
w[w < 0.0] = 0.0
return data[self.field]*w
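The `_intensity` fix above converts `self.dv` into the units of `v_los` before dividing, so the weight stays dimensionless. Stripped of yt's unit handling, the weight itself is a triangular line profile; a minimal sketch with plain floats:

```python
import numpy as np

def intensity_weight(v_los, v_mid, dv):
    """Triangular weight: 1 at v_mid, falling to 0 one channel width away."""
    w = 1.0 - np.abs(v_los - v_mid) / dv
    w[w < 0.0] = 0.0
    return w

v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(intensity_weight(v, 0.0, 1.0))  # [0.  0.5 1.  0.5 0. ]
```

Before the fix, dividing a velocity by a `dv` carried in different units would scale this weight incorrectly rather than leaving it dimensionless.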
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb yt/frontends/flash/data_structures.py
--- a/yt/frontends/flash/data_structures.py
+++ b/yt/frontends/flash/data_structures.py
@@ -283,7 +283,8 @@
else :
pval = val
if vn in self.parameters and self.parameters[vn] != pval:
- mylog.warning("{0} {1} overwrites a simulation scalar of the same name".format(hn[:-1],vn))
+ mylog.info("{0} {1} overwrites a simulation "
+ "scalar of the same name".format(hn[:-1],vn))
self.parameters[vn] = pval
if self._flash_version == 7:
for hn in hns:
@@ -300,7 +301,8 @@
else :
pval = val
if vn in self.parameters and self.parameters[vn] != pval:
- mylog.warning("{0} {1} overwrites a simulation scalar of the same name".format(hn[:-1],vn))
+ mylog.info("{0} {1} overwrites a simulation "
+ "scalar of the same name".format(hn[:-1],vn))
self.parameters[vn] = pval
# Determine block size
@@ -363,7 +365,7 @@
try:
self.gamma = self.parameters["gamma"]
except:
- mylog.warning("Cannot find Gamma")
+ mylog.info("Cannot find Gamma")
pass
# Get the simulation time
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb yt/units/tests/test_ytarray.py
--- a/yt/units/tests/test_ytarray.py
+++ b/yt/units/tests/test_ytarray.py
@@ -756,7 +756,6 @@
yield assert_array_equal, yt_arr, YTArray(yt_arr.to_astropy())
yield assert_equal, yt_quan, YTQuantity(yt_quan.to_astropy())
-
def test_subclass():
class YTASubclass(YTArray):
diff -r dc6a369b263c68adaf02af5d14effa934aed93d8 -r 15bf4f304a0e8b46658a6a7a1b887e4cc60722bb yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -501,7 +501,7 @@
streamplot_args = {'x': X, 'y': Y, 'u':pixX, 'v': pixY,
'density': self.dens}
streamplot_args.update(self.plot_args)
- plot._axes.streamplot(**self.streamplot_args)
+ plot._axes.streamplot(**streamplot_args)
plot._axes.set_xlim(xx0,xx1)
plot._axes.set_ylim(yy0,yy1)
plot._axes.hold(False)
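The `plot_modifications.py` fix above is a one-character class of bug: the kwargs dict is a local variable, but the call unpacked a nonexistent `self.streamplot_args` attribute. The underlying pattern (defaults dict, user overrides, then `**` unpacking) looks like this, with hypothetical default values for illustration:

```python
# Build plotting kwargs: start from defaults, let user-supplied
# plot_args override them, then unpack the *local* dict into the call.
defaults = {'density': 1, 'color': 'k'}
plot_args = {'color': 'r'}          # user override

streamplot_args = dict(defaults)
streamplot_args.update(plot_args)
print(streamplot_args)  # {'density': 1, 'color': 'r'}

# The fixed call references the local name:
#   plot._axes.streamplot(**streamplot_args)
# not self.streamplot_args, which was never set as an attribute.
```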
Repository URL: https://bitbucket.org/yt_analysis/yt/