[yt-svn] commit/yt: MatthewTurk: Merged in chummels/yt/yt-3.0 (pull request #1029)
commits-noreply at bitbucket.org
Sun Jul 20 07:18:17 PDT 2014
1 new commit in yt:
https://bitbucket.org/yt_analysis/yt/commits/43ba03d90988/
Changeset: 43ba03d90988
Branch: yt-3.0
User: MatthewTurk
Date: 2014-07-20 16:18:10
Summary: Merged in chummels/yt/yt-3.0 (pull request #1029)
A few cookbook updates
Affected #: 5 files
diff -r 4e16b4a309f6a7ec765af203f815ec10f7604bb9 -r 43ba03d9098851c7018b796410a73ae60e376d3d doc/source/analyzing/units/data_selection_and_fields.rst
--- a/doc/source/analyzing/units/data_selection_and_fields.rst
+++ b/doc/source/analyzing/units/data_selection_and_fields.rst
@@ -11,31 +11,28 @@
.. This needs to be added outside the notebook since user-defined derived fields
require a 'fresh' kernel.
-.. warning:: Note: derived field definitions need to happen *before* a dataset
- is loaded. This means changes to the following cells will only be
- picked up on a fresh kernel. Select Kernel -> Restart on the
- IPython menu bar to restart the kernel.
-
-New derived fields can be added just like in old vesions of yt. The most
-straightforward way to do this is to apply the `derived_field` decorator on a
-function that defines a field.
-
The following example creates a derived field for the square root of the cell
volume.
.. notebook-cell::
- from yt.mods import *
+ import yt
import numpy as np
- @derived_field(name='root_cell_volume', units='cm**(3/2)')
+ # Function defining the derived field
def root_cell_volume(field, data):
- return np.sqrt(data['cell_volume'])
+ return np.sqrt(data['cell_volume'])
- ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
+ # Load the dataset
+ ds = yt.load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.all_data()
- dd['root_cell_volume']
+ # Add the field to the dataset, linking to the derived field function and
+ # units of the field
+ ds.add_field(("gas", "root_cell_volume"), units="cm**(3/2)", function=root_cell_volume)
+
+ # Access the derived field like any other field
+ ad = ds.all_data()
+ ad['root_cell_volume']
No special unit logic needs to happen inside of the function - `np.sqrt` will
convert the units of the `cell_volume` field appropriately:
@@ -43,17 +40,17 @@
.. notebook-cell::
:skip_exceptions:
- from yt.mods import *
+ import yt
import numpy as np
- ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.all_data()
+ ds = yt.load('HiresIsolatedGalaxy/DD0044/DD0044')
+ ad = ds.all_data()
- print dd['cell_volume'].in_cgs()
- print np.sqrt(dd['cell_volume'].in_cgs())
+ print ad['cell_volume'].in_cgs()
+ print np.sqrt(ad['cell_volume'].in_cgs())
That said, it is necessary to specify the units in the call to the
-:code:`@derived_field` decorator. Not only does this ensure the returned units
+:code:`add_field` function. Not only does this ensure the returned units
will be exactly what you expect, it also allows an in-place conversion of units,
just in case the function returns a field with dimensionally equivalent units.
@@ -62,13 +59,16 @@
.. notebook-cell::
- from yt.mods import *
+ import yt
+ import numpy as np
- @derived_field(name='root_cell_volume', units='Mpc**(3/2)')
def root_cell_volume(field, data):
- return np.sqrt(data['cell_volume'])
+ return np.sqrt(data['cell_volume'])
- ds = load('HiresIsolatedGalaxy/DD0044/DD0044')
+ ds = yt.load('HiresIsolatedGalaxy/DD0044/DD0044')
- dd = ds.all_data()
- dd['root_cell_volume']
+ # Here we set the default units to Mpc^(3/2)
+ ds.add_field(("gas", "root_cell_volume"), units="Mpc**(3/2)", function=root_cell_volume)
+
+ ad = ds.all_data()
+ ad['root_cell_volume']
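As an aside on the unit behavior the docs above describe (`np.sqrt` turning `cm**3` into `cm**(3/2)`), the mechanism can be sketched without yt. The `Quantity` class below is a hypothetical stand-in, not the real yt/unyt API; it only illustrates why a square root halves each unit exponent:

```python
import math

# Hypothetical toy quantity: a value plus a dict of unit exponents,
# e.g. {"cm": 3} for cm**3. Illustration only - not the yt API.
class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)

    def sqrt(self):
        # A square root halves every unit exponent, which is the
        # behavior np.sqrt has on a yt field's units.
        return Quantity(math.sqrt(self.value),
                        {u: e / 2.0 for u, e in self.units.items()})

cell_volume = Quantity(4.0, {"cm": 3})   # stands in for 4 cm**3
root = cell_volume.sqrt()
print(root.units)   # {'cm': 1.5}
```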
diff -r 4e16b4a309f6a7ec765af203f815ec10f7604bb9 -r 43ba03d9098851c7018b796410a73ae60e376d3d doc/source/cookbook/calculating_information.rst
--- a/doc/source/cookbook/calculating_information.rst
+++ b/doc/source/cookbook/calculating_information.rst
@@ -58,6 +58,14 @@
.. yt_cookbook:: time_series.py
+Simple Derived Fields
+~~~~~~~~~~~~~~~~~~~~~
+
+This recipe demonstrates how to create a simple derived field,
+thermal_energy_density, and then generate a projection from it.
+
+.. yt_cookbook:: derived_field.py
+
Complex Derived Fields
~~~~~~~~~~~~~~~~~~~~~~
diff -r 4e16b4a309f6a7ec765af203f815ec10f7604bb9 -r 43ba03d9098851c7018b796410a73ae60e376d3d doc/source/cookbook/derived_field.py
--- /dev/null
+++ b/doc/source/cookbook/derived_field.py
@@ -0,0 +1,23 @@
+import yt
+
+# Load the dataset.
+ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
+
+# You can create a derived field by manipulating any existing derived fields
+# in any way you choose. In this case, let's just make a simple one:
+# thermal_energy_density = 3/2 nkT
+
+# First create a function which yields your new derived field
+def thermal_energy_dens(field, data):
+ return (3./2.)*data['gas', 'number_density'] * data['gas', 'kT']
+
+# Then add it to your dataset and define the units
+ds.add_field(("gas", "thermal_energy_density"), units="erg/cm**3", function=thermal_energy_dens)
+
+# It will now show up in your derived_field_list
+for i in sorted(ds.derived_field_list):
+ print i
+
+# Let's use it to make a projection
+ad = ds.all_data()
+yt.ProjectionPlot(ds, "x", "thermal_energy_density", weight_field="density", width=(200, 'kpc')).save()
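One subtlety worth flagging in the thermal energy prefactor: under Python 2, which these recipes target, `3/2` is integer division and truncates to `1`, silently changing the result. A minimal sketch of the portable spelling:

```python
from __future__ import division  # gives / true-division semantics on Python 2

# Without the import above, Python 2 evaluates 3/2 as 1 (truncation),
# so a prefactor written as (3/2) would silently become 1.
# An explicit float literal is safe on both Python 2 and 3:
prefactor = 3.0 / 2.0
print(prefactor)   # 1.5
```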
diff -r 4e16b4a309f6a7ec765af203f815ec10f7604bb9 -r 43ba03d9098851c7018b796410a73ae60e376d3d doc/source/cookbook/simulation_analysis.py
--- a/doc/source/cookbook/simulation_analysis.py
+++ b/doc/source/cookbook/simulation_analysis.py
@@ -2,21 +2,27 @@
yt.enable_parallelism()
import collections
-# Instantiate a time series object for an Enzo simulation..
-sim = yt.simulation('enzo_tiny_cosmology/32Mpc_32.enzo', 'Enzo')
+# Note: yt.enable_parallelism() above assumes the script was called with
+# `mpirun -np <n_procs>`
-# Get a time series for all data made by the simulation.
-sim.get_time_series()
+# By using wildcards such as ? and * with the load command, we can load up a
+# Time Series containing all of these datasets simultaneously.
+ts = yt.load('enzo_tiny_cosmology/DD????/DD????')
-# Calculate and store extrema for all datasets along with redshift
+# Calculate and store density extrema for all datasets along with redshift
# in a data dictionary with entries as tuples
-# Note that by using sim.piter(), we are automatically
-# forcing yt to do this in parallel
+# Create an empty dictionary
data = {}
-for ds in sim.piter():
+
+# Iterate through each dataset in the Time Series (using piter allows it
+# to happen in parallel automatically across available processors)
+for ds in ts.piter():
ad = ds.all_data()
extrema = ad.quantities.extrema('density')
+
+ # Fill the dictionary with extrema and redshift information for each dataset
data[ds.basename] = (extrema, ds.current_redshift)
# Convert dictionary to ordered dictionary to get the right order
@@ -25,5 +31,6 @@
# Print out all the values we calculated.
print "Dataset Redshift Density Min Density Max"
print "---------------------------------------------------------"
-for k, v in od.iteritems():
- print "%s %05.3f %5.3g g/cm^3 %5.3g g/cm^3" % (k, v[1], v[0][0], v[0][1])
+for key, val in od.iteritems():
+ print "%s %05.3f %5.3g g/cm^3 %5.3g g/cm^3" % \
+ (key, val[1], val[0][0], val[0][1])
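The dictionary-to-ordered-table step in the recipe above can be sketched standalone; the dataset names and extrema/redshift numbers below are made-up placeholders, not computed values:

```python
import collections

# Placeholder ((min, max), redshift) entries standing in for the
# quantities the recipe computes with ad.quantities.extrema().
data = {
    "DD0002": ((1.1e-30, 4.5e-27), 2.0),
    "DD0000": ((1.0e-30, 5.0e-27), 3.0),
    "DD0001": ((1.2e-30, 4.0e-27), 2.5),
}

# Sorting the items before building the OrderedDict produces the
# "right order" the recipe's comment refers to.
od = collections.OrderedDict(sorted(data.items()))

print("Dataset  Redshift  Density Min     Density Max")
for key, (extrema, redshift) in od.items():
    print("%s   %05.3f    %5.3g g/cm^3  %5.3g g/cm^3"
          % (key, redshift, extrema[0], extrema[1]))
```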
diff -r 4e16b4a309f6a7ec765af203f815ec10f7604bb9 -r 43ba03d9098851c7018b796410a73ae60e376d3d doc/source/cookbook/time_series.py
--- a/doc/source/cookbook/time_series.py
+++ b/doc/source/cookbook/time_series.py
@@ -1,37 +1,40 @@
import yt
-import glob
import matplotlib.pyplot as plt
+import numpy as np
-# Glob for a list of filenames, then sort them
-fns = glob.glob("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0*")
-fns.sort()
+# Enable parallelism in the script (assuming it was called with
+# `mpirun -np <n_procs>` )
+yt.enable_parallelism()
-# Construct the time series object
-ts = yt.DatasetSeries.from_filenames(fns)
+# By using wildcards such as ? and * with the load command, we can load up a
+# Time Series containing all of these datasets simultaneously.
+ts = yt.load('GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0*')
storage = {}
-# We use the piter() method here so that this can be run in parallel.
-# Alternately, you could just iterate "for ds in ts:" and directly append to
-# times and entrs.
-for sto, ds in ts.piter(storage=storage):
+# By using the piter() function, we can iterate on every dataset in
+# the TimeSeries object. By using the storage keyword, we can populate
+# a dictionary where the dataset is the key, and sto.result is the value
+# for later use when the loop is complete.
+
+# The serial equivalent of piter() here is just "for ds in ts:" .
+
+for store, ds in ts.piter(storage=storage):
+
+ # Create a sphere of radius 100 kpc at the center of the dataset volume
sphere = ds.sphere("c", (100., "kpc"))
+ # Calculate the entropy within that sphere
entr = sphere["entropy"].sum()
- sto.result = (ds.current_time.in_units('Gyr'), entr)
+ # Store the current time and sphere entropy for this dataset in our
+ # storage dictionary as a tuple
+ store.result = (ds.current_time.in_units('Gyr'), entr)
+# Convert the storage dictionary values to an Nx2 array, so they can be
+# easily plotted
+arr = np.array(storage.values())
-# Store these values in a couple of lists
-times = []
-entrs = []
-for k in storage:
- t, e = storage[k]
- times.append(t)
- entrs.append(e)
-
-
-# Plot up the results
-
-plt.semilogy(times, entrs, '-')
+# Plot up the results: time versus entropy
+plt.semilogy(arr[:,0], arr[:,1], 'r-')
plt.xlabel("Time (Gyr)")
plt.ylabel("Entropy (ergs/K)")
plt.savefig("time_versus_entropy.png")
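The storage-dict-to-array step in the recipe above is worth a standalone sketch, since on Python 3 `dict.values()` returns a view and must be wrapped in `list()` before `np.array` will build the Nx2 array. The (time, entropy) pairs below are placeholders, not real values:

```python
import numpy as np

# Placeholder (time_in_Gyr, entropy) pairs standing in for the
# sto.result tuples piter() collects in the recipe above.
storage = {0: (0.0, 1.0e67), 1: (0.5, 1.2e67), 2: (1.0, 1.5e67)}

# On Python 3, wrap the values view in list() before converting;
# the result is an Nx2 array: column 0 is time, column 1 is entropy.
arr = np.array(list(storage.values()))
print(arr.shape)   # (3, 2)
```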
Repository URL: https://bitbucket.org/yt_analysis/yt/