[yt-svn] commit/yt: ngoldbaum: Merged in jzuhone/yt (pull request #1844)

commits-noreply at bitbucket.org
Tue Nov 10 11:50:13 PST 2015


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/3547c2e2eca0/
Changeset:   3547c2e2eca0
Branch:      yt
User:        ngoldbaum
Date:        2015-11-10 19:49:51+00:00
Summary:     Merged in jzuhone/yt (pull request #1844)

Unit and answer tests for photon_simulator
Affected #:  11 files

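Since this changeset adds both a unit test (``test_beta_model.py``) and an answer
test (``test_sloshing.py``), here is a minimal sketch of running the new unit test
locally. It assumes the nose runner yt used at the time, a working PyXspec
installation, and an ``xray_data_dir`` pointing at the auxiliary data (see the
``yt/config.py`` hunk below); the invocation is illustrative, not part of the commit.

.. code:: python

    # Minimal sketch: run the new photon_simulator unit test with nose.
    # Assumes xspec/PyXspec is importable and xray_data_dir is configured.
    import nose

    nose.run(argv=["nosetests",
                   "yt.analysis_modules.photon_simulator.tests.test_beta_model"])
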
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -1,3 +1,5 @@
+.. _photon_simulator:
+
 Constructing Mock X-ray Observations
 ------------------------------------
 
@@ -98,9 +100,8 @@
    `AtomDB <http://www.atomdb.org>`_ and get the files from the
    `xray_data <http://yt-project.org/data/xray_data.tar.gz>`_ auxiliary
    data package (see the ``xray_data`` `README <xray_data_README.html>`_ 
-   for details on the latter). Make sure that
-   in what follows you specify the full path to the locations of these
-   files.
+   for details on the latter). Make sure that in what follows you 
+   specify the full path to the locations of these files.
 
 To generate photons from this dataset, we have several different things
 we need to set up. The first is a standard yt data object. It could
@@ -197,7 +198,7 @@
 
 .. code:: python
 
-    A = 6000.
+    A = 3000.
     exp_time = 4.0e5
     redshift = 0.05
     cosmo = Cosmology()
@@ -298,7 +299,7 @@
 
 The second option, ``TableAbsorbModel``, takes as input an HDF5 file
 containing two datasets, ``"energy"`` (in keV), and ``"cross_section"``
-(in cm2), and the Galactic column density :math:`N_H`:
+(in :math:`cm^2`), and the Galactic column density :math:`N_H`:
 
 .. code:: python
 
@@ -307,7 +308,7 @@
 Now we're ready to project the photons. First, we choose a line-of-sight
 vector ``normal``. Second, we'll adjust the exposure time and the redshift.
 Third, we'll pass in the absorption ``SpectrumModel``. Fourth, we'll
-specify a ``sky_center`` in RA,DEC on the sky in degrees.
+specify a ``sky_center`` in RA and DEC on the sky in degrees.
 
 Also, we're going to convolve the photons with instrument ``responses``.
 For this, you need an ARF/RMF pair with matching energy bins. This is of
@@ -322,8 +323,8 @@
 
 .. code:: python
 
-    ARF = "chandra_ACIS-S3_onaxis_arf.fits"
-    RMF = "chandra_ACIS-S3_onaxis_rmf.fits"
+    ARF = "acisi_aimpt_cy17.arf"
+    RMF = "acisi_aimpt_cy17.rmf"
     normal = [0.0,0.0,1.0]
     events = photons.project_photons(normal, exp_time_new=2.0e5, redshift_new=0.07, dist_new=None, 
                                      absorb_model=abs_model, sky_center=(187.5,12.333), responses=[ARF,RMF], 
@@ -540,7 +541,7 @@
 
    sphere = ds.sphere("c", (1.0,"Mpc"))
        
-   A = 6000.
+   A = 3000.
    exp_time = 2.0e5
    redshift = 0.05
    cosmo = Cosmology()
@@ -555,7 +556,8 @@
 
 
    events = photons.project_photons([0.0,0.0,1.0], 
-                                    responses=["sim_arf.fits","sim_rmf.fits"], 
+                                    responses=["acisi_aimpt_cy17.arf",
+                                               "acisi_aimpt_cy17.rmf"], 
                                     absorb_model=abs_model,
                                     north_vector=[0.0,1.0,0.0])
 

diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/photon_models.py
--- a/yt/analysis_modules/photon_simulator/photon_models.py
+++ b/yt/analysis_modules/photon_simulator/photon_models.py
@@ -73,14 +73,20 @@
         "invert_cdf": Invert the cumulative distribution function of the spectrum.
         "accept_reject": Acceptance-rejection method using the spectrum. 
         The first method should be sufficient for most cases. 
+    prng : NumPy `RandomState` object or numpy.random
+        A pseudo-random number generator. Typically this will only be specified
+        if you have a reason to generate the same set of random numbers, such as
+        for a test. Default is the numpy.random module.
     """
     def __init__(self, spectral_model, X_H=0.75, Zmet=0.3, 
-                 photons_per_chunk=10000000, method="invert_cdf"):
+                 photons_per_chunk=10000000, method="invert_cdf",
+                 prng=np.random):
         self.X_H = X_H
         self.Zmet = Zmet
         self.spectral_model = spectral_model
         self.photons_per_chunk = photons_per_chunk
         self.method = method
+        self.prng = prng
 
     def __call__(self, data_source, parameters):
 
@@ -174,7 +180,7 @@
                 tot_ph_c = cspec.d.sum()
                 tot_ph_m = mspec.d.sum()
 
-                u = np.random.random(size=n_current)
+                u = self.prng.uniform(size=n_current)
 
                 cell_norm_c = tot_ph_c*cem
                 cell_norm_m = tot_ph_m*metalZ[ibegin:iend]*cem
@@ -209,7 +215,7 @@
                         cumspec += Z * cumspec_m
                         norm_factor = 1.0 / cumspec[-1]
                         cumspec *= norm_factor
-                        randvec = np.random.uniform(size=cn)
+                        randvec = self.prng.uniform(size=cn)
                         randvec.sort()
                         cell_e = np.interp(randvec, cumspec, ebins)
                     elif self.method == "accept_reject":
@@ -217,7 +223,7 @@
                         tot_spec += Z * mspec.d
                         norm_factor = 1.0 / tot_spec.sum()
                         tot_spec *= norm_factor
-                        eidxs = np.random.choice(nchan, size=cn, p=tot_spec)
+                        eidxs = self.prng.choice(nchan, size=cn, p=tot_spec)
                         cell_e = emid[eidxs]
                     energies[ei:ei+cn] = cell_e
                     cell_counter += 1
@@ -252,4 +258,6 @@
         mylog.info("Number of photons generated: %d" % int(np.sum(photons["NumberOfPhotons"])))
         mylog.info("Number of cells with photons: %d" % len(photons["x"]))
 
+        self.spectral_model.cleanup_spectrum()
+
         return photons

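The new ``prng`` keyword above is what makes the tests added below reproducible. A
minimal sketch of seeding ``ThermalPhotonModel`` (``apec_model`` stands for any
spectral model built as in the docs above; the seed value is arbitrary):

.. code:: python

    from numpy.random import RandomState
    from yt.analysis_modules.photon_simulator.api import ThermalPhotonModel

    prng = RandomState(24)  # fixed seed so repeated runs draw identical photons
    thermal_model = ThermalPhotonModel(apec_model, X_H=0.76, Zmet=0.3,
                                       method="invert_cdf", prng=prng)
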
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/photon_simulator.py
--- a/yt/analysis_modules/photon_simulator/photon_simulator.py
+++ b/yt/analysis_modules/photon_simulator/photon_simulator.py
@@ -90,6 +90,9 @@
         else:
             return self.photons[key]
 
+    def __contains__(self, key):
+        return key in self.photons
+
     def __repr__(self):
         return self.photons.__repr__()
 
@@ -449,7 +452,7 @@
                         absorb_model=None, psf_sigma=None,
                         sky_center=None, responses=None,
                         convolve_energies=False, no_shifting=False,
-                        north_vector=None):
+                        north_vector=None, prng=np.random):
         r"""
         Projects photons onto an image plane given a line of sight.
 
@@ -489,6 +492,11 @@
             the plane of projection. If not set, an arbitrary grid-aligned north_vector 
             is chosen. Ignored in the case where a particular axis (e.g., "x", "y", or
             "z") is explicitly specified.
+        prng : NumPy `RandomState` object or numpy.random
+            A pseudo-random number generator. Typically this will only be specified
+            if you have a reason to generate the same set of random numbers, such as
+            for a test. Default is the numpy.random module.
+
         Examples
         --------
         >>> L = np.array([0.1,-0.2,0.3])
@@ -612,14 +620,14 @@
         if my_n_obs == n_ph_tot:
             idxs = np.arange(my_n_obs,dtype='uint64')
         else:
-            idxs = np.random.permutation(n_ph_tot)[:my_n_obs].astype("uint64")
+            idxs = prng.permutation(n_ph_tot)[:my_n_obs].astype("uint64")
         obs_cells = np.searchsorted(self.p_bins, idxs, side='right')-1
         delta = dx[obs_cells]
 
         if isinstance(normal, string_types):
 
-            xsky = np.random.uniform(low=-0.5,high=0.5,size=my_n_obs)
-            ysky = np.random.uniform(low=-0.5,high=0.5,size=my_n_obs)
+            xsky = prng.uniform(low=-0.5,high=0.5,size=my_n_obs)
+            ysky = prng.uniform(low=-0.5,high=0.5,size=my_n_obs)
             xsky *= delta
             ysky *= delta
             xsky += self.photons[axes_lookup[normal][0]][obs_cells]
@@ -629,9 +637,9 @@
                 vz = self.photons["v%s" % normal]
 
         else:
-            x = np.random.uniform(low=-0.5,high=0.5,size=my_n_obs)
-            y = np.random.uniform(low=-0.5,high=0.5,size=my_n_obs)
-            z = np.random.uniform(low=-0.5,high=0.5,size=my_n_obs)
+            x = prng.uniform(low=-0.5,high=0.5,size=my_n_obs)
+            y = prng.uniform(low=-0.5,high=0.5,size=my_n_obs)
+            z = prng.uniform(low=-0.5,high=0.5,size=my_n_obs)
 
             if not no_shifting:
                 vz = self.photons["vx"]*z_hat[0] + \
@@ -664,15 +672,16 @@
             emid = absorb_model.emid
             aspec = absorb_model.get_spectrum()
             absorb = np.interp(eobs, emid, aspec, left=0.0, right=0.0)
-            randvec = aspec.max()*np.random.random(eobs.shape)
+            randvec = aspec.max()*prng.uniform(size=eobs.shape)
             not_abs = randvec < absorb
+            absorb_model.cleanup_spectrum()
 
         if eff_area is None:
             detected = np.ones(eobs.shape, dtype='bool')
         else:
             mylog.info("Applying energy-dependent effective area.")
             earea = np.interp(eobs, earf, eff_area, left=0.0, right=0.0)
-            randvec = eff_area.max()*np.random.random(eobs.shape)
+            randvec = eff_area.max()*prng.uniform(size=eobs.shape)
             detected = randvec < earea
 
         detected = np.logical_and(not_abs, detected)
@@ -689,8 +698,8 @@
         events = comm.par_combine_object(events, datatype="dict", op="cat")
 
         if psf_sigma is not None:
-            events["xpix"] += np.random.normal(sigma=psf_sigma/dtheta)
-            events["ypix"] += np.random.normal(sigma=psf_sigma/dtheta)
+            events["xpix"] += prng.normal(sigma=psf_sigma/dtheta)
+            events["ypix"] += prng.normal(sigma=psf_sigma/dtheta)
 
         num_events = len(events["xpix"])
 
@@ -698,7 +707,8 @@
             mylog.info("Total number of observed photons: %d" % num_events)
 
         if "RMF" in parameters and convolve_energies:
-            events, info = self._convolve_with_rmf(parameters["RMF"], events, mat_key)
+            events, info = self._convolve_with_rmf(parameters["RMF"], events, 
+                                                   mat_key, prng)
             for k, v in info.items():
                 parameters[k] = v
 
@@ -725,7 +735,7 @@
         rmf.close()
         return weights
 
-    def _convolve_with_rmf(self, respfile, events, mat_key):
+    def _convolve_with_rmf(self, respfile, events, mat_key, prng):
         """
         Convolve the events with a RMF file.
         """
@@ -780,7 +790,7 @@
             if len(trueChannel) > 0:
                 for q in range(fcurr,last):
                     if phEE[q] >= low and phEE[q] < high:
-                        channelInd = np.random.choice(len(weights), p=weights)
+                        channelInd = prng.choice(len(weights), p=weights)
                         fcurr += 1
                         detectedChannels.append(trueChannel[channelInd])
                     if phEE[q] >= high:
@@ -1349,6 +1359,8 @@
                 bins = 0.5*(ee[1:]+ee[:-1])
                 spectype = "energy"
             else:
+                mylog.info("Events haven't been convolved with an RMF, so assuming "
+                           "a perfect response and %d PI channels." % nchan)
                 bins = (np.arange(nchan)+1).astype("int32")
                 spectype = "pi"
 

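Together with the ``ThermalPhotonModel`` change above, passing the same generator to
``project_photons`` makes an end-to-end run deterministic. A minimal sketch, assuming
``photons`` and ``abs_model`` have been built as in the documentation above and that
the ACIS-I response files sit in the current directory:

.. code:: python

    from numpy.random import RandomState

    prng = RandomState(24)  # reuse one seeded generator for the whole pipeline
    events = photons.project_photons("z",
                                     responses=["acisi_aimpt_cy17.arf",
                                                "acisi_aimpt_cy17.rmf"],
                                     absorb_model=abs_model,
                                     convolve_energies=True, prng=prng)
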
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/spectral_models.py
--- a/yt/analysis_modules/photon_simulator/spectral_models.py
+++ b/yt/analysis_modules/photon_simulator/spectral_models.py
@@ -42,6 +42,9 @@
     def get_spectrum(self):
         pass
 
+    def cleanup_spectrum(self):
+        pass
+
 class XSpecThermalModel(SpectralModel):
     r"""
     Initialize a thermal gas emission model from PyXspec.
@@ -109,6 +112,10 @@
         cosmic_spec *= self.norm
         metal_spec *= self.norm
         return YTArray(cosmic_spec, "cm**3/s"), YTArray(metal_spec, "cm**3/s")
+    
+    def cleanup_spectrum(self):
+        del self.thermal_comp
+        del self.model
 
 class XSpecAbsorbModel(SpectralModel):
     r"""
@@ -165,6 +172,9 @@
         m.nH = self.nH
         return np.array(self.model.values(0))
 
+    def cleanup_spectrum(self):
+        del self.model
+
 class TableApecModel(SpectralModel):
     r"""
     Initialize a thermal gas emission model from the AtomDB APEC tables

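The new ``cleanup_spectrum`` hook is a no-op for the table-based models and releases
the PyXspec model objects for the ``XSpec*`` models. A minimal sketch of the
resulting lifecycle, mirroring ``test_spectra.py`` below (``xray_data_dir`` must
point at the AtomDB/APEC tables):

.. code:: python

    from yt.config import ytcfg
    from yt.analysis_modules.photon_simulator.api import TableApecModel

    xray_data_dir = ytcfg.get("yt", "xray_data_dir")
    amod = TableApecModel(xray_data_dir, 0.1, 10.0, 10000, thermal_broad=True)
    amod.prepare_spectrum(0.2)             # redshift
    cspec, mspec = amod.get_spectrum(6.0)  # cosmic and metal components at kT = 6 keV
    amod.cleanup_spectrum()                # no-op here; frees XSPEC state otherwise
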
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/tests/test_beta_model.py
--- /dev/null
+++ b/yt/analysis_modules/photon_simulator/tests/test_beta_model.py
@@ -0,0 +1,171 @@
+"""
+A unit test for the photon_simulator analysis module.
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.analysis_modules.photon_simulator.api import \
+    XSpecThermalModel, XSpecAbsorbModel, \
+    ThermalPhotonModel, PhotonList
+from yt.config import ytcfg
+from yt.utilities.answer_testing.framework import \
+    requires_module
+from yt.testing import requires_file
+import numpy as np
+from yt.utilities.physical_ratios import \
+    K_per_keV, mass_hydrogen_grams
+from yt.utilities.physical_constants import clight
+from yt.frontends.stream.api import load_uniform_grid
+import os
+import tempfile
+import shutil
+from numpy.random import RandomState
+
+ckms = clight.in_units("km/s").v
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt", "__withintesting"] = "True"
+
+xray_data_dir = ytcfg.get("yt", "xray_data_dir")
+
+arf = os.path.join(xray_data_dir,"sxt-s_120210_ts02um_intallpxl.arf")
+rmf = os.path.join(xray_data_dir,"ah_sxs_5ev_basefilt_20100712.rmf")
+
+@requires_module("xspec")
+@requires_file(arf)
+@requires_file(rmf)
+def test_beta_model():
+    import xspec
+    
+    xspec.Fit.statMethod = "cstat"
+    xspec.Xset.addModelString("APECTHERMAL","yes")
+    xspec.Fit.query = "yes"
+    xspec.Fit.method = ["leven","10","0.01"]
+    xspec.Fit.delta = 0.01
+    xspec.Xset.chatter = 5
+
+    my_prng = RandomState(24)
+
+    tmpdir = tempfile.mkdtemp()
+    curdir = os.getcwd()
+    os.chdir(tmpdir)
+
+    R = 1.0
+    r_c = 0.05
+    rho_c = 0.04*mass_hydrogen_grams
+    beta = 1.
+    kT_sim = 6.0
+    v_shift = 4.0e7
+    v_width = 4.0e7
+    nx = 128
+
+    ddims = (nx,nx,nx)
+
+    x, y, z = np.mgrid[-R:R:nx*1j,
+                       -R:R:nx*1j,
+                       -R:R:nx*1j]
+
+    r = np.sqrt(x**2+y**2+z**2)
+
+    dens = np.zeros(ddims)
+    dens[r <= R] = rho_c*(1.+(r[r <= R]/r_c)**2)**(-1.5*beta)
+    dens[r > R] = 0.0
+    temp = kT_sim*K_per_keV*np.ones(ddims)
+    bbox = np.array([[-0.5,0.5],[-0.5,0.5],[-0.5,0.5]])
+    velz = my_prng.normal(loc=v_shift,scale=v_width,size=ddims)
+
+    data = {}
+    data["density"] = (dens, "g/cm**3")
+    data["temperature"] = (temp, "K")
+    data["velocity_x"] = (np.zeros(ddims), "cm/s")
+    data["velocity_y"] = (np.zeros(ddims), "cm/s")
+    data["velocity_z"] = (velz, "cm/s")
+
+    ds = load_uniform_grid(data, ddims, length_unit=(2*R, "Mpc"),
+                           nprocs=64, bbox=bbox)
+
+    A = 3000.
+    exp_time = 1.0e5
+    redshift = 0.05
+    nH_sim = 0.02
+
+    apec_model = XSpecThermalModel("bapec", 0.1, 11.5, 20000,
+                                   thermal_broad=True)
+    abs_model = XSpecAbsorbModel("TBabs", nH_sim)
+
+    sphere = ds.sphere("c", (0.5, "Mpc"))
+
+    mu_sim = -v_shift / 1.0e5
+    sigma_sim = v_width / 1.0e5
+
+    Z_sim = 0.3
+
+    thermal_model = ThermalPhotonModel(apec_model, Zmet=Z_sim, X_H=0.76,
+                                       prng=my_prng)
+    photons = PhotonList.from_scratch(sphere, redshift, A, exp_time,
+                                      thermal_model)
+
+    D_A = photons.parameters["FiducialAngularDiameterDistance"]
+
+    norm_sim = sphere.quantities.total_quantity("emission_measure")
+    norm_sim *= 1.0e-14/(4*np.pi*D_A*D_A*(1.+redshift)*(1.+redshift))
+    norm_sim = float(norm_sim.in_cgs())
+
+    events = photons.project_photons("z", responses=[arf,rmf],
+                                     absorb_model=abs_model, 
+                                     convolve_energies=True, prng=my_prng)
+    events.write_spectrum("beta_model_evt.pi", clobber=True)
+
+    s = xspec.Spectrum("beta_model_evt.pi")
+    s.ignore("**-0.5")
+    s.ignore("9.0-**")
+
+    m = xspec.Model("tbabs*bapec")
+    m.bapec.kT = 5.5
+    m.bapec.Abundanc = 0.25
+    m.bapec.norm = 1.0
+    m.bapec.Redshift = 0.05
+    m.bapec.Velocity = 300.0
+    m.TBabs.nH = 0.02
+
+    m.bapec.Velocity.frozen = False
+    m.bapec.Abundanc.frozen = False
+    m.bapec.Redshift.frozen = False
+    m.TBabs.nH.frozen = True
+
+    xspec.Fit.renorm()
+    xspec.Fit.nIterations = 100
+    xspec.Fit.perform()
+    
+    kT  = m.bapec.kT.values[0]
+    mu = (m.bapec.Redshift.values[0]-redshift)*ckms
+    Z = m.bapec.Abundanc.values[0]
+    sigma = m.bapec.Velocity.values[0]
+    norm = m.bapec.norm.values[0]
+
+    dkT = m.bapec.kT.sigma
+    dmu = m.bapec.Redshift.sigma*ckms
+    dZ = m.bapec.Abundanc.sigma
+    dsigma = m.bapec.Velocity.sigma
+    dnorm = m.bapec.norm.sigma
+
+    print(kT, kT_sim, dkT)
+
+    assert np.abs(mu-mu_sim) < dmu
+    assert np.abs(kT-kT_sim) < dkT    
+    assert np.abs(Z-Z_sim) < dZ
+    assert np.abs(sigma-sigma_sim) < dsigma
+    assert np.abs(norm-norm_sim) < dnorm
+
+    xspec.AllModels.clear()
+    xspec.AllData.clear()
+
+    os.chdir(curdir)
+    shutil.rmtree(tmpdir)

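For reference, the ``norm_sim`` check above compares the fitted XSPEC normalization
to the standard APEC definition, with the integral supplied by the new
``emission_measure`` field (see the ``astro_fields.py`` hunk below):

.. math::

   \mathrm{norm} = \frac{10^{-14}}{4\pi\,D_A^2\,(1+z)^2}\int n_e\,n_H\,dV \quad \mathrm{cm^{-5}}
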
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/tests/test_cluster.py
--- a/yt/analysis_modules/photon_simulator/tests/test_cluster.py
+++ /dev/null
@@ -1,67 +0,0 @@
-"""
-Answer test the photon_simulator analysis module.
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-from yt.analysis_modules.photon_simulator.api import \
-    TableApecModel, TableAbsorbModel, \
-    ThermalPhotonModel, PhotonList
-from yt.config import ytcfg
-from yt.testing import requires_file
-from yt.utilities.answer_testing.framework import requires_ds, \
-    GenericArrayTest, data_dir_load
-import numpy as np
-
-def setup():
-    from yt.config import ytcfg
-    ytcfg["yt", "__withintesting"] = "True"
-
-test_dir = ytcfg.get("yt", "test_data_dir")
-
-ETC = test_dir+"/enzo_tiny_cosmology/DD0046/DD0046"
-APEC = test_dir+"/xray_data/atomdb_v2.0.2"
-TBABS = test_dir+"/xray_data/tbabs_table.h5"
-ARF = test_dir+"/xray_data/chandra_ACIS-S3_onaxis_arf.fits"
-RMF = test_dir+"/xray_data/chandra_ACIS-S3_onaxis_rmf.fits"
-
-@requires_ds(ETC)
-@requires_file(APEC)
-@requires_file(TBABS)
-@requires_file(ARF)
-@requires_file(RMF)
-def test_etc():
-
-    np.random.seed(seed=0x4d3d3d3)
-
-    ds = data_dir_load(ETC)
-    A = 3000.
-    exp_time = 1.0e5
-    redshift = 0.1
-
-    apec_model = TableApecModel(APEC, 0.1, 20.0, 2000)
-    tbabs_model = TableAbsorbModel(TBABS, 0.1)
-
-    sphere = ds.sphere("max", (0.5, "mpc"))
-
-    thermal_model = ThermalPhotonModel(apec_model, Zmet=0.3)
-    photons = PhotonList.from_scratch(sphere, redshift, A, exp_time,
-                                      thermal_model)
-
-    events = photons.project_photons([0.0,0.0,1.0],
-                                     responses=[ARF,RMF],
-                                     absorb_model=tbabs_model)
-
-    def photons_test(): return photons.photons
-    def events_test(): return events.events
-
-    for test in [GenericArrayTest(ds, photons_test),
-                 GenericArrayTest(ds, events_test)]:
-        test_etc.__name__ = test.description
-        yield test

diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/tests/test_sloshing.py
--- /dev/null
+++ b/yt/analysis_modules/photon_simulator/tests/test_sloshing.py
@@ -0,0 +1,86 @@
+"""
+Answer test the photon_simulator analysis module.
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2013, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.analysis_modules.photon_simulator.api import \
+    TableApecModel, TableAbsorbModel, \
+    ThermalPhotonModel, PhotonList
+from yt.config import ytcfg
+from yt.testing import requires_file
+from yt.utilities.answer_testing.framework import requires_ds, \
+    GenericArrayTest, data_dir_load
+import numpy as np
+from numpy.random import RandomState
+import os
+
+def setup():
+    from yt.config import ytcfg
+    ytcfg["yt", "__withintesting"] = "True"
+
+test_data_dir = ytcfg.get("yt", "test_data_dir")
+xray_data_dir = ytcfg.get("yt", "xray_data_dir")
+
+rmfs = ["pn-med.rmf", "acisi_aimpt_cy17.rmf",
+        "aciss_aimpt_cy17.rmf", "nustar.rmf",
+        "ah_sxs_5ev_basefilt_20100712.rmf"]
+arfs = ["pn-med.arf", "acisi_aimpt_cy17.arf",
+        "aciss_aimpt_cy17.arf", "nustar_3arcminA.arf",
+        "sxt-s_120210_ts02um_intallpxl.arf"]
+
+gslr = test_data_dir+"/GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0300"
+APEC = xray_data_dir
+TBABS = xray_data_dir+"/tbabs_table.h5"
+
+def return_data(data):
+    def _return_data(name):
+        return data
+    return _return_data
+
+@requires_ds(gslr)
+@requires_file(APEC)
+@requires_file(TBABS)
+def test_sloshing():
+
+    prng = RandomState(0x4d3d3d3)
+
+    ds = data_dir_load(gslr)
+    A = 2000.
+    exp_time = 1.0e4
+    redshift = 0.1
+
+    apec_model = TableApecModel(APEC, 0.1, 11.0, 10000)
+    tbabs_model = TableAbsorbModel(TBABS, 0.1)
+
+    sphere = ds.sphere("c", (0.1, "Mpc"))
+
+    thermal_model = ThermalPhotonModel(apec_model, Zmet=0.3, prng=prng)
+    photons = PhotonList.from_scratch(sphere, redshift, A, exp_time,
+                                      thermal_model)
+
+    return_photons = return_data(photons.photons)
+
+    tests = []
+    tests.append(GenericArrayTest(ds, return_photons, args=["photons"]))
+
+    for a, r in zip(arfs, rmfs):
+        arf = os.path.join(xray_data_dir,a)
+        rmf = os.path.join(xray_data_dir,r)
+        events = photons.project_photons([1.0,-0.5,0.2], responses=[arf,rmf],
+                                         absorb_model=tbabs_model, 
+                                         convolve_energies=True, prng=prng)
+
+        return_events = return_data(events.events)
+
+        tests.append(GenericArrayTest(ds, return_events, args=[a]))
+
+    for test in tests:
+        test_sloshing.__name__ = test.description
+        yield test

diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/analysis_modules/photon_simulator/tests/test_spectra.py
--- a/yt/analysis_modules/photon_simulator/tests/test_spectra.py
+++ b/yt/analysis_modules/photon_simulator/tests/test_spectra.py
@@ -9,7 +9,7 @@
 def setup():
     ytcfg["yt", "__withintesting"] = "True"
 
-test_data_dir = ytcfg.get("yt", "test_data_dir")
+xray_data_dir = ytcfg.get("yt", "xray_data_dir")
 
 ds = fake_random_ds(64)
 
@@ -17,7 +17,7 @@
 @requires_module("astropy")
 def test_apec():
 
-    settings = {"APECROOT":test_data_dir+"/xray_data/apec_v2.0.2"}
+    settings = {"APECROOT":xray_data_dir+"/apec_v2.0.2"}
     xmod = XSpecThermalModel("apec", 0.1, 10.0, 10000, thermal_broad=True,
                              settings=settings)
     xmod.prepare_spectrum(0.2)
@@ -25,10 +25,10 @@
     xcspec, xmspec = xmod.get_spectrum(6.0)
     spec1 = xcspec+0.3*xmspec
 
-    amod = TableApecModel(test_data_dir+"/xray_data", 0.1, 10.0, 
+    amod = TableApecModel(xray_data_dir, 0.1, 10.0, 
                           10000, thermal_broad=True)
     amod.prepare_spectrum(0.2)
-    
+
     acspec, amspec = amod.get_spectrum(6.0)
     spec2 = acspec+0.3*amspec
 
@@ -42,3 +42,4 @@
         test_apec.__name__ = test.description
         yield test
 
+    xmod.cleanup_spectrum()

diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -60,7 +60,8 @@
     sketchfab_api_key = 'None',
     thread_field_detection = 'False',
     ignore_invalid_unit_operation_errors = 'False',
-    chunk_size = '1000'
+    chunk_size = '1000',
+    xray_data_dir = '/does/not/exist',
     )
 # Here is the upgrade.  We're actually going to parse the file in its entirety
 # here.  Then, if it has any of the Forbidden Sections, it will be rewritten

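The tests above locate their response files and AtomDB tables through this new
``xray_data_dir`` option, which deliberately defaults to a path that does not exist.
A minimal sketch of pointing it at an unpacked copy of the ``xray_data`` package
(the path is hypothetical); it can equally be set under the ``[yt]`` section of the
yt configuration file:

.. code:: python

    from yt.config import ytcfg

    # hypothetical location of the unpacked xray_data.tar.gz auxiliary package
    ytcfg["yt", "xray_data_dir"] = "/path/to/xray_data"
    print(ytcfg.get("yt", "xray_data_dir"))
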
diff -r 652f3058834c94de92fb59bfde799736182ee8b7 -r 3547c2e2eca0949527d1d6ff86a63996eda62c84 yt/fields/astro_fields.py
--- a/yt/fields/astro_fields.py
+++ b/yt/fields/astro_fields.py
@@ -91,6 +91,20 @@
                        function=_chandra_emissivity,
                        units="") # add correct units here
 
+    def _emission_measure(field, data):
+        if data.has_field_parameter("X_H"):
+            X_H = data.get_field_parameter("X_H")
+        else:
+            X_H = 0.76
+        nenh = data["density"]*data["density"]
+        nenh /= mh*mh
+        nenh *= 0.5*(1.+X_H)*X_H*data["cell_volume"]
+        return nenh
+    
+    registry.add_field((ftype, "emission_measure"),
+                       function=_emission_measure,
+                       units="cm**-3")
+
     def _xray_emissivity(field, data):
         # old scaling coefficient was 2.168e60
         return data.ds.arr(data[ftype, "density"].to_ndarray().astype(np.float64)**2

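The new ``("gas", "emission_measure")`` field integrates n_e*n_H over the cells of a
data object, which is how the beta-model test computes its expected normalization. A
minimal sketch on a throwaway dataset (field and unit choices follow the hunk above;
the sphere radius is arbitrary):

.. code:: python

    from yt.testing import fake_random_ds

    ds = fake_random_ds(64)
    sp = ds.sphere("c", (0.25, "unitary"))
    # total emission measure of the sphere, in cm**-3, using the default X_H = 0.76
    em = sp.quantities.total_quantity("emission_measure")
    print(em)
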
Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


