[yt-svn] commit/yt: MatthewTurk: Merged in brittonsmith/yt/yt-3.0 (pull request #997)
commits-noreply at bitbucket.org
Mon Jul 14 10:58:34 PDT 2014
1 new commit in yt:
https://bitbucket.org/yt_analysis/yt/commits/ef7c988b774a/
Changeset: ef7c988b774a
Branch: yt-3.0
User: MatthewTurk
Date: 2014-07-14 19:58:22
Summary: Merged in brittonsmith/yt/yt-3.0 (pull request #997)
LightRay and LightCone 3.0
Affected #: 21 files
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst
@@ -2,15 +2,15 @@
Light Cone Generator
====================
-.. sectionauthor:: Britton Smith <brittonsmith at gmail.com>
-Light cones are projections made by stacking multiple datasets together to
-continuously span a given redshift interval. The width of individual
-projection slices is adjusted such that each slice has the same angular size.
-Each projection slice is randomly shifted and projected along a random axis to
-ensure that the same structures are not sampled multiple times. Since deeper
-images sample earlier epochs of the simulation, light cones represent the
-closest thing to synthetic imaging observations.
+Light cones are created by stacking multiple datasets together to
+continuously span a given redshift interval. To make a projection of a
+field through a light cone, the width of individual slices is adjusted
+such that each slice has the same angular size.
+Each slice is randomly shifted and projected along a random axis to
+ensure that the same structures are not sampled multiple times. A
+recipe for creating a simple light cone projection can be found in
+the cookbook under :ref:`cookbook-light_cone`.
.. image:: _images/LightCone_full_small.png
:width: 500
@@ -23,46 +23,41 @@
Configuring the Light Cone Generator
------------------------------------
-A recipe for creating a simple light cone projection can be found in the
-cookbook. The required arguments to instantiate a ``LightCone`` objects are
+The required arguments to instantiate a ``LightCone`` object are
the path to the simulation parameter file, the simulation type, the nearest
redshift, and the furthest redshift of the light cone.
.. code-block:: python
- from yt.analysis_modules.api import LightCone
+ from yt.analysis_modules.cosmological_observation.api import \
+ LightCone
lc = LightCone('enzo_tiny_cosmology/32Mpc_32.enzo',
'Enzo', 0., 0.1)
The additional keyword arguments are:
- * **field_of_view_in_arcminutes** (*float*): The field of view of the image
- in units of arcminutes. Default: 600.0.
-
- * **image_resolution_in_arcseconds** (*float*): The size of each image pixel
- in units of arcseconds. Default: 60.0.
-
- * **use_minimum_datasets** (*bool*): If True, the minimum number of datasets
- is used to connect the initial and final redshift. If false, the light
- cone solution will contain as many entries as possible within the redshift
- interval. Default: True.
+ * **use_minimum_datasets** (*bool*): If True, the minimum number of
+ datasets is used to connect the initial and final redshift. If False,
+ the light cone solution will contain as many entries as possible within
+ the redshift interval. Default: True.
* **deltaz_min** (*float*): Specifies the minimum Delta-z between
consecutive datasets in the returned list. Default: 0.0.
- * **minimum_coherent_box_fraction** (*float*): Used with use_minimum_datasets
- set to False, this parameter specifies the fraction of the total box size
- to be traversed before rerandomizing the projection axis and center. This
- was invented to allow light cones with thin slices to sample coherent large
- scale structure, but in practice does not work so well. Try setting this
- parameter to 1 and see what happens. Default: 0.0.
+ * **minimum_coherent_box_fraction** (*float*): Used with
+ **use_minimum_datasets** set to False, this parameter specifies the
+ fraction of the total box size to be traversed before rerandomizing the
+ projection axis and center. This was invented to allow light cones with
+    thin slices to sample coherent large scale structure, but in practice does
+ not work so well. Try setting this parameter to 1 and see what happens.
+ Default: 0.0.
* **time_data** (*bool*): Whether or not to include time outputs when
gathering datasets for time series. Default: True.
- * **redshift_data** (*bool*): Whether or not to include redshift outputs when
- gathering datasets for time series. Default: True.
+ * **redshift_data** (*bool*): Whether or not to include redshift outputs
+ when gathering datasets for time series. Default: True.
* **set_parameters** (*dict*): Dictionary of parameters to attach to
pf.parameters. Default: None.
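As an aside, the effect of **deltaz_min** can be illustrated with a toy sketch in plain Python (this is not yt's actual implementation, and the redshift list below is made up): any dataset whose redshift is within **deltaz_min** of the previously kept one is skipped.

```python
# Toy illustration of deltaz_min (NOT yt's implementation): skip any
# dataset whose redshift is within deltaz_min of the last one kept.
def apply_deltaz_min(redshifts, deltaz_min):
    kept = [redshifts[0]]
    for z in redshifts[1:]:
        if kept[-1] - z > deltaz_min:
            kept.append(z)
    return kept

# Hypothetical, descending list of dataset redshifts.
redshifts = [0.10, 0.09, 0.07, 0.065, 0.03, 0.0]
print(apply_deltaz_min(redshifts, 0.02))  # [0.1, 0.07, 0.03, 0.0]
```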
@@ -76,10 +71,10 @@
Creating Light Cone Solutions
-----------------------------
-A light cone solution consists of a list of datasets and the width, depth,
-center, and axis of the projection to be made for that slice. The
-:meth:`LightCone.calculate_light_cone_solution` function is used to
-calculate the random shifting and projection axis:
+A light cone solution consists of a list of datasets spanning a redshift
+interval with a random orientation for each dataset. A new solution
+is calculated with the :meth:`LightCone.calculate_light_cone_solution`
+function:
.. code-block:: python
@@ -87,70 +82,39 @@
The keyword arguments are:
- * **seed** (*int*): the seed for the random number generator. Any light cone
- solution can be reproduced by giving the same random seed. Default: None
- (each solution will be distinct).
+ * **seed** (*int*): the seed for the random number generator. Any light
+ cone solution can be reproduced by giving the same random seed.
+ Default: None.
* **filename** (*str*): if given, a text file detailing the solution will be
written out. Default: None.
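To see why a fixed seed reproduces a solution exactly, here is a minimal sketch using only the standard-library ``random`` module (not yt internals): every random choice of projection axis and shift is drawn from one seeded generator, so the same seed yields the same sequence of choices.

```python
import random

# Toy stand-in for a light cone solution (NOT yt internals): each slice
# gets a random projection axis and a random shift of the center.
def toy_solution(seed, n_slices=5):
    rng = random.Random(seed)
    return [(rng.randrange(3),            # projection axis: 0, 1, or 2
             rng.random(), rng.random())  # random center shift
            for _ in range(n_slices)]

# The same seed always reproduces the identical solution.
assert toy_solution(123456789) == toy_solution(123456789)
# Passing seed=None seeds from system entropy, so solutions differ.
```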
-If a new solution for the same LightCone object is desired, the
-:meth:`rerandomize_light_cone_solution` method should be called in place of
-:meth:`calculate_light_cone_solution`:
-
-.. code-block:: python
-
- new_seed = 987654321
- lc.rerandomize_light_cone_solution(new_seed, Recycle=True,
- filename='new_lightcone.dat')
-
-Additional keyword arguments are:
-
- * **recycle** (*bool*): if True, the new solution will have the same shift in
- the line of sight as the original solution. Since the projections of each
- slice are serialized and stored for the entire width of the box (even if
- the width used is left than the total box), the projection data can be
- deserialized instead of being remade from scratch. This can greatly speed
- up the creation of a large number of light cone projections. Default: True.
-
- * **filename** (*str*): if given, a text file detailing the solution will be
- written out. Default: None.
-
-If :meth:`rerandomize_light_cone_solution` is used, the LightCone object will
-keep a copy of the original solution that can be returned to at any time by
-calling :meth:`restore_master_solution`:
-
-.. code-block:: python
-
- lc.restore_master_solution()
-
-.. note:: All light cone solutions made with the above method will still use
- the same list of datasets. Only the shifting and projection axis will be
- different.
-
Making a Light Cone Projection
------------------------------
-With the light cone solution set, projections can be made of any available
-field:
+With the light cone solution in place, projections with a given field of
+view and resolution can be made of any available field:
.. code-block:: python
field = 'density'
- lc.project_light_cone(field , weight_field=None,
+ field_of_view = (600.0, "arcmin")
+ resolution = (60.0, "arcsec")
+    lc.project_light_cone(field_of_view, resolution,
+                          field, weight_field=None,
save_stack=True,
save_slice_images=True)
+The field of view and resolution can be specified either as a tuple of
+value and unit string or as a unitful ``YTQuantity``.
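The two accepted forms can be sketched with a small normalization helper; the ``Quantity`` class and ``normalize_extent`` function below are hypothetical stand-ins for illustration, not part of yt's API.

```python
class Quantity:
    """Hypothetical stand-in for a unitful quantity like YTQuantity."""
    def __init__(self, value, units):
        self.value = value
        self.units = units

def normalize_extent(extent):
    """Accept either a (value, "unit") tuple or a quantity object."""
    if isinstance(extent, tuple):
        value, units = extent
    else:  # assume a quantity-like object with .value and .units
        value, units = extent.value, extent.units
    return float(value), str(units)

# Both forms reduce to the same (value, unit) pair:
print(normalize_extent((600.0, "arcmin")))         # (600.0, 'arcmin')
print(normalize_extent(Quantity(60.0, "arcsec")))  # (60.0, 'arcsec')
```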
Additional keyword arguments:
- * **weight_field** (*str*): the weight field of the projection. This has the
- same meaning as in standard projections. Default: None.
+ * **weight_field** (*str*): the weight field of the projection. This has
+ the same meaning as in standard projections. Default: None.
- * **apply_halo_mask** (*bool*): if True, a boolean mask is apply to the light
- cone projection. See below for a description of halo masks. Default: False.
-
- * **node** (*str*): a prefix to be prepended to the node name under which the
- projection data is serialized. Default: None.
+ * **photon_field** (*bool*): if True, the projection data for each slice is
+    decremented by 4 pi R :superscript:`2`, where R is the luminosity
+ distance between the observer and the slice redshift. Default: False.
  * **save_stack** (*bool*): if True, the unflattened light cone data including
each individual slice is written to an hdf5 file. Default: True.
@@ -161,13 +125,7 @@
* **save_slice_images** (*bool*): save images for each individual projection
slice. Default: False.
- * **flatten_stack** (*bool*): if True, the light cone stack is continually
- flattened each time a slice is added in order to save memory. This is
- generally not necessary. Default: False.
-
- * **photon_field** (*bool*): if True, the projection data for each slice is
- decremented by 4 pi R :superscript:`2` , where R is the luminosity
- distance between the observer and the slice redshift. Default: False.
+ * **cmap_name** (*string*): color map for images. Default: "algae".
* **njobs** (*int*): The number of parallel jobs over which the light cone
projection will be split. Choose -1 for one processor per individual
@@ -177,34 +135,4 @@
* **dynamic** (*bool*): If True, use dynamic load balancing to create the
projections. Default: False.
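For the **photon_field** option above, the 4 pi R :superscript:`2` factor is the geometric dilution of an inverse-square law. A back-of-the-envelope sketch, with a made-up luminosity distance and luminosity:

```python
import math

# Made-up numbers for illustration only.
R = 4.3e26                       # luminosity distance to a slice, cm
luminosity = 1.0e45              # intrinsic photon output, erg/s

dilution = 4.0 * math.pi * R**2  # the 4 pi R^2 factor, cm^2
flux = luminosity / dilution     # observed flux, erg/s/cm^2
print(f"{flux:.3e}")             # ~4.3e-10
```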
-Sampling Unique Light Cone Volumes
-----------------------------------
-
-When making a large number of light cones, particularly for statistical
-analysis, it is important to have a handle on the amount of sampled volume in
-common from one projection to another. Any statistics may untrustworthy if a
-set of light cones have too much volume in common, even if they may all be
-entirely different in appearance. LightCone objects have the ability to
-calculate the volume in common between two solutions with the same dataset
-ist. The :meth:`find_unique_solutions` and
-:meth:`project_unique_light_cones` functions can be used to create a set of
-light cone solutions that have some maximum volume in common and create light
-cone projections for those solutions. If specified, the code will attempt to
-use recycled solutions that can use the same serialized projection objects
-that have already been created. This can greatly increase the speed of making
-multiple light cone projections. See the cookbook for an example of doing this.
-
-Making Light Cones with a Halo Mask
------------------------------------
-
-The situation may arise where it is necessary or desirable to know the
-location of halos within the light cone volume, and specifically their
-location in the final image. This can be useful for developing algorithms to
-find galaxies or clusters in image data. The light cone generator does this
-by running the HaloProfiler (see :ref:`halo_profiling`) on each of the
-datasets used in the light cone and shifting them accordingly with the light
-cone solution. The ability also exists to create a boolean mask with the
-dimensions of the final light cone image that can be used to mask out the
-halos in the image. It is left as an exercise to the reader to find a use for
-this functionality. This process is somewhat complicated, but not terribly.
-See the recipe in the cookbook for an example of this functionality.
+.. note:: As of :code:`yt-3.0`, the halo mask and unique light cone functionality no longer exist. These are still available in :code:`yt-2.x`. If you would like to use these features in :code:`yt-3.x`, help is needed to port them over. Contact the yt-users mailing list if you are interested in doing this.
\ No newline at end of file
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/analyzing/analysis_modules/light_ray_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_ray_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_ray_generator.rst
@@ -1,20 +1,21 @@
.. _light-ray-generator:
Light Ray Generator
-====================
-.. sectionauthor:: Britton Smith <brittonsmith at gmail.com>
+===================
Light rays are similar to light cones (:ref:`light-cone-generator`) in how
they stack multiple datasets together to span a redshift interval. Unlike
-light cones, which which stack randomly oriented projections from each
+light cones, which stack randomly oriented projections from each
dataset to create synthetic images, light rays use thin pencil beams to
-simulate QSO sight lines.
+simulate QSO sight lines. A sample script can be found in the cookbook
+under :ref:`cookbook-light_ray`.
.. image:: _images/lightray.png
-A ray segment records the information of all grid cells intersected by the ray
-as well as the path length, dl, of the ray through the cell. Column densities
-can be calculated by multiplying physical densities by the path length.
+A ray segment records the information of all grid cells intersected by the
+ray as well as the path length, dl, of the ray through the cell. Column
+densities can be calculated by multiplying physical densities by the path
+length.
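The column density statement above amounts to a dot product of densities and path lengths; a quick sketch with made-up cell values:

```python
# Column density = sum over intersected cells of density * path length dl.
# Cell values below are made up for illustration.
densities = [1.0e-28, 5.0e-29, 2.0e-28]  # g/cm^3
dl        = [3.0e21,  1.0e21,  2.0e21]   # cm

column_density = sum(rho * d for rho, d in zip(densities, dl))
print(column_density)  # ~7.5e-07 g/cm^2
```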
Configuring the Light Ray Generator
-----------------------------------
@@ -36,22 +37,22 @@
ray solution will contain as many entries as possible within the redshift
interval. Default: True.
- * **deltaz_min** (*float*): Specifies the minimum Delta-z between consecutive
- datasets in the returned list. Default: 0.0.
+ * **deltaz_min** (*float*): Specifies the minimum Delta-z between
+ consecutive datasets in the returned list. Default: 0.0.
- * **minimum_coherent_box_fraction** (*float*): Used with use_minimum_datasets
- set to False, this parameter specifies the fraction of the total box size
- to be traversed before rerandomizing the projection axis and center. This
- was invented to allow light rays with thin slices to sample coherent large
- scale structure, but in practice does not work so well. Try setting this
- parameter to 1 and see what happens. Default: 0.0.
+ * **minimum_coherent_box_fraction** (*float*): Used with
+ **use_minimum_datasets** set to False, this parameter specifies the
+ fraction of the total box size to be traversed before rerandomizing the
+ projection axis and center. This was invented to allow light rays with
+ thin slices to sample coherent large scale structure, but in practice
+ does not work so well. Try setting this parameter to 1 and see what
+ happens. Default: 0.0.
- * **time_data** (*bool*): Whether or not to include time outputs when gathering
- datasets for time series. Default: True.
-
- * **redshift_data** (*bool*): Whether or not to include redshift outputs when
+ * **time_data** (*bool*): Whether or not to include time outputs when
gathering datasets for time series. Default: True.
+ * **redshift_data** (*bool*): Whether or not to include redshift outputs
+ when gathering datasets for time series. Default: True.
Making Light Ray Data
---------------------
@@ -74,7 +75,21 @@
* **seed** (*int*): Seed for the random number generator. Default: None.
- * **fields** (*list*): A list of fields for which to get data. Default: None.
+ * **start_position** (*list* of floats): Used only if creating a light ray
+ from a single dataset. The coordinates of the starting position of the
+ ray. Default: None.
+
+ * **end_position** (*list* of floats): Used only if creating a light ray
+ from a single dataset. The coordinates of the ending position of the ray.
+ Default: None.
+
+ * **trajectory** (*list* of floats): Used only if creating a light ray
+ from a single dataset. The (r, theta, phi) direction of the light ray.
+ Use either **end_position** or **trajectory**, not both.
+ Default: None.
+
+ * **fields** (*list*): A list of fields for which to get data.
+ Default: None.
* **solution_filename** (*string*): Path to a text file where the
    trajectories of each subray are written out. Default: None.
@@ -83,51 +98,17 @@
Default: None.
* **get_los_velocity** (*bool*): If True, the line of sight velocity is
- calculated for each point in the ray. Default: False.
+ calculated for each point in the ray. Default: True.
- * **get_nearest_halo** (*bool*): If True, the HaloProfiler will be used to
- calculate the distance and mass of the nearest halo for each point in the
- ray. This option requires additional information to be included. See
- the cookbook for an example. Default: False.
-
- * **nearest_halo_fields** (*list*): A list of fields to be calculated for the
- halos nearest to every pixel in the ray. Default: None.
-
- * **halo_list_file** (*str*): Filename containing a list of halo properties to be used
- for getting the nearest halos to absorbers. Default: None.
-
- * **halo_profiler_parameters** (*dict*): A dictionary of parameters to be
- passed to the HaloProfiler to create the appropriate data used to get
- properties for the nearest halos. Default: None.
-
- * **njobs** (*int*): The number of parallel jobs over which the slices for the
- halo mask will be split. Choose -1 for one processor per individual slice
- and 1 to have all processors work together on each projection. Default: 1
+  * **njobs** (*int*): The number of parallel jobs over which the light
+    ray segments will be split. Choose -1 for one processor per individual
+    segment and 1 to have all processors work together on each segment.
+    Default: 1
* **dynamic** (*bool*): If True, use dynamic load balancing to create the
projections. Default: False.
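The relationship between **trajectory** and **end_position** can be sketched with a standard spherical-coordinate conversion (illustrative only; yt's exact angle convention is not spelled out here):

```python
import math

def end_from_trajectory(start, r, theta, phi):
    """End point of a ray of length r in direction (theta, phi),
    assuming the standard physics spherical convention."""
    return [start[0] + r * math.sin(theta) * math.cos(phi),
            start[1] + r * math.sin(theta) * math.sin(phi),
            start[2] + r * math.cos(theta)]

# A ray of length 0.1 pointing along the z axis (theta = 0):
print(end_from_trajectory([0.5, 0.5, 0.5], 0.1, 0.0, 0.0))
```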
-Getting The Nearest Galaxies
-----------------------------
-
-The light ray tool will use the HaloProfiler to calculate the distance and
-mass of the nearest halo to that pixel. In order to do this, a dictionary
-called halo_profiler_parameters is used to pass instructions to the
-HaloProfiler. This dictionary has three additional keywords:
-
- * **halo_profiler_kwargs** (*dict*): A dictionary of standard HaloProfiler
- keyword arguments and values to be given to the HaloProfiler.
-
- * **halo_profiler_actions** (*list*): A list of actions to be performed by
- the HaloProfiler. Each item in the list should be a dictionary with the
- following entries: "function", "args", and "kwargs", for the function to
- be performed, the arguments supplied to that function, and the keyword
- arguments.
-
- * **halo_list** (*string*): 'all' to use the full halo list, or 'filtered'
- to use the filtered halo list created after calling make_profiles.
-
-See the recipe in the cookbook for am example.
+.. note:: As of :code:`yt-3.0`, the functionality for recording properties of the nearest halo to each element of the ray no longer exists. This is still available in :code:`yt-2.x`. If you would like to use this feature in :code:`yt-3.x`, help is needed to port it over. Contact the yt-users mailing list if you are interested in doing this.
What Can I do with this?
------------------------
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/cosmological_analysis.rst
--- a/doc/source/cookbook/cosmological_analysis.rst
+++ b/doc/source/cookbook/cosmological_analysis.rst
@@ -29,6 +29,8 @@
.. yt_cookbook:: halo_merger_tree.py
+.. _cookbook-light_cone:
+
Light Cone Projection
~~~~~~~~~~~~~~~~~~~~~
This script creates a light cone projection, a synthetic observation
@@ -37,27 +39,15 @@
.. yt_cookbook:: light_cone_projection.py
-Light Cone with Halo Mask
-~~~~~~~~~~~~~~~~~~~~~~~~~
-This script combines the light cone generator with the halo profiler to
-make a light cone projection with all of the halos cut out of the image.
+.. _cookbook-light_ray:
-.. yt_cookbook:: light_cone_with_halo_mask.py
+Light Ray
+~~~~~~~~~
+This script demonstrates how to make a synthetic quasar sight line that
+extends over multiple datasets and can be used to generate a synthetic
+absorption spectrum.
-Making Unique Light Cones
-~~~~~~~~~~~~~~~~~~~~~~~~~
-This script demonstrates how to make a series of light cone projections
-that only have a maximum amount of volume in common.
-
-.. yt_cookbook:: unique_light_cone_projections.py
-
-Making Light Rays
-~~~~~~~~~~~~~~~~~
-This script demonstrates how to make a synthetic quasar sight line and
-uses the halo profiler to record information about halos close to the
-line of sight.
-
-.. yt_cookbook:: make_light_ray.py
+.. yt_cookbook:: light_ray.py
Creating and Fitting Absorption Spectra
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/light_cone_projection.py
--- a/doc/source/cookbook/light_cone_projection.py
+++ b/doc/source/cookbook/light_cone_projection.py
@@ -1,12 +1,8 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
+import yt
+from yt.analysis_modules.cosmological_observation.api import \
+ LightCone
-import yt
-from yt.analysis_modules.cosmological_observation.light_cone.light_cone import LightCone
-
-# Create a LightCone object extending from z = 0 to z = 0.1
-# with a 600 arcminute field of view and a resolution of
-# 60 arcseconds.
+# Create a LightCone object extending from z = 0 to z = 0.1.
# We have already set up the redshift dumps to be
# used for this, so we will not use any of the time
@@ -14,20 +10,21 @@
lc = LightCone('enzo_tiny_cosmology/32Mpc_32.enzo',
'Enzo', 0., 0.1,
observer_redshift=0.0,
- field_of_view_in_arcminutes=600.0,
- image_resolution_in_arcseconds=60.0,
time_data=False)
# Calculate a randomization of the solution.
-lc.calculate_light_cone_solution(seed=123456789)
+lc.calculate_light_cone_solution(seed=123456789, filename="LC/solution.txt")
# Choose the field to be projected.
-field = 'SZY'
+field = 'szy'
+# Use the LightCone object to make a projection with a 600 arcminute
+# field of view and a resolution of 60 arcseconds.
# Set njobs to -1 to have one core work on each projection
-# in parallel. Set save_slice_images to True to see an
-# image for each individual slice.
-lc.project_light_cone(field, save_stack=False,
+# in parallel.
+lc.project_light_cone((600.0, "arcmin"), (60.0, "arcsec"), field,
+ weight_field=None,
+ save_stack=True,
save_final_image=True,
- save_slice_images=False,
+ save_slice_images=True,
njobs=-1)
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/light_cone_with_halo_mask.py
--- a/doc/source/cookbook/light_cone_with_halo_mask.py
+++ /dev/null
@@ -1,78 +0,0 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
-import yt
-
-from yt.analysis_modules.cosmological_observation.light_cone.light_cone import LightCone
-from yt.analysis_modules.halo_profiler.api import HaloProfiler
-
-# Instantiate a light cone object as usual.
-lc = LightCone('enzo_tiny_cosmology/32Mpc_32.enzo',
- 'Enzo', 0, 0.1,
- observer_redshift=0.0,
- field_of_view_in_arcminutes=600.0,
- image_resolution_in_arcseconds=60.0,
- time_data=False,
- output_dir='LC_HM', output_prefix='LightCone')
-
-# Calculate the light cone solution.
-lc.calculate_light_cone_solution(seed=123456789,
- filename='LC_HM/lightcone.dat')
-
-
-# Configure the HaloProfiler.
-# These are keyword arguments given when creating a
-# HaloProfiler object.
-halo_profiler_kwargs = {'halo_list_file': 'HopAnalysis.out',
- 'output_dir': 'halo_analysis'}
-
-# Create a list of actions for the HaloProfiler to take.
-halo_profiler_actions = []
-
-# Each item in the list is a dictionary containing three things:
-# 1. 'function' - the function to be called.
-# 2. 'args' - a list of arguments given with the function.
-# 3. 'kwargs' - a dictionary of keyword arguments.
-
-# Add a virial filter.
-halo_profiler_actions.append({'function': HaloProfiler.add_halo_filter,
- 'args': [VirialFilter],
- 'kwargs': {'must_be_virialized':False,
- 'overdensity_field':'ActualOverdensity',
- 'virial_overdensity':100,
- 'virial_filters':[['TotalMassMsun','>','1e5']],
- 'virial_quantities':['TotalMassMsun','RadiusMpc']}})
-
-# Add a call to make the profiles.
-halo_profiler_actions.append({'function': HaloProfiler.make_profiles,
- 'kwargs': {'filename': "VirializedHalos.out"}})
-
-# Specify the desired halo list is the filtered list.
-# If 'all' is given instead, the full list will be used.
-halo_list = 'filtered'
-
-# Put them all into one dictionary.
-halo_profiler_parameters=dict(halo_profiler_kwargs=halo_profiler_kwargs,
- halo_profiler_actions=halo_profiler_actions,
- halo_list=halo_list)
-
-# Get the halo list for the active solution of this light cone using
-# the HaloProfiler settings set up above.
-# Write the boolean map to an hdf5 file called 'halo_mask.h5'.
-# Write a text file detailing the location, redshift, radius, and mass
-# of each halo in light cone projection.
-lc.get_halo_mask(mask_file='LC_HM/halo_mask.h5',
- map_file='LC_HM/halo_map.out',
- cube_file='LC_HM/halo_cube.h5',
- virial_overdensity=100,
- halo_profiler_parameters=halo_profiler_parameters,
- njobs=1, dynamic=False)
-
-# Choose the field to be projected.
-field = 'SZY'
-
-# Make the light cone projection and apply the halo mask.
-pc = lc.project_light_cone(field, save_stack=False,
- save_final_image=True,
- save_slice_images=False,
- apply_halo_mask=True)
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/light_ray.py
--- /dev/null
+++ b/doc/source/cookbook/light_ray.py
@@ -0,0 +1,25 @@
+import os
+import sys
+import yt
+from yt.analysis_modules.cosmological_observation.api import \
+ LightRay
+
+# Create a directory for the light rays
+if not os.path.isdir("LR"):
+ os.mkdir('LR')
+
+# Create a LightRay object extending from z = 0 to z = 0.1
+# and use only the redshift dumps.
+lr = LightRay("enzo_tiny_cosmology/32Mpc_32.enzo",
+ 'Enzo', 0.0, 0.1,
+ use_minimum_datasets=True,
+ time_data=False)
+
+# Make a light ray, and set njobs to -1 to use one core
+# per dataset.
+lr.make_light_ray(seed=123456789,
+ solution_filename='LR/lightraysolution.txt',
+ data_filename='LR/lightray.h5',
+ fields=['temperature', 'density'],
+ get_los_velocity=True,
+ njobs=-1)
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/make_light_ray.py
--- a/doc/source/cookbook/make_light_ray.py
+++ /dev/null
@@ -1,69 +0,0 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
-import os
-import sys
-import yt
-from yt.analysis_modules.halo_profiler.api import HaloProfiler
-from yt.analysis_modules.cosmological_observation.light_ray.light_ray import \
- LightRay
-
-# Create a directory for the light rays
-if not os.path.isdir("LR"):
- os.mkdir('LR')
-
-# Create a LightRay object extending from z = 0 to z = 0.1
-# and use only the redshift dumps.
-lr = LightRay("enzo_tiny_cosmology/32Mpc_32.enzo",
- 'Enzo', 0.0, 0.1,
- use_minimum_datasets=True,
- time_data=False)
-
-# Configure the HaloProfiler.
-# These are keyword arguments given when creating a
-# HaloProfiler object.
-halo_profiler_kwargs = {'halo_list_file': 'HopAnalysis.out',
- 'output_dir' : 'halo_analysis'}
-
-# Create a list of actions for the HaloProfiler to take.
-halo_profiler_actions = []
-
-# Each item in the list is a dictionary containing three things:
-# 1. 'function' - the function to be called.
-# 2. 'args' - a list of arguments given with the function.
-# 3. 'kwargs' - a dictionary of keyword arguments.
-
-# Add a virial filter.
-halo_profiler_actions.append({'function': HaloProfiler.add_halo_filter,
- 'args': [VirialFilter],
- 'kwargs': {'must_be_virialized':False,
- 'overdensity_field':'ActualOverdensity',
- 'virial_overdensity':100,
- 'virial_filters':[['TotalMassMsun','>','1e5']],
- 'virial_quantities':['TotalMassMsun','RadiusMpc']}})
-
-# Add a call to make the profiles.
-halo_profiler_actions.append({'function': HaloProfiler.make_profiles,
- 'kwargs': {'filename': "VirializedHalos.out"}})
-
-# Specify the desired halo list is the filtered list.
-# If 'all' is given instead, the full list will be used.
-halo_list = 'filtered'
-
-# Put them all into one dictionary.
-halo_profiler_parameters=dict(halo_profiler_kwargs=halo_profiler_kwargs,
- halo_profiler_actions=halo_profiler_actions,
- halo_list=halo_list)
-
-# Make a light ray, and set njobs to -1 to use one core
-# per dataset.
-lr.make_light_ray(seed=123456789,
- solution_filename='LR/lightraysolution.txt',
- data_filename='LR/lightray.h5',
- fields=['temperature', 'density'],
- get_nearest_halo=True,
- nearest_halo_fields=['TotalMassMsun_100',
- 'RadiusMpc_100'],
- halo_profiler_parameters=halo_profiler_parameters,
- get_los_velocity=True,
- njobs=-1)
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda doc/source/cookbook/unique_light_cone_projections.py
--- a/doc/source/cookbook/unique_light_cone_projections.py
+++ /dev/null
@@ -1,34 +0,0 @@
-### THIS RECIPE IS CURRENTLY BROKEN IN YT-3.0
-### DO NOT TRUST THIS RECIPE UNTIL THIS LINE IS REMOVED
-
-import yt
-from yt.analysis_modules.cosmological_observation.light_cone.light_cone import LightCone
-
-# Instantiate a light cone.
-lc = LightCone("enzo_tiny_cosmology/32Mpc_32.enzo", 'Enzo', 0, 0.1,
- observer_redshift=0.0,
- field_of_view_in_arcminutes=120.0,
- image_resolution_in_arcseconds=60.0,
- use_minimum_datasets=True,
- time_data=False,
- output_dir='LC_U', output_prefix='LightCone')
-
-# Try to find 10 solutions that have at most 10% volume in
-# common and give up after 50 consecutive failed attempts.
-# The recycle=True setting tells the code to first attempt
-# to use solutions with the same projection axes as other
-# solutions. This will save time when making the projection.
-yt.find_unique_solutions(lc, max_overlap=0.10, failures=50,
- seed=123456789, recycle=True,
- solutions=10, filename='LC_U/unique.dat')
-
-# Choose the field to be projected.
-field = 'SZY'
-
-# Make light cone projections with each of the random seeds
-# found above. All output files will be written with unique
-# names based on the random seed numbers.
-yt.project_unique_light_cones(lc, 'LC_U/unique.dat', field,
- save_stack=False,
- save_final_image=True,
- save_slice_images=False)
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda yt/analysis_modules/cosmological_observation/api.py
--- a/yt/analysis_modules/cosmological_observation/api.py
+++ b/yt/analysis_modules/cosmological_observation/api.py
@@ -17,9 +17,7 @@
CosmologySplice
from .light_cone.api import \
- LightCone, \
- find_unique_solutions, \
- project_unique_light_cones
+ LightCone
from .light_ray.api import \
LightRay
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda yt/analysis_modules/cosmological_observation/cosmology_splice.py
--- a/yt/analysis_modules/cosmological_observation/cosmology_splice.py
+++ b/yt/analysis_modules/cosmological_observation/cosmology_splice.py
@@ -78,8 +78,9 @@
Examples
--------
- >>> cosmo = es.create_cosmology_splice(1.0, 0.0, minimal=True,
- deltaz_min=0.0)
+
+ >>> co = CosmologySplice("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
+ >>> cosmo = co.create_cosmology_splice(1.0, 0.0)
"""
@@ -133,12 +134,12 @@
# fill redshift space with datasets
while ((z > near_redshift) and
- (np.fabs(z - near_redshift) > z_Tolerance)):
+ (np.abs(z - near_redshift) > z_Tolerance)):
# For first data dump, choose closest to desired redshift.
if (len(cosmology_splice) == 0):
             # Sort data outputs by proximity to current redshift.
- self.splice_outputs.sort(key=lambda obj:np.fabs(z - \
+ self.splice_outputs.sort(key=lambda obj:np.abs(z - \
obj['redshift']))
cosmology_splice.append(self.splice_outputs[0])
@@ -153,20 +154,20 @@
if current_slice is cosmology_splice[-1]:
near_redshift = cosmology_splice[-1]['redshift'] - \
- cosmology_splice[-1]['deltazMax']
+ cosmology_splice[-1]['dz_max']
mylog.error("Cosmology splice incomplete due to insufficient data outputs.")
break
else:
cosmology_splice.append(current_slice)
z = cosmology_splice[-1]['redshift'] - \
- cosmology_splice[-1]['deltazMax']
+ cosmology_splice[-1]['dz_max']
# Make light ray using maximum number of datasets (minimum spacing).
else:
# Sort data outputs by proximity to current redshift.
- self.splice_outputs.sort(key=lambda obj:np.fabs(far_redshift -
- obj['redshift']))
+ self.splice_outputs.sort(key=lambda obj:np.abs(far_redshift -
+ obj['redshift']))
# For first data dump, choose closest to desired redshift.
cosmology_splice.append(self.splice_outputs[0])
@@ -175,14 +176,14 @@
if (nextOutput['redshift'] <= near_redshift):
break
if ((cosmology_splice[-1]['redshift'] - nextOutput['redshift']) >
- cosmology_splice[-1]['deltazMin']):
+ cosmology_splice[-1]['dz_min']):
cosmology_splice.append(nextOutput)
nextOutput = nextOutput['next']
if (cosmology_splice[-1]['redshift'] -
- cosmology_splice[-1]['deltazMax']) > near_redshift:
+ cosmology_splice[-1]['dz_max']) > near_redshift:
mylog.error("Cosmology splice incomplete due to insufficient data outputs.")
near_redshift = cosmology_splice[-1]['redshift'] - \
- cosmology_splice[-1]['deltazMax']
+ cosmology_splice[-1]['dz_max']
mylog.info("create_cosmology_splice: Used %d data dumps to get from z = %f to %f." %
(len(cosmology_splice), far_redshift, near_redshift))
@@ -253,7 +254,7 @@
z = rounded
deltaz_max = self._deltaz_forward(z, self.simulation.box_size)
- outputs.append({'redshift': z, 'deltazMax': deltaz_max})
+ outputs.append({'redshift': z, 'dz_max': deltaz_max})
z -= deltaz_max
mylog.info("%d data dumps will be needed to get from z = %f to %f." %
@@ -282,28 +283,24 @@
# at a given redshift using Newton's method.
z1 = z
z2 = z1 - 0.1 # just an initial guess
- distance1 = 0.0
+ distance1 = self.simulation.quan(0.0, "Mpccm / h")
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration = 1
- # Convert comoving radial distance into Mpc / h,
- # since that's how box size is stored.
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.simulation.hubble_constant
-
- while ((np.fabs(distance2-target_distance)/distance2) > d_Tolerance):
+ while ((np.abs(distance2-target_distance)/distance2) > d_Tolerance):
m = (distance2 - distance1) / (z2 - z1)
z1 = z2
distance1 = distance2
- z2 = ((target_distance - distance2) / m) + z2
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.simulation.hubble_constant
+ z2 = ((target_distance - distance2) / m.in_units("Mpccm / h")) + z2
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration += 1
if (iteration > max_Iterations):
- mylog.error("calculate_deltaz_max: Warning - max iterations exceeded for z = %f (delta z = %f)." %
- (z, np.fabs(z2 - z)))
+ mylog.error("calculate_deltaz_max: Warning - max iterations " +
+ "exceeded for z = %f (delta z = %f)." %
+ (z, np.abs(z2 - z)))
break
- output['deltazMax'] = np.fabs(z2 - z)
-
+ output['dz_max'] = np.abs(z2 - z)
+
def _calculate_deltaz_min(self, deltaz_min=0.0):
r"""Calculate delta z that corresponds to a single top grid pixel
going from z to (z - delta z).
@@ -322,28 +319,24 @@
# top grid pixel at a given redshift using Newton's method.
z1 = z
z2 = z1 - 0.01 # just an initial guess
- distance1 = 0.0
+ distance1 = self.simulation.quan(0.0, "Mpccm / h")
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration = 1
- # Convert comoving radial distance into Mpc / h,
- # since that's how box size is stored.
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.simulation.hubble_constant
-
- while ((np.fabs(distance2 - target_distance) / distance2) > d_Tolerance):
+ while ((np.abs(distance2 - target_distance) / distance2) > d_Tolerance):
m = (distance2 - distance1) / (z2 - z1)
z1 = z2
distance1 = distance2
- z2 = ((target_distance - distance2) / m) + z2
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.simulation.hubble_constant
+ z2 = ((target_distance - distance2) / m.in_units("Mpccm / h")) + z2
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration += 1
if (iteration > max_Iterations):
- mylog.error("calculate_deltaz_max: Warning - max iterations exceeded for z = %f (delta z = %f)." %
- (z, np.fabs(z2 - z)))
+ mylog.error("calculate_deltaz_max: Warning - max iterations " +
+ "exceeded for z = %f (delta z = %f)." %
+ (z, np.abs(z2 - z)))
break
# Use this calculation or the absolute minimum specified by the user.
- output['deltazMin'] = max(np.fabs(z2 - z), deltaz_min)
+ output['dz_min'] = max(np.abs(z2 - z), deltaz_min)
def _deltaz_forward(self, z, target_distance):
r"""Calculate deltaz corresponding to moving a comoving distance
@@ -357,24 +350,20 @@
# box at a given redshift.
z1 = z
z2 = z1 - 0.1 # just an initial guess
- distance1 = 0.0
+ distance1 = self.simulation.quan(0.0, "Mpccm / h")
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration = 1
- # Convert comoving radial distance into Mpc / h,
- # since that's how box size is stored.
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.cosmology.hubble_constant
-
- while ((np.fabs(distance2 - target_distance)/distance2) > d_Tolerance):
+ while ((np.abs(distance2 - target_distance)/distance2) > d_Tolerance):
m = (distance2 - distance1) / (z2 - z1)
z1 = z2
distance1 = distance2
- z2 = ((target_distance - distance2) / m) + z2
- distance2 = self.cosmology.comoving_radial_distance(z2, z) * \
- self.cosmology.hubble_constant
+ z2 = ((target_distance - distance2) / m.in_units("Mpccm / h")) + z2
+ distance2 = self.cosmology.comoving_radial_distance(z2, z)
iteration += 1
if (iteration > max_Iterations):
- mylog.error("deltaz_forward: Warning - max iterations exceeded for z = %f (delta z = %f)." %
- (z, np.fabs(z2 - z)))
+ mylog.error("deltaz_forward: Warning - max iterations " +
+ "exceeded for z = %f (delta z = %f)." %
+ (z, np.abs(z2 - z)))
break
- return np.fabs(z2 - z)
+ return np.abs(z2 - z)
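For readers skimming the diff: `_deltaz_forward` (and the two `_calculate_deltaz_*` routines above) is a secant iteration that finds the redshift interval spanning a target comoving distance; this changeset mainly swaps the manual Mpc/h conversions for yt's unitful quantities. A minimal standalone sketch of the iteration, using a toy linear `comoving_distance` in place of yt's cosmology calculator (all names here are illustrative, not the yt API):

```python
# Sketch of the secant iteration used by _deltaz_forward: find dz such
# that the comoving distance from (z - dz) to z equals target_distance.

def comoving_distance(z1, z2):
    # Toy stand-in: distance proportional to the redshift interval.
    # A real implementation integrates the Friedmann equation.
    return 3000.0 * (z2 - z1)

def deltaz_forward(z, target_distance, tol=1.0e-4, max_iterations=100):
    """Secant method: step z2 until distance(z2, z) matches the target."""
    z1 = z
    z2 = z1 - 0.1              # just an initial guess
    d1 = 0.0
    d2 = comoving_distance(z2, z)
    iteration = 1
    while abs(d2 - target_distance) / d2 > tol:
        m = (d2 - d1) / (z2 - z1)          # local slope d(distance)/dz
        z1, d1 = z2, d2
        z2 = (target_distance - d2) / m + z2
        d2 = comoving_distance(z2, z)
        iteration += 1
        if iteration > max_iterations:
            break                          # give up, mirror the mylog.error path
    return abs(z2 - z)
```

With the linear toy model the iteration converges in one step; the real cosmological distance is nonlinear in z, which is why the code caps the iteration count.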
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda yt/analysis_modules/cosmological_observation/light_cone/api.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/api.py
+++ b/yt/analysis_modules/cosmological_observation/light_cone/api.py
@@ -1,5 +1,5 @@
"""
-API for lightcone
+API for light_cone
@@ -15,7 +15,3 @@
from .light_cone import \
LightCone
-
-from .unique_solution import \
- project_unique_light_cones, \
- find_unique_solutions
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda yt/analysis_modules/cosmological_observation/light_cone/common_n_volume.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/common_n_volume.py
+++ /dev/null
@@ -1,118 +0,0 @@
-"""
-Function to calculate volume in common between two n-cubes, with optional
-periodic boundary conditions.
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-import numpy as np
-
-def common_volume(n_cube_1, n_cube_2, periodic=None):
- "Return the n-volume in common between the two n-cubes."
-
- # Check for proper args.
- if ((len(np.shape(n_cube_1)) != 2) or
- (np.shape(n_cube_1)[1] != 2) or
- (np.shape(n_cube_1) != np.shape(n_cube_2))):
- print "Arguments must be 2 (n, 2) numpy array."
- return 0
-
- if ((periodic is not None) and
- (np.shape(n_cube_1) != np.shape(periodic))):
- print "periodic argument must be (n, 2) numpy array."
- return 0
-
- nCommon = 1.0
- for q in range(np.shape(n_cube_1)[0]):
- if (periodic is None):
- nCommon *= common_segment(n_cube_1[q], n_cube_2[q])
- else:
- nCommon *= common_segment(n_cube_1[q], n_cube_2[q],
- periodic=periodic[q])
-
- return nCommon
-
-def common_segment(seg1, seg2, periodic=None):
- "Return the length of the common segment."
-
- # Check for proper args.
- if ((len(seg1) != 2) or (len(seg2) != 2)):
- print "Arguments must be arrays of size 2."
- return 0
-
- # If not periodic, then this is very easy.
- if periodic is None:
- seg1.sort()
- len1 = seg1[1] - seg1[0]
- seg2.sort()
- len2 = seg2[1] - seg2[0]
-
- common = 0.0
-
- add = seg1[1] - seg2[0]
- if ((add > 0) and (add <= max(len1, len2))):
- common += add
- add = seg2[1] - seg1[0]
- if ((add > 0) and (add <= max(len1, len2))):
- common += add
- common = min(common, len1, len2)
- return common
-
- # If periodic, it's a little more complicated.
- else:
- if len(periodic) != 2:
- print "periodic array must be of size 2."
- return 0
-
- seg1.sort()
- flen1 = seg1[1] - seg1[0]
- len1 = flen1 - int(flen1)
- seg2.sort()
- flen2 = seg2[1] - seg2[0]
- len2 = flen2 - int(flen2)
-
- periodic.sort()
- scale = periodic[1] - periodic[0]
-
- if (abs(int(flen1)-int(flen2)) >= scale):
- return min(flen1, flen2)
-
- # Adjust for periodicity
- seg1[0] = np.mod(seg1[0], scale) + periodic[0]
- seg1[1] = seg1[0] + len1
- if (seg1[1] > periodic[1]): seg1[1] -= scale
- seg2[0] = np.mod(seg2[0], scale) + periodic[0]
- seg2[1] = seg2[0] + len2
- if (seg2[1] > periodic[1]): seg2[1] -= scale
-
- # create list of non-periodic segments
- pseg1 = []
- if (seg1[0] >= seg1[1]):
- pseg1.append([seg1[0], periodic[1]])
- pseg1.append([periodic[0], seg1[1]])
- else:
- pseg1.append(seg1)
- pseg2 = []
- if (seg2[0] >= seg2[1]):
- pseg2.append([seg2[0], periodic[1]])
- pseg2.append([periodic[0], seg2[1]])
- else:
- pseg2.append(seg2)
-
- # Add up common segments.
- common = min(int(flen1), int(flen2))
-
- for subseg1 in pseg1:
- for subseg2 in pseg2:
- common += common_segment(subseg1, subseg2)
-
- return common
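The deleted `common_segment`/`common_volume` helpers computed the overlap of 1-D intervals and, by taking the product over dimensions, the common volume of two axis-aligned n-cubes. In the non-periodic case the segment overlap reduces to the standard max/min formula; a minimal sketch of that core idea (names are illustrative, not the removed API):

```python
# Overlap of two 1-D intervals, and the common n-volume of two
# axis-aligned n-cubes as the product of per-axis overlaps.

def segment_overlap(seg1, seg2):
    """Length of the intersection of two intervals given as (lo, hi)."""
    lo = max(min(seg1), min(seg2))
    hi = min(max(seg1), max(seg2))
    return max(0.0, hi - lo)

def volume_overlap(cube1, cube2):
    """Common n-volume of two n-cubes, each a list of (lo, hi) pairs,
    one pair per dimension."""
    volume = 1.0
    for s1, s2 in zip(cube1, cube2):
        volume *= segment_overlap(s1, s2)
    return volume
```

The removed implementation also handled periodic boundaries by splitting wrapped intervals into non-periodic pieces and summing their overlaps, which is omitted here.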
diff -r 8f2015a0da2717edbb82f2dbd4f2073b0d95b63c -r ef7c988b774a159c854d7a2fefc000e767007eda yt/analysis_modules/cosmological_observation/light_cone/halo_mask.py
--- a/yt/analysis_modules/cosmological_observation/light_cone/halo_mask.py
+++ /dev/null
@@ -1,383 +0,0 @@
-"""
-Light cone halo mask functions.
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-import copy
-import h5py
-import numpy as np
-
-from yt.funcs import *
-from yt.analysis_modules.halo_profiler.api import \
- HaloProfiler
-from yt.convenience import load
-from yt.utilities.parallel_tools.parallel_analysis_interface import \
- parallel_objects, \
- parallel_root_only
-
-def _light_cone_halo_mask(lightCone, cube_file=None,
- mask_file=None, map_file=None,
- halo_profiler_parameters=None,
- virial_overdensity=200,
- njobs=1, dynamic=False):
- "Make a boolean mask to cut clusters out of light cone projections."
-
- if halo_profiler_parameters is None:
- halo_profiler_parameters = {}
-
- pixels = int(lightCone.field_of_view_in_arcminutes * 60.0 /
- lightCone.image_resolution_in_arcseconds)
-
- # Loop through files in light cone solution and get virial quantities.
- halo_map_storage = {}
- for my_storage, my_slice in \
- parallel_objects(lightCone.light_cone_solution,
- njobs=njobs, dynamic=dynamic,
- storage=halo_map_storage):
- halo_list = _get_halo_list(my_slice['filename'],
- **halo_profiler_parameters)
- my_storage.result = \
- {'mask': _make_slice_mask(my_slice, halo_list, pixels,
- virial_overdensity)}
- if map_file is not None:
- my_storage.result['map'] = \
- _make_slice_halo_map(my_slice, halo_list,
- virial_overdensity)
-
- # Reassemble halo mask and map lists.
- light_cone_mask = []
- halo_map = []
- all_slices = halo_map_storage.keys()
- all_slices.sort()
- for i in all_slices:
- light_cone_mask.append(halo_map_storage[i]['mask'])
- if map_file is not None:
- halo_map.extend(halo_map_storage[i]['map'])
- del halo_map_storage
-
- # Write out cube of masks from each slice.
- if cube_file is not None:
- _write_halo_mask(cube_file, np.array(light_cone_mask))
-
- # Write out a text list of all halos in the image.
- if map_file is not None:
- _write_halo_map(map_file, halo_map)
-
- # Write out final mask.
- if mask_file is not None:
- # Final mask is simply the product of the mask from each slice.
- final_mask = np.ones(shape=(pixels, pixels))
- for mask in light_cone_mask:
- final_mask *= mask
- _write_halo_mask(mask_file, final_mask)
-
- return light_cone_mask
-
-@parallel_root_only
-def _write_halo_mask(filename, halo_mask):
- r"""Write out an hdf5 file with the halo mask that
- can be applied to an image.
- """
-
- mylog.info("Saving halo mask to %s." % filename)
- output = h5py.File(filename, 'a')
- if 'HaloMask' in output.keys():
- del output['HaloMask']
- output.create_dataset('HaloMask', data=np.array(halo_mask))
- output.close()
-
-@parallel_root_only
-def _write_halo_map(filename, halo_map):
- "Write a text list of halos in a light cone image."
-
- mylog.info("Saving halo map to %s." % filename)
- f = open(filename, 'w')
- f.write("#z x y r_image r_mpc m_Msun\n")
- for halo in halo_map:
- f.write("%7.4f %9.6f %9.6f %9.3e %9.3e %9.3e\n" % \
- (halo['redshift'], halo['x'], halo['y'],
- halo['radius_image'], halo['radius_mpc'],
- halo['mass']))
- f.close()
-
-def _get_halo_list(dataset, halo_profiler_kwargs=None,
- halo_profiler_actions=None, halo_list='all'):
- "Load a list of halos for the dataset."
-
- if halo_profiler_kwargs is None: halo_profiler_kwargs = {}
- if halo_profiler_actions is None: halo_profiler_actions = []
-
- hp = HaloProfiler(dataset, **halo_profiler_kwargs)
- for action in halo_profiler_actions:
- if not action.has_key('args'): action['args'] = ()
- if not action.has_key('kwargs'): action['kwargs'] = {}
- action['function'](hp, *action['args'], **action['kwargs'])
-
- if halo_list == 'all':
- return_list = copy.deepcopy(hp.all_halos)
- elif halo_list == 'filtered':
- return_list = copy.deepcopy(hp.filtered_halos)
- else:
- mylog.error("Keyword, halo_list, must be either 'all' or 'filtered'.")
- return_list = None
-
- del hp
- return return_list
-
-def _make_slice_mask(slice, halo_list, pixels, virial_overdensity):
- "Make halo mask for one slice in light cone solution."
-
- # Get shifted, tiled halo list.
- all_halo_x, all_halo_y, \
- all_halo_radius, all_halo_mass = \
- _make_slice_halo_list(slice, halo_list, virial_overdensity)
-
- # Make boolean mask and cut out halos.
- dx = slice['box_width_fraction'] / pixels
- x = [(q + 0.5) * dx for q in range(pixels)]
- haloMask = np.ones(shape=(pixels, pixels), dtype=bool)
-
- # Cut out any pixel that has any part at all in the circle.
- for q in range(len(all_halo_radius)):
- dif_xIndex = np.array(int(all_halo_x[q]/dx) -
- np.array(range(pixels))) != 0
- dif_yIndex = np.array(int(all_halo_y[q]/dx) -
- np.array(range(pixels))) != 0
-
- xDistance = (np.abs(x - all_halo_x[q]) -
- (0.5 * dx)) * dif_xIndex
- yDistance = (np.abs(x - all_halo_y[q]) -
- (0.5 * dx)) * dif_yIndex
-
- distance = np.array([np.sqrt(w**2 + xDistance**2)
- for w in yDistance])
- haloMask *= (distance >= all_halo_radius[q])
-
- return haloMask
-
-def _make_slice_halo_map(slice, halo_list, virial_overdensity):
- "Make list of halos for one slice in light cone solution."
-
- # Get units to convert virial radii back to physical units.
- dataset_object = load(slice['filename'])
- Mpc_units = dataset_object.units['mpc']
- del dataset_object
-
- # Get shifted, tiled halo list.
- all_halo_x, all_halo_y, \
- all_halo_radius, all_halo_mass = \
- _make_slice_halo_list(slice, halo_list, virial_overdensity)
-
- # Construct list of halos
- halo_map = []
-
- for q in range(len(all_halo_x)):
- # Give radius in both physics units and
- # units of the image (0 to 1).
- radius_mpc = all_halo_radius[q] * Mpc_units
- radius_image = all_halo_radius[q] / slice['box_width_fraction']
-
- halo_map.append({'x': all_halo_x[q] / slice['box_width_fraction'],
- 'y': all_halo_y[q] / slice['box_width_fraction'],
- 'redshift': slice['redshift'],
- 'radius_mpc': radius_mpc,
- 'radius_image': radius_image,
- 'mass': all_halo_mass[q]})
-
- return halo_map
-
-def _make_slice_halo_list(slice, halo_list, virial_overdensity):
- "Make shifted, tiled list of halos for halo mask and halo map."
-
- # Make numpy arrays for halo centers and virial radii.
- halo_x = []
- halo_y = []
- halo_depth = []
- halo_radius = []
- halo_mass = []
-
- # Get units to convert virial radii to code units.
- dataset_object = load(slice['filename'])
- Mpc_units = dataset_object.units['mpc']
- del dataset_object
-
- for halo in halo_list:
- if halo is not None:
- center = copy.deepcopy(halo['center'])
- halo_depth.append(center.pop(slice['projection_axis']))
- halo_x.append(center[0])
- halo_y.append(center[1])
- halo_radius.append(halo['RadiusMpc_%d' % virial_overdensity] /
- Mpc_units)
- halo_mass.append(halo['TotalMassMsun_%d' % virial_overdensity])
-
- halo_x = np.array(halo_x)
- halo_y = np.array(halo_y)
- halo_depth = np.array(halo_depth)
- halo_radius = np.array(halo_radius)
- halo_mass = np.array(halo_mass)
-
- # Adjust halo centers along line of sight.
- depth_center = slice['projection_center'][slice['projection_axis']]
- depth_left = depth_center - 0.5 * slice['box_depth_fraction']
- depth_right = depth_center + 0.5 * slice['box_depth_fraction']
-
- # Make boolean mask to pick out centers in region along line of sight.
- # Halos near edges may wrap around to other side.
- add_left = (halo_depth + halo_radius) > 1 # should be box width
- add_right = (halo_depth - halo_radius) < 0
-
- halo_depth = np.concatenate([halo_depth,
- (halo_depth[add_left]-1),
- (halo_depth[add_right]+1)])
- halo_x = np.concatenate([halo_x, halo_x[add_left], halo_x[add_right]])
- halo_y = np.concatenate([halo_y, halo_y[add_left], halo_y[add_right]])
- halo_radius = np.concatenate([halo_radius,
- halo_radius[add_left],
- halo_radius[add_right]])
- halo_mass = np.concatenate([halo_mass,
- halo_mass[add_left],
- halo_mass[add_right]])
-
- del add_left, add_right
-
- # Cut out the halos outside the region of interest.
- if (slice['box_depth_fraction'] < 1):
- if (depth_left < 0):
- mask = ((halo_depth + halo_radius >= 0) &
- (halo_depth - halo_radius <= depth_right)) | \
- ((halo_depth + halo_radius >= depth_left + 1) &
- (halo_depth - halo_radius <= 1))
- elif (depth_right > 1):
- mask = ((halo_depth + halo_radius >= 0) &
- (halo_depth - halo_radius <= depth_right - 1)) | \
- ((halo_depth + halo_radius >= depth_left) &
- (halo_depth - halo_radius <= 1))
- else:
- mask = (halo_depth + halo_radius >= depth_left) & \
- (halo_depth - halo_radius <= depth_right)
-
- halo_x = halo_x[mask]
- halo_y = halo_y[mask]
- halo_radius = halo_radius[mask]
- halo_mass = halo_mass[mask]
- del mask
- del halo_depth
-
- all_halo_x = np.array([])
- all_halo_y = np.array([])
- all_halo_radius = np.array([])
- all_halo_mass = np.array([])
-
- # Tile halos of width box fraction is greater than one.
- # Copy original into offset positions to make tiles.
- for x in range(int(np.ceil(slice['box_width_fraction']))):
- for y in range(int(np.ceil(slice['box_width_fraction']))):
- all_halo_x = np.concatenate([all_halo_x, halo_x+x])
- all_halo_y = np.concatenate([all_halo_y, halo_y+y])
- all_halo_radius = np.concatenate([all_halo_radius, halo_radius])
- all_halo_mass = np.concatenate([all_halo_mass, halo_mass])
-
- del halo_x, halo_y, halo_radius, halo_mass
-
- # Shift centers laterally.
- offset = copy.deepcopy(slice['projection_center'])
- del offset[slice['projection_axis']]
-
- # Shift x and y positions.
- all_halo_x -= offset[0]
- all_halo_y -= offset[1]
-
- # Wrap off-edge centers back around to
- # other side (periodic boundary conditions).
- all_halo_x[all_halo_x < 0] += np.ceil(slice['box_width_fraction'])
- all_halo_y[all_halo_y < 0] += np.ceil(slice['box_width_fraction'])
-
- # After shifting, some centers have fractional coverage
- # on both sides of the box.
- # Find those centers and make copies to be placed on the other side.
-
- # Centers hanging off the right edge.
- add_x_right = all_halo_x + all_halo_radius > \
- np.ceil(slice['box_width_fraction'])
- add_x_halo_x = all_halo_x[add_x_right]
- add_x_halo_x -= np.ceil(slice['box_width_fraction'])
- add_x_halo_y = all_halo_y[add_x_right]
- add_x_halo_radius = all_halo_radius[add_x_right]
- add_x_halo_mass = all_halo_mass[add_x_right]
- del add_x_right
-
- # Centers hanging off the left edge.
- add_x_left = all_halo_x - all_halo_radius < 0
- add2_x_halo_x = all_halo_x[add_x_left]
- add2_x_halo_x += np.ceil(slice['box_width_fraction'])
- add2_x_halo_y = all_halo_y[add_x_left]
- add2_x_halo_radius = all_halo_radius[add_x_left]
- add2_x_halo_mass = all_halo_mass[add_x_left]
- del add_x_left
-
- # Centers hanging off the top edge.
- add_y_right = all_halo_y + all_halo_radius > \
- np.ceil(slice['box_width_fraction'])
- add_y_halo_x = all_halo_x[add_y_right]
- add_y_halo_y = all_halo_y[add_y_right]
- add_y_halo_y -= np.ceil(slice['box_width_fraction'])
- add_y_halo_radius = all_halo_radius[add_y_right]
- add_y_halo_mass = all_halo_mass[add_y_right]
- del add_y_right
-
- # Centers hanging off the bottom edge.
- add_y_left = all_halo_y - all_halo_radius < 0
- add2_y_halo_x = all_halo_x[add_y_left]
- add2_y_halo_y = all_halo_y[add_y_left]
- add2_y_halo_y += np.ceil(slice['box_width_fraction'])
- add2_y_halo_radius = all_halo_radius[add_y_left]
- add2_y_halo_mass = all_halo_mass[add_y_left]
- del add_y_left
-
- # Add the hanging centers back to the projection data.
- all_halo_x = np.concatenate([all_halo_x,
- add_x_halo_x, add2_x_halo_x,
- add_y_halo_x, add2_y_halo_x])
- all_halo_y = np.concatenate([all_halo_y,
- add_x_halo_y, add2_x_halo_y,
- add_y_halo_y, add2_y_halo_y])
- all_halo_radius = np.concatenate([all_halo_radius,
- add_x_halo_radius,
- add2_x_halo_radius,
- add_y_halo_radius,
- add2_y_halo_radius])
- all_halo_mass = np.concatenate([all_halo_mass,
- add_x_halo_mass,
- add2_x_halo_mass,
- add_y_halo_mass,
- add2_y_halo_mass])
-
- del add_x_halo_x, add_x_halo_y, add_x_halo_radius
- del add2_x_halo_x, add2_x_halo_y, add2_x_halo_radius
- del add_y_halo_x, add_y_halo_y, add_y_halo_radius
- del add2_y_halo_x, add2_y_halo_y, add2_y_halo_radius
-
- # Cut edges to proper width.
- cut_mask = (all_halo_x - all_halo_radius <
- slice['box_width_fraction']) & \
- (all_halo_y - all_halo_radius <
- slice['box_width_fraction'])
- all_halo_x = all_halo_x[cut_mask]
- all_halo_y = all_halo_y[cut_mask]
- all_halo_radius = all_halo_radius[cut_mask]
- all_halo_mass = all_halo_mass[cut_mask]
- del cut_mask
-
- return (all_halo_x, all_halo_y,
- all_halo_radius, all_halo_mass)
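The deleted `_make_slice_mask` built a boolean image mask that cuts circular halo regions out of a light cone projection. A simplified NumPy sketch of the same idea, vectorized over the pixel grid (function name and signature are illustrative, not the removed API):

```python
import numpy as np

def halo_mask(pixels, width, centers, radii):
    """Return a pixels x pixels boolean mask that is True outside all
    halo circles. `centers` is a sequence of (x, y) pairs and `radii`
    a matching sequence, both in the same units as `width`."""
    dx = width / pixels
    # Pixel-center coordinates along one axis.
    x = (np.arange(pixels) + 0.5) * dx
    xx, yy = np.meshgrid(x, x, indexing="ij")
    mask = np.ones((pixels, pixels), dtype=bool)
    for (cx, cy), r in zip(centers, radii):
        # Mask out every pixel whose center lies inside this halo.
        dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
        mask &= dist >= r
    return mask
```

The removed code additionally tiled and periodically shifted the halo list to match the randomized light cone slices before applying this distance test.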
This diff is so big that we needed to truncate the remainder.
Repository URL: https://bitbucket.org/yt_analysis/yt/