[yt-svn] commit/yt-doc: 2 new changesets
commits-noreply at bitbucket.org
Sun Nov 3 17:31:00 PST 2013
2 new commits in yt-doc:
https://bitbucket.org/yt_analysis/yt-doc/commits/4e7702d37f08/
Changeset: 4e7702d37f08
User: ngoldbaum
Date: 2013-11-04 02:18:55
Summary: Removing RAMSES and ART docs since they are no longer supported in 2.6.
Affected #: 1 file
diff -r 02502401d183eb3d121da1a00a6c3513a98e490d -r 4e7702d37f0834ed1997390ab5f11ffbb3ad4656 source/examining/supported_frontends_data.rst
--- a/source/examining/supported_frontends_data.rst
+++ b/source/examining/supported_frontends_data.rst
@@ -125,114 +125,6 @@
positions will not be.
* Domains may be visualized assuming periodicity.
-.. _loading-ramses-data:
-
-RAMSES Data
------------
-
-RAMSES data enjoys preliminary support and is cared for by Matthew Turk. If
-you are interested in taking a development or stewardship role, please contact
-him. To load a RAMSES dataset, use the ``load`` command provided by
-``yt.mods`` and supply it the ``info*.txt`` filename. For instance, if you
-were in a directory with the following files:
-
-.. code-block:: none
-
- output_00007
- output_00007/amr_00007.out00001
- output_00007/grav_00007.out00001
- output_00007/hydro_00007.out00001
- output_00007/info_00007.txt
- output_00007/part_00007.out00001
-
-You would feed it the filename ``output_00007/info_00007.txt``:
-
-.. code-block:: python
-
- from yt.mods import *
- pf = load("output_00007/info_00007.txt")
-
-.. rubric:: Caveats
-
-* Please be careful that the units are correctly set! This may not be the
- case for RAMSES data.
-* Upon instantiation of the hierarchy, yt will attempt to regrid the entire
- domain to ensure minimum-coverage from a set of grid patches. (This is
- described in the yt method paper.) This is a time-consuming process and it
- has not yet been written to be stored between calls.
-* Particles are not supported.
-* Parallelism will not be terribly efficient for large datasets.
-* There may be occasional segfaults on multi-domain data, which do not
- reflect errors in the calculation.
-
-If you are interested in helping with RAMSES support, we are eager to hear from
-you!
-
-.. _loading-art-data:
-
-ART Data
---------
-
-ART data enjoys preliminary support and is maintained by Christopher Moody.
-Please contact the ``yt-dev`` mailing list if you are interested in using yt
-for ART data, or if you are interested in assisting with development of yt to
-work with ART data.
-
-At the moment, the ART octree is 'regridded' at each level to make the native
-octree look more like a mesh-based code. As a result, the initial outlay
-is roughly 60 seconds to grid octs onto a mesh. This will be improved in
-``yt-3.0``, where octs will be supported natively.
-
-To load an ART dataset you can use the ``load`` command provided by
-``yt.mods``, passing it the gas mesh file. It will search for and attempt
-to find the complementary dark matter and stellar particle header and data
-files. However, your simulations may not follow the same naming convention.
-
-For example, a single snapshot might have a series of files looking like
-this:
-
-.. code-block:: none
-
- 10MpcBox_csf512_a0.300.d #Gas mesh
- PMcrda0.300.DAT #Particle header
- PMcrs0a0.300.DAT #Particle data (positions,velocities)
- stars_a0.300.dat #Stellar data (metallicities, ages, etc.)
-
-The ART frontend tries to find the associated files matching the above, but
-if that fails you can specify ``file_particle_header``, ``file_particle_data``,
-and ``file_star_data`` in addition to specifying the gas mesh file. You also
-have the option of gridding particles and assigning them onto the meshes.
-This process is in beta, and for the time being it's probably best to leave
-``do_grid_particles=False`` as the default.
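The naming convention above can be sketched as a small helper: given the gas
mesh file name, derive the companion file names the frontend would look for.
This helper is purely illustrative (it is not part of yt), and the
``file_particle_header`` keyword name is an assumption reconstructed from the
particle-header file listed above.

```python
# Illustrative helper (not part of yt): derive the companion particle and
# stellar file names from a gas mesh name like "10MpcBox_csf512_a0.300.d",
# following the naming convention shown above.
def art_companion_files(gas_mesh_filename):
    base = gas_mesh_filename.rsplit("/", 1)[-1]
    # The expansion factor, e.g. "0.300", sits between the final "a" and
    # the trailing ".d".
    aexp = base.rsplit("a", 1)[1][:-2]
    return {
        "file_particle_header": "PMcrda%s.DAT" % aexp,  # assumed keyword name
        "file_particle_data": "PMcrs0a%s.DAT" % aexp,
        "file_star_data": "stars_a%s.dat" % aexp,
    }

print(art_companion_files("10MpcBox_csf512_a0.300.d"))
```

These names can then be passed to ``load`` alongside the gas mesh file when
automatic discovery fails.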
-
-To speed up the loading of an ART file, you have a few options. You can turn
-off the particles entirely by setting ``discover_particles=False``. You can
-also only grid octs up to a certain level, ``limit_level=5``, which is useful
-when debugging by artificially creating a 'smaller' dataset to work with.
-
-Finally, when stellar ages are computed we 'spread' the ages evenly within a
-smoothing window. By default this is turned on and set to 10 Myr. To turn this
-off you can set ``spread=False``, and you can tweak the age smoothing window
-by specifying the window in seconds, ``spread=1.0e7*365*24*3600``.
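As a quick sanity check on the units, the 10 Myr default window works out in
seconds as below. This is a minimal sketch assuming a 365-day year; the
``load`` calls are shown only as commented, hypothetical usage of the keyword
described above.

```python
# 10 Myr expressed in seconds, assuming a 365-day year.
ten_myr_in_seconds = 1.0e7 * 365 * 24 * 3600
print(ten_myr_in_seconds)  # ~3.15e14 seconds

# Hypothetical usage of the keyword described above:
# pf = load(fn, spread=ten_myr_in_seconds)  # explicit smoothing window
# pf = load(fn, spread=False)               # disable age smoothing
```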
-
-.. code-block:: python
-
- from yt.mods import *
-
- fn = "/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d"
- pf = load(fn, discover_particles=True, grid_particles=2, limit_level=3)
- pf.h.print_stats()
- dd = pf.h.all_data()
- print np.sum(dd['particle_type'] == 0)
-
-In the above example code, the first line imports the standard yt functions,
-followed by defining the gas mesh file. It's loaded only through level 3, but
-grids particles onto meshes on level 2 and higher. Finally, we create a data
-container and ask it to gather the ``particle_type`` array. In this case
-``type==0`` is the most highly-refined dark matter particle, and we print out
-how many of these high-resolution particles we find in the simulation.
-Typically, however, you shouldn't have to specify any keyword arguments to
-load in a dataset.
-
Athena Data
-----------
https://bitbucket.org/yt_analysis/yt-doc/commits/53e9fd3ef9ae/
Changeset: 53e9fd3ef9ae
User: ngoldbaum
Date: 2013-11-04 02:30:49
Summary: Fixing a bad reference and an RST parsing warning.
Affected #: 2 files
diff -r 4e7702d37f0834ed1997390ab5f11ffbb3ad4656 -r 53e9fd3ef9ae3089c0322d2220fdb0c27eca7bb1 source/analyzing/analysis_modules/particle_trajectories.rst
--- a/source/analyzing/analysis_modules/particle_trajectories.rst
+++ b/source/analyzing/analysis_modules/particle_trajectories.rst
@@ -1,4 +1,4 @@
Particle Trajectories
------------------------------------------
+---------------------
.. notebook:: Particle_Trajectories.ipynb
diff -r 4e7702d37f0834ed1997390ab5f11ffbb3ad4656 -r 53e9fd3ef9ae3089c0322d2220fdb0c27eca7bb1 source/examining/generic_array_data.rst
--- a/source/examining/generic_array_data.rst
+++ b/source/examining/generic_array_data.rst
@@ -1,3 +1,4 @@
+.. _loading-numpy-array:
Loading Generic Array Data
==========================
Repository URL: https://bitbucket.org/yt_analysis/yt-doc/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.