[yt-svn] commit/yt-doc: 2 new changesets

Bitbucket commits-noreply at bitbucket.org
Wed Jan 9 14:42:21 PST 2013


2 new commits in yt-doc:

https://bitbucket.org/yt_analysis/yt-doc/commits/83606a209675/
changeset:   83606a209675
user:        sskory
date:        2013-01-09 23:37:14
summary:     Adding more content to the cheat sheet.
affected #:  2 files

diff -r 3dee29549c252fa44eccf29de6326e1d09968c49 -r 83606a20967597a2b64efa2b2623c57f9313b8bd cheatsheet.tex
--- a/cheatsheet.tex
+++ b/cheatsheet.tex
@@ -99,44 +99,42 @@
 \subsection{General Info}
 For everything yt please see \url{http://yt-project.org}.
 Documentation \url{http://yt-project.org/doc/index.html}.
-Need help, start here \url{http://yt-project.org/doc/help/} and then
+Need help? Start here \url{http://yt-project.org/doc/help/} and then
 try the IRC chat room \url{http://yt-project.org/irc.html},
 or the mailing list \url{http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org}.
 
 
 \subsection{Command Line yt}
-\begin{tabular}{@{}ll@{}}
+\begin{tabular}{@{}lp{3.5cm}@{}}
 \texttt{iyt} & Load yt and IPython. \\
 \texttt{yt load} {\it dataset}    & Load a single dataset.  \\
 \texttt{yt help} & Print yt help information. \\
 \texttt{yt stats} \it{dataset}  & Print stats of a dataset. \\
-\texttt{yt update} & Update yt to \\
- & most recent version. \\
+\texttt{yt update} & Update yt to most recent version.\\
 \texttt{yt instinfo} & yt installation information. \\
-\texttt{yt serve} (\it{dataset})  &  Run web GUI.\\
-\texttt{yt upload\_image} \it{image.png}  & Upload PNG image \\
- & to imgur.com. \\
-\texttt{yt upload\_notebook} \it{notebook.nb} & Upload IPython notebook \\
- & to hub.yt-project.org.\\
+\texttt{yt notebook} & Run the IPython notebook server. \\
+\texttt{yt serve} (\it{dataset})  &  Run yt-specific web GUI.\\
+\texttt{yt upload\_image} \it{image.png}  & Upload PNG image to imgur.com. \\
+\texttt{yt upload\_notebook} \it{notebook.nb} & Upload IPython notebook to hub.yt-project.org.\\
 \texttt{yt plot} \it{dataset} & Create a set of images.\\
-\texttt{yt render} \it{dataset} & Create a simple \\
- & volume rendering. \\
-\texttt{yt mapserver} {\it dataset} & View plot in Gmaps-like \\
- & interface. \\
-\texttt{yt pastebin} \it{text.out} & Post text to the pastebin at\\
- & paste.yt-project.org. \\ 
-\texttt{yt pastebin\_grab} {\it identifier} & Print content of pastebin to \\
- & STDOUT. \\
- \texttt{yt hub\_register} & Register with\\
-  & hub.yt-project.org. \\
-\texttt{yt hub\_submit} & Submit hg repo to\\
- &  hub.yt-project.org. \\
-\texttt{yt bootstrap\_dev} & Bootstrap a yt \\
- & development environment. \\
+\texttt{yt render} \it{dataset} & Create a simple
+ volume rendering. \\
+\texttt{yt mapserver} {\it dataset} & View a plot/projection in a Gmaps-like
+ interface. \\
+\texttt{yt pastebin} \it{text.out} & Post text to the pastebin at
+ paste.yt-project.org. \\ 
+\texttt{yt pastebin\_grab} {\it identifier} & Print content of pastebin to
+ STDOUT. \\
+ \texttt{yt hub\_register} & Register with
+hub.yt-project.org. \\
+\texttt{yt hub\_submit} & Submit hg repo to
+hub.yt-project.org. \\
+\texttt{yt bootstrap\_dev} & Bootstrap a yt 
+development environment. \\
 \texttt{yt bugreport} & Report a yt bug. \\
 \texttt{yt hop} {\it dataset} &  Run hop on a dataset. \\
-\texttt{yt rpdb} & Connect to running rpd \\
- & session. 
+\texttt{yt rpdb} & Connect to a running rpdb 
+ session. 
 \end{tabular}
 
 Many commands have flags to control behavior.
@@ -151,70 +149,38 @@
 %\begin{tabular}{@{}p{\the\MyLen}%
 %                @{}p{\linewidth-\the\MyLen}@{}}
 %\begin{tabular}{@{}ll@{}}
-\begin{tabular}{@{}l}
-\texttt{from yt.mods import \textasteriskcentered} \\
-\hspace{10pt} Load base yt  modules. \\
-\texttt{from yt.config import ytcfg} \\
-\hspace{10pt} Used to set yt configuration options.\\
-\hspace{10pt} If used, must be called before importing other modules.\\
-\texttt{from yt.analysis\_modules.api import \textasteriskcentered}  \\
-\hspace{10pt} Load all yt analysis modules. \\
-\texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered} \\
-\hspace{10pt} Load halo finding modules. Other modules, listed below, \\
-\hspace{10pt} are loaded in a similar way by swapping the \\
-\hspace{10pt} {\em emphasized} text.\\
-\texttt{absorption\_spectrum}, \texttt{coordinate\_transformation}, \\
-\texttt{cosmological\_observation}, \texttt{halo\_mass\_function}, \\
-\texttt{halo\_merger\_tree}, \texttt{halo\_profiler}, \texttt{level\_sets}, \\
-\texttt{hierarchy\_subset}, \texttt{radial\_column\_density}, \\
-\texttt{spectral\_integrator}, \texttt{star\_analysis}, \texttt{sunrise\_export}, \\
+\begin{tabular}{@{}p{8cm}}
+\texttt{from yt.mods import \textasteriskcentered}  \textemdash\ 
+Load base yt  modules. \\
+\texttt{from yt.config import ytcfg}  \textemdash\ 
+Used to set yt configuration options.
+ If used, must be called before importing other modules.\\
+\texttt{from yt.analysis\_modules.api import \textasteriskcentered}   \textemdash\ 
+Load all yt analysis modules. \\
+\texttt{from yt.analysis\_modules.\emph{halo\_finding}.api import \textasteriskcentered}  \textemdash\ 
+Load halo finding modules. Other modules, listed below, 
+are loaded in a similar way by swapping the 
+{\em emphasized} text.
+\texttt{absorption\_spectrum}, \texttt{coordinate\_transformation}, 
+\texttt{cosmological\_observation}, \texttt{halo\_mass\_function}, 
+\texttt{halo\_merger\_tree}, \texttt{halo\_profiler}, \texttt{level\_sets}, 
+\texttt{hierarchy\_subset}, \texttt{radial\_column\_density}, 
+\texttt{spectral\_integrator}, \texttt{star\_analysis}, \texttt{sunrise\_export}, 
 \texttt{two\_point\_functions}
 \end{tabular}
 The import commands are entered in the Python shell or
 used as part of a script.
 
-
-\subsection{Load and Access Data}
+\subsection{Numpy Arrays}
+% More examples are probably needed especially, but this is a start.
 \settowidth{\MyLen}{\texttt{multicol} }
-%\begin{tabular}{@{}p{\the\MyLen}%
- %               @{}p{\linewidth-\the\MyLen}@{}}
-\begin{tabular}{@{}l}
-\texttt{pf = load(}{\it dataset}\texttt{)} \textemdash\   Load a single snapshot.\\
-\texttt{dd = pf.h.all\_data()} \textemdash\ Select the entire volume.\\
-\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the \\
-numpy array \texttt{a}. Similarly for other data containers.\\
-\texttt{pf.h.field\_list} \textemdash\ A list of available fields in the snapshot. \\
-\texttt{pf.h.derived\_field\_list} \textemdash\ A list of available derived fields \\
-in the snapshot. \\
-\texttt{val, loc = pf.h.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of\\
-the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\
-\texttt{sp = pf.h.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data \\ 
-container. {\it cen} may be a coordinate, or 'max' which \\
-centers on the max density point. {\it radius} may be a float in \\ 
-code units or a tuple of ({\it length, unit}).\\
-
-\texttt{re = pf.h.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a\\
-rectilinear data container. {\it cen} is required but not used.\\
-{\it left} and {\it right edge} are coordinate values that define the region.\\
-
-\texttt{di = pf.h.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ \\
-Create a cylindrical data container centered at {\it cen} along the \\
-direction set by {\it normal},with total length\\
- 2$\times${\it height} and with radius {\it radius}.
- 
-\end{tabular}
-
-
-
-\subsection{Numpy Arrays}
-\settowidth{\MyLen}{\texttt{multicol} }
-\begin{tabular}{@{}l}
+\begin{tabular}{@{}p{8cm}}
 
 \texttt{v = a.max(), a.min()} \textemdash\ Return maximum, minimum of \texttt{a}. \\
-\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max, \\
+\texttt{index = a.argmax(), a.argmin()} \textemdash\ Return index of max, 
 min value of \texttt{a}.\\
 \texttt{v = a[}{\it index}\texttt{]} \textemdash\ Select a single value from \texttt{a} at location {\it index}.\\
-\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between\\
+\texttt{b = a[}{\it i:j}\texttt{]} \textemdash\ Select the slice of values from \texttt{a} between
 locations {\it i} to {\it j-1} saved to a new numpy array \texttt{b} with length {\it j-i}. \\
 
 \end{tabular}
@@ -222,6 +188,81 @@
 Please see \url{http://docs.scipy.org/doc/numpy/reference/} for the full
 numpy documentation.
 
+\subsection{IPython Tips}
+\settowidth{\MyLen}{\texttt{multicol} }
+These tips work if IPython has been loaded, typically either by invoking
+\texttt{iyt} or \texttt{yt load} on the command line or using the IPython notebook (\texttt{yt notebook}).
+\begin{tabular}{@{}p{8cm}}
+\texttt{Tab complete} \textemdash\ IPython will attempt to auto-complete a
+variable or function name when the \texttt{Tab} key is pressed, e.g. {\it HaloFi}\textendash\texttt{Tab} would auto-complete
+to {\it HaloFinder}. This also works with imports, e.g. {\it from numpy.random.}\textendash\texttt{Tab}
+would give you a list of random functions (note the trailing period before hitting \texttt{Tab}).\\
+\texttt{?, ??} \textemdash\ Appending one or two question marks at the end of any object gives you
+detailed information about it, e.g. {\it variable\_name}?.\\
+\texttt{\%paste} \textemdash\ Paste content from the system clipboard into the IPython shell.\\
+\texttt{\%hist} \textemdash\ Print recent command history.\\
+\texttt{\%quickref} \textemdash\ Print IPython quick reference.\\
+\texttt{\%pdb} \textemdash\ Automatically enter the Python debugger at an exception.\\
+\texttt{\%time, \%timeit} \textemdash\ Find running time of expressions for benchmarking.\\
+\texttt{\%lsmagic} \textemdash\ List all available IPython magics. Hint: \texttt{?} works with magics.\\
+
+\end{tabular}
+
+Please see \url{http://ipython.org/documentation.html} for the full
+IPython documentation.
+
+\subsection{Load and Access Data}
+\settowidth{\MyLen}{\texttt{multicol} }
+%\begin{tabular}{@{}p{\the\MyLen}%
+%               @{}p{\linewidth-\the\MyLen}@{}}
+\begin{tabular}{@{}p{8cm}}
+\texttt{pf = load(}{\it dataset}\texttt{)} \textemdash\   Load a single snapshot.\\
+\texttt{dd = pf.h.all\_data()} \textemdash\ Select the entire volume.\\
+\texttt{a = dd[}{\it field\_name}\texttt{]} \textemdash\ Saves the contents of {\it field} into the
+numpy array \texttt{a}. Similarly for other data containers.\\
+\texttt{pf.h.field\_list} \textemdash\ A list of available fields in the snapshot. \\
+\texttt{pf.h.derived\_field\_list} \textemdash\ A list of available derived fields
+in the snapshot. \\
+\texttt{val, loc = pf.h.find\_max("Density")} \textemdash\ Find the \texttt{val}ue of
+the maximum of the field \texttt{Density} and its \texttt{loc}ation. \\
+\texttt{sp = pf.h.sphere(}{\it cen}\texttt{,}{\it radius}\texttt{)} \textemdash\   Create a spherical data 
+container. {\it cen} may be a coordinate, or ``max'' which 
+centers on the max density point. {\it radius} may be a float in 
+code units or a tuple of ({\it length, unit}).\\
+
+\texttt{re = pf.h.region({\it cen}, {\it left edge}, {\it right edge})} \textemdash\ Create a
+rectilinear data container. {\it cen} is required but not used.
+{\it left} and {\it right edge} are coordinate values that define the region.\\
+
+\texttt{di = pf.h.disk({\it cen}, {\it normal}, {\it radius}, {\it height})} \textemdash\ 
+Create a cylindrical data container centered at {\it cen} along the 
+direction set by {\it normal}, with total length
+ 2$\times${\it height} and with radius {\it radius}. \\
+ 
+ \texttt{bl = pf.h.boolean({\it constructor})} \textemdash\ Create a boolean data
+ container. {\it constructor} is a list of pre-defined non-boolean 
+ data containers with nested boolean logic using the
+ ``AND'', ``NOT'', or ``OR'' operators. E.g. {\it constructor=}
+ {\it [sp, ``NOT'', (di, ``OR'', re)]} gives a volume defined
+ by {\it sp} minus the patches covered by {\it di} and {\it re}.\\
+ 
+\end{tabular}
+
+
+\subsection{Plots and Projections}
+\settowidth{\MyLen}{\texttt{multicol} }
+%\begin{tabular}{@{}p{\the\MyLen}%
+%               @{}p{\linewidth-\the\MyLen}@{}}
+\begin{tabular}{@{}p{8cm}}
+\texttt{slc = SlicePlot(pf, {\it axis}, {\it field}, {\it center=}, {\it width=}, {\it weight\_field=}, {\it additional parameters})} \textemdash\ Make a slice plot
+perpendicular to {\it axis} of {\it field}, weighted by {\it weight\_field}, centered at {\it center} (in code units), 
+with {\it width} in code units or a ({\it value, unit}) tuple. Hint: try {\it SlicePlot?} to see additional parameters.\\
+\texttt{slc.save({\it file\_prefix})} \textemdash\ Save the slice to a png with name prefix {\it file\_prefix}.
+\texttt{.save()} works similarly for the commands below.\\
+
+\texttt{prj = ProjectionPlot(pf, {\it axis}, {\it field}, {\it addit. params})} \textemdash\ Make a projection.
+
+\end{tabular}
 
 
 %\rule{0.3\linewidth}{0.25pt}

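For quick reference, a minimal sketch stringing together the commands summarized in the new cheat-sheet sections (yt 2.x calls as documented above); the dataset path is a placeholder:

.. code-block:: python

   from yt.mods import *                      # load base yt modules

   pf = load("DD0010/moving7_0010")           # placeholder dataset path
   dd = pf.h.all_data()                       # select the entire volume
   a = dd["Density"]                          # field contents as a numpy array

   # Numpy operations from the "Numpy Arrays" section
   print a.max(), a.min()                     # extrema of the array
   print a.argmax(), a.argmin()               # indices of the extrema

   # Value and location of the density maximum
   val, loc = pf.h.find_max("Density")

   # Spherical data container centered on the maximum density point
   sp = pf.h.sphere("max", (100.0, "kpc"))

   # Slice plot perpendicular to the z axis, saved with a file prefix
   slc = SlicePlot(pf, "z", "Density", center=[0.2, 0.3, 0.8], width=(10, "kpc"))
   slc.save("my_slice")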
diff -r 3dee29549c252fa44eccf29de6326e1d09968c49 -r 83606a20967597a2b64efa2b2623c57f9313b8bd source/orientation/making_plots.rst
--- a/source/orientation/making_plots.rst
+++ b/source/orientation/making_plots.rst
@@ -60,7 +60,7 @@
 .. code-block:: python
 
    >>> SlicePlot(pf, 'z', 'Density', center=[0.2, 0.3, 0.8], 
-   ...           width = (10,'kpc)).save()
+   ...           width = (10,'kpc')).save()
 
 The center must be given in code units.  Optionally, you can supply 'c' or 'm'
 for the center.  These two choices will center the plot on the center of the


https://bitbucket.org/yt_analysis/yt-doc/commits/41a29eb8260e/
changeset:   41a29eb8260e
user:        sskory
date:        2013-01-09 23:37:35
summary:     Merge.
affected #:  16 files

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/advanced/ionization_cube.py
--- a/source/advanced/ionization_cube.py
+++ b/source/advanced/ionization_cube.py
@@ -10,7 +10,7 @@
 
 ts = TimeSeriesData.from_filenames("SED800/DD*/*.hierarchy", parallel = 8)
 
-ionized_z = na.zeros(ts[0].domain_dimensions, dtype="float32")
+ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
 
 t1 = time.time()
 for pf in ts.piter():

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/advanced/plugin_file.rst
--- a/source/advanced/plugin_file.rst
+++ b/source/advanced/plugin_file.rst
@@ -18,7 +18,7 @@
 .. code-block:: python
 
    def _myfunc(field, data):
-       return na.random.random(data["Density"].shape)
+       return np.random.random(data["Density"].shape)
    add_field("SomeQuantity", function=_myfunc)
 
 then all of my data objects would have access to the field "SomeQuantity"
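A short sketch of how the plugin-defined field is then used, assuming the definitions above live in the plugin file and that it is enabled; the dataset path is a placeholder:

.. code-block:: python

   from yt.mods import *              # assumes the plugin file above is loaded with yt

   pf = load("DD0010/moving7_0010")   # placeholder dataset path
   dd = pf.h.all_data()

   # "SomeQuantity", defined in the plugin file, behaves like any other field
   print dd["SomeQuantity"].shape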

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analysis_modules/star_analysis.rst
--- a/source/analysis_modules/star_analysis.rst
+++ b/source/analysis_modules/star_analysis.rst
@@ -231,8 +231,8 @@
   from yt.analysis_modules.star_analysis.api import *
   pf = load("data0030")
   spec = SpectrumBuilder(pf, bcdir="/home/user/bc", model="chabrier")
-  sm = na.ones(100)
-  ct = na.zeros(100)
+  sm = np.ones(100)
+  ct = np.zeros(100)
   spec.calculate_spectrum(star_mass=sm, star_creation_time=ct, star_metallicity_constant=0.02)
   spec.write_out_SED('SED.out')
 

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analysis_modules/sunrise_export.rst
--- a/source/analysis_modules/sunrise_export.rst
+++ b/source/analysis_modules/sunrise_export.rst
@@ -21,8 +21,8 @@
 	pf = ARTStaticOutput(file_amr)
 	potential_value,center=pf.h.find_min('Potential_New')
 	root_cells = pf.domain_dimensions[0]
-	le = na.floor(root_cells*center) #left edge
-	re = na.ceil(root_cells*center) #right edge
+	le = np.floor(root_cells*center) #left edge
+	re = np.ceil(root_cells*center) #right edge
 	bounds = [(le[0], re[0]-le[0]), (le[1], re[1]-le[1]), (le[2], re[2]-le[2])] 
 	#bounds are left edge plus a span
 	bounds = numpy.array(bounds,dtype='int')
@@ -37,8 +37,8 @@
 	import pyfits
 
 	col_list = []
-	col_list.append(pyfits.Column("ID", format="I", array=na.arange(mass_current.size)))
-	col_list.append(pyfits.Column("parent_ID", format="I", array=na.arange(mass_current.size)))
+	col_list.append(pyfits.Column("ID", format="I", array=np.arange(mass_current.size)))
+	col_list.append(pyfits.Column("parent_ID", format="I", array=np.arange(mass_current.size)))
 	col_list.append(pyfits.Column("position", format="3D", array=pos, unit="kpc"))
 	col_list.append(pyfits.Column("velocity", format="3D", array=vel, unit="kpc/yr"))
 	col_list.append(pyfits.Column("creation_mass", format="D", array=mass_initial, unit="Msun"))
@@ -48,7 +48,7 @@
 	col_list.append(pyfits.Column("age_m", format="D", array=age, unit="yr"))
 	col_list.append(pyfits.Column("age_l", format="D", array=age, unit="yr"))
 	col_list.append(pyfits.Column("metallicity", format="D",array=z))
-	col_list.append(pyfits.Column("L_bol", format="D",array=na.zeros(mass_current.size)))
+	col_list.append(pyfits.Column("L_bol", format="D",array=np.zeros(mass_current.size)))
 	cols = pyfits.ColDefs(col_list)
 
 	amods.sunrise_export.export_to_sunrise(pf, out_fits_file,write_particles=cols,

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analysis_modules/two_point_functions.rst
--- a/source/analysis_modules/two_point_functions.rst
+++ b/source/analysis_modules/two_point_functions.rst
@@ -45,8 +45,8 @@
     # The name of the function is used to name output files.
     def rms_vel(a, b, r1, r2, vec):
         vdiff = a - b
-        na.power(vdiff, 2.0, vdiff)
-        vdiff = na.sum(vdiff, axis=1)
+        np.power(vdiff, 2.0, vdiff)
+        vdiff = np.sum(vdiff, axis=1)
         return vdiff
 
     
@@ -626,11 +626,11 @@
       a_good = a[:,3] >= 1.e-26
       b_good = b[:,3] >= 1.e-26
       # Pick out the pairs with both good densities
-      both_good = na.bitwise_and(a_good, b_good)
+      both_good = np.bitwise_and(a_good, b_good)
       # Operate only on the velocity columns
       vdiff = a[:,0:3] - b[:,0:3]
-      na.power(vdiff, 2.0, vdiff)
-      vdiff = na.sum(vdiff, axis=1)
+      np.power(vdiff, 2.0, vdiff)
+      vdiff = np.sum(vdiff, axis=1)
       # Multiplying by a boolean array has the effect of multiplying by 1 for
       # True, and 0 for False. This operation below will force pairs of not
       # good points to zero, outside the PDF (see below), and leave good
@@ -678,10 +678,10 @@
     def rms_vel_D(a, b, r1, r2, vec):
       # Operate only on the velocity columns
       vdiff = a[:,0:3] - b[:,0:3]
-      na.power(vdiff, 2.0, vdiff)
-      vdiff = na.sum(vdiff, axis=1)
+      np.power(vdiff, 2.0, vdiff)
+      vdiff = np.sum(vdiff, axis=1)
       # Density ratio
-      Dratio = na.max(a[:,3]/b[:,3], b[:,3]/a[:,3])
+      Dratio = np.max(a[:,3]/b[:,3], b[:,3]/a[:,3])
       return [vdiff, Dratio]
     
     # Initialize a function generator object.
@@ -776,12 +776,12 @@
         xp = float(line[7])
         yp = float(line[8])
         zp = float(line[9])
-        CoM.append(na.array([xp, yp, zp]))
+        CoM.append(np.array([xp, yp, zp]))
     data.close()
     
     # This is the same dV as in the formulation of the two-point correlation.
     dV = 0.05
-    radius = (3./4. * dV / na.pi)**(2./3.)
+    radius = (3./4. * dV / np.pi)**(2./3.)
     
     # Instantiate our TPF object.
     # For technical reasons (hopefully to be fixed someday) `vol_ratio`
@@ -797,14 +797,14 @@
     # All of these need to be set even if they're not used.
     # Convert the data to fortran major/minor ordering
     add_tree(1)
-    fKD.t1.pos = na.array(CoM).T
-    fKD.t1.nfound_many = na.empty(tpf.comm_size, dtype='int64')
+    fKD.t1.pos = np.array(CoM).T
+    fKD.t1.nfound_many = np.empty(tpf.comm_size, dtype='int64')
     fKD.t1.radius = radius
     # These must be set because the function find_many_r_nearest
     # does more than how we are using it, and it needs these.
     fKD.t1.radius_n = 1
-    fKD.t1.nn_dist = na.empty((fKD.t1.radius_n, tpf.comm_size), dtype='float64')
-    fKD.t1.nn_tags = na.empty((fKD.t1.radius_n, tpf.comm_size), dtype='int64')
+    fKD.t1.nn_dist = np.empty((fKD.t1.radius_n, tpf.comm_size), dtype='float64')
+    fKD.t1.nn_tags = np.empty((fKD.t1.radius_n, tpf.comm_size), dtype='int64')
     # Makes the kD tree.
     create_tree(1)
     
@@ -904,7 +904,7 @@
         # dense enough. The first column is density.
         abig = (a[:,0] >= dens)
         bbig = (b[:,0] >= dens)
-        both = na.bitwise_and(abig, bbig)
+        both = np.bitwise_and(abig, bbig)
         # We normalize by the volume of the most refined cells.
         both = both.astype('float')
         both *= a[:,1] * b[:,1] / vol_conv**2 / sm**2

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analyzing/_dq_docstrings.inc
--- a/source/analyzing/_dq_docstrings.inc
+++ b/source/analyzing/_dq_docstrings.inc
@@ -154,3 +154,12 @@
    :param field: The field to average
    :param weight: The field to weight by
 
+.. function:: WeightedVariance(field, weight):
+
+   (This is a proxy for :func:`~yt.data_objects.derived_quantities._WeightedVariance`.)
+   This function returns the variance of a field.
+
+   :param field: The target field
+   :param weight: The field to weight by
+
+   Returns the weighted variance and the weighted mean.
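A possible usage sketch for the newly documented quantity, following the usual ``dd.quantities`` call pattern; the dataset path and field names are placeholders:

.. code-block:: python

   from yt.mods import *

   pf = load("DD0010/moving7_0010")   # placeholder dataset path
   dd = pf.h.all_data()

   # Weighted variance (and weighted mean) of Temperature, weighted by cell mass
   variance, mean = dd.quantities["WeightedVariance"]("Temperature", "CellMassMsun")
   print variance, mean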

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analyzing/creating_derived_fields.rst
--- a/source/analyzing/creating_derived_fields.rst
+++ b/source/analyzing/creating_derived_fields.rst
@@ -80,15 +80,15 @@
    def _DiskAngle(field, data):
        # We make both r_vec and h_vec into unit vectors
        center = data.get_field_parameter("center")
-       r_vec = na.array([data["x"] - center[0],
+       r_vec = np.array([data["x"] - center[0],
                          data["y"] - center[1],
                          data["z"] - center[2]])
-       r_vec = r_vec/na.sqrt((r_vec**2.0).sum(axis=0))
-       h_vec = na.array(data.get_field_parameter("height_vector"))
+       r_vec = r_vec/np.sqrt((r_vec**2.0).sum(axis=0))
+       h_vec = np.array(data.get_field_parameter("height_vector"))
        dp = r_vec[0,:] * h_vec[0] \
           + r_vec[1,:] * h_vec[1] \
           + r_vec[2,:] * h_vec[2]
-       return na.arccos(dp)
+       return np.arccos(dp)
    add_field("DiskAngle", take_log=False,
              validators=[ValidateParameter("height_vector"),
                          ValidateParameter("center")],
@@ -113,16 +113,16 @@
    def _SpecificAngularMomentum(field, data):
        if data.has_field_parameter("bulk_velocity"):
            bv = data.get_field_parameter("bulk_velocity")
-       else: bv = na.zeros(3, dtype='float64')
+       else: bv = np.zeros(3, dtype='float64')
        xv = data["x-velocity"] - bv[0]
        yv = data["y-velocity"] - bv[1]
        zv = data["z-velocity"] - bv[2]
        center = data.get_field_parameter('center')
-       coords = na.array([data['x'],data['y'],data['z']], dtype='float64')
+       coords = np.array([data['x'],data['y'],data['z']], dtype='float64')
        new_shape = tuple([3] + [1]*(len(coords.shape)-1))
-       r_vec = coords - na.reshape(center,new_shape)
-       v_vec = na.array([xv,yv,zv], dtype='float64')
-       return na.cross(r_vec, v_vec, axis=0)
+       r_vec = coords - np.reshape(center,new_shape)
+       v_vec = np.array([xv,yv,zv], dtype='float64')
+       return np.cross(r_vec, v_vec, axis=0)
    def _convertSpecificAngularMomentum(data):
        return data.convert("cm")
    add_field("SpecificAngularMomentum",
@@ -163,7 +163,7 @@
             ds = div_fac * data['dz'].flat[0]
             f += data["z-velocity"][1:-1,1:-1,sl_right]/ds
             f -= data["z-velocity"][1:-1,1:-1,sl_left ]/ds
-        new_field = na.zeros(data["x-velocity"].shape, dtype='float64')
+        new_field = np.zeros(data["x-velocity"].shape, dtype='float64')
         new_field[1:-1,1:-1,1:-1] = f
         return new_field
     def _convertDivV(data):

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/analyzing/loading_data.rst
--- a/source/analyzing/loading_data.rst
+++ b/source/analyzing/loading_data.rst
@@ -6,6 +6,57 @@
 This section contains information on how to load data into ``yt``, as well as
 some important caveats about different data formats.
 
+.. _loading-numpy-array:
+
+Generic Array Data
+------------------
+
+Even if your data is not strictly related to fields commonly used in
+astrophysical codes or your code is not supported yet, you can still feed it to
+``yt`` to use its advanced visualization and analysis facilities. The only
+requirement is that your data can be represented as one or more uniform, three
+dimensional numpy arrays. Assuming that you have your data in ``arr``,
+the following code:
+
+.. code-block:: python
+
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+will create a ``yt``-native parameter file ``pf`` that will treat your array as a
+density field in a cubic domain of 3 Mpc edge size (3 * 3.08e24 cm) and
+simultaneously divide the domain into 12 chunks, so that you can take advantage
+of the underlying parallelism. 
+
+Particle fields are detected as one-dimensional fields. The number of
+particles is set by the ``number_of_particles`` key in
+``data``. Particle fields are then added as one-dimensional arrays in
+a similar manner as the three-dimensional grid fields:
+
+.. code-block:: python
+
+   from yt.frontends.stream.api import load_uniform_grid
+
+   data = dict(Density = dens, 
+               number_of_particles = 1000000,
+               particle_position_x = posx_arr, 
+               particle_position_y = posy_arr,
+               particle_position_z = posz_arr)
+   bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
+   pf = load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
+
+where in this example the particle position fields have been assigned. ``number_of_particles`` must match the size
+of the particle arrays. If no particle arrays are supplied then ``number_of_particles`` is assumed to be zero.
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Particles may be difficult to integrate.
+* Data must already reside in memory.
+
 .. _loading-enzo-data:
 
 Enzo Data
@@ -91,8 +142,8 @@
 FLASH HDF5 data is *mostly* supported and cared for by John ZuHone.  To load a
 FLASH dataset, you can use the ``load`` command provided by ``yt.mods`` and
 supply to it the file name of a plot file or checkpoint file, but particle
-files are not currently directly loadable.  Particles, however, can be read
-from standard checkpoint files.  For instance, if you were in a directory with
+files are not currently directly loadable by themselves because they
+typically lack grid information. For instance, if you were in a directory with
 the following files:
 
 .. code-block:: none
@@ -106,6 +157,16 @@
    from yt.mods import *
    pf = load("cosmoSim_coolhdf5_chk_0026")
 
+If you have a FLASH particle file that was created at the same time as
+a plotfile or checkpoint file (therefore having particle data
+consistent with the grid structure of the latter), its data may be loaded with the
+``particle_filename`` optional argument:
+
+.. code-block:: python
+
+    from yt.mods import *
+    pf = load("radio_halo_1kpc_hdf5_plt_cnt_0100", particle_filename="radio_halo_1kpc_hdf5_part_0100")
+
 .. rubric:: Caveats
 
 * Please be careful that the units are correctly utilized; yt assumes cgs
@@ -182,17 +243,17 @@
 
 .. code-block:: none
 
-    10MpcBox_csf512_a0.300.d    #Gas mesh
-    PMcrda0.300.DAT             #Particle header
-    PMcrs0a0.300.DAT            #Particle data (positions,velocities)
-    stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
+   10MpcBox_csf512_a0.300.d    #Gas mesh
+   PMcrda0.300.DAT             #Particle header
+   PMcrs0a0.300.DAT            #Particle data (positions,velocities)
+   stars_a0.300.dat            #Stellar data (metallicities, ages, etc.)
 
 The ART frontend tries to find the associated files matching the above, but
 if that fails you can specify ``file_particle_data``,``file_particle_data``,
 ``file_star_data`` in addition to the specifying the gas mesh. You also have 
 the option of gridding particles, and assigning them onto the meshes.
 This process is in beta, and for the time being it's probably  best to leave
-``do_grid_particles=False`` as the deault.
+``do_grid_particles=False`` as the default.
 
 To speed up the loading of an ART file, you have a few options. You can turn 
 off the particles entirely by setting ``discover_particles=False``. You can
@@ -204,28 +265,72 @@
 off you can set ``spread=False``, and you can tweak the age smoothing window
 by specifying the window in seconds, ``spread=1.0e7*265*24*3600``. 
 
-:: code-block:: python
+.. code-block:: python
     
-    from yt.mods import *
+   from yt.mods import *
 
-    file = "/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d"
-    pf = load(file,discover_particles=True,grid_particles=2,limit_level=3)
-    pf.h.print_stats()
-    dd=pf.h.all_data()
-    print na.sum(dd['particle_type']==0)
+   file = "/u/cmoody3/data/art_snapshots/SFG1/10MpcBox_csf512_a0.460.d"
+   pf = load(file,discover_particles=True,grid_particles=2,limit_level=3)
+   pf.h.print_stats()
+   dd=pf.h.all_data()
+   print np.sum(dd['particle_type']==0)
 
 In the above example code, the first line imports the standard yt functions,
-followed by defining the gas mesh file. It's loaded only through level 3,
-but grids particles on to meshes on level 2 and higher. Finally, 
-we create a data container and ask it to gather the particle_type array. In 
-this case type==0 is for the most highly-refined dark matter particle, and 
-we print out how many high-resolution star particles we find in the simulation.
-Typically, however, you shouldn't have to specify any keyword arguments to load
-in a dataset.
+followed by defining the gas mesh file. It's loaded only through level 3, but
+grids particles onto meshes on level 2 and higher. Finally, we create a data
+container and ask it to gather the particle_type array. In this case ``type==0``
+is for the most highly-refined dark matter particle, and we print out how many
+high-resolution star particles we find in the simulation.  Typically, however,
+you shouldn't have to specify any keyword arguments to load in a dataset.
 
+.. _loading-amr-data:
 
+Generic AMR Data
+----------------
 
+It is possible to create a native ``yt`` parameter file from Python dictionaries
+that describe a set of rectangular patches of data of possibly varying
+resolution. 
 
+.. code-block:: python
 
+   from yt.frontends.stream.api import load_amr_grids
 
+   grid_data = [
+       dict(left_edge = [0.0, 0.0, 0.0],
+            right_edge = [1.0, 1.0, 1.],
+            level = 0,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0),
+       dict(left_edge = [0.25, 0.25, 0.25],
+            right_edge = [0.75, 0.75, 0.75],
+            level = 1,
+            dimensions = [32, 32, 32],
+            number_of_particles = 0)
+   ]
+  
+   for g in grid_data:
+       g["Density"] = np.random.random(g["dimensions"]) * 2**g["level"]
+  
+   pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)
 
+Particle fields are supported by adding 1-dimensional arrays and
+setting the ``number_of_particles`` key in each grid's dict:
+
+.. code-block:: python
+
+    for g in grid_data:
+        g["number_of_particles"] = 100000
+        g["particle_position_x"] = np.random.random((g["number_of_particles"]))
+
+.. rubric:: Caveats
+
+* Units will be incorrect unless the data has already been converted to cgs.
+* Some functions may behave oddly, and parallelism will be disappointing or
+  non-existent in most cases.
+* No consistency checks are performed on the hierarchy.
+* Data must already reside in memory.
+* Consistency between particle positions and grids is not checked;
+  ``load_amr_grids`` assumes that particle positions associated with one grid are
+  not bounded within another grid at a higher level, so this must be
+  ensured by the user prior to loading the grid data. 

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/cookbook/camera_movement.py
--- a/source/cookbook/camera_movement.py
+++ b/source/cookbook/camera_movement.py
@@ -1,12 +1,12 @@
 from yt.mods import * # set up our namespace
-   
+
 # Follow the simple_volume_rendering cookbook for the first part of this.
 pf = load("IsolatedGalaxy/galaxy0030/galaxy0030") # load data
 dd = pf.h.all_data()
 mi, ma = dd.quantities["Extrema"]("Density")[0]
 
 # Set up transfer function
-tf = ColorTransferFunction((na.log10(mi), na.log10(ma)))
+tf = ColorTransferFunction((np.log10(mi), np.log10(ma)))
 tf.add_layers(6, w=0.05)
 
 # Set up camera paramters
@@ -26,7 +26,7 @@
 frame = 0
 
 # Do a rotation over 5 frames
-for i, snapshot in enumerate(cam.rotation(na.pi, 5, clip_ratio = 8.0)):
+for i, snapshot in enumerate(cam.rotation(np.pi, 5, clip_ratio = 8.0)):
     write_bitmap(snapshot, 'camera_movement_%04i.png' % frame)
     frame += 1
 

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/cookbook/free_free_field.py
--- a/source/cookbook/free_free_field.py
+++ b/source/cookbook/free_free_field.py
@@ -1,26 +1,26 @@
 from yt.mods import *
-from yt.utilities.physical_constants import mp # Need to grab the proton mass from the 
+from yt.utilities.physical_constants import mp # Need to grab the proton mass from the
                                                # constants database
-                                               
+
 # Define the emission field
 
 keVtoerg = 1.602e-9 # Convert energy in keV to energy in erg
 KtokeV = 8.617e-08 # Convert degrees Kelvin to degrees keV
-sqrt3 = na.sqrt(3.)
+sqrt3 = np.sqrt(3.)
 expgamma = 1.78107241799 # Exponential of Euler's constant
 
 def _FreeFree_Emission(field, data) :
-    
+
     if data.has_field_parameter("Z") :
         Z = data.get_field_parameter("Z")
     else :
         Z = 1.077 # Primordial H/He plasma
-        
+
     if data.has_field_parameter("mue") :
         mue = data.get_field_parameter("mue")
     else :
         mue = 1./0.875 # Primordial H/He plasma
-        
+
     if data.has_field_parameter("mui") :
         mui = data.get_field_parameter("mui")
     else :
@@ -30,30 +30,30 @@
         Ephoton = data.get_field_parameter("Ephoton")
     else :
         Ephoton = 1.0 # in keV
-         
+
     if data.has_field_parameter("photon_emission") :
         photon_emission = data.get_field_parameter("photon_emission")
     else :
         photon_emission = False # Flag for energy or photon emission
-        
+
     n_e = data["Density"]/(mue*mp)
     n_i = data["Density"]/(mui*mp)
     kT = data["Temperature"]*KtokeV
-    
+
     # Compute the Gaunt factor
-    
-    g_ff = na.zeros(kT.shape)
-    g_ff[Ephoton/kT > 1.] = na.sqrt((3./na.pi)*kT[Ephoton/kT > 1.]/Ephoton)
-    g_ff[Ephoton/kT < 1.] = (sqrt3/na.pi)*na.log((4./expgamma) * 
+
+    g_ff = np.zeros(kT.shape)
+    g_ff[Ephoton/kT > 1.] = np.sqrt((3./np.pi)*kT[Ephoton/kT > 1.]/Ephoton)
+    g_ff[Ephoton/kT < 1.] = (sqrt3/np.pi)*np.log((4./expgamma) *
                                                  kT[Ephoton/kT < 1.]/Ephoton)
 
-    eps_E = 1.64e-20*Z*Z*n_e*n_i/na.sqrt(data["Temperature"]) * \
-        na.exp(-Ephoton/kT)*g_ff
-        
+    eps_E = 1.64e-20*Z*Z*n_e*n_i/np.sqrt(data["Temperature"]) * \
+        np.exp(-Ephoton/kT)*g_ff
+
     if photon_emission: eps_E /= (Ephoton*keVtoerg)
-    
+
     return eps_E
-    
+
 add_field("FreeFree_Emission", function=_FreeFree_Emission)
 
 # Define the luminosity derived quantity
@@ -63,10 +63,10 @@
 
 def _combFreeFreeLuminosity(data, luminosity) :
     return luminosity.sum()
-    
+
 add_quantity("FreeFree_Luminosity", function=_FreeFreeLuminosity,
              combine_function=_combFreeFreeLuminosity, n_ret = 1)
-    
+
 pf = load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
 
 sphere = pf.h.sphere(pf.domain_center, (100., "kpc"))
@@ -75,7 +75,7 @@
 
 print "L_E (1 keV, primordial) = ", sphere.quantities["FreeFree_Luminosity"]()
 
-# The defaults for the field assume a H/He primordial plasma. Let's set the appropriate 
+# The defaults for the field assume a H/He primordial plasma. Let's set the appropriate
 # parameters for a pure hydrogen plasma.
 
 sphere.set_field_parameter("mue", 1.0)

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/cookbook/hse_field.py
--- a/source/cookbook/hse_field.py
+++ b/source/cookbook/hse_field.py
@@ -1,7 +1,7 @@
 from yt.mods import *
 
 # Define the components of the gravitational acceleration vector field by taking the
-# gradient of the gravitational potential 
+# gradient of the gravitational potential
 
 def _Grav_Accel_x(field, data) :
 
@@ -16,7 +16,7 @@
     gx  = data["Grav_Potential"][sl_right,1:-1,1:-1]/dx
     gx -= data["Grav_Potential"][sl_left, 1:-1,1:-1]/dx
 
-    new_field = na.zeros(data["Grav_Potential"].shape, dtype='float64')
+    new_field = np.zeros(data["Grav_Potential"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = -gx
 
     return new_field
@@ -34,7 +34,7 @@
     gy  = data["Grav_Potential"][1:-1,sl_right,1:-1]/dy
     gy -= data["Grav_Potential"][1:-1,sl_left ,1:-1]/dy
 
-    new_field = na.zeros(data["Grav_Potential"].shape, dtype='float64')
+    new_field = np.zeros(data["Grav_Potential"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = -gy
 
     return new_field
@@ -52,13 +52,13 @@
     gz  = data["Grav_Potential"][1:-1,1:-1,sl_right]/dz
     gz -= data["Grav_Potential"][1:-1,1:-1,sl_left ]/dz
 
-    new_field = na.zeros(data["Grav_Potential"].shape, dtype='float64')
+    new_field = np.zeros(data["Grav_Potential"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = -gz
 
     return new_field
 
 # Define the components of the pressure gradient field
-    
+
 def _Grad_Pressure_x(field, data) :
 
     # We need to set up stencils
@@ -72,7 +72,7 @@
     px  = data["Pressure"][sl_right,1:-1,1:-1]/dx
     px -= data["Pressure"][sl_left, 1:-1,1:-1]/dx
 
-    new_field = na.zeros(data["Pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["Pressure"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = px
 
     return new_field
@@ -90,7 +90,7 @@
     py  = data["Pressure"][1:-1,sl_right,1:-1]/dy
     py -= data["Pressure"][1:-1,sl_left ,1:-1]/dy
 
-    new_field = na.zeros(data["Pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["Pressure"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = py
 
     return new_field
@@ -108,7 +108,7 @@
     pz  = data["Pressure"][1:-1,1:-1,sl_right]/dz
     pz -= data["Pressure"][1:-1,1:-1,sl_left ]/dz
 
-    new_field = na.zeros(data["Pressure"].shape, dtype='float64')
+    new_field = np.zeros(data["Pressure"].shape, dtype='float64')
     new_field[1:-1,1:-1,1:-1] = pz
 
     return new_field
@@ -125,12 +125,12 @@
     hy = data["Grad_Pressure_y"] - gy
     hz = data["Grad_Pressure_z"] - gz
 
-    h = na.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
-    
+    h = np.sqrt((hx*hx+hy*hy+hz*hz)/(gx*gx+gy*gy+gz*gz))
+
     return h
 
 # Now add the fields to the database
-    
+
 add_field("Grav_Accel_x", function=_Grav_Accel_x, take_log=False,
           validators=[ValidateSpatial(1,["Grav_Potential"])])
 
@@ -151,8 +151,8 @@
 
 add_field("HSE", function=_HSE, take_log=False)
 
-# Open two files, one at the beginning and the other at a later time when there's a 
-# lot of sloshing going on. 
+# Open two files, one at the beginning and the other at a later time when there's a
+# lot of sloshing going on.
 
 pfi = load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0000")
 pff = load("GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0350")

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/cookbook/simple_volume_rendering.py
--- a/source/cookbook/simple_volume_rendering.py
+++ b/source/cookbook/simple_volume_rendering.py
@@ -12,7 +12,7 @@
 
 # Create a transfer function to map field values to colors.
 # We bump up our minimum to cut out some of the background fluid
-tf = ColorTransferFunction((na.log10(mi)+2.0, na.log10(ma)))
+tf = ColorTransferFunction((np.log10(mi)+2.0, np.log10(ma)))
 
 # Add three guassians, evenly spaced between the min and
 # max specified above with widths of 0.02 and using the

diff -r 83606a20967597a2b64efa2b2623c57f9313b8bd -r 41a29eb8260e344259abc2061d94897b6868269f source/cookbook/zoomin_frames.py
--- a/source/cookbook/zoomin_frames.py
+++ b/source/cookbook/zoomin_frames.py
@@ -19,8 +19,8 @@
 # argument to enumerate is the 'logspace' function, which takes a minimum and a
 # maximum and the number of items to generate.  It returns 10^power of each
 # item it generates.
-for i,v in enumerate(na.logspace(
-            0, na.log10(pf.h.get_smallest_dx()*min_dx), n_frames)):
+for i,v in enumerate(np.logspace(
+            0, np.log10(pf.h.get_smallest_dx()*min_dx), n_frames)):
     # We set our width as necessary for this frame ...
     p.set_width(v, '1')
     # ... and we save!

This diff is so big that we needed to truncate the remainder.

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


