[yt-svn] commit/yt: MatthewTurk: Merged in ngoldbaum/yt/yt-3.0 (pull request #937)

commits-noreply at bitbucket.org commits-noreply at bitbucket.org
Fri Jun 6 05:54:55 PDT 2014


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/3d6f253ca3ec/
Changeset:   3d6f253ca3ec
Branch:      yt-3.0
User:        MatthewTurk
Date:        2014-06-06 14:54:47
Summary:     Merged in ngoldbaum/yt/yt-3.0 (pull request #937)

Adding documentation for load_particles.
Affected #:  2 files

diff -r 20366abd696f82c865749d7d1b998d5b3fa87795 -r 3d6f253ca3ec2e3b9f13f98984b72097d3135485 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- /dev/null
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -0,0 +1,156 @@
+{
+ "metadata": {
+  "name": "",
+  "signature": "sha256:6da8ec00f414307f27544fbdbc6b4fa476e5e96809003426279b2a1c898b4546"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+  {
+   "cells": [
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n",
+      "\n",
+      "Our \"fake\" dataset will be numpy arrays filled with normally distributed random particle positions and uniform particle masses.  Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import numpy as np\n",
+      "\n",
+      "n_particles = 5000000\n",
+      "\n",
+      "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
+      "\n",
+      "ppm = np.ones(n_particles)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "data = {'particle_position_x': ppx,\n",
+      "        'particle_position_y': ppy,\n",
+      "        'particle_position_z': ppz,\n",
+      "        'particle_mass': ppm}"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass'; any other particle field provided by one of the particle frontends may also be included."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with `yt`. The example below illustrates how to load the data dictionary we created above."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "import yt\n",
+      "from yt.units import parsec, Msun\n",
+      "\n",
+      "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]])\n",
+      "\n",
+      "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "The `length_unit` and `mass_unit` are the conversions from the units used in the `data` dictionary to CGS.  I've arbitrarily chosen one parsec and 10^8 Msun for this example.\n",
+      "\n",
+      "The `n_ref` parameter controls how many particles must accumulate in an oct-tree cell before the cell is refined.  A larger `n_ref` decreases Poisson noise at the cost of resolution in the octree.\n",
+      "\n",
+      "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles.  This is used to set the size of the base octree block."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "This new dataset acts like any other `yt` `Dataset` object, and can be used to create data objects and query for yt fields.  This example shows how to access \"deposit\" fields:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ad = ds.all_data()\n",
+      "\n",
+      "# This is generated with \"cloud-in-cell\" interpolation.\n",
+      "cic_density = ad[\"deposit\", \"all_cic\"]\n",
+      "\n",
+      "# These three are based on nearest-neighbor cell deposition\n",
+      "nn_density = ad[\"deposit\", \"all_density\"]\n",
+      "nn_deposited_mass = ad[\"deposit\", \"all_mass\"]\n",
+      "particle_count_per_cell = ad[\"deposit\", \"all_count\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds.field_list"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds.derived_field_list"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
+      "slc.set_width((8, 'Mpc'))"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    }
+   ],
+   "metadata": {}
+  }
+ ]
+}
\ No newline at end of file

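For reference outside the notebook, the setup it describes can be sketched as a standalone script. The 10% bounding-box padding, field names, and unit choices follow the notebook; the `yt.load_particles` call itself is left as a comment so the sketch runs with numpy alone and is not a substitute for the notebook's own cells.

```python
import numpy as np

n_particles = 5000000  # array-shape arguments must be integers, not 5e6

# Normally distributed positions scaled by 1e6, and uniform unit masses.
ppx, ppy, ppz = 1e6 * np.random.normal(size=(3, n_particles))
ppm = np.ones(n_particles)

data = {'particle_position_x': ppx,
        'particle_position_y': ppy,
        'particle_position_z': ppz,
        'particle_mass': ppm}

# Bounding box padded by 10% so every particle falls inside it; note the
# third row must use ppz, not ppy.
bbox = 1.1 * np.array([[ppx.min(), ppx.max()],
                       [ppy.min(), ppy.max()],
                       [ppz.min(), ppz.max()]])

# With yt installed, the dataset would then be created with:
# import yt
# from yt.units import parsec, Msun
# ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun,
#                        n_ref=256, bbox=bbox)
print(bbox.shape)  # (3, 2)
```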
diff -r 20366abd696f82c865749d7d1b998d5b3fa87795 -r 3d6f253ca3ec2e3b9f13f98984b72097d3135485 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -898,3 +898,4 @@
 Generic Particle Data
 ---------------------
 
+.. notebook:: Loading_Generic_Particle_Data.ipynb

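As context for the "deposit" fields queried in the notebook, a minimal one-dimensional sketch of cloud-in-cell deposition (just the underlying idea, not yt's implementation, which works on the 3D octree) might look like:

```python
import numpy as np

def cic_deposit_1d(positions, masses, nbins, left, right):
    """Deposit particle masses onto a 1D grid with cloud-in-cell weighting:
    each particle's mass is split linearly between its two nearest cells."""
    dx = (right - left) / nbins
    # Fractional cell coordinate; a particle at a cell center contributes
    # entirely to that cell.
    x = (positions - left) / dx - 0.5
    i = np.floor(x).astype(int)
    frac = x - i                      # weight assigned to the right-hand cell
    grid = np.zeros(nbins)
    # Clip so particles near the edges deposit into the boundary cells.
    il = np.clip(i, 0, nbins - 1)
    ir = np.clip(i + 1, 0, nbins - 1)
    np.add.at(grid, il, masses * (1.0 - frac))
    np.add.at(grid, ir, masses * frac)
    return grid

pos = np.array([0.5, 1.25, 2.9])
m = np.ones(3)
grid = cic_deposit_1d(pos, m, nbins=3, left=0.0, right=3.0)
print(grid)  # [1.25 0.75 1.  ]
```

Total deposited mass equals total particle mass, which is the property that makes the "all_cic" field a density estimate once divided by cell volume.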
Repository URL: https://bitbucket.org/yt_analysis/yt/
