[yt-svn] commit/yt: 3 new changesets
commits-noreply at bitbucket.org
Fri Jun 6 05:54:54 PDT 2014
3 new commits in yt:
https://bitbucket.org/yt_analysis/yt/commits/b2bb0d337ad0/
Changeset: b2bb0d337ad0
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-06 02:43:09
Summary: Adding documentation for load_particles.
Affected #: 2 files
diff -r 51fc454e1548d776f335d2f30079569e9f3a81e7 -r b2bb0d337ad052efcd96dc778bf53faf16781f71 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- /dev/null
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -0,0 +1,86 @@
+{
+ "metadata": {
+ "name": "",
+ "signature": "sha256:6f1c8ea7363ad9579d64fba0cf034e298286142995fc0f5a72a75949c2c767be"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This example creates a fake in-memory particle dataset by randomly generating a million particle positions and masses."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that `data` must be a dictionary where the keys are field names and the values are numpy `ndarray`s. The field names for position and mass must be `particle_position_x`, `particle_position_y`, `particle_position_z`, and `particle_mass`, respectively."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import yt\n",
+ "import numpy as np\n",
+ "\n",
+ "# Create fake particle data for a million particles randomly distributed\n",
+ "ppx, ppy, ppz, ppm = np.random.random([4, 1e6])\n",
+ "data = {'particle_position_x': ppx,\n",
+ " 'particle_position_y': ppy,\n",
+ " 'particle_position_z': ppz,\n",
+ " 'particle_mass': ppm}"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `load_particles` function accepts a `data` dictionary and creates an in-memory \"stream\" dataset. The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've chosen 1 megaparsec and $10^6$ particles for this fake dataset. Finally, `n_ref` controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement. Larger `n_ref` will decrease poisson noise at the cost of memory efficiency."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ds = yt.load_particles(data, length_unit=3.08e24, mass_unit=1.9891e36, n_ref=64)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ad = ds.all_data()\n",
+ "print ad['deposit', 'all_cic']"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
+ "slc.set_figure_size(4)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ }
+ ],
+ "metadata": {}
+ }
+ ]
+}
\ No newline at end of file
diff -r 51fc454e1548d776f335d2f30079569e9f3a81e7 -r b2bb0d337ad052efcd96dc778bf53faf16781f71 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -898,3 +898,4 @@
Generic Particle Data
---------------------
+.. notebook:: Loading_Generic_Particle_Data.ipynb
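Condensed into a single standalone script, the notebook added in this changeset does the following. This is a minimal sketch rather than the committed code: it casts the particle count to int so the array shape is valid, and it ends with save(), which writes the slice to disk where the notebook displays it inline.

    import yt
    import numpy as np

    # Fake data: a million uniformly distributed particle positions and masses.
    ppx, ppy, ppz, ppm = np.random.random([4, int(1e6)])
    data = {'particle_position_x': ppx,
            'particle_position_y': ppy,
            'particle_position_z': ppz,
            'particle_mass': ppm}

    # length_unit and mass_unit convert the dictionary's values to CGS;
    # n_ref sets how many particles an oct-tree cell holds before it refines.
    ds = yt.load_particles(data, length_unit=3.08e24, mass_unit=1.9891e36, n_ref=64)

    ad = ds.all_data()
    print(ad['deposit', 'all_cic'])  # cloud-in-cell deposited density

    slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))
    slc.save()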
https://bitbucket.org/yt_analysis/yt/commits/9fb71d159a89/
Changeset: 9fb71d159a89
Branch: yt-3.0
User: ngoldbaum
Date: 2014-06-06 08:32:04
Summary:     Expanding the loading particle data notebook.
Affected #: 1 file
diff -r b2bb0d337ad052efcd96dc778bf53faf16781f71 -r 9fb71d159a89282347c6c3abf7af55460d427a22 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Particle_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
- "signature": "sha256:6f1c8ea7363ad9579d64fba0cf034e298286142995fc0f5a72a75949c2c767be"
+ "signature": "sha256:6da8ec00f414307f27544fbdbc6b4fa476e5e96809003426279b2a1c898b4546"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -12,25 +12,38 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This example creates a fake in-memory particle dataset by randomly generating a million particle positions and masses."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Note that `data` must be a dictionary where the keys are field names and the values are numpy `ndarray`s. The field names for position and mass must be `particle_position_x`, `particle_position_y`, `particle_position_z`, and `particle_mass`, respectively."
+ "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n",
+ "\n",
+ "Our \"fake\" dataset will be numpy arrays filled with normally distributed randoml particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "import yt\n",
"import numpy as np\n",
"\n",
- "# Create fake particle data for a million particles randomly distributed\n",
- "ppx, ppy, ppz, ppm = np.random.random([4, 1e6])\n",
+ "n_particles = 5e6\n",
+ "\n",
+ "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
+ "\n",
+ "ppm = np.ones(n_particles)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
"data = {'particle_position_x': ppx,\n",
" 'particle_position_y': ppy,\n",
" 'particle_position_z': ppz,\n",
@@ -44,14 +57,62 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The `load_particles` function accepts a `data` dictionary and creates an in-memory \"stream\" dataset. The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've chosen 1 megaparsec and $10^6$ particles for this fake dataset. Finally, `n_ref` controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement. Larger `n_ref` will decrease poisson noise at the cost of memory efficiency."
+ "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with `yt`. The example below illustrates how to load the data dictionary we created above."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "ds = yt.load_particles(data, length_unit=3.08e24, mass_unit=1.9891e36, n_ref=64)"
+ "import yt\n",
+ "from yt.units import parsec, Msun\n",
+ "\n",
+ "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppy), max(ppy)]])\n",
+ "\n",
+ "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've arbitrarily chosen one parsec and 10^8 Msun for this example. \n",
+ "\n",
+ "The `n_ref` parameter controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement. Larger `n_ref` will decrease poisson noise at the cost of resolution in the octree. \n",
+ "\n",
+ "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles. This is used to set the size of the base octree block."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This new dataset acts like any other `yt` `Dataset` object, and can be used to create data objects and query for yt fields. This example shows how to access \"deposit\" fields:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ad = ds.all_data()\n",
+ "\n",
+ "# This is generated with \"cloud-in-cell\" interpolation.\n",
+ "cic_density = ad[\"deposit\", \"all_cic\"]\n",
+ "\n",
+ "# These three are based on nearest-neighbor cell deposition\n",
+ "nn_density = ad[\"deposit\", \"all_density\"]\n",
+ "nn_deposited_mass = ad[\"deposit\", \"all_mass\"]\n",
+ "particle_count_per_cell = ad[\"deposit\", \"all_count\"]"
],
"language": "python",
"metadata": {},
@@ -61,8 +122,17 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "ad = ds.all_data()\n",
- "print ad['deposit', 'all_cic']"
+ "ds.field_list"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ds.derived_field_list"
],
"language": "python",
"metadata": {},
@@ -73,7 +143,7 @@
"collapsed": false,
"input": [
"slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
- "slc.set_figure_size(4)"
+ "slc.set_width((8, 'Mpc'))"
],
"language": "python",
"metadata": {},
https://bitbucket.org/yt_analysis/yt/commits/3d6f253ca3ec/
Changeset: 3d6f253ca3ec
Branch: yt-3.0
User: MatthewTurk
Date: 2014-06-06 14:54:47
Summary: Merged in ngoldbaum/yt/yt-3.0 (pull request #937)
Adding documentation for load_particles.
Affected #: 2 files
diff -r 20366abd696f82c865749d7d1b998d5b3fa87795 -r 3d6f253ca3ec2e3b9f13f98984b72097d3135485 doc/source/examining/Loading_Generic_Particle_Data.ipynb
--- /dev/null
+++ b/doc/source/examining/Loading_Generic_Particle_Data.ipynb
@@ -0,0 +1,156 @@
+{
+ "metadata": {
+ "name": "",
+ "signature": "sha256:6da8ec00f414307f27544fbdbc6b4fa476e5e96809003426279b2a1c898b4546"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.\n",
+ "\n",
+ "Our \"fake\" dataset will be numpy arrays filled with normally distributed randoml particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import numpy as np\n",
+ "\n",
+ "n_particles = 5e6\n",
+ "\n",
+ "ppx, ppy, ppz = 1e6*np.random.normal(size=[3, n_particles])\n",
+ "\n",
+ "ppm = np.ones(n_particles)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "data = {'particle_position_x': ppx,\n",
+ " 'particle_position_y': ppy,\n",
+ " 'particle_position_z': ppz,\n",
+ " 'particle_mass': ppm}"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass', as well as any other particle field provided by one of the particle frontends."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with `yt`. The example below illustrates how to load the data dictionary we created above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "import yt\n",
+ "from yt.units import parsec, Msun\n",
+ "\n",
+ "bbox = 1.1*np.array([[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppy), max(ppy)]])\n",
+ "\n",
+ "ds = yt.load_particles(data, length_unit=parsec, mass_unit=1e8*Msun, n_ref=256, bbox=bbox)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've arbitrarily chosen one parsec and 10^8 Msun for this example. \n",
+ "\n",
+ "The `n_ref` parameter controls how many particle it takes to accumulate in an oct-tree cell to trigger refinement. Larger `n_ref` will decrease poisson noise at the cost of resolution in the octree. \n",
+ "\n",
+ "Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles. This is used to set the size of the base octree block."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This new dataset acts like any other `yt` `Dataset` object, and can be used to create data objects and query for yt fields. This example shows how to access \"deposit\" fields:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ad = ds.all_data()\n",
+ "\n",
+ "# This is generated with \"cloud-in-cell\" interpolation.\n",
+ "cic_density = ad[\"deposit\", \"all_cic\"]\n",
+ "\n",
+ "# These three are based on nearest-neighbor cell deposition\n",
+ "nn_density = ad[\"deposit\", \"all_density\"]\n",
+ "nn_deposited_mass = ad[\"deposit\", \"all_mass\"]\n",
+ "particle_count_per_cell = ad[\"deposit\", \"all_count\"]"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ds.field_list"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "ds.derived_field_list"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))\n",
+ "slc.set_width((8, 'Mpc'))"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": []
+ }
+ ],
+ "metadata": {}
+ }
+ ]
+}
\ No newline at end of file
diff -r 20366abd696f82c865749d7d1b998d5b3fa87795 -r 3d6f253ca3ec2e3b9f13f98984b72097d3135485 doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -898,3 +898,4 @@
Generic Particle Data
---------------------
+.. notebook:: Loading_Generic_Particle_Data.ipynb
Repository URL: https://bitbucket.org/yt_analysis/yt/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.