[yt-svn] commit/yt: 2 new changesets

commits-noreply at bitbucket.org
Sun Mar 16 06:05:37 PDT 2014


2 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/fc550f6dac84/
Changeset:   fc550f6dac84
Branch:      yt-3.0
User:        jzuhone
Date:        2014-03-15 15:49:01
Summary:     Generic Array Data docs. Fixing the particle callback.
Affected #:  2 files

diff -r 4fdf65441a28c200fab0128f55bb43808247fc21 -r fc550f6dac840a65f20842748c61475a15421ba3 doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:4a1cb9a60d5113fc4ca2172a69d5e5ebc5506d77928e9f39715060444dc8f8ed"
+  "signature": "sha256:cd145d8cadbf1a0065d0f9fb4ea107c215fcd53245b3bb7d29303af46f063552"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -52,8 +52,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "from yt.utilities.physical_constants import cm_per_kpc, cm_per_mpc"
+      "%matplotlib inline\n",
+      "from yt.mods import *"
      ],
      "language": "python",
      "metadata": {},
@@ -80,16 +80,16 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "To load this data into `yt`, we need to assign it a field name, in this case \"Density\", and place it into a dictionary. Then, we call `load_uniform_grid`:"
+      "To load this data into `yt`, we need to associate it with a field. The `data` dictionary consists of one or more fields, each a tuple of a NumPy array and a unit string. Then, we can call `load_uniform_grid`:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = dict(Density = arr)\n",
+      "data = dict(density = (arr, \"g/cm**3\"))\n",
       "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "pf = load_uniform_grid(data, arr.shape, cm_per_mpc, bbox=bbox, nprocs=64)"
+      "ds = load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
      ],
      "language": "python",
      "metadata": {},
@@ -101,33 +101,38 @@
      "source": [
       "`load_uniform_grid` takes the following arguments and optional keywords:\n",
       "\n",
-      "* `data` : This is a dict of numpy arrays, where the keys are the field names.\n",
+      "* `data` : This is a dict of (numpy array, unit string) tuples, where the keys are the field names\n",
       "* `domain_dimensions` : The domain dimensions of the unigrid\n",
-      "* `sim_unit_to_cm` : Conversion factor from simulation units to centimeters\n",
-      "* `bbox` : Size of computational domain in units sim_unit_to_cm\n",
+      "* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number\n",
+      "* `bbox` : Size of computational domain in units of `code_length`\n",
       "* `nprocs` : If greater than 1, will create this number of subarrays out of data\n",
       "* `sim_time` : The simulation time in seconds\n",
+      "* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n",
+      "* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n",
+      "* `velocity_unit` : The unit that corresponds to `code_velocity`\n",
       "* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n",
       "\n",
-      "This example creates a `yt`-native parameter file `pf` that will treat your array as a\n",
-      "density field in cubic domain of 3 Mpc edge size (3 * 3.0856e24 cm) and\n",
-      "simultaneously divide the domain into `nprocs` = 64 chunks, so that you can take advantage\n",
-      "of the underlying parallelism. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The resulting `pf` functions exactly like a parameter file from any other dataset--it can be sliced, and we can show the grid boundaries:"
+      "This example creates a `yt`-native dataset `ds` that will treat your array as a\n",
+      "density field in a cubic domain of 3 Mpc edge size and simultaneously divide the \n",
+      "domain into `nprocs` = 64 chunks, so that you can take advantage\n",
+      "of the underlying parallelism. \n",
+      "\n",
+      "The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:\n",
+      "* A string, e.g. `length_unit=\"Mpc\"`\n",
+      "* A tuple, e.g. `mass_unit=(1.0e14, \"Msun\")`\n",
+      "* A floating-point value, e.g. `time_unit=3.1557e13`\n",
+      "\n",
+      "In the last case, the unit is assumed to be cgs. \n",
+      "\n",
+      "The resulting `ds` functions exactly like any other dataset `yt` can handle--it can be sliced, and we can show the grid boundaries:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, 2, [\"Density\"])\n",
-      "slc.set_cmap(\"Density\", \"Blues\")\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+      "slc.set_cmap(\"density\", \"Blues\")\n",
       "slc.annotate_grids(cmap=None)\n",
       "slc.show()"
      ],
@@ -152,13 +157,14 @@
       "posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
       "posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
       "posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
-      "data = dict(Density = np.random.random(size=(64,64,64)), \n",
+      "data = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n",
       "            number_of_particles = 10000,\n",
-      "            particle_position_x = posx_arr, \n",
-      "\t        particle_position_y = posy_arr,\n",
-      "\t        particle_position_z = posz_arr)\n",
+      "            particle_position_x = (posx_arr, \"code_length\"), \n",
+      "\t        particle_position_y = (posy_arr, \"code_length\"),\n",
+      "\t        particle_position_z = (posz_arr, \"code_length\"))\n",
       "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "pf = load_uniform_grid(data, data[\"Density\"].shape, cm_per_mpc, bbox=bbox, nprocs=4)"
+      "ds = load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
+      "                       bbox=bbox, nprocs=4)"
      ],
      "language": "python",
      "metadata": {},
@@ -176,8 +182,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, \"z\", [\"Density\"])\n",
-      "slc.set_cmap(\"Density\", \"Blues\")\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+      "slc.set_cmap(\"density\", \"Blues\")\n",
       "slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n",
       "slc.show()"
      ],
@@ -207,6 +213,7 @@
       "import h5py\n",
       "from yt.config import ytcfg\n",
       "data_dir = ytcfg.get('yt','test_data_dir')\n",
+      "from yt.utilities.physical_ratios import cm_per_kpc\n",
       "f = h5py.File(data_dir+\"/UnigridData/turb_vels.h5\", \"r\") # Read-only access to the file"
      ],
      "language": "python",
@@ -234,16 +241,44 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We can iterate over the items in the file handle to get the data into a dictionary, which we will then load:"
+      "We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. In this case, the units are simply cgs:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = {k:v for k,v in f.items()}\n",
-      "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])\n",
-      "pf = load_uniform_grid(data, data[\"Density\"].shape, 250.*cm_per_kpc, bbox=bbox, nprocs=8, periodicity=(False,False,False))"
+      "units = [\"gauss\",\"gauss\",\"gauss\", \"g/cm**3\", \"erg/cm**3\", \"K\", \n",
+      "         \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "data = {k:(v.value,u) for (k,v), u in zip(f.items(),units)}\n",
+      "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
+      "                       periodicity=(False,False,False))"
      ],
      "language": "python",
      "metadata": {},
@@ -260,7 +295,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = ProjectionPlot(pf, \"z\", [\"z-velocity\",\"Temperature\"], weight_field=\"Density\")\n",
+      "prj = ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
+      "prj.set_log(\"z-velocity\", False)\n",
+      "prj.set_log(\"Bx\", False)\n",
       "prj.show()"
      ],
      "language": "python",
@@ -287,10 +324,10 @@
      "collapsed": false,
      "input": [
       "#Find the min and max of the field\n",
-      "mi, ma = pf.h.all_data().quantities[\"Extrema\"]('temperature')[0]\n",
+      "mi, ma = ds.all_data().quantities[\"Extrema\"]('Temperature')\n",
       "#Reduce the dynamic range\n",
-      "mi += 1.5e7\n",
-      "ma -= 0.81e7"
+      "mi = mi.value + 1.5e7\n",
+      "ma = ma.value - 0.81e7"
      ],
      "language": "python",
      "metadata": {},
@@ -327,9 +364,9 @@
       "# Choose a vector representing the viewing direction.\n",
       "L = [0.5, 0.5, 0.5]\n",
       "# Define the center of the camera to be the domain center\n",
-      "c = pf.domain_center\n",
+      "c = ds.domain_center[0]\n",
       "# Define the width of the image\n",
-      "W = 1.5*pf.domain_width\n",
+      "W = 1.5*ds.domain_width[0]\n",
       "# Define the number of pixels to render\n",
       "Npixels = 512 "
      ],
@@ -348,9 +385,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "cam = pf.h.camera(c, L, W, Npixels, tf, fields=['temperature'],\n",
-      "                  north_vector=[0,0,1], steady_north=True, \n",
-      "                  sub_samples=5, no_ghost=False,log_fields=False)\n",
+      "cam = ds.camera(c, L, W, Npixels, tf, fields=['Temperature'],\n",
+      "                north_vector=[0,0,1], steady_north=True, \n",
+      "                sub_samples=5, log_fields=[False])\n",
       "\n",
       "cam.transfer_function.map_to_colormap(mi,ma, \n",
       "                                      scale=15.0, colormap='algae')"
@@ -417,14 +454,17 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. We can check that we got the correct fields. "
+      "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields. "
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = {hdu.name.lower():hdu.data for hdu in f[1:]}\n",
+      "data = {}\n",
+      "for hdu in f[1:]:\n",
+      "    name = hdu.name.lower()\n",
+      "    data[name] = (hdu.data,\"km/s\")\n",
       "print data.keys()"
      ],
      "language": "python",
@@ -435,15 +475,36 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "Now we load the data into `yt`. This particular file doesn't have any coordinate information, but let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. "
+      "The velocity field names in this case are slightly different from the standard `yt` field names for velocity fields, so we will reassign the field names:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load_uniform_grid(data, data[\"x-velocity\"].shape, cm_per_mpc)\n",
-      "slc = SlicePlot(pf, \"x\", [\"x-velocity\",\"y-velocity\",\"z-velocity\"])\n",
+      "data[\"velocity_x\"] = data.pop(\"x-velocity\")\n",
+      "data[\"velocity_y\"] = data.pop(\"y-velocity\")\n",
+      "data[\"velocity_z\"] = data.pop(\"z-velocity\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now we load the data into `yt`. Let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
+      "slc = SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
+      "for ax in \"xyz\":\n",
+      "    slc.set_log(\"velocity_%s\" % (ax), False)\n",
       "slc.annotate_velocity()\n",
       "slc.show()"
      ],
@@ -472,7 +533,7 @@
      "input": [
       "grid_data = [\n",
       "    dict(left_edge = [0.0, 0.0, 0.0],\n",
-      "         right_edge = [1.0, 1.0, 1.],\n",
+      "         right_edge = [1.0, 1.0, 1.0],\n",
       "         level = 0,\n",
       "         dimensions = [32, 32, 32]), \n",
       "    dict(left_edge = [0.25, 0.25, 0.25],\n",
@@ -496,7 +557,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "for g in grid_data: g[\"Density\"] = np.random.random(g[\"dimensions\"]) * 2**g[\"level\"]"
+      "for g in grid_data: g[\"density\"] = np.random.random(g[\"dimensions\"]) * 2**g[\"level\"]"
      ],
      "language": "python",
      "metadata": {},
@@ -516,7 +577,7 @@
      "input": [
       "grid_data[0][\"number_of_particles\"] = 0 # Set no particles in the top-level grid\n",
       "grid_data[0][\"particle_position_x\"] = np.array([]) # No particles, so set empty arrays\n",
-      "grid_data[0][\"particle_position_y\"] = np.array([]) \n",
+      "grid_data[0][\"particle_position_y\"] = np.array([])\n",
       "grid_data[0][\"particle_position_z\"] = np.array([])\n",
       "grid_data[1][\"number_of_particles\"] = 1000\n",
       "grid_data[1][\"particle_position_x\"] = np.random.uniform(low=0.25, high=0.75, size=1000)\n",
@@ -531,6 +592,26 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
+      "We need to specify the field units in a `field_units` dict:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "field_units = {\"density\":\"code_mass/code_length**3\",\n",
+      "               \"particle_position_x\":\"code_length\",\n",
+      "               \"particle_position_y\":\"code_length\",\n",
+      "               \"particle_position_z\":\"code_length\",}"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
       "Then, call `load_amr_grids`:"
      ]
     },
@@ -538,7 +619,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)"
+      "ds = load_amr_grids(grid_data, [32, 32, 32], field_units=field_units)"
      ],
      "language": "python",
      "metadata": {},
@@ -548,14 +629,14 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. Let's take a slice:"
+      "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, \"z\", [\"Density\"])\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
       "slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n",
       "slc.show()"
      ],
@@ -579,7 +660,7 @@
       "* Particles may be difficult to integrate.\n",
       "* Data must already reside in memory before loading it in to `yt`, whether it is generated at runtime or loaded from disk. \n",
       "* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\n",
-      "* No consistency checks are performed on the index\n",
+      "* No consistency checks are performed on the hierarchy\n",
       "* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data. "
      ]
     }

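The notebook changes above repeatedly convert bare NumPy arrays into (array, unit-string) tuples before handing them to `load_uniform_grid`. A minimal sketch of that packing pattern, in plain Python so it runs without yt or h5py installed (the field names and units mirror the notebook; nothing here calls yt itself):

```python
import numpy as np

# Fields are packed as (array, unit-string) tuples instead of bare arrays.
arr = np.random.random(size=(8, 8, 8))
data = dict(density=(arr, "g/cm**3"))

# The array is element 0 of the tuple, so shape lookups move one level deeper:
# old: data["density"].shape    new: data["density"][0].shape
assert data["density"][0].shape == (8, 8, 8)
assert data["density"][1] == "g/cm**3"

# The HDF5 cell zips a parallel list of unit strings against the file's
# items; the same pattern works with any ordered mapping of arrays:
fields = {"Bx": np.zeros(4), "Density": np.ones(4)}
units = ["gauss", "g/cm**3"]
data = {k: (v, u) for (k, v), u in zip(fields.items(), units)}
```

Note the ordering assumption: zipping a unit list against `f.items()` only pairs units correctly if the mapping iterates in the expected field order.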
diff -r 4fdf65441a28c200fab0128f55bb43808247fc21 -r fc550f6dac840a65f20842748c61475a15421ba3 yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -1127,7 +1127,7 @@
         px, py = self.convert_to_plot(plot,
                     [reg[field_x][gg][::self.stride],
                      reg[field_y][gg][::self.stride]])
-        plot._axes.scatter(px, py, edgecolors='None', marker=self.marker,
+        plot._axes.scatter(px.ndarray_view(), py.ndarray_view(), edgecolors='None', marker=self.marker,
                            s=self.p_size, c=self.color,alpha=self.alpha)
         plot._axes.set_xlim(xx0,xx1)
         plot._axes.set_ylim(yy0,yy1)
@@ -1141,8 +1141,8 @@
         zax = axis
         LE[xax], RE[xax] = xlim
         LE[yax], RE[yax] = ylim
-        LE[zax] = data.center[zax] - self.width*0.5
-        RE[zax] = data.center[zax] + self.width*0.5
+        LE[zax] = data.center[zax].ndarray_view() - self.width*0.5
+        RE[zax] = data.center[zax].ndarray_view() + self.width*0.5
         if self.region is not None \
             and np.all(self.region.left_edge <= LE) \
             and np.all(self.region.right_edge >= RE):

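The particle-callback fix above converts yt's unit-carrying arrays to plain ndarrays via `.ndarray_view()` before passing them to matplotlib's `scatter`. A toy illustration of what that conversion does, using a hypothetical `UnitArray` subclass (not yt's actual `YTArray`) that mimics only the relevant behavior:

```python
import numpy as np

class UnitArray(np.ndarray):
    """Toy stand-in for a unit-carrying ndarray subclass (not yt's YTArray)."""
    def __new__(cls, arr, units=""):
        obj = np.asarray(arr).view(cls)
        obj.units = units
        return obj

    def ndarray_view(self):
        # Return a plain ndarray view of the same buffer, dropping the
        # unit-aware subclass so downstream libraries see raw data.
        return self.view(np.ndarray)

px = UnitArray([0.1, 0.5, 0.9], units="code_length")
plain = px.ndarray_view()
assert type(plain) is np.ndarray   # subclass stripped
assert plain.base is not None      # still a view, no data copied
```

The design point: the conversion is a zero-copy view, so stripping units for plotting is cheap and leaves the original array untouched.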

https://bitbucket.org/yt_analysis/yt/commits/c1fa5b0b65e9/
Changeset:   c1fa5b0b65e9
Branch:      yt-3.0
User:        MatthewTurk
Date:        2014-03-16 14:05:31
Summary:     Merged in jzuhone/yt-3.x/yt-3.0 (pull request #724)

Generic Array Data docs.
Affected #:  2 files

diff -r fa45403f8a392652e6ec58d992f49dbdeed64899 -r c1fa5b0b65e9025d2f686e1f8464d6b343dff4ed doc/source/examining/Loading_Generic_Array_Data.ipynb
--- a/doc/source/examining/Loading_Generic_Array_Data.ipynb
+++ b/doc/source/examining/Loading_Generic_Array_Data.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:4a1cb9a60d5113fc4ca2172a69d5e5ebc5506d77928e9f39715060444dc8f8ed"
+  "signature": "sha256:cd145d8cadbf1a0065d0f9fb4ea107c215fcd53245b3bb7d29303af46f063552"
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -52,8 +52,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "from yt.mods import *\n",
-      "from yt.utilities.physical_constants import cm_per_kpc, cm_per_mpc"
+      "%matplotlib inline\n",
+      "from yt.mods import *"
      ],
      "language": "python",
      "metadata": {},
@@ -80,16 +80,16 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "To load this data into `yt`, we need to assign it a field name, in this case \"Density\", and place it into a dictionary. Then, we call `load_uniform_grid`:"
+      "To load this data into `yt`, we need associate it with a field. The `data` dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call `load_uniform_grid`:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = dict(Density = arr)\n",
+      "data = dict(density = (arr, \"g/cm**3\"))\n",
       "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "pf = load_uniform_grid(data, arr.shape, cm_per_mpc, bbox=bbox, nprocs=64)"
+      "ds = load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)"
      ],
      "language": "python",
      "metadata": {},
@@ -101,33 +101,38 @@
      "source": [
       "`load_uniform_grid` takes the following arguments and optional keywords:\n",
       "\n",
-      "* `data` : This is a dict of numpy arrays, where the keys are the field names.\n",
+      "* `data` : This is a dict of numpy arrays, where the keys are the field names\n",
       "* `domain_dimensions` : The domain dimensions of the unigrid\n",
-      "* `sim_unit_to_cm` : Conversion factor from simulation units to centimeters\n",
-      "* `bbox` : Size of computational domain in units sim_unit_to_cm\n",
+      "* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number\n",
+      "* `bbox` : Size of computational domain in units of `code_length`\n",
       "* `nprocs` : If greater than 1, will create this number of subarrays out of data\n",
       "* `sim_time` : The simulation time in seconds\n",
+      "* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number\n",
+      "* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number\n",
+      "* `velocity_unit` : The unit that corresponds to `code_velocity`\n",
       "* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n",
       "\n",
-      "This example creates a `yt`-native parameter file `pf` that will treat your array as a\n",
-      "density field in cubic domain of 3 Mpc edge size (3 * 3.0856e24 cm) and\n",
-      "simultaneously divide the domain into `nprocs` = 64 chunks, so that you can take advantage\n",
-      "of the underlying parallelism. "
-     ]
-    },
-    {
-     "cell_type": "markdown",
-     "metadata": {},
-     "source": [
-      "The resulting `pf` functions exactly like a parameter file from any other dataset--it can be sliced, and we can show the grid boundaries:"
+      "This example creates a `yt`-native dataset `ds` that will treat your array as a\n",
+      "density field in cubic domain of 3 Mpc edge size and simultaneously divide the \n",
+      "domain into `nprocs` = 64 chunks, so that you can take advantage\n",
+      "of the underlying parallelism. \n",
+      "\n",
+      "The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:\n",
+      "* A string, e.g. `length_unit=\"Mpc\"`\n",
+      "* A tuple, e.g. `mass_unit=(1.0e14, \"Msun\")`\n",
+      "* A floating-point value, e.g. `time_unit=3.1557e13`\n",
+      "\n",
+      "In the latter case, the unit is assumed to be cgs. \n",
+      "\n",
+      "The resulting `ds` functions exactly like a dataset like any other `yt` can handle--it can be sliced, and we can show the grid boundaries:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, 2, [\"Density\"])\n",
-      "slc.set_cmap(\"Density\", \"Blues\")\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+      "slc.set_cmap(\"density\", \"Blues\")\n",
       "slc.annotate_grids(cmap=None)\n",
       "slc.show()"
      ],
@@ -152,13 +157,14 @@
       "posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
       "posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
       "posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\n",
-      "data = dict(Density = np.random.random(size=(64,64,64)), \n",
+      "data = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n",
       "            number_of_particles = 10000,\n",
-      "            particle_position_x = posx_arr, \n",
-      "\t        particle_position_y = posy_arr,\n",
-      "\t        particle_position_z = posz_arr)\n",
+      "            particle_position_x = (posx_arr, \"code_length\"), \n",
+      "\t        particle_position_y = (posy_arr, \"code_length\"),\n",
+      "\t        particle_position_z = (posz_arr, \"code_length\"))\n",
       "bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\n",
-      "pf = load_uniform_grid(data, data[\"Density\"].shape, cm_per_mpc, bbox=bbox, nprocs=4)"
+      "ds = load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n",
+      "                       bbox=bbox, nprocs=4)"
      ],
      "language": "python",
      "metadata": {},
@@ -176,8 +182,8 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, \"z\", [\"Density\"])\n",
-      "slc.set_cmap(\"Density\", \"Blues\")\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
+      "slc.set_cmap(\"density\", \"Blues\")\n",
       "slc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\n",
       "slc.show()"
      ],
@@ -207,6 +213,7 @@
       "import h5py\n",
       "from yt.config import ytcfg\n",
       "data_dir = ytcfg.get('yt','test_data_dir')\n",
+      "from yt.utilities.physical_ratios import cm_per_kpc\n",
       "f = h5py.File(data_dir+\"/UnigridData/turb_vels.h5\", \"r\") # Read-only access to the file"
      ],
      "language": "python",
@@ -234,16 +241,44 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We can iterate over the items in the file handle to get the data into a dictionary, which we will then load:"
+      "We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. In this case, the units are simply cgs:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = {k:v for k,v in f.items()}\n",
-      "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])\n",
-      "pf = load_uniform_grid(data, data[\"Density\"].shape, 250.*cm_per_kpc, bbox=bbox, nprocs=8, periodicity=(False,False,False))"
+      "units = [\"gauss\",\"gauss\",\"gauss\", \"g/cm**3\", \"erg/cm**3\", \"K\", \n",
+      "         \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\", \"cm/s\"]"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "data = {k:(v.value,u) for (k,v), u in zip(f.items(),units)}\n",
+      "bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = load_uniform_grid(data, data[\"Density\"][0].shape, length_unit=250.*cm_per_kpc, bbox=bbox, nprocs=8, \n",
+      "                       periodicity=(False,False,False))"
      ],
      "language": "python",
      "metadata": {},
@@ -260,7 +295,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "prj = ProjectionPlot(pf, \"z\", [\"z-velocity\",\"Temperature\"], weight_field=\"Density\")\n",
+      "prj = ProjectionPlot(ds, \"z\", [\"z-velocity\",\"Temperature\",\"Bx\"], weight_field=\"Density\")\n",
+      "prj.set_log(\"z-velocity\", False)\n",
+      "prj.set_log(\"Bx\", False)\n",
       "prj.show()"
      ],
      "language": "python",
@@ -287,10 +324,10 @@
      "collapsed": false,
      "input": [
       "#Find the min and max of the field\n",
-      "mi, ma = pf.h.all_data().quantities[\"Extrema\"]('temperature')[0]\n",
+      "mi, ma = ds.all_data().quantities[\"Extrema\"]('Temperature')\n",
       "#Reduce the dynamic range\n",
-      "mi += 1.5e7\n",
-      "ma -= 0.81e7"
+      "mi = mi.value + 1.5e7\n",
+      "ma = ma.value - 0.81e7"
      ],
      "language": "python",
      "metadata": {},
@@ -327,9 +364,9 @@
       "# Choose a vector representing the viewing direction.\n",
       "L = [0.5, 0.5, 0.5]\n",
       "# Define the center of the camera to be the domain center\n",
-      "c = pf.domain_center\n",
+      "c = ds.domain_center[0]\n",
       "# Define the width of the image\n",
-      "W = 1.5*pf.domain_width\n",
+      "W = 1.5*ds.domain_width[0]\n",
       "# Define the number of pixels to render\n",
       "Npixels = 512 "
      ],
@@ -348,9 +385,9 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "cam = pf.h.camera(c, L, W, Npixels, tf, fields=['temperature'],\n",
-      "                  north_vector=[0,0,1], steady_north=True, \n",
-      "                  sub_samples=5, no_ghost=False,log_fields=False)\n",
+      "cam = ds.camera(c, L, W, Npixels, tf, fields=['Temperature'],\n",
+      "                north_vector=[0,0,1], steady_north=True, \n",
+      "                sub_samples=5, log_fields=[False])\n",
       "\n",
       "cam.transfer_function.map_to_colormap(mi,ma, \n",
       "                                      scale=15.0, colormap='algae')"
@@ -417,14 +454,17 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. We can check that we got the correct fields. "
+      "We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields. "
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "data = {hdu.name.lower():hdu.data for hdu in f[1:]}\n",
+      "data = {}\n",
+      "for hdu in f[1:]:\n",
+      "    name = hdu.name.lower()\n",
+      "    data[name] = (hdu.data,\"km/s\")\n",
       "print data.keys()"
      ],
      "language": "python",
@@ -435,15 +475,36 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "Now we load the data into `yt`. This particular file doesn't have any coordinate information, but let's assume that the box size is a Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code. "
+      "The velocity field names in this case are slightly different than the standard `yt` field names for velocity fields, so we will reassign the field names:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load_uniform_grid(data, data[\"x-velocity\"].shape, cm_per_mpc)\n",
-      "slc = SlicePlot(pf, \"x\", [\"x-velocity\",\"y-velocity\",\"z-velocity\"])\n",
+      "data[\"velocity_x\"] = data.pop(\"x-velocity\")\n",
+      "data[\"velocity_y\"] = data.pop(\"y-velocity\")\n",
+      "data[\"velocity_z\"] = data.pop(\"z-velocity\")"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "Now we load the data into `yt`. Let's assume that the box size is 1 Mpc. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded data from a supported code. "
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "ds = load_uniform_grid(data, data[\"velocity_x\"][0].shape, length_unit=(1.0,\"Mpc\"))\n",
+      "slc = SlicePlot(ds, \"x\", [\"velocity_x\",\"velocity_y\",\"velocity_z\"])\n",
+      "for ax in \"xyz\":\n",
+      "    slc.set_log(\"velocity_%s\" % ax, False)\n",
       "slc.annotate_velocity()\n",
       "slc.show()"
      ],
@@ -472,7 +533,7 @@
      "input": [
       "grid_data = [\n",
       "    dict(left_edge = [0.0, 0.0, 0.0],\n",
-      "         right_edge = [1.0, 1.0, 1.],\n",
+      "         right_edge = [1.0, 1.0, 1.0],\n",
       "         level = 0,\n",
       "         dimensions = [32, 32, 32]), \n",
       "    dict(left_edge = [0.25, 0.25, 0.25],\n",
@@ -496,7 +557,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "for g in grid_data: g[\"Density\"] = np.random.random(g[\"dimensions\"]) * 2**g[\"level\"]"
+      "for g in grid_data: g[\"density\"] = np.random.random(g[\"dimensions\"]) * 2**g[\"level\"]"
      ],
      "language": "python",
      "metadata": {},
@@ -516,7 +577,7 @@
      "input": [
       "grid_data[0][\"number_of_particles\"] = 0 # Set no particles in the top-level grid\n",
       "grid_data[0][\"particle_position_x\"] = np.array([]) # No particles, so set empty arrays\n",
-      "grid_data[0][\"particle_position_y\"] = np.array([]) \n",
+      "grid_data[0][\"particle_position_y\"] = np.array([])\n",
       "grid_data[0][\"particle_position_z\"] = np.array([])\n",
       "grid_data[1][\"number_of_particles\"] = 1000\n",
       "grid_data[1][\"particle_position_x\"] = np.random.uniform(low=0.25, high=0.75, size=1000)\n",
@@ -531,6 +592,26 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
+      "We need to specify the field units in a `field_units` dict:"
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "field_units = {\"density\":\"code_mass/code_length**3\",\n",
+      "               \"particle_position_x\":\"code_length\",\n",
+      "               \"particle_position_y\":\"code_length\",\n",
+      "               \"particle_position_z\":\"code_length\",}"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
       "Then, call `load_amr_grids`:"
      ]
     },
@@ -538,7 +619,7 @@
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "pf = load_amr_grids(grid_data, [32, 32, 32], 1.0)"
+      "ds = load_amr_grids(grid_data, [32, 32, 32], field_units=field_units)"
      ],
      "language": "python",
      "metadata": {},
@@ -548,14 +629,14 @@
      "cell_type": "markdown",
      "metadata": {},
      "source": [
-      "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. Let's take a slice:"
+      "`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
-      "slc = SlicePlot(pf, \"z\", [\"Density\"])\n",
+      "slc = SlicePlot(ds, \"z\", [\"density\"])\n",
       "slc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\n",
       "slc.show()"
      ],
@@ -579,7 +660,7 @@
       "* Particles may be difficult to integrate.\n",
       "* Data must already reside in memory before loading it in to `yt`, whether it is generated at runtime or loaded from disk. \n",
       "* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\n",
-      "* No consistency checks are performed on the index\n",
+      "* No consistency checks are performed on the hierarchy.\n",
       "* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data. "
      ]
     }
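As a standalone illustration of the field-renaming step the notebook now performs, here is a minimal sketch (not part of the commit): plain Python lists stand in for the NumPy arrays read from the FITS file, so it runs without yt installed.

```python
# Hypothetical field dict mimicking the notebook's (array, "unit") tuples;
# plain lists stand in for the NumPy arrays pulled from each HDU.
data = {
    "x-velocity": ([1.0, 2.0], "km/s"),
    "y-velocity": ([3.0, 4.0], "km/s"),
    "z-velocity": ([5.0, 6.0], "km/s"),
}

# Rename to yt's standard velocity field names, as the notebook does with pop():
for ax in "xyz":
    data["velocity_%s" % ax] = data.pop("%s-velocity" % ax)

print(sorted(data.keys()))  # → ['velocity_x', 'velocity_y', 'velocity_z']
```

Because `pop` returns the original tuple, the `(array, "unit")` pairing travels with the field under its new name.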

diff -r fa45403f8a392652e6ec58d992f49dbdeed64899 -r c1fa5b0b65e9025d2f686e1f8464d6b343dff4ed yt/visualization/plot_modifications.py
--- a/yt/visualization/plot_modifications.py
+++ b/yt/visualization/plot_modifications.py
@@ -1127,7 +1127,7 @@
         px, py = self.convert_to_plot(plot,
                     [reg[field_x][gg][::self.stride],
                      reg[field_y][gg][::self.stride]])
-        plot._axes.scatter(px, py, edgecolors='None', marker=self.marker,
+        plot._axes.scatter(px.ndarray_view(), py.ndarray_view(), edgecolors='None', marker=self.marker,
                            s=self.p_size, c=self.color,alpha=self.alpha)
         plot._axes.set_xlim(xx0,xx1)
         plot._axes.set_ylim(yy0,yy1)
@@ -1141,8 +1141,8 @@
         zax = axis
         LE[xax], RE[xax] = xlim
         LE[yax], RE[yax] = ylim
-        LE[zax] = data.center[zax] - self.width*0.5
-        RE[zax] = data.center[zax] + self.width*0.5
+        LE[zax] = data.center[zax].ndarray_view() - self.width*0.5
+        RE[zax] = data.center[zax].ndarray_view() + self.width*0.5
         if self.region is not None \
             and np.all(self.region.left_edge <= LE) \
             and np.all(self.region.right_edge >= RE):
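The `ndarray_view()` calls added to the particle callback strip yt's unit metadata before the values reach matplotlib. A rough sketch of the idea, assuming only that the unit-carrying array is an `ndarray` subclass (the `UnitArray` class here is a stand-in for illustration, not yt's actual `YTArray`):

```python
import numpy as np

# Stand-in for a unit-carrying ndarray subclass: ndarray_view() hands back a
# plain-ndarray view of the same buffer, which is what matplotlib expects.
class UnitArray(np.ndarray):
    def ndarray_view(self):
        return self.view(np.ndarray)

px = np.array([0.1, 0.5, 0.9]).view(UnitArray)
plain = px.ndarray_view()
print(type(plain) is np.ndarray)  # → True: the subclass (and its metadata) is dropped
```

Since `view` shares the underlying buffer, no data is copied when the units are stripped.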

Repository URL: https://bitbucket.org/yt_analysis/yt/
