[Yt-svn] yt: 10 new changesets

hg at spacepope.org
Fri Feb 25 19:56:13 PST 2011


hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/214ddc275963
changeset: 3765:214ddc275963
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 16:55:55 2011 -0700
description:
Setting no_ghost=True by default in the Camera object.  This greatly
increases rendering speed; higher-quality renderings can still be
enabled by passing no_ghost=False.
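
For reference, a minimal sketch of requesting the higher-quality path,
assuming the yt-2.x pf.h.camera convenience constructor; the dataset path,
transfer-function bounds, and argument order are illustrative, not taken
from this changeset:

    from yt.mods import *

    pf = load("DD0010/moving7_0010")            # hypothetical dataset
    # Transfer function over an assumed log10 density range
    tf = ColorTransferFunction((-30, -22))
    tf.add_layers(4, w=0.01)
    # no_ghost now defaults to True (fast); pass False for smoother renders
    cam = pf.h.camera([0.5, 0.5, 0.5], [1.0, 1.0, 1.0], 0.2, 256, tf,
                      no_ghost=False)
    cam.snapshot("render_high_quality.png")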

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/20a49bc35fbd
changeset: 3766:20a49bc35fbd
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 19:18:17 2011 -0700
description:
Adding an initial go at streamlines.  It uses the AMR kD-tree to
traverse the bricks, and the streamline integrator uses a fourth-order
Runge-Kutta scheme.  It could still use some optimizing and improving,
but it works.  Because streamlines don't stay within a single domain,
this also required adding a mechanism to merge the kD-trees.
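
Schematically, the update performed by the new integrate_streamline routine
(see the VolumeIntegrator.pyx hunk below) is a standard fixed-step RK4
advance.  A plain NumPy sketch, with velocity_at standing in for the brick's
interpolated, normalized vector-field lookup:

    import numpy as np

    def rk4_step(pos, h, velocity_at):
        """Advance a position through a vector field by one RK4 step of size h."""
        k1 = velocity_at(pos)
        k2 = velocity_at(pos + 0.5 * h * k1)
        k3 = velocity_at(pos + 0.5 * h * k2)
        k4 = velocity_at(pos + h * k3)
        # Same weights as the Cython version: k1/6 + k2/3 + k3/3 + k4/6
        return pos + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0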

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/2cb24145eba6
changeset: 3767:2cb24145eba6
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 19:18:53 2011 -0700
description:
Merging

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/42dbe8a92c19
changeset: 3768:42dbe8a92c19
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 19:19:36 2011 -0700
description:
Merging.

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/fd91f581b0e0
changeset: 3769:fd91f581b0e0
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 19:25:31 2011 -0700
description:
Oops: actually adding streamlines.py to the repo this time.  It has an
example script in the docstrings.
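
A usage sketch along the lines of the docstring example, assuming the
Streamlines constructor takes seed positions and the three velocity field
names; the dataset path and exact keyword names are illustrative:

    from yt.mods import load
    from yt.visualization.api import Streamlines
    import numpy as na

    pf = load("DD0010/moving7_0010")             # hypothetical dataset
    seeds = na.random.random((100, 3))           # 100 seed points in the unit domain
    streamlines = Streamlines(pf, seeds, 'x-velocity', 'y-velocity',
                              'z-velocity', length=1.0)
    streamlines.integrate_through_volume()
    # streamlines.streamlines then holds an (N, 3) position track per seed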

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/c81ca2e61e6a
changeset: 3770:c81ca2e61e6a
user:      Matthew Turk <matthewturk at gmail.com>
date:      Thu Feb 24 18:33:59 2011 -0800
description:
Adding field caching for inline runs, so fields don't have to be
re-detected on every hierarchy instantiation.  Also updating the
contour tree to be slightly faster.
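
The actual change is in the enzo/data_structures.py hunk below; the pattern,
schematically, is to stash the detected field lists on the class so later
hierarchy instantiations in the same (inline) process reuse them.  An
illustrative sketch, not the yt class itself:

    class CachedFieldsMixin:
        _cached_field_list = None
        _cached_derived_field_list = None

        def _detect_fields(self):
            cls = self.__class__
            if cls._cached_field_list is None:
                self._detect_fields_expensive()          # hypothetical slow path
                cls._cached_field_list = self.field_list
                cls._cached_derived_field_list = self.derived_field_list
            else:
                # Later instantiations skip the expensive search entirely
                self.field_list = cls._cached_field_list
                self.derived_field_list = cls._cached_derived_field_list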

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/9f769234f317
changeset: 3771:9f769234f317
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 23:27:30 2011 -0700
description:
Cleaning up Streamlines.  Note the name change from StreamLines to
Streamlines.  This commit comes with better docstrings and
better-behaved integrations.  It also adds a new _mpi_cat_na_array
function that uses mpi4py's built-in type and shape inference.
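
A rough sketch of what such a concatenation helper can look like with
mpi4py's lowercase (pickle-based) API, where dtype and shape are inferred
automatically; this is illustrative, not the yt implementation:

    from mpi4py import MPI
    import numpy as na

    def cat_na_array(comm, arr):
        # allgather pickles each rank's array, so no explicit dtype/shape exchange
        pieces = comm.allgather(arr)
        return na.concatenate(pieces)

    comm = MPI.COMM_WORLD
    local = na.arange(comm.rank + 1, dtype="float64")   # ranks contribute different lengths
    full = cat_na_array(comm, local)                    # identical result on every rank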

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/5826af92a51a
changeset: 3772:5826af92a51a
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Thu Feb 24 23:29:10 2011 -0700
description:
removing _mpi_cat2d_double_array.

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/f35629ccd48b
changeset: 3773:f35629ccd48b
user:      Matthew Turk <matthewturk at gmail.com>
date:      Fri Feb 25 08:39:43 2011 -0800
description:
Adding a buffer-aware version of _mpi_allsum.
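
Illustratively, a buffer-aware all-sum reduces directly on the NumPy buffer
via mpi4py's uppercase API instead of pickling through the lowercase one.
A minimal sketch, not the yt implementation:

    from mpi4py import MPI
    import numpy as na

    def allsum_inplace(comm, arr):
        # Element-wise sum across ranks, written back into arr on every rank.
        comm.Allreduce(MPI.IN_PLACE, arr, op=MPI.SUM)
        return arr

    comm = MPI.COMM_WORLD
    data = na.ones(8, dtype="float64")
    allsum_inplace(comm, data)          # each element now equals comm.size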

hg Repository: http://hg.spacepope.org/yt
details:   http://hg.spacepope.org/yt/rev/a0ac3326fe87
changeset: 3774:a0ac3326fe87
user:      Sam Skillman <sam.skillman at gmail.com>
date:      Fri Feb 25 10:18:52 2011 -0700
description:
Improvements to how the streamlines are parallelized and finalized.
Now uses the new _mpi_allsum.
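
The finalization pattern is roughly: each rank fills in only the streamlines
it integrated locally, leaves the rest zeroed, and an element-wise all-sum
reassembles the full set on every rank.  A schematic sketch; the ownership
rule, sizes, and placeholder integration are made up:

    from mpi4py import MPI
    import numpy as na

    comm = MPI.COMM_WORLD
    nstreams, nsteps = 8, 64
    positions = na.zeros((nstreams, nsteps, 3), dtype="float64")
    for sid in range(comm.rank, nstreams, comm.size):    # assumed round-robin ownership
        positions[sid] = 0.01 * sid                      # placeholder for a real integration
    comm.Allreduce(MPI.IN_PLACE, positions, op=MPI.SUM)  # everyone now has every streamline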

diffstat:

 yt/analysis_modules/level_sets/contour_finder.py           |    6 +-
 yt/frontends/enzo/data_structures.py                       |   12 +
 yt/utilities/_amr_utils/ContourFinding.pyx                 |   41 ++-
 yt/utilities/_amr_utils/VolumeIntegrator.pyx               |   69 ++++++
 yt/utilities/amr_kdtree/amr_kdtree.py                      |  109 +++++++++-
 yt/utilities/parallel_tools/parallel_analysis_interface.py |   21 +-
 yt/visualization/api.py                                    |    3 +
 yt/visualization/streamlines.py                            |  146 +++++++++++++
 yt/visualization/volume_rendering/camera.py                |    4 +-
 9 files changed, 391 insertions(+), 20 deletions(-)

diffs (truncated from 614 to 300 lines):

diff -r 80f278d2d1bd -r a0ac3326fe87 yt/analysis_modules/level_sets/contour_finder.py
--- a/yt/analysis_modules/level_sets/contour_finder.py	Tue Feb 22 23:32:21 2011 -0500
+++ b/yt/analysis_modules/level_sets/contour_finder.py	Fri Feb 25 10:18:52 2011 -0700
@@ -291,6 +291,7 @@
         total_contours += na.unique(grid["tempContours"][grid["tempContours"] > -1]).size
         new_contours = na.unique(grid["tempContours"][grid["tempContours"] > -1]).tolist()
         tree += zip(new_contours, new_contours)
+    tree = set(tree)
     pbar.finish()
     pbar = get_pbar("Calculating joins ", len(data_source._grids))
     grid_set = set()
@@ -299,9 +300,10 @@
         cg = grid.retrieve_ghost_zones(1, "tempContours", smoothed=False)
         grid_set.update(set(cg._grids))
         fd = cg["tempContours"].astype('int64')
-        tree += amr_utils.construct_boundary_relationships(fd)
+        boundary_tree = amr_utils.construct_boundary_relationships(fd)
+        tree.update(((a, b) for a, b in boundary_tree))
     pbar.finish()
-    sort_new = na.array(list(set(tree)), dtype='int64')
+    sort_new = na.array(list(tree), dtype='int64')
     mylog.info("Coalescing %s joins", sort_new.shape[0])
     joins = coalesce_join_tree(sort_new)
     #joins = [(i, na.array(list(j), dtype="int64")) for i, j in sorted(joins.items())]
diff -r 80f278d2d1bd -r a0ac3326fe87 yt/frontends/enzo/data_structures.py
--- a/yt/frontends/enzo/data_structures.py	Tue Feb 22 23:32:21 2011 -0500
+++ b/yt/frontends/enzo/data_structures.py	Fri Feb 25 10:18:52 2011 -0700
@@ -559,6 +559,18 @@
     def save_data(self, *args, **kwargs):
         pass
 
+    _cached_field_list = None
+    _cached_derived_field_list = None
+
+    def _detect_fields(self):
+        if self.__class__._cached_field_list is None:
+            EnzoHierarchy._detect_fields(self)
+            self.__class__._cached_field_list = self.field_list
+            self.__class__._cached_derived_field_list = self.derived_field_list
+        else:
+            self.field_list = self.__class__._cached_field_list
+            self.derived_field_list = self.__class__._cached_derived_field_list
+
     def _generate_random_grids(self):
         my_rank = self._mpi_get_rank()
         my_grids = self.grids[self.grid_procs.ravel() == my_rank]
diff -r 80f278d2d1bd -r a0ac3326fe87 yt/utilities/_amr_utils/ContourFinding.pyx
--- a/yt/utilities/_amr_utils/ContourFinding.pyx	Tue Feb 22 23:32:21 2011 -0500
+++ b/yt/utilities/_amr_utils/ContourFinding.pyx	Fri Feb 25 10:18:52 2011 -0700
@@ -38,24 +38,27 @@
     if i0 < i1: return i0
     return i1
 
-@cython.boundscheck(False)
-@cython.wraparound(False)
+#@cython.boundscheck(False)
+#@cython.wraparound(False)
 def construct_boundary_relationships(
         np.ndarray[dtype=np.int64_t, ndim=3] contour_ids):
     # We only look at the boundary and one cell in
     cdef int i, j, nx, ny, nz, offset_i, offset_j, oi, oj
     cdef np.int64_t c1, c2
-    tree = []
     nx = contour_ids.shape[0]
     ny = contour_ids.shape[1]
     nz = contour_ids.shape[2]
+    # We allocate an array of fixed (maximum) size
+    cdef int s = (ny*nx + nx*nz + nx*nz - 4) * 9
+    cdef np.ndarray[np.int64_t, ndim=2] tree = np.zeros((s, 2), dtype="int64")
+    cdef int ti = 0
     # First x-pass
     for i in range(ny):
         for j in range(nz):
             for offset_i in range(3):
                 oi = offset_i - 1
                 if i == 0 and oi == -1: continue
-                if i == ny - 1 and oj == 1: continue
+                if i == ny - 1 and oi == 1: continue
                 for offset_j in range(3):
                     oj = offset_j - 1
                     if j == 0 and oj == -1: continue
@@ -63,18 +66,22 @@
                     c1 = contour_ids[0, i, j]
                     c2 = contour_ids[1, i + oi, j + oj]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
                     c1 = contour_ids[nx-1, i, j]
                     c2 = contour_ids[nx-2, i + oi, j + oj]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
     # Now y-pass
     for i in range(nx):
         for j in range(nz):
             for offset_i in range(3):
                 oi = offset_i - 1
                 if i == 0 and oi == -1: continue
-                if i == nx - 1 and oj == 1: continue
+                if i == nx - 1 and oi == 1: continue
                 for offset_j in range(3):
                     oj = offset_j - 1
                     if j == 0 and oj == -1: continue
@@ -82,17 +89,21 @@
                     c1 = contour_ids[i, 0, j]
                     c2 = contour_ids[i + oi, 1, j + oj]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
                     c1 = contour_ids[i, ny-1, j]
                     c2 = contour_ids[i + oi, ny-2, j + oj]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
     for i in range(nx):
         for j in range(ny):
             for offset_i in range(3):
                 oi = offset_i - 1
                 if i == 0 and oi == -1: continue
-                if i == nx - 1 and oj == 1: continue
+                if i == nx - 1 and oi == 1: continue
                 for offset_j in range(3):
                     oj = offset_j - 1
                     if j == 0 and oj == -1: continue
@@ -100,12 +111,16 @@
                     c1 = contour_ids[i, j, 0]
                     c2 = contour_ids[i + oi, j + oj, 1]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
                     c1 = contour_ids[i, j, nz-1]
                     c2 = contour_ids[i + oi, j + oj, nz-2]
                     if c1 > -1 and c2 > -1:
-                        tree.append((i64max(c1,c2), i64min(c1,c2)))
-    return tree
+                        tree[ti,0] = i64max(c1,c2)
+                        tree[ti,1] = i64min(c1,c2)
+                        ti += 1
+    return tree[:ti,:]
 
 cdef inline int are_neighbors(
             np.float64_t x1, np.float64_t y1, np.float64_t z1,
diff -r 80f278d2d1bd -r a0ac3326fe87 yt/utilities/_amr_utils/VolumeIntegrator.pyx
--- a/yt/utilities/_amr_utils/VolumeIntegrator.pyx	Tue Feb 22 23:32:21 2011 -0500
+++ b/yt/utilities/_amr_utils/VolumeIntegrator.pyx	Fri Feb 25 10:18:52 2011 -0700
@@ -705,6 +705,75 @@
             gaussian = self.star_coeff * exp(-gexp/self.star_sigma_num)
             for i in range(3): rgba[i] += gaussian*dt*colors[i]
         kdtree_utils.kd_res_rewind(ballq)
+        
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    def integrate_streamline(self, pos, np.float64_t h):
+        cdef np.float64_t k1[3], k2[3], k3[3], k4[3]
+        cdef np.float64_t newpos[3], oldpos[3]
+        for i in range(3):
+            newpos[i] = oldpos[i] = pos[i]
+        self.get_vector_field(newpos, k1)
+        for i in range(3):
+            newpos[i] = oldpos[i] + 0.5*k1[i]*h
+
+        if not (self.left_edge[0] < newpos[0] and newpos[0] < self.right_edge[0] and \
+                self.left_edge[1] < newpos[1] and newpos[1] < self.right_edge[1] and \
+                self.left_edge[2] < newpos[2] and newpos[2] < self.right_edge[2]):
+            for i in range(3):
+                pos[i] = newpos[i]
+            return
+        
+        self.get_vector_field(newpos, k2)
+        for i in range(3):
+            newpos[i] = oldpos[i] + 0.5*k2[i]*h
+
+        if not (self.left_edge[0] <= newpos[0] and newpos[0] <= self.right_edge[0] and \
+                self.left_edge[1] <= newpos[1] and newpos[1] <= self.right_edge[1] and \
+                self.left_edge[2] <= newpos[2] and newpos[2] <= self.right_edge[2]):
+            for i in range(3):
+                pos[i] = newpos[i]
+            return
+
+        self.get_vector_field(newpos, k3)
+        for i in range(3):
+            newpos[i] = oldpos[i] + k3[i]*h
+            
+        if not (self.left_edge[0] <= newpos[0] and newpos[0] <= self.right_edge[0] and \
+                self.left_edge[1] <= newpos[1] and newpos[1] <= self.right_edge[1] and \
+                self.left_edge[2] <= newpos[2] and newpos[2] <= self.right_edge[2]):
+            for i in range(3):
+                pos[i] = newpos[i]
+            return
+
+        self.get_vector_field(newpos, k4)
+
+        for i in range(3):
+            pos[i] = oldpos[i] + h*(k1[i]/6.0 + k2[i]/3.0 + k3[i]/3.0 + k4[i]/6.0)
+    
+    @cython.boundscheck(False)
+    @cython.wraparound(False)
+    @cython.cdivision(True)
+    cdef void get_vector_field(self, np.float64_t pos[3],
+                               np.float64_t *vel):
+        cdef np.float64_t dp[3]
+        cdef int ci[3] 
+        cdef np.float64_t vel_mag = 0.0
+        for i in range(3):
+            ci[i] = (int)((pos[i]-self.left_edge[i])/self.dds[i])
+            dp[i] = (pos[i] - self.left_edge[i])%(self.dds[i])
+
+        cdef int offset = ci[0] * (self.dims[1] + 1) * (self.dims[2] + 1) \
+                          + ci[1] * (self.dims[2] + 1) + ci[2]
+        
+        for i in range(3):
+            vel[i] = offset_interpolate(self.dims, dp, self.data[i] + offset)
+            vel_mag += vel[i]*vel[i]
+        vel_mag = np.sqrt(vel_mag)
+        if vel_mag != 0.0:
+            for i in range(3):
+                vel[i] /= vel_mag
 
 cdef class GridFace:
     cdef int direction
diff -r 80f278d2d1bd -r a0ac3326fe87 yt/utilities/amr_kdtree/amr_kdtree.py
--- a/yt/utilities/amr_kdtree/amr_kdtree.py	Tue Feb 22 23:32:21 2011 -0500
+++ b/yt/utilities/amr_kdtree/amr_kdtree.py	Fri Feb 25 10:18:52 2011 -0700
@@ -126,7 +126,7 @@
 class AMRKDTree(HomogenizedVolume):
     def __init__(self, pf,  l_max=None, le=None, re=None,
                  fields=None, no_ghost=False,
-                 tree_type='domain',log_fields=None):
+                 tree_type='domain',log_fields=None, merge_trees=False):
         r"""
         AMR kd-Tree object, a homogenized volume.
 
@@ -198,7 +198,14 @@
             prone to longer data IO times.  If all the data can fit in
             memory on each cpu, this can be the fastest option for
             multiple ray casts on the same dataset.
-
+        merge_trees: bool, optional
+            If True, the distributed kD-tree can be merged
+            together in an allgather-like fashion.  This should not be
+            used for parallel rendering, as it will cause all
+            processors to render all bricks.  This is primarily useful
+            for applications that are not domain decomposed but still
+            want to build the kD-tree in parallel. Default:False
+        
 
         Returns
         -------
@@ -306,19 +313,30 @@
         t2 = time()
         mylog.debug('It took %e seconds to build AMRKDTree.tree' % (t2-t1))
         
-        # Build Tree Dictionary
         self._build_tree_dict()
 
+        # If the full amr kD-tree is requested, merge the results from
+        # the parallel build.
+        if merge_trees and nprocs > 1:
+            self.join_parallel_trees()            
+            self.my_l_corner = self.domain_left_edge
+            self.my_r_corner = self.domain_right_edge
+        
         # Initialize the kd leafs:
         self.initialize_leafs()
         
         # Add properties to leafs/nodes
         self.total_cost = self.count_cost()
+
         # Calculate the total volume spanned by the tree
         self.volume = self.count_volume()
         mylog.debug('Cost is %d' % self.total_cost)
         mylog.debug('Volume is %e' % self.volume) 
 
+        self.current_saved_grids = []
+        self.current_vcds = []
+
+
     def _build_tree_dict(self):
         self.tree_dict = {}
         for node in self.depth_traverse():
@@ -511,6 +529,32 @@
         self.brick_dimensions = na.array(self.brick_dimensions)
         del current_saved_grids, current_vcds
         self.bricks_loaded = True
+
+    def get_brick_data(self,current_node):
+        if current_node.brick is not None:
+            return 
+


