[yt-svn] commit/yt: 4 new changesets

commits-noreply at bitbucket.org
Fri Dec 8 09:56:00 PST 2017


4 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/32882dd66c07/
Changeset:   32882dd66c07
User:        yingchaolu
Date:        2017-11-07 20:24:50+00:00
Summary:     Parallel covering grid
Affected #:  1 file

diff -r eccaa23d408f0027881e21ba82f86d44b32aa013 -r 32882dd66c072c6f00635a2a3b76cb3235676d45 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -694,13 +694,16 @@
         if not iterable(self.ds.refine_by):
             refine_by = [refine_by, refine_by, refine_by]
         refine_by = np.array(refine_by, dtype="i8")
-        for chunk in self._data_source.chunks(fields, "io"):
+        for chunk in parallel_objects(self._data_source.chunks(fields, "io")):
             input_fields = [chunk[field] for field in fields]
             # NOTE: This usage of "refine_by" is actually *okay*, because it's
             # being used with respect to iref, which is *already* scaled!
             fill_region(input_fields, output_fields, self.level,
                         self.global_startindex, chunk.icoords, chunk.ires,
                         domain_dims, refine_by)
+        if self.comm.size > 1:
+            for i in range(len(fields)):
+                output_fields[i] = self.comm.mpi_allreduce(output_fields[i], op="sum")
         for name, v in zip(fields, output_fields):
             fi = self.ds._get_field_info(*name)
             self[name] = self.ds.arr(v, fi.units)
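The change above follows a fill-and-reduce pattern: `parallel_objects` hands each MPI rank a round-robin share of the chunks, each rank fills only its share of the (zero-initialized) covering grid, and the sum-allreduce recombines the rank-local arrays. Since every cell is written by exactly one rank, `op="sum"` reconstructs the full grid exactly. The pattern can be sketched outside of yt and MPI with plain NumPy; `fill_local`, the chunk list, and the rank count below are illustrative inventions, not yt API:

```python
import numpy as np

def fill_local(ncells, chunks, rank, nranks):
    """Mimic one rank's pass over parallel_objects: process every
    nranks-th chunk and leave all other cells zero-filled."""
    out = np.zeros(ncells)
    for i, (sl, values) in enumerate(chunks):
        if i % nranks == rank:
            out[sl] = values
    return out

# Four chunks tiling a toy 1-D "covering grid" of 16 cells.
chunks = [(slice(4 * i, 4 * (i + 1)), np.full(4, float(i + 1)))
          for i in range(4)]

# Two simulated ranks each fill their share of the grid.
nranks = 2
partials = [fill_local(16, chunks, r, nranks) for r in range(nranks)]

# The allreduce step: summing the partial grids recovers the full
# grid, because each cell was written by exactly one rank.
combined = np.sum(partials, axis=0)
serial = fill_local(16, chunks, rank=0, nranks=1)
print(np.array_equal(combined, serial))  # True
```

The `if self.comm.size > 1` guard in the diff simply skips this reduction in the serial case, where the single "rank" already holds the complete grid.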


https://bitbucket.org/yt_analysis/yt/commits/175dd4ad80cc/
Changeset:   175dd4ad80cc
User:        yingchaolu
Date:        2017-11-07 22:22:30+00:00
Summary:     Update parallel_computation.rst

Docs for covering grid.
Affected #:  1 file

diff -r 32882dd66c072c6f00635a2a3b76cb3235676d45 -r 175dd4ad80ccb2a4ffc14869a7a72ea0c9e016ea doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -18,6 +18,7 @@
 * Projections (:ref:`projection-plots`)
 * Slices (:ref:`slice-plots`)
 * Cutting planes (oblique slices) (:ref:`off-axis-slices`)
+* Covering grids (:ref:`examining-grid-data-in-a-fixed-resolution-array`)
 * Derived Quantities (total mass, angular momentum, etc) (:ref:`creating_derived_quantities`,
   :ref:`derived-quantities`)
 * 1-, 2-, and 3-D profiles (:ref:`generating-profiles-and-histograms`)
@@ -210,6 +211,7 @@
 * Projections (see :ref:`available-objects`)
 * Slices (see :ref:`available-objects`)
 * Cutting planes (see :ref:`available-objects`)
+* Covering grids (see :ref:`available-objects`)
 * Derived Quantities (see :ref:`derived-quantities`)
 * 1-, 2-, and 3-D profiles (see :ref:`generating-profiles-and-histograms`)
 * Isocontours & flux calculations (see :ref:`surfaces`)
@@ -428,7 +430,7 @@
 The best advice for these sort of calculations is to run with just a few
 processors and go from there, seeing if it the runtime improves noticeably.
 
-**Projections, Slices, and Cutting Planes**
+**Projections, Slices, Cutting Planes and Covering Grids**
 
 Projections, slices and cutting planes are the most common methods of creating
 two-dimensional representations of data.  All three have been parallelized in a
@@ -459,6 +461,8 @@
 * **Cutting planes**: cutting planes are parallelized exactly as slices are.
   However, in contrast to slices, because the data-selection operation can be
   much more time consuming, cutting planes often benefit from parallelism.
+  
+* **Covering Grids**: covering grids are parallelized exactly as slices are.
 
 Object-Based
 ++++++++++++


https://bitbucket.org/yt_analysis/yt/commits/5ac6f7815032/
Changeset:   5ac6f7815032
User:        yingchaolu
Date:        2017-11-07 22:24:48+00:00
Summary:     Update parallel_computation.rst
Affected #:  1 file

diff -r 175dd4ad80ccb2a4ffc14869a7a72ea0c9e016ea -r 5ac6f78150321497921b338a1bba185214845627 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -211,7 +211,7 @@
 * Projections (see :ref:`available-objects`)
 * Slices (see :ref:`available-objects`)
 * Cutting planes (see :ref:`available-objects`)
-* Covering grids (see :ref:`available-objects`)
+* Covering grids (see :ref:`construction-objects`)
 * Derived Quantities (see :ref:`derived-quantities`)
 * 1-, 2-, and 3-D profiles (see :ref:`generating-profiles-and-histograms`)
 * Isocontours & flux calculations (see :ref:`surfaces`)


https://bitbucket.org/yt_analysis/yt/commits/1529739c9636/
Changeset:   1529739c9636
User:        ngoldbaum
Date:        2017-12-08 17:55:48+00:00
Summary:     Merge pull request #1612 from yingchaolu/patch-2

Parallel covering grid
Affected #:  2 files

diff -r 7de54e9896cd1cd89ba5608b74428e3e8dea6bd3 -r 1529739c9636be39104b7b0824859a7ae66c7a99 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -18,6 +18,7 @@
 * Projections (:ref:`projection-plots`)
 * Slices (:ref:`slice-plots`)
 * Cutting planes (oblique slices) (:ref:`off-axis-slices`)
+* Covering grids (:ref:`examining-grid-data-in-a-fixed-resolution-array`)
 * Derived Quantities (total mass, angular momentum, etc) (:ref:`creating_derived_quantities`,
   :ref:`derived-quantities`)
 * 1-, 2-, and 3-D profiles (:ref:`generating-profiles-and-histograms`)
@@ -210,6 +211,7 @@
 * Projections (see :ref:`available-objects`)
 * Slices (see :ref:`available-objects`)
 * Cutting planes (see :ref:`available-objects`)
+* Covering grids (see :ref:`construction-objects`)
 * Derived Quantities (see :ref:`derived-quantities`)
 * 1-, 2-, and 3-D profiles (see :ref:`generating-profiles-and-histograms`)
 * Isocontours & flux calculations (see :ref:`surfaces`)
@@ -428,7 +430,7 @@
 The best advice for these sort of calculations is to run with just a few
 processors and go from there, seeing if it the runtime improves noticeably.
 
-**Projections, Slices, and Cutting Planes**
+**Projections, Slices, Cutting Planes and Covering Grids**
 
 Projections, slices and cutting planes are the most common methods of creating
 two-dimensional representations of data.  All three have been parallelized in a
@@ -459,6 +461,8 @@
 * **Cutting planes**: cutting planes are parallelized exactly as slices are.
   However, in contrast to slices, because the data-selection operation can be
   much more time consuming, cutting planes often benefit from parallelism.
+  
+* **Covering Grids**: covering grids are parallelized exactly as slices are.
 
 Object-Based
 ++++++++++++

diff -r 7de54e9896cd1cd89ba5608b74428e3e8dea6bd3 -r 1529739c9636be39104b7b0824859a7ae66c7a99 yt/data_objects/construction_data_containers.py
--- a/yt/data_objects/construction_data_containers.py
+++ b/yt/data_objects/construction_data_containers.py
@@ -694,13 +694,16 @@
         if not iterable(self.ds.refine_by):
             refine_by = [refine_by, refine_by, refine_by]
         refine_by = np.array(refine_by, dtype="i8")
-        for chunk in self._data_source.chunks(fields, "io"):
+        for chunk in parallel_objects(self._data_source.chunks(fields, "io")):
             input_fields = [chunk[field] for field in fields]
             # NOTE: This usage of "refine_by" is actually *okay*, because it's
             # being used with respect to iref, which is *already* scaled!
             fill_region(input_fields, output_fields, self.level,
                         self.global_startindex, chunk.icoords, chunk.ires,
                         domain_dims, refine_by)
+        if self.comm.size > 1:
+            for i in range(len(fields)):
+                output_fields[i] = self.comm.mpi_allreduce(output_fields[i], op="sum")
         for name, v in zip(fields, output_fields):
             fi = self.ds._get_field_info(*name)
             self[name] = self.ds.arr(v, fi.units)

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
