[yt-svn] commit/yt-doc: ngoldbaum: Added a tip explaining how to assign work to the compute nodes rather than individual processors in a parallel_objects loop.

Bitbucket commits-noreply at bitbucket.org
Wed Dec 14 11:36:29 PST 2011


1 new commit in yt-doc:


https://bitbucket.org/yt_analysis/yt-doc/changeset/6094f9188564/
changeset:   6094f9188564
user:        ngoldbaum
date:        2011-12-14 20:33:32
summary:     Added a tip explaining how to assign work to the compute nodes rather than individual processors in a parallel_objects loop.
affected #:  1 file

diff -r 5b9c86452b8307e33e90986e1ea63a8858deec8e -r 6094f91885640079feac14823e9e12d277c31926 source/advanced/parallel_computation.rst
--- a/source/advanced/parallel_computation.rst
+++ b/source/advanced/parallel_computation.rst
@@ -332,6 +332,15 @@
     There will be a sweet spot between speed of run and the waiting time in
     the job scheduler queue; it may be worth trying to find it.
 
+  * If you are using object-based parallelism but doing CPU-intensive
+    computations on each object, you may find that setting :py:data:`num_procs`
+    equal to the number of processors per compute node can lead to significant
+    speedups.  By default, most MPI implementations assign tasks to processors
+    on a 'by-slot' basis, so this setting tells yt to do the computations on a
+    single object using only the processors of a single compute node.  A nice
+    application of this type of parallelism is calculating a list of derived
+    quantities for a large number of simulation outputs.
+
   * It is impossible to tune a parallel operation without understanding what's
     going on. Read the documentation, look at the underlying code, or talk to
     other yt users. Get informed!
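As a rough illustration of the tip in this commit (a pure-Python sketch, not yt's actual implementation), the snippet below shows why choosing a per-object group size equal to the number of processors per compute node keeps each object's workers on a single node, assuming the MPI launcher places ranks 'by slot' (i.e. node 0 is filled before node 1). The helper names are hypothetical:

```python
# Sketch only: models by-slot MPI rank placement and consecutive
# per-object rank groups, as a parallel_objects-style loop would use.

def node_of(rank, procs_per_node):
    """Node hosting `rank` under by-slot placement (node 0 fills first)."""
    return rank // procs_per_node

def subgroups(total_ranks, group_size):
    """Split MPI ranks into consecutive groups, one group per object."""
    return [list(range(start, start + group_size))
            for start in range(0, total_ranks, group_size)]

if __name__ == "__main__":
    procs_per_node = 8
    total_ranks = 32  # e.g. four nodes with eight cores each

    # group size == processors per node: every group sits on one node,
    # so each object's computation stays node-local.
    for group in subgroups(total_ranks, procs_per_node):
        nodes = {node_of(r, procs_per_node) for r in group}
        assert len(nodes) == 1

    # a mismatched group size makes some groups straddle node boundaries,
    # forcing inter-node communication for a single object.
    straddlers = [g for g in subgroups(total_ranks, 12)
                  if len({node_of(r, procs_per_node) for r in g}) > 1]
    print(len(straddlers) > 0)
```

With 32 ranks and 8 cores per node, groups of 8 are always node-local, while groups of 12 inevitably span two nodes; this is the placement effect the tip relies on.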

Repository URL: https://bitbucket.org/yt_analysis/yt-doc/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
More information about the yt-svn mailing list