Hi all,

I just did a scaling test on Pleiades at NASA Ames and got somewhat worse scaling at high processor counts.  This is with one of Matt's datasets, so that might be the issue.

Here's some summary data for the hierarchy: http://paste.yt-project.org/show/2348/

I actually found superlinear scaling going from 1 processor to 2 processors, so I made two different scaling plots.  I think the second plot (assuming the 2-core run is representative of the true serial performance) is probably more accurate.

http://imgur.com/5pQ2P

http://imgur.com/pOKty

Since I was running these jobs interactively, I was able to get a pretty good feel for which parts of the calculation were most time-consuming.  As the plots above show, beyond 8 processors the projection operation itself was so fast that increasing the processor count really didn't help much.  Most of the overhead is in parsing the hierarchy, setting up the MPI communicators, and communicating and assembling the projection on all processors at the end, in rough order of importance.

This is also quite memory intensive: each core (independent of the global number of cores) needed at least 4.5 gigabytes of memory.
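For concreteness, here is a minimal sketch of the kind of per-phase timing involved (the dataset path is hypothetical, and this assumes the yt-2.x load/pf.h.proj API; run under mpirun with --parallel to exercise the parallel path):

    import time
    from yt.mods import load

    t0 = time.time()
    pf = load("DD0252/DD0252")        # hypothetical Enzo output path
    pf.h                              # forces hierarchy parsing
    t1 = time.time()
    proj = pf.h.proj(0, "Density")    # local projection + global merge
    t2 = time.time()

    print "hierarchy: %.2f s  projection+merge: %.2f s" % (t1 - t0, t2 - t1)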
<div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Nathan Goldbaum<br>Graduate Student<br>Astronomy & Astrophysics, UCSC<br><a href="mailto:goldbaum@ucolick.org">goldbaum@ucolick.org</a><br>http://www.ucolick.org/~goldbaum</div></div></div>
On May 4, 2012, at 4:20 AM, Matthew Turk wrote:

Hi Sam,

Thanks a ton.  This looks good to me, seeing as how at few tasks we have the overhead of creating the tree, and at many tasks we'll have collective operations.  I'll try to get ahold of another testing machine and then I'll issue a PR.  (And close Issue #348!)

-Matt

On Thu, May 3, 2012 at 6:47 PM, Sam Skillman <samskillman@gmail.com> wrote:

Meant to include the scaling image.

On Thu, May 3, 2012 at 4:44 PM, Sam Skillman <samskillman@gmail.com> wrote:

Hi Matt & friends,

I tested this on a fairly large nested simulation with about 60k grids, using 6 nodes of Janus (dual-hex nodes), and ran on 1 through 64 processors.  I got fairly good scaling, and I made a quick mercurial repo on bitbucket with everything except the dataset needed to do a similar study: https://bitbucket.org/samskillman/quad-tree-proj-performance

Raw timing (projects/quad_proj_scale: more perf.dat):

    64 2.444e+01
    32 4.834e+01
    16 7.364e+01
     8 1.125e+02
     4 1.853e+02
     2 3.198e+02
     1 6.370e+02

A few notes:
-- I ran with 64 cores first, then again so that the disks were somewhat warmed up, and used only the second timing of the 64-core run.
-- While I did get full nodes, the machine doesn't have a ton of I/O nodes, so in an ideal setting performance may be even better.
-- My guess would be that a lot of this speedup comes from having a parallel filesystem, so you may not get as great a speedup on your laptop.
-- Speedup from 32 to 64 is nearly ideal... this is great.

This looks pretty great to me, and I'd +1 any PR.

Sam
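For reference, a quick sketch that turns a perf.dat in the two-column format above (core count, wall-clock seconds) into speedup and efficiency numbers; with Sam's data the 32-to-64 step comes out to ~1.98x, matching his note:

    timings = {}
    for line in open("perf.dat"):
        cores, seconds = line.split()
        timings[int(cores)] = float(seconds)

    t_serial = timings[1]
    for cores in sorted(timings):
        speedup = t_serial / timings[cores]
        print "%3d cores: speedup %6.2f  efficiency %.2f" % (
            cores, speedup, speedup / cores)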
On Thu, May 3, 2012 at 1:42 PM, Matthew Turk <matthewturk@gmail.com> wrote:

Hi all,

I implemented this "quadtree extension" that duplicates the quadtree on all processors, which may make it nicer to scale projections.  Previously the procedure was:

1) Locally project
2) Merge across procs:
   2a) Serialize quadtree
   2b) Point-to-point communicate
   2c) Deserialize
   2d) Merge local and remote
   2e) Repeat from 2a
3) Finish

I've added a step 0), "initialize entire quadtree," which means all of step 2 becomes "perform a sum of a big array on all procs."  This has good and bad elements: we're still doing a lot of heavy communication across processors, but it will be managed by the MPI implementation instead of by yt.  Also, we avoid all of the costly serialize/deserialize procedures.  So for a given dataset, step 0 will be fixed in cost, but step 1 will shrink as the number of processors goes up.  Step 2, which is now a single (or two) communication step, will increase in cost with the number of processors.
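Schematically, the new merge amounts to the sketch below (hypothetical array size and names, not yt's actual QuadTree internals): every rank allocates the full tree, deposits its local projection, and a single collective sum replaces the old serialize/communicate/deserialize rounds.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Step 0: every rank allocates the full tree up front; this cost is
    # fixed per dataset, independent of the number of tasks.
    n_cells = 1024 ** 2                        # hypothetical tree size
    local_vals = np.zeros(n_cells, dtype="float64")

    # Step 1: deposit locally projected values; faked here with a
    # rank-specific slice standing in for each task's grids.
    lo = comm.rank * n_cells // comm.size
    hi = (comm.rank + 1) * n_cells // comm.size
    local_vals[lo:hi] = 1.0

    # Step 2: one elementwise sum across all ranks, managed by MPI.
    merged = np.empty_like(local_vals)
    comm.Allreduce(local_vals, merged, op=MPI.SUM)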
type="cite"><blockquote type="cite"><blockquote type="cite">means all of step 2 becomes "perform sum of big array on all procs."<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">This has good and bad elements: we're still doing a lot of heavy<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">communication across processors, but it will be managed by the MPI<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">implementation instead of by yt.  Also, we avoid all of the costly<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">serialize/deserialize procedures.  So for a given dataset, step 0 will<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">be fixed in cost, but step 1 will be reduced as the number of<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">processors goes up.  Step 2, which now is a single (or two)<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">communication steps, will increase in cost with increasing number of<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">processors.<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">So, it's not clear that this will *actually* be helpful or not.  It<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">needs testing, and I've pushed it here:<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><a href="bb://MatthewTurk/yt/">bb://MatthewTurk/yt/</a><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">hash 3f39eb7bf468<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">If anybody out there could test it, I'd be might glad.  
This is the script I've been using:

http://paste.yt-project.org/show/2343/

I'd *greatly* appreciate testing results -- particularly for processor counts like 1, 2, 4, 8, 16, 32, 64, ... .  On my machine the results are somewhat inconclusive.  Keep in mind that you'll have to run with the option --config serialize=False to get real results.  Here's the shell command I used:

    ( for i in 1 2 3 4 5 6 7 8 9 10 ; do mpirun -np ${i} python2.7 proj.py --parallel --config serialize=False ; done ) 2>&1 | tee proj_new.log

Comparison against results from the old method would also be super helpful.

The alternate idea I'd had was a bit different, harder to implement, and has a glaring problem: serialize arrays and do the butterfly reduction, but instead of converting into data objects, progressively walk Hilbert indices.  Unfortunately, this only works up to an effective size of 2^32, which is not going to work in a lot of cases.
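One plausible reading of that 2^32 ceiling (an assumption on my part, not taken from yt's code): a 2D Hilbert index over an N x N effective mesh needs 2*log2(N) bits, so packing it into a single 64-bit integer caps N at 2^32 per side.

    # Bits needed for a 2D Hilbert index at effective size 2**level per
    # side; a single 64-bit integer runs out just past level 32.
    for level in (16, 24, 32, 40):
        bits = 2 * level
        status = "fits in 64 bits" if bits <= 64 else "overflows 64 bits"
        print "effective size 2^%d per side: %d-bit index (%s)" % (
            level, bits, status)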
Anyway, if this doesn't work, I'd be eager to hear if anybody has any ideas.  :)

-Matt