[yt-dev] yt Survey Responses Summary

John ZuHone jzuhone at gmail.com
Thu Dec 18 13:13:01 PST 2014


Where can I read the responses? 

In particular, I’d like to see which parts of the 2-to-3 transition folks spoke up about most as still being left hanging. Some things we left out of the transition deliberately (e.g., the boolean regions) because they didn’t fit well within the new framework or because we wanted to start from scratch on similar functionality. Other things are simply deprecated for lack of use (I’m looking at a few of my own creations here that might need to be on the chopping block someday). So we definitely need to triage: what should be transitioned as soon as possible, what can wait, and what should be deprecated entirely. 

Because nobody here, not even Matt, is paid to work on yt, we have to be realistic about the level of support we can provide. I would argue that for a bunch of men and women who are working on this “on the side,” as it were, we are doing a pretty good job. I’m not trying to be combative about the survey; I just think we need to be realistic about what we’re able to do, which, by scientific software standards, is (sadly) pretty darn good. 

As far as the yt 3 transition, Matt, I for one will not fault you for the way that shook out. Most of us devs were using 3.0 almost completely for our own work, and our interaction with the 2.x codebase was becoming very sparse. I’m not sure we had a good choice one way or another on the timing of the release—I think delaying would have created other problems that may have been just as bad. 

One of the great things about open science and open development is that all the weaknesses and loose ends are always out in the open, and there is no way we are going to identify most of the important bugs unless folks actually try things out on their own data, where all of the corner cases are going to be exercised. We have a test suite for a reason, but of necessity the test suite is limited and the test datasets are small and typically rather simple. It seems like we were guaranteed to have a rough transition no matter what.  

> On Dec 18, 2014, at 3:56 PM, Matthew Turk <matthewturk at gmail.com> wrote:
> 
> Hi yt-dev list,
> 
> Thanks for the summary, Cameron.
> 
> In light of these strengths and weaknesses, I think we should figure out a strategy moving forward.  Of course, this strategy is a bit of a weird thing -- yt is right now, to my knowledge, a completely volunteer community.  (Although one survey response said "Matt is the only one paid to work on yt" this is not actually the case.)  There may be individuals with funded projects which name yt as a destination for work -- for instance, simulated observations -- but there's not anyone employed to work on the nuts and bolts of keeping the community running, the code bug-free, and so on.
> 
> So, in light of that, what do we do?  It is obvious that we need:
> 
>  * Work on the transition from yt 2 to yt 3, particularly in analysis modules
>  * Work on documentation, although I must confess from reading the responses it's not clear to me there is a "right answer" to the question of docs, as there are lots of contradictory opinions.  Not to say we don't know a lot of things that can be improved, but I was surprised at the lack of consensus on the overall problems with the docs.
>  * Better tracking of issues that are outstanding in the community.  The issue tracker can be effective if we use it, but right now it is not followed with regularity.  When I personally sit down to do work on yt, I sit down to advance things that I have tracked personally, which often include bugs but also features, and I don't always say, "What is an easy bug to check the box on?"
> 
> I read a paper by Stan Ahalt a while back about the agile process as applied to the water science center and their science-focused software development.  It involved frequent communication with the community and short bursts of activity.  In that case, they had a local team and a clearly-defined set of responsibilities; we have neither.
> 
> We've tried in the past having testing leaders, doc leaders, and on and on.  It sometimes works, sometimes doesn't.  And sometimes it makes those people bear the brunt of annoyance from other devs.  What should we try now?  Where are the weak points in our infrastructure, particularly those weak points that can be fixed without intruding on the lives and careers of the members of the community of developers?
> 
> What can we do better?  Do we need stronger leadership?  Weaker leadership?  Holding up new PRs?  An internalization or codification of values?  Rundowns of issue tracking, perhaps in an automated way?  More frequent, lower barrier-to-entry meetings where we go over things?  Should we call upon an external advisory board?
> 
> I also want to take a moment to discuss the yt 3 transition and to publicly eat crow about how that went.  The release came at a time for me when I'd been putting an enormous amount of effort into the code in an attempt to cut it off and release it before various things happened in my outside-of-yt life.  I was unsuccessful in that regard (which just made me want the emotional burden of a pending release gone even more), but the release went out shortly thereafter anyway.  And I take responsibility, because while in many ways it was well-vetted and robust (and I still believe it will be useful for growing our community), in other ways that were crucial to the *existing* community, particularly people who have been around for years and years, it was not sufficient.  And, it was my fault.  Disruption was inevitable and necessary, since we had to right some wrongs from past yt development, and I think we are recovering, but it would be nice if we could have sidestepped it a bit more.
> 
> -Matt
> 
> On Tue, Dec 16, 2014 at 9:06 PM, Cameron Hummels <chummels at gmail.com> wrote:
> Fellow yt users:
> 
> A few weeks ago, we asked yt users and developers to fill out a brief survey (http://goo.gl/forms/hRNryOWTPO) to provide us with feedback on how well yt is meeting the needs of the community and how we can improve.  Thank you to all 39 people who responded; your feedback has given us a great deal to consider as we move forward with the code.  We summarize the results of the survey below, but I start with the basic takeaway:
> 
> Overall Survey Takeaway:
> The survey respondents are generally pleased with yt.  It meets their needs, has a wonderful community, is relatively easy to install, and has fair documentation.  Major short-term requests were for improvements in documentation, particularly in the API docs and source code commenting, as well as more cross-linking within the existing documentation and making sure the docs are up to date.  Furthermore, people wanted more attention paid to making sure existing code in 3.0 works and to restoring all 2.x functionality in 3.0.
> 
> The single biggest takeaway from the survey is that the transition to yt 3.0 has been fraught with difficulties.  Many submitters expressed satisfaction with the new functionality in 3.0, but the overall transition process, in terms of documentation, analysis modules, and community response, has been found lacking.
> 
> Background:
> There were 39 unique responses to our survey.  75% of the respondents were grads and postdocs with a smattering of faculty, undergrads, and researchers.  Nearly everyone is at 4-year universities.  50% of the respondents consider themselves intermediate users, 20% novice, 20% advanced, and 10% gurus.
> 
> Installation:
> 90% of the respondents use the standalone install script, with several users employing other methods (potentially in addition to the standalone script).  95% of the respondents rated installation as a 3 or better (out of 5), with most people settling on a 4 out of 5.  Installation comments were aimed at having better means of installing on remote supercomputing systems and/or making pip installs work more reliably.
> 
> Community Responsiveness:
> 72% of respondents gave yt 5 out of 5 for community responsiveness, and 97% gave 3 or greater.  Clearly this is our strong point.  There was a very wide distribution of ways in which people contacted the community for help, with the most popular being the mailing lists, the IRC channel, emailing developers directly, and searching Google.  Comments in this section were mostly positive, but one user wished for more concrete action to be taken after bugs were reported.
> 
> Documentation:
> 77% of respondents gave 4 or 5 out of 5 for the overall rating of the documentation.  Individual docs components were more of a mix.  Cookbooks were ranked very highly, and quickstart notebooks and narrative docs were generally ranked well.  The two documentation components that seemed to be ranked lower (although still fair) were the API docs and comments in the source code, with 15% of respondents noting that they were “mostly not useful” (i.e., 2 out of 5).  There were a lot of comments regarding ways to improve the docs, which I bullet point here:
> * Organization of docs is difficult to parse; hard to find what you’re looking for
> * Hard to know what to search for, so make the command list (i.e., the API docs) more prominent
> * Docs not always up to date (even between 3.0 and dev)
> * Discrepancies between API docs and narrative docs
> * Examples are either too simple or too advanced--need more intermediate examples
> * Units docs need more explanation (see the short sketch after this list)
> * Not enough source code commenting or API docs
> * Not enough cross-linking between docs
> * More FAQ / gotchas for common mistakes
> * API docs should include more examples and also note how to use all of the options, not just the most common.
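> 
> To make the units item concrete, here is a minimal sketch of the unit machinery the docs are being asked to explain better (the unit symbols Msun and kpc ship with yt 3.x; the density value below is made up purely for illustration):
> 
>     from yt.units import Msun, kpc
> 
>     # quantities carry their units; arithmetic and conversions track them automatically
>     rho = 1.0e7 * Msun / kpc**3      # an illustrative density, not a real number
>     print(rho.in_units("g/cm**3"))   # convert to cgs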
> 
> Functionality:
> 88% of respondents found yt to meet their research needs (4 or 5 out of 5).  Respondents are generally using yt on a variety of datasets, including grid data, octree data, particle data, and MHD, with only a handful of users dealing with spherical or cylindrical data at present.  Nearly all of the frontends are being used by respondents, with a few exceptions: Chombo, MOAB, Nyx, Pluto, and non-astro data.  Visualization remains the main use of yt (97% of respondents), though simple analysis (82%) and advanced analysis (62%) are also common.  Interestingly, 31% of respondents use the halo analysis tools, while only 15% use the synthetic observation analysis.
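> 
> As a point of reference for what the visualization numbers refer to, a minimal sketch of the typical workflow might look like the following (this assumes the IsolatedGalaxy sample dataset used in the yt docs; any supported dataset path works the same way):
> 
>     import yt
> 
>     # load a dataset; the path here is the standard IsolatedGalaxy sample data
>     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
> 
>     # slice through the domain along the z-axis and plot the density field
>     slc = yt.SlicePlot(ds, "z", "density")
>     slc.save()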
> 
> Big Picture:
> 51% of respondents gave yt 5 out of 5 for general satisfaction, with 28% giving 4 out of 5 and 15% giving 3 out of 5.  Overall, this is pretty good, though probably biased toward satisfied users, since the numbers only reflect people who chose to fill out the survey.  Comments on the greatest strengths of yt include: 
> * visualization capabilities
> * community support
> * flexibility
> Comments on the biggest shortcomings of yt include:
> * documentation (see above)
> * learning to “think in yt”
> * adding new functionality while existing functionality remains broken (or undocumented)
> * making sure 3.0 matches all functionality from 2.x
> * keeping the documentation up to date
> * making the transition from 2.x to 3.0 easier (e.g., how to update scripts)
> Things to focus on in the next year:
> * documentation (almost unanimously)
> * making sure 3.0 can do everything 2.x could
> 
> Thank you for all of the valuable feedback.  We sincerely appreciate the constructive criticism; it will make for a better code and community!  We will put together a blueprint for addressing these shortcomings soon.  Look for it after the holiday break.  Have a wonderful holiday!
> 
> On behalf of the yt development team,
> 
> Cameron 
> 
> -- 
> Cameron Hummels
> Postdoctoral Researcher
> Steward Observatory
> University of Arizona
> http://chummels.org

_______________________________________________
yt-dev mailing list
yt-dev at lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org

