[yt-dev] Volume Rendering Refactor

B.W. Keller kellerbw at mcmaster.ca
Wed Sep 2 17:47:07 PDT 2015


I think a lot of these could potentially be fixed or finished in a big sprint
(the documentation items in particular).  Perhaps we could split this into
lists of difficult vs. semi-trivial items, and at least kill off the easier
ones before we do a big merge.  I agree pretty much completely with your
points, but VR is not a feature I rely on, so I don't lose anything if it
takes longer to get merged in.

On Wed, Sep 2, 2015 at 5:40 PM, Cameron Hummels <chummels at gmail.com> wrote:

> Hey everyone,
>
> Short Version of this email:
>
> Since it seems like I'm in the minority in trying to flush out some of the
> remaining problems with the VR infrastructure before it gets merged into
> the main head of the dev branch, I will not be an impediment anymore. I've
> provided a list of things I think should be addressed (or at least
> discussed) before merging at the bottom of this email, but as I said, I
> will no longer block the merge if people want to do it.
>
> Long Version:
>
> A few years ago, some of us decided that the volume rendering
> infrastructure was too clunky and difficult to use efficiently to make good
> volume renders, so we detailed how it should be improved in a YTEP (
> http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0010.html).
> Subsequently, Sam Skillman put in a huge amount of work (with help from
> Matt, me, Andrew, Suoqing, and more) overhauling the volume rendering
> infrastructure to match the description in the YTEP.  The new VR interface
> appears to work well in many cases, but I believe there are still a lot of
> remaining issues to address before it's ready for mainstream use:
> bugfixes, docs, recipes, etc.
>
> About 6 months ago, there was a pull request (
> https://bitbucket.org/yt_analysis/yt/pull-requests/1256/volume-rendering-refactor/diff)
> made to merge all of the new volume rendering infrastructure into the
> development branch, but at that point, the PR was enormous and difficult to
> review.  While there were still a lot of unresolved issues associated with
> the PR, it was decided to merge it into the "experimental" bookmark on the
> dev branch, so as to not hold up other development dependent on that
> infrastructure, and to make it easier for everyone to test it out.
>
> While I was reviewing this pull request, I tried my best to read through
> all of the code diffs, make sure there was appropriate documentation
> (docstrings, narrative, comments, etc) for such a significant change, and
> actually download and test run the example code to see if it did what it
> was supposed to do.  This was a ton of work, but I think this level of
> review needs to occur with major PRs to ensure things are working before we
> merge and expose problematic code to the user base.  If we put code out
> there and don't properly explain how to use it (or if it doesn't work),
> we're ultimately going to make a lot more work for ourselves responding to
> people's questions on the mailing list about how to use it.  And we'll
> potentially sour a lot of people on the code when they try to use
> something and find it doesn't do what they want it to do.
>
> Now there is increased pressure to just merge the VR refactor into the
> main dev branch so as to streamline overall development, and it seems I am
> the only one who has been resistant to this.  The argument for the merge
> has been: "once it is in the main branch, people will sort out the bugs."
>  Once something is in the main branch, it may very well get more people
> trying to use it; however, the level of confusion/irritation will increase
> if there is insufficient documentation, and the ability to change any kind
> of API is severely limited because doing so breaks people's scripts.  The
> time for reviewing and fixing code comes before it gets merged into the
> main dev branch.
>
> But ultimately, I do not want to be seen as the bad guy anymore on this.
> If people want to merge the code, then I don't want to hold it up.  I made
> a list of items when I reviewed the PR a few months back, and below are the
> things I think are remaining to be done.  But what I think really needs to
> be done is to have other people do a full code review (including testing)
> and find things themselves (maybe my list, maybe other things).  And then
> everyone works to fix them.  If people want to address my list, or if they
> want to ignore it, that's fine.  Really, it doesn't matter to me anymore.
>
> Cameron
>
>
> List of remaining things I think should be addressed before merging the
> experimental head with the main head of the dev branch:
>
>    - The cookbook recipe for a simple_vr should get rid of all of the
>    extraneous stuff and just show the simplest means of creating a VR,
>    just like the simple slice and simple projection recipes do (a minimal
>    sketch follows this list).
>    - clip_ratio could be renamed to something more descriptive
>    (stddev_mask?), and should default to something like 4.0, since a value
>    around there is used almost everywhere to produce useful images.
>    - When making a VR, it defaults to using the first field in the
>    field_name list if a field is not specified, which almost never gives
>    the desired result.  Perhaps just default to ('gas', 'density') like the
>    PW plots do?  This is implicitly assumed in examples (e.g. the
>    camera_movement.py cookbook recipe).
>    - The map_to_colormap function is not in the API docs.
>    - map_to_colormap has a scale keyword, which could be more descriptive
>    or have examples.  What does it do?  A value of 30 gives good results,
>    but 10 doesn't, and the default of 1.0 renders a black screen.  How does
>    the user figure out what to use, even in the right ballpark?  Change the
>    default to something useful?
>    - There should be a default filename for saving images.  The default
>    filename should be something descriptive, like what PlotWindow filenames
>    default to.
>    - Docstrings for all outward-facing functions and classes, including a
>    description, parameters, returned quantities, and an example.  Right
>    now, many examples are so trivial as to not be helpful at all:
>    "source = PointsSource(particle_positions)".  Examples would be better
>    served by a full VR with the appropriate method/class injected, probably
>    3-5 lines each (the opaque-source sketch after this list is the sort of
>    thing I mean); they could then be tested with docstring tests down the
>    road too.
>    - off_axis_projection() has a resolution arg; perhaps make resolution a
>    kwarg with a default of 512 or 256, just as PW and VR have?  (A sketch
>    of the current call follows this list.)
>    - off_axis_projection has an "item" kwarg, which should really be
>    "fields" and should be handled the same way PW does it (recognizing
>    lists of fields).
>    - The __init__ methods on many VR classes carry the docstrings, but it
>    would be better to have them just after the class declaration so that
>    the API docs render them properly.
>    - Rendering an off_axis_projection requires yt.write_image() instead
>    of sc.render() like the VRs, even though it uses the same
>    infrastructure.  Inconsistent -- why?  sc.render() *runs*, but even with
>    a clip_ratio it just gives a black screen because all of the values are
>    NaNs.
>    - Transfer functions operate in log space for fields that are logged.
>    This should be done in the linear space of the fields; it can be very
>    confusing (see the transfer function sketch after this list).
>    - TransferFunctionHelper should be split out into a different page in
>    the narrative docs, as the Jupyter notebook in the middle of the docs
>    breaks it up and makes it confusing.
>    - Put the VR annotations (annotate_grids()) in the narrative docs to
>    show how to do them.
>    - rotate on the camera should specify what it is rotating --
>    specifically, it rotates the "position" of the camera, not the direction of
>    the camera
>    - switch_view should be included in narrative docs
>    - Add examples for all the different lenses to the cookbook or
>    narrative docs
>    - Make one parallelization header, with subheaders in the narrative
>    docs
>    - Note the different ways to write images in VR: (1) you specify an
>    fname in the original yt.volume_render() call; (2) you take the
>    resulting im object from yt.volume_render() and write it using
>    im.write_png(fname); (3) you take the resulting sc object from
>    yt.volume_render() and call sc.render(fname).  I guess it is good to
>    have these three methods for getting the images to file, but it might be
>    worthwhile to state explicitly that there are three methods for writing
>    your VR to file, so people don't get confused about which is the main
>    method (as I did, and just had to figure it out by looking through
>    different examples, source, etc.).  All three are shown in a sketch
>    after this list.
>    - Camera movement cookbook needs to be improved to make images that
>    make sense
>    - Link to subsections of the VR docs in the relevant cookbook recipes,
>    not just to the VR docs as a whole.
>    - Add narrative docs to get the source, camera, get transfer function,
>    etc.
>    - Possibly get rid of all getters other than for source on the scene
>    object.
>    - get_source is incorrect as written, since dicts are not ordered.
>    - Left-handed versus right-handed images: there was significant
>    discussion regarding this in the VR refactor PR linked above.  The
>    conclusion that a few of us reached (Suoqing, Michael, myself) was:  the
>    VR system measures the normal vector from the camera, but in the
>    PlotWindow interface there is no "camera" per se, and the normal vector
>    is measured from the focus.  So both are right-handed from their own
>    perspective, but only because that normal vector is getting reversed
>    between the two systems.  The fix would be to flip the direction of
>    north_vector in the VR system so it's right-handed from the perspective
>    of the focus but left-handed from the perspective of the camera.  Then
>    similar inputs give the same results from PlotWindow and VR (including
>    off_axis_projection()).  A small cross-product sketch follows this list.
>    - Using opaque sources (e.g. points, lines, and grids) together with
>    render sources (e.g. gas VR) tends to result in seeing just one or the
>    other out of the box.  It takes a lot of work to balance the opacity of
>    the different sources, and how to do this is not documented.  I think we
>    should provide better default opacities for these sources, so that both
>    points and gas are visible in the same rendering out of the box.  I
>    think we should also document easy ways to change the relative opacity
>    of different sources, to highlight one source or the other in the
>    rendering (a sketch follows this list).
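>
> A minimal sketch of the sort of bare-bones recipe I mean for the first few
> items (simple_vr, the clip_ratio default, the default field), assuming the
> experimental API as described above; the dataset path is just a
> placeholder:
>
>     import yt
>
>     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>     # Name the field explicitly instead of relying on field_list order.
>     im, sc = yt.volume_render(ds, field=('gas', 'density'))
>     # Clipping at ~4 standard deviations usually gives a usable image.
>     sc.render("rendering.png", clip_ratio=4.0)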
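>
> For the off_axis_projection items, this is roughly the current two-step
> dance; the width and field choices here are illustrative, not taken from
> the branch:
>
>     import numpy as np
>     import yt
>
>     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>     center = ds.domain_center
>     normal = [1.0, 1.0, 0.0]  # line of sight
>     resolution = 512          # the positional arg that could become a kwarg
>     # "item" is the field argument that should arguably be "fields":
>     image = yt.off_axis_projection(ds, center, normal, ds.domain_width,
>                                    resolution, ('gas', 'density'))
>     # Writing out goes through write_image(), not sc.render():
>     yt.write_image(np.log10(image), "offaxis_density.png")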
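>
> On the transfer function point, this is the log-space behavior I mean: for
> a field that yt logs (like density), you have to hand the TF bounds in
> log10 of the field values, not the values themselves.  The bounds here are
> just illustrative:
>
>     import numpy as np
>     import yt
>
>     # Bounds are log10 of the density values -- the confusing part:
>     tf = yt.ColorTransferFunction((np.log10(1e-30), np.log10(1e-26)))
>     tf.add_layers(5, w=0.01)  # five Gaussian layers across those bounds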
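>
> The three image-writing paths, side by side (again assuming the
> experimental API as described in the item above):
>
>     import yt
>
>     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>
>     # (1) pass a filename straight to volume_render()
>     im, sc = yt.volume_render(ds, field=('gas', 'density'), fname='vr1.png')
>
>     # (2) write the returned image object yourself
>     im.write_png('vr2.png')
>
>     # (3) re-render from the returned scene object with a filename
>     sc.render('vr3.png')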
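>
> The handedness issue in a few lines of numpy.  Measuring the normal from
> the camera rather than from the focus flips its sign, which flips the east
> vector you get from the cross product; the conventions here are only
> illustrative:
>
>     import numpy as np
>
>     north = np.array([0.0, 0.0, 1.0])
>     normal = np.array([1.0, 0.0, 0.0])      # measured focus -> camera
>     east_focus = np.cross(north, normal)    # [0, 1, 0]: RHS from the focus
>     east_camera = np.cross(north, -normal)  # [0, -1, 0]: the reversed
>                                             # normal mirrors the image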
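>
> Finally, a sketch of both the fuller docstring example I'd like and the
> opacity balancing for opaque sources.  The class and keyword names follow
> the docstring snippet quoted above; I haven't re-checked them against the
> experimental branch:
>
>     import numpy as np
>     import yt
>     from yt.visualization.volume_rendering.api import PointsSource
>
>     ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>     im, sc = yt.volume_render(ds, field=('gas', 'density'))
>
>     particle_positions = np.random.random((100, 3))  # placeholder points
>     colors = np.ones((100, 4))
>     colors[:, 3] = 0.05  # turn alpha way down so the gas stays visible
>     sc.add_source(PointsSource(particle_positions, colors=colors))
>     sc.render('points_and_gas.png', clip_ratio=4.0)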
>
>
> --
> Cameron Hummels
> NSF Postdoctoral Fellow
> Department of Astronomy
> California Institute of Technology
> http://chummels.org
>