[yt-dev] [yt_analysis/yt] Volume Rendering Rotation Issues (issue #483)

salvesen issues-reply at bitbucket.org
Fri Dec 21 14:22:57 PST 2012



New issue 483: Volume Rendering Rotation Issues
https://bitbucket.org/yt_analysis/yt/issue/483/volume-rendering-rotation-issues

salvesen:

Hi,
The camera.rotate functionality is not working properly.  I've attached a script and the resulting renderings for a test that, in order, (1) takes a snapshot, (2) rotates the camera by pi/2 and takes a snapshot, and (3) rotates the camera by 3pi/2 and takes a snapshot.  I *think* subsequent rotations are meant to add onto the previous rotation, rather than being applied relative to the initial camera setup, so this sequence should produce snapshots rotated by 0, pi/2, and 2pi relative to the initial snapshot.  However, inspection of the output renderings shows this is clearly not the case: the 0 and 2pi renderings do not match as they should.
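This is not the attached script verbatim, but a minimal sketch of the sequence I'm describing, written against the camera interface I've been using (pf.h.camera plus Camera.rotate/snapshot); the dataset path and transfer-function bounds are just placeholders:

import numpy as np
from yt.mods import *

pf = load("DD0010/moving7_0010")   # placeholder dataset

# Placeholder transfer function over a log10(Density) range
tf = ColorTransferFunction((-28, -24))
tf.add_layers(4)

c = [0.5, 0.5, 0.5]   # center point
L = [1.0, 0.0, 0.0]   # looking (normal) vector
W = 1.0               # width of the image plane in code units
N = 512               # image resolution

cam = pf.h.camera(c, L, W, N, tf)

cam.snapshot("rot_0.png")            # initial orientation
cam.rotate(np.pi / 2.0)
cam.snapshot("rot_pi_over_2.png")    # after a pi/2 rotation
cam.rotate(3.0 * np.pi / 2.0)
cam.snapshot("rot_2pi.png")          # cumulative 2*pi; should match rot_0.png

If the rotations are cumulative, rot_0.png and rot_2pi.png should be identical, which is not what I see.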

I repeated the test under the alternative assumption, namely that each rotation is made relative to the original camera orientation rather than to the previous rotation.  This also produced incorrect results.

I'd also like to raise the issue of documentation for the volume rendering tools.  In my recent experience with the various camera routines, I've found that the documentation is often missing or ambiguous.  For instance, as mentioned above, it was unclear to me how sequential rotations are performed: does each rotation build on the last one, or is each rotation made relative to the initial camera object?

The most confusing issue I've come across so far is how the center point, looking vector, north vector, and rotation vector work together to produce a volume rendering.  Confusion on this front leads to a lot of guessing about how these quantities interact.  My current understanding is as follows:

The center point defines the coordinate origin from which the looking vector, north vector, and rotation vector point radially away.  The north vector effectively defines the coordinate system to be used; if left unspecified, the camera picks a north vector itself, which may not be what the user expects.  The looking vector is drawn from the origin (defined by the center point) with respect to the coordinate system (defined by the north vector).  So, specifying a looking vector without knowing the north vector amounts to choosing an arbitrary looking vector, because you have no idea what the coordinate system is yet.

Now, if one explicitly chooses the center point, north vector, and looking vector, the result is sensible (so that's good news); a sketch of that workflow is included below.  I think users would benefit hugely from a schematic vector illustration of what is going on here, and they should be told upfront that the coordinate system is not pre-defined when they choose a looking vector, which often leads to confusing viewing angles.  Additionally, as described above, the rotation angle does not seem to work as advertised, which adds to the confusion about how to produce a rendering with specific, deliberate camera motions.  An example that performs some semi-complex, intentional camera movements (including rotations and changes to the looking vector) would be very helpful.
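For what it's worth, here is a sketch of the explicit-everything setup that gives me sensible results.  It assumes the north_vector keyword on pf.h.camera and the rot_vector keyword on Camera.rotate, and the dataset path and transfer-function bounds are again placeholders:

import numpy as np
from yt.mods import *

pf = load("DD0010/moving7_0010")   # placeholder dataset

tf = ColorTransferFunction((-28, -24))
tf.add_layers(4)

c = [0.5, 0.5, 0.5]       # center point (coordinate origin for the camera)
L = [0.0, 0.0, 1.0]       # looking (normal) vector, chosen relative to...
north = [0.0, 1.0, 0.0]   # ...an explicitly specified north vector

cam = pf.h.camera(c, L, 1.0, 512, tf, north_vector=north)
cam.snapshot("explicit_north.png")

# Deliberate camera motion: rotate about the (now known) north vector
cam.rotate(np.pi / 4.0, rot_vector=north)
cam.snapshot("rotated_about_north.png")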

Thanks!
- greg

