[yt-users] Interactive Volume Rendering Screenshot

Nathan Goldbaum nathan12343 at gmail.com
Wed Jan 3 08:35:34 PST 2018


We should also look into adding support for MIP projections for off-axis
projections with the software volume renderer. It seems like it should be
doable although I bet it will require some poking around in the volume
renderer's internals since I doubt there's a way to parameterize a MIP
projection in terms of a transfer function.
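To make the distinction concrete, here is a minimal numpy sketch (synthetic ray samples and a toy per-sample opacity, not yt's actual renderer internals): MIP keeps only the largest sample along a ray, while transfer-function compositing accumulates an attenuated contribution from every sample, so MIP is a different reduction rather than a special case of compositing:

```python
import numpy as np

# Samples of a field along a single ray, front to back (synthetic values).
samples = np.array([0.1, 0.7, 0.3, 0.9, 0.2])

# Maximum intensity projection: keep only the brightest sample on the ray.
mip = samples.max()

# Transfer-function-style compositing: every sample contributes emission,
# attenuated by the opacity accumulated in front of it (toy constant alpha).
alpha = 0.3
transmittance = np.cumprod(
    np.concatenate(([1.0], (1 - alpha) * np.ones(len(samples) - 1)))
)
composited = np.sum(samples * alpha * transmittance)

print(mip)         # 0.9 -- depends only on the single largest sample
print(composited)  # a weighted sum over all samples, not just the max
```

Because the composited result depends on every sample (and on ordering via the transmittance), no choice of transfer function reproduces a pure max-along-the-ray.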

On Wed, Jan 3, 2018 at 10:32 AM, Matthew Turk <matthewturk at gmail.com> wrote:

>
> Hi Nicole,
>
> On Tue, Jan 2, 2018 at 3:28 PM, Nicole DyAnn Melso <ndm2126 at columbia.edu>
> wrote:
>
>> Hello,
>>
>>
>>
>> I am attempting to create a series of volume rendered images of my
>> simulation at different time steps so that I can compile them into a movie.
>> I want to use the maximum intensity fragment shader, which I believe is
>> only available in the yt interactive volume rendering with OpenGL (I've
>> looked into simply using a maximum intensity projection, but this is only
>> available on-axis). I would like to be able to take a screenshot of the
>> OpenGL window created when I run the yt interactive volume rendering, but
>> I'm having difficulty adding this feature to the volume rendering source
>> code in my local yt directory. Below is the modification I’ve made to the
>> code as well as a screenshot of the problem. I have no prior experience
>> with OpenGL.
>>
>>
>>
>
> Ah!  This looks pretty close, I think.  What might be happening is a
> strangeness with how the bytes are coming back, and maybe even
> double-writing based on the start/end and which buffer is being read.  I
> can't quite say for sure.
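One concrete way the bytes can come back looking "strange" (a numpy-only sketch, independent of the actual yt code): glReadPixels packs rows in row-major order as height x width x 3, so reshaping with the axes swapped, as in `reshape((width, height, 3))`, silently works for square viewports but scrambles any non-square one into a tiled, overlaid-looking image:

```python
import numpy as np

# Toy stand-in for a glReadPixels result: a 4x6 RGB image flattened to bytes.
# OpenGL returns rows bottom-to-top, packed row-major as height x width x 3.
height, width = 4, 6
pixels = np.arange(height * width * 3, dtype=np.uint8)

# Correct unpacking: rows first, then flip vertically to image convention.
good = np.flipud(pixels.reshape((height, width, 3)))

# Swapped axes: no error is raised (the element count still matches),
# but each "row" now has the wrong length, shearing the image contents.
bad = pixels.reshape((width, height, 3))

print(good.shape)  # (4, 6, 3)
print(bad.shape)   # (6, 4, 3)
```

For a square window the two reshapes have the same shape, which is why this class of bug can hide until the viewport is resized.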
>
> But!  Since you're experimenting with the OpenGL stuff anyway, I would
> suggest checking out the branch for interactive VR that I have in an open
> pull request:
>
> https://github.com/yt-project/yt/pull/1452
>
> This isn't *completely* working, but it *should* work precisely for what
> you're doing.  The API is considerably different, but also much simpler.
> This is an example script that gets you started:
>
> import yt
> from yt.testing import fake_amr_ds
> import yt.visualization.volume_rendering.interactive_vr as ivr
>
> ds = fake_amr_ds()
>
> data_source = ds.sphere("c", 0.5)
>
> ivr.SceneGraph.from_ds(data_source, ("index", "x"))
>
> This will dump you into an IPython shell that has access to variables such
> as "scene" which has an "image" property.  I'm hoping to have the time to
> finish this up in the next month or so and get it accepted upstream in yt.
>
> This also has the ability to do offscreen rendering, but the API is
> (currently) slightly more complicated.  Hope that helps!
>
> -Matt
>
>
>> */yt/visualization/volume_rendering/interactive_vr.py*
>>
>>
>>
>>     def _retrieve_framebuffer(self):
>>         ox, oy, width, height = GL.glGetIntegerv(GL.GL_VIEWPORT)
>> +       my_buffer = GL.glReadBuffer(GL.GL_FRONT)
>> -       debug_buffer = GL.glReadPixels(0, 0, width, height, GL.GL_RGB, GL.GL_UNSIGNED_BYTE)
>> +       debug_buffer = GL.glReadPixels(0, 0, width, height, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, my_buffer)
>>         arr = np.fromstring(debug_buffer, "uint8", count=width*height*3)
>>         return arr.reshape((width, height, 3))
>>
>>
>>
>>
>>
>> */yt/visualization/volume_rendering/interactive_loop.py*
>>
>>
>>
>>     def __call__(self, scene, camera, callbacks):
>>         while not glfw.WindowShouldClose(self.window) or self.should_quit:
>>             callbacks(self.window)
>>             if callbacks.draw:
>>                 camera.compute_matrices()
>>                 scene.set_camera(camera)
>>                 scene.render()
>>                 glfw.SwapBuffers(self.window)
>>                 callbacks.draw = False
>> +               arr = scene._retrieve_framebuffer()
>> +               write_bitmap(arr, "test.png")
>>             glfw.PollEvents()
>>             yield self
>>         glfw.Terminate()
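As an aside on the quoted loop header (checked in plain Python, no GLFW required): the condition `not glfw.WindowShouldClose(self.window) or self.should_quit` stays true whenever `should_quit` is set, so the flag cannot actually stop the loop; if the intent is to exit on either signal, `and not self.should_quit` is the usual form. The function and flag names below just mirror the snippet, they are not the real yt/GLFW API:

```python
# Truth-table check of the two loop conditions (hypothetical names
# mirroring the quoted snippet, not the actual yt/GLFW API).
def keeps_running_quoted(window_should_close, should_quit):
    return not window_should_close or should_quit

def keeps_running_fixed(window_should_close, should_quit):
    return not window_should_close and not should_quit

# Requesting a quit while the window is still open:
print(keeps_running_quoted(False, True))  # True  -- loop keeps spinning
print(keeps_running_fixed(False, True))   # False -- loop exits as intended
```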
>>
>>
>> The simulation is a single rectangular box, but the screenshot I get with
>> the above code shows multiple boxes that look as if they should have been
>> overlaid on top of one another. Maybe there is a simpler method of
>> accomplishing this? I'd appreciate any suggestions.
>>
>> Thanks,
>> Nicole Melso
>>
>> --
>> Nicole Melso
>> NSF Graduate Research Fellow
>> Columbia Astronomy Department
>>
>> _______________________________________________
>> yt-users mailing list
>> yt-users at lists.spacepope.org
>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>>
>>
>