2016-03-22 – Have you tried turning it off and on again?

Alex: After a long, long absence, the DevBlog is finally being updated again. Even without anything being posted here, development has progressed rapidly over the past 7 months. Tomas and Emil have defended their thesis on interactive volume rendering of large data, and Emil has since transitioned into working on OpenSpace as a developer. Speaking of new people, we have a couple of new developers in the mix as well. Eric and Jonathas will help out from New York with their development and architecture expertise; Erik and Kalle are doing their thesis work at the American Museum of Natural History and will work on geospatial imagery; and last but not least, Michael and Sebastian are doing their thesis at Goddard and will be working on bringing current space weather information into the hands of the public. I'm looking forward to providing up-to-date information on all of these projects again!

Eric: Since I am new to the project, I'm still getting up to speed. As I work for AMNH, I took care of the paperwork so that I can get paid. I also read the past blog entries to learn more of the development history. I'm now working on learning the build process, including CMake, so that I can build the executable(s?), first on my Mac and then on Windows.

Emil: I'm in the process of finishing up a new interface for volume raycasters that will enable rendering of volumetric data together with antialiased geometry. The volume rendering is integrated with the ABuffer renderer to properly support multiple intersecting semi-transparent volumes with semi-transparent geometry inside them.

For platforms not supporting the ABuffer renderer (OpenGL < 4.2), volumes can still be rendered using the framebuffer renderer. In this case, geometry is first drawn to a separate FBO with a depth buffer attached to it. The depth buffer is fed into the subsequent volume rendering step, so that parts of volumes can be occluded by geometry. A scene with only one volume and fully opaque geometry can be rendered correctly even with this approach. However, when falling back on this simpler rendering scheme, volumes are not blended correctly with each other, nor with semi-transparent geometry. Depending on future priorities, this method could also be extended to support correct alpha blending of non-intersecting objects (both volumes and geometries) by adding a CPU-based depth sorting step. Intersecting semi-transparent objects will probably never be supported for OpenGL < 4.2.

While implementing the new volume rendering algorithms, the depth buffer of the framebuffer renderer has been replaced with a 32-bit float texture. Both the ABuffer and framebuffer renderers now represent depth as the actual camera-to-fragment distance, encoded as a 32-bit float. This will hopefully give us appropriate resolution for doing depth sorting even for huge scenes, with high precision close to the camera and lower precision farther away.
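
To make the fallback path concrete, below is a minimal sketch, in plain OpenGL/C++, of how such a geometry pass could be set up. This is not OpenSpace's actual code: the attachment layout and all names (createGeometryTargets, distanceTex, and so on) are assumptions for illustration only.

```cpp
// Sketch of the OpenGL < 4.2 fallback: geometry is rendered into an
// offscreen FBO that, besides a color attachment and a depth attachment
// for ordinary depth testing, carries a 32-bit float (GL_R32F) texture
// into which the fragment shader writes the camera-to-fragment distance.
// The volume raycasting pass then samples that texture to stop rays at
// opaque geometry.

#include <GL/glew.h>

GLuint fbo = 0, colorTex = 0, distanceTex = 0, depthRbo = 0;

bool createGeometryTargets(int width, int height) {
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // Regular color attachment holding the shaded geometry.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // 32-bit float attachment; the geometry fragment shader writes the
    // camera-to-fragment distance here, so the raycaster can compare
    // against actual distances instead of nonlinear depth values.
    glGenTextures(1, &distanceTex);
    glBindTexture(GL_TEXTURE_2D, distanceTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                 GL_RED, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, distanceTex, 0);

    // Ordinary depth renderbuffer so the geometry pass depth-tests itself.
    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);

    const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);

    bool complete =
        glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return complete;
}
```

In a setup like this, the geometry fragment shader would write the length of the camera-space position into the second attachment; the volume raycaster then samples distanceTex and terminates each ray once its marched distance exceeds the stored value, which is what lets parts of volumes be occluded by opaque geometry.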

Michael & Sebastian: For the past two weeks we have been working on getting screen-space images and framebuffers working for both flat screens and domes. It is coming along great, but it still needs testing. This week, we started on a new feature to render 2D data from space weather models in the right place.

Erik & Kalle: We arrived in the US on Monday night. On Tuesday, we met up with Carter and Vivian at AMNH, where we were introduced to the people at Science Bulletins and shown our new offices. The first week was mostly spent setting up OpenSpace and researching tessellation and the various map formats.