Optimizing and Improving

Rickard & Michael S.: This past week most of our time has been spent on optimizing the mesh generation to create models that are as small as possible (in terms of file size) while still keeping “enough” detail to distinguish individual stones a couple of centimeters in size. So far, one promising approach has been to upsample the models at an early stage to fill holes and other errors caused by stereo shadows, and also to increase the amount of detail. In a later step the models are downsampled again for fast loading and rendering. The asynchronous loading from disk to RAM has been working for a while, but now it’s time to start uploading models and textures to the GPU asynchronously to prevent the stuttering caused by synchronous uploads. This is done by mapping a buffer and then sending a pointer to that buffer into a worker thread that handles all the expensive I/O work.
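For the curious, here is a minimal sketch of that mapped-buffer hand-off, assuming an OpenGL 4.x context and loader are already set up; the function and variable names are illustrative, not the actual OpenSpace code:

```cpp
#include <glad/glad.h>  // or whichever OpenGL loader the project uses
#include <cstddef>
#include <fstream>
#include <string>
#include <thread>

std::thread fillBufferAsync(GLuint buffer, std::size_t size,
                            const std::string& modelPath)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buffer);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, nullptr, GL_STREAM_DRAW);

    // Map on the GL thread; only the raw pointer is handed to the worker.
    void* dst = glMapBufferRange(
        GL_PIXEL_UNPACK_BUFFER, 0, size,
        GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT
    );

    return std::thread([dst, size, modelPath]() {
        // All the expensive disk I/O happens off the render thread.
        std::ifstream file(modelPath, std::ios::binary);
        file.read(static_cast<char*>(dst),
                  static_cast<std::streamsize>(size));
    });
}
```

Once the worker has joined, the GL thread unmaps the buffer and issues the actual texture or vertex upload, so the GL context is only ever touched from one thread.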

Matthew Territo: I picked up some old, in-progress work again and added configuration options to control the Launcher’s debugging output. You can use the -d or -debug flag on the command line, or set a Launcher-specific LogLevel in the openspace.cfg configuration file.

Kalle Bladin: I’m finally getting some good results from the globe-browsing refactoring. I have reworked the cache and how tile textures are uploaded to graphics memory to increase performance. I will try to do some rigorous testing in the dome to compare PBO texture uploading with regular texture data transfers.

Jonathan Bosson: The multi-touch interface is starting to look good! The function computing the velocity deceleration used to be called each simulation step, i.e. each frame. This bound it to the frame rate and led to inconsistent behavior depending on the system. Now the deceleration function is instead called at a constant rate that can be defined by the user. Other bug fixes involved poor precision in direct-manipulation on huge planets, as well as mouse and touch interactions interfering with each other. As the backlog of tasks starts to shrink, I’m looking forward to this week’s feedback to get a set of fresh eyes on the interface.
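As a rough illustration of the fixed-rate idea, here is a sketch with a fixed-timestep accumulator; the names, constants, and structure are hypothetical, not the actual TouchInteraction members:

```cpp
// Fixed-timestep deceleration, decoupled from the frame rate.
struct VelocityDecay {
    double velocity = 0.0;          // current angular/linear velocity
    double friction = 0.02;         // fraction removed per decay step (illustrative)
    double stepsPerSecond = 60.0;   // user-defined constant rate
    double accumulator = 0.0;       // time not yet consumed by decay steps

    void update(double frameTimeSeconds) {
        accumulator += frameTimeSeconds;
        const double step = 1.0 / stepsPerSecond;
        // Apply the same decay per fixed step no matter how many frames
        // per second the system happens to render.
        while (accumulator >= step) {
            velocity *= (1.0 - friction);
            accumulator -= step;
        }
    }
};
```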

Oskar & Michael N: Last week we investigated different methods for optimizing the decoding of JPEG 2000 images. A GPU decoding approach using compute shaders or FBOs would make a lot of sense for our use case, since the large image is uploaded to the GPU anyway. A test version of this has been implemented in a separate branch, but the main issue right now, before any testing can be done, is to get the raw compressed byte stream from a .jp2 image (and throw away the .jp2 boxes and other metadata). The openjpeg library does not seem to support this, and it seems like a lot of work to understand the underlying structure of the .jp2 container, so this will probably be put on hold for a while. The resolution-dependent buffer will be investigated further this week after some refactoring. Now that we have received the massive data sets that will be used for the SUN-EARTH event in June, it is no longer feasible to re-trace field lines every time we run the program. Focus has therefore been on implementing functions that can read and write field line states to .json and binary files. The user can now also choose to look at a subset of the field lines where a certain quantity is within a given range.
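The range filter could look roughly like the sketch below; the FieldLine struct and function names are made up for illustration:

```cpp
#include <vector>

// Illustrative only: one traced field line with one scalar quantity
// sampled along its vertices (e.g. temperature or flow speed).
struct FieldLine {
    std::vector<float> quantity;
};

// Keep only the lines whose chosen quantity falls within [minVal, maxVal]
// somewhere along the line.
std::vector<const FieldLine*> filterByRange(const std::vector<FieldLine>& lines,
                                            float minVal, float maxVal)
{
    std::vector<const FieldLine*> result;
    for (const FieldLine& line : lines) {
        for (float q : line.quantity) {
            if (q >= minVal && q <= maxVal) {
                result.push_back(&line);
                break;  // one sample in range is enough to keep the line
            }
        }
    }
    return result;
}
```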

Klas Eskilson: CEF is now being successfully built together with OpenSpace on Windows too. A working web renderer is also in place on Windows, supporting most web features. Basically any URL can be loaded and rendered on top of OpenSpace. The next step is to add interaction callbacks and do a cleanup of the somewhat messy code.

Field lines, model loading, and CEF

Oskar & Michael N.: During the last week we’ve made more options available within the field lines GUI to prepare for the SUN-EARTH event on June 27th. For the live GPU-traced implementation, users can now switch out or update seed point files during run time. We’ve also received new data that resembles what is going to be used for the event. This data does not, however, produce the expected output, and we’re trying to locate the issue. For the solar browsing, a buffer has been created. The idea is to have one or several threads continuously perform decoding jobs by predicting which images are going to be shown. The prediction is done by looking at the OpenSpace delta time multiplied by a minimum update limit; the images that map to these times are put into the buffer. If the buffer runs empty, the PBO update is simply skipped for that frame, so nothing stops the rendering thread. Another idea is to make the resolution time-dependent: when the buffer runs empty, decoding is performed at a lower resolution until the buffer thread catches up again.
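A simplified sketch of the prediction step, under the assumption that images are keyed by simulation timestamp; all names are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Given the current simulation time, the delta time (simulation seconds per
// real second) and a minimum update interval, compute which timestamps should
// be queued for decoding. Faster playback (larger deltaTime) looks further ahead.
std::vector<double> predictTimestamps(double currentTime, double deltaTime,
                                      double minUpdateInterval,
                                      std::size_t bufferSize)
{
    std::vector<double> timestamps;
    timestamps.reserve(bufferSize);
    for (std::size_t i = 1; i <= bufferSize; ++i) {
        timestamps.push_back(currentTime + i * deltaTime * minUpdateInterval);
    }
    return timestamps;
}

// On the render thread:
//   if (decodedFrames.empty()) {
//       // Buffer ran dry: skip the PBO update this frame and keep showing
//       // the previous texture instead of stalling the render thread.
//   }
```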

Jonathan: Since last week I’ve shown the interface to a number of people and managed to gather some feedback, and parts of the interface have changed accordingly. For example, the zoom and roll interactions no longer scale with multiple fingers, along with numerous other small improvements. I’ve also developed a unit test for the LM algorithm to be able to plot the path the solver took to converge to its solution. This was useful when debugging certain issues with the algorithm, such as slow convergence or missteps in the gradient descent. One long-standing bug has thus been fixed! I’ve also moved all parameters and changing variables for the algorithm into properties, so the user can change how the interface behaves through the GUI.

Rickard & Michael S: Last week we made some big changes to the structure of the mesh generation. The texturing of the models is now done completely in shaders. This reduces the size of the models and also improves their loading times. It also allows us to use textures from other cameras or different camera positions, which means we can get color textures instead of only black and white. We’ve also looked into how we load the models: the models are now loaded as one subsite rather than one model at a time for each subsite. This also improves the loading time of the models. Our main goal right now is to improve the frame rate and the loading of the models.

Klas: A CMake setup is finally in place for the Chromium Embedded Framework (CEF). OpenSpace now compiles and bundles on macOS together with the CEF dynamic library, the CEF process helper, and OpenSpace’s own dynamic libraries. Work has begun on actually creating the embedded web browser within the WebGUI module. I’m expecting this to be something of a challenge, since CEF expects to be the main part of an application, not the other way around as in our case. We do not want CEF to be the star of our application; that should be OpenSpace itself.

Gene: I worked with Vikram on the WMS server configuration, transferring information so I can take over when he leaves. I have also gone over his terrain bump-mapping work. I’m continuing to work on the MPCDI configuration in SGCT for the E&S Digistar projection system.

A very productive week

Jonathan: Last week I improved the Windows-to-TUIO output bridge that is used on the touch tables to send touch input to OpenSpace. I also introduced a tracker for double tapping, as well as a GUI mode for touch so that the existing on-screen GUI can be interacted with through touch, which gives the touch interface full control of the OpenSpace application.

Klas: I started to investigate how CEF (Chromium Embedded Framework) may be used in OpenSpace. The idea is to be able to use web technologies to create a richer user interface. Work has mostly been focused on getting a working CMake setup up and running. So far the build is failing, but nothing more is expected at this point. It might be tricky to fit the multi-process setup that CEF requires into OpenSpace, but it should be doable.

Oskar & Michael N: The solar browsing module can now handle projective texturing, which means that we can display time-varying spacecraft imagery wrapped around the surface of a hemisphere. The FOV of the camera is also visualized, for now simply by looking at the maximum size of the images at the position of the sun and scaling linearly along the path to the spacecraft. This should preferably be improved by using the raw metadata values instead. The image plane is also now moved using a Gaussian function to gain more precision and smoother movement near the sun. Last week the CCMC staff asked for the possibility to color field lines depending on different quantities and magnitudes. This has been implemented by passing measured values along the lines to a transfer function. The user can, from within the GUI, change which variable to color by. Big parts of the code have been cleaned up in preparation for introducing the concept of dynamic seed points.
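A minimal CPU-side sketch of the transfer-function lookup (the color table, range, and names are placeholders; the actual implementation presumably samples a transfer-function texture in the shader):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b, a; };

// Normalize a measured quantity to [0, 1] and map it into a 1D color table.
Color colorForQuantity(float value, float minVal, float maxVal,
                       const std::vector<Color>& colorTable)
{
    if (maxVal <= minVal || colorTable.empty()) {
        return Color{ 1.0f, 1.0f, 1.0f, 1.0f };  // fallback: white
    }
    float t = std::clamp((value - minVal) / (maxVal - minVal), 0.0f, 1.0f);
    const std::size_t index = static_cast<std::size_t>(
        t * static_cast<float>(colorTable.size() - 1));
    return colorTable[index];
}
```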

Rickard & Michael S.: We finally have a working prototype where the user can observe all rover surface models released by NASA. The next step is to improve mesh generation and texturing of the models. Another is to improve rendering of the rover traversal path to fit HiRISE as well as possible. The first dome test will be performed within the next few weeks.

Hard work paying off

Jonathan: Since last week I’ve focused on fixing bugs and refactoring the code inside the TouchInteraction class. Overall, the two interaction modes should now be more intertwined, and movement should be less sporadic, with fewer misinterpretations of touch input (especially taps), as well as sensitivity scaling that depends on the touch screen size. I’ve also reached out to gather feedback on the current interface during the coming week so that I can make the interface more intuitive. There are still some bugs in the LM algorithm, specifically in the gradient calculations, that are yet to be squashed!

Rickard & Michael S: Last week we mainly worked on the placement of the rover terrain models, to correctly and automatically align them to the HiRISE height map. Another task we finished was to dynamically and asynchronously load and cache models with their corresponding textures. We implemented this on top of the chunk tree already present in the globe browsing module, to be able to reuse its culling. We also wrote a small script to download binaries and textures for the model generation process. The next task is to render the correct rover terrain model depending on the position of the camera.

Michael N & Oskar: Last week we implemented a wrapper around some of the features of OpenJPEG. The new format, together with asynchronous texture uploading, gives us the ability to stream from disk at resolutions between 512×512 and 1024×1024, depending on the desired frame rate. A higher resolution is still desirable, but the bottleneck right now is the decoding speed of large images using this library. As for the field lines, after a meeting with the CCMC staff responsible for the Sun-Earth event, compute shaders were put on hold in order to prioritize precomputed lines. Some refactoring and cleanup has also been done in the field line module.
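As an aside, when a lower resolution is acceptable, OpenJPEG 2.x can discard the highest-resolution wavelet levels at decode time via its cp_reduce parameter, which is one way to trade resolution for decoding speed. The sketch below only shows the general shape of such a call and is not the wrapper used in OpenSpace:

```cpp
#include <openjpeg.h>  // OpenJPEG 2.x
#include <string>

// Decode a .jp2 file at reduced resolution: cp_reduce = 1 halves the output
// dimensions, cp_reduce = 2 quarters them, and so on.
opj_image_t* decodeReduced(const std::string& path, unsigned int reduce) {
    opj_dparameters_t params;
    opj_set_default_decoder_parameters(&params);
    params.cp_reduce = reduce;

    opj_codec_t* codec = opj_create_decompress(OPJ_CODEC_JP2);
    opj_setup_decoder(codec, &params);

    opj_stream_t* stream =
        opj_stream_create_default_file_stream(path.c_str(), OPJ_TRUE);

    opj_image_t* image = nullptr;
    const bool ok = opj_read_header(stream, codec, &image) &&
                    opj_decode(codec, stream, image) &&
                    opj_end_decompress(codec, stream);
    if (!ok && image) {
        opj_image_destroy(image);
        image = nullptr;
    }
    opj_stream_destroy(stream);
    opj_destroy_codec(codec);
    return image;  // caller frees with opj_image_destroy
}
```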

Asynchronous Progress

Jonathan: This week I finished the implementation of the LM algorithm, a non-linear least-squares minimization method. Initially this solver would generate six values, q = { tx, ty, tz, rx, ry, rz }, that would translate and rotate the touched object such that the touched surface point’s projected position on the view plane stays at the same location. However, since we don’t move planets in space but instead move the camera, this required some redefinition. Instead of finding the matrix that transforms the object, I defined q to manipulate the camera directly. This gives us freedom in defining how each degree of freedom is handled, something that wouldn’t be possible to the same extent with the prior formulation. As an example, instead of having one finger translate the object in 2D, the same interaction causes the camera to orbit around the focused planet. Because of this, the difference between the direct-manipulation interaction mode and the regular one can be minimized by sharing the same interface, which eliminates potential confusion.

Kalle: Started looking at solutions for asynchronous texture uploading for globe tiles. Pixel buffer objects that are filled asynchronously will hopefully reduce some of the lag experienced when the GPU is congested.

Rickard & Michael S.: The mesh decimation is now finished and will probably be moved to run as a separate task/module. The next step is to render the asynchronously loaded surface models and textures. We have to find a way to load the models and textures to the GPU without locking the render loop. Another thing on the agenda is to rotate the models in a pre-processing step instead of doing it at runtime.

Oskar & Michael N.: The field line tracing has now been implemented on the GPU for the ENLIL model as well. However, doing the tracing within the Geometry Shader means that only a fixed number of vertices can be output per field line. This is sometimes okay, but in order to be able to have a higher resolution we’ve decided to shift our focus to using compute shaders instead. As for the spacecraft imagery, we’ve started looking into using the JPEG2000 format instead of the raw FITS data. The major advantage of using this format is that the image can be encoded in separate tiles with multiple levels of resolution. This will make it possible to efficiently decode a part of the image with a predefined level of detail and hopefully give us the ability to stream images from disk.

Gene: Tested the Lua scene downloader on all 3 platforms, with improved error handling. A pull request is next. Continuing to work on MPCDI support in SGCT for E&S Digistar.

Scene downloads and GPU uploads

Jonathan: I’ve continued to work on implementing the LM algorithm to solve the best fit for a non-linear least-squares problem that will give us the transform matrix for direct-manipulation. The implementation is mostly done, with some small tests left to run before the correct matrix can be generated for our use case; namely, defining the gradient of a function s(xi, q) that takes surface coordinates xi and applies a parameterized transform M(q) giving the closest fit to the current screen-space coordinates, as well as designating the desired interactions in low-DOF cases (fewer than three fingers down on the surface).
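Written out, the quantity being minimized is roughly the following (the exact residual definition in the code may differ):

$$
\min_{q}\; \sum_{i=1}^{N} \bigl\lVert\, s(x_i, q) - p_i \,\bigr\rVert_2^{2},
\qquad q = (t_x,\, t_y,\, t_z,\, r_x,\, r_y,\, r_z),
$$

where $x_i$ are the touched surface points, $s(x_i, q)$ applies the parameterized transform $M(q)$ and projects the result onto the view plane, and $p_i$ are the corresponding current finger positions in screen space.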

Gene: This past week I merged the properties grouping feature into the master branch. I also started on a file downloader for the Lua interpreter so that file(s) can be downloaded at the point where an individual scene is loaded. This was started for the satellites scene but expanded to use for any scene. I have a rough version working that uses the existing C++ DownloadManager. It successfully downloads a file, but still needs error handling and more testing. I spent a little time getting up to speed on the Utah WMS server for globebrowsing data, which I will be taking over next month. I am also working on incorporating the E&S Digistar import feature into SGCT.

Michael N. & Oskar: The bottleneck for streaming spacecraft imagery data in 4K resolution is still uploading to the GPU. Asynchronous texture uploading was implemented last week using Pixel Buffer Objects (PBOs). The idea is to orphan the buffer data before filling it for the next frame, so that we don’t have to wait for the previous transfer to the GPU to complete. This gives an increase of approximately 10 fps compared to a normal synchronous texture upload, but it is still not fast enough. Ideas that will be investigated this week are multi-resolution approaches, offline texture compression formats, and a multithreaded approach for filling the PBO. Temporal, real-time field line tracing has now been made possible on the GPU (for the BATS-R-US model). Smooth transitions between states are made possible by linearly interpolating the vector field data between two time steps, as opposed to the line morphing which was done before.
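A sketch of the orphaning pattern, assuming a GL context and loader are in place; the function and parameter names are illustrative:

```cpp
#include <glad/glad.h>  // or whichever OpenGL loader the project uses
#include <cstddef>
#include <cstring>

void uploadFrame(GLuint pbo, GLuint texture, const void* pixels,
                 int width, int height, std::size_t byteSize)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    // "Orphan" the old storage: the driver hands back fresh memory instead of
    // stalling until the previous frame's in-flight transfer has finished.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, byteSize, nullptr, GL_STREAM_DRAW);

    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    std::memcpy(dst, pixels, byteSize);   // could also be filled by a worker thread
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // Start the asynchronous transfer; with a PBO bound, the last argument is
    // an offset into the buffer, not a client-memory pointer.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RED, GL_UNSIGNED_BYTE, nullptr);  // format depends on the imagery
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```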

Rickard & Michael S: Last week we finalized the construction and decimation of the meshes from point clouds, and started working on texturing them. We also started integrating the models into the chunking/culling scheme already implemented in the globebrowsing module.

Touch, Trace, Sockets, and Point Clouds

Gene: I combined single and grouped property access, using curly brackets { } to distinguish a group (see the feature/grouping branch). I met with Alex and Erik and we came up with a good method for importing MPCDI configurations into SGCT, which will provide the capability to run on Evans & Sutherland Digistar hardware. I continued trying to get the luasocket library running correctly on Windows. It now builds integrated with OpenSpace successfully, but suffers a runtime error on Windows that is not seen on Linux or Mac. I am currently looking at using the OpenSpace download manager as a substitute.

Alex: Last week I was catching up on most of the remaining tasks that are left before the Earth Day presentation on the 21st of April at the American Museum of Natural History. Aside from making the code more robust, this also involved a lot of on-site testing and preparing datasets. The most significant parts are in the GlobeBrowsing module and the handling of the temporal datasets.

Jonathan: By tracing a ray from the cursor position on the view plane through world space and finding the closest intersection point with a celestial body, we can find the surface coordinates on a planet. This is what I’ve been working on last week, as well as defining methods to go from a renderable’s model-view coordinates back to screen space. This is the main step toward creating a direct-touch interaction. Working in screen space, we’re looking for the minimal L2 error of a non-linear least-squares problem with six degrees of freedom. In short, the goal is to find the best-fit translation and rotation matrix that transforms the object to follow the new screen-space coordinates. Next week I will continue to work on implementing the Levenberg-Marquardt algorithm for this purpose.
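The core of the picking step is a standard ray/sphere intersection; the sketch below approximates a planet as a sphere and is illustrative rather than the exact OpenSpace code:

```cpp
#include <cmath>
#include <optional>
#include <glm/glm.hpp>

// Closest intersection of a picking ray with a spherical body, in world space.
std::optional<glm::dvec3> intersectSphere(const glm::dvec3& rayOrigin,
                                          const glm::dvec3& rayDir,   // normalized
                                          const glm::dvec3& center,
                                          double radius)
{
    const glm::dvec3 oc = rayOrigin - center;
    const double b = glm::dot(oc, rayDir);
    const double c = glm::dot(oc, oc) - radius * radius;
    const double discriminant = b * b - c;
    if (discriminant < 0.0) {
        return std::nullopt;                       // ray misses the body
    }
    const double t = -b - std::sqrt(discriminant); // nearest hit along the ray
    if (t < 0.0) {
        return std::nullopt;                       // intersection is behind the camera
    }
    return rayOrigin + t * rayDir;                 // touched surface point
}
```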

Rickard & Michael S: Last week we continued working on reading point clouds and constructing meshes from them. To be able to create simplified meshes without too many vertices, we included the PCL library. Another area we started looking into was reading models in multiple threads, to be able to switch between models quickly.

Oskar & Michael N.: We have continued to work on improving and optimizing the FITS reader. The resulting images are now also properly colored using color tables provided to us by the CCMC. We have started testing a new approach to field line tracing. The idea is to send the volumetric field data to the GPU as a 3D texture along with the seed points and do the tracing live on the GPU (as opposed to calculating the vertices of each line in a preprocessing step).
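Uploading the volume as a 3D texture could look roughly like this, assuming an OpenGL context/loader and a contiguous array of 3-component floats; the names are illustrative:

```cpp
#include <glad/glad.h>  // or whichever OpenGL loader the project uses

// One RGB texel per grid cell holds the local field vector, so the tracing
// shader can sample the field with hardware trilinear interpolation.
GLuint createFieldVolume(const float* data, int nx, int ny, int nz) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB32F, nx, ny, nz, 0,
                 GL_RGB, GL_FLOAT, data);
    return tex;
}
```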

Translation, Touch, and Tracing Field Lines

Rickard & Michael S.: Reading and rendering point clouds on the fly is a bit tricky. One big problem is that the binary files from the PDS servers include a lot of points with missing XYZ values, such as vertices at the origin (due to stereo shadows), as well as redundant data outside the region where the stereo images overlap. Since we want to do this on the fly, the extraction of XYZ data and the rendering need to be very efficient because of the number of vertices in a single model. Tests have been made in Matlab to more easily understand the structure of the binary files, and also because Matlab has built-in algorithms for connecting unstructured vertices. Another area which is progressing is the positioning of the models. Earlier, a preprocessing step was necessary to align the translation of the model with the HiRISE heightmap. Now the translation problem is reduced to a small offset in height when the rover origin is used instead of the site origin.
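The filtering of bad vertices can be sketched as follows; the distance threshold and names are made up for illustration:

```cpp
#include <vector>
#include <glm/glm.hpp>

// Discard vertices the stereo reconstruction left at the origin (stereo
// shadow) or that lie implausibly far from the rover (outside the overlap).
std::vector<glm::dvec3> filterPoints(const std::vector<glm::dvec3>& raw,
                                     double maxDistanceFromRover)
{
    std::vector<glm::dvec3> points;
    points.reserve(raw.size());
    for (const glm::dvec3& p : raw) {
        const double dist = glm::length(p);
        if (dist == 0.0) {
            continue;   // missing XYZ value, dropped to the origin
        }
        if (dist > maxDistanceFromRover) {
            continue;   // redundant data outside the stereo overlap
        }
        points.push_back(p);
    }
    return points;
}
```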

Jonathan: The first step towards a direct-touch, constraint-based, interaction is to be able to trace the touch positions on the viewplane into the world space and find intersection points. This week I’ve implemented just that. With this feature the user can now select a new SceneGraphNode as their focus node to orbit the camera around. I’ve also defined a way to describe how big different renderables are, based on the camera position and projection matrix. This will be used in order to have intelligent swapping between different interaction modes.

Gene: I spent more time studying the MPCDI configuration in order to get OpenSpace running on the Evans & Sutherland “Digistar” system. I now have config data from both E&S and LiU to use as guides. I worked on problems with the auto-downloading of satellite telemetry files on Windows. I also implemented changes to the tag/grouping feature based on feedback provided, and did some more testing on it.

Michael N. & Oskar: An algorithm for scaling the SDO imagery has been implemented as a preprocessing step, and the FITS reader can now dynamically read keywords from the header, such as the exposure time and the number of bits per pixel. Some AIA images from the solar flare event have also been fetched and can be shown in different wavelengths in 4K resolution. However, this needs to be tested much more thoroughly on a larger scale. As for the field lines, morphing has been implemented by first resampling the traced field lines and then linearly interpolating between states. Some of the corner cases (where magnetic reconnection occurs) have been investigated and will, at least for now, morph much more quickly than the stable ones.
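Once two states have been resampled to the same vertex count, the morph itself reduces to a per-vertex linear interpolation, roughly like the sketch below (names are illustrative):

```cpp
#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

// Blend two resampled field line states; assumes stateA and stateB have the
// same number of vertices after resampling.
std::vector<glm::vec3> morphLine(const std::vector<glm::vec3>& stateA,
                                 const std::vector<glm::vec3>& stateB,
                                 float t)   // 0 = stateA, 1 = stateB
{
    std::vector<glm::vec3> line(stateA.size());
    for (std::size_t i = 0; i < stateA.size(); ++i) {
        line[i] = glm::mix(stateA[i], stateB[i], t);  // per-vertex lerp
    }
    return line;
}
```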

Implementations and Interfaces

Rickard & Michael S: Last week we worked on reading point clouds straight into OpenSpace and skipping one preprocessing step. This implementation might not work because of alignment issues that might need preprocessing, but we will look into that this week. We also fixed a bug in the rendering of the Mars Rover path. Now that is working as expected.

Emil: I’m currently working on our dome casting feature to support networked events across several sites. Since the last time dome casting was used, there have been some changes in OpenSpace’s navigation. To allow for smooth camera movements for all connected clients, I’m looking into restructuring the camera navigation and syncing to allow for an interpolation scheme that is more suitable for the new interaction mode.

Jonathan: I have been researching published papers on direct-touch interaction to find a good methodology for implementing this feature in OpenSpace, specifically screen-space interaction. Since there are some distinct differences between OpenSpace’s use case and the simple object manipulation discussed in the papers, some alterations must be made. The goal is to make the direct-touch interaction fit well with the current implementation, so that the transition between directly manipulating a planet and traversing through space feels intuitive to the user.

Michael N. & Oskar: Last week we implemented the same scaling algorithm that NASA uses to process the raw data of the images taken by SDO. We also cleaned up the fieldlines code and implemented a new class structure to prepare for fieldline morphing in-between states.

Rolling and Roving

Jonathan: This week I’ve continued to work on the interaction class for touch input. The sensitivity of the input now works better in globebrowsing, and a lot of small bugs in the camera movement have been fixed. The user can now roll the camera by circling multiple fingers on the touch surface, and zooming and rolling can now be done simultaneously.
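One way to derive a roll angle from circling fingers is to average each contact point’s change in angle around the finger centroid; the sketch below is purely illustrative (wrap-around handling omitted) and not the actual TouchInteraction code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Mean angular change of the contacts around their centroid between two
// frames; assumes prev and curr hold the same, non-empty set of fingers.
double rollAngle(const std::vector<Point>& prev, const std::vector<Point>& curr,
                 const Point& centroid)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < curr.size(); ++i) {
        const double a0 = std::atan2(prev[i].y - centroid.y, prev[i].x - centroid.x);
        const double a1 = std::atan2(curr[i].y - centroid.y, curr[i].x - centroid.x);
        sum += a1 - a0;   // angular change of this finger around the centroid
    }
    return sum / static_cast<double>(curr.size());
}
```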

Rickard and Michael S.: The focus of last week was to refactor the rover terrain model/path code and also create a script for automated download and assembly of the rover terrain models. We also implemented a more dynamic loading of the models and textures by giving a path to a folder instead of a path to a specific file.

Michael N. & Oskar: Last week we managed to get the download manager to work with the FITS reader; we can now load images from CCMC’s servers and read them directly from memory. Also, field lines can now be rendered in the correct reference frame depending on the model, using Kameleon’s tracer.

Gene: I got a new feature working for grouping, so that multiple properties can be changed with a single command. This still needs more test cases, however. I have just started on the task of getting OpenSpace to run on an Evans & Sutherland multi-projector system. We also just got a new AMD GPU that Matt and/or I will try running OpenSpace on.
