Asynchronous Progress

Jonathan: This week I finished the implementation of the LM (Levenberg–Marquardt) algorithm, a non-linear least-squares minimization method. Initially the solver generated six values, q = { tx, ty, tz, rx, ry, rz }, that would translate and rotate the touched object so that the touched surface point’s projected position on the view plane stayed at the same location. However, since we don’t move planets in space but instead move the camera, this required some redefinition. Instead of finding the matrix that transforms the object, I defined q to manipulate the camera directly. This gives us freedom in deciding how each degree of freedom is handled, something that wouldn’t be possible to the same extent with the prior formulation. For example, instead of having one finger translate the object in 2D, the same interaction now causes the camera to orbit around the focused planet. As a result, the difference between the direct-manipulation interaction mode and the regular one can be minimized by sharing the same interface, which eliminates potential confusion.
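The core LM loop is small. Below is a minimal sketch of the idea (not the actual OpenSpace code): solve the damped normal equations for a step in q, accept the step if the residual shrinks, and adapt the damping factor. The toy residual stands in for the real one, which would be the screen-space distance between the touch point and the surface point’s projection.

```python
import numpy as np

def levenberg_marquardt(residual, q0, iters=50, lam=1e-3):
    """Minimal LM loop: solve (J^T J + lam*I) dq = -J^T r, adapt lam."""
    q = np.asarray(q0, dtype=float)

    def jacobian(q, eps=1e-6):
        # Forward-difference Jacobian of the residual vector.
        r0 = residual(q)
        J = np.empty((r0.size, q.size))
        for i in range(q.size):
            dq = np.zeros_like(q)
            dq[i] = eps
            J[:, i] = (residual(q + dq) - r0) / eps
        return J, r0

    for _ in range(iters):
        J, r = jacobian(q)
        A = J.T @ J + lam * np.eye(q.size)
        step = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residual(q + step)) < np.linalg.norm(r):
            q = q + step
            lam *= 0.5   # good step: trust the Gauss-Newton direction more
        else:
            lam *= 10.0  # bad step: fall back toward gradient descent
    return q

# Toy residual standing in for the screen-space error of the touched point;
# the real q is { tx, ty, tz, rx, ry, rz } applied to the camera.
target = np.array([1.0, -2.0, 0.5, 0.1, 0.2, -0.3])
res = lambda q: q - target
q = levenberg_marquardt(res, np.zeros(6))
```

The damping factor lam is what distinguishes LM from plain Gauss-Newton: large lam behaves like cautious gradient descent, small lam like a full Newton step.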

Kalle: Started looking at solutions for asynchronous texture uploading of globe tiles. Pixel buffer objects, filled asynchronously, should hopefully reduce the lag experienced when the GPU is congested.
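The pattern behind PBO streaming is double buffering: while the GPU reads one staging buffer, the CPU fills the other, so neither side waits on the same memory. A minimal CPU-side simulation of that idea (buffer count and sizes are illustrative, not the actual tile code):

```python
from threading import Thread
from queue import Queue

# Two staging buffers stand in for two pixel buffer objects: the producer
# fills one while the consumer "uploads" from the other.
NUM_BUFFERS = 2

def produce(tiles, free, ready):
    for tile in tiles:
        buf = free.get()   # grab an unused staging buffer
        buf[:] = tile      # the memcpy into the mapped buffer
        ready.put(buf)
    ready.put(None)        # sentinel: no more tiles

free, ready = Queue(), Queue()
for _ in range(NUM_BUFFERS):
    free.put(bytearray(4))

tiles = [bytes([i] * 4) for i in range(8)]
uploaded = []
Thread(target=produce, args=(tiles, free, ready), daemon=True).start()
while (buf := ready.get()) is not None:
    uploaded.append(bytes(buf))  # the texture upload from the buffer
    free.put(buf)                # buffer becomes reusable
```

In the GL version, the "upload" step is a glTexSubImage2D sourcing from the bound pixel unpack buffer, which returns without waiting for the CPU copy.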

Rickard & Michael S.: The mesh decimation is now finished and will probably be moved to run as a separate task/module. The next step is to render the asynchronously loaded surface models and textures. We have to find a way to load the models and textures to the GPU without locking the render loop. Another thing on the agenda is to rotate the models in a pre-processing step instead of doing it at runtime.
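One common shape for "load without locking the render loop" is a loader thread that does the slow disk and decode work and hands finished assets to the render loop through a queue; the render loop drains at most a few per frame so a burst of finished loads cannot cause a frame spike. A sketch under those assumptions (load_asset and the names are illustrative, not the actual module API):

```python
import queue
import threading
import time

finished = queue.Queue()

def load_asset(name):
    time.sleep(0.001)  # stands in for disk I/O + decode + decimation
    return (name, f"mesh:{name}")

def loader(names):
    # Runs on a worker thread; never touches the GPU.
    for n in names:
        finished.put(load_asset(n))

names = [f"model{i}" for i in range(10)]
threading.Thread(target=loader, args=(names,), daemon=True).start()

uploaded = {}
MAX_UPLOADS_PER_FRAME = 2  # budget the GPU uploads per frame
frames = 0
while len(uploaded) < len(names):
    for _ in range(MAX_UPLOADS_PER_FRAME):
        try:
            name, mesh = finished.get_nowait()  # never blocks the frame
        except queue.Empty:
            break
        uploaded[name] = mesh  # the actual GPU upload step
    frames += 1                # render the frame with whatever is ready
```

The per-frame budget is the key knob: the render loop stays responsive even if many models finish loading in the same frame.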

Oskar & Michael N.: The field line tracing has now been implemented on the GPU for the ENLIL model as well. However, doing the tracing within the Geometry Shader means that only a fixed number of vertices can be output per field line. This is sometimes okay, but in order to be able to have a higher resolution we’ve decided to shift our focus to using compute shaders instead. As for the spacecraft imagery, we’ve started looking into using the JPEG2000 format instead of the raw FITS data. The major advantage of using this format is that the image can be encoded in separate tiles with multiple levels of resolution. This will make it possible to efficiently decode a part of the image with a predefined level of detail and hopefully give us the ability to stream images from disk.
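The payoff of a tiled, multi-resolution encoding is that showing a region at reduced detail only touches the tiles that region overlaps, at that resolution level. A small sketch of the tile arithmetic (tile size and levels are illustrative; each level halves the image in both dimensions):

```python
TILE = 256

def tiles_for_region(x0, y0, x1, y1, level):
    """Tile indices covering [x0, x1) x [y0, y1) (full-res pixels) at `level`."""
    scale = 1 << level                   # level 0 = full resolution
    sx0, sy0 = x0 // scale, y0 // scale  # region in level-k pixel coords
    sx1 = (x1 + scale - 1) // scale
    sy1 = (y1 + scale - 1) // scale
    return [(tx, ty)
            for ty in range(sy0 // TILE, (sy1 + TILE - 1) // TILE)
            for tx in range(sx0 // TILE, (sx1 + TILE - 1) // TILE)]

# A 1024x1024 crop of a large image: 16 tiles at full resolution,
# but a single tile two levels down.
full = tiles_for_region(0, 0, 1024, 1024, level=0)
coarse = tiles_for_region(0, 0, 1024, 1024, level=2)
```

This is what makes streaming from disk plausible: a coarse preview needs only a handful of tiles, and detail is fetched tile by tile as the view zooms in.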

Gene: Tested the Lua scene downloader on all three platforms, with improved error handling. A pull request is next. Continuing to work on MPCDI support in SGCT for E&S Digistar support.