Translation, Touch, and Tracing Field Lines
Rickard & Michael S.: Reading and rendering point clouds on the fly is somewhat tricky. One big problem is that the binary files from the PDS servers contain a lot of data with missing XYZ values, such as vertices at the origin (due to stereo shadow), as well as redundant data outside the region where the stereo images overlap. Since we want to do everything on the fly, the extraction and rendering of the XYZ data need to be very efficient given the number of vertices in a single model. We have run tests in Matlab to more easily understand the structure of the binary files, and because Matlab has built-in algorithms for connecting unstructured vertices. Another area that is progressing is the positioning of the models. Earlier, a preprocessing step was necessary to align the translation of a model with the HiRISE heightmap. Now the translation problem is reduced to a small height offset when the rover origin is used instead of the site origin.
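The filtering step described above can be sketched roughly as follows. This is a minimal illustration, assuming missing data is encoded as vertices at the origin and that the stereo overlap can be approximated by an axis-aligned bounding box; the function name, tuple layout, and bounds are hypothetical, not the actual PDS record format.

```python
def filter_vertices(vertices, bounds):
    """Keep only vertices with valid XYZ data inside `bounds`.

    vertices -- iterable of (x, y, z) tuples
    bounds   -- ((xmin, xmax), (ymin, ymax), (zmin, zmax))
    """
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    kept = []
    for x, y, z in vertices:
        if x == 0.0 and y == 0.0 and z == 0.0:
            continue  # missing data encoded as a vertex at the origin
        if not (xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax):
            continue  # redundant data outside the stereo overlap region
        kept.append((x, y, z))
    return kept
```

In a real on-the-fly pipeline this test would run while decoding the binary records, so invalid vertices never reach the GPU at all.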
Jonathan: The first step towards direct-touch, constraint-based interaction is to be able to trace touch positions on the view plane into world space and find intersection points. This week I implemented just that. With this feature, the user can now select a new SceneGraphNode as the focus node to orbit the camera around. I have also defined a way to describe how big different renderables are, based on the camera position and projection matrix. This will be used to switch intelligently between different interaction modes.
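The idea of tracing a touch into world space can be sketched in two steps: build a view ray from normalized device coordinates, then intersect it with a renderable's bounding sphere. This is a simplified pinhole-camera sketch under assumed conventions (camera looking down -z, symmetric frustum); OpenSpace's actual implementation works with its own camera and projection classes, and the function names here are illustrative.

```python
import math

def touch_to_ray(ndc_x, ndc_y, fov_y_rad, aspect):
    """Ray direction in camera space for a touch at (ndc_x, ndc_y) in [-1, 1]."""
    tan_half = math.tan(fov_y_rad / 2.0)
    d = (ndc_x * tan_half * aspect, ndc_y * tan_half, -1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)  # normalized direction

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the nearest intersection along the ray, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = -b - math.sqrt(disc)
    return t if t >= 0.0 else None  # reject hits behind the camera
```

Testing each candidate renderable's bounding sphere against the touch ray and picking the nearest hit gives the node to set as the new focus.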
Gene: I spent more time studying the MPCDI configuration in order to get OpenSpace running on the Evans & Sutherland “Digistar” system. I now have config data from both E&S and LiU to use as guides. I worked on problems with the auto-downloading of satellite telemetry files on Windows. I also implemented changes to the tag/grouping feature based on feedback provided, and did some more testing on it.
Michael N. & Oskar: An algorithm for scaling the SDO imagery has been implemented as a preprocessing step, and the FITS reader can now dynamically read keywords from the header, such as the exposure time and the number of bits per pixel. Some AIA images from the solar flare event have also been fetched and can be shown in different wavelengths at 4K resolution. However, this needs to be tested much more thoroughly at a larger scale. As for the field lines, morphing has been implemented by first resampling the traced field lines and then linearly interpolating between states. Some of the corner cases (where magnetic reconnection occurs) have been investigated and will, at least for now, morph much more quickly than the stable ones.
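The morphing approach described above (resample, then interpolate linearly between states) can be sketched as follows. This is an assumed, simplified version: field lines are plain lists of XYZ tuples, the resampling is by arc length, and the function names are made up for illustration.

```python
import math

def _dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def resample(line, n):
    """Resample a polyline (list of (x, y, z)) to n >= 2 evenly spaced points."""
    cum = [0.0]  # cumulative arc length at each input vertex
    for a, b in zip(line, line[1:]):
        cum.append(cum[-1] + _dist(a, b))
    total = cum[-1]
    out = []
    seg = 0
    for i in range(n):
        target = total * i / (n - 1)
        while seg < len(line) - 2 and cum[seg + 1] < target:
            seg += 1
        span = cum[seg + 1] - cum[seg]
        t = 0.0 if span == 0.0 else (target - cum[seg]) / span
        a, b = line[seg], line[seg + 1]
        out.append(tuple(p + t * (q - p) for p, q in zip(a, b)))
    return out

def morph(state_a, state_b, t, n=64):
    """Blend two field-line states at fraction t in [0, 1]."""
    ra, rb = resample(state_a, n), resample(state_b, n)
    return [tuple(p + t * (q - p) for p, q in zip(pa, pb))
            for pa, pb in zip(ra, rb)]
```

Resampling both states to the same point count is what makes the point-by-point interpolation well defined; the reconnection corner cases mentioned above need special handling precisely because the line topology changes between states.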