Touch, Trace, Sockets, and Point Clouds

Gene: I combined single and grouped property access, using curly brackets { } to distinguish a group (see the feature/grouping branch). I met with Alex and Erik, and we came up with a good method for importing MPCDI configuration into SGCT, which will provide the capability to run on Evans & Sutherland Digistar hardware. I continued trying to get the luasocket library running correctly on Windows. It now builds successfully as part of the integrated OpenSpace build, but suffers a runtime error on Windows that is not seen on Linux or Mac. I am currently looking at using the OpenSpace download manager as a substitute.

Alex: Last week I was catching up on most of the remaining tasks before the Earth Day presentation on the 21st of April at the American Museum of Natural History. Aside from making the code more robust, this also involved a lot of on-site testing and preparing datasets. The most significant parts are in the GlobeBrowsing module and the handling of temporal datasets.

Jonathan: By tracing a ray from the cursor position on the view plane through world space and finding the closest intersection point with celestial bodies, we can find the surface coordinates on a planet. This is what I worked on last week, along with defining methods to go from a renderable's model-view coordinates back to screen space. This represents the main step toward creating a direct-touch interaction. Working in screen space, we are looking for the minimal L2 error of a non-linear least-squares problem with six degrees of freedom: the goal is to find the best-fit translation and rotation that transform the object so it follows the new screen-space coordinates. Next week I will continue implementing the Levenberg-Marquardt algorithm for this purpose.
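The first step described above, finding where the cursor's ray hits a celestial body, can be sketched with a simple ray-sphere intersection. This is a minimal illustration in NumPy, not OpenSpace's actual implementation; it assumes a spherical globe, and both function names are hypothetical:

```python
import numpy as np

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the nearest intersection point of a ray with a sphere, or None.

    Solves |origin + t*d - center|^2 = radius^2 for the smallest t >= 0,
    where d is the normalized ray direction.
    """
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)                      # half of the quadratic's linear term
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                        # ray misses the sphere
    t = -b - np.sqrt(disc)                 # nearest root
    if t < 0.0:
        t = -b + np.sqrt(disc)             # ray origin is inside the sphere
    if t < 0.0:
        return None                        # sphere entirely behind the ray
    return origin + t * d

def cartesian_to_lat_lon(point, center):
    """Convert a surface point to latitude/longitude in degrees (spherical model)."""
    p = point - center
    r = np.linalg.norm(p)
    lat = np.degrees(np.arcsin(p[2] / r))
    lon = np.degrees(np.arctan2(p[1], p[0]))
    return lat, lon
```

For example, a ray cast from (0, 0, 10) straight down the z-axis at a unit sphere centered at the origin hits the surface at (0, 0, 1), i.e. the north pole at latitude 90°.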

Rickard & Michael S: Last week we continued working on reading point clouds and constructing meshes from them. To create simplified meshes without too many vertices, we integrated the PCL library. We also started looking into reading models on multiple threads, so that we can switch between models quickly.
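The multi-threaded loading idea can be sketched as a small background-loading cache: models are read on worker threads ahead of time, so a later switch only has to pick up an already-loaded result. This is a minimal Python sketch, not the actual (C++) implementation, and `load_fn` is a hypothetical stand-in for the real model reader:

```python
from concurrent.futures import ThreadPoolExecutor

class ModelCache:
    """Load models on background threads so switching between them is fast."""

    def __init__(self, load_fn, max_workers=4):
        # load_fn is the (I/O-heavy) function that reads one model from disk.
        self._load_fn = load_fn
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._futures = {}

    def prefetch(self, path):
        # Kick off a background load; repeated calls reuse the same future.
        if path not in self._futures:
            self._futures[path] = self._executor.submit(self._load_fn, path)

    def get(self, path):
        # Returns immediately if the load already finished, blocks otherwise.
        self.prefetch(path)
        return self._futures[path].result()
```

Calling `prefetch` on the models likely to be shown next hides the disk latency; `get` then degrades gracefully to a blocking load if a model was never prefetched.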

Oskar & Michael N.: We have continued to work on improving and optimizing the FITS reader. The resulting images are now also properly colored using color tables provided to us by the CCMC. We have started testing a new approach to field line tracing: the idea is to send the volumetric field data to the GPU as a 3D texture along with the seed points and do the tracing live on the GPU, instead of calculating the vertices of each line in a pre-processing step.
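For context, the pre-processing approach mentioned above amounts to numerically integrating each seed point through the vector field to produce a polyline. A minimal CPU-side sketch using fourth-order Runge-Kutta is below (assumptions: `field` is any callable returning the field vector at a position; on the GPU the same integration would sample the 3D texture instead):

```python
import numpy as np

def trace_field_line(field, seed, step=0.1, n_steps=100):
    """Trace a field line from a seed point with fourth-order Runge-Kutta.

    field(p) returns the field vector at position p; directions are
    normalized so the step size controls the arc length per step.
    Returns the line's vertices as an (n_steps + 1, 3) array.
    """
    def direction(p):
        v = np.asarray(field(p), dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 0.0 else v     # stop advancing at null points

    vertices = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = vertices[-1]
        k1 = direction(p)
        k2 = direction(p + 0.5 * step * k1)
        k3 = direction(p + 0.5 * step * k2)
        k4 = direction(p + step * k3)
        vertices.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(vertices)
```

Doing this per line, per seed point on the CPU is what makes the pre-processing step expensive; moving the loop into a shader that samples the field texture lets all lines be traced in parallel each frame.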