OpenSpace is open source interactive data visualization software designed to visualize the entire known universe and portray our ongoing efforts to investigate the cosmos.

OpenSpace brings the latest techniques from data visualization research to the general public. It supports interactive presentation of dynamic data from observations, simulations, and space mission planning and operations. OpenSpace runs on multiple operating systems, with an extensible architecture powering high-resolution tiled displays and planetarium domes, and makes use of the latest graphics card technologies for rapid data throughput. In addition, OpenSpace enables simultaneous connections across the globe, creating opportunities for shared experiences among audiences worldwide.

Scene downloads and GPU uploads

Jonathan: I’ve continued to work on implementing the Levenberg-Marquardt (LM) algorithm to solve the non-linear least-squares problem whose best-fit solution gives us the transform matrix for direct manipulation. The implementation is mostly done, with some small tests left to run before the correct matrix can be generated for our use case; namely, defining the gradient of a function s(xi, q) that takes surface coordinates xi and applies a parameterized transform M(q) giving the closest fit to the current screen-space coordinates, as well as designating the desired interactions in low-DOF cases (fewer than three fingers down on the surface).
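
For reference, a sketch of the standard LM formulation in this notation (the points p_i, the current screen-space touch positions, are our naming for clarity; the exact damping strategy used in the code may differ):

```latex
% Fit the parameter vector q so each transformed surface point
% s(x_i, q) lands on its current screen-space touch point p_i:
\min_{q} \; E(q) = \sum_{i} \lVert s(x_i, q) - p_i \rVert^{2}

% One damped LM step, where J is the Jacobian of the residuals
% r_i = s(x_i, q) - p_i and \lambda is the damping parameter:
\left( J^{\top} J + \lambda \, \operatorname{diag}(J^{\top} J) \right) \delta = -J^{\top} r,
\qquad q \leftarrow q + \delta
```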

Gene: This past week I merged the properties grouping feature into the master branch. I also started on a file downloader for the Lua interpreter so that file(s) can be downloaded at the point where an individual scene is loaded. This was started for the satellites scene but expanded to use for any scene. I have a rough version working that uses the existing C++ DownloadManager. It successfully downloads a file, but still needs error handling and more testing. I spent a little time getting up to speed on the Utah WMS server for globebrowsing data, which I will be taking over next month. I am also working on incorporating the E&S Digistar import feature into SGCT.
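
For flavor, a minimal blocking downloader of the kind the Lua hook needs, written directly against libcurl rather than OpenSpace's DownloadManager (the helper name and error handling here are illustrative, not the actual implementation):

```cpp
#include <curl/curl.h>
#include <cstdio>
#include <string>

// Write callback: stream the response body straight to a FILE*.
static size_t writeToFile(char* ptr, size_t size, size_t nmemb, void* userdata) {
    return std::fwrite(ptr, size, nmemb, static_cast<FILE*>(userdata));
}

// Illustrative helper; returns false (leaving a partial file) on failure.
bool downloadFile(const std::string& url, const std::string& destination) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;

    FILE* out = std::fopen(destination.c_str(), "wb");
    if (!out) { curl_easy_cleanup(curl); return false; }

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToFile);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); // treat HTTP >= 400 as an error

    CURLcode res = curl_easy_perform(curl);

    std::fclose(out);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}
```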

Michael N. & Oskar: The bottleneck for streaming spacecraft imagery data at 4K resolution is still the upload to the GPU. Asynchronous texture uploading was implemented last week using a Pixel Buffer Object (PBO). The idea is to orphan the buffer data before filling it for the next frame, so that we don’t have to wait for the previous transfer to the GPU to complete. This gives an increase of approximately 10 fps compared to a normal synchronous texture upload, but it is still not fast enough. Ideas to be investigated this week are multi-resolution approaches, offline texture compression formats, and a multithreaded approach to filling the PBO. Temporal, real-time field line tracing has now been made possible on the GPU (for the BATS-R-US model). Smooth transitions between states are achieved by linearly interpolating the vector field data between two time steps, as opposed to the line morphing done before.
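
A minimal sketch of the orphaning pattern described above, in plain OpenGL (the texture is assumed to be already allocated, and the format and types are placeholders):

```cpp
#include <GL/glew.h>
#include <cstring>

// Upload one frame of pixel data through a pixel buffer object,
// orphaning the PBO storage first so the driver can hand us fresh
// memory instead of stalling on the in-flight transfer.
void uploadFrameAsync(GLuint pbo, GLuint texture, const void* pixels,
                      GLsizei width, GLsizei height, GLsizeiptr byteSize) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    // Orphan: re-specify the data store with a null pointer. The old
    // storage stays alive until the pending texture transfer finishes.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, byteSize, nullptr, GL_STREAM_DRAW);

    void* mapped = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (mapped) {
        std::memcpy(mapped, pixels, byteSize);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    // With a PBO bound, the last argument is a byte offset into the
    // buffer rather than a client memory pointer; the copy into the
    // texture happens asynchronously on the GPU.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(0));

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```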

Rickard & Michael S: Last week we finalized the construction and decimation of meshes from point clouds, and started texturing them. We also began integrating the models into the chunking/culling scheme already implemented in the globebrowsing module.

Touch, Trace, Sockets, and Point Clouds

Gene: I combined single and grouped property access, using curly brackets { } to distinguish a group (see the feature/grouping branch). I met with Alex and Erik and we came up with a good method for importing MPCDI configurations into SGCT, which will provide the capability to run on Evans & Sutherland Digistar hardware. I continued trying to get the luasocket library running correctly on Windows. It now builds integrated with OpenSpace successfully, but suffers a runtime error on Windows that is not seen on Linux or Mac. I am currently looking at using the OpenSpace download manager as a substitute.

Alex: Last week I caught up on most of the remaining tasks before the Earth Day presentation on April 21st at the American Museum of Natural History. Aside from making the code more robust, this also involved a lot of on-site testing and dataset preparation. The most significant parts are in the GlobeBrowsing module and the handling of temporal datasets.

Jonathan: By tracing a ray from the cursor position on the view plane through world space and finding the closest intersection point with celestial bodies, we can find the surface coordinates on a planet. This is what I worked on last week, along with defining methods to go from a renderable’s model-view coordinates back to screen space. This is the main step toward creating a direct-touch interaction. Working in screen space, we are looking for the minimal L2 error of a non-linear least-squares problem with six degrees of freedom. In short, the goal is to find the best-fit translation and rotation matrix that transforms the object to follow the new screen-space coordinates. Next week I will continue working on implementing the Levenberg-Marquardt algorithm for this purpose.
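
A sketch of the cursor-to-surface trace under the simplifying assumption that the body is a sphere (GLM for the math; OpenSpace's actual renderables are more general than this):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cmath>
#include <optional>

// Unproject the cursor to a world-space ray, then intersect it with a
// sphere of radius `radius` centered at `center`. Returns the nearest
// intersection point in front of the camera, if any.
std::optional<glm::dvec3> cursorToSurface(
    const glm::dvec2& cursorPx, const glm::dvec4& viewport,
    const glm::dmat4& view, const glm::dmat4& projection,
    const glm::dvec3& center, double radius)
{
    // Points on the near and far plane under the cursor (note the
    // flipped y: window coordinates have their origin at the bottom).
    glm::dvec3 winNear(cursorPx.x, viewport.w - cursorPx.y, 0.0);
    glm::dvec3 winFar (cursorPx.x, viewport.w - cursorPx.y, 1.0);
    glm::dvec3 origin = glm::unProject(winNear, view, projection, viewport);
    glm::dvec3 target = glm::unProject(winFar,  view, projection, viewport);
    glm::dvec3 dir = glm::normalize(target - origin);

    // Standard quadratic for a ray-sphere intersection (a = 1 since
    // dir is normalized).
    glm::dvec3 oc = origin - center;
    double b = glm::dot(oc, dir);
    double c = glm::dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0.0) return std::nullopt;   // ray misses the sphere

    double t = -b - std::sqrt(disc);       // nearest root
    if (t < 0.0) return std::nullopt;      // sphere is behind the camera
    return origin + t * dir;
}
```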

Rickard & Michael S: Last week we continued working on reading point clouds and constructing meshes from them. To be able to create simplified meshes without too many vertices, we included the PCL library. Another area we started looking into is reading models on multiple threads, so that we can switch quickly between models.
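
As an illustration of the kind of simplification PCL enables, here is a voxel-grid downsampling pass that caps vertex density before meshing (the leaf size is a placeholder, and this is one possible approach rather than necessarily the one we will ship):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// Downsample a point cloud so that at most one point survives per
// 5 cm voxel; meshing the result yields far fewer triangles.
pcl::PointCloud<pcl::PointXYZ>::Ptr downsample(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(
        new pcl::PointCloud<pcl::PointXYZ>);

    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.05f, 0.05f, 0.05f); // voxel edge length in meters
    grid.filter(*filtered);
    return filtered;
}
```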

Oskar & Michael N.: We have continued to work on improving and optimizing the FITS reader. The resulting images are now also properly colored, using color tables provided to us by the CCMC. We have started testing a new approach to field line tracing: the idea is to send the volumetric field data to the GPU as a 3D texture along with the seed points and do the tracing live on the GPU (as opposed to calculating the vertices of each line in a preprocessing step).
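
The tracing itself is just repeated integration through the sampled field. A CPU-side sketch of the fixed-step variant the shader mirrors (sampleField stands in for the 3D-texture lookup and is hypothetical; the real integrator may be higher-order than Euler):

```cpp
#include <glm/glm.hpp>
#include <vector>

// Placeholder for the trilinear 3D-texture lookup done in the shader.
glm::vec3 sampleField(const glm::vec3& p);

// Trace one field line from a seed point with fixed-step Euler
// integration; each iteration steps along the local field direction.
std::vector<glm::vec3> traceFieldLine(glm::vec3 seed, float stepSize,
                                      int maxSteps) {
    std::vector<glm::vec3> line;
    line.reserve(maxSteps + 1);
    line.push_back(seed);

    glm::vec3 p = seed;
    for (int i = 0; i < maxSteps; ++i) {
        glm::vec3 v = sampleField(p);
        if (glm::length(v) < 1e-8f) break; // field vanishes; stop tracing
        p += stepSize * glm::normalize(v);
        line.push_back(p);
    }
    return line;
}
```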

Translation, Touch, and Tracing Field Lines

Rickard & Michael S.: Reading and rendering point clouds on the fly is a bit tricky. One big problem is that the binary files from the PDS servers include a lot of data with missing XYZ values, such as vertices at the origin (due to stereo shadow), as well as redundant data outside the region where the stereo images overlap. Since we want to do this on the fly, the extraction of XYZ data and the rendering need to be very efficient because of the number of vertices in one model. Tests have been made in Matlab to more easily understand the structure of the binary files, and also because Matlab has built-in algorithms for connecting unstructured vertices. Another area that is progressing is the positioning of the models. Earlier, a preprocessing step was necessary to align the translation of the model with the HiRISE heightmap. Now the translation problem is reduced to a small offset in height when the rover origin is used instead of the site origin.
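
A sketch of the kind of scrub pass this implies: dropping the degenerate origin points (stereo shadow) and the out-of-overlap points before any triangulation (the thresholds are placeholders, not values from the actual code):

```cpp
#include <glm/glm.hpp>
#include <vector>

// Remove degenerate vertices before meshing: points collapsed to the
// origin (stereo shadow) and points beyond the usable overlap radius.
std::vector<glm::vec3> scrubVertices(const std::vector<glm::vec3>& raw,
                                     float maxRadius) {
    std::vector<glm::vec3> kept;
    kept.reserve(raw.size());
    for (const glm::vec3& v : raw) {
        float d = glm::length(v);
        if (d < 1e-6f)     continue; // missing-data sentinel at the origin
        if (d > maxRadius) continue; // outside the stereo overlap
        kept.push_back(v);
    }
    return kept;
}
```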

Jonathan: The first step towards a direct-touch, constraint-based interaction is to be able to trace the touch positions on the view plane into world space and find intersection points. This week I implemented just that. With this feature the user can now select a new SceneGraphNode as the focus node to orbit the camera around. I have also defined a way to describe how big different renderables appear, based on the camera position and projection matrix. This will be used to swap intelligently between different interaction modes.
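
One way such a size measure can look; this is only an illustration of a possible mode-switch heuristic, not the metric actually implemented:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Rough screen coverage of a renderable's bounding sphere: the angle
// it subtends from the camera, divided by the vertical field of view.
// Coverage near 1 suggests direct manipulation; near 0, space travel.
double screenCoverage(const glm::dvec3& cameraPos, const glm::dvec3& center,
                      double boundingRadius, double verticalFovRadians) {
    double distance = glm::distance(cameraPos, center);
    if (distance <= boundingRadius) return 1.0; // camera inside the sphere
    double subtended = 2.0 * std::asin(boundingRadius / distance);
    return subtended / verticalFovRadians;
}
```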

Gene: I spent more time studying the MPCDI configuration in order to get OpenSpace running on the Evans & Sutherland “Digistar” system. I now have config data from both E&S and LiU to use as guides. I worked on problems with the auto-downloading of satellite telemetry files on Windows. I also implemented changes to the tag/grouping feature based on feedback provided, and did some more testing on it.

Michael N. & Oskar: An algorithm for scaling the SDO imagery has been implemented as a preprocessing step, and the FITS reader can now dynamically read keywords from the header, such as the exposure time and the number of bits per pixel. Some AIA images from the solar flare event have also been fetched and can be shown in different wavelengths at 4K resolution. However, this needs to be tested much more thoroughly at a larger scale. As for the field lines, morphing has been implemented by first resampling the traced field lines and then linearly interpolating between states. Some of the corner cases (where magnetic reconnection occurs) have been investigated and will, at least for now, morph much more quickly than the stable ones.
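
Conceptually the morph is simple once both states are resampled to the same vertex count; a sketch (uniform resampling is assumed to have happened already, so the two lines match point for point):

```cpp
#include <glm/glm.hpp>
#include <vector>

// Blend two traced field lines that have been resampled to the same
// number of vertices; t = 0 gives stateA, t = 1 gives stateB.
std::vector<glm::vec3> morphLine(const std::vector<glm::vec3>& stateA,
                                 const std::vector<glm::vec3>& stateB,
                                 float t) {
    std::vector<glm::vec3> out(stateA.size());
    for (size_t i = 0; i < stateA.size(); ++i) {
        out[i] = glm::mix(stateA[i], stateB[i], t); // per-vertex lerp
    }
    return out;
}
```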

Implementations and Interfaces

Rickard & Michael S: Last week we worked on reading point clouds straight into OpenSpace, skipping one preprocessing step. This approach may not hold up because of alignment issues that could still require preprocessing, but we will look into that this week. We also fixed a bug in the rendering of the Mars Rover path, which now works as expected.

Emil: I’m currently working on our dome casting feature to support networked events across several sites. Since the last time dome casting was used, there have been some changes in OpenSpace’s navigation. To allow smooth camera movements for all connected clients, I’m looking into restructuring the camera navigation and syncing to allow for an interpolation scheme better suited to the new interaction mode.
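
A sketch of the kind of interpolation this restructuring is after: each client receives timestamped camera poses and blends between the two most recent, lerping positions and slerping rotations (the pose struct is illustrative, not OpenSpace's actual sync format):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Illustrative synced camera pose as it might arrive over the network.
struct CameraPose {
    glm::dvec3 position;
    glm::dquat rotation;
};

// Blend between the two most recent poses; t in [0, 1] is derived from
// the pose timestamps and the local render time.
CameraPose interpolatePose(const CameraPose& a, const CameraPose& b,
                           double t) {
    CameraPose out;
    out.position = glm::mix(a.position, b.position, t);   // linear in position
    out.rotation = glm::slerp(a.rotation, b.rotation, t); // spherical in rotation
    return out;
}
```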

Jonathan: I have been researching published papers on direct-touch interaction, specifically screen-space interaction, to find a good methodology for implementing this feature in OpenSpace. Since there are some distinct differences between OpenSpace’s use case and the simple object manipulation discussed in the papers, some alterations must be made. The goal is for the direct-touch interaction to interface well with the current implementation, so that the transition between directly manipulating a planet and traversing through space feels intuitive.

Michael N. & Oskar: Last week we implemented the same scaling algorithm that NASA uses to process the raw image data from SDO. We also cleaned up the fieldlines code and implemented a new class structure to prepare for morphing field lines between states.

OpenSpace NYC: Cassini & Messenger Buildathon

See Your Visualizations of NASA Missions on the Planetarium Dome!

Calling all 3D Artists, Graphics Programmers & Software Developers, Astronomers & Astrophysicists: Would you like to see your own interactive space simulation running on the Hayden Planetarium dome?

Come join our OpenSpace “Buildathon” and be among the first to join the OpenSpace creator community!

The Buildathon will take place at the American Museum of Natural History in New York City on October 29, 2016. For more information visit https://openspacenyc.splashthat.com/

OSIRIS-REx Launch Event at AMNH

Today, NASA launched the OSIRIS-REx mission to obtain a sample of the asteroid Bennu and return it to Earth for further study. Scientists chose to sample Bennu, a primitive, carbon-rich near-Earth object, due to its potentially hazardous orbital path and informative composition.

On Monday, September 12, join Carter Emmart at the American Museum of Natural History for an OpenSpace-built OSIRIS-REx after-hours public program, in which the OSIRIS-REx mission’s projected trajectory and potential sampling locations will be visualized on the AMNH Hayden Planetarium dome.


New Horizons’ Media Responses

Breakfast at Pluto Event at AMNH LeFrak Theatre


Our event was a great success, with much media attention throughout the world. If you have a news article covering our event, please let us know! First and foremost, the whole event took place in a Google Hangout that is available online on YouTube.

Pre-event information: American Museum of Natural History

News articles:

International: Engadget, Space.com, Gizmodo, SpaceFlight Insider

Swedish: Linköping University Press Release, Norrköpings Tidningar, Corren

Prerelease for New Horizons’ Closest Approach

In honor of the closest approach of the New Horizons spacecraft to Pluto, we prepared another pre-alpha version of OpenSpace in binary form. You can find all information about this in the Download section (or by following this link).

Using this pre-alpha version, we are organizing a global event connecting planetariums around the world to celebrate this unique, once-in-a-lifetime experience.

Prerelease for Pluto-Palooza at the AMNH

To coincide with the Pluto-Palooza at the AMNH, we are releasing a pre-alpha version of OpenSpace in binary form for Windows and Mac platforms. All information for this release can be found here.

Space.com was present at the Pluto-Palooza in New York, and some of the OpenSpace footage is shown alongside the great explanations from the mission scientists:

IMERSA demo

This Saturday we will hold an unofficial demo session at the IMERSA conference in Boulder, CO. The demo will take place in the Fiske Planetarium after the official events, hosted by Carter Emmart and driven by Miroslav Andel.