OpenSpace is open source interactive data visualization software designed to visualize the entire known universe and portray our ongoing efforts to investigate the cosmos.

OpenSpace brings the latest techniques from data visualization research to the general public. It supports interactive presentation of dynamic data from observations, simulations, and space mission planning and operations. OpenSpace works on multiple operating systems, with an extensible architecture powering high-resolution tiled displays and planetarium domes, and makes use of the latest graphics card technologies for rapid data throughput. In addition, OpenSpace enables simultaneous connections across the globe, creating opportunities for shared experiences among audiences worldwide.

Direct-manipulation in OpenSpace

For many people, touch is a direct and intuitive way to control a computer interface. It is especially powerful when the user does not have to map a set of controls to different interactions; instead, the manipulation behaves as physically expected. Direct manipulation aims to do just that, which in effect removes the user interface.

The method developed and used in OpenSpace is a screen-space formulation. Each frame, the contact points touching the surface of a celestial body are ray traced, and the geographical surface coordinates they hit are saved. The camera transform then aims to move and orient itself such that it minimizes the distance, in screen space, between the current frame’s contact points and the projection of the previous frame’s surface coordinates. This is done with a non-linear least-squares minimization algorithm. In effect, a geographical location is locked to each of the user’s contact points, and the camera moves such that the location follows the finger. The solver is unconstrained, which means that adding more contact points simply introduces more degrees of freedom (up to six) to be controlled. A sketch of the minimized objective follows the list below.

  • One contact point gives the user control over two degrees of freedom, which are taken to be the orbit X and Y angles around the focus.
  • Two contact points give the user additional control over the distance to, and rotation around, the focus point.
  • Three or more contact points give the user control over all six degrees of freedom, with the last two being panning angles in X and Y.
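
The following is a minimal sketch of the objective being minimized, using a toy pinhole projection rather than OpenSpace’s actual camera model; all types and helpers here are illustrative.

```cpp
#include <cstdio>
#include <vector>

struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

// Toy camera: looks down -z from pos, with focal length f.
struct Camera {
    Vec3 pos;
    double f = 1.0;
};

// Project a world point to screen space (simple pinhole, for illustration).
Vec2 projectToScreen(const Camera& cam, const Vec3& p) {
    const double zc = cam.pos.z - p.z; // depth along the view axis
    return { cam.f * (p.x - cam.pos.x) / zc,
             cam.f * (p.y - cam.pos.y) / zc };
}

// One contact: the finger's current screen position and the world-space
// surface point it grabbed (via ray tracing) in a previous frame.
struct Contact {
    Vec2 finger;
    Vec3 lockedSurfacePoint;
};

// Sum of squared screen-space distances between each finger and its locked
// surface point. A non-linear least-squares solver searches the camera's
// free parameters (up to six degrees of freedom) for the pose that
// minimizes this value.
double screenSpaceError(const Camera& cam, const std::vector<Contact>& cs) {
    double err = 0.0;
    for (const Contact& c : cs) {
        const Vec2 p = projectToScreen(cam, c.lockedSurfacePoint);
        const double dx = p.x - c.finger.x;
        const double dy = p.y - c.finger.y;
        err += dx * dx + dy * dy;
    }
    return err;
}

int main() {
    const Camera cam{ {0.0, 0.0, 10.0} };
    const std::vector<Contact> contacts = { { {0.1, 0.0}, {0.0, 0.0, 0.0} } };
    std::printf("screen-space error: %f\n", screenSpaceError(cam, contacts));
}
```

Because the solver is unconstrained, the same objective applies regardless of how many contacts are active; additional contacts simply constrain more of the camera’s six parameters.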

Optimizing and Improving

Rickard & Michael S.: This past week most of our time has been spent on optimizing the mesh generation to create models that are as small as possible (in file size) while still keeping “enough” detail to distinguish individual stones a couple of centimeters in size. So far, one promising approach has been to upsample the models at an early stage to fill holes and other errors caused by stereo shadows, and also to increase the amount of detail; at a later step the models are downsampled again for effective loading times and rendering. The asynchronous loading from disk to RAM has been working for a while, but now it’s time to start uploading models and textures to the GPU asynchronously as well, to prevent the stuttering caused by synchronous uploads. This is done by mapping a buffer and then sending a pointer to that buffer into a worker thread that handles all the expensive I/O work.
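
A minimal sketch of that handoff, assuming a current OpenGL context and a function loader such as glad; readModelFromDisk is a hypothetical placeholder for the expensive I/O work. All GL calls stay on the render thread; only the filling of the mapped pointer happens on the worker.

```cpp
#include <glad/glad.h> // any modern GL loader; a current context is assumed
#include <cstddef>
#include <future>

void readModelFromDisk(void* dst, std::size_t size); // hypothetical I/O work

std::future<void> beginAsyncUpload(GLuint pbo, std::size_t size) {
    // Map the buffer on the GL thread...
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, nullptr, GL_STREAM_DRAW);
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    // ...then hand only the raw pointer to a worker thread, which may
    // legally write into the mapped memory while rendering continues.
    return std::async(std::launch::async,
        [dst, size]() { readModelFromDisk(dst, size); });
}

// Later, once the future is ready, back on the GL thread:
void finishUpload(GLuint pbo, GLuint tex, int w, int h) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, tex);
    // With a PBO bound, the data argument is a byte offset into the buffer.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```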

Matthew Territo: I picked up some old, in-progress work again and added configuration options to control the Launcher’s debugging output. You can use the -d or -debug flag on the command line, or set a Launcher-specific LogLevel in the openspace.cfg configuration file.

Kalle Bladin: I’m finally getting some good results from the globe-browsing refactoring. I have reworked the cache and how tile textures are uploaded to graphics memory to increase performance. I will try to do some rigorous testing in the dome to compare PBO texture uploading with regular texture data transfers.
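
For illustration, the two upload paths being compared look roughly like the sketch below; it assumes a current GL context and loader and is not OpenSpace’s actual code.

```cpp
#include <glad/glad.h> // any modern GL loader; a current context is assumed

// Path A: direct upload from client memory. The driver may stall the
// render thread while it copies the data out.
void uploadDirect(GLuint tex, int w, int h, const unsigned char* pixels) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

// Path B: upload via a pixel buffer object. The copy into the PBO returns
// quickly, and the buffer-to-texture transfer can be scheduled by the
// driver without blocking rendering.
void uploadViaPbo(GLuint tex, GLuint pbo, int w, int h,
                  const unsigned char* pixels) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, w * h * 4, pixels);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // offset into PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```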

Jonathan Bosson: The multi-touch interface is starting to look good! The function computing the velocity deceleration used to be called each simulation step, i.e. each frame. This bound it to the frame rate and led to inconsistent behavior across systems. Instead, the deceleration function is now called at a constant rate that can be defined by the user. Other bug fixes addressed bad precision in direct manipulation on huge planets, as well as mouse and touch interactions interfering with each other. As the backlog of tasks starts to shrink, I’m looking forward to this week’s feedback to get a set of fresh eyes on the interface.
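
A minimal sketch of such frame-rate-independent deceleration, using a standard fixed-timestep accumulator; the 60 Hz default and the simple friction model are illustrative assumptions, not the interface’s actual values.

```cpp
struct TouchVelocity {
    double orbitX = 0.0;
    double orbitY = 0.0;
};

class Decelerator {
public:
    explicit Decelerator(double stepsPerSecond = 60.0)
        : _dt(1.0 / stepsPerSecond) {}

    // Called once per frame with the real frame time; applies friction at
    // a constant rate regardless of how fast frames arrive.
    void update(TouchVelocity& v, double frameTime) {
        _accumulated += frameTime;
        while (_accumulated >= _dt) {
            v.orbitX *= _friction;
            v.orbitY *= _friction;
            _accumulated -= _dt;
        }
    }

private:
    double _dt;
    double _accumulated = 0.0;
    double _friction = 0.95; // decay per fixed step, user-tunable
};
```

With this structure the number of decay steps depends only on elapsed time, so fast and slow systems decelerate identically.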

Oskar & Michael N.: Last week we investigated different methods for optimizing the decoding of JPEG 2000 images. A GPU decoding approach using compute shaders or FBOs would make a lot of sense for our use case, since the large image is uploaded to the GPU anyway. A test version of this has been implemented in a separate branch, but the main obstacle, before any testing can be done, is extracting the raw compressed byte stream from a .jp2 image (and throwing away the .jp2 boxes and other metadata). The openjpeg library does not seem to support this, and it seems like a lot of work to understand the underlying structure of the .jp2 container, so this will probably be put on hold for a while. The resolution-dependent buffer will be investigated further this week after some refactoring. Now that we have received the massive data sets that will be used for the SUN-EARTH event in June, it is no longer feasible to re-trace field lines every time we run the program. Focus has therefore been on implementing functions that can read and write field line states to .json and binary files. The user can now also choose to look at a subset of the field lines where a certain quantity is within a given range.
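
The range filter might look roughly like the following sketch; FieldLine and its members are illustrative, not the module’s actual types, and whether every sample must be in range is an assumption here.

```cpp
#include <vector>

struct FieldLine {
    std::vector<float> quantity; // one sampled value per vertex, e.g. |B|
    // ... vertex positions etc.
};

// Keep only the field lines whose chosen quantity stays within
// [minVal, maxVal] along the whole line.
std::vector<const FieldLine*> filterByRange(const std::vector<FieldLine>& lines,
                                            float minVal, float maxVal) {
    std::vector<const FieldLine*> visible;
    for (const FieldLine& line : lines) {
        bool inRange = true;
        for (float q : line.quantity) {
            if (q < minVal || q > maxVal) {
                inRange = false;
                break;
            }
        }
        if (inRange) {
            visible.push_back(&line);
        }
    }
    return visible;
}
```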

Klas Eskilson: CEF is now being successfully built together with OpenSpace on Windows, and a working web renderer is in place there as well, supporting most web features. Basically any URL can be loaded and rendered on top of OpenSpace. The next step is to add interaction callbacks and clean up the somewhat messy code.

Field lines, model loading, and CEF

Oskar & Michael N.: During the last week we’ve made more options available within the field lines GUI, in preparation for the SUN-EARTH event on June 27th. For the live GPU-traced implementation, users can now switch out or update seed point files at run time. We’ve also received new data that resembles what will be used for the event; these data are, however, not producing the expected output, and we’re trying to locate the issue. For the solar browsing, a buffer has been created. The idea is to have one or several threads continuously performing decoding jobs by predicting which images are going to be shown. The prediction is done by looking at the OpenSpace delta time multiplied by a minimum update limit; the images that map to these times are put into the buffer. If the buffer runs empty, the PBO update is simply skipped for that frame, so nothing stops the rendering thread. Another idea is to make the resolution time-dependent: when the buffer runs empty, decoding is performed at a lower resolution until the buffer thread catches up again.
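
A sketch of that prediction step; the names, and the assumption that the image sequence has a fixed cadence, are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Map a simulation time (seconds) to an image index, assuming an evenly
// spaced sequence with a fixed cadence (seconds between images).
std::size_t imageIndexForTime(double time, double cadence) {
    return static_cast<std::size_t>(time / cadence);
}

// Predict the indices of the next images to decode: step forward from the
// current time by the OpenSpace delta time multiplied by the minimum
// update limit, and queue the images those times map to.
std::vector<std::size_t> predictDecodeJobs(double currentTime,
                                           double deltaTime,
                                           double minUpdateLimit,
                                           double cadence,
                                           int lookahead) {
    std::vector<std::size_t> jobs;
    const double step = deltaTime * minUpdateLimit;
    for (int i = 1; i <= lookahead; ++i) {
        jobs.push_back(imageIndexForTime(currentTime + i * step, cadence));
    }
    return jobs;
}
```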

Jonathan: Since last week I’ve shown the interface to a number of people and managed to gather some feedback, and parts of the interface have changed accordingly. For example, zoom and roll interactions no longer scale with the number of fingers, along with numerous other small improvements. I’ve also developed a unit test for the LM (Levenberg-Marquardt) algorithm that plots the path the solver takes to converge to its solution. This was useful when debugging certain issues with the algorithm, such as slow convergence or missteps in the descent, and one long-standing bug has thus been fixed! I’ve also moved all parameters and tunable variables of the algorithm into properties, so the user can change how the interface behaves through the GUI.

Rickard & Michael S.: Last week we made some big changes to the structure of the mesh generation. The texturing of the models is now done completely in shaders, which reduces the size of the models and also improves their loading times. It also allows us to use textures from other cameras or different camera positions, which means we can get colored textures instead of only black and white. We’ve also looked into how we load the models: they are now loaded as one subsite rather than as individual models per subsite, which will further improve loading times. Our main goal right now is to improve the frame rate and the loading of the models.

Klas: A CMake setup is finally in place for the Chromium Embedded Framework (CEF). On macOS, OpenSpace now compiles and bundles into an application together with the CEF dynamic library, the CEF process helper, and OpenSpace’s own dynamic libraries. Work has begun on actually creating the embedded web browser within the WebGUI module. I’m expecting this to be something of a challenge, since CEF expects to be the main part of an application, not the other way around as in our case. We do not want CEF to be the star of our application; that should be OpenSpace itself.

Gene: I worked with Vikram on WMS server configuration, transferring knowledge so I can take over when he leaves. I have also gone over his terrain bump-mapping work. I’m continuing to work on the MPCDI configuration in SGCT for the E&S Digistar projection system.

A very productive week

Jonathan: Last week I improved the Windows-to-TUIO bridge that is used on the touch tables to send touch input to OpenSpace. I also introduced a tracker for double taps (a sketch of which follows below), as well as a touch GUI mode so that the existing on-screen GUI can be operated through touch, giving the touch interface full control of the OpenSpace application.
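
A double-tap tracker of this kind can be as small as the following sketch; the thresholds are illustrative assumptions, not the actual values used.

```cpp
#include <chrono>
#include <cmath>

class DoubleTapTracker {
public:
    // Returns true if this tap completes a double tap: it must land close
    // enough, soon enough, after the previous tap.
    bool registerTap(double x, double y) {
        using clock = std::chrono::steady_clock;
        const auto now = clock::now();
        const double dt =
            std::chrono::duration<double>(now - _lastTap).count();
        const double dist = std::hypot(x - _lastX, y - _lastY);
        const bool isDouble = dt < _maxDelay && dist < _maxDistance;
        _lastTap = now;
        _lastX = x;
        _lastY = y;
        return isDouble;
    }

private:
    std::chrono::steady_clock::time_point _lastTap{};
    double _lastX = 0.0, _lastY = 0.0;
    double _maxDelay = 0.3;     // seconds between taps
    double _maxDistance = 0.05; // normalized screen units
};
```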

Klas: I started to investigate how CEF (Chromium Embedded Framework) may be used in OpenSpace. The idea is to use web technologies to create a richer user interface. Work has mostly been focused on getting a working CMake setup up and running. So far the build is failing, but nothing more is expected at this point. It might be tricky to fit the multi-process setup that CEF requires into OpenSpace, but it should be doable.

Oskar & Michael N.: The solar browsing module can now handle projective texturing, which means that we can display time-varying spacecraft imagery wrapped around the surface of a hemisphere. The FOV of the camera is also visualized, simply by taking the maximum size of the images at the position of the Sun and scaling linearly along the path to the spacecraft; this should preferably be improved by using the raw values from the metadata. The image plane is now moved using a Gaussian function to gain more precision and smoother movement near the Sun. Last week the CCMC staff asked for the possibility to color field lines by different quantities and magnitudes. This has been implemented by passing the measured values along the lines to a transfer function (a sketch of the lookup follows below); the user can change which variable to color by from within the GUI. Big parts of the code have been cleaned up in preparation for introducing the concept of dynamic seed points.
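
Conceptually, the per-value lookup works like the following sketch; in the actual module the lookup happens on the GPU against a transfer function texture, and the names and table here are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// Normalize a measured value into [0, 1] over the chosen quantity's range
// (minVal < maxVal assumed) and use it to index the transfer function's
// color table.
RGBA transferFunction(float value, float minVal, float maxVal,
                      const std::vector<RGBA>& table) {
    float t = (value - minVal) / (maxVal - minVal);
    t = std::clamp(t, 0.0f, 1.0f);
    const std::size_t i =
        static_cast<std::size_t>(t * static_cast<float>(table.size() - 1));
    return table[i];
}
```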

Rickard & Michael S.: We finally have a working prototype where the user can observe all rover surface models released by NASA. The next step is to improve the mesh generation and texturing of the models. Another is to improve the rendering of the rover’s traversal path so that it fits the HiRISE terrain as well as possible. The first dome test will be performed within the next few weeks.

OpenSpace NYC: Cassini & Messenger Buildathon

See Your Visualizations of NASA Missions on the Planetarium Dome!

Calling all 3D Artists, Graphics Programmers & Software Developers, Astronomers & Astrophysicists: Would you like to see your own interactive space simulation running on the Hayden Planetarium dome?

Come join our OpenSpace “Buildathon” and be among the first to join the OpenSpace creator community!

The Buildathon will take place at the American Museum of Natural History in New York City on October 29, 2016. For more information visit https://openspacenyc.splashthat.com/

OSIRIS-REx Launch Event at AMNH

Today, NASA launched the OSIRIS-REx mission to obtain a sample of the asteroid Bennu and return it to Earth for further study. Scientists chose to sample Bennu, a primitive, carbon-rich near-Earth object, due to its potentially hazardous orbital path and informative composition.

On Monday, Sept 12, join Carter Emmart at the American Museum of Natural History for an OpenSpace-built OSIRIS-REx after-hours public program in which the OSIRIS-REx mission’s projected trajectory and potential sampling locations will be visualized on the AMNH Hayden Planetarium dome.


New Horizons’ Media Responses

Breakfast at Pluto Event at AMNH LeFrak Theatre

Our event was a great success with much media attention throughout the world. If you have a news article covering our event, please let us know! First and foremost, the whole event took place in a Google Hangout that is available online: YouTube

Pre-event information: American Museum of Natural History

News articles:
  • International: Engadget, Space.com, Gizmodo, SpaceFlight Insider
  • Swedish: Linköping University Press Release, Norrköpings Tidningar, Corren

Prerelease for New Horizons’ Closest Approach

In honor of the closest approach of the New Horizons spacecraft to Pluto, we have prepared another pre-alpha version of OpenSpace in binary form. You can find all the information about this in the Download section (or by following this link).

Using this pre-alpha version, we are organizing a global event connecting many planetariums around the world to celebrate this unique, once-in-a-lifetime experience.

Prerelease for Pluto-Palooza at the AMNH

To coincide with the Pluto Palooza at the AMNH, we are releasing a pre-alpha version of OpenSpace in binary form for Windows and Mac platforms. All information for this release is found here.

Space.com was present at the Pluto-Palooza in New York, and some of the OpenSpace footage is shown alongside the great explanations from the mission scientists.

IMERSA demo

This Saturday we will have an unofficial demo session at the IMERSA conference in Boulder, CO. The demo will be given in the Fiske Planetarium after the official events, hosted by Carter Emmart and driven by Miroslav Andel.