Scene downloads and GPU uploads

Jonathan: I’ve continued working on the Levenberg-Marquardt (LM) algorithm that solves the best-fit non-linear least-squares problem giving us the transform matrix for direct manipulation. The implementation is mostly done, with some small tests left to run before the correct matrix can be generated for our use case; namely, defining the gradient of a function s(xi, q) that takes surface coordinates xi and applies a parameterized transform M(q) that gives us the closest fit to the current screen-space coordinates, as well as designating the desired interactions in low-DOF cases (fewer than three fingers down on the surface).
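
The LM damping loop can be sketched in a few dozen lines. This is a toy example, not the OpenSpace implementation: it fits the two-parameter model y = a * exp(b * x) with a numeric Jacobian and solves the damped 2x2 normal equations directly, but the accept/reject structure is the same as for the six-DOF screen-space problem described above.

```cpp
#include <cmath>
#include <vector>

// Minimal Levenberg-Marquardt sketch for a two-parameter model
// y = a * exp(b * x). Numeric Jacobian, damped normal equations
// solved directly for the 2x2 case. (Illustrative, not OpenSpace code.)
struct Params { double a, b; };

static double model(const Params& p, double x) {
    return p.a * std::exp(p.b * x);
}

Params levenbergMarquardt(const std::vector<double>& xs,
                          const std::vector<double>& ys,
                          Params p, int iterations = 100) {
    double lambda = 1e-3;    // damping factor, adapted each iteration
    const double h = 1e-7;   // step for numeric derivatives
    auto cost = [&](const Params& q) {
        double c = 0.0;
        for (size_t i = 0; i < xs.size(); ++i) {
            double r = ys[i] - model(q, xs[i]);
            c += r * r;
        }
        return c;
    };
    for (int it = 0; it < iterations; ++it) {
        // Accumulate J^T J and J^T r for the 2-parameter problem
        double JtJ[2][2] = {{0, 0}, {0, 0}};
        double Jtr[2] = {0, 0};
        for (size_t i = 0; i < xs.size(); ++i) {
            double r = ys[i] - model(p, xs[i]);
            Params pa = p; pa.a += h;
            Params pb = p; pb.b += h;
            double Ja = (model(pa, xs[i]) - model(p, xs[i])) / h;
            double Jb = (model(pb, xs[i]) - model(p, xs[i])) / h;
            JtJ[0][0] += Ja * Ja; JtJ[0][1] += Ja * Jb;
            JtJ[1][0] += Jb * Ja; JtJ[1][1] += Jb * Jb;
            Jtr[0] += Ja * r;     Jtr[1] += Jb * r;
        }
        // Damped system: (J^T J + lambda * diag(J^T J)) delta = J^T r
        double A00 = JtJ[0][0] * (1.0 + lambda);
        double A11 = JtJ[1][1] * (1.0 + lambda);
        double A01 = JtJ[0][1], A10 = JtJ[1][0];
        double det = A00 * A11 - A01 * A10;
        if (std::abs(det) < 1e-15) break;
        double da = ( A11 * Jtr[0] - A01 * Jtr[1]) / det;
        double db = (-A10 * Jtr[0] + A00 * Jtr[1]) / det;
        Params trial{p.a + da, p.b + db};
        // Accept the step only if the cost improves; adjust damping
        if (cost(trial) < cost(p)) { p = trial; lambda *= 0.5; }
        else                       { lambda *= 2.0; }
    }
    return p;
}
```

When the damping is small, the step is close to Gauss-Newton; when it grows, it degrades gracefully toward gradient descent, which is what makes LM robust for the non-linear fit.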

Gene: This past week I merged the properties grouping feature into the master branch. I also started on a file downloader for the Lua interpreter so that files can be downloaded at the point where an individual scene is loaded. This was started for the satellites scene but expanded for use with any scene. I have a rough version working that uses the existing C++ DownloadManager. It successfully downloads a file, but still needs error handling and more testing. I spent a little time getting up to speed on the Utah WMS server for globebrowsing data, which I will be taking over next month. I am also working on incorporating the E&S Digistar import feature into SGCT.

Michael N. & Oskar: The bottleneck for streaming spacecraft imagery data at 4K resolution is still the upload to the GPU. Asynchronous texture uploading was implemented last week using a Pixel Buffer Object. The idea is to orphan the buffer data before filling it for the next frame, so that we don’t have to wait for the texture transfer to the GPU to complete. This gives an increase of approximately 10 fps compared to a normal synchronous texture upload, but it is still not fast enough. Ideas that will be investigated this week are multi-resolution approaches, offline texture compression formats, and a multithreaded approach for filling the PBO. Temporal, real-time field line tracing is now possible on the GPU (for the BATS-R-US model). Smooth transitions between states are achieved by linearly interpolating the vector field data between two time steps, as opposed to the line morphing that was done before.

Rickard & Michael S: Last week we finalized the construction and decimation of meshes from point clouds, and started working on putting textures on them. We also started integrating the models into the chunking/culling already implemented in the globebrowsing module.

Touch, Trace, Sockets, and Point Clouds

Gene: I combined single and grouped property access, using curly brackets { } to distinguish a group (see the feature/grouping branch). I met with Alex and Erik and we came up with a good method for importing MPCDI configuration into SGCT, which will provide the capability to run on Evans & Sutherland Digistar hardware. I continued trying to get the luasocket library running correctly on Windows. It now builds as an integrated part of OpenSpace successfully, but suffers a runtime error on Windows that is not seen on Linux or Mac. I am currently looking at using the OpenSpace download manager as a substitute.

Alex: Last week I was catching up on most of the remaining tasks that are left before the Earth Day presentation on the 21st of April at the American Museum of Natural History. Aside from making the code more robust, this also involved a lot of on-site testing and preparing datasets. The most significant parts are in the GlobeBrowsing module and the handling of the temporal datasets.

Jonathan: By tracing a ray from the cursor position on the view plane through world space and finding the closest intersection point with celestial bodies, we can find the surface coordinates on a planet. This is what I’ve been working on last week, as well as defining methods to go from a renderable’s model-view coordinates back to screen space. This represents the main step toward creating a direct-touch interaction. Working in screen space, we’re looking for the minimal L2 error of a non-linear least-squares problem with six degrees of freedom. In short, the goal is to find the best-fit solution for a translation and rotation matrix that transforms the object to follow the new screen-space coordinates. In the next week I will continue to work on implementing the Levenberg-Marquardt algorithm for this purpose.
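
The core of the picking step, once the cursor ray has been transformed into world space, is a ray-sphere intersection against a sphere approximating the body. A hedged sketch (names illustrative, not the OpenSpace API):

```cpp
#include <cmath>
#include <optional>

// Intersect a world-space ray with a sphere approximating a celestial
// body. Returns the smallest positive ray parameter t such that
// origin + t * dir lies on the sphere, or std::nullopt on a miss.
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

std::optional<double> raySphere(Vec3 origin, Vec3 dir,
                                Vec3 center, double radius) {
    Vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return std::nullopt;      // ray misses the sphere
    double sq = std::sqrt(disc);
    double t = (-b - sq) / (2.0 * a);         // near intersection
    if (t < 0.0) t = (-b + sq) / (2.0 * a);   // camera inside the sphere
    if (t < 0.0) return std::nullopt;         // sphere behind the ray
    return t;
}
```

The returned hit point can then be converted to surface coordinates on the body; a reference ellipsoid would replace the sphere in practice.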

Rickard & Michael S: Last week we continued working on reading point clouds and constructing meshes from them. To create simplified meshes without too many vertices, we included the PCL library. We also started looking into reading models on multiple threads, so that we can switch between models quickly.

Oskar & Michael N.: We have continued to work on improving and optimizing the FITS reader. The resulting images are now also properly colored using color tables provided to us by the CCMC. We have started testing a new approach to field line tracing. The idea is to send the volumetric field data to the GPU as a 3D texture along with the seed points and do the tracing live on the GPU (as opposed to calculating the vertices of each line in a preprocessing step).

Translation, Touch, and Tracing Field Lines

Rickard & Michael S.: Reading and rendering point clouds on the fly is a bit tricky. One big problem is that the binary files from the PDS servers include a lot of data with missing XYZ values, such as vertices at the origin (due to stereo shadow), as well as redundant data outside the region where the stereo images overlap. Since we want to do this on the fly, the extraction of XYZ data and the rendering need to be very efficient because of the number of vertices in one model. Tests have been made in Matlab to more easily understand the structure of the binary files, and also because Matlab has built-in algorithms for connecting unstructured vertices. Another area that is progressing is the positioning of the models. Earlier, a preprocessing step was necessary to align the translation of the model with the HiRISE heightmap. Now the translation problem is reduced to a small offset in height when the rover origin is used instead of the site origin.
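
The cleanup pass described above can be sketched as a simple filter: drop vertices whose XYZ data is missing, which in these products shows up as points at the origin (stereo shadow) or as NaNs. This is illustrative only, not the actual pipeline code:

```cpp
#include <cmath>
#include <vector>

// Filter a raw point cloud, discarding vertices with missing data:
// NaN coordinates or points sitting exactly at the origin.
struct Point { float x, y, z; };

std::vector<Point> filterValidPoints(const std::vector<Point>& raw) {
    std::vector<Point> out;
    out.reserve(raw.size());
    for (const Point& p : raw) {
        bool isNan = std::isnan(p.x) || std::isnan(p.y) || std::isnan(p.z);
        bool isOrigin = p.x == 0.f && p.y == 0.f && p.z == 0.f;
        if (!isNan && !isOrigin) {
            out.push_back(p);
        }
    }
    return out;
}
```

A single linear pass like this keeps the on-the-fly cost proportional to the raw vertex count, which matters given the size of one model.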

Jonathan: The first step toward a direct-touch, constraint-based interaction is being able to trace the touch positions on the view plane into world space and find intersection points. This week I’ve implemented just that. With this feature the user can now select a new SceneGraphNode as their focus node to orbit the camera around. I’ve also defined a way to describe how large different renderables appear, based on the camera position and projection matrix. This will be used to swap intelligently between different interaction modes.

Gene: I spent more time studying the MPCDI configuration in order to get OpenSpace running on the Evans & Sutherland “Digistar” system. I now have config data from both E&S and LiU to use as guides. I worked on problems with the auto-downloading of satellite telemetry files on Windows. I also implemented changes to the tag/grouping feature based on feedback provided, and did some more testing on it.

Michael N. & Oskar: An algorithm for scaling the SDO imagery has been implemented as a preprocessing step, and the FITS reader can now dynamically read keywords from the header, such as the exposure time and the number of bits per pixel. Some AIA images from the solar flare event have also been fetched and can be shown in different wavelengths at 4K resolution. However, this needs to be tested much more thoroughly on a larger scale. As for the field lines, morphing has been implemented by first resampling the traced field lines and then linearly interpolating between states. Some of the corner cases (where magnetic reconnection occurs) have been investigated and will, at least for now, morph much more quickly than the stable ones.

Implementations and Interfaces

Rickard & Michael S: Last week we worked on reading point clouds straight into OpenSpace, skipping one preprocessing step. This implementation may not work because of alignment issues that could still require preprocessing, but we will look into that this week. We also fixed a bug in the rendering of the Mars rover path; that is now working as expected.

Emil: I’m currently working on our dome casting feature to support networked events across several sites. Since the last time dome casting was used, there have been some changes in OpenSpace’s navigation. To allow for smooth camera movements for all connected clients, I’m looking into restructuring the camera navigation and syncing to allow for an interpolation scheme that is better suited to the new interaction mode.

Jonathan: I have been researching published papers on direct-touch interaction to find a good methodology for implementing this feature in OpenSpace, specifically screen-space interaction. Since there are some distinct differences between OpenSpace’s use case and the simple object manipulation discussed in the papers, some alterations must be made. The goal is to interface the direct-touch interaction well with the current implementation, so that the transition between directly manipulating a planet and traversing through space feels intuitive to the user.

Michael N. & Oskar: Last week we implemented the same scaling algorithm that NASA uses to process the raw data of the images taken by SDO. We also cleaned up the fieldlines code and implemented a new class structure to prepare for fieldline morphing in-between states.

Rolling and Roving

Jonathan: This week I’ve continued to work on the interaction class for touch input. The sensitivity of the input now works better in globebrowsing, and a lot of small bugs in the camera movement have been fixed. The user can now roll the camera by circling multiple fingers on the touch surface, and zooming and rolling can now be done simultaneously.
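
One way to detect the circling gesture, sketched below with hypothetical names (not the actual TouchInteraction code): track each finger’s angle about the centroid of all contact points, and use the average change in angle between frames as the roll input.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Average per-finger angular delta about the contact centroid.
// prev and curr hold the same fingers in the same order.
struct Touch { double x, y; };

double rollAngle(const std::vector<Touch>& prev,
                 const std::vector<Touch>& curr) {
    const double kPi = std::acos(-1.0);
    double cx = 0.0, cy = 0.0;
    for (const Touch& t : curr) { cx += t.x; cy += t.y; }
    cx /= curr.size();
    cy /= curr.size();
    double sum = 0.0;
    for (std::size_t i = 0; i < curr.size(); ++i) {
        double a0 = std::atan2(prev[i].y - cy, prev[i].x - cx);
        double a1 = std::atan2(curr[i].y - cy, curr[i].x - cx);
        double d = a1 - a0;
        // Wrap to (-pi, pi] so a finger crossing the -x axis
        // does not produce a huge jump.
        while (d > kPi)   d -= 2.0 * kPi;
        while (d <= -kPi) d += 2.0 * kPi;
        sum += d;
    }
    return sum / curr.size();   // average angular delta = roll input
}
```

Because roll is derived from angles about the centroid while zoom is derived from the change in finger spread, the two signals are largely independent, which is what allows simultaneous zooming and rolling.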

Rickard and Michael S.: The focus of last week was to refactor the rover terrain model/path code and also create a script for automated download and assembly of the rover terrain models. We also implemented a more dynamic loading of the models and textures by giving a path to a folder instead of a path to a specific file.

Michael N. & Oskar: Last week we managed to get the download manager to work with the FITS reader; we can now load images from CCMC’s servers and read them directly from memory. Also, field lines can now be rendered in the correct reference frame depending on the model, using Kameleon’s tracer.

Gene: I got a new feature working for grouping, so that multiple properties can be changed with a single command. This still needs more test cases, however. I just got started on the task of getting OpenSpace to run on an Evans & Sutherland multi-projector system. We just got a new AMD GPU that Matt and/or I will try running OpenSpace on.

CCfits (and) starts and GDAL isolation

Rickard & Michael S.: Last week we mainly focused on the rendering of rover-derived models. As of now we can render them, but without a correct placement. We’ve only started looking into the placement of the models. We’ve also received new, more accurate data for rendering the rover path on top of the reference ellipsoid, and implemented it in OpenSpace.

Michael N. & Oskar: Last week we integrated the CCfits library into OpenSpace as a separate module. The module will be used to read and process the high-resolution imagery in the FITS format that the spacecraft provide. We’ve also started adding Kameleon’s field line tracer to OpenSpace in order to support more of CCMC’s data models, and managed to render some of the data from ENLIL. CCMC also provided us with the SPICE kernel for SDO, and we can now render its trail/orbit in the correct location.

Jonathan: Since last week I’ve implemented a TouchInteraction class that handles and interprets the input. Zooming, rotation, and panning now work by having the touch event add a velocity; the interaction class calculates the new camera position and orientation by incrementing with velocity times dt. The velocities all have some friction/inertia to eventually stop the interaction and make the movement smooth. In the coming week I will continue to work on how the touch input is best interpreted, as well as adding rolling to the multi-touch gesture list.

Kalle: I have worked on a memory-aware tile cache that can be dynamically resized, to limit the amount of memory used for caching tiles. I have also isolated GDAL in the globebrowsing module so that GDAL is no longer a required dependency. If OpenSpace is not compiled with GDAL, a simple tile reader is used instead. The simple tile reader currently only supports the common image formats used by the texture loader, and the textures currently need widths and heights that are powers of two.
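
A memory-aware cache like this can be sketched as an LRU structure with a byte budget: entries are kept in recency order, and eviction runs until the total size fits the budget. Key and size bookkeeping are simplified here; this is not the actual globebrowsing code:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>

// LRU tile cache bounded by a byte budget rather than an entry count.
class TileCache {
public:
    explicit TileCache(std::size_t budgetBytes) : _budget(budgetBytes) {}

    void put(int key, std::size_t sizeBytes) {
        erase(key);                         // replace any existing entry
        _lru.push_front({key, sizeBytes});
        _index[key] = _lru.begin();
        _used += sizeBytes;
        // Evict least recently used entries until we fit the budget
        while (_used > _budget && !_lru.empty()) {
            Entry& victim = _lru.back();
            _used -= victim.size;
            _index.erase(victim.key);
            _lru.pop_back();
        }
    }

    bool contains(int key) {
        auto it = _index.find(key);
        if (it == _index.end()) return false;
        _lru.splice(_lru.begin(), _lru, it->second);   // mark as used
        return true;
    }

    std::size_t usedBytes() const { return _used; }

private:
    struct Entry { int key; std::size_t size; };
    void erase(int key) {
        auto it = _index.find(key);
        if (it == _index.end()) return;
        _used -= it->second->size;
        _lru.erase(it->second);
        _index.erase(it);
    }
    std::size_t _budget;
    std::size_t _used = 0;
    std::list<Entry> _lru;
    std::unordered_map<int, std::list<Entry>::iterator> _index;
};
```

Dynamic resizing falls out naturally: changing the budget and running the same eviction loop shrinks the cache to the new limit.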

Gene: The frequent crashing of the Jenkins Mac node seems to be fixed now, but I’m watching it long-term to make sure it stays that way. I fixed a build problem with satellite TLE downloading with luasocket. I can now get the luasocket library to build as a part of building OpenSpace, rather than having to build it separately. I’m working on a feature that allows scenegraph nodes to be tagged and then operated on (e.g. renderable commands) as a group.

Field Lines and Rover Views

Gene: I got the ‘luasocket’ library working for automatically downloading the latest TLE files for satellite telemetry. Testing of this feature on all three platforms is needed. I created a short demo video of the satellites scene that I’ll share in order to solicit feedback.

Oskar & Michael N.: We’ve started implementing time varying field lines which include a visualization of the direction of each field line. Currently it’s all being pre-processed and we are waiting for a larger dataset to be able to improve it. We’ve also started implementing a class for projecting photographs taken from spacecraft onto an image plane within their field of view. The image plane is movable within the frustum of the camera that took the image.

Matt: I’m hunting down some build bugs that are preventing me from getting the latest develop branch updates. I’m concerned they might be a larger Heisenbug issue than just my machine configurations.

Rickard & Michael S.: We’ve mainly worked on two different functionalities regarding the rover terrain. One is being able to view rover-derived panoramas (much like Google Street View) by projecting the image onto a sphere and placing the camera at the center of the sphere. The other is visualizing the path of the rover on top of HiRISE height maps. There is still some work on both parts to make them look really good, but the base functionality is there.

Jonathan: Since last week I’ve looked through and tested different ideas on how to store the input more efficiently. The pinch-to-zoom feature now works pretty well after resolving a bug that was introduced by not checking whether we got any new input since the last frame. In the coming week I will focus on an interaction class for the touch input and on doing rotations with spherical coordinates rather than angles. There’s also an error in the module’s CMake files that I will look at to resolve failed builds on Linux and macOS.

Eric: I started a script to be run on the Jenkins server after a build to pack up binaries and some data (either a minimal subset or a larger collection) for distribution to users. I just used a shell script to create a tarball and/or zip file, but Alex has pointed me to CPack, a cousin of CMake, which could make it much easier to do this on all platforms, so I’ll be investigating that next.

Kalle: I solved a couple of issues related to tile rendering for globes. A bug previously caused some textures which did not have the number of bytes per row divisible by four to get corrupted once uploaded to the GPU. I also started working on a memory aware tile cache to prevent memory overflow when allocating too many textures for tiles.

Alex visits North Carolina

Alex: This week, I was invited to the North Carolina Museum of Natural Sciences, one of our Informal Science Institution partners, to install OpenSpace on their systems. It was a full success; OpenSpace is now installed in the science panorama and visible to the general public in the Astrolab. On the code side of things, I made the specification of SGCT configuration files a bit simpler by enabling the use of Lua scripts to generate SGCT configuration files on the fly, thus making a resolution change a matter of changing arguments.

Kalle: I just got back into OpenSpace development. I have looked at the current issues and started planning and figuring out where I will be needed. I will probably continue some of the work on the globe browsing feature that Erik and I did not have time to finish during our thesis work.

Gene: This week I have been debugging problems with the FreeImage library on Linux, and working on building the Lua socket library into OpenSpace’s Lua interpreter.

Michael N. & Oskar: We’ve been discussing the details and priorities of the Solstice event in June with the people at CCMC. We managed to render the trails of the SDO and STEREO satellites, and have been digging into some of the already existing work related to field line tracing. We will start with the already supported BATS-R-US model and get it to work with time-varying data before examining other models related to coronal magnetic fields, such as PFSS.

Jonathan: I’ve managed to implement some crude multi-touch interaction with zooming and changing the orientation of the camera. The methods used will need to be iterated on to feel more intuitive; my next step is to have the touch callbacks manipulate the view with velocities and inertia rather than absolute positions.

The New Crew Gets to Work

Alex: This week has been mostly focused on the supporting backend. First and foremost, we now have a Python script that checks the files in the repository for consistency (kind of like baby’s first linter), which will hopefully aid in detecting some bugs in the future. The second big part is the addition of callback functions in the main loops to enable a greater degree of customization from modules. This will help in removing the module dependency inside libOpenspace, an important step toward transitioning to dynamic libraries. Finally, I added code to make the capabilities of the system queryable from Lua scripts. This will be very useful for customizing scene files, for example enabling the loading of different texture resolutions depending on the capabilities of the hardware.
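
The main-loop callback mechanism can be sketched as follows; the class and hook names here are hypothetical, not the OpenSpace API. Modules register std::function callbacks, and the engine invokes them at fixed points each frame:

```cpp
#include <functional>
#include <vector>

// Engine with customization hooks: modules register callbacks that run
// at fixed points in the frame, without the engine knowing the modules.
class Engine {
public:
    void addPreSyncCallback(std::function<void()> cb) {
        _preSync.push_back(std::move(cb));
    }
    void addRenderCallback(std::function<void()> cb) {
        _render.push_back(std::move(cb));
    }

    void frame() {
        for (auto& cb : _preSync) cb();   // module hooks before sync
        for (auto& cb : _render)  cb();   // module hooks during render
    }

private:
    std::vector<std::function<void()>> _preSync;
    std::vector<std::function<void()>> _render;
};
```

Inverting the dependency this way (modules call into the engine to register, instead of the engine calling modules directly) is what enables moving modules into dynamic libraries later.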

Gene: I got the satellite branch working with TLE “batch” files downloaded from celestrak.com. After a lot of testing, I found and fixed a bug in the satellite telemetry code that affected position accuracy. I have continued trying to get a working GL texture library on Linux: DevIL doesn’t work, FreeImage doesn’t fully work, and SOIL loads all textures but needs image correction. I’ve also worked on getting the cURL module incorporated into OpenSpace’s built-in Lua interpreter.

Matt: I started implementing our configuration system for Launcher specific properties this week.

Eric: This week I made some small changes to the Launcher synchronization controls. Only the ‘default’ scene is selected by default, so the user must check the boxes to download more than the minimum. The checked boxes are green, while unchecked boxes are grey (not red). In the future we might include some status information in this window, but for now it’s just to select what gets downloaded. Matt and I are going to tackle another problem with the sync function: not downloading files that have already been downloaded. I’ve also started thinking about how to pack up binaries for distribution, and of course this will be an automated process.

The next group of Masters students are now getting to work on their projects, and we’ll be hearing much from them in the future about their plans and progress. Here is the first round:

Michael N. & Oskar: Last week we arrived at CCMC, where we will work on visualisations associated with space weather. This will, amongst other things, involve expanding the visualisations of Earth’s magnetic field. During our first week we were introduced to the team at CCMC and sat in on several meetings and seminars. We also got everything up and running and continued looking at the code base and some example data provided by the CCMC. Michael also debugged the image loading issues on Linux. It seems that SOIL cannot load progressive JPEGs (baseline is preferred). The latest FreeImage requires JPEG library version 80, whereas libjpeg-turbo (a dependency in SGCT) seems to use version 62.

Jonathan: I arrived in Salt Lake City at the SCI Institute three weeks ago and have since been working on developing a multi-touch interface. I’ve implemented a new touch module that makes OpenSpace aware of TUIO, a cross-platform touch protocol for tangible multi-touch surfaces. My main focus now is to continue developing touch gestures to interact with the application through zooming and rotating.

Michael S. & Rickard: We arrived at AMNH in New York about three weeks ago. We will be working on extending the current globebrowsing module with a close-up interaction mode for models created from Mars rover imagery. During our first weeks here we have familiarized ourselves with the codebase of OpenSpace and the globebrowsing module. For now, we are doing pre-studies regarding model rendering and looking into alignment issues between the models and the HiRISE heightmaps.

Getting Down to Work

Emil: Lately, I have been focusing on developing the open source app C-Troll (https://github.com/c-toolbox/C-Troll) that will make it easier to launch and control applications such as OpenSpace in clustered environments. The application is split into three tiers: a “core” application runs on a master node and sends out commands to the slave nodes. Slaves run a “tray” application that receives commands, launches and quits applications as subprocesses, and collects log messages to send back to the core. The third tier is a web application (which can, for example, run on a tablet) connected to the core application, which can be used to view log messages.

Alex: After a one-month coding hiatus due to moving to a new city, I could finally return and spend my first week fixing various issues. I started cleaning up the globebrowsing branch to make it look and feel more like the rest of the code base. In addition, we now have a script that checks some of the style guidelines for all files, to maintain a consistent look for the include guards. On the non-coding side, I presented OpenSpace to an interested group at the MIT Media Lab, as well as at my new home, the Center for Data Science at NYU.

Matt: After a first semester focused on coursework, I’m excited to be back on the project! I’m starting to dig into the multi-resolution volume rendering work done by Emil and Tomas so we can expand it, and I’m trying to improve the Launcher’s syncing process with more controls and validation.

Gene: I’m currently working on the automatic download of satellite telemetry (TLE) files from celestrak.com. This will allow users to run with the latest satellite data. I also spent a little time troubleshooting a problem with the FreeImage library on Linux. This is what reads textures for planet surfaces and other things. The SOIL library works on Linux, but has an image-flipping problem that FreeImage corrected. I also worked a bit with the VR port of OpenSpace. I created separate SGCT cfg files for the HTC Vive and Oculus Rift, since they appear to use slightly different render target resolutions. The OpenVR wiki page was updated with this info.
