Voxelizing Light-Field Recordings

Topics: light fields, voxel-based rendering, Unity game engine, real-time rendering
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis

Light fields are an emerging image-based technique that supports free-viewpoint navigation of recorded scenes, as demanded by several recent applications (e.g., Virtual Reality). Pure image-based representations, however, quickly become inefficient, as a large number of images must be captured, stored, and processed. Geometric scene representations require less storage and are more efficient to render. Geometry reconstruction, however, is unreliable and may fail for complex scene parts. Furthermore, view-dependent effects that are preserved in light fields are lost in purely geometry-based techniques. We therefore propose a hybrid representation and rendering scheme for recorded dense light fields: we want to extract isotropic scene regions and represent them by voxels, while the remaining areas are kept as a sparse light field. Compared to dense light fields, this reduces storage demands while sustaining visual quality.

The goal of this project is to (a) develop analysis techniques that efficiently separate light-field recordings into a voxel-based and an image-based representation, (b) investigate efficient data structures for sparse light-field representations (sparse volume representations already exist), and (c) implement real-time hybrid (voxel/light-field) rendering techniques. These three sub-projects (a, b, c) can be carried out independently by different students.
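Sub-project (a), separating isotropic from view-dependent regions, could start from a simple per-point radiance-variance test across views. The following is a minimal sketch, assuming per-point view samples have already been extracted from the recording; the array layout and threshold are illustrative, not part of the proposal:

```python
import numpy as np

def separate_light_field(radiance, var_threshold=1e-3):
    """Split per-point view samples into voxel vs. light-field candidates.

    radiance: array of shape (num_points, num_views, 3) holding the RGB
    radiance each scene point reflects towards each recorded view.
    Returns a boolean mask: True where the point is (nearly) isotropic
    and can therefore be stored as a single voxel color.
    """
    # Variance of the radiance across views; isotropic (diffuse) points
    # look the same from every direction, so their variance is low.
    view_variance = radiance.var(axis=1).mean(axis=-1)
    return view_variance < var_threshold
```

Points failing the test (e.g., specular or semi-transparent regions) would remain in the sparse light-field part of the hybrid representation.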

Light Field Display

Topics: light fields, Unity game engine, real-time rendering
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis

Like holograms, light fields preserve complex light transport. In contrast to holograms, however, light-field imaging and display technology is significantly easier to realize, as it is based on geometric-optical rather than wave-optical principles. A light-field display would nevertheless replay the same 3D image that a hologram would produce, supporting all depth cues (which is not the case for simple stereo-based 3D, as with virtual-reality glasses or 3D movie theatres).

The first commercial light-field displays have been on the market since February, and ICG has obtained a first unit. It is driven by the Unity game engine on standard PCs. From a 3D scene description, a multi-view representation is rendered and interleaved into a high-resolution 2D image according to the micro-optics applied by the display. However, multi-view data generated from a 3D representation (i.e., a 3D model) does not represent a full 4D light field.
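The view interleaving described above can be illustrated with a simplified horizontal-parallax sketch; it ignores slanted lenslets, subpixel layout, and display calibration, all of which a real display driver must handle:

```python
import numpy as np

def interleave_views(views):
    """Interleave a horizontal multi-view stack for a lenticular panel.

    views: array of shape (V, H, W, 3). The output has width V*W: under
    each lenslet (here, one lenslet per input column) the V view samples
    are laid out side by side, mirroring how lenticular micro-optics
    multiplex views across display columns.
    """
    v, h, w, c = views.shape
    out = np.zeros((h, w * v, c), dtype=views.dtype)
    for i in range(v):
        # View i lands on every v-th display column, offset by i.
        out[:, i::v, :] = views[i]
    return out
```

A true 4D light-field viewer would instead resample recorded rays per display pixel, rather than interleaving a fixed set of rendered views.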

The goal of this project is to develop a true light-field viewer that renders full 4D light fields on the display.

Airborne Optical Sectioning: Visualization and Flight-Planning

Topics: drones, image-based rendering, user interfaces, light fields
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis (two projects)

Drones are becoming increasingly popular for remote sensing of landscapes in archeology, cultural heritage, forestry, and other disciplines. They are more efficient than airplanes for capturing small areas of up to several hundred square meters. LiDAR (light detection and ranging) and photogrammetry have been applied together with drones to achieve 3D reconstruction. With airborne optical sectioning (AOS), we present a radically different approach that is based on an old idea: synthetic aperture imaging. Rather than measuring, computing, and rendering 3D point clouds or triangulated 3D meshes, we apply image-based rendering for 3D visualization. In contrast to photogrammetry, AOS does not suffer from inaccurate correspondence matches and long processing times. It is cheaper than LiDAR, delivers surface color information, and has the potential to achieve high sampling resolutions. AOS samples the optical signal of wide synthetic apertures (30–100 m diameter) with unstructured video images recorded from a low-cost camera drone to support optical sectioning by image integration. The wide-aperture signal results in a shallow depth of field and consequently in a strong blur of out-of-focus occluders, while points in focus remain clearly visible. Shifting focus computationally towards the ground allows optical slicing through dense occluder structures (such as leaves, tree branches, and coniferous trees), and the discovery and inspection of concealed artifacts on the surface.
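The shift-and-add principle behind AOS can be sketched as follows. This is a minimal grayscale version: the per-image pixel offsets and image registration are assumed given, and `np.roll` stands in for proper image warping:

```python
import numpy as np

def synthetic_aperture_focus(images, offsets_px, depth_scale):
    """Shift-and-add refocusing over unstructured aerial images.

    images: list of equally sized grayscale arrays (H, W).
    offsets_px: per-image (dy, dx) camera offsets, in pixels at unit depth.
    depth_scale: focal-depth factor; scaling the shifts moves the
    computational focal plane (e.g., towards the ground).
    In the integral, out-of-focus occluders are averaged away while
    in-focus points align and stay visible.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets_px):
        shift_y = int(round(dy * depth_scale))
        shift_x = int(round(dx * depth_scale))
        # np.roll approximates the registration shift (wrap-around at
        # the borders is acceptable for a sketch).
        acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
    return acc / len(images)
```

Sweeping `depth_scale` produces the focal stack used for optical slicing through occluders.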

In this project, students will develop an image-based visualization and flight-path-planning framework for AOS. The goal is to support users with intuitive navigation through the recorded image data and with easy flight planning for capturing images with the drone.

Presentation of Application Examples for ProjectionPathExplorer

Topics: web design, visualization
Supervision: Andreas Hinterreiter, Marc Streit
Contact:  andreas.hinterreiter@jku.at, marc.streit@jku.at
Type:  BSc/MSc Seminar

In a recent publication, we introduced ProjectionPathExplorer [1], an interactive visualization of high-dimensional processes as trajectories in a low-dimensional embedding space. In the paper, we describe how this type of visualization can be applied to a wide variety of domains. So far, the online prototype contains examples from Rubik’s cube, chess, interaction data, neural networks and Go. We have recently collected a number of additional application scenarios from student projects in the Explainable AI course. We would like to make all the different examples available and easily accessible as part of the ProjectionPathExplorer paper website. This task includes consolidating the different datasets and making them individually accessible via dedicated links, as well as potentially adapting the prototype with domain-specific summary visualizations for those applications that do not yet include them. The final result should be a neatly arranged collection of links and short descriptions that showcase the generalizability of the tool and can be used for publicity and presentation purposes.

  1. Hinterreiter, Andreas, Christian Steinparz, Moritz Schöfl, Holger Stitz, and Marc Streit. 2021. “ProjectionPathExplorer: Exploring Visual Patterns in Projected Decision-Making Paths.” ACM Transactions on Interactive Intelligent Systems. Special Issue on "Interactive Visual Analytics for Making Explainable and Accountable Decisions" (To be published): 1-30. arxiv.org/abs/2001.08372. 

Automatic detection and visualization of patterns in projected paths

Topics: visualization, dimensionality reduction, (trajectory) patterns
Supervision: Andreas Hinterreiter, Marc Streit
Contact:  andreas.hinterreiter@jku.at, marc.streit@jku.at
Type:  BSc Thesis, MSc Thesis, BSc Practicum, MSc Practicum

Many real-world problems can be viewed as sequences of high-dimensional states. As an example, each color configuration of Rubik’s cube can be encoded in a vector, and solving the cube can be viewed as transitioning from one high-dimensional vector to the next. In order to better understand patterns of these paths through high-dimensional state spaces, the states can be projected to 2D using non-linear dimensionality reduction techniques such as t-SNE or UMAP. The paths can then be visualized as trajectories in the embedded space.
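The projection step above can be sketched with PCA as a dependency-free stand-in for t-SNE or UMAP; the state encoding (e.g., one vector per cube configuration) is assumed given:

```python
import numpy as np

def embed_trajectory(states, dim=2):
    """Project a sequence of high-dimensional states to `dim` dimensions.

    states: array (num_steps, num_features), one row per state along the
    path (e.g., one encoded Rubik's cube configuration per solve step).
    PCA via SVD is used here as a simple linear stand-in for t-SNE or
    UMAP; the projected rows, connected in order, form the trajectory.
    """
    centered = states - states.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T
```

In the actual prototype, the non-linear methods named above would replace this linear projection.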

We developed an interactive visualization prototype to explore such groups of projected paths (see the link below). We have already obtained interesting findings in different application areas, but we would like to expand our prototype further in two directions:

  1. Automatic detection of high- and low-dimensional patterns of points and trajectories
    The visualization system should be able to autonomously detect interesting patterns for both points and paths. It should be able to determine whether these patterns in the low-dimensional states actually relate to the “true” high-dimensional data.
  2. Automatic and effective visualization of the detected patterns
    The visualization system should automatically highlight the detected patterns and visualize them in an effective way. This will likely include novel visual encodings for similarity of high-dimensional states. It is of particular interest how suitable similarity encodings compare between different application areas.

The work will combine traditional data analysis techniques (e.g., dimensionality reduction and clustering techniques), conceptual visualization research (novel similarity encodings), and implementation of the developed techniques in the form of a web application.

ProjectionPathExplorer prototype: https://develop--projection-path-explorer.netlify.com/

Novel Github Repository Visualization

Topics: visualization, source code management, D3.js, Javascript
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz@jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

As the leading web platform for source-code management and version control, GitHub processes millions of commits every day. Especially in large repositories with many developers, it is difficult for users to keep track of changes across different branches. The current GitHub user interface and state-of-the-art visualizations are limited in terms of scalability and cannot represent changes to different file types, such as source code, tabular data, and images, over time. The aim of this project is to design and implement an integrated representation for visualizing file changes in GitHub repositories. File changes must be quantified according to file type and considered over time. The visualization should allow users to get an overview of all changes and also to drill down into changes of interest in more detail. The project should be implemented using web-based techniques such as D3.js.
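Quantifying changes per file type over time could start from a simple aggregation like the following sketch; the commit records here are hypothetical, and parsing actual repository data (e.g., from `git log --numstat`) is omitted:

```python
from collections import defaultdict

def changes_by_type(commits):
    """Aggregate per-file-type change magnitudes over a commit history.

    commits: iterable of (timestamp, filename, lines_changed) records.
    Returns {extension: [(timestamp, lines_changed), ...]} -- the time
    series a visualization could render as one timeline per file type.
    """
    series = defaultdict(list)
    for ts, name, changed in commits:
        # Use the file extension as a simple file-type proxy.
        ext = name.rsplit('.', 1)[-1] if '.' in name else 'other'
        series[ext].append((ts, changed))
    return dict(series)
```

For non-textual file types such as images, a suitable change measure (e.g., pixel difference) would replace the line count.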

Discovering Usage Patterns in Exploratory Data Analysis

Topics: visualization, big data, guidance, D3.js
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz@jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis

Exploratory data analysis allows domain experts to find new insights in large and complex data. Together with partners from Harvard University and the University of Utah, we have developed visual data-analysis tools for drug discovery, cancer genomics, and other applications. In such exploratory tools, we are able to track the full history of the exploratory data-analysis process, including all loaded datasets, the visual and analytical tools applied to the data, and all interactions performed by the user. Collecting this information results in a large and fast-growing provenance graph. To make this information accessible to the user, an existing project at the institute deals with the interactive visualization of this graph and with finding similar states within it. The goal of this project is to run meta-analyses on the provenance graph to derive recurring chains of actions and usage patterns, which can then be used to speed up the exploratory analysis and to visually guide the user through the analysis by making suggestions.
The project will be implemented as part of the Caleydo project, which uses D3.js, JavaScript, and Python as base technologies.
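Deriving recurring action chains could start from simple n-gram counting over recorded sessions. This is a minimal sketch with hypothetical action names, not a full sequential-pattern-mining implementation:

```python
from collections import Counter

def frequent_action_chains(sessions, n=3, min_count=2):
    """Mine recurring n-step action chains from tracked analysis sessions.

    sessions: list of action-name sequences, one per recorded analysis.
    Returns (chain, count) pairs, most frequent first. Frequent chains
    could later drive suggestions, e.g. "users who filtered usually
    grouped next".
    """
    counts = Counter()
    for actions in sessions:
        # Count every contiguous window of n actions.
        for i in range(len(actions) - n + 1):
            counts[tuple(actions[i:i + n])] += 1
    return [(chain, c) for chain, c in counts.most_common() if c >= min_count]
```

In the provenance graph itself, branches would additionally have to be linearized or mined as subgraph patterns.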
