Voxelizing Light-Field Recordings

Topics: light fields, voxel-based rendering, Unity game engine, real-time rendering
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis

Light fields are an emerging image-based technique that supports free-viewpoint navigation of recorded scenes, as demanded by several recent applications (e.g., Virtual Reality). Pure image-based representations, however, quickly become inefficient, as a large number of images must be captured, stored, and processed. Geometric scene representations require less storage and are more efficient to render. Geometry reconstruction, however, is unreliable and might fail for complex scene parts. Furthermore, view-dependent effects that are preserved with light fields are lost in purely geometry-based techniques. We therefore propose a hybrid representation and rendering scheme for recorded dense light fields: we want to extract isotropic scene regions and represent them by voxels, while the remaining areas are represented as a sparse light field. In comparison to dense light fields, storage demands are reduced while visual quality is sustained.
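
As a rough illustration of the separation idea, the following minimal sketch (assumptions: the light field is stored as a numpy array of views already warped so that each scene point falls on the same pixel in every view, and the variance threshold is a hypothetical tuning parameter) classifies a point as isotropic when its radiance barely changes across views:

```python
import numpy as np

def split_isotropic(light_field, threshold=1e-3):
    """Separate a dense light field into voxel candidates and a residual.

    light_field: float array of shape (S, T, H, W, 3) -- an S x T grid of
                 views, assumed pre-warped (e.g., via a depth estimate) so
                 each scene point projects to the same (row, col) everywhere.
    threshold:   maximum angular radiance variance for a point to count
                 as isotropic (diffuse); illustrative value only.
    """
    mean_color = light_field.mean(axis=(0, 1))             # (H, W, 3)
    variance = light_field.var(axis=(0, 1)).mean(axis=-1)  # (H, W)
    isotropic = variance < threshold                       # boolean mask

    # Isotropic points can be stored once, as colored voxels ...
    voxel_colors = np.where(isotropic[..., None], mean_color, 0.0)
    # ... while view-dependent points remain in the (sparse) light field.
    residual = np.where(isotropic[None, None, ..., None], 0.0, light_field)
    return isotropic, voxel_colors, residual
```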

The goal of this project is to (a) develop analysis techniques that efficiently separate light-field recordings into a voxel-based and an image-based representation, (b) investigate efficient data structures for sparse light-field representations (sparse volume data representations already exist), and (c) implement real-time hybrid (voxel/light-field) rendering techniques. These three sub-projects (a, b, c) can be carried out independently by different students.

Light Field Display

Topics: light fields, Unity game engine, real-time rendering
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis

Like holograms, light fields preserve complex light transport. But in contrast to holograms, light-field imaging and display technology is significantly easier to realize, as it is based on geometric rather than wave-optical principles. A light-field display, however, would replay the same 3D image that a hologram produces, supporting all depth cues (which is not the case for simple stereo-based 3D, as with virtual reality glasses or in 3D movie theatres).

The first commercial light-field displays have been on the market since February, and ICG has obtained a first version. It is driven by the Unity game engine on standard PCs. From a 3D scene description, a multi-view representation is rendered and interleaved into a high-resolution 2D image with respect to the micro-optics applied by the display. However, multi-view data generated from a 3D representation (i.e., a 3D model) does not represent a full 4D light field.
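
A minimal sketch of such view interleaving, assuming an idealized, non-slanted lenslet array in which each lenslet covers an S x T block of display pixels (real micro-optics, such as slanted lenticulars, require a per-subpixel mapping instead):

```python
import numpy as np

def interleave_views(views):
    """Interleave an S x T multi-view grid into one lenslet-display image.

    views: array of shape (S, T, H, W, 3). The sub-pixel (s, t) under
    lenslet (h, w) shows view (s, t) at pixel (h, w), so the output
    image has resolution (H*S, W*T).
    """
    S, T, H, W, C = views.shape
    # (H, S, W, T, C) -> (H*S, W*T, C): one S x T block per lenslet.
    return views.transpose(2, 0, 3, 1, 4).reshape(H * S, W * T, C)
```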

The goal of this project is to develop a true light-field viewer that renders full 4D light fields on the display.

Airborne Optical Sectioning: Visualization and Flight-Planning

Topics: drones, image-based rendering, user interfaces, light fields
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Contact: david.schedl@jku.at
Type: BSc Thesis, MSc Thesis (two projects)

Drones are becoming increasingly popular for remote sensing of landscapes in archeology, cultural heritage, forestry, and other disciplines. They are more efficient than airplanes for capturing small areas of up to several hundred square meters. LiDAR (light detection and ranging) and photogrammetry have been applied together with drones to achieve 3D reconstruction. With airborne optical sectioning (AOS), we have presented a radically different approach that is based on an old idea: synthetic aperture imaging. Rather than measuring, computing, and rendering 3D point clouds or triangulated 3D meshes, we apply image-based rendering for 3D visualization. In contrast to photogrammetry, AOS does not suffer from inaccurate correspondence matches and long processing times. It is cheaper than LiDAR, delivers surface color information, and has the potential to achieve high sampling resolutions. AOS samples the optical signal of wide synthetic apertures (30–100 m diameter) with unstructured video images recorded from a low-cost camera drone to support optical sectioning by image integration. The wide-aperture signal results in a shallow depth of field and consequently in a strong blur of out-of-focus occluders, while points in focus remain clearly visible. Shifting the focus computationally towards the ground allows optical slicing through dense occluder structures (such as leaves, tree branches, and coniferous trees), and the discovery and inspection of concealed artifacts on the surface.
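
A minimal sketch of the underlying shift-and-add integration, assuming fronto-parallel images, known lateral drone offsets, and a known focal length in pixels (all names and the simple pinhole model here are illustrative, not the project's actual pipeline):

```python
import numpy as np

def synthetic_aperture_focus(images, positions, depth, focal_px):
    """Compute one focal slice by shift-and-add image integration.

    images:    list of N aligned aerial images, each (H, W, 3),
               taken from laterally displaced drone poses.
    positions: (N, 2) array of camera offsets (x, y) in meters,
               relative to a reference pose.
    depth:     distance of the virtual focal plane in meters.
    focal_px:  camera focal length in pixels.

    Points on the focal plane align after the per-image shift and stay
    sharp in the average; occluders off the plane blur out.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, positions):
        # Parallax (disparity) of the focal plane for this camera offset.
        # Sign convention depends on the camera model; np.roll wraps at
        # the borders, so a real implementation would crop or pad.
        sx = int(round(focal_px * dx / depth))
        sy = int(round(focal_px * dy / depth))
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(images)
```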

In this project, students will develop an image-based visualization and flight-path-planning framework for AOS. The goal is to support users with intuitive navigation through the recorded image data and with easy flight planning for capturing images with the drone.


Novel GitHub Repository Visualization

Topics: visualization, source code management, D3.js, Javascript
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz@jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

As the leading web platform for source code management and version control, GitHub processes millions of commits every day. Especially in large repositories with many developers, it is difficult for users to keep track of changes across different branches. The current GitHub user interface and state-of-the-art visualizations are limited in terms of scalability and cannot represent changes to different file types, such as source code, tabular data, and images, over time. The aim of this project is to design and implement an integrated representation for visualizing the changes of files in GitHub repositories. File changes must be quantified according to file type and considered over time. The visualization should allow users to get an overview of all changes and also to drill down to investigate changes of interest in more detail. The project should be implemented using web-based techniques such as D3.js.


Discovering Usage Patterns in Exploratory Data Analysis

Topics: visualization, big data, guidance, D3.js
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz@jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis

Exploratory data analysis allows domain experts to find new insights in large and complex data. Together with partners from Harvard University and the University of Utah, we have developed visual data analysis tools for drug discovery, cancer genomics, and other applications. In such exploratory tools we are able to track the full history of the exploratory data analysis process, including all loaded datasets, the visual and analytical tools applied to the data, and all interactions performed by the user. Collecting this information results in a large and fast-growing provenance graph. In order to make this information accessible to the user, an existing project at the institute deals with the interactive visualization of this graph and with finding similar states in the graph. The goal of this project is to run meta-analyses on the provenance graph to derive recurring chains of actions and usage patterns, which can then be used to speed up the exploratory analysis and to visually guide the user through the analysis by making suggestions.
The project will be implemented as part of the Caleydo project, which uses D3.js, JavaScript, and Python as base technologies.
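
As a rough illustration of such a meta-analysis, the sketch below (in Python, with a hypothetical list-of-sessions input that would be extracted as action paths from the provenance graph) counts recurring n-step action chains, the kind of pattern that could drive next-step suggestions:

```python
from collections import Counter

def frequent_action_chains(sessions, n=3, min_support=5):
    """Count recurring n-step action chains in tracked analysis sessions.

    sessions: list of action-type sequences, e.g.
              [["load", "filter", "plot", "select", "plot"], ...],
              each extracted as a path from the provenance graph.
    Returns chains occurring at least `min_support` times, most common
    first -- candidates for suggesting a user's likely next step.
    """
    counts = Counter()
    for actions in sessions:
        for i in range(len(actions) - n + 1):
            counts[tuple(actions[i:i + n])] += 1
    return [(chain, c) for chain, c in counts.most_common()
            if c >= min_support]
```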
