UAV-Based Scanning for Hybrid Scene Representations

Drone-Guided Light-Field Imaging

Topics: drone programming, scene reconstruction, real-time rendering
Supervision: David Schedl, Indrajit Kurmi, Oliver Bimber
Type: BSc Thesis, MSc Thesis (alone or team of two)

Unmanned Aerial Vehicles, UAVs (i.e., drones), are becoming mature enough to handle useful tasks such as package delivery, inspection, search-and-rescue assistance, and mapping. For 3D reconstruction (e.g., of buildings), camera-equipped drones are applied as flexible range scanners. Reconstructing geometry from captured images, however, is only feasible for relatively simple surfaces. It fails for transparent, reflective, or untextured scene parts. Advanced image-based representations, like light fields, are suitable for capturing and rendering such complex scenes – but quickly become inefficient, as larger environments require an unrealistic number of images to be captured, stored, and processed. We believe that instead of relying only on a pure geometric representation that fails for complex scene regions, or only on an extensive light-field representation that requires excessive image sampling for large scenes, a hybrid sampling, representation, and rendering scheme is most efficient. While the majority of common scenes can be represented with geometry, the regions that cannot will be covered with a light-field representation. This, however, requires distinguishing between these region types, sampling them efficiently with a UAV, and blending them seamlessly during visualization.
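The blending step mentioned above can be sketched in a few lines. The following Python snippet is only an illustration under simplifying assumptions: it presumes a per-pixel confidence mask for the geometric reconstruction already exists (how to obtain it is part of the project), and all names are hypothetical.

```python
import numpy as np

def blend_hybrid(geometry_rgb, lightfield_rgb, confidence):
    """Blend a geometry-based render with a light-field render.

    confidence: per-pixel reliability of the geometric reconstruction
    in [0, 1]; low values (e.g., transparent, reflective, or untextured
    regions) fall back to the light-field representation.
    """
    w = np.clip(confidence, 0.0, 1.0)[..., None]  # broadcast over RGB
    return w * geometry_rgb + (1.0 - w) * lightfield_rgb

# Toy example: a 2x2 image where the right column has unreliable geometry.
geo = np.ones((2, 2, 3)) * 0.8   # geometry-based render
lf = np.ones((2, 2, 3)) * 0.2    # light-field render
conf = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
out = blend_hybrid(geo, lf, conf)
```

In a real renderer, the same weighting would be applied per fragment, with the confidence mask derived from the reconstruction (e.g., from texturedness or reprojection error).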
This project is suited for BSc and MSc students interested in one or more topics, such as drone programming, real-time rendering, and 3D reconstruction. Sub-projects in any of these topics can be carried out alone or in a team.

Atmospheric Energy Line Detection During Soaring

Topics: sensors, machine learning, mobile applications
Supervision: Oliver Bimber
Type: BSc Thesis / Project, MSc Thesis / Project

Following energy lines in the atmosphere caused by updrafts (e.g., thermals, ridge-deflected wind, mountain waves, convergences) is essential not only for birds to soar energy-efficiently, but also for planes and gliders (including paragliders and hang gliders). Unfortunately, these energy lines often remain invisible to pilots. The goal of this project is to investigate whether these energy lines can be detected during flight by means of real-time measurements and pattern detection. Attaching mobile motion sensors (often used for body tracking in sports analysis) to the outer wing tips of a modern glider makes it possible to measure the up and down motions of the wings while flying through different air masses. These motions correspond to up- and downdrafts next to the airplane. Indicating updrafts allows the pilot to follow the invisible energy lines efficiently.

Students will develop a mobile app (Android) that reads measurements from wireless motion sensors, analyzes these measurements (e.g., via machine learning), and displays energy-line patterns effectively on the mobile device.
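As a minimal sketch of the detection idea, the following Python snippet compares smoothed vertical-acceleration streams from the two wing-tip sensors; a sustained difference hints at an updraft on one side. All names, the window size, and the threshold are hypothetical and uncalibrated – the actual app would run on Android and likely use learned patterns instead of a fixed threshold.

```python
from collections import deque

class UpdraftDetector:
    """Toy detector, assuming one vertical-acceleration stream per wing tip.

    A short moving average suppresses sensor noise; a sustained positive
    difference between the wing tips hints at an updraft on that side.
    """

    def __init__(self, window=5, threshold=0.5):
        self.left = deque(maxlen=window)
        self.right = deque(maxlen=window)
        self.threshold = threshold  # m/s^2, illustrative value

    def update(self, accel_left, accel_right):
        self.left.append(accel_left)
        self.right.append(accel_right)
        mean_l = sum(self.left) / len(self.left)
        mean_r = sum(self.right) / len(self.right)
        diff = mean_l - mean_r
        if diff > self.threshold:
            return "updraft left"
        if diff < -self.threshold:
            return "updraft right"
        return "level"

detector = UpdraftDetector()
readings = [(2.0, 0.1), (2.2, 0.0), (1.9, 0.2)]  # left wing being lifted
states = [detector.update(l, r) for l, r in readings]
```

A learned classifier would replace the threshold test, and the resulting left/right/level states would feed the energy-line pattern display.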

Novel Computational Imaging Concepts

Topics: imaging, compressed sensing, image reconstruction, SLM
Needs: Matlab, C++, CUDA
Supervision: Oliver Bimber, Bettina Heise
Type: BSc Thesis, MSc Thesis, MSc Practicum, BSc Practicum

Imaging and reconstructing objects with a single-pixel camera has recently emerged as a novel method. This approach has the advantage that only a point detector is needed, which can be beneficial with respect to speed, or simply because area cameras are not available or too expensive (relevant, e.g., for sensors in the mid-infrared or THz wavelength range, or for non-linear optics applications). But even in the visible wavelength range, such a concept poses challenges with respect to efficient reconstruction: a sequence of recordings, obtained as a (temporal) series of (spatial) projections of the object wave field modulated by an intensity or phase pattern, is captured by the point detector. Meanwhile, advanced methods have been derived from this original technique, applying, e.g., multiple apertures or using the medium itself as a tool for imaging. This leads to entirely new imaging methods and opens further perspectives on where imaging itself can be used as a technique: e.g., in materials science, medical diagnostics, or astronomy.
Here, we have the possibility to implement and test such novel imaging concepts in basic optical setups, using different types of spatial light modulators (SLMs), such as a digital light processor (DLP) or a liquid-crystal SLM.

In contact with, and supported by, the imaging group at CDL MS-MACH/ZONA, such configurations should be further developed, evaluated, and optimized for different illumination schemes (coherent and incoherent), modulation types (spatial and temporal), and material types. In particular, computationally efficient reconstructions should be performed, where sparse-signal and compressive-sensing approaches might be suitably included.
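To make the measurement and reconstruction principle concrete, the following Python sketch simulates a 1-D single-pixel acquisition (each measurement integrates the scene under one random modulation pattern) and recovers a sparse scene with iterative soft-thresholding (ISTA), one standard compressive-sensing solver. Scene size, sparsity, patterns, and the regularization weight are all illustrative choices, not the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse 1-D "scene": 64 pixels, only 3 of them bright.
n, m = 64, 32                       # pixels, single-pixel measurements
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, 0.5, 0.8]

# Each measurement: the point detector integrates the scene under one
# random +/-1 modulation pattern (realizable, e.g., on a DLP via two
# complementary binary patterns).
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x_true

# ISTA: gradient step on the data term, then soft-thresholding to
# enforce sparsity -- recovers the scene from fewer measurements
# than pixels.
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
lam = 2.0                           # l1 regularization weight
x = np.zeros(n)
for _ in range(1000):
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

err = np.linalg.norm(x - x_true)
```

The same scheme carries over to 2-D images by flattening the pixel grid; the project would replace the simulated patterns and detector with the real SLM and point detector.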

Rubik’s Cube Solver Visualization

Topics: visualization, machine learning
Supervision: Marc Streit
Type: BSc Practicum, MSc Practicum

Everyone who has tried to solve Rubik’s Cube, the 3D combination puzzle invented in 1974, without instructions knows that this is a challenging task. Today, computers are better and faster at solving the cube automatically. But how does their solving strategy differ from the strategy a human applies?

To answer this question, we developed an interactive visualization for comparing different solving strategies by projecting them into 2D space. The goal of this follow-up project is to create a demonstrator that combines a LEGO® MINDSTORMS® robot, which is able to solve the cube without human help, with the interactive visualization, which shows the solving process on the fly. The demonstrator should be showcased to the general public and future students at science fairs and similar events.


Visualizing the Evolution of Instance Classification in Deep Learning

Topics: visualization, machine learning, D3.js, Javascript
Supervision: Peter Ruch, Marc Streit
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

In machine learning, classification is a supervised learning method in which a computer program learns from given observations (instances) and then uses this learning to classify new data. During the training of deep neural networks, the learning algorithm optimizes the model by feeding the training instances into the network multiple times (over multiple epochs) in order to steadily increase the model’s performance.
Existing approaches [1] visualize the final performance of the model after training by showing correctly and incorrectly classified instances and their class associations (see figure). However, they lack the ability to show the evolution of instance classification across the epochs during training.
The goal of this project is to design and implement a novel visualization that allows users to observe the learning behavior of deep neural networks during training. The solution should be implemented using web-based technologies such as D3.js and JavaScript/TypeScript.
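The data such a visualization would encode can be sketched as an instance-by-epoch correctness matrix. The following Python snippet (with a made-up training log; in practice the predictions would be recorded after each epoch) builds that matrix and derives one signal worth showing: the epoch from which each instance stays correctly classified.

```python
# Hypothetical training log: predicted class per instance per epoch,
# plus the ground-truth labels.
true_labels = [0, 1, 1, 0]
predictions_per_epoch = [
    [1, 1, 0, 0],   # after epoch 1
    [0, 1, 0, 0],   # after epoch 2
    [0, 1, 1, 0],   # after epoch 3
]

# Instance-by-epoch correctness matrix: one row per instance, one
# column per epoch (e.g., color-coded in the visualization).
correct = [
    [pred[i] == true_labels[i] for pred in predictions_per_epoch]
    for i in range(len(true_labels))
]

def stable_since(row):
    """First epoch index from which the instance stays correct."""
    for e in range(len(row)):
        if all(row[e:]):
            return e
    return None

stability = [stable_since(row) for row in correct]
```

Instances with a late (or missing) stabilization epoch are exactly the ones a user would want to inspect, which is what distinguishes this view from a final-performance-only visualization.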


Novel Github Repository Visualization

Topics: visualization, source code management, D3.js, Javascript
Supervision: Holger Stitz, Marc Streit
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

As the leading web platform for source code management and version control, GitHub processes millions of commits every day. Especially in large repositories with many developers, it is difficult for users to keep track of changes across different branches. The current GitHub user interface and state-of-the-art visualizations are limited in terms of scalability and unable to represent changes over time for different file types, such as source code, tabular data, and images. The aim of this project is to design and implement an integrated representation for visualizing changes of files in GitHub repositories. File changes must be quantified according to file type and considered over time. The visualization should allow users to get an overview of all changes and also drill down to investigate changes of interest in more detail. The project should be implemented using web-based technologies such as D3.js.
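The quantification step can be sketched independently of the visualization. The following Python snippet aggregates change volume per day and file type from a made-up commit log; a real implementation would pull this data from the GitHub API or `git log --numstat`, and the type mapping shown is only illustrative.

```python
from collections import defaultdict

# Hypothetical commit log: (day, filename, lines_changed).
commits = [
    ("2017-03-01", "src/app.js", 40),
    ("2017-03-01", "data/table.csv", 12),
    ("2017-03-02", "src/app.js", 5),
    ("2017-03-02", "img/logo.png", 1),
]

def file_type(name):
    """Map a file extension to a coarse type category (illustrative)."""
    ext = name.rsplit(".", 1)[-1]
    return {"js": "source code",
            "csv": "tabular data",
            "png": "image"}.get(ext, "other")

# Aggregate change volume per (day, file type) -- the quantity a
# type-aware timeline visualization would encode.
changes = defaultdict(int)
for day, name, lines in commits:
    changes[(day, file_type(name))] += lines
```

In the actual project this aggregate would drive the overview timeline, while the drill-down would go back to the individual file diffs.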

Discovering Usage Patterns in Exploratory Data Analysis

Topics: visualization, big data, guidance, D3.js
Supervision: Holger Stitz, Marc Streit
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis

Exploratory data analysis allows domain experts to find new insights in large and complex data. Together with partners from Harvard University and the University of Utah, we have developed visual data analysis tools for drug discovery, cancer genomics, and other applications. In such exploratory tools, we are able to track the full history of the exploratory data analysis process, including all loaded datasets, the visual and analytical tools applied to the data, and all user interactions. Collecting this information results in a large and fast-growing provenance graph. In order to make this information accessible to the user, an existing project at the institute deals with the interactive visualization of this graph and with finding similar states in it. The goal of this project is to run a meta-analysis on the provenance graph to derive recurring chains of actions and usage patterns, which can then be used to speed up the exploratory analysis and to visually guide the user through the analysis by making suggestions.
The project will be implemented as part of the Caleydo project, which uses D3.js, JavaScript, and Python as base technologies.
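One simple starting point for the meta-analysis is counting recurring action chains (n-grams) across analysis sessions. The Python sketch below uses made-up action sequences; real input would be paths extracted from the provenance graph, and more elaborate sequence-mining methods could replace the plain n-gram count.

```python
from collections import Counter

# Hypothetical provenance logs: one action sequence per analysis session.
sessions = [
    ["load", "filter", "sort", "plot", "filter", "sort", "plot"],
    ["load", "filter", "sort", "plot", "export"],
]

def frequent_chains(sessions, length, min_count=2):
    """Count action chains of a given length that recur across sessions."""
    counts = Counter()
    for actions in sessions:
        for i in range(len(actions) - length + 1):
            counts[tuple(actions[i:i + length])] += 1
    return {chain: c for chain, c in counts.items() if c >= min_count}

chains = frequent_chains(sessions, length=3)
```

Chains that recur often are candidates for suggestions: after a user performs "filter" and "sort", the tool could propose "plot" as the likely next step.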
