Theses and Practical Courses

The Institute of Computer Graphics continuously supervises students at all levels (BSc, MSc, and PhD) in computer graphics, computer vision, visual computing, visualization, and related fields. Each student is advised individually.
Interested and motivated students are invited to contact us at any time during the semester.

Selected Student Work

A selection of the finest student projects can be found here.

Open Theses Topics

The following topics are currently available and can be chosen at any time during the semester. The topics are assigned to our main research areas: Light-Field Processing, Transparent and Flexible Image Sensor, and Visual Analytics. Topics are assigned on a first-come, first-served basis. We also welcome your own suggestions related to visual computing. LaTeX templates for theses and practical course reports are available in German and English. If you have general questions about doing a thesis at our institute, feel free to contact us.


Dynamic Light-Field Excitation with a Light-Field Microscope

Topics: light fields, microscope, object tracking, computer vision, GPU, Python
Supervision: David Schedl, Oliver Bimber
Contact: david.schedl(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis
Recently, we showed how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field in a microscope. Our approach, however, requires static probes. The goal of this student project is to extend our approach to dynamic scenes. In our existing approach, a target is selected in a 4D imaging light field; an illumination light field is then computed for this selection, avoiding illumination of the remaining areas in the scene. The tasks for this project are a faster implementation of our approach that supports moving targets, and an algorithm for tracking objects while avoiding illumination of other scene areas. Potential applications lie in microbiology, where living animals must be tracked without photo-damaging the remaining specimen.
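To make the task concrete, here is a minimal, hypothetical sketch of the per-frame tracking loop in Python (one of the listed tools): the target is re-located in the central view of the imaging light field by template matching, and a binary mask around the new position marks the region that the illumination light field should cover. The function compute_illumination_lightfield only stands in for the existing static-scene method described above; a real solution would also need GPU acceleration and more robust tracking.

    # Hedged sketch (not the institute's actual pipeline): per-frame target tracking on the
    # central view of the imaging light field via template matching, followed by updating a
    # binary mask that restricts where the illumination light field is computed.
    import cv2
    import numpy as np

    def track_target(center_view, template):
        """Locate the target template in the current central view (grayscale)."""
        result = cv2.matchTemplate(center_view, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)           # best match position (x, y)
        h, w = template.shape[:2]
        return (max_loc[0] + w // 2, max_loc[1] + h // 2)  # target center

    def illumination_mask(shape, center, radius):
        """Binary mask that is 1 only around the tracked target, 0 elsewhere."""
        mask = np.zeros(shape, dtype=np.uint8)
        cv2.circle(mask, center, radius, 1, thickness=-1)
        return mask

    # Per-frame loop (camera_stream, template and compute_illumination_lightfield are
    # placeholders for existing components, NOT part of this sketch):
    # for center_view in camera_stream():
    #     cx, cy = track_target(center_view, template)
    #     mask = illumination_mask(center_view.shape[:2], (cx, cy), radius=20)
    #     L_illum = compute_illumination_lightfield(mask)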

Learning-based image reconstruction in the Fourier domain

Topics: machine learning, signal processing, Matlab
Supervision: Alexander Koppelhuber, Oliver Bimber
Contact: alexander.koppelhuber(/\t)jku.at
Type: BSc Thesis, MSc Thesis, MSc Practicum, BSc Practicum
Our current image reconstruction approach for our thin-film luminescent concentrator image sensor uses linear regression to estimate the inverse light transport matrix. The image can then be reconstructed from the measured signal with a simple matrix-vector multiplication.
This research project aims to increase the reconstruction quality by investigating methods that operate in the Fourier domain. For example, the phase of a signal in Fourier space contains much of the information about edges and lines in the spatial domain. Is it possible to estimate the amplitude/phase information directly from the measured signal of the image sensor and to use it to improve the image reconstruction quality?
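As a possible starting point, one could compare the existing matrix-vector reconstruction against a naive learning-based baseline in the Fourier domain. The sketch below (Python/NumPy used in place of Matlab; shapes and names such as signals and images are assumptions) fits a ridge regression that maps measured signals directly to the Fourier coefficients of training images and reconstructs by an inverse FFT.

    # Hedged sketch, not the institute's method: learn a linear map from the measured sensor
    # signal directly to the Fourier coefficients of the image, using ridge regression.
    import numpy as np
    from sklearn.linear_model import Ridge

    def fit_fourier_regressor(signals, images, alpha=1e-2):
        """signals: (N, M) measured signals; images: (N, H, W) ground-truth training images."""
        n, h, w = images.shape
        F = np.fft.fft2(images, axes=(1, 2)).reshape(n, -1)   # per-image 2D FFT, flattened
        targets = np.hstack([F.real, F.imag])                 # regress real and imaginary parts
        model = Ridge(alpha=alpha).fit(signals, targets)
        return model, (h, w)

    def reconstruct(model, shape, signal):
        """Predict Fourier coefficients from one measured signal and invert the FFT."""
        h, w = shape
        pred = model.predict(signal[None, :])[0]
        F = (pred[: h * w] + 1j * pred[h * w:]).reshape(h, w)
        return np.fft.ifft2(F).real                           # reconstructed image estimate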

Light-Field Rendering for Virtual Reality Glasses

Topics: light fields, head mounted displays, computer vision, C++, Java
Supervision: Clemens Birklbauer, Oliver Bimber
Contact: clemens.birklbauer(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum
Virtual reality glasses, such as the Oculus Rift or Google Cardboard, are becoming popular, affordable, and practical due to compact hardware. These devices display stereoscopic content with a wider field of view than regular displays, supporting an immersive experience for the user. Additionally, the viewer's head position, orientation, and movement are tracked, and the display can be altered accordingly. However, available content for such devices is mostly limited to games or live renderings. Precalculated stereoscopic pictures and videos do not support altering the viewer's head orientation because of the fixed horizontal camera displacement (i.e., rendering fails if the user tilts their head sideways). 4D light fields store the additional data that is needed for orientation-aware stereoscopic display.
The goal of this project is to incorporate orientation-aware stereoscopic light-field rendering into our software, while solving challenges such as interpolation, rendering, and navigation. Students will have the opportunity to work with a development version of the "Oculus Rift" virtual reality glasses or to program their own Android phones, and will gain insight into light-field processing.
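The core idea can be illustrated with a few lines of Python (a simplification that ignores view interpolation): given a 4D light field and the tracked head roll, the stereo pair is taken from two sub-aperture views whose baseline is rotated along with the head.

    # Minimal sketch (nearest-neighbour view selection only, no interpolation): pick the two
    # sub-aperture views whose baseline is rotated with the head, so the stereo pair stays
    # valid when the user tilts their head sideways.
    import numpy as np

    def stereo_views(L, roll_rad, baseline_views=2.0):
        """L: light field of shape (U, V, H, W, 3); roll_rad: head roll around the viewing axis."""
        U, V = L.shape[:2]
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0          # centre of the camera grid
        du = 0.5 * baseline_views * np.cos(roll_rad)   # half-baseline, rotated with the head
        dv = 0.5 * baseline_views * np.sin(roll_rad)
        left  = L[int(round(np.clip(cu - du, 0, U - 1))), int(round(np.clip(cv - dv, 0, V - 1)))]
        right = L[int(round(np.clip(cu + du, 0, U - 1))), int(round(np.clip(cv + dv, 0, V - 1)))]
        return left, right                             # feed to the HMD's left/right eye buffers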

Single Pixel Camera Imaging

Topics: imaging, compressed sensing, image reconstruction, SLM, Matlab, C++, CUDA
Supervision: Oliver Bimber, Bettina Heise
Contact: oliver.bimber(/\t)jku.at, bettina.heise(/\t)jku.at
Type: BSc Thesis, MSc Thesis, MSc Practicum, BSc Practicum
Imaging and reconstructing objects with a single-pixel camera is an emerging method that has been introduced and demonstrated only recently. The approach has the advantage that only a point detector is needed, which can be beneficial with respect to speed, or simply because area cameras are not available or too expensive (relevant, e.g., for area cameras sensitive in the mid-infrared or THz wavelength range, or for non-linear optics applications). But the concept also has its challenges with respect to an effective reconstruction in the visible wavelength range: a sequence of recordings, obtained as a (temporal) series of (spatial) projections of the object wave field modulated by an intensity or phase pattern, is captured by the point detector. The applied patterns can be generated by a spatial light modulator (SLM) included in the configuration, such as a digital light processor (DLP) or a liquid crystal SLM. This allows the image content to be compressed already during recording.
In contact with and supported by the imaging group at CDL MS-MACH/ZONA, such a single-pixel camera setup is to be built, tested, and optimized there for different illumination schemes (coherent and incoherent) and modulation types (spatial and temporal), and a computationally efficient reconstruction must be implemented, possibly including sparse-signal and compressive-sensing approaches (see the sketch below).
Additional information: single-pixel imaging via compressive sampling, single-pixel cameras.
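For orientation, the sketch below simulates the measurement model and a basic compressive-sensing reconstruction in Python/NumPy (the actual setup will involve an SLM and possibly Matlab, C++, or CUDA): each measurement is the inner product of the scene with one random pattern, and the image is recovered with iterative soft-thresholding (ISTA) under a DCT sparsity prior. Pattern type, sparsity basis, regularization weight, and solver are illustrative choices, not the required ones.

    # Hedged sketch of single-pixel imaging: M << N measurements of a synthetic scene with
    # random +/-1 patterns, reconstructed by ISTA with sparsity in the DCT domain.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    H = W = 32                       # small test image
    N, M = H * W, 400                # N pixels, M single-pixel measurements

    x_true = np.zeros((H, W)); x_true[8:24, 8:24] = 1.0   # simple synthetic scene
    A = rng.choice([-1.0, 1.0], size=(M, N))              # random SLM patterns (one per row)
    y = A @ x_true.ravel()                                # point-detector readings

    def ista(A, y, shape, lam=0.05, iters=300):
        """Iterative soft-thresholding: sparse DCT coefficients consistent with the measurements.
        lam controls the sparsity prior and needs tuning for real data."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        c = np.zeros(shape)
        for _ in range(iters):
            r = A @ idctn(c, norm='ortho').ravel() - y                   # residual
            c -= step * dctn((A.T @ r).reshape(shape), norm='ortho')     # gradient step
            c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)     # soft threshold
        return idctn(c, norm='ortho')

    x_rec = ista(A, y, (H, W))
    print('relative reconstruction error:', np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))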

Quality Metric for Light Fields

Topics: light fields, quality metric, processing, user study, Matlab
Supervision: David Schedl, Oliver Bimber
Contact: david.schedl(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis
For images and videos, various de facto standard quality metrics exist (e.g., the structural similarity index or the high-dynamic-range visual difference predictor). Such metrics are widely used for quality assessment of processing techniques. For light fields, however, no such metrics are available. Image or video quality metrics cannot be applied to light fields directly, because the multiple views require additional considerations. The goal of this project is to develop a first light-field quality metric. One main task of the project is to evaluate, based on a user study, how well the metric agrees with human perception.
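A trivial baseline, sketched below in Python/scikit-image, would simply average an existing image metric such as SSIM over all sub-aperture views; it ignores angular (view-to-view) distortions, which is exactly the gap a dedicated light-field metric should address. The snippet is only meant to make the problem setting concrete, not to propose a metric.

    # Naive baseline only: per-view SSIM averaged over the sub-aperture views.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def mean_view_ssim(lf_ref, lf_test, data_range=1.0):
        """lf_ref, lf_test: grayscale light fields of shape (U, V, H, W), same value range."""
        U, V = lf_ref.shape[:2]
        scores = [ssim(lf_ref[u, v], lf_test[u, v], data_range=data_range)
                  for u in range(U) for v in range(V)]
        return float(np.mean(scores))   # ignores angular consistency between views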

Fusing Data Analysis with Data Wrangling

Topics: visualization, visual analytics, big data, D3.js, Python
Supervision: Samuel Gratzl, Marc Streit
Contact: samuel.gratzl(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis
Solving data-intensive problems requires users to merge and analyze datasets that come from different sources and differ in format and semantics. The current workflow in visual data analysis is to first import the datasets and specify their semantics in a standard importer dialog before moving on to the actual visual analysis. However, when dealing with multiple heterogeneous datasets, the back and forth between data import and analysis disrupts the workflow and hampers the sense-making process. The goal of this project is to seamlessly fuse data import with data analysis. The project will be implemented in the Caleydo Web framework, which is based on D3.js, JavaScript, and Python.

Provenance Graph Based Steering

Topics: visualization, big data, guidance, D3.js, Python
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis
In the context of visual data analysis, a provenance graph contains information about all loaded datasets, the visual and analytical tools applied to the data, and all interactions performed by the user. Collecting this information results in a large and fast-growing graph. In order to make this information accessible to the user, an existing project at the institute deals with the interactive visualization of this graph. The goal of this project is to run meta-analyses on the provenance graph to derive user intentions and tasks, which can then be used to adapt the visualization and visually guide the user through the analysis. The project will be implemented in the Caleydo Web framework, which is based on D3.js, JavaScript, and Python.
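As a purely illustrative example of such a meta-analysis (not the Caleydo provenance model), the Python sketch below builds a directed graph from the user's navigation sequence and flags frequently revisited states as candidate points of interest that a guidance component could highlight.

    # Hedged illustration: frequently revisited provenance states as a simple guidance signal.
    import networkx as nx
    from collections import Counter

    def revisited_states(visit_sequence):
        """visit_sequence: ordered list of state ids as the user navigated the provenance graph."""
        G = nx.DiGraph()
        G.add_edges_from(zip(visit_sequence, visit_sequence[1:]))   # navigation edges
        visits = Counter(visit_sequence)
        # states visited more than once are candidates for "important" analysis steps
        return G, [s for s, n in visits.most_common() if n > 1]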

View Management for Web Visualizations

Topics: visualization, multiple coordinated views, D3.js, Javascript
Supervision: Samuel Gratzl, Marc Streit
Contact: samuel.gratzl(/\t)jku.at
Type: BSc Practicum, MSc Practicum
In visual data analysis tools and software from other domains, changes and selections in one view are immediately reflected in all other views, a principle known as Multiple Coordinated Views (MCV). Although this is a standard component of standalone software, most web visualizations are still either implemented as single-view solutions or with very basic, manually created view coordination. The goal of this project is to implement a library that provides effective view management for web visualizations. Besides coordinating views within one browser window, it should be possible to coordinate views hosted in different browser windows or tabs. The project will be implemented in the Caleydo Web framework, which is based on D3.js, JavaScript, and Python.