
Theses and Practical Courses

The Institute of Computer Graphics continuously supervises students at all levels (BSc, MSc, and PhD) in computer graphics, computer vision, visual computing, visualization, and related fields. Each student is advised individually.
Interested and motivated students are invited to contact us at any time during the semester.

Selected Student Work

A selection of the finest student projects can be found here.

Open Theses Topics

The following topics are currently available and can be chosen at any time during the semester. The topics are assigned to our main research areas: Light-Field Processing, Transparent and Flexible Image Sensors, and Visual Analytics. Topics are assigned on a first-come, first-served basis. We also welcome your own topic suggestions related to visual computing. LaTeX templates for theses and practical course reports are available in German and English. If you have general questions about writing a thesis at our institute, feel free to contact us.


Structural Similarity Index for Light Fields

Topics: light fields, quality metric, SSIM, processing, Matlab
Supervision: David Schedl, Oliver Bimber
Contact: david.schedl(/\t)jku.at
Type: BSc Thesis, (MSc Thesis)
Image: SSIM vs. MSE scores for differently degraded images
The Structural Similarity (SSIM) index is the de facto standard quality metric for the perceptual comparison of a modified image with its ground truth. While the mean squared error (MSE) produces similar scores for differently degraded images (see image), SSIM generates scores that correlate with human judgment. Although SSIM has been extended to other domains (e.g., video), no extension for light fields is available. SSIM cannot be applied directly to light fields, because their multiple views require additional considerations. The goal of this project is to develop a first light-field SSIM.
Initial experiments and ideas for a light-field SSIM have already been carried out by students and can be used as a starting point for this project.
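As an illustration only (Python with scikit-image, although the topic lists Matlab; the light-field array layout is an assumption): a naive baseline for a light-field SSIM is to average the standard 2-D SSIM over all sub-aperture views. A proper light-field metric would additionally have to model angular consistency between neighbouring views, which this sketch ignores.

    import numpy as np
    from skimage.metrics import structural_similarity as ssim  # scikit-image

    def naive_lightfield_ssim(lf_ref, lf_test):
        # lf_ref, lf_test: grayscale light fields of shape (views_y, views_x, h, w),
        # values in [0, 1] (hypothetical layout).
        scores = []
        for u in range(lf_ref.shape[0]):
            for v in range(lf_ref.shape[1]):
                # Per-view 2-D SSIM; angular effects between views are ignored here.
                scores.append(ssim(lf_ref[u, v], lf_test[u, v], data_range=1.0))
        return float(np.mean(scores))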

Additionally, master students should conduct a user study on human light-field perception, which will be used for evaluating and tuning the metric (see project: User Study on Light-Field Perception).

User Study on Light-Field Perception

Topics: light fields, quality metric, processing, user study, online
Supervision: David Schedl, Oliver Bimber
Contact: david.schedl(/\t)jku.at
Type: BSc Thesis
Image: SSIM
For images, human-rated quality databases exist (e.g., the TID2008 database). Such databases contain several images and differently degraded versions of each image. For each pair (original vs. modified image), a quality score from a user study is available. Such databases are used for assessing and tuning perceptual image quality metrics, such as the Structural Similarity (SSIM) index. For light fields, however, no such database is available.
The goal of this project is to establish a representative light-field database with degraded light fields and to conduct a user study on human light-field perception. The database and the results of the study are of interest to researchers developing new light-field quality assessment techniques. Therefore, the results will be published in an international journal or at an international conference; upon acceptance, the student will be sent to the conference by our institute.
Preliminary work and research for this project has already been carried out: a modified version of the Stanford Viewer for viewing two light fields in parallel is available, plugins for rendering virtual light fields with Blender are usable, and databases with freely available light fields exist.
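To illustrate how such a database is typically used (a sketch with assumed inputs, not part of the topic description): the metric scores for each original/degraded pair are rank-correlated with the mean opinion scores collected in the user study. The example below uses Python with SciPy and scikit-image; the per-image SSIM stands in for whatever light-field metric is being evaluated.

    from scipy.stats import spearmanr
    from skimage.metrics import structural_similarity as ssim  # scikit-image

    def rank_correlation_with_users(originals, degraded, mean_opinion_scores):
        # originals/degraded: lists of grayscale images (or light-field views) in [0, 1];
        # mean_opinion_scores: one user-study rating per original/degraded pair.
        metric_scores = [ssim(o, d, data_range=1.0) for o, d in zip(originals, degraded)]
        rho, _ = spearmanr(metric_scores, mean_opinion_scores)  # Spearman rank correlation
        return rho  # close to 1.0 means the metric agrees well with human judgement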

Improved Image Reconstruction Using Perceptually Motivated Objective Functions

Topics: machine learning, signal processing, Matlab
Supervision: Alexander Koppelhuber, Oliver Bimber
Contact: alexander.koppelhuber(/\t)jku.at
Type: BSc Thesis, MSc Thesis, MSc Practicum, BSc Practicum
Image: Quality Metric

Our current image reconstruction approach for our thin-film luminescent concentrator image sensor uses ordinary least squares to estimate the inverse light transport matrix. The image can then be reconstructed from the measured signal with a simple matrix-vector multiplication. This research project aims to increase the reconstruction quality by investigating objective functions other than the l2-norm, which is known to correlate poorly with perceived image quality. In a recent paper [1], the structural similarity index (SSIM) was used to improve image quality in the context of demosaicing and denoising. Is it possible to use SSIM (or other metrics) as the objective function for image reconstruction from projections?
[1] "Is L2 a Good Loss Function for Neural Networks for Image Processing?".

Airplane Heading Correction with Wind Prediction

Topics: airplane navigation, weather databases, Android, Java
Supervision: Clemens Birklbauer, Oliver Bimber
Contact: clemens.birklbauer(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum
Image: Wind Correction
Airplanes are heavily influenced by wind during navigation. To fly in a certain direction, the wind has to be compensated for. This means that the actual heading of a plane in the air differs from its direction of movement over ground as provided by GPS tracking. This is a problem, e.g., for augmented reality systems that rely on GPS-based heading information.
With known wind speed and direction, the actual heading of the airplane can be computed. Existing applications rely on manual user input to determine the required wind parameters, which have to be updated frequently.
The goal of this project is to replace the manual wind compensation with values computed automatically from weather forecast systems (e.g., GFS, ICON), looked up for the current GPS position and altitude. The data should be downloaded and preprocessed before flights (for a desired region). In a first step, freely available weather databases should be explored. Furthermore, an Android app has to be implemented that downloads wind values and supports a comparison with a manual wind computation during flight.
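For reference, the underlying computation is the classic wind triangle. The sketch below is in Python (the app itself would be written in Java for Android) and assumes the meteorological convention that the wind direction is the direction the wind blows from; the forecast lookup that the project is actually about is not shown.

    import math

    def wind_corrected_heading(track_deg, tas_kt, wind_dir_deg, wind_speed_kt):
        # track_deg: desired track over ground; tas_kt: true airspeed;
        # wind_dir_deg: direction the wind blows FROM; wind_speed_kt: wind speed.
        delta = math.radians(wind_dir_deg - track_deg)
        # The crosswind component has to be cancelled by the wind correction angle.
        wca = math.asin((wind_speed_kt / tas_kt) * math.sin(delta))
        heading = (track_deg + math.degrees(wca)) % 360.0
        ground_speed = tas_kt * math.cos(wca) - wind_speed_kt * math.cos(delta)
        return heading, ground_speed

    # Example: desired track 090 deg, TAS 100 kt, wind 20 kt from 045 deg
    # -> heading of roughly 082 deg and a ground speed of roughly 85 kt.
    print(wind_corrected_heading(90, 100, 45, 20))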

Collaborative Timelapse Photography

Topics: smartphones, computer vision, Java
Supervision: Clemens Birklbauer, Oliver Bimber
Contact: clemens.birklbauer(/\t)jku.at
Type: MSc Thesis
Photographers achieve great effects by combining pictures of an object taken at different times of day and under different lighting conditions. Capturing such images requires placing a camera at one spot for a long time or returning to the same place multiple times, so this is nothing one can quickly do during sightseeing. Furthermore, registration and stitching can be a very tedious job.
The goal of this project is to develop a smartphone app that allows multiple users to collaboratively capture such images and to easily achieve these effects with semi-automatic processing on the smartphone.
The first task of this project is to find appropriate registration and stitching methods for combining images taken with different cameras under very different illumination conditions. Furthermore, simple editing tools have to be explored that make it easy to achieve the desired effects.
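As a starting point only (an assumed OpenCV/Python environment, although the topic lists Java): feature-based registration with ORB keypoints and a RANSAC homography is a common baseline. How well such features survive strongly differing illumination is exactly one of the questions the project would have to answer.

    import cv2
    import numpy as np

    def register_to_reference(reference, moving):
        # Align 'moving' to 'reference' (both grayscale) via ORB features + RANSAC homography.
        orb = cv2.ORB_create(4000)
        k1, d1 = orb.detectAndCompute(reference, None)
        k2, d2 = orb.detectAndCompute(moving, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference.shape[:2]
        return cv2.warpPerspective(moving, H, (w, h))   # 'moving' warped into the reference frame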
The second part of the project is to develop a smartphone app that lets users ask other users close to the desired location for help in realizing the picture they have in mind. It should be possible to specify the capturing goals and then guide the other users to take a picture with the desired light conditions, the right position, and the same field of view. An example of such a crowdsourcing app can be found here: Time-Lapse

Image: © Stephen Wilkes

Novel Computational Imaging Concepts

Topics: imaging, compressed sensing, image reconstruction, SLM
Needs: Matlab, C++, CUDA
Supervision: Oliver Bimber, Bettina Heise
Contact: oliver.bimber(/\t)jku.at, bettina.heise(/\t)jku.at
Type: BSc Thesis, MSc Thesis, MSc Practicum, BSc Practicum
Imaging and reconstructing objects with a single-pixel camera has recently emerged as a novel method, introduced and demonstrated in [1]. This approach has the advantage that only a point detector is needed, which can be beneficial with respect to speed, or simply because area cameras are not available or too expensive (relevant, e.g., for sensors in the mid-infrared or THz wavelength range, or for non-linear optics applications). But also in the visible wavelength range, such a concept poses challenges with respect to efficient reconstruction: a sequence of recordings, obtained as a (temporal) series of (spatial) projections of the object wave field modulated by an intensity or phase pattern, is captured by the point detector. Meanwhile, advanced methods have been derived from this original technique, applying, e.g., multiple apertures or using the medium itself as a tool for imaging. This leads to completely novel imaging methods and opens further perspectives for where imaging can be applied, e.g., in materials science, medical diagnostics, or astronomy.
Here we have the possibility to implement and test such novel imaging concepts in several basic optical setups containing different types of spatial light modulators (SLMs), such as a digital light processor (DLP) or liquid-crystal SLMs.

In contact with, and supported by, the imaging group at CDL MS-MACH/ZONA, such configurations should be further developed, evaluated, and optimized for different illumination schemes (coherent and incoherent), modulation types (spatial and temporal), and material types. In particular, computationally efficient reconstructions should be performed, where sparse-signal and compressive-sensing approaches might suitably be included.
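To make the reconstruction side concrete, here is a deliberately simple sketch (Python/NumPy with SciPy, although the topic lists Matlab, C++, and CUDA): each point-detector measurement is one inner product of the scene with an SLM pattern, and the image is recovered by iterative shrinkage-thresholding (ISTA) with a DCT sparsity prior. Pattern design, noise handling, and step-size choice are intentionally naive.

    import numpy as np
    from scipy.fft import dctn, idctn

    def single_pixel_reconstruct(A, y, shape, lam=0.05, iters=200):
        # A: (m, n) matrix whose rows are flattened SLM patterns; y: m point-detector readings.
        n = shape[0] * shape[1]
        x = np.zeros(n)
        t = 1.0 / (np.linalg.norm(A, 2) ** 2)                     # step size from the spectral norm
        for _ in range(iters):
            x = x - t * (A.T @ (A @ x - y))                       # gradient step on the data term
            c = dctn(x.reshape(shape), norm='ortho')              # sparsifying (DCT) domain
            c = np.sign(c) * np.maximum(np.abs(c) - t * lam, 0.0) # soft thresholding
            x = idctn(c, norm='ortho').ravel()
        return x.reshape(shape)

    # Example: 400 random binary patterns for a 32 x 32 scene (fewer measurements than pixels).
    rng = np.random.default_rng(0)
    shape, m = (32, 32), 400
    A = rng.integers(0, 2, size=(m, shape[0] * shape[1])).astype(float)
    scene = np.zeros(shape); scene[8:24, 8:24] = 1.0              # synthetic test object
    recon = single_pixel_reconstruct(A, A @ scene.ravel(), shape)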

[1] Image source: http://machinedesign.com/news/get-ready-single-pixel-cameras

Fusing Data Analysis with Data Wrangling

Topics: visualization, visual analytics, big data, D3.js, Python
Supervision: Samuel Gratzl, Marc Streit
Contact: samuel.gratzl(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis
Solving data-intensive problems requires users to merge and analyze datasets that come from different sources and differ in format and semantics. The current workflow in visual data analysis is to first import the datasets and specify their semantics in a standard importer dialog before moving on to the actual visual analysis. However, when dealing with multiple heterogeneous datasets, the back and forth between data import and analysis disrupts the workflow and hampers the sense-making process. The goal of this project is to seamlessly fuse data import with data analysis. The project will be implemented in the Caleydo Web framework, which is based on D3.js, JavaScript, and Python.

Visualizing Degree-of-Interest Functions over Time

Topics: visualization, big data, guidance, D3.js
Supervision: Holger Stitz, Marc Streit
Contact: holger.stitz(/\t)jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis

Multi-attribute time-series data plays an essential role in many different domains, such as economics, sensor networks, and biology. An important task when making sense of such data is to provide users with an overview that lets them identify items showing an interesting development over time. In earlier work, we presented ThermalPlot, a visualization technique that summarizes combinations of multiple attributes over time. Techniques such as ThermalPlot are based on a user-defined degree-of-interest (DoI) function that is a weighted combination of multiple attributes. However, such complex DoI functions and their effects on the visualization are hard for users to comprehend.
The goal of the project is to effectively visualize weighted multi-attribute DoI functions. The solution should go beyond standard stacked stream graphs and should consider dynamic weights, missing values, and different levels of granularity. The project will be implemented as an independent library that should later be used in the Caleydo Web framework.
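For illustration (a sketch with an assumed data layout, not the ThermalPlot implementation): a DoI function of the kind described above can be written as a weighted sum of min-max-normalized attributes, yielding one DoI trajectory per item over time.

    import numpy as np

    def degree_of_interest(attributes, weights):
        # attributes: dict of name -> array of shape (items, timesteps)
        # weights:    dict of name -> user-defined weight (e.g., summing to 1)
        doi = None
        for name, values in attributes.items():
            v = np.asarray(values, dtype=float)
            lo, hi = np.nanmin(v), np.nanmax(v)
            norm = (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)  # min-max normalization
            term = weights.get(name, 0.0) * norm
            doi = term if doi is None else doi + term
        # Missing values (NaN) propagate into the result, which is one of the
        # effects the visualization should make visible.
        return doi   # shape (items, timesteps): one DoI trajectory per item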

 

View Management for Web Visualizations

Topics: visualization, multiple coordinated views, D3.js, JavaScript
Supervision: Samuel Gratzl, Marc Streit
Contact: samuel.gratzl(/\t)jku.at
Type: BSc Practicum, MSc Practicum
In visual data analysis tools, as well as in software from other domains, changes and selections in one view are immediately reflected in all other views, a principle known as Multiple Coordinated Views (MCV). Although this is a standard feature in standalone software, most web visualizations are still implemented either as single-view solutions or with very basic, manually created view coordination. The goal of this practicum project is to implement a library that provides simple yet effective view management for web visualization applications. Besides the coordination of views within one browser window, it should be possible to coordinate views hosted in different browser windows/tabs. The project will be implemented in the Caleydo Web framework, which is based on D3.js, JavaScript, and Python.
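The coordination pattern itself is small; below is a minimal publish/subscribe sketch in Python for illustration only (the actual library would be JavaScript/TypeScript in the Caleydo Web stack, and cross-window coordination, e.g., via postMessage or localStorage events, is not shown).

    class ViewCoordinator:
        # Minimal sketch of Multiple Coordinated Views: a change published by one
        # view on a named channel is pushed to all other registered views.

        def __init__(self):
            self._listeners = {}                    # channel name -> list of callbacks

        def subscribe(self, channel, callback):
            self._listeners.setdefault(channel, []).append(callback)

        def publish(self, channel, payload, sender=None):
            for callback in self._listeners.get(channel, []):
                if callback is not sender:          # don't echo the change back to its origin
                    callback(payload)

    # Usage: two hypothetical views sharing a selection.
    coordinator = ViewCoordinator()
    table_view = lambda ids: print("table highlights", ids)
    scatter_view = lambda ids: print("scatterplot highlights", ids)
    coordinator.subscribe("selection", table_view)
    coordinator.subscribe("selection", scatter_view)
    coordinator.publish("selection", [3, 7], sender=table_view)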