
Selected MSc and BSc Theses and Practical Courses


This gallery presents a selection of the finest student projects that have been carried out at the Institute of Computer Graphics. If you are interested in doing your student work at the ICG, feel free to contact us.

LaTeX templates for theses and practical course reports are available in German and English.

MSc Theses

Comparative Visualization of Large Tabular Data, Reem Hourieh

Tabular data plays a vital role in many different domains, such as accounting, biology, and computer science. Tables can quickly grow to more than a few thousand rows and columns. Visualizing this data can help users gain insights into the information contained in the tables. Existing visualization techniques, however, are inadequate for showing modifications applied to one table relative to another, such as structural changes (i.e., added or removed rows and/or columns) or modifications of data values in cells. At the same time, comparing tabular data manually is cumbersome and time-consuming. Traditional comparison tools can assist users in inspecting differences between tables; however, their results are often hard to interpret or they do not scale to large tables. This thesis proposes a comparison tool that calculates the difference between large homogeneous tables and provides a novel interactive visualization to encode it. A multi-level-of-detail solution allows users to effectively compare multiple tables and investigate structural and content changes. The comparative visualization tool was tested on large biomedical data, enabling users to see patterns of changes across tables with various timestamps.
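The core diff computation the abstract describes — detecting added/removed rows and columns plus cell-level value changes — can be sketched as follows. This is a minimal illustration only (the table format, `diff_tables` name, and string row/column IDs are assumptions; the thesis works on far larger homogeneous tables):

```python
def diff_tables(a, b):
    """Compare two tables given as {row_id: {col_id: value}} dicts.

    Returns added/removed row and column ids plus a list of
    (row, col, old, new) tuples for changed cells.
    """
    rows_a, rows_b = set(a), set(b)
    cols_a = {c for r in a.values() for c in r}
    cols_b = {c for r in b.values() for c in r}
    changed = [
        (r, c, a[r][c], b[r][c])
        for r in rows_a & rows_b          # structural overlap only
        for c in cols_a & cols_b
        if c in a[r] and c in b[r] and a[r][c] != b[r][c]
    ]
    return {
        "rows_added": rows_b - rows_a,
        "rows_removed": rows_a - rows_b,
        "cols_added": cols_b - cols_a,
        "cols_removed": cols_a - cols_b,
        "cells_changed": changed,
    }

old = {"r1": {"c1": 1, "c2": 2}, "r2": {"c1": 3, "c2": 4}}
new = {"r1": {"c1": 1, "c2": 9}, "r3": {"c1": 5, "c2": 6}}
d = diff_tables(old, new)
```

A visualization would then encode these categories (structural vs. content change) at several levels of detail rather than listing them textually.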

We would like to point out that when playing the video, data may be transmitted to external parties. Learn more by reading our data privacy policy
Data protection information

Interactive Visualization of Provenance Graphs for Reproducible Biomedical Research, Stefan Luger

Recent advances in cancer research potentially translate into new drugs and treatment methods. Through new sequencing technologies, cancer researchers can now acquire the biomedical samples needed for pre-clinical studies more quickly. A study often requires the integration of many external scripts and programs, resulting in numerous workflows that contain hundreds of tools and files. Due to the sheer size and complexity of such data-intensive experiments, many labs struggle to replicate and guarantee the trustworthiness of published results. Recent efforts, such as the Refinery Platform, have emerged to manage biomedical data, support automated scientific workflow execution, and provide visual exploration tools that let analysts review their results and ensure the reproducibility of a study. Metadata and the provenance information collected about files, tools, and analysis results of every workflow execution over time are typically stored in a provenance graph for each study. Existing provenance graph visualizations are mostly static, not on par with modern visualization techniques, and do not scale well. The thesis' goal is to deliver an interactive provenance graph visualization for Refinery that handles provenance graphs at the file level over time. In collaboration with the Refinery team, we elicited the six most important user tasks and requirements. The two major challenges when dealing with provenance graphs are to handle large and quickly growing graphs while they simultaneously evolve over time. We address these issues by using hierarchical aggregation to organize the graph into multiple hierarchy levels. Based on network motifs, we further aggregate similar and redundant analyses, which adds an even more abstract representation on top of this hierarchy. The graph dynamically adjusts the visibility of these levels for every node based on user interest through the application of a modular degree-of-interest function.
Our combined approach enables researchers to review and communicate time-varying provenance graphs containing numerous workflows of hundreds of tools and files. The visualization was evaluated using real datasets acquired from the Stem Cell Commons database. The provenance graph grew to up to 100 analyses, resulting in more than 1,100 files and tools. We demonstrate the effectiveness of our visualization by means of task-driven use cases.
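The degree-of-interest idea mentioned above can be sketched in a few lines: interest falls off with graph distance from a focus node, and nodes whose DOI drops below a threshold would be aggregated or hidden. The formula and weights below are hypothetical (the thesis uses a modular, user-configurable DOI function):

```python
from collections import deque

def degree_of_interest(graph, focus, api=None, max_doi=3):
    """Toy DOI over an undirected graph: DOI(n) = API(n) + max_doi - dist(n, focus).

    `graph` maps each node to its neighbours; `api` holds optional
    a-priori interest values (defaulting to 0).
    """
    api = api or {}
    dist = {focus: 0}
    queue = deque([focus])
    while queue:                       # BFS distances from the focus node
        n = queue.popleft()
        for m in graph.get(n, ()):
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return {n: api.get(n, 0) + max_doi - dist.get(n, max_doi)
            for n in graph}

g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
doi = degree_of_interest(g, focus="b")
```

In the visualization, a node's DOI would then select which hierarchy level (raw file, aggregated analysis, motif) is shown for it.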


Light-Field Caching, Simon Opelt

With the continuously increasing sensor resolution of cameras, light-field imaging is becoming a more and more practical extension to conventional digital photography. It complements post-processing with synthetic aperture control, refocusing, as well as perspective and field-of-view changes. To be a true alternative to classical 2D imaging, however, the spatial resolution of light fields has to be in the same megapixel order as the resolution of today's digital images. The additional angular resolution must also be adequately high to prevent sampling artifacts (in particular for synthetic refocusing). This quickly results in uncompressed light-field datasets requiring over 10 gigabytes, compared to below 100 megabytes for a conventional digital photo. Rendering such a light field within the constraints of limited graphics memory is the primary challenge of this thesis. This work describes the approach, implementation, and evaluation of a light-field caching framework that makes it possible to render very large light fields in real time. A GPU-accelerated renderer, a software-managed cache, and different strategies for managing the cache have been created.
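A software-managed cache of the kind described can be sketched with a least-recently-used policy (LRU is one plausible strategy; the thesis compares several, and the `TileCache`/`load_tile` names here are illustrative, not from the work):

```python
from collections import OrderedDict

class TileCache:
    """Minimal software-managed cache for light-field tiles (LRU policy).

    Real tiles would be GPU texture blocks; `load_tile` stands in for
    streaming a tile from disk into graphics memory.
    """
    def __init__(self, capacity, load_tile):
        self.capacity = capacity
        self.load_tile = load_tile
        self.store = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)          # mark as most recently used
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[key] = self.load_tile(key)
        return self.store[key]

cache = TileCache(2, load_tile=lambda k: f"tile-{k}")
cache.get(0); cache.get(1); cache.get(0); cache.get(2)   # last call evicts tile 1
```

The renderer would request only the tiles visible for the current viewpoint, keeping the working set far below the full multi-gigabyte dataset.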


A Transparent, Flexible, Scalable, and Disposable Image Sensor Using Thin-Film Luminescent Concentrators, Alexander Koppelhuber

Conventional CCD image sensors are planar, opaque, and inflexible. Recent approaches have produced image sensors that are transparent, flexible, scalable, or potentially disposable; to our knowledge, however, none of them combines all of these properties at once. In this work, a novel image sensor is presented that is based on a luminescent concentrator (LC) foil, which absorbs light in a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The transported light is measured at the border of the foil by line-scan cameras. An image that is projected onto the LC surface is reconstructed from the acquired measurements. Thanks to the LC foil, the image sensor is transparent, flexible, scalable and, due to the low price of the foil, potentially disposable.
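Reconstructing the image from edge measurements is, at its core, a linear inverse problem: if a calibrated transport matrix relates image pixels to edge samples, the image can be recovered by a least-squares solve. The tiny sizes, random matrix, and variable names below are purely illustrative assumptions, not the thesis' actual calibration:

```python
import numpy as np

# m = T @ x: edge measurements m arise from the unknown image x via a
# light transport matrix T (calibrated once per foil, assumed known here).
rng = np.random.default_rng(0)
T = rng.random((8, 4))                    # 8 edge samples, 4 image "pixels"
x_true = np.array([0.2, 0.9, 0.4, 0.7])   # toy projected image
m = T @ x_true                            # simulated edge measurements

# Recover the image by solving the (overdetermined) linear system.
x_rec, *_ = np.linalg.lstsq(T, m, rcond=None)
```

With noise-free measurements and a well-conditioned matrix, the recovery is exact; real measurements would call for regularization.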


Stereopsis in the Context of High Dynamic Range Stereo Displays, Philip Aumayr

There are two major trends in the display industry: increasing contrast and (auto-)stereoscopic content presentation. While it is obvious that both trends have an impact on perception, the relation between high dynamic range contrast and stereoscopic viewing is not well established in the literature. The goal of this thesis was to construct a high dynamic range display capable of presenting stereoscopic content, and to perform a user study testing the response to such a viewing experience, especially with regard to multiplexing side effects such as crosstalk. The construction process and the many setbacks encountered while building the display, such as polarization and thermal design issues, are described in this thesis.
Even though the display prototype did not exhibit the highly anticipated contrast range, the user study did provide valuable feedback on how far stereopsis benefits from a higher dynamic range. The user study also included an attempt to uncover the role of crosstalk and its perceived counterpart ghosting in the process of stereopsis.

Advanced Impaired Vision Compensation, Marlene Hochrieser

Age-related macular degeneration (AMD) is a chronic and progressive eye condition involving advancing central vision loss. In this thesis, two video enhancement techniques that compensate for this loss of visual function are presented: filtering and scaling. Since face recognition is a serious problem with central vision loss, video images are processed particularly with respect to face recognition. The filtering method extends an existing adaptive contrast enhancement technique for the visually impaired, based on the assumption that different filter parameterizations are needed depending on the displayed content. Each video image is segmented into facial and non-facial regions, the filter is applied to both regions separately using parameters appropriate to the particular content, and the results are finally blended seamlessly. The scaling technique temporarily magnifies facial regions per scene in order to facilitate better recognition of faces and their expressions. Filtering and scaling can be combined on demand. Both methods were evaluated in a user study in cooperation with the Department of Ophthalmology and Optometry of the Medical University of Vienna.


Display Pixel Caching, Clemens Birklbauer

This thesis presents a new mode for viewing videos on television sets whose aspect ratios or resolutions differ from those of the videos. The new mode, named display pixel caching (DPC), fills the empty borders caused by these format differences with spatially and temporally consistent information while preserving the original video format. To do this, DPC merges the motion information of many subsequent frames to generate high-resolution panoramas in a consistent manner. In contrast to stretching or scaling individual frames, as in other video modes, the original video frames are not modified but extended, showing more content than is possible with a single rescaled frame. Compared to state-of-the-art video mosaicing approaches, DPC achieves real-time rates for high-resolution video content while processing more complex motion (such as object or parallax motion) fully automatically. The achieved results are compared to related video modes in a user evaluation.


Fast and Reliable CAMShift Tracking, David Exner

Reliable and fast object tracking is a key challenge in computer vision. In general, however, tracking approaches deal with only one of numerous specific problems, such as maintaining stable tracks under changing lighting conditions, working on noisy images, handling partial and full occlusions, coping with objects that have complex or changing appearance, distinguishing similar objects, and re-detecting lost targets. The work at hand presents a comprehensive tracking approach that is capable of tracking multiple objects at the same time, using standard hardware and common webcams. The developed tracker is based on the CAMShift algorithm but considerably extends it. Besides resolving inter-object confusion by constantly checking each object's identity, the system provides a stable recovery process, even in the presence of similar-looking objects. Additionally, it makes it possible to model objects with complex or even changing appearance by allowing any number of histograms per target. Furthermore, as it incorporates no constraining motion model, it is able to track objects that undergo arbitrary movements. The tracker was realised by exploiting the processing power of highly parallelised graphics hardware and effortlessly tracks multiple objects at real-time frame rates.
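The mean-shift step at the heart of CAMShift iteratively moves a search window to the centroid of a back-projection weight map (where each pixel's weight is how well its colour matches the target histogram). A bare sketch of that step, with a hand-made toy weight map standing in for a real back-projection (CAMShift proper also adapts the window size from the moments, which is omitted here):

```python
def mean_shift(weights, cx, cy, radius, iters=10):
    """Move a square window to the centroid of `weights` until it converges.

    `weights[y][x]` plays the role of the histogram back-projection of the
    target's colour model (toy values in this sketch).
    """
    h, w = len(weights), len(weights[0])
    for _ in range(iters):
        m00 = m10 = m01 = 0.0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                wgt = weights[y][x]
                m00 += wgt; m10 += wgt * x; m01 += wgt * y
        if m00 == 0:
            break
        nx, ny = round(m10 / m00), round(m01 / m00)
        if (nx, ny) == (cx, cy):
            break                                  # converged
        cx, cy = nx, ny
    return cx, cy

# A blob of weight around (6, 2) on an 8x8 map; start the window at (2, 5).
wmap = [[0.0] * 8 for _ in range(8)]
for y, x in [(1, 5), (1, 6), (2, 5), (2, 6), (2, 7), (3, 6)]:
    wmap[y][x] = 1.0
pos = mean_shift(wmap, cx=2, cy=5, radius=3)
```

The extensions in the thesis — identity checks, recovery, and multiple histograms per target — sit on top of this basic iteration.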


Enhancement of Perceived Contrast in Projected Light Microscopy, Daniel Klöck

Recently, a new illumination technique for optical microscopy, referred to as projected light microscopy, has been presented. This technique uses projected (high-frequency) light to illuminate the samples instead of uniform (low-frequency) light. Projected light microscopy makes it possible to capture a specimen under white illumination and to project the recorded image back, which results in a double modulation limited only by the dynamic range of the camera. The aim of this study is to improve the visual perception of the specimen even further by projecting transformed versions of the captured image, for example contrast-enhanced variations. Various techniques were considered and evaluated for the present thesis according to properties of the human visual system. The techniques that achieved the best results were additionally compared in a user study. This thesis also introduces a new contrast enhancement technique for digital image processing, based on pixel adjacencies.


Stray-Light Reduction through Optimized Illumination Patterns in Microscopy, Daniel Kurz

In light microscopy, stray light reduces contrast and consequently the visibility of low-contrast details on the specimen. Using a modified light microscope that works with light spatially modulated by a video projector, arbitrary projection patterns can be generated on the specimen. The registered back-projection of a camera image of the specimen is one way to increase contrast. Since the light is modulated first in the projector and then a second time on the specimen, this is referred to as double modulation. This procedure, however, does not take stray light into account. This master's thesis deals with the optimization of projection patterns in order to reduce or compensate for stray light on the specimen. A variety of techniques were developed that either measure the local light transport of the specimen or separate direct and global illumination components. Faster results are obtained with techniques based on a single camera image of the specimen, as well as with interactive techniques that require no knowledge of the specimen. This work shows which types of specimens the various methods are particularly well suited for, and its results demonstrate the positive effect of the developed techniques: perceived contrast is increased, and inhomogeneities on the specimen caused by stray light can be compensated.

Sub-Object Detection on Mobile Devices Using Spatial Relationships, Benjamin Brombach

Mobile phones have become constant companions and, beyond making calls, offer a steadily growing range of functions. They allow users to retrieve information anytime and anywhere: about current events, points of interest nearby, or the answer to a specific question. The digital museum guide PhoneGuide identifies exhibits via an image recognition method and offers useful multimedia content about the desired objects. The required software runs directly on the visitors' mobile phones and is supported by sensors distributed throughout the museum. Recognition is continuously improved through an adaptive learning procedure and the evaluation of information collected during use. Since exhibits are often presented in display cases or behind other barriers for their protection, it is not always possible to select individual objects with the phone's camera. This diploma thesis presents an extension that, with little additional effort, can detect multiple objects (sub-objects) in a captured image and thus enables the selection of individual sub-objects. In contrast to similar approaches, recognition takes place exclusively on the mobile phone, without any connection to a server. In a one-time preprocessing step, the spatial relationships between the sub-objects are determined. These can not only be used to considerably speed up detection, but also enable the correct discrimination of similar sub-objects and the recognition of partially or fully occluded exhibits.


Classification Methods Using an Adaptive Data Set for Mobile Image Recognition, Erich Bruns

The cameras integrated into today's mobile phones make it possible to employ efficient image recognition methods to extend a phone's functionality. In 2003, the PhoneGuide project was started with the goal of developing a digital museum guide for mobile devices. Based on global color features and a neural network classifier, exhibits can be identified from a photograph, allowing multimedia content about them to be presented to museum visitors. The subject of this thesis is a method that enables recognition independently of the viewer's position. To this end, the captured object data is preprocessed on a server so that it satisfies the criteria of two different classifier types. In addition, adaptive methods are presented that, using a client-server architecture, make the system, and thus the recognition, more robust over time. The system was evaluated in a user study.

Imperceptible Hybrid Calibration for Projector-Camera Systems, Stefanie Zollmann

To correct the geometric and radiometric distortions that arise when projecting onto arbitrary, geometrically complex, textured surfaces, the correspondences between viewer and projector as well as the texture of the surface must be determined. The goal of this diploma thesis is to develop suitable methods that make this calibration process imperceptible to the viewer, and thus enable continuous calibration of the system. This imperceptible calibration is achieved by embedding the calibration patterns into the actual projected content: due to the inertia of the human eye, the viewer cannot perceive them, while a synchronized camera can extract the embedded information.


Optical Data Transfer from Public Displays, Tobias Langlotz

The use of two-dimensional barcodes to encode and decode digital information has become widespread in recent years. Such barcodes are used, for example, on business cards, to label objects, and to link to content on the Internet. When camera-equipped mobile phones are used as scanners, however, their capacity is severely limited by image quality and resolution. The subject of this work is a new method for optical data transfer between displays and mobile phones based on unsynchronized 4D barcodes, under the assumption that no electromagnetic connection between the devices can exist. For data transfer, time-multiplexed, colored 2D barcodes are displayed on a screen and recorded with a camera integrated into a mobile phone. The technique presented in this work maximizes data throughput and the robustness of barcode recognition even though there is no direct synchronization between the devices. Although the data rate is lower than that of electromagnetic techniques, we see applications in cases where no direct connection between the devices is possible. The 4D barcodes described in this work can be embedded into web pages, film sequences, or advertisements, and transfer more information than is possible with a single 2D or 3D barcode.
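The basic encoding idea — adding color and time as extra barcode dimensions — can be sketched as packing a bitstream into colored cells shown frame by frame. The cell format below (2 bits per RGB channel, a fixed number of cells per frame) is a hypothetical simplification; the actual work additionally deals with camera capture, synchronization, and error robustness:

```python
LEVELS = (0, 85, 170, 255)   # four distinguishable intensity levels per channel

def encode(data, cells_per_frame=2):
    """Encode bytes as time-multiplexed colour cells, 6 bits per cell."""
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 6)                 # pad to whole cells
    cells = [tuple(LEVELS[int(bits[i + j:i + j + 2], 2)] for j in (0, 2, 4))
             for i in range(0, len(bits), 6)]
    # group cells into frames displayed one after another (the time axis)
    return [cells[i:i + cells_per_frame]
            for i in range(0, len(cells), cells_per_frame)]

def decode(frames, n_bytes):
    """Invert the mapping: read 2 bits per channel back out of each cell."""
    bits = "".join(f"{LEVELS.index(ch):02b}"
                   for frame in frames for cell in frame for ch in cell)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, 8 * n_bytes, 8))

frames = encode(b"Hi")
recovered = decode(frames, 2)
```

Throughput then scales with cells per frame, levels per channel, and display frame rate, which is why the unsynchronized capture is the hard part in practice.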

Radiometric Compensation of Global Illumination Effects with Projector-Camera Systems, Gordon Wetzstein

Projector-based displays have evolved tremendously in the last decade. Reduced costs and increasing capabilities have led to widespread use for home entertainment and scientific visualization. The rapid development is continuing: techniques that allow seamless projection onto complex everyday environments, such as textured walls, window curtains, or bookshelves, have recently been proposed. Although cameras enable a completely automatic calibration of such systems, all previously described techniques rely on a precise mapping between projector and camera pixels. Global illumination effects such as reflections, refractions, scattering, and dispersion are completely ignored, since only direct illumination is taken into account. We propose a novel method that applies the light transport matrix to perform an image-based radiometric compensation accounting for all possible lighting effects. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear equation system that can be solved separately for each cluster. For interactive compensation rates, this model is adapted to enable an efficient implementation on programmable graphics hardware. Applying the light transport matrix's pseudo-inverse makes it possible to separate the compensation into a computationally expensive preprocessing step (computing the pseudo-inverse) and an online matrix-vector multiplication. The generalized mathematical foundation for radiometric compensation with projector-camera systems is validated with several experiments. We show that it is possible to project corrected imagery onto complex surfaces such as an inter-reflecting statuette and glass. The overall sharpness of defocused projections is increased as well. Using the proposed optimization for GPUs, real-time frame rates are achieved.
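The pseudo-inverse split described above can be shown in miniature: with a light transport matrix T mapping projector values to camera values, solving T p = d once per frame reduces to a matrix-vector product after T⁺ is precomputed. Matrix sizes and values here are toy assumptions (real systems cluster the matrix, as the abstract notes):

```python
import numpy as np

rng = np.random.default_rng(1)
# Mostly direct light on the diagonal plus weak global effects off-diagonal.
T = np.eye(6) * 0.8 + rng.random((6, 6)) * 0.05

T_pinv = np.linalg.pinv(T)       # expensive: done once in preprocessing
d = np.linspace(0.1, 0.9, 6)     # desired camera image for this frame
p = T_pinv @ d                   # cheap: per-frame compensation
```

Per frame only the final multiplication runs, which is what makes a GPU implementation at real-time rates feasible (subject to clamping p to the projector's valid intensity range, which this sketch ignores).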


A Framework for Real-time 3D Reconstruction by Space Carving using Graphics Hardware, Christian Nitschke

Reconstruction of real-world scenes from a set of multiple images is a topic in computer vision and 3D computer graphics with many interesting applications. A powerful algorithm for shape reconstruction from arbitrary viewpoints exists, called Space Carving. However, it is computationally expensive and hence cannot be used in applications in the fields of 3D video or CSCW, or for interactive 3D model creation. Attempts have been made to achieve real-time frame rates using PC cluster systems. While these provide enough performance, they are also expensive and less flexible. Approaches that use GPU hardware acceleration on single workstations achieve interactive frame rates for novel-view synthesis, but do not provide an explicit volumetric representation of the whole scene. The proposed approach presents the efforts in developing a GPU-accelerated framework for obtaining the volumetric photo hull of a dynamic 3D scene as seen from multiple calibrated cameras. High performance is achieved by first employing a shape-from-silhouette technique to obtain a tight initial volume for Space Carving. Several speed-up techniques are also presented to increase efficiency. Since the entire processing is done on a single PC, the framework can be applied to mobile setups, enabling a wide range of further applications. The approach is explained using programmable vertex and fragment processors of current hardware and compared to highly optimized CPU implementations. It is shown that the new approach can outperform the latter by more than one order of magnitude.
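The shape-from-silhouette initialization can be sketched on a toy 2D grid with orthographic views: a cell survives only if it projects into the object's silhouette in every view. This is a deliberately minimal assumption-laden sketch (real visual hulls use 3D voxels and perspective cameras, and Space Carving then removes photo-inconsistent voxels from this hull):

```python
def carve_visual_hull(nx, ny, silhouettes):
    """Shape-from-silhouette on a 2D grid of nx * ny cells.

    `silhouettes` maps a viewing axis ('x' or 'y') to the set of
    coordinates covered by the object in an orthographic view along it.
    """
    hull = set()
    for x in range(nx):
        for y in range(ny):
            # keep a cell only if it falls inside every silhouette
            if x in silhouettes["y"] and y in silhouettes["x"]:
                hull.add((x, y))
    return hull

# Object covers x in {1, 2} seen along the y-axis and y in {2} seen along x.
hull = carve_visual_hull(4, 4, {"y": {1, 2}, "x": {2}})
```

Starting the expensive photo-consistency test from this tight initial volume, rather than from the full grid, is what yields the reported speed-up.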

Adaptive Radiometrically Compensated Projection in Real Time, Anselm Grundhöfer

Projectors make it possible to control light color and intensity with pixel precision. In combination with a camera, an image can be projected onto a colored, uneven surface in such a way that it appears to the viewer as if it were displayed on a flat, white screen. To this end, the image content is radiometrically compensated: more light is projected onto dark surface points and less onto bright ones in order to neutralize the surface's varying reflectance properties. Depending on the image content, the limited range of projector intensities can cause visible artifacts. This work extends the technique by adapting the projection to the image content so that the image is displayed at maximum brightness without such artifacts. The method generates the optimized image in real time and is thus able to project animations and interactive content without errors.


Focus Estimation for Multifocal Projection, Andreas Emmerling

Projectors can produce images far larger than the device itself, which makes them particularly attractive for mobile applications and home use. With the SmartProjector, a method was already presented that mitigates a major obstacle to the use of projectors: it enables projection onto geometrically and radiometrically complex surfaces, eliminating the need for a white projection screen. Within the scope of this work, a per-pixel focus estimation for projectors was additionally integrated into the SmartProjector. The focus estimation is independent of the complexity of the projection surface and of the positions of the projectors relative to each other and to the surface. The estimated focus values of multiple projectors can be compared per pixel, making it possible to determine which projector projects more sharply onto a given part of the surface than all others. This comparison is used for multifocal projection: each projector then contributes only those regions of the overall image that it displays more sharply than the others. As a result, a projection that is sharp everywhere is achieved even on geometrically complex surfaces. Defocused images, another obstacle to the use of projectors, can thus be avoided.


MSc Practicums

A Game of Shadows, Philipp Wintersberger

Gesture- and movement-based human-computer interaction devices have found their way into a broad range of applications over the last years. Although many different devices and techniques show great results in a wide range of tasks, many of them fail in certain situations. This and other reasons indicate the need for new devices that exceed these limits and cover more ways of interaction.
The Institute of Computer Graphics at the Johannes Kepler University Linz recently introduced a new technique by presenting a flexible, transparent, and scalable image sensor (LumiConSense). In this paper we present the first software application to demonstrate the usage and the potential of this new technology: a computer game that is controlled by shadows being cast onto the transparent surface of the interface.


Interactive Workflow Visualization for Biomedical Processing Pipelines, Stefan Luger

In the field of biomedical research, domain experts have to manage the complexity of the heterogeneous and large data processed within analysis pipelines. Their goal is to gain insights through the (intermediate) results conveyed by the workflows representing these pipelines. When analyzing biomedical data (such as blood samples taken from patients), the information is often stored in raw text formats and too big to be handled directly. Scientific workflows consist of tools, each of which defines its own experiment execution parameters. Because of the vast amount of data being processed, simply labeling the tools of a workflow (nodes) and the I/O file streams connecting them (links) would produce too much textual information. The Refinery Platform (Refinery), developed by the Park Lab in collaboration with the Hide Lab at Harvard, aims to collect, process, and most importantly visualize biomedical workflows. Its former visualization was based on a statically generated image produced with Python, which no longer fulfills its initial purpose. In this report, we present a workflow visualization based on the Data-Driven Documents (D3.js) JavaScript library that provides levels of detail, encodes multidimensional information through a compact glyph design, and provides the interaction features necessary to satisfy the needs of domain experts working with biomedical data. The analysts' requirements are formalized as user tasks, which are then examined with respect to visual encoding and design choices as well as implementation and limitations. The workflow visualization is integrated into Refinery and showcased using complex workflows (provided by the Galaxy Project). Cancer researchers who work with processing pipelines on a regular basis will benefit from the visualization presented.

Light Field Denoising and Upsampling Using Variations of the 4D Bilateral Filter, Mariana Barakchieva

This paper presents an effective and uncomplicated method for denoising and upsampling light fields (LFs). Denoising a light-field scene is an important problem, since sensor noise often degrades image quality, especially in low-light conditions. The denoised light field can then be used for further processing, e.g., depth reconstruction and super-resolution. Denoising an LF can also be used to produce a clean single view with a better signal-to-noise ratio (SNR) than any single-image denoising method. Given the multi-dimensional structure of the LF, it is also important to have a quick and efficient algorithm for downsampling and upsampling it. The method proposed here is based on the four-dimensional bilateral filter. It solves both problems quickly and outperforms other related algorithms.
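The bilateral weighting at the core of the method is easiest to see in 1D: each sample is replaced by an average of its neighbours weighted by both spatial distance and value difference, so noise is smoothed while edges survive. The report's filter applies the same weighting across all four light-field dimensions; the 1D version and its parameters below are only a sketch:

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=0.3, radius=2):
    """Edge-preserving smoothing of a 1D signal with a bilateral filter."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # weight = spatial Gaussian * radiometric (range) Gaussian
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy_step = [0.0, 0.05, 0.0, 1.0, 0.95, 1.0]   # a step edge with noise
smoothed = bilateral_1d(noisy_step)
```

Note how the range term keeps samples on opposite sides of the step from mixing, which is exactly what a plain Gaussian blur would fail to do.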

Focus Tracking for Cinematography, Aurel Wildfellner

Cinematographers primarily use manual control for focusing their cameras, because existing autofocus techniques used in photography cannot be directly applied to video or motion pictures and do not provide sufficient artistic control. Adjusting focus therefore remains a key challenge, which limits the possibilities of executing certain shots. For instance, the amount of depth of field usable in shots with a moving camera or subject is heavily influenced by how precisely focus can be controlled. This work presents a simple method to overcome some of these challenges by tracking the focus with off-the-shelf sensory equipment and state-of-the-art 3D point-cloud processing techniques. The method integrates well with the current workflow of camera operators and their first assistants and even gives them more flexibility than a manually controlled follow focus. To evaluate its feasibility, a fully functional prototype was built and tested with professional camera operators.

Compressive Sensing for Light Fields, Ingomar Wesp

Capturing the light transport for a given scene is known to be a time-consuming process. While there are adaptive approaches that can speed up acquisition considerably under favorable circumstances, they require real-time computation at the time of acquisition and thereby complicate the capturing setup. The fairly young field of compressive sensing paves the way for a new and entirely non-adaptive approach that recovers an approximation of the light transport from a small set of fixed measurements. The entire recovery is performed independently of the actual acquisition in a post-processing step, effectively trading acquisition time for processing time. This work outlines the fundamentals of compressive sensing as well as two slightly different ways of applying the theory to the task of capturing the light transport of a scene. In addition to comparing these two known approaches in terms of their advantages and disadvantages in real-world setups, it also shows how easily they can be extended from the common 4D case to a full 8D reflectance field. It closes by conceptually comparing the compressive sensing approach to more traditional adaptive capturing schemes.
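The recovery step in compressive sensing finds a sparse signal consistent with a small set of linear measurements y = A x. As an illustration only (the thesis does not specify a solver here), this is a minimal sketch of orthogonal matching pursuit, one of the standard greedy recovery algorithms:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~= A @ x.

    Greedily picks the column of A most correlated with the residual,
    then re-fits all selected coefficients by least squares.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```

In the light transport setting, the columns of A would correspond to illumination patterns and the sparsity would hold in a suitable basis (e.g. wavelets) rather than in the pixel basis; that change of basis is omitted here for brevity.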

Light Field Deconvolution, Darko Lukic

The main idea behind this project is to implement an image-based renderer that works with light fields and to compute focal stacks with it. In addition, the focal stack has to be deconvolved. As shown by Ng, rendering an image at a certain focal depth can be done by applying the so-called Fourier projection-slice theorem. Beyond the project itself, it is also of interest whether we can deconvolve a high-resolution image captured with an ordinary camera, since light fields usually have a much lower spatial resolution than images captured with ordinary cameras. What a light field is, how we can render focal stacks with it, which approach seems most promising for deconvolving a focal stack, and whether we can deconvolve a high-resolution image with a low-resolution light field will be discussed in this work.
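Ng's Fourier slice refocusing is mathematically equivalent to shift-and-add refocusing in the spatial domain: each view is shifted in proportion to its angular offset and the shifted views are averaged. The following is a hypothetical integer-shift sketch of that equivalent formulation (not the project's renderer), for a light field lf[u, v, y, x]:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocus of light field lf[u, v, y, x].

    alpha controls the refocus depth: each view is shifted proportionally
    to its offset from the central view, then all views are averaged.
    Uses integer shifts via np.roll for simplicity (real code interpolates).
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            acc += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)
```

Sweeping alpha over a range of values produces the focal stack; the Fourier slice method computes the same images by extracting 2D slices of the 4D spectrum, which is faster when many slices are needed.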

BSc Theses

Virtual Reality GPS Route Visualization for Mobile Devices, Simon Stix

While virtual reality is mainly marketed for gaming, it can be used for a variety of new applications. With the rise of consumer-grade VR hardware, these can now reach a broad audience. We propose a VR route and flight-path viewer that renders a 3D representation of holiday destinations, hiking routes, or flight paths. It supports multiple control and movement schemes and targets a wide variety of devices, including mobile VR headsets.

Airplane Heading Correction with Wind Prediction, Simon Woisetschläger

This paper presents a new method for obtaining the current wind conditions at a given location, which is important for pilots of gliders and small airplanes because of the resulting drift from the planned flight course. Instead of current methods, in which certain flight maneuvers have to be performed to obtain the wind direction, a weather forecast system is used, which can furthermore provide even more information than the wind conditions alone. Another important part of this project was to incorporate a positioning system in order to extract the needed data from the forecast at the current position. The project was implemented as an Android application, since modern smartphones are all-rounders and provide everything this project requires: portability, a positioning system, and high computational power. The application visualizes the heading direction of a small aircraft or glider, calculated from the received tracking data and the wind data obtained from the forecast. Furthermore, pilot-specific information that can additionally be extracted from the weather prediction is also incorporated.
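The heading correction itself follows from the classical wind triangle: the wind correction angle satisfies sin(WCA) = (wind speed / true airspeed) * sin(wind angle). A minimal sketch of that computation (function name and argument conventions are assumptions, not taken from the thesis):

```python
import math

def heading_correction(course_deg, tas, wind_from_deg, wind_speed):
    """Heading to fly so the ground track equals the desired course.

    course_deg:    desired track over ground, degrees.
    tas:           true airspeed (same unit as wind_speed).
    wind_from_deg: direction the wind blows FROM (aviation convention).
    Uses the wind-triangle relation sin(WCA) = (W / TAS) * sin(wind angle).
    """
    wta = math.radians(wind_from_deg - course_deg)  # wind angle vs. course
    wca = math.asin(wind_speed * math.sin(wta) / tas)
    return (course_deg + math.degrees(wca)) % 360.0
```

For example, with a 20 kt crosswind from the right at 100 kt true airspeed, the pilot must crab about 11.5 degrees into the wind to hold the course.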

Light-Field Viewer for Android, Manuel Kollegger

A new revolution in the field of digital photography is taking place: light fields are becoming a practical improvement over traditional images. As the smartphone is indispensable in today's society, a solution to capture and display light fields on a mobile device is the next logical step. Some guidance systems for recording light fields with a handheld phone already exist. However, there are no convenient applications to display those images right after they have been recorded. This work presents a light-field viewer for Android phones that enables variable focus and aperture after the image has been taken. It can display both conventional light fields recorded with special hardware and light fields captured with the mobile phone itself. The user can change the focus, the perspective, and the aperture at runtime.

Rendering light fields in real time in a graphics-memory-constrained environment is the biggest challenge of this project. This work describes the approach, the rendering, and the performance of the Android Light-Field Viewer.
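Variable synthetic aperture can be understood as choosing how many views contribute to the output: averaging only views near the central one simulates a small aperture, while averaging all views simulates a wide one. A hypothetical sketch of this idea (not the app's GPU implementation, which must also refocus and interpolate):

```python
import numpy as np

def synth_aperture(lf, radius):
    """Average only the views within `radius` of the central view.

    lf: light field lf[u, v, y, x]; larger radius = wider synthetic aperture
    (shallower depth of field), radius 0 = central pinhole view.
    """
    U, V = lf.shape[:2]
    cu, cv = (U - 1) / 2, (V - 1) / 2
    acc = np.zeros(lf.shape[2:])
    n = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                acc += lf[u, v]
                n += 1
    return acc / n
```

Combining this view selection with per-view shifts yields the runtime focus and aperture controls the viewer exposes.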

Light Field Global Illumination, Pascal Goldschmid

The information stored in a captured light field can be used to illuminate and render a virtual scene. Compared to primitive light sources in rendering algorithms, light fields can encapsulate any complex form of light or shadow, whilst still being able to correctly light limited dynamic scenes.

Traditionally, 2-dimensional textures mapped onto an all-encompassing object are used to represent the background or environment lighting of a scene. For this thesis, a light-field-based global illumination renderer was implemented, allowing a dynamic test scene to be lit by either a real-world or an artificially created 4-dimensional light field, with two supported parametrisations (two-plane and cylindrical). Multiple importance sampling for two-plane parametrised light fields was implemented to increase rendering speed and the rate of convergence, and different light field rendering parameters were tested. Furthermore, hardware requirements and constraints were evaluated to determine the current usability and practicality of such a lighting algorithm. Testing shows that current hardware is not yet ready to comfortably use light fields for global illumination due to the enormous amount of data that has to be handled.
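Multiple importance sampling combines estimates from several sampling strategies, weighting each sample with the balance heuristic w_a(x) = p_a(x) / (p_a(x) + p_b(x)). This generic one-sample-per-strategy sketch illustrates the estimator itself; the thesis applies the idea to light-field and BSDF sampling rather than to the 1D toy integral used here:

```python
import math
import random

def mis_estimate(f, pdf_a, sample_a, pdf_b, sample_b, n):
    """Balance-heuristic MIS estimate of the integral of f.

    Draws one sample from each strategy per iteration; each contribution
    f(x)/p(x) is weighted by that strategy's share of the total density.
    """
    total = 0.0
    for _ in range(n):
        xa = sample_a()
        wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))
        total += wa * f(xa) / pdf_a(xa)
        xb = sample_b()
        wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
        total += wb * f(xb) / pdf_b(xb)
    return total / n
```

The estimator stays unbiased for any weights that sum to one per point, but the balance heuristic keeps the variance low even when one strategy matches the integrand poorly, which is exactly the situation with sharply structured light-field illumination.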

HDR Light Field, Benjamin Pichler

The two main areas of high dynamic range imaging and light field photography were combined to capture "HDR Light Fields". A planar array of 49 ordinary, inexpensive digital cameras was constructed and accurately calibrated to obtain each camera's extrinsic and intrinsic parameters. With this knowledge, arbitrary views with different synthetic focus and aperture settings could be rendered from a captured four-dimensional light field. Temporal or spatial distribution of multiple different exposures was then used to retrieve light fields with higher dynamic range. The impact of point-spread functions on image deformations in defocused regions was analyzed, and several applied distribution patterns were compared against each other. Comparative results based on human visual perception indicate that more exposures in a pattern noticeably increase image errors. Several test scenes further indicated that uniformly distributed patterns, especially those with periodic repetitions, seem to be the more robust choice.
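The HDR side of the pipeline merges differently exposed images into one radiance estimate, down-weighting under- and over-exposed pixels. A minimal sketch of the standard weighted merge, assuming already linearized pixel values in [0, 1] (the hat-shaped weight and function name are illustrative choices, not the thesis's exact scheme):

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge linear exposures (values in [0, 1]) into a radiance map.

    Each image is divided by its exposure time and blended with a
    hat-shaped weight that favors well-exposed mid-tone pixels.
    """
    def weight(z):
        return 1.0 - np.abs(2.0 * z - 1.0) + 1e-8  # epsilon avoids 0/0
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        w = weight(img)
        num += w * img / t   # per-pixel radiance estimate from this frame
        den += w
    return num / den
```

In the light-field setting, the "exposures" come either from successive shots (temporal distribution) or from different cameras in the array (spatial distribution), which is exactly the trade-off the thesis evaluates.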

Coded Aperture Projection, Max Große

Coded apertures are integrated into an off-the-shelf video projector to increase the depth of field in conjunction with inverse filtering. The regional defocus on the screen surface is measured, and the projected images are deconvolved with locally scaled aperture codes. Using a coded aperture mitigates the deconvolution problems that arise with a circular aperture. A technique is used that enables interactive frame rates of up to 16 fps at XVGA resolution. This method is extended with a dynamic aperture pattern that is rendered into a liquid crystal array at the projector's aperture plane. An optimal aperture pattern is calculated at interactive frame rates based on the image content and on the limitations of the human visual system. Such an adaptive coded aperture even surpasses the visual quality and depth of field achieved with a static coded aperture and is a step towards next-generation auto-iris projector optics.
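The inverse filtering step can be illustrated with a generic frequency-domain Wiener filter: the image is precompensated with the (regularized) inverse of the aperture's blur kernel before projection, so the optical defocus then reconstructs a sharp result. This sketch is a textbook Wiener inverse filter, not the paper's spatially varying implementation; the kernel is assumed to be shift-invariant and the same size as the image.

```python
import numpy as np

def wiener_precompensate(image, psf, k=1e-2):
    """Precompensate `image` for blur by `psf` via a Wiener inverse filter.

    psf: blur kernel centered in an array the same shape as `image`.
    k:   regularization term suppressing noise at frequencies where the
         kernel response |H| is small (the circular-aperture failure case).
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))        # kernel spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + k)         # regularized inverse
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

A broadband coded aperture keeps |H| away from zero across frequencies, which is precisely why it deconvolves so much better than a circular aperture, whose spectrum has deep nulls.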


Evaluation of Keying Techniques for the Augmented Studio (Offline), Man-Man Fu

This thesis focuses on the evaluation of keying techniques in order to investigate the potential of using projectors in virtual studios. The following conclusions can be drawn from the results: radiometrically compensated backgrounds are very important for successful keying, and the best matte is obtained at low ambient brightness in particular. In general, separating the foreground from the background is feasible with little loss of detail for an opaque object such as a hand; keying transparent or semi-transparent objects such as glass or hair is difficult. The drawback of chroma keying is the color restriction it places on foreground objects. Flash keying does not deliver bad results, but thresholds have to be set manually to achieve a good result, and since the backgrounds differ because of the markers used for camera tracking, the matte image exhibits more or less noise depending on the background. Flash keying is also very sensitive to black or similarly dark surfaces in the foreground. The flash-chroma-keying prototype is an extension of the flash-keying and chroma-keying prototypes; it uses both color information and temporal information. Unlike the other two prototypes, it allows an arbitrary color to be chosen for foreground objects. Motion blur causes problems for all three keying prototypes. Due to its visually better keying quality and the free choice of foreground color, flash-chroma keying stands out from the other methods. In addition, this thesis presented keying techniques that can be used with projectors in virtual studios.
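The chroma keying step evaluated above can be reduced to a per-pixel color-distance test against the key color. The sketch below is a hypothetical soft chroma matte in RGB for illustration only; production keyers work in chroma-oriented color spaces and additionally handle spill and motion blur, which are exactly the failure cases the thesis discusses.

```python
import numpy as np

def chroma_matte(img, key_rgb, tol=0.2, soft=0.3):
    """Soft alpha matte from per-pixel color distance to the key color.

    img:  float RGB image with values in [0, 1], shape (H, W, 3).
    Pixels within `tol` of the key are background (alpha 0); alpha ramps
    up to 1 over the `soft` transition band for semi-transparent edges.
    """
    d = np.linalg.norm(img - np.asarray(key_rgb, dtype=float), axis=-1)
    return np.clip((d - tol) / soft, 0.0, 1.0)  # 0 = background, 1 = fg
```

The flash-keying variant replaces the color distance with a temporal brightness difference between flashed and unflashed frames, and the combined flash-chroma prototype uses both cues.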

Laser Pointer Tracking in Real Environments, Daniel Kurz

Augmented reality is not limited to classical displays such as monitors or projection screens. With the help of calibrated projectors and radiometric compensation, for example, any real environment (such as a living room) becomes a potential display that can present interactive content. Such an interactive system requires an intuitive and fast input device; classical desktop-bound input devices such as mouse and keyboard are not suitable here. Laser pointer tracking has proven to be an intuitive and fast input method that allows the user to interact with a projection screen over a distance. This work presents a system that enables laser pointer tracking in real environments. The display, and thus the interaction surface, is no longer restricted to a planar screen; it can be arbitrarily complex and can surround the user. A hybrid camera system, consisting of a pan-tilt-zoom (PTZ) camera serving as a detail camera and a fisheye camera acting as a context camera, enables the localization and tracking of a laser dot within an interaction field of 360° × 180°. The position of the laser dot in three-dimensional world coordinates provides the basis for laser pointer interaction in real environments. Moreover, the system is able to acquire the surrounding scene: a laser module allows geometry information to be sampled, and together with a texture stitched from individual images of the detail camera, this yields a textured geometry model of the scene that can be viewed from arbitrary perspectives.
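At the heart of the tracking is locating the laser dot in each camera image. As a purely illustrative sketch (the thesis system must additionally handle the PTZ/fisheye camera pair and projected content), the dot can be found as the intensity-weighted centroid of pixels above a brightness threshold:

```python
import numpy as np

def laser_spot(gray, threshold):
    """Locate a laser dot as the weighted centroid of bright pixels.

    gray: 2D float grayscale image; returns (x, y) in pixel coordinates,
    or None if no pixel exceeds the threshold.
    """
    ys, xs = np.nonzero(gray > threshold)
    if len(xs) == 0:
        return None
    w = gray[ys, xs]                      # brighter pixels weigh more
    return (float((xs * w).sum() / w.sum()),
            float((ys * w).sum() / w.sum()))
```

Triangulating such 2D detections from the calibrated cameras yields the 3D world coordinates that drive the interaction.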


BSc Practical Courses

Volume Scattering Simulation for Optical Inverse Tone Mapping in Light Field Microscopy, Aurel Wildfellner

In this project, optical inverse tone mapping for light fields was examined, with a focus on applications in light field microscopy. The principle of optical inverse tone mapping in microscopy is to use modulated illumination to enhance image properties such as contrast, or to suppress unwanted highlights. A light field microscope, as described by Levoy et al., could also be extended by illuminating a specimen with a modulated light field. Rather than examining new techniques for performing optical inverse tone mapping with light fields, the goal of this project was to run simple, rough tests to gain a basic understanding of how this could work in principle; a fully physically correct simulation was therefore out of scope. The tests and experiments required rendering images of a scene illuminated by a modulated light source. Because of their small size, specimens in microscopy tend to be very translucent and scatter light strongly. This has to be taken into account, and rendering with surfaces or simple subsurface scattering techniques would not produce sufficiently accurate results. Therefore volume rendering was used, which computes the propagation of light in participating media. As the main tool, the free 3D software Blender was used for setting up scenes and rendering. For this, Blender's volume rendering was further enhanced, also with the goal of permanently integrating these changes and making them useful for applications other than optical inverse tone mapping experiments, such as animation or production rendering. The current results can only give a very rough idea of how optical inverse tone mapping could work with light fields, partly due to the highly approximate simulation; a better understanding would require a more physically correct simulation.
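The basic operation in volume rendering of participating media is marching along a view ray, attenuating by Beer-Lambert absorption and accumulating in-scattered or emitted light. A minimal front-to-back sketch of that integrator (a generic emission-absorption model, far simpler than Blender's renderer):

```python
import math

def march(sigma_t, emission, step):
    """Front-to-back emission-absorption ray marching.

    sigma_t:  extinction coefficients sampled along the ray.
    emission: emitted/in-scattered radiance at the same samples.
    step:     distance between samples.
    Returns (radiance toward the eye, remaining transmittance).
    """
    L = 0.0   # accumulated radiance
    T = 1.0   # transmittance from the eye to the current sample
    for s, e in zip(sigma_t, emission):
        a = 1.0 - math.exp(-s * step)  # sample opacity (Beer-Lambert)
        L += T * a * e
        T *= 1.0 - a
    return L, T
```

For a translucent, strongly scattering specimen, the emission term would itself depend on the modulated illumination light field, which is what makes these simulations so much more expensive than surface rendering.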