
Research

The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by emerging fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

2015

Active Guidance for Light-Field Photography on Smartphones


We present a novel approach for guided light-field photography using off-the-shelf smartphones. In contrast to previous work that requires the user to decide where next to position a mobile camera, we actively compute and visualize during runtime a recommendation for the next sampling position and orientation taking into account the current camera pose and required camera alignments. This supports efficient capture of various types of large-field-of-view light fields in just a matter of minutes and without specialized camera equipment. To further reduce the overall capture time, we describe an extension of our guidance algorithm to collaborative light-field photography by small groups of users.
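
As a rough illustration of the recommendation step, the following minimal Python sketch (not the authors' implementation; the planned grid of target poses, the simple position-plus-yaw pose representation, and all names are assumptions) picks the nearest uncaptured pose from a capture plan and reports the camera movement needed to reach it:

    import numpy as np

    def next_sampling_pose(current_pos, current_yaw, targets, captured):
        """Recommend the next target pose for light-field capture.

        current_pos : (3,) current camera position in metres
        current_yaw : current camera yaw in radians
        targets     : (N, 4) array of planned poses [x, y, z, yaw]
        captured    : boolean mask of length N, True for already-captured poses
        """
        remaining = np.flatnonzero(~captured)
        if remaining.size == 0:
            return None  # capture is complete

        # Choose the uncaptured target that requires the smallest camera movement.
        dists = np.linalg.norm(targets[remaining, :3] - current_pos, axis=1)
        idx = remaining[np.argmin(dists)]

        translation = targets[idx, :3] - current_pos
        rotation = targets[idx, 3] - current_yaw
        return idx, translation, rotation

    # Example: a 3x3 planar grid of poses, all facing the same direction (yaw = 0).
    xs, ys = np.meshgrid(np.linspace(0, 0.2, 3), np.linspace(0, 0.2, 3))
    targets = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9), np.zeros(9)])
    captured = np.zeros(9, dtype=bool)

    idx, move, turn = next_sampling_pose(np.array([0.05, 0.0, 0.0]), 0.0,
                                         targets, captured)
    print(f"next pose #{idx}: move by {move}, rotate by {turn:.2f} rad")

The actual system additionally visualizes the recommendation live on the smartphone and extends it to small groups of users capturing collaboratively.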

Birklbauer, C. and Bimber, O. Active Guidance for Light-Field Photography on Smartphones. Computers & Graphics, 53(B), 127–135, 2015.

Preprint (PDF, 13.7 MB)
Computers & Graphics Article (new window)

Multi-exposure color imaging with stacked thin-film luminescent concentrators


We present a fully transparent, scalable, and flexible color image sensor that consists of stacked thin-film luminescent concentrators (LCs). At each layer, it measures a Radon transform of the corresponding LC’s spectral responses. Color images are then reconstructed through inverse Radon transforms that are obtained using machine learning. A high sampling rate in Radon space allows encoding multiple exposures to cope with under- and overexposed cases in one recording. Thus, our sensor simultaneously measures multiple spectral responses in different LC layers and multiple exposures in different Radon coefficients per layer. We also show that machine learning enables adequate three-channel image reconstruction from the response of only two LC layers.
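
The measurement-and-reconstruction principle can be illustrated with a standard Radon-transform round trip. The sketch below is only an illustration using scikit-image on a simulated image; it uses classical filtered back-projection where the paper instead learns the inverse transform with machine learning:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    # Simulate what a single LC layer measures: line integrals of the focused
    # image over many directions (its Radon transform / sinogram).
    image = resize(shepp_logan_phantom(), (128, 128))
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=angles)

    # Reconstruct the image from the Radon-space measurements. The paper trains
    # a machine-learned inverse; filtered back-projection is the classical
    # stand-in used here.
    reconstruction = iradon(sinogram, theta=angles)

    error = np.sqrt(np.mean((reconstruction - image) ** 2))
    print(f"sinogram shape: {sinogram.shape}, RMS reconstruction error: {error:.4f}")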

Koppelhuber, A. and Bimber, O. Multi-exposure color imaging with stacked thin-film luminescent concentrators. Opt. Express, 23(26), 33713–33720, 2015.

Manuscript (PDF, 25.7 MB)

A classification sensor based on compressed optical Radon transform


We present a thin-film sensor that optically measures the Radon transform of an image focussed onto it. Measuring and classifying directly in Radon space, rather than in image space, is fast and yields robust and high classification rates. We explain how the number of integral measurements required for a given classification task can be reduced by several orders of magnitude. Our experiments achieve classification rates of 98%–99% for complex hand gesture and motion detection tasks with as few as 10 photosensors. Our findings have the potential to stimulate further research towards a new generation of application-oriented classification sensors for use in areas such as biometry, security, diagnostics, surface inspection, and human-computer interfaces.
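
A toy version of classifying directly in Radon space is sketched below under strong assumptions: scikit-learn's digits dataset stands in for the sensor data, a small fixed subset of Radon coefficients stands in for the few photosensors, and a plain logistic-regression classifier is used. It only illustrates the idea of training on a handful of integral measurements, not the paper's sensor or its results:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from skimage.transform import radon

    digits = load_digits()
    angles = np.linspace(0.0, 180.0, 6, endpoint=False)

    def compressed_radon_features(img, n_keep=10):
        # Line integrals of the image along a few directions and offsets.
        sino = radon(img.reshape(8, 8), theta=angles, circle=False)
        flat = sino.ravel()
        # Keep a fixed, small subset of coefficients (the "photosensors").
        keep = np.linspace(0, flat.size - 1, n_keep).astype(int)
        return flat[keep]

    X = np.array([compressed_radon_features(img) for img in digits.data])
    X_train, X_test, y_train, y_test = train_test_split(
        X, digits.target, test_size=0.3, random_state=0)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    print(f"accuracy from {X.shape[1]} Radon-space measurements: "
          f"{clf.score(X_test, y_test):.2f}")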

Koppelhuber, A. and Bimber, O. A classification sensor based on compressed optical Radon transform. Opt. Express, 23(7), 9397–9406, 2015.

Manuscript (PDF, 6.1 MB)

ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor


Multi-attribute time-series data plays a vital role in many different domains. An important task when making sense of such data is to provide users with an overview to identify items that show an interesting development over time. However, this is not well supported by existing visualization techniques. To address this issue, we present ThermalPlot, a visualization technique that summarizes complex combinations of multiple attributes over time using an item's position, the most salient visual variable. More precisely, the x-position in the ThermalPlot is based on a user-defined degree-of-interest (DoI) function that combines multiple attributes over time. The y-position is determined by the relative change in the DoI value (ΔDoI) within a user-specified time window. Animating this mapping via a moving time window gives rise to circular movements of items over time, as in thermal systems. To help the user identify important items that match user-defined temporal patterns and to increase the technique's scalability, we adapt the items' level of detail based on the DoI value. We demonstrate the effectiveness of our technique in a stock market usage scenario.
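
A minimal sketch of the position mapping, assuming a simple weighted-sum DoI function over pre-normalized attributes (the function and variable names are illustrative, not taken from the paper's implementation):

    import numpy as np

    def thermalplot_positions(attrs, weights, t, window):
        """Map one item to a ThermalPlot-style (x, y) position at time step t.

        attrs   : (T, A) array of the item's attribute values over time,
                  assumed already normalized to comparable ranges
        weights : (A,) user-defined weights of the degree-of-interest function
        t       : current time step index
        window  : length of the trailing time window (in steps)
        """
        doi = attrs @ weights                   # DoI value per time step
        x = doi[t]                              # x: current degree of interest
        y = doi[t] - doi[max(t - window, 0)]    # y: change of DoI in the window
        return x, y

    # Toy example: one stock with two attributes (normalized price, volume).
    rng = np.random.default_rng(0)
    attrs = rng.random((50, 2))
    weights = np.array([0.7, 0.3])

    for t in (10, 20, 30):
        x, y = thermalplot_positions(attrs, weights, t, window=5)
        print(f"t={t}: x={x:.2f}, y={y:+.2f}")

Animating this mapping with a moving time window is what produces the circular, thermal-system-like item movements described above.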

Stitz, H., Gratzl, S., Aigner, W., Streit, M., ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor, IEEE Conference on Information Visualization (poster), 2015.

Generalized Depth-of-Field Light-Field Rendering


Plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of classical imaging. We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be all-in-focus while the surrounding fore- and background is consistently rendered out-of-focus based on adjusted focal plane and aperture settings. Our approach generates correct occlusion boundaries, which is not possible with a simple image-blending technique. It does not rely on dense depth estimation and allows presenting complex scenes with new visual effects.
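
For context, the sketch below shows the classical shift-and-average (synthetic-aperture) refocusing that such depth-of-field effects build on. It is a simplified stand-in, not the paper's generalized method; the function names and the integer-shift approximation are assumptions:

    import numpy as np

    def refocus(light_field, disparity, aperture_radius):
        """Shift-and-average refocusing of a light field.

        light_field     : (U, V, H, W) grayscale sub-aperture images
        disparity       : per-view shift (pixels per unit of angular offset)
                          that selects the focal plane
        aperture_radius : how many angular neighbours of the centre view
                          contribute (controls depth-of-field blur)
        """
        U, V, H, W = light_field.shape
        cu, cv = U // 2, V // 2
        accum = np.zeros((H, W))
        count = 0
        for u in range(U):
            for v in range(V):
                if max(abs(u - cu), abs(v - cv)) > aperture_radius:
                    continue
                # Shift each view according to its angular offset, scaled by the
                # disparity of the desired focal plane, then accumulate.
                dy = int(round((u - cu) * disparity))
                dx = int(round((v - cv) * disparity))
                accum += np.roll(light_field[u, v], (-dy, -dx), axis=(0, 1))
                count += 1
        return accum / count

    # Toy 5x5 light field of random images, refocused at two different planes.
    lf = np.random.default_rng(1).random((5, 5, 64, 64))
    near = refocus(lf, disparity=1.0, aperture_radius=2)
    far = refocus(lf, disparity=-1.0, aperture_radius=2)
    print(near.shape, far.shape)

The paper's contribution is to keep arbitrarily shaped objects all-in-focus within such a rendering while producing correct occlusion boundaries, which the naive averaging above does not attempt.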

David Schedl, Clemens Birklbauer, and Oliver Bimber. Generalized Depth-of-Field Light-Field Rendering. IEEE International Conference on Computational Photography (ICCP), (Poster+Demo), 2015.

Poster (PDF, 30.5 MB)

Directional Super-Resolution by means of Coded Sampling and Guided Upsampling


We present a simple guided super-resolution technique for increasing directional resolution without reliance on depth estimation or image correspondences. Rather, it searches for best-matching multidimensional (4D or 3D) patches within the entire captured data set to compose new directional images that are consistent in both the spatial and the directional domains. We describe algorithms for guided upsampling, iterative guided upsampling, and sampling code estimation. Our experimental results reveal that the outcomes of existing light-field camera arrays and light-stage systems can be improved without additional hardware requirements or recording effort simply by realignment of cameras or light sources to change their sampling patterns.
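
A heavily simplified, hypothetical sketch of the patch-based composition idea: an intermediate directional image is assembled by copying, per block, the candidate patch that best matches a rough guide (here the average of two neighbouring views), with no depth estimation or correspondences. The actual method searches multidimensional patches over the entire captured data set; everything below is an illustrative assumption:

    import numpy as np

    def compose_from_patches(guide, candidates, patch=8):
        """Assemble an image by choosing, per block, the best-matching patch
        from a set of captured candidate images (no depth, no correspondences).

        guide      : (H, W) rough estimate of the new directional image
        candidates : list of (H, W) captured images to copy patches from
        """
        H, W = guide.shape
        out = np.zeros_like(guide)
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                g = guide[y:y + patch, x:x + patch]
                # Pick the candidate whose co-located patch matches the guide best.
                errs = [np.sum((c[y:y + patch, x:x + patch] - g) ** 2)
                        for c in candidates]
                best = candidates[int(np.argmin(errs))]
                out[y:y + patch, x:x + patch] = best[y:y + patch, x:x + patch]
        return out

    # Toy example: synthesize a view "between" two captured directional images.
    rng = np.random.default_rng(2)
    view_a, view_b = rng.random((2, 64, 64))
    guide = 0.5 * (view_a + view_b)          # naive directional interpolation
    new_view = compose_from_patches(guide, [view_a, view_b])
    print(new_view.shape)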

David Schedl, Clemens Birklbauer, and Oliver Bimber. Directional Super-Resolution by means of Coded Sampling and Guided Upsampling. IEEE International Conference on Computational Photography (ICCP), 2015.

CloudGazer: A Divide-and-Conquer Approach to Monitoring and Optimizing Cloud-Based Networks


With the rise of virtualization and cloud-based networks of various scales and degrees of complexity, new approaches to managing such infrastructures are required. In these networks, relationships among components can be of arbitrary cardinality (1:1, 1:n, n:m), making it challenging for administrators to investigate which components influence others. In this paper we present CloudGazer, a scalable visualization system that allows users to monitor and optimize cloud-based networks effectively to reduce energy consumption and to increase the quality of service. Instead of visualizing the overall network, we split the graph into semantic perspectives that provide a much simpler view of the network. CloudGazer is a multiple coordinated view system that visualizes either static or live status information about the components of a perspective while reintroducing lost inter-perspective relationships on demand using dynamically created inlays. We demonstrate the effectiveness of CloudGazer in two usage scenarios: The first is based on a real-world network of our domain partners where static performance parameters are used to find an optimal design. In the second scenario we use the VAST 2013 Challenge dataset to demonstrate how the system can be employed with live streaming data.
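
A small sketch of the "semantic perspectives" idea using networkx (an illustration under assumed node and attribute names, not CloudGazer itself, which is a multiple coordinated view system): nodes are grouped by a semantic layer attribute into per-perspective subgraphs, and the inter-perspective edges that such a split would hide can be looked up on demand:

    import networkx as nx

    # Toy cloud network: nodes carry a semantic "layer" attribute
    # (e.g. virtual machines, hosts, services).
    G = nx.Graph()
    G.add_nodes_from([("vm1", {"layer": "vm"}), ("vm2", {"layer": "vm"}),
                      ("host1", {"layer": "host"}), ("host2", {"layer": "host"}),
                      ("svc1", {"layer": "service"})])
    G.add_edges_from([("vm1", "host1"), ("vm2", "host1"), ("vm2", "host2"),
                      ("svc1", "vm1"), ("svc1", "vm2")])

    def perspective(graph, layer):
        """Subgraph containing only the nodes of one semantic perspective."""
        nodes = [n for n, d in graph.nodes(data=True) if d["layer"] == layer]
        return graph.subgraph(nodes)

    def inter_perspective_edges(graph, layer_a, layer_b):
        """Edges hidden when the perspectives are shown separately; these are
        the relationships a view could reintroduce on demand (as inlays)."""
        return [(u, v) for u, v in graph.edges()
                if {graph.nodes[u]["layer"], graph.nodes[v]["layer"]} == {layer_a, layer_b}]

    print(list(perspective(G, "vm").nodes()))
    print(inter_perspective_edges(G, "vm", "host"))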

Stitz, H., Gratzl, S., Krieger, M. and Streit, M., CloudGazer: A Divide-and-Conquer Approach to Monitoring and Optimizing Cloud-Based Networks, Proceedings of IEEE Pacific Visualization Symposium (PacificVis ’15), 175-182, 2015.

Manuscript (PDF, 1.2 MB)