
The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by emerging fields and technologies, such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.


TACO: Visualizing Changes in Tables Over Time

Multivariate tabular data is one of the most common data structures and is used in many different domains.
Over time, tables can undergo changes in both structure and content, resulting in multiple versions of the same table. A challenging task when working with such derived tables is to understand what exactly has changed between versions in terms of additions/deletions, reordering, merging/splitting, and content changes. For textual data, a variety of commonplace "diff" tools exist that support the task of investigating changes between revisions of a text. Although some comparison tools assist users in inspecting differences between multiple table instances, the resulting visualizations are often difficult to interpret or do not scale to large tables with thousands of rows and columns.
To address these challenges, we developed TACO, an interactive comparison tool that effectively visualizes the differences between multiple tables at various levels of detail. With TACO we show (1) the aggregated differences between multiple table versions over time, (2) the aggregated changes between two selected table versions, and (3) the detailed changes between the selected versions. To demonstrate the effectiveness of our approach, we show its application by means of two usage scenarios.
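
A cell-level table diff of the kind TACO aggregates can be sketched in a few lines. The representation below (column-oriented tables, simple per-column change counts) is purely illustrative and is not TACO's actual data model:

```python
# Illustrative sketch only (not the TACO implementation): classify structural
# and content changes between two versions of a column-oriented table.
from typing import Any

def diff_tables(old: dict[str, list[Any]], new: dict[str, list[Any]]) -> dict:
    """Count added/deleted columns and changed cells per shared column."""
    old_cols, new_cols = set(old), set(new)
    result = {
        "added_columns": sorted(new_cols - old_cols),
        "deleted_columns": sorted(old_cols - new_cols),
        "content_changes": {},  # column name -> number of differing cells
    }
    for col in old_cols & new_cols:
        a, b = old[col], new[col]
        n = min(len(a), len(b))
        changed = sum(1 for i in range(n) if a[i] != b[i])
        changed += abs(len(a) - len(b))  # surplus rows count as changes
        result["content_changes"][col] = changed
    return result

v1 = {"name": ["Ann", "Bob"], "score": [1, 2]}
v2 = {"name": ["Ann", "Bea"], "score": [1, 2], "rank": [2, 1]}
print(diff_tables(v1, v2))
```

Aggregating such per-column counts across many version pairs corresponds to the coarse overview level; drilling into individual cells corresponds to the detailed level.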

Niederer, C., Stitz, H., Hourieh, R., Grassinger, F., Aigner, W., Streit, M., TACO: Visualizing Changes in Tables Over Time, IEEE Transactions on Visualization and Computer Graphics (InfoVis'17), 23(1), 2017 (to appear).

Paper (PDF, 2.6 MB)
Project Website (new window)

Provenance-Based Visualization Retrieval

Storing interaction provenance generates a knowledge base with great potential for recalling previous results and guiding the user in future analyses. However, searching for and retrieving analysis states can become tedious unless the user creates extensive meta-information. In this work, we present an approach for the efficient retrieval of analysis states that are structured as provenance graphs of automatically recorded user interactions and visualizations. As a core component, we describe a visual interface for querying and exploring analysis states based on their similarity to a partial definition of the requested analysis state. Depending on the use case, this definition may be provided explicitly by the user or inferred from a reference state. We explain the definition by means of a Gapminder-inspired prototype and discuss our implementation for the effective retrieval of previous states.
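
As a toy illustration of similarity-based retrieval (the actual approach ranks full provenance graphs, not flat sets), each analysis state can be encoded as a set of attribute tokens and stored states ranked by Jaccard similarity to a partial query:

```python
# Hedged sketch: rank recorded analysis states by their similarity to a
# partially specified query state. States are simplified to token sets and
# scored with Jaccard similarity (an illustrative choice, not the paper's).

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def retrieve(states: dict[str, set], query: set, k: int = 3) -> list[str]:
    """Return the k stored state ids most similar to the partial query."""
    ranked = sorted(states, key=lambda sid: jaccard(states[sid], query),
                    reverse=True)
    return ranked[:k]

states = {
    "s1": {"chart:scatter", "x:gdp", "y:life_exp", "year:2007"},
    "s2": {"chart:bar", "x:country", "y:population"},
    "s3": {"chart:scatter", "x:gdp", "y:population", "year:1997"},
}
print(retrieve(states, {"chart:scatter", "x:gdp"}, k=2))
```

The partial query here plays the role of the explicit user definition; inferring it from a reference state would simply mean passing that state's token set as the query.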

Stitz, H., Gratzl, S., Piringer, H., Streit, M., Provenance-Based Visualization Retrieval, Poster at the IEEE Conference on Visual Analytics Science and Technology (VAST'17), Best Poster Award, 2017.

Poster Abstract (PDF, 330 KB)

Thin-film camera using luminescent concentrators and an optical Söller collimator

Thin-film camera with optical Söller collimator

This article reports our investigation of the potential of optical Söller collimators in combination with luminescent concentrators for lens-less, short-distance, and shape-independent thin-film imaging. We discuss optical imaging capabilities and limitations, and present first prototypes and results. Modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers).

Koppelhuber, A. and Bimber, O., A Thin-Film Camera using Luminescent Concentrators and an Optical Söller Collimator, Opt. Express 25(16), 18526-18536, 2017

Link to Optics Express (new window)
Manuscript (PDF, 29.0 MB)

Optimized Sampling for View Interpolation in Light Fields Using Local Dictionaries

We present an angular superresolution method for light fields captured with a sparse camera array. Our method uses local dictionaries extracted from a sampling mask for upsampling a sparse light field to a dense light field by applying compressed sensing reconstruction. We derive optimal sampling masks by minimizing the coherence for representative global dictionaries. The desired output perspectives and the number of available cameras can be arbitrarily specified. We show that our method yields qualitative improvements compared to previous techniques.
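
The coherence criterion that such a sampling-mask optimization minimizes is the standard mutual coherence of a dictionary: the largest normalized correlation between two distinct atoms. The snippet below computes this measure (the random dictionary is illustrative, not a learned one from the paper):

```python
# Sketch of the coherence measure used to compare candidate dictionaries:
# mutual coherence = max absolute normalized inner product between two
# distinct dictionary columns (atoms).
import numpy as np

def mutual_coherence(D: np.ndarray) -> float:
    """D: dictionary with atoms as columns; returns the max off-diagonal
    correlation of the normalized Gram matrix."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize each atom
    G = np.abs(Dn.T @ Dn)                # pairwise correlations
    np.fill_diagonal(G, 0.0)             # ignore self-correlation
    return float(G.max())

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))         # toy dictionary: 8 atoms in R^16
print(mutual_coherence(D))
```

Among candidate sampling masks, the one whose induced dictionary has the lowest mutual coherence is preferred for compressed-sensing reconstruction, since low coherence improves sparse-recovery guarantees.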

Schedl, D. C., Birklbauer, C., and Bimber, O., Optimized Sampling for View Interpolation in Light Fields Using Local Dictionaries, Computer Vision and Image Understanding, 2017 (to appear)

Link to Elsevier (new window)

Computational imaging, relighting and depth sensing using flexible thin-film sensors

We wrap a thin-film luminescent concentrator (LC) - a flexible and transparent plastic foil doped with fluorescent dye particles - around an object to obtain images of the object under varying synthetic lighting conditions and without lenses. These images can then be used for computational relighting and depth reconstruction. An LC is an efficient two-dimensional light guide that allows photons to be collected over a wide solid angle and through multiple overlapping integration areas simultaneously. We show that conventional photodetectors achieve a higher signal-to-noise ratio when equipped with an LC than in direct measurements. Efficient light guidance in combination with computational imaging approaches, such as those presented in this article, can lead to novel optical sensors that collect light in a structured way and within a wide solid angle rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors.
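
The relighting step builds on the linearity of light transport: an image under any synthetic illumination is a weighted sum of images captured under individual basis lights. A minimal sketch (array shapes and values are illustrative):

```python
# Hedged sketch of computational relighting: combine basis images linearly
# with per-light weights to synthesize a new lighting condition.
import numpy as np

def relight(basis_images: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """basis_images: (n_lights, H, W) captures, one per basis light;
    weights: (n_lights,) intensities of the synthetic illumination."""
    return np.tensordot(weights, basis_images, axes=1)

# Two toy 2x2 basis captures with uniform intensities 1.0 and 3.0.
basis = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
img = relight(basis, np.array([0.5, 0.5]))
print(img)  # every pixel = 0.5*1.0 + 0.5*3.0 = 2.0
```

The same linear model underlies depth reconstruction from the varying-illumination captures, where the weights are chosen to probe the scene from different directions.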

Koppelhuber, A. and Bimber, O., Computational Imaging, Relighting and Depth Sensing Using Flexible Thin-Film Sensors, Opt. Express 25(3), 2694-2702, 2017

Link to Optics Express (new window)
Manuscript (PDF, 60.2 MB)


Volumetric Light-Field Excitation

We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field.
First, to select target objects, a 4D imaging light field is captured.
A light field mask is then computed automatically for this selection to avoid illumination of the remaining areas.
With one-photon illumination, simultaneous generation of complex volumetric light patterns becomes possible.
As a full light field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and lighting durations are supported.
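
The masking step can be sketched with the 4D light field stored as a (u, v, s, t) ray array and the selection given as a boolean array of the same shape. This is a simplification: in the paper, the mask is computed automatically from the captured imaging light field rather than set by hand.

```python
# Simplified sketch: illuminate only rays that hit selected targets by
# multiplying the 4D illumination light field with a binary ray mask.
import numpy as np

def masked_illumination(light_field: np.ndarray,
                        selection: np.ndarray) -> np.ndarray:
    """light_field, selection: (u, v, s, t) arrays; selection is boolean."""
    return light_field * selection.astype(light_field.dtype)

lf = np.ones((2, 2, 3, 3))                 # uniform illumination light field
sel = np.zeros((2, 2, 3, 3), dtype=bool)   # nothing selected yet
sel[0, 0, 1, 1] = sel[1, 1, 0, 0] = True   # two target rays
out = masked_illumination(lf, sel)
print(out.sum())  # only the two selected rays carry light
```

Projecting the masked light field concentrates illumination at the selected volumetric positions simultaneously while the remaining areas stay dark.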

Schedl, D. C. and Bimber, O. Volumetric Light-Field Excitation. Nature Sci. Rep. 6, 29193; doi: 10.1038/srep29193, 2016

Link to Nature (new window)
Manuscript (PDF, 24.2 MB)

Generalized Depth-of-Field Light-Field Rendering

Typical light-field rendering uses a single focal plane to define the depth at which objects should appear sharp.
This emulates the behavior of classical cameras.
However, plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of conventional imaging.
We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be all in focus while the surrounding fore- and background is consistently rendered out of focus based on user-defined focal plane and aperture settings.
Our approach generates soft occlusion boundaries with a natural appearance, which is not possible with existing techniques.
It furthermore does not rely on dense depth estimation and thus allows presenting complex scenes with non-physical visual effects.
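
For context, conventional single-focal-plane light-field rendering is a shift-and-sum over all views; the generalized method effectively replaces the single slope below with per-region focus parameters. A minimal sketch of the conventional baseline (integer pixel shifts for simplicity):

```python
# Conventional synthetic-aperture refocusing: shift each view according to
# its camera offset and a focal-plane slope, then average all views.
import numpy as np

def refocus(views: np.ndarray,
            positions: list[tuple[int, int]],
            slope: float) -> np.ndarray:
    """views: (n, H, W) images; positions: per-view camera offsets (du, dv);
    slope selects the depth of the (single) focal plane."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (du, dv) in zip(views, positions):
        shift = (int(round(slope * du)), int(round(slope * dv)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)

rng = np.random.default_rng(1)
views = rng.random((3, 4, 4))
positions = [(-1, 0), (0, 0), (1, 0)]
focused = refocus(views, positions, slope=0.0)  # slope 0: plain average
```

Scene points on the chosen focal plane align under the shifts and stay sharp; everything else is averaged out of focus, which is exactly the aperture behavior the paper generalizes to arbitrarily shaped in-focus regions.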

Schedl, D. C., Birklbauer, C., and Bimber, O., Generalized Depth-of-Field Light-Field Rendering, International Conference on Computer Vision and Graphics (ICCVG), 2016 (to appear).

Non-Uniform Spatial Deformation of Light Fields by Locally Linear Transformations

Light-field cameras offer new imaging possibilities compared to conventional digital cameras. The additional angular domain of light fields, however, prohibits the direct application of frequently used image processing algorithms, such as warping, retargeting, or stitching. We present a general and efficient framework for non-uniform light-field warping that forms the basis for extending many of these image processing techniques to light fields. It propagates arbitrary spatial deformations defined in one light-field perspective consistently to all other perspectives by means of 4D patch matching instead of relying on explicit depth reconstruction. This allows the processing of light-field recordings of complex scenes with non-Lambertian properties such as transparency and refraction. We show application examples of our framework in panorama light-field imaging, light-field retargeting, and artistic manipulation of light fields.

Birklbauer, C., Schedl, D. C., and Bimber, O., Non-Uniform Spatial Deformation of Light Fields by Locally Linear Transformations, ACM Transactions on Graphics, 35(5), Article 156, 2016.

Preprint (PDF, 9.3 MB; © JKU ICG 2016. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Graphics.)
Download ACM Article (new window)

From Visual Exploration to Storytelling and Back Again

The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author 'Vistories', visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals.
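
The capture-then-author loop can be illustrated with a minimal provenance log. Class and method names here are invented for illustration and are not the CLUE API:

```python
# Hedged sketch of the CLUE idea: capture every exploration state, then
# author a "Vistory" from a hand-picked, annotated subset of those states.

class ProvenanceLog:
    def __init__(self):
        self.states = []  # full exploration history, in order

    def capture(self, description: str, payload: dict) -> None:
        """Record one exploration state (Capture)."""
        self.states.append({"id": len(self.states),
                            "description": description,
                            "payload": payload})

    def author_vistory(self, picks: list[int],
                       annotations: dict[int, str]) -> list[dict]:
        """Extract key steps and attach annotations (Label/Explain)."""
        return [{**self.states[i], "note": annotations.get(i, "")}
                for i in picks]

log = ProvenanceLog()
log.capture("load data", {"dataset": "gapminder"})
log.capture("filter year", {"year": 2007})
log.capture("select country", {"country": "Austria"})
story = log.author_vistory([0, 2], {2: "Austria stands out"})
print([s["description"] for s in story])
```

Because every story step keeps a reference to its full captured state, a viewer can jump from the narrative back into the live exploration, which is the "back again" half of the model.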

Gratzl, S., Lex, A., Gehlenborg, N., Cosgrove, N., Streit, M., From Visual Exploration to Storytelling and Back Again, Computer Graphics Forum (EuroVis 2016), 35(3), pp. 491-500, 2016.

Paper (PDF, 1.7 MB)
Project Website (new window)

AVOCADO: Visualization of Workflow-Derived Data Provenance for Reproducible Biomedical Research

A major challenge of data-driven biomedical research lies in the collection and representation of data provenance information to ensure reproducibility of findings. In order to communicate and reproduce multi-step analysis workflows executed on datasets that contain data for dozens or hundreds of samples, it is crucial to be able to visualize the provenance graph at different levels of aggregation. Most existing approaches are based on node-link diagrams, which do not scale to the complexity of typical data provenance graphs. In our proposed approach, we reduce the complexity of the graph using hierarchical and motif-based aggregation. Based on user actions and graph attributes, a modular degree-of-interest (DoI) function is applied to expand parts of the graph that are relevant to the user. This interest-driven adaptive provenance visualization approach allows users to review and communicate complex multi-step analyses, which can be based on hundreds of files that are processed by numerous workflows. We integrate our approach into an analysis platform that captures extensive data provenance information and demonstrate its effectiveness by means of a biomedical usage scenario.
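
A modular DoI function combines terms derived from user actions and graph attributes. The weighting and attributes below are hypothetical stand-ins, used only to illustrate interest-driven expansion of an aggregated graph:

```python
# Hedged sketch: expand only provenance-graph nodes whose degree-of-interest
# (DoI) exceeds a threshold. The DoI terms/weights are illustrative.

def doi(node: dict, selected: set) -> float:
    interest = 1.0 if node["id"] in selected else 0.0  # user-action term
    recency = node["recency"]                          # graph-attribute term
    return 0.7 * interest + 0.3 * recency

def visible_nodes(nodes: list[dict], selected: set,
                  threshold: float = 0.3) -> list[str]:
    """Nodes shown in detail; the rest stay in their aggregated form."""
    return [n["id"] for n in nodes if doi(n, selected) >= threshold]

nodes = [
    {"id": "raw.csv", "recency": 0.2},
    {"id": "normalize", "recency": 0.9},
    {"id": "cluster", "recency": 1.0},
]
print(visible_nodes(nodes, selected={"raw.csv"}))
```

Swapping in different terms (e.g., motif membership or workflow attributes) changes what gets expanded without changing the surrounding mechanism, which is what makes the DoI function modular.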

Stitz, H., Luger, S., Streit, M., Gehlenborg, N., AVOCADO: Visualization of Workflow-Derived Data Provenance for Reproducible Biomedical Research, Computer Graphics Forum (EuroVis 2016), 35(3), pp. 481-490, 2016.

Paper (PDF, 846 KB)
Project Website (new window)

Pathfinder: Visual Analysis of Paths in Graphs

The analysis of paths in graphs is highly relevant in many domains. Typically, path-related tasks are performed in node-link layouts. Unfortunately, graph layouts often do not scale to the size of many real-world networks. Also, many networks are multivariate, i.e., contain rich attribute sets associated with the nodes and edges. These attributes are often critical in judging paths, but directly visualizing attributes in a graph layout exacerbates the scalability problem. In this paper, we present visual analysis solutions dedicated to path-related tasks in large and highly multivariate graphs. We show that by focusing on paths, we can address the scalability problem of multivariate graph visualization, equipping analysts with a powerful tool to explore large graphs. We introduce Pathfinder, a technique that provides visual methods to query paths, while considering various constraints. The resulting set of paths is visualized in both a ranked list and as a node-link diagram. For the paths in the list, we display rich attribute data associated with nodes and edges, and the node-link diagram provides topological context. The paths can be ranked based on topological properties, such as path length or average node degree, and scores derived from attribute data. Pathfinder is designed to scale to graphs with tens of thousands of nodes and edges by employing strategies such as incremental query results. We demonstrate Pathfinder's fitness for use in scenarios with data from a coauthor network and biological pathways.
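
The query-and-rank core can be sketched with a plain depth-first path enumeration. Pathfinder itself handles far larger graphs via incremental results, and the scoring below (total edge weight, then path length) is only one of many possible rankings:

```python
# Hedged sketch: enumerate simple paths between two nodes, then rank them
# by an attribute-derived score (here: total edge weight, ties by length).

def all_paths(graph: dict[str, list[str]], src: str, dst: str,
              max_len: int = 4):
    """Yield simple paths from src to dst with at most max_len nodes."""
    stack = [[src]]
    while stack:
        path = stack.pop()
        if path[-1] == dst:
            yield path
            continue
        if len(path) < max_len:
            for nxt in graph.get(path[-1], []):
                if nxt not in path:  # simple paths only
                    stack.append(path + [nxt])

def rank_paths(graph, weights, src, dst):
    """Lower total edge weight ranks first; path length breaks ties."""
    paths = list(all_paths(graph, src, dst))
    return sorted(paths, key=lambda p: (sum(weights[e] for e in zip(p, p[1:])),
                                        len(p)))

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
w = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 5, ("C", "D"): 1}
print(rank_paths(g, w, "A", "D")[0])
```

In the actual tool, the ranked list shows the attribute data per node and edge while a node-link diagram shows where each path sits in the topology.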

Partl, C., Gratzl, S., Streit, M., Wassermann, A. M., Pfister, HP., Schmalstieg, D., Lex, A., Pathfinder: Visual Analysis of Paths in Graphs, Computer Graphics Forum (EuroVis 2016), 35(3), pp. 71-80, 2016

Paper (PDF, 521 KB)
Project Website (new window)

ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor

Multi-attribute time-series data plays a vital role in many different domains, such as economics, sensor networks, and biology. An important task when making sense of such data is to provide users with an overview to identify items that show an interesting development over time, including both absolute and relative changes in multiple attributes simultaneously. However, this is not well supported by existing visualization techniques. To address this issue, we present ThermalPlot, a visualization technique that summarizes combinations of multiple attributes over time using an item's position, the most salient visual variable. More precisely, the x-position in the ThermalPlot is based on a user-defined degree-of-interest (DoI) function that combines multiple attributes over time. The y-position is determined by the relative change in the DoI value (delta DoI) within a user-specified time window. Animating this mapping via a moving time window gives rise to circular movements of items over time, as in thermal systems. To help users identify important items that match user-defined temporal patterns and to increase the technique's scalability, we adapt the level of detail of the items' representation based on the DoI value. Furthermore, we present an interactive exploration environment for multi-attribute time-series data that ties together a carefully chosen set of visualizations, designed to support analysts in interacting with the ThermalPlot technique. We demonstrate the effectiveness of our technique by means of two usage scenarios that address the visual analysis of economic development data and of stock market data.
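
The position mapping follows directly from the definitions above: x is the DoI value at the current time step, and y is the change of that value within a trailing window. The attribute weights in this sketch are illustrative; in ThermalPlot the DoI function is user-defined:

```python
# Hedged sketch of the ThermalPlot mapping: x = DoI(t), y = delta DoI over
# a trailing time window. The weighted sum below is an example DoI only.

def doi_series(attrs: dict[str, list[float]],
               weights: dict[str, float]) -> list[float]:
    """Combine multiple attribute time series into one DoI time series."""
    n = len(next(iter(attrs.values())))
    return [sum(weights[a] * attrs[a][t] for a in attrs) for t in range(n)]

def thermalplot_position(doi: list[float], t: int,
                         window: int = 2) -> tuple[float, float]:
    x = doi[t]                             # absolute interest
    y = doi[t] - doi[max(0, t - window)]   # delta DoI within the window
    return x, y

attrs = {"price": [1.0, 1.2, 1.8], "volume": [0.5, 0.4, 0.9]}
doi = doi_series(attrs, {"price": 0.5, "volume": 0.5})
print(thermalplot_position(doi, t=2))
```

Sliding t forward re-evaluates both coordinates, which is what produces the animated, thermal-style circular item trajectories described above.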

Stitz, H., Gratzl, S., Aigner, W., and Streit, M., ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor, IEEE Transactions on Visualization and Computer Graphics, 2016 (to appear).

Preprint (PDF, 7.9 MB)