
Research

The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data, enabled by new fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

2014

Enhanced learning-based imaging with thin-film luminescent concentrators


LumiConSense, a transparent, flexible, scalable, and disposable thin-film image sensor, has the potential to lead to new human-computer interfaces that are unconstrained in shape and sensing distance. In this article we make four new contributions: (1) a new real-time image reconstruction method that results in a significant enhancement of image quality compared to previous approaches; (2) the efficient combination of image reconstruction and shift-invariant linear image processing operations; (3) various hardware and software prototypes that realize the above contributions and demonstrate the current potential of our sensor for real-time applications; and (4) a further, higher-quality offline reconstruction algorithm.
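
Contribution (2) builds on the fact that a linear reconstruction operator and a shift-invariant linear filter can be fused into a single matrix that is applied per frame. The following Python sketch only illustrates this general idea; the reconstruction matrix R, the image and measurement sizes, and the sharpening kernel are invented placeholders, not the operators learned in the paper.

```python
import numpy as np

# Illustrative only: if reconstruction is a linear map R (edge measurements ->
# image pixels) and a shift-invariant filter is a linear map F, then F @ R can
# be precomputed once and applied per frame as a single matrix product.

H, W = 32, 32            # assumed reconstruction resolution
n_meas = 4 * 64          # assumed number of edge measurements

rng = np.random.default_rng(0)
R = rng.standard_normal((H * W, n_meas))   # placeholder for a learned reconstruction matrix

def filter_matrix(kernel, h, w):
    """Dense matrix applying a small 2-D filter kernel with zero padding."""
    F = np.zeros((h * w, h * w))
    kh, kw = kernel.shape
    for y in range(h):
        for x in range(w):
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - kh // 2, x + dx - kw // 2
                    if 0 <= yy < h and 0 <= xx < w:
                        F[y * w + x, yy * w + xx] += kernel[dy, dx]
    return F

sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)
F = filter_matrix(sharpen, H, W)

FR = F @ R                        # fuse filtering and reconstruction offline
m = rng.standard_normal(n_meas)   # one frame of edge measurements
image = (FR @ m).reshape(H, W)    # reconstruction + filtering in one product
```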

Koppelhuber, A., Fanello, S., Birklbauer, C., Schedl, D., Izadi, S. and Bimber, O. Enhanced learning-based imaging with thin-film luminescent concentrators. Opt. Express, 22(24), 29531-29543 (2014).

Manuscript (PDF, 34.7 MB)

LumiConSense, A Transparent, Flexible, and Scalable Thin-Film Sensor


Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent (no integrated circuits or other structures such as grids of optical fibers or photodiodes), flexible (curved sensor shapes are possible), scalable (sensor size can range from small to large at similar cost, and pixel size is not restricted to the size of the photodiodes), and potentially disposable (the sensing area is low-cost and can be replaced if damaged). Our most recent work extends our sensor towards lensless multifocal imaging and depth estimation. In contrast to widely applied touch sensors, which are mainly limited to planar shapes and interaction through direct touch, our approach has the potential to lead to new human-computer interfaces that are unconstrained in shape and sensing distance.
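
A rough way to picture the reconstruction problem such a sensor poses (this is a generic sketch, not the paper's algorithm) is as a linear system m = T x, where T is a calibrated light-transport matrix mapping image pixels to edge measurements; the image can then be recovered by regularized least squares. All sizes, the matrix T, and the regularization weight below are assumptions.

```python
import numpy as np

# Generic linear-inverse sketch: edge measurements m = T @ x, recovered with
# Tikhonov regularization. T would come from calibration in a real system.

rng = np.random.default_rng(1)
n_pixels, n_meas = 24 * 24, 4 * 96
T = rng.random((n_meas, n_pixels))                    # stand-in transport matrix
x_true = rng.random(n_pixels)                         # unknown image on the film
m = T @ x_true + 0.01 * rng.standard_normal(n_meas)   # noisy edge measurements

lam = 1e-2                                            # assumed regularization weight
x_rec = np.linalg.solve(T.T @ T + lam * np.eye(n_pixels), T.T @ m)
image = x_rec.reshape(24, 24)                         # reconstructed image
```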

Koppelhuber, A. and Bimber, O., LumiConSense, A Transparent, Flexible, and Scalable Thin-Film Sensor, IEEE Computer Graphics and Applications, 34(5), 98-102, 2014

Manuscript (PDF, 1.0 MB)

Guided Visual Exploration of Genomic Stratifications in Cancer


Cancer is a heterogeneous disease, and molecular profiling of tumors from large cohorts has enabled characterization of new tumor subtypes. This is a prerequisite for improving personalized treatment and ultimately achieving better patient outcomes. Potential tumor subtypes can be identified with methods such as unsupervised clustering or network-based stratification, which assign patients to sets based on high-dimensional molecular profiles. Detailed characterization of identified sets and their interpretation, however, remain a time-consuming exploratory process.
To address these challenges, we combined 'StratomeX', an interactive visualization tool that is freely available at http://www.caleydo.org/, with exploration tools to efficiently compare multiple patient stratifications, to correlate patient sets with clinical information or genomic alterations, and to view the differences between molecular profiles across patient sets. Although we focus on cancer genomics here, StratomeX can also be applied in other disease cohorts.
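
One elementary building block of comparing stratifications is measuring how strongly the patient sets of two groupings overlap. The snippet below is a toy illustration of that idea; the patient IDs and the use of the Jaccard index are assumptions, not StratomeX's internal scoring.

```python
import numpy as np

# Toy example: pairwise Jaccard overlap between the patient sets of two stratifications.
strat_a = {"subtype1": {"p1", "p2", "p3"}, "subtype2": {"p4", "p5"}}
strat_b = {"clusterA": {"p1", "p2"}, "clusterB": {"p3", "p4", "p5"}}

def jaccard(s, t):
    return len(s & t) / len(s | t)

overlap = np.array([[jaccard(a, b) for b in strat_b.values()]
                    for a in strat_a.values()])
print(overlap)   # rows: strat_a sets, columns: strat_b sets
```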

Streit, M., Lex, A., Gratzl, S., Partl, C., Schmalstieg, D., Pfister, HP., Park, PJ. and Gehlenborg, N., Guided Visual Exploration of Genomic Stratifications in Cancer, Nature Methods, 11, 884-885, 2014

Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets


Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is to divide the data into meaningful subsets. Interesting subsets are then selected, and the associated data as well as the relationships between the subsets are visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques.
In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support the rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics.
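
At its simplest, extracting and combining subsets across tabular datasets amounts to set operations on shared identifiers. The toy example below sketches that notion only; the tables, column names, and thresholds are invented and do not reflect Domino's data model.

```python
import pandas as pd

# Two toy tables linked by gene identifiers; subsets are combined via set operations.
genes = pd.DataFrame({"gene": ["A", "B", "C", "D"], "expr": [1.2, 0.4, 2.1, 0.9]})
drugs = pd.DataFrame({"drug": ["x", "y"], "target": ["A", "C"]})

high_expr = set(genes.loc[genes.expr > 1.0, "gene"])   # subset of the first table
targeted = set(drugs["target"])                        # subset derived from the second
combined = high_expr & targeted                        # intersection of both subsets
print(genes[genes.gene.isin(combined)])
```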

Gratzl, S., Gehlenborg, N., Lex, A., Pfister, HP. and Streit, M., Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets, IEEE Transactions on Visualization and Computer Graphics (InfoVis 2014), 2014.

Manuscript (PDF, 1.0 MB)
Project Website

Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations


An increasing number of interactive visualization tools stress the integration with computational software like MATLAB and R to access a variety of proven algorithms for many purposes. In many cases, however, the algorithms are used as black boxes that run to completion in isolation, which contradicts the needs of interactive data exploration. This paper structures, formalizes, and discusses possibilities to enable user involvement in ongoing computations. Based on a structured characterization of needs regarding intermediate feedback and control, the main contribution is a formalization and comparison of strategies for achieving user involvement for algorithms with different characteristics. In the context of integration, we describe considerations for implementing these strategies either as part of the visualization tool or as part of the algorithm, and we identify requirements and guidelines for the design of algorithmic APIs. To assess the practical applicability, we provide a survey of frequently used algorithm implementations within R regarding the fulfillment of these guidelines. While echoing previous calls for analysis modules which support data exploration more directly, we conclude that a range of pragmatic options for enabling user involvement in ongoing computations exists on both the visualization and algorithm side and should be used.
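
One of the strategies discussed, exposing intermediate results of an ongoing computation to the caller, can be pictured with a generator-style API. The sketch below uses Python and a k-means stand-in purely for illustration; it is not taken from the paper or from any surveyed R package.

```python
import numpy as np

# An iterative algorithm exposed as a generator: the visualization layer can
# consume intermediate results (feedback) and stop or steer the run (control).

def kmeans_steps(data, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for step in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
        yield step, labels, centers           # intermediate feedback

data = np.random.default_rng(1).random((200, 2))
for step, labels, centers in kmeans_steps(data, k=3):
    # a tool would update its views here ...
    if step >= 5:                             # ... and may cancel early (control)
        break
```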

Mühlbacher, T., Piringer, H., Gratzl, S., Sedlmair, M., and Streit, M., Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations, IEEE Transactions on Visualization and Computer Graphics (VAST 2014), 2014.

Manuscript (PDF, 263 KB)

ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery


Large-scale data analysis is nowadays a crucial part of drug discovery. Biologists and chemists need to quickly explore and evaluate potentially effective yet safe compounds based on many datasets that are in relationship with each other. However, there is a lack of tools that support them in these processes. To remedy this, we developed ConTour, an interactive visual analytics technique that enables the exploration of these complex, multi-relational datasets. At its core, ConTour lists all items of each dataset in a column. Relationships between the columns are revealed through interaction: selecting one or multiple items in one column highlights and re-sorts the items in other columns. Filters based on relationships enable drilling down into the large data space. To identify interesting items in the first place, ConTour employs advanced sorting strategies, including strategies based on connectivity strength and uniqueness, as well as sorting based on item attributes. ConTour also introduces interactive nesting of columns, a powerful method to show the related items of a child column for each item in the parent column. Within the columns, ConTour shows rich attribute data about the items as well as information about the connection strengths to other datasets. Finally, ConTour provides a number of detail views, which can show items from multiple datasets and their associated data at the same time. We demonstrate the utility of our system in case studies conducted with a team of chemical biologists, who investigate the effects of chemical compounds on cells and need to understand the underlying mechanisms.
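
The connectivity-based sorting mentioned above can be reduced to counting related items per entry, as in the toy snippet below (compound and gene names are made up; ConTour's actual ranking is richer).

```python
# Sort one column's items by the number of related items in another dataset.
compound_to_genes = {
    "cmpd1": {"TP53", "EGFR"},
    "cmpd2": {"EGFR"},
    "cmpd3": {"TP53", "BRCA1", "KRAS"},
}
by_connectivity = sorted(compound_to_genes,
                         key=lambda c: len(compound_to_genes[c]),
                         reverse=True)
print(by_connectivity)   # ['cmpd3', 'cmpd1', 'cmpd2']
```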

Partl, C., Lex, A., Streit, M., Strobelt, H., Wassermann, AM., Pfister, HP. and Schmalstieg, D., ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery, IEEE Transactions on Visualization and Computer Graphics (VAST 2014), 2014.

Manuscript (PDF, 1.6 MB)
Project Website

A Transparent Thin-Film Sensor for Multi-Focal Image Reconstruction and Depth Estimation


We present a fully transparent and flexible light-sensing film that, based on a single thin-film luminescent concentrator layer, supports simultaneous multi-focal image reconstruction and depth estimation without additional optics. By sampling two-dimensional light fields propagated inside the film layer under various focal conditions, it allows entire focal image stacks to be computed from a single recording, which can then be used for depth estimation. The transparency and flexibility of our sensor unlock the potential of lensless multilayer imaging and depth sensing with arbitrary sensor shapes, enabling novel human-computer interfaces.
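
Once a focal image stack is available, a generic way to derive depth is to pick, per pixel, the slice with the highest local focus measure. The sketch below shows that generic depth-from-focus step on synthetic data; it is not the reconstruction described in the paper.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):                 # stack: (n_slices, H, W)
    """Index of the sharpest focal slice per pixel (local Laplacian energy)."""
    focus = np.stack([uniform_filter(laplace(s) ** 2, size=5) for s in stack])
    return np.argmax(focus, axis=0)

stack = np.random.default_rng(2).random((8, 64, 64))   # synthetic focal stack
depth_index = depth_from_focus(stack)                   # coarse depth map
```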

Koppelhuber, A., Birklbauer, C., Izadi, S., and Bimber, O., A transparent thin-film sensor for multi-focal image reconstruction and depth estimation, Opt. Express 22(8), 8928-8942, 2014

Manuscript (PDF, 54.3 MB)

Panorama Light-Field Imaging


We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or naive multi-perspective image stitching, our approach is the first that processes ray entries directly and does not require depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond diffuse reflections of Lambertian surfaces.

Birklbauer, C. and Bimber, O., Panorama Light-Field Imaging, In proceedings of Eurographics (Computer Graphics Forum), 33(2), 43-52, 2014

Source Code

Coded Exposure HDR Light-Field Video Recording


Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
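
For context, merging differently exposed frames into a radiance estimate typically weights each pixel by how well it is exposed and divides by the exposure time. The sketch below shows only that standard merging step; the weighting, exposure times, and synthetic frames are assumptions and do not reproduce the coded-exposure pipeline of the paper.

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Weighted per-pixel radiance estimate from frames with values in [0, 1]."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposures):
        w = np.clip(1.0 - np.abs(2.0 * img - 1.0), 1e-3, None)  # hat weighting
        num += w * img / t
        den += w
    return num / den

rng = np.random.default_rng(3)
exposures = [1 / 400, 1 / 100, 1 / 25]                   # assumed exposure times
frames = [np.clip(rng.random((48, 48)) * t * 200, 0, 1) for t in exposures]
radiance = merge_hdr(frames, exposures)
```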

Schedl, D. C., Birklbauer, C. and Bimber, O., Coded Exposure HDR Light-Field Video Recording, In proceedings of Eurographics (Computer Graphics Forum), 33(2), 33-42, 2014

Furby: Fuzzy Force-Directed Bicluster Visualization


Background: Cluster analysis is widely used to discover patterns in multi-dimensional data. Clustered heatmaps are the standard technique for visualizing one-way and two-way clustering results. In clustered heatmaps, rows and/or columns are reordered, resulting in a representation that shows the clusters as contiguous blocks. However, for biclustering results, where clusters can overlap, it is not possible to reorder the matrix in this way without duplicating rows and/or columns.
Results: We present Furby, an interactive visualization technique for analyzing biclustering results. Our contribution is twofold. First, the technique provides an overview of a biclustering result, showing the actual data that forms the individual clusters together with information about which rows and columns they share. Second, for fuzzy clustering results, the proposed technique additionally enables analysts to interactively set the thresholds that transform the fuzzy (soft) clustering into hard clusters that can then be investigated using heatmaps or bar charts. Changes in the membership value thresholds are immediately reflected in the visualization. We demonstrate the value of Furby by loading the results of a biclustering applied to a multi-tissue dataset into the visualization.
Conclusions: The proposed tool allows analysts to assess the overall quality of a biclustering result. Based on this high-level overview, analysts can then interactively explore the individual biclusters in detail. This novel way of handling fuzzy clustering results also supports analysts in finding the optimal thresholds that lead to the best clusters.
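
The thresholding step that Furby exposes interactively can be written down in a few lines: fuzzy row and column membership values above a user-chosen threshold become hard bicluster membership. The membership data and thresholds in the sketch below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
row_membership = rng.random((100, 5))    # rows x biclusters, fuzzy values in [0, 1]
col_membership = rng.random((40, 5))     # columns x biclusters

row_threshold, col_threshold = 0.8, 0.7  # would be set interactively in the tool
hard_biclusters = [
    (np.flatnonzero(row_membership[:, k] >= row_threshold),
     np.flatnonzero(col_membership[:, k] >= col_threshold))
    for k in range(row_membership.shape[1])
]
for k, (rows, cols) in enumerate(hard_biclusters):
    print(f"bicluster {k}: {len(rows)} rows x {len(cols)} columns")
```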

Streit, M., Gratzl, S., Gillhofer, M., Mayr, A., Mitterecker, A. and Hochreiter, S., Furby: Fuzzy Force-Directed Bicluster Visualization, BMC Bioinformatics, 15(Suppl 6):S4, 2014.

Manuscript (PDF, 1.9 MB)
Project Website

Show Me the Invisible: Visualizing Hidden Content


Content on computer screens is often inaccessible to users because it is hidden, e.g., occluded by other windows, outside the viewport, or overlooked. In search tasks, the efficient retrieval of sought content is important. Current software, however, only provides limited support to visualize hidden occurrences and rarely supports search synchronization crossing application boundaries. To remedy this situation, we introduce two novel visualization methods to guide users to hidden content.
Our first method generates awareness of occluded or out-of-viewport content using see-through visualization. For content that is either outside the screen's viewport or in data sources not opened at all, our second method shows off-screen indicators and an on-demand smart preview. To reduce the chances of overlooking content, we use visual links, i.e., visible edges, to connect the visible content or the visible representations of the hidden content. We show the validity of our methods in a user study, which demonstrates that our technique enables faster localization of hidden content than traditional search functionality and thereby assists users in information retrieval tasks.
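
An off-screen indicator can, in the simplest case, be placed where the ray from the viewport center towards the hidden content crosses the viewport border. The toy geometry below illustrates only that placement idea and is not the paper's implementation.

```python
def offscreen_indicator(target, viewport):
    """Point on the viewport border towards an off-screen target, or None if visible."""
    w, h = viewport
    cx, cy = w / 2, h / 2
    dx, dy = target[0] - cx, target[1] - cy
    if abs(dx) <= w / 2 and abs(dy) <= h / 2:
        return None                                   # target is on screen
    scale = min(w / 2 / abs(dx) if dx else float("inf"),
                h / 2 / abs(dy) if dy else float("inf"))
    return (cx + dx * scale, cy + dy * scale)

print(offscreen_indicator((2500, 300), (1920, 1080)))  # lands on the right border
```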

Geymayer, T., Steinberger, M., Lex, A., Streit, M., and Schmalstieg, D., Show Me the Invisible: Visualizing Hidden Content, ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2014), 2014.

Manuscript (PDF, 2.0 MB)
Project Website

Characterizing Cancer Subtypes using the Dual Analysis Approach in Caleydo


The comprehensive analysis and characterization of cancer subtypes is an important problem to which significant resources have been devoted in recent years. In this paper we integrate the dual analysis method, which uses statistics to describe both the dimensions and the rows of a high-dimensional dataset, into StratomeX, a Caleydo view tailored to cancer subtype analysis. We introduce significant difference plots for showing the elements of a candidate cancer subtype that differ significantly from other subtypes, thus enabling analysts to characterize cancer subtypes. We also enable analysts to investigate how samples relate to the subtype they are assigned to and to the other groups. Our approach gives analysts the ability to create well-defined candidate subtypes based on statistical properties. We demonstrate the utility of our approach in three case studies, where we show that we are able to reproduce findings from a published cancer subtype characterization.
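
The statistics behind a significant difference plot can be approximated by testing, per dimension, whether the candidate subtype differs from the remaining samples. The sketch below uses Welch's t-test on synthetic data as one possible stand-in; the test choice, threshold, and data are assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
data = rng.standard_normal((200, 30))      # samples x dimensions (synthetic)
subtype = rng.random(200) < 0.25           # boolean mask of the candidate subtype
data[subtype, :5] += 1.5                   # make the first 5 dimensions differ

_, p = ttest_ind(data[subtype], data[~subtype], axis=0, equal_var=False)
significant_dims = np.flatnonzero(p < 0.01)
print(significant_dims)                    # dimensions characterizing the subtype
```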

Turkay, C., Lex, A., Streit, M., Pfister, HP. and Hauser, H., Characterizing Cancer Subtypes using the Dual Analysis Approach in Caleydo, IEEE Computer Graphics and Applications, 34(2), 38-47, 2014

Manuscript (PDF, 7.4 MB)