The Institute of Computer Graphics carries out research in the modern field that has been coined "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by new fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.
We present a caching framework with a novel probability-based prefetching and eviction strategy applied to atomic cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate parameterization scheme for real-time rendering and caching; and light-field-cached volume rendering, which supports interactive exploration of large volumetric datasets using light-field rendering. We consider applications such as light-field photography and the visualization of large image stacks from modern scanning microscopes.
Birklbauer, C., Opelt, S. and Bimber, O., Rendering Gigaray Light Fields, In proceedings of Eurographics (Computer Graphics Forum), 2013
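The probability-based prefetching and eviction strategy can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class name, the Gaussian view-prediction model, and the one-dimensional tile indexing are all our own simplifying assumptions.

```python
import math

class LightFieldTileCache:
    """Toy sketch of probability-driven caching for atomic light-field cache units."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = {}  # tile_id -> last predicted access probability

    def access_probability(self, tile_id, predicted_view, sigma=2.0):
        # Toy view-prediction model: tiles near the predicted view direction
        # are more likely to be needed soon (Gaussian falloff over tile index).
        d = tile_id - predicted_view
        return math.exp(-(d * d) / (2.0 * sigma * sigma))

    def request(self, tile_id, predicted_view):
        """Return True if the tile is (now) cached, False if it must be rendered on demand."""
        p = self.access_probability(tile_id, predicted_view)
        if tile_id not in self.tiles and len(self.tiles) >= self.capacity:
            victim = min(self.tiles, key=self.tiles.get)
            if self.tiles[victim] >= p:
                return False  # every cached tile is more promising; skip caching
            del self.tiles[victim]  # evict the least probable tile
        self.tiles[tile_id] = p
        return True

    def prefetch(self, predicted_view, radius=1):
        # Speculatively pull in tiles around the predicted view direction.
        for t in range(int(predicted_view) - radius, int(predicted_view) + radius + 1):
            self.request(t, predicted_view)
```

The key idea the sketch captures is that eviction is driven by predicted access probability rather than recency, so tiles far from the anticipated view direction are replaced first.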
Light-Field Supported Fast Volume Rendering
Advances in imaging technology lead to a continuous increase in the size of image data sets. Modern scanning microscopes, for instance, produce image stacks with megapixel lateral resolution and hundreds to thousands of slices in the axial direction. This trend will continue, resulting in very large volumetric data sets that are difficult to explore interactively, since the complexity of volume rendering is proportional to the spatial and lateral resolution of the data. Light-field rendering is a fast and simple image-based rendering method that requires pre-computed image data. In volume rendering, each costly computed image is discarded as soon as the viewing parameters change, while the renderer is idle whenever the viewing parameters do not change and the visualization needs no update. We combine light-field rendering and volume rendering with two goals: we recycle previously rendered images, and we use the idle times to fill a cache-managed light field. The final images are then composed from light-field rendering and from volume rendering, depending on the state of the light-field cache. This leads to a significant increase in rendering performance and to the ability to explore large volumetric datasets interactively.
Birklbauer, C. and Bimber, O., Light-Field Supported Fast Volume Rendering, ACM SIGGRAPH (poster), 2012
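The recycling idea above can be sketched as a per-ray composition loop. This is a hedged illustration only: the function names and the dictionary-as-cache are our assumptions, standing in for the actual cache-managed light field.

```python
def render_frame(rays, lf_cache, volume_render, lf_sample):
    """Compose one frame: cheap light-field lookups where cached, volume rendering elsewhere."""
    image = []
    for ray in rays:
        if ray in lf_cache:
            image.append(lf_sample(lf_cache, ray))  # cached: fast light-field rendering
        else:
            color = volume_render(ray)              # uncached: costly ray casting
            lf_cache[ray] = color                   # recycle the result instead of discarding it
            image.append(color)
    return image
```

After a few frames with stable viewing parameters, most rays resolve through the cache, which is where the reported performance gain comes from.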
enRoute: Dynamic Path Extraction from Biological Pathway Maps for In-Depth Experimental Data Analysis
Pathway maps are an important source of information when analyzing functional implications of experimental data on biological processes. Associating large quantities of data with nodes on a pathway map while allowing in-depth analysis at the same time, however, is a challenging task. While a wide variety of approaches for doing so exist, they either do not scale beyond a few experiments or fail to represent the pathway appropriately. To remedy this, we introduce enRoute, a new approach for interactively exploring experimental data along paths that are dynamically extracted from pathways. By showing an extracted path side-by-side with experimental data, enRoute can present large amounts of data for every pathway node. It can visualize hundreds of samples, dozens of experimental conditions, and even multiple datasets capturing different aspects of a node at the same time. Another important property of this approach is its conceptual compatibility with arbitrary forms of pathways. Most notably, enRoute works well with manually created pathways, as they are available in large, public pathway databases. We demonstrate enRoute with pathways from the well-established KEGG database and expression as well as copy number datasets from humans and mice with more than 1,000 experiments at the same time. We validate enRoute in case studies with domain experts, who used enRoute to explore data for glioblastoma multiforme in humans and a model of steatohepatitis in mice.
Partl, C., Lex, A., Streit, M., Kalkofen, D., Kashofer, K. and Schmalstieg, D., enRoute: Dynamic Path Extraction from Biological Pathway Maps for In-Depth Experimental Data Analysis, To appear: In proceedings of the IEEE Symposium on Biological Data Visualization (BioVis'12), 2012.
Partl, C., Lex, A., Streit, M., Kalkofen, D., Kashofer, K. and Schmalstieg, D., enRoute: Dynamic Path Extraction from Biological Pathway Maps for Exploring Heterogeneous Experimental Datasets, BMC Bioinformatics (to appear), 2013.
StratomeX: Visual Analysis of Large-Scale Heterogeneous Genomics Data for Cancer Subtype Characterization
Identification and characterization of cancer subtypes are important areas of research that are based on the integrated analysis of multiple heterogeneous genomics datasets. Since there are no tools supporting this process, much of this work is done using ad-hoc scripts and static plots, which is inefficient and limits visual exploration of the data. To address this, we have developed StratomeX, an integrative visualization tool that allows investigators to explore the relationships of candidate subtypes across multiple genomic data types such as gene expression, DNA methylation, or copy number data. StratomeX represents datasets as columns and subtypes as bricks in these columns. Ribbons between the columns connect bricks to show subtype relationships across datasets. Drill-down features enable detailed exploration. StratomeX provides insights into the functional and clinical implications of candidate subtypes by employing small multiples, which allow investigators to assess the effect of subtypes on molecular pathways or outcomes such as patient survival. As the configuration of viewing parameters in such a multi-dataset, multi-view scenario is complex, we propose a meta visualization and configuration interface for dataset dependencies and data-view relationships.
Lex, A., Streit, M., Schulz, H.-J., Partl, C., Schmalstieg, D., Park, P. J. and Gehlenborg, N., StratomeX: Visual Analysis of Large-Scale Heterogeneous Genomics Data for Cancer Subtype Characterization, Computer Graphics Forum (EuroVis 2012), vol. 31, no. 3, pp. 1175-1184, 2012.
Procedural Texture Synthesis for Zoom-Independent Visualization of Multivariate Data
We propose a method for visualizing multivariate data in a view-dependent manner using a synthesized texture. The texture is generated using procedural random-phase Gabor noise and a specially constructed transfer function that is based on statistical properties of the noise. The procedural nature of texture synthesis allows us to alter the noise according to current viewing conditions by modifying its parameters. As a consequence, the textures are alias-free and data values can be interpreted at any scale. We show applications of our method for displaying multivariate 2D weather data and for encoding additional information on top of a video without using color, thus leaving this communication channel available for other purposes. We achieve simultaneous display of color and texture information by modifying the value component in the HSV color model, which allows us to ensure sufficient contrast for texture perception while keeping the hue and saturation components intact. We provide the results of a user study, which shows a significant advantage of our method over current methods with similar goals.
Khlebnikov, R., Kainz, B., Steinberger, M., Streit, M. and Schmalstieg, D., Procedural Texture Synthesis for Zoom-Independent Visualization of Multivariate Data, Computer Graphics Forum (EuroVis'12), vol. 31, no. 3, pp. 1355-1364, 2012.
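Random-phase Gabor noise of the kind the abstract relies on can be sketched as a sparse convolution of randomly placed, randomly phased Gabor kernels. The parameter values and function names below are illustrative, not taken from the paper:

```python
import math
import random

def gabor_kernel(x, y, a, f0, omega, phi):
    """One Gabor kernel: Gaussian envelope times an oriented cosine with random phase."""
    envelope = math.exp(-math.pi * a * a * (x * x + y * y))
    harmonic = math.cos(2.0 * math.pi * f0 * (x * math.cos(omega) + y * math.sin(omega)) + phi)
    return envelope * harmonic

def make_impulses(n, extent, seed=0):
    """Randomly placed impulses with random orientation and phase (random-phase noise)."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, extent), rng.uniform(0.0, extent),
             rng.uniform(0.0, math.pi), rng.uniform(0.0, 2.0 * math.pi))
            for _ in range(n)]

def gabor_noise(x, y, impulses, a=0.05, f0=0.0625):
    """Sparse-convolution noise: sum kernel contributions of all impulses at (x, y).

    Zoom independence comes from the procedural evaluation: rescaling f0 and a
    with the current viewing scale re-synthesizes the texture alias-free.
    """
    return sum(gabor_kernel(x - xi, y - yi, a, f0, omega, phi)
               for (xi, yi, omega, phi) in impulses)
```

Because the noise is evaluated analytically per pixel rather than sampled from a stored image, its frequency content can be adapted to the current zoom level, which is the property the method exploits.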
Model-Driven Design for the Visual Analysis of Heterogeneous Data
As heterogeneous data from different sources are being increasingly linked, it becomes difficult for users to understand how the data are connected, to identify what means are suitable to analyze a given data set, or to find out how to proceed for a given analysis task. We target this challenge with a new model-driven design process that effectively codesigns aspects of data, view, analytics, and tasks. We achieve this by using the workflow of the analysis task as a trajectory through data, interactive views, and analytical processes. The benefits for the analysis session go well beyond the pure selection of appropriate data sets and range from providing orientation or even guidance along a preferred analysis path to a potential overall speedup, allowing data to be fetched ahead of time. We illustrate the design process for a biomedical use case that aims at determining a treatment plan for cancer patients from the visual analysis of a large, heterogeneous clinical data pool. As an example for how to apply the comprehensive design approach, we present Stack’n’flip, a sample implementation which tightly integrates visualizations of the actual data with a map of available data sets, views, and tasks, thus capturing and communicating the analytical workflow through the required data sets.
Streit, M., Schulz, H.-J., Lex, A., Schmalstieg, D. and Schumann, H., Model-Driven Design for the Visual Analysis of Heterogeneous Data, IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 6, pp. 998-1010, 2012.
Visualizing Uncertainty in Biological Expression Data
Expression analysis of omics data using microarrays has become a standard procedure in the life sciences. However, microarrays are subject to technical limitations and errors, which render the gathered data likely to be uncertain. While a number of approaches exist to target this uncertainty statistically, it is hardly ever even shown when the data is visualized using, for example, clustered heatmaps. Yet, this is highly useful when trying not to omit data that is "good enough" for an analysis, which would otherwise be discarded as too unreliable by established conservative thresholds. Our approach addresses this shortcoming by first identifying the margin above the error threshold of uncertain, yet possibly still useful data. It then displays this uncertain data in the context of the valid data by enhancing a clustered heatmap. We employ different visual representations for the different kinds of uncertainty involved. Finally, it lets the user interactively adjust the thresholds, giving visual feedback in the heatmap representation, so that an informed choice of thresholds can be made instead of applying the usual rule-of-thumb cut-offs. We exemplify the usefulness of our concept by giving details for a concrete use case from our partners at the Medical University of Graz, thereby demonstrating our implementation of the general approach.
Holzhüter, C., Lex, A., Schmalstieg, D., Schulz H.-J., Schumann, H. and Streit, M., Visualizing Uncertainty in Biological Expression Data, In proceedings of the SPIE Conference on Visualization and Data Analysis (VDA'12), 2012.
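The thresholding step described above amounts to partitioning measurements into three bands. A minimal sketch, with hypothetical threshold names standing in for the user-adjustable cut-offs:

```python
def classify_measurement(error, valid_cutoff, uncertainty_margin):
    """Place one expression measurement into the valid / uncertain / discarded band.

    valid_cutoff and uncertainty_margin are the interactively adjustable
    thresholds; values in the margin above the error threshold are shown
    in the heatmap with a distinct visual encoding instead of being dropped.
    """
    if error <= valid_cutoff:
        return "valid"
    if error <= uncertainty_margin:
        return "uncertain"
    return "discarded"
```

Re-running this classification as the user drags a threshold is what drives the immediate visual feedback in the heatmap.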
With the continuously increasing sensor resolutions of cameras, light-field imaging is becoming a more and more practical extension to conventional digital photography. It complements postprocessing with synthetic aperture control, refocusing, as well as perspective and field-of-view changes. To be a true alternative to classical 2D imaging, however, the spatial resolution of light fields must be of the same megapixel order as the resolution of today's digital images. The angular resolution must also be adequately high to prevent sampling artifacts (in particular for synthetic refocusing). This quickly leads to gigabytes rather than megabytes of data that have to be rendered with limited graphics memory. We describe a light-field caching framework that makes it possible to render very large light fields in real time.
Opelt, S. and Bimber, O., Light-Field Caching, ACM SIGGRAPH (poster), 2011
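The back-of-the-envelope arithmetic behind the "gigabytes rather than megabytes" claim is easy to verify; the resolution figures below are illustrative, not taken from the poster:

```python
def light_field_size_bytes(spatial_pixels, angular_views, bytes_per_sample=3):
    """Raw size of an uncompressed RGB light field (one byte per channel)."""
    return spatial_pixels * angular_views * bytes_per_sample

# A 1-megapixel spatial resolution with a 32x32 angular grid already yields
# about 3 GB of raw data, far beyond typical graphics memory of the time.
size = light_field_size_bytes(10**6, 32 * 32)
```

A single 1-megapixel 2D photograph, by contrast, occupies only about 3 MB raw, which is the thousand-fold gap that motivates the caching framework.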