Additional Information:

Student Projects

Displays Book (second edition)

Displays Book (Korean edition)

Displays Book (free ebook)

Spatial Augmented Reality Book (free ebook)

The Caleydo Project

VIOSO smartprojecting

Research

The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data, enabled by emerging fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.
Visualization

From Visual Exploration to Storytelling and Back Again


The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author 'Vistories', visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals.
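
The core of the CLUE model is the provenance data captured during exploration, from which annotated key steps are assembled into a Vistory. A minimal illustrative sketch in Python (not the authors' implementation; the ProvenanceState and Vistory structures and all field names are hypothetical):

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class ProvenanceState:
        """One captured step of the exploration (hypothetical structure)."""
        step_id: int
        action: str                           # e.g. "filter", "select", "zoom"
        visualization_state: Dict[str, Any]   # enough to restore the view
        annotation: str = ""

    @dataclass
    class Vistory:
        """An authored story: an ordered, annotated subset of captured states."""
        title: str
        steps: List[ProvenanceState] = field(default_factory=list)

        def replay(self):
            # Returning the stored visualization states lets a viewer jump back
            # to any point of the exploration and continue the analysis there.
            for state in self.steps:
                yield state.step_id, state.annotation, state.visualization_state

    # Usage: capture states during exploration, then pick and annotate key steps.
    history = [ProvenanceState(i, "filter", {"year": 2000 + i}) for i in range(5)]
    history[2].annotation = "Life expectancy rises sharply here."
    story = Vistory("Public health example", steps=[history[0], history[2], history[4]])
    for step_id, note, state in story.replay():
        print(step_id, note, state)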

Gratzl, S., Lex, A., Gehlenborg, N., Cosgrove, N., Streit, M., From Visual Exploration to Storytelling and Back Again, Computer Graphics Forum (EuroVis '16), 2016 (to appear).

Paper (PDF, 1.7 MB)
Project Website

AVOCADO: Visualization of Workflow-Derived Data Provenance for Reproducible Biomedical Research


A major challenge of data-driven biomedical research lies in the collection and representation of data provenance information to ensure reproducibility of findings. In order to communicate and reproduce multi-step analysis workflows executed on datasets that contain data for dozens or hundreds of samples, it is crucial to be able to visualize the provenance graph at different levels of aggregation. Most existing approaches are based on node-link diagrams, which do not scale to the complexity of typical data provenance graphs. In our proposed approach we reduce the complexity of the graph using hierarchical and motif-based aggregation. Based on user actions and graph attributes, a modular degree-of-interest (DoI) function is applied to expand parts of the graph that are relevant to the user. This interest-driven adaptive provenance visualization approach allows users to review and communicate complex multi-step analyses, which can be based on hundreds of files that are processed by numerous workflows. We integrate our approach into an analysis platform that captures extensive data provenance information and demonstrate its effectiveness by means of a biomedical usage scenario.

Stitz, H., Luger, S., Streit, M., Gehlenborg, N., AVOCADO: Visualization of Workflow-Derived Data Provenance for Reproducible Biomedical Research, Computer Graphics Forum (EuroVis '16), 2016 (to appear).

Paper (PDF, 846 KB)
Project Website

Pathfinder: Visual Analysis of Paths in Graphs


The analysis of paths in graphs is highly relevant in many domains. Typically, path-related tasks are performed in node-link layouts. Unfortunately, graph layouts often do not scale to the size of many real world networks. Also, many networks are multivariate, i.e., contain rich attribute sets associated with the nodes and edges. These attributes are often critical in judging paths, but directly visualizing attributes in a graph layout exacerbates the scalability problem. In this paper, we present visual analysis solutions dedicated to path-related tasks in large and highly multivariate graphs. We show that by focusing on paths, we can address the scalability problem of multivariate graph visualization, equipping analysts with a powerful tool to explore large graphs. We introduce Pathfinder, a technique that provides visual methods to query paths, while considering various constraints. The resulting set of paths is visualized in both a ranked list and as a node-link diagram. For the paths in the list, we display rich attribute data associated with nodes and edges, and the node-link diagram provides topological context. The paths can be ranked based on topological properties, such as path length or average node degree, and scores derived from attribute data. Pathfinder is designed to scale to graphs with tens of thousands of nodes and edges by employing strategies such as incremental query results. We demonstrate Pathfinder's fitness for use in scenarios with data from a coauthor network and biological pathways.
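
The abstract describes querying paths under constraints and ranking them by topological properties and attribute-derived scores. A minimal sketch of that idea on a plain adjacency-list graph (the function names, scoring weights, and toy graph are hypothetical, not Pathfinder's actual query engine):

    def find_paths(graph, source, target, max_hops=4):
        """Enumerate simple paths from source to target with at most max_hops edges."""
        stack = [(source, [source])]
        while stack:
            node, path = stack.pop()
            if node == target:
                yield path
                continue
            if len(path) - 1 >= max_hops:       # path already has max_hops edges
                continue
            for neighbor in graph.get(node, []):
                if neighbor not in path:        # keep paths simple (no repeated nodes)
                    stack.append((neighbor, path + [neighbor]))

    def rank_paths(paths, graph, node_score):
        """Rank paths by attribute-derived scores, penalizing length and degree."""
        def score(path):
            avg_degree = sum(len(graph.get(n, [])) for n in path) / len(path)
            attribute_score = sum(node_score.get(n, 0.0) for n in path)
            return attribute_score - 0.1 * len(path) - 0.01 * avg_degree
        return sorted(paths, key=score, reverse=True)

    # Toy graph and node attributes:
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    node_score = {"B": 1.0, "C": 0.2, "D": 0.5}
    print(rank_paths(list(find_paths(graph, "A", "D")), graph, node_score))
    # [['A', 'B', 'D'], ['A', 'C', 'D']]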

Partl, C., Gratzl, S., Streit, M., Wassermann, A. M., Pfister, HP., Schmalstieg, D., Lex, A., Pathfinder: Visual Analysis of Paths in Graphs, Computer Graphics Forum (EuroVis '16), 2016 (to appear).

ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor


Multi-attribute time-series data plays a vital role in many different domains, such as economics, sensor networks, and biology. An important task when making sense of such data is to provide users with an overview to identify items that show an interesting development over time, including both absolute and relative changes in multiple attributes simultaneously. However, this is not well supported by existing visualization techniques. To address this issue, we present ThermalPlot, a visualization technique that summarizes combinations of multiple attributes over time using an item's position, the most salient visual variable. More precisely, the x-position in the ThermalPlot is based on a user-defined degree-of-interest (DoI) function that combines multiple attributes over time. The y-position is determined by the relative change in the DoI value (delta DoI) within a user-specified time window. Animating this mapping via a moving time window gives rise to circular movements of items over time—as in thermal systems. To help the user to identify important items that match user-defined temporal patterns and to increase the technique’s scalability, we adapt the level of detail of the items’ representation based on the DoI value. Furthermore, we present an interactive exploration environment for multi-attribute time-series data that ties together a carefully chosen set of visualizations, designed to support analysts in interacting with the ThermalPlot technique. We demonstrate the effectiveness of our technique by means of two usage scenarios that address the visual analysis of economic development data and of stock market data.
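
The essential mapping can be summarized in a few lines: the x-position is the current DoI value, the y-position the relative DoI change within a time window. A minimal sketch under these assumptions (the attribute names, weights, and window size below are hypothetical, not from the paper):

    def doi(item, weights):
        """User-defined degree of interest: weighted sum of attribute values."""
        return sum(weights[a] * v for a, v in item.items() if a in weights)

    def thermal_position(series, weights, window=3):
        """Map an item's multi-attribute time series to an (x, y) position.

        x ... DoI at the most recent time step
        y ... relative DoI change (delta DoI) over the last `window` steps
        """
        doi_values = [doi(step, weights) for step in series]
        x = doi_values[-1]
        past = doi_values[max(0, len(doi_values) - window)]
        y = (x - past) / abs(past) if past else 0.0
        return x, y

    # Usage with a toy two-attribute time series:
    series = [{"revenue": 10, "growth": 0.1},
              {"revenue": 12, "growth": 0.3},
              {"revenue": 15, "growth": 0.4}]
    print(thermal_position(series, weights={"revenue": 0.01, "growth": 1.0}))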

Paper (PDF, 521 KB)
Project Website

Stitz, H., Gratzl, S., Aigner, W., and Streit, M., ThermalPlot: Visualizing Multi-Attribute Time-Series Data Using a Thermal Metaphor, IEEE Transactions on Visualization and Computer Graphics, 2016 (to appear).

Preprint (PDF, 7.9 MB)

CloudGazer: A Divide-and-Conquer Approach to Monitoring and Optimizing Cloud-Based Networks


With the rise of virtualization and cloud-based networks of various scales and degrees of complexity, new approaches to managing such infrastructures are required. In these networks, relationships among components can be of arbitrary cardinality (1:1, 1:n, n:m), making it challenging for administrators to investigate which components influence others. In this paper we present CloudGazer, a scalable visualization system that allows users to monitor and optimize cloud-based networks effectively to reduce energy consumption and to increase the quality of service. Instead of visualizing the overall network, we split the graph into semantic perspectives that provide a much simpler view of the network. CloudGazer is a multiple coordinated view system that visualizes either static or live status information about the components of a perspective while reintroducing lost inter-perspective relationships on demand using dynamically created inlays. We demonstrate the effectiveness of CloudGazer in two usage scenarios: The first is based on a real-world network of our domain partners where static performance parameters are used to find an optimal design. In the second scenario we use the VAST 2013 Challenge dataset to demonstrate how the system can be employed with live streaming data.

Stitz, H., Gratzl, S., Krieger, M. and Streit, M., CloudGazer: A Divide-and-Conquer Approach to Monitoring and Optimizing Cloud-Based Networks, Proceedings of IEEE Pacific Visualization Symposium (PacificVis ’15), 175-182, 2015.

Manuscript (PDF, 1.2 MB)

Guided Visual Exploration of Genomic Stratifications in Cancer


Cancer is a heterogeneous disease, and molecular profiling of tumors from large cohorts has enabled characterization of new tumor subtypes. This is a prerequisite for improving personalized treatment and ultimately achieving better patient outcomes. Potential tumor subtypes can be identified with methods such as unsupervised clustering or network-based stratification, which assign patients to sets based on high-dimensional molecular profiles. Detailed characterization of identified sets and their interpretation, however, remain a time-consuming exploratory process.
To address these challenges, we combined 'StratomeX', an interactive visualization tool that is freely available at http://www.caleydo.org/, with exploration tools to efficiently compare multiple patient stratifications, to correlate patient sets with clinical information or genomic alterations and to view the differences between molecular profiles across patient sets. Although we focus on cancer genomics here, StratomeX can also be applied in other disease cohorts.

Streit, M., Lex, A., Gratzl, S., Partl, C., Schmalstieg, D., Pfister, HP., Park, PJ., Gehlenborg, N., Guided Visual Exploration of Genomic Stratifications in Cancer, Nature Methods, 11, 884-885, 2014.

Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets


Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is to divide the data into meaningful subsets. In the next step, interesting subsets are selected and the associated data as well as the relationships between the subsets are visualized. However, neither the extraction and manipulation, nor the comparison of subsets is well supported by state-of-the-art techniques.
In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support the rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics.

Gratzl, S., Gehlenborg, N., Lex, A., Pfister, HP. and Streit, M., Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets, IEEE Transactions on Visualization and Computer Graphics (InfoVis 2014), 2014.

Manuscript (PDF, 1.0 MB)
Project Website
Manuscript (PDF, 263 KB)

Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations


An increasing number of interactive visualization tools stress the integration with computational software like MATLAB and R to access a variety of proven algorithms for many purposes. In many cases, however, the algorithms are used as black boxes that run to completion in isolation which contradicts the needs of interactive data exploration. This paper structures, formalizes, and discusses possibilities to enable user involvement in ongoing computations. Based on a structured characterization of needs regarding intermediate feedback and control, the main contribution is a formalization and comparison of strategies for achieving user involvement for algorithms with different characteristics. In the context of integration, we describe considerations for implementing these strategies either as part of the visualization tool or as part of the algorithm, and we identify requirements and guidelines for the design of algorithmic APIs. To assess the practical applicability, we provide a survey of frequently used algorithm implementations within R regarding the fulfillment of these guidelines. While echoing previous calls for analysis modules which support data exploration more directly, we conclude that a range of pragmatic options for enabling user involvement in ongoing computations exists on both the visualization and algorithm side and should be used.
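
One of the strategies discussed, providing intermediate feedback and early cancellation for an ongoing computation instead of running it as a black box, can be illustrated with a generator-based wrapper around an iterative algorithm. A minimal sketch, not tied to any particular algorithm implementation surveyed in the paper:

    def gradient_descent(f_grad, x0, step=0.1, max_iter=1000):
        """A stand-in for a long-running algorithm. Instead of running to
        completion in isolation, it yields its state after every iteration so
        the caller can visualize progress, adjust parameters, or cancel."""
        x = x0
        for i in range(max_iter):
            x = x - step * f_grad(x)
            yield {"iteration": i, "x": x}

    # A visualization tool consumes intermediate results and may stop early:
    for state in gradient_descent(lambda x: 2 * (x - 3.0), x0=0.0):
        if state["iteration"] % 100 == 0:
            print(state)
        if abs(state["x"] - 3.0) < 1e-6:      # user (or tool) decides "good enough"
            print("stopped early at", state)
            break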

Mühlbacher, T., Piringer, H., Gratzl, S., Sedlmair, M., and Streit, M., Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations, IEEE Transactions on Visualization and Computer Graphics (VAST 2014), 2014.

ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery


Large scale data analysis is nowadays a crucial part of drug discovery. Biologists and chemists need to quickly explore and evaluate potentially effective yet safe compounds based on many datasets that are in relationship with each other. However, there is a lack of tools that support them in these processes. To remedy this, we developed ConTour, an interactive visual analytics technique that enables the exploration of these complex, multi-relational datasets. At its core, ConTour lists all items of each dataset in a column. Relationships between the columns are revealed through interaction: selecting one or multiple items in one column highlights and re-sorts the items in other columns. Filters based on relationships enable drilling down into the large data space. To identify interesting items in the first place, ConTour employs advanced sorting strategies, including strategies based on connectivity strength and uniqueness, as well as sorting based on item attributes. ConTour also introduces interactive nesting of columns, a powerful method to show the related items of a child column for each item in the parent column. Within the columns, ConTour shows rich attribute data about the items as well as information about the connection strengths to other datasets. Finally, ConTour provides a number of detail views, which can show items from multiple datasets and their associated data at the same time. We demonstrate the utility of our system in case studies conducted with a team of chemical biologists, who investigate the effects of chemical compounds on cells and need to understand the underlying mechanisms.

Partl, C., Lex, A., Streit, M., Strobelt, H., Wassermann, AM., Pfister, HP. and Schmalstieg, D., ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery, IEEE Transactions on Visualization and Computer Graphics (VAST 2014), 2014.

Manuscript (PDF, 1.6 MB)
Project Website

Furby: Fuzzy Force-Directed Bicluster Visualization


Background: Cluster analysis is widely used to discover patterns in multi-dimensional data. Clustered heatmaps are the standard technique for visualizing one-way and two-way clustering results. In clustered heatmaps, rows and/or columns are reordered, resulting in a representation that shows the clusters as contiguous blocks. However, for biclustering results, where clusters can overlap, it is not possible to reorder the matrix in this way without duplicating rows and/or columns.
Results: We present Furby, an interactive visualization technique for analyzing biclustering results. Our contribution is twofold. First, the technique provides an overview of a biclustering result, showing the actual data that forms the individual clusters together with information about which rows and columns they share. Second, for fuzzy clustering results, the proposed technique additionally enables analysts to interactively set the thresholds that transform the fuzzy (soft) clustering into hard clusters that can then be investigated using heatmaps or bar charts. Changes in the membership value thresholds are immediately reflected in the visualization. We demonstrate the value of Furby by loading biclustering results applied to a multi-tissue dataset into the visualization.
Conclusions: The proposed tool allows analysts to assess the overall quality of a biclustering result. Based on this high-level overview, analysts can then interactively explore the individual biclusters in detail. This novel way of handling fuzzy clustering results also supports analysts in finding the optimal thresholds that lead to the best clusters.
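
The thresholding step that turns fuzzy memberships into hard clusters is simple to state in code. A minimal sketch with toy membership values and hypothetical thresholds (not Furby's force-directed layout or its actual data structures):

    def harden(row_membership, col_membership, row_threshold, col_threshold):
        """Convert fuzzy (soft) bicluster memberships into a hard cluster by
        keeping only rows/columns whose membership exceeds the thresholds."""
        rows = [r for r, m in row_membership.items() if m >= row_threshold]
        cols = [c for c, m in col_membership.items() if m >= col_threshold]
        return rows, cols

    # Toy fuzzy memberships for one bicluster (values in [0, 1]):
    row_membership = {"gene1": 0.9, "gene2": 0.55, "gene3": 0.1}
    col_membership = {"tissue1": 0.8, "tissue2": 0.4}

    # Adjusting the thresholds immediately changes the hard cluster,
    # mirroring the interactive threshold adjustment in the tool.
    print(harden(row_membership, col_membership, 0.5, 0.5))   # (['gene1', 'gene2'], ['tissue1'])
    print(harden(row_membership, col_membership, 0.8, 0.3))   # (['gene1'], ['tissue1', 'tissue2'])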

Manuscript (PDF, 1.9 MB)

Streit, M., Gratzl, S., Gillhofer, M., Mayr, A., Mitterecker, A. and Hochreiter, S., Furby: Fuzzy Force-Directed Bicluster Visualization, BMC Bioinformatics, 15(Suppl 6):S4, 2014.

Project Website

Show Me the Invisible: Visualizing Hidden Content


Content on computer screens is often inaccessible to users because it is hidden, e.g., occluded by other windows, outside the viewport, or overlooked. In search tasks, the efficient retrieval of sought content is important. Current software, however, only provides limited support to visualize hidden occurrences and rarely supports search synchronization crossing application boundaries. To remedy this situation, we introduce two novel visualization methods to guide users to hidden content.
Our first method generates awareness for occluded or out-of-viewport content using see-through visualization. For content that is either outside the screen's viewport or for data sources not opened at all, our second method shows off-screen indicators and an on-demand smart preview. To reduce the chances of overlooking content, we use visual links, i.e., visible edges, to connect the visible content or the visible representations of the hidden content. We show the validity of our methods in a user study, which demonstrates that our technique enables a faster localization of hidden content compared to traditional search functionality and thereby assists users in information retrieval tasks.

Geymayer, T., Steinberger, M., Lex, A., Streit, M., and Schmalstieg, D., Show Me the Invisible: Visualizing Hidden Content, ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2014), 2014.

Manuscript (PDF, 2.0 MB)
Project Website

Characterizing Cancer Subtypes using the Dual Analysis Approach in Caleydo


The comprehensive analysis and characterization of cancer subtypes is an important problem to which significant resources have been devoted in recent years. In this paper we integrate the dual analysis method, which uses statistics to describe both the dimensions and the rows of a high dimensional dataset, into StratomeX, a Caleydo view tailored to cancer subtype analysis. We introduce significant difference plots for showing the elements of a candidate cancer subtype that differ significantly from other subtypes, thus enabling analysts to characterize cancer subtypes. We also enable analysts to investigate how samples relate to the subtype they are assigned and to the other groups. Our approach gives analysts the ability to create well-defined candidate subtypes based on statistical properties. We demonstrate the utility of our approach in three case studies, where we show that we are able to reproduce findings from a published cancer subtype characterization.

Turkay, C., Lex, A., Streit, M., Pfister, HP. and Hauser, H., Characterizing Cancer Subtypes using the Dual Analysis Approach in Caleydo, Computer Graphics and Applications, 34(2), 38-47, 2014

Manuscript (PDF, 7.4 MB)

LineUp: Visual Analysis of Multi-Attribute Rankings


Rankings are a popular and universal approach to structure otherwise unorganized collections of items by computing a rank for each item based on the value of one or more of its attributes. This allows us, for example, to prioritize tasks or to evaluate the performance of products relative to each other. While the visualization of a ranking itself is straightforward, its interpretation is not because the rank of an item represents only a summary of a potentially complicated relationship between its attributes and those of the other items. It is also common that alternative rankings exist that need to be compared and analyzed to gain insight into how multiple heterogeneous attributes affect the rankings. Advanced visual exploration tools are needed to make this process efficient. In this paper we present a comprehensive analysis of requirements for the visualization of multi-attribute rankings. Based on these considerations, we propose a novel and scalable visualization technique - LineUp - that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights into which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can also be used to compare multiple alternative rankings on the same set of items, for example, over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study. The study shows that users are able to successfully solve complex ranking tasks in a short period of time.
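
The core ranking mechanism, normalizing heterogeneous attributes and combining them with user-defined weights, can be sketched in a few lines (the attribute names, weights, and data are hypothetical, and LineUp itself offers far richer combination and refinement options):

    def normalize(values):
        """Min-max normalize one attribute to [0, 1] so that attributes with
        different scales become comparable."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    def rank(items, attributes, weights):
        """Score each item by a weighted sum of its normalized attributes and
        return the items sorted from best to worst."""
        normalized = {a: normalize([item[a] for item in items]) for a in attributes}
        scores = [sum(weights[a] * normalized[a][i] for a in attributes)
                  for i in range(len(items))]
        order = sorted(range(len(items)), key=lambda i: scores[i], reverse=True)
        return [(items[i]["name"], round(scores[i], 3)) for i in order]

    universities = [
        {"name": "A", "citations": 90, "teaching": 60},
        {"name": "B", "citations": 70, "teaching": 95},
        {"name": "C", "citations": 50, "teaching": 80},
    ]
    # Changing the weights interactively re-ranks the items:
    print(rank(universities, ["citations", "teaching"], {"citations": 0.7, "teaching": 0.3}))
    print(rank(universities, ["citations", "teaching"], {"citations": 0.3, "teaching": 0.7}))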

Gratzl, S., Lex, A., Gehlenborg, N., Pfister, HP. and Streit, M., LineUp: Visual Analysis of Multi-Attribute Rankings, IEEE Transactions on Visualization and Computer Graphics (InfoVis 2013), 19(12), pp. 2277-2286, 2013

Manuscript (PDF, 1.1 MB)
Project Website

Entourage: Visualizing Relationships between Biological Pathways using Contextual Subsets


Biological pathway maps are highly relevant tools for many tasks in molecular biology. They reduce the complexity of the overall biological network by partitioning it into smaller manageable parts. While this reduction of complexity is their biggest strength, it is, at the same time, their biggest weakness. By removing what is deemed not important for the primary function of the pathway, biologists lose the ability to follow and understand cross-talks between pathways. Considering these cross-talks is, however, critical in many analysis scenarios, such as judging the effects of drugs. In this paper we introduce Entourage, a novel visualization technique that provides contextual information lost due to the artificial partitioning of the biological network, but at the same time limits the presented information to what is relevant to the analyst's task. We use one pathway map as the focus of an analysis and allow a larger set of contextual pathways. For these context pathways we only show the contextual subsets, i.e., the parts of the graph that are relevant to a selection. Entourage suggests related pathways based on similarities and highlights parts of a pathway that are interesting in terms of mapped experimental data. We visualize interdependencies between pathways using stubs of visual links, which we found effective yet not obtrusive. By combining this approach with visualization of experimental data, we can provide domain experts with a highly valuable tool. We demonstrate the utility of Entourage with case studies conducted with a biochemist who researches the effects of drugs on pathways. We show that the technique is well suited to investigate interdependencies between pathways and to analyze, understand, and predict the effect that drugs have on different cell types.

Lex, A., Partl, C., Kalkofen, D., Streit, M., Gratzl, S., Wassermann, A.M., Schmalstieg, D. and Pfister, HP., Entourage: Visualizing Relationships between Biological Pathways using Contextual Subsets, IEEE Transactions on Visualization and Computer Graphics (InfoVis 2013), 19(12), pp. 2536-2545, 2013

Manuscript (PDF, 2.9 MB)
Project Website

Rendering Gigaray Light Fields


We present a caching framework with a novel probability-based prefetching and eviction strategy applied to atomic cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate parameterization scheme for real-time rendering and caching; and light-field-cached volume rendering, which supports interactive exploration of large volumetric datasets using light-field rendering. We consider applications such as light-field photography and the visualization of large image stacks from modern scanning microscopes.
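
A minimal sketch of the probability-based prefetching and eviction idea over atomic cache units, assuming a hypothetical per-unit access-probability estimate (e.g. derived from viewer motion); this is an illustration of the principle, not the paper's caching framework:

    class LightFieldCache:
        """Keep only the cache units (e.g. light-field tiles) with the highest
        estimated access probability; evict the least probable when full."""

        def __init__(self, capacity, load_unit):
            self.capacity = capacity
            self.load_unit = load_unit          # loads a unit from disk/decoder
            self.units = {}                     # unit_id -> data

        def request(self, unit_id, probabilities):
            if unit_id not in self.units:
                self._make_room(probabilities)
                self.units[unit_id] = self.load_unit(unit_id)
            return self.units[unit_id]

        def prefetch(self, probabilities, budget=2):
            """Speculatively load the most probable units that are not cached yet."""
            candidates = sorted(probabilities, key=probabilities.get, reverse=True)
            for unit_id in candidates[:budget]:
                if unit_id not in self.units:
                    self.request(unit_id, probabilities)

        def _make_room(self, probabilities):
            while len(self.units) >= self.capacity:
                victim = min(self.units, key=lambda u: probabilities.get(u, 0.0))
                del self.units[victim]

    cache = LightFieldCache(capacity=3, load_unit=lambda uid: f"tile {uid}")
    p = {"t0": 0.9, "t1": 0.7, "t2": 0.2, "t3": 0.05}   # hypothetical access probabilities
    cache.prefetch(p)
    print(cache.request("t2", p), sorted(cache.units))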

Project Website

Birklbauer, C., Opelt, S. and Bimber, O., Rendering Gigaray Light Fields, In proceedings of Eurographics (Computer Graphics Forum), 32(2), 469-478, 2013

Source Code LFC2013

Light-Field Supported Fast Volume Rendering


Advances in imaging technology lead to a continuous increase in the size of image data sets. Modern scanning microscopes, for instance, produce image stacks with a megapixel lateral resolution and many hundreds to thousands of slices in the axial direction. This trend will continue – resulting in very large volumetric data sets that are difficult to explore interactively, since the complexity of volume rendering is proportional to the spatial and lateral resolution of the data. Light-field rendering is a fast and simple image-based rendering method that requires pre-computed image data. In volume rendering, each costly computed image is discarded as soon as the viewing parameters change, while the renderer becomes idle if the viewing parameters are not changed and the visualization does not need to be updated. We combine light-field rendering and volume rendering with two goals: we recycle previously rendered images and we use the idle times for filling a cache-managed light field. The final images are then composed from light-field rendering and from volume rendering – depending on the state of the light-field cache. This leads to a significant increase in rendering performance and to the ability to explore large volumetric datasets interactively.
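
The combination described above (prefer cached light-field views, fall back to volume rendering, and fill the cache during idle time) can be sketched as follows; all class and function names are hypothetical, and real view parameters would of course not match exactly but be interpolated:

    import time

    class HybridRenderer:
        """Combine a light-field cache with a slow volume renderer (sketch)."""

        def __init__(self, volume_render):
            self.volume_render = volume_render   # expensive: view params -> image
            self.light_field = {}                # cached view params -> image

        def render(self, view):
            # Prefer the cached light-field image; otherwise render the volume
            # and recycle the result by inserting it into the light field.
            if view in self.light_field:
                return self.light_field[view]
            image = self.volume_render(view)
            self.light_field[view] = image
            return image

        def fill_during_idle(self, candidate_views, time_budget_s=0.05):
            """Use idle time (no user interaction) to fill the light-field cache."""
            start = time.perf_counter()
            for view in candidate_views:
                if time.perf_counter() - start > time_budget_s:
                    break
                if view not in self.light_field:
                    self.light_field[view] = self.volume_render(view)

    renderer = HybridRenderer(volume_render=lambda v: f"image@{v}")
    print(renderer.render((0, 0)))               # slow path first, cached afterwards
    renderer.fill_during_idle([(0, 1), (0, 2)])  # idle-time prefill
    print(sorted(renderer.light_field))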

Birklbauer, C. and Bimber, O., Light-Field Supported Fast Volume Rendering, ACM Siggraph (poster), 2012

enRoute: Dynamic Path Extraction from Biological Pathway Maps for In-Depth Experimental Data Analysis


Pathway maps are an important source of information when analyzing functional implications of experimental data on biological processes. Associating large quantities of data with nodes on a pathway map and allowing in-depth analysis at the same time, however, is a challenging task. While a wide variety of approaches for doing so exist, they either do not scale beyond a few experiments or fail to represent the pathway appropriately. To remedy this, we introduce enRoute, a new approach for interactively exploring experimental data along paths that are dynamically extracted from pathways. By showing an extracted path side-by-side with experimental data, enRoute can present large amounts of data for every pathway node. It can visualize hundreds of samples, dozens of experimental conditions, and even multiple datasets capturing different aspects of a node at the same time. Another important property of this approach is its conceptual compatibility with arbitrary forms of pathways. Most notably, enRoute works well with pathways that are manually created, as they are available in large, public pathway databases. We demonstrate enRoute with pathways from the well-established KEGG database and expression as well as copy number datasets from humans and mice with more than 1,000 experiments at the same time. We validate enRoute in case studies with domain experts, who used enRoute to explore data for glioblastoma multiforme in humans and a model of steatohepatitis in mice.

Partl, C., Lex, A., Streit, M., Kalkofen, D., Kashofer, K. and Schmalstieg, D., enRoute: Dynamic Path Extraction from Biological Pathway Maps for In-Depth Experimental Data Analysis, In proceedings of the IEEE Symposium on Biological Data Visualization (BioVis'12), 2012.

Partl, C., Lex, A., Streit, M., Kalkofen, D., Kashofer, K. and Schmalstieg, D., enRoute: Dynamic Path Extraction from Biological Pathway Maps for Exploring Heterogeneous Experimental Datasets, BMC Bioinformatics, 14(Suppl 19):S3, 2013.

Manuscript (PDF, 945 KB)

StratomeX: Visual Analysis of Large-Scale Heterogeneous Genomics Data for Cancer Subtype Characterization


Identification and characterization of cancer subtypes are important areas of research that are based on the integrated analysis of multiple heterogeneous genomics datasets. Since there are no tools supporting this process, much of this work is done using ad-hoc scripts and static plots, which is inefficient and limits visual exploration of the data. To address this, we have developed StratomeX, an integrative visualization tool that allows investigators to explore the relationships of candidate subtypes across multiple genomic data types such as gene expression, DNA methylation, or copy number data. StratomeX represents datasets as columns and subtypes as bricks in these columns. Ribbons between the columns connect bricks to show subtype relationships across datasets. Drill-down features enable detailed exploration. StratomeX provides insights into the functional and clinical implications of candidate subtypes by employing small multiples, which allow investigators to assess the effect of subtypes on molecular pathways or outcomes such as patient survival. As the configuration of viewing parameters in such a multi-dataset, multi-view scenario is complex, we propose a meta visualization and configuration interface for dataset dependencies and data-view relationships.
StratomeX is developed in close collaboration with domain experts. We describe case studies that illustrate how investigators used the tool to explore subtypes in large datasets and demonstrate how they efficiently replicated findings from the literature and gained new insights into the data.

Lex, A., Streit, M., Schulz, H.-J., Partl, C., Schmalstieg, D., Park, P. J. and Gehlenborg, N., StratomeX: Visual Analysis of Large-Scale Heterogeneous Genomics Data for Cancer Subtype Characterization, Computer Graphics Forum (EuroVis 2012), vol. 31, no. 3, pp. 1175-1184, 2012.

Manuscript (PDF, 3.0 MB)

Procedural Texture Synthesis for Zoom-Independent Visualization of Multivariate Data


We propose a method for visualizing multivariate data in a view-dependent manner using a synthesized texture. The texture is generated using procedural random-phase Gabor noise and a specially constructed transfer function that is based on statistical properties of the noise. The procedural nature of texture synthesis allows us to alter the noise according to current viewing conditions by modifying its parameters. As a consequence, the textures are alias-free and data values can be interpreted at any scale. We show the applications of our method for displaying multivariate 2D weather data and for encoding additional information on top of a video without using color, thus leaving this communication channel for other purposes. We achieve simultaneous display of color and texture information by modifying the value component using the HSV color model, which allows us to ensure sufficient contrast for texture perception while keeping hue and saturation components intact. We provide the results of a user study, which shows significant advantage of our method over current methods with similar goals.
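
The compositing step that keeps hue and saturation intact while encoding the texture in the value channel can be sketched with a few lines of Python (the strength parameter is a hypothetical stand-in for the paper's noise-statistics-based transfer function):

    import colorsys

    def overlay_texture_on_color(rgb, texture_value, strength=0.3):
        """Modulate only the V component of an RGB color with a texture sample
        (texture_value in [0, 1]), keeping hue and saturation intact so the
        color channel stays available for other data."""
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        # Clamp V so there is always enough contrast left for texture perception.
        v = min(1.0, max(0.0, v + strength * (texture_value - 0.5)))
        return colorsys.hsv_to_rgb(h, s, v)

    # Usage: the same base color with a dark and a bright texture sample.
    base = (0.2, 0.5, 0.8)
    print(overlay_texture_on_color(base, 0.1))
    print(overlay_texture_on_color(base, 0.9))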

Khlebnikov, R., Kainz, B., Steinberger, M., Streit, M. and Schmalstieg, D., Procedural Texture Synthesis for Zoom-Independent Visualization of Multivariate Data, Computer Graphics Forum (EuroVis'12), vol. 31, no. 3, pp. 1355-1364, 2012.

Manuscript (PDF, 2.1 MB)

Model-Driven Design for the Visual Analysis of Heterogeneous Data



As heterogeneous data from different sources are being increasingly linked, it becomes difficult for users to understand how the data are connected, to identify what means are suitable to analyze a given data set, or to find out how to proceed for a given analysis task. We target this challenge with a new model-driven design process that effectively codesigns aspects of data, view, analytics, and tasks. We achieve this by using the workflow of the analysis task as a trajectory through data, interactive views, and analytical processes. The benefits for the analysis session go well beyond the pure selection of appropriate data sets and range from providing orientation or even guidance along a preferred analysis path to a potential overall speedup, allowing data to be fetched ahead of time. We illustrate the design process for a biomedical use case that aims at determining a treatment plan for cancer patients from the visual analysis of a large, heterogeneous clinical data pool. As an example for how to apply the comprehensive design approach, we present Stack’n’flip, a sample implementation which tightly integrates visualizations of the actual data with a map of available data sets, views, and tasks, thus capturing and communicating the analytical workflow through the required data sets.

Manuscript (PDF, 1.0 MB)

Streit, M., Schulz, H.-J., Lex, A., Schmalstieg, D. and Schumann, H., Model-Driven Design for the Visual Analysis of Heterogeneous Data, IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 6, pp. 998-1010, 2012.

Visualizing Uncertainty in Biological Expression Data


Expression analysis of ∼omics data using microarrays has become a standard procedure in the life sciences. However, microarrays are subject to technical limitations and errors, which render the data gathered likely to be uncertain. While a number of approaches exist to target this uncertainty statistically, it is hardly ever even shown when the data is visualized using, for example, clustered heatmaps. Yet, this is highly useful when trying not to omit data that is “good enough” for an analysis, which otherwise would be discarded as too unreliable by established conservative thresholds. Our approach addresses this shortcoming by first identifying the margin above the error threshold of uncertain, yet possibly still useful data. It then displays this uncertain data in the context of the valid data by enhancing a clustered heatmap. We employ different visual representations for the different kinds of uncertainty involved. Finally, it lets the user interactively adjust the thresholds, giving visual feedback in the heatmap representation, so that an informed choice on which thresholds to use can be made instead of applying the usual rule-of-thumb cut-offs. We exemplify the usefulness of our concept by giving details for a concrete use case from our partners at the Medical University of Graz, thereby demonstrating our implementation of the general approach.
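
The two-threshold idea (keeping a margin of uncertain but possibly still useful values between the error threshold and the usual conservative cut-off) can be sketched as follows; the threshold and data values are toy numbers, not from the paper:

    def classify(value, error_threshold, validity_threshold):
        """Split a measurement into 'valid', 'uncertain' (above the error
        threshold but below the usual conservative cut-off), or 'discarded'."""
        if value >= validity_threshold:
            return "valid"
        if value >= error_threshold:
            return "uncertain"     # still shown, but with an uncertainty encoding
        return "discarded"

    expression_values = [0.95, 0.62, 0.41, 0.12]
    # Adjusting the thresholds interactively changes which values survive:
    print([classify(v, error_threshold=0.3, validity_threshold=0.8)
           for v in expression_values])
    # ['valid', 'uncertain', 'uncertain', 'discarded']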

Holzhüter, C., Lex, A., Schmalstieg, D., Schulz H.-J., Schumann, H. and Streit, M., Visualizing Uncertainty in Biological Expression Data, In proceedings of the SPIE Conference on Visualization and Data Analysis (VDA'12), 2012.

Manuscript (PDF, 444 KB)

Light-Field Caching


With the continuously increasing sensor resolutions of cameras, light field imaging is becoming a more and more practical extension to conventional digital photography. It complements postprocessing by synthetic aperture control, refocusing, as well as perspective and field-of-view changes. To be a true alternative to classical 2D imaging, however, the spatial resolution of light fields must be in the same megapixel order as the resolution of today’s digital images. The additional angular resolution must also be adequately high to prevent sampling artifacts (in particular for synthetic refocusing). This quickly leads to gigabytes rather than megabytes of data that have to be rendered with limited graphics memory. We describe a light-field caching framework that makes it possible to render very large light fields in real-time.
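
A back-of-the-envelope calculation illustrates why such light fields exceed graphics memory (the resolutions below are hypothetical examples, not figures from the poster):

    # Rough memory footprint of an uncompressed light field:
    spatial = 8_000_000          # 8 megapixels per view, as in today's photographs
    angular = 16 * 16            # 16 x 16 viewing directions
    bytes_per_pixel = 4          # RGBA, 8 bit per channel

    total_bytes = spatial * angular * bytes_per_pixel
    print(f"{total_bytes / 1024**3:.1f} GiB")   # ~7.6 GiB, more than typical graphics memory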

Opelt, S. and Bimber, O., Light-Field Caching, ACM Siggraph (poster), 2011