Institute of Computer Graphics

2024

Robert Krueger
New York University

Scalable Visual Analytics for Digital Cancer Pathology

Date: June 19th, 2024, 2:00 pm CEST

Room: S3 055 / Zoom

Abstract:

With new tumor imaging technologies, cancer biology has entered a digital era. Artificial intelligence has enabled the processing and analysis of imaging data at unprecedented scale. While processing pipelines are rapidly evolving, pre-clinical research performed with the data is highly experimental and exploratory in nature, making integration of biomedical experts essential to steer workflows and interpret results.
In my talk, I will introduce a scalable rendering framework that enables users to load, display, and interactively navigate terabyte-sized multiplexed images of cancer tissue. I will then present visual analytics interfaces that build on this framework and support cell biologists and pathologists in their workflows. By leveraging both unsupervised and supervised learning in an interactive setting, cells in the tissue can be iteratively classified into tumor, immune, and stromal cell-type hierarchies. Subsequently, spatial neighborhoods of cells are quantified to query and cluster recurring, biologically meaningful cellular interactions both within and across specimens. Once relevant biological patterns are identified, a novel focus-and-context lensing interface enables pathologists to further assess and annotate these regions of interest in an intuitive fashion. I will conclude with an outlook on my future research agenda, addressing the transition to volumetric and time-varying datasets, detailed analysis of cell-cell interaction profiles in high-resolution 3D data, and the joint exploration of multimodal images with increasing amounts of spatially referenced sequencing data.
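The neighborhood quantification step mentioned above can be sketched in a few lines (an illustrative toy example under assumed data shapes, not the actual pipeline): for each cell, count the cell types among its k nearest neighbors, producing a composition vector per cell that can then be clustered across specimens.

```python
import numpy as np

def neighborhood_compositions(positions, types, n_types, k=5):
    """For each cell, count the cell types among its k nearest neighbors.

    positions: (n, 2) array of cell centroids
    types:     (n,) array of integer cell-type labels in [0, n_types)
    Returns an (n, n_types) matrix of neighborhood composition counts.
    """
    n = len(positions)
    comps = np.zeros((n, n_types))
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]  # skip the cell itself (distance 0)
        for j in neighbors:
            comps[i, types[j]] += 1
    return comps

# Toy "tissue": two spatially separated groups of two hypothetical cell types
rng = np.random.default_rng(0)
pos = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
typ = np.array([0] * 20 + [1] * 20)
comps = neighborhood_compositions(pos, typ, n_types=2, k=5)
print(comps[0])  # the first cell's five nearest neighbors are all type 0
```

The composition vectors could then be fed to any standard clustering routine to surface recurring neighborhood motifs; a real pipeline at terabyte scale would of course use a spatial index rather than the brute-force distance computation shown here.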

About the Speaker:

Robert Krueger is an assistant professor in the Department of Computer Science and Engineering at New York University (NYU) and a member of VIDA, the NYU Visualization, Imaging, and Data Analysis Center. Krueger's research lies in the field of data visualization and visual analytics for spatial and spatially referenced multivariate data, with a focus on biomedical visualization. He is the lead organizer of the transatlantic Visualization and Image Data Management Group (VIM) and a co-organizer of the Spatial Biology Association (SBA), where he has been closely collaborating with leading biologists and oncologists in the field of computational pathology and cancer research.
Previously, Dr. Krueger was a postdoctoral fellow and subgroup leader at the Visual Computing Group (VCG), School of Engineering and Applied Sciences at Harvard University, and a senior research scientist at the Laboratory of Systems Pharmacology, Harvard Medical School. Dr. Krueger received his Ph.D. degree (Dr. rer. nat.) in Computer Science at the Institute for Visualization and Interactive Systems, University of Stuttgart, in 2017. Krueger's work is published in leading visualization journals, including IEEE Transactions on Visualization and Computer Graphics (TVCG) and Computer Graphics Forum (CGF), as well as in biological journals including Cell and Nature Methods.

Michael Behrisch
Utrecht University

Human-in-the-(Exploration-)Loop: Visual Pattern-Driven Exploration of Big Datasets

Date: May 22nd, 2024, 2:00 pm CEST

Room: YouTube

Abstract: Visual Analytics (VA) is the science of analytical reasoning in big and complex datasets, facilitated by interactive visual interfaces. Computers are capable of processing enormous amounts of data, while humans can creatively pursue their analytical tasks by incorporating their general knowledge. VA systems unite these strengths by allowing the user to interact with, understand, and creatively steer the automatic data analysis process.

VA, however, faces challenges: highly specialized expert visualizations, the need for expert model selection, and complex combinations of visualization and analysis techniques that blunt the impact of interaction. My research pursues a Visual Quality Metrics (VQM) driven approach to overcome these drawbacks. By using quantitative VQMs as visual pattern extractors, analysts can reason over large, complex datasets by exploring interpretable visual patterns in the visualizations.

This talk will demonstrate the overall VQM concept for detecting and making use of meaningful visual patterns, with the aim of making data analysis more accessible, effective, efficient, transparent, and reliable. I will show how VQMs and rapid human-in-the-loop interactions can enhance big data exploration by enabling pattern-driven data exploration without relying on specialized visualizations or analysis techniques.
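As a toy illustration of the VQM idea (my own minimal sketch, not a metric from the talk), one can treat a simple quantitative score, here absolute Pearson correlation, as a visual pattern extractor and rank candidate scatterplot views by it, surfacing the most pattern-rich views to the analyst:

```python
import numpy as np

def correlation_vqm(x, y):
    """Toy visual quality metric: absolute Pearson correlation of a 2D view."""
    return abs(np.corrcoef(x, y)[0, 1])

# Small multivariate dataset with one planted linear pattern between columns 0 and 1
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))
data[:, 1] = data[:, 0] * 2 + rng.normal(scale=0.1, size=200)

# Score every axis pair and pick the view the metric rates highest
scores = {(i, j): correlation_vqm(data[:, i], data[:, j])
          for i in range(4) for j in range(i + 1, 4)}
best = max(scores, key=scores.get)
print(best)  # the planted pair (0, 1) scores highest
```

Real VQMs target far richer patterns (clusters, outliers, trends) than plain correlation, but the workflow is the same: compute a score per candidate view, then let the analyst explore the top-ranked views interactively.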

About the Speaker: Michael Behrisch has been an Assistant Professor, now tenured, at Utrecht University in the Netherlands since 2017. Behrisch's research focuses on data visualization, visual analytics, and human-computer interaction, contributing to areas such as the visualization of large graphs, interaction techniques, and the evaluation of visualization systems. He is known for his pioneering work on quality metrics for information visualization, which provide frameworks to evaluate and judge the quality and effectiveness of data visualizations. His work centers on interdisciplinary collaborations across domains, with a particular focus on multivariate (knowledge) graph visualization and multivariate time-series exploration. Behrisch regularly publishes at top-tier venues such as IEEE VIS, EuroVis, and IEEE Transactions on Visualization and Computer Graphics (TVCG), and he has served as a reviewer for these leading visualization conferences and journals.

Prior to joining Utrecht University, Behrisch held positions as a postdoctoral researcher at Harvard's Visual Computing Group and the Visual Analytics Laboratory at Tufts. Behrisch's research has made "broad and original contributions" as highlighted by his strong publication record of over 60 well-cited papers across top venues in the field of visualization and human-computer interaction.

We would like to point out that when playing the video, data may be transmitted to external parties. Learn more by reading our data privacy policy.

Jan Aerts
KU Leuven

From Complexity to Comprehensibility: an Integrative View on Biological and Agricultural Systems

Date: May 8th, 2024, 2:00 pm CEST

Room: YouTube

Abstract: Biology is messy and complex. In this talk, I will explore how we can embrace this complexity and take a more holistic approach to understanding the intricate and multifaceted nature of biological and agricultural systems. Central to this discussion is the role of visual analytics, a key tool that helps researchers embrace the inherent complexity and uncertainty in these fields and allows for multimodal data integration. The presentation will delve into the nuances of data visualization and novel visual design, highlighting how they can be used as tools for thought and for generating new hypotheses. Furthermore, I will discuss the added value of topological data analysis and multilayer networks, demonstrating their efficacy in uncovering hidden patterns and connections in complex datasets.

This talk aims to illustrate how integrating these methods can lead to a deeper and more comprehensive understanding of complex biological and agricultural systems, paving the way for more informed decision-making and innovative research breakthroughs.

About the Speaker: Jan Aerts is a Professor at KU Leuven with a background in omics and bioinformatics, where he contributed to large model-organism sequencing projects. In 2010 he shifted his focus to data visualization. He supports domain experts and non-experts in making sense of complex data, using data visualization, visual analytics, and data analysis. His work aims to help the expert define new hypotheses and grasp the intricacies of complex data involving interactions, feedforward and/or feedback loops, time sensitivity, hidden subpopulations and patterns, etc. It is his conviction that data visualization is a necessary complement to machine learning and AI (by allowing the expert to drive the analysis and take responsibility for data-driven decisions) as well as to statistics (by embracing the full complexity of the data).


Micah Corah
Colorado School of Mines

Active Perception for Robot Teams: From Visual Search to Videography

Date: March 20th, 2024, 5:00 pm CET

Room: YouTube

Abstract: Over the last ten years, drones have become increasingly integrated into our society: drones film our sports, inspect our crops, survey our geography, and assess our disaster sites. Across these domains, drones are key because they are particularly adept at maneuvering cameras and sensors to ideal vantage points in diverse environments. However, these applications often still involve either manual operation or extensive operator interaction, and the teams deploying these systems can consist of multiple operators per robot. Bridging this gap will require both more effective coordination between robots and a better understanding of application domains.

My work focuses on enabling aerial robots to make intelligent decisions about how to sense, sample, and observe their environments, both individually and in groups. I will start this talk by discussing active perception with individual robots in the context of searching for survivors in a subterranean environment; I will discuss how robots can quickly navigate and map such environments with careful attention to dynamics, camera views, and the interactions between the two. Given individual robots endowed with the ability to intelligently observe and inspect, how can we develop teams that coordinate effectively and efficiently? Toward this end, I will turn to the problem of autonomously filming a group of people, such as a team sport or a dramatic performance. By applying the rich theory of submodular and combinatorial optimization, simple algorithms can take individual robots able to film autonomously and augment them with the ability to coordinate in teams. I will then present a distributed submodular optimization algorithm I developed (Randomized Sequential Partitions, or RSP) that enables this approach to scale to large numbers of robots, and I will discuss how to apply this approach to multi-robot videography by carefully designing objectives and reasoning in terms of pixel densities.
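The sequential greedy strategy that underlies this style of submodular multi-robot coordination can be sketched as follows (a simplified illustration with made-up candidate views, not the RSP algorithm itself): robots choose views one after another, each maximizing the marginal gain of a submodular coverage objective given the choices made so far. For monotone submodular objectives, this sequential greedy assignment carries a well-known 1/2 approximation guarantee.

```python
def coverage(selected_views):
    """Submodular objective: number of distinct targets covered by the chosen views."""
    covered = set()
    for view in selected_views:
        covered |= view
    return len(covered)

def sequential_greedy(robots_views):
    """Each robot in turn picks the candidate view with the largest marginal gain.

    robots_views: list (one entry per robot) of candidate views,
                  each view given as a set of target ids it covers.
    """
    chosen = []
    for views in robots_views:
        gains = [coverage(chosen + [v]) - coverage(chosen) for v in views]
        chosen.append(views[gains.index(max(gains))])
    return chosen

# Three hypothetical robots, each with two candidate camera views over targets 0..5
robots = [
    [{0, 1, 2}, {0, 1}],
    [{1, 2}, {3, 4}],
    [{4}, {5}],
]
plan = sequential_greedy(robots)
print(coverage(plan))  # 6: the greedy plan covers all six targets
```

Randomizing and partitioning the planning sequence, as in the RSP approach described in the abstract, is what lets this kind of greedy assignment run in parallel across many robots rather than strictly one after another.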

About the Speaker: Micah Corah is an Assistant Professor in Computer Science at the Colorado School of Mines, where his research focuses on aerial robots, active perception, and multi-robot teams. Before that, Micah was a postdoc in the AirLab at Carnegie Mellon University, where he worked to develop teams of flying cameras for filming and reconstructing groups of moving people. Micah also competed with team CoSTAR in the DARPA Subterranean Challenge while a postdoc at JPL, where he focused on aerial autonomy and multi-robot exploration in caves and mines. He completed his Ph.D. in Robotics at Carnegie Mellon University in fall 2020. His thesis work involved active perception, exploration, and target tracking for aerial robots with a focus on distributed perception planning; during this time, Micah developed the first submodular optimization algorithms for multi-robot perception planning that scale to large numbers of robots while maintaining optimization guarantees.


Amanda Prorok
University of Cambridge

Using Graph Neural Networks to Learn to Communicate, Cooperate, and Coordinate in Multi-Robot Systems

Date: January 30th, 2024, 2:00 pm CET

Room: HS 1 / YouTube

Abstract: How are we to orchestrate large teams of agents? How do we distill global goals into local robot policies? Machine learning has revolutionized the way in which we address these questions by enabling us to automatically synthesize decentralized agent policies from global objectives. In this presentation, I first describe how we leverage data-driven approaches to learn interaction strategies that lead to coordinated and cooperative behaviors. I will introduce our work on Graph Neural Networks, and show how we use such architectures to learn multi-agent policies through differentiable communication channels. I will present some of our results on cooperative perception, coordinated path planning, and close-proximity quadrotor flight. To conclude, I discuss the impact of policy heterogeneity on agent alignment and sim-to-real transfer.
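A single message-passing step of the kind of architecture described above might look roughly like this (a minimal sketch with arbitrary dimensions and random weights, not the actual model from the talk): each agent averages the feature messages of its neighbors over the communication graph and combines them with its own features. Because the whole step is differentiable, the communication itself can be trained end to end.

```python
import numpy as np

def gnn_layer(features, adjacency, w_self, w_neigh):
    """One message-passing step: aggregate neighbor features, then transform.

    features:  (n_agents, d_in) local observations
    adjacency: (n_agents, n_agents) 0/1 communication graph, no self-loops
    """
    messages = adjacency @ features                        # sum over neighbors
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = messages / deg                              # mean aggregation
    return np.tanh(features @ w_self + messages @ w_neigh)

# Three agents on a line-graph communication topology: 0 - 1 - 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
h = gnn_layer(feats, adj, rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
print(h.shape)  # (3, 2): one 2-dimensional output (e.g. an action) per agent
```

Stacking several such layers lets information propagate over multiple communication hops, which is how local observations are distilled into coordinated team-level behavior.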

About the Speaker: Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology, at Cambridge University, and a Fellow of Pembroke College. Her lab's research focuses on multi-agent and multi-robot systems. Their mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Together with her lab, Prorok pioneered methods for differentiable communication between learning agents. Their research brings in methods from machine learning, planning, and control, and has numerous applications, including automated transport and logistics, environmental monitoring, surveillance, and search.

Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA. She completed her PhD at EPFL, Switzerland. She has been honored with numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis at EPFL in Computer Science. She serves as an Associate Editor for IEEE Robotics and Automation Letters (RA-L) and for Autonomous Robots (AURO).


Michael Burch
University of Applied Sciences Graubünden

Eye Tracking in Visual Analytics

Date: January 10th, 2024, 2:00 pm CET

Room: S3 055 / YouTube

Abstract: Visual analytics tools are complex visual interfaces that can be inspected from many perspectives: the visualizations, user-interface components, interaction techniques, displays, and algorithmic techniques, but above all the users, expert or non-expert, with their experience levels and tasks at hand. No matter how complex such a visual analytics tool is and what application field it focuses on, user evaluation is a powerful concept for investigating whether the tool is understandable and useful, or whether design flaws create challenges for its users. Eye tracking is becoming increasingly prominent in visual analytics as a means to understand user behavior based on visual attention and visual scanning strategies. However, the recorded eye movement data constitutes a new, complex data source for which visual analytics is required once again to find patterns, anomalies, insights, and knowledge.
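To make this concrete, a classic first step in analyzing raw gaze recordings is grouping samples into fixations. The sketch below is a simplified dispersion-threshold (I-DT-style) detector with made-up thresholds and data, not a tool from the talk: a run of consecutive gaze samples becomes a fixation whenever it stays within a small spatial dispersion.

```python
def detect_fixations(samples, max_dispersion=1.0, min_length=3):
    """Simplified I-DT: group consecutive gaze samples into fixations.

    samples: list of (x, y) gaze points recorded at a fixed sampling rate.
    Returns a list of (start_index, end_index) fixation spans, inclusive.
    """
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, start = [], 0
    while start < len(samples) - min_length + 1:
        end = start + min_length
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the samples stay spatially compact
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations

# Toy gaze trace: a fixation near (0, 0), a saccade, then a fixation near (5, 5)
trace = [(0, 0), (0.1, 0.1), (0.2, 0), (2, 2),
         (5, 5), (5.1, 5), (5, 5.1), (5.2, 5.1)]
print(detect_fixations(trace))  # [(0, 2), (4, 7)]
```

The resulting fixation sequences (scanpaths) are exactly the kind of derived data source the abstract refers to, which in turn calls for visual analytics to reveal attention patterns and scanning strategies.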

About the Speaker: Michael Burch studied computer science and mathematics at Saarland University in Saarbrücken, Germany. He received his PhD from the University of Trier in 2010 in the fields of information visualization and visual analytics. After eight years as a postdoc at the Visualization Research Center (VISUS) in Stuttgart, he moved to Eindhoven University of Technology (TU/e) as an assistant professor for visual analytics. Since October 2020 he has been working as a lecturer in visualization at the University of Applied Sciences in Chur, Switzerland. Michael Burch serves on many international program committees and has published more than 190 conference papers and journal articles in the field of visualization. His main interests are information visualization, visual analytics, eye tracking, and data science.
