Institute of Computer Graphics

2024

Jan Aerts
KU Leuven

From Complexity to Comprehensibility: an Integrative View on Biological and Agricultural Systems

Date: May 8th, 2024, 2:00 pm CEST

Room: Zoom

Abstract: Biology is messy and complex. In this talk I will explore how we can embrace this complexity and take a more holistic approach to understanding the intricate and multifaceted nature of biological and agricultural systems. Central to this discussion is visual analytics, a key tool that helps researchers embrace the inherent complexity and uncertainty in these fields and enables multimodal data integration. The presentation will delve into the nuances of data visualization and novel visual design, highlighting how they can be used as tools for thought and for generating new hypotheses. Furthermore, I will discuss the added value of topological data analysis and multilayer networks, demonstrating their efficacy in uncovering hidden patterns and connections in complex datasets.

This talk aims to illustrate how integrating these methods can lead to a deeper and more comprehensive understanding of complex biological and agricultural systems, paving the way for more informed decision-making and innovative research breakthroughs.

About the Speaker: Jan Aerts is a Professor at KU Leuven with a background in omics and bioinformatics, having contributed to large model-organism sequencing projects. In 2010 he shifted his focus to data visualization. He supports domain experts and non-experts in making sense of complex data, using data visualization, visual analytics, and data analysis. His work aims to help experts define new hypotheses and grasp the intricacies of complex data involving interactions, feedforward and/or feedback loops, time sensitivity, hidden subpopulations and patterns, etc. He is convinced that data visualization is a necessary complement to machine learning and AI (by allowing the expert to drive the analysis and take responsibility for data-driven decisions) as well as to statistics (by embracing the full complexity of the data).

Micah Corah
Colorado School of Mines

Active Perception for Robot Teams: From Visual Search to Videography

Date: March 20th, 2024, 5:00 pm CET

Room: Zoom / YouTube

Abstract: Over the last ten years, drones have become increasingly integrated into our society: drones film our sports, inspect our crops, survey our geography, and document our disaster sites. Across these domains, drones are valuable because they are particularly adept at maneuvering cameras and sensors to ideal vantage points in diverse environments. However, these applications often still involve either manual operation or extensive operator interaction, and the teams deploying these systems can consist of multiple operators per robot. Closing this gap will require both more effective coordination between robots and a better understanding of application domains.

My work focuses on enabling aerial robots to make intelligent decisions about how to sense, sample, and observe their environments, both individually and in groups. I will start this talk by discussing active perception with individual robots in the context of searching for survivors in a subterranean environment; I will discuss how robots can quickly navigate and map such environments with careful attention to dynamics, camera views, and the interactions between the two. Given individual robots endowed with the ability to intelligently observe and inspect, how can we develop teams that coordinate effectively and efficiently? Toward this end, I will turn to the problem of autonomously filming a group of people, such as a team sport or a dramatic performance. By applying the rich theory of submodular and combinatorial optimization, simple algorithms can take individual robots capable of filming autonomously and augment them with the ability to coordinate in teams. I will then present a distributed submodular optimization algorithm I developed, Randomized Sequential Partitions (RSP), that enables this approach to scale to large numbers of robots, and I will discuss how to apply it to multi-robot videography by carefully designing objectives and reasoning in terms of pixel densities.
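As a rough illustration of the kind of submodular coordination the abstract alludes to, here is a minimal sequential-greedy sketch in Python. This is not the speaker's RSP algorithm; the objective, data, and function names are hypothetical, chosen only to show why greedy selection works well for monotone submodular objectives such as view coverage.

```python
# Illustrative sketch (not RSP): sequential greedy maximization of a
# monotone submodular coverage objective. Each "robot" in turn picks the
# candidate view whose marginal coverage gain is largest, given the views
# already chosen by earlier robots. All names and data are hypothetical.

def coverage(selected_views):
    """Submodular objective: number of distinct targets covered."""
    covered = set()
    for view in selected_views:
        covered |= view
    return len(covered)

def sequential_greedy(robots_views):
    """Each robot greedily picks its best view given prior picks.

    For monotone submodular objectives, this sequential scheme is known
    to achieve at least 1/2 of the optimal value.
    """
    chosen = []
    for views in robots_views:  # one candidate-view list per robot
        best = max(views, key=lambda v: coverage(chosen + [v]))
        chosen.append(best)
    return chosen

# Hypothetical example: each view is the set of target IDs visible from it.
robot_a = [{1, 2}, {3}]
robot_b = [{1, 2}, {2, 3, 4}]
picks = sequential_greedy([robot_a, robot_b])
print(coverage(picks))  # robot_b avoids the redundant {1, 2} view
```

Because the second robot accounts for what the first already covers, it picks the complementary view {2, 3, 4} rather than duplicating {1, 2}; distributed variants of this idea aim to preserve such guarantees while reducing the sequential communication.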

About the Speaker: Micah Corah is an Assistant Professor in Computer Science at the Colorado School of Mines, where his research focuses on aerial robots, active perception, and multi-robot teams. Before that, Micah was a postdoc in the AirLab at Carnegie Mellon University, where he worked on developing teams of flying cameras for filming and reconstructing groups of moving people. Micah also competed with team CoSTAR in the DARPA Subterranean Challenge as a postdoc at JPL, where he focused on aerial autonomy and multi-robot exploration in caves and mines. He completed his Ph.D. in Robotics at Carnegie Mellon University in fall 2020. His thesis work involved active perception, exploration, and target tracking for aerial robots with a focus on distributed perception planning; during this time Micah developed the first submodular optimization algorithms for multi-robot perception planning that scale to large numbers of robots while maintaining optimization guarantees.

We would like to point out that when playing the video, data may be transmitted to external parties. Learn more by reading our data privacy policy

Amanda Prorok
University of Cambridge

Using Graph Neural Networks to Learn to Communicate, Cooperate, and Coordinate in Multi-Robot Systems

Date: January 30th, 2024, 2:00 pm CET

Room: HS 1 / Zoom

Abstract: How are we to orchestrate large teams of agents? How do we distill global goals into local robot policies? Machine learning has revolutionized the way we address these questions by enabling us to automatically synthesize decentralized agent policies from global objectives. In this presentation, I will first describe how we leverage data-driven approaches to learn interaction strategies that lead to coordinated and cooperative behaviors. I will introduce our work on Graph Neural Networks and show how we use such architectures to learn multi-agent policies through differentiable communication channels. I will present some of our results on cooperative perception, coordinated path planning, and close-proximity quadrotor flight. To conclude, I will discuss the impact of policy heterogeneity on agent alignment and sim-to-real transfer.

About the Speaker: Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at the University of Cambridge, and a Fellow of Pembroke College. Her lab's research focuses on multi-agent and multi-robot systems. Its mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Together with her lab, Prorok pioneered methods for differentiable communication between learning agents. Their research draws on methods from machine learning, planning, and control, and has numerous applications, including automated transport and logistics, environmental monitoring, surveillance, and search.

Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA. She completed her PhD at EPFL, Switzerland. She has been honored with numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis in Computer Science at EPFL. She serves as Associate Editor for IEEE Robotics and Automation Letters (RA-L) and for Autonomous Robots (AURO).


Michael Burch
University of Applied Sciences Graubünden

Eye Tracking in Visual Analytics

Date: January 10th, 2024, 2:00 pm CET

Room: S3 055 / Zoom

Abstract: Visual analytics tools are complex visual interfaces that can be inspected from many perspectives: the visualizations, user interface components, interaction techniques, displays, and algorithmic techniques, but above all the users, expert or non-expert, with their experience levels and tasks at hand. No matter how complex such a visual analytics tool is and what application field it focuses on, user evaluation is a powerful way to investigate whether the tool is understandable and useful or whether design flaws create challenges for its users. Eye tracking is becoming more and more prominent in visual analytics as a means of understanding user behavior through visual attention and visual scanning strategies. However, the recorded eye movement data is itself a new, complex data source, for which visual analytics is in turn required to find patterns, anomalies, insights, and knowledge.

About the Speaker: Michael Burch studied computer science and mathematics at Saarland University in Saarbrücken, Germany. He received his PhD from the University of Trier in 2010 in the fields of information visualization and visual analytics. After eight years as a postdoc at the Visualization Research Center (VISUS) in Stuttgart, he moved to the Eindhoven University of Technology (TU/e) as an assistant professor for visual analytics. Since October 2020 he has been working as a lecturer in visualization at the University of Applied Sciences in Chur, Switzerland. Michael Burch serves on many international program committees and has published more than 190 conference papers and journal articles in the field of visualization. His main interests are information visualization, visual analytics, eye tracking, and data science.
