Visualization and Analysis for Decision Making using XCT Data
Date: October 11th, 2023, 14:00 CEST
Room: S3 055 / YouTube / Zoom
Abstract: Visual analysis and visualization of primary and secondary X-ray computed tomography (XCT) data have become highly attractive for boosting research endeavors in the materials science domain. Research on complex material systems, such as carbon fiber reinforced polymers (CFRP) or graphite nanosphere battery materials, requires a detailed understanding of internal structures, their characteristics, and their correlations to support decision making. Using novel methods and techniques from interactive visualization of spatial and quantitative data, uncertainty quantification and visualization, visual parameter space analysis, comparative visualization and ensemble analysis, visual analysis of time-varying data, as well as many other research domains, a multitude of different phenomena may now be studied at multiple scales, in different dimensions, or using different modalities. This was simply impossible before.
This talk presents computer science contributions in terms of novel algorithms, techniques, and data structures that advance visual analysis and visualization to provide insights into complex material systems. For each area, the problem statements, research questions, and contributions are discussed. The methods and techniques to be presented focus on three distinct technical areas of visual analysis and visualization of XCT data:
1. Interactive visualization of spatial and quantitative data: To reveal insight into complex objects, MObjects is discussed as an aggregation and exploration technique, which computes average volumetric representations from selections of individual objects of interest. To analyze several of these average objects and compare them regarding their individual characteristics, visual analysis techniques as presented in FiberScout facilitate a detailed exploration of spatial data together with derived quantitative data.
2. Visual parameter space analysis (vPSA): To explore the space of possible parameter combinations of algorithms and their ensembles of results, methods and techniques in this area guide users in finding adequate input parameter sets that lead to optimal output results. The techniques addressed here focus on exploring and analyzing the input parameter spaces of segmentation and reconstruction algorithms as well as the effects on their respective result spaces.
3. Comparative visualization and ensemble analysis: Comparing the many members of an ensemble as generated by vPSA is difficult, tedious, and error-prone, a problem aggravated by the often only subtle differences between members. Dynamic Volume Lines is a novel technique for the visual analysis and comparison of large sets of 3D volumes that combines linearization techniques with interactive data exploration. The technique is accompanied by comparative visualization in the spatial domain to link the abstracted data back to real-world representations.
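As a rough illustration of the averaging idea behind MObjects, a minimal NumPy sketch follows. This is an assumption-laden toy, not the actual implementation: the real technique operates on segmented, aligned objects of interest extracted from XCT scans, and the helper name `mean_object` is hypothetical.

```python
import numpy as np

def mean_object(volumes):
    """Average a selection of equally sized, pre-aligned object volumes.
    Hypothetical helper: the real method also handles selection and alignment."""
    return np.stack(volumes, axis=0).mean(axis=0)

# toy selection: three small "objects of interest" as 4x4x4 intensity volumes
rng = np.random.default_rng(1)
selection = [rng.random((4, 4, 4)) for _ in range(3)]
average_object = mean_object(selection)
```

The average volume can then be rendered like any single object, letting an analyst study the typical shape of a whole class of objects at once.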
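The vPSA workflow of sweeping an input parameter and scoring each result can be sketched as follows. The toy data, the threshold parameter, and the use of the Dice coefficient as the quality measure are assumptions for illustration, not the methods from the talk.

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary segmentations (1.0 = identical)."""
    inter = np.logical_and(seg, ref).sum()
    return 2 * inter / (seg.sum() + ref.sum())

# toy input: a random volume whose reference segmentation uses threshold 0.5
rng = np.random.default_rng(0)
volume = rng.random((8, 8, 8))
reference = volume > 0.5

# sweep the threshold parameter and score each result against the reference
thresholds = [round(t, 1) for t in np.arange(0.1, 1.0, 0.1)]
results = {t: dice(volume > t, reference) for t in thresholds}
best = max(results, key=results.get)
```

A vPSA tool visualizes the whole `results` map rather than just reporting `best`, so users see how sensitive the output quality is to the parameter choice.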
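The linearization step of point 3 can be sketched in a few lines. Dynamic Volume Lines uses a Hilbert space-filling curve to preserve spatial locality; a plain raster scan stands in for it here, and the pairwise-difference helper is a hypothetical stand-in for the tool's interactive comparison.

```python
import numpy as np

def linearize(volume):
    """Map a 3D volume to a 1D intensity profile via a fixed scan order.
    (A raster order stands in for the Hilbert curve used by the real technique.)"""
    return volume.ravel(order="C")

def pairwise_max_difference(profiles):
    """Maximum absolute voxel-wise difference between every pair of members."""
    n = len(profiles)
    diffs = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            diffs[i, j] = np.abs(profiles[i] - profiles[j]).max()
    return diffs

# toy ensemble: two nearly identical volumes and one outlier
rng = np.random.default_rng(2)
base = rng.random((4, 4, 4))
members = [base, base + 0.01, base + 0.5]
profiles = [linearize(v) for v in members]
diffs = pairwise_max_difference(profiles)
```

Plotting each profile as one line over the shared 1D axis makes the subtle differences between ensemble members visible at a glance, which is exactly what a side-by-side 3D rendering struggles with.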
About the Speaker: Christoph Heinzl received his PhD degree in computer science from TU Wien in the field of visualization and analysis of industrial XCT data. He was awarded the habilitation (venia docendi) in Computer Science from TU Wien in 2022. He is currently professor for cognitive sensor systems at the University of Passau and leads the research group for knowledge-based image processing and visualization at the Fraunhofer IIS Development Center for X-ray Technology. His current research covers visual analysis and visualization of "rich" XCT data, a research domain in which he has published more than 100 papers, more than 36 of them peer-reviewed, four book chapters, and a patent. He has acquired various applied and basic research grants on the national and European level (xCTing, X-Pro, AugmeNDT, COMPARE, BeyondInspection, NanoXCT, QUICOM, InteraQCT, ADAM, SmartCT, SimCT, etc.). His research interests focus on, but are not limited to, the following areas: scientific visualization, visual analytics, visual parameter space analysis, visual analysis of spatio-temporal data (time-varying or 4D XCT data), visual analysis of ensemble data, comparative visualization, multi-modal data analysis and visualization, immersive analytics, cross-virtuality analytics, virtual and augmented reality in visualization, machine learning, visual optimality and stability analysis of XCT scans, parameter sensitivity analysis and visualization, and uncertainty visualization, for non-destructive testing and beyond.
The Power of Collectives: Exploring LLM-Driven Agents at GoodAI
Date: 14.06.2023, 13:45 CEST
Room: S3 055 / YouTube / Zoom
About the Speakers: Together with his colleagues, Jan Feyereisl is actively investigating various aspects of GoodAI's LLM-driven multi-agent architectures, focusing on facets at the intersection of disciplines ranging from machine learning, complexity science, and computational mechanics all the way to psychology. Jan is particularly interested in the dynamics of collective computation and learning. He is also the Executive Director of the AI Roadmap Institute.
Olga Afanasjeva is an AI evangelist with a background in arts and social sciences, pursuing her passion for discovery and redefining the limits of what is possible. She leads the Drones project at GoodAI.
Jaroslav Vitku holds a Ph.D. in Artificial Intelligence and Biocybernetics and is actively working on R&D for the GoodAI Drones project, applying his experience in deep and multi-agent reinforcement learning, various multi-agent learning techniques, as well as open-ended and online learning.
Petr Hlubuček has a background in Computer Science and Biology and is currently working on the GoodAI Drones project. Along with a passion for robotics and hardware prototyping, his research interests include applying large language models to use cases in robotics, interpretability and visualization of RNN-based models, attention mechanisms, meta-learning, and program induction.
Abstract: This talk provides an overview of the research conducted at GoodAI, an AI company based in Prague, Czech Republic. The first part, led by Jan Feyereisl, Senior Research Scientist at GoodAI, delves into the core research on LLM-driven agents. We are developing agents with the capability to self-organize into applications, developing their own code and user interfaces. Through user-specified goals and iterative feedback, the agents collaborate collectively to achieve desired outcomes. Jan also highlights GoodAI's video game project, where LLM-driven agents exhibit emergent behaviors, leading to unexpected and amusing interactions. The second part is covered by the GoodAI Drones project team and illustrates the team's research on current use cases in public safety and security in South Africa. GoodAI plans to integrate LLM technology into drone control, enabling communication with the system through natural language, as well as experimenting with LLM-driven collective behaviors. The goal is to leverage the power of collaborative groups or swarms of robots through a simple and intuitive user interface, bridging the gap in accessibility of advanced drone technology in places where safety is still a luxury. The talk will open the floor for discussion about creative approaches to applying LLMs in robotics.
Understanding Complex Behavior: From Dynamic Graph Visualization to Visual Game Analytics
Date: 16.05.2023, 11:00 CEST
About the Speakers:
Since October 2021, Fabian Beck holds the chair of Information Visualization at the University of Bamberg. His research focuses on methods for the visualization of dynamic structures and on self-explanatory visual representations. These methods can be applied in many areas, such as understanding complex software systems, analyzing the behavior of artificial agents, or organizing literature. His research also explores the interaction of visualizations with textual content and other media for understandable visual reporting. He received his Dr. rer. nat. degree (PhD) in Computer Science from the University of Trier in 2013 and worked as a postdoctoral researcher at the University of Stuttgart Visualization Research Center (VISUS) until 2016. Afterward, he led the visualization group of the paluno Institute for Software Technology at the University of Duisburg-Essen as an assistant professor. In 2018, he was awarded the EuroVis Young Researcher Award and appointed Junior Fellow of the EUROGRAPHICS association.
Shivam Agarwal is a PhD candidate with research interests in data visualization, artificial intelligence, and game analytics. He is interested in visualizing the concepts learned by machine learning models, exploring intuitive visual representations of data, and finding creative ways of narrating data stories, such as enriched data comics. He has developed visualizations for understanding the behaviors of trained agents in machine learning competitions (e.g., at NeurIPS), which have won community awards.
Abstract: Humans as well as machines exhibit complex behavior, already when acting alone, but even more so when they interact with each other. Events and connections that evolve dynamically are embedded in spatial or non-spatial environments. Such scenarios can be found across various domains: Social networks, human gaze, software systems, or play data from computer games involve human participants, traditional algorithms, and artificial, intelligent agents as actors. To understand the recorded behavior, these scenarios can all be mapped to similar data structures and visualized through related methods. In our talk, we discuss dynamic graph visualization as a method to analyze such scenarios. We focus on timeline-based methods, which provide a good overview of temporal developments. Since insights are often gained specifically through contrast, visual comparison is a cross-cutting challenge. Finally, game analytics serves as a use case to study complex behavior in a controlled environment. When analyzing artificial agents competing in games, insights can be gained into what behavior the agents have learned and which strategies they follow.
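The shared data structure behind these scenarios can be sketched as a dynamic graph of timestamped edges, bucketed per time step into the raw material for a timeline-based view. The actor names and events below are purely illustrative toy data, not from the talk.

```python
from collections import defaultdict

# hypothetical interaction events: (time step, actor, actor)
events = [
    (0, "alice", "bob"),
    (1, "bob", "carol"),
    (1, "alice", "carol"),
    (2, "alice", "bob"),
]

# bucket edges per time step -- each bucket becomes one column of a timeline
timeline = defaultdict(list)
for t, u, v in events:
    timeline[t].append((u, v))
```

Whether the actors are players, gaze targets, or software modules, the same edges-over-time representation feeds the same timeline-based visualization techniques.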
Text to Realistic Image Synthesis
Date: 26.04.2023, 14:00 CEST
Room: Zoom / YouTube / S3 055
About the Speaker:
Mohammed Abbass received the B.S. and M.Sc. degrees in Electronics and Electrical Communications Engineering in Egypt, and the Ph.D. degree in Electronics Engineering from Chungbuk National University, South Korea. From 2008 to 2016, he was with the Atomic Energy Authority, Egypt, as a Researcher and Teaching Assistant in the Engineering Department. From 2016 to 2021, he was with Chungbuk National University, Cheongju, South Korea, as a Student Researcher, conducting professional research on deep-learning-based computer vision. From 2022 to 2023, he rejoined the Atomic Energy Authority, Egypt, as an Assistant Professor in the Engineering Department. In 2022, he also joined the University of Malta remotely for six months as a Research Support Officer III with the Department of Communications and Computer Engineering. He is currently with Johannes Kepler University Linz, Austria, where he is a Senior Lecturer at the Institute of Computer Graphics. His research interests include computer vision, video analysis, machine learning, and deep learning in the applications of super-resolution and object tracking and detection in video.
Abstract: Synthesizing natural images from text descriptions is a significant topic in the computer vision field. It makes it possible to directly draw, generate, and manipulate images with minimal effort. However, the topic poses numerous challenges that call for further research, such as generating diverse objects at adequate resolution and developing consistent evaluation metrics that account for human perception. The main goal of this talk is to focus on producing images from given text descriptions and to bridge the gap between the computer vision and natural language processing areas. The talk covers several challenging aspects, evaluations of adversarial models, and potential future work, with a focus on text-to-image synthesis.
Scalable Visual Model Inspection
Date: 29.03.2023, 14:00 CEST
Room: Zoom / YouTube / S3 055
About the Speaker:
Manuela Waldner is an assistant professor (tenure track) at the Institute of Visual Computing & Human-Centered Technology at TU Wien, Austria. She has a PhD in computer science from Graz University of Technology, Austria. Her main research interests cover scalable and interactive data analysis and visualization with the primary goal to make complex data and models better understandable for users. She has co-authored papers at journals and venues like IEEE Transactions on Visualization and Computer Graphics or ACM Human Factors in Computing Systems, of which some have been nominated or awarded best paper awards. In 2014, she received a Hertha Firnberg fellowship for highly qualified female scientists by the Austrian Science Fund (FWF).
Abstract: This talk builds on our vision of human-machine data exploration, which integrates interactive machine learning and interactive visualization to learn about and from large unstructured data in a joint fashion. The goal thereby is to combine the strengths of machine learning and human analytical skills into a unified process that helps users to "detect the expected and discover the unexpected" in large domain-specific data sets. In particular, in this talk, I will focus on scalable visual model inspection to help users analyze how the machine's understanding of a large unstructured data set aligns with their own. I will present a new multi-scale method for visual inspection of models based on data sets with hundreds of thousands of data instances and thousands of labels. Furthermore, I will present the results of a study analyzing how the data, model, and visualization characteristics influence users' model inspection performance. Finally, I will conclude with a discussion about the challenges of visual model inspection, its evaluation, and its role in human-machine data exploration.
Making Choices with Visualizations
Date: 18.01.2023, 13:00 CET
Room: Zoom / YouTube / S3 055
About the Speaker:
Lena Cibulski is a visualization researcher at Fraunhofer IGD and a PhD candidate at Technical University of Darmstadt, Germany. She received her master’s degree in computer science in 2017 from Otto-von-Guericke University Magdeburg, where she soon found her way into visualization research. She completed her bachelor studies with a six-month stay at the VRVis Research Center in Vienna. In early 2023, Lena will be a visiting researcher at JKU Linz for two months. Her research is at the intersection between visualization and multi-attribute decision-making, with an emphasis on design studies for engineering applications. She is particularly interested in multidisciplinary collaborations to encourage discussions on human factors, methodological aspects, and applications. Lena conducts industrial and research projects that aim at assisting and informing decisions by using interactive visualization. She has authored a number of journal and conference publications and regularly acts as a reviewer for conferences like IEEE VIS and EuroVis.
Abstract: This talk provides different perspectives on using data visualization to assist and inform choices. We face many choices in our personal and professional lives. Computing has made it easy to compile large numbers of options to choose from. Identifying the best solution among such a set is called multi-attribute choice. With no objectively optimal solution present, our human judgment is needed to trade off conflicting goals.
Data visualization is a powerful tool to help us explore and make sense of available courses of action. While many interactive visualizations already live in the context of decision-making, how to design for humans who make decisions with visualized data continues to be a vibrant research area. In this talk, I will outline several properties of multi-attribute choices that we encountered when studying real users and data. I will also hint at how disciplines like decision theory might help with that. Finally, I will lay out some open visualization challenges along with two examples, where our visualizations helped engineers learn what level of performance is achievable under which conditions, even for co-dependent choices.
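The idea of a multi-attribute choice with no objectively optimal solution can be made concrete with a small Pareto-filter sketch. The option names, goals, and numbers below are assumptions for illustration only, not data from the talk.

```python
# options scored on two conflicting goals: maximize performance, minimize cost
# (toy data -- names and numbers are purely illustrative)
options = {
    "A": (0.90, 40),
    "B": (0.70, 20),
    "C": (0.60, 35),  # dominated by B: worse performance AND higher cost
    "D": (0.95, 80),
}

def dominates(p, q):
    """p dominates q: at least as good in both goals, strictly better in one."""
    return p[0] >= q[0] and p[1] <= q[1] and (p[0] > q[0] or p[1] < q[1])

# the Pareto front: options that no other option dominates; choosing among
# them is exactly the trade-off that requires human judgment
pareto = sorted(name for name, s in options.items()
                if not any(dominates(t, s) for t in options.values() if t != s))
```

Filtering out dominated options shrinks the set, but several incomparable options remain, which is where visualization supports the human decision maker.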