Understanding Complex Behavior: From Dynamic Graph Visualization to Visual Game Analytics
Date: 16.05.2023, 11:00 CEST
About the Speakers:
Since October 2021, Fabian Beck has held the chair of Information Visualization at the University of Bamberg. His research focuses on methods for the visualization of dynamic structures and on self-explanatory visual representations. These methods can be applied in many areas, such as understanding complex software systems, analyzing the behavior of artificial agents, or organizing literature. His research also explores the interaction of visualizations with textual content and other media for understandable visual reporting. He received his Dr. rer. nat. degree (PhD) in Computer Science from the University of Trier in 2013 and worked as a postdoctoral researcher at the University of Stuttgart Visualization Research Center (VISUS) until 2016. Afterward, he led the visualization group of the paluno Institute for Software Technology at the University of Duisburg-Essen as an assistant professor. In 2018, he was awarded the EuroVis Young Researcher Award and appointed Junior Fellow of the EUROGRAPHICS association.
Shivam Agarwal is a PhD candidate with research interests in data visualization, artificial intelligence, and game analytics. He is interested in visualizing the concepts learned by machine learning models, exploring intuitive visual representations of data, and finding creative ways of narrating data stories, such as enriched data comics. He has developed visualizations for understanding the behaviors of trained agents in machine learning competitions (e.g., NeurIPS), which have received community awards.
Humans as well as machines exhibit complex behavior, already when acting alone, but even more so when they interact with each other. Events and connections that evolve dynamically are embedded in spatial or non-spatial environments. Such scenarios can be found across various domains: social networks, human gaze, software systems, and play data from computer games involve human participants, traditional algorithms, and artificial, intelligent agents as actors. To understand the recorded behavior, these scenarios can all be mapped to similar data structures and visualized through related methods. In our talk, we discuss dynamic graph visualization as a method to analyze such scenarios. We focus on timeline-based methods, which provide a good overview of temporal developments. Since insights are often gained through contrast, visual comparison is a cross-cutting challenge. Finally, game analytics serves as a use case to study complex behavior in a controlled environment. When analyzing artificial agents competing in games, insights can be gained into what behavior the agents have learned and which strategies they follow.
Text to Realistic Image Synthesis
Date: 26.04.2023, 14:00 CEST
About the Speaker:
Mohammed Abbass received the B.S. and M.Sc. degrees in Electronics and Electrical Communications Engineering in Egypt, and the Ph.D. degree in Electronics Engineering from Chungbuk National University, South Korea. From 2008 to 2016, he was with the Atomic Energy Authority, Egypt, as a Researcher and Teaching Assistant in the Engineering Department. From 2016 to 2021, he was with Chungbuk National University, Cheongju, South Korea, as a Student Researcher, conducting research on computer vision based on deep learning. From 2022 to 2023, he rejoined the Atomic Energy Authority, Egypt, as an Assistant Professor in the Engineering Department. In 2022, he joined the University of Malta remotely for six months as a Research Support Officer III with the Department of Communications and Computer Engineering. He is currently with Johannes Kepler University Linz, Austria, where he is a Senior Lecturer at the Institute of Computer Graphics. His research interests include computer vision, video analysis, machine learning, and deep learning, with applications in super-resolution and object tracking and detection in video.
Synthesizing natural images from text descriptions is a significant topic in the computer vision field. It allows users to directly draw, generate, and manipulate images with minimal effort. However, this topic still poses numerous challenges that call for further research, such as generating diverse objects at adequate resolution and developing consistent evaluation metrics that account for human perception. The main goal of this talk is to focus on producing images from given text descriptions and to bridge the gap between computer vision and natural language processing. The talk covers several of these challenges, the evaluation of adversarial models, and potential future work, with a focus on text-to-image synthesis.
Scalable Visual Model Inspection
Date: 29.03.2023, 14:00 CEST
Room: Zoom/YouTube/S3055
About the Speaker:
Manuela Waldner is an assistant professor (tenure track) at the Institute of Visual Computing & Human-Centered Technology at TU Wien, Austria. She holds a PhD in computer science from Graz University of Technology, Austria. Her main research interests cover scalable and interactive data analysis and visualization, with the primary goal of making complex data and models better understandable for users. She has co-authored papers in journals and at venues such as IEEE Transactions on Visualization and Computer Graphics and ACM Human Factors in Computing Systems, some of which have been nominated for or received best paper awards. In 2014, she received a Hertha Firnberg fellowship for highly qualified female scientists from the Austrian Science Fund (FWF).
This talk builds on our vision of human-machine data exploration, which integrates interactive machine learning and interactive visualization to learn about and from large unstructured data in a joint fashion. The goal is to combine the strengths of machine learning and human analytical skills into a unified process that helps users to "detect the expected and discover the unexpected" in large domain-specific data sets. In particular, in this talk, I will focus on scalable visual model inspection to help users analyze how the machine's understanding of a large unstructured data set aligns with their own. I will present a new multi-scale method for visual inspection of models based on data sets with hundreds of thousands of data instances and thousands of labels. Furthermore, I will present the results of a study analyzing how data, model, and visualization characteristics influence users' model inspection performance. Finally, I will conclude with a discussion of the challenges of visual model inspection, its evaluation, and its role in human-machine data exploration.
Making Choices with Visualizations
Date: 18.01.2023, 13:00 CET
Room: Zoom/YouTube/S3055
About the Speaker:
Lena Cibulski is a visualization researcher at Fraunhofer IGD and a PhD candidate at Technical University of Darmstadt, Germany. She received her master's degree in computer science in 2017 from Otto von Guericke University Magdeburg, where she soon found her way into visualization research. She completed her bachelor's studies with a six-month stay at the VRVis Research Center in Vienna. In early 2023, Lena will be a visiting researcher at JKU Linz for two months. Her research is at the intersection of visualization and multi-attribute decision-making, with an emphasis on design studies for engineering applications. She is particularly interested in multidisciplinary collaborations that encourage discussions on human factors, methodological aspects, and applications. Lena conducts industrial and research projects that aim at assisting and informing decisions through interactive visualization. She has authored a number of journal and conference publications and regularly acts as a reviewer for conferences such as IEEE VIS and EuroVis.
This talk provides different perspectives on using data visualization to assist and inform choices. We face many choices in our personal and professional lives, and computing has made it easy to compile large numbers of options to choose from. Identifying the best option among such a set is called a multi-attribute choice. With no objectively optimal solution present, human judgment is needed to trade off conflicting goals.
Data visualization is a powerful tool to help us explore and make sense of available courses of action. While many interactive visualizations are already used in decision-making contexts, how to design for humans who make decisions with visualized data continues to be a vibrant research area. In this talk, I will outline several properties of multi-attribute choices that we encountered when studying real users and data. I will also hint at how disciplines like decision theory might help with that. Finally, I will lay out some open visualization challenges along with two examples, where our visualizations helped engineers learn what level of performance is achievable under which conditions, even for co-dependent choices.