2021

Alexander Lex, University of Utah

Literate Visualization: Making Visual Analysis Sessions Reproducible and Reusable

June 16th, 2021, 10:30 am CEST

Room: Zoom/YouTube

About the Speaker:
I am an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the School of Computing at the University of Utah. I direct the Visualization Design Lab where we develop visualization methods and systems to help solve today’s scientific problems.
Before joining the University of Utah, I was a lecturer and post-doctoral visualization researcher at Harvard University. I received my PhD, master’s, and undergraduate degrees from Graz University of Technology. In 2011 I was a visiting researcher at Harvard Medical School.
I am the recipient of an NSF CAREER award and multiple best paper awards or honorable mentions at IEEE VIS, ACM CHI, and other conferences. I also received a best dissertation award from my alma mater. I co-founded Datavisyn (http://datavisyn.io), a startup company developing visual analytics solutions for the pharmaceutical industry.
http://alexander-lex.net

Abstract:
Interactive visualization is an important part of the data science process. It enables analysts to directly interact with the data, exploring it with minimal effort. Unlike code, however, an interactive visualization session is ephemeral and can't be easily shared, revisited, or reused. Computational notebooks, such as Jupyter Notebooks, R Markdown, or Observable, are a perfect match for many data science applications. They are also the most popular embodiment of Knuth's "Literate Programming", where the logic of a program is explained in natural language, figures, and equations.
In this talk, I will sketch approaches to "Literate Visualization". I will show how we can leverage the provenance data of an analysis session to create well-documented and annotated visualization stories that enable reproducibility and sharing. I will also introduce work on inferring analysis goals, which allows us to understand the analysis process at a higher level. Understanding analysis goals enables us to enhance interaction capabilities and even re-use visual analysis processes. I will conclude by demonstrating how this provenance data can be leveraged to bridge between computational and interactive environments.
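
To make the idea of session provenance concrete, here is a minimal Python sketch of recording interactions as a replayable, shareable log. The names (`ProvenanceTracker`, `apply_event`) are illustrative assumptions for this sketch, not the API of the Visualization Design Lab's actual tooling.

```python
from dataclasses import dataclass, field
from typing import Callable
import json

@dataclass
class ProvenanceTracker:
    """Append-only log of interaction events; replaying it reproduces a session."""
    events: list = field(default_factory=list)

    def record(self, action: str, params: dict) -> None:
        # Each interaction (filter, brush, selection, ...) is logged as data.
        self.events.append({"action": action, "params": params})

    def replay(self, state: dict, apply_event: Callable[[dict, dict], dict]) -> dict:
        # Re-derive the final visualization state from the recorded events,
        # which is what makes the session reproducible and shareable.
        for event in self.events:
            state = apply_event(state, event)
        return state

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

# Toy "visualization state" driven by filter/brush events.
def apply_event(state: dict, event: dict) -> dict:
    return {**state, event["action"]: event["params"]}

tracker = ProvenanceTracker()
tracker.record("filter", {"column": "year", "min": 2010})
tracker.record("brush", {"x": [0.2, 0.6], "y": [0.1, 0.9]})
print(tracker.replay({}, apply_event))  # same state every time it is replayed
```

Because the log is plain data, it can be annotated, attached to a notebook, or replayed in a different environment, which is the bridge between interactive and computational workflows the talk describes.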

Cristina Olaverri Monreal, ITS- Sustainable Transport Logistics 4.0, JKU Linz

Vehicle Automation

May 19th, 2021, 10:30 CEST

Room: Zoom/YouTube

About the Speaker:
Univ. Prof. Dr. Cristina Olaverri-Monreal received her PhD from the Ludwig-Maximilians University (LMU) in Munich in cooperation with BMW. She is a full professor and holder of the BMVIT-endowed chair ITS-Sustainable Transport Logistics 4.0 at the Johannes Kepler University Linz, Austria. Her research aims at solutions for efficient and effective transportation, focusing on minimizing the barrier between users and road systems. To this end, she relies on the automation, wireless communication, and sensing technologies that pertain to the field of Intelligent Transportation Systems (ITS).
Prior to this position, she led diverse teams in industry and academia in the US and in several European countries. Dr. Olaverri is president-elect of the IEEE Intelligent Transportation Systems Society (ITSS), founder and chair of the Austrian IEEE ITSS chapter, and chair of the Technical Activities Committee (TAC) on Human Factors in ITS. This TAC was recognized with the "best TAC of the IEEE ITSS" award in both 2018 and 2019. To commemorate the anniversary of the IEEE ITS Workshop on Human Factors, which she has been organizing since 2012, she recently co-edited the book "Human Factors in Intelligent Vehicles" in the River Publishers Series in Transport Technologies.
In addition, she serves as an editorial board member of several journals in the field, including the IEEE Transactions on Intelligent Transportation Systems and the IEEE Intelligent Transportation Systems Magazine. She was recently recognized for her dedicated contribution to continuing education in the field of ITS with the 2017 IEEE Educational Activities Board Meritorious Achievement Award in Continuing Education.
Cristina Olaverri-Monreal is a member of the single EU-wide platform to coordinate open road testing of Cooperative, Connected and Automated Mobility (CCAM). She has served as an advisor and project evaluator in the field of ICT and connected, cooperative, and autonomous mobility systems for several European agencies and organizations, and she is a European Commission Expert for "Automated Road Transport".

Abstract:
The feasibility of incorporating new technology-driven functionality into vehicles has played a central role in automotive design. The broad diffusion of digital technologies makes it possible to design systems whose functioning is based on intelligent technologies residing simultaneously in multiple, interconnected applications. Consequently, intelligent road-vehicle systems such as cooperative advanced driver assistance systems (co-ADAS) are developing rapidly, and with them the degree of vehicle automation is increasing.
The advent of vehicle automation promises to reduce driver workload. However, depending on the level of automation, consequences for the passengers, such as out-of-the-loop states, can be foreseen. The protection of Vulnerable Road Users (VRUs) has also been an active research topic in recent years: when interacting with driverless vehicles, people exhibit a variety of responses reflecting different levels of trust, uncertainty, and a certain degree of fear. In this context, P2V (Pedestrian-to-Vehicle) and V2P (Vehicle-to-Pedestrian) communication have become crucial technologies for minimizing potential dangers, owing to the high detection rates and high user-satisfaction levels they achieve.
This presentation gives an overview of the impact of such technologies on traffic awareness towards improving driving performance and reducing road accidents. Furthermore, the benefits and potential problems regarding vehicle automation will be outlined.
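
As a toy illustration of how V2P communication can help minimize danger, the following hypothetical Python sketch shows a vehicle estimating the closest approach to a pedestrian from the pedestrian's broadcast position and velocity. All names and parameters are illustrative assumptions; real deployments rely on standardized V2X message sets and far richer sensing and prediction models.

```python
import math

def time_to_closest_approach(p_veh, v_veh, p_ped, v_ped):
    """Time at which vehicle and pedestrian are closest, assuming constant velocities."""
    rx, ry = p_ped[0] - p_veh[0], p_ped[1] - p_veh[1]   # relative position
    vx, vy = v_ped[0] - v_veh[0], v_ped[1] - v_veh[1]   # relative velocity
    v_sq = vx * vx + vy * vy
    if v_sq == 0:
        return 0.0                                      # no relative motion
    return max(0.0, -(rx * vx + ry * vy) / v_sq)        # minimize |r + v*t|

def should_warn(p_veh, v_veh, p_ped, v_ped, radius=2.0, horizon=4.0):
    """Warn if the predicted closest approach is both near (radius, m) and soon (horizon, s)."""
    t = time_to_closest_approach(p_veh, v_veh, p_ped, v_ped)
    if t > horizon:
        return False
    dx = (p_ped[0] + v_ped[0] * t) - (p_veh[0] + v_veh[0] * t)
    dy = (p_ped[1] + v_ped[1] * t) - (p_veh[1] + v_veh[1] * t)
    return math.hypot(dx, dy) < radius

# Vehicle heading east at 10 m/s; pedestrian crossing its path from the south.
print(should_warn((0, 0), (10, 0), (30, -5), (0, 1.5)))  # True: paths intersect soon
```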

Matthias Nießner, Visual Computing & AI, TU Munich

The Revolution of Neural Rendering

April 13th, 2021, 10:30 CEST

Room: Zoom/YouTube

About the Speaker:
Dr. Matthias Nießner is a Professor at the Technical University of Munich, where he leads the Visual Computing Lab. Before that, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner's research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. In total, he has published over 70 academic publications, including 22 papers in the prestigious ACM Transactions on Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 43 works at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI'14, HPG'15, and SPG'18, as well as the SIGGRAPH'16 Emerging Technologies Award for the best live demo.

Prof. Nießner's work enjoys wide media coverage, with articles featured in mainstream outlets including the New York Times, Wall Street Journal, Spiegel, MIT Technology Review, and many more, and his work has led to several TV appearances, such as on Jimmy Kimmel Live, where he demonstrated the popular Face2Face technique. His academic YouTube channel currently has over 5 million views.

For his work, Prof. Nießner has received several awards: he is a TUM-IAS Rudolf Mößbauer Fellow (2017 – ongoing), he won the Google Faculty Award for Machine Perception (2017) and the Nvidia Professor Partnership Award (2018), as well as the prestigious ERC Starting Grant 2018, which comes with 1.5 million euros in research funding; in 2019, he received the Eurographics Young Researcher Award, honoring the best upcoming graphics researcher in Europe. In addition to his academic impact, Prof. Nießner is a co-founder and director of Synthesia Inc., a startup backed by Mark Cuban, whose aim is to empower storytellers with cutting-edge AI-driven video synthesis.

Abstract:
In this talk, I will present our research vision of how to create a photo-realistic digital replica of the real world, and how to make holograms become a reality. Eventually, I would like to see photos and videos evolve into interactive, holographic content indistinguishable from the real world. Imagine taking such 3D photos to share with friends, family, or social media; the ability to fully record historical moments for future generations; or providing content for upcoming augmented and virtual reality applications. AI-based approaches, such as generative neural networks, are becoming more and more popular in this context, since they have the potential to transform existing image synthesis pipelines. I will specifically talk about an avenue towards neural rendering where we retain the full control of a traditional graphics pipeline but at the same time exploit modern capabilities of deep learning, such as handling the imperfections of content from commodity 3D scans.

While the capture and photo-realistic synthesis of imagery open up unbelievable possibilities for applications ranging from the entertainment to the communication industries, there are also important ethical considerations that must be kept in mind. Specifically, in the context of fabricated news (e.g., fake news), it is critical to highlight and understand digitally manipulated content. I believe that media forensics plays an important role in this area, both from an academic standpoint, to better understand image and video manipulation, and, even more importantly, from a societal standpoint, to raise awareness of what is possible and to highlight potential avenues and solutions regarding trust in digital content.
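
To give a flavor of the neural-rendering avenue described above, keeping the control of a traditional graphics pipeline while letting deep learning handle imperfect content, here is a minimal PyTorch sketch in the spirit of learned (neural) textures. It is an illustrative reduction under simplifying assumptions, not the architecture from the speaker's publications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureRenderer(nn.Module):
    def __init__(self, tex_channels=8, tex_size=256):
        super().__init__()
        # Learned feature texture in place of a classical RGB texture.
        self.texture = nn.Parameter(torch.randn(1, tex_channels, tex_size, tex_size) * 0.01)
        # Small CNN decodes sampled features to RGB; imperfections in the
        # scanned content can be compensated during training.
        self.decoder = nn.Sequential(
            nn.Conv2d(tex_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv):
        # uv: (B, H, W, 2) in [-1, 1], produced by rasterizing the mesh --
        # this is where the traditional pipeline keeps full geometric control.
        feats = F.grid_sample(self.texture.expand(uv.shape[0], -1, -1, -1),
                              uv, align_corners=False)
        return self.decoder(feats)

# Training step: compare the rendered output against a captured photo.
model = NeuralTextureRenderer()
uv = torch.rand(1, 128, 128, 2) * 2 - 1       # stand-in for rasterized UVs
photo = torch.rand(1, 3, 128, 128)            # stand-in for a real capture
loss = F.l1_loss(model(uv), photo)
loss.backward()                               # gradients flow into texture + CNN
```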

Günter Wallner, Johannes Kepler University

Visualization of Mixed Playtesting Data

March 24th, 2021, 14:00 CET

Room: Zoom/YouTube

About the Speaker:
Günter Wallner is Professor of Game Computing at the Johannes Kepler University Linz and holds positions at the Eindhoven University of Technology and Ontario Tech University. His work centers on understanding player behavior in games and on researching methods to explore and communicate the collected data in order to derive actionable insights for game design and development. As part of this, he works on data visualizations to support the analysis of the increasingly large-scale player behavioral datasets used in game analytics. He is the editor of the book 'Data Analytics Applications in Gaming and Entertainment'.

Abstract:
Data collected through Games User Research (GUR) is constantly growing in volume. Moreover, playtesting frequently results in multi-modal datasets consisting of qualitative and quantitative data. Simultaneously displaying mixed playtesting data can offer additional benefits for user researchers as it can provide relevant contextual information. This talk will show how visualizations can be helpful for interpreting (mixed) GUR datasets.

The presentation will start with a high-level discussion of the benefits of visualization for analyzing playtesting data. Afterwards, possible ways of visualizing individual player data will be introduced. The talk will then demonstrate different aggregation techniques for different types of playtesting data (movement, discrete events, and continuous player-related variables such as health or physiological measurements) to convey common patterns and outliers. Aggregation techniques are particularly helpful for larger datasets, where unaggregated views quickly become difficult to read. The presented techniques can be applied individually or in combination to triangulate the different data types.
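
As a concrete illustration of one such aggregation technique, the following Python sketch bins recorded player positions into a 2D heatmap over the level map, a common way to convey movement patterns and outliers across many sessions. The telemetry here is synthetic stand-in data, not output from any real analytics pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Stand-in telemetry: (x, y) positions pooled from many playtest sessions.
positions = np.concatenate([
    rng.normal(loc=(20, 30), scale=3, size=(5000, 2)),   # a popular area
    rng.normal(loc=(70, 60), scale=5, size=(3000, 2)),   # a secondary route
    rng.uniform(0, 100, size=(500, 2)),                  # scattered outliers
])

# Aggregate raw positions into visit counts per spatial bin.
heatmap, xedges, yedges = np.histogram2d(
    positions[:, 0], positions[:, 1], bins=50, range=[[0, 100], [0, 100]]
)

plt.imshow(heatmap.T, origin="lower", extent=[0, 100, 0, 100], cmap="magma")
plt.colorbar(label="visit count")
plt.title("Aggregated player positions (synthetic data)")
plt.show()
```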

Lastly, the talk will address a thus-far mostly neglected issue in GUR, namely how visual design choices for data visualizations can impact the interpretation of gameplay data. An inappropriate visual design can be misleading, lead to wrong conclusions, and have practical consequences. Attendees will come away with an understanding of how visualizations can support the analysis of playtesting data.

Michael Sedlmair, University of Stuttgart

Machine Learning meets Visualization

January 20th, 2021, 10:30 am CET

Room: Zoom/YouTube

About the Speaker:
Michael Sedlmair is a junior professor at the University of Stuttgart, where he works at the intersection of human-computer interaction, visualization, and data analysis. Previously, he worked at Jacobs University Bremen, the University of Vienna, the University of British Columbia, the University of Munich, and BMW Group Research and Technology. He also holds visiting positions at the Vienna University of Technology and Shandong University. His interests focus on information visualization, interactive machine learning, virtual and augmented reality, and the research and evaluation methodologies underlying them.

Abstract:
Based on our experience conducting projects at the intersection of machine learning (ML) and interactive visualization (Vis), my talk will reflect on and discuss the current relation between these two areas. For that purpose, the talk’s structure will follow two main streams. First, I will talk about *Vis for ML*, that is, the idea that visualization can help machine learning researchers and practitioners gain interesting insights into their models. In the second part, I will then turn the relationship around and discuss how *ML for Vis* can guide visualization designers and analysts towards interesting visual patterns in the data. The talk will conclude with research challenges that lie ahead of us and that will pave the way for future interfaces between humans and data.
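
As a small, self-contained illustration of the *Vis for ML* direction, the sketch below projects a classifier's hidden-layer activations to 2D so an analyst can visually inspect cluster structure. The model, dataset, and projection method are arbitrary stand-ins chosen for this sketch, not ones from the talk.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Train a small classifier whose internals we want to inspect.
X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X, y)

# Hidden-layer (ReLU) activations as a stand-in for "what the model learned".
hidden = np.maximum(X @ clf.coefs_[0] + clf.intercepts_[0], 0)

# Project to 2D: well-separated class clusters suggest a useful representation,
# overlaps point the analyst at classes the model confuses.
proj = PCA(n_components=2).fit_transform(hidden)
plt.scatter(proj[:, 0], proj[:, 1], c=y, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.title("2D projection of hidden-layer activations")
plt.show()
```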
