Institute of Computer Graphics

2021

Tatiana von Landesberger, University of Cologne

Visual Analytics for Infection Contagion
December 14th, 2021, 14:00 CET
Room: Zoom/YouTube

About the Speaker:
Tatiana von Landesberger is a full professor of computer science (visualization) at the University of Cologne, Germany. Her research focuses on information visualization and the visual analysis of networks and spatio-temporal data, with applications in medicine, biology, finance, and transportation. She has published extensively at top conferences and in leading journals. Her work on the visual analysis of disease spreading received an award at VIS 2020. She has served as a full papers chair of the EuroVis conference.

Abstract:
Disease spreading (also called infection contagion) is an acute problem. The COVID-19 pandemic is one example of the worldwide consequences of infection contagion. In addition to viruses, multi-resistant pathogens are an acute problem in hospitals. These pathogens are transmitted among patients; they threaten patients' health and increase treatment costs. Infection control experts need to analyse how a disease has spread and who the initial patient was, to identify potentially infected patients, and to predict how the disease could spread in the future. This talk will present visual analytics approaches to answering these questions. The approaches have been developed and evaluated in cooperation with infection control experts in the long-term project HiGHmed and the recent COVID-19 project B-FAST.
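The contact-tracing questions above (who might have been infected starting from an initial patient) can be phrased as reachability in a directed contact graph. The following sketch is purely illustrative: the patient IDs and data are made up, and it is not taken from the systems presented in the talk.

```python
from collections import deque

def potentially_infected(contacts, index_patient):
    """Find all patients reachable from an index patient in a directed
    contact graph via breadth-first search. A toy sketch of the kind of
    contact-tracing query such visual analytics systems support."""
    seen = {index_patient}
    queue = deque([index_patient])
    while queue:
        patient = queue.popleft()
        for contact in contacts.get(patient, []):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    # Exclude the index patient from the result set
    return seen - {index_patient}

# Usage: who may have been infected starting from patient "P1"?
contacts = {"P1": ["P2", "P3"], "P2": ["P4"], "P5": ["P6"]}
reachable = potentially_infected(contacts, "P1")
```

Visual analytics enters once such reachability sets grow large: the graph and the candidate patients must be explored and filtered interactively rather than read off a list.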

We would like to point out that when playing the video, data may be transmitted to external parties. Learn more by reading our data privacy policy.

Erich Kobler, Johannes Kepler University Linz

Learning Regularizers for Inverse Problems in Imaging
November 30th, 2021, 14:00 CET
Room: Zoom/YouTube

About the Speaker:
Erich Kobler received his BSc (2009–2013) and his MSc (2013–2015) in Information and Computer Engineering (Telematik) as well as his PhD (2016–2020), sub auspiciis Praesidentis, in computer science from Graz University of Technology. After a PostDoc position at Graz University of Technology, he moved to the Johannes Kepler University Linz for a senior lecturer position at the Institute of Computer Graphics. In 2021, Erich Kobler received the award of excellence for his PhD thesis on combining variational methods and deep learning. His current research interests include computer vision, inverse problems, medical imaging, and machine learning.

Abstract:
Numerous problems in computer vision and medical imaging can be cast as inverse problems. The variational approach robustly estimates solutions of an inverse problem by minimizing an energy composed of a data fidelity term and a regularizer. While the data fidelity term incorporates knowledge about the underlying physical process of the inverse problem, the regularizer typically encodes a-priori statistical properties of the desired solutions. Classically, handcrafted regularizers motivated by first-order statistics of images are used, which are frequently outperformed by state-of-the-art, purely data-driven deep learning models. In this talk, we develop novel methods combining variational methods and deep learning that lead to state-of-the-art results on various imaging tasks and allow a rigorous mathematical analysis.

In particular, we investigate the observation that, in variational methods, the best image quality is frequently obtained when the associated gradient flow is stopped before it converges to a stationary point. We argue that this phenomenon originates from a tradeoff between optimization and modeling errors and remains valid even if highly expressive deep learning-based regularizers are employed. We analyze this paradox by considering a variational method featuring a parametric regularizer and by introducing an optimal stopping time in the corresponding gradient flow. This optimal stopping time, as well as the parameters of the regularizer, are determined by a mean-field optimal control approach in which the gradient flow defines the state equation. Moreover, we propose a novel data-driven general-purpose regularizer called total deep variation (TDV), which exploits recent architectural design patterns from deep learning to overcome the limited expressiveness of the regularizers discussed before. The TDV regularizer is a CNN that extracts local features on multiple scales and in successive blocks to assign an energy to every image pixel.
The combination of the mean-field optimal control training problem and the highly expressive TDV regularizer leads to state-of-the-art results on various image restoration and reconstruction problems and simultaneously enables a rigorous mathematical analysis. We prove the existence of solutions of the mean-field optimal control problem in the time-continuous and time-discrete setting and characterize the stability with respect to initial value and parameter variations. Finally, we experimentally verify the robustness against adversarial attacks and numerically derive upper bounds for the generalization error.
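To make the stopped-gradient-flow idea concrete, here is a minimal NumPy sketch: gradient descent on an energy with a data fidelity term and a simple quadratic smoothness regularizer (a stand-in for a learned regularizer such as TDV), where the number of steps plays the role of the stopping time. All names and parameter values are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

def denoise_gradient_flow(y, lam=0.2, step=0.1, n_steps=50):
    """Discretized gradient flow on E(x) = 0.5*||x - y||^2 + lam * R(x).

    R(x) = 0.5 * sum((x[i+1] - x[i])^2) is a simple smoothness penalty
    standing in for a learned regularizer; n_steps plays the role of the
    (early) stopping time discussed in the abstract."""
    x = y.copy()
    for _ in range(n_steps):
        # Gradient of the data fidelity term 0.5*||x - y||^2
        grad_fidelity = x - y
        # Gradient of R(x): a discrete Laplacian on the interior
        # (boundary gradients omitted for brevity)
        grad_reg = np.zeros_like(x)
        grad_reg[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]
        x = x - step * (grad_fidelity + lam * grad_reg)
    return x

# Usage: denoise a noisy 1D ramp; fewer steps means earlier stopping.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 64)
noisy = clean + 0.1 * rng.standard_normal(64)
restored = denoise_gradient_flow(noisy, n_steps=30)
```

Varying `n_steps` in this toy setting mimics the tradeoff the abstract describes: too few steps leave noise, too many over-commit to the (imperfect) regularizer.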


Dmitriy Shutin, German Aerospace Center (DLR)

Physics-Assisted Multi-Robot Exploration of Spatio-Temporal Dispersal Phenomena

October 28th, 2021, 10:30 am CEST

Room: Zoom/YouTube

About the Speaker:
Dmitriy Shutin received the master's degree in computer science from Dnipropetrovsk State University, Dnipropetrovsk, Ukraine, in 2000, and the doctorate degree in electrical engineering from Graz University of Technology, Graz, Austria, in 2006. During 2001–2006 and 2006–2009, he was a Teaching Assistant and an Assistant Professor, respectively, with the Signal Processing and Speech Communication Laboratory, Graz University of Technology. In 2009, he joined the Department of Electrical Engineering, Princeton University, where he worked as a Research Associate until 2011. In 2011, he joined the Institute of Communications and Navigation, German Aerospace Center, where he currently leads the Swarm Exploration Group. His research interests include machine learning for signal processing, modeling and estimation of radio propagation channels, statistical signal processing, and distributed algorithms. Dr. Shutin received the Best Student Paper Award at the 2005 IEEE International Conference on Information, Communications and Signal Processing (ICICS). In 2009, he was awarded the Erwin Schroedinger Research Fellowship. From 2012 to 2014, he acted as a selected advisor to the German air navigation service provider within the Navigational System Panel of ICAO.

Abstract:
The problem of exploring the dispersal of a potentially hazardous or toxic material in the air using robots has a number of applications, e.g. environmental monitoring, infrastructure inspection, or civil protection, to name only a few. Especially in situations where the explored substances pose a health risk to human operators, autonomous solutions are of great interest. However, the key challenge on the path towards autonomy in this context is the rather complicated dynamics of the dispersed material, coupled with the specific spatial aperture and low temporal resolution of the olfactory (chemical) sensors used for perception. While the former precludes tele-operation (or makes it rather challenging), the latter requires perception and autonomy schemes that can cope with the very low information rate acquired through olfactory sensing.

To address these challenges, the proposed solution incorporates two elements, which will be discussed in this talk.
First, a mobile swarm of robotic sensor carriers is used to increase spatial sampling, and thus capture spatial dynamics more efficiently.
Second, prior information about the dispersal process, in the form of domain-specific knowledge, is used to support data processing and autonomy.
Specifically, the dispersal process is modeled with an advection-diffusion partial differential equation (PDE). The advection, or plainly speaking the wind (the dominant transport mechanism in a majority of practically relevant applications), is likewise modeled with a PDE: the Navier-Stokes equations that describe the spatial wind velocity. Such a description provides physics-based, global-level process dynamics that effectively "fill the gaps" between the acquired sensor data. Furthermore, using a probabilistic (Bayesian) formulation of the PDE models, the resulting representation can be relaxed to additionally allow for more control over model mismatches.
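As a toy illustration of the kind of model involved, the sketch below takes explicit finite-difference steps of a 1D advection-diffusion equation with a constant wind on a periodic domain. The actual work uses higher-dimensional PDEs coupled with a Navier-Stokes wind model, so the discretization and parameters here are illustrative assumptions only.

```python
import numpy as np

def advect_diffuse_step(c, wind, D, dx, dt):
    """One explicit finite-difference step of the 1D advection-diffusion
    equation dc/dt = -wind * dc/dx + D * d^2c/dx^2 on a periodic domain.
    Uses upwind differencing for advection (assumes wind >= 0) and
    central differencing for diffusion."""
    adv = -wind * (c - np.roll(c, 1)) / dx
    diff = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
    return c + dt * (adv + diff)

# Usage: an initial concentration spike is transported by the wind
# and smeared out by diffusion.
c = np.zeros(100)
c[10] = 1.0
for _ in range(50):
    c = advect_diffuse_step(c, wind=1.0, D=0.1, dx=1.0, dt=0.5)
```

The inverse problem mentioned below then amounts to recovering parameters such as `wind` and `D` (and the source location) from sparse robot measurements of `c`.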

Using data samples collected by multiple robots, the multi-robot exploration then includes two steps: (i) a cooperative solution to an inverse problem of identifying parameters of the PDEs given measurements, and (ii) exploration – the design of an optimal sampling scheme for multiple robotic platforms.

This talk will describe the models used, discuss the developed probabilistic inference schemes with their advantages and limitations, and demonstrate their performance in simulations and experiments.


Alexander Lex, University of Utah

Literate Visualization: Making Visual Analysis Sessions Reproducible and Reusable

June 16th, 2021, 10:30 am CEST

Room: Zoom/YouTube, opens an external URL in a new window

About the Speaker:
I am an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the School of Computing at the University of Utah. I direct the Visualization Design Lab where we develop visualization methods and systems to help solve today’s scientific problems.
Before joining the University of Utah, I was a lecturer and post-doctoral visualization researcher at Harvard University. I received my PhD, master’s, and undergraduate degrees from Graz University of Technology. In 2011 I was a visiting researcher at Harvard Medical School.
I am the recipient of an NSF CAREER award and multiple best paper awards or honorable mentions at IEEE VIS, ACM CHI, and other conferences. I also received a best dissertation award from my alma mater. I co-founded Datavisyn (http://datavisyn.io), a startup company developing visual analytics solutions for the pharmaceutical industry.
http://alexander-lex.net

Abstract:
Interactive visualization is an important part of the data science process. It enables analysts to directly interact with the data, exploring it with minimal effort. Unlike code, however, an interactive visualization session is ephemeral and can't be easily shared, revisited, or reused. Computational notebooks, such as Jupyter Notebooks, R Markdown, or Observable are a perfect match for many data science applications. They are also the most popular embodiment of Knuth's "Literate Programming", where the logic of a program is explained in natural language, figures, and equations.
In this talk, I will sketch approaches to "Literate Visualization". I will show how we can leverage provenance data of an analysis session to create well-documented and annotated visualization stories that enable reproducibility and sharing. I will also introduce work on inferring analysis goals, which allows us to understand the analysis process at a higher level. Understanding analysis goals enables us to enhance interaction capabilities and even reuse visual analysis processes. I will conclude by demonstrating how this provenance data can be leveraged to bridge between computational and interactive environments.
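The core mechanism behind such provenance-based reproducibility can be illustrated with a minimal sketch: record each interaction as a structured event, then replay the log later. This is an assumption-laden toy (class and method names are made up), not the system presented in the talk.

```python
import json

class ProvenanceLog:
    """Minimal sketch of recording the interactions of a visual analysis
    session so the session can be serialized, shared, and replayed."""

    def __init__(self):
        self.events = []

    def record(self, action, **params):
        # Each interaction becomes a structured, serializable event.
        self.events.append({"action": action, "params": params})

    def serialize(self):
        return json.dumps(self.events)

    def replay(self, handlers):
        # Re-execute the session by dispatching each event to a handler.
        for event in self.events:
            handlers[event["action"]](**event["params"])

# Usage: record two interactions, then replay them against handlers.
log = ProvenanceLog()
log.record("filter", column="age", op=">", value=30)
log.record("select", ids=[1, 2, 3])

applied = []
log.replay({"filter": lambda **p: applied.append(("filter", p)),
            "select": lambda **p: applied.append(("select", p))})
```

Because the log is plain data, the same record can back an annotated "visualization story" in a notebook or drive re-execution in an interactive tool.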

ICG Lab Talk: Alexander Lex


Cristina Olaverri Monreal, ITS-Sustainable Transport Logistics 4.0, JKU Linz

Vehicle Automation

May 19th, 2021, 10:30 CEST

Room: Zoom/YouTube

About the Speaker:
Univ. Prof. Dr. Cristina Olaverri-Monreal received her PhD from the Ludwig-Maximilians University (LMU) in Munich, in cooperation with BMW. She is a full professor and holder of the BMVIT-endowed chair ITS-Sustainable Transport Logistics 4.0 at Johannes Kepler University Linz, Austria. Her research aims at solutions for efficient and effective transportation, focusing on minimizing the barrier between users and road systems. To this end, she relies on the automation, wireless communication, and sensing technologies that pertain to the field of Intelligent Transportation Systems (ITS).
Prior to this position, she led diverse teams in industry and academia in the US and in several European countries. Dr. Olaverri is president-elect of the IEEE Intelligent Transportation Systems Society, founder and chair of the Austrian IEEE ITSS chapter, and chair of the Technical Activities Committee (TAC) on Human Factors in ITS. This TAC was recognized as the best TAC of the IEEE ITSS in both 2018 and 2019. To commemorate the anniversary of the IEEE ITS Workshop on Human Factors, which she has been organizing since 2012, she recently (co-)edited the book "Human Factors in Intelligent Vehicles" in the River Publishers Series in Transport Technologies.
In addition, she serves on the editorial boards of several journals in the field, including the IEEE Transactions on Intelligent Transportation Systems and the IEEE Intelligent Transportation Systems Magazine. She was recently recognized for her dedicated contribution to continuing education in the field of ITS with the 2017 IEEE Educational Activities Board Meritorious Achievement Award in Continuing Education.
Cristina Olaverri-Monreal is a member of the single EU-wide platform coordinating open road testing of Cooperative, Connected and Automated Mobility (CCAM). She has served as an advisor and project evaluator in the field of ICT and cooperative, connected, autonomous mobility systems for several European agencies and organizations, and she is a European Commission Expert for "Automated Road Transport".

Abstract:
The feasibility of incorporating new technology-driven functionality into vehicles has played a central role in automotive design. The widespread adoption of digital technologies makes it possible to design systems whose functioning is based on intelligent technologies residing simultaneously in multiple, interconnected applications. Consequently, the development of intelligent road-vehicle systems such as cooperative advanced driver assistance systems (co-ADAS), and with it the degree of vehicle automation, is increasing rapidly.
The advent of vehicle automation promotes a reduction of the driver workload. However, depending on the automation level, consequences for the passengers, such as out-of-the-loop states, can be foreseen. The protection of Vulnerable Road Users (VRUs) has also been an active research topic in recent years. A variety of responses exhibiting several levels of trust, uncertainty, and a certain degree of fear when interacting with driverless vehicles has been observed. In this context, P2V (Pedestrian-to-Vehicle) and V2P (Vehicle-to-Pedestrian) communication have become crucial technologies for minimizing potential dangers, due to the high detection rates and high user-satisfaction levels they achieve.
This presentation gives an overview of the impact of such technologies on traffic awareness towards improving driving performance and reducing road accidents. Furthermore, the benefits and potential problems regarding vehicle automation will be outlined.

Matthias Nießner, Visual Computing & AI, TU Munich

The Revolution of Neural Rendering

April 13th, 2021, 10:30 CEST

Room: Zoom/YouTube

About the Speaker:
Dr. Matthias Nießner is a Professor at the Technical University of Munich, where he leads the Visual Computing Lab. Before that, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner's research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. In total, he has published over 70 academic publications, including 22 papers in the prestigious ACM Transactions on Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 43 works at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI'14, HPG'15, SPG'18, and the SIGGRAPH'16 Emerging Technologies Award for the best live demo. Prof. Nießner's work enjoys wide media coverage, with articles featured in mainstream media including the New York Times, the Wall Street Journal, Der Spiegel, and MIT Technology Review, and his work has led to several TV appearances, such as on Jimmy Kimmel Live, where Prof. Nießner demonstrated the popular Face2Face technique; his academic YouTube channel currently has over 5 million views. For his work, Prof. Nießner has received several awards: he is a TUM-IAS Rudolph Moessbauer Fellow (2017 – ongoing), he won the Google Faculty Award for Machine Perception (2017) and the Nvidia Professor Partnership Award (2018), as well as the prestigious ERC Starting Grant 2018, which comes with 1,500,000 euros in research funding; in 2019, he received the Eurographics Young Researcher Award honoring the best upcoming graphics researcher in Europe. In addition to his academic impact, Prof. Nießner is a co-founder and director of Synthesia Inc., a startup backed by Mark Cuban whose aim is to empower storytellers with cutting-edge AI-driven video synthesis.

Abstract:
In this talk, I will present our research vision for creating a photo-realistic digital replica of the real world and for making holograms become a reality. Eventually, I would like to see photos and videos evolve into interactive, holographic content indistinguishable from the real world. Imagine taking such 3D photos to share with friends, family, or social media; the ability to fully record historical moments for future generations; or providing content for upcoming augmented and virtual reality applications. AI-based approaches, such as generative neural networks, are becoming more and more popular in this context since they have the potential to transform existing image synthesis pipelines. I will specifically talk about an avenue towards neural rendering where we can retain the full control of a traditional graphics pipeline while at the same time exploiting modern capabilities of deep learning, such as handling the imperfections of content from commodity 3D scans. While the capture and photo-realistic synthesis of imagery open up unbelievable possibilities for applications ranging from the entertainment to the communication industries, there are also important ethical considerations that must be kept in mind. Specifically, in the context of fabricated news (e.g., fake news), it is critical to highlight and understand digitally manipulated content. I believe that media forensics plays an important role in this area, both from an academic standpoint, to better understand image and video manipulation, and even more importantly from a societal standpoint, to raise awareness of the possibilities and, moreover, to highlight potential avenues and solutions regarding trust in digital content.


Günter Wallner, Johannes Kepler University

Visualization of Mixed Playtesting Data

March 24th, 2021, 14:00 CET

Room: Zoom/YouTube

About the Speaker:
Günter is Professor for Game Computing at Johannes Kepler University Linz and holds positions at Eindhoven University of Technology and Ontario Tech University. His work particularly centers on understanding player behavior in games and on researching methods to explore and communicate the collected data in order to derive actionable insights for game design and development. As part of this, he works on data visualizations that support the analysis of the increasingly large-scale player behavioral datasets used in game analytics. He is the editor of the book 'Data Analytics Applications in Gaming and Entertainment'.

Abstract:
Data collected through Games User Research (GUR) is constantly growing in volume. Moreover, playtesting frequently results in multi-modal datasets consisting of qualitative and quantitative data. Simultaneously displaying mixed playtesting data can offer additional benefits for user researchers as it can provide relevant contextual information. This talk will show how visualizations can be helpful for interpreting (mixed) GUR datasets.

The presentation will start with a high-level discussion of the benefits of visualization for analyzing playtesting data. Afterwards, possible ways of visualizing individual player data will be introduced. The talk will then demonstrate different aggregation techniques for different types of playtesting data (movement, discrete events, and continuous player-related variables such as health or physiological measurements) to convey common patterns and outliers. Aggregation techniques can be very helpful when dealing with larger datasets, as they help avoid hard-to-read visualizations. The presented techniques can be applied individually or in combination to triangulate the different data types.
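A common first step for aggregating movement data of the kind mentioned above is binning raw (x, y) positions into a visitation grid for heatmap-style display. The sketch below is an illustrative, generic example (the map extent, bin count, and sample data are assumptions, not material from the talk).

```python
import numpy as np

def movement_heatmap(positions, bins=8, extent=(0.0, 100.0)):
    """Aggregate raw (x, y) player positions into a 2D visitation grid,
    a typical first step for heatmap-style movement visualization."""
    xs, ys = positions[:, 0], positions[:, 1]
    heat, _, _ = np.histogram2d(xs, ys, bins=bins, range=[extent, extent])
    return heat  # counts per cell; normalize or color-map for display

# Usage: aggregate a few hundred simulated position samples.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(500, 2))
heat = movement_heatmap(pos)
```

The same binning idea extends to discrete events (counts per cell) and to continuous variables (per-cell means instead of counts), which is what allows the different data types to be triangulated on a shared spatial layout.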

Lastly, the talk will address a thus-far mostly neglected issue in GUR, namely how visual design choices for data visualizations can impact the interpretation of gameplay data. An inappropriate visual design can be misleading, result in wrong conclusions, and have practical consequences. Attendees will come away with an understanding of how visualizations can help in understanding playtesting data.


Michael Sedlmair, University of Stuttgart

Machine Learning meets Visualization

January 20th, 2021, 10:30 am CET

Room: Zoom/YouTube

About the Speaker:
Michael Sedlmair is a junior professor at the University of Stuttgart, where he works at the intersection of human-computer interaction, visualization, and data analysis. Previously, Michael worked at Jacobs University Bremen, the University of Vienna, the University of British Columbia, the University of Munich, and BMW Group Research and Technology. He also holds visiting positions at the Vienna University of Technology and Shandong University. His interests focus on information visualization, interactive machine learning, virtual and augmented reality, and the research and evaluation methodologies underlying them.

Abstract:
Based on our experience conducting projects at the intersection of machine learning (ML) and interactive visualization (Vis), my talk will reflect on and discuss the current relation between these two areas. For that purpose, the talk’s structure will follow two main streams. First, I will talk about *Vis for ML*, that is, the idea that visualization can help machine learning researchers and practitioners gain interesting insights into their models. In the second part, I will then turn the relationship around and discuss how *ML for Vis* can guide visualization designers and analysts towards interesting visual patterns in the data. The talk will conclude with research challenges that lie ahead of us and that will pave the way for future interfaces between humans and data.
