Institute of Computer Graphics

2022

Michael Schmitt,
Department of Aerospace Engineering of the University of the Bundeswehr Munich (UniBw M)

New possibilities for global Earth observation using multi-sensor data and transferable machine learning models

Date: 14.12.2022, 13:00 CET
Room: Zoom/YouTube

About the Speaker:
Michael Schmitt received his Dipl.-Ing. (Univ.) degree in geodesy and geoinformation, his Dr.-Ing. degree in remote sensing, and his habilitation in data fusion from the Technical University of Munich (TUM), Germany, in 2009, 2014, and 2018, respectively. Since 2021, he has been a Full Professor for Earth Observation at the Department of Aerospace Engineering of the University of the Bundeswehr Munich (UniBw M) in Neubiberg, Germany. He is also a member of the Research Center SPACE and the Institute of Space Technology & Space Applications of UniBw M. From 2020 to 2022, he additionally held the position of Consulting Senior Scientist at the Remote Sensing Technology Institute of the German Aerospace Center (DLR).

Before joining UniBw M, he was a Professor for Applied Geodesy and Remote Sensing at the Department of Geoinformatics of the Munich University of Applied Sciences. From 2015 to 2020, he was a Senior Researcher and Deputy Head at the Professorship for Signal Processing in Earth Observation at TUM; in 2019, he was additionally appointed Adjunct Teaching Professor at the Department of Aerospace and Geodesy of TUM. In 2016, he was a guest scientist at the University of Massachusetts, Amherst.

His research focuses on technical aspects of Earth observation, in particular image analysis and machine learning applied to the extraction of information from multi-modal remote sensing observations. He is a co-chair of the Working Group "Active Microwave Sensing" of the International Society for Photogrammetry and Remote Sensing and of the Working Group "Benchmarking" of the IEEE-GRSS Image Analysis and Data Fusion Technical Committee. He frequently serves as a reviewer for renowned international journals and conferences and has received several Best Reviewer awards. He is a Senior Member of the IEEE, a Co-Editor of the Springer Journal of Photogrammetry, Remote Sensing and Geoinformation Science (PFG), and a member of the Editorial Advisory Board of the ISPRS Journal of Photogrammetry and Remote Sensing.

Abstract:
Earth observation has seen a tremendous boom in recent years. From a discipline that was formerly used almost exclusively for scientific or military purposes, and whose data products were expensive and hard to obtain, it has grown into a field in which new public satellite missions as well as commercial startup companies provide a nearly unlimited amount of imagery to anyone who is interested. Paired with advances in machine learning, most notably in the field of deep neural networks, this has led to new possibilities for the extraction of geoinformation. This talk will show some of these new possibilities, with a focus on the exploitation of multi-sensor satellite data coming from both optical and synthetic aperture radar (SAR) sensors.
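
To make the multi-sensor setting concrete: a common baseline for jointly exploiting optical and SAR data is early fusion, i.e., stacking co-registered image channels into a single input tensor before feeding a model. The minimal Python sketch below assumes hypothetical patch sizes and band counts; it illustrates the general idea, not the speaker's specific methods.

```python
import numpy as np

def early_fusion(sar: np.ndarray, optical: np.ndarray) -> np.ndarray:
    """Stack co-registered SAR and optical patches along the channel axis.

    sar:     (H, W, C_sar), e.g. Sentinel-1 VV/VH backscatter
    optical: (H, W, C_opt), e.g. selected Sentinel-2 bands
    Both inputs are assumed co-registered and normalized beforehand.
    """
    assert sar.shape[:2] == optical.shape[:2], "patches must share a pixel grid"
    return np.concatenate([sar, optical], axis=-1)

# Hypothetical 256x256 patches: 2 SAR channels + 4 optical bands -> 6 channels
fused = early_fusion(np.zeros((256, 256, 2)), np.zeros((256, 256, 4)))
print(fused.shape)  # (256, 256, 6)
```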


Johanna Schmidt,
VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH

Visual Analytics for Large-Scale IoT Data

Date: 22.11.2022, 14:00 CET
Room: HS1/Zoom/YouTube

About the Speaker:
Johanna received her master's degree in computer science in 2011. She continued with a PhD in data visualization at TU Wien, which she completed in 2016. Afterward, Johanna joined the AIT Austrian Institute of Technology GmbH as a Scientist, where she was responsible for developing Visual Analytics solutions for large-scale mobility and trajectory data. In 2019, Johanna joined the VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH in Vienna, where she became the head of the Visual Analytics research group in 2020. She now leads a team of senior and junior researchers, research engineers, students, and interns. Johanna's primary research focuses on information visualization and visual analytics of large, multi-dimensional data. In this respect, she is especially interested in web-based, mobile, and progressive visualization solutions. Johanna teaches data visualization to bachelor's and master's students at TU Wien, the University of Applied Sciences in Salzburg, and the University of Applied Sciences in Krems.

Abstract:
With the digitization of the manufacturing industry, energy supply, and other domains, enormous amounts of IoT (Internet of Things) data are collected. Expectations regarding quality, costs, delivery time, durability, and environmental aspects are rising at a similar speed. Data-driven manufacturing and planning open up unprecedented opportunities to understand the impact of decisions on engineering performance and customer satisfaction. Visual Analytics plays a very important role in turning data into actionable decisions and is increasingly essential for companies to remain competitive. Visual Analytics has already proven its usefulness for analyzing IoT data in various projects, as outlined in this talk. However, there are still some major challenges ahead of us. We still face Big Data problems when it comes to providing a concise overview of large amounts of data. Furthermore, many challenges remain in applying visualization successfully, both from the perspective of the manufacturing and energy domains and from the perspective of visualization research. In this talk, I will reflect on our past experiences applying Visual Analytics in research projects together with industry partners and describe the main challenges we face now.
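
One standard remedy for the overview problem mentioned above is binned aggregation: a long sensor series is reduced to a per-bin envelope (min, mean, max) so that only a drawable number of points reaches the screen. The sketch below is a hedged minimal example with hypothetical data, not the VRVis implementation.

```python
import numpy as np

def overview_bins(values: np.ndarray, n_bins: int):
    """Reduce a long series to a per-bin (min, mean, max) envelope.

    Plotting n_bins aggregated points preserves the visual envelope of
    the signal while avoiding overplotting millions of raw samples.
    """
    return [(b.min(), b.mean(), b.max()) for b in np.array_split(values, n_bins)]

# One day of 1 Hz sensor readings reduced to 500 drawable bins
readings = np.random.rand(24 * 3600)
summary = overview_bins(readings, 500)
print(len(summary), summary[0])
```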


Hannes Kaufmann,
TU Wien

Large-Scale Walkable Virtual Reality

Date: October 19th, 13:00 CEST
Room: S3 055/Zoom/YouTube

About the Speaker:
Hannes Kaufmann is a full professor of Virtual and Augmented Reality at the Institute of Visual Computing & Human-Centered Technology at TU Wien. He has conducted research in the areas of virtual reality, tracking, mobile augmented reality, training spatial abilities in AR/VR, tangible interaction, medical VR/AR applications, real-time ray tracing, redirected walking, geometry, and educational mathematics software. His habilitation (2010) was on "Applications of Mixed Reality", with a major focus on educational mixed reality applications. He has acted on behalf of the European Commission as a project reviewer, participated in EU projects in FP5, FP7, and Horizon 2020, managed over 35 research projects, and published more than 120 scientific papers.

Abstract:
This talk focuses on real walking through large virtual environments. Seven years ago, we developed a low-cost virtual reality platform that allows multiple users to walk untethered in a large environment. For that purpose, we also designed and implemented a low-cost large-area tracking system. Our platform was extended within multiple research projects and used for research in application areas such as first responder training, scientific visualization, architectural design, marketing & sales, and entertainment. Based on this platform, we were able to design and perform evaluation studies to investigate users' behavior in large virtual worlds. First, we studied whether users can perceive each other when fully immersed (mutual proximity awareness) and then studied collision prevention techniques, collision avoidance, and copresence. We also developed novel techniques that manipulate the spatial layout of virtual worlds to make them seem larger than they are by overlapping virtual rooms in a way that users do not notice. In a follow-up study, we investigated which design choices provide the best results for spatial manipulation. Finally, I will present our current work on large-scale haptics, utilizing a mobile robotic platform to provide haptic feedback to multiple users in large-scale VR.
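
One way to picture the room-overlap idea is that overlapping rooms are never rendered simultaneously: only the room the user currently occupies is visible, so the combined virtual layout can exceed the physical tracking space unnoticed. The sketch below is a deliberately simplified, hypothetical illustration, not the platform described in the talk.

```python
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    x_min: float
    x_max: float
    z_min: float
    z_max: float  # floor-plane bounds in metres

    def contains(self, x: float, z: float) -> bool:
        return self.x_min <= x <= self.x_max and self.z_min <= z <= self.z_max

# Two virtual rooms whose footprints overlap between x = 4 and x = 6
rooms = [Room("A", 0, 6, 0, 5), Room("B", 4, 10, 0, 5)]

def visible_room(x: float, z: float) -> str:
    """Render only the room the user stands in, hiding all others.

    Because the overlapping region is never seen from two rooms at once,
    the layout feels larger than the tracked space. (Simplified: a real
    system would keep the room last entered active instead of testing
    rooms in a fixed order.)
    """
    for room in rooms:
        if room.contains(x, z):
            return room.name
    return "corridor"

print(visible_room(2.0, 2.5))  # A
print(visible_room(9.0, 2.5))  # B
```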



Elif Surer,
Middle East Technical University

A Brief Introduction to Game Research: Serious Games, Extended Reality Applications, and Reinforcement Learning

Date: June 22nd, 16:00 CEST
Room: Zoom/YouTube

About the Speaker:
Elif Surer received her Ph.D. in Bioengineering in 2011 from the University of Bologna. She received her M.Sc. and B.Sc. degrees in Computer Engineering from Boğaziçi University in 2007 and 2005, respectively. From 2013 to 2015, she worked as a postdoctoral researcher at the University of Milan in the EU project REWIRE, where she developed video games for the rehabilitation of stroke and neglect patients. She joined the Modeling and Simulation Department of the METU Graduate School of Informatics in 2015 and is currently working as an Associate Professor in the METU Graduate School of Informatics' Multimedia Informatics program. She is the director of the METU Entrepreneurship Research Center (GIMER) and a mentor at METU Design Factory and the bang. Art Innovation Prix. She collaborates as a researcher in several interdisciplinary national and EU-funded projects. Her research interests are serious games, virtual/mixed reality, and reinforcement learning.

Abstract:
In this talk, a quick overview of the game research domain and of recent applications of extended reality will be provided. Game research will briefly be introduced via recent research on serious games (games with purposes beyond entertainment alone), reinforcement learning agents, and virtual and mixed reality games. Example games and case studies from recent national and EU-funded projects, which use adaptive difficulty adjustment, procedural content generation, and reinforcement learning algorithms, will also be discussed. Finally, the opportunities and limitations of using serious games will briefly be introduced.
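
To give one concrete flavor of the techniques listed above, adaptive difficulty adjustment can be as simple as a rubber-band rule that nudges a difficulty parameter toward a target success rate. The sketch below is a hypothetical minimal example, not code from the projects mentioned.

```python
def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.7, step: float = 0.05) -> float:
    """Rubber-band adaptive difficulty, clamped to [0, 1].

    Players succeeding more often than the target rate get a harder game;
    players succeeding less often get an easier one.
    """
    if success_rate > target:
        difficulty += step
    elif success_rate < target:
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

d = 0.5
for rate in [0.9, 0.9, 0.4, 0.8]:  # observed per-session success rates
    d = adjust_difficulty(d, rate)
print(round(d, 2))  # 0.6
```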


Nils Gehlenborg,
Harvard Medical School

Grammar-Based Visualization of Genome-Mapped Data with Gosling

Date: May 24th, 13:00 CEST
Room: MT127/Zoom/YouTube

About the Speaker:
Nils Gehlenborg, PhD, is an Associate Professor in the Department of Biomedical Informatics at Harvard Medical School, Director of the Master in Biomedical Informatics program, and Director of the Biomedical Informatics and Data Science Research Training (BIRT) program.

Nils received his PhD from the University of Cambridge and was a predoctoral fellow at the European Bioinformatics Institute (EMBL-EBI) in the Functional Genomics Group of Alvis Brazma. Dr. Gehlenborg completed his postdoctoral training as a Research Associate in the lab of Peter J. Park at the Center for Biomedical Informatics at Harvard Medical School.

The goal of Nils's research is to improve human health by developing visual interfaces and computational techniques that enable scientists and clinicians to efficiently interact with biomedical data. Tight integration of algorithmic approaches from biomedical informatics with advanced data visualization techniques is central to his efforts, as is close collaboration with clinicians and experimentalists. Currently, Nils is researching and developing novel tools to visualize 3D genome conformation data as well as heterogeneous data from large-scale cancer genomics studies. These efforts integrate visual and computational approaches to support sense-making in biology and reproducible, collaborative research. Nils is also a Co-Investigator for the 4D Nucleome Network Data Coordination and Integration Center hosted at Harvard Medical School.

Nils is a co-founder, former general chair, and current steering committee chair of BioVis, the Symposium on Biological Data Visualization, and co-founder of VIZBI, the annual workshop on Visualizing Biological Data. Currently, he co-chairs the Policy Working Group for the 4D Nucleome Network, an NIH Common Fund project. Nils has served on the program committees of several international bioinformatics and data visualization conferences and held multiple editorial roles, including his current role as associate editor of BMC Bioinformatics. He has also contributed to the “Points of View” data visualization column in Nature Methods.

Abstract:
Visualization of genomics and epigenomics data for exploration and communication has a long history in molecular biology. Data analysts commonly use genome browsers, Circos-type plots, and sequence logos to investigate genomic sequences and data mapped to them. However, there are dozens of other techniques and hundreds of tools to view and explore genomics data. This rich set of tools and techniques demonstrates the importance of genome-mapped data visualization, but it also poses significant challenges for data analysts, who often need to convert between different data formats and use multiple tools for their analysis tasks. To address these challenges, we designed the Gosling visualization grammar, which can be used to generate virtually all previously described interactive visualization techniques for genome-mapped data. I will present how we designed Gosling and introduce the tool ecosystem that we built to support Gosling-based visualizations. Finally, I will propose opportunities for future research in genomics and epigenomics data visualization.
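
To illustrate what "grammar-based" means in practice: a Gosling visualization is a declarative specification rather than imperative plotting code. The sketch below shows the rough shape of such a spec as a Python dict, abridged from the style of published Gosling examples; the data URL is hypothetical and the spec is illustrative rather than guaranteed-complete.

```python
import json

# One bar-chart track over a genomic x-axis, in the declarative style of the
# Gosling grammar (abridged; see the Gosling docs for the full schema).
spec = {
    "title": "Example signal track",
    "tracks": [{
        "data": {
            "url": "https://example.org/signal.csv",  # hypothetical dataset
            "type": "csv",
            "chromosomeField": "chrom",
            "genomicFields": ["position"],
        },
        "mark": "bar",
        "x": {"field": "position", "type": "genomic"},
        "y": {"field": "peak", "type": "quantitative"},
        "width": 700,
        "height": 60,
    }],
}
print(json.dumps(spec, indent=2))
```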



Michael Lankes,
University of Applied Sciences Upper Austria

The Power of Social Gaze in Games

Date: April 12th, 14:00 CET
Room: S3 055/Zoom/YouTube

About the Speaker:
Michael Lankes is a professor in the Digital Media department at the University of Applied Sciences Upper Austria, where he teaches game art & design and usability & interaction design. Since 2017, he has also been the head of the research group Playful Interactive Environments (PIE). His expertise ranges from VR in games and serious games in the health domain to playful interactions supported by nonverbal communication channels, documented in more than 50 publications. Together with his research network, Lankes has established a strong track record in the field of gaze-based interactions in games, disseminating his work in journals and at conferences. He is an active member of the HCI games community, co-organizing conferences such as the CHI PLAY conference series.

Abstract:
Gaze-based interactions have found their way into the games domain. "AAA games," such as Assassin's Creed Valhalla, support eye-tracking devices to grant players a more natural form of game interaction. Furthermore, hardware devices (e.g., PlayStation VR 2) indicate that this technology might play an essential role in the coming years. Currently, many gaze-supported games use gaze input as a supporting element and, in some cases, as a complete replacement for mouse input and game controllers.

Besides that, the inclusion of gaze has the potential to enrich social interactions by adding a valuable channel of non-verbal communication between players. Social gaze regulates information flow and turn-taking, and it mediates shared attention on an area of interest. Gazing at an object during an interaction can also establish a mutual understanding of what is being discussed. In this talk, the concept of social gaze will be introduced, and different case studies will be presented that illustrate its design potential and pitfalls. The talk shall provide insights for researchers and designers who are interested in games in general and who are keen on employing and investigating social gaze in their research projects.
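
As a minimal sketch of how shared attention could be detected at runtime: joint attention on an object can be flagged when both players' gaze points fall within a radius of the same object. The names and threshold below are hypothetical illustration, not an API from the presented case studies.

```python
from math import dist

def joint_attention(gaze_a, gaze_b, objects, radius=0.5):
    """Return the object both players attend to, or None.

    gaze_a, gaze_b: (x, y) gaze points of two players in shared world space.
    objects: mapping of object name -> (x, y) position.
    Shared attention = both gaze points within `radius` of the same object.
    """
    for name, pos in objects.items():
        if dist(gaze_a, pos) <= radius and dist(gaze_b, pos) <= radius:
            return name
    return None

objects = {"chest": (1.0, 2.0), "door": (5.0, 0.0)}
print(joint_attention((1.2, 2.1), (0.8, 1.9), objects))  # chest
print(joint_attention((1.2, 2.1), (5.1, 0.2), objects))  # None
```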



Kerstin Hammernik,
Technical University of Munich

Physics-Based Learning for MRI Reconstruction: Recent Advances in Static and Dynamic Imaging

Date: March 15th, 12:00 CET
Room: HS 5/Zoom/YouTube

About the Speaker:
Kerstin Hammernik is a postdoctoral researcher in the group of Prof. Daniel Rueckert at the Lab for Artificial Intelligence in Healthcare and Medicine, Technical University of Munich, Germany, and at the Department of Computing, Imperial College London, United Kingdom. In 2019, she received her PhD degree in Computer Science from Graz University of Technology, Austria. Her PhD thesis "Variational Networks for Medical Image Reconstruction" was supervised by Prof. Thomas Pock, Institute of Computer Graphics and Vision, Graz University of Technology. During her PhD, she spent four months as a research intern at the Center for Advanced Imaging Innovation and Research, New York University School of Medicine, USA. Her research interests are inverse problems and machine learning in medical imaging, with a special focus on fast MRI acquisition and reconstruction for cardiac and musculoskeletal applications.

Abstract:
During the past years, deep learning has evolved tremendously in the research field of MR image reconstruction. In this talk, I will guide you through these developments, ranging from learning advanced image regularization to learning physics-based unrolled optimization, and I will discuss challenges and caveats of deep learning for MR image reconstruction. I will cover examples ranging from 2D musculoskeletal imaging to higher-dimensional cardiac imaging that show the vast potential for the future of fast MR image acquisition and reconstruction.
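
For readers unfamiliar with physics-based unrolling: the core idea is to alternate a data-consistency gradient step, derived from the MR acquisition model, with a learned regularization step, for a fixed number of iterations. The sketch below shows this structure for hypothetical single-coil Cartesian sampling, with a placeholder in place of the learned regularizer; it is a schematic of the general technique, not the speaker's networks.

```python
import numpy as np

def dc_gradient(x, y, mask):
    """Gradient (up to FFT normalization) of 0.5 * ||M F x - y||^2.

    F is the 2D FFT, M a binary k-space sampling mask, y measured k-space.
    """
    return np.fft.ifft2(mask * (mask * np.fft.fft2(x) - y))

def unrolled_recon(y, mask, regularizer, n_iter=8, step=1.0):
    """Unrolled gradient scheme: x <- x - step * A^H(A x - y) - R(x).

    In a physics-based network the step sizes and the regularizer R are
    learned per iteration; here R is a placeholder callable.
    """
    x = np.fft.ifft2(y)  # zero-filled initialization
    for _ in range(n_iter):
        x = x - step * dc_gradient(x, y, mask) - regularizer(x)
    return x

# Toy single-coil example: 25%-sampled k-space of a random image,
# with a do-nothing regularizer standing in for the learned one.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.25
y = mask * np.fft.fft2(image)
recon = unrolled_recon(y, mask, regularizer=lambda x: 0.0 * x)
print(np.abs(recon - image).mean())
```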

Sanaz Mostaghim,
Otto von Guericke University Magdeburg

Collective and Individual Decision-Making Algorithms for Autonomous Systems

Date: January 25th, 14:00 CET
Room: Zoom/YouTube

About the Speaker:
Sanaz Mostaghim is a professor of computer science at the Chair of Computational Intelligence and the founder and head of the SwarmLab at the Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany. She holds a PhD degree (2004) in electrical engineering from the University of Paderborn, Germany. Sanaz has worked as a postdoctoral fellow at ETH Zurich in Switzerland (2004-2006) and as a lecturer at the Karlsruhe Institute of Technology (KIT), Germany (2006-2014), where she received her habilitation degree in applied computer science in 2012. Her research interests are in the area of multi-criteria decision-making, collective learning and decision-making, and their applications in robotics and science. Sanaz is the deputy chair of Informatics Germany and a member of the advisory board on Digitalization at the Ministry of Economy, Science and Digitalization of the State of Saxony-Anhalt, Germany. She is the vice president of the IEEE Computational Intelligence Society (CIS), an associate editor of IEEE Transactions on Artificial Intelligence and IEEE Transactions on Evolutionary Computation, and a member of the editorial boards of several international journals on robotics and AI. Since 2020, she has been appointed a distinguished lecturer of IEEE CIS.

Abstract:
This talk is about recent advances in multi-objective optimization and decision-making techniques for autonomous systems. Decision-making is usually required when we are confronted with conflicting objectives, and it is in fact a very challenging task even for human decision-makers, since we first need to find all the possible optimal alternatives and then select the right choice using a decision policy. In this talk, we replace the human decision-maker with an autonomous system and provide novel methodologies for multi-criteria decision-making in a range of scenarios in which autonomous systems are confronted with conflicting objectives during a mission. Enabling such systems to decide autonomously can contribute to their applicability in critical missions such as rescue robotics, where the intervention of a human controller is not always possible. The challenge lies not only in finding and selecting the best alternative, but also in acting within a limited timeframe during the mission. A further focus of the talk is on individual vs. collective decision-making algorithms. We will show that collective learning of a decision policy can help both the individual and the collective act in an efficient way.
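
The two-stage process described above can be made concrete in a few lines: first filter the set of alternatives down to the Pareto-optimal (non-dominated) ones, then apply a decision policy to pick a single choice. The sketch below uses a simple weighted-sum policy on hypothetical (time, energy) objectives; it illustrates the general principle, not the speaker's algorithms.

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(alternatives):
    """Keep only the non-dominated alternatives."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives)]

def decide(front, weights):
    """Toy decision policy: weighted sum over the (minimized) objectives."""
    return min(front, key=lambda a: sum(w * x for w, x in zip(weights, a)))

# Hypothetical objectives per alternative: (time to target, energy used)
alternatives = [(10, 5), (8, 9), (12, 4), (9, 6), (11, 8)]
front = pareto_front(alternatives)  # (11, 8) is dominated by (10, 5)
print(front)                        # [(10, 5), (8, 9), (12, 4), (9, 6)]
print(decide(front, weights=(0.7, 0.3)))  # (9, 6)
```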