Institute of Computer Graphics


Hannes Kaufmann,
TU Wien

Large-Scale Walkable Virtual Reality

October 19th, 13:00 CEST
Room: Zoom/YouTube

About the Speaker:
Hannes Kaufmann is full professor of Virtual and Augmented Reality at the Institute of Visual Computing & Human-Centered Technology at TU Wien. He has conducted research in virtual reality, tracking, mobile augmented reality, training spatial abilities in AR/VR, tangible interaction, medical VR/AR applications, real-time ray tracing, redirected walking, geometry, and educational mathematics software. His habilitation (2010) was on "Applications of Mixed Reality," with a major focus on educational mixed reality applications. He has acted on behalf of the European Commission as a project reviewer, participated in EU projects in FP5, FP7, and Horizon 2020, managed over 35 research projects, and published more than 120 scientific papers.

This talk focuses on real walking through large virtual environments. Seven years ago we developed a low-cost virtual reality platform that allows multiple users to walk untethered in a large environment. For that purpose we also designed and implemented a low-cost large-area tracking system. Our platform was extended within multiple research projects and used for research in application areas such as first-responder training, scientific visualization, architectural design, marketing & sales, and entertainment. Based on this platform we were able to design and perform evaluation studies to investigate users' behavior in large virtual worlds. First we studied whether users can perceive each other when fully immersed (mutual proximity awareness), and then studied collision-prevention techniques, collision avoidance, and copresence. We also developed novel techniques to manipulate the spatial layout of virtual worlds to make them seem larger than they are, by overlapping virtual rooms while the overlap goes unnoticed by users. In a follow-up study we investigated which design choices provide the best results for spatial manipulation. Finally, I will present our current work on large-scale haptics, utilizing a mobile robotic platform to provide haptic feedback to multiple users in large-scale VR.

Elif Surer,
Middle East Technical University

A Brief Introduction to Game Research: Serious Games, Extended Reality Applications, and Reinforcement Learning

June 22nd, 16:00 CEST
Room: Zoom/YouTube

About the Speaker:
Elif Surer received her Ph.D. in Bioengineering in 2011 from the University of Bologna. She received her M.Sc. and B.Sc. degrees in Computer Engineering from Boğaziçi University in 2007 and 2005, respectively. From 2013 to 2015, she worked as a post-doctoral researcher at the University of Milan in the EU project REWIRE, where she developed video games for the rehabilitation of stroke and neglect patients. She joined the Modeling and Simulation Department of the METU Graduate School of Informatics in 2015 and is currently working as an Associate Professor in its Multimedia Informatics program. She is the director of the METU Entrepreneurship Research Center (GIMER) and a mentor at METU Design Factory and the bang. Art Innovation Prix. She collaborates as a researcher in several interdisciplinary national and EU-funded projects. Her research interests are serious games, virtual/mixed reality, and reinforcement learning.

In this talk, a quick overview of the game research domain and recent applications from extended reality will be provided. Game research will briefly be introduced via recent research on serious games (games with purposes beyond entertainment alone), reinforcement learning agents, and virtual reality and mixed reality games. Example games and case studies from recent national and EU-funded projects, which use adaptive difficulty adjustment, procedural content generation, and reinforcement learning algorithms, will also be discussed. Finally, opportunities and limitations of using serious games will briefly be introduced.

We would like to point out that when playing the video, data may be transmitted to external parties. Learn more by reading our data privacy policy.
Data protection information

Nils Gehlenborg,
Harvard Medical School

Grammar-Based Visualization of Genome-Mapped Data with Gosling

May 24th, 13:00 CEST
Room: MT127/Zoom/YouTube

About the Speaker:
Nils Gehlenborg, PhD, is an Associate Professor in the Department of Biomedical Informatics at Harvard Medical School, Director of the Master in Biomedical Informatics program, and Director of the Biomedical Informatics and Data Science Research Training (BIRT) program.

Nils received his PhD from the University of Cambridge and was a predoctoral fellow at the European Bioinformatics Institute (EMBL-EBI) in the Functional Genomics Group of Alvis Brazma. Dr Gehlenborg completed his postdoctoral training as a Research Associate in the lab of Peter J Park at the Center for Biomedical Informatics at Harvard Medical School.

The goal of Nils' research is to improve human health by developing visual interfaces and computational techniques that enable scientists and clinicians to efficiently interact with biomedical data. Tight integration of algorithmic approaches from biomedical informatics with advanced data visualization techniques is central to his efforts, as is close collaboration with clinicians and experimentalists. Currently, Nils is researching and developing novel tools to visualize 3D genome conformation data as well as heterogeneous data from large-scale cancer genomics studies. These efforts integrate visual and computational approaches to support sense-making in biology and reproducible, collaborative research. Nils is also a Co-Investigator for the 4D Nucleome Network Data Coordination and Integration Center hosted at Harvard Medical School.

Nils is a co-founder, former general chair, and current steering committee chair of BioVis, the Symposium on Biological Data Visualization, and co-founder of VIZBI, the annual workshop on Visualizing Biological Data. Currently, he co-chairs the Policy Working Group for the 4D Nucleome Network, an NIH Common Fund project. Nils has served on the program committees of several international bioinformatics and data visualization conferences and held multiple editorial roles, including his current role as associate editor of BMC Bioinformatics. He has also contributed to the “Points of View” data visualization column in Nature Methods.

Visualization of genomics and epigenomics data for exploration and communication has a long history in molecular biology. Data analysts commonly use genome browsers, Circos-type plots, and sequence logos to investigate genomic sequences and data mapped to them. However, there are dozens of other techniques and hundreds of tools to view and explore genomics data. This rich set of tools and techniques demonstrates the importance of genome-mapped data visualization. However, it also poses significant challenges for data analysts, who often need to convert between different data formats and use multiple tools for their analysis tasks. To address these challenges, we designed the Gosling visualization grammar, which can be used to generate virtually all previously described interactive visualization techniques for genome-mapped data. I will present how we designed Gosling and introduce the tool ecosystem that we built to support Gosling-based visualizations. Finally, I will propose opportunities for future research in genomics and epigenomics data visualization.
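To give a flavor of what "grammar-based" means here, the sketch below builds a minimal declarative specification for a single genome-mapped signal track. The property names follow our reading of Gosling's JSON schema and should be treated as an assumption rather than verified syntax, and the data URL is hypothetical; consult the Gosling documentation for the exact grammar.

```python
import json

# Schematic Gosling-style specification (property names are an assumption
# based on the Gosling JSON grammar; the data URL is hypothetical):
# one track rendering a quantitative signal along the genome as bars.
spec = {
    "title": "Example: signal track",
    "tracks": [
        {
            "data": {
                "url": "https://example.org/signal.csv",  # hypothetical
                "type": "csv",
            },
            "mark": "bar",
            # Map the genomic coordinate to x and the signal value to y.
            "x": {"field": "position", "type": "genomic"},
            "y": {"field": "peak", "type": "quantitative"},
        }
    ],
}

print(json.dumps(spec, indent=2))
```

The point of such a grammar is that swapping `"mark": "bar"` for another mark, or composing several tracks, re-creates many of the classic genome-visualization techniques without writing rendering code.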


Michael Lankes,
University of Applied Sciences Upper Austria

The Power of Social Gaze in Games

April 12th, 14:00 CET
Room: S3 055/Zoom/YouTube

About the Speaker:
Michael Lankes is a professor of game art & design and usability & interaction design at the Digital Media department of the University of Applied Sciences Upper Austria. Since 2017 he has also been head of the research group Playful Interactive Environments (PIE). His expertise ranges from VR in games and serious games in the health domain to playful interactions supported by nonverbal communication channels, documented in more than 50 publications. Together with his research network, Lankes has established a strong track record in the field of gaze-based interactions in games, disseminating his work in journals and at conferences. He is an active member of the HCI games community, co-organizing conferences such as the CHI PLAY conference series.

Gaze-based interactions have found their way into the games domain. "AAA" games such as Assassin's Creed Valhalla support eye-tracking devices to grant players a more natural form of game interaction. Furthermore, hardware devices (e.g., PlayStation VR 2) indicate that this technology might play an essential role in the coming years. Currently, many gaze-supported games use gaze input as a supporting element and, in some cases, as a complete replacement for mouse input and game controllers.

Besides that, the inclusion of gaze has the potential to enrich social interactions by adding a valuable channel of non-verbal communication between players. Social gaze regulates information flow and turn-taking, and mediates shared attention on an area of interest. Gazing at an object during an interaction can also establish a mutual understanding of what is being discussed. In this talk, the concept of social gaze will be introduced, and different case studies will be presented that illustrate the design potentials and pitfalls. The talk shall provide insights for researchers and designers who are interested in games in general and who are keen on employing and investigating social gaze in their research projects.


Kerstin Hammernik,
Technical University of Munich

Physics-Based Learning for MRI Reconstruction: Recent Advances in Static and Dynamic Imaging

March 15th, 12:00 CET
Room: HS 5/Zoom/YouTube

About the Speaker:
Kerstin Hammernik is a postdoctoral researcher in the group of Prof. Daniel Rueckert at the Lab for Artificial Intelligence in Healthcare and Medicine, Technical University of Munich, Germany, and at the Department of Computing, Imperial College London, United Kingdom. In 2019, she received her PhD degree in Computer Science from Graz University of Technology, Austria. Her PhD thesis "Variational Networks for Medical Image Reconstruction" was supervised by Prof. Thomas Pock, Institute of Computer Graphics and Vision, Graz University of Technology. During her PhD, she spent four months as a research intern at the Center for Advanced Imaging Innovation and Research, New York University School of Medicine, USA. Her research interests are inverse problems and machine learning in medical imaging, with a special focus on fast MRI acquisition and reconstruction for cardiac and musculoskeletal applications.

During the past years, deep learning has evolved tremendously in the research field of MR image reconstruction. In this talk, I will guide you through these developments, ranging from learning advanced image regularization to learning physics-based unrolled optimization, and I will discuss challenges and caveats of deep learning for MR image reconstruction. I will cover examples ranging from 2D musculoskeletal imaging to higher-dimensional cardiac imaging that show the vast potential for the future of fast MR image acquisition and reconstruction.
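The "physics-based unrolled optimization" mentioned in the abstract alternates gradient steps on a data-consistency term (built from the known MRI forward operator) with updates from a learned regularizer. The NumPy sketch below shows one such iteration for a toy single-coil acquisition; the trained CNN regularizer of a variational network is replaced by a zero placeholder, and the function names are illustrative.

```python
import numpy as np

def forward(x, mask):
    """Undersampled single-coil MRI forward operator: masked 2D FFT."""
    return mask * np.fft.fft2(x, norm="ortho")

def adjoint(k, mask):
    """Adjoint operator: inverse FFT of the masked k-space data."""
    return np.fft.ifft2(mask * k, norm="ortho")

def unrolled_step(x, y, mask, lam=1.0, regularizer=lambda x: 0.0):
    """One iteration of a physics-based unrolled scheme: a gradient step
    on the data term ||Ax - y||^2 plus a regularizer update. In a
    variational network the regularizer is a trained CNN; here it is a
    placeholder that returns zero."""
    grad_data = adjoint(forward(x, mask) - y, mask)
    return x - lam * grad_data - regularizer(x)

# Toy check: with a fully sampled mask the operator is unitary, so a
# single step with lam=1 starting from zero recovers the image exactly.
rng = np.random.default_rng(0)
gt = rng.standard_normal((8, 8))
mask = np.ones((8, 8))
y = forward(gt, mask)
x1 = unrolled_step(np.zeros((8, 8), dtype=complex), y, mask)
```

In an actual unrolled network, several such steps are stacked and the regularizer weights (and often the step sizes `lam`) are trained end-to-end on pairs of undersampled and fully sampled scans.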

Sanaz Mostaghim,
Otto von Guericke University Magdeburg

Collective and Individual Decision-Making Algorithms for Autonomous Systems

January 25th, 14:00 CET
Room: Zoom/YouTube

About the Speaker:
Sanaz Mostaghim is a professor of computer science at the chair of Computational Intelligence and the founder and head of the SwarmLab at the Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany. She holds a PhD degree (2004) in electrical engineering from the University of Paderborn, Germany. Sanaz has worked as a postdoctoral fellow at ETH Zurich in Switzerland (2004-2006) and as a lecturer at the Karlsruhe Institute of Technology (KIT), Germany (2006-2014), where she received her habilitation degree in applied computer science in 2012. Her research interests are in the area of multi-criteria decision-making, collective learning and decision-making, and their applications in robotics and science. Sanaz is the deputy chair of Informatics Germany and a member of the advisory board on Digitalization at the Ministry of Economy, Science and Digitalization of the State of Saxony-Anhalt, Germany. She is the vice president of the IEEE Computational Intelligence Society (CIS), an associate editor of IEEE Transactions on Artificial Intelligence and IEEE Transactions on Evolutionary Computation, and a member of the editorial boards of several international journals on robotics and AI. Since 2020, she has been a distinguished lecturer of the IEEE CIS.

This talk is about recent advances in multi-objective optimization and decision-making techniques for autonomous systems. Decision-making is usually required when we are confronted with conflicting objectives and is in fact a very challenging task even for human decision-makers, since we first need to find all the possible optimal alternatives and then select the right choice using a decision policy. In this talk, we replace the human decision-maker with an autonomous system and provide novel methodologies for multi-criteria decision-making on a range of scenarios in which the autonomous systems are confronted with conflicting objectives during the mission. Enabling such systems to decide autonomously can contribute to their applicability in critical missions such as rescue robotics, where the intervention of a human controller is not always possible. The challenge lies not only in finding and selecting the best alternative, but also in acting within a limited timeframe during the mission. A further focus of the talk is on individual vs. collective decision-making algorithms. We will show that collective learning of a decision policy can help both the individual and the collective to act in an efficient way.
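The first step the abstract describes, finding all possible optimal alternatives before applying a decision policy, corresponds to computing the Pareto-optimal (non-dominated) set of the candidate solutions. A minimal sketch of that step (function names and the example objectives are illustrative):

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b in every
    objective (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Two conflicting objectives, e.g. mission time vs. energy use (both
# to be minimized): improving one tends to worsen the other.
candidates = [(1, 9), (3, 7), (5, 5), (4, 6), (7, 7), (9, 1)]
front = pareto_front(candidates)
# (7, 7) is dominated by both (5, 5) and (4, 6); the other five
# candidates are mutually non-dominated and form the Pareto front.
```

A decision policy then picks one alternative from `front`, which is exactly where the time constraints and the individual vs. collective policies discussed in the talk come into play.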