Tobias Schreck, Graz University of Technology

Visual-Interactive Search and Analysis of Patterns in Complex Data

December 15th, 2015, 4:00pm, Science Park 1, MT130

Abstract:
Advances in data acquisition and storage technology lead to the creation of increasingly large, complex data sets across many different application domains, including science, engineering, business and social media. Often, this data is of a complex nature, involving high-dimensional, temporal and spatial data. Important user tasks for leveraging large, complex data sets include finding relevant information, exploring for patterns and insights, and re-using data for authoring purposes. If appropriately combined, methods from interactive data visualization and data analysis make it possible to factor in background user knowledge during the analysis process, and provide solutions for many search and analysis tasks in important applications.
In this talk, we discuss visual-interactive data analysis techniques from our work that can support search and analysis in a variety of different data types and novel application scenarios. These include example- and sketch-based retrieval in scientific data repositories, interactive cluster analysis approaches for time-oriented data, and user-adaptive learning of patterns in high-dimensional data. We also briefly address the question of evaluation based on contests and crowdsourcing. We conclude with an outline of research challenges in the area.

About the Speaker:

Tobias Schreck holds a Professor position with the Institute for Computer Graphics and Knowledge Visualization at Graz University of Technology, Austria. Between 2011 and 2015, he was an Assistant Professor with the Data Analysis and Visualization Group at the University of Konstanz, Germany. Between 2007 and 2011 he was a postdoctoral researcher and head of a junior research group on Visual Search and Analysis at TU Darmstadt, Germany. Tobias Schreck obtained a PhD in Computer Science in 2006, and a Master of Science degree in Information Engineering in 2002, both from the University of Konstanz. His research interests include visual search and analysis in time-oriented, high-dimensional and 3D object data, with applications in data analysis, multimedia retrieval and cultural heritage. He has served as co-chair for Posters, Workshops and Panels for IEEE VIS, as well as a co-organizer of the EG Workshop on 3D Object Retrieval. Tobias Schreck is a PI in the EU projects PRESIOUS and CONSENSUS, and leads a state-funded project on Visual Search and Analysis in Time-Oriented, Annotated Research Data. A research profile can be found at www.cgv.tugraz.at/schreck.

Piotr Didyk, Saarland University

Perceptually-driven Inputs for New Output Devices

November 10th, 2015, 4:00pm, Computer Science Building (SP 3) room 063

Abstract:
There has been a tremendous increase in the quality and number of new output devices, such as stereo and automultiscopic screens, portable and wearable displays, and 3D printers. Some of them have already entered mass production and attracted considerable user attention; others will soon follow. Unfortunately, the capabilities of these emerging technologies outpace those of the methods and tools for creating content. Also, the current level of understanding of how these new technologies influence user experience is insufficient to fully exploit their advantages. In this talk, I will demonstrate that careful combinations of new hardware, computation, and models of human perception can lead to solutions that provide a significant increase in perceived quality. More precisely, I will show how careful rendering of frames can improve spatial and temporal resolution as well as reduce temporal artifacts of video content without sacrificing its cinematic look. Next, I will discuss techniques for overcoming limitations of current 3D displays. Finally, in the context of 3D printing, I will discuss methods for specifying objects for fabrication.

About the Speaker:
Piotr Didyk is an Independent Research Group Leader at the Cluster of Excellence on ''Multimodal Computing and Interaction'' at Saarland University (Germany), where he heads a group on Perception, Display, and Fabrication. He is also appointed as a Senior Researcher at the Max Planck Institute for Informatics. Prior to this, he spent two years as a postdoctoral associate at the Massachusetts Institute of Technology. In 2012 he obtained his PhD from the Max Planck Institute for Informatics and Saarland University. During his studies he was also a visiting student at MIT. In 2008 he received his M.Sc. degree in Computer Science from the University of Wrocław (Poland). His main research focuses on perception and computer graphics. More precisely, he works on new display methods that improve perceived quality by accounting for properties of the human visual system, as well as new algorithms for computational fabrication that account for human interaction and perception. His interests include human perception, new display techniques, image/video processing, and computational fabrication.

Stefan Bruckner, University of Bergen

Smart Visual Interfaces in Data Exploration and Analysis

October 21st, 2015, 4:30pm, Computer Science Building (SP 3) room 063

Abstract:
Visualization, the algorithmic transformation of data into easily understandable images, is essential in enabling users to gain insight into complex phenomena by exploiting the extraordinary capability of the human brain to process visual information. However, in most scenarios users need to interact with the visualization in order to take full advantage of it. Visualization techniques have to be chosen, parameters need to be tuned, and regions of interest have to be specified. Hence, many powerful visualization systems have highly complex user interfaces which can intimidate users and require a substantial amount of training. In this talk, I will discuss recent approaches for developing "smart" visual interfaces which incorporate domain knowledge to provide guided navigation and automation of common tasks in a goal-oriented manner.

About the Speaker:
Stefan Bruckner is a professor of visualization at the Department of Informatics of the University of Bergen, Norway. He received his master's degree in Computer Science from the Vienna University of Technology (VUT), Austria in 2004 and his Ph.D. in 2008 from the same university. He was awarded the habilitation (venia docendi) in Practical Computer Science in 2012. From 2008 to 2013, he was an assistant professor at the Institute of Computer Graphics and Algorithms at VUT. His research interests include interactive visualization techniques for biomedical data, illustrative visualization, volume rendering, and visual data exploration. His research has received publication awards at several international events; he won the 2006 Karl-Heinz-Höhne Award for Medical Visualization and received the 2011 Eurographics Young Researcher Award. He was paper co-chair of EuroVis 2012 and currently serves on the editorial board of Computers & Graphics. He is a member of the IEEE Computer Society, ACM SIGGRAPH, and Eurographics.

Bernd Bickel, IST Austria

Computational Challenges in Designing Virtual Models for Fabrication

June 30th, 2015, 4:00pm, Computer Science Building (SP 3) room 057

Abstract:
3D printing is considered a disruptive technology with potentially tremendous socioeconomic impact. In recent years, additive manufacturing technologies have made significant progress in terms of both sophistication and price; they have advanced to a point where devices now feature high-resolution, full-color, and multi-material printing. Nonetheless, they remain of limited use, given the lack of efficient algorithms and intuitive tools that can be used to design and model 3D-printable content.
My vision is to unleash the full potential of 3D printing technology with the help of computational methods. In our research, we are working to invent and develop new computational techniques for intuitively designing virtual 3D models and bringing them to the real world. Given the digital nature of the process, three factors play a central role: computational models and efficient representations that facilitate intuitive design, accurate and fast simulation techniques, and easy-to-use authoring tools for physically realizable objects and materials.
In this talk, I will present several projects that demonstrate our recent efforts in working toward this goal, structured according to basic object properties, and the lessons learned from working over several years with various 3D printers.

About the Speaker:
Bernd Bickel joined IST Austria in early 2015 as an Assistant Professor. He is a computer scientist interested in computer graphics and its overlap with animation, biomechanics, material science, and digital fabrication. His main objective is to push the boundaries of how digital content can be efficiently created, simulated, and reproduced.
Bernd obtained his Master's degree in Computer Science from ETH Zurich in 2006. For his PhD studies, Bernd joined the group of Markus Gross who is a full professor of Computer Science at ETH Zurich and the director of Disney Research Zurich. From 2011-2012, Bernd was a visiting professor at TU Berlin, and in 2012 he became a research scientist and research group leader at Disney Research, where he investigates approaches for simulating, designing, and fabricating materials and 3D objects.
Bernd's work focuses on two closely related challenges: (1) developing novel modeling and simulation methods, and (2) investigating efficient representation and editing algorithms for materials and functional objects. Recent work includes: theoretical foundations and practical algorithms for measuring and modeling the deformation behavior of soft tissue; simulating and reproducing fundamental properties, such as elasticity, surface reflectance, and subsurface scattering; and computational design systems for efficiently creating functional artifacts such as deformable objects and mechanical systems.

Vincent Lepetit, Graz University of Technology

Image Features for Robust Registration

May 5th, 2015, 4:00pm, Science Park 1, room MT130

Abstract:
Extracting image features such as feature points or edges is a critical step of many Computer Vision systems; however, this is still performed with carefully handcrafted methods. In this talk, I will present a new Machine Learning-based approach to detecting local image features, with application to contour detection in natural images, but also biomedical and aerial images, and to feature point extraction under drastic weather and lighting changes. I will then show that it is also possible to learn efficient object descriptions based on low-level features for scalable 3D object detection.

About the Speaker:
Dr. Vincent Lepetit is a Professor at the Institute for Computer Graphics and Vision, TU Graz, and a Visiting Professor at the Computer Vision Laboratory, EPFL. He received the PhD degree in Computer Vision in 2001 from the University of Nancy, France, after working in the ISA INRIA team. He then joined the Virtual Reality Lab at EPFL as a post-doctoral fellow and became a founding member of the Computer Vision Laboratory. He became a Professor at TU Graz in February 2014. His research interests include vision-based Augmented Reality, 3D camera tracking, Machine Learning, object recognition, and 3D reconstruction. He often serves as a program committee member and area chair of major vision conferences (CVPR, ICCV, ECCV, ACCV, BMVC). He is an editor for the International Journal of Computer Vision (IJCV) and the Computer Vision and Image Understanding (CVIU) journal.

Jan Kautz, NVIDIA

A Flexible Image Processing Pipeline

April 14th, 2015, 4:00pm, Computer Science Building (SP 3) room 063

Abstract:
Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model and enforces natural image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques.
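As a rough illustration of this 'one objective instead of cascaded modules' idea (a minimal sketch in Python/NumPy, not the system presented in the talk), the snippet below jointly deblurs and denoises a grayscale image by minimising a single energy with a data term and a quadratic smoothness prior; demosaicking and the YUV/DCT output representations are omitted, and the kernel, weights and step size are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def restore(y, psf, lam=0.05, iters=200, step=0.5):
    """Joint restoration sketch: minimise ||psf * x - y||^2 + lam * ||grad x||^2
    over the image x, instead of running separate deconvolution and
    denoising modules on each other's output."""
    x = y.copy()
    psf_flip = psf[::-1, ::-1]                        # adjoint of the blur
    for _ in range(iters):
        resid = fftconvolve(x, psf, mode='same') - y  # data-term residual
        grad_data = fftconvolve(resid, psf_flip, mode='same')
        # gradient of the smoothness prior = negative discrete Laplacian of x
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (grad_data - lam * lap)
        x = np.clip(x, 0.0, 1.0)
    return x

# Demo: blur and noise a toy image, then recover it in one optimisation.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = np.ones((5, 5)) / 25.0
y = fftconvolve(img, psf, mode='same') + 0.01 * rng.standard_normal(img.shape)
x_hat = restore(y, psf)
```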

About the Speaker:
Jan leads the Mobile Visual Computing research team at NVIDIA, working on computational photography and computer vision for mobile devices. Before joining NVIDIA in 2013, Jan was a tenured faculty member at University College London for eight years. He holds a BSc in Computer Science from the University of Erlangen-Nürnberg (1999), an MMath from the University of Waterloo (1999), received his PhD from the Max-Planck-Institut für Informatik (2003), and worked as a post-doc at the Massachusetts Institute of Technology (2003-2006). Jan is particularly interested in computational photography and imaging, computer vision, and computational displays, on which he has published many articles at various conferences including ACM SIGGRAPH, Eurographics, CVPR, ECCV, CHI, and many more.

Ákos Maróy, Aero Glass

Head Orientation Tracking in an Aircraft Cockpit Environment

March 18th, 2015, 10:30am, S2 Z74

Abstract:
A Head-Mounted Display-based Augmented Reality solution such as Aero Glass needs to track the orientation of the user's head to be able to display AR content in a usable fashion. While off-the-shelf Head-Mounted Displays such as the Epson Moverio BT-200 or Osterhout ODG R6 / X6 contain IMUs, tracking via an IMU is imprecise, mainly because of gyro drift. In an aviation context, neither the accelerometer nor the magnetometer can be used to offset gyro drift with sufficient accuracy. Thus, optical tracking approaches are investigated that allow the user's head orientation to be determined in relation to the cockpit, which, combined with knowledge of the aircraft orientation, makes it possible to offset gyro drift. Several optical tracking approaches are investigated: 'inside-out' approaches, where a camera on the head-mounted display is used, either with markerless SLAM-based tracking or by tracking markers in front of and above the user's head; and an 'outside-in' approach, where an external camera tracks the user's head via IR LEDs.
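As a toy illustration of how an absolute optical reference can cancel gyro drift (a minimal single-axis sketch in Python, not the Aero Glass implementation; all signal names and constants are assumptions), a complementary filter blends the integrated gyro signal with intermittent optical yaw fixes:

```python
import numpy as np

def fuse_yaw(gyro_rate, optical_yaw, dt=0.01, alpha=0.95):
    """Toy complementary filter for a single axis (yaw).

    gyro_rate   -- gyro angular rates in rad/s (a bias makes them drift)
    optical_yaw -- absolute yaw from optical tracking, in rad (NaN = no fix)
    Gyro integration gives smooth short-term motion; whenever an optical
    measurement is available it pulls the estimate back, cancelling drift.
    """
    yaw, fused = 0.0, []
    for rate, opt in zip(gyro_rate, optical_yaw):
        yaw += rate * dt                           # integrate gyro (drifts)
        if not np.isnan(opt):
            yaw = alpha * yaw + (1 - alpha) * opt  # blend in absolute reference
        fused.append(yaw)
    return np.array(fused)

# Demo: true yaw is 0, gyro has a 0.02 rad/s bias, optical fix every 5th sample.
t = np.arange(0, 10, 0.01)
gyro = np.full_like(t, 0.02)                       # pure drift if uncorrected
optical = np.where(np.arange(t.size) % 5 == 0, 0.0, np.nan)
print(fuse_yaw(gyro, optical)[-1])                 # ~0.02 rad instead of ~0.2 rad
```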

About the Speaker:
Ákos Maróy is the founder of Aero Glass, an Augmented Reality solution for aircraft pilots. Ákos has a background in software engineering and has founded several IT companies and startups; he is still involved in the outsourced software development shop EU Edge. Ákos also has a media- and bio-art background, having exhibited at various venues, including the Venice Architecture Biennale, the Prix Ars Electronica Festival, the NTT InterCommunication Center in Tokyo and the Yamaguchi Center for Art & Media in Japan, as a member of the Tokyo-based art collective doubleNegatives Architecture. Ákos is the co-founder of atlatszo.hu, an investigative journalism NGO based in his native Hungary.

2014

Masahiko Inami, Keio University

Initial Step Towards Augmented Human

December 17th, 2014, 10:00am, SP3 room 063

Abstract:
What are the challenges in creating interfaces that allow a user to intuitively express his/her intentions? Today's HCI systems are limited, and exploit only visual and auditory sensations. However, in daily life, we exploit a variety of input and output modalities, and modalities that involve contact with our bodies can dramatically affect our ability to experience and express ourselves in physical and virtual worlds. Using modern biological understanding of sensation, emerging electronic devices, and agile computational methods, we now have an opportunity to design a new generation of 'implicit interaction' technologies. This talk will present several approaches that use multi/cross modal interfaces for enhancing human I/O. They include Transparent Cockpit, Stop-Motion Goggle, Galvanic Vestibular Stimulation and Super Human Sports.

About the Speaker:
Masahiko Inami is a professor in the School of Media Design at Keio University (KMD), Japan. His research interest is in human I/O enhancement technologies, including bioengineering, HCI and robotics. He received BE and MS degrees in bioengineering from the Tokyo Institute of Technology and a PhD in 1999 from the University of Tokyo. His scientific achievements include the Retro-reflective Projection Technology (RPT) known as "Optical Camouflage," which was chosen as one of the coolest inventions of 2003 by TIME magazine. His research has appeared at SIGGRAPH Emerging Technologies via 36 installations from 1997 through 2014. His installations have also appeared at the Ars Electronica Center.

Oliver Grau, Intel

The Evolution of Digital Media

November 18th, 2014, 4:00pm, SP3 room 063

Abstract:
The first part of the presentation gives a brief overview of how digitalization has led to a number of evolutionary steps in the media industry over the last decades. In particular, broadcast has recently seen an unrivaled development from standard-resolution video to HD and now 4K within just one decade, and 8K standards are already being trialed. The cinema industry successfully adopted stereoscopic 3D to regain the attention of audiences, but this trend is now in decline. The second part then raises the question of what the next innovations in media will be. Contenders are: high frame rates, HDR imaging and, most of all, full immersion and (more) interaction. The interest in the latter two aspects has been fuelled by recent advances in display and computing technologies, with a new generation of head-mounted devices and the promise to revitalize VR. However, these devices will only be successful as products if attractive content or applications are introduced with them. We therefore also look into what tools are available and/or missing to satisfy this demand.

About the Speaker:
Oliver Grau joined Intel as Associate Director of Operations of the Intel Visual Computing Institute in Germany in October 2012. He received a PhD from the University of Hanover, Germany in 1999. Prior to Intel he worked for BBC R&D in the UK on innovative tools for visual media production. Since 2013 he has been a visiting professor at the University of Surrey, UK. Oliver’s research interests lie at the intersection of computer vision and computer graphics techniques. His prior work included immersive virtual production systems, stereoscopic video production tools, free-viewpoint visualization of sport scenes and web delivery of free-viewpoint experiences. More recent research interests include visual computing for new user experiences and digital content creation tools. Oliver has a long track record of leading interdisciplinary work in more than 10 major collaborative projects between academic and industrial partners. He has published a number of scientific papers and holds several patents.

Bernhard Preim, Otto-von-Guericke-Universität Magdeburg

Visual Exploration of Blood Flow Data

October 28th, 2014, 4:00pm, SP3 room 063

Abstract:
Blood flow data is acquired either by measurements or by simulations. The acquisition of blood flow data is motivated by diagnostic and treatment-planning decisions in vascular medicine. Blood flow data represents a challenge for visual exploration, since for each voxel a 3D vector is available that characterizes the direction and speed of blood flow. Moreover, the blood flow changes during the cardiac cycle. In the talk, a number of visualization and interaction techniques are presented that enable a focussed and guided exploration of the flow. These techniques include a lens-based interaction (FlowLens), the use of widgets that are guided by anatomical features and serve to probe the flow, and clustering techniques that reduce the large number of streamlines to a few representatives. While these techniques primarily serve neuroradiologists, the characterization of vortices in measured cardiac flow is related to some cardiac diseases, such as heart valve defects.
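As a rough sketch of the streamline-clustering idea (not the method presented in the talk; parameters and helper names are made up for illustration), one can resample each streamline to a fixed number of points, cluster the resulting feature vectors with k-means, and keep one representative line per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans

def resample(line, m=16):
    """Resample one streamline (N x 3 array of points) to m points by arc length."""
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s_new = np.linspace(0.0, s[-1], m)
    return np.stack([np.interp(s_new, s, line[:, d]) for d in range(3)], axis=1)

def cluster_streamlines(lines, k=5, m=16):
    """Cluster streamlines on their resampled coordinates and return, for each
    cluster, the member line closest to the cluster centre."""
    feats = np.array([resample(l, m).ravel() for l in lines])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    reps = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(feats[idx] - km.cluster_centers_[c], axis=1)
        reps.append(lines[idx[np.argmin(d)]])
    return reps   # a few representative streamlines instead of thousands
```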

About the Speaker:
Bernhard Preim was born in 1969 in Magdeburg, Germany. He received the diploma in computer science in 1994 (minor in mathematics) and a Ph.D. in 1998 from the Otto-von-Guericke University of Magdeburg. In 1999 he moved to Bremen, where he joined the staff of MeVis. In close collaboration with radiologists and surgeons he directed the work on "computer-aided planning in liver surgery". In June 2002 he received the Habilitation degree (venia legendi) for computer science from the University of Bremen. Since March 2003 he has been a full professor of "Visualization" in the computer science department at the Otto-von-Guericke-University of Magdeburg, heading a research group focussed on medical visualization and applications in surgical education and surgery planning. He has authored several textbooks: "Entwicklung interaktiver Systeme" (1999), "Visualization in Medicine" (co-author: Dirk Bartz), "Interaktive Systeme" (co-author: R. Dachselt) (2010) and "Visual Computing in Medicine" (co-author: C. Botha).

Tamara Munzner, University of British Columbia

Dimensionality Reduction From Several Angles

May 27th, 2014, 3:00pm, SP3 room HS18

Abstract:
I will present several projects that attack the problem of dimensionality reduction (DR) in visualization from different methodological angles, in order to answer different kinds of questions. First, can we design better DR algorithms? Glimmer is a multilevel multidimensional scaling (MDS) algorithm that exploits the GPU. Glint is a new MDS framework that achieves high performance on costly distance functions. Second, can we build a DR system for real people? DimStiller is a toolkit for DR that provides local and global guidance to users who may not be experts in the mathematics of high-dimensional data analysis, in hopes of "DR for the rest of us". Third, how should we show people DR results? An empirical lab study provides guidance on visual encoding for system developers, showing that points are more effective than spatialized landscapes for visual search tasks with DR data. A data study, where a small number of people make judgements about a large number of datasets rather than vice versa as in a typical user study, produced a taxonomy of visual cluster separation factors. Fourth, when do people need to use DR? Sometimes it is not the right solution, as we found when grappling with the design of the QuestVis system for an environmental sustainability simulation. We provide guidance for researchers and practitioners engaged in this kind of problem-driven visualization work with the nested model of visualization design and evaluation and the nine-stage framework for design study methodology. Much of this work was informed by preliminary results from an ongoing project, a two-year qualitative study of high-dimensional data analysts in many domains, to discover how the use of DR "in the wild" may or may not match up with the assumptions that underlie previous algorithmic work.
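Glimmer, Glint and DimStiller are the speaker's own systems; purely as background for readers unfamiliar with MDS (a minimal sketch, not any of the algorithms above), classical MDS embeds points in the plane so that pairwise Euclidean distances approximate a given distance matrix:

```python
import numpy as np
from scipy.spatial.distance import cdist

def classical_mds(D, dim=2):
    """Textbook classical MDS for an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest ones
    scale = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * scale                  # n x dim embedding

# Demo: project 10-dimensional points to 2-D for plotting.
X = np.random.default_rng(1).normal(size=(100, 10))
Y = classical_mds(cdist(X, X), dim=2)
```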

About the Speaker:
Tamara Munzner is a professor at the University of British Columbia Department of Computer Science, and holds a PhD from Stanford. She has been active in visualization research since 1991 and has published over fifty papers and book chapters. She co-chaired EuroVis in 2010 and 2009, and InfoVis in 2004 and 2003. Her research interests include the development, evaluation, and characterization of information visualization systems and techniques from both problem-driven and technique-driven perspectives. She has worked on problem-driven visualization projects in a broad range of application domains including genomics, evolutionary biology, geometric topology, computational linguistics, large-scale system administration, web log analysis, and journalism. Her technique-driven interests include graph drawing and dimensionality reduction.

Markus Hadwiger, King Abdullah University of Science and Technology (KAUST)

Interactive Big Data Visualization Using Novel Multi-Resolution Representations

May 14th, 2014, 4:00pm, SP3 room 0352

Abstract:
The amount of measured and simulated data is increasing at an extreme pace, which significantly challenges our capabilities to explore and analyze these ‘big data’ and obtain actual insight. Interactive visualization is a crucial enabling technology to help tackle these challenges. In this talk, I will first give an overview of some of our research in this area. I will put our work into the context of the following four subareas: (1) novel visualization systems, (2) novel visual workflows, (3) novel data representations, and (4) novel data structures. The main part of the talk will then focus on our research on novel multi-resolution representations for large image and volume data. Our goal is to come up with image and volume representations (and data structures) that support more accurate and scalable rendering and processing than standard multi-scale representations, in an 'output-sensitive' or 'display-aware' manner.
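As a minimal, generic illustration of 'output-sensitive' or 'display-aware' access (a sketch with hypothetical helper names, not the representations developed by the group), the code below builds an averaging pyramid and returns the coarsest level that still covers a given display resolution:

```python
import numpy as np

def build_pyramid(img, min_size=64):
    """Build a simple multi-resolution pyramid by repeated 2x2 averaging."""
    levels = [img.astype(float)]
    while min(levels[-1].shape) > min_size:
        a = levels[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2   # crop to even size
        a = a[:h, :w]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                              a[0::2, 1::2] + a[1::2, 1::2]))
    return levels

def pick_level(levels, display_px):
    """Return the coarsest level with at least display_px pixels on its short
    side, so a zoomed-out view never has to touch the full-resolution data."""
    for lvl in reversed(levels):          # coarsest first
        if min(lvl.shape) >= display_px:
            return lvl
    return levels[0]                      # fall back to the finest level
```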

About the Speaker:
Markus Hadwiger is an Assistant Professor in computer science at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, where he leads the High-Performance Visualization group at the Visual Computing Center. Before joining KAUST, he was a Senior Researcher at the VRVis Research Center in Vienna. He received his PhD in computer science from the Vienna University of Technology in 2004. He is a co-author of the book Real-Time Volume Graphics. His research interests include petascale visual computing and scientific visualization, volume rendering, large-scale image processing, and GPU algorithms and architecture.

Michael Haller, Media Interaction Lab, University of Applied Sciences Upper Austria

Designing natural user interfaces: from large surfaces to flexible input sensors

March 26th, 2014, 4:00pm, SP3 room 057

Abstract:
We at the Media Interaction Lab are investigating the use of interactive surfaces for collaborative environments. With the increasing development of interactive walls, interactive tables, and multi-touch devices, both companies and academics are evaluating their potential for wider use. These newly emerging form factors require novel human–computer interaction techniques which will be discussed in this presentation. Our research goal is to design, develop, and evaluate natural user interfaces that will enable everyone, not just experts, to use large, interactive surfaces. In this talk, we will describe particular challenges and solutions for the design of interactive wall environments as well as the development of a new flexible input sensor.

About the Speaker:
Michael Haller is a professor at the Department of Interactive Media of the University of Applied Sciences Upper Austria (Hagenberg, Austria), head of the Media Interaction Lab (www.mi-lab.org), and responsible for interactive computer graphics, human-computer interaction, and augmented reality. He received Dipl.-Ing. (1997), Dr. techn. (2001), and Habilitation (2007) degrees from Johannes Kepler University Linz, Austria. His current focus is on innovative interaction techniques and interfaces for next-generation working environments. Currently, he leads a team of over 15 researchers and students. In 2004, he received the Erwin Schrödinger fellowship award presented by the Austrian Science Fund for his stay at the Human Interface Technology Laboratory (HITLabNZ), University of Canterbury (New Zealand), and the Integrated Media Systems Center (IMSC), University of Southern California (USA). His research output includes 17 journal papers and book chapters, and has been presented at high-quality academic conferences (ACM CHI, ACM UIST) and in several demonstrations, including at ACM SIGGRAPH, Disney's New Technology Forum, and the Ars Electronica Festival.

Ivan Viola, Vienna University of Technology

Human-Centric Visualization Design

January 15th, 2014, 4:00pm, room S2 046

Abstract:
During this talk I will discuss two approaches for designing human-centric visualization. One approach is based on the adoption of visual techniques from traditional visual crafts such as illustration, visual communication, or cinematography. The argument for adopting these techniques is that such visual techniques have been used to convey information to humans for centuries, and their effectiveness has been proven by time. They were developed with the human viewer in mind, and those techniques that became most successful effectively utilize visual metaphors to encode the underlying information. Successful adoption of these techniques can be achieved in close cooperation with experts on visual communication from traditional visual media.
The approach delineated above results in effective visualization design, but it does not tell us quantitatively how effective a particular visualization design really is. The second approach for effective human-centric visualization design is therefore based on the evaluation of visualization, where the outcome of the evaluation step tells us in numbers how much of the intended information has been conveyed to the human correctly and how error-prone a particular visualization design is. This information can be fed back into the visualization design to improve visualization effectiveness. The outcome after several iterations of visualization evaluation and redesign is a visualization technique that conveys the intended information close to optimally, and the quantitative characteristics originating from the evaluation serve as a basis for predicting how much of the intended information will be interpreted by humans correctly, and what error rate is to be expected during the interpretation step of the visualization pipeline.

About the Speaker:
Ivan Viola is an Assistant Professor at the Vienna University of Technology, Austria, and an Adjunct Professor at the University of Bergen, Norway. He received the Dipl.-Ing. degree in 2002 and Dr. techn. in 2005 from the Vienna University of Technology, Austria. His research focuses on illustrative visualization, that is, new visual abstraction methods that are easy for humans to interpret from the perceptual and cognitive point of view and are related to techniques developed in visual communication and the arts. Viola has coauthored several scientific works published in international journals and conferences such as IEEE TVCG, IEEE Vis, EG CGF, and EG EuroVis, and has acted as a reviewer and IPC member for conferences in the field of computer graphics and visualization. He is a member of Eurographics, the IEEE Computer Society, VGTC, and ACM.

2013

Anselm Grundhöfer, Disney Research Zürich

Augmenting Physical Avatars Using Projector-Based Illumination

December 10th, 2013, 4:00pm, room S2 120

Abstract:
Bringing virtual characters to life is one of the great challenges in computer graphics. While there have been tremendous advancements in capturing, animating, and rendering realistic human faces in the past decade, displaying them on traditional screens conveys only a limited sense of physical presence. Animatronic figures or robotic avatars can bridge this gap. However, in contrast to virtual face models, reproducing detailed facial motions on an animatronic head is highly challenging due to physical constraints. Although steady progress can be observed in creating highly sophisticated robotic heads that strive to recreate convincing facial motions, for example those in Disney World’s Hall of Presidents or the “Geminoids”, these achieve only limited expressiveness compared to a real human being.
In my presentation I will describe a processing pipeline for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness by superimposing high-frequency details. Besides a detailed description of the different processing steps of the system, I'll present an overview of the variety of advanced projection systems the Walt Disney Company uses in current theme park installations to enhance the visitor's sense of immersion.

About the Speaker:
Dr. Anselm Grundhöfer is a Senior Research Engineer at Disney Research Zürich, where he works in Dr. Paul Beardsley’s computer vision group as technical lead of the Procams Toolbox project. Before joining Disney, he graduated in Media Systems Sciences (2006) and obtained his Doctor of Engineering (2010) at Bauhaus University Weimar, where he worked under the supervision of Dr. Oliver Bimber.
Anselm’s work is mainly focused on developing projector-camera systems and helping deploy them, for example, in Disney theme park attractions. Most recently, he has been working on multi-projector optimization methods that tackle several still unsatisfactorily solved problems that arise when projecting onto non-trivial surfaces. Besides that, he also conducts research in other fields of computer vision, AR, video processing and display technologies.

Due to unforeseeable legal restrictions, we are not allowed to stream this talk.

Torsten Möller, University of Vienna

Visual tools for understanding multi-dimensional parameter spaces

November 6th, 2013, 4:00pm, Science Park 2, room 219

Abstract:
Simulations are an integral part of computational science. Simulations are characterized by a particular set of inputs and a multitude of outputs. Understanding the dependency of the outputs on the inputs is key to understanding the underlying phenomena that are being modeled. In this talk I will try to give a characterization of such general input/output systems and present several tools we have built to aid in different applications, ranging from fisheries science to medical imaging to fluid simulation. I will try to make the case that visual support greatly facilitates the understanding of these complex systems.
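As a toy sketch of such an input/output system (the simulate function below is a made-up stand-in, not one of the applications mentioned in the talk), one can sample the input space, run the simulation per sample, and tabulate a crude input-output sensitivity:

```python
import numpy as np

def simulate(params):
    """Stand-in simulation: maps a 3-dimensional input vector to 3 outputs."""
    a, b, c = params
    return np.array([a * np.sin(b), a + c ** 2, b * c])

rng = np.random.default_rng(0)
inputs = rng.uniform(-1, 1, size=(500, 3))             # sampled input space
outputs = np.array([simulate(p) for p in inputs])      # one output row per run

# Crude sensitivity: correlation of each input with each output column.
sens = np.corrcoef(np.hstack([inputs, outputs]).T)[:3, 3:]
print(np.round(sens, 2))    # which inputs drive which outputs?
```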

About the Speaker:
Torsten Möller heads the research group of Visualization and Data Analysis. He served as the appointed Vice Chair for Publications of the IEEE Visualization and Graphics Technical Committee (VGTC) between 2003 and 2012. He has served on a number of program committees and has been papers co-chair for IEEE Visualization, EuroVis, Graphics Interface, and the Workshop on Volume Graphics as well as the Visualization track of the 2007 International Symposium on Visual Computing. He has also co-organized the 2004 Workshop on Mathematical Foundations of Scientific Visualization, Computer Graphics, and Massive Data Exploration as well as the 2010 Workshop on Sampling and Reconstruction: Applications and Advances at the Banff International Research Station, Canada. He is a co-founding chair of the Symposium on Biological Data Visualization (BioVis). In 2010, he was the recipient of the NSERC DAS award. He received best paper awards from the IEEE Conference on Visualization (1997), the Symposium on Geometry Processing (2008), and EuroVis (2010), as well as two second-best paper awards from EuroVis (2009, 2012).

Shahram Izadi, Microsoft Research

The Seventh Wonder of the Computer Interface

October 16th, 2013, 4:00pm, SP3, 3rd floor, room 0356

Abstract:
The user interface (UI) is the fundamental way in which humans interact with the digital world. However, despite its importance, we have seen few innovations in this domain since the birth of the mouse nearly half a century ago. In this talk I will highlight six major UI disruptions and, by drawing on a number of trends (and with the help of the audience), we will uncover a seventh exciting new disruption that is emerging.
I will then demonstrate the work within my group at Microsoft Research Cambridge, which is motivated by making this seventh disruption a reality. Our group is a Human-Computer-Interaction (HCI) group with a 'twist': rather than solely focusing on the study and evaluation of interactive systems and techniques, we embrace a more technology-focused approach, which is as much about building the underlying technologies and systems as it is about studying their use.
This modern and technically inspired approach to HCI is still very much multi-disciplinary in nature, but across subfields in computer science and engineering. All this makes it an incredibly rewarding time to be a researcher in the HCI field: you get to play with the newest technologies, such as exotic cameras, displays and sensing hardware; readily embrace approaches outside of your discipline (e.g. within computer vision, machine learning, signal processing, or computer graphics); and just perhaps, if you are really lucky, invent technologies that will cause the eighth, ninth or tenth disruption.

About the Speaker:
Professor Shahram Izadi is a senior research scientist within Microsoft Research Cambridge. He leads the Interactive 3D Technologies (I3D) group, and holds a visiting professorship in the Virtual Environments and Computer Graphics (VECG) group at University College London (UCL). He describes his work as: mashing together exotic sensing and display hardware with signal processing, vision and graphics algorithms to create new interactive systems, which enable users to experience computing in magical ways. His group has had many notable projects and publications to date including: KinectFusion; KinEtre; Vermeer; HoloDesk; Mouse 2.0; SurfacePhysics; SecondLight; and ThinSight. Shahram has been at Microsoft Research since 2005 and prior to that spent time at Xerox PARC. He received a TR35 award in 2009 and was nominated as one of the Microsoft Next in 2012. He lives in Cambridge, UK, with his wife and daughter.

Henry Fuchs, UNC Chapel Hill

Dreams, Illusions, and Promise of Telepresence

June 18th, 2013, 8:00pm, Ars Electronica Center, Deep Space Live

Abstract:
Dreams of telepresence – almost teleportation – are fed by special effects in movies, on stage, and even in TV news shows. These illusions may fool some passive viewers, but fail to work for the actual distant participants. Some of these illusions have been exploited for centuries. Why, then, is telepresence so difficult to achieve? This talk will explain some of the tricks and illusions, and explain why they work on the stage but not “in the real world”. We will also explain why several promising new technologies, such as Microsoft Kinect and Google Glass, may radically improve the chances for telepresence to work not just “in the real world”, but to become available for the consumer market in the coming decade, enabling most of us to experience daily this age-old dream of being at a far-off place, interacting with distant people, as if we were there ourselves.

About the Speaker:
Henry Fuchs is the Federico Gil Distinguished Professor at the University of North Carolina at Chapel Hill (USA). He has been active in 3-dimensional computer graphics since the 1970s. His innovative rendering algorithms enabled some of the earliest 3D computer games and medical visualizations. His graphics-specialized hardware designs foreshadowed the specialized graphics chips now used in nearly every personal computer and mobile device. His research group at UNC also pioneered specialized displays for virtual reality and augmented reality, both head-worn devices and also room-sized multi-projector environments. These systems have been used in both training and medical applications. His most recent work has focused on tele-presence, attempting to make long-distance tele-conferencing become more and more like face-to-face interaction in the same room.
Fuchs has been honored with a variety of awards, among them the 1992 Achievement Award from ACM-Siggraph, the world’s largest computer graphics society; membership in the National Academy of Engineering (USA) and the American Academy of Arts and Sciences (USA), and the 2013 IEEE-VGTC Virtual Reality Career Award.

Martin Lenz, Institute for Computer Graphics and Vision, Graz University of Technology

Computational Illumination in Microscopic 3D Metrology

May 14th, 2013, 4:00pm, Science Park 1, MT 130

Abstract:
Within the last two decades, camera-based microscopic 3D metrology has become a well-established method for contactless object inspection tasks in industry. In particular, a computer vision approach called shape from focus has emerged for cases where the surface color and roughness of micro-structures are of interest in a 3D measurement process. In the literature, a variety of shape-from-focus methods for reflected-light microscopy already exist. However, all methods deliver inaccurate or sparse reconstructions, mainly for two reasons: first, untextured and homogeneous image regions do not respond correctly to the applied focus measures. Second, because the classic acquisition process uses a light source co-axial with the observing camera, over-saturated image regions frequently appear due to specular reflections, while averted scene geometry reflects only a fraction of the light, resulting in under-exposure.
In this talk, I will present new methods from the field of computational photography that automatically optimize the image acquisition process in microscopic 3D metrology and hence tackle the mentioned problems by increasing the image quality of the input data. Two different approaches are proposed: one uses a ring light with individually adjustable light segments; the second incorporates a projector as a light source co-axial with the camera. This makes it possible to (a) actively project a texture onto homogeneous scene regions, (b) locally adjust illumination intensities to avoid saturation effects in an image, and (c) radically reduce the number of image acquisitions required for a 3D reconstruction.
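For background, a generic, unoptimised shape-from-focus loop (not the illumination-optimised methods presented in the talk) can be sketched as follows: a local focus measure is evaluated for every image in the focal stack and the per-pixel argmax gives the depth index; untextured regions produce a flat focus response and hence a low confidence, which is exactly the first problem described above.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def shape_from_focus(stack):
    """Basic shape from focus. `stack` has shape (num_focus_positions, H, W);
    the focus position at which each pixel is locally sharpest is returned
    as its depth index."""
    measures = []
    for img in stack:
        lap = laplace(img.astype(float))                    # responds to fine detail
        measures.append(uniform_filter(lap ** 2, size=9))   # local focus energy
    measures = np.array(measures)
    depth = np.argmax(measures, axis=0)        # index of the sharpest slice
    confidence = measures.max(axis=0)          # low in untextured/homogeneous regions
    return depth, confidence
```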

About the Speaker:
Martin Lenz received his BBA degree in Software Development and Knowledge Management from the Graz University of Technology in 2007. He further received his MSc degree in Software Development and Business Economics from the Graz University of Technology in 2008. He is currently doing his PhD as a research assistant at the Institute for Computer Graphics and Vision (ICG). His research work is mainly focused on 3D reconstruction and illumination optimization in industrial 3D metrology.

Shoji Yamamoto, Tokyo Metropolitan College of Industrial Technology

Appearance reproduction of material property by using projection image

March 19th, 2013, 4:00pm, Science Park 2, 046

Abstract:
The reproduction of material appearance, especially specular reflection, conveys very important information for recognizing material properties and object features. For realistic appearance, the reproduction of specular highlights should convey both shininess and stereoscopic perception. However, it is difficult to display the real appearance with conventional display systems, because they are limited in radiance range and contrast. Therefore, our group at Chiba University has focused on projection-based displays, which offer high contrast and a lot of flexibility for use in digital mock-up and/or design support systems. My presentation consists of two parts. The first part addresses the challenge of accurate specular reproduction with color matching and head tracking. Color matching is useful for accurate reproduction of texture, and head tracking is effective for the reproduction of gloss. The second part addresses the challenge of realistic specular reproduction with binocular stereoscopy. We constructed a 3D projection display with eye-tracking control for realistic specular reflection. This system enables an authentic reproduction with increased radiance and a consistent 3D view.

About the Speaker:
Shoji Yamamoto received his B.E. and M.E. degrees from the Department of Opt-Electro-Mechanics Engineering at Shizuoka University in 1989 and 1991, respectively. From 1992 to 2007, he worked at Mitsubishi Heavy Industries, Information and Electronics Research Center in Japan. He received a Dr. Eng. degree in information science from Chiba University in 2007. He is currently an Associate Professor at Tokyo Metropolitan College of Industrial Technology, Japan. He is interested in vision science, image processing, and computer graphics. He is a member of the Optical Society of Japan, the Institute of Image Information and Television Engineers, and the Institute of Image Electronics Engineers of Japan.

2012

Eduard Gröller, Vienna University of Technology

Variability in Visualization

December 11th, 2012, 02:30pm, Science Park 3, 056

Abstract:
Data Visualization uses computer-supported, interactive, visual representations of (abstract) data to amplify cognition. In recent years data complexity and variability have increased considerably. This is due to the availability of uncertainty, error and tolerance information. The talk discusses visual steering to support decision making in the presence of alternative scenarios. Multiple, related simulation runs are explored through branching operations. To account for uncertain knowledge about the input parameters, visual reasoning employs entire parameter distributions. This can lead to an uncertainty-aware exploration of (continuous) parameter spaces. Coping with the heightened visual complexity and variability requires advanced strategies like comparative visualization, integrated views and the inclusion of fuzzy sets in the visualization process.

About the Speaker:
Eduard Gröller (http://www.cg.tuwien.ac.at/staff/EduardGroeller.html) is Professor at the Institute of Computer Graphics and Algorithms (ICGA), Vienna University of Technology. In 1993 he received his PhD from the same university. His research interests include computer graphics, flow visualization, volume visualization, medical visualization, information visualization, and visual analytics. He is heading the visualization group at ICGA. The group performs basic and applied research projects in the area of scientific visualization (http://www.cg.tuwien.ac.at/research/vis/). Dr. Gröller has given lecture series on scientific visualization at various other universities (Tübingen, Graz, Praha, Bahia Blanca, Magdeburg, Bergen). He is a scientific proponent and key researcher of the VRVis research center (http://www.vrvis.at/). The center performs applied research in visualization, rendering, and visual analysis. Dr. Gröller has been an adjunct professor of computer science at the University of Bergen, Norway, since 2005. He has co-authored more than 200 scientific publications and acted as a reviewer for numerous conferences and journals in the field. He has also served and serves on various program and paper committees; examples include Computers & Graphics, IEEE Transactions on Visualization and Computer Graphics, EuroVis, the IEEE Visualization conference, and the Eurographics conference. He has been paper co-chair of Volume Graphics 2005, IEEE Visualization 2005 and 2006, and Eurographics 2006. He has been co-chair of the VisSym 1999 symposium, the Eurographics 2011 conference, and the EuroVis 2012 conference. Dr. Gröller was chief editor of the journal Computer Graphics Forum (http://wileyonlinelibrary.com/journal/cgf) in the period 2008-2011. He became a fellow of the Eurographics association in 2009. Dr. Gröller is head of the working group on computer graphics of the Austrian Computer Society and a member of the IEEE Computer Society, ACM (Association for Computing Machinery), GI (Gesellschaft für Informatik), and OCG (Austrian Computer Society).

Joaquim Jorge, Technical University of Lisboa

Touching more than two dimensions: augmenting surfaces with sound

November 13th, 2012, 02:30pm, Science Park 3, 048

Abstract:
Work on interactive tabletops and surfaces has focused mostly on two-dimensional issues, such as multi-finger gestures and tangible interaction. Interesting as it is, however, this picture is missing several dimensions. I will describe work on 2D and 3D semi-immersive environments and present novel on-and-above-the-surface techniques based on bi-manual models that take advantage of the continuous interaction space for creating and editing 3D models in stereoscopic environments. I will also discuss means to allow for more expressive interactions, including novel uses of sound and combining hand and finger tracking in the space above the table with multitouch gestures on its surface continuously. These combinations can provide alternative design environments and allow novel interaction modalities.

About the Speaker:
Joaquim Jorge is a Professor at Instituto Superior Técnico (IST/UTL), the School of Engineering of the Technical University of Lisboa, Portugal, where he teaches User Interfaces and Computer Graphics. He received PhD and MSc degrees in Computer Science from Rensselaer Polytechnic Institute, Troy, NY, in 1995. He is Editor in Chief of the Computers & Graphics Journal and a member of the ERCIM Editorial Board. He is a senior member of ACM/SIGGRAPH and the IEEE Computer Society, as well as Portuguese national representative to IFIP's TC13 (Human Computer Interaction). He also served on the EG Education Board from its inception in 2001 until 2011. Joaquim Jorge's interests are in Calligraphic and Multimodal User Interfaces, Visual Languages and Pattern Recognition techniques applied to Human-Computer Interaction. He was elected Fellow of the Eurographics Association in 2010.

Oliver Staadt, University of Rostock

From Large High-Resolution Displays to Tele-Presence

October 24th, 2012, 05:00pm, Science Park 3, 0352

Abstract:
Tele-Presence technology enables users at geographically distributed sites to collaborate in real time in a shared, simulated environment as if they were in the same physical space. This new paradigm for human-computer interaction is the ultimate synthesis of networking and media technologies. Prior projects, such as the blue-c and the National Tele-Immersion Initiative, have developed prototype systems that addressed some of the challenges, but also highlighted unresolved issues and the need for further research. In this talk, I will present our vision of a novel compact tele-immersion system that will eventually support bidirectional communication and interaction between users located at more than two geographic sites.

About the Speaker:
Oliver Staadt is a full professor of Computer Science and Vice Dean of the Faculty of Computer Science and Electrical Engineering at the University of Rostock. He received a Master of Science in computer science and a PhD in computer science from TU Darmstadt and ETH Zurich, respectively. Prior to joining the University of Rostock, he was an Assistant Professor of computer science at the University of California, Davis, where he was also the director of the Virtual Reality Laboratory. His research interests include computer graphics, virtual reality, telepresence, visualization, and multiresolution analysis. He serves as a member of the international program committees of many graphics, VR, and visualization conferences. Dr. Staadt is associate editor of Computers & Graphics and was co-chair of the program committees of the EG/IEEE Symposium on Point-Based Graphics (PBG) 2008 and the Fourth International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT) 2008. He is a member of ACM, ACM SIGGRAPH, the IEEE Computer Society, the Eurographics Association, and the German Informatics Society.

Neil Dodgson, University of Cambridge

Multi-view Autostereoscopic Displays

June 27th, 2012, 05:00pm, MT 325

Abstract:
Autostereoscopic displays produce a different image to each eye without the need for special glasses. Multi-view displays do this without head-tracking. I summarise the various technologies that have been used to make these displays, then discuss the fundamental limitations of these displays, and possible ways in which they could be overcome.

About the Speaker:
Neil Dodgson is Professor of Graphics & Imaging at the University of Cambridge Computer Laboratory, of which he is Deputy Head. He has worked in 3D for twenty years. He helped develop the Cambridge autostereoscopic display, working on 3D display design, 3D cameras, 3D software, and 3D content. He was involved in the late 1990s in the attempt to commercialise this technology as an early 3DTV. He is co-chair of Stereoscopic Displays & Applications (www.stereoscopic.org), the earliest annual conference for the dissemination of research in the field, held annually in Silicon Valley. He works also on subdivision surfaces and aesthetic imaging. His PhD, in image processing, is from the University of Cambridge (1992). He is a Chartered Electrical Engineer, and is a Fellow of the Institution of Engineering and Technology.

Kari Pulli, NVIDIA Research, Santa Clara, CA USA

Mobile Visual Computing

May 23rd, 2012, 05:15pm, MT 325

Abstract:
Mobile phones made cameras ubiquitous and programmable. Cameras in phones and tablets are not just tools for recording the visual world of their users, but also tools for personal storytelling. The compact size of mobile devices makes it difficult to provide sensors and optics that would easily deliver high-quality images. Computational photography allows overcoming some of these limitations by combining several input images into outputs that are better or more interesting than any of the single input images. The camera can also be a powerful real-time sensor, allowing interaction between the mobile computing device and the real world. In this talk we will discuss the various components of the mobile imaging ecosystem, including APIs, sensors and optics, computing hardware, and the applications enabled by those tools.
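As one textbook example of combining several input images into a better output (a minimal sketch, not a specific NVIDIA or FCam pipeline), the following code fuses a small exposure bracket with per-pixel weights that favour well-exposed pixels:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Minimal exposure fusion for grayscale images in [0, 1]: pixels whose
    values are close to mid-grey (well exposed) get the highest weight."""
    stack = np.stack(images)                                # (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12   # normalise per pixel
    return (weights * stack).sum(axis=0)

# Usage: three registered exposures of the same scene.
# fused = fuse_exposures([under_exposed, mid_exposed, over_exposed])
```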

About the Speaker:
Kari joined NVIDIA Research in April 2011 to work on imaging and other mobile applications. He heads the Mobile Visual Computing Research team, which works on topics related to cameras, imaging, and vision on mobile devices. Previously he was at Nokia (1999-2004 in Oulu, Finland; 2004-06 as a visiting scientist at MIT CSAIL; 2006-11 at Nokia Research Center Palo Alto). He was one of the 3 Nokia Fellows in 2010 (the 6th in Nokia's history) and a Member of the CEO's Technology Council. Kari worked on standardizing mobile graphics APIs at Khronos (OpenGL ES, OpenVG) and the JCP (M3G) and co-authored a book on Mobile 3D Graphics. In Palo Alto he started a research group working on mobile augmented reality and computational photography (including the FCam architecture for computational cameras).
Kari has a B.Sc. from the University of Minnesota, M.Sc. and Lic. Tech. from the University of Oulu (Finland), and Ph.D. from the University of Washington (Seattle), all in Computer Science / Engineering; MBA from the University of Oulu; and he worked as a research associate at Stanford University as the technical lead of the Digital Michelangelo Project.

Charl Botha, Delft University of Technology, Netherlands

BrainCove: A tool for voxel-wise fMRI brain connectivity visualization

April 24th, 2012, 5:15pm, MT 325

Abstract:
Functional brain connectivity from fMRI studies has become an important tool in studying functional interactions in the human brain as a complex network. Most recently, research has started focusing on whole brain functional networks at the voxel-level, where fMRI time-signals at each voxel are correlated with every other voxel in the brain to determine their functional connectivity. For a typical 4mm isotropic voxel resolution, this results in connectivity networks with more than twenty thousand nodes and over 400 million links. These cannot be effectively visualized or interactively explored using node-link representations, and due to their size are challenging to show as correlation matrix bitmaps.
In this talk, I present recent work from my group on a number of methods for the visualization and interactive visual analysis of this new high-resolution brain network data, both in its matrix representation and in its anatomical context. We have implemented these methods in a GPU raycasting framework that enables real-time interaction, such as network probing and volume deformation, as well as real-time filtering. The techniques are integrated into a visual analysis application in which the different views are coupled, supporting linked interaction. Furthermore, we allow visual comparison of different brain networks with side-by-side and difference visualization. We have evaluated our approach via case studies with domain scientists at two different university medical centers.
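
To make the scale of the underlying computation concrete, here is a minimal sketch (not the BrainCove implementation) of voxel-wise functional connectivity: the fMRI time signal at every voxel is correlated with every other voxel, and strong correlations are kept as network links. The voxel count, time-series length, and threshold below are illustrative assumptions.

```python
import numpy as np

# Toy voxel-wise functional connectivity (not the BrainCove implementation).
# The voxel count, time-series length, and link threshold are illustrative.
n_voxels, n_timepoints = 2000, 200          # real data: >20,000 voxels
signals = np.random.randn(n_voxels, n_timepoints).astype(np.float32)

# Pearson correlation of every voxel pair -> an (n_voxels x n_voxels) matrix.
connectivity = np.corrcoef(signals)

# Keep only strong correlations as network links (upper triangle, no self-links).
links = np.argwhere(np.triu(connectivity > 0.5, k=1))
print(connectivity.shape, len(links))
```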

About the Speaker:
Charl Botha is a tenured assistant professor of Visualisation at the Delft University of Technology, a Visiting Scientist at LKEB section of the Radiology Department of the Leiden University Medical Centre and scientific advisor to the companies Clinical Graphics (surgical planning) and Treparel Information Systems (data mining). In Delft he heads the Medical Visualisation group, where his research focuses on surgical planning and guidance, and visual analysis for medical research. He was one of the initiators of the Eurographics Workshop series on Visual Computing for Biology and Medicine and acted as co-chair in 2008 and 2010. Dr Botha holds an M.Sc. in electronic engineering (1999, University of Stellenbosch) and a Ph.D. in computer science (2005, Delft University of Technology). Between M.Sc. and Ph.D. he worked in industry developing embedded image processing products.

Michael S. Brown, National University of Singapore, Republic of Singapore

Modeling the Digital Camera Pipeline: From RAW to sRGB and Back

March 22nd, 2012, 10:00am, MT 325

Abstract:
This talk presents a study of the in-camera imaging process through an extensive analysis of more than 10,000 images from over 30 cameras. The goal is to investigate whether output image values (i.e. sRGB) can be transformed to physically meaningful values, and if so, when and how this can be done. From our analysis, we show that the conventional radiometric model fits well for image pixels with low color saturation but begins to degrade as the color saturation level increases. This is due to a color mapping step in the in-camera processing, which includes gamut mapping and cannot be modeled with conventional methods. To address this issue we introduce a new imaging model for radiometric calibration, together with an effective calibration scheme, that allows us to compensate for the nonlinear color correction and convert nonlinear sRGB images to CCD RAW responses.
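
For readers unfamiliar with the "conventional radiometric model" referred to above, the sketch below linearizes sRGB values with the standard per-channel sRGB transfer curve. It is a textbook illustration of exactly the kind of model that, as the talk argues, ignores the camera's gamut/color mapping and therefore degrades for saturated colors; it is not the new model introduced in the talk.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB transfer function, channel by channel.

    This is the kind of 'conventional' per-channel radiometric model the talk
    refers to: it ignores the camera's gamut/color mapping, which is why it
    degrades for highly saturated colors.
    """
    srgb = np.asarray(srgb, dtype=np.float64)
    low = srgb <= 0.04045
    return np.where(low, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

print(srgb_to_linear([0.0, 0.2, 0.5, 1.0]))
```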

About the Speaker:
Michael S. Brown obtained his BS and PhD in Computer Science from the University of Kentucky in 1995 and 2001 respectively. He is currently an Associate Professor and Assistant Dean (External Relations) in the School of Computing at the National University of Singapore. Dr. Brown regularly serves on the program committees for the major Computer Vision conferences (ICCV, CVPR, ECCV and ACCV) and is currently an Associate Editor for IEEE TPAMI. He has served as an area chair for CVPR’09, ACCV’10, CVPR’11, ICCV’11, and ECCV’12 (upcoming). His research interests include Computer Vision, Image Processing and Computer Graphics.

Hans-Jörg Schulz, University of Rostock, Germany

New Perspectives on Tree Visualization

February 9th, 2012, 10:00am, MT 132

Abstract:
Many people in the information visualization and graph-drawing communities consider tree visualization a solved problem. Reasonably good tree layouts can be computed efficiently in terms of runtime and screen-space utilization. Yet in the search for heuristics that generate ever-tidier tree layouts, the comparatively simple problem of transforming parent-child relationships into graphical representations has been solved over and over, and it is still the subject of information visualization research. Researchers have explored and published almost every way of arranging a tree’s nodes in 2D and 3D; encoding them in different shapes or forms; and folding, unfolding, or otherwise interactively manipulating them. So, it is time to gain new perspectives on tree visualization.
This talk will not only survey existing tree visualizations, but also provide insight into the principles that govern their design and that one should be conscious of when choosing a tree visualization or developing a new one. This survey leads to insights about the design space of tree visualization as a whole, and the talk will discuss current research endeavors to get a grip on that design space.

About the Speaker:

Hans-Jörg Schulz received his diploma (2004) and his PhD (2010) from the University of Rostock, Germany. At present, he is a post-doctoral researcher in a project on "Visual Support for the Analysis of hierarchically structured, multiple heterogeneous Data Sources" at the University of Rostock. Furthermore, he is associated with the DFG project “Visual Analytics and Stochastic Spatial Simulation for Cell Biology” in Rostock and an associated researcher at Graz University of Technology, Austria. His main interests concern the visualization of graphs and the adoption of graph visualization and analysis principles for non-graph structured data. In his free time, he maintains the tree visualization survey site at treevis.net.

Alan Chalmers, International Digital Laboratory, WMG, University of Warwick, UK

Capturing Real World Lighting

January 17th, 2012, 5:15pm, MT 325

Abstract:
High Dynamic Range (HDR) video offers the possibility, for the first time, of capturing, storing, manipulating, and displaying dynamic real-world lighting. This gives a step change in viewing experience, for example the ability to clearly see the football when it is kicked from the shadow of the stadium into sunshine. An HDR video camera now exists which is capable of capturing 20 f-stops at full HD resolution (1920×1080) at 30 frames per second and commercial HDR displays are available. However, there are many significant challenges that still need to be overcome if HDR video is to be widely adopted and move from a niche research area into mainstream use. These include the need for high quality compression algorithms to cope with the enormous amount of data generated, the development of a common interface standard to facilitate widespread uptake, and even a definition of exactly what HDR is and what dynamic range might be considered “enough”.
This talk investigates these challenges and highlights some of the key endeavours being undertaken to ensure HDR is the future of imaging technology.
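
As a quick aside on the f-stop figure quoted above: dynamic range in stops is simply the base-2 logarithm of the luminance ratio, as the minimal sketch below shows. The luminance values used are illustrative, not measurements from the camera mentioned in the talk.

```python
import math

def dynamic_range_stops(l_max, l_min):
    """Dynamic range in f-stops: each stop doubles the luminance."""
    return math.log2(l_max / l_min)

# Illustrative numbers only: a luminance ratio of 1,000,000:1 is ~19.9 stops,
# roughly the 20 f-stops quoted for the HDR video camera above.
print(dynamic_range_stops(1e6, 1.0))
```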

About the Speaker:
Alan Chalmers is Professor of Visualisation in the International Digital Laboratory, WMG, at the University of Warwick. He has published over 200 papers in journals and international conferences on realistic computer graphics, HDR imaging, parallel processing, multisensory perception and virtual archaeology. He is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. In addition, he is Founder and Innovation Director of the spinout company goHDR, which is developing software to facilitate the widespread adoption of high dynamic range (HDR) imaging technology. His research is working towards achieving Real Virtuality: high-fidelity, multi-sensory virtual environments.

2011

Christian Nitschke, Cybermedia Center, Osaka University, Japan

The Eye and its Environment: Geometric Analysis of Corneal Reflections

December 14th, 2011, 1:30pm, MT 325

Abstract:
An image of the human eye captures not only its anatomical features, but also the environmental light distribution reflected in the cornea. Recently, it has been shown that the combination of camera and cornea can be modeled as a catadioptric imaging system to extract a person’s view and the incident illumination at the eye. We want to go further and argue that integrating multiple eye and scene images establishes a relation between the individual (eye) and its environment, which naturally overcomes inherent problems and enables novel solutions in eye analysis, scene understanding, and human-machine interfaces. However, this demands specialized algorithms to handle and exploit the complex geometric and photometric properties of the human eye, leading to various challenges in feature tracking and geometric modeling. This talk will provide a comprehensive overview of geometric eye modeling and corneal reflection analysis, with a focus on the speaker’s own developments, covering motivation, algorithmic pipeline, novel applications, and promising future directions.

About the Speaker:
Christian Nitschke is a Postdoctoral Research Fellow at the Cybermedia Center, Osaka University. He received a Diplom (M.S.) in Media Systems with Honors from the Bauhaus University Weimar in 2006 and a Ph.D. in Engineering Science from Osaka University in 2011. In between, he worked for VIOSO GmbH, creating software solutions for multi-projector displays. His research interests include computer vision, computer graphics, and human-machine interfaces, in particular 3D reconstruction, scene understanding, and projector-camera systems. He currently develops tools and applications for the analysis of information from eye images.

Dieter Schmalstieg, Technische Universitaet Graz

Smartphone Augmented Reality: Towards Mass-Market Augmented Reality Technology

December 7th, 2011, 1:00pm, MT 124

Abstract:
The Christian Doppler Laboratory for Handheld Augmented Reality established at Graz University of Technology focuses on research aimed at establishing Augmented Reality (AR) technology on smartphones. AR is a key technology in user interfaces for personalized, situated information delivery, navigation, on-demand instructions and games. The widespread availability and rapid evolution of smartphones enables software-only solutions for AR, where it was previously necessary to assemble custom hardware solutions. However, ergonomic and technical limitations of smartphones as a platform make this a challenging endeavour. In particular, it is necessary to design novel, efficient, time-critical algorithms for computer graphics and computer vision tasks, and to create new lightweight forms of interaction with the environment through small form-factor devices. This talk will present selected technical achievements in this field and highlight some examples of successful application prototypes.

About the Speaker:
Dieter Schmalstieg is full professor of Virtual Reality and Computer Graphics at Graz University of Technology (TUG), Austria, where he directs the "Studierstube" research project on augmented reality. His current research interests are augmented reality, virtual reality, real-time graphics, 3D user interfaces, and ubiquitous computing. He received Dipl.-Ing. (1993), Dr. techn. (1997) and Habilitation (2001) degrees from Vienna University of Technology. He is author and co-author of over 200 peer-reviewed scientific publications, associate editor of IEEE Transactions on Visualization and Computer Graphics, member of the editorial advisory board of Computers & Graphics, member of the steering committee of the IEEE International Symposium on Mixed and Augmented Reality, chair of the EUROGRAPHICS working group on Virtual Environments (1999-2010), advisor of the K-Plus Competence Center for Virtual Reality and Visualization in Vienna and member of the Austrian Academy of Sciences. In 2002, he received the START career award presented by the Austrian Science Fund. Since 2008, he has also been director of the Christian Doppler Laboratory for Handheld Augmented Reality.

Gordon Wetzstein, MIT Media Lab, Boston, USA

Computational Light Field Displays

November 16th, 2011, 3:30pm, MT 325

Abstract:
With the invention of integral imaging and parallax barriers at the beginning of the 20th century, glasses-free 3D display became feasible. Today, more than 100 years later, glasses-free 3D displays mostly remain a niche product. Hope is not lost! Nintendo recently introduced the 3DS, the first attempt to push glasses-free 3D displays into the mass market. While successful in generating interactive 3D content, the underlying display technology in the 3DS has not fundamentally changed in the last century. In this talk, we explore modern approaches to glasses-free 3D display using dual- and multi-layer light field displays. In contrast to conventional display technology, these systems aim at a joint design of the display optics and computational processing, a concept that has been exploited in computational photography for about a decade.

About the Speaker:
Gordon Wetzstein is a Postdoctoral Associate at the MIT Media Lab. His research interests include light field and high dynamic range displays, projector-camera systems, computational optics, computational photography, computer vision, computer graphics, and augmented reality. Gordon received a Diplom in Media System Science with Honors from the Bauhaus-University Weimar in 2006 and a Ph.D. in Computer Science from the University of British Columbia in 2011. His doctoral dissertation focuses on computational light modulation for image acquisition and display. He served on the program committee of IEEE ProCams 2007 and won a best paper award for “Hand-Held Schlieren Photography with Light Field Probes” at ICCP 2011, introducing light field probes as computational displays for computer vision and fluid mechanics applications.

Kiyoshi Kiyokawa, Cybermedia Center, Osaka University, Japan

Trends and Vision of Head Mounted Display Technologies for Augmented Reality

October 19th, 2011, 1:00pm, MT 325

Abstract:
Head mounted displays (HMDs) used for Augmented Reality (AR) have unique characteristics that handheld displays and spatial displays do not offer. HMDs are head-worn and used near the receptors of special senses such as visual, auditory, and olfactory sensation. This makes them convenient for modulating (augmenting) the sensations a user perceives, and thus suitable for AR applications. HMDs are designed for personal use, so the information they present is hidden from others and has no impact on others' sensations. HMDs are wearable, hands-free, and available in a wide range of mobile environments. In this talk, I will introduce research trends and future visions of HMDs for AR. Specifically, I will cover studies on head mounted visual displays, head mounted multi-modal displays, and head mounted sensing technologies for augmented reality, and discuss challenges and visions for the realization of a better AR experience. I will introduce both milestone research projects from around the world and my own past and current research projects. The latter include an occlusion-capable optical see-through display, a super wide view head mounted projective display with a semi-transparent retroreflective screen, and a super wide view parallax-free eye camera. My related work on context awareness will also be introduced in the context of sensing technologies.

About the Speaker:
Kiyoshi Kiyokawa has been an Associate Professor at the Cybermedia Center, Osaka University, since 2002. He received his M.S. and Ph.D. degrees in information systems from the Nara Institute of Science and Technology in 1996 and 1998, respectively. He was a Research Fellow of the Japan Society for the Promotion of Science in 1998. He worked for the Communications Research Laboratory (now the National Institute of Information and Communications Technology) from 1999 to 2002 and was a visiting researcher at the Human Interface Technology Laboratory of the University of Washington from 2001 to 2002. His research interests include virtual reality, augmented reality, 3D user interfaces, CSCW, and context awareness. He has been involved in organizing IEEE and ACM conferences, such as the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), the IEEE International Symposium on Wearable Computers (ISWC), the IEEE Symposium on 3D User Interfaces (3DUI), IEEE Virtual Reality and ACM Virtual Reality Software and Technology (VRST). He is a board member of the Virtual Reality Society of Japan.

Maurizio Seracini, CISA3 Director, University of California, San Diego, USA

A FUTURE FOR THE PAST: Engineering Sciences to Safeguard and Rediscover Cultural Heritage

June 9th, 2011, 8:00pm, Ars Electronica Center (AEC), Deep Space

Abstract:
Engineering Sciences are playing an increasing role in the conservation, preservation and rediscovery of Cultural Heritage. A survey of some of the most significant and groundbreaking contributions of science and technology to Art, Architecture and Archaeology will be presented. New discoveries on Leonardo’s masterpieces and the latest developments in the search for the “Battle of Anghiari” in the Hall of the 500 in Palazzo Vecchio, Florence, will be shown.
Finally, some unique technologies to improve our interaction with works of art and a novel scientific methodology to guide and manage conservation issues, both developed by the Center of Interdisciplinary Sciences for Art, Architecture and Archaeology, at UCSD, will be introduced.

About the Speaker:
CISA3 Director Maurizio Seracini is a pioneer in the use of multispectral imaging and other diagnostic as well as analytical technologies as applied to works of art and structures. He joined Calit2 at UC San Diego in 2006, more than thirty years after graduating from UCSD with a B.A. in bioengineering in 1973. Seracini returned to Italy for graduate school and received the Laurea degree in electronic engineering from the University of Padua, where he went on to study medicine. From 1975 to 1977, he participated in the “Leonardo Project” to locate the long-lost fresco “The Battle of Anghiari” (a project sponsored by the Armand Hammer Foundation, the Kress Foundation and the Smithsonian Institution). In 1977, Seracini established Editech, a Florence-based company that was the first to provide art and architectural diagnostic services, and in 1979 he co-founded the Interdisciplinary Center for Ultrasonic Diagnostics in Medicine, also in Florence.
He has studied more than 2,500 works of art and historic buildings, ranging from Leonardo Da Vinci's "Last Supper” and Botticelli's "Allegory of Spring", to Da Vinci's "Adoration of the Magi". In 1983, he investigated 19 paintings by Raphael on the 500th anniversary of the artist’s birth, and in 1991 he analyzed 19 paintings by Caravaggio in his role as scientific director of an exhibition in Florence and Rome.
Since 1980, Seracini has lectured at institutions in Italy and abroad, including Argentina and the United States. He has been an adjunct professor at the University of Florence in its School of Engineering, School of Architecture, and School of Natural, Physical and Mathematical Sciences. He has also lectured at the International University of Art in Florence; the University of Ca’Foscari (Venice); and the University of Calabria (Cosenza), where he remains an adjunct Research Professor in the School of Engineering and School of Arts and Humanities.
Seracini – whose work was exhibited as part of "The Mind of Leonardo Da Vinci" at Florence's Uffizi Gallery in 2006, and later at the U.S. Library of Congress – believes the Renaissance artist-scientist-inventor would be among the first to appreciate scientific analysis in the cause of understanding art. "We do justice to Leonardo," says Seracini. "We are using technology to understand his masterpieces. I think he would have been happy about that."

Roberto Scopigno, CNR-ISTI, Visual Computing Lab, Pisa, Italy

Sampled 3D models for Cultural Heritage: which uses beyond visualization?

May 25th, 2011, 5:15pm, room MT325

Abstract:
Digital technologies are now mature enough to produce high-quality digital replicas of Cultural Heritage (CH) artifacts. CNR-ISTI has long experience with technologies for acquiring high-quality digital 3D models (3D scanning) and for rendering those models at interactive speed.
The technology is mature enough to go beyond the plain visualization of those assets, devising new tools able to extend our insight capabilities and to revise the currently consolidated procedures for CH research and management.
I will present a few recent experiences where high-quality 3D models have been used in CH research, restoration and conservation, to give a broad review of different uses of those digital assets in the CH domain.

About the Speaker:
Roberto Scopigno is a Research Director at ISTI-CNR, an Institute of the Italian National Research Council (CNR). He leads the Visual Computing Lab of ISTI-CNR. He is currently engaged in research projects concerned with multiresolution data modeling and rendering, 3D scanning, surface reconstruction, scientific visualization and applications to Cultural Heritage. Most of these activities have been funded by EU grants. He published more than one hundred fifty papers in international refereed journals/conferences and gave invited lectures or courses at several international conferences.
He was Co-Chair of international conferences (Eurographics 1999, Rendering Symposium 2002, WSCG 2004, Geometry Processing Symp. 2004, Eurographics 2008, Afrigraph 2010) and served as Co-Editor in Chief of the Computer Graphics Forum journal (2001-2010). He is now a member of the Editorial Boards of the ACM Journal on Computing and Cultural Heritage and the Elsevier Journal of Cultural Heritage.
He has been an elected member of the Eurographics Executive Committee since 2001 and was appointed Chair of the Eurographics Association for 2009-2010.

Nassir Navab, Chair for Computer Aided Medical Procedures (CAMP), TU Munich, Germany

Advanced Imaging and Visualization for Computer Assisted Interventions: motivation, state of the art and future challenges

April 6th, 2011, 4:00 pm, room MT325

Abstract:
In this talk, I will focus on the design and development of advanced imaging and visualization solutions for computer assisted interventions. One major scientific challenge is the recovery and modeling of surgical workflow. The second is the analysis of large amounts of heterogeneous data and their intelligent real-time fusion. The third is the advanced visualization of such data during focused, high-intensity surgical procedures. In particular, I review the state of the art in Medical Augmented Reality and discuss challenges faced by the scientific community in the upcoming years. Throughout this presentation, I use clinical applications and our recent results, obtained in our real-world laboratories within several clinics in Munich, to demonstrate the issues and to provide exemplary paths towards possible solutions. Such examples include real-time Ultrasound/CT registration, Free-Hand SPECT reconstruction, the Camera-Augmented Mobile C-arm (CAMC) and HMD-based AR for intra-operative visualization and medical training.

About the Speaker:

Nassir Navab is a full professor and director of the Institute for Computer Aided Medical Procedures (CAMP: campar.in.tum.de) at the Technical University of Munich (TUM), with a secondary faculty appointment at its Medical School. In 2001, while acting as a distinguished member of technical staff at Siemens Corporate Research (SCR) in Princeton, he received the prestigious Siemens Inventor of the Year Award for the body of his work in interventional imaging. He received his PhD from INRIA and the University of Paris XI and enjoyed a two-year postdoctoral fellowship at the MIT Media Laboratory before joining SCR in 1994. In November 2006, he was elected as a member of the board of directors of the MICCAI Society. He has been serving on the Steering Committee of the IEEE Symposium on Mixed and Augmented Reality since 2001. He is the author of hundreds of peer-reviewed scientific papers and over 40 US and international patents. He served as Program Chair for MICCAI 2010 and as Area Chair for ECCV and ACCV 2010. He is on the editorial board of many international journals including IEEE TMI, MedIA and Medical Physics. Nassir is also the co-founder and Chief Scientific Officer of SurgicEye (http://www.surgiceye.com). He is proud of his PhD students, who have received many prestigious awards, including MICCAI Young Investigator Awards in 2007, 2009 and 2010, the best paper award at IEEE ISMAR 2005, the IBM best paper award at VOEC-ICCV 2009, the best paper award at AMDO 2009, the best demonstration award at BMVC 2009 and the IPMI Erbsmann Award in 2007.

Paul Debevec, USC ICT, Los Angeles, CA, USA

From Spider-Man to Avatar: Achieving Photoreal Digital Actors

March 16th, 2011, 5:00 pm, room MT325

Abstract:
Somewhere between "Final Fantasy" in 2001 and "The Curious Case of Benjamin Button" in 2008, digital actors crossed the "Uncanny Valley" from looking strangely synthetic to believably real. This talk describes some of the key technological advances that have enabled this achievement. For an in-depth example, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create "Digital Emily", a collaboration between our laboratory and Image Metrics. Actress Emily O'Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics' video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008. The talk also presents a 3D teleconferencing system that uses live facial scanning and an autostereoscopic display to transmit a person's face in 3D and make eye contact with remote collaborators, and a new head-mounted facial performance-capture system based on photometric stereo.
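
The head-mounted capture system mentioned at the end builds on photometric stereo. As background only (this is the textbook Lambertian formulation, not the system's actual pipeline), the sketch below recovers a surface normal and albedo for a single pixel from intensities observed under known lighting directions.

```python
import numpy as np

def photometric_stereo_normal(intensities, light_dirs):
    """Textbook Lambertian photometric stereo for a single pixel.

    intensities: (k,) observed brightness under k known point lights.
    light_dirs:  (k, 3) unit lighting directions.
    Solves light_dirs @ (albedo * normal) = intensities in least squares.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic check: a flat, upward-facing patch with albedo 0.8.
L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_normal = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ true_normal
print(photometric_stereo_normal(I, L))
```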

About the Speaker:
Paul Debevec is a research associate professor at the University of Southern California and the associate director of graphics research at USC's Institute for Creative Technologies. Debevec's Ph.D. thesis (UC Berkeley, 1996) presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were used to create the virtual backgrounds in The Matrix. Subsequently, Debevec pioneered high dynamic range image-based lighting techniques in his films Rendering with Natural Light (1998), Fiat Lux (1999), and The Parthenon (2004); he also led the design of HDR Shop, the first high dynamic range image editing program. At USC ICT, Debevec has led the development of a series of Light Stage devices for capturing and simulating how objects and people reflect light, used to create photoreal digital actors in films such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar. He received ACM SIGGRAPH's first Significant New Researcher Award in 2001 and co-authored the 2005 book High Dynamic Range Imaging. He chaired the SIGGRAPH 2007 Computer Animation Festival and is a member of the ACM SIGGRAPH Executive Committee, the Visual Effects Society, and the Academy of Motion Picture Arts and Sciences. He recently received a Scientific and Engineering Academy Award® for his work on the Light Stage systems.

Hong Hua, University of Arizona, College of Optical Science, USA

Near-eye displays for augmented reality applications

January 19th, 2011, 5:00 pm, room MT325

Abstract:
At the 3D Visualization and Imaging Systems Lab at the University of Arizona, we have been developing various 3-D display technologies, human-computer interface techniques for virtual and augmented reality environments, and innovative imaging systems. In this talk, I will present our work on developing and assessing various near-eye displays for augmented reality applications. I will start with a brief overview of recent advancements in near-eye displays, and then focus on the development and results of two emerging technologies aiming to address two critical issues of near-eye displays: a freeform optical technology that promises near-eye displays with a very compact form factor, closer to a pair of eyeglasses than to a traditional helmet style; and a multi-focal-plane technology promising more accurate rendering of depth cues than conventional stereoscopic displays, with the hope of uncovering the mystery of depth under- or over-estimation in virtual or augmented reality systems.

About the Speaker:
Hong Hua is an Associate Professor in the College of Optical Sciences (joint faculty with the Department of Computer Science and the Department of Electrical and Computer Engineering) and Director of the 3D Visualization and Imaging Systems Laboratory at the University of Arizona. Hong received her Ph.D. in Optical Engineering from the Beijing Institute of Technology, China, in 1999. Prior to joining the faculty of the University of Arizona in December 2003, she was a Postdoctoral Research Associate at the University of Central Florida from February to December 1999, a Beckman Fellow in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign from December 1999 to December 2002, and an Assistant Professor in the Department of Information and Computer Sciences at the University of Hawaii in 2003. Hong's current research interests include 3-D display technologies, advanced imaging systems, virtual and augmented environments, 3-D human computer interfaces, and optical engineering. Her research group has been funded by the National Science Foundation (NSF), the Department of Defense, and industrial sponsors.

Andrew Straw, Research Institute of Molecular Pathology (IMP), Vienna, Austria

Combining Virtual Reality and Genetics to Understand the Fruit Fly Brain

January 12th, 2011, 5:15pm, room MT124

Abstract:
A fundamental question in neuroscience is how simultaneous, distributed activity across the nervous system orchestrates animal-environment interaction to produce adaptive behavior. The visual system of the fly has been intensively studied for six decades from computational, behavioral, physiological, developmental, genetic, and optical perspectives and is perhaps the best understood system of any brain. Despite this wealth of knowledge, our understanding of the behavioral capabilities of flies, and consequently of how this well-studied part of the nervous system is involved in these tasks, remains rudimentary. To address this limitation, my colleagues and I have developed a real-time computer vision system for tracking freely flying flies. Combined with a virtual reality system built on computer games technology, we are using this automated system to ask how the fly perceives its world and structures its movement through it. In this talk, I will describe a recent study using this apparatus in which we discovered that fruit flies use nearby horizontal edges to regulate their altitude. We are now extending this system to investigate high-level visual behavior in flies. Furthermore, by utilizing modern molecular genetic techniques, we can inactivate small sets of neurons in the fly brain, and by correlating these manipulations with behavioral deficits measured using the VR system, we can establish a correspondence between particular brain circuits and their role in behavior.

About the Speaker:
Andrew Straw received his Ph.D. in Physiology in 2004 from the University of Adelaide, Australia, studying motion vision in the hoverfly using electrophysiological approaches. He then moved to the California Institute of Technology, where he was a postdoc and then senior research fellow, and studied visually guided flight behavior of the fruit fly. In November 2010 he began his own laboratory at the Research Institute of Molecular Pathology (IMP), Vienna, where he is combining real-time fly tracking and visual stimulus generation with modern molecular genetic tools to elucidate the function and structure of the fly brain.

2010

Hideo Saito, Department of Information and Computer Science, Keio University, Japan

3D Visual Media Synthesis and Analysis for Auto-Stereoscopic Display and Aerial 3D Display

December 15th, 2010, 1:00pm, room MT325

Abstract:
3D visual media has attracted much attention in recent years, and 3D cinema and 3D TV have become popular over the past few years. One of the most significant issues for the practical use of such 3D visual technologies is how to create content suitable for 3D display systems; without content to display, a 3D display cannot show anything.
In this talk, I will discuss 3D visual media synthesis and analysis using multiple-viewpoint cameras for auto-stereoscopic displays. The 3D geometric structure of the target scene is captured with multiple-viewpoint cameras and/or depth cameras, and arbitrary-viewpoint videos are then sent to an auto-stereoscopic display for 3D viewing. Such arbitrary-viewpoint videos can also be displayed for virtual/augmented reality applications.
I will also introduce an aerial 3D display system that can show a sequence of illuminating dots at arbitrary positions in mid-air. The illuminating dots are generated based on the laser-plasma phenomenon. I will talk about the history of the development of the aerial 3D display system, and then introduce the method for generating 3D content that can be displayed with it.

About the Speaker:
Hideo Saito has been a full professor in the Department of Information and Computer Science, Keio University, Japan, since 2006. He received his Ph.D. in Electrical Engineering from Keio University in 1992 and has been on the faculty of Keio University since then. From 1997 to 1999, he was a visiting researcher at the Robotics Institute, Carnegie Mellon University. He is currently the leader of the research project "Technology to Display 3D Contents into Free Space," supported by CREST, JST, Japan. He recently served as general Co-Chair of ICAT2008 (International Conference on Artificial Reality and Telexistence, December 2008), MVA2009 (IAPR International Conference on Machine Vision Applications, May 2009), and AH2010 (the first Augmented Human International Conference). He has also served as Program Co-Chair of the International Symposium on Mixed and Augmented Reality (ISMAR2008 and ISMAR2009).

Prof. Bruce H. Thomas, The University of South Australia Mawson Lakes, Australia

Spatial Augmented Reality: the Ultimate Design Tool

November 16th, 2010, 2:00pm, room MT325

Abstract:
At the Wearable Computer Lab at the University of South Australia, we are investigating the use of Spatial Augmented Reality (SAR) in the process of interactive rapid prototyping for industrial design. This talk presents our concepts and current results for employing SAR to aid designers in the development of prototypes, and in particular the ability to quickly add interactive behaviour. Current industrial design processes are widely employed to optimize product development in the automotive, home appliance, aerospace, and other industries. A strong benefit of this modern approach is the iterative aspect of the engineering, design, and construction phases. Characteristics such as aesthetics, ergonomics, and usability are addressed through numerous iterations to improve a product’s design. We are collaborating with designers from our University on the use of SAR in the design process. We are interested in developing tools that aid the iterative aspects of the physical design, whilst providing interactive functionality early in the detailed design process. This talk presents a number of current projects investigating this concept.

About the Speaker:
Professor Thomas is currently the Deputy Director of the Advanced Computing Research Centre, Director of the Human Embedded Systems Group, and Director of the Wearable Computer Laboratory at the University of South Australia. He is also a NICTA Fellow, CTO of A-Rage Pty Ltd, a Senior Member of the ACM, and a Visiting Scholar with the Human Interface Technology Laboratory, University of Washington. Prof. Thomas is the inventor of the first outdoor augmented reality game, ARQuake. His current research interests include wearable computers, user interfaces, augmented reality, virtual reality, CSCW, and tabletop display interfaces.

Prof. Thomas' academic qualifications include the following:

  1. B.A. in Physics, George Washington University;
  2. M.S. in Computer Science, University of Virginia with a thesis titled: Pipeline Pyramids in Dynamic Scenes; and
  3. Ph.D. in Computer Science, Flinders University with a thesis titled: Animating Direct Manipulation in Human Computer Interfaces

His experience includes working at the School of Computer and Information Science, University of South Australia since 1990. He has run his own computer consultancy company. He was a Computer Scientist at the National Institute of Standards and Technology (a major US government laboratory of the Department of Commerce), and a software engineer for the Computer Sciences Corporation and the General Electric Company.

Dr. Ronald Azuma, Nokia Research Center Hollywood, USA

The Westwood Experience: Connecting Story to Location

October 19th, 2010, 5:00pm, room MT325

Abstract:
This talk briefly describes Nokia Research Center Hollywood, a new research lab in Santa Monica and its areas of interest, and then focuses on the first major project completed by this lab: The Westwood Experience, a novel location-based experience that combined Mixed Reality effects and the power of specific, unique locations to tell a simple linear story in a new way. This work was our initial effort in exploring how to use context to make entertainment experiences on mobile devices more compelling. In particular, it combined research with the efforts of Hollywood professionals to build a more compelling experience. The talk concludes with a discussion of what we learned from evaluating this experience.

About the Speaker:
Ronald Azuma is a Research Leader at the Nokia Research Center Hollywood. He received a B.S. in Electrical Engineering and Computer Science from the University of California at Berkeley, and an M.S. and Ph.D. in Computer Science from the University of North Carolina at Chapel Hill. Prior to joining Nokia, he worked at HRL Laboratories in the areas of Outdoor Augmented Reality, air traffic control visualization, and virtual environments.
Ronald is best known for his work in Augmented Reality. For his dissertation, he built the world's first compelling demonstration of Augmented Reality, developing calibration techniques and hybrid tracking methods to improve both static and dynamic registration. His survey papers helped establish and guide the field. He has worked on visualizing occluded objects, automatically repositioning labels so they are readable, and making AR systems operate outdoors. He also is involved in running the premier forum of this field: the IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR), and he currently is the leader of its Steering Committee.

Vinh Ninh DAO, Interaction Technology Lab, University of Tokyo, Japan

A robust one-shot geometry acquisition technique for a mobile projector-camera system

August 20th, 2010, 10:30am, room MT327

Abstract:
This research describes a technique for reconstructing dynamic scene geometry using a handheld video projector-camera system and a single checkerboard image as a structured light pattern.
The proposed technique automatically recognizes a dense checkerboard pattern under dynamic conditions. The pattern-recognition process is adaptive to different light conditions and an object's color, thereby avoiding the need to set threshold values manually for different objects when the scanning device is moving. We also propose a technique to find corresponding positions for the checkerboard pattern, when displayed by a projector, without needing any position-encoding techniques. The correspondence matching process is based on epipolar geometry, enabling the checkerboard pattern to be matched even if parts of it are occluded. With these techniques we can construct a handheld projector-camera system that acquires the geometry of objects in real time, and we have verified their feasibility.
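
To illustrate the epipolar matching step described above (as a sketch only, not the authors' implementation), the function below takes a projected checkerboard corner, a precomputed projector-to-camera fundamental matrix, and corners detected in the camera image, all assumed to be given, and returns the detected corner closest to the corresponding epipolar line.

```python
import numpy as np

def match_on_epipolar_line(proj_corner, cam_corners, F, max_dist=2.0):
    """Match one projected checkerboard corner to a detected camera corner.

    proj_corner: (x, y) corner position in the projector image.
    cam_corners: (n, 2) array of corners detected in the camera image.
    F:           3x3 fundamental matrix mapping projector points to camera epipolar lines.
    Returns the camera corner closest to the epipolar line, or None if none is close.
    """
    p = np.array([proj_corner[0], proj_corner[1], 1.0])
    a, b, c = F @ p                               # epipolar line: a*x + b*y + c = 0
    pts = np.hstack([cam_corners, np.ones((len(cam_corners), 1))])
    dists = np.abs(pts @ np.array([a, b, c])) / np.hypot(a, b)
    best = int(np.argmin(dists))
    return cam_corners[best] if dists[best] < max_dist else None

# Hypothetical usage, with F_pc calibrated beforehand and corners already detected:
# match = match_on_epipolar_line((320.0, 240.0), detected_corners, F_pc)
```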

About the Speaker:
Vinh Ninh DAO is a Ph.D. student in the Interaction Technology Laboratory at the University of Tokyo under the supervision of Professor Masanori Sugimoto. He received his B.S. and M.S. degrees from the University of Tokyo in 2006 and 2008, respectively. His research concentrates on the technology and interaction techniques of handheld projectors.

Prof. Mark Billinghurst, HIT Lab NZ, Christchurch, New Zealand

Designing Augmented Reality Experiences

June 21st, 2010, 4:00pm, room MT226

Abstract:
Augmented Reality (AR) is a technology that allows the virtual overlay of images on the real world. Although the underlying technology is not new, it is only recently that compelling augmented reality experiences have become possible to develop. This presentation discusses how to design effective augmented reality applications. The fundamental technologies needed are briefly discussed, followed by research on authoring tools, interaction techniques and evaluation methods. A set of design guidelines is given for researchers and developers in the field, and finally topics for future research are outlined. The presentation will draw on research at the HIT Lab NZ and other leading Augmented Reality research groups and companies. Examples will be discussed from desktop, mobile and wearable AR systems.

About the Speaker:
Professor Mark Billinghurst is a researcher developing innovative computer interfaces that explore how virtual and real worlds can be merged. Director of the HIT Lab New Zealand (HIT Lab NZ) at the University of Canterbury in New Zealand, he has produced over 200 conference and journal technical publications and presented demonstrations and courses at a wide variety of academic and industry conferences. He has a PhD from the University of Washington and conducts research in Augmented and Virtual Reality, Human Computer Interaction and mobile interfaces. He has previously worked at ATR Research Labs, British Telecom and the MIT Media Laboratory and has been actively involved in developing and commercializing AR technology, such as the popular ARToolKit tracking library.

Prof. Aditi Majumder, University of California Irvine, USA

Ubiquitous Displays: A Distributed Network of Active Displays

May 5th, 2010, 5:00pm, room MT325

Abstract:
This talk presents our work-in-progress on developing a new display paradigm where displays are not mere carriers of information, but active members of the workspace interacting with data, users, the environment and other displays. The goal is to integrate such active displays seamlessly with the environment, making them ubiquitous to multiple users and data. Such ubiquitous displays can be a critical component of the future collaborative workspace.
We have developed an active display unit: a projector augmented with sensors and an embedded computation and communication unit. We are exploring, for the first time, the challenges and capabilities resulting from instrumenting a workspace with a distributed network of such active displays to achieve ubiquitous displays. Our main objective is to develop novel distributed methodologies (a) to cover existing surfaces (e.g. walls, floors) - which can deviate considerably from planar, white and Lambertian - with multiple active displays; (b) to provide scalability and reconfigurability (in terms of scale, resolution and form factor) of displays; and (c) to provide a framework for shared viewing and interaction modalities for multiple users.

About the Speaker:
Aditi Majumder is an assistant professor in the Department of Computer Science at the University of California, Irvine. She received her BE in Computer Science and Engineering from Jadavpur University, Calcutta, India in 1996 and her PhD from the Department of Computer Science, University of North Carolina at Chapel Hill in 2003.
Her research area is computer graphics, vision, and image processing, with a primary focus on multi-projector displays. Her research aims to make multi-projector displays truly commodity products and easily accessible to the common man. Her significant research contributions include photometric and color registration across multi-projector displays, enabling the use of imperfect projectors in tiled displays and, more recently, a distributed framework for tiled displays via a distributed network of projector-camera pairs. She is the co-author of the book "Practical Multi-Projector Display Design". She was the program and general co-chair of the Projector-Camera Workshop (PROCAMS) 2005 and the program chair of PROCAMS 2009. She was also the conference co-chair for ACM Virtual Reality Software and Technology 2007. She has played a key role in developing the first curved-screen multi-projector display, currently being marketed by NEC/Alienware, and is an advisor at Disney Imagineering for advances in their projection-based theme park rides. She is the recipient of the NSF CAREER award in 2009 for Ubiquitous Displays Via a Distributed Framework.

Dr. Mark Mine, Disney Imagineering, USA

We Make the Magic: Walt Disney Imagineering

April 21st, 2010, 5:00pm, room MT325

Abstract:
Ever since Walt Disney first opened the doors to Disneyland in 1955, Imagineers have been using (and misusing) state-of-the-art technology to immerse their guests in magical worlds. Combined with richly detailed environments, imaginative characters, and compelling stories, these tools have enabled visitors to Disney theme parks to dance with ghosts, sail with pirates, and fly to the furthest reaches of both inner and outer space.
The theme park world of today, however, is vastly different from the theme park world of 1955; audiences are more diverse, guests more sophisticated, and children growing up faster than ever before. The competition is likewise greater than ever before; consumers have an increasingly broad array of rich and compelling entertainment options to choose from, many conveniently located in the local theater, shopping mall, and more than ever in the home. To succeed in this ever-changing marketplace, Imagineers must continue to innovate and push the boundaries of engineering, design, and magic. Our worlds must be richer, our characters more interactive, and our storytelling more fluid, customizable, and reactive.
In this talk Mark will discuss how advances in computing power, display technology, and sensing devices, along with ever-accelerating trends of miniaturization and cost reduction, are enabling exciting new ways for Imagineers to create magical Disney worlds. He will present some of the new techniques and technology being used to light, animate, and augment Disney theme parks, bringing the world of Disney animated features to life in ways never before possible. He will also demonstrate how these very same advances in technology are changing the theme park design process itself. He will give examples of the work of the Creative Technology Group, a new group at Imagineering dedicated to developing innovative computer-based tools and techniques to be used in the design of Disney theme parks, resorts, and attractions worldwide. Mark will describe how advanced virtual reality technology such as head-mounted displays, high-precision motion capture systems, and ultra-high definition display walls are being used to help designers at WDI experience and evaluate attraction designs months and years before the actual construction process begins in the real world.

About the Speaker:
Mark Mine is Director of Technical Concept Design at Walt Disney Imagineering and the head of WDI's new Creative Technology Group. The Creative Technology Group's expertise in virtual reality and computer graphics has been applied to many projects including the Finding Nemo Submarine Voyage, Toy Story Mania, and the new Radiator Springs Racers at Disney's California Adventure. Prior to working at Disney, Mark was a system engineer at NASA's Jet Propulsion Laboratory in Pasadena California working on the Voyager Missions to the outer planets. Mark has a Bachelor's degree in Aerospace Engineering from the University of Michigan, a Master's Degree in Computer Science and Electrical Engineering from the University of Southern California, and Master's and Ph.D. degrees from the University of North Carolina, Chapel Hill.

Prof. Xubo Yang, Shanghai Jiao Tong University, China

Visual Effects for Computer Games

March 24th, 2010, 12:45pm, room MT325

Abstract:
As the power of graphics processors continuously increases, more and more amazing visual effects are emerging in off-the-shelf computer games. This talk will give a brief introduction to cutting-edge visual effect techniques invented for or adopted in the latest computer games. The techniques will mainly involve global illumination, environmental effects, camera effects and other advanced features such as physics. Future trends and improvements of visual effects for computer games will also be briefly discussed.

About the Speaker:
Xubo Yang is an associate professor at Shanghai Jiao Tong University. He is now the director of the Digital Art Lab in the School of Software. He received a PhD in computer science from the State Key Lab of CAD&CG at Zhejiang University in 1998. He then joined the Virtual Environment group at the Fraunhofer Institute for Media Communication (IMK) in Germany until 2001 and the Mixed Reality Lab of the National University of Singapore until 2003. He was a visiting researcher in the Ubisoft Shanghai Studio during 2008-2009. His current research interests focus on next-generation media art computing technologies in the context of computer graphics, computer vision, augmented reality and novel media interaction. He has published more than 30 peer-reviewed papers in the field of virtual and augmented reality, computer graphics and entertainment computing. He is a member of IEEE and ACM.

Prof. Ramesh Raskar, MIT, Boston, USA

Computational Photography

February 10th, 2010, 4:15pm, room MT325

Abstract:
Though revolutionary in many ways, digital photography is essentially electronically implemented film photography. By contrast, computational photography exploits plentiful low-cost computing and memory, new kinds of digitally enabled sensors, optics, probes, smart lighting, and communication to capture information far beyond just a simple set of pixels. It promises a richer, even a multilayered, visual experience that may include depth, fused photo-video representations, or multispectral imagery. Professor Raskar will discuss and demonstrate advances he is working on in the areas of generalized optics, sensors, illumination methods, processing, and display, and describe how computational photography will enable us to create images that break from traditional constraints to retain more fully our fondest and most important memories, to keep personalized records of our lives, and to extend both the archival and the artistic possibilities of photography.

About the Speaker:

Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. He received his PhD from the University of North Carolina at Chapel Hill, where he introduced “Shader Lamps,” a novel method for seamlessly merging synthetic elements into the real world using projector-camera based spatial augmented reality. In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. He holds 37 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on computational photography. raskar.info

Prof. Michael Goesele, TU Darmstadt, Germany

Scene Reconstruction from Community Photo Collections

January 26th, 2010, 4:15pm, room MT325

Abstract:
With the recent rise in popularity of Internet photo sharing sites, community photo collections (collections of images contributed by users online) have emerged as a powerful new type of image dataset for computer vision and computer graphics research. Compared to standard datasets captured under laboratory conditions, community photo collections exhibit extreme variability, since the images were captured by a large number of photographers with various cameras and at different times. In this talk, I will first describe our work on geometry reconstruction from community photo collections. We employ a robust multi-view stereo approach which yields high-quality results even under such difficult conditions. I will then discuss how we can use images captured from different viewpoints and under varying lighting conditions to reconstruct a more complete scene model including geometry, reflectance, and distant illumination. I will close with an outlook on open research problems in the area.

About the Speaker:
Michael Goesele studied computer science at the University of Ulm and the University of North Carolina at Chapel Hill. He then moved to the Max-Planck-Institut für Informatik (MPI) and received his doctorate in 2004. In 2005, he received a Feodor Lynen Fellowship from the Alexander von Humboldt-Foundation to work as a postdoc at the University of Washington, Seattle, USA. Since 2007, he has been an assistant professor of computer graphics at TU Darmstadt. Since 2009, he has additionally been leading an Emmy Noether research group funded by the German Research Foundation (DFG). His research interests include capturing and modeling techniques for graphics and vision as well as high-performance computing on modern massively parallel hardware. Michael Goesele has received several awards, including the Eurographics 2008 Young Researcher Award.

2009

Dr. Ivo Ihrke, University of British Columbia, Vancouver, Canada

Digitizing the Unscannable: 3D Acquisition of Intangible Phenomena and Objects with Challenging Surface Characteristics

December 1st, 2009, 4:15pm, room MT325

Abstract:
Standard range scanning techniques work well for approximately Lambertian reflectors, but large classes of objects can currently not be scanned robustly. Specular and refractive objects pose challenges to range scanning because the surface cannot be observed directly. Translucent objects exhibit significant effects of global light transport, while volumetric phenomena like fire, smoke and gas flows do not have a proper surface. In the talk I will give an overview of my work in this area. I will discuss computed tomography as a useful tool in acquiring volumetric phenomena and discuss ways of dealing with and exploiting refraction for real-world object digitization.
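
As background on the tomography mentioned above (a minimal textbook sketch, not the speaker's specific solver), the code below runs the algebraic reconstruction technique (Kaczmarz iteration) on a tiny synthetic system in which each measurement is a ray integral through the volume.

```python
import numpy as np

def kaczmarz_art(A, b, iters=50, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz) for a tomography system A x = b.

    A: (m, n) projection matrix; each row integrates the volume along one ray.
    b: (m,) measured line integrals (e.g. smoke density summed along camera rays).
    Repeatedly projects the current estimate onto each ray's constraint.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny synthetic check: recover a 4-voxel "volume" from 6 ray measurements.
rng = np.random.default_rng(1)
A = rng.random((6, 4))
x_true = np.array([0.0, 1.0, 0.5, 2.0])
print(kaczmarz_art(A, A @ x_true), x_true)
```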

About the Speaker:
Ivo Ihrke is a postdoctoral research fellow at the University of British Columbia, supported by the Alexander von Humboldt-Foundation. Beginning in 2010, he will lead a research group on "Generalized Image Acquisition and Analysis" within the Cluster of Excellence "Multi-Modal Computing and Interaction" at Saarland University, Germany. He received an MS degree in Scientific Computing from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 2002 and a Ph.D. in computer science from Saarland University in 2007. His dissertation focused on reconstruction methods for transparent, dynamic phenomena such as fire, smoke, and fluid flows. His main research interests are the modeling of forward and inverse light transport processes and the development of efficient numerical solvers for these problems.

Prof. Daisuke Iwai, Osaka University, Japan

Interactive Projection-Based Mixed Reality Systems

November 17th, 2009, 10:15am, room T911

Abstract:
Projection-based mixed reality (PBMR), which seamlessly merges the physical world and virtual content, has great potential to realize various interactive systems in which users can intuitively interact with existing objects that are augmented by projected imagery. Prominent advantages of PBMR are that users do not have to wear any cumbersome head-mounted displays and that augmented reality experiences can be shared simultaneously among co-located users. Our group at Osaka University has been exploring a broad range of PBMR interactive applications, including real-object painting with projected texture (1999-), support of tele-existence collaboration (1999-), a wearable projector to support private activities (2001-), support of artwork creation through heat phenomena (2002-), radiometric compensation of projection light (2002-), virtual shape deformation to support product design (2003-), an interactive palmtop display (2005-), document search on a physical desktop (2006-) and in a bookshelf (2008-), and cooperative distributed projection (2008-). I will give an overview of the above projects to introduce our research activities as well as PBMR's promising possibilities for interactive applications.

About the Speaker:
Daisuke Iwai is an assistant professor at the Graduate School of Engineering Science, Osaka University, Japan. He received the BS degree from Osaka University in 2003 and the MS and Ph.D. degrees in engineering science from Osaka University in 2005 and 2007, respectively. From 2007 to 2008, he was a visiting scientist at the Media System Science Department of the Bauhaus-University Weimar. He was a research fellow of the Japan Society for the Promotion of Science from 2006 to 2008. His research interests include projection-based mixed reality and human-computer interaction. He is a member of ACM and IEEE.