

Research

The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. They are enabled by emerging fields and technologies, such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

2010

Mobile Museum Guidance using Relational Multi-Image Classification


We present a multi-image classification technique for mobile phones that is supported by relational reasoning. Users capture a sequence of images with a simple near-far camera movement. After classifying distinct keyframes using a nearest-neighbor approach, the corresponding database images are only considered for majority voting if they exhibit near-far inter-image relations similar to those of the captured keyframes. In the context of PhoneGuide, our adaptive mobile museum guidance system, a user study revealed that our multi-image classification technique leads to significantly higher classification rates than single-image classification. Furthermore, when near-far image relations are used, fewer keyframes are sufficient for classification. This increases the overall classification speed of our approach by up to 35%.

Bruns, E. and Bimber, O., Mobile Museum Guidance through Relational Multi-Image Classification, International Conference on Multimedia and Ubiquitous Engineering (MUE’10), 2010, Best Paper Award
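As a rough sketch of the idea described above (not the authors' implementation; the feature vectors, scale values, and tolerance parameter are hypothetical), nearest-neighbor matching combined with a relational near-far filter and majority voting could look like this:

```python
from math import dist

def classify_sequence(keyframes, database, scale_tol):
    """Majority-vote classification of a near-far keyframe sequence.

    keyframes: list of (feature_vector, scale) captured near to far
    database:  list of (feature_vector, scale, object_id)
    A database match only votes if its near-far scale relation is
    similar to that of the captured keyframe (within scale_tol).
    """
    votes = {}
    for features, scale in keyframes:
        # 1-nearest-neighbor lookup over the database features
        feats_db, scale_db, obj_id = min(
            database, key=lambda entry: dist(features, entry[0]))
        # relational filter: discard matches with dissimilar relations
        if abs(scale_db - scale) <= scale_tol:
            votes[obj_id] = votes.get(obj_id, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

The relational filter is what rejects spurious single-image matches: a keyframe whose best database match violates the expected near-far relation simply does not vote.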

Localization and Classification through Adaptive Pathway Analysis


We present an unobtrusive technique for supporting and improving object recognition approaches on mobile phones. To accomplish this, we determine the present and future locations of museum visitors by evaluating user-generated spatio-temporal pathway data. In the context of our adaptive mobile museum guidance system, PhoneGuide, we show that this improves classification performance significantly and can achieve recognition rates comparable to those of traditional location-based image classification approaches. Over a period of four months, we collected the pathway data of 132 regular museum visitors at the Natural History Museum of Erfurt, Germany.

Bruns, E. and Bimber, O., Localization and Classification through Adaptive Pathway Analysis, IEEE Pervasive Computing, 2010 (submitted: July 2009, accepted: August 2010), April-June issue, 2012
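The underlying idea of predicting a visitor's next location from collected pathway data can be illustrated with a simple first-order transition model (a simplification with hypothetical names; the paper's actual model may differ):

```python
from collections import defaultdict

def build_transitions(pathways):
    """Count room-to-room transitions over all recorded visitor pathways."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in pathways:
        for here, there in zip(path, path[1:]):
            counts[here][there] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed next location given the current one."""
    followers = counts.get(current)
    if not followers:
        return None
    return max(followers, key=followers.get)
```

A guidance system can then restrict (or re-weight) the image classifier's candidate set to the exhibits in the predicted location.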

Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy


We show that optical inverse tone-mapping (OITM) in light microscopy can improve the visibility of specimens, both when observed directly through the oculars and when imaged with a camera. In contrast to previous microscopy techniques, we pre-modulate the illumination based on the local modulation properties of the specimen itself. We explain how the modulation of uniform white light by a specimen can be estimated in real-time, even though the specimen is continuously but not uniformly illuminated. This information is processed and back-projected constantly, allowing the illumination to be adjusted on the fly if the specimen is moved or the focus or magnification of the microscope is changed. The contrast of the specimen's optical image can be enhanced, and high-intensity highlights can be suppressed. A formal pilot study with users indicates that this optimizes the visibility of spatial structures when observed through the oculars. We also demonstrate that the signal-to-noise (S/N) ratio in digital images of the specimen is higher if captured under an optimized rather than a uniform illumination. In contrast to advanced scanning techniques that maximize the S/N ratio using multiple measurements, our approach is fast because it requires only two images. This can be beneficial for image analysis in digital microscopy applications with real-time capturing demands.

Bimber, O., Klöck, D., Amano, T., Grundhöfer, A., and Kurz, D., Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy, IEEE Transactions on Visualization and Computer Graphics, 2010 (submitted: August 2009, accepted: July 2010)
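A minimal per-pixel sketch of the closed-loop idea, under the simplifying assumption of a linear, pixel-registered projector-camera pair (the real system runs this estimation continuously and copes with specimen movement, focus, and magnification changes, which this omits):

```python
def estimate_modulation(captured, projected, eps=1e-6):
    """Per-pixel modulation of the illumination by the specimen,
    estimated from one captured image of a known projected pattern."""
    return [c / max(p, eps) for c, p in zip(captured, projected)]

def compensation_illumination(modulation, target, max_out=1.0, eps=1e-6):
    """Illumination that drives the captured intensity toward `target`:
    dark structures receive more light, bright highlights less,
    clamped to the projector's dynamic range."""
    return [min(max_out, target / max(m, eps)) for m in modulation]
```

Projecting the compensation image and re-capturing yields the second of the two images the method needs, with contrast boosted in dark regions and highlights suppressed.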

Coded Aperture Projection


Coding a projector's aperture plane with adaptive patterns, combined with inverse filtering, allows the depth of field of projected imagery to be increased. We present two prototypes and corresponding algorithms for static and programmable apertures. We also explain how these patterns can be computed at interactive rates by taking into account the image content and the limitations of the human visual system. Applications such as projector defocus compensation, high-quality projector de-pixelation, and increased temporal contrast of projected video sequences can be supported. Coded apertures are a step toward next-generation auto-iris projector lenses.

Grosse, M., Wetzstein, G., Grundhöfer, A., and Bimber, O., Coded Aperture Projection, ACM Transactions on Graphics, 2010 (submitted: June 2009, accepted: March 2010)
ACM Siggraph 2010
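The inverse-filtering step can be sketched in one dimension with a Wiener-style pre-filter (a simplification using a naive DFT; the actual method works on 2-D images and adapts the aperture pattern to the image content and the human visual system):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part."""
    n = len(spectrum)
    return [(sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def wiener_prefilter(image, psf, k=1e-3):
    """Pre-sharpen `image` so that, after being blurred by the
    aperture's point-spread function, the projected result
    approximates the original (1-D Wiener deconvolution sketch)."""
    spec_i, spec_h = dft(image), dft(psf)
    wiener = [h.conjugate() / (abs(h) ** 2 + k) for h in spec_h]
    return idft([i * w for i, w in zip(spec_i, wiener)])
```

The regularization constant `k` keeps the filter stable at frequencies the defocused aperture attenuates heavily, which is exactly where a coded (rather than circular) aperture helps by preserving those frequencies.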


Fast and Robust CAMShift Tracking


CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. As it relies solely on back-projected probabilities, it can fail when the object's appearance changes (e.g., due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected, or when they cross trajectories.
We propose low-cost extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearances and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back-projection, computation of image moments, and histogram intersection. All of these techniques make full use of a GPU's high parallelization capabilities.

Exner, D., Bruns, E., Kurz, D., Grundhoefer, A., and Bimber, O., Fast and Robust CAMShift Tracking, In proceedings of IEEE International Workshop on Computer Vision for Computer Games (IEEE CVCG), 2010
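The core CAMShift loop that these extensions build on, histogram back-projection followed by a moment-based window update, can be sketched as follows (a toy CPU version with single-value "hue" bins; the paper's GPU pipeline, multi-histogram accumulation, and identity monitoring are omitted):

```python
def back_project(image, hist):
    """Replace each pixel's bin value with its histogram probability."""
    return [[hist.get(px, 0.0) for px in row] for row in image]

def mean_shift_step(prob, win):
    """One CAMShift-style step: the zeroth and first image moments
    inside the search window give the new window center."""
    x0, y0, w, h = win
    m00 = m10 = m01 = 0.0
    for y in range(y0, min(y0 + h, len(prob))):
        for x in range(x0, min(x0 + w, len(prob[0]))):
            p = prob[y][x]
            m00 += p
            m10 += x * p
            m01 += y * p
    if m00 == 0.0:
        return win  # no probability mass: keep the old window
    cx, cy = m10 / m00, m01 / m00
    return (int(cx - w / 2), int(cy - h / 2), w, h)
```

Iterating `mean_shift_step` until the window stops moving tracks the probability mode; the failure modes listed above stem from this loop trusting a single back-projected histogram.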

Color Invariant Chroma Keying and Color Spill Neutralization for Dynamic Scenes and Cameras


We show how temporal backdrops that rapidly alternate their color at the recording rate can aid chroma keying by transforming color spill into a neutral background illumination. Since the chosen colors sum up to white, the chromatic (color) spill component is neutralized when integrating over both backdrop states. Being able to separate both states, however, additionally allows high-quality alpha mattes to be computed. Besides neutralizing color spill, our method is invariant to foreground colors and supports applications with real-time demands. In this article, we explain different realizations of temporal backdrops and describe how keying and color spill neutralization are carried out, how artifacts resulting from rapid motion can be reduced, and how our approach can be implemented to be compatible with common real-time post-production pipelines.

Grundhöfer, A., Kurz, D., Thiele, S., and Bimber, O., Color Invariant Chroma Keying and Color Spill Neutralization for Dynamic Scenes and Cameras, The Visual Computer - International Journal of Computer Graphics (Computer Graphics International), 2010
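A toy per-pixel sketch of the two-state idea, assuming static scene and camera, linear RGB in [0, 1], and complementary backdrop colors that differ in every channel and sum to white (the article's realizations additionally handle motion artifacts and real-time pipelines):

```python
def temporal_key(f1, f2, b1, b2):
    """Alpha matte and spill-neutralized foreground from two frames
    f1, f2 captured against complementary backdrops b1, b2
    (per-channel RGB tuples with b1 + b2 = white)."""
    # the backdrop difference shows through by a factor of (1 - alpha)
    ratios = [(p - q) / (c - d)
              for p, q, c, d in zip(f1, f2, b1, b2) if c != d]
    alpha = 1.0 - max(0.0, min(1.0, sum(ratios) / len(ratios)))
    # averaging both states turns the colored spill into neutral gray,
    # since the two backdrop colors integrate to white
    avg = [(p + q) / 2 for p, q in zip(f1, f2)]
    # subtract the remaining (1 - alpha) share of the averaged backdrop
    fg = [max(0.0, v - (1.0 - alpha) * 0.5) for v in avg]
    return alpha, fg
```

A pure-background pixel yields alpha 0 and a black foreground; a pure-foreground pixel yields alpha 1 and its own color, regardless of which backdrop colors were chosen.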
