Research

The Institute of Computer Graphics carries out research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by new fields and technologies, such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

Projector-Camera Systems

Visual Computing Featuring Responsive Optics


The combination of advanced software algorithms and optics opens up new possibilities for display, imaging, and lighting. It makes possible responsive optical systems that adapt to particular situations automatically and dynamically. Visual computing is a relatively young research field that provides a foundation for many of these approaches. It represents a tight coupling between image synthesis, image analysis, and visual perception. While optics is all about image formation, visual computing deals with the general processing of images. This paper summarizes several examples that illustrate how graphics, vision, perception, and optics are combined to realize smart projectors, smart cameras, and smart light sources.

Bimber, O., Visual Computing Featuring Responsive Optics, invited paper, In proceedings of the 27th Spring Conference on Computer Graphics (SCCG'11), 2011

Manuscript (PDF, 9.5 MB)

Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy


We show that optical inverse tone-mapping (OITM) in light microscopy can improve the visibility of specimens, both when observed directly through the oculars and when imaged with a camera. In contrast to previous microscopy techniques, we pre-modulate the illumination based on the local modulation properties of the specimen itself. We explain how the modulation of uniform white light by a specimen can be estimated in real time, even though the specimen is continuously but not uniformly illuminated. This information is processed and back-projected constantly, allowing the illumination to be adjusted on the fly if the specimen is moved or the focus or magnification of the microscope is changed. The contrast of the specimen's optical image can be enhanced, and high-intensity highlights can be suppressed. A formal pilot study with users indicates that this optimizes the visibility of spatial structures when observed through the oculars. We also demonstrate that the signal-to-noise (S/N) ratio in digital images of the specimen is higher if captured under an optimized rather than a uniform illumination. In contrast to advanced scanning techniques that maximize the S/N ratio using multiple measurements, our approach is fast because it requires only two images. This can be beneficial for image analysis in digital microscopy applications with real-time capturing demands.
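The following minimal sketch illustrates the closed-loop idea under stated assumptions: camera image and projected illumination are registered floating-point arrays in [0, 1], and the hardware I/O functions named in the comments are hypothetical stand-ins, not part of the published system.

    import numpy as np

    def estimate_modulation(captured, illumination, eps=1e-4):
        # Approximate the specimen's local modulation as the ratio of the
        # captured image to the known, non-uniform illumination it was lit with.
        return captured / np.maximum(illumination, eps)

    def compensation_illumination(modulation, target, eps=1e-4):
        # Pre-modulate the light so that, after being modulated by the specimen
        # again, the optical image approaches the desired target.
        return np.clip(target / np.maximum(modulation, eps), 0.0, 1.0)

    # One feedback iteration (capture_camera_image / project_illumination are
    # hypothetical placeholders for the projector-camera interface):
    #   captured = capture_camera_image()
    #   m = estimate_modulation(captured, current_illumination)
    #   project_illumination(compensation_illumination(m, desired_image))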

Bimber, O., Klöck, D., Amano, T., Grundhöfer, A., and Kurz, D., Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy, IEEE Transactions on Visualization and Computer Graphics, 2010 (submitted: August 2009, accepted: July 2010)

Manuscript (PDF, 37.6 MB)

Coded Aperture Projection


Coding a projector's aperture plane with adaptive patterns, together with inverse filtering, allows the depth-of-field of projected imagery to be increased. We present two prototypes and corresponding algorithms for static and programmable apertures. We also explain how these patterns can be computed at interactive rates, taking into account the image content and the limitations of the human visual system. Applications such as projector defocus compensation, high-quality projector de-pixelation, and increased temporal contrast of projected video sequences can be supported. Coded apertures are a step towards next-generation auto-iris projector lenses.
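As a rough illustration of the inverse-filtering step, the sketch below applies Wiener-style deconvolution with the aperture's point spread function. The single-channel image, the PSF layout, and the regularization constant k are illustrative assumptions, not the paper's actual pipeline.

    import numpy as np

    def precompensate(image, psf, k=0.01):
        # Pre-sharpen the input so that, after being blurred by the (coded)
        # aperture PSF during projection, the displayed image appears sharp.
        # The PSF is assumed to have its origin in the top-left corner.
        H = np.fft.fft2(psf, s=image.shape)
        W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener deconvolution kernel;
                                                # k damps suppressed frequencies
        out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
        return np.clip(out, 0.0, 1.0)           # respect the projector's range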

Grosse, M., Wetzstein, G., Grundhöfer, A., and Bimber, O., Coded Aperture Projection, ACM Transactions on Graphics, 2010 (submitted: June 2009, accepted: March 2010)
ACM Siggraph 2010

Color Invariant Chroma Keying and Color Spill Neutralization for Dynamic Scenes and Cameras


We show how temporal backdrops that rapidly alternate their color at recording rate can aid chroma keying by transforming color spill into neutral background illumination. Since the chosen colors sum up to white, the chromatic (color) spill component is neutralized when integrating over both backdrop states. Being able to separate both states, however, additionally allows high-quality alpha mattes to be computed. Besides neutralizing color spill, our method is invariant to foreground colors and supports applications with real-time demands. In this article, we explain different realizations of temporal backdrops and describe how keying and color spill neutralization are carried out, how artifacts resulting from rapid motion can be reduced, and how our approach can be implemented to be compatible with common real-time post-production pipelines.
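A minimal sketch of the keying math under simplifying assumptions: I1 and I2 are registered RGB frames captured against the two complementary backdrop colors c1 and c2 (with c1 + c2 = white), and motion between the two frames is ignored.

    import numpy as np

    def temporal_key(I1, I2, c1, c2, eps=1e-4):
        # Background pixels change by the full backdrop color flip between the
        # two states, while opaque foreground pixels stay (nearly) constant.
        diff = np.linalg.norm(I1 - I2, axis=-1)
        full = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))
        alpha = 1.0 - np.clip(diff / max(full, eps), 0.0, 1.0)
        # Integrating over both states sums the complementary colors to white,
        # so the chromatic spill component becomes neutral illumination.
        neutral = 0.5 * (I1 + I2)
        return alpha, neutral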

Grundhöfer, A., Kurz, D., Thiele, S., and Bimber, O., Color Invariant Chroma Keying and Color Spill Neutralization for Dynamic Scenes and Cameras, The Visual Computer - International Journal of Computer Graphics (Computer Graphics International), 2010

Manuscript (PDF, 91.1 MB)

VirtualStudio2Go: Digital Video Composition for Real Environments


We synchronize film cameras and LED lighting with off-the-shelf video projectors. Radiometric compensation allows keying patterns and other spatial codes to be displayed on arbitrary real-world surfaces. Fast temporal multiplexing of coded projection and flash illumination enables professional keying, environment matting, the display of moderator information, scene reconstruction, and camera tracking for non-studio film sets, without the constraints of a virtual studio. The reconstruction of the scene geometry allows special composition effects, such as shadow casts, occlusions, and reflections. This makes digital video composition more flexible, since static studio equipment, such as blue screens, teleprompters, or tracking devices, is not required. Authentic film locations can be supported with our portable system with little installation effort. We propose a concept that combines all of these techniques into one single compact system that is fully compatible with common digital video composition pipelines and offers immediate plug-and-play applicability.

Grundhoefer, A. and Bimber, O., VirtualStudio2Go: Digital Video Composition for Real Environments, In proceedings of ACM Siggraph Asia (ACM Transactions on Graphics), vol. 27, no. 5, article 151, 2008

Superimposing Dynamic Range


We present a simple and cost-efficient way of extending the contrast, perceived tonal resolution, and color space of reflective media, such as paper prints, hardcopy photographs, or electronic paper displays. A calibrated projector-camera system is applied for automatic registration, radiometric scanning, and superimposition. A second modulation of the projected light on the surface of such media results in a high dynamic range visualization. This holds application potential for a variety of domains, such as radiology, astronomy, optical microscopy, conservation and restoration of historic art, modern art, and entertainment installations. In our experiments, we achieved contrast ratios of up to 45,000-60,000:1 with a peak luminance of more than 2,750 cd/m^2, and could technically reproduce more than 620 perceptually distinguishable tonal values. Furthermore, we attained color space extensions of up to a factor of 1.4 (compared to a regular projection on white screens) or a factor of 3.3 (compared to regular paper prints under environment light). Thereby, the hardcopy resolution can be several thousand DPI or several hundred LPI, while luminance and chrominance are modulated with a registration error of less than 0.3 mm. Thus, compared with most existing interactive HDR displays, we support near-distance viewing at a contrast frequency of up to 7 cpd (given our current registration precision and assuming a viewing distance of 50 cm).
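The core superimposition can be sketched as follows, assuming a target image and a registered radiometric scan of the hardcopy's reflectance; the function name, black level, and normalization are illustrative, not the paper's calibration procedure.

    import numpy as np

    def projection_layer(target, reflectance, black_level=0.02, eps=1e-4):
        # The optical result is approximately projection * reflectance, so the
        # projected image divides the target by the scanned print reflectance.
        P = target / np.maximum(reflectance, eps)
        P = P / max(P.max(), eps)            # normalize into displayable range
        return np.clip(P, black_level, 1.0)  # projectors emit no true black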

Bimber, O. and Iwai, D., Superimposing Dynamic Range, In proceedings of ACM Siggraph Asia (ACM Transactions on Graphics), vol. 27, no. 5, article 150, 2008

Bimber, O., and Iwai, D., Superimposing Dynamic Range, Siggraph'08 (NewTech Demo Abstract), 2008

Grundhoefer, A. and Bimber, O., Superimposing Dynamic Range, Siggraph'08 (Poster+Talk), 2008

Source Code

Radiometric Compensation Through Inverse Light Transport


Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired - as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Pre-computing the inverse light transport, in combination with an efficient implementation on the GPU, makes the real-time compensation of captured local and global light modulations possible.
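In matrix form, a camera image c of a projected image p is c = T p, where T is the light transport matrix, so compensation amounts to applying an inverse of T to the desired image. A naive numpy sketch follows; the paper's contribution is making the pre-computation and per-frame product feasible on the GPU, which this dense toy version does not attempt.

    import numpy as np

    def precompute_inverse(T, rcond=1e-3):
        # Regularized pseudo-inverse of the projector-to-camera light transport.
        # T is huge and sparse in practice; this dense version is only a sketch.
        return np.linalg.pinv(T, rcond=rcond)

    def compensate(T_inv, desired):
        # Projector image that, after undergoing the full global light transport
        # (interreflections, refractions, ...), yields the desired camera image.
        return np.clip(T_inv @ desired.ravel(), 0.0, 1.0)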

Wetzstein, G. and Bimber, O., Radiometric Compensation through Inverse Light Transport. In proceedings of Pacific Graphics, pp. 391-399, 2007

Wetzstein, G. and Bimber, O., Radiometric Compensation of Global Illumination Effects with Projector-Camera Systems. ACM Siggraph '06 (Poster), 2006

Imperceptible Calibration for Radiometric Compensation


We present a novel multi-step technique for imperceptible geometry and radiometry calibration of projector-camera systems. Our approach can be used to display geometry and color corrected images on non-optimized surfaces at interactive rates while simultaneously performing a series of invisible structured light projections during runtime. It supports disjoint projector-camera configurations, fast and progressive improvements, as well as real-time correction rates of arbitrary graphical content. The calibration is automatically triggered when mis-registrations between camera, projector and surface are detected.

Zollmann, S. and Bimber, O., Imperceptible Calibration for Radiometric Compensation. EUROGRAPHICS (short), pp. 61-64, 2007

Manuscript (PDF, 9.1 MB)

Dynamic Adaptation of Projected Imperceptible Codes


We present a novel adaptive imperceptible pattern projection technique that considers parameters of human visual perception. A coded image that is invisible for human observers is temporally integrated into the projected image, but can be reconstructed by a synchronized camera. The embedded code is dynamically adjusted on the fly to guarantee its non-perceivability and to adapt it to the current camera pose. Linked with real-time flash keying, for instance, this enables in-shot optical tracking using a dynamic multi-resolution marker technique. A sample prototype is realized that demonstrates the application of our method in the context of augmentations in television studios.
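The temporal embedding can be sketched as below. In the actual method, the modulation amplitude is adapted per pixel from perceptual thresholds and the current camera pose; the fixed delta here is a simplifying assumption.

    import numpy as np

    def embed_code(image, code, delta=0.02):
        # Two consecutive projector frames; at a high frame rate the human eye
        # temporally integrates them back to the original image.
        f1 = np.clip(image + delta * code, 0.0, 1.0)
        f2 = np.clip(image - delta * code, 0.0, 1.0)
        return f1, f2

    def extract_code(cam1, cam2):
        # A camera synchronized to the projector captures each frame separately
        # and recovers the embedded code from the frame difference.
        return cam1 - cam2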

Grundhoefer, A., Seeger, M., Haentsch, F., and Bimber, O., Dynamic Adaptation of Projected Imperceptible Codes. In proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR'07), pp. 181-190, 2007

Manuscript (PDF, 2.8 MB)

Laser Pointer Tracking in Projector-Augmented Architectural Environments


We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters, such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on a surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used for masking out areas that are critical to laser-pointer tracking, and for guiding geometric and radiometric image correction techniques that enable a projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, as well as projector-based AR and video see-through AR for visualization, with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.

Kurz, D., Haentsch, F., Grosse, M., Schiewe, A., and Bimber, O., Laser Pointer Tracking in Projector-Augmented Architectural Environments, In proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR'07), pp. 19-26, 2007

Manuscript (PDF, 1.0 MB)

Passive-Active Geometric Calibration for View-Dependent Projections onto Arbitrary Surfaces


Projecting images onto surfaces that are not optimized for projection is becoming increasingly popular. Such approaches will enable the presentation of graphical, image, or video content on arbitrary surfaces. Virtual reality visualizations may become possible in everyday environments - without specialized screen material or static screen configurations. Upcoming pocket projectors will enable truly mobile presentations on all available surfaces of furniture or papered walls. The playback of multimedia content will be supported on the natural stone walls of historic sites without destroying their ambience through the installation of artificial projection screens. We present a hybrid technique for correcting the distortions that appear when projecting images onto geometrically complex, colored, and textured surfaces. It analyzes the optical flow that results from perspective distortions during motions of the observer, and tries to use this information to compute the correct image warping. If this fails due to an unreliable optical flow, an accurate (but slower and visible) structured light projection is automatically triggered. Together with an appropriate radiometric compensation, view-dependent content can be projected onto arbitrary everyday surfaces. An implementation mainly on the GPU ensures fast frame rates.
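The passive-active decision can be caricatured in a few lines; the confidence measure, the threshold, and the structured-light stub below are all assumptions for illustration, not the paper's criteria.

    import numpy as np

    def run_structured_light():
        # Placeholder for the accurate but slower and visible active pass,
        # e.g., projecting and decoding coded patterns into a dense warp field.
        raise NotImplementedError

    def update_warp(warp, flow, confidence, threshold=0.8):
        # Refine the image warp with the optical flow if it is reliable enough;
        # otherwise trigger the visible structured-light re-calibration.
        if np.mean(confidence) >= threshold:
            return warp + flow            # passive update succeeded
        return run_structured_light()     # active calibration triggered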

Zollmann, S., Langlotz, T. and Bimber, O., Passive-Active Geometric Calibration for View-Dependent Projections onto Arbitrary Surfaces, Journal of Virtual Reality and Broadcasting, vol. 4, no. 6, 2007 (re-print from Workshop on Virtual and Augmented Reality of the GI-Fachgruppe AR/VR 2006)

Zollmann, S., Langlotz, T. and Bimber, O., Passive-Active Geometric Calibration for View-Dependent Projections onto Arbitrary Surfaces, Workshop on Virtual and Augmented Reality of the GI-Fachgruppe AR/VR, pp. 181-191, 2006

Real-Time Adaptive Radiometric Compensation


Our new radiometric compensation algorithm considers the properties of human visual perception to reduce visible artifacts resulting from the limited dynamic range and brightness of projectors. It preserves a maximum of luminance and contrast and is implemented entirely on the GPU. Real-time frame rates are achieved for supporting animated and interactive content. Initially, our algorithm performs an off-line analysis of the projection surface's geometry and reflectance. The image content is then analyzed to determine the average luminance values, the amount of high spatial frequencies, and a luminance threshold map. The threshold map stores information about the maximum non-perceivable luminance differences for each pixel. The radiometric compensation is carried out in two passes: In the first pass, the intensity values are translated and scaled globally, depending on the surface reflectance and the image content itself. The result is analyzed for clipping errors. These errors are then blurred with a Gaussian kernel. The applied sigma is inversely proportional to the amount of high spatial frequencies in the local image areas. In the final pass, the image intensities are again translated and scaled globally, but the luminance values are also adjusted locally, depending on the defocused clipping errors. Time-dependent adaptation factors are used for the global and local transformations to avoid popping artifacts in animated and interactive content.
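A strongly simplified sketch of the two passes, with the global adaptation factors, the perceptual threshold map, and the per-region sigma reduced to scalars (all values illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def adaptive_compensation(image, reflectance, hf_amount, eps=1e-4):
        # Pass 1: globally translate and scale intensities, then compensate
        # for the surface reflectance.
        scaled = 0.1 + 0.8 * image
        comp = scaled / np.maximum(reflectance, eps)
        # Clipping errors: intensities the projector cannot physically produce.
        clip_err = comp - np.clip(comp, 0.0, 1.0)
        # Blur the errors with a Gaussian whose sigma is inversely proportional
        # to the amount of high spatial frequencies (a scalar here for brevity).
        blurred = gaussian_filter(clip_err, sigma=1.0 / (hf_amount + eps))
        # Pass 2: adjust luminance locally by the defocused clipping errors.
        return np.clip(comp - blurred, 0.0, 1.0)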

Grundhoefer, A. and Bimber, O., Real-Time Adaptive Radiometric Compensation. In IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 14, no. 1, pp. 97-108, 2008, (submitted: August 2006, accepted: February 2007, electronic version published: March 2007)

Grundhoefer, A. and Bimber, O., Real-Time Adaptive Radiometric Compensation. ACM Siggraph '06 (Poster), 2006

Augmenting Large Scale Optical Holograms


Large scale optical holograms require large scale display technology for combining them with interactive graphical elements. Shuttered projection screens (SPS) can be used to sequentially display stereoscopic graphics (in the diffuse mode) and to reconstruct the holographic content (in the transparent mode). While the SPS is shuttered at 50 Hz, the stereo pairs are synchronized and time-modulated at approximately 100 Hz. Depth information about the holographic content is required to create consistent occlusion and illumination effects with the graphical content. A two-lens stereo camera system can be used for partially scanning the hologram. The different point clouds have to be merged into a common coordinate system to form the whole surface. Due to the limited resolution of the range sensor, small gaps appear between the actual surface points. Instead of triangulating the points into a mesh of triangle primitives, the points remain unconnected. They are rendered as point primitives (splatted) with appropriate radii to fill the gaps of missing surface information. The splat size and resolution are adapted dynamically with respect to the observer's position to ensure interactive frame rates.

Bimber, O., Merging Graphics and Holograms. Journal of Holography and Speckle, vol.3, no. 2, pp. 73-79, 2006

Bimber, O., Augmenting Holograms. IEEE Computer Graphics and Applications (CG&A), vol. 26, no. 5, pp. 12-17, 2006

Compensating Indirect Scattering for Immersive and Semi-Immersive Projection Displays


Concavely shaped projection screens, such as CAVEs, two-sided workbenches, domes, or cylinders, scatter a fraction of the light to other screen portions. This indirect illumination adds to the directly projected image and causes the displayed content to appear partially inconsistent and washed out. We have developed a reverse radiosity method that compensates first-level and higher-level secondary scattering effects in real time. The images appear more brilliant and uniform when the scattering contribution is reduced. A numerical solution is approximated with Jacobi iteration for a sparse-matrix linear equation system on the GPU. Efficient data structures allow packing the required data into textures, which are processed by pixel shaders. Frame-buffer objects are used for a fast exchange of intermediate iteration results, and enable computations with floating point precision. Our algorithm's result can be optimized for quality or performance.
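Writing the desired image as b and the compensated image actually displayed as x, direct projection plus scattering gives b = x + F x for a form-factor matrix F, so compensation solves (I + F) x = b. Since F has a zero diagonal, the Jacobi update is particularly simple. A numpy stand-in for the GPU version (F and the vectorized images are illustrative):

    import numpy as np

    def compensate_scattering(F, desired, iterations=20):
        # Jacobi iteration for (I + F) x = desired. The diagonal of (I + F) is
        # one (F has zero diagonal), so each update just subtracts the light
        # that every other screen patch scatters onto this one.
        x = desired.copy()
        for _ in range(iterations):
            x = desired - F @ x
        return np.clip(x, 0.0, None)   # projected intensities cannot be negative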

Bimber, O., Grundhoefer, A., Zeidler, T., Danch, D., and Kapakos, P. Compensating Indirect Scattering for Immersive and Semi-Immersive Projection Displays. In proceedings of IEEE Virtual Reality (IEEE VR'06), pp. 151-158, 2006

Manuscript (PDF, 3.7 MB)

Multi-Focal Projection


Many multi-projector rendering techniques exist that aim at creating a high consistency of image geometry, intensity, and color. We propose a concept and a solution for considering and optimizing a fourth image property: focus. We describe a novel multi-focal projection concept that applies conventional video projectors and camera feedback. Multiple projectors with differently adjusted focal planes, but overlapping image areas, are used. They can either be arbitrarily positioned in the environment, or be integrated into a single projection unit. During an automatic one-time calibration process, structured light projection together with camera feedback makes it possible to measure the relative focus value of every projector pixel on an arbitrary diffuse surface. Thereby, the focus values are geometrically and radiometrically corrected. Once these values are known, a final image with minimal defocus can be composed in real time from the individual pixel contributions of all projectors. Our technique is independent of the surfaces' geometry, color, and texture, of the environment light, and of the projectors' parameters (i.e., position, orientation, luminance, and chrominance).
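The final composition step can be sketched as a per-pixel selection of the projector with the highest measured focus value; the paper also corrects the focus measurements geometrically and radiometrically, and array shapes and names here are illustrative.

    import numpy as np

    def compose_multifocal(images, focus):
        # images: (N, H, W) registered contributions of N projectors,
        # focus:  (N, H, W) measured relative focus value per projector pixel.
        best = np.argmax(focus, axis=0)          # sharpest projector per pixel
        return np.take_along_axis(images, best[None], axis=0)[0]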

Bimber, O. and Emmerling, A., Multi-Focal Projection: A Multi-Projector Technique for Increasing Focal Depth. In IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 12, no. 4, pp. 658-667, 2006

Bimber, O., Wetzstein, G., Emmerling, A., and Nitschke, C. Enabling View-Dependent Stereoscopic Projection in Real Environments. In proceedings of International Symposium on Mixed and Augmented Reality (ISMAR'05), pp. 14-23, 2005

Enabling View-Dependent Stereoscopic Projection in Real Environments


With this work, we take a first step towards ad-hoc stereoscopic projection within real environments. We show how view-dependent image-based and geometric warping, radiometric compensation, and multi-focal projection enable a view-dependent visualization on ordinary (geometrically complex, colored, and textured) surfaces within everyday environments. All these techniques are accomplished at interactive rates and on a per-pixel basis for multiple interplaying projectors. Special display configurations for immersive or semi-immersive VR/AR applications that require permanent and artificial projection canvases might become unnecessary. Such an approach not only offers new possibilities for augmented reality and virtual reality, but also allows both technologies to be merged. This potentially gives some application domains, such as architecture, the possibility to benefit from the conceptual overlaps of AR and VR.
Special thanks to the Faculty of Architecture, Bauhaus-University Weimar for their support.

Bimber, O., Wetzstein, G., Emmerling, A., and Nitschke, C. Enabling View-Dependent Stereoscopic Projection in Real Environments. In proceedings of International Symposium on Mixed and Augmented Reality (ISMAR'05), pp. 14-23, 2005

Interacting with Augmented Holograms


Holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. Our intention is to combine both technologies to create a powerful tool for science, industry, and education. We are currently investigating the possibility of integrating computer-generated graphics and holograms. We present several applications of interaction techniques to graphically enhanced holograms and give a first glance at a novel method that reconstructs depth from optical holograms.
Special thanks to Deutsche Forschungsgemeinschaft (DFG) for their support.

Bimber, O., Augmenting Holograms. IEEE Computer Graphics and Applications (CG&A), vol. 26, no. 5, pp. 12-17, 2006

Bimber, O., Zeidler, T., Grundhoefer, A., Wetzstein, G., Moehring, M., Knoedel, S., and Hahne, U. Interacting with Augmented Holograms. In proceedings of SPIE Conference on Practical Holography XIX: Materials and Applications, 2005