Location of the Institute for Computer Graphics: Computer Science Building (Science Park 3)

The Institute of Computer Graphics carries out research in a modern field that has come to be known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by new fields and technologies, such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.


Light-Field Retargeting

We present a first approach to light-field retargeting using z-stack seam carving, which allows light-field compression and extension while retaining angular consistency. Our algorithm first converts an input light field into a set of perspective-sheared focal stacks. It then applies 3D deconvolution to convert the focal stacks into z-stacks, and seam-carves the z-stack of the center perspective. The computed seams of the center perspective are sheared and applied to the z-stacks of all off-center perspectives. Finally, the carved z-stacks are converted back into the perspective images of the output light field. To our knowledge, this is the first approach to light-field retargeting. Unlike existing stereo-pair retargeting or 3D retargeting techniques, it does not require depth information.

Birklbauer, C. and Bimber, O., Light-Field Retargeting, In proceedings of Eurographics (Computer Graphics Forum), 31(2), 295-303, 2012

  • Manuscript (PDF, 16.9 MB)
  • Slides (Eurographics 2012) (PDF, 2.9 MB)
  • Results Video (interactive editing) (AVI, 9.0 MB)
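The pipeline above reduces light-field retargeting to seam carving on z-stacks. The underlying dynamic program is easiest to see in the classic 2D case; the following pure-Python sketch (illustrative helper names, not the paper's implementation) finds and removes one minimal vertical seam from a grayscale image:

```python
def energy(img):
    """Absolute horizontal + vertical gradient as a simple energy measure."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][min(x + 1, w - 1)] - img[y][x]) +
             abs(img[min(y + 1, h - 1)][x] - img[y][x])
             for x in range(w)] for y in range(h)]

def min_vertical_seam(img):
    """Dynamic programming: cheapest 8-connected top-to-bottom seam."""
    e = energy(img)
    h, w = len(e), len(e[0])
    cost = [row[:] for row in e]
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y][x] += min(cost[y - 1][lo:hi])
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(min(range(lo, hi), key=lambda x2: cost[y][x2]))
    seam.reverse()
    return seam

def carve(img, seam):
    """Remove one pixel per row, shrinking the width by one."""
    return [row[:x] + row[x + 1:] for row, x in zip(img, seam)]
```

The paper's contribution is applying such seams consistently across perspectives: the seam computed for the center view is sheared and reused for all off-center views, which a per-view carve would not guarantee.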
Light-Field Caching

With the continuously increasing sensor resolutions of cameras, light-field imaging is becoming a more and more practical extension to conventional digital photography. It complements postprocessing with synthetic aperture control, refocusing, as well as perspective and field-of-view changes. To be a true alternative to classical 2D imaging, however, the spatial resolution of light fields must be in the same megapixel order as the resolution of today's digital images. The additional angular resolution must also be adequately high to prevent sampling artifacts (in particular for synthetic refocusing). This quickly leads to gigabytes rather than megabytes of data that have to be rendered with limited graphics memory. We describe a light-field caching framework that makes it possible to render very large light fields in real time.

Opelt, S. and Bimber, O., Light-Field Caching, ACM Siggraph (poster), 2011

  • Manuscript (PDF, 1.0 MB)
  • Poster (PDF, 6.4 MB)
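The abstract does not spell out the caching scheme; as one plausible sketch, an LRU cache over light-field tiles keyed by angular and spatial indices (all names hypothetical, not the framework described in the poster) could bound memory like this:

```python
from collections import OrderedDict

class LightFieldTileCache:
    """LRU cache for light-field tiles, one way to bound memory use.
    Keys are (u, v, tx, ty): angular index plus spatial tile index."""

    def __init__(self, capacity, load_tile):
        self.capacity = capacity          # max tiles held in memory
        self.load_tile = load_tile        # backing loader (disk/decoder)
        self.tiles = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)   # mark as most recently used
            return self.tiles[key]
        self.misses += 1
        tile = self.load_tile(key)        # fetch from the backing store
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used
        return tile
```

Rendering a view then only touches the tiles visible under the current aperture and perspective, so the working set stays far below the full light-field size.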
Light-Field Retargeting with Focal Stack Seam Carving

With increasing sensor resolutions of digital cameras, light-field imaging is becoming more and more relevant, and might sooner or later even replace classical 2D imaging in photography. It enables, for instance, digital refocusing and perspective changes after capturing. Rescaling light fields to different resolutions and aspect ratios, however, is challenging. As for regular image and video content, a linear scaling alters the aspect ratio of recorded objects in an unnatural way. In contrast, image and video retargeting utilizes a nonlinear and content-based scaling. Applying image retargeting to individual video frames independently does not retain temporal consistency. Similarly, applying image retargeting naively to the spatial domain of light fields will not retain angular consistency. We present a first approach to light-field retargeting that allows compressing or stretching light fields while retaining angular consistency.

Birklbauer, C. and Bimber, O., Light-Field Retargeting with Focal Stack Seam Carving, ACM Siggraph (poster), 2011

  • Manuscript (PDF, 1.5 MB)
  • Poster (PDF, 1.2 MB)
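The digital refocusing mentioned above is commonly implemented by shift-and-add over the sub-aperture views: each view (u, v) is shifted in proportion to its angular offset and all views are averaged. A minimal pure-Python sketch (integer shifts only, names illustrative; real implementations interpolate sub-pixel shifts):

```python
def refocus(views, alpha):
    """Synthetic refocusing by shift-and-add.

    views: dict mapping angular index (u, v) to a 2D image (list of rows).
    alpha: refocus parameter; 0 keeps the captured focal plane.
    """
    h = len(next(iter(views.values())))
    w = len(next(iter(views.values()))[0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for (u, v), img in views.items():
        dy, dx = round(alpha * v), round(alpha * u)  # shift by angular offset
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    acc[sy][sx] += img[y][x]
                    cnt[sy][sx] += 1
    # Average only the views that contributed to each output pixel.
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(w)] for y in range(h)]
```

Sweeping alpha over a range of values produces exactly the kind of focal stack that the method above seam-carves.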
Adaptive Coded Aperture Photography

We show how the intrinsically performed JPEG compression of many digital still cameras leaves margin for deriving and applying image-adapted coded apertures that support retention of the most important frequencies after compression. These coded apertures, together with subsequently applied image processing, enable a higher light throughput than corresponding circular apertures, while preserving adjusted focus, depth of field, and bokeh. Higher light throughput leads to proportionally higher signal-to-noise ratios and reduced compression noise, or, alternatively, to shorter shutter times. We explain how adaptive coded apertures can be computed quickly, how they can be applied in lenses by using binary spatial light modulators, and how a resulting coded bokeh can be transformed into a common radial one.

Bimber, O., Qureshi, H., Grundhoefer, A., Grosse, M., and Danch, D., Adaptive Coded Aperture Photography, In proceedings of 7th International Symposium on Visual Computing (ISVC'11), 2011

  • Manuscript (PDF, 6.2 MB)
  • Supplementary Material (PDF, 13.0 MB)
  • Slides (ISVC'11) (PDF, 4.6 MB)
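For out-of-focus regions, image formation through a coded aperture can be modeled as convolution with the aperture mask acting as the point-spread function. The following small sketch simulates that model (it illustrates the imaging model only, not the paper's adaptive code computation; names are illustrative):

```python
def coded_blur(img, mask):
    """Simulate defocus blur behind a binary coded aperture.

    The mask is the PSF: each open cell contributes one shifted copy of the
    scene, and the result is normalized by the light throughput (the number
    of open cells). Borders are handled by clamping.
    """
    h, w = len(img), len(img[0])
    mh, mw = len(mask), len(mask[0])
    oy, ox = mh // 2, mw // 2
    open_cells = sum(map(sum, mask))
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for j in range(mh):
                for i in range(mw):
                    if mask[j][i]:
                        sy = min(max(y + j - oy, 0), h - 1)  # clamp
                        sx = min(max(x + i - ox, 0), w - 1)
                        s += img[sy][sx]
            out[y][x] = s / open_cells
    return out
```

The paper's point is the trade-off hidden in `open_cells`: more open cells mean proportionally more light (higher SNR or shorter shutter times), while the cell pattern controls which spatial frequencies survive blur and compression.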
Display Pixel Caching

A variety of standard video modes that stretch or zoom lower resolution video content linearly to take full advantage of large screen sizes have been implemented in TV sets. When content and screen aspect ratios differ, format proportions may be compromised, video content may be clipped, or screen regions may remain unused. Newer techniques, such as video retargeting and video upsampling, rescale individual video frames and can potentially match them to the display resolution and aspect ratio. However, none of these methods can simultaneously display more than is contained in a single frame.

We present a new video mode for television sets that we refer to as display pixel caching (DPC). It fills empty borders with spatially and temporally consistent information while preserving the original video format. Unlike related video modes, such as stretching, zooming and video retargeting, DPC does not scale or stretch individual frames. Instead, it merges the motion information from many subsequent frames to generate screen-filling panoramas in a consistent manner. In contrast to state-of-the-art video mosaicing, DPC achieves real-time rates for high-resolution video content while processing more complex motion patterns fully automatically. We compare DPC to related video modes in the context of a user evaluation.

Birklbauer, C., Grosse, M., Grundhoefer, A., Liu, T., and Bimber, O., Display Pixel Caching, In proceedings of 7th International Symposium on Visual Computing (ISVC'11), 2011

Birklbauer, C., Grosse, M., Grundhoefer, A., Liu, T., and Bimber, O., Display Pixel Caching, ACM Siggraph (poster+talk), 2011

  • Manuscript (PDF, 6.7 MB)
  • Manuscript 2 (PDF, 1.9 MB)
  • Supplementary Material (PDF, 8.8 MB)
  • Poster (PDF, 1.4 MB)
  • Slides (Siggraph'11) (PDF, 2.9 MB)
  • Slides (ISVC'11) (PDF, 2.6 MB)
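The core idea, pasting motion-registered frames into a wider canvas so the borders fill up over time, can be sketched as follows (per-frame offsets are assumed given here; the actual system estimates them from the video's motion, and handles far more complex motion than pure translation):

```python
def build_panorama(frames, offsets, canvas_w, canvas_h, frame_w, frame_h):
    """Toy version of the DPC border-filling idea.

    Each frame is pasted into a larger canvas at its estimated global
    (x, y) offset. Pixels outside the current frame keep the values
    written by earlier frames, so the borders accumulate a panorama
    while the newest frame always wins inside its own footprint.
    """
    canvas = [[None] * canvas_w for _ in range(canvas_h)]
    for frame, (ox, oy) in zip(frames, offsets):
        for y in range(frame_h):
            for x in range(frame_w):
                cy, cx = y + oy, x + ox
                if 0 <= cy < canvas_h and 0 <= cx < canvas_w:
                    canvas[cy][cx] = frame[y][x]
    return canvas
```

Unfilled canvas cells stay `None` until some frame's footprint reaches them, which mirrors how DPC gradually populates the otherwise empty screen borders.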
PhoneGuide: Adaptive Image Classification for Mobile Museum Guidance

With the increasing computational capacity of camera-equipped mobile phones, object recognition on such devices is shifting away from centralized client-server approaches, in which the phones act only as input/output front-ends, to local on-device classification systems. The advantages of such a decentralization are shorter response times, scalability with respect to a large number of simultaneous users, and reduced network traffic costs. Mobile image classification can support applications that rely on device localization, such as museum or city guidance, by supplementing existing positional information retrieved, for instance, from GPS or GSM cells. The challenge for mobile image classification, however, is to become as robust as possible, even when applied in large, highly dynamic, and uncontrollable public environments: hundreds to thousands of objects must be recognized from different perspectives, from varying distances, and under changing lighting conditions, while recognition rates must remain usable. The key to solving this problem may be automatic adaptation to dynamic changes in the environment and to the most common user behavior. This paper summarizes the various components of our mobile museum guidance system, PhoneGuide.

Bimber, O. and Bruns, E., PhoneGuide: Adaptive Image Classification for Mobile Museum Guidance, invited paper, In proceedings of 5th International Symposium on Ubiquitous Virtual Reality (ISUVR'11), 2011

  • Manuscript (PDF, 4.2 MB)
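As a stand-in for the on-device recognition step (PhoneGuide itself used more elaborate, adaptive classifiers than this), a nearest-neighbor decision over global feature vectors illustrates why local classification is feasible on a phone; all names here are hypothetical:

```python
def classify(feature, prototypes):
    """Nearest-neighbor classification over global feature vectors.

    prototypes: list of (label, vector) pairs stored on the device.
    Returns the label of the prototype with the smallest squared
    Euclidean distance to the query feature.
    """
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda p: d2(feature, p[1]))[0]
```

The adaptation idea from the abstract would correspond to updating the prototype set from observed user behavior and environment changes, rather than keeping it fixed.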
Visual Computing Featuring Responsive Optics

The combination of advanced software algorithms and optics opens up new possibilities for display, imaging, and lighting. It makes possible responsive optical systems that adapt to particular situations automatically and dynamically. Visual computing is a relatively young research field that provides a foundation for many of these approaches. It represents a tight coupling between image synthesis, image analysis, and visual perception. While optics is all about image formation, visual computing deals with the general processing of images. This paper summarizes several examples that illustrate how graphics, vision, perception, and optics are combined to realize smart projectors, smart cameras, and smart light sources.

Bimber, O., Visual Computing Featuring Responsive Optics, invited paper, In proceedings of 27th Spring Conference on Computer Graphics (SCCG'11), 2011

