Research

The Institute of Computer Graphics conducts research in the modern field known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. They are enabled by emerging fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.
Processing

Real-Time Video Enhancement Compensating For Macular Degeneration


Age-related macular degeneration (AMD) is a chronic and progressive eye condition with advancing central vision loss. In this article, we present two real-time video enhancement techniques that compensate for this loss of visual function: filtering and scaling. Since face recognition is problematic with central vision loss, video images are processed to improve face recognition. Our filtering method extends an existing adaptive contrast enhancement technique for the visually impaired and is based on the assumption that different content requires different filter parameterization. Each video image is segmented into facial and non-facial regions. The filter is then applied separately to each region using different parameters according to the particular content. Finally, the results are blended seamlessly. The scaling technique magnifies facial regions temporarily in each shot to improve recognition of faces and their expressions. Filtering and scaling can be combined when needed. Both methods were evaluated in a user study with AMD patients at the Medical University of Vienna.

Hochrieser, M., Eisenkölbl, S., and Bimber, O., Real-Time Video Enhancement Compensating For Macular Degeneration, submitted to IEEE Transactions on Circuits and Systems for Video Technology, 2012
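
As a rough illustration of the region-dependent filtering idea described above (and not the authors' implementation), the following Python/OpenCV sketch detects faces with a Haar cascade, enhances facial and non-facial regions with differently parameterized CLAHE filters as a stand-in for the paper's adaptive contrast enhancement, and blends the results with a feathered mask. All function names and parameter values are illustrative assumptions.

# Hypothetical sketch: region-dependent contrast enhancement with seamless blending.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def enhance_frame(bgr, face_clip=3.0, bg_clip=1.5, feather=31):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)

    # Soft mask of facial regions so the two filter results blend seamlessly.
    mask = np.zeros(gray.shape, np.float32)
    for (x, y, w, h) in faces:
        mask[y:y + h, x:x + w] = 1.0
    mask = cv2.GaussianBlur(mask, (feather, feather), 0)

    # Enhance the luminance channel with different CLAHE parameters per region.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l_face = cv2.createCLAHE(clipLimit=face_clip, tileGridSize=(8, 8)).apply(l)
    l_bg = cv2.createCLAHE(clipLimit=bg_clip, tileGridSize=(8, 8)).apply(l)
    l_out = (mask * l_face + (1.0 - mask) * l_bg).astype(np.uint8)

    return cv2.cvtColor(cv2.merge((l_out, a, b)), cv2.COLOR_LAB2BGR)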

Light-Field Retargeting


We present a first approach to light-field retargeting using z-stack seam carving, which allows light-field compression and extension while retaining angular consistency. Our algorithm first converts an input light field into a set of perspective-sheared focal stacks. It then applies 3D deconvolution to convert the focal stacks into z-stacks, and seam-carves the z-stack of the center perspective. The computed seams of the center perspective are sheared and applied to the z-stacks of all off-center perspectives. Finally, the carved z-stacks are converted back into the perspective images of the output light field. To our knowledge, this is the first approach to light-field retargeting. Unlike existing stereo-pair retargeting or 3D retargeting techniques, it does not require depth information.

Birklbauer, C. and Bimber, O., Light-Field Retargeting, In Proceedings of Eurographics (Computer Graphics Forum), 31(2), 295-303, 2012

  • Manuscript (PDF, 16.9 MB)
  • Slides (Eurographics 2012) (PDF, 2.9 MB)
  • Results Video (interactive editing) (AVI, 9.0 MB)
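
To make the seam-carving building block concrete, here is a minimal NumPy sketch. It is a simplification for illustration, not the published algorithm: it computes one minimum-energy vertical seam on the center view (gradient-magnitude energy plus dynamic programming) and removes that same seam from every perspective image, omitting the focal-stack conversion, 3D deconvolution, and perspective-dependent seam shearing described above.

# Simplified seam carving applied consistently to all perspective views.
import numpy as np

def vertical_seam(gray):
    """Minimum-energy vertical seam via dynamic programming (energy = gradient magnitude)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    energy = np.abs(gx) + np.abs(gy)
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest pixel in the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_seam(img, seam):
    h, w = img.shape[:2]
    keep = np.ones((h, w), bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1, *img.shape[2:])

def carve_light_field(views, center, n_seams):
    """views: dict {(u, v): HxWx3 array}; the center view drives seam selection."""
    for _ in range(n_seams):
        seam = vertical_seam(views[center].mean(axis=2))
        views = {uv: remove_seam(img, seam) for uv, img in views.items()}
    return views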

Light-Field Retargeting with Focal Stack Seam Carving


With increasing sensor resolutions of digital cameras, light-field imaging is becoming more and more relevant and might eventually replace classical 2D imaging in photography. It enables, for instance, digital refocusing and perspective changes after capturing. Rescaling light fields to different resolutions and aspect ratios, however, is challenging. As for regular image and video content, linear scaling alters the aspect ratio of recorded objects in an unnatural way. In contrast, image and video retargeting applies a nonlinear, content-based scaling. Applying image retargeting to individual video frames independently does not retain temporal consistency. Similarly, applying image retargeting naively to the spatial domain of light fields will not retain angular consistency. We present a first approach to light-field retargeting that allows compressing or stretching light fields while retaining angular consistency.

Birklbauer, C. and Bimber, O., Light-Field Retargeting with Focal Stack Seam Carving, ACM Siggraph (poster), 2011

  • Manuscript (PDF, 1.5 MB)
  • Poster (PDF, 1.2 MB)

Display Pixel Caching




TV sets implement a variety of standard video modes that linearly stretch or zoom lower-resolution video content to take full advantage of large screen sizes. When content and screen aspect ratios differ, format proportions may be compromised, video content may be clipped, or screen regions may remain unused. Newer techniques, such as video retargeting and video upsampling, rescale individual video frames and can potentially match them to the display resolution and aspect ratio. However, none of these methods can simultaneously display more than is contained in a single frame.

We present a new video mode for television sets that we refer to as display pixel caching (DPC). It fills empty borders with spatially and temporally consistent information while preserving the original video format. Unlike related video modes, such as stretching, zooming, and video retargeting, DPC does not scale or stretch individual frames. Instead, it merges the motion information from many subsequent frames to generate screen-filling panoramas in a consistent manner. In contrast to state-of-the-art video mosaicing, DPC achieves real-time rates for high-resolution video content while processing more complex motion patterns fully automatically. We compare DPC to related video modes in the context of a user evaluation.

Birklbauer, C., Grosse, M., Grundhoefer, A., Liu, T., and Bimber, O., Display Pixel Caching, In Proceedings of the 7th International Symposium on Visual Computing (ISVC'11), 2011

Birklbauer, C., Grosse, M., Grundhoefer, A., Liu, T., and Bimber, O., Display Pixel Caching, ACM Siggraph (poster+talk), 2011

  • Manuscript (PDF, 6.7 MB)
  • Manuscript 2 (PDF, 1.9 MB)
  • Supplementary Material (PDF, 8.8 MB)
  • Poster (PDF, 1.4 MB)
  • Slides (Siggraph'11) (PDF, 2.9 MB)
  • Slides (ISVC'11) (PDF, 2.6 MB)
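
The following Python/OpenCV sketch illustrates the caching idea under strong simplifying assumptions and is not the published system: inter-frame motion is estimated with ORB feature matching and a single RANSAC homography, the cached canvas is warped with that motion, and the newest frame is pasted into the canvas center so that the borders gradually fill with consistent past content. It assumes frames smaller than the output canvas and ignores the real-time and robustness aspects discussed above.

# Illustrative sketch of border filling with motion-consistent cached pixels.
import cv2
import numpy as np

def estimate_motion(prev_gray, cur_gray, orb, matcher):
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def display_pixel_cache(frames, screen_w, screen_h):
    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    cache = np.zeros((screen_h, screen_w, 3), np.uint8)
    prev_gray = None
    for frame in frames:
        fh, fw = frame.shape[:2]
        ox, oy = (screen_w - fw) // 2, (screen_h - fh) // 2  # frame centered on screen
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            H = estimate_motion(prev_gray, gray, orb, matcher)
            # Move the cached content with the camera: conjugate the frame-to-frame
            # homography into cache (screen) coordinates before warping.
            T = np.array([[1, 0, ox], [0, 1, oy], [0, 0, 1]], np.float64)
            cache = cv2.warpPerspective(cache, T @ H @ np.linalg.inv(T),
                                        (screen_w, screen_h))
        cache[oy:oy + fh, ox:ox + fw] = frame  # the newest frame always wins in the center
        prev_gray = gray
        yield cache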

Fast and Robust CAMShift Tracking


CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. Since it relies solely on back-projected probabilities, it can fail when the object’s appearance changes (e.g., due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected, or when they cross their trajectories.

We propose low-cost extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearances and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back projection, computation of image moments, and histogram intersection. All of these techniques make full use of a GPU’s high parallelization capabilities.

Exner, D., Bruns, E., Kurz, D., Grundhoefer, A., and Bimber, O., Fast and Robust CAMShift Tracking, In Proceedings of the IEEE International Workshop on Computer Vision for Computer Games (IEEE CVCG), 2010
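
For reference, the baseline CAMShift loop that these extensions build on can be written with OpenCV as below. This is a minimal sketch, not the proposed method: it tracks a single target with one hue histogram, uses cv2.calcBackProject and cv2.CamShift, and adds histogram intersection (cv2.compareHist with HISTCMP_INTERSECT) as one simple way to monitor how well the tracked window still matches the model. The proposed extensions instead accumulate multiple histograms per object and carry out these steps on the GPU.

# Baseline CAMShift tracking with a simple model-similarity check.
import cv2

def track(cap, roi):
    x, y, w, h = roi
    ok, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    model = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [32], [0, 180])
    cv2.normalize(model, model, 0, 255, cv2.NORM_MINMAX)

    window = (x, y, w, h)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], model, [0, 180], 1)
        box, window = cv2.CamShift(backproj, window, term)

        # Histogram intersection between the model and the current window as a drift indicator.
        wx, wy, ww, wh = window
        if ww > 0 and wh > 0:
            cur = cv2.calcHist([hsv[wy:wy + wh, wx:wx + ww]], [0], None, [32], [0, 180])
            cv2.normalize(cur, cur, 0, 255, cv2.NORM_MINMAX)
            similarity = cv2.compareHist(model, cur, cv2.HISTCMP_INTERSECT)
        else:
            similarity = 0.0
        yield box, window, similarity

For example, track(cv2.VideoCapture("input.avi"), (x, y, w, h)), with a placeholder file name and initial region, yields a rotated bounding box, the updated search window, and a model-similarity score for each frame.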

