
The Institute of Computer Graphics carries out research in a modern field that has been coined "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data. These are enabled by new fields and technologies, such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

Light Fields

Compressive Volumetric Light-Field Excitation

We explain how volumetric light-field excitation can be converted to a process that entirely avoids 3D reconstruction, deconvolution, and calibration of optical elements while taking scattering in the probe better into account. For spatially static probes, this is achieved by an efficient (one-time) light-transport sampling and light-field factorization. Individual probe particles (and arbitrary combinations thereof) can subsequently be excited in a dynamically controlled way while still supporting volumetric reconstruction of the entire probe in real-time based on a single light-field recording.

D. C. Schedl, O. Bimber, Compressive Volumetric Light-Field Excitation, Scientific Reports 7(13981), doi:10.1038/s41598-017-13136-2, 2017

Paper (Preprint, PDF, 34.9 MB)
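The factorization idea behind the one-time light-transport sampling can be illustrated with a toy example. The sketch below is a hypothetical stand-in only: it decomposes a small synthetic light-transport matrix with nonnegative matrix factorization (Lee-Seung multiplicative updates); the matrix sizes, variable names, and update scheme are illustrative assumptions, not the pipeline used in the paper.

```python
import numpy as np

def nmf(T, rank, iters=500, eps=1e-9):
    """Nonnegative factorization T ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = T.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ T) / (W.T @ W @ H + eps)
        W *= (T @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy light-transport matrix: rows stand for recorded light-field rays,
# columns for illumination rays; entries are measured couplings.
rng = np.random.default_rng(1)
T = rng.random((40, 6)) @ rng.random((6, 30))   # low-rank by construction
W, H = nmf(T, rank=6)
err = np.linalg.norm(T - W @ H) / np.linalg.norm(T)
```

Once such a factorization is available, individual components (toy stand-ins for probe particles) can be addressed through the factors rather than through a full 3D reconstruction.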

Optimized Sampling for View Interpolation in Light Fields Using Local Dictionaries

We present an angular superresolution method for light fields captured with a sparse camera array. Our method uses local dictionaries extracted from a sampling mask for upsampling a sparse light field to a dense light field by applying compressed sensing reconstruction. We derive optimal sampling masks by minimizing the coherence for representative global dictionaries. The desired output perspectives and the number of available cameras can be arbitrarily specified. We show that our method yields qualitative improvements compared to previous techniques.

Schedl, D. C., Birklbauer, C., and Bimber, O., Optimized Sampling for View Interpolation in Light Fields Using Local Dictionaries, Computer Vision and Image Understanding, to appear, 2017

Link to Elsevier
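The coherence criterion used for deriving sampling masks can be illustrated in a few lines. The sketch below is a toy example under stated assumptions: it measures the mutual coherence of row-sampled versions of a random dictionary and keeps the least coherent candidate mask; the dictionary, mask size, and random candidate search are assumptions for illustration, not the paper's optimization.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute normalized inner product between distinct columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((64, 128))   # toy global dictionary

# Evaluate candidate row-sampling masks and keep the least coherent one.
candidates = [rng.choice(64, size=16, replace=False) for _ in range(50)]
cohs = [mutual_coherence(dictionary[m]) for m in candidates]
best_mask = candidates[int(np.argmin(cohs))]
```

Lower coherence of the sampled dictionary generally improves the conditioning of the subsequent compressed-sensing reconstruction.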

Volumetric Light-Field Excitation


We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field.
First, to select target objects, a 4D imaging light field is captured.
A light field mask is then computed automatically for this selection to avoid illumination of the remaining areas.
With one-photon illumination, simultaneous generation of complex volumetric light patterns becomes possible.
As a full light-field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and lighting durations are supported.

Schedl, D. C. and Bimber, O. Volumetric Light-Field Excitation. Nature Sci. Rep. 6, 29193; doi: 10.1038/srep29193, 2016

Link to Nature
Manuscript (PDF, 24.2 MB)
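A minimal sketch of the masking idea, under heavy simplification: targets are selected from a toy 4D imaging light field by a plain intensity threshold (a stand-in for the actual selection procedure), and the resulting binary mask restricts a uniform illumination light field to the selected rays only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4D imaging light field: (u, v) directions x (s, t) pixels.
imaging_lf = rng.random((4, 4, 32, 32))

# Select target rays by a simple intensity threshold (hypothetical
# stand-in for selecting probe particles in the recorded light field).
mask = (imaging_lf > 0.9).astype(float)

# Restrict a uniform illumination light field to the selected rays,
# leaving the remaining areas unlit.
illumination_lf = np.ones_like(imaging_lf)
patterned = illumination_lf * mask
```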

Generalized Depth-of-Field Light-Field Rendering

Typical light-field rendering uses a single focal plane to define the depth at which objects should appear sharp.
This emulates the behavior of classical cameras.
However, plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of conventional imaging.
We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be all in focus while the surrounding fore- and background is consistently rendered out of focus based on user-defined focal plane and aperture settings.
Our approach generates soft occlusion boundaries with a natural appearance which is not possible with existing techniques.
It furthermore does not rely on dense depth estimation and thus allows presenting complex scenes with non-physical visual effects.

Schedl, D. C., Birklbauer, C., and Bimber, O., Generalized Depth-of-Field Light-Field Rendering, International Conference on Computer Vision and Graphics (ICCVG), 2016 (to appear).
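The underlying shift-and-add refocusing, combined with a per-pixel focus selection, can be sketched as follows. This is a simplified illustration on synthetic data: the object mask, shift amounts, and hard compositing are assumptions, and it does not reproduce the soft occlusion boundaries of the actual method.

```python
import numpy as np

def refocus(lf, shift):
    """Shift-and-add refocusing: average sub-aperture views shifted
    proportionally to their (u, v) offset from the central view."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            acc += np.roll(lf[u, v],
                           (round(shift * (u - cu)), round(shift * (v - cv))),
                           axis=(0, 1))
    return acc / (U * V)

rng = np.random.default_rng(0)
lf = rng.random((3, 3, 16, 16))          # toy light field
near = refocus(lf, shift=1)              # focused on one depth
far = refocus(lf, shift=-1)              # focused on another depth

# Per-pixel focus mask (hypothetical object selection): the object region
# uses one focal setting, the surroundings the other.
obj = np.zeros((16, 16), bool)
obj[4:12, 4:12] = True
result = np.where(obj, near, far)
```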

Non-Uniform Spatial Deformation of Light Fields by Locally Linear Transformations

Light-field cameras offer new imaging possibilities compared to conventional digital cameras. The additional angular domain of light fields, however, prohibits direct application of frequently used image processing algorithms, such as warping, retargeting, or stitching. We present a general and efficient framework for non-uniform light-field warping that forms the basis for extending many of these image processing techniques to light fields. It propagates arbitrary spatial deformations defined in one light-field perspective consistently to all other perspectives by means of 4D patch matching instead of relying on explicit depth reconstruction. This allows processing light-field recordings of complex scenes with non-Lambertian properties such as transparency and refraction. We show application examples of our framework in panorama light-field imaging, light-field retargeting, and artistic manipulation of light fields.

Birklbauer, C., Schedl, D. C., and Bimber, O., Non-Uniform Spatial Deformation of Light Fields by Locally Linear Transformations, ACM Transactions on Graphics, 35, 5, Article 156, 2016.

Preprint (PDF, 9.3 MB, © JKU ICG 2016. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Graphics.)
Download ACM Article

Active Guidance for Light-Field Photography on Smartphones

We present a novel approach for guided light-field photography using off-the-shelf smartphones. In contrast to previous work that requires the user to decide where next to position a mobile camera, we actively compute and visualize during runtime a recommendation for the next sampling position and orientation taking into account the current camera pose and required camera alignments. This supports efficient capture of various types of large-field-of-view light fields in just a matter of minutes and without specialized camera equipment. To further reduce the overall capture time, we describe an extension of our guidance algorithm to collaborative light-field photography by small groups of users.

Birklbauer, C. and Bimber, O. Active Guidance for Light-Field Photography on Smartphones. Computers & Graphics, 53(B), 127–135, 2015.

Preprint (PDF, 13.7 MB)
Computers & Graphics Article
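A toy version of the guidance step might look like the following: among the not-yet-captured target poses on a planar sampling grid, recommend the one nearest to the current camera position. The grid, pose representation, and distance metric are illustrative assumptions; the actual algorithm also takes orientation and required camera alignments into account.

```python
import numpy as np

# Hypothetical planar capture grid of target positions (z = 0 plane).
targets = [np.array([x, y, 0.0]) for x in range(3) for y in range(3)]
captured = {0, 1}                       # indices already sampled

def next_sample(current_pos):
    """Recommend the closest remaining target position to visit next."""
    remaining = [(i, t) for i, t in enumerate(targets) if i not in captured]
    i, t = min(remaining, key=lambda it: np.linalg.norm(it[1] - current_pos))
    return i, t

idx, pos = next_sample(np.array([0.1, 0.9, 0.0]))
```

For collaborative capture, each user would simply receive the nearest remaining target from a shared pool of sampling positions.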

Directional Super-Resolution by means of Coded Sampling and Guided Upsampling

We present a simple guided super-resolution technique for increasing directional resolution without reliance on depth estimation or image correspondences. Rather, it searches for best-matching multidimensional (4D or 3D) patches within the entire captured data set to compose new directional images that are consistent in both the spatial and the directional domains. We describe algorithms for guided upsampling, iterative guided upsampling, and sampling code estimation. Our experimental results reveal that the outcomes of existing light-field camera arrays and light-stage systems can be improved without additional hardware requirements or recording effort simply by realignment of cameras or light sources to change their sampling patterns.

David Schedl, Clemens Birklbauer, and Oliver Bimber. Directional Super-Resolution by means of Coded Sampling and Guided Upsampling. IEEE International Conference on Computational Photography (ICCP), 2015.
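The patch search at the core of guided upsampling can be sketched as a brute-force best-match lookup. For brevity this sketch works on 2D patches of a single toy image; the method itself operates on 3D/4D patches of the captured data set.

```python
import numpy as np

def best_patch(target, source, psize=4):
    """Exhaustively search `source` for the patch with minimal SSD
    (sum of squared differences) to `target`."""
    h, w = source.shape
    best, best_err = None, np.inf
    for y in range(h - psize + 1):
        for x in range(w - psize + 1):
            cand = source[y:y + psize, x:x + psize]
            err = np.sum((cand - target) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

rng = np.random.default_rng(0)
source = rng.random((20, 20))
target = source[5:9, 7:11].copy()     # a patch known to exist in the source
match, err = best_patch(target, source)
```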

Panorama Light-Field Imaging

We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or naive multi-perspective image stitching, our approach is the first that processes ray entries directly and does not require depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond diffuse reflections of Lambertian surfaces.

Birklbauer, C. and Bimber, O., Panorama Light-Field Imaging, In proceedings of Eurographics (Computer Graphics Forum), 33(2), 43-52, 2014

Source Code

Coded Exposure HDR Light-Field Video Recording

Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.

Schedl, D. C., Birklbauer, C. and Bimber, O., Coded Exposure HDR Light-Field Video Recording, In proceedings of Eurographics (Computer Graphics Forum), 33(2), 33-42, 2014
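A spatio-temporal exposure pattern of this kind can be sketched as a simple cyclic assignment: every video frame covers all exposure times, with the assignment rotated across the cameras from frame to frame. The array size and exposure values below are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical 2x2 camera array cycling through four exposure times so that
# each frame contains all exposures, each at a different perspective.
exposures_ms = [1, 4, 16, 64]
cams = [(r, c) for r in range(2) for c in range(2)]

def exposure_pattern(frame):
    """Rotate the exposure assignment by one camera each frame."""
    return {cam: exposures_ms[(i + frame) % 4] for i, cam in enumerate(cams)}

f0 = exposure_pattern(0)
f1 = exposure_pattern(1)
```

Because every frame contains all exposure levels (just at different perspectives), an HDR light-field frame can be assembled without waiting for a full per-camera exposure sequence.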

Coded Exposure HDR Light-Field Video Recording

Capturing exposure sequences for computing HDR images is prone to motion blur, which also affects HDR light-field recording.
We record four exposures encoded at varying camera perspectives and deblur long exposure recordings by tracking features in low exposure recordings.
This reduces motion blur and leads to shorter recording intervals.

Schedl, D. C., Birklbauer, C. and Bimber, O., Coded Exposure HDR Light-Field Video Recording, ACM Siggraph (poster+talk), 2013

Rendering Gigaray Light Fields

We present a caching framework with a novel probability-based prefetching and eviction strategy applied to atomic cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate parameterization scheme for real-time rendering and caching; and light-field-cached volume rendering, which supports interactive exploration of large volumetric datasets using light-field rendering. We consider applications such as light-field photography and the visualization of large image stacks from modern scanning microscopes.

Birklbauer, C., Opelt, S. and Bimber, O., Rendering Gigaray Light Fields, In proceedings of Eurographics (Computer Graphics Forum), 32(2), 469-478, 2013

Source Code LFC2013
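The eviction side of such a cache can be sketched with a toy policy that approximates access probability by observed access frequency. This is a simplified, hypothetical stand-in for the probability-based prefetching and eviction strategy described in the paper; class and method names are illustrative.

```python
from collections import defaultdict

class LightFieldCache:
    """Toy cache over atomic cache units ("tiles"): evicts the tile with
    the lowest estimated access probability, here approximated by its
    observed access frequency."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = {}                 # tile id -> cached data
        self.hits = defaultdict(int)    # access counts as probability proxy

    def access(self, tile_id, loader):
        self.hits[tile_id] += 1
        if tile_id not in self.tiles:
            if len(self.tiles) >= self.capacity:
                victim = min(self.tiles, key=lambda t: self.hits[t])
                del self.tiles[victim]
            self.tiles[tile_id] = loader(tile_id)
        return self.tiles[tile_id]

cache = LightFieldCache(capacity=2)
load = lambda t: f"tile-{t}"
cache.access("A", load); cache.access("A", load)
cache.access("B", load)
cache.access("C", load)        # cache full: evicts "B" (fewest accesses)
```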

Panorama Light-Field Imaging

With the increasing resolution of imaging sensors, light-field photography is becoming increasingly practical, and the first light-field cameras are already commercially available (e.g., Lytro, Raytrix, and others). Applying common digital image processing techniques to light fields, however, is in many cases not straightforward. The reason is that the outcome must be not only spatially consistent but also directionally consistent; otherwise, refocusing and perspective changes will cause strong image artifacts. Panorama imaging techniques, for example, are an integral part of digital photography, often supported by camera hardware today. We present a first approach towards the construction of panorama light fields (i.e., large field-of-view light fields computed from overlapping sub-light-field recordings).
By converting overlapping sub-light-fields into individual focal stacks, computing a panorama focal stack from them, and converting the panorama focal stack back into a panorama light field, we avoid the demand for a precise reconstruction of scene depth.

Birklbauer, C. and Bimber, O., Panorama Light-Field Imaging, ACM Siggraph (poster+talk), 2012

Light-Field Supported Fast Volume Rendering

Advances in imaging technology lead to a continuous increase in the size of image data sets. Modern scanning microscopes, for instance, produce image stacks with megapixel lateral resolution and many hundreds to thousands of slices in the axial direction. This trend will continue, resulting in very large volumetric data sets that are difficult to explore interactively, since the complexity of volume rendering is proportional to the spatial and lateral resolution of the data. Light-field rendering is a fast and simple image-based rendering method that requires pre-computed image data. In volume rendering, each costly computed image is discarded after the viewing parameters change, while the renderer becomes idle if the viewing parameters do not change and the visualization need not be updated. We combine light-field rendering and volume rendering with two goals: we recycle previously rendered images, and we use the idle times to fill a cache-managed light field. The final images are then composed from light-field rendering and volume rendering, depending on the state of the light-field cache. This leads to a significant increase in rendering performance and enables interactive exploration of large volumetric data sets.

Birklbauer, C. and Bimber, O., Light-Field Supported Fast Volume Rendering, ACM Siggraph (poster), 2012

Light-Field Retargeting

We present a first approach to light-field retargeting using z-stack seam carving, which allows light-field compression and extension while retaining angular consistency. Our algorithm first converts an input light field into a set of perspective-sheared focal stacks. It then applies 3D deconvolution to convert the focal stacks into z-stacks, and seam-carves the z-stack of the center perspective. The computed seams of the center perspective are sheared and applied to the z-stacks of all off-center perspectives. Finally, the carved z-stacks are converted back into the perspective images of the output light field. To our knowledge, this is the first approach to light-field retargeting. Unlike existing stereo-pair retargeting or 3D retargeting techniques, it does not require depth information.

Birklbauer, C. and Bimber, O., Light-Field Retargeting, In proceedings of Eurographics (Computer Graphics Forum), 31(2), 295-303, 2012
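Seam carving itself, the building block of the method, can be sketched as a dynamic program over an energy map; below, the same seam is then applied to every slice of a toy z-stack, loosely mirroring how the seams computed for the center perspective are propagated. The data and energy values are synthetic assumptions.

```python
import numpy as np

def vertical_seam(energy):
    """Dynamic-programming minimum-energy vertical seam (one column per row)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):                       # accumulate path costs
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]           # backtrack from the bottom
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]

def remove_seam(img, seam):
    return np.array([np.delete(row, x) for row, x in zip(img, seam)])

rng = np.random.default_rng(0)
energy = rng.random((8, 10))
energy[:, 3] = 0.0                       # cheap column the seam should follow
seam = vertical_seam(energy)
zstack = rng.random((5, 8, 10))          # toy z-stack: 5 depth slices
carved = np.stack([remove_seam(s, seam) for s in zstack])
```

Applying one seam consistently to all slices (and, in the actual method, its sheared versions to all perspectives) is what preserves angular consistency after resizing.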

Light-Field Caching

With the continuously increasing sensor resolutions of cameras, light-field imaging is becoming a more and more practical extension to conventional digital photography. It complements postprocessing by synthetic aperture control, refocusing, and perspective and field-of-view changes. To be a true alternative to classical 2D imaging, however, the spatial resolution of light fields must be of the same megapixel order as the resolution of today's digital images. The additional angular resolution must also be adequately high to prevent sampling artifacts (in particular for synthetic refocusing). This quickly leads to gigabytes rather than megabytes of data that have to be rendered with limited graphics memory. We describe a light-field caching framework that makes it possible to render very large light fields in real time.

Opelt, S. and Bimber, O., Light-Field Caching, ACM Siggraph (poster), 2011

Light-Field Retargeting with Focal Stack Seam Carving

With increasing sensor resolutions of digital cameras, light-field imaging is becoming more and more relevant, and might sooner or later even replace classical 2D imaging in photography. It enables, for instance, digital refocusing and perspective changes after capturing. Rescaling light fields to different resolutions and aspect ratios, however, is challenging. As for regular image and video content, a linear scaling alters the aspect ratio of recorded objects in an unnatural way. In contrast, image and video retargeting utilizes a nonlinear, content-based scaling. Applying image retargeting to individual video frames independently does not retain temporal consistency. Similarly, applying image retargeting naively to the spatial domain of light fields will not retain angular consistency. We present a first approach to light-field retargeting that allows compressing or stretching light fields while retaining angular consistency.

Birklbauer, C. and Bimber, O., Light-Field Retargeting with Focal Stack Seam Carving, ACM Siggraph (poster), 2011