
Research

The Institute of Computer Graphics carries out research in the modern field that has become known as "visual computing". Our core disciplines cover the imaging, processing, visualization, and display of visual data, enabled by emerging fields and technologies such as light fields, projector-camera systems, responsive optics, mobile computing, and visual analytics.

Please select one of the following topics or years for detailed information.

2008

Phone-to-Phone Communication for Adaptive Image Classification


We present a novel technique for adapting local image classifiers that are applied for object recognition on mobile phones through ad-hoc network communication between the devices. By continuously accumulating and exchanging collected user feedback among devices that are located within signal range, we show that our approach improves the overall classification rate and adapts quickly to dynamic changes. This technique is applied in the context of our PhoneGuide system, a mobile-phone-based museum guidance framework that combines pervasive tracking and local object recognition for identifying a large number of objects in uncontrolled museum environments. We explain a technique that distributes the user feedback information during runtime through ad-hoc network connections between local devices. By doing so, we enable cooperative classification improvements during the visitors' actual stay. The general functionality of our technique has been tested with a small number of real devices in a museum. To prove its scalability, however, we have developed a simulator that evaluates our method for many hundreds of devices under several conditions. The simulation parameters have all been gathered in a museum and are therefore realistic. We show that ad-hoc phone-to-phone synchronization not only leads to higher overall classification rates, but also to quicker adaptation to dynamic changes during runtime.

Bruns, E. and Bimber, O., Phone-to-Phone Communication for Adaptive Image Classification, In proceedings of the International Conference on Advances in Mobile Computing & Multimedia (MoMM2008), pp. 276-281, 2008
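
The device-to-device feedback exchange described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration (the names and data layout are ours, not PhoneGuide's): each phone accumulates per-object feedback counts, merges the counts it receives from peers in signal range, and uses the merged statistics to re-weight its local classifier's predictions.

```python
from collections import defaultdict

class FeedbackStore:
    """Per-device store of user feedback for object classification.

    counts[predicted_id][confirmed_id] records how often users confirmed
    (or corrected) a prediction. Hypothetical sketch; names are not
    taken from the PhoneGuide implementation.
    """

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def add_feedback(self, predicted_id, confirmed_id):
        # One visitor confirmed or corrected a classification result.
        self.counts[predicted_id][confirmed_id] += 1

    def merge(self, peer_counts):
        # Ad-hoc synchronization: fold in counts received from a peer.
        for pred, labels in peer_counts.items():
            for label, n in labels.items():
                self.counts[pred][label] += n

    def posterior(self, predicted_id):
        # Relative frequency of each true object given a prediction,
        # used to re-weight the local classifier's output.
        labels = self.counts[predicted_id]
        total = sum(labels.values()) or 1
        return {label: n / total for label, n in labels.items()}


# Two devices within signal range exchange accumulated feedback.
a, b = FeedbackStore(), FeedbackStore()
a.add_feedback(predicted_id=3, confirmed_id=3)  # correct recognition
b.add_feedback(predicted_id=3, confirmed_id=7)  # user corrected 3 -> 7
a.merge(b.counts)
print(a.posterior(3))  # {3: 0.5, 7: 0.5}
```

Because the counts are additive and commutative, merging is order-independent, which suits opportunistic ad-hoc connections between passing visitors.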

VirtualStudio2Go: Digital Video Composition for Real Environments


We synchronize film cameras and LED lighting with off-the-shelf video projectors. Radiometric compensation allows displaying keying patterns and other spatial codes on arbitrary real-world surfaces. Fast temporal multiplexing of coded projection and flash illumination enables professional keying, environment matting, the display of moderator information, scene reconstruction, and camera tracking for non-studio film sets, without being limited to the constraints of a virtual studio. The reconstruction of the scene geometry allows special composition effects, such as shadow casting, occlusions, and reflections. This makes digital video composition more flexible, since static studio equipment such as blue screens, teleprompters, or tracking devices is not required. Authentic film locations can be supported with our portable system without much installation effort. We propose a concept that combines all of these techniques into a single compact system that is fully compatible with common digital video composition pipelines and offers immediate plug-and-play applicability.

Grundhoefer, A. and Bimber, O., VirtualStudio2Go: Digital Video Composition for Real Environments, In proceedings of ACM Siggraph Asia (ACM Transactions on Graphics), vol. 27, no. 5, article 151, 2008
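
The radiometric compensation step, which makes keying patterns displayable on arbitrarily colored surfaces, follows a standard per-pixel light-transport model. The sketch below is a deliberately simplified, single-channel illustration under an assumed linear model, not the calibrated pipeline of the actual system:

```python
import numpy as np

def compensate(desired, reflectance, env_light, form_factor=1.0):
    """Single-channel radiometric compensation (simplified model).

    Assumes the common linear model
        observed = env_light * reflectance + projected * form_factor * reflectance
    and solves for the projector input that produces `desired`.
    All images are float arrays in [0, 1]; this is an illustration,
    not the system's calibrated pipeline.
    """
    eps = 1e-6  # guard against division by zero on black surface pixels
    projected = (desired - env_light * reflectance) / (form_factor * reflectance + eps)
    # Clip to the projector's displayable range; clipped pixels mark
    # surface regions too dark (or ambient light too bright) to compensate.
    return np.clip(projected, 0.0, 1.0)

# Display a uniform keying intensity on a non-uniform real surface.
reflectance = np.array([[0.9, 0.4], [0.7, 0.2]])  # scanned surface albedo
env = np.full((2, 2), 0.05)                       # ambient contribution
key = np.full((2, 2), 0.5)                        # desired keying pattern value
print(compensate(key, reflectance, env))
```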

Superimposing Dynamic Range


We present a simple and cost-efficient way of extending the contrast, perceived tonal resolution, and color space of reflective media, such as paper prints, hardcopy photographs, or electronic paper displays. A calibrated projector-camera system is applied for automatic registration, radiometric scanning, and superimposition. A second modulation of the projected light on the surface of such media results in a high dynamic range visualization. This holds application potential for a variety of domains, such as radiology, astronomy, optical microscopy, conservation and restoration of historic art, and modern art and entertainment installations. In our experiments, we achieved contrast ratios of up to 45,000-60,000:1 with a peak luminance of more than 2,750 cd/m^2, and could technically reproduce more than 620 perceptually distinguishable tonal values. Furthermore, we attained color space extensions of up to a factor of 1.4 (compared to a regular projection on white screens) or a factor of 3.3 (compared to regular paper prints under environment light). Thereby, the hardcopy resolution can be several thousand DPI or several hundred LPI, while luminance and chrominance are modulated with a registration error of less than 0.3 mm. Thus, compared with most existing interactive HDR displays, we support near-distance viewing at a contrast frequency of up to 7 cpd (given our current registration precision and assuming a viewing distance of 50 cm).

Bimber, O. and Iwai, D., Superimposing Dynamic Range, In proceedings of ACM Siggraph Asia (ACM Transactions on Graphics), vol. 27, no. 5, article 150, 2008

Bimber, O. and Iwai, D., Superimposing Dynamic Range, Siggraph'08 (NewTech Demo Abstract), 2008

Grundhoefer, A. and Bimber, O., Superimposing Dynamic Range, Siggraph'08 (Poster+Talk), 2008
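
The reported contrast ratios reflect the multiplicative nature of double modulation: the projector's contrast multiplies with the hardcopy's, so, for example, a 2,000:1 projector over a 30:1 print can in principle reach 60,000:1. Below is a minimal sketch of the image factorization, assuming a simple square-root luminance split rather than the calibrated scanning and registration described above:

```python
import numpy as np

def split_hdr(target_luminance):
    """Factor a target HDR luminance image into two low-dynamic-range
    modulators: a hardcopy reflectance layer and a registered projector
    layer. The square-root split (each layer carries half of the log
    dynamic range) is illustrative only; the actual system derives both
    layers from calibrated radiometric scans.
    """
    t = np.asarray(target_luminance, dtype=float)
    t = t / t.max()                    # normalize the target to [0, 1]
    print_layer = np.sqrt(t)           # reflectance of the hardcopy
    proj_layer = np.divide(t, print_layer,
                           out=np.zeros_like(t), where=print_layer > 0)
    return print_layer, proj_layer

# Contrast multiplies: two ~256:1 layers reconstruct a ~65,000:1 ramp.
target = np.geomspace(1.0, 65536.0, num=8)  # synthetic HDR luminance ramp
p, q = split_hdr(target)
print(p * q * target.max())                 # recovers the original ramp
```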

Adaptive Training of Video Sets for Image Recognition on Mobile Phones


We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system for improving data acquisition and for supporting scale-invariant object recognition. Both a static and a dynamic training technique are presented, which preprocess the collected object data differently and apply two types of neural networks for classification. Furthermore, the system enables temporal adaptation, ensuring continuous data acquisition to improve the recognition rate over time. A formal field experiment reveals current recognition rates and indicates the practicability of both methods under realistic conditions in a museum.

Bruns, E. and Bimber, O., Adaptive Training of Video Sets for Image Recognition on Mobile Phones, In Personal and Ubiquitous Computing, vol. 13, no. 2, pp. 165-178, 2009 (submitted: June 2007, accepted: February 2008)
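
The temporal adaptation loop can be pictured as an incremental classifier that folds newly confirmed visitor captures back into its model. The nearest-centroid stand-in below is purely illustrative; the actual system trains neural networks on features extracted from the video sets:

```python
import numpy as np

class AdaptiveExhibitClassifier:
    """Illustrative stand-in for the adaptive recognition loop: a
    running-mean (nearest-centroid) classifier over per-frame feature
    vectors, updated whenever confirmed captures arrive from client
    devices. The real system uses neural networks instead.
    """

    def __init__(self, feature_dim):
        self.centroids = {}  # exhibit id -> mean feature vector
        self.counts = {}     # exhibit id -> number of samples folded in
        self.dim = feature_dim

    def update(self, exhibit_id, feature):
        # Temporal adaptation: fold one confirmed sample into the model.
        f = np.asarray(feature, dtype=float)
        n = self.counts.get(exhibit_id, 0)
        c = self.centroids.get(exhibit_id, np.zeros(self.dim))
        self.centroids[exhibit_id] = (c * n + f) / (n + 1)
        self.counts[exhibit_id] = n + 1

    def classify(self, feature):
        # Return the exhibit whose centroid is closest to the query.
        f = np.asarray(feature, dtype=float)
        return min(self.centroids,
                   key=lambda k: float(np.linalg.norm(self.centroids[k] - f)))

clf = AdaptiveExhibitClassifier(feature_dim=3)
clf.update("mummy", [0.8, 0.1, 0.1])   # frames from the training video
clf.update("statue", [0.2, 0.2, 0.7])
print(clf.classify([0.7, 0.2, 0.1]))   # -> mummy
```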

