Voxelizing Light-Field Recordings
Light fields are an emerging image-based technique that supports free-viewpoint navigation of recorded scenes, as demanded by several recent applications (e.g., Virtual Reality). Pure image-based representations, however, quickly become inefficient, as a large number of images must be captured, stored, and processed. Geometric scene representations require less storage and are more efficient to render. Geometry reconstruction, however, is unreliable and might fail for complex scene parts. Furthermore, view-dependent effects that are preserved by light fields are lost in purely geometry-based techniques. We therefore propose a hybrid representation and rendering scheme for recorded dense light fields: we extract isotropic scene regions and represent them by voxels, while the remaining areas are represented as a sparse light field. Compared to dense light fields, storage demands are reduced while visual quality is sustained.
The goal of this project is to (a) develop analysis techniques that efficiently separate light-field recordings into a voxel-based and an image-based representation, (b) investigate efficient data structures for sparse light-field representations (sparse volume-data representations already exist), and (c) implement real-time hybrid (voxel/light-field) rendering techniques. These three sub-projects (a, b, c) can be carried out independently by different students.
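To illustrate the kind of analysis meant by sub-project (a), the sketch below classifies each spatial sample of a synthetic dense light field as isotropic (low color variance across viewpoints, hence a voxel candidate) or view-dependent (kept in the sparse light field). The angular-variance test, the threshold value, and the synthetic data are illustrative assumptions, not the project's prescribed method.

```python
import numpy as np

# Synthetic dense light field: (views_u, views_v, height, width, rgb).
rng = np.random.default_rng(0)
u, v, h, w = 5, 5, 8, 8
lf = np.broadcast_to(rng.random((1, 1, h, w, 3)), (u, v, h, w, 3)).copy()
# Make the right half view-dependent (specular-like) by adding per-view noise.
lf[:, :, :, w // 2:, :] += rng.normal(0.0, 0.2, (u, v, h, w // 2, 3))

# Color variance across the angular (view) axes, averaged over channels.
ang_var = lf.var(axis=(0, 1)).mean(axis=-1)   # shape (h, w)

THRESH = 1e-3                                  # assumed tolerance, scene-dependent
isotropic_mask = ang_var < THRESH              # True -> voxel candidate
# The left half is identical in all views (variance 0), so it is marked
# isotropic; the noisy right half stays in the image-based representation.
```

A real implementation would operate on regions rather than single samples and would have to account for occlusion and calibration noise, but the separation criterion (angular color variance) stays the same.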