Real-Time Systems for Computational Photography and Illumination

Recently, quality camera systems with high frame rates have become commercially available. Combined with the ever-increasing processing speeds of graphics processing units (GPUs), which enable real-time computation on optical data in image space, it is now possible to drastically improve the quality of traditional applications and to realize novel visual interactions with real-world scenes.

Fast Processing of High-Speed Video Data Streams


By appropriately choosing the filter on the recording side, the visual effect of movement can be reduced (left) or enhanced (right) at will.

Digital camera technology has made considerable progress in recent years, especially with regard to achievable frame rates. Even some consumer cameras can record (at least in brief bursts) video sequences at more than 1000 frames per second. While projectors and display screens have also become faster, their development in this regard has been slower, and the gap between available recording and display speeds keeps widening. Storing the data at the original frame rate for later consumption is therefore not economical. This calls for techniques that process fast video data in real time, converting it into video streams with standard temporal sampling rates while exploiting the initial high frame rate to create footage that exceeds the quality of conventional recordings.

A prototype setup combining a fast camera with GPU-based processing [Fuchs et al. 2010 CAG, 2009 VMV] achieves this by means of temporal super-sampling. A software filter, which can be adapted to the characteristics of the expected display device, controls the smoothness of the result (see the demonstration video on the project web site) and the perceived intensity of motion; artifacts such as the wagon-wheel illusion and flickering, which are typical of conventional sampling techniques when fast motions are observed, can largely be suppressed. A Fourier-space analysis additionally enables non-photorealistic effects, such as the removal of static scene components.
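The core of such a temporal super-sampling stage can be sketched in a few lines. The following Python snippet is a simplified illustration, not the authors' GPU implementation: it assumes the high-speed camera delivers blocks of consecutive frames as NumPy arrays (the function name and input format are hypothetical) and collapses each block into one display frame using a programmable temporal weighting kernel. A box kernel yields maximal motion blur (a smooth appearance), while a narrow Gaussian preserves crisp motion; subtracting the per-pixel temporal mean, i.e. the DC term of the temporal Fourier spectrum, removes static scene content as mentioned above.

```python
import numpy as np

def temporal_supersample(block, kernel, remove_static=False):
    """Collapse one block of high-speed frames into a single output frame.

    block         : (n, h, w) array of consecutive high-frame-rate frames
                    (hypothetical input format)
    kernel        : (n,) temporal filter weights; their shape controls how
                    smooth or crisp motion appears in the result
    remove_static : subtract the per-pixel temporal mean (the DC term of
                    the temporal Fourier spectrum), suppressing static
                    scene content so that only motion remains
    """
    frames = block.astype(np.float32)
    if remove_static:
        frames -= frames.mean(axis=0, keepdims=True)
    w = np.asarray(kernel, dtype=np.float32)
    if w.sum() != 0:
        w = w / w.sum()                    # preserve overall brightness
    # Weighted temporal average: one display frame per input block.
    return np.tensordot(w, frames, axes=(0, 0))

# Example: 1000 fps input, 25 fps output -> blocks of n = 40 frames.
n = 40
box = np.ones(n)                           # full motion blur (smooth)
t = np.linspace(-1.0, 1.0, n)
gauss = np.exp(-0.5 * (t / 0.15) ** 2)     # narrow kernel: crisper motion
```

Note that with remove_static=True the result is signed and has to be offset or rescaled before it can be displayed.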


In-Situ Visualization of Image-Space Scene Analysis


A portable camera and projector setup visualizes scene edges in situ (connected computer not shown).

Surface details that are invisible to the naked human eye can sometimes be made visible easily with image-processing techniques such as edge detection. Powerful GPUs can perform these computations in real time; however, if the results are visualized on a standard screen, the interactive experience remains incomplete, because the user has to split his or her attention between the investigated object and the screen.
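As a minimal illustration of the kind of image-space analysis involved (a sketch assuming an OpenCV-based pipeline, not the system's actual GPU code), the following loop runs edge detection on a live camera stream at interactive rates:

```python
import cv2

# Minimal live edge-detection loop; the actual system performs this
# filtering on the GPU for higher throughput.
cap = cv2.VideoCapture(0)                  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)       # thresholds chosen ad hoc
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == 27:        # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```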

A context-aware light source [Wang et al. 2010, ICCP] can bridge this interaction gap: a simple geometric configuration combines a digital projector serving as a programmable light source with a digital camera acting as a scene sensor. Stream processing of the recorded video on a connected GPU detects scene features in image space, and by appropriately controlling how the projector illuminates the scene, the features are highlighted in situ in the observed scene.
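Closing the loop from detection to illumination requires mapping camera pixels to projector pixels. For an approximately planar scene this mapping can be modeled by a homography obtained once during calibration; the sketch below (a simplified stand-in for the published pipeline, with hypothetical resolution values) warps a detected edge mask into projector space, so that displaying the result full screen on the projector lights up exactly the edges the camera observed.

```python
import cv2
import numpy as np

PROJ_W, PROJ_H = 1280, 800   # hypothetical projector resolution

def edge_highlight_image(edges, H):
    """Map a camera-space edge mask into a projector illumination image.

    edges : binary edge mask in camera coordinates (e.g. from the
            detection loop above)
    H     : 3x3 camera-to-projector homography, estimated once during
            calibration (e.g. with cv2.findHomography from point
            correspondences; valid for approximately planar scenes)
    """
    # Warp the mask from camera pixels into projector pixels.
    proj = cv2.warpPerspective(edges, H, (PROJ_W, PROJ_H))
    # Bright light on the detected edges, dim base illumination elsewhere.
    return np.where(proj > 0, 255, 40).astype(np.uint8)
```

For general, non-planar scenes a single homography no longer suffices; a per-pixel camera-to-projector mapping, obtained for instance via structured-light calibration, would take its place.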


Challenges

Future challenges for real-time systems in computational photography and illumination lie in integrating the optical setups into mobile devices, and in implementing powerful signal-processing filters on the reduced-power GPUs contained within.
