Advanced Real-Time Rendering
Karol Myszkowski & Tobias Ritschel
Efficient simulation of indirect illumination
The creation of photorealistic images using computers is a basic technology required in many types of visual media. Although initially used only for special effects, rendering entire films with computers is common nowadays. However, the required realism comes at high computational cost, and as a result, synthesizing a single frame of a motion picture can typically take several hours. At the same time, interactive real-time rendering has become part of our everyday lives: in computer games, in geo-visualization on a cell phone, or in interactive kitchen planners on desktop computers. Here, the required images are produced instantaneously from a user's input. To achieve this performance, severe simplifications had to be made, which led to the development of highly specialized graphics hardware (GPUs). In our work, we aim to fill the gap between highly realistic offline rendering and fast interactive rendering.
In particular, our approaches were the first to allow interactive simulation of indirect illumination in dynamic scenes. Most previous interactive techniques assume direct illumination, where the light is emitted from point light sources and reflected once inside the scene before arriving at the observer. This simplification, however, rarely holds in nature: many visual effects, such as the appearance of materials or the understanding of spatial configurations, are known to be affected by indirect light. To achieve efficient indirect illumination, our approaches make assumptions that differ from classic numerical approaches (finite-element radiosity, ray tracing), first, by fitting the demands of parallel graphics hardware and, second, by including knowledge of the human visual system.
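To make the distinction concrete, the sketch below contrasts the two illumination models. The virtual-point-light (VPL) approximation of indirect light is one common GPU-friendly strategy and stands in here purely as an illustration, not as our exact method; all names are hypothetical and visibility (shadowing) is omitted for brevity.

```cuda
// Sketch: direct illumination is one bounce from a point light; indirect
// illumination adds further bounces, here approximated by virtual point
// lights (VPLs) placed where the primary light hits the scene.
#include <cuda_runtime.h>

struct Light { float3 pos; float3 power; };

__device__ float3 sub (float3 a, float3 b){ return make_float3(a.x-b.x, a.y-b.y, a.z-b.z); }
__device__ float3 add3(float3 a, float3 b){ return make_float3(a.x+b.x, a.y+b.y, a.z+b.z); }
__device__ float3 mul (float3 a, float3 b){ return make_float3(a.x*b.x, a.y*b.y, a.z*b.z); }
__device__ float3 scale(float3 a, float s){ return make_float3(a.x*s, a.y*s, a.z*s); }
__device__ float  dot3(float3 a, float3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Direct light: Lambertian term with inverse-square falloff (visibility omitted).
__device__ float3 shadeDirect(float3 p, float3 n, float3 albedo, Light l)
{
    float3 d  = sub(l.pos, p);
    float  r2 = fmaxf(dot3(d, d), 1e-6f);
    float  c  = fmaxf(dot3(n, scale(d, rsqrtf(r2))), 0.0f);
    return mul(albedo, scale(l.power, c / r2));
}

// Indirect light: treat each VPL as a small light source and accumulate.
__device__ float3 shadeIndirect(float3 p, float3 n, float3 albedo,
                                const Light* vpls, int numVpls)
{
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < numVpls; ++i)
        sum = add3(sum, shadeDirect(p, n, albedo, vpls[i]));
    return sum;
}
```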
Interactive simulation of a deforming piece of cloth.
In order to make efficient use of existing graphics hardware, our approaches employ existing and optimized hardware functionality such as the drawing of points or the application of local image filters. Such operations are executed across tens of thousands of parallel threads on modern hardware; by keeping the underlying cores busy, quality or performance can be improved by orders of magnitude. In particular, the resulting approaches allow the simulation of dynamic scenes in real time.
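As an illustration of such a hardware-friendly operation (a sketch, not our production code), the CUDA kernel below applies a local 3x3 image filter with one thread per pixel, which is exactly the access pattern that keeps many cores busy simultaneously.

```cuda
// Sketch: a local image filter, one thread per pixel. Kernel and buffer
// names are illustrative.
#include <cuda_runtime.h>

__global__ void boxFilter3x3(const float* in, float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Average the 3x3 neighborhood, clamping at the image border.
    float sum = 0.0f; int count = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                sum += in[sy * w + sx];
                ++count;
            }
        }
    out[y * w + x] = sum / count;
}

// Host-side launch: one thread per pixel.
// boxFilter3x3<<<dim3((w+15)/16, (h+15)/16), dim3(16,16)>>>(d_in, d_out, w, h);
```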
Our approach can compute results similar to a reference (left, hours of computation time) orders of magnitude faster (right, seconds of computation time).
In order to compute only what a human observer perceives, our work incorporates knowledge about the human visual system and extends it through independent perceptual studies; for example, it was previously unknown how humans perceive indirect shadows. Our studies of this matter have shown that such shadows are perceived as largely smooth, which allows us to make strong simplifications and leads to improved performance.
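One way such a simplification could look in practice (an assumed pipeline, sketched for illustration, not the published method): compute the indirect shadow term at a fraction of the screen resolution and upsample it, which is adequate precisely because the signal is perceived as smooth.

```cuda
// Sketch: bilinearly upsample a low-resolution indirect shadow buffer to
// full resolution. All names are illustrative.
#include <cuda_runtime.h>

__global__ void upsampleShadow(const float* lowRes, int lw, int lh,
                               float* highRes, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Map the full-resolution pixel into the low-resolution grid.
    float fx = (x + 0.5f) * lw / w - 0.5f;
    float fy = (y + 0.5f) * lh / h - 0.5f;
    int x0 = max(0, min(lw - 2, (int)floorf(fx)));
    int y0 = max(0, min(lh - 2, (int)floorf(fy)));
    float tx = fminf(fmaxf(fx - x0, 0.0f), 1.0f);
    float ty = fminf(fmaxf(fy - y0, 0.0f), 1.0f);

    // Bilinear interpolation suffices because indirect shadows are smooth.
    float s00 = lowRes[ y0      * lw + x0], s10 = lowRes[ y0      * lw + x0 + 1];
    float s01 = lowRes[(y0 + 1) * lw + x0], s11 = lowRes[(y0 + 1) * lw + x0 + 1];
    float top = s00 + tx * (s10 - s00);
    float bot = s01 + tx * (s11 - s01);
    highRes[y * w + x] = top + ty * (bot - top);
}
```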
Recently, the long-standing idea of performing expensive computations in a computational “cloud”, which then streams its results to client machines such as cell phones, has received renewed interest. In our work, we consider streaming approaches that fit best to modern graphics hardware and human perception. First, 3D information is encoded on the server side with human perception in mind, e.g., by faithfully encoding luminance and depth edges. Next, existing graphics hardware functions are used to efficiently extrapolate information on the client side.
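The server-side edge encoding could, for instance, start from a depth-discontinuity mask as in the sketch below (an assumption for illustration; the threshold and buffer names are hypothetical): pixels on depth edges are encoded faithfully, while smooth regions are left to client-side extrapolation.

```cuda
// Sketch: flag depth edges so they can be encoded faithfully, leaving
// smooth regions for the client to extrapolate.
#include <cuda_runtime.h>

__global__ void flagDepthEdges(const float* depth, unsigned char* edgeMask,
                               int w, int h, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Compare against the 4-neighborhood; large jumps indicate depth edges.
    float d = depth[y * w + x];
    float maxDiff = 0.0f;
    if (x > 0)     maxDiff = fmaxf(maxDiff, fabsf(d - depth[y * w + x - 1]));
    if (x < w - 1) maxDiff = fmaxf(maxDiff, fabsf(d - depth[y * w + x + 1]));
    if (y > 0)     maxDiff = fmaxf(maxDiff, fabsf(d - depth[(y - 1) * w + x]));
    if (y < h - 1) maxDiff = fmaxf(maxDiff, fabsf(d - depth[(y + 1) * w + x]));

    edgeMask[y * w + x] = (maxDiff > threshold) ? 1 : 0;
}
```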
Karol Myszkowski
DEPT. 4 Computer Graphics
Phone +49 681 9325-4029
Email karol@mpi-inf.mpg.de
Tobias Ritschel
DEPT. 4 Computer Graphics
Phone +49 681 9325-4041
Email ritschel@mpi-inf.mpg.de