Algorithmic Improvements for Stochastic Rasterization & Depth Buffering

Abstract: The field of computer graphics concerns the use of computers to generate realistic-looking images of virtual scenes. Graphics processing units use an algorithm known as rasterization to compute images of scenes viewed from a virtual camera. The commonly used pinhole camera model does not account for the imperfections that stem from the physical limitations of real-world cameras, such as motion blur and defocus blur. These two phenomena can be captured using stochastic rasterization, an algorithm that extends conventional rasterization to handle moving and out-of-focus objects. In this approach, the virtual scene is sampled at different instants in time and along different paths through the camera lens system. However, the extended functionality comes at a higher computational cost and consumes much more memory bandwidth, largely because of increased traffic to the depth buffer.

The focus of the six papers included in this thesis is threefold. First, we have explored ways to reduce the high memory bandwidth consumption inherent in depth buffering, targeting both conventional and stochastic rasterization. We have evaluated a number of hardware changes, including novel compression schemes and cache improvements, which efficiently reduce memory bandwidth usage. We also propose a hardware-friendly algorithm that reduces the pressure on the depth buffering system by culling unnecessary work early in the pipeline.

Second, we propose an algorithm that reduces shading computations for stochastic rasterization. In our approach, we decouple shading and visibility determination into two separate passes: the surface color is sparsely evaluated in the first pass and efficiently reused in the second pass, when rendering from the camera.
The two-pass approach allows us to adaptively adjust the shading rate based on the amount of blur resulting from motion and defocus effects, which greatly reduces rendering times.

Third, we propose a real-time algorithm for rendering shadows cast by objects in motion. Due to the complicated interplay between moving objects, moving light sources, and a moving camera, rendering motion-blurred shadows is an especially difficult problem. Using our algorithm, high-quality, smooth shadows can be achieved on conventional graphics processors. Collectively, I believe that our research is a significant step forward for rendering scenes with motion and/or defocus blur, both in terms of quality and performance.
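To make the idea of stochastic sampling concrete, the following is a minimal sketch (not the thesis's actual implementation) of how one visibility sample can be drawn for a moving, defocused point. The thin-lens model, the square lens domain, and all function names here are illustrative assumptions.

```python
import random

def circle_of_confusion(z, focal_z, aperture):
    """Screen-space blur radius under a simplified thin-lens model:
    zero at the focal plane, growing with distance from it.
    'aperture' scales the amount of defocus blur (illustrative)."""
    return aperture * abs(z - focal_z) / z

def stochastic_sample(p0, p1, focal_z, aperture, rng=random):
    """One visibility sample of a point moving from p0 to p1 during
    the exposure. p0 and p1 are (x, y, z) positions at shutter
    open and shutter close, respectively."""
    t = rng.random()              # random instant within the shutter interval
    u = rng.uniform(-1.0, 1.0)    # random position on the lens
    v = rng.uniform(-1.0, 1.0)    # (square lens kept for brevity)
    # Interpolate the motion in time...
    x = p0[0] + t * (p1[0] - p0[0])
    y = p0[1] + t * (p1[1] - p0[1])
    z = p0[2] + t * (p1[2] - p0[2])
    # ...then offset by the lens coordinate, scaled by the circle
    # of confusion at this depth.
    c = circle_of_confusion(z, focal_z, aperture)
    return (x + u * c, y + v * c, t)
```

Averaging many such samples per pixel produces motion blur (from the random times) and defocus blur (from the random lens positions) simultaneously, which is exactly why stochastic rasterization touches the depth buffer so much more often than the pinhole model.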
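The early culling mentioned above can be illustrated with a tile-based z-max test, a standard hardware technique: if the nearest depth of a primitive is farther away than everything already stored in a screen tile, all per-sample depth-buffer traffic for that tile can be skipped. This is a simplified sketch under assumed conventions (a "less" depth test, depths in [0, 1]), not the specific hardware algorithm proposed in the thesis.

```python
class DepthTile:
    """A small screen tile with per-sample depths and a cached z-max value."""
    def __init__(self, num_samples, far=1.0):
        self.depths = [far] * num_samples
        self.zmax = far                      # farthest depth currently in the tile

    def write(self, index, z):
        """Per-sample 'less' depth test; keeps the z-max bound conservative."""
        if z < self.depths[index]:
            self.depths[index] = z
            self.zmax = max(self.depths)     # recompute (a real unit would do this lazily)
            return True
        return False

def cull_against_tile(tile, tri_zmin):
    """Early z-max cull: if the primitive's nearest depth lies behind
    everything in the tile, it is fully occluded there, and per-sample
    testing (and the associated depth-buffer reads) can be skipped."""
    return tri_zmin > tile.zmax
```

The test is conservative: it can only say "definitely occluded" or "maybe visible", so correctness never depends on it, only bandwidth does.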
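Finally, the decoupled shading idea can be sketched as a memoization cache over quantized surface coordinates: many visibility samples of a blurred surface map to the same sparse shading point, so shading is evaluated far fewer times than there are samples. The quantization scheme and the dummy shader below are illustrative assumptions, not the thesis's algorithm.

```python
def shade(patch_id, su, sv):
    """Stand-in for an expensive shading computation (dummy 'color')."""
    return (patch_id * 31 + su * 7 + sv * 13) % 256

class ShadingCache:
    """Memoizes shading over a quantized surface parameterization. The
    coarser the grid (the lower the shading rate), the more visibility
    samples share one shaded value -- so blurred surfaces can be shaded
    sparsely without visible error."""
    def __init__(self, rate):
        self.rate = rate          # shading-grid resolution per surface patch
        self.cache = {}
        self.evaluations = 0      # counts actual shader executions

    def lookup(self, patch_id, u, v):
        # Quantize the (u, v) surface coordinate to the shading grid.
        key = (patch_id, int(u * self.rate), int(v * self.rate))
        if key not in self.cache:
            self.evaluations += 1
            self.cache[key] = shade(patch_id, *key[1:])
        return self.cache[key]
```

Adapting the shading rate, as described above, then amounts to choosing a coarser `rate` for patches with strong motion or defocus blur, where high-frequency shading detail would be blurred away anyway.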