Fakultät Informatik

Realtime simulation of overcast sky

Author:  Dirk Fischer, January 2004


The task:

The goal of the project was to develop a realistic simulation of an overcast sky, rendered in real time on current graphics hardware and processors. The simulation is implemented in C++ and OpenGL.


Approaches:

Many techniques have been used for clouds in games and flight simulators. Clouds have been approximated with planar textures - both static and animated - with semi-transparent textured objects, or with fogging effects. These techniques leave a lot to be desired. In a flying environment, you would like to fly in and around realistic, volumetric clouds, and to see other flying objects in and behind them.

This project renders realistically shaded static clouds and does not address dynamic cloud simulation. This choice allows the clouds to be generated ahead of time, shaded only once per scene, and displayed using dynamically generated impostors.


The system:

Scattering illumination models simulate the emission and absorption of light by a medium as well as scattering through it. Single scattering models simulate scattering through the medium in a single direction, usually the direction towards the point of view. Multiple scattering models are more physically accurate, but must account for scattering in all directions (or a sampling of all directions), and are therefore much more complicated and expensive to evaluate. This project simplifies multiple scattering by approximating it only in the light direction - called multiple forward scattering - combined with anisotropic scattering in the eye direction.
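The two approximations can be sketched in a few lines of C++. This is a minimal illustration, not the project's code: the Henyey-Greenstein phase function is a standard choice for the anisotropic scattering towards the eye, and the light march below models multiple forward scattering by re-adding a fixed fraction of the extinguished light along the light direction. The parameter names and sample values (`forwardFraction`, the extinction samples) are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Henyey-Greenstein phase function: relative probability of scattering by an
// angle whose cosine is cosTheta, with anisotropy parameter g in (-1, 1).
// g > 0 favors forward scattering, as is typical for cloud droplets; this is
// one common model for the anisotropic scattering in the eye direction.
double henyeyGreenstein(double cosTheta, double g) {
    double denom = 1.0 + g * g - 2.0 * g * cosTheta;
    return (1.0 - g * g) / (4.0 * kPi * std::pow(denom, 1.5));
}

// Multiple forward scattering approximated along the light direction only:
// march through extinction samples from the sun into the cloud, attenuating
// the direct light and re-adding the fraction of the extinguished light that
// is rescattered straight ahead ('forwardFraction' is a hypothetical knob).
double lightReachingParticle(const std::vector<double>& extinction,
                             double stepSize, double albedo,
                             double forwardFraction) {
    double light = 1.0;  // unit sun intensity entering the cloud
    for (double sigma : extinction) {
        double t = std::exp(-sigma * stepSize);  // direct transmittance
        light *= t + (1.0 - t) * albedo * forwardFraction;
    }
    return light;
}
```

With g = 0 the phase function reduces to the isotropic value 1/(4π); denser extinction samples let less light reach the particle, as expected.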

This cloud rendering method is a two-pass algorithm: the first pass precomputes the cloud shading, and the second pass uses this shading to render the clouds.
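The two passes can be sketched as follows - a simplified 1-D illustration, not the project's implementation. Pass 1 traverses the particles front-to-back as seen from the light and stores the light arriving at each particle; pass 2 composites the shaded particles back-to-front towards the eye with the standard over operator. For brevity the eye is assumed to look along the light direction.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Particle {
    double pos;      // position projected onto one axis (1-D for brevity)
    double opacity;  // per-particle opacity in [0, 1]
    double shade;    // light reaching the particle, filled in by pass 1
};

// Pass 1 (precomputed shading): front-to-back from the light. Each particle
// records the light arriving at it, then shadows everything behind it.
void shadingPass(std::vector<Particle>& cloud) {
    std::sort(cloud.begin(), cloud.end(),
              [](const Particle& a, const Particle& b) { return a.pos < b.pos; });
    double light = 1.0;              // unit sun intensity entering the cloud
    for (Particle& p : cloud) {
        p.shade = light;
        light *= (1.0 - p.opacity);  // attenuate light for particles behind
    }
}

// Pass 2 (rendering): composite back-to-front towards the eye using the
// over operator (here a single scalar stands in for a pixel color).
double renderPass(std::vector<Particle>& cloud) {
    std::sort(cloud.begin(), cloud.end(),
              [](const Particle& a, const Particle& b) { return a.pos > b.pos; });
    double color = 0.0;
    for (const Particle& p : cloud)
        color = p.shade * p.opacity + color * (1.0 - p.opacity);
    return color;
}
```

In the real system pass 2 draws the shaded particles as textured billboards with alpha blending; the scalar compositing above follows the same ordering and blend equation.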

While the cloud rendering method described above produces beautiful results and is fast enough for simple scenes, it suffers under the weight of many complex clouds. With direct particle rendering, even a scene with only a few clouds is prohibitively slow on current hardware.
An impostor replaces an object in the scene with a semi-transparent polygon texture-mapped with an image of the object it replaces. This project generates impostors using the following procedure: a view frustum is positioned so that its viewpoint lies at the position from which the impostor will be viewed, and the object is then rendered into an image used to texture the impostor polygon.

An impostor is valid (with no error) for the viewpoint from which its image was generated, regardless of changes in the viewing direction. Impostors may be precomputed for an object from multiple viewpoints, at the cost of considerable storage, or they may be generated only when needed. This project uses the latter technique, called dynamically generated impostors.
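A common way to decide when a dynamically generated impostor must be regenerated is to measure the angle between the viewpoint from which its image was captured and the current viewpoint, as seen from the object's center. The sketch below illustrates this idea; the tolerance value is a hypothetical parameter, not one taken from the project.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static double length(const Vec3& a) { return std::sqrt(dot(a, a)); }
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Angle (radians) between the capture direction and the current viewing
// direction, both measured from the object's center. The impostor is exact
// only when this angle is zero.
double impostorError(const Vec3& objectCenter,
                     const Vec3& capturePos, const Vec3& currentPos) {
    Vec3 a = sub(capturePos, objectCenter);
    Vec3 b = sub(currentPos, objectCenter);
    double c = dot(a, b) / (length(a) * length(b));
    c = std::max(-1.0, std::min(1.0, c));  // guard against rounding error
    return std::acos(c);
}

// Regenerate the impostor once the error exceeds a tolerance, for example
// the angle subtended by one texel of the impostor texture (a hypothetical
// criterion; the real threshold depends on resolution and field of view).
bool needsUpdate(double errorRadians, double toleranceRadians) {
    return errorRadians > toleranceRadians;
}
```

As long as the viewer stays within the tolerance cone around the capture viewpoint, the cached impostor image is reused; otherwise the cloud is re-rendered into a fresh texture.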
If you would like more information, you can download the project paper (in German) and the source code (C++, Cg).


Downloads:


Dirk Fischer, January 2004, Munich

 
