Faculty of Informatics

Simulation of Snowfall

Author:  Christoph Bichlmeier, October 2003


The task:

The task of the project was to develop a realistic simulation of heavily falling snow, to be rendered in real time on current graphics hardware and processors. The simulation is implemented in C++ and OpenGL.


Approaches:

Particle systems, which are normally used for waterfalls, fog, etc., are not efficient enough to realize a weather scenario such as intensive snowfall over a large area in real time.
Other solutions work with billboards: they render the weather scenario onto a mask that is permanently placed in front of the camera. However, this is only useful for simulating a view out of a window onto falling snow.

The solution presented here combines the two approaches described above.


The system:

A particle system simulates the snowflakes near the camera as objects that a viewer watching the scene can recognize individually. The unclear, fuzzy background is realized by several transparent billboards placed behind one another. Combining the two systems maintains the spatial, three-dimensional effect, because the snowflakes near the camera can still be identified as individual objects.
It also creates the illusion of a never-ending snowfall scenario. Objects far away from the camera remain visible, but only very fuzzily. This effect is adjustable by varying the number of billboards placed behind one another, which thereby defines the density of the snowfall.

For a better visual effect, I used point sprites for the particle system, to which small snowflake textures are bound. Point sprites are POINT primitives that rotate with the camera, so the textures' front faces are always directed towards the viewer. Point sprites also bring a processing advantage: if the simulation had to render QUADS instead of POINTS, the GPU would have to process four times as many vertices.
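The vertex-count saving can be made concrete with a small sketch (the struct and function names below are illustrative, not from the original code): a point sprite stores one position per flake and gets its texture coordinates generated automatically, whereas a textured quad needs four corner positions plus four texture coordinates.

```cpp
#include <cstddef>

// One snowflake position (illustrative layout).
struct Vertex3 { float x, y, z; };

// Point sprites: one vertex per flake, no explicit texture coordinates.
std::size_t pointSpriteBytes(std::size_t flakes) {
    return flakes * sizeof(Vertex3);
}

// Textured quads: four corner vertices plus four 2-component
// texture coordinates per flake.
std::size_t quadBytes(std::size_t flakes) {
    return flakes * (4 * sizeof(Vertex3) + 4 * 2 * sizeof(float));
}
```

For any flake count, the quad variant transfers more than four times the vertex data of the point-sprite variant.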

3D textures are bound to the billboards. When the camera position is translated or rotated, the billboards move through the 3D texture in the same direction.
The color of the 3D texture is defined in different shades of gray by a randomized noise algorithm.
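Generating such a grayscale noise volume can be sketched as follows (the function name, texture size, and the plain uniform noise are assumptions; the original uses its own noise routine, and the resulting texel array would be uploaded with glTexImage3D):

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Build a size*size*size block of random gray values (one luminance
// byte per texel), seeded so the texture is reproducible.
std::vector<std::uint8_t> makeNoiseTexture(int size, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> gray(0, 255);
    std::vector<std::uint8_t> texels(
        static_cast<std::size_t>(size) * size * size);
    for (auto& t : texels)
        t = static_cast<std::uint8_t>(gray(rng));
    return texels;
}
```

Because the texture wraps in all three directions, the billboards can slide through it indefinitely as the camera moves.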

To optimize the processing time of the simulation, the animation of the particles is handled entirely by a vertex shader, so all per-particle processing is offloaded to the GPU of the graphics hardware.
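The per-flake animation such a shader performs can be sketched on the CPU side (parameter names are assumptions, not the original Cg code): each flake falls at its own speed and wraps back to the top of the snow volume, so the same static vertex array is reused every frame and only the time uniform changes.

```cpp
#include <cmath>

// Height of one flake at a given time: fall linearly, then wrap
// into [0, volumeHeight) so the snowfall never runs out.
float animatedHeight(float startY, float fallSpeed,
                     float time, float volumeHeight) {
    float y = startY - fallSpeed * time;
    y = std::fmod(y, volumeHeight);
    if (y < 0.0f) y += volumeHeight;
    return y;
}
```

Since the result depends only on the start position, speed, and elapsed time, no per-particle state has to be written back from the CPU.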

A second optimization is the bounding-box system. A certain area around the camera is divided into raster volumes. The particles are placed into a snow volume by a randomized algorithm; this snow volume, realized as a vertex array, has the same dimensions as the raster volumes.
For every rendered frame, an algorithm detects all raster volumes that intersect the view frustum (the volume that specifies the camera's field of view). The snow volumes are then placed and animated only in the detected raster volumes.
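A minimal sketch of this culling step, with the frustum simplified to its axis-aligned bounding box (the original tests the actual frustum planes; all names here are illustrative):

```cpp
#include <vector>

// Axis-aligned box: a raster volume, or the frustum's bounds.
struct Box { float minX, minY, minZ, maxX, maxY, maxZ; };

bool overlaps(const Box& a, const Box& b) {
    return a.minX < b.maxX && a.maxX > b.minX &&
           a.minY < b.maxY && a.maxY > b.minY &&
           a.minZ < b.maxZ && a.maxZ > b.minZ;
}

// Return the indices of all grid cells that intersect the view
// volume; only these receive a snow volume this frame.
std::vector<int> visibleCells(const std::vector<Box>& cells,
                              const Box& frustumBounds) {
    std::vector<int> ids;
    for (int i = 0; i < static_cast<int>(cells.size()); ++i)
        if (overlaps(cells[i], frustumBounds))
            ids.push_back(i);
    return ids;
}
```

Because only the visible cells are filled and animated, the cost per frame stays bounded no matter how large the overall scene is.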


If you would like more information, you can download the paper on this project (in German) and the source code (C++, Cg).

Downloads:

 Gallery

 

Christoph Bichlmeier, October 2003, Munich
