Fakultät Informatik

Voxelization of Large Point Scans

Supervisor: Dipl.-Inf. Jens Schneider
Author: Reinhardt Hierl
Nov 2004 - April 2005


Introduction

Point sets are receiving a growing amount of attention as a representation for models in computer graphics. One reason for this is the emergence of affordable and accurate scanning devices that generate dense point sets. Another reason is that highly detailed surfaces require a large number of small primitives, each of which covers less than a pixel when displayed, so that points become an effective display primitive.


Change of Topic

Because of the immense amount of space required by large point sets, the originally planned outcome of this project was a software tool to compress such sets. Additionally, it was meant to support processing the compressed data on graphics hardware.
The main source for these point sets has been three-dimensional objects stored in ply files. Unfortunately, there was no efficient way to access that data directly.

Several problems explain why:

  • The definition of the ply format is incomplete.
  • Although the ply format was developed specifically to store three-dimensional objects, ply files can hold arbitrary kinds of lists. Therefore a notable amount of time is required to interpret the data (a typical header is shown after this list).
  • Because some three-dimensional objects are split across several ply files, additional data has to be processed in order to recombine the object.
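
To illustrate why the interpretation step is non-trivial, a typical ply header is shown below; the element names, property types, and counts are purely illustrative. Every declared element and property has to be parsed before any payload data can be read, and nothing forces a file to contain vertices or faces at all.

    ply
    format binary_little_endian 1.0
    comment arbitrary elements and properties may follow
    element vertex 100000
    property float x
    property float y
    property float z
    element face 200000
    property list uchar int vertex_indices
    end_header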


To illustrate these difficulties, consider the Atlas scan as an example:
This object has approx. 255 million vertices and approx. 507 million face primitives. The complete data is distributed over 23 files occupying 9.94 GB of disk space.

Developing a software tool that extracts all vertices and their corresponding normals into dedicated files therefore became the main objective of this project.


Realization

Generally, ply files store three-dimensional objects as a list of vertices and a list of face primitives; hardly any file contains the normals directly.

One way to generate them is to use the face primitives. These are stored as sets of vertex indices, from which two edge vectors per face can be formed. The direction of the face normal is then obtained by calculating the cross product of these vectors. Since a vertex is usually shared by several faces, each vertex normal should be the weighted average of all adjacent face normals.
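
As a minimal sketch of this computation (not the project's actual code; the names and the area-weighting choice are assumptions), the unnormalized cross products can simply be accumulated per vertex, which yields an area-weighted average:

    // Sketch: area-weighted vertex normals from triangle faces.
    // Assumption: triangles only; the cross product of two edge vectors has
    // a length of twice the triangle area, so summing the unnormalized
    // products already weights each face by its area.
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x};
    }

    std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                    const std::vector<std::array<std::size_t, 3>>& faces)
    {
        std::vector<Vec3> normals(verts.size(), Vec3{0.f, 0.f, 0.f});
        for (const auto& f : faces) {
            const Vec3 e1 = sub(verts[f[1]], verts[f[0]]);
            const Vec3 e2 = sub(verts[f[2]], verts[f[0]]);
            const Vec3 n  = cross(e1, e2);               // face normal, |n| = 2 * area
            for (std::size_t i : f) {
                normals[i].x += n.x; normals[i].y += n.y; normals[i].z += n.z;
            }
        }
        for (Vec3& n : normals) {                        // normalize the accumulated sums
            const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0.f) { n.x /= len; n.y /= len; n.z /= len; }
        }
        return normals;
    }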

If the list of face primitives is missing, the program generates the normals from the list of vertices alone. First, the nearest neighbours of a vertex are determined. Then a plane is fitted so that the vertices of this neighbourhood have minimum distance to it. Finally, the normal of that plane is taken as the vertex normal.
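
One possible realization of this plane fit is a principal component analysis of the neighbourhood: the plane normal is the eigenvector of the covariance matrix belonging to the smallest eigenvalue. The following sketch (assumed, not the tool's actual code) extracts that eigenvector with a shifted power iteration so no external eigen-solver is needed; note that the sign of such a normal is ambiguous.

    // Sketch: least-squares plane normal for a set of neighbouring points.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };          // same struct as in the previous sketch

    Vec3 estimateNormal(const std::vector<Vec3>& neighbours)
    {
        // centroid of the neighbourhood
        Vec3 c{0.f, 0.f, 0.f};
        for (const Vec3& p : neighbours) { c.x += p.x; c.y += p.y; c.z += p.z; }
        const float inv = 1.f / static_cast<float>(neighbours.size());
        c.x *= inv; c.y *= inv; c.z *= inv;

        // 3x3 covariance matrix of the neighbourhood (symmetric, positive semidefinite)
        float C[3][3] = {{0.f}};
        for (const Vec3& p : neighbours) {
            const float d[3] = {p.x - c.x, p.y - c.y, p.z - c.z};
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    C[i][j] += d[i] * d[j];
        }

        // B = trace(C)*I - C has its largest eigenvalue where C has its
        // smallest, so plain power iteration on B converges to the plane normal.
        const float tr = C[0][0] + C[1][1] + C[2][2];
        float B[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                B[i][j] = (i == j ? tr : 0.f) - C[i][j];

        float n[3] = {0.577f, 0.577f, 0.577f};           // arbitrary start vector
        for (int it = 0; it < 64; ++it) {
            float t[3] = {0.f, 0.f, 0.f};
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    t[i] += B[i][j] * n[j];
            const float len = std::sqrt(t[0]*t[0] + t[1]*t[1] + t[2]*t[2]);
            if (len < 1e-20f) break;                     // degenerate neighbourhood
            for (int i = 0; i < 3; ++i) n[i] = t[i] / len;
        }
        return Vec3{n[0], n[1], n[2]};
    }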

Once the vertices have been read and all their normals calculated, the data is stored in a binary file, so all required data can later be read directly. Another advantage is that, even without compression, only approx. 57 percent of the original disk space is required to hold the data.
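
As a rough plausibility check under the assumption (not stated above) that each vertex is written as six 32-bit floats, three for the position and three for the normal, i.e. 24 bytes per vertex: the approx. 255 million Atlas vertices would occupy about 255 · 10^6 · 24 B ≈ 5.7 GiB, which is consistent with the stated savings over the 9.94 GB of ply files. A hypothetical record layout and writer could look like this:

    // Hypothetical layout of the binary output (the tool's exact format is
    // not documented here): six 32-bit floats per vertex, written back to
    // back so the file can be read later without any text parsing.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct VertexRecord { float px, py, pz, nx, ny, nz; };   // 24 bytes

    bool writeBinary(const char* path, const std::vector<VertexRecord>& verts)
    {
        std::FILE* f = std::fopen(path, "wb");
        if (!f) return false;
        const std::size_t written =
            std::fwrite(verts.data(), sizeof(VertexRecord), verts.size(), f);
        std::fclose(f);
        return written == verts.size();
    }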


Downloads
ausarbeitung.pdf (German)

Screenshots

This picture shows the object "Armadillo". Because the file does not contain normals directly, they have to be generated on the fly.
This picture shows the object after the data has been transformed into the binary file.
This picture shows the object "Stanford Bunny".

When the binary file is generated, all normals are initially set to zero. If a vertex is not referenced by any face primitive, its normal stays zero. Such vertices are only removed when the object is recombined from several ply files.
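
A minimal sketch of this clean-up step (assumed behaviour, not the tool's actual code): after recombination, vertices whose normal is still the zero vector were never referenced by a face and can be dropped.

    #include <vector>

    struct VertexRecord { float px, py, pz, nx, ny, nz; };   // same layout as above

    std::vector<VertexRecord> removeUnreferenced(const std::vector<VertexRecord>& in)
    {
        std::vector<VertexRecord> out;
        out.reserve(in.size());
        for (const VertexRecord& v : in)
            if (v.nx != 0.f || v.ny != 0.f || v.nz != 0.f)   // zero normal => never referenced
                out.push_back(v);
        return out;
    }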
