Poisson Surface Reconstruction¶
Poisson surface reconstruction is an industry-standard algorithm for generating a 3D surface model from a set of unordered but oriented points. It imposes no a priori restrictions on the shape of the surface and is robust even when given very noisy data, such as that from laser scanners (LiDAR) or semi-global matching. (There are a couple of notable modifications in ADAM’s implementation of the algorithm: Firstly, the user specifies the desired output point spacing directly, rather than implicitly via the octree depth, discussed below, or some target number of triangles in the output. Indeed, our software imposes no limit on the number of triangles in the output; that is entirely a function of the memory available on the user’s PC. Secondly, the points are generated at exact multiples of that spacing so that DTMs generated separately will line up perfectly when subsequently loaded together.)
The linked paper is the original description of the theory; in this article we will concentrate instead on some key elements of how the algorithm works and on its practical use in our software.
Overview¶
The linked paper has the following figure illustrating the concept:
The Introduction of the linked paper explains the idea; the important point for our purposes is that the algorithm takes 3D points and their normals (hence “oriented”) as inputs.
Figuring out the correct direction of the surface normal at every point is actually one of the challenges that must first be overcome in order to use this algorithm; in 3DM Analyst you can trigger this step manually and save the result using the .ptc file format, so that you don’t need to derive the normals for that point cloud again if something goes wrong during the Poisson surface reconstruction process. Generating the normals can be a time- and memory-intensive process for very large point clouds.
The next step is to “bin” the points of the point cloud into a fixed, finite set of boxes in 3D, known as “voxels”, using an octree, so named because when you split a cube into equal sub-cubes by cutting it in half along the X, Y, and Z axes, you get eight child cubes:

First level of an octree subdivision. (Gaps between cubes for illustration only; the cubes are adjacent in reality.)¶
Each level performs this subdivision operation on each of the cubes in the level above, so a level 2 octree replaces each of the eight cubes shown in the image above with eight smaller cubes, 64 in total. In general, the number of cubes at level \(n\) of an octree is \(8^n\).
Another way to think about it is in terms of how many subdivisions there are along each axis; for level \(n\), there are \(2^n\) subdivisions along each axis, and since there are three axes, the number of cubes is \((2^n)^3\), which is the same as \(2^{3n}\) (hence \(8^n\)). Thinking about it this way makes it easier to see what the output resolution is going to be.
For example, the figure below is taken from the same linked paper, and it shows the same “dragon” model at octree depths of 6, 8, and 10:
This corresponds to a resolution in each axis of \(2^6 = 64\), \(2^8 = 256\), and \(2^{10} = 1024\), respectively. That’s important to understand because it determines the output resolution of the model — Poisson surface reconstruction will not generate a surface model more detailed than that.
That means, for example, that a level 10 reconstruction of a flat plane parallel to the XY plane will never have more than \(2^{10}=1024\) points in X or Y, or about 1 million points in total.
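To make the arithmetic concrete, here is a minimal Python sketch of the relationship between octree level and output resolution (nothing here is specific to our software):

```python
# Resolution per axis doubles with each octree level: 2^n cells per
# axis, so a flat surface yields at most (2^n)^2 output points and the
# full volume contains (2^n)^3 = 8^n voxels.
for n in (6, 8, 10):
    per_axis = 2 ** n
    print(f"level {n:2d}: {per_axis:5d} per axis, "
          f"~{per_axis ** 2:,} points for a flat surface")
```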
Memory consumption¶
There are two factors to consider when it comes to memory consumption of the Poisson surface reconstruction algorithm: The number of input points in the point cloud, and the spacing of the points in the output DTM.
Generally speaking, the memory consumption scales linearly with the number of input points. That is, if you have twice as many input points, the algorithm will need twice as much memory to store them. The storage per point is not huge: 100 million points require less than 4 GB to store for processing. The resolution of the output DTM has a much greater effect.
One optimisation in the octree used by Poisson surface reconstruction is that it does not subdivide a cube that has no points in it. In the best case, all points lie in a single axis-aligned plane; in that case, the memory consumption scales with the square of the per-axis resolution, not the cube. (That is, if you increase the level by 1, halving the size of the voxels, it will use 4 \(\times\) as much memory instead of 8 \(\times\) as much.) In the worst case, every voxel at the next level has at least one point in it; then, increasing the level by 1 really will increase the memory required by a factor of 8!
In most cases, the behaviour lies between these two extremes; a 2D surface embedded in a 3D space is still a 2D surface, so memory consumption is likely to be proportional to its surface area, which is typically a little worse than the best case, depending on the complexity of the surface.
Nevertheless, it’s clear that trying to increase the output resolution very quickly hits a brick wall, even in the best case; if a level 10 reconstruction requires 1 GB of RAM, say, then a level 11 might require 4 GB, a level 12 might require 16 GB, a level 13 might need 64 GB, and a level 14 might need 256 GB — and that’s the best case, with a nearly flat surface.
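As a rough back-of-envelope sketch (assuming, purely for illustration, the hypothetical 1 GB level-10 figure used above):

```python
# Memory growth per extra octree level, starting from a hypothetical
# 1 GB footprint at level 10 (the figure used in the text above).
base_level, base_gb = 10, 1.0
for level in range(11, 15):
    steps = level - base_level
    best = base_gb * 4 ** steps   # flat surface: 4x per level
    worst = base_gb * 8 ** steps  # fully occupied volume: 8x per level
    print(f"level {level}: ~{best:g} GB best case, ~{worst:g} GB worst case")
```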
In our software, we do not ask the user to enter the level directly; instead, we ask the user to enter the desired output spacing, since that’s much more likely what they are thinking in terms of, and we compute the level required to achieve that output spacing internally.
However, you can estimate the level yourself: take the extent of the project in the longest direction, divide it by your desired output spacing, take the base-2 logarithm of that number, and round the result up if it’s not an integer. (If your calculator does not have a base-2 logarithm function, you can use \(n=\frac{\log{x}}{\log{2}}\) or \(n=\frac{\ln{x}}{\ln{2}}\); in Excel you can use =LOG(x,2), substituting in the correct value for x.) Alternatively, you can just use the table below and take the first row where the Points per Axis value is greater than the figure you calculated:
Points per Axis | Octree Level | # Points
---|---|---
1024 | 10 | ~1 million
2048 | 11 | ~4 million
4096 | 12 | ~17 million
8192 | 13 | ~67 million
16384 | 14 | ~268 million
If you calculate the level and it comes to a value of 13, 14, or even higher, then unless you have a lot of memory in your PC, processing might take a long time or run out of memory.
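For those who prefer it, here is a minimal Python sketch of that calculation (the function name is our own, purely for illustration):

```python
import math

def octree_level(extent, spacing):
    """Smallest octree level whose per-axis resolution covers a project
    of size `extent` at the desired output `spacing` (same units)."""
    return math.ceil(math.log2(extent / spacing))

# Example: a hypothetical 900 m project at a desired 0.5 m output spacing.
level = octree_level(900.0, 0.5)
print(level, 2 ** level)  # -> 11, i.e. 2048 points per axis
```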
Operation¶
With the background information out of the way, we can now look at how the settings in our software control the reconstruction.
Our software uses Poisson surface reconstruction internally in a couple of places:
The Merge DTM Options section of the Multiple DTM Data Management dialog in 3DM Analyst.
The Merged DTM section of the Generation Settings dialog in DTM Generator.

Multiple DTM Data Management dialog in 3DM Analyst.¶

Generation Settings dialog in DTM Generator.¶
Minimum Spacing¶
In both dialogs, the Minimum Spacing setting specifies the spacing between points in the output mesh that you wish to generate. It is the setting that determines what octree level will be required, and hence how much memory will be needed to complete the reconstruction.
Because the surface is embedded in a three-dimensional space, you may get more points than you expected. For example, if the surface were perfectly flat and parallel to the XY plane, and you asked for points to be generated with a minimum spacing of 0.2 m, then you would only get points in a regular grid with X and Y values that are all multiples of 0.2 m.
However, if the surface is inclined so that it also passes through Z values that are exact multiples of 0.2 m, additional points are added at those 0.2 m Z intervals even where the X and Y values are not on exact 0.2 m locations. This creates a contour-like effect in the triangulation in all three axes.
To illustrate, consider the point cloud below; the blue grid lines are at exact multiples of 0.2 m in X and Y, and the intersections of these lines are where a 200 mm DTM would have output points:

Unordered point cloud with a mean spacing of about 180 mm and regular 200 mm grid overlaid.¶
If we use that point cloud to create a 200 mm minimum spacing DTM, we see that we do indeed get points on exact multiples of 0.2 m in X and Y, but we also get contour lines running through the DTM where the surface passes through a Z value that is an exact multiple of 0.2 m:

200 mm minimum spacing DTM generated from the point cloud above. Note the “contour lines” where the surface passes through a Z value that is a multiple of 200 mm.¶
We can see this clearly if we ask 3DM Analyst to generate 200 mm contours without smoothing or isolated contour removal:

200 mm minimum spacing DTM generated from the point cloud above with actual contours added. Note how they align with the additional points added to the DTM by the Poisson surface reconstruction algorithm.¶
So, while it’s true that Poisson surface reconstruction will never generate points that are not at co-ordinates that are exact multiples of the minimum spacing, we need to remember that it applies in all three dimensions — that is, a point can be output if any one co-ordinate is on an exact multiple of the minimum spacing.
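As a minimal sketch of that rule (our own illustration, not code from our software), a vertex is a candidate for output whenever at least one of its co-ordinates lands on the grid:

```python
def on_grid(point, spacing, tol=1e-9):
    """True if any co-ordinate of `point` is an exact multiple of
    `spacing` (within floating-point tolerance) -- the condition under
    which a vertex can appear in the output, per the rule above."""
    return any(abs(c / spacing - round(c / spacing)) < tol for c in point)

print(on_grid((0.4, 0.73, 1.1), 0.2))    # True: X is a multiple of 0.2
print(on_grid((0.31, 0.73, 1.17), 0.2))  # False: no axis on the 0.2 grid
```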
What happens if we ask the algorithm to generate a DTM with a spacing that is smaller than the point cloud’s spacing? As we’ll see below, Poisson never subdivides a voxel if none of the children would have the specified minimum number of points in the input point cloud. In the point cloud above, we can see that although the average spacing of the point cloud is about 180 mm, it is not uniform. If we ask 3DM Analyst to generate a DTM with a point spacing of 100 mm, there are areas where the source point cloud is dense enough to support that, but there are other areas where it is not, and so Poisson falls back to the octree level above and generates points at double the requested spacing:

100 mm minimum spacing on the same dataset. The mean spacing of the source data is 180 mm, but the density is not uniform — in some areas the requested 100 mm spacing can be achieved. However, in other areas, Poisson falls back to the next highest level, which has 2 \(\times\) the spacing, i.e. 200 mm.¶
If your goal is to achieve a uniform spacing, then you need to ensure that you are not requesting a spacing smaller than the spacing in the least dense area of the input point cloud. If, however, your goal is to capture as much detail as the source point cloud allows, you can request a spacing similar to that of the densest areas of the input point cloud and let Poisson automatically drop down to lower resolutions where the source data isn’t dense enough to support it. Note, however, that because it always drops down in powers of 2, those less-dense areas may end up even less dense than necessary, because the next-largest power of 2 can be quite a lot bigger. (For example, if we ask for a spacing of 200 mm, it will output 200 mm where it can, then 400 mm if not, then 800 mm, then 1600 mm. If the input data had a spacing of 1000 mm in some areas, then asking for a 1000 mm spacing would generate a denser DTM in those areas than asking for a 200 mm spacing, because the 200 mm DTM would be forced to use 1600 mm there.)
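To make the power-of-2 fallback concrete, here is a small sketch of the effective spacing you end up with (a simplified model of the behaviour described above, not our actual implementation):

```python
def effective_spacing(requested, local_source_spacing):
    """Repeatedly double the requested spacing until the local source
    data is dense enough to support it -- the power-of-2 fallback
    described above, in simplified form."""
    spacing = requested
    while spacing < local_source_spacing:
        spacing *= 2
    return spacing

print(effective_spacing(0.2, 1.0))  # -> 1.6 m, as in the example above
print(effective_spacing(1.0, 1.0))  # -> 1.0 m
```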
Maximum Spacing Factor¶
The Maximum Spacing Factor is closely related: it determines the largest distance between points for them to still be connected by a triangle. It’s a factor, not a distance, and because it relates to the octree depth, it must always be a power of 2; what this setting does is tell the software how many levels up the octree points may still be connected. If the maximum spacing factor is 2, then voxels from the level above will be used to create the connected surface even if they did not have enough points inside to justify subdividing to the same level, just as happened in the 100 mm minimum spacing image above. If the maximum spacing factor is 4, then it incorporates voxels two levels up; if it’s 8, then it incorporates voxels three levels up.
We do not recommend going beyond 8 because of “ballooning” — a tendency of the Poisson surface reconstruction algorithm to produce “bubbles” in the surface model where there is no data in the point cloud.
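Because the factor must be a power of 2, the number of octree levels it reaches up is simply its base-2 logarithm, as this small illustration shows:

```python
import math

# Maximum spacing factor is a power of 2; the number of octree levels
# "up" from which voxels may still be connected is log2(factor).
for factor in (2, 4, 8):
    print(f"factor {factor}: connects voxels up to "
          f"{int(math.log2(factor))} level(s) above")
```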
Points per Sample/Samples per Node¶
The first setting in both dialogs, which we skipped over earlier (Points per Sample in the first one and Samples per Node in the second), refers to the number of points in the input point cloud that must lie within a voxel at a given level for the parent voxel to be subdivided. For noisier point clouds with lots of bad points, increasing it beyond 1 can help prevent isolated bad points from affecting the generated surface. (Note that isolated bad points can also be removed prior to processing!) For higher-quality photogrammetric point clouds it is normally safe to leave this value at 1 to maximise the resolution of the surface model that you extract from the point cloud.
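A simplified sketch of the subdivision rule (our own illustration; the real implementation is more involved): a voxel is only subdivided if at least one of its children would contain the required number of input points, so raising the threshold stops a lone stray point from forcing extra resolution.

```python
def should_subdivide(child_point_counts, points_per_sample=1):
    """Subdivide a voxel only if at least one of its eight children
    would contain `points_per_sample` or more input points -- a
    simplified model of the rule described above."""
    return any(count >= points_per_sample for count in child_point_counts)

# Eight children, one containing a single stray point and one with two:
counts = [0, 0, 1, 0, 0, 0, 2, 0]
print(should_subdivide(counts, points_per_sample=1))  # True
print(should_subdivide(counts, points_per_sample=4))  # False: stray ignored
```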
To illustrate the effect of increasing this value, the following figure shows the same region above generated with a 200 mm output spacing but the points-per-sample value set to 4 instead of 1:

200 mm minimum spacing DTM generated from the point cloud above using a points per sample value of 4 instead of 1. Note that the areas where a 100 mm DTM could be generated are still present (at 200 mm) but the areas where only a 200 mm DTM could be generated are now at a 400 mm spacing.¶
Point Weighting¶
The final setting, Point Weighting, controls how closely the output surface should follow the points — for noisier points, this value should be kept low (even 0) so that the surface model does not “capture” the noise in the points. For higher-quality point clouds, 4 or more will ensure that fine details in the point cloud will be preserved.
Final Thoughts¶
If the output DTM looks “noisy”, then that’s a sign that Poisson is capturing the noise in the underlying point cloud. Either:
The minimum spacing should be increased (we would normally use 2–4 \(\times\) the mean point spacing for LiDAR and Semi-global Matching point clouds, around 1 \(\times\) the mean point spacing for normal photogrammetric point clouds), or
The points-per-sample should be increased (to 4 or 5 if it’s a LiDAR or Semi-global Matching point cloud, 1–2 if it’s a normal photogrammetric point cloud), or
The point weighting should be decreased (to 0 if it’s a LiDAR or Semi-global Matching point cloud, 1 if it’s a normal photogrammetric point cloud).
If, on the other hand, the output DTM has a “melted ice cream” look, then that’s a sign that your minimum spacing is too large, the points per sample is too large, or the point weighting is too low. By default we use 1 for points-per-sample and 1 for point weighting, so if you are using the default settings then the minimum spacing is the prime candidate for tuning.