Joint neural denoising and consolidation for portable handheld laser scans
Keywords: Point cloud denoising, Handheld laser scanners, Deep learning, Uniform point cloud
Abstract. Mobile and handheld laser scanners document scenes economically, but the data they acquire are often noisy, of low resolution, unevenly distributed, and riddled with voids. These characteristics challenge applications such as feature extraction and 3D modeling when processing the raw point set. To date, point cloud denoising and consolidation (the correction of uneven point distribution and void regions) have been treated independently, despite their complementary nature and their mutual dependence on the underlying surface representation. We argue that, if treated jointly, richer shape-context features can be learned and an improved enhancement framework can be derived. Accordingly, we formulate the shape-context description as a joint contribution of both denoising and consolidation within an end-to-end framework. To this end, we introduce densely packed graph convolution layers that extract contextual information, allowing query points to be offset toward the underlying surface and compensating for structural loss. We demonstrate how the commonly used L2-driven loss functions generate non-smooth output and volume shrinkage, and alleviate this with loss terms that mitigate noisy outcomes, repair voids, and improve point density distributions. Performance analysis on benchmark datasets demonstrates that we outperform state-of-the-art solutions, produce high-fidelity outcomes, and improve reconstruction-based tasks in real-world setups.
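As an aside on the L2-driven losses the abstract refers to, a common instance is the symmetric L2 Chamfer distance between point sets. The sketch below (NumPy; function name and toy data are our own, not from the paper) shows the quantity being minimized; averaging squared nearest-neighbour distances in both directions is one reason such objectives pull predictions toward local centroids, producing the over-smoothing and volume shrinkage discussed above.

```python
import numpy as np

def chamfer_l2(p, q):
    """Symmetric L2 Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour squared distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Toy check: a cloud matched against itself scores zero; noise raises the loss.
rng = np.random.default_rng(0)
clean = rng.uniform(size=(128, 3))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print(chamfer_l2(clean, clean))        # 0.0
print(chamfer_l2(clean, noisy) > 0.0)  # True
```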