The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLIII-B2-2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-651-2020
12 Aug 2020

UNDERSTANDING 3D POINT CLOUD DEEP NEURAL NETWORKS BY VISUALIZATION TECHNIQUES

Y. Cao, M. Previtali, and M. Scaioni

Keywords: Deep Learning, Reconstruction, Visualization, 3DCAM, 3D Point Cloud

Abstract. In the wake of the success of Deep Learning Networks (DLN) for image recognition, object detection, shape classification and semantic segmentation, this approach has proven to be both a major breakthrough and an excellent tool in point cloud classification. However, an understanding of how different types of DLN achieve their results is still lacking. In several studies the output of the segmentation/classification process is compared against benchmarks, but the network is treated as a “black box” and the intermediate steps are not analysed in depth. Specifically, the following questions are discussed here: (1) what exactly does a DLN learn from a point cloud? (2) On the basis of what information does a DLN make its decisions? To carry out such a quantitative investigation of DLN applied to point clouds, this paper examines the visual interpretability of the decision-making process. Firstly, we introduce a reconstruction network able to reconstruct and visualise the learned features, in order to address question (1). Then, we propose 3DCAM to indicate the discriminative point cloud regions that these networks use to identify a given category, thus dealing with question (2). By answering the above two questions, the paper aims to offer some initial solutions for a better understanding of the application of DLN to point clouds.
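To make the 3DCAM idea concrete, the sketch below shows one common way class activation mapping can be transferred to point clouds: per-point features from a PointNet-style backbone are projected onto the classifier weights of a target class, yielding a per-point relevance score. This is a minimal illustrative example in PyTorch under assumed names (PointNetLikeClassifier, point_cloud_cam), not the authors' actual network or code.

```python
import torch
import torch.nn as nn


class PointNetLikeClassifier(nn.Module):
    """Toy PointNet-style backbone: shared MLP over points, global max-pool, linear head.
    Hypothetical stand-in for the classification networks analysed in the paper."""

    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

    def forward(self, xyz):                                # xyz: (B, N, 3)
        feats = self.point_mlp(xyz.transpose(1, 2))        # (B, C, N) per-point features
        global_feat = feats.max(dim=2).values              # (B, C) global shape descriptor
        logits = self.classifier(global_feat)              # (B, num_classes)
        return logits, feats


def point_cloud_cam(model, xyz, target_class):
    """Per-point class activation map: project per-point features onto the
    classifier weights of the target class (CAM adapted to point clouds)."""
    _, feats = model(xyz)                                  # feats: (B, C, N)
    w = model.classifier.weight[target_class]              # (C,) class-specific weights
    cam = torch.einsum('c,bcn->bn', w, feats)              # (B, N) per-point relevance
    cam = cam - cam.min(dim=1, keepdim=True).values        # normalise scores to [0, 1]
    cam = cam / (cam.max(dim=1, keepdim=True).values + 1e-8)
    return cam


# Usage: score which points of a random cloud most support an assumed target class.
model = PointNetLikeClassifier()
points = torch.rand(1, 1024, 3)
heatmap = point_cloud_cam(model, points, target_class=3)  # (1, 1024) per-point scores
```

High-scoring points in such a map can then be rendered as a heat map over the input cloud to visualise which regions drive the network's decision for that category.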