FOREST COVER CLASSIFICATION USING GEOSPATIAL MULTIMODAL DATA
Keywords: Forest Cover Classification, LiDAR, Airborne Imagery, Convolutional Neural Network, Multimodal Learning
Abstract. Accurate and automated forest cover monitoring is crucial for addressing climate change. In this study, we propose a Convolutional Neural Network (CNN) that mimics the manual techniques of professional interpreters. Using simultaneously acquired airborne images and LiDAR data, we attempt to reproduce the 3D knowledge of tree shape that interpreters potentially make use of. Geospatial features that support interpretation are also fed to the CNN as inputs. Inspired by the interpreters' techniques, we propose a unified approach that integrates these datasets in a shallow layer of the CNN. We show that the proposed multimodal CNN works robustly, achieving more than 80 % user's accuracy. We also show that the 3D multimodal approach is especially suited for deciduous trees thanks to its ability to capture 3D shapes.
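To illustrate the idea of integrating the modalities in a shallow layer, the following is a minimal sketch of such an early-fusion CNN, not the authors' exact architecture: the channel counts, layer sizes, and class count are illustrative assumptions, with airborne imagery, a LiDAR-derived raster, and geospatial feature rasters each passed through a shallow convolutional stem before their feature maps are concatenated and processed by a shared backbone.

```python
# Minimal sketch of shallow-layer (early) fusion of three modalities.
# All channel counts, layer widths, and the number of classes are assumptions
# for illustration only, not the architecture reported in the paper.
import torch
import torch.nn as nn


class ShallowFusionCNN(nn.Module):
    def __init__(self, n_classes=4, img_ch=3, lidar_ch=1, geo_ch=2):
        super().__init__()
        # One shallow stem per modality (airborne image, LiDAR raster, geospatial features)
        self.img_stem = nn.Sequential(nn.Conv2d(img_ch, 16, 3, padding=1), nn.ReLU())
        self.lidar_stem = nn.Sequential(nn.Conv2d(lidar_ch, 16, 3, padding=1), nn.ReLU())
        self.geo_stem = nn.Sequential(nn.Conv2d(geo_ch, 16, 3, padding=1), nn.ReLU())
        # Fusion happens early: feature maps are concatenated after the first block
        self.backbone = nn.Sequential(
            nn.Conv2d(48, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, img, lidar, geo):
        fused = torch.cat(
            [self.img_stem(img), self.lidar_stem(lidar), self.geo_stem(geo)], dim=1
        )
        feat = self.backbone(fused).flatten(1)
        return self.classifier(feat)


# Usage example: one 64x64 patch per modality
model = ShallowFusionCNN()
logits = model(
    torch.randn(1, 3, 64, 64),   # airborne image patch
    torch.randn(1, 1, 64, 64),   # LiDAR-derived height raster
    torch.randn(1, 2, 64, 64),   # geospatial feature rasters
)
```

Concatenating the per-modality feature maps at a shallow layer, rather than fusing predictions at the end of separate networks, lets subsequent convolutions learn joint spectral-and-3D cues, which is the intuition behind the paper's unified approach.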