DEAD WOOD DETECTION BASED ON SEMANTIC SEGMENTATION OF VHR AERIAL CIR IMAGERY USING OPTIMIZED FCN-DENSENET
Keywords: Semantic Segmentation, Deep Learning, Dead Wood Detection
Abstract. Assessing forest health conditions is an important task for biodiversity, forest management, global environmental monitoring, and carbon dynamics. Several studies have proposed methods to evaluate the condition of a forest based on remote sensing technology. Among existing technologies, applying traditional machine learning approaches to detect dead wood in aerial colour-infrared (CIR) imagery is a major trend, owing to the spectral capability of CIR data to explicitly capture vegetation health conditions. However, complicated scenes with background noise restrict the accuracy of existing approaches, as those detectors normally rely on hand-crafted features. Currently, deep neural networks are widely used in computer vision tasks and have shown that features learnt by the model itself perform much better than hand-crafted features. Semantic image segmentation is a pixel-level classification task that is well suited to dead wood detection in very high resolution (VHR) imagery, because it enables the model to identify and classify very dense and detailed components of the tree objects. In this paper, an optimized FCN-DenseNet is proposed to detect dead wood (i.e. standing dead trees and fallen trees) in a complicated temperate forest environment. Since dead trees appear at greatly differing scales and sizes, several pooling procedures are employed to extract multi-scale features, and dense connections are employed to enhance the information flow among the scales. Our proposed deep neural network is evaluated on VHR CIR imagery (GSD 10 cm) captured in a natural temperate forest in the Bavarian Forest National Park, Germany, which has undergone on-site bark beetle attack. The results show that the boundaries of dead trees can be accurately segmented and the classification is performed with high accuracy, even though only one labelled image of moderate size is used for training the deep neural network.
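
Purely as an illustration of the two ingredients mentioned above (pooling to reach multiple scales and dense connectivity within a block), the sketch below shows a minimal FC-DenseNet-style encoder stage, assuming PyTorch. The class names, channel counts, and tile size are hypothetical examples and are not taken from the paper's actual implementation.

# Minimal sketch, assuming PyTorch (not the authors' exact architecture):
# a dense block concatenates the feature maps of all preceding layers, and a
# transition-down (pooling) step halves the spatial resolution so that features
# are extracted at progressively coarser scales.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv producing `growth_rate` new feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.block(x)

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps."""
    def __init__(self, in_channels, growth_rate, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(n_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class TransitionDown(nn.Module):
    """1x1 conv followed by 2x2 max pooling to move to a coarser scale."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):
        return self.block(x)

# Example: one encoder stage on a 3-band CIR (NIR, red, green) tile.
if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)                      # one 256x256 CIR tile
    stem = nn.Conv2d(3, 48, kernel_size=3, padding=1)
    block = DenseBlock(48, growth_rate=16, n_layers=4)   # 48 + 4*16 = 112 channels
    down = TransitionDown(112, 112)
    y = down(block(stem(x)))
    print(y.shape)                                       # torch.Size([1, 112, 128, 128])

Stacking several such stages yields feature maps at multiple resolutions; in an FC-DenseNet-style segmentation network these are later upsampled and reused via skip connections to recover pixel-level boundaries.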