The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLIII-B3-2021
https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-47-2021
28 Jun 2021

SEMANTIC SEGMENTATION OF BURNED AREAS IN SATELLITE IMAGES USING A U-NET-BASED CONVOLUTIONAL NEURAL NETWORK

A. K. Brand and A. Manandhar

Keywords: Deep Learning, Burned Area Mapping, Sentinel-2, Semantic Segmentation, Convolutional Neural Network, U-Net

Abstract. The use of remote sensing data for burned area mapping has led to unprecedented advances within the field in recent years. Although threshold-based and traditional machine learning methods have been applied successfully to the task, they come with drawbacks, including complex rule sets and the need for prior feature engineering. In contrast, deep learning offers an end-to-end solution for image analysis and semantic segmentation. In this study, a variation of U-Net is investigated for mapping burned areas in mono-temporal Sentinel-2 imagery. The experimental setup is divided into two phases. The first comprises a performance evaluation on test data, while the second serves as a use-case simulation and a spatial evaluation of training data quality. The former is designed specifically to compare two local variants of the model (each trained only with data from its respective research area) against a global variant (trained with the whole dataset), the research areas being Indonesia and Central Africa. The networks are trained from scratch on a manually generated, customized training dataset. Applying the two variants per region revealed only a slight superiority of the local model (macro-F1: 92%) over the global model (macro-F1: 91%) in Indonesia, with no difference in overall accuracy (OA) at 94%. In Central Africa, the global and local models yield identical results in both metrics (OA: 84%, macro-F1: 82%). Overall, the outcome demonstrates the global model’s ability to generalize despite high dissimilarities between the research areas.
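To illustrate the kind of architecture the abstract refers to, the sketch below shows a minimal U-Net-style encoder-decoder for per-pixel burned/unburned classification of Sentinel-2 patches. It is not the authors' published configuration; the band count, channel widths, and network depth are assumptions made purely for illustration.

```python
# Minimal U-Net-style sketch for binary burned/unburned segmentation of
# Sentinel-2 patches. Band count, channel widths, and depth are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SmallUNet(nn.Module):
    def __init__(self, in_channels=10, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)           # per-pixel class logits


# Example: a batch of 4 patches with 10 Sentinel-2 bands at 256x256 pixels.
model = SmallUNet(in_channels=10, num_classes=2)
logits = model(torch.randn(4, 10, 256, 256))  # -> shape (4, 2, 256, 256)
```

The overall accuracy and macro-F1 figures quoted in the abstract correspond to standard per-pixel metrics; over flattened prediction and reference masks they can be obtained, for instance, with scikit-learn's accuracy_score and f1_score(average="macro").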