Improving Off-Nadir Deep Learning-Based Change and Damage Detection through Radiometric Enhancement
Keywords: Change Detection, Off-nadir Imagery, Remote Sensing, Neural Networks, Radiometric Enhancement
Abstract. Aerial and satellite imagery can provide vital information to relief organizations about the extent and distribution of damage after natural disasters. Because manual change detection is too slow to be effective, the pursuit of automated change detection has accelerated with recent developments in deep learning. Off-nadir imagery (captured at an angle rather than directly overhead) is the fastest to acquire after a disaster, making it ideal for disaster management scenarios. However, the oblique viewing angles introduce shadows and occlusions that make damage detection more difficult. Differences in illumination conditions are ever present in bitemporal aerial and satellite imagery, and especially in off-nadir imagery, where the reflectance angle affects the amount of light returning to the sensor, further complicating change and damage detection. The hypothesis of this study was that artificial intelligence methods fail to adequately account for the illumination differences between images. To test this hypothesis, two radiometric enhancements, matching and equalization, were applied to four change and damage detection datasets, including a damage detection dataset from the 2010 Haiti earthquake. Using Changer, a leading high-accuracy fusion convolutional neural network architecture, improvements of up to 20 percent in F1-Score, a metric widely used in remote sensing to quantify pixel-level classification accuracy, were achieved on specific datasets by applying radiometric enhancement techniques. Applying radiometric enhancements on a case-by-case basis led to considerable accuracy improvements, demonstrating the promise of radiometric enhancement. Lower accuracies were achieved on the Haiti dataset, underscoring the need for large disaster-specific training datasets.
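For illustration, the sketch below applies the two radiometric enhancements named in the abstract to a bitemporal image pair. It is a minimal example only, assuming histogram-based matching and per-band equalization implemented with scikit-image; the exact pre-processing pipeline used in the study may differ.

```python
# Minimal sketch of radiometric matching and equalization for a bitemporal pair.
# Assumes histogram-based implementations via scikit-image; not the authors' exact pipeline.
import numpy as np
from skimage import exposure


def radiometric_match(pre_image: np.ndarray, post_image: np.ndarray) -> np.ndarray:
    """Match the post-event image's histogram to the pre-event reference,
    reducing bitemporal illumination differences band by band."""
    return exposure.match_histograms(post_image, pre_image, channel_axis=-1)


def radiometric_equalize(image: np.ndarray) -> np.ndarray:
    """Equalize each band independently so the image takes on a
    near-uniform intensity distribution before change detection."""
    bands = [exposure.equalize_hist(image[..., b]) for b in range(image.shape[-1])]
    return np.stack(bands, axis=-1)


if __name__ == "__main__":
    # Synthetic H x W x 3 tiles standing in for pre- and post-disaster imagery.
    rng = np.random.default_rng(0)
    pre = rng.integers(0, 255, size=(256, 256, 3), dtype=np.uint8)
    post = np.clip(pre.astype(np.int16) + 40, 0, 255).astype(np.uint8)  # brighter "post" scene

    matched = radiometric_match(pre, post)    # post adjusted toward the pre-event reference
    equalized = radiometric_equalize(post)    # post equalized independently of the reference
    print(matched.shape, equalized.shape)
```

In this sketch, matching uses the pre-event image as the radiometric reference, whereas equalization is applied to each image on its own; either enhanced pair could then be fed to a change detection network such as Changer in place of the raw bitemporal images.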