Cloud shadow and uneven illumination detection and correction using the U-Net architecture in near-surface images of complex forest canopies
Keywords: UAV, deep learning, cloud shadow, tropical forest, multispectral, RGB
Abstract. In humid tropical regions, uneven illumination and cloud shadows can complicate near-surface optical remote sensing. These effects can necessitate costly, repeated surveys to maintain geographical and spectral consistency, hampering the regular monitoring of forest ecosystems. We present a novel deep-learning-based correction method that addresses this issue in high-resolution canopy images. The method trains a deep learning model on one or a few well-illuminated, homogeneous reference images augmented with artificially generated cloud shadows, enabling the model to predict illumination and cloud shadow patterns in any image and ultimately mitigate these effects. We evaluated the method across multiple sensors and conditions using images captured by multispectral and RGB cameras, including nadir-view images from two drone-mounted sensors and tower-mounted RGB Phenocams. The technique effectively corrects uneven illumination in near-infrared and true-color RGB images, including non-forested areas, and yields more consistent normalized difference vegetation index (NDVI) patterns in areas affected by uneven illumination. We further assessed the method in a binary classification task aimed at detecting non-photosynthetic vegetation (NPV) in a mosaic, comparing corrected RGB images to the originals. Overall accuracy and Kappa both improved significantly in corrected images, by 2.5% and 1.1%, respectively. Moreover, the method generalizes across sensors and conditions. Further work should focus on refining the technique and exploring its applicability to satellite imagery and beyond.
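As a rough illustration of the augmentation and correction idea summarized above, the Python sketch below generates a smooth synthetic cloud-shadow field, applies it to a well-illuminated reference image, and inverts the effect given a predicted illumination mask. The shadow generator, function names, and parameters are illustrative assumptions, not the authors' implementation; in the actual method a U-Net is trained on such (shadowed image, mask) pairs to predict the illumination pattern.

```python
import numpy as np

def synthetic_shadow_mask(h, w, strength=0.5, scale=32, seed=0):
    """Smooth multiplicative illumination field with values in (1 - strength, 1]."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((h // scale + 1, w // scale + 1))
    # Nearest-neighbour upsampling of coarse noise gives a slowly varying field;
    # a real augmentation would use a more realistic cloud-shadow model.
    rows = np.linspace(0, coarse.shape[0] - 1, h).astype(int)
    cols = np.linspace(0, coarse.shape[1] - 1, w).astype(int)
    field = coarse[rows][:, cols]
    field = (field - field.min()) / (field.max() - field.min() + 1e-8)  # rescale to [0, 1]
    return 1.0 - strength * field  # values near 1 = full sun, lower = shadowed

def apply_shadow(image, mask):
    """Darken an (H, W, C) image with an (H, W) illumination mask -> a training input."""
    return image * mask[..., None]

def correct(image, predicted_mask):
    """Undo the illumination effect once a model has predicted the mask."""
    return np.clip(image / np.clip(predicted_mask, 1e-3, None)[..., None], 0.0, 1.0)

# (shadowed, mask) pairs built this way would serve as (input, target) examples
# for training a segmentation-style network such as a U-Net.
reference = np.random.rand(256, 256, 3)    # stand-in for a well-illuminated reference image
mask = synthetic_shadow_mask(256, 256)
shadowed = apply_shadow(reference, mask)
restored = correct(shadowed, mask)         # with a perfect mask, this recovers the reference
```

In this sketch the correction is a simple division by the predicted illumination field; the key point is that the model never needs shadow-free/shadowed pairs of real scenes, only reference images with synthetic shadows imposed on them.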