COMPARISON OF CONVOLUTIONAL NEURAL NETWORKS FOR CLOUDY OPTICAL IMAGES RECONSTRUCTION FROM SINGLE OR MULTITEMPORAL JOINT SAR AND OPTICAL IMAGES
Keywords: image reconstruction, clouds, atmospheric perturbation, SAR, optical, convolutional neural networks, gap-filling
Abstract. With the increasing availability of optical and synthetic aperture radar (SAR) images thanks to the Sentinel constellation, and the explosion of deep learning, new methods have emerged in recent years to tackle the reconstruction of optical images that are impacted by clouds. In this paper, we focus on the evaluation of convolutional neural networks that jointly use SAR and optical images to recover the missing content in a single cloud-contaminated optical image. We propose a simple framework that eases the creation of datasets for training deep networks targeting optical image reconstruction, and for validating machine-learning-based or deterministic approaches. These methods differ considerably in their input image constraints, and comparing them is a non-trivial task that has not been addressed in the literature. We show how space-partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity and relative proximity between SAR and optical images. We generate several datasets to compare the images reconstructed by networks that use a single SAR/optical image pair, by networks that use multiple pairs, and by a traditional deterministic approach performing interpolation in the temporal domain.
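As an illustration of the sample-querying idea mentioned in the abstract, the following is a minimal sketch, not the authors' actual framework, showing how a space-partitioning structure (here a 1-D KD-tree from SciPy over acquisition dates) could be used to pair each optical image with its temporally closest SAR acquisition while filtering on cloud coverage. All variable names, thresholds and the toy metadata are illustrative assumptions.

```python
# Hypothetical sketch: index SAR acquisitions by date with a KD-tree (a simple
# space-partitioning structure), then query SAR/optical pairs whose acquisition
# dates are close and whose optical cloud coverage is acceptable.
import numpy as np
from scipy.spatial import cKDTree

# Toy metadata (assumptions): acquisition times in days since some epoch.
sar_times = np.array([3.0, 9.0, 15.0, 21.0])    # SAR acquisition dates
opt_times = np.array([4.0, 10.0, 16.0, 22.0])   # optical acquisition dates
opt_cloud = np.array([0.05, 0.60, 0.10, 0.95])  # cloud coverage fraction per optical image

MAX_GAP_DAYS = 2.0  # maximum tolerated SAR/optical acquisition gap (assumption)
MAX_CLOUD = 0.5     # maximum tolerated cloud coverage (assumption)

# Build a 1-D KD-tree over SAR acquisition dates.
tree = cKDTree(sar_times[:, None])

# For each optical image, find the temporally closest SAR acquisition.
dist, idx = tree.query(opt_times[:, None], k=1)

# Keep only pairs satisfying the temporal-gap and cloud-coverage constraints.
valid = (dist <= MAX_GAP_DAYS) & (opt_cloud <= MAX_CLOUD)
pairs = [(int(i_sar), int(i_opt)) for i_opt, i_sar in enumerate(idx) if valid[i_opt]]
print(pairs)  # e.g. [(0, 0), (2, 2)]
```

The same pattern extends to higher-dimensional queries (e.g. adding spatial footprints or pixel-validity ratios as extra coordinates), which is where such structures become more useful than a linear scan over all candidate pairs.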