MRSSC: A BENCHMARK DATASET FOR MULTIMODAL REMOTE SENSING SCENE CLASSIFICATION
Keywords: Benchmark Dataset, Domain Adaptation, Remote Sensing Classification, Optical Remote Sensing Images, Near-nadir SAR
Abstract. Scene classification based on multi-source remote sensing images is important for image interpretation and has many applications, such as change detection, visual navigation and image retrieval. Deep learning has become a research hotspot in remote sensing scene classification, and datasets are an important driving force behind its development. Most remote sensing scene classification datasets consist of optical images, and multimodal datasets are relatively rare. Existing datasets that contain both optical and SAR data, such as SARptical and WHU-SEN-City, mainly focus on urban areas and lack a wide variety of scene categories. This largely limits the development of domain adaptation algorithms for remote sensing scene classification. In this paper, we propose a multimodal remote sensing scene classification dataset (MRSSC) based on Tiangong-2, a Chinese manned spacecraft that can acquire optical and SAR images at the same time. The dataset contains 12167 images (6155 optical and 6012 SAR) of seven typical scenes, namely city, farmland, mountain, desert, coast, lake and river. The dataset is evaluated with state-of-the-art domain adaptation methods to establish a baseline with an average classification accuracy of 79.2%. The MRSSC dataset will be released freely for educational purposes and can be found at the China Manned Space Engineering data service website (http://www.msadc.cn). This dataset fills the gap in remote sensing scene classification between different image sources and paves the way for a generalized classification model for multimodal earth observation data.
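To make the baseline setting concrete, the sketch below shows one representative statistic used by classical domain adaptation methods such as CORAL: the distance between the feature covariances of the source (e.g. optical) and target (e.g. SAR) domains. This is a minimal illustration in NumPy, not the paper's released evaluation code; the feature matrices and their dimensionality are assumptions for demonstration only.

```python
import numpy as np

def coral_loss(source_feats, target_feats):
    """CORAL-style alignment loss: squared Frobenius distance between the
    covariance matrices of source and target feature batches, normalised
    by 4*d^2 (d = feature dimension), as in the Deep CORAL formulation."""
    d = source_feats.shape[1]
    cov_s = np.cov(source_feats, rowvar=False)  # d x d source covariance
    cov_t = np.cov(target_feats, rowvar=False)  # d x d target covariance
    return np.sum((cov_s - cov_t) ** 2) / (4.0 * d * d)

# Toy example: two batches of hypothetical 8-dimensional scene features.
rng = np.random.default_rng(0)
optical_feats = rng.normal(size=(64, 8))        # stand-in for optical features
sar_feats = rng.normal(size=(64, 8)) * 1.5      # stand-in for SAR features
print(coral_loss(optical_feats, sar_feats))     # larger when domains differ
print(coral_loss(optical_feats, optical_feats)) # 0.0 for identical batches
```

Minimising such a loss alongside the classification loss encourages a network to learn features whose second-order statistics match across the optical and SAR domains, which is one of the strategies a cross-modal baseline on a dataset like MRSSC can use.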