The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1/W2-2023
https://doi.org/10.5194/isprs-archives-XLVIII-1-W2-2023-1515-2023
13 Dec 2023

APPLICATION OF DEEP LEARNING CROP CLASSIFICATION MODEL BASED ON MULTISPECTRAL AND SAR SATELLITE IMAGERY

Y. Qi, G. Bitelli, E. Mandanici, and F. Trevisiol

Keywords: Crop classification, deep learning, Transformer, Land cover, Sentinel, Google Earth Engine

Abstract. Classifying crops from satellite data is challenging, especially since most crops have similar growth cycles. Owing to differences in canopy characteristics and chlorophyll content, crops exhibit subtle differences in their reflectance spectra. This study uses a data-driven approach to build a series of deep learning models that classify 36 land-cover classes in Steele County and Traill County, North Dakota, US. A Google Earth Engine workflow was implemented to generate a composite layer combining Sentinel-1 and Sentinel-2 satellite data with surface crop data over the study area. 200,000 sample points were generated on this layer: 140,000 for the training dataset, 30,000 for the validation dataset, and 30,000 for the testing dataset. Each sample point contains 12 months of SAR and spectral values, yielding a two-dimensional feature matrix with a time dimension and a band dimension (bands here include both specific wavelengths of remote sensing imagery and derived products such as NDVI). The training dataset consists of these feature matrices, with the surface crops serving as the corresponding labels. Because the features are two-dimensional, this research compares four deep learning models: Dense Neural Network (DNN), Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Transformer. The Transformer model, based on the self-attention mechanism, performed best, with an overall accuracy of 85%; for crops with more than 2,000 sample points in the training dataset, classification accuracy exceeded 90%.
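To illustrate the data layout the abstract describes, the following is a minimal NumPy sketch (not the authors' implementation): one sample point is a time-by-band feature matrix, and a single self-attention layer mixes information across the 12 monthly time steps, which is the core operation of the Transformer model mentioned above. The band count (14) and the random weights are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (T, D) feature matrix for one sample point
       (T = 12 monthly composites, D = spectral/SAR features).
    Returns a (T, D) matrix where each time step is a weighted
    mixture of all time steps, letting the model relate phenological
    stages across the growing season.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])   # (T, T) attention scores
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ V

T, D = 12, 14   # 12 months; D = 14 is assumed (e.g. S2 bands + SAR VV/VH + NDVI)
rng = np.random.default_rng(0)
X = rng.normal(size=(T, D))                     # one sample point's feature matrix
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # (12, 14) context-mixed features
```

In the full model, such attention layers would be stacked and followed by a classification head over the 36 land-cover classes; this sketch only shows why the two-dimensional (time x band) feature matrix is a natural fit for self-attention.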