AIRPORT RUNWAY SEMANTIC SEGMENTATION BASED ON DCNN IN HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGES
Keywords: Airport runway, Semantic Segmentation, DeepLabv3, Lovász-Softmax, Remote Sensing
Abstract. Due to the diverse structures and complex backgrounds of airports, fast and accurate airport detection in remote sensing images is challenging. Current airport detection methods are mostly based on bounding boxes, while pixel-based methods that identify the airport runway outline have rarely been reported. In this paper, a framework based on a deep convolutional neural network is proposed to accurately identify runway contours in high-resolution remote sensing images. First, we build a semantic segmentation dataset of large and medium airport runways (excluding the South Korean region) containing 1,464 airport runways. The DeepLabv3 semantic segmentation network is then trained on this dataset with a cross-entropy loss. After the cross-entropy training, the Lovász-Softmax loss function is used to continue training the network, improving the intersection-over-union (IoU) score by 5.9%. An IoU score of 0.75 is selected as the threshold for deciding whether a runway is detected, yielding an accuracy of 96.64% and a recall of 94.32%. Compared with the state-of-the-art method, our method improves accuracy and recall by 1.3% and 1.6%, respectively. Finally, we extract the number of runways as well as their basic contours for all large and medium airports in South Korea from remote sensing images covering the whole country. The results show that our method can effectively detect runway contours in remote sensing images of large and complex scenes, and can provide a reference for airport detection.
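As a rough illustration of the two-stage training described above (a minimal sketch, not the authors' code), the snippet below trains a torchvision DeepLabv3 model first with per-pixel cross-entropy and then with the Lovász-Softmax loss. It assumes the reference lovasz_softmax implementation (lovasz_losses.py from https://github.com/bermanmaxim/LovaszSoftmax) is available locally; the random tensors are placeholders standing in for the actual runway dataset, and the hyperparameters are illustrative only.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

# Reference Lovász-Softmax implementation (lovasz_losses.py) from
# https://github.com/bermanmaxim/LovaszSoftmax -- assumed to be on the path.
from lovasz_losses import lovasz_softmax

# Toy stand-in for the runway dataset: random tiles with binary masks
# (0 = background, 1 = runway); replace with the real remote sensing data.
images = torch.randn(8, 3, 256, 256)
masks = torch.randint(0, 2, (8, 256, 256))
loader = DataLoader(TensorDataset(images, masks), batch_size=2)

model = deeplabv3_resnet50(num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def run_epoch(use_lovasz: bool) -> None:
    model.train()
    for x, y in loader:
        logits = model(x)["out"]                     # [B, 2, H, W]
        if use_lovasz:
            # Stage 2: Lovász-Softmax, a surrogate loss that directly
            # optimises the IoU (Jaccard) measure.
            loss = lovasz_softmax(F.softmax(logits, dim=1), y)
        else:
            # Stage 1: standard per-pixel cross-entropy.
            loss = F.cross_entropy(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# First train with cross-entropy, then continue with Lovász-Softmax.
for _ in range(2):
    run_epoch(use_lovasz=False)
run_epoch(use_lovasz=True)
```

At evaluation time, a predicted runway is counted as detected when its IoU with the ground-truth mask reaches the 0.75 threshold stated in the abstract, which is how the reported accuracy and recall figures are obtained.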