URBAN CLASSIFICATION BASED ON TOP-VIEW POINT CLOUD AND SAR IMAGE FUSION WITH SWIN TRANSFORMER
Keywords: Deep Learning, Transformer, Feature Fusion, Urban Classification, Synthetic Aperture Radar, Point Cloud
Abstract. Urban areas are complex scenes composed of objects with a wide variety of materials. This variety poses a challenge to single-data classification schemes. In this paper, we propose a feature fusion and classification network based on the Swin Transformer that operates on RGB top-view point clouds and SAR images. In this network, the heterogeneous features are first learned separately by an asymmetric encoder; they are then concatenated along the channel dimension and fed into a fusing encoder. Finally, the fused features are decoded by a UperNet to generate the semantic labels. As data, we use the high-resolution 3D point cloud provided by the Hessigheim benchmark, complemented by TerraSAR-X images. The overall precision and the mean intersection over union (mIoU) reach 87.25% and 73.56%, respectively, outperforming the single-data Swin Transformer by 4.08% and 1.91%.
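The channel-wise fusion step described in the abstract can be illustrated with a minimal NumPy sketch. The shapes, channel counts, and the 1x1 projection below are hypothetical placeholders, not the paper's actual Swin stage widths or fusing-encoder layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature maps from the two encoder branches, laid out as
# (batch, channels, height, width). Channel counts are illustrative only.
pc_feat = rng.standard_normal((1, 96, 32, 32))   # point-cloud (RGB top-view) branch
sar_feat = rng.standard_normal((1, 96, 32, 32))  # SAR branch

# Concatenate the heterogeneous features along the channel dimension,
# as done before the fusing encoder.
fused = np.concatenate([pc_feat, sar_feat], axis=1)
print(fused.shape)  # (1, 192, 32, 32)

# A 1x1 channel-mixing projection (expressed as an einsum), standing in
# for the first layer of the fusing encoder that processes the
# concatenated features.
w = rng.standard_normal((96, 192)) * 0.01
mixed = np.einsum('oc,bchw->bohw', w, fused)
print(mixed.shape)  # (1, 96, 32, 32)
```

The key point is that concatenation doubles the channel dimension while leaving the spatial grid untouched; the subsequent fusing encoder is what actually learns cross-modal interactions from the stacked channels.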