MULTIMODAL PERSON RE-IDENTIFICATION IN AERIAL IMAGERY BASED ON CONDITIONAL ADVERSARIAL NETWORKS
Keywords: person re-identification, generative adversarial networks, thermal images, airborne images
Abstract. Person Re-Identification (Re-ID) is the task of matching the same person in multiple images captured by different cameras. Recently, deep learning-based Re-ID algorithms have demonstrated exciting progress for terrestrial cameras; however, person Re-ID in aerial images poses multiple challenges, including occlusion of body parts, image distortion, and dynamic camera location. In this paper, we propose a new Person Aerial Re-ID framework Robust to Occlusion and Thermal imagery (ParrotGAN). Our model focuses on cross-modality person Re-ID in aerial images. Furthermore, we collected a new large-scale synthetic multimodal AerialReID dataset with 30k images and 137 person identities. Our ParrotGAN model leverages two strategies to achieve robust performance on person Re-ID in the thermal and visible ranges. First, we use the latent space of a StyleGAN2 model to estimate the distance between two images of a person. Specifically, we project each real image into the latent space, obtaining a corresponding latent vector z, and use the distance between latent vectors as a Re-ID similarity metric. Second, we use a generative adversarial network to translate a color image into a synthetic thermal image, which we use for cross-modality Re-ID. We evaluate our ParrotGAN model and baselines on our AerialReID dataset and on the PRAI-1581 dataset. The results are encouraging and demonstrate that our ParrotGAN model competes with baselines in visible-range aerial person Re-ID and outperforms them in the cross-modality setting. We have made our code and dataset publicly available.
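The latent-distance similarity step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the StyleGAN2 projection step has already produced latent vectors for the two images, and the function names and the cosine-similarity choice with a fixed threshold are our own illustrative assumptions.

```python
import numpy as np

def latent_similarity(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Cosine similarity between two latent vectors.

    z_a, z_b are latent codes obtained by projecting person images
    into a GAN latent space (projection step not shown here).
    """
    z_a = z_a / np.linalg.norm(z_a)
    z_b = z_b / np.linalg.norm(z_b)
    return float(np.dot(z_a, z_b))

def same_identity(z_a: np.ndarray, z_b: np.ndarray,
                  threshold: float = 0.7) -> bool:
    # Declare a match when the latent codes are close enough;
    # the threshold value here is purely illustrative.
    return latent_similarity(z_a, z_b) >= threshold
```

In a retrieval setting, the same score can also be used directly to rank gallery images by similarity to a query, rather than thresholding.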