IMPROVED CLASSIFICATION OF SATELLITE IMAGERY USING SPATIAL FEATURE MAPS EXTRACTED FROM SOCIAL MEDIA
Keywords: Deep Learning, Satellite Images, Classification, Social Media Mining, Data Fusion
Abstract. In this work, we consider the exploitation of social media data in the context of Remote Sensing and Spatial Information Sciences. To this end, we explore a way of augmenting a system for the classification of satellite images with information represented by geo-located feature vectors. For this purpose, we present a fairly general data fusion framework based on a Convolutional Neural Network (CNN), together with an initial evaluation of our approach on features from geo-located Twitter postings combined with Sentinel images. For this evaluation, we selected six simple Twitter features derived from the metadata, which we believe carry information about the spatial context. We present initial experiments using geotagged Twitter data from Washington, DC and Sentinel images covering this area. The classification goal is to determine local climate zones (LCZ). First, we test whether the selected features are correlated with the LCZ class at the geo-tag position by applying a simple boosted-tree classifier to these data. The resulting classifier performs clearly better than random, so the features do carry information about the LCZ. To quantify the improvement achieved by our method, we compare the classification with and without the Twitter feature maps: we apply a standard pixel-based CNN classification of the Sentinel data as a baseline model, then extend the CNN input with the additional Twitter feature maps and assess their contribution to the overall F1-score of the classification, which we determine by spatial cross-validation.
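The fusion step described in the abstract, adding Twitter feature maps as extra input channels alongside the Sentinel bands, can be sketched as follows. This is a minimal illustration with hypothetical shapes and band counts (the function name `fuse_inputs` and the toy 2x2 tile are our own, not from the paper); the actual pipeline operates on rasterized Sentinel and Twitter data.

```python
# Sketch: concatenating Twitter feature maps to Sentinel bands along the
# channel axis, so a standard CNN can consume the augmented input.
# Channels-first layout: each input is a list of 2-D grids (H x W).

def fuse_inputs(sentinel_bands, twitter_maps):
    """Return the channel-wise concatenation of both inputs.

    All grids must share the same height and width, since the Twitter
    feature maps are rasterized onto the same grid as the satellite image.
    """
    h, w = len(sentinel_bands[0]), len(sentinel_bands[0][0])
    for grid in sentinel_bands + twitter_maps:
        assert len(grid) == h and all(len(row) == w for row in grid)
    return sentinel_bands + twitter_maps

# Toy example: 4 Sentinel bands and the 6 Twitter feature maps on a 2x2 tile.
sentinel = [[[0.1, 0.2], [0.3, 0.4]] for _ in range(4)]
twitter = [[[0.0, 1.0], [1.0, 0.0]] for _ in range(6)]
fused = fuse_inputs(sentinel, twitter)
# The baseline CNN sees 4 channels; the augmented CNN sees all 10.
```

The design choice here is to treat the social media features exactly like additional spectral bands, which leaves the CNN architecture unchanged apart from the number of input channels.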