The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-4/W2-2022
https://doi.org/10.5194/isprs-archives-XLVIII-4-W2-2022-15-2023
12 Jan 2023

CASTLE: A CONTEXT-AWARE SPATIAL-TEMPORAL LOCATION EMBEDDING PRE-TRAINING MODEL FOR NEXT LOCATION PREDICTION

J. Cheng, J. Huang, and X. Zhang

Keywords: Geospatial Data, Ubiquitous Computing, Location Prediction, Location Embedding, Trajectory Mining, Smart City

Abstract. Next location prediction supports service recommendation, public safety, intelligent transportation, and other location-based applications. Existing location prediction methods usually rely on sparse check-in trajectories and require massive historical data to capture complex spatial-temporal correlations. Trajectories with high spatial-temporal resolution carry rich information, but obtaining personal trajectories with long time series and high spatial-temporal resolution is usually challenging. This paper therefore proposes the Context-Aware Spatial-Temporal Location Embedding (CASTLE) model, a two-stage multi-modal pre-training model for sequence-to-sequence prediction tasks. The method proceeds in two steps. First, large-scale location datasets that are sparse but easier to acquire (i.e., check-in and anonymous navigation data) are used to pre-train location embeddings that capture the multi-functional properties of locations under different contexts. The learned contextual embeddings are then used for downstream location prediction on small-scale but higher-resolution trajectory datasets. Specifically, the CASTLE model combines Bidirectional and Auto-Regressive Transformers to generate a contextual embedding vector, rather than a fixed vector, for each location. Furthermore, we introduce a location- and time-aware encoder that reflects the spatial distances between locations and their visit times. Experiments are conducted on two real trajectory datasets. The results show that the CASTLE model pre-trains beneficial location embeddings and outperforms the model without pre-training by 4.6–7.1%. The proposed method is expected to improve next location prediction accuracy without massive historical data, which will greatly promote the use of trajectory data.
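The abstract's location- and time-aware encoder fuses each location's embedding with signals for when it was visited and how far it lies from the previous stop. The following is a minimal sketch of that idea, not the authors' implementation: the function names, the sinusoidal feature scheme, and the additive fusion are all illustrative assumptions.

```python
import numpy as np

def sinusoidal(vals, dim):
    """Map scalar features (e.g., visit hour, distance) to sinusoidal codes
    (illustrative choice; the paper's encoder may differ)."""
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000 ** (2 * i / dim))
    ang = np.asarray(vals, dtype=float)[:, None] * freqs[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

def encode_trajectory(loc_ids, visit_hours, coords, emb_table):
    """Hypothetical location- and time-aware encoder: location embedding
    plus codes for visit time and distance from the previous location."""
    dim = emb_table.shape[1]
    tok = emb_table[loc_ids]                      # (T, dim) location embeddings
    time_code = sinusoidal(visit_hours, dim)      # visit-time encoding
    prev = np.vstack([coords[:1], coords[:-1]])   # previous stop (first repeats)
    dist = np.linalg.norm(coords - prev, axis=1)  # spatial distance between stops
    dist_code = sinusoidal(dist, dim)
    return tok + dist_code + time_code            # fused input for a Transformer

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))  # hypothetical vocabulary of 100 locations
x = encode_trajectory(np.array([3, 17, 42]),          # visited location IDs
                      np.array([8.0, 12.5, 18.0]),    # visit hours
                      np.array([[0.0, 0.0],           # coordinates of each stop
                                [1.0, 1.0],
                                [1.0, 4.0]]),
                      emb)
print(x.shape)  # (3, 16): one fused vector per visited location
```

The fused vectors would then feed the BART-style (Bidirectional and Auto-Regressive Transformers) backbone, which produces a context-dependent embedding for each location rather than a single fixed vector.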