VISIBLE ROUTES IN 3D DENSE CITY USING REINFORCEMENT LEARNING
Keywords: 3D GIS, Visibility, Routes, Reinforcement learning
Abstract. In recent years, the 3D GIS domain has developed rapidly and has become increasingly accessible to different disciplines. 3D spatial analysis of built-up areas is one of the most challenging topics for the communities currently dealing with spatial data. One of the most basic problems in spatial analysis is visibility computation in such environments. Visibility calculation methods aim to identify the parts of objects in the environment that are visible from a single viewpoint or from multiple viewpoints.
In this work, we present a unique method, named Visibility Velocity Obstacles (VVO), which combines visibility analysis in 3D environments with a dynamic motion-planning algorithm and, together with a Markov process, defines a spatial visibility analysis for routes in a dense 3D city environment.
Based on our VVO analysis, we use a Reinforcement Learning (RL) method to find an optimal action policy in a dense 3D city environment described as a Markov decision process, navigating along the most visible routes. To the best of our knowledge, this is the first RL solution to the visibility analysis problem in dense 3D environments, generating a sequence of viewpoints that provides optimal visibility along different routes in an urban environment. Our analysis is based on a fast and unique solution for visibility boundaries, and formulates the problem using RL methods.
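To illustrate the kind of formulation described above, the following is a minimal sketch, not the implementation used in this work, of how a most-visible-route problem can be cast as a Markov decision process and solved with tabular Q-learning. The grid size, per-cell visibility scores, reward shaping, and all identifiers are illustrative assumptions made only for this example.

# A minimal, illustrative sketch (not the authors' implementation): casting
# most-visible-route planning as a small Markov decision process and solving it
# with tabular Q-learning.  The grid, visibility scores, and reward shaping
# below are assumptions made only for this example.
import numpy as np

rng = np.random.default_rng(0)

N = 5                                    # 5x5 grid standing in for a discretised city
visibility = rng.random((N, N))          # assumed visibility score of each cell, in [0, 1]
start, goal = (0, 0), (N - 1, N - 1)

actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right
Q = np.zeros((N, N, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, a):
    """One MDP transition: the reward trades route length against cell visibility."""
    r, c = state
    dr, dc = actions[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < N and 0 <= nc < N):
        return state, -1.0, False                # bumping the boundary just costs a step
    if (nr, nc) == goal:
        return (nr, nc), 10.0, True              # terminal bonus for reaching the goal
    return (nr, nc), visibility[nr, nc] - 1.0, False   # visible cells are cheaper to cross

for episode in range(3000):
    state = start
    for t in range(200):                         # cap episode length during training
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        target = reward + (0.0 if done else gamma * np.max(Q[nxt]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = nxt
        if done:
            break

# Greedy rollout of the learned policy: a short route biased towards visible cells.
state, route = start, [start]
while state != goal and len(route) < 4 * N * N:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    route.append(state)
print(route)

In this toy setting the per-step reward of visibility minus a constant step cost makes the learned route balance path length against accumulated visibility; the paper's actual formulation replaces the grid and scores with the VVO-based visibility analysis of the 3D city model.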