Investigation and Implementation of Multi-Stereo Camera System Integration for Robust Localization in Urban Environments
Keywords: Stereo Camera, Localization, 3D Point Cloud, Multi-sensor, Prior Map
Abstract. Urban environments are dynamic and complex, posing constant challenges for the localization and navigation of autonomous vehicles (AVs) and demanding more capable sensor systems for effective autonomous navigation. Autonomous vehicles use sensors such as LiDAR, cameras, and radar to traverse complicated urban environments with precision. These technologies improve perception and localization, but each has its own shortcomings: LiDAR can be costly and falters in adverse weather, cameras are sensitive to lighting conditions, and radar lacks high-resolution detail. Beyond these limitations, environmental factors such as signal-blocking skyscrapers, unpredictable obstacles, and the high cost of precision sensing add further complexity. A multi-sensor integrated solution can be a reliable way to overcome these challenges. Our work explores the use of a multi-stereo camera array that provides 360° perception for localization in dense urban environments. We use computer vision algorithms to derive 3D point clouds from stereo images and localize the cameras against a prior 3D map, balancing cost and performance. We tested the system in Calgary's urban setting under varied lighting conditions and in GNSS-denied zones. Our approach provided accurate localization in 85% of the cases we tested. The results demonstrate that our multi-stereo camera system can achieve robust localization in challenging urban situations, offering a cost-effective alternative to LiDAR-based systems while maintaining adequate accuracy.
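The core geometric step the abstract describes, deriving a 3D point cloud from a stereo pair, can be illustrated with standard stereo triangulation. The sketch below is not the authors' implementation; it assumes a rectified stereo pair with known focal length f (pixels), baseline B (metres), and principal point (cx, cy), and back-projects each valid disparity via Z = fB/d and the pinhole model:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, B, cx, cy):
    """Back-project a disparity map from a rectified stereo pair
    into a 3D point cloud in the left-camera frame.

    disparity : (H, W) array of disparities in pixels (d > 0 where valid)
    f         : focal length in pixels
    B         : stereo baseline in metres
    cx, cy    : principal point in pixels
    """
    H, W = disparity.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = disparity > 0
    Z = f * B / disparity[valid]        # depth from stereo triangulation
    X = (u[valid] - cx) * Z / f         # pinhole back-projection, x-axis
    Y = (v[valid] - cy) * Z / f         # pinhole back-projection, y-axis
    return np.stack([X, Y, Z], axis=1)  # (N, 3) points in metres

# Example: one pixel with 10 px disparity, f = 700 px, B = 0.12 m
pts = disparity_to_point_cloud(np.array([[10.0]]), f=700.0, B=0.12, cx=0.0, cy=0.0)
```

Point clouds produced this way from each camera in the array can then be registered against the prior 3D map (e.g. with an ICP-style alignment) to recover the vehicle pose.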