Unsupervised Clustering-based 3D Static Scene Construction Using LiDAR Channel and Azimuth Angle
Keywords: Light Detection and Ranging (LiDAR), Static Roadside LiDAR, 3D Static Environment, DBSCAN, Azimuth, Laser Channel
Abstract. Cameras are typically used at road intersections to collect data for object detection but struggle in low-light and harsh weather conditions. Light Detection and Ranging (LiDAR), on the other hand, is a key technology in 3D vision systems. It provides a 3D point cloud with accurate depth information, but its resolution is cost-dependent, with higher resolutions being more expensive. Deep learning-based methods require large labelled datasets, which increases cost and time, and their accuracy depends on the model trained on the labelled data. Object detection in point cloud data is a challenging task due to incomplete representations, data sparsity, and the unavailability of training data. To overcome this with an unsupervised approach, it is important to first identify the static scene and then detect the moving objects. This work presents a novel approach to constructing the static scene from azimuth angle and laser channel information using unsupervised clustering. It comprises two modules: I) data collection using a VLP-16 LiDAR, and II) static scene construction using a DBSCAN (density-based spatial clustering of applications with noise) based approach. Data is collected at a four-legged intersection and pre-processed to extract aggregated distances corresponding to each unique pair of azimuth angle and laser channel. DBSCAN is applied to the aggregated distances; based on the highest silhouette score and the lowest intra-cluster distance between points, static points are identified and the static scene is constructed. The qualitative evaluation demonstrates that the algorithm effectively and accurately filters out background points.
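To make the clustering step concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each frame is a list of (azimuth, channel, distance) returns, and the names `frames`, `eps_grid`, and `min_samples` are hypothetical. For every unique azimuth-channel pair, distances are aggregated across frames, DBSCAN is run over a few candidate `eps` values, the clustering with the highest silhouette score is kept, and the tightest cluster is treated as the static (background) return.

```python
# Minimal sketch of per-(azimuth, channel) distance clustering with DBSCAN.
# Assumes scikit-learn is available; parameter values are illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def aggregate_distances(frames):
    """Collect all distances observed for each unique (azimuth, channel) pair."""
    per_key = {}
    for frame in frames:
        for azimuth, channel, distance in frame:
            per_key.setdefault((round(azimuth, 1), int(channel)), []).append(distance)
    return per_key

def static_points(per_key, eps_grid=(0.1, 0.2, 0.5), min_samples=5):
    """Cluster the aggregated distances of each pair and pick the static return."""
    static = {}
    for key, distances in per_key.items():
        X = np.asarray(distances, dtype=float).reshape(-1, 1)
        best_labels, best_score = None, -np.inf
        for eps in eps_grid:
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
            mask = labels != -1                     # drop DBSCAN noise points
            clusters = set(labels[mask])
            if not clusters:
                continue
            # A single non-noise cluster is trivially the best grouping;
            # otherwise score the clustering with the silhouette coefficient.
            score = 1.0 if len(clusters) == 1 else silhouette_score(X[mask], labels[mask])
            if score > best_score:
                best_labels, best_score = labels, score
        if best_labels is None:
            continue
        # The cluster with the smallest spread (lowest intra-cluster distance)
        # is taken as the static background distance for this pair.
        spreads = {c: X[best_labels == c].std() for c in set(best_labels) if c != -1}
        tightest = min(spreads, key=spreads.get)
        static[key] = float(X[best_labels == tightest].mean())
    return static
```

In this sketch the static scene is then the set of (azimuth, channel, mean distance) triples returned by `static_points`, and any return that deviates markedly from its pair's static distance can be flagged as a moving-object point.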