The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1/W4-2025
https://doi.org/10.5194/isprs-archives-XLVIII-1-W4-2025-123-2025
16 Jun 2025

Lidargrammetric co-matching and co-adjustment – a new method of photogrammetric and LiDAR data integration

Antoni Rzonca and Mariusz Twardowski

Keywords: LiDAR, Bundle adjustment, Image matching, Data registration, Lidargrammetry

Abstract. The accuracy of raw spatial data depends on several factors, including sensor accuracy, sensor calibration, and direct and indirect georeferencing. The quality of the captured data can be further improved by effective data processing. The present research is dedicated to the joint enhancement of photogrammetric and LiDAR data.
Nowadays, the trajectory of the survey vehicle can be degraded by GPS signal jamming, a phenomenon attributable to the current international situation. This motivates a combined adjustment of RGB/CIR and LiDAR data. The proposed method requires both data sets: RGB or CIR images of the study area, together with LiDAR intensity strips covering the same area. The approach is founded on the concept of lidargrammetry (Jayendra-Lakshman and Devarajan, 2013), which applies photogrammetric algorithms to LiDAR data processing (Rzonca and Twardowski, 2022).
The research tool, PyLiGram, generates synthetic images of the LiDAR data, known as lidargrams or renders, using the same interior orientation parameters (IOPs) as the real camera that captured the RGB/CIR images. The synthetic images are formed by central projection according to the IOPs and predefined exterior orientation parameters (EOPs). The projection is reversible thanks to unique LiDAR point identifiers: points can be projected onto synthetic images, and after the same images are processed, the points can be intersected to create a new point cloud.
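The central projection step can be sketched as follows. This is not the paper's PyLiGram implementation, only a minimal illustration assuming a pinhole camera model, a world-to-camera rotation matrix `R` derived from the EOPs, a projection centre `C`, and a focal length `f` from the IOPs; returning a dictionary keyed by point identifier mirrors the reversibility the method relies on.

```python
import numpy as np

def project_points(points, ids, R, C, f):
    """Centrally project LiDAR points into a synthetic image (lidargram).

    points -- (N, 3) world coordinates of the LiDAR points
    ids    -- N unique LiDAR point identifiers (keep projection reversible)
    R      -- (3, 3) world-to-camera rotation matrix (from the EOPs)
    C      -- (3,) projection centre (from the EOPs)
    f      -- focal length (from the IOPs)

    Returns a dict mapping point id -> (x, y) image-plane coordinates.
    """
    pc = (np.asarray(points, dtype=float) - C) @ R.T  # world -> camera frame
    xy = f * pc[:, :2] / pc[:, 2:3]                   # collinearity: perspective division
    return {i: (float(x), float(y)) for i, (x, y) in zip(ids, xy)}
```

For a nadir view with `R = diag(1, -1, -1)` and the camera 1000 m above the ground, a point 100 m off-nadir lands at x = f * 100 / 1000 in the image plane, and its identifier survives the projection so the point can later be traced back into the cloud.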
The paper sets out the potential of applying photogrammetric methods to the co-adjustment of RGB/CIR and LiDAR data. The process involves the joint processing of RGB/CIR images and lidargrams using deep-learning-based matching techniques and a common least squares adjustment. The refined EOPs are then used to intersect a refined point cloud and to improve the positions of the images.
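The final intersection step, in which matched rays from images with refined EOPs are combined into new point coordinates, can be illustrated with a standard linear least-squares ray intersection. This is a generic sketch, not the adjustment used in the paper: each ray is given by a projection centre and a direction vector, and the returned point minimises the sum of squared distances to all rays.

```python
import numpy as np

def intersect_rays(centres, directions):
    """Least-squares intersection of two or more image rays.

    centres    -- list of (3,) projection centres (refined EOPs)
    directions -- list of (3,) ray direction vectors (need not be unit length)

    Solves sum_i (I - d_i d_i^T) P = sum_i (I - d_i d_i^T) C_i for the
    3D point P closest to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for C, d in zip(centres, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += M
        b += M @ np.asarray(C, dtype=float)
    return np.linalg.solve(A, b)
```

With exact, noise-free rays the solution coincides with their true intersection; with residual EOP errors it yields the closest point in a least-squares sense, which is what makes the refined EOPs directly usable for rebuilding the point cloud.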
A key benefit of this approach is that, depending on the case, it can eliminate the need for LiDAR control patches/points or for ground control points for the RGB/CIR images.
The process was tested on several datasets, each with a point cloud density of approximately 100 points per square metre and images with a ground sample distance (GSD) between 5 and 10 centimetres.
