The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLIII-B1-2022
https://doi.org/10.5194/isprs-archives-XLIII-B1-2022-453-2022
30 May 2022

AUTONOMOUS SENSING AND LOCALIZATION OF A MOBILE ROBOT FOR MULTI-STEP ADDITIVE MANUFACTURING IN CONSTRUCTION

L. Lachmayer, T. Recker, G. Dielemans, K. Dörfler, and A. Raatz

Keywords: Mobile Robotics, Additive Manufacturing, Adaptive Fabrication, Localization, Architecture and Digital Fabrication

Abstract. In contrast to stationary systems, mobile robots have an arbitrarily expandable workspace. As a result, the spatial dimensions of the task to be mastered play only a subordinate role and can be scaled as desired. For the construction industry in particular, which requires the handling and production of substantial components, mobile robots offer a workspace that can be extended without limit thanks to their mobility, and thus increased flexibility. The greatest challenge in mobile robotics lies in the discrepancy between the precision required for the task and the achievable positioning accuracy. External localization systems show significant potential for improvement in this respect but, in many cases, require a line of sight between the measurement system and the robot or a time-consuming calibration of markers. This article therefore presents an approach for an onboard localization system for use in a multi-step additive manufacturing process for building construction. A SLAM algorithm provides an initial estimate of the robot's base pose at the work site; in a subsequent refinement step, the positioning accuracy is enhanced using a 2D laser scanner. This scanner is used to create a 3D point cloud of the 3D-printed component each time a print job of one segment has been completed and before the print job is continued from a new location, enabling layers to be printed on top of each other with sufficient accuracy over many repositioning manoeuvres. When the robot returns to a position for print continuation, the initial and the new point clouds are compared using an ICP algorithm, and the resulting transformation is used to refine the robot's pose estimate relative to the 3D-printed building component. While initial experiments demonstrate the approach's potential, transferring it to large-scale 3D-printed components presents additional challenges, which are highlighted in this paper.
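The following is a minimal sketch of the ICP-based pose-refinement step described in the abstract, written in Python with the open-source Open3D library. It is not the authors' implementation: the file paths, voxel size, correspondence threshold, and the choice of point-to-plane ICP are illustrative assumptions.

# Sketch: refine a SLAM-based pose estimate by registering a new scan of
# the printed component against a reference scan taken before the robot
# repositioned. Paths and parameters are hypothetical placeholders.
import numpy as np
import open3d as o3d

def refine_base_pose(reference_scan_path, new_scan_path,
                     initial_guess=np.eye(4),
                     voxel_size=0.01, max_corr_dist=0.05):
    # Point clouds of the printed segment, e.g. assembled from
    # sweeps of the onboard 2D laser scanner
    reference = o3d.io.read_point_cloud(reference_scan_path)
    new_scan = o3d.io.read_point_cloud(new_scan_path)

    # Downsample for speed/robustness; normals are needed for
    # point-to-plane ICP (an assumed variant, not from the paper)
    ref_ds = reference.voxel_down_sample(voxel_size)
    new_ds = new_scan.voxel_down_sample(voxel_size)
    for pc in (ref_ds, new_ds):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(
                radius=5 * voxel_size, max_nn=30))

    # ICP aligns the new scan to the reference; the initial guess
    # would come from the SLAM-based coarse localization
    result = o3d.pipelines.registration.registration_icp(
        new_ds, ref_ds, max_corr_dist, initial_guess,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    # Rigid transform used to correct the robot's base pose estimate
    # relative to the 3D-printed component
    return result.transformation

In this sketch, the returned 4x4 transformation plays the role of the correction described in the abstract: it maps the pose the robot believes it has resumed at onto the pose implied by the previously scanned geometry of the printed component.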