Comparative Assessment of Point Cloud Annotation Workflows for Applications in Architectural and Spatial Studies
Keywords: Point Cloud Annotation, Manual Classification, Ground Truth, Semantic Segmentation, Software Comparison
Abstract. Manual annotation of 3D point clouds is essential for creating high-quality datasets used to train machine learning models for semantic classification. Despite the development of various annotation tools, ranging from research prototypes to commercial platforms, their usability, functionality, and availability vary greatly depending on users’ technical expertise and the intended application. This study presents a comparative evaluation of manual point cloud annotation tools, focusing on their effectiveness for users with limited Geomatics experience, such as professionals in architecture and urban planning. The research combines literature-based and market-driven analyses to identify prevalent tools, including open-source, commercial, and web-based solutions. Eight selected platforms (CloudCompare, QGIS, ArcGIS Pro, Autodesk ReCap Pro, Leica Cyclone 3DReshaper, GreenValley LiDAR360, TerraScan, and Pointly) were tested on two case studies: an indoor university office and an outdoor urban area in Mantova, Italy. The tools were assessed in terms of usability, interface design, supported formats, classification capabilities, and required user expertise. The results highlight clear differences in usability and performance across the tested platforms. The study concludes that tool selection should align with user expertise, project scale, and environmental complexity. These findings are intended to support informed software choices for professionals in built heritage, architecture, and urban studies who require reliable manual point cloud annotation solutions.