Reconstruction of Building LoD2 Wireframe Models Using Semantic Segmentation
Keywords: 3D Building Modeling, Semantic Segmentation, Multi-object Deep Learning, 3D Building Wireframe, Convolutional Neural Network
Abstract. LoD2 building models can be used in various digital twin-related applications such as urban planning, disaster management, green energy optimization, and solar panel placement recommendation. Existing technology for 3D building modeling still relies on a large amount of manual work due to the irregular geometries of different roof types. Wireframes have been shown to be an effective representation for 3D buildings, especially at LoD2. Because of the complexity and diversity of roof types in urban areas, 3D building modeling remains a challenging task. In this paper, we propose a new framework for generating 3D wireframes that models different roof types. While high-resolution airborne images can be exploited to capture fine roof details, they struggle in areas with poor contrast or shadows. The proposed framework incorporates the Digital Surface Model (DSM) as an auxiliary data source to address this limitation. In this work, we focus on extracting the geometrical roof components, including the lines and planes of individual buildings, to achieve a consistent LoD2 building reconstruction. The proposed methodology is divided into two phases: (1) jointly predicting building lines and roof planes from the RGB imagery and DSM, and (2) generating 3D wireframes of buildings from the extracted roof planes and lines. Height values from the LiDAR point clouds are then used to lift the wireframes to 3D. Experiments with 1,620 buildings from Fredericton, the capital of New Brunswick in eastern Canada, demonstrate an IoU of 0.9337, an F1-score of 0.939, and an F2-score of 0.9378 for the roof geometrical component detection phase, as well as an RMSE of about 0.2-0.8 m for the final 3D building models compared with the original LiDAR data.
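The abstract reports IoU, F1, and F2 scores for the detection phase. As a point of reference, the sketch below shows how these metrics are computed from binary confusion counts (true positives, false positives, false negatives); the counts used in the example are illustrative only and are not taken from the paper's experiments.

```python
def iou(tp: int, fp: int, fn: int) -> float:
    # Intersection over Union: overlap divided by union of prediction and ground truth
    return tp / (tp + fp + fn)

def f_beta(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    # F-beta score; beta=1 gives F1, beta=2 weights recall twice as heavily (F2)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative counts (not the paper's data): 90 TP, 5 FP, 5 FN
print(round(iou(90, 5, 5), 4))        # 0.9
print(round(f_beta(90, 5, 5), 4))     # F1 = 0.9474
print(round(f_beta(90, 5, 5, 2), 4))  # F2 = 0.9474 (equals F1 when precision == recall)
```

Note that when precision equals recall, every F-beta score collapses to that common value, which is why F1 and F2 coincide in this symmetric example; the paper's slightly different F1 (0.939) and F2 (0.9378) indicate precision marginally above recall.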