The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-4/W14-2025
https://doi.org/10.5194/isprs-archives-XLVIII-4-W14-2025-333-2025
26 Nov 2025

Semantic-aware Multi-Scale Simplification of Urban-Scale 3D Real-Scene Mesh Models

Wang Xia, Yong Luo, Jiang Fan, Shaoyi Wang, Xinyi Liu, Yongjun Zhang, Liang Fei, Wei Wang, Bin Zhang, Jinming Zhang, and Zeshuang Zheng

Keywords: 3D real scene data, Planar feature extraction, Semantic segmentation, 3D model simplification

Abstract. Recent advances in measurement technologies have significantly improved the accuracy of multi-scale 3D reconstruction, yet the resulting large-scale data, with their inherent redundancy, pose challenges for storage and real-time rendering. This paper proposes a systematic framework for efficient lightweight processing of 3D real-scene mesh models, integrating planar feature extraction, point cloud classification, and semantics-driven simplification. The key scientific contributions are: (1) a preprocessing step that prepares 3D real-scene models for the plane segmentation algorithm; (2) a training-free point cloud classification method that employs nine complementary geometric-semantic features with probabilistic smoothing, achieving computationally efficient classification without deep learning or annotated data; and (3) a semantics-driven simplification strategy that dynamically adjusts processing priorities according to feature importance. Experimental results demonstrate the framework's effectiveness in preserving critical architectural features (e.g., façades and roofs) while aggressively compressing less significant elements (e.g., terrain and clutter), achieving a balance between data reduction and information retention. At equivalent simplification ratios, the algorithm improves model accuracy by 23% over the baseline method, with a 31% accuracy gain specifically for critical geometric features. At equivalent accuracy levels, it reduces the face count by 23% relative to the baseline. The proposed methods advance 3D urban modeling by addressing both technical and practical challenges in large-scale scene processing.
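The semantics-driven simplification strategy described in contribution (3) can be illustrated with a minimal sketch: faces carry a semantic label, and the cost of each candidate edge collapse is scaled by a per-class importance weight, so that low-importance classes (terrain, clutter) are simplified before high-importance ones (façades, roofs) at the same geometric error. The class names, weight values, and function names below are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# Sketch of semantics-weighted simplification ordering (assumed weights,
# not the paper's values): higher class weight => a collapse there costs
# more, so those faces are preserved longer.
import heapq

CLASS_WEIGHT = {
    "facade": 4.0,    # preserve aggressively
    "roof": 3.0,
    "terrain": 0.5,   # simplify aggressively
    "clutter": 0.25,
}

def prioritized_collapses(candidates):
    """candidates: list of (geometric_error, semantic_label) pairs.
    Returns collapses ordered cheapest-first, where cost is the
    geometric error scaled by the class importance weight."""
    heap = [(err * CLASS_WEIGHT.get(label, 1.0), label, err)
            for err, label in candidates]
    heapq.heapify(heap)
    order = []
    while heap:
        cost, label, err = heapq.heappop(heap)
        order.append((label, err, cost))
    return order

collapses = prioritized_collapses([
    (0.2, "facade"),   # same geometric error as the terrain edge below,
    (0.2, "terrain"),  # but the facade edge is collapsed much later
    (0.05, "roof"),
    (1.0, "clutter"),
])
# → terrain and clutter edges surface before the facade edge
```

A full implementation would plug this weighting into a quadric-error-metric collapse queue; the point of the sketch is only the cost scaling, which makes the simplification ratio per class an emergent property of the weights rather than a hard per-class budget.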
