SEMANTIC ROAD SCENE KNOWLEDGE FOR ROBUST SELF-CALIBRATION OF ENVIRONMENT-OBSERVING VEHICLE CAMERAS
Keywords: camera calibration, self-calibration, structure from motion, semantic segmentation, road scene understanding
Abstract. Self-calibration of environment-observing vehicle cameras using a structure from motion (SfM) algorithm allows calibration over the vehicle's lifetime without requiring special calibration objects to be present in the calibration images. Critical objects such as moving, poorly textured, or reflective objects can cause scene-specific problems in feature-based correspondence search and reconstruction within the SfM pipeline and can negatively affect the camera calibration. In this contribution, a method is proposed that uses semantic road scene knowledge in the form of semantic masks within a semantically guided SfM algorithm to make the calibration more robust. The semantic masks, obtained by semantic segmentation of the road scene images, are used to exclude image regions showing critical objects from feature extraction. The proposed method is tested with an image sequence recorded in a suburban road scene. It is shown that, compared to a standard SfM algorithm, the semantic guidance leads to smaller deviations of the estimated interior orientation and distortion parameters from reference values obtained by test-field calibration.
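To illustrate the core idea of semantic guidance, the following minimal sketch shows how a per-pixel semantic label map could be turned into a binary mask that excludes critical object classes from feature extraction. This is not the authors' implementation; the class IDs, the label-map convention, and the use of OpenCV SIFT are assumptions made for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# exclude critical semantic classes from feature extraction.
import cv2
import numpy as np

# Hypothetical label IDs for classes to be excluded (moving, poorly
# textured, or reflective objects); the actual IDs depend on the
# semantic segmentation model and label set used.
CRITICAL_CLASS_IDS = [10, 13, 14, 17]  # e.g. sky, car, truck, glass


def masked_features(image_bgr: np.ndarray, label_map: np.ndarray):
    """Detect SIFT features only outside critical semantic regions.

    image_bgr: color road scene image.
    label_map: per-pixel class IDs from a semantic segmentation network
               (same height and width as the image).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Binary mask: 255 where features may be extracted, 0 elsewhere.
    mask = np.full(label_map.shape, 255, dtype=np.uint8)
    for class_id in CRITICAL_CLASS_IDS:
        mask[label_map == class_id] = 0

    # Dilate the excluded regions slightly so that features directly on
    # object boundaries are also suppressed (kernel size is arbitrary).
    excluded = cv2.dilate(cv2.bitwise_not(mask), np.ones((7, 7), np.uint8))
    mask = cv2.bitwise_not(excluded)

    # OpenCV feature detectors accept an optional mask restricting
    # keypoint detection to non-zero mask pixels.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, mask)
    return keypoints, descriptors
```

In a semantically guided SfM pipeline, the masked keypoints and descriptors would then replace the unrestricted feature set in correspondence search, reconstruction, and the subsequent self-calibrating bundle adjustment.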