DEVELOPMENT OF A NEW LOW-COST INDOOR MAPPING SYSTEM – SYSTEM DESIGN, SYSTEM CALIBRATION AND FIRST RESULTS

For the mapping of building interiors, various 2D and 3D indoor surveying systems are available today. These systems differ essentially in price and accuracy as well as in the effort required for fieldwork and post-processing. The Laboratory for Photogrammetry & Laser Scanning of HafenCity University (HCU) Hamburg has developed, as part of an industrial project, a low-cost indoor mapping system which enables the systematic inventory mapping of interior facilities with low staffing requirements and a reduced, measurable expenditure of time and effort. The modelling and evaluation of the recorded data take place later in the office. The indoor mapping system of HCU Hamburg consists of the following components: laser range finder, panorama head (pan-tilt unit), single-board computer (Raspberry Pi) with digital camera, and battery power supply. The camera is pre-calibrated in a photogrammetric test field under laboratory conditions; remaining systematic image errors are corrected simultaneously during the generation of the panorama image. For cost reasons, the camera and laser range finder are not coaxially arranged on the panorama head. Therefore, the eccentricity and alignment of the laser range finder with respect to the camera must be determined in a system calibration. For the verification of the system accuracy and the system calibration, the laser points were also determined by measurements with total stations. The differences to this reference were 4-5mm for individual coordinates.


INTRODUCTION
Digital building models represent the basis for planning, construction and renovation as well as for the management of buildings. In this context, Building Information Modelling (BIM) describes a method for the optimized three-dimensional planning, execution of construction work and management of buildings using appropriate software, so that all relevant building data are digitally recorded, combined and interconnected. BIM applies both in the building industry for planning and executing construction work (architecture, engineering, building services) and in the facility management sector.
Primarily, the areas of buildings in the building and real estate economy represent the basis for project planning, costing and income planning as well as for construction financing. Since building areas are also used for the apportionment of operating costs, incomplete, misleading or false data about building areas can have extensive, unwanted consequences (Kalusche 2011). For 90% of the real estate inventory, area calculations are not correct (PresseBox 2005, Wagner 2015). Thus the real estate industry in many cases uses incorrect data about its building stock, i.e. reliable current data is often not available at all.
In order to measure the current areas of properties, different modern measurement methods such as terrestrial laser scanning (Kersten et al. 2005, Kersten & Lindstaedt 2012), digital photogrammetry (Kersten et al. 2004) and tachymetry are available, all of which work satisfactorily for the 3D recording of architectural objects and are established in the market. However, these procedures and systems require a certain level of expert knowledge and entail high acquisition and training costs. A laser-photogrammetric imaging system using panorama photography and laser ranging on the axis opposite to the camera was presented by Clauß (2011) and Clauß (2012). Hering (2012) showed in his investigations into instrument and model accuracy that this laser-photogrammetric imaging system is able to ensure measurements up to an accuracy of 2cm. Fangi (2007) presented an approach in which several spherical panorama photographs were used for the 3D modelling of the interiors of two Italian churches. Furthermore, so-called depth sensors (time-of-flight cameras) have already been used for the recording of interiors (Zhu & Donia 2013, Henry et al. 2014). How semantic information can be automatically extracted from point clouds derived from photographs of interiors for BIM applications has been described by Tamke et al. (2014) and Krispel et al. (2015).
The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg is developing a low-cost imaging system for interiors in the context of an R&D cooperation project within the Central Innovation Programme (ZIM) for small and medium-sized enterprises (SMEs). Partners in this project are the Institute for Computer Science (Multimedia Information Processing) of the Christian-Albrechts University Kiel, Germany, and the company ETU Software GmbH from Cologne, Germany. The goal of this project is to develop new, or significantly improve existing, products, processes and technical services. In the following, the system configuration (chapter 2), the camera and system calibration (chapters 3 and 4), the data acquisition (chapter 5) and the 3D modelling (chapter 6), including first results, are described.

SYSTEM CONFIGURATION
The indoor mapping system of the HCU Hamburg is designed to capture complete 360° panorama images of interior spaces. The overall system consists of a panorama head with a camera, which is firmly mounted together with a laser range finder on a common platform. The laser range finder points in the viewing direction of the camera and simultaneously measures a distance to a non-specific point in the interior for each photo taken.
In order to gain first experience and acquire test data, a prototype of the low-cost mapping system was built at HCU Hamburg using the following components (Fig. 1). The Manfrotto multi-line panorama head fulfils two substantial functions for the generation of spherical panoramas: 1. the rotation about a second axis (tilting axis), in order to take photographs at different vertical angles, and 2. the movement of the camera around the entrance pupil of the lens (nodal point).
Control software with a simple user interface runs on the Raspberry Pi computer, which enables acquisition of the images and the distance measurements as well as the registration of the camera axes (horizontal and vertical angles).The laser range finder is connected to the Raspberry Pi computer via USB interface.The Raspberry Pi computer is equipped with a WiFi USB adapter, which sets up a local WLAN network, so that the system can be operated by remote connection from an external personal computer, e.g. a laptop.

CAMERA CALIBRATION
To acquire geometrically correct information from the panoramic images, the digital camera was selected not only for cost reasons but also for its optical characteristics such as scale, distortion and stability. Camera calibration in a 3D test field was selected as the method for the objective validation of these criteria.
For the geometrical investigation of the camera, a provisional test field was established at the new building location of the HCU Hamburg (Fig. 2). The test field has a size of 4.5m (width) × 3.3m (height) × 2.0m (depth). The measurements of the control points were carried out with a Leica TM30 total station. For the adjustment of the geodetic 3D network the program PANDA from GEOTEC was used. The standard deviations of the adjusted control point coordinates were 0.3mm (XY) in plane and 0.9mm (Z) in depth. Unfortunately, no serial number is available for the clear identification of each camera module. The effects of the additional parameters on the image coordinates were determined for the parameter groups (A1, A2), (A3, A4) and (A5, A6) separately and all together. These effects are represented in Fig. 3 with an 11 × 11 grid on the sensor chip. The green grid shows the sensor (chip size 3.63mm × 2.72mm) and the red grid shows the effect of the additional parameters on the image coordinates, enlarged 10 times.

Camera
For the calibrated cameras the following results are valid: 1. The effect of the radial symmetric distortion (A1, A2) is approximately +20µm at the corners of the sensor. The values are similar for different camera modules (Fig. 4).
2. The effect of affinity and shearing (A3, A4) is usually smaller than 1µm.
3. The effect of the radial asymmetric and tangential distortion (A5, A6) is usually smaller than 3µm.
The radial symmetric distortion has the largest systematic effect on the image coordinates. All parameters are significantly determined in the bundle block adjustment. Furthermore, the values show little variation for the different cameras. The repeated calibrations show the following results: 1. The camera constant c varied only slightly and seems to be stable. 2. On the other hand, the position of the principal point shows deviations of up to 50µm. 3. The radial symmetric distortion (A1, A2) varied only slightly and seems to be relatively stable (Fig. 4). 4. The other additional parameters (affinity and shearing (A3, A4), radial asymmetric and tangential distortion (A5, A6)) demonstrate no clear trend. However, these last two parameter groups only have a relatively small effect on the final image coordinates. The Raspberry Pi camera was compared with two digital SLR cameras (Nikon D700 and D90, both with 20mm lens) and an industrial camera from IDS (with autofocus) in a 3D test field calibration, in order to assess the accuracy potential of the low-cost camera. The results of the comparison are summarized in Table 3. The image measuring accuracy of the Raspberry Pi camera of 0.2 pixels (1/5 pixel) is nearly as good as that of the SLR cameras, while the empirical accuracy (comparison of photogrammetrically determined points with the total station reference) is worse by a factor of 2-3, but significantly better than that of the industrial camera.

SYSTEM CALIBRATION
The system calibration of the HCU 3D-IMAGER includes the following steps: (a) the adjustment of the camera into the nodal point of the panorama head and (b) the determination of the eccentricity and orientation of the two sensors (camera and laser range finder). The entrance pupil of the camera lens must lie accurately in the nodal point, since the horizontal and vertical rotation of the camera must be performed around the entrance pupil of the lens. During the camera rotation around the entrance pupil, the laser range finder rotates simultaneously on the panorama head. It is assumed that the relative position and rotation of the laser range finder do not change with respect to the camera. The vertical and horizontal adjustment of the camera is done separately via alignment on a close and a remote point.
The camera is shifted until the position of the entrance pupil is precisely fixed in the nodal point using the alignment plates of the panorama head.An imprecise alignment leads to parallaxes at close range while turning the camera, which results in gaps in the panorama image.The relative position of the laser range finder to the camera is defined by a shift vector (eccentricity), while the relative adjustment of the laser range finder to the camera is described by a direction vector.Thus the relation between camera and laser range finder is represented by a coordinate transformation with five degrees of freedom.The shift vector is measured with an accuracy of 1-2mm, while the determination of the direction vector is performed with an iterative method.
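The eccentricity and direction vector described above can be sketched as a simple computation of the laser spot position in the camera (platform) frame; the numeric calibration values below are purely illustrative, not actual calibration results:

```python
import numpy as np

def laser_point_platform(distance, eccentricity, direction):
    """Position of the laser spot in the platform (camera) frame.

    eccentricity: 3D shift from the camera projection centre to the
                  range finder origin (measured to 1-2 mm accuracy).
    direction:    laser-axis vector relative to the camera axis
                  (from the iterative calibration); normalized here,
                  so it contributes only 2 of the 5 degrees of freedom.
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)  # enforce unit length
    return np.asarray(eccentricity, float) + distance * d

# hypothetical calibration values, for illustration only
ecc = np.array([0.05, -0.02, 0.01])   # metres
dirv = np.array([0.0, 0.0, 1.0])      # laser assumed parallel to camera axis
p = laser_point_platform(4.0, ecc, dirv)
```

Together with the 3 shift components, the 2 angular components of the unit direction vector give the five degrees of freedom mentioned in the text.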
For the calibration and validation of the overall system, two panorama images were taken with three photo series of 18 photos each in the 3D test field. The laser points were measured as reference with a Trimble S6 total station (measurement 1) and two Trimble 5603 DR200+ (measurements 2a and 2b). The precision (standard deviation) of the total stations is 2" (= 0.6mgon) and 3" (= 1.0mgon), respectively, for an angle measurement (horizontal and vertical) and 2mm + 2ppm for a distance measurement without reflector. The polar coordinates of the laser points were converted into Cartesian coordinates, so that both systems, i.e. the laser points measured by total station (system 1) and the panorama image (system 2), are available in the same coordinate system. For the accuracy evaluation, the coordinates of the laser points were transformed distortion-free by a spatial similarity transformation (Helmert transformation, 7-parameter transformation) into the panorama coordinate system of the HCU 3D-IMAGER (system 1 → system 2). In an optimization process (after Sansò 1973) the seven transformation parameters were determined: a spatial shift vector (3 parameters), a scale factor (1 parameter) and a spatial rotation (3 parameters). The transformation parameters were determined using five control points (#CP) in each dataset. For the five control points the RMS values RMS(X), RMS(Y) and RMS(Z) were determined. The remaining 37 points were used as check points (#ChP), whose differences were likewise expressed as RMS values (RMS(X), RMS(Y), RMS(Z)). The results are summarized in Table 4.
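The similarity transformation and RMS evaluation described above can be sketched as follows. The paper refers to an optimization after Sansò (1973); this illustration instead uses the common SVD-based (Umeyama-style) least-squares solution for the seven parameters:

```python
import numpy as np

def helmert_7param(src, dst):
    """Least-squares similarity transform dst ~ s * R @ src + t.

    SVD-based solution for the 7 parameters: 3 shifts, 3 rotations,
    1 scale. src, dst: (N, 3) arrays of homologous control points.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                       # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                               # proper rotation
    s = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def rms_per_axis(s, R, t, pts, ref):
    """RMS(X), RMS(Y), RMS(Z) of the residuals at check points."""
    res = ref - (s * (R @ pts.T).T + t)
    return np.sqrt((res ** 2).mean(axis=0))
```

In the workflow above, `helmert_7param` would be fed the five control points, and `rms_per_axis` evaluated on the remaining 37 check points.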

Camera
The residuals of the control points are between 2mm and 5mm after the adjustment. However, the empirical accuracy of the check points is between 3mm and 11mm. These higher deviations can be attributed to the measurement of the laser points and to the precision of the measuring instrument. Since each laser point appears as a relatively large spot in the photos and since its border is not well defined, targeting the laser point centre with the total station is not always completely precise (see also Fig. 5). Furthermore, the precision of the total station is limited by the precision of the (reflectorless) distance measurement of 2mm.
Table 4. Empirical accuracy of laser points of the HCU 3D-IMAGER using five control points

3D INDOOR DATA ACQUISITION
For the investigations into 3D indoor mapping, the calibrated imaging system HCU 3D-IMAGER has been used. The image acquisition and the laser ranging are performed simultaneously, so that the red laser point is visible in each single image (Fig. 5). The system is positioned in the centre of small rooms with a sufficient view to all relevant corners, or in an otherwise optimal configuration, in order to realize sight distances shorter than 15m. After levelling the system, it is ready for use. A laptop is used to control the system, operating the Raspberry Pi control computer via WLAN. One single dataset consists of a laser distance and a photo (Fig. 5, right), which is saved on the Raspberry Pi computer together with the orientation parameters of the taken photo (horizontal angle in degrees from a fixed raster and vertical angle by reading the scale). Depending on the desired imaging configuration (image overlap), all horizontal photographs are taken with one vertical angle setting. For a standard room, 18 horizontal photographs are currently used, which corresponds to an overlap of 51%. These 18 photographs are acquired at different inclinations, e.g. with the five inclination stages +60°, +30°, 0°, -30° and -50°, in order to be able to compute a full-spherical panorama (Fig. 6). A lower inclination is not necessary, since these photos would mainly show the panorama head as well as the tripod.
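The relationship between the number of photographs per row and the stated overlap can be illustrated with a short calculation; with 18 images the horizontal step is 360°/18 = 20°, and the reported 51% overlap then implies an effective horizontal field of view of roughly 41° (an assumed value, not stated in the text):

```python
def horizontal_overlap(n_images, hfov_deg):
    """Fraction of the horizontal field of view shared by neighbouring
    images when n_images are spaced evenly over 360 degrees."""
    step = 360.0 / n_images        # angular spacing between exposures
    return 1.0 - step / hfov_deg

# assumed effective horizontal FOV of ~41 deg (illustrative only)
ov = horizontal_overlap(18, 41.0)  # roughly 0.51, i.e. ~51% overlap
```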
After completion of the photography, 360° spherical panorama images with at least 12000 pixels of horizontal resolution were generated from the individually acquired images. The horizontal and vertical angles, which are defined in a project file, were used as approximate values for the determination of the exact rotations. These adjusted rotation angles are the basis for the calculation of the coordinates of each laser point in object space. The production of a panorama proceeds in three steps: 1. Determination of general tie points and of special tie points on vertical lines: since the alignment of the individual images cannot be performed with sufficient accuracy beforehand, the pictures are linked by homologous points in neighbouring images. 2. Optimization of the orientation parameters: using these tie points, the image orientations are determined in a subsequent optimization process. 3. Stitching of the single images: using the improved image orientations, the individual photos are stitched to a panorama, including brightness balancing.
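The final step, computing each laser point in object space from the adjusted rotation angles and the measured distance, amounts to a polar-to-Cartesian conversion; the angle conventions below (horizontal angle measured from the X axis, elevation measured from the horizon) are assumptions for illustration:

```python
import math

def laser_to_cartesian(hz_deg, v_deg, dist):
    """Convert a laser measurement (horizontal angle, elevation angle,
    slope distance) into local Cartesian coordinates at the station.

    Assumed conventions: hz_deg counted from the X axis in the
    horizontal plane, v_deg counted upwards from the horizon."""
    hz, v = math.radians(hz_deg), math.radians(v_deg)
    x = dist * math.cos(v) * math.cos(hz)
    y = dist * math.cos(v) * math.sin(hz)
    z = dist * math.sin(v)
    return x, y, z
```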
For steps 1 and 2 the panorama tools (http://panotools.sourceforge.net/) and for step 3 the CAU Stitcher (from the Institute for Computer Science of the Christian-Albrechts University Kiel) have been used. Additionally, the CAU Stitcher computes Cartesian 3D coordinates for each laser point. For the subsequent 3D modelling, all characteristic parameters are written into a project file and are therefore consistently documented.

3D MODELLING
Using only one individual panorama image, a scaled reconstruction of an indoor object (room) is not possible. Furthermore, the 3D coordinates of the laser points (see Fig. 7, small blue image), which represent the 3D room as a very sparse point cloud, do not allow a complete reconstruction of the object.
In addition, not all laser points are projected onto relevant planar surfaces. Only by combining the panorama image with at least one laser distance measurement can a scaled reconstruction of the room be obtained. On the basis of these model assumptions, the company ETU Software GmbH in Cologne has developed the 3D modelling software 3D BIS for the manual modelling of interiors.
During the reconstruction process, additional information can be specified, e.g. the labelling of object parts (ground floor, ceiling, wall, …), the material (stone, wood, PVC, …) and the status (OK, take down, repair, …). The software administers the individually acquired indoor data and the associated metadata, and automatically computes volumes and areas. For data exchange the constructed 3D model can be exported in the COLLADA format.
The reconstruction of a room with the 3D modelling software 3D BIS works as follows: on the condition that the HCU 3D-IMAGER has been set up perpendicularly on the ground floor and laser measurements to the ground floor took place, the ground plane can be constructed. Within the ground plane, the floor plan can be constructed next by the definition of 2D surfaces using the panorama image in the background (Fig. 8 right, yellow surface). On the condition that the walls are perpendicular to the ground floor plane, the wall planes can be constructed afterwards (Fig. 8 centre left, yellow surface). Within the wall planes, wall surfaces and openings (windows and doors) can be determined (Fig. 8 centre right, yellow surface with opening). After all walls are constructed, the modelling of the room can be completed with the construction of the ceiling. The final 3D model can be exported to be further processed or visualized in other 3D modelling software (e.g. SketchUp) (Fig. 8 right). Fig. 9 shows the generated panorama photo, including the visible laser points, of another test object together with the constructed room, while Fig. 10 represents the generated panorama image and the constructed room, which is still in the process of construction.
Figure 9. Generated panorama including depicted laser points and modelled room with slanted walls
Figure 10. Generated panorama and modelled room which is under construction
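The automatic computation of areas and volumes mentioned above can be sketched for the simple case of a flat floor polygon with vertical walls and a flat ceiling (an assumption matching the modelling conditions described in the text, not the actual 3D BIS implementation):

```python
def polygon_area(pts):
    """Shoelace area of a floor polygon given as (x, y) vertices in metres."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def room_volume(floor_pts, height):
    """Volume of a room assumed to have vertical walls and a flat ceiling."""
    return polygon_area(floor_pts) * height

# illustrative 4m x 3m room with 2.5m ceiling height
square = [(0, 0), (4, 0), (4, 3), (0, 3)]
a = polygon_area(square)       # 12.0 m^2
v = room_volume(square, 2.5)   # 30.0 m^3
```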

CONCLUSION AND OUTLOOK
The Laboratory for Photogrammetry & Laser Scanning of the HCU Hamburg has developed a low-cost 3D imaging system for interiors, which costs no more than 1000 EUR in the current system configuration. The data acquisition of interiors using the HCU 3D-IMAGER, which is equipped with a Raspberry Pi camera and a Bosch laser range finder on a panorama head, currently takes approx. 12 minutes for a manual series of individual photos around 360° with laser distance measurements for each image. The Raspberry Pi camera used has small distortion values of up to 20µm, but an unstable principal point (changes of up to 50µm). Nevertheless, the accuracy potential of the Raspberry Pi camera obtained after simultaneous camera calibration does not quite reach the performance of a digital SLR camera. After system calibration an accuracy of approx. 1cm for single points can be expected for a standard interior with maximum distances of 10m. 20 rooms have already been recorded using this system, and all could be reconstructed from the panorama images and distance measurements using the software 3D-BIS.
In the future, the 3D indoor mapping will work with a motorized and computer-controlled panorama head, taking fewer photos and requiring only about two minutes per station. It is planned that the workflow from panorama production to object construction will be fully supported by automatic image processing operations. The assumption here is that the recorded room is mostly formed from simple two-dimensional elements, and that the predominant photo content consists to a large extent of horizontal and vertical structures (Manhattan World).

Figure 2. Temporary 3D test field of HCU Hamburg
For the camera calibration, four Raspberry Pi cameras (RPI 1, 2, 3, 4) with normal angle lens (c = 3.6mm) and one camera (RPI 5) with wide angle lens (c = 1.7mm) were available. The calibration procedure was repeated with the different cameras as indicated in Tab. 2. The Raspberry Pi Foundation currently offers three different camera modules. All camera modules have the same 5-Megapixel CMOS sensor of the type OmniVision OV5647 with 1.4µm pixel spacing. The technical specifications of the Raspberry Pi camera are summarized in Tab. 1. The tested cameras are probably not identically constructed; they differ outwardly.

For the camera calibration, the bundle adjustment program Pictran from Technet GmbH was used. The Pictran camera model describes the camera by three parameters of interior orientation (principal point x0, y0 and camera constant c) and by six additional parameters A1…A6 (systematic image errors), whereby A1 and A2 model the radial symmetric distortion of the lens, A3 and A4 the affinity and shearing of the sensor chip, and A5 and A6 the radial asymmetric and tangential distortion of the lens. The adjusted nine parameters of the interior orientation after the calibration of five different Raspberry Pi cameras (four with normal lens and one with wide angle lens) are summarized in Table 2.
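The effect of the additional parameters can be illustrated with a correction function. Note that the exact Pictran formulation of A1…A6 is not reproduced in the text, so the sketch below uses a common Brown-style model as a stand-in:

```python
def correct_image_point(x, y, A1, A2, A3, A4, A5, A6):
    """Apply additional-parameter corrections to an image point
    (x, y given relative to the principal point, in mm).

    Stand-in Brown-style model (NOT the exact Pictran formulation):
      A1, A2 - radial symmetric distortion (r^2, r^4 terms)
      A3, A4 - affinity and shear of the sensor chip
      A5, A6 - decentering (radial asymmetric / tangential) distortion
    """
    r2 = x * x + y * y
    radial = A1 * r2 + A2 * r2 * r2
    dx = x * radial + A5 * (r2 + 2 * x * x) + 2 * A6 * x * y + A3 * x + A4 * y
    dy = y * radial + A6 * (r2 + 2 * y * y) + 2 * A5 * x * y
    return x + dx, y + dy

# zero parameters leave the point unchanged
x, y = correct_image_point(1.2, -0.8, 0, 0, 0, 0, 0, 0)
```

Evaluating such a function over an 11 × 11 grid of sensor positions, as in Fig. 3, visualizes the combined effect of the parameter groups.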

Figure 3. Effect of the additional parameters (A1 … A6) on the image coordinates

Figure 4. Radial symmetric distortion (A1, A2) of four different Raspberry Pi cameras (top) and repeated calibrations of a single camera (bottom)

Figure 6. Five image rows with 18 single images each for the generation of one panorama image

Figure 7. Generated panorama image and sparse point cloud of the laser points (small blue image)

Table 1. Technical specifications of the Raspberry Pi camera

Table 3. Statistics of the bundle block adjustment and empirical accuracy for different cameras in the 3D test field. I … images, IP … image points, OP … object points, σxy … standard deviation of image coordinates, SD PP … standard deviation of control points, C/Ch … control/check points, σXYZ … empirical accuracy