REGISTRATION OF TIME OF FLIGHT TERRESTRIAL LASER SCANNER DATA FOR STOP-AND-GO MODE

Terrestrial Laser Scanners (TLS) are utilized through different data acquisition techniques, such as Mobile Laser Scanning (MLS), and the output can be used in applications such as 3D city modelling, cultural heritage documentation, and oil and gas as-built surveys. In this paper, we compare two TLS modes of mobile mapping operation, namely the continuous mode and the stop-and-go mode. In the continuous mode, a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), and a motion compensator are integrated with the TLS for georeferencing purposes. All sensors are mounted on board a moving vehicle while the laser scanner collects data continuously. In the Stop-and-Go (SAG) mode, the vehicle stops at certain locations for TLS data collection. Unlike the continuous mode, the SAG mode does not require an IMU to estimate the TLS attitude, which in turn reduces the overall system cost. Moreover, it decreases the time required for data processing in comparison with the continuous mode. The SAG mode has one drawback: some parts of the scene are not covered (measured) due to occlusion caused by objects such as vehicles or trees. This problem arises in the continuous mode as well; however, occlusion increases in the SAG mode since the vehicle is stationary, while in the continuous mode the moving vehicle reduces occlusion by scanning a covered scene from different angles. Some authors have tried to solve the occlusion problem by integrating the SAG and continuous modes. In this paper we address this problem in a different way. For successful use of SAG mobile mapping in urban areas, it is preferable to use a long-range time-of-flight laser scanner to cover long distances in each scan and minimize the registration error. That comes at the price of a low-density point cloud and the occlusion of some parts of the scanned areas by other objects.
In this research paper, we use a Time-of-Flight (ToF) Terrestrial Laser Scanner to perform Stop-and-Go scanning. Since no IMU is used in the SAG mode, another method of orientation determination must be implemented. After performing the SAG scans, point cloud registration is carried out in order to build a 3D model of large urban areas. The problems facing the registration are: a) inaccurate orientation determination in the case of tilted or misaligned scans, b) the loss of parts of the data due to occlusions, and c) the low resolution of the ToF laser scanner data. We address these problems via a plane/surface detection algorithm, based on segmenting the point cloud before the registration process in order to detect the planar surfaces in the scene. There are several segmentation algorithms, such as edge detection algorithms, region growing algorithms, and Principal Component Analysis (PCA). The segmentation in this research is based on the fusion of optical images captured by the laser scanner with the point cloud. The detected planar surfaces are used to determine an accurate estimate of the orientation of the scans. For the registration process, the planar surfaces are used to obtain coarse point matches, from which initial transformation parameters are estimated. These transformation parameters are fed to the Iterative Closest Projected Point (ICPP) algorithm, and the iteration continues until the best-fit parameters are obtained. The loss of some parts of the scans due to occlusions is treated with an interpolation algorithm applied to the planar surfaces behind the occluding objects. The same interpolation algorithm is used to densify the point cloud in order to overcome the low density associated with ToF scanners.
ISPRS Technical Commission I Symposium, Sustaining Land Imaging: UAVs to Satellites 17 – 20 November 2014, Denver, Colorado, USA, MTSTC1-153


INTRODUCTION
TLS are nowadays vital for mobile mapping applications. One of the most interesting applications is 3D modelling of large areas. To cover a large area, full mobile mapping is usually performed with a moving car carrying Inertial Measurement Units (IMUs) and Global Navigation Satellite System (GNSS) receivers on board, together with a short-range Phase Shift (PS) laser scanner. The choice of the PS laser scanner for full mobile mapping is obvious: it copes with the speed of the moving vehicle. That comes at the price of an expensive system and a shortage in coverage, especially with high buildings. To overcome the drawbacks of full mobile mapping, a long-range Time-of-Flight (ToF) laser scanner is used in the SAG mode. The SAG mode requires registration of the point clouds obtained from each scan (stop). The registration error increases with the number of scans. In the case of a long-range ToF TLS, a few scans can cover a large area that would require many scans with a PS laser scanner. The problem arising with ToF data is their low density, which lowers the accuracy of the registration, especially when matching points are manually selected. Cameras are nowadays mounted on many TLS and obtain images at each scan. These images can be utilized to obtain the relative orientation of the TLS at each scan. In this research paper we investigate the automatic registration of ToF laser scanner point clouds. We perform automatic registration based on matching the images associated with each scan. One of the most efficient algorithms used nowadays for image matching is the Speeded-Up Robust Features (SURF) algorithm. After applying the SURF algorithm, we use the tie points in each image to estimate the relative orientation between images. The relative orientation parameters are then fed to the Iterative Closest Point (ICP) algorithm to find the best registration parameters for the scans.

AUTOMATIC REGISTRATION OF POINT CLOUD
The registration process usually involves manual selection of matching points between the scans to be registered (at least three non-collinear points to apply Least Squares (LS)). Manual selection of the matching points is an inaccurate and time-consuming process, especially when the number of points is limited. The technique used to overcome the inaccuracy of manual registration is to take the transformation parameters obtained from the manual registration and feed them to the ICP algorithm. Nowadays, most laser scanners are equipped with metric cameras. The images associated with each scan can be utilized to obtain their relative orientation parameters, which can serve as an estimate for the initial transformation parameters used in the ICP algorithm. At least five tie points must exist between the stereo images in order to obtain the relative orientation. In this paper, the tie points are determined via the feature detection algorithm known as SURF (Bay, Ess, Tuytelaars, & Van Gool, 2007). In the following subsections we review the algorithms used to perform automatic registration of point clouds using images.

SURF ALGORITHM
The SURF algorithm was proposed to overcome the slow performance of the Scale Invariant Feature Transform (SIFT) algorithm. SIFT was introduced by Lowe in 2004 (Lowe, 2004) to find matching features in images with different scales. Several tests on SIFT showed its robustness against illumination changes, rotation, and scale change. The major problem with SIFT was its slow performance. To improve the performance of image matching based on key point detection and description, SURF uses integral images and approximates the Laplacian of Gaussian (LoG) with a box filter. SURF works by first finding interest points such as corners and blobs; the neighbourhood of every interest point is then represented by a feature vector (descriptor); finally, the descriptor vectors in the two images are matched.
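As an illustration of the final matching step, here is a minimal numpy sketch of nearest-neighbour descriptor matching with a ratio test on synthetic descriptors. The real descriptors would come from a SURF implementation; the function names, the 64-D synthetic data, and the 0.7 threshold are illustrative assumptions, not the authors' code.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.7):
    """Nearest-neighbour matching with a ratio test.

    desc1, desc2: (n1, d) and (n2, d) arrays of feature descriptors.
    Returns a list of (i, j) index pairs accepted by the ratio test.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every descriptor in image 2
        j0, j1 = np.argsort(dists)[:2]             # best and second-best candidates
        if dists[j0] < ratio * dists[j1]:          # keep only clearly unambiguous matches
            matches.append((i, j0))
    return matches

# Synthetic 64-D descriptors (SURF descriptors are 64-D): image 2 reuses
# image 1's descriptors with small noise, so the true match of i is i.
rng = np.random.default_rng(0)
d1 = rng.normal(size=(20, 64))
d2 = d1 + rng.normal(scale=0.01, size=d1.shape)
pairs = match_descriptors(d1, d2)
print(pairs[:3])
```

The ratio test discards a keypoint whenever its best match is not clearly better than the runner-up, which removes most ambiguous correspondences before the orientation estimation.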

Estimation of orientation parameters from stereo images
The problem of relative orientation can be solved using at least five tie points and applying the coplanarity condition. The solution of the relative orientation parameters based on the coplanarity condition is described in several references, such as (Jue, 2008) and (Pozzoli & Mussio, 2003). The baseline vector between the two images, described in the first image's coordinate system, is $b = [b_x, b_y, b_z]^T$.

The coordinates of a point $p_1$ in the first image, described in the first image's coordinate system, are $p_1(x_1, y_1, z_1)$; similarly, a point $p_2$ in the second image has coordinates $p_2(x_2, y_2, z_2)$ in the second image's coordinate system. The coplanarity condition is a constraint for a number of vectors to lie in a plane (Mullen, Mikhail, McGlone, & Bethel, 2004). For the three-vector case, the coplanarity condition is that their triple product is zero. That is:

$$\Delta = \begin{vmatrix} b_x & b_y & b_z \\ x_1 & y_1 & z_1 \\ x_2' & y_2' & z_2' \end{vmatrix} = 0$$

Both image points must be described in the same coordinate system. The relation between the vector $[x_2, y_2, z_2]^T$ and $[x_2', y_2', z_2']^T$ is:

$$[x_2', y_2', z_2']^T = M^T [x_2, y_2, z_2]^T$$

where $M$ is the rotation matrix between the first image's coordinate system and the second image's coordinate system and is a function of the rotation angles $(\omega, \varphi, \kappa)$.

The determinant $\Delta$ can thus be computed in terms of the base vector and the rotation angles, with each image-point vector formed as $[x - x_p,\; y - y_p,\; -c]^T$, where the parameters $(x_p, y_p, c)$ are the Interior Orientation Parameters (IOPs) of the camera. The estimation of the parameters is straightforward via LS:

$$\hat{x} = (A^T A)^{-1} A^T w$$

where the state vector $x = [b_y, b_z, \omega, \varphi, \kappa]^T$ collects the five relative orientation parameters (the base component $b_x$ fixes the scale), $w$ is the vector of coplanarity residuals $\Delta$ evaluated at the current estimate, and the design matrix $A$ contains the partial derivatives $\partial\Delta/\partial x$ for each tie point; the derivatives of the elements of the rotation matrix $M$ with respect to the rotation angles follow from differentiating the elementary rotation matrices. The LS equations are solved to get an initial estimate for the registration parameters of the two scans with which the stereo images are associated. The initial estimate is fed to the ICP algorithm to obtain accurate registration parameters.
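As a numerical check of the coplanarity condition, the following numpy sketch evaluates the residual $\Delta$ for one tie point under an exactly consistent stereo geometry, where the residual must vanish. The $R_z R_y R_x$ angle convention and all numeric values are assumptions for illustration.

```python
import numpy as np

def rot_matrix(omega, phi, kappa):
    """Rotation matrix from angles (omega, phi, kappa) about the x, y, z axes."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def coplanarity_residual(b, v1, v2, M):
    """Triple product of base vector, first-image ray, and rotated second-image ray.
    Zero exactly when the two rays and the base lie in one (epipolar) plane."""
    return np.linalg.det(np.vstack([b, v1, M.T @ v2]))

# Consistent geometry: a 3-D point X seen from camera 1 at the origin and
# camera 2 at base b with attitude M gives a (numerically) zero residual.
M = rot_matrix(0.02, -0.01, 0.03)
b = np.array([1.0, 0.05, -0.02])
X = np.array([3.0, 1.5, 10.0])
v1 = X             # ray from camera 1, in the first image's frame
v2 = M @ (X - b)   # ray from camera 2, expressed in the second image's frame
print(coplanarity_residual(b, v1, v2, M))
```

In the LS adjustment, the same residual is evaluated for every tie point and driven toward zero by updating the five relative orientation parameters.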

ICP
The ICP algorithm is used to get an accurate estimate of the registration parameters. The basic idea of this method is to minimize the squared distance between neighbouring points in the two scans. Suppose that the points in the reference scan are $p_i$ $(i = 1, 2, \ldots, N)$ and the corresponding points in the other scan are $q_i$ $(i = 1, 2, \ldots, N)$, where each point is described by three coordinates in the scan's local coordinate system. The equation of transformation is given by:

$$p_i = R q_i + t + e_i$$

where $t$ is the translation vector between the two scans' coordinate systems, $R$ is the rotation matrix between the scans, and $e_i$ is the distance between the two corresponding points in the two scans. One can think of it as an error originating from the registration process.

Our target is to minimize the square of $e$; that is:

$$e^2 = \sum_{i=1}^{N} \| p_i - R q_i - t \|^2$$

The ICP algorithm we use here is discussed in detail in (Vosselman & Maas, 2010). The algorithm was first proposed by Besl and McKay (Besl & McKay, 1992) for shape registration and is based on Horn's method for estimating the transformation parameters (Horn, 1987).
The ICP algorithm starts with computing the centroids of the two point clouds:

$$\bar{p} = \frac{1}{N} \sum_{i=1}^{N} p_i, \qquad \bar{q} = \frac{1}{N} \sum_{i=1}^{N} q_i$$

Then the coordinates of each point in both scans are shifted such that the centroid is the origin: $p_i' = p_i - \bar{p}$ and $q_i' = q_i - \bar{q}$.
This allows separate estimation of rotation and translation. The squared error is then written as:

$$e^2 = \sum_{i=1}^{N} \|p_i'\|^2 - 2 \sum_{i=1}^{N} p_i'^T R q_i' + \sum_{i=1}^{N} \|q_i'\|^2$$

The first and last terms do not depend on the transformation parameters; only the middle term depends on the rotation parameters appearing in the rotation matrix. In order to minimize $e^2$, the middle term $\sum_i p_i'^T R q_i'$ should therefore be maximized.
The rotation matrix is defined in terms of quaternions to simplify the manipulation of the equations. With the unit quaternion $\dot{q} = [q_0, q_1, q_2, q_3]^T$, $\|\dot{q}\| = 1$, the elements of the rotation matrix are given as:

$$R = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}$$

Using the quaternions, $\sum_i p_i'^T R q_i'$ simplifies to the quadratic form $\dot{q}^T N \dot{q}$. The matrix $N$ is constructed from the elements of the cross-covariance matrix of the two point clouds, $S = \sum_{i=1}^{N} q_i' p_i'^T$, as the symmetric matrix:

$$N = \begin{bmatrix} \operatorname{tr}(S) & \Delta^T \\ \Delta & S + S^T - \operatorname{tr}(S) I_3 \end{bmatrix}, \qquad \Delta = \begin{bmatrix} S_{23} - S_{32} \\ S_{31} - S_{13} \\ S_{12} - S_{21} \end{bmatrix}$$

The unit quaternion that maximizes $\dot{q}^T N \dot{q}$ is the eigenvector corresponding to the largest eigenvalue of the matrix $N$. After estimating the quaternion, the rotation matrix is constructed and the translation is estimated as $t = \bar{p} - R \bar{q}$.
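The quaternion solution admits a compact closed-form implementation. The following numpy sketch is an illustrative reconstruction of Horn's method, not the authors' code; it recovers a known rotation and translation from exact correspondences.

```python
import numpy as np

def quat_to_rot(q0, q1, q2, q3):
    """Rotation matrix from a unit quaternion (scalar part first)."""
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])

def horn_alignment(p, q):
    """Closed-form R, t minimizing sum ||p_i - (R q_i + t)||^2 over paired
    (n, 3) point sets, via the quaternion eigenvector of the 4x4 matrix N."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - p_bar, q - q_bar                  # centre both clouds
    S = qc.T @ pc                                  # cross-covariance: sum q_i' p_i'^T
    delta = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    N = np.zeros((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, V = np.linalg.eigh(N)                       # eigh sorts eigenvalues ascending
    R = quat_to_rot(*V[:, -1])                     # eigenvector of the largest eigenvalue
    t = p_bar - R @ q_bar                          # translation from the centroids
    return R, t

# Recover a known transform from exact correspondences.
rng = np.random.default_rng(1)
u = rng.normal(size=4)
R_true = quat_to_rot(*(u / np.linalg.norm(u)))     # random unit quaternion -> rotation
t_true = np.array([0.5, -2.0, 1.0])
q_pts = rng.normal(size=(10, 3))
p_pts = q_pts @ R_true.T + t_true                  # p_i = R q_i + t
R_est, t_est = horn_alignment(p_pts, q_pts)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

Within an ICP iteration, this closed-form step replaces any nonlinear optimization: correspondences are fixed, the optimal transform is computed exactly, and the loop repeats with updated correspondences.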
In computing the cross-covariance matrix between the two point clouds, the two sets must first be organized so that corresponding entries are nearest points. This is required because the term $\sum_i p_i'^T R q_i'$ sums over points in one scan paired with their corresponding (nearest) points in the other scan.
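The pairing step described here can be sketched as a brute-force nearest-neighbour search (a k-d tree would replace this for point clouds of realistic size; the small example points are illustrative):

```python
import numpy as np

def nearest_correspondences(p, q):
    """For each point in q, return the index of its nearest neighbour in p.
    p: (n, 3) reference scan; q: (m, 3) scan to be registered."""
    d2 = ((q[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)  # (m, n) squared distances
    return d2.argmin(axis=1)

p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]])
print(nearest_correspondences(p, q))   # each q point paired with its closest p point
```

The returned index array orders the reference scan so that $p_i$ and $q_i$ are nearest neighbours, which is exactly what the cross-covariance sum assumes.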

Test data
The test data were collected using a long-range ToF laser scanner. The data collection site was in northwest Calgary, AB, Canada. The point spacing of the point cloud was chosen to be between 1.5 cm and 2 cm for the sake of fast data collection. The laser scanner is equipped with a DSLR camera with the parameters $c = 20.4801$ mm, $x_p = -0.0744$ mm, $y_p = 0.0302$ mm.

Visual assessment of the results
Figures 1 and 2 show the two images with the results of image matching using the SURF algorithm. The coloured circles in both images indicate the matching points.

Statistical assessment of the registration results
The Root Mean Square Error (RMSE) is computed for the registration. It is given by:

$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \| p_i - R q_i - t \|^2}$$

This error is computed at the final estimate of the registration parameters to assess the registration quality.
The RMSE for the registration between the scans was found to be 1.888 cm.
Compared to the point spacing of the reference scan's point cloud (1.5 cm to 2 cm), the value of the RMSE is satisfactory.
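For completeness, the registration RMSE can be computed directly from the paired residuals. The sketch below checks the computation on two synthetic residuals of 1 cm and 2 cm under an identity transform; all values are illustrative.

```python
import numpy as np

def registration_rmse(p, q, R, t):
    """RMSE of the residuals p_i - (R q_i + t) over paired (n, 3) point sets."""
    res = p - (q @ R.T + t)                        # per-point residual vectors
    return np.sqrt((res ** 2).sum(axis=1).mean())  # root of the mean squared norm

# Identity transform, known residual lengths of 0.01 m and 0.02 m.
p = np.array([[0.01, 0.0, 0.0], [0.0, 0.02, 0.0]])
q = np.zeros((2, 3))
rmse = registration_rmse(p, q, np.eye(3), np.zeros(3))
print(rmse)
```

Evaluated at the final ICP estimate over all matched points, this single number is what is compared against the nominal point spacing of the scans.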

CONCLUSION
This research paper investigates the registration of ToF laser scanner data in the SAG mode. The images associated with the scans taken by this type of scanner are utilized to get an estimate of the relative orientation between the two scans. The initial estimate of the orientation parameters is used in the ICP algorithm to enhance the registration between the scans. The overall accuracy of the registration process can be assessed with the RMSE in comparison with the point spacing. The RMSE of the registration process was found to be 1.888 cm, which is a good result compared to the point spacing of the scan.
The visual assessment of the results shows that, with only a few scans of a ToF scanner, a 3D model can be generated for a very large area that would require many scans of a PS scanner, and the larger number of scans would lead to a larger registration error in the PS case.

Figure 1. Image taken by the ILRIS camera associated with the reference scan.

Figure 3. Result of registering the two scans.

Figure 4. Part of the scanned scene, showing the large area that can be covered with two scans using a long-range ToF laser scanner.
