SOLID IMAGE EXTRACTION FROM LIDAR POINT CLOUDS

Abstract. In laser scanner architectural surveying it is necessary to extract orthogonal projections (plans, elevations and cross sections) from the three-dimensional model. The paper presents the workflow of architectural drawing production from laser scans, focusing on the orthogonal projection of the point cloud onto solid images, in order to avoid time-consuming surface modeling when it is not strictly necessary. The proposed procedures have been implemented in Fortran 90 and included in the VELOCE software package, then tested and applied to the case study of the San Pietro church in Porto Venere (SP), integrating the architectural survey with an existing bathymetric and coastal survey.


INTRODUCTION
In laser scanner architectural surveying it is necessary to extract orthogonal projections (plans, elevations and cross sections) from the three-dimensional model. The paper presents the workflow of architectural drawing production from laser scans. Because surface modeling takes a considerable amount of time (about 30% of the total workflow) and a heavy computational load, we propose to extract the architectural drawings directly from the point clouds, avoiding the surface modeling step. Moreover, surface modeling generally produces a loss of the finest details, which in this way can be retained. The geometric procedures to extract the orthogonal projections of point clouds will be defined. The products of these projections, however, are not traditional drawings, but take the form of solid images. The solid image (or 3D image) is a concept introduced in photogrammetry by Prof. S. Dequal in 2003. While a traditional two-dimensional digital color image, consisting of a 3-band pixel matrix, stores the RGB information, a 3D image requires a fourth band to store the third dimension, for example the distance from the section plane. A fifth band has also been used to store the surface reflectivity measured by the laser scanner.

The algorithms have been implemented in Fortran 90 and included in the VELOCE software package (Manzino & Roggero, 2002; https://sites.google.com/site/roggeroresearch/home/software/veloce). VELOCE is intended for batch processing of large data sets. The software was born in 2000 to process large airborne LIDAR data sets, and includes algorithms for point cloud filtering and segmentation. Later it has been adapted to terrestrial laser scans, including several functions for point cloud management, filtering and projection. VELOCE is currently available under GNU license for academic users, but new personalized functions can also be implemented for commercial users.

The studied procedures have been applied to the case study of the San Pietro church in Porto Venere (SP), integrating the architectural survey with an existing bathymetric and coastal survey, realized by Istituto Idrografico della Marina Militare and Codevintec in 2006. The survey consists of 15 scans in the interior and 20 scans in the exterior, for a total of 743 million scanned points. The survey campaign required two days, while the post-processing of the scanned points has required 3-4 weeks to filter and colorize the data, to align the point clouds and to produce the solid images of plans, elevations, cross sections and projections of the vaults. Finally, a video of the scanned point cloud has been produced, representing the San Pietro promontory, intended to be used by the Comune di Porto Venere for touristic promotion and divulgation purposes.

POINT CLOUD PROCESSING
In laser scan surveying production the largest amount of time is required by point cloud processing. This phase usually takes 60-80% of the time of the entire work, while the surveying campaign is much less time consuming. This suggests an optimization of the entire surveying workflow, reducing the cost of the campaign and moving the human resources to the post-processing phase.
Point cloud processing includes all the software operations that are needed to meet the customer requirements (i.e. textured …). Since all the geometric and radiometric information is already contained in the point cloud, we can avoid surface modeling when the 3D model is not explicitly required by the customer. On the other hand, point clouds are not suitable for a quick extraction of vector drawings; for this reason most commercial software requires surface modeling in order to extract cross sections. The present work suggests an alternative way, which we have applied to architectural surveying.

POINT CLOUD PROJECTION ON SOLID IMAGES
The three-dimensional model (mesh surface or point cloud) is used to extract the final products, which usually are vector drawings or orthophotos. Since the traditional orthophoto of a point cloud is simply an orthogonal projection of the colored points onto a plane, its limit is that the third dimension is lost. Using the concept of the solid orthophoto (Bornaz, Dequal, & Lingua, 2006), instead, the third dimension is paired with the radiometric information: a fourth band is coupled to the three RGB bands, with the same spatial resolution, in order to store the distance of the points from a reference surface. VELOCE produces 6-band solid images with the following band structure:

- The R, G, B color components are stored in the 1st, 2nd and 3rd bands, as short unsigned integers (1 byte per band).
- The laser intensity is stored in the 4th band as a short unsigned integer (1 byte).
- The distance from the projection plane is stored in the 5th band, as a long signed integer (2 bytes).
- A 6th band can be used to record the number of scanned points projected into the pixel (scan density); 1 byte is enough to store this information.

The memory used by this data structure is 7 bytes per pixel.
In the case of horizontal cross sections (plans, projections of vaults) the third dimension can be the orthometric height of the points, while in the case of vertical cross sections or elevations it can simply be the distance from the projection plane. Navigating the solid image, the three-dimensional coordinates of the points can be visualized or used to measure distances and angles. An example is given in Figure 2.
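As an illustration, the 6-band structure described above can be sketched in a few lines of code (Python here, rather than the Fortran 90 of VELOCE; the class and method names are ours, not the package's API):

```python
class SolidImage:
    """Minimal sketch of a 6-band solid image (illustrative, not VELOCE code).

    Bands 1-3: R, G, B (1 byte each); band 4: laser intensity (1 byte);
    band 5: distance from the projection plane in cm (2-byte signed integer);
    band 6: number of points projected into the pixel (1 byte).
    Total: 7 bytes per pixel, as in the text.
    """
    NODATA_DIST = 32767  # maximum 2-byte signed value = 327.67 m in cm

    def __init__(self, n_lines, n_cols, background=(255, 255, 255)):
        self.n_l, self.n_c = n_lines, n_cols
        n = n_lines * n_cols
        r, g, b = background
        self.band = [
            [r] * n, [g] * n, [b] * n,    # RGB initialized to the background
            [0] * n,                      # laser intensity initialized to 0
            [self.NODATA_DIST] * n,       # distance initialized to the maximum
            [0] * n,                      # scan density initialized to 0
        ]

    def offset(self, l_i, c_i):
        """Pixel offset inside one band for 1-based line/column indices,
        i.e. the (l_i - 1) * n_c + c_i term of the pointer formula."""
        return (l_i - 1) * self.n_c + (c_i - 1)

    def pixel(self, l_i, c_i):
        """Return the 6 band values of a pixel as a tuple."""
        p = self.offset(l_i, c_i)
        return tuple(band[p] for band in self.band)
```

A freshly initialized image returns, for every pixel, the background color, zero intensity, the no-data distance and zero density.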

Point cloud pre-processing and bounding box definition
We suppose to use registered and filtered point clouds, in Cartesian coordinates x, y and z. The bounding box is defined by the minimum and maximum of each coordinate:

x_min = min(x_i), x_max = max(x_i)
y_min = min(y_i), y_max = max(y_i)
z_min = min(z_i), z_max = max(z_i)

The VELOCE software can import binary or ASCII files in .pts or .ptx format. The data are archived in a binary format designed to obtain the maximum data compression and fast access to the information, as described in (Manzino & Roggero, 2002). Since the coordinates are stored as REAL (4 bytes), a shift (x_T, y_T, 0) is applied to the E and N components in order to contain the values inside the limited range of single precision. When the data are exported, the opposite shift (-x_T, -y_T, 0) is applied to recover the original cartographic coordinates.
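The precision issue can be verified with a short numeric sketch (Python for illustration; the coordinate and shift values are invented, and IEEE single precision stands in for the 4-byte Fortran REAL):

```python
import struct

def to_real4(x):
    """Round-trip a value through 4-byte IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# An East coordinate of ~1.5e6 m has a single-precision resolution of
# 0.125 m, so centimetres are lost if it is stored directly.
east = 1_565_432.19
lost = to_real4(east)               # off by several centimetres

# Applying the shift x_T first keeps the stored values small, so the
# centimetre detail survives; the shift is restored on export.
x_T = 1_565_000.0
kept = to_real4(east - x_T) + x_T   # recovered well below the millimetre
```

This is why the shift (x_T, y_T, 0) is applied on import and removed on export.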
The VELOCE raw binary format includes some header records, where the number of points, the bounding box and the horizontal shift are recorded. The solid image is initialized with the following values:

- The background color is assigned to the R, G, B bands (1st, 2nd and 3rd bands).
- The value 0 is assigned to the 4th band, the laser intensity.
- The 5th band, the distance from the projection plane, is assigned the maximum value of a long signed integer (32767). The distances will then be recorded in cm, up to the limit of 327.67 m, which is enough for architectural applications.
- The 6th band is assigned the value zero.

Cross sections
Most commercial software computes the cross section as the intersection between the mesh surface and a plane, as in Figure 3. The points within the interval dz are considered coplanar and used to define the cross section line. The interval dz must be defined by the user, and it depends on the point density. When a point is found to be in the interval dz, the corresponding pixel on the solid image is colored in red (or in black, or in another user-defined color), while when a point is outside the dz interval, it is projected with its proper color.

We have seen that the distance band is initialized at the value 32767 (327.67 m); the scanned points are then sequentially read from the raw file and projected onto the solid image, with the following criteria:

- If the solid image pixel is void, the point is projected onto the pixel.
- If the solid image pixel is not void and the distance of the new point is lower than the distance stored in the 5th band, the point is projected onto the pixel.

When the algorithm finds non-hidden points to be projected, the pixel values in the solid image are updated, as shown in the following example of a plan generated by a horizontal section at the height of 1.5 m. Consider a sequence of points that are projected into the same image pixel. The algorithm reads the point 56 and verifies that it is under (or behind) the section plane:

z_56 < 1.50 m
z_56 > -327.67 m

This is verified, so the value z_56 = -1277 is written in the solid image and the 6 bands are updated as follows: R(36) G(24) B(33) I(-1535) z(-1277) n(1).

The position of a pixel in the multi-band image is given by the pointer

p = (n_band - 1) · n_l · n_c + (l_i - 1) · n_c + c_i

where
- n_band is the number of the band (i.e. 1-6)
- l_i is the pixel line
- c_i is the pixel column
- n_l is the total number of lines
- n_c is the total number of columns

This algorithm is quite slow, requiring a sequential reading of the raw data file and an a priori unknown number of image pixel updates. Since the points are registered by the scanner in a sequential order following the scanning path, it is possible to speed up the process by neglecting some points when they can be considered not necessary (hidden points). We use the following algorithm to skip a number j of points:

- If the point is behind the cross section plane, then j is zero. In this case all the following points will be sequentially read and processed.
- If the point is ahead of the cross section plane it is not processed, and the probability that the following points are also ahead grows with the distance from the plane, so we can skip some points. The number j of points to skip can be estimated as j = d / Δd, where d is the distance from the cross section plane and Δd the decrease of this distance between consecutive points.
In any case the number j must be limited to a maximum (e.g. j_max = 500) in order to avoid going back behind the cross section plane too rapidly, losing non-hidden points. Figure 6 exemplifies how the algorithm works. Suppose we produce a horizontal section at the height z_sez. The algorithm reads the point 1; it is ahead of the section plane, so it must not be projected and probably we can skip some points. It is necessary to read the point 2 to compute d and j; in the example we skip 201 points and jump to the point 204. It is now necessary to know how the scanning path is approaching the section plane, recomputing d and j, so we need the point 205. The method allows skipping a large number of points, avoiding a lot of slow READ instructions. Only when the scanned points approach the section plane, or when we are behind the plane, is it necessary to process all the points.

Figure 6. Raw data sequential processing and skipped points.

The algorithm is analogous for horizontal and vertical cross sections (as in the example in Figure 2). In the case of vertical cross sections or elevations, it can be useful to define partial or broken-out sections. A vertical section can also be orthogonally projected on planes with different orientations, as shown in Figure 7, where a vertical cross section is defined by a horizontal polyline on the (x, y) plane. The order of the polyline vertices defines the orientation of the view, conventionally on the left following the polyline path.
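The skipping rule can be sketched as follows (Python for illustration; the function name is ours):

```python
def points_to_skip(d, delta_d, j_max=500):
    """Number j of raw points that can be skipped, following the rule above.

    d: distance of the current point ahead of the section plane (<= 0 means
    the point is behind the plane); delta_d: decrease of that distance
    between consecutive scanned points; j_max caps the jump so the scan
    path cannot overshoot far behind the plane.
    """
    if d <= 0 or delta_d <= 0:
        return 0                      # behind the plane, or not approaching
    return min(int(d / delta_d), j_max)
```

With d = 402 and delta_d = 2, for example, 201 points can be skipped, as in the jump from point 2 to point 204 described above.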
Figure 7. Plan of the projection on a broken vertical section plane: the cross section follows the middle line of a building.
In the case of vertical cross sections the coordinates of a point (x, y, z) are projected onto the point (x_p, y_p, z), belonging to the projection plane. If the vertical section plane is broken, possibly with different plane orientations, the first question is on which plane the point must be projected; the second is the computation of the projected horizontal coordinates (x_p, y_p). At first we compute the horizontal distances between the point P_i and the polyline vertices Poly_1, Poly_2, …, Poly_N, then we look for the two nearest vertices (Poly1 and Poly2 in the example). These two vertices identify the vertical plane on which the point P_i must be projected. The distance of the point P_i from the projection plane (or from the projection point) cannot simply be computed as a module using the Pythagorean theorem, because it is necessary to determine whether the point is behind or ahead of the projection plane. It is necessary to define a new coordinate system (x_I, y_I), with origin in (x_INT, y_INT) and the y_I axis oriented toward P_i; the coordinates of P_i in this new system are given by the planar roto-translation

x_I = (x - x_INT) cos θ + (y - y_INT) sin θ
y_I = -(x - x_INT) sin θ + (y - y_INT) cos θ

where θ is the rotation between the two systems.
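A minimal sketch of the projection onto a broken section polyline follows (Python for illustration; for simplicity it searches the nearest segment directly rather than the two nearest vertices, and the sign of the distance is taken positive on the left of the oriented segment, following the view convention above):

```python
import math

def project_to_polyline(p, poly):
    """Project the horizontal point p = (x, y) onto the section polyline.

    poly: list of (x, y) vertices. Returns (x_p, y_p, d) where (x_p, y_p)
    is the foot of the normal on the nearest segment and d is the signed
    distance (> 0 on the left of the oriented segment).
    """
    best, best_dist = None, float('inf')
    for (x1, y1), (x2, y2) in zip(poly, poly[1:]):
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        # normal-foot parameter along the segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((p[0] - x1) * dx + (p[1] - y1) * dy)
                               / (length * length)))
        xp, yp = x1 + t * dx, y1 + t * dy
        dist = math.hypot(p[0] - xp, p[1] - yp)
        if dist < best_dist:
            # signed distance from the 2D cross product
            d = (dx * (p[1] - y1) - dy * (p[0] - x1)) / length
            best, best_dist = (xp, yp, d), dist
    return best
```

The signed distance plays the role of y_I above, telling whether the point is behind or ahead of the section plane.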

Tools for image enhancement
The obtained solid image obviously presents some lack of data, imperfections due to picture exposure, and projections of distant or hidden objects. These shortcomings can be repaired to some extent, in order to obtain a more acceptable and readable image. The central pixel is filled only if a minimum fraction of the pixels in the search matrix are colored; in this case we use a threshold of 20%.
The result is shown in Figure 9.
To improve the readability of the image it is also possible to use color mapping, in order to represent the points that are far from the projection plane with a modified brightness or tone. VELOCE allows fixing a distance z_1 from the section plane beyond which the points are represented with a modified color, and a maximum distance z_2 beyond which far objects are completely excluded. These distance intervals are represented in Figure 11. Color mapping can also be used in topographic surveying, to represent the ground heights of a DTM.
Figure 11. Distance intervals from the section plane; between dz and z_1 the points are represented in real colors, between z_1 and z_2 in modified colors.
In the interval z_1 ÷ z_2 the color is modified as a function of a parameter k that ranges from 0 to 1:

k = (z - z_1) / (z_2 - z_1)

The RGB color is transformed into the HSL system (Hue, Saturation, Lightness), so it is possible to modify one of these three components, depending on the desired graphic effect. For example, if we want to fade the far points into a black background, we can modify the lightness by

L' = L · (1 - k)

In this way the lightness remains unchanged for z = z_1, and it is null for z = z_2. Finally the RGB components are updated.
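The fading rule can be sketched as follows (Python, using the standard colorsys module, which handles the same HSL model under the component order HLS):

```python
import colorsys

def fade_to_black(r, g, b, z, z1, z2):
    """Fade a pixel color toward black between z1 and z2:
    k = (z - z1) / (z2 - z1), clamped to [0, 1], and L' = L * (1 - k)."""
    k = max(0.0, min(1.0, (z - z1) / (z2 - z1)))
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    r2, g2, b2 = colorsys.hls_to_rgb(h, l * (1.0 - k), s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

At z = z1 the color is returned unchanged; at z = z2 it is black.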
Another tool for image enhancement has been developed to correct overexposure. Photographic images acquired by an integrated camera are commonly used by laser scanners to colorize the scan points. The integrated camera exposure is automatically set by the scanner firmware on a mean value for the spherical panorama. This obviously results in some overexposed images, in the direction of lights, windows or the sun. Figure 12 shows a scan of the ceiling of a tunnel, where the image is clearly overexposed around the light projectors. The image exposure has been corrected by substituting the lightness L with the laser intensity; then the RGB values have been updated, obtaining a much more readable color image.
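The exposure correction can be sketched in the same way (Python, colorsys; this sketch assumes the laser intensity has already been normalized to the 0-255 range):

```python
import colorsys

def correct_exposure(r, g, b, intensity):
    """Replace the HLS lightness of an overexposed pixel with the laser
    intensity (assumed in 0-255), then rebuild the RGB values."""
    h, _, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    r2, g2, b2 = colorsys.hls_to_rgb(h, intensity / 255, s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

A fully blown-out white pixel, for example, becomes a gray whose level follows the laser intensity.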

Results
As an example, some results are shown. Figure 13 represents the raw solid image of a cross section exported by VELOCE. In this case the image has been enhanced by repairing only the lack of data, while color saturation and lightness have been left unchanged. The raw solid image has then been edited with photo editing software; it is notable that the three RGB bands of the solid image can be treated as a common photographic image. Editing involved the removal of hidden points and the inclusion of a pattern.

CONCLUSIONS
The goal that we set for this study, namely the extraction of drawings directly from the point cloud, has been fully achieved: while following the classical canons of architectural drawing, we were able to preserve, through the solid image, the three-dimensional content that is intrinsic to laser scanner surveys. This does not mean that the work ends here; on the contrary, several areas to work on have emerged. It is still necessary to improve some aspects of the solid image graphics, the exposure tuning and the vectorization of the section line.

Figure 2. Solid image navigation.

Figure 3. Cross section from the mesh surface model.

Because the point cloud is, by definition, a set of dimensionless entities, it is not possible to intersect it with a plane, but it is possible to find the points within a given distance dz from the intersecting plane itself, as in Figure 4.

Figure 4. Cross section from the point cloud.

Figure 5. Discretized cross section projected from the point cloud on the solid image.

Then the point 102 is read and

z_102 = -3 mm
z_102 > z_56

so the solid image pixel is updated and overwritten as follows: R(38) G(23) B(32) I(-1503) z(-3) n(2). The point 388 is skipped because the condition z_388 > z_102 is not verified. The point 5735 is accepted and used, while the points 8290 and 9006 are skipped, because the condition z_8290, z_9006 < 1500 mm is not verified. Finally the point 9197 is used and overwritten on the previous values as follows: R(90) G(84) B(80) I(929) z(1498) n(3). Fixing dz = 50 mm, the point 9197 is in the interval 1500 > z > 1500 - dz = 1450, so the pixel must be colored in red and the RGB values updated: R(255) G(0) B(0) I(929) z(1498) n(3). The solid image is recursively updated up to the end of the raw data file.
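This worked example can be replayed with a few lines of code (Python for illustration; the z value of point 388 is not given in the text, so an arbitrary value below z_102 is used, and point 5735, whose values are not given, is omitted):

```python
Z_SEC, DZ, Z_INIT = 1500, 50, -327670   # section height, dz, init value (mm)

def update(px, z, rgb, intensity):
    """px = [R, G, B, I, z, n]: accept a point below the section plane that
    is higher (closer to the plane from below) than the stored one; points
    within DZ of the plane are drawn in the section color (red)."""
    if z < Z_SEC and z > px[4]:
        px[0:3] = [255, 0, 0] if z > Z_SEC - DZ else list(rgb)
        px[3], px[4] = intensity, z
        px[5] += 1
    return px

px = [255, 255, 255, 0, Z_INIT, 0]       # initialized pixel
update(px, -1277, (36, 24, 33), -1535)   # point 56: accepted
update(px, -3, (38, 23, 32), -1503)      # point 102: higher, overwrites
update(px, -1400, (0, 0, 0), 0)          # point 388: lower, rejected (z invented)
update(px, 1498, (90, 84, 80), 929)      # point 9197: within dz, drawn red
# px now holds R(255) G(0) B(0) I(929) z(1498) n(3), as in the text
```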

Figure 8. Projection of a point on a segment.

Figure 8 shows how to compute the horizontal coordinates of the projected point. It is easy to write, in the form y = mx + c, the equation of the line passing through Poly1 and Poly2 (tangent line T), and of the line passing through P_i and normal to the segment Poly1 Poly2 (normal line N). The coordinates of the projection are given by the intersection between these two lines. The angular coefficients are given by

m_T = (y_Poly2 - y_Poly1) / (x_Poly2 - x_Poly1)
m_N = -1 / m_T

The coordinate y_I of P_i, by definition, represents the distance of P_i from the projection plane, and will be stored in the band 5 of the solid image. Finally the position of the point in the image must be computed, as

l_i = (z_max - z_P) / res
c_i = x_f / res

where
- z_max is the maximum z value of the bounding box
- z_P is the coordinate z of the point P_i
- x_f is the progressive coordinate along the polyline
- res is the image resolution (side of the pixel)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5/W1, 2013, 3D-ARCH 2013, 3D Virtual Reconstruction and Visualization of Complex Architectures, 25-26 February 2013, Trento, Italy

Figure 9. Solid image with lack of data filled.

Lack of data in the point cloud can produce non-colored pixels, or pixels erroneously colored by projecting hidden points. Both cases are shown in Figure 2, and depend on the point density with respect to the image resolution. To correct for lack of data, the VELOCE software scans the entire image with a 3x3 search matrix, loading both the RGB and the distance information. At first the minimum distance d_min over the 9 elements of the search matrix is computed; then the pixels for which d > d_min + (2-3)·res are considered hidden points and canceled out. If at this stage the central pixel of the search matrix is void, it is filled with the mean value of the other non-void pixels, as shown in Figure 10.
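The correction can be sketched per 3x3 window as follows (Python for illustration; representing void pixels as (None, 0), returning d_min as the distance of a filled pixel, and fixing the hidden-point factor at 2.5 are all choices of this sketch):

```python
def clean_window(win, res):
    """Correct the central pixel of a 3x3 window of (rgb, d) cells.

    rgb is an (r, g, b) tuple or None for a void pixel; d is the value of
    the distance band. Hidden points (d > d_min + 2.5 * res) are cancelled;
    a void or cancelled centre is filled with the mean color of the valid
    cells when at least 20% of the 8 neighbours are colored.
    """
    cells = [c for row in win for c in row]
    dists = [d for rgb, d in cells if rgb is not None]
    if not dists:
        return (None, 0)                      # fully void window
    d_min = min(dists)
    valid = lambda rgb, d: rgb is not None and d <= d_min + 2.5 * res
    rgb_c, d_c = win[1][1]
    if valid(rgb_c, d_c):
        return (rgb_c, d_c)                   # centre already acceptable
    kept = [(rgb, d) for rgb, d in cells if valid(rgb, d)]
    if len(kept) < 0.2 * 8:                   # 20% threshold on neighbours
        return (None, 0)
    mean = tuple(round(sum(rgb[i] for rgb, _ in kept) / len(kept))
                 for i in range(3))
    return (mean, d_min)                      # filled with the mean color
```

Sliding this window over the whole image reproduces the two effects described above: hidden points are cancelled and small holes are filled.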

Figure 10. Search matrix used to correct for lack of data. The search radius r is normally set to 1.

Figure 12. Scan of the ceiling of a tunnel in cylindrical projection. The RGB image (a) is overexposed around the light projectors, while the laser intensity (c) is perfectly readable. In image (b) the RGB lightness is tuned according to the laser intensity.

Figure 13. The raw solid image exported by VELOCE.