<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article" dtd-version="3.0" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher">ISPRS-Archives</journal-id>
<journal-title-group>
<journal-title>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</journal-title>
<abbrev-journal-title abbrev-type="publisher">ISPRS-Archives</abbrev-journal-title>
<abbrev-journal-title abbrev-type="nlm-ta">Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2194-9034</issn>
<publisher><publisher-name>Copernicus Publications</publisher-name>
<publisher-loc>Göttingen, Germany</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.5194/isprs-archives-XLII-2-W7-431-2017</article-id>
<title-group>
<article-title>AUTOMATIC RECOGNITION OF INDOOR NAVIGATION ELEMENTS FROM KINECT POINT CLOUDS</article-title>
</title-group>
<contrib-group><contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Zeng</surname>
<given-names>L.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Kang</surname>
<given-names>Z.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group><aff id="aff1">
<label>1</label>
<addr-line>School of Land Science and Technology, China University of Geosciences, Beijing, China</addr-line>
</aff>
<pub-date pub-type="epub">
<day>12</day>
<month>09</month>
<year>2017</year>
</pub-date>
<volume>XLII-2/W7</volume>
<fpage>431</fpage>
<lpage>437</lpage>
<permissions>
<copyright-statement>Copyright: &#x000a9; 2017 L. Zeng</copyright-statement>
<copyright-year>2017</copyright-year>
<license license-type="open-access">
<license-p>This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p>
</license>
</permissions>
<self-uri xlink:href="https://isprs-archives.copernicus.org/articles/XLII-2-W7/431/2017/isprs-archives-XLII-2-W7-431-2017.html">This article is available from https://isprs-archives.copernicus.org/articles/XLII-2-W7/431/2017/isprs-archives-XLII-2-W7-431-2017.html</self-uri>
<self-uri xlink:href="https://isprs-archives.copernicus.org/articles/XLII-2-W7/431/2017/isprs-archives-XLII-2-W7-431-2017.pdf">The full text article is available as a PDF file from https://isprs-archives.copernicus.org/articles/XLII-2-W7/431/2017/isprs-archives-XLII-2-W7-431-2017.pdf</self-uri>
<abstract>
<p>This paper presents the automatic recognition of the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected with a Kinect v2 sensor by means of ORB-SLAM. Compared with lidar, this acquisition method is cheaper and more convenient, but the resulting point clouds suffer from noise, registration errors and a large data volume. We therefore adopt a shape descriptor &#8211; the histogram of distances between two randomly chosen points, proposed by Osada &#8211; merge it with other descriptors, and apply a random forest classifier to recognize the navigation elements (doors, stairways and walls) in the Kinect point clouds. Navigation elements and their 3D location information are acquired from each single data frame through point-cloud segmentation, boundary extraction, feature calculation and classification. Finally, the acquired navigation elements and their location information are used to generate the state data of the indoor navigation module automatically. The experimental results demonstrate the high recognition accuracy of the proposed method.</p>
</abstract>
<counts><page-count count="7"/></counts>
</article-meta>
</front>
<body/>
<back>
</back>
</article>