<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "https://jats.nlm.nih.gov/nlm-dtd/publishing/3.0/journalpublishing3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article" dtd-version="3.0" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher">ISPRS-Archives</journal-id>
<journal-title-group>
<journal-title>ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</journal-title>
<abbrev-journal-title abbrev-type="publisher">ISPRS-Archives</abbrev-journal-title>
<abbrev-journal-title abbrev-type="nlm-ta">Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2194-9034</issn>
<publisher><publisher-name>Copernicus Publications</publisher-name>
<publisher-loc>Göttingen, Germany</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.5194/isprs-archives-XLII-4-W10-135-2018</article-id>
<title-group>
<article-title>SEGMENTATION OF 3D PHOTOGRAMMETRIC POINT CLOUD FOR 3D BUILDING MODELING</article-title>
</title-group>
<contrib-group><contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Özdemir</surname>
<given-names>E.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Remondino</surname>
<given-names>F.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<ext-link ext-link-type="uri" xlink:href="https://orcid.org/0000-0001-6097-5342">https://orcid.org/0000-0001-6097-5342</ext-link></contrib>
</contrib-group><aff id="aff1">
<label>1</label>
<addr-line>3D Optical Metrology, Bruno Kessler Foundation (FBK), Trento, Italy</addr-line>
</aff>
<pub-date pub-type="epub">
<day>12</day>
<month>09</month>
<year>2018</year>
</pub-date>
<volume>XLII-4/W10</volume>
<fpage>135</fpage>
<lpage>142</lpage>
<permissions>
<license license-type="open-access">
<license-p>This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p>
</license>
</permissions>
<self-uri xlink:href="https://isprs-archives.copernicus.org/articles/isprs-archives-XLII-4-W10-135-2018.html">This article is available from https://isprs-archives.copernicus.org/articles/isprs-archives-XLII-4-W10-135-2018.html</self-uri>
<self-uri xlink:href="https://isprs-archives.copernicus.org/articles/isprs-archives-XLII-4-W10-135-2018.pdf">The full text article is available as a PDF file from https://isprs-archives.copernicus.org/articles/isprs-archives-XLII-4-W10-135-2018.pdf</self-uri>
<abstract>
<p>3D city modeling has become increasingly important over the last decades, as such models are used in many applications, including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection and disaster management. Segmentation and classification of photogrammetric or LiDAR data are important steps for 3D city modeling, as these are the main data sources, and both tasks are challenging due to the complexity of the data. This study presents research in progress that focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetric point clouds (&gt;&#8201;30&#8201;pts/sqm) in combination with aerial RGB orthoimages (~&#8201;10&#8201;cm resolution) in order to label buildings, ground level objects (GLOs), trees, grass areas and other regions. While the classification of aerial orthoimages is foreseen to be a fast approach to obtain classes and then transfer them from the image to the point cloud space, segmenting a point cloud is expected to be much more time consuming but to provide significant segments of the analyzed scene. For this reason, the proposed method combines segmentation methods on the two types of geoinformation in order to achieve better results.</p>
</abstract>
<counts><page-count count="8"/></counts>
</article-meta>
</front>
<body/>
<back>
</back>
</article>
