
Segmentation of LiDAR point cloud data in urban areas using adaptive neighborhood selection technique

Abstract

Semantic segmentation of urban areas using Light Detection and Ranging (LiDAR) point cloud data is challenging due to the complexity, outliers, and heterogeneous nature of the input point cloud data. Machine learning-based methods for segmenting point clouds suffer from imprecise computation of the training feature values. The most important factor influencing how precisely the feature values are computed is the neighborhood chosen for each point. This research addresses this issue and proposes a suitable adaptive neighborhood selection approach for individual points that fully accounts for the complex and heterogeneous nature of the input LiDAR point cloud data. The proposed approach is evaluated on high-density mobile and low-density aerial LiDAR point cloud datasets using the Random Forest machine learning classifier. In the performance evaluation, the proposed approach demonstrates competitive performance over state-of-the-art approaches. The computed accuracy and F1-score for the high-density Toronto and low-density Vaihingen datasets are greater than 91% and 82%, respectively.

Introduction

Three-dimensional (3D) Light Detection and Ranging (LiDAR) point cloud data segmentation is a prominent area of research in remote sensing, photogrammetry, and computer vision. Due to the rapid development of technology, it is now possible to obtain LiDAR point cloud data using mobile laser scanning (MLS), terrestrial laser scanning (TLS), and aerial laser scanning (ALS) [1]. These technologies can extract LiDAR point cloud data from a complex urban environment, and the acquired data have been used in various 3D urban scene analysis applications, including building extraction [2–4], road identification [5], power line identification [6], vegetation cover analysis [7], and urban scene segmentation [8, 9]. Each point in aerial LiDAR point cloud data contains Cartesian coordinates (X, Y, Z), where X, Y, and Z represent the point's latitude, longitude, and height, respectively. In addition to these coordinates, it may also contain additional properties, such as RGB color information. The segmentation of individual points is quite challenging due to outliers, partial loss, and uneven density in the captured LiDAR point cloud data [10].

The existing approaches to LiDAR point cloud data segmentation can be categorized into geometric rule-based [7, 11], machine learning-based [1, 12–14], and deep learning-based approaches [15, 16, 61]. The machine learning-based approaches usually consist of three essential steps: selecting a neighborhood for each 3D point, extracting feature values for all points based on the identified neighborhoods, and training and testing a supervised classifier based on the extracted features [14].

As a key step in point cloud segmentation, traditional neighborhood recovery methods, including k-Nearest Neighbor (k-NN) [17], spherical neighborhood [18], and cylindrical neighborhood [19], select neighboring points based on a fixed k, radius (r), or height (h) value for all points in a dataset. In these methods, the size of the neighborhood must be determined arbitrarily, without any knowledge of the point cloud dataset. Selecting appropriate neighborhood points is essential, as it enables the estimation of local surface orientations by extracting the geometric features of an object. Geometric features can be calculated from different combinations of eigenvalues, which are extracted by analyzing the 3D covariance matrix of a 3D point and its neighborhood. Therefore, selecting the wrong neighborhood can lead to significant errors in eigenvalue calculation and inaccurately extracted features.
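The eigenvalue computation described above can be sketched as follows. This is a minimal NumPy illustration only; the function name, the synthetic planar cloud, and the choice k = 10 are our own, not from the paper:

```python
import numpy as np

def knn_eigenvalues(points, query_idx, k):
    """Eigenvalues (descending) of the 3D covariance matrix built from
    the k nearest neighbors of points[query_idx] (Euclidean distance)."""
    dists = np.linalg.norm(points - points[query_idx], axis=1)
    nbrs = points[np.argsort(dists)[:k]]   # neighborhood incl. the point itself
    cov = np.cov(nbrs.T)                   # 3x3 covariance matrix
    return np.linalg.eigvalsh(cov)[::-1]   # lambda1 >= lambda2 >= lambda3

# On points sampled from a plane, the smallest eigenvalue is ~0,
# which is what planarity/curvature features rely on.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(size=50),
                         rng.uniform(size=50),
                         np.zeros(50)])
l1, l2, l3 = knn_eigenvalues(plane, 0, 10)
```

Note that the fixed k here is exactly the limitation the paper targets: the same k is applied regardless of the local geometry around each point.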

Urban scenes contain objects with distinct geometric shapes, such as buildings, trees, and roads. In the case of a fixed k-NN approach, it may select all neighboring points from a small area due to the higher point density, or it may include neighbors from outliers, different planes, or different objects. In the case of fixed spherical or cylindrical neighborhoods, selecting the neighborhood in a missing point area may also be difficult. To determine neighborhood scale parameters, including height, radius, or k values, a comprehensive understanding of the urban scene is necessary. This issue can be addressed by employing an adaptive neighborhood selection strategy.

Among the state-of-the-art adaptive neighborhood selection approaches are entropy-based [1], curvature-based [14, 20], and omnivariance-based [21] techniques. Omnivariance-based and entropy-based methods calculate local geometric features using various neighborhood sizes k; the neighborhood with the minimum omnivariance or entropy is considered optimal. The omnivariance-based neighborhood selection method proposed by Günen [21] does not utilize geometric features to divide the input point cloud data into different regions for neighborhood selection. This can lead to inappropriate neighborhood selection and feature extraction by choosing neighboring points from different regions or objects that do not correspond to the intended object point. The curvature-based neighborhood selection method divides the input point cloud data into regular and scatter regions based on curvature features. Following this division, the entropy-based neighborhood is applied separately to the regular and scatter regions. To further enhance this method, Xue et al. [14] select neighborhood points based on distance and normal angle within a spherical radius around each point in the scatter region. Here, the regular region includes points from building roofs, facades, and roadways, whereas the scatter region encompasses points from object edges, corners, and vegetation [14]. However, the authors did not consider instances where the neighborhood point of a specific object is selected from a different object class or a different surface plane within these two regions. Thus, clustering the input point cloud into multiple regions, or understanding the classes beforehand, enables the selection of neighboring points with similar attributes or geometric characteristics.

To address these issues, regular and scattered regions need to be further divided into additional regions to select the neighborhood appropriately. In addition, an adaptive neighborhood value selection approach based on each point’s geometric region is required. This approach should select neighboring points that have comparable geometric features and ensure that the points are chosen from similar geometric regions.

The particular contributions presented in this research are as follows:

  • Based on correlations between points and several geometric properties, including curvature, verticality, and omnivariance, the input LiDAR point cloud dataset is divided into four distinct regions.
  • For each of the distinct regions, appropriate adaptive neighborhood selection techniques are used to solve the problems of the existing fixed-scale neighborhood. A simplicial complex-based neighborhood selection approach is introduced in the highly dispersed region of the urban area point cloud.
  • Appropriate geometric features are computed for each region using the adaptively selected neighborhood techniques to enhance the overall effectiveness of the urban point cloud segmentation.

The rest of this paper is organized as follows: the Literature Review section provides an overview of current state-of-the-art methodologies. Following that, the proposed neighborhood selection technique for various regions is described in the Methodology section. The Experiments section elaborates on the detailed experiments conducted, while the Results subsection discusses the findings derived from these experiments.

Literature review

The main objective of this research is to find suitable neighbors for each point in the input point cloud to calculate the accurate feature value for segmenting the LiDAR point cloud data. This section first discusses relevant studies on existing LiDAR point cloud segmentation and then discusses the existing neighborhood selection techniques.

Point cloud segmentation

Different geometric rule-based, machine learning-based, and deep learning-based approaches exist in the literature to analyze aerial, terrestrial, and mobile LiDAR point cloud data containing different objects. In the rule-based approach, a set of geometric rules is established for every object based on its geometry and elevation aspects [7]. For example, Vega et al. [22] introduced the rule-based PTrees technique, a multi-scale dynamic point cloud segmentation approach for recovering forest trees from LiDAR point clouds by employing raw elevation values (Z) and height computation (H = Z − ground elevation). Awrangjeb et al. [23] presented a rule-based segmentation approach for building roof plane extraction, where the raw LiDAR data was divided into ground and non-ground points; the non-ground points were then segmented to extract the planar building roofs. Dey et al. [24] used a robust plane fitting method based on M-estimator SAmple Consensus (MSAC) to classify buildings from the input LiDAR point cloud data. The main problem with the rule-based approach is selecting the appropriate threshold value for the chosen geometric parameters (e.g., angle, curvature, and normal). Setting a threshold globally is challenging due to the heterogeneous nature of LiDAR point cloud data.

In the machine learning-based approach, features extracted from the input point cloud data are fed into a machine learning classifier to train the system, which can then make predictions for new or unseen point cloud test data [21, 25]. Different machine learning classifiers, including Random Forest [26], Support Vector Machine (SVM) [27], AdaBoost [28], Decision Tree [21], Linear Discriminant Analysis [21], Bayesian Discriminant Analysis [29], LightGBM [60], and Few-Shot Learning [62], have been used in the literature for the semantic segmentation of LiDAR point cloud data. Xue et al. [14] conducted a comparative analysis of point cloud segmentation using various machine learning classifiers to study adaptive neighborhood selection. In the same year, Jiang et al. [30] investigated multispectral airborne LiDAR point cloud classification with different machine learning classifiers, observing higher accuracy with Random Forest. The main issue with the machine learning-based approach is selecting suitable features and an optimal neighborhood to increase the effectiveness of the classifiers [14, 20, 21, 30].

Recent research has emphasized the use of deep learning algorithms to improve current methods and accuracy [15, 16, 31–33]. PointNet, a deep learning model, has proven beneficial for point cloud classification and segmentation, but it overlooks the relationship between points and their local neighborhoods [34]. To consider the local neighborhood during deep learning classification, A-CNN [35], 3P-RNN [36], DGCNN [37], and PointCNN [31] have been proposed, which extract geometric features while considering the neighborhood. Han et al. [61] presented a deep learning model that includes components for spatial downsampling, feature abstraction, and addressing class imbalance for semantic segmentation of urban scenes. To capture the relationship between points and their neighbors, Nong et al. [32] proposed a PointNet++ method incorporating elevation information interpolation to improve object discrimination in ALS point classification tasks. In addition, a multilayer perceptron with neighborhood selection was utilized by Amakhchan et al. [38] and Fayez et al. [16] for urban scene classification.

In all instances, selecting the appropriate neighborhood is crucial for point cloud deep learning and machine learning models, and has a direct impact on their performance [33]. It influences the computation of various geometric properties and the efficacy of deep learning and machine learning models. The following subsection details the existing literature on neighborhood selection techniques.

Neighborhood selection

Traditional neighborhood recovery methods such as k-NN, spherical neighborhood, and cylindrical neighborhood calculate features for machine learning classifiers based on a fixed-size neighborhood for each point in the input point cloud data. The k-NN method selects the k points nearest to a given point according to the Euclidean distance, where the number of neighbors is fixed at k for all points in a dataset [39]. Weinmann et al. [40] used a fixed k-NN to calculate the features for semantic segmentation of the LiDAR point cloud. Chen et al. [41] used k-NN with a fixed value of k to segment boundary points. The spherical neighborhood of a selected point comprises all points inside a sphere with a predefined radius (r) centered on that point; r is the main parameter of this method [18]. Li et al. [42] employed spherical neighborhood selection with radius values of 0.2 m, 0.5 m, and 0.8 m for urban scene classification. Mallet et al. [27] used a fixed spherical neighborhood based on the point density of the dataset. Another method developed for selecting the neighborhood size is the cylindrical neighborhood [43], which determines the neighborhood based on volumetric calculations. In cylindrical neighborhood selection, the radius (r) and height (h) of the cylinder are the two main factors that affect the success of the method. Several studies [28, 44, 45] utilized various adaptations of cylindrical neighborhood methods to segment diverse urban objects from the input LiDAR point cloud data.
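The two fixed-size volumetric queries can be sketched as follows (a brute-force NumPy sketch; the function names are ours, and in practice a k-d tree would replace the exhaustive distance tests):

```python
import numpy as np

def spherical_neighbors(points, center, r):
    """Indices of all points inside a sphere of radius r around center."""
    d = np.linalg.norm(points - center, axis=1)
    return np.flatnonzero(d <= r)

def cylindrical_neighbors(points, center, r, h):
    """Indices of all points inside a vertical cylinder of radius r (in
    the XY plane) and total height h centered on the query point."""
    d_xy = np.linalg.norm(points[:, :2] - center[:2], axis=1)
    d_z = np.abs(points[:, 2] - center[2])
    return np.flatnonzero((d_xy <= r) & (d_z <= h / 2.0))
```

The cylindrical query tolerates vertical offsets up to h/2, which is why it suits facades and poles; both queries, however, still depend on globally fixed r and h values.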

However, fixed-size neighborhoods may not accurately capture the geometric features of all objects in the input LiDAR point cloud data of urban scenes [46]. To avoid this limitation, several authors proposed adaptive approaches to neighborhood selection [20, 47–49]. Weinmann et al. [1] used a Shannon entropy-based adaptive method to select the k value for each point individually in the input point cloud data. They computed eigenentropy to select neighboring points and extracted features to segment urban scenes based on that neighborhood selection. Eigenentropy captures the order or disorder of points, i.e., the amount of uncertainty associated with the eigenvalues. They considered different values of k and chose the k value that yields the minimum entropy for each point in the dataset. He et al. [20] proposed a curvature-based adaptive neighborhood selection method. Based on a calculated curvature threshold, the authors divided the input point cloud data into scatter and regular regions. Then, they used a k-minimal entropy-based neighborhood selection for scatter regions and an r-minimal entropy-based neighborhood for regular regions. Günen [21] presented a neighborhood recovery method based on the omnivariance property. The author defined a lower bound and an upper bound of k, calculated the omnivariance of a point, and selected the k value with the lowest omnivariance. Peng et al. [50] employed a controlled search-radius experiment within a specific range to find the best-sized neighborhood for their datasets. Dey et al. [46] proposed a method to determine the k value for k-NN in order to find an optimal neighborhood. The method uses the k-NN algorithm to select a minimal number of neighboring points (k = 3) for a specific point Pi. A best-fit 3D line is then constructed using these neighboring points, and the standard deviation of the calculated distances is compared to a distance threshold computed from the point density, which equals the distance between two neighboring points. If the standard deviation is below the threshold, the value of k is increased iteratively to find the minimal neighborhood for Pi, ensuring accurate estimation of the plane normal in LiDAR point cloud data. Xue et al. [14] proposed an adaptive neighborhood selection method based on the method of He et al. [20]. The authors observed that corner points have higher curvature values and worked specifically with the scatter set of points. A large spherical radius for neighborhood selection was employed in the scatter set, and excess points within the sphere were then filtered out based on a threshold calculated from the distance and normal angle between the points. In 2023, Jiang et al. [30] proposed maximum entropy-based neighborhood selection for multispectral airborne LiDAR point cloud segmentation. In the same year, Sevgen et al. [60] converted irregular points into a regular format using radius search with grid sampling before implementing the machine learning classifier.

In all cases of point cloud segmentation, implementing an adaptive neighborhood selection for feature computation is more effective than utilizing a fixed-size neighborhood selection [14, 20, 21, 30]. During neighborhood selection, points of an entire urban scene can be divided into regions based on several geometrical properties. For each region, an appropriate adaptive method can be proposed for selecting the neighborhood, which can enable the selection of points with similar geometrical properties as well as belonging to the same class, thereby minimizing outlier issues. The next section describes the proposed method to select an appropriate neighborhood for individual points to facilitate an effective segmentation of the input point cloud data.

Methodology

The general architecture of LiDAR point cloud data segmentation using machine learning is illustrated in Fig 1. The selection of neighborhoods serves as the basis for feature extraction. Considering the geometric variation in the input point cloud data, the next subsection presents the method for selecting an appropriate neighborhood. The feature extraction subsection then gives an overview of the features selected for this study, and the final subsection discusses the supervised classifier model utilized.

Fig 1. The general architecture of the point cloud data segmentation.

https://doi.org/10.1371/journal.pone.0307138.g001

Proposed neighborhood selection method

The literature review highlights the importance of selecting appropriate neighborhoods in urban areas for point cloud analysis. However, these studies did not account for the diverse geometric regions within urban point cloud datasets during neighborhood selection. The curvature-based [14, 20] technique divides the input point cloud into regular and scatter regions; to select neighborhood points more accurately from similar objects, similar planes, or similar geometric features, we further use the verticality feature to divide the regular region into planar and vertical regions, and the omnivariance feature to divide the scatter region into low and high omnivariance regions. Since urban point clouds can be categorized into distinct regions based on geometric features, dividing the input point cloud into four regions allows suitable neighborhoods to be chosen. The initial step of the proposed method therefore categorizes the urban point cloud into four regions, as follows:

  • Planar region,
  • Vertical region,
  • Low omnivariance region, and
  • High omnivariance region.

A distinct neighborhood selection process is employed based on the categories of the regions. For the planar and vertical regions, an entropy-based method is utilized. A neighborhood selection method based on the direction of the normal is used in the low omnivariance region, and a simplicial complex-based method is used for neighborhood selection in the high omnivariance region. This approach ensures that throughout the neighborhood selection process, points from similar geometric regions are chosen as neighbors. The overall framework is illustrated in Fig 2.

Fig 2. Framework of our proposed neighborhood selection method.

https://doi.org/10.1371/journal.pone.0307138.g002

First, based on the curvature geometric property of each point, the input point cloud data is separated into two regions using the approach proposed by He et al. [20]. The curvature (Cλ) is calculated using Eq (1), and based on a curvature threshold (ct) the input data is separated into a regular region (Pr) and a scattered region (Ps) using Eq (2). Here, the eigenvalues λ1, λ2, and λ3 are extracted from the 3D covariance matrix. The curvature threshold value (ct) is selected following the method of He et al. [20] and Xue et al. [14].

Cλ = λ3 / (λ1 + λ2 + λ3) (1)

Pi ∈ Ps if Cλ > ct; otherwise, Pi ∈ Pr (2)

The points in the regular region are further separated into planar and vertical regions based on the verticality (V) property of the input point cloud. The verticality (V) of any point Pi is calculated from the normal vector (nx, ny, nz) [51], as shown in Eq (3), where nz is the third component of the normal vector of a point. The normal vector is computed using the weighted PCA method mentioned in [46]. Following the approach used by Xue et al. [14] for the curvature threshold, the verticality threshold (vt) is determined. Lastly, Eq (4) is used to separate the points into planar and vertical regions.

V = 1 − |nz| (3)

Pi ∈ Pvertical if V > vt; otherwise, Pi ∈ Pplanar (4)

The geometric characteristic omnivariance (Oλ) of a point Pi can be calculated using Eq (5). To separate the points into low and high omnivariance regions, we use Eq (6), where the threshold ot is likewise obtained using the method applied to the curvature threshold by Xue et al. [14]. The distinctive neighborhood selection method is then applied to each of the four regions in the input LiDAR point cloud data to extract the feature values for segmentation.

Oλ = (λ1 · λ2 · λ3)^(1/3) (5)

Pi ∈ Phigh if Oλ > ot; otherwise, Pi ∈ Plow (6)
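Putting the curvature, verticality, and omnivariance tests together, the four-region split can be sketched as follows (a NumPy sketch; the function name, the label strings, and the threshold values in the example are illustrative only, since the actual thresholds are derived as described above):

```python
import numpy as np

def partition_regions(curv, vert, omni, ct, vt, ot):
    """Assign every point to one of the four proposed regions from its
    per-point curvature, verticality, and omnivariance values."""
    labels = np.empty(curv.shape, dtype=object)
    regular = curv <= ct                        # low curvature -> regular region
    labels[regular & (vert <= vt)] = "planar"
    labels[regular & (vert > vt)] = "vertical"
    scatter = ~regular                          # high curvature -> scatter region
    labels[scatter & (omni <= ot)] = "low_omni"
    labels[scatter & (omni > ot)] = "high_omni"
    return labels

# One point per region, with illustrative thresholds ct=0.1, vt=0.5, ot=0.3
curv = np.array([0.01, 0.01, 0.50, 0.50])
vert = np.array([0.10, 0.90, 0.00, 0.00])
omni = np.array([0.00, 0.00, 0.10, 0.90])
regions = partition_regions(curv, vert, omni, 0.1, 0.5, 0.3)
```

Because the split is purely per-point thresholding, it costs a single pass over the feature arrays before any neighborhood search begins.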

Fig 3 demonstrates the values of curvature, omnivariance, and verticality of individual points from a small portion of an urban area.

Fig 3. Visualization of features.

(A) Curvature, (B) Omnivariance, and (C) Verticality features of individual points from a portion of an urban area.

https://doi.org/10.1371/journal.pone.0307138.g003

Entropy-based neighborhood.

For the regular region of the input point cloud data, we choose the entropy-based neighborhood selection approach. The entropy-based method begins by calculating the eigenentropy or Shannon entropy of a point Pi using Eq (7), where ei = λi / (λ1 + λ2 + λ3) with i ∈ {1, 2, 3}. Various values of k nearest neighbors are then considered for any point Pi, and the k value with the lowest entropy is chosen as the most appropriate neighborhood for Pi [52].

Eλ = −(e1 ln e1 + e2 ln e2 + e3 ln e3) (7)
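The entropy-based selection can be sketched as follows (a NumPy sketch; the candidate k list is illustrative, and the small clip guarding against log(0) on perfectly planar neighborhoods is our own numerical safeguard):

```python
import numpy as np

def eigenentropy(nbrs):
    """Shannon entropy of the normalized covariance eigenvalues."""
    evals = np.clip(np.linalg.eigvalsh(np.cov(nbrs.T)), 1e-12, None)
    e = evals / evals.sum()                 # e_i = lambda_i / sum(lambda)
    return float(-np.sum(e * np.log(e)))

def entropy_based_k(points, query_idx, k_values=(10, 20, 30, 40, 50)):
    """Return the candidate k whose neighborhood has minimum eigenentropy."""
    order = np.argsort(np.linalg.norm(points - points[query_idx], axis=1))
    return min(k_values, key=lambda k: eigenentropy(points[order[:k]]))
```

A planar neighborhood (two dominant eigenvalues) has lower entropy than an isotropic one, so the selected k favors the most ordered local surface.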

Points on the building roof, ground surface, and other horizontal surfaces are referred to as planar points, whereas vertically elevated points, such as those on a building facade or an electrical pole, are referred to as vertical points. However, during the process of identifying neighboring points, points from a building roof or a ground surface may be included as neighbors of a building facade or another vertical surface. Additionally, the entropy-based neighborhood selection method may select neighborhood points from different regions. This could lead to errors in the calculation of accurate feature values [46]. Our method avoids this problem because the points were previously divided into vertical and planar regions. The verticality feature is used to divide the regular points into two sets to reduce these particular types of problems. As a result, points carrying similar geometric features are selected as neighborhood points. The entropy-based neighborhood selection method is applied separately to each region to determine the neighborhood with the least amount of disorder.

Normal direction-based neighborhood selection.

In LiDAR point cloud data, the scatter points mostly cover vegetation and object corners or edges [14]. However, there might be a situation where building edges and trees are too close to each other. For example, during the neighborhood selection of a building edge point, it may select neighboring points from scattered tree points or other objects near the building, or vice versa. To acquire an appropriate neighborhood, the scatter region is divided into two regions based on omnivariance, as described earlier. Omnivariance is a characteristic based on eigenvalues that describes how points spread or disperse in different directions [21]. The points of trees and shrubs are more widely dispersed in diverse directions. Thus, vegetation has a larger omnivariance value than the corners or edges of an object.

As the low omnivariance region (Plow) mostly covers building edges and corners, variable-size neighborhoods for each point Pi are selected adaptively based on the direction of the normal vectors of the nearby points. Initially, the k value for the nearest neighbor is selected using the method proposed by Dey et al. [46]. The method starts with a minimum value for k and iteratively increases it until the standard deviation of the selected neighboring points satisfies a threshold value θ computed from the average point density of the input point cloud. After determining the neighborhood points Sp of a point Pi, the direction of the normal of each neighboring point is computed, and the normal angle is taken into account to determine the appropriate neighborhood. The average of the normal angles (ηp) of points in Sp is then calculated. If the normal angle of any point in Sp is less than ηp, that point is considered a neighbor of Pi in the low omnivariance region of the input point cloud data. This ensures the selection of neighboring points within the fold points of an object, as shown in Fig 4A. Furthermore, if any point Pi is on a horizontal or vertical plane edge, the neighboring points are selected from the same plane as Pi, as shown in Fig 4B.
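The normal-angle filtering step can be sketched as follows. This is a NumPy sketch under an explicit assumption: we measure each normal's angle against a fixed vertical reference axis, whereas the paper leaves the angle reference implicit; the function name is ours:

```python
import numpy as np

def normal_angle_filter(normals, nbr_idx, ref=(0.0, 0.0, 1.0)):
    """Keep only the neighbors whose normal angle is below the
    neighborhood average, following the filtering rule described above.
    Measuring angles against a fixed reference axis is an assumption."""
    ref = np.asarray(ref)
    cos = np.clip(np.abs(normals[nbr_idx] @ ref), 0.0, 1.0)
    ang = np.arccos(cos)                  # angle of each neighbor's normal
    return nbr_idx[ang <= ang.mean()]     # drop neighbors tilted above average
```

In the example below, three co-planar normals survive while the one strongly tilted normal, which raises the average, is rejected.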

Fig 4. An example of adding points to the neighborhood in the low omnivariance region.

(A) Fold points, (B) Edge points between planes. Here, red points represent Point Pi, and cyan points indicate the selected neighborhood.

https://doi.org/10.1371/journal.pone.0307138.g004

Simplicial complex-based neighborhood selection.

The high omnivariance region encompasses the majority of rougher surfaces, such as vegetation points, shrub areas, and outlier points [53]. Neighboring points are expected to be selected from the same class or region. To select appropriate neighboring points for any point Pi in the high omnivariance region, we utilize a simplicial complex-based neighborhood selection technique. The concept of a simplicial complex, presented by Zomorodian et al. [54], is a key aspect of the Persistent Homology method [55] used in topological data analysis. The simplicial complex-based method clusters similar points into a group whose members can be considered neighbors of each other. The 3D covariance matrix can be calculated to extract features for each cluster when there are at least three different neighboring points for each point Pi [56]. The process begins with an initial radius value based on the density of the input point cloud and then gradually expands the radius until any point in the cluster fails to reach at least two additional points. Every point in a cluster is connected to its neighbors by edges. Fig 5A demonstrates an example of scatter point cloud data with high omnivariance, and Fig 5B illustrates how the neighborhood is connected based on the simplicial complex-based neighborhood selection technique.
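The connectivity idea behind the simplicial complex can be sketched as a radius graph whose connected components form the neighborhoods (a brute-force NumPy sketch; the function name and the fixed radius in the test are illustrative, whereas the paper grows the radius adaptively from the point density):

```python
import numpy as np

def radius_component(points, seed_idx, r):
    """Connected component of the seed in the graph linking every pair of
    points within distance r; each component acts as one neighborhood."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    adj = d <= r
    comp, frontier = {seed_idx}, {seed_idx}
    while frontier:                           # breadth-first expansion
        reach = set(np.flatnonzero(adj[list(frontier)].any(axis=0)))
        frontier = reach - comp
        comp |= frontier
    return sorted(comp)
```

An isolated outlier group forms its own component, so it cannot attract points from other objects, which matches the mitigation behavior described below.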

Fig 5. Simplicial complex-based neighborhood selection.

(A) Point Cloud Data, (B) Choosing the neighborhoods.

https://doi.org/10.1371/journal.pone.0307138.g005

This approach can mitigate the impact of the isolated outlier points group, as the outlier points will be connected within themselves and will not attract points from any other object. Compared to entropy-based neighborhood selection, this approach will immediately locate the neighborhood for all the scattered points with high omnivariance value. The entropy-based approach is not suitable for high omnivariance regions because it requires different values of k to find the best one, which is time-consuming, and sometimes k could attract outlier points, leading to inaccurate feature value calculation for the scattered high omnivariance region.

Feature extraction

Features of point clouds encompassing various characteristics of the point cloud data are extracted to train machine learning or deep learning models for classification and segmentation [14, 57]. The feature values are calculated after the local neighborhood size is determined for each point. Various features can be extracted from point clouds. For instance, eigenvalue-based features [1, 14, 20, 21, 46] and elevation features have also been explored in several studies [14]. Additionally, radiometric features are derived from multispectral LiDAR point cloud data [30]. This study employs a range of eigenvalue- and elevation-based features to enhance the effectiveness of 3D point cloud segmentation.

Eigenvalue-based features.

The 3 × 3 covariance matrix is used to determine the eigenvalues λ1, λ2, and λ3 for a 3D point Pi, where λ1 ≥ λ2 ≥ λ3 ≥ 0 [21]. In this study, nine eigenvalue-based features are used: the sum of eigenvalues (Σλ), omnivariance (Oλ), eigenentropy (Eλ), anisotropy (Aλ), planarity (Pλ), linearity (Lλ), curvature (Cλ), sphericity (Sλ), and mean curvature (Mλ). Σλ characterizes the overall variance of a point along with its neighboring points, and Oλ aids in determining the dispersion of points across various directions [21]. Eλ measures how ordered or disordered the points are [1]. Pλ and Lλ are employed to assess the degree of planarity and linearity of a point [46]. Sλ indicates regions of high sphericity. Anisotropy (Aλ) describes the existence of sharp edges or corners [21]. The curvature (Cλ), also known as surface variation, is the geometric attribute indicating the change in the shape of local surfaces [14, 20, 58]. Table 1 shows the selected eigenvalue-based features used in this paper.
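Most of these features follow from the eigenvalues alone, using the standard covariance-feature definitions (a sketch; mean curvature is omitted here, and the formulations of record are those in Table 1):

```python
import numpy as np

def eigen_features(l1, l2, l3):
    """Eigenvalue-based features for lambda1 >= lambda2 >= lambda3 > 0,
    using the common definitions from the covariance-feature literature."""
    s = l1 + l2 + l3
    e = np.array([l1, l2, l3]) / s          # normalized eigenvalues
    return {
        "sum":          s,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "eigenentropy": float(-np.sum(e * np.log(e))),
        "anisotropy":   (l1 - l3) / l1,
        "planarity":    (l2 - l3) / l1,
        "linearity":    (l1 - l2) / l1,
        "curvature":    l3 / s,             # surface variation
        "sphericity":   l3 / l1,
    }
```

With this sketch, the quality of every feature hinges directly on the eigenvalues, which is why the neighborhood selection upstream matters so much.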

Elevation features.

Along with the eigenvalue-based features, we use five additional elevation-based features, as shown in Table 2. H_ave represents the average elevation of all neighboring points, including the selected point Pi [14]. H_i represents the elevation of the i-th individual point Pi, and N denotes the total number of points within the neighborhood. H_d indicates the elevation difference, where H_highest is the highest and H_lowest the lowest Z coordinate value within the neighborhood of Pi. H_a represents the difference between the elevation of the current point and H_highest in its neighborhood, while H_b denotes the difference between the elevation of the current point and H_lowest in its neighborhood.
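The derived elevation features can be sketched as follows (a sketch; the point's own elevation is assumed to be the fifth feature and is used directly, and the dictionary keys are our own naming):

```python
import numpy as np

def elevation_features(z_i, z_nbrs):
    """Elevation features for a point with elevation z_i, given the
    elevations z_nbrs of its neighborhood (including the point itself)."""
    z = np.asarray(z_nbrs, dtype=float)
    h_high, h_low = z.max(), z.min()
    return {
        "H_ave": z.mean(),          # mean neighborhood elevation
        "H_d":   h_high - h_low,    # elevation range in the neighborhood
        "H_a":   h_high - z_i,      # distance below the highest neighbor
        "H_b":   z_i - h_low,       # distance above the lowest neighbor
    }
```

These features are cheap to compute once the neighborhood is fixed, but like the eigenvalue features they inherit any error from a badly chosen neighborhood.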

Supervised classifier

The training set of point cloud data is represented by {(X_i, L_i)}, where X_i is the feature vector of point Pi (m features per point) and L_i is the corresponding semantic label. Using the different features described in the previous subsection, a supervised machine learning classifier is trained and tested. The test dataset is used to assign predicted semantic labels to each data point, and these labels are then used to evaluate the performance of the classifier. The Random Forest (RF) classifier is selected as a representative conventional classifier because of its feasible and widespread adoption in the field of point cloud segmentation [57]. The Random Forest algorithm constructs multiple decision trees during training, and each tree predicts a class for every individual point.
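Training a Random Forest on per-point feature vectors can be sketched with scikit-learn (the synthetic features and labels below are stand-ins for the real extracted features and ground-truth classes, and n_estimators is an illustrative choice, not the paper's setting):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data: each row is a per-point feature vector (9 features
# here, mirroring the eigenvalue-based set); labels are integer class ids.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 9))
y_train = (X_train[:, 0] > 0).astype(int)   # synthetic separable labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(rng.normal(size=(10, 9)))
```

Each tree in the forest votes on a class per point, and the majority vote becomes the predicted semantic label.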

Experiments

In this section, the datasets used for the experiments are described first. Following the dataset description, the details of the experimental setup and the evaluation metrics used in this research are discussed. Finally, the experimental results are presented alongside the necessary findings of this study.

Dataset description

The aerial LiDAR point cloud dataset from the Vaihingen (VH) area, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark, and a mobile LiDAR point cloud dataset of the Toronto area are used to validate the performance of this research. An aerial Leica ALS50 scanner was used to collect the Vaihingen dataset from an altitude of 500 m at an angle of 45° [48]. The point density of the dataset ranges from 4 to 8 points/m2, and the points in this dataset are not evenly distributed [32]. The dataset contains a total of nine semantic categories, including cars, facades, fences, impervious surfaces, low vegetation, roofs, power lines, shrubs, and trees. It covers three sites, with 753,876 points in the single training site and a total of 411,722 points in the two test sites. Fig 6 demonstrates the LiDAR point cloud data of the training and test sites of the Vaihingen area.

thumbnail
Fig 6. The LiDAR point cloud of ISPRS Vaihingen 3D benchmark dataset.

(A) Site 1 for training, (B) Site 2 and 3 for testing. The legend at the bottom indicates the segmentation labels rendered in colors.

https://doi.org/10.1371/journal.pone.0307138.g006

The Toronto-3D dataset is from Avenue Road in Toronto, Canada, which contains 6.7 million points with a high point density of approximately 1000 points/m2 on road surfaces [59]. The dataset contains a total of nine classes, including ground, road markings, trees, buildings, powerlines, electrical poles, cars, fences, and unclassified. The Toronto dataset contains four different parts denoted as L001, L002, L003, and L004. This study uses the L001, L002, and L003 areas for training and the L004 area for testing. Fig 7 illustrates the LiDAR point cloud data of the Toronto area used for our experimental purposes. Tables 3 and 4 show the distribution of the total number of points in the training and test sets for each class in the Vaihingen area of the ISPRS benchmark datasets and the Toronto area of the Toronto-3D LiDAR dataset, respectively.

thumbnail
Fig 7. The Toronto-3D benchmark LiDAR point cloud dataset.

(A) Site L001 for training, (B) Site L002 for training, (C) Site L003 for training, (D) Site L004 for testing. The legend at the bottom indicates the segmentation labels rendered in colors.

https://doi.org/10.1371/journal.pone.0307138.g007

thumbnail
Table 3. Number of points per category in training and test sets of the Vaihingen area of ISPRS benchmark dataset.

https://doi.org/10.1371/journal.pone.0307138.t003

thumbnail
Table 4. Number of points (thousand) per category in training and test sets of the Toronto-3D dataset.

https://doi.org/10.1371/journal.pone.0307138.t004

Experimental setup

The experiment is implemented using the Python programming language. The Open3D library is used to visualize the 3D input point cloud data. Principal Component Analysis (PCA) is used as a base to estimate the significant eigenvalue-based geometric features of the points. The Scikit-learn Python library is used to implement the Random Forest machine learning classifier model. The most appropriate class for a point Pi is chosen by majority vote in the Random Forest classifier, which takes the previously described computed features of every point as input. Consequently, this chosen class becomes the final prediction for each point in the input point cloud data. This experiment is carried out using a setup comprising an 11th-generation Intel Core i5 processor with a clock speed of 2.70 GHz, 16 GB of DDR4 RAM, a 4 GB NVIDIA RTX 3050Ti graphics processing unit, and the Windows 11 operating system.
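The training-and-prediction pipeline described above can be sketched with Scikit-learn as follows. The synthetic feature matrix stands in for the per-point geometric and elevation features (one row per point, one column per feature); the hyperparameters are illustrative defaults, not the paper's exact settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the per-point eigenvalue + elevation feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=14, n_informative=8,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random Forest: each tree votes on a class; the majority vote wins.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)   # predicted semantic label per point
```

With real data, `X` would simply be the stacked feature vectors computed from each point's adaptively selected neighborhood.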

Evaluation measures

A confusion matrix for actual and predicted classes is formed using the conventional counts: True Positives (TP), the number of points of a class that are correctly assigned to that class; True Negatives (TN), the number of points of other classes that are correctly not assigned to it; False Positives (FP), the number of points of other classes that are wrongly assigned to the class under consideration; and False Negatives (FN), the number of points of the class under consideration that are wrongly assigned to other classes. Based on these counts, four commonly used evaluation metrics, Accuracy, Recall, Precision, and F1-score, are computed using the following equations:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (8)

Recall = TP / (TP + FN) (9)

Precision = TP / (TP + FP) (10)

F1-score = 2 × (Precision × Recall) / (Precision + Recall) (11)
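These metrics are available directly in Scikit-learn. The sketch below uses macro averaging, which weights every class equally in the multi-class setting; that averaging choice is an assumption here, not something the text specifies, and the label vectors are illustrative.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth and predicted labels for a 3-class problem.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

acc  = accuracy_score(y_true, y_pred)                      # Eq. (8): 4 of 6 correct
rec  = recall_score(y_true, y_pred, average="macro")       # Eq. (9), averaged per class
prec = precision_score(y_true, y_pred, average="macro")    # Eq. (10), averaged per class
f1   = f1_score(y_true, y_pred, average="macro")           # Eq. (11), averaged per class
```

Macro averaging matters for imbalanced point clouds: a rare class such as powerline contributes as much to the macro F1-score as the dominant ground class.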

Results

This section presents the extensive experimental results. The impact of the calculated feature values, using the proposed neighborhood selection approach described in the methodology section, for the different regions is compared with state-of-the-art techniques.

A portion of the LiDAR point cloud data from the Vaihingen area is used for demonstration purposes in Fig 8. Here, Fig 8A shows the ground truth of the selected area where each class is distinctly colored for clarity. Fig 8B depicts the initially separated four regions, including planar, vertical, low omnivariance, and high omnivariance, based on the proposed approach.
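The four-way split into planar, vertical, low-omnivariance, and high-omnivariance regions can be sketched as a simple per-point decision on the geometric attributes. The thresholds and the decision order below are hypothetical illustrations; the paper's actual criteria (described in its methodology section) may differ.

```python
def assign_region(planarity, verticality, omnivariance,
                  t_plan=0.6, t_vert=0.6, t_omni=0.05):
    """Hypothetical sketch: assign a point to one of the four regions
    from its planarity, verticality, and omnivariance values."""
    if planarity > t_plan:
        return "planar"            # e.g., roof and ground points
    if verticality > t_vert:
        return "vertical"          # e.g., building facades
    if omnivariance < t_omni:
        return "low_omnivariance"  # e.g., edges, corners, roof folds
    return "high_omnivariance"     # e.g., trees and shrubs
```

Each region then receives its own neighborhood selection method, as illustrated in Fig 9.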

thumbnail
Fig 8. A portion of Vaihingen point cloud data.

(A) Labeled ground truth, (B) Four distinct regions of the portion.

https://doi.org/10.1371/journal.pone.0307138.g008

Fig 9 demonstrates the outcomes of the proposed neighborhood selection methods for any point Pi based on its associated region. Fig 9A represents the selected neighborhood from a planar region using the entropy-based neighborhood selection approach. Fig 9B depicts the selected neighboring points in a vertical region using the same approach. Fig 9C shows the selected neighborhood in a low omnivariance region using the approach based on the direction of the normal. Finally, Fig 9D illustrates the selected neighborhood in a high omnivariance region based on the simplicial complex neighborhood selection method. In all of the cases, the point Pi is represented by the red color, while the selected neighborhood points are highlighted in cyan color.

thumbnail
Fig 9. Neighborhood selection from distinct region.

(A) Planar Region, (B) Vertical Region, (C) Low Omnivariance Region, and (D) High Omnivariance Region. The red point indicates any point Pi, and the cyan color indicates the corresponding selected neighborhood.

https://doi.org/10.1371/journal.pone.0307138.g009

The highlighted red point (Pi) in Fig 9A is a point from a building roof, and it is clear that the neighboring points are also selected from the same roof plane. Fig 9B depicts that the neighboring points are selected only from the building's vertical facade area. Tree points are also vertical; however, since they are initially assigned to a separate region based on the value of curvature, those points are not considered part of the vertical facade area.

The low omnivariance region mainly contains the points of the object’s edges and corners as shown in Fig 8B. Thus, a point Pi from rooftop fold points in Fig 9C selects the neighboring points only from the low omnivariance fold area of the building based on the angle of the normal. Additionally, the high omnivariance regions, encompassing the points of vegetation, including trees and shrubs, are depicted in Fig 8B. In this instance, Pi is chosen from a tree identified as a high omnivariance region. Here, multiple neighborhood points are selected, each derived from vegetation utilizing the simplicial complex-based neighborhood selection method.

Table 5 presents a quantitative performance evaluation of the proposed neighborhood selection methods on the Vaihingen test dataset. The proposed approach is compared with the state-of-the-art neighborhood selection methods of Nong et al. [32], Xue et al. [14], He et al. [20], Günen [21], and Weinmann et al. [52]. We have also experimented on the point cloud test data with fixed k-nearest-neighbors methods using the features described in the eigenvalue-based and elevation feature sections. The table shows that our proposed method outperforms all of these neighborhood approaches in terms of the evaluation metrics.

thumbnail
Table 5. The accuracy, precision values, recall values, and F1-score values according to different neighborhood selection methods of the Vaihingen dataset.

https://doi.org/10.1371/journal.pone.0307138.t005

To demonstrate the performance of the machine learning classifier visually, the LiDAR point cloud data of the urban areas are segmented using distinct colors. Fig 10A illustrates that the majority of the points in the test scenes are accurately segmented. The misidentified points are shown with an error map in Fig 10B. Subsequently, the confusion matrix is presented in Fig 11, which shows that the true positive rate is satisfactory, particularly for points corresponding to low vegetation, impervious surface, roof, and the high omnivariance regions that include the trees. However, a significant number of shrub points are misidentified as low vegetation and trees. Furthermore, a notable observation in the confusion matrix is the misidentification of most powerline points as roof points. This is because there are comparatively very few powerline points in the dataset for training, and the classifier often confuses the elevated powerline points with roof points, since both lie well above the ground.
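A per-class confusion matrix like the one in Fig 11 can be built with Scikit-learn; row-normalizing it puts each class's true positive rate on the diagonal. The label vectors below are illustrative toy values, not the paper's results.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy ground-truth and predicted class labels (3 classes).
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2, 1]

cm = confusion_matrix(y_true, y_pred)            # rows: true class, cols: predicted
cm_norm = cm / cm.sum(axis=1, keepdims=True)     # diagonal = per-class true positive rate
```

Off-diagonal mass in a row reveals exactly which class absorbs the misidentified points, which is how confusions such as powerline-to-roof show up.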

thumbnail
Fig 10. (A) Prediction map and (B) error map of the proposed method on the Vaihingen dataset.

https://doi.org/10.1371/journal.pone.0307138.g010

thumbnail
Fig 11. Confusion matrix for the machine learning classifier using proposed neighborhood approach on Vaihingen test dataset.

https://doi.org/10.1371/journal.pone.0307138.g011

Fig 12B–12I show the results of segmentation using different neighborhood selection methods, along with two fixed-size neighborhoods, for the Vaihingen test areas. Fig 12A is the ground truth, and Fig 12B is the segmentation outcome of our proposed method, which clearly demonstrates superior performance compared to the other state-of-the-art methodologies.

thumbnail
Fig 12. Visualization of segmentation outcomes for site 2 and site 3 within the Vaihingen dataset employing various neighborhood retrieval approaches.

(A) Ground Truth, (B) Proposed Method, (C) Nong et al. (D) Xue et al. (E) He et al., (F) Günen, (G) Weinmann et al., (H) k=100, (I) k=50.

https://doi.org/10.1371/journal.pone.0307138.g012

The efficiency of the proposed method is further validated through experimentation on an additional Toronto-3D dataset. The proposed approach generated promising results. To reduce computational effort, unclassified points in the Toronto dataset are not considered during training and testing. Fig 13A visually demonstrates the segmentation results by the created model on our test area, while Fig 13B shows an error map for misidentified points. Fig 14 depicts the confusion matrix of the machine learning model, encompassing the proposed neighborhood selection method, tested on the Toronto-3D dataset’s test area.

thumbnail
Fig 13. (A) Prediction map, and (B) error map of our proposed method on the Toronto-3D dataset test area.

https://doi.org/10.1371/journal.pone.0307138.g013

thumbnail
Fig 14. Confusion matrix of the machine learning classifier using the proposed approach on Toronto-3D dataset.

https://doi.org/10.1371/journal.pone.0307138.g014

Table 6 presents a quantitative performance evaluation of the proposed neighborhood selection methods on the Toronto test dataset. The proposed approach is compared with the state-of-the-art methods of Sevgen et al. [60], Han et al. [61], and Huang et al. [62]. Specifically, our approach achieved an accuracy of 0.963, precision of 0.874, recall of 0.784, and F1-score of 0.825, which demonstrates the robustness and effectiveness of the proposed methodology.

thumbnail
Table 6. The accuracy, precision values, recall values, and F1-score values comparison according to different state-of-the-art methods of the Toronto-3D dataset.

https://doi.org/10.1371/journal.pone.0307138.t006

Discussion

Initially, we divided the input point cloud data into four regions by taking into account three distinct geometric properties: curvature, verticality, and omnivariance. Neighborhoods should be selected based on the geometric nature of the input LiDAR point cloud data, and neighboring points must be chosen from similar regions. Considering this fact, we employed distinct adaptive neighborhood selection methods for the different regions in our proposed approach. Furthermore, our machine learning-based classifier, which incorporates the proposed neighborhood approach based on selected features, outperformed current state-of-the-art deep learning and machine learning approaches, thereby improving urban scene segmentation results. The proposed approach yielded notably higher F1-score, precision, and overall accuracy compared to recent deep learning and machine learning approaches, as demonstrated in Tables 5 and 6. However, for some specific classes in both datasets, the proposed approach did not exhibit satisfactory segmentation performance. For instance, in the Vaihingen area, a significant number of points in the powerline and shrub classes were misidentified. This is primarily attributed to the sparse and imbalanced distribution of the input point cloud in the training dataset. For the same reasons, a significant number of points in some classes of the Toronto-3D dataset, including road markings and powerlines, are also misidentified.

Conclusion

In this research, we have investigated the issue of 3D urban scene segmentation using LiDAR point cloud data. In the literature review section, we pointed out that selecting an appropriate neighborhood for calculating accurate feature values is the major issue in the existing approaches for segmenting urban scenes using machine learning classifiers. In this paper, we have proposed and used suitable neighborhood selection techniques based on the different geometric properties of the individual points in different regions of the input LiDAR point cloud data. The experimental result section demonstrates the effectiveness of this research. We have used two different benchmark datasets to validate the impact of our method: the Vaihingen area of the ISPRS benchmark dataset, which has a low point density, and the Toronto-3D dataset, which has a very high-density point cloud. In both cases, the proposed method demonstrates significantly high quantitative performance in terms of our selected evaluation metrics: Accuracy, Precision, Recall, and overall F1-score. We have also demonstrated significant qualitative performance in the experimental result section. However, the proposed method fails to show the expected segmentation performance for a few classes because of the comparatively few points of those classes in the training point cloud data and their confusing nature. In future work, we will specifically address enhancing the segmentation performance for classes that have fewer points or are imbalanced with ambiguous characteristics.

References

  1. Weinmann M, Jutzi B, Hinz S, Mallet C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS Journal of Photogrammetry and Remote Sensing. 2015;105:286–304.
  2. Dey EK, Awrangjeb M, Kurdi FT, Stantic B. Building Boundary Extraction from LiDAR Point Cloud Data. In: 2021 Digital Image Computing: Techniques and Applications (DICTA). IEEE; 2021. p. 1–6.
  3. Malihi S, Valadan Zoej M, Hahn M, Mokhtarzade M, Arefi H. 3D building reconstruction using dense photogrammetric point cloud. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2016;41:71–74.
  4. Dorninger P, Pfeifer N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors. 2008;8(11):7323–7343. pmid:27873931
  5. Li Y, Yong B, Wu H, An R, Xu H. Road detection from airborne LiDAR point clouds adaptive for variability of intensity data. Optik. 2015;126(23):4292–4298.
  6. Azevedo F, Dias A, Almeida J, Oliveira A, Ferreira A, Santos T, et al. Lidar-based real-time detection and modeling of power lines for unmanned aerial vehicles. Sensors. 2019;19(8):1812. pmid:30995721
  7. Rutzinger M, Höfle B, Hollaus M, Pfeifer N. Object-based point cloud analysis of full-waveform airborne laser scanning data for urban vegetation classification. Sensors. 2008;8(8):4505–4528. pmid:27873771
  8. Ramiya AM, Nidamanuri RR, Krishnan R. Object-oriented semantic labelling of spectral–spatial LiDAR point cloud for urban land cover classification and buildings detection. Geocarto International. 2016;31(2):121–139.
  9. Duran Z, Ozcan K, Atik ME. Classification of photogrammetric and airborne lidar point clouds using machine learning algorithms. Drones. 2021;5(4):104.
  10. Zhu Q, Wang F, Hu H, Ding Y, Xie J, Wang W, et al. Intact planar abstraction of buildings via global normal refinement from noisy oblique photogrammetric point clouds. ISPRS International Journal of Geo-Information. 2018;7(11):431.
  11. Pessoa GG, Amorim A, Galo M, Galo MdLBT. Photogrammetric point cloud classification based on geometric and radiometric data integration. Boletim de Ciências Geodésicas. 2019;25.
  12. Li W, Wang FD, Xia GS. A geometry-attentional network for ALS point cloud classification. ISPRS Journal of Photogrammetry and Remote Sensing. 2020;164:26–40.
  13. Lin W, Fan W, Liu H, Xu Y, Wu J. Classification of handheld laser scanning tree point cloud based on different KNN algorithms and random forest algorithm. Forests. 2021;12(3):292.
  14. Xue J, Men C, Liu Y, Xiong S. Adaptive neighbourhood recovery method for machine learning based 3D point cloud classification. International Journal of Remote Sensing. 2023;44(1):311–340.
  15. Qi CR, Su H, Mo K, Guibas LJ. PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 652–660.
  16. Tarsha Kurdi F, Amakhchan W, Gharineiat Z, Boulaassal H, El Kharki O. Contribution of Geometric Feature Analysis for Deep Learning Classification Algorithms of Urban LiDAR Data. Sensors. 2023;23(17). pmid:37687815
  17. Mahdaoui A, Sbai EH. 3D point cloud simplification based on k-nearest neighbor and clustering. Advances in Multimedia. 2020;2020:1–10.
  18. Lee I, Schenk T. Perceptual organization of 3D surface points. International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences. 2002;34(3/A):193–198.
  19. Filin S, Pfeifer N. Neighborhood systems for airborne laser data. Photogrammetric Engineering & Remote Sensing. 2005;71(6):743–755.
  20. He E, Chen Q, Wang H, Liu X. A curvature based adaptive neighborhood for individual point cloud classification. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2017;42:219–225.
  21. Günen MA. Adaptive neighborhood size and effective geometric features selection for 3D scattered point cloud classification. Applied Soft Computing. 2022;115:108196.
  22. Vega C, Hamrouni A, El Mokhtari S, Morel J, Bock J, Renaud JP, et al. PTrees: A point-based approach to forest tree extraction from lidar data. International Journal of Applied Earth Observation and Geoinformation. 2014;33:98–108.
  23. Awrangjeb M, Fraser CS. Rule-based segmentation of LIDAR point cloud for automatic extraction of building roof planes. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2013;2:1–6.
  24. Dey EK, Awrangjeb M, Stantic B. Outlier detection and robust plane fitting for building roof extraction from LiDAR data. International Journal of Remote Sensing. 2020;41(16):6325–6354.
  25. Croce V, Caroti G, De Luca L, Jacquot K, Piemonte A, Véron P. From the semantic point cloud to heritage-building information modeling: A semiautomatic approach exploiting machine learning. Remote Sensing. 2021;13(3):461.
  26. Chehata N, Guo L, Mallet C. Airborne lidar feature selection for urban classification using random forests. In: Laserscanning; 2009.
  27. Mallet C, Bretar F, Roux M, Soergel U, Heipke C. Relevance assessment of full-waveform lidar data for urban area classification. ISPRS Journal of Photogrammetry and Remote Sensing. 2011;66(6):S71–S84.
  28. Wei Y, Yao W, Wu J, Schmitt M, Stilla U. Adaboost-based feature relevance assessment in fusing LiDAR and image data for classification of trees and vehicles in urban scenes. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2012;1:323–328.
  29. Khoshelham K, Oude Elberink S. Role of dimensionality reduction in segment-based classification of damaged building roofs in airborne laser scanning data. In: Proceedings of the International Conference on Geographic Object Based Image Analysis, Rio de Janeiro, Brazil; 2012. p. 7–9.
  30. Jiang G, Yan WY, Lichti DD. A Maximum Entropy-Based Optimal Neighbor Selection for Multispectral Airborne LiDAR Point Cloud Classification. IEEE Transactions on Geoscience and Remote Sensing. 2023.
  31. Li Y, Bu R, Sun M, Wu W, Di X, Chen B. PointCNN: Convolution on X-transformed points. Advances in Neural Information Processing Systems. 2018;31.
  32. Nong X, Bai W, Liu G. Airborne LiDAR point cloud classification using PointNet++ network with full neighborhood features. PLoS ONE. 2023;18(2):e0280346. pmid:36763685
  33. Xiang Q, He Y, Wen D. Adaptive deep learning-based neighborhood search method for point cloud. Scientific Reports. 2022;12(1):2098. pmid:35136167
  34. Zhang J, Zhao X, Chen Z, Lu Z. A review of deep learning-based semantic segmentation for point cloud. IEEE Access. 2019;7:179118–179133.
  35. Komarichev A, Zhong Z, Hua J. A-CNN: Annularly convolutional neural networks on point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2019. p. 7421–7430.
  36. Ye X, Li J, Huang H, Du L, Zhang X. 3D recurrent neural networks with context fusion for point cloud semantic segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV); 2018. p. 403–417.
  37. Wang Y, Sun Y, Liu Z, Sarma SE, Bronstein MM, Solomon JM. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG). 2019;38(5):1–12.
  38. Amakhchan W, Kurdi FT, Gharineiat Z, Boulaassal H, El Kharki O. Classification of Forest LiDAR Data Using Deep Learning Pipeline Algorithm and Geometric Feature Analysis. International Journal of Environmental Sciences and Natural Resources. 2023.
  39. Lalonde JF, Vandapel N, Huber DF, Hebert M. Natural terrain classification using three-dimensional ladar data for ground robot mobility. Journal of Field Robotics. 2006;23(10):839–861.
  40. Weinmann M, Jutzi B, Mallet C. Feature relevance assessment for the semantic interpretation of 3D point cloud data. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2013;2:313–318.
  41. Chen X, Yu K. Feature line generation and regularization from point clouds. IEEE Transactions on Geoscience and Remote Sensing. 2019;57(12):9779–9790.
  42. Li Q, Yuan P, Lin Y, Tong Y, Liu X. Pointwise classification of mobile laser scanning point clouds of urban scenes using raw data. Journal of Applied Remote Sensing. 2021;15(2):024523.
  43. Wang X, Ma X, Yang F, Su D, Qi C, Xia S. Improved progressive triangular irregular network densification filtering algorithm for airborne LiDAR data based on a multiscale cylindrical neighborhood. Applied Optics. 2020;59(22):6540–6550. pmid:32749354
  44. Mohamed M, Morsy S, El-Shazly A. Improvement of 3D LiDAR point cloud classification of urban road environment based on random forest classifier. Geocarto International. 2022;37(27):15604–15626.
  45. Wang Y, Chen Q, Liu L, Zheng D, Li C, Li K. Supervised classification of power lines from airborne LiDAR data in urban areas. Remote Sensing. 2017;9(8):771.
  46. Dey EK, Tarsha Kurdi F, Awrangjeb M, Stantic B. Effective selection of variable point neighbourhood for feature point extraction from aerial building point cloud data. Remote Sensing. 2021;13(8):1520.
  47. Blomley R, Jutzi B, Weinmann M. 3D semantic labeling of ALS point clouds by exploiting multi-scale, multi-type neighborhoods for feature extraction. GEOBIA 2016: Solutions and Synergies. 2016.
  48. Niemeyer J, Rottensteiner F, Soergel U. Contextual classification of LIDAR data and building object detection in urban areas. ISPRS Journal of Photogrammetry and Remote Sensing. 2014;87:152–165.
  49. Schmidt A, Niemeyer J, Rottensteiner F, Soergel U. Contextual classification of full waveform lidar data in the Wadden Sea. IEEE Geoscience and Remote Sensing Letters. 2014;11(9):1614–1618.
  50. Peng S, Xi X, Wang C, Dong P, Wang P, Nie S. Systematic comparison of power corridor classification methods from ALS point clouds. Remote Sensing. 2019;11(17):1961.
  51. Kuprowski M, Drozda P. Feature Selection for Airborne LiDAR Point Cloud Classification. Remote Sensing. 2023;15(3):561.
  52. Weinmann M, Jutzi B, Mallet C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2014;2:181–188.
  53. Dos Santos R, Galo M, Habib A. K-Means Clustering Based on Omnivariance Attribute for Building Detection from Airborne LIDAR Data. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2022;2:111–118.
  54. Zomorodian A, Carlsson G. Computing persistent homology. In: Proceedings of the Twentieth Annual Symposium on Computational Geometry; 2004. p. 347–356.
  55. Kong G, Fan H. PH-shape: an adaptive persistent homology-based approach for building outline extraction from ALS point cloud data. Geo-spatial Information Science. 2023; p. 1–11.
  56. Salnikov V, Cassese D, Lambiotte R. Simplicial complexes and complex systems. European Journal of Physics. 2018;40(1):014001.
  57. Dey EK, Awrangjeb M, Tarsha Kurdi F, Stantic B. Machine learning-based segmentation of aerial LiDAR point cloud data on building roof. European Journal of Remote Sensing. 2023;56(1):2210745.
  58. Pauly M, Gross M, Kobbelt LP. Efficient simplification of point-sampled surfaces. In: IEEE Visualization, 2002. VIS 2002. IEEE; 2002. p. 163–170.
  59. Seyfeli S, Ok A. Classification of mobile laser scanning data with geometric features and cylindrical neighborhood. Baltic Journal of Modern Computing. 2022;10(2).
  60. Sevgen E, Abdikan S. Classification of Large-Scale Mobile Laser Scanning Data in Urban Area with LightGBM. Remote Sensing. 2023;15(15):3787.
  61. Han X, Dong Z, Yang B. A point-based deep learning network for semantic segmentation of MLS point clouds. ISPRS Journal of Photogrammetry and Remote Sensing. 2021;175:199–214.
  62. Huang R, Gao Y, Xu Y, Hoegner L, Tong X. A Simple Framework of Few-Shot Learning Using Sparse Annotations for Semantic Segmentation of 3D Point Clouds. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2024.