
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

  • Hui Wen,

    Affiliation ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China

  • Weixin Xie,

    Affiliation ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China

  • Jihong Pei

    jhpei@szu.edu.cn

    Affiliation ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China

Abstract

This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network: the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space, the adaptive RBF network performs nonlinear kernel mapping, and the BP network performs nonlinear classification. The optimized learning strategy proceeds as follows. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and their parameters, and a form of heterogeneous samples repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Experiments comparing SAHRBF-BP with other algorithms on different data sets show its superiority. In particular, on most low dimensional data sets and data sets with a large number of samples, the classification performance of SAHRBF-BP outperforms that of other algorithms for training SLFNs.

Introduction

In the field of pattern recognition and data mining, radial basis function (RBF) networks, as typical single-layer feed-forward networks (SLFNs), have been intensively studied over the past several decades. When used for classification problems, there are three important factors for evaluating network performance: 1) classifying accuracy, 2) network size, and 3) training time. To achieve good network performance, different optimization algorithms are used to train the RBF hidden layer, such as K-means clustering [1, 2], fuzzy C-means clustering [3, 4], fuzzy K-nearest neighbors [5], differential evolution [6, 7], and other optimization algorithms [8–12]. However, in most of these methods, the number of RBF hidden nodes is assigned a priori, which may lead to poor adaptability for different sample sets. The selection of network size is also a critical issue. If there are too few hidden nodes, the network may not be able to approximate the given function; if there are too many, the network may exhibit poor generalization performance because of overfitting. Several sequential learning algorithms have been proposed to find a proper network size [13–16]. In [17], a minimal resource allocation network (MRAN) is proposed, which is allowed to delete previously added centers. The deletion strategy is based on the overall contribution of each hidden unit to the network output. A sequential learning algorithm for growing and pruning the RBF (GAP-RBF) and a generalized growing and pruning RBF (GGAP-RBF) algorithm are proposed in [18, 19], which use the significance of nodes as the learning strategy. Because the GGAP-RBF algorithm cannot handle problems with high-dimensional probability density distributions, this problem is overcome in [20], which uses a Gaussian mixture model (GMM) to approximate the GGAP evaluation formula (GGAP-GMM). In [21], an error correction (ErrCor) algorithm is used for function approximation. In each iteration of the algorithm, one RBF unit is added to fit and then eliminate the highest peak in the error surface, which can reach a desired error level with fewer RBF units. Other methods have also been established to identify a proper structure while maintaining the desired level of accuracy [22–27].

For online training algorithms, the training time is very important; it directly determines how efficiently an algorithm runs. This problem is well addressed by extreme learning machines (ELMs) [28], which are also effective algorithms for training SLFNs: ELMs choose random hidden node parameters and calculate the output weights with the least squares algorithm. This method can achieve a fast training speed as well as good classifying accuracy. In ELMs, the number of hidden nodes is assigned a priori, and many non-optimal nodes may exist; an ELM tends to require more hidden nodes than conventional tuning-based algorithms [29]. Thus, in [30–33], several types of growing and pruning techniques based on ELMs are proposed to effectively estimate the number of hidden nodes. In [34], an evolutionary ELM (E-ELM) based on differential evolution and ELM is proposed; the algorithm uses the differential evolution method to optimize the network input parameters and an ELM algorithm to calculate the network output weights. Because the trial vector generation strategies and the control parameters have to be manually chosen in E-ELM, a self-adaptive evolutionary extreme learning machine (SaE-ELM) is proposed in [35]; its network hidden node parameters are optimized by the self-adaptive differential evolution algorithm, which further improves the network performance.

This paper mainly focuses on how to obtain higher classifying accuracy as well as a suitable network size for the RBF hidden layer. A structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy is presented. SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space, so that a network size for the RBF hidden layer that matches the complexity of the sample space can be built up. Thus, SAHRBF-BP solves the problem of the dimension change when the sample space is mapped to the feature space. In SAHRBF-BP, the nodes in the RBF network are used for nonlinear kernel mapping, the complexity of the sample space is mapped onto the dimension of the BP input layer, and the BP network is then used for nonlinear classification. The nonlinear kernel mapping can improve the separability of the sample space, and the nonlinear BP classifier can then supply a better classification surface. In this manner, SAHRBF-BP combines the local response characteristics of the RBF network with the global response characteristics of the BP network, which simplifies the selection of parameters in the BP hidden layer while reducing the dependence on the space mapping in the RBF hidden layer; thus, the classification accuracy is improved while the generalization performance is guaranteed.

An optimized learning strategy is presented to construct the SAHRBF-BP classifier. The strategy uses global information of the training sample space and generates RBF hidden nodes incrementally. On the one hand, many optimization algorithms, such as K-means clustering, fuzzy C-means clustering, and differential evolution, also use global information of the training sample space to optimize the RBF hidden nodes; however, the number of hidden nodes in these algorithms must be determined manually, which may lead to poor adaptivity for different sample sets. On the other hand, sequential learning algorithms, such as MRAN and GAP-RBF, can estimate the number of RBF hidden nodes for different sample sets; however, the loss of global information may lead to a reduction in classification performance. In addition, unlike GAP-RBF, the presented method does not require the assumption that the input samples obey a uniform distribution, and unlike GGAP-GMM, it does not need to fit the input sample distribution. By using a potential function clustering approach to measure the density in each class of the training sample space, the corresponding RBF hidden nodes that cover different sample regions can be established. This reduces the restrictions on the sample sets and is adaptable to more complex sample sets. Once an initial RBF hidden node is generated, a form of heterogeneous samples repulsive force is designed to further optimize the hidden node parameters. For each initial hidden node, we assume that, within a certain region, the heterogeneous samples can affect the center; that is, there is a repulsive force that makes the current center move away from the heterogeneous samples. When the center reaches a suitable position, the repulsive force disappears, and a suitable width parameter can then be determined accordingly. A mechanism for eliminating the potentials of the original samples is then presented, which prepares for the next learning step. Thus, the RBF centers, the widths and the number of RBF hidden nodes can be effectively estimated.

Once the RBF hidden nodes are generated adaptively and the node parameters are optimized, the number of subsequent BP input nodes can be determined, and the overall SAHRBF-BP classifier is built up; then different training sample sets are used to train the BP network parameters in SAHRBF-BP, where the BP network parameters are optimized by the existing BP algorithm.

In this paper, the performance of SAHRBF-BP is compared with that of other well-known algorithms for training SLFNs, such as back propagation based on stochastic gradient descent (SGBP) [36], MRAN, SVM, ELM, and SaE-ELM, on 108 benchmark data sets. To measure the unique features of SAHRBF-BP, the RBF network based on k-means clustering (KMRBF) [2], GAP-RBF and the hybrid RBF-BP network trained with the k-means clustering learning algorithm (KMRBF-BP) are also compared with SAHRBF-BP on two artificial data sets. Experiments show the superiority of SAHRBF-BP. In particular, on most low dimensional data sets and data sets with a large number of samples, the classification performance of SAHRBF-BP outperforms that of other training SLFNs algorithms.

Methods

SAHRBF-BP classifier

SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space, the complexity of the sample space is mapped onto the dimension of the BP input layer, and the BP network is then used for nonlinear classification. The nonlinear kernel mapping can improve the separability of the sample space, and a nonlinear BP classifier can then supply a better classification surface. To clarify the situation, Fig 1(A) and 1(B) show an illustrative diagram of the sample space mapping onto the feature space for different classification problems. Note that in Fig 1(A), the samples in the red box, which are far away from the center of each kernel function, will be mapped near the origin of the coordinate plane; this problem is overcome by the optimized learning strategy presented in the next section. In Fig 1(B), as the complexity of the sample space increases, the dimension of the feature space increases accordingly, and the feature space is classified by a BP network.

Fig 1. Illustrative diagram of the sample space mapping onto feature space for different sample sets.

(A) The mapping dimension is 3 (B) The mapping dimension is 6.

https://doi.org/10.1371/journal.pone.0164719.g001

SAHRBF-BP is shown in Fig 2, which consists of four components:

  1. The input layer, which consists of t source neurons, where t is the dimensionality of the input vector.
  2. The RBF hidden layer, which consists of a group of Gaussian kernel functions: (1) φk(x) = exp(−||x − μk||^2 / (2σk^2)), k = 1, 2, …, K, where μk and σk are the center and width of the kth hidden node, respectively, and K is the number of hidden neurons.
  3. The BP hidden layer, which consists of the neurons between the RBF hidden layer and the output layer. The induced local field for node j in layer l of the BP network is (2) vj(l) = Σi wji(l) oi(l−1), where oi(l−1) is the output signal of neuron i in the previous layer l − 1 of the BP network and wji(l) is the synaptic weight of neuron j in layer l that is fed from neuron i in layer l − 1. Assuming the use of a sigmoid function, the output signal of neuron j in layer l is (3) where a and b are constants.
    If neuron j is in the first hidden layer of the BP network, i.e., l = 1, its inputs from the previous layer are set to (4) where gj(x) is the double polar output of φj(x) and can be denoted as (5)
  4. The output layer. Let L be the depth of the BP network; note that the depth of the BP network equals the sum of the BP input layer, the hidden layers and the output layer, i.e., if l = 1 then L = 3, and the output can be given as (6). A sketch of this forward pass is given below.
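To make the cascade concrete, the following is a minimal Python/NumPy sketch (not the authors' MATLAB code) of a forward pass through the four components listed above. The Gaussian kernel of Eq (1) and the induced local field of Eq (2) are used as written; the logistic sigmoid activation, the bipolar mapping 2φ − 1 standing in for Eq (5), and the layer shapes are assumptions made for illustration, since Eqs (3)–(6) are only referenced by number in this text.

import numpy as np

def rbf_layer(x, centers, widths):
    """Eq (1): Gaussian kernel outputs phi_k(x) for all K hidden nodes."""
    d2 = np.sum((centers - x) ** 2, axis=1)          # squared distances ||x - mu_k||^2
    return np.exp(-d2 / (2.0 * widths ** 2))

def bipolar(phi):
    """Assumed 'double polar' processing for Eq (5): map (0, 1] outputs to [-1, 1]."""
    return 2.0 * phi - 1.0

def bp_forward(g, weights):
    """Eqs (2)-(3) and (6): cascade of fully connected layers.
    `weights` is a list of (W, b) pairs; a logistic sigmoid is an assumed activation."""
    o = g
    for W, b in weights:
        v = W @ o + b                                 # induced local field, Eq (2)
        o = 1.0 / (1.0 + np.exp(-v))                  # assumed sigmoid activation
    return o

def sahrbf_bp_forward(x, centers, widths, bp_weights):
    """Full SAHRBF-BP forward pass: RBF mapping -> bipolar processing -> BP classifier."""
    return bp_forward(bipolar(rbf_layer(x, centers, widths)), bp_weights)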

In Fig 2, the double polar processing can ensure the validity of the BP network input. In addition, the combination of the structure-adaptive adjustment of the RBF hidden layer with the BP network provides a good complementary effect. On the one hand, the RBF network has good stability: the activation response of the RBF hidden nodes has local characteristics and maps the output to a value between 0 and 1. Thus, the original samples, including outliers, are limited to a finite space, and the adaptive adjustment of the RBF hidden nodes can ensure the validity of the space mapping. Processing the mapping results of the RBF hidden nodes and using them as the input of the BP network can reduce the dependence on the selection of the BP network parameters; furthermore, the convergence rate of the BP network can be increased and local minima can be avoided. On the other hand, in a BP network, the activation response of the hidden nodes has global characteristics, especially in regions not fully represented by the training set. In SAHRBF-BP, the BP network is used for nonlinear classification, which can reduce the dependence on the original sample space mapping. Even if there are errors in the original sample space mapping, the nonlinear BP network can compensate for them to a certain extent. Therefore, SAHRBF-BP combines the stability of the RBF network with the generalization ability of the BP network and further improves the classification performance.

A single hidden layer multilayer perceptron neural network with input-output mapping can provide an approximate realization of any continuous mapping [37]. In light of the foregoing discussion, in SAHRBF-BP, we set the number of BP hidden layers to l = 1, and the number of hidden nodes of BP should be appropriately increased with the increase in the complexity of the sample space.

When the SAHRBF-BP classifier is built up, however, new problems may arise because the number of RBF hidden nodes and their parameters are unknown, and inappropriate kernel mapping will deteriorate the network performance. This problem is overcome by the optimized learning strategy presented in the next section.

The optimized learning strategy

Main objective.

To obtain good classifying performance for a given training sample set, it is necessary to fully use the training sample information. Fig 3 demonstrates such a scenario, where the generated RBF hidden nodes are used to cover the samples of class 2. However, these RBF hidden nodes may also cover samples of classes 1 and 3, which leads to a reduction in classification performance. Our main objective is to design a method that can optimize the coverage of each class of training samples, where each coverage generates an RBF hidden node, and ultimately estimate the centers, the widths and the number of RBF hidden nodes. For that purpose, the following issues should be considered:

  1) To optimize the coverage of the training sample space, a suitable initial RBF hidden node must be established each time.
  2) The adjustments of the center and width should meet certain criteria such that each generated RBF hidden node can cover the samples of the current class as much as possible, while covering the samples of other classes as little as possible.

Fig 3. Example of the current RBF hidden nodes covering other classes of samples.

https://doi.org/10.1371/journal.pone.0164719.g003

To address issue 1), we consider that for each class of training samples, in different regions, their densities are different. To cover the training sample space effectively, the sample in the most intensive region can be selected as the initial center. Therefore, it is necessary to quantify each class of samples and establish a standard for measuring the density of the input sample space. In this paper, a potential function is introduced into training sample space. By calculating the sample potentials in each class, the densities of different regions can be measured, where the sample with the maximum potential value can be used as the initial center. To address issue 2), we consider that the information of other classes of samples can be used to adjust the center and width such that an optimization model is established, where a form of heterogeneous samples repulsive force is designed to adjust the center and the width adaptively.

To complete the main objective, the following steps can be followed.

  Step 1. Compute the potential value of each sample in the current class.
  Step 2. Set the sample with the maximum potential value as the initial center.
  Step 3. Consider the distance between the heterogeneous samples and the center; in a certain region, the center and width should be adjusted adaptively by a form of heterogeneous samples repulsive force.
  Step 4. Eliminate the potential value of each sample in the current class.
  Step 5. Iterate Steps 2-4 until the stop condition is met, then turn to learn other classes of samples.

Algorithm principle.

In the field of pattern recognition, potential functions can be used for density clustering and image segmentation (IS). Several methods of constructing potential functions are proposed in [38]; here, we choose the potential function (7), where γ(x1, x2) represents the interaction potential of two points x1, x2 in the input sample space, d(x1, x2) represents the distance measure, and T is a constant, which can be regarded as the distance weighting factor.

Given a training sample set S, a specific label yi ∈ {y1, y2, …, yh} is attached to each sample vector x in S, where h is the number of pattern classes. Let Si denote the set of feature vectors that are labeled yi, where Ni is the number of training samples in the ith pattern class. Thus S = S1 ∪ S2 ∪ … ∪ Sh and Si ∩ Sj = ∅, ∀ i ≠ j. For a pair of samples in Si, their interaction potential can be denoted as (8)

Take a sample in Si as the baseline sample; the interaction potential of all other samples in Si with respect to this baseline sample can then be denoted as (9)

Once the potential of each sample in Si is given, the sample with the maximum potential can be selected, that is, (10)

To generate valid Gaussian kernel functions, we find the densest region in the sample space and then establish a Gaussian kernel to cover that region. To that end, the sample with the maximum potential is chosen as the initial center of the Gaussian kernel function, which is expressed as follows: (11) where k refers to the number of RBF hidden neurons generated so far.
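As an illustration, the following Python/NumPy sketch computes the class-wise potentials of Eqs (8)–(9) and selects the initial center of Eqs (10)–(11). The exponential form γ(x1, x2) = exp(−T·d(x1, x2)) is an assumed realization of the potential function, since Eq (7) is only referenced by number above.

import numpy as np

def potentials(Si, T=1.0):
    """Eqs (8)-(9): total interaction potential of every sample in the class set Si,
    using an assumed exponential potential gamma(x1, x2) = exp(-T * ||x1 - x2||)."""
    dists = np.linalg.norm(Si[:, None, :] - Si[None, :, :], axis=2)   # pairwise distances
    return np.sum(np.exp(-T * dists), axis=1) - 1.0                   # exclude each sample's self-term

def initial_center(Si, P):
    """Eqs (10)-(11): pick the sample with the maximum potential as the
    initial RBF center of the next hidden node."""
    idx = int(np.argmax(P))
    return Si[idx].copy(), idx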

Once the width is given, an initial RBF hidden node is established, which can be used to cover samples of the current class. However, the generated RBF node takes into account sample information about the current class only, which may cause the current RBF hidden node to cover samples of other classes. To achieve an optimized coverage of each class of training samples, heterogeneous samples are taken into account here to optimize the initial hidden node parameters. A form of heterogeneous samples repulsive force is used to adjust the center and the width adaptively. To adjust the center, first, the direction of each heterogeneous sample repulsive force should be in line with the centerline, which moves the center directly away from the heterogeneous sample. Second, when a heterogeneous sample is close to the center, the center should be adjusted by a large margin, whereas when a heterogeneous sample is relatively far from the center, the center should be adjusted only slightly. According to the foregoing description, the heterogeneous sample repulsive force is defined as follows:

Definition: Given two vectors, where one is the center and the other is a heterogeneous sample, there is a repulsive force directed from the heterogeneous sample toward the center. The magnitude of the force is inversely proportional to the square of the distance between the two vectors, and the direction of the repulsive force is in line with the centerline.

To adjust the center by the form of heterogeneous sample repulsive force, two hypothetical conditions should be met:

  1) When the initial center is determined, given the initial width, the heterogeneous sample repulsive force exists only within the current coverage region.
  2) When the center is adjusted to a suitable position, the heterogeneous sample repulsive force disappears.

Condition 1) states that when the distance between the center and a heterogeneous sample is outside a certain range, the heterogeneous sample repulsive force can be ignored; this condition simplifies the study of the problem. For condition 2), the key is to establish criteria that make the center converge toward a suitable position.

According to the definition of the heterogeneous sample repulsive force and condition 1), the optimization model can be given as follows.

Given the initial width σ and the generated initial center μk (μk ∈ Si), consider a heterogeneous sample x ∉ Si. If ||x − μk|| < λσ, where λ is a width covering factor, there is a repulsive force from x to μk, which can be denoted as (12)

Here, a negative exponential function is chosen to express the relationship between the heterogeneous sample repulsive force and the distance: (13) where α is a positive constant and can be seen as the heterogeneous sample repulsive force control factor. Assume the number of heterogeneous samples in the currently covered region is Mj. Summing the repulsive forces of all heterogeneous samples, the center can be adjusted as follows: (14)

Note that for a sample x with x ∉ Si, if ||x − μk|| < λσ, then Mj is incremented by 1. Similarly, let Mi denote the number of samples of the current class in the currently covered region: for x ∈ Si, if ||x − μk|| < λσ, then Mi is incremented by 1.
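The following sketch illustrates one center adjustment in the spirit of Eqs (12)–(14). The negative-exponential magnitude standing in for Eq (13) and the unit direction along the centerline are written out as assumptions, since those equations are not reproduced in this extraction.

import numpy as np

def adjust_center(mu, hetero, sigma, lam=1.5, alpha=10.0):
    """One center adjustment following the description of Eqs (12)-(14).
    Only heterogeneous samples inside the covered region (||x - mu|| < lam * sigma)
    exert a force; the magnitude exp(-alpha * ||x - mu||) and the unit direction
    (mu - x) / ||mu - x|| are assumed concrete forms."""
    d = np.linalg.norm(hetero - mu, axis=1)
    covered = hetero[d < lam * sigma]
    if covered.shape[0] == 0:
        return mu, 0                                   # no repulsive force, Mj = 0
    diff = mu - covered                                # directions away from heterogeneous samples
    dist = np.maximum(np.linalg.norm(diff, axis=1, keepdims=True), 1e-12)
    force = np.sum(np.exp(-alpha * dist) * diff / dist, axis=0)
    return mu + force, covered.shape[0]                # updated center and Mj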

Fig 4 provides a geometrical description of the heterogeneous sample repulsive force model, where the black and red boxes denote the covered region before and after the center adjustment, respectively. In Fig 4, repulsive forces are directed from samples 1 and 2 toward the initial center. The resultant force adjusts the initial center to a new position that is far away from samples 1 and 2.

For condition 2), to make the center reach a suitable position, it is necessary to carry out multiple iterations. Let M be the iteration step variable. At the initial stage, the magnitude of the center adjustment is relatively large; as the iteration step increases, the magnitude of the center adjustment gradually decreases and the center eventually converges to a suitable position. To ensure the validity of the center adjustment, the numbers of current-class samples and heterogeneous samples covered in the updated region are recomputed at each step; for each center adjustment, Eq (14) is corrected as follows: (15)

In practice, because of the complexity of different training sample sets, even if the center is adjusted to a suitable position, the generated RBF hidden node may still cover heterogeneous samples under the given initial width. Decreasing the width is one way of reducing the coverage of heterogeneous samples; however, if the width is too small, the generalization performance will be greatly reduced. To further complete the optimized coverage of the different regions of each class of training samples while guaranteeing good generalization performance, the width adjustment is set as follows: (16) where σmin is the minimum width and β is a fixed value that can be seen as the width constraint factor. In practice, to ensure the validity of the width adjustment, β should be chosen slightly less than the width covering factor λ, and the adjusted width should lie between σmin and σ. For different RBF hidden nodes, this adjustment ensures a relative difference among the widths so as to better fit the training sample space. Note that the adjustment of the width is carried out only once, after the center adjustment.
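A possible realization of the iterative correction of Eq (15) and the one-off width adjustment of Eq (16) is sketched below. The 1/M damping of the center step and the width rule (nearest heterogeneous distance divided by β, clipped to [σmin, σ]) are assumptions consistent with the description above, not the exact published formulas.

import numpy as np

def refine_node(mu, hetero, sigma, sigma_min=0.1, lam=1.5, beta=1.3,
                alpha=10.0, max_iter=50):
    """Iterative center refinement in the spirit of Eq (15), followed by a single
    width adjustment in the spirit of Eq (16)."""
    M = 1
    while M <= max_iter:
        d = np.linalg.norm(hetero - mu, axis=1)
        inside = d < lam * sigma                      # heterogeneous samples still covered
        if not np.any(inside):
            break                                     # repulsive force has vanished
        diff = mu - hetero[inside]                    # directions away from each covered sample
        dist = np.maximum(d[inside], 1e-12)[:, None]
        force = np.sum(np.exp(-alpha * dist) * diff / dist, axis=0)   # assumed Eqs (13)-(14)
        mu = mu + force / M                           # damped update, assumed Eq (15)
        M += 1
    # width adjustment, carried out once after the center adjustment (assumed Eq (16))
    d = np.linalg.norm(hetero - mu, axis=1)
    if d.size:
        sigma = float(np.clip(d.min() / beta, sigma_min, sigma))
    return mu, sigma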

When a hidden node is established, it is necessary to eliminate the potentials in its region so that the next initial center can be found among the remaining samples. This can be accomplished as follows: (17) where μk is the initial center of the current hidden neuron. Eq (17) shows that when a sample is close to the initial center, its potential value is attenuated quickly, whereas when a sample is far away from the center, its potential value is attenuated slowly. When the inequality (18) is met, the learning process continues and is ready to search for the next initial center. Otherwise, the algorithm for constructing RBF hidden nodes in the current pattern class is terminated and turns to the other pattern classes, where δ is a threshold.
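The potential elimination of Eq (17) and the stop test of Eq (18) could be realized as follows; the Gaussian attenuation factor and the maximum-potential threshold test are assumed forms, since only the qualitative behavior (fast decay near the center, slow decay far away, threshold δ) is stated above.

import numpy as np

def eliminate_potentials(P, Si, mu, sigma, T=1.0):
    """Eq (17)-style update: attenuate the potential of every remaining sample in the
    current class, decaying quickly near the initial center mu of the new node and
    slowly far from it (the Gaussian factor is an assumption)."""
    d2 = np.sum((Si - mu) ** 2, axis=1)
    return P * (1.0 - np.exp(-d2 / (2.0 * sigma ** 2)))

def keep_learning(P, delta=0.001):
    """Eq (18)-style stop test: continue creating hidden nodes for this class while the
    maximum remaining potential exceeds the threshold delta (an assumed form of the
    inequality, which is not reproduced in the text)."""
    return float(P.max()) > delta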

Fig 5 shows an illustrative diagram of adaptively generating RBF hidden nodes and optimizing the node parameters, where the number of RBF hidden nodes is increased incrementally: each initial RBF hidden node is determined by the potential function clustering approach, and the form of heterogeneous sample repulsive force is then used to further optimize the parameters of each RBF hidden node. In Fig 5, the black boxes represent the regions covered by the initial centers and widths, and the red boxes represent the final coverage regions.

Fig 5. Illustrative diagram of adaptively generating RBF hidden nodes and optimizing node parameters.

https://doi.org/10.1371/journal.pone.0164719.g005

Combined with the SAHRBF-BP classifier, the optimized learning strategy is summarized in Algorithm 1.

Algorithm 1 The optimized learning strategy

 Initialization;

for i = 1: c    % for each class of training samples

  Compute the potential value of each sample according to Eq (9).

   while the inequality in Eq (18) is met

    Determine the sample with the maximum potential value according to Eq (10).

    Increment the number of RBF hidden nodes by 1 and allocate an initial center using Eq (11).

    Determine Mi, Mj.

    Use Eqs (13) and (14) to update the initial center; determine the updated counts Mi, Mj.

    while the center has not converged to a suitable position

     if heterogeneous samples are still covered in the updated region

      Record the current counts Mi, Mj.

      Use Eq (15) to update the center; determine the updated counts Mi, Mj.

      M ← M + 1.

    else

     Use Eq (16) to update the width.

     Break;

    end if

   end while

   Eliminate the sample potential value of the region according to Eq (17).

  end while

end for

 Use Eqs (1) and (5) to compute gj(x), let g(x) be the input of the BP network, where g(x) = (g1(x), g2(x), …, gK(x)).

while ||e|| > mse_thres && m ≤ num_Epoch

  Use Eqs (2)–(4) and Eq (6) to compute the error signal ej = dj − oj, where dj is the jth element of the desired response vector d.

  Compute the local gradients of the network: for an output-layer neuron, the local gradient is the error signal multiplied by the derivative of the activation function at its induced local field; for a hidden-layer neuron, it is this derivative multiplied by the weighted sum of the local gradients of the next layer, where the derivative is the differentiation with respect to the argument.

  Adjust the synaptic weights of the network in layer l of the BP network with the generalized delta rule: each weight update equals the momentum constant τ times the previous weight change plus the learning rate η times the local gradient times the corresponding input signal from the previous layer.

  m ← m + 1.

end while
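Putting the pieces together, a compact driver for the RBF-construction half of Algorithm 1 might look as follows. It reuses the hypothetical helpers sketched earlier (potentials, initial_center, refine_node, eliminate_potentials, keep_learning) and only illustrates the control flow, not the authors' implementation; the BP training half would then follow the back-propagation loop above.

import numpy as np

def build_adaptive_rbf(X, y, sigma0=1.0, T=1.0, delta=0.001):
    """Outer loop of Algorithm 1: for each class, repeatedly pick the highest-potential
    sample as an initial center, refine it against heterogeneous samples, adjust the
    width, and attenuate the potentials until the stop condition is met."""
    centers, widths = [], []
    for c in np.unique(y):
        Si, hetero = X[y == c], X[y != c]
        P = potentials(Si, T)                              # Eqs (8)-(9)
        while keep_learning(P, delta):                     # Eq (18)
            mu0, _ = initial_center(Si, P)                 # Eqs (10)-(11)
            mu, sigma = refine_node(mu0, hetero, sigma0)   # Eqs (12)-(16)
            centers.append(mu)
            widths.append(sigma)
            P = eliminate_potentials(P, Si, mu0, sigma0, T)  # Eq (17), around the initial center
    return np.array(centers), np.array(widths)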

Adjustment of the output label values

The SAHRBF-BP algorithm can handle binary class problems and multi-class problems. For multi-class classification problems, suppose that the observation dataset is given as {(xn, yn)}, where xn ∈ Rt is a t-dimensional observation feature vector and yn ∈ Rh is its coded class label. Here, h is the total number of classes, which is equal to the number of output neurons. If the observation data xn is assigned to the class label c, then the cth element of yn = [y1, …, yc, …, yh]T is 1 and the other elements are -1, which can be denoted as follows: (19)

The output tags of SAHRBF-BP are o = [o1, …, oj, …, oh]T, where (20)

According to the coding rules, only one output tag value is 1, and the other values are -1. If this condition is not met, the output tag is saturated and must be adjusted. Therefore, we design an effective way to correct the saturation problem in the learning process, which can be denoted as the pseudo code in Algorithm 2.

Algorithm 2 The method of adjusting the output saturation problem

 Given observation dataset , for every input vector xn,

   while j ≤ h

    if the number of output tags equal to -1 is equal to h

     Set max(oj) = 1 and hold the other output values fixed.

    end if

    if the number of output tags equal to 1 is more than 1

     Set max(oj) = 1 and set the other output values to -1.

   end if

  end while
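For illustration, the coding rule of Eq (19) and the saturation correction of Algorithm 2 could be realized as below. The interpretation that a tag vector is saturated when either no element or more than one element equals 1, and that ties are broken by the largest raw network output, is an assumption, since the conditions in the extracted pseudocode are incomplete.

import numpy as np

def encode_label(c, h):
    """Eq (19): 1 for the true class (0-based index c assumed), -1 elsewhere."""
    y = -np.ones(h)
    y[c] = 1.0
    return y

def fix_saturation(tags, raw_outputs):
    """Algorithm 2 (as interpreted here): if every tag is -1, promote the tag with the
    largest raw output to 1 and keep the rest fixed; if more than one tag is 1, keep only
    the tag with the largest raw output as 1 and reset the others to -1."""
    tags = tags.copy()
    ones = np.flatnonzero(tags == 1)
    if ones.size == 0:
        tags[int(np.argmax(raw_outputs))] = 1.0
    elif ones.size > 1:
        tags[:] = -1.0
        tags[int(np.argmax(raw_outputs))] = 1.0
    return tags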

Results

In this section, we evaluate the performance of SAHRBF-BP using two artificial data sets and 108 benchmark data sets, where the Double moon data set is taken from [39] and 101 benchmark data sets are taken from the UCI machine learning repository [40]. In addition, seven benchmark data sets, including cod_rna, DNA, fourclass, ijcnn1, splice, svmguide1 and svmguide3, are taken from [41]. Tables 1–3 provide descriptions of the benchmark data sets. The benchmark data sets are grouped into three categories: binary class, multi-class and large number of samples. All binary class and multi-class benchmark data sets are further grouped into low dimensional and high dimensional sets. For all benchmark data sets, the inputs to each algorithm are scaled appropriately to fall between -1 and +1. In each data set, the training set, validation set and testing set are independent. For balanced data sets, the number of training samples in each class is identical, which also applies to the validation set and testing set. For imbalanced data sets, when the number of training samples is given, the number of training samples in each class is determined according to the proportion of each class in the whole data set. This method also applies to the validation set and testing set.

Table 3. Descriptions of large number of samples data sets.

https://doi.org/10.1371/journal.pone.0164719.t003

The performance of SAHRBF-BP is compared with that of other well-known training and optimization SLFNs algorithms, such as SGBP, MRAN, SVM, ELM, and SaE-ELM, on different data sets. To measure the unique features of SAHRBF-BP, other optimization algorithms such as KMRBF, GAP-RBF and KMRBF-BP are also compared with SAHRBF-BP on two artificial data sets, namely Double moon and Concentric circle. For SGBP, the momentum constant is set to τ = 0.1. For SVM, the RBF kernel is used, the cost C is selected from the set [2^12, 2^11, …, 1] and the kernel parameter is selected from the set [2^−3, 2^−2, …, 2^4]. For GAP-RBF and MRAN, the common parameters are fixed to εmax = 0.5, εmin = 0.01, k = 0.8 and γ = 0.09. Other parameters for GAP-RBF are set to emin = 0.01; for MRAN, the parameters are set to emin = 0.5, and the sliding window M is selected from the set [30, 50, 100, 200, 400]. For SaE-ELM, the number of populations NP is selected from the set [20, 50, 100, 200, 500]. For SAHRBF-BP, the common parameters, namely the distance weighting factor, the width covering factor, the width constraint factor and the potential value learning threshold, are set to T = 1, λ = 1.5, β = 1.3 and δ = 0.001, respectively. The heterogeneous sample repulsive force control factor α is selected from the set [2, 5, 10, 15, 20]. The initial width σ is selected from the set [0.4, 0.5, …, 1.6]. Note that the number of hidden nodes in KMRBF, KMRBF-BP and SaE-ELM is selected manually: the number of hidden nodes is gradually increased, and the one with the lowest overall validation error is selected. For the artificial data sets and the benchmark data sets with fewer than 2000 training samples, simulations of each algorithm are performed 20 times in the MATLAB 2013a environment on an Intel(R) Core(TM) i5 with a 3.2 GHz CPU and 4 GB of RAM. For the other data sets, simulations of each algorithm are performed 3∼15 times in the MATLAB 2013a environment on an Intel(R) Xeon(R) CPU E5-2687w @ 3.40 GHz (dual processor) with 128 GB of RAM. For each algorithm, the parameter setting with the lowest validation error is used to determine the parameters in the training models. The simulations for the SVM are carried out using the popular LIBSVM package in C [41].

Performance measures

In this paper, the overall and average per-class classification accuracies are used to measure performance. Class-level performance is measured by the percentage classification accuracy ηi, which is defined as (21) ηi = qii / Ni, where qii is the number of correctly classified samples and Ni is the number of samples of class yi in the training/testing data set. The overall classification accuracy ηo and the average per-class classification accuracy ηa are defined as (22) ηo = (q11 + q22 + … + qhh) / NT and (23) ηa = (η1 + η2 + … + ηh) / h, where h is the number of classes and NT is the number of training/testing samples. Thus, for balanced classification problems, the overall testing ηo is used to measure the performance of each algorithm. For imbalanced classification problems, both the overall testing ηo and the average testing ηa are used to measure the performance of each algorithm.
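A small sketch of these performance measures, following Eqs (21)–(23) as written above, is given below; class labels are assumed to be encoded as integers 0, …, h−1 for convenience.

import numpy as np

def accuracies(y_true, y_pred, h):
    """Eqs (21)-(23): per-class accuracy eta_i, overall accuracy eta_o, and
    average per-class accuracy eta_a, computed from true and predicted labels."""
    eta_i = np.empty(h)
    for c in range(h):
        mask = (y_true == c)
        eta_i[c] = np.mean(y_pred[mask] == c) if mask.any() else np.nan
    eta_o = np.mean(y_pred == y_true)            # Eq (22)
    eta_a = np.nanmean(eta_i)                    # Eq (23)
    return eta_i, eta_o, eta_a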

Performance comparison

Artificial binary class data set: The Double moon classification problem.

The Double moon data set and the classifying results of SAHRBF-BP are shown in Fig 6(A) and 6(B), respectively. The classification results illustrate that SAHRBF-BP can provide a superior classification surface. Fig 7(A)–7(C) show that when the initial width takes different values, the adaptive coverage of the training sample space is completed effectively. Each cover generates an RBF hidden node and the number of RBF hidden nodes is increased incrementally; the bold lines represent the first coverage region in each pattern class, which can be seen as the densest region learned by the potential function clustering approach. With the change of the initial width, the number of RBF hidden nodes and the node parameters change accordingly. On the basis of the potential function clustering used to generate initial RBF hidden nodes, the form of heterogeneous sample repulsive force further ensures that each initial RBF hidden node is adjusted to a suitable position. In this way, the optimal coverage of the training sample space can be completed, and the RBF centers, widths and number of RBF hidden nodes can be effectively estimated.

Fig 6. Double moon classification problem.

(A) Double moon data set (B) Classifying result of SAHRBF-BP.

https://doi.org/10.1371/journal.pone.0164719.g006

Fig 7. Using different width parameters to cover the training sample space for Double moon classification problem.

(A) σ = 2 (B) σ = 3 (C) σ = 4.

https://doi.org/10.1371/journal.pone.0164719.g007

In Fig 8, when the number of training samples changes, KMRBF-BP needs fewer RBF hidden neurons than KMRBF and achieves a higher classifying accuracy. These results demonstrate that the hybrid RBF-BP network structure is effective, as it improves the classifying accuracy and reduces the dependence on the original sample space mapping. Note that the number of hidden neurons in KMRBF and KMRBF-BP is selected manually; after changing the number of hidden neurons several times, the one with the highest overall validation accuracy is selected as the suitable number of hidden neurons. The classifying accuracy of SAHRBF-BP is comparable with that of KMRBF-BP; however, the number of RBF hidden nodes in SAHRBF-BP is generated adaptively. The classifying accuracy of SAHRBF-BP clearly outperforms SGBP, which further shows the effectiveness of SAHRBF-BP; the reason is that the structure-adaptive RBF network can improve the separability of the sample space. Compared to GAP-RBF, SAHRBF-BP can better adapt to changes in the sample space: the classifying accuracy of SAHRBF-BP outperforms GAP-RBF while requiring fewer RBF hidden nodes. The classifying accuracy of SAHRBF-BP is comparable with that of SVM; however, the number of RBF hidden nodes in SAHRBF-BP is clearly smaller than the number of support vectors in SVM. Thus, SAHRBF-BP can adapt to the training sample space well, achieving a high classifying accuracy as well as a compact network size for the RBF hidden layer.

Fig 8. Performance comparisons between SAHRBF-BP and other algorithms on Double moon data set.

(A) Number of training samples- Number of RBF hidden neurons/ support vectors (B) Number of training samples- Overall classifying accuracy.

https://doi.org/10.1371/journal.pone.0164719.g008

Artificial binary class data set: The Concentric circle classification problem.

The Concentric circle data set and the classifying results of SAHRBF-BP are shown in Fig 9(A) and 9(B), respectively. Compared to the Double moon classification problem, the Concentric circle classification problem is more complex and can thus be used to measure the unique features of SAHRBF-BP. The classification results illustrate that SAHRBF-BP can still provide a superior classification surface for the Concentric circle classification problem.

Fig 9. Concentric circle classification problem.

(A) Concentric circle data set (B) Classifying result of SAHRBF-BP.

https://doi.org/10.1371/journal.pone.0164719.g009

Fig 10(A)–10(C) show that when the initial width takes different values, the adaptive coverage of the training sample space is completed effectively. Each cover generates an RBF hidden node and the number of RBF hidden nodes is increased incrementally; the bold lines represent the first coverage region in each pattern class. With the change of the initial width, the number of RBF hidden nodes and the node parameters change accordingly. For each generated initial RBF hidden node, the form of heterogeneous sample repulsive force further ensures that the node is adjusted to a suitable position. Thus, the optimal coverage of the training sample space can be completed, and the RBF centers, widths and number of RBF hidden nodes can be effectively estimated.

Fig 10. Using different width parameters to cover the training sample space for Concentric circle classification problem.

(A) σ = 0.1 (B) σ = 0.2 (C) σ = 0.3.

https://doi.org/10.1371/journal.pone.0164719.g010

Fig 11(A) and 11(B) demonstrate that when the number of training samples changes, KMRBF-BP needs fewer RBF hidden neurons than KMRBF and achieves a higher classifying accuracy. Thus the hybrid RBF-BP network architecture improves the classifying accuracy and reduces the dependence on the original sample space mapping. Note that in KMRBF and KMRBF-BP, when the number of training samples changes, the number of RBF hidden neurons has to be adjusted manually, otherwise the classification accuracy suffers. Compared with KMRBF and KMRBF-BP, SAHRBF-BP adapts to the training sample space well: when the number of training samples changes, the number of RBF hidden neurons in SAHRBF-BP changes accordingly, and a higher classifying accuracy is obtained. Compared to GAP-RBF, SAHRBF-BP can better adapt to changes in the sample space. The classifying accuracy of SAHRBF-BP clearly outperforms GAP-RBF and ELM. When the number of training samples is more than 500, the classifying accuracy of SAHRBF-BP outperforms SVM. In this way, the effectiveness of SAHRBF-BP is further verified.

Fig 11. Performance comparisons between SAHRBF-BP and other algorithms on Concentric circle data set.

(A) Number of training samples- Number of RBF hidden neurons/ support vectors (B) Number of training samples- Overall classifying accuracy.

https://doi.org/10.1371/journal.pone.0164719.g011

For the Concentric circle classification problem, SAHRBF-BP achieves a higher classifying accuracy than the other training SLFNs algorithms; however, there are still a certain number of incorrect predictions in SAHRBF-BP. In Fig 9(B), we can see that the incorrectly predicted samples generally appear in the boundary region. A main reason is that, due to the complexity of the sample set, it is often difficult to achieve an ideal coverage of the sample space. As shown in Fig 10(A)–10(C), if the RBF hidden nodes of the current class cover heterogeneous samples, classification performance may be reduced. From this point of view, to obtain higher classification performance, it is necessary to optimize each generated RBF hidden node. In SAHRBF-BP, the combination of potential function clustering and heterogeneous samples repulsive force can adaptively determine the number of RBF hidden nodes and optimize the node parameters, which ensures that each generated RBF hidden node covers the samples of the current class as much as possible, while covering heterogeneous samples as little as possible.

In Fig 11(B), when the number of training samples is reduced, classification performance is reduced as well. In particular, when the number of training samples is 200, the overall classifying accuracy of SAHRBF-BP is slightly lower than that of SVM. Fig 12 further shows that inadequate training samples lead to a reduction in classification accuracy. For complex data sets, when the number of training samples is reduced, the randomness of the training samples in the sample space is enhanced, so they cannot effectively reflect the actual distribution of the entire data set, which may lead to some degree of failure of the potential function clustering and heterogeneous samples repulsive force methods.

Fig 12. The learning effect of training and testing sample space when the number of training samples is 200.

(A) Covering effect of training sample space (B) Classifying result of SAHRBF-BP.

https://doi.org/10.1371/journal.pone.0164719.g012

Benchmark binary class classification problems.

In this section, 21 benchmark binary class low dimensional data sets and 25 benchmark binary class high dimensional data sets are used to evaluate the performance of SAHRBF-BP. Fig 13 shows the overall testing accuracy comparisons between SAHRBF-BP and other learning algorithms. For binary class low dimensional data sets, the overall testing accuracy of SAHRBF-BP is higher than other learning algorithms on most data sets, except for Fertility(B08), Haberman(B10) and Planning(B17) data sets. For binary class high dimensional data sets, the overall testing accuracy of SAHRBF-BP is higher than other learning algorithms on Breast(diagnostic)(B22), Chronic(B24), Climate(B25), Congressional(B26), First order(B27), German(B30), Ionosphere(B31), Retinopathy(B38), Spambase(B42) and Vote(B46) data sets. The overall testing accuracy of SAHRBF-BP is comparable with SVM on Mushrooms(B33), Musk2(B35), Seismic bumps(B40), Splice(B44) and Svmguide3(B45) data sets, however, the overall testing accuracy of SAHRBF-BP is lower than SVM on Musk1(B34), Parkinsons(B36), QSAR(B37), Secom(B39) and Sonar(B41) data sets, lower than ELM and SaE-ELM on Hill(with noise)(B29) data set, lower than SVM, ELM and SaE-ELM on Hill(B28) data set, and lower than SGBP and SVM on Breast(prognostic)(B23) and Spect heart(B43) data sets.

Fig 13. Overall testing accuracy comparisons between SAHRBF-BP and other algorithms on benchmark binary class data sets.

(A) Binary class low dimensional data sets (B) Binary class high dimensional data sets.

https://doi.org/10.1371/journal.pone.0164719.g013

Tables 4–6 give performance comparisons between SAHRBF-BP and other learning algorithms, where a few cases of success and failure are described in more detail. In Tables 4 and 5, the overall and average testing accuracies of SAHRBF-BP are clearly higher than those of SGBP. For Blood data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.8%, and SaE-ELM, ELM, MRAN by approximately 1.5%-3.3%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.2%, and SaE-ELM, ELM, MRAN by approximately 1.4%-4.5%.

Table 4. A few cases of success in SAHRBF-BP compared with other learning algorithms on benchmark binary class data sets.

https://doi.org/10.1371/journal.pone.0164719.t004

Table 5. A few cases of success in SAHRBF-BP compared with other learning algorithms on benchmark binary class data sets.

https://doi.org/10.1371/journal.pone.0164719.t005

Table 6. A few cases of failures in SAHRBF-BP compared with other learning algorithms on benchmark binary class data sets.

https://doi.org/10.1371/journal.pone.0164719.t006

For Diabetes data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.7%, and SaE-ELM, ELM, MRAN by approximately 2.2%-7%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 5.2%, and SaE-ELM, ELM, MRAN by approximately 2.9%-7.9%.

For Heart disease data set, the overall and average testing accuracies of SAHRBF-BP outperform SaE-ELM, ELM, MRAN by approximately 0.6%-5.7%. The average testing accuracy of SAHRBF-BP is approximately 2.2% lower than that of SVM; however, the overall testing accuracy is higher than that of SVM by approximately 1.4%.

For Mammographic data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.9%, and SaE-ELM, ELM, MRAN by approximately 3.2%-7%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 3.3%-8.5%.

For Monk1 data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.1%, and SaE-ELM, ELM, MRAN by approximately 6.1%-10.1%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.6%, and SaE-ELM, ELM, MRAN by approximately 6.5%-10.4%.

For Monk2 data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.3%, and SaE-ELM, ELM, MRAN by approximately 9%-11.3%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.5%, and SaE-ELM, ELM, MRAN by approximately 9.1%-13%.

For Svmguide1 data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.6%, and SaE-ELM, ELM, MRAN by approximately 2.1%-6%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 1.4%-7.1%.

For Vertebral data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.3%, and SaE-ELM, ELM, MRAN by approximately 2.7%-6%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.4%, and SaE-ELM, ELM, MRAN by approximately 2%-6.7%.

For Wholesale data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.3%, and SaE-ELM, ELM, MRAN by approximately 1.8%-4.6%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.1%, and SaE-ELM, ELM, MRAN by approximately 2.1%-6.3%.

For Climate data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.1%, and SaE-ELM, ELM, MRAN by approximately 1%-5.2%. The average testing accuracy of SAHRBF-BP is comparable with SVM, and outperforms SaE-ELM, ELM, MRAN by approximately 0.5%-5.7%.

For Congressional data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.1%, and SaE-ELM, ELM, MRAN by approximately 1%-4.2%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.4%, and SaE-ELM, ELM, MRAN by approximately 1.3%-6.3%.

For German data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 5.2%, and SaE-ELM, ELM, MRAN by approximately 5.1%-14.2%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 5.1%, and SaE-ELM, ELM, MRAN by approximately 8.9%-15.2%.

For Ionosphere data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.2%, and SaE-ELM, ELM, MRAN by approximately 2.8%-9.2%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.2%, and SaE-ELM, ELM, MRAN by approximately 5.9%-11.7%.

For Retinopathy data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.9%, and SaE-ELM, ELM, MRAN by approximately 2.6%-6.7%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.6%, and SaE-ELM, ELM, MRAN by approximately 4%-6.2%.

For Spambase data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 1.4%-8%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.6%, and SaE-ELM, ELM, MRAN by approximately 1.2%-6.2%.

For Vote data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 0.8%-5%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1%, and SaE-ELM, ELM, MRAN by approximately 1.2%-6.1%.

In Table 6, for Fertility data set, the overall testing accuracy of SAHRBF-BP is lower than SGBP by about 9%, lower than SVM by about 3.5%, SaE-ELM by about 2.5%, and ELM by about 2%. The average testing accuracy of SAHRBF-BP is lower than SGBP by about 8.1%, and lower than SVM, SaE-ELM, ELM by about 4.5%, 3.4%, 2.8%, respectively. For Planning data set, the overall and average testing accuracies are lower than SGBP and SVM by about 2.6%-3.2%. For Hill data set, the overall testing accuracy is lower than ELM, SVM and SaE-ELM by about 2.3%-2.9%, and the average testing accuracy of SAHRBF-BP is lower than ELM, SVM, SaE-ELM by about 1.7%-3.1%. For LSVT data set, the overall testing accuracy is lower than ELM by about 0.6%, SaE-ELM by about 2%, and SVM by about 13.3%. The average testing accuracy of SAHRBF-BP is lower than ELM, SaE-ELM and SVM by about 0.7%, 1%, 14.8%, respectively. For Musk1 data set, the overall and average testing accuracies are lower than SVM by about 8.8% and 9.4%, respectively. For QSAR data set, the overall and average testing accuracies are lower than SVM by about 1.4% and 1.7%, respectively. For Sonar data set, the overall testing accuracy of SAHRBF-BP is lower than SVM by about 3.1%, and the average testing accuracy of SAHRBF-BP is lower than SVM by about 8.4%. For Spect heart data set, the overall and average testing accuracies are clearly lower than SGBP. The overall testing accuracy of SAHRBF-BP is lower than SaE-ELM and SVM by about 0.5% and 2.5%, respectively. The average testing accuracy of SAHRBF-BP is lower than SaE-ELM and SVM by about 2% and 6.7%, respectively.

From Fig 13 and Tables 4–6, we can see that for most binary class low dimensional data sets, the classification accuracy of SAHRBF-BP is higher than that of the other learning algorithms. However, for binary class high dimensional data sets, the classification accuracy of SAHRBF-BP clearly decreases on a number of data sets, such as the LSVT(B32), Musk1(B34), QSAR(B37), Secom(B39) and Sonar(B41) data sets. The main reason is that, with the increase of dimension, the spatial distribution of the samples becomes relatively sparse; especially for data sets with a small number of training samples, the randomness of the training samples in the sample space is greatly enhanced, so they cannot effectively reflect the actual distribution of the entire data set, which leads to a certain degree of failure of the potential function clustering and heterogeneous samples repulsive force methods. Thus, the classification performance of SAHRBF-BP is reduced to varying degrees.

Benchmark multi-class classification problems.

In this section, 18 multi-class low dimensional data sets and 17 multi-class high dimensional data sets are used to evaluate the performance of SAHRBF-BP. Fig 14 shows the overall testing accuracy comparisons between SAHRBF-BP and other learning algorithms. For multi-class low dimensional data sets, the overall testing accuracy of SAHRBF-BP is comparable with SVM on the Teaching(M13) data set, is lower than SVM on the Breast tissue(M02) and Hayes-Roth(M08) data sets, lower than SaE-ELM and ELM on the Glass(M07) data set, and lower than SaE-ELM, SVM and ELM on the Iris(M09) data set; however, the overall testing accuracy of SAHRBF-BP is higher than that of the other learning algorithms on the remaining 13 data sets.

Fig 14. Overall testing accuracy comparisons between SAHRBF-BP and other algorithms on benchmark multi-class data sets.

(A) Multi-class low dimensional data sets (B) Multi-class high dimensional data sets.

https://doi.org/10.1371/journal.pone.0164719.g014

For multi-class high dimensional data sets, the overall testing accuracy of SAHRBF-BP is higher than other learning algorithms on Firm(M22), Image segmentation(M25), Landsat(M26), Steel(M30), Turkiye(M31), Vehicle silhouettes(M32), Waveform1(M33) and Waveform2(M34) data sets. The overall testing accuracy of SAHRBF-BP is comparable with SVM on Gas(2012)(M24) data set, and SVM, ELM, SaE-ELM on Optical digits(M28) and Semeion(M29) data sets, however, the overall testing accuracy of SAHRBF-BP is lower than SVM on Air(M19), DNA(M21), Forest(M23) and Libras(M27) data sets, and ELM, SaE-ELM, SVM on Dermatology(M20) and Zoo(M35) data sets.

Tables 7–9 give performance comparisons between SAHRBF-BP and other learning algorithms on some of the multi-class data sets, where the overall and average testing accuracies of SAHRBF-BP are clearly higher than those of SGBP. In Tables 7 and 8, for Balance data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 1%-6%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.3%, and SaE-ELM, ELM, MRAN by approximately 1.5%-7.5%.

Table 7. A few cases of success in SAHRBF-BP compared with other learning algorithms on benchmark multi-class data sets.

https://doi.org/10.1371/journal.pone.0164719.t007

Table 8. A few cases of success in SAHRBF-BP compared with other learning algorithms on benchmark multi-class data sets.

https://doi.org/10.1371/journal.pone.0164719.t008

Table 9. A few cases of failures in SAHRBF-BP compared with other learning algorithms on benchmark multi-class data sets.

https://doi.org/10.1371/journal.pone.0164719.t009

For Cardiotocography data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.3%, and SaE-ELM, ELM, MRAN by approximately 2%-9.8%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.8%, and SaE-ELM, ELM, MRAN by approximately 4%-9.7%.

For Knowledge data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.5%, and SaE-ELM, ELM, MRAN by approximately 2.9%-4.9%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1%, and SaE-ELM, ELM, MRAN by approximately 3%-6.1%.

For Seeds data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.6%, and SaE-ELM, ELM, MRAN by approximately 8.8%-11.7%.

For Wine data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.2%, and SaE-ELM, ELM, MRAN by approximately 0.4%-8.4%.

For Yeast data set, the overall and average testing accuracies of SAHRBF-BP outperform SVM by approximately 2.5%, and SaE-ELM, ELM, MRAN by approximately 4.3%-6.3%.

For Image segmentation data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.7%, and SaE-ELM, ELM, MRAN by approximately 1.1%-6.7%.

For Landsat data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.7%, and SaE-ELM, ELM, MRAN by approximately 1.1%-4.7%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1%, and SaE-ELM, ELM, MRAN by approximately 1.4%-6.4%.

For Steel data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 2.2%, and SaE-ELM, ELM, MRAN by approximately 0.9%-9.1%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 2%, and SaE-ELM, ELM, MRAN by approximately 1.2%-10%.

For Turkie data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.4%, and SaE-ELM, ELM, MRAN by approximately 2%-11%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.4%, and SaE-ELM, ELM, MRAN by approximately 1.9%-10.9%.

For Vehicle silhouettes data set, the overall and average testing accuracies of SAHRBF-BP are clearly higher than MRAN. The overall testing accuracy outperforms SVM by approximately 4.4%, and SaE-ELM, ELM by approximately 0.5%-1.6%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 8.9%, and SaE-ELM, ELM by approximately 0.4%-1.5%.

For Waveform1 data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.2%, and SaE-ELM, ELM, MRAN by approximately 0.8%-4.2%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.9%, and SaE-ELM, ELM, MRAN by approximately 0.9%-4.9%.

In Table 9, for Breast tissue data set, the overall and average testing accuracies of SAHRBF-BP are lower than those of SVM by approximately 1.3% and 1.4%, respectively. For Glass data set, the overall testing accuracy of SAHRBF-BP is lower than that of ELM by approximately 1.9%, SaE-ELM by approximately 2.4%, and SVM by approximately 0.3%. The average testing accuracy of SAHRBF-BP is lower than those of ELM, SaE-ELM and SVM by approximately 3%, 3.6% and 1.6%, respectively. For Hayes-Roth data set, the overall and average testing accuracies of SAHRBF-BP are lower than those of SVM by approximately 1.4% and 3.2%, respectively. For Air data set, the overall testing accuracy of SAHRBF-BP is lower than that of SVM by approximately 1.2%, and the average testing accuracy is lower than those of SVM and SaE-ELM by approximately 1.7% and 0.8%, respectively. For Forest data set, the overall and average testing accuracies of SAHRBF-BP are lower than those of SVM by approximately 2.3% and 5.2%, respectively. For Libras data set, the overall testing accuracy of SAHRBF-BP is lower than that of SVM by approximately 1.6%.

Similar to the benchmark binary class data sets, the overall classification accuracy of SAHRBF-BP is higher than that of the other learning algorithms on most low dimensional multi-class data sets, whereas it decreases on a number of high dimensional multi-class data sets. However, even for high dimensional multi-class data sets, a relatively high classification accuracy can be obtained when the number of training samples is sufficient. The main reason is that enough training samples largely offset the randomness of the sample distribution; under these circumstances, the methods of potential function clustering and heterogeneous samples repulsive force remain valid.

Benchmark large number of samples classification problems.

In this section, 27 large number of samples data sets are used to evaluate the performance of SAHRBF-BP. Fig 15 shows the overall testing accuracy comparisons between SAHRBF-BP and other learning algorithms. For these data sets, the overall testing accuracy of SAHRBF-BP is comparable with that of the other learning algorithms on A1a(adult)(L01) and A6a(adult)(L02) data sets, and with that of SVM, SaE-ELM and ELM on Cod_rna(L15), Credit(L16), Gas(2013)(L18), Record(L23), Sensorless(L24), Skin(L25) and Shuttle(L26) data sets. The overall testing accuracy of SAHRBF-BP is slightly lower than that of SaE-ELM and ELM on Letter(L20) data set. For the other data sets, the overall testing accuracy of SAHRBF-BP is higher than that of the other learning algorithms to varying degrees.

Fig 15. Overall testing accuracy comparisons between SAHRBF-BP and other algorithms on large number of samples data sets.

https://doi.org/10.1371/journal.pone.0164719.g015

Table 10 gives performance comparisons between SAHRBF-BP and other learning algorithms on partial large number of samples data sets. The overall and average testing accuracies of SAHRBF-BP are clearly higher than those of SGBP on each of these data sets, except for A1a(adult) and A6a(adult) data sets.

For Action2(normal)(L04) data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.7%, and SaE-ELM, ELM, MRAN by approximately 1.1%-9.6%.

For Action1(aggressive) data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.8%, and SaE-ELM, ELM, MRAN by approximately 1.7%-2.1%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.2%, and SaE-ELM, ELM, MRAN by approximately 1.8%-2.5%.

For Action2(abnormal detection) data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.2%, and SaE-ELM, ELM, MRAN by approximately 1%-8.7%.

For Action3(abnormal detection) data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.4%, and SaE-ELM, ELM, MRAN by approximately 1.2%-8.2%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.9%, and SaE-ELM, ELM, MRAN by approximately 1%-8.2%.

For Ijcnn1 data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 0.7%, and SaE-ELM, ELM, MRAN by approximately 1.3%-11%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.1%, and SaE-ELM, ELM, MRAN by approximately 2%-11.5%.

For Letter data set, the overall and average testing accuracies of SAHRBF-BP outperform SVM by approximately 0.5%, and MRAN by approximately 9.6% and 9.8%, respectively. However, the overall and average testing accuracies of SAHRBF-BP are lower than those of SaE-ELM by approximately 0.4%, and those of ELM by approximately 0.6% and 0.5%, respectively.

For Occupancy data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.2%, and SaE-ELM, ELM, MRAN by approximately 1%-8.7%.

For Action3(abnormal detection) data set, the overall testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.4%, and SaE-ELM, ELM, MRAN by approximately 2.2%-8.5%. The average testing accuracy of SAHRBF-BP outperforms SVM by approximately 1.5%, and SaE-ELM, ELM, MRAN by approximately 2.7%-8.6%.

Table 10. Performance comparisons between SAHRBF-BP and other learning algorithms on partial large number of samples data sets.

https://doi.org/10.1371/journal.pone.0164719.t010

From Fig 15 and Table 10, we can see that for large number of samples data sets, the classification accuracy of SAHRBF-BP is generally higher than that of the other learning algorithms. A sufficient number of training samples can effectively reflect the actual distribution of the entire data set, so the superiority of potential function clustering and heterogeneous samples repulsive force can be fully demonstrated.

Discussion

Selection of the initial width parameters for SAHRBF-BP.

The width parameters can be used to control the classification accuracy and generalization performance. To optimize the coverage of each class of samples, the center-adjustment and width-adjustment strategies are combined. When an initial width is given, for each generated RBF hidden node the center is iteratively adjusted to a suitable position, and the width is then adjusted only once. To reduce the range of initial width values, we apply a preprocessing step to the sample space: for all benchmark classification problems, the inputs to each algorithm are scaled to fall between -1 and +1.
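As an illustration of this preprocessing step, the sketch below rescales each feature to [-1, +1]; the function name and the use of NumPy are our own assumptions, since the paper does not prescribe a particular implementation.

```python
import numpy as np

def scale_to_unit_range(X_train, X_test):
    """Linearly rescale each feature to [-1, +1] using the training-set range.

    The transform is learned on the training data and applied unchanged to the
    test data, so test features may fall slightly outside [-1, +1].
    """
    x_min = X_train.min(axis=0)
    x_max = X_train.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard against constant features
    rescale = lambda X: 2.0 * (X - x_min) / span - 1.0
    return rescale(X_train), rescale(X_test)
```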

In addition, the initial width σ and the minimum width σmin are related to each other. According to Eq (16), the adjusted width lies in the range between σmin and σ, where σmin ∈ {σmin | σ − ϑ ≤ σmin < σ, σmin > 0}. To guarantee the generalization performance, here we set ϑ = 0.2. Thus, when the initial width σ is given, the minimum width σmin can be determined accordingly. For example, if σ = 0.5, σmin can be selected from the set {σmin | σ − 0.2 ≤ σmin < σ}. To simplify this case, σmin can be selected from the set {0.3, 0.4}, and the value with the lowest validation error is taken as the minimum width.
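A minimal sketch of this selection rule follows; build_and_validate is a hypothetical stand-in for training SAHRBF-BP with a given (σ, σmin) pair and returning its validation error.

```python
def select_min_width(sigma, candidates, build_and_validate, theta=0.2):
    """Choose sigma_min from {s : sigma - theta <= s < sigma, s > 0},
    keeping the candidate with the lowest validation error."""
    feasible = [s for s in candidates if s > 0 and sigma - theta <= s < sigma]
    if not feasible:
        raise ValueError("no candidate satisfies the width constraint")
    errors = {s: build_and_validate(sigma, s) for s in feasible}
    return min(errors, key=errors.get)

# Example from the text: sigma = 0.5, theta = 0.2, candidates {0.3, 0.4}.
# sigma_min = select_min_width(0.5, [0.3, 0.4], build_and_validate)
```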

Effect of the initial width parameters on SAHRBF-BP.

When the initial width changes, the number of generated RBF hidden nodes and the node parameters change accordingly. Here Diabetes, Heart disease, Ionosphere, Image segmentation and Vehicle silhouettes data sets are used to evaluate the effect of the initial width parameters on SAHRBF-BP. In Fig 16, when the initial width value is too small, the overall classification accuracy is poor and the network size of the RBF hidden layer is large; e.g., for Heart disease data set, when σ = 0.2, the number of generated RBF hidden neurons is 151, equal to the number of training samples.

thumbnail
Fig 16. Effect of initial width parameters on SAHRBF-BP.

(A) σ vs. overall testing accuracy. (B) σ vs. number of RBF hidden nodes.

https://doi.org/10.1371/journal.pone.0164719.g016

This result demonstrates that a corresponding RBF hidden node is established at each training sample and the generated RBF hidden neurons do not cover any other samples; thus the methods of potential function clustering and heterogeneous samples repulsive force are invalid and the overall classification accuracy is poor in this case.

Thus, to achieve effective coverage of the training sample space in SAHRBF-BP, a suitable initial width parameter should be provided so that a proper number of RBF hidden neurons is generated to cover the sample space. Note that the number of generated RBF hidden neurons should not be close to the number of training samples; otherwise, SAHRBF-BP becomes invalid.

When the value of the initial width falls within a suitable range, the number of generated RBF hidden nodes changes, but a relatively stable classification accuracy can be achieved; a sketch of such a width sweep is given below. For instance, for Image segmentation data set, when the initial width is between 0.5 and 0.9, the overall testing accuracy ranges from 91.54% to 92.23%. Once the initial width parameters are given, SAHRBF-BP learns the sample space automatically and generates different RBF hidden nodes to adapt to the sample space. Thus, SAHRBF-BP can counteract the effect of the initial width parameters to some extent.
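The sweep behind this observation (and Fig 16) can be written as the following sketch; train_fn and score_fn are hypothetical stand-ins for the SAHRBF-BP training and scoring routines, and the width grid is only illustrative.

```python
def width_sweep(train_fn, score_fn, X_train, y_train, X_test, y_test,
                widths=(0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9), theta=0.2):
    """Record the RBF hidden-layer size and overall testing accuracy obtained
    for each initial width, as plotted in Fig 16.

    train_fn(X, y, sigma, sigma_min) is assumed to return (model, n_rbf_nodes);
    score_fn(model, X, y) is assumed to return the overall testing accuracy.
    """
    results = []
    for sigma in widths:
        sigma_min = max(sigma - theta, 1e-3)  # keep the minimum width positive
        model, n_nodes = train_fn(X_train, y_train, sigma, sigma_min)
        results.append((sigma, n_nodes, score_fn(model, X_test, y_test)))
    return results
```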

Effect of the number of BP hidden nodes on SAHRBF-BP.

In SAHRBF-BP, the nonlinear SGBP algorithm is used to adjust the weights of the BP network component, which further improves the classification result. However, this method results in an increase in the number of parameters to be selected, particularly the selection of the number of BP hidden nodes. For this problem, we conduct experiments on five UCI data sets and discuss the results.

Fig 17 shows the effect of the number of BP hidden nodes on SAHRBF-BP. The results show that for binary class classification problems, such as Diabetes, Heart disease and Ionosphere data sets, when the number of BP hidden nodes ranges from 1 to 10, a relatively stable classification accuracy can be achieved.

Fig 17. Effect of the number of BP hidden nodes on SAHRBF-BP.

https://doi.org/10.1371/journal.pone.0164719.g017

For multi-class classification problems, such as Image segmentation and Vehicle silhouettes data sets, when the number of BP hidden nodes is greater than 4, the overall classification accuracy also does not change considerably. Thus, the dependence on the number of BP hidden nodes is reduced.

For the SAHRBF-BP classifier, the adaptive mapping results of the RBF hidden nodes are processed and used as the input of the BP network component, which improves the stability of the BP component and helps the BP algorithm avoid falling into local minima. When the sample set is more complex, a momentum term can be used to improve the BP algorithm further.
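A minimal sketch of the two ingredients described here, assuming the usual Gaussian basis for the RBF hidden nodes: the mapping that turns the RBF hidden-layer responses into inputs for the BP component, and a gradient step with a momentum term for the BP weights. The learning rate and momentum coefficient are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def rbf_features(X, centers, widths):
    """Gaussian RBF mapping: one feature per hidden node.

    X: (n_samples, n_dims); centers: (n_nodes, n_dims); widths: (n_nodes,).
    Returns an (n_samples, n_nodes) matrix fed to the BP network component.
    """
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * widths[None, :] ** 2))

def momentum_step(w, grad, velocity, lr=0.05, beta=0.9):
    """One gradient-descent update with a momentum term for a BP weight array."""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity
```

In this sketch the RBF layer is fixed after the structure-adaptive stage, and only the BP weights are updated with the momentum rule.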

Limitations for SAHRBF-BP.

Compared with other training SLFNs algorithms, SAHRBF-BP shows excellent classification performance on artificial and most benchmark data sets. However, SAHRBF-BP still has some limitations.

For complex classification problems, the number of training samples should not be too small if good classification results are to be achieved. Otherwise, the randomness of the training samples in the sample space increases and the samples cannot effectively reflect the actual distribution of the entire data set, especially for high dimensional data sets, causing the methods of potential function clustering and heterogeneous samples repulsive force to fail to some extent.

In addition, to ensure effective learning, the initial kernel width should not be too small; this is another limitation of SAHRBF-BP. Otherwise, the generalization performance of the classifier is greatly reduced and the generated RBF hidden nodes do not cover the heterogeneous samples, which leads to the failure of the heterogeneous samples repulsive force.

Conclusion

In this paper, a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy is presented. SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space. SAHRBF-BP makes use of the global information of each class of training samples to generate the initial RBF hidden nodes, and then makes full use of the neighborhood information of each hidden node to optimize the hidden node parameters. Thus, SAHRBF-BP solves the problem of dimension change when mapping the sample space onto the feature space. In addition, it effectively combines the stability of an RBF network and the generalization ability of a BP network to improve the classification performance. In this way, SAHRBF-BP simplifies the selection of the number of nodes in the BP hidden layer while further reducing the dependence on the space mapping in the RBF hidden layer. The optimized learning strategy can generate RBF hidden nodes incrementally, as well as adjust the centers and widths adaptively. The combination of potential function clustering with heterogeneous samples repulsive force improves the classification accuracy of each hidden node and, at the same time, ensures a compact network size for the RBF hidden layer.

The performance of SAHRBF-BP is compared with that of other training SLFNs algorithms, namely SGBP, KMRBF, KMRBF-BP, MRAN, GAP-RBF, SVM, ELM, and SaE-ELM, on different data sets. Among these algorithms, SVM remains the most stable classifier: its classification performance is generally maintained at a relatively high level on each data set. Overall, for high dimensional data sets with too few training samples, the classification performance of SVM clearly outperforms that of SAHRBF-BP. However, as the number of samples in a data set increases, the randomness of the training samples in the sample space is gradually eliminated. On the basis of effective learning of the sample space, SAHRBF-BP shows its unique advantages: on most low dimensional and large number of samples data sets, the classification performance of SAHRBF-BP outperforms that of the other training SLFNs algorithms.

In the future, we will focus on imbalanced data classification problems. For such problems, the minority-class samples and the samples in boundary regions, which contain more classification information, should be emphasized more; thus, how to measure and select these samples is particularly important. Further studies are needed to address these concerns.

Acknowledgments

The authors acknowledge the support provided by the National Natural Science Foundation of China (No. 61331021 and No. U1301251), and the Shenzhen Science and Technology Plan Project (JCYJ20130408173025036). The authors thank Dr. Hongguang Fan for helping to organize the data sets.

Author Contributions

  1. Conceptualization: WX JP.
  2. Data curation: HW.
  3. Formal analysis: JP.
  4. Funding acquisition: JP.
  5. Investigation: JP.
  6. Methodology: HW.
  7. Project administration: WX.
  8. Resources: HW.
  9. Supervision: WX.
  10. Validation: HW.
  11. Visualization: WX.
  12. Writing – original draft: HW.
  13. Writing – review & editing: JP.

References

  1. Moody J, Darken CJ. Fast learning in networks of locally-tuned processing units. Neural Computation. 1989; 1(2): 281–294.
  2. Lowe D. Characterising complexity by the degrees of freedom in a radial basis function network. Neurocomputing. 1998; 19(1-3): 199–209.
  3. Pedrycz W. Conditional fuzzy clustering in the design of radial basis function neural networks. IEEE Transactions on Neural Networks. 1998; 9(4): 601–612. pmid:18252484
  4. Staiano A, Tagliaferri R, Pedrycz W. Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering. Neurocomputing. 2006; 69(13-15): 1570–1581.
  5. Roh SB, Ahn TC, Pedrycz W. The design methodology of radial basis function neural networks based on fuzzy K-nearest neighbors approach. Fuzzy Sets and Systems. 2010; 161(13): 1803–1822.
  6. Ilonen J, Kamarainen JK, Lampinen J. Differential evolution training algorithm for feed-forward neural networks. Neural Processing Letters. 2003; 17(1): 93–105.
  7. Subudhi B, Jena D. Differential evolution and Levenberg-Marquardt trained neural network scheme for nonlinear system identification. Neural Processing Letters. 2008; 27(3): 285–296.
  8. Mountrakis G, Zhuang W. Integrating Local and Global Error Statistics for Multi-Scale RBF Network Training: An Assessment on Remote Sensing Data. PLoS ONE. 2012; 7(8): e40093. pmid:22876278
  9. Rubio-Solis A, Panoutsos G. Interval Type-2 Radial Basis Function Neural Network: A Modeling Framework. IEEE Transactions on Fuzzy Systems. 2015; 23(2): 457–473.
  10. Zhu BL, Zhang XM, Fatikow S, Wang NF. Bi-directional evolutionary level set method for topology optimization. Engineering Optimization. 2015; 47(3): 390–406.
  11. Oh SK, Kim WD, Pedrycz W, Joo SC. Design of K-means clustering-based polynomial radial basis function neural networks (pRBF NNs) realized with the aid of particle swarm optimization and differential evolution. Neurocomputing. 2012; 78(1): 121–132.
  12. Panchapakesan C, Palaniswami M, Ralph D, Manzie C. Effects of moving the centers in an RBF network. IEEE Transactions on Neural Networks. 2002; 13(6): 1299–1307. pmid:18244528
  13. Platt J. A resource-allocating network for function interpolation. Neural Computation. 1991; 3(2): 213–225.
  14. Bors AG, Gabbouj M. Minimal Topology for a Radial Basis Functions Neural Network for Pattern Classification. Digital Signal Processing. 1994; 4(3): 173–188.
  15. Kadirkamanathan V, Niranjan M. A function estimation approach to sequential learning with neural networks. Neural Computation. 1993; 5(6): 954–975.
  16. Constantinopoulos C, Likas A. An incremental training method for the probabilistic RBF network. IEEE Transactions on Neural Networks. 2006; 17(4): 966–974. pmid:16856659
  17. Lu YW, Sundararajan N, Saratchandran P. A sequential learning scheme for function approximation using minimal radial basis function neural networks. Neural Computation. 1997; 9(2): 461–478. pmid:9117909
  18. Huang GB, Saratchandran P, Sundararajan N. An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2004; 34(6): 2284–2292.
  19. Huang GB, Saratchandran P, Sundararajan N. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Transactions on Neural Networks. 2005; 16(1): 57–67. pmid:15732389
  20. Bortman M, Aladjem M. A Growing and Pruning Method for Radial Basis Function Networks. IEEE Transactions on Neural Networks. 2009; 20(6): 1030–1045. pmid:19447726
  21. Yu H, Reiner PD, Xie TT, Bartczak T, Wilamowski BM. An incremental design of radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems. 2014; 25(10): 1793–1803.
  22. Suresh S, Keming D, Kim HJ. A sequential learning algorithm for self-adaptive resource allocation network classifier. Neurocomputing. 2010; 73(16-18): 3012–3019.
  23. Karayiannis NB, Mi GWQ. Growing radial basis neural networks: Merging supervised and unsupervised learning with network growth techniques. IEEE Transactions on Neural Networks. 1997; 8(6): 1492–1506. pmid:18255750
  24. Salmeron M, Ortega J, Puntonet CG, Prieto A. Improved RAN sequential prediction using orthogonal techniques. Neurocomputing. 2001; 41(1): 153–172.
  25. Xie TT, Yu H, Hewlett J, Rzycki P, Wilamowski B. Fast and efficient second-order method for training radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems. 2012; 23(4): 609–619. pmid:24805044
  26. Savitha R, Suresh S, Sundararajan N. Metacognitive learning in a fully complex-valued radial basis function neural network. Neural Computation. 2012; 24(5): 1297–1328. pmid:22168554
  27. Subramanian K, Suresh S, Sundararajan N. A metacognitive neuro-fuzzy inference system (McFIS) for sequential classification problems. IEEE Transactions on Fuzzy Systems. 2013; 21(6): 1080–1095.
  28. Huang GB, Zhu QY, Siew CK. A new learning scheme of feedforward neural networks. 2004 IEEE International Joint Conference on Neural Networks (IJCNN). 2004. p. 985–999.
  29. Huang GB, Zhu QY, Siew CK. Extreme learning machine: theory and applications. Neurocomputing. 2006; 70(1-3): 489–501.
  30. Huang GB, Chen L, Siew CK. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks. 2006; 17(4): 879–892. pmid:16856652
  31. Huang GB, Chen L. Convex incremental extreme learning machine. Neurocomputing. 2007; 70(16-18): 3056–3062.
  32. Huang GB, Chen L. Enhanced random search based incremental extreme learning machine. Neurocomputing. 2008; 71(16-18): 3460–3468.
  33. Feng G, Huang GB, Lin QP, Gay R. Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Transactions on Neural Networks. 2009; 20(8): 1352–1357. pmid:19596632
  34. Zhu QY, Qin AK, Suganthan PN, Huang GB. Evolutionary extreme learning machine. Pattern Recognition. 2005; 38(10): 1759–1763.
  35. Cao JW, Lin ZP, Huang GB. Self-Adaptive Evolutionary Extreme Learning Machine. Neural Processing Letters. 2012; 36(3): 285–305.
  36. LeCun YA, Bottou L, Orr GB, Müller KR. Efficient BackProp. Neural Networks: Tricks of the Trade, Second Edition. 2012; LNCS 7700: 9–48.
  37. Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems. 1989; 2(4): 303–314.
  38. Meisel WS. Potential Functions in Mathematical Pattern Recognition. IEEE Transactions on Computers. 1969; C-18(10): 911–918.
  39. Haykin S. Neural Networks and Learning Machines, Third Edition. China Machine Press, China; 2009.
  40. Blake C, Merz C. UCI repository of machine learning databases. University of California, Irvine, Department of Information and Computer Sciences. Available: http://archive.ics.uci.edu/ml, 1998.
  41. Chang CC, Lin CJ. LIBSVM: A library for support vector machines. National Taiwan University, Taiwan, Department of Computer Science and Information Engineering. Available: http://www.csie.ntu.edu.tw/∼cjlin/libsvm, 2003.