Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data

Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets in which the positive samples constitute only a small minority. We investigate two meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to strengthen the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach processes the full dataset as a whole; the other splits the dataset up and adaptively processes it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods do not scale to larger datasets. The latter methods, which we call Adaptive Swarm Balancing Algorithms, deliver significant efficiency and effectiveness improvements on large datasets, where the first method fails. We also find them more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods yield more credible classifier performance and shorten the run time compared with the brute-force method.


Introduction
Big Data in medical fields, driven by hospital informatization, advances in treatment, and the extensive use of high-throughput equipment, has grown geometrically and attracted increasing attention. It is therefore desirable to improve the efficiency, accuracy and quality of medical data processing [1]. The sources of health data include clinical medical treatment, pharmaceutical companies, medical research, medical assistance applications, and more. Existing datasets provide important medical and health information for research topics such as the understanding of human genetic and disease systems [2] [3], medical and biological imaging [4], and classification and prediction in medical engineering [5].
Specifically, we investigate disease diagnosis in the context of data mining and classification. Disease diagnosis can be divided into two stages: we first obtain the diagnostic rules from clinical data with known labels, and then apply the rules to diagnose new patients. However, the high complexity, heterogeneous sources and uncertain reliability of medical data pose challenges for classification. For example, it is well known that, compared with normal and healthy persons, patients comprise only a small part of the total population. More serious diseases, such as cancer and AIDS, have even fewer cases. This yields an imbalanced dataset when we train classifiers on such data, which causes over-fitting to the majority class and biases the results. For instance, in the binary classification of a cancer dataset, the negative samples (healthy) are dominant, and the obtained model is likely to have little discriminative ability on the positive samples (patients). In practice, however, it is an unacceptable mistake to identify cancer patients as healthy people.
In our experiments to solve the imbalanced dataset classification problem, we combine SMOTE and meta-heuristic algorithms to create two methods, which respectively process the data as a whole and partition it into segments. The first method is simple parameter optimization of SMOTE by the meta-heuristic algorithms, which we name the Swarm Balancing Algorithms; our experiments show that it is effective for a static and relatively small imbalanced dataset. However, the experimental results also show that this first method does not perform well on big, highly imbalanced datasets. Therefore, the big data are divided into several data segments (we use the term windows for these segments) suitable for processing by the second method. To perform SMOTE and classification, the parameters for the data in each window are established on the basis of the previous window, and the algorithm eventually collects the performance of each window and averages the values. We call this method the Adaptive Swarm Balancing Algorithms. We observed in our experiments that this latter method is faster and more efficient.

Related work
In recent years, more and more researchers from different fields have begun to focus on imbalanced dataset research. This research can be considered at two different levels: the first concerns methods of data modification and optimization, and the second relates to improvement of the algorithms.

Data level methods
Random under-sampling [6] is a simple sampling technique in which part of the majority class data is randomly removed to reduce the imbalance ratio, i.e., the ratio between the minority and majority classes. With this method, however, it is easy to lose useful information in the majority class. Contrary to under-sampling, random over-sampling [7] increases the number of minority class samples to mitigate the imbalance of the dataset. However, its disadvantage is its focus on classification over learning [8]. Based on the over-sampling technique, the synthetic minority over-sampling technique (SMOTE) algorithm [9] is a commonly used algorithm that often obtains excellent results in imbalanced dataset classification. The principle of the algorithm is to analyze the feature space of the minority class samples, then synthesize new minority class data and combine it with the original dataset to reduce the imbalance ratio. Assume that the over-sampling rate is S, the number of minority class samples is M, and each minority sample is denoted x_i (i = 1, 2, 3, ..., M), belonging to S_min. Each x_i searches out its K nearest minority-class neighbors, and the algorithm randomly selects an x_t from these K neighbors and synthesizes new data:

x_new = x_i + rand[0, 1] * (x_t - x_i)   (1)

Eq (1) is applied to synthesize S times new samples, and the function rand[0, 1] produces a random number in the scope of 0 to 1. The two key parameters of this algorithm, S and K, influence the data synthesis and the classification performance. In our experiment, we use meta-heuristic algorithms to find the best and most suitable parameters for the SMOTE algorithm.
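For concreteness, the synthesis step of Eq (1) can be sketched in Python. This is a minimal, unoptimized sketch: the function name and the brute-force nearest-neighbor search are ours for illustration, not part of the original SMOTE implementation.

```python
import random

def smote(minority, S, K):
    """Synthesize S% new minority samples. Each synthetic point lies on
    the segment between a minority sample x_i and one of its K nearest
    minority-class neighbours x_t, as in Eq (1)."""
    n_new = int(len(minority) * S / 100)
    synthetic = []
    for _ in range(n_new):
        x_i = random.choice(minority)
        # K nearest minority-class neighbours of x_i (squared Euclidean distance)
        neighbours = sorted(
            (x for x in minority if x is not x_i),
            key=lambda x: sum((a - b) ** 2 for a, b in zip(x, x_i)),
        )[:K]
        x_t = random.choice(neighbours)
        gap = random.random()  # rand[0, 1]
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x_i, x_t)))
    return synthetic
```

With S = 100 the sketch doubles the minority class; larger S values over-sample further, which is exactly the degree of freedom the swarm algorithms later tune.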

Algorithm level methods
The research emphasis in imbalanced dataset classification is the minority class data. It is more meaningful for the algorithm to correctly identify the minority class than the majority class samples. In other words, the cost is higher if the classification algorithm misclassifies the minority class data. The cost-sensitive learning [8] approach assigns different misclassification costs to different classes. If a classifier misclassifies a minority class sample, it is "punished" in a manner that forces the classifier to increase its recognition rate of minority class samples. Meanwhile, on the basis of kernel processing methods, researchers have modified support vector machine classification in the field of machine learning, which also mitigates the imbalanced dataset classification problem [10]. The idea of ensemble learning methods is to use an algorithm to obtain a series of child classifiers from the training set and then, by integrating these child classifiers, to improve the classification accuracy. SMOTEBoost [11], an algorithm that combines the SMOTE and Boosting methods, is among the most effective ensemble learning methods.
In recent years, swarm intelligence algorithms have been widely used in different fields to process rough original datasets, especially for feature selection [12,13]. We use two different meta-heuristic algorithms, particle swarm optimization (PSO) [14] and the bat algorithm (BA) [15], to compare the optimization effects of the two. We choose the neural network algorithm, a representative and popular intelligent classification algorithm, to verify the classification performance in each iteration.
PSO [14] is a widely used meta-heuristic algorithm that imitates the foraging behavior of birds. The pseudo code below describes the process of PSO.

Pseudo code of PSO:
  For each particle
    Initialize particle and parameters
  End
  While maximum iterations or the termination mechanism is not satisfied
    For each particle
      Calculate and update particle velocity and position as Eqs (2) and (3)
    End
    For each particle
      Calculate the fitness function
      If the fitness value is better than the best fitness value (pBest) in history
        Set the current fitness value as the new pBest
      End
    End
    Select the gBest whose fitness value is the best in the population
  End

Assume there is a population X = (X_1, X_2, ..., X_n) of n particles in a D-dimensional search space. The i-th particle in this space is expressed as a vector X_i = (x_i1, x_i2, ..., x_iD)^T with D dimensions, and the position of the i-th particle in the search space represents a potential solution. Using the objective function, the program can calculate the fitness corresponding to the position X_i of each particle. The speed of the i-th particle is V_i = (V_i1, V_i2, ..., V_iD)^T, the personal best of each agent is P_i = (P_i1, P_i2, ..., P_iD)^T, and the global best of the population is P_g = (P_g1, P_g2, ..., P_gD)^T. In each iteration, every agent updates its speed and position toward its personal best and the population best [16]. Eqs (2) and (3) show the mathematical process:

V_id^(k+1) = w * V_id^k + c_1 * r_1 * (P_id^k - X_id^k) + c_2 * r_2 * (P_gd^k - X_id^k)   (2)
X_id^(k+1) = X_id^k + V_id^(k+1)   (3)

In Eq (2), w is the inertia weight; d = 1, 2, ..., D; i = 1, 2, ..., n; k is the current iteration; c_1 and c_2 are non-negative constants serving as velocity (acceleration) factors; r_1 and r_2 are random values between 0 and 1; and V_id is the particle speed [17].
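The update rules of Eqs (2) and (3) can be sketched as a minimal Python implementation. The interface (a function to minimize over a box), the default bounds and the parameter defaults (w = 0.72, c1 = c2 = 1.49, consistent with the "no larger than 2" convention used later in this paper) are our assumptions for illustration, not the paper's exact settings.

```python
import random

def pso(fitness, dim, n_particles=20, iters=50, w=0.72, c1=1.49, c2=1.49,
        lo=-5.0, hi=5.0):
    """Minimal PSO sketch: velocities are pulled toward each particle's
    personal best (pBest) and the swarm best (gBest)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])  # pull to pBest, Eq (2)
                           + c2 * r2 * (gbest[d] - X[i][d]))    # pull to gBest
                X[i][d] += V[i][d]                              # Eq (3)
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

Minimizing the sphere function sum(x^2) is a standard smoke test; in this paper the role of `fitness` is played by the classifier's Accuracy/Kappa evaluation over candidate (K, S) pairs.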
The other algorithm, BA [15], is a newer meta-heuristic algorithm that has already shown good results in research. It learns from the theory of echolocation in bats. The algorithm likewise assumes a bat population in a D-dimensional search space, and the following equations show how the position x_i and velocity v_i of each bat are updated in the t-th iteration:

f_i = f_min + (f_max - f_min) * beta   (4)
v_i^t = v_i^(t-1) + (x_i^(t-1) - x*) * f_i   (5)
x_i^t = x_i^(t-1) + v_i^t   (6)

In these three equations, beta is a random vector between 0 and 1, and x* is the current global best solution among the bats.

Pseudo code of BA [17]:
  Initialize the bat population and parameters
  While maximum iterations or the termination mechanism is not satisfied
    Generate new solutions by adjusting frequency and updating velocities and positions as Eqs (4) to (6)
    Evaluate the fitness and accept new solutions subject to loudness and pulse rate
    Rank the bats and select the best value in the population
  End
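Eqs (4) to (6) can likewise be sketched in Python. The local random walk around the best solution, the acceptance rule and all defaults (including loudness 0.5, matching the setting reported later) follow the standard BA description; the exact step sizes are our assumptions for illustration.

```python
import random

def bat_algorithm(fitness, dim, n_bats=20, iters=50, f_min=0.0, f_max=1.0,
                  loudness=0.5, pulse_rate=0.5, lo=-5.0, hi=5.0):
    """Minimal BA sketch: each bat's frequency f_i = f_min + (f_max - f_min)*beta
    steers its velocity relative to the current global best x*."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    F = [fitness(x) for x in X]
    best = min(range(n_bats), key=lambda i: F[i])
    x_star, f_star = X[best][:], F[best]
    for _ in range(iters):
        for i in range(n_bats):
            beta = random.random()
            freq = f_min + (f_max - f_min) * beta      # Eq (4)
            cand = X[i][:]
            for d in range(dim):
                V[i][d] += (X[i][d] - x_star[d]) * freq  # Eq (5)
                cand[d] = X[i][d] + V[i][d]              # Eq (6)
            if random.random() > pulse_rate:
                # local random walk around the current best solution
                cand = [x_star[d] + 0.01 * random.gauss(0, 1) for d in range(dim)]
            f = fitness(cand)
            if f < F[i] and random.random() < loudness:
                X[i], F[i] = cand, f
            if f < f_star:
                x_star, f_star = cand[:], f
    return x_star, f_star
```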
In recent years, many variants of the original PSO and BA have been proposed to improve search accuracy and efficiency, such as SEPSO [18], APSO [19] and FDR-PSO [20], as well as the self-adaptive bat algorithm [21], the hybrid bat algorithm [22] and the chaotic bat algorithm [23]. In the future we will adopt different versions of these meta-heuristic algorithms to expand our experiments.

Experiments and datasets
Differences in the sources and formats of datasets introduce complexity. In this paper, the health and medical datasets are divided into two kinds according to their size, which are processed respectively by the two methods, the Swarm Balancing Algorithms and the Adaptive Swarm Balancing Algorithms. Two experiments are therefore performed, as follows. The experimental results below show that the first method is more suitable for relatively small datasets but becomes invalid when the processed dataset is relatively big. As mentioned above, big data and the imbalanced classification problem are common in the health care field [24]. The second method was therefore proposed to handle big and highly imbalanced datasets.
For the optimizer, Table 1 lists the operating environments and the parameters of the two swarm algorithms. Since performance is sensitive to these parameters, they were carefully selected over several tests. In PSO, the two learning factors c1 and c2 are widely set to values equal to or smaller than 2. BA has more parameters: loudness and pulse rate influence how the bats search the neighborhood of the objectives, and we set their values to 0.5 and 1, respectively. The other two factors of BA, Qmin and Qmax, take the commonly used values of 0 and 1. Furthermore, the population sizes and the numbers of iterations of PSO and BA were the same.
All the software programs were coded in MATLAB version 2014a, and the computing environment for all experiments was a PC workstation (CPU: E5-2670 V2 @ 2.50 GHz, RAM: 128 GB).
Both of the following experiments used the 10-fold cross-validation method for testing. A dataset is split into 10 non-overlapping pieces; each of the ten pieces in turn is used as the testing dataset, with the remaining nine parts as the training dataset. The algorithms average the ten results to verify the performance. Note that in the second experiment, the size of the testing dataset (one-tenth of the original dataset) is bigger than the whole datasets used in the first experiment.
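The splitting scheme just described can be sketched as a simple index-based helper (our own illustration; the actual experiments presumably relied on the cross-validation utilities of the MATLAB environment described above):

```python
def ten_fold_splits(n_samples, n_folds=10):
    """Split sample indices into n_folds non-overlapping pieces; each piece
    serves once as the test set while the remaining nine form the training set."""
    indices = list(range(n_samples))
    folds = [indices[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        test = folds[k]
        train = [i for f in folds if f is not folds[k] for i in f]
        yield train, test
```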

Experiment 1: Swarm Balancing Algorithms with a moderately imbalanced dataset
We selected five datasets from the UCI repository [25] for our experiment. The imbalance ratios between majority class and minority class range from 2.05:1 to 70.3:1. The Surgery dataset, in S1 Data, contains data on lung surgery over 5 years. Some datasets related to bioassay data are imbalanced; we selected four of them, stored respectively in S2 Data to S5 Data, for testing the basic method in the first experiment. The main problem in imbalanced dataset classification is that the algorithm ignores the minority class data, and the trained classifier tends to predict the majority class with very good accuracy. The Kappa statistic is an index that helps to judge the confidence level of the classification results. It is a very important value for imbalanced classification problems, because even when the accuracy is high, the Kappa value [26,27] of the classification results can be close to zero or sometimes even negative. The Kappa index ranges from -1 to 1. As mentioned in the introduction, in disease diagnosis, classifying a patient as normal is completely unacceptable, and the consequences can be tragic.
As a monitor of the credibility of the classification results, a higher Kappa value indicates that the accuracy is more credible. The Kappa index is commonly divided into three levels to evaluate the credibility of classification accuracy [28,29]. At the top level, a Kappa value >=0.75 means that the classification accuracy is highly credible. A Kappa value from >=0.4 to <0.75 indicates general credibility. Finally, a Kappa value of <0.4 indicates a classification accuracy with low or no credibility. Our aim in this experiment is to ensure relatively high accuracy while maintaining the largest possible Kappa value. In the experimental process, we used PSO and BA to globally search for the two best parameters of SMOTE, K and S, with the neural network classification algorithm helping the meta-heuristics to estimate and check the two objectives according to the fitness in every iteration of the meta-heuristic algorithms. This means that in the experimental process (the same applies in experiment 2), accuracy was not the only objective; we also considered the Kappa index, as both are objectives in the optimization.
Solving imbalanced healthcare data by swarm based balancing algorithms. PLOS ONE | https://doi.org/10.1371/journal.pone.0180830 July 28, 2017

TP means true positive, TN means true negative, FP means false positive, FN means false negative, and P and N respectively stand for the positive and negative totals. From these quantities, Accuracy and Kappa are computed as:

Accuracy = (TP + TN) / (P + N)   (7)
Kappa = (p_o - p_e) / (1 - p_e),  where p_o = Accuracy and p_e = [(TP + FP)(TP + FN) + (FN + TN)(FP + TN)] / (P + N)^2   (8)

From the above equations, we can see that Kappa and Accuracy are inevitably linked. Thus both Eqs (7) and (8) are our objective functions.
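Both measures can be computed directly from the binary confusion matrix. The following helper (our own sketch, with Kappa in its standard chance-corrected form) illustrates why a degenerate majority-class classifier can score high accuracy yet zero Kappa:

```python
def accuracy_and_kappa(TP, TN, FP, FN):
    """Accuracy and Cohen's Kappa from a binary confusion matrix;
    P = TP + FN and N = FP + TN are the positive/negative totals."""
    total = TP + TN + FP + FN
    acc = (TP + TN) / total                     # observed agreement p_o
    # expected chance agreement p_e from the row/column marginals
    p_e = ((TP + FP) * (TP + FN) + (FN + TN) * (FP + TN)) / total ** 2
    kappa = (acc - p_e) / (1 - p_e)
    return acc, kappa
```

For example, a classifier that labels every sample negative on a 10:90 dataset scores 90% accuracy but Kappa = 0, i.e., no credibility at all.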
To find a balance between the two, we set a condition for both. Since we knew the credible range of Kappa, we fixed the floor of the Kappa value at the top two intervals, i.e., >=0.4 (this value of Kappa can be changed, just like a threshold or parameter value). The swarms regarded Kappa and Accuracy as the fitness functions for gradually finding the optimal position [30]. We initialized two control conditions in each generation of the meta-heuristic algorithms to maintain the authenticity of accuracy: the first condition was that the Kappa value must fall in the first or second level of the Kappa scope (Kappa >=0.4); secondly, after satisfying the first condition, the particles or bats needed to find the largest possible accuracy in the search space by controlling the two parameters.
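The two control conditions can be combined into a single fitness value, for example as follows. This particular encoding (an offset that makes any feasible solution outrank any infeasible one) is our own illustration of the idea, not the paper's exact formula:

```python
def constrained_fitness(accuracy, kappa, kappa_floor=0.4):
    """Two-condition objective: a candidate (K, S) pair is only rewarded
    for accuracy once its Kappa clears the credibility floor."""
    if kappa < kappa_floor:
        # infeasible: rank by Kappa, driving the swarm toward feasibility
        return kappa
    return 1.0 + accuracy  # feasible: rank by accuracy (offset keeps ordering)
```

Maximizing this value reproduces the stated behavior: a 99%-accuracy solution with Kappa 0.1 loses to a 70%-accuracy solution with Kappa 0.45.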
Pseudo code of the Swarm Balancing Algorithm [30] is given below. In general, the Kappa value increases as accuracy rises. The interval of S is from 10% to the value of the number of majority class samples divided by the number of minority class samples, and the scope of K is from 2 to the number of minority class samples. To assess the effect of our method, we used normal SMOTE for comparison: we synthesized minority class samples until the number of minority class data was equal to that of the majority class, obtaining a completely balanced dataset, with the default value K = 5. Furthermore, a contrast test was performed using the traditional class balancing algorithm on the same imbalanced datasets, with the neural network again used as the classification algorithm for verification. The principle of this algorithm is to change an imbalanced dataset into a completely balanced dataset by redistributing the majority class into the minority classes.

Experiment 2: Adaptive Swarm Balancing Algorithms with highly imbalanced datasets

The AID362 dataset (in S5 Data) from the first experiment and five other highly imbalanced datasets (in S6 Data to S10 Data), also selected from the Bioassay collection in UCI [25], were used in this experiment. Regardless of the number of features or the imbalance ratio, these datasets are much larger than those used in experiment 1. Compared with the datasets in the previous experiment, the datasets in this experiment have grown not only in overall size but also in the scale of the minority class. Therefore, we treat them as big data in our experiments. The largest dataset has 47,831 data instances. Table 2 lists the characteristics of these highly imbalanced datasets, which are high in volume and dimensionality.
The approach of the Adaptive Swarm Balancing Algorithms is to process the full dataset window by window, i.e., to break the big data into several parts that imitate a data flow, to improve the imbalanced dataset classification problem. In our experiment, due to considerations of data size and volume and to guarantee the integrity of the original dataset, we used three windows for each dataset (the window concept mentioned in the Introduction). Table 2 also shows the length of each window, which indicates how many instances of the dataset are present in each window.

  Initialize the floor value of Kappa
  Define the scope of K and S  // K in [K_Min, K_Max], S in [S_Min, S_Max]; K is the number of selected neighbors and S is the increased proportion of minority class data
  Define the limit value of Kappa T
  Load dataset
The principle, or working flow, of the Swarm Balancing Algorithm is presented in Fig 2, which clearly shows the important role of this algorithm in the experiment. Each of the data windows uses this method. From the figure, we can see that the length of Window X is X times that of Window 1, which means that as the data flows, the size of the data in the window becomes longer and longer. In Window 1, the initial parameters input into the Swarm Balancing Algorithms are S = 100% and K = 2, and the algorithms process the child dataset in Window 1 and generate suitable values S = A1% and K = B2 together with the current Accuracy and Kappa values.
Then A1% (for S) and B2 (for K) are used as the initial parameters for Window 2, and the process repeats, generating updated values of K, S, Accuracy and Kappa. This continues until the last window, Window X. Here, the algorithm only needs to use the parameters generated from Window (X-1) as its settings to perform the classification. The algorithm ultimately takes the average of each window's Accuracy and Kappa as the final result. Note that the processing direction and the data segmentation direction are opposite. The following is the pseudo code of the Adaptive Swarm Balancing Algorithm, the novel method proposed in this paper. Pseudo code of Adaptive Swarm Balancing Algorithm: In experiment 2 we also used SMOTE to synthesize minority class samples until a complete balance of the dataset was obtained, with the default value K = 5, as the contrast test. As mentioned above, the 10-fold cross-validation method was used to verify the experimental results.
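The adaptive scheme just described can be condensed into a short Python sketch. Here `optimize` stands for one run of a Swarm Balancing Algorithm on a window and is a hypothetical callback, and the initial values S0 = 100 (%) and K0 = 2 are the starting parameters described above:

```python
def adaptive_swarm_balancing(windows, optimize, S0=100, K0=2):
    """Sketch of the adaptive scheme: each window is optimized starting from
    the (S, K) found on the previous window; the per-window Accuracy and
    Kappa values are averaged for the final result.
    optimize(window, S, K) must return (S_new, K_new, accuracy, kappa)."""
    S, K = S0, K0
    accs, kappas = [], []
    for window in windows:
        S, K, acc, kappa = optimize(window, S, K)  # warm-start from last window
        accs.append(acc)
        kappas.append(kappa)
    return sum(accs) / len(accs), sum(kappas) / len(kappas)
```

The warm start is the key design choice: each window's search begins near a solution already known to work on the preceding, smaller segment, which is what makes the windowed runs so much cheaper than one global search.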

Results and discussion
Results of experiment 1

Our experiment collected performance results in terms of Accuracy, Kappa (Kappa statistic), Precision, Recall, F-measure, ROC area and the imbalance ratio between minority class and majority class. These results are presented in Table 3 to Table 7 for the different classification algorithms and data imbalance processing methods, respectively.
In Table 3, which shows the results for the Surgery dataset, the imbalance ratio (min/maj) in the original dataset is low, and the two key performance measures of Accuracy and Kappa are at two extremes: low accuracy, with Kappa values of zero, which means the classifier results are not credible. The performance of the Swarm Balancing Algorithm shows that our method pulls the classification results into a reliable range, although the accuracy suffers slightly. With the imbalance ratio index, we can observe changes in the degree of imbalance of a dataset, which shows whether our methods need to bring the dataset into a completely balanced state. The performance of SMOTE on the completely balanced data is also worse than that of the Swarm Balancing Algorithm. The other four datasets are subsets of a large and diversified bioassay dataset. For the Bioassay dataset AID439, Table 4 clearly shows that our method is better than the traditional classbalancer method, which is used as a comparison benchmark. Our approach simultaneously improves both Accuracy and Kappa. We find that the PSO Balancing approach synthesizes fewer minority class data to obtain better performance than the traditional SMOTE approach. The results for Bioassay dataset AID721 are shown in Table 5. It is hard to attain a Kappa value >0.4 with the two meta-heuristic algorithms, and although PSO and BA are almost equally effective in achieving a completely balanced dataset, their performance is still better than that of SMOTE. The results reflect that for this and the previous datasets, most of the time PSO is slightly better than BA in finding the two parameters with which to achieve higher performance, but it also needs to synthesize more minority samples. The results in Table 6 follow the pattern of Tables 3 to 5.
We find that although the class balancing method can bring the dataset into full balance, the performance is still very poor.
The performance of the neural network algorithm also remains poor, as it does not reach a Kappa value high enough for the credible stage of >=0.4. For the other complete datasets artificially generated by traditional SMOTE, the performance is much better than with the original and class balancing methods, but a gap remains with the two Swarm Balancing Algorithms in terms of the two important indexes. Although both PSO and BA can search for suitable parameters to make the Kappa value fall within the range of credibility, the PSO-Balancing algorithm is better than the BA-Balancing algorithm in both Accuracy and Kappa. From the perspective of the quantity of synthesis necessary, the BA-Balancing algorithm synthesizes fewer minority class samples than the PSO-Balancing algorithm does. As these results show, the Swarm Balancing Algorithms satisfied the expected goal, which is to attain the highest possible accuracy with a Kappa value falling within a credible and reasonable area. However, as the results for the AID362 dataset show, when this method meets a large and highly imbalanced dataset, the performance is not as good as on the small datasets, and a Kappa value of >=0.4 cannot be reached. Hence, to ensure that our basic concepts remain effective on highly imbalanced and larger datasets, the algorithm needs to be improved.

Results of experiment 2

The results in Tables 9 to 14 are ordered from best to worst. Table 8 shows the average performance in terms of Accuracy, Kappa and imbalance ratio (min/maj) from Tables 9 to 14. The data highlighted in bold are the classification results of the original dataset, the Swarm Balancing Algorithms and the Adaptive Swarm Balancing Algorithms. The other parts respectively reflect the results of Window 1, Window 2 and Window 3.
Because the windows become longer and longer from Window 1 to Window 3, the results of the Adaptive Swarm Balancing Algorithms become worse window by window; however, the final results are, on the whole, much better than those of the Swarm Balancing Algorithms. When the algorithms process a big dataset, traditional SMOTE is slightly better than the Swarm Balancing Algorithms, but the Adaptive BA-Balancing algorithm is better than both traditional SMOTE and the Adaptive PSO-Balancing algorithm on all three performance parameters, because it uses less synthetic minority class data and achieves higher accuracy with a higher Kappa value than the latter two. At the same time, it is easy to see that the problem with the AID362 dataset in experiment 1 has been solved in experiment 2, which shows that the Adaptive Swarm Balancing Algorithms are highly effective in processing a large imbalanced dataset. It must be mentioned that the performance on the 746 dataset, which contains the most data of all, is not good due to its large data size. However, we believe that if we choose more windows to process this dataset, the results can be improved. Fig 4 separates the two key performance parameters of Accuracy and Kappa from Table 8 and depicts them graphically in a bar diagram. It is clear that both approaches increase the Kappa value, but with the Adaptive Swarm Balancing Algorithms it is two times higher than with the Swarm Balancing Algorithms. Moreover, the Kappa value with the Swarm Balancing Algorithms still indicates non-credibility, whereas that with the Adaptive Swarm Balancing Algorithms indicates credibility and, thereby, more trustworthy accuracy.
Furthermore, in terms of average values, the performance parameters of traditional SMOTE are much worse than those of the two new Swarm Balancing Algorithms, indicating that optimization of the parameters is more important than full rebalancing of the dataset, and that a completely balanced dataset does not necessarily yield a better result. The Swarm Balancing Algorithms can also save time, compared with the brute-force method on the same dataset, when finding the best global parameters. For example, consider an imbalanced dataset with a majority-to-minority imbalance ratio of 10 and 20 minority class samples. Then S can take 9,801 different values from 100 to 9,900, and the scope of K is from 2 to 20, so the brute-force method must try a total of 186,219 combinations to find the most suitable values of S and K. For a large and highly imbalanced dataset, the brute-force method would need to try many more combinations. It is clear that the Adaptive Swarm Balancing Algorithms can save even more time. As Fig 5 shows, the Adaptive Swarm Balancing Algorithms take only one-third to one-fourth of the time required by the Swarm Balancing Algorithms. Meanwhile, it is also easy to see that PSO is faster than BA in the experiment. Therefore, with real-world data, the latter approach is better in performance, and it is more practical because it can flexibly process a large dataset in real time. Furthermore, application of the brute-force method to dataset 1608, the smallest of the six datasets, required 10279963.41876 sec. In comparison, the

Conclusion
Our methods clearly show their effectiveness on the imbalanced dataset classification problem across different dataset sizes. The meta-heuristic algorithms can select the parameters of SMOTE without prior knowledge and still obtain a relatively high accuracy with a Kappa value that falls within the credible range. To accommodate the different sizes of the datasets, we used two methods, respectively improving the processing of normal-size imbalanced datasets and large imbalanced datasets. The experiments indicate that the Swarm Balancing Algorithms are more suitable for a small dataset, and if we treat the big dataset as a data feed, the Adaptive Swarm Balancing Algorithms solve the imbalance problem of the dataset both faster and better. On the small and normal-size datasets, no matter which aspect is assessed with the neural network classification algorithm, PSO was better than BA. On large datasets, however, except for search time, for which PSO is still faster than BA, the other important performance parameters are better with BA than with PSO. The Adaptive Swarm Balancing Algorithms operate more like a process of constant iteration and learning, which is better suited to the actual problems found in health and medical datasets: because the number of diagnosed cases increases daily, with the gradual accumulation of cases the dataset will grow into a large dataset that needs to be processed as a data feed. Therefore, the Adaptive Swarm Balancing Algorithms can effectively solve the imbalanced data classification problem in the large datasets typically found in the health and medical field. These methods will help the classifier to accurately classify and identify patient data.