Abstract
Missing data is a prevalent problem that requires attention, as most data analysis techniques are unable to handle it. This is particularly critical in Multi-Label Classification (MLC), an application domain where only a few studies have investigated missing data. MLC differs from Single-Label Classification (SLC) by allowing an instance to be associated with multiple classes; movie classification is a didactic example, since a movie can be “drama” and “biography” simultaneously. One of the most common missing data treatment methods is data imputation, which seeks plausible values to fill in the missing ones. In this scenario, we propose a novel imputation method based on a multi-objective genetic algorithm for optimizing multiple data imputations, called Multiple Imputation of Multi-label Classification data with a genetic algorithm, or simply EvoImp. We applied the proposed method to multi-label learning and evaluated its performance using six databases with synthetically generated missing values, considering various missingness scenarios. The method was compared with other state-of-the-art imputation strategies, such as K-Means Imputation (KMI) and Weighted K-Nearest Neighbors Imputation (WKNNI). The results show that the proposed method outperformed the baselines in all scenarios, achieving the best Exact Match, Accuracy, and Hamming Loss values. This superiority was consistent across different dataset domains and sizes, demonstrating EvoImp's robustness. Thus, EvoImp represents a feasible solution for missing data treatment in multi-label learning.
Citation: Jacob Junior AFL, do Carmo FA, de Santana AL, Santana EEC, Lobato FMF (2024) EvoImp: Multiple Imputation of Multi-label Classification data with a genetic algorithm. PLoS ONE 19(1): e0297147. https://doi.org/10.1371/journal.pone.0297147
Editor: Mohammad A. Al-Mamun, West Virginia University, UNITED STATES
Received: May 29, 2023; Accepted: December 28, 2023; Published: January 19, 2024
Copyright: © 2024 Jacob Junior et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant files are available from the Zenodo database (Link: https://doi.org/10.5281/zenodo.7748933).
Funding: FMFL was financed in part by the National Council for Scientific and Technological Development (CNPq, Brazil) under Grant 147336/2020-1. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Missing data is ubiquitous in data analysis [1]. Its causes are diverse and related to the application domain, including drawbacks in data acquisition, measurement errors, sensor network problems, data migration failures, and unwillingness to respond to survey questions [2, 3]. Since most data analysis algorithms and methods are not designed to deal with Missing Values (MVs), it is essential to treat them beforehand to guarantee the validity of the results and avoid impairing the research conclusions [1, 4, 5]. MVs are problematic because of the risk of bias, which depends on the type of missing data, the extent of the missingness, and how the MVs are dealt with in the analyses [1]. Thus, it is critical to handle missing data in a timely manner for intelligent decision-making [6].
Several techniques have emerged to address this problem [4, 7, 8]. Lin [4] notes that if the MV rate is below 10% or 15%, the affected records can be removed without causing significant loss to the mining process. However, this does not mean that datasets in every problem domain must follow this rule; small amounts of missing data may still contain essential information that must be managed [9]. To address this issue, the literature suggests missing data imputation methods, which replace missing data with actual (plausible) values. While this approach retains more data than deletion, it requires time to generate reasonable replacement values [10, 11].
A naïve way of tackling the missing values issue is Single Imputation (SI), which fills in each missing value with a single estimated value, often based on the mean, the median, or regression models [4]. While this approach simplifies the dataset and makes it easier to analyze, it can introduce bias and underestimate the uncertainty in the results [12, 13]. To overcome this limitation, Rubin [14] introduced the imputation strategy regarded as the gold standard within the scientific community: Multiple Imputation (MI). In contrast with SI approaches, MI creates m complete versions of the operational database (m > 1), which are analyzed separately and then combined to obtain the best solution [15, 16]. To reduce the prediction error of the imputed values, metaheuristics can be used to optimize them [15]. Notably, bioinspired strategies such as Genetic Algorithms (GAs) are prominent in optimizing solutions [17].
GAs were proposed by Holland [18]; they are optimization heuristics based on “the survival of the fittest”, inspired by Charles Darwin's evolutionary theory. Regarding the use of GAs for multiple imputation, it is crucial to acknowledge the work of Garcia [19] and the MultImp algorithm [15]. The MultImp algorithm serves as the cornerstone of this research: it employed genetic algorithms for multiple imputation and was also applied to Multi-Label Classification (MLC) scenarios. The authors contend that data mining tasks, particularly those related to data classification, are notably sensitive to MVs. Furthermore, classification tasks are widely used to assess the accuracy (ACC) of imputation methods [5, 11, 20]; consequently, the higher the classification accuracy, the more successful the imputation method. However, only a few studies have employed MLC. In contrast to Single-Label Classification (SLC), or simply data classification, which associates an example with a single label, MLC allows an instance to be associated with multiple labels, thereby increasing the complexity of classification tasks [21, 22]. Further details on this topic are given in the Background section.
Considering the importance of handling missing values in data analysis and the available solutions in the existing literature, this work presents an efficient algorithmic approach for multiple imputation applied to multi-label classification tasks. The method is named EvoImp, a combination of “evolutionary” and “imputation”. The name is also inspired by MultImp [15], which serves as the foundation of our algorithm and showed promising preliminary results for multiple imputation of missing data. EvoImp enhances the parameterization of MultImp to maximize its imputation capabilities and explores new configurations for the computational experiments.
We conducted a rigorous benchmarking process to validate the proposed method's performance using diverse multi-label datasets, comparing EvoImp with well-established imputation methods documented in the literature. These datasets were systematically subjected to six missing value rates under the Missing Completely At Random (MCAR) mechanism, and the outcomes were evaluated using five distinct classifiers. This comprehensive evaluation provides insights into the strengths and potential limitations of EvoImp when applied to real-world multi-label classification scenarios. By addressing the challenges associated with missing data in this context, our work aims to advance multi-label classification and the broader field of data analysis.
Accordingly, the remainder of this paper is organized as follows. The section “Background” presents a preliminary background. The section “EvoImp—Proposed Method” describes the proposed method. The section “Computational Experiments” details the experimental setup. The performance of the method and its comparison with other data imputation techniques are demonstrated in the sections “Results and Analysis” and “Discussion”. Finally, the section “Conclusion and Suggestions for Future Work” summarizes the paper and points out potential directions for future exploration.
Background
Multi-label Classification and classical approaches to handling MVs
In single-label classification problems, a set of class labels is predetermined, and each object must be associated with one and only one label [23]. Formally, let X denote the input/feature space, and y denote the class value, where y ∈ L, the output space (a set of disjoint class labels). In this case, each sample is strictly associated with a single class label [24, 25].
However, there are increasingly more contexts in which data may belong to more than one class label. This classification condition is referred to as Multi-Label Classification. Initially, MLC primarily focused on tasks such as text categorization, protein function classification, music categorization, semantic scene classification, and medical diagnosis [23, 24, 26]. Recently, new applications have emerged in Computer Vision, Natural Language Processing, and Data Mining, including video annotation, legal text mining, and user profiling [27]. According to [25, 28], similar to SLC, MLC is represented by X and y, where each sample x ∈ X is assigned a subset of the output space (a set of non-disjoint class labels). Table 1 illustrates a toy example depicting the difference between SLC and MLC, adapted from [29]; the data in Table 1 comprise five instances (x1, x2, x3, x4, x5) and three labels (y1, y2, y3).
Table 1a illustrates the SLC scenario, where five data instances (x1 to x5) are each strictly associated with a single label (y1 to y3). For instance, x1 is associated with y1, x2 is associated with y2, and so on. On the other hand, MLC allows data instances to be associated with multiple labels simultaneously. Table 1b demonstrates the MLC scenario, where the same five data instances (x1 to x5) can have multiple labels assigned to them. For example, x1 is associated with both y1 and y2, x2 is associated with both y2 and y3, and so forth. This distinction highlights how SLC restricts each data instance to a single label, while MLC permits instances to belong to multiple labels simultaneously, making it more suitable for scenarios where objects or data points can be associated with different classes.
Although the difference is subtle in theory, MLC tends to be more challenging in practice. Gonçalves et al. [23] and Sá et al. [25] enumerated the following reasons for this:
- The set of possible classes of a given instance (output space) in MLC grows exponentially with the number of labels. Therefore, for a problem with L distinct labels, the size of the output space in MLC is 2^L (all combinations of labels), while it is only L in SLC;
- An MLC algorithm must consider whether or not correlations exist between labels. Exploiting this kind of correlation is an essential step to ensure the effectiveness of several MLC processes [24, 30, 31];
- The performance evaluation of MLC systems uses different metrics than those traditionally used in SLC [32]. In SLC, the classification of a new instance is either correct or wrong; in MLC, the result can be partially correct, which occurs when the classifier predicts some correct labels but includes incorrect predictions or omits a label that should be predicted. This requires careful attention, since some metrics follow contrasting notions of what constitutes a good MLC prediction [25, 33];
- Unlike SLC problems, which traditionally involve the analysis of relational (structured) data, MLC applications typically address big data tasks, which involve semi-structured or unstructured data [24, 34].
All these challenges have amplified the complexity associated with handling MVs. Nevertheless, finding studies that relate MLC and MV is not straightforward, as demonstrated in [4, 8, 17].
In this context, we emphasize a limited number of studies that specifically address the issue of missing labels [35, 36], i.e., studies focused on predicting unknown labels. Wang et al. [35] present a multi-label feature selection method that considers feature interaction. For that, the authors use the definitions of multi-label neighborhood information entropy and multi-label neighborhood mutual information to mitigate the negative impact of missing labels. Cheng, Song & Qian [36] address missing labels by leveraging label correlations and implementing a two-level kernel extreme learning machine autoencoder; the authors verified the proposed method on both missing and complete label datasets. Since these studies focus primarily on missing labels rather than missing values (predictive features), to the best of our knowledge, there is no work addressing missing values in the predictive feature space in an MLC scenario. Thus, this constitutes one of the contributions of the present study.
Bio-inspired computation for the handling of MVs
Tran, Zhang, and Andreae [37] proposed a data imputation method based on genetic programming, called GPMI. An MI strategy was applied in this method, and the missing values were estimated using regression techniques. GPMI was compared with seven imputation methods in an experiment carried out on eight datasets, applying different missing value ratios (5%, 10%, 20%, 30%, 40%, and 50%) with MCAR as the missing data mechanism. The classifier's accuracy was the performance measure adopted, and the results indicate that the proposed method outperformed all the compared methods. According to the authors, genetic programming was primarily responsible for these results because the algorithm initially used random samples to fill the gaps before submitting them to the genetic processes. The results confirm that strategies based on evolutionary algorithms are feasible alternatives for missing values treatment.
Shahzad, Rehman, and Ahmed, in their study “Missing Data Imputation using Genetic Algorithm for Supervised Learning” [38], employed a GA to search for plausible values for missing data imputation. An interesting strategy adopted in this study is the use of information gain to track how solutions evolve as the process advances. In an experiment with five datasets that originally contained missing values, the proposed method was compared with other imputation approaches: the average, lowest value, highest value, zero, and MI. The performance measures were predictive accuracy, precision, recall, F-measure, and the area under the Receiver Operating Characteristic (ROC) curve, with the following classifiers: NB-tree, PART, JRIP, Naive Bayes, KNN, and J48. The authors noted that the GA-based method showed promising results and worked well on datasets with a high percentage of missing values.
In [39], an algorithm called MOGAImp was proposed for multiple imputation of datasets based on genetic algorithms. A notable strategy of this work is its multi-objective approach, which until then had not been adopted in the literature for the performance analysis of imputation techniques. This approach employs two or more evaluation measures simultaneously, which is useful because performance measures may conflict: while one increases, the other declines. In the case of MOGAImp, two conflicting measures were combined through the Pareto front: the classifier accuracy and the predictive accuracy of the imputation method, calculated using the Normalized Root-Mean-Square Error (NRMSE).
Another critical factor in the study conducted by [39] concerns population initialization, which employs a pool of candidate solutions for each attribute. The solution pool groups all the values the dataset contains for the attribute that has a missing value (comparing two strings lexicographically in the case of categorical variables). The method was experimentally compared with other well-known techniques in the literature through a benchmark over several databases with missing values. The results demonstrated that the method achieved competitive performance and, according to the authors, showed potential for real-world applications. However, handling each MV individually through the solution pool makes MOGAImp demand high computational power. On the other hand, this strategy is an effective way of mixing genetic material; therefore, it has been adopted in EvoImp as the basis for the mutation operation.
In [15], the authors created a scheme based on genetic algorithms that served as the baseline for developing and analyzing the method employed in this study. The strategy, named MultImp, performs multiple imputation of datasets in a multi-label classification setting. In that study, the authors conducted experiments using four initially complete databases; subsequently, 5% missing values were introduced through the MCAR mechanism. Binary Relevance (BR) was employed as the multi-label classifier, with C4.5 as the base classifier. In the test scenario, MultImp was compared with two other imputation methods (K-Nearest Neighbors Imputation—KNNI and Most Common—MC) and evaluated lexicographically using the following measures: Exact Match (EM), Accuracy, and Hamming Loss (HL). The preliminary results proved promising, particularly for EM, where the method performed best on all the datasets used, justifying the adoption of the lexicographic approach.
For a comprehensive summary of the works discussed in this section, we have provided a detailed table in our supplementary material, available on the project’s GitHub repository (https://github.com/jacobjr/EvoImp).
EvoImp—Proposed method
Since EvoImp is based on a genetic algorithm, the following descriptions explain how EvoImp was mapped and configured within the GA structure: a) the codification of individuals, b) the formation of the initial population, c) the configuration of genetic operators, and d) the definition of the fitness function. Fig 1 presents a toy example of this structure, which will be detailed in the following subsections.
Toy example of a dataset with MV and how EvoImp’s GA works with it. (a) Dataset with missing values; (b) A complete dataset with imputed data. (c) Phenotype: contains the values corresponding to the missing data space; Genotype: represents the genes in binary code and the values of the measurements used in the fitness function. (d) Illustration of how the initial population is initialized. (e) Random selection of parents for crossover. (f) Illustration of crossover being applied to the two selected individuals.
Individual encoding and population initialization
Individuals in EvoImp are encoded as follows: the variables in the dataset represent the individual's genes. Genes initially marked with “?” represent the missing values (Fig 1(a)). Each individual corresponds to a complete (fully imputed) copy of the dataset (Fig 1(b)). The phenotype consists of the imputed values, while the genotype represents these values in binary form, as illustrated in Fig 1(c).
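To make this mapping concrete, a minimal sketch of one possible representation is shown below. The paper does not publish this class, so the names and field choices are assumptions derived from the description above (Java, matching the implementation language reported in the “Implementation” subsection).

```java
// Illustrative sketch only: all names and field choices here are assumptions
// based on the encoding described in the text, not the published implementation.
import java.util.BitSet;
import java.util.List;

final class Individual {
    // Phenotype: one candidate value per missing cell, in a fixed traversal order.
    List<String> imputedValues;
    // Genotype: the same values in binary form, manipulated by crossover/mutation.
    BitSet genotype;
    // Fitness components kept in lexicographic priority order (see "Fitness function").
    double exactMatch;
    double accuracy;
    double hammingLoss;
}
```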
The initial population is generated by five simple imputation methods, one individual per method (Fig 1(d)). All the imputation methods are well known and established in the literature [7]: K-Means Clustering Imputation (KMI), KNNI, WKNNI, Concept Most Common (CMC), and MC. The parameters of the KNNI, WKNNI, and KMI methods followed the guidelines set by their authors. This kind of population initialization was adopted in EvoImp to reduce the search space and, hence, the computational costs.
The methods employed are as follows [7] (a minimal sketch of the MC rule appears right after this list):
- KNNI: Whenever there is a missing value, the K nearest neighbors of the instance containing the MV are determined. For nominal attributes, the most common value among the K nearest neighbors is imputed; for numerical attributes, the average of the neighboring values is used;
- WKNNI: This technique determines the K nearest neighbors and weights each neighbor by its distance to the incomplete instance; the KNNI process is then applied;
- KMI: This technique divides the database into clusters based on its features. Once this has been done, the K-nearest-neighbors technique is applied to decide which value should be imputed;
- MC: In this method, the most common value is imputed for nominal attributes, and the average of all values of the corresponding attribute is used for numerical attributes;
- CMC: This method does the same as MC but uses only the instances belonging to the same class as the instance containing the MV.
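As referenced above, the following is a minimal sketch of the simplest of these rules, MC, for a single nominal attribute. All identifiers are illustrative assumptions; the experiments themselves used the KEEL implementations of these methods.

```java
// Minimal sketch of the MC (Most Common) rule for one nominal attribute;
// null entries stand for the "?" missing-value marker.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class MostCommonImputer {
    /** Returns the mode of the observed values, which replaces every missing entry. */
    static String mostCommon(List<String> column) {
        Map<String, Integer> counts = new HashMap<>();
        for (String v : column) {
            if (v != null) {
                counts.merge(v, 1, Integer::sum); // count each observed value
            }
        }
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(IllegalStateException::new); // column entirely missing
    }
}
```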
In contrast to MOGAImp [39], which employs random initialization of the initial population, the proposed method optimizes simple imputations through evolutionary processes to perform multiple imputations. This approach reduces the search space and introduces a novel method. This reduction in search space is particularly beneficial in scenarios where computational cost is critical in objective function calculations, such as multi-label classification.
It is also noteworthy that the present work makes two innovative contributions: 1) using simple imputation methods as a priori solutions, reducing the search space; and 2) treating missing values in the multi-label scenario. To the best of our knowledge, there is no similar study in the literature.
Genetic operators
The individual selection involves a tournament in which two (or more) members of the previous population are drawn and the better one is chosen based on fitness value, as illustrated in Fig 1(e). This procedure is repeated until the required number of individuals for the current generation is obtained. The best individual is always preserved through elitism [40].
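A minimal sketch of this binary tournament is given below; the Individual type follows the encoding sketch earlier, the comparator is the lexicographic ranking defined in the “Fitness function” subsection, and elitism is assumed to be handled by the caller.

```java
// Sketch of binary tournament selection; all identifiers are assumptions.
import java.util.Comparator;
import java.util.List;
import java.util.Random;

final class TournamentSelection {
    /** Draws two random members and returns the lexicographically better one. */
    static Individual select(List<Individual> population,
                             Comparator<Individual> better, Random rng) {
        Individual a = population.get(rng.nextInt(population.size()));
        Individual b = population.get(rng.nextInt(population.size()));
        return better.compare(a, b) <= 0 ? a : b; // smaller rank means fitter
    }
}
```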
In the literature, numerous methods for parameter tuning and control have been proposed and analyzed. [41] describes some of these methods and discusses various trends and challenges in the field. Specifically, [42] conducted experiments to find appropriate settings for these parameters when applying evolutionary algorithms to a multi-objective problem class, concluding that determining the value of the scaling factor can be difficult and is highly dependent on the specific problem. Considering these findings, initial tests were conducted to define the parameters used in our study. In line with the work of [42], the crossover rate was restricted to [0.8, 1.0], following the standard proposal for non-separable problems like the one tackled in our research. EvoImp applies crossover to 80% of the individuals using an n-point crossover operator [43], as shown in Fig 1(f), which is also consonant with the work of [44].
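The following is a hedged sketch of an n-point crossover over the binary genotype. The paper specifies only the operator type, so the cut-point handling and representation are assumptions.

```java
// Sketch of an n-point crossover on the binary genotype; assumes nPoints <= length.
import java.util.BitSet;
import java.util.Random;

final class NPointCrossover {
    static BitSet cross(BitSet p1, BitSet p2, int length, int nPoints, Random rng) {
        // Choose n distinct, sorted cut points along the genotype.
        int[] cuts = rng.ints(0, length).distinct().limit(nPoints).sorted().toArray();
        BitSet child = new BitSet(length);
        boolean fromFirst = true;
        int cut = 0;
        for (int i = 0; i < length; i++) {
            if (cut < cuts.length && i == cuts[cut]) { // switch source parent at each cut
                fromFirst = !fromFirst;
                cut++;
            }
            child.set(i, fromFirst ? p1.get(i) : p2.get(i));
        }
        return child;
    }
}
```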
The mutation process is performed on 20% of the individuals chosen randomly, except for the best one. For each individual to be mutated, the imputed value is exchanged for a candidate value. The mutation is applied only to genes that contain missing values. To accomplish this, each attribute in the dataset has a set of solutions, as shown in Table 2. This set is formed by considering all possible response options for that attribute in the evaluated dataset.
Table 2a displays a toy dataset containing five records and four attributes: “Year”, “Gender”, “Age”, and “Have Credit”. Some values in the dataset are missing and are represented by “?”. Table 2b lists the possible values for each attribute. For example, the “Year” attribute can have values 1998, 2005, or 2010; and the “Gender” attribute can have values M or F. The same reasoning is applied to the other attributes.
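A minimal sketch of this pool-based mutation is shown below, assuming each imputed cell is mapped to the value pool of its attribute (e.g., {1998, 2005, 2010} for “Year” in Table 2b); all identifiers are illustrative.

```java
// Sketch of the pool-based mutation: an imputed value is swapped for a random
// candidate from its attribute's set of observed values. Names are assumptions.
import java.util.List;
import java.util.Map;
import java.util.Random;

final class PoolMutation {
    static void mutate(Individual ind, Map<Integer, List<String>> poolByCell, Random rng) {
        // Every entry of imputedValues corresponds to an originally missing cell,
        // so any position is eligible; non-missing genes are never mutated.
        int cell = rng.nextInt(ind.imputedValues.size());
        List<String> pool = poolByCell.get(cell); // values observed for that cell's attribute
        ind.imputedValues.set(cell, pool.get(rng.nextInt(pool.size())));
    }
}
```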
Lobato et al. [39] adopted this technique to initialize the first MOGAImp population. The mutation operator was not implemented in MultImp, and its absence caused premature convergence, limiting the method's robustness. This operator is one of the main differences between MultImp and EvoImp: the proposed method implements a strategy to escape local minima.
The algorithm's search and optimization process runs over a predetermined number of generations and has two phases. First, the population grows, starting from the number of simple imputation methods adopted in the initialization and expanding through crossover; this strategy aims to provide population diversity. In the second phase, the population is gradually reduced back to the initial population size, allowing the best solution to be chosen qualitatively.
Fitness function
As mentioned earlier, the method was evaluated on an MLC scenario. For this, EvoImp performs a classification process on each individual. The goal is to analyze the performance of the classifier and, consequently, the data imputation efficiency.
Three performance measures were adopted to evaluate the classifier, as in MultImp: Exact Match, Accuracy, and Hamming Loss. The notation used by [15, 45] was adopted to describe these measures, and a code sketch translating them follows the list: (i) n: number of instances in the test set; (ii) q: number of labels; (iii) Yi: set of true labels of instance i; and (iv) Zi: set of predicted labels of instance i.
- Exact Match calculates, in a binary fashion, whether all the labels of an instance are predicted correctly. This measure, expressed in Eq 1, is the strictest of the three because it ignores partially correct predictions:
\[ EM = \frac{1}{n}\sum_{i=1}^{n} I(Y_i = Z_i) \quad (1) \]
where \( I(\cdot) \) is the indicator function;
- Accuracy also counts the correctly predicted labels of an instance but, in this case, takes partial predictions into account. Eq 2 expresses this measure:
\[ ACC = \frac{1}{n}\sum_{i=1}^{n} \frac{|Y_i \cap Z_i|}{|Y_i \cup Z_i|} \quad (2) \]
- Hamming Loss, in contrast to Accuracy, evaluates the classifier's performance through the average rate of incorrect predictions. Eq 3 describes this measure, where \( \Delta \) denotes the symmetric difference between the two label sets:
\[ HL = \frac{1}{n}\sum_{i=1}^{n} \frac{|Y_i \,\Delta\, Z_i|}{q} \quad (3) \]
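As referenced above, the sketch below translates Eqs 1–3 directly into code, assuming label sets are represented as java.util.Set<Integer>; it is for exposition only and is not the Mulan evaluation code used in the experiments.

```java
// Direct translation of Eqs 1-3; Y holds the true label sets, Z the predicted ones.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class MultiLabelMeasures {
    /** Returns {ExactMatch, Accuracy, HammingLoss} averaged over the n test instances. */
    static double[] evaluate(List<Set<Integer>> Y, List<Set<Integer>> Z, int q) {
        int n = Y.size();
        double em = 0, acc = 0, hl = 0;
        for (int i = 0; i < n; i++) {
            Set<Integer> union = new HashSet<>(Y.get(i));
            union.addAll(Z.get(i));
            Set<Integer> inter = new HashSet<>(Y.get(i));
            inter.retainAll(Z.get(i));
            em  += Y.get(i).equals(Z.get(i)) ? 1 : 0;                        // Eq 1
            acc += union.isEmpty() ? 1 : (double) inter.size() / union.size(); // Eq 2
            hl  += (double) (union.size() - inter.size()) / q;               // Eq 3
        }
        return new double[] { em / n, acc / n, hl / n };
    }
}
```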
These measures were used in lexicographic order; in other words, this approach ranks all the problem's objectives by priority and then tries to satisfy them following that priority list [46]. Thus, the fitness (f) of a solution can be expressed as Eq 4:
\[ f = (f_0, f_1, \ldots, f_{n-1}) \quad (4) \]
where n is the number of objectives and each \( f_k \) is an optimization goal, listed in decreasing order of priority. Given two fitness evaluations \( f_1 \) and \( f_2 \) and a precision threshold t, the lexicographic relations between them (noted as \( \prec_l \), \( =_l \), and \( \preceq_l \)) can be defined [47]:
\[ f_1 \prec_l f_2 \iff \exists\, k \in [0, n) \cap \mathbb{N} : f_{1,k} < f_{2,k} \,\wedge\, |f_{1,k} - f_{2,k}| \geq t \,\wedge\, \forall\, i < k : |f_{1,i} - f_{2,i}| < t \quad (5) \]
\[ f_1 =_l f_2 \iff \forall\, i \in [0, n) \cap \mathbb{N} : |f_{1,i} - f_{2,i}| < t \quad (6) \]
\[ f_1 \preceq_l f_2 \iff f_1 \prec_l f_2 \,\vee\, f_1 =_l f_2 \quad (7) \]
Eq 5 shows \( f_1 \prec_l f_2 \), meaning that \( f_1 \) is lexicographically less than \( f_2 \). This relation holds when there exists an index k in \( [0, n) \cap \mathbb{N} \) such that \( f_{1,k} < f_{2,k} \), the difference \( |f_{1,k} - f_{2,k}| \) is at least t (so the k-th components differ significantly), and \( |f_{1,i} - f_{2,i}| < t \) for all i less than k (so all higher-priority objectives are tied within precision t). In essence, \( f_1 \) is superior to \( f_2 \) on the highest-priority objective where they significantly differ. Eq 6 determines equality in lexicographic order (\( f_1 =_l f_2 \)), which occurs when the absolute differences between all corresponding components are less than t; in other words, \( f_1 \) and \( f_2 \) are considered equivalent across all objectives. Finally, Eq 7 presents \( f_1 \preceq_l f_2 \), which combines the \( \prec_l \) and \( =_l \) relations, indicating that \( f_1 \) is either better than or equivalent to \( f_2 \) with respect to the defined objectives.
These equations are used to rank and compare solutions or fitness evaluations in optimization problems, considering the objectives, prioritization, and performance. The lexicographical order approach allows for precise, multi-objective optimization when there are multiple criteria or objectives to be considered. Once the threshold t has been introduced, this formulation differs from the pure mathematical lexicographic relation. It permits the decision maker to choose the precision to compare two fitness functions. This relation allows the ranking of solutions of EvoImp as follows:
- The EM behavior is evaluated;
- If two or more individuals match their respective scores, the ACC evaluation is checked;
- If the tie remains, the HL evaluation is used.
This approach allows different performance measures to be added to a single evaluation [45]. It is similar to the classical lexicographical approach, but once evolutionary algorithms are adopted, local optima can be avoided [47].
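A minimal sketch of this thresholded lexicographic comparison is given below; the ordering (EM, then ACC, then HL, negated so that higher is uniformly better) follows the ranking above, while the concrete threshold t remains a free parameter chosen by the decision maker.

```java
// Sketch of the thresholded lexicographic comparison of Eqs 5-7; the Individual
// fields follow the earlier encoding sketch (assumptions, not the published code).
import java.util.Comparator;

final class LexicographicRank {
    static Comparator<Individual> comparator(double t) {
        return (a, b) -> {
            double[] fa = { a.exactMatch, a.accuracy, -a.hammingLoss };
            double[] fb = { b.exactMatch, b.accuracy, -b.hammingLoss };
            for (int k = 0; k < fa.length; k++) {
                if (Math.abs(fa[k] - fb[k]) >= t) { // k-th components differ significantly
                    return fa[k] > fb[k] ? -1 : 1;  // better individual ranks first
                }
                // otherwise the k-th objective is a tie within precision t; fall through
            }
            return 0; // equal under Eq 6: all components within t of each other
        };
    }
}
```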
The EvoImp algorithm
As shown in Algorithm 1, EvoImp begins the execution by creating and evaluating individuals for the initial population. The datasets are initially imputed using simple imputation methods: KNNI, CMC, MC, KMI, and WKNNI (lines 1–5). Afterward, the population is evaluated and ranked based on each individual’s performance (line 6). The algorithm applies the genetic operators if the stopping criterion is not attained (e.g., the number of generations).
Algorithm 1: EvoImp
Input: datasets with MV and parameters (see Table 4)
Output: complete datasets
1 foreach Simple Imputation Method do
2 Generate a new Individual: individual;
3 Evaluate the individual;
4 Add individual to the Current Population: currentPop ← individual;
5 end
6 Order currentPop using Lexicographical order;
7 while Stop criterion not reached do
8 Add to the Current population the Best Individual: currentPop ← bestIndividual;
9 while |currentPop| < Number of individuals of the new generation do
10 Select Parents;
11 Apply Crossover;
12 Evaluate the new Individual;
13 Add the Individual to Current population: currentPop ← individual;
14 end
15 while Number of mutated individuals < 20% of Individuals of the new generation do
16 Randomly choose an individual from the Current population;
17 Apply Mutation;
18 Evaluate the Individual;
19 Add the Individual to Current population: currentPop ← individual;
20 end
21 Order currentPop using Lexicographical order;
22 end
23 return bestIndividual;
The elitist individual is always passed on to the next generation (line 8). The selection is performed using the tournament selection operator (line 10). Two individuals are randomly drawn in this process. These two parents exchange genetic material using a crossover operator. These steps are repeated until the population is complete. Afterward, the mutation follows the established rate (lines 15–20). The new population is arranged, and the iterative process continues until the stopping criterion is reached. The return of the algorithm is the individual that achieves the best performance (line 23).
In summary, EvoImp adopts the parameterization of MultImp [15], except for the mutation operator, as pointed out earlier. Besides, we corrected bugs and optimized the code with maintainability and reuse in mind. Moreover, we implemented the lexicographic strategy and expanded the computational tests, broadening the technical-scientific contribution of the present work.
Computational experiments
Datasets
The experiments were designed using six multi-label datasets from the UCI Machine Learning Repository (https://archive.ics.uci.edu/). The number of datasets agrees with the literature review conducted by [17], which mapped 48 papers related to experiments in the context of data imputation. Chiu et al. [17] show that most papers (77%) use up to six datasets in their experiments. Another interesting finding of Chiu et al. [17] is that the UCI Machine Learning Repository is the most used source. Regarding dataset characteristics, most studies use small-scale datasets containing fewer than 15 attributes and 800 instances. Table 3 shows the datasets used and their features.
Regarding multi-label datasets, the works of [35, 48] must be mentioned. These studies, like EvoImp, used datasets obtained from the UCI repository and formatted with the Mulan library (http://mulan.sourceforge.net/). The datasets used in those papers have characteristics (cardinality, density, and number of instances) similar to those chosen in this paper. This observation highlights the consonance of the experimental setup with the state of the art and EvoImp's potential applicability to real-world problems.
Experimental setup
In the experiments, missing values were artificially added to each dataset at the following rates: 5%, 10%, 15%, 20%, 25%, and 30%. This “amputation” process was carried out using the MCAR mechanism, as described by Santos et al. [49]. The complete experimental configuration therefore consisted of 36 datasets with missing data, which underwent a comparative evaluation involving five simple imputation methods: KNNI, CMC, MC, KMI, and WKNNI.
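For illustration, a minimal sketch of such an MCAR amputation step is shown below. The String[][]/null representation is an assumption, and because each cell is dropped independently, the realized missing rate only approximates the nominal one (exact-rate variants instead remove a fixed number of randomly chosen cells).

```java
// Sketch of MCAR "amputation": each predictive-feature cell is removed with
// probability equal to the target rate, independently of any data values.
import java.util.Random;

final class McarAmputation {
    static void amputate(String[][] features, double rate, Random rng) {
        for (String[] row : features) {
            for (int j = 0; j < row.length; j++) {
                if (rng.nextDouble() < rate) { // independent of observed and unobserved values
                    row[j] = null;             // null plays the role of "?"
                }
            }
        }
    }
}
```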
The following classification methods were used for the multi-label learning tasks: Binary Relevance (BR), Hierarchy of Multi-label classifiER (HOMER), Multi-Label K-Nearest Neighbors (ML-KNN), Classifier Chains (CC), and Ensembles of Classifier Chains (ECC) [21, 50]. K-fold cross-validation was used for the classification model’s evaluation (learning and testing). Table 4 summarizes the overall parameters which were used in the experiments.
Regarding the simple imputation methods, the parameters recommended by [7] were used. The mutation rate (MR) chosen is higher than the typical usage rates because the starting point is not random. Therefore, considering that the initial population is obtained by other methods, parameterization experiments demonstrated that a higher MR yields better results, providing fast convergence. The entire experimental setup and the obtained results are available as supplementary material on the project’s GitHub (https://github.com/jacobjr/EvoImp).
Implementation
The GA was programmed in the Java language, version 8.1, based on the works of [15, 44]. Other components used third-party implementations as follows:
- For the multi-label classifiers, Mulan's library (https://mulan.sourceforge.net/) was used [51]; a usage sketch follows this list. This library also builds on classifiers implemented in Weka (https://www.cs.waikato.ac.nz/ml/weka/index.html) [52].
- The simple imputation methods used for forming the first population of EvoImp and in the comparative analyses are implemented in KEEL-software (http://www.keel.es/) [53].
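For reference, the sketch below shows how an imputed dataset can be evaluated with Mulan's API, in the spirit of the setup above; the file names are placeholders, and the base learner and fold count are assumptions rather than the exact experimental configuration.

```java
// Hedged sketch of evaluating an imputed dataset with Mulan; file names are
// placeholders, and the base learner and fold count are assumptions.
import mulan.classifier.transformation.BinaryRelevance;
import mulan.data.MultiLabelInstances;
import mulan.evaluation.Evaluator;
import mulan.evaluation.MultipleEvaluation;
import weka.classifiers.trees.J48;

public class EvaluateImputedDataset {
    public static void main(String[] args) throws Exception {
        // ARFF file with the imputed feature values plus the XML label descriptor.
        MultiLabelInstances data =
                new MultiLabelInstances("imputed-emotions.arff", "emotions.xml");
        BinaryRelevance br = new BinaryRelevance(new J48()); // BR over a C4.5-style tree
        MultipleEvaluation results = new Evaluator().crossValidate(br, data, 10);
        System.out.println(results); // reports measures including Hamming Loss
    }
}
```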
It is noteworthy that the GA used in EvoImp was fully implemented by the authors, despite KEEL providing a framework for evolutionary computation; this design decision gave us more control over the experiments. Computational complexity is another crucial aspect of the proposed method, since it plays a vital role in determining the feasibility and efficiency of applying bio-inspired techniques to optimization problems. Addressing this concern and reducing computational complexity enhances the algorithm's applicability and scalability, making it more suitable for larger datasets and complex optimization landscapes, particularly in multi-label classification tasks. More detailed information about EvoImp's computational complexity can be found in the supplementary materials on the project's GitHub repository.
Results and analysis
This section examines the results obtained from the computational experiments. The data displayed in the following tables show the differences in performance between the methods for each percentage of missing values analyzed (5%, 10%, 15%, 20%, 25%, and 30%). The best results are highlighted in bold for easy viewing. The metrics (Exact Match (↑), Accuracy (↑), and Hamming Loss (↓)) are presented with these symbols, where (↑) indicates that higher values reflect better performance, and (↓) indicates that lower values represent better performance.
Binary relevance
In the learning performed with the BR classifier, the results show that EvoImp was numerically superior (Table 5). In the EM evaluation, EvoImp outperformed its competitors in 35 of the 36 dataset scenarios evaluated (97.22%). Regarding the Accuracy measure, the proposed method demonstrated superior performance in 18 dataset scenarios (50%). Finally, considering HL, EvoImp outperformed the baseline methods in 16 dataset scenarios (44.44%).
It is essential to highlight the priorities adopted in EvoImp's lexicographic order: EM is prioritized, as mentioned in the subsection “Fitness function”, which explains the lower performance on the ACC and HL metrics with the Binary Relevance classifier.
Hierarchy Of Multi-label Classifier (HOMER)
The results for the HOMER classifier are presented in Table 6. Analyzing them, it is possible to observe that EvoImp is also superior to the other methods in 35 of the 36 dataset scenarios used in the experiments (97.22%) regarding the EM metric. These results corroborate those obtained with the Binary Relevance classifier.
Continuing the analysis of Table 6, regarding the ACC measure, EvoImp outperformed the baseline methods in 23 dataset scenarios (63.88%). The HL results show that EvoImp had the smallest classification error in 19 out of 36 scenarios (52.78%). In summary, EvoImp outperformed the other methods on all performance measures for the HOMER classifier, in consonance with the results for the BR classifier.
Multi-Label k-Nearest Neighbors
The results obtained with the ML-KNN classifier are shown in Table 7. As can be seen, EvoImp showed performance similar to the previous scenarios with the BR and HOMER classifiers. For instance, considering the primary metric (EM), EvoImp outperformed the baseline methods in 97.22% of the scenarios. Considering ACC and HL, EvoImp presented superior performance in 20 (55.55%) and 22 (61.11%) dataset scenarios, respectively.
Classifier Chains
The results for the Classifier Chains are presented in Table 8. Again, EvoImp outperformed the baseline methods for all evaluation measures considered: EM with superiority in 32 out of 36 datasets (88.88%), ACC with 30 (83.33%), and HL with 22 (61.11%).
Ensembles of Classifier Chains
The last scenario analyzed considered the Ensembles of Classifier Chains method, whose results are shown in Table 9. They also reveal a significant advantage of EvoImp over its competitors. However, this was the configuration with EvoImp's lowest performance: numerical superiority in 29 (80.55%) dataset scenarios for EM, 16 (44.44%) for ACC, and 17 (47.22%) for HL.
In summary, the EvoImp performance for the ECC presents the same pattern described in the previous scenarios, demonstrating the EvoImp robustness.
Discussion
In summary, EvoImp proved to be competitive in all classification scenarios, which underlines the fact that the optimization of imputation through evolutionary strategies, such as genetic algorithms, is an excellent alternative for handling missing values in the preprocessing phase of data analysis. It should be noted that the algorithm created performed optimizations based on simple imputation methods (applied to the initial population of EvoImp). Considering the computational experiments, other factors should be highlighted regarding the EvoImp performance:
- Maximizing the labels' success: The primary purpose of classification, particularly in this study, is the correct labeling of data instances, a task that becomes increasingly complex in the multi-label scenario. In the EM measure, where the classifier must correctly predict all the labels of an instance for it to count as a hit, the proposed method achieved better performance in 92.22% of the dataset scenarios overall. This performance is most evident with BR, HOMER, and ML-KNN, with 35 out of the 36 scenarios each. Another measure that supports this conclusion is ACC, where EvoImp's superior performance is most apparent in the analyses with the CC and HOMER classifiers (30 and 23 scenarios, respectively); in general terms, EvoImp was better in 68.3% of all the scenarios used. This can be explained by the fact that this measure is flexible regarding the number of labels predicted correctly: for example, if an instance belongs to five labels and four of them are predicted correctly, it achieves 80% accuracy. At the same time, the good ACC performance indicates that the classifier can increase its labeling capacity, which can be confirmed by analyzing the classification error evaluated through HL; in this metric, the proposed method obtained the lowest error (53.33%). It is worth mentioning that these results reflect the lexicographic order chosen (as explained in the subsection “Fitness function”), demonstrating the method's superiority over all the others. The comparison also shows that when ACC increases, HL decreases accordingly, justifying the use of lexicographic order instead of more complex approaches for conflicting measures, such as Pareto front analysis.
- Superior performance on datasets of different domains and sizes: The six datasets used in the experiments can be divided in terms of i) domain—the multi-label datasets were related to the areas of audio (1), music (2), image (2), and biology (1); and ii) size—considering the number of instances and attributes, as was done by [54]. These datasets were curated to provide a robust experimental setup, simulating diverse real-world problems. EvoImp performed best across all the tests, showing that the method is robust on datasets of different domains and sizes.
- Stable performance across the missing value rates under study: A critical evaluation of this study concerns the relationship between the missing value percentage and the performance measures. The results show that EvoImp maintains its consistency even under variation, which in this study ranged from 5% to 30% (in steps of 5%). These rates agree with those used in most studies in the literature; one related work that addresses this discussion is [17], a review that selected 48 related articles from 2011 to 2021. Regarding missing rates, this review indicated that 60.4% of the studies used missing rates ≤ 30% or did not report their missing rates.
The above aspects demonstrate that EvoImp is suitable for missing value treatments in real-world scenarios.
Conclusion and suggestions for future work
The data analyses conducted on real-world datasets make it clear that there is a critical need to handle missing values in the multi-label classification domain. The ubiquitous presence of MVs, together with the fact that most of the techniques employed only work, or only ensure good performance, when applied to datasets with complete cases, underlines the need to tackle this problem. Data imputation methods have emerged as an alternative solution, searching for plausible values to fill in the missing ones.
Therefore, in this study we proposed EvoImp, an imputation method based on genetic algorithms that optimizes multiple imputations of missing data for multi-label learning. For validation, the method was submitted to an extensive experimental benchmarking process with various multi-label datasets and compared with other state-of-the-art imputation methods. Six missing value rates were applied to the datasets under the MCAR mechanism. The results were analyzed using five classifiers: Binary Relevance, Hierarchy of Multi-label Classifier, Multi-Label k-Nearest Neighbors, Classifier Chains, and Ensembles of Classifier Chains. Three well-known evaluation measures were adopted to assess the experiments: Exact Match, Accuracy, and Hamming Loss.
EvoImp achieved exceptional results in all the scenarios evaluated, being quantitatively superior to the other methods. These results make it possible to conclude that the proposed method is suitable for application in real-world scenarios. In addition to a novel approach for dealing with MVs in multi-label classification, the present work contributes to the body of knowledge by: i) assessing the impact of missing data on multi-label classification to improve classification robustness; ii) providing an extensive experimental comparison of several state-of-the-art data imputation algorithms, multi-label machine learning classifiers, and performance measures; and iii) making the source code and experimental results available in a GitHub repository.
In future work, we intend to evaluate missingness mechanisms other than MCAR and adjust the method to handle high rates of missing data (> 30%). Experiments could also be performed to make EvoImp learn its own parameters (AutoML). Finally, we would like to investigate the influence of cardinality and density characteristics on multi-label learning with missing values.
References
- 1. Heymans MW, Twisk JW. Handling missing data in clinical research. Journal of clinical epidemiology. 2022 Nov 1;151:185. pmid:36150546
- 2. Honaker J, King G. What to Do about Missing Values in Time-Series Cross-Section Data. American Journal of Political Science. 2010 Apr;54(2):561–581.
- 3. Tsai CF, Li ML, Lin WC. A class center based approach for missing value imputation. Knowledge-Based Systems. 2018 Jul;151:124–35.
- 4. Lin WC, Tsai CF. Missing value imputation: a review and analysis of the literature (2006–2017). Artificial Intelligence Review. 2020 Feb;53:1487–509.
- 5. Garciarena U, Santana R. An extensive analysis of the interaction between missing data types, imputation methods, and supervised classifiers. Expert Systems with Applications. 2017 Dec 15;89:52–65.
- 6. Adhikari D, Jiang W, Zhan J, He Z, Rawat DB, Aickelin U, et al. A comprehensive survey on imputation of missing data in internet of things. ACM Computing Surveys. 2022 Dec 15;55(7):1–38.
- 7. Luengo J, García S, Herrera F. On the choice of the best imputation methods for missing values considering three groups of classification methods. Knowl Inf Syst. 2012;32(1):77–108.
- 8. Emmanuel T, Maupong T, Mpoeleng D, Semong T, Mphago B, Tabona O. A survey on missing data in machine learning. Journal of Big Data. 2021 Dec;8(1):1–37. pmid:34722113
- 9. McMahon P, Zhang T, Dwight RA. Approaches to dealing with missing data in railway asset management. IEEE Access. 2020 Mar 6;8:48177–94.
- 10. Ren L, Wang T, Seklouli AS, Zhang H, Bouras A. A review on missing values for main challenges and methods. Information Systems. 2023 Oct;119.
- 11. Farhangfar A, Kurgan L, Dy J. Impact of imputation of missing values on classification error for discrete data. Pattern Recognition. 2008 Dec 1;41(12):3692–705.
- 12. Rubin DB. An overview of multiple imputation. In: Proceedings of the Survey Research Methods Section of the American Statistical Association; 1988 Aug. Vol. 79, p. 84. Princeton, NJ, USA: Citeseer.
- 13. Li P, Stuart EA, Allison DB. Multiple imputation: a flexible tool for handling missing data. Jama. 2015 Nov 10;314(18):1966–7. pmid:26547468
- 14. Rubin DB. Multiple imputation for nonresponse in surveys. John Wiley & Sons; 2004 Jun 9.
- 15. Lobato FMF. Evolutionary strategies to optimize the treatment of missing data by multiple imputation data (in Portuguese). PhD Thesis, Federal University of Pará, 2016.
- 16. Nunes LN, Kluck MM, Fachel JMG. Use of multiple imputation for missing data: a simulation using epidemiological data (in Portuguese). Cad Saúde Pública [online]. 2009;25(2):268–278. pmid:19219234
- 17. Chiu PC, Selamat A, Krejcar O, Kuok KK, Bujang SD, Fujita H. Missing Value Imputation Designs and Methods of Nature-Inspired Metaheuristic Techniques: A Systematic Review. IEEE Access. 2022;10:61544–61566.
- 18. Holland JH. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press; 1992 Apr 29.
- 19. Garcia JCF, Kalenatic D, Bello CAL. Missing data imputation in multivariate data by evolutionary algorithms. Comput Hum Behav. 2011;27:1468–1474.
- 20. Provost F, Saar-Tsechanski M. Handling Missing Values when Applying Classification Models. Journal of Machine Learning Research. 2007;8.
- 21. Read J, Pfahringer B, Holmes G, Frank E. Classifier chains for multi-label classification. Machine learning. 2011 Dec;85:333–59.
- 22. Ghani MU, Rafi M, Tahir MA. Discriminative adaptive sets for multi-label classification. IEEE Access. 2020 Dec 1;8:227579–95.
- 23. Gonçalves EC, Freitas AA, Plastino A. A survey of genetic algorithms for multi-label classification. In: 2018 IEEE Congress on Evolutionary Computation (CEC); 2018 Jul 8. pp. 1–8. IEEE.
- 24. Nguyen TT, Nguyen TT, Luong AV, Nguyen QV, Liew AW, Stantic B. Multi-label classification via label correlation and first order feature dependance in a data stream. Pattern recognition. 2019 Jun 1;90:35–51.
- 25. de Sá AG, Pimenta CG, Pappa GL, Freitas AA. A robust experimental evaluation of automated multi-label classification methods. In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference; 2020 Jun 25. pp. 175–183.
- 26. Venkatesan R, Er MJ. Multi-label classification method based on extreme learning machines. In: 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV); 2014 Dec 10. pp. 619–624. IEEE.
- 27. Liu W, Wang H, Shen X, Tsang IW. The emerging trends of multi-label learning. IEEE transactions on pattern analysis and machine intelligence. 2021 Oct 12;44(11):7955–74.
- 28. Tsoumakas G, Katakis I, Vlahavas I. Random k-labelsets for multilabel classification. IEEE transactions on knowledge and data engineering. 2010 Sep 9;23(7):1079–89.
- 29. Tang L, Rajan S, Narayanan VK. Large scale multi-label classification via metalabeler. In: Proceedings of the 18th International Conference on World Wide Web; 2009. pp. 211–220.
- 30. Qian K, Min XY, Cheng Y, Song G, Min F. Self-dependence multi-label learning with double k for missing labels. Artificial Intelligence Review. 2022 Oct 23:1–38.
- 31. Sun L, Yin T, Ding W, Qian Y, Xu J. Feature selection with missing labels using multilabel fuzzy neighborhood rough sets and maximum relevance minimum redundancy. IEEE Transactions on Fuzzy Systems. 2021 Jan 22;30(5):1197–211.
- 32. Gibaja E, Ventura S. A tutorial on multilabel learning. ACM Computing Surveys (CSUR). 2015 Apr 16;47(3):1–38.
- 33. Pereira RB, Plastino A, Zadrozny B, Merschmann LH. Correlation analysis of performance measures for multi-label classification. Information Processing & Management. 2018 May 1;54(3):359–69.
- 34. Zheng X, Li P, Chu Z, Hu X. A survey on multi-label data stream classification. IEEE Access. 2019 Dec 24;8:1249–75.
- 35. Wang C, Lin Y, Liu J. Feature selection for multi-label learning with missing labels. Applied Intelligence. 2019 Aug 15;49:3027–42.
- 36. Cheng Y, Song F, Qian K. Missing multi-label learning with non-equilibrium based on two-level autoencoder. Applied Intelligence. 2021 Oct 1:1–9.
- 37. Tran CT, Zhang M, Andreae P. Multiple imputation for missing data using genetic programming. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation; 2015 Jul 11. pp. 583–590.
- 38. Shahzad W, Rehman Q, Ahmed E. Missing Data Imputation using Genetic Algorithm for Supervised Learning. Int J Adv Comput Sci Appl. 2017;8.
- 39. Lobato F, Sales C, Araujo I, Tadaiesky V, Dias L, Ramos L, et al. Multi-objective genetic algorithm for missing data imputation. Pattern Recognition Letters. 2015 Dec 15;68:126–31.
- 40. Mirjalili S. Genetic Algorithm. In: Evolutionary Algorithms and Neural Networks. Studies in Computational Intelligence. 2019;780. Springer.
- 41. Karafotias G, Hoogendoorn M, Eiben AE. Evaluating reward definitions for parameter control. In: Proceedings of the 18th European Conference on Applications of Evolutionary Computation (EvoApplications 2015); Copenhagen, Denmark; 2015 Apr 8–10. pp. 667–680. Springer.
- 42. Reynoso-Meza G, Sanchis J, Blasco X, Herrero JM. Hybrid DE algorithm with adaptive crossover operator for solving real-world numerical optimization problems. In: Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC); 2011. pp. 1551–1556. IEEE.
- 43. Semenkin E, Semenkina M. Self-configuring genetic algorithm with modified uniform crossover operator. In: Tan Y, Shi Y, Ji Z, editors. Advances in Swarm Intelligence. ICSI 2012. Lecture Notes in Computer Science. 2012;7331. Springer. pp. 414–421.
- 44. Lobato FMF, Tadaiesky VW, Araújo IM, de Santana ÁL. An Evolutionary Missing Data Imputation Method for Pattern Classification. In: Proc. Genet Evol Comput Conf—GECCO. 2015.
- 45. Gonçalves EC, Plastino A, Freitas AA. A genetic algorithm for optimizing the label ordering in multi-label classifier chains. In: Proc. Int. Conf. Tools with Artif. Intell. ICTAI. 2013. pp. 469–476.
- 46. González J, Ortega J, Escobar JJ, Damas M. A lexicographic cooperative co-evolutionary approach for feature selection. Neurocomputing. 2021;463:59–76.
- 47. González J, Ortega J, Damas M, Martín-Smith P. Many-objective cooperative co-evolutionary feature selection: A lexicographic approach. In: Rojas I, Joya G, Catalá A, editors. Advances in Computational Intelligence, IWANN 2019. Lecture Notes in Computer Science. 2019;11507. Springer. pp. 463–474.
- 48. Esmaeili A, Behdin K, Fakharian MA, Marvasti F. Transductive multi-label learning from missing data using smoothed rank function. Pattern Anal Applic. 2020;23:1225–1233.
- 49. Santos MS, Pereira RC, Costa AF, Soares JP, Santos J, Abreu PH. Generating Synthetic Missing Data: A Review by Missing Mechanism. IEEE Access. 2019;7:11651–11667.
- 50. Tsoumakas G, Katakis I, Vlahavas I. Effective and efficient multilabel classification in domains with large number of labels. In: Proc. ECML/PKDD 2008 Work. Min. Multidimens. Data. 2008. pp. 30–44.
- 51. Tsoumakas G, Spyromitros-Xioufis E, Vilcek J, Vlahavas I. MULAN: A Java library for multi-label learning. J Mach Learn Res. 2011;12:2411–2414.
- 52. Frank E, Hall MA, Witten IH. The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques”. Morgan Kaufmann, Fourth Edition. 2016.
- 53. Triguero I, González S, Moyano JM, García S, Alcalá-Fdez J, Luengo J, et al. KEEL 3.0: An Open Source Software for Multi-Stage Analysis in Data Mining. Int J Comput Intell Syst. 2017;10:1238–1249.
- 54. Schmitt P, Mandel J, Guedj M. A comparison of six methods for missing data imputation. Journal of Biometrics & Biostatistics. 2015 Jan 1;6(1):1.