Abstract
Software reliability growth models (SRGMs) are widely accepted and employed for reliability assessment. Software reliability analysis comprises two components: model construction and parameter estimation. This study concentrates on the second component, parameter estimation. The literature of the past few decades shows that parameter estimation has typically been done by either maximum likelihood estimation (MLE) or least squares estimation (LSE). Stochastic optimization methods have received increasing attention over the previous couple of decades. The traditional optimization criteria have various limitations; metaheuristic optimization algorithms are used to overcome these obstacles, which requires effective coverage of the search space and avoidance of local optima. This study analyzes the applicability of various established meta-heuristic algorithms to SRGM parameter estimation and compares them against various criteria. Four meta-heuristic algorithms are used for parameter estimation: the Grey-Wolf Optimizer (GWO), the Regenerative Genetic Algorithm (RGA), the Sine-Cosine Algorithm (SCA), and the Gravitational Search Algorithm (GSA). The parameter estimation power of these four algorithms is compared on four popular SRGMs over three actual failure datasets. The parameter values estimated by the meta-heuristic algorithms are approximately equal to the LSE values. The results show that RGA and GWO perform better on a variety of real-world failure data and have excellent parameter estimation potential. Based on the convergence and R2 distribution criteria, this study suggests that RGA and GWO are more appropriate for the parameter estimation of SRGMs; RGA locates the optimal solution more accurately and faster than GWO and the other optimization techniques.
Citation: Pradhan V, Patra A, Jain A, Jain G, Kumar A, Dhar J, et al. (2024) PERMMA: Enhancing parameter estimation of software reliability growth models: A comparative analysis of metaheuristic optimization algorithms. PLoS ONE 19(9): e0304055. https://doi.org/10.1371/journal.pone.0304055
Editor: Vedik Basetti, SR University, INDIA
Received: December 19, 2023; Accepted: May 7, 2024; Published: September 4, 2024
Copyright: © 2024 Pradhan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Small group Research Project under grant number RGP1/306/44.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Nowadays, the requirement for software systems has grown enormously because of their extensive range of application areas; with rapid technical advancement, the demand for multi-purpose software systems increases. As the size and complexity of versatile software grow, producing high-quality software becomes a more crucial task for software developers. The enormous complexity of the software development process makes it complicated for developers to build high-grade software. Software has become an integral part of industry and modern society, e.g., nuclear reactors, defense, patient health monitoring, industrial processes, banking, transportation, telecommunications, personal entertainment, home appliances, etc. [1, 2].
Software is a program in the form of a set of instructions or lines of code. In the twenty-first century, corporations and even individuals depend on software-based systems; almost everyone is associated with computer systems, either partially or entirely. Software defects have caused disastrous failures, a few with terrible outcomes, and there are several cases of devastating failures solely due to software crashes. Recently, many banks and digital banking institutions have experienced software crashes in the form of misplaced or jumbled data. Software industries must assess reliability before releasing software products to the market. Reliability prediction has become the most prominent measure to avoid software failure and its maintenance expenses. User feedback generally indicates reliability; however, this method does not help evaluate software reliability before release. Software reliability growth models (SRGMs), in contrast, provide this information before release. There are various features for assessing software quality; software reliability is a comprehensively utilized metric for software quality [3].
Software reliability refers to the probability of completing the expected function in the assigned environmental scenario over a specified period under defined, designed, and restricted circumstances [4]. We need a sophisticated way to formulate SRGMs to quantitatively estimate software reliability [5]. SRGMs try to correlate the failure data with distinguished distributions such as the exponential, Weibull, logistic, etc. Over the previous few decades, diverse growth models have been formulated for reliability growth estimation and software failure prediction [6–10]. Several studies on software failure forecasting have obtained considerable engagement since they help the testing group [11]. Many environmental elements are considered in the existing literature to enhance the accuracy of SRGMs. Recently, Kim et al. [12] proposed a new SRGM and optimal release time with dependent failures; in that work, they derive the SRGM assuming a point-symmetric fault detection rate.
A large number of growth models based on the non-homogeneous Poisson process (NHPP) have been created in the literature to measure software reliability in a realistic environment [13, 14]. Because of their ubiquity and performance in practice, NHPP-SRGMs are the most used type of growth model to estimate reliability [15–17]. The NHPP is a counting process, and the SRGMs based on it predict the number of identified problems over time. The Goel and Okumoto (GO) model is one of the most well-known and earliest NHPP-based SRGMs [18]. In order to build realistic SRGMs, these studies make several assumptions in their models. These growth models also assist in operational-phase decision-making for software to ensure the required reliability [19]. In previous studies, the majority of growth models considered the traditional hypothesis, i.e., perfect debugging, while only a small amount of research has been done in an imperfect debugging environment [20–22].
The parameters of these models are estimated from failure data through several estimation methods [23]. After parameter estimation, the SRGMs can be utilized to analyze several performance criteria. For a stochastic system, the parameter estimation problem can be transformed into an optimization problem. The purpose of estimation methods is to find the set of parameters that best fits the function and describes the failure data. Various analytical/numerical methods exist to find the values of unknown parameters, such as maximum likelihood estimation (MLE) and least squares estimation (LSE). This work considers existing concave and S-shaped SRGMs with two, three, four, and five parameters, i.e., the GO, Inflection S-shaped (ISS), PNZ, and PZM models, respectively [1]. These model parameters are estimated by a numerical method (LSE) and various meta-heuristic optimization techniques, i.e., RGA, GSA, SCA, and GWO. The ultimate objective of this work is to obtain more satisfactory parameter estimation accuracy for the growth models. Existing SRGMs rely on numerical approaches, MLE, or LSE to estimate parameters. However, these methods place significant limits on SRGM parameter estimation, such as requiring the continuity of the modelling function and the existence of derivatives.
The main contributions of this work are outlined as follows:
- To analyze the applicability of various developed meta-heuristic optimization techniques to SRGM parameter estimation.
- To conduct a comparative analysis of various meta-heuristic optimization techniques for optimal parameter estimation and identify the most appropriate algorithms for this task.
This work conducts the comparative analysis for four well-established SRGMs (GO, ISS, PNZ, and PZM) on three real failure datasets. The remainder of this article is organized as follows. Section 2 reviews the related literature. The NHPP and NHPP-based SRGMs are discussed in Section 3. Parameter estimation techniques (MLE, LSE, and meta-heuristic optimization) are presented in Section 4. Section 5 discusses the proposed methodology. Section 6 presents the evaluation process and experimental report based on the numerical investigation of the datasets. Finally, Section 7 concludes the work with future suggestions.
2 Literature survey
This section delivers detailed literature related to SRGMs and their respective parameter estimation techniques. In 2018, [24] considered testing coverage as well as the operational environment's uncertainty or randomness. They established a novel NHPP-based software reliability model using testing coverage with an uncertain operating environment. In this research, they present a sensitivity analysis to investigate the influence of individual parameters of the presented model, and they use the LSE method to estimate the reliability model's parameters. In 2019, for the reliability evaluation of software based on an NHPP, [25] introduced a generalized model that includes the unpredictability of the operational conditions and its impact on the rate of defect detection to encompass imperfect debugging. This article uses a general model to construct operational-environment uncertainty models, allowing flexibility in incorporating different random environmental elements and fault detection rates. MLE may be unable to obtain precise estimates in some cases, mainly when u(t) is too intricate, and one must resort to LSE. As a result, they employ the LSE approach for parameter estimation.
In 2020, under the martingale framework, [26] suggested a generalized SRGM based on numerous environmental aspects and the associated unpredictability. The unpredictability is reflected in the software defect detection mechanism, which is indeed a stochastic defect detection procedure. For example, a multi-environmental-variables SRGM is further refined by integrating two unique environmental aspects: the portion of reused modules and the frequency of program specification modification. They employ LSE to estimate model parameters.
The literature demonstrates the limitations of numerical estimation, such as the fact that it is rarely trivial. For small samples, the estimates can reveal significant biases. The selection of initial parameter values is likewise a delicate matter. Some academics presented neural-network-based strategies for modeling the non-linearity of software failures [27–30]. The foremost downside of these methods is that they need an extensive training dataset, which results in a significant increase in computing cost and time. These approaches also incur significant computational expense to forecast the number of failures per iteration. Nature-inspired solutions are adopted in numerous software testing, software reliability, and software engineering disciplines to overcome the aforementioned restrictions [31]. Meta-heuristics are straightforward ideas primarily inspired by animal behavior, biological phenomena, and evolutionary concepts. Recently, various algorithms have been developed for various optimization problems [32–38].
For parameter optimization of SRGMs, intelligent optimization algorithms are used. Intelligent optimization methods do not require any additional hypotheses. In 1995, [39] presented a prototype for SRGMs using GA and discovered a different, steady way of obtaining an estimate. In 2013, [40] suggested a renewed estimation technique for SRGMs, namely the PSO methodology, although it should be noted that this method required a broad search range and had a slow convergence rate. In 2009, [41] recommended that a multi-objective GA be used for SRGMs. In 2010, to enhance the implementation of the basic GA for the estimation of reliability models, [42] suggested a modified GA (MGA) based estimation approach.
In 2011, [43] studied a parameter estimation technique based on the Ant Colony Algorithm. Three collections of actual defect datasets are used to generate numerical examples, presented and explored in depth. It is demonstrated that (1) the standard technique fails to find feasible solutions for a few datasets and SRGMs, whereas the suggested technique always does; and (2) compared to PSO, the technique's outcomes are roughly ten times more accurate for most models, and it exceeds the PSO algorithm in terms of convergence rate and precision. The authors note that their future research will focus on the initial value setting of parameters as well as the way of splitting the solution space. In 2013, [44] proposed that the SRGM parameter estimation problem be solved using the PSO algorithm and then compared the results to those obtained using GA. Data from 16 projects back up the findings. The PSO results show a high predictive capacity, as evidenced by the minimal forecast error, and the outcomes acquired by PSO are superior to those obtained by GA. As a consequence, PSO can be utilized for SRGM parameter estimation.
In 2015, [45] performed a comparative analysis between CGA, BGA, and RGA. They used five typical SRGMs to conduct tests on eight failure datasets. The researchers used a real-valued GA for parameter estimation and concluded that RGA could locate the optimal solution more accurately and faster than previous GA techniques. As future work, they proposed a comparative analysis of more meta-heuristics for SRGM parameter estimation. To optimize the parameters of a testing-effort (TE) based SRGM, in 2016 [46] investigated the application and advancement of a swarm intelligence system, specifically the quantum particle technique. The suggested SRGM-TEF model's performance with improved parameters is compared to different current models. The investigation findings indicated that the suggested parameter estimation strategy utilizing quantum particles is advantageous and versatile; one can attain better reliability performance by employing SRGM-TEF on various software failure datasets.
The results of [47] show that a GSA-based parameter estimation technique solves these issues and provides outstanding parameter estimation quality. Comprehensive experiments on nine real datasets were carried out in that research, with the results assessed to compare the suggested method. The investigation shows that the suggested strategy outperforms existing estimation, GA, and cuckoo search methods. That research discusses an effective parameter estimation strategy for SRGMs using GSA that overcomes the constraints of prior approaches, evaluated through ample investigations on several popular datasets for various notable SRGMs. In 2018, [48] proposed a multi-release fault dependency SRGM for open-source software, employing a GA for solving the optimization function.
3 Software reliability growth models
3.1 Non-homogeneous Poisson process
In the counting process, {ℵ(t), t ≥ 0} stands for the total number of diagnosed defects, while the NHPP has a time-dependent failure intensity ℏ(t), i.e.,
ϑ(t) = ∫₀ᵗ ℏ(s) ds (1)
Here, ϑ(t) is the expected cumulative number of faults present in the system by time t, i.e., the mean value function (MVF). So, the probability that κ defects appear by time t is,
P{ℵ(t) = κ} = [ϑ(t)]^κ e^(−ϑ(t)) / κ!, κ = 0, 1, 2, … (2)
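For illustration, the Poisson probability of Eq (2) can be evaluated numerically. The sketch below assumes an exponential MVF ϑ(t) = a(1 − e^(−bt)) with arbitrary parameter values (a = 100, b = 0.05); any MVF could be substituted:

```python
import math

def nhpp_prob(kappa, t, mvf):
    """P{N(t) = kappa} for an NHPP with mean value function mvf (Eq 2)."""
    m = mvf(t)
    return m ** kappa * math.exp(-m) / math.factorial(kappa)

# illustrative exponential MVF with assumed parameters a = 100, b = 0.05
mvf = lambda t: 100.0 * (1.0 - math.exp(-0.05 * t))

p3 = nhpp_prob(3, 10.0, mvf)  # probability of exactly 3 detected faults by t = 10
```

Summing `nhpp_prob` over κ for a fixed t recovers a total probability of one, as the Poisson law requires.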
3.2 NHPP based SRGMs
There are four existing models considered in this study. The first two models were developed under perfect debugging (PD), and the other two under imperfect debugging (ID). In the ID models, the fault content function is a linear (PNZ) or exponential (PZM) time-dependent function. The four selected SRGMs used for the estimation evaluation criteria are explained below:
- GO model [18]
The typical hypotheses of NHPP models are among the GO model's assumptions. The expected number of detected defects in (t, t + Δt) is proportional to the number of defects remaining in the system. Thus, the MVF of the GO model can be depicted as:
ϑ(t) = a(1 − e^(−μt)) (3)
Here, a is the initial number of defects to be detected, and μ is the fault identification rate, i.e., μ(t) is taken as point symmetric. It is a concave-type model.
- ISS model [49]
In this model, the hypothesis is that the identification rate is time-dependent. The detection rate incorporates a learning factor ξ, and the nature of the detection rate is S-shaped. Therefore, the MVF of the SRGM can be represented as:
ϑ(t) = a(1 − e^(−μt)) / (1 + ξ e^(−μt)) (4)
Here, μ(t) = μ / (1 + ξ e^(−μt)), and μ(t) is taken as point symmetric. The GO model is a special case of the ISS model when ξ = 0. It is also a concave-type model.
- PNZ model [50]
In this model, the authors assume that defects may be introduced during detection. The fault content rate is linear in testing time, i.e., a(t) = a(1 + ζt), and the fault identification rate is a non-decreasing ISS-type function. Therefore, the MVF of the SRGM is represented as:
ϑ(t) = [a / (1 + ξ e^(−μt))] [(1 − e^(−μt))(1 − ζ/μ) + ζt] (5)
Here, μ(t) = μ / (1 + ξ e^(−μt)), μ(t) is taken as point symmetric, and ξ is the learning factor. When ζ = 0, the PNZ model becomes the ISS model, and when ζ = 0 and ξ = 0, the PNZ model becomes the GO model. It is an S-shaped and concave-type model.
- PZM model [1]
In this model, the authors assume that defects may be introduced during detection. The fault content rate is an exponential function of testing time, and the fault detection rate is the same as in the PNZ model. Therefore, the MVF of the SRGM is represented as:
(6)
Here, μ(t) is taken as point symmetric. When ζ = μ and κ = 0, the PZM model becomes the ISS model, and when ζ = μ, κ = 0, and ξ = 0, the PZM model becomes the GO model. This SRGM is also called a generalized NHPP SRGM. It is also an S-shaped and concave-type model.
Brief details of all the SRGMs considered for the comparative analysis are given in Table 1.
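For concreteness, the GO and ISS mean value functions above can be written as short Python functions; the closed forms below are the standard ones from the SRGM literature, and the parameter values in the sanity check are arbitrary:

```python
import math

def mvf_go(t, a, mu):
    """GO model MVF (Eq 3): concave exponential growth."""
    return a * (1.0 - math.exp(-mu * t))

def mvf_iss(t, a, mu, xi):
    """ISS model MVF (Eq 4): S-shaped, with learning factor xi."""
    return a * (1.0 - math.exp(-mu * t)) / (1.0 + xi * math.exp(-mu * t))

# sanity check with arbitrary parameter values: xi = 0 reduces ISS to GO
assert abs(mvf_iss(5.0, 120.0, 0.1, 0.0) - mvf_go(5.0, 120.0, 0.1)) < 1e-12
```

Both functions increase monotonically toward the total fault content a, which is the behavior the concave/S-shaped classification describes.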
After the development of an SRGM, the succeeding step is parameter estimation. Here, the parameter estimation techniques are classified into two categories: (i) traditional estimation techniques, and (ii) meta-heuristic optimization techniques.
4 Parameter estimation techniques
4.1 Numerical approaches
The two most used parameter estimation techniques in the literature are LSE and MLE, and between them, most studies use LSE.
4.1.1 Maximum likelihood estimation (MLE).
MLE is one of the most effective methods for deriving estimators. In contrast to other estimating approaches, maximum likelihood estimators are asymptotically normal and consistent as the sample size increases. For statistical models, MLE is a robust parameter estimation method, and the resulting estimator has various desirable statistical attributes of an optimal estimator for a significant quantity of data. However, the process of evaluating parameters through MLE is very complicated. MLE can be employed when the failure data meets certain hypotheses, such as a specified distribution. The equations require a numerical solution [41, 42, 51], which is a considerable obstacle for the management team. The log-likelihood functions of SRGMs are somewhat complex, which makes MLE quite complicated. In recent literature, various researchers have therefore used LSE for parameter estimation.
Let Πi be the cumulative number of detected defects up to time ti, i = 1, 2, …, n. The likelihood function is:
L = ∏ᵢ { [ϑ(tᵢ) − ϑ(tᵢ₋₁)]^(Πᵢ − Πᵢ₋₁) / (Πᵢ − Πᵢ₋₁)! } e^(−[ϑ(tᵢ) − ϑ(tᵢ₋₁)]) (7)
Taking the derivative of the log-likelihood with respect to each unknown parameter and setting it to zero yields a system of equations, which is solved to obtain the values of the unknown parameters. However, MLE may not be able to obtain accurate estimates in some cases, particularly when u(t) is too intricate, and we must resort to LSE. We therefore discuss next how the model parameters are assessed by employing the LSE approach.
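Since the score equations rarely admit a closed-form solution, the log-likelihood is typically maximized numerically. The sketch below fits the GO model to grouped failure counts with SciPy; the dataset and starting values are illustrative, not taken from this study:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, t, cum_faults):
    """Negative grouped-data NHPP log-likelihood for the GO model (constants dropped)."""
    a, b = theta
    if a <= 0.0 or b <= 0.0:
        return np.inf
    m = a * (1.0 - np.exp(-b * t))                    # GO mean value function
    dm = np.diff(np.concatenate(([0.0], m)))          # expected faults per interval
    dn = np.diff(np.concatenate(([0], cum_faults)))   # observed faults per interval
    if np.any(dm <= 0.0):
        return np.inf
    return -(np.sum(dn * np.log(dm)) - m[-1])

# illustrative grouped failure data (not from this study)
t = np.arange(1.0, 11.0)
cum = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 24])
res = minimize(neg_log_lik, x0=[30.0, 0.2], args=(t, cum), method="Nelder-Mead")
a_hat, b_hat = res.x
```

The derivative-free Nelder-Mead method avoids differentiating the likelihood by hand, but like any local optimizer it remains sensitive to the starting point — the same weakness the meta-heuristic techniques of Section 4.2 aim to address.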
4.1.2 Least squares estimation (LSE).
The LSE process identifies the most likely accurate set of parameters for a given experimental dataset [42]. It applies a curve-fitting method to the test dataset for unknown parameter estimation [39]. LSE is straightforward to use, and most software packages provide non-linear regression functionality as a tool [45]. It remains a viable choice when MLE cannot produce an acceptable value of the estimated parameter. It gives steady results on more comprehensive datasets; therefore, it is a notably adopted technique among practitioners in software industries. Previous work reveals the flaws in numerical estimation methodologies, such as being frequently non-trivial. For limited samples, the estimation procedure usually exposes significant biases; determining the parameters' initial values is another delicate issue. Regression analysis and parameter estimation can be done using a variety of statistical approaches, including standard LSE, non-linear LSE (N-LSE), and MLE [52]. The N-LSE procedure, out of the three, accounts for sampling error and other flaws in parameter estimation, and so outperforms the other two methods for model validation [53]. The N-LSE was performed using the Levenberg–Marquardt approach, which minimizes the total sum of squared errors [54, 55].
Let all failure data be expressed as pairs (tj, Πj), where Πj is the total number of defects observed over time (0, tj]. The sum of the squared distances is:
S(Θ) = Σⱼ [ϑ(tⱼ) − Πⱼ]² (8)
We obtain the normal equations for the proposed model by taking derivatives of Eq (8) with respect to each SRGM parameter and setting the results equal to zero:
∂S/∂θₖ = 2 Σⱼ [ϑ(tⱼ) − Πⱼ] · ∂ϑ(tⱼ)/∂θₖ = 0 (9)
By simultaneously solving the preceding equations, we obtain the LSE of all SRGM parameters.
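In practice, Eq (8) is minimized with a standard non-linear least-squares routine rather than by solving the normal equations by hand. A sketch using SciPy's Levenberg–Marquardt-based `curve_fit` on the GO model, with synthetic failure counts generated from assumed true parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def mvf_go(t, a, mu):
    """GO mean value function (Eq 3)."""
    return a * (1.0 - np.exp(-mu * t))

# synthetic cumulative failure counts from assumed true parameters a = 100, mu = 0.1
t = np.arange(1.0, 16.0)
rng = np.random.default_rng(0)
faults = mvf_go(t, 100.0, 0.1) + rng.normal(0.0, 1.0, t.size)

# curve_fit minimizes the sum of squared residuals of Eq (8) via Levenberg-Marquardt
(a_hat, mu_hat), _ = curve_fit(mvf_go, t, faults, p0=[50.0, 0.05])
```

The recovered estimates land close to the generating values, but the result still depends on the initial guess `p0` — the sensitivity to starting values noted above.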
4.2 Meta-heuristic approaches
"Heuristics are procedures that explore good (near-optimal) solutions at a moderate cost of computation without ensuring either optimality or feasibility" [56]. Heuristic algorithms mimic biological or physical rules. Several popular algorithms like BFA, PSO, SA, ACO, and the genetic algorithm (GA) are used for parameter estimation [57, 58]. Two essential features, exploration and exploitation, are recognized in population-based algorithms. Exploration is the ability to cover the search space, while exploitation is the ability to refine a nearby good solution, i.e., an optimum. These algorithms must apply exploration in the initial iterations to bypass local-optima stagnation; therefore, exploration is an essential concern in a population-based algorithm [59]. A proper trade-off between exploitation and exploration is the primary requirement to achieve excellent performance [60]. Although every population-based search algorithm can give competent solutions, no heuristic algorithm performs better than all others across all optimization problems. An algorithm may do worse than others on some problems and, in some cases, better than others. A hybrid global optimization algorithm has been proposed for objective functions typical of non-linear least-squares regression problems; it involves three components: simplifying the feasible region, excluding areas near local minimizers, and efficiently finding local minima [61]. In 2010, Žilinskas and Žilinskas explored the use of global optimization algorithms for nonlinear regression problems, with a focus on interval-arithmetic-based methods; unimodality of optimization problems in nonlinear regression is typically uncertain [62].
These algorithms are broadly classified into two categories: population-based versus individual-based algorithms. In a population-based optimization method, more than one solution is created, and the set of solutions is improved throughout the iterations; an individual-based method, in contrast, initializes only a single solution and grows/updates it throughout the iterations. Population-based algorithms can escape strong local optima because they use many solutions, which also help the method learn quickly from distinct territories of the exploration area. During the optimization process, information is interchanged among the search agents; hence, the agents can explore and exploit search spaces adequately and faster. The major disadvantage of these techniques is the huge number of function evaluations.
These methods are free from the constraints of the traditional techniques. In this study, we implemented one swarm intelligence, one evolutionary, and two physics-based algorithms for parameter estimation.
4.2.1 Regenerated GA (RGA).
A basic plan of the conventional RGA [63] is as follows: (i) generate a random initial population of chromosomes; (ii) evaluate chromosomes using a prescribed fitness function; (iii) select candidates from the population for the subsequent generation; (iv) apply the genetic operators of crossover and mutation to this selected sub-population to create a new population. The following crossover is used for value-encoded chromosomes.
Pc₁ = rc · Pᵢ + (1 − rc) · Pᵣ (10)
Pc₂ = (1 − rc) · Pᵢ + rc · Pᵣ (11)
Here, rc is a random value between 0 and 1. As the generations pass, the range decreases. For the mutation operator, the equation is as follows:
x′ = x ± y(1 − r^f(G)), with y = (UB − x) for the '+' case and y = (x − LB) for the '−' case (12)
Here, r is a uniform random number and f(G) is the range function of the current generation number G, defined as follows:
f(G) = (1 − G/Gmax)^b (13)
Here, Gmax is the maximum number of generations and b is a shape parameter. Algorithm 1 presents the pseudo-code of RGA. This algorithm shows that nature-inspired models can be straightforward and effective in optimizing problems.
Algorithm 1 Pseudo code of RGA.
Input: Number of agents (NA), Maximum iterations (Max_iter), Current iteration (Iter = 0), Crossover rate (Cr), Mutation rate (Mr), Objective function (f)
Output: Best agent (Pbest)
1: procedure RGA (NA, Max_iter, f, Cr, Mr)
2: Initialize the agents (P).
3: Compute the fitness of all agents through function f.
4: Identify the current best parents by top-mate selection (Pbest).
5: while Iter < Max_iter do
6: for each search agent i ∈ (NA) do
/* Perform Crossover */
7: rc = rand(0, 1) /* Here, rand(x, y) generates a uniform random number in the range [x, y]. */
8: if rc < Cr then
9: Randomly pick an agent Pr from the population for the crossover operation.
10: Produce crossover offspring Pc by performing arithmetic crossover using Eqs 10 and 11.
11: end if
/* Perform Mutation */
12: rm = rand(0, 1)
13: if rm < Mr then
14: Generate new solution by non-uniform mutation.
15: Re-initialize dimension d of crossover breed Pc.
16: end if
/* Perform Selection */
17: if f(Pc)<f(Pi) then
18: Pi = Pc
19: end if
20: end for
21: Compute the fitness of all search agents through function f.
22: Update Pbest agent.
23: Iter = Iter + 1
24: end while
25: return Pbest
26: end procedure
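Algorithm 1 can be sketched compactly in Python. The version below is a simplified real-coded GA with arithmetic crossover, a re-initializing mutation, and greedy selection; it is not the exact RGA of [63], and the sphere function merely stands in for the SRGM fitting objective:

```python
import random

def rga(f, bounds, n_agents=30, max_iter=200, cr=0.9, mr=0.1):
    """Simplified real-coded GA minimizing f over box-constrained bounds."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    best = min(pop, key=f)[:]
    for _ in range(max_iter):
        for i in range(n_agents):
            child = pop[i][:]
            if random.random() < cr:                 # arithmetic crossover (cf. Eqs 10-11)
                mate = random.choice(pop)
                rc = random.random()
                child = [rc * x + (1 - rc) * y for x, y in zip(pop[i], mate)]
            if random.random() < mr:                 # mutation: re-initialize one dimension
                d = random.randrange(dim)
                child[d] = random.uniform(*bounds[d])
            if f(child) < f(pop[i]):                 # greedy survivor selection
                pop[i] = child
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

random.seed(0)
sphere = lambda x: sum(v * v for v in x)   # stand-in for the SRGM fitting objective
sol = rga(sphere, [(-5.0, 5.0)] * 2)
```

Replacing `sphere` with the sum of squared residuals of Eq (8), with `bounds` covering the admissible SRGM parameter ranges, turns the sketch into a parameter estimator.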
4.2.2 Gravitational Search Algorithm (GSA).
GSA is a well-established optimization method based on mass interactions and the law of gravity. In the GSA algorithm, the search agents are a set of masses that communicate with each other according to the laws of motion and Newtonian gravity [64]. Agents are viewed as objects, and their masses reflect their performance. Every object pulls all others by gravity, and this force creates a global movement of every object toward the objects with heavier masses.
The researchers proposed that only a collection of agents with higher mass exert their force on the others. One must remain cautious in applying this method, however, because it may decrease the exploration potential while improving the exploitation capacity; recall that to avoid being trapped in a local optimum, the algorithm needs to utilize exploration at the start. To enhance GSA's performance by managing exploitation and exploration, only the K-best agents attract the others. K-best is a time-dependent function, with initial value K0 at the start, decreasing with time. In the beginning, all agents exert force, and as time passes, K-best is reduced linearly; at the end, only one agent exerts force on the others.
We define the force acting on mass i from mass j in dimension d at a certain moment t as follows:
Fᵢⱼᵈ(t) = G(t) · [Mᵢ(t) Mⱼ(t) / (Rᵢⱼ(t) + ε)] · (xⱼᵈ(t) − xᵢᵈ(t)) (14)
Fᵢᵈ(t) = Σ_{j ∈ Kbest, j ≠ i} randⱼ Fᵢⱼᵈ(t) (15)
aᵢᵈ(t) = Fᵢᵈ(t) / Mᵢ(t) (16)
Here, xᵢᵈ is the position of the ith agent in the dth dimension, G(t) is the gravitational constant, Rᵢⱼ(t) is the Euclidean distance between agents i and j, ε is a small constant, aᵢᵈ is the acceleration of agent i at time t in direction d, and randⱼ is a uniform random variable with a value between 0 and 1. We use this random number to give the search a randomized characteristic. The pseudo-code of GSA is described in Algorithm 2.
Algorithm 2 Pseudo code of GSA.
Input: Number of agents (NA), Maximum iterations (Max_iter), Current iteration (Iter = 0), Objective function (f)
Output: Best agent (Pbest)
1: procedure GSA (NA, Max_iter, f, Iter = 0)
2: Initialize each agent Pi. /* Here, i ∈ NA */
3: Initialize algorithmic parameters, i.e., gravitational field (G(Iter)), and Mass (M(Iter)).
4: Compute the fitness of all agents through function f.
5: Identify the current best agent (Pbest) and worst agent (Pworst) of the population.
6: while Iter < Max_iter do
7: Calculate the total force in each direction through Eq 14.
8: Calculate the acceleration and velocity in each direction.
9: for each search agent i ∈ (NA) do
10: Update the position of agent Pi.
11: Evaluate the fitness of Pi through objective function f.
12: end for
13: Update the Pbest and Pworst of population through updated population.
14: Update algorithmic parameters G(Iter), M(Iter).
15: Iter = Iter + 1
16: end while
17: return Pbest
18: end procedure
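The updates above can be condensed into a short Python sketch. The mass assignment and the exponentially decaying gravitational constant below follow the common GSA formulation, but the specific constants and the all-pairs attraction (rather than the K-best schedule) are simplifying assumptions:

```python
import numpy as np

def gsa(f, bounds, n_agents=30, max_iter=200, g0=100.0, alpha=20.0, seed=0):
    """Condensed GSA sketch: all agents attract each other (no K-best schedule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_agents, dim))
    v = np.zeros((n_agents, dim))
    best, best_f = None, np.inf
    for it in range(max_iter):
        fit = np.array([f(p) for p in x])
        i = int(fit.argmin())
        if fit[i] < best_f:                          # track the historical best agent
            best, best_f = x[i].copy(), fit[i]
        g = g0 * np.exp(-alpha * it / max_iter)      # decaying gravitational constant G(t)
        m = fit.max() - fit + 1e-12                  # better fitness -> heavier mass
        m = m / m.sum()
        acc = np.zeros_like(x)
        for j in range(n_agents):                    # net force on agent j (cf. Eqs 14-16)
            d = x - x[j]
            r = np.linalg.norm(d, axis=1)[:, None] + 1e-12
            acc[j] = (g * rng.random((n_agents, 1)) * m[:, None] * d / r).sum(axis=0)
        v = rng.random((n_agents, dim)) * v + acc    # randomized velocity update
        x = np.clip(x + v, lo, hi)
    return best

sphere = lambda z: float((z * z).sum())
sol = gsa(sphere, [(-5.0, 5.0)] * 2)
```

As with the RGA sketch, substituting the SRGM least-squares residual for `sphere` yields a derivative-free parameter estimator.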
4.2.3 Sine Cosine Algorithm (SCA).
The SCA algorithm [65] shows that elementary mathematics can be applied to compose an optimization process, and the algorithm has been used in various areas [66]. As mentioned earlier, the algorithm uses the sine and cosine functions to exploit and explore the space between two solutions to obtain a fitter solution in the search space. The SCA produces many initial random solutions and makes them oscillate toward or away from the fittest solution. Various random and adaptive variables are also incorporated into this algorithm to maintain the exploitation and exploration of the search space at different optimization stages [67]. In this algorithm, the position-updating equations for the two phases are:
Xᵢᵏ⁺¹ = Xᵢᵏ + s1 · sin(s2) · |s3 Lᵢ − Xᵢᵏ| (17)
Xᵢᵏ⁺¹ = Xᵢᵏ + s1 · cos(s2) · |s3 Lᵢ − Xᵢᵏ| (18)
Here, Xᵢᵏ is the location of the present solution in the i-th dimension at the k-th iteration, s1/s2/s3 are random numbers, Lᵢ is the location of the destination point in the i-th dimension, and |·| denotes the absolute value. These two equations are consolidated as follows:
Xᵢᵏ⁺¹ = Xᵢᵏ + s1 · sin(s2) · |s3 Lᵢ − Xᵢᵏ| if s4 < 0.5; Xᵢᵏ⁺¹ = Xᵢᵏ + s1 · cos(s2) · |s3 Lᵢ − Xᵢᵏ| otherwise (19)
Here, s4 ∈ [0, 1] is a random number.
In these equations, there are four central parameters s1, s2, s3, and s4. The parameter s1 manages the next position's region, which is the space between the solution and the destination. The parameter s2 defines whether the movement should be toward or away from the destination. The parameter s3 delivers a random weight for the target in order to stochastically emphasize (s3 > 1) or de-emphasize (s3 < 1) the impact of the goal in determining the distance. Lastly, s4 switches uniformly between the sine and cosine components in Eq 19. To balance exploration and exploitation, the range of s1 is decreased adaptively over the iterations:
s1 = a − k (a / K) (20)
Here, k is the current iteration, K is the maximum number of iterations, and a is a constant.
The SCA pseudo-code is described in Algorithm 3.
Algorithm 3 Pseudo code of SCA.
Input: Number of agents (NA), Maximum iterations (Max_iter), Current iteration (Iter = 0), Objective function (f)
Output: Best agent (Pbest)
1: procedure SCA (NA, Max_iter, f, Iter = 0)
2: Initialize the search agents (Pi). /* Here, i ∈ NA */.
3: Initialize the regulating parameters s1, s2, s3, and s4.
4: Evaluate each search agent (Pi) by the objective function f.
5: Identify the current best agent (Pbest) of the population.
6: while Iter < Max_iter do
7: for each search agent Pi do
8: Update s1, s2, s3, and s4.
9: Update the position of search agent Pi using Eq 19.
10: Evaluate the search agent Pi by the objective function f.
11: Update Pbest = Pi, if Pi is better than the earlier Pbest.
12: end for
13: Iter = Iter + 1
14: end while
15: return Pbest
16: end procedure
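The loop of Algorithm 3 can be sketched compactly in Python. This is a minimal, illustrative implementation of the update in Eqs 17–20 on a toy objective; the names and settings are ours, not the paper's implementation:

```python
import numpy as np

def sca_minimize(f, bounds, n_agents=20, max_iter=100, a=2.0, seed=0):
    """Minimal SCA sketch: agents oscillate around the best solution found so far."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_agents, dim))
    fit = np.array([f(p) for p in pos])
    best, best_fit = pos[fit.argmin()].copy(), float(fit.min())
    for k in range(max_iter):
        s1 = a - k * a / max_iter                  # Eq 20: s1 decays linearly to 0
        for i in range(n_agents):
            for d in range(dim):
                s2 = rng.uniform(0.0, 2.0 * np.pi)
                s3 = rng.uniform(0.0, 2.0)
                s4 = rng.random()                  # switch between sine and cosine
                gap = abs(s3 * best[d] - pos[i, d])
                if s4 < 0.5:
                    pos[i, d] += s1 * np.sin(s2) * gap   # Eq 17 (sine branch)
                else:
                    pos[i, d] += s1 * np.cos(s2) * gap   # Eq 18 (cosine branch)
            pos[i] = np.clip(pos[i], lo, hi)
            fi = float(f(pos[i]))
            if fi < best_fit:                      # greedy update of Pbest
                best_fit, best = fi, pos[i].copy()
    return best, best_fit
```

As $s_1$ shrinks, the oscillation amplitude around the best agent contracts, which is what turns the early exploration into late exploitation.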
4.2.4 Grey-wolf optimizer (GWO).
The GWO algorithm [68] incorporates the grey wolf hunting mechanism into an algorithm that mimics the pack's leadership structure. This algorithm has been used in various areas [69]. Four sorts of grey wolves are utilised to simulate the leadership hierarchy: alpha, beta, delta, and omega. In addition, three primary phases of hunting are carried out to accomplish optimization: searching for prey, encircling prey, and attacking prey.
The alpha wolf is the leader in the search space and is responsible for the critical decisions of the pack. Some grey wolves assist the alpha in decision-making and other pack activities; these wolves are known as betas. The betas are the alpha's subordinates and rank second in the grey wolf hierarchy. The omega is the lowest-ranking member of the pack; omega wolves must regularly follow the dictates of the dominant wolves. A wolf that is neither an alpha, a beta, nor an omega is known as a delta. Deltas dominate the omegas but must submit to the alphas and betas.
In order to mathematically model the encircling behavior, the following equations are proposed:

$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|$ (21)

$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$ (22)

Here, $t$ indicates the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ indicates the position vector of a grey wolf. The vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:

$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$ (23)

$\vec{C} = 2\vec{r}_2$ (24)

Here, the components of $\vec{a}$ are linearly decreased from 2 to 0 over the course of iterations, and $\vec{r}_1$, $\vec{r}_2$ are random vectors in [0, 1]. In order to mathematically recreate grey wolf hunting behaviour, we assume that the alpha (best candidate solution), beta, and delta have a superior understanding of prospective prey locations. As a result, we save the first three best solutions found thus far and require the other search agents (including the omegas) to update their locations in accordance with the best search agents' positions. In this regard, the following formulas are proposed:

$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|,\quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|,\quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|$ (25)

$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha,\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta,\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$ (26)

$\vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$ (27)
The final position is observed to be in a random location within a circle defined by the positions of alpha, beta, and delta in the search space. To put it another way, alpha, beta, and delta estimate the prey’s location, while other wolves update their positions at random around the prey. The pseudo code of the GWO algorithm is given in Algorithm 4.
Algorithm 4 Pseudo code of GWO.
Input: Number of agents (NA), Maximum iterations (Max_iter), Current iteration (Iter = 0), Objective function (f)
Output: Best agent (Pα)
1: procedure GWO (NA, Max_iter, f, Iter = 0)
2: Initialize the agents (P), and algorithmic parameters a, A, and C.
3: Compute the fitness of all agents through function f.
4: Identify the first best (Pα), second best (Pβ), and third best (Pδ) search agents of the population.
5: while Iter < Max_iter do
6: for each search agent i ∈ (NA) do
7: Update the position of current search agents (Pi) through position update Eq 27.
8: end for
9: Update a, A, and C.
10: Compute the fitness of all search agents through function f.
11: Update Pα, Pβ, and Pδ.
12: Iter = Iter + 1
13: end while
14: return Pα
15: end procedure
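Algorithm 4 can be sketched in Python as follows. This is a simplified, illustrative version of Eqs 21–27 (for brevity, beta and delta are taken from the current population rather than tracked as all-time bests); names are ours, not the paper's code:

```python
import numpy as np

def gwo_minimize(f, bounds, n_agents=20, max_iter=100, seed=0):
    """Minimal GWO sketch: alpha, beta, delta jointly steer the pack."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_agents, dim))
    fit = np.array([f(p) for p in pos])
    order = np.argsort(fit)
    alpha, beta, delta = pos[order[0]].copy(), pos[order[1]].copy(), pos[order[2]].copy()
    alpha_fit = float(fit[order[0]])
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter                   # a decays linearly from 2 to 0
        for i in range(n_agents):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a      # Eq 23
                C = 2.0 * rng.random(dim)              # Eq 24
                D = np.abs(C * leader - pos[i])        # Eq 25
                x_new += leader - A * D                # Eq 26
            pos[i] = np.clip(x_new / 3.0, lo, hi)      # Eq 27: average of X1, X2, X3
        fit = np.array([f(p) for p in pos])
        order = np.argsort(fit)
        if float(fit[order[0]]) < alpha_fit:           # keep the best alpha ever seen
            alpha_fit = float(fit[order[0]])
            alpha = pos[order[0]].copy()
        beta, delta = pos[order[1]].copy(), pos[order[2]].copy()
    return alpha, alpha_fit
```

As |A| drops below 1 with decaying a, the pack converges on the prey (exploitation); while |A| > 1 the wolves diverge from the leaders (exploration).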
5 Proposed methodology
The proposed methodology is developed to analyze the applicability of the meta-heuristic algorithms.
Algorithm 5 Pseudo code of the proposed Algorithm.
Input: Considered SRGM models for ranking, and considered dataset for evaluation
Output: Report the rank-1 algorithm for the considered SRGM models.
1: procedure For each i-th considered model:
(a) Find the optimal parameter values through statistical learning.
(b) Compute the Mean Squared Error (MSE_i^st) between actual and predicted faults.
2: For each i-th considered model:
(a) Initialize the parameter values within their respective ranges.
(b) Predict the faults at various interval frames of the given dataset with the initial parameter set.
(c) Compute the initial sum of squared errors (SSE_th) between actual and predicted faults.
(d) For each j-th nature-inspired optimization algorithm from the set {RGA, GSA, SCA, GWO}:
(e) Return the vector of SRGM parameter values, the MSE, and the number of iterations to converge for every employed algorithm from the set {RGA, GSA, SCA, GWO} for the i-th model.
(f) Rank the employed algorithms based on their number of iterations to converge for the considered i-th model.
(g) Return MSE_i^r1 (i.e., the MSE value of the rank-1 algorithm) and its iteration count for the considered i-th model.
3: For each i-th considered model:
(a) Compare the MSE_i^st and MSE_i^r1 values.
(b) If MSE_i^r1 > MSE_i^st: report the failure of the proposed algorithm.
(c) Else: return the rank-1 algorithm for the considered SRGM model.
4: end procedure
These algorithms are used to estimate the model parameters.
The estimation process falls into two categories: traditional methods and meta-heuristic algorithms. In the past literature, statistical techniques were generally employed for the parameter estimation of SRGMs; of the traditional methods, LSE and MLE are considered here. In the recent past, various nature-inspired meta-heuristic algorithms such as GA, PSO, ACO, GSA, and GWO have been used for model parameter estimation. In this study, we consider RGA, GSA, SCA, and GWO. The complete work-flow of the proposed procedure is presented in Fig 1.
The procedure starts with SRGM identification; for model selection we consider well-known SRGMs with two, three, four, and five parameters. For model and algorithm validation we use three real-failure datasets. We observe that the parameter values obtained by the meta-heuristic algorithms are close to those obtained by LSE. Based on these parameter values, several comparison criteria are calculated for model and algorithm comparison. For the algorithm comparison, we evaluate convergence based on the number of iterations and also examine the R2 distribution over several trials. The proposed method ultimately suggests the best of the competing algorithms based on these convergence and R2 distribution criteria.
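The selection step of this methodology, ranking the meta-heuristics by iterations to converge and accepting the rank-1 algorithm only if its MSE does not exceed the statistically estimated baseline, can be sketched as follows. The function signature and dictionary shape are illustrative assumptions, not the paper's code:

```python
def select_algorithm(results, mse_stat):
    """results maps algorithm name -> (mse, iterations_to_converge).
    mse_stat is the MSE of the statistical (e.g., LSE) baseline fit.
    Returns the rank-1 algorithm name, or None when the procedure fails."""
    # Rank-1 = fastest convergence (fewest iterations).
    ranked = sorted(results.items(), key=lambda kv: kv[1][1])
    best_name, (best_mse, _) = ranked[0]
    if best_mse > mse_stat:
        return None  # meta-heuristic fit is worse than the baseline: report failure
    return best_name
```

Run once per SRGM and dataset, this reproduces the comparison of steps 2(f)–3(c) of Algorithm 5.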
6 Experiments and data analysis
This section presents the considered datasets, implementation details, evaluation metrics, and model-wise comparison in detail.
6.1 Data description
For this research and comparison, a total of three data sets are employed. Table 2 lists the data sets in detail.
The most critical task after model creation is parameter estimation. The popular traditional technique LSE and the four meta-heuristic techniques RGA, GSA, SCA, and GWO are applied for parameter estimation on the three data sets. The estimated values of the models' parameters, as well as the goodness of fit, are listed in Tables 3–6.
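Every estimator here, LSE and the meta-heuristics alike, minimizes the squared gap between an SRGM's mean value function (MVF) and the observed cumulative fault counts. As a concrete illustration, a sketch of this objective for the GO model, whose MVF is m(t) = a(1 − e^{−bt}); the synthetic data and function names are ours:

```python
import numpy as np

def go_mvf(t, a, b):
    """Goel-Okumoto mean value function: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * np.asarray(t, dtype=float)))

def sse(params, t, faults):
    """Sum of squared errors between predicted and observed cumulative faults."""
    a, b = params
    return float(np.sum((go_mvf(t, a, b) - np.asarray(faults, dtype=float)) ** 2))
```

Any of the four meta-heuristics (or a nonlinear least-squares routine) can then be pointed at `sse` as the objective to estimate (a, b), which is exactly the role the objective function f plays in Algorithms 2–4.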
6.2 Model evaluation criteria
The Mean Squared Error (MSE), predictive power (PP), and predictive ratio risk (PRR) on the data sets are used to generate the evaluation fitness value of a set of parameters. Three comparison criteria are utilized to assess the SRGMs' descriptive performance. The criteria are the following:
- MSE: MSE is a metric that is used to compare prediction models quantitatively. It measures the difference between the true and the predicted values:

$MSE = \dfrac{1}{n}\sum_{i=1}^{n}\left(\vartheta_i - \vartheta(t_i)\right)^2$ (28)

Here, $\vartheta_i$ is the observed number of detected defects by time $t_i$, and the MVF at time $t_i$ is represented as $\vartheta(t_i)$. A smaller MSE value indicates a better model.
- Predictive ratio risk (PRR): PRR measures the distance of the model estimate from the actual data relative to the model estimate. A lower PRR value indicates that the fitted curve is better suited to the failure data:

$PRR = \sum_{i=1}^{n}\left(\dfrac{\vartheta(t_i) - \vartheta_i}{\vartheta(t_i)}\right)^2$ (29)

- Predictive power (PP): PP measures the distance of the model estimate from the actual data relative to the actual data:

$PP = \sum_{i=1}^{n}\left(\dfrac{\vartheta(t_i) - \vartheta_i}{\vartheta_i}\right)^2$ (30)
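The three criteria of Eqs 28–30 can be sketched directly in a few lines (function names ours; `pred` holds the MVF values $\vartheta(t_i)$ and `actual` the observed counts $\vartheta_i$):

```python
import numpy as np

def mse(pred, actual):
    """Eq 28: mean of the squared differences."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.mean((actual - pred) ** 2))

def prr(pred, actual):
    """Eq 29: squared relative error with respect to the model estimate."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.sum(((pred - actual) / pred) ** 2))

def pp(pred, actual):
    """Eq 30: squared relative error with respect to the actual data."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.sum(((pred - actual) / actual) ** 2))
```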
The performance (MSE, PRR, and PP) of LSE and the four meta-heuristic techniques on the SRGMs can be seen in the tables below.
6.3 Model wise comparison
Because the result of a single run may be misleading due to the stochastic character of meta-heuristics, each algorithm is run 30 times, and the best results are gathered and reported in Tables 3–6 for the GO, ISS, PNZ, and PZM models, respectively.
From Tables 3–6, we can see that the PZM model performs better for DS-1 and DS-3, while for DS-2 the PNZ model performs better. We also see that all the considered meta-heuristic algorithms perform as well as, or close to, the LSE method. From the analysis of the goodness-of-fit tables, we observe that RGA and GWO produce almost identical values. After the model parameter estimation, we plot the actual and predicted cumulative numbers of faults; all the considered models are plotted in Figs 2–5 for all three datasets.
(a) GO model DS-1, (b) GO model DS-2, (c) GO model DS-3.
(a) ISS model DS-1, (b) ISS model DS-2, (c) ISS model DS-3.
(a) PNZ model DS-1, (b) PNZ model DS-2, (c) PNZ model DS-3.
(a) PZM model DS-1, (b) PZM model DS-2, (c) PZM model DS-3.
6.4 Meta-heuristic algorithm evaluation criteria
6.4.1 Convergence wise comparison.
Over the iterations, the fitness values of the search agents improve, as shown by the patterns in Figs 6–9. This demonstrates that the algorithms under consideration are capable of improving the quality of the initial random solutions for a given optimization problem. As shown in these figures, the search agents first explore the promising regions of the search space before settling on the optimal one. The convergence behaviour of the algorithms was examined and validated; it can be read from the average fitness curves, which depict the best solution found so far during optimization. A downward trend is visible in the convergence curves of RGA and GWO for all of the SRGMs and datasets studied. For the GO model on DS-1, RGA and GWO converge in fewer than 10 iterations. For DS-2, RGA, GWO, and GSA converge in fewer than 20 iterations. For DS-3, RGA takes 10, GWO takes 25, and SCA takes around 40 iterations to reach the optimal solution. Similarly, for the remaining models, RGA converges fastest and GWO converges in approximately as many iterations. The convergence plots of all the models on all the datasets are presented in Figs 6–9.
(a) GO model on DS-1, (b) GO model DS-2, (c) GO model DS-3.
(a) ISS model DS-1, (b) ISS model DS-2, (c) ISS model DS-3.
(a) PNZ model DS-1, (b) PNZ model DS-2, (c) PNZ model DS-3.
(a) PZM model DS-1, (b) PZM model DS-2, (c) PZM model DS-3.
Comparing RGA and GWO, we conclude that RGA performs better and converges in a smaller number of iterations for most of the datasets.
6.4.2 Comparison based on R2 distribution.
Distribution characteristics analysis. Several approaches have been employed to assess the performance of the algorithms. Here, this work examines the distribution features of the employed methods on the testing sets. Because the optimization outcome of a stochastic method is not necessarily the same at every run, each method is run 30 times on every testing set to assess its optimization strength and reliability. Figs 10–13 show the observed R2 value distributions of all four soft-computing techniques on the testing sets of DS1–DS3. The cumulative value of R2 is larger than 0.95 for all the SRGMs on all the datasets. From Figs 10–13, we can see that RGA and GWO perform better in the third interval, i.e., the interval of highest R2 values. Although GSA and SCA are quite reasonable, with an absolute deviation of only 0.009, the R2 distribution performance of RGA and GWO is considerably better than that of the other two examined algorithms.
(a) GO model, DS-1, (b) GO model, DS-2, (c) GO model, DS-3.
(a) ISS model, DS-1, (b) ISS model, DS-2, (c) ISS model, DS-3.
(a) PNZ model, DS-1, (b) PNZ model, DS-2, (c) PNZ model, DS-3.
(a) PZM model, DS-1, (b) PZM model, DS-2, (c) PZM model, DS-3.
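The R2 statistic underlying this distribution analysis can be computed per run as below (a standard definition of the coefficient of determination, not code from the paper); repeating it over the 30 runs of a stochastic algorithm yields the distributions plotted in Figs 10–13:

```python
import numpy as np

def r_squared(pred, actual):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    ss_res = float(np.sum((actual - pred) ** 2))            # residual sum of squares
    ss_tot = float(np.sum((actual - actual.mean()) ** 2))   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

Binning the 30 per-run values, e.g. `np.histogram([r_squared(run, y) for run in runs], bins=3)`, gives the three-interval counts compared across algorithms in the figures.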
7 Conclusion and future work
This work analyzes various meta-heuristic algorithms used in estimating the parameters of SRGMs. To our knowledge, it is the first attempt to use several meta-heuristic approaches to estimate the parameters of numerous SRGMs. A considered model fits better when the SRGM parameters are optimized accurately. The major inferences of this work are as follows. The approach does not require any hypotheses or constraints on the software failure data for parameter estimation and instead relies solely on the data's attributes, so its implementation is simple. The proposed approach compares the meta-heuristic methods for parameter estimation by various criteria. The experimental results show that RGA and GWO perform better on a variety of real-world failure data, and they suggest that RGA has excellent parameter estimation potential.
In parameter estimation, the traditional state-of-the-art methods fail to find feasible solutions on specific SRGMs or datasets, whereas the suggested meta-heuristic algorithms perform far better in this context. The algorithms' findings are comparable to traditional methods for the four models, namely the Goel-Okumoto, Inflection S-Shaped, PNZ, and PZM models. When the optimized parameter values are computed through the meta-heuristic methods, the considered approach produces significantly superior results in some cases. The LSE method's outcomes are close to those of RGA, GSA, SCA, and GWO. RGA could locate the optimal solution more accurately and faster than GWO and the other approaches. In this paper, various estimation techniques were compared and the best technique for parameter estimation was selected.
In future work, the typically utilized approaches, such as MLE, can be complemented by wavelet shrinkage estimation: (1) wavelet shrinkage estimation allows us to carry out time-series analysis under accuracy and high-speed prerequisites, and (2) it is classified as a non-parametric estimation that does not assume a parametric form of the software intensity function.
References
- 1. Pham H. System software reliability. Springer Science & Business Media; 2007.
- 2. Pradhan V, Kumar A, Dhar J. Emerging trends and future directions in software reliability growth modeling. Engineering reliability and risk assessment. 2023; p. 131–144.
- 3. Ivanov V, Reznik A, Succi G. Comparing the reliability of software systems: A case study on mobile operating systems. Information Sciences. 2018;423:398–411.
- 4. Lyu MR. Handbook of software reliability engineering. McGraw Hill; 1996.
- 5. Wood A. Predicting software reliability. Computer. 1996;29(11):69–77.
- 6. Chang YC, Liu CT. A generalized JM model with applications to imperfect debugging in software reliability. Applied Mathematical Modelling. 2009;33(9):3578–3588.
- 7. Chatterjee S, Shukla A. Modeling and analysis of software fault detection and correction process through weibull-type fault reduction factor, change point and imperfect debugging. Arabian Journal for Science and Engineering. 2016;41(12):5009–5025.
- 8. Pachauri B, Kumar A, Dhar J. Software reliability growth modeling with dynamic faults and release time optimization using GA and MAUT. Applied Mathematics and Computation. 2014;242:500–509.
- 9. Pradhan V, Dhar J, Kumar A. Testing-Effort based NHPP Software Reliability Growth Model with Change-point Approach. Journal of Information Science & Engineering. 2022;38(2).
- 10. Wang J, Zhang C, Yang J. Software reliability model of open source software based on the decreasing trend of fault introduction. Plos one. 2022;17(5):e0267171. pmid:35500002
- 11. Minamino Y, Inoue S, Yamada S. NHPP-based change-point modeling for software reliability assessment and its application to software development management. Annals of Operations Research. 2016;244(1):85–101.
- 12. Kim YS, Song KY, Pham H, Chang IH. A software reliability model with dependent failure and optimal release time. Symmetry. 2022;14(2):343.
- 13. Pradhan V, Kumar A, Dhar J. Modelling software reliability growth through generalized inflection S-shaped fault reduction factor and optimal release time. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability. 2021; p. 1748006X211033713.
- 14. Yang J, Liu Y, Xie M, Zhao M. Modeling and analysis of reliability of multi-release open source software incorporating both fault detection and correction processes. Journal of Systems and Software. 2016;115:102–110.
- 15. Wang J, Wu Z, Shu Y, Zhang Z. An optimized method for software reliability model based on nonhomogeneous Poisson process. Applied Mathematical Modelling. 2016;40(13-14):6324–6339.
- 16. Pradhan V, Dhar J, Kumar A. Software Reliability Models and Multi-attribute Utility Function Based Strategic Decision for Release Time Optimization. In: Predictive Analytics in System Reliability. Springer; 2022. p. 175–190.
- 17. Felix EA, Lee SP. Predicting the number of defects in a new software version. PloS one. 2020;15(3):e0229131. pmid:32187181
- 18. Goel AL, Okumoto K. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Transactions on Reliability. 1979;28(3):206–211.
- 19. Musa JD. Software reliability engineering: More reliable software, faster and cheaper. 2nd ed. Authorhouse; 2004.
- 20. Pachauri B, Kumar A, Dhar J. Modeling optimal release policy under fuzzy paradigm in imperfect debugging environment. Information and Software Technology. 2013;55(11):1974–1980.
- 21. Pradhan V, Kumar A, Dhar J. Enhanced growth model of software reliability with generalized inflection S-shaped testing-effort function. Journal of Interdisciplinary Mathematics. 2022;25(1):137–153.
- 22. Li Q, Pham H. A testing-coverage software reliability model considering fault removal efficiency and error generation. PloS one. 2017;12(7):e0181524. pmid:28750091
- 23. Pradhan V, Dhar J, Kumar A. Testing coverage-based software reliability growth model considering uncertainty of operating environment. Systems Engineering. 2023;.
- 24. Song KY, Chang IH, Pham H. A testing coverage model based on NHPP software reliability considering the software operating environment and the sensitivity analysis. Mathematics. 2019;7(5):450.
- 25. Li Q, Pham H. A generalized software reliability growth model with consideration of the uncertainty of operating environments. IEEE Access. 2019;7:84253–84267.
- 26. Zhu M, Pham H. A generalized multiple environmental factors software reliability model with stochastic fault detection process. Annals of Operations Research. 2020; p. 1–22.
- 27. Kiran NR, Ravi V. Software reliability prediction using wavelet neural networks. In: International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007). vol. 1. IEEE; 2007. p. 195–199.
- 28. Lo D, Cheng H, Han J, Khoo SC, Sun C. Classification of software behaviors for failure detection: a discriminative pattern mining approach. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining; 2009. p. 557–566.
- 29. Mohanty R, Ravi V, Patra MR. Hybrid intelligent systems for predicting software reliability. Applied Soft Computing. 2013;13(1):189–200.
- 30. Su YS, Huang CY. Neural-network-based approaches for software reliability estimation using dynamic weighted combinational models. Journal of Systems and Software. 2007;80(4):606–615.
- 31. Arora D, Baghel AS. Application of genetic algorithm and particle swarm optimization in software testing. IOSR J Comput Eng. 2015;17(1):75–78.
- 32. Qaraad M, Amjad S, Hussein NK, Farag M, Mirjalili S, Elhosseini MA. Quadratic interpolation and a new local search approach to improve particle swarm optimization: Solar photovoltaic parameter estimation. Expert Systems with Applications. 2024;236:121417.
- 33. Navarro MA, Oliva D, Ramos-Michel A, Haro EH. An analysis on the performance of metaheuristic algorithms for the estimation of parameters in solar cell models. Energy Conversion and Management. 2023;276:116523.
- 34. Sharma A, Sharma A, Chowdary V, Srivastava A, Joshi P. Cuckoo search algorithm: A review of recent variants and engineering applications. Metaheuristic and Evolutionary Computation: Algorithms and Applications. 2021; p. 177–194.
- 35. Sharma A, Pachauri R, Sharma A, Raj N. Extraction of the solar PV module parameters using chicken swarm optimization technique. In: 2019 Women Institute of Technology Conference on Electrical and Computer Engineering (WITCON ECE). IEEE; 2019. p. 45–48.
- 36. Sharma A, Sharma A, Averbukh M, Jately V, Rajput S, Azzopardi B, et al. Performance investigation of state-of-the-art metaheuristic techniques for parameter extraction of solar cells/module. Scientific reports. 2023;13(1):11134. pmid:37429876
- 37. Seyyedabbasi A, Kiani F. I-GWO and Ex-GWO: improved algorithms of the Grey Wolf Optimizer to solve global optimization problems. Engineering with Computers. 2021;37(1):509–532.
- 38. Wang JS, Li SX. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci Rep 9: 1–21; 2019. pmid:31073211
- 39. Minohara T, Tohma Y. Parameter estimation of hyper-geometric distribution software reliability growth model by genetic algorithms. In: Proceedings of Sixth International Symposium on Software Reliability Engineering. ISSRE'95. IEEE; 1995. p. 324–329.
- 40. Amin A, Grunske L, Colman A. An approach to software reliability prediction based on time series modeling. Journal of Systems and Software. 2013;86(7):1923–1932.
- 41. Aljahdali SH, El-Telbany ME. Software reliability prediction using multi-objective genetic algorithm. In: 2009 IEEE/ACS International Conference on Computer Systems and Applications. IEEE; 2009. p. 293–300.
- 42. Hsu CJ, Huang CY. A study on the applicability of modified genetic algorithms for the parameter estimation of software reliability modeling. In: 2010 IEEE 34th Annual Computer Software and Applications Conference. IEEE; 2010. p. 531–540.
- 43. Zheng C, Liu X, Huang S, Yao Y. A parameter estimation method for software reliability models. Procedia engineering. 2011;15:3477–3481.
- 44. Malhotra R, Negi A. Reliability modeling using particle swarm optimization. International Journal of System Assurance Engineering and Management. 2013;4(3):275–283.
- 45. Kim T, Lee K, Baik J. An effective approach to estimating the parameters of software reliability growth models using a real-valued genetic algorithm. Journal of Systems and Software. 2015;102:134–144.
- 46. Jin C, Jin SW. Parameter optimization of software reliability growth model with S-shaped testing-effort function using improved swarm intelligent optimization. Applied Soft Computing. 2016;40:283–291.
- 47. Choudhary A, Baghel AS, Sangwan OP. An efficient parameter estimation of software reliability growth models using gravitational search algorithm. International Journal of System Assurance Engineering and Management. 2017;8(1):79–88.
- 48. Zhu M, Pham H. A multi-release software reliability modeling for open source software incorporating dependent fault detection process. Annals of Operations Research. 2018;269(1-2):773–790.
- 49. Yamada S, Ohba M, Osaki S. S-shaped software reliability growth models and their applications. IEEE Transactions on Reliability. 1984;33(4):289–292.
- 50. Pham H, Nordmann L, Zhang Z. A general imperfect-software-debugging model with S-shaped fault-detection rate. IEEE Transactions on Reliability. 1999;48(2):169–175.
- 51. Schneidewind NF, Keller TW. Applying reliability models to the space shuttle. IEEE software. 1992;9(4):28–33.
- 52. Ahmad N, Khan MG, Quadri S, Kumar M. Modelling and analysis of software reliability with Burr type X testing-effort and release-time determination. Journal of Modelling in Management. 2009;.
- 53. Srinivasan V, Mason CH. Nonlinear least squares estimation of new product diffusion models. Marketing science. 1986;5(2):169–178.
- 54. Marquardt DW. An algorithm for least-squares estimation of nonlinear parameters. Journal of the society for Industrial and Applied Mathematics. 1963;11(2):431–441.
- 55. Moré JJ. The Levenberg-Marquardt algorithm: implementation and theory. In: Numerical analysis. Springer; 1978. p. 105–116.
- 56. Russell S, Norvig P. A modern, agent-oriented approach to introductory artificial intelligence. Acm Sigart Bulletin. 1995;6(2):24–26.
- 57. Sangeeta Sitender. Comprehensive analysis of hybrid nature-inspired algorithms for software reliability analysis. Journal of Statistics and Management Systems. 2020;23(6):1037–1048.
- 58. Sharma K, Bala M, et al. An ecological space based hybrid swarm-evolutionary algorithm for software reliability model parameter estimation. International Journal of System Assurance Engineering and Management. 2020;11(1):77–92.
- 59. Yazdani D, Meybodi M. A modified Gravitational Search Algorithm and its application. In: 2015 7th Conference on Information and Knowledge Technology (IKT). IEEE; 2015. p. 1–6.
- 60. Jain A, Singh PK, Dhar J. Multi-objective item evaluation for diverse as well as novel item recommendations. Expert Systems with Applications. 2020;139:112857.
- 61. Žilinskas A, Žilinskas J. A hybrid global optimization algorithm for non-linear least squares regression. Journal of Global Optimization. 2013;56:265–277.
- 62. Žilinskas A, Žilinskas J. Interval arithmetic based optimization in nonlinear regression. Informatica. 2010;21(1):149–158.
- 63. Yang S, Man T, Xu J, Zeng F, Li K. RGA: A lightweight and effective regeneration genetic algorithm for coverage-oriented software test data generation. Information and Software Technology. 2016;76:19–30.
- 64. Rashedi E, Nezamabadi-Pour H, Saryazdi S. GSA: a gravitational search algorithm. Information sciences. 2009;179(13):2232–2248.
- 65. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-based systems. 2016;96:120–133.
- 66. Zhong M, Wen J, Ma J, Cui H, Zhang Q, Parizi MK. A hierarchical multi-leadership sine cosine algorithm to dissolving global optimization and data classification: The COVID-19 case study. Computers in Biology and Medicine. 2023;164:107212. pmid:37478712
- 67. Tawhid MA, Savsani V. Multi-objective sine-cosine algorithm (MO-SCA) for multi-objective engineering design problems. Neural Computing and Applications. 2019;31(2):915–929.
- 68. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Advances in engineering software. 2014;69:46–61.
- 69. Kiani F, Seyyedabbasi A, Mahouti P. Optimal characterization of a microwave transistor using grey wolf algorithms. Analog Integrated Circuits and Signal Processing. 2021;109:599–609.
- 70. Wu Y, Hu Q, Xie M, Ng SH. Modeling and analysis of software fault detection and correction process by considering time dependency. IEEE Transactions on Reliability. 2007;56(4):629–642.
- 71. Musa JD, Iannino A, Okumoto K. Software reliability: Measurement, prediction, application. McGraw-Hill, New York; 1987.
- 72. Musa JD. Software reliability data. Technical report, Rome Air Development Center, New York; 1979.