Advanced dwarf mongoose optimization for solving CEC 2011 and CEC 2017 benchmark problems

This paper proposes an improvement to the dwarf mongoose optimization (DMO) algorithm called the advanced dwarf mongoose optimization (ADMO) algorithm. The improvement aims to address the low convergence rate of the DMO, which arises when the initial solutions are close to the global optimum: the subsequent value of the alpha must be small for the DMO to converge towards a better solution. The proposed improvement incorporates other social behaviors of the dwarf mongoose, namely predation and mound protection and reproductive and group-splitting behavior, to enhance the exploration and exploitation ability of the DMO. The ADMO also modifies the lifestyle of the alpha and subordinate group and the foraging and seminomadic behavior of the DMO. The proposed ADMO was used to solve the Congress on Evolutionary Computation (CEC) 2011 and 2017 benchmark functions, consisting of 30 classical and hybrid composite problems and 22 real-world optimization problems. The performance of the ADMO, assessed using different performance metrics and statistical analyses, is compared with the DMO and seven other existing algorithms. In most cases, the results show that the solutions achieved by the ADMO are better than those obtained by the existing algorithms.


Introduction
Optimization occurs naturally in many human endeavors, and most human decisions go through an optimization process. Optimization is deeply rooted in many branches of science, for example, a radiation reactor system with minimal emission in physics, maximizing profit in business, survival of the fittest in ecology, and production line design in a manufacturing system that satisfies a set of constraints [1]. There are two established approaches to solving optimization problems: the mathematical and the metaheuristic approach. Each comes with specific drawbacks; for instance, the mathematical methods are gradient-dependent, which implies that the initial starting position of the population plays a significant role in their performance [2]. The drawbacks of the two approaches, coupled with the complex nature of global optimization problems and the ease of mimicking nature's way of solving problems, have significantly contributed to the surge in the rate at which researchers propose nature-inspired algorithms [3]. Many aspects of nature have been a source of inspiration for developing metaheuristic algorithms. Over the years, many optimization researchers have successfully used different natural phenomena as sources of inspiration to develop metaheuristic algorithms [2]. For instance, the genetic algorithm's (GA) source of inspiration is natural selection in the theory of evolution [4]. The intelligent way birds flock together inspired the design of particle swarm optimization (PSO) [5]. Generally, problems in various domains, ranging from the traveling salesman problem [6] to optimal control [7] and many more, have been solved using nature-inspired metaheuristic algorithms. The research community believes the success of nature-inspired metaheuristic algorithms is attributable to imitating the best ways nature solves problems.
Some authors have criticized the over-reliance on the metaphor-based paradigm of nature-inspired metaheuristic algorithms [8]; however, there is consensus on the many successes recorded by these algorithms in finding solutions to complex benchmark optimization problems [9] and real-world problems in the engineering domain [10]. Like all real-world optimization problems, almost all engineering problems come with several nonlinear and complex constraints depending on the design criteria and safety rules. The optimization process of every nature-inspired metaheuristic algorithm consists of steps that imitate the problem-solving process of the natural phenomenon it mimics.
No single algorithm solves all optimization problems optimally; each can solve only some problems optimally and others suboptimally. Hence the argument for developing new or improved high-performance algorithms that solve specific problems. Many developers of novel metaheuristic algorithms have also cited the no-free-lunch theorem as a basis for regularly developing new algorithms, claiming that the proposed algorithms find better solutions for optimization problems. Newly proposed algorithms also commonly claim to balance exploration and exploitation to better search the problem space [11]. The claim by some metaheuristic algorithms of drawing inspiration from nature is debatable, and so are the claims of novelty and strong optimization capability.
A list of some newly proposed metaheuristic algorithms is presented in Table 1. Interested readers are referred to [3, 12-15] for a more detailed list of metaheuristic algorithms proposed within the past five decades. Also, a detailed survey of metaheuristic algorithms that outlines their components and concepts, with the intention of analyzing their similarities and differences, is given in [16, 17]. Interestingly, some of the inspirations claimed in the articles are drawn from human inventions rather than naturally occurring phenomena. For instance, the social network search (SNS) draws inspiration from social network users' efforts to gain more popularity, a human invention rather than a naturally occurring phenomenon.
Researchers have also hybridized existing metaheuristic algorithms instead of developing entirely new ones. Many studies exist that have hybridized one metaheuristic algorithm with another. Some examples include the firefly algorithm hybridized with chaos theory [18], the hybridization of the ant colony strategy and the harmony search scheme (HPSACO) [19], a particle swarm optimizer hybridized with a variant of cuckoo search called the island-based cuckoo search with highly disruptive polynomial mutation (iCSPM) [20], the hybridization of self-assembly and particle swarm optimization (SAPSO) [21], and fuzzy controllers hybridized with the slime mould algorithm (SMAF) [22].
Table 1. Some nature-inspired metaheuristic algorithms with their source of inspiration (2019-2021).

Algorithm | Inspiration | Reference
Group teaching optimization algorithm | Group teaching mechanism | [23]
Black widow optimization algorithm | Unique mating behavior of black widow spiders | [24]
Chaos game optimization | Some principles of chaos theory | [25]
Adolescent identity search algorithm (AISA) | Process of identity development/search of adolescents | [26]
Atomic orbital search | Basic principles of quantum mechanics | [27]
Jellyfish search optimizer | Behavior of jellyfish in the ocean | [28]
Quantum dolphin swarm algorithm | Dolphin swarm algorithm | [29]
Arithmetic optimization algorithm | Arithmetic operators | [30]
Advanced arithmetic optimization algorithm | Advanced arithmetic operators | [31]
Ebola optimization search algorithm (EOSA) | Ebola virus | [32,33]
Golden ratio optimization method (GROM) | Growth in nature following the golden ratio of the Fibonacci series | [34]
Bald eagle search optimization algorithm | Bald eagle | [35]
Black hole mechanics optimization | Mechanics of black holes | [36]
Capuchin search algorithm | Capuchin monkeys | [37]
Tiki-taka algorithm | Football playing style | [38]
Cooperation search algorithm | Team cooperation behaviors in modern enterprises | [39]
Aquila optimizer | Aquila bird | [40]
Sailfish optimizer | Sailfish group hunting | [41]
Social network search | Social network users' efforts to gain more popularity | [42]
Sine-cosine and spotted hyena-based chimp optimization algorithm (SSC) | Hybrid of sine-cosine functions and the attacking strategy of the spotted hyena optimizer (SHO) | [43]
Archimedes optimization algorithm | Archimedes' principle | [44]
Battle royale optimization algorithm | A genre of digital games known as "battle royale" | [45]
Thermal exchange metaheuristic optimization algorithm | Newton's law of cooling | [46]
African vultures optimization algorithm | African vultures | [47]
Red colobuses monkey | Red colobuses monkey | [48]
Remora optimization algorithm | Parasitic behavior of remora | [49]
Red deer algorithm (RDA) | Red deer | [50]
Pelican optimization algorithm | Pelican | [51]
Reptile optimization algorithm | Hunting crocodiles | [52]
Squirrel search algorithm | Squirrels | [53]
Dwarf mongoose optimization | Dwarf mongoose | [54]
Human felicity algorithm | Quest for the evolution of human society | [55]
Giraffe kicking optimization | Giraffe | [56]
Competitive search | Competition | [57]
Criminal search optimization algorithm | Police strategies | [58]
Horse herding optimization algorithm | Horse herd | [59]

Further to the novel research outcomes resulting from the metaheuristic method and its associated hybrid or variant algorithms, the area of applicability presents more research prospects in the field. Optimization problems in engineering and machine learning are currently being researched, with the former having received considerable research effort. Machine learning, specifically deep learning, has demonstrated interesting performance in image analysis [61-64] but still suffers from architectural composition resulting from combinatorial problems, which require an optimization process as a solution. Efforts to address these using heuristic methods, such as in [65, 66], have further revealed the complexity of the optimization problem. To remedy this, studies [32, 67-70] have applied metaheuristic algorithms, hybrids of metaheuristic algorithms, or high-performing variants.
In [32], the authors applied a metaheuristic algorithm to support the selection of an optimal combination of convolutional neural network (CNN) hyperparameters to address classification problems in digital mammography and chest x-ray. Similarly, metaheuristic algorithms were employed to address the challenge of network weight optimization in [70]. The authors in [69] have also adapted metaheuristic algorithms to the evolution of neural architectures, a combinatorial problem consisting of finding the best neural network components to obtain the best-performing architecture for a particular classification problem. In [68], the problem of feature selection for reducing classifier bottlenecks was addressed using the GA metaheuristic method. The study in [67] investigated the performance of a chaotic-theory-enabled FA metaheuristic in improving the dropout regularization of deep learning models. Several other studies have investigated the use of hybrids of metaheuristic algorithms in solving object detection, segmentation, classification, and image generation and reconstruction problems. However, new variants and high-performing hybrids of these algorithms are still being researched, revealing that improving performance on optimization problems in machine learning is opening up new research frontiers. As a result, the motivation for deepening the optimization process of existing optimization algorithms through designing variants and hybrids is furthering research in metaheuristic algorithms.
Although the dwarf mongoose optimization (DMO) algorithm [54] is inspired by the foraging and social-behavioral structure of the dwarf mongoose, modeling the unique compensatory behavioral adaptations of the dwarf mongoose in DMO has led to a limitation of slow convergence due to the role the value of the alpha female plays in the updating process. Therefore, in this study, an improvement on the DMO is presented that mimics four different aspects of the life of the dwarf mongoose, eliminating the limitation posed by the value of the alpha. Four social structural adaptations are modeled for the optimization process: the alpha and subordinate group, the foraging and seminomadic behavior, the predation and mound protection, and the reproductive and group-splitting behavior. The study identified some major stages of activities observed in the group, namely predation, territory circuiting, reproduction, group splitting, and foraging. These processes are repeated until the termination criteria are met. The proposed improved algorithm is used to solve the CEC 2011 and 2017 benchmark functions, consisting of 30 classical and hybrid composite problems and 22 real-world optimization problems.
Considering that the dwarf mongoose has been the source of inspiration for the DMO, all the natural phenomena explaining its existence and survival present a promising basis for an improved optimization process. The research question now is: considering the competitive performance demonstrated by the DMO [54], which models only selected phenomena in the natural life of the dwarf mongoose, could a better optimization process and performance be achieved by modeling all fundamental and existential phenomena? Motivated by this research question, a detailed study of the literature on dwarf mongooses was carried out, and all fundamental concepts were extracted for consideration. Critical processes and stages of the dwarf mongoose's life were found, which motivated the optimization process and mathematical models resulting in the proposed advanced dwarf mongoose optimization (ADMO) algorithm presented in this study. The following are the technical contributions of this study: i. A new optimization process model is designed with four stages: predation, foraging and semi-nomadism, reproduction, and group splitting.
ii. Mathematical models were developed to model each of the four stages described in (i).
iii. The optimization process design in (i) and the models in (ii) were applied to design a new variant of the DMO algorithm, namely the ADMO.
iv. Exhaustive experimentation was carried out using the CEC 2017 and CEC 2011 benchmark optimization functions for a comparative analysis of ADMO against the base algorithm and other methods.
The rest of the paper is organized as follows: In Section 2, the dwarf mongoose optimization algorithm (DMO) is presented. Section 3 presents the advanced dwarf mongoose optimization algorithm (ADMO). The experimental setup, results, and detailed discussion are presented in Section 4. Finally, the conclusion and future work are presented in Section 5.

The dwarf mongoose optimization algorithm
This section presents an overview of the DMO, including its inspiration and optimization processes. The section is divided into two subsections to enable a smooth presentation of the various aspects of the DMO. The source of inspiration and the basic behavior of the dwarf mongoose used for the DMO are discussed in subsection one. The implementation of the model is discussed in subsection two.

Inspiration
The DMO drew its inspiration from the dwarf mongoose, also called Helogale. Dwarf mongooses are found in areas with abundant termite mounds, rocks, and hollow trees used for hiding and protection. Africa's semidesert and savannah bush are their typical habitats. They are the smallest known African carnivore and live in a family group that is a matriarchy [71, 72]. The social order of the mongoose family is such that the females and the young are ranked higher than the males and the juveniles, respectively. The division of labor and altruism within these groups is the highest recorded for a mammal, and each mongoose serves as a guard or babysitter and attacks predators or conspecific intruders [73-76].
The dwarf mongoose has developed specific behaviors and adaptations to survive in its natural habitat. These adaptations and behaviors relate to predation avoidance and nutrition. Dwarf mongooses are not known to have a killer bite but rather a skull-crushing bite, using the prey's eye for orientation. Also, no cooperative killing of large prey has been observed in the dwarf mongoose family. These adaptations restrict their prey's size and significantly affect the mongooses' social behavior and ecological adaptations for achieving individual and family nutrition [76]. The DMO is modeled after two compensatory behavioral adaptations of the mongoose, namely: (i) prey size, space utilization, and group size; and (ii) food provisioning.

The DMO model
The DMO [54] algorithm simulates the compensatory adaptation of the dwarf mongoose as they forage. The dwarf mongoose population is divided into the alpha group, scouts, and babysitters. Each group contributes to the compensatory behavioral adaptation, which leads to a seminomadic way of life in a territory (problem space) large enough to support the entire group. The scouting for new mounds and foraging are done simultaneously by the same group of mongooses in DMO. The optimization procedures of the DMO algorithm are represented in three phases, as shown in Fig 1. The red dot signifies the alpha leading the family (blue dots) to find a food source, leaving behind the babysitters with the young (exploration). Once the food source is found, the entire group feeds extensively in the area (exploitation). The family returns intermittently to exchange babysitters and repeats the cycle.
The DMO starts by randomly initializing the candidate population and computing the fitness of each member. The alpha female (α) is selected based on Eq 1.
To update a candidate's food position, the DMO uses the expression given in Eq 2.
where phi is a uniformly distributed random number in [-1,1], and the peep is the alpha female's vocalization that helps keep the family bound together on the same path. The sleeping mound (sm) is updated after every iteration using Eq 3.
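As an illustrative sketch of the food-position update just described (the paper's experiments use MATLAB; here a plain Python sketch is given, and treating `peep` as a simple scalar parameter is an assumption):

```python
import random

def update_food_position(x, peep):
    """Candidate food-position update in the spirit of Eq 2:
    each coordinate moves by phi * peep, where phi ~ U(-1, 1)
    models the random step and peep the alpha female's vocalization.

    NOTE: treating peep as a plain scalar is an assumption of this sketch."""
    phi = random.uniform(-1.0, 1.0)
    return [xi + phi * peep for xi in x]

new_x = update_food_position([0.5, -0.2, 1.0], peep=2.0)
```

Because phi is bounded by [-1, 1], each coordinate moves by at most `peep` per update, which is how the vocalization parameter bounds the step size.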
The average value of the sleeping mound sm is computed by Eq 4.
The scout group is simulated using Eq 5. The scouts must look for the new sleeping mound because the dwarf mongooses are seminomadic and never return to the previous sleeping mound. This behavior activates the exploration, and DMO models the scouting and foraging to be carried out simultaneously [76].
where rand is a random number in [0,1], CF is the collective-volitive movement control parameter, and the vector M determines the movement of the mongoose to the new sleeping mound.
The pseudocode for the algorithm is given in algorithm listing 1 (Fig 9, S1 File).

Advanced dwarf mongoose optimization algorithm model
This section presents the advanced dwarf mongoose optimization algorithm (ADMO). The ADMO is proposed to solve the low convergence rate limitation of the DMO, which arises when the initial solutions are close to the global optimum: the subsequent value of the alpha must be small for the DMO to converge towards a better solution. The proposed improvement incorporates other social behaviors of the dwarf mongoose, namely predation and mound protection and reproductive and group-splitting behavior, to enhance the exploration and exploitation ability of the DMO. The ADMO also modifies the lifestyle of the alpha and subordinate group and the foraging and seminomadic behavior of the DMO. The optimization procedures of the proposed ADMO algorithm are represented in three phases, as shown in Fig 2. The model comprises five major stages in the dwarf mongoose mounds: territory circuit, predation, foraging, reproduction, and group splitting. The search space of the proposed algorithm is a population of dwarf mongoose individuals initialized using Eq 6. The search for new areas in the search space is achieved using the exploration mechanism of the algorithm. The criterion leading the optimization process into the exploration phase is a comparison of the foraging distance covered against the territory size. When the foraging distance exceeds the given territory size, the algorithm transitions to the exploration phase; otherwise, the intensification phase is maintained. Obtaining the best solution depends on a sustained high rate of predator avoidance. Predation often weakens the quality of individuals in the search space, while avoidance of predation and increased foraging outside a territory produce high-quality individuals in the search space.

Population initialization
The ADMO population is initialized with candidate dwarf mongooses (X), as shown in Eq (6). The population is generated stochastically between the given problem's upper bound (U) and lower bound (L).
where n is the population size for an arbitrary dwarf mongoose mound, each x_i is initialized using Eq (7), and the positions of all individuals in X in the mound are represented by Eq (8).
X = [x_{i,j}], i = 1, ..., n, j = 1, ..., d, with row i given by (x_{i,1}, x_{i,2}, ..., x_{i,d-1}, x_{i,d})  (8)

where x_{i,j} denotes the position of the j-th dimension of the i-th population member, n denotes the population size, and d is the dimension of the problem computed using dmp(X).
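A minimal sketch of this stochastic initialization between the bounds L and U (an illustrative Python reading of Eqs (6)-(8), not the authors' implementation; per-dimension bound lists are assumed):

```python
import random

def init_population(n, d, L, U):
    """Generate an n-by-d population matrix X with each entry drawn
    uniformly between the lower bound L[j] and upper bound U[j]
    of its dimension, as described for Eqs (6)-(8)."""
    return [[L[j] + random.random() * (U[j] - L[j]) for j in range(d)]
            for _ in range(n)]

X = init_population(n=6, d=3, L=[-5.0] * 3, U=[5.0] * 3)
```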

Alpha and subordinate groups
Once the population is initialized, gender-based compositional differences (M1 and M2) for the male and female alpha members and an alpha vector (AV) representing alpha characteristics are applied to obtain the alpha male (x_alpham) and alpha female (x_alphaf). The best individual in the population, x_best, is used for benchmarking members of the alpha group, so that if we randomly select x_i and x_j from the population to represent the male and female, respectively, and mutate them towards x_best, then Eqs (9) and (10) hold for the alpha male and alpha female.
where M1 and M2 represent (1 + rand(0,1)) and (0.5 + rand(0,1))(0.5 + rand(0,1)), respectively, and AV is computed as AV = x_best / 2. We now have n - 2 individuals to partition among the subordinate and juvenile groups. The subordinates usually form the largest set of individuals in a mound, followed by the juveniles. The number of individuals in the subordinate set S and the juvenile set J is computed as s = floor((n - 2)/3) and j = floor((n - 2)/4), respectively. Members are allocated by sorting X by individual fitness values; the first s individuals are allocated to S and the next j to J.
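The partition just described can be sketched as follows (an illustrative Python sketch; minimization is assumed, and the rule that the two best individuals fill the alpha slots is an assumption consistent with benchmarking against x_best):

```python
import math

def partition_groups(pop, fitness):
    """Allocate a mound into alpha, subordinate (S), and juvenile (J)
    groups: sort by fitness (minimization assumed), take the two best
    as the alpha slots, then the first s = floor((n - 2) / 3) as
    subordinates and the next j = floor((n - 2) / 4) as juveniles."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i])
    alphas = [pop[order[0]], pop[order[1]]]
    rest = [pop[i] for i in order[2:]]
    n = len(pop)
    s = math.floor((n - 2) / 3)
    j = math.floor((n - 2) / 4)
    return alphas, rest[:s], rest[s:s + j]
```

Note that s + j < n - 2 in general, so some individuals remain unassigned to either named set, mirroring the text's allocation of only the first s and next j sorted members.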

Foraging and semi-nomadism stage
The foraging and seminomadic nature of dwarf mongooses is motivated by the fact that food sources are scattered, requiring an extensive search by each individual to find sufficient food for itself. This foraging act often entails an intensive search over a long distance (fd), given in Eq (12), which will usually be greater than the territory size (ts) in Eq (13). The x_alphaf is known to lead the foraging party, hence its position dmp(x_alphaf) helps to determine whether fd ≥ ts. Cessation of foraging is influenced by the predation rate pr and the birth rate br, which lower energy output due to reduced energy input from nutrition; in that case, fd < ts. This is summarized in Eq (11), which computes the new state of any individual x_i in the group. Reduced space utilization leads to depleted food sources and hence reduced individual fitness. In addition, the lower the group size (gs), the higher fd, while ts is computed as the summation of the age (using the age function expressed in Eq 13) of the anal and cheek markings of all individuals in the group.
where pr and br represent the average predation and birth rates for a mound. The position of all individuals in the group is updated after every iteration using dmp(X) + 1 for each x_i.
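The resulting phase switch of the foraging stage can be sketched as a simple predicate (an illustrative Python sketch of the fd-versus-ts decision described above, not the full Eq (11) model):

```python
def foraging_phase(fd, ts):
    """Phase switch of the foraging stage: the algorithm explores when
    the foraging distance fd exceeds the territory size ts, and
    otherwise stays in the intensification (exploitation) phase."""
    return "exploration" if fd > ts else "intensification"
```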

Predation and mound protection
The dwarf mongoose population suffers from terrestrial and aerial attacks and wards off the attacks using a group approach. Terrestrial attacks are categorized into attacks from another group of dwarf mongooses and attacks from other animals. When another group of dwarf mongooses attacks (φ1), x_alpham leads all fights, followed by the subordinates (s = floor((n - 2)/3)), the juveniles (j = floor((n - 2)/4)), and x_alphaf. When animals other than dwarf mongooses attack the mound (φ2), only the subordinates and juveniles engage the enemy. Fatalities are often associated with the more aggressive juvenile group, thereby depleting their number in the mound. The group fitness gf in Eq (14) and the density of marking posts mp in Eq (15) determine whether the group defeats the predator or loses to it in a φ1 attack, while gf alone determines the outcome of a φ2 attack.
where fit represents the fitness value of the individual x i . We simulate the case of φ 1 , φ 2 , or neither of (φ 1 and φ 2 ) in every iteration, with the impact and update on the loss of a group member shown in Eq (18). We represent the loss effect using a tuple of current group members, group fitness, and the density of marking posts. We simplify Eq (18) by showing how cases 1 and 2 are computed using Eqs (16) and (17).
where a, b, k, and l denote the index of the first subordinate in the population, the index of the first juvenile in the population, the number of subordinates affected, and the number of juveniles affected during an attack, respectively. Note that k must satisfy 0 ≤ k ≤ s, where s = floor((n - 2)/3), and 0 ≤ l ≤ j, where j = floor((n - 2)/4).

Reproduction and group splitting
The x_alphaf is the only female that can raise young in a mound, rendering female subordinates and female juveniles effectively incapable of childbearing. All (100%) of the estrous cycles of x_alphaf lead to pregnancy, while only 62.5% of the estrous cycles of subordinates do. However, the young of the female subordinates are either killed at birth or unable to survive because they cannot suckle. As a result, an increase in group size gs is strictly the exclusive right of x_alphaf. Studies show that the average number of young per subordinate female is 0.66, compared with 9.66 for the alpha female. Consequently, the reproduction (addition) of young into the population is updated using Eq (19).
where alphayoung is computed as alphayoung = floor(n × 9.66 / 100). For group splitting, dwarf mongooses are contractors rather than expansionists: they preserve an economically defensible area to avoid depleting resources (e.g., food) for the group and to promote reproduction. Although group splitting is not frequent, when it does occur, the splinter group, often motivated and led by independent females, exits the main group's mound and moves to another territory to form a new group. This usually decreases gs and gf.
Because this group exit excludes x_alpham and x_alphaf, the subordinate (S) members usually constitute the independent female and her followers breaking away from the main group. We simulate the impact of group splitting on group size using Eq (20).
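The reproduction and splitting steps can be sketched as follows (an illustrative Python sketch: the alphayoung formula follows the text, while taking `splinter_size` as a free parameter is an assumption, since Eq (20) fixes it in the paper):

```python
import math

def add_alpha_young(n):
    """Number of young added per reproduction step, following
    alphayoung = floor(n * 9.66 / 100) as given above."""
    return math.floor(n * 9.66 / 100)

def split_group(subordinates, splinter_size):
    """Group splitting sketch: a splinter of subordinate members (the
    independent female and her followers) leaves the main group; the
    alphas stay behind. Returns (remaining, splinter)."""
    return subordinates[splinter_size:], subordinates[:splinter_size]
```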
Individual fitness depends on the cost and benefit relationship the individual partakes in within the group. Notably, the fitness values of dominant members are higher than those of the subordinates; hence there are two cost-benefit factor pairs, (CST1, BF1) for the dominant group and (CST2, BF2) for the subordinates. Meanwhile, since individual fitness sums up to the group fitness, we compute the fitness and secretion (anal and cheek) of an arbitrary subordinate individual using Eqs (21) and (22). The values for the vector pair (CST1, BF1) are obtained by duplicating the best and worst individuals among the dominant group and dividing both by the size of the dominant group.
Similarly, the values for the pair (CST2, BF2) are computed in the same manner, except that the best and worst individuals are selected from the subordinate group and the division uses the size of the subordinate group.
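A minimal sketch of this cost/benefit computation for either group (illustrative Python; minimization is assumed, and mapping the cost to the worst value and the benefit to the best value is an assumption of this sketch):

```python
def cost_benefit(group_fitness):
    """Compute the (cost, benefit) pair for a group as described above:
    the worst and best fitness values each divided by the group size.
    ASSUMPTION: minimization, with cost <- worst and benefit <- best."""
    size = len(group_fitness)
    return max(group_fitness) / size, min(group_fitness) / size
```

Applying the same function to the dominant and subordinate fitness lists yields (CST1, BF1) and (CST2, BF2), respectively.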

ADMO procedure
To achieve the algorithmic design of the proposed ADMO model, we first present the procedural description of the model to illustrate the flow of processes in the algorithm and flowchart. The optimization strategy obtained from the dwarf mongoose begins with population initialization, followed by the major activities observed in the group: predation, territory circuiting, reproduction, group splitting, and foraging. These processes are repeated until the termination criteria are met. A representation of the pseudocode for the algorithm is given below.
• Generate a defined number of dwarf mongoose individuals.
• Evaluate each dwarf mongoose in each subgroup using a domain-specific fitness function to obtain the current best individual. The current best is explicitly defined as the global best.
• Based on the fitness evaluation of all individuals, sort the population and assign individuals to the subgroups: alpha male, alpha female, subordinates, and juveniles.
• Initialize and set domain-specific control parameters such as the group fitness (gf) and the density of marking posts (mp).
• For a defined number of iterations, and while the termination condition is not satisfied, REPEAT:
  - Compute the territory-circuit stage model using Eqs (12) and (13).
  - Compute the foraging-phase model using Eq (11) to obtain the foraging distance (fd) and territory size (ts).
  - IF the foraging distance exceeds the settlement territory size, THEN the mongooses forage beyond the settlement space because food there is depleted; OTHERWISE, the mongooses still have food to sustain the group in the current settlement (mounds).
  - Derive the nature of predation by computing the values for φ1, φ2, or neither of the two.
  - IF φ1 holds, compute using the first condition of Eq (18); ELSE IF φ2 holds, compute using the second condition of Eq (18); OTHERWISE, compute using the third condition of Eq (18).
  - Generate a random number of young alpha species and add them to the population (reproduction/evolution phase).
  - Using Eq (20), split the group to obtain two dwarf mongoose groups existing independently.
  - Compute the current best fitness and update the global best.
• RETURN the best solution.
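The control flow above can be sketched in Python as follows (an illustrative sketch only: the stage models are stubbed by a simple bounded random perturbation with greedy acceptance, so this mirrors the outer loop and global-best bookkeeping, not the full ADMO stage equations):

```python
import random

def admo_sketch(obj, n, d, lower, upper, iters, seed=0):
    """Control-flow sketch of the ADMO loop from the pseudocode above.
    The territory-circuit, foraging, predation, reproduction, and
    splitting stages are stubbed by a bounded random perturbation with
    greedy acceptance (an assumption of this sketch)."""
    rng = random.Random(seed)
    # population initialization between the bounds
    X = [[lower + rng.random() * (upper - lower) for _ in range(d)]
         for _ in range(n)]
    best = min(X, key=obj)          # current best, defined as global best
    for _ in range(iters):
        for i, x in enumerate(X):
            cand = [min(upper, max(lower, xj + rng.uniform(-1.0, 1.0)))
                    for xj in x]
            if obj(cand) < obj(x):  # greedy stand-in for the stage updates
                X[i] = cand
        cur = min(X, key=obj)
        if obj(cur) < obj(best):    # update the global best
            best = cur
    return best

sol = admo_sketch(lambda x: sum(v * v for v in x),
                  n=10, d=3, lower=-5.0, upper=5.0, iters=50)
```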
In Fig 3, a detailed procedure representation is described, with all identified model stages highlighted. In addition, we indicate where the exploration and exploitation phases of the proposed ADMO are balanced.

Computational complexity
The computational complexity of the ADMO and eight other algorithms is measured as defined in [77], and the results are presented in Tables 2-4. The algorithms are implemented using MATLAB R2020b on a Windows 10 OS environment with an Intel Core i7-7700@3.60GHz CPU and 16 GB RAM. The time (T0) needed to run the standard benchmark test program defined in [77] (D = 10, 30, 50) is measured. In the same vein, the time (T1) needed to run f18 (D = 10, 30, 50) from the CEC 2017 test suite 200,000 times, and the mean time (T2) over five runs of the same function, are measured. The value of (T2 - T1)/T0 gives the complexity of the respective algorithm. From the results in Tables 2-4, the ADMO returned the minimum values compared to the other eight algorithms. In conclusion, the ADMO has relatively low computational complexity and is easy to implement.
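The timing procedure can be sketched generically as follows (an illustrative Python sketch of the (T2 - T1)/T0 measure; the callables stand in for the baseline test program and the algorithm run on f18, which are specified in [77]):

```python
import time

def algorithm_complexity(baseline, workload, runs=5):
    """CEC-style complexity score: T0 times the baseline program once,
    T1 times one pass of the benchmark workload, T2 averages `runs`
    further passes, and the score is (T2 - T1) / T0."""
    t = time.perf_counter(); baseline(); T0 = time.perf_counter() - t
    t = time.perf_counter(); workload(); T1 = time.perf_counter() - t
    total = 0.0
    for _ in range(runs):
        t = time.perf_counter(); workload(); total += time.perf_counter() - t
    T2 = total / runs
    return (T2 - T1) / T0

# placeholder workloads standing in for the test program and algorithm
score = algorithm_complexity(lambda: sum(i * i for i in range(50000)),
                             lambda: sum(i * i for i in range(50000)),
                             runs=3)
```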

Conceptual advantage of the ADMO
The performance of the proposed ADMO in finding the global optimum solutions to different optimization problems can be theoretically attributed to the following: • The ADMO stochastically creates a set of candidate solutions for given optimization problems and improves these solutions using the enhanced exploratory and exploitation ability of DMO. The enhancement results from the group splitting, antipredation, and reproduction activities of the dwarf mongoose, which further mutates the candidate solutions.
• The problem search space is explored and exploited as the dwarf mongooses forage across the territory. In ADMO, the foraging depends on comparing the foraging distance and territory size, ensuring the ADMO escapes local optima.
• The ADMO also has only one parameter that can be tuned.
Algorithm 2 (Fig 10, S1 File) reflects the mathematical model and procedural listing for the ADMO model.
The search space of the proposed algorithm is a population of dwarf mongoose individuals initialized using Eq (6) and Lines 2-4 of Algorithm 2 (Fig 10, S1 File). The search for new areas in the search space is achieved using the exploration mechanism of the algorithm. The criterion leading the optimization process into the exploration phase is a comparison of the foraging distance covered against the territory size. The algorithm transitions to the exploration phase when the foraging distance exceeds the given territory size; otherwise, the intensification phase is maintained. Obtaining the best solution depends on a sustained high rate of predator avoidance. Predation often weakens the quality of individuals in the search space. In contrast, avoiding predation and increased foraging outside a territory produce high-quality individuals in the search space.

Results and discussion
The proposed ADMO was tested using the CEC 2011 and CEC 2017 benchmark functions, consisting of 30 classical and hybrid composite problems and 22 real-world optimization problems. The results of the ADMO on the benchmark functions were compared with those of the DMO and seven existing population-based metaheuristic algorithms, namely: the arithmetic optimization algorithm (AOA), constriction coefficient-based PSO and GSA (CPSOGSA), the whale optimization algorithm (WOA), linear population size reduction success-history based adaptive DE (LSHADE), covariance matrix learning with Euclidean neighborhood ensemble sinusoidal LSHADE (LSHADEcnEpSin), LSHADE with semi-parameter adaptation hybridized with CMA-ES (LSHADE_SPACMA), and the united multi-operator EA (UMOEA). These algorithms were carefully selected because of their track records and performance in different CEC competitions, and because they represent different metaheuristic categories in the literature. All the algorithms and optimization problems considered were implemented using MATLAB R2020b, and Table 5 presents the algorithm control parameters used for the experiments; notably, these are the values used in the original references. A Windows 10 OS environment, Intel Core i7-7700@3.60GHz CPU, and 16 GB RAM were used to conduct the experiments. The results of 51 and 25 independent runs of each algorithm for CEC 2017 and CEC 2011, respectively, are collated using the "Best, Worst, Average, and SD" performance indicators. Further statistical analysis was carried out using the mean, standard deviation, Friedman test, and Wilcoxon signed-rank test.

CEC 2017 Benchmark test function
The results of all the algorithms used in this study are presented in this section. In addition to the performance metrics stated earlier, this study also reports the solution error measure, defined as f(x) − f(x*), which gives the difference between the best result f(x) found in one run of the algorithm and the known global optimum f(x*) for a specific benchmark function. The results for the different dimensions are presented in Tables 6-9. Clearly, the performance of the ADMO across the different dimensions is competitive. Specifically, from Table 6 (D=10), the ADMO found the global optimal result for 27 benchmark functions at least once and consistently found the optimal solution for 12 of these 27 functions over 51 runs. Table 7 (D=30) shows that the ADMO successfully found the solution for 3 benchmark functions, 2 of which were consistent over 51 runs and 1 at least once. From Table 8 (D=50), the ADMO found optimal solutions for 3 benchmark functions at least once but was not consistent over the 51 runs. Generally, the ADMO showed consistent performance for the unimodal problems (f1-f3): it successfully found the solutions for D=10, 30, and 50 but none for 100 dimensions. The mean value ranges from 0 to 3.95E+02, and the standard deviation is between 0 and 7.17E+01, for 10 dimensions. For 30 dimensions, the mean and standard deviation range between 0 and 1.96E+03 and 0 and 2.32E+02, respectively. The mean value for 50 dimensions ranges from 9.00E-01 to 6.68E+05, and the standard deviation is between 1.87E-02 and 5.26E+06. The performance of the ADMO for the simple multimodal functions (f4-f10) is competitive, as seen in the number of functions it successfully found solutions for: 3 of the simple multimodal functions over 51 runs and 5 functions at least once for 10 dimensions, and 1 simple multimodal function each for 30 and 50 dimensions.
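The solution error measure and the Best/Worst/Average/SD indicators above can be computed as in the following sketch. The run values used in the example are hypothetical, chosen only to illustrate the computation.

```python
import statistics

def solution_errors(run_best_values, f_star):
    """Solution error f(x) - f(x*) for each independent run."""
    return [fx - f_star for fx in run_best_values]

def summarize(errors):
    """Best, Worst, Average, and SD indicators over the runs."""
    return {"Best": min(errors),
            "Worst": max(errors),
            "Average": statistics.mean(errors),
            "SD": statistics.pstdev(errors)}

# Hypothetical best objective values over 5 runs of a function with f* = 100
summary = summarize(solution_errors([100.0, 100.0, 100.3, 101.2, 100.5], 100.0))
```

A "Best" error of 0 means the global optimum was found in at least one run, while an SD of 0 would mean the same solution was found in every run, which is how consistency over the 51 runs is read off the tables.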
For the hybrid functions (f11-f20), the ADMO successfully found solutions for all 10 functions in 10 dimensions and none in 30, 50, and 100 dimensions.
Finally, the ADMO successfully found solutions for 4 composition functions (f21-f30) in 10 dimensions, none in 30 dimensions, and 1 in 50 dimensions. In most cases, the ADMO settled on solutions very close to the global optimal solutions, as reflected in the mean value and standard deviation, which range between 0 and 6.68E+05 across all dimensions; these values are small even for the worst returned result in every dimension considered. It can conclusively be said that the ADMO is a stable and efficient algorithm for solving the CEC 2017 benchmark problems. The results across all the dimensions considered also show that the performance of the ADMO decreases slightly as the dimension increases; however, it remained stable and robust over the different dimensions.

Comparative results for CEC 2017.
The comparative results of the ADMO and the 8 other state-of-the-art algorithms on the benchmark problems with varying dimensions of 10, 30, 50, and 100 are presented in Tables 10-13. The best result and the standard deviation are the only two performance metrics used, and the best-returned results are marked in boldface. In addition, the 9 metaheuristic algorithms are ranked according to the scoring metric defined in the CEC 2017 technical report [77] and presented in Table 14. The Wilcoxon signed-rank test was also performed on the results returned by the 9 algorithms across the different dimensions considered, and the results are presented in Table 15.
The LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA came first in the different CEC competitions they entered. The performance of the proposed ADMO is compared with these algorithms and with candidate representatives of swarm-based (WOA, DMO) and physics-based (AOA, CPSOGSA) metaheuristic algorithms. It can be seen from the results that the proposed ADMO was very competitive with the high-performing algorithms (LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA) across all dimensions considered. The DMO, AOA, and WOA performed poorly, failing to find optimal solutions for most benchmark problems, while the CPSOGSA performed relatively better, finding solutions for 3 functions in 10 dimensions. Generally, the performance of all the algorithms deteriorated significantly as the dimensions increased. However, the ADMO showed its stability and robustness by returning the best or most competitive solutions over all the dimensions considered. The ranking of the algorithms based on the scoring system defined in [77] is presented in Table 14. Clearly, the five (5) high-performing algorithms were very competitive, with score differences ranging from 0.01 to 1.23, which is very small. Overall, the ADMO ranked first, outperforming the other algorithms in 10 and 30 dimensions. The graphical representation of the scores for each algorithm is shown in Fig 4. The comparative results of all the algorithms considered are tested statistically using Wilcoxon's test, presented in Table 15 for each dimension (10D, 30D, 50D, and 100D). From the results, the ADMO significantly outperforms the DMO, AOA, WOA, and CPSOGSA in all four (4) dimensions considered, judging by the high R+ values returned by the ADMO. Also, the ADMO, UMOEA, LSHADE_SPACMA, LSHADEcnEpSin, and LSHADE were competitive, judging by the number of ties returned between their comparisons.
At a significance level of α = 0.05, Wilcoxon's test showed a significant difference in 16 out of 28 cases, which implies that the ADMO significantly outperformed 4 of the 8 compared algorithms and insignificantly outperformed the remaining 4. In detail, the ADMO performed better than, the same as, or worse than the other algorithms in 138, 3, and 91 of 232 cases, respectively, for 10 dimensions. In 30 dimensions, the corresponding counts were 171, 35, and 26 of 232 cases; for 50 dimensions, 167, 52, and 13 of 232 cases; and for 100 dimensions, 168, 52, and 12 of 232 cases. Overall, the ADMO performed better than, the same as, or worse than the other algorithms in 644, 142, and 142 of 928 cases, respectively. Conclusively, the ADMO outperformed or tied the other algorithms in approximately 85% of all cases. Also, Fig 5 shows the superiority of the proposed ADMO over the DMO and the 7 other state-of-the-art algorithms considered across all the dimensions used in this study. The results also confirm the searchability, stability, and efficiency of the ADMO in solving the optimization problems used in this study. The performance of the ADMO was not hindered by the characteristics of the CEC 2017 problems, which comprise unimodal (separable and non-separable), multimodal (separable and non-separable), hybrid, and composite benchmark functions. This performance can be attributed to the balanced exploitation and exploration introduced by explicitly defining the predation, foraging and semi-nomadism, reproduction, and group splitting activities that carry out each optimization phase.
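The R+ and R− rank sums used in the pairwise comparisons above can be computed as in the following sketch, assuming a minimization setting where smaller values are better; the paired values in the test are hypothetical.

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistics for paired samples (sketch).
    Returns (R_plus, R_minus, ties): the rank sum favouring `a`, the rank
    sum favouring `b`, and the number of zero differences (ties),
    assuming smaller objective values are better."""
    diffs = [bi - ai for ai, bi in zip(a, b) if ai != bi]
    ties = len(a) - len(diffs)
    # Rank absolute differences, averaging ranks over equal magnitudes
    by_mag = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(by_mag):
        j = i
        while (j + 1 < len(by_mag)
               and abs(diffs[by_mag[j + 1]]) == abs(diffs[by_mag[i]])):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            ranks[by_mag[k]] = avg
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)   # a beat b
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)  # b beat a
    return r_plus, r_minus, ties
```

A large R+ for the ADMO against a competitor means the ADMO's wins carry most of the rank weight, which is the criterion the discussion above uses to call a difference significant together with the p-value at α = 0.05.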
Furthermore, the convergence behavior of all the algorithms considered, for all dimensions, is shown in Fig 6. The ADMO showed a fast convergence speed early in the iteration process for all functions; this speed slows down in the middle and especially towards the end of the iteration process. The convergence figures of the ADMO also show that global or near-global solutions are attained in a small number of iterations for most functions. The continuous exploitation and exploration further demonstrate the scalability of the ADMO until the stop criterion is met.

CEC 2011 Benchmark test function
The results of the ADMO on the CEC 2011 real-world problems are presented in Table 16. It should be noted that the optimal solution values for these problems are not available; the results are therefore discussed based on the four performance metrics (best, worst, mean, and standard deviation) used to summarize them. The results are collated over 25 independent runs for all 22 benchmark functions. The population size and other algorithm-specific settings remained as defined in Section 4.1. It can be observed that the ADMO consistently found the same solution over the 25 independent runs for F4, F8, and F10; this could be the optimal solution for these functions. For the rest of the functions, the solutions found were not consistent over the different runs, but they are very close to each other, judging by the very small deviation from the mean. A conclusion can be drawn that the ADMO is an effective tool for optimizing this set of problems. Next, the ADMO is compared with other algorithms to further gauge its superiority and robustness.

Comparative results for CEC 2011.
The comparative results of the ADMO and the other state-of-the-art algorithms used to solve the CEC 2011 real-world problems are presented in Table 17. The results are discussed based on the mean and standard deviation returned by the respective algorithms over 25 independent runs under the same experimental conditions detailed earlier. The LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA came first in the different CEC competitions they entered. The performance of the proposed ADMO is compared with these algorithms and with candidate representatives of swarm-based (WOA, DMO), human-based (the gaining-sharing knowledge (GSK) based algorithm [60]), and physics-based (AOA, CPSOGSA) metaheuristic algorithms. It can be seen from the results that the proposed ADMO was very competitive with the high-performing algorithms (LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, GSK, and UMOEA) across all 22 problems considered. The DMO, AOA, and WOA performed sub-optimally, failing to find optimal solutions for most benchmark problems except F4, while the CPSOGSA performed relatively better, closely following the six high performers.
The ranking of the algorithms based on Friedman's test is presented in Table 18; the smaller the mean rank, the better the performance. The null hypothesis for Friedman's test is that "there is no significant difference between the distributions of the obtained results." At a significance level of α = 0.05, the test returned a p-value of 0.000, which is less than α; the null hypothesis is therefore rejected. The ADMO returned the smallest mean rank and ranked first, closely followed by LSHADEcnEpSin and then LSHADE. The three worst-performing algorithms are the DMO, AOA, and WOA. A graphical representation of the performance ranking of the algorithms on CEC 2011 is shown in Fig 7. A further statistical analysis was carried out using Wilcoxon's test to show a pairwise performance comparison between the ADMO and the remaining algorithms, and the results are summarized in Table 19. From the results, the ADMO significantly outperforms the UMOEA, LSHADE_SPACMA, LSHADE, DMO, AOA, WOA, and CPSOGSA in all 22 problems considered, judging by the high R+ values returned by the ADMO. Also, the ADMO, LSHADEcnEpSin, and GSK were competitive, judging by the number of ties returned between their comparisons. At a significance level of α = 0.05, Wilcoxon's test showed that the ADMO significantly outperformed 7 of the 9 compared algorithms and insignificantly outperformed the remaining 2. The results also confirm the searchability, stability, and efficiency of the ADMO in solving the real-world optimization problems defined in CEC 2011.
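The mean ranks underlying Friedman's test can be computed as in the following sketch: each algorithm is ranked on every problem (ties share the average rank), and the ranks are then averaged across problems. The example values are hypothetical.

```python
def friedman_mean_ranks(results):
    """Mean rank of each algorithm across problems (smaller is better).
    `results[a]` lists the mean objective values of algorithm `a` on each
    problem; minimization is assumed, and ties share the average rank."""
    algos = list(results)
    n_problems = len(results[algos[0]])
    rank_sums = {a: 0.0 for a in algos}
    for p in range(n_problems):
        # Sort algorithms by their value on problem p (best first)
        ordered = sorted((results[a][p], a) for a in algos)
        i = 0
        while i < len(ordered):
            j = i
            while j + 1 < len(ordered) and ordered[j + 1][0] == ordered[i][0]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
            for k in range(i, j + 1):
                rank_sums[ordered[k][1]] += avg
            i = j + 1
    return {a: rank_sums[a] / n_problems for a in algos}

# Hypothetical mean values of three algorithms on two problems
mean_ranks = friedman_mean_ranks({"ADMO": [1.0, 1.0],
                                  "DMO": [2.0, 3.0],
                                  "WOA": [3.0, 2.0]})
```

An algorithm that is best on every problem attains the minimum possible mean rank of 1.0, which is the sense in which the ADMO "returned the smallest mean rank and ranked first" in Table 18.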
Furthermore, the convergence behavior of all the algorithms considered, for all 22 real-world problems, is shown in Fig 8. The ADMO showed a fast convergence speed early in the iteration process for most functions, except F1 and F3, which converged at a later stage of the iterations. This speed slows down in the middle and especially towards the end of the iteration process. The convergence figures of the ADMO also show that global or near-global solutions are attained in a small number of iterations for most functions. The continuous exploitation and exploration further demonstrate the scalability of the ADMO until the stop criterion is met.

Summary of results
To test the effectiveness and robustness of the ADMO, it was applied to solve the CEC 2017 real-parameter benchmark problems and the CEC 2011 real-world optimization problems. The experimental results were compared with the DMO and 8 other state-of-the-art algorithms, comprising 4 algorithms that came first in different CEC competitions (LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA) and 4 candidate representatives of other categories of metaheuristic algorithms (AOA, GSK, CPSOGSA, WOA). The performance of the algorithms was scored using the metric defined in the CEC 2017 technical report and Friedman's test.
The ADMO ranked first among all algorithms for CEC 2017, closely followed by the UMOEA, LSHADE_SPACMA, and LSHADEcnEpSin. Furthermore, the obtained results were statistically analyzed using Wilcoxon's test (a non-parametric test) with a significance level of 0.05; again, the results confirmed the superiority and competitiveness of the ADMO relative to the compared algorithms for all functions in the test suite. The ADMO was further used to solve the set of real-world optimization problems proposed for the CEC 2011 evolutionary algorithm competition. Generally, the ADMO, LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, GSK, and UMOEA performed significantly better than the DMO, AOA, CPSOGSA, and WOA on most functions. The ADMO showed a fast convergence speed early in the iteration process for all CEC 2017 functions and for most CEC 2011 functions, except F1 and F3, which converged at a later stage of the iterations. This speed slows down in the middle and especially towards the end of the iteration process. The convergence figures of the ADMO also show that global or near-global solutions are attained in a small number of iterations for most functions, and the continuous exploitation and exploration further demonstrate the scalability of the ADMO until the stop criterion is met.

Conclusion and future work
The ADMO algorithm is an improvement of the newly developed DMO. It addresses the slow convergence caused by the alpha value and performs exploitation and exploration better than the original DMO. To accomplish this, the ADMO incorporates four additional social structures of the dwarf mongoose: the predation and mound protection and the reproductive and group splitting behavior enhance the exploration and exploitation ability of the DMO, while the ADMO also modifies the lifestyle of the alpha and subordinate group and the foraging and semi-nomadic behavior of the DMO. In the proposed ADMO, each candidate solution is represented by an individual dwarf mongoose in the population, and the mongooses cooperate as a group to carry out these different activities, which have been mathematically modeled to enhance the optimization abilities of the DMO.
To test the effectiveness and robustness of the ADMO, it was applied to solve the CEC 2017 real-parameter benchmark problems and the CEC 2011 real-world optimization problems. The experimental results were compared with the DMO and 8 other state-of-the-art algorithms, comprising 4 algorithms that came first in different CEC competitions (LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA) and 4 candidate representatives of other categories of metaheuristic algorithms (AOA, GSK, CPSOGSA, WOA). The performance of the algorithms was scored using the metric defined in the CEC 2017 technical report and Friedman's test. The ADMO ranked first among all algorithms, closely followed by the 4 high-performing algorithms (LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA). The DMO, AOA, and WOA performed poorly across all the optimization problems considered in this study.
The ADMO is easy to implement and has proven reliable, efficient, and robust for real-parameter optimization. The ADMO, as presented, is focused on solving single-objective constrained continuous optimization problems. In future work, however, efforts can be made to modify the ADMO to solve constrained multi-objective optimization problems, discrete optimization problems, practical engineering optimization problems, and a host of other real-world applications. Another exciting research direction is to explore ways in which individual dwarf mongooses can have unique parameters and evolving intelligence capabilities. Future research may also focus on applying the algorithm to high-dimensional or large-scale global optimization problems. A complete parametric study of the ADMO is another useful prospective research direction. Finally, the ADMO may be hybridized with other robust metaheuristic algorithms.