
Solving the maximum cut problem using Harris Hawk Optimization algorithm

  • Md. Rafiqul Islam ,

    Contributed equally to this work with: Md. Rafiqul Islam, Md. Shahidul Islam, Pritam Khan Boni

    Roles Conceptualization, Investigation, Supervision, Writing – review & editing

    shahid@uap-bd.edu (MSI); dmri@aiub.edu (MRI)

    Affiliation Department of Computer Science, American International University - Bangladesh, Dhaka, Bangladesh

  • Md. Shahidul Islam ,

    Contributed equally to this work with: Md. Rafiqul Islam, Md. Shahidul Islam, Pritam Khan Boni

    Roles Conceptualization, Formal analysis, Investigation, Visualization, Writing – original draft, Writing – review & editing

    shahid@uap-bd.edu (MSI); dmri@aiub.edu (MRI)

    Affiliation Department of Computer Science and Engineering, University of Asia Pacific, Dhaka, Bangladesh

  • Pritam Khan Boni †,

    Contributed equally to this work with: Md. Rafiqul Islam, Md. Shahidul Islam, Pritam Khan Boni

    † Deceased.

    Roles Data curation, Formal analysis, Investigation, Writing – original draft

    Affiliation Department of Computer Science, American International University - Bangladesh, Dhaka, Bangladesh

  • Aldrin Saurov Sarker ,

    Roles Formal analysis, Methodology, Software, Validation, Visualization, Writing – original draft

    ‡ASS and MAA also contributed equally to this work.

    Affiliation Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh

  • Md. Asif Anam

    Roles Data curation, Formal analysis, Methodology, Validation, Visualization, Writing – original draft

    ‡ASS and MAA also contributed equally to this work.

    Affiliation Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh

Abstract

The objective of the max-cut problem is to partition the vertices of a graph into two subsets such that the total weight of the edges crossing between the two subsets is maximized. Although it is an elementary graph partitioning problem, it is one of the most challenging combinatorial optimization problems, and its many application areas make it highly significant. For this reason, the problem is solved here using the Harris Hawk Optimization (HHO) algorithm. Though HHO has effectively solved several engineering optimization problems, it is sensitive to parameter settings and may converge slowly, potentially getting trapped in local optima. Thus, HHO and some additional operators are used to solve the max-cut problem. Crossover and refinement operators are used to modify the fitness of the hawks so that they can provide precise results. A mutation mechanism along with an adjustment operator improves the outcome obtained from the updated hawk. An acceptance criterion is used to accept potential results, and a repair operator is then applied in the proposed approach. The proposed system provided comparatively better outcomes on the G-set dataset than other state-of-the-art algorithms: it obtained 533 more cuts than the discrete cuckoo search algorithm on 9 instances, 1036 more cuts than PSO-EDA on 14 instances, and 1021 more cuts than TSHEA on 9 instances. However, for four instances, the cuts are lower than those of PSO-EDA and TSHEA. Statistical significance has also been tested using the Wilcoxon signed-rank test to support the superior performance of the proposed method. In terms of solution quality, MC-HHO produces outcomes that are quite competitive with other related state-of-the-art algorithms.

Introduction

The Max-Cut problem is a famous combinatorial optimization challenge in graph theory. It entails partitioning the vertices of an undirected graph into two separate sets so as to maximize the number of edges that cross between these sets. In simpler terms, the goal is to split the graph’s vertices into sets A and B in a way that maximizes the number of edges connecting vertices from different sets. This problem is formally expressed as an optimization task, where the objective is to find the partition that yields the maximum number of cut edges. The Max-Cut problem is a well-known NP-hard problem, signifying its computational complexity [1].

The max-cut problem is remarkable for modeling other combinatorial issues and real-world applications. Many researchers have shown their interest in the Max-cut problem due to its complexity and applications. The Max-cut problem plays an important role in graph theoretic applications. In addition to its theoretical significance, the Max-Cut problem finds practical utility across a range of domains, including but not limited to network design, statistical physics [2], VLSI design [3], circuit layout design, and the production of printed circuit boards [3]. This problem has been instrumental in addressing challenges like data clustering [4], numerical computations, scientific computing, and the hybridization of techniques such as the cross-entropy method [5].

Many techniques have been developed to solve the Max-cut problem, for example, GA (Genetic Algorithm) [6], SA (Simulated Annealing) [7], scatter search [8], Tabu Search [9], the rank-two relaxation heuristic [10], the filled function method [11], weakly bipartite graphs [12], and Path-Relinking (PR) intensification [13]. Each of these algorithms has played an important role in solving the Max-cut problem. However, none of them has been able to provide the best-known results for all instances of the benchmark datasets.

A well-defined meta-heuristic method can be used to solve the maximum cut problem. Therefore, we attempt to solve the problem by applying the Harris Hawk Optimization (HHO) algorithm to obtain better results. Because of its particular searching capacity, the HHO method is more efficient than other meta-heuristic approaches in terms of reducing execution time without losing prediction accuracy [1]. Many optimization problems have been solved by HHO in recent years, such as image segmentation [14], numerical and engineering optimization [15], SVM for drug design and discovery [16], information exchange [17], parameter estimation of photovoltaic models [18], simulated annealing for feature selection in the medical field [19], Gaussian mutation [20], parameter extraction of the three-diode photovoltaic model [21], and hyperparameter optimization to detect Covid-19 from chest images [22], with better results than other existing meta-heuristic algorithms. The supremacy of HHO has also been demonstrated by hybridizing it with the enhanced Chimp algorithm for protecting copyright in color images [23]. Moreover, many contemporary research works show that the amalgamation of two or more metaheuristic algorithms can provide better outcomes. For example, the whale optimization algorithm is hybridized with the salp swarm algorithm in the Improved Whale Optimization Salp Swarm Algorithm (IWOSSA) [24]; grid search and the Aquila Optimizer (AO) are combined to obtain optimized hyperparameters of ML and CNN models for recognizing heart diseases [25]; and the amalgam of the genetic algorithm and particle swarm optimization has been used for tuning an adaptive PI controller [26]. Thus, in the current research work, two operators of the chemical reaction optimization algorithm (CRO) are used along with the operators of HHO to obtain better solutions. The contributions of the work are as follows.

  1. To generate the initial population, we encoded the graph as a hawk and produced a suitable solution space for solving the Max-cut problem using HHO.
  2. The operators of HHO are redesigned in this paper to obtain an optimized solution from the initial population.
  3. HHO is hybridized with two operators of the chemical reaction optimization (CRO) algorithm, namely refinement and crossover. These two operators help to find better solutions by searching the solution space globally.
  4. To enhance the partitioning of the graph, the Kernighan-Lin graph partitioning algorithm is modified for this problem.
  5. The performance of the HHO algorithm is intensified by using a repair operator that searches the solution space locally.
  6. The experimental results of the proposed approach are compared with TSHEA and PSO-EDA.
  7. The Wilcoxon signed-rank test is applied to support the supremacy of the experimental results of the proposed method.

Problem statement and objective function

The Max-Cut problem is not only a fundamental graph partitioning challenge but also one of the most formidable combinatorial optimization problems to tackle. Its primary objective is to partition the vertex set of a graph into two subsets in a way that maximizes the total weight of edges with one endpoint in each subset. The decision version of the Max-Cut problem is NP-complete: it belongs to the complexity class NP, and no polynomial-time algorithm is known for finding an optimal solution.

Let there be a graph G = (V, E), where V is the set of vertices {1, 2, …, n} and E is the set of edges, with each edge (i, j) ∈ E associated with a weight Wij. The problem aims to divide the graph G into two partitions, S and V\S, such that the sum of the edge weights in the cut is maximized.

For a graph with nodes, v1, v2, v3, v4, and v5, the max cut algorithm cuts the edges of the graph in such a way that the cut is maximum, i.e., the sum of the weighted edges that have been cut is maximum.

In Fig 1, a graph is shown, denoted as G = (V, E), with V = {A, B, C, D} and E = {(AB, 5), (AC, 3), (BC, 3), (BD, 1), (CD, 1)}. The graph can be cut in many ways. All of the cuts are shown in Table 1.

According to Table 1, the maximum cut of the graph of Fig 1 is 10. The best solution is found by cutting the edges of maximum total weight, as shown in Fig 2. Here, the cut edges are AB, AC, CD, and BD, and the sum of their weights is 5 + 3 + 1 + 1 = 10, which is the maximum.

Formally, given a graph G = (V, E) as above, the problem divides G into two partitions, S and V\S, such that the sum of the edge weights in the cut is maximized. After the cut, each cut edge (i, j) has one endpoint i in the subset S and the other endpoint j in the subset V\S. The set of edges in the cut is E′ = {(i, j) ∈ E : i ∈ S, j ∈ V\S}. The objective function of the max-cut problem, as given in [27], is:

f(S) = Σ(i,j)∈E′ Wij   (1)
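To make the objective concrete, here is a minimal Python sketch (our own illustration, not the paper’s code) that evaluates the cut value of a partition on the example graph of Fig 1:

```python
# Example graph of Fig 1: V = {A, B, C, D} with weighted edges.
edges = {("A", "B"): 5, ("A", "C"): 3, ("B", "C"): 3,
         ("B", "D"): 1, ("C", "D"): 1}

def cut_value(S, edges):
    """Sum of weights of edges with exactly one endpoint in S (Eq (1))."""
    return sum(w for (i, j), w in edges.items() if (i in S) != (j in S))

print(cut_value({"A", "D"}, edges))  # cuts AB, AC, BD, CD: 5 + 3 + 1 + 1 = 10
```

With S = {A, D}, exactly the four edges AB, AC, BD, and CD cross the partition, reproducing the maximum cut of 10 from Table 1.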

Related work

The Max-Cut problem has been the subject of various techniques and algorithms aimed at optimization. Each approach has its own merit and has found its niche. Every algorithm comes with its set of strengths as well as weaknesses or limitations.

Svatopluk Poljak and Franz Rendl solved the Max-Cut problem using eigenvalue relaxation [14]. Their objectives had two main aspects: efficiently computing the bound and enhancing its quality. The authors provide a comprehensive account of the method, covering its theoretical underpinnings, algorithmic implementation, and operational efficacy. They also leveraged the fundamental algorithm to calculate both upper and lower bounds on the max-cut, noting that, in most cases, the relative disparity between these bounds was notably less than 10%. To obtain exact max-cut values, the authors also applied the fundamental algorithm within a branch-and-bound context. They successfully solved the max-cut problem for dense geometric networks with up to 100 nodes and conducted a comparative analysis with the Kernighan-Lin local search algorithm. The eigenvalue bound introduced there is established as a potent tool for addressing max-cut problems, although the experiments detailed in their paper do not push the eigenvalue bound to its limits. The polyhedral approach excels particularly in extremely sparse graphs, as the computational effort for the LP relaxation is independent of the vertex count (|V|) but does heavily depend on the edge count (|E|).

In 2012, Q. Wu et al. introduced a hybrid evolutionary algorithm (TSHEA) based on tabu search to address the Max-Cut problem, aiming to reduce edge count bias and improve the results [15]. TSHEA employs a combination operator, merging a one-flip and confined exchange move neighborhood with a distance-and-quality-based solution approach. TSHEA distinguishes itself in several ways: it utilizes neighborhood combination in its tabu search procedure, employs a solution combination operator similar to traditional uniform crossover using two parent solutions, and performs well on larger benchmark instances, consistently matching or surpassing the best-known solutions. The approach outperforms other reference algorithms from the literature, discovering new optimal solutions in numerous instances. However, it may require a high number of iterations and has several tunable parameters.

Lin Geng et al. [17] introduced PSO-EDA, a hybrid approach for the max-cut problem that combines particle swarm optimization with the estimation of distribution algorithm. It outperforms previous methods in most cases but faces challenges in high-dimensional spaces.

Sú-Hyang Kim et al. [20] proposed a hybrid genetic algorithm for max-cut graph partitioning, showing significant improvements over existing algorithms. They incorporated unique ratio-gain measures for vertex movement. The algorithm works well on sparse graphs but fails to produce good results on dense graphs.

Sahni and Gonzalez [28] developed a randomized greedy algorithm for the max-cut problem. Goemans and Williamson [29] improved its performance by using a more complex distribution, achieving a 0.878 approximation. They represented vertices as unit vectors and employed semidefinite programming, pioneering its use in approximation algorithms. This work tackled fundamental questions in approximation algorithms.

To find approximate solutions to this optimization problem, the authors presented a heuristic technique based on the scatter-search approach [30]. Within the scatter-search framework, their solution method included a few cutting-edge elements. To improve diversity in the reference set, three mechanisms are used: (1) solving the maximum diversity problem; (2) dynamically adjusting a key search parameter; and (3) adaptively choosing a combination method. They carried out extensive computational tests to compare their proposal with earlier solution techniques after first examining the impact of changes in crucial scatter-search aspects. Their work was inspired by the conventional methodology, constructing a scatter-search (SS) approach that incorporates a few novel components. A key objective was to provide a method for the max-cut problem that could generate accurate approximations in a reasonable amount of computing time. Three distinct techniques for extending the fundamental scatter-search implementation were used in this study. The first extension was a novel selection method for creating a reference set from a population of solutions. In the second extension, the depth parameter k related to the ejection chain mechanism was dynamically adjusted. In the third extension, the combination strategies were chosen probabilistically: the likelihood of choosing any one of the three approaches was inversely correlated with the number of excellent solutions those approaches had produced in earlier iterations. Performance was better than in previous works, although not all of the results obtained with the SS implementation were due to the strategies that were tested.

In another study, the authors explored the capabilities of the ant colony optimization (ACO) heuristic for the Max-cut problem and provided an AntCut algorithm [31]. AntCut employs the ACO heuristic to select, at each step, a vertex whose set membership is changed as the solution moves closer to the maximum cut. The likelihood of picking each vertex is proportional to the resulting increase in the cut’s weight. They conducted tests using the graphs G1 and G11 in the G-set; Helmberg and Rendl created the G-set test instances using a graph generator written by Rinaldi. The majority of max-cut problems can be successfully solved using the AntCut algorithm, although the performance of the ACO algorithm may decrease over time.

Besides, in recent times, the Max-cut problem has been solved using supervised and reinforcement learning [32], in which the authors suggested a hybrid approach that utilized a pointer network with supervised and reinforcement learning techniques. On the other hand, Hassin et al. [33] proposed a solution that combines a greedy heuristic and an edge-contraction heuristic with the differencing method. However, they were not able to properly explain the reason behind the method’s better performance. The quantum approximate optimization algorithm (QAOA) has also been used to solve the problem, and Bae et al. [34] proved that on complete graphs recursive QAOA performs better than the original QAOA. Although they provide the proof analytically for complete graphs, there is no proof regarding sparse graphs from real-world datasets.

While various algorithms have been developed to address the Max-Cut problem, there are still several areas for improvement. Many algorithms, such as eigenvalue relaxation, scatter-search, and hybrid evolutionary algorithms like TSHEA and PSO-EDA, have shown promising results. However, most of these algorithms are applied only to smaller datasets and face significant challenges in finding the Max-Cut for dense graphs. Additionally, evolutionary algorithms typically rely on extensive parameter tuning. Therefore, the research gap lies in developing a robust and scalable algorithm that performs efficiently across both dense and sparse graph instances and minimizes computational complexity, without a heavy reliance on parameter tuning.

Meta-heuristics have gained popularity for solving complex optimization problems due to their ability to navigate large and often irregular solution spaces efficiently. These algorithms are inspired by natural processes, including biological evolution, animal behavior, and physical systems. In past decades, numerous meta-heuristics have been developed, each with its unique approach to balancing exploration and exploitation. In Table 2, we present a comparative analysis of some widely used meta-heuristics, highlighting their key strengths, weaknesses, and typical applications.

Table 2. Comparison of popular meta-heuristics.

https://doi.org/10.1371/journal.pone.0315842.t002

Harris Hawk Optimization (HHO) for the max-cut problem

While numerous algorithms have been developed to address the max-cut problem, not all of them consistently yield the best solutions across all dataset instances. In this section, we present our proposed approach for solving the maximum cut problem. We employ the Harris Hawk Optimization (HHO) algorithm, which is detailed in the following subsections.

Harris Hawk Optimization

In 2019, Ali Asghar Heidari et al. proposed the Harris Hawk Optimization (HHO) algorithm, which has been used to solve many engineering problems ever since [38]. HHO is a metaheuristic technique that excels in search performance. Its four exploitation methods can search the solution space both locally and globally, which makes the algorithm highly efficient.

The HHO algorithm is inspired by the intelligent hunting behavior of the Harris hawk, which hunts its prey in a team. The prey is normally a small animal such as a rabbit. The hawks explore desert sites for hours, and when they detect the prey, they encircle it and exploit it until it has low energy, so the hawks can easily catch it.

There are three phases in the HHO algorithm. They are:

  • Exploration phase: This is the phase where a hawk explores and searches for a potential target. The hawks can wait several hours before prey is finally found. When a rabbit is detected, the algorithm enters the exploitation phase.
  • Transition phase: This phase is intermediate between exploration and exploitation. When a hawk detects prey, it alerts the other hawks and starts hunting.
  • Exploitation phase: This is when the group of hawks uses strategies to kill the prey depending on the fitness value of the hawk and the escaping energy of the prey. The hawks chase the rabbit from different directions for a long time, which reduces the rabbit’s energy. In the end, the rabbit cannot run anymore as it is exhausted, and then the hawks strike the killing blow.

Basic structure of the max-cut Problem using HHO algorithm

The main method we have applied for solving the maximum cut problem is the Harris Hawk Optimization (HHO) algorithm. First of all, we generate an initial population. After that, the hawks enter the HHO phases: the exploration, transition, and exploitation phases. After these phases, HHO operators are applied to improve the result.

Some of the parameters used in the Harris Hawks Optimization (HHO) algorithm are described in Table 3.

Initial population generation

The initial population represents the initial positions of the hawks. It is generated based on random selection. At first, we have taken the input N, the total number of hawks. We then create random hawks by randomly removing edges to generate a random graph cut. In the HHO phase, the cut is improved in each iteration. A variable T limits the maximum number of iterations. In each iteration, the positions of the hawks are updated. The initial population generating process is shown in Algorithm 1.

Algorithm 1: Initial population generation.

1 Input: Graph, G = (V, E)

2 Output: modified graph as a hawk

3 Init: A, B

4 for each v ∈ V do

5  r ← random([0, 1])

6  if r ≥ 0.5 then

7   A ← A ∪ {v}

8  end

9  else

10   B ← B ∪ {v}

11  end

12 end

13 for each x ∈ A do

14  for each y ∈ B do

15   Assign weight of edge E(x, y) ← 0

16  end

17 end
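The random assignment of Algorithm 1 can be sketched in Python (a minimal illustration with our own names; each hawk encodes one random bipartition as a 0/1 assignment):

```python
import random

def random_hawk(vertices):
    """One hawk = one random bipartition (mirrors Algorithm 1:
    r >= 0.5 sends vertex v to set A, encoded 1; else to B, encoded 0)."""
    return {v: 1 if random.random() >= 0.5 else 0 for v in vertices}

def initial_population(vertices, N):
    """N hawks, i.e. N random cuts, form the initial positions."""
    return [random_hawk(vertices) for _ in range(N)]

pop = initial_population(["A", "B", "C", "D"], N=5)
```

Each of the N hawks is then improved over T iterations in the HHO phases described next.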

HHO phases

The phases of HHO are discussed below with necessary diagrams and pseudocodes.

Phase 1: Exploration phase.

This phase describes how a Harris hawk explores the search space for prey. It includes positioning the hawks randomly and waiting for several hours. Here, hawks are agents, and the position of the prey is the best candidate solution. Detecting prey depends on two strategies. The first strategy (when q < 0.5) detects prey according to the positions of other hawks (Xi, i = 1, 2, 3, …, N, where N is the total number of hawks). The second strategy (when q ≥ 0.5) detects prey by perching on a random tree Xrand. These two strategies are modeled in Eq (2).

X(t + 1) = Xrand(t) − r1|Xrand(t) − 2r2X(t)|, if q ≥ 0.5
X(t + 1) = (Xrabbit(t) − Xm(t)) − r3(LB + r4(UB − LB)), if q < 0.5   (2)

The mean position Xm is calculated as follows.

Xm(t) = (1/N) Σi=1..N Xi(t)   (3)

The description of the variables used in the Exploitation phase is provided in Table 4.

Table 4. Description of variables used in the exploitation phase.

https://doi.org/10.1371/journal.pone.0315842.t004

Phase 2: Transition phase.

This is an intermediate phase between the exploration phase and the exploitation phase. The transition is formulated as follows:

E = 2E0(1 − t/T)   (4)

Here E is the escaping energy of the prey, E0 is the initial energy, t is the current iteration, and T is the maximum number of iterations. The value of E0 varies in the range [−1, 1]. When the value increases from 0 to 1, the prey is strengthening, and when it decreases from 0 to −1, the prey is flagging. When |E| ≥ 1, the algorithm enters the exploration phase; otherwise, it moves to the exploitation phase. The description of the variables used in the transition phase is provided in Table 5.

Table 5. Description of variables used in transition phase.

https://doi.org/10.1371/journal.pone.0315842.t005
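The transition rule of Eq (4) can be sketched as a short helper (a minimal illustration; the function name is ours):

```python
import random

def escaping_energy(t, T):
    """Eq (4): E = 2 * E0 * (1 - t/T); E0 is redrawn from [-1, 1]
    each iteration, so |E| shrinks on average as t approaches T."""
    E0 = random.uniform(-1.0, 1.0)
    return 2 * E0 * (1 - t / T)

E = escaping_energy(t=10, T=100)
phase = "exploration" if abs(E) >= 1 else "exploitation"
```

Early in the run |E| can exceed 1, keeping the hawks exploring; as t approaches T the energy decays toward 0 and the algorithm stays in exploitation.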

Phase 3: Exploitation phase.

The chasing strategies of the hawks and the escaping behaviors of the prey are the two main elements of this phase. There are four attack strategies of the hawks depending on the energy of the prey.

  • Strategy 1: (Soft Besiege) This strategy is used when the prey has enough escaping energy and tries to escape by some random jumps but ultimately cannot escape, i.e., |E| ≥ 0.5 and r ≥ 0.5. The hawks encircle the prey softly and can easily hunt it down. The positions are updated as follows.

    X(t + 1) = ΔX(t) − E|J × Xrabbit(t) − X(t)|   (5)

    ΔX(t) = Xrabbit(t) − X(t)   (6)

    Here, ΔX(t) represents the difference between the position vector of the prey and the current location of the hawk in iteration t, and the jump strength of the rabbit during the escaping procedure is J = 2(1 − r5), where r5 is a random variable inside the range [0, 1]. The value of J changes randomly in each iteration.
    The description of the variables used in the Soft Besiege is provided in Table 6.
  • Strategy 2: (Hard Besiege) When the prey is exhausted and cannot escape due to low energy, that is, |E| < 0.5 and r ≥ 0.5, this strategy is used. The current positions are updated using the following equation:

    X(t + 1) = Xrabbit(t) − E|ΔX(t)|   (7)

    The description of the variables used in the Hard Besiege is provided in Table 7.
  • Strategy 3: (Soft Besiege with Progressive Rapid Dives) This strategy is used when the prey has enough escaping energy (|E| ≥ 0.5) but the chance of escape is low (r < 0.5); the hawks still construct a soft besiege before the surprise pounce. This procedure is more intelligent than the soft besiege. The hawks evaluate their next move with:

    Y = Xrabbit(t) − E|J × Xrabbit(t) − X(t)|   (8)

    To model the escaping of the prey, the Levy Flight concept is utilized in HHO to mimic its random jumping and zigzagging. The prey escapes using this equation:

    Z = Y + S × LF(D)   (9)

    where D is the dimension of the problem, S is a random vector of size 1 × D, and LF is the Levy Flight function, which is calculated as follows [47]:

    LF(x) = 0.01 × (μ × σ) / |ν|^(1/β), with σ = (Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2)))^(1/β)   (10)

    where μ and ν are random variables drawn from the standard normal distribution and β is the Levy distribution parameter in the range 1 < β ≤ 2. The hawks rapidly dive to make the prey tired and catch it. The next move of the hawk can be decided by the following rule:

    X(t + 1) = Y, if F(Y) < F(X(t))   (11)

    Hence, the final strategy for updating the hawks’ position can be performed by:

    X(t + 1) = Z, if F(Z) < F(X(t))   (12)

    where F is the fitness function.
  • Strategy 4: (Hard Besiege with Progressive Rapid Dives) When the escaping energy is less than 50% (|E| < 0.5) and the escape probability is also less than 50% (r < 0.5), the hawks employ a hard besiege strategy before launching a surprise attack to capture the prey. The dynamics for the prey during this stage are similar to those in the soft besiege condition. However, in a hard besiege, the hawks actively work to reduce the distance between their average location and the escaping prey. To achieve this, they follow a specific rule set during the hard besiege condition.

    X(t + 1) = Y, if F(Y) < F(X(t)); Z, if F(Z) < F(X(t))   (13)

    where

    Y = Xrabbit(t) − E|J × Xrabbit(t) − Xm(t)|   (14)

    Xm(t) is obtained using Eq (3) and Z is obtained using Eq (9) with this Y.
    The description of variables used in Soft Besiege and Hard Besiege with Progressive Rapid Dives phases is provided in Table 8.
Table 6. Description of variables used in soft besiege phase.

https://doi.org/10.1371/journal.pone.0315842.t006

Table 7. Description of variables used in hard besiege phase.

https://doi.org/10.1371/journal.pone.0315842.t007

Table 8. Description of variables used in soft besiege and hard besiege with progressive rapid dives phases.

https://doi.org/10.1371/journal.pone.0315842.t008
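The Levy Flight function of Eq (10) can be sketched in Python (a minimal illustration; β = 1.5 is our assumed default, inside the stated range 1 < β ≤ 2):

```python
import math
import random

def levy_flight(beta=1.5):
    """Levy Flight step of Eq (10): LF = 0.01 * (mu * sigma) / |nu|**(1/beta),
    with mu and nu drawn from the standard normal distribution."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu, nu = random.gauss(0, 1), random.gauss(0, 1)
    return 0.01 * mu * sigma / abs(nu) ** (1 / beta)

# Mostly small steps with occasional large jumps (the prey's zigzag escape).
steps = [levy_flight() for _ in range(5)]
```

The heavy-tailed step sizes are what let the dive strategies mix many small moves with rare long jumps.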

Modified Kernighan-Lin graph partitioning algorithm

The algorithm takes an undirected graph G = (V, E) with vertex set V, edge set E, and optional edge weights as input. Its objective is to divide the vertex set V into two disjoint subsets, A and B, of approximately equal size while minimizing the sum T of the weights of the edges that cross from A to B. If the graph has no edge weights, the goal is to minimize the number of crossing edges, which is equivalent to assigning each edge a weight of 1. The algorithm iteratively enhances the partition: in each pass, a greedy procedure matches vertices from A with vertices from B such that swapping the paired vertices between the partitions improves the partition quality, and then selects the subset of pairs with the most favorable impact on the overall solution quality T. Each pass of the procedure runs in O(|V|² log |V|) time for a graph with |V| vertices.

Let’s take an example of a graph with 4 nodes. V = {A, B, C, D} as in Fig 3.

Fig 3. An example of the Kernighan-Lin algorithm.

https://doi.org/10.1371/journal.pone.0315842.g003

The Kernighan-Lin algorithm first divides the graph into two equal subsets {A, B} and {C, D}. The cut edge value for that is 2. The algorithm then swaps vertices between the two subsets to obtain a cut that has the lowest number of cut edges keeping the size of the subsets constant.

The algorithm checks the combinations {A, C}, {B, D}, where the cut edge value is 3, and {A, D}, {B, C}, where the cut edge value is 1. As {A, D}, {B, C} is the partition with the minimum cut and its two subsets have an equal number of vertices, the algorithm returns this partition as the result. An example of the Kernighan-Lin algorithm is shown in Table 9.

Table 9. An example of the Kernighan-Lin algorithm.

https://doi.org/10.1371/journal.pone.0315842.t009

By changing the signs of all edge weights cij, the minimization problem can be transformed into maximizing the cost, i.e., obtaining a maximum cut [48]. So, it is possible to obtain a local maximum cut by modifying the Kernighan-Lin partitioning algorithm. We have modified this algorithm to find the Max-cut by partitioning the graph. The pseudocode of this algorithm is shown in Algorithm 2.

Algorithm 2: Modified Kernighan-Lin algorithm.

1 Input: Graph, G(V, E)

2 Output: P(A, B)—partition of vertices that yields a local maximum cut

3 Randomly partition the vertices V into two sets A and B

4 Compute the initial cut value T of partition P(A, B)

5 while gain > 0 do

6  compute costs D for all a ∈ A and b ∈ B

7  gv, av, bv ← ∅

8  for n ← 1 to |V|/2 do

9   find a ∈ A and b ∈ B such that gain ← D[a] + D[b] − 2 × c(a, b) is maximal

10   remove a and b from further consideration in this pass

11   add gain to gv, a to av, and b to bv

12   update costs D for the elements of A ← A \ {a} and B ← B \ {b}

13  end

14  find k which maximizes gain, the sum of gv[1], gv[2], …, gv[k]

15  if gain > 0 then

16   Exchange av[1], av[2], …, av[k] with bv[1], bv[2], …, bv[k]

17   T ← T + gain

18  end

19 end
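The sign-flip observation behind the modification can be sketched directly (a small illustration of ours, with a helper `cut_value` and the edge weights of Fig 1):

```python
def cut_value(S, edges):
    """Sum of weights of edges with exactly one endpoint in S."""
    return sum(w for (i, j), w in edges.items() if (i in S) != (j in S))

def negate_weights(edges):
    """Flip the sign of every weight: a partition that minimizes the cut of
    the negated graph maximizes the cut of the original graph."""
    return {e: -w for e, w in edges.items()}

edges = {("A", "B"): 5, ("A", "C"): 3, ("B", "C"): 3,
         ("B", "D"): 1, ("C", "D"): 1}
neg = negate_weights(edges)
S = {"A", "D"}
assert cut_value(S, edges) == -cut_value(S, neg)  # the two objectives are dual
```

Running the minimizing Kernighan-Lin pass on the negated weights therefore yields a local maximum cut of the original graph.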

Additional operator design

Studies have shown that the HHO algorithm may converge prematurely [49]. Therefore, adding additional operators may increase the HHO algorithm’s efficiency in the exploration phase [50]. To obtain better results, we designed several additional operators, which are described in the following subsections.

Refinement operator.

Drawing inspiration from the decomposition operator used in the Chemical Reaction Optimization (CRO) algorithm [39], we have developed a new mechanism called the refinement operator. This adaptation aims to enhance the optimization process by improving the solution diversity and exploration capabilities of the Harris Hawks Optimization algorithm. The refinement operator helps prevent the search from falling into local optima.

In this operator, a single hawk is split into two newly produced hawks. Suppose hawk X produces hawks X1 and X2 as new offspring. X1 keeps one half of hawk X while its other half is chosen at random, and X2 keeps the other half of X while the remainder is chosen at random. The refinement operator is applied if a hawk does not improve its result for 50 iterations. An example is shown in Fig 4. The pseudocode of the refinement operator is provided in Algorithm 3.

Fig 4. An example of the refinement operator.

https://doi.org/10.1371/journal.pone.0315842.g004

Algorithm 3: Refinement operator.

1 Input: A Hawk

2 Output: A Better Hawk

3 hawk1, hawk2 ← hawk

4 for i ← 0 to size(hawk)/2 − 1 do

5  hawk1[i] ← randomly 0 or 1

6 end

7 for i ← size(hawk)/2 to size(hawk) − 1 do

8  hawk2[i] ← randomly 0 or 1

9 end

10 if objf(hawk1) > objf(hawk2) then

11  return hawk1

12 end

13 else

14  return hawk2

15 end
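Algorithm 3 can be sketched in Python as follows; the function name `refinement` and the generic fitness callback `objf` are our own illustrative choices, and the hawk is assumed to be a binary list as in the example above:

```python
import random

def refinement(hawk, objf, rng=random):
    """Split a hawk into two offspring (Algorithm 3): hawk1
    randomizes the first half, hawk2 randomizes the second half,
    and the fitter offspring is returned."""
    n = len(hawk)
    hawk1, hawk2 = list(hawk), list(hawk)
    for i in range(n // 2):
        hawk1[i] = rng.randint(0, 1)   # regenerate first half
    for i in range(n // 2, n):
        hawk2[i] = rng.randint(0, 1)   # regenerate second half
    return hawk1 if objf(hawk1) > objf(hawk2) else hawk2
```

Whichever offspring is returned, at least half of the original hawk's bits are preserved.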

Crossover operator.

Liu et al. [51] introduced an enhanced Harris Hawk Optimization (HHO) algorithm known as CCNMHHO, which integrates the Nelder–Mead Simplex algorithm with a crisscross crossover technique, termed the horizontal and vertical crossover mechanism. The authors demonstrated the effectiveness of their approach in estimating parameters for photovoltaic systems.

The crossover operator combines two hawks to form new hawks. We divide two hawks, X1 and X2, each into two halves, and then exchange one half of X1 with the corresponding half of X2 to form two new hawks, Xnew1 and Xnew2. We apply the crossover operator if a hawk does not update the result for 50 iterations. Crossover enables the algorithm to explore new regions of the solution space by merging successful traits from both parents, potentially leading to better solutions. If one or both parents are high-quality solutions, the crossover operator can produce offspring that inherit beneficial characteristics. The pseudocode of the crossover operator is in Algorithm 4. Fig 5 demonstrates an example of the crossover operator.

Algorithm 4: Crossover operator.

1 Input: Population(X), two distinct hawks (rand1, rand2)

2 Output: Two modified hawks

3 hawk1 ← X[rand1], hawk2 ← X[rand2]

4 for i ← 0 to size(hawk1)/2 − 1 do

5  hawk1[i] ← X[rand2][i]

6 end

7 for i ← size(hawk2)/2 to size(hawk2) − 1 do

8  hawk2[i] ← X[rand1][i]

9 end

10 return hawk1, hawk2
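The half-exchange of Algorithm 4 can be sketched in Python; the function name `crossover` is our own, and hawks are assumed to be equal-length lists:

```python
def crossover(hawk1, hawk2):
    """Exchange halves between two hawks (Algorithm 4): the first
    half of hawk1 is replaced by hawk2's first half, and the second
    half of hawk2 is replaced by hawk1's second half."""
    h1, h2 = list(hawk1), list(hawk2)
    half = len(h1) // 2
    h1[:half] = hawk2[:half]
    h2[half:] = hawk1[half:]
    return h1, h2
```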

Mutation mechanism.

Kardani et al. [52] presented an enhanced version of the Harris Hawks Optimization (HHO) algorithm integrated with Extreme Learning Machine (ELM), named ELM-IHHO. This innovative algorithm aims to address the shortcomings of traditional HHO by incorporating a mutation mechanism, thereby improving its performance in predictive tasks. Inspired by this, we also incorporate a mutation operator in our adaptation of the Harris Hawks Optimization (HHO) algorithm. This addition aims to enhance exploration capabilities and prevent premature convergence to local optima. We used the same mutation mechanism presented in [50] as follows: (15)

F is the scaling factor in Eq (15), while r1, r2, r3, and r4 are distinct random indices chosen from the range [1, N], where N is the population size. The equation is applied during the exploration phase. Many models and algorithms employ mutation operators to increase optimization efficiency and global search capability [53–55], which inspired us to include this operator in the proposed technique. The variables used in the mutation phase are described in Table 10.

Table 10. Description of variables used in mutation operation.

https://doi.org/10.1371/journal.pone.0315842.t010
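Eq (15) itself is not legible in this rendering, so the sketch below is illustrative only: it assumes a differential-evolution-style form X_r1 + F·(X_r2 − X_r3) + F·(X_r4 − X_r1) using four distinct random indices and a scaling factor F, which matches the variables described above but may differ from the exact formula in [50]:

```python
import random

# Illustrative only: assumes a DE-style mutation with four distinct
# random indices r1..r4 (all different from i); the exact form of
# Eq (15) in [50] may differ.
def mutate(X, i, F=0.5, rng=random):
    candidates = [j for j in range(len(X)) if j != i]
    r1, r2, r3, r4 = rng.sample(candidates, 4)
    dim = len(X[i])
    return [X[r1][d] + F * (X[r2][d] - X[r3][d]) + F * (X[r4][d] - X[r1][d])
            for d in range(dim)]
```

When the population has collapsed to identical positions, the difference terms vanish and the mutant equals the base vector, which is one way to see why such operators help only while diversity remains.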

Adjustment operator.

The weights of the edges can take specific values: an edge contributes its weight if it is cut and nothing otherwise. In each iteration, we adjust the position values of each hawk by clipping them inside the lower and upper bounds of our algorithm.

For example, if a hawk position contains negative values, clipping them inside the range will resolve the problem. Extremely high values will be resolved as well.

If the upper bound ub = 1 and lower bound lb = 0, then for a vector M = [−1.5, 0.5, −1.2, −0.07, 1.5, 0.86, 0.92], after adjustment the vector becomes M = [0, 0.5, 0, 0, 1, 0.86, 0.92]. The pseudocode of the operator is shown in Algorithm 5.

Algorithm 5: Adjustment operator.

1 Input: Hawk

2 Output: Modified hawk

3 for i ← 1 to size(hawk) do

4  if hawk[i] < lb then

5   hawk[i] ← lb

6  end

7  else if hawk[i] > ub then

8   hawk[i] ← ub

9  end

10 end
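Algorithm 5 reduces to an element-wise clip; a minimal Python sketch (the name `adjust` is our own):

```python
def adjust(hawk, lb=0.0, ub=1.0):
    """Clip every position component of the hawk into [lb, ub]
    (Algorithm 5): values below lb become lb, above ub become ub."""
    return [min(max(x, lb), ub) for x in hawk]
```

Applied to the example vector above, negative values collapse to 0 and 1.5 collapses to 1.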

Repair operator.

We have used a repair operator as a local search to enhance the performance of the HHO algorithm. We maintain two arrays: cut[1…V], where cut[i] stores the number of cut edges incident to vertex i, and uncut[1…V], where uncut[i] stores the number of uncut edges incident to vertex i. Each array has size V, the total number of vertices.

For example, for the following graph in Fig 6, if a hawk configuration is [1, 1, 0, 0, 1, 0], from the cut graph in Fig 7, then the cut array = [2, 1, 1, 3, 2, 1], and uncut array = [1, 2, 1, 2, 1, 1].

Fig 7. A random graph configuration with a cut.

https://doi.org/10.1371/journal.pone.0315842.g007

In the repair operator, for each element in the hawk, the corresponding vertex is tentatively moved from one subset to the other (i.e., if an element of the hawk is 0, it is changed to 1, and vice versa). The cut and uncut arrays are modified accordingly. If the overall gain is positive, we accept the change of subset; otherwise, the vertex remains in the subset it was in before. We calculate the gain using the following formula:

Gain = the change in the sum of the elements of the cut array caused by the move

The pseudocode of the repair operator is shown in Algorithm 6, Algorithm 7, and Algorithm 8 for cut and uncut calculations. We have shown a pictorial view of the working process of the repair operator in Fig 8.

Algorithm 6: Repair operator.

1 Initialize totalArr = [1…V] with 0’s

2 for each edge ∈ edgeList do

3  v1 ← edge[0]

4  v2 ← edge[1]

5  Increment totalArr[v1] and totalArr[v2]

6 end

7 cutArr ← CalculateCut(hawk, edgeList)

8 uncutArr ← CalculateUncut(totalArr, cutArr)

9 for i ← 0 to V − 1 do

10  temp ← hawk with temp[i] flipped; if the resulting gain is positive then hawk ← temp and update cutArr and uncutArr

11 end

Algorithm 7: Cut calculation.

1 Input: hawk, edgeList

2 Output: cutArr

3 Initialize cutArr = [1…V] with 0’s

4 for each edge ∈ edgeList do

5  v1 ← edge[0]

6  v2 ← edge[1]

7  if hawk[v1] ≠ hawk[v2] then

8   Increment cutArr[v1] and cutArr[v2]

9  end

10 end

11 return cutArr

Algorithm 8: Uncut calculation.

1 Input: totalArr, cutArr

2 Output: uncutArr

3 Initialize uncutArr = [1…V] with 0’s

4 for i ← 0 to V − 1 do

5  uncutArr[i] ← totalArr[i] − cutArr[i]

6 end

7 return uncutArr
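The cut/uncut bookkeeping and the one-flip repair move described above can be sketched in Python (unit edge weights, 0-indexed vertices; `cut_uncut` and `repair` are our own illustrative names, and for simplicity this sketch recomputes the arrays rather than updating them incrementally):

```python
def cut_uncut(hawk, edges, V):
    """Per-vertex counts of cut and uncut incident edges
    (Algorithms 7 and 8, with unit edge weights)."""
    total = [0] * V
    cut = [0] * V
    for v1, v2 in edges:
        total[v1] += 1
        total[v2] += 1
        if hawk[v1] != hawk[v2]:   # endpoints in different subsets
            cut[v1] += 1
            cut[v2] += 1
    uncut = [t - c for t, c in zip(total, cut)]
    return cut, uncut

def repair(hawk, edges, V):
    """Greedy one-flip local search: move a vertex to the other
    subset whenever that increases the number of cut edges."""
    hawk = list(hawk)
    for v in range(V):
        cut, uncut = cut_uncut(hawk, edges, V)
        # flipping v turns its uncut edges into cut ones and vice versa,
        # so the gain of the move is uncut[v] - cut[v]
        if uncut[v] - cut[v] > 0:
            hawk[v] = 1 - hawk[v]
    return hawk
```

On a triangle with all vertices initially in one subset, the first flip gains two cut edges and no further flip improves the cut.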

Acceptance criterion

To ensure more effective optimization, we have introduced an acceptance criterion. In each iteration, after the algorithm has operated on a hawk, we accept the change only if the fitness value of that hawk improves; otherwise, we simply keep the hawk as it was before the iteration. For example, if the fitness value of a hawk is 20 before an iteration and 18 after it, the algorithm keeps the previous hawk with fitness 20.
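The criterion is a simple greedy keep-or-revert rule; a one-function sketch (the name `accept` and the generic fitness callback `objf` are our own):

```python
def accept(old_hawk, new_hawk, objf):
    """Keep the updated hawk only if its fitness strictly improves;
    otherwise revert to the previous position."""
    return new_hawk if objf(new_hawk) > objf(old_hawk) else old_hawk
```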

The overall algorithm

Like other population-based metaheuristics, HHO operates on an initially generated population; by following the HHO equations, it searches for the optimal solution of the max-cut problem. The flowchart of the proposed method for the maximum cut problem using HHO is shown in Fig 9.

Pseudocode of the proposed algorithm is given in Algorithm 9.

Algorithm 9: Max-Cut using Harris Hawks Optimization (MC-HHO) Algorithm

1 Input: Objective function f(x), Maximum number of iterations MaxIter, Population size N, Graph G(V, E), Upper and lower bounds lb, ub

2 Initialize population Xi (i = 1, 2, …, N) with random positions within bounds [lb, ub]

3 Obtain initial partition using modified Kernighan-Lin algorithm

4 Evaluate fitness of each hawk Xi using f(x)

5 Find the current best solution Xbest

6 for t = 1 to MaxIter do

7  if PreyNoUpdateCount ≥ 50 then

8   Xi = refinement(Xi)

9   Xi, Xrand = crossover(Xi, Xrand)

10  Update the escaping energy E of the prey

11  for each hawk Xi do

12   if |E| ≥ 1 then

13    Generate random values q and r

14    Update Xi using Eq 2

15   else

16    if r ≥ 0.5 and |E| ≥ 0.5 then

17     Soft besiege: Update Xi using Eq 5

18    else if r ≥ 0.5 and |E| < 0.5 then

19     Hard besiege: Update Xi using Eq 7

20    else if r < 0.5 and |E| ≥ 0.5 then

21     Soft besiege with progressive dives: Update Xi using Eq 12

22    else

23     Hard besiege with progressive dives: Update Xi using Eq 13

24   Xi = mutation(Xi)

25   adjustment(Xi, lb, ub)

26   Evaluate updated fitness of Xi

27   Check acceptance criteria of Xi

28   Xi = repair(Xi)

29  Update the best solution Xbest if needed

30 Output: Best solution Xbest and its fitness

Experimental results and discussion

In this section, we have shown the results produced by our proposed approach and the comparison with the other related state-of-the-art methods.

Dataset collection

To test and solve the max-cut problem, G-set datasets have been used in the proposed method as graph data. Helmberg and Rendl built the G-set dataset using a graph generator [56]. We have chosen 33 G-set instances. Each file describes a graph by its vertices and edges: the first row gives the size of the graph, namely the total number of vertices V followed by the number of edges E, and each subsequent row lists the two endpoints of an edge followed by its weight in the third column. We converted the graph to a 1-D array to work with it.

Experimental environment

We implemented our proposed algorithm (MC-HHO) using Python 3.12 on a system equipped with an Intel Core i7-7700 CPU @ 3.60 GHz, 24 GB of RAM, and a Linux environment. The hardware supports parallel multithreaded execution, enhancing computational efficiency for large-scale problems. Some instances require a large amount of memory, so we used a Linux PC running Ubuntu 22.04 LTS, which offers better virtual memory management.

Parameters initialization

Four parameters of HHO are initialized to start the process. The maximum number of iterations, the upper bound, the lower bound, and the total number of hawks in the population are the initial values of the proposed method, shown in Table 11.

Table 11. Optimal values of parameters in the proposed method.

https://doi.org/10.1371/journal.pone.0315842.t011

Parameter tuning

Two types of tuning are used here: iteration tuning and search-agent tuning.

Iteration tuning.

For tuning the number of iterations, we tested 50, 100, 500, 1000, and 3000 iterations. In some instances, optimal results were achieved within fewer iterations, such as 100 or 1000, while in others, it required 2500 or 2600 iterations to reach the best outcome. However, the performance plateaued for 2500 or more iterations. Therefore, 3000 is selected as the maximum number of iterations to obtain the best result for all instances. Fig 10 shows the tuning of iteration using five different values for seven different instances.

Fig 10. Iteration tuning using different values.

https://doi.org/10.1371/journal.pone.0315842.g010

Search agent tuning.

To determine the optimal number of search agents, we tested each instance with the values of SearchAgentNo: 2, 5, 10, 50, and 100. The results indicated that increasing the SearchAgentNo improved the performance, but the improvement became very small compared to the increase in execution time. As the number of hawks increases, the algorithm takes longer to reach a solution. Thus SearchAgentNo is taken as 100 empirically. Fig 11 shows the tuning of search agents using five different values for seven different instances.

Fig 11. Search agent tuning using different values.

https://doi.org/10.1371/journal.pone.0315842.g011

Experimental results comparison

In this section we compare our results with those obtained by the Discrete Cuckoo Search with Local Search for the Max-cut Problem (DCSLS) [57], PSO-EDA [17], and TSHEA [15] algorithms. According to Tables 12 and 13, MC-HHO successfully obtains maximum cut values that are near the best-known values in most cases.

Table 12. Comparison of results with DCSLS, MC-HHO, and MC-HHO without additional operator.

https://doi.org/10.1371/journal.pone.0315842.t012

Table 13. Comparison of the results of the TSHEA, PSO-EDA, and MC-HHO.

https://doi.org/10.1371/journal.pone.0315842.t013

We have also shown the gaps between the results. The gap is the difference between the best result obtained by MC-HHO and the best result obtained by the compared method. Mathematically,

Gap = Best(MC-HHO) − Best(compared method) (16)
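Assuming Eq 16 is this simple subtraction applied per instance, the gaps and the positive/negative totals of the kind summarized in Table 14 can be computed as follows (the function name `gap_summary` is our own):

```python
def gap_summary(mchho, other):
    """Per-instance gaps (Eq 16) plus the totals of positive and
    negative gaps; positive values favor MC-HHO, negative values
    favor the compared algorithm, zero means a tie."""
    gaps = [m - o for m, o in zip(mchho, other)]
    pos = sum(g for g in gaps if g > 0)
    neg = sum(g for g in gaps if g < 0)
    return gaps, pos, neg
```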

Table 12 shows that the number of cuts in MC-HHO is higher than DCSLS in all instances. Additionally, we compared the performance of MC-HHO without the refinement and crossover operators, labeled as “MC-HHO without Operator (c)” in the table. The results indicate that the proposed MC-HHO, which incorporates both operators, achieves a slight but consistent improvement in the quality of cuts across all instances compared to the version without these operators. This improvement demonstrates the impact of the refinement and crossover operators in enhancing the exploration and exploitation balance of the algorithm.

Table 13 demonstrates the results for TSHEA, PSO-EDA, and MC-HHO. It shows the number of cuts in each instance and the gaps between TSHEA and MC-HHO, as well as PSO-EDA and MC-HHO. A greater number of cuts signifies improved performance. According to Eq 16, a positive gap value indicates superior performance for MC-HHO, while a negative value means lower performance. If the gap is zero, it means that the performance of MC-HHO is equal to that of the compared algorithms.

Table 14 presents the gap comparison of DCSLS, TSHEA, and PSO-EDA against MC-HHO. The superiority over DCSLS is clear, as there is no negative gap for any instance: MC-HHO matches DCSLS everywhere else and generates more cuts in nine instances, with a total difference of 533 cuts. Compared with TSHEA, MC-HHO performs better on most instances: it obtained positive gaps in nine instances, totaling 1021 more cuts than TSHEA. However, in four instances the performance of MC-HHO is lower than TSHEA, with 65 fewer cuts (negative gaps). Overall, MC-HHO prevailed by obtaining 956 more cuts than TSHEA. In the same four instances, MC-HHO also obtained fewer cuts than PSO-EDA, but it achieved 1051 more cuts in 14 instances, giving MC-HHO an overall advantage of 988 cuts over PSO-EDA.

Table 14. Summary of comparison of the results of the DCSLS, TSHEA, PSO-EDA, and MC-HHO.

https://doi.org/10.1371/journal.pone.0315842.t014

Comparison of time with PSO-EDA

We implemented the proposed method (MC-HHO) and PSO-EDA on the same machine. The comparison is shown in Table 15.

Table 15. Comparison of the time for the PSO-EDA and MC-HHO on G-set dataset.

https://doi.org/10.1371/journal.pone.0315842.t015

The running times of our proposed MC-HHO are less than those of PSO-EDA in almost all cases, except for the instances G48 to G50. We have shown the time comparison between PSO-EDA and our proposed algorithm in Fig 12.

Fig 12. Time comparison between PSO-EDA and MC-HHO.

https://doi.org/10.1371/journal.pone.0315842.g012

Time complexity analysis

The time complexity of the HHO algorithm can generally be broken down into three components: a) Initialization: this typically requires O(n) time, where n is the number of hawks in the population. b) Fitness evaluation: in each iteration, the fitness of every hawk is evaluated. The fitness function varies between implementations; if one fitness evaluation takes O(f) time, then for m iterations the total is O(m × n × f). c) Updating positions: this usually takes O(n) per iteration. Therefore, the overall time complexity of the HHO algorithm is O(m × n × f).

The proposed system can be divided into three parts. The first part is initialization, in which the elementary population is introduced, the hawk positions are initialized, the Kernighan-Lin graph partitioning algorithm is used to partition the graph, and the hawk fitness is calculated. In Algorithm 1, the first loop runs over the total number of vertices; if the total number of vertices is v, its time complexity is O(v). Creating the graph G over V vertices takes O(V + E) time, where E is the number of edges. The Kernighan-Lin graph partitioning algorithm takes O(v² log v) time. Every other operator has single loops and runs sequentially; as there are no nested loops in those algorithms, the maximum time taken by the operators is O(v). Thus the maximum time taken by the initialization part is O(v² log v).

The second part is the iterations and fitness evaluation. The time complexity varies with the maximum number of iterations and the number of search agents: if the number of search agents is m and the maximum number of iterations is n, then the time taken by this part of the proposed algorithm is O(n × m × f), where f denotes the cost of one fitness evaluation.

Lastly, in the updating part there are no nested loops, so the time complexity of updating the m hawks is O(m) per iteration. The upper bound of the running time is therefore O(v² log v + n × m × f). Simplifying to the dominant term (omitting lower-order terms), the time complexity can be expressed as O(n × m × f).

Statistical analysis

The results of the proposed approach are compared with those of the other algorithms using statistical analysis. To show the performance of the current research statistically, box plots have been used. Fig 13 shows the box plot comparison of DCSLS and MC-HHO. According to the box plot, the maximum, minimum, median, and average values obtained by MC-HHO are greater than those of DCSLS, suggesting superior performance. Moreover, the mean value for MC-HHO is 5113.67, whereas it is 5054.44 for DCSLS, indicating that MC-HHO consistently achieves better results on average.

Fig 13. Box plot comparison between DCSLS and MC-HHO.

https://doi.org/10.1371/journal.pone.0315842.g013

The values of the box plots are shown in Table 16.

Table 16. Box plot values of DCSLS and MC-HHO.

https://doi.org/10.1371/journal.pone.0315842.t016

Similarly, Fig 14 shows the box plot comparison of TSHEA, MC-HHO, and PSO-EDA. According to the box plot, the maximum, median, and average values obtained by MC-HHO are greater than those of both TSHEA and PSO-EDA. The mean value for MC-HHO is 7182.39, surpassing the means of TSHEA and PSO-EDA, at 7158.97 and 7152.45, respectively. This indicates that MC-HHO consistently outperforms the other two algorithms on average. The higher mean suggests that MC-HHO produces more reliable solutions for maximizing the cut value, making it a more robust method for solving the problem compared to TSHEA and PSO-EDA.

Fig 14. Box plot comparison between TSHEA, MC-HHO, and PSO-EDA.

https://doi.org/10.1371/journal.pone.0315842.g014

The values of the box plots are shown in Table 17.

Table 17. Box plot values of TSHEA, MC-HHO, and PSO-EDA.

https://doi.org/10.1371/journal.pone.0315842.t017

Non-parametric statistical significance test

To statistically test the significance of the proposed method’s results on the G-set dataset, the Wilcoxon signed-rank test is applied to the MC-HHO vs. TSHEA and MC-HHO vs. PSO-EDA pairs, and the Friedman test is applied to MC-HHO, TSHEA, and PSO-EDA together.

Wilcoxon signed-rank test.

The Wilcoxon signed-rank test is used to compare two related samples by ranking the differences between paired observations. The significance level for the test is set at 0.05, and a two-tailed hypothesis is used.

Table 18 provides the statistical significance of the proposed method using the Wilcoxon signed-rank test for the G-set dataset compared to TSHEA and PSO-EDA respectively.

Table 18. Wilcoxon signed-rank test for G-set dataset.

https://doi.org/10.1371/journal.pone.0315842.t018

According to Table 18, for the MC-HHO and TSHEA pair the sample size is 13 and the p-value is 0.06876. Since this p-value is greater than 0.05, the difference between MC-HHO and TSHEA is not statistically significant at the 0.05 level. For the MC-HHO and PSO-EDA pair, the sample size is 18 and the p-value is 0.0455; as this is below 0.05, the difference is statistically significant.

Friedman test.

The Friedman test is applied to the results of MC-HHO, TSHEA, and PSO-EDA on the G-set dataset. Table 19 lists the rank of each algorithm based on the max-cut values it obtained; according to the total ranks, the proposed technique ranks first among the compared methods. The p-value computed from the ranks is 0.14056, which is greater than the significance level of 0.05, so the result is not statistically significant at p < 0.05. The χ² value calculated from the test is 3.9242.

Table 19. Ranks of the algorithms for Friedman test of G-set dataset.

https://doi.org/10.1371/journal.pone.0315842.t019

Conclusions and future works

Combinatorial optimization problems are very hard to solve using exact algorithms, especially when the instances are large, so metaheuristic algorithms can be used for them. As the max-cut problem is a combinatorial optimization problem, we have applied a nature-inspired metaheuristic algorithm called Harris Hawk Optimization (HHO) to solve it. The main challenge in solving this problem is designing the three HHO phases. To obtain a better outcome, refinement and crossover operators have been hybridized with HHO, an adjustment operator has been used to improve the results, and a mutation mechanism and an acceptance criterion have been employed to obtain better performance. Our proposed method gives better results in most cases, and for many graph instances it obtains the best-known results. It has been tested on both small- and large-scale instances. From the experimental results, it is observed that the proposed approach outperforms the other related existing methods for solving the max-cut problem in most cases, both in terms of results and time, and statistical tests have been provided to support this comparison. The results of this research have important real-world implications for addressing combinatorial optimization problems, particularly in areas such as network design, VLSI, and machine learning. Although the outcome of the proposed hybrid method is better than previous research works, it could not find the optimal solution for every instance of the G-set dataset. Besides, there is scope to evaluate it on other datasets such as MQLIB. Future work should focus on enhancing the algorithm’s scalability and exploring new datasets to improve performance further.

Acknowledgments

Pritam Khan Boni passed away before the submission of the final version of this manuscript. Md. Shahidul Islam accepts responsibility for the integrity and validity of the data collected and analyzed.

References

  1. Wikipedia. Maximum cut; accessed on 24 March 2023. https://en.wikipedia.org/wiki/Maximum_cut.
  2. Wheeler JW. An investigation of the max-cut problem. University of Iowa. 2004.
  3. Barahona F, Grötschel M, Jünger M, Reinelt G. An application of combinatorial optimization to statistical physics and circuit layout design. Operations Research. 1988;36(3):493–513.
  4. Poland J, Zeugmann T. Clustering pairwise distances with missing data: Maximum cuts versus normalized cuts. In: International Conference on Discovery Science. Springer; 2006. p. 197–208.
  5. Laguna M, Duarte A, Marti R. Hybridizing the cross-entropy method: An application to the max-cut problem. Computers & Operations Research. 2009;36(2):487–498.
  6. Fortez G, Robledo F, Romero P, Viera O. A fast genetic algorithm for the max cut-clique problem. In: International Conference on Machine Learning, Optimization, and Data Science. Springer; 2020. p. 528–539.
  7. Sen S. Simulated annealing approach to the max cut problem. In: Applications of Artificial Intelligence 1993: Knowledge-Based Systems in Aerospace and Industry. vol. 1963. SPIE; 1993. p. 61–66.
  8. Martí R, Duarte A, Laguna M. Advanced scatter search for the max-cut problem. INFORMS Journal on Computing. 2009;21(1):26–38.
  9. Kochenberger GA, Hao JK, Lü Z, Wang H, Glover F. Solving large scale max cut problems via tabu search. Journal of Heuristics. 2013;19:565–571.
  10. Burer S, Monteiro RD, Zhang Y. Rank-two relaxation heuristics for max-cut and other binary quadratic programs. SIAM Journal on Optimization. 2002;12(2):503–521.
  11. Ling AF, Xu CX, Xu FM. A discrete filled function algorithm embedded with continuous approximation for solving max-cut problems. European Journal of Operational Research. 2009;197(2):519–531.
  12. Grötschel M, Pulleyblank WR. Weakly bipartite graphs and the max-cut problem. Operations Research Letters. 1981;1(1):23–27.
  13. Festa P, Pardalos PM, Resende MG, Ribeiro CC. Randomized heuristics for the MAX-CUT problem. Optimization Methods and Software. 2002;17(6):1033–1058.
  14. Poljak S, Rendl F. Solving the max-cut problem using eigenvalues. Discrete Applied Mathematics. 1995;62(1-3):249–278.
  15. Wu Q, Wang Y, Lü Z. A tabu search based hybrid evolutionary algorithm for the max-cut problem. Applied Soft Computing. 2015;34:827–837.
  16. Wu Q, Hao JK. A memetic approach for the max-cut problem. In: Parallel Problem Solving from Nature-PPSN XII: 12th International Conference, Taormina, Italy, September 1-5, 2012, Proceedings, Part II 12. Springer; 2012. p. 297–306.
  17. Lin G, Guan J. An integrated method based on PSO and EDA for the max-cut problem. Computational Intelligence and Neuroscience. 2016;2016(1):3420671. pmid:26989404
  18. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN’95 - International Conference on Neural Networks. vol. 4. IEEE; 1995. p. 1942–1948.
  19. Banks A, Vincent J, Anyakoha C. A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications. Natural Computing. 2008;7:109–124.
  20. Kim SH, Kim YH, Moon BR. A hybrid genetic algorithm for the max cut problem. In: Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation; 2001. p. 416–423.
  21. Rodríguez-Esparza E, Zanella-Calzada LA, Oliva D, Heidari AA, Zaldivar D, Pérez-Cisneros M, et al. An efficient Harris hawks-inspired image segmentation method. Expert Systems with Applications. 2020;155:113428.
  22. Balaha HM, El-Gendy EM, Saafan MM. CovH2SD: A COVID-19 detection approach based on Harris Hawks Optimization and stacked deep learning. Expert Systems with Applications. 2021;186:115805. pmid:34511738
  23. Fahmy H, El-Gendy EM, Mohamed M, Saafan MM. ECH3OA: an enhanced chimp-harris hawks optimization algorithm for copyright protection in color images using watermarking techniques. Knowledge-Based Systems. 2023;269:110494.
  24. Saafan MM, El-Gendy EM. IWOSSA: An improved whale optimization salp swarm algorithm for solving optimization problems. Expert Systems with Applications. 2021;176:114901.
  25. Balaha HM, Shaban AO, El-Gendy EM, Saafan MM. A multi-variate heart disease optimization and recognition framework. Neural Computing and Applications. 2022;34(18):15907–15944.
  26. El-Gendy EM, Saafan MM, Elksas MS, Saraya SF, Areed FF. Applying hybrid genetic–PSO technique for tuning an adaptive PID controller used in a chemical process. Soft Computing. 2020;24(5):3455–3474.
  27. Commander CW. Maximum cut problem, MAX-cut. Encyclopedia of Optimization. 2009;2.
  28. Sahni S, Gonzalez T. P-complete approximation problems. Journal of the ACM (JACM). 1976;23(3):555–565.
  29. Goemans MX, Williamson DP. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM). 1995;42(6):1115–1145.
  30. Martí R, Duarte A, Laguna M. Advanced scatter search for the max-cut problem. INFORMS Journal on Computing. 2009;21(1):26–38.
  31. Gao L, Zeng Y, Dong A. An ant colony algorithm for solving Max-cut problem. Progress in Natural Science. 2008;18(9):1173–1178.
  32. Gu S, Yang Y. A deep learning algorithm for the max-cut problem based on pointer network structure with supervised learning and reinforcement learning strategies. Mathematics. 2020;8(2):298.
  33. Hassin R, Leshenko N. Greedy differencing edge-contraction heuristic for the max-cut problem. Operations Research Letters. 2021;49(3):320–325.
  34. Bae E, Lee S. Recursive QAOA outperforms the original QAOA for the MAX-CUT problem on complete graphs. Quantum Information Processing. 2024;23(3):78.
  35. Holland JH. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press; 1992.
  36. Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):671–680. pmid:17813860
  37. Dorigo M, Maniezzo V, Colorni A. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B. 1996;26(1):29–41. pmid:18263004
  38. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems. 2019;97:849–872.
  39. Lam AY, Li VO. Chemical-reaction-inspired metaheuristic for optimization. IEEE Transactions on Evolutionary Computation. 2009;14(3):381–399.
  40. Yang XS. Nature-inspired metaheuristic algorithms. Luniver Press; 2010.
  41. Glover F. Future paths for integer programming and links to artificial intelligence. Computers & Operations Research. 1986;13(5):533–549.
  42. Yang XS, Deb S. Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC). IEEE; 2009. p. 210–214.
  43. Karaboga D, et al. An idea based on honey bee swarm for numerical optimization. Technical report-tr06, Erciyes University, Engineering Faculty, Computer …; 2005.
  44. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Advances in Engineering Software. 2014;69:46–61.
  45. Yang XS. A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010). Springer; 2010. p. 65–74.
  46. Mirjalili S, Lewis A. The whale optimization algorithm. Advances in Engineering Software. 2016;95:51–67.
  47. Yang XS. Nature-inspired metaheuristic algorithms. Luniver Press; 2010.
  48. Kernighan BW, Lin S. An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal. 1970;49(2):291–307.
  49. Akl DT, Saafan MM, Haikal AY, El-Gendy EM. IHHO: an improved Harris Hawks optimization algorithm for solving engineering problems. Neural Computing and Applications. 2024; p. 1–114.
  50. Gharehchopogh FS, Abdollahzadeh B. An efficient harris hawk optimization algorithm for solving the travelling salesman problem. Cluster Computing. 2022;25(3):1981–2005.
  51. Liu Y, Chong G, Heidari AA, Chen H, Liang G, Ye X, et al. Horizontal and vertical crossover of Harris hawk optimizer with Nelder-Mead simplex for parameter estimation of photovoltaic models. Energy Conversion and Management. 2020;223:113211.
  52. Kardani N, Bardhan A, Roy B, Samui P, Nazem M, Armaghani DJ, et al. A novel improved Harris Hawks optimization algorithm coupled with ELM for predicting permeability of tight carbonates. Engineering with Computers. 2022; p. 1–24.
  53. Abd Elaziz M, Xiong S, Jayasena K, Li L. Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowledge-Based Systems. 2019;169:39–52.
  54. Jadon SS, Tiwari R, Sharma H, Bansal JC. Hybrid artificial bee colony algorithm with differential evolution. Applied Soft Computing. 2017;58:11–24.
  55. Xiong G, Zhang J, Yuan X, Shi D, He Y, Yao G. Parameter extraction of solar photovoltaic models by means of a hybrid differential evolution with whale optimization algorithm. Solar Energy. 2018;176:742–761.
  56. Helmberg C, Rendl F. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization. 2000;10(3):673–696.
  57. Xu Y, Cui Z, Wang L. Discrete Cuckoo Search with Local Search for Max-cut Problem. In: Intelligence Science I: Second IFIP TC 12 International Conference, ICIS 2017, Shanghai, China, October 25-28, 2017, Proceedings 2. Springer; 2017. p. 66–74.