
Artificial dragonfly algorithm in the Hopfield neural network for optimal Exact Boolean k satisfiability representation

Abstract

This study proposes a novel hybrid computational approach that integrates the artificial dragonfly algorithm (ADA) with the Hopfield neural network (HNN) to achieve an optimal representation of the Exact Boolean kSatisfiability (EBkSAT) logical rule. The primary objective is to investigate the effectiveness and robustness of the ADA algorithm in expediting the training phase of the HNN to attain an optimized EBkSAT logic representation. To assess the performance of the proposed hybrid computational model, a specific Exact Boolean kSatisfiability problem is constructed, and simulated data sets are generated. The evaluation metrics employed include the global minimum ratio (GmR), root mean square error (RMSE), mean absolute percentage error (MAPE), and network computational time (CT) for EBkSAT representation. Comparative analyses are conducted between the results obtained from the proposed model and existing models in the literature. The findings demonstrate that the proposed hybrid model, ADA-HNN-EBkSAT, surpasses existing models in terms of accuracy and computational time. This suggests that the ADA algorithm exhibits effective compatibility with the HNN for achieving an optimal representation of the EBkSAT logical rule. These outcomes carry significant implications for addressing intricate optimization problems across diverse domains, including computer science, engineering, and business.

Introduction

Satisfiability (SAT) is a fundamental problem in computer science: deciding whether a given Boolean formula can be satisfied by some assignment of truth values to its variables [1]. SAT is a well-known NP-complete problem with many practical applications, such as circuit design, artificial intelligence, and robotics. Boolean satisfiability (Boolean SAT) is the specific case in which the variables can take only the values true or false.

The optimization of exact Boolean SAT is important in solving many real-world problems, so developing efficient algorithms for the exact Boolean SAT problem has been a topic of interest for researchers. It is a decision problem with applications in various fields, such as artificial intelligence, verification, cryptography, and optimization. The satisfiability problem was first introduced by Stephen Cook in 1971, who proved that it is NP-complete, meaning that it is computationally hard to solve in general [2]. The problem involves determining whether a given Boolean formula can be satisfied by assigning truth values to its variables. A Boolean formula is a logical expression that uses Boolean operators such as AND, OR, and NOT to combine variables and their negations.

SAT is a widely used modelling framework for representing various combinatorial problems, such as graph colouring problems [3], N-queens problems [4], the Travelling Salesman Problem [5], scheduling problems [6], timetabling problems [7], planning problems [8], and many more. It finds applications in various fields, including artificial intelligence, verification, cryptography, and optimization. In artificial intelligence, SAT is utilized to solve problems related to automated reasoning, planning, and knowledge representation [9, 10]. In verification, SAT is used to check the correctness of hardware and software designs [11]. In cryptography, SAT is used to break encryption algorithms and to design new ones [12]. In optimization, SAT is used to find the optimal solution to combinatorial problems such as scheduling and routing [13, 14].

In addition to its practical applications, the satisfiability problem (SAT) has influenced several related decision and optimization problems, which are known as SAT extensions. These extensions either utilize the same algorithmic techniques as SAT or employ SAT as a core engine. Some popular SAT extensions include Maximum Satisfiability (MaxSAT), Minimum Satisfiability (MinSAT), Major Satisfiability (MSAT), Model Counting (#SAT), Exact Satisfiability, Partial MAXSAT, and Quantified-Boolean Formulas (QBF). It’s important to note that this list is not exhaustive, as the applications of SAT have been growing in recent years, particularly in addressing security and transportation challenges. Despite the computational complexity of SAT, which makes it a challenging problem to solve in general, efficient algorithms and heuristics have been developed to handle many instances of the problem.

Artificial neural networks (ANNs) are approaches used in machine learning and classification problems, loosely based on the biological structure of the brain and nervous systems. ANNs are powerful classifiers, and it has been mathematically proven that they can learn any mathematical function to arbitrary precision given sufficient training time and data. They can be applied to model multi-scale, nonlinear systems, as well as non-differentiable mathematical and engineering problems. ANNs are self-learning model frameworks capable of generating improved results based on available data [15]. There are various types of artificial neural networks developed for specific purposes. The Hopfield neural network (HNN) was proposed by Hopfield and Tank in 1985 as a means to model and optimize nonlinear patterns using the network’s energy structure during the training and testing processes. This model has gained popularity due to its ability to interpret complex real-life problems. Hopfield neural networks (HNNs) have made significant contributions to various areas, including combinatorial optimization and recognition [16–18].

One of the breakthroughs in Satisfiability logic programming and artificial neural networks was the incorporation of variants of an artificial neural network into a single model. The integration of logic programming into the Hopfield artificial neural network, referred to as HNN-SAT, was first introduced in [19]. This approach combines an artificial neural network with various logic programming problems. The aim is to utilize the optimization capabilities of the neural network to address the logical inconsistencies within the model network. After interpreting the synaptic strengths, the system relaxes into neural states that correspond to a valid or near-valid representation. The HNN models have gained wide acceptance among researchers due to their strong content addressable memory (CAM) component [20] and their ability to converge using the Lyapunov energy function (LEF) of the HNN [21]. However, the basic HNN relies on exhaustive search (traditional method or direct search) and heuristic approaches (metaheuristics) during the training and testing stages. It has been discovered that exhaustive search is not a robust searching technique, as it relies on brute force or random search mechanisms, which increase the risk of overfitting and limit variation in the searching process [22–24].

Related studies

The development of novel metaheuristic algorithms has brought relief to the artificial neural network, artificial intelligence, and machine learning communities. These algorithms can be involved at various stages of the network, such as parameter estimation, system optimization, weight adjustment and training, system adaptation to determine the number of layers, node transfer functions, learning rules, and the retrieval phase.

Attempts have been made by various researchers to overcome the issue of premature convergence associated with the structure of the Hopfield neural network. This includes the research conducted by [20], where a direct technique was employed to establish the existence and global exponential stability of an almost automorphic solution for Clifford-valued high-order Hopfield neural networks (CHNN) with leakage delays; rather than first decomposing the Clifford-valued system into real-valued systems, the system was analysed directly. Fresh findings were produced using the employed techniques, with examples based on the given case. Another investigation was carried out by [25]. In that study, the activation was sequenced based on geometric feature correlation in image hyperplanes to overcome convergence issues. The results indicated that the suggested model outperformed four existing filters when regularized under cohomology, allowing it to function as an unconventional filter for pixel spectral sequences.

The algorithms that have been developed for various scientific and engineering applications can be embedded into the network to enhance the learning and retrieval process. Several studies have been conducted on the application and utilization of metaheuristic algorithms in solving various optimization problems. One such study conducted in [26] focused on modified Particle Swarm Optimization (PSO) based optimization algorithms for large-scale nonlinear optimization problems. The modification of the original PSO in this study incorporated a local search technique to optimize the parameters of a fuzzy classification subsystem in a series of hybrid electric vehicles (SHEV), aiming to reduce harmful pollutant emissions. The results demonstrated that the proposed technique was simple, easily implementable, and had low computational complexity, outperforming the original PSO and the clonal selection-based artificial immune system algorithm (CLONALG).

A study conducted in [27] proposed three models (ANN, MF-ANN, GEP) to predict ground vibrations from tunnel blasting using artificial intelligence techniques. The MF-ANN model outperformed the others in accuracy and efficiency, providing valuable information for safety assessments. Similarly, [28] also used ANN models (including PSO-ANN) to predict the environmental impact of tunnel blasting, with the PSO-ANN model showing superior performance. These models offer accurate methods for assessing ground vibrations and the environmental effects of tunnel blasting. Another study proposed the use of the Graph Long Short-Term Memory (GLSTM) neural network and the Dragonfly algorithm for node localization [29]. This approach aimed to predict pollution levels in a wireless healthcare system that has been revolutionized by the integration of smart technologies. The GLSTM neural network provided an efficient and accurate method for pollution level prediction, while the Dragonfly algorithm accurately localized the nodes to facilitate efficient data transfer. The proposed system has the potential to significantly improve the accuracy of pollution prediction and node localization in wireless healthcare systems [30].

Reducing energy consumption and optimizing the lifetime of wireless sensor networks are crucial objectives. However, some clustering algorithms, such as LEACH, may not deliver satisfactory performance [31]. To enhance LEACH’s effectiveness, researchers have proposed an improved version of the Dragonfly algorithm for load balancing. Comparative simulations were conducted with traditional LEACH and particle swarm optimization algorithms, revealing that the enhanced Dragonfly algorithm outperformed the others. Maintaining reliable and secure routing protocols in Mobile Ad hoc Networks (MANETs) is essential due to the dynamic and open nature of wireless communication. Black hole attacks, where compromised nodes act as false routers, pose significant threats. To address this, a novel approach leveraging the Firefly Algorithm and Artificial Neural Network has been introduced to enhance the Ad hoc On-Demand Distance Vector (AODV) routing protocol [32]. Extensive numerical experiments evaluated computation overhead, packet delivery rate, throughput, and delay, demonstrating the superior performance of the proposed approach compared to traditional methods in mitigating black hole attacks in MANETs. In the context of matching engines, a heuristic algorithm has been developed to reduce memory demand while achieving effective and high-performance results [33]. This algorithm estimates distances between strings in a unique pattern, facilitating rule classification and enhancing the matching engine’s capabilities.

Recently, researchers proposed a novel approach that combines the Gravitational Search Algorithm (GSA) and Deep Q-Learning (DQL) algorithm using Reinforcement Learning (RL) [34]. The hybridization of GSA and DQL was utilized, with GSA initializing the weights and biases of the neural network in DQL to ensure stability. The proposed approach demonstrated superior performance compared to similar techniques. In a separate study, fuzzy controllers were employed to enhance the performance of control systems in electromagnetic-actuated clutch systems [35]. The parameters of the fuzzy controllers, including membership functions and rules, were optimized using the Grey Wolf Optimizer (GWO). The Takagi-Sugeno type-2 fuzzy controller exhibited greater efficiency in handling complex processes. A new swarm-based metaheuristic algorithm, called Tuna Swarm Optimization (TSO), was developed based on the cooperative foraging behaviour of tuna swarms. The TSO algorithm outperformed other comparative algorithms in terms of optimization performance [36]. Another intriguing study focused on the optimization capabilities of the Cat Swarm Optimization (CSO) algorithm [37]. The CSO was found to be a robust and powerful swarm-based metaheuristic optimization approach, surpassing other algorithms in solving optimization problems [38].

In the realm of algorithm development, novel metaheuristic algorithms have emerged as potential solutions to address the convergence problem of networks, facilitating faster learning and testing phases. One such approach involves the integration of a genetic algorithm (GA) with the Hopfield artificial neural network (HNN) [39]. The objective is to leverage the optimization capacity of the GA to enhance the learning process of the HNN and improve the model’s overall performance. Additionally, a hybrid artificial ant colony optimization (ACO) algorithm has been introduced in the learning phase of neural networks, specifically in the context of discrete optimization for data analysis and data mining techniques [40]. This integration aims to exploit the strengths of ACO in addressing discrete optimization problems within the neural network framework.

To address the challenge of Boolean Satisfiability (SAT) problem representation, a modified version of the Hopfield Artificial Neural Network (MHNN) has been proposed [39]. The purpose of this approach is to assess the efficiency of the MHNN model in solving SAT, particularly in comparison to existing methods. The proposed neural network model is compared with traditional Greedy SAT and genetic algorithms (GA) for SAT. The results demonstrate that MHNN can effectively serve as a viable alternative for solving Boolean SAT, offering favourable output quality and response time.

A hybrid model combining artificial immune systems (AIS) and Case-based Reasoning (CBR) was proposed to manage the processes of adaptation (reuse and revision), recovery, and retention of cases [41]. The aim was to offer an alternative approach for identifying high-density areas, clustering, enhancing search efficiency within the search space, and storing relationships among similar cases. The proposed model was applied to address the problem of fault detection and diagnosis. The obtained results were compared using specific performance metrics for CBR, revealing promising prospects for the proposed model.

An application of Ant Colony Optimization (ACO) in optimization problems for classification purposes was presented by [40]. The objective was to develop a model that captures the relationships between input attributes and the target class in a dataset. The proposed classification model aimed to predict new patterns using ACO to enhance the learning structure of feed-forward neural networks. A nonparametric Friedman test was employed to determine statistical significance, comparing the proposed model with existing evolutionary algorithms for evolving neural networks. The results demonstrated the efficiency of the proposed model in handling classification problems. Similarly, [42] conducted a study using the Imperialist Competitive Algorithm (ICA). The study aimed to compare the performance of ICA in the training and testing phases of Hopfield Neural Networks (HNN) using simulated real-life datasets against Exhaustive Search (ES) and a standalone Genetic Algorithm (GA) for 3-Satisfiability. The results of both studies indicated that incorporating ICA in the learning and training phases of HNN resulted in improved classification accuracy and lower error accumulation compared to ES and standalone GA. A novel Election Algorithm (EA) as a heuristics search technique in a Hopfield-type artificial neural network (HNN) for solving random satisfiability problems using a simulated dataset was proposed in [43]. The main objective was to assess the effectiveness of the Election Algorithm (EA) in improving the learning phase of the HNN for random k-Satisfiability logical rules. The results of the proposed HNN-RANkSAT-EA model showed promising performance, demonstrating favourable agreement with the existing HNN-RANkSAT-ACO approach while outperforming the traditional HNN-RANkSAT-ES method.

Furthermore, a recent advancement was made in upgrading the Random 2-Satisfiability (RAND-2SAT) model, proposed by [43], to Random 3-Satisfiability (RAND-3SAT) by [44]. The purpose of this upgrade was to incorporate high-order logical rules into the Hopfield neural network and explore the feasibility of the proposed Election Algorithm in learning for high-order logic. The results revealed that the Election Algorithm demonstrated optimal performance in the Hopfield neural network when applied to high-order logic problems.

The Dragonfly algorithm has emerged as an innovative evolutionary metaheuristic algorithm, effectively applied in engineering applications and computational optimization to find optimal solutions. In a study conducted by [45], a novel artificial dragonfly algorithm (ADA) was developed, inspired by the intelligent swarming behaviours observed in natural interactions of dragonflies, such as navigation, food search, and enemy avoidance. The main focus of ADA was to mimic the dynamic and static swarm behaviours exhibited by dragonflies in nature. This design allows ADA to possess efficient capabilities for both exploration (global search) and exploitation (local search), making it a simple and efficient algorithm suitable for integration into the learning or training process of any neural network model, including HNN. The convergence of ADA is guaranteed during the optimization process, as appropriate weights can be adaptively assigned to each ADA operator. This adaptive weighting facilitates a smooth transition from exploration (global search) to exploitation (local search) of the search space, ensuring the convergence of the dragonflies towards the optimal solution.

The dragonfly algorithm and its modified variants have proven to be successful in various optimization and search problems across mathematical and engineering applications. For instance, [46] proposed a memory-based version of the hybrid dragonfly algorithm (MHDA) specifically tailored for solving numerical optimization problems in engineering applications. In another study, [47] introduced a binary version of the dragonfly algorithm for feature selection and demonstrated its effectiveness in selecting relevant features for optimization tasks. A similar study in [46] proposed the ADA algorithm, an evolving metaheuristic algorithm, for solving the static economic dispatch problem in solar energy; ADA proved to be a useful optimization tool for constrained optimization problems in this context. Additionally, in [48] a novel approach utilizing ADA in an optimization technique to determine the optimal threshold value for image segmentation was proposed. Furthermore, [49] proposed the Dragonfly Chaotic algorithm for feature selection, showcasing the algorithm’s applicability in this domain. A function optimization approach based on the ADA and multilayer perceptron training in neural networks was developed in [50]; their computational experiments on benchmark problems demonstrated the efficacy of ADA, particularly for multilayer perceptron training in neural networks.

In a recent study by [51], the dragonfly algorithm (ADA) was employed for the dynamic scheduling of assignments in cloud computing, aiming to find near-optimal solutions. The results obtained highlighted the performance of dragonfly metaheuristic algorithms in resource management and their potential for customization to meet specific requirements in cloud computing scenarios.

In a related study conducted by [43], a novel hybrid discrete version of the artificial dragonfly algorithm (DADA) was developed for Exact Satisfiability representation using agent-based modelling (ABM). The main objective was to optimize the states of neurons within a dynamic system implemented on the NETLOGO platform. The DADA algorithm was chosen due to its capability to provide diverse solutions through random searching and a static swarm mechanism, enabling the convergence of computational problems towards the best global optimal search space. The proposed DADA algorithm was compared with a genetic algorithm (GA), and the results demonstrated its efficient performance in optimizing Exact-kSAT logical representations. The DADA-ABM approach shows great potential for modelling and optimizing complex networks that cannot be effectively captured by traditional optimization modelling techniques.

The Exact Boolean kSatisfiability (EBkSAT) problem, widely recognized as a challenging problem in computer science with diverse applications, continues to demand more effective and efficient solution methods. While various algorithms have been developed to tackle this problem, there is still room for improvement. The artificial dragonfly algorithm (ADA) has shown promise in solving optimization problems, including EBkSAT. However, the potential advantages of integrating ADA with the Hopfield neural network (HNN) to enhance performance in addressing the EBkSAT problem have not been extensively investigated.

In light of this, the objectives of this study are as follows:

  1. To investigate the performance of the artificial dragonfly algorithm in solving the EBkSAT problem compared to other state-of-the-art algorithms.
  2. To evaluate the effectiveness of the Hopfield neural network in enhancing the performance of the artificial dragonfly algorithm in solving the EBkSAT problem.
  3. To determine the effect of different parameters, such as the dragonfly population size, the number of neurons in the Hopfield network, and the learning rate, on the performance of the proposed algorithm.
  4. To develop an efficient and effective algorithm based on the artificial dragonfly algorithm in the Hopfield neural network for solving the EBkSAT problem.
  5. To compare the performance of the proposed algorithm with other state-of-the-art algorithms on a set of benchmark instances.

Research questions

The purpose of this study is to address the following research questions:

  1. What is the performance of the artificial dragonfly algorithm in solving the EBkSAT problem compared to other state-of-the-art algorithms?
  2. How effective is the artificial dragonfly algorithm in enhancing the performance of the Hopfield neural network in solving the EBkSAT problem?
  3. What is the impact of different parameters, such as the size of the dragonfly population, the number of neurons in the Hopfield network, and the learning rate, on the overall performance of the proposed algorithm?
  4. How can an efficient and effective algorithm based on the artificial dragonfly algorithm in the Hopfield neural network be developed to solve the EBkSAT problem?
  5. How does the proposed algorithm perform compared to other state-of-the-art algorithms when tested on a set of benchmark instances?

The findings of this study will make valuable contributions to the advancement of nature-inspired algorithms for solving combinatorial optimization problems, thereby carrying practical implications across various fields. The proposed hybrid computational model presented in this paper offers an alternative approach to tackle different combinatorial optimization problems, making it particularly relevant for the fields of computational science and mathematics.

The paper is structured as follows: Section 2 outlines the methodology adopted in this study, encompassing (2.1) the formulation of Exact Boolean kSatisfiability (EBkSAT) of a Boolean formula, (2.2) the mapping of EBkSAT in the Hopfield neural network (HNN) model, (2.3) the learning phase of HNN, and (2.4) the algorithms of the artificial dragonfly and the introduced hybrid algorithm that integrates the artificial dragonfly into the HNN to achieve optimal Satisfiability representation. Section 3 presents the experimental setup of the model, while Section 4 presents the experimental results along with a comprehensive discussion, and conclusions are drawn from this exploration.

Materials and methods

Exact Boolean kSatisfiability (EBkSAT)

The EBkSAT problem involves a Boolean formula that represents a particular decision problem. It aims to determine whether a given Boolean satisfiability formula in Conjunctive Normal Form (CNF) holds a true representation satisfying exactly one literal in every clause. If this condition is met, it confirms the existence of a label representation; conversely, if the condition is not satisfied, it indicates the absence of a label representation [52]. The EBkSAT problem can be seen as a variant of the Boolean Satisfiability (SAT) problem, where the input instance is similar, but with a distinction in the EBkSAT representation: in EBkSAT, a clause is considered satisfied only if exactly one of its literals is true, in contrast to the requirement of at least one true literal in ordinary k-SAT formulations.

Let’s consider a Boolean expression in which FEBkSAT is built from Boolean variables in CNF with the following properties.

  1. A set of Boolean variables, (x1,x2,x3,…,xn), whereby xi∈{1,−1};
  2. A group of literals, whereby a literal represents a given variable xi or a variable’s negation ¬xi;
  3. A group of m distinct logical clauses, Ci∈{c1,c2,c3,…,cm};
  4. Each satisfying assignment satisfies exactly one literal in each clause;
  5. The variables within a logical clause are linked by the Boolean connective OR (∨);
  6. The logical clauses are joined by the Boolean connective AND (∧);
  7. Each clause Ci is a disjunction of exactly k literals, with at most three literals per clause.

These properties simplify the formulation of the problem via a Hopfield neural network (HNN) and preserve its NP-completeness [53]. The general formulation of EBkSAT is presented as follows:

$$F_{EBkSAT} = \bigwedge_{i=1}^{m} C_i, \quad k = 1, 2, 3 \quad (1)$$

where Eq (1) describes the Boolean formula for EBkSAT containing the logical clauses Ci given in Eq (2), as follows:

$$C_i = (E_i \vee D_i \vee F_i) \quad (2)$$

If the context is clear, we denote the number of clauses as EBkSAT [54]. The Boolean values are presented in bipolar form, Eij, Dij, Fij ∈ {1,−1}, representing the TRUE value of a mapping or its FALSIFICATION, respectively. An example of the EBkSAT formulation when k = 3 is presented as follows: (3)

Eq (3) is satisfiable since it yields a true value, resulting in Eq (4).

(4)

If the neuron states are considered as Ei (i = 1,2,3), Di (i = 1,2,3), and Fi (i = 1,2,3), then the Boolean expression will be unsatisfiable only if: (5)
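To make the "exactly one literal" condition concrete, the following minimal C++ sketch checks an EBkSAT assignment against a CNF formula. It is an illustration only, not the simulation code used in this study; the three-clause instance in main is hypothetical.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// A literal is a signed variable index: +i means x_i, -i means NOT x_i.
// A clause is a list of literals; a CNF formula is a list of clauses.
using Clause  = std::vector<int>;
using Formula = std::vector<Clause>;

// EBkSAT: a clause is satisfied only if EXACTLY one of its literals is true
// (ordinary kSAT would require AT LEAST one). States are bipolar: 1 or -1.
bool clauseExactlySatisfied(const Clause& c, const std::vector<int>& s) {
    int trueLiterals = 0;
    for (int lit : c) {
        int v = s[std::abs(lit)];
        if ((lit > 0 && v == 1) || (lit < 0 && v == -1)) ++trueLiterals;
    }
    return trueLiterals == 1;   // the "exact" condition
}

bool formulaSatisfied(const Formula& f, const std::vector<int>& s) {
    for (const Clause& c : f)
        if (!clauseExactlySatisfied(c, s)) return false;
    return true;
}

int main() {
    // Hypothetical 3-clause instance over x1..x9 (7 positive, 2 negated literals).
    Formula f = {{1, 2, 3}, {-4, 5, 6}, {7, -8, 9}};
    std::vector<int> s(10, -1);   // index 0 unused; all variables false...
    s[1] = 1;                     // ...except x1: exactly one true literal per clause
    std::cout << (formulaSatisfied(f, s) ? "satisfied" : "unsatisfied") << "\n";
    return 0;
}
```

Replacing `trueLiterals == 1` with `trueLiterals >= 1` recovers the ordinary kSAT check, which is the entire difference between the two problems.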

In this study, FEBkSAT has been embedded into the HNN as the proposed model, ADA-HNN-EBkSAT, in the next section. To the best of the authors’ knowledge, an artificial dragonfly algorithm has not previously been applied to accelerate the HNN learning phase for the Exact kSatisfiability representation.

Mapping the Exact Boolean kSatisfiability in the Hopfield neural network

The basic structure of the Hopfield neural network (HNN) consists of several components, including inputs, outputs, and weights. In a stable HNN, the energy decreases over time: the network continuously converges towards a fixed point, which represents stable neural states. This property makes HNN suitable for solving various search and optimization problems [55]. The fundamental architecture and structure of a discrete HNN with n neurons can be defined by two n×n real matrices. (6) An n-dimensional vector is presented as follows: (7) where N is denoted by (8)

The state of each neuron is denoted by one of two possible values, 1 or −1, for neuron i at time t, as follows:

$$S_i(t) \in \{1, -1\} \quad (9)$$

and sj(t) is the initial input vector pattern presented to the network, as follows:

$$S(t) = [s_1(t), s_2(t), s_3(t), \ldots, s_N(t)] \quad (10)$$

Eq (10) represents the state of all neurons at time t, and the set of states S is defined as follows: (11)

The neurons in HNN can be represented in binary form, Sj ∈ {0,1}, or bipolar form, Sj ∈ {−1,1}, based on the neuron dynamics, as follows:

$$S_j(t+1) = \begin{cases} 1, & \text{if } h_j(t) \geq 0 \\ -1, & \text{otherwise} \end{cases} \quad (12)$$

Eq (12) is defined by the Ising variables in the spin-glass and Dean’s mechanical physics problem [56], whereby the local field hj(t) is described as follows:

$$h_j(t) = \sum_{k} \lambda_{jk} S_k(t) \quad (13)$$

The benefit of utilizing bipolar values rather than binary values involves the network’s symmetry of states: if a given pattern Sj in bipolar form is stable, then its inverse is stable as well. The general asynchronous updating of the discrete HNN model is discrete in time, performing as follows:

$$S_j(t+1) = \mathrm{sgn}\!\left(\sum_{k} \lambda_{jk} S_k(t) - \tau_j\right) \quad (14)$$

whereby λjk represents the synaptic connection matrix, which determines the strength of the connection between neurons j and k, Sk refers to the state of unit k, and τj describes the threshold of neuron j. Some studies, including the work of [57] and [23], set τj = 0 and verify that the energy state of the HNN network decreases monotonically; every time neuron j is linked through λjk, a synaptic connection value is preserved in the form of a stored pattern in the interconnected N-dimensional variable vectors presented in Eq (7). The synaptic weight matrix is constrained so as not to allow self-loop connections of the neurons, as follows:

$$\lambda_{jj} = 0 \quad (15)$$

and the neuron synaptic weight matrix is symmetrical, as follows:

$$\lambda_{jk} = \lambda_{kj} \quad (16)$$
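The update dynamics of Eqs (12)–(16) can be summarised in a short C++ sketch. The function below performs one asynchronous sweep; the names and signature are ours, and the weight matrix is assumed to already satisfy the zero-diagonal and symmetry constraints of Eqs (15) and (16).

```cpp
#include <cstddef>
#include <vector>

// One asynchronous update sweep of a discrete (second-order) Hopfield network.
// lambda is assumed to satisfy Eq (15) (zero diagonal) and Eq (16) (symmetry);
// tau is the threshold vector, set to 0 in [57] and [23].
void asyncUpdate(std::vector<int>& S,                              // bipolar states
                 const std::vector<std::vector<double>>& lambda,   // synaptic weights
                 const std::vector<double>& tau) {
    const std::size_t n = S.size();
    for (std::size_t j = 0; j < n; ++j) {
        double h = 0.0;                       // local field h_j(t), Eq (13)
        for (std::size_t k = 0; k < n; ++k)
            h += lambda[j][k] * S[k];
        S[j] = (h - tau[j] >= 0.0) ? 1 : -1;  // sign update, Eqs (12) and (14)
    }
}
```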

The energy dynamics function of the HNN model with CAM provides a versatile and high-capacity system with error tolerance and fast memory recovery capabilities, even with partial inputs [58] and [59]. By leveraging the HNN network as a logical rule, it becomes suitable for integration into combinatorial optimization problems like SAT. This involves assigning a neuron to each variable in EBkSAT using specific cost functions and a generalized cost function, thereby instructing the system configuration’s performance based on the synaptic strength matrix, which controls the neurons’ combinations in the network; FEBkSAT is as follows: (17) whereby N and NN represent the number of variables and the number of generated neurons in FEBkSAT, respectively, and the inconsistency of the FEBkSAT representation is defined as follows: (18)

This value is proportional to the number of “inconsistent” clauses (Eij = −1, Dij = −1, Fij = −1); the minimum corresponds to the “most consistent” selection of Sj. Consequently, the updating states of the Hopfield neural network given in Eqs (12) and (13) are upgraded to obey a third-order connection, described in Eqs (19) and (20) respectively, as follows:

$$h_j(t) = \sum_{k}\sum_{l} \lambda^{(3)}_{jkl} S_k(t) S_l(t) + \sum_{k} \lambda^{(2)}_{jk} S_k(t) + \lambda^{(1)}_{j} \quad (19)$$

In such a case, two output values for each neuron are possible, as follows:

$$S_j(t+1) = \begin{cases} 1, & \text{if } h_j(t) \geq 0 \\ -1, & \text{otherwise} \end{cases} \quad (20)$$

whereby λ(3)jkl, λ(2)jk, and λ(1)j are the third-, second-, and first-order synaptic weights embedded into FEBkSAT. Eqs (19) and (20) guarantee that the neuron states Sj always converge. Thus, the Lyapunov energy function (LEF) is employed to ensure that the network’s energy dynamics decrease monotonically. The quality of the retrieved neural states is measured via the LEF for k = 2, which is given in Eq (21).

$$E_{F_{EBkSAT}} = -\frac{1}{2}\sum_{i}\sum_{j} \lambda^{(2)}_{ij} S_i S_j - \sum_{i} \lambda^{(1)}_{i} S_i \quad (21)$$

The energy state always changes toward a more negative state until the global minimum energy is reached by the system. Eq (21) represents a monotonic drop in the dynamics and can be upgraded to incorporate the third-order connections as follows:

$$E_{F_{EBkSAT}} = -\frac{1}{3}\sum_{i}\sum_{j}\sum_{k} \lambda^{(3)}_{ijk} S_i S_j S_k - \frac{1}{2}\sum_{i}\sum_{j} \lambda^{(2)}_{ij} S_i S_j - \sum_{i} \lambda^{(1)}_{i} S_i \quad (22)$$

The network generates the required solution when the induced neuron state reaches the global minimum energy (an equilibrium state). The energy states of Eqs (21) and (22) portray FEBkSAT decreasing monotonically to a certain configuration, so that as the network approaches the final energy state, the change in network energy approaches zero. The quality of the ultimate neuron state is assessed according to Eq (23), as follows:

$$\left| E_{F_{EBkSAT}} - E^{min}_{F_{EBkSAT}} \right| \leq \xi \quad (23)$$

where the parameter ξ refers to the value of the pre-determined tolerance. The value ξ = 0.001 was taken as the tolerance in [57] and [60]; however, other tolerance values can be considered.
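The following sketch, assuming the conventional 1/3 and 1/2 coefficients for the third- and second-order terms of Eq (22), computes the Lyapunov energy of a retrieved state and applies the tolerance test of Eq (23). Names are illustrative, not taken from the study’s source code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Third-order Lyapunov energy of a bipolar state S, in the form of Eq (22),
// assuming the conventional 1/3 and 1/2 coefficients; w3, w2, w1 are the
// third-, second-, and first-order synaptic weights.
double lyapunovEnergy(const std::vector<int>& S,
                      const std::vector<std::vector<std::vector<double>>>& w3,
                      const std::vector<std::vector<double>>& w2,
                      const std::vector<double>& w1) {
    const std::size_t n = S.size();
    double E = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            for (std::size_t k = 0; k < n; ++k)
                E -= w3[i][j][k] * S[i] * S[j] * S[k] / 3.0;
            E -= w2[i][j] * S[i] * S[j] / 2.0;
        }
        E -= w1[i] * S[i];
    }
    return E;
}

// Quality test of Eq (23): accept the retrieved state as a global minimum when
// its final energy is within tolerance xi of the expected minimum energy.
bool isGlobalMinimum(double finalE, double expectedMinE, double xi = 0.001) {
    return std::fabs(finalE - expectedMinE) <= xi;
}
```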

If the FEBkSAT logical representation embedded in HNN does not satisfy the criterion stated in Eq (23), then the ultimate state achieved has been trapped in a wrong pattern (a local minimum solution). Fig 1 shows a flowchart of the HNN learning process based on the Wan Abdullah learning approach, while Table 1 presents the HNN algorithm.

Table 1. The Hopfield artificial neural network algorithm.

For the HNN algorithm in Table 1, if the fitness requirement is not met, the program continues to operate iteratively; conversely, if the solution meets the requirement, it is terminated and printed. The HNN’s fitness requirement is met if the output of the network corresponds to a specific constant value, indicating that no further searching/optimization of the objective function is necessary. Additionally, all decision variables must be non-negative, and no constraints may be violated. Until now, no research has combined a discrete ADA version with a discrete HNN version in a single computational model; consequently, the robustness of ADA contributes to enhancing the HNN training process.

2.1 Learning phase in Hopfield artificial neural network for Exact Boolean kSatisfiability

The learning phase of the HNN system is introduced by inserting the “behaviour” of the EBkSAT into the HNN. The behaviour of EBkSAT is implemented by searching for the correct synaptic weight vector of the EBkSAT logical representation; finding this weight vector for the EBkSAT clauses is the primary objective of the HNN-EBkSAT learning phase. The implementation of FEBkSAT in the HNN in this work is designated as the HNN-EBkSAT model. To date, there has been no previous attempt to implement the EBkSAT logical representation in HNN.

Consider the following EBkSAT program: (24)

Given the goal of the program: (25) where F is the conjunction of clauses that defines the goal of the logic program, the task involves showing that ←F is inconsistent in order to confirm the goal F. This is represented as a combinatorial optimization problem, whereby the “inconsistency” of Eq (25) is to be minimized. The inconsistency is presented as the negation of Eq (25) after translating all clauses into Boolean algebraic form and negating, as follows: (26)

The cost function in Eq (27) is the Boolean algebraic formula, in bipolar representation, of the FEBkSAT logical inconsistency, following Wan Abdullah’s learning method, which is given herein: (27) where the neuron states (i = 1,2,3) represent the truth values of the neurons Ei, Di, and Fi, respectively. In this work, each state can take one of two possible values, 1 (True) or −1 (False), for deriving the cost function to be minimized. The optimum value corresponds to all logical clauses being satisfied; this value is related to the number of unsatisfied logical clauses [57] and [60]. It can be applied in the given network by storing the atom truth values, thereby producing an optimized cost function when optimal clauses are signified. When the cost function in Eq (27) is programmed onto third-order logic, comparison with the energy in Eq (22) yields the correct synaptic strengths of HNN-EBkSAT to be stored as the CAM of HNN. These synaptic weights are then utilized throughout the retrieval phase. Thus, the training process can provide an optimum cost function for determining the optimum synaptic weights. Eq (22) can further be expanded and simplified by considering all neuron connections associated with HEBkSAT (k = 3): (28)

The improved global minimum energy is the projected global minimum energy to be achieved when a retrieval process ends. An EBkSAT logic program represents a combinatorial optimization problem and, therefore, the improved global minimum energy is calculated as in Eq (30). According to Eq (26), FEBkSAT consists of 3 logical clauses with 9 randomly selected variables from a pre-determined set of 9 variables, 7 positive literals, and 2 negative literals: (29)

Eq (29) is one of the consistent interpretations that makes the entire Boolean formula FEBkSAT true. Substituting Eq (29) into Eq (28), the expected global minimum energy is obtained as in Eq (30). (30) Eq (30) is used to assess the correctness of the neuron states produced by the network throughout the retrieval phase.

2.2 Proposed artificial dragonflies algorithm in HNN for EBkSAT representation

The objective of EBkSAT is to decide whether a given Boolean formula in Conjunctive Normal Form (CNF) has a truth assignment that satisfies exactly one literal in every clause, or to determine the absence of such an assignment. Since the original dragonfly algorithm was developed to address optimization problems with continuous functions, whereas the SAT problem is a discrete optimization problem, ADA has been adapted in this work to deal with EBkSAT logic. The following steps demonstrate the procedure applied in the suggested approach; a key element is the reformulation of the ADA operators to tackle discrete rather than continuous optimization.

This limitation necessitates an algorithm that effectively flips neuron states, building on the previously improved solution while exploring a wide solution space. The fitness value of a candidate in the ADA is provided in Eq (31):

$$f_{x_r} = \sum_{i=1}^{m} C_i \quad (31)$$

where r ∈ {1, 2, 3, …, Ndragonfly}, xr refers to the location of the rth dragonfly in d-dimensional space, Ndragonfly denotes the number of prospective search agents, and m describes the maximum number of clauses in the FEBkSAT logical program tested by ADA. Each clause contributes to the fitness as follows:

$$C_i = \begin{cases} 1, & \text{if clause } C_i \text{ is satisfied} \\ 0, & \text{otherwise} \end{cases} \quad (32)$$

Each neuron string in the HNN network refers to an assignment that matches the Exact kSAT instance. The suggested ADA objective function involves maximizing the artificial dragonflies’ fitness (the neuron string). In general, global optimization can be presented as in Eq (33), without loss of generality, as a minimization problem.

(33)
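A minimal C++ sketch of the fitness computation implied by Eqs (31) and (32) is given below, assuming that fitness counts the clauses in which exactly one literal is true, so that the maximum fitness equals the number of clauses m.

```cpp
#include <cstdlib>
#include <vector>

using Clause  = std::vector<int>;    // signed variable indices, as before
using Formula = std::vector<Clause>;

// ADA fitness of one dragonfly (a bipolar neuron string), in the spirit of
// Eqs (31)-(32): the number of EBkSAT clauses with exactly one true literal.
// The maximum fitness therefore equals m, the number of clauses in FEBkSAT.
int fitness(const Formula& f, const std::vector<int>& s) {
    int satisfied = 0;
    for (const Clause& c : f) {
        int trueLits = 0;
        for (int lit : c) {
            int v = s[std::abs(lit)];
            if ((lit > 0 && v == 1) || (lit < 0 && v == -1)) ++trueLits;
        }
        if (trueLits == 1) ++satisfied;   // the "exactly one" rule of EBkSAT
    }
    return satisfied;
}
```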

The mapping of ADA in HNN for EBkSAT is shortened to ADA-HNN-EBkSAT, and the stages of ADA-HNN-EBkSAT are described herein:

Stage 1: Initialization

Any optimization or search mainly aims to find the best solution with respect to the problem’s variables, from which a vector of the variables to be optimized can be formed. An initial population of size Ndragonfly of artificial dragonflies, together with a step matrix, is generated. The state of each artificial dragonfly in the search space is denoted by 1 or −1, representing the True or Falsification that corresponds to a possible mapping for the Exact kSatisfiability problem. Each solution can be randomized within the boundaries of the variables, as in Eq (34): (34) where r ∈ [1, 2, 3, …, Ndragonfly] specifies the location of the rth dragonfly in the fth dimensional space, and Ndragonfly specifies the number of prospective search agents. The random values are homogeneously (uniformly) distributed, and the problem aims to find the ideal EBkSAT assignment.

Stage 2: Calculate the distance for each artificial dragonfly.

The distance to the neighbourhood is determined by calculating the Euclidean distance between all dragonflies and picking N of them. The distance σij obeys the Euclidean distance metric in Eq (35), as follows:

$$\sigma_{ij} = \sqrt{\sum_{f=1}^{d} (x_{if} - x_{jf})^2} \quad (35)$$

where xif and xjf are the positions of the ith and jth artificial dragonflies in dimension f. Each artificial dragonfly is assigned to its neighbourhood area based on its fitness value.
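A direct C++ rendering of the distance computation of Eq (35) might look as follows; the neighbourhood radius against which this distance is compared is a separate, iteration-dependent parameter.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Euclidean distance between two dragonfly positions, Eq (35). In ADA this
// distance is compared against a neighbourhood radius that grows with the
// iteration count to shift the swarm from exploration to exploitation.
double dist(const std::vector<double>& xi, const std::vector<double>& xj) {
    double sum = 0.0;
    for (std::size_t f = 0; f < xi.size(); ++f)
        sum += (xi[f] - xj[f]) * (xi[f] - xj[f]);
    return std::sqrt(sum);
}
```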

Stage 3: Fitness Evaluation.

Each variable vector is examined according to its fitness at a given quantified variable position within the modified solution space. All randomized variable vectors undergo a fitness-function assessment. The fitness of each dragonfly is computed based on the initial position variables, generated randomly between the variables’ lower and upper limits, using Eqs (31) and (32). The parameters of each dragonfly are the same as the variables in the optimization problem, and their combination determines the attractiveness of the artificial dragonfly. The maximum fitness is reached when all m clauses are satisfied (Eq 36), and the algorithm is terminated when this optimum fitness is attained.

Stage 4: Update the position and velocity of artificial dragonflies.

Both the positions and the velocities of the dragonflies are determined and updated (flipped) based on Eqs (37) to (39):

Separation strategy.

This is the avoidance of static collisions between artificial dragonflies in the same neighbourhood.

$$S_r = -\sum_{j=1}^{N} (x_r - x_j) \quad (37)$$

Alignment strategy.

This is the matching of an artificial dragonfly’s velocity with that of other individuals in the same neighbourhood.

$$A_r = \frac{1}{N}\sum_{j=1}^{N} v_j \quad (38)$$

Cohesion strategy.

This technique demonstrates the artificial dragonfly’s inclination toward the middle of the neighbourhood, maintaining the artificial dragonflies’ unity by setting the path toward the neighbourhood core:

$$C_r = \frac{1}{N}\sum_{j=1}^{N} x_j - x_r \quad (39)$$

where xr and vr describe the position and velocity of the rth potential dragonfly, xj and vj the position and velocity of the jth neighbouring dragonfly, and N the number of neighbouring potential search agents.

Stage 5: Update Food source and Enemy source.

The attraction of the artificial dragonfly toward the food source and its distraction from the enemy obey the following equations:

$$F_r = \zeta_{food} - x_r \quad (40)$$

$$E_r = \zeta_{enemy} + x_r \quad (41)$$

where xr describes the artificial dragonfly’s position, and ζfood and ζenemy describe the positions of the corresponding food source and enemy source. The food and enemy sources are taken as the best and worst solutions, respectively, observed so far in the solution space.

The adaptive integration of the preceding operators helps to correctly flip the artificial dragonfly positions. A step vector and a position vector are applied to swap the artificial dragonflies’ positions in the search space and simulate their associated hunting movements in every round. The step vector indicates the swapping direction of the artificial dragonflies, as follows:

$$\Delta x_r(t+1) = (s_r S_r + a_r A_r + c_r C_r + f_r F_r + e_r E_r) + w_r \Delta x_r(t) \quad (42)$$

where wr defines the inertia weight of the potential search agent, and sr, ar, cr, fr, and er are the weights of separation, alignment, cohesion, food attraction, and enemy distraction, respectively.

In the discrete domain, the positions of the artificial dragonflies are updated by obeying a transfer function that takes the velocity values as inputs and returns a number representing the likelihood of switching the artificial dragonfly’s position in the solution space [61].

In ADA, an artificial dragonfly moves by flipping a number of bits. Therefore, the artificial dragonfly’s velocity can be represented by the probabilities of bits being changed in each iteration; i.e., the artificial dragonfly moves within the search space by assuming −1 or 1 values only, where each velocity is the probability of the position bit taking the value 1. To lie within the range [0,1], the velocity, being a probability, must be bounded [62]. The function that does this is the sigmoid function, formulated mathematically as follows:

$$T(\Delta x_r) = \frac{1}{1 + e^{-\Delta x_r}} \quad (43)$$

After calculating the probability of changing position for every dragonfly, Eq (44) is used to update (i.e., flip the neurons of) the position of the search agent in the bipolar search space. The location switch is determined by comparison with uniformly generated random numbers between 0 and 1, formulated herein:

$$x_r(t+1) = \begin{cases} -x_r(t), & \text{if } rand < T(\Delta x_r(t+1)) \\ x_r(t), & \text{otherwise} \end{cases} \quad (44)$$

With the above components, ADA can repeatedly flip the bipolar bits of solutions until an end condition is satisfied.

Stage 6: New solution generation.

Finally, a new population of bipolar artificial dragonflies emerges. The algorithm returns to Stage 2, quantifies the cost of each artificial dragonfly within the new population, and repeats the loop until the stopping condition is met; the fitness function is computed from the updated locations and velocities. The modification of the artificial dragonfly locations continues until the criteria are met. Table 2 presents the pseudocode of ADA.
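Putting Stages 4 and 5 together, one iteration of the discrete ADA update for a single dragonfly can be sketched in C++ as follows. The operator forms follow the standard dragonfly algorithm of [45] (Eqs 37–44); the function name, signature, and data layout are ours, and at least one neighbour is assumed.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// One discrete-ADA step for dragonfly r over bipolar positions in {-1, 1},
// combining Eqs (37)-(44). nbr holds the indices of r's neighbours (assumed
// non-empty); food and enemy are the best and worst strings found so far.
// w, sw, aw, cw, fw, ew correspond to w_r, s_r, a_r, c_r, f_r, e_r in Eq (42).
void adaStep(std::vector<std::vector<double>>& X,    // positions
             std::vector<std::vector<double>>& dX,   // step (velocity) vectors
             const std::vector<int>& nbr, int r,
             const std::vector<double>& food,
             const std::vector<double>& enemy,
             double w, double sw, double aw, double cw,
             double fw, double ew, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    const double N = static_cast<double>(nbr.size());
    for (std::size_t f = 0; f < X[r].size(); ++f) {
        double S = 0.0, A = 0.0, C = 0.0;
        for (int j : nbr) {
            S -= (X[r][f] - X[j][f]);   // separation, Eq (37)
            A += dX[j][f];              // alignment,  Eq (38)
            C += X[j][f];               // cohesion,   Eq (39)
        }
        A /= N;
        C = C / N - X[r][f];
        const double F = food[f]  - X[r][f];   // food attraction,   Eq (40)
        const double E = enemy[f] + X[r][f];   // enemy distraction, Eq (41)
        // Step-vector update, Eq (42)
        dX[r][f] = (sw * S + aw * A + cw * C + fw * F + ew * E) + w * dX[r][f];
        // Sigmoid transfer, Eq (43), then probabilistic bipolar flip, Eq (44)
        const double T = 1.0 / (1.0 + std::exp(-dX[r][f]));
        if (U(rng) < T) X[r][f] = -X[r][f];
    }
}
```

In the hybrid model, each flipped bipolar string would then be re-evaluated with the EBkSAT fitness above before the food and enemy sources are updated for the next round.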

Implementation procedure

This section describes the implementation procedure of neuro-heuristic searching for EBkSAT in the Hopfield artificial neural network. The primary task of the program involves finding the optimum “model” for the optimum EBkSAT occurrences. The variables and the clauses were initially randomized based on EBkSAT logical principles. Simulations were completed with a varying number of neurons, i.e., 10 ≤ NN ≤ 120. The executions of these models, conducted on the EBkSAT logical representation, proceed via the following steps:

  1. Present the given logic program in Eq (25).
  2. Translate all EBkSAT logical clauses to Boolean algebraic form as in Eq (26).
  3. Assign a neuron to each variable in the representation of the EBkSAT logical rule, Eq (27).
  4. Randomize the neurons’ states, then initialize all connection strengths to zero, as follows:
(45)
  5. Obtain the cost function for the EBkSAT by utilizing Eq (27).
  6. Compare the cost function in Eq (27) with the energy dynamics in Eq (22), then obtain the values of the synaptic weight vector as follows:
(46)
  7. Check clause satisfaction by employing the ADA, ABC, ES, and AIS searching procedures. A satisfying assignment is then stored as the CAM in the HNN.
  8. Randomize the neurons’ states, then calculate the respective local field hi(t) for the state space by utilizing Eq (19); a configuration is deemed stable when it stays unchanged for five loops.
  9. Find the network’s corresponding final state by applying the Lyapunov energy dynamics of Eq (22).
  10. Check whether the final energy derived is a global or a local minimum based on the condition in Eq (23). Fig 2 displays the flowchart of the implementation procedure of the different algorithms in HNN with EBkSAT.
Fig 2. Flowchart for HNN-EBkSAT implementation procedure.

Model experimental setup

In this study, the ADA algorithm was integrated into the HNN to search for the optimal solution for the EBkSAT model logic representation. This hybrid computational model was evaluated using three existing models from the literature. The HNN models utilized simulated datasets to establish the EBkSAT logical clauses. To ensure a meaningful comparison among the HNN models, the entire source code was developed as a simulation program using Dev C++ release version 5.11. The program was executed on a Windows 8.1 device with an Intel® Celeron® CPU B800@4GHz processor and 8 GB RAM. Table 3 provides a list of the appropriate parameters used during the execution of the ADA in the HNN model.

Table 3. List of some parameters of the ADA-HNN-EBkSAT model.

Performance evaluation measure

Performance measures play a crucial role in the design process of HNN models. These measures, known as "difference measures," quantify the disparities between expected and observed values, providing a reliable assessment of the model’s precision and accuracy. After the training process is successfully executed, the neural network can calculate various metrics including GmR, RMSE, MAPE, CT, and accuracy. The equations for these metrics are presented in Eqs (47) to (51).

(47)(48)(49)(50)(51)
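Since the displayed forms of Eqs (47)–(51) are not reproduced here, the following C++ sketch assumes the definitions customary in this literature: RMSE and MAPE computed from the gap between the maximum fitness and the fitness reached in each run, and GmR as the fraction of runs that reach the global minimum energy.

```cpp
#include <cmath>
#include <vector>

// Learning-phase error metrics, assuming the definitions customary in this
// literature (the displayed Eqs (47)-(51) are not reproduced here): fmax is
// the maximum fitness (the number of clauses m) and fi the fitness of run i.
struct Metrics { double rmse, mape, gmr; };

Metrics evaluate(const std::vector<double>& fi, double fmax,
                 int globalMinimaHits, int totalRuns) {
    const double n = static_cast<double>(fi.size());
    double sq = 0.0, ape = 0.0;
    for (double f : fi) {
        sq  += (fmax - f) * (fmax - f);
        ape += std::fabs(fmax - f) / fmax;
    }
    Metrics m;
    m.rmse = std::sqrt(sq / n);          // root mean square error
    m.mape = 100.0 * ape / n;            // mean absolute percentage error
    m.gmr  = static_cast<double>(globalMinimaHits) / totalRuns;  // global minimum ratio
    return m;
}
```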

Experimental results and discussion

The Artificial Dragonfly Algorithm (ADA) has been incorporated into the Hopfield Neural Network (HNN) for Exact Boolean k-Satisfiability Logical Representation (EBkSAT). The purpose is to accelerate the training capacity of the HNN for optimal representation of EBkSAT logical rules and to address the premature convergence behaviour of the HNN. The performance of ADA in enhancing the training process of HNN has been compared to the Artificial Bee Colony (ABC) in the Hopfield Neural Network (ABC-HNN-EBkSAT), the Artificial Immune System (AIS) in the Hopfield Neural Network (AIS-HNN-EBkSAT), and the traditional exhaustive search technique (ES) in the Hopfield Neural Network (ES-HNN-EBkSAT). Figs 3–6 show the searching capacity of the existing models and the proposed model in finding the global optimum based on the stated performance metrics.

Fig 6. Implementation time performance of various HNN-EBkSAT models.

Fig 3 illustrates the searching behaviour of the HNN-EBkSAT logic representation in terms of the model’s global minimum ratio (GmR). The performance of the HNN learning phase is reported for 10 ≤ NN ≤ 120. The efficiency of a hybrid model can be assessed by examining its GmR for different levels of complexity in the network’s neurons. Based on the results presented in Fig 3, it is evident that the ADA-HNN-EBkSAT and ES-HNN-EBkSAT models achieved more accurate neural states than the ABC-HNN-EBkSAT and AIS-HNN-EBkSAT models. Notably, the ES-HNN-EBkSAT model employed an exhaustive trial-and-error search process to ensure compliance with the clauses, which could only accommodate values up to NN ≤ 60 owing to the nature of the exhaustive search technique; this exhaustive process increased the computational pressure to reach the exact neural configuration. The relationship between GmR and the energy state at the end of a computation cycle is explained in [22].

Hypothetically, if the GmR of a hybrid network is very near to 1, almost all the solutions of that system achieve global minimum energy (i.e., 100 per cent satisfied clauses). It was observed that ABC-HNN-EBkSAT and AIS-HNN-EBkSAT were associated with some drawbacks, including a tendency to be trapped at sub-optimal output weights and a slow rate of convergence. Fig 3 shows that the ABC-HNN-EBkSAT neuron states were trapped at NN = 60 and 90, due to neuron oscillations in the network searching process. AIS-HNN-EBkSAT recorded a high number of neurons in suboptimal solutions (wrong patterns) at NN = 50, 80, 100, and 120; nevertheless, success rates higher than 98% were still reported. On the other hand, the proposed ADA-HNN-EBkSAT model recorded better efficiency than the existing models in the process of neuro-searching for the EBkSAT logic representation. As the number of neurons increases, the network’s complexity grows and the search becomes progressively more difficult; even so, the ADA-HNN-EBkSAT model was able to sustain more accurate neuron states than the other models. This is due to the searching capacity of ADA, which reduces the complexity of the network in searching for the correct EBkSAT representation.

In addition, low diversification among the initial clauses makes it challenging for early solutions to fit; therefore, ADA utilizes optimization mechanisms such as separation, cohesion, alignment, a food factor, and an enemy factor to achieve optimal solutions. The incorporation of ADA effectively reduces the learning complexity as the number of neurons increases during simulation. The success of ADA-HNN-EBkSAT in reaching global solutions can be attributed to its efficacy in global and local search processes, acting as a learning algorithm. In comparison to the other algorithms (ABC, AIS, and ES), ADA demonstrates strong local search capabilities throughout the initial and final stages of the search [63–65]. This exploration highlights the robustness and efficiency of ADA’s global and local search capabilities in accelerating the HNN learning process for optimal EBkSAT representation.

In Figs 4 and 5, the trend of ES-HNN-EBkSAT shows a continuous increase in error accumulation. This can be attributed to the brute-force approach used in the search for a satisfiability mapping over 10 ≤ NN ≤ 60, which struggles to handle the complexity of the escalating number of neurons. On the other hand, the proposed hybrid searching method incorporating ADA mechanisms achieves improved efficiency in searching for the optimal EBkSAT representation. This is accomplished through the intelligent search mechanisms of the ADA operators, including separation, alignment, and cohesion, which facilitate a satisfying assignment, and through the inclusion of multiple optimization layers that enable the model to reach the global minimum energy. Additionally, a food-search and enemy-avoidance layer filters out non-improving solutions throughout the learning phase of HNN. The limitations of the ES-HNN-EBkSAT model are evident in its accumulation of high RMSE and MAPE during the learning stage, as well as its slow convergence rate, which requires more iterations to achieve global convergence compared to the ADA-HNN-EBkSAT, ABC-HNN-EBkSAT, and AIS-HNN-EBkSAT models.

The error analysis presented in Figs 4 and 5 reveals that ADA-HNN-EBkSAT achieves lower RMSE and MAPE, approximately 20% lower than ABC-HNN-EBkSAT and 28% lower than AIS-HNN-EBkSAT. This demonstrates the capability of ADA to reduce the model’s sensitivity to errors by minimizing iterations. ADA-HNN-EBkSAT incorporates multiple optimization strategies to strike a balance between the exploitation and exploration aspects of the search process.

To explore the search capacity of HNN, an alignment strategy is utilized, while a cohesion strategy is employed to exploit the HNN search space. Additionally, to facilitate the transition between exploitation and exploration and enhance the HNN search capacity, the radii of the neighbourhoods are increased proportionally with the number of iterations. Adapting the swarming weights throughout the optimization process is another approach for balancing exploitation and exploration. The best and worst EBkSAT clauses obtained thus far serve as the sources of food and the enemy, respectively. This mechanism promotes convergence toward promising areas and divergence away from non-promising regions within the solution space. These ADA features compel the HNN-EBkSAT model to reduce the number of iterations during the learning phase, ensuring minimal error accumulation upon the completion of a computational cycle.

This systematic optimization partition in ADA improved the local and global search process of HNN for optimal EBkSAT representation. This partition of the solution space allowed the model to search effectively for the proper assignment across the entire solution space, leading to the lowest possible error. Specifically, the search area for ADA-HNN-EBkSAT is partitioned into 5 spaces. ABC-HNN-EBkSAT and AIS-HNN-EBkSAT, by contrast, have only two partition spaces, and ES-HNN-EBkSAT none, which leads to non-fitting solutions during the early stages of the search process; the trial-and-error method then necessitates further iterations to obtain a global solution. Regarding the evaluation of RMSE and MAPE, ADA can be regarded as an appropriate approach in the HNN for carrying out the EBkSAT logic representation successfully.

In Fig 6, the trend of implementation time (CT) is displayed for all models under study. The observed running times show that as the program became more complex, considerably more effort and time were needed to find a global solution. All models under study displayed a close time range over 10 ≤ NN ≤ 40. ES-HNN-EBkSAT consumed more implementation time over 20 ≤ NN ≤ 60, making it slower than the other models: ES-HNN-EBkSAT was approximately 862 seconds slower than ADA-HNN-EBkSAT, and the ABC-HNN-EBkSAT model was approximately 724 seconds slower than AIS-HNN-EBkSAT. According to Fig 6, ADA-HNN-EBkSAT required less time to execute than the ABC-HNN-EBkSAT and AIS-HNN-EBkSAT models; as the number of neurons increases, the accumulated time grows. In addition, ADA-HNN-EBkSAT required fewer iterations to find the desired solution, resulting in reduced execution time. On the other hand, ES-HNN-EBkSAT needed more iterations to find the global solution, as an interpretation collapses when optimal fitness cannot be achieved, triggering a new search for an interpretation with a different fitness value. Hence, ADA has proven more effective in pursuing the preferred assignments, even in scenarios with high literal complexity. This can be attributed to the increased number of layers in the ADA searching process, allowing more eligible solutions to be resolved in a shorter time frame.

These ADA features enabled HNN-EBkSAT to complete the learning phase more quickly than current models in the literature; ADA-HNN-EBkSAT was verified to finish the learning process within a slightly shorter timeframe than the other existing models. Nonetheless, all HNN models demonstrated competence in optimizing EBkSAT and its variants, successfully computing a global solution within feasible CPU time.

Table 4 presents the overall testing error and accuracy of the different models for the classification problem. The results demonstrate that the proposed logical rule, EBkSAT, consistently provides optimal classification to the HNN during the learning phase, resulting in very low error. The training errors, MAPE and RMSE, quantify the deviation between predicted and actual outputs during training; lower values indicate better accuracy and fewer training errors.

In terms of MAPE, the ADA-HNN-EBkSAT model achieved the lowest training error of 7.65, followed by the ABC-HNN-EBkSAT model with a slightly higher MAPE of 10.93; the AIS-HNN-EBkSAT model recorded a MAPE of 12.17, and the ES-HNN-EBkSAT model the highest MAPE of 21.16. Similarly, in terms of RMSE, the ADA-HNN-EBkSAT model had the lowest training error of 3.47, followed by ABC-HNN-EBkSAT with 4.93, AIS-HNN-EBkSAT with 8.16, and ES-HNN-EBkSAT with the highest RMSE of 12.88. The accuracy measures reflect each model's performance in correctly classifying instances: ADA-HNN-EBkSAT achieved the highest accuracy of 93.1%, outperforming the other models, while ABC-HNN-EBkSAT, AIS-HNN-EBkSAT, and ES-HNN-EBkSAT achieved 89.3%, 87.5%, and 82.5%, respectively. These results highlight the superior accuracy of the ADA-HNN-EBkSAT model for the given classification problem.
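For reference, the snippet below shows how RMSE, MAPE, and accuracy figures of this kind are typically computed from predicted and target outputs. The toy arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no zero targets."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def accuracy(labels_true, labels_pred):
    """Fraction of correctly classified instances, as a percentage."""
    return float(np.mean(labels_true == labels_pred) * 100)

# hypothetical values for illustration only
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.8, 3.2, 4.3])
print(rmse(y_true, y_pred))   # ~0.2121
print(mape(y_true, y_pred))   # ~8.54
```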

Statistical analysis was conducted to compare the performance of the models based on MAPE, RMSE, and accuracy. The ANOVA tests revealed significant differences in the mean MAPE, RMSE, and accuracy values among the models. Tukey’s HSD test was then performed to identify specific pairwise differences between the models. The results of Tukey’s HSD test in Table 5 indicate the following significant differences:

Table 5. Comparison of MAPE mean and RMSE mean for statistical significance.

https://doi.org/10.1371/journal.pone.0286874.t005

In Table 5, the first column lists the pairwise model comparisons, and the second column reports statistical significance, with "p < 0.05" denoting that the observed difference between the compared models is statistically significant.

The results of this statistical analysis provide valuable insight into the comparative effectiveness of the models for classification tasks. Researchers and practitioners can use this information to select the most suitable model for their specific requirements, and additional post hoc tests can offer a more nuanced analysis and further the understanding of differences among the models. This analysis serves as a useful reference for those working on classification problems and contributes to ongoing research in the field. Overall, the findings indicate significant differences in mean MAPE and RMSE between the ADA-HNN-EBkSAT model and the other models: ADA-HNN-EBkSAT consistently outperforms ABC-HNN-EBkSAT, AIS-HNN-EBkSAT, and ES-HNN-EBkSAT in both accuracy and error metrics.
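As a sketch of this testing pipeline, assuming per-run MAPE samples are available for each model (the normal samples below are hypothetical stand-ins centred on the reported means, not the study's data), SciPy's one-way ANOVA and statsmodels' Tukey HSD can be combined as follows:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# hypothetical per-run MAPE samples for each model
ada = rng.normal(7.65, 1.0, 30)
abc = rng.normal(10.93, 1.0, 30)
ais = rng.normal(12.17, 1.0, 30)
es  = rng.normal(21.16, 1.0, 30)

# one-way ANOVA: do the model means differ at all?
F, p = f_oneway(ada, abc, ais, es)
print(f"ANOVA: F = {F:.2f}, p = {p:.3g}")

# Tukey's HSD: which specific pairs differ at alpha = 0.05?
scores = np.concatenate([ada, abc, ais, es])
groups = ["ADA"] * 30 + ["ABC"] * 30 + ["AIS"] * 30 + ["ES"] * 30
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```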

Conclusion

Based on the simulation results, the ADA-HNN-EBkSAT model exhibited superior efficiency and robustness compared to the ABC-HNN-EBkSAT, AIS-HNN-EBkSAT, and ES-HNN-EBkSAT models in accelerating the learning phase of the HNN for the EBkSAT logic program, while achieving lower learning error. The proposed model also demonstrated good agreement with the ABC-HNN-EBkSAT and AIS-HNN-EBkSAT models in terms of GmR, RMSE, MAPE, CT, and accuracy metrics.

Notably, the ADA-HNN-EBkSAT model achieved a GmR of 1, even when handling complex network structures, indicating its efficacy in approaching the global minimum. This highlights the effectiveness of the ADA algorithm as a powerful heuristic for enhancing the training phase of the HNN in the context of the EBkSAT logic program.

However, it is important to acknowledge certain practical and theoretical limitations in this study. Firstly, the evaluation was conducted based on simulated data sets, and the performance of the proposed model should be further validated using real-world datasets to ensure its generalizability. Additionally, the comparison was limited to specific existing models, and incorporating a broader range of benchmark models would provide a more comprehensive evaluation. Furthermore, although the ADA-HNN-EBkSAT model showed promising results in terms of accuracy and computational time, there may be specific problem instances or scenarios where alternative metaheuristic methods could yield better performance. Exploring and incorporating other metaheuristic approaches in future research would enable a more comprehensive understanding of the potential acceleration of the HNN computational phase.

Finally, the ADA-HNN-EBkSAT model demonstrated its superiority in accelerating the learning phase of the HNN for the EBkSAT logic program, outperforming existing models in terms of efficiency and accuracy. While acknowledging the practical and theoretical limitations of the study, the findings suggest the potential practical applicability of the proposed model in various domains where complex optimization problems are encountered.

Acknowledgments

The authors are thankful to the Deanship of Scientific Research at Najran University and the Registrar of Universiti Tun Hussein Onn Malaysia.

References

  1. Aiman U, Asrar N. Genetic algorithm based solution to SAT-3 problem. J Comput Sci Appl. 2015;3: 33–39.
  2. Cook SA. The complexity of theorem-proving procedures. Proc Third Annu ACM Symp Theory Comput. 1971; 151–158.
  3. Lemos H, Prates M, Avelar P, Lamb L. Graph colouring meets deep learning: Effective graph neural network models for combinatorial problems. Proceedings—International Conference on Tools with Artificial Intelligence, ICTAI. 2019.
  4. Buño KC, Cabarle FGC, Calabia MD, Adorna HN. Solving the N-Queens problem using dP systems with active membranes. Theor Comput Sci. 2018.
  5. Lampis M. Improved inapproximability for TSP. Theory Comput. 2014.
  6. Kundu S, Acharyya S. A SAT approach for solving the nurse scheduling problem. IEEE Region 10 Annual International Conference, Proceedings/TENCON. 2008.
  7. Matos GP, Albino LM, Saldanha RL, Morgado EM. Solving periodic timetabling problems with SAT and machine learning. Public Transp. 2021.
  8. Rintanen J. Planning as satisfiability: Heuristics. Artif Intell. 2012;193: 45–86.
  9. Chowdhary KR. Fundamentals of artificial intelligence. Springer; 2020.
  10. Gelfond M, Kahl Y. Knowledge representation, reasoning, and the design of intelligent agents: The answer-set programming approach. Cambridge University Press; 2014.
  11. Vizel Y, Weissenbacher G, Malik S. Boolean satisfiability solvers and their applications in model checking. Proc IEEE. 2015;103: 2021–2035.
  12. Ahmad W, Rushdi AMA. A new cryptographic scheme utilizing the difficulty of big Boolean satisfiability. Int J Math Eng Manag Sci. 2018;3: 47–61.
  13. AlKasem HH, Menai MEB. Stochastic local search for Partial Max-SAT: an experimental evaluation. Artif Intell Rev. 2021;54: 2525–2566.
  14. Sabar NR, Kendall G. Population based Monte Carlo tree search hyper-heuristic for combinatorial optimization problems. Inf Sci. 2015;314: 225–239.
  15. Krotov D, Hopfield JJ. Unsupervised learning by competing hidden units. Proc Natl Acad Sci U S A. 2019. pmid:30926658
  16. Cai F, Kumar S, Van Vaerenbergh T, Sheng X, Liu R, Li C, et al. Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks. Nat Electron. 2020.
  17. Fahimi Z, Mahmoodi MR, Nili H, Polishchuk V, Strukov DB. Combinatorial optimization by weight annealing in memristive Hopfield networks. Sci Rep. 2021. pmid:34385475
  18. Wang J, Wang J, Han QL. Multivehicle task assignment based on collaborative neurodynamic optimization with discrete Hopfield networks. IEEE Trans Neural Netw Learn Syst. 2021. pmid:34077371
  19. Abdullah WATW. Logic programming on a neural network. Int J Intell Syst. 1992.
  20. Kong D, Hu S, Wang J, Liu Z, Chen T, Yu Q, et al. Study of recall time of associative memory in a memristive Hopfield neural network. IEEE Access. 2019.
  21. Abubakar H, Masanawa AS, Yusuf S, Boaku GI. Optimal representation to high order random Boolean kSatisfiability via election algorithm as heuristic search approach in Hopfield neural networks. J Niger Soc Phys Sci. 2021.
  22. Sathasivam S. Acceleration technique for neuro symbolic integration. Appl Math Sci. 2015.
  23. Abubakar H. An optimal representation to Random Maximum k Satisfiability on the Hopfield Neural Network for high order logic (k ≤ 3). Kuwait J Sci. 2022;49: 1–16.
  24. Li B, Li Y. Existence and global exponential stability of almost automorphic solution for Clifford-valued high-order Hopfield neural networks with leakage delays. Complexity. 2019.
  25. Alenezi F, Santosh KC. Geometric regularized Hopfield neural network for medical image enhancement. Int J Biomed Imaging. 2021. pmid:33552152
  26. Johanyák ZC. A modified particle swarm optimization algorithm for the optimization of a fuzzy classification subsystem in a series hybrid electric vehicle. Teh Vjesn—Tech Gaz. 2017.
  27. Lawal AI, Kwon S, Kim GY. Prediction of the blast-induced ground vibration in tunnel blasting using ANN, moth-flame optimized ANN, and gene expression programming. Acta Geophys. 2021;69: 161–174.
  28. Lawal AI, Kwon S, Kim GY. Prediction of an environmental impact of tunnel blasting using ordinary artificial neural network, particle swarm and dragonfly optimized artificial neural networks. Appl Acoust. 2021;181: 108122.
  29. Bacanin N, Sarac M, Budimirovic N, Zivkovic M, AlZubi AA, Bashir AK. Smart wireless health care system using graph LSTM pollution prediction and dragonfly node localization. Sustain Comput Inform Syst. 2022;35: 100711.
  30. Boonyaprapasorn A, Kuntanapreeda S, Ngiamsunthorn PS, Kumsaen T, Sethaput T. Time-varying sliding mode controller for heat exchanger with dragonfly algorithm. Int J Electr Comput Eng. 2023;13: 3958–3968.
  31. Zivkovic M, Zivkovic T, Venkatachalam K, Bacanin N. Enhanced dragonfly algorithm adapted for wireless sensor network lifetime optimization. Data Intelligence and Cognitive Informatics: Proceedings of ICDICI 2020. Springer; 2021. pp. 803–817.
  32. Rani P, Verma S, Rawat DB, Dash S. Mitigation of black hole attacks using firefly and artificial neural network. Neural Comput Appl. 2022;34: 15101–15111.
  33. Li X, Chen L, Tang Y. Hard: Bit-split string matching using a heuristic algorithm to reduce memory demand. Romanian J Inf Sci Technol. 2020.
  34. Zamfirache IA, Precup RE, Roman RC, Petriu EM. Reinforcement learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system. Inf Sci. 2022.
  35. Bojan-Dragos CA, Precup RE, Preitl S, Roman RC, Hedrea EL, Szedlak-Stinean AI. GWO-based optimal tuning of type-1 and type-2 fuzzy controllers for electromagnetic actuated clutch systems. IFAC-PapersOnLine. 2021.
  36. Xie L, Han T, Zhou H, Zhang ZR, Han B, Tang A. Tuna swarm optimization: a novel swarm-based metaheuristic algorithm for global optimization. Comput Intell Neurosci. 2021. pmid:34721567
  37. Ahmed AM, Rashid TA, Saeed SAM. Cat swarm optimization algorithm: a survey and performance evaluation. Comput Intell Neurosci. 2020. pmid:32405296
  38. Hai T, Theruvil Sayed B, Majdi A, Zhou J, Sagban R, Band SS, et al. An integrated GIS-based multivariate adaptive regression splines-cat swarm optimization for improving the accuracy of wildfire susceptibility mapping. Geocarto Int. 2023; 2167005.
  39. Mejia-Lavalle M, Jose Ruiz A, Joaquin Perez O, Marilu Cervantes S. Modified neural net for the Boolean satisfiability problem. Proceedings—2015 International Conference on Mechatronics, Electronics, and Automotive Engineering, ICMEAE 2015. 2015.
  40. Salama KM, Abdelbar AM. Learning neural network structures with ant colony algorithms. Swarm Intell. 2015.
  41. Costa Silva G, Carvalho EEO, Caminhas WM. An artificial immune systems approach to case-based reasoning applied to fault detection and diagnosis. Expert Syst Appl. 2020.
  42. Zamri NE, Alway A, Mansor MA, Kasihmuddin MSM, Sathasivam S. Modified imperialistic competitive algorithm in Hopfield neural network for Boolean three satisfiability logic mining. Pertanika J Sci Technol. 2020.
  43. Abubakar H, Rijal S, Sabri M, Masanawa SA, Yusuf S. Modified election algorithm in Hopfield neural network for optimal random k satisfiability representation. Int J Simul Multidisci Des Optim. 2020;16: 1–13.
  44. Abubakar H, Danrimi ML. Hopfield type of artificial neural network via election algorithm as heuristic search method for random Boolean kSatisfiability. Int J Comput Digit Syst. 2021;10: 659–673.
  45. Mirjalili S. Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl. 2016.
  46. Suresh V, Sreejith S. Generation dispatch of combined solar thermal systems using dragonfly algorithm. Computing. 2017.
  47. Mafarja M, Aljarah I, Heidari AA, Faris H, Fournier-Viger P, Li X, et al. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl-Based Syst. 2018.
  48. Díaz-Cortés MA, Ortega-Sánchez N, Hinojosa S, Oliva D, Cuevas E, Rojas R, et al. A multi-level thresholding method for breast thermograms analysis using Dragonfly algorithm. Infrared Phys Technol. 2018.
  49. Sayed GI, Tharwat A, Hassanien AE. Chaotic dragonfly algorithm: an improved metaheuristic algorithm for feature selection. Appl Intell. 2019.
  50. Xu J, Yan F. Hybrid Nelder–Mead algorithm and dragonfly algorithm for function optimization and the training of a multilayer perceptron. Arab J Sci Eng. 2019.
  51. Shirani MR, Safi-Esfahani F. Dynamic scheduling of tasks in cloud computing applying dragonfly algorithm, biogeography-based optimization algorithm and Mexican hat wavelet. J Supercomput. 2020.
  52. Björklund A, Husfeldt T. Exact algorithms for exact satisfiability and number of perfect matchings. Algorithmica. 2008.
  53. Allender E, Bauland M, Immerman N, Schnoor H, Vollmer H. The complexity of satisfiability problems: Refining Schaefer's theorem. J Comput Syst Sci. 2009.
  54. Porschen S. On variable-weighted exact satisfiability problems. Ann Math Artif Intell. 2007.
  55. Hopfield JJ, Tank DW. "Neural" computation of decisions in optimization problems. Biol Cybern. 1985. pmid:4027280
  56. Sherrington D. Physics and complexity. Philos Trans R Soc A. 2010. pmid:20123753
  57. Saratha S. Upgrading logic programming in Hopfield network. Sains Malays. 2010.
  58. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Feynman and Computation. 2018.
  59. Gosti G, Folli V, Leonetti M, Ruocco G. Beyond the maximum storage capacity limit in Hopfield recurrent neural networks. Entropy. 2019. pmid:33267440
  60. Sathasivam S, Mansor MA, Kasihmuddin MSM, Abubakar H. Election algorithm for random k satisfiability in the Hopfield neural network. Processes. 2020.
  61. Chen Y, Wang Z. Wavelength selection for NIR spectroscopy based on the binary dragonfly algorithm. Molecules. 2019. pmid:30682788
  62. Khunkitti S, Watson NR, Chatthaworn R, Premrudeepreechacharn S, Siritaratiwat A. An improved DA-PSO optimization approach for unit commitment problem. Energies. 2019.
  63. Alzaeemi SA, Sathasivam S. Artificial immune system in doing 2-satisfiability based reverse analysis method via a radial basis function neural network. Processes. 2020.
  64. Abdulhabib S, Alzaeemi S, Sathasivam S, Velavan M, Mamat M. Artificial immune system algorithm for training symbolic radial basis function neural network based 2 satisfiability logic programming. Turk J Comput Math Educ. 2021;12: 2591–2600.
  65. Abubakar H, Abdu Masanawa S, Yusuf S. Neuro-symbolic integration of Hopfield neural network for optimal maximum random kSatisfiability (Maxrksat) representation. J Reliab Stat Stud. 2020.