Article

Non-Systematic Weighted Satisfiability in Discrete Hopfield Neural Network Using Binary Artificial Bee Colony Optimization

by Siti Syatirah Muhammad Sidik 1, Nur Ezlin Zamri 2, Mohd Shareduwan Mohd Kasihmuddin 1,*, Habibah A. Wahab 3, Yueling Guo 1 and Mohd. Asyraf Mansor 2

1 School of Mathematical Sciences, Universiti Sains Malaysia, USM, Pulau Pinang 11800, Malaysia
2 School of Distance Education, Universiti Sains Malaysia, USM, Pulau Pinang 11800, Malaysia
3 School of Pharmaceutical Sciences, Universiti Sains Malaysia, USM, Pulau Pinang 11800, Malaysia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1129; https://doi.org/10.3390/math10071129
Submission received: 29 January 2022 / Revised: 5 March 2022 / Accepted: 7 March 2022 / Published: 1 April 2022
(This article belongs to the Special Issue Metaheuristic Algorithms)

Abstract

Recently, new variants of non-systematic satisfiability logic were proposed to govern the Discrete Hopfield Neural Network. These variants of the satisfiability logical rule provide flexibility and enhance the diversity of the neuron states in the Discrete Hopfield Neural Network. However, there is no systematic method to control and optimize the logical structure of non-systematic satisfiability. Additionally, the role of negative literals has been neglected, reducing the expressivity of the information that the logical structure holds. This study proposes an additional optimization layer of the Discrete Hopfield Neural Network, called the logic phase, that controls the distribution of negative literals in the logical structure. Hence, a new variant of non-systematic satisfiability named Weighted Random 2 Satisfiability is formulated, and a proposed searching technique, the binary Artificial Bee Colony algorithm, ensures the correct distribution of the negative literals. It is worth mentioning that the binary Artificial Bee Colony has few, flexible free parameters, with the modifications confined to the objective function. Specifically, this study utilizes a binary Artificial Bee Colony algorithm in which the update rule equation is modified using the NOT-AND (NAND) logic gate operator. The performance of the binary Artificial Bee Colony is compared with variants of the binary Artificial Bee Colony using different logic gate operators, and with conventional binary algorithms, namely Particle Swarm Optimization, Exhaustive Search, and the Genetic Algorithm. The experimental results and comparison show that the proposed algorithm is capable of finding the correct logical structure according to the initial ratio of negative literals.

1. Introduction

The Artificial Neural Network (ANN) has become a standard approach to solving a wide range of optimization problems because its structure simulates the activity of neurons in the human brain. The key feature of an ANN is its ability to mimic how the brain works, with the hope of building a system or model that can produce collective decision making. As a result, AI practitioners rely heavily on the features of ANNs, which results in a lack of in-depth understanding of the operations conducted by ANN models. According to [1], the ANN is considered a non-interpretable model due to its “black-box” nature, which fails to determine the direction and magnitude of the neurons in the network. In addition, the regularization and training time of an ANN model are difficult to set as constants. Hence, Wan Abdullah [2] embedded the Satisfiability (SAT) logical structure in a class of feedback ANN model, the Hopfield Neural Network (HNN). The role of SAT in HNN is to represent the neurons as meaningful (or meaningless) indicators; such auxiliary inputs can serve as a basis of reasoning to explain the predicted outputs. In layman’s terms, SAT is the symbolic instruction that navigates the operations of HNN. The work capitalized on the structure of SAT by proposing logical inference through the minimization of Horn Satisfiability (HornSAT) in HNN. The approach is capable of yielding connection strengths (synaptic weights) between the neurons, resulting in an energy-minimizing dynamic network. However, HornSAT consists of redundant variables, which deteriorates its practicality in representing real-life variables. Therefore, ref. [3] incorporated another variant of SAT in HNN called 2 Satisfiability (2SAT). 2SAT is a class of systematic SAT whereby the number of variables in each clause is restricted to exactly k variables, in this case k = 2. The performance of 2SAT in HNN can achieve 90% production of global minimum solutions, which indicates the credibility of 2SAT in representing the behavior of neurons in HNN.
Alternatively, ref. [4] counters the problem with the existing SAT structure by proposing Random 2 Satisfiability (RAN2SAT) in the same network. RAN2SAT is a non-systematic variant of SAT that considers a non-restrictive number of variables in each clause, whereby RAN2SAT consists of clauses with k = 1, 2. Such a formulation promotes logical variation, which is believed to widen the search space of global minimum solutions. The work managed to locate maximum production of global solutions, which indicates that RAN2SAT was successfully embedded in the operations of HNN. Despite acceptable results, little attention has been given to RAN2SAT, which is claimed to have potential as an optimal SAT in HNN. The repetitive final neuron states that result in overfitting solutions are still an issue to be solved. Recently, ref. [5] proposed a novel non-systematic SAT named Major 2 Satisfiability (MAJ2SAT). The user interface provided in MAJ2SAT controls the structure of the logic, with 2SAT as the dominant clause. As a result, MAJ2SAT can increase the neuron variation as the ratio of 2SAT increases. Despite a restrictive training environment, the synaptic weights attained lead to global minima solutions. However, overfitting occurs when the 2SAT ratio is increased, which explains the creation of repeated patterns of neuron states that disrupt the variation process and may reduce the precision of HNN. Thus far, there has been no investigation seeking a novel SAT that can provide non-overfitting solutions in HNN.
A metaheuristic is an effective method for obtaining an approximately optimal solution to a problem. For decades, a variety of metaheuristics of different classes have been proposed and improved. The work by Zamani [6] is worth exploring: it proposed a new metaheuristic named QANA, inspired by the behavior of migratory birds during their long-distance aerial migrations. The features of QANA are long-term and short-term memories, a V-echelon communication topology, and quantum-based navigation with two mutation strategies and a qubit-crossover operator. These features enhance population diversity and avoid premature convergence. The effectiveness of QANA was evaluated using benchmark functions and four engineering problems, and QANA was able to outperform state-of-the-art algorithms. However, the work shows significant findings of QANA only in solving single-objective, continuous problems. Another interesting work, by [7], improved Moth Flame Optimization (MFO) by introducing a migration strategy to enhance exploration and maintain diversity. The performance of the resulting M-MFO was evaluated on benchmark functions and compared with seven variants of MFO and eight state-of-the-art algorithms. It is worth mentioning that M-MFO managed to maintain diversity while the size of the problem increased. Unfortunately, the work did not investigate the performance of M-MFO under the lowest population diversity. In 2019, Kasihmuddin [8] implemented the Estimation of Distribution Algorithm (EDA) in the DHNN to predict the possible final neuron states with the lowest minimum energy, which leads to a global minimum solution. EDA was able to provide superior performance in all the metrics. Despite successful results, this work did not analyze the proposed model with different logical rules. In another development, Sathasivam [9] was the first to incorporate non-systematic SAT with a metaheuristic. The work implemented the Election Algorithm (EA) as a training algorithm in DHNN to optimize the structure of RAN2SAT. This work demonstrated the capability of EA and RAN2SAT to promote more diverse interpretations that lead to global minima solutions. Additionally, the EA can outperform other state-of-the-art algorithms due to its effective partitioning of the solution space, which reduces the complexity. However, the work did not analyze the impact of the retrieval phase. Note that metaheuristics allow us to effectively obtain approximate solutions to problems; however, extensive investigation is required to determine suitable metaheuristics for a specific problem.
Various social creatures exist in nature, such as ants, bees, and fish, where individuals of a given population cannot survive without the other members of that population. However, when we analyze the entire population, we see that it is extraordinarily intelligent through cooperation. Consequently, researchers have developed a type of algorithm inspired by these behaviors to solve optimization problems, named swarm intelligence (SI) algorithms. One of the most widely used SI algorithms in the literature is ABC, which simulates the foraging behavior of honeybees and was initially proposed to solve continuous optimization problems [10]. The work was extended by [11], who developed a binary variant of ABC named bitABC. The model uses the same framework as the original ABC but uses bitwise operations to optimize the food source. A comprehensive comparison was made with other binary ABC variants on 13 selected benchmark problems, and bitABC proved competitive in terms of final solution accuracy, convergence speed, and robustness. Subsequently, Kasihmuddin [3] proposed ABC as a training algorithm in HNN with 2SAT. This work demonstrates that ABC is more competitive in HNN than the standard method in obtaining satisfactory interpretations of 2SAT. However, the work lacks variation and diversification in the quality of the retrieved final neuron states. In another development, ABC was incorporated with a Double Layer Neural Network (DLNN) to solve the Bi-Level Programming Problem (BLPP) [12]. ABC was improved by cooperating with two major components of the Genetic Algorithm (GA), namely crossover and mutation. The improved ABC is able to yield high-quality global optimal solutions with higher accuracy in less time. The work proves that ABC is flexible enough to cooperate with other metaheuristics. Recently, ref. [13] proposed binABC, which modifies the solution update rule of basic ABC by replacing it with an XOR logic gate. A comprehensive comparison was conducted with six variants of binary ABC on a modern benchmark problem (CEC2015), and binABC proved capable of solving discrete problems. This finding shows that the update rule equation plays an important role in binary ABC. However, there is no investigation of the performance of different logic gate operators in the update rule equation. Although many researchers show the stellar performance of binary ABC in solving a variety of problems, there has been much uncertainty as to which basic logic gate operators work well in solving discrete optimization problems. Table 1 below summarizes the related research.
The important contributions of this study are listed as follows:
(i)
This study introduced an additional layer of DHNN that generates a non-systematic logical structure with a predetermined ratio of negative literals. A higher number of negative literals in the logical structure is expected to enhance the diversity of the retrieved final neuron states.
(ii)
This study explores the capability of the Artificial Bee Colony (ABC) algorithm with different logic gate operators in the update rule equation. The main purpose of ABC in this study is to find the correct logical structure according to a predetermined ratio of negative literals. A comprehensive comparison was conducted to reveal the effectiveness of ABC with the NAND operator. Next, the performance of ABC is compared with three state-of-the-art metaheuristics. Statistical analysis was conducted to show that ABC is capable of finding the correct logical structure.
The major contributions of the present study are as follows:
(i)
A novel non-systematic Weighted Random k Satisfiability (rkSAT) is proposed by combining first- and second-order Satisfiability logical rules with consideration of negative literals.
(ii)
The implementation of Artificial Bee Colony in the logic phase to control the distribution of negative literals in the logical structure.
(iii)
A comprehensive comparative analysis of five different logic gate operators in the food source equation in terms of producing the correct number of negative literals for higher ratios.
(iv)
The proposed Artificial Bee Colony in the logic phase will be compared with state-of-the-art binary algorithms.
Figure 1 illustrates the general flow of the proposed study to ensure that readers gain a better understanding of this approach. First, the logic phase is initialized once the ratio of negative literals and the number of literals are predetermined. ABC is utilized in the logic phase as the search technique to generate the correct structure of Weighted Random 2 Satisfiability (Γ_r2SAT). Note that the objective of the logic phase is to distribute the correct number of negative literals in Γ_r2SAT. Then, the training phase of DHNN optimizes Γ_r2SAT to ensure that the correct synaptic weights are obtained. The computation of the local field of the neuron states and the final energy happens in the testing phase. The quality of the output is evaluated by comparing the final energy. If the output lies within the tolerance value, the final neuron states achieve a global minimum solution; otherwise, they are trapped in a local minimum solution.
The outline of the current study is as follows: Section 2 provides a preliminary explanation of Weighted Random k Satisfiability. Section 3 briefly explains the Weighted Random 2 Satisfiability study in DHNN. Then, the proposed binary Artificial Bee Colony and other binary algorithms are discussed in Section 4. The methods and experimental setup are given in Section 5. The simulation of the study is discussed in Section 6. Finally, concluding remarks are given in the last section.

2. Weighted Random k Satisfiability

Satisfiability (SAT) concerns a Boolean formula that is said to be satisfiable if an assignment of the literals exists that makes the formula true. SAT is categorized as NP-complete [14], which inspires logic mining practitioners to utilize SAT as a form of symbolic instruction representing the entries of a real dataset. SAT can be divided into two main structural classes: systematic and non-systematic logical rules [15]. According to the findings in [4], non-systematic SAT provides more logical variation in terms of final neuron states. In this article, Weighted Random k Satisfiability is introduced as a new class of non-systematic SAT with consideration of the desired number of negative literals. The number of negative literals in the SAT can be predetermined, with a randomized choice of literals that can be either positive or negative. The general equation for Weighted Random 2 Satisfiability can be formulated as follows:
$$\Gamma_{r2SAT} = \bigwedge_{i=1}^{u} J_i^{(2)} \wedge \bigwedge_{i=1}^{v} J_i^{(1)} \tag{1}$$
whereby the values 2 (second) and 1 (first) depict the order of the clauses (k) in Γ_r2SAT, where k ∈ {1, 2}. Note that u and v denote the total number of second- and first-order clauses, respectively. The total number of clauses in Γ_r2SAT can also be defined as m, where m = u + v. Following Equation (1), the combination of orders in the formulation makes Γ_r2SAT a class of non-systematic SAT, whereby the number of literals in each clause is not restricted to only one value of k. Note that J_i^(2) ∈ {(B_i ∨ C_i), (B_i ∨ ¬C_i), (¬B_i ∨ C_i), (¬B_i ∨ ¬C_i)} and J_i^(1) ∈ {A_i, ¬A_i} are the possible clauses for the Γ_r2SAT formulation. In other words, Γ_r2SAT has non-redundant literals, whereby J_i ≠ J_{i+1} for any order of k. Considering both clause orders, the total number of literals in Γ_r2SAT can be calculated using Equation (2) below:
$$\lambda = 2u + v \tag{2}$$
According to the study by [8], the assignments that correspond to Γ_r2SAT can be presented as {TRUE, FALSE}^λ = {1, −1}^λ. The main goal of Γ_r2SAT is to find a set of assignments that makes the whole formulation TRUE. In comparison with previous structures, such as Karim et al. [15] and Alway et al. [5], the main structural difference of Γ_r2SAT is the feature of controlling the distribution of negative (negated) literals in the logical structure. Thus, a systematic mechanism is introduced to allow only a specific total number of negative literals in Γ_r2SAT. N_v is defined as the desired number of negative literals in Γ_r2SAT, which can be calculated using Equation (3) below:
$$N_v = \lfloor r\lambda \rfloor \tag{3}$$
r is the ratio of negative literals existing in Γ_r2SAT, with a range of r ∈ [0.5, 0.9]. The predetermined value of r depicts the percentage (%) of negative literals, whereby for r = 0.5, the percentage of negative literals in Γ_r2SAT is 50% of all λ. The proposed step size in this article is Δr = 0.1. Conjointly, the floor function in Equation (3) ensures that the number of negative literals is an integer less than or equal to rλ. For example, for (λ, r) = (6, 0.3) (an illustrative value of r outside the experimental range), the value of N_v is given by N_v = ⌊(0.3)(6)⌋ = ⌊1.8⌋ = 1. Thus, using this value of N_v, one possible structure of Γ_r2SAT is given by Γ_r2SAT = (B_1 ∨ C_1) ∧ (B_2 ∨ ¬C_2) ∧ A_1 ∧ A_2. This indicates that a specific method is required to filter the type of logical rule that satisfies the condition in Equation (3).
Unfortunately, without an appropriate mechanism, the distribution of negative literals in Γ_r2SAT will potentially be skewed to one side or biased toward a specific type of clause. Therefore, in this article, an additional optimization layer for controlling the distribution of the negative literals in Γ_r2SAT is presented. The value of N_v is set as the target of the objective function, with randomized selection of the literals to be negative or positive. In this context, a minimization task is introduced before generating the right structure of Γ_r2SAT, formulated in Equation (4):
$$f_L = \min\left|\kappa - N_v\right| \tag{4}$$
whereby the optimal value of the best fitness f_L is equal to 0. Note that κ is the total weight (the count of negative literals) in Γ_r2SAT, which can be formulated as in Equation (5):
$$\kappa = \sum_{i=1}^{\lambda} \eta_i \tag{5}$$
As κ counts the negative literals that should be achieved in order to satisfy Equation (4), κ always takes a non-negative integer value. Equation (6) presents the possible values of the weight η_i:
$$\eta_i = \begin{cases} 1, & \text{if } \neg Z_i \\ 0, & \text{otherwise} \end{cases} \tag{6}$$
where Z_i can be any arbitrary negative literal in a clause of order k. Thus, to guarantee that only the desired number of negative literals is achieved, without any bias toward a specific order of clauses, an additional phase named the logic phase is added to the model. This effort ensures that Γ_r2SAT is optimally generated with respect to N_v. The role of the logic phase is to effectively generate Γ_r2SAT with dynamic values of r and λ by successfully minimizing Equation (4). In summary, the structural features of the Γ_r2SAT formula are as follows:
  • The structure combines different orders of clauses, J_i^(k), with logical AND (∧).
  • The variables in Γ_r2SAT are non-redundant, with randomized selection to be negative or positive.
  • The desired number of negative literals is considered without any bias toward a specific order of clauses.
Consecutively, the minimization task in the logic phase will be carried out using an optimization algorithm, which is explained in the next section.
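To make Equations (3)-(6) concrete, the following minimal C++ sketch evaluates the logic-phase objective for a candidate assignment of weights; the function names and data layout are illustrative assumptions rather than the authors' implementation.

```cpp
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

// eta[i] = 1 if the i-th literal is negated, 0 otherwise (Equation (6)).
int desiredNegatives(double r, int lambda) {
    return static_cast<int>(std::floor(r * lambda));     // N_v, Equation (3)
}

int totalWeight(const std::vector<int>& eta) {           // kappa, Equation (5)
    int kappa = 0;
    for (int e : eta) kappa += e;
    return kappa;
}

int logicFitness(const std::vector<int>& eta, int Nv) {  // f_L, Equation (4)
    return std::abs(totalWeight(eta) - Nv);              // optimal when f_L = 0
}

int main() {
    // Worked example from this section: lambda = 6 and r = 0.3 give N_v = 1;
    // the candidate below negates exactly one literal (the C2 position).
    std::vector<int> eta = {0, 0, 0, 1, 0, 0};
    int Nv = desiredNegatives(0.3, 6);
    std::cout << "N_v = " << Nv << ", f_L = " << logicFitness(eta, Nv) << "\n";
    return 0;                                            // prints N_v = 1, f_L = 0
}
```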

3. Discrete Hopfield Neural Network

DHNN consists of interconnected neurons with input and output layers but no hidden layers. Features of DHNN include associative memory, fault tolerance, and energy minimization as the neuron states change. Generally, the neurons in DHNN are updated asynchronously until the final neuron state reaches the equilibrium state that corresponds to the solution of the optimization problem. Thus, the general formulation of the updating rule of the i-th neuron S_i in DHNN is given in Equation (7):
$$S_i = \begin{cases} 1, & \text{if } \sum_{j=1}^{n} W_{(i,j)} S_j \geq \rho_i \\ -1, & \text{otherwise} \end{cases} \tag{7}$$
where W_(i,j) and ρ_i denote the synaptic weight and the threshold of the network, respectively. In order to ensure that the energy of DHNN decreases monotonically, we set ρ_i = 0, which demonstrates effective neuron state classification. Γ_r2SAT will be embedded by assigning each variable a neuron in the defined cost function E_Γr2SAT. Equations (8) and (9) below represent the generalized cost function of Γ_r2SAT:
$$E_{\Gamma_{r2SAT}} = \sum_{i=1}^{u+v} \prod_{j=1}^{k} q_{ij} \tag{8}$$
$$q_{ij} = \begin{cases} \dfrac{1}{2}\left(1 - S_X\right), & \text{if } \neg X \\[4pt] \dfrac{1}{2}\left(1 + S_X\right), & \text{otherwise} \end{cases} \tag{9}$$
Note that the value of E_Γr2SAT is proportional to the number of inconsistencies among the clauses [4]. In this context, E_Γr2SAT increases as the number of unsatisfied clauses increases. The lowest possible value of the cost function is E_Γr2SAT = 0. In order to ensure the minimization of E_Γr2SAT, the following local field is utilized (Equation (10)):
$$h_i(t) = \sum_{j=1,\, j \neq i}^{u+v} W_{(i,j)}^{(2)} S_j + W_{(i)}^{(1)} \tag{10}$$
S_i(t) is the updated neuron state, given by Equation (11):
$$S_i(t) = \begin{cases} 1, & \tanh(h_i) \geq 0 \\ -1, & \text{otherwise} \end{cases} \tag{11}$$
where W_(i,j)^(2) and W_(i)^(1) are the second- and first-order synaptic weights, respectively. The Hyperbolic Tangent Activation Function (HTAF) is utilized in Equation (11) to avoid the neuron states being updated linearly. In terms of neuron connections, the synaptic weights can be obtained using the Wan Abdullah method [2], which compares Equation (8) with Equation (12), as long as the proposed Γ_r2SAT has at least one satisfied interpretation. The Lyapunov energy function that corresponds to the Γ_r2SAT logic is given in Equation (12):
$$L_{\Gamma_{r2SAT}} = -\frac{1}{2}\sum_{i=1,\, i \neq j}^{n}\sum_{j=1,\, j \neq i}^{n} W_{(i,j)}^{(2)} S_i S_j - \sum_{i=1}^{n} W_{(i)}^{(1)} S_i \tag{12}$$
The value of L_Γr2SAT indicates the scalar quantity of the final neuron state that corresponds to the interpretation modeling Γ_r2SAT. Interestingly, Equation (12) informs the network whether the final neuron state reaches the absolute minimum energy that corresponds to the optimal final states for Γ_r2SAT. Figure 2 shows the schematic diagram for DHNN-r2SAT. Note that the blue dotted lines represent the second-order clauses, with four possible neuron configurations per second-order clause. Meanwhile, the green dotted lines represent the first-order clauses, with two possible neuron configurations.
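As a concrete illustration of Equations (10)-(12), the following C++ sketch computes the local field, performs one asynchronous update through the HTAF, and evaluates the Lyapunov energy; the names and data layout are assumptions for illustration rather than the authors' code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Local field h_i (Equation (10)): second-order contributions plus the
// first-order synaptic weight.
double localField(std::size_t i, const std::vector<int>& S,
                  const Matrix& W2, const std::vector<double>& W1) {
    double h = W1[i];
    for (std::size_t j = 0; j < S.size(); ++j)
        if (j != i) h += W2[i][j] * S[j];
    return h;
}

// Asynchronous update squashed by the HTAF (Equation (11)).
void updateNeuron(std::size_t i, std::vector<int>& S,
                  const Matrix& W2, const std::vector<double>& W1) {
    S[i] = (std::tanh(localField(i, S, W2, W1)) >= 0.0) ? 1 : -1;
}

// Lyapunov energy (Equation (12)); it decreases monotonically under
// asynchronous updates with rho_i = 0, so the network settles in a minimum.
double lyapunov(const std::vector<int>& S,
                const Matrix& W2, const std::vector<double>& W1) {
    double L = 0.0;
    for (std::size_t i = 0; i < S.size(); ++i) {
        for (std::size_t j = 0; j < S.size(); ++j)
            if (i != j) L -= 0.5 * W2[i][j] * S[i] * S[j];
        L -= W1[i] * S[i];
    }
    return L;
}
```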

4. Artificial Bee Colony

Bee colonies, like other insect communities, are distributed systems. Despite the simplicity of the individuals, an insect colony has a highly structured social organization that allows bee colonies to undertake complicated tasks that would be impossible for a single bee to accomplish. Thus, a relatively recent SI algorithm known as ABC has piqued the interest of researchers. The ABC algorithm was inspired by the waggle dance and foraging behavior of honeybee colonies [10]. There are three main components in ABC: employed bees, onlooker bees, and scout bees. Each bee shares its work with the others without any central authority; as a result, the bees have the ability to self-organize [16]. The employed bees go out to search for a food source and memorize its location. The onlookers wait in the hive to gain information from the employed bees, which perform the waggle dance in the dancing area of the hive. The onlooker bee carries out the decision-making process, deciding which food source is the most profitable [16]. Once the decision is made, a scout bee is spawned to validate whether the new food source location is profitable. In this study, ABC is implemented in the logic phase to generate the desired number of negative literals in Γ_r2SAT. A profitable food source found by the scout bee indicates the correct number of negative literals.

4.1. Binary Artificial Bee Colony

Binary ABC has been implemented by several researchers ([11,17,18,19]) for binary optimization tasks. In the logic phase, the role of ABC is to minimize Equation (4), which corresponds to the desired number of negative literals in Γ_r2SAT, N_v. Note that the binary values are {0, 1}, where “0” denotes a positive literal and “1” a negative literal in Γ_r2SAT. The phases involved in ABC are elaborated in the next subsections.

4.1.1. Initialization Phase

Suppose that the search space of the ABC algorithm is the hive’s surroundings containing food sources. The algorithm starts by randomly producing food sources that correspond to solutions in the search space. The initial food sources are generated at random within the parameters’ bounds. The whole population consists of SN bees and D optimization parameters. Equation (13) gives the initialization of a random food source x_{i,j}:
$$x_{i,j} = x_j^{\min} + \text{rand}(0,1)\left(x_j^{\max} - x_j^{\min}\right) \tag{13}$$
where i = 1, …, SN indexes the bees and j = 1, …, D indexes the optimization parameters. x_j^min and x_j^max are the lower and upper bounds of x_{i,j}, respectively. After initialization, the fitness of each food source is evaluated based on Equation (4). Then, the population of food sources is subjected to repeated cycles of the employed bees’, onlooker bees’, and scout bees’ search operations.
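In the binary logic phase, each parameter is a bit η_i ∈ {0, 1}, so Equation (13) with bounds 0 and 1 effectively reduces to a fair coin flip per bit. The following sketch initializes the population under that reading; the layout and names are assumptions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Initialize SN binary food sources of dimension D (Equation (13)). With
// x_min = 0 and x_max = 1, the equation reduces to a fair coin flip per bit;
// the rounding to {0, 1} is our assumption for the binary logic phase.
std::vector<std::vector<int>> initFoodSources(int SN, int D, std::mt19937& rng) {
    std::uniform_real_distribution<double> rand01(0.0, 1.0);
    std::vector<std::vector<int>> x(SN, std::vector<int>(D));
    for (int i = 0; i < SN; ++i)        // i indexes the SN bees
        for (int j = 0; j < D; ++j)     // j indexes the D optimization parameters
            x[i][j] = (rand01(rng) < 0.5) ? 0 : 1;
    return x;
}
```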

4.1.2. Employed Bees

An employed bee’s job is to look for a new food source in the vicinity of the current food source in its memory. Equations (14) and (15) give the update rule of the proposed binary ABC:
$$v_{i,j} = x_{i,j} \,\bar{\wedge}\, \left[\varphi \left(x_{i,j} \,\bar{\wedge}\, x_{k,j}\right)\right] \tag{14}$$
$$\varphi = \begin{cases} 1, & \text{rand}(0,1) < 0.5 \\ -1, & \text{rand}(0,1) \geq 0.5 \end{cases} \tag{15}$$
where v_{i,j} is the new food source and x_{k,j} is the observed food source. Note that $\bar{\wedge}$ represents the NAND logic gate operator. The fitness of each new food source is then evaluated. If the source at v_{i,j} is superior to x_{i,j}, the employed bee forgets the previous food source and memorizes the new one; otherwise, the location of the previous food source is retained. If x_{i,j} cannot be improved, the employed bee is given five trials to improve the food source.
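A minimal sketch of this NAND-based update follows. The bracketing of Equation (14) and the treatment of φ as a gating bit (1 under Setup I, 0 under Setup II) are our reading of the equation and should be taken as assumptions rather than the authors' reference implementation.

```cpp
#include <cstddef>
#include <random>
#include <vector>

int nandGate(int a, int b) { return !(a && b); }  // NAND on bits {0, 1}

// Produce a new food source v_i from the current source x_i and an observed
// source x_k (Equation (14)), with phi drawn per Equation (15) and treated
// here as a gating bit -- an assumption for this sketch.
std::vector<int> employedBeeUpdate(const std::vector<int>& xi,
                                   const std::vector<int>& xk,
                                   std::mt19937& rng) {
    std::uniform_real_distribution<double> rand01(0.0, 1.0);
    std::vector<int> v(xi.size());
    for (std::size_t j = 0; j < xi.size(); ++j) {
        int phi = (rand01(rng) < 0.5) ? 1 : 0;                 // Setup I vs. Setup II
        v[j] = nandGate(xi[j], phi * nandGate(xi[j], xk[j]));  // Equation (14)
    }
    return v;
}
```

Note how the gate skews the bits toward 1: whenever the inner term is 0, the outer NAND returns 1, which matches the discussion of negative-literal production in Section 6.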

4.1.3. Onlooker Bees

Once the employed bees finish the search process, they return to the hive to share information on the locations of the food sources and the nectar amounts in the dance area. Multiple interactions occur at this stage. An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to the nectar amount. A roulette wheel selection method is introduced, in which proportions of the wheel are assigned to the bees based on their fitness values. Equation (16) formulates the selection probability:
$$p_i = \frac{f_i}{\sum_{i=1}^{SN} f_i} \tag{16}$$
An employed bee with a higher fitness value has a greater chance of being selected. After an employed bee is chosen by an onlooker bee (positive feedback), a new food source is produced using Equations (14) and (15) until the maximum limit is achieved.
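The following sketch implements the roulette wheel of Equation (16). Since f_L is minimized (0 is best), the caller must first map it to a score where larger is better, e.g., 1/(1 + f_L); that mapping is our assumption.

```cpp
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Spin the roulette wheel once: p_i = f_i / sum(f_i), Equation (16).
// fitness[i] is assumed to be a "larger is better" score such as 1/(1 + f_L).
std::size_t rouletteSelect(const std::vector<double>& fitness, std::mt19937& rng) {
    double total = std::accumulate(fitness.begin(), fitness.end(), 0.0);
    std::uniform_real_distribution<double> spin(0.0, total);
    double target = spin(rng), running = 0.0;
    for (std::size_t i = 0; i < fitness.size(); ++i) {
        running += fitness[i];
        if (target <= running) return i;
    }
    return fitness.size() - 1;  // numerical guard for floating-point round-off
}
```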

4.1.4. Scout Bees

If the food source generated by the onlooker bee does not achieve f_L = 0, the scout bee initiates a new iteration until the maximum limit is achieved. Negative feedback occurs when the food source does not achieve f_L = 0 after five iterations.

4.2. Optimizing Logic Phase via Binary Artificial Bee Colony

The logic phase is a systematic method that controls the distribution of the negative literals in Γ_r2SAT according to the desired weighted ratio by minimizing Equation (4). In this paper, the logic phase employs ABC for this minimization. According to [6], the parameters of ABC are simple and flexible to adjust, because the parameters involved in the food source equation can easily be changed according to the objective function. Additionally, the solution space can be improved using the food source equation in two phases: the employed and onlooker bees. In the logic phase, x_{i,j} represents a possible structure of Γ_r2SAT. The food source represents the number of negative literals, η_i, where η_i ∈ {0, 1}. Initially, the first generation is initialized by generating ten random x_{i,j}, and the fitness (food source) is evaluated. Then, the bees explore new food sources in the employed bees’ phase using Equations (14) and (15). The profitable food sources are evaluated in the onlooker bees’ phase, which improves the solution space using Equations (14) and (15). The scout bee examines each food source to find one that carries the correct number of negative literals. Figure 3 and Algorithm 1 below illustrate the whole optimization process of ABC in the logic phase.
Algorithm 1. The pseudocode of ABC in the logic phase:
Set the initial parameters, including population size SN, employed bees’ size, onlooker bees’ size, scout bees’ size, maximum allowed generations g_max (10 generations), trial number, and initialize all bees X.
for g = 1 to g_max (number of generations)
Calculate the fitness of each bee X (one group of bees) and evaluate them (take the best two bee groups as x_{i,j} and x_{k,j})
{Employed bees’ phase}
for i = 1 to SN
  Produce a new food source v_{i,j} using Equations (14) and (15)
  Evaluate the fitness of v_{i,j}
  if the fitness of v_{i,j} is better than that of x_{i,j}
  then replace x_{i,j} with v_{i,j} in the next generation and trial_i = 0
  else trial_i = trial_i + 1
  end if
  end for
  Compute the probability p_i using Equation (16)
{Onlooker bees’ phase}
t = 0, i = 1
while t < SN
  if rand_i < p_i
  then produce a new food source v_{i,j} using Equations (14) and (15)
  Evaluate the fitness of v_{i,j}
   if the fitness of v_{i,j} is better than that of x_{i,j}
   then replace x_{i,j} with v_{i,j} in the next generation and trial_i = 0
   else trial_i = trial_i + 1
   end if
    t = t + 1 (until t = 100)
  end while
{Scout bees’ phase}
for l = 0, i = 1
if f_L = 0
then record the best solution found
else l = l + 1 (until l = 5)
end if
end for
Output the solution

4.3. Benchmark Algorithms in the Logic Phase

To assess the efficiency of the proposed ABC in the logic phase, a comparative analysis will be conducted against state-of-the-art algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Exhaustive Search (ES). Each algorithm has a different baseline mechanism suitable for comparison with the proposed model in terms of minimizing Equation (4). GA is an evolutionary algorithm that simulates the natural selection process [20]. According to [21], GA searches from a population of points to find a solution based on the desired fitness. Notably, GA consists of balanced global and local search operators, crossover and mutation. Involving a chromosome exchange process and the flipping of genes in each chromosome, the termination criterion of GA is met when a string of chromosomes representing the structure of Γ_r2SAT successfully corresponds to the desired N_v. As for PSO, the possible solutions are represented as particles, and all the particles form a set of solutions called the swarm. Additionally, PSO is a class of swarm intelligence algorithm with a mechanism similar to that of ABC. Based on the previous study by [22], PSO is best utilized to solve discrete vector problems, where the global best solution concept is used to search for the desired solutions. The position and velocity of each particle are initialized at random, and the fitness of each particle is evaluated using Equation (4). The process is repeated within the determined number of trials. Although no optimizer is involved, ES is reported to be efficient when handling smaller-sized optimization tasks [23]. ES is the benchmark searching technique, which has the ability to search the entire search space with a random mechanism. The selected algorithms were chosen based on their potential to solve binary optimization tasks. Thus, they are suitable for evaluating the algorithms’ local search ability and convergence speed in minimizing Equation (4).

5. Simulation Setup

In this section, we verify the effectiveness of the proposed model during the logic phase using different logic gate operators in Equation (14). This experiment considers a single objective: to determine whether the proposed model generates the correct number of negative literals. To guarantee the reproducibility of the experiment, we set up our experiment as follows:

5.1. Simulation Design

To avoid possible biases during experimentation, all the simulations were conducted based on the following features:
(i)
Device Setting: In order to avoid any bias in evaluating the results, the proposed model was implemented and analyzed using Dev C++ version 5.11. The simulations were run on the same device with a 2.70 GHz Intel Core i7 processor and a 64-bit Windows 10 operating system, where the threshold simulation time is set to 24 h.
(ii)
SAT Configuration: The number of second-order clauses is set to twice the number of first-order clauses, u = 2v. For all models, each instance is associated with λ ∈ {10, 50, 90}.

5.2. Parameters Assignment

Table 2, Table 3, Table 4, Table 5 and Table 6 list the parameters used in the logic phase for all the proposed DHNN models. The performance of all r2SAT^ABC models is evaluated for different values of the weighted ratio r, different numbers of neurons λ, and a constant number of neuron combinations β. The employed bees and the onlooker bees have the same count, which indicates the population size used in ABC. According to [24], a small number of scout bees (approximately 5%) with respect to the overall population can navigate all the bees to the optimal solution. Therefore, we set the maximum number of scout bees to five to ensure that the proposed r2SAT^ABC will not be trapped in possible local maxima solutions. Similar to the work by Kasihmuddin et al. [3], the number of trials in the employed and onlooker bees’ phases was set to five, because more trials are given to the scout bees to update their food source. On the other hand, GA initializes a population of 500 that corresponds to r2SAT_GA during the logic phase. The selection rate was set to 0.1 in order to enhance the searching process, and RWS is also utilized in the selection operator of GA. The rates of crossover and mutation are set to 1 in order to reduce the computational complexity. Note that each particle in PSO has a randomly selected position and velocity. The best particle size is set to 50 in order to obtain the best results for the majority of the PSO variants.
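For reference, the following struct consolidates the parameter values quoted above into one place; the struct itself and any field not explicitly stated in the text are illustrative assumptions.

```cpp
// Logic-phase parameter settings as quoted in this subsection; fields not
// stated in the text (e.g., the exact population size) are assumptions.
struct LogicPhaseConfig {
    // ABC settings
    int employedBees     = 10;   // equal to the onlooker bee count; assumed from Section 4.2
    int onlookerBees     = 10;
    int maxScoutBees     = 5;    // ~5% of the population guides the swarm [24]
    int beeTrials        = 5;    // trials in the employed/onlooker phases [3]
    // GA settings
    int gaPopulation     = 500;
    double selectionRate = 0.1;
    double crossoverRate = 1.0;  // set to 1 to reduce computational complexity
    double mutationRate  = 1.0;
    // PSO settings
    int bestParticleSize = 50;
};
```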

5.3. Dataset

The experiment utilizes simulated data generated randomly by the proposed model. The elements of the simulated data set are strings of bipolar values {−1, 1} based on the structure of Γ_r2SAT. Existing research by Sathasivam et al. [4], Karim et al. [15], and Alway et al. [5] demonstrates that simulated data sets are commonly used in testing and evaluating the capabilities of a newly proposed SAT in DHNN. As a result, the outcomes on the simulated data set will project and confirm the efficacy of the proposed model when applied to real-life data sets.

5.4. Error Analysis

The performance of all the models is evaluated based on the average mean absolute error (MAE) and the best fitness, f_best. These metrics measure the effectiveness of the model in generating the correct logical structure of Γ_r2SAT according to the initial r. The values of f_best and the average MAE are obtained according to Equations (17) and (18), respectively.
$$f_{best} = \min_{i=1,\dots,n} \left|\kappa_i - (N_v)_i\right| \tag{17}$$
$$\text{MAE} = \frac{1}{\alpha}\sum_{j=1}^{\alpha}\left[\frac{1}{n}\sum_{i=1}^{n}\left|f_{fit} - f_{best}\right|\right] \tag{18}$$
Note that κ_i represents the cumulative weight of negative literals at the i-th iteration and (N_v)_i is the desired number of negative literals in Γ_r2SAT. On the other hand, f_fit denotes the ideal fitness (f_fit = 0) and f_best is the current best fitness achieved at the i-th iteration. In this context, n and α are the number of iterations and the number of neuron combinations, respectively.
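A short sketch of these two metrics, under the assumed data layout of one fitness trace per logical combination, is given below; the names are illustrative.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// f_best (Equation (17)): the closest any of the n iterations came to the
// desired number of negative literals. Assumes kappa is non-empty.
int bestFitness(const std::vector<int>& kappa, int Nv) {
    int best = std::abs(kappa.front() - Nv);
    for (int k : kappa) best = std::min(best, std::abs(k - Nv));
    return best;
}

// Average MAE (Equation (18)) over alpha logical combinations, each holding
// n recorded best-fitness values; the ideal fitness f_fit is 0, so each term
// is simply |0 - f_best| = f_best.
double averageMAE(const std::vector<std::vector<int>>& fbestRuns) {
    double outer = 0.0;
    for (const auto& run : fbestRuns) {
        double inner = 0.0;
        for (int f : run) inner += std::abs(f);
        outer += inner / run.size();
    }
    return outer / fbestRuns.size();
}
```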

5.5. Statistical Analysis

Besides the error analysis, the overall performance of ABC was statistically analyzed using the Friedman test for all values of r. The Friedman test is used to detect significant differences between the results of two or more algorithms participating in the logic phase that controls the distribution of negative literals in the logical structure. The Friedman test compares several algorithms by computing the ranking (R) of the observed results for each algorithm. For instance, the best and second-best results are assigned ranks 1 and 2, respectively; rank n is given to the worst result. The formulation of the Friedman statistic is presented in Equation (19):
$$\psi = \frac{12}{\omega\,\delta\,(\omega+1)}\left[\sum_{j=1}^{\omega} R_j^2\right] - 3\delta(\omega+1) \tag{19}$$
where ω is the number of algorithms participating in this study and δ denotes the number of case runs. Note that the p-value is based on the chi-squared (χ²) distribution with ω − 1 degrees of freedom. The results are significant if the p-value is less than 0.05.
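A compact sketch of Equation (19), taking the rank sum of each algorithm over the δ cases, is shown below; the chi-squared p-value lookup is omitted, and the names are illustrative.

```cpp
#include <vector>

// Friedman statistic (Equation (19)). R[j] is the rank sum of algorithm j
// over delta cases; omega (the number of algorithms) is inferred from R.
double friedmanStatistic(const std::vector<double>& R, int delta) {
    const double omega = static_cast<double>(R.size());
    double sumR2 = 0.0;
    for (double r : R) sumR2 += r * r;
    return 12.0 / (delta * omega * (omega + 1.0)) * sumR2
           - 3.0 * delta * (omega + 1.0);
}
```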

5.6. Model Reproducibility

To reproduce the DHNN-r2SAT framework repeatedly on certain data sets, or to obtain the same results, the same experiments should be conducted. The following specific environments and factors should be considered:
(i)
The logic phase must be conducted with random β logical combinations to avoid bias toward only one specific Γ_r2SAT structure. In addition, the threshold iteration in the logic phase is set at 100 to distinguish the maximum capacity of a certain algorithm in successfully generating the desired Γ_r2SAT.
(ii)
In order to retrieve the synaptic weights in the training phase, the Wan Abdullah method [2] was utilized. According to Bazuhair et al. [26], the Wan Abdullah method is more stable than Hebbian learning in representing the strength between the neurons in DHNN.
(iii)
It is also worth mentioning that different logic gate operators in the food source equation have different capabilities in producing negative literals. The next subsection explains the implications of different logic gate operators in Equation (14).

5.7. The Implication Truth Table for Food Source Equation

The configuration of the possible input and output variables for each logic gate can be determined based on the truth tables in Table 7, Table 8, Table 9, Table 10 and Table 11. Note that Setup I indicates the condition where the random φ value is less than 0.5, and Setup II the condition where it is greater than or equal to 0.5. These two setups are important to control the probability of generating ‘0’ and ‘1’ in ABC [10]. The control parameter of ABC in the logic phase has a higher probability of producing the binary number ‘1’ under Setup I, because we want the food source to produce more negative literals, as this study considers high values of r. Notice that the different logic gate operators yield different probabilities of producing negative literals.
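As a quick sanity check of why NAND favors ‘1’, the following snippet enumerates the four input pairs of the gate; no assumptions beyond the standard truth table are involved.

```cpp
#include <iostream>

int main() {
    int ones = 0;
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            ones += !(a && b);   // NAND: 0 only when both inputs are 1
    std::cout << "NAND outputs 1 in " << ones << "/4 input cases\n";  // 3/4
    return 0;
}
```

Three of the four cases yield ‘1’, which is consistent with the 75% figure discussed in Section 6.1.1.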

6. Results and Discussion

This section comprises two subsections: the comparison of the five different logic gate operators, and the comparison of r2SAT_nand^ABC with other binary algorithms. The first subsection compares the performance of five different logic gates in r2SAT^ABC in terms of their capability of producing the desired logic. Then, the performance of r2SAT_nand^ABC is compared with three state-of-the-art binary algorithms: r2SAT_GA, r2SAT_ES, and r2SAT_PSO. For clarity, the best performance is shown in boldface. The ‘+’, ‘−’, and ‘=’ indicate that the model performs better than, worse than, or equally with all comparison models, respectively.

6.1. Comparison of Five Different Logic Gate Operators

6.1.1. The Performance of Five Different Logic Gate Operators in Terms of MAE

Table 12, Table 13 and Table 14 illustrate the performance of all models in terms of MAE. The presented errors show the closeness of the achieved solution to the desired solution. The acceptable value of MAE is 0, which indicates success in generating r2SAT with a correct number of negative literals in one iteration. In this case, the results signify the ABC performance measure in the logic phase for random logical combinations. In this section, the mean, maximum, and minimum values of MAE are selected based on the fitness obtained by specific r2SAT combinations. The MAE value provides an intuitive overview of the impact of different operators in the logic phase. In order to investigate the accuracy of our proposed ABC (r2SAT_nand^ABC), a comparative analysis is conducted with those reported in previous studies: r2SAT_xor^ABC [13], r2SAT_and^ABC, r2SAT_or^ABC, and r2SAT_nor^ABC. With regard to different λ and r, r2SAT_nand^ABC outperformed the other logic gate operators. The overall outcome of MAE based on Table 12, Table 13 and Table 14 results in the following observations.
The model proposed by Jia et al. [11] introduced logic gates to convert the problem to binary space. The approach enabled Kiran [13] to capitalize on logic gate operators in the ABC algorithm by modifying the food source update equation to improve the search for optimal solutions. However, there is uncertainty as to which operators work well for solving binary problems. Regardless, from Table 7, Table 8, Table 9, Table 10 and Table 11 in the previous section, we can observe the changes in the probability of each bit in the solution string based on the problem in the logic phase. With no regard to φ (Setup I or Setup II), the probability of producing 1 is more than 75% for r2SAT_nand^ABC. This explains why r2SAT_nand^ABC works well with higher values of r. As for r2SAT_or^ABC, the modified ABC algorithm has the second-best MAE values for higher r, because its probability of returning 1 is higher compared to the other logical operators. As expected, the performance of r2SAT_and^ABC and r2SAT_nor^ABC is poor, because the solution bits in the candidate solution tend to be 0. On the other hand, r2SAT_xor^ABC works well in an environment where r is balanced, that is, r ∈ {0.5, 0.6}, because its probability of change is 50% for both setups. Although the food source does not represent the final fitness in the logic phase, the selection of the logic gate operator is crucial in improving the fitness of the employed and onlooker bees iteratively to locate the optimal solutions. The reason for such a modification is to aid the employed and onlooker bees in generating r2SAT with a higher number of negative literals. The exploration effort increases with the number of literals in the high state. The comparative analysis in Table 12, Table 13 and Table 14 highlights why such an approach is considered: if this factor is not taken into account, the entire bit solution will always be trapped in local optima, mapped to an undesired number of negative literals.
According to Table 15, the performance of r2SAT_nand^ABC for Setup I does not exhibit major differences in terms of returning ‘1’. However, the probability of Setup II returning 1 is lowered from 100% to 75%. This also explains why certain logic gate operators work well only for selective r. Regardless of the updating food source equation, the value of MAE increases proportionally with λ. According to [27], a complex structure of the bit solution will cause the performance of ABC to deteriorate, because the possibility of locating an optimal solution grows exponentially and requires better control of the free parameters in the algorithm. Therefore, the value of MAE achieved, especially for λ = 90, is relatively larger than the others. This is because a larger λ indicates that an extreme number of negative literals is desired, which requires additional effort in the exploitation and exploration processes. Modifications in the employed or onlooker bees’ phases can be considered to increase the local search capability of the network. To avoid this issue in the future, the proposed algorithm may consider a greedy selection to retain sub-optimal solutions [27].

6.1.2. The Capability of Five Logic Gate Operators in Producing the Desired Logic

This section discusses the capability of the five different logic gate operators in generating the desired logic structures, evaluated based on the best fitness. The best fitness is the solution nearest to the desired result. In this study, the scout bee plays an important role in evaluating the quality of the food source (fitness). The scout rate is set to five, meaning that the evaluation of a food source happens five times. Note that at least 5–10% of the population should be scout bees [28]. According to [29], the contribution of the scout bee causes the number of function evaluations to increase, which reduces the time complexity. However, the computation time increases because the search space is large for r2SAT_xor^ABC, r2SAT_nor^ABC, and r2SAT_and^ABC with the highest λ and r. Figure 4 below illustrates the f_best of one logical combination, β, for these models in the logic phase.
Although the iterations reach the maximum number, these models are unable to generate the desired logical structure. Observe that premature convergence occurs because the model converges toward a point where it was not supposed to converge [30]. According to [31], a model suffers from premature convergence due to a decrease in diversity. Despite being updated, the solution space has little diversity. For example, for λ = 90 with r = 0.9, the probability of obtaining 90% negative literals is low, because the probability of producing negative literals for r2SAT_xor^ABC, r2SAT_nor^ABC, and r2SAT_and^ABC is less than 50%. Thus, there is a high tendency to see the same number of negative literals in the logic.
Figure 5 above illustrates the performance of one combination of logic, β, for r2SAT_nand^ABC and r2SAT_or^ABC. At a glance, the performance of both models is the same. However, Table 14 shows that the maximum average MAE of r2SAT_or^ABC is higher than that of r2SAT_nand^ABC. In this study, ten β were generated, and it is possible that other β require more iterations to obtain the desired logic. To summarize, r2SAT_nand^ABC outperforms the other models in terms of the capability of converging to the desired fitness. Therefore, in the next section, the performance of r2SAT_nand^ABC is compared with other binary algorithms.

6.2. The Comparison of r2SAT_nand^ABC with Other Binary Algorithms

6.2.1. The Comparison in Terms of Error Analysis

Table 16, Table 17 and Table 18 illustrate the comparative performance of r2SAT_nand^ABC with r2SAT_GA, r2SAT_ES, and r2SAT_PSO in terms of MAE.
Based on a general observation of Table 16, Table 17 and Table 18, r2SAT_nand^ABC outperforms the other state-of-the-art algorithms when r ∈ {0.7, 0.8, 0.9} for all λ. Notice that r2SAT_ES has a relatively larger average MAE compared to the other algorithms. This is due to the ES mechanism, which is based on trial and error and explores all the possible solutions. According to [32], the trial-and-error mechanism manages to find a near-optimal solution, not necessarily the optimum. Figure 6 illustrates the convergence curve of one β for each algorithm for all r and λ. In the figure, observe that for r ∈ {0.8, 0.9} with λ = 90, r2SAT_ES manages to obtain a lower f_best per iteration compared to r2SAT_PSO. As λ and r increase, the number of possible solutions increases, so it is impossible for ES to enumerate all the possible solutions within only a few iterations. Hence, ES requires more iterations compared to PSO, which leads to an increase in MAE.
Despite the fact that PSO and ES were both initiated with the same objective function, their searching capabilities for optimal solutions are indeed different. Exploration and exploitation are the main features of every optimization algorithm. Exploration is the ability to discover unknown regions of the solution space, while exploitation is finding better solutions within a known region [33]. In ABC, the two features are performed in different ways. Employed bees perform exploration, in which the bees explore the environment of the hive (search space) to find food sources (solutions). Onlooker and scout bees perform exploitation, which improves the current solutions whenever a new solution is better than the previous one. Meanwhile, according to [34], the exploration feature of PSO occurs when new solutions in the search space can be added to the population without being compared to the current solution, and exploitation occurs when the new solution is better than its best solution. Consequently, r2SAT_PSO requires more iterations to improve the solution compared to r2SAT_nand^ABC. It is worth mentioning that r2SAT_GA is the second-best algorithm in the logic phase. Unfortunately, the mechanism of GA causes lower diversification of Γ_r2SAT, as the early solutions are usually unfit and require optimization operators, such as crossover and mutation, before achieving the optimal solutions [4]. Additionally, the mutation in GA has a chance of reordering already existing solutions, which decreases the possibility of finding the optimal solution [35]. These features are a drawback of GA, which magnifies the errors in the average MAE. From Figure 6, it is obvious that ABC has a better convergence speed, as ABC is able to avoid being trapped in local minima solutions [10], with the solution space continually improving across two phases. The capability of r2SAT_nand^ABC in generating the desired number of negative literals in the logic is related to the effectiveness of the food source equation. ABC improves the quality of the generated food sources (solutions) in the early stage (employed bees) [36]. Even at the highest λ, r2SAT_nand^ABC is able to obtain the desired logic within a few iterations, which makes the value of MAE smaller compared to the other binary algorithms.

6.2.2. Statistical Analysis

A Friedman test was conducted for r ∈ {0.5, 0.6, 0.7, 0.8, 0.9} with λ ∈ {10, 50, 90}. The degrees of freedom df = 4, and a significance level of a_0 = 0.05 is considered [37]. Suppose that H_0 states that there are no significant differences between r2SAT_nand^ABC and all comparison algorithms. Table 19 gives the average rank (AvR) of all algorithms, and Table 20 gives the result of the Friedman test.
According to the AvR obtained in Table 19, r2SAT_nand^ABC has the best average rank among the algorithms. The closest competition is from r2SAT_GA, with a slightly higher AvR for higher values of r. From Table 20, the p-value for all cases of r is greater than a_0 = 0.05. Hence, H_0 is accepted, which signifies that ABC is a powerful algorithm in the logic phase of DHNN for generating the desired number of negative literals in Γ_r2SAT.

6.2.3. Impact Analysis

Thus far, the previous experimental and statistical results (Table 20) demonstrate that ABC with the NAND operator is superior to the other logic operators. This section presents the impact of the exploration and exploitation performed by r2SAT_nand^ABC in finding the optimal solution corresponding to the desired value of r. As shown in the pseudocode of ABC in the logic phase, the NAND gate was observed to explore more negated variables corresponding to the given r. On the exploration front, the trajectory of the food source obtained in the employed bee phase via Equation (14) was skewed towards 1s, which minimizes the objective function given in Equation (4). This is contrary to the other gates, which are more skewed towards zero, increasing the gap between the desired outcome and the current fitness of the bee. As the fitness value of the employed bees increases when r2SAT_nand^ABC is implemented, the probability given by Equation (16) for the onlooker bees to select the food source increases dramatically. For the other logical operators, such as XOR, AND, OR, and NOR, there is a high chance that the food selection will have equal probability of constructing the desired r2SAT logic. Hence, the effectiveness of Equation (16) diminishes completely, and more iterations are required to minimize the objective function. In terms of exploitation, the onlooker bees acquire fast-track fitness because the exploitation to find an optimal food source (depending on the desired N_v) starts with high-fitness food. Thus, the onlooker bees are able to reach a zero objective function within almost zero iterations, as shown in Table 12, Table 13 and Table 14. Unfortunately, the other logical operators do not acquire a similar benefit, because low-fitness food is inherited from the employed bees via Equation (16). Thus, r2SAT_nand^ABC was reported to provide a positive impact even though the initial food source was started at random.
In comparison with the other existing algorithms, the exploration and exploitation of r2SAT_nand^ABC were shown to be more effective. As shown in Figure 6, the convergence curve for r2SAT_nand^ABC illustrates that ABC only requires a few iterations to obtain optimal solutions compared to the other state-of-the-art algorithms. According to Su et al. [38], despite having more exploitation mechanisms, ABC manages to avoid being trapped in local optimum solutions in the middle stage because the exploitation mechanism occurs in the early stage (employed bees). Based on Figure 6, although r2SAT_GA provides competitive performance, the number of generations required to arrive at the desired Γ_r2SAT is large. Another obvious issue with r2SAT_GA is its high dependence on the mutation operator during the initial stage, due to the ineffective crossover phase. This demonstrates the primary weakness of r2SAT_GA in terms of exploration and exploitation. As for r2SAT_PSO, the random positioning in PSO creates the same impact as the other logic operators in r2SAT^ABC. As exhibited by the convergence curve in Figure 6, the particle positioning in PSO requires more iterations to explore the optimal Γ_r2SAT structure. Meanwhile, the high error values in Table 17 and the convergence analysis show that r2SAT_ES is not impactful in terms of exploitation and exploration.

7. Conclusions

The first contribution of this study is a systematic method of generating logic with consideration of negative literals via Weighted Random 2 Satisfiability. The study shows that the logic phase is efficient in diversifying the logical rule by controlling the dynamics of negative literals. The second contribution is that binary ABC was utilized in the logic phase to find the correct structure of r2SAT. The performance of the proposed model was then successfully compared with four variants of binary ABC with different logic gate operators in the food source equation. The experiments show the capability of r2SAT_nand^ABC in the logic phase in generating a higher number of negative literals. The proposed model was then compared with three state-of-the-art algorithms: r2SAT_GA, r2SAT_ES, and r2SAT_PSO. ABC gives a solid performance, as it requires few iterations to obtain the desired structure of the logic. This study highlighted the capability of r2SAT_nand^ABC in the logic phase in generating the correct number of negative literals in a logical structure, as well as the capability of different logic gate operators in generating the desired logical structure. The study can be further extended by considering lower values of r, namely 0.1, 0.2, and 0.3. Furthermore, different types of logic can be generated in the logic phase, such as Random 2 Satisfiability [4], higher-order Random k Satisfiability [15], and Major 2 Satisfiability [5]. The robust architecture of r2SAT, integrated with an optimal operator for ABC, provides a view of the possible random dynamics for application to real-life bioinformatics problems. For example, the proposed r2SAT_nand^ABC can be embedded into logic mining, which extracts the best logical rule that classifies single-nucleotide polymorphisms (SNPs) inside known genes associated with Alzheimer’s disease. This can lead to a large-scale logic mining design incorporating r2SAT_nand^ABC, with the ability to classify and predict.

Author Contributions

Conceptualization and methodology, N.E.Z.; validation, H.A.W.; investigation and writing—original draft preparation, S.S.M.S.; writing—review and editing, M.A.M.; visualization, Y.G.; supervision and funding acquisition, M.S.M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Higher Education Malaysia through the Transdisciplinary Research Grant Scheme (TRGS), Project Code TRGS/1/2020/USM/02/3/2.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their special thanks to all researchers from the AI Research Development Group (AIRDG) for their continuous support. The authors would also like to express great appreciation to Vikneswari Someetheram and Revin Samuel James Selva Kumar, whose helpful support contributed greatly to the completion of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

List of Notations

| Symbol | Meaning |
|---|---|
| ANN | Artificial Neural Network |
| HNN | Hopfield Neural Network |
| DHNN | Discrete Hopfield Neural Network |
| SAT | Boolean Satisfiability |
| 3SAT | 3 Satisfiability |
| 2SAT | 2 Satisfiability |
| RAN2SAT | Random 2 Satisfiability |
| RAN3SAT | Random 3 Satisfiability |
| MAJ2SAT | Major 2 Satisfiability |
| ABC | Artificial Bee Colony |
| GA | Genetic Algorithm |
| ES | Exhaustive Search |
| PSO | Particle Swarm Optimization |
| MAE | Mean Absolute Error |
| $r$ | Weighted ratio of negated literals |
| $\Gamma_{r2SAT}$ | Logical structure of Weighted Random 2 Satisfiability |
| $\lambda$ | Total number of literals |
| $N_v$ | The desired number of negative literals |
| $f_L$ | Best fitness |
| $S_i$ | State of the $i$th neuron |
| $W(i,j)$ | Synaptic weight from unit $i$ to $j$ |
| $\rho_i$ | Threshold constraints in DHNN |
| $E_{\Gamma_{r2SAT}}$ | Cost function of r2SAT |
| $h_i$ | Local field |
| $\wedge$ | Conjunction (AND) |
| $\vee$ | Disjunction (OR) |
| $\neg$ | Negation |
| $L_{\Gamma_{r2SAT}}$ | Lyapunov energy function |
| $x_{i,j}$ and $x_{k,j}$ | Two best bees |
| $v_{i,j}$ | New food source |
| $\varphi$ | Control distance parameter |
| $f_i$ | Fitness of the bees |
| $p_i$ | Selection probability |
| $\bar{\vee}$ | Not or (NOR) |
| $\underline{\vee}$ | Exclusive or (XOR) |
| $\bar{\wedge}$ | Not and (NAND) |
| $r2SAT_{nand}^{ABC}$ | Weighted Random 2 Satisfiability with Artificial Bee Colony using NAND operator |
| $r2SAT_{or}^{ABC}$ | Weighted Random 2 Satisfiability with Artificial Bee Colony using OR operator |
| $r2SAT_{and}^{ABC}$ | Weighted Random 2 Satisfiability with Artificial Bee Colony using AND operator |
| $r2SAT_{xor}^{ABC}$ | Weighted Random 2 Satisfiability with Artificial Bee Colony using XOR operator |
| $r2SAT_{nor}^{ABC}$ | Weighted Random 2 Satisfiability with Artificial Bee Colony using NOR operator |
| $r2SAT^{GA}$ | Weighted Random 2 Satisfiability with Genetic Algorithm |
| $r2SAT^{ES}$ | Weighted Random 2 Satisfiability with Exhaustive Search |
| $r2SAT^{PSO}$ | Weighted Random 2 Satisfiability with Particle Swarm Optimization |

References

1. Chereda, H.; Bleckmann, A.; Menck, K.; Perera-Bel, J.; Stegmaier, P.; Auer, F.; Kramer, F.; Leha, A.; Beißbarth, T. Explaining decisions of graph convolutional neural networks: Patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer. Genome Med. 2021, 13, 42.
2. Wan Abdullah, W.A.T. Logic programming on a neural network. Int. J. Intell. Syst. 1992, 7, 513–519.
3. Kasihmuddin, M.S.M.; Mansor, M.; Sathasivam, S. Robust Artificial Bee Colony in the Hopfield Network for 2-Satisfiability Problem. Pertanika J. Sci. Technol. 2017, 25, 453–468.
4. Sathasivam, S.; Mansor, M.A.; Ismail, A.I.M.; Jamaludin, S.Z.M.; Kasihmuddin, M.S.M.; Mamat, M. Novel Random k Satisfiability for k ≤ 2 in Hopfield Neural Network. Sains Malays. 2020, 49, 2847–2857.
5. Alway, A.; Zamri, N.E.; Karim, S.A.; Mansor, M.A.; Kasihmuddin, M.S.M.; Bazuhair, M.M. Major 2 Satisfiability Logic in Discrete Hopfield Neural Network. Int. J. Comput. Math. 2021, 1–45.
6. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
7. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Abualigah, L.; Abd Elaziz, M. Migration-based moth-flame optimization algorithm. Processes 2021, 9, 2276.
8. Kasihmuddin, M.S.; Mansor, M.; Md Basir, M.F.; Sathasivam, S. Discrete mutation Hopfield neural network in propositional satisfiability. Mathematics 2019, 7, 1133.
9. Sathasivam, S.; Mansor, M.; Kasihmuddin, M.S.M.; Abubakar, H. Election algorithm for random k satisfiability in the Hopfield neural network. Processes 2020, 8, 568.
10. Karaboga, D.; Basturk, B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In International Fuzzy Systems Association World Congress; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798.
11. Jia, D.; Duan, X.; Khan, M.K. Binary Artificial Bee Colony optimization using bitwise operation. Comput. Ind. Eng. 2014, 76, 360–365.
12. Watada, J.; Roy, A.; Wang, B.; Tan, S.C.; Xu, B. An artificial bee colony-based double layered neural network approach for solving quadratic bi-level programming problems. IEEE Access 2020, 8, 21549–21564.
13. Kiran, M.S. A binary artificial bee colony algorithm and its performance assessment. Expert Syst. Appl. 2021, 175, 114817.
14. Ishtaiwi, A.; Alshahwan, F.; Jamal, N.; Hadi, W.; AbuArqoub, M. A Dynamic Clause Specific Initial Weight Assignment for Solving Satisfiability Problems Using Local Search. Algorithms 2021, 14, 12.
15. Karim, S.A.; Zamri, N.E.; Alway, A.; Kasihmuddin, M.S.M.; Ismail, A.I.M.; Mansor, M.A.; Hassan, N.F.A. Random Satisfiability: A Higher-Order Logical Approach in Discrete Hopfield Neural Network. IEEE Access 2021, 9, 50831–50845.
16. Roy, S.; Biswas, S.; Chaudhuri, S.S. Nature-Inspired Swarm Intelligence and Its Applications. Int. J. Mod. Educ. Comput. Sci. 2014, 6, 55–65.
17. Kashan, M.H.; Nahavandi, N.; Kashan, A.H. DisABC: A new artificial bee colony algorithm for binary optimization. Appl. Soft Comput. 2012, 12, 342–352.
18. Telikani, A.; Gandomi, A.H.; Shahbahrami, A.; Dehkordi, M.N. Privacy-preserving in association rule mining using an improved discrete binary artificial bee colony. Expert Syst. Appl. 2020, 144, 113097.
19. Lin, G.; Xu, H.; Chen, X.; Guan, J. An effective binary artificial bee colony algorithm for maximum set k-covering problem. Expert Syst. Appl. 2020, 161, 113717.
20. Wang, Z.; Cao, L.; Si, H. An improved genetic algorithm for determining the optimal operation strategy of thermal energy storage tank in combined heat and power units. J. Energy Storage 2021, 43, 103313.
21. Muñoz, A.; Rubio, F. Evaluating genetic algorithms through the approximability hierarchy. J. Comput. Sci. 2021, 53, 101388.
22. Hou, Y.; Hao, G.; Zhang, Y.; Gu, F.; Xu, W. A multi-objective discrete particle swarm optimization method for particle routing in distributed particle filters. Knowl.-Based Syst. 2022, 240, 108068.
23. Mendler, A.; Döhler, M.; Ventura, C.E. Sensor placement with optimal damage detectability for statistical damage detection. Mech. Syst. Signal Process. 2022, 170, 108767.
24. Janson, S.; Middendorf, M.; Beekman, M. Honeybee swarms: How do scouts guide a swarm of uninformed bees? Anim. Behav. 2005, 70, 349–358.
25. Zamri, N.E.; Mansor, M.; Mohd Kasihmuddin, M.S.; Alway, A.; Mohd Jamaludin, S.Z.; Alzaeemi, S.A. Amazon employees resources access data extraction via clonal selection algorithm and logic mining approach. Entropy 2020, 22, 596.
26. Bazuhair, M.M.; Jamaludin, S.Z.M.; Zamri, N.E.; Kasihmuddin, M.S.M.; Mansor, M.; Alway, A.; Karim, S.A. Novel Hopfield Neural Network Model with Election Algorithm for Random 3 Satisfiability. Processes 2021, 9, 1292.
27. Gu, X. Application Research for Multiobjective Low-Carbon Flexible Job-Shop Scheduling Problem Based on Hybrid Artificial Bee Colony Algorithm. IEEE Access 2021, 9, 135899–135914.
28. Li, J.Q.; Xie, S.X.; Pan, Q.K.; Wang, S. A hybrid artificial bee colony algorithm for flexible job shop scheduling problems. Int. J. Comput. Commun. Control 2011, 6, 286–296.
29. Anuar, S.; Selamat, A.; Sallehuddin, R. A modified scout bee for artificial bee colony algorithm and its performance on optimization problems. J. King Saud Univ.-Comput. Inf. Sci. 2016, 28, 395–406.
30. Squillero, G.; Tonda, A. Divergence of character and premature convergence: A survey of methodologies for promoting diversity in evolutionary optimization. Inf. Sci. 2016, 329, 782–799.
31. Santos, R.; Borges, G.; Santos, A.; Silva, M.; Sales, C.; Costa, J.C. A semi-autonomous particle swarm optimizer based on gradient information and diversity control for global optimization. Appl. Soft Comput. 2018, 69, 330–343.
32. Weiss, T.; Moser, C.; Venus, D.; Berggren, B.; Togerro, A. Parametric multi-objective energy and cost analysis in the life cycle of nearly zero energy buildings – an exhaustive search approach. Sustain. Build. 2019, 4, 5.
33. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671.
34. Akay, B. A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl. Soft Comput. 2013, 13, 3066–3091.
35. Rashid, T.A.; Abdullah, S.M. A hybrid of artificial bee colony, genetic algorithm, and neural network for diabetic mellitus diagnosing. ARO-Sci. J. Koya Univ. 2018, 6, 55–64.
36. Fairee, S.; Prom-On, S.; Sirinaovakul, B. Reinforcement learning for solution updating in Artificial Bee Colony. PLoS ONE 2018, 13, e0200738.
37. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. CCSA: Conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput. 2019, 85, 105583.
38. Su, H.; Zhao, D.; Yu, F.; Heidari, A.A.; Zhang, Y.; Chen, H.; Li, C.; Pan, J.; Quan, S. Horizontal and vertical search artificial bee colony for image segmentation of COVID-19 X-ray images. Comput. Biol. Med. 2022, 142, 105181.
Figure 1. General Flow of the Proposed Study.
Figure 2. Schematic Diagram for DHNN-r2SAT.
Figure 3. Schematic diagram of ABC optimizing logic phase.
Figure 4. (a) $f_{best}$ for $r2SAT_{xor}^{ABC}$; (b) $f_{best}$ for $r2SAT_{nor}^{ABC}$; and (c) $f_{best}$ for $r2SAT_{and}^{ABC}$.
Figure 5. (a) $f_{best}$ for $r2SAT_{nand}^{ABC}$; (b) $f_{best}$ for $r2SAT_{or}^{ABC}$.
Figure 6. Convergence curve for each algorithm.
Table 1. Summary of the Related Studies.

| Author(s) | Detail of the Studies | Summary and Findings |
|---|---|---|
| Wan Abdullah [2] | The first work to incorporate SAT with HNN, capitalizing on the structure of SAT by proposing logical inference through the minimization of HornSAT in HNN. | The approach is able to yield connection strengths (synaptic weights) between neurons, resulting in an energy-minimizing dynamic network. |
| Kasihmuddin et al. [3] | HNN incorporated with the ABC algorithm in minimizing the 2SAT structure. | The performance of 2SAT in HNN achieves 90% production of global minimum solutions, which indicates the credibility of 2SAT in representing the behavior of neurons in HNN. |
| Sathasivam et al. [4] | A non-systematic RAN2SAT was developed to represent the symbolic output in HNN. | The work locates maximum production of global solutions, indicating that RAN2SAT is successfully embedded in the operations of HNN. |
| Alway et al. [5] | A novel non-systematic SAT was proposed and implemented in HNN. | MAJ2SAT increases the neuron variation as the ratio of 2SAT increases. Despite a restrictive training environment, the synaptic weights attained lead to global minima solutions. |
| Karaboga & Basturk [10] | ABC in solving multidimensional and multimodal numerical optimization problems. | The new SI algorithm named ABC was proposed with few parameters and compared with existing algorithms. |
| Jia et al. [11] | Developed a binary variant of ABC named bitABC. | A comprehensive comparison was made with other binary ABC variants on 13 selected benchmark problems. bitABC performs strongly in terms of final solution accuracy, convergence speed, and robustness. |
| Watada et al. [12] | ABC was improved by cooperating with two major components of the Genetic Algorithm (GA) and implemented in DLNN to solve BLPP. | The improved ABC yields high-quality global optimal solutions with higher accuracy in less time. The work proves that ABC is flexible enough to cooperate with other metaheuristics. |
| Kiran [13] | Proposed a modification of the solution update rule of basic ABC, replacing it with an XOR logic gate in binABC. | A comprehensive comparison was conducted with six variants of binary ABC on a modern benchmark problem set (CEC2015). binABC is compatible with solving discrete problems. |
Table 2. List of Parameters used in DHNN model.

| Parameter | Parameter Value |
|---|---|
| Ratio of negative literals, $r$ | {0.5, 0.6, 0.7, 0.8, 0.9} |
| Number of neurons, $\lambda$ | {10, 50, 90} |
| Neuron combination, $\beta$ | 10 |
| Number of trials | 100 |
| Number of learning, $b$ | 100 |
| Learning iteration, $\varepsilon$ | $\varepsilon \leq b$ |
| CPU time | 24 h |
| Tolerance value, Tol | 0.001 |
| Initialization of neuron states | Random |
| Training algorithm | Exhaustive Search [5] |
Table 3. List of ABC parameters used in Logic Phase.

| Parameter | Parameter Value |
|---|---|
| Neuron combination, $\beta$ | 10 |
| Number of employed bees | 50 [3] |
| Number of onlooker bees | 50 [3] |
| Number of scout bees | 5 [23] |
| Number of trials | 5 [3] |
Table 4. List of GA parameters used in Logic Phase.

| Parameter | Parameter Value |
|---|---|
| Number of generations | 100 [25] |
| Number of populations | 500 |
| Selection rate | 0.1 [10] |
| Crossover rate | 1 [10] |
| Mutation rate | 1 [10] |
Table 5. List of ES parameters used in Logic Phase.

| Parameter | Parameter Value |
|---|---|
| Number of strings | 100 [10] |
| Selection rate | 0.1 [10] |
Table 6. List of PSO parameters used in Logic Phase.

| Parameter | Parameter Value |
|---|---|
| Particle size | 50 |
| Number of trials | 20 |
| Particle velocity | 10 |
Table 7. The implication truth table for NAND ($r2SAT_{nand}^{ABC}$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \bar{\wedge} x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (NAND with Setup I) | $v_{i,j}$ (NAND with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 0 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 1 | 1 | 1 |
Table 8. The implication truth table for OR ($r2SAT_{or}^{ABC}$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \vee x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (OR with Setup I) | $v_{i,j}$ (OR with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 1 | 0 | 1 | 1 | 1 |
Table 9. The implication truth table for AND ($r2SAT_{and}^{ABC}$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \wedge x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (AND with Setup I) | $v_{i,j}$ (AND with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 1 | 1 | 1 | 0 | 1 | 1 | 0 |
Table 10. The implication truth table for XOR ($r2SAT_{xor}^{ABC}$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \underline{\vee} x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (XOR with Setup I) | $v_{i,j}$ (XOR with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 0 | 0 | 1 | 0 | 0 |
Table 11. The implication truth table for NOR ($r2SAT_{nor}^{ABC}$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \bar{\vee} x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (NOR with Setup I) | $v_{i,j}$ (NOR with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 0 | 1 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 | 0 |
| 1 | 1 | 0 | 0 | 1 | 0 | 0 |
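Tables 7–11 can be summarized by how often each gate outputs 1 over the four possible input pairs, which is a rough proxy for how strongly each operator biases updates towards negated literals. The enumeration below is our own check, not taken from the paper:

```python
# Counting how many of the four input pairs make each bitwise operator
# output 1 (cf. the truth columns of Tables 7-11). Operators that favor
# 1s push the bee population towards negated literals.
gates = {
    "NAND": lambda a, b: 1 - (a & b),
    "OR":   lambda a, b: a | b,
    "AND":  lambda a, b: a & b,
    "XOR":  lambda a, b: a ^ b,
    "NOR":  lambda a, b: 1 - (a | b),
}
for name, gate in gates.items():
    ones = sum(gate(a, b) for a in (0, 1) for b in (0, 1))
    print(f"{name}: {ones}/4 input pairs yield 1")
# NAND: 3/4, OR: 3/4, AND: 1/4, XOR: 2/4, NOR: 1/4 -- the two 1-favoring
# gates are exactly the ones with near-zero MAE at high r in Tables 12-14.
```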
Table 12. Comparison of $r2SAT_{nand}^{ABC}$ against other logic gates in terms of MAE for $\lambda = 10$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT_{xor}^{ABC}$ | $r2SAT_{and}^{ABC}$ | $r2SAT_{or}^{ABC}$ | $r2SAT_{nor}^{ABC}$ |
|---|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.7 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.8 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.9 | Mean | 0.0000 | 0.0000 | 0.3000 | 0.0000 | 0.3267 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 1.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/2/1 | 0/0/3 | 0/2/1 |
Table 13. Comparison of $r2SAT_{nand}^{ABC}$ against other logic gates in terms of MAE for $\lambda = 50$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT_{xor}^{ABC}$ | $r2SAT_{and}^{ABC}$ | $r2SAT_{or}^{ABC}$ | $r2SAT_{nor}^{ABC}$ |
|---|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.7 | Mean | 0.0000 | 0.7233 | 1.1567 | 0.0000 | 1.0450 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 1.6667 | 2.0000 | 0.0000 | 2.0000 |
| | +/−/= | 0/0/3 | 0/2/1 | 0/2/1 | 0/0/3 | 0/2/1 |
| 0.8 | Mean | 0.0000 | 4.1200 | 6.1000 | 0.0000 | 5.6400 |
| | Min | 0.0000 | 1.6000 | 4.6000 | 0.0000 | 4.6000 |
| | Max | 0.0000 | 5.8000 | 7.4000 | 0.0000 | 7.2000 |
| | +/−/= | 0/0/3 | 0/3/0 | 0/3/0 | 0/0/3 | 0/3/0 |
| 0.9 | Mean | 0.0500 | 8.9800 | 11.4400 | 0.0000 | 11.0200 |
| | Min | 0.0000 | 7.6000 | 9.2000 | 0.0000 | 9.6000 |
| | Max | 0.5000 | 10.2000 | 12.4000 | 0.0000 | 12.8000 |
| | +/−/= | 0/2/1 | 0/2/0 | 0/3/0 | 2/0/1 | 0/3/0 |
Table 14. Comparison of $r2SAT_{nand}^{ABC}$ against other logic gates in terms of MAE for $\lambda = 90$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT_{xor}^{ABC}$ | $r2SAT_{and}^{ABC}$ | $r2SAT_{or}^{ABC}$ | $r2SAT_{nor}^{ABC}$ |
|---|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.0500 | 0.0500 | 0.0000 | 0.3000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.5000 | 0.5000 | 0.0000 | 1.0000 |
| | +/−/= | 0/0/3 | 0/2/1 | 0/2/1 | 0/0/3 | 0/2/1 |
| 0.7 | Mean | 0.0000 | 2.1800 | 5.0200 | 0.0000 | 4.8600 |
| | Min | 0.0000 | 0.0000 | 3.0000 | 0.0000 | 2.6000 |
| | Max | 0.0000 | 3.4000 | 7.4000 | 0.0000 | 7.0000 |
| | +/−/= | 0/0/3 | 0/2/1 | 0/3/0 | 0/0/3 | 0/3/0 |
| 0.8 | Mean | 0.2000 | 12.5400 | 15.8000 | 0.0000 | 15.2200 |
| | Min | 0.0000 | 9.4000 | 14.0000 | 0.0000 | 13.6000 |
| | Max | 0.5000 | 14.6000 | 18.0000 | 0.0000 | 16.8000 |
| | +/−/= | 0/2/1 | 0/3/0 | 0/3/0 | 2/0/1 | 0/3/0 |
| 0.9 | Mean | 0.5000 | 21.6200 | 24.0400 | 0.6500 | 25.1200 |
| | Min | 0.0000 | 17.8000 | 22.4000 | 0.0000 | 22.8000 |
| | Max | 1.0000 | 23.8000 | 27.8000 | 1.5000 | 27.6000 |
| | +/−/= | 2/0/1 | 0/3/0 | 0/3/0 | 0/2/1 | 0/3/0 |
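For clarity, the MAE reported in Tables 12–14 (and in Tables 16–18 below) can be read as the average absolute gap between the desired number of negative literals $N_v$ and the number actually produced across repeated runs. The snippet below is a minimal sketch of this reading, using hypothetical trial counts rather than the paper's data:

```python
def mae(desired_nv, obtained_counts):
    # Mean absolute error between the desired number of negative literals
    # N_v and the counts obtained in each run (our reading of the indicator).
    return sum(abs(desired_nv - c) for c in obtained_counts) / len(obtained_counts)

# Hypothetical example: lambda = 10, r = 0.9, so N_v = 9.
print(mae(9, [9, 9, 8, 9, 9]))  # 0.2
```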
Table 15. Position update in Equation (14) with the NAND logic gate operator under equal probability (unbiased $\varphi$).

| $x_{i,j}$ | $x_{k,j}$ | $x_{i,j} \bar{\wedge} x_{k,j}$ | Setup I ($\varphi < 0.5$) | Setup II ($\varphi \geq 0.5$) | $v_{i,j}$ (NAND with Setup I) | $v_{i,j}$ (NAND with Setup II) |
|---|---|---|---|---|---|---|
| 0 | 0 | 1 | 1 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 0 | 1 | 0 | 0 | 0 |
Table 16. Comparison of $r2SAT_{nand}^{ABC}$ against other binary algorithms in terms of MAE for $\lambda = 10$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT^{GA}$ | $r2SAT^{ES}$ | $r2SAT^{PSO}$ |
|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.7 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.8 | Mean | 0.0000 | 0.0000 | 0.1000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 1.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/2/1 | 0/0/3 |
| 0.9 | Mean | 0.0000 | 0.1000 | 0.7284 | 0.1786 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.5000 | 1.8000 | 0.9524 |
| | +/−/= | 2/0/1 | 0/2/1 | 0/2/1 | 0/2/1 |
Table 17. Comparison of $r2SAT_{nand}^{ABC}$ against other binary algorithms in terms of MAE for $\lambda = 50$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT^{GA}$ | $r2SAT^{ES}$ | $r2SAT^{PSO}$ |
|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.0500 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.5000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/2/1 | 0/0/3 | 0/0/3 |
| 0.7 | Mean | 0.0000 | 0.7333 | 2.1600 | 0.8250 |
| | Min | 0.0000 | 0.0000 | 1.0000 | 0.0000 |
| | Max | 0.0000 | 2.0000 | 4.0000 | 1.5238 |
| | +/−/= | 2/0/1 | 0/2/1 | 0/3/0 | 0/2/1 |
| 0.8 | Mean | 0.0000 | 2.8000 | 7.8200 | 4.3905 |
| | Min | 0.0000 | 0.0000 | 7.0000 | 3.0476 |
| | Max | 0.0000 | 4.0000 | 10.0000 | 5.3810 |
| | +/−/= | 2/0/1 | 0/2/1 | 0/3/0 | 0/3/0 |
| 0.9 | Mean | 0.0500 | 5.4008 | 31.8950 | 8.9714 |
| | Min | 0.0000 | 4.0000 | 10.0000 | 7.6191 |
| | Max | 0.5000 | 6.2308 | 209.0500 | 9.7620 |
| | +/−/= | 3/0/0 | 0/3/0 | 0/3/0 | 0/3/0 |
Table 18. Comparison of $r2SAT_{nand}^{ABC}$ against other binary algorithms in terms of MAE for $\lambda = 90$.

| $r$ | Performance Indicator | $r2SAT_{nand}^{ABC}$ | $r2SAT^{GA}$ | $r2SAT^{ES}$ | $r2SAT^{PSO}$ |
|---|---|---|---|---|---|
| 0.5 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | +/−/= | 0/0/3 | 0/0/3 | 0/0/3 | 0/0/3 |
| 0.6 | Mean | 0.0000 | 0.2667 | 0.7221 | 0.2267 |
| | Min | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Max | 0.0000 | 0.6667 | 1.0000 | 0.6000 |
| | +/−/= | 2/0/1 | 0/2/1 | 0/2/1 | 0/2/1 |
| 0.7 | Mean | 0.0000 | 3.1822 | 8.9100 | 4.4476 |
| | Min | 0.0000 | 1.0000 | 5.0000 | 2.4286 |
| | Max | 0.0000 | 5.0000 | 12.6000 | 5.9048 |
| | +/−/= | 3/0/0 | 0/3/0 | 0/3/0 | 0/3/0 |
| 0.8 | Mean | 0.2000 | 8.3365 | 17.9400 | 15.0143 |
| | Min | 0.0000 | 6.0000 | 15.4000 | 13.3333 |
| | Max | 0.5000 | 9.5000 | 20.0000 | 16.1905 |
| | +/−/= | 3/0/0 | 0/3/0 | 0/3/0 | 0/3/0 |
| 0.9 | Mean | 0.5000 | 13.3613 | 28.5150 | 24.0238 |
| | Min | 0.0000 | 12.5000 | 26.0000 | 22.8571 |
| | Max | 1.0000 | 14.0690 | 34.0000 | 26.0476 |
| | +/−/= | 3/0/0 | 0/3/0 | 0/3/0 | 0/3/0 |
Table 19. AvR (average ranking) of all algorithms in the logic phase.

| $r$ | $r2SAT_{nand}^{ABC}$ | $r2SAT^{GA}$ | $r2SAT^{ES}$ | $r2SAT^{PSO}$ |
|---|---|---|---|---|
| 0.5 | 2.5000 | 2.5000 | 2.5000 | 2.5000 |
| 0.6 | 1.8333 | 3.0000 | 2.8333 | 2.3333 |
| 0.7 | 1.5000 | 2.1667 | 3.5000 | 2.8333 |
| 0.8 | 1.3333 | 2.0000 | 4.0000 | 2.6667 |
| 0.9 | 1.0000 | 2.0000 | 3.0000 | 4.0000 |
Table 20. Friedman test between $r2SAT_{nand}^{ABC}$ and all comparison algorithms.

| $r$ | Chi-square value, $\chi^2$ | p-value | Accept/Reject $H_0$ |
|---|---|---|---|
| 0.5 | 0.0000 | 1.0000 | Accept $H_0$ |
| 0.6 | 1.5000 | 0.6823 | Accept $H_0$ |
| 0.7 | 4.0000 | 0.2615 | Accept $H_0$ |
| 0.8 | 7.0000 | 0.0719 | Accept $H_0$ |
| 0.9 | 9.0000 | 0.0611 | Accept $H_0$ |
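Friedman statistics of the kind reported in Table 20 can be computed with a standard statistical package. The snippet below (requiring scipy) is illustrative only: it feeds the Min/Mean/Max summaries from Table 17 at $r = 0.9$ into scipy's Friedman test as stand-ins for per-trial measurements, which are not reproduced here.

```python
from scipy.stats import friedmanchisquare

# Min/Mean/Max of each algorithm at lambda = 50, r = 0.9 (Table 17),
# used here only as stand-in repeated measurements.
nand_abc = [0.0000, 0.0500, 0.5000]
ga       = [4.0000, 5.4008, 6.2308]
es       = [10.0000, 31.8950, 209.0500]
pso      = [7.6191, 8.9714, 9.7620]

stat, p = friedmanchisquare(nand_abc, ga, es, pso)
print(f"chi-square = {stat:.4f}, p = {p:.4f}")
# With a perfect rank ordering across the three summaries, the statistic
# is 9.0, which happens to equal the r = 0.9 row of Table 20; the printed
# p-value uses scipy's chi-square approximation and need not coincide with
# the p-value reported in the table, whose underlying per-trial data differ.
```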
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.