Article

List-Based Threshold Accepting Algorithm with Improved Neighbor Operator for 0–1 Knapsack Problem

Liangcheng Wu, Kai Lin, Xiaoyu Lin and Juan Lin
1 College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou 350002, China
2 Key Laboratory of Smart Agriculture and Forestry, Fujian Province University, Fuzhou 350002, China
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(11), 478; https://doi.org/10.3390/a17110478
Submission received: 7 September 2024 / Revised: 17 October 2024 / Accepted: 21 October 2024 / Published: 25 October 2024

Abstract

The list-based threshold accepting (LBTA) algorithm is a sophisticated local search method that utilizes a threshold list to streamline the parameter tuning process in the traditional threshold accepting (TA) algorithm. This paper proposes an enhanced local search version of the LBTA algorithm specifically tailored for solving the 0–1 knapsack problem (0–1 KP). To maintain a dynamic threshold list, a feasible threshold updating strategy is designed to accept adaptive modifications during the search process. In addition, the algorithm incorporates an improved bit-flip operator designed to generate a neighboring solution with a controlled level of disturbance, thereby fostering exploration within the solution space. Each trial solution produced by this operator undergoes a repair phase using a hybrid greedy repair operator that incorporates both density-based and value-based add operator to facilitate optimization. The LBTA algorithm’s performance was evaluated against several state-of-the-art metaheuristic approaches on a series of large-scale instances. The simulation results demonstrate that the LBTA algorithm outperforms or is competitive with other leading metaheuristics in the field.

1. Introduction

The list-based threshold accepting (LBTA) algorithm [1] falls under the category of threshold accepting (TA) algorithms [2]. In a manner analogous to simulated annealing (SA), LBTA accepts suboptimal solutions to avoid being trapped in local optima. Instead of relying on a fixed temperature schedule, LBTA utilizes an adaptive threshold list that is updated iteratively based on the topology of the search space. This allows LBTA to achieve better results with fewer tuning parameters than SA and other local search methods. Owing to its flexibility, LBTA has been applied effectively to both combinatorial and continuous optimization problems, such as the vehicle routing problem (VRP) and its variants [3], the job shop scheduling problem (JSSP) [4], the zero-wait scheduling problem [5], and the traveling salesman problem (TSP) [6].
The 0–1 KP, recognized as a classic NP-hard problem in combinatorial optimization, has practical applications in various decision-making domains [7]. The problem involves a collection of n items, labeled from 1 to n; each item i has a weight w_i and a value v_i, and the knapsack has a maximum capacity c. The aim is to maximize the total value of the items placed in the knapsack while ensuring that their total weight does not exceed the knapsack's capacity, as follows (see Equation (1)):
\[ \max f(x) = \sum_{i=1}^{n} v_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le c, \quad x_i \in \{0, 1\} \]   (1)
where x_i indicates whether item i is included in the knapsack (x_i = 1) or not (x_i = 0), and f(x) is the objective function.
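For concreteness, the objective value and the capacity constraint of a candidate selection can be checked in a few lines. The sketch below is illustrative only (Python is used here for readability; the authors' implementation is in Java) and assumes the items are given as parallel lists of weights and values.

```python
def evaluate(x, weights, values, capacity):
    """Return (total value, feasibility) of a binary selection vector x."""
    total_weight = sum(w for w, xi in zip(weights, x) if xi == 1)
    total_value = sum(v for v, xi in zip(values, x) if xi == 1)
    return total_value, total_weight <= capacity

# Example: items with weights [3, 8, 4, 2], values [5, 9, 6, 3], capacity 10.
# Packing items 1, 3, and 4 gives value 14 with weight 9, which is feasible.
value, feasible = evaluate([1, 0, 1, 1], [3, 8, 4, 2], [5, 9, 6, 3], 10)
```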
Aiming to fill this gap in the literature, this paper introduces a binary LBTA designed for tackling the 0–1 KP. The proposed algorithm features a feasible threshold list that effectively narrows the search space, alongside a hybrid greedy repair operator that surpasses the traditional density-based operator in terms of optimization and convergence. Moreover, a bit-flip mutation operator is employed to generate trial solutions, which provides an elastic local search area and sufficient perturbation to escape local optima. Extensive experimental evaluations demonstrate the efficacy and competitiveness of the LBTA algorithm, with comparative results on two large sets of 0–1 KP instances highlighting its advantages.
The remainder of this paper is structured as follows: Section 2 offers a brief introduction of the basic LBTA algorithm, the 0–1 KP, and algorithms for the 0–1 KP. Section 3 presents the proposed LBTA algorithm, while Section 4 analyzes its behavior. In Section 5, the performance of the LBTA algorithm is compared with other state-of-the-art metaheuristics. Finally, Section 6 summarizes the findings of this study and outlines potential avenues for future research.

2. Related Work

2.1. List-Based Threshold Accepting Algorithm

LBTA is a stochastic search algorithm that explores the solution space through a neighborhood search strategy. It allows the acceptance of suboptimal candidate solutions based on a predefined threshold value. As an extension of the TA metaheuristic, LBTA maintains a list of threshold values that decreases over the course of the iterations. This enables the algorithm to balance diversification and intensification without additional intervention or parameter configuration. The procedure consists of two main components: the initialization and updating of the threshold value list, and a local search process.

2.1.1. Initialize the List of Threshold Values

During the first stage, an initial list of threshold values is generated through a particular local search method. The procedure begins with the following steps: (i) selecting a neighboring solution s′ from the current solution s by applying a local search move and (ii) calculating and normalizing the change in the fitness function as follows:
\[ T = \frac{|f(s') - f(s)|}{f(s)} \]   (2)
The newly generated threshold T is repeatedly added to the list until the list reaches its maximum length, which is controlled solely by the parameter L, representing the length of the list. In this initial phase, the local search procedure gathers information from the search space, while the normalization helps to ensure a gradual decrease in the threshold values during the following iterations.
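A minimal sketch of this initialization, assuming a caller-supplied fitness function f and a problem-specific random_neighbor move (both hypothetical names), could look as follows:

```python
def init_threshold_list(s, f, random_neighbor, L):
    """Fill the threshold list with L normalized fitness changes (Equation (2))."""
    thresholds = []
    while len(thresholds) < L:
        s_new = random_neighbor(s)
        T = abs(f(s_new) - f(s)) / f(s)  # normalized change in fitness
        thresholds.append(T)
        s = s_new  # keep walking so the list reflects the local landscape
    return thresholds
```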

2.1.2. Iteration of the Algorithm

During the second stage, the iteration begins by generating a neighboring solution s′ from the initial randomized solution s using a local search move. This move strategy may be the same one adopted in the initialization, or a new method may be used. If the solution s′ is better than the original s, then s is updated with s′; otherwise, a new threshold value T is generated using Equation (2). If T is less than the current largest threshold value T_max, s′ is accepted as the new solution, and T replaces T_max in the list. Otherwise, s′ is abandoned, and the algorithm proceeds with the original solution s in the next iteration. This process is repeated until a stopping condition is satisfied.

2.1.3. Threshold Conservation Strategy

In [1], to help the algorithm break free from local optima and increase the chance of finding a better solution, a threshold conservation strategy is employed. This involves keeping the values in the list sufficiently high by periodically updating the list with the maximum normalized threshold values discovered within a predefined number of feasible moves. This delay factor slows down the adaptation of the list values and helps avoid premature convergence to suboptimal solutions.

2.1.4. Pseudocode of LBTA

The pseudocode of LBTA is shown in Algorithm 1. We assume that the problem solved here is a maximization problem, consistent with the 0–1 KP. The algorithm is composed of two parts: initialization and iteration. In the initialization phase, a preliminary list of threshold values is established through a specific local search method, where L represents the length of the list. In the iteration phase, the algorithm generates a new solution by performing a local search move and updates the current solution and threshold list based on an acceptance criterion. The iterations continue until the maximum number of iterations G is reached.
Algorithm 1: List-based threshold accepting algorithm
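Algorithm 1 appears only as a figure in the published version; the following Python sketch restates the loop described above for a maximization problem. It reuses the init_threshold_list helper from the previous sketch, and f, random_neighbor, and the iteration limit G are assumed to be supplied by the caller.

```python
def lbta(s, f, random_neighbor, L, G):
    """Basic list-based threshold accepting loop (cf. Algorithm 1)."""
    thresholds = init_threshold_list(s, f, random_neighbor, L)
    best = s
    for _ in range(G):
        s_new = random_neighbor(s)
        if f(s_new) >= f(s):
            s = s_new                          # improving moves are always accepted
        else:
            T = abs(f(s_new) - f(s)) / f(s)    # Equation (2)
            T_max = max(thresholds)
            if T < T_max:                      # accept a worse solution below the threshold
                s = s_new
                thresholds.remove(T_max)       # T replaces the current largest threshold
                thresholds.append(T)
        if f(s) > f(best):
            best = s
    return best
```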

2.2. 0–1 KP

As a constrained optimization problem, the 0–1 KP requires an additional operator to deal with infeasible solutions. The two most commonly used methods in the literature are the penalty function and the repair operation. The former involves designing a penalty function that assigns a lower fitness value to infeasible solutions, reducing their chance of being carried into the next generation [8]. The repair operation is more widely used and typically involves a two-step greedy approach [9]. In step 1, when an infeasible solution is encountered, a greedy drop operator removes items according to specific rules until the solution becomes feasible. In step 2, items are added back until the solution would become infeasible again. The most commonly used rule is based on profit density, which intuitively suggests that items with higher value and lower weight have a greater chance of maximizing the total profit [10]. Algorithms 2 and 3 describe these two steps, respectively. In Algorithm 2, if the current solution is infeasible, the greedy drop operator checks every item in increasing order of v_i/w_i and changes x_i from 1 to 0, so that lower-density items are removed first. Conversely, in Algorithm 3, the greedy add operator checks the items in descending order of v_i/w_i so that higher-density items are included first.
Algorithm 2: Greedy drop operator
Algorithm 3: Greedy add operator
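Algorithms 2 and 3 are likewise reproduced as figures; the sketch below follows their textual description. The solution object s is assumed to expose the binary vector s.x and the running totals s.v and s.w used later in Section 3.1, and the items are again given as parallel weight/value lists.

```python
def greedy_drop(s, weights, values, capacity):
    """Drop items in increasing order of v_i/w_i until the solution is feasible (cf. Algorithm 2)."""
    for i in sorted(range(len(weights)), key=lambda j: values[j] / weights[j]):
        if s.w <= capacity:
            break
        if s.x[i] == 1:
            s.x[i], s.v, s.w = 0, s.v - values[i], s.w - weights[i]
    return s

def greedy_add(s, weights, values, capacity):
    """Add items in decreasing order of v_i/w_i while capacity remains (cf. Algorithm 3)."""
    for i in sorted(range(len(weights)), key=lambda j: values[j] / weights[j], reverse=True):
        if s.x[i] == 0 and s.w + weights[i] <= capacity:
            s.x[i], s.v, s.w = 1, s.v + values[i], s.w + weights[i]
    return s
```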
Early exact algorithms for the 0–1 KP, such as the dynamic programming approach [11], the branch and bound approach [12], and the enumeration approach [13], were designed for small-to-medium instances. As the size of instances increased, these methods failed to tackle the problem within an acceptable timeframe. Therefore, a wide range of metaheuristics have been put forward to tackle large-scale 0–1 KPs. Depending on the inspirational origins, these algorithms can be classified into four major categories: swarm-intelligence (SI)-based, bio-inspired, chemistry-based, and physics-based [14].
In the field of SI, traditional algorithms include the genetic algorithm (GA) [15], ant colony optimization (ACO) [16], and particle swarm optimization (PSO) [17]. Recently, a variety of novel SI algorithms have proven very efficient in solving the 0–1 KP, such as the spider algorithm (BSSA), the monkey algorithm (MA), the whale optimization algorithm (WOA), cuckoo search (CS), chicken swarm optimization (CSO), the wolf pack algorithm (WPA), and monarch butterfly optimization (MBO). Most of them define a binary version within the general algorithm framework and use the aforementioned repair operators to fix infeasible solutions.
For example, a binary MBO (BMBO) [18] uses real-valued vectors and binary vectors to define search space and solution space separately. In chaotic MBO (CMBO) [19], 12 one-dimensional chaotic maps are utilized to adjust the migration procedures, and a Gaussian mutation operator is imposed on the worst solution to prevent premature convergence. A generalized opposition-based learning (OBL) MBO (OMBO) [20] employs OBL to speed up the convergence. Gaussian perturbation is also included to escape from a local optimum.
In the field of bio-inspired algorithms, a modified discrete shuffled frog leaping algorithm (MDSFLA) [21] includes a PSO-like technique and shuffled complex evolution to carry on local search. A binary version of differential evolution (BDE) [22] enriches the exploration and exploitation with four components: capability with the dual representation of solutions, an improved mapping method, enhanced mutation and one-point crossover, and a diversity mechanism.
In terms of chemistry-based methods, a binary chemical reaction optimization (CRO) framework is commonly applied to the 0–1 KP [8,23]. In the field of physics-based algorithms, a series of improved harmony search (HS) algorithms have emerged. A CS algorithm with global HS (CSGHS) [24] combines the exploration of global HS and the exploitation of CS to address the 0–1 KP; good performance is reported in comparisons with CS, SFLA, and DE in terms of search accuracy and convergence speed. A simplified binary HS (SBHS) [25] utilizes the differences among harmonies to generate new solutions and dynamically adjusts the harmony memory consideration rate with the problem dimension. An improved HS (IHS) [26] employs a feasible parameter-tuning scheme to generate new solution vectors so as to improve accuracy and convergence. A global-best HS (GHS) [27] adjusts the pitch-adjustment step through the global best harmony and adds a social dimension so that solutions can use global-best and local-best information at the same time. Self-adaptive HS (SAHS) [28] uses the harmony memory to adjust its parameters automatically, and a low-discrepancy sequence replaces the traditional pseudo-random initialization of the harmony memory. A list-based SA (LBSA) replaces the traditional temperature-descent schedule with a practical list to manage the convergence pattern [29]; in our algorithm, we adopt a bit-flip mutation operator for generating trial solutions, which has been shown to provide an effective search neighborhood. The noising method (NM) is another variant of SA; six NM variants have been proposed for the 0–1 KP, combining two noise strategies, two ways of adding noise to the objective, and two decreasing schedules [30]. Both the LBSA and NM algorithms incorporate a hybrid greedy optimization operator that combines a density-based operator and a value-based operator, which has been shown to enhance performance. We therefore also adopt this hybrid operator in our algorithm as a replacement for the traditional single optimization operator to improve its effectiveness.
Furthermore, in recent years, many novel algorithms have emerged to handle these problems, such as the reptile search algorithm (RSA) [31], the binary slime mould algorithm (BSMA) [32], the binary Archimedes optimization algorithm (BAOA) [40], the binary tunicate swarm algorithm (BTSA) [33], binary Harris hawks optimization (BHHO) [34], the binary marine predators algorithm (BMPA) [39], and binary elephant herding optimization (BinEHO) [36]. These algorithms have demonstrated significant potential in addressing complicated optimization problems owing to their ability to explore large search spaces effectively and to locate near-optimal solutions. We compare our proposed algorithm with them in Section 5 to show its competitiveness.

3. LBTA Algorithm with Enhanced Local Search for 0–1 KP

In this study, we introduce a novel version of the LBTA algorithm to tackle the 0–1 KP. The main framework can be summarized as follows: Initially, we redefine the solution representation for the 0–1 KP. Subsequently, we devise a rule for generating the threshold list, which adapts based on the evolving search landscape, thus enabling a more efficient exploration of the solution space. Following this, a bit-flip mutation operator is utilized to generate a diverse set of candidate solutions. These candidates are then refined using a hybrid greedy optimization operator, which employs both value-based and density-based strategies to repair and optimize the quality of the solutions.

3.1. Solution Representation

The solution for the 0–1 KP is represented by an object s with three attributes: s.x, s.v, and s.w. The n-bit binary array s.x is initialized randomly, with each bit set to 0 or 1, where s.x_i = 1 indicates that the corresponding item is included in the knapsack. s.v represents the total value of the items in the knapsack, while s.w represents their total weight.
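A direct Python rendering of this representation (an illustrative sketch; the original implementation is in Java) might be:

```python
import random
from dataclasses import dataclass

@dataclass
class Solution:
    x: list   # n-bit binary vector; x[i] == 1 means item i is in the knapsack
    v: int    # total value of the packed items (s.v)
    w: int    # total weight of the packed items (s.w)

def random_solution(weights, values):
    """Randomly include each item and record the resulting totals."""
    x = [random.randint(0, 1) for _ in weights]
    v = sum(val for val, xi in zip(values, x) if xi)
    w = sum(wt for wt, xi in zip(weights, x) if xi)
    return Solution(x, v, w)
```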

3.2. The Initialization and Update of Threshold List

The threshold list is an essential part of the LBTA, as it is responsible for providing a flexible search space. In the traditional LBTA, the list is generated through stochastic local search, and the threshold values are normalized and stored in the list. In our proposed approach, we adopt a simpler and more efficient method: item values are randomly selected and inserted into the list until it is full, without the need to calculate fitness values.
To achieve a thorough and delicate search with a uniformly descending threshold list, we modify the basic framework of LBTA by utilizing a Markov chain to perform a series of local searches, similar to SA. During each inner iteration, the decision to accept a worse solution is based on the current maximum value in the threshold list, T_max. If a worse solution is accepted, the difference between the current solution and the candidate solution is accumulated, and the number of acceptances is recorded. After completing the local search, we replace T_max with the average of the accumulated differences. This approach provides a flexible threshold list that enables a broader search space, occasional upward moves that promote escaping from local maxima at the beginning, and a decreasing list of threshold values that leads to a thorough search toward the global maximum.
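The list update performed at the end of each Markov chain can be sketched as follows; deviation_sum and accept_count are assumed to have been accumulated in the inner loop exactly as described above.

```python
def update_threshold_list(thresholds, deviation_sum, accept_count):
    """Replace the current T_max with the average deviation of the accepted worse solutions."""
    if accept_count > 0:
        thresholds.remove(max(thresholds))
        thresholds.append(deviation_sum / accept_count)
    return thresholds
```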

3.3. Improved Neighbor Operator

Apart from the threshold list, another crucial part of LBTA is the local search strategy. An effective strategy should be powerful enough to help the algorithm break away from the attraction of the current solution, while also being gentle enough to avoid degenerating into a purely stochastic local search. In particular, for the 0–1 KP, changing only one item at a time would result in a weak perturbation and slow convergence, while changing too many items would result in a blind search. To address these issues, a feasible bit-flip mutation operator is proposed. During each local search, we randomly select items to include in or exclude from the knapsack. Two parameters, FT and DT, are used to control the maximum numbers of flipping and dropping operations, respectively. Algorithm 4 outlines the process. Within the FT flips, a random item index i is selected. If the corresponding item has not been included yet, we flip s.x_i from 0 to 1. Alternatively, if the current number of drops is less than DT, we exclude the item by setting s.x_i from 1 to 0. The loop iterates until the flipping and dropping limits are reached.
Algorithm 4: A bit-flip mutation operator
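Algorithm 4 is published as a figure; the sketch below follows the textual description, attempting FT flips in total and allowing at most DT of them to drop an already packed item. The caller is expected to pass a copy of the current solution, since the operator modifies s in place.

```python
import random

def bit_flip_mutation(s, weights, values, FT, DT):
    """Flip up to FT randomly chosen bits, with at most DT drops (cf. Algorithm 4)."""
    drops = 0
    for _ in range(FT):
        i = random.randrange(len(s.x))
        if s.x[i] == 0:                  # include a currently excluded item
            s.x[i], s.v, s.w = 1, s.v + values[i], s.w + weights[i]
        elif drops < DT:                 # exclude a packed item, limited by DT
            s.x[i], s.v, s.w = 0, s.v - values[i], s.w - weights[i]
            drops += 1
    return s
```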

3.4. Hybrid Greedy Repair and Optimization Operator

A traditional greedy repair operator uses the profit density as the sole metric to determine the priority of item selection. One drawback of this approach is that high-value but high-weight items are rarely selected, which may lead to a local optimum. To solve this problem, we design a hybrid method that combines the greedy repair operator with an optimizing operator that considers both density and value. The hybrid approach involves two arrays, HD and HV, which keep the items sorted in descending order of density v_i/w_i and of value v_i, respectively. During the first stage, only HD is used, so that items with lower density are dropped first. In the optimizing process, an additional parameter p is introduced to determine which array is used to invoke the greedy drop and greedy add operators (see Algorithm 5). The reason for dropping items using only HD is to preserve as much beneficial information as possible: if the hybrid arrays were used for dropping, items with higher value would tend to be removed more often and would then need to be re-added during the optimization process, wasting iterations.
Algorithm 5: Hybrid greedy optimization operator
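Since Algorithm 5 is also reproduced as a figure, the sketch below gives one possible reading of the hybrid operator: the repair stage always drops by density (the reverse of HD), and the subsequent add stage uses HD with probability p and HV otherwise. The exact role of p in the published pseudocode may differ slightly; this is an interpretation of the description above.

```python
import random

def hybrid_repair_and_optimize(s, weights, values, capacity, p):
    """Repair with the density ordering, then greedily add by density or by value (cf. Algorithm 5)."""
    n = len(weights)
    HD = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)  # by profit density
    HV = sorted(range(n), key=lambda i: values[i], reverse=True)               # by raw value

    # Repair stage: drop the lowest-density packed items until the solution is feasible.
    for i in reversed(HD):
        if s.w <= capacity:
            break
        if s.x[i] == 1:
            s.x[i], s.v, s.w = 0, s.v - values[i], s.w - weights[i]

    # Optimization stage: add items back using HD with probability p, otherwise HV.
    order = HD if random.random() < p else HV
    for i in order:
        if s.x[i] == 0 and s.w + weights[i] <= capacity:
            s.x[i], s.v, s.w = 1, s.v + values[i], s.w + weights[i]
    return s
```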

3.5. The Framework of the LBTA Algorithm for 0–1 KP

The framework of the LBTA algorithm is presented in Algorithm 6. After initializing the threshold list, a primary solution is generated randomly, then repaired and optimized using the hybrid greedy operator. In each inner iteration, a candidate solution s′ is generated using the bit-flip mutation operator. If s′ is better than s, or if the deviation between s.v and s′.v is less than T_max, s is replaced with s′. After the number of local search iterations reaches the Markov chain length, the current maximum threshold value is replaced with the average of the accumulated deviations. The algorithm then starts a new local search, and this continues until the global iteration limit is reached.
Algorithm 6: The framework of LBTA
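Algorithm 6 is published as a figure as well; the sketch below assembles the helpers from the previous sketches (random_solution, bit_flip_mutation, hybrid_repair_and_optimize) into the overall loop, with G outer generations, Markov chain length M, and threshold list length L. It is an illustrative reconstruction, not the authors' Java code.

```python
import copy
import random

def lbta_kp(weights, values, capacity, G, M, L, FT, DT, p):
    """LBTA framework for the 0-1 KP (cf. Algorithm 6)."""
    thresholds = [random.choice(values) for _ in range(L)]   # Section 3.2: random item values
    s = hybrid_repair_and_optimize(random_solution(weights, values), weights, values, capacity, p)
    best = s
    for _ in range(G):
        deviation_sum, accepted = 0.0, 0
        for _ in range(M):
            cand = bit_flip_mutation(copy.deepcopy(s), weights, values, FT, DT)
            cand = hybrid_repair_and_optimize(cand, weights, values, capacity, p)
            diff = s.v - cand.v
            if diff < 0:                     # candidate improves on the current solution
                s = cand
            elif diff < max(thresholds):     # accept a slightly worse candidate
                s = cand
                deviation_sum += diff
                accepted += 1
            if s.v > best.v:
                best = s
        if accepted > 0:                     # replace T_max with the average accepted deviation
            thresholds.remove(max(thresholds))
            thresholds.append(deviation_sum / accepted)
    return best
```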

4. Behavior Analysis

In this section, we present a series of experiments conducted to analyze the behavior of the LBTA algorithm. The experiments were performed on three sets of large 0–1 knapsack instances introduced in [20]. These instances are classified into three types based on their features, as shown in Table 1. Each group consists of five instances; the instance names, the number of items in each instance, and the best-known solution for each instance are listed in Table 2, with the optimal values emphasized in bold. Instances KP01–KP05 are uncorrelated, KP06–KP10 are weakly correlated, and KP11–KP15 are strongly correlated. The experiments were run on an Intel Core i7 CPU with 8 GB of RAM, using a Java implementation, and each instance was run independently 50 times.
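The three categories in Table 1 can be generated directly from these rules. A short sketch is given below; rand(a, b) is assumed to produce uniform integers, which is a modeling assumption rather than a detail stated in the source.

```python
import random

def generate_instance(n, correlation):
    """Generate a 0-1 KP instance following the rules of Table 1."""
    weights = [random.randint(10, 100) for _ in range(n)]
    if correlation == "uncorrelated":
        values = [random.randint(10, 100) for _ in range(n)]
    elif correlation == "weakly":
        values = [random.randint(w - 10, w + 10) for w in weights]
    else:  # strongly correlated
        values = [w + 10 for w in weights]
    capacity = int(0.75 * sum(weights))
    return weights, values, capacity
```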

4.1. Parameter Settings

To validate the behavior of LBTA, we divided the parameters into two groups. The first group consisted of the parameters of the basic LBTA framework: the number of generations G, the Markov chain length M, and the threshold list length L. The second group consisted of the parameters of the local search: the probability p of the hybrid repair operator, the number of flipping times FT, and the number of dropping times DT. The total number of fitness evaluations was fixed at G × M = 40,000.
In the preliminary experiments, we determined the parameter values of the basic LBTA framework. For the other parameters, we set p = 1, FT = 1, and DT = 1 to ensure that they did not cause interference. Specifically, we tried five values (50, 100, 200, 400, and 800) for G and ten values from 10 to 100, in increments of 10, for L. We found that G = 800, M = 50, L = 10, and p = 0.2 allowed LBTA to achieve good performance.

4.2. Performance Tuning for the Feasible Bit-Flip Operator

To determine an appropriate combination for the number of bit flips, we conducted tests across a range of values for FT and DT. We recorded the number of functions that achieved a 100% success rate for each setting, as most of these combinations were able to produce optimal results for the majority of instances. However, when FT = 1 and DT = 1, none of the functions achieved the optimal result with a 100% success rate. Therefore, we varied FT from 10 to 80 in increments of 10 and DT from 0 to 5 in increments of 1.
When an item is excluded from the knapsack, it still has a chance to be re-included. However, if DT is set as large as FT, numerous items may be repeatedly included and excluded, leading to unnecessary iterations. We therefore aim to fill up the knapsack as quickly as possible while still introducing local disturbance, which is why we set DT much lower than FT.
The histograms in Figure 1 depict the simulation results for the different combinations of FT and DT. From these histograms, we draw the following conclusions: (1) For FT, the differences across the 15 functions are insignificant: when FT equals 40, 50, 70, or 80, 14 of the 15 functions achieve a 100% success rate; for FT equal to 30 or 60, 13 of 15 do; and for the remaining values, 12 of 15 do. (2) For DT, the differences among the six values are significant: only when DT = 1 were all FT values able to produce the best results. (3) In general, FT should not be too large, so as to avoid degenerating into a random search. Based on the simulation results, we selected FT = 40 and DT = 1 for the following simulations.
Table 2. Comparison of the LBTA algorithm on the first set of 15 instances.
Instance | BKV | HGGA (2020): Best / Mean / Worst / SD | NM (2021): Best / Mean / Worst / SD | CMBO (2018): Best / Mean / Worst / SD | LBTA: Best / Mean / Worst / SD
KP0140,686 40,68640,685.0040,6856.00 × 10 2 40,68540,684.8840,6842.20 × 10 1 40,68640,683.0040,6837.10 × 10 1 40,68640,685.0740,6852.46 × 10 3
KP0250,592 50,59250,592.0050,5920.00 50,59250,592.0050,5920.00 50,59250,590.0050,5904.90 × 10 1 50,59250,592.0050,5920.00
KP0361,846 61,84661,845.9061,8453.70 × 10 1 61,84661,845.3261,8454.40 × 10 1 61,84561,841.0061,8401.38 61,84661,845.3361,8451.62 × 10 3
KP0477,033 77,03377,033.0077,0330.00 77,03377,032.9277,0321.50 × 10 1 77,03377,031.0077,0313.10 × 10 1 77,03377,032.7777,0321.30 × 10 3
KP05102,316 102,316102,315.99102,31652.00 × 10 2 102,316102,316.00102,3160.00 102,316102,314.00102,3131.11 102,316102,315.53102,3141.96 × 10 3
KP0635,069 35,06935,069.0035,0690.00 35,06935,069.0035,0690.00 35,06935,06735,0641.47 35,06935,069.0035,0690.00
KP0743,786 43,78643,786.0043,7860.00 43,78643,785.9643,7858.00 × 10 2 43,78643,784.0043,7811.34 43,78643,786.00437860.00
KP0853,553 53,55353,552.4853,5525.00 × 10 1 53,55353,552.0253,5524.00 × 10 2 53,55253,552.0053,5520.00 53,55353,552.1053,5521.87 × 10 3
KP0965,710 65,71065,709.1165,7092.00 × 10 1 65,70965,709.0065,7090.00 65,71065,709.0065,7085.80 × 10 1 65,71065,709.1365,7091.52 × 10 3
KP10118,200 118,200118,200.00118,2000.00 108,200108,200.00108,2000.00 108,200108,200.00108,2000.00 118,200118,200.00118,2000.00
KP1140,167 40,16740,167.0040,1670.00 40,16740,167.0040,1670.00 40,16740,167.0040,1661.40 × 10 1 40,16740,167.0040,1670.00
KP1249,443 49,44349,443.0049,4430.00 49,44349,443.0049,4430.00 49,43349,432.0049,4332.49 49,44349,443.0049,4430.00
KP1360,640 60,64060,640.0060,6400.00 60,64060,640.0060,6400.00 60,64060,640.0060,6391.40 × 10 1 60,64060,640.0060,6400.00
KP1474,932 74,93274,932.0074,9320.00 74,93274,932.0074,9320.00 74,93274,932.0074,9312.70 × 10 1 74,93274,932.00749320.00
KP1599,683 99,68399,683.0099,6830.00 99,68399,683.0099,6830.00 99,68399,682.0099,6722.23 99,68399,683.0099,6830.00
order 15121410 1391210 13122 15111311
rank 1 3 3 1

5. Competitiveness of the LBTA Algorithm

To assess the competitive performance of the LBTA algorithm, we reviewed a broad range of references and selected two widely used benchmark sets from recent studies. The first set comprises the three types of 0–1 KPs described in the previous section. The second set, more frequently used in recent references, was sourced from [25] and includes instances with sizes ranging from 100 to 6400. The results of the compared algorithms were taken directly from the original literature. In all experiments, the maximum number of generations was set to 40,000, and each instance was evaluated over 50 independent runs. The performance of the various algorithms was assessed using the best, worst, and average solution values and the standard deviation (SD). The optimal values are emphasized in bold.

5.1. Competitiveness of the LBTA Algorithm on the First Set of Instances

In this subsection, the performance of LBTA was evaluated against three state-of-the-art metaheuristics: the CMBO algorithm [19], NMs [30], and HGGA [37]. The comparisons are presented in Table 2. The results show that HGGA and LBTA outperform NMs and CMBO. Regarding the best solution, these two algorithms yield comparable results. NMs failed to obtain the best solution for KP01 and KP09, while CMBO did not find the best solution for KP03 and KP08; only HGGA and LBTA obtained the best solution for every instance. Regarding the worst and mean solutions and the SD, HGGA and LBTA also outperformed the other two algorithms.

5.2. Competitiveness of the LBTA Algorithm on the Second Set of Instances

First, we made a comparison on 25 small instances whose sizes vary between 8 and 24. Details of these 25 KPs were obtained from David Pisinger's research [38]. To evaluate the effectiveness and accuracy of LBTA, it is assessed against recent studies, including BEO [35], BinEHO [36], BMPA [39], BinRSA [31], BSMA [32], BHHO [34], BTSA [33], and BAOA [40]. In Table 3, the average results and gap values of each algorithm are provided. The proposed method demonstrates superior or equal performance compared with the other algorithms across all 25 problems, as indicated by the gap results. Notably, LBTA outperforms all competing algorithms, achieving the best solution in 100% of the instances. While algorithms such as BMPA, BSMA, and BEO also obtain the best solution in 24 of the 25 instances, their performance falls slightly short of LBTA on specific problems: BMPA and BEO on the P24 instance and BSMA on the P9 instance. Additionally, although BinRSA achieves the best average in 20 instances, and BinEHO, BHHO, BTSA, and BAOA secure the best results in 17 out of 25 instances, none match the consistent and optimal performance of LBTA across all test cases.
Furthermore, the LBTA algorithm was evaluated on large-scale instances from David Pisinger's optimization codes [41]. The datasets used in this study fall into three categories: datasets 1–6 are high-dimensional uncorrelated, datasets 7–12 are high-dimensional weakly correlated, and datasets 13–18 are high-dimensional strongly correlated. In Table 4, the performance of the LBTA algorithm is compared with those of IGA-SA [42], BinRSA [31], BSMA [32], BTSA [33], BAOA [40], and BHHO [34]. The results in Table 4 show that LBTA performs similarly to or better than the other algorithms in 14 out of 18 problems according to the mean results, matching or exceeding the number of optimal solutions obtained by the other algorithms. While the BinRSA algorithm achieves the same number of optimal solutions, our proposed LBTA ranks first overall when all algorithms are considered, with BinRSA ranking third behind BSMA. The results show that LBTA performs only slightly worse than BinRSA on the KP1_5000, KP2_5000, and KP3_5000 instances, and its performance on the KP1_2000 instance is marginally worse than those of BSMA, BAOA, and BHHO. However, LBTA outperforms all other algorithms in the mean results for the KP2_500 instance and matches the best value achieved by the other algorithms in 13 other instances.
In conclusion, the LBTA algorithm demonstrated strong performance across multiple large-scale instances, particularly on high-dimensional weakly correlated and strongly correlated datasets, showcasing its robust competitiveness. Although it performed slightly less effectively than certain algorithms on a few specific instances, it ranked first overall, proving the effectiveness and robustness of the LBTA algorithm in solving 0–1 KPs.

6. Conclusions

This paper presents an enhanced local search algorithm, LBTA, tailored to effectively address the 0–1 knapsack problem (KP). The LBTA algorithm employs several key mechanisms to improve performance in both solution quality and computational efficiency. By incorporating a feasible threshold list, the algorithm is able to regulate the convergence speed, ensuring that the search process balances intensification and diversification. The integration of a bit-flip mutation operator further strengthens the exploration capabilities, enabling a more comprehensive search of the solution space and reducing the likelihood of premature convergence. Moreover, the algorithm incorporates a hybrid repair operator that merges value-based and density-based greedy strategies; this operator enhances the quality of candidate solutions by guiding the search towards more promising areas of the solution space. Through this combination of strategies, LBTA provides high-quality solutions for the 0–1 knapsack problem while remaining simple to implement.
The LBTA algorithm's performance has been systematically tested on two large sets of 0–1 KP instances, and its results have been compared with those of several cutting-edge metaheuristics from the existing literature. The outcomes of these simulations reveal that LBTA is not only straightforward to implement but also remarkably efficient in solving the 0–1 knapsack problem. A limitation observed on the first set of test instances is that the performance of LBTA is slightly less stable than those of some other algorithms, which may be attributed to the random flip mutation operator used during the neighborhood search; while this operation introduces search diversity, it may also lead to a degree of blind searching. Consequently, in our next steps, we plan to integrate all operators with a status feedback mechanism. This approach will allow the adaptive selection of operators, each serving distinct purposes, based on the current evolutionary status, thereby facilitating a more balanced and effective search. Additionally, we aim to extend the LBTA algorithm to more complicated and practical real-world optimization problems, further enhancing its applicability and effectiveness in various domains.

Author Contributions

J.L. and L.W. conceived and designed the experiments; K.L. performed the experiments; J.L. and K.L. analyzed the data; X.L. contributed analysis tools; L.W. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Fujian Province of China (No. 2023J01078).

Data Availability Statement

Our research data are shared at https://github.com/LiangchengWu1/ (accessed on 17 October 2024).

Conflicts of Interest

Liangcheng Wu, Kai Lin, Xiaoyu Lin, and Juan Lin declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Tarantilis, C.D.; Kiranoudis, C.T. A list-based threshold accepting method for job shop scheduling problems. Int. J. Prod. Econ. 2002, 77, 159–171. [Google Scholar] [CrossRef]
  2. Dueck, G.; Scheuer, T. Threshold accepting: A general purpose optimization algorithm appearing superior to simulated annealing. J. Comput. Phys. 1990, 90, 161–175. [Google Scholar] [CrossRef]
  3. Tarantilis, C.D.; Ioannou, G.; Kiranoudis, C.T.; Prastacos, G.P. Solving the open vehicle routeing problem via a single parameter metaheuristic algorithm. J. Oper. Res. Soc. 2005, 56, 588–596. [Google Scholar] [CrossRef]
  4. Lee, D.S.; Vassiliadis, V.S.; Park, J.M. A novel threshold accepting meta-heuristic for the job-shop scheduling problem. Comput. Oper. Res. 2004, 31, 2199–2213. [Google Scholar]
  5. Lee, D.S.; Vassiliadis, V.S.; Park, J.M. List-based threshold-accepting algorithm for zero-wait scheduling of multiproduct batch plants. Ind. Eng. Chem. Res. 2002, 41, 6579–6588. [Google Scholar] [CrossRef]
  6. Ilhan, I.; Gökmen, G. A list-based simulated annealing algorithm with crossover operator for the traveling salesman problem. Neural Comput. Appl. 2022, 34, 7627–7652. [Google Scholar] [CrossRef]
  7. Cho, M. The knapsack problem and its applications to the cargo loading problem. Anal. Appl. Math. 2019, 48. [Google Scholar]
  8. Truong, T.K.; Li, K.; Xu, Y. Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem. Appl. Soft Comput. J. 2013, 13, 1774–1780. [Google Scholar] [CrossRef]
  9. Chu, P.C.; Beasley, J.E. A genetic algorithm for the multidimensional knapsack problem. J. Heurist. 1998, 4, 63–86. [Google Scholar] [CrossRef]
  10. Dantzig, G.B. Discrete-variable extremum problems. Oper. Res. 1957, 5, 266–288. [Google Scholar] [CrossRef]
  11. Martello, S.; Pisinger, D.; Toth, P. Dynamic programming and strong bounds for the 0-1 knapsack problem. Manag. Sci. 1999, 45, 414–424. [Google Scholar] [CrossRef]
  12. Kolesar, P.J. A branch and bound algorithm for the knapsack problem. Manag. Sci. 1967, 13, 723–735. [Google Scholar] [CrossRef]
  13. Cabot, A.V. An enumeration algorithm for knapsack problems. Oper. Res. 1970, 18, 306–311. [Google Scholar] [CrossRef]
  14. Fister, I., Jr.; Yang, X.S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv 2013, arXiv:1307.4186. [Google Scholar]
  15. Yadav, R.K.; Gupta, H.; Jhingran, H.; Agarwal, A.; Gupta, A. An Enhanced Genetic Algorithm to Solve 0/1 Knapsack Problem. Int. J. Comput. Sci. Inf. Secur. (IJCSIS) 2017, 15, 150–154. [Google Scholar]
  16. Alzaqebah, A.; Abu-Shareha, A.A. Ant Colony System Algorithm with Dynamic Pheromone Updating for 0/1 Knapsack Problem. Int. J. Intell. Syst. Appl. 2019, 11, 9. [Google Scholar] [CrossRef]
  17. Nguyen, P.; Wang, D.; Truong, T.K. A new hybrid particle swarm optimization and greedy for 0–1 knapsack problem. Indones. Electr. Eng. Comput. Sci. 2016, 1, 411–418. [Google Scholar] [CrossRef]
  18. Feng, Y.; Wang, G.G.; Deb, S.; Lu, M.; Zhao, X.J. Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2015, 28, 1619–1634. [Google Scholar] [CrossRef]
  19. Feng, Y.; Yang, J.; Wu, C.; Lu, M.; Zhao, X.J. Solving 0-1 knapsack problems by chaotic monarch butterfly optimization algorithm with Gaussian mutation. Memetic Comput. 2016, 10, 135–150. [Google Scholar] [CrossRef]
  20. Feng, Y.; Wang, G.G.; Dong, J.; Wang, L. Opposition-based learning monarch butterfly optimization with Gaussian perturbation for large-scale 0-1 knapsack problem. Comput. Electr. Eng. 2018, 67, 454–468. [Google Scholar] [CrossRef]
  21. Bhattacharjee, K.K.; Sarmah, S.P. Shuffled frog leaping algorithm and its application to 0/1 knapsack problem. Appl. Soft Comput. 2014, 19, 252–263. [Google Scholar] [CrossRef]
  22. Ali, I.M.; Essam, D.; Kasmarik, K. An efficient differential evolution algorithm for solving 0–1 knapsack problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  23. Truong, T.K.; Li, K.; Xu, Y.; Ouyang, A.; Nguyen, T.T. Solving 0-1 knapsack problem by artificial chemical reaction optimization algorithm with a greedy strategy. J. Intell. Fuzzy Syst. 2015, 28, 2179–2186. [Google Scholar] [CrossRef]
  24. Feng, Y.; Wang, G.G.; Gao, X.Z. A Novel Hybrid Cuckoo Search Algorithm with Global Harmony Search for 0-1 Knapsack Problems. Int. J. Comput. Intell. Syst. 2016, 9, 1174–1190. [Google Scholar] [CrossRef]
  25. Kong, X.; Gao, L.; Ouyang, H.; Li, S. A simplified binary harmony search algorithm for large scale 0-1 knapsack problems. Expert Syst. Appl. 2015, 42, 5337–5355. [Google Scholar] [CrossRef]
  26. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  27. Omran, M.G.; Mahdavi, M. Global-best harmony search. Appl. Math. Comput. 2008, 198, 643–656. [Google Scholar] [CrossRef]
  28. Wang, C.M.; Huang, Y.F. Self-adaptive harmony search algorithm for optimization. Expert Syst. Appl. 2010, 37, 2826–2837. [Google Scholar] [CrossRef]
  29. Zhan, S.H.; Zhang, Z.J.; Wang, L.J.; Zhong, Y.W. List-Based Simulated Annealing Algorithm with Hybrid Greedy Repair and Optimization Operator for 0-1 Knapsack Problem. IEEE Access 2018, 6, 54447–54458. [Google Scholar] [CrossRef]
  30. Zhan, S.; Wang, L.; Zhang, Z.; Zhong, Y. Noising methods with hybrid greedy repair operator for 0–1 knapsack problem. Memetic Comput. 2019, 12, 37–50. [Google Scholar] [CrossRef]
  31. Ervural, B.; Hakli, H. A binary reptile search algorithm based on transfer functions with a new stochastic repair method for 0–1 knapsack problems. Comput. Ind. Eng. 2023, 178, 109080. [Google Scholar] [CrossRef]
  32. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K.; Ryan, M.J. BSMA: A novel metaheuristic algorithm for multi-dimensional knapsack problems: Method and comprehensive analysis. Comput. Ind. Eng. 2021, 159, 107469. [Google Scholar] [CrossRef]
  33. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  34. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  35. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S. A binary equilibrium optimization algorithm for 0–1 knapsack problems. Comput. Ind. Eng. 2021, 151, 106946. [Google Scholar] [CrossRef]
  36. Hakli, H. BinEHO: A new binary variant based on elephant herding optimization algorithm. Neural Comput. Appl. 2020, 32, 16971–16991. [Google Scholar] [CrossRef]
  37. Chen, Z.; Zhong, Y.; Lin, J. Hybrid greedy Genetic Algorithm for solving 0-1 knapsack problem. J. Comput. Appl. 2021, 41, 87. [Google Scholar]
  38. Pisinger, D. Where are the hard knapsack problems? Comput. Oper. Res. 2005, 32, 2271–2284. [Google Scholar]
  39. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.; Mirjalili, S. New binary marine predators optimization algorithms for 0–1 knapsack problems. Comput. Ind. Eng. 2021, 151, 106949. [Google Scholar] [CrossRef]
  40. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  41. Pisinger, D. Instances of 0/1 Knapsack Problem. 2018. Available online: https://github.com/likr/kplib (accessed on 17 October 2024).
  42. Ezugwu, A.E.; Pillay, V.; Hirasen, D.; Sivanarain, K.; Govender, M. A comparative study of meta-heuristic optimization algorithms for 0–1 knapsack problem: Some initial results. IEEE Access 2019, 7, 43979–44001. [Google Scholar] [CrossRef]
Figure 1. Performance comparison of LBTA with the different numbers of bit-flip operators.
Table 1. Characteristics of three categories of large-scale 0–1 KP instances.
Correlation | Weight w_i | Value v_i | Capacity c
Uncorrelated | rand(10, 100) | rand(10, 100) | 0.75 × sum of weights
Weakly correlated | rand(10, 100) | rand(w_i − 10, w_i + 10) | 0.75 × sum of weights
Strongly correlated | rand(10, 100) | w_i + 10 | 0.75 × sum of weights
Table 3. Comparison of the LBTA algorithm on the 25 small-scale instances of the second set.
No. | Instance | BKV | BEO (2020) | BinEHO (2020) | BMPA (2021) | BinRSA (2023) | BSMA (2021) | BHHO (2019) | BTSA (2020) | BAOA (2021) | LBTA
(each algorithm column reports Mean and Gap (%))
P1 KP_8a 3,924,400 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00 3,924,400.000.00
P2 KP_8b 3,813,669 3,813,669.000.00 3,813,669.000.00 3,813,669.000.00 3,813,669.000.00 3,813,669.000.00 3,813,669.000.00 3,801,149.003.28 × 10 1 3,810,772.007.60 × 10 2 3,813,669.000.00
P3 KP_8c 3,347,452 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00 3,347,452.000.00
P4 KP_8d 4,187,707 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00 4,187,707.000.00
P5 KP_8e 4,955,555 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00 4,955,555.000.00
P6 KP_12a 5,688,887 5,688,887.000.00 5,688,757.342.28 × 10 3 5,688,887.000.00 5,688,757.342.28 × 10 3 5,688,887.000.00 5,688,887.000.00 5,682,416.001.14 × 10 1 5,688,887.000.00 5,688,887.000.00
P7 KP_12b 6,498,597 6,498,597.000.00 6,498,597.000.00 6,498,597.000.00 6,498,597.000.00 6,498,597.000.00 6,496,784.002.79 × 10 2 6,498,597.000.00 6,498,597.000.00 6,498,597.000.00
P8 KP_12c 5,170,626 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00 5,170,626.000.00
P9 KP_12d 6,992,404 6,992,404.000.00 6,992,404.000.00 6,992,404.000.00 6,992,404.000.00 6,941,564.007.27 × 10 1 6,941,564.007.27 × 10 1 6,941,564.007.27 × 10 1 6,939,908.007.51 × 10 1 6,992,404.000.00
P10 KP_12e 5,337,472 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00 5,337,472.000.00
P11 KP_16a 7,850,983 7,850,983.000.00 7,850,224.609.66 × 10 3 7,850,983.000.00 7,850,983.000.00 7,850,983.000.00 7,850,983.000.00 7,831,459.002.49 × 10 1 7,850,983.000.00 7,850,983.000.00
P12 KP_16b 9,352,998 9,352,998.000.00 9,352,998.000.00 9,352,998.000.00 9,352,998.000.00 9,352,998.000.00 9,352,998.000.00 9,332,841.002.16 × 10 1 9,352,998.000.00 9,352,998.000.00
P13 KP_16c 9,151,147 9,151,147.000.00 9,151,147.000.00 9,151,147.000.00 9,151,147.000.00 9,151,147.000.00 9,140,752.501.14 × 10 1 9,151,147.000.00 9,145,012.506.70 × 10 2 9,151,147.000.00
P14 KP_16d 9,348,889 9,348,889.000.00 9,345,961.483.13 × 10 2 9,348,889.000.00 9,348,889.000.00 9,348,889.000.00 9,348,889.000.00 9,348,889.000.00 9,348,889.000.00 9,348,889.000.00
P15 KP_16e 7,769,117 7,769,117.000.00 7,766,509.363.36 × 10 2 7,769,117.000.00 7,769,117.000.00 7,769,117.000.00 7,769,117.000.00 7,762,520.008.49 × 10 2 7,769,117.000.00 7,769,117.000.00
P16 KP_20a 10,727,049 10,727,049.000.00 10,727,049.000.00 10,727,049.000.00 10,727,049.000.00 10,727,049.000.00 10,727,049.000.00 10,727,049.000.00 10,716,101.001.02 × 10 1 10,727,049.000.00
P17 KP_20b 9,818,261 9,818,261.000.00 9,818,261.000.00 9,818,261.000.00 9,818,261.000.00 9,818,261.000.00 9,818,261.000.00 9,814,002.004.34 × 10 2 9,818,261.000.00 9,818,261.000.00
P18 KP_20c 10,714,023 10,714,023.000.00 10,713,587.704.06 × 10 3 10,714,023.000.00 10,713,993.982.71 × 10 4 10,714,023.000.00 10,713,149.008.16 × 10 3 10,714,023.000.00 10,714,023.000.00 10,714,023.000.00
P19 KP_20d 8,929,156 8,929,156.000.00 8,929,156.000.00 8,929,156.000.00 8,928,880.803.08 × 10 3 8,929,156.000.00 8,924,769.304.91 × 10 2 8,929,156.000.00 8,927,679.401.65 × 10 2 8,929,156.000.00
P20 KP_20e 9,357,969 9,357,969.000.00 9,357,751.442.32 × 10 3 9,357,969.000.00 9,357,953.461.66 × 10 4 9,357,969.000.00 9,357,969.000.00 9,357,969.000.00 9,357,969.000.00 9,357,969.000.00
P21 KP_24a 13,549,094 13,549,094.000.00 13,546,986.141.56 × 10 2 13,549,094.000.00 13,549,094.000.00 13,549,094.000.00 13,546,249.002.10 × 10 2 13,549,094.000.00 13,547,058.001.50 × 10 2 13,549,094.000.00
P22 KP_24b 12,233,713 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00 12,233,713.000.00
P23 KP_24c 12,448,780 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00 12,448,780.000.00
P24 KP_24d 11,815,315 11,814,367.008.02 × 10 3 11,810,682.683.92 × 10 2 11,811,630.203.12 × 10 2 11,814,367.488.02 × 10 3 11,815,315.000.00 11,814,208.009.37 × 10 3 11,814,108.001.02 × 10 2 11,814,072.001.05 × 10 2 11,815,315.000.00
P25 KP_24e 13,940,099 13,940,099.000.00 13,940,099.000.00 13,940,099.000.00 13,940,099.000.00 13,940,099.000.00 13,933,743.004.56 × 10 2 13,940,099.000.00 13,938,971.008.09 × 10 3 13,940,099.000.00
order 24 17 24 20 24 17 17 17 25
rank 2 9 4 5 3 7 8 6 1
Table 4. Comparison of the LBTA algorithm on the 18 large-scale instances of the second set.
Instance | BKV | IGA-SA (2019) | BinRSA (2023) | BSMA (2021) | BTSA (2020) | BAOA (2021) | BHHO (2019) | LBTA
(each algorithm column reports Mean and Gap (%))
KP1_1009147 8575.006.25 9147.000.00 9147.000.00 9147.000.00 9147.000.00 9147.000.00 9147.000.00
KP1_20011,238 8576.0023.69 11,238.000.00 11,238.000.00 11,238.000.00 11,238.000.00 11,238.000.00 11,238.000.00
KP1_50028,857 12,072.0058.17 28,857.000.00 28,857.000.00 28,849.000.03 28,852.000.02 28,850.000.02 28,857.000.00
KP1_100054,503 14,563.0073.28 54,503.000.00 54,503.000.00 53,301.002.21 53,574.001.70 53,396.002.03 54,503.000.00
KP1_2000110,625 27,645.0075.01 110,587.550.03 110,625.000.00 95,619.0013.56 110,625.000.00 110,625.000.00 110,615.400.01
KP1_5000276,457 49,306.0082.17 276453.60.00 274,569.000.68 272,923.001.28 274,385.000.75 273,961.000.90 276,438.470.01
KP2_1001514 1217.0019.62 1514.000.00 1514.000.00 1514.000.00 1514.000.00 1514.000.00 1514.000.00
KP2_2001634 1347.0017.56 1634.000.00 1634.000.00 1634.000.00 1634.000.00 1634.000.00 1634.000.00
KP2_5004566 3038.0033.46 4557.50.19 4565.300.02 4537.200.63 4549.400.36 4518.701.04 4565.300.02
KP2_10009052 5435.0039.96 9051.050.01 9052.000.00 8346.807.79 9052.000.00 8835.602.39 9052.000.00
KP2_200018,051 10,938.0039.41 18,046.750.02 17,557.002.74 14,902.0017.45 15,885.0012.00 15,729.0012.86 18,050.200.00
KP2_500044,356 27,387.0038.26 44,353.150.01 44,312.000.10 42,972.003.12 44,206.000.34 43,276.002.43 44,352.830.01
KP3_1002397 1481.0038.21 2397.000.00 2397.000.00 2395.200.08 2396.500.02 2396.400.03 2397.000.00
KP3_2002697 1495.0044.57 2697.000.00 2697.000.00 2692.100.18 2695.100.07 2694.700.09 2697.000.00
KP3_5007117 3412.0052.06 7117.000.00 7117.000.00 6999.501.65 7117.000.00 7117.000.00 7117.000.00
KP3_100014,390 5589.0061.16 14,390.000.00 14,390.000.00 14,191.001.38 14,279.000.77 14,225.001.15 14,390.000.00
KP3_200028,919 10,818.0062.59 28,919.000.00 28,919.000.00 28,919.000.00 28,597.001.11 28,657.000.91 28,919.000.00
KP3_500072,505 27,304.0062.34 72,505.000.00 72,075.000.59 71,018.002.05 71,579.001.28 71,826.000.94 72,487.830.02
order 0 14 13 5 7 6 14
rank 7 3 2 6 4 5 1