Article

Modeling and Solving the Knapsack Problem with a Multi-Objective Equilibrium Optimizer Algorithm Based on Weighted Congestion Distance

1
College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
2
Research Institute of Forestry Policy and Information, Chinese Academy of Forestry, Beijing 100091, China
3
College of Mathematics and Physics, Southwest Forestry University, Kunming 650224, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(22), 3538; https://doi.org/10.3390/math12223538
Submission received: 30 September 2024 / Revised: 8 November 2024 / Accepted: 8 November 2024 / Published: 12 November 2024

Abstract

The knapsack problem is a typical bi-objective combinatorial optimization problem, in which the value of the packed items is maximized while the weight of the load is simultaneously minimized. Because the objectives of the knapsack problem conflict and the items involved are inherently discrete, swarm intelligence algorithms are commonly employed. The diversity of optimal combinations in the knapsack problem has become a focal point; it involves finding multiple packing solutions with the same value and weight. For this purpose, this paper proposes a Multi-Objective Equilibrium Optimizer Algorithm based on Weighted Congestion Distance (MOEO_WCD). The algorithm employs a non-dominated sorting method to find a set of Pareto front solutions rather than a single optimal solution, offering multiple decision-making options based on the varying needs of decision-makers. Additionally, MOEO_WCD improves the equilibrium pool generation mechanism and incorporates a weighted congestion (crowding) distance mechanism, emphasizing the diversity of packing combination solutions under the objectives of value and weight in order to explore more Pareto front solutions. Considering the discrete characteristics of the knapsack combinatorial optimization problem, our algorithm also incorporates appropriate discrete constraint handling. This paper designs multiple sets of multi-objective knapsack combinatorial optimization problems based on the number of knapsacks, the number of items, and the weights and values of the items, and compares MOEO_WCD with five algorithms suitable for solving multi-objective problems: MODE, MO_PSO_MM, MO_Ring_PSO_SCD, NSGA-II, and DN_NSGA-II. To evaluate the performance of the algorithms, this paper proposes a new solution-set coverage index; the hypervolume (HV) indicator is also used to evaluate the diversity of the algorithms. The results show that our MOEO_WCD algorithm achieves the optimal coverage of the reference composite Pareto front in the decision space for four sets of knapsack problems. Although our MOEO_WCD algorithm covers less of the composite front in the objective space than the MODE algorithm for Knapsack Problem 1, its coverage of the integrated reference solutions in the decision space is greater than that of the MODE algorithm. The experiments demonstrate the superior performance of the MOEO_WCD algorithm on bi-objective knapsack combinatorial optimization problem instances, providing an important approach to the search for solution diversity in multi-objective combinatorial optimization.

1. Introduction

The classic bi-objective knapsack problem involves selecting a subset of items from a given set to place into a knapsack with the aim of maximizing value while minimizing weight [1]. The key challenge of the problem is how to select the optimal items from a limited set under the constraint of knapsack capacity to achieve those objectives. The knapsack problem has widespread applications in real life, such as in logistics transportation, warehouse management, and resource allocation [1].
Traditional methods for solving the knapsack problem primarily include dynamic programming and greedy algorithms, each of which has its own advantages and limitations. Dynamic programming is a commonly used method for solving the knapsack problem, but its space complexity can lead to excessive memory consumption, especially when dealing with large-scale problems [2]. Greedy algorithms, due to their strategy of selecting locally optimal solutions, cannot guarantee achieving the global optimum [3]. Traditional methods for solving such problems are therefore relatively complex, requiring the introduction of more sophisticated algorithms or transformations of the problem.
At present, swarm intelligence algorithms have been applied to find optimal knapsack solutions [4]. With strong global search capabilities, they can explore the solution space from multiple directions simultaneously, allowing them to adapt to changes in dynamic environments [5]. This characteristic is key to the dynamic search aspect of the knapsack problem, as the algorithms can relatively easily adapt to changes in the packing process. For different types of knapsack problems, suitable swarm intelligence algorithms can be selected [4], or packing strategies can be optimized. Abdellah Rezoug et al. used a heuristic genetic algorithm (GA) to solve the multidimensional knapsack problem, employing an efficiency-based approach to perform a pre-analysis of the problem data in order to extract useful information. This prior knowledge was integrated into the genetic algorithm as a guide in two phases: the generation of the initial population and the evaluation of the offspring through the fitness function. This approach improved the efficiency and accuracy of the algorithm; however, the computational complexity of the genetic algorithm is high, requiring a significant amount of time when solving large-scale problems [6]. Young-Bin Shin et al. utilized an improved particle swarm optimization (PSO) algorithm to solve the two-dimensional knapsack problem, enhancing the convergence speed and solution quality by adjusting the particles’ velocity and position. However, they found that the PSO algorithm tended to become trapped in local optima when handling high-dimensional and complex knapsack problems, and that the choice of parameters significantly affects the results [7]. Min Kong et al. proposed an ant colony optimization (ACO) algorithm to solve the knapsack problem. To prevent premature convergence, their ACO algorithm incorporated pheromone reinitialization and adopted different pheromone reinforcement strategies based on the convergence state of the algorithm. However, the algorithm requires a long computation time in the initial phase and performs poorly when handling dynamic knapsack problems [8]. Ravneil Nand et al. used the Firefly Algorithm (FA) to solve the multidimensional knapsack problem, improving solution stability and accuracy by leveraging the characteristics of firefly flashing behavior and genetic evolution. During the study, it was found that the FA incurs high computational time and memory consumption when handling large-scale knapsack problems [9].
As research has progressed, finding more solutions to multi-objective knapsack problems has gradually gained attention. This paper proposes an improved Equilibrium Optimizer, MOEO_WCD, to explore the Pareto front and find more solutions to multi-objective knapsack combinatorial optimization problems.
Section 2 of this paper provides an overview of the current research on the knapsack problem. Section 3 formulates the multi-objective knapsack combinatorial optimization problem addressed in this study and its objective functions. Section 4 details the framework of the proposed MOEO_WCD algorithm. Section 5 presents the experimental results of applying the MOEO_WCD algorithm to the knapsack combinatorial optimization problem. Finally, Section 6 provides a comprehensive discussion of the results and implications of this study.

2. Related Work

The Pareto frontier is an important indicator for the study of the diversity of solutions of intelligent algorithms in multi-objective problems [10]. Imen Ben Mansour et al. evaluated the quality of solutions by measuring the size of the region dominated by the obtained Pareto front using the hypervolume difference indicator. The larger the hypervolume value, the better the quality of the obtained non-dominated set. Additionally, the Mann–Whitney non-parametric statistical test was employed to examine whether the differences between algorithms were statistically significant. Using these methods, the performance of different algorithms can be compared to determine which algorithm performs better in solving specific problem instances [11]. Van Veldhuizen et al. evaluated the quality of solution sets by calculating the distance between the solution sets obtained by the algorithm and the known or theoretical Pareto front. Common metrics include GD (Generational Distance) and IGD (Inverted Generational Distance) [12]. Kalyanmoy Deb et al. evaluated the diversity of solutions by assessing whether the distribution of the solution set in the objective space is uniform. Common metrics used for this evaluation include Spread, Spacing, and Entropy [13]. These methods provide different perspectives and criteria for evaluating multi-objective optimization algorithms, helping researchers objectively analyze and compare the performance of various algorithms in solving different problems.
Due to the powerful global search capability and adaptability to dynamic environments of multi-objective swarm intelligence algorithms, their application in solving multi-objective problems offers significant advantages [14]. The NSGA-II algorithm proposed by Kalyanmoy Deb et al. has demonstrated its effectiveness in multi-objective optimization problems, with a primary focus on efficiently finding a balanced Pareto front in a continuous solution space to simultaneously optimize multiple objective functions [13]. Xin-She Yang et al. found that the Firefly Algorithm (FA) has high search efficiency and solution diversity, making it suitable for solving nonlinear and multimodal optimization problems. However, it has higher computational time and memory consumption, and its performance can be unstable on noisy datasets, being susceptible to the influence of the initial population [15]. Christian Blum et al. found that the Ant Colony Optimization (ACO) algorithm performs exceptionally well in solving discrete combinatorial optimization problems, demonstrating strong robustness and adaptability. However, it has high computational complexity, slow convergence in the initial stages, and unstable performance when dealing with dynamic environments [16].
When using swarm intelligence algorithms to solve the knapsack problem, the algorithm needs to be appropriately modified according to the specific characteristics of the knapsack problem to find more and better solutions. In order to seek solution set diversity for the multi-objective knapsack problem, Mahrach et al. compared multi-objective algorithms and single-objective algorithms. The performance of the multi-objective algorithms was evaluated primarily based on the diversity and convergence of the Pareto front solutions. In addressing the knapsack problem and the traveling salesman problem, it was found that MOEAs focus on finding a set of Pareto front solutions rather than a single optimal solution. Furthermore, the introduction of diversity maintenance mechanisms effectively prevents the algorithm from becoming trapped in local optima, enabling efficient search while maintaining solution set diversity [17]. However, the authors only focused on the basic 0/1 knapsack problem, lacking exploration of more complex knapsack problems. To enhance the effectiveness of solutions in the discrete space of the knapsack problem, Djaafar Zouache et al. proposed a new cooperative swarm intelligence algorithm (MOFPA). This algorithm combines the Firefly Algorithm (FA) and Particle Swarm Optimization (PSO), using dominance relations to manage the size of the external archive, ensuring the convergence and diversity of the Pareto optimal solution set. Additionally, a tanh transfer function was introduced to map the continuous search space of FA and PSO into a discrete search space. This discretization method ensures the adaptability and diversity of the algorithm in the discrete solution space of the knapsack problem. However, maintaining population diversity remains a challenge: if the population loses diversity too early, the algorithm may become trapped in local optima, preventing it from finding the global optimal solution [18]. To ensure the uniformity of the distribution of the solution set in the entire objective space and the diversity of the algorithm, Deb et al. introduced the calculation method of crowding distance, drawing on the non-dominated sorting and crowding distance mechanisms of NSGA-II, to maintain the diversity of the solution set [19]. To maintain algorithm diversity, Hernández Castellanos et al. proposed the NEDEA algorithm, which introduces a non-epsilon dominance mechanism. The algorithm exhibits better diversity and convergence when dealing with approximate solution sets and high-dimensional multi-objective optimization problems, improving the quality and applicability of the solution set [20]. To further improve the uniformity and extensiveness of the solution set distribution and ensure effective convergence of the solution set to the Pareto front, Zhanguo Li et al. proposed a MOEA method based on a clustering mechanism. This approach utilizes clustering techniques to calculate and maintain the distribution and diversity of the solution set. The fuzzy C-means clustering algorithm was used to cluster individuals, and LMOEA was applied to solve the classic multi-objective knapsack problem. Convergence and diversity metrics were used to evaluate the algorithm’s performance. Compared with the classic NSGA-II and MOEA/D, this algorithm achieved significant improvements in both convergence and population diversity.
Although the algorithm enhanced population diversity through the clustering mechanism, maintaining diversity remains a challenge when the problem size is large, potentially leading to uneven solution distribution or local convergence issues [21].
As research on solving the knapsack problem using swarm intelligence algorithms deepens, researchers have found that when the solution space for the multi-objective knapsack combinatorial optimization problem is discrete, the search process in the discrete solution space is not as smooth as in the continuous solution space. Consequently, finding high-quality or near-optimal solutions may become more challenging. At the same time, the core challenge of multi-objective optimization lies in the conflict between objectives, where optimizing one objective can lead to a decline in the performance of other objectives. This trade-off between objectives makes it difficult to find a solution that optimizes all objectives simultaneously. In multi-objective optimization, it is not only necessary to find solutions close to the Pareto front but also to ensure that these solutions are well-distributed in the objective space, i.e., maintain diversity.
For the bi-objective knapsack combinatorial optimization problem focusing on value and weight, this study proposes a multi-objective equilibrium optimizer algorithm based on weighted crowding distance (MOEO_WCD). The Bonferroni–Dunn and Holm’s tests over all functions showed that EO is a significantly better algorithm than PSO, GWO, GA, GSA, SSA, and CMA-ES, while its performance is statistically similar to SHADE and LSHADE-SPACMA [22]. Therefore, this paper builds its improvement on the equilibrium optimizer algorithm, utilizing non-dominated sorting to divide solutions into different Pareto fronts and aiming to find a Pareto optimal solution set that represents trade-offs between different objectives. Based on this, the algorithm adjusts its optimization characteristics by setting the crowding distance weights. Additionally, the algorithm improves the equilibrium pool random concentration generation mechanism and utilizes a weighted crowding distance mechanism to preserve the found solution set, improving the population diversity of the algorithm. At the same time, by applying discrete constraints to the population, the algorithm can adapt to solving multi-objective packing and combination optimization problems. In this study, nine sets of simulation experiments were conducted on the multi-objective packing and combination optimization problem. The results of each set of problems under the nine packing scenarios were integrated to form a composite Pareto front. The coverage of the composite Pareto front in the decision space was compared across different algorithms, including MODE, MO_PSO_MM, MO_Ring_PSO_SCD, NSGA-II, and DN_NSGA-II, along with the coverage of each algorithm’s Pareto front in the objective space [13,23,24,25]. The results show that the proposed algorithm demonstrates certain advantages in solving bi-objective packing and combination optimization problems, providing a new approach for solving such problems.

3. Multi-Objective Knapsack Combination Optimization Problem

The mathematical formulation of the multi-objective knapsack combinatorial optimization problem can be expressed as follows: Given $N$ items and $M$ knapsacks, each item $i$ has two attributes, weight $w_i$ and value $v_i$. The total weight in knapsack $m$ is $W_m$, and the total value is $V_m$. The objective is to select a set of items to place into multiple knapsacks without exceeding the weight limit of each knapsack, such that the total value $V$ of the $M$ knapsacks is maximized and the total weight $W$ is minimized.
$$W_m = \sum_{i=1}^{N_m} w_i, \quad m = 1, 2, \dots, M.$$
The total weight $W_m$ of the $N_m$ items loaded into knapsack $m$, where $N_m \in [0, N]$.
$$V_m = \sum_{i=1}^{N_m} v_i.$$
The total value $V_m$ of the $N_m$ items loaded into knapsack $m$.
$$W = \sum_{m=1}^{M} W_m = W_1 + W_2 + \dots + W_M.$$
The total weight $W$ of the items loaded into the $M$ knapsacks.
$$V = \sum_{m=1}^{M} V_m = V_1 + V_2 + \dots + V_M.$$
The total value $V$ of the items loaded into the $M$ knapsacks.
$$L = \sum_{m=1}^{M} N_m,$$
where $L \le N$.

3.1. Objective Functions

In this paper, the objective functions of the bi-objective combinatorial optimization problem are mathematically expressed, where $x$ represents the decision variable and $x_{im}$ denotes that item $i$ is placed in knapsack $m$. Subject to the knapsack weight constraints, the objective is to maximize the packing value $f_V(x)$ and minimize the packing weight $f_W(x)$, where $N$ is the number of items, $w_i$ is the weight of item $i$, and $v_i$ is the value of item $i$.
Objective 1: Sum the weights of the items within each corresponding knapsack and compute the total weight of the packed items across all knapsacks.
$$\min\ f_W(x) = \sum_{m=1}^{M} \sum_{i=1}^{N_m} |x_{im}|\, w_i,$$
$$x_{im} = m,$$
$$|x_{im}| = \begin{cases} 1, & x_{im} > 0 \\ 0, & x_{im} = 0. \end{cases}$$
Here, $x_{im} = m$ indicates that item $i$ is packed into knapsack $m$. When $|x_{im}| = 0$, item $i$ is not included in any knapsack and its weight is not considered. When $|x_{im}| = 1$, item $i$ is included in knapsack $m$, and its weight is added to the knapsack’s total weight.
Objective 2: Sum the values of the items within each knapsack and then aggregate the values across all knapsacks to calculate the total value of the packing scheme.
$$\max\ f_V(x) = \sum_{m=1}^{M} \sum_{i=1}^{N_m} |x_{im}|\, v_i.$$
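As an illustration of the two objective functions, the following minimal Python sketch evaluates one packing scheme, assuming the decision vector encodes each item’s knapsack index (0 meaning the item is left out); the function and variable names are illustrative and not taken from the authors’ implementation.

```python
import numpy as np

def knapsack_objectives(x, weights, values):
    """Evaluate one packing scheme.

    x[i] = m (1..M) means item i is packed into knapsack m; x[i] = 0 means
    item i is left out. Returns (f_W(x), f_V(x)) = (total weight, total value).
    """
    x = np.asarray(x)
    packed = x > 0                                    # indicator |x_im| = 1 when the item is packed
    total_weight = np.asarray(weights)[packed].sum()  # f_W(x), to be minimized
    total_value = np.asarray(values)[packed].sum()    # f_V(x), to be maximized
    return int(total_weight), int(total_value)

# Item data of Knapsack Problem 1 (Table 1) and Option 1 found by MOEO_WCD (Table 6):
# items 1,2,4,6 -> knapsack 1; items 3,8,11 -> knapsack 2; items 5,7,12 -> knapsack 3.
weights = [5, 3, 7, 2, 6, 4, 9, 8, 1, 10, 2, 3]
values = [10, 8, 6, 9, 5, 7, 3, 4, 2, 1, 4, 8]
x = [1, 1, 2, 1, 3, 1, 3, 2, 0, 0, 2, 3]
print(knapsack_objectives(x, weights, values))  # (49, 64)
```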

3.2. Constraint Condition

The constraints in this paper require that the packing scheme must not exceed the capacity of each knapsack. The constraint is expressed by the formula,
$$W_m \le W_m^{\max}.$$
In the formula, $W_m^{\max}$ represents the maximum packing weight (capacity) of knapsack $m$, and $W_m$ represents the weight of the items loaded into knapsack $m$.

3.3. Evaluation Metric

Due to the difficulty in obtaining the true Pareto front for multi-objective packing problems, this paper constructs a composite Pareto set (composite PS) to objectively and effectively evaluate the capability of the algorithms in finding solutions. This set integrates all solutions obtained from the swarm intelligence algorithms used in this study, with duplicate solutions removed. An evaluation metric, $ps_{cover}$, is proposed, which represents the ratio of the solutions found by a swarm intelligence algorithm, $ps_{algorithm}$, to the composite Pareto set (composite PS), in order to measure the algorithm’s capability in identifying Pareto front solutions.
$$ps_{cover} = \frac{ps_{algorithm}}{composite\ PS} \times 100\%.$$
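A minimal sketch of this coverage computation is given below, assuming the ratio is taken over the numbers of duplicate-free decision vectors; the helper name is illustrative.

```python
def ps_cover(algorithm_solutions, composite_ps):
    """Percentage of the composite Pareto set that the algorithm also found.

    Both arguments are collections of decision vectors (sequences of knapsack
    indices); duplicates are removed before counting.
    """
    found = set(map(tuple, algorithm_solutions))
    reference = set(map(tuple, composite_ps))
    return 100.0 * len(found & reference) / len(reference)

# usage sketch with two tiny solution sets
composite = [(1, 1, 2), (0, 2, 1), (2, 0, 1)]
algo = [(1, 1, 2), (2, 0, 1)]
print(ps_cover(algo, composite))  # 66.66...
```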
As shown in Figure 1, the black pentagrams represent the composite PS, the blue circles indicate the Pareto front solutions found by Algorithm 1, and the red circles denote the Pareto front solutions found by Algorithm 2. The figure clearly illustrates the coverage of the composite Pareto front by Algorithms 1 and 2.

4. Multi-Objective Knapsack Combination Optimization Based on MOEO-WCD Algorithm

4.1. Equilibrium Optimizer

The Equilibrium Optimizer (EO) is an optimization algorithm inspired by the physical phenomenon of controlled mass balance within a control volume. The principle of the algorithm is shown in Figure 2: EO is a physics-inspired heuristic algorithm based on the strong mixing dynamics of mass balance in a control volume. The mass balance equation reflects the physical processes of mass entering, leaving, and being generated within the control volume. The EO algorithm demonstrates fast and accurate optimization performance on single-objective problems with continuous solution spaces. However, when faced with the demands of multi-objective optimization, the algorithm falls short of meeting those requirements. Figure 3 shows the overall flowchart of the EO algorithm.

4.2. MOEO-WCD Algorithm Process

Due to the fast and accurate optimization performance of the Equilibrium Optimizer (EO) algorithm in single-objective optimization problems, the algorithm is designed primarily for single-objective tasks. To address multi-objective problems while leveraging the advantages of EO, this paper proposes a multi-objective Equilibrium Optimizer algorithm based on weighted crowding distance (MOEO_WCD). In Figure 4, for the multi-objective knapsack combination optimization problem, we introduced a non-dominated sorting method to rank and classify the solution set based on the trade-offs between the objectives of knapsack weight and knapsack value. This allows for the identification and preservation of the Pareto front for the multi-objective optimization problem. We also introduced a random concentration generation strategy to enhance the diversity of the solutions obtained by the algorithm. Additionally, a weighted crowding distance was incorporated, allowing the adjustment of crowding distance weights in the decision space and objective space to meet the search requirements at different stages, thereby improving the adaptability of the algorithm.
At the same time, the algorithm design accommodates the discrete nature of the knapsack problem. MOEO_WCD ensures that the discrete population is not disrupted during the update process.

4.3. Random Concentration Knapsack Generation Mechanism in Equilibrium Pool

In solving the knapsack problem, this mechanism enhances the diversity of the obtained solutions, meaning that multiple decision solutions correspond to the same position on the Pareto front. In other words, it enables finding multiple distinct knapsack solutions that achieve the same packing objective result.
To balance exploration and exploitation capabilities and enhance diversity in both the decision space and objective space—meaning that multiple distinct knapsack solutions can be found for the same packing objective result—adaptive generation of individual vectors is employed. MOEO_WCD introduces a balanced pool random concentration generation mechanism. By using three different random concentration generation mechanisms, the EO-balanced pool particles are diversified, laying the foundation for solution diversity.
Strategy 1: Randomly select five neighbors from the entire population as the concentration.
$$C_{eq,pool} = \left\{ C_{neighbour1}, C_{neighbour2}, C_{neighbour3}, C_{neighbour4}, C_{neighbour5}, C_{eq\_ave} \right\}$$
where the concentration $C_{eq\_ave}$ is the average value of the five neighbouring solutions:
$$C_{eq\_ave} = \frac{C_{neighbour1} + C_{neighbour2} + C_{neighbour3} + C_{neighbour4} + C_{neighbour5}}{5}.$$
Strategy 2: To enhance the exploration capability of the algorithm, select solutions that are farther apart in the decision space. This can increase population diversity and help avoid local optima. Based on the Manhattan distance between the current individual and the remaining solutions in the population, select a certain number of neighbors. Then, from the selected neighbors, randomly choose five concentrations, and select the neighbor with the maximum crowding distance in the decision space as the concentration:
$$C_{eq,pool} = \left\{ C_{DMneighbour1}, C_{DMneighbour2}, C_{DMneighbour3}, C_{DMneighbour4}, C_{DMneighbour5}, C_{eq\_ave} \right\}$$
where $DMneighbour$ denotes a neighbour selected by Manhattan distance in the decision space, and the concentration $C_{eq\_ave}$ is the average value of the five neighbouring solutions:
$$C_{eq\_ave} = \frac{C_{DMneighbour1} + C_{DMneighbour2} + C_{DMneighbour3} + C_{DMneighbour4} + C_{DMneighbour5}}{5}.$$
Strategy 3: To enhance the global search capability of the algorithm and enable a broader search across different regions to find better solutions, select a certain number of neighbors based on the Manhattan distance between the current individual and the remaining solutions in the population within the objective space. Then, from the selected neighbors, randomly choose five concentrations, and select the concentrations with the maximum crowding distance in the objective space to enter the balanced pool. Under the corresponding strategy, the solutions are placed into the concentration pool for algorithm iteration.
$$C_{eq,pool} = \left\{ C_{OMneighbour1}, C_{OMneighbour2}, C_{OMneighbour3}, C_{OMneighbour4}, C_{OMneighbour5}, C_{eq\_ave} \right\}$$
where $OMneighbour$ denotes a neighbour selected by Manhattan distance in the objective space, and $C_{eq\_ave}$ is the average concentration of the five neighbouring solutions:
$$C_{eq\_ave} = \frac{C_{OMneighbour1} + C_{OMneighbour2} + C_{OMneighbour3} + C_{OMneighbour4} + C_{OMneighbour5}}{5}.$$
Calculate the distances in the decision space and objective space using Manhattan distance according to the formulas, and sort them in descending order.
$$Dist = \sum_{k=1}^{n} \left| x_{1k} - x_{2k} \right|.$$
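The sketch below illustrates, under simplifying assumptions, how such a pool of six candidate concentrations can be assembled: Strategy 1 draws five random neighbours from the whole population, while Strategies 2 and 3 would restrict the draw to Manhattan-distance neighbours in the decision or objective space (the final crowding-distance-based pick of Strategies 2 and 3 is omitted here); all names are illustrative.

```python
import numpy as np

def manhattan_farthest_ids(pop, i, k):
    """Indices of the k individuals farthest from individual i by Manhattan
    distance (the Dist formula), i.e. distances sorted in descending order."""
    dist = np.sum(np.abs(pop - pop[i]), axis=1).astype(float)
    dist[i] = -np.inf                      # exclude the individual itself
    return np.argsort(dist)[::-1][:k]

def equilibrium_pool(pop, candidate_ids, rng):
    """Five neighbour concentrations drawn from candidate_ids plus their average C_eq_ave."""
    chosen = pop[rng.choice(candidate_ids, size=5, replace=False)]
    return np.vstack([chosen, chosen.mean(axis=0)])

rng = np.random.default_rng(0)
pop = rng.integers(0, 4, size=(20, 12)).astype(float)  # 20 packing schemes, 12 items, M = 3

# Strategy 1: five random neighbours from the whole population
pool1 = equilibrium_pool(pop, np.arange(len(pop)), rng)
# Strategy 2-style draw: candidates restricted to decision-space Manhattan neighbours of individual 0
pool2 = equilibrium_pool(pop, manhattan_farthest_ids(pop, 0, k=10), rng)
```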

4.4. Weighted Crowding Distance

In the process of solving the multi-objective knapsack combination optimization problem with the MOEO_WCD algorithm, due to the varying performance of multi-objective optimization at different stages, the algorithm is required to have strong search performance in the early stages and to focus on local search and solution development in the later stages. The weighted crowding distance mechanism is employed to meet the optimization needs at different stages of multi-objective optimization.
The first step in calculating the weighted crowding distance is to perform a dominance sorting of the population. Given a population of $N$ individuals, each individual is compared with the others based on their objective values. If individual $i$ is not worse than individual $j$ on all objectives and is strictly better than $j$ on at least one objective, then individual $i$ dominates individual $j$. The dominance relationship is established through a set of comparisons across all objectives.
$$\forall k \in \{1, 2, \dots, n_{obj}\}: \quad dom_k(i, j) = \begin{cases} 1, & \text{if } f_k(i) \le f_k(j) \\ 0, & \text{otherwise} \end{cases}$$
where $f_k(i)$ represents the $k$-th objective value of individual $i$.
Based on this dominance relation, individuals are classified into different fronts. Individuals in the first front are not dominated by any other individuals; those in the second front are only dominated by individuals in the first front, and so on, until all individuals are assigned to a front.
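A compact sketch of this dominance comparison and front assignment is shown below; it is written for minimization of every objective, so the value objective of the knapsack problem would be negated before use. It is a simple O(N²)-per-front illustration, not the authors’ code.

```python
def dominates(fi, fj):
    """True if objective vector fi dominates fj (every objective to be minimized)."""
    return all(a <= b for a, b in zip(fi, fj)) and any(a < b for a, b in zip(fi, fj))

def front_assignment(objs):
    """Peel off successive non-dominated fronts: front 0 is dominated by no one,
    front 1 only by individuals of front 0, and so on."""
    remaining = set(range(len(objs)))
    front_of = [None] * len(objs)
    front = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in current:
            front_of[i] = front
        remaining -= current
        front += 1
    return front_of

# (weight, -value) pairs: weight is minimized, value is maximized and therefore negated
objs = [(49, -64), (50, -66), (50, -60), (6, -14)]
print(front_assignment(objs))  # [0, 0, 1, 0]
```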
The crowding distance of individual i in the objective space can be expressed as:
$$D_{crowd,i} = \sum_{k=1}^{n_{obj}} \left| f_k\big(neighbor_1(i)\big) - f_k\big(neighbor_2(i)\big) \right|$$
where $neighbor_1(i)$ and $neighbor_2(i)$ represent the two nearest neighbors of individual $i$ in the objective space.
Compute the crowding distance in the decision space to capture the diversity of solutions in the decision space. It measures the average distance of an individual to its k nearest neighbors in the decision space. The crowding distance of individual i in the decision space can be expressed as:
$$D_{crowd,i}^{decision} = \frac{1}{k} \sum_{j=1}^{k} d\big(i, neighbor_j(i)\big)$$
where $d(i, j)$ represents the Euclidean distance between individual $i$ and individual $j$ in the decision space, and $neighbor_j(i)$ denotes the $j$-th nearest neighbor of individual $i$.
To leverage the advantages of crowding distance in both the objective space and the decision space, this paper introduces a weighted crowding distance. The weighted crowding distance is computed as a linear combination of the crowding distances in the objective space and the decision space:
$$WCD_{crowd,i} = \alpha \cdot D_{crowd,i} + (1 - \alpha) \cdot D_{crowd,i}^{decision}$$
where α is a weight parameter that balances the contributions of the crowding distances in the objective space and the decision space. In multi-objective evolutionary optimization algorithms, an effective method is needed to measure the crowding degree among individuals in order to achieve convergence to the Pareto optimal front while maintaining diversity of the solutions. By adjusting the parameter α , the trade-off between the two can be flexibly controlled, thereby effectively managing the solution set. As α approaches 1, the importance of the crowding distance in the objective space increases; conversely, as α approaches 0, the importance of the crowding distance in the decision space increases. This method not only maintains the diversity of the solution set but also effectively guides the search process to converge to the Pareto optimal front.
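The following minimal sketch computes the two crowding distances and their weighted combination; how the two objective-space neighbours are identified is not specified above, so the L1 distance used for that step here is an assumption, and the names are illustrative.

```python
import numpy as np

def objective_crowding(objs, i):
    """D_crowd,i: sum over objectives of |f_k(neighbor1) - f_k(neighbor2)| for the
    two nearest neighbours of i in the objective space (found here by L1 distance)."""
    objs = np.asarray(objs, dtype=float)
    d = np.sum(np.abs(objs - objs[i]), axis=1)
    d[i] = np.inf
    n1, n2 = np.argsort(d)[:2]
    return float(np.sum(np.abs(objs[n1] - objs[n2])))

def decision_crowding(pop, i, k=3):
    """D_crowd,i^decision: average Euclidean distance from i to its k nearest
    neighbours in the decision space."""
    pop = np.asarray(pop, dtype=float)
    d = np.linalg.norm(pop - pop[i], axis=1)
    d[i] = np.inf
    return float(np.mean(np.sort(d)[:k]))

def weighted_crowding(pop, objs, i, alpha):
    """WCD_i = alpha * D_crowd,i + (1 - alpha) * D_crowd,i^decision."""
    return alpha * objective_crowding(objs, i) + (1 - alpha) * decision_crowding(pop, i)
```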

4.5. Discretization of Solution Space

In the EO algorithm optimization process, individual updates convert discrete individuals into continuous ones. Since in the knapsack optimization problem, knapsack indices cannot be fractional, the updated population needs to be discretized.
$$x = round(x),$$
$$x = \begin{cases} ub, & x > ub \\ lb, & x < lb. \end{cases}$$
where $x$ represents the population individuals, $ub$ denotes the upper bound of the population, with its value depending on the number of knapsacks $M$, and $lb$ represents the lower bound of the population.
In Figure 5, each row represents a decision scheme, i.e., a solution. Figure 5a shows the population updated to floating-point numbers after the mutation operations. Figure 5b illustrates the population after constraint handling, where the green areas indicate the entries that required constraining and the decision variables obtained after constraint processing.
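A minimal sketch of this rounding-and-clipping step (the round(x) and bound formulas above) is given below; the helper name is illustrative.

```python
import numpy as np

def discretize(pop, lb, ub):
    """Round the updated floating-point population back to integer knapsack indices
    and clip it to [lb, ub]; ub is the number of knapsacks M and lb is 0."""
    pop = np.rint(pop)          # x = round(x)
    pop = np.clip(pop, lb, ub)  # x = ub if x > ub, x = lb if x < lb
    return pop.astype(int)

# one mutated packing scheme for a 3-knapsack, 12-item problem
updated = np.array([[0.4, 2.7, 3.6, -0.3, 1.2, 2.9, 0.1, 3.2, 1.8, 0.6, 2.2, 1.1]])
print(discretize(updated, lb=0, ub=3))  # [[0 3 3 0 1 3 0 3 2 1 2 1]]
```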
The pseudocode of MOEO_WCD is given in Algorithm 1:
Algorithm 1. MOEO_WCD
Input: Population size NP; maximum number of iterations Max_iter; lower and upper bounds of the individuals lb, ub
Output: Best individual x; fitness of x
1    Initialize the particle population x_i, i = 1, 2, ..., NP; set iter = 1
2    Initialize the control parameters a1 = 1, a2 = 1, GP = 0.5
3    While iter < Max_iter
4        Calculate the current weighted crowding distance weight β    # crowding distance weight
5        For i = 1 : NP
6            Calculate the fitness of x_i
7            Calculate the decision space distance of x_i    # distance in the decision space
8            Calculate the objective space distance of x_i   # distance in the objective space
9        End for
10       Identify three particles based on a random concentration strategy
11       Calculate the x_ave particle
12       Construct the equilibrium pool by Formula (14)
13       Calculate t
14       For i = 1 : NP
15           Randomly choose one candidate from the equilibrium pool    # random concentration selection strategy
16           Generate the random vector λ
17           Generate F, GCP, G0, G
18           Update the particle population
19       End for
20       x = round(x)    # discretization of solutions in the decision space
21       iter = iter + 1
22   End while

5. Experiment

5.1. Experimental Setup

  • Comparative algorithms: MODE, MO_PSO_MM, MO_Ring_PSO_SCD, NSGA-II, DN_NSGA-II, and MOEO_WCD;
  • Number of runs: 50 independent runs;
  • Population size NP: 400;
  • Maximum number of evaluations (Max_FES): 10,000.
The performance of different algorithms is compared using the above evaluation metrics, where a higher metric value indicates better performance.
The population initialization in this paper uses knapsack numbers. The initialization matrix has columns representing item numbers and rows representing individuals, i.e., packing solutions. The initial population size is set to 400. For each algorithm, 50 independent experiments are conducted, with each experiment iterated 10,000 times. The upper bound for the population individuals is set based on the number of knapsacks, with the lower bound being 0. If entry (i, j) = 0, item j is not packed into any knapsack in packing scheme i; if entry (i, j) = m, item j is packed into knapsack m in packing scheme i.
Population constraints: The initialized population is determined based on the number of knapsacks. If there are M knapsacks, the initialized population is as follows:
$$x_{im} = randi(0, M)$$
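A minimal sketch of this initialization under the randi(0, M) rule is given below; the function name is illustrative.

```python
import numpy as np

def init_population(pop_size, n_items, n_knapsacks, rng=None):
    """Initialize pop_size packing schemes: entry (i, j) in {0, ..., M} is the knapsack
    holding item j in scheme i, with 0 meaning the item is left out."""
    rng = rng or np.random.default_rng()
    return rng.integers(0, n_knapsacks + 1, size=(pop_size, n_items))

# settings used in Section 5.1: NP = 400 individuals, Knapsack Problem 1 (12 items, 3 knapsacks)
population = init_population(pop_size=400, n_items=12, n_knapsacks=3)
```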

Experimental Plan

To compare the performance of different algorithms under various experimental conditions, this paper considers nine types of knapsack problems. The experimental design is based on four dimensions: knapsack size, item weight and value, the number of knapsacks, and the number of items. In Table 1, Knapsack Problem 1 involves three knapsacks with capacities of 15, 20, and 25, respectively. The goal is to pack 12 items into the knapsacks with the values and weights of the items provided in the table.

5.2. Experimental Results

Table 2 shows the upper and lower bounds of weight and value objectives for each algorithm when solving different packing optimization problems. For Problem 1, the upper bound for the weight objective using the MOEO_WCD algorithm is 50, and the lower bound is 3. The upper bound for the value objective is 66, and the lower bound is 8. As indicated by Table 2, the MOEO_WCD algorithm is able to find values closer to the boundaries in the nine sets of bi-objective packing combinatorial optimization problems, demonstrating its ability to effectively balance the conflicting objectives of value maximization and weight minimization.
Table 3 shows the coverage of solutions calculated using Formula (11), which represents the proportion of solutions obtained by different algorithms relative to the overall set of solutions. It presents the average coverage and standard deviation of the solutions obtained by each algorithm across 50 independent experiments for the nine knapsack problems in comparison to the total reference set. From Table 3, it is evident that the results of MOEO_WCD and MODE significantly outperform those of MO_PSO_MM, MO_Ring_PSO_SCD, NSGA-II, and DN_NSGA-II. Specifically, for Problems 1, 3, 5, and 8, the MOEO_WCD algorithm demonstrates the highest coverage. When solving Knapsack Problems 2, 6, 7, and 9, the MODE algorithm achieves the highest coverage. However, the optimization results of the algorithm proposed in this paper rank second or third in terms of coverage for these problem sets. The results indicate that, except for Knapsack Problem 4, the coverage of solutions obtained by the MOEO_WCD algorithm, in terms of the composite Pareto front in the decision space, is higher than that of the other algorithms, with coverage comparable to that of the MODE algorithm. Therefore, the proposed MOEO_WCD algorithm demonstrates a certain advantage in the decision space when solving multi-objective packing optimization problems.
Table 4 lists the HV metric of the six algorithms on the objective functions solved in this paper, and the last two rows show the Friedman test results. The reference points in this table were selected based on Table 2. It can be seen that the MOEO_WCD algorithm ranks second in the hypervolume (HV) metric. This indicates that the diversity of the algorithm’s solution set is good but still inferior to that of the MODE algorithm.
Table 5 reports the results of a Wilcoxon signed-rank test, where “+” indicates that the MOEO_WCD algorithm performs better than the comparison algorithm on the HV indicator, and “−” indicates that the MOEO_WCD algorithm performs worse than the comparison algorithm on the HV indicator. It can be seen that our MOEO_WCD algorithm performs second only to the MODE algorithm.
Figure 6 shows the objective-space fronts obtained by all algorithms for Knapsack Problem 1.
From Figure 7, it can be seen that the MODE algorithm has the largest overlap with the composite Pareto front in the objective space, while our MOEO_WCD algorithm is in second place. In the decision space, however, the picture differs: Figure 8 plots the number of decision space solutions corresponding to the objective values obtained by the MODE and MOEO_WCD algorithms for Problem 1. The number of decision space solutions obtained by the MOEO_WCD algorithm is significantly higher than that of the MODE algorithm and than the average number of solutions in the integrated composite Pareto set. Although the MODE algorithm exhibits a larger overlap with the reference composite Pareto front in the objective space than the MOEO_WCD algorithm, Table 3 and Figure 8 indicate that the proposed MOEO_WCD algorithm has a higher coverage of the composite solutions in the decision space than the MODE algorithm and yields more decision space solutions corresponding to each objective value. Therefore, although the diversity of its solutions in the objective space is not as high as that of the MODE algorithm, the MOEO_WCD algorithm exhibits greater diversity of solutions in the decision space. The MOEO_WCD algorithm thus demonstrates a certain advantage in solving multi-objective packing combinatorial optimization problems, providing decision-makers with a larger number of packing decision options.
Table 6 presents only three packing schemes from the solutions to Knapsack Combinatorial Optimization Problem 1 obtained by each algorithm. The proposed MOEO_WCD algorithm found 399 solutions. In Option 1, items numbered 1, 2, 4, and 6 are packed into Knapsack 1, items numbered 3, 8, and 11 are packed into Knapsack 2, and items numbered 5, 7, and 12 are packed into Knapsack 3. The total packing weight is 49, and the total packing value is 64. The results show that the MOEO_WCD algorithm obtained the largest number of packing schemes, indicating that our algorithm has a certain advantage in solving bi-objective knapsack combinatorial optimization problems. It can meet the diverse needs of decision-makers by providing a greater number of solutions.

6. Conclusions

This paper proposes the MOEO_WCD algorithm, a weighted crowding distance-based equilibrium optimizer, for solving bi-objective packing and combinatorial optimization problems. The algorithm employs non-dominated sorting to solve the bi-objective problem, obtaining a set of Pareto front solutions. It also improves the equilibrium pool generation mechanism and incorporates a weighted crowding distance to enhance the diversity of the solution set in solving bi-objective packing and combinatorial optimization problems. Secondly, since multi-objective packing and combinatorial optimization problems are discrete, the algorithm was also adapted with discrete constraints to effectively address these problems. To validate the performance of the MOEO_WCD algorithm, simulation experiments were conducted on multi-objective packing optimization problems with multiple experimental setups. Due to the lack of standard evaluation metrics, a new evaluation metric was designed. According to this metric, the experimental results demonstrate that the MOEO_WCD algorithm shows competitive performance compared with five other multi-objective optimization algorithms. In addition, the HV indicator was also calculated, and the results showed that the MOEO_WCD algorithm was second only to the MODE algorithm in terms of diversity in the objective space.
In summary, for discrete scenarios in multi-objective problems, further research and development of existing algorithms are needed to solve these complex optimization problems more effectively. This not only requires improvements in the adaptability of the algorithms but also a more comprehensive understanding of the problem space. In future research, we will attempt to improve the method using different discrete constraints, aiming to provide a new approach for solving combinatorial optimization problems. Additionally, we will discuss methods for improving the multi-objective performance of the MOEO_WCD algorithm. Ultimately, this research will provide more robust support for addressing multi-objective combinatorial optimization problems in practical engineering and applications.

Author Contributions

Conceptualization, D.L.; methodology, D.L.; software, X.H.; validation, Z.W.; formal analysis, Z.W.; investigation, W.L.; resources, D.L.; data curation, Y.Z. and J.D.; writing—original draft preparation, Z.W.; writing—review and editing, D.L. and Z.W.; visualization, X.H. and Z.Z. Note: author X.H. contributed to this article at the same level as the first author. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Agricultural Joint Fund of Yunnan Province under Grant No. 202301BD070001-086, the Scientific Research Foundation of the Education Department of Yunnan Province, China, under Grant No. 2022J0495, the National Natural Science Foundation of China under Grant No. 31860332, the National Natural Science Foundation of China under Grant No. 32360388, and Research on the Application of Multi-Target Swarm Intelligence Algorithms with the Multi-Modal in Biological Data.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Kellerer, H.; Pferschy, U.; Pisinger, D. Multidimensional Knapsack Problems; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  2. Perry, T.C.; Hartman, J.C. An Approximate Dynamic Programming Approach to Solving a Dynamic, Stochastic Multiple Knapsack Problem. Int. Trans. Oper. Res. 2009, 16, 347–359. [Google Scholar] [CrossRef]
  3. Kan, A.R.; Stougie, L.; Vercellis, C. A Class of Generalized Greedy Algorithms for the Multi-Knapsack Problem. Discret. Appl. Math. 1993, 42, 279–290. [Google Scholar]
  4. Bhattacharjee, K.K.; Sarmah, S.P. Modified Swarm Intelligence Based Techniques for the Knapsack Problem. Appl. Intell. 2017, 46, 158–179. [Google Scholar] [CrossRef]
  5. Zhang, T.; Ding, B.; Zhao, X.; Yue, Q. A Fast Feature Selection Algorithm Based on Swarm Intelligence in Acoustic Defect Detection. IEEE Access 2018, 6, 28848–28858. [Google Scholar] [CrossRef]
  6. Rezoug, A.; Bader-El-Den, M.; Boughaci, D. Guided Genetic Algorithm for the Multidimensional Knapsack Problem. Memetic Comput. 2018, 10, 29–42. [Google Scholar] [CrossRef]
  7. Shin, Y.-B.; Kita, E. Solving Two-Dimensional Packing Problem Using Particle Swarm Optimization. Comput. Assist. Methods Eng. Sci. 2017, 19, 241–255. [Google Scholar]
  8. Kong, M.; Tian, P.; Kao, Y. A New Ant Colony Optimization Algorithm for the Multidimensional Knapsack Problem. Comput. Oper. Res. 2008, 35, 2672–2683. [Google Scholar] [CrossRef]
  9. Nand, R.; Sharma, P. Iteration Split with Firefly Algorithm and Genetic Algorithm to Solve Multidimensional Knapsack Problems. In Proceedings of the 2019 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Melbourne, Australia, 9–11 December 2019; pp. 1–7. [Google Scholar]
  10. Tian, Y.; Cheng, R.; Zhang, X.; Li, M.; Jin, Y. Diversity Assessment of Multi-Objective Evolutionary Algorithms: Performance Metric and Benchmark Problems. IEEE Comput. Intell. Mag. 2019, 14, 61–74. [Google Scholar] [CrossRef]
  11. Mansour, I.B.; Basseur, M.; Saubion, F. A Multi-Population Algorithm for Multi-Objective Knapsack Problem. Appl. Soft Comput. 2018, 70, 814–825. [Google Scholar] [CrossRef]
  12. Van Veldhuizen, D.A.; Lamont, G.B. Multiobjective Evolutionary Algorithm Research: A History and Analysis; Citeseer: Princeton, NJ, USA, 1998. [Google Scholar]
  13. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  14. Rahman, C.M.; Mohammed, H.M.; Abdul, Z.K. Multi-Objective Group Learning Algorithm with a Multi-Objective Real-World Engineering Problem. Appl. Soft Comput. 2024, 166, 112145. [Google Scholar] [CrossRef]
  15. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In Proceedings of the International Symposium on Stochastic Algorithms, Sapporo, Japan, 26–28 October 2009; pp. 169–178. [Google Scholar]
  16. Blum, C.; Roli, A. Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison. ACM Comput. Surv. (CSUR) 2003, 35, 268–308. [Google Scholar] [CrossRef]
  17. Mahrach, M.; Miranda, G.; León, C.; Segredo, E. Comparison Between Single and Multi-Objective Evolutionary Algorithms to Solve the Knapsack Problem and the Travelling Salesman Problem. Mathematics 2020, 8, 2018. [Google Scholar] [CrossRef]
  18. Zouache, D.; Moussaoui, A.; Abdelaziz, F.B. A Cooperative Swarm Intelligence Algorithm for Multi-Objective Discrete Optimization with Application to the Knapsack Problem. Eur. J. Oper. Res. 2018, 264, 74–88. [Google Scholar] [CrossRef]
  19. Deb, K.; Tiwari, S. Omni-optimizer: A Generic Evolutionary Algorithm for Single and Multi-Objective Optimization. Eur. J. Oper. Res. 2008, 185, 1062–1087. [Google Scholar] [CrossRef]
  20. Hernández Castellanos, C.I.; Schütze, O.; Sun, J.-Q.; Ober-Blöbaum, S. Non-Epsilon Dominated Evolutionary Algorithm for the Set of Approximate Solutions. Math. Comput. Appl. 2020, 25, 3. [Google Scholar] [CrossRef]
  21. Li, Z.; Wang, Q. An Improved Multi-Objective Evolutionary Algorithm for Multi-Objective 0/1 Knapsack Problem. Int. J. Multimed. Ubiquitous Eng. 2015, 10, 383–394. [Google Scholar] [CrossRef]
  22. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A Novel Optimization Algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar]
  23. Xue, F.; Sanderson, A.C.; Graves, R.J. Multi-Objective Differential Evolution-Algorithm, Convergence Analysis, and Applications. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 743–750. [Google Scholar]
  24. Feng, Q.; Li, Q.; Quan, W.; Pei, X.-M. Overview of Multiobjective Particle Swarm Optimization Algorithm. Chin. J. Eng. 2021, 43, 745–753. [Google Scholar]
  25. Wang, Y.; Yang, Z.; Guo, Y.; Zhu, J.; Zhu, X. A Novel Multi-Objective Competitive Swarm Optimization Algorithm for Multi-Modal Multi Objective Problems. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 271–278. [Google Scholar]
Figure 1. Schematic of frontier coverage for each algorithm and composite frontier.
Figure 2. Schematic diagram of the equilibrium optimizer algorithm.
Figure 3. Equilibrium optimizer algorithm flowchart.
Figure 4. MOEO-WCD algorithm flowchart.
Figure 5. Discretization of solution space. (a) shows the population after algorithm update operation, while (b) shows the population after discretization constraint.
Figure 6. Objective space of all algorithms and the composite Pareto front in problem 1. The hollow pentagrams represent the composite Pareto front, the yellow hexagons represent the Pareto front obtained by the MODE algorithm, the green right triangles represent the Pareto front obtained by the MO_PSO_MM algorithm, the cyan diamonds represent the Pareto front obtained by the MO_Ring_PSO_SCD algorithm, the purple circles represent the Pareto front obtained by the NSGA-II algorithm, the blue squares represent the Pareto front obtained by the DN_NSGA-II algorithm, and the red upward triangles represent the Pareto front obtained by the MOEO_WCD algorithm.
Figure 7. Objective Space for Problem 1. (a) shows the composite front, (b) shows the front solution solved by the MODE algorithm covering the composite front, (c) shows the frontier solution solved by the MO-PSO-MM algorithm covering the composite front, (d) shows the front solution solved by the MO-Ring-PSO-SCD algorithm covering the composite front, (e) shows the front solution solved by the NSGAII algorithm covering the composite front, (f) shows the front solution solved by the DN-NSGAII algorithm covering the composite front, and (g) shows the front solution solved by the MOEO-WCD algorithm covering the composite front.
Figure 8. Statistics of the number of decision solutions corresponding to each objective value in Q1. (a) shows the number of decision space solutions corresponding to the target values solved by the MODE algorithm, while (b) shows the number of decision space solutions corresponding to the target values solved by the MOEO-WCD algorithm.
Table 1. Experimental plan.

| Problem | Number of Knapsacks | Knapsack Capacity | Weight and Value of Items | Number of Items |
|---|---|---|---|---|
| Q1 | 3 | 15, 20, 25 | weight: 5,3,7,2,6,4,9,8,1,10,2,3; value: 10,8,6,9,5,7,3,4,2,1,4,8 | 12 |
| Q2 | 3 | 10, 15, 20 | weight: 5,3,7,2,6,4,9,8,1,10,2,3; value: 10,8,6,9,5,7,3,4,2,1,4,8 | 12 |
| Q3 | 3 | 15, 20, 25 | weight: 3,3,1,2,2,4,9,8,1,10,2,3; value: 10,17,6,5,5,7,3,4,2,1,4,8 | 12 |
| Q4 | 2 | 15, 20 | weight: 5,3,7,2,6,4,9,8,1,10,2,3; value: 10,8,6,9,5,7,3,4,2,1,4,8 | 12 |
| Q5 | 1 | 25 | weight: 5,3,7,2,6,4,9,8,1,10,2,3; value: 10,8,6,9,5,7,3,4,2,1,4,8 | 12 |
| Q6 | 5 | 8, 10, 6, 7, 13 | weight: 5,3,7,2,6,4,9,8,1,10,2,3; value: 10,8,6,9,5,7,3,4,2,1,4,8 | 12 |
| Q7 | 3 | 15, 20, 25 | weight: 5,3,7,2,6,4,9,8,1,10,2,3,4,7,3; value: 10,8,6,9,5,7,3,4,2,1,4,8,6,9,2 | 15 |
| Q8 | 3 | 15, 20, 25 | weight: 5,3,7,2,6,4,9,8; value: 10,8,6,9,5,7,3,4 | 8 |
| Q9 | 3 | 15, 20, 25 | weight: 8,3,4,9,17,11,8,10,7,6,5,3; value: 2,9,3,2,3,6,9,5,8,1,3,1 | 12 |
Table 2. The upper and lower limits for each algorithm to solve different problems (each cell lists upper bound, lower bound).

| Algorithm | Objective | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 |
|---|---|---|---|---|---|---|---|---|---|---|
| MODE | weight | 50, 10 | 41, 2 | 48, 4 | 33, 7 | 25, 2 | 33, 2 | 47, 2 | 44, 2 | 54, 3 |
| | value | 66, 21 | 63, 9 | 72, 23 | 59, 19 | 51, 9 | 59, 9 | 76, 9 | 52, 9 | 41, 9 |
| MO_PSO_MM | weight | 50, 10 | 41, 15 | 48, 11 | 33, 4 | 25, 1 | 33, 15 | 55, 17 | 44, 6 | 59, 15 |
| | value | 66, 21 | 63, 38 | 72, 30 | 59, 13 | 51, 2 | 59, 38 | 80, 40 | 52, 5 | 46, 20 |
| MO_Ring_PSO_SCD | weight | 51, 15 | 38, 15 | 48, 11 | 31, 10 | 24, 2 | 27, 26 | 48, 13 | 44, 6 | 52, 33 |
| | value | 64, 32 | 55, 39 | 72, 33 | 55, 29 | 49, 9 | 48, 39 | 71, 17 | 52, 16 | 38, 30 |
| NSGA-II | weight | 50, 26 | 41, 12 | 38, 11 | 31, 6 | 24, 3 | 32, 27 | 49, 34 | 44, 11 | 47, 30 |
| | value | 66, 44 | 63, 30 | 71, 39 | 51, 19 | 49, 11 | 48, 37 | 73, 62 | 52, 26 | 40, 17 |
| DN_NSGA-II | weight | 41, 16 | 39, 11 | 48, 15 | 31, 3 | 15, 4 | 30, 15 | 49, 34 | 44, 6 | 48, 37 |
| | value | 63, 24 | 52, 24 | 72, 53 | 46, 11 | 32, 13 | 40, 25 | 73, 62 | 52, 16 | 39, 30 |
| MOEO_WCD | weight | 50, 3 | 43, 3 | 48, 6 | 27, 2 | 25, 1 | 25, 1 | 55, 12 | 44, 15 | 51, 12 |
| | value | 66, 8 | 66, 8 | 72, 17 | 54, 9 | 51, 2 | 51, 2 | 80, 27 | 52, 29 | 44, 11 |
Table 3. The average coverage (± standard deviation) of each algorithm on the reference solution set.

| Algorithm | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 |
|---|---|---|---|---|---|---|---|---|---|
| MODE | 17.89% ± 0.0739 | 46.26% ± 0.0666 | 14.9% ± 0.0141 | 26.98% ± 0.0700 | 55.86% ± 0.0755 | 56.69% ± 0.0755 | 56.69% ± 0.0460 | 12.2% ± 0.0126 | 69.09% ± 0.0543 |
| MO_PSO_MM | 33.83% ± 0.0259 | 39.44% ± 0.0674 | 39.6% ± 0.0314 | 65.60% ± 0.0320 | 59.49% ± 0.0947 | 24.47% ± 0.0947 | 24.47% ± 0.0530 | 43.7% ± 0.0337 | 13.19% ± 0.0389 |
| MO_Ring_PSO_SCD | 1.11% ± 0.0558 | 1.80% ± 0.0769 | 1.54% ± 0.0154 | 1.52% ± 0.0529 | 22.88% ± 0.0714 | 2.70% ± 0.0714 | 2.7% ± 0.0515 | 2.40% ± 0.0127 | 2.23% ± 0.0079 |
| NSGA-II | 0.90% ± 0.0035 | 1.39% ± 0.0053 | 1.30% ± 0.0033 | 1.17% ± 0.0040 | 20.44% ± 0.0568 | 2.24% ± 0.0568 | 2.24% ± 0.0075 | 3.13% ± 0.0054 | 1.36% ± 0.0061 |
| DN_NSGA-II | 0.91% ± 0.0026 | 1.29% ± 0.0050 | 1.26% ± 0.0031 | 1.24% ± 0.0023 | 11.39% ± 0.0393 | 1.99% ± 0.0393 | 1.99% ± 0.0059 | 3.11% ± 0.0050 | 1.34% ± 0.0055 |
| MOEO_WCD | 35.62% ± 0.0027 | 10.07% ± 0.0044 | 40.4% ± 0.0025 | 12.24% ± 0.0034 | 59.56% ± 0.0328 | 11.91% ± 0.0328 | 11.91% ± 0.0053 | 43.7% ± 0.0054 | 13.33% ± 0.0529 |
Table 4. HV indicator for each algorithm on the reference solution set.

| HV | Repoint | MODE | MO_PSO_MM | MO_Ring_PSO_SCD | NSGA-II | DN_NSGA-II | MOEO_WCD |
|---|---|---|---|---|---|---|---|
| Q1 | 66, 3 | 3481 | 3115 | 2737 | 2804 | 2999 | 3451 |
| Q2 | 66, 2 | 3329 | 3095 | 2743 | 2824 | 2654 | 3233 |
| Q3 | 72, 4 | 4630 | 4396 | 4239 | 4003 | 4086 | 4559 |
| Q4 | 59, 2 | 2737 | 2714 | 2285 | 2233 | 1951 | 2569 |
| Q5 | 51, 1 | 2137 | 2095 | 1795 | 1837 | 1901 | 2137 |
| Q6 | 59, 1 | 2770 | 2419 | 1485 | 1320 | 1782 | 2338 |
| Q7 | 80, 2 | 4778 | 4407 | 3559 | 3452 | 3423 | 4719 |
| Q8 | 52, 2 | 2031 | 1918 | 1768 | 1892 | 1816 | 2031 |
| Q9 | 59, 3 | 1892 | 1502 | 971 | 792 | 1034 | 1757 |
| Friedman mean rank | | 1 | 2.78 | 5 | 5.11 | 4.89 | 2.22 |
| Rank | | 1 | 3 | 5 | 6 | 4 | 2 |
Table 5. Wilcoxon test results of the HV indicator.

| Comparison | R+ | R− | p Value |
|---|---|---|---|
| MOEO_WCD vs. MODE | 0 | 28 | 0.01807 |
| MOEO_WCD vs. MO_PSO_MM | 38 | 7 | 0.07472 |
| MOEO_WCD vs. MO_Ring_PSO_SCD | 45 | 0 | 0.00490 |
| MOEO_WCD vs. NSGA-II | 45 | 0 | 0.00490 |
| MOEO_WCD vs. DN_NSGA-II | 45 | 0 | 0.00490 |
Table 6. Q1 packing solutions solved by various algorithms.

| Algorithm | Option | Packing Scheme | Weight | Value | Count |
|---|---|---|---|---|---|
| MODE | Option 1 | Bag1: 1,2,6,12 Bag2: 3,7,9 Bag3: 4,5,8,11 | 50 | 66 | 154 |
| | Option 2 | Bag3: 4 | 2 | 9 | |
| | Option 3 | Bag1: 1,7 Bag2: 3,5 Bag3: 2,4,6,8,9,11,12 | 50 | 66 | |
| MO_PSO_MM | Option 1 | Bag1: 2,4,10 Bag2: 7,8,12 Bag3: 1,3,5,6,9,11 | 60 | 67 | 355 |
| | Option 2 | Bag1: 1,11 Bag2: 4,9 Bag3: 6 | 14 | 32 | |
| | Option 3 | Bag1: 8,12 Bag2: 1,2,6,9 Bag3: 3,4,5,11 | 41 | 63 | |
| MO_Ring_PSO_SCD | Option 1 | Bag2: 11 Bag3: 3,6 | 13 | 17 | 8 |
| | Option 2 | Bag1: 2,6 Bag2: 7,9 Bag3: 1,3,4,5,11 | 42 | 62 | |
| | Option 3 | Bag1: 2,3,11 Bag2: 1,12 Bag3: 4,6,8,9 | 35 | 58 | |
| NSGA-II | Option 1 | Bag1: 5,12 Bag2: 1,4,11 Bag3: 2,3,6,8,9 | 41 | 63 | 11 |
| | Option 2 | Bag1: 3,6,11 Bag2: 4,8,12 Bag3: 1,2,5,9 | 41 | 63 | |
| | Option 3 | Bag1: 2,9 Bag2: 1,4,11 Bag3: 8,12 | 35 | 58 | |
| DN_NSGA-II | Option 1 | Bag1: 2,3,4,12 Bag2: 1,5,6,11 Bag3: 9 | 33 | 59 | 11 |
| | Option 2 | Bag1: 5,6 Bag2: 4,7,8 Bag3: 1,2,3,11,12 | 49 | 64 | |
| | Option 3 | Bag1: 6,8 Bag2: 2 Bag3: 4,11,12 | 22 | 40 | |
| MOEO_WCD | Option 1 | Bag1: 1,2,4,6 Bag2: 3,8,11 Bag3: 5,7,12 | 49 | 64 | 399 |
| | Option 2 | Bag1: 1,2,3 Bag2: 3,8,11 Bag3: 5,7,12 | 50 | 66 | |
| | Option 3 | Bag1: 11,12 Bag3: 9 | 6 | 14 | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
