Article

New Metaheuristics to Solve the Internet Shopping Optimization Problem with Sensitive Prices

by Miguel A. García-Morales 1, José Alfredo Brambila-Hernández 1,*, Héctor J. Fraire-Huacuja 1, Juan Frausto 1, Laura Cruz 1, Claudia Gómez 1 and Alfredo Peña-Ramos 2

1 National Technology of Mexico/Madero City Technological Institute, Madero City 89460, Tamaulipas, Mexico
2 Faculty of Engineering, Autonomous University of Tamaulipas, Tampico 89109, Tamaulipas, Mexico
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2024, 29(6), 119; https://doi.org/10.3390/mca29060119
Submission received: 6 November 2024 / Revised: 3 December 2024 / Accepted: 4 December 2024 / Published: 14 December 2024
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)

Abstract

In this research, two new methods for solving the Internet shopping optimization problem with sensitive prices are proposed, incorporating adaptive adjustment of control parameters. This problem is classified as NP-hard and is relevant to current electronic commerce. The first proposed solution method is a Memetic Algorithm incorporating an improved local search and adaptive adjustment of control parameters. The second is a particle swarm optimization algorithm that adds a diversification technique and adaptive adjustment of control parameters. We assess the effectiveness of the proposed algorithms by comparing them with the Branch and Bound algorithm, which achieves the most favorable outcomes among state-of-the-art methods. Nine instances of three different sizes are used: small, medium, and large. For performance validation, the Wilcoxon and Friedman non-parametric tests are applied. The results show that the proposed algorithms exhibit comparable performance to each other and outperform the Branch and Bound algorithm.

1. Introduction

Currently, most Internet transactions are due to online purchases, which have become increasingly important because of the COVID-19 pandemic [1]. Customers seek to minimize the cost of their purchases by considering the discounts offered by online stores, since prices change daily depending on the advertised offers. The Internet shopping problem was formally defined by Błażewicz [2]; furthermore, the author shows that this problem is NP-hard by constructing a pseudo-polynomial transformation from the Exact Cover by 3-Sets problem to problem P1.
Currently, metaheuristic algorithms are widely used to solve optimization problems. They are generally used to find optimal solutions, or to approach the optimal value, in NP-hard optimization problems [3]. A study by Li [4] identifies the main trends and application areas of metaheuristic algorithms, such as logistics, manufacturing, artificial intelligence, and computer science; the main reason for their popularity is their flexibility in addressing NP-hard problems. Valencia-Rivera [5] presents a complete analysis of the trends and impact of the metaheuristic algorithms most frequently used to solve optimization problems today; the review includes particle swarm optimization algorithms, genetic algorithms, and Ant Colony algorithms.
Therefore, developing new solution methods and metaheuristics is essential for identifying which stores offer the best discount to satisfy a shopping list. The described issue is known as the Internet shopping problem with sensitive prices (IShOPwD); related works are described below:
Błażewicz [6] designed the IShOPwD model for the first time, considering only the customer’s perspective. This model explicitly addresses a scenario where a client aims to buy several products from multiple online stores, focusing on minimizing the total cost of all products and incorporating applicable discounts into the overall cost. The solution method developed is a branch-and-bound algorithm (BB).
Musial [7] proposes a set of algorithms (Greedy Algorithm (GA), Forecasting Algorithm (FA), Cellular Processing Algorithm (CPA), MinMin Algorithm, and Branch and Bound algorithm) to solve the IShOPwD optimization problem. These algorithms are crafted to generate various solutions at different computation times, aiming to achieve results that are close to the optimal solution. The results of these proposed algorithms are compared with the optimal solutions calculated using a BB.
In the work of Błażewicz et al. [8], the authors define and study some price-sensitive extensions of the IShOP. The study includes the IShOP with price-sensitive discounts and a newly defined optimization problem: the IShOP with two discount functions (a shipping cost discount function and a product price discount function). They formulate mathematical programming models, develop several algorithms (including a new forecasting heuristic), and carry out exhaustive feasibility tests.
Another related work is that of Chung and Choi [9]. This work aims to introduce an optimal bundle search problem called “OBSP” that allows integration with an online recommendation system (Bundle Search Method) to provide an optimized service considering pairwise discounting and the delivery cost. The results are integrated into an online recommendation system to support user decision-making.
Mahrudinda [10] and Morales [11] analyze the trends and solution methods applied to the IShOP and determine that, until now, no solution methods have used non-deterministic metaheuristics to solve the IShOPwD. For this reason, we adopted the solution methods proposed by Huacuja [12] and García-Morales [13], metaheuristic algorithms that provide the best results for the IShOP variant with shipping costs.
In this research, we develop two metaheuristic algorithms (a Memetic Algorithm (MAIShOPwD) and a particle swarm optimization algorithm (PSOIShOPwD)) that allow us to solve the Internet shopping problem with sensitive prices.
The main contributions of the proposed MAIShOPwD algorithm are as follows:
  • A novel method for calculating changes in the objective value of the current solution during local search, with a time complexity of O(1).
  • Adaptive adjustment of control parameters.
On the other hand, the contributions of the proposed PSOIShOPwD algorithm are as follows:
  • A neighborhood diversification technique and a local search at the end of each iteration.
  • Adaptive adjustment of control parameters.
A revised approach derived from the bandit-based adaptive operator selection technique, Fitness-Rate-Rank-Based Multiarmed Bandit (FRRMAB) [14], was used to adjust the control parameters in both proposed algorithms adaptively.
A series of feasibility experiments, including the instances used in [12], were carried out to identify the advantages of the proposed algorithms. Subsequently, non-parametric tests were applied with a significance level of 5% to draw conclusions.
This research comprises the following sections: Section 2 formally defines the problem addressed. Section 3 describes the main components of the proposed algorithms. Section 4 and Section 5 describe the general structure of the MAIShOPwD and PSOIShOPwD algorithms. Section 6 describes the computational experiments carried out to validate the feasibility of the proposed algorithms. Section 7 presents the results obtained through the Wilcoxon and Friedman non-parametric tests. Moreover, Section 8 defines the conclusions obtained and potential areas for future work.

2. Definition of the Problem

The data set N_j includes the products available in each store j. Each product i is associated with a cost c_ij, and each store j with a shipping cost d_j, which is added when the client purchases one or more items in store j.
Formally, the problem considers a client who aims to purchase the items of a specified set N = {1, …, n} from a set of online stores M = {1, …, m} at the lowest total cost, including a discount function f_k applied in the calculation of the objective function. The objective value of a solution I is calculated using Equation (1):

$$\min \; f_k\!\left(\sum_{j=1}^{m}\;\sum_{\substack{i=1\\ I_i = j}}^{n} \left(d_j + c_{ij}\right)\right) \qquad (1)$$

A candidate solution is represented by a vector I of length n, which specifies the store from which each product should be purchased. A more detailed description can be found in the work of Błażewicz et al. [6].
Equation (2) below shows the criteria used by the function f_k to select the discount applied to the total cost x of the shopping list:

$$f_k(x) = \begin{cases} x & \text{if } x \le 25,\\ 0.95x & \text{if } 25 < x \le 50,\\ 0.90x & \text{if } 50 < x \le 100,\\ 0.85x & \text{if } 100 < x \le 200,\\ 0.80x & \text{if } 200 < x \end{cases} \qquad (2)$$
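To make Equations (1) and (2) concrete, the following minimal Python sketch (ours, not part of the paper's reference implementation; names are illustrative) evaluates a candidate solution, assuming the vector representation of Section 3.1 and a shipping cost charged per purchased item, as in Equation (1):

```python
# Minimal sketch (ours): discounted objective value of a candidate solution.

def discount(x):
    """Discount function f_k of Equation (2)."""
    if x <= 25:
        return x
    if x <= 50:
        return 0.95 * x
    if x <= 100:
        return 0.90 * x
    if x <= 200:
        return 0.85 * x
    return 0.80 * x

def objective(I, c, d):
    """I[i] = store for product i; c[i][j] = cost of product i in store j;
    d[j] = shipping cost of store j."""
    total = sum(d[I[i]] + c[i][I[i]] for i in range(len(I)))
    return discount(total)

# Example: 3 products, 2 stores.
c = [[10, 12], [8, 7], [15, 14]]
d = [2, 3]
print(objective([0, 1, 1], c, d))  # 0.95 * 39 = 37.05
```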

3. The General Structure of the Proposed Algorithms

This section describes all the main elements of the proposed solution methods, which are part of the following algorithms: the Memetic Algorithm (MAIShOPwD) and the particle swarm optimization algorithm (PSOIShOPwD).

3.1. Representation of a Candidate Solution

Related state-of-the-art works use a matrix representation of the candidate solutions, such that evaluating the objective function for each candidate solution has a time complexity of O(n^2). Following the solution proposed in [12], this work uses a vector representation in which entry i stores the store j from which product i must be purchased. Figure 1 shows the vector representation of a candidate solution.

3.2. Adaptive Parameter Adjustment

The two algorithms proposed in this paper start from initial parameter values; these values are later adjusted based on specific actions taken during the search process. The work in [12] expands on the description of the modified FRRMAB [14] method for adaptive tuning of control parameters.

3.2.1. Assignment of Credits

Fialho [15] recommends not using the raw fitness value directly as the reward, because this affects the algorithm's robustness. To avoid this problem, the adaptive control-parameter adjustment method uses Fitness Improvement Rates (FIRs), as indicated in Equation (3):

$$FIR_{i,t} = \frac{pf_{i,t} - cf_{i,t}}{pf_{i,t}} \qquad (3)$$
where
  • pf_{i,t} represents the fitness value of the parent.
  • cf_{i,t} represents the fitness value of the offspring.
Each FIR value of the actions used is stored in a FIFO (first in, first out) structure of fixed size W. This setup retains the latest actions while discarding the earliest ones within the sliding window, which stores the action index and its FIR value.
To calculate the reward (Reward_i) of each action, the cumulative sum of the FIR values of that action within the sliding window is computed. The actions are then arranged in descending order of reward, giving each action i a rank (Rank_i). Finally, a decay factor D within the range of 0 to 1 is applied to modify Reward_i, which allows the best actions to be selected, using Equation (4):

$$Decay_i = D^{Rank_i} \times Reward_i \qquad (4)$$

Next, credit is allocated to each action i according to Equation (5):

$$FRR_{i,t} = \frac{Decay_i}{\sum_{j=1}^{K} Decay_j} \qquad (5)$$
As the decay factor D decreases, the likelihood of selecting the best action increases. Algorithm 1 shows how credits are assigned to an action. The algorithm is a modified version of the FRRMAB algorithm proposed by Li [14], which was initially used for adaptive operator selection (AOS) of genetic operators but has been adapted here to perform adaptive selection of actions. First, the reward of each action (Reward_i) and the application counter of each action (n_i) are initialized to 0 (lines 1, 2). The algorithm then iterates through the sliding window (lines 3–8), which contains a fixed number of recent applications of the actions. For each entry in the window, the index of the action is obtained (line 4) along with its fitness improvement rate (line 5); the FIR is added to the total reward of the action (line 6), and the application counter of the action is incremented (line 7). Once the improvement rates have been accumulated, all rewards are sorted in descending order (line 9), and each action receives a rank value (Rank_i) based on this ranking. A decay factor (D) is applied to the reward of each action to adjust the influence of better-performing actions (lines 10–12): the adjusted reward (Decay_action) is calculated by raising D to the action's rank and multiplying by its total reward (line 11). Finally, the adjusted reward of each action is normalized (lines 13–16) to obtain the final credit value (FRR_action), which is subsequently used in the action selection process.
Algorithm 1. Credit allocation.
1: Initialize each reward Reward_i = 0
2: Initialize n_i = 0
3: for i = 1 to SlidingWindow.length do
4:     action = SlidingWindow.GetIndexAction(i)
5:     FIR = SlidingWindow.GetFIR(i)
6:     Reward_action = Reward_action + FIR
7:     n_action++
8: end for
9: Rank Reward_i in descending order and set Rank_i to be the rank value of action i
10: for action = 1 to K do
11:     Decay_action = D^(Rank_action) × Reward_action
12: end for
13: DecaySum = Σ_{action=1}^{K} Decay_action
14: for action = 1 to K do
15:     FRR_action = Decay_action / DecaySum
16: end for
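As an illustration, the credit-assignment step of Algorithm 1 can be sketched in Python as follows (ours; we assume the sliding window is a list of (action index, FIR) pairs, and all names are illustrative):

```python
# Sketch (ours) of Algorithm 1: FRR credit assignment over a sliding window.

def assign_credits(window, K, D=0.5):
    """window: list of (action, FIR) pairs; K: number of actions;
    D: decay factor in (0, 1]."""
    reward = [0.0] * K              # cumulative FIR per action (lines 1-8)
    uses = [0] * K                  # application counter n_i
    for action, fir in window:
        reward[action] += fir
        uses[action] += 1
    # Rank rewards in descending order (line 9); rank 1 = best action.
    order = sorted(range(K), key=lambda a: reward[a], reverse=True)
    rank = [0] * K
    for r, a in enumerate(order, start=1):
        rank[a] = r
    # Apply the decay factor (lines 10-12) and normalize (lines 13-16).
    decay = [D ** rank[a] * reward[a] for a in range(K)]
    total = sum(decay)
    frr = [x / total if total > 0 else 0.0 for x in decay]
    return frr, uses

frr, uses = assign_credits([(0, 0.2), (1, 0.05), (0, 0.1)], K=6)
```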

3.2.2. Bandit-Based Action Selection

The bandit-based approach utilizes Fitness Rate Rank (FRR) values to assess the quality of each action for selection, as depicted in Algorithm 2. Algorithm 2 selects actions based on the multiarmed bandit approach [16] used in the FRRMAB algorithm [14]. The algorithm starts by checking whether any actions have not yet been selected; if so, one of them is chosen uniformly at random (lines 1–3). This mechanism ensures that the algorithm tries every possible action at least once. If all actions have already been selected at least once, the algorithm selects the action that maximizes a function combining the estimated quality of the action (FRR) with an exploration term based on the number of times that action has been selected (line 4); C is a factor that controls the balance between exploitation and exploration.
Algorithm 2. Bandit-based action selection.
1: if there are actions that have not been selected then
2:     action_t = randomly select an action from the action pool
3: else
4:     action_t = argmax_{i ∈ {1, …, K}} ( FRR_{i,t} + C × sqrt( 2 ln(Σ_{j=1}^{K} n_{j,t}) / n_{i,t} ) )
5: end if
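The selection rule of line 4 can likewise be sketched as follows (ours; frr and uses are the outputs of the credit-assignment sketch above, and C balances exploitation and exploration):

```python
import math
import random

def select_action(frr, uses, C=5.0):
    """Sketch (ours) of Algorithm 2: UCB-style bandit action selection."""
    unused = [a for a, n in enumerate(uses) if n == 0]
    if unused:                      # try every action at least once
        return random.choice(unused)
    total = sum(uses)
    return max(range(len(frr)),
               key=lambda a: frr[a]
               + C * math.sqrt(2 * math.log(total) / uses[a]))
```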

3.3. Actions Pool for the Memetic Algorithm

Below are the six adjustment actions for the crossover probability (pc) and mutation probability (pm) parameters of the MAIShOPwD algorithm. Each action increases or decreases one or both parameters by one ten-thousandth (0.0001); a compact implementation sketch follows the list.

(1) Action 1: pc = pc + 0.0001; pm = pm + 0.0001
(2) Action 2: pc = pc - 0.0001; pm = pm - 0.0001
(3) Action 3: pc = pc + 0.0001
(4) Action 4: pm = pm + 0.0001
(5) Action 5: pc = pc - 0.0001
(6) Action 6: pm = pm - 0.0001
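A straightforward way to implement such an action pool is a dispatch table, as in the following sketch (ours, illustrative); the pool of Section 3.4 for (c1, c2) is analogous:

```python
STEP = 0.0001  # one ten-thousandth

# Sketch (ours): the six actions as functions (pc, pm) -> (pc, pm).
ACTIONS = [
    lambda pc, pm: (pc + STEP, pm + STEP),  # Action 1
    lambda pc, pm: (pc - STEP, pm - STEP),  # Action 2
    lambda pc, pm: (pc + STEP, pm),         # Action 3
    lambda pc, pm: (pc, pm + STEP),         # Action 4
    lambda pc, pm: (pc - STEP, pm),         # Action 5
    lambda pc, pm: (pc, pm - STEP),         # Action 6
]

pc, pm = 0.6, 0.01
pc, pm = ACTIONS[2](pc, pm)  # apply Action 3
print(pc, pm)                # 0.6001 0.01
```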

3.4. Actions Pool for the Particle Swarm Optimization Algorithm

Below are the six adjustment actions for the personal learning quotient (c1) and global learning quotient (c2) of the PSOIShOPwD algorithm. Each action increases or decreases one or both parameters by one ten-thousandth (0.0001).

(1) Action 1: c1 = c1 + 0.0001; c2 = c2 + 0.0001
(2) Action 2: c1 = c1 - 0.0001; c2 = c2 - 0.0001
(3) Action 3: c1 = c1 + 0.0001
(4) Action 4: c2 = c2 + 0.0001
(5) Action 5: c1 = c1 - 0.0001
(6) Action 6: c2 = c2 - 0.0001

4. The General Structure of the Memetic Algorithm

Huacuja [12] introduced a Memetic Algorithm to tackle the IShOP with shipping costs. The results obtained by the Memetic Algorithm demonstrate that it could feasibly be used to solve other variants of the IShOP. Considering this contribution as a feasible solution method, the decision was made to improve the local search to achieve a significant improvement in the quality and efficiency of the results for the IShOPwD. Additionally, a mechanism that allows adaptive adjustment of some control parameters was added, considering the contribution of García-Morales [17].

4.1. Used Heuristics

The heuristics used to generate candidate solutions, called best first and best [12], are described below. These heuristics identify the store where a product can be purchased at the lowest cost, considering the discount associated with the total cost of the shopping list. For more detailed information on the operation of these heuristics, please consult the work conducted in [12].

4.1.1. Heuristic: Best First

The best first heuristic changes the stores assigned to a solution vector until the first cost improvement associated with the current product is found. This process is repeated for all products in the shopping list. This heuristic has a time complexity of O ( n m ) [12].

4.1.2. Heuristic: The Best

The best heuristic works like the best first heuristic, except that it does not stop when it finds the first improvement in the cost of a product; instead, it continues to search all stores for the lowest cost for each product. The process concludes when all stores have been checked for all products in the solution vector. The best heuristic also has a time complexity of O(nm) [12].
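Under our reading of [12], the two heuristics can be sketched as follows (Python, ours; objective is the discounted cost function of Section 2). For clarity, the sketch re-evaluates the full objective at every move; the O(nm) bound in [12] assumes incremental evaluation, as in Section 4.5:

```python
# Sketch (ours) of the heuristics of Section 4.1; I is modified in place,
# m is the number of stores, objective(I) returns the discounted cost.

def best_first(I, m, objective):
    for i in range(len(I)):
        base = objective(I)
        for j in range(m):          # stop at the first improvement
            old = I[i]
            I[i] = j
            if objective(I) < base:
                break
            I[i] = old

def best(I, m, objective):
    for i in range(len(I)):         # scan all stores, keep the best
        best_j, best_cost = I[i], objective(I)
        for j in range(m):
            I[i] = j
            cost = objective(I)
            if cost < best_cost:
                best_j, best_cost = j, cost
        I[i] = best_j
```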

4.2. Binary Tournament

A certain number of individuals are randomly selected from the population, and the fittest advance to the next generation. Each individual participates in two comparisons, and the winning individuals form a new population; this ensures that any individual in the population has a maximum of two copies in the new population. A more extensive description of the binary tournament can be found in [12].
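A minimal sketch of this tournament (ours; cost(s) is assumed to return the discounted objective value of solution s):

```python
import random

def binary_tournament(pop, cost):
    """Sketch (ours): each individual enters two pairings; the winners
    form the new population, so any individual appears at most twice."""
    new_pop = []
    for _ in range(2):                          # two comparison rounds
        shuffled = random.sample(pop, len(pop))
        for a, b in zip(shuffled[::2], shuffled[1::2]):
            new_pop.append(a if cost(a) <= cost(b) else b)
    return new_pop                              # same size for even |pop|
```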

4.3. Crossover Mechanism

The crossover mechanism is applied to a certain percentage of the individuals sequentially. This operator selects two solution vectors called parent one and parent two. Both parent vectors are divided in half to generate two children. The first offspring is created by combining the first half of parent 1 with the second half of parent 2. The second offspring is formed by merging the first half of parent 2 with the second half of parent 1.
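A sketch of this midpoint crossover (ours, illustrative):

```python
def crossover(p1, p2):
    """Sketch (ours): split both parents at the midpoint and swap halves."""
    h = len(p1) // 2
    return p1[:h] + p2[h:], p2[:h] + p1[h:]

print(crossover([0, 1, 2, 3], [3, 2, 1, 0]))  # ([0, 1, 1, 0], [3, 2, 2, 3])
```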

4.4. Mutation Operator

This operator selects a percentage of elite solutions from the population. It then goes through each selected elite solution and generates a random number; if this random number is less than the mutation probability parameter, the mutation heuristic is applied to the solution. Detailed information on the mutation heuristic can be found in [12].

4.5. Improved Local Search

The local search algorithm improves a given solution vector by reducing its total cost. To achieve this, the store assigned to each product is changed whenever the change reduces the solution's total cost. Evaluating each candidate change from scratch makes the local search costly, so the procedure was modified to reduce the computational cost of determining the change in the objective value of the current solution. Let I = (j_1, j_2, j_3, …, j_n) be the current solution and I′ = (j′_1, j_2, j_3, …, j_n) the solution obtained when the store from which product 1 is bought changes from j_1 to j′_1. The change in the objective value of the current solution is given by δ = F(I′) − F(I). If δ is computed directly using Equation (1), the computational complexity is O(nm). Theorem 1 shows how to calculate δ in O(1).
Theorem 1.
δ = (d_{j′_1} + c_{1,j′_1}) − (d_{j_1} + c_{1,j_1}).
Proof.
Let I = (j_1, j_2, j_3, …, j_n) be the current solution and I′ = (j′_1, j_2, j_3, …, j_n) the solution obtained when the store from which product 1 is bought changes from j_1 to j′_1. Then, using Equations (6) and (7):

$$F(I) = \sum_{i=1}^{n} \left(d_{j_i} + c_{i j_i}\right) = \left(d_{j_1} + c_{1 j_1}\right) + \sum_{i=2}^{n} \left(d_{j_i} + c_{i j_i}\right) \qquad (6)$$

$$F(I') = \left(d_{j'_1} + c_{1 j'_1}\right) + \sum_{i=2}^{n} \left(d_{j_i} + c_{i j_i}\right) \qquad (7)$$

Therefore, δ = F(I′) − F(I) = (d_{j′_1} + c_{1,j′_1}) − (d_{j_1} + c_{1,j_1}). □
The local search tries to improve the current solution by changing the stores assigned to the products. For each product, the store that produces the maximal reduction in the objective value is identified. To determine whether a change in the store assigned to a product produces a reduction, Theorem 1 is applied repeatedly during the search process; for this reason, this result has a significant impact on local search performance.
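The following sketch (ours) illustrates the O(1) move evaluation of Theorem 1 and checks it against a full re-evaluation; it uses the undiscounted cost F, since the discount f_k is applied to the total afterwards:

```python
def F(I, c, d):
    """Undiscounted cost of solution I, as in Equation (6)."""
    return sum(d[I[i]] + c[i][I[i]] for i in range(len(I)))

def delta(I, i, new_j, c, d):
    """O(1) change in F when product i moves from store I[i] to new_j."""
    return (d[new_j] + c[i][new_j]) - (d[I[i]] + c[i][I[i]])

c = [[10, 12], [8, 7], [15, 14]]
d = [2, 3]
I = [0, 1, 1]
I2 = I.copy()
I2[0] = 1                                   # move product 0 to store 1
assert F(I2, c, d) - F(I, c, d) == delta(I, 0, 1, c, d)  # Theorem 1
```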

4.6. Proposed Memetic Algorithm

Algorithm 3 shows the structure of the proposed MAIShOPwD algorithm. In steps 1 and 2, the values of the initial parameters are defined and the instance to be used is loaded. From steps 3 to 7, an initial population is generated, the objective value and discount for the entire population are calculated, and the best local and global solutions are identified. In step 9, the binary tournament is applied to the population, and a new population is generated based on the results obtained. In step 10, a percentage of elite solutions is moved from the new population to an intermediate population. In step 11, the crossover operator is applied to the new population, and the missing elements of the intermediate population are generated. In steps 12 and 13, the mutation operator and the improved local search are applied to the intermediate population to obtain a population of children. In step 15, the local solution is checked to determine whether it is better than the global solution; if so, the global solution is updated. In steps 16 and 17, the population is reset: the best global solution is inserted, and all other individuals are discarded and regenerated. In steps 18, 19, and 20, the sliding window is updated and the rewards of the adaptive control-parameter adjustment method are ranked. In step 21, each iteration ends, and finally, in step 22, the best global solution and cost are returned.
Algorithm 3. MAIShOPwD Algorithm.
Input: MaxIter: maximum number of iterations
Instance: m (stores), n (products), c_ij (product cost), d_j (shipping cost)
Parameters/Variables: ps: initial population size
pc: crossover probability
pm: mutation probability
er: elitism rate
pop: initial population
BestGlobalSolution, BestGlobalCost: best overall solution and cost
BestLocalSolution, BestLocalCost: best solution and cost in each generation
Functions:
BestGlobal(pop, BestGlobalSolution, BestGlobalCost): from pop, obtain the BestGlobalSolution and the BestGlobalCost
CrossoverOperator(NewPop, IntermediatePop): among non-elite individuals in NewPop, the crossover operator is applied to ps × pc randomly selected solutions, and the remaining solutions are copied as-is to IntermediatePop
MutationOperator(IntermediatePop, ChildPop): with probability pm, apply the mutation operator to all solutions in the intermediate population and move the offspring to the population of children
ImprovedLocalSearch(IntermediatePop, ChildPop): apply the improved local search to all solutions in IntermediatePop and move the offspring to ChildPop
BestLocal(ChildPop, BestLocalSolution, BestLocalCost): obtain the BestLocalSolution and the BestLocalCost in ChildPop
ObjectiveFunction(): calculate the objective value
Discount(ObjectiveFunction()): apply the discount to the objective value
FRRMAB(): obtains the index of the action to execute
ExecuteAction(indexaction): performs an action based on an index
SlidingWindow(indexaction, improvement[i]): a sliding window that stores both the frequency of action execution and the decrease in individual costs
UpgradeRewards(SlidingWindow): update the rewards held within the sliding window
Output: BestGlobalSolution: best overall solution
BestGlobalCost: best overall cost
1: Set initial parameters: population size (ps), crossover probability (pc), mutation probability (pm), elitism rate (er)
2: Load instance
3: Randomly generate initial population of size ps
4: ObjectiveFunction(pop)
5: Discount(ObjectiveFunction(pop))
6: BestGlobal(pop, BestGlobalSolution, BestGlobalCost)
7: BestLocalSolution = BestGlobalSolution; BestLocalCost = BestGlobalCost
8: while (not MaxIter) do
9:     Apply binary tournament in pop to get NewPop
10:    Move the elite solutions from NewPop to IntermediatePop
11:    CrossoverOperator(NewPop, IntermediatePop)
12:    MutationOperator(IntermediatePop, ChildPop)
13:    ImprovedLocalSearch(IntermediatePop, ChildPop)
14:    BestLocal(ChildPop, BestLocalSolution, BestLocalCost)
15:    if BestLocalCost < BestGlobalCost then {BestGlobalCost = BestLocalCost; BestGlobalSolution = BestLocalSolution}
16:    pop = ∅; pop = pop ∪ {BestGlobalSolution}
17:    Randomly generate ps − 1 solutions and add each solution to pop
18:    Add to SlidingWindow(indexaction, improvement[i])
19:    UpgradeRewards(SlidingWindow)
20:    Rank rewards
21: end while
22: return (BestGlobalSolution, BestGlobalCost)

5. Overview of the Particle Swarm Optimization Algorithm Structure

This section describes the second proposed solution method for the IShOPwD. This method utilizes the PSOIShOPwD algorithm. In this algorithm, two strategies are considered to avoid premature convergence [18]: the first corresponds to the diversification of neighborhoods [13], and the second is an improved local search; both strategies are executed only once per iteration to prevent excessive execution time consumption.
In addition, this proposed algorithm incorporates the adaptive adjustment of control parameters using an adaptation of the FRRMAB method [14].

5.1. Neighborhood Diversification Technique

The diversification technique runs through each particle in the population [18]. For each particle, it generates a random value r in the interval [1, M]. Then, it adjusts the value of each position based on the following criterion: if the particle's index is even, the new position is M minus the sum of the particle's best position and the random value r; otherwise, the new position is M minus the difference between the particle's best position and r. Subsequently, the algorithm enforces the minimum and maximum position constraints. Following this, it computes the objective value of the modified particle and assesses whether its best cost has improved; if so, the best overall particle is updated. The described process concludes once the entire population has been modified and evaluated. The process is detailed in Algorithm 4.
Algorithm 4. Neighborhood diversification technique.
Inputs: population of solutions
Outputs: updated population
Variables:
particles: population
particles.size: population size
N: number of products
M: number of stores
Functions:
random_number[1, M]: generates a random number in the range [1, M]
ObjectiveFunction(particles_i): calculates the objective value of the particle
Max(particles_i.position_j, 1): returns the maximum of position j of a particle and 1
Min(particles_i.position_j, M): returns the minimum of position j of a particle and M
Discount(ObjectiveFunction()): applies the discount to the objective value
1: for i = 1 to particles.size do
2:     r = random_number[1, M]
3:     for j = 1 to N do
4:         if i % 2 == 0 then
5:             particles_i.position_j = M − (particles_i.bestPosition[j] + r)
6:             particles_i.position_j = Max(particles_i.position_j, 1)
7:             particles_i.position_j = Min(particles_i.position_j, M)
8:         else
9:             particles_i.position_j = M − (particles_i.bestPosition[j] − r)
10:            particles_i.position_j = Max(particles_i.position_j, 1)
11:            particles_i.position_j = Min(particles_i.position_j, M)
12:        end if
13:    end for
14:    Discount(ObjectiveFunction(particles_i))
15:    if particles_i.cost < particles_i.bestCost then
16:        particles_i.bestPosition = particles_i.position
17:        particles_i.bestCost = particles_i.cost
18:        if particles_i.bestCost < GlobalBest.cost then
19:            GlobalBest = particles_i
20:        end if
21:    end if
22: end for
23: return particles

5.2. Local Search

The local search aims to improve the best global particle [18]. The process consists of modifying the particle’s positions and evaluating whether the modifications reduce the particle’s overall cost. This process is repeated in all positions of the particle. Finally, the total cost of the updated particle is calculated and compared with the best cost of the global particle. The newly updated particle will replace the best global particle if the cost is lower than the global particle. The process is detailed in Algorithm 5.
Algorithm 5. Local Search.
Inputs: best global solution
Outputs: updated best global solution
Variables:
N: number of products
M: number of stores
Cost: objective value
Functions:
ObjectiveFunction(GlobalBest): calculates the objective value of the global best
Discount(ObjectiveFunction()): applies the discount to the objective value
1: Cost = Discount(ObjectiveFunction(GlobalBest))
2: for i = 1 to N do
3:     for j = 1 to M do
4:         GlobalBest.position[i] = j
5:         if Discount(ObjectiveFunction(GlobalBest)) < Cost then
6:             Cost = Discount(ObjectiveFunction(GlobalBest))
7:             j* = j
8:         end if
9:     end for
10:    GlobalBest.position[i] = j*
11: end for
12: Discount(ObjectiveFunction(GlobalBest))
13: if GlobalBest.cost < GlobalBest.bestCost then
14:     GlobalBest.bestPosition = GlobalBest.position
15:     GlobalBest.bestCost = GlobalBest.cost
16: end if
17: return GlobalBest

5.3. Proposed Particle Swarm Optimization Algorithm (PSOIShOPwD)

Algorithm 6 outlines the fundamental framework of the proposed PSOIShOPwD algorithm. In steps 1 to 4, the initial values of the parameters are defined, a random population of particles is generated, the overall best particle is obtained, and the minimum and maximum particle velocities are calculated. In steps 7 and 8, the algorithm retrieves the index linked to an action and executes that action to adjust the values of c1 and c2. From steps 10 to 15, the algorithm adjusts the velocity and position of each particle, ensuring they remain within the permissible range; otherwise, a corrective method is applied. In steps 17, 18, and 19, the algorithm calculates the objective value, applies the discount to each particle, and determines the cost improvement of each particle relative to its best cost. From steps 20 to 27, the algorithm assigns the previously computed improvement to an improvement vector; it then compares the best position of the current particle with the global particle and, if it is superior, updates the global particle. In step 28, the sliding window is updated with the action index and the particle's cost improvement. In steps 29 and 30, the sliding window rewards are updated and ranked in descending order. In steps 32 and 33, diversification and a local search are applied. Finally, in steps 34 and 35, the inertial weight (w) is updated, and the best global solution is obtained.
Algorithm 6. PSOIShOPwD Algorithm.
Inputs: population
Outputs: global best solution
Variables/Parameters:
particles: population
particles.size: population size
N: number of products
M: number of stores
MaxIt: maximum number of iterations
w: inertial weight
wdamp: inertial weight damping ratio
c1: personal learning quotient
c2: global learning quotient
GlobalBest: global best solution
Functions:
random_number[a, b]: generates a random number in the range [a, b]
ObjectiveFunction(particles_i): calculates the objective value of a particle
Max(particles_i.position_j, 1): returns the maximum of position j of a particle and 1
Min(particles_i.position_j, M): returns the minimum of position j of a particle and M
NDT(particles): applies the neighborhood diversification technique to all particles
LS(GlobalBest): applies the local search to the best global solution
FRRMAB(): gets the index of an action to perform
executeAction(actionIndex): executes an action according to an index
SlidingWindow(actionIndex, improvement[i]): sliding window that stores the index of the executed action and the improvement in the cost of the particle
UpgradeRewards(SlidingWindow): updates the sliding window rewards
Discount(ObjectiveFunction(particles_i)): applies the assigned discount to the total cost of the purchase
1: Initialize parameters MaxIt, particles.size, w, wdamp, c1, c2, N, M
2: particles = GeneratePopulation(particles.size)
3: GlobalBest = getBestGlobal(particles)
4: velMax = 0.2 × M; velMin = -velMax
5: for it = 1 to MaxIt do
6:     for i = 1 to particles.size do
7:         actionIndex = FRRMAB()
8:         executeAction(actionIndex)
9:         for j = 1 to N do
10:            particles_i.velocity_j = w × particles_i.velocity_j + c1 × random_number[0, 1] × (particles_i.bestPosition_j − particles_i.position_j) + c2 × random_number[0, 1] × (GlobalBest.position_j − particles_i.position_j)
11:            particles_i.velocity_j = Max(particles_i.velocity_j, velMin)
12:            particles_i.velocity_j = Min(particles_i.velocity_j, velMax)
13:            particles_i.position_j = particles_i.position_j + particles_i.velocity_j
14:            particles_i.position_j = Max(particles_i.position_j, 1)
15:            particles_i.position_j = Min(particles_i.position_j, M)
16:        end for
17:        ObjectiveFunction(particles_i)
18:        Discount(ObjectiveFunction(particles_i))
19:        fitnessimprovement = (particles_i.bestCost − particles_i.cost) / particles_i.bestCost
20:        if particles_i.cost < particles_i.bestCost then
21:            improvement[i] = fitnessimprovement
22:            particles_i.bestPosition = particles_i.position
23:            particles_i.bestCost = particles_i.cost
24:            if particles_i.bestCost < GlobalBest.cost then
25:                GlobalBest = particles_i
26:            end if
27:        end if
28:        Add to SlidingWindow(actionIndex, improvement[i])
29:        UpgradeRewards(SlidingWindow)
30:        Rank rewards
31:    end for
32:    NDT(particles)
33:    LS(GlobalBest)
34:    w = w × wdamp
35: end for
36: return GlobalBest

6. Computational Experiments

The proposed algorithms are validated using the same instances created in [12], which are presented in Table 1. This table displays the subgroup sizes of the instances used in the computational experiments; each instance name indicates the number of products, denoted by n, and the number of stores, denoted by m. Each algorithm was executed independently 30 times per instance, and the median of the 30 runs is reported for the objective value (OV) and the execution time (Time); the final row of each results table averages these medians. Time is measured in milliseconds.
Table 2 shows the assigned values of variables and parameters utilized in the computational experiments for each analyzed algorithm. These assigned initial values were taken from the state-of-the-art works of Huacuja [12] and García-Morales [18], in which different values for the corresponding parameters were analyzed.

7. Results

Table 3, Table 4, and Table 5 show the outcomes of the computational experiments, to which the non-parametric Wilcoxon test is applied with a significance level of 5%. In each table, the first column corresponds to the evaluated instance. The second and fourth columns show the OV and the Time achieved by the reference algorithm, with the standard deviation in parentheses; correspondingly, the third and fifth columns show these values for the comparison algorithm. Columns six and seven give the p-value obtained by the Wilcoxon test for the objective value and the execution time, respectively. In the original tables, gray shading marks the lowest OV or the shortest time in each column. The symbol ↑ indicates a significant difference in favor of the reference algorithm, the symbol ↓ a significant difference in favor of the comparison algorithm, and the symbol = no significant difference.
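For reference, the following minimal sketch (ours, using SciPy; the arrays are synthetic stand-ins, not the paper's data) shows how such tests can be applied at the 5% level:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
# Synthetic per-run objective values (30 runs per algorithm), for illustration.
bb  = rng.normal(75.7, 1.9, 30)
ma  = rng.normal(67.7, 1.7, 30)
pso = rng.normal(68.2, 0.1, 30)

stat, p = wilcoxon(bb, ma)                # paired two-sample comparison
print("Wilcoxon p =", p, "significant" if p < 0.05 else "not significant")

stat, p = friedmanchisquare(bb, ma, pso)  # three-algorithm comparison
print("Friedman p =", p)
```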
Based on the Wilcoxon test results, Table 3 illustrates that the proposed MAIShOPwD algorithm outperforms the state-of-the-art algorithm regarding the objective value in six out of nine instances. Regarding the shortest time metric, the MAIShOPwD algorithm performs better in four of the nine instances evaluated.
The Wilcoxon test results shown in Table 4 indicate that the proposed PSOIShOPwD algorithm outperforms the state-of-the-art algorithm in six out of nine instances when evaluating the objective value. Regarding the shortest time metric, this algorithm outperforms the BB in eight out of nine instances.
The Wilcoxon test results shown in Table 5 indicate that the two proposed algorithms perform similarly with respect to the objective value: the MAIShOPwD algorithm performs better on medium instances, while the PSOIShOPwD algorithm performs better on large instances.
Table 6 contains the results of the Friedman test at a 5% significance level; the first column lists the instances, while the second and third columns show the OV and the Time achieved by the state-of-the-art algorithm. The fourth and fifth columns present the equivalent data for the MAIShOPwD algorithm, and the sixth and seventh columns for the PSOIShOPwD algorithm. The last two columns contain the p-values for each instance evaluated by the Friedman test.
The Friedman test results indicate that, for small instances, the three algorithms perform the same with respect to the objective value. For medium-sized instances, the best-performing algorithm is MAIShOPwD, and for large instances, the best is PSOIShOPwD. The time analysis reveals that the two proposed algorithms outperform the state-of-the-art algorithm.

8. Conclusions and Future Work

This research addresses the IShOPwD, a variant of the IShOP that has become very relevant for buyers in the current electronic commerce scenario, because online stores offer countless benefits to customers acquiring their products. This variant allows us to identify the most economical cost of a shopping list, considering a discount associated with the total cost. In this article, two metaheuristic algorithms not previously applied to this problem are proposed to solve the IShOPwD. The first is the MAIShOPwD algorithm, which incorporates an improved local search containing a method to calculate the change in the objective value of the current solution with a time complexity of O(1), thus avoiding excessive computing time, as well as adaptive adjustment of control parameters that tunes the crossover and mutation probabilities during execution. The second is the PSOIShOPwD algorithm which, unlike the state-of-the-art algorithms, also uses a vector representation of the candidate solutions; it includes neighborhood diversification to avoid stagnation in local optima and incorporates adaptive adjustment of control parameters that directly benefits the personal and global learning quotients, allowing better positioning of the particles in the search space. The proposed algorithms are validated using the non-parametric Wilcoxon and Friedman tests, which assess the results obtained by both proposed algorithms and the state-of-the-art BB. According to the Wilcoxon test, the MAIShOPwD algorithm achieves better solution quality than the BB, although the BB remains more efficient; the PSOIShOPwD algorithm outperforms the BB in both quality and efficiency. The same test indicates that the two proposed algorithms have similar performance. The Friedman test indicates that the two proposed algorithms exhibit superior performance in both quality and efficiency compared to the state-of-the-art algorithm.
Finally, future work on the IShOPwD variant could incorporate all the control parameters used by the algorithms into the adaptive adjustment, add the option of purchasing more than one product of the same type, and consider other types of discounts, such as coupons and lightning offers.

Author Contributions

Conceptualization: H.J.F.-H.; methodology: L.C.; research: J.F.; software: M.A.G.-M. and J.A.B.-H.; formal analysis: C.G.; writing, proofreading, and editing: H.J.F.-H., M.A.G.-M., J.A.B.-H. and A.P.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The source code of the proposed algorithms, as well as the instances used, can be downloaded from: https://github.com/JAlfredoBrambila/ISHOPwD (accessed on 3 December 2024).

Acknowledgments

The authors thank the Laboratorio Nacional de Tecnologías de la Información and acknowledge the support of (a) TecNM project 21336.24-P and (b) the graduate scholarships granted to the students Miguel Ángel García Morales and José Alfredo Brambila Hernández (CVU 658787 and 1011850, respectively).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Martínez Valdez, R.I.; Catache Mendoza, M.d.C.; Pedroza Cantú, G.; Huerta Cerda, Z.M. El Impacto del COVID-19 en la incidencia de compras en Línea de los Millenials. Rev. Ing. Gestión Ind. 2022, 1, 17–27. [Google Scholar] [CrossRef]
  2. Błażewicz, J.; Kovalyov, M.Y.; Musiał, J.; Urbański, A.P.; Wojciechowski, A. Internet shopping optimization problem. Int. J. Appl. Math. Comput. Sci. 2010, 20, 385–390. [Google Scholar]
  3. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  4. Li, G.; Zhang, T.; Tsai, C.Y.; Yao, L.; Lu, Y.; Tang, J. Review of the metaheuristic algorithms in applications: Visual analysis based on bibliometrics (1994–2023). Expert Syst. Appl. 2024, 255, 124857. [Google Scholar] [CrossRef]
  5. Valencia-Rivera, G.H.; Benavides-Robles, M.T.; Morales, A.V.; Amaya, I.; Cruz-Duarte, J.M.; Ortiz-Bayliss, J.C.; Avina-Cervantes, J.G. A systematic review of metaheuristic algorithms in electric power systems optimization. Appl. Soft Comput. 2024, 150, 111047. [Google Scholar] [CrossRef]
  6. Błażewicz, J.; Bouvry, P.; Kovalyov, M.Y.; Musial, J. Erratum to: Internet shopping with price-sensitive discounts. 4OR 2014, 12, 403–406. [Google Scholar] [CrossRef]
  7. Musial, J.; Pecero, J.E.; Lopez, M.C.; Fraire, H.J.; Bouvry, P.; Błażewicz, J. How to efficiently solve Internet Shopping Optimization Problem with price sensitive discounts? In Proceedings of the 2014 11th International Conference on e-Business (ICE-B) IEEE, Vienna, Austria, 28–30 August 2014; pp. 209–215. [Google Scholar]
  8. Błażewicz, J.; Cheriere, N.; Dutot, P.F.; Musial, J.; Trystram, D. Novel dual discounting functions for the Internet shopping optimization problem: New algorithms. J. Sched. 2016, 19, 245–255. [Google Scholar] [CrossRef]
  9. Chung, J.; Choi, B. Complexity and algorithms for optimal bundle search problem with pairwise discount. J. Distrib. Sci. 2017, 15, 35–41. [Google Scholar] [CrossRef]
  10. Mahrudinda, M.; Chaerani, D.; Rusyaman, E. Systematic literature review on adjustable robust counterpart for internet shopping optimization problem. Int. J. Data Netw. Sci. 2022, 6, 581–594. [Google Scholar] [CrossRef]
  11. Morales, M.Á.G.; Huacuja, H.J.F.; Solís, J.F.; Reyes, L.C.; Santillán, C.G.G. A Survey of Models and Solution Methods for the Internet Shopping Optimization Problem. In Fuzzy Logic and Neural Networks for Hybrid Intelligent System Design; Studies in Computational Intelligence; Castillo, O., Melin, P., Eds.; Springer: Cham, Switzerland, 2023; Volume 1061. [Google Scholar] [CrossRef]
  12. Huacuja, H.J.F.; Morales, M.Á.G.; Locés, M.C.L.; Santillán, C.G.G.; Reyes, L.C.; Rodríguez, M.L.M. Optimization of the Internet Shopping Problem with Shipping Costs. In Fuzzy Logic Hybrid Extensions of Neural and Optimization Algorithms: Theory and Applications; Springer: Cham, Switzerland, 2021; pp. 249–255. [Google Scholar]
  13. García-Morales, M.Á.; Fraire-Huacuja, H.J.; Brambila-Hernández, J.A.; Frausto-Solís, J.; Cruz-Reyes, L.; Gómez-Santillán, C.G.; Carpio-Valadez, J.M. Particle Swarm Optimization Algorithm with Improved Opposition-Based Learning (IOBL-PSO) to Solve Continuous Problems. In Hybrid Intelligent Systems Based on Extensions of Fuzzy Logic, Neural Networks and Metaheuristics; Springer: Cham, Switzerland, 2023; pp. 115–126. [Google Scholar]
  14. Li, K.; Fialho, A.; Kwong, S.; Zhang, Q. Adaptive Operator Selection with Bandits for a Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2014, 18, 114–130. [Google Scholar] [CrossRef]
  15. Fialho, A.; da Costa, L.; Schoenauer, M.; Sebag, M. Analyzing bandit-based adaptive operator selection mechanisms. Ann. Math. Artif. Intell. 2010, 60, 25–64. [Google Scholar] [CrossRef]
  16. Auer, P.; Cesa-Bianchi, N.; Fischer, P. Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 2002, 47, 235–256. [Google Scholar] [CrossRef]
  17. García-Morales, M.A.; Brambila-Hernández, J.A.; Fraire-Huacuja, H.J.; Frausto-Solis, J.; Cruz-Reyes, L.; Gómez-Santillan, C.G.; Carpio-Valadez, J.M. Multi-objective Evolutionary Algorithm Based on Decomposition with Adaptive Adjustment of Control Parameters to Solve the Bi-objective Internet Shopping Optimization Problem (MOEA/D-AACPBIShOP). Comput. Sist. 2024, 28, 727–738. [Google Scholar] [CrossRef]
  18. García-Morales, M.; Brambila, A.; Fraire-Huacuja, H.; Cruz-Reyes, L.; Frausto-Solís, J.; Gómez-Santillán, C.; Rangel-Valdez, N. A Novel Particle Swarm Optimization Algorithm to Solve the Internet Shopping Optimization Problem with Shipping Costs. Int. J. Comb. Optim. Probl. Inform. 2024, 15, 101–114. [Google Scholar] [CrossRef]
Figure 1. Candidate solution [12].
Table 1. Configuration of instance size.

| Small Instances | Medium Instances | Large Instances |
| 3n20m | 5n240m | 50n400m |
| 4n20m | 5n400m | 100n240m |
| 5n20m | 50n240m | 100n400m |
Table 2. Variables and parameters of the proposed algorithms.

| Variables/Parameters | MAIShOPwD | PSOIShOPwD |
| population.size | 100 | 100 |
| pc | 0.6 * | 0.6 |
| pm | 0.01 * | 0.01 |
| er | 0.05 | - |
| wdamp | - | 0.99 |
| MaxIt | 100 | 100 |
| w | - | 1 |
| c1, c2 | - | 1.5, 2 * |

* Start values.
Table 3. Comparative performance of BB vs. MAIShOPwD algorithms applying the Wilcoxon test. Standard deviations are shown in parentheses.

| Instances | BB OV | MAIShOPwD OV | BB Time | MAIShOPwD Time | p-Value (OV) | p-Value (Time) |
| 3n20m | 56.56 (0.04) = | 56.56 (0) | 0.084 (6.2×10−5) | 1.2222×10−5 (0.05) | 0.625 | 0.00001 |
| 4n20m | 70.68 (0.21) = | 70.68 (0) | 0.048 (4.3×10−5) | 6.6667×10−6 (0.007) | 0.625 | 0.00001 |
| 5n20m | 89.16 (0.62) = | 89.04 (0.004) | 0.040 (0.0002) | 0.0002 (0.029) | 0.625 | 0.00001 |
| 5n240m | 75.70 (1.94) | 67.68 (1.67) | 0.067 (1.76) | 0.274 (0.1) | 0.00001 | 0.00001 |
| 5n400m | 70.56 (2.44) | 62.66 (1.68) | 0.243 (0.288) | 0.790 (0.2) | 0.00001 | 0.00001 |
| 50n240m | 503.49 (12.83) | 434.29 (17.60) | 22.50 (7.50) | 16.444 (8.03) | 0.00001 | 0.00001 |
| 50n400m | 433.61 (9.78) | 389.14 (24.08) | 33.11 (40.10) | 48.661 (25.30) | 0.00148 | 0.00001 |
| 100n240m | 992.72 (15.32) | 757.29 (38.25) | 50.49 (39.76) | 73.857 (25.64) | 0.00001 | 0.00001 |
| 100n400m | 796.87 (14.15) | 655.72 (35.04) | 176.11 (92.54) = | 181.95 (295.30) | 0.00001 | 0.3843 |
| Total average | 343.26 | 287.007 | 31.41 | 35.775 | | |
Table 4. Comparative performance of BB vs. PSOIShOPwD algorithms applying the Wilcoxon test. Standard deviations are shown in parentheses.

| Instances | BB OV | PSOIShOPwD OV | BB Time | PSOIShOPwD Time | p-Value (OV) | p-Value (Time) |
| 3n20m | 56.56 (0.04) = | 56.64 (0) | 0.084 (6.2×10−5) | 0.0104 (6.1×10−5) | 0.625 | 0.00001 |
| 4n20m | 70.68 (0.21) = | 70.76 (0) | 0.048 (4.3×10−5) | 0.0103 (0) | 0.625 | 0.00001 |
| 5n20m | 89.16 (0.62) = | 89.21 (3.3726×10−14) | 0.040 (0.0002) | 0.008 (6.6944×10−5) | 0.625 | 0.00001 |
| 5n240m | 75.70 (1.94) | 68.21 (2.674×10−14) | 0.067 (1.76) | 0.0104 (5.85×10−18) | 0.00001 | 0.00001 |
| 5n400m | 70.56 (2.44) | 64.19 (2.529×10−14) | 0.243 (0.288) | 0.0110 (5.403×10−18) | 0.00001 | 0.00001 |
| 50n240m | 503.49 (12.83) | 463.52 (12.83) | 22.50 (7.50) = | 24.554 (110.18) | 0.00001 | 0.24604 |
| 50n400m | 433.61 (9.78) | 355.97 (1.3876×10−13) | 33.11 (40.10) | 0.0894 (2.21×10−7) | 0.00001 | 0.00001 |
| 100n240m | 992.72 (15.32) | 712.79 (2.3897×10−13) | 50.49 (39.76) | 0.092 (3.1759×10−17) | 0.00001 | 0.00001 |
| 100n400m | 796.87 (14.15) | 605.63 (2.0428×10−13) | 176.11 (92.54) | 0.12396 (4.1404×10−17) | 0.00001 | 0.00001 |
| Total average | 343.26 | 276.32 | 31.41 | 2.768 | | |
Table 5. Comparative performance of MAIShOPwD vs. PSOIShOPwD algorithms applying the Wilcoxon test. Standard deviations are shown in parentheses.

| Instances | MAIShOPwD OV | PSOIShOPwD OV | MAIShOPwD Time | PSOIShOPwD Time | p-Value (OV) | p-Value (Time) |
| 3n20m | 56.56 (0) = | 56.64 (0) | 1.2222×10−5 (0.05) | 0.0104 (6.1×10−5) | 0.625 | 0.00001 |
| 4n20m | 70.68 (0) = | 70.76 (0) | 6.6667×10−6 (0.007) | 0.0103 (0) | 0.625 | 0.00001 |
| 5n20m | 89.04 (0.004) = | 89.21 (3.3726×10−14) | 0.0002 (0.029) | 0.008 (6.6944×10−5) | 0.625 | 0.00001 |
| 5n240m | 67.68 (1.67) | 68.21 (2.674×10−14) | 0.274 (0.1) | 0.0104 (5.85×10−18) | 0.00001 | 0.00001 |
| 5n400m | 62.66 (1.68) | 64.19 (2.529×10−14) | 0.790 (0.2) | 0.0110 (5.403×10−18) | 0.00001 | 0.00001 |
| 50n240m | 434.29 (17.60) | 463.52 (12.83) | 16.444 (8.03) | 24.554 (110.18) | 0.00001 | 0.00016 |
| 50n400m | 389.14 (24.08) | 355.97 (1.3876×10−13) | 48.661 (25.30) | 0.0894 (2.21×10−7) | 0.00001 | 0.00001 |
| 100n240m | 757.29 (38.25) | 712.79 (2.3897×10−13) | 73.857 (25.64) | 0.092 (3.1759×10−17) | 0.00001 | 0.00001 |
| 100n400m | 655.72 (35.04) | 605.63 (2.0428×10−13) | 181.95 (295.30) | 0.12396 (4.1404×10−17) | 0.00001 | 0.00001 |
| Total average | 287.007 | 276.32 | 35.775 | 2.7678 | | |
Table 6. Comparative performance of the BB vs. MAIShOPwD vs. PSOIShOPwD algorithms applying the Friedman test. Standard deviations are shown in parentheses.

| Instances | BB OV | BB Time | MAIShOPwD OV | MAIShOPwD Time | PSOIShOPwD OV | PSOIShOPwD Time | p-Value (OV) | p-Value (Time) |
| 3n20m | 56.56 (0.04) = | 0.084 (6.2×10−5) | 56.56 (0) | 1.2222×10−5 (0.05) | 56.64 (0) | 0.0104 (6.1×10−5) | 0.97531 | 0.00001 |
| 4n20m | 70.68 (0.21) = | 0.048 (4.3×10−5) | 70.68 (0) | 6.6667×10−6 (0.007) | 70.76 (0) | 0.0103 (0) | 0.97531 | 0.00001 |
| 5n20m | 89.16 (0.62) = | 0.040 (0.0002) | 89.04 (0.004) | 0.0002 (0.029) | 89.21 (3.3726×10−14) | 0.008 (6.6944×10−5) | 0.97531 | 0.00001 |
| 5n240m | 75.70 (1.94) | 0.067 (1.76) | 67.68 (1.67) | 0.274 (0.1) | 68.21 (2.674×10−14) | 0.0104 (5.85×10−18) | 0.00001 | 0.00001 |
| 5n400m | 70.56 (2.44) | 0.243 (0.288) | 62.66 (1.68) | 0.790 (0.2) | 64.19 (2.529×10−14) | 0.0110 (5.403×10−18) | 0.00001 | 0.00001 |
| 50n240m | 503.49 (12.83) | 22.50 (7.50) | 434.29 (17.60) | 16.444 (8.03) | 463.52 (12.83) | 24.554 (110.18) | 0.00001 | 0.00001 |
| 50n400m | 433.61 (9.78) | 33.11 (40.10) | 389.14 (24.08) | 48.661 (25.30) | 355.97 (1.3876×10−13) | 0.0894 (2.21×10−7) | 0.00001 | 0.00001 |
| 100n240m | 992.72 (15.32) | 50.49 (39.76) | 757.29 (38.25) | 73.857 (25.64) | 712.79 (2.3897×10−13) | 0.092 (3.1759×10−17) | 0.00001 | 0.00001 |
| 100n400m | 796.87 (14.15) | 176.11 (92.54) | 655.72 (35.04) | 181.95 (295.30) | 605.63 (2.0428×10−13) | 0.12396 (4.1404×10−17) | 0.00001 | 0.00001 |
| Total average | 343.26 | 31.41 | 287.007 | 35.775 | 276.32 | 2.768 | | |
