Article

Enhancing Manual Order Picking through a New Metaheuristic, Based on Particle Swarm Optimization

by Massimo Bertolini 1,*, Davide Mezzogori 1 and Francesco Zammori 2
1 “Enzo Ferrari” Engineering Department, University of Modena and Reggio Emilia, Via Università 4, 41121 Modena, Italy
2 Department of Engineering and Architecture, University of Parma, Parco Area delle Scienze 181/A, 43124 Parma, Italy
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3077; https://doi.org/10.3390/math11143077
Submission received: 24 March 2023 / Revised: 29 June 2023 / Accepted: 1 July 2023 / Published: 12 July 2023

Abstract:
This paper proposes a new metaheuristic algorithm called Particle Swarm-based picking time minimization (Pkt_PSO), conceived for picking time minimization in manual warehouses. As the name suggests, Pkt_PSO is inspired by Particle Swarm Optimization (PSO), and it is specifically designed to minimize the picking time in order case picking contexts. To assess the quality and the robustness of Pkt_PSO, it is compared to five alternative algorithms used as benchmarks. The comparisons are made in nine different scenarios obtained by changing the layout of the warehouse and the length of the picking list. The results of the analysis show that Pkt_PSO has a slower convergence rate and suffers less from early stagnation in local minima; this ensures a more extensive and accurate exploration of the solution space. In fact, the solutions provided by Pkt_PSO are always better than (or at least comparable to) the ones found by the benchmarks, both in terms of quality (closeness to the overall best) and reliability (frequency with which the best solution is found). Clearly, as more solutions are explored, the computational time of Pkt_PSO is longer, but it remains compatible with the operational needs of most practical applications.

1. Introduction

Warehousing is of paramount importance in supply chains and logistics, as a well-designed warehousing policy leads to benefits within the company and across the entire supply chain [1,2]. Most of the time, high operating performance is achieved using highly efficient Automated Storage and Retrieval Systems (AS/RS) [3,4]. However, in the case of small-item storage and/or spare-parts handling, manual warehouses are still more efficient and represent the standard solution [5,6]. In this case, order picking (the problem of collecting a set of items in a warehouse) is an essential issue of order fulfilment, with an immediate impact on customer satisfaction, business reputation and profitability. Picking is generally a major bottleneck in logistics and one of the most expensive and labour-intensive logistic activities, which is why improving this activity is considered a top priority for many companies [7].
When the main operations concern pallet decomposition, short-period storage, pallet recomposition, and shrink wrapping of new pallets, a picker-to-goods strategy is generally adopted [8,9]. Accordingly, the picker moves in the warehouse to collect the required number of cases of each Stock Keeping Unit (SKU) included in the picking list. As the travelling time represents the largest part of the time required to complete an order, many picking strategies have been proposed to reduce the distance covered by the picker. Among them, batch picking, zone picking, wave picking, voice picking and vision picking are certainly the most studied [10,11]. However, all the above-mentioned strategies need specific operating conditions and/or costly equipment. Batch and wave picking assume that items can be sorted after retrieval [12], zone picking has proven to be efficient only if a very high number of SKUs are stored in the warehouse, while vision and voice picking require a computerized control system and expensive augmented reality glasses or Bluetooth headphones [13].
For these reasons, the basic picker-to-goods strategy is still the best choice in many industrial contexts [14], and it is the most widespread policy in the context of small- and spare-parts handling [15]. According to this policy, the picker is in charge of completing an entire order and, to do so, he/she can move freely in each area of the warehouse to collect the cases (of the SKUs indicated in the picking list) needed to form a pallet. Its implementation is simple and cheap, as it does not require any expensive equipment or additional operations, such as batch composition or sorting. Operationally, the only decision concerns the routing that the picker must follow, that is, the order in which storage locations must be visited to complete the picking list. This sequence, in fact, determines the distance covered by the picker and the total travelling time. Optimizing the routing assigned to the picker is, thus, of paramount importance to improve productivity and to cut costs. Not surprisingly, the literature in this field is rather extensive, and many interesting and efficient metaheuristics have already been introduced to minimize the total travelling time.
To the authors’ knowledge, the last and most efficient framework is the one proposed by De Santis et al. [9], which is also characterized by extraordinary speed of convergence. Although this feature is desirable, it also suggests that most of the provided solutions could be points of local minimum. Therefore, we believe that there is still room for improvement, and to test this hypothesis we propose a novel metaheuristic called Pkt_PSO. As the name suggests (where Pkt stands for picking time), Pkt_PSO is an evolutionary metaheuristic inspired by the well-known Particle Swarm Optimization (PSO). However, to exploit the peculiar structure of the considered problem, many features of the standard PSO have been readapted and modified. Making major modifications to the PSO is indeed a common practice, and it is generally recommended to exploit the potentialities of this approach (see, for example, [16,17]). This readaptation is, therefore, the main contribution of the paper. The paper is also novel because, to the best of our knowledge, PSO has never been applied to the order-picking domain, although some recent works have shown its effectiveness and flexibility in solving very difficult-to-tackle tasks, such as Bayesian network training [18,19] and different instances of the Traveling Salesman Problem (TSP) [20,21]. The latter is a classic operational research problem very similar to the one considered here. Assessing the operational benefits that can be achieved using a PSO-based framework is the second main contribution of the paper.
The rest of the paper is organized as follows. Section 2 presents a brief introduction to the state-of-the-art approaches for travelling time minimization in manual warehouses. In the same section, literature gaps are highlighted and the research questions addressed in this paper are clearly defined. For the sake of clarity, Section 3 pinpoints the main features of the standard PSO, and Section 4 formalizes the problem and its possible formulation as a TSP. The proposed algorithm (Pkt_PSO) is introduced in Section 5, where all the new procedures and operators developed to exploit the peculiar structure of the considered problem are clearly explained. In view of an experimental section used to validate Pkt_PSO, its operating parameters are fine-tuned in Section 6. Next, in Section 7, its performance is tested in nine different scenarios (obtained by changing the layout of the warehouse and the length of the picking list) and compared to that of five benchmarks. One of the benchmarks [9] is a top-of-the-class metaheuristic specifically designed for the picking problem in manual warehouses, two [22,23] are outstanding PSO-based approaches for the TSP, while the last two are the well-known Full Depth 2-Opt and the nearest neighbour constructive procedure. Finally, conclusions and directions for future works are drawn in Section 8.
For the sake of clarity, all the abbreviations used in the paper are listed at the end of the paper in the “Abbreviation Section”.

2. Picker-to-Goods Strategy, a Brief Introduction and Literature Review

2.1. The Basic Picker-to-Goods Strategy

The functioning of the basic picker-to-goods strategy is very simple. Each picker receives one picking list at a time and he or she must collect all the SKUs specified therein. Specifically, the picking list includes all the SKUs needed to form an entire pallet (as required by the end customer) and indicates, for each one of them, the number of cases to be picked and their exact locations in the warehouse. The picker, generally on board an electric order picker, is free to move in all the aisles of the warehouse and collects cases stored at the ground level. Top racks, in fact, are used to store full pallets that will be lowered to the ground (with forklifts) whenever necessary.
The implementation of this strategy is straightforward: given a set of storage locations, the only decision concerns the order in which they must be visited. This sequence, in fact, determines the routing followed by the picker, which is the only element that can be optimized to increase productivity. While the travelling time depends strongly on the routing, all the other times (e.g., the pick-up time of a case, the loading and unloading of the order picker, etc.) are constant and, therefore, irrelevant. The task is, therefore, an NP-hard combinatorial problem belonging to the family of the Travelling Salesman Problem (TSP) [24,25]. Even in the case of order picking with additional constraints, such as one-way aisles [26], limited handling capacity of the pickers [27], partial overlapping of picking and refilling operations, narrow aisles that can be crossed by a single picker at most [28], or allocation of the same items to multiple locations [29], the task can be reduced to a routing optimization problem with the objective of makespan minimization.

2.2. Solution Approaches Proposed in the Literature to Improve Productivity

Solutions proposed in the literature are manifold and highly varied but, as for many other operational research problems, the solution approaches can be classified into two broad categories, namely exact and heuristic methods. Due to the NP-hard nature of the problem, exact methods exist only for specific or simple warehouse configurations. To give an idea, the exact model proposed by Ratliff et al. [30], probably the first one published on the subject, is tailored to a one-block warehouse with 50 aisles and a picking list of 10 locations. Although the problem had such a small size, some minutes of CPU time were needed to find the global optimal solution on a mainframe computer of that time. Another good example of exact models can be found in [31], where two models (based on mixed-integer programming and dynamic programming, respectively) are used to solve the order-picking problem for a warehouse with two or three cross-aisles.
Anyhow, even using modern processors, for most real applications the computational time needed to execute an exhaustive approach is too high to be of any practical help. Hence, over the years, exact methods have gradually been abandoned, replaced by heuristics and metaheuristics capable of providing very good and feasible solutions (or pseudo-optimal solutions) in a reasonably acceptable computational time. A clear discussion of the superiority of heuristic methods over exact ones can be found in [32], where it is shown that heuristics are more flexible and versatile and can easily be readapted to different layouts and/or to incorporate additional objectives and constraints. By virtue of this, to give an idea of the different trends in heuristics and metaheuristics for the picking problem in manual warehouses, a selected set of interesting algorithms is presented next. For a more comprehensive review, interested readers are referred to the work by Masae et al. [33].
Some authors studied the effect of different layouts on routing optimization, as in [34], where three heuristics for optimal routing in warehouses with more than two cross aisles are compared, and in [35], where a fishbone layout is considered. In addition to different layouts, in [36], economic and ergonomic issues are also considered as further objectives. More recently, heuristics have been integrated with discrete event simulations, as in [37], which introduces an algorithm for dynamic order-picking problems, in which the picker might receive instructions for new picking requests while others are still in progress. A concurrent simulation-based design of experiments is used in [38] to optimize picking activities and to assess the effect of a combination of operating parameters (layout, throughput, size, order pickers, etc.) on overall performance.
Most of these studies assume a random storage assignment policy, but some authors have also considered dedicated assignment policies. Among them, Caron et al. [39] evaluated two simple routing heuristics in a manual single cross-aisle warehouse, with an allocation strategy based on the Cube per Order Index. Petersen [40] carried out a detailed analysis of variance to examine possible interdependencies among routing policies, warehouse layout and depot locations under different operating conditions. The same author also addressed the issue of optimizing the routing strategy in a class-based storage system [41]. In [42], the correlation between the number of aisles, picking list size, and path length in a class-based storage environment is studied, and in [43], the optimal size of ABC zone is determined for different layouts.
In conclusion, the present work fits into this line of research, as it explores the application of metaheuristics to solve the order-picking problem. Readers interested in a complete review of the literature, with gap analysis and future research directions, can refer to the more than 70 papers indexed in Scopus; the two most recent [44,45], published in 2023, are recommended.

2.3. Gap Analysis and Research Questions

As discussed, the use of metaheuristics extends the reach of non-exact solution approaches to settings that are operationally more attractive. Indeed, when the application is pushed to more realistic contexts, the TSP that models an order-picking problem becomes intractable in exact terms. As mentioned above, the existing literature on the subject is rather extensive and many interesting and efficient metaheuristics have already been introduced. To the authors’ knowledge, the latest and most efficient framework is the one proposed by De Santis et al. [9], which exploits:
  • the famous Floyd–Warshall algorithm to solve the all-pair shortest path problem [46,47],
  • the Ant Colony Optimization (ACO) to provide an effective solution in a very short time.
The obtained solutions have proved to be superior, or at least equal, to those provided by previous algorithms. However, its extraordinary speed of convergence suggests that, most of the time, the obtained solution could be a point of local minimum. Therefore, we believe that there is still room for improvement and, to test this hypothesis, we investigate the use of a new metaheuristic inspired by the classic PSO. The rationale for this choice is that, rather recently, some works have demonstrated its good applicability to the TSP (see, among others, [20,21]). Hence, our main goal is to investigate possible advantages and/or disadvantages of the PSO when applied to this specific instance of the TSP and, most importantly, to modify its underlying structure to exploit the specific structure of the problem at hand.
We make clear from the outset that large performance improvements are unlikely to be achieved, mainly because of the very constrained structure of the TSP underlying a warehouse organized in blocks and divided by aisles, such as the one considered in this study. Because of this binding layout structure, and considering that many SKUs may be found in the same aisle, it is reasonable to expect the existence of many solutions very close in terms of makespan (i.e., points of local minimum). Testing this hypothesis, and thus understanding how worthwhile it is to further explore optimization techniques, is the second objective of this work.

3. Some Background on Particle Swarm Optimization

PSO is a famous metaheuristic optimization technique inspired by flocks of birds. It dates back to 1995, when Eberhart and Kennedy proposed PSO as an optimization technique for continuous functions [48]. Since then, as stated in the notable works by Banks et al. [49,50] and in the comprehensive literature review by Jain et al. [51], PSO has been widely applied in many fields of operational research, both for continuous and discrete problems, including closed-loop supply chain network optimisation [52]. PSO is an evolutionary, population-based metaheuristic that exploits a population of N candidate solutions, which are evolved at each step of the algorithm. For the sake of clarity, the basic notation used to describe PSO is shown in Table 1.
The whole population is referred to as a swarm, while each solution ($i = 1, \ldots, N$) of the swarm is referred to as a particle. At each step (or iteration) $t$, each particle moves in the solution space to generate a new and potentially improved solution. Specifically, its movement is formally defined by Equation (1):
$$p_i^{t+1} = p_i^t + v_i^t \tag{1}$$
where $v_i^t$ and $p_i^t$ are, respectively, the velocity and the position of particle $i$ at iteration $t$. Both $v_i^t$ and $p_i^t$ are vectors with a number of components equal to the dimensions of the solution space: the position $p_i^t$ corresponds to the current solution coded by particle $i$, and the velocity $v_i^t$ defines how far it will move at the next step.
At each iteration, the speed of each particle is updated as in Equation (2).
$$v_i^{t+1} = v_i^t + C_1 u_{1,i}^t \left( pbest_i^t - p_i^t \right) + C_2 u_{2,i}^t \left( gbest^t - p_i^t \right) \tag{2}$$
where:
  • $gbest^t$ is the best solution found up to iteration $t$ by the whole swarm.
  • $pbest_i^t$ is the best solution found up to iteration $t$ by particle $i$.
  • $u_{1,i}^t$ and $u_{2,i}^t$ are two random numbers uniformly distributed in the range [0, 1], generated for each particle $i$ at each iteration $t$.
  • $C_1$ and $C_2$ are fixed parameters.
According to Equation (2), each particle tries to keep moving along the direction coded by $v_i^t$ which, for this reason, is also referred to as the particle’s intention. However, the trajectory of the particle is subject to small deviations related to its experience and to the knowledge (or culture) of the whole swarm. Experience and culture are measured as the distance of the current position $p_i^t$ from the positions of $pbest_i^t$ and $gbest^t$, respectively. The magnitude of both deviations is random but limited by the two constants $C_1$ and $C_2$. In this way, the algorithm balances the exploration of new solutions and the exploitation of the best solutions found so far. When $C_1 = C_2 = 0$, all particles fly keeping their current speed and direction until they finally hit the boundaries of the search space. If $C_1 > 0$ and $C_2 = 0$, particles behave as if they were independent individuals, whereas if $C_1 = 0$ and $C_2 > 0$, they are all attracted by the same point. A common choice is to set $C_1 = C_2 = 2$. In this way, all particles are attracted towards the midpoint between $gbest^t$ and $pbest_i^t$, and the strength of this attraction is not negligible (as it would be with $C_1 = C_2 \ll 1$).
To summarize, the standard PSO works as follows:
  • At every iteration $t$, each particle $i$ moves from $p_i^t$ to $p_i^{t+1}$ according to Equation (1).
  • If the new solution $p_i^{t+1}$ is better than $pbest_i^t$, the current best solution found by $i$ is updated accordingly.
  • When the whole swarm has moved, $gbest^t$ is updated and the velocity of each particle $i$ is updated using Equation (2).
  • The process is repeated until a stopping criterion is met.
For the sake of clarity, the pseudocode is shown in Figure 1, where it is assumed that the PSO ends when the number of iterations without improvement exceeds a predefined threshold value. The pseudocode is written in a Python 3.9 style and so, indentation is not only for readability but also defines the different blocks of code.
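The loop just described can be sketched in Python as well. The following is a minimal, illustrative implementation of the standard continuous PSO (all names and parameter values are ours, and a velocity clamp is added as a common safeguard against velocity explosion, which the basic update rule does not prevent):

```python
import random

def pso(f, dim, n_particles=20, c1=2.0, c2=2.0, vmax=1.0,
        max_iter=500, max_stall=50, lo=-5.0, hi=5.0, seed=0):
    """Minimal standard PSO minimizing a continuous objective f: R^dim -> R."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    stall = 0
    for _ in range(max_iter):
        improved = False
        for i in range(n_particles):
            for k in range(dim):                      # Equation (1): move the particle
                pos[i][k] += vel[i][k]
            if f(pos[i]) < f(pbest[i]):               # update the personal best
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest, improved = pbest[i][:], True
        for i in range(n_particles):                  # Equation (2): update each velocity
            u1, u2 = rng.random(), rng.random()
            for k in range(dim):
                v = (vel[i][k] + c1 * u1 * (pbest[i][k] - pos[i][k])
                               + c2 * u2 * (gbest[k] - pos[i][k]))
                vel[i][k] = max(-vmax, min(vmax, v))  # clamp to curb divergence
        stall = 0 if improved else stall + 1
        if stall >= max_stall:                        # stop after max_stall iterations
            break                                     # without improvement
    return gbest
```

For example, `pso(lambda p: sum(v * v for v in p), 2)` searches for the minimum of a two-dimensional sphere function.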
At present, the original PSO has been modified and extended in many ways. A significant contribution is that of [53], where a new parameter called inertia is introduced, and the updating procedure is modified as in Equation (3).
$$v_i^{t+1} = \omega_i^t \, v_i^t + C_1 r \left( pbest_i^t - p_i^t \right) + C_2 r \left( gbest^t - p_i^t \right) \tag{3}$$
where the inertia $\omega_i^t$ is a random number uniformly distributed on the interval [0, 1], generated for each particle $i$ at each iteration $t$. The purpose of $\omega$ is to mitigate the effect of the particles’ intention and, in many applications, its value is progressively reduced as the number of iterations increases [53]. The resulting effect is to force the exploration of a wider area of the solution space during the first iterations of the algorithm and to exploit the local optima (a sort of neighbourhood search) when the procedure is close to the end.
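A commonly used schedule of this kind decreases the inertia weight linearly over the planned iterations; a tiny sketch (the start and end values are illustrative, not taken from the paper):

```python
def inertia(t, T, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: wide exploration in early
    iterations, local exploitation near the end of the run."""
    return w_start - (w_start - w_end) * t / T
```

Here `inertia(t, T)` would replace the random $\omega_i^t$ in Equation (3) when a deterministic, time-decreasing schedule is preferred.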
As an alternative, [54] proposed updating the velocity, as in Equation (4).
$$v_i^{t+1} = \chi \left[ v_i^t + C_1 r \left( pbest_i^t - p_i^t \right) + C_2 r \left( gbest^t - p_i^t \right) \right] \tag{4}$$
where $\chi$ is a constant value (in the range [0, 1]) called the constriction factor, which should ensure better reliability and greater stability of the results. As for the inertia, to assure convergence some authors have also proposed a time-dependent constriction factor, as in [55], where a lower value of the constriction factor is used as the particle gets closer to the global minimum, which helps stabilize the algorithm while preserving fast convergence. We anticipate that neither the inertia nor the constriction factor will be used in the metaheuristic proposed here.
All the above-mentioned approaches work well when the objective function is continuous and it is possible to define a velocity vector $v_i$ on the solution domain. When, as in most industrial settings, problems are binary and/or discrete, these continuous formulations lose much of their practical relevance. For this reason, many scholars have proposed discretized versions of the PSO. For instance, [56] proposed using a sigmoid function to convert continuous variables into binary ones, whereas [57] suggested working with continuous variables and rounding off the obtained solution to the nearest integer.
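The sigmoid-based conversion can be sketched in a few lines (a generic illustration in the spirit of [56], not the exact operator of that paper; names are ours):

```python
import math
import random

def binarize(velocity, rng=random.Random(0)):
    """Binary-PSO step: each continuous velocity component is mapped
    through a sigmoid to the probability of drawing a 1-bit."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-v)) else 0
            for v in velocity]
```

Large positive velocities make the corresponding bit almost surely 1, and large negative velocities make it almost surely 0.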
Concerning the discrete domain, PSO has recently been proposed as an effective way to tackle NP-hard routing problems such as the TSP. Among the first contributions, we can cite the relevant works by [58,59]. However, the real burst of interest originated from [20,21], who showed the superiority of PSO in tackling these NP-hard problems. The point to note is that, when operating over a discrete domain, PSO must undergo major changes to outperform other metaheuristics, changes that make it quite different from the standard continuous version. Hence, the main question is how the concept of speed can be redefined so that the distance between two discrete solutions can be measured. In this regard, two remarkable contributions were proposed by [22,60]. Specifically, letting $S_1$ and $S_2$ be two sequences representing the order in which $n$ nodes must be visited, both authors expressed speed as a function of the number of swaps needed to convert $S_1$ into $S_2$. Although the results are rather good, the performances do not match those obtainable with PSO implementations in which the concept of speed is completely overturned, as in the works by [23,61]. For a more in-depth description of the latter approach, please see Appendix A, which describes the algorithms by [22,23] and other approaches used as benchmarks.
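The swap-based notion of speed can be illustrated as follows (a generic sketch, not the exact operator of [22,60]): the swap sequence that turns one tour into another acts as a "velocity", and its length acts as the distance between the two permutations.

```python
def swap_sequence(s1, s2):
    """Index swaps that transform tour s1 into tour s2; the length of this
    list is one possible 'distance' between the two permutations."""
    s, swaps = list(s1), []
    for i, target in enumerate(s2):
        j = s.index(target)
        if j != i:
            s[i], s[j] = s[j], s[i]   # put the right location at position i
            swaps.append((i, j))
    return swaps
```

Applying the returned swaps to `s1`, in order, reproduces `s2` exactly.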

4. Problem Definition and Mathematical Formulation

4.1. Problem Assumptions

In general terms, the order-picking problem can be defined as “the problem of collecting a set of SKUs in a manual warehouse in a minimum amount of time” [31]. In the present paper, we contextualize the problem using the following assumptions, which hold in most industrial settings.
  • The set of SKUs to be collected is indicated in a picking list, which also reports the quantity, expressed in the number of cases, to be picked up.
  • Each SKU is stored at the ground level and in a dedicated location in the warehouse. Hence, the picking list univocally defines the locations that must be visited to fulfil an order.
  • The aisles of the warehouses are bidirectional and are large enough to avoid any traffic problems.
  • There is a single Input Output (I/O) point, from which each tour starts and ends.
  • The picker receives one picking list at a time and must collect all the cases indicated in it in a single tour (i.e., the handling capacity of the picker is assumed to be limitless).
  • When the picker moves from one location to the next one, he or she proceeds at a constant speed, moving approximately at the centre of an aisle.
  • When the picker reaches a picking location, he or she moves from the centre of the aisle to the proximity of the shelves where SKUs are stored. The time needed for this traversal movement is constant.
  • The time needed to pick up a case is constant, and it does not depend on the shape, dimension, or weight of the case.

4.2. Problem Formulation and Graph Representation

It is easy to see that the total travelling time is the only relevant quantity of the problem; all the other times are constant (see assumptions 6 and 7) and can be excluded from the optimization procedure. Also, since the travelling time depends on the sequence with which the picker collects the SKUs, the goal is to find the optimal sequence (or optimal picking tour). It is, therefore, a permutational problem that can be reduced to a standard TSP.
To this aim:
  • The layout must be schematized as a bidirectional graph, with nodes representing the I/O point, the storage locations, and the access points to the aisles. Considering that each aisle serves two counterposed shelves, the total number of nodes is $N = 0.5L + 2A + 1$, where $L$ is the number of storage locations and $A$ is the number of aisles. Furthermore, due to the constrained layout of the warehouse (based on horizontal and vertical aisles), each node is linked to two or at most three other nodes. The only exception is the I/O point, which is linked to only one node. For the sake of clarity, an example of graph representation is shown in Figure 2, which refers to a warehouse with 24 locations (from A to X) and three horizontal aisles. The total number of nodes is therefore $(24/2 + 2 \cdot 3 + 1) = 19$. Also, note that node zero represents the I/O point and that nodes inside the aisles serve two locations each (for instance, node two serves both location S and location T).
  • The shortest path connecting each pair of nodes (including the I/O point) must be computed and saved in a triangular distance matrix $D$. This can easily be done in polynomial time using the well-known Floyd–Warshall algorithm [47,48].
  • Given a picking list with $M$ SKUs, a bidirectional and fully connected network with $M + 1$ nodes and $M \cdot (M + 1)$ edges must be created. The nodes correspond to the I/O point and to the $M$ locations that must be visited, while the edges codify the distances among them. These distances are read from the distance matrix $D$, i.e., the length of the edge connecting node $i$ to node $j$ equals $D_{i,j} = d_{ij}$. For instance, if the picking list were {S, R, V}, the network of Figure 3 would be built, where it is supposed that the distance between two adjacent nodes equals 2 and that between node zero and node one equals 1.
  • The optimal sequence is finally found by solving a TSP (as in Equation (5)) formulated on the network generated by the picking list.
  • OF:
$$\min z = \sum_{i=1}^{M+1} \sum_{j=1}^{M+1} x_{ij} \, d_{ij} \tag{5a}$$
  • ST:
$$\sum_{i=1}^{M+1} x_{ij} = 1 \qquad \forall j \in \{1, 2, \ldots, M+1\} \tag{5b}$$
$$\sum_{j=1}^{M+1} x_{ij} = 1 \qquad \forall i \in \{1, 2, \ldots, M+1\} \tag{5c}$$
$$\sum_{i, j \in S} x_{ij} \le |S| - 1 \qquad \forall S \tag{5d}$$
$$x_{ij} \in \{0, 1\} \qquad \forall i \in \{1, 2, \ldots, M+1\}, \; \forall j \in \{1, 2, \ldots, M+1\} \tag{5e}$$
where
  • $x_{ij}$ is a binary decision variable that equals one if the picker moves from location $i$ to location $j$, and zero otherwise.
  • $d_{ij}$ is the distance between location $i$ and location $j$.
  • $M$ is the number of SKUs in the picking list.
  • $M + 1$ is the number of nodes in the network ($M$ locations for $M$ SKUs plus one I/O point).
  • $S$ is a generic set of nodes, and $|S|$ denotes its cardinality. Since $|S|$ ranges from 2 to $M$, there are $2^M$ possible sets $S$.
The objective (Equation (5a)) consists of finding the minimum travelling distance, while the constraints make sure that each location of the picking list is visited exactly once (constraints (5b,c)) and that subtours are avoided (constraint (5d)). For additional insight into this integer linear programming model, the interested reader is referred to the extensive literature on the TSP (see, for example, [24]).
Continuing with the example, solving the optimization problem, the optimal tour {I/O → R → V → S → I/O} is obtained, for a total distance of 18. We also note that the actual path followed by the picker can be reconstructed, starting from the optimal tour, by using the Floyd–Warshall algorithm in reverse. In this way, the following path is finally obtained: {0, 1, 12, 11, 10, 11, 12, 1, 2, 1, 0}.
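The distance-matrix step and the node-by-node path reconstruction described above can be sketched as follows (an illustrative implementation over an adjacency matrix; function names are ours):

```python
INF = float("inf")

def floyd_warshall(adj):
    """All-pairs shortest paths; adj[i][j] is the edge length (INF if absent)."""
    n = len(adj)
    dist = [row[:] for row in adj]
    nxt = [[j if adj[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]      # first hop on the improved route
    return dist, nxt

def reconstruct(nxt, i, j):
    """Rebuild the node-by-node path from i to j (the 'reverse' step)."""
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path
```

`dist` plays the role of the matrix $D$, while `reconstruct` recovers the physical route between two locations of the optimal tour.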

5. Pkt_PSO, the Proposed Metaheuristic

Since the TSP is a well-known NP-hard problem, a novel metaheuristic is proposed here to solve the order-picking problem. The metaheuristic takes inspiration from the PSO and so, from here on, it will be referred to as Pkt_PSO.

5.1. Notation

As discussed in Section 4, the problem input is a picking list $P$. We denote the picking list as $P = \{A, B, C, D, E, F\}$, where the uppercase letters are the $M$ locations to be visited. The locations are not sorted, although they may be listed in lexicographical order for convenience.
Similarly, a feasible solution is a sorted array of $M + 2$ elements, which defines a picking tour. Since a tour always starts and ends at the I/O point, the first and last elements can be removed, and the tour can be fully represented with an array of only $M$ elements. We refer to this array as the picking tour and denote it as $p = (h_1, h_2, \ldots, h_M)$, where each element $p_j = h_j$ identifies the $j$-th location that must be visited by the picker. Consider, for instance, a picking list with six codes $P = \{A, B, C, D, E, F\}$ and a picking tour $p = (B, A, C, F, D, E)$. In this case, the first stop is at location $B$, the second one is at location $A$, and so on until location $E$ is reached and the picker completes the tour (i.e., $p_1 = h_1 = B$, $p_2 = h_2 = A$, $\ldots$, $p_6 = h_6 = E$).
The objective function evaluated for a specific tour is therefore computed as in Equation (6):
$$f(p) = d(IO, h_1) + \sum_{j=1}^{M-1} d(h_j, h_{j+1}) + d(h_M, IO) \tag{6}$$
where $d(x, y)$ is the minimal distance between location $x$ and location $y$, and IO is the input–output point.
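Equation (6) translates directly into code. A sketch, assuming the shortest distances are stored in a symmetric dictionary `d` (names are ours):

```python
def tour_time(tour, d, io="IO"):
    """Equation (6): total length of a picking tour.

    tour = [h1, ..., hM] (I/O point excluded); d[(x, y)] holds the shortest
    distance between x and y, stored for one orientation of each pair."""
    dist = lambda x, y: d[(x, y)] if (x, y) in d else d[(y, x)]
    total = dist(io, tour[0]) + dist(tour[-1], io)
    total += sum(dist(a, b) for a, b in zip(tour, tour[1:]))
    return total
```

In practice, `d` would be filled from the distance matrix $D$ produced by the Floyd–Warshall step of Section 4.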
For the sake of clarity, before delving into the details of the Pkt_PSO, the notation of the problem is fully introduced in Table 2 and Table 3. This notation integrates that of Table 1, valid for a generic PSO.

5.2. Basic Framework of the Algorithm

The iterative procedure of the Pkt_PSO follows a standard framework, like the one described in Section 3. Specifically, at every iteration $t \in \{1, 2, \ldots, T\}$:
  • Each particle i moves from its previous position p_i^{t−1} to the new position p_i^t. This part is the core of the optimization procedure, which is explained in full detail in Section 5.3 and Section 5.4.
  • Next, with probability γ, a neighbour search is made on p_i^t using a 2-Opt-based approach, as explained in Section 5.5.
  • If the solution has improved, the personal best of particle i is updated, i.e., pbest_i^t ← p_i^t if f(p_i^t) < f(pbest_i^{t−1}).
  • When all the particles have moved to a new position, the minimum personal best is computed and compared to the global best. In case of improvement, the global best is updated accordingly, i.e., gbest^t ← pbest_{i*}^t with i* = argmin_i f(pbest_i^t), if f(pbest_{i*}^t) < f(gbest^{t−1}).
Lastly, after T iterations (or when a termination criterion is met), the algorithm ends and the best solution gbest^T found so far is returned as the output. Since optimality cannot be guaranteed, this solution is referred to as a pseudo-optimal solution.

5.3. New Solution Generation

Anytime particle i moves, a new solution p_i^t is generated in an element-by-element way, by sequentially adding one location at a time to an initially empty tour p_i^t(0). The locations used to form p_i^t are sequentially extracted from four reference solutions: pbest_i^t, gbest^t, greedy and rnd_intention_i^t. As the notation reveals, greedy is a solution that depends neither on iteration t nor on particle i. As better described in Section 5.4, this solution is shared by all the particles of the swarm, and it is generated with a greedy algorithm at the very beginning of Pkt_PSO; from that point onward, it is no longer changed. Conversely, rnd_intention_i^t is a random solution that is regenerated at every iteration t for each particle i.
Concerning how locations are extracted from the four reference solutions, the following procedure is used. Suppose that a total of j locations have already been inserted in the partial tour p_i^t(j), i.e., the last inserted location is h_j and p_i^t(j) = [h_1, …, h_j]. Now we need to define which of the remaining M − j locations of the picking list should be inserted at position j + 1. To this aim, a set of possible options H ⊆ P is created by taking all the locations that come immediately after h_j in the four reference solutions. More precisely, let k_gb, k_pb, k_gr and k_rn be the positions occupied by location h_j in gbest^t, pbest_i^t, greedy and rnd_intention_i^t, respectively. So, we have that H = {h_gb, h_pb, h_gr, h_rn}, where:
  • h_gb = gbest^t[k_gb + 1],
  • h_pb = pbest_i^t[k_pb + 1],
  • h_gr = greedy[k_gr + 1],
  • h_rn = rnd_intention_i^t[k_rn + 1].
Note that, if the partial tour p_i^t(0) is empty, the k-indexes cannot be defined and so they are conventionally set to zero, i.e., k_gb = k_pb = k_gr = k_rn = 0.
To better clarify this concept, the extraction procedure is exemplified in Figure 4 for a partial tour made of two locations, p_i^t(2) = [h_1 = E, h_2 = A]. Since location A is the last one inserted, to generate the set of candidate locations H, we need to consider all the locations that come after A in the reference solutions. For instance, in gbest^t, location A is at position four (i.e., k_gb = 4) and so h_gb = gbest^t[k_gb + 1] = gbest^t[5] = C. Considering the other reference solutions, we finally have that H = {h_gb = C, h_pb = D, h_gr = B, h_rn = F}. Therefore, the next and third location of p_i^t(3) (indicated with three question marks in Figure 4) will be chosen among {C, D, B, F}.
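The extraction of the candidate set H can be sketched as follows. This is a simplified illustration with plain Python lists: the four reference tours are hypothetical, chosen to reproduce the successors of the Figure 4 example.

```python
def candidate_set(last, partial, references):
    """Build H: the location that follows `last` in each of the four
    reference tours (gbest, pbest, greedy, rnd_intention), skipping
    successors already in the partial tour or that do not exist."""
    H = []
    for ref in references:
        if not partial:                # empty tour: the k = 0 convention of Section 5.3
            nxt = ref[0]
        else:
            k = ref.index(last)        # position of the last inserted location h_j
            if k + 1 >= len(ref):      # h_j closes this reference tour: no successor
                continue
            nxt = ref[k + 1]
        if nxt not in partial:
            H.append(nxt)
    return H

# Illustrative reference tours, consistent with the successors of Figure 4.
gbest  = ["F", "D", "B", "A", "C", "E"]
pbest  = ["B", "E", "A", "D", "C", "F"]
greedy = ["E", "A", "B", "C", "F", "D"]
rnd    = ["D", "C", "A", "F", "B", "E"]
candidate_set("A", ["E", "A"], [gbest, pbest, greedy, rnd])  # ['C', 'D', 'B', 'F']
```

Note that the sketch also covers the two degenerate cases discussed below: a successor that is already in the partial tour is skipped, and a location that closes a reference tour contributes no candidate.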
Once the four candidate locations of H have been identified, one of them must be selected and appended to the partial tour. As the objective is to minimize the overall travelling distance, a natural choice is to link the selection probability to the distance d(h_j, h_k) between the last location h_j of the partial tour p_i^t(j) and each location h_k of H. In other words, the greater d(h_j, h_k) is, the lower the probability of selecting h_k, and vice versa. Also, to give a different importance to the reference solutions, the distances d(h_j, h_k) should be rescaled, taking the Hadamard (or elementwise) product with the weighting vector w_i^t = [w_1, w_2, w_3, w_4].
This is shown in Equation (7), where D^t is the rescaled distance vector:

D^t = [d(h_j, h_gb), d(h_j, h_pb), d(h_j, h_gr), d(h_j, h_rn)] ∘ [w_1, w_2, w_3, w_4]    (7)
In general terms, neither d(h_j, h_gb) nor d(h_j, h_pb) should be modified, whereas d(h_j, h_gr) and d(h_j, h_rn) should be increased. The first two distances correspond to gbest^t and pbest_i^t, which are the most important reference solutions, whereas the latter two correspond to greedy and rnd_intention_i^t, which are the less important ones. Therefore, a natural choice is to use the weight vector w_i^t = [1, 1, 1/g, 1/(1 − g)] for each particle i and each iteration t, where g ∈ (0, 1) is a greedy factor that measures how much the greedy solution outweighs the random one. A neutral level is obtained for g = 0.5, which rescales both distances by doubling the original ones. We anticipate that, under some peculiar conditions, the weight vector w_i^t can be modified for a limited number of particles, to trigger an antistagnation mechanism, as described in Section 5.6.
To implement this logic, first the four candidate locations h_k are sorted in ascending order of their rescaled distance, to form the sorted array H̄. Next, an element of H̄ is randomly extracted using the quasi-geometric distribution defined in Equation (8) [62]:

f(x) = α(1 − α)^{x−1}    (8)

where α is a shape parameter in the interval [0, 1), and x is the position occupied by a candidate location in the sorted array H̄. Therefore, the selection probability is maximum for the element at position one and minimum for the element at position four.
It is also important to stress that the use of a quasi-geometric distribution has a twofold purpose, as clearly explained by Juan et al. [63]. First, the adoption of a theoretical distribution provides implementation advantages with respect to the use of an empirical distribution. Second, the quasi-geometric distribution has a single parameter, which greatly simplifies the fine-tuning step.
We conclude by noting that two issues could occur during the selection process. Indeed:
  • If a candidate location h_k is already included in the partial tour p_i^t(j), h_k is unusable and cannot be included in H.
  • If the location h_j (the last one of the partial tour p_i^t(j)) matches the last location of a reference solution, its candidate location h_k cannot be defined, since position [M + 1] does not exist.
When this happens, the cardinality of H is less than four; anyhow, provided there is at least one element in H, the selection procedure remains unaltered. Should H be empty, the next location h_{j+1} is randomly extracted among all the locations that have not yet been included in p_i^t(j).
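The selection step of Equations (7) and (8) can be sketched as follows. The quasi-geometric draw is implemented here as a sequence of Bernoulli trials over the sorted ranks, one common realization of f(x) = α(1 − α)^(x−1) with the residual probability mass on the last rank; the function names and toy distances are hypothetical.

```python
import random

def select_next(last, H, w, dist, alpha=0.7):
    """Select the next location among the candidates in H.
    H and w are aligned lists: w[k] rescales the distance of candidate H[k],
    as in Equation (7). Candidates are sorted by rescaled distance and a
    rank is drawn via Bernoulli trials (quasi-geometric law, Equation (8))."""
    ranked = sorted((dist(last, h) * wk, h) for h, wk in zip(H, w))
    for rank, (_, h) in enumerate(ranked, start=1):
        if rank == len(ranked) or random.random() < alpha:
            return h   # the residual probability mass falls on the last rank

# Hypothetical distances from the last inserted location A.
d = {("A", "C"): 5, ("A", "D"): 3, ("A", "B"): 4, ("A", "F"): 6}
dist = lambda x, y: d[(x, y)]
w = [1, 1, 1 / 0.1, 1 / 0.9]   # g = 0.1: weights for gbest, pbest, greedy, rnd
select_next("A", ["C", "D", "B", "F"], w, dist, alpha=1.0)  # 'D' (nearest rescaled)
```

With α = 1.0 the draw becomes deterministic and the candidate with the lowest rescaled distance is always returned, which mirrors the behaviour described for α approaching one; when H is empty, the caller falls back to a uniformly random unvisited location, as described above.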

5.4. Exploiting the Structure of the Problem: Aisle-Based Construction and Greedy Algorithm

To exploit the underlying structure of the picking problem, an additional solution-generation scheme is considered. As described in Section 4.2, the picking problem is characterized by a very constrained layout that makes the path connecting two locations virtually unique [64]. In other words, whenever the picker goes from location x to location y, he or she necessarily passes in front of other locations. This is typical when x and y are at opposite ends of the same aisle, so that, moving from x to y, all the intermediate locations are necessarily encountered. For instance, referring to Figure 2, if the picker moves from location V to location D, he or she also passes in front of locations {O, J, C, U, P, I}.
Let S(x, y) ⊆ P be the set of all the locations found on the minimal path connecting x and y. To take advantage of this fact, when a new candidate location h_k is selected, all the locations l ∈ S(h_j, h_k) are also included in the partial tour, with probability β. Therefore, one of the following two partial tours may be generated:
  • p_i^t(j+1) = [h_1, …, h_j, h_{j+1} = h_k], with probability 1 − β;
  • p_i^t(j+s+1) = [h_1, …, h_j, h_{j+1} = l_1, …, h_{j+s} = l_s, h_{j+s+1} = h_k], with probability β;
where s is the cardinality of S(h_j, h_k). Please note that the first tour is the standard one, whereas the second is generated by the aisle-based construction algorithm.
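The aisle-based extension can be sketched as follows. This is a simplified illustration: S(h_j, h_k) is assumed to be given in travel order, and the example reuses the V → D path of Figure 2.

```python
import random

def extend_with_path(partial, h_k, S, beta=0.1):
    """Aisle-based construction: with probability beta, the locations of
    S(h_j, h_k), given in travel order, are inserted before the selected
    candidate h_k; otherwise only h_k is appended."""
    if random.random() < beta:
        partial.extend(l for l in S if l not in partial)
    partial.append(h_k)
    return partial

# With beta = 1 the intermediate stops of the V -> D path of Figure 2 are inserted.
extend_with_path(["V"], "D", ["O", "J", "C", "U", "P", "I"], beta=1.0)
# ['V', 'O', 'J', 'C', 'U', 'P', 'I', 'D']
```

Locations already present in the partial tour are skipped, so the aisle-based extension never duplicates a stop.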
Due to the very constrained graph of routes, we also suggest generating the greedy solution (shared by all the particles of the swarm) using the well-known nearest neighbour algorithm. According to this algorithm, the picker starts the tour from the I/O point and repeatedly visits the nearest unvisited location until all the locations in P have been visited. Indeed, due to the constraints induced by the layout on the feasible paths, this solution is known to be good, and it is thus advantageous to include it among the four reference solutions used to build a new tour. Also note that this solution never changes and remains the same from iteration t = 1 to iteration t = T. This choice may seem rather odd, but it can be justified as follows: at the single-item level, the nearest neighbour move is certainly optimal, and it is therefore useful to ensure that this move is always available when a new tour is generated in an element-by-element way. This is exactly why the greedy solution is included among the reference solutions.
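The generation of the shared greedy solution via the nearest neighbour algorithm can be sketched as follows; the one-dimensional coordinates are a hypothetical stand-in for the real warehouse distances.

```python
def nearest_neighbour(picking_list, dist, io="IO"):
    """Greedy reference tour: from the I/O point, repeatedly visit
    the nearest not-yet-visited location."""
    tour, current = [], io
    remaining = set(picking_list)
    while remaining:
        nxt = min(remaining, key=lambda h: dist(current, h))  # closest unvisited stop
        tour.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return tour

# Toy 1-D coordinates stand in for the warehouse distance matrix.
pos = {"IO": 0, "A": 1, "B": 2, "C": 5}
dist = lambda x, y: abs(pos[x] - pos[y])
nearest_neighbour(["C", "A", "B"], dist)  # ['A', 'B', 'C']
```

Since the tour is built once, before the first iteration, its O(M²) cost is negligible with respect to the rest of the algorithm.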
For clarity, Figure 5 provides a Python 3.9©-based pseudocode, showing the generation of a new solution p i t .

5.5. Local Search via 2-Opt

Any time a new solution p i t is generated, with probability γ , a local search is also performed. The local search is based on the 2-Opt, a simple local search algorithm originally introduced by Croes in 1958 [65], which is characterized by a special swapping mechanism.
Specifically, the proposed local search can be executed in a complete or in a full-depth mode, which differ in the number of 2-Opt exchange steps that they perform. If the complete 2-Opt is used, for each possible couple of positions (j, k), a new sequence is generated by inverting the order in which the locations between h_j and h_k of p_i^t are visited; the procedure is called complete precisely because the inversion is made for every possible couple. To make a simple numerical example, let us consider a four-item sequence p_i^t = [A, B, C, D]. In this case, a complete 2-Opt search generates the following six tours: [B, A, C, D]_{1,2}, [C, B, A, D]_{1,3}, [D, C, B, A]_{1,4}, [A, C, B, D]_{2,3}, [A, D, C, B]_{2,4} and [A, B, D, C]_{3,4}, where the subscripts refer to positions j and k, respectively. Instead, when the full-depth 2-Opt is used, the complete 2-Opt is recursively repeated on each improved solution that is found. The two approaches are performed with probability γ_c and γ_f, respectively, with γ_c + γ_f = γ. Also, to balance accuracy and computational time, we suggest using γ_f = 0.5γ², so that γ_c = γ − 0.5γ² ≈ γ.
For clarity, Figure 6 shows the Python 3.9 © pseudocode of the local search via 2-Opt.
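As a complement to the pseudocode of Figure 6, the two modes can be sketched as follows. This is a minimal illustration, not the authors' implementation: positions are 0-based, f is an objective in the style of Equation (6), and the toy coordinates are hypothetical.

```python
def complete_2opt(tour, f):
    """One complete 2-Opt pass: reverse every segment [j, k] (0-based) and
    return the best tour found; C(M, 2) candidates are evaluated."""
    best, best_cost = tour, f(tour)
    for j in range(len(tour) - 1):
        for k in range(j + 1, len(tour)):
            cand = tour[:j] + tour[j:k + 1][::-1] + tour[k + 1:]
            cost = f(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

def full_depth_2opt(tour, f):
    """Full-depth mode: repeat the complete pass on each improved solution."""
    while True:
        improved = complete_2opt(tour, f)
        if f(improved) >= f(tour):
            return tour
        tour = improved

# Toy objective: locations on a line, picker starts and ends at coordinate 0.
pos = {"A": 1, "B": 2, "C": 3, "D": 4}
f = lambda t: pos[t[0]] + sum(abs(pos[a] - pos[b]) for a, b in zip(t, t[1:])) + pos[t[-1]]
full_depth_2opt(["C", "A", "D", "B"], f)  # a tour of length 8, the minimum possible
```

Because every pass keeps only strictly improving tours, the full-depth loop is guaranteed to terminate: the objective decreases monotonically until no reversal helps.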

5.6. Antistagnation Mechanism

Let i* be a particle for which the full-depth 2-Opt, executed at iteration t, has failed (i.e., pbest_{i*}^t has not changed). To avoid stagnation, particle i* activates a procedure that is meant to escape possible local minima. The proposed procedure acts on the weight vector w_{i*}^t, as explained next.
- Only for step t + 1, and limited to particle i*, the weight vector is changed to w_{i*}^{t+1} = [1, ∞, 1/g, 1/(1 − g)].
- Additionally, if pbest_{i*}^t = gbest^t, the weight vector becomes w_{i*}^{t+1} = [∞, ∞, 1/g, 1/(1 − g)].
- At the end of the generating process:
  • the current personal best is replaced by the solution just created, regardless of its objective function, i.e., pbest_{i*}^{t+1} ← p_{i*}^{t+1};
  • the weight vector is restored to its initial values, i.e., w_{i*}^{t+2} = [1, 1, 1/g, 1/(1 − g)].
In this way, all the locations of pbest_{i*}^t become the least likely to be selected, and the particle tends to move away (or escape) from its current best. Metaphorically, limited to iteration t + 1, particle i* temporarily forgets its personal best (and possibly the global best) and explores the solution space without making use of this past information.
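The manipulation of the weight vector can be sketched as follows. The "infinite" weight is our reading of the mechanism: multiplying a reference's distances by ∞ pushes its successors to the back of the sorted candidate array, making them the least likely choices.

```python
INF = float("inf")

def antistagnation_weights(pbest_equals_gbest, g=0.1):
    """Temporary weight vector used at step t+1 by a particle whose
    full-depth 2-Opt has failed: the successors extracted from pbest (and,
    when pbest coincides with gbest, also from gbest) receive an infinite
    weight and end up last in the sorted candidate array."""
    w_gbest = INF if pbest_equals_gbest else 1
    return [w_gbest, INF, 1 / g, 1 / (1 - g)]  # [gbest, pbest, greedy, rnd]

antistagnation_weights(False)  # [1, inf, 10.0, ~1.111]
```

At step t + 2 the particle simply reverts to the standard vector [1, 1, 1/g, 1/(1 − g)], so the "memory loss" lasts a single iteration.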

5.7. Time Complexity of Pkt_PSO

For the sake of clarity, the whole algorithm is summarized in the flow diagram of Figure 7. From the diagram, it should be clear that, in each epoch t, each particle i generates at least one new solution using the element-by-element generation scheme, possibly enhanced by the aisle-based generation approach. Next, if a neighbour search is performed, an additional number of solutions is generated. This extra number can be easily determined if the complete 2-Opt is used: in this case, C(M, 2) solutions will be considered, where C(M, 2) is the binomial coefficient. Conversely, determining the exact number of different sequences generated by the full-depth 2-Opt before convergence is challenging, as this number varies depending on the problem size and characteristics. Anyhow, since the complexity of the full-depth 2-Opt is generally considered to be O(M²), we can confidently assume that the number of explored solutions is ≪ M³, especially because, in relative terms, the number of iterations required by the 2-Opt to converge tends to decrease as the problem size increases. For a further discussion on this point, see [66].
Because of this, considering the number N of particles of the swarm and the probability γ of performing a neighbour search, the total number of solutions generated during T epochs can be estimated as in Equation (9):

N_Solutions = T · N · [1 + (γ − 0.5γ²) · M² + 0.5γ² · φ(M)]    (9)
where φ(M) ≪ M³ is the number of solutions generated by the full-depth 2-Opt. For instance, using problem instances like the ones discussed next in Section 7, we experimentally found that φ(20) ≈ 1400, φ(30) ≈ 3750 and φ(40) ≈ 8400.

5.8. Concluding Remarks on the Proposed Algorithm

In a standard PSO, particles constantly move according to a velocity profile that codifies the knowledge of the good solutions visited in the past. Indeed, any time a particle moves, a new solution p_i^{t+1} is generated based on its current position p_i^t and current velocity v_i^t. The latter, in turn, depends both on the particle's personal best and on the swarm's global best. Metaphorically, each particle represents a bird that explores an area looking for food and changes its trajectory based on its intuition (cognitive behaviour) and/or to imitate the paths followed by the other birds of the swarm (social behaviour).
Akin to the standard PSO, we also construct new solutions exploiting p b e s t i t and g b e s t t ; however, aside from this similarity, a careful reader could spot some evident divergences. Specifically:
- Due to the discrete nature of the problem, an analytic formula (as in Equation (1)) is not used to recombine the reference solutions; an element-by-element approach is used instead.
- Two additional solutions, namely greedy and rnd_intention_i^t, take part in the generation process.
- Conversely, the current solution does not explicitly appear in this process.
- The concept of speed does not seem to be explicitly implemented in the framework.
However, especially concerning the last two points, these differences are only apparent at a conceptual level. Our algorithm can be reinterpreted in the following way. Each particle always starts moving from its personal best, which coincides with its current position. When the particle moves to a new point, it stops there if the solution has improved; if not, it returns to the starting point. Furthermore, the direction and modulus of the movement depend on the four reference solutions and on the weight vector w_i^t. In this sense, the reference solutions can be seen as four cardinal points toward which the particles are attracted. Moreover, the weight vector can be seen as a velocity profile that determines how much a particle deviates from its previous trajectory, heading toward one of the above-mentioned cardinal points.
Also note that the velocity vector is dynamic, and it is also specific to each particle of the swarm. Most of the time, the velocity vector is shared by all particles; however, if a full-depth search is made on particle i at epoch t and fails, the velocity vector of that particle immediately changes, helping it to escape the local minimum where it presumably was trapped.
To conclude, we note that, in the case of discrete problems, the existing literature follows two main strands of thought. A minority of authors propose readapting the PSO with few marginal changes, while most scholars disrupt the original version, giving it a new shape. Given the description of the procedure, we claim that our approach stands in the middle: we reinterpreted the concept of speed, taking inspiration from the recent work by [23], while trying to preserve most of the underlying features of the original PSO, such as the intention of the particles.

6. Parameter Tuning

The algorithm has four operating parameters: (i) the shape factor α of the quasi-geometric distribution, (ii) the probability β of activating the aisle-based construction procedure, (iii) the probability γ of carrying out a local search via 2-Opt, and (iv) the greediness factor g, which defines how much the exploration is influenced by the greedy solution. The number of particles of the swarm is another operating parameter that should be fine-tuned. However, we verified that the results do not appreciably change if the number of particles remains within the range of 30–40, the one usually adopted in similar works in the technical literature. Therefore, to assure a fair comparison with the other algorithms used as benchmarks, the number of particles was set to 30. Even the probability of triggering a full-depth search could be optimized; however, this event is so rare that a detailed optimization would be useless. Hence, this parameter too was fixed to γ_f = 0.5γ², as mentioned in Section 5.5.
For proper fine-tuning, as in a two-level factorial experiment design, we defined reasonable low and high values for each parameter. Specifically:
- For the shape parameter α, the values 0.7 and 0.2 were chosen, as in Juan et al. [63]. When α = 0.7, the selection of a potential location h_k is strongly influenced by its rescaled distance; for α approaching one, the selection tends to be deterministic, and the candidate with the lowest rescaled distance is almost always returned. Conversely, when α = 0.2, the selection is almost random (pure randomness would be achieved with α → 0).
- For the probability γ, the values 0.2 and 0.05 were chosen. Both values are rather low because the execution of a 2-Opt local search should be an extreme event with a very low occurrence rate. For values higher than 0.2, the time complexity (see Equation (9)) would grow tremendously, making a practical implementation impossible in a real industrial context.
- For the greediness factor g, we are "pioneers", as this is a completely new parameter. Lacking scientific references (or experimental evidence), two extreme values, 0.8 and 0.1, were tested. When g = 0.8, the greedy exploration prevails over the random behaviour, and vice versa.
- Probability β is also a new parameter and so, in line with the previous point, we tested two opposite situations, using a high value of 0.8 and a low value of 0.1.
All the resulting 2 4 combinations were tested using five different picking lists with an average length of 30 locations (i.e., M   =   30 ). A common layout characterized by three blocks divided by two cross aisles and 10 racks per block was also considered. A more accurate description of this layout is presented in Section 7.2, where the numerical assessment of the algorithm is discussed. For each combination and each picking list, the algorithm was executed five times for a total of 400 repetitions. Table 4 reports the results, expressed as the sample mean and standard deviation of the total travelling time of each parameter’s combination and picking list.
The values shown in Table 4 demonstrate that Pkt_PSO is very consistent and reliable. Indeed, for each parameters' combination, the standard deviation is always very low, and the mean value does not change significantly as the parameters change. Results also show that the only parameter with a significant impact on the results is the greediness factor, which should be kept at its low level (i.e., g = 0.1).
For clarity, we also highlighted in red the experiments where Pkt_PSO failed, at least one time, to find the global optimum shown in the last row of Table 4. The global optimum was computed with the famous Concorde TSP solver (using the Python version available in the PyTSP library) which, according to Mulder and Wunsch [67], is one of the fastest TSP solvers. For instance, the tuple (533, 0), relative to the first parameters' combination and the first picking list, means that the algorithm always converged to a value of 533, which is higher than the global optimum. Similarly, the tuple (508, 3), relative to the third parameters' combination and the third picking list, means that the algorithm did not always converge to the global optimum: the mean is higher than the global optimum and the standard deviation is not null. Conversely, we highlighted in green the combination of parameters that never failed to find the global optimum. This is the combination we opted for, also because, having a low value of the local search probability (γ = 0.05), it is the combination requiring the lowest computational time. From Equation (9), and using φ(30) ≈ 3750, the expected number of solutions generated in each epoch is approximately 950.

7. Case Study

7.1. The Benchmarks

To test Pkt_PSO, a set of competitive benchmarks was chosen. Since our algorithm is inspired by the PSO, and since the PSO often outperforms other metaheuristics when applied to the TSP (see, for example, [20]), the very recent and highly performant PSOs by Zhou et al. [22] (PSO_1) and by Zhong et al. [23] (PSO_2) were considered. We also included among the benchmarks the ACO by De Santis et al. [9]. This is one of the most recent solutions specifically designed for the picking problem in manual warehouses, and it has been shown to perform better than other high-performance algorithms. Lastly, as simple benchmarks, we also considered the basic solutions provided by the nearest neighbour constructive algorithm and by the full-depth 2-Opt, the same algorithms used by Pkt_PSO to generate the greedy solution and to perform the neighbour search. For a more in-depth description of the selected benchmarks, please see Appendix A.

7.2. Layouts and Picking Lists

Three different layouts, which differ in terms of the disposition and number of blocks and racks, were considered. For the sake of clarity, Figure 8 shows what we mean by blocks and racks.
The first layout (L1) is a very long warehouse with no cross aisles, made of a single block with 30 racks divided by 32 aisles. The second one (L2) has the same storage capacity as the first, but the racks are arranged differently. Specifically, it is made of three blocks divided by two cross aisles, and, in each block, there are 10 racks divided by 12 aisles. The third one (L3) is a high-capacity warehouse made of three blocks divided by two cross aisles and 30 racks per block, each one divided by 32 aisles. For all considered configurations:
- Each aisle has 11 storage locations on both sides; the only exceptions are the first and the last aisle which, being flanked by the wall, have only 11 storage locations on a single side.
- The I/O point is in the bottom-right corner.
- The storage locations have a standard size of 2 × 2 m, while the aisles and cross aisles are, respectively, 2 and 4 m wide.
For each layout, we tested several problems of increasing complexity. To the best of our knowledge, and according to [60], given a routing problem, the number of locations to visit is a good proxy of its complexity. For this reason, on each layout, we tested 10 picking lists of length 20, 10 of length 30, and 10 of length 40. We also verified that, with a few locations (M ≤ 19), all the algorithms always converged to the global optimum computed with the Concorde TSP solver. On the other hand, a number M > 40 would be very unusual in a manual warehouse and, due to capacity constraints, certainly unrealistic for a single tour. Lastly, we note that, for each generated picking list, we randomly selected the storage locations, to simulate a purely random allocation policy.

7.3. Results

All the algorithms were coded in Python 3.9© using the standard CPython interpreter and run on a standard PC powered by an Intel i7 8-core processor at 3.2 GHz with 16 GB RAM. We also open-sourced the source code and the Jupyter Notebook© used to run the tests at the following link: <GitHub-dmezzogori/op-pso>. To determine the reliability of the compared approaches, for each layout configuration and picking list, each algorithm was launched five times, for a total of 3 × 10 × 3 × 5 = 450 tests per algorithm. Clearly, since the nearest neighbour is a constructive and therefore deterministic algorithm, it was tested only once for each layout configuration and picking list, for a total of 3 × 10 × 3 × 1 = 90 tests. For each algorithm and for each run, the total travelled distance of the generated picking tour (i.e., the objective function) and the computational time were recorded. We clarify that we did not compute the global optimum of each scenario, as this would have taken too long (especially for the 40-location configurations) and also because: (i) the capability to converge to the real global optimum was previously tested with values of M ≤ 19 (and even with M = 30 for Pkt_PSO), and (ii) all the benchmarks had already been compared with exhaustive procedures and/or with optimization solvers in their original papers, to which we refer for more details. In view of this, comparisons will be made relative to the overall best solution found by the five algorithms; from here on, this solution will be referred to as the pseudo-optimal solution.
Obtained results are reported in Table 5 in terms of two performance indicators:
- Number of wins (N. Wins column): the number of times an algorithm found the pseudo-optimal solution (i.e., its solution was better than, or equal to, that provided by the other algorithms).
- Average deviation (Avg. Dev. column) relative to the pseudo-optimal solution, computed as in Equation (10) for layout l, picking list length p and algorithm a:

Avg_Dev(l, p, a) = Σ_{i=1}^{10} (result(l, p, a, i) − pseudo_optimal(l, p, i)) / 10    (10)

where the index i ∈ {1, …, 10} indicates the i-th instance of the 10 randomly generated picking lists of equal length p.
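Equation (10) amounts to the following one-liner (a sketch; we read the pseudo-optimal value as being instance-specific, and the numeric inputs are hypothetical):

```python
def avg_dev(results, pseudo_optima):
    """Equation (10): average gap, over the instances of a (layout,
    list-length) cell, between an algorithm's tour length and the
    pseudo-optimal tour length of the same instance."""
    return sum(r - p for r, p in zip(results, pseudo_optima)) / len(results)

avg_dev([105, 100, 103, 100], [100, 100, 100, 100])  # (5 + 0 + 3 + 0) / 4 = 2.0
```

A null average deviation therefore means that the algorithm matched the pseudo-optimal tour on every instance of the cell.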
For the sake of clarity, and due to space constraints, the starting data from which the performance indicators were computed (i.e., the length of the picking tours generated by each algorithm) have been omitted from the main body of the article. However, they are included in Appendix B.
For clarity, in Table 5 we highlighted in bold the best result for a given layout and picking list combination, that is, the best result of each row.
A complementary analysis is provided in Table 6, which shows the average computation times in seconds. Note that the nearest neighbour algorithm is not included in the table, as its running time is negligible, given its simplicity and determinism.

7.4. Discussion

Results synthesized in Table 5 and Table 6 clearly show that Pkt_PSO is more accurate and precise than the benchmarks. Indeed, as Table 5 reveals, Pkt_PSO converged to the pseudo-optimal solution at every run, and so the average deviation (of the solution provided by Pkt_PSO with respect to the pseudo-optimal one) is always null. This proves that Pkt_PSO is more stable and consistent than the benchmarks and that the added procedures (i.e., aisle-based generation, 2-Opt, and deep search), tailored to the task at hand, tackle the problem structure effectively. Moreover, a consistent convergence to a superior solution and a null average deviation is observed for all the investigated combinations of layouts and picking lists. This behaviour is extremely important, especially from an operational viewpoint, as interested practitioners need not fine-tune the hyperparameters of the algorithm for the specific layout at hand.
The only algorithm that shows similar results, although it does not match them, is PSO_2. This PSO-based algorithm was able to consistently track the optimum in five out of nine configurations. In all the remaining configurations, the deviation from the best solution (found by Pkt_PSO) remained very contained, although not null. Besides being less precise, this algorithm is also less accurate: only for the third investigated layout (L3) did it consistently converge to the same (supposedly best) solution at each attempt. In all other cases, it found different solutions at every run, as certified by a non-null value of the average deviation. As for the remaining methods, none of them can track the optimum, and their average deviation is always very high.
Concerning the computational times shown in Table 6, we can say that the best operational trade-off is achieved by PSO_2. This algorithm, in fact, has a low computational time, comparable to that of the ACO, which is the fastest, but it provides much better results in terms of total travelled distance. Pkt_PSO, instead, while showing the best results, needs considerably longer computational times, probably due to the application of the 2-Opt procedure with deep search. These computational times are operationally acceptable and could easily be improved by optimizing the code and/or using a low-level programming language (such as C). Nonetheless, we checked the quality of the solution provided by our algorithm when stopped after an amount of time equal to that required by the ACO to reach convergence. We performed this comparison for all the scenarios described above, with five repetitions each. As shown in Figure 9, which displays the trend of the best solutions found by the two algorithms (for problems of increasing complexity), the solutions are practically indistinguishable, with a maximum difference (or error) lower than 1%.
These results were also confirmed by the three-way ANOVA of Table 7, where "layout type" (LT), "picking list length" (PL), and "algorithm type" (ALG) are used as factors. In the ANOVA, we also included the interaction between factors LT and PL; the other interactions were excluded because they were non-significant in a preliminary analysis.
As the ANOVA shows, ALG is the only non-significant factor (with a very high p-value), and so we can conclude that the performances of the two algorithms are indistinguishable. We also note that all the other factors are highly significant, which is a further demonstration of the correctness of the choices we made to define the complexity of the problem.

8. Conclusions and Future Works

In this paper, we focused on the picking problem in manual warehouses, a very relevant problem with many managerial implications. Considering that a plethora of metaheuristics have been proposed to solve this problem, the paper aims to check if, and how much, the solution can be improved. Furthermore, since the problem is very constrained, it could also be characterized by many local minima with similar values. If so, at least operationally, a good local minimum could be enough, and it would be unnecessary to use a very sophisticated approach to find the absolute minimum.
To shed light on these research questions, we developed a novel metaheuristic (referred to as Pkt_PSO) and compared it with a set of state-of-the-art benchmarks. The outcomes of the experimental campaign showed that Pkt_PSO provides the best results, in terms of both accuracy and precision. However, as we initially supposed, the obtained improvements are marginal, and to reach the optimum (or the pseudo-optimum), it is necessary to use an approach that favours exploration over convergence and that, consequently, requires more computational time. If the execution of Pkt_PSO is interrupted as soon as the benchmarks reach convergence, the obtained results are almost indistinguishable. This finding suggests that, for this particular type of problem, research has practically come to an end, and a general-purpose metaheuristic can be considered enough for many industrial applications. Nonetheless, more accurate approaches could be useful when additional features and/or constraints are considered, such as dedicated storage, multilevel shelves, one-way aisles, and the transport capacity of the picker. The application of Pkt_PSO to these alternative scenarios will be a topic of future works.
An interesting and promising direction for future research is the development of a PSO-based methodology to solve the three-dimensional case picking problem (3D-CPP) under an order-picking strategy. The 3D-CPP combines two important issues in logistics: pallet loading and the routing of pickers in manual warehouses. The proposed method could thus be extended to the combined optimization of picker routing and 3D bin packing.

Author Contributions

M.B.: conceptualization, methodology, supervision. D.M.: software, validation, formal analysis, data curation, writing—original draft preparation. F.Z.: conceptualization, methodology, software, validation, formal analysis, data curation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Recovery and Resilience Plan (NRRP), Mission 04 Component 2 Investment 1.5—NextGenerationEU, Call for tender n. 3277 dated 30 December 2021. Award Number: 0001052 dated 23 June 2022.

Data Availability Statement

All the code and the obtained results are available on GitHub at dmezzogori/op-pso.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

All the abbreviations used in the paper are listed below.
ACO: Ant Colony Optimization
AS/RS: Automated Storage and Retrieval System
BSS: Basic Swap Sequence
CPU: Central Processing Unit
I/O: Input/Output point
NaN: Not a Number
NeNe: Nearest Neighbour
NP: Non-deterministic Polynomial-time (hardness)
OF: Objective Function
Pkt_PSO: the proposed metaheuristic, i.e., an evolutionary approach inspired by PSO and designed for picking time optimization
PSO: Particle Swarm Optimization
PyTSP: a Python library containing external solvers for the traveling salesman problem
SA: Simulated Annealing
SKU: Stock Keeping Unit
ST: Subject To (constraints)
TSP: Traveling Salesman Problem
3D-CPP: Three-Dimensional Case Picking Problem

Appendix A. Description of the Heuristics and Metaheuristics Used as Benchmarks

Appendix A.1. PSO_1 by Zhou et al. [22]

PSO_1 is a PSO specifically designed for the TSP. Its peculiarity lies in the way in which the difference, or "distance", between two solutions (or tours) is calculated. Let $swap(x, y)$ be the operator that swaps the elements in positions $x$ and $y$ of an array, and let $p_i^t$ be an array containing the storage locations in their order of visitation. Accordingly, the distance between $p_i^t$ and a reference solution, say $p_*^t$, is defined as the minimum number of swaps needed to turn $p_i^t$ into $p_*^t$. In this sense, the velocity $v_i^t$ of particle $i$ can be represented by the Basic Swap Sequence (BSS), that is, the minimum sequence of $swap(i, j)$ operations needed to turn $p_i^t$ into $p_*^t$. To clarify these concepts, a simple example follows, where zero-based indexing is assumed:
- Solutions: $p_i^t$ = (4, 5, 1, 3, 2, 6, 7, 8), $p_*^t$ = (1, 2, 3, 4, 5, 6, 7, 8).
- BSS($p_i^t \rightarrow p_*^t$): [swap(0, 2), swap(1, 4), swap(2, 3)].
- The BSS produces: (1, 5, 4, 3, 2, 6, 7, 8) → (1, 2, 4, 3, 5, 6, 7, 8) → (1, 2, 3, 4, 5, 6, 7, 8).
Overall, the algorithm proceeds as a standard PSO. In fact, at each iteration $t$, for each particle $i$, a new position is obtained by applying the velocity vector $v_i^t$, or BSS, to the current position $p_i^t$. Next, the velocity is updated as follows:
$v_i^{t+1} = w \cdot v_i^t \oplus \alpha \cdot u \cdot (p_i^t - pbest_i^t) \oplus \beta \cdot u \cdot (p_i^t - gbest^t)$    (A1)
where $w$, $\alpha$ and $\beta$ are coefficients, $u$ is a uniformly distributed random number and $\oplus$ is the similarity operator.
Relative to Equation (A1), it is important to note that:
- The differences between the current and the best solution (either personal or global) must be interpreted as distances. Therefore, the result of the difference is a minimal swap sequence.
- The product of a scalar and a swap sequence is interpreted as the element-wise product of the scalar with each element of the sequence, e.g., 2 · [1, 2, 3, 4] = [2, 4, 6, 8].
- The similarity operator averages the elements of two swap sequences paired together, e.g., [1, 2, 3, 4] ⊕ [1, 5, 2, 4] = [(1+1)/2, (2+5)/2, (3+2)/2, (4+4)/2] = [1, 3.5, 2.5, 4].
- The ceil function is used to round the resulting floating-point values back to integers.
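To make the distance computation concrete, the following sketch builds a minimal swap sequence by placing the correct element at each position in turn. It is our own illustrative code, not the implementation of Zhou et al.:

```python
def basic_swap_sequence(p, target):
    """Return a minimal list of (i, j) swaps (zero-based indices, as in the
    example above) that transforms permutation p into permutation target."""
    p = list(p)
    pos = {v: i for i, v in enumerate(p)}  # value -> current index in p
    swaps = []
    for i, want in enumerate(target):
        if p[i] != want:
            j = pos[want]                  # where the wanted value currently sits
            swaps.append((i, j))
            pos[p[i]], pos[p[j]] = j, i    # keep the index map consistent
            p[i], p[j] = p[j], p[i]
    return swaps

# The worked example from the text:
basic_swap_sequence([4, 5, 1, 3, 2, 6, 7, 8],
                    [1, 2, 3, 4, 5, 6, 7, 8])  # → [(0, 2), (1, 4), (2, 3)]
```

Applied to the example above, it reproduces exactly the sequence [swap(0, 2), swap(1, 4), swap(2, 3)]; the length of the returned list is the "distance" between the two tours.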

Appendix A.2. PSO_2 by Zhong et al. [23]

PSO_2 makes use of a novel notation, and the classic concept of velocity is deeply modified. Each solution $p_i^t$ is coded using an edge-based notation, that is, a sequence of 2-tuples (A, B) where A and B are storage locations. Each 2-tuple represents an edge included in the solution, and so (A, B) means that storage location B is visited immediately after storage location A. For instance, the sequence (A, C, D, B, F, G, E, A) is coded as [(A, C), (B, F), (C, D), (D, B), (E, A), (F, G), (G, E)], where the 2-tuples are sorted in lexicographical order of their first element. Velocity is coded using the same approach but, in this case, a weight $w$ is associated with each edge, e.g., [(A, C, 0.5), (B, F, 1.0), …, (X, Y, $w$)]. Also, the edges included in the velocity vector do not necessarily form a tour; they are just candidate edges that could be inserted in a new solution.
Concerning distances, the distance between two solutions, $p_i^t - p_j^t$, is given by the sequence of edges of $p_i^t$ without the edges that are also in $p_j^t$. This distance is transformed into a velocity by associating to each edge a weight of 1. In other words, the difference of two solutions is interpreted as a speed sequence with all weights equal to one. Similarly, to sum speeds, their edges are compared one by one: if the edges are equal, the respective weights are added together; otherwise, the edge with the highest weight is taken. For instance, taking $v_i^t$ = [(A, C, 0.5), (B, F, 1), …] and $v_j^t$ = [(A, C, 0.8), (M, P, 1.2), …], their sum equals $v_i^t + v_j^t$ = [(A, C, 1.3), (M, P, 1.2), …].
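The weight-combination rule can be sketched as follows, under our reading that edges are paired positionally. This is illustrative code, not taken from the original paper:

```python
def add_velocities(v1, v2):
    """Sum two edge-weighted speed sequences, pairing their entries
    positionally: if the paired edges coincide, their weights are added;
    otherwise, the edge with the larger weight is kept."""
    out = []
    for (e1, w1), (e2, w2) in zip(v1, v2):
        if e1 == e2:
            out.append((e1, w1 + w2))
        else:
            out.append((e1, w1) if w1 >= w2 else (e2, w2))
    return out

# The worked example from the text (edges as tuples of locations):
v_i = [(("A", "C"), 0.5), (("B", "F"), 1.0)]
v_j = [(("A", "C"), 0.8), (("M", "P"), 1.2)]
add_velocities(v_i, v_j)  # (A, C) edges merge to weight 1.3; (M, P, 1.2) wins over (B, F, 1.0)
```

On the worked example it yields [(A, C, 1.3), (M, P, 1.2)], matching the sum given above.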
Lastly, at each iteration $t$, the velocity of particle $i$ is updated as in Equation (A2):
$v_i^{t+1} = W \cdot v_{gr}^t + u \cdot (p_i^t - rnd\_pbest^t)$    (A2)
where $v_{gr}^t$ is a velocity generated using a greedy algorithm, $rnd\_pbest^t$ is the personal best of a randomly selected particle of the swarm, $W$ is a weight factor and $u$ is a random number.
It is important to note that:
- When the velocity is multiplied by a scalar, say $x$, each edge weight $w$ is multiplied by $x$.
- The difference $p_i^t - rnd\_pbest^t$ generates a speed sequence that can be added to $W \cdot v_{gr}^t$.
- The velocity $v_{gr}^t$ is iteratively constructed edge by edge. Specifically, for each storage location X, a location Y is randomly extracted (with probability proportional to the distance between X and Y) and a new edge (X, Y) with weight $w = 1$ is created.
Next, a new solution $p_i^{t+1}$ is generated from $p_i^t$ using one of the edges included in $v_i^{t+1}$. Specifically, each edge of $v_i^{t+1}$ is considered for possible insertion in $p_i^t$, and the insertion that generates the best tour is finally selected. Also, any time one of the edges of $v_i^{t+1}$ is selected, three different inserting strategies (namely swap, insert and inverse) are tested, and the best one is retained. Lastly, $p_i^t$ is replaced by $p_i^{t+1}$ using a Metropolis acceptance criterion akin to the one commonly implemented in Simulated Annealing. In this way, new solutions may be accepted even in the case of a small deterioration of the objective function.
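The Metropolis acceptance step can be sketched as follows, assuming the standard Simulated Annealing form with a temperature parameter; the paper does not report the exact acceptance schedule used in PSO_2, so names and arguments here are illustrative:

```python
import math
import random

def metropolis_accept(current_cost, new_cost, temperature, rng=random.random):
    """Metropolis criterion: an improving move is always accepted, while a
    worsening move is accepted with probability exp(-delta / temperature)."""
    delta = new_cost - current_cost
    if delta <= 0:
        return True          # improvement (or tie): accept unconditionally
    return rng() < math.exp(-delta / temperature)
```

With a non-negligible temperature, small deteriorations of the objective function are occasionally accepted, which is exactly what lets the algorithm escape shallow local minima.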

Appendix A.3. ACO by De Santis et al. [9]

This metaheuristic is a classic Ant Colony Optimization, an algorithm that mimics the behaviour of ants looking for food. When an ant leaves the nest, it must decide which is the best way to find some food. To this aim, the ant looks at the quantity of pheromone left by other members of the colony that previously walked a path. A higher level of pheromone means a better path and vice versa. In fact, a good path will be selected by many ants and will receive much pheromone. In contrast, a poor one will be visited by few ants and its pheromone will tend to evaporate. Inspired by this behaviour, at each iteration of the ACO, a new path is built from scratch, adding one location at a time. Let $j_1$ be the last location included in the new path; the next one, say $j_2$, is randomly selected with probability $\pi_{j_1 j_2}$ that is proportional to the amount of pheromone laid on the edge connecting $j_1$ to $j_2$, as shown by Equation (A3):
$\pi_{j_1 j_2} = \dfrac{\tau_{j_1, j_2}}{\sum_k \tau_{j_1, k}}$    (A3)
where $\tau_{j_1, j_2}$ is the pheromone on edge $(j_1, j_2)$. Therefore, the probability to visit $j_2$ after $j_1$ equals the amount of pheromone on edge $(j_1, j_2)$ divided by the sum of the pheromone on all the edges leaving $j_1$.
Once the new path has been completed, it is compared to the best solution found so far. In case of improvement, the best solution is updated and the amount of pheromone on each edge of the network is modified as in Equation (A4):
$\tau_{j_1, j_2} = \tau_{j_1, j_2} + \alpha \cdot (1 / d_{j_1, j_2})^{\beta}$  if $(j_1, j_2) \in$ best path
$\tau_{j_1, j_2} = \rho \cdot \tau_{j_1, j_2}$  if $(j_1, j_2) \notin$ best path    (A4)
where $\alpha$, $\beta$ and $\rho$ are parameters of the algorithm and $d_{j_1, j_2}$ is the distance between two locations.
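The selection rule of Equation (A3) amounts to a roulette-wheel draw over the pheromone values. A minimal sketch follows; restricting the normalization to the not-yet-visited locations is our assumption for tour construction, and all names are illustrative:

```python
import random

def next_location(j1, pheromone, unvisited, rng=random.random):
    """Roulette-wheel selection of the next location j2: the probability of
    each candidate is its pheromone tau[j1][j2] divided by the total pheromone
    on the candidate edges leaving j1 (Equation (A3))."""
    total = sum(pheromone[j1][k] for k in unvisited)
    r = rng() * total
    acc = 0.0
    for k in unvisited:
        acc += pheromone[j1][k]
        if r <= acc:
            return k
    return unvisited[-1]  # guard against floating-point round-off
```

For example, with pheromone 1.0 on edge (0, 1) and 3.0 on edge (0, 2), location 2 is chosen roughly three times out of four.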

Appendix B. Experimental Results

The following tables show the results obtained in each evaluated scenario. Specifically, for each combination of layout and picking list, the following data are reported: (i) mean and standard deviation of the total travelled distance, (ii) the number of solutions explored, and (iii) the computational time.
[Table bodies omitted: each of the following tables reports, for ten picking lists, the mean and standard deviation of the distance covered by the picker [m], the number of generated solutions before the pseudo-optimum was found, and the computational time [s], for the Greedy, Pkt_PSO, PSO_1, PSO_2, ACO and 2-Opt algorithms.]
Table A1. Layout L1 and picking lists with 20 locations.
Table A2. Layout L1 and picking lists with 30 locations.
Table A3. Layout L1 and picking lists with 40 locations.
Table A4. Layout L2 and picking lists with 20 locations.
Table A5. Layout L2 and picking lists with 30 locations.
Table A6. Layout L2 and picking lists with 40 locations.
Table A7. Layout L3 and picking lists with 20 locations.
Table A8. Layout L3 and picking lists with 30 locations.
Table A9. Layout L3 and picking lists with 40 locations.

References

1. Roodbergen, K.; Vis, I. A model for warehouse layout. IIE Trans. 2006, 10, 799–811.
2. Chowdhury, N.A.; Ali, S.M.; Paul, S.K.; Mahtab, Z.; Kabir, G. A hierarchical model for critical success factors in apparel supply chain. Bus. Process Manag. J. 2020, 26, 1761–1788.
3. Ma, Z.; Shang, X.; Fu, X.; Luo, F. The Architecture and Key Technologies of Internet of Things in Logistics. In Proceedings of the International Conference on Cyberspace Technology, Beijing, China, 23 November 2013.
4. Jaghbeer, Y.; Hanson, R.; Johansson, M.I. Automated order picking systems and the links between design and performance: A systematic literature review. Int. J. Prod. Res. 2020, 58, 4489–4505.
5. Janssen, C.P.; Donker, S.F.; Brumby, D.P.; Kun, A.L. History and future of human-automation interaction. Int. J. Hum.-Comput. Stud. 2019, 131, 99–107.
6. Roodbergen, K.J.; Sharp, G.P.; Vis, I.F.A. Designing the layout structure of manual order picking areas in warehouses. IIE Trans. 2008, 40, 1032–1045.
7. Van Gils, T.; Ramaekers, K.; Caris, A.; De Koster, R.B.M. Designing efficient order picking systems by combining planning problems: State-of-the-art classification and review. Eur. J. Oper. Res. 2018, 267, 1–15.
8. Koster, R.D.; Le-Duc, T.; Roodbergen, K.J. Design and control of warehouse order picking: A literature review. Eur. J. Oper. Res. 2007, 2, 481–505.
9. De Santis, R.; Montanari, R.; Vignali, G.; Bottani, E. An adapted ant colony optimization algorithm for the minimization of the travel distance of pickers in manual warehouses. Eur. J. Oper. Res. 2018, 1, 120–137.
10. Van Gils, T.; Ramaekers, K.; Braekers, K.; Depair, B.; Caris, A. Increasing order picking efficiency by integrating storage, batching, zone picking, and routing policy decisions. Int. J. Prod. Econ. 2018, 197, 243–261.
11. Bottani, E.; Volpi, A.; Montanari, R. Design and optimization of order picking systems: An integrated procedure and two case studies. Comput. Ind. Eng. 2019, 137, 1–17.
12. Henn, S.; Koch, S.; Doerner, K.F.; Strauss, C.; Wäscher, G. Metaheuristics for the order batching problem in manual order picking systems. Bus. Res. 2010, 1, 82–105.
13. Zhao, Z.; Zhang, M.; Yang, C.; Fang, J.; Huang, G.Q. Distributed and collaborative proactive tandem location tracking of vehicle products for warehouse operations. Comput. Ind. Eng. 2018, 125, 637–648.
14. Casella, G.; Volpi, A.; Montanari, R.; Tebaldi, L.; Bottani, E. Trends in order picking: A 2007–2022 review of the literature. Prod. Manuf. Res. 2022, 11, 1–71.
15. Kim, T.Y.; Dekker, R.; Heij, C. Spare part demand forecasting for consumer goods using installed base information. Comput. Ind. Eng. 2017, 103, 201–215.
16. Valarmathi, B.; Santhi, K.; Ravi, C.; Peeyush, G.; Bhagyashree, B. Performance analysis of genetic algorithm, particle swarm optimization and ant colony optimization for solving the travelling salesman problem. Int. J. Recent Technol. Eng. 2019, 2, 2277–3878.
17. Islam, M.R.; Ali, S.M.; Fathollahi-Fard, A.M.; Kabir, G. A novel particle swarm optimization-based grey model for the prediction of warehouse performance. J. Comput. Des. Eng. 2021, 8, 705–727.
18. Aouay, S.; Jamoussi, S.; Ayed, Y.B. Particle swarm optimization-based method for Bayesian Network structure learning. In Proceedings of the 5th International Conference on Modeling, Simulation and Applied Optimization, Hammamet, Tunisia, 28–30 April 2013.
19. Cowie, J.; Oteniya, L.; Coles, R. Particle swarm optimization for learning Bayesian networks. In Proceedings of the World Congress on Engineering, WCE, London, UK, 2–4 July 2007.
20. Gharib, A.; Benhra, J.; Chaouqi, M. A performance comparison of PSO and GA applied to TSP. Int. J. Comput. Appl. 2015, 975, 8887.
21. Sumathi, M.; Rahamathunnisa, U.; Anitha, A.; Druheen, D.; Nallakaruppan, M.K. Comparison of particle swarm optimization and simulated annealing applied to travelling salesman problem. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 2278–3075.
22. Zhou, H.; Song, M.; Pedrycz, W. A comparative study of improved GA and PSO in solving multiple traveling salesman problem. Appl. Soft Comput. 2018, 64, 564–580.
23. Zhong, Y.; Lin, J.; Wang, L.; Zhang, H. Discrete comprehensive learning particle swarm optimization algorithm with Metropolis acceptance criterion for traveling salesman problem. Swarm Evol. Comput. 2018, 42, 77–88.
24. Lawler, E.L.; Lenstra, J.K.; Kan, A.R.; Shmoys, D.B. The Traveling Salesman Problem, A Guided Tour to Combinatorial Optimization; John Wiley and Sons: Hoboken, NJ, USA, 1985.
25. Le Bouthillier, A.; Crainic, T.G.; Kropf, P. A guided cooperative search for the vehicle routing problem with time windows. IEEE Intell. Syst. 2005, 20, 36–42.
26. Gue, K.R.; Meller, R.D.; Skufca, J.D. The effects of pick density on order picking areas with narrow aisles. IIE Trans. 2006, 38, 859–868.
27. Battini, D.; Glock, C.H.; Grosse, E.H.; Persona, A.; Sgarbossa, F. Human energy expenditure in order picking storage assignment: A bi-objective method. Comput. Ind. Eng. 2016, 94, 147–157.
28. Chen, F.; Wang, H.; Qi, C.; Xie, Y. An ant colony optimization routing algorithm for two order pickers with congestion consideration. Comput. Ind. Eng. 2013, 1, 77–85.
29. Önüt, S.; Tuzkaya, U.R.; Doğaç, B. A particle swarm optimization algorithm for the multiple-level warehouse layout design problem. Comput. Ind. Eng. 2008, 4, 783–799.
30. Ratliff, H.; Rosenthal, A. Order-picking in a rectangular warehouse: A solvable case of the traveling salesman problem. Oper. Res. 1983, 3, 507–521.
31. Pansart, L.; Catusse, N.; Cambazard, H. Exact algorithms for the order picking problem. Comput. Oper. Res. 2018, 100, 117–127.
32. Koster, R.D.; Van der Poort, E. Routing order pickers in a warehouse: A comparison between optimal and heuristic solutions. IIE Trans. 1998, 30, 469–480.
33. Masae, M.; Glock, C.H.; Gross, E.H. Order picking routing in warehouses: A systematic literature review. Int. J. Prod. Econ. 2020, 224, 107564.
34. Roodbergen, K.J.; Koster, R. Routing methods for warehouses with multiple cross aisles. Int. J. Prod. Res. 2001, 9, 1865–1883.
35. Çelk, M.; Süral, H. Order picking under random and turnover-based storage policies in fishbone aisle warehouses. IIE Trans. 2014, 3, 283–300.
36. Calzavara, M.; Glock, C.H.; Gross, E.; Sgarbossa, F. An integrated storage assignment method for manual order picking warehouses considering cost, workload and posture. Int. J. Prod. Res. 2019, 57, 2392–2408.
37. Lu, W.; McFarlane, D.; Giannikas, V.; Zhang, Q. An algorithm for dynamic order-picking in warehouse operations. Eur. J. Oper. Res. 2016, 1, 107–122.
38. Altazari, S.A.; Ammouri, M.M. Concurrent manual-order-picking warehouse design: A simulation-based design of experiments approach. Int. J. Prod. Res. 2018, 56, 7103–7121.
39. Caron, F.; Marchet, G.; Perego, A. Routing policies and COI-based storage policies in picker-to-part systems. Int. J. Prod. Res. 1998, 36, 713–732.
40. Petersen, C. An evaluation of order picking routing policies. Int. J. Oper. Prod. Manag. 1997, 17, 1098–1111.
41. Petersen, C. The impact of routing and storage policies on warehouse efficiency. Int. J. Oper. Prod. Manag. 1999, 19, 1053–1064.
42. Rao, S.S.; Adil, G.K. Optimal class boundaries, number of aisles, and pick list size for low-level order picking systems. IIE Trans. 2013, 45, 1309–1321.
43. Silva, A.; Roodbergen, K.J.; Coelho, L.C.; Darvish, M. Estimating optimal ABC zone sizes in manual warehouses. Int. J. Prod. Econ. 2022, 252, 108579.
44. Pardo, E.; Gil-Borrás, S.; Alonso-Ayuso, A.; Duarte, A. Order batching problems: Taxonomy and literature review. Eur. J. Oper. Res. 2023, in press.
45. Vanheusden, S.; van Gils, T.; Ramaekers, K.; Cornelissens, T.; Caris, A. Practical factors in order picking planning: State-of-the-art classification and review. Int. J. Prod. Res. 2023, 61, 2032–2056.
46. Floyd, R.W. Algorithm 97: Shortest path. Commun. ACM 1962, 5, 345–354.
47. Warshall, S. A theorem on Boolean matrices. J. ACM 1962, 9, 11–12.
48. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the 6th International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995.
  49. Banks, A.; Vincent, J.; Anyakoha, C. A review of particle swarm optimization. Part I: Background and development. Nat. Comput. 2007, 6, 467–484. [Google Scholar] [CrossRef]
  50. Banks, A.; Vincent, J.; Anyakoha, C. A review of particle swarm optimization. Part II: Hybridization, combinatorial, multicriteria and constrained optimization, and indicative applications. Nat. Comput. 2008, 7, 109–124. [Google Scholar] [CrossRef]
  51. Jain, N.K.; Nangia, U.; Jain, J. A review of particle swarm optimization. J. Inst. Eng. Ser. B 2018, 99, 407–411. [Google Scholar] [CrossRef]
  52. Asghari, M.; Afshari, H.; Mirzapour, S.M.J.; Fathollahi-Fard, A.M.; Dulebenets, M. A Pricing and advertising decisions in a direct-sales closed-loop supply chain. Comput. Ind. Eng. 2022, 171, 108439. [Google Scholar] [CrossRef]
  53. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, Word Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  54. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef] [Green Version]
  55. Al-Awami, A.T.; Zerguine, A.; Cheded, L.; Zidouri, A.; Saif, W. A new modified particle swarm optimization algorithm for adaptive equalization. Digit. Signal Process. 2011, 21, 195–207. [Google Scholar] [CrossRef]
  56. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; pp. 4104–4108. [Google Scholar]
  57. Laskari, E.C.; Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization for integer programming. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; pp. 1582–1587. [Google Scholar]
  58. Shi, X.H.; Liang, Y.C.; Lee, H.P.; Lu, C.; Wang, Q.X. Particle swarm optimization-based algorithms for TSP and generalized TSP. Inf. Process. Lett. 2007, 103, 169–176. [Google Scholar] [CrossRef]
  59. Chen, W.N.; Zhang, J.; Chung, H.S.; Zhong, W.L.; Wu, W.G.; Shi, Y.H. A novel set-based particle swarm optimization method for discrete optimization problems. IEEE Trans. Evol. Comput. 2009, 14, 278–300. [Google Scholar] [CrossRef]
  60. Wang, K.P.; Huang, L.; Zhou, C.G.; Pang, W. Particle swarm optimization for traveling salesman problem. In Proceedings of the 2003 International Conference on Machine Learning and Cybernetics, Xi’an, China, 2–5 November 2003; pp. 1583–1585. [Google Scholar]
  61. Zhong, W.H.; Zhang, J.; Chen, W.N. A novel discrete particle swarm optimization to solve traveling salesman problem. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 3283–3287. [Google Scholar]
  62. Grasas, A.; Juan, A.A.; Faulin, J.; de Armas, J.; Ramalhinho, H. Biased randomization of heuristics using skewed probability distributions: A survey and some applications. Comput. Ind. Eng. 2017, 110, 216–228. [Google Scholar] [CrossRef]
  63. Juan, A.A.; Faulin, J.; Ruiz, R.; Barrios, B.; Caballé, S. The SR-GCWS hybrid algorithm for solving the capacitated vehicle routing problem. Appl. Soft Comput. 2010, 10, 215–224. [Google Scholar] [CrossRef]
  64. Theys, C.; Bräysy, O.; Dullaert, W.; Raa, B. Using a TSP heuristic for routing order pickers in warehouses. Eur. J. Oper. Res. 2010, 200, 755–763. [Google Scholar] [CrossRef]
  65. Croes, G.A. A method for solving traveling salesman problems. Oper. Res. 1958, 6, 791–812. [Google Scholar] [CrossRef]
  66. Englert, M.; Rglin, H. Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP. Algorithmica 2014, 68, 190–264. [Google Scholar] [CrossRef] [Green Version]
  67. Mulder, S.A.; Wunsch, D.C. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks. Neural Netw. 2003, 16, 827–832. [Google Scholar] [CrossRef]
Figure 1. PSO pseudocode.
Figure 2. An example of graph representation.
Figure 3. The subgraph generated by a picking list.
Figure 4. Creation of the set H of the candidate locations.
Figure 5. Pseudocode concerning the generation of a new solution.
Figure 6. Pseudocode of the 2-Opt local search.
Figure 7. The flowchart of the proposed metaheuristic.
Figure 8. Template of the layouts adopted.
Figure 9. Comparison between the ACO and the proposed PSO, with the same number of generated routes.
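Figure 6 gives the paper's pseudocode of the 2-Opt local search. As an illustrative sketch only (not the authors' implementation), a first-improvement 2-Opt descent on an open picking tour can be written as follows; the function name and the list-of-lists distance matrix are assumptions made for the example:

```python
def two_opt(tour, dist):
    """First-improvement 2-Opt descent on an open picking tour:
    reverse the segment tour[i:j] whenever the reversal shortens
    the total travelled distance; stop when no move improves."""
    def length(t):
        return sum(dist[t[a]][t[a + 1]] for a in range(len(t) - 1))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour
```

The sketch fixes the first location of the tour (the depot) and accepts the first improving reversal found, which matches the "local search on a starting solution" role that the 2-Opt step plays in the metaheuristic.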
Table 1. PSO Parameters.

Symbol | Definition
i | A generic particle of the swarm.
C | A positive constant.
g_best(t) | The overall best solution found until iteration t.
n | The dimension of the solution space.
N | The number of particles of the swarm.
p_i^t | An n-dimensional vector that codifies the position of particle i at iteration t, i.e., a feasible solution of the problem.
pbest_i^t | The best solution found by particle i until iteration t.
t | The current iteration, or epoch, of the algorithm.
u | A generic random number uniformly distributed in the range [0, 1].
v_i^t | An n-dimensional vector that codifies the velocity of particle i at iteration t.
ω | The inertia factor, a random number uniformly distributed in the range [0, 1].
χ | The constraining factor, a constant with a value between 0 and 1.
f(p) | The objective function of the problem.
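For reference, the canonical update that these symbols describe (the inertia weight of Shi and Eberhart [53] combined with the constriction factor of Clerc and Kennedy [54]) can be sketched for a single particle as below; the function name and list-based vector encoding are assumptions made for the example:

```python
import random

def pso_step(p, v, pbest, gbest, omega, chi, C):
    """One canonical PSO update for a single particle, in the
    notation of Table 1: p, v are p_i^t and v_i^t; pbest and gbest
    are the particle's and the swarm's best solutions; omega is the
    inertia factor, chi the constraining factor, C a positive
    constant, and each random.random() call is a U[0, 1] draw u."""
    new_v = [chi * (omega * v[d]
                    + C * random.random() * (pbest[d] - p[d])
                    + C * random.random() * (gbest[d] - p[d]))
             for d in range(len(p))]
    new_p = [p[d] + new_v[d] for d in range(len(p))]
    return new_p, new_v
```

Pkt_PSO does not use this continuous update directly (picking tours are discrete), but it is the scheme the table's symbols originate from.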
Table 2. Variables related to the picking problem and to the layout of the warehouse.

Symbol | Definition
P | The picking list P = {A, B, C, …}, containing all the storage locations that must be visited by the picker to complete an order. Locations in P are denoted with uppercase letters and are not sorted; they may be listed in lexicographical order merely for convenience.
p | The picking tour p = (h_1, h_2, …, h_M), containing all the locations of P sorted in the order that the picker must follow during the tour. Please note that locations in p are denoted with the letter h.
j | The positional index j = 1, …, M that specifies at which point of the tour a location is visited.
h_j | The j-th location to be visited, i.e., p(j) = h_j.
d(x, y) | The minimum distance between locations x and y, with x, y ∈ P. The minimum distance is always obtained using the Floyd–Warshall algorithm.
S(x, y) | The set of all the storage locations l located within the shortest path linking x and y. That is, to move from x to y, the picker also passes in front of all the locations contained in S(x, y).
s | The cardinality of the set S(x, y), i.e., the number of locations l in S(x, y).
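Table 2 states that the distances d(x, y) are always obtained with the Floyd–Warshall algorithm [46,47]. A minimal sketch of that computation, assuming (for illustration only) integer-indexed nodes and an undirected warehouse graph given as a dictionary of edge lengths:

```python
def floyd_warshall(n, edges):
    """All-pairs shortest distances d(x, y) on an n-node graph.
    `edges` maps (x, y) pairs to segment lengths; the graph is
    treated as undirected, since an aisle can be walked both ways."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (x, y), w in edges.items():
        d[x][y] = min(d[x][y], w)
        d[y][x] = min(d[y][x], w)
    for k in range(n):              # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Since the warehouse layout is fixed, this O(n^3) computation is performed once, and every d(x, y) needed during the search becomes a constant-time lookup.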
Table 3. Variables related to the parameters of the algorithm.

Symbol | Definition
p_i^t | A complete picking tour (solution) generated for particle i at iteration t. By "complete" we mean that p_i^t is made of all M locations.
p_{i,m}^t | A partial picking tour generated for particle i at iteration t. The tour is partial because it is made of m locations, with m < M.
T | The maximum number of iterations before the algorithm stops.
greedy | A greedy solution shared by all the N particles of the swarm.
rnd_intention_i^t | A random solution generated for particle i at iteration t.
H ⊆ P | A subset of P containing the four candidate locations that can be selected to generate a new tour.
h_k | One of the four candidate locations contained in H.
d(h_j, h_k) | The distance between the last (j-th) location of the partial tour p_{i,j}^t and the candidate location h_k.
w_i^t | A four-element vector w_i^t = (w_1, w_2, w_3, w_4) that determines the tendency of particle i at iteration t to change its position, moving toward one of the reference positions.
D^t | A rescaled distance vector generated at each iteration t, which is used to create the new solutions p_i^t, for all i.
H | The array of the candidate locations sorted in ascending order of their rescaled distance.
g | The greediness factor, used to quantify the propensity of a particle to exploit the greedy solution.
α | The shape factor of the quasi-geometric distribution used to select a candidate location h_k among the four included in H.
γ | The local search probability, i.e., the probability of performing a 2-Opt-based neighbourhood search on a starting solution.
β | The probability of exploiting the aisle-based generation mechanism; in a certain way, this probability codifies the propensity of a particle to follow a predetermined path inside an aisle of the warehouse.
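Table 3 mentions a quasi-geometric distribution, shaped by α, for selecting a candidate h_k from the sorted array H. One common way to implement such a biased selection (in the spirit of the biased-randomization literature [62,63]) is to sample a geometric quantile and fold it back onto the candidate array; the exact distribution used in the paper may differ, so the sketch below is only indicative, and the function name is an assumption:

```python
import math
import random

def pick_candidate(H, alpha):
    """Biased selection of one candidate location from the array H,
    sorted by ascending rescaled distance. The index is a
    quasi-geometric draw shaped by alpha: index 0 (the most
    attractive candidate) is chosen with probability about alpha,
    index 1 with probability about alpha * (1 - alpha), and so on."""
    u = 1.0 - random.random()            # u in (0, 1], avoids log(0)
    k = int(math.log(u) / math.log(1.0 - alpha))
    return H[k % len(H)]                 # fold overshoots back onto H
```

With a high α the selection is nearly greedy (the closest candidate almost always wins), while a low α spreads the probability over all four candidates, favouring exploration.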
Table 4. Parameter tuning results (in red, the experiments where Pkt_PSO did not find the global optimum; in green, the best parameter combination). Each cell reports Mean (Std. Dev.).

Parameters | Picking List 1 | Picking List 2 | Picking List 3 | Picking List 4 | Picking List 5
g = 0.8, α = 0.70, β = 0.8, γ = 0.20 | 533 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.8, α = 0.70, β = 0.8, γ = 0.05 | 533 (0) | 619 (0) | 502 (0) | 433 (0) | 389 (0)
g = 0.8, α = 0.70, β = 0.2, γ = 0.20 | 512 (0) | 619 (0) | 508 (3) | 432 (0) | 389 (0)
g = 0.8, α = 0.70, β = 0.2, γ = 0.05 | 512 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.8, α = 0.20, β = 0.8, γ = 0.20 | 534 (0) | 619 (0) | 508 (2) | 432 (0) | 389 (0)
g = 0.8, α = 0.20, β = 0.8, γ = 0.05 | 534 (0) | 619 (0) | 508 (3) | 432 (0) | 389 (0)
g = 0.8, α = 0.20, β = 0.2, γ = 0.20 | 513 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.8, α = 0.20, β = 0.2, γ = 0.05 | 513 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.70, β = 0.8, γ = 0.20 | 512 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.70, β = 0.8, γ = 0.05 | 512 (0) | 619 (0) | 503 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.70, β = 0.2, γ = 0.20 | 512 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.70, β = 0.2, γ = 0.05 | 512 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.20, β = 0.8, γ = 0.20 | 512 (0) | 619 (0) | 502 (0) | 438 (2) | 389 (0)
g = 0.1, α = 0.20, β = 0.8, γ = 0.05 | 512 (0) | 619 (0) | 503 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.20, β = 0.2, γ = 0.20 | 512 (0) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
g = 0.1, α = 0.20, β = 0.2, γ = 0.05 | 520 (3) | 619 (0) | 502 (0) | 432 (0) | 389 (0)
Global Optimum | 512 [-] | 619 [-] | 502 [-] | 432 [-] | 389 [-]
Table 5. Comparison of the results, with Avg. Dev. expressed in meters; best results are highlighted in bold. Each cell reports Avg. Dev. (N. Wins).

Layout | Pick. List Length | NeNe | Pkt_PSO | PSO_1 [22] | PSO_2 [23] | ACO [9] | 2-Opt
L1 | 20 | 75.2 (0/10) | 0 (10/10) | 537.1 (0/10) | 0.3 (8/10) | 8.0 (3/10) | 34.9 (0/10)
L1 | 30 | 88.1 (0/10) | 0 (10/10) | 1095.2 (0/10) | 0.3 (8/10) | 19.1 (0/10) | 59.7 (0/10)
L1 | 40 | 90.8 (0/10) | 0 (10/10) | 1711.3 (0/10) | 0.6 (6/10) | 27.2 (0/10) | 67.8 (0/10)
L2 | 20 | 74.0 (0/10) | 0 (10/10) | 390.5 (0/10) | 0.0 (10/10) | 5.5 (0/10) | 66.8 (0/10)
L2 | 30 | 108.0 (0/10) | 0 (10/10) | 797.2 (0/10) | 0.0 (10/10) | 19.9 (0/10) | 76.6 (0/10)
L2 | 40 | 118.7 (0/10) | 0 (10/10) | 1160.9 (0/10) | 0.2 (9/10) | 28.9 (0/10) | 113.9 (0/10)
L3 | 20 | 162.3 (0/10) | 0 (10/10) | 720.7 (0/10) | 0.0 (10/10) | 16.1 (0/10) | 112.8 (0/10)
L3 | 30 | 129.4 (0/10) | 0 (10/10) | 1262.8 (0/10) | 0.0 (10/10) | 19.4 (0/10) | 124.1 (0/10)
L3 | 40 | 200.2 (0/10) | 0 (10/10) | 2089.1 (0/10) | 0.0 (10/10) | 41.8 (0/10) | 186.8 (0/10)
Table 6. Comparison of computational times in seconds (average values).

Layout | Picking List Length | Pkt_PSO | PSO_1 [22] | PSO_2 [23] | ACO [5] | 2-Opt
L1 | 20 | 32.57 | 0.49 | 22.56 | 20.55 | 0.02
L1 | 30 | 139.44 | 0.99 | 44.66 | 39.02 | 0.09
L1 | 40 | 410.84 | 2.03 | 88.44 | 53.76 | 0.36
L2 | 20 | 28.47 | 0.50 | 24.70 | 21.15 | 0.02
L2 | 30 | 113.20 | 1.17 | 42.75 | 34.46 | 0.11
L2 | 40 | 371.87 | 2.07 | 77.05 | 51.68 | 0.33
L3 | 20 | 32.35 | 0.53 | 22.51 | 22.05 | 0.02
L3 | 30 | 136.15 | 1.24 | 44.52 | 35.24 | 0.11
L3 | 40 | 425.26 | 2.39 | 74.59 | 49.94 | 0.38
Table 7. Three-way ANOVA comparing the performance of ACO and Pkt_PSO; *** denotes a highly significant factor.

Factor | Sum_Sq (Type III) | df | F | p-value
Intercept *** | 5.15 × 10^7 | 1 | 238,180.2 | 9.78 × 10^−141
ALG | 7.84 × 10^1 | 1 | 0.3623 | 5.49 × 10^−1
LT *** | 1.97 × 10^6 | 2 | 4542.17 | 4.36 × 10^−83
PL *** | 1.01 × 10^6 | 2 | 2351.34 | 8.64 × 10^−72
(LO:PL) *** | 5.32 × 10^4 | 4 | 61.54 | 1.18 × 10^−23
Residuals | 1.73 × 10^4 | 80 | — | —

Share and Cite

MDPI and ACS Style

Bertolini, M.; Mezzogori, D.; Zammori, F. Enhancing Manual Order Picking through a New Metaheuristic, Based on Particle Swarm Optimization. Mathematics 2023, 11, 3077. https://doi.org/10.3390/math11143077
