1. Introduction
Warehousing is of paramount importance in supply chains and logistics, as a well-designed warehousing policy leads to benefits within the company and across the entire supply chain [1,2]. Most of the time, high operating performance is achieved using highly efficient Automated Storage and Retrieval Systems (AS/RS) [3,4]. However, in the case of small-item storage and/or spare-parts handling, manual warehouses are still more efficient and represent the standard solution [5,6]. In this case, order picking (the problem of collecting a set of items in a warehouse) is an essential issue of order fulfilment, with an immediate impact on customer satisfaction, business reputation and profitability. Picking is generally a major bottleneck in logistics and one of the most expensive and labour-intensive logistics activities, which is why improving this activity is considered a top priority for many companies [7].
When the main operations concern pallet decomposition, short-period storage, pallet recomposition, and shrink wrapping of new pallets, a picker-to-goods strategy is generally adopted [8,9]. Accordingly, the picker moves in the warehouse to collect the required number of cases of each Stock Keeping Unit (SKU) included in the picking list. As travelling accounts for the largest part of the time required to complete an order, many picking strategies have been proposed to reduce the distance covered by the picker. Among them, batch picking, zone picking, wave picking, voice picking and vision picking are certainly the most studied [10,11]. However, all the above-mentioned strategies require specific operating conditions and/or costly equipment. Batch and wave picking assume that items can be sorted after retrieval [12], zone picking has proven to be efficient only if a very high number of SKUs are stored in the warehouse, while vision and voice picking require a computerized control system and expensive augmented reality glasses or Bluetooth headphones [13].
For these reasons, the basic picker-to-goods strategy is still the best choice in many industrial contexts [14], and it is the most widespread policy in the context of small-item and spare-parts handling [15]. According to this policy, the picker is in charge of completing an entire order and, to do so, he/she can move freely in each area of the warehouse to collect the cases (of the SKUs indicated in the picking list) needed to form a pallet. Its implementation is simple and cheap, as it does not require any expensive equipment or additional operations such as batch composition or sorting. Operationally, the only decision concerns the route that the picker must follow, that is, the order in which storage locations must be visited to complete the picking list. This sequence, in fact, determines the distance covered by the picker and the total travelling time. Optimizing the route assigned to the picker is, thus, of paramount importance to improve productivity and to cut costs. Not surprisingly, the literature in this field is rather extensive, and many interesting and efficient metaheuristics have already been introduced to minimize the total travelling time.
To the authors' knowledge, the most recent and most efficient framework is the one proposed by De Santis et al. [9], which is also characterized by an extraordinary speed of convergence. Although this feature is desirable, it also suggests that most of the provided solutions could be points of local minimum. Therefore, we believe that there is still room for improvement, and to test this hypothesis we propose a novel metaheuristic called Pkt_PSO. As the name suggests (where Pkt stands for picking time), Pkt_PSO is an evolutionary metaheuristic inspired by the well-known Particle Swarm Optimization (PSO). However, to exploit the peculiar structure of the considered problem, many features of the standard PSO have been readapted and modified. Making major modifications to the PSO is indeed a common practice, and it is generally recommended to exploit the potentialities of this approach (see for example [16,17]). This readaptation is, therefore, the main contribution of the paper. The paper is also novel because, to the best of our knowledge, PSO has never been applied to the order-picking domain, although some recent works have shown its effectiveness and flexibility in solving very difficult tasks, such as Bayesian network training [18,19] and different instances of the Travelling Salesman Problem (TSP) [20,21]. The latter is a classic operations research problem very similar to the one considered herein. Assessing the operational benefits that can be achieved using a PSO-based framework is the second main contribution of the paper.
The rest of the paper is organized as follows. Section 2 presents a brief introduction to the state-of-the-art approaches for travelling-time minimization in manual warehouses. In the same section, literature gaps are highlighted and the research questions addressed in this paper are clearly defined. For the sake of clarity, Section 3 pinpoints the main features of the standard PSO and Section 4 formalizes the problem and its possible formulation as a TSP. The proposed algorithm (Pkt_PSO) is introduced in Section 5, where all the new procedures and operators developed to exploit the peculiar structure of the considered problem are explained. In view of the experimental section used to validate Pkt_PSO, its operating parameters are fine-tuned in Section 6. Next, in Section 7 its performance is tested in nine different scenarios (obtained by changing the layout of the warehouse and the length of the picking list) and compared to that of five benchmarks. One of the benchmarks [9] is a top-of-the-class metaheuristic specifically designed for the picking problem in manual warehouses, two [22,23] are outstanding PSO-based approaches for the TSP, while the last two are the well-known full-depth 2-Opt and the nearest neighbour constructive procedure. Finally, conclusions and directions for future works are drawn in Section 8.
For the sake of clarity, all the abbreviations used in the paper are listed at the end of the paper in the "Abbreviations" section.
3. Some Background on Particle Swarm Optimization
PSO is a famous metaheuristic optimization technique inspired by flocks of birds. It dates back to 1995, when Eberhart and Kennedy proposed PSO as an optimization technique for continuous functions [48]. Since then, as stated in the notable works by Banks et al. [49,50] and in the comprehensive literature review by Jain et al. [51], PSO has been widely applied in many fields of operational research, both for continuous and discrete problems, including closed-loop supply chain network optimisation [52]. PSO is an evolutionary and population-based metaheuristic that exploits a population of N candidate solutions, which are evolved at each step of the algorithm. For the sake of clarity, the basic notation used to describe PSO is shown in Table 1.
The whole population is referred to as a swarm, while each solution $x_i$ of the swarm is referred to as a particle. At each step (or iteration) $t$, each particle moves in the solution space to generate a new and potentially improved solution. Specifically, its movement is formally defined by Equation (1):

$$x_i^{t+1} = x_i^t + v_i^{t+1} \quad (1)$$

where $v_i^t$ and $x_i^t$ are, respectively, the velocity and the position of particle $i$ at iteration $t$. Both $v_i^t$ and $x_i^t$ are vectors with a number of components equal to the dimensions of the solution space: the position $x_i^t$ corresponds to the current solution coded by particle $i$, and the velocity $v_i^t$ defines how far it will move at the next step.
At each iteration, the velocity of each particle is updated as in Equation (2):

$$v_i^{t+1} = v_i^t + C_1 r_1 \left( p_i^{best} - x_i^t \right) + C_2 r_2 \left( g^{best} - x_i^t \right) \quad (2)$$

where:
- $g^{best}$ is the best solution found up to iteration $t$ by the whole swarm;
- $p_i^{best}$ is the best solution found up to iteration $t$ by particle $i$;
- $r_1$ and $r_2$ are two random numbers uniformly distributed in the range [0, 1], regenerated at each iteration for each particle $i$;
- $C_1$ and $C_2$ are fixed parameters.
According to Equation (2), each particle tries to keep moving along the direction coded by $v_i^t$ which, for this reason, is also referred to as the particle's intention. However, the trajectory of the particle is subject to small deviations that are related to its experience and to the knowledge (or culture) of the whole swarm. Experience and culture are measured as the distance of the current position from $p_i^{best}$ and from $g^{best}$, respectively. The magnitude of both deviations is random but limited by the two constants $C_1$ and $C_2$. In this way, the algorithm balances the exploration of new solutions and the exploitation of the best solutions found so far. When $C_1 = C_2 = 0$, all particles fly keeping their current speed and direction until they finally hit the boundaries of the search space. If $C_2 = 0$, particles behave as if they were independent individuals, whereas if $C_1 = 0$, they are all attracted by the same point. A common choice is to set $C_1 = C_2 = 2$. In this way, all particles are attracted towards the midpoint between $p_i^{best}$ and $g^{best}$, and the strength of this attraction is not negligible (as it would be with smaller values of $C_1$ and $C_2$).
To summarize, the standard PSO works as follows:
- At every iteration $t$, each particle moves from $x_i^t$ to $x_i^{t+1}$ according to Equation (1).
- If the new solution is better than $p_i^{best}$, the current best solution found by particle $i$ is updated accordingly.
- When the whole swarm has moved, $g^{best}$ is updated and the velocity of each particle is updated using Equation (2).
- The process is repeated until a stopping criterion is met.
For the sake of clarity, the pseudocode is shown in Figure 1, where it is assumed that the PSO ends when the number of iterations without improvement exceeds a predefined threshold value. The pseudocode is written in a Python 3.9© style; hence, indentation is not only for readability but also defines the different blocks of code.
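To complement the pseudocode of Figure 1, a minimal runnable sketch of the standard continuous PSO is reported below. It mirrors Equations (1) and (2) and uses the same stopping criterion (a maximum number of iterations without improvement); the parameter values and the sphere test function are illustrative assumptions, not the settings used later in the paper.

```python
import random

def standard_pso(objective, dim, n_particles=30, c1=2.0, c2=2.0,
                 bounds=(-10.0, 10.0), max_stall=100):
    lo, hi = bounds
    # Random initial positions, zero initial velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]                       # personal bests
    p_val = [objective(xi) for xi in x]
    g_idx = min(range(n_particles), key=lambda i: p_val[i])
    g_best, g_val = p_best[g_idx][:], p_val[g_idx]     # global best
    stall = 0
    while stall < max_stall:   # stop after max_stall iterations w/o improvement
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()  # regenerated per particle
            for d in range(dim):
                # Equation (2): velocity update
                v[i][d] += (c1 * r1 * (p_best[i][d] - x[i][d])
                            + c2 * r2 * (g_best[d] - x[i][d]))
                # Equation (1): position update
                x[i][d] += v[i][d]
            f = objective(x[i])
            if f < p_val[i]:                           # update personal best
                p_val[i], p_best[i] = f, x[i][:]
        # After the whole swarm has moved, update the global best
        i_min = min(range(n_particles), key=lambda i: p_val[i])
        if p_val[i_min] < g_val:
            g_val, g_best, stall = p_val[i_min], p_best[i_min][:], 0
        else:
            stall += 1
    return g_best, g_val

# Example: minimize the 2-D sphere function f(x) = x1^2 + x2^2
best, value = standard_pso(lambda p: sum(c * c for c in p), dim=2)
```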
At present, the original PSO has been modified and extended in many ways. A significant contribution is that of [53], where a new parameter called inertia is introduced, and the updating procedure is modified as in Equation (3):

$$v_i^{t+1} = w_i^t \cdot v_i^t + C_1 r_1 \left( p_i^{best} - x_i^t \right) + C_2 r_2 \left( g^{best} - x_i^t \right) \quad (3)$$

where the inertia $w_i^t$ is a random number uniformly distributed on the interval [0, 1], generated for each particle $i$ at each iteration $t$. Hence, the purpose of $w_i^t$ is to mitigate the effect of the particles' intention, and in many applications its value is progressively reduced as the number of iterations increases [53]. The resulting effect is to force the exploration of a wider area of the solution space during the first runs of the algorithm and to exploit the local optima (a sort of neighbour search) when the procedure is almost at the end.
As an alternative, [54] proposed updating the velocity as in Equation (4):

$$v_i^{t+1} = \chi \left[ v_i^t + C_1 r_1 \left( p_i^{best} - x_i^t \right) + C_2 r_2 \left( g^{best} - x_i^t \right) \right] \quad (4)$$

where $\chi$ is a constant value (in the range [0, 1]) called the constriction factor, which should ensure better reliability and greater stability of the results. As for the inertia, to assure convergence, some authors have also proposed a time-dependent constriction factor, as in [55] where, as the particle becomes closer to the global minimum, a lower value of the constriction factor is used, which helps in stabilizing the algorithm with fast convergence. We anticipate that neither the inertia nor the constriction factor will be used in the metaheuristic proposed here.
All the above-mentioned approaches work well when the objective function is continuous and it is possible to define a velocity vector $v$ on the solution domain. When, as in most industrial settings, problems are binary and/or discrete, their industrial relevance becomes rather scarce. For this reason, many scholars have proposed discretized versions of the PSO. For instance, [56] proposed using a sigmoid function to convert continuous variables into binary ones, whereas [57] suggested working with continuous variables and rounding off the obtained solution to the nearest integer.
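As an illustration of the first idea, the following sketch (our own minimal example, not the exact formulation of [56]) shows how a continuous velocity vector can drive a binary position update: each component is squashed by a sigmoid and used as the probability that the corresponding bit equals one.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(velocity):
    # Each bit is set to 1 with probability sigmoid(v_d), as in binary PSO
    return [1 if random.random() < sigmoid(v_d) else 0 for v_d in velocity]

# A strongly positive component almost surely yields a 1, a strongly
# negative one a 0, and a null component behaves like a fair coin flip.
print(binary_position_update([4.0, -4.0, 0.0]))
```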
Concerning the discrete domain, PSO has recently been proposed as an effective way to tackle NP-hard routing problems such as the TSP. Among the first contributions, we can cite the relevant works by [58,59]. However, the real burst of interest originated from [20,21], who showed the superiority of PSO in tackling these NP-hard problems. The point to note is that, when operating over a discrete domain, to outperform other metaheuristics, PSO must be subject to major changes that can make it very different from the standard, continuous version. Hence, the main question is how the concept of speed can be redefined so that the distance between two discrete solutions can be measured. In this regard, two remarkable contributions were proposed by [22,60]. Specifically, letting $s_1$ and $s_2$ be two sequences representing the order in which $n$ nodes must be visited, both authors expressed speed as a function of the number of swaps needed to convert $s_1$ into $s_2$. Although the results are rather good, the performances cannot be compared with those obtainable with PSO implementations where the concept of speed is completely overturned, as in the works by [23,61]. For a more in-depth description of the latter approach, please see Appendix A, which describes the algorithms by [22,23] and other approaches used as benchmarks.
5. Pkt_PSO, the Proposed Metaheuristic
Since the TSP is a well-known NP-hard problem, a novel metaheuristic is proposed herein to solve the order-picking problem. The metaheuristic takes inspiration from the PSO and so, from here on, it will be referred to as Pkt_PSO.
5.1. Notation
As discussed in Section 4, the problem input is a picking list. We denote the picking list as $L = \{A, B, C, \dots\}$, where the uppercase letters are the $n$ locations to be visited. Locations are not sorted, although they can be arranged in lexicographical order just for convenience.
Similarly, a feasible solution is a sorted array of $n + 2$ elements, which defines a picking tour. Since a tour always starts and ends at the I/O point, the first and last elements can be removed, and the tour can be fully represented with an array of only $n$ elements. We refer to this array as the picking tour and we denote it as $\pi = [\pi_1, \pi_2, \dots, \pi_n]$, where each element $\pi_j$ identifies the $j$-th location that must be visited by the picker. Consider, for instance, a picking list with six codes and a picking tour $\pi$. In this case, the first stop is at location $\pi_1$, the second one is at location $\pi_2$ and so on, until location $\pi_6$ is reached and the picker completes the tour (i.e., returns to the I/O point).
The objective function evaluated for a specific tour is therefore computed as in Equation (6):

$$f(\pi) = d(IO, \pi_1) + \sum_{j=1}^{n-1} d(\pi_j, \pi_{j+1}) + d(\pi_n, IO) \quad (6)$$

where $d(x, y)$ is the minimal distance between location $x$ and location $y$, and $IO$ is the input–output point.
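A direct translation of Equation (6) into code is straightforward; the sketch below assumes a precomputed distance matrix stored as a dictionary of dictionaries, which is our own illustrative choice of data structure.

```python
def tour_length(tour, dist, io="IO"):
    """Total travelling distance of a picking tour (Equation (6)).

    `tour` is the ordered list of the n locations to visit and
    `dist[a][b]` is the minimal distance between locations a and b;
    the tour implicitly starts and ends at the I/O point."""
    total = dist[io][tour[0]] + dist[tour[-1]][io]
    for a, b in zip(tour, tour[1:]):
        total += dist[a][b]
    return total
```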
For the sake of clarity, before delving into the details of Pkt_PSO, the notation of the problem is fully introduced in Table 2 and Table 3. This notation integrates that of Table 1, which is valid for a generic PSO.
5.2. Basic Framework of the Algorithm
The iterative procedure of Pkt_PSO follows a standard framework, like the one described in Section 3. Specifically, at every iteration $t$:
- Each particle $i$ moves from its previous position $x_i^{t-1}$ to the new position $x_i^t$. This part is the core of the optimization procedure, and it is explained in full detail in Section 5.3 and Section 5.4.
- Next, with probability $P_{LS}$, a neighbour search is made on $x_i^t$ using a 2-Opt-based approach, as explained in Section 5.5.
- If the solution has improved, the personal best of particle $i$ is updated, i.e., $p_i^{best} = x_i^t$.
- When all particles have moved to a new position, the minimum personal best is computed and compared to the global best. In case of improvement, the global best is updated accordingly, i.e., $g^{best}$ is replaced by the best personal best found.
- Lastly, after $t_{max}$ iterations (or when a termination criterion is met), the algorithm ends and the best solution found so far is returned as the output. Since optimality cannot be guaranteed, this solution is referred to as a pseudo-optimal solution.
5.3. New Solution Generation
Any time particle $i$ moves, a new solution $x_i^t$ is generated in an element-by-element way, by sequentially adding one location at a time to an initially empty tour. The locations used to form $x_i^t$ are sequentially extracted from four reference solutions, namely: $p_i^{best}$, $g^{best}$, the greedy solution $GS$ and the random solution $RS_i^t$. As revealed by the notation, $GS$ is a solution that depends neither on iteration $t$ nor on particle $i$. As better described in Section 5.4, this solution is shared by all the particles of the swarm, and it is generated with a greedy algorithm at the very beginning of Pkt_PSO. From this point onward, it is no longer changed. Conversely, $RS_i^t$ is a random solution that is regenerated at every iteration $t$ for each particle $i$.
Concerning how locations are extracted from the four reference solutions, the following procedure is used. Suppose that a total of $m$ locations have already been inserted in the partial tour $PT$, i.e., the last inserted location is $l^*$ and $PT = [\pi_1, \dots, \pi_m]$ with $\pi_m = l^*$. Now, we need to define, among the remaining $n - m$ locations of the picking list, which one should be inserted at position $m + 1$. To this aim, a set $O$ of possible options is created by taking all locations that come immediately after $l^*$ in the four reference solutions. More precisely, let $k_1$, $k_2$, $k_3$ and $k_4$ be the positions occupied by location $l^*$ in $p_i^{best}$, $g^{best}$, $GS$ and $RS_i^t$, respectively. So, we have that $O = \{o_1, o_2, o_3, o_4\}$, where:
- $o_1 = p_i^{best}[k_1 + 1]$,
- $o_2 = g^{best}[k_2 + 1]$,
- $o_3 = GS[k_3 + 1]$,
- $o_4 = RS_i^t[k_4 + 1]$.

Note that, if the partial tour is empty, the k-indexes cannot be defined and so they are conventionally set to zero, i.e., $k_1 = k_2 = k_3 = k_4 = 0$, so that each candidate is the first location of the corresponding reference solution.
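The construction of the option set can be sketched in a few lines of Python; the function and variable names are illustrative assumptions. The sketch also anticipates the two edge cases discussed at the end of this subsection (a successor already routed, or a last location with no successor in a reference solution).

```python
def candidate_options(partial, references):
    """Build the option set O for the next insertion (Section 5.3).

    `partial` is the partial tour PT built so far; `references` lists the
    four reference solutions [p_best, g_best, GS, RS], each a full tour."""
    options = []
    for ref in references:
        # k-index: position of the last inserted location l* in this
        # reference (conventionally zero, i.e., before the first element,
        # when the partial tour is still empty).
        k = ref.index(partial[-1]) if partial else -1
        if k + 1 < len(ref):            # a successor exists in this reference
            nxt = ref[k + 1]
            if nxt not in partial:      # skip locations already routed
                options.append(nxt)
    return options
```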
To better clarify this concept, the extraction procedure is exemplified in Figure 4 for a partial tour made of two locations. Since location $l^*$ is the last one inserted, to generate the set of the candidate locations $O$, we need to consider all the locations that come immediately after $l^*$ in the reference solutions. For instance, if in $g^{best}$ location $l^*$ is at position four (i.e., $k_2 = 4$), then $o_2 = g^{best}[5]$. Proceeding in the same way with the other reference solutions, the whole set $O$ is obtained. Therefore, the next and third location of the partial tour (indicated with three question marks in Figure 4) will be chosen among the elements of $O$.
Once the four candidate locations of $O$ have been identified, one of them must be selected and appended to the partial tour. As the objective is to minimize the overall travelling distance, a natural choice is to link the selection probability to the distance between the last location $l^*$ of the partial tour and each location of $O$. In other words, the greater $d(l^*, o_j)$ is, the lower the probability of selecting $o_j$, and vice versa. Also, to give a different importance to the reference solutions, the distances should be rescaled, taking the Hadamard (or elementwise) product with the weighting vector $W_i^t = [w_1, w_2, w_3, w_4]$. This is shown in Equation (7), where $d'$ is the rescaled distance vector:

$$d' = W_i^t \circ \left[ d(l^*, o_1),\, d(l^*, o_2),\, d(l^*, o_3),\, d(l^*, o_4) \right] \quad (7)$$

In general terms, neither $d(l^*, o_1)$ nor $d(l^*, o_2)$ should be modified, whereas $d(l^*, o_3)$ and $d(l^*, o_4)$ should be increased. The first two distances correspond to $p_i^{best}$ and $g^{best}$, which are the most important reference solutions, whereas the latter two correspond to $GS$ and $RS_i^t$, which are the less important ones. Therefore, a natural choice is to use the weight vector $W_i^t = [1, 1, 2 - G, 1 + G]$, for each particle $i$ and for each iteration $t$, where $G$ is a greedy factor that measures how much the greedy solution outweighs the random one. A neutral level is obtained for $G = 0.5$ (the greedy and the random solutions are treated equally), whereas an extreme value ($G = 0$ or $G = 1$) rescales one of the two distances by doubling the original one. We anticipate that, under some peculiar conditions, the weight vector $W_i^t$ can be modified for a limited number of particles, to trigger an antistagnation mechanism, as described in Section 5.6.
To implement this logic, first the four candidate locations of $O$ are sorted in ascending order of their rescaled distance, to form the sorted array $O^{sort}$. Next, an element of $O^{sort}$ is randomly extracted using the quasi-geometric distribution defined as in Equation (8) [62]:

$$P(pos) = \beta (1 - \beta)^{pos - 1} \quad (8)$$

where $\beta$ is a shape parameter in the interval [0, 1), $pos$ is the position occupied by a candidate location in the sorted array $O^{sort}$, and the probabilities are renormalized over the available positions. Therefore, the selection probability is maximal for the element at position one and minimal for the element at position four.
It is also important to stress that the use of a quasi-geometric distribution has a twofold purpose, as clearly explained by Juan et al. [63]. First, the adoption of a theoretical distribution provides implementation advantages with respect to the use of an empirical distribution. Second, the quasi-geometric distribution has a single parameter, which greatly simplifies the fine-tuning step.
We conclude by noting that two issues could occur during the selection process. Indeed:
- If a candidate location is already included in the partial tour, it is unusable and cannot be included in $O$.
- If location $l^*$ (the last one of the partial tour) matches the last location of a reference solution, the corresponding candidate location cannot be defined, since position $[k + 1]$ does not exist.

When this happens, the cardinality of $O$ is less than four; anyhow, provided there is at least one element in $O$, the selection procedure remains unaltered. Should $O$ be empty, the next location is randomly extracted among all the locations that have not yet been included in $PT$.
5.4. Exploiting the Structure of the Problem: Aisle-Based Construction and Greedy Algorithm
To exploit the underlying structure of the picking problem, an additional solution-generation scheme is considered. As described in Section 4.2, the picking problem is characterized by a very constrained layout that makes the path connecting two locations virtually unique [64]. In other words, whenever the picker goes from location $x$ to location $y$, he or she necessarily passes in front of other locations. This is typical when $x$ and $y$ are at opposite ends of the same aisle, so that, moving from $x$ to $y$, all the intermediate locations are necessarily encountered. For instance, referring to Figure 2, if the picker moves from location V to location D, he or she also passes in front of locations {O, J, C, U, P, I}.
Let $A(l^*, o_j)$ be the set of all the locations found on the minimal path connecting $l^*$ and $o_j$. To take advantage of this fact, when a new candidate location $o_j$ is selected, all the locations of $A(l^*, o_j)$ can also be included in the partial tour, with probability $P_A$. Therefore, the following two partial tours may be generated:
- $PT = [\pi_1, \dots, l^*, o_j]$, with probability $1 - P_A$;
- $PT = [\pi_1, \dots, l^*, a_1, \dots, a_q, o_j]$, with probability $P_A$,

where $q$ is the cardinality of $A(l^*, o_j)$. Please note that the first one is the standard tour, and the second is the one generated using the aisle-based construction algorithm.
Due to the very constrained graph of routes, we also suggest generating the greedy solution $GS$ (shared by all the particles of the swarm) using the well-known nearest neighbour algorithm. According to this algorithm, the picker starts the tour from the I/O point and repeatedly visits the nearest unvisited location until all the locations in $L$ have been visited. Indeed, due to the constraints induced by the layout on the feasible paths, this solution is known to be good, and it is thus advantageous to include it among the four reference solutions used to build a new tour. Also note that this solution never changes and remains the same from iteration $t = 1$ to iteration $t = t_{max}$. This choice may seem rather odd, but it can be justified as follows. At the single-item level, the nearest neighbour move is certainly optimal; it is therefore useful to ensure that this move is always available when a new tour is generated in an element-by-element way. This is exactly why the greedy solution must be included in the reference solutions.
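A minimal sketch of the nearest neighbour construction is given below, reusing the dictionary-based distance matrix assumed earlier.

```python
def greedy_solution(locations, dist, io="IO"):
    """Build the shared greedy solution GS with the nearest neighbour rule:
    starting from the I/O point, repeatedly move to the closest location
    of the picking list that has not been visited yet."""
    tour, current, remaining = [], io, set(locations)
    while remaining:
        nearest = min(remaining, key=lambda loc: dist[current][loc])
        tour.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return tour
```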
For clarity, Figure 5 provides a Python 3.9©-based pseudocode showing the generation of a new solution $x_i^t$.
5.5. Local Search via 2-Opt
Any time a new solution $x_i^t$ is generated, with probability $P_{LS}$, a local search is also performed. The local search is based on the 2-Opt, a simple local search algorithm originally introduced by Croes in 1958 [65], which is characterized by a special swapping mechanism.
Specifically, the proposed local search can be executed in a complete or in a full-depth mode, which differ in the number of 2-Opt exchange steps that they perform. If the complete 2-Opt is used, a new sequence is generated by inverting the order in which the locations between positions $j_1$ and $j_2$ of the tour are visited. This procedure is called complete because the inversion is made for each possible couple of positions $(j_1, j_2)$. To make a simple numerical example, let us consider a four-item sequence $[A, B, C, D]$. In this case, a complete 2-Opt search generates the following six tours: $[B, A, C, D]_{1,2}$, $[C, B, A, D]_{1,3}$, $[D, C, B, A]_{1,4}$, $[A, C, B, D]_{2,3}$, $[A, D, C, B]_{2,4}$ and $[A, B, D, C]_{3,4}$, where the subscripts refer to positions $j_1$ and $j_2$, respectively. Instead, when the full-depth 2-Opt is used, the complete 2-Opt is recursively repeated on each improved solution that is found. The two approaches are performed with probability $P_C$ and $P_{FD}$, respectively, with $P_C + P_{FD} = 1$. Also, to balance accuracy and computational time, we suggest keeping $P_{FD}$ very small, so that the complete mode is almost always preferred.
For clarity, Figure 6 shows the Python 3.9© pseudocode of the local search via 2-Opt.
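The sketch below is a minimal, runnable rendition of the two modes, consistent with the description above and with the pseudocode of Figure 6 (the variable names and the objective-function interface are our own assumptions).

```python
def complete_2opt(tour, length):
    """One complete 2-Opt pass: for every pair of positions (j1, j2),
    invert the segment between them and keep the best tour found.
    `length` is an objective function such as tour_length above."""
    best, best_val = tour, length(tour)
    for j1 in range(len(tour) - 1):
        for j2 in range(j1 + 1, len(tour)):
            candidate = tour[:j1] + tour[j1:j2 + 1][::-1] + tour[j2 + 1:]
            value = length(candidate)
            if value < best_val:
                best, best_val = candidate, value
    return best, best_val

def full_depth_2opt(tour, length):
    """Full-depth mode: repeat the complete pass on every improved
    solution until no further improvement is found."""
    value = length(tour)
    while True:
        new_tour, new_value = complete_2opt(tour, length)
        if new_value >= value:
            return tour, value
        tour, value = new_tour, new_value
```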
5.6. Antistagnation Mechanism
Let $i$ be a particle for which the full-depth 2-Opt, executed at iteration $t$, has failed (i.e., $x_i^t$ has not changed). To avoid stagnation, particle $i$ activates a procedure that is meant to escape possible local minima. The proposed procedure acts on the weight vector $W_i^t$, as explained next.
- Only for step $t$ and limited to particle $i$, the first component of the weight vector (the one associated with $p_i^{best}$) is set to an arbitrarily large constant $M$.
- Additionally, if $p_i^{best} = g^{best}$, the second component (the one associated with $g^{best}$) is also set to $M$.
- At the end of the generating process:
  - The current personal best is replaced by the solution just created, regardless of its objective function, i.e., $p_i^{best} = x_i^t$.
  - The weight vector is restored to its initial values, i.e., $W_i^t = [1, 1, 2 - G, 1 + G]$.

In this way, the locations suggested by the personal best (and, possibly, by the global best) become the least likely to be selected, and the particle tends to move away (or escape) from its current best. Metaphorically, limited to iteration $t$, particle $i$ temporarily forgets its personal best (and eventually the global best) and explores the solution space without making use of this past information.
5.7. Time Complexity of Pkt_PSO
For the sake of clarity, the whole algorithm is summarized in the flow diagram of Figure 7. From the diagram, it should be clear that in each epoch $t$, each particle $i$ generates at least one new solution using the element-by-element generation scheme, possibly enhanced by the aisle-based generation approach. Next, if a neighbour search is performed, an additional number of solutions is generated. This extra number can be easily determined if the complete 2-Opt is used: in this case $\binom{n}{2} = \frac{n(n-1)}{2}$ solutions will be considered, where $\binom{n}{2}$ is the binomial coefficient. Conversely, determining the exact number of different sequences generated by the full-depth 2-Opt before convergence is challenging, as this number varies depending on the problem size and characteristics. Anyhow, since the complexity of the full-depth 2-Opt is generally considered to be polynomial in $n$, we can confidently assume that the number of explored solutions remains far smaller than the worst-case bound, especially because, in relative terms, the number of iterations required by the 2-Opt to converge tends to decrease as the problem size increases. For a further discussion on this point, see [66].
Because of this, considering the number of particles of the swarm $N$ and the probability $P_{LS}$ of performing a neighbour search, the total number of solutions generated during $t_{max}$ epochs can be estimated as in Equation (9):

$$N_{sol} \approx t_{max} \cdot N \cdot \left[ 1 + P_{LS} \left( P_C \binom{n}{2} + P_{FD} \cdot S_{FD} \right) \right] \quad (9)$$

where $S_{FD}$ is the average number of solutions generated by the full-depth 2-Opt. For instance, using problem instances such as the ones discussed next in Section 7, we experimentally estimated typical values for $S_{FD}$ and for the other quantities of Equation (9).
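As a purely illustrative back-of-the-envelope check of Equation (9), the per-epoch count can be computed as follows; all numeric values below are hypothetical assumptions, not the measured ones.

```python
from math import comb

# Hypothetical parameter values, for illustration only
N = 30          # particles in the swarm
n = 30          # picking-list length
P_LS = 0.05     # probability of a local search
P_C, P_FD = 0.95, 0.05   # complete vs. full-depth mode (P_C + P_FD = 1)
S_FD = 2000     # assumed average solutions explored by a full-depth run

# Equation (9), restricted to a single epoch (t_max = 1)
per_epoch = N * (1 + P_LS * (P_C * comb(n, 2) + P_FD * S_FD))
print(round(per_epoch))   # about 800 solutions per epoch with these values
```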
5.8. Concluding Remarks on the Proposed Algorithm
In a standard PSO, particles constantly move according to a velocity profile that codifies the knowledge of the good solutions visited in the past. Indeed, any time a particle moves, a new solution is generated based on its current position $x_i^t$ and current velocity $v_i^t$. The latter, in turn, depends both on the particle's personal best $p_i^{best}$ and on the swarm's global best $g^{best}$. Metaphorically, each particle represents a bird that explores an area looking for food and changes its trajectory based on its intuition (cognitive behaviour) and/or to imitate the paths followed by the other birds of the swarm (social behaviour).
Akin to the standard PSO, we also construct new solutions exploiting $p_i^{best}$ and $g^{best}$; however, aside from this similarity, a careful reader could spot some evident divergences. Specifically:
- Due to the discrete nature of the problem, an analytic formula (as in Equation (1)) is not implemented to recombine the reference solutions; an element-by-element approach is used instead.
- Two additional solutions, namely $GS$ and $RS_i^t$, take part in the generation process.
- Conversely, the current solution $x_i^t$ does not explicitly appear in this process.
- The concept of speed does not seem to be explicitly implemented in the framework.
However, especially concerning the last two points, at a conceptual level, these differences are only apparent. Our algorithm can be reinterpreted in the following way. Each particle always starts its movement from its personal best, which coincides with its current position. When the particle moves to a new point, it stops there if the solution has improved; if not, it returns to the starting point. Furthermore, the direction and modulus of the movement depend on the four reference solutions and on the weight vector $W_i^t$. In this sense, the reference solutions can be seen as four cardinal points toward which the particles are attracted. Moreover, the weight vector can be seen as a velocity profile that determines how much a particle deviates from its previous trajectory, heading toward one of the above-mentioned cardinal points.
Also note that this velocity vector is dynamic and specific for each particle of the swarm. Most of the time, it is shared by all particles, but if a full-depth search is made on particle $i$ at epoch $t$, and the full-depth search fails, the velocity vector of that particle immediately changes, helping it to escape the local minimum where it was presumably trapped.
To conclude, we note that in the case of discrete problems, the existing literature has two main strands of thought. A minority of authors propose readapting PSO with few marginal changes, while most scholars disrupt the original version, giving it a new shape. Given the description of the procedure, we claim that our approach lies in the middle. We reinterpreted the concept of speed, taking inspiration from the recent work by [23], while trying to preserve most of the underlying features of the original PSO, such as the intention of the particles.
6. Parameter Tuning
The algorithm has four operating parameters: (i) the shape factor $\beta$ of the quasi-geometric distribution, (ii) the probability $P_A$ of activating the aisle-based construction procedure, (iii) the probability $P_{LS}$ of carrying out a local search via 2-Opt, and (iv) the greediness factor $G$, which defines how much the exploration is influenced by the greedy solution. The number of particles of the swarm is another operating parameter that should be fine-tuned. However, we verified that the results do not appreciably change if the number of particles remains within the range of 30–40, which is usually used in similar works proposed in the technical literature. Therefore, to assure a fair comparison with the other algorithms used as benchmarks, the number of particles was arbitrarily set to 30. Even the probability $P_{FD}$ of triggering a full-depth search should be optimized. However, this event is so rare that performing a detailed optimization would be useless. Hence, this parameter was also arbitrarily fixed to the low value mentioned in Section 5.
For a proper fine-tuning, as in a two-level factorial experiment design, we defined reasonable low and high values for each parameter. Specifically:
- For the shape factor $\beta$, the values 0.7 and 0.2 were chosen, as in Juan et al. [63]. When $\beta = 0.7$, the selection of a potential location is strongly influenced by its rescaled distance. For $\beta$ approaching one, the selection tends to be deterministic and the candidate location with the lowest distance is almost always returned. Conversely, when $\beta = 0.2$, the selection is almost random (with pure randomness that would be achieved with $\beta = 0$).
- For the probability $P_{LS}$, the values 0.2 and 0.05 were chosen. Both values are rather low because the execution of a 2-Opt local search should be an extreme event with a very low occurrence rate. For values higher than 0.2, the time complexity (see Equation (9)) would grow tremendously, making a practical implementation impossible in a real industrial context.
- For the greediness factor $G$, we are "pioneers", as this is a completely new parameter. Lacking scientific references (or experimental evidence), two extreme values, 0.8 and 0.1, were tested. When $G > 0.5$, the greedy exploration prevails over the random behaviour, and vice versa.
- The probability $P_A$ is also a new parameter and so, in line with the previous point, we tested two opposite situations using an extreme value of 0.8 and a low value of 0.1.
All the resulting $2^4 = 16$ combinations were tested using five different picking lists with an average length of 30 locations (i.e., $n = 30$). A common layout characterized by three blocks divided by two cross aisles and 10 racks per block was also considered. A more accurate description of this layout is presented in Section 7.2, where the numerical assessment of the algorithm is discussed. For each combination and each picking list, the algorithm was executed five times, for a total of 400 repetitions. Table 4 reports the results, expressed as the sample mean and standard deviation of the total travelling time for each parameter combination and picking list.
The values shown in Table 5 demonstrate that Pkt_PSO is very consistent and reliable. Indeed, for each parameter combination, the standard deviation is always very low, and the mean value does not change significantly as the parameters change. Results also show that the only parameter with a significant impact on the results is the greediness factor, which should be kept at its low level (i.e., $G = 0.1$).
For clarity, we also highlighted in red the experiments where Pkt_PSO failed, at least one time, to find the global optimum shown in the last row of Table 5. The global optimum was computed with the famous Concorde TSP solver (using the Python version available in the PyTSP library) which, according to Mulder and Wunsch [67], is one of the fastest TSP solvers. For instance, the tuple (533, 0), relative to the first parameter combination and the first picking list, means that the algorithm always converged to the same value of 533, which is higher than the global optimum. Similarly, the tuple (508, 3), relative to the third parameter combination and the third picking list, means that the algorithm did not always converge to the global optimum: the mean is higher than the global optimum and the standard deviation is non-zero. Likewise, we highlighted in green the combination of parameters that never failed in finding the global optimum. This is the combination we opted for, also because, having a low value of the local search probability $P_{LS}$, it is also the combination requiring the lowest computational time. From Equation (9), with the selected parameter values, the expected number of generated solutions for each epoch is approximately 950.
8. Conclusions and Future Works
In this paper, we focused on the picking problem in manual warehouses, a very relevant problem with many managerial implications. Considering that a plethora of metaheuristics have been proposed to solve this problem, the paper aims to check if, and by how much, the solution can be improved. Furthermore, since the problem is very constrained, it could also be characterized by many points of local minimum with similar values. If so, at least operationally, a good local minimum could be enough, and it would be unnecessary to use a very sophisticated approach to find the absolute minimum.
To shed light on these research questions, we developed a novel metaheuristic (referred to as Pkt_PSO) and we compared it with a set of state-of-the-art benchmarks. Outcomes of the experimental campaign showed that Pkt_PSO provides the best results, in terms of both accuracy and precision. However, as we initially supposed, the obtained improvements are marginal, and to reach the optimum (or the pseudo-optimum), it is necessary to use an approach that favours exploration over convergence and that requires more computational time. If the execution of Pkt_PSO is interrupted as soon as the benchmarks reach convergence, the obtained results are almost indistinguishable. This finding shows that for this peculiar type of problem, research has practically come to an end, and the use of a general-purpose metaheuristic can be considered enough for many industrial applications. Nonetheless, the use of more accurate approaches could be useful when additional features and/or constraints are considered, such as dedicated storage, multilevel shelves, one-way aisles, and transport capacity of the picker. The application of Pkt_PSO to these alternative scenarios will be a topic of future works.
An interesting and promising field of application for future research could be the development of a methodology, based on the PSO, to solve the three-dimensional case-picking problem (3D-CPP) within an order-picking strategy. The 3D-CPP combines two important issues in logistics: the pallet-loading problem and the routing of pickers in manual warehouses. The proposed method could thus be extended to the combined optimization of picker routing and the 3D bin-packing problem.