1. Introduction
In most ‘real world’ decision-making and optimization problems, countless decision criteria and objectives cannot be easily included in the computational model formulation; or, if included, they may increase the complexity of such models up to a point where they cannot be exactly solved. Final decisions (i.e., selecting a solution to deploy) are often taken based not only on the modeled objectives, but also on potentially subjective goals, biases, and preferences of the decision makers [1].
Many mathematical optimization algorithms are devoted to finding single optimal solutions to single-objective problems or, in the best case, to determining sets of non-inferior solutions to multi-objective formulations [2,3,4]. In any case, it should never be forgotten that we are, hopefully, obtaining the optimum solution for the model.
Assuming the existence of unmodeled (or hard-to-model) goals and parameters implies that non-conventional solving approaches are needed, not only to search the decision space for optimal solutions, but also to explore the decision region for alternative solutions of good quality. Such solutions can be sub-optimal from the model's point of view, but valuable from additional perspectives.
It is here where the role of metaheuristics as solutions’ generators becomes relevant in two senses. Firstly, several runs (or even a single one, in the case of population-based techniques) allow obtaining a set of potentially good solutions. Secondly, let us suppose a reference solution x⁰ for a problem P is available. Then, it is possible to define a new optimization problem P′ where the objective is to find a solution x that maximizes the difference with x⁰, subject to |f(x) − f(x⁰)| ≤ ε, where f is the objective function of problem P and ε is the maximum variation in quality allowed. In other words, the aim in P′ is to obtain solutions with similar quality but maximally different structure. This last approach is the so-called ‘Modeling to Generate Alternatives’ (MGA) approach proposed in [4,5,6].
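For concreteness, a compact statement of this alternative-generation problem is given below; the exact constraint form may vary among the cited works, so this should be read as a sketch rather than the definitive MGA formulation.

\[
(P')\qquad \max_{x}\; D(x, x^{0}) \quad \text{s.t.}\quad \lvert f(x) - f(x^{0})\rvert \le \varepsilon,\qquad x \text{ feasible for } P,
\]

where D(·,·) denotes some measure of structural difference between solutions.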
Apart from this MGA approach, the use of metaheuristics in the sense posed here has not been explored, to the best of our knowledge. Most metaheuristics publications emphasize their role as “solvers”, trying to find the best possible solution for the problem at hand, usually without any further analysis.
In this context, the aim of this contribution is to illustrate the role of metaheuristics as solutions’ generators within a basic problem solving framework that considers the problem formulation, its resolution, and the analysis of solutions beyond their objective function value.
Two examples are considered. The first problem is a dynamic version of the tourist trip design problem. In the static version, we have a set of points of interest (POIs), each one having a visit time and a level of interest. The aim is to select a subset of points that maximizes the level of interest subject to a certain available time. The problem is usually modeled as a team orienteering problem [7].
In the dynamic version, the level of interest of a POI depends on the time at which it is visited, thus adding an extra level of complexity. We will describe the many characteristics associated with a solution and how difficult it is to include them a priori in a model, thus requiring a different solving strategy.
The second one is a perishable food distribution problem (basically, a vehicle routing problem) [8]. Originally modeled as a many-objective problem, we describe a solving strategy that first solves a sub-problem and then, using the obtained solution, defines a new optimization problem with a fuzzy constraint that allows obtaining new solutions with similar quality but maximally different structure (as in the MGA approach).
The paper is structured as follows. In Section 2 the proposed basic problem solving framework is presented, together with an existing approach to rank solutions according to the user’s preferences. Next, Section 3 and Section 4 present the two examples described above. For each one, every step of the solving framework is illustrated with a specific dataset, and the corresponding analyses of results are presented. In each example, a metaheuristic is used to generate a set of solutions. We should highlight here that the specific details of the implemented methods are omitted, as they are not relevant in this context: we do not want to make comparisons against other methods over a set of well-defined objectives. Finally, Section 5 is devoted to conclusions and further work.
2. Basic Problem Solving Framework
The role of metaheuristics as solutions’ generators is best seen when integrated into the following problem solving framework. Although every step is clear enough, we describe the main aspects considered here.
Let us start from an optimization problem P. The following steps are required to solve it.
1. Problem description: determine a set of solutions’ features for problem P, F = {f_1, …, f_q}. Such features are used to assess the quality of a solution;
2. Model formulation: define the subset of features F_M ⊆ F to be used in the optimization model of P. For every feature, either a maximization or a minimization goal should be established;
3. Problem solving: “solve” the problem P. Depending on the solver algorithm:
- (a) an optimum solution (or at least a reference one) is obtained. From such a solution, generate new ones with similar quality but as different as possible;
- (b) a set of solutions is obtained;
4. Analysis of solutions: for every solution, calculate the corresponding values for the features in F;
5. Ranking: rank the solutions according to the user’s preferences.
Let us illustrate these steps with the well-known travelling salesman problem (TSP). Given a solution (a route), features like the distance travelled, the duration of the route, the fuel consumption, the average length of the inter-city paths, the length of the longest path, and so on can be measured.
In a basic TSP formulation, a single goal is defined: distance minimization.
It is in Step 3 where metaheuristics come into play. Suppose we solve the problem using a genetic algorithm, so it is easy to obtain a set of routes (Step 3.b). For example, the best m solutions from the final population are kept.
Then, in Step 4, those m solutions can be organized in a matrix M whose rows correspond to the solutions s_1, …, s_m and whose columns correspond to the features f_1, …, f_q, where m_ij is the value achieved by solution s_i on feature f_j. The values m_ij for the features in F_M are given as the output of the optimization process, while those for the remaining features are calculated after the optimization.
Finally, in Step 5, the values in every row of M can be combined (for example, using some aggregation function [9]) to obtain a score σ_i for every solution s_i. The relevance of the objectives for the user can be expressed through a set of weights. Sorting according to σ_i allows ranking the solutions.
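As a minimal illustration of this scoring step (the feature matrix and weights below are hypothetical, not taken from any dataset in this paper), the weighted aggregation can be sketched as follows:

# Minimal sketch of Step 5: weighted aggregation of the feature matrix M.
# Feature values are assumed to be already normalized and weights sum to 1.

def rank_by_weighted_score(M, weights):
    """M[i][j] = normalized value of solution i on feature j."""
    scores = [sum(w * v for w, v in zip(weights, row)) for row in M]
    # Sort solution indices by decreasing score.
    order = sorted(range(len(M)), key=lambda i: scores[i], reverse=True)
    return order, scores

# Hypothetical example: 3 solutions, 3 features, user weights 0.5/0.3/0.2.
M = [[1.0, 0.2, 0.7],
     [0.8, 0.9, 0.4],
     [0.6, 0.5, 1.0]]
ranking, scores = rank_by_weighted_score(M, [0.5, 0.3, 0.2])
print(ranking, scores)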
Many options are available for this step. Here, we will use the approach in [10], which allows selecting a solution of interest from a set of solutions. The user states the preferences by simply providing a linear ordering of the objectives. A brief summary of the approach follows.
2.1. Ranking Solutions According to the User’s Preferences
Let us depart from the solutions’ matrix M. Additionally, suppose the user provides a set of weights w_1, …, w_q such that, if feature f_j is more relevant than feature f_k for the decision maker, then w_j ≥ w_k. It should also hold that w_1 + … + w_q = 1. Under these assumptions, a score for solution s_i can be calculated as σ_i = Σ_j w_j m_ij.
However, this approach has a problem. Let us suppose we have just three criteria and the given preference order is f_1 ⪰ f_2 ⪰ f_3. Then, we need weights such that w_1 ≥ w_2 ≥ w_3 with w_1 + w_2 + w_3 = 1. As the reader may notice, there are infinitely many weight values that verify both conditions, and every possible set of values will give a different score for the alternative.
Instead of a single score value, the authors in [10] proposed to calculate an interval of the potential scores that a solution can attain and then sort the solutions in terms of their intervals. The main steps are detailed below.
Solutions’ matrix normalization: normalization is needed because different measurements have to be combined;
Intervals calculation: the user’s preferences are given as an ordinal relation among features, denoted f_1 ⪰ f_2 ⪰ … ⪰ f_q. The symbol ⪰ is to be read as “at least as preferred to”. This implies that the weights are ordered as w_1 ≥ w_2 ≥ … ≥ w_q. All the potential scores that a solution s_i can attain are included in the interval [σ_i^min, σ_i^max], where σ_i^min and σ_i^max are, respectively, the minimum and the maximum scores (obtained by solving two simple linear programming problems; see [10] for details);
Select a reference solution s*: such a solution is the one whose interval has the greatest lower bound σ^min; thus, there is no solution that always scores better than s*;
Compare solutions against s*: every solution is compared against s* using a possibility function that measures the possibility degree of a solution being better than another using their corresponding intervals.
Let us have two solutions A and B with their corresponding non-negative intervals [a⁻, a⁺] and [b⁻, b⁺]. The possibility degree of A being greater than B, P(A ≥ B), is stated in terms of these intervals as in [10] and, considering a user with a neutral attitude, is defined as follows [11]:
- (a) P(A ≥ B) = 1 if a⁻ ≥ b⁺ (the intervals do not overlap);
- (b) if the intervals overlap, P(A ≥ B) takes a value in [0, 1] computed from the relative position of the two intervals.
Ranking of solutions: for every solution s_i, the value P(s_i ≥ s*) is calculated. Then, the solutions are sorted based on such possibility degree values.
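A small sketch of this interval-based ranking is given below. The overlap formula used here is the common neutral-attitude possibility degree for intervals found in the literature; the exact expression used in [10,11] may differ slightly, so this is an assumption of the sketch.

# Sketch of the interval-based ranking described in Section 2.1.
# Each solution has a score interval (lo, hi) obtained from the two LPs.

def possibility_degree(a, b):
    """P(A >= B) for intervals a = (a_lo, a_hi) and b = (b_lo, b_hi).
    Common neutral-attitude form: clip((a_hi - b_lo) / (len_a + len_b))."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0:                      # both intervals are single points
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / width, 0.0), 1.0)

def rank_by_possibility(intervals):
    # Reference solution: the one with the greatest lower bound.
    ref = max(range(len(intervals)), key=lambda i: intervals[i][0])
    degrees = [possibility_degree(iv, intervals[ref]) for iv in intervals]
    order = sorted(range(len(intervals)), key=lambda i: degrees[i], reverse=True)
    return order, ref

# Hypothetical score intervals for four solutions.
intervals = [(0.55, 0.80), (0.60, 0.75), (0.40, 0.90), (0.50, 0.70)]
print(rank_by_possibility(intervals))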
3. The Time-Dependent Tourist Trip Design Problem
The tourist trip design problem (TTDP) involves two essential aspects: the selection of points of interest suitable to the tourist’s preferences, and the generation of itineraries to visit these points (all or a subset of them). Although the first aspect is usually related to recommendation systems, which have to identify user preferences, the second is related to decision and optimization issues.
In general, TTDP models are based on the well-known orienteering problem (OP), the team orienteering problem (TOP), and their variants for route design [12,13,14,15,16].
3.1. Problem Description
The tourist trip design problem with time-dependent scores (TTDP-TDS) starts from a set N of nodes or points of interest (POIs). Each node has an associated interest or score and several recommendation factors that weight the interest according to the time at which the node is visited. The travel time from node i to node j is given by t_ij.
On the other hand, not all nodes in the set can be visited, because there is a maximum available time (T_max). There is a node defined as the starting and ending point of the route (for example, the location of a hotel).
A solution to the problem will be a permutation of a subset of nodes (POIs), i.e., the ordered list of POIs to visit. Over such a list, it is easy to imagine the many features that could be calculated.
3.1.1. Features
I, overall interest;
n_POI, number of POIs in the route;
T_v, time spent visiting POIs;
T_w, total walking time between POIs on the route;
T_r, overall trip duration: T_r = T_v + T_w;
E, route “efficiency”: E = T_v / T_r, the percentage of time spent in the POIs with respect to the trip time;
Average, maximum, and minimum time spent in POIs;
Average, maximum, and minimum travel time between POIs;
Average number of POIs visited in every time period;
Percentage of POIs visited at the best time of the day.
Considering all of these features in a single model is, if not impossible, very difficult. Moreover, different optimization problems can be defined by combining them: Max I, Max n_POI, the bi-objective (Max I, Min T_w), (Max I, Max E), etc.
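To make the analysis step concrete, the sketch below computes some of the features listed above from a route given as an ordered list of POIs. The data structures and values are purely illustrative assumptions, not taken from the paper's dataset or implementation.

# Sketch: computing solution features for the TTDP from an ordered route.
# visit_time[i] and interest[i] are per-POI data; travel[i][j] is walking time.

def route_features(route, interest, visit_time, travel):
    n_pois = len(route)
    total_interest = sum(interest[p] for p in route)
    t_visit = sum(visit_time[p] for p in route)
    t_walk = sum(travel[a][b] for a, b in zip(route, route[1:]))
    t_trip = t_visit + t_walk
    efficiency = t_visit / t_trip if t_trip else 0.0
    return {"I": total_interest, "n_POI": n_pois, "T_v": t_visit,
            "T_w": t_walk, "T_r": t_trip, "E": efficiency}

# Tiny illustrative instance with four POIs (0..3), times in minutes.
interest = [30, 20, 25, 15]
visit_time = [40, 30, 35, 20]
travel = [[0, 10, 15, 20],
          [10, 0, 5, 12],
          [15, 5, 0, 8],
          [20, 12, 8, 0]]
print(route_features([0, 2, 1], interest, visit_time, travel))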
In this contribution, we depart from the following model.
3.1.2. Parameters
N—node set (the POIs);
n_0—index of the start/end node of the route;
T—number of time periods;
I_i—interest of node i;
t_ij—travel time between node i and node j;
r_it—recommendation factor to visit node i in period t;
v_i—time required to visit node i (service time);
T_max—maximum time available for the route.
3.1.3. Decision Variables
The following variables are considered:
y_it, a binary variable: if node i is visited in time period t, then y_it = 1, and y_it = 0 otherwise;
p, a permutation of k elements (the length of the route), where p_i represents the node visited at position i of the route.
3.1.4. Objective Function
Although a visit to a point can span two different time periods, here the contribution of such a point to the overall interest is determined by the period at which its visit starts.
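The objective expression itself is not reproduced above; a plausible reconstruction, using the notation of Sections 3.1.2 and 3.1.3 and consistent with the description (an assumption on our part, not the authors' exact formula), is:

\[
\max \; \sum_{i \in N} \sum_{t=1}^{T} r_{it}\, I_i\, y_{it},
\]

subject to each node being visited in at most one period and the overall route duration not exceeding T_max.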
3.2. Problem Solving and Generation of Solutions
We describe next the basic details of the metaheuristic used, the dataset and the way the set of solutions is generated.
3.2.1. Metaheuristic
A crossover-less evolutionary algorithm (EA) is used to solve this problem.
Each individual of the initial population is generated using a GRASP-like procedure: first, a POI is randomly selected; then, a list of candidate POIs is constructed, an element from the list is randomly chosen, and it is added to the solution. The process is repeated as long as the maximum available time is not exceeded.
In each generation, from the current population of individuals, K parents are chosen by roulette wheel selection and incorporated into an intermediate population. Using one of the several mutation operators available, a child solution is generated from each parent. If the new solution is feasible and better than the parent, it is incorporated into the intermediate population.
Finally, the intermediate population is sorted, and the best elements are taken to replace the original population.
The proposed algorithm always operates with feasible solutions. Therefore, non-feasible solutions obtained by mutation are discarded.
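The paper deliberately omits implementation details; a compact sketch of the loop just described could look like the following, where the population size, the mutation operators, and the helper functions are hypothetical placeholders, and the replacement scheme (keeping the best individuals among the previous population and the intermediate one) is an assumption of the sketch.

import random

def evolve(pop_size, k_parents, generations, grasp_initial_solution,
           mutate_ops, fitness, is_feasible):
    # Initial population built with the GRASP-like constructive procedure.
    population = [grasp_initial_solution() for _ in range(pop_size)]
    for _ in range(generations):
        intermediate = []
        total = sum(fitness(s) for s in population)
        for _ in range(k_parents):
            # Roulette wheel selection of one parent.
            r, acc = random.uniform(0, total), 0.0
            parent = population[-1]
            for s in population:
                acc += fitness(s)
                if acc >= r:
                    parent = s
                    break
            intermediate.append(parent)
            # One of the available mutation operators generates a child.
            child = random.choice(mutate_ops)(parent)
            # Only feasible children that improve on their parent are kept.
            if is_feasible(child) and fitness(child) > fitness(parent):
                intermediate.append(child)
        # Replacement: keep the best individuals; the previous population is
        # merged in so that the population size is preserved (an assumption).
        merged = population + intermediate
        merged.sort(key=fitness, reverse=True)
        population = merged[:pop_size]
    return max(population, key=fitness)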
3.2.2. Dataset
A dataset with the 50 most visited POIs of the city of Granada, Spain, is used. The distribution of these POIs is shown in Figure 1, and the dataset is available at http://18.156.111.23/TTDP-TDS/web/ (accessed on 1 August 2021).
Four time slots (T = 4) and a maximum available time T_max are considered. Each time slot has a duration of 90 min.
3.2.3. Generation of Solutions
The generation of solutions is done by running the EA several times. More specifically, we made 40 independent runs of the EA with different parameters. Each run evolves a population of solutions over 100 generations. After every run, we kept the best solution, thus obtaining a set of 40 solutions.
3.3. Analysis of Solutions’ Diversity
In order to explore the relation between the interest and the diversity of the solutions, we will use the Jaccard similarity coefficient to assess how similar two solutions A and B (considered as two sets of POIs) are. The coefficient is defined as J(A, B) = |A ∩ B| / |A ∪ B|. It is easy to see that if routes A and B contain the same POIs, then J(A, B) = 1. On the other hand, if routes A and B do not share any element at all, then |A ∩ B| = 0, so J(A, B) = 0.
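A direct translation of this coefficient, with routes treated as sets of POIs, is straightforward; the routes below are hypothetical examples.

# Jaccard similarity between two routes, seen as sets of POIs.
def jaccard_pois(route_a, route_b):
    a, b = set(route_a), set(route_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Example: two routes sharing two of their POIs.
print(jaccard_pois([1, 4, 7, 9], [4, 9, 12]))   # 2 shared / 5 in total = 0.4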
From the set of 40 solutions, we select the best one in terms of interest (denoted s*) and compute the Jaccard coefficient J(s*, s_i) for every other solution s_i.
The scatter plot in Figure 2 displays the relation between similarity and interest difference for every pair (s*, s_i).
Solutions 5, 12, and 14 have the same interest as s* but a different structure (a Jaccard coefficient below 1). There are three other solutions (3, 4, and 10) with a minor difference in interest (less than 2 units) that are rather different from s*. If the user allows routes with less interest, then all the solutions with an interest difference between 2 and 4 units are also available.
3.4. User Profiles and Ranking
Now, having those 40 solutions, we calculate the values of several additional features (beyond the route interest I) which are:
I, route interest;
n_POI, number of visited POIs;
E, route efficiency;
T_w, total walking time.
The corresponding values for every solution are shown in Table A1 (Appendix B), while Table 1 summarizes the values of the features over the 40 solutions. We can observe that a wide variety of values is available. There are solutions ranging from 11 to 15 POIs, with an interest which, in the worst case, is lower than that of the best solution. Regarding the travel time, values from 25 min to almost an hour of walking between POIs can be considered; looking at the efficiency, it is clear that the solutions are very effective at reducing the travel time while maximizing the visit time.
Now, let us define the following three profiles.
Guided tour: a tourist wanting to visit as many of the city’s most interesting POIs as possible while making the most of the available time. To do so, we rank the solutions taking the interest I of the solution as the main criterion, and order the rest of the features accordingly;
Casual visit: a visitor, who is in the city for another reason, wants to visit the most interesting places in the city but wants to have as much free time as possible during this visit. In this case, we maximize the efficiency E of the visit and order the rest of the criteria accordingly;
Large groups or people with reduced mobility: due to the size of the group or the mobility problems of the visitors, it is necessary to keep the journeys to be made as short as possible. In this last profile, we minimize the travel time between POIs, T_w, and order the rest of the criteria accordingly.
For every profile, the procedure described in Section 2.1 is applied. Figure 3 shows the best 10 solutions of the corresponding rankings. The values for the reference solution s* are indicated in the top row, and the columns are ordered in terms of the criteria importance. Values in red cells are worse than the reference value, while the green ones are better.
Regarding Profile 1, the reference solution is the top-ranked alternative. Solution 4 has a slightly lower value of interest while attaining the same values in the rest of the features. Nevertheless, this solution is quite different from s* in terms of structure (a low Jaccard coefficient).
Solutions 5, 12, and 14 have the same value of interest, but they are slightly worse in the rest of the factors.
Regarding Profile 2, again the reference solution is the top-ranked alternative. Among the following solutions, one appears that was not among the top 15 solutions for Profile 1: it has better efficiency and travel time than the reference solution, but is ranked in third place due to the interest (which is 10 units less than the reference). Another solution is also interesting because it is better in efficiency and travel time, while having just 6 units less of interest; in turn, that route uses 12 POIs instead of 15, which is a remarkable point.
Finally, for Profile 3, we observe that the reference solution is ranked in the fourth place. Solutions 37, 39, and 27 are top ranked, providing improvements in travel time (shorter displacements between POIs) and greater efficiency. All of them have 12 POIs, which nevertheless allows them to reach a large share of the reference interest (90 units).
So, Profiles 1 and 2 share the top two solutions. The third solution from Profile 2 does not even appear for Profile 1. None of the top 3 solutions from Profile 3 are available in Profile 1, and the third solution from Profile 3 does not appear in Profile 2.
These results clearly highlight the need to have a set of solutions to choose from, instead of concentrating the efforts on obtaining one single best solution.
4. The Perishable Food Distribution Problem
Perishable food distribution has become increasingly difficult due to the features of these goods: perishables have a short shelf life, and their quality deteriorates over time during the distribution process until they are consumed. Therefore, ensuring food quality has become a major priority that must be met if customers are to be satisfied.
In a real-world setting, the distribution of perishable food entails a set of complex specifications that might be difficult to incorporate into mathematical models. Such parameters are frequently conflicting and, in some cases, unquantifiable [2,17,18].
A well-designed distribution plan that prioritizes short transit times and distances can help to preserve product quality while reducing waste. However, improving the sustainability of the perishables distribution network necessitates balancing several competing goals, such as minimizing the travel costs (e.g., fuel consumption costs, refrigeration costs, and damage costs), maximizing the average freshness, meeting consumers’ requirements to ensure their satisfaction (e.g., on-time delivery, good service level), and reducing the environmental impact.
4.1. Problem Description
The problem is represented on a directed graph G = (V, A), where V is the set of vertices; nodes 0 and n + 1 represent the depot, and the remaining nodes are the customers to serve. Each customer i should be served within a specific time slot defined by its time window [e_i, l_i].
The problem is modeled under the assumption of fleet homogeneity: all the refrigerated trucks have the same refrigeration characteristics, the same load capacity, and the same constant velocity.
When considering solutions for the perishable food distribution problem, several features can be defined.
4.1.1. Features
Among others, the following features can be associated with a solution: the travel cost, the total damage cost, the average freshness, the service level, and the tardiness (their mathematical definitions can be found in Appendix A). From this set of available objectives, we deal with the following model formulation.
4.1.2. Parameters
c_ij: the transportation cost from node i to node j;
F: the fixed cost associated to a vehicle;
t_ij: the travel time from i to j;
d_ij: the distance between nodes i and j;
c_r: the cost per unit time of the refrigeration during the transportation process;
u_i: the unloading time at customer i;
c_u: the unit refrigeration cost during the unloading;
s_i: the time necessary to serve customer i;
q_i: the demand of customer i;
Q: the loading capacity of the trucks;
K: the fleet of trucks;
[e_i, l_i]: the time window of customer i.
4.1.3. Decision Variables
x_ijk: a 0–1 decision variable, equal to 1 if truck k travels on arc (i, j), and 0 otherwise;
y_ik: a 0–1 decision variable, equal to 1 if customer i is served by truck k, and 0 otherwise;
w_ik: a continuous decision variable representing the starting service time at customer i when served by truck k.
4.1.4. Objective Function
The objective is to minimize the total travel cost, composed of three terms: C1, the fixed costs for using the vehicles; C2, the transportation costs; and C3, the refrigeration costs.
C1, the fixed costs, represents the vehicles’ maintenance and depreciation costs; it is computed from the number of refrigerated trucks used and the fixed cost F related to a vehicle.
C2, the transportation costs, is proportional to the vehicle mileage; we consider only the fuel consumption cost, proportional to the distance travelled.
C3, the refrigeration costs, is calculated as C3 = C31 + C32, where C31 is the refrigeration cost of the transportation process (proportional to the travel time) and C32 is the cost of the energy supplied during the unloading (proportional to the unloading times).
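As an illustration of how the three cost components add up, the following sketch computes them for a set of routes. The function and parameter names (fixed_cost, fuel_cost_per_km, etc.) are hypothetical placeholders for the parameters of Section 4.1.2, and the single depot node 0 is a simplification of the 0/n+1 depot pair used in the model.

# Sketch of the travel cost C = C1 + C2 + C31 + C32 described above.
# A solution is a list of routes; each route is a list of customer indices.

def travel_cost(routes, dist, time, unload_time, fixed_cost,
                fuel_cost_per_km, refr_cost_transport, refr_cost_unload):
    c1 = fixed_cost * len(routes)                    # C1: fixed cost per truck used
    c2 = c31 = c32 = 0.0
    for route in routes:
        legs = list(zip([0] + route, route + [0]))   # depot is node 0 here
        c2 += fuel_cost_per_km * sum(dist[i][j] for i, j in legs)      # C2: mileage
        c31 += refr_cost_transport * sum(time[i][j] for i, j in legs)  # C31: transport refrigeration
        c32 += refr_cost_unload * sum(unload_time[i] for i in route)   # C32: unloading refrigeration
    return c1 + c2 + c31 + c32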
Considering the objectives described above, the problem can be formulated as a mixed integer program (MIP) whose objective (11) and constraints (12)–(23) are described next.
The objective (11) corresponds to the total cost minimization. Constraint (12) avoids going from a point to itself. Constraint (13) avoids going from the depot to node n + 1, which also represents the depot. Constraint (14) states that each customer is visited exactly once by exactly one vehicle. Constraints (15) and (16) ensure that each vehicle that arrives at a customer must depart for another destination. The vehicle departs from the depot 0 and returns to the depot node n + 1 according to Constraints (17) and (18). Constraint (19) guarantees that the vehicle loading capacity is respected. The connection between the start of the service at a specific customer and the next one is established by Constraint (20). Constraint (21) is the time window constraint. Constraint (22) guarantees that the number of trucks departing from the depot does not exceed the available fleet of trucks K. Constraint (23) states that the vehicle must traverse the arc (i, j) in order to serve customer j.
4.2. Problem Solving and Generation of Solutions
We describe next the basic details of the metaheuristic used, the dataset and the way the set of solutions is generated.
4.2.1. Metaheuristic
To solve the proposed problem, we use a basic implementation of the general variable neighborhood search (GVNS) metaheuristic. In GVNS, variable neighborhood descent (VND) is used for the local search. When designing a GVNS, one should take the following decisions: the number of neighborhood structures, the order in which they will be explored, how the initial feasible solution is generated, the acceptance criterion, and the stopping conditions. These elements define the configuration of the GVNS.
GVNS starts with an initialization phase in which a feasible solution is generated; the best solution is initially set to this first feasible solution. After selecting the neighborhood structures for the shaking stage and for the local search, the stopping condition is chosen. The stopping condition corresponds to a number of iterations M that will be set in the computational experiments. The shaking process, the local search, and the move decision are repeated until all neighborhood structures have been explored. In the shaking phase, a solution is generated randomly in the current neighborhood of the incumbent solution. Then, the local search is performed to find a better solution from it using the neighborhood structures.
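A schematic version of this GVNS loop is shown below. The shaking and VND components are left as placeholders, since the specific neighborhood structures are not detailed here; this is a generic sketch, not the authors' implementation.

# Schematic GVNS: shake in increasingly distant neighborhoods, improve with
# VND, and restart from the first neighborhood whenever an improvement is found.
# shake, vnd and cost are problem-specific placeholders.

def gvns(initial_solution, shake, vnd, cost, k_max, max_iterations):
    best = initial_solution
    for _ in range(max_iterations):          # stopping condition: M iterations
        k = 1
        while k <= k_max:
            shaken = shake(best, k)          # random solution in the k-th neighborhood
            improved = vnd(shaken)           # variable neighborhood descent
            if cost(improved) < cost(best):  # move-or-not decision
                best, k = improved, 1
            else:
                k += 1
    return best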
4.2.2. Dataset
The proposed approach is evaluated through a dataset of 50 customers in Granada, Spain. The customers’ time windows take random values, with a fixed interval width. For the customer’s target time, we assume that it is the midpoint of the corresponding time window. The dataset is available at http://18.156.111.23/PER-FOOD/DataPerishableFood.csv (accessed on 1 August 2021).
The refrigerated vehicles used to deliver the products have a fixed cost of 25 €, and the fuel consumption cost is estimated at 3 €/km. The remaining cost parameters (the unit refrigeration and damage costs, among others) are set to fixed values.
4.2.3. Generation of Solutions
Although multiple runs of GVNS may allow obtaining a set of solutions, we illustrate here a different approach inspired by the so-called ‘Modeling to Generate Alternatives’ (MGA) strategy [5,6]. MGA tries to generate a set of solutions that are quantifiably good across all the modeled goals, while remaining as different as possible from each other (in the decision space).
Let us assume that the best solution obtained for the problem is X*, with f(X*) as its objective value. To generate alternative solutions X that are maximally different from X*, a new problem is addressed whose objective (24) is to minimize the similarity J(X, X*), subject to constraints (25) and (26), the latter requiring that the cost of X does not exceed the reference cost f(X*) by more than the allowed tolerance. Here, J is the similarity coefficient used in the previous example. In the original MGA approach, the problem is stated as maximizing a dissimilarity function. The parameter T is the tolerance threshold related to the optimal objective value f(X*) and should be defined by the decision maker.
We define the Jaccard coefficient between two routing solutions as the ratio of the number of shared arcs to the number of total arcs used in both solutions. Let a_ij^p = 1 if the arc (i, j) from vertex i to vertex j is used by any vehicle in solution p, and a_ij^p = 0 otherwise. Then, J(p, q) = Σ_(i,j) a_ij^p · a_ij^q / Σ_(i,j) max(a_ij^p, a_ij^q), where the product a_ij^p · a_ij^q equals 1 iff the arc (i, j) is used by both solutions, and max(a_ij^p, a_ij^q) equals 1 if any of the solutions uses it. If solutions p and q are the same, the sum in the numerator will equal the sum in the denominator, and therefore J(p, q) = 1. On the other hand, if they are two completely different solutions with no arc in common, the numerator will equal 0 and then J(p, q) = 0.
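A possible implementation of this arc-based coefficient, with each solution represented as the set of directed arcs used by any of its vehicles, is shown below; the two example solutions are hypothetical.

# Arc-based Jaccard similarity between two routing solutions.
def jaccard_arcs(arcs_p, arcs_q):
    union = arcs_p | arcs_q
    return len(arcs_p & arcs_q) / len(union) if union else 1.0

# Example: two solutions sharing three arcs.
p = {(0, 3), (3, 5), (5, 0), (0, 2), (2, 0)}
q = {(0, 3), (3, 2), (2, 0), (0, 5), (5, 0)}
print(jaccard_arcs(p, q))   # 3 shared arcs / 7 arcs in total ≈ 0.43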
In the second place, we need to manage the fuzzy constraint (26) (which is crisp in the original MGA approach). The membership function that represents the satisfaction degree of the fuzzy constraint is defined in terms of the endurable threshold Δ_max and the cost difference ΔC between the reference solution and the obtained alternative. Using the concept of α-cut and the parametric approach [19], the fuzzy constraint is transformed into a crisp constraint (29) parameterized by the satisfaction level α.
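The original expressions are not reproduced above; a plausible reconstruction, assuming a standard linear membership function over the cost increase ΔC = f(X) − f(X*) with endurable threshold Δ_max (an assumption on our part, not necessarily the authors' exact definition), is:

\[
\mu(\Delta C) =
\begin{cases}
1 & \text{if } \Delta C \le 0,\\
1 - \dfrac{\Delta C}{\Delta_{\max}} & \text{if } 0 < \Delta C \le \Delta_{\max},\\
0 & \text{if } \Delta C > \Delta_{\max},
\end{cases}
\]

so that, for a satisfaction level α ∈ (0, 1], the α-cut turns the fuzzy constraint into the crisp constraint f(X) ≤ f(X*) + (1 − α) Δ_max.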
Regarding the GVNS parameters, the stopping criterion M, which corresponds to the number of iterations, is fixed to 10; the remaining parameter values are those described in Section 4.2.2.
To generate a set of alternative solutions, we start by finding a reference solution by running the GVNS metaheuristic 10 times. The best solution obtained (in terms of the total cost objective) is considered as the reference solution X*.
Then, using X*, the new problem with the objective of Equation (24), subject to constraints (25) and (29), is solved for several values of α. For each value of α, a tailored version of GVNS is run 10 times.
At the end, 62 different solutions are obtained, and the corresponding values for the following features are calculated: the damage cost, the average freshness, the service level, and the tardiness. The mathematical definitions of these criteria can be found in Appendix A.
Table A2 (in Appendix B) shows the results obtained for every value of α. It should be noted that several solutions per GVNS run were obtained, and just the different ones are reported.
4.3. Analysis of Solutions’ Diversity
Table 2 summarizes the values of the features over the 62 solutions. It is clear that a wide variety of alternatives are available to choose from.
As we explicitly generate the set of solutions by minimizing the similarity with X*, it is interesting to observe the relation between the Jaccard coefficient and the differences in the calculated feature values for every solution. This information is shown in Figure 4 (for visualization purposes, just a subset of solutions is shown in every case). The Y axis reflects the difference as a percentage.
When considering the distance (which is correlated with the travel cost), Figure 4a shows that all the solutions are worse in this feature. This is not surprising, as X* has the lowest distance. There are several interesting solutions, like 10, 5, 6, and 9, that have a quite similar distance but a Jaccard similarity below 0.9. Solution 28 is also interesting because it has a low level of similarity (less than 0.45), while attaining a quite similar distance value (a very small difference).
With an increment in the distance between 2% and 5%, up to eight solutions can be found with clearly lower Jaccard values.
For the average freshness and the service level, the situation is more interesting, because there are both better and worse solutions than the reference one. Focusing on the average freshness (Figure 4b), those solutions with a positive value in Y are better than the reference solution. Solution 15 improves this criterion while having a similarity value of 0.7. Other improvements between 1% and 2% can be obtained with quite different routes (solutions 47, 25, and 52).
The results in terms of the service level are shown in Figure 4c. Here, solution 59 provides a clear improvement with a moderate value of similarity (less than 0.65). Several solutions providing further improvements with a similarity below 0.5 are available. In this case, even solution 37 may be interesting, as it has a minor decrement in service level but a very low level of similarity.
Finally, Figure 4d shows the solutions in terms of the tardiness criterion. Here, all the solutions are better than the reference one; a tardiness value similar to that of the reference solution cannot be attained. Great improvements can be obtained, but with quite different solutions (note the low values of similarity).
4.4. User Profiles and Ranking
The previous analysis is useful for detecting good solutions when analysed from just two points of view: similarity vs. a single criterion.
Here, we define three user profiles, representing the preferences of a decision maker. As stated before, the preferences are indicated through a linear ordering of the criteria, described below.
Economic-centric (E-c): Travel Cost = Total Damage ⪰ Average freshness ⪰ Tardiness ⪰ Service level;
Product-centric (P-c): Average freshness ⪰ Travel Cost ⪰ Total Damage ⪰ Tardiness ⪰ Service level;
Customer Satisfaction-centric (C-c): Tardiness = Service level ⪰ Travel Cost ⪰ Total Damage ⪰ Average freshness;
where the symbol ⪰ should be read as “at least as preferred to”.
For every profile, the procedure described in Section 2.1 is applied to rank the solutions. The best 15 solutions under each user profile are shown in Table 3, Table 4 and Table 5.
Every solution is compared against the reference solution in terms of the following criteria: travel cost, total damage, average freshness, service level, and tardiness. Values in red cells are worse than the reference value, while the green ones are better.
We can observe that the proposed approach allowed generating a variety of alternative solutions, each of which shows a different behavior for each criterion. This diversity allows the decision maker (DM) to select the most appropriate solution, considering different perspectives and depending on his/her preferences. Regarding the E-c profile, solution 18 is the top-ranked alternative. This solution is quite different from the reference solution, having a low Jaccard coefficient. It performs better than the reference solution in terms of service level, tardiness, and damage; on the other hand, it has approximately the same travel cost. Solution 30 is slightly worse in terms of travel cost; however, it allows improvements in all the remaining criteria. Regarding the P-c profile, solution 30 is now the best one. This solution has a worse travel cost than the reference solution, but it is better in the remaining criteria. Solution 52 appears in the second place; note that it was ranked in the 10th position for the E-c profile. Regarding the C-c profile, solution 30 retains the first place by improving significantly all the criteria, with a slight increase in travel cost compared to the reference solution.
It should be noted that solutions 26 and 30 are among the top-3 best alternatives for both the E-c and the C-c profiles. Another interesting alternative is solution 3, which maintains its place in the top-3 for the P-c and C-c profiles. Solution 18 is the best alternative for a DM with an economic focus, and retains the third position in terms of service level improvement. However, this solution is of no interest if the focus is on product freshness or customer satisfaction.
It is highly remarkable that the reference solution does not appear among the top 15 solutions for any of the profiles, which illustrates the ability of the proposed approach to generate well-performing and quite dissimilar solutions that help the decision maker select the appropriate solution from his/her perspective.
5. Conclusions
The computational models of relevant optimization problems necessarily leave out of consideration several characteristics and features of the real world. So, trying to obtain the optimum solution may not be enough from a problem-solving point of view. Moreover, it is doubtful that a single solution would be able to meet all the real specifications. Therefore, it is desirable for the decision maker to have a set of solutions that are good enough for the modeled objective, but can also perform well when it comes to other unmodeled criteria.
In the context of a basic problem solving framework, we illustrated the role of metaheuristics as solutions’ generators. In the tourist trip design problem, independent runs of an evolutionary algorithm allowed obtaining the set of solutions. In the case of perishable food distribution, a more complex strategy, inspired by the MGA approach, was used.
Considering a set of solutions and different user profiles makes it possible to properly rank the available solutions. Using a similarity measure, such as the Jaccard coefficient, also allowed a better understanding of the relation between the reference solution and the other ones, not only in the objective space, but also in the decision space.
This opportunity to consider different perspectives will lead to a decision better suited to the real context.
As future work, we will further explore MGA-like approaches, where some measurement of similarity between solutions is needed. Here, the concept of symmetry appears to be crucial, as potentially symmetric solutions in the design space can be equal in the objective space. In other words, different ways to attain the same results can be explored with a proper usage of the symmetry/anti-symmetry ideas.
Finally, connecting the solutions’ analysis with multicriteria decision-making approaches will also be explored.