Dealing with Multiple Optimization Objectives for UAV Path Planning in Hostile Environments: A Literature Review
Abstract
1. Introduction
- RQ1: What preference handling techniques have been used in the military UAV path planning literature?
- RQ2: For each of the identified methods: What are the advantages and disadvantages of using these methods?
2. Methodology
- (“Route planning” OR “Path planning”) AND (hostile OR threat) AND (radar OR missile)
3. Preference Handling in the UAV Route Planning Literature
- No-preference Preference Handling: Firstly, in no-preference methods, the DM is not involved in the process. Instead, a solution that is a neutral compromise between all objectives is identified. This method is commonly used when no DM is present, and it usually constructs a scalar composite objective function.
- a-priori Preference Handling: Secondly, in a-priori methods, a DM specifies their preference information before the optimization process is started. Most commonly, the preference information is used to construct a scalar composite objective function. This has the advantage that the problem is simplified into an SOO problem. As a result, existing SOO algorithms can be used. The main limitation is that the DM has limited knowledge of the problem beforehand, which can result in inaccurate and misleading preferences [20,21]. Intuitively, asking the DM to specify their preferences before the optimization corresponds to asking them to select a point on the Pareto Front (e.g., by specifying objective weights) before knowing what the actual Pareto Front and its corresponding tradeoffs look like. Consequently, a set of preferences specified a-priori might lead to solution x, whereas the DM might have preferred y when presented with both options.
- a-posterior Preference Handling: Alternatively, a-posterior methods can be used. These methods include the DM’s preferences after the optimization process. First, they approximate the Pareto Front and then allow the DM to choose a solution from said front. The main advantage is that the DM can elicit their preferences on the basis of the information provided by the Pareto Front. The DM has a better idea of the tradeoffs present in the problem and can thus make a better-informed choice. Furthermore, it can provide the DM with several alternatives rather than a single solution. The disadvantage of this class of methods is the high computational cost of generating an entire set of Pareto optimal solutions [22]. Additionally, when the Pareto Front is high-dimensional, the large amount of information can cognitively overload the DM and complicate their decision [20].
- Interactive Preference Handling: Finally, interactive preference elicitation attempts to overcome the limitations of the a-priori and a-posterior classes by actively involving the DM in the optimization process. As opposed to the sequential views of a-priori and a-posterior methods, the optimization process is an iterative cycle. Optimization is alternated with preference elicitation to iteratively update a preference model of the DM and to quickly identify regions of the Pareto Front that are preferred. Undesired parts of the Pareto Front need not be generated, which saves computational resources. Instead, these resources can be spent on better approximating the preferred regions, improving their approximation quality [23]. Lastly, being embedded in the optimization process helps the DM build trust in the model and a better understanding of the problem, the tradeoffs involved, and the feasibility of their preferences [24,25,26]. One challenge of interactive methods is that the interaction needs to be explicit enough to build an expressive and informative model of the DM’s preferences whilst also being simple enough not to cognitively overload the DM.
4. Preference Handling Methods and Their Implications
4.1. No Preference—Preference Handling
Nash Product
- Pareto Optimality—Efficiency: The identified Nash Bargaining Solution is a Pareto-optimal solution [34], which, of course, is a desirable property.
- Symmetry—Fairness: When the game is symmetric, which is the case for the Nash Bargaining Solution defined in Equation (3), the solution does not favor any objective but returns a fair solution [37]. The concept of Nash Bargaining can be extended to include asymmetric games by introducing weights to the problem, as in Equation (4). These weights can be determined by the DM, where they correspond to the preferences with respect to certain objectives [38]. This extends the Nash Bargaining Solution from a no-preference method to an a-priori method.
- Scale Invariance—No Normalization: Mathematically, this property means that any positive affine transformation of the objectives does not influence the decision-theoretic properties of the optimization problem. Intuitively, this means that the Nash Bargaining Solution is independent of the unit of the objectives: it does not matter whether one measures route length in kilometers or miles. Instead of comparing units of improvement in one objective to units of degradation in another, ratios are compared. For example, a route twice as dangerous but half as long as another route is equally good in this view (the sketch following this list demonstrates this numerically). One important implication is that there is no need to normalize the objectives before optimizing.
- Independence of Irrelevant Attributes: This axiom states that removing any feasible alternative other than the optimal Nash Bargaining Solution does not influence the optimal solution. This axiom has often been critiqued [39], but in the rational optimization context it makes sense: removing a nonoptimal solution should not change what the optimal solution is.
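To make the scalarization concrete, the following is a minimal numerical sketch of a Nash-product selection over a handful of candidate routes. Since Equation (3) is not reproduced here, the sketch assumes the common Nash Bargaining form for minimization objectives (maximizing the product of gains relative to a disagreement point d); all route values are illustrative.

```python
# Minimal sketch of the Nash product as a no-preference scalarization.
# Assumptions (not from the reviewed papers): objectives are minimized,
# and d is a "disagreement point" that every candidate route improves on.
import math

def nash_product(objectives, disagreement):
    """Product of gains (d_i - f_i); larger is better."""
    return math.prod(d - f for f, d in zip(objectives, disagreement))

# Hypothetical routes scored on (length in km, probability of detection).
routes = {"A": (100.0, 0.40), "B": (150.0, 0.20), "C": (120.0, 0.35)}
d = (200.0, 0.50)  # assumed worst-acceptable values

best = max(routes, key=lambda r: nash_product(routes[r], d))
print(best)  # "B"

# Scale invariance: switching route length from km to miles rescales
# f_1 and d_1 by the same positive factor and leaves the choice intact.
miles = {r: (l * 0.621, p) for r, (l, p) in routes.items()}
assert max(miles, key=lambda r: nash_product(miles[r], (d[0] * 0.621, d[1]))) == best
```

The assertion holds because multiplying one objective (and its disagreement value) by a positive constant multiplies every candidate's product by that same constant, leaving the argmax unchanged.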
4.2. A-Priori Preference Handling
4.2.1. Weighted Sum Method
- Constraints—Normalization and Weakly Dominated Points: The first constraint, $\sum_{i} w_i = 1$, does not impose any real restriction on the optimization problem: any weight vector could be normalized to satisfy it. However, restricting the DM to choose weights according to this constraint can cause unnecessary complications. Generally, the weight vector is interpreted to reflect the relative importance of the objectives [42]. In the case where one objective is five times more important than the other, it would be much easier for a DM to set the weights as $(5, 1)$ than as $(0.833, 0.167)$. This becomes especially troublesome when there are many objectives, hindering the interpretability of the weight vector. Therefore, it is better to normalize the weights automatically after weight specification rather than restricting the DM in their weight selection. The second constraint, $w_i \geq 0$, can pose a problem when one of the objective weights is equal to 0. When setting, say, $w_3$ to 0, a solution $x$ with objective values $(1, 1, 1)$ is equivalent to a solution $y$ with values $(1, 1, 0)$. The equivalence of these two objective vectors introduces weakly-dominated points (i.e., points like $x$ for which a single objective can be improved without any deterioration in another objective) to the Pareto Front. When the DM truly does not care about $f_3$ this is not a problem; however, when one would still prefer $y$ over $x$, one could set the weight of $f_3$ to a very small value instead of 0 [43]. This allows the method to differentiate between $x$ and $y$ using the third objective $f_3$, removing weakly dominated points from the Pareto front.
- Mathematical Form—Imposing Linearity: The mathematical form of the equation refers to the functional composition of the composite objective function. In all papers identified, a linear combination of objective functions, as defined in Equation (5), has been employed. However, there is no reason, besides simplicity, why the equation has to be a simple linear combination. In fact, because Equation (5) is a linear combination of the objectives, it imposes a linear structure on the composite objective function. Preferences, on the other hand, generally follow a nonlinear relationship [44]. A DM might approve a 20 km detour to reduce the probability of detection from 0.2 to 0 but might be less willing to do so to reduce the probability of detection from 1 to 0.8. Furthermore, imposing a linear structure on the preferences leads to favoring extreme solutions, whereas DMs generally favor balanced solutions [42]. This is particularly troublesome when the feasible objective space is non-convex, as can be seen in Figure 9 (compare Figure 9a to Figure 9b). The linear weighted sum is not able to find any solution between the two extremes: points on the non-convex section of the Pareto Front cannot be identified by a linear weighted sum (the sketch following this list illustrates this). In discrete cases, it has been shown that up to 95% of the points can remain undetected [45].
- Weight Vector—Relative Importance of Objectives: This very simple method poses a not-so-simple question: how does one determine the weights? There is no unique answer, as the weight vector needs to be determined by the DM and differs per scenario. The most common interpretation is to assign the weights based on the relative importance of the individual objectives. For example, in a bi-objective problem, setting $w_1 = 1$ and $w_2 = 2$ assumes that objective 2 is twice as important. However, as shown by Marler & Arora [42], the value of a weight is significant relative to both the weights of the other objectives and the ranges of all the objective functions. In particular, if objective 1’s range is much larger than objective 2’s range, objective 1 will dominate regardless of the weights set. Therefore, normalizing the objectives is necessary. Although the normalization of objectives is a necessary precondition to interpret weights as relative importance, only a very small portion of the literature has done so, as can be seen in Figure 10. Alternatively, one can interpret the ratio of the weights as the slope of the line crossing the mathematical optimum on the Pareto Front. By changing the slope, one can identify other points on the Pareto front. However, as one does not know the shape of the Pareto front a-priori, it is hard to relate the slope of the line to the final MPS. For example, in Figure 9b, a ratio of 1 does not actually result in a balanced solution. Additionally, it is hard for a DM to set the weights before having information about the possible trade-offs. Why should one particular weight vector be chosen, and not a slightly different one? How much would the solution and its objective values change when slightly changing the weights? These questions are important, as in a practical setting a DM will need to trust that the weights set lead to the most preferred solution and that that solution is robust to a small weight change.
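The reachability issue described above can be demonstrated in a few lines. The sketch below, with purely illustrative normalized objective values, sweeps the weight ratio of a bi-objective weighted sum over a three-point candidate set in which the balanced point C lies on a non-convex part of the front; no weight vector ever selects it.

```python
# Sketch of the weighted sum on a discrete candidate set (illustrative
# values, objectives normalized to [0, 1] and minimized). Point C is
# Pareto optimal but lies on a non-convex part of the front.
import numpy as np

points = {
    "A": (0.0, 1.0),  # extreme: shortest but most dangerous
    "B": (1.0, 0.0),  # extreme: safest but longest
    "C": (0.6, 0.6),  # balanced compromise, above the A-B line
}

def weighted_sum_argmin(w):
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # normalize after specification, not before
    return min(points, key=lambda name: float(w @ points[name]))

# Sweep the weight ratio: only the extremes are ever returned.
winners = {weighted_sum_argmin((a, 1.0 - a)) for a in np.linspace(0.01, 0.99, 99)}
print(winners)  # {'A', 'B'}: the balanced solution C is unreachable
```

For C to win, both normalized weights would have to exceed 0.6, which is impossible when they sum to 1; this is exactly the non-convexity failure illustrated by Figure 9b.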
4.2.2. Reward Shaping
4.2.3. Decision-Making Based on Goals and Priorities
- Priorities—Lexicographic Ordering: Categorizing objectives into priorities imposes an absolute ordering on the objectives. Even an infinitely small improvement in a higher-priority objective is preferred over an infinitely large improvement in a lower-priority objective. This ordering is called a lexicographic ordering, and its sequential optimization process is called lexicographic optimization [8]. First, the highest priority is optimized. Only when there are ties, or when all goals are achieved, are subsequent priority levels considered. Each lower priority level thus requires more ties or more achieved goals, making it less likely that the lower priority levels are even reached. Indeed, an excessive number of priority levels causes low-level priorities to be ignored [66]. Moreover, even when only two priority levels are used, such a strict lexicographic ordering has strong implications. It makes sense when a clear natural ordering is available. For example, satisfying a UAV’s flight constraints is always preferred to optimizing an objective, as a route that cannot be flown is always worse than a route that can. However, for optimization objectives, this is usually not the case. It is hard to argue that no trade-off is possible between criteria: surely, a huge decrease in the route length warrants a tiny increase in the probability of detection. Therefore, when setting priority levels, objectives should only be categorized differently when one is infinitely more important than the other.
- Goals—Satisficing vs. Optimizing: Setting goals has a profound impact on the resulting final solution—it can have as large an impact, if not larger, as the mathematical definition of the objective functions themselves [67]. In fact, the entire underlying philosophy of the optimization process can change when a goal is set to an attainable value versus an unattainable value. When the goals are set at attainable values, goal programming essentially follows a satisficing philosophy. This philosophy, named after the combination of the verbs to satisfy and to suffice and introduced by Simon [68], argues that a decision-maker wants to reach a set of goals; when they are reached, this suffices and the DM is satisfied [69]. Mathematically, this means that objectives are optimized until they reach a goal value. Any further improvement is indifferent to the DM. For constraints, this makes sense: satisfying the goals imposed by a UAV’s flight constraints suffices. There is no need to make even tighter turns than the turn constraint imposes. For optimization objectives, however, this contradicts the entire concept of optimizing, for which “less is always better” (i.e., a monotonic belief). Even when the goal value for the danger levels is satisfied, a solution that has less danger is preferred. Note that this does not align with the principle of Pareto-optimality and can thus result in dominated solutions. This is precisely the main disadvantage of setting goals for optimization objectives. Attainable goals should therefore not be imposed on monotonic optimization objectives. Instead, for monotonic objectives, goals should be set to unattainable values [70] to ensure the optimization process optimizes rather than satisfices. Alternatively, techniques for Pareto-efficiency restoration can be used (e.g., see Jones & Tamiz [71]). Finally, when objectives in the same priority level all have goals, the objective will be to minimize the sum of unwanted deviations from the goals (a small sketch follows below). For this sum to weigh objectives equally, the objectives need to be of the same order of magnitude (i.e., commensurable). Therefore, when they are incommensurable, the objectives need to be normalized [66]. Clearly, setting goals properly is of the utmost importance. So, how would one reliably set goals? To determine whether a goal value is realistic and attainable, one needs information regarding the ranges of the objective functions, the trade-offs in objective space, and the beliefs of the DM about the objectives (e.g., is it a monotonic objective?). A-priori, such information is not present. Therefore, setting goals is usually not done a-priori; rather, it is an iterative (i.e., interactive) process.
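As a concrete illustration of goals and priorities, the sketch below runs preemptive (lexicographic) goal programming over a small set of hypothetical candidate routes. The goals, scales, and route values are assumptions made for illustration, not taken from any reviewed paper.

```python
# Sketch of preemptive goal programming over a discrete candidate set.
# Deviations are one-sided: only exceeding a goal is "unwanted".
candidates = {
    "r1": {"turn_violation": 0.0, "detection": 0.30, "length": 140.0},
    "r2": {"turn_violation": 0.0, "detection": 0.10, "length": 180.0},
    "r3": {"turn_violation": 2.0, "detection": 0.05, "length": 120.0},
}

# Priority 1: flight constraints (a natural lexicographic ordering).
# Priority 2: mission goals, scaled to be commensurable.
priorities = [
    [("turn_violation", 0.0, 1.0)],                      # (objective, goal, scale)
    [("detection", 0.20, 1.0), ("length", 150.0, 100.0)],
]

def unwanted_deviation(route, level):
    return sum(max(0.0, route[obj] - goal) / scale for obj, goal, scale in level)

remaining = list(candidates)
for level in priorities:
    best = min(unwanted_deviation(candidates[r], level) for r in remaining)
    remaining = [r for r in remaining if unwanted_deviation(candidates[r], level) == best]
print(remaining)  # ['r1']
```

Note the satisficing behavior described above: r2 is far safer than the detection goal requires, but the surplus below the goal earns no credit, so its length deviation dominates and r1 is chosen even though a DM with a monotonic belief about danger might prefer r2.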
4.3. A-Posterior Preference Handling
- Optimization: Generating the Pareto Front. To generate the entire Pareto front, the UAV route planning literature has used two distinct methods. In two papers [72,73], the Pareto front is approximated directly using metaheuristics, in particular evolutionary algorithms. These optimization algorithms use an iterative population-based optimization process to obtain an approximation of the Pareto front. As the scope of this paper is preference handling techniques, evolutionary algorithms will not be discussed. Instead, they are extensively covered in the existing surveys identified in Table 1 as well as by Yahia & Mohammed [74] and Jiang et al. [75]. The two other papers, Dasdemir et al. [76] and Tezcaner et al. [77], reformulated the problem using the ε-constraint method. This method is a general technique that reformulates the problem into a series of SOO problems. Although it is not directly a preference handling method, it is a general method to deal with multiple criteria and can be combined with any optimization algorithm. Therefore, it is covered in Section 4.3.1.
- Preference Handling: Selecting the Most Preferred Solution. None of the identified papers that generate the entire Pareto front cover the preference handling stage after the optimization. Instead, they simply assume that the DM can choose their most preferred solution based on the Pareto front (a minimal sketch of the underlying nondominance filtering follows this list). However, as explained at the start of Section 3, choosing a solution from the Pareto front is not a straightforward task. For example, when the Pareto front is high-dimensional, the DM will face cognitive overload. One paper [72] addresses this by first linearly aggregating multiple objectives into two composite objective functions to lower the dimensionality. Afterwards, the authors assume that the DM can choose their most preferred solution from the two-dimensional Pareto front. However, to linearly aggregate the objectives, the method requires a-priori preference information (i.e., a weight vector), thereby introducing the same challenges of a-priori preference handling, in particular those of the weighted sum. Clearly, the focus of the aforementioned papers is on the optimization process and the creation of a Pareto front. The subsequent step of selecting the MPS, however, is neglected. The DM is not aided in this process, which hinders the usability of the method. In particular, the decision-aid aspect of optimization is neglected.
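For completeness, the nondominance filtering that underlies "providing the Pareto front" is a simple pairwise dominance check. The sketch below is a generic minimization version over hypothetical route score vectors; a real planner would apply it to the population produced by the optimizer.

```python
# Sketch of a Pareto (nondominance) filter for minimization objectives.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # O(n^2) scan; sufficient for the small sets a DM would inspect.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (length, danger, exposure time) vectors of candidate routes.
candidates = [(100, 0.4, 12), (120, 0.2, 15), (110, 0.4, 12), (100, 0.5, 20)]
print(pareto_front(candidates))  # [(100, 0.4, 12), (120, 0.2, 15)]
```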
4.3.1. ε-Constraint Method
- Constraint Introduction—Problem Complexity: The introduction of constraints can greatly complicate the structure of the optimization problem. In particular, for the shortest path problem, introducing constraints changes the complexity of the problem from polynomially solvable to NP-complete [78]. Therefore, for larger problem instances, an exact ε-constraint method for the UAV route planning problem is not tractable. Instead, metaheuristics, for example, can be used to solve the formulated SOO problem.
- Weakly Dominated Points: The ε-constraint method can generate weakly dominated points. In particular, the method will generate all points that satisfy the constraints on the bounded objectives and for which the retained objective $f_1$ is at its minimum. When there is no unique solution (i.e., there are multiple solutions $x$ for which $f_1$ is at its minimum), all of these solutions can be generated. Of course, in this case, one would prefer to keep only the solution that is also minimal in the bounded objective. Therefore, one could add a second optimization problem that optimizes the bounded objective given the optimal $f_1$ value. Note that this is lexicographic optimization [79]. Alternatively, one could eliminate these weakly efficient solutions by modifying the objective function slightly. In particular, in the augmented ε-constraint method, the problem is reformulated to include a small term meant to differentiate between weakly efficient and efficient solutions [43]. For example, in the bi-objective case, we add the term $\rho f_2(x)$ to get Equation (9). When $\rho$ is sufficiently small to only differentiate between solutions with equal $f_1$, this removes weakly efficient solutions. Note that this happens without the need to formulate and solve additional optimization problems, thereby saving computational resources. Alternative ways to remove weakly efficient solutions are available (e.g., Mavrotas [80]).
- Start and End Points of Tightening—Objective Range: In order to obtain the full Pareto front, one needs to know where to start systematically tightening the bound and when to stop. In particular, the loose bound (starting point) is the nadir point. The ending point is where the bound is at its tightest and only one solution remains; this is the ideal point, the point with the best possible objective values of points on the Pareto front. The ideal point is easy to estimate: one simply solves a single-objective minimization problem for each objective i, and the resulting optimal objective value is the ideal point in dimension i [8]. For the nadir point, one can take the worst objective values found while solving this series of SOO problems. In the bi-objective case, this finds the exact nadir point. However, when there are more than two objectives, this can greatly over- or under-estimate the nadir point [8,80]. Many different estimation methods exist (for a survey, see Deb & Miettinen [81]), but no efficient general method is known [8].
- Systematically Tightening the Bounds: Originally, the ε-constraint method tightens the bound of each constraint i by a value $\delta_i$. The value of $\delta_i$ relative to the range of the objectives determines how detailed the generated Pareto front will be: a large $\delta_i$ results in a coarse Pareto front, whereas a small $\delta_i$ results in a fine-grained one. In particular, $\delta_i$ can be seen as the step size of the method. Increasing it makes the steps bigger, leading to a coarser Pareto front. At the same time, it decreases the computational cost: a bigger step size reduces the number of steps needed to reach the ideal point, hence fewer optimization problems need to be solved. Therefore, $\delta_i$ needs to be fine enough to get a good approximation of the Pareto front but coarse enough to prevent a lot of redundant computational effort [82] (see the sketch below). Alternatively, one can tighten the bounds adaptively: when the bound is at a non-Pareto-efficient part of the objective space, larger steps can be taken, whereas when the bound is at a Pareto-efficient point, small steps can be taken. An example of this can be found in Laumanns et al. [82].
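The sketch below walks through the bi-objective (augmented) ε-constraint loop on a small discrete set of illustrative points, with the inner minimization standing in for the series of SOO solver calls; the step δ and the augmentation weight ρ are assumed values.

```python
# Sketch of the (augmented) epsilon-constraint method on a discrete
# bi-objective set; in practice the inner min is an SOO solver call.
pts = [(3.0, 9.0), (4.0, 7.0), (4.0, 6.0), (6.0, 4.0), (8.0, 2.0)]  # (f1, f2)

rho = 1e-4    # augmentation weight, assumed small enough to only break ties
delta = 1.0   # step by which the bound is tightened

eps = max(f2 for _, f2 in pts)  # loose starting bound at the nadir of f2
front = []
while True:
    feasible = [p for p in pts if p[1] <= eps]  # constraint f2 <= eps
    if not feasible:
        break
    # Augmented objective f1 + rho*f2 breaks the tie between the two
    # solutions with f1 = 4, discarding the weakly dominated (4, 7).
    best = min(feasible, key=lambda p: p[0] + rho * p[1])
    front.append(best)
    eps = best[1] - delta  # tighten the bound past the last solution
print(front)  # [(3,9), (4,6), (6,4), (8,2)]: weakly dominated point removed
```

Without the ρ·f2 term, the loop could return the weakly dominated point (4, 7) whenever the bound admits both solutions with f1 = 4; the augmentation resolves this without a second, lexicographic optimization problem.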
4.4. Interactive Preference Handling
Interactive Reference Point-Based Methods
- Preference Information—Reference Point: Preference information is supplied by means of a reference point. Providing reference points is a cognitively valid way of providing preference information [86], as DMs do not have to learn to use other concepts. Instead, they can simply work in the objective space, with which they are familiar. However, as discussed in Section 4.2.3, using goals can result in dominated solutions due to the satisficing philosophy. Therefore, it is important to handle dominated solutions when using interactive reference point-based methods. Dasdemir et al. [85] prevent dominated solutions by keeping track of a set of nondominated solutions throughout the entire optimization process. After the optimization is done, the nondominated points closest to the reference point are returned as the ROI. Therefore, assuming the evolutionary algorithm has found nondominated solutions, only nondominated solutions will be returned. Additionally, when achieving one goal is more important than achieving another, it can be difficult to represent this with a reference vector. Therefore, a weighted variant [87] exists that introduces relative importance weights in the distance function (see the preference model below). Note that although this gives the DM the possibility to include preferences regarding the importance of achieving certain goals, specifying weights is not a trivial task (as described in Section 4.2.1).
- Preference Model—Distance Function: Based on the provided reference point, a preference model is constructed. The preference model captures the subjective preferences of the DM and uses them to identify the ROI on the Pareto front. In interactive reference point-based methods, the preference model is usually fully determined by the combination of a scalar distance function and the reference point [87,88]. This distance function is sometimes called an Achievement Scalarizing Function, which essentially performs distance minimization [89]. The choice of distance function influences the chosen ROI. For example, with a quadratic distance function, points far away from the reference point are punished more than with a linear function. Moreover, by choosing a nonlinear distance function, one introduces nonlinearity, alleviating the issues of the weighted sum method described in Section 4.2.1. In Dasdemir et al. [85], the distance function is the normalized Euclidean distance (which is nonlinear) to the reference point. The solution closest to the reference point is ranked 1, the second closest is ranked 2, etc. Whichever solution is closer (based on the chosen distance function) to the reference point is thus preferred by the DM. Instead of finding a single most preferred solution, interactive reference point-based methods aim to identify an ROI. To identify this ROI, one needs to balance specificity and diversity. In particular, the ROI should not collapse into a single point because it is too specific, but it should also not be so diverse that it is spread around the entire Pareto front. Therefore, to tune the spread of the ROI, an ε-niching method is commonly used [83] (e.g., in R-NSGA-II [90] and RPSO-SS [91]). Dasdemir et al. [85] also employ such an ε-niching method. In particular, one representative is constructed and all solutions within an ε-ball around the representative are removed. Subsequently, the next representative outside of the ε-ball is created and the process repeats (a minimal sketch of this follows this list). By increasing ε, more solutions are deemed equal and the ROI becomes wider, and vice versa. Setting this design parameter, however, is not trivial [83]. For example, in the exploration phase a DM could want a more diverse spread of the ROI compared to when the DM tries to find the final most preferred solution. Therefore, adaptively changing ε based on whether the DM is exploring or selecting their MPS might be useful.
- Feedback to the DM—Pareto Front Information: One of the main advantages of using interactive optimization methods is that the DM can learn about the possibilities and limitations of the problem at hand by receiving feedback from the optimization process. Based on the learned information, a DM can better identify the ROI of the Pareto front. The feedback provided to the DM can take many forms; for example, possible trade-off information or a general approximation of the Pareto front can be provided. For an interactive system to work correctly, it is vital that the interaction is not too cognitively demanding for the DM (i.e., causes no cognitive overload). At the same time, the information needs to be rich enough to provide the DM with the necessary capabilities to correctly identify their ROI. Moreover, different DMs have different skill sets and affinities with optimization; therefore, the interaction should be designed in close collaboration with the relevant DMs. In Dasdemir et al. [85], the feedback provided to the DM is a general approximation of the Pareto front (although no guarantees are given that the provided points are actually nondominated). In the bi-objective case they evaluate, this is a cognitively valid way of communicating information, but when the Pareto front becomes higher-dimensional it can cognitively overload the DM [92]. Therefore, alternative visualization methods exist (e.g., Talukder et al. [93]). Besides that, the interaction is neither designed nor evaluated with the relevant DMs; therefore, it is hard to say whether, in this context, Pareto front information is an appropriate way to communicate information to the DM.
- Interaction Pattern—During or After a Run: Finally, when designing an interactive optimization method, one needs to decide at what point the DM is allowed to provide preference information. In particular, Xin et al. [83] have identified two interaction patterns: Interaction After a complete Run (IAR) and Interaction During a Run (IDR). Reference point-based methods generally follow the IAR pattern. IDR methods, on the other hand, give the DM more chances to guide the search process, thereby making it more DM-centric. Alternatively, one could allow the DM to intervene at will, either waiting for a complete run or stopping the run. Most importantly, there needs to be a balance between requesting preference information and optimization. We do not want to strain the DMs too much by asking them to provide a lot of preference information, but at the same time, we need to be able to build an accurate preference model and allow the DM to be in control.
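To make the preference model concrete, the sketch below ranks candidate points by normalized Euclidean distance to a reference point and then applies ε-niching to extract a small ROI, loosely following the mechanism attributed to Dasdemir et al. [85] above. The reference point, objective ranges, and ε are illustrative assumptions.

```python
# Sketch of a reference point preference model with epsilon-niching.
import math

def norm_dist(p, ref, ranges):
    """Normalized Euclidean distance used as the ranking score."""
    return math.sqrt(sum(((a - b) / r) ** 2 for a, b, r in zip(p, ref, ranges)))

def roi(points, ref, ranges, eps, size=3):
    """Pick representatives near ref; drop neighbors inside each eps-ball."""
    pool = sorted(points, key=lambda p: norm_dist(p, ref, ranges))
    reps = []
    while pool and len(reps) < size:
        rep = pool.pop(0)  # closest remaining point becomes a representative
        reps.append(rep)
        pool = [p for p in pool if norm_dist(p, rep, ranges) > eps]
    return reps

# Hypothetical nondominated (length, detection probability) points.
pts = [(100, 0.40), (110, 0.32), (112, 0.31), (130, 0.20), (150, 0.10)]
print(roi(pts, ref=(115, 0.30), ranges=(50, 0.30), eps=0.15))
```

Growing ε removes more near-duplicates around each representative, so the returned ROI spreads further along the front; shrinking it concentrates the ROI near the reference point. This is exactly the specificity/diversity trade-off discussed above.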
5. Comparison of Preference Handling Techniques
- Nash Product: The Nash product, a no-preference method, transforms the original MOO problem into an SOO problem by aggregating the k objectives into a single composite objective function via the product rule (see Equation (3)). Subsequently, it allows the use of SOO techniques to solve the formulated problem. The resulting single Pareto-optimal solution assumes equal importance of the objectives and finds a balanced tradeoff between them. One advantage of the Nash product is its low computational cost, since SOO methods can be used. Additionally, the use of the product makes the method scale-invariant, which means no normalization of objectives is needed. The main disadvantage is that the method cannot be tailored to different scenarios, which is much needed given the large variety of military missions. Suitable applications are thus those where one does not have access to a DM and/or requires a single balanced compromise solution.
- Weighted Sum Method and Reward Shaping: The weighted sum method, an a-priori technique, transforms the original MOO problem into an SOO problem by linearly aggregating the k objectives via a weighted sum (see Equation (5)). The reward shaping method, as used by the reviewed papers, handles preferences equivalently to the weighted sum method. Regular SOO techniques can then be used to solve the formulated problem. The resulting solution is weakly Pareto optimal, but when all the weights are set strictly greater than 0 (i.e., $w_i > 0$), the solution is guaranteed to be Pareto optimal. The main advantage of the weighted sum method and reward shaping is that they greatly reduce the computational complexity of the problem. A disadvantage is that they can only find points on the convex hull of the feasible objective space. Besides that, they can favor extreme solutions even when the weights are set equal. Because of this, setting weights is non-trivial, and the interpretation of weights as the relative importance of objectives might not hold. Additionally, the method requires normalization of the objectives. Finally, specifying preference information a-priori might result in suboptimal results. Suitable applications are those with convex Pareto fronts (e.g., linear objective functions), requiring a single optimal solution, and having access to objective trade-off information before optimization.
- Priorities: Setting priorities transforms the original MOO problem into a series of j SOO problems (assuming j priority levels are used) via lexicographic optimization. Subsequently, SOO techniques can be used to solve the formulated SOO problems. The resulting solutions are Pareto optimal, assuming all objectives have their own priority level. The main advantage of using priorities is that it greatly reduces the computational complexity of the original MOO problem. Besides that, when a natural absolute ordering is available, it is easy to provide preferences. However, it is questionable whether such an ordering exists between optimization objectives. Suitable applications are thus those where a natural absolute ordering between objectives exists and a single optimal solution is needed.
- Goals: Specifying goals (i.e., reference points) transforms the MOO problem into a single SOO problem. In particular, the composite objective function is the sum of unwanted deviations from the goal values. Traditional SOO methods can be used to solve the formulated problem. The resulting solution is not guaranteed to be Pareto optimal. The main advantage of using goals is the reduction in computational complexity. Additionally, when information on the ranges and trade-offs in the objective space is available, setting goals is a cognitively valid way of expressing preference information. The main disadvantage is that such information is usually not available a-priori. Besides that, solutions found by these methods are not guaranteed to be Pareto optimal; therefore, techniques for Pareto-efficiency restoration have to be used. Suitable applications are thus characterized by having access to objective range and trade-off information a-priori and needing a single optimal solution.
- ε-Constraint Method: The ε-constraint method transforms the MOO problem into a series of n SOO problems, where n depends on the step size of the tightening. The solutions to these problems form the approximated Pareto front. To formulate the SOO problems, one objective is kept as the optimization objective, and all others are set as constraints. First, the constraints are set loosely, and in every iteration they are tightened. The main advantage of this method is that it provides the DM with Pareto front information before requiring them to make a choice. Additionally, it provides the DM with multiple solutions. Finally, as opposed to the systematic weighted sum method, this method also finds points on the concave parts of the Pareto front. The main disadvantage is the computational cost: to obtain a reasonable level of detail of the Pareto front, a very large number of SOO problems needs to be solved. Therefore, this method does not scale well to many objectives or to complex SOO problems, like the resource-constrained shortest path problem. Finally, when the Pareto front is high-dimensional, it is hard for the DM to select the final solution. Suitable applications are thus those where multiple alternative solutions are preferred, few optimization objectives exist, and the formulated SOO problems are easy to solve.
- Interactive Reference Point-Based Method: Interactive reference point-based methods allow the DM to specify their reference points iteratively during the optimization process. By doing so, they aim to approximate the parts of the Pareto front close to the reference points. The resulting approximations are not guaranteed to be Pareto optimal by default. The main advantage is that the interactive nature of the method allows the DM to specify preference information under better-informed circumstances, thereby simplifying the preference specification step and consequently improving the likelihood that the final solution identified is the MPS. Moreover, since preference information is supplied during the optimization process, only parts of the Pareto front have to be generated, saving computational resources. The main disadvantage is that it is difficult to find the right balance between overloading the DM and obtaining enough preference information. Moreover, since the solutions are not guaranteed to be Pareto optimal, Pareto-efficiency restoration techniques or similar methods have to be used. Suitable areas of application are those with intensive access to DMs who are willing to participate in the decision process.
6. Conclusions and Future Research
- Aligning the mathematical optimum with the MPS via accurate preference models: The goal of the route planning problem is to aid the decision-maker in finding their MPS. Currently, most of the research is focused on improving optimization algorithms. No doubt, it is important to ensure that a mathematical optimum can be found. However, when this mathematical optimum does not correspond to the DM’s MPS, the optimization is of little use. Therefore, more effort should be put into aligning the mathematical optimum with the DM’s MPS. For instance, the weighted sum method, used by 70% of the reviewed papers, employs an overly simplistic preference model, potentially leading to a mismatch between the mathematical optimum and the MPS. Lexicographic optimization methods assume an absolute ordering between optimization objectives; however, it is questionable whether such an ordering ever exists. Therefore, future research should employ accurate representations of the DM’s preferences.
- a-posterior and interactive preference handling: As our literature review has shown, nearly 90% of the papers employ a-priori preference handling techniques. These methods have the major disadvantage that preference elicitation happens before anything is known about the possibilities, limitations, and capabilities of the problem. As a result, the final identified solution might not be the MPS. Additionally, these methods provide a single optimal solution, whereas in some scenarios, particularly in military missions, providing alternative solutions is extremely useful. Therefore, it would be interesting to further investigate a-posterior and interactive preference handling techniques, as these include preference elicitation at a later, better-informed stage and provide multiple optimal solutions, mitigating the disadvantages of a-priori methods.
- Cognitive validity of preference elicitation: Most of the literature has neglected the cognitive validity of the preference elicitation stage. For example, providing weights as preference information is a nontrivial task, potentially misleading the DM. For a method to work in practice, a DM needs to be able to trust it. Therefore, future research should be geared towards using cognitively valid preference elicitation techniques.
- DM-centric evaluation: Finally, as the goal of the route planning problem is to help DMs arrive at their MPS, performance should be evaluated in this regard as well. In particular, methods should be evaluated based on their decision-aid capabilities and not just on algorithmic performance. Therefore, evaluation with expert DMs is an interesting direction for future research.
- Preference Handling in cooperative path planning: Our paper has focused on the handling of preferences w.r.t. the optimization objectives for individual UAVs. When one focuses on cooperative path planning (e.g., Pan et al. [94] and Liu et al. [95]), one not only has multiple conflicting objectives per UAV, but each UAV’s optimal path might also conflict with the others’. There is thus another layer of multi-objective optimization. For future research, it would be interesting to further investigate the way preferences are handled in such cases.
- Standardization of evaluation of methods: Currently, research papers propose an optimization algorithm and evaluate it using self-defined scenarios. Since every paper defines its own scenario, it is difficult to compare the results of different methods with each other. For future research, it would be interesting to define a set of benchmark scenarios on which each method can be tested. This would facilitate a better comparison of methods.
- Definition of optimization objectives: Many different mathematical definitions of the optimization objectives are employed throughout the literature. For example, to minimize the probability of detection, the literature has maximized the distance from the radar [96], minimized the radar power with various functions [97,98,99], minimized a probability defined by a Gaussian [62], minimized the time spent in the threat zone [100], etc. Similarly, many different definitions exist for the flight time objective. All these definitions have different effects on the resulting mathematical optimum.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
DURC Statement
Conflicts of Interest
Abbreviations
UAV | Unmanned Aerial Vehicle |
EW | Electronic Warfare |
DM | Decision-Maker |
MPS | Most Preferred Solution |
RQ | Research Question |
NLDA | Netherlands Defence Academy |
NPS | Naval Postgraduate School |
AIAA | American Institute of Aeronautics and Astronautics |
MOO | Multi-Objective Optimization |
SOO | Single-Objective Optimization |
ROI | Region Of Interest |
IAR | Interaction After a complete Run |
IDR | Interaction During a Run |
Appendix A
Reference | Class | Method | Evaluation |
---|---|---|---|
Flint et al. [28] | No Preference | Nash Product | Not Applicable |
Kim & Hespanha [101] | a-priori | N-L-WS | Preference Assumed |
Misovec et al. [102] | a-priori | L-WS | Preference Assumed |
Dogan [62] | a-priori | Goals and Priorities | Multiple preferences visualized |
Chaudhry et al. [63] | a-priori | Goals and Priorities | Multiple preferences visualized |
Qu et al. [103] | a-priori | L-WS | Multiple preferences visualized |
McLain & Beard [104] | a-priori | L-WS | Preference Assumed |
Ogren & Winstrand [64] | a-priori | Goals and Priorities | Preference Assumed |
Changwen et al. [105] | a-priori | L-WS | Preference Assumed |
Weiss et al. [106] | a-priori | N-L-WS | Preference Assumed |
Foo et al. [107] | a-priori | L-WS | Multiple preferences visualized |
Ogren et al. [59] | a-priori | Goals and Priorities | Preference Assumed |
Kabamba et al. [60] | a-priori | Goals and Priorities | Not Applicable |
Zabarankin et al. [65] | a-priori | Goals and Priorities | Multiple preferences visualized |
Zhang et al. [108] | a-priori | L-WS | Preference Assumed |
Lamont et al. [72] | a-posterior | L-WS, Pareto Front | Provide Pareto Front |
Zhenhua et al. [73] | a-posterior | Pareto Front | Provide Pareto Front |
de la Cruz et al. [57] | a-priori | Goals and Priorities | Preference Assumed |
Xia et al. [109] | a-priori | L-WS | Multiple preferences visualized |
Tulum et al. [110] | a-priori | L-WS | Heuristic approximation |
Qianzhi & Xiujuan [111] | a-priori | L-WS | Preference Assumed |
Wang et al. [112] | a-priori | L-WS | Preference Assumed |
Zhang et al. [113] | a-priori | L-WS | Preference Assumed |
Besada-Portas et al. [58] | a-priori | Goals and Priorities | Preference Assumed |
Xin Yang et al. [114] | a-priori | N-L-WS | Preference Assumed |
Yong Bao et al. [115] | a-priori | L-WS | Preference Assumed |
Kan et al. [97] | a-priori | L-WS | Multiple preferences visualized |
Lei & Shiru [98] | a-priori | L-WS | Preference Assumed |
Zhou et al. [99] | a-priori | L-WS | Preference Assumed |
Holub et al. [116] | a-priori | L-WS | Multiple preferences visualized |
Chen et al. [117] | a-priori | L-WS | Preference Assumed |
Li et al. [118] | a-priori | L-WS | Preference Assumed |
Wallar et al. [119] | a-priori | N-L-WS | Preference Assumed |
Yang et al. [56] | a-priori | Goals and Priorities | Preference Assumed |
Qu et al. [120] | a-priori | L-WS | Multiple preferences visualized |
Duan et al. [121] | a-priori | L-WS | Preference Assumed |
Chen & Chen [117] | a-priori | L-WS | Preference Assumed |
Wang et al. [122] | a-priori | N-L-WS | Preference Assumed |
Xiaowei & Xiaoguang [123] | a-priori | L-WS | Preference Assumed |
Wen et al. [124] | a-priori | L-WS | Expert DM |
Zhang et al. [125] | a-priori | L-WS | Preference Assumed |
Erlandsson [46] | a-priori | reward shaping | Multiple preferences visualized |
Humphreys et al. [126] | a-priori | L-WS | Preference Assumed |
Tianzhu et al. [127] | a-priori | L-WS | Preference Assumed |
Wang & Zhang [128] | a-priori | L-WS | Preference Assumed |
Jing-Lin et al. [129] | a-priori | L-WS | Preference Assumed |
Savkin & Huang [61] | a-priori | Goals and Priorities | Not Applicable |
Zhang et al. [130] | a-priori | L-WS | Multiple preferences visualized |
Maoquan et al. [131] | a-priori | N-L-WS | Preference Assumed |
Danacier et al. [132] | a-priori | L-WS | Multiple preferences visualized |
Chen & Wang [133] | a-priori | L-WS | Preference Assumed |
Zhang & Zhang [96] | a-priori | L-WS | Preference Assumed |
Patley et al. [134] | a-priori | L-WS | Preference Assumed |
Ma et al. [135] | a-priori | L-WS | Preference Assumed |
Dasdemir et al. [85] | interactive | Reference Point-based | Author as DM |
Zhang et al. [136] | a-priori | L-WS | Preference Assumed |
Wu et al. [137] | a-priori | L-WS | Preference Assumed |
Xiong et al. [138] | a-priori | N-L-WS | Preference Assumed |
Zhang et al. [139] | a-priori | L-WS | Preference Assumed |
Yan et al. [47] | a-priori | reward shaping | Preference Assumed |
Abhishek et al. [29] | No Preference | Nash Product | Not Applicable |
Zhou et al. [140] | a-priori | N-L-WS | Preference Assumed |
Zhang et al. [141] | a-priori | L-WS | Multiple preferences visualized |
Xu et al. [142] | a-priori | L-WS | Preference Assumed |
Huan et al. [143] | a-priori | L-WS | Preference Assumed |
Leng & Sun [100] | a-priori | N-L-WS | Multiple preferences visualized |
Yuksek et al. [48] | a-priori | reward shaping | Preference Assumed |
Woo et al. [144] | a-priori | L-WS | Preference Assumed |
Lu et al. [145] | a-priori | L-WS | Preference Assumed |
Zhang et al. [146] | a-priori | L-WS | Preference Assumed |
Alpdemir [49] | a-priori | reward shaping | Preference Assumed |
Dasdemir et al. [76] | a-posterior | Pareto Front | Provide Pareto Front |
Zhao et al. [50] | a-priori | reward shaping | Preference Assumed |
Tezcaner et al. [77] | a-posterior | Pareto Front | Provide Pareto Front |
Zhang et al. [147] | a-priori | L-WS | Preference Assumed |
Luo et al. [148] | a-priori | N-L-WS | Multiple preferences visualized |
References
- Buckley, J. Air Power in the Age of Total War, 1st ed.; Routledge: Oxfordshire, UK, 2006. [Google Scholar] [CrossRef]
- Imperial War Museums. A Brief History of Drones; Imperial War Museums: London, UK, 2018. [Google Scholar]
- Central Intelligence Agency. Remotely Piloted Vehicles in the Third World: A New Military Capability; Central Intelligence Agency: McLean, VA, USA, 1986.
- Lambeth, B.S. Moscow’s Lessons from the 1982 Lebanon Air War. Technical Report; RAND: Boston, MA, USA, 1984. [Google Scholar]
- Zafra, M.; Hunder, M.; Rao, A.; Kiyada, S. How Drone Combat in Ukraine is Changing Warfare; Reuters: Hoboken, NJ, USA, 2024. [Google Scholar]
- Watling, J.; Reynolds, N. Meatgrinder: Russian Tactics in the Second Year of Its Invasion of Ukraine Special Report; Technical Report; Royal United Services Institute: London, UK, 2023. [Google Scholar]
- Milley, M.A.; Schmidt, E. America Isn’t Ready for the Wars of the Future. Foreign Aff. 2024, 103, 26. [Google Scholar]
- Ehrgott, M. Multicriteria Optimization, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2005; p. 7. [Google Scholar] [CrossRef]
- Fevgas, G.; Lagkas, T.; Argyriou, V.; Sarigiannidis, P. Coverage Path Planning Methods Focusing on Energy Efficient and Cooperative Strategies for Unmanned Aerial Vehicles. Sensors 2022, 22, 1235. [Google Scholar] [CrossRef] [PubMed]
- Aggarwal, S.; Kumar, N. Path planning techniques for unmanned aerial vehicles: A review, solutions, and challenges. Comput. Commun. 2020, 149, 270–299. [Google Scholar] [CrossRef]
- Jones, M.; Djahel, S.; Welsh, K. Path-Planning for Unmanned Aerial Vehicles with Environment Complexity Considerations: A Survey. ACM Comput. Surv. 2023, 55, 1–39. [Google Scholar] [CrossRef]
- Allaire, F.C.J.; Labonté, G.; Tarbouchi, M.; Roberge, V. Recent advances in unmanned aerial vehicles real-time trajectory planning. J. Unmanned Veh. Syst. 2019, 7, 259–295. [Google Scholar] [CrossRef]
- Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Hoshino, Y.; Peng, C.C. Path Smoothing Techniques in Robot Navigation: State-of-the-Art, Current and Future Challenges. Sensors 2018, 18, 3170. [Google Scholar] [CrossRef]
- Song, B.; Qi, G.; Xu, L. A Survey of Three-Dimensional Flight Path Planning for Unmanned Aerial Vehicle. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 6, pp. 5010–5015. [Google Scholar] [CrossRef]
- Ait Saadi, A.; Soukane, A.; Meraihi, Y.; Benmessaoud Gabis, A.; Mirjalili, S.; Ramdane-Cherif, A. UAV Path Planning Using Optimization Approaches: A Survey. Arch. Comput. Methods Eng. 2022, 29, 4233–4284. [Google Scholar] [CrossRef]
- Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl.-Based Syst. 2018, 158, 54–64. [Google Scholar] [CrossRef]
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
- Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef]
- Miettinen, K. Nonlinear Multiobjective Optimization; International Series in Operations Research & Management Science; Springer: Boston, MA, USA, 1998; Volume 12. [Google Scholar] [CrossRef]
- Thiele, L.; Miettinen, K.; Korhonen, P.J.; Molina, J. A Preference-Based Evolutionary Algorithm for Multi-Objective Optimization. Evol. Comput. 2009, 17, 411–436. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Olhofer, M.; Jin, Y. A mini-review on preference modeling and articulation in multi-objective optimization: Current status and challenges. Complex Intell. Syst. 2017, 3, 233–245. [Google Scholar] [CrossRef]
- Tezcaner, D.; Köksalan, M. An Interactive Algorithm for Multi-objective Route Planning. J. Optim. Theory Appl. 2011, 150, 379–394. [Google Scholar] [CrossRef]
- Zajac, S.; Huber, S. Objectives and methods in multi-objective routing problems: A survey and classification scheme. Eur. J. Oper. Res. 2021, 290, 1–25. [Google Scholar] [CrossRef]
- Afsar, B.; Miettinen, K.; Ruiz, F. Assessing the Performance of Interactive Multiobjective Optimization Methods. ACM Comput. Surv. 2022, 54, 85. [Google Scholar] [CrossRef]
- Belton, V.; Branke, J.; Eskelinen, P.; Greco, S.; Molina, J.; Ruiz, F.; Słowiński, R. Interactive Multiobjective Optimization from a Learning Perspective. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Miettinen, K., Slowinski, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5252, pp. 405–433. [Google Scholar] [CrossRef]
- Olson, D.L. Review of Empirical Studies in Multiobjective Mathematical Programming: Subject Reflection of Nonlinear Utility and Learning. Decis. Sci. 1992, 23, 1–20. [Google Scholar] [CrossRef]
- Khaira, A.; Dwivedi, R. A State of the Art Review of Analytical Hierarchy Process. Mater. Today Proc. 2018, 5, 4029–4035. [Google Scholar] [CrossRef]
- Flint, M.; Fernandez-Gaucherand, E.; Polycarpou, M. Cooperative control for UAV’s searching risky environments for targets. In Proceedings of the 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), Maui, HI, USA, 9–12 December 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 3567–3572. [Google Scholar] [CrossRef]
- Abhishek, B.; Ranjit, S.; Shankar, T.; Eappen, G.; Sivasankar, P.; Rajesh, A. Hybrid PSO-HSA and PSO-GA algorithm for 3D path planning in autonomous UAVs. SN Appl. Sci. 2020, 2, 1805. [Google Scholar] [CrossRef]
- Nash, J.F. The Bargaining Problem. Econometrica 1950, 18, 155. [Google Scholar] [CrossRef]
- Kaneko, M.; Nakamura, K. The Nash Social Welfare Function. Econometrica 1979, 47, 423. [Google Scholar] [CrossRef]
- Hooker, J.N. Moral Implications of Rational Choice Theories. In Handbook of the Philosophical Foundations of Business Ethics; Springer: Dordrecht, The Netherlands, 2013; pp. 1459–1476. [Google Scholar] [CrossRef]
- Brânzei, S.; Gkatzelis, V.; Mehta, R. Nash Social Welfare Approximation for Strategic Agents. Oper. Res. 2022, 70, 402–415. [Google Scholar] [CrossRef]
- Charkhgard, H.; Keshanian, K.; Esmaeilbeigi, R.; Charkhgard, P. The magic of Nash social welfare in optimization: Do not sum, just multiply! ANZIAM J. 2022, 64, 119–134. [Google Scholar] [CrossRef]
- Narahari, Y. Cooperative Game Theory. In The Two Person Bargaining Problem; Indian Institute of Science: Bangalore, India, 2012. [Google Scholar]
- Zhou, L. The Nash Bargaining Theory with Non-Convex Problems. Econometrica 1997, 65, 681. [Google Scholar] [CrossRef]
- Huynh, D. Bargaining Games: A Comparison of Nash’s Solution with the Coco-Value; CEREMADE: Paris, France, 2016. [Google Scholar]
- Binmore, K.; Rubinstein, A.; Wolinsky, A. The Nash Bargaining Solution in Economic Modelling. RAND J. Econ. 1986, 17, 176. [Google Scholar] [CrossRef]
- Dagan, N.; Volij, O.; Winter, E. A characterization of the Nash bargaining solution. Soc. Choice Welf. 2002, 19, 811–823. [Google Scholar] [CrossRef]
- Yang, X.S. Multi-Objective Optimization. In Nature-Inspired Optimization Algorithms; Elsevier: Amsterdam, The Netherlands, 2014; pp. 197–211. [Google Scholar] [CrossRef]
- Deb, K.; Deb, K. Multi-objective Optimization. In Search Methodologies; Springer: Boston, MA, USA, 2014; pp. 403–449. [Google Scholar] [CrossRef]
- Marler, R.T.; Arora, J.S. The weighted sum method for multi-objective optimization: New insights. Struct. Multidiscip. Optim. 2010, 41, 853–862. [Google Scholar] [CrossRef]
- Steuer, R.E. Multiple Criteria Optimization: Theory, Computation, and Application; John Wiley: Hoboken, NJ, USA, 1986; p. 425. [Google Scholar]
- Kaim, A.; Cord, A.F.; Volk, M. A review of multi-criteria optimization techniques for agricultural land use allocation. Environ. Model. Softw. 2018, 105, 79–93. [Google Scholar] [CrossRef]
- Doğan, I.; Lokman, B.; Köksalan, M. Representing the nondominated set in multi-objective mixed-integer programs. Eur. J. Oper. Res. 2022, 296, 804–818. [Google Scholar] [CrossRef]
- Erlandsson, T. Route planning for air missions in hostile environments. J. Def. Model. Simul. Appl. Methodol. Technol. 2015, 12, 289–303. [Google Scholar] [CrossRef]
- Yan, C.; Xiang, X.; Wang, C. Towards Real-Time Path Planning through Deep Reinforcement Learning for a UAV in Dynamic Environments. J. Intell. Robot. Syst. 2020, 98, 297–309. [Google Scholar] [CrossRef]
- Yuksek, B.; Demirezen, U.M.; Inalhan, G. Development of UCAV Fleet Autonomy by Reinforcement Learning in a Wargame Simulation Environment. In Proceedings of the AIAA Scitech 2021 Forum, Online, 11–21 January 2021; Volume 1. [Google Scholar] [CrossRef]
- Alpdemir, M.N. Tactical UAV path optimization under radar threat using deep reinforcement learning. Neural Comput. Appl. 2022, 34, 5649–5664. [Google Scholar] [CrossRef]
- Zhao, X.; Yang, R.; Zhang, Y.; Yan, M.; Yue, L. Deep Reinforcement Learning for Intelligent Dual-UAV Reconnaissance Mission Planning. Electronics 2022, 11, 2031. [Google Scholar] [CrossRef]
- Hayes, C.F.; Rădulescu, R.; Bargiacchi, E.; Källström, J.; Macfarlane, M.; Reymond, M.; Verstraeten, T.; Zintgraf, L.M.; Dazeley, R.; Heintz, F.; et al. A practical guide to multi-objective reinforcement learning and planning. Auton. Agents Multi-Agent Syst. 2022, 36, 26. [Google Scholar] [CrossRef]
- Moffaert, K.V.; Nowé, A. Multi-Objective Reinforcement Learning using Sets of Pareto Dominating Policies. J. Mach. Learn. Res. 2014, 15, 3663–3692. [Google Scholar]
- Vamplew, P.; Yearwood, J.; Dazeley, R.; Berry, A. On the Limitations of Scalarisation for Multi-objective Reinforcement Learning of Pareto Fronts. In AI 2008: Advances in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5360, pp. 372–378. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Nguyen, N.D.; Vamplew, P.; Nahavandi, S.; Dazeley, R.; Lim, C.P. A multi-objective deep reinforcement learning framework. Eng. Appl. Artif. Intell. 2020, 96, 103915. [Google Scholar] [CrossRef]
- Fonseca, C.; Fleming, P. Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans. Syst. Man, Cybern. Part A Syst. Humans 1998, 28, 26–37. [Google Scholar] [CrossRef]
- Yang, P.; Tang, K.; Lozano, J.A. Estimation of Distribution Algorithms based Unmanned Aerial Vehicle path planner using a new coordinate system. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 June 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 7, pp. 1469–1476. [Google Scholar] [CrossRef]
- de la Cruz, J.M.; Besada-Portas, E.; Torre-Cubillo, L.; Andres-Toro, B.; Lopez-Orozco, J.A. Evolutionary path planner for UAVs in realistic environments. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, Atlanta, GA, USA, 12–16 July 2008; Volume 7, pp. 1477–1484. [Google Scholar] [CrossRef]
- Besada-Portas, E.; de la Torre, L.; de la Cruz, J.M.; de Andres-Toro, B. Evolutionary Trajectory Planner for Multiple UAVs in Realistic Scenarios. IEEE Trans. Robot. 2010, 26, 619–634. [Google Scholar] [CrossRef]
- Ogren, P.; Backlund, A.; Harryson, T.; Kristensson, L.; Stensson, P. Autonomous UCAV Strike Missions Using Behavior Control Lyapunov Functions. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, CO, USA, 21–24 August 2006; Volume 8. [Google Scholar] [CrossRef]
- Kabamba, P.T.; Meerkov, S.M.; Zeitz, F.H. Optimal Path Planning for Unmanned Combat Aerial Vehicles to Defeat Radar Tracking. J. Guid. Control. Dyn. 2006, 29, 279–288. [Google Scholar] [CrossRef]
- Savkin, A.V.; Huang, H. Optimal Aircraft Planar Navigation in Static Threat Environments. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2413–2426. [Google Scholar] [CrossRef]
- Dogan, A. Probabilistic Path Planning for UAVs. In Proceedings of the 2nd AIAA “Unmanned Unlimited” Conference and Workshop & Exhibit, San Diego, CA, USA, 15–18 September 2003; Volume 9. [Google Scholar] [CrossRef]
- Chaudhry, A.; Misovec, K.; D’Andrea, R. Low observability path planning for an unmanned air vehicle using mixed integer linear programming. In Proceedings of the 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), Nassau, Bahamas, 14–17 December 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 3823–3829. [Google Scholar] [CrossRef]
- Ogren, P.; Winstrand, M. Combining Path Planning and Target Assignment to Minimize Risk in SEAD Missions. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, CA, USA, 15–18 August 2005; Volume 8. [Google Scholar] [CrossRef]
- Zabarankin, M.; Uryasev, S.; Murphey, R. Aircraft routing under the risk of detection. Nav. Res. Logist. (NRL) 2006, 53, 728–747. [Google Scholar] [CrossRef]
- Romero, C. Handbook of Critical Issues in Goal Programming, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
- Simon, H.A. On the Concept of Organizational Goal. Adm. Sci. Q. 1964, 9, 1. [Google Scholar] [CrossRef]
- Simon, H.A. Rational choice and the structure of the environment. Psychol. Rev. 1956, 63, 129–138. [Google Scholar] [CrossRef] [PubMed]
- Jones, D.; Tamiz, M. Practical Goal Programming; International Series in Operations Research & Management Science; Springer: Boston, MA, USA, 2010; Volume 141. [Google Scholar] [CrossRef]
- Hannan, E.L. An assessment of some criticisms of goal programming. Comput. Oper. Res. 1985, 12, 525–541. [Google Scholar] [CrossRef]
- Jones, D.; Tamiz, M. Detection and Restoration of Pareto Inefficiency. In Practical Goal Programming; International Series in Operations Research & Management Science; Hillier, F.S., Ed.; Springer: Boston, MA, USA, 2010; Volume 141, pp. 95–110. [Google Scholar]
- Lamont, G.B.; Slear, J.N.; Melendez, K. UAV Swarm Mission Planning and Routing using Multi-Objective Evolutionary Algorithms. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, Honolulu, HI, USA, 1–5 April 2007; IEEE: Piscataway, NJ, USA, 2007; Volume 4, pp. 10–20. [Google Scholar] [CrossRef]
- Zhenhua, W.; Weiguo, Z.; Jingping, S.; Ying, H. UAV route planning using Multiobjective Ant Colony System. In Proceedings of the 2008 IEEE Conference on Cybernetics and Intelligent Systems, Chengdu, China, 21–24 September 2008; IEEE: Piscataway, NJ, USA, 2008; Volume 9, pp. 797–800. [Google Scholar] [CrossRef]
- Yahia, H.S.; Mohammed, A.S. Path planning optimization in unmanned aerial vehicles using meta-heuristic algorithms: A systematic review. Environ. Monit. Assess. 2023, 195, 30. [Google Scholar] [CrossRef]
- Jiang, Y.; Xu, X.X.; Zheng, M.Y.; Zhan, Z.H. Evolutionary computation for unmanned aerial vehicle path planning: A survey. Artif. Intell. Rev. 2024, 57, 267. [Google Scholar] [CrossRef]
- Dasdemir, E.; Batta, R.; Köksalan, M.; Tezcaner Öztürk, D. UAV routing for reconnaissance mission: A multi-objective orienteering problem with time-dependent prizes and multiple connections. Comput. Oper. Res. 2022, 145, 105882. [Google Scholar] [CrossRef]
- Tezcaner Öztürk, D.; Köksalan, M. Biobjective route planning of an unmanned air vehicle in continuous space. Transp. Res. Part B Methodol. 2023, 168, 151–169. [Google Scholar] [CrossRef]
- Handler, G.Y.; Zang, I. A dual algorithm for the constrained shortest path problem. Networks 1980, 10, 293–309. [Google Scholar] [CrossRef]
- Sáez-Aguado, J.; Trandafir, P.C. Variants of the ϵ-constraint method for biobjective integer programming problems: Application to p-median-cover problems. Math. Methods Oper. Res. 2018, 87, 251–283. [Google Scholar] [CrossRef]
- Mavrotas, G. Effective implementation of the ϵ-constraint method in Multi-Objective Mathematical Programming problems. Appl. Math. Comput. 2009, 213, 455–465. [Google Scholar] [CrossRef]
- Deb, K.; Miettinen, K. A Review of Nadir Point Estimation Procedures Using Evolutionary Approaches: A Tale of Dimensionality Reduction. In Proceedings of the Multiple Criterion Decision Making (MCDM 2009) Conference, Chengdu, China, 22–26 June 2009. [Google Scholar]
- Laumanns, M.; Thiele, L.; Zitzler, E. An efficient, adaptive parameter variation scheme for metaheuristics based on the epsilon-constraint method. Eur. J. Oper. Res. 2006, 169, 932–942. [Google Scholar] [CrossRef]
- Xin, B.; Chen, L.; Chen, J.; Ishibuchi, H.; Hirota, K.; Liu, B. Interactive Multiobjective Optimization: A Review of the State-of-the-Art. IEEE Access 2018, 6, 41256–41279. [Google Scholar] [CrossRef]
- Miettinen, K.; Ruiz, F.; Wierzbicki, A.P.; Jaszkiewicz, A.; Słowiński, R. Introduction to Multiobjective Optimization: Interactive Approaches. In Multiobjective Optimization, LNCS 5252; Springer: Berlin/Heidelberg, Germany, 2008; pp. 27–57. [Google Scholar]
- Dasdemir, E.; Köksalan, M.; Tezcaner Öztürk, D. A flexible reference point-based multi-objective evolutionary algorithm: An application to the UAV route planning problem. Comput. Oper. Res. 2020, 114, 104811. [Google Scholar] [CrossRef]
- Larichev, O.I. Cognitive validity in design of decision-aiding techniques. J. Multi-Criteria Decis. Anal. 1992, 1, 127–138. [Google Scholar] [CrossRef]
- Luque, M.; Miettinen, K.; Eskelinen, P.; Ruiz, F. Incorporating preference information in interactive reference point methods for multiobjective optimization. Omega 2009, 37, 450–462. [Google Scholar] [CrossRef]
- Bechikh, S.; Kessentini, M.; Said, L.B.; Ghédira, K. Preference Incorporation in Evolutionary Multiobjective Optimization; Elsevier: Amsterdam, The Netherlands, 2015; pp. 141–207. [Google Scholar] [CrossRef]
- Nikulin, Y.; Miettinen, K.; Mäkelä, M.M. A new achievement scalarizing function based on parameterization in multiobjective optimization. OR Spectr. 2012, 34, 69–87. [Google Scholar] [CrossRef]
- Deb, K.; Sundar, J. Reference point based multi-objective optimization using evolutionary algorithms. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; Volume 7, pp. 635–642. [Google Scholar] [CrossRef]
- Allmendinger, R.; Li, X.; Branke, J. Reference Point-Based Particle Swarm Optimization Using a Steady-State Approach. In Proceedings of the Simulated Evolution and Learning, Melbourne, Australia, 7–10 December 2008; pp. 200–209. [Google Scholar] [CrossRef]
- Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
- Talukder, A.K.A.; Deb, K. PaletteViz: A Visualization Method for Functional Understanding of High-Dimensional Pareto-Optimal Data-Sets to Aid Multi-Criteria Decision Making. IEEE Comput. Intell. Mag. 2020, 15, 36–48. [Google Scholar] [CrossRef]
- Pan, Y.; Li, R.; Da, X.; Hu, H.; Zhang, M.; Zhai, D.; Cumanan, K.; Dobre, O.A. Cooperative Trajectory Planning and Resource Allocation for UAV-Enabled Integrated Sensing and Communication Systems. IEEE Trans. Veh. Technol. 2024, 73, 6502–6516. [Google Scholar] [CrossRef]
- Liu, S.; Lin, Z.; Wang, Y.; Huang, W.; Yan, B.; Li, Y. Three-body cooperative active defense guidance law with overload constraints: A small speed ratio perspective. Chin. J. Aeronaut. 2024, 38, 103171. [Google Scholar] [CrossRef]
- Zhang, W.; Zhang, B. Improvement of UAV Track Trajectory Algorithm Based on Ant Colony Algorithm. In Proceedings of the 2019 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), Changsha, China, 12–13 January 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 1, pp. 28–31. [Google Scholar] [CrossRef]
- Kan, E.M.; Lim, M.H.; Yeo, S.P.; Ho, J.S.; Shao, Z. Contour Based Path Planning with B-Spline Trajectory Generation for Unmanned Aerial Vehicles (UAVs) over Hostile Terrain. J. Intell. Learn. Syst. Appl. 2011, 3, 122–130. [Google Scholar] [CrossRef]
- Lei, L.; Shiru, Q. Path Planning For Unmanned Air Vehicles Using An Improved Artificial Bee Colony Algorithm. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 2486–2491. [Google Scholar]
- Zhou, S.; Wang, J.; Jin, Y. Route Planning for Unmanned Aircraft Based on Ant Colony Optimization and Voronoi Diagram. In Proceedings of the 2012 Second International Conference on Intelligent System Design and Engineering Application, Sanya, China, 6–7 January 2012; IEEE: Piscataway, NJ, USA, 2012; Volume 1, pp. 732–735. [Google Scholar] [CrossRef]
- Leng, S.; Sun, H. UAV Path Planning in 3D Complex Environments Using Genetic Algorithms. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 5, pp. 1324–1330. [Google Scholar] [CrossRef]
- Jongrae, K.; Hespanha, J. Discrete approximations to continuous shortest-path: Application to minimum-risk path planning for groups of UAVs. In Proceedings of the 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), Maui, HI, USA, 9–12 December 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 2, pp. 1734–1740. [Google Scholar] [CrossRef]
- Misovec, K.; Inanc, T.; Wohletz, J.; Murray, R. Low-observable nonlinear trajectory generation for unmanned air vehicles. In Proceedings of the 42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475), Maui, HI, USA, 9–12 December 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 3, pp. 3103–3110. [Google Scholar] [CrossRef]
- Yao-hong, Q.; Quan, P.; Jian-guon, Y. Flight path planning of UAV based on heuristically search and genetic algorithms. In Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society (IECON 2005), Raleigh, NC, USA, 6–10 November 2005; IEEE: Piscataway, NJ, USA, 2005; p. 5. [Google Scholar] [CrossRef]
- McLain, T.W.; Beard, R.W. Coordination Variables, Coordination Functions, and Cooperative Timing Missions. J. Guid. Control. Dyn. 2005, 28, 150–161. [Google Scholar] [CrossRef]
- Changwen, Z.; Lei, L.; Fanjiang, X.; Fuchun, S.; Mingyue, D. Evolutionary route planner for unmanned air vehicles. IEEE Trans. Robot. 2005, 21, 609–620. [Google Scholar] [CrossRef]
- Weiss, B.; Naderhirn, M.; del Re, L. Global real-time path planning for UAVs in uncertain environment. In Proceedings of the 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, Munich, Germany, 4–6 October 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 10, pp. 2725–2730. [Google Scholar] [CrossRef]
- Foo, J.L.; Knutzon, J.; Oliver, J.; Winer, E. Three-Dimensional Path Planning of Unmanned Aerial Vehicles Using Particle Swarm Optimization. In Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Portsmouth, VA, USA, 6–8 September 2006; Volume 9. [Google Scholar] [CrossRef]
- Zhang, R.; Zheng, C.; Yan, P. Route Planning for Unmanned Air Vehicles with Multiple Missions Using an Evolutionary Algorithm. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 23–28. [Google Scholar] [CrossRef]
- Xia, L.; Jun, X.; Manyi, C.; Ming, X.; Zhike, W. Path planning for UAV based on improved heuristic A* algorithm. In Proceedings of the 2009 9th International Conference on Electronic Measurement & Instruments, Beijing, China, 16–19 August 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 8, pp. 3–488. [Google Scholar] [CrossRef]
- Tulum, K.; Durak, U.; Yder, S.K. Situation aware UAV mission route planning. In Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 3, pp. 1–12. [Google Scholar] [CrossRef]
- Qianzhi, M.; Xiujuan, L. Application of artificial fish school algorithm in UCAV path planning. In Proceedings of the 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA), Changsha, China, 23–26 September 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 9, pp. 555–559. [Google Scholar] [CrossRef]
- Wang, G.; Li, Q.; Guo, L. Multiple UAVs Routes Planning Based on Particle Swarm Optimization Algorithm. In Proceedings of the 2010 2nd International Symposium on Information Engineering and Electronic Commerce, Ternopil, Ukraine, 23–25 July 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 7, pp. 1–5. [Google Scholar] [CrossRef]
- Chao, Z.; Zhen, Z.; Daobo, W.; Meng, L. UAV path planning method based on ant colony optimization. In Proceedings of the 2010 Chinese Control and Decision Conference, Xuzhou, China, 26–28 May 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 5, pp. 3790–3792. [Google Scholar] [CrossRef]
- Xin, Y.; Ming-yue, D.; Cheng-ping, Z. Fast Marine Route Planning for UAV Using Improved Sparse A* Algorithm. In Proceedings of the 2010 Fourth International Conference on Genetic and Evolutionary Computing, Shenzhen, China, 13–15 December 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 12, pp. 190–193. [Google Scholar] [CrossRef]
- Yong, B.; Xiaowei, F.; Xiaoguang, G. Path planning for reconnaissance UAV based on Particle Swarm Optimization. In Proceedings of the 2010 Second International Conference on Computational Intelligence and Natural Computing, Wuhan, China, 13–14 September 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 9, pp. 28–32. [Google Scholar] [CrossRef]
- Holub, J.; Foo, J.L.; Kalivarapu, V.; Winer, E. Three Dimensional Multi-Objective UAV Path Planning Using Digital Pheromone Particle Swarm Optimization. In Proceedings of the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference 20th AIAA/ASME/AHS Adaptive Structures Conference 14th AIAA, Honolulu, HI, USA, 23–26 April 2012; Volume 4. [Google Scholar] [CrossRef]
- Chen, Y.M.; Wu, W.Y. Cooperative Electronic Attack for Groups of Unmanned Air Vehicles based on Multi-agent Simulation and Evaluation. Int. J. Comput. Sci. Issues 2012, 9, 1. [Google Scholar]
- Li, B.; Gong, L.; Zhao, C. Unmanned combat aerial vehicles path planning using a novel probability density model based on Artificial Bee Colony algorithm. In Proceedings of the 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 9–11 June 2013; IEEE: Piscataway, NJ, USA, 2013; Volume 6, pp. 620–625. [Google Scholar] [CrossRef]
- Wallar, A.; Plaku, E.; Sofge, D.A. A planner for autonomous risk-sensitive coverage (PARCov) by a team of unmanned aerial vehicles. In Proceedings of the 2014 IEEE Symposium on Swarm Intelligence, Orlando, FL, USA, 9–12 December 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 12, pp. 1–7. [Google Scholar] [CrossRef]
- Qu, Y.; Zhang, Y.; Zhang, Y. Optimal flight path planning for UAVs in 3-D threat environment. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 5, pp. 149–155. [Google Scholar] [CrossRef]
- Duan, Y.; Ji, X.; Li, M.; Li, Y. Route planning method design for UAV under radar ECM scenario. In Proceedings of the 2014 12th International Conference on Signal Processing (ICSP), Hangzhou, China, 19–23 October 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 10, pp. 108–114. [Google Scholar] [CrossRef]
- Wang, Q.; Zhang, A.; Qi, L. Three-dimensional path planning for UAV based on improved PSO algorithm. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 5, pp. 3981–3985. [Google Scholar] [CrossRef]
- Xiaowei, F.; Xiaoguang, G. Effective Real-Time Unmanned Air Vehicle Path Planning in Presence of Threat Netting. J. Aerosp. Inf. Syst. 2014, 11, 170–177. [Google Scholar] [CrossRef]
- Wen, N.; Zhao, L.; Su, X.; Ma, P. UAV online path planning algorithm in a low altitude dangerous environment. IEEE/CAA J. Autom. Sin. 2015, 2, 173–185. [Google Scholar] [CrossRef]
- Zhang, D.; Xian, Y.; Li, J.; Lei, G.; Chang, Y. UAV Path Planning Based on Chaos Ant Colony Algorithm. In Proceedings of the 2015 International Conference on Computer Science and Mechanical Automation (CSMA), Hangzhou, China, 23–25 October 2015; IEEE: Piscataway, NJ, USA, 2015; Volume 10, pp. 81–85. [Google Scholar] [CrossRef]
- Humphreys, C.; Cobb, R.; Jacques, D.; Reeger, J. Optimal Mission Path for the Uninhabited Loyal Wingman. In Proceedings of the 16th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Dallas, TX, USA, 22–26 June 2015; Volume 6. [Google Scholar] [CrossRef]
- Ren, T.; Zhou, R.; Xia, J.; Dong, Z. Three-dimensional path planning of UAV based on an improved A* algorithm. In Proceedings of the 2016 IEEE Chinese Guidance, Navigation and Control Conference (CGNCC), Nanjing, China, 12–14 August 2016; IEEE: Piscataway, NJ, USA, 2016; Volume 8, pp. 140–145. [Google Scholar] [CrossRef]
- Wang, Q.; Zhang, J. MPC and TGFC for UAV real-time route planning. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; IEEE: Piscataway, NJ, USA, 2017; Volume 7, pp. 6847–6850. [Google Scholar] [CrossRef]
- Jing-Lin, H.; Xiu-Xia, S.; Ri, L.; Xiong-Feng, D.; Mao-Long, L. UAV real-time route planning based on multi-optimized RRT algorithm. In Proceedings of the 2017 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; IEEE: Piscataway, NJ, USA, 2017; Volume 5, pp. 837–842. [Google Scholar] [CrossRef]
- Zhang, C.; Liu, H.; Tang, Y. Quantitative Evaluation of Voronoi Graph Search Algorithm in UAV Path Planning. In Proceedings of the 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 23–25 November 2018; IEEE: Piscataway, NJ, USA, 2018; Volume 11, pp. 563–567. [Google Scholar] [CrossRef]
- Maoquan, L.; Yunfei, Z.; Shihao, L. The Gradational Route Planning for Aircraft Stealth Penetration Based on Genetic Algorithm and Sparse A-Star Algorithm. MATEC Web Conf. 2018, 151, 04001. [Google Scholar] [CrossRef]
- Danancier, K.; Ruvio, D.; Sung, I.; Nielsen, P. Comparison of Path Planning Algorithms for an Unmanned Aerial Vehicle Deployment Under Threats. IFAC-PapersOnLine 2019, 52, 1978–1983. [Google Scholar] [CrossRef]
- Chen, Y.; Wang, S. Flight Parameter Model Based Route Planning Method of UAV Using Stepped-Adaptive Improved Particle Swarm Optimization. In Proceedings of the 2019 5th International Conference on Control, Automation and Robotics (ICCAR), Beijing, China, 19–22 April 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 4, pp. 524–530. [Google Scholar] [CrossRef]
- Patley, A.; Bhatt, A.; Maity, A.; Das, K.; Ranjan Kumar, S. Modified Particle Swarm Optimization based Path Planning for Multi-Uav Formation. In Proceedings of the AIAA Scitech 2019 Forum, San Diego, CA, USA, 7–11 January 2019; Volume 1. [Google Scholar] [CrossRef]
- Ma, N.; Cao, Y.; Wang, X.; Wang, Z.; Sun, H. A Fast path re-planning method for UAV based on improved A* algorithm. In Proceedings of the 2020 3rd International Conference on Unmanned Systems (ICUS), Harbin, China, 27–28 November 2020; IEEE: Piscataway, NJ, USA, 2020; Volume 11, pp. 462–467. [Google Scholar] [CrossRef]
- Zhang, Z.; Wu, J.; Dai, J.; He, C. A Novel Real-Time Penetration Path Planning Algorithm for Stealth UAV in 3D Complex Dynamic Environment. IEEE Access 2020, 8, 122757–122771. [Google Scholar] [CrossRef]
- Wu, C.; Huang, X.; Luo, Y.; Leng, S. An Improved Fast Convergent Artificial Bee Colony Algorithm for Unmanned Aerial Vehicle Path Planning in Battlefield Environment. In Proceedings of the 2020 IEEE 16th International Conference on Control & Automation (ICCA), Sapporo, Japan, 9–11 October 2020; IEEE: Piscataway, NJ, USA, 2020; Volume 10, pp. 360–365. [Google Scholar] [CrossRef]
- Xiong, C.; Xin, B.; Guo, M.; Ding, Y.; Zhang, H. Multi-UAV 3D Path Planning in Simultaneous Attack. In Proceedings of the 2020 IEEE 16th International Conference on Control & Automation (ICCA), Sapporo, Japan, 9–11 October 2020; IEEE: Piscataway, NJ, USA, 2020; Volume 10, pp. 500–505. [Google Scholar] [CrossRef]
- Zhang, Z.; Wu, J.; Dai, J.; He, C. Rapid Penetration Path Planning Method for Stealth UAV in Complex Environment with BB Threats. Int. J. Aerosp. Eng. 2020, 2020, 8896357. [Google Scholar] [CrossRef]
- Zhou, X.; Gao, F.; Fang, X.; Lan, Z. Improved Bat Algorithm for UAV Path Planning in Three-Dimensional Space. IEEE Access 2021, 9, 20100–20116. [Google Scholar] [CrossRef]
- Zhang, J.; Li, N.; Zhang, D.; Wei, X.; Zhang, X. Multi-UAV cooperative Route planning based on decision variables and improved genetic algorithm. J. Phys. Conf. Ser. 2021, 1941, 012012. [Google Scholar] [CrossRef]
- Xu, H.; Jiang, S.; Zhang, A. Path Planning for Unmanned Aerial Vehicle Using a Mix-Strategy-Based Gravitational Search Algorithm. IEEE Access 2021, 9, 57033–57045. [Google Scholar] [CrossRef]
- Huan, L.; Ning, Z.; Qiang, L. UAV Path Planning Based on an Improved Ant Colony Algorithm. In Proceedings of the 2021 4th International Conference on Intelligent Autonomous Systems (ICoIAS), Wuhan, China, 14–16 May 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 5, pp. 357–360. [Google Scholar] [CrossRef]
- Woo, J.W.; Choi, Y.S.; An, J.Y.; Kim, C.J. An Approach to Air-to-Surface Mission Planner on 3D Environments for an Unmanned Combat Aerial Vehicle. Drones 2022, 6, 20. [Google Scholar] [CrossRef]
- Lu, L.; Dai, J.; Ying, J. Distributed multi-UAV cooperation for path planning by an NTVPSO-ADE algorithm. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25–27 July 2022; IEEE: Piscataway, NJ, USA, 2022; Volume 7, pp. 5973–5978. [Google Scholar] [CrossRef]
- Zhang, Z.; Wu, J.; Dai, J.; He, C. Optimal path planning with modified A-Star algorithm for stealth unmanned aerial vehicles in 3D network radar environment. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2022, 236, 72–81. [Google Scholar] [CrossRef]
- Zhang, Z.; Jiang, J.; Wu, J.; Zhu, X. Efficient and optimal penetration path planning for stealth unmanned aerial vehicle using minimal radar cross-section tactics and modified A-Star algorithm. ISA Trans. 2023, 134, 42–57. [Google Scholar] [CrossRef]
- Luo, J.; Liang, Q.; Li, H. UAV penetration mission path planning based on improved holonic particle swarm optimization. J. Syst. Eng. Electron. 2023, 34, 197–213. [Google Scholar] [CrossRef]
| References | Year | OA | RR | OM | C | PH |
|---|---|---|---|---|---|---|
| Zhao et al. [16] | 2018 | ✔ | ✗ | ✗ | ✗ | ✗ |
| Song et al. [14] | 2019 | ✔ | ∼ | ∼ | ∼ | ∼ |
| Allaire et al. [12] | 2019 | ✗ | ✔ | ✔ | ✔ | ✗ |
| Aggarwal & Kumar [10] | 2020 | ✔ | ✔ | ✗ | ✗ | ✗ |
| Saadi et al. [15] | 2022 | ✔ | ✗ | ∼ | ∼ | ✗ |
| Jones et al. [11] | 2023 | ✔ | ✔ | ✗ | ✗ | ✗ |
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Significant contribution to research (average yearly citations ≥ 1) | Low academic impact (average yearly citations < 1) |
| Threat as an optimization objective | Threats completely avoided or ignored |
| Handles multiple objectives | Sole focus on threat or distance minimization |
| Route planning in the flight domain | No route planning, or a different domain |
| Sufficient explanation of methodology | No methodological details |
| Proposes novel work | Minor change from existing work |
| Method | Class | Complexity | Number of Solutions | Optimality | Ease of Preference Specification | Preference Model |
|---|---|---|---|---|---|---|
| Nash Product | No-preference | 1 SOP | Single solution | Pareto optimal | Not applicable | Equal importance of objectives; favors balanced solutions |
| Weighted Sum/Reward Shaping | a-priori | 1 SOP | Single solution | (Weakly) Pareto optimal | Weights are hard to provide and can yield misleading results | Overly simplistic linear aggregation of preferences; favors extreme solutions |
| Priorities | a-priori | k SOPs | Single solution | Pareto optimal | Easy to provide an ordering, but questionable whether one exists | Imposes an absolute ordering on objectives, with very strong implications |
| Goals (reference points) | a-priori | 1 SOP | Single solution | Not Pareto optimal | Specifying goals is cognitively valid but requires information on objective-space ranges and trade-offs, which is usually unavailable a-priori | Distance to reference point |
| ϵ-constraint | a-posterior | n SOPs | Entire Pareto front | Pareto optimal | Hard to select the final solution from the Pareto front when it is high-dimensional | Not applicable |
| Interactive Reference Point | Interactive | m SOPs | ROI of Pareto front | Not Pareto optimal | Cognitively valid; the interaction allows the required information to be provided | Distance to reference point |
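For concreteness, the sketch below contrasts three of the scalarization schemes summarized in the table above on a toy set of three candidate paths, each reduced to a (length, threat-exposure) pair. It is a minimal illustration under assumed data: the path names, objective values, weights, and helper names (weighted_sum_pick, epsilon_constraint_picks, reference_point_pick) are invented for this example and are not drawn from any of the reviewed methods.

```python
# Toy contrast of three preference-handling schemes from the table above.
# All data and names here are illustrative assumptions, not a formulation
# taken from the reviewed papers.

# Each candidate path is reduced to its objective vector:
# (path_length_km, threat_exposure); both objectives are minimized.
candidates = {
    "direct":   (100.0, 0.90),  # shortest, but crosses radar coverage
    "skirting": (130.0, 0.30),  # moderate detour around most threats
    "evasive":  (180.0, 0.05),  # long detour, near-zero exposure
}

def weighted_sum_pick(paths, w_length, w_threat):
    """a-priori: collapse both objectives into one scalar cost (1 SOP)."""
    return min(paths, key=lambda p: w_length * paths[p][0] + w_threat * paths[p][1])

def epsilon_constraint_picks(paths, epsilons):
    """a-posterior: minimize length subject to threat <= eps; sweeping eps
    over n values (n SOPs) traces out the Pareto front."""
    front = {}
    for eps in epsilons:
        feasible = {p: v for p, v in paths.items() if v[1] <= eps}
        if feasible:
            front[eps] = min(feasible, key=lambda p: feasible[p][0])
    return front

def reference_point_pick(paths, ref, weights):
    """Reference-point (goal) method: minimize a weighted Chebyshev
    achievement function; without an augmentation term this may return
    only weakly Pareto-optimal points, matching the table's caveat."""
    def achievement(p):
        return max(w * (fi - ri) for w, fi, ri in zip(weights, paths[p], ref))
    return min(paths, key=achievement)

if __name__ == "__main__":
    print(weighted_sum_pick(candidates, w_length=0.01, w_threat=1.0))  # skirting
    print(epsilon_constraint_picks(candidates, (0.1, 0.5, 1.0)))       # all three paths
    # The DM aspires to roughly a 120 km path with exposure 0.2.
    print(reference_point_pick(candidates, ref=(120.0, 0.2), weights=(0.01, 1.0)))
```

Sweeping ϵ recovers all three Pareto-optimal paths (the "Entire Pareto front" row), while the weighted-sum and reference-point variants each solve a single SOP and return one compromise, mirroring the "Number of Solutions" column above.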