1. Introduction
The versatility of Unmanned Aerial Vehicles (UAVs), i.e., drones, is such that they are being extensively integrated into the technology of Intelligent Transportation Systems (ITSs) [1]. Both the efficiency and the sustainability of ITS technology can be enhanced by the integration of drones, making UAVs a transformative technology in this sector. The applications of UAVs in the ITS context are being developed by researchers and companies all over the world, with a particular focus on urban applications [2]. The main applications include, but are not limited to, traffic and environmental monitoring, infrastructure inspection, aerial mapping, surveillance, emergency management, data collection, and logistics [3].
Considering the drone delivery application in an urban environment, which is the main focus of this work, UAVs are seen as a strategic technological advancement for addressing the challenges of urban transportation, being able to reduce the environmental impact of the transportation network as well as costs and delivery times [4]. Whether UAVs operate in cooperation with traditional transportation systems, e.g., trucks, or constitute a transportation system on their own, their capability of enhancing the sustainability and efficiency of the delivery process is unquestionable. This is mainly due to the capability of drones to (i) navigate the crowded urban environment with ease, independently of both the complexity of the urban landscape and the road traffic conditions; (ii) reduce both urban traffic congestion and delivery costs, especially in the case of small-package delivery, with superior efficiency with respect to trucks; (iii) enable delivery services in areas difficult to reach by conventional ground vehicles or areas hit by natural disasters; (iv) enable on-demand dynamic delivery services with reduced waiting times for customers; (v) collect real-time data to be included in smart city platforms, thus enabling the continuous optimization of the delivery service itself as well as improving decision-making capabilities in a smart city context; (vi) rapidly perform medical deliveries; and (vii) offer a variety of simultaneous services throughout their operation, such as data analysis and the prediction of patterns [5,6].
On the other hand, drone technology still faces technical, environmental, regulatory, and societal barriers to its large-scale deployment in urban environments [7]. Limited flight range due to battery capacity, energy consumption, integration into the current airspace, licensing of pilots, noise pollution, safety and security risks, limited ability to withstand adverse weather conditions, privacy concerns, infrastructure requirements, and high initial costs are some of the main challenges that have not been completely addressed yet [8]. Therefore, the sustainability of UAV-based ITSs will require a non-trivial joint effort from regulators, researchers, logistics and drone companies, and urban planners to overcome the current barriers that limit the real-world applicability of unmanned aerial networks in smart cities.
When dealing with a multi-robot system, such as an ITS based on UAVs [9,10,11,12], the conceptualization of a task allocation architecture is needed to enable an optimal distribution of tasks among the robots. The Multi-Robot Task Allocation (MRTA) problem is a variant of the well-known NP-hard Traveling Salesman Problem (TSP), and several sub-optimal approaches have been proposed in the literature [13,14,15,16]. The main task allocation algorithms for UAV networks proposed in the literature are deterministic algorithms [17], stochastic algorithms [18], auction-based algorithms [19], bio-inspired algorithms [20], and edge-computing algorithms [21]. Depending on the communication architecture implemented in the multi-agent system, MRTA algorithms can be classified as centralized, decentralized, or distributed, the latter referring to a hybrid architecture. In centralized task allocation architectures, a central coordinator allocates the tasks to the robots, considering the information shared by the other robots and maximizing the global utility of the multi-robot system. On the other hand, in a decentralized architecture, the tasks are allocated to the robots according to the partial knowledge of the environment that each robot has. Market-based allocation strategies are widely used to assign tasks (items) in multi-robot systems since they represent an effective and intuitive way of accounting for different robots’ costs with respect to the tasks to be allocated [22].
Auction-based algorithms feature, in general, polynomial computational complexity and are suitable for every type of communication architecture. A classical auction-based task allocation algorithm works according to a communication protocol between a central allocating robot (auctioneer) and the other robots of the system (bidders). The protocol is repeated for every round of allocation. The total number of rounds depends on the number of tasks advertised and allocated per round, and the auction algorithm is defined as either multiple-item or single-item accordingly. A standard auction-based allocation protocol consists of four Phases (P), as follows (a minimal code sketch of a single allocation round is provided after Figure 1):
Advertisement (P1): the auctioneer broadcasts the task to the robots of the network.
Bidding (P2): each robot undergoes a task valuation process and sends its bid to the auctioneer.
Award (P3): the auctioneer selects the most suitable robot for the task based on all the received bids for that task.
Acknowledgment (P4): each robot acknowledges the assignment to the auctioneer.
A schematic representation of the process is presented in Figure 1.
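As a concrete illustration of the four phases above, the following minimal Python sketch simulates one single-item allocation round. The bid metric (the negated predicted cost), the data structure, and the function names are illustrative assumptions and do not reproduce the architecture proposed later in this work.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    energy_cost: dict  # hypothetical per-robot cost estimate: robot_id -> predicted cost

def run_auction_round(bidder_ids, task):
    # P1 - Advertisement: the auctioneer broadcasts the task to all bidders.
    advertised_to = list(bidder_ids)
    # P2 - Bidding: each robot values the task and replies with a bid
    # (here the negated cost, so a higher bid means a better-suited robot).
    bids = {robot: -task.energy_cost[robot] for robot in advertised_to}
    # P3 - Award: the auctioneer selects the robot with the best bid.
    winner = max(bids, key=bids.get)
    # P4 - Acknowledgment: the winner confirms the assignment to the auctioneer.
    ack = {"robot": winner, "task": task.task_id, "status": "accepted"}
    return winner, ack

# Example: robot 2 has the lowest predicted cost and is therefore awarded task 7.
task = Task(task_id=7, energy_cost={1: 120.0, 2: 95.0, 3: 140.0})
print(run_auction_round(bidder_ids=[1, 2, 3], task=task))
```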
The main advantages of auction-based optimization are its low computational complexity and its ease of hybridization with other optimization approaches. The work in [23] proposed a dynamic task allocation framework based on the decomposition of the task assignment problem into two sub-problems, i.e., group-level task assignment and member-level task assignment. The sub-problems were then solved by using Particle Swarm Optimization (PSO) together with distributed auctions. A distributed sequential auction allocation strategy was proposed in [24] to tackle the multi-UAV target task allocation problem with limited communication range. Detected targets are advertised sequentially and, depending on the decision of neighboring UAVs, each UAV decides whether to auction or forfeit the target. A simulation study conducted with different communication ranges proved the superiority of the distributed sequential auction strategy with respect to a simpler greedy allocation. Auction algorithms are also exploited for assigning tasks in dynamic and changeable environments, such as the battlefield environment. The problem of optimal task rescheduling in such a scenario was addressed in [25], and a distributed task scheduling algorithm was proposed. By means of a result update strategy, the original schedule of tasks undergoes a partial reset, and re-planning aiming at payoff maximization is performed by means of the proposed online auction mechanism. Auction algorithms are also popular because they can abstract the characteristics and the limitations of the multi-robot system with respect to the tasks. As shown in [26], a distributed auction-based assignment strategy for search and destroy missions carried out by Miniature Aerial Vehicles (MAVs) was implemented and tested for different target distributions and sensor ranges. Combinatorial auction-based strategies [27,28] also represent an efficient and scalable sub-optimal solution to the TSP and its variants, such as last-mile delivery with drones and, in general, Multi-Unmanned Aerial Vehicle Task Allocation (MUAVTA). Applications for the optimal scheduling of air taxi operations in an urban air mobility context can be found in [29].
This paper is a follow-up to a previous work on a centralized market-based task allocation architecture for a drone-based parcel pick-up and delivery system [30]. Such work consisted of (i) the formalization of a persistent drone delivery problem that takes into account the safety of UAV flyable paths, UAV battery discharge, task due dates, and UAV energy consumption for task execution; (ii) the design and implementation of a dynamic-auctioneer auction-based task scheduling architecture distributed among the UAVs of the network; (iii) the integration of a risk-aware path planning strategy with an energy-consumption-aware velocity optimizer of task execution within the task scheduling framework; and (iv) the evaluation of the proposed approach in a real-world aerial delivery scenario in the city of Turin, Italy. Moreover, the impact of fallible communication among the drones on the overall assignment of tasks was simulated, highlighting an increasing number of unassigned tasks in the final solution for increasing probabilities of communication loss in the network.
In this work, the market-based task assignment architecture proposed in [30] is extended to enable both resource sharing for an even distribution of tasks in the fleet and the allocation of dynamic tasks during each round of the main allocation process. Starting from the same task allocation architecture proposed in [30], a hybrid variable-auctioneer auction-based allocation of tasks is nested into the main sequential single-item algorithm. In this way, besides the allocation of “regular” delivery tasks and re-charge tasks, tasks with higher priority can be allocated dynamically to the UAVs while the main multi-auctioneer algorithm is being executed. Moreover, the message exchange in the novel distributed architecture is more redundant than in the previous work. As such, increased robustness of the aerial network with respect to communication failures is obtained. The literature gaps covered in this work are highlighted in the following:
Formalization of an original drone delivery problem (DDP) with charging hubs, including risk-aware route planning, dynamic task allocation, and balancing of the number of tasks assigned to each UAV.
Development of a comprehensive allocation framework for tackling the formulated problem, optimizing energy efficiency, risk of flyable paths, and flight time.
Inclusion of a redundant allocation strategy within the formulated architecture to address lossy communication scenarios.
We advance the state of the art in auction-based task allocation for UAV networks by not only proposing a combined auction-based task scheduling and path planning heuristic solution to a drone pick-up and delivery problem with charging hubs, but also considering the dynamic assignment of high-priority tasks within the allocation protocol and the problem of resource sharing in the multi-robot system.
This paper is organized as follows. In Section 2, the addressed problem is defined, as well as the assumptions. Section 3 presents the comprehensive auction-based allocation architecture, including the allocation algorithm, the energy consumption model, the optimization algorithm, and the path planning approach. Simulation results are reported in Section 4. Conclusions are drawn in Section 5.
2. Problem Statement
The problem addressed in this work can be defined as a combined task allocation and path planning problem to enable a persistent UAV-based intelligent transportation system in urban environments. The formalization of the aerial delivery scenario is as follows:
The multi-robot system consists of multi-rotor UAVs operating in a well-defined populated urban operational area with charging stations.
The task set consists of payload transportation tasks within the operational area. Each task is defined as the transportation of a payload mass from a pick-up location to a delivery location within a delivery time window.
The formulated problem is to define an optimization-based combined task scheduling and path planning architecture in order to allocate the delivery tasks to the fleet of UAVs, with the following objectives:
Minimize the total energy consumption required by the UAVs to execute all tasks.
Minimize the discrepancy in the number of allocated tasks to each drone, i.e., maximize the workload homogeneity in the fleet of UAVs.
Enable the allocation of dynamic tasks with higher priority with respect to the “regular” set of transportation tasks. An example is represented by the unexpected aerial delivery of medical samples to a hospital located in the operational area.
Maximize the safety of the community and the feasibility of deployment of the aerial system in the real world by predicting risk-aware flyable paths to be executed by the UAVs [31].
To improve the clarity of the problem statement, the notations used in this section are provided in a dedicated “Nomenclature” section at the end of the manuscript, which includes a complete list of symbols adopted in this work.
Figure 2 represents a simplified example of the problem with three drones, three delivery tasks, and one charge hub in the city of Turin, Italy.
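For clarity, the following lightweight data structures summarize the quantities that define a task and a UAV in the scenario above; the field names and types are hypothetical and introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class DeliveryTask:
    pickup: tuple          # (latitude, longitude) of the pick-up location
    delivery: tuple        # (latitude, longitude) of the delivery location
    payload_mass: float    # payload mass, kg
    window_start: float    # start of the delivery time window, h
    window_end: float      # end of the delivery time window (due date), h
    dynamic: bool = False  # True for high-priority dynamic tasks

@dataclass
class UAV:
    uav_id: int
    mass: float            # UAV mass, kg (payload capacity is assumed equal to this mass)
    battery_energy: float  # residual energy stored in the battery, Wh
    position: tuple        # current (latitude, longitude)
```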
To address the problem formulated above, the following simplifying assumptions were made:
The payload capacity of a UAV equals its mass.
A UAV can carry one payload at a time.
A UAV cannot visit a charge station in loaded conditions.
At the beginning of the delivery process, the UAVs are fully charged.
The UAV re-charge time is the sum of the battery re-charge time (linearly interpolated from the residual battery level of the UAV) and the time needed by the UAV to reach the charge hub.
A delivery task consists of six consecutive phases: unloaded UAV take-off, unloaded UAV cruise, unloaded UAV landing (the parcel pick-up phase takes place in unloaded conditions), loaded UAV take-off, loaded UAV cruise, and loaded UAV landing (the parcel delivery phase takes place in loaded conditions).
A re-charge task consists of three consecutive phases: UAV take-off, UAV cruise, and UAV landing. In this case, no payload is carried by the UAV.
UAVs maintain constant velocity throughout each phase of task execution, i.e., UAVs ascend at constant speed, cruise at constant speed, etc.
UAV tilt angles are small throughout the flight.
Turbulent atmospheric phenomena are neglected.
A residual battery level was imposed on each UAV in order to cover the worst-case scenario, i.e., both the UAV and the charge hub were located at the extremities of the operational area.
The required residual battery level is predicted by the optimization problem in Equations (1)–(4), in which the decision variable is the UAV velocity vector for a re-charge task. Equation (1) minimizes the UAV energy to reach a charge station while considering the constraints on (i) the maximum allowable flight time to the charge station itself in the worst-case scenario (Equation (2)), (ii) the maximum energy stored in the UAV battery (Equation (3)), and (iii) the maximum and minimum UAV velocities during the three phases of a re-charge task (take-off, cruise, and landing), as per Equation (4).
The UAV optimal velocity of execution of a “regular” delivery task is predicted by the optimization problem in Equation (5) and its associated constraints, in which the decision variable is the UAV velocity vector for a delivery task; the optimal value of the objective corresponds to the energy consumption predicted for the completion of the task. In Equation (5), the UAV optimal velocity of execution of a delivery task is computed by minimizing the energy required to execute that task, subject to the constraints that (i) the task is delivered within the due date window, as per Equations (5) and (6), (ii) the UAV does not run out of energy, as per Equation (8), and (iii) the maximum and minimum UAV velocities for the six phases of a delivery task are respected.
The optimal velocity of execution of a re-charge task is predicted by the optimization problem in Equation (10) and its associated constraints. Equation (10) computes the optimal velocity of execution of a re-charge task by minimizing the flight time to the charge station, subject to the constraints that (i) the UAV does not run out of battery before reaching the charge hub, as per Equation (11), and (ii) the maximum and minimum UAV velocities for the three phases of a re-charge task are respected.
The UAV optimal velocity of execution of a dynamic delivery task with high priority is predicted by the optimization problem in Equations (13)–(16), in which the decision variable is the UAV velocity vector for a dynamic delivery task; the energy consumption related to the completion of the task is evaluated at the resulting optimal velocity. Equation (13) computes the optimal UAV velocity for a dynamic delivery task by minimizing the flight time, subject to the constraints that (i) the task is delivered within the due date window, as per Equations (14) and (15), (ii) the UAV does not run out of energy before completing the task, as per Equation (16), and (iii) the maximum and minimum UAV velocities for the six phases of a delivery task are respected. It is worth noticing that, after the completion of a dynamic task, the UAV may have no residual energy left in the battery. This is due to the priority of the dynamic task, which overrides the minimum residual battery level constraint introduced to formulate a persistent drone delivery system. For the sake of clarity, landing velocities are considered negative.
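Since the displayed equations are defined in the original formulation and are not reproduced here, the following Python sketch only mirrors the common structure of the velocity-optimization problems above: the decision variable is the vector of per-phase speeds, the objective is the task energy (or the flight time), and the constraints enforce the due date, the battery energy, and the per-phase speed bounds. The energy and time models below are simplified stand-ins (constant power, per-phase speed magnitudes), not the model of Section 3.3.

```python
def flight_time(v, dists):
    # Total flight time as the sum of (phase length / phase speed); speeds are magnitudes here.
    return sum(d / vi for d, vi in zip(dists, v))

def energy(v, dists, power=350.0):
    # Toy energy model: constant electrical power (W) drawn during every phase.
    return power * flight_time(v, dists)

def delivery_problem(v, dists, due_date, battery_energy):
    """v: per-phase speeds for the six phases of a delivery task (m/s)."""
    objective = energy(v, dists)                          # energy to be minimized
    g = [
        flight_time(v, dists) - due_date,                 # finish before the due date (s)
        energy(v, dists) - battery_energy,                # do not run out of energy (J)
    ] + [1.0 - vi for vi in v] + [vi - 20.0 for vi in v]  # 1 <= v_i <= 20 m/s
    return objective, g                                   # feasible iff every g_i <= 0

# Example: six phases (take-off, cruise, landing; first unloaded, then loaded).
obj, g = delivery_problem([3, 12, 2, 3, 10, 2],
                          dists=[60, 900, 60, 60, 900, 60],
                          due_date=300.0, battery_energy=100_000.0)
print(round(obj, 1), all(gi <= 0 for gi in g))
```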
3. Task Allocation Architecture
The proposed market-based task optimization architecture is conceptualized as a distributed combined task allocation and path planning architecture. The market-based architecture consists of a central allocator (the coordinator of the fleet of UAVs) with auctioneer behavior that allocates the delivery tasks to the UAVs of the fleet. Each UAV runs, in a decentralized way, both a risk-aware path planner and an optimization algorithm to (i) compute both the optimal path and the optimal velocity of execution for the task, and (ii) bid for the advertised task. The architecture is implemented in MATLAB/ROS. The communication between the UAVs throughout the allocation process is simulated by means of ROS nodes and a publisher/subscriber logic. Besides the main allocation process of the delivery task set, each UAV features an auctioneer behavior when the allocation of a re-charge task is needed to preserve the persistency of the aerial UAV-based ITS. A sequential single-item auction-based allocation strategy is implemented for the allocation of the delivery tasks, while a multiple-item auction-based strategy is implemented for the allocation of the re-charge tasks. This is because sequential single-item allocation is better suited to tasks with different priorities and different due dates (as in the commercial parcel delivery use case of this work). On the other hand, since multiple re-charge tasks (corresponding to the multiple charge hubs in the operational area) have to be evaluated by the UAV to be charged before the allocation is performed, a multiple-item allocation strategy is preferred for this type of task. The fleet coordinator retains the delivery task set, while dynamic tasks, such as the delivery of medical samples with high priority, can be broadcast to all the UAVs of the aerial ITS at any time. During the bidding phase for a “regular” delivery task, another single-item allocation is enabled such that the bidding UAV temporarily becomes the auctioneer of the network in order to (i) allocate, if present, a dynamic task with high priority within the main allocation process, and (ii) bid for the “regular” delivery task advertised by the fleet coordinator in that round while being aware of the current workload distribution in the fleet. The communication among the UAVs of the network is assumed to be potentially fallible, apart from the messages exchanged between auctioneers, which are never lost.
A schematic representation of the distributed multi-auctioneer behavior of the UAV network is provided in Figure 3. In particular, Figure 3 represents a snapshot of a possible set of roles of the UAVs of the network within a task allocation round: one UAV acts as the fleet coordinator (auctioneer of delivery tasks), another UAV acts as the coordinated robot (bidder for delivery tasks), each UAV besides the fleet coordinator acts as the auctioneer of a re-charge task, and one UAV acts as the auctioneer of a dynamic high-priority task arising during its bidding phase for a “regular” delivery task.
The allocation algorithm is presented in detail in Section 3.1, while the path planning algorithm, the energy consumption model, and the optimizer are presented in Section 3.2, Section 3.3, and Section 3.4, respectively.
3.1. Market-Based Task Scheduling
The proposed market-based task optimization algorithm is reported in Algorithm 1.
Algorithm 1: Multi-Auctioneer Task Allocation Algorithm
The computational complexity of Algorithm 1, which also invokes Algorithms 2 and 3, is polynomial. Therefore, the approach is highly scalable with respect to the number of drones, tasks, and re-charge hubs.
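Since the pseudocode of Algorithm 1 is reported only by its caption above, the following self-contained Python outline illustrates how the nested rounds described in this section fit together: the main sequential single-item loop over delivery tasks (sorted by due date), the dynamic-task auction run by the currently bidding UAV, and the re-charge auction triggered by a low battery level. The bid function, the energy threshold values, and the data layout are placeholders, not the actual Algorithm 1.

```python
import random

MIN_ENERGY, THRESHOLD = 50.0, 20.0  # illustrative energy levels (Wh), not the paper's values

def bid_value(uav, task, schedule):
    # Placeholder bid: a rough energy estimate plus a workload-balancing term
    # (the "resource sharing" contribution); the real bid comes from the optimizer of Section 3.4.
    return 10.0 * task["payload"] + random.uniform(0.0, 5.0) + 5.0 * len(schedule[uav["id"]])

def allocate(delivery_tasks, dynamic_queue, uavs, hubs):
    schedule = {u["id"]: [] for u in uavs}
    for task in sorted(delivery_tasks, key=lambda t: t["due"]):      # ascending due dates
        bids = {}
        for uav in uavs:
            if dynamic_queue:                                        # nested dynamic-task auction
                dyn = dynamic_queue.pop(0)
                winner = min(uavs, key=lambda u: bid_value(u, dyn, schedule))
                schedule[winner["id"]].append(("dynamic", dyn["due"]))
            if uav["energy"] < MIN_ENERGY + THRESHOLD:               # re-charge auction trigger
                hub = min(hubs, key=lambda h: abs(h - uav["pos"]))
                schedule[uav["id"]].append(("recharge", hub))
                uav["energy"] = 100.0
            bids[uav["id"]] = bid_value(uav, task, schedule)
        winner_id = min(bids, key=bids.get)                          # award to the cheapest bidder
        schedule[winner_id].append(("delivery", task["due"]))
    return schedule

uavs = [{"id": i, "energy": random.uniform(40.0, 100.0), "pos": float(i)} for i in range(3)]
tasks = [{"due": d, "payload": 1.0} for d in (2.0, 1.0, 3.0)]
print(allocate(tasks, [{"due": 0.5, "payload": 0.5}], uavs, hubs=[0.0, 2.0]))
```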
Algorithm 2 presents the auction-based algorithm for assigning dynamic high-priority delivery tasks to the UAVs of the network. During its bidding phase within the regular delivery task allocation process, each UAV acts as the auctioneer for any dynamic task that is being broadcast.
Algorithm 2: Dynamic High-Priority Task Allocation Algorithm
Algorithm 3 presents the auction-based algorithm for assigning re-charge tasks to the UAVs of the network whose energy stored in the battery is smaller than the required minimum energy level plus a threshold factor.
Algorithm 3: Re-Charge Task Allocation Algorithm
3.2. Risk-Aware Path Planning
The flyable UAV paths connecting the UAV locations to the task locations are generated by means of a sampling-based path planning algorithm that samples a risk map of the operational area. The approach is taken from previous works, since the aerial risk-aware path planning problem in urban environments has already been tackled. The problem of minimum-risk aerial path computation is decomposed into two steps. The first step consists of generating a 2D location-based risk map, which accounts for both the population density in the operational area and the UAV parameters (UAV mass, payload mass, UAV dimensions, UAV maximum speed, etc.). In the second step, a sampling-based algorithm based on the well-known RRT* algorithm computes the minimum-risk path in the map, minimizing both the overall risk associated with the computed path and the UAV flight time for that path. The risk map associates a risk value, expressed in expected fatalities per hour, with each location. Since the planning algorithm returns, in general, sub-optimal risk-optimized paths, the average risk of each computed path is compared to an equivalent level of safety (ELOS), as defined in [31]. After verifying the risk compatibility with respect to the ELOS, the computed path length is used in the energy computation function as a constant parameter. Further details about the risk map generation and the risk-aware path planning algorithm can be found in [32,33]. The risk-aware path planner is implemented in C++ using the ROS (Robot Operating System) framework.
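A minimal sketch of the ELOS compatibility check described above is given below; the path representation and the ELOS threshold value are placeholders (the actual threshold is the one defined in [31]).

```python
ELOS = 1e-6  # expected fatalities per flight hour (placeholder value, not the threshold of [31])

def path_is_acceptable(path):
    """path: list of (risk, segment_length_m) tuples sampled from the risk map."""
    avg_risk = sum(risk for risk, _ in path) / len(path)
    return avg_risk <= ELOS

def path_length(path):
    # Once the risk check passes, the total length feeds the energy model as a constant.
    return sum(length for _, length in path)

path = [(2e-7, 40.0), (8e-7, 55.0), (4e-7, 60.0)]
print(path_is_acceptable(path), path_length(path))
```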
The proposed risk-aware path planning approach implements a high-fidelity risk assessment [32] that considers the UAS parameters. Thus, the risk map changes according to the drone mass and the payload mass. Every time a UAV bids for a delivery task, two different risk maps are generated. The planning algorithm samples the first risk map to compute the risk-optimized path corresponding to the first step of the delivery process (the UAV moves from its start location to the pick-up location with no payload). Secondly, for the computation of the path of the loaded leg, the planning algorithm samples another risk map that accounts for the updated total mass of the UAV (drone mass plus payload mass) in the risk computation. For this reason, it is necessary to evaluate a delivery task also considering the payload transported. The effect of the payload on the risk map and on the resulting path can be observed in Figure 4 and Figure 5.
Figure 4 shows the risk map with a minimum-risk path considering a UAV of type C without any additional payload. A similar scenario is reported in Figure 5, considering the same operational area and the same start and goal positions, but with a payload of 2.5 kg. Comparing Figure 4 and Figure 5, we observed that the payload caused a higher risk in the risk map and, consequently, the resulting path changed. The involved risk is also shown in Figure 6, which reports the evolution of the risk along the paths with and without payload. Moreover, Figure 6 demonstrates the usefulness of the risk-aware path planning by comparing the minimum-risk path and the Line Of Sight (LOS) path: the LOS path is shorter but entails a higher risk.
The proposed approach is easily scalable to large urban areas. The risk map is generated by computing the risk for each cell (representing a location) in the map. Consequently, the computational complexity increases proportionally to the number of cells in the map and, therefore, with the dimension of the map. However, the resolution of the risk map can be adapted to manage large urban areas without excessively increasing the computational complexity.
The increase in the size of the risk map consequently also implies an increase in the computational complexity of the risk-aware path planning algorithm. The RRT*-based algorithm requires more nodes to both adequately explore the search space (i.e., the risk map) and obtain a near-optimal path. Therefore, as discussed in [34], the computational complexity of RRT* can be approximated as $O(n \log n)$, with $n$ being the number of nodes sampled within the risk map.
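As a back-of-the-envelope illustration of the resolution/complexity trade-off discussed above, the snippet below counts the risk-map cells obtained for a fixed operational area at different cell sizes; the area and cell sizes are arbitrary example values.

```python
# Cell count grows quadratically as the cell size shrinks, for a fixed area.
def n_cells(area_km2, cell_size_m):
    cells_per_km2 = (1000.0 / cell_size_m) ** 2
    return int(area_km2 * cells_per_km2)

for cell in (5.0, 10.0, 20.0):  # metres
    print(f"{cell:>4} m cells -> {n_cells(25.0, cell):,} cells for a 25 km^2 area")
```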
3.3. Energy Consumption Model
The energy consumption model is derived from the power consumption model proposed and validated in [35]. By merging such a power consumption model with the assumptions of Section 2, the resulting UAV energy consumption model is obtained as the sum of the induced, profile, and parasite energy contributions over the six phases of a delivery task and, analogously, over the three phases of a re-charge task. As far as dynamic tasks are concerned, the delivery-task model applies unchanged, since the conceptualization of the delivery phases does not change. In particular, the induced energy component accounts for the power needed to propel air downward, the profile energy component accounts for the power needed to resist the rotational drag due to the rotation of the propellers, and the parasite energy component accounts for the power needed to resist drag during the relative translational motion of the UAV with respect to the wind; the model is written as a function of the general path length and of the general drone velocity of each phase. The proposed energy consumption model overestimates the energy consumed by a UAV during task execution, since the induced component is modeled as independent of the velocity in forward flight. Furthermore, such an energy consumption model is valid independently of the type of multi-rotor UAV configuration (quadrotor, hexarotor, etc.).
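The following sketch shows one way the induced, profile, and parasite contributions could be combined per flight phase. The functional forms and the coefficients are illustrative stand-ins (with a velocity-independent, hover-like induced term, consistent with the overestimation noted above) and do not reproduce the validated model of [35].

```python
import math

def phase_energy(d, v, mass, rho=1.225, rotor_area=0.5, profile_power=30.0, cda=0.04):
    """Energy (J) for one flight phase of length d (m) flown at speed v (m/s)."""
    g = 9.81
    t = d / v                                                            # phase duration, s
    p_induced = (mass * g) ** 1.5 / math.sqrt(2.0 * rho * rotor_area)    # hover-like induced power, W
    p_profile = profile_power                                            # propeller drag power, assumed constant
    p_parasite = 0.5 * rho * cda * v ** 3                                # airframe drag power, W
    return (p_induced + p_profile + p_parasite) * t

# Example: 900 m cruise at 12 m/s with a 3.5 kg UAV-plus-payload mass.
print(round(phase_energy(900.0, 12.0, 3.5), 1), "J")
```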
3.4. Numerical Optimization
According to the problem formulation in Section 2, an optimization problem has to be solved by each bidding UAV in order to compute the optimal velocity of task execution while respecting the constraints of the delivery system. The nonlinear constrained optimization problems defining the optimal energy-aware execution of tasks in Section 2 can be written in the standard form $\min_{\mathbf{x}} f(\mathbf{x})$, subject to the constraints $g_i(\mathbf{x}) \le 0$, $i = 1, \dots, m$, with $\mathbf{x} \in \mathbb{R}^{n}$. All the optimization problems stated in Section 2 can be brought into this standard form, with $m$ equal to the number of constraints of the problem stated for the task for which the UAV is bidding during the allocation process, $n$ equal to the number of phases of the task, $\mathbf{x}$ equal to the velocity vector of either a delivery task or a re-charge task, and $f(\mathbf{x})$ being the objective function of the task. Both the UAVs’ parameters and the tasks’ parameters appear as constants in the optimization problems, except for the task execution velocity, which has to be optimized. Algorithm 4 shows the implementation of the constrained optimization algorithm based on a numerical Penalty Function (PF) Sequential Unconstrained Optimization (SUO) method.
Algorithm 4: PF-based SUO Algorithm
SUO algorithms iteratively perform unconstrained optimization on an enhanced objective function that includes the constraints of the original constrained optimization problem [36]. In this case, the enhanced objective function is defined as the sum of the normalized objective function of the constrained problem and a constraint-dependent penalty term, as in [37]. At each step of unconstrained optimization on the enhanced objective function, the penalty parameters are updated depending on constraint-satisfaction-based criteria. A finite convergence to a sub-optimal solution can be achieved for finite penalty parameters, which should not be increased excessively in order to avoid numerical problems. Both the variation in the penalty parameters at each iteration and the convergence speed of the algorithm depend on a set of user-defined tuning parameters. The termination condition of the proposed PF-based SUO algorithm depends on both the constraint satisfaction and the tolerance parameters. The average execution time of Algorithm 4 is 0.4 s. The unconstrained optimization performed on the PF exploits the fminsearch(·) MATLAB function, which implements the Nelder–Mead simplex algorithm, as defined in [38]. Considering that the enhanced objective function is at least three-dimensional, there is no theoretical guarantee that Algorithm 4 converges to the global optimum of the constrained problem.
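A minimal penalty-function SUO sketch in the spirit of Algorithm 4 is reported below, using SciPy's Nelder–Mead routine in place of MATLAB's fminsearch(·). The toy objective, the constraints, the quadratic penalty, and the update factors are assumptions introduced only to show the outer loop of sequential unconstrained minimizations.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                      # toy objective (stands in for the task energy or flight time)
    return x[0] ** 2 + x[1] ** 2 + x[2] ** 2

def g(x):                      # toy inequality constraints, feasible iff g_i(x) <= 0
    return np.array([1.0 - (x[0] + x[1] + x[2]),   # x0 + x1 + x2 >= 1
                     x[2] - 2.0])                   # x2 <= 2

def sequential_penalty_minimize(x0, rho=1.0, growth=10.0, tol=1e-6, max_outer=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        def penalized(y):
            violation = np.maximum(g(y), 0.0)       # only violated constraints are penalized
            return f(y) + rho * np.sum(violation ** 2)
        res = minimize(penalized, x, method="Nelder-Mead",
                       options={"xatol": 1e-8, "fatol": 1e-8})
        x = res.x
        if np.all(g(x) <= tol):                     # stop once the constraints are satisfied
            break
        rho *= growth                               # otherwise increase the penalty weight
    return x

print(sequential_penalty_minimize([0.0, 0.0, 0.0]))  # converges near [1/3, 1/3, 1/3]
```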
Figure 7 shows the evolution of both the enhanced objective function and the constraint-dependent penalty term throughout the bidding process of a UAV of type A for delivery task 2 in the simplified scenario of Figure 2.
4. Simulation Results
In order to demonstrate the capability of the proposed market-based task allocation strategy to efficiently handle the planning and allocation of delivery tasks, re-charge tasks, and dynamic tasks with different priorities and constraints, a whole set of simulation campaigns was designed for an operational area defined as a squared populated portion of the city of Turin, Italy. The area where the aerial delivery fleet operates is bounded in latitude and longitude, and the height of flight is limited to the range [50, 120] m. The aerial delivery fleet is composed of different UAV types, whose characteristics are reported in Table 1. The parameters reported in Table 1 were taken from publicly available datasets and, when not available, were linearly interpolated to fill in the missing values.
The “regular” delivery task set consists of 30 tasks, with 6 tasks for each of the payload classes considered in the simulation campaigns (0.5, 0.8, 1.0, 1.5, and 2.0 kg). The “dynamic” delivery task set consists of 5 tasks, one for each payload class; these tasks were advertised at random time instants between the start and the end of the main task allocation process. Charge hubs were assumed to be present within the operational area of reference. The time needed to fully re-charge a UAV with no residual energy in the battery was set to 0.5 h, and the remaining constant parameters of the setup (expressed in km, h, and kg·m−3) were kept fixed. The simulation campaigns designed to corroborate the validity of the approach were based on Monte Carlo simulations, repeated with four different scenarios:
Scenario : and random initialization of the following parameters: initial , delivery tasks’ pick-up and delivery locations, associated to each task, re-charge hubs’ locations, and (within the range of h).
Scenario : analogous to , but with no resource sharing in the allocation strategy and with each phase of task execution being completed at maximum speed.
Scenario : random initialization of the following parameters: initial , delivery tasks’ pick-up and delivery locations, associated to each task, re-charge hubs’ locations, (within the range of h), and (within the range of h).
Scenario : analogous to , but with no resource sharing in the allocation strategy and with each phase of task execution being completed at maximum speed.
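A sketch of how a random problem instance could be generated for such Monte Carlo campaigns is reported below; the payload classes and the six-tasks-per-class structure follow the text, while the geographic bounds, the due-date range, and the number of charge hubs are placeholders.

```python
import random

PAYLOAD_CLASSES = [0.5, 0.8, 1.0, 1.5, 2.0]   # kg

def random_location(lat=(45.03, 45.09), lon=(7.62, 7.70)):   # illustrative bounds near Turin
    return (random.uniform(*lat), random.uniform(*lon))

def random_instance(tasks_per_class=6, n_hubs=2, due_range_h=(0.5, 3.0)):
    tasks = [{"pickup": random_location(), "delivery": random_location(),
              "payload": m, "due": random.uniform(*due_range_h)}
             for m in PAYLOAD_CLASSES for _ in range(tasks_per_class)]
    hubs = [random_location() for _ in range(n_hubs)]
    return tasks, hubs

tasks, hubs = random_instance()
print(len(tasks), "delivery tasks,", len(hubs), "charge hubs")
```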
Tasks are advertised by the auctioneer according to ascending values of the due date so that tasks with higher priority in terms of due date are executed earlier. The results in terms of mean value and standard deviation of a set of well-defined output schedule parameters are derived from the Monte Carlo simulations repeated for each scenario. In this case, the communication among the UAVs is assumed to be ideal, i.e., with no probability of communication failure. Such results are reported in Table 2.
The impact of communication failures on the output schedule parameters was also investigated. Sets of 50 Monte-Carlo-based simulation campaigns were performed for increasing probabilities of communication failure by considering one particular instance for each of the two scenarios of reference. The simulation results are reported in Table 3 and Table 4.
As far as the results with ideal communication are concerned, the capability of the proposed approach to handle a demanding task set was corroborated by the Monte-Carlo-based simulation campaign. The mean energy consumption decreased by 81% and by 80%, respectively, when each energy-aware scenario was compared with its counterpart executed at maximum speed and without resource sharing; the mean number of assigned re-charge tasks decreased by 85% and by 90.5% for the same comparisons. Two of the scenarios were conceived to show that, even with highly demanding task sets and the presence of dynamic tasks in the allocation loop, an energy-aware allocation of tasks was preferable to a simpler auction-based allocation with UAVs operating at maximum speed. The proposed market-based allocation architecture allowed the aerial fleet to evenly self-balance the distribution of the allocated delivery tasks while minimizing the total energy consumption of the fleet and reducing the number of re-charge tasks to be executed. This was reflected by the mean and standard deviation values of the per-UAV task distribution, whose relative difference increased in the scenarios without resource sharing. Obviously, since the task sets of the more demanding scenarios contain tens of random parcel deliveries to be completed within close random time deadlines, a few more tasks remained unassigned in those scenarios than in the less demanding ones. The higher variability in the distribution of tasks in the fleet observed for the scenarios without resource sharing was due to the lack of the resource sharing condition in the bidding phase of each UAV.
As far as the results with lossy communication are concerned, the proposed allocation architecture featured superior robustness to fallible communication with respect to the one proposed in our previous work in [30]. This was due to the redundancy of the allocation protocol when a task was not assigned due to message loss.
The results obtained from both the implementation of the aforementioned simulation campaigns and the analysis of the solution quality parameters enabled the following conclusions:
The proposed architecture successfully tackled the formulated DDP with real-world instances of the problem itself.
The main allocation objective of minimizing the energy consumption, despite being less favorable for the maximization of the number of completed tasks, did not compromise the level of performance of the allocation output.
The proposed architecture was highly modular, flexible, and easily adaptable to tackle other task assignment problems with different objectives in the context of drone-based ITSs.
Even with the more demanding problem instances, the allocation architecture’s features of energy optimization, resource sharing, and dynamic task assignment did not compromise its overall level of performance.
The redundancy added to the allocation strategy enhanced the performance in lossy communication scenarios.