Article

The Why and How of Polymorphic Artificial Autonomous Swarms

1 VTT Technical Research Centre of Finland, 02150 Espoo, Finland
2 Netherlands Organisation for Applied Scientific Research, 2597 AK Den Haag, The Netherlands
3 Smart Sensor Systems Research Group, Department of Mechatronics, The Hague University of Applied Sciences, Rotterdamseweg 137, 2628 AL Delft, The Netherlands
* Author to whom correspondence should be addressed.
Drones 2025, 9(1), 53; https://doi.org/10.3390/drones9010053
Submission received: 14 October 2024 / Revised: 9 December 2024 / Accepted: 20 December 2024 / Published: 13 January 2025
(This article belongs to the Special Issue Advances in AI for Intelligent Autonomous Systems)

Abstract

In this paper, we investigate the concept of polymorphism in the context of artificial swarms; that is, collectives of autonomous platforms such as unmanned aerial systems. This article provides the reader with two practical insights: (a) a proof-of-concept simulation study to show that there is a clear benefit to be gained from considering polymorphic artificial swarms; and (b) a discussion on the design of user-friendly human–machine interfaces for swarm control to enable the human operator to harness these benefits.

1. Introduction

In the last few years, there has been an explosion in the availability of Unmanned Vehicles (UxVs) or Unmanned Systems (UxSs), with Unmanned Aerial Vehicles (UAVs) [1], Unmanned Ground Vehicles (UGVs) [2,3], Unmanned Surface Vehicles (USVs) [4], and Unmanned Underwater Vehicles (UUVs) [5] (or Autonomous Underwater Vehicles (AUVs)) [3,6] all making an appearance in the literature and, for that matter, in the real world. The battlefields of Ukraine and the asymmetric conflicts in Asia and the Middle East have seen to that [1]. For example, ref. [1] estimates that, less than two months into Russia’s invasion of Ukraine, the entire Russian UAV fleet numbered in the hundreds; in early 2024, less than two years later, Ukraine was deploying (and losing) 20 times that number on a daily basis (source: 1st EDA Swarming Technologies Conference, February 2024). By the end of 2025, Ukraine’s annual drone production capability will reach 4 million (source: 1st EDA Autonomous Systems Community of Interest (ASCI) meeting, November 2024).
All of these types of systems are commonly referred to as drones, likely borrowing the term from biology, where it refers to entities that, while capable of independent operation and decision making, commonly serve a greater purpose than their own. A key aspect contributing to the current proliferation of drones is their expendability or attritability. These attributes refer to the fact that the loss of an individual is acceptable from the point of view of the group, as their production cost is significantly lower than the strategic or even tactical value of the actions that lead to their demise. As a result of increasing performance and decreasing cost, unmanned systems can now be deployed for a variety of applications in an ever-growing list of domains, while at the same time being purchased and deployed by stakeholders in these domains who previously lacked the financial means to do so.
The predictable next step in this evolution is the appearance of collectives of such platforms, capable of operating as a whole to an increasing extent. Such collectives may be referred to as a swarm, though the term is not understood uniformly: some argue for the need for self-organization and emergent properties, while others claim that the mere cooperation of multiple platforms in the same environment suffices to meet the definition [7]. As discussed at ASCI in late 2024, that these swarms will soon be a reality is by now generally undisputed. The question is when, not whether.
Be that as it may, we sidestep this discussion with the compromise that, if they are not already, true swarms will be a reality in the very near future, and assume for the remainder of this article that high levels of autonomy and self-organization are essential features of swarming. From a scientific viewpoint, key challenges of swarming include, for example, the development of suitable swarm intelligence algorithms [8], determining the level of human control [9], and the definition of suitable human–machine teaming approaches [10].
Among the many research topics that are being investigated worldwide and should contribute to improving the usefulness of collective intelligence paradigms in artificial swarms, one that appears to be receiving comparatively little attention, and is therefore likely to deliver uniqueness and novelty, is that of polymorphism.

1.1. Contributions of This Article

In this paper, we investigate the concept of polymorphism in the context of artificial swarms; that is, collectives of autonomous platforms, such as UAVs. As the concept has been extensively studied in the field of biology, we do not make a broad case for its potential benefits in general, but instead provide the reader with two practical insights:
  • A proof-of-concept simulation study to show that there is indeed some clear benefit to be gained from considering polymorphic artificial swarms.
  • A discussion on the design of user-friendly human-machine interfaces (HMIs) for swarm control to enable the human operator to harness these benefits.

1.2. Scope of This Article

To manage expectations and to avoid disappointment, we scope our work as theoretical investigations using generic models that we intentionally kept as simple as possible. Consequently, the conducted simulation-based experiments were not intended to fully capture real-world complexities such as environmental factors (which may influence, e.g., the field strength of signals), telemetry from the platforms, physical aspects of fixed-wing or rotary-based platform flight behaviours, etc., but were meant to provide abstract insights into the generalizable benefits and disadvantages of polymorphic swarms.
Therefore, the parameter values for, e.g., the UAVs’ flight speed (needed in Section 6.1) or cost (considered in Section 5.2 and Section 7.4) used in our simulations are educated estimates. They were chosen without concrete data from practical applications or from an operational realization of the approach, and with a bias towards values that ease the subsequent analysis of the results (see the section on parameter space exploration; Section 7.4).
Consequently, it should probably not be surprising that the principles (cf. Section 4.2) and considerations (cf. Section 8) of HMI design discussed in this article, in parallel to our theoretical investigations, lack the benefit of being informed by a real-world implementation of the approach. While the reported investigations were conducted as part of a UAV-based solution for real-time management of wildfires (FireMan project) and are expected to eventually assist firefighting efforts in the vast forested regions of Finland, no autonomous drone swarm is currently being deployed, and the work on designing the HMI (and tailoring it to the needs and wishes of the end users, i.e., firefighters) is restricted to initial principles and considerations.

1.3. Outline of This Article

The next sections provide some background information on unmanned platforms (Section 2), polymorphism, and polymorphic drone swarms (Section 3) in general, and the concepts of operation of such swarms (Section 4), before we introduce a use case example for a polymorphic swarm of UAVs in Section 5. To formalize this use case, Section 6 describes the underlying mathematical models and notation and Section 7 then provides results from our investigations. Section 8 presents a visual representation/User Interface (UI) and elaborates on considerations regarding the HMI for the use case. We end with a conclusion and future suggestions (Section 9) and a reference list for the interested reader.

2. Unmanned Vehicles (UxVs)/Unmanned Systems (UxSs)

The term UxVs encompasses a broad range of platforms designed to operate without a human physically present at the site of operation. These systems operate in air, land, sea, and underwater environments (Note: we omit space-based platforms from our considerations) in both civilian and military applications:
  • Unmanned Aerial Vehicles (UAVs) [1]: These systems, currently the most common type discussed in research and the media, operate in airspace and can greatly vary in size, capability, and purpose. UAVs can be further categorized into fixed-wing, rotary-wing, and hybrid designs, such as vertical take-off and landing (VTOL) aircraft.
  • Unmanned Ground Vehicles (UGVs) [2,3]: UGVs navigate on the ground, where they can be deployed in structured environments like roads and the insides of buildings as well as unstructured environments like off-road terrains and disaster zones. Examples include autonomous cars and mobile robots (with, e.g., embedded robotic arms).
  • Unmanned Underwater Vehicles (UUVs) [5,6]: These operate below the surface and are divided into Autonomous Underwater Vehicles (AUVs), which operate autonomously, and Remotely Operated Underwater Vehicles (ROVs), which require real-time human remote control.
  • Unmanned Surface Vehicles (USVs) [4,6]: These vehicles navigate water surfaces, conducting maritime research, surveillance, and logistic missions. USVs include both autonomous ships and smaller robotic boats. Examples of their designs include catamarans, trimarans, and hydrofoils.
As a proper overview of the landscape of UxVs warrants a large review paper in itself, we will restrict ourselves for the remainder of this article to the (currently) most common representatives of such platforms: those operating in the aerial domain. Practically speaking, the development and deployment of such UAVs [11] are guided by the principle of using the right tool for the right job, and the various sub-types attest to that: fixed-wing UAVs offer speed and endurance, rotary-wing UAVs provide agility and precise control, and hybrid designs offer a versatile compromise. As UAV technology continues to advance, the distinctions between these categories may blur, leading to new innovations and applications that further expand the capabilities of aerial unmanned vehicles in both the civilian and military spheres.
We briefly provide an overview below but then, in the interest of brevity, refrain from discussing all the variations currently available on the market. UAVs, commonly referred to as drones or aerial drones, represent a versatile and rapidly evolving segment of unmanned systems that operate within various layers of airspace. Their applications span from recreational and commercial to critical military and scientific roles, driven by advancements in technology that have expanded their capabilities and accessibility. The categorization of UAVs into fixed-wing, rotary-wing, and hybrid designs [11] highlights the diversity of their operational uses and design complexities.

2.1. Application Domains for UAVs

Currently, unmanned aircraft systems are being used in many different application areas. Aerial photography, video production, logistics, police operations, firefighting, infrastructure monitoring, and defence applications are a few of these domains. Aerial drones are therefore saving lives, producing visually stunning content, converting manual labour into more productive work, and offering safe alternatives to tedious and hazardous jobs. UAVs are typically teleoperated aircraft of variable sizes. They are also sometimes referred to as drones or Unmanned Aerial Systems (UASs). They can be classified into four groups according to their size: micro, small, medium-altitude long-endurance (MALE), and high-altitude long-endurance (HALE), ranging from less than a meter across to a wingspan of almost 40 m [12]. Swarms of small drones are typically used to increase the cost efficiency of operations, such as large-scale surveillance and search and rescue (see, for instance, [13]).

2.2. Fixed-Wing UAVs

These traditional types of aerial devices are basically small planes without a human pilot on board. As such, they resemble traditional aeroplanes, with their operation (i.e., remaining airborne) being very efficient. They are capable of relatively long-duration flights at high speeds. They generally require a runway or catapult for take-off and a runway or parachute for landing, making them well suited for applications such as large-area mapping, agriculture, and surveillance missions. As their name indicates, they are characterized by their static wings, similar to conventional aeroplanes, which provide lift due to the vehicle’s forward airspeed. Owing to this, they can normally cover longer distances more efficiently and carry larger payloads than so-called rotary-wing UAVs (see Section 2.3), making them the ideal choice for missions where the emphasis is placed on endurance and/or speed.

2.3. Rotary-Wing UAVs

These devices feature one or more rotors (with quadcopters being quite popular) that enable VTOL, hovering, and agile manoeuvring. This design is ideal for missions requiring precise positioning, such as inspection of infrastructure, delivery of goods, or search-and-rescue operations in complex or confined environments. The precise control and stability of rotary-wing UAVs are extremely useful for applications such as detailed infrastructure inspections, as they allow steady and slow manoeuvring and thereby facilitate the recording of high-resolution images of, for example, structures. They are also ideal for delivery services in urban areas where space is limited, and for search-and-rescue missions in challenging terrains, as they can hover and operate close to the ground or obstacles.

2.4. Hybrid Designs (Vertical Take-Off and Landing (VTOL) Fixed-Wing UAVs)

As is to be expected, there are already hybrid versions of UAVs combining the two types discussed above. By combining elements of both fixed-wing and rotary-wing designs, these UAVs offer the endurance and speed of fixed-wing UAVs with the vertical take-off, landing, and hover capabilities of rotary-wing UAVs. Consequently, these are particularly useful in applications where space for take-off and landing is limited but longer flight times are needed. Vertical take-off and landing (VTOL) fixed-wing UAVs are particularly advantageous in scenarios where the mission area is remote or inaccessible, and where traditional runways are not available. They are used in applications ranging from logistic deliveries in remote areas to environmental monitoring where they can swiftly cover large distances and then hover or manoeuvre slowly for detailed observations.
Their versatility makes them suitable for a wide range of missions, including rapid-response scenarios and operations in mixed-terrain environments. However, by deviating from the increasingly fine-tuned designs of either pure type, a hybrid is likely to realize the benefits of both only partially. As we will argue in this article, when considering collectives of devices there is a benefit in using both types (fixed-wing as well as rotary) in a swarm together (as opposed to using only the hybrid type).

3. Polymorphism and Artificial Swarms of Drones

3.1. Polymorphism in Nature

In biology, the term polymorphism refers to the occurrence of two or more distinct sets of characteristics (phenotypes) within a population [14]; i.e., naturally occurring differences between the members of a group (as opposed to, e.g., self-inflicted differences such as a tattoo). These differences can manifest themselves in the form of physical differences but also as behavioural or bio-chemical traits. As such, polymorphism in a group supports specialization of its members to perform specific tasks or to divide labour or tasks in an efficient manner, and thus clearly has the potential for creating an evolutionary benefit by increasing diversity and facilitating adaptability and survival in dynamic environments.
Polymorphism can occur for a number of reasons: the most common is genetic polymorphism, where different alleles (variations of a gene) exist within a population, causing, for example, different colourings or morphology [15]. Another form is sexual dimorphism, where male and female members of a species differ significantly, such as in birds, where the male and the female have significantly different colouring or plumage. A third form is so-called balanced polymorphism, where a species that is found across different environments has adapted to local particularities in these environments, developing, for example, a fur colour that provides better camouflage in either snow or high grass [16].

3.2. Division of Labour and Specialization in Social Insects

All eusocial insects are characterised by the division of labour between the breeder (the queen) and worker castes. Beyond this division, there exists a continuum of specialisation:
  • From those manifesting exclusively on a behavioural level (e.g., age polyethism [17]);
  • through a form of division of labour that relies primarily on differences in size ([18]);
  • to morphologically distinct castes among workers (as found, e.g., in many termites [19], where soldiers can differ from the foragers so much that they look like another species).

3.3. Specialization in Artificial Swarms

Artificial drone swarms, by contrast, are often composed of a single type of device, examples of which are the light quadcopters used to execute LED-based pre-choreographed air shows or, at the other end of the sophistication spectrum, Shield AI’s V-BAT military drones [20]. The reasons for this could be related to the following:
  • The benefits of standardisation (including the ability to swap one unit or parts thereof for another);
  • Commercial advantages (for the manufacturer, e.g., the use of proprietary technology);
  • Or simply “ease of use” considerations, since the human commander/operator only needs to familiarise themselves with the characteristics of a single model.
At the same time, it is self-evident that, as applications for drone technology multiply, so does the diversity of the associated technical requirements, which drives a form of “speciation” based on the characteristics (e.g., form, size, shape, speed, endurance, sensors) necessary or desirable to fulfil the intended purpose. The type of drone most suitable for, e.g., conducting a high-altitude survey of the forest canopy [21] is unlikely to be the same as the quadcopter tasked with detecting harmful chemical leaks inside a factory [22].
The paradoxical result is that we end up with a plethora of drone models that, despite their clearly complementary features, are rarely, if ever, used together to complete the kind of complex mission that could benefit from combining the unique capabilities of multiple types of UASs. It is almost certainly just a matter of time before this obvious oversight is corrected and the corresponding opportunities are seized.

3.4. Polymorphism in Autonomous Swarm Operations

Based on the current state of affairs, there is arguably one aspect of this problem that is less likely to be given much attention, at least in the short-to-medium term: the exploitation of what amounts to polymorphism in autonomous swarm operations. Although it is relatively straightforward to formulate a Concept of Operations (ConOps) in which multiple drone types are used in conjunction to perform a task beyond the capabilities of a single model [23,24], how individual units can call upon each other’s “skills” to work better as a team, without human guidance, is a much trickier question [25]. For natural swarms, we have a partial understanding of the mechanisms that have evolved to support efficient division of labour in dynamic environments characterised by variable needs for specialisation [26]. Achieving this in artificial swarms is an open challenge.

4. Concepts of Operation

4.1. Command and Control of Drone Swarms from the Human Factor Perspective

When it comes to highly automated drone operations from the human factor perspective, one must first take into account how the human operators who are monitoring the operations from a distance can attain and sustain a sufficient level of situation awareness (SA). Situation awareness from the operator’s perspective refers to “the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status” [27]. Furthermore, automation awareness (AA) becomes important in highly automated drone operations. AA has been previously defined from the human’s perspective as “a continuous process that comprises of perceiving the status of the automation, comprehending this status and its meaning to the system behavior, as well as projecting its future status and meaning” [28]. It can be extremely difficult for a human operator to develop and maintain both automation and situation awareness in highly automated remote operation settings involving drone fleets. There have also been numerous reports of incidents in safety-critical environments (such as the Boeing 757 crash in Cali, Colombia) where the human operator lost situation awareness [29]. Thus, taking into account SA and AA, or even artificial intelligence awareness (AIA) (see, for example, [30]), becomes crucial, particularly in contemporary drone swarm solutions. By giving operators instantaneous feedback on the swarm’s activities and surrounding conditions, such solutions allow them to maintain awareness of the currently prevailing situation.
Second, the cognitive strain on operators needs to be reduced by arranging information so that it is easily absorbed. Overcomplexity or an abundance of information can cause mistakes and poor performance. To prevent the human operators of drone swarm operations from having an excessive cognitive workload or, conversely, from having overly monotonous tasks, it is crucial to divide up the tasks between humans and automation in an appropriate way. With respect to the former, it has been observed that multitasking, interruptions, and task switching significantly increase cognitive workload (e.g., [31,32]). Psychological understanding of the limits of human thought and behaviour is required here. Effective task distribution and switching ensures both the safety of operations and the health of the human operators who are responsible for long-term remote drone monitoring. Additional cognitive complexity arises for the operator when, instead of a single UAV, a swarm of drones needs to be monitored and controlled [33]. Nevertheless, in multiple-unmanned-vehicle supervisory control, operator boredom and distraction can also become a significant issue (see, for example, [34]). All of these issues can be effectively resolved with proper monitoring, command, and control interface design.
Third, the quality of the user interface solutions that the human operators are interacting with is critical to user experience (see, e.g., [35]) and usability (see, e.g., [36]). The operators cannot understand the current state of the object environment and take appropriate action if the remote monitoring and operation systems of the drones are difficult to use. Human out-of-the-loop performance problems may arise from this [37], which may affect safety-critical operations negatively. Similarly, it is crucial to establish a suitable level of human operator trust in the automation of the drone and its interfaces, for instance, through design. According to Lee and See [38], trust in this context is defined as “the attitude that an agent will help to achieve an individual’s goals in a situation characterized by vulnerability and uncertainty”. In this case, the automated drone’s command and control system is the agent, collaborating with the human operator to accomplish the drone operation’s goals. Here, appropriate trust is a well-calibrated level of confidence in the automated system that corresponds with its capabilities. In contrast, overtrust or distrust in the system can lead to issues with safety or performance as well as potentially deadly accidents (like the mid-air collision at Lake Constance [39]). As a result, user-centred design must be used when creating human–machine or user interfaces intended for drone monitoring and control. In order for users to make timely decisions based on the analysed information, the same user interfaces may also be used to analyse and visualize the data from drone sensors. These data visualizations must be clear and easy to understand for users. Effective human control requires presenting, for instance, the swarm’s status and mission progress in an easy-to-understand visual format.
At the moment, operations may also make use of some sophisticated UI output techniques, such as using Artificial Intelligence (AI) to highlight or visualize pertinent points of interest from the object environment. Thanks to AI, large datasets can be quickly mined for information online. Because drones can automatically identify the needed objects in their environment and take appropriate action (like sounding an alarm for a human operator), it is also possible to develop effective SA solutions. Eventually, drone operations supported by augmented/mixed reality with a wealth of information will also be feasible. This will make it possible to develop very novel kinds of information visualizations and immersive representations of drone sensor data for the human users.

4.2. Human–Machine Interface Principles for Controlling a Swarm of Drones

Designing a user-friendly human–machine interface for drone swarm control is a critical part of a successful swarming application. A number of scholarly articles have examined human–machine interface solutions for multi-UAV control. In general, management by consent or management by exception can be used to control UAVs [40]. From the standpoint of human factors, the latter has issues such as automation bias, which occurs when people accept computer-generated recommendations even in the presence of unreliable systems. Goodrich & Cummings [40] cite the work of Ruff et al. [41] and conclude that the management-by-consent approach—where an automated solution requires human approval prior to execution—is better for managing multiple UAVs than the management-by-exception approach, which gives the operator a window of time to reject the automated solution. For controlling up to four UAVs, management by consent seemed to offer the highest situation awareness ratings, performance scores, and trust. All things considered, there is some evidence to suggest that human operators may not be able to manage several vehicles that require assistance with payload and navigation, especially in the event of unreliable automation (see [40] for a review).
A discussion of the most fruitful ways for the swarm and its human operator(s) to interact is found in [42]. First, it is obvious that many of the benefits of using swarms are negated by low-level “micro-management” of each unit within the swarm. Second, it is still critical to maintain some degree of control over the swarm’s goals and actions in real time. Saffre et al. [42] offer two families of control methods—direct and indirect—that can be used to create HMIs for drone swarms that are suitable—that is, concurrently powerful, flexible, intuitive, and easy to use—and that would enable a single operator to coordinate the actions of a swarm. They conclude that direct approaches are better suited for short timescales, while indirect methods enable the specification of more abstract long-term objectives at the “operational” level, which makes the two naturally complementary [42]. Practically speaking, a lot of applications involving multiple UAVs use the direct control approach; however, as swarm operations and AI advance, it is clear that indirect control will become more prevalent in the future.
The use of planners has also been proposed for the coordination of multiple unmanned vehicles [43]. Planners are computer programs that create plans automatically in order to meet user-specified requirements. The plan libraries used for this purpose are helpful when complex, variable, and interdependent behaviour is required, as in missions where multiple UAVs take on different roles at different times. Miller et al.’s [44,45,46] Playbook is an example of a planner that uses an adaptive automation approach.
The human operators need to be given access to decision-support tools so they can make prompt, accurate decisions—especially in complex and dynamic situations. The information presented in the HMI must be understood by the human operator in order for them to make the best decisions possible during UAV swarm operation(s), taking into account the psychological aspects of human decision making. Hart et al. [47] conducted a thorough literature review on decision models’ human factor aspects and how to apply those to the context of UAVs. Moreover, Bjurling et al.’s [48] findings indicate that human-in-the-loop simulation research and related HMIs must facilitate communication among various drone swarm tiers (e.g., subswarms, individual UAVs, and their sensors), as failure to do so may result in an increased cognitive load on the human operator.
Lastly, Chen et al. [49] provide a comprehensive set of interface guidelines related to the supervisory control of multiple robots. Chen et al. [49] go into detail about the adaptability of systems and switch-off, as well as refocusing the human operator’s attention on the entire swarm after they have managed one unmanned system. Lewis [43] provides guidelines for the design of multi-unmanned vehicle systems, which are based on a proposed taxonomy of operator task complexity. Specifically, Lewis [43] offers an overview of models that break down the human operator’s interactions with unmanned vehicles (UGVs and UAVs) into a series of control episodes.

5. A Use Case Example of a Polymorphic Drone Swarm, and the Human Control Thereof

Use cases for polymorphic drone swarms are not hard to find. Many of these belong to the general category of Intelligence, Surveillance, Reconnaissance (ISR) missions (cf., e.g., [50]), in which an area is to be monitored and/or relevant objects or events must be spotted, identified, and (possibly) tracked. It may be worth mentioning that in our use case the role of drones is confined to observation without actuation [51].

5.1. Distinguishing Between Detection and Identification in ISR Missions

The necessity for cooperation between multiple types of UASs in ISR missions springs from the fact that different phases of the mission usually require different capabilities [50]. For instance, if the area of interest is relatively large, better results in terms of detection will often be achieved by flying the drones at high altitude for long periods, to maximise coverage. Performance in this phase could be measured, for example, by the time between the beginning of an event and it being noticed/reported by the swarm, or by the fraction of events that are discovered before they somehow lose relevance (e.g., the intruder has left the scene, or an isolated fire incident has already spread beyond control).
Conversely, it is often the case that detection is only the first necessary step, for instance, when the nature of the phenomenon being monitored implies the presence of true and false positives. In these circumstances, the same high altitude that increased one kind of drones’ detection range and improved coverage may prevent them from achieving conclusive disambiguation, simply due to the distance from which the observation is made.
For instance, a bright spot on a thermal image may be a positive indication of, e.g., a fire (a chemical process of fuel reacting with an oxidizing agent, be it controlled or uncontrolled) or of the presence of a life form (but not of whether it is a bear, a hiker, or a human in combat gear). Making that determination (identification) would likely be best achieved by using another type of UAS (e.g., a low-flying quadcopter), which in turn would have been highly inefficient in the first (detection) phase.

5.2. An ISR Scenario Focusing on Surveillance and Monitoring for a Polymorphic Swarm

Accordingly, we investigated a scenario in which fixed-wing drones with long endurance, flying at high speed and high altitude, are used for event detection, whereas quadcopters with short endurance, flying at low altitude and slower speed, are called upon for disambiguation. The study involved quantifying the influence of key parameter values (frequency and half-life of relevant events, number and endurance of the two types of drones, etc.) and comparing the performance of different cooperative strategies (e.g., what exploration pattern do fixed-wing drones follow, when do quadcopters take off, how are targets for disambiguation prioritised, and what fraction of the fleet is kept in reserve?).
For the proof-of-concept simulation study, two types of UAS were considered:
  • Fixed-wing drones that operate at a higher altitude, fly faster and further, carry sensors that cover a large area, and are expensive (e.g., EUR 80,000 per unit).
  • Quadcopters that operate at a lower altitude, are comparatively slow, have a shorter range and a limited field of view, and are cheap (e.g., EUR 10,000 per unit).

5.3. Human Factors and HMI Aspects of the Use Case and the Proof of Concept

In the use case, multiple drones work together in a polymorphic drone swarm. When designing, deploying, and integrating polymorphic drone swarms, the human factor aspects are vital; cf., [52]. Effective interaction between the human operator and the swarm is needed via an appropriate HMI to enable seamless collaboration and mission performance [53]. In that context, two separate aspects of relevance emerge:
  • Mission planning and coordination (discussed in Section 8.1);
  • Real-time monitoring and feedback to the human operator (discussed in Section 8.2).
In our use case, there are different drone capabilities in various numbers available to the human operator planning and triggering the mission. Decisions for the operator include how many drones of which type should be deployed for the mission. How to communicate this information and how to enable the user to amend the settings (in advance or during the mission) are key user interface design issues. Furthermore, the human operator(s) must comprehend the drone ranges, event detection probabilities, and the surveyed environment thoroughly in order to start the mission. To give the operators a clear and simple understanding of these parameters, the provided interface should clearly visualize the ranges of the drones, probability functions, and the operating environment. We re-raise these questions in Section 8 after introducing the use case specifics.

6. Materials and Methods

The planned ConOps in our case study is that fixed-wing aircraft, with higher speed and a longer detection range, are used to spot relevant events and to direct the slower quadcopter(s) to the corresponding location to inspect them. We do not make any assumption regarding the events or the process of inspection but, depending on the application domain (cf., Figure 1), it may be disambiguation between true and false positives (wildfire start vs. controlled heat source [54], such as a BBQ), delivery of first aid to a disaster victim, tagging of wildlife [55,56], interception of an enemy vehicle, etc.
We first discuss the map and the agents (UAVs on the map) in Section 6.1, define events (Section 6.2) and how to measure the performance of the swarm (Section 6.3), and then provide the algorithms used (Section 6.4).

6.1. The Map (Test Environment) and the Airspeed of the Two Types of Platforms Used

The test environment is a 10 × 10 km square (100 km²) centred on the base from which all UAVs take off. The characteristics of the two types of UAVs involved are as follows:
  • Fixed-wing UAVs: Travel at an airspeed of 30 m/s (108 km/h) and can perform a 180° turn in 6 s. They can detect events up to a range of 500 m.
  • Quadcopters: Travel at an airspeed of 10 m/s (36 km/h) and can change direction “instantly”. They can detect and inspect events up to a range of 50 m.
By default (benchmarking), both types of drones have an endurance (battery life) that exceeds the duration of the numerical experiment (so they do not have to go back to base).
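For readability, the short Python sketch below gathers the environment and platform parameters listed above (together with the event-process probabilities introduced in Section 6.2.1) into a single configuration object. The dataclass and all field names are our own illustration, not part of the simulation software used for this study.

```python
# Illustrative parameter set for the simulated environment (Section 6.1);
# names and structure are our own sketch, not the authors' simulation code.
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationParameters:
    area_side_m: float = 10_000.0       # 10 x 10 km square centred on the base
    fixed_wing_speed_ms: float = 30.0   # 108 km/h
    fixed_wing_turn_s: float = 6.0      # time needed for a 180-degree turn
    fixed_wing_detect_m: float = 500.0  # fixed-wing detection radius
    quad_speed_ms: float = 10.0         # 36 km/h; direction changes treated as instant
    quad_detect_m: float = 50.0         # quadcopter detection/inspection radius
    event_start_p: float = 1.0 / 60.0   # per-second probability of a new event (Section 6.2.1)
    event_stop_p: float = 1.0 / 1800.0  # per-second termination probability (30 min half-life)

PARAMS = SimulationParameters()
```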

6.2. Events

Events are generated stochastically and progress through a series of states. Figure 2 (events), Figure 3 (actions), and Figure 4 show how certain actions change the state of an event.

6.2.1. Event Generation

Events are generated at a statistically fixed rate throughout the simulation and last for a duration that obeys an exponential decay rule (i.e., both the initiation and termination of events occur with a constant probability: the per-second probability of a new event starting is 1/60 and the per-second probability of an ongoing event terminating is 1/1800). This means that, on average, one new event starts every 60 s. The half-life of an event is half an hour (1800 s). Because the decay is exponential, an individual event can last much shorter or much longer, but 30 min is the average. This method was chosen because a similar process governs many natural phenomena, such as the decay of radioactive isotopes. Note that the resulting distribution of event longevity has a “long tail”, meaning that, although by definition the average duration of an event is equal to the value of the half-life parameter, some may last a substantially shorter or longer time.
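As an illustration, the following Python sketch draws events with the constant per-second probabilities given above (1/60 to start, 1/1800 to terminate), which reproduces the long-tailed duration distribution just described. The function name and data layout are assumptions made for readability; this is not the simulator used for the reported experiments.

```python
# Minimal sketch of the stochastic event process (Section 6.2.1): per second, a new
# event starts with probability 1/60 and every ongoing event terminates with
# probability 1/1800, giving geometrically distributed durations (the discrete
# analogue of exponential decay).
import random

def generate_events(duration_s: int, start_p: float = 1 / 60, stop_p: float = 1 / 1800,
                    area_side_m: float = 10_000.0, seed: int = 0):
    rng = random.Random(seed)
    ongoing, finished = [], []
    for t in range(duration_s):
        # A new event appears at a uniformly random location with constant probability.
        if rng.random() < start_p:
            x, y = rng.uniform(0.0, area_side_m), rng.uniform(0.0, area_side_m)
            ongoing.append({"pos": (x, y), "start": t, "end": None})
        # Each ongoing event terminates independently with constant probability.
        still_ongoing = []
        for ev in ongoing:
            if rng.random() < stop_p:
                ev["end"] = t
                finished.append(ev)
            else:
                still_ongoing.append(ev)
        ongoing = still_ongoing
    return finished + ongoing  # events still ongoing at the end have end = None
```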

6.2.2. Event State Progression

Events can be in 5 mutually exclusive states, as illustrated in Figure 2:
  • Unknown: The event has not yet been spotted;
  • Missed: The event has terminated before ever being detected;
  • Spotted: Detected by a fixed-wing UAV but not (yet) inspected by a quadcopter;
  • Lost: Spotted by a fixed-wing UAV but terminated before it could be inspected;
  • Inspected: Spotted by a fixed-wing and inspected by a quadcopter while ongoing.
From an algorithmic and decision-making point of view, the Lost state has two substates: Checked and Unchecked. It is critical to remember that, as far as the swarm is concerned, a Lost event that is Unchecked is indistinguishable from a Spotted event; i.e., it is still marked for inspection by a quadcopter even though it is no longer active. A Lost event can transition from Unchecked to Checked in two ways: its location falls again within the detection radius of a fixed-wing UAV (which, noticing its absence, may call off inspection) or a quadcopter comes within 50 m of the target coordinates (inspection radius) and makes the same determination (i.e., there is no longer anything to investigate there).
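The sketch below summarises this life cycle as a small state machine in Python, including the Checked/Unchecked sub-states of Lost. The enumeration and the transition functions are our own paraphrase of the rules above, not the simulator's actual code.

```python
# Sketch of the event life cycle from Section 6.2.2; names are illustrative assumptions.
from enum import Enum, auto

class EventState(Enum):
    UNKNOWN = auto()         # not yet spotted
    MISSED = auto()          # terminated before ever being detected
    SPOTTED = auto()         # detected by a fixed-wing UAV, not yet inspected
    LOST_UNCHECKED = auto()  # spotted, then terminated; swarm still treats it as Spotted
    LOST_CHECKED = auto()    # terminated and confirmed absent by a revisiting UAV
    INSPECTED = auto()       # spotted and inspected by a quadcopter while ongoing

def on_fixed_wing_detection(state: EventState) -> EventState:
    # A fixed-wing pass either spots an unknown event or notices that a lost one is gone.
    if state is EventState.UNKNOWN:
        return EventState.SPOTTED
    if state is EventState.LOST_UNCHECKED:
        return EventState.LOST_CHECKED
    return state

def on_quadcopter_arrival(state: EventState, event_still_active: bool) -> EventState:
    # A quadcopter within 50 m either inspects the event or confirms nothing is left.
    if state is EventState.SPOTTED and event_still_active:
        return EventState.INSPECTED
    if state in (EventState.SPOTTED, EventState.LOST_UNCHECKED) and not event_still_active:
        return EventState.LOST_CHECKED
    return state

def on_termination(state: EventState) -> EventState:
    # An event ending before detection is Missed; one ending after being spotted is Lost.
    if state is EventState.UNKNOWN:
        return EventState.MISSED
    if state is EventState.SPOTTED:
        return EventState.LOST_UNCHECKED
    return state
```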

6.3. Swarm Performance

Swarm performance is a function of the distribution of events between the three end states (Missed, Lost(Checked), and Inspected) at the end of the numerical experiment, with the Inspected state being obviously the most desirable. This is illustrated in Figure 3.
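A minimal scoring function consistent with this definition is sketched below; the state labels are plain strings mirroring the three terminal states, and the function is an illustrative assumption rather than the metric implementation used for the figures.

```python
# Sketch of the performance measure in Section 6.3: the fraction of events ending
# in each terminal state at the end of the numerical experiment.
from collections import Counter

def swarm_performance(final_states):
    """final_states: iterable of 'MISSED', 'LOST_CHECKED', or 'INSPECTED' labels."""
    counts = Counter(final_states)
    total = sum(counts.values()) or 1
    return {state: counts.get(state, 0) / total
            for state in ("MISSED", "LOST_CHECKED", "INSPECTED")}

print(swarm_performance(["INSPECTED", "MISSED", "INSPECTED", "LOST_CHECKED"]))
# -> {'MISSED': 0.25, 'LOST_CHECKED': 0.25, 'INSPECTED': 0.5}
```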

6.4. Algorithms

The behaviour algorithms for the different UAVs are straightforward; see Figure 4.
The challenge is the allocation of targets/waypoints to all the UAVs, a process that is shown in the right-most flow diagram of Figure 4 as the box labelled “re-assign all UAVs”. This process of dividing tasks (waypoints) over the swarm is detailed in the next section.

6.4.1. Algorithm: The Division of Labour

Allocating quadcopters to events when there are multiple drones and events (a set-matching problem to which we could not identify a universal solution in the literature) is challenging. Although it is relatively trivial to solve for small sets using “brute force”, the general case is NP-complete, and therefore much less tractable, even in the static case (when sets are defined a priori and element characteristics do not change over time).
In the dynamic situation considered here, events are being discovered in real time and quadcopters are on the move (constantly changing the proximity relationship between elements of the two sets). This called for a simple match-making method capable of quick and frequent updates at a low computational cost. We settled for the following. Whenever change is detected (an event has been spotted for the first time or has just been checked), the match-making procedure is called, which consists of only two steps:
  • The distance vectors between all quadcopters and all targeted events (Spotted and Lost but still unchecked) are calculated and ranked by increasing length.
  • This ordered list of vectors is sequentially read (and the corresponding pairings created) until there are either no more events to be allocated to a drone or no more drones to assign to an event (under a “one-to-one” rule).
This can result in suboptimal pairings (both in terms of longest and average distance between quadcopters and events), but no advanced yet tractable variant could be identified that consistently outperformed the simplest version, possibly due to the high dynamicity.
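For concreteness, a minimal Python sketch of the two-step procedure under the one-to-one rule is given below; the function name, signature, and example coordinates are illustrative assumptions, not published code. Sorting all pairwise distances costs O(nm log(nm)) for n quadcopters and m targets, which is inexpensive for the fleet sizes considered here and allows the assignment to be recomputed whenever a target is spotted or checked.

```python
# Sketch of the two-step match-making procedure in Section 6.4.1: rank all
# quadcopter-target distances, then pair greedily under a one-to-one rule.
import math

def greedy_assign(quad_positions, target_positions):
    """Return {quad_index: target_index} pairings, shortest distances first."""
    pairs = []
    for qi, (qx, qy) in enumerate(quad_positions):
        for ti, (tx, ty) in enumerate(target_positions):
            pairs.append((math.hypot(tx - qx, ty - qy), qi, ti))
    pairs.sort()  # rank all distance vectors by increasing length

    assignment, used_quads, used_targets = {}, set(), set()
    for _, qi, ti in pairs:
        if qi in used_quads or ti in used_targets:
            continue  # one-to-one rule: each drone and each event is used at most once
        assignment[qi] = ti
        used_quads.add(qi)
        used_targets.add(ti)
        if len(used_quads) == len(quad_positions) or len(used_targets) == len(target_positions):
            break  # no more drones to assign or no more events to allocate
    return assignment

# Example: two quadcopters, three targeted events (coordinates in metres).
print(greedy_assign([(0, 0), (1000, 0)], [(100, 0), (900, 0), (5000, 5000)]))
# -> {0: 0, 1: 1}
```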

6.4.2. Algorithm: Searching

In the absence of spotted events, two search algorithms were used.
  • The simplest (“random”) consists in allocating a randomly chosen waypoint on the periphery of the map to the UAV and repeating this procedure every time a waypoint is reached (excluding points on the same border, i.e., mandatory “bouncing”).
  • The other consists in predefining an optimal search pattern (“lawnmower”) in which the distance between two consecutive legs is equal to or lower than twice the detection radius (depending on the nearest division of the depth of the area to be searched).
Finally, the UAVs may be allocated exclusive territories within the search area. When the search area is subdivided (into two, three, or four separate “domains”, depending on the number of drones), all territories have an identical surface area (i.e., 50 km², 33.3 km², or 25 km²).
Figure 5 illustrates this division and the two search patterns. Whenever a UAV is assigned an exclusive territory, its search pattern is confined to the corresponding region.
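The two search behaviours can be sketched as simple waypoint generators, as shown below. The “bouncing” rule and the leg spacing (at most twice the detection radius) follow the description above, while the function names and coordinate conventions are our own assumptions.

```python
# Sketches of the two search behaviours in Section 6.4.2.
import math
import random

def random_border_waypoint(area_side_m: float, current_border: str, rng: random.Random):
    """Pick a waypoint on a border other than the one just reached (mandatory 'bouncing')."""
    borders = [b for b in ("north", "south", "east", "west") if b != current_border]
    border = rng.choice(borders)
    offset = rng.uniform(0.0, area_side_m)
    waypoint = {
        "north": (offset, area_side_m),
        "south": (offset, 0.0),
        "east": (area_side_m, offset),
        "west": (0.0, offset),
    }[border]
    return waypoint, border

def lawnmower_waypoints(width_m: float, depth_m: float, detection_radius_m: float):
    """Back-and-forth legs spaced at most 2 * detection radius apart, covering the area."""
    n_legs = math.ceil(depth_m / (2 * detection_radius_m))
    spacing = depth_m / n_legs  # nearest division of the depth of the search area
    waypoints = []
    for leg in range(n_legs):
        y = spacing / 2 + leg * spacing
        xs = (0.0, width_m) if leg % 2 == 0 else (width_m, 0.0)
        waypoints.extend([(xs[0], y), (xs[1], y)])
    return waypoints

# Example: a 10 x 10 km area and a 500 m detection radius give 10 legs spaced 1 km apart.
print(len(lawnmower_waypoints(10_000, 10_000, 500)) // 2)  # -> 10
```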

7. Results Indicating the Benefits of Applying Polymorphism to Artificial Swarms

Our main concern, however, is to better understand the interplay between multiple types of UAVs in a hybrid swarm and how the composition of the latter (in terms of the number of units of each type) affects its overall performance. The key factor to consider is that if the different types have different characteristics (aerodynamics, endurance, sensors, payload, etc.), making them better at conducting one part of the mission (e.g., event detection) but worse at completing another (e.g., inspection, disambiguation, or neutralisation), and if furthermore resources are limited (essentially a given in any realistic application scenario), then a conservation law is necessarily at play, meaning that there is a trade-off: improving one aspect will come with a detrimental effect on another.
Indeed, investing into more units of one kind implies that as a result there will be fewer of the other(s), which in turn has implications for the overall performance of the hybrid swarm (assuming that both phases of the mission must be completed for it to be considered a success). Before we discuss our results with regard to the composition of the swarm (Section 7.3) we first provide some benchmarks (Section 7.1) and have a quantitative look at the impact of division of labour in the case of the random (as opposed to “lawnmower”) search pattern (Section 7.2).

7.1. Benchmarking

Benchmarking was performed with a very small number of drones of each type (one fixed-wing, one quadcopter; one fixed-wing, two quadcopters; and two fixed-wing, one quadcopter) patrolling the area for a total of 4 h (i.e., 4 times the maximum half-life value under consideration). Figure 6 shows that, as should be expected, the fraction of events that completely escape detection (Missed) decreases as a function of half-life, the number of fixed-wing UAVs, and when shifting from the random to lawnmower search pattern. Nevertheless, it is worth observing some less obvious characteristics such as, for instance, the fact that, for the chosen parameter values (search area size, speed, and detection radius) two units performing a random walk strongly outperform a single UAV following an optimal lawnmower across all considered values of the half-life parameter (though the effect is predictably weaker for statistically longer events). It does not end here, of course, because the different search patterns have different advantages and disadvantages: e.g., when the target is moving then the random walk is more likely to find it, over time. On the other hand, a random walk in 2D remains somewhat local in its search, suggesting that (for static targets) in known environments a pre-planned lawnmower has the potential to significantly outperform it.

7.2. Division of Labour for Random Searches

Another interesting result is the quantitative evaluation of the impact of the division of labour when all participating UAVs are following a random search pattern: somewhat counter-intuitively, the increased risk of duplication of efforts when no mutually exclusive territories are imposed barely affects overall performance (see Figure 7). This leads us to the useful conclusion that only when combined with an optimal search strategy (e.g., “lawnmower”) is spatial division of labour (territories) clearly beneficial to collective efficiency, at least in this region of parameter space.

7.3. Swarm Composition

In our proof-of-concept scenario, fast, high-altitude fixed-wing drones with a wide detection cone (500 m radius at ground level) are used for event discovery because their characteristics allow them to cover more space per unit of time. Conversely, slow, low-altitude quadcopters with a narrower field of view are used for closer inspection of the discovered events. If we assume that each fixed-wing UAV costs as much as four quadcopters, and that the budget is sufficient to acquire the equivalent of five of the former (i.e., twenty quadcopters), we have four options. At one end of the spectrum, four fixed-wing drones will be supported by four quadcopters, maximising long-range discovery at the expense of close-range inspection. At the other end, a single fixed-wing UAV will be supported by 16 quadcopters, with the opposite result. As it turns out, for the chosen parameter values, the optimum overall performance is achieved with a swarm of three fixed-wing UASs and eight quadcopters (see Figure 8).
In summary, whereas when a single fixed-wing aircraft is tasked with patrolling the entire area of interest (lawnmower pattern), the 16-strong quadcopter fleet achieves a ≈95% inspection rate of all discovered events (with an intermediate half-life value of 30 min), the discovery rate itself is less than 50% (i.e., over half of all simulated events terminated before they could be spotted by the high-altitude unit, meaning they were completely missed by the swarm). This translates into a mediocre actual inspection score of ≈45%. By contrast, for the optimal swarm composition, this figure increases to ≈63%, despite the inspection rate of already discovered events dropping to ≈90%.

7.4. Parameter Space Exploration

We conducted a systematic simulation-based exploration of the parameter space for any number of fixed-wing drones and quadcopters between 1 and 10 (i.e., 100 different combinations). For practical reasons, we used the most generic scenario in which all UAVs are patrolling the entire surveillance area (i.e., no exclusive territories) and all are performing a random walk following the rules described in the previous section. Furthermore, the half-life of events was set to a constant 1800 s (30 min). In order to make the potential use of our results more intuitive, we used a crude cost model, assuming the price of each quadcopter to be EUR 10,000 and each fixed-wing UAV EUR 40,000, meaning that, at the lower end (one of each), we have a global price tag of EUR 50,000, whereas at the higher end (ten of each), we are looking at a figure of EUR 500,000. While this economic study is useful in the sense that it provides insights into cost effectiveness, it nevertheless falls short of a sensitivity analysis to evaluate how variations in cost parameters, such as the assumed CAPEX above, could influence the optimal swarm composition.
Using the above-provided crude model and the data collected through simulation, we can identify the optimal composition for the hybrid swarm, taking into account mission success criteria. For instance, if the acceptable threshold is that 50% of all events must be inspected, a swarm composed of three fixed-wing drones and four quadcopters (EUR 160,000) offers the best “value for money”, because of the law of diminishing returns (see Figure 9). If the target is 67% (for the same performance variable), it is 5 of the former and 10 of the latter (EUR 300,000). Combining multiple criteria is of course also feasible; for example, 50% inspection and 75% detection thresholds are simultaneously achievable but require a minimum of six fixed-wing aircraft and three quadcopters (EUR 270,000).
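To illustrate how such a selection could be automated once the simulation data are available, the Python sketch below searches for the cheapest composition that satisfies user-specified inspection and detection thresholds under the crude cost model above. The results dictionary is a placeholder populated with purely illustrative numbers, not data from our experiments.

```python
# Sketch of the cost-based selection in Section 7.4: find the cheapest fleet
# composition meeting the mission thresholds, given per-composition simulation results.
FIXED_WING_COST_EUR = 40_000
QUADCOPTER_COST_EUR = 10_000

def cheapest_composition(results, min_inspected=0.5, min_detected=0.0):
    """results maps (n_fixed_wing, n_quadcopters) -> {'inspected': f, 'detected': f}."""
    feasible = [
        (nf * FIXED_WING_COST_EUR + nq * QUADCOPTER_COST_EUR, nf, nq)
        for (nf, nq), r in results.items()
        if r["inspected"] >= min_inspected and r["detected"] >= min_detected
    ]
    if not feasible:
        return None
    cost, nf, nq = min(feasible)  # lowest total price among feasible compositions
    return {"fixed_wing": nf, "quadcopters": nq, "cost_eur": cost}

# Hypothetical usage with two simulated data points (values are illustrative only):
demo = {(3, 4): {"inspected": 0.52, "detected": 0.70},
        (5, 10): {"inspected": 0.68, "detected": 0.82}}
print(cheapest_composition(demo, min_inspected=0.5))
# -> {'fixed_wing': 3, 'quadcopters': 4, 'cost_eur': 160000}
```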
Naturally, our quantitative results are merely illustrative, in the sense that they are sensitive to parameter values such as the size of the area of interest, the airspeed of the UAVs, the detection range of their sensors, etc., all of which were selected largely arbitrarily for the purpose of this proof-of-concept simulation study. Nevertheless, they demonstrate that, should these parameter values be available (i.e., when the mission specifics and the model of drone to be deployed are known), identifying the optimal swarm size and composition is feasible.

8. Human–Machine Interface (HMI) Considerations for the Use Case

In Section 5.3, we identified mission planning and coordination, real-time monitoring of swarm operations, and feedback from the swarm to the human operator as important aspects that the human–machine interface (HMI) needs to account for. The UI used by us is shown in Figure 10, but there are a number of considerations we would like to share with the reader. The features described below correspond to the overarching mission requirements of our use case. These can potentially be extended or adapted to individual mission objectives (such as, e.g., long-term surveillance of a large area vs. a rapid response in a small sub-area), where they are likely to offer advantages if designed and tuned appropriately.

8.1. Mission Planning and Coordination

In our proof-of-concept demo, there are different drone capabilities in various numbers available to the human operator when planning the mission. For instance, via the HMI, the operator needs to specify how many fixed-wing drones will be deployed in a single mission. One related key HMI question is: What is the right kind of user interface element to communicate this number of drones to the user, and how can the user change it?
Currently, these elements are sliders, but whether this is the best solution remains in question (the interface shown in Figure 10 was not designed with the user in mind). User tests, where users experiment with different options, would be well suited to answer this.
Secondly, at the moment, the demo does not include an HMI solution for the user to direct a specific drone to a particular location on a map. This could be implemented in the interface in the case of the polymorphic swarm being deployed into a real application. In other words, instead of abstracting the user out of the more detailed flight pattern planning, the solution would require the user to consider and tag relevant places on the map to help the swarm conduct its mission more effectively.
Thirdly, the current HMI enables the operator to activate or deactivate the lawnmower pattern for the drone flight paths. When it is active (i.e., the lawnmower pattern is on), the drones fly a deterministic pattern for homogeneous coverage of the area. The advantage of this search pattern is that, assuming no drones are lost, no place is left unvisited and the search times can be optimal (depending on the planning of the pattern). If the lawnmower pattern is deactivated, a pseudo-randomly generated biased random walk is flown instead; this pattern is less predictable and cannot guarantee full coverage, as there are no hard guarantees that every part of the area is visited within a certain amount of time.
However, a random walk is resilient against the loss of a drone; one of the downsides of the approach is that the areas covered by the drones can overlap.
Fourthly, the human operators need to plan the operational mission by taking into account the detection and disambiguation capabilities of both the quadcopters and the fixed-wing drones. The planning HMI should enable operators to specify exploration patterns, determine when to deploy the quadcopters for disambiguation, and rank targets according to a precise comprehension of the surrounding circumstances.
Fifthly, in order to use the fixed-wing drones and quadcopters effectively, the human operators must also be aware of their features, capabilities, and limitations. In the final deployment, the HMI should offer extensive training modules, including simulations and tutorials, to acquaint operators with the characteristics of fixed-wing drones and quadcopters. Finally, the human operators need to think about the mission’s financial limitations (see Section 7.4) and how cost-effectively various fleet configurations work. A cost–benefit analysis tool integrated into the user interface would help the operators to evaluate the economic and performance effects of different hybrid fleet configurations.

8.2. Real-Time Monitoring and Feedback

The human operators must understand the drone ranges, event detection probabilities, and the surveyed environment thoroughly to start the mission. To give the operators a clear and simple understanding of these parameters, the provided interface should clearly and visually represent the ranges of the drones, probability functions, and the operating environment.
During the mission, to detect events and monitor the performance of both drone types (fixed-wing and quadcopters) the human operators require up-to-date information on the mission’s relevant factors. These factors include real-time information on the conducted event identification, classification, and both drone types’ operational statuses. To help the human operators make quick decisions, a dashboard-type user interface should show real-time data such as detection timelines, drone locations, drone battery levels, event categorization status, and mission progress.
Furthermore, it is important to notify the human operators of important developments, successful or unsuccessful detections, and the results of the disambiguation. The operators must be made aware of important occurrences or circumstances that call for quick action or judgment. To enable prompt decision making, the interface should have configurable alerts and notifications that notify operators of significant events, changes in the drone’s status, or possible problems.
In response to evolving conditions or new developments during a mission, operators may need to dynamically modify the mission goals and the swarm’s operational plans. From the HMI’s perspective, this requires possibilities for the human operators to make quick changes, such as reorienting the mission’s focus or modifying the deployment strategy of the quadcopters or exploration patterns. To assist operators in interpreting the results, comparing strategies, and optimizing mission parameters in real time, the interface should incorporate data analysis tools and decision-support features.
Lastly, the success of the mission depends on the effective use and management of resources, including the various drone types. The HMI should have tools for managing and monitoring the resources online during the mission. It should show important metrics, such as drone endurance, event frequency, and reserve fleet status for proper resource management to be conducted by the human operators.

9. Conclusions

9.1. Challenges for the Design of Polymorphic Artificial Swarms

From the machine intelligence perspective, a key challenge in the implementation of polymorphic drone swarms is a generalisation of the more fundamental problem underlying any application of collective intelligence design principles: how to generate the desired swarm behaviour at the lowest possible computational cost, using exclusively local information and communication.
As a rule, the most commonly cited artificial swarms (such as those used in aerial light shows) are not “intelligent” at all (nowhere near as capable of adaptive problem solving as, for example, a bee colony); they simply execute a sequence of pre-programmed moves. Other so-called swarms, though more sophisticated in the sense that they can respond to changing conditions in real time, simply fly in formation by following prescribed relative orientation and separation rules, which is more accurately referred to as “flocking”. In addition, researchers have explored the potential of combining swarm approaches with machine learning [57] or optimization techniques (such as, e.g., genetic algorithms [58]) [59]. While showing potential, this path leads further away from how collectives of, e.g., social insects solve problems in the real world.
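To make the distinction with flocking concrete, the sketch below implements the kind of prescribed relative orientation and separation rules referred to above (a generic Boids-style update step). The weights, neighbourhood radius, and time step are assumed values; this is not the control law used in our simulation.

```python
# Minimal Boids-style flocking step: each unit reacts only to neighbours within
# a fixed radius, using prescribed separation, alignment and cohesion rules.
import numpy as np

def flocking_step(pos, vel, radius=50.0, w_sep=1.5, w_ali=1.0, w_coh=0.8, dt=0.1):
    """pos, vel: (N, 2) arrays of positions and velocities; returns updated copies."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        mask = (d > 0) & (d < radius)                     # local neighbourhood only
        if not mask.any():
            continue
        sep = np.sum(pos[i] - pos[mask], axis=0)          # steer away from close neighbours
        ali = np.mean(vel[mask], axis=0) - vel[i]         # match neighbours' heading
        coh = np.mean(pos[mask], axis=0) - pos[i]         # move towards local centre of mass
        new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
    return pos + dt * new_vel, new_vel

# Example: advance a group of 10 units by one time step.
pos = np.random.rand(10, 2) * 1000.0
vel = np.random.randn(10, 2)
pos, vel = flocking_step(pos, vel)
```

Note that each unit uses only local information (its neighbours within the sensing radius), which is also the constraint that a “true” swarm controller must respect.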
By way of contrast, a “true” swarm would rely on self-organisation to make autonomous decisions and implement a collective response that stands the best chance of achieving any number of objectives. Designing a rule set capable of achieving this result is already an ambitious goal when all units are “interchangeable” (i.e., have the same physical and logical characteristics). Generalising to the case in which this is no longer true (e.g., the swarm is composed of units that have different flight characteristics, different payloads, different sensors and actuators, etc.) is even more challenging.
In the example of detection and disambiguation that was used as a proof-of-concept illustration in this paper, we opted for a simple “hard-coded” division of labour: fixed-wing drones are tasked with detection and have the authority to call upon “subordinate” quadcopters to perform a disambiguation mission on possible events. Although fit for purpose, this design is obviously crude, and the long-term goal should be to realise a more flexible form of organisation within the swarm, whereby individual units are not permanently assigned to a single role but are instead predisposed to performing the functions for which they are best equipped, unless circumstances force them to take on some other task (e.g., when something urgent must be performed in response to a stimulus and there is no better-suited unit available nearby). Earlier work by some of the authors considered, e.g., a force-based model [60] to drive the behaviour of individual drones and a stochastic approach [61] to continuously allocate and re-allocate tasks between UAVs, and discussed the design challenges arising from the desire to control a swarm through as few parameters/threshold values as possible [42].
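For concreteness, a simplified sketch of this hard-coded division of labour is given below, assuming a central event log as in Figure 4. The data structures and function names are illustrative, not the actual simulation code.

```python
# Simplified sketch of the hard-coded division of labour described above:
# fixed-wing drones only detect and report, and idle quadcopters are dispatched
# to the nearest unassigned "spotted" event. Names and structures are illustrative.
import math

def report_detection(event_log, event_id, position):
    """Called when a fixed-wing drone's sensor flags a possible event."""
    event_log.setdefault(event_id, {"pos": position, "state": "spotted", "assigned": None})

def dispatch_quadcopters(event_log, quadcopters):
    """Central database step: pair each unassigned spotted event with the nearest idle quadcopter."""
    idle = [q for q in quadcopters if q["task"] is None]
    for event_id, ev in event_log.items():
        if ev["state"] != "spotted" or ev["assigned"] is not None or not idle:
            continue
        nearest = min(idle, key=lambda q: math.dist(q["pos"], ev["pos"]))
        nearest["task"] = event_id        # quadcopter flies out to disambiguate
        ev["assigned"] = nearest["id"]
        idle.remove(nearest)

# Example usage with two quadcopters and one reported detection.
log = {}
quads = [{"id": "q1", "pos": (0.0, 0.0), "task": None},
         {"id": "q2", "pos": (500.0, 500.0), "task": None}]
report_detection(log, "E-1", (450.0, 480.0))
dispatch_quadcopters(log, quads)          # q2 is closer and receives the task
```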
It is anticipated that a combination of signalling, whereby individuals would be “advertising” their capabilities to each other, and threshold-based decision making could provide such flexible resource allocation, as it does in natural swarms. This should be the subject of future work as it bears the promise of a reusable approach to the formation and successful deployment of polymorphic swarms “on-demand” (i.e., notwithstanding the precise characteristics of the units involved).
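A minimal sketch of what such threshold-based allocation could look like is given below, in the spirit of fixed-threshold models of division of labour (cf. [26]). The response function, parameter values, and data layout are illustrative assumptions rather than the algorithm proposed in this paper.

```python
# Sketch of threshold-based task allocation with capability "advertising":
# a unit's (low) threshold for a task effectively advertises a strong capability for it.
import random

def response_probability(stimulus, threshold, steepness=2):
    """Probability that a unit takes on a task, given the task stimulus and its own threshold."""
    return stimulus**steepness / (stimulus**steepness + threshold**steepness)

def allocate(tasks, units):
    """tasks: {task_name: stimulus_level}; units: dicts with per-task thresholds."""
    assignment = {}
    for unit in units:
        for task, stimulus in tasks.items():
            if random.random() < response_probability(stimulus, unit["thresholds"][task]):
                assignment[unit["id"]] = task
                break                     # one task per unit per decision cycle
    return assignment

# Example: a quadcopter with a low "inspect" threshold is far more likely to
# volunteer for disambiguation than a fixed-wing unit with a high one.
units = [
    {"id": "quad-1", "thresholds": {"inspect": 1.0, "patrol": 8.0}},
    {"id": "fw-1",   "thresholds": {"inspect": 9.0, "patrol": 1.0}},
]
print(allocate({"inspect": 5.0, "patrol": 5.0}, units))
```

The same mechanism generalises naturally to heterogeneous units: a unit’s threshold profile encodes what it “advertises” to the rest of the swarm, while the stimulus levels reflect the current needs of the mission.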

9.2. Human Factors Considerations for a Real Deployment

In the future, a real deployment of a polymorphic drone swarm system could, by taking into account the appropriate human factors and HMI design considerations, enhance performance in exploration and surveillance missions while ensuring efficient human–machine cooperation. In addition to the specific human factor aspects mentioned above, the deployment of a polymorphic drone swarm should also consider general human factor topics.
Among general human factor considerations are the following:
  • Skill levels of the human operators
    To guarantee the functioning and safety of the system in real swarming-based operations, the polymorphic swarm system’s human–machine interface should be designed to be easy to use by human operators with a variety of background skill levels. The human–machine interface could also potentially adapt its behaviour based on the skill level of the user in question.
  • Qualifications and experience of the human operators
    Training on swarm behaviour, control techniques, and emergency protocols should be provided for the human operators before real operations. This type of training will ensure that the operators understand the grounds for the swarm’s behaviour, how to command it, and how to act in abnormal (e.g., accident) situations.
  • Human-in-the-Loop (HITL)
    A proper degree of human participation in the decision-making processes of the polymorphic swarm must be established. Routine tasks should be automated by the system, but human intervention should be permitted when needed. A key issue here is when to keep the human in/on/off the loop for the operation of the swarm.
  • Interaction between drone types
    In a polymorphic drone swarm similar to the one described earlier in this paper, a seamless transition from detection to disambiguation depends on effective communication between the fixed-wing drones and quadcopters. To ensure that different drone types cooperate to accomplish the mission objectives, the designed system solution should have a communication module that enables reliable and transparent information exchange between different drone types. The user should be kept informed about the functioning of this module and potential discrepancies.
  • Alerts and notifications
    To inform the human operators of any problems, irregularities, or possible risks in the swarm’s functioning, the human–machine interface should incorporate prompt, clear alerts and notifications. A typical solution here would be a text-type log file of the alerts and notifications. However, a more elaborate solution could incorporate intuitive visualisations of the alerts with clear instructions on how to proceed (a hypothetical example of such a structured alert record is sketched after this list).
  • Legal and ethical perspectives
    Naturally, one must pay attention to legal and ethical concerns when using drone swarms, especially in populated areas. First, one must make sure that the polymorphic drone swarm operations adhere to both national and international laws and to the relevant ethical guidelines. From an ethical perspective, privacy aspects in particular should be considered in civilian applications. In military applications, rules of engagement should also be considered, taking into account the relevant ethical, legal, and cultural factors. Features that encourage ethical use should also be integrated into the final HMI.
  • System customization
    The HMI design could feature an interface adaptable to the unique requirements and preferences of its users. This could include, for instance, implementing feedback mechanisms to collect human operator input and gradually improve the system’s adaptability and usability with respect to particular user preferences.
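As a concrete illustration of the structured alerts mentioned under “Alerts and notifications” above, the sketch below defines a hypothetical alert record. The field names, severity levels, and suggested-action text are assumptions for illustration, not a prescription for a specific system.

```python
# Hypothetical structure for an operator alert; all fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OperatorAlert:
    timestamp: datetime
    severity: str          # e.g., "info", "warning", "critical"
    source: str            # drone id or subsystem raising the alert
    message: str           # human-readable description of the problem
    suggested_action: str  # instruction on how to proceed, shown next to the visualisation

alert = OperatorAlert(
    timestamp=datetime.now(timezone.utc),
    severity="warning",
    source="quad-3",
    message="Battery below 20% while en route to spotted event E-17.",
    suggested_action="Recall quad-3 and dispatch the nearest idle quadcopter.",
)
```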

9.3. Future Work

Without stating specific work packages or planned projects, it should be clear that the results presented in this article are generic and theoretical. We expect that any tailoring of the approach to a specific environment, application, or domain would require corresponding tuning, which will very likely affect performance. Similarly, there is a host of different platforms that are commercially available, all with different specifications and capabilities. Any applied use of our work would have to start by considering the available platforms and payloads from which to create the polymorphic swarm. The values we used for UAV performance and cost were largely educated estimates; future work would consider more realistic values, which would depend on the specifics of the application.
Thus, future work is needed to test the proposed drone solutions through deployment in real-world settings. This will lead to more representative parameters with regard to operational cost. Using these, a cost–benefit analysis should be conducted, as well as a sensitivity analysis regarding changes in cost parameters and the impact of different drone configurations on mission success. With a real use case to study, significant additional future work is needed to test the HMI design with real users.
We hope to be able to corroborate our results through data collected from a real polymorphic swarm, deployed in the manner and to the ends suggested by this article, but this represents a separate and ambitious engineering effort.
Furthermore, while forest fires are increasingly common, it will still be difficult to compare the performance of various swarms and swarm compositions across different deployments.
The first steps towards this are being taken; cf. [62]. For the time being, we present a method that, given correct input values for the various parameters, would return the following:
  • An optimal fleet composition to achieve the best performance within a given budget.
  • The minimal requirements (still in terms of fleet composition but without budget constraints) to achieve a target performance (detection/disambiguation rate).
In the absence of specific values for all parameters (such as drone characteristics and price, sensor range, search area, etc.), we do not claim that our quantitative fleet planning method is immediately applicable for a real deployment.
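Under assumed unit costs and a placeholder performance model, both planning questions listed above reduce to a simple search over fleet compositions, as sketched below. All numerical values and the saturating performance function are illustrative assumptions; in practice, the performance estimate would be replaced by simulation-derived detection/inspection rates such as those in Figure 9.

```python
# Minimal sketch of the two planning questions: best fleet within a budget, and
# cheapest fleet reaching a target performance. All values are placeholders.
from itertools import product

COST_FIXED_WING = 10.0   # assumed relative unit costs
COST_QUADCOPTER = 2.0

def estimated_performance(n_fw, n_quad):
    """Placeholder for the inspected-events fraction, in [0, 1]; mimics diminishing returns."""
    return (1 - 0.7**n_fw) * (1 - 0.6**n_quad)

def fleet_cost(n_fw, n_quad):
    return n_fw * COST_FIXED_WING + n_quad * COST_QUADCOPTER

def best_fleet_within_budget(budget, max_units=20):
    """Composition maximising estimated performance without exceeding the budget."""
    candidates = [(a, b) for a, b in product(range(max_units + 1), repeat=2)
                  if fleet_cost(a, b) <= budget]
    return max(candidates, key=lambda c: estimated_performance(*c))

def cheapest_fleet_for_target(target, max_units=20):
    """Cheapest composition whose estimated performance reaches the target rate."""
    feasible = [(a, b) for a, b in product(range(max_units + 1), repeat=2)
                if estimated_performance(a, b) >= target]
    return min(feasible, key=lambda c: fleet_cost(*c)) if feasible else None

print(best_fleet_within_budget(budget=50.0))
print(cheapest_fleet_for_target(target=0.8))
```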

9.4. Summary

Overall, for polymorphic drone swarms to be successfully integrated into a variety of applications, from monitoring and surveillance to search-and-rescue operations, human factor considerations are essential. Achieving optimal performance and safety in the operation of such swarms requires striking a balance between automation and human control, as well as ensuring that the HMI is user-friendly. This paper has been a step towards articulating the relevant factors that need to be considered in this endeavour.

Author Contributions

Conceptualization, F.S.; methodology, F.S.; software, F.S.; validation, F.S.; formal analysis, F.S.; investigation, F.S., H.K. and H.H.; resources, F.S.; data curation, F.S.; writing—original draft preparation, F.S. and H.K.; writing—review and editing, F.S., H.K. and H.H.; visualization, F.S.; supervision, F.S.; project administration, F.S. and H.K.; funding acquisition, F.S. and H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Research Council of Finland (project grant number 348010) and conducted as part of the Unmanned aerial systems based solutions for real-time management of wildfires (FireMan) project.

Institutional Review Board Statement

Not applicable: no real UAV was used/harmed.

Informed Consent Statement

Not applicable: none of the simulated UAVs are classified as sentient.

Data Availability Statement

The data can be made available to the interested reader.

DURC Statement

The current research is limited to the field of collective intelligence and decision making, which is beneficial to the orchestration of a variety of tasks to be performed by autonomous robots, in particular wildfire detection and monitoring, and does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of the research involving the presented algorithmic framework and confirm that all necessary precautions have been taken to prevent potential misuse. As an ethical responsibility, the authors strictly adhere to relevant national and international laws on DURC. The authors advocate for responsible deployment, ethical considerations, regulatory compliance, and transparent reporting to mitigate misuse risks and foster beneficial outcomes.

Acknowledgments

The authors wish to thank VTT for its support via the Roadmapping for Drone Swarm Research (RoaDS) project.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AA: Automation Awareness
AI: Artificial Intelligence
AIA: Artificial Intelligence Awareness
AUV: Autonomous Underwater Vehicle
ConOps: Concept of Operations
HALE: High-altitude long-endurance
HITL: Human-in-the-Loop
HMI: Human–machine interface
ISR: Intelligence, Surveillance, Reconnaissance
MALE: Medium-altitude long-endurance
ROV: Remotely Operated Underwater Vehicle
SA: Situational Awareness
UAS: Unmanned Aerial System
UAV: Unmanned Aerial Vehicle
UGV: Unmanned Ground Vehicle
UI: User Interface
USV: Unmanned Surface Vehicle
UUV: Unmanned Underwater Vehicle
UxS: Unmanned System
UxV: Unmanned Vehicle
VTOL: Vertical take-off and landing

References

  1. Nichols, R.; Sincavage, S.; Mumm, H.; Lonstein, W. Drone Delivery of CBNRECy–DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD); New Prairie Press: Manhattan, KS, USA, 2022. [Google Scholar]
  2. Sabry, F. Unmanned Ground Vehicle: Advanced Strategies and Applications in Modern Warfare; Military Science; One Billion Knowledgeable: Cleveland, OH, USA, 2024; Available online: https://www.barnesandnoble.com/w/unmanned-ground-vehicle-fouad-sabry/1145838973?ean=2940168125192 (accessed on 13 October 2024).
  3. Nichols, R.; Carter, C.; Drew II, J.; Farcot, M.; Hood, J. Cyber-Human Systems, Space Technologies, and Threats; New Prairie Press: Manhattan, KS, USA, 2023. [Google Scholar]
  4. Savitz, S.; Blickstein, I.; Buryk, P.; DeLuca, P.; Dryden, J.; Mastbaum, J.; Osburg, J.; Navy, U.S.; Potter, A.; Scott, S.; et al. U.S. Navy Employment Options for Unmanned Surface Vehicles (USVs); G—Reference, Information and Interdisciplinary Subjects Series; RAND Corporation: Santa Monica, CA, USA, 2013. [Google Scholar]
  5. Yan, J.; Yang, X.; Zhao, H.; Luo, X.; Guan, X. Autonomous Underwater Vehicles: Localization, Tracking, and Formation; Cognitive Intelligence and Robotics; Springer Nature: Singapore, 2021. [Google Scholar]
  6. O’Rourke, R. Navy Large Unmanned Surface and Undersea Vehicles: Background and Issues for Congress; Congressional Research Service, US Government: Washington, DC, USA, 2019; Available online: https://sgp.fas.org/crs/weapons/R45757.pdf (accessed on 13 October 2024).
  7. Sherman, M.; Shao, S.; Sun, X.; Zheng, J. Counter UAV Swarms: Challenges, Considerations, and Future Directions in UAV Warfare. IEEE Wirel. Commun. 2024; Early Access. Available online: https://ieeexplore.ieee.org/document/10648596 (accessed on 13 October 2024).
  8. Duan, H.; Huo, M.; Fan, Y. From animal collective behaviors to swarm robotic cooperation. Natl. Sci. Rev. 2023, 10, nwad040. [Google Scholar] [CrossRef]
  9. Hendrickson, K. To Human-Machine Teams: Drone Swarms and Human-Machine Teaming Approaches; Technical report; Defense Systems Information Analysis Center (DSIAC): Belcamp, MD, USA, 2018. [Google Scholar]
  10. Hasbach, J.D.; Bennewitz, M. The design of self-organizing human–swarm intelligence. Adapt. Behav. 2022, 30, 361–386. [Google Scholar] [CrossRef]
  11. McEnroe, P.; Wang, S.; Liyanage, M. A Survey on the Convergence of Edge Computing and AI for UAVs. IEEE Xplore 2022, 9, 15435–15459. [Google Scholar]
  12. Peschel, J.M.; Murphy, R.R. On the Human–Machine Interaction of Unmanned Aerial System Mission Specialists. IEEE Trans. Hum.-Mach. Syst. 2013, 43, 53–62. [Google Scholar] [CrossRef]
  13. Tahir, A.; Böling, J.; Haghbayan, M.H.; Toivonen, H.; Plosila, J. Swarms of unmanned aerial vehicles—A survey. J. Ind. Inf. Integr. 2019, 16, 100106. [Google Scholar] [CrossRef]
  14. Allison, A. Aspects of polymorphism in man. Cold Spring Harb Symp Quant Biol. 1955, 20, 239–251; discussion, 251–255. [Google Scholar] [CrossRef]
  15. Richman, A. Evolution of balanced genetic polymorphism. Mol. Ecol. 2000, 9, 1953–1963. [Google Scholar] [CrossRef] [PubMed]
  16. Svensson, E.; Abbott, J.; Gosden, T.; Coreau, A. Female polymorphisms, sexual conflict and limits to speciation processes in animals. Evol. Ecol. 2009, 23, 93–108. [Google Scholar] [CrossRef]
  17. Seeley, T.D. Adaptive significance of the age polyethism schedule in honeybee colonies. Behav. Ecol. Sociobiol. 1982, 11, 287–293. [Google Scholar] [CrossRef]
  18. Chiu, M.C.; Wu, W.J.; Lai, L.C. Carriers and cutters: Size-dependent caste polyethism in the tropical fire ant (Solenopsis geminata). Bull. Entomol. Res. 2020, 110, 388–396. [Google Scholar] [CrossRef]
  19. Scholtz, O.I.; MacLeod, N.; Eggleton, P. Termite soldier defence strategies: A reassessment of Prestwich’s classification and an examination of the evolution of defence morphology using extended eigenshape analyses of head morphology. Zool. J. Linn. Soc. 2008, 153, 631–650. [Google Scholar] [CrossRef]
  20. Lauterbach, A.; Bonime-Blanc, A. The Last Word: Shield AI’s Ryan Tseng on Why the U.S. Military Needs AI Solution for Drone Swarms; Bloomsbury Publishing USA: Dallas, TX, USA, 2023. [Google Scholar]
  21. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  22. Burgués, J.; Marco, S. Environmental chemical sensing using small drones: A review. Sci. Total Environ. 2020, 748, 141172. [Google Scholar] [CrossRef] [PubMed]
  23. Teo, H.C. Closing the Gap Between Research and Field Applications for Multi-UAV Cooperative Missions. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2013. [Google Scholar]
  24. Beierl, C.P.; Tschirley, D.R. Unmanned Tactical Autonomous Control and Collaboration Situation Awareness; Technical Report; Naval Postgraduate School: Monterey, CA, USA, 2017. [Google Scholar]
  25. Karvonen, H.; Honkavaara, E.; Röning, J.; Kramar, V.; Sassi, J. Using a semi-autonomous drone swarm to support wildfire management—A concept of operations development study. In Proceedings of the Human-Computer Interaction International Conference, Copenhagen, Denmark, 23–28 July 2023. [Google Scholar]
  26. Bonabeau, E.; Theraulaz, G.; Deneubourg, J.L. Quantitative Study of the Fixed Threshold Model for the Regulation of Division of Labour in Insect Societies. Proc. Biol. Sci. 1996, 263, 1565–1569. [Google Scholar]
  27. Endsley, M.R. Toward a Theory of Situation Awareness in Dynamic Systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  28. Karvonen, H.; Liinasuo, M.; Lappalainen, J. Assessment of automation awareness. In Proceedings of the Automaatio XXI Proceedings, Helsinki, Finland, 17–18 March 2015. [Google Scholar]
  29. Strauch, B. Automation and Decision Making—Lessons from the Cali Accident. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 1997, 41, 195–199. [Google Scholar] [CrossRef]
  30. Karvonen, H.; Heikkilä, E.; Wahlström, M. Artificial intelligence awareness in work environments. In Proceedings of the Human Work Interaction Design, Espoo, Finland, 20–21 August 2018; Clemmensen, T., Abdelnour-Nocera, J., Roto, V., Barricelli, B., Campos, P., Lopes, A., Gonçalves, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; pp. 175–185. [Google Scholar] [CrossRef]
  31. Laarni, J.; Karvonen, H.; Pakarinen, S.; Torniainen, J. Multitasking and interruption management in control room operator work during simulated accidents. In Proceedings of the Engineering Psychology and Cognitive Ergonomics, Toronto, ON, Canada, 17–22 July 2016; Harris, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 301–310. [Google Scholar] [CrossRef]
  32. Laarni, J. 7—Multitasking and interruption handling in control room operator work. In Human Factors in the Nuclear Industry; Teperi, A.M., Gotcheva, N., Eds.; Woodhead Publishing Series in Energy; Woodhead Publishing: Sawston, UK, 2021; pp. 127–149. [Google Scholar] [CrossRef]
  33. Kolling, A.; Walker, P.; Chakraborty, N.; Sycara, K.; Lewis, M. Human Interaction with Robot Swarms: A Survey. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 9–26. [Google Scholar] [CrossRef]
  34. Cummings, M.; Mastracchio, C.; Thornburg, K.; Mkrtchyan, A. Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control. Interact. Comput. 2013, 25, 34–47. [Google Scholar] [CrossRef]
  35. Savioja, P.; Liinasuo, M.; Koskinen, H. User experience: Does it matter in complex systems? Cogn. Technol. Work 2013, 16, 429–449. [Google Scholar] [CrossRef]
  36. Bevan, N.; Kirakowski, J.; Maissel, J. What is Usability? In Proceedings of the 4th International Conference on HCI, Stuttgart, Germany, 1–6 September 1991; pp. 1–6. [Google Scholar]
  37. Endsley, M.R.; Kiris, E.O. The Out-of-the-Loop Performance Problem and Level of Control in Automation. Hum. Factors 1995, 37, 381–394. [Google Scholar] [CrossRef]
  38. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  39. Sträter, O. Cognition and Safety: An Integrated Approach to Systems Design and Assessment; Ashgate: Farnham, UK, 2005. [Google Scholar]
  40. Goodrich, M.; Cummings, M. Human Factors Perspective on Next Generation Unmanned Aerial Systems. Int. C2 J. 2015, 1, 2405–2423. [Google Scholar] [CrossRef]
  41. Ruff, H.A.; Narayanan, S.; Draper, M.H. Human Interaction with Levels of Automation and Decision-Aid Fidelity in the Supervisory Control of Multiple Simulated Unmanned Air Vehicles. Presence Teleoperators Virtual Environ. 2002, 11, 335–351. [Google Scholar] [CrossRef]
  42. Saffre, F.; Hildmann, H.; Karvonen, H. The Design Challenges of Drone Swarm Control. In Proceedings of the Engineering Psychology and Cognitive Ergonomics, HCII 2021, Online, 24–29 July 2021; Harris, D., Li, W.C., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 408–426. [Google Scholar] [CrossRef]
  43. Lewis, M. Human Interaction with Multiple Remote Robots. Rev. Hum. Factors Ergon. 2013, 9, 131–174. [Google Scholar] [CrossRef]
  44. Miller, C.; Funk, H.; Wu, P.; Goldman, R.; Meisner, J.; Chapman, M. The PlaybookTM Approach to Adaptive Automation. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2005, 49, 15–19. [Google Scholar] [CrossRef]
  45. Miller, C.A.; Parasuraman, R. Beyond Levels of Automation: An Architecture for More Flexible Human-Automation Collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2003, 47, 182–186. [Google Scholar] [CrossRef]
  46. Miller, C.; Parasuraman, R. Designing for Flexible Interaction Between Humans and Automation: Delegation Interfaces for Supervisory Control. Hum. Factors 2007, 49, 57–75. [Google Scholar] [CrossRef]
  47. Hart, S.; Steane, V.; Bullock, S.; Noyes, J. Understanding Human Decision-Making when Controlling UAVs in a Search and Rescue Application. In Proceedings of the Human Interaction & Emerging Technologies (IHIET 2022), Nice, France, 22–24 August 2022; AHFE International: Orlando, FL, USA, 2022; Volume 68. [Google Scholar] [CrossRef]
  48. Bjurling, O.; Arvola, M.; Ziemke, T. Swarms, Teams, or Choirs? Metaphors in Multi-UAV Systems Design; Springer: Cham, Switzerland, 2021; pp. 10–15. [Google Scholar] [CrossRef]
  49. Chen, J.Y.C.; Barnes, M.J.; Harper-Sciarini, M. Supervisory Control of Multiple Robots: Human-Performance Issues and User-Interface Design. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2011, 41, 435–454. [Google Scholar] [CrossRef]
  50. Congressional Research Service. Unmanned Aircraft Systems: Roles, Missions, and Future Concepts; Technical report; Congressional Research Service: Washington, DC, USA, 2022. [Google Scholar]
  51. Gerstein, D.; Leidy, E. Unmanned Aerial Systems Intelligent Swarm Technology; Research Report; RAND Corporation, 2024. Available online: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2300/RRA2380-1/RAND_RRA2380-1.pdf (accessed on 6 September 2024).
  52. Finn, A.; Scheding, S. Developments and Challenges for Autonomous Unmanned Vehicles; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  53. Musić, S.; Hirche, S. Control sharing in human-robot team interaction. Annu. Rev. Control. 2017, 44, 342–354. [Google Scholar] [CrossRef]
  54. Saffre, F.; Hildmann, H.; Karvonen, H.; Lind, T. Monitoring and Cordoning Wildfires with an Autonomous Swarm of Unmanned Aerial Vehicles. Drones 2022, 6, 301. [Google Scholar] [CrossRef]
  55. Saffre, F.; Hildmann, H.; Karvonen, H.; Lind, T. Self-Swarming for Multi-Robot Systems Deployed for Situational Awareness. In New Developments and Environmental Applications of Drones; Lipping, T., Linna, P., Narra, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2022; pp. 51–72. [Google Scholar]
  56. Saffre, F.; Karvonen, H.; Hildmann, H. Wild Swarms: Autonomous Drones for Environmental Monitoring and Protection. In New Developments and Environmental Applications of Drones; Springer: Berlin/Heidelberg, Germany, 2024; pp. 1–32. [Google Scholar] [CrossRef]
  57. Khalil, H.; Rahman, S.U.; Ullah, I.; Khan, I.; Alghadhban, A.J.; Al-Adhaileh, M.H.; Ali, G.; ElAffendi, M. A UAV-Swarm-Communication Model Using a Machine-Learning Approach for Search-and-Rescue Applications. Drones 2022, 6, 372. [Google Scholar] [CrossRef]
  58. Eledlebi, K.; Hildmann, H.; Ruta, D.; Isakovic, A.F. A Hybrid Voronoi Tessellation/Genetic Algorithm Approach for the Deployment of Drone-Based Nodes of a Self-Organizing Wireless Sensor Network (WSN) in Unknown and GPS Denied Environments. Drones 2020, 4, 33. [Google Scholar] [CrossRef]
  59. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  60. Saffre, F.; Hildmann, H.; Anttonen, A. Force-Based Self-Organizing MANET/FANET with a UAV Swarm. Future Internet 2023, 15, 315. [Google Scholar] [CrossRef]
  61. Hildmann, H.; Kovacs, E.; Saffre, F.; Isakovic, A.F. Nature-Inspired Drone Swarming for Real-Time Aerial Data-Collection Under Dynamic Operational Constraints. Drones 2019, 3, 71. [Google Scholar] [CrossRef]
  62. Coffeng, S. Decentralised Drone Swarm for Wildfire Detection and Monitoring. Master’s Thesis, Aalto University, School of Engineering, Espoo, Finland, 2024. [Google Scholar]
Figure 1. Specific instances of the generic use case: (top) detection and subsequent identification of heat signatures of (relatively) static events such as a starting or growing forest fire (as discussed in [54]); (bottom) detection, identification, and possibly tracking of dangerous mammals on the basis of heat signatures, as discussed in [56]. The former simulates the slow evolution of an area phenomenon while the latter is a species moving around but not extending the area (contrary to the former, where size and shape can change over time). The generic use case in this article only considers the detection (spotting) of a possible event by a fixed-wing UAV and the subsequent identification and classification through a rotary UAV; whether the source of the event is mobile or not is not considered.
Figure 2. Events, after occurring, i.e., becoming active (blue arrow), are in one of five states. The transition between these is determined by actions of the different UAV types. See also Figure 3.
Figure 3. The transition between states for events is determined by the actions of the UAVs. Note the uncertainty within the swarm with regard to the actual state of a Spotted event as well as with regard to the actual number of active events. Swarm performance is measured as the number of Missed, Lost(Checked), and Inspected events (green) over the total number of created events (blue).
Figure 4. Simple flow diagrams for all three acting entities in the simulation: fixed-wing UAVs, quadcopters, and the centralized database, where events are logged and from which the UAVs receive their orders. The algorithms discussed in Section 6.4.1 and Section 6.4.2 are used in the green box.
Figure 5. Search patterns and territories for 2 (left) or 3 (right) UAVs. All territories (in either figure) have the same surface. All drones follow the “lawnmower” search pattern, apart from one (right figure, bottom right drone), which follows the “random” variant. Dashed circles indicate the observation radius r, which is used to compute the position of the various waypoints.
Figure 6. Effect of search pattern on detection rate. A total of 1000 realisations per parameter value.
Figure 7. Effect of territoriality on detection rate. Total of 1000 realisations per parameter value.
Figure 8. Performance measures for hybrid swarms of different composition. “Spotted” is the % of all created events that are detected (Spotted), “inspected (once detected)” is the fraction of Spotted events that were inspected (Inspected), and “inspected (overall)” is what we are really after, namely, the fraction of all created events that were also inspected (the value being the product of the other two). Event half-life = 1800 s. Total of 1000 realisations per combination of parameter values.
Figure 9. Sample result of the systematic parameter space exploration, emphasising the law of diminishing returns (e.g., performance quickly plateaus when adding quadcopters to the swarm for any number of fixed-wing drones). Total of 1000 realisations per combination of parameter values; event half-life = 1800 s. Note: “successful inspection” implies that the event was spotted by a fixed-wing UAV then reached by the quadcopter dispatched for confirmation before it terminated.
Figure 10. The UI for the simulation with fixed-wing UAVs (green) and quadcopters (blue and green).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
