Article

SmrtSwarm: A Novel Swarming Model for Real-World Environments

1 Department of Computer Science and Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India
2 Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India
3 School of Earth, Atmospheric and Life Sciences, University of Wollongong, Wollongong 2522, Australia
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Drones 2023, 7(9), 573; https://doi.org/10.3390/drones7090573
Submission received: 8 August 2023 / Revised: 2 September 2023 / Accepted: 5 September 2023 / Published: 8 September 2023

Abstract:
Drone swarms have gained a lot of popularity in recent times because, as a group, drones can perform highly intelligent tasks. Drone swarms are strongly inspired by the flocking behavior of birds, insects, and schools of fish, where all the members work in a coordinated manner to achieve a common goal. Since each drone is an independent entity, automating the control of a swarm is difficult. Previous works propose various swarming models with either centralized or distributed control. With distributed control, each drone makes its own decisions based on a small set of rules to accomplish swarm behavior, whereas in centralized control, one drone acts as the leader, who knows the final destination and the path to follow; it specifies the trajectories and velocities for the rest of the drones. Almost all the work in the area of swarming models follows Reynolds’ model, which has three basic rules. For GPS-aided settings, state-of-the-art proposals are not mature enough to handle complex environments with obstacles where primarily local decisions are taken. We propose a new set of rules and a game-theoretic method to set the values of the hyperparameters to design robust swarming algorithms for such scenarios. Similarly, the area of realistic swarming in GPS-denied environments is very sparse, and no work simultaneously handles obstacles and ensures that the drones stay in a confined zone and move along with the swarm. Our proposed solution SmrtSwarm solves all of these problems. It is the first comprehensive model that enables swarming in all kinds of decentralized environments regardless of GPS signal availability and obstacles. We achieve this by using a stereo camera and a novel algorithm that quickly identifies drones in depth maps and infers their velocities and identities with reference to itself. We implement our algorithms on the Unity gaming engine and study them using exhaustive simulations. We simulate 15-node swarms and observe cohesive swarming behavior without seeing any collisions or drones drifting apart. We also implement our algorithms on a Beaglebone Black board and show that even in a GPS-denied setting, we can sustain a frame rate of 75 FPS, much more than what is required in practical settings.

1. Introduction

In the last couple of years, unmanned aerial vehicles (UAVs) have gained massive attention and are being used in diverse fields ranging from wildlife monitoring to aerial photography and agriculture [1,2,3]. The global commercial drone market alone was USD 19.89 billion in 2022 and is expected to grow at a CAGR (compound annual growth rate) of 13.9% from 2023 to 2030 [4]. Usually, these UAVs or drones are used in groups or swarms because a drone swarm tends to outperform single drones by leveraging their collective intelligence, increased versatility, and higher operational efficiency [5]. The concept of drone swarms is inspired by the flocking behavior of birds, animals, and insects, which exhibits a pattern and a similar general direction of motion for all its members. Broadly speaking, drone swarming can be defined as the coordinated behavior of a group of autonomous drones that work together to achieve a common goal [6,7,8].
One of the earliest and most well-known flocking models was proposed by Reynolds [9]; it is also known as the boids model. It proposes three principles that govern the behavior of individual drones within a group. These rules are as follows: ❶ each drone in the swarm should maintain a minimum distance from its neighbors; ❷ it should align its velocity with the average velocity of its neighbors; ❸ it should move towards the center of mass of its neighbors. Reynolds demonstrated that these three rules controlling individual drone movement could result in complex collective behaviors such as flocking. The main aim of this model is to capture the self-organizing and coordinated motion observed in flocks of birds or schools of fish, where collective behavior emerges from localized interactions. As of today, it serves as the foundational algorithm on which almost all drone swarming algorithms are based [6,8,10].
The Reynolds model is based on a distributed control algorithm, also known as self-organized flocking, where each UAV in the swarm decides its own movement. There are also models based on centralized control. In these models, the movement of UAVs is controlled either by an external agent or a specific drone within the swarm. The latter model is known as a leader–follower swarming model. The self-organizing swarm has the advantage of efficiency in terms of processing time since the work is divided among all the members, whereas the leader–follower structure is simpler and easier to implement and verify [11,12]. The leader drone guides the followers and offers additional control and coordination mechanisms. The follower drones can maintain a fixed distance and relative position with respect to the leader, ensuring that the swarm moves in a coordinated and synchronized manner. However, the problem with centralized control is the single point of failure. Hence, this paper proposes a hybrid model, SmrtSwarm, that combines the leader–follower and self-organized flocking models.
Braga et al. [6] suggest that for a leader–follower-based model, all the follower drones need to follow one more rule along with Reynolds’ flocking rules, i.e., migration, which forces the follower drones to move towards the leader drone. We integrate this behavior into SmrtSwarm. Drone swarms are designed to work in real-world environments where the conditions may be adverse. For example, there may be obstacles, such as buildings, towers, etc., which may block a drone’s path. The Reynolds model does not consider the presence of such obstacles. Hence, we propose including an additional obstacle avoidance rule that suggests alternative paths. However, this obstacle avoidance rule may lead to problems: the entire flock may disintegrate into smaller flocks with no interflock coordination, owing to obstacles. Olfati-Saber [13] identified this problem of fragmentation in flocking [9]. To avoid such a situation, the Olfati-Saber model proposes defining a boundary around the drones. Inspired by this method, we add a confinement rule in our model that forces all the drones to stay within a predefined boundary.
To realize our model, each drone must be aware of its position and of the positions and movements of its neighbors. Therefore, the proposed model works only in a GPS-enabled environment, where each flock member knows the position and velocity of the others. However, when operating in real-world environments such as mountain ranges, caves, and congested urban areas, access to a reliable GPS signal becomes a major hurdle [14,15,16]. In such scenarios, the swarm cannot rely on GPS for navigation. Much work has been carried out on swarming algorithms, but no single framework has been proposed that works in both GPS-aided and GPS-denied environments; our proposed model has this capability.
This paper proposes a computer-vision-based strategy for achieving flocking behavior in GPS-denied environments. We use a vision-based sensor to take pictures of the surrounding area and then analyze them to extract the required information. Previous works used ML-based models for segmenting and processing such images [17]. However, these methods require a significant amount of time and computing resources. Furthermore, many drones cannot afford to run these algorithms on every frame; as a result, we must use or create conventional algorithms instead. Therefore, we propose an image processing algorithm based on depth maps. To the best of our knowledge, no prior work processes depth maps to compute the movement of drones in a swarm. SmrtSwarm proposes a method to compute the depth maps of the images captured by drones and to find the neighboring drones and obstacles along with their distances from a reference drone. We also propose a novel algorithm to track the detected objects (drones and obstacles) over time. The swarming rules can be applied once all the necessary information is obtained. The code uses limited parallel processing to enhance its efficiency.
Our primary contributions in this paper are as follows:
  • We develop an enhanced Reynolds model that incorporates leader–follower behavior. The control is still distributed; however, the leader is a distinguished drone that knows the final destination.
  • We propose new Reynolds-like flocking rules that enable a swarm to navigate through GPS-aided environments containing physical obstacles while maintaining swarm behavior. The total processing time of our model is less than 1 ms on a popular embedded board.
  • We propose new flocking rules for GPS-denied environments as well. We develop a method to process depth maps quickly and process frames in around 13 ms (≈75 fps) on a popular embedded board.
The paper is organized as follows. We discuss the background and related work in Section 2. Section 3 discusses the proposed swarming model. Section 4 shows the experimental results, and we finally conclude the paper in Section 5.

2. Background and Related Work

In this paper, we consider two kinds of scenarios. The first set of scenarios has an available GPS signal, which is arguably the most important input in a drone swarming system. The second set of scenarios does not rely on a GPS signal—they are more suitable for settings where GPS signals are weak, or places where jamming the GPS signal is a real possibility.

2.1. Swarming Models in an Environment with GPS Signals

Drone swarming is primarily inspired by the flocking behavior of birds. In general, flocking is a group behavior observed in birds, fish, and many other animals. It involves the coordinated movement of individuals within a group. To achieve this behavior, previous works propose various swarming models that enable drones in a swarm to communicate with one another and coordinate their movements [6,8,9,10]. These models specify certain rules for all the drones that guide them on how to react to the movement of other drones nearby. This way, each drone contributes to the overall group behavior. In general, the flocking process has five stages (see Figure 1). Each swarm member observes its surroundings and locates all other drones during the initial stage. Following that, it employs a neighbor selection strategy to select a set of neighbors who influence its movement. Every flocking model has a unique neighbor selection technique, such as choosing the k closest drones as neighbors. The drone then detects other obstacles in its vicinity and tracks the obstacles as well as its chosen neighbors. Subsequently, the drone calculates the net force exerted on it by the selected neighbors and obstacles and adjusts its position accordingly. There are various types of flocking behavior and corresponding swarming models. The two broad categories of flocking behavior are self-organized and leader–follower.

2.1.1. Self-Organized Swarming

Self-organized swarming, or decentralized flocking, is distinguished by the lack of explicit leaders within the group [19]. Instead, each group member follows simple principles to adjust its velocity based on its local interactions with other drones in the vicinity. Typically, these principles include maintaining a certain separation distance, aligning with the average direction of nearby drones, and moving toward the group’s center of mass. Global flocking patterns arise from these local interactions. Fish schools and insect swarms exhibit this form of flocking behavior. Various papers have used different approaches to develop self-organized drone swarms [6,8,9,10]. Reynolds gave the first-ever swarming model for self-organized flocking. Reynolds observed the natural flocking behavior and identified three simple rules that define the movement of each swarm member [9]. These rules are as follows:
(i)
Cohesion: Each swarm member must try to travel towards the group’s center. This behavior is achieved by applying an attractive force between each flock member and the group’s center of mass, which pulls the member towards the center (refer to Figure 2a).
(ii)
Separation: Every member must keep a safe distance from its neighbors to prevent collisions. This is achieved by exerting a repulsive force between each flock member and its nearest neighbors (refer to Figure 2b).
(iii)
Alignment: Every member in the swarm should try to match its neighbors’ speed and direction. This behavior is achieved by exerting an attractive force between each flock member and its neighbors. This pushes the member’s velocity closer to the group’s average velocity (refer to Figure 2c).
Figure 2. Reynolds’ flocking principles: (a) cohesion, (b) separation, (c) alignment. x_i and v_i are the position and velocity of the i-th drone, respectively. v_c, v_s, and v_a represent the cohesion, separation, and alignment velocity vectors, respectively. The final velocity of a drone is decided only by the drones inside the circular boundary.
If v_c, v_s, and v_a represent the velocity vectors resulting from the cohesion, separation, and alignment rules, respectively, then the final velocity V_f of a drone after incorporating all three rules is given by Equation (1). Here, r_c, r_s, and r_a are their respective weights in V_f.
V_f = r_c v_c + r_s v_s + r_a v_a    (1)
One point to note here is that the Reynolds model deals in velocities. It computes the velocity of each drone for the next frame based on the flocking rules and inherently exerts the required force to achieve that velocity. We adopt the same approach in the proposed model.
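To make these principles concrete, the following Python sketch shows one common way of turning them into steering vectors and combining them as in Equation (1). It is only an illustration: the function and variable names are ours, and the exact vector definitions used in SmrtSwarm are given later in Section 3.1.4, Section 3.1.5 and Section 3.1.6; here, cohesion and alignment are computed relative to the drone's own position and velocity, which is a common variant.

```python
import numpy as np

def cohesion(x_i, neighbor_positions):
    # Steer towards the center of mass of the neighbors.
    return np.mean(neighbor_positions, axis=0) - x_i

def separation(x_i, neighbor_positions):
    # Repel from each neighbor; the closer the neighbor, the stronger the push.
    v_s = np.zeros(3)
    for x_j in neighbor_positions:
        diff = x_i - x_j
        d2 = float(np.dot(diff, diff))
        if d2 > 1e-9:
            v_s += diff / d2
    return v_s

def alignment(v_i, neighbor_velocities):
    # Match the average velocity of the neighbors.
    return np.mean(neighbor_velocities, axis=0) - v_i

def reynolds_velocity(x_i, v_i, neighbor_positions, neighbor_velocities,
                      r_c=1.0, r_s=1.0, r_a=1.0):
    # Equation (1): V_f = r_c v_c + r_s v_s + r_a v_a
    return (r_c * cohesion(x_i, neighbor_positions)
            + r_s * separation(x_i, neighbor_positions)
            + r_a * alignment(v_i, neighbor_velocities))
```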
Eversham et al. [8] analyzed the classic Reynolds flocking model in detail and described the impact of the individual parameters on the observed flock behavior. Blomkvist et al. [10] proposed to use the Reynolds model to model flocking behavior in the case of prey escaping a predator attack. However, these models work in a very constrained environment with no obstacles. Braga et al. [6] considered the presence of obstacles and proposed an obstacle avoidance rule for swarm members. To detect obstacles, they used distance-based sensors. All these models rely on communication among drones, which may not be possible in real-world environments. Our proposed approach SmrtSwarm considers these real-world constraints and provides a robust solution that requires very little communication between drones. Communication is required only in a GPS-aided environment, where each drone broadcasts its position; no communication is required in a GPS-denied environment.

2.1.2. Leader–Follower Swarming

In leader–follower swarming, one or more group members undertake the role of a leader, while the remaining group members serve as followers [20]. The leaders determine the direction and pace of the flock, whereas the followers adjust their movements to maintain a certain distance or formation relative to the leaders. This flocking is commonly observed in avian colonies, where one or a few birds take the lead, and the remainder follow their movements. Bhowmick et al. [21] proposed a model with a leader–follower architecture with more than one leader; however, that number is fixed. They demonstrated how each member tends to move towards the center of the flock without colliding and still remains in the flock. However, it only operates in two dimensions and does not account for obstacles. Our proposed model SmrtSwarm works in a 3D space, even with obstacles.
Walker et al. [22] also proposed a leader–follower-based swarming model that considers multileader systems. However, the leaders are chosen dynamically during flight. Humans are needed to control all the leaders. If the swarm divides itself into clusters, each with a leader, and they become segregated, the operator must manually bring the leaders closer together each time this happens. This can occur frequently in an environment with obstacles. The SmrtSwarm model addresses this issue by defining a confinement area around the leader and adding additional forces. Unlike the other two models, Zheng et al. [23] proposed a flocking method with a single leader only. They also considered privacy concerns, such as hiding the leader if there is an adversary. However, their model does not define how to identify an adversary in a flock—its location or identity.
Reza Olfati-Saber [13] pointed out one disadvantage of the Reynolds flocking model: the swarm can break into fragments during flight. Hence, the paper presented a method to make the swarm resemble an α-lattice, ensuring that no fragments are formed. However, the leader in their model is virtual and can change at any time during the flight. This requires extra computation; moreover, the approach does not scale to environments where GPS is unavailable. Our paper presents a simple yet effective algorithm that scales to environments where GPS is an issue.

2.2. Swarming in a GPS-Denied Environment

As already discussed, most proposed models rely on GPS for location and velocity information. But relying solely on GPS for swarm navigation and coordination can pose challenges in real-world environments. GPS signals can be disrupted or lost due to various factors such as signal jamming, multipath interference, and natural obstructions such as mountains, trees, and buildings. In such scenarios, swarms that heavily depend on GPS can face serious performance issues and may also suffer from complete failure [24]. To address this challenge, swarms must be designed to be more robust and resilient to GPS unavailability.
The drones thus need distance-sensing hardware [22,24]. Distance sensors are typically limited in range and accuracy. They also have reliability challenges while navigating in complex environments. In the natural world, birds and fish rely on their sense of vision for perceiving distance [25]. A stereo camera is the most commonly used vision-based sensor for providing high-resolution images of the drone’s surroundings and enabling it to recognize neighboring drones and obstacles. We are thus motivated to use such stereo cameras, inspired by the natural world.
Previous works that deploy vision-based sensors in drone swarms use large computational neural networks to extract and use the information provided by the sensors [2,3,26]. Even so, state-of-the-art implementations mostly derive position information from such sensors. They seldom obtain good-quality velocity information, which the Reynolds model requires. References such as [2,3] offer alternatives to the Reynolds model by employing a rigorous, mathematically derived flocking algorithm that is based on Laplace’s equation—it relies on large convolutional neural networks (CNNs) for navigation in environments with obstacles.
Most prior works in this area use machine learning methods to process images from a vision camera, or use other sensors to detect distances and track other agents or obstacles; both options impose a significant overhead on a drone, either through multiple sensors or through heavy computation [27]. The papers that do use depth maps target specific applications such as flood-level monitoring or process depth maps with Bayesian techniques [28,29]. Our SmrtSwarm provides an efficient method of processing images, without any extra sensors or machine learning methods, that yields the distances of nearby objects; in addition to detecting the objects, it also tracks them without requiring any extra hardware or software. A brief comparison of related work is shown in Table 1. The summary of Section 2 is given in Figure 3.

3. Materials and Methods

3.1. SmrtSwarm in GPS-Aided Environments

Our swarming model is based on the conventional Reynolds model and incorporates leader–follower behavior. In our implementation, each drone in the swarm considers all other drones to be its relevant neighbors, even though this is not strictly necessary in larger settings. To improve the swarm’s coordination and robustness, we suggest a few more novel swarming rules. The proposed rules are named ❶ migration, ❷ obstacle avoidance, and ❸ confinement. Note that when working in a GPS-aided setting, every swarm member is aware of its own position, velocity, and tag, which are broadcast to the other members. All swarming rules in the GPS-aided setting rely on this information.
Finally, the weighted sum of all the vectors generated by the newly introduced rules and the fundamental Reynolds rules are used to calculate the final velocity assigned to the drone.
The description, implementation, and mathematical representation of the rules are shown next. The mathematical representation is designed for a drone swarm having n + 1 drones, where one drone is the leader and the remaining n drones are its followers. The mathematical notations are shown in Table 2.

3.1.1. New Rule: Migration Rule

The integration of the migration rule within SmrtSwarm enhances the functionality of our leader–follower-based model. In such models, the leader drone assumes complete trust from the follower drones, compelling them to faithfully adhere to its chosen path [1,6,9]. By introducing this novel rule, we address the challenges associated with coordinating a cohesive and goal-oriented flock.
The essence of the migration rule lies in its ability to facilitate the migration of follower drones towards the leader drone, thereby ensuring synchronized movement within the swarm (see Figure 4a). The fundamental objective is to eliminate deviations from the intended goal, as only the leader possesses the knowledge of the optimal route required to reach the destination. This strategic alignment guarantees that each member of the flock remains focused and informed throughout the journey.
The migration vector is the directional vector from a given drone to the leader drone, which can be calculated by subtracting the position vectors of the respective drones. The rule is mathematically represented in Equation (2).
v_m = x_L − x_i    (2)

3.1.2. New Rule: Obstacle Avoidance Rule

In real-world scenarios, the presence of obstacles, such as trees, buildings, poles, and other objects, poses a significant challenge for drone swarms [32,33,34]. To tackle this challenge, we propose the inclusion of a new rule, the obstacle avoidance rule, in the SmrtSwarm model. Fundamental to our approach is the deployment of distance sensors [35] on each drone within the swarm. These sensors can use either vision-based or infrared (IR) technology, providing the ability to detect obstacles within the environment. We carefully select and integrate these sensors, ensuring their suitability for obstacle detection tasks and their seamless integration with the overall swarm system.
We devise an obstacle avoidance rule that guides drones in creating new flight paths devoid of obstacles. Central to this rule is the fundamental directive for each member of the swarm to navigate in a direction away from the detected obstacle (refer to Figure 4b). However, we move beyond mere directional guidance. To enhance our obstacle avoidance strategy, we design the magnitude of the repulsive force in proportion to the inverse distance between the drone and the obstacle. This approach ensures that the repulsive force exerted along the line connecting the drone’s center and the obstacle increases as the proximity to the obstacle decreases. This magnitude adjustment maximizes the likelihood of successfully steering the drones away from potential collisions and obstructions.
Our approach to obstacle detection and avoidance showcases our awareness of the real-world challenges faced by drone swarms. By integrating sensors and formulating an effective obstacle avoidance rule, we aim to ensure the safe navigation and successful completion of the swarm’s mission even in the presence of obstacles.
Equation (3) shows the mathematical representation of this rule. It takes multiple obstacles into account.
v_oa = Σ_{j=1}^{n} ( z_j / |x_i − z_j| )    (3)

3.1.3. New Rule: Confinement Rule

The obstacle avoidance approach described in Section 3.1.2 may introduce potential issues where certain members of the swarm diverge from the rest of the group while avoiding obstacles. Furthermore, follower drones may surpass leader drones due to prolonged exposure to these repulsive forces. To mitigate these concerns, we improve the suggested model by drawing ideas from the Olfati-Saber flocking model [13].
The Olfati-Saber model introduces the concept of a confinement area, which acts as a protective boundary surrounding the swarm, ensuring that no member, or drone, ventures outside of it. In our approach, we adopt a similar concept by defining a confined area around the leader drone. This confinement area serves as a virtual enclosure, preventing any subset of the flock from detaching or straying away from the main group (see Figure 4c).
According to the confinement rule, if any drone attempts to move outside the confinement area, a force is exerted to redirect it toward the leader drone. This redirection can be determined by subtracting the position vectors of the leader and the respective drone. The magnitude of this force is directly proportional to the extent to which the drone has deviated from the restricted area.
This confinement rule generates a nonzero vector only when a drone is outside the confinement zone, which is represented by a sphere with a radius of δ centered around the leader drone. When a member drone strays beyond this region, the confinement force acts to guide it back toward the leader, ensuring the cohesion and integrity of the flock.
By incorporating this confinement rule, we address the potential problem of swarm detachment and promote a collective behavior that preserves the cohesion and interdependence of the flock. The confinement force acts as a guiding mechanism, reinforcing the importance of staying within the predefined confinement area. This addition improves the overall efficiency and coordination of the swarm, ensuring that no member drone deviates too far from the rest of the group.
Equation (4) provides the mathematical representation of the rule.
v_ct = ( (x_i − x_L) · max(0, |x_i − x_L| − δ) ) / |x_i − x_L|    (4)
Up to this point, we have discussed the proposed flocking rules. Since SmrtSwarm combines these proposed rules with the basic Reynolds flocking rules, and Section 2.1.1 only provides a brief description of the basic flocking rules, we provide their implementation details here.

3.1.4. Old Rule: Cohesion Rule

The cohesion vector (part of the original Reynolds model) tries to move the drone towards the swarm’s centroid. Therefore, we need a vector pointing in that direction. We calculate this vector by averaging the neighboring drones’ position vectors. Equation (5) provides the mathematical representation for this rule.
v_c = Σ_{j=1}^{n} ( x_j / n )    (5)

3.1.5. Old Rule: Separation Rule

The separation vector tries to push the drone away from the neighboring drones; hence, a repulsive force needs to act between them along the line joining them. The direction of this force is from the neighbor towards the reference drone, and the magnitude of it is inversely proportional to the distance between the reference drone and the neighboring drone. The rule can be mathematically represented as Equation (6).
v_s = Σ_{j=1}^{n} ( x_i − x_j ) / |x_i − x_j|^2    (6)

3.1.6. Old Rule: Alignment Rule

The alignment vector tries to move the drone in the direction of the movement of the swarm. We can obtain the direction by taking the average of the velocities of all the drones in the swarm and then moving the reference drone with that velocity. Equation (7) provides the mathematical representation for this rule.
v_a = Σ_{j=1}^{n} ( v_j / n )    (7)

3.1.7. The Final Velocity

The final velocity V_f of a drone after incorporating all these rules is shown in Equation (8). Here, r_m, r_oa, r_ct, r_c, r_s, and r_a are the respective weights of these rules used in calculating V_f. We are basically computing a linear weighted sum. The summary of Section 3.1 is given in Figure 5.
V_f = r_c v_c + r_s v_s + r_a v_a + r_m v_m + r_oa v_oa + r_ct v_ct    (8)
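For completeness, the sketch below extends the earlier one (after Equation (1)) with the three new rules and the weighted sum of Equation (8). Again, the names are illustrative rather than taken from our released code; the obstacle avoidance and confinement vectors follow the prose descriptions above (repulsion away from each obstacle with magnitude growing as the distance shrinks, and a pull back towards the leader proportional to how far the drone has strayed beyond the radius δ).

```python
import numpy as np

def migration(x_i, x_L):
    # Equation (2): vector from the follower drone towards the leader.
    return x_L - x_i

def obstacle_avoidance(x_i, obstacle_positions):
    # Repulsion away from every detected obstacle, stronger when closer
    # (prose description of Equation (3)).
    v_oa = np.zeros(3)
    for z_j in obstacle_positions:
        diff = x_i - z_j                       # points away from the obstacle
        d = float(np.linalg.norm(diff))
        if d > 1e-6:
            v_oa += diff / (d * d)             # unit direction scaled by 1/d
    return v_oa

def confinement(x_i, x_L, delta):
    # Equation (4): nonzero only outside the sphere of radius delta around the
    # leader; pushes the drone back towards the leader.
    d = float(np.linalg.norm(x_i - x_L))
    if d <= delta or d < 1e-6:
        return np.zeros(3)
    return (x_L - x_i) * (d - delta) / d

def final_velocity(x_i, v_i, x_L, neighbor_pos, neighbor_vel, obstacle_pos,
                   delta, weights):
    # Equation (8): linear weighted sum of all six rule vectors; cohesion,
    # separation, and alignment are the helpers from the earlier sketch.
    r_c, r_s, r_a, r_m, r_oa, r_ct = weights
    return (r_c * cohesion(x_i, neighbor_pos)
            + r_s * separation(x_i, neighbor_pos)
            + r_a * alignment(v_i, neighbor_vel)
            + r_m * migration(x_i, x_L)
            + r_oa * obstacle_avoidance(x_i, obstacle_pos)
            + r_ct * confinement(x_i, x_L, delta))
```

With the weights reported later in Section 4.2 (for example, 〈80, 1, 1, 1, 25, 5〉 in a GPS-aided environment), cohesion dominates the sum, while the smaller obstacle avoidance and confinement terms act as corrective forces.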

3.2. SmrtSwarm in GPS-Denied Environments

The model we propose in Section 3.1 requires the presence of GPS while swarming, as well as some communication regarding the drones’ coordinates. The GPS signal helps each drone communicate its tag, velocity, and position to the other members of the swarm so that every drone can decide its motion accordingly. But GPS signal reception is not always possible in the real world, such as in indoor environments, dense urban areas, dense forests, security-sensitive environments, and places where there is a possibility of deliberate signal jamming [14,15,16]. Hence, we need to adapt SmrtSwarm for GPS-denied regions. For this, we use a computer-vision-based approach. We deploy a stereo camera on the drones to capture a specialized image of the environment that we shall refer to as the depth map. We then need to use a lightweight image processing approach to obtain the required information about the drones present in the field of view. We need to take into account that ML-based techniques are computationally expensive (refer to Section 2). Hence, we need to look at either ultrafast ML techniques or fast conventional algorithms. We were not able to find good candidate algorithms in the former class; hence, we opted for the latter class (i.e., conventional computer vision (CV) algorithms).
A depth map provides a pixelwise estimation of the depth or distance of objects from a particular viewpoint. It is typically represented as a 2D image, where each pixel corresponds to a depth value indicating the distance from the camera or the viewpoint. We use a bespoke algorithm on the produced depth map to obtain the neighboring drones’ positions, velocities, and tags.

3.2.1. Object Detection in the Depth Map

We begin by computing the depth map of the current scene. One example of a depth map is shown in Figure 6. We make the following observations from the depth map:
A depth map is a 2D matrix, where the value in each cell represents the depth of the relevant part of the object corresponding to it.
The objects seen on the depth map form a cluster of pixels with similar pixel values. On the boundaries of these clusters, we can find a sudden change in pixel values.
The values of the pixels belonging to objects far away from the reference point are very high.
We exploit these findings to detect the objects in the depth map. For the purpose of describing the object detection algorithm, we take one drone from the swarm as a reference drone. The suggested approach consists of two steps: (1) detecting objects in the depth map, and (2) determining if the identified objects are drones or obstacles. For the first step, we propose a depth-first search (DFS)-based approach. We observe that all objects that are in front of the reference drone form clusters in the 2D matrix corresponding to the depth map. To identify all the pixels that belong to a particular cluster (object), we traverse the 2D matrix (represented as a graph) using DFS. We ignore objects (clusters) that are far away from the reference point because they will not influence the movement of the drone. We also ignore very small clusters because they most likely do not correspond to drones. Both these behaviors are controlled by threshold parameters. The exact values of these two thresholds are given in Section 4.3.2. We start by identifying each cluster that is present in the depth map. These clusters might represent drones as well as obstacles; therefore, the next step is to determine whether they are drones or regular obstacles. They have different characteristics, as described in Section 4.3.2.
The identification of objects is not enough because the swarming model requires a few more details, such as the position and depth for applying the proposed rules. The drone’s depth is determined by taking the pixel with the lowest value in the cluster that represents it. The component of the drone closest to the reference drone corresponds to the lowest pixel value. Additionally, to find its position, we compute the center of the drone by averaging the coordinates of all the pixels forming its cluster. For obstacles, we create a bounding box—a rectangular shape encompassing all the pixels in the cluster. This rectangle provides us with the obstacle’s dimensions, and its depth is determined by the pixel with the lowest value within the cluster. By incorporating these data, we exert an obstacle avoidance force on the reference drone, ensuring that it steers clear of the surrounding obstacle.
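The following sketch illustrates the two steps on a depth map stored as a NumPy array. The threshold constants, the 4-connected flood fill, and the depth-similarity test used to delimit cluster boundaries are illustrative assumptions (the threshold values we actually use are reported in Section 4.3.2), and all names are ours. An explicit stack is used instead of recursion so that large clusters do not overflow the call stack.

```python
import numpy as np

FAR_DEPTH = 35.0         # ignore anything farther than this (assumed, in meters)
MIN_CLUSTER = 8          # clusters smaller than this are treated as noise
OBSTACLE_CLUSTER = 2000  # clusters at least this large are treated as obstacles
SIMILARITY = 1.0         # max depth gap between neighboring pixels of one object

def detect_objects(depth):
    """Cluster a 2D depth map (H x W) with an iterative DFS and classify clusters."""
    h, w = depth.shape
    visited = np.zeros((h, w), dtype=bool)
    drones, obstacles = [], []
    for sr in range(h):
        for sc in range(w):
            if visited[sr, sc] or depth[sr, sc] >= FAR_DEPTH:
                continue
            stack, pixels = [(sr, sc)], []
            visited[sr, sc] = True
            while stack:                       # DFS over similar-depth neighbors
                r, c = stack.pop()
                pixels.append((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < h and 0 <= nc < w and not visited[nr, nc]
                            and depth[nr, nc] < FAR_DEPTH
                            and abs(depth[nr, nc] - depth[r, c]) < SIMILARITY):
                        visited[nr, nc] = True
                        stack.append((nr, nc))
            if len(pixels) < MIN_CLUSTER:      # random noise, discard
                continue
            rows, cols = zip(*pixels)
            closest = min(depth[r, c] for r, c in pixels)  # depth of nearest part
            if len(pixels) >= OBSTACLE_CLUSTER:
                bbox = (min(rows), min(cols), max(rows), max(cols))
                obstacles.append({"bbox": bbox, "depth": closest})
            else:
                centre = (sum(rows) / len(pixels), sum(cols) / len(pixels))
                drones.append({"centre": centre, "depth": closest})
    return drones, obstacles
```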

3.2.2. Object Tracking

To calculate the alignment and confinement vectors, we need the velocities of the neighboring drones; i.e., we need to know where each drone was in the previous frame and where it is in the current frame. The object tracking algorithm has only a single step: tagging. To track a drone, we associate it with a unique tag. Tagging also handles the problem of identifying objects that leave or newly enter the field of view (FoV) of a drone.
The tagging of the drones uses the insight that because the drones move slowly, the difference between their positions in successive frames will be less than a threshold (the exact value is mentioned in Section 4.3.2). The threshold depends on the cluster size representing the drone in the depth map. Since every drone in the swarm is of the same size, the neighbor drone closest to the reference drone has a larger cluster in the depth map and will move more than the others. Therefore, the threshold for movement in successive frames for this drone should reflect this fact. All of the drones’ positions within the FoV are kept in lists. We maintain two such lists, one for the previous frame and the other for the current frame. While traversing these lists, the following cases may happen:
A pair of positions in the list for the previous and current frames exists such that the difference between them is less than the threshold. Then we conclude that these are the positions of the same drone, and the drone is given the same tag in the current frame as it was in the previous frame.
If a position in the list for the previous frame exists for which we are not able to find such a match (described in point 1) in the list for the current frame, then that position refers to a drone that recently left the FoV, and we do not issue a tag. In other words, if a tag found in the previous frame is not present in the current frame, then that drone has left the FoV of the reference drone.
If a position in the list of the current frame exists for which we cannot find such a match in the list of the previous frame, then that position refers to a newly appeared drone in the FoV. It needs to be assigned a new tag.
The tags are initially assigned in ascending order. As more drones continue to enter the field of view, we increment a counter and assign the new value as the new drone’s tag. In this way, we track the drones. We assume that there are no moving objects in the environments except for swarm members. In other words, this paper only considers static obstacles, which do not need to be tracked. After the completion of the steps mentioned in Section 3.2.1 and Section 3.2.2 for object detection and tracking, we gather all the required information about the neighborhood and obstacles in the environment.
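A sketch of the tagging step is given below. It assumes each detected drone is represented by its cluster centre and that a per-drone movement threshold has already been derived from its cluster size; the function and variable names are illustrative.

```python
import math

def assign_tags(prev_positions, prev_tags, curr_positions, next_tag, thresholds):
    """Match detections across consecutive frames and propagate their tags.

    prev_positions, prev_tags : centres and tags from the previous frame
    curr_positions            : centres detected in the current frame
    next_tag                  : counter holding the next unused tag
    thresholds                : per-drone movement thresholds (larger for closer,
                                i.e., larger, clusters)
    """
    curr_tags = [None] * len(curr_positions)
    matched_prev = set()
    for i, pos in enumerate(curr_positions):
        for j, prev in enumerate(prev_positions):
            if j in matched_prev:
                continue
            if math.dist(pos, prev) < thresholds[j]:
                curr_tags[i] = prev_tags[j]      # case 1: same drone as before
                matched_prev.add(j)
                break
        if curr_tags[i] is None:                 # case 3: newly appeared drone
            curr_tags[i] = next_tag
            next_tag += 1
    # Case 2: previous-frame drones with no match have left the FoV; their tags
    # simply do not reappear in curr_tags.
    return curr_tags, next_tag
```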
We still need a notional leader drone here. It is the drone that is at the front of the swarm and cannot see any drones in its front-facing camera (towards the direction of motion). It basically knows where to go. It either has a GPS or, using visual guidance, it knows the path. The rest either implicitly follow it or have their own guidance system. This means that the leader drone has special hardware that allows it to set the course; the rest follow the leader. For instance, if the drones are tracking wildlife and they can see a pack of deer, then they can all decide (independent of each other) to follow the pack and not the leader. All the swarming rules are still required to ensure that they behave as a swarm. It turns out that we need to make some alterations to the Reynolds rules and also propose a new rule for this setting.

3.2.3. Flocking Rules in GPS-Denied Environments

All the flocking rules proposed for a GPS-aided environment in Section 3.1 are applicable to this case except one—the migration rule. Since all drones have identical physical characteristics and, as a result, have the same kind of depth map projection, it is impossible to distinguish between a leader and a follower by looking at the depth map. This is why we skip the migration rule, which makes follower drones move towards the leader.
All the rules, which use only the position information of the drones forming the swarm and the obstacles in the environment, are implemented in the same way as mentioned for GPS-aided environments in Section 3.1 (refer to Figure 7). The rules falling into this category are the cohesion, separation, and obstacle avoidance rules. The alignment and confinement rules use velocity and tags, respectively. The algorithmic implementation of these rules in a GPS-denied environment needs to slightly change. This is because finding the velocities and tags is more complex than deducing the positions.

Alignment Rule for GPS-Denied Environments

According to this rule, a drone needs to move in the direction given by the average velocity of the drones present in its field of view. We store the position vectors of all the drones in the FoV for the previous and current frames in two lists. We store the tags assigned to the drones for both frames, too. To find the velocity of a drone, we subtract its position vectors in the current and previous frames. We can only find the velocity of a drone that is present in both the current and previous frames; drones that newly appeared in the FoV or recently left the FoV will not contribute. We then need to move the reference drone in the same direction as the mean velocity (a vector). The complete flow is shown in Algorithm 1.
Algorithm 1: Alignment.
Function Alignment():
    v_a ← 0
    counter ← 0
    valNeighbours ← 0                        /* Initialize the count of valid neighbors */
    while counter < currPositions.size do
        /* Get the current position and tag of the drone */
        currPos ← currPositions[counter]
        currTag ← currTags[counter]
        i ← 0                                /* Initialize the inner loop counter */
        while i < prevTags.size do
            if currTag == prevTags[i] then
                prevPos ← prevPositions[i]   /* Get the previous position of the corresponding drone */
                v_a ← v_a + (currPos − prevPos)  /* Add the difference of positions, i.e., the velocity in a unit time frame, to the alignment vector */
                valNeighbours ← valNeighbours + 1
                break
            end
            i ← i + 1                        /* Move to the next tag in the prevTags list */
        end
        counter ← counter + 1                /* Move to the next drone */
    end
    v_a ← v_a / valNeighbours                /* Normalize the alignment vector */
    return v_a

Confinement Rule for GPS-Denied Environments

In a GPS-denied setting, a drone is said to be out of the confined area if no other drones are within its field of view. This can easily be found because we maintain a list of position vectors of all such drones (refer to Section 3.2.3). If a drone is outside the confined area, we assign it a velocity in the opposite direction of its current velocity v_t, known as the confinement velocity. The drone will continue in this direction until it detects a neighbor within its field of view. We cannot, however, just let the drone continue because it may not find any drones, even on this route. To prevent this, we limit the number of frames (κ) for which the drone can move in the opposite direction. If, within this limit, the drone does not encounter any drone within its field of view, it returns to its previous direction, i.e., the direction of v_t, and moves 2κ steps, then it moves 4κ steps in the opposite direction, and so on and so forth, until it sees other drones. Algorithm 2 shows the complete implementation. In our exhaustive simulations, we never had a case where a drone became lost, even in an environment with obstacles.
Algorithm 2: Confinement.
Function Confinement():
    v_ct ← 0
    // If there are neighboring drones in the FoV
    if currPositions.size > 0 then
        confinementCounter ← 0
        limit ← κ
    else
        /* Set the confinement vector opposite to the previous velocity */
        v_ct ← (prevVelocity.x, prevVelocity.y, −1 · prevVelocity.z)
        confinementCounter ← confinementCounter + 1
        /* If the confinement counter exceeds the limit */
        if confinementCounter > limit then
            limit ← limit × 2                /* Update the limit */
        end
    end
    return v_ct

3.3. Workflow of the Proposed Model

Figure 8 shows the complete workflow of S m r t S w a r m in a GPS-denied environment. For each frame, we compute a depth map, detect all the objects within it, and then compute their relative positions. We track the drones using information from the previous frame, and then compute the velocity of all the drones, and their tags. This information is used to compute all the velocities (yielded by the different rules), and the final target velocity is a weighted sum of all the individual velocities (similar to Equation (8)). The summary of Section 3.2 is given in Figure 9.

4. Results and Analysis

4.1. Simulation Setup

We implement SmrtSwarm on Unity, a popular cross-platform game development engine. It has many features and prebuilt elements for creating custom environments. We added our C# code for simulating a drone swarm [37,38] to it. We also experimented with the Unreal engine [39] but found it to be far slower than Unity, especially when the number of drones in the flock is increased. Other than visual effects, it was not adding any additional value. Hence, we opted for Unity version 2020.3.40f1 for simulating our system (similar to [37]). We use C# version 11.0 [40] for implementing the algorithms. A few simulation environments were created using Unity assets, and a few were purchased from the Unity store, which contains urban settings with both low- and high-rise towers and buildings [41,42]. The configuration details of the simulator are shown in Table 3. The simulated scenes and the drone placement are shown in Figure 10. In the literature on drones, using simulators to study the behavior of large drone swarms is the standard practice [2,3]. Given that we do not have any direct competitor that implements swarming with obstacle avoidance in both GPS-aided and GPS-denied environments (see Table 1), we did not perceive the need to implement any state-of-the-art algorithm and compare it against ours.

4.2. Setting the Hyperparameters (Coefficients in the Equations)

Recall that in Equation (8) in Section 3, we assigned weights to each component velocity vector for computing the final velocity vector. In this section, we shall evaluate the impact of these weights on drone swarming and find their best possible values.
To find the optimum value of each hyperparameter, we first assigned equal weights to each hyperparameter and then observed which force is acting more aggressively and which is not. We then fixed all hyperparameters except one and tried to discover its optimum value (creating a Nash equilibrium). For instance, when determining r s , we tried to determine how quickly drones are moving apart from one another. The optimum value was achieved when they moved at such a speed that they did not collide, yet still remained in the swarm. We then set this hyperparameter around that value and modified other hyperparameters one by one. We are basically computing a Nash equilibrium here, where the parameters are the players and the performance is the utility function.
We ran this experiment several times and tried many different parameter perturbations. For each experiment, we assessed its performance, which is defined as follows. It is a tuple comprising an integer (number of collisions or #collisions) and a Boolean value (whether any drone escaped the confinement zone).
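One way to mechanize this procedure is a coordinate-wise search over the weights, sketched below. The `simulate` callback, the candidate value lists, and the lexicographic comparison of the performance tuple are our own illustrative assumptions; in practice, we tuned the weights by observing the simulations as described above.

```python
def tune_weights(simulate, names, initial, candidates, rounds=3):
    """Coordinate-wise search for the rule weights (illustrative sketch).

    simulate(weights) runs one simulation and returns the performance tuple
    (#collisions, any_drone_escaped); lower values are better on both entries.
    """
    weights = dict(initial)                  # e.g., all weights equal to start with
    best = simulate(weights)
    for _ in range(rounds):                  # keep sweeping until values stabilize
        for name in names:                   # fix all weights except one
            for value in candidates[name]:
                trial = dict(weights, **{name: value})
                perf = simulate(trial)
                # Prefer runs with no escapes, then fewer collisions.
                if (perf[1], perf[0]) < (best[1], best[0]):
                    weights, best = trial, perf
    return weights, best
```

After convergence, each weight is a best response to the others being held fixed, which is the Nash-equilibrium view described above.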
Table 4 and Table 5 show the obtained results for a scene with an enabled and disabled GPS signal, respectively. Note that in our simulations we did not observe any collisions because the hyperparameters were chosen correctly. Specifically, we make the following observations from the results:
Experiments 5 and 9 (highlighted in bold) show the base set of values of the weights for the GPS-aided and GPS-denied environments, respectively. We set these values as the default for the subsequent experiments.
For the best case, the rule contributing the most to the final velocity is cohesion. Even though the values of r_m, r_ct, and r_oa are much lower than r_c, the overall performance is quite sensitive to these values—this is also observed in Section 4.4.1.

4.3. Performance Analysis

To evaluate the performance of the proposed model, SmrtSwarm, in terms of the achieved flocking behavior, we ran the model in the simulated environment shown in Figure 10 with GPS enabled as well as disabled. We used a 10-drone swarm to begin with. As mentioned in Section 3.1.3, for a GPS-aided environment, we define a spherical boundary (radius = 30 m in the x, y, and z directions of Unity’s coordinate system) around the leader drone as the confinement zone, whereas for GPS-denied environments, the field of view (FoV) of the drone becomes the confinement area. In our experiments, we used two cameras on all the drones, each with a field of view of 60°. Hence, the total FoV was 120° (similar to [43]). The swarm size, the simulation environment, the total FoV, and the confinement zone were the same for every experiment unless stated otherwise.

4.3.1. Swarming in a GPS-Aided Environment

We use two types of tags in SmrtSwarm: Leader and Follower. All the follower drones are given the Follower tag and the leader is given the Leader tag. The communication between drones is simulated using Unity’s built-in shared variables. We uploaded a video of our simulations, which can be accessed using this link [44].

4.3.2. Swarming in a GPS-Denied Environment

In the real world, stereo cameras can directly compute the depth values of each pixel in the FoV. However, in Unity, the depth values (from simulated cameras) are stored in a z-buffer called a depth buffer. This buffer is stored in the GPU memory and is not directly accessible. We wrote a shader program using High-Level Shader Language (HLSL) to read the depth values [45]. The shader program returns the depth map as a 256 × 256 2D matrix whose values lie in the range of 0 to 1. We needed to post-process these data to transform them to match the camera’s coordinate system. Furthermore, we also considered the camera’s viewing range, which is 40 in the x, y, and z directions, and converted all normalized depth values to actual distances (in meters); a sketch of this conversion is shown after the list of observations below. In Figure 11, a few depth map illustrations are displayed. We make the following observations from the depth maps:
The pixels within an object have similar depth values.
We observe that clusters corresponding to obstacles are much larger than those of drones and have at least 2000 pixels. This defines a threshold for us—we use this to designate a cluster as an obstacle. Furthermore, obstacles, being static objects, often start from the bottom of the FoV.
Also, there are a few clusters that correspond to random noise (far-away objects), which can be discarded if the total number of pixels forming a cluster is fewer than 8.
As clear from Figure 11, some of the objects in the depth map may be occluded. Due to the fact that all the drones follow the flocking principles, there must be some distance between them and, as a result, a significant difference will be present in their depth values. This allows us to readily filter out each cluster even in the presence of occlusion.
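Returning to the conversion of the shader output mentioned above, the sketch below shows the scaling we have in mind. It assumes the shader already yields linear 0-to-1 depth over the camera's viewing range of 40 (for instance, after applying Linear01Depth in the HLSL shader); a raw nonlinear z-buffer would have to be linearized first, and all names here are illustrative.

```python
import numpy as np

VIEW_RANGE = 40.0   # camera viewing range used in our simulations (Section 4.3.2)

def depth_buffer_to_distances(normalized_depth):
    """Convert a 256 x 256 buffer of normalized depth values (0..1) to distances."""
    normalized_depth = np.asarray(normalized_depth, dtype=float)
    return normalized_depth * VIEW_RANGE     # distances from the camera, in meters
```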
We tried to design a proof technique for proving that our flocking rules will always maintain a coherent swarm and avoid collisions in all kinds of environments, regardless of obstacles. This is ongoing work and our results are not fully mature yet. We extensively searched the web, but we could not find any existing mathematical technique that similar papers have used. Research in drone swarming is validated using exhaustive experimentation, as we have performed [46,47,48,49,50,51,52].

4.4. Sensitivity Analysis

To check whether the proposed model is robust enough, we ran the model in four different simulation environments (refer to Figure 12). These environments cover various lighting conditions, obstacle types, and relative positions of drones. The resulting swarm movement for all these cases is shown in an uploaded video [44]. We tuned the weights according to the scenes and list their final values in Table 6 and Table 7. We make the following observations from the results:
The weights are almost the same for all the environments.
The model works well for almost all the environments if the value of the six-tuple 〈r_c, r_s, r_a, r_m, r_ct, r_oa〉 = 〈80, 1, 1, 1, 25, 5〉 for a GPS-aided environment.
For a GPS-denied environment, the optimal value of the weight tuple is 〈750, 6, 1, 1, 1〉.

4.4.1. Effect of the Proposed Rules

The proposed flocking rules in this paper are migration, confinement, and obstacle avoidance. To check whether these rules impact the overall swarming behavior, we ran the model by disabling these rules individually in the simulation environment shown in Figure 12d (check the results in the uploaded videos here [44]). We make the following observations from the results:
With the migration rule enabled, the drones migrated in the direction of the leader; after disabling it, the drones did not move at all, and the significance of the migration force became abundantly clear.
Without the obstacle avoidance force, drones collided with the obstacles.
In the absence of the confinement force, all of the follower drones moved far ahead of the leader. However, when there was a confinement force, they remained confined within a boundary.

4.5. Scalability Analysis

To check the scalability of the proposed model, we varied the swarm size while keeping the simulation environment fixed. When the size of our swarm increased, we increased the radius (δ) of the confined region around the leader so that the swarm could cover a larger area and we reduced the likelihood of a collision. However, in the case of GPS-denied environments, there is no concept of a confinement zone. We tuned the weights in this case as well, and we list the optimal values in Table 8 and Table 9. We make the following observations from the results:
The weights were almost the same for all swarm sizes.
For the GPS-aided environment, the model worked well with all the swarm sizes if the weight values 〈r_c, r_s, r_a, r_m, r_ct, r_oa〉 = 〈80, 1, 1, 1, 25, 5〉. The results are in line with the observations made in Section 4.4.
Similarly, for the GPS-denied environment, the optimal values of weights are the same as given in Section 4.4.

4.6. Real-Time Performance of SmrtSwarm

To check the performance of the proposed model, SmrtSwarm, in a real-world environment, we ran it on a Beaglebone Black board [53]. The Beaglebone Black is a popular embedded board with an ARM Cortex-A8 processor clocked at 1 GHz. It also has 512 MB of RAM. We used Python 3.8 and GCC version 4.9.2 to implement the swarming model. Table 10 shows the execution time of each step involved in the swarming model on the board. We make the following observations from the results:
For a GPS-aided environment, all the steps have an extremely low latency (<0.3 ms). Additionally, the variance in execution times is very low (<2%).
The previously mentioned observation (point (1)) holds true in a GPS-denied environment as well, except for two steps: object detection and obstacle avoidance. The maximum and average latencies for these steps vary significantly across frames because these values are directly proportional to the number of objects in the depth map.
The step that takes the longest (with a maximum value of ≈12 ms) is object detection in the depth map using our algorithm.
The total latency for the GPS-aided environment is very low (<0.5 ms). The FPS (frames processed per second) can be as high as 2000, which is orders of magnitude more than what is required for drones, which are relatively slow-moving (we typically need 10–20 FPS; 75 FPS is considered high given that traditional displays operate at 30 FPS). Even for a GPS-denied environment, the maximum frame rate that can be achieved is 75 FPS (total execution time < 14 ms).

5. Conclusions

In this work, we proposed a leader–follower flocking model for controlling a drone swarm, aiming to enhance coordination within the swarm. To achieve this, we introduced three additional rules—migration, confinement, and obstacle avoidance—to the traditional Reynolds flocking model. These rules play a crucial role in maintaining better coordination and synchrony among the drones.
While GPS-assisted communication is effective for calculating the target velocity of each drone under ideal conditions, we recognize the limitations posed by unreliable GPS signals in real-world scenarios. To address this challenge, we presented a depth-map-based approach that allows for accurate control and coordination of nearby drones even in the absence of reliable GPS signals. This alternative approach significantly enhanced the swarm’s operational capabilities, enabling precise coordination and control in various environments.
In addition to our model’s contributions to swarm coordination and overcoming GPS limitations, it is essential to consider the evaluation of countermeasures and defensive strategies against adversarial actions. By studying the interactions between the swarm and moving adversaries, valuable insights can be gained into adversarial tactics, strategies, and vulnerabilities. These insights can further guide the development of more robust defense mechanisms and contribute to the creation of resilient swarm behaviors. This is a part of future work.

Author Contributions

N.B.: Conceptualization, methodology, software, writing—original draft. H.V.: Conceptualization, methodology, software, writing—original draft. A.D.: Conceptualization, methodology, validation, writing—review and editing, formal analysis. B.B.: Conceptualization, methodology, supervision. S.R.S.: Conceptualization, methodology, validation, formal analysis, writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded jointly by the governments of India and New Zealand (No. MI02544G).

Data Availability Statement

The source code and all simulation results are available on a GitHub repository [54].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Masehian, E.; Royan, M. Characteristics of and approaches to flocking in swarm robotics. In Applied Mechanics and Materials; Trans Tech Publications: Zurich, Switzerland, 2016; Volume 841, pp. 240–249. [Google Scholar]
  2. Schilling, F.; Soria, E.; Floreano, D. On the scalability of vision-based drone swarms in the presence of occlusions. IEEE Access 2022, 10, 28133–28146. [Google Scholar] [CrossRef]
  3. Schilling, F.; Schiano, F.; Floreano, D. Vision-based drone flocking in outdoor environments. IEEE Robot. Autom. Lett. 2021, 6, 2954–2961. [Google Scholar] [CrossRef]
  4. Research, G.V. Commercial Drone Market Size, Share and Trends Analysis Report by Product, by Application, by End-Use, by Propulsion Type, by Range, by Operating Mode, by Endurance, by Region, and Segment Forecasts, 2023–2030. Available online: https://www.grandviewresearch.com/industry-analysis/global-commercial-drones-market (accessed on 2 August 2023).
  5. Ling, H.; Luo, H.; Chen, H.; Bai, L.; Zhu, T.; Wang, Y. Modelling and simulation of distributed UAV swarm cooperative planning and perception. Int. J. Aerosp. Eng. 2021, 2021, 9977262. [Google Scholar] [CrossRef]
  6. Braga, R.G.; da Silva, R.X.; Ramos, A.C. Development of a Swarming Algorithm Based on Reynolds Rules to Control a Group of Multi-Rotor UAVs Using ROS. 2016. Available online: https://api.semanticscholar.org/CorpusID:221093766 (accessed on 7 August 2023).
  7. Braga, R.G.; Da Silva, R.C.; Ramos, A.C.; Mora-Camino, F. Collision avoidance based on reynolds rules: A case study using quadrotors. In Proceedings of the Information Technology-New Generations: 14th International Conference on Information Technology, Las Vegas, NV, USA, 10–12 April 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 773–780. [Google Scholar]
  8. Eversham, J.D.; Ruiz, V.F. Experimental analysis of the Reynolds flocking model. Paladyn 2011, 2, 145–155. [Google Scholar] [CrossRef]
  9. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 27–31 July 1987; pp. 25–34. [Google Scholar]
  10. Blomqvist, O.; Bremberg, S.; Zauer, R. Mathematical Modeling of Flocking Behavior; Degree Project in Mathematics, Optimization and Systems Theory, First Level; Royal Institute of Technology: Stockholm, Sweden, 2012. [Google Scholar]
  11. Rizk, Y.; Awad, M.; Tunstel, E.W. Cooperative heterogeneous multi-robot systems: A survey. ACM Comput. Surv. CSUR 2019, 52, 1–31. [Google Scholar] [CrossRef]
  12. Gunnarsson, H.; Åsbrink, A. Intelligent Drone Swarms: Motion Planning and Safe Collision Avoidance Control of Autonomous Drone Swarms. 2022. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1677350&dswid=7935 (accessed on 7 August 2023).
  13. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420. [Google Scholar] [CrossRef]
  14. Wang, T.; Wang, C.; Liang, J.; Chen, Y.; Zhang, Y. Vision-aided inertial navigation for small unmanned aerial vehicles in GPS-denied environments. Int. J. Adv. Robot. Syst. 2013, 10, 276. [Google Scholar] [CrossRef]
  15. Lu, Z.; Liu, F.; Lin, X. Vision-based localization methods under GPS-denied conditions. arXiv 2022, arXiv:2211.11988. [Google Scholar]
  16. Balamurugan, G.; Valarmathi, J.; Naidu, V. Survey on UAV navigation in GPS denied environments. In Proceedings of the 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), Paralakhemundi, India, 3–5 October 2016; pp. 198–204. [Google Scholar]
  17. Morihiro, K.; Isokawa, T.; Nishimura, H.; Matsui, N. Characteristics of flocking behavior model by reinforcement learning scheme. In Proceedings of the 2006 SICE-ICASE International Joint Conference, Busan, Republic of Korea, 18–21 October 2006; pp. 4551–4556. [Google Scholar]
  18. Fine, B.T.; Shell, D.A. Unifying microscopic flocking motion models for virtual, robotic, and biological flock members. Auton. Robot. 2013, 35, 195–219. [Google Scholar] [CrossRef]
  19. Turgut, A.E.; Çelikkanat, H.; Gökçe, F.; Şahin, E. Self-organized flocking in mobile robot swarms. Swarm Intell. 2008, 2, 97–120. [Google Scholar] [CrossRef]
  20. Gu, D.; Wang, Z. Leader–Follower Flocking: Algorithms and Experiments. IEEE Trans. Control Syst. Technol. 2009, 17, 1211–1219. [Google Scholar] [CrossRef]
  21. Bhowmick, C.; Behera, L.; Shukla, A.; Karki, H. Flocking control of multi-agent system with leader-follower architecture using consensus based estimated flocking center. In Proceedings of the IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 24–27 October 2016; pp. 166–171. [Google Scholar]
  22. Walker, P.; Amraii, S.A.; Lewis, M.; Chakraborty, N.; Sycara, K. Control of swarms with multiple leader agents. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 3567–3572. [Google Scholar]
  23. Zheng, H.; Panerati, J.; Beltrame, G.; Prorok, A. An adversarial approach to private flocking in mobile robot teams. IEEE Robot. Autom. Lett. 2020, 5, 1009–1016. [Google Scholar] [CrossRef]
  24. Chen, S.; Yin, D.; Niu, Y. A survey of robot swarms’ relative localization method. Sensors 2022, 22, 4424. [Google Scholar] [CrossRef]
  25. Haller, N.K.; Lind, O.; Steinlechner, S.; Kelber, A. Stimulus motion improves spatial contrast sensitivity in budgerigars (Melopsittacus undulatus). Vis. Res. 2014, 102, 19–25. [Google Scholar] [CrossRef] [PubMed]
  26. Schilling, F.; Lecoeur, J.; Schiano, F.; Floreano, D. Learning vision-based flight in drone swarms by imitation. IEEE Robot. Autom. Lett. 2019, 4, 4523–4530. [Google Scholar] [CrossRef]
  27. Zhou, X.; Zhu, J.; Zhou, H.; Xu, C.; Gao, F. Ego-swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 4101–4107. [Google Scholar]
  28. Zanol, R.; Chiariotti, F.; Zanella, A. Drone mapping through multi-agent reinforcement learning. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh, Morocco, 15–18 April 2019; pp. 1–7. [Google Scholar]
  29. Baldazo, D.; Parras, J.; Zazo, S. Decentralized multi-agent deep reinforcement learning in swarms of drones for flood monitoring. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruña, Spain, 2–6 September 2019; pp. 1–5. [Google Scholar]
  30. Barksten, M.; Rydberg, D. Extending Reynolds’ Flocking Model to a Simulation of Sheep in the Presence of a Predator; Degree Project in Computer Science, First Level; Royal Institute of Technology: Stockholm, Sweden, 2013. [Google Scholar]
  31. Virágh, C.; Vásárhelyi, G.; Tarcai, N.; Szörényi, T.; Somorjai, G.; Nepusz, T.; Vicsek, T. Flocking algorithm for autonomous flying robots. Bioinspir. Biomim. 2014, 9, 025012. [Google Scholar] [CrossRef]
  32. Reynolds, C. Boids Background and Update. 2001. Available online: http://www.red3d.com/cwr/boids/ (accessed on 7 August 2023).
  33. Müller, H.; Niculescu, V.; Polonelli, T.; Magno, M.; Benini, L. Robust and efficient depth-based obstacle avoidance for autonomous miniaturized uavs. arXiv 2022, arXiv:2208.12624. [Google Scholar]
  34. Lin, J.; Zhu, H.; Alonso-Mora, J. Robust vision-based obstacle avoidance for micro aerial vehicles in dynamic environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2682–2688. [Google Scholar]
  35. Siegwart, R.; Scaramuzza, D. Range Sensors. Available online: http://www.cs.columbia.edu/~allen/F15/NOTES/rangesensing.pdf (accessed on 10 October 2022).
  36. University of Tsukuba, CVLAB. Tsukuba Stereo Vision Dataset. Available online: https://home.cvlab.cs.tsukuba.ac.jp/dataset (accessed on 20 September 2022).
  37. Unity. Unity Scene. Available online: https://docs.unity3d.com/Manual/index.html (accessed on 2 March 2022).
  38. Unity Technologies. Unity. Available online: https://unity.com/frontpage (accessed on 7 August 2023).
  39. Unreal Engine. Available online: https://www.unrealengine.com/en-US (accessed on 7 August 2023).
  40. C#. Available online: https://learn.microsoft.com/en-us/dotnet/csharp/ (accessed on 4 February 2022).
  41. Unity Asset 2. Available online: https://assetstore.unity.com/packages/3d/environments/urban/city-low-poly-2455 (accessed on 6 August 2022).
  42. Unity Asset 1. Available online: https://assetstore.unity.com/packages/3d/environments/urban/polygon-city-low-poly-3d-art-by-synty-95214 (accessed on 10 August 2022).
  43. RealCamera. Available online: https://robu.in/product/high-definition-1200tvl-coms-camera-2-8mm-lens-fpv-camera-fpv-rc-drone-quadcopter/ (accessed on 4 February 2023).
  44. Verma, H.; Bhamu, N. Simulation Results. Available online: https://drive.google.com/drive/folders/1T_Gk5irVTQTxdUJGYH93ccMLhA5MEUYi?usp=sharing (accessed on 23 June 2023).
  45. Shaders. Available online: https://docs.unity3d.com/Manual/SL-ShaderPrograms.html (accessed on 4 January 2023).
  46. Viscido, S.V.; Wethey, D.S. Quantitative analysis of fiddler crab flock movement: Evidence for ‘selfish herd’ behaviour. Anim. Behav. 2002, 63, 735–741. [Google Scholar] [CrossRef]
  47. Conradt, L.; Krause, J.; Couzin, I.D.; Roper, T.J. “Leading according to need” in self-organizing groups. Am. Nat. 2009, 173, 304–312. [Google Scholar] [CrossRef]
  48. Couzin, I.D.; Krause, J.; Franks, N.R.; Levin, S.A. Effective leadership and decision-making in animal groups on the move. Nature 2005, 433, 513–516. [Google Scholar] [CrossRef]
  49. Lopez, U.; Gautrais, J.; Couzin, I.D.; Theraulaz, G. From behavioural analyses to models of collective motion in fish schools. Interface Focus 2012, 2, 693–707. [Google Scholar] [CrossRef] [PubMed]
  50. Smith, J.; Martin, A. Comparison of hard-core and soft-core potentials for modelling flocking in free space. arXiv 2009, arXiv:0905.2260. [Google Scholar]
  51. Szabó, P.; Nagy, M.; Vicsek, T. Turning with the others: Novel transitions in an SPP model with coupling of accelerations. In Proceedings of the 2008 Second IEEE International Conference on Self-Adaptive and Self-Organizing Systems, Venice, Italy, 20–24 October 2008; pp. 463–464. [Google Scholar]
  52. Levine, H.; Rappel, W.J.; Cohen, I. Self-organization in systems of self-propelled particles. Phys. Rev. E 2000, 63, 017101. [Google Scholar] [CrossRef] [PubMed]
  53. BeagleBone. BeagleBone Black. Available online: https://beagleboard.org/black (accessed on 20 June 2023).
  54. Drone Swarm Simulator. Available online: https://github.com/srsarangi/droneswarm (accessed on 8 August 2023).
Figure 1. General workflow of a flocking model [6,18].
Figure 3. Insights from Section 2.
Figure 4. Proposed flocking rules in a GPS-aided environment: (a) migration; (b) obstacle avoidance; (c) confinement. Here, v m , v c t , and v o a are migration, confinement, and obstacle avoidance vectors, respectively.
Figure 5. Insights from Section 3.1.
Figure 6. (a) Head and lamp image; (b) depth map of the image (adapted from the Tsukuba Stereo Vision dataset [36]).
Figure 7. Proposed flocking rules in a GPS-denied environment: (a) cohesion; (b) separation; (c) alignment; (d) obstacle avoidance. Here, FoV represents the field of view of the reference drone. x i and v i are the position and velocity of the i t h drone. v c , v s , v a , and v o a represent cohesion, separation, alignment, and obstacle avoidance vectors.
Figure 8. Workflow of SmrtSwarm in GPS-denied environments.
Figure 9. Insights from Section 3.2.
Figure 10. (a) Simulation scene; (b) drone placement.
Figure 11. Depth maps of frames with and without obstacles. (a) Frame 1; (b) depth map of frame 1; (c) frame 2; (d) depth map of frame 2.
Figure 12. Simulation environments: (a) Scene 1; (b) scene 2; (c) scene 3; (d) scene 4.
Table 1. A comparison of related work.
Work | Year | Flock's Characteristics (Leader-Follower, Self-Organized) | Environment (GPS-Denied, Existence of Obstacles) | Sensor Used | Algorithm
Eversham et al. [8]2011×××GPS-
Blomqvist et al. [10]2012××GPS-
Barksten et al. [30]2013×××GPS-
Walker et al. [22]2014×Distance-based-
Virágh et al. [31]2014×××GPS-
Bhowmick et al. [21]2016×××GPS-
Braga et al. [6]2016×××GPS-
Schilling et al. [26]2019××Vision-basedML-based
Zheng et al. [23]2020××GPS-
Schilling et al. [3]2021××Vision-basedML-based
Chen et al. [24]2022××Distance-based-
Schilling et al. [2]2022××Vision-basedML-based
S m r t S w a r m 2023Vision-basedTraditional CV
Table 2. Glossary.
Symbol | Meaning
v_i, x_i | Velocity and position of the i-th drone, respectively.
x_L, z_j | Position of the leader drone and of the j-th obstacle, respectively.
δ | Radius of the confined area around the leader.
r_c, r_s, r_a, r_m, r_ct, r_oa | Weights of the cohesion, separation, alignment, migration, confinement, and obstacle avoidance rules, respectively, in the final velocity of the drone.
v_c, v_s, v_a, v_m, v_ct, v_oa | Cohesion, separation, alignment, migration, confinement, and obstacle avoidance vectors, respectively.
Table 3. Platform configuration.
Parameter | Value
Simulator | Unity 2020.3.40f1
Operating system | Windows 10
Main memory | 1 TB
RAM | 32 GB
CPU | Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz
GPU | NVIDIA GeForce GT 710
Video memory | 2 GB
Table 4. Effect of the weights on the overall performance for a GPS-aided environment.
Experiment No. | Weights: r_c, r_s, r_a, r_m, r_ct, r_oa | Performance: #Collisions, Confined
1101.01.01.021.03×
2601.11.51.0155.05
3751.21.01.1255.03
4771.21.01.1215.01×
5771.21.01.1215.00
6781.01.21.0244.91
7801.11.01.2235.12
88011.01.01.2235.12
9811.05.51.0254.83
10811.01.54.0254.81
11821.01.21.0104.94×
121001.11.01.01010.05
Table 5. Effect of the weights on the overall performance for a GPS-denied environment.
Experiment No. | Weights: r_c, r_s, r_a, r_ct, r_oa | Performance: #Collisions, Confined
1101.01.0113×
21005.01.5514
33008.01.0114
45006.01.0115
58006.51.2112
67505.51.0111
78206.11.0113×
88006.21.5111
97506.01.1110
108006.00.9111
118106.21.2150×
128006.21.0311×
Table 6. Weights for various simulation environments with GPS.
Scene | Weights: r_c, r_s, r_a, r_m, r_ct, r_oa | Performance: #Collisions, Confined
1781.01.01.0215.00
2771.21.01.1215.00
3771.11.51.0255.00
4801.21.01.1255.10
Table 7. Weights for various simulation environments without GPS.
Scene | Weights: r_c, r_s, r_a, r_ct, r_oa | Performance: #Collisions, Confined
17506.11.0110
27506.01.1110
37006.21.5110
48006.01.0110
Table 8. Weights for drone swarms of different sizes in a GPS-aided environment.
Experiment No. | Swarm Size | Radius (δ) | Weights: r_c, r_s, r_a, r_m, r_ct, r_oa | Performance: #Collisions, Confined
153081.01.01.21.0254.80
273579.01.11.51.1254.70
383580.01.01.31.0225.00
4104079.51.01.41.1244.80
5124580.01.21.41.0235.00
6155079.01.11.41.0245.00
Table 9. Weights for drone swarms of different sizes in a GPS-denied environment.
Experiment No. | Swarm Size | Weights: r_c, r_s, r_a, r_ct, r_oa | Performance: #Collisions, Confined
157906.011.01.00
278086.21.21.01.00
388106.00.91.01.00
4108006.51.01.01.00
5127956.01.01.01.00
6158006.01.01.01.00
Table 10. Runtime (in milliseconds) breakdown of our proposed method.
Steps | GPS-Aided (Max / Min / Avg) | GPS-Denied (Max / Min / Avg)
Object detection | - / - / - | 11.87 / 8.17 / 9.95
Cohesion | 0.04 / 0.01 / 0.02 | 0.02 / 0.01 / 0.01
Separation | 0.28 / 0.21 / 0.24 | 0.27 / 0.23 / 0.25
Alignment | 0.01 / 0.01 / 0.01 | 0.02 / 0.01 / 0.01
Migration | 0.01 / 0.01 / 0.01 | - / - / -
Confinement | 0.05 / 0.03 / 0.04 | 0.01 / 0.01 / 0.01
Obstacle avoidance | 0.02 / 0.02 / 0.02 | 1.23 / 0.84 / 1.03
Total time | 0.44 / 0.39 / 0.42 | 13.55 / 9.32 / 11.44
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
