1. Introduction
Robot farming plays a critical role in preventing the food crisis caused by future population growth [1]. The past decades have seen the rapid development of robotic crop farming, such as automated crop monitoring, harvesting, weed control, and so forth [2,3]. Deploying robotic automation to improve crop production yields has become popular among farmers. In contrast, research and implementations of robotic livestock farming have been mostly restricted to the fields of virtual fencing [4], animal monitoring and pasture surveying [5,6]. Such applications can improve livestock production yields to some extent, but animal herding, the vital step of livestock farming, has long been the least automated. Sheepdogs, used for centuries, are still the dominant tools of animal herding around the world, and research on robotic animal herding is still in its infancy. The two main obstacles to robotic animal herding systems are: (1) the lack of practical robot-to-animal interactions and a suitable robotic herding platform; and (2) the lack of an efficient robotic herding algorithm for large numbers of animals.
The application of robots to the actual act of animal herding started with the Robot Sheepdog Project in the 1990s [7,8]. These groundbreaking studies gathered a flock of ducks and manoeuvred them to a specified goal position using wheeled robots. The last three decades have seen only a handful of studies on robotic herding with real animals. Recent implementations of robotic animal herding mainly employ ground robots that drive animals through bright colours [9] or collisions [10,11,12]. Reference [9] shows that the robot initially repulsed the sheep at a distance of 60 m; however, after only two further trials, the repulsion distance dropped to 10 m. Besides, such legged or wheeled ground robots might not be agile enough to deal with the various terrains encountered during herding. Furthermore, the Rover robot in [10], the Spot robot in [11] and the Swagbot in [12] each cost hundreds of thousands of US dollars and are still at the prototype stage. High prices limit their popularity. Interestingly, real sheepdogs are also expensive because of their years-long training period: a fully trained sheepdog can cost tens of thousands of US dollars [13]. The most crucial drawback of sheepdogs is that they cannot escape biological limitations, that is, ageing and illness.
Besides the platforms, efficient algorithms are also critical to the study of robotic animal herding. Despite the stagnant progress of herding platforms, research on the related algorithms has experienced considerable development. The prime cause of this contrast is the recent rapid development of the study of swarm robotics and swarm intelligence [14]. Bio-inspired swarming-based control algorithms for herding swarm robots are receiving much attention in robotics due to the effectiveness of solutions found in nature (e.g., interactions between sheep and dogs). Such algorithms can also be applied to herd flocks of animals, and a considerable amount of literature has been published on this topic. For example, paper [15] designs a simple heuristic algorithm for a single shepherd to solve the shepherding problem, based on adaptive switching between collecting the agents when they are too dispersed and driving them once they are aggregated. One unique contribution of [15] is that it conducts field tests with a flock of real sheep and reproduces key features of empirical data collected from sheep-dog interactions. Elaborating on the results in [15], reference [16] tests the effects of the step size per unit time of the shepherd and swarm agents, and clarifies that the herding algorithm in [15] is largely robust to the swarm agents' moving speeds. A further study [17] extends the shepherd and swarm agents' motion and influential force vectors to the third dimension.
References [18,19] propose multi-shepherd control strategies for guiding swarm agents in 2D and 3D environments based on a single continuous control law. The implementation of such strategies requires more shepherds than swarm agents, and thus cannot deal with tasks involving abundant agents. The level of modulation of the force vector exerted by the shepherd on the swarm agents plays a critical role in herding task success and the energy used. Paper [20] designs a force modulation function for the shepherd agent and adopts a genetic algorithm to optimise the energy used by the agent subject to a threshold success rate. These algorithms, and most studies in robotic herding, however, have only been carried out on tasks with tens of swarm agents. Algorithms for efficiently herding abundant swarm agents have not been investigated.
Compared with ground robots, autonomous drones have superior manoeuvrability and are finding increasing use in different areas, including agriculture [21,22], surveillance [23,24], communications [25] and disaster relief [26]. In particular, references [21,22] demonstrate the feasibility of counting and tracking farm animals using drone cameras. Reference [27] develops an algorithm for a drone to herd a flock of birds away from an airport, and field experiments show the effectiveness of the algorithm. With the ability to rapidly cross miles of rugged terrain, drones are potentially the ideal platforms for robotic animal herding, if they can interact with animals as efficiently as sheepdogs do. Sheepdogs usually herd animals by barking, glaring, or nipping at the animals' heels. For example, the New Zealand Huntaway uses its loud, deep bark to muster flocks of sheep [28]. A drone can act like a sheepdog by loudly playing a pre-recorded dog bark through a speaker; we refer to such a drone as a barking drone. Recently, some successful attempts have been made using human-piloted barking drones to herd flocks of farm animals [29]. Besides, studies show that, compared with sheepdogs, using drones to herd cattle and sheep is faster and puts less stress on the animals [30].
1.1. Objectives and Contributions
This paper’s primary objective is to design a robotic herding system that can efficiently herd a large number of farm animals without human input. The system should be able to collect a flock of farm animals when they are too dispersed and drive them to a designated location once they are aggregated. The main contributions of this paper are as follows:
We propose a novel idea of autonomous barking drones by improving the design of human-piloted barking drones, and further propose a novel robotic herding system based on it. Compared with the existing approaches of ground herding robots that drive animals through collisions or bright colours, autonomous barking drones can solve the problem of effective robot-to-animal interaction with significantly improved efficiency;
We propose a collision-free motion control algorithm for a network of barking drones to herd a large flock of farm animals efficiently;
We conduct simulations of herding a thousand animals, while the existing approaches usually herd tens of animals or swarm robots. The proposed algorithm can also be applied to herd swarm robots;
Based on an animal behaviour model verified by real animal experiments and the proven shepherding examples with human-piloted barking drones, the proposed system has the potential to be the world's first practical robotic herding solution for a large flock of farm animals;
With their functions limited to non-essential applications such as animal monitoring and data collection, current Internet of Things (IoT) platforms for precision farming have a low return on investment. Besides meeting farmers' rigid demand (i.e., herding), the proposed system can also serve as an IoT platform providing the same functions. Thus, it has the potential to popularise IoT implementations for precision farming.
Preliminary versions of some results of this paper were presented in the PhD thesis of the first author [31].
1.2. Organization
The remainder of the paper is organised as follows. In Section 2, we introduce the design of the drone herding system. Section 3 presents the system model and problem statement. Drone motion control for gathering and driving is proposed in Section 4 and Section 5, respectively. Simulation results are presented in Section 6. Finally, we give our conclusions in Section 7.
2. Design of the Drone Herding System
We now introduce the proposed drone herding system. It consists of a fleet of two types of drones. The duty of the first type of drone is to detect and track animals. For this purpose, each such drone is equipped with cameras and runs Artificial Intelligence (AI) algorithms that can detect and track animals from live video feeds with sufficient accuracy. The first type of drone shares some similarity with the goat tracking drones of [21]. Unlike that system, however, ours only requires tracking information for the animals on the boundary of the flock. This considerably relaxes the workload of the drones, and many existing image processing techniques, such as edge detection, can be adopted in real time.
A drone of the second type carries a speaker that plays sheepdogs' barking. The speaker should have a clear sound, sufficient volume, relatively small size and low weight. There are some drone digital voice broadcasting systems on the market; for example, the MP130 from Chengzhi [32] can broadcast over 500 m and weighs 550 g. Moreover, the speaker is designed to be mounted on a stabiliser attached to the drone, so that it can broadcast steadily in a desired direction, no matter which direction the barking drone is moving in. It is worth mentioning that the speaker on current human-piloted barking drones is not mounted on a stabiliser, so this improves on the design of current barking drones.
The observer and the barking drones can communicate with each other. The communication between them can be realised by 2.4 GHz radio, which is commonly used by various drone products. The communication is mainly unidirectional from the observer to the barking drones. Specifically, the observer monitors the locations of the animals on the edge of the flock and sends them to the barking drones in real time. A typical application scenario of the proposed system is herding a large flock of animals with one observer and multiple barking drones.
Figure 1 shows a schematic diagram of a basic unit of the drone herding system, with one observer and one barking drone.
Limited battery life is a perennial problem in drone applications. Later we will show that the proposed herding system can usually accomplish the herding task in less than 50 min, which is the typical endurance of some commercialised industrial drone products such as the DJI M300 [33]. (Note that such industrial drones usually cost thousands of US dollars each, i.e., they may be cheaper than a fully trained sheepdog.) Moreover, any drone in the system should be able to autonomously fly back to a ground drone base station and recharge its battery with automatic charging devices [34]. Besides, advances in solar-harvesting technology enable drones to prolong their battery lifetime [35].
4. Drone Motion Control for Gathering
This section introduces the motion control algorithms for a network of barking drones to quickly accomplish the gathering task. We first introduce the algorithms for navigating the barking drones to fly to the extended hull and to fly on the extended hull in Section 4.1 and Section 4.2, respectively. Then, Section 4.3 presents the optimal positions (steering points) for the barking drones to efficiently gather animals, as well as the collision-free allocation of the steering points. A flowchart of the proposed method is shown in Figure 4.
Let A be any point on the plane of the extended hull. Let B be a vertex of the extended hull. We now introduce two guidance laws for navigating a barking drone from A to B in the shortest time:

Fly to edge: navigate the barking drone from an initial position A to the extended hull in the shortest time. Note that the vertices of the extended hull can be moving. Let O denote the barking drone's reaching point on the extended hull; see Figure 5.

Fly on edge: navigate the barking drone from O to B in the shortest time following a given direction, e.g., clockwise or counterclockwise, while keeping the barking drone on the extended hull. Since the drone has non-holonomic motion dynamics, it is allowed to move along a short arc when travelling between two adjacent edges; see Figure 5.
Note that A is not necessarily outside the extended hull. To avoid dispersing any herded animal, the speaker on the barking drone should be turned on only when it has arrived at the extended hull. Besides, if A is already on the extended hull, the drone applies the Fly on edge guidance law directly (with O = A).
4.1. Fly to Edge Guidance Law
Let and be non-zero two-dimensional vectors, and . Now introduce the following function mapping from to as

where . In other words, the rule (14) is defined in the plane of the vectors and . The resulting vector is orthogonal to and directed "towards" . Moreover, introduce the function as follows
We will also need the following notation to present the Fly to edge guidance law. At time t, let be the extended hull edge that is closest to the drone. We will show how to find it later. Let denote the vector from vertex to . Let denote the vector from the vertex to the drone. Let O be the point on that is closest to the drone. Let be the vector from the drone to O. If , we have and ; see Figure 6a. If and , is always orthogonal to . Let be the vector from to O; see Figure 6b. can be obtained by the following equation:

Otherwise, we have and ; see Figure 6c. Given and , we present the following Fly to edge guidance law:
The proposed Fly to edge guidance law belongs to the class of sliding-mode control laws (see, e.g., [39]). With their simple switching strategy, sliding-mode control laws are quite robust and insensitive to parameter variations and uncertainties in the control channel. Moreover, because the control input is not a continuous function, the sliding mode can be reached in finite time, which is better than asymptotic behaviour; see, for example, [39,40,41].
Assumption 1. Let be the length of at time t. Then for all t for some given constants . Let and be some constants such that . Let be the distance between the drone and . is the distance between A and .

Assumption 2. Let . Then

Theorem 1. Suppose that Assumptions 1 and 2 hold. Then, the guidance law (17), (18) and (19) navigates the barking drone from an initial position A to , and the drone remains on after arrival.
Remark 2. It should be pointed out that Assumptions 1 and 2 are quite technical assumptions, which are necessary for a mathematically rigorous proof of the performance of the proposed guidance law. However, our simulations show that the proposed guidance law often performs well even in situations when Assumptions 1 and 2 do not hold.
Proof. From the definitions of and , the guidance law (18), (19) turns the velocity vector towards . Moreover, Equation (17) gives that points from the drone to its closest point on ; see Figure 6a–c. Furthermore, it follows from (22) together with Assumption 1 that there exists some time such that for all , the vectors and are co-linear and . Introduce the function , the distance between the drone's current location and . Then, it follows from (20) of Assumption 2 and the inequality that if then for some constant . Therefore, there exists a time such that the drone's current position belongs to for all . Moreover, (21) implies that for all the drone will remain in the sliding mode of the system (3), (4), (17), (18), (19) corresponding to the position of the drone on , with the vector orthogonal to . This completes the proof of Theorem 1. □
Remark 3. At time t, given and E, calculate the distance from the barking drone to each edge of the extended hull. Then, the edge with the minimum distance is the closest edge of the extended hull to the drone.
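The closest-edge computation of Remark 3 reduces to a point-to-segment distance test over all edges. A minimal sketch (the hull vertices and drone position below are hypothetical):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab, and the closest point on ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    # Project p onto the segment's supporting line, clamped to [0, 1].
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy), (cx, cy)

def closest_edge(p, hull):
    """Index i of the hull edge (hull[i], hull[i+1]) closest to the drone at p."""
    n = len(hull)
    dists = [point_segment_distance(p, hull[i], hull[(i + 1) % n])[0] for i in range(n)]
    return min(range(n), key=dists.__getitem__)

# Hypothetical square extended hull; drone below the bottom edge.
hull = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(closest_edge((2, -1), hull))  # → 0 (the edge from (0, 0) to (4, 0))
```

The closest point returned by `point_segment_distance` is the reaching point O used by the Fly to edge guidance law.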
4.2. Fly on Edge Guidance Law
Before introducing the Fly on edge guidance law, we first present the edge sliding guidance law for a drone flying along an edge of the extended hull, with possibly moving vertices. At time t, let be the edge that we want to keep the drone on. Let be the target position of the barking drone. Let denote the vector from the drone to , as shown in Figure 6d. We introduce , which is given by:

Then, the edge sliding guidance law is as follows:
Theorem 2. Suppose that Assumption 1 holds. Then the guidance law (23), (24) and (25) navigates the barking drone from to along , and enables the drone to stay at .

Note that consists of two vector components: and , where is for keeping the drone on and is for navigating the drone to . Thus, Theorem 2 can be proved similarly to Theorem 1.
Suppose that the barking drone has arrived at the point O at time . Introduce a direction index: for clockwise flying and for counterclockwise flying. Given O, , B and , the Fly on edge guidance law solves for the barking drone's control inputs and by commanding one of the two following motions.
4.2.1. TRANSFER
The drone flies to the adjacent edge in the given direction from its current location through a straight line and an arc. Consider that the drone flies from its current edge to the adjacent edge . The drone first moves to following the edge sliding guidance law. Then, let and , where denotes the vector pointing from the drone's current location to the turning centre . The drone then turns with the minimum turning radius and arrives at , as shown in Figure 7.

Let be the minimum turning radius of the drone. Let denote the angle . is the convex hull vertex corresponding to . Then, and can be computed by:

where denotes the drone-to-animal distance, as defined above. can be obtained from an equation similar to (10). To avoid dispersing any animals, the turning trajectory should not touch the convex hull of the animals, which always holds in the case of . If , the following inequalities need to be satisfied:
Remark 4. If (28) does not hold, the drone directly flies to following the edge sliding guidance law, then stops and changes direction to , which is slower than travelling along the arc trajectory. An isolated animal far away from all the other animals may cause a very small θ and lead to this problem.
4.2.2. BRAKE
If the drone has arrived at an edge , where is the closest vertex to B on the opposite side of the direction indicated by , then the drone flies to the point B following the edge sliding guidance law. Let be the closest vertex to the drone opposite to the given direction. Let be the set of vertices located between O and B along the given direction.
We are now in a position to present the Fly on edge guidance law, as shown in Algorithm 1. Specifically, the drone approaches the edge that contains the destination B by performing TRANSFER multiple times. Afterwards, the drone starts BRAKE once is found to be an empty set (i.e., ), which means the drone has arrived at . The drone then reaches B through BRAKE. The presented guidance law is designed to navigate the barking drone from any point on the extended hull to a selected vertex following a given direction, and to stop the drone at the selected vertex.
Algorithm 1: Fly On Edge Guidance Law.
Input: O, , B,
1: Find and ; if , go to line 4;
2: [, ] = TRANSFER(, );
3: Repeat lines 1, 2;
4: [, ] = BRAKE(, B).
4.3. Selection and Allocation of Steering Points
We now find the optimal positions for the barking drones to effectively gather the animals. Aiming to minimise the maximum animal-to-centroid distance in the shortest time, at any time t, we choose the animals with the largest animal-to-centroid distance as the target animals. These animals are also the convex hull vertices that are farthest from . Since the barking drones have their motion restricted to the extended hull, we select the extended hull vertices corresponding to the target animals as the optimal drone positions for steering the target animals towards . From now on, we call these corresponding extended hull vertices the steering points, denoted by the set , .
Remark 5. For a large flock of animals, generally holds. In the case of , let the barking drones that are far from the steering points quit the gathering task and stand by at their current locations. These drones may rejoin the gathering task when increases again.
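The target-animal selection described above can be sketched as follows. Here we simply rank all animals by their distance to the flock centroid and take the top k (the number of available drones); this coincides with picking the farthest convex-hull vertices, since the animals farthest from the centroid necessarily lie on the hull. The flock positions and k are hypothetical:

```python
import math

def steering_targets(animals, k):
    """Pick the k animals farthest from the flock centroid as target animals.

    In the paper these are convex-hull vertices farthest from the centroid;
    ranking every animal gives the same set, because the maximisers of the
    centroid distance are hull vertices.
    """
    cx = sum(x for x, _ in animals) / len(animals)
    cy = sum(y for _, y in animals) / len(animals)
    ranked = sorted(
        animals,
        key=lambda p: math.hypot(p[0] - cx, p[1] - cy),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical flock: two stragglers at (10, 0) and (0, 10) are the targets.
flock = [(0, 0), (10, 0), (0, 10), (1, 1), (2, 2)]
print(steering_targets(flock, 2))
```

In the full system this selection is re-evaluated at every sampling time, since the farthest animals change as the flock contracts.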
Definition 2. The allocation of steering points specifies which drone goes to which steering point through which direction, that is, clockwise or counterclockwise.
The optimal allocation of steering points should meet the following two requirements:
No collision happens when each drone is flying to its allocated steering point along the extended hull;
With requirement 1 met, the maximum travel distance of the drones is minimised.
Suppose that all the drones have arrived at the extended hull at time . We relabel the drones so that the index of the drones increases in the counterclockwise direction. Let M be the perimeter of the extended hull. Imagine that we disconnect the extended hull at the position of the first drone, that is, . Then, 'straighten' the extended hull into a straight line segment with a length of M, so that , and become points on the line segment. Based on this line segment we build a one-dimensional (1D) coordinate axis denoted as the z axis. Let be the 1D coordinates of the drones' positions on the z axis. Let be the origin. We have , as shown in Figure 8a. It can be seen that left and right flying on the z axis correspond to counterclockwise and clockwise flying on the extended hull, respectively.
We will also need the following notation to present our algorithm. Let be a set of allocated steering points with a corresponding z axis coordinate set , as shown in Figure 8b. is the destination of the drones on the z axis. Note that may not hold. Let be the set of the drones' travel distances for reaching their allocated steering points. We now define three variables , and to indicate the flying direction and extent of drone j. Specifically, let if drone j reaches by right flying on the z axis, and if drone j reaches by left flying on the z axis. Furthermore, let if drone j will pass by right flying to reach , and otherwise. Similarly, let if drone j will pass by left flying to reach , and otherwise. Let , and be the sets of and , respectively. Given , and , and can be computed by:
The main notations are listed in Table 1. Since the line segment is generated by straightening the enclosed extended hull, the drones that pass by left flying will appear on the right side of the line segment, and the drones that pass by right flying will appear on the left side of the line segment. We now imagine extending the line segment to and build another 1D coordinate axis, as shown in Figure 8c. On this axis, the drones that pass by left flying will not appear on the right side of the line segment, but will appear on , and the drones that pass by right flying will appear on . Let be the 1D coordinate set on this axis corresponding to . Then, the mapping between and is obtained by:
If we place on this axis, as shown in Figure 8c, the travel route of any drone j will be . We obtain the expression for the travel distances as follows:
Then, the steering point allocation optimisation problem is formulated as follows:

s.t.

where (33) minimises the travel distance of the drone farthest from its allocated steering point.
Assumption 3. All the drones start flying to their allocated steering points at the same time, following the proposed Fly on edge guidance law.
Theorem 3. Suppose that Assumption 3 holds. Then, (34), (35) guarantee that no collision happens while the drones are flying to their allocated steering points.
Proof. Suppose that all the drones start flying to their steering points at time . Let be the time at which drone j arrives at its steering point, that is, . From (14) and (25), at any time , drone has:

From the proof of Theorem 1, is always minimised after drone j arrives at the extended hull. Since drone j moves from to along the z axis, it can be obtained from (23) and (3) that

Then, the distance between drone j and drone at time can be computed by

Since , it can be concluded from (35), (38), (39) and (40) that

which means drone will not collide with drone before they arrive at their steering points. Moreover, the actual distance between drone 1 and drone is given by

Given (34), can be proved similarly. Therefore, (34) guarantees that drone 1 will not collide with drone , and (35) guarantees that each drone will not collide with its neighbours. This completes the proof of Theorem 3. □
Remark 6. For drones, steering points and two possible directions for each drone, the number of possible allocations is .
Since is often a limited number, N will be limited as well. Therefore, the optimal allocation can be found by generating and searching all possible allocations. We are now in a position to present the algorithm for finding the optimal steering point allocation, as shown in Algorithm 2.
Algorithm 2: Optimal Steering Points Allocation.
Input: , E, D
1: Find ;
2: Calculate from and ;
3: Generate possible allocations ;
4: For each , calculate ;
5: Solve (29)–(31) by searching .
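Because the number of allocations is finite, the exhaustive search of Algorithm 2 can be sketched directly. Note that the collision constraint below is a simplified stand-in for the paper's constraints (34), (35): two drones flying in opposite directions must not sweep overlapping arcs of the hull. The 1D coordinates, perimeter and unit-resolution arc sampling are illustrative assumptions:

```python
from itertools import permutations, product

def ring_dist(z_from, z_to, direction, M):
    """Travel distance on the straightened hull of perimeter M.
    direction +1 = counterclockwise (increasing z), -1 = clockwise."""
    return (z_to - z_from) % M if direction == +1 else (z_from - z_to) % M

def swept_arc(z_from, direction, dist, M):
    """Ring positions a drone sweeps, sampled at unit resolution."""
    return {(z_from + direction * s) % M for s in range(int(dist) + 1)}

def allocate(drones, targets, M):
    """Brute-force search over (assignment, direction) allocations,
    minimising the maximum travel distance, subject to a simplified
    collision check: opposite-direction swept arcs must not overlap."""
    best, best_cost = None, float('inf')
    for perm in permutations(targets):
        for dirs in product((+1, -1), repeat=len(drones)):
            dists = [ring_dist(z, t, d, M)
                     for z, t, d in zip(drones, perm, dirs)]
            arcs = [swept_arc(z, d, dist, M)
                    for z, d, dist in zip(drones, dirs, dists)]
            ok = all(not (arcs[i] & arcs[j]) or dirs[i] == dirs[j]
                     for i in range(len(drones))
                     for j in range(i + 1, len(drones)))
            if ok and max(dists) < best_cost:
                best, best_cost = (perm, dirs), max(dists)
    return best, best_cost

# Two drones at z = 0 and z = 8 on a hull of perimeter 16,
# steering points at z = 4 and z = 12 (all values hypothetical).
print(allocate([0, 8], [4, 12], 16))  # max travel distance 4
```

For the small drone counts considered in the paper, this O(k! · 2^k) enumeration is entirely affordable.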
Suppose that the gathering task starts at . The proposed herding system first navigates all the barking drones to the extended hull by the Fly to edge guidance law. Then, the system calculates the optimal steering point allocation after every sampling time and navigates the barking drones to their allocated steering points by the Fly on edge guidance law, until (7) is satisfied, that is, . It is worth mentioning that the optimal allocation may change before some drones arrive at their allocated steering points due to the animals' movement. The gathering task, however, will not be interrupted, because as long as the barking drones are flying on the extended hull, the animals inside the drones' barking cones will be repulsed and move towards .
5. Drone Motion Control for Driving
Suppose that (7) is satisfied at ; the goal then becomes driving the gathered animals to a desired location, for example, the centre of a sheepfold. The convex hull of the gathered animals will be close to a circle, so for simplicity, from now on, we use the smallest enclosing circle to describe the footprint of the gathered animals. Let and be the radius and centre of the animals' smallest enclosing circle during driving, respectively. Similar to the definition of the extended hull, we define the extended circle as a circle with a larger radius centred at .
According to (7), and when . Imagine a point moving from to G with a constant speed when , where denotes the time when the driving task is finished (i.e., (8) is satisfied). Given and , can be computed by:
We aim to drive the animals so that follows moving from to G, with as the driving speed. Note that a smaller is preferred for a larger number of animals , because a larger flock of animals tends to move more slowly. To this end, we adopt a side-to-side movement for the barking drones, which is a common animal driving strategy that can also be seen in [15,42], and so forth. Let be the perpendicular to that passes through . Let be the semicircle of the extended circle cut by that is farther from G; see Figure 9. Let be the set of points evenly distributed on . Each drone j is then deployed to fly on with and as its start and end points, respectively, as shown in Figure 9. With approaching G, the side-to-side movements of the barking drones can 'push' the animals towards G while keeping them aggregated.

Given and , can be computed by solving the following equations:
If
is an even number,
where
Specifically, once (7) is satisfied, all the barking drones immediately fly to the extended circle following a guidance law similar to Fly to edge introduced in Section 4.1. It is worth mentioning that the extended polygon is inscribed in the extended circle, so the process of the drones flying to their closest points on the extended circle will not disperse any animal. After reaching the extended circle, the barking drones fly to their allocated start points in following a guidance law similar to Fly on edge introduced in Section 4.2. The allocation of can be found by an algorithm similar to Algorithm 2 introduced in Section 4.3. Then, drone j continuously flies between and along , as shown in Figure 9, until (8) is satisfied.
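The placement of the side-to-side sweep arcs can be sketched as follows: the semicircle of the extended circle on the far side from G is split into n equal arcs, one per drone. The centre, radius, heading angle and waypoint parametrisation below are illustrative assumptions, not the paper's exact equations:

```python
import math

def sweep_waypoints(center, radius, heading, n):
    """Endpoints of the side-to-side sweep arcs for n barking drones.

    center, radius: the extended circle around the gathered flock.
    heading: direction (radians) from the flock centre towards the goal G.
    The far semicircle (behind the flock relative to `heading`) is split
    into n equal arcs; each drone oscillates along its own arc.
    """
    base = heading + math.pi / 2  # far semicircle spans pi radians
    arcs = []
    for j in range(n):
        a0 = base + math.pi * j / n
        a1 = base + math.pi * (j + 1) / n
        p0 = (center[0] + radius * math.cos(a0), center[1] + radius * math.sin(a0))
        p1 = (center[0] + radius * math.cos(a1), center[1] + radius * math.sin(a1))
        arcs.append((p0, p1))
    return arcs

# Hypothetical flock at the origin, unit extended circle, driving towards +x.
for start, end in sweep_waypoints((0.0, 0.0), 1.0, 0.0, 2):
    print(start, '->', end)
```

Each drone repeatedly flies between its `start` and `end` waypoints while the circle's centre tracks the moving reference point towards G.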
6. Results
In this section, the performance of the proposed method is evaluated using MATLAB. Each simulation is run 20 times. The animal motion dynamics parameters are chosen based on the field tests with real sheep conducted in [15], as shown in Table 2. Table 2 also lists the parameters of the barking drones used unless otherwise specified.
For comparison, we introduce an intuitive collision-free method as the benchmark. Specifically, at any time during gathering, the benchmark method divides the extended hull into segments of the same length. Each drone is allocated to a segment and performs the aforementioned side-to-side movement on the extended hull, until (7) is satisfied. The benchmark method adopts the same driving strategy as the proposed method. We consider that the animals are initially randomly distributed in a field of 1200 m by 600 m.
We first present some illustrative results showing 4 barking drones in two cases, herding 200 and 1000 animals, respectively; see https://youtu.be/KMWxrlkU6t0 (accessed on 8 December 2021) and https://youtu.be/KPGrAcgPH8Q (accessed on 8 December 2021). We can observe that the proposed method completes the gathering task in 9.5 min for the instance with 200 animals and 10.1 min for the case with 1000 animals. The total times for gathering and driving are 15 and 18.2 min for these cases, respectively. The benchmark method, however, uses about 4.9 and 4.1 more minutes to complete these missions.
Figure 10a shows how the animals' footprint radius changes with time t for these cases. We also present snapshots at , and min for the case of 1000 animals in Figure 10b–f.
Interestingly, Figure 10a reveals that the difference between the times for gathering 200 animals and 1000 animals is not that pronounced, for both the proposed method and the benchmark method. The proposed method, however, always uses less time to complete the gathering mission. This is because the proposed method always chases and repulses the animals that are farthest from the centre, while the benchmark method repulses the animals indiscriminately. Therefore, the animals' footprint with the proposed method becomes increasingly round as it shrinks, while the animals' footprint with the benchmark method becomes long and narrow. This can be observed by comparing Figure 10c,e and Figure 10d,f.
Note that the time spent flying to the edge and performing the driving task mainly depends on the initial locations of the drones and the animals. From now on, we focus on evaluating the average gathering time after the drones have arrived on the extended hull. The aforementioned minor difference between herding 200 and 1000 animals very likely arises because the gathering time is strongly correlated with the size of the initial field, rather than with the number of animals. To confirm this, we change the initial field to a square and investigate the relationship between the gathering time and the side length of the initial square field; see Figure 11a. It reveals that the average gathering time increases significantly with the side length, which supports the conjecture that the gathering time is strongly correlated with the size of the initial field. The reason is that the gathering time mainly depends on the movement of the animals on the edge, and in particular on the time they need to travel to the area close to the centre. With the centre fixed and the same repulsion from the barking drones, the travelling distances of these animals are dominated by the size of the initial field. Moreover,
Figure 11a shows that the difference between the gathering time of the benchmark method and that of the proposed method increases with the side length of the initial square field. That is, the benchmark method is more ‘sensitive’ to the varying field size. We further investigate the relationship between the gathering time and the number of barking drones; see Figure 11b. Not surprisingly, the average gathering time decreases significantly as the number of drones increases, for both methods. Besides, Figure 11b shows that the superiority of the proposed method becomes more obvious as the number of barking drones increases.
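The claim that the gathering time tracks the size of the initial field admits a back-of-envelope check. The function below is a crude lower-bound model that we introduce purely for illustration (it is not from the paper): it assumes a corner animal of the square field walks straight towards a small central goal region at its maximum speed.

```python
import math

def edge_travel_time_min(field_length_m, animal_speed_mps, goal_radius_m):
    """Crude lower bound (in minutes) on the gathering time: the
    straight-line walk of a corner animal of a square field to within
    goal_radius_m of the centre, at its maximum speed."""
    half_diagonal = math.sqrt(2.0) * field_length_m / 2.0
    travel = max(half_diagonal - goal_radius_m, 0.0)
    return travel / animal_speed_mps / 60.0
```

Under this toy model, doubling the field side roughly doubles the bound, consistent with the monotone growth of gathering time with field size discussed above.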
Next, we investigate the impact of the drone speed and the animal speed on the gathering time; see Figure 12. Figure 12a shows that slower drones lead to a longer average gathering time, especially when the maximum drone speed is small, for both the benchmark method and the proposed method. Moreover, the average gathering time of the benchmark method is more ‘sensitive’ to the drone speed in this low-speed regime. In addition, in our simulations, drones whose maximum speed is too low cannot accomplish the gathering task at all under the benchmark method. For the implementation of the proposed method, a sufficiently high maximum drone speed is therefore preferable. Furthermore, the average gathering time of the proposed method decreases as the maximum drone speed increases; the percentage of the reduction, however, is not significant once the drones are already fast.
Figure 12b shows that animals with a higher maximum speed can be gathered in a shorter time. In particular, with the proposed method, the average gathering time reduces by around 38% (from 14.7 min to 9.1 min) when the maximum animal speed increases by 150% (from 2 m/s to 5 m/s). For the benchmark method, the average gathering time reduces by around 39% (from 20.8 min to 12.7 min). Therefore, the relative reduction of the average gathering time is much smaller than the relative increase of the animal speed over the range from 2 m/s to 5 m/s, for both methods.
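The percentage comparison follows directly from the reported minutes; a quick check (the minute values are from the text, the helper name is ours):

```python
def percent_reduction(before, after):
    """Relative reduction from `before` to `after`, in percent."""
    return 100.0 * (before - after) / before

# values reported in the text
print(percent_reduction(14.7, 9.1))    # proposed method, ~38%
print(percent_reduction(20.8, 12.7))   # benchmark method, ~39%
print(100.0 * (5.0 - 2.0) / 2.0)       # animal-speed increase, 150%
```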
We next investigate the impact of the barking cone radius, the drone-to-animal distance, and the angle of the barking cone on the gathering time; see Figure 13. Figure 13a presents the relationship between the barking radius and the gathering time with 200 animals and 1000 animals, respectively. We can observe that increasing the barking radius accelerates the gathering up to a certain radius. Beyond that radius, however, the average gathering time increases with the barking radius, which contradicts our expectation. One possible reason is that the gathering time mainly depends on the animals on the edges. If the barking radius is too large, it may cause mutual interference between the repulsive forces inflicted by the barking drones, which may slow down the gathering. Moreover, Figure 13a shows that, in this large-radius regime, the proposed method is more ‘sensitive’ to the barking radius when gathering more animals. This is because the proportion of repulsed animals near the edges tends to decrease as the number of animals increases, for a fixed barking radius.
Figure 13b suggests that the average gathering time decreases as the drone-to-animal distance increases, up to a certain distance. One possible reason is that, when this distance is too small, more animals are repulsed in directions that do not point towards the centre, since the repulsive force from a barking drone points directly away from the drone and the barking zone is fan-shaped; this can also be regarded as a form of interference. Beyond that distance, however, increasing the drone-to-animal distance decelerates the gathering, and this becomes more obvious with more animals. This is reasonable because, with the cone geometry otherwise fixed, increasing the drone-to-animal distance is almost equivalent to decreasing the barking radius.
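The fan-shaped barking zone and the “push directly away from the drone” repulsion can be sketched geometrically. This is an illustrative sketch under our own naming; the paper’s exact force model (magnitudes, decay with distance) is not reproduced here.

```python
import math

def in_barking_cone(drone_pos, heading, animal_pos, radius, half_angle):
    """True if the animal lies inside the fan-shaped barking zone:
    within `radius` of the drone and within `half_angle` (radians)
    of the drone's heading."""
    dx = animal_pos[0] - drone_pos[0]
    dy = animal_pos[1] - drone_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > radius:
        return False
    diff = abs(math.atan2(dy, dx) - heading)
    diff = min(diff, 2.0 * math.pi - diff)  # wrap into [0, pi]
    return diff <= half_angle

def repulsion_direction(drone_pos, animal_pos):
    """Unit vector from the drone towards the animal: the repulsive
    force pushes the animal directly away from the barking drone."""
    dx = animal_pos[0] - drone_pos[0]
    dy = animal_pos[1] - drone_pos[1]
    d = math.hypot(dx, dy)
    return (dx / d, dy / d)
```

An animal caught near the side of the cone is pushed along the drone-to-animal ray rather than towards the flock centre, which illustrates the interference effect discussed above for small drone-to-animal distances.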
Figure 13c indicates that the gathering time of the proposed method is significantly less than that of the benchmark method across the tested cone angles, with both 200 animals and 1000 animals. It can also be seen from Figure 13c that the gathering time is not monotone in the cone angle; one of the tested angles, however, yields a slightly shorter gathering time than the other two, for both the proposed method and the benchmark method.
We are also interested in the impact of measurement errors from the ‘observer’ on our method. We add random noise to the measured positions of the animals, with noise amplitudes ranging from 2 to 10 m, and conduct 20 independent simulations with 200 animals and with 1000 animals for each amplitude. The results are shown in Figure 14. The average impact of the measurement error on the gathering time is relatively small: Figure 14 shows that the average gathering time increases only slightly as the measurement error grows from 2 to 10 m, in both cases. For example, from no measurement error to a 10 m error, the average gathering time for 1000 animals increases from 11.7 min to 12.6 min, a difference of less than 1 min. Moreover, the impact of measurement errors on the average gathering time is even less significant in the case of 200 animals (see Figure 14).
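The noise model used in these runs can be sketched as follows. The text states the amplitude range (2 to 10 m) but not the distribution, so the independent uniform per-axis perturbation below is our assumption, and the function name is ours.

```python
import random

def add_measurement_noise(positions, amplitude, rng=None):
    """Perturb each measured (x, y) position by independent noise
    drawn uniformly from [-amplitude, amplitude] metres on each axis
    (an assumed model; the source does not state the distribution)."""
    rng = rng or random.Random()
    return [(x + rng.uniform(-amplitude, amplitude),
             y + rng.uniform(-amplitude, amplitude))
            for (x, y) in positions]
```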
In summary, we have presented computer simulation results in this section to demonstrate the performance of the proposed method. These results confirm that the proposed method can efficiently herd a large number of farm animals and outperforms the benchmark method. By investigating the impact of the system parameters, we find that a higher drone speed leads to a shorter gathering time. The barking cone radius and the drone-to-animal distance also significantly affect the gathering time; their optimal values can be determined via experiments on real animals.