Article

Multi-Rotor Drone-Based Thermal Target Tracking with Track Segment Association for Search and Rescue Missions

Department of Artificial Intelligence, Daegu University, Gyeongbuk 38453, Republic of Korea
Drones 2024, 8(11), 689; https://doi.org/10.3390/drones8110689
Submission received: 10 October 2024 / Revised: 13 November 2024 / Accepted: 18 November 2024 / Published: 19 November 2024

Abstract

Multi-rotor drones have expanded their range of applications, one of which is search and rescue (SAR) missions using infrared thermal imaging. This paper addresses thermal target tracking with track segment association (TSA) for SAR missions. Three types of associations, including TSA, are developed with an interacting multiple model (IMM) approach. During multiple-target tracking, tracks are initialized, maintained, and terminated. There are three different associations in track maintenance: measurement–track association, track–track association for tracks that exist at the same time (track association and fusion), and track–track association for tracks that exist at separate times (TSA). Measurement–track association selects the statistically nearest measurement and updates the track with that measurement through the IMM filter. Track association and fusion fuses redundant tracks for the same target that are spatially separated. TSA connects tracks that have become broken and separated over time. This process is accomplished through the selection of candidate track pairs, backward IMM filtering, association testing, and an assignment rule. In the experiments, a drone was equipped with an infrared thermal imaging camera, and two thermal videos were captured of three people in a non-visible environment. The three hikers were located close together and occluded by each other or by other obstacles in the mountains. The drone was allowed to move arbitrarily. The tracking results were evaluated by the average total track life, average mean track life, and average track purity. Track segment association improved the average mean track life of the two videos by 99.8% and 250%, respectively.

1. Introduction

Multi-rotor drones have extended their application to diverse domains in commerce, military, and industrial areas [1,2,3,4]. They can cover large and hazardous areas quickly and provide real-time data when controlled remotely or autonomously [5]. A multi-rotor drone's ability to hover at an arbitrary altitude and position enables difficult tasks to be performed simply and easily [6]. This operation is also cost-effective and requires few personnel [7]. Multi-rotor drones with standard cameras are often employed for detection [8,9] and target tracking [10,11,12,13,14,15,16,17,18,19].
Infrared thermal imaging detects infrared radiation emitted by warm objects and generates a gray-scale image. Thus, thermal imaging can be used effectively when visibility is low and temperature sensing is important. However, this technology offers limited image quality compared to visible-light imaging [20]. Various techniques have been developed for thermal target tracking [21].
In SAR missions, drone-based thermal imaging offers significant advantages in locating people in low-visibility conditions [22,23]. Other applications include agriculture [24,25], wildlife monitoring [26], non-invasive inspection [5], firefighting [27], and surveillance [28].
Drone-based people detection with thermal imaging was studied in Refs. [29,30,31]. Persons, animals, and cars were detected through the YOLO approach in Refs. [29,30]. A two-stage detection method recognized persons in Ref. [31]. Object detection and tracking on the sea surface was demonstrated using a thermal camera mounted on an unmanned aerial vehicle (UAV) in Ref. [32]. However, in Ref. [32], non-living objects were tracked by a fixed-wing drone. A multimodal fusion tracker that utilizes both RGB and thermal data was proposed in Ref. [33].
Multi-rotor drone-based people tracking with thermal imaging was performed in Ref. [34]. The YOLOv5 detection model was adopted, and the bounding box gating was proposed in Ref. [35]. However, in Ref. [35], the number of track breakages increased due to the high maneuvering of the target or platform.
A multiple-target tracker estimates the dynamic state of multiple targets based on observations. Observations often result in corrupted measurements due to noise, missed detections, and false alarms. In order to overcome the imperfection of observation, state estimators, such as Kalman filtering or the interacting multiple model (IMM) approach [36,37], are often adopted. A track can be viewed as a set of state estimates based on measurements. Therefore, it is important to associate measurements of the same targets with tracks. Measurement–track association and track association and fusion facilitate maintenance of the same measurement source in a track [38,39,40].
Tracks can be broken for various reasons, such as wrong measurement-to-track associations, long missed detections, or high maneuvering of the target or platform. In order to connect broken tracks, i.e., track segments, track segment association (TSA) was first proposed and applied to an airborne early warning system (AEW) scenario in Ref. [41].
In this paper, multi-rotor drone-based thermal target tracking for SAR missions is addressed. There are three types of associations in track maintenance: measurement–track association, track–track association for tracks that exist at the same time (track association and fusion), and track–track association for tracks that exist at separate times (TSA). These three associations, along with track initialization and termination, aim to generate a unique, pure, and continuous track for each target.
Figure 1 shows the framework of multiple-target tracking following object detection. The detection results are input into the target tracker as measured positions. In this study, object detection was performed by the YOLOv5 model, the fifth iteration of a popular deep learning framework for object detection [42]. The same videos and YOLOv5x detection results from Ref. [35] were utilized since this study aims to develop a target tracker and compare it with the previous study.
The contributions of this study are as follows: (1) TSA is further developed and clarified from Ref. [41]; the sequential steps of TSA are newly defined as the selection of candidate track pairs, backward filtering, association testing of forward and backward estimates, and an assignment rule. This technology significantly reduces the number of track breakages and increases track continuity. (2) The framework that combines three associations for track maintenance is completed with the IMM approach. The IMM filter shows robust performance on high-maneuvering targets [41]. It is used for both forward and backward filtering in this study. This framework is also shown to work effectively even when the platform moves rapidly in a complex background. The contributions above differentiate this study from the previous work [35], where only Kalman filtering was applied and track continuity was low due to track breakages.
In the experiments, a small drone-mounted thermal imaging camera captured three hikers at night. The hikers were closely located and heavily occluded by other people, trees, or leaves, and the drone could move quickly in any direction. The average total track life (TTL) of the two videos is 0.998 and 0.931, respectively. The average mean track life (MTL) of the two videos is 0.998 and 0.584, respectively. The average track purity (TP) of the two videos is 1 and 0.982, respectively. The average TTL is improved by track association and fusion, and the average MTL is improved by TSA. The evaluation metrics, TTL, MTL, and TP, were originally defined in Ref. [41] and slightly modified in Refs. [35,39,40].
The paper is structured as follows: Section 2 presents multiple-target tracking. The experiments are demonstrated in Section 3, and the discussion is in Section 4. Conclusions follow in Section 5.

2. Multiple-Target Tracking

Figure 2 shows the entire process of multiple-target tracking. This section describes each part in detail but briefly covers track initialization and track association and fusion.

2.1. System Modeling

Targets are described by the multi-mode dynamic equation, which follows the nearly constant velocity motion as
$$x^t(k) = F(\tau)\, x^t(k-1) + q(\tau)\, v_j(k-1), \quad j = 1, \dots, M, \quad t = 1, \dots, N_T(k), \tag{1}$$

$$F(\tau) = \begin{bmatrix} 1 & \tau & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \tau \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad q(\tau) = \begin{bmatrix} \tau^2/2 & 0 \\ \tau & 0 \\ 0 & \tau^2/2 \\ 0 & \tau \end{bmatrix}, \tag{2}$$

where the state vector $x^t(k) = [x^t(k)\ \ v_x^t(k)\ \ y^t(k)\ \ v_y^t(k)]^T$ comprises the positions, $x^t(k)$ and $y^t(k)$, and the velocities, $v_x^t(k)$ and $v_y^t(k)$, in a two-dimensional plane; the superscript $T$ denotes the transpose of the matrix; $\tau$ is the frame interval; and $v_j(k)$ is a process noise vector. It is assumed that the process noise follows a zero-mean white Gaussian distribution. The covariance matrix of the process noise vector is $Q_j = \mathrm{diag}[\sigma_{jx}^2\ \ \sigma_{jy}^2]$. $M$ is the number of modes of the IMM filter. The target is modeled as a multi-state representing different modes in the IMM approach. $N_T(k)$ is the number of targets at frame $k$. Ideally, the number of targets $N_T(k)$ is equal to the number of generated tracks. The measurement equation represents observations as

$$z^t(k) = \begin{bmatrix} z_x^t(k) \\ z_y^t(k) \end{bmatrix} = H\, x^t(k) + w(k), \tag{3}$$

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \tag{4}$$

where $w(k)$ is a measurement noise vector. It is also assumed to be zero-mean white Gaussian. Its covariance matrix is $R = \mathrm{diag}[r_x^2\ \ r_y^2]$.
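To make the model concrete, the following NumPy sketch builds the matrices of Equations (1)–(4). The helper name `system_matrices` is hypothetical, and the noise values in the usage example are taken from Table 1 for Video 1; this is an illustrative sketch, not the paper's code.

```python
import numpy as np

def system_matrices(tau: float):
    # Nearly-constant-velocity model, state x = [x, vx, y, vy]^T (Eqs. (1)-(4)).
    F = np.array([[1.0, tau, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, tau],
                  [0.0, 0.0, 0.0, 1.0]])
    q = np.array([[tau**2 / 2, 0.0],
                  [tau,        0.0],
                  [0.0, tau**2 / 2],
                  [0.0,        tau]])
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    return F, q, H

# Usage with the Video 1 settings of Table 1 (tau = 0.067 s):
F, q, H = system_matrices(tau=0.067)
Q = np.diag([2.5**2, 2.5**2])   # process noise covariance Q_j (sigma = 2.5 m/s^2)
R = np.diag([0.5**2, 0.5**2])   # measurement noise covariance R (r = 0.5 m)
```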

2.2. Track Initialization

Two observed positions in consecutive frames initialize a track. The two positions satisfy the following speed gating:

$$\left\| \begin{bmatrix} z_x^t(k) - z_x^t(k-1) \\ z_y^t(k) - z_y^t(k-1) \end{bmatrix} \right\| \le \Delta\, V_{max}, \tag{5}$$

where $\|\cdot\|$ denotes the $l_2$ norm, $\Delta$ is the frame interval between the two positions, and $V_{max}$ is the maximum target speed allowed for the initialization. The detailed initialization process is described in Refs. [38,39,40].
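A minimal sketch of the gate in Equation (5), reading Δ as the frame interval between the two detections; the function name and default values are assumptions (V_max = 3 m/s as in Table 1).

```python
import numpy as np

def init_gate(z_curr, z_prev, delta=0.067, v_max=3.0):
    # Two consecutive detections may seed a track only if the implied
    # speed does not exceed V_max (Eq. (5)).
    return np.linalg.norm(np.asarray(z_curr) - np.asarray(z_prev)) <= delta * v_max
```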

2.3. Measurement–Track Association with Forward Filtering

During measurement–track association, the IMM filter is adopted as a state estimator to handle multi-modal target states. The IMM filter consists of multi-mode interaction, mode-matched Kalman filtering, and track update.

2.3.1. Multi-Mode Interaction

The multi-mode interaction is essential in the IMM estimator; the state and covariance of target $t$ for mode $j$ at the previous frame $k-1$ are mixed to generate the interacted states, interacted covariances, and interacted mode probabilities as

$$\hat{x}_{0j}^t(k-1|k-1) = \sum_{i=1}^{M} \hat{x}_i^t(k-1|k-1)\, \mu_{i|j}^t(k-1|k-1), \tag{6}$$

$$P_{0j}^t(k-1|k-1) = \sum_{i=1}^{M} \mu_{i|j}^t(k-1|k-1) \left\{ P_i^t(k-1|k-1) + \left[ \hat{x}_i^t(k-1|k-1) - \hat{x}_{0j}^t(k-1|k-1) \right] \left[ \hat{x}_i^t(k-1|k-1) - \hat{x}_{0j}^t(k-1|k-1) \right]^T \right\}, \tag{7}$$

$$\mu_{i|j}^t(k-1|k-1) = \frac{p_{ij}\, \mu_i^t(k-1)}{\sum_{i=1}^{M} p_{ij}\, \mu_i^t(k-1)}, \quad i, j = 1, \dots, M, \tag{8}$$

where $\hat{x}_i^t(k-1|k-1)$, $P_i^t(k-1|k-1)$, and $\mu_i^t(k-1)$ are, respectively, the state, covariance, and mode probability. The initial mode probabilities are set to $1/M$. $p_{ij}$ is the mode transition probability from mode $i$ to mode $j$. The mode transition probability matrix is set to $\begin{bmatrix} 0.8 & 0.2 \\ 0.3 & 0.7 \end{bmatrix}$.
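The mixing step of Equations (6)–(8) reduces to a column-normalized weighting of the mode-conditioned estimates. Below is a sketch under an assumed array layout; the function name is illustrative.

```python
import numpy as np

def imm_mix(states, covs, mu, p_trans):
    # states: (M, 4), covs: (M, 4, 4), mu: (M,) mode probabilities at k-1,
    # p_trans: (M, M) with element (i, j) = p_ij.
    M = len(mu)
    mix = p_trans * mu[:, None]              # numerator of Eq. (8): p_ij * mu_i
    mix /= mix.sum(axis=0, keepdims=True)    # normalize over i -> mu_{i|j}
    x0 = mix.T @ states                      # Eq. (6): x0[j] = sum_i mu_{i|j} x_i
    P0 = np.zeros_like(covs)
    for j in range(M):                       # Eq. (7): mixed covariances
        for i in range(M):
            d = states[i] - x0[j]
            P0[j] += mix[i, j] * (covs[i] + np.outer(d, d))
    return x0, P0
```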

2.3.2. Mode-Matched Kalman Filtering with the Nearest Neighbor Measurement

The state and covariance of target t at frame k are updated as
$$\hat{x}_j^t(k|k) = \hat{x}_j^t(k|k-1) + W_j^t(k)\, \nu_{\hat{m}_j^t(k)}^{jt}(k), \tag{9}$$

$$P_j^t(k|k) = P_j^t(k|k-1) - W_j^t(k)\, S_j^t(k)\, W_j^t(k)^T, \tag{10}$$

where $\hat{x}_j^t(k|k-1)$ and $P_j^t(k|k-1)$, respectively, are the state and covariance prediction of target $t$ as

$$\hat{x}_j^t(k|k-1) = F(\tau)\, \hat{x}_{0j}^t(k-1|k-1), \tag{11}$$

$$P_j^t(k|k-1) = F(\tau)\, P_{0j}^t(k-1|k-1)\, F(\tau)^T + q(\tau)\, Q_j\, q(\tau)^T. \tag{12}$$

The filter gain $W_j^t(k)$ in Equations (9) and (10) and the residual covariance $S_j^t(k)$ in Equation (10) are, respectively, obtained as

$$W_j^t(k) = P_j^t(k|k-1)\, H^T\, S_j^t(k)^{-1}, \tag{13}$$

$$S_j^t(k) = H\, P_j^t(k|k-1)\, H^T + R. \tag{14}$$

The measurement residue $\nu_m^{jt}(k)$ in Equation (9) is

$$\nu_m^{jt}(k) = z_m(k) - H\, \hat{x}_j^t(k|k-1), \tag{15}$$

where $z_m(k)$ is the $m$-th position measurement. The nearest measurement associated with the $j$-th mode filter of target $t$ at frame $k$ minimizes the following statistical distance squared:

$$\hat{m}_j^t(k) = \arg\min_{m = 1, \dots, N(k)} \nu_m^{jt}(k)^T\, S_j^t(k)^{-1}\, \nu_m^{jt}(k), \tag{16}$$

where $N(k)$ is the number of measurements at frame $k$. The selected measurement must pass the following measurement gating and speed gating:

$$\nu_{\hat{m}_j^t(k)}^{jt}(k)^T\, S_j^t(k)^{-1}\, \nu_{\hat{m}_j^t(k)}^{jt}(k) \le \gamma_m \;\;\&\;\; \left\| z_{\hat{m}_j^t(k)}(k) - \begin{bmatrix} \hat{x}^t(k-1|k-1) \\ \hat{y}^t(k-1|k-1) \end{bmatrix} \right\| \le \tau\, S_{max}, \tag{17}$$

where $\gamma_m$ is the threshold for the measurement gating, and $S_{max}$ is the maximum target speed. The measurement gating is equivalent to chi-square hypothesis testing of the normalized innovation under the independent Gaussian assumption. After the measurement–track association, the remaining measurements move to the initialization step in Section 2.2. The state and covariance of the tracks for which no measurements are associated are merely the predictions:

$$\hat{x}_j^t(k|k) = \hat{x}_j^t(k|k-1), \tag{18}$$

$$P_j^t(k|k) = P_j^t(k|k-1). \tag{19}$$
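The following sketch strings together Equations (9)–(19) for one mode-matched filter with nearest-neighbor selection and measurement gating. The speed gating of Equation (17) is omitted for brevity, and the signature is an assumption rather than the paper's implementation.

```python
import numpy as np

def kf_mode_step(x0j, P0j, F, q, Qj, H, R, measurements, gamma_m=4.0):
    # measurements: (N, 2) array of detected positions at frame k.
    x_pred = F @ x0j                                  # Eq. (11)
    P_pred = F @ P0j @ F.T + q @ Qj @ q.T             # Eq. (12)
    S = H @ P_pred @ H.T + R                          # Eq. (14)
    S_inv = np.linalg.inv(S)
    W = P_pred @ H.T @ S_inv                          # Eq. (13)
    if len(measurements) > 0:
        nu = measurements - H @ x_pred                # residues, Eq. (15)
        d2 = np.einsum('mi,ij,mj->m', nu, S_inv, nu)  # statistical distances
        m_hat = int(np.argmin(d2))                    # Eq. (16)
        if d2[m_hat] <= gamma_m:                      # measurement gating, Eq. (17)
            x_upd = x_pred + W @ nu[m_hat]            # Eq. (9)
            P_upd = P_pred - W @ S @ W.T              # Eq. (10)
            return x_upd, P_upd, m_hat
    return x_pred, P_pred, None                       # no association, Eqs. (18)-(19)
```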

2.3.3. Track Update

Finally, the state and covariance of target $t$ at frame $k$ are updated as

$$\hat{x}^t(k|k) = \sum_{j=1}^{M} \hat{x}_j^t(k|k)\, \mu_j^t(k), \tag{20}$$

$$P^t(k|k) = \sum_{j=1}^{M} \mu_j^t(k) \left\{ P_j^t(k|k) + \left[ \hat{x}_j^t(k|k) - \hat{x}^t(k|k) \right] \left[ \hat{x}_j^t(k|k) - \hat{x}^t(k|k) \right]^T \right\}. \tag{21}$$

The mode probability $\mu_j^t(k)$ in the equations above is

$$\mu_j^t(k) = \frac{\Lambda_j^t(k) \sum_{i=1}^{M} p_{ij}\, \mu_i^t(k-1)}{\sum_{j=1}^{M} \Lambda_j^t(k) \sum_{i=1}^{M} p_{ij}\, \mu_i^t(k-1)}, \tag{22}$$

$$\Lambda_j^t(k) = \mathcal{N}\!\left( 0;\ \nu_{\hat{m}_j^t(k)}^{jt}(k),\ S_j^t(k) \right), \tag{23}$$

where $\mathcal{N}$ is the Gaussian probability density function. It should be noted that the likelihood function in Equation (23) is assumed to be constant if no measurement is associated, in which case the mode probability becomes

$$\mu_j^t(k) = \sum_{i=1}^{M} p_{ij}\, \mu_i^t(k-1). \tag{24}$$
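A sketch of the mode-probability update and estimate combination of Equations (20)–(24), using SciPy for the Gaussian likelihood; the `None` convention for unassociated modes is an assumption of this sketch, not the paper's code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def imm_combine(x_upd, P_upd, mu_prev, p_trans, residues, S):
    # residues[j]: gated residue for mode j, or None if no measurement was associated.
    M = len(mu_prev)
    c = p_trans.T @ mu_prev                     # sum_i p_ij mu_i(k-1)
    lam = np.array([multivariate_normal.pdf(residues[j], mean=np.zeros(2), cov=S[j])
                    if residues[j] is not None else 1.0   # constant likelihood, Eq. (24)
                    for j in range(M)])         # Eq. (23)
    mu = lam * c
    mu /= mu.sum()                              # Eq. (22)
    x = sum(mu[j] * x_upd[j] for j in range(M))           # Eq. (20)
    P = sum(mu[j] * (P_upd[j] + np.outer(x_upd[j] - x, x_upd[j] - x))
            for j in range(M))                  # Eq. (21)
    return x, P, mu
```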

2.4. Track–Track Association at the Same Time (Track Association and Fusion)

A track fusion method was developed to associate redundant tracks generated in a multi-sensor environment [43]. The same technology has been applied to reduce redundant tracks in the case of a single sensor [38]. It was further improved with directional gating in Ref. [39] and a track selection process in Ref. [40]. The detailed procedures for track association and fusion can be found in Ref. [40]. This track association and fusion has been shown to work well with the nearest-neighbor measurement–track association scheme by fusing the redundant tracks generated by multiple associations between a single measurement and multiple tracks [35].

2.5. Track–Track Association at Different Times (Track Segment Association)

In the previous subsection, track fusion occurs on tracks that exist simultaneously. However, TSA occurs on tracks that exist at separate times. TSA consists of the selection of candidate track pairs, backward filtering, association testing, and an assignment rule.

2.5.1. Candidate Track Pairs

An old track and a young track become a candidate track pair under the following conditions: (1) the old track has already been terminated and is valid, that is, its update number with measurements is larger than or equal to the threshold; (2) the young track has not been terminated, and its update number with measurements is less than the threshold and larger than or equal to half of it; (3) the young track start frame minus 1 is larger than the old track end frame, considering the two-frame differencing initialization; and (4) the gap interval between the old track end frame and the young track start frame is less than or equal to the threshold.
Figure 3 illustrates two old and three young tracks. The maximum number of consecutive frames without measurements before track termination is set to two frames and the threshold is set to four frames. Tracks O1 and O2 can be old tracks from Frames 7 and 8, respectively. Tracks Y1, Y2, and Y3 can be young tracks for Frames 8–9, 7–8, and 11, respectively. Therefore, the candidate track pair is only {O1, Y2} at Frame 7. At Frame 8, the candidate track pairs are {O1, Y1}, {O1, Y2}, and {O2, Y1}; {O2, Y2} cannot be a candidate because they are too close to each other. At Frame 9, the candidate track pairs are {O1, Y1} and {O2, Y1}. At Frame 10, no track pair is found. At Frame 11, only the track pair {O2, Y3} is a candidate track pair; {O1, Y3} cannot be a candidate because the gap interval between two tracks is longer than the threshold.
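The four conditions for a candidate track pair can be codified as a predicate over a pair of track records. The field names (`.terminated`, `.n_updates`, `.start_frame`, `.end_frame`) are assumed for illustration; the thresholds follow Table 1.

```python
def is_candidate_pair(old, young, n_min=30, gap_max=30):
    # Conditions (1)-(4) of Section 2.5.1 (sketch under an assumed track structure).
    return (old.terminated and old.n_updates >= n_min          # (1) old track valid
            and not young.terminated
            and n_min // 2 <= young.n_updates < n_min          # (2) young track short
            and young.start_frame - 1 > old.end_frame          # (3) two-frame init gap
            and young.start_frame - old.end_frame <= gap_max)  # (4) gap within limit
```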

2.5.2. Backward Filtering

Once a candidate track pair is selected from the current frame, the backward filtering in the young track starts from the current frame to the end frame of the old track. The backward filtering is divided into two types: backward updates and backward predictions. Backward updates are performed on frames where the measurements are already associated, while backward predictions are performed elsewhere. In Figure 3, there are two backward updates at Frames 10 and 9 and four backward predictions from Frames 8 to 5 when a candidate track pair {O2, Y3} is considered and Frame 11 is the current frame.
The backward IMM filtering is identical to the forward filtering, except that the sampling time is negative and the measurements already associated during the forward filtering are reused. The multi-mode interaction is the first step of the backward filtering:

$$\hat{x}_{0j}^s(k|k) = \sum_{i=1}^{M} \hat{x}_i^s(k|k)\, \mu_{i|j}^s(k|k), \quad s = 1, \dots, N_y(k), \tag{25}$$

$$P_{0j}^s(k|k) = \sum_{i=1}^{M} \mu_{i|j}^s(k|k) \left\{ P_i^s(k|k) + \left[ \hat{x}_i^s(k|k) - \hat{x}_{0j}^s(k|k) \right] \left[ \hat{x}_i^s(k|k) - \hat{x}_{0j}^s(k|k) \right]^T \right\}, \tag{26}$$

$$\mu_{i|j}^s(k|k) = \frac{p_{ij}\, \mu_i^s(k)}{\sum_{i=1}^{M} p_{ij}\, \mu_i^s(k)}, \quad i, j = 1, \dots, M, \tag{27}$$

where $N_y(k)$ is the number of young tracks in candidate track pairs at frame $k$. It should be noted that the superscript $s$ denotes a young track. Mode-matched Kalman filtering is performed individually in the backward direction as

$$\hat{x}_j^s(k-1|k-1) = \hat{x}_j^s(k-1|k) + W_j^s(k-1)\, \nu_{\hat{m}_j^s(k-1)}^{js}(k-1), \tag{28}$$

$$P_j^s(k-1|k-1) = P_j^s(k-1|k) - W_j^s(k-1)\, S_j^s(k-1)\, W_j^s(k-1)^T, \tag{29}$$

where $\hat{x}_j^s(k-1|k)$ and $P_j^s(k-1|k)$, respectively, are obtained with the negative sampling time as

$$\hat{x}_j^s(k-1|k) = F(-\tau)\, \hat{x}_{0j}^s(k|k), \tag{30}$$

$$P_j^s(k-1|k) = F(-\tau)\, P_{0j}^s(k|k)\, F(-\tau)^T + q(\tau)\, Q_j\, q(\tau)^T, \tag{31}$$

and $W_j^s(k-1)$ and $S_j^s(k-1)$ are, respectively, obtained as

$$W_j^s(k-1) = P_j^s(k-1|k)\, H^T\, S_j^s(k-1)^{-1}, \tag{32}$$

$$S_j^s(k-1) = H\, P_j^s(k-1|k)\, H^T + R. \tag{33}$$

The measurement residue in Equation (28) is

$$\nu_{\hat{m}_j^s(k-1)}^{js}(k-1) = z_{\hat{m}_j^s(k-1)}(k-1) - H\, \hat{x}_j^s(k-1|k), \tag{34}$$

where $z_{\hat{m}_j^s(k-1)}(k-1)$ is the measurement associated with the $j$-th mode filter of track $s$ at frame $k-1$ during the forward filtering. If no measurement was associated with track $s$, the state and covariance are, respectively,

$$\hat{x}_j^s(k-1|k-1) = \hat{x}_j^s(k-1|k), \tag{35}$$

$$P_j^s(k-1|k-1) = P_j^s(k-1|k). \tag{36}$$

Finally, the state and covariance are updated as

$$\hat{x}^s(k-1|k-1) = \sum_{j=1}^{M} \hat{x}_j^s(k-1|k-1)\, \mu_j^s(k-1), \tag{37}$$

$$P^s(k-1|k-1) = \sum_{j=1}^{M} \mu_j^s(k-1) \left\{ P_j^s(k-1|k-1) + \left[ \hat{x}_j^s(k-1|k-1) - \hat{x}^s(k-1|k-1) \right] \left[ \hat{x}_j^s(k-1|k-1) - \hat{x}^s(k-1|k-1) \right]^T \right\}. \tag{38}$$

The mode probability $\mu_j^s(k-1)$ is obtained in the same manner as in Equations (22)–(24).
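For intuition, here is a single-mode sketch of the backward pass (the paper runs the full backward IMM); the per-frame storage of states, covariances, and associated measurements is an assumed data layout.

```python
import numpy as np

def backward_filter(track, F_bwd, q, Q, H, R, k_curr, k_end):
    # Run from the current frame k_curr down to the old track's end frame k_end,
    # re-using measurements associated during forward filtering
    # (track.meas[frame], None where no measurement was associated).
    x, P = track.state[k_curr], track.cov[k_curr]
    for k in range(k_curr, k_end, -1):
        x = F_bwd @ x                            # backward prediction, Eq. (30); F_bwd = F(-tau)
        P = F_bwd @ P @ F_bwd.T + q @ Q @ q.T    # Eq. (31)
        z = track.meas.get(k - 1)
        if z is not None:                        # backward update, Eqs. (28)-(34)
            S = H @ P @ H.T + R
            W = P @ H.T @ np.linalg.inv(S)
            x = x + W @ (z - H @ x)
            P = P - W @ S @ W.T
        # otherwise the prediction stands, Eqs. (35)-(36)
    return x, P          # backward estimates at the old track's end frame
```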

2.5.3. Association Testing and Assignment Rule

Association testing is performed only on the candidate track pairs. The testing includes statistical distance testing and Euclidean distance testing between the forward and backward estimates of the state vector at frame $k_e^t$, the end frame of the old track $t$:

$$\left[ \hat{x}^t(k_e^t|k_e^t) - \hat{x}^s(k_e^t|k_e^t) \right]^T \left[ P^t(k_e^t|k_e^t) + P^s(k_e^t|k_e^t) \right]^{-1} \left[ \hat{x}^t(k_e^t|k_e^t) - \hat{x}^s(k_e^t|k_e^t) \right] \le \gamma_s, \tag{39}$$

$$\left\| \hat{x}^t(k_e^t|k_e^t) - \hat{x}^s(k_e^t|k_e^t) \right\| \le \gamma_u, \quad t = 1, \dots, N_o(k), \quad s = 1, \dots, N_y(k), \tag{40}$$

where $N_o(k)$ is the number of old tracks in the candidate track pairs at frame $k$. It should be noted that in this subsection, the superscript $t$ denotes an old track. Equation (39) is a chi-square hypothesis test since the forward and backward estimates are assumed to have independent Gaussian errors [41]. If Equations (39) and (40) are satisfied, the cost function for the assignment is identical to the chi-square statistic as follows; otherwise, it becomes infinite:

$$c^{ts}(k_e^t) = \left[ \hat{x}^t(k_e^t|k_e^t) - \hat{x}^s(k_e^t|k_e^t) \right]^T \left[ P^t(k_e^t|k_e^t) + P^s(k_e^t|k_e^t) \right]^{-1} \left[ \hat{x}^t(k_e^t|k_e^t) - \hat{x}^s(k_e^t|k_e^t) \right]. \tag{41}$$

The cost function is also infinite if a track pair {O$^t$, Y$^s$} is not a selected candidate pair. The assignment rule determines how to match old and young tracks in a 1:1 manner. The assignment minimizes the sum of the cost functions:

$$\hat{a}^{ts}(k_e^t) = \min_{a = 0, 1} \sum_{t=1}^{N_o(k)} \sum_{s=1}^{N_y(k)} a^{ts}(k_e^t)\, c^{ts}(k_e^t), \tag{42}$$

such that

$$\sum_{t=1}^{N_o(k)} a^{ts}(k_e^t) = 1, \quad s = 1, \dots, N_y(k), \tag{43}$$

$$\sum_{s=1}^{N_y(k)} a^{ts}(k_e^t) = 1, \quad t = 1, \dots, N_o(k), \tag{44}$$

where $a^{ts}(k)$ is either 1 or 0, indicating whether the pair is assigned or not. During TSA, the young tracks that are assigned to the old tracks are terminated, and the corresponding old tracks, which had already been terminated, become ongoing tracks and take over the traces of the young tracks, $\hat{x}^s(k_e^t|k_e^t), \hat{x}^s(k_e^t+1|k_e^t+1), \dots, \hat{x}^s(k|k)$. Figure 4 illustrates the assignment between old and young tracks. No connection implies that the cost function is infinite. If the cost function is finite, old and young tracks are assigned 1:1. Three assigned track pairs, {O1, Y2}, {O2, Y$_{N_y(k)}$}, and {O$_{N_o(k)}$, Y1}, are indicated by bold lines in Figure 4.
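Equations (39)–(44) amount to gated chi-square costs followed by a 1:1 linear assignment. The sketch below uses scipy.optimize.linear_sum_assignment and gives infeasible pairs a large finite cost (an implementation convenience; the paper states such costs as infinite).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tsa_assign(x_old, P_old, x_bwd, P_bwd, candidate, gamma_s=10.0, gamma_u=np.inf):
    # x_old[t], P_old[t]: forward estimates of old track t at its end frame;
    # x_bwd[s], P_bwd[s]: backward estimates of young track s at that same frame;
    # candidate[t][s]: True if {Ot, Ys} was selected as a candidate pair.
    BIG = 1e12
    n_o, n_y = len(x_old), len(x_bwd)
    cost = np.full((n_o, n_y), BIG)
    for t in range(n_o):
        for s in range(n_y):
            if not candidate[t][s]:
                continue
            d = x_old[t] - x_bwd[s]
            c = d @ np.linalg.inv(P_old[t] + P_bwd[s]) @ d      # Eqs. (39)/(41)
            if c <= gamma_s and np.linalg.norm(d) <= gamma_u:   # Eqs. (39)-(40)
                cost[t, s] = c
    rows, cols = linear_sum_assignment(cost)                    # Eqs. (42)-(44)
    return [(t, s) for t, s in zip(rows, cols) if cost[t, s] < BIG]
```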

2.6. Track Termination with Validation Testing

The three criteria for track termination are as follows: (1) a track is terminated when the number of consecutive frames without measurements exceeds the maximum number; (2) a potentially terminated track is terminated if it fails to be fused during track association and fusion [40]; and (3) the young track assigned to an old track is terminated during TSA. The validity of a terminated track is determined by a minimum number of updates with measurements; tracks shorter than this number are removed as invalid tracks.
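A sketch of the termination and validation tests under the same assumed track structure as above; criteria (2) and (3) arise inside the association steps and are not repeated here.

```python
def should_terminate(track, k, max_missed=19):
    # Criterion (1): too many consecutive frames without an associated measurement.
    return k - track.last_update_frame > max_missed

def is_valid(track, n_min=30):
    # A terminated track is kept only if it was updated with enough measurements.
    return track.n_updates >= n_min
```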

3. Results

3.1. Video Description and Thermal Object Detection

A DJI Inspire 2 drone was equipped with an infrared thermal imaging camera, a FLIR Vue Pro R640 (focal length = 19 mm, field of view (FOV) = 32° × 26°, pixel pitch = 17 μm, image resolution = 640 × 512 pixels, frame rate = 30 frames per second (fps)). The thermal imaging camera operates in the 7.5–13.5 μm spectral band. Two videos, each 120 s long, were captured with the thermal camera on a winter night. In both videos, three hikers simulated distress in the mountains. The drone flew slowly or quickly in various directions and at various altitudes, and the drone's viewpoint was arbitrary. Both videos are identical to the videos in Ref. [35].
Figure 5a,b show the 1st, 401st, 801st, 1201st, 1601st, 2001st, 2401st, 2801st, 3201st, and 3601st frames of Videos 1 and 2, respectively. The target ID is shown near the target, on the right or at the bottom. The corresponding visible-light images appear completely black.
The YOLOv5x object detection results, bounding boxes, and confidence levels are shown in Figure 6 for the same frames as in Figure 5. A total of 197 images with 548 training instances were used to train the YOLOv5x pre-trained detection model; the detailed training procedure can be found in Ref. [35]. Since this study aims to improve the performance of the multiple-target tracker on the basis of the same observations, the YOLOv5x detection results from the previous study [35] were used for target tracking.
Figure 7 shows the centroids of the YOLOv5x bounding boxes generated over 1801 frames (2 min). These are the position measurements input to the target tracker. In Video 1, all the targets moved across the entire image area, while in Video 2, the target paths occupy less than a quarter of the image area; the targets in Video 2 are more congested. The recalls for Videos 1 and 2 are 0.975 and 0.895, respectively. Since there are no false alarms, the precision for both videos is 1. Video 1 has one additional detection lasting 18 frames, but it is not included in the recall and precision calculations because it is a duplicate detection of the same object.

3.2. Thermal Target Tracking

3.2.1. System Configuration

Table 1 shows the parameter values. The frame interval (sampling time) is set to 0.067 (=1/15) s since every second frame was processed. The pixel-to-coordinate scaling ratio is 0.04 m/pixel for Video 1 and 0.05 m/pixel for Video 2. The Kalman filter was applied to Video 1, and a two-mode IMM filter was applied to Video 2; the one-mode IMM filter is identical to the Kalman filter. For Video 1, both $\sigma_{jx}$ and $\sigma_{jy}$ are set to 2.5 m/s², and for Video 2, they are 0.01 m/s² and 1 m/s² for the two modes. Both $r_x$ and $r_y$ are set to 0.5 m for Video 1 and 0.15 m for Video 2. The other parameters were set identically for both videos, except for the maximum established target speed, which is set to 12 m/s and 10 m/s for Videos 1 and 2, respectively. The maximum initial target speed was set to 3 m/s. The gate threshold $\gamma_m$ in Equation (17) was set to 4. The gate threshold $\gamma_t$ in Ref. [34] for the track association and fusion is set to 10, while the angular threshold is set to 90°; since 90° is the maximum possible angle, the directional gating was not actually applied. The minimum update numbers with measurements for the old tracks and young tracks are set to 30 and 15 frames, respectively, and the maximum update number with measurements for the young tracks is set to 29 frames. The maximum gap interval between the old and young tracks is set to 30 frames. The gate threshold for TSA, $\gamma_s$ in Equation (39), is set to 10, while the maximum Euclidean distance, $\gamma_u$ in Equation (40), is set to 4 only for Video 2. The maximum number of consecutive frames without measurements for track termination is set to 19 frames. Tracks shorter than the minimum update number with measurements, 30, were removed as invalid. The gate thresholds $\gamma_m$, $\gamma_t$, and $\gamma_s$ were obtained from the chi-square distribution, with degrees of freedom equal to the dimension of the measurement vector or state vector; they are maximum statistical distances squared, and a smaller threshold indicates a stricter association, which minimizes false associations. The parameters were tuned to the values that produced the best results with all the associations applied.
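The chi-square probability regions quoted in Table 1 can be verified with SciPy, with 2 degrees of freedom for the measurement gate and 4 for the state-vector gates:

```python
from scipy.stats import chi2

print(chi2.cdf(4.0, df=2))    # ~0.865 -> gamma_m = 4 covers the 86.5% region
print(chi2.cdf(10.0, df=4))   # ~0.956 -> gamma_t = gamma_s = 10 cover the 95.6% region
```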

3.2.2. Evaluation of Tracking Performance

Table 2 and Table 3 show the tracking results of Videos 1 and 2, respectively. The first columns of the tables, Case 1, show the results when all the associations were applied, that is, all of blocks A–E in Figure 2. The second columns, Case 2, show the results when TSA is not applied, that is, blocks A–B–C–E. The third columns, Case 3, show the results when only the measurement–track association is applied, that is, blocks A–B–E. The average TTL of Videos 1 and 2 is 0.998 and 0.931, respectively, in Case 1. Since the numbers of valid tracks in Videos 1 and 2 are three and six, respectively, the average MTL is 0.998 and 0.584 for Videos 1 and 2, respectively. The average TP of Videos 1 and 2 is 1 and 0.982, respectively. The average MTL is significantly improved by TSA, from 0.441 to 0.998 for Video 1 (99.8%) and from 0.167 to 0.584 for Video 2 (250%).
Figure 8 and Figure 9 show the tracks of Video 1 in random colors. In Figure 8, the tracks are displayed in the first frame as a background, and in Figure 9, they are shown on a white background. Figure 10 and Figure 11 show the tracks of Video 2 in the same way.
The six Supplementary files are movies in MP4 format and are available online. Supplementary Material Videos S1–S3 show the results of Cases 1–3 of Video 1, and Supplementary Material Videos S4–S6 show the results of Cases 1–3 of Video 2. The YOLOv5x detection results (bounding boxes and their centroids) are displayed as red squares. The valid tracks are displayed as blue circles. The track ID is also displayed; it is a number reflecting the order in which the tracks were initialized.

3.2.3. TSA Analysis

TSA occurred 4 and 15 times in Videos 1 and 2, respectively. No false TSA was found among them. The results of the four TSAs that occurred in Video 1 are shown in Table 4: the track IDs where the TSAs occurred, the numbers of backward updates and backward predictions, the statistical distance squared in Equation (39), and the Euclidean distance in Equation (40). The number of TSAs that occurred in Track ID 1 is two, and the numbers in Track IDs 2 and 3 are both one. The backward predictions include all the backward estimates without measurements, even after the young track started. Figure 12a shows the first track of Video 1 with two TSAs. Figure 12b,c show the magnified areas of the TSAs.
Table 5 shows the results of 15 TSAs in Video 2. The number of TSAs that occurred in Track IDs 1 to 4 are nine, three, one, and two, respectively. No TSAs occurred on Track IDs 5 and 6. Figure 13a shows the first track of Video 2 with nine TSAs. Figure 13b–f show the magnified areas of the TSAs. In Figure 12 and Figure 13, the red x-marks represent the measurements, blue dots represent the forward position estimates, and green squares and green triangles represent the backward updates and backward predictions, respectively. The red dots and red triangles represent the forward and backward estimates at the end frame of the old track, with the former being replaced by the latter in the track.
As shown in Table 4 and Table 5, both the statistical distance squared and the Euclidean distance increased when a track was broken by the drone's sudden movement. Only the chi-square hypothesis testing was applied to Video 1, while both the chi-square hypothesis testing and the Euclidean distance testing were applied to Video 2, where the targets' movement is concentrated in a relatively small area.

4. Discussion

Multi-rotor drone-based thermal target detection and tracking is effective for SAR missions. In the videos, the warm objects were captured without ambient light. They were often obscured by other hikers or natural objects in the mountains. The drone was remotely controlled manually and hovered or moved freely.
The contributions of this paper are focused on target tracking and demonstrate that (1) forward filtering is further improved with the IMM approach in block B, and (2) the whole process of the TSA is developed and clarified, which is block C in Figure 2.
YOLO detection is known to stand out for its speed, simplicity, and accuracy. The YOLO model is a pre-trained model that has already been trained on a large dataset and aims to detect multiple classes in visible-light images [42]. A relatively small training set of 197 images with 548 instances was used, mainly for the following reasons: (1) the object detection is limited to one class, so the ability to distinguish between multiple classes is not required, and the variance between the class features and the background features is likely to be large. Better recognition results are expected when the between-class variance is large and the within-class variance is small [44]. (2) The distinguishing features of the class are limited to a few attributes, such as brightness and object contours, because warm objects appear brighter than their surroundings in thermal images. It is well known that the smaller the feature dimension, the less training data are required [44]. Training on more images with different shapes is expected to improve the detection of different poses, such as people sitting or people in groups. Another consideration for training instances is that the rectangular bounding box inevitably includes regions that are not of interest; it seems important to minimize such regions while still including the object of interest.
After track association and fusion, the valid track number in Video 1 was reduced from 14 to 7, and in Video 2, from 25 to 18. Four track breakages were found in Video 1 without TSA, three of which were caused by sudden movements of the drone and one of which was caused by missed detection. In Video 2 without TSA, 15 track breakages occurred, which were caused by measurement–track association failures, missed detections, or sudden movements of the drone.
With TSA, all the track breakages were remedied in Video 1, and the number of valid tracks was reduced from seven to three. In Video 2, TSA reduced the number of track breakages from 15 to 3 and the number of valid tracks from 18 to 6. The number of tracks equals the number of targets in Video 1; however, three extra valid tracks remained in Video 2. These three unresolved breakages in Video 2 were caused by long-term missed detections; a target that is missing for a certain time is recognized as another target, resulting in a track breakage. TSA cannot remedy a breakage if the gap between the two tracks is too long.
Another benefit of TSA is that it can be useful in a multiple-target environment when the strict measurement–track association is applied to avoid wrong measurement–track associations. Strict association leads to more broken tracks, but TSA can resolve these breakages.
The conditions for candidate track pairs require that the old track is valid and that the young track is not yet long enough to be valid. The accuracy of the backward filtering may be reduced if the life length of the young track is too long or too short. The gap interval cannot be too long either, because the error of the backward prediction grows the longer it is propagated without measurements.
Figure 1 shows a separate detection–tracking approach. In this scheme, each part can be developed separately; for example, a newer or lighter version of YOLO can be adopted for object detection. The target tracker can utilize other types of measurements, such as range, range rate, and azimuth, obtained from radar [41]. Another advantage of this scheme is that detection and tracking can be implemented together or separately. For example, only object detection can be implemented onboard, with the position measurements transmitted to a server on the ground, where target tracking is performed. Alternatively, all the processing can be performed onboard the drone, allowing standalone operation, or entirely on the ground, requiring no onboard implementation.

5. Conclusions

In this paper, multi-rotor drone-based thermal target tracking with TSA was studied for the purpose of SAR missions. Thermal imaging is shown to be beneficial in detecting warm objects in remote areas. Three associations were established for target tracking with the IMM approach, showing robust performance. TSA can effectively reduce track breakages, even when the platform moves suddenly. This target-tracking scheme after object detection has the advantage of being developed and implemented independently, and the target tracker can be applied to various observations.
One advantage of drones is their capacity for swarm cooperation [45], which allows them to search vast areas in a timely manner. Target tracking using swarm drones and the application of various multiple-model filters [46] remain objectives for future studies.

Supplementary Materials

The following are available online at https://zenodo.org/records/13907798, Video S1: Case 1 of Video 1; Video S2: Case 2 of Video 1; Video S3: Case 3 of Video 1; Video S4: Case 1 of Video 2; Video S5: Case 2 of Video 2; Video S6: Case 3 of Video 2.

Funding

This research was supported by Daegu University Research Grant in 2024.

Data Availability Statement

Data are contained within the article and Supplementary Materials.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Alzahrani, B.; Oubbati, O.S.; Barnawi, A.; Atiquzzaman, M.; Alghazzawi, D. UAV assistance paradigm: State-of-the-art in applications and challenges. J. Netw. Comput. Appl. 2020, 166, 102706. [Google Scholar] [CrossRef]
  2. Osmani, K.; Schulz, D. Comprehensive Investigation of Unmanned Aerial Vehicles (UAVs): An In-Depth Analysis of Avionics Systems. Sensors 2024, 24, 3064. [Google Scholar] [CrossRef] [PubMed]
  3. Vohra, D.; Garg, P.; Ghosh, S. Usage of Uavs/Drones Based on Their Categorisation: A Review. J. Aerosp. Sci. Technol. 2023, 74, 90–101. [Google Scholar] [CrossRef]
  4. Cao, Y.; Qi, F.; Jing, Y.; Zhu, M.; Lei, T.; Li, Z.; Xia, J.; Wang, J. Mission Chain Driven Unmanned Aerial Vehicle Swarms Cooperation for the Search and Rescue of Outdoor Injured Human Targets. Drones 2022, 6, 138. [Google Scholar] [CrossRef]
  5. Choi, H.-W.; Kim, H.-J.; Kim, S.-K.; Na, W.S. An Overview of Drone Applications in the Construction Industry. Drones 2023, 7, 515. [Google Scholar] [CrossRef]
  6. Sekeroglu, B.; Tuncal, K. Image Processing in Unmanned Aerial Vehicles. In Unmanned Aerial Vehicles in Smart Cities. Unmanned System Technologies; Al-Turjman, F., Ed.; Springer: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
  7. Li, K.W.; Peng, L. Flight Information Access When Operating a Small Drone. In Proceedings of the 2023 4th International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), Nanjing, China, 25–27 August 2023; pp. 28–32. [Google Scholar] [CrossRef]
  8. Zhan, W.; Sun, C.; Wang, M.; She, J.; Zhang, Y.; Zhang, Z.; Sun, Y. An improved Yolov5 real-time detection method for small objects captured by UAV. Soft Comput. 2022, 26, 361–373. [Google Scholar] [CrossRef]
  9. Zhang, H.; Sun, W.; Sun, C.; He, R.; Zhang, Y. HSP-YOLOv8: UAV Aerial Photography Small Target Detection Algorithm. Drones 2024, 8, 453. [Google Scholar] [CrossRef]
  10. Li, C.; Zhao, W.; Zhao, L.; Ju, L.; Zhang, H. Application of fuzzy logic control theory combined with target tracking algorithm in unmanned aerial vehicle target tracking. Sci. Rep. 2024, 14, 18506. [Google Scholar] [CrossRef]
  11. Chai, J.; He, S.; Shin, H.-S.; Tsourdos, A. Topological-knowledge-aided airborne ground moving targets tracking. Aerosp. Sci. Technol. 2024, 144, 108807. [Google Scholar] [CrossRef]
  12. Anastasiou, A.; Makrigiorgis, R.; Kolios, P.; Panayiotou, C. Hyperion: A Robust Drone-based Target Tracking System. In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; pp. 927–933. [Google Scholar] [CrossRef]
  13. Dang, Z.; Sun, X.; Sun, B.; Guo, R.; Li, C. OMCTrack: Integrating Occlusion Perception and Motion Compensation for UAV Multi-Object Tracking. Drones 2024, 8, 480. [Google Scholar] [CrossRef]
  14. Tan, L.; Huang, X.; Lv, X.; Jiang, X.; Liu, H. Strong Interference UAV Motion Target Tracking Based on Target Consistency Algorithm. Electronics 2023, 12, 1773. [Google Scholar] [CrossRef]
  15. Al Mdfaa, M.; Kulathunga, G.; Klimchik, A. 3D-SiamMask: Vision-Based Multi-Rotor Aerial-Vehicle Tracking for a Moving Object. Remote Sens. 2022, 14, 5756. [Google Scholar] [CrossRef]
  16. Wang, Y.; Sun, B.; Dang, R.; Wang, Z.; Li, W.; Sun, K. Design of Dynamic Multi-Obstacle Tracking Algorithm for Intelligent Vehicle. World Electr. Veh. J. 2023, 14, 39. [Google Scholar] [CrossRef]
  17. Kim, M.; Memon, S.A.; Shin, M.; Son, H. Dynamic based trajectory estimation and tracking in an uncertain environment. Expert Syst. Appl. 2021, 177, 114919. [Google Scholar] [CrossRef]
  18. Liang, J.; Yu, X.; Zou, Y. Implementation of multiple object tracking for tracking pedestrians. In Proceedings of the SPIE 12346, 2nd International Conference on Information Technology and Intelligent Control (CITIC 2022), Kunming, China, 15–17 July 2022; pp. 67–76. [Google Scholar] [CrossRef]
  19. Koundinya, P.N.; Sanjukumar, N.; Rajalakshmi, P. A Comparative analysis of Algorithms for Pedestrian Tracking using Drone Vision. In Proceedings of the 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies (GUCON), Kuala Lumpur, Malaysia, 24–26 September 2021; pp. 1–6. [Google Scholar] [CrossRef]
  20. Li, H.; Wang, S.; Li, S.; Wang, H.; Wen, S.; Li, F. Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering. Fire 2024, 7, 192. [Google Scholar] [CrossRef]
  21. Yuan, D.; Zhang, H.; Shu, X.; Liu, Q.; Chang, X.; He, Z.; Shi, G. Thermal Infrared Target Tracking: A Comprehensive Review. IEEE Trans. Instrum. Meas. 2024, 73, 5000419. [Google Scholar] [CrossRef]
  22. Levin, E.; Zarnowski, A.; McCarty, J.L.; Bialas, J.; Banaszek, A.; Banaszek, S. Feasibility Study of Inexpensive Thermal Sensor and Small UAS Deployment for Living Human Detection in Rescue Missions Application Scenario. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016 XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016; Volume XLI-B8. [Google Scholar]
  23. Vincent-Lambert, C.; Pretorius, A.; Van Tonder, B. Use of Unmanned Aerial Vehicles in Wilderness Search and Rescue Operations: A Scoping Review. Wilderness Environ. Med. 2023, 34, 580–588. [Google Scholar] [CrossRef]
  24. Rejeb, A.; Abdollahi, A.; Rejeb, K.; Treiblmaier, H. Drones in agriculture: A review and bibliometric analysis. Comput. Electron. Agric. 2022, 198, 107017. [Google Scholar] [CrossRef]
  25. Messina, G.; Modica, G. Applications of UAV Thermal Imagery in Precision Agriculture: State of the Art and Future Research Outlook. Remote Sens. 2020, 12, 1491. [Google Scholar] [CrossRef]
  26. Larsen, H.L.; Møller-Lassesen, K.; Enevoldsen, E.M.E.; Madsen, S.B.; Obsen, M.T.; Povlsen, P.; Bruhn, D.; Pertoldi, C.; Pagh, S. Drone with Mounted Thermal Infrared Cameras for Monitoring Terrestrial Mammals. Drones 2023, 7, 680. [Google Scholar] [CrossRef]
  27. Giitsidis, T.; Karakasis, E.G.; Gasteratos, A.; Sirakoulis, G.C. Human and Fire Detection from High Altitude UAV Images. In Proceedings of the 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, Finland, 4–6 March 2015; pp. 309–315. [Google Scholar] [CrossRef]
  28. Sneha, M.; Aravindakshan, G.A.; Sayi, V.V.S.; Akshayaa, R.D.; Rathna, S.V.A.R.; Thamil, J.S.; Mithileysh, S. An Effective Drone Surveillance System Using Thermal Imaging. In Proceedings of the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 9–10 October 2020; pp. 477–482. [Google Scholar] [CrossRef]
  29. Krišto, M.; Ivasic-Kos, M.; Pobar, M. Thermal Object Detection in Difficult Weather Conditions Using YOLO. IEEE Access 2020, 8, 125459–125476. [Google Scholar] [CrossRef]
  30. Jiang, C.; Ren, H.; Ye, X.; Zhu, J.; Zeng, H.; Nan, Y.; Sun, M.; Ren, X.; Huo, H. Object detection from UAV thermal infrared images and videos using YOLO models. J. Appl. Earth Obs. Geoinf. 2022, 112, 102912. [Google Scholar] [CrossRef]
  31. Teutsch, M.; Mueller, T.; Huber, M.; Beyerer, J. Low Resolution Person Detection with a Moving Thermal Infrared Camera by Hot Spot Classification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Work-Shops, Columbus, OH, USA, 23–28 June 2014; pp. 209–216. [Google Scholar] [CrossRef]
  32. Leira, F.S.; Helgesen, H.H.; Johansen, T.A.; Fossen, T.I. Object detection, recognition, and tracking from UAVs using a thermal camera. J. Field Robot. 2021, 38, 242–267. [Google Scholar] [CrossRef]
  33. Zhang, P.; Zhao, J.; Wang, D.; Lu, H.; Ruan, X. Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8876–8885. [Google Scholar]
  34. Yeom, S. Moving People Tracking and False Track Removing with Infrared Thermal Imaging by a Multirotor. Drones 2021, 5, 65. [Google Scholar] [CrossRef]
  35. Yeom, S. Thermal Image Tracking for Search and Rescue Missions with a Drone. Drones 2024, 8, 53. [Google Scholar] [CrossRef]
  36. Blom, H.A.P.; Bar-shalom, Y. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans. Autom. Control 1988, 33, 780–783. [Google Scholar] [CrossRef]
  37. Houles, A.; Bar-Shalom, Y. Multisensor Tracking of a Maneuvering Target in Clutter. IEEE Trans. Aerosp. Electron. Syst. 1989, 25, 176–189. [Google Scholar] [CrossRef]
  38. Yeom, S.; Nam, D.-H. Moving Vehicle Tracking with a Moving Drone Based on Track Association. Appl. Sci. 2021, 11, 4046. [Google Scholar] [CrossRef]
  39. Yeom, S. Long Distance Moving Vehicle Tracking with a Multirotor Based on IMM-Directional Track Association. Appl. Sci. 2021, 11, 11234. [Google Scholar] [CrossRef]
  40. Yeom, S. Long Distance Ground Target Tracking with Aerial Image-to-Position Conversion and Improved Track Association. Drones 2022, 6, 55. [Google Scholar] [CrossRef]
  41. Yeom, S.-W.; Kirubarajan, T.; Bar-Shalom, Y. Track segment association, fine-step IMM and initialization with doppler for improved track performance. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 293–309. [Google Scholar] [CrossRef]
  42. Available online: https://github.com/ultralytics/yolov5 (accessed on 3 November 2024).
  43. Bar-Shalom, Y.; Li, X.R. Multitarget-Multisensor Tracking: Principles and Techniques; YBS Publishing: Storrs, CT, USA, 1995. [Google Scholar]
  44. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley Interscience: New York, NY, USA, 2001. [Google Scholar]
  45. Li, X.; Wu, L.; Niu, Y.; Ma, A. Multi-Target Association for UAVs Based on Triangular Topological Sequence. Drones 2022, 6, 119. [Google Scholar] [CrossRef]
  46. Zhou, G.; Zhu, B.; Ye, X. Switch-Constrained Multiple-Model Algorithm for Maneuvering Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 4414–4433. [Google Scholar] [CrossRef]
Figure 1. Framework of multiple-target tracking following object detection.
Figure 2. Block diagram of multiple-target tracking.
Figure 3. Illustration of old and young candidate track pairs.
Figure 4. Illustration of assignment between old and young tracks.
Figure 5. Thermal frames of (a) Video 1 and (b) Video 2.
Figure 6. YOLOv5x detection results of (a) Video 1 and (b) Video 2.
Figure 7. Centroids of YOLOv5x detections for 1801 frames of (a) Video 1 and (b) Video 2.
Figure 8. Video 1 tracks on the 1st frame background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 9. Video 1 tracks on the white background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 10. Video 2 tracks on the 1st frame background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 11. Video 2 tracks on the white background: (a) Case 1, (b) Case 2, (c) Case 3.
Figure 12. First track of Video 1: (a) 2 TSAs, (b) 1st TSA, (c) 2nd TSA.
Figure 13. First track of Video 2: (a) 9 TSAs, (b) 1st, 2nd TSAs, (c) 3rd TSA, (d) 4th TSA, (e) 5th–8th TSAs, (f) 9th TSA (x: measurement, o: forward estimation, □: backward update, △: backward prediction).
Table 1. Modeling and tracking parameters.

| | Configuration (Unit) | Video 1 | Video 2 |
|---|---|---|---|
| System Modeling | Frame interval (s) | 0.067 | 0.067 |
| | Pixel-to-coordinate scaling ratio (m/pixel) | 0.04 | 0.05 |
| | Process noise std. $\sigma_{jx} = \sigma_{jy}$ (m/s²), j = 1 | 2.5 | 0.01 |
| | Process noise std. $\sigma_{jx} = \sigma_{jy}$ (m/s²), j = 2 | – | 1 |
| | Meas. noise std. $r_x = r_y$ (m) | 0.5 | 0.15 |
| Track Initialization | Max. initial target speed, $V_{max}$ (m/s) | 3 | 3 |
| Measurement Association | Gate threshold, $\gamma_m$ | 4 (86.5% region) | 4 (86.5% region) |
| | Max. established target speed, $S_{max}$ (m/s) | 12 | 10 |
| Track Association and Fusion | Gate threshold, $\gamma_t$ | 10 (95.6% region) | 10 (95.6% region) |
| | Angular threshold, $\theta_t$ (degree) | 90 (INF) | 90 (INF) |
| Track Segment Association | Min. update num. with meas. for an old track (frame) | 30 | 30 |
| | Min./Max. update num. with meas. for a young track (frame) | 15/29 | 15/29 |
| | Max. gap interval between tracks (frame) | 30 | 30 |
| | Gate threshold, $\gamma_s$ | 10 (95.6% region) | 10 (95.6% region) |
| | Max. Euclidean distance, $\gamma_u$ | INF | 4 |
| Track Termination | Max. consecutive frame num. without meas. (frame) | 19 | 19 |
| | Min. update num. with meas. for a valid track (frame) | 30 | 30 |
Table 2. Video 1 tracking results.

| | Case 1 | Case 2 | Case 3 |
|---|---|---|---|
| Num. of Track Asso. and Fusion | 15 | 15 | 0 |
| Num. of TSA | 4 | 0 | 0 |
| Num. of Valid Tracks | 3 | 7 | 14 |
| Avg. TTL | 0.998 | 0.992 | 0.996 |
| Avg. MTL | 0.998 | 0.441 | 0.241 |
| Avg. TP | 1 | 1 | 1 |
Table 3. Video 2 tracking results.

| | Case 1 | Case 2 | Case 3 |
|---|---|---|---|
| Num. of Track Asso. and Fusion | 11 | 11 | 0 |
| Num. of TSA | 15 | 0 | 0 |
| Num. of Valid Tracks | 6 | 18 | 25 |
| Avg. TTL | 0.931 | 0.905 | 0.768 |
| Avg. MTL | 0.584 | 0.167 | 0.099 |
| Avg. TP | 0.982 | 0.998 | 0.999 |
Table 4. TSA results of Video 1.

| Track ID | Backward Updates | Backward Predictions | Statistical Distance Squared | Euclidean Distance |
|---|---|---|---|---|
| 1 | 18 | 3 | 1.92 | 2.12 |
| 1 | 15 | 20 | 0.79 | 0.46 |
| 2 | 15 | 6 | 1.99 | 2.44 |
| 3 | 18 | 3 | 1.37 | 2.10 |
Table 5. TSA results of Video 2.

| Track ID | Backward Updates | Backward Predictions | Statistical Distance Squared | Euclidean Distance |
|---|---|---|---|---|
| 1 | 20 | 1 | 0.034 | 0.534 |
| 1 | 18 | 3 | 0.053 | 0.550 |
| 1 | 15 | 7 | 0.163 | 3.262 |
| 1 | 20 | 1 | 0.011 | 0.442 |
| 1 | 20 | 1 | 0.010 | 0.226 |
| 1 | 20 | 1 | 0.007 | 0.375 |
| 1 | 20 | 1 | 0.014 | 0.531 |
| 1 | 19 | 2 | 0.009 | 0.528 |
| 1 | 15 | 6 | 0.007 | 0.164 |
| 2 | 20 | 1 | 0.074 | 0.503 |
| 2 | 20 | 1 | 0.034 | 0.233 |
| 2 | 16 | 5 | 0.217 | 2.309 |
| 3 | 20 | 1 | 0.045 | 0.421 |
| 4 | 15 | 19 | 0.110 | 0.826 |
| 4 | 15 | 23 | 0.017 | 1.102 |
