Review

Review of Target Geo-Location Algorithms for Aerial Remote Sensing Cameras without Control Points

1 Key Laboratory of Space-Based Integrated Information System, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
2 Changchun Institute of Optics, Precision Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(24), 12689; https://doi.org/10.3390/app122412689
Submission received: 24 November 2022 / Revised: 7 December 2022 / Accepted: 8 December 2022 / Published: 11 December 2022

Abstract:
Aerial cameras are among the main devices for obtaining ground images from the air. Since industry sets higher requirements for aerial cameras' self-locating performance every year, using aerial cameras to locate ground targets has become a research hotspot in recent years. For the situation in which no ground control point exists in the target area, the calculation principles of aerial remote sensing image positioning algorithms are analyzed by establishing different positioning models. Several error analysis models of the positioning algorithms, based on the total differential method and the Monte Carlo method, are established, and the relevant factors that cause positioning error are summarized. The last section proposes future optimization directions for aerial camera positioning algorithms, which are verified by related simulation experiments. This paper provides guidelines in this area for researchers, who can quickly understand the current development and optimization directions of target geo-location algorithms for aerial remote sensing imagery.

1. Introduction

With the development of related technologies such as optical design, image processing, and autonomous control, aerial cameras, as essential equipment commonly carried on aerial remote sensing platforms, are developing toward having both high-resolution imaging and high-precision positioning capabilities. In order to realize real-time aerial photogrammetry, the aerial camera needs to locate the target independently while acquiring target images [1,2,3,4].
Aiming to improve positioning accuracy, the aerial triangulation method is widely applied in traditional aerial mapping practice. This method needs control points with known positions in the target area and calculates, from the information of the target points and control points, corrections that eliminate errors and yield more accurate results. When ground control points are lacking, aerial cameras are expected to locate the target autonomously [5,6,7,8,9,10,11]. Based on whether the aerial camera is equipped with an active signal-transmitting device, target positioning algorithms can be divided into two categories. An algorithm that positions the target by transmitting signals is called an active (sourced) positioning algorithm. In contrast, a passive localization algorithm does not need to actively transmit signals and relies only on its own parameters for positioning. In light of many previous research results, the earth ellipsoid model is widely accepted as the mainstream method in the location process [12,13,14,15,16,17] for realizing target location with a single image. For instance, the target localization algorithm based on the earth ellipsoid model proposed by Stich [12] realizes the localization of a ground target by the aerial camera through a single image while reducing the influence of the curvature of the earth on the localization outcome. In addition, active positioning algorithms have gradually achieved higher accuracy by using distance information generated by devices such as the laser range finder (LRF) or synthetic aperture radar (SAR) [18,19,20,21,22,23,24,25,26,27]. Utilizing the angle information provided by the airborne photoelectric platform, Tan [26] established a multi-target positioning model and proposed the pixel line-of-sight vector method based on the parameters of the laser range finder and angle sensor.
The positioning results obtained by a single measurement are greatly affected by random errors. Two mainstream methods for solving this issue are multi-machine intersection positioning and a single aerial camera performing multiple measurements [28,29,30,31,32,33,34,35]. Another approach proposed to reduce the influence of random errors is filtering algorithms [36,37,38,39,40,41,42,43,44,45,46,47,48,49], which are widely used in the field of target tracking.
No matter which positioning algorithm is used, it will be affected by error factors. Angle and distance measurement accuracy, target height, image distortion, the atmospheric environment, and other potential error sources all reduce target positioning accuracy. Therefore, a large amount of analysis focusing on the error factors of positioning algorithms has been conducted in recent experiments [50,51,52,53,54,55,56,57,58,59,60,61,62]. Wang [50] analyzed the imaging link of the aerial camera for photogrammetry of the target and established a multifactor positioning error analysis model. Furthermore, scholars have established corresponding mathematical models or correction algorithms for different error factors [63,64,65,66,67,68,69,70,71,72,73]. Taking height error as an example, Qiao [63,64] established a target positioning algorithm based on the digital elevation model, which solved the problem of excessive target height error in the algorithm based on the earth ellipsoid model; Han [72] used feature points in the image to obtain height information and locate the target without terrain data, but this method needs to control the imaging distance, and identifiable feature points must be guaranteed to exist in the image.
For the situation in which there is no ground control point in the target area, this paper classifies the above positioning algorithms according to their different mathematical models. The second section comprehensively compares the positioning principles and the pros and cons of the various algorithms. The third section establishes two simulation analysis models of positioning error to analyze and sort out the error factors that change the accuracy of each algorithm. The fourth section concludes with the research status and development directions of aerial camera positioning algorithms.

2. Bibliometric Study and Typical Aerial Camera-to-Ground Target Localization Algorithm

2.1. Bibliometric Study Based on Web of Science Database

Firstly, this paper provides statistics and analysis of the documents and patents included in the Web of Science that are related to target positioning. Since 2005, with the development of camera imaging technology, UAV/MAV loading platform technology, and image processing technology, papers with remote sensing images and target position as keywords have shown significant growth. The number of relevant publications per year is shown in Figure 1.
Analysis of the research contents of the literature shows that urban surveying and mapping, geographic survey, resource detection, environmental monitoring, and other remote sensing-related work are the main application directions of target positioning algorithms. This is because the geographical location of the image is closely related to these tasks, and the payloads for acquiring the images are mainly UAVs, aerial cameras, photoelectric pods, etc. In the surveyed literature, the main research directions include engineering (1285), computer science (1111), instruments and instrumentation (869), and geochemistry and geophysics (519).
Figure 1 illustrates that the growth rate of the number of papers in the past five years is significantly higher than before 2016. This is due to the development of deep-learning-based target detection; more and more scholars have introduced it into the image processing pipelines of small UAVs and other devices. Although this kind of research involves the keyword of target location, most target detection algorithms can only give the location of the target in the image and do not calculate its geographic location, which is not within the scope of this study. After eliminating pure target detection algorithms and positioning algorithms with control points, the distribution obtained by analyzing the articles according to the basic model of the positioning algorithm is shown in Figure 2.
Obviously, the target location algorithm based on a single image and the earth model is still the mainstream of current research and the basis of other algorithms, and it has been studied relatively extensively. Research on algorithms for building target location, featureless forest point location, and ocean ship location also mostly analyzes a single image. Furthermore, location algorithms based on multi-point measurement, multiple shots, or video tracking were a research hotspot in previous years, as they can effectively reduce the impact of random error factors. In the last five years, research on hybrid positioning algorithms has been relatively extensive. In addition to hybrid computation over various earth models, elevation models, and imaging models, artificial intelligence algorithms related to image processing (target detection, image super-resolution, image registration, etc.) are gradually being integrated into target positioning technology.

2.2. Basic Coordinate Transformation of Target Positioning Algorithm

The geo-location information of the target includes latitude ($\varphi$), longitude ($\lambda$), and altitude ($h$). The positioning algorithm needs to establish corresponding coordinate systems based on the numerous parameters transferred from the aircraft and the camera, and to unify the target, projection point, camera, aircraft, and other related elements in the same coordinate system via coordinate transformation matrices. According to the information provided by the global positioning system (GPS) or the position and orientation system (POS) carried by the aircraft and the camera, the actual position of the target is calculated through the positional relationships between these elements. The target positioning algorithm without control points generally requires four basic coordinate systems: the earth coordinate system, the geographic coordinate system, the aircraft coordinate system, and the camera coordinate system. The position of a space point in different coordinate systems can be calculated by the transformation matrix $C_A^B$. Equation (1) represents the transformation of a point whose coordinate value is $(X_A, Y_A, Z_A)$ from coordinate system A to coordinate system B [14].
$$[X_B, Y_B, Z_B] = [X_A, Y_A, Z_A] \times C_A^B, \quad C_B^A = \left(C_A^B\right)^{-1} \tag{1}$$
The earth coordinate system is also known as the earth-centered, earth-fixed (ECEF) system. Since the earth's shape is an irregular spheroid, an approximate mathematical model needs to be built to describe it. The positioning algorithm mainly uses the ellipsoid model of the WGS-84 coordinate system (World Geodetic System 1984), adopted at the 17th Congress of the International Union of Geodesy and Geophysics, as the earth model. Its parameters are defined as follows [17,19].
Earth ellipsoid model:
$$\frac{x^2}{R_e^2} + \frac{y^2}{R_e^2} + \frac{z^2}{R_p^2} = 1 \tag{2}$$
The first eccentricity of the earth ellipsoid (e):
$$e = \frac{\sqrt{R_e^2 - R_p^2}}{R_e} \tag{3}$$
When the latitude is $\varphi$, the radius of curvature in the prime vertical is as follows:
$$R_n = \frac{R_e}{\sqrt{1 - e^2 \sin^2\varphi}} \tag{4}$$
where $R_e$ is the semi-major axis, $R_e = 6{,}378{,}137\ \mathrm{m}$, and $R_p$ is the semi-minor axis, $R_p = 6{,}356{,}752\ \mathrm{m}$.
The origin of the ECEF coordinate system is at the earth’s center of mass, the X-axis points to the intersection of the equator and the prime meridian, the Z-axis points to the geographic North Pole, and the Y-axis and the other two axes form a right-handed coordinate system [12,63,64].
The GPS/POS carried by the aircraft or camera provides its attitude and position information ($\lambda$, $\varphi$, $h$). According to Equation (5) [17], the position of the camera in the earth coordinate system $(X_E, Y_E, Z_E)$ is
$$\begin{bmatrix} X_E \\ Y_E \\ Z_E \end{bmatrix} = \begin{bmatrix} (R_n + h)\cos\varphi\cos\lambda \\ (R_n + h)\cos\varphi\sin\lambda \\ \left(R_n(1 - e^2) + h\right)\sin\varphi \end{bmatrix} \tag{5}$$
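As a concrete illustration, Equations (4) and (5) take only a few lines to implement. The sketch below (Python, using the semi-axis values quoted above; function names are illustrative) converts a camera's geodetic position to ECEF coordinates.

```python
import math

# WGS-84 semi-axes as quoted in the text
R_E = 6378137.0                           # semi-major axis [m]
R_P = 6356752.0                           # semi-minor axis [m]
E2 = (R_E**2 - R_P**2) / R_E**2           # first eccentricity squared, Equation (3)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Equation (5): (latitude, longitude, altitude) -> ECEF (X, Y, Z) in metres."""
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg)
    # Radius of curvature in the prime vertical, Equation (4)
    rn = R_E / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
    x = (rn + h) * math.cos(phi) * math.cos(lam)
    y = (rn + h) * math.cos(phi) * math.sin(lam)
    z = (rn * (1.0 - E2) + h) * math.sin(phi)
    return x, y, z
```

On the equator at zero altitude this returns $(R_e, 0, 0)$, and at the pole the $z$ component reduces to $R_p$, which is a quick sanity check of the formulas.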
Using Equation (5) to solve for longitude and latitude from earth coordinates involves transcendental equations, so an exact analytical solution cannot be obtained directly, and the solution process is complicated. The commonly used methods for calculating geographic location information include the direct method, the iterative method, and the quartic equation method. Among them, the iterative method is relatively simple to compute and highly accurate. The iterative algorithm described by Equations (6) and (7) can ensure that the latitude error is less than 0.0001 and the height accuracy reaches 0.001 m when the number of iterations exceeds four [63].
$$N_0 = R_e, \quad h_0 = \left[(x_T)^2 + (y_T)^2 + (z_T)^2\right]^{1/2} - \left(R_e R_p\right)^{1/2}, \quad \varphi_0 = \arctan\frac{z_T\left(N_0 + h_0\right)}{\left[(x_T)^2 + (y_T)^2\right]^{1/2}\left[(1 - e^2)N_0 + h_0\right]} \tag{6}$$
$$N_i = R_e\left(1 - e^2\sin^2\varphi_{i-1}\right)^{-1/2}, \quad h_i = \frac{\left[(x_T)^2 + (y_T)^2\right]^{1/2}}{\cos\varphi_{i-1}} - N_{i-1}, \quad \varphi_i = \arctan\frac{z_T\left(N_{i-1} + h_{i-1}\right)}{\left[(x_T)^2 + (y_T)^2\right]^{1/2}\left[(1 - e^2)N_{i-1} + h_{i-1}\right]} \tag{7}$$
$N_0$, $h_0$, and $\varphi_0$ are the initial parameters of the iteration. Longitude can be calculated directly from the earth coordinates; the conversion equation and its sign convention are shown in Equation (8).
$$(\lambda_P)_0 = \arctan\left(\frac{y_P^E}{x_P^E}\right), \quad \lambda_P = \begin{cases} (\lambda_P)_0 & x_T > 0 \\ (\lambda_P)_0 + \pi & x_T < 0,\ \lambda_0 < 0 \\ (\lambda_P)_0 - \pi & x_T < 0,\ \lambda_0 > 0 \end{cases} \tag{8}$$
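The iteration of Equations (6)-(8) can be sketched as follows. This is an illustrative implementation, not the paper's code: `atan2` is used in place of the explicit sign cases of Equation (8), and the prime-vertical radius is refreshed once per pass.

```python
import math

R_E, R_P = 6378137.0, 6356752.0           # WGS-84 semi-axes from the text
E2 = (R_E**2 - R_P**2) / R_E**2           # first eccentricity squared

def ecef_to_geodetic(xt, yt, zt, iterations=5):
    """Iterative ECEF -> (lat_deg, lon_deg, h) conversion of Equations (6)-(8)."""
    p = math.hypot(xt, yt)
    # initial values, Equation (6)
    n = R_E
    h = math.sqrt(xt * xt + yt * yt + zt * zt) - math.sqrt(R_E * R_P)
    phi = math.atan2(zt * (n + h), p * ((1.0 - E2) * n + h))
    for _ in range(iterations):           # Equation (7)
        n = R_E / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
        h = p / math.cos(phi) - n
        phi = math.atan2(zt * (n + h), p * ((1.0 - E2) * n + h))
    lam = math.atan2(yt, xt)              # Equation (8); atan2 handles the quadrants
    return math.degrees(phi), math.degrees(lam), h
```

A round trip through Equation (5) and back recovers latitude and height to well within the tolerances quoted above after five passes.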
The origin of the geographic coordinate system is situated at the local positioning station (aircraft or airborne camera), and the positioning station is regarded as a mass point A when establishing the geographic coordinate system. Depending on the direction of the coordinate axes, the coordinate system can be classified as a north-east-down (NED) coordinate system or an east-north-up (ENU) coordinate system. In the NED coordinate system, with the aircraft as the origin A, the $A_N$ axis and $A_E$ axis point due north and due east, respectively, and the $A_D$ axis points toward the center of the earth along the ellipsoid normal. The relationship between the geographic coordinate system and the earth coordinate system is shown in Figure 3, and the conversion relationship is presented in Equation (9).
$$C_{ECEF}^{NED} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -(R_n + h) \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} -\sin\varphi & 0 & \cos\varphi & 0 \\ 0 & 1 & 0 & 0 \\ -\cos\varphi & 0 & -\sin\varphi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} \cos\lambda & \sin\lambda & 0 & 0 \\ -\sin\lambda & \cos\lambda & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & R_n e^2 \sin\varphi \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{9}$$
The origin of the aircraft coordinate system (A) is the same as the origin of the geographic coordinate system. The X-axis points to the nose, the Y-axis points to the right wing, and the Z-axis is perpendicular to the plane of the aircraft body. During flight, the three-axis attitude angles are roll ($\varphi$), pitch ($\theta$), and yaw ($\psi$), respectively. When the attitude angles of the three axes are all zero, the aircraft coordinate system coincides completely with the geographic coordinate system. The relationship between the aircraft coordinate system and the geographic coordinate system is shown in Figure 4, where LOS represents the direction of the camera's visual axis (line of sight). The transformation matrix between the aircraft coordinate system and the geographic coordinate system is given in Equation (10) [64].
$$C_{NED}^{AC} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi & 0 \\ 0 & -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} \cos\theta & 0 & -\sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} \cos\psi & \sin\psi & 0 & 0 \\ -\sin\psi & \cos\psi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{10}$$
The origin of the camera coordinate system (C) is usually the projection center of the camera optical system $S_O$. The $Z_C$ axis is the main optical axis, and $S_O Z_C$ points to the detection target when the target is imaged at the center of the detector. Modern aerial cameras are usually installed in a two-axis frame consisting of two gimbal shafts. Therefore, when the camera works, there are inner and outer frame angles $\theta_{pitch}$ and $\theta_{roll}$, as shown in Figure 5. The transformation matrix between the camera coordinate system and the aircraft coordinate system is given in Equation (11).
$$C_{AC}^{C} = \begin{bmatrix} \cos\theta_{pitch} & 0 & -\sin\theta_{pitch} & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta_{pitch} & 0 & \cos\theta_{pitch} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_{roll} & \sin\theta_{roll} & 0 \\ 0 & -\sin\theta_{roll} & \cos\theta_{roll} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{11}$$
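The transformation chains of Equations (10) and (11) are products of elementary homogeneous rotations. The sketch below (Python/NumPy; standard right-handed rotation factors are assumed, and the function names are illustrative) builds both from three axis-rotation helpers.

```python
import numpy as np

def rot_x(a):
    """4x4 homogeneous rotation about the X axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    """4x4 homogeneous rotation about the Y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):
    """4x4 homogeneous rotation about the Z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def c_ned_to_ac(roll, pitch, yaw):
    """Equation (10): NED -> aircraft frame (angles in radians)."""
    return rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)

def c_ac_to_cam(pitch_frame, roll_frame):
    """Equation (11): aircraft -> camera frame via the two gimbal angles."""
    return rot_y(pitch_frame) @ rot_x(roll_frame)
```

With all angles zero the product is the identity, reflecting the statement above that the aircraft frame then coincides with the geographic frame; each factor is orthonormal, so the inverse transform is simply the transpose, consistent with Equation (1).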

2.3. Target Positioning Algorithm Based on Earth Ellipsoid Model

Early target geo-location algorithms assumed the target area to be a plane, ignoring the influence of geodetic elevation and the curvature of the earth on the localization results [12]. To tackle this challenge, the following method is used. As shown in Figure 6, when the aerial camera acquires the target image, the target is projected onto the focal plane of the detector. The detector of an airborne camera is in most cases a charge-coupled device (CCD). If the size of a single pixel of the CCD is $a$, and the focal length of the camera optical system is $f$, then the projected position of the target point in the camera coordinate system can be described by Equation (12).
$$T_S = \left[m \times a,\ n \times a,\ f,\ 1\right]^T \tag{12}$$
Based on the auxiliary coordinate systems described above, the coordinates of the target projection point in the earth coordinate system $T_E$ can be expressed as Equation (13), according to coordinate rotation transformation theory [13,14].
$$T_E = \begin{bmatrix} x_{T'} \\ y_{T'} \\ z_{T'} \\ 1 \end{bmatrix} = C_{NED}^{ECEF} \times C_{AC}^{NED} \times C_{S}^{AC} \times T_S \tag{13}$$
If the origin of the camera coordinate system is $O_S = [x_S, y_S, z_S]^T$, the coordinates of the target in the earth coordinate system $[x_T, y_T, z_T]^T$ must lie on the line of Equation (14), established between the camera center and the target projection point $[x_{T'}, y_{T'}, z_{T'}]^T$.
$$\frac{x_T - x_S}{x_{T'} - x_S} = \frac{y_T - y_S}{y_{T'} - y_S} = \frac{z_T - z_S}{z_{T'} - z_S} \tag{14}$$
If the altitude of the target is h T , the earth ellipsoid equation at this area can be written as
$$\frac{x_T^2}{(R_e + h_T)^2} + \frac{y_T^2}{(R_e + h_T)^2} + \frac{z_T^2}{\left[(R_e + h_T)(1 - e^2)^{1/2}\right]^2} = 1 \tag{15}$$
The coordinates of the target in the earth coordinate system can be calculated by combining Equations (14) and (15). Obviously, there are two sets of analytical solutions to the equations, and the solution closer to the camera is the true position of the target [15,16].
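A minimal sketch of combining Equations (14) and (15): writing the LOS in parametric form and substituting it into the inflated ellipsoid gives a quadratic in the ray parameter, whose smaller root is the intersection nearer the camera. The function name and the ray-parameter formulation are illustrative, not the paper's implementation.

```python
import math

R_E, R_P = 6378137.0, 6356752.0
E2 = (R_E**2 - R_P**2) / R_E**2

def locate_on_ellipsoid(cam, los, h_t):
    """Intersect the LOS ray from camera position `cam` (ECEF) along direction
    `los` with the ellipsoid of Equation (15) inflated to target altitude h_t.
    Returns the intersection nearer the camera, or None if the ray misses."""
    a_ax = R_E + h_t                               # equatorial semi-axis
    b_ax = (R_E + h_t) * math.sqrt(1.0 - E2)       # polar semi-axis
    x0, y0, z0 = cam
    dx, dy, dz = los
    # substitute (x0 + t*dx, y0 + t*dy, z0 + t*dz) into Equation (15):
    a = (dx * dx + dy * dy) / a_ax**2 + dz * dz / b_ax**2
    b = 2.0 * ((x0 * dx + y0 * dy) / a_ax**2 + z0 * dz / b_ax**2)
    c = (x0 * x0 + y0 * y0) / a_ax**2 + z0 * z0 / b_ax**2 - 1.0
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                                # LOS does not reach the surface
    t = (-b - math.sqrt(disc)) / (2 * a)           # smaller root = nearer solution
    return (x0 + t * dx, y0 + t * dy, z0 + t * dz)
```

Taking the smaller root implements the rule stated above that, of the two analytical solutions, the one closer to the camera is the true target position.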
This kind of algorithm was widely used in the 1990s and at the beginning of the 21st century. It realizes positioning using angle information only, accounts for the influence of the curvature of the earth on the target positioning results, and is the basis of follow-up research in the field of target positioning. The equipment with the highest positioning accuracy using this method is the Global Hawk UAV. Relying on high-precision angle sensors, the Global Hawk can keep the positioning error within 20 m (at an imaging distance of 18 km) using the target positioning algorithm based on the earth model [13].
The target geo-location algorithm based on the earth ellipsoid model is simple to compute and can be used directly when the accuracy requirement is not high. However, the algorithm is strongly affected by elevation error, because it needs the average elevation information in advance as the reference height of the target; this leads to large position errors in areas with large topographic relief.

2.4. Target Location Algorithm Based on Digital Elevation Model

In areas with complex terrain, it is impossible to acquire the geodetic elevation of the target in advance. In this situation, the ground-target positioning algorithm based on the earth ellipsoid model behaves as shown in Figure 7a: the geodetic height error of the target causes large positioning errors. To solve this problem, the digital elevation model (DEM) is used to replace the earth ellipsoid model, as shown in Figure 7b.
The accuracy of the DEM is the core of this algorithm. There are three main ways to obtain DEM data: (1) measuring on the ground with a hand-held GPS instrument or total station; (2) obtaining remote sensing images through photoelectric equipment and remote sensing technology and carrying out three-dimensional reconstruction; (3) using high-precision equipment such as radar or laser scanners to obtain elevation information such as 3D point clouds [65,66,67,68,69,70,71].
This paper gives the basic model of the DEM-based iterative target location algorithm. The positioning principle is shown in Figure 8, and the specific process can be divided into five steps:
(1) Estimate the geographical location of the target and extract the digital elevation information of the target area from the DEM data;
(2) Initialize the number of iterations and calculate the maximum height $H_i = H_{max}$ of the target area;
(3) Initialize the target location with the localization method based on the earth ellipsoid model; the localization result is $(\varphi_i, \lambda_i, H_i)$;
(4) Look up the geodetic height at longitude and latitude $(\varphi_i, \lambda_i)$ in the DEM data and record it as $h_i$;
(5) Compare $h_i$ and $H_i$: if $h_i < H_i$, continue the iteration and let $H_i = H_i - i\varepsilon_h$; if $h_i > H_i$, output the target positioning result $(\varphi_i, \lambda_i, H_i)$.
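The five steps can be sketched as a simple loop. In this illustrative sketch the DEM lookup and the ellipsoid-model locator are caller-supplied (hypothetical) functions, and a fixed height decrement stands in for the step $\varepsilon_h$.

```python
def locate_with_dem(locate_at_height, dem_lookup, h_max, step=1.0, max_iter=100000):
    """Sketch of the five-step DEM iteration.
    locate_at_height(H): ellipsoid-model location at assumed height H -> (lat, lon)
    dem_lookup(lat, lon): terrain height at that ground point
    Both interfaces are hypothetical stand-ins for steps (3) and (4)."""
    h_assumed = h_max                               # step (2): start at the area maximum
    for _ in range(max_iter):
        lat, lon = locate_at_height(h_assumed)      # step (3): ellipsoid-model fix
        h_dem = dem_lookup(lat, lon)                # step (4): DEM height at the fix
        if h_dem >= h_assumed:                      # step (5): LOS has met the terrain
            return lat, lon, h_assumed
        h_assumed -= step                           # otherwise lower the assumed height
    return lat, lon, h_assumed
```

Starting from the area maximum guarantees the first terrain crossing found along the LOS is the visible one, which is why step (2) initializes with $H_{max}$ rather than the average height.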
Without control points, terrain undulation is always one of the difficult problems of remote target location. Introducing the terrain model represented by the DEM into the target location algorithm makes up for this deficiency. When the payload and carrier conditions permit, positioning accuracy in mountainous areas is greatly improved, and the range of applications of the DEM is expanded.

2.5. Active Positioning Algorithm Based on Laser Ranging Sensor

In order to acquire the terrain information of the target and solve the problem that the positioning algorithm based on the earth ellipsoid model is strongly affected by elevation error, many active positioning algorithms have been developed [19,20,21,22,23,24,25]. This type of algorithm mainly relies on ranging equipment such as a laser range finder (LRF) to measure the distance between the camera and the target point, and calculates based on the positional relationship between object and image. The accuracy of the algorithm depends on the precision of the range finder. Since positioning models are built in various ways, there are some differences in the methods of calculating the position of the target.
In the ideal imaging process of the aerial camera, the spatial relationship among the target, the target's projection point, and the detector focal plane is illustrated in Figure 9. Since the distance $R$ between the target and the optical system center of the camera is much greater than the focal length of the camera, the distance between the projected position of the target point and the camera coordinate system origin can be approximated as the focal length $f$ [26]. The position of the target in the camera coordinate system $T_E^S$ can then be calculated according to Equation (16).
$$Q_1 = \begin{bmatrix} R/f & 0 & 0 & 0 \\ 0 & R/f & 0 & 0 \\ 0 & 0 & R/f & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad T_E^S = Q_1 T_S \tag{16}$$
Therefore, the position of the target in the camera coordinate system can be derived directly by using a laser range finder, and the position of the target in the earth coordinate system can then be deduced using only the coordinate transformation matrices.
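A minimal sketch of Equation (16): the homogeneous scaling matrix $Q_1$ stretches the focal-plane vector $T_S$ of Equation (12) out to the measured slant range. The function name is illustrative.

```python
import numpy as np

def target_in_camera_frame(ts, r, f):
    """Equation (16): scale the focal-plane projection T_s = [m*a, n*a, f, 1]
    by the measured slant range r over the focal length f, giving the target
    position in the camera frame (homogeneous coordinates)."""
    q1 = np.diag([r / f, r / f, r / f, 1.0])
    return q1 @ np.asarray(ts, dtype=float)
```

For example, a 1 mm x 2 mm focal-plane offset with f = 50 mm and a 10 km range scales to a (200 m, 400 m, 10 km) offset in the camera frame, which a coordinate transformation then carries into ECEF.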
The above-mentioned method carries out a lot of approximation, which affects positioning accuracy. Xu proposed a more refined mathematical model for LRF-based positioning [22]. According to the geometric relationship of the aerial camera imaging model, the proportional relationship between the position of the target and the projection point is used for the solution, with the distance information provided by the laser range finder as an input parameter.
The imaging model of an aerial camera with an area-array CCD as the detector is shown in Figure 10. $M_1$ is the object plane and $M_2$ is the image plane. P is defined as the projection of the target A on the image plane. S denotes the camera coordinate system, and g denotes the geographic coordinate system. The distance $R$ between the camera and the target can be obtained by measuring the target with the airborne laser range finder. Under this model, the coordinates of the target point in the camera coordinate system can be described by Equation (17), and the position of the target in the earth coordinate system can again be calculated by coordinate transformation [19,22].
$$X_s = \frac{x_w R \cos\rho}{\cos\omega \sqrt{x_w^2 + y_w^2 + f^2}}, \quad Y_s = \frac{y_w R \cos\rho}{\cos\omega \sqrt{x_w^2 + y_w^2 + f^2}}, \quad Z_s = \frac{R \cos\rho \cdot f}{\cos\omega \sqrt{x_w^2 + y_w^2 + f^2}} \tag{17}$$
The above two LRF-based algorithms directly calculate the position of the target in the camera coordinate system without solving a collinearity equation, so the outcome is no longer affected by target elevation error. Improvements in the precision and miniaturization of laser range finders have brought this kind of algorithm into engineering use, and it has promoted research in the field of active positioning. It has reference significance for SLAM, 3D point clouds, and laser point clouds, and can be extended to industrial scenes such as robots and robotic arms. Generally speaking, these algorithms are mostly applied to military equipment, because it is usually difficult for small civil UAVs to carry laser range finders with sufficient accuracy.
However, distance limitations restrict the LRF's further application. Specifically, the effective working distance of the LRF is in most cases limited to 20–30 km to ensure accuracy. With the development of aerial cameras for high-altitude, long-distance imaging, positioning algorithms based on ranging equipment cannot meet the demand of target positioning at imaging distances of more than 50 km.

2.6. Target Location Algorithm Based on Single-Machine Two-Point Measurement or Double-Machine Intersection Measurement

The intersection measurement algorithm is proposed to solve the issue that single-machine, single-point measurement is greatly affected by random errors. The traditional two-point intersection positioning algorithm generally relies on the angle sensors carried by the camera to obtain the shooting angles at different times and solves the target position in the earth coordinate system through the LOS equations.
As shown in Figure 11, points A and B represent different positions from which the airborne camera shoots the same target. The respective target LOS pointing vectors $L_C^i$ can be obtained from the angle information through the projected position of the target on the CCD, and their direction cosines can be expressed in the camera coordinate system by Equation (18) [33,34,35,36], where $x$ and $y$ are the coordinates of the target projection point in the camera coordinate system.
$$L_C^i = \left[\frac{x}{\sqrt{x^2 + y^2 + f^2}},\ \frac{y}{\sqrt{x^2 + y^2 + f^2}},\ \frac{f}{\sqrt{x^2 + y^2 + f^2}},\ 1\right]^T \tag{18}$$
$$L_E^i = \begin{bmatrix} l_E^i \\ m_E^i \\ g_E^i \\ 1 \end{bmatrix} = C_{NED}^{ECEF} \times C_{A}^{NED} \times C_{C}^{A} \times L_C^i \tag{19}$$
In addition, the actual position of aircraft or cameras in the earth coordinate system can be directly calculated by Equation (5) and the position information provided by the GPS/POS system. Combined with the direction vector in the earth coordinate system ( L E i ) provided by Equation (19), two target line-of-sight equations can be constructed, as shown in Equation (20) [35,37].
$$L_i:\ \frac{x - x_i}{l_E^i} = \frac{y - y_i}{m_E^i} = \frac{z - z_i}{g_E^i} = t_i, \quad i = 1, 2 \tag{20}$$
LOS is rotationally invariant in space. The intersection of the two LOS equations is the position of the target in the ECEF coordinate system. Therefore, the coordinate value of the target can be calculated by the following equation.
$$T_E = \left(x_1 + l_E^1 t,\ y_1 + m_E^1 t,\ z_1 + g_E^1 t\right), \quad t = \frac{m_E^2 (x_2 - x_1) - l_E^2 (y_2 - y_1)}{l_E^1 m_E^2 - l_E^2 m_E^1} \tag{21}$$
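Equation (21) can be coded directly. This illustrative function assumes the two LOS rays genuinely intersect; in practice random errors prevent an exact intersection, and least-squares post-processing is needed.

```python
def intersect_two_los(p1, d1, p2, d2):
    """Equation (21): intersection of two lines of sight.
    p_i = (x_i, y_i, z_i) are the camera positions in ECEF;
    d_i = (l, m, g) are the direction cosines from Equation (19).
    Solves for the parameter t along the first LOS using the x/y components,
    then evaluates the first line at t."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    l1, m1, g1 = d1
    l2, m2, g2 = d2
    t = (m2 * (x2 - x1) - l2 * (y2 - y1)) / (l1 * m2 - l2 * m1)
    return (x1 + l1 * t, y1 + m1 * t, z1 + g1 * t)
```

Note the denominator vanishes when the two LOS directions are parallel in the x-y components, so a practical implementation would guard against near-parallel geometry, where the intersection is ill-conditioned.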
The single-machine two-point intersection positioning algorithm can also reduce the influence of elevation error, and since it uses angle information as the calculation parameter, it is not limited by imaging distance. However, this type of positioning algorithm can only locate fixed targets on the ground; once the target moves, positioning accuracy decreases greatly.
To deal with this issue, the double-aircraft intersection measurement algorithm uses two aerial cameras to co-locate the same target from positions A and B, obtaining all the parameters required by the intersection positioning algorithm in real time and then calculating the target position [37,38,41]. The double-machine intersection positioning algorithm is therefore not affected by target movement and shows better real-time performance than the single-machine algorithm. In addition, since this method does not require the earth ellipsoid model, it can also be applied to aerial targets such as airplanes or airships.
When laser range finders and SAR cannot achieve real-time fusion with the positioning algorithm, single-machine multi-point intersection is an effective means of improving target positioning accuracy. With the rapid development of UAVs, UAV formations further improve the positioning accuracy of this kind of algorithm, which is widely used in environmental monitoring, disaster rescue, and other fields; large-scale search and high-precision positioning can be achieved with small UAV formations.
However, the cost of the double-machine intersection positioning algorithm is high, and having two aerial cameras photograph the same area reduces work efficiency. Moreover, the timing requirements for the two cameras are strict, since the target must be photographed at the same time. The intersection algorithm considers only the theoretical angle intersection and ignores the random errors of the working cameras, which cause the lines of sight to fail to meet at one point and make the positioning model inaccurate. Hence, it is necessary to post-process the positioning data with data processing theory such as the least squares method to improve positioning accuracy. Noticeably, the actual positioning accuracy will be lower than the theoretical accuracy.

2.7. Filter Positioning Algorithm Based on Single-Machine Multiple Measurements

With the development of high-altitude oblique imaging aerial cameras, the camera can not only image the target, but also track it and continuously acquire target images through multiple shots. A large amount of data can be obtained during tracking, so filter algorithms are used to track and position the moving target; such algorithms have been proposed and applied in corresponding engineering projects [42,43,44,45,46,47,48,49,50].
In addition to typical filter positioning algorithms like the Kalman filter (KF) and the extended Kalman filter (EKF), a filter localization algorithm for stationary targets is presented in this section. In the process of multiple measurements of the target by the aerial camera, the state of the system is taken as the geographical position of the target; then the state equation and measurement equation of the system can be expressed as Equation (22) [46,47],
$$X_{k+1} = \Phi_{k+1,k} X_k + W_k, \qquad Z_k = h(X_k) + V_k \quad (22)$$
where $X_k$ is the target position state, whose initial value can be obtained from the positioning algorithm based on the earth ellipsoid model. $W_k$ and $V_k$ are mutually uncorrelated zero-mean white Gaussian noise sequences, representing the system noise and the measurement noise, respectively. Since the target is fixed, the state transition matrix $\Phi_{k+1,k}$ and the covariance matrix $Q_k$ of the system noise can be expressed as Equation (23).
$$\Phi_{k+1,k} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad Q_k = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (23)$$
Target positioning can thus be reformulated as repeatedly computing the optimal estimate from the state and measurement equations of the system. The Kalman filter generally consists of two steps. The first is the prediction of the state and of the mean square error, as shown in Equation (24), where $X$ is the system state and $P$ is the mean square error matrix.
$$X_{k+1|k} = \Phi_{k+1,k} X_k, \qquad P_{k+1|k} = \Phi_{k+1,k} P_k \Phi_{k+1,k}^{T} + Q_k \quad (24)$$
The second step of the filtering algorithm is to update the system state, as shown in Equation (25), where $K$ is the filtering gain and $I$ is the identity matrix.
$$\begin{aligned} K_{k+1} &= P_{k+1|k} H_{k+1}^{T} \left[ H_{k+1} P_{k+1|k} H_{k+1}^{T} + R_{k+1} \right]^{-1} \\ X_{k+1} &= X_{k+1|k} + K_{k+1} \left[ Z_{k+1} - h(X_{k+1|k}) \right] \\ P_{k+1} &= \left[ I - K_{k+1} H_{k+1} \right] P_{k+1|k} \end{aligned} \quad (25)$$
Therefore, the state estimate $X_k$ at time $k$, which represents the position of the target, can be derived by recursive calculation from the initial estimate $X_0$, its covariance matrix $P_0$, and the observations $Z_k$ [48].
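The two-step recursion above can be sketched in a few lines of code. This is a minimal illustration, not the implementation used in the cited works; the function name and the linear measurement model used below are assumptions for demonstration:

```python
import numpy as np

def kf_stationary_update(x_est, P, z, h, H, R):
    """One Kalman recursion for a stationary target (Phi = I, Q = 0).

    x_est : current state estimate (target position)
    P     : state covariance
    z     : new measurement vector
    h     : measurement function h(x)
    H     : measurement Jacobian evaluated at the estimate
    R     : measurement noise covariance
    """
    # Prediction step: the target is fixed, so the prediction equals the estimate
    x_pred, P_pred = x_est, P
    # Update step: gain, state, and covariance (Equation (25))
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```

Feeding repeated noisy position fixes (h(x) = x, H = I) into this recursion drives the estimate toward the true target position, which is exactly the multi-measurement positioning idea described above.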
As mentioned above, the Kalman filter can be used for high-precision positioning of stationary targets. However, since most error sources are random and the measurement equation is nonlinear, while the Kalman filter is a linear filter, positioning accuracy cannot be guaranteed. To address this, scholars have explored many target localization algorithms based on nonlinear filters, such as the extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF), to improve the stability of filter positioning.
Target tracking is one of the important requirements in the remote sensing field, and filter-based target location is among the most efficient and reliable options. Research on related algorithms has expanded from static targets to moving targets and multi-target monitoring. Beyond the initial application to Earth observation, similar algorithms have been applied to space exploration tasks such as space-based target tracking and debris monitoring. Nonlinear filtering algorithms perform well in locating moving targets and also play a significant role in tracking and imaging tasks when the target velocity is added as a second system state.
The drawback of filter positioning algorithms is their low efficiency: they cannot be applied to the target positioning problem of a single image. Tracking and measuring the target over an extended period is therefore necessary to ensure the feasibility of nonlinear filtering algorithms. The advantages and disadvantages of typical algorithms are summarized in Table 1.

3. Error Analysis Model and Influencing Factors of Positioning Algorithm

3.1. Simulation Analysis Method and Evaluation Index of Positioning Algorithm

Carrying out simulation analysis, in which models are established to assess the performance of different algorithms under most situations, is crucial before an actual flight test. This section discusses the total differential method and the Monte Carlo method as error analysis methods [74,75,76,77,78,79,80,81,82,83,84,85,86,87].

3.1.1. Total Differential Method

Total differentiation studies how the value of a multivariate function changes when its independent variables change [74,75]. Because target positioning indirectly calculates the target's geographical position from various parameters, it can usually be described by a multivariate function composed of multiple elementary functions, as expressed in Equation (26).
$$y = f(x_1, x_2, \ldots, x_n) \quad (26)$$
In the equation, the parameters $x_1, x_2, \ldots, x_n$ represent the position, angle, and distance information required by the target positioning algorithm, and $y$ is the indirectly measured value, i.e., the positioning result. For a multivariate function, the increment of the function can be represented by its total differential:
$$\mathrm{d}y = \frac{\partial f}{\partial x_1}\,\mathrm{d}x_1 + \frac{\partial f}{\partial x_2}\,\mathrm{d}x_2 + \cdots + \frac{\partial f}{\partial x_n}\,\mathrm{d}x_n \quad (27)$$
The systematic and random error of each directly measured value can be expressed as $\Delta x_1, \Delta x_2, \ldots, \Delta x_n$; since these errors are small, they can replace the differentials in Equation (27). The approximate error $\Delta y$ of the system can therefore be calculated by Equation (28).
$$\Delta y = \frac{\partial f}{\partial x_1}\,\Delta x_1 + \frac{\partial f}{\partial x_2}\,\Delta x_2 + \cdots + \frac{\partial f}{\partial x_n}\,\Delta x_n \quad (28)$$
The positioning error of the target consists of longitude, latitude, and altitude components. In the ECEF coordinate system, the target coordinate error is equivalent to the latitude–longitude error during error analysis. According to the mathematical model of the target positioning algorithm, the positioning error components along the X-axis ($M_X$) and Y-axis ($M_Y$) of the ECEF coordinate system are given by Equations (29) and (30).
$$M_X = \sqrt{\left(\frac{\partial X}{\partial X_s}\delta_{X_s}\right)^2 + \left(\frac{\partial X}{\partial Z_s}\delta_{Z_s}\right)^2 + \left(\frac{\partial X}{\partial Z_A}\delta_{Z_A}\right)^2 + \left(\frac{\partial X}{\partial \theta}\delta_{\theta}\right)^2 + \left(\frac{\partial X}{\partial \gamma}\delta_{\gamma}\right)^2 + \left(\frac{\partial X}{\partial \varphi}\delta_{\varphi}\right)^2 + \cdots} \quad (29)$$
$$M_Y = \sqrt{\left(\frac{\partial Y}{\partial Y_s}\delta_{Y_s}\right)^2 + \left(\frac{\partial Y}{\partial Z_s}\delta_{Z_s}\right)^2 + \left(\frac{\partial Y}{\partial Z_A}\delta_{Z_A}\right)^2 + \left(\frac{\partial Y}{\partial \theta}\delta_{\theta}\right)^2 + \left(\frac{\partial Y}{\partial \gamma}\delta_{\gamma}\right)^2 + \left(\frac{\partial Y}{\partial \varphi}\delta_{\varphi}\right)^2 + \cdots} \quad (30)$$
The error in the Z-axis direction is only related to the altitude, so it can be found as:
$$M_Z = \delta_H \quad (31)$$
According to error synthesis theory [16,74], the positioning error $M$ of the target can be defined as
$$M = \sqrt{M_X^2 + M_Y^2 + M_Z^2} \quad (32)$$
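The partial derivatives in Equations (29), (30), and (32) can also be approximated numerically. The following sketch (an illustration only; the function name is hypothetical, and the positioning function f and error magnitudes are placeholders for whatever model is being analyzed) propagates per-sensor errors through an arbitrary positioning function by the total differential and combines them by error synthesis:

```python
import numpy as np

def total_differential_error(f, x, dx, eps=1e-6):
    """RSS error of f's outputs by the total differential (Equations (28)-(32)).

    f  : positioning function R^n -> R^m (measurements -> target coordinates)
    x  : nominal measurement vector
    dx : 1-sigma error of each measurement
    """
    x = np.asarray(x, float)
    dx = np.asarray(dx, float)
    y0 = np.asarray(f(x), float)
    var = np.zeros_like(y0)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        dfdxi = (np.asarray(f(xp), float) - y0) / eps  # numerical partial derivative
        var += (dfdxi * dx[i]) ** 2                    # (df/dx_i * dx_i)^2 term
    return np.sqrt(var)
```

For example, for f(x) = [x₀x₁] with nominal values (3, 4) and errors (0.1, 0.2), this returns sqrt((4·0.1)² + (3·0.2)²) ≈ 0.72, matching the analytic synthesis.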
Although the traditional total differential method can perform error analysis, its calculation is overly complicated and time-consuming, and its accuracy fluctuates and is far from satisfactory.

3.1.2. Monte Carlo Method

With the development of computer technology, the Monte Carlo method has gradually been adopted for error analysis; both its accuracy and its calculation speed exceed those of the total differential method. Also known as the statistical simulation method, it is an approximate calculation method based on probability and statistics that uses a computer to generate pseudo-random data satisfying given requirements in place of actual data that are difficult to obtain. The original Monte Carlo idea came from large numbers of physical experiments; with the growth of computing power, scholars increasingly realize the random processes required by an algorithm in a simulated environment instead.
For the simulation requirements of the target positioning algorithm, the Monte Carlo method can unify the various random errors into one model without differential approximation or error synthesis. The random errors generated during target positioning usually obey the normal distribution, so corresponding pseudo-random number sequences can be generated by computer for simulation analysis. The general form of the error analysis model of the positioning algorithm established by the Monte Carlo method is shown in Equation (33) [85,86].
$$\Delta y = f(x_1 + \Delta x_1,\; x_2 + \Delta x_2,\; \ldots,\; x_n + \Delta x_n) - f(x_1, x_2, \ldots, x_n) \quad (33)$$
In the equation, $\Delta y$ is the error of the function value $y$, representing the distance between the result calculated by the positioning algorithm and the actual position of the target; $x_1, x_2, \ldots, x_n$ are the measured values of each angle or distance parameter, and $\Delta x_n$ is a random error following the normal distribution, representing the errors that arise when the aerial camera actually works (carrier position error, frame angle error, attitude angle error). The result of Equation (33) is equivalent to the positioning distance error calculated by Equation (34), i.e., the distance between two points in space, where $T_{ER} = (x_r, y_r, z_r)$ is the reference coordinate of the target in the earth coordinate system, indicating the real position of the target, and $T_{ES} = (x_s, y_s, z_s)$ is the positioning result calculated by the algorithm in the simulation environment.
$$d = \sqrt{(x_r - x_s)^2 + (y_r - y_s)^2 + (z_r - z_s)^2} \quad (34)$$
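Equations (33) and (34) translate directly into a simulation loop. A minimal sketch (the function name is hypothetical, and the positioning function, nominal parameters, and error magnitudes are placeholders to be replaced by the model under test):

```python
import numpy as np

def monte_carlo_position_error(locate, params, sigmas, n_trials=10000, seed=0):
    """Distance errors of a positioning algorithm by Monte Carlo (Equations (33)-(34)).

    locate : positioning function, measurement vector -> target (x, y, z)
    params : nominal (error-free) measurement values
    sigmas : standard deviation of the normal error on each measurement
    """
    rng = np.random.default_rng(seed)
    params = np.asarray(params, float)
    sigmas = np.asarray(sigmas, float)
    truth = np.asarray(locate(params), float)      # reference target position T_ER
    errs = np.empty(n_trials)
    for k in range(n_trials):
        noisy = params + rng.normal(0.0, sigmas)   # pseudo-random perturbed inputs
        pos = np.asarray(locate(noisy), float)     # simulated result T_ES
        errs[k] = np.linalg.norm(pos - truth)      # spatial distance d
    return errs
```

The returned array of distance errors is exactly the data from which the evaluation indicators of the next subsection are computed.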

3.1.3. Evaluation Indicators for Target Positioning Algorithms

For the data acquired by simulation analysis, an evaluation index must be selected to assess the effectiveness of the algorithm. When multiple simulation experiments are performed on the same target, the positioning results are randomly distributed near the true position of the target owing to systematic and random errors. The distance between a positioning result and the target position reference value is the positioning error; treating multiple positioning errors as an array, evaluation indices of positioning accuracy can be derived by data processing.
There are three commonly used positioning accuracy evaluation indicators:
(1) Average positioning error $E_A$, the mean of all positioning errors in a simulation experiment, calculated by Equation (35).
$$E_A = \frac{1}{n}\sum_{i=1}^{n} \Delta y_i \quad (35)$$
(2) Positioning standard deviation $E_{STD}$, the standard deviation of the positioning error array $(\Delta y_1, \Delta y_2, \ldots, \Delta y_n)$, as shown in Equation (36).
$$E_{STD} = \sqrt{\frac{\sum_{i=1}^{n}\left(\Delta y_i - \overline{\Delta y}\right)^2}{n-1}} \quad (36)$$
(3) The circle probability error $E_{CEP50}$, the radius of the circle centered on the target's true geographic location that contains 50% of the positioning results. Its general form is shown in Equation (37) [88,89,90].
$$\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\iint_{x^2+y^2\le R^2}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_x)^2}{\sigma_x^2}-\frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}+\frac{(y-\mu_y)^2}{\sigma_y^2}\right]\right\}\mathrm{d}x\,\mathrm{d}y = 0.5 \quad (37)$$
The radius satisfying Equation (37) is the circle probability error. Since evaluating Equation (37) directly is too complicated, a simplified calculation is given in Equations (38) and (39), which decomposes the positioning result data into X-axis and Y-axis components.
$$\sigma_x = \sqrt{\frac{\sum_{i=1}^{n}(x_i-\mu_x)^2}{n-1}}, \qquad \sigma_y = \sqrt{\frac{\sum_{i=1}^{n}(y_i-\mu_y)^2}{n-1}} \quad (38)$$
In the positioning algorithm, the X-axis and Y-axis components of the results are independent and, owing to random errors, have different standard deviations. Equation (39), which gives the quantitative relationship between the circle probability error and the standard deviations, can be derived from Equation (37) through a polar coordinate transformation of the probability density.
$$E_{CEP} = 0.589\,(\sigma_x + \sigma_y) \quad (39)$$
The accuracy of a positioning algorithm is negatively correlated with the values of these three indicators, and the appropriate indicator should be selected for each situation. In general, the circle probability error $E_{CEP50}$, which contains more than half of the positioning results, best matches the actual positioning error, so it is widely used as the evaluation index in positioning error analysis. Unless otherwise specified, the positioning errors mentioned below all refer to $E_{CEP50}$.
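The three indicators can be computed from simulated error samples as follows (a sketch with an assumed function name; the (x, y) error array is assumed to come from a Monte Carlo run such as the one described above):

```python
import numpy as np

def positioning_indicators(xy_err):
    """E_A, E_STD, and the simplified E_CEP50 (Equations (35), (36), (38), (39)).

    xy_err : (n, 2) array of horizontal positioning errors (x, y)
    """
    d = np.linalg.norm(xy_err, axis=1)   # per-trial distance errors
    e_a = d.mean()                       # average positioning error E_A
    e_std = d.std(ddof=1)                # positioning standard deviation E_STD
    sx = xy_err[:, 0].std(ddof=1)        # per-axis standard deviations
    sy = xy_err[:, 1].std(ddof=1)
    e_cep = 0.589 * (sx + sy)            # simplified circle probability error
    return e_a, e_std, e_cep
```

For zero-mean normal errors with equal per-axis sigma, 0.589(σ_x + σ_y) ≈ 1.177σ, which coincides with the exact 50% radius of a circular Gaussian, so the simplified formula agrees with the empirical median of the distance errors.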

3.2. Influencing Factors of Positioning Algorithm Accuracy

The error factors affecting the positioning algorithm are mainly divided into systematic errors and random errors. Systematic errors can be compensated by camera calibration in the laboratory, so the random errors introduced by the measurements of the various sensors are the main factor. Scholars have conducted extensive research on the error sources of positioning algorithms; Table 2 briefly introduces some common error terms and their distributions [91,92,93,94,95].
Different positioning models correspond to different error terms, which affect the accuracy of the positioning algorithm; it is therefore important to add error terms to the error model, established during simulation analysis, according to the actual algorithm. For example, if the camera does not carry a laser rangefinder, there is no laser ranging error, and some aerial cameras carry a POS system rigidly connected to the camera body, so there is no vibration error from shock absorbers.
Meanwhile, the same error source carries different weights in different positioning models. For algorithms based on the earth ellipsoid model, the line-of-sight pointing error and the target elevation error are the primary error sources; for the laser rangefinder positioning algorithm, the accuracy of the LRF has a greater impact than other sources. Taking the earth ellipsoid model positioning algorithm as an example, Figure 12 illustrates the relationship between positioning error and imaging angle error when the airborne camera position is (35° N, 110° E, 7000 m). Positioning error is positively correlated with measurement error, and positioning accuracy is significantly influenced by angle measurement accuracy.
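The trend shown in Figure 12 can be reproduced with a deliberately simplified flat-ground model (an assumption for illustration only; the paper's analysis uses the full earth ellipsoid model, and the function name is hypothetical): at flight height h and off-nadir angle θ the ground range is h·tan θ, so a small angle error Δθ displaces the computed target by roughly h·Δθ/cos²θ, which grows rapidly with the imaging angle.

```python
import math

def flat_ground_angle_error(h, theta_deg, dtheta_deg):
    """Ground-range displacement caused by an imaging-angle error.

    Flat-ground simplification: range = h * tan(theta), hence
    d(range)/d(theta) = h / cos(theta)^2.
    """
    theta = math.radians(theta_deg)
    dtheta = math.radians(dtheta_deg)
    return h * dtheta / math.cos(theta) ** 2
```

At h = 7000 m and θ = 60°, an angle error of only 0.01° already shifts the result by about 4.9 m, and the sensitivity grows as 1/cos²θ toward grazing angles.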
An algorithm that positions a target from the imaging angle alone cannot compensate for the target's elevation error, so it is strongly affected by topographic relief, as shown in Figure 13. Consequently, providing a predicted value of the target's height to this algorithm before calculation is crucial.
In addition to the main error terms shown in Table 2, many small error sources are ignored in traditional positioning algorithm analysis. For example, the LOS error introduced by the fast steering mirror in the optical system is not considered, and the influence of the atmospheric environment on the positioning results is usually overlooked. A complex imaging environment causes image distortion, which introduces a commonly neglected localization error. Figure 14 is a schematic diagram of the positioning error caused by image distortion: the positioning error increases steadily with the distortion rate, and when the edge distortion rate reaches 0.06, the positioning error is as high as 70 m. For comparison, the error synthesized from the traditional error terms is about 75 m [96].
Most existing algorithms and theories address the height error caused by topographic relief but do not take the height of the target itself into account. In fact, aerial remote sensing images are projections of a three-dimensional region onto a two-dimensional plane; a single aerial image cannot directly provide the height of an object, and most existing research neglects this issue.
A building target positioning algorithm based on target detection and base point selection is proposed in [97], and the related errors are analyzed. Figure 15 shows simulation results of a traditional algorithm for building heights from 10 m to 100 m; the X-axis is the height of the building target and the Y-axis the average positioning error. Clearly, the target height and imaging angle correspond directly to the positioning error. For instance, with the traditional algorithm, a building only 10 m tall imaged at a 65° angle produces a positioning error (spatial distance error) of 63 m, whereas under the same imaging angle and random error conditions the error is about 39 m if the target is on the ground. Moreover, when the tilt angle reaches 70°, the error increases by nearly 90 m, and the positioning error reaches 126 m for a target height of 80 m.
However, although the algorithm can reduce the impact of target height on the positioning results to a certain extent, its accuracy still needs improvement because the detection algorithm provides only a rectangular detection box and its detection rate is below 100%.
Similarly, the atmosphere significantly deviates the imaging visual axis from the ideal visual axis. No comprehensive work yet analyzes the impact of the atmosphere on positioning, so an appropriate atmospheric analysis model is needed to reduce this impact. As various artificial intelligence algorithms are introduced into the target location process [98,99,100,101,102,103,104,105,106], the corresponding error factors also need to be handled in error analysis and simulation tests. Positioning processes that use a target detection algorithm should focus on the detection accuracy and the overlap between the detection box and the true label; 3D reconstruction algorithms should focus on the significant impact of the payload position and LOS control deviation; and for remote sensing images involving super-resolution and registration, the true projected pixel position of the target must be considered. The main potential error factors not yet considered or analyzed are listed in Table 3.

4. Discussion

This section briefly summarizes future development directions of localization algorithms; positioning accuracy can be further improved in the following four directions:
(1) A more accurate target positioning model. The positioning algorithm calculates the geo-location of the target from the imaging angle and imaging distance, which directly affect the positioning accuracy, so improving the accuracy of the various sensors is a feasible way to reduce positioning error. In addition, the completeness of the algorithm's mathematical model also affects the positioning accuracy. Owing to the structural complexity of aerial cameras, adding auxiliary coordinate systems such as a base coordinate system or a mirror coordinate system, and establishing a more refined mathematical model, are significant for improving the positioning algorithm of different camera structures. This paper verifies this view through a simulation model: on the basis of the traditional basic model, the camera coordinate system origin correction matrix, the damper auxiliary coordinate system, and the fast steering mirror coordinate system are added step by step, and the change in positioning error is analyzed. The positioning error of the basic model is about 67.8 m; after the auxiliary coordinate systems are added, it drops to approximately 58.2 m. The results illustrate that the positioning accuracy of the algorithm improves progressively, as shown in Figure 16.
Some studies [17,22,25,39] add auxiliary coordinate systems such as the base coordinate system and the reflection coordinate system according to the actual situation, in order to model the unique structures of different aviation photoelectric payloads. This improves positioning accuracy by about 12–23% compared with relying only on the basic coordinate system, which proves that a more accurate positioning model provides a substantial optimization effect for a fixed camera structure.
(2) Random error correction matrices or equations. Current optimization research on positioning algorithms focuses on reducing the influence of random errors, and the various filtering algorithms target exactly this problem. This paper also analyzes the filtering method in the simulation environment. For the scenario of locating a uniformly moving target, the target position is calculated with both the basic location model and an EKF-based target location algorithm. As shown in Figure 17, because of random errors the basic model obtains the target position with an unstable error, and it is difficult to reconstruct a stable target motion state and route. Incorporating the EKF algorithm into the target location model effectively reduces the impact of random errors, and the calculated route is essentially consistent with the actual route of the target.
Existing research [42,43,44,45,46] shows that algorithms based on the KF, EKF, or UKF can improve the target positioning accuracy of UAVs to better than 20 m, which proves the effectiveness of filter positioning under the premise of a large number of continuous observations. Algorithms based on target re-projection or Google Earth reference image positioning prove the effectiveness of the error correction matrix. However, as mentioned in Section 2.7, the filtering algorithm performs well for tracking a target but is inherently inefficient, and how to eliminate or reduce the adverse effects of random errors when positioning from a single image remains an open challenge. Therefore, selecting an appropriate auxiliary coordinate system, adding a corrected rotation matrix, or similar measures are necessary to reduce the influence of random errors on the positioning algorithm.
(3) Attention to error terms ignored by traditional algorithms. As the measurement errors of the various angle and ranging sensors gradually decrease, the influence of error sources neglected by traditional algorithms, such as the atmospheric environment, image distortion, and the height of the target itself, becomes more obvious. The work in [96,97] shows that, when angle measurement accuracy is already high, target positioning accuracy can still be improved by more than 30% merely by correcting distortion and target height. In addition, our simulation analysis found that when the atmospheric environment deflects the imaging LOS by 0.01°, the target positioning error grows by more than 25.2 m (imaging distance 10 km, imaging angle 60°). Hence, these factors need to be taken into account in subsequent research; the accuracy of the positioning algorithm can be improved by adding auxiliary coordinate systems, correcting the position of the projected point in the coordinate system, and adding a scale factor to correct the LOS equation.
(4) Combining traditional algorithms. The algorithms mentioned above can be fused to compensate for each other's deficiencies. For instance, scholars have greatly improved the precision of positioning algorithms by combining DEM models with filtering algorithms, or fusing intersection algorithms with filtering models [55,56,63]. Some positioning algorithms that combine multiple models can reach an accuracy within 5 m. It should be noted, however, that most of these algorithms require multiple sets of simultaneous image data and cannot achieve high-accuracy positioning from a single image; this problem remains to be solved.

5. Conclusions

Aerial cameras mounted on various types of aircraft have unparalleled advantages in mobility and real-time performance, and relying on them for high-precision geographic positioning of photographed targets has been a research hotspot in recent years. However, detailed description and analysis of control-point-free target location for aerial remote sensing images is still incomplete. Aiming at the target positioning problem without ground control points, this paper introduces several typical aerial camera-to-ground target positioning algorithms and analyzes the advantages and disadvantages of each according to its mathematical model. Meanwhile, the error factors affecting accuracy are briefly analyzed, and the often-ignored error terms that strongly influence current positioning algorithms are summarized.
With the rapid development of aerial photogrammetry technology and the continuous growth of task requirements, various directions are emerging in the field of target positioning. Research on positioning algorithms now needs not only to improve the existing mathematical models but also to analyze the error factors ignored by traditional algorithms, establish corresponding mathematical models, and select appropriate correction algorithms. Moreover, intelligent remote sensing image processing algorithms represented by deep learning have become a new research hotspot; how to introduce technologies such as target detection and image registration into target positioning will be a research highlight in the future. Solving the inherent defects of traditional target positioning technology through intelligent algorithms will be one of the feasible means of significantly improving positioning accuracy.

Author Contributions

Conceptualization, Y.C. and J.Z.; methodology, Y.C. and Y.Z.; software, Y.X.; validation Y.X.; data curation, P.Q.; writing—original draft preparation, Y.C.; writing—review and editing, H.Z. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62027801.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, H.; Jia, H.; Wang, L.; Xu, F.; Liu, J. Systematic Error Correction for Geo-Location of Airborne Optoelectronic Platforms. Appl. Sci. 2021, 11, 11067. [Google Scholar] [CrossRef]
  2. Yamamoto, Y.; Ichii, K.; Higuchi, A.; Takenaka, H. Geolocation Accuracy Assessment of Himawari-8/AHI Imagery for Application to Terrestrial Monitoring. Remote Sens. 2020, 12, 1372. [Google Scholar] [CrossRef]
  3. Wyatt, S. Dual spectral band reconnaissance systems for multiple platform. Proc. SPIE 2002, 4824, 36–46. [Google Scholar]
  4. Riehl, K. RAPTOR (DB-110) reconnaissance system: In operation. Proc. SPIE 2002, 4824, 1–12. [Google Scholar]
  5. Yuan, D.; Ding, Y.; Yuan, G.; Li, F.; Zhang, J.; Wang, Y.; Zhang, L. Two-step calibration method for extrinsic parameters of an airborne camera. Appl. Opt. 2021, 60, 1387–1398. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Yang, J.; Li, G.; Zhao, T.; Song, X.; Zhang, S.; Li, A.; Bian, H.; Li, J.; Zhang, M. Camera Calibration for Long-Distance Photogrammetry Using Unmanned Aerial Vehicles. J. Sens. 2022, 2022, 8573315. [Google Scholar] [CrossRef]
  7. Morgan, G.R.; Hodgson, M.E.; Wang, C.; Schill, S.R. Unmanned aerial remote sensing of coastal vegetation: A review. Ann. GIS 2022, 28, 385–399. [Google Scholar] [CrossRef]
  8. He, F.; Zhou, T.; Xiong, W.; Hasheminnasab, S.M.; Habib, A. Automated Aerial Triangulation for UAV-Based Mapping. Remote Sens. 2018, 10, 1952. [Google Scholar] [CrossRef] [Green Version]
  9. Sohn, S.; Lee, B.; Kim, J.; Kee, C. Vision-based real-time target localization for single-antenna GPS-guided UAV. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1391–1401. [Google Scholar] [CrossRef]
  10. Cheng, X.; Daqiang, H.; Wei, H. High Precision Passive Target Localization Based on Airborne Electro-optical Payload. In Proceedings of the 14th International Conference on Optical Communications and Networks (ICOCN), Nanjing, China, 3–5 July 2015; pp. 1–3. [Google Scholar]
  11. Yuan, X.; Gao, Y.; Zou, X. Application of GPS-supported aerotriangulation in large scale. Wuhan Daxue Xuebao Xinxi Kexue Ban Geomat. Inf. Sci. Wuhan Univ. 2012, 37, 1289–1293. [Google Scholar]
  12. Stich, E.J. Geo-pointing and threat location techniques for airborne border surveillance. In Proceedings of the IEEE International Conference on Technologies for Homeland Security, Waltham, MA, USA, 12–14 November 2013. [Google Scholar]
  13. Held, K.J.; Robinson, B.H. TIER II Plus airborne EO sensor LOS control and image geolocation. In Proceedings of the 1997 IEEE Aerospace Conference, Snowmass, CO, USA, 13 February 1997; pp. 377–405. [Google Scholar]
  14. Cai, M.B.; Liu, J.H.; Xu, F. Multi-targets real-time location technology for UAV reconnaissance. Chin. Opt. 2018, 11, 812–821. [Google Scholar]
  15. Wang, X.; Liu, J.; Zhou, Q. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles. Sensors 2017, 17, 33. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, H.T.; Zhang, G.D.; Shi, K.; Zhao, R.H.; Gao, B.; Zhang, G.D. Aerial Camera Geo-location Method Based on POS System. Acta Photonica Sin. 2018, 47, 0412001. [Google Scholar] [CrossRef]
  17. Du, Y.L.; Ding, Y.L.; Xu, Y.S.; Liu, Z.; Xiu, J. Geo-Location Al-grithm for TDI-CCD Aerial Panoramic Camera. Acta Opt. Sin. 2017, 37, 355–365. [Google Scholar]
  18. Danqi, C.; Guodong, J.; Lining, T.; Libin, L.; Wenle, W. Target positioning of UAV airborne optoelectronic platform based on nonlinear least squares. Opto-Electron. Eng. 2019, 46, 190056. [Google Scholar] [CrossRef]
  19. Liu, X.; Teng, X.; Li, Z.; Yu, Q.; Bian, Y. A Fast Algorithm for High Accuracy Airborne SAR Geolocation Based on Local Linear Approximation. IEEE Trans. Instrum. Meas. 2022, 71, 5501612. [Google Scholar] [CrossRef]
  20. Jin, G.; Dong, Z.; He, F.; Yu, A. Background-Free Ground Moving Target Imaging for Multi-PRF Airborne SAR. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1949–1962. [Google Scholar] [CrossRef]
  21. Tang, X.; Zhang, X.; Shi, J.; Wei, S.; Pu, L. Ground slowly moving target detection and velocity estimation via high-speed platform dual-beam synthetic aperture radar. J. Appl. Remote Sens. 2019, 13, 026516. [Google Scholar] [CrossRef]
  22. Xu, C.; Huang, D.Q. Multiple-target localization based on electro-optical measurement platform. J. Cent. South Univ. Sci. Technol. 2015, 46, 157–163. [Google Scholar]
  23. Jin, M.; Bai, Y.; Devys, E.; Di, L. Toward a Standardized Encoding of Remote Sensing Geo-Positioning Sensor Models. Remote Sens. 2020, 12, 1530. [Google Scholar] [CrossRef]
  24. Hao, R.X. Research on the Method of Localization Of Target Based on Laser Ranging Technology. Master’s Thesis, Xi’an Technological University, Xi’an, China, 2014. [Google Scholar]
  25. Zhang, H.; Qiao, C.; Kuang, H.P. Target geo-location based on laser range finder for Airborne electro-optical imaging systems. Opt. Precis. Eng.-Ing 2019, 27, 13–21. [Google Scholar]
  26. Tan, L.G. Research of Target Automatic Positioning Technology in Airborne Photo-electricity Survey Equipment. Ph.D. Thesis, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences: Changchun, China, 2012. [Google Scholar]
  27. Merkle, N.; Luo, W.; Auer, S.; Müller, R.; Urtasun, R. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images. Remote Sens. 2017, 9, 586. [Google Scholar] [CrossRef] [Green Version]
  28. Upadhyay, J.; Rawat, A.; Deb, D. Multiple Drone Navigation and Formation Using Selective Target Tracking-Based Computer Vision. Electronics 2021, 10, 2125. [Google Scholar] [CrossRef]
  29. Luo, M.; Tan, L. Method of passive location based on multi-platform collaborative detection by airborne infrared equipment. J. Appl. Opt. 2021, 42, 392–397. [Google Scholar] [CrossRef]
  30. Cheng, X.; He, C.; Huang, D. Geo-location for Ground Target with Multiple Observations Using Unmanned Aerial Vehicle. Trans. Nanjing Univ. Aeronaut. Astronaut. 2018, 35, 95–103. [Google Scholar]
  31. Liu, D.; Bao, W.; Zhu, X.; Fei, B.; Xiao, Z.; Men, T. Vision-aware air-ground cooperative target localization for UAV and UGV. Aerosp. Sci. Technol. 2022, 124, 107525. [Google Scholar] [CrossRef]
  32. Wang, X.; Yang, L.T.; Meng, D.; Dong, M.; Ota, K.; Wang, H. Multi-UAV Cooperative Localization for Marine Targets Based on Weighted Subspace Fitting in SAGIN Environment. IEEE Internet Things J. 2022, 9, 5708–5718. [Google Scholar] [CrossRef]
  33. Qu, Y.H.; Zhang, F.; Gu, R.N. Multi-UAV cooperative target positioning method based on distance measurement. J. Northwestern Polytech. Univ. 2019, 37, 266–272. [Google Scholar] [CrossRef]
  34. Qu, Y.; Wu, J.; Zhang, Y. Cooperative localization based on the azimuth angles among multiple UAVs. In International Conference on Unmanned Aircraft Systems; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  35. Bai, G.; Liu, J.; Song, Y.; Zuo, Y. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform. Sensors 2017, 17, 98. [Google Scholar] [CrossRef] [Green Version]
  36. Wang, D.; Huang, D.; Xu, C. An Improved Method for Two-UAV Trajectory Planning for Cooperative Target Locating Based on Airborne Visual Tracking Platform. IEICE Trans. Inf. Syst. 2021, E104.D, 1049–1053. [Google Scholar] [CrossRef]
  37. Lee, W.; Bang, H.; Leeghim, H. Cooperative localization between small UAVs using a combination of heterogeneous sensors. Aerosp. Sci. Technol. 2013, 27, 105–111. [Google Scholar] [CrossRef]
  38. Xiang, H.T. Double UAV cooperative localization and remote location error analysis. In 5th International Conference on Advanced Design and Manufacturing Engineering (ICADME); Atlantis Press: Amsterdam, The Netherlands, 2015; pp. 76–81. [Google Scholar]
  39. Xu, C.; Huang, D.; Liu, J. Target location of unmanned aerial vehicles based on the electro-optical stabilization and tracking platform. Measurement 2019, 147, 106848. [Google Scholar] [CrossRef]
  40. Liu, Z.; Zhang, X.; Kuang, H.; Li, Q.; Qiao, C. Target Location Based on Stereo Imaging of Airborne Electro-Optical Camera. Acta Opt. Sin. 2019, 39, 1112003. [Google Scholar]
  41. Wang, S.; Jiang, F.; Zhang, B.; Ma, R.; Hao, Q. Development of UAV-Based Target Tracking and Recognition Systems. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3409–3422. [Google Scholar] [CrossRef]
  42. Zhou, X.; Chen, Y.; Liu, Y.; Hu, J. A Novel Sensor Fusion Method Based on Invariant Extended Kalman Filter for Unmanned Aerial Vehicle. In Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 27–31 December 2021; pp. 1111–1116. [Google Scholar] [CrossRef]
  43. Zhang, X.; Yuan, G.; Zhang, H.; Qiao, C.; Liu, Z.; Ding, Y.; Liu, C. Precise Target Geo-Location of Long-Range Oblique Reconnaissance System for UAVs. Sensors 2022, 22, 1903. [Google Scholar] [CrossRef] [PubMed]
  44. Tang, D.; Liu, X.; Deng, D. Small Unmanned Aerial Vehicle Target Location Method Based on Iterative Unscented Kalman Filtering. Command. Control. Simul. 2019, 41, 110–114. [Google Scholar]
  45. Gao, F.; Ma, X.; Gu, J. An active target localization with monocular vision. In Proceedings of the IEEE International Conference on Control & Automation, Gwangju, Republic of Korea, 2–5 December 2014; IEEE: Piscataway, NJ, USA, 2014. [Google Scholar]
  46. Hosseinpoor, H.R.; Samadzadegan, F.; Dadras Javan, F. Precise target geolocation based on integration of thermal video imagery and RTK GPS in UAVs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 333–338. [Google Scholar] [CrossRef] [Green Version]
  47. Hosseinpoor, H.R.; Samadzadegan, F.; Dadras Javan, F. Precise target geolocation and tracking based on UAV video imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 243–249. [Google Scholar] [CrossRef] [Green Version]
  48. Barber, D.B.; Redding, J.D.; McLain, T.W.; Beard, R.W.; Taylor, C.N. Vision-based Target Geo-location using a Fixed-wing Miniature Air Vehicle. J. Intell. Robot. Syst. 2006, 47, 361–382. [Google Scholar] [CrossRef]
  49. Helgesen, H.H.; Leira, F.S.; Johansen, T.A.; Fossen, T.I. Detection and Tracking of Floating Objects Using a UAV with Thermal Camera; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 289–316. [Google Scholar]
  50. Wang, J.; Yang, L.B.; Gao, L.M. Target orientation measuring of airborne EO platform. J. Changchun Univ. Sci. Technol. (Nat. Sci. Ed.) 2009, 32, 531–534. [Google Scholar]
  51. Tan, L.G.; Dai, M.; Liu, J.H. Error analysis of target automatic positioning for airborne photoelectric measuring device. Opt. Precis. Eng. 2013, 21, 3133–3140. [Google Scholar]
  52. Liu, C.; Liu, J.; Song, Y.; Liang, H. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization. Sensors 2017, 17, 510. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Sun, H. Target localization and error analysis of airborne electro-optical platform. Chin. Opt. 2013, 6, 912–918. [Google Scholar]
  54. Wu, J.; Xu, Y.; Zhong, X.; Sun, Z.; Yang, J. A Three-Dimensional Localization Method for Multistatic SAR Based on Numerical Range-Doppler Algorithm and Entropy Minimization. Remote Sens. 2017, 9, 470. [Google Scholar] [CrossRef] [Green Version]
  55. Yuan, X.X.; Fu, J.H.; Zuo, Z.L. Accuracy Analysis of Direct Georeferencing by Airborne Position and Orientation System in Aerial Photogrammetry. Geomat. Inf. Sci. Wuhan Univ. 2006, 31, 847–850. [Google Scholar]
  56. Song, D.; Tharmarasa, R.; Wang, W.; Rao, B.; Brown, D.; Kirubarajan, T. Efficient Bias Estimation in Airborne Video Georegistration for Ground Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 3198–3208. [Google Scholar] [CrossRef]
  57. Taghavi, E.; Song, D.; Tharmarasa, R.; Kirubarajan, T.; McDonald, M.; Balaji, B.; Brown, D. Geo-registration and Geo-location Using Two Airborne Video Sensors. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 2910–2921. [Google Scholar] [CrossRef]
  58. Ma, Z.; Gong, Q.; Chen, Y.; Wang, H. Analysis and study on influence factors of target geo-locating accuracy for electro-optical reconnaissance system. J. Appl. Opt. 2018, 39, 1–6. [Google Scholar] [CrossRef]
  59. Xiu, J.; Huang, P.; Li, J.; Zhang, H.; Li, Y. Line of Sight and Image Motion Compensation for Step and Stare Imaging System. Appl. Sci. 2020, 10, 7119. [Google Scholar] [CrossRef]
  60. Wang, D.; Shu, H. Accuracy Analysis of Three-Dimensional Modeling of a Multi-Level UAV without Control Points. Buildings 2022, 12, 592. [Google Scholar] [CrossRef]
  61. Liu, X.; Lian, X.; Yang, W.; Wang, F.; Han, Y.; Zhang, Y. Accuracy Assessment of a UAV Direct Georeferencing Method and Impact of the Configuration of Ground Control Points. Drones 2022, 6, 30. [Google Scholar] [CrossRef]
  62. Xiao, S. Positioning Accuracy Analysis of Aerial Triangulation of UAV Images without Ground Control Points. J. Chongqing Jiaotong Univ. Nat. Sci. 2021, 40, 117–123. [Google Scholar]
  63. Qiao, C.; Ding, Y.L.; Xu, Y.S. Ground target geo-location based on digital elevation model for airborne wide-area reconnaissance system. J. Appl. Remote Sens. 2018, 12, 016004. [Google Scholar] [CrossRef]
  64. Qiao, C.; Ding, Y.L.; Xu, Y. Ground target geo-location using imaging aerial camera with large inclined angle. Opt. Precis. Eng. 2017, 25, 1714–1726. [Google Scholar]
  65. Bai, G.; Song, Y.; Zuo, Y.; Song, M.; Wang, X. Multitarget location capable of adapting to complex geomorphic environment for the airborne photoelectric reconnaissance system. J. Appl. Remote Sens. 2020, 14, 036510. [Google Scholar] [CrossRef]
  66. El Habchi, A.; Moumen, Y.; Zerrouk, I.; Khiati, W.; Berrich, J.; Bouchentouf, T. CGA: A New Approach to Estimate the Geolocation of a Ground Target from Drone Aerial Imagery. In Proceedings of the 2020 Fourth International Conference On Intelligent Computing in Data Sciences (ICDS), Fez, Morocco, 21–23 October 2020. [Google Scholar]
  67. Huang, C.; Zhang, H.; Zhao, J. High-Efficiency Determination of Coastline by Combination of Tidal Level and Coastal Zone DEM from UAV Tilt Photogrammetry. Remote Sens. 2020, 12, 2189. [Google Scholar] [CrossRef]
  68. Cheng, B.T. A simulation of wide area surveillance (WAS) systems and algorithm for digital elevation model (DEM) extraction. In Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications VII; SPIE: Bellingham, WA, USA, 2010; Volume 7668. [Google Scholar] [CrossRef]
  69. Yang, A.; Li, X.; Xie, J.; Wei, Y. Three-dimensional panoramic terrain reconstruction from aerial imagery. J. Appl. Remote Sens. 2013, 7, 073497. [Google Scholar] [CrossRef]
  70. Belkhouche, Y.; Duraisamy, P.; Buckles, B. Graph-connected components for filtering urban LiDAR data. J. Appl. Remote Sens. 2015, 9, 096075. [Google Scholar] [CrossRef]
  71. Athmania, D.; Achour, H. External Validation of the ASTER GDEM2, GMTED2010 and CGIAR-CSI- SRTM v4.1 Free Access Digital Elevation Models (DEMs) in Tunisia and Algeria. Remote Sens. 2014, 6, 4600–4620. [Google Scholar] [CrossRef] [Green Version]
  72. Han, K.M.; DeSouza, G.N. Geolocation of Multiple Targets from Airborne Video without Terrain Data. J. Intell. Robot. Syst. 2011, 62, 159–183. [Google Scholar] [CrossRef]
  73. Jain, K. How Photogrammetric Software Works: A Perspective Based on UAV’s Exterior Orientation Parameters. J. Indian Soc. Remote Sens. 2021, 49, 641–649. [Google Scholar] [CrossRef]
  74. Qureshi, S.; Soomro, A.; Hincal, E.; Lee, J.R.; Park, C.; Osman, M.S. An efficient variable stepsize rational method for stiff, singular and singularly perturbed problems. Alex. Eng. J. 2022, 61, 10953–10963. [Google Scholar] [CrossRef]
  75. Croci, M.; de Souza, G.R. Mixed-precision explicit stabilized Runge–Kutta methods for single- and multi-scale differential equations. J. Comput. Phys. 2022, 464, 111349. [Google Scholar] [CrossRef]
  76. Arshad, M.; Jabeen, R.; Khan, S. A multiscale domain decomposition approach for parabolic equations using expanded mixed method. Math. Comput. Simul. 2022, 198, 127–150. [Google Scholar] [CrossRef]
  77. Xu, W.; Wang, B.; Xiang, M.; Song, C.; Wang, Z. A Novel Autofocus Framework for UAV SAR Imagery: Motion Error Extraction From Symmetric Triangular FMCW Differential Signal. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5218915. [Google Scholar] [CrossRef]
  78. Zeybek, M. Accuracy assessment of direct georeferencing UAV images with onboard global navigation satellite system and comparison of CORS/RTK surveying methods. Meas. Sci. Technol. 2021, 32, 065402. [Google Scholar] [CrossRef]
  79. Li, X.; Qi, G.; Guo, X.; Ma, S. Trajectory Tracking of a Quadrotor UAV based on High-Order Differential Feedback Control. In Proceedings of the 2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS), Liuzhou, China, 20–22 November 2020; pp. 201–206. [Google Scholar] [CrossRef]
  80. Ma, S.; Liu, J.; Yang, Z.; Zhang, Y.; Hu, J. A pseudo-random sequence generation scheme based on RNS and permutation polynomials. Sci. China Inf. Sci. 2018, 61, 082304. [Google Scholar] [CrossRef] [Green Version]
  81. Long, Z.; Xiang, Y.; Lei, X.; Li, Y.; Hu, Z.; Dai, X. Integrated Indoor Positioning System of Greenhouse Robot Based on UWB/IMU/ODOM/LIDAR. Sensors 2022, 22, 4819. [Google Scholar] [CrossRef]
  82. Alandihallaj, M.A.; Emami, M.R. Satellite replacement and task reallocation for multiple-payload fractionated Earth observation mission. Acta Astronaut. 2022, 196, 157–175. [Google Scholar] [CrossRef]
  83. Ma, H.; Xu, K.; Sun, S.; Zhang, W.; Xi, T. Research on real-time reachability evaluation for reentry vehicles based on fuzzy learning. Open Astron. 2022, 31, 205–216. [Google Scholar] [CrossRef]
  84. Valentini, F.; Silva, O.M.; Torii, A.J.; Cardoso, E.L. Local averaged stratified sampling method. J. Braz. Soc. Mech. Sci. Eng. 2022, 44, 294. [Google Scholar] [CrossRef]
  85. Vermaak, J.I.; Morel, J.E. Transport error estimation using residual Monte Carlo. J. Comput. Phys. 2022, 464, 111306. [Google Scholar] [CrossRef]
  86. Qian, Y.; Sheng, K.; Ma, C.; Li, J.; Ding, M.; Hassan, M. Path Planning for the Dynamic UAV-Aided Wireless Systems Using Monte Carlo Tree Search. IEEE Trans. Veh. Technol. 2022, 71, 6716–6721. [Google Scholar] [CrossRef]
  87. Sun, Y.; Ma, O. Automating Aircraft Scanning for Inspection or 3D Model Creation with a UAV and Optimal Path Planning. Drones 2022, 6, 87. [Google Scholar] [CrossRef]
  88. Zhang, L.; Li, W.; Ju, Y. Evaluation method of positioning accuracy based on circular probability error. Command. Control. Simul. 2013, 35, 111–114. [Google Scholar]
  89. Wang, Y.; Yang, G.; Yan, D. Comprehensive assessment algorithm for calculating CEP of positioning accuracy. Measurement 2014, 47, 255–263. [Google Scholar] [CrossRef]
  90. Moon, G.B.; Jee, G.I.; Lee, J.G. Position determination using the DTV segment sync signal. Int. J. Control Autom. Syst. 2011, 9, 574–580. [Google Scholar] [CrossRef]
  91. Chen, M.; Xiong, Z.; Xiong, J.; Wang, R. A hybrid cooperative navigation method for UAV swarm based on factor graph and Kalman filter. Int. J. Distrib. Sens. Netw. 2022, 18, 15501477211064758. [Google Scholar] [CrossRef]
  92. Safi, H.; Dargahi, A.; Cheng, J. Beam Tracking for UAV-Assisted FSO Links With a Four-Quadrant Detector. IEEE Commun. Lett. 2021, 25, 3908–3912. [Google Scholar] [CrossRef]
  93. Jenssen, R.O.R.; Jacobsen, S.K. Measurement of Snow Water Equivalent Using Drone-Mounted Ultra-Wide-Band Radar. Remote Sens. 2021, 13, 2610. [Google Scholar] [CrossRef]
  94. Deng, L.; Chen, Y.; Zhao, Y.; Zhu, L.; Gong, H.L.; Guo, L.J.; Zou, H.Y. An approach for reflectance anisotropy retrieval from UAV-based oblique photogrammetry hyperspectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102442. [Google Scholar] [CrossRef]
  95. Akin, E.; Demir, K.; Yetgin, H. Multiagent Q-learning based UAV trajectory planning for effective situational awareness. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 20. [Google Scholar] [CrossRef]
  96. Cai, Y.; Ding, Y.; Xiu, J.; Zhang, H.; Qiao, C.; Li, Q. Distortion measurement and geolocation error correction for high altitude oblique imaging using airborne cameras. J. Appl. Remote Sens. 2020, 14, 014510. [Google Scholar] [CrossRef]
  97. Cai, Y.; Ding, Y.; Zhang, H.; Xiu, J.; Liu, Z. Geo-Location Algorithm for Building Targets in Oblique Remote Sensing Images Based on Deep Learning and Height Estimation. Remote Sens. 2020, 12, 2427. [Google Scholar] [CrossRef]
  98. Zhang, F.; Yang, T.; Bai, Y.; Ning, Y.; Li, Y.; Fan, J.; Li, D. Online Ground Multitarget Geolocation Based on 3-D Map Construction Using a UAV Platform. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5621817. [Google Scholar] [CrossRef]
  99. Zhao, X.; Pu, F.; Wang, Z.; Chen, H.; Xu, Z. Detection, Tracking, and Geolocation of Moving Vehicle From UAV Using Monocular Camera. IEEE Access 2019, 7, 101160–101170. [Google Scholar] [CrossRef]
  100. Xing, L.; Fan, X.; Dong, Y.; Xiong, Z.; Xing, L.; Yang, Y.; Bai, H.; Zhou, C. Multi-UAV cooperative system for search and rescue based on YOLOv5. Int. J. Disaster Risk Reduct. 2022, 76, 102972. [Google Scholar] [CrossRef]
  101. Bertin, S.; Stéphan, P.; Ammann, J. Assessment of RTK Quadcopter and Structure-from-Motion Photogrammetry for Fine-Scale Monitoring of Coastal Topographic Complexity. Remote Sens. 2022, 14, 1679. [Google Scholar] [CrossRef]
  102. Yang, B.; Ali, F.; Zhou, B.; Li, S.; Yu, Y.; Yang, T.; Liu, X.; Liang, Z.; Zhang, K. A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras. Comput. Electr. Eng. 2022, 99, 107804. [Google Scholar] [CrossRef]
  103. Zhang, Y.; Ma, G.; Wu, J. Air-Ground Multi-Source Image Matching Based on High-Precision Reference Image. Remote Sens. 2022, 14, 588. [Google Scholar] [CrossRef]
  104. Zhuang, J.; Chen, X.; Dai, M.; Lan, W.; Cai, Y.; Zheng, E. A Semantic Guidance and Transformer-Based Matching Method for UAVs and Satellite Images for UAV Geo-Localization. IEEE Access 2022, 10, 34277–34287. [Google Scholar] [CrossRef]
  105. Bi, R.; Gan, S.; Yuan, X.; Li, R.; Gao, S.; Luo, W.; Hu, L. Studies on Three-Dimensional (3D) Accuracy Optimization and Repeatability of UAV in Complex Pit-Rim Landforms As Assisted by Oblique Imaging and RTK Positioning. Sensors 2021, 21, 8109. [Google Scholar] [CrossRef] [PubMed]
  106. Zhuang, J.; Dai, M.; Chen, X.; Zheng, E. A Faster and More Effective Cross-View Matching Method of UAV and Satellite Images for UAV Geolocalization. Remote Sens. 2021, 13, 3979. [Google Scholar] [CrossRef]
Figure 1. Number of papers and patents published each year.
Figure 2. Distribution of positioning foundation model.
Figure 3. Schematic of ECEF coordinate system and NED coordinate system.
Figure 4. Schematic of NED coordinate system and AC coordinate system.
Figure 5. Schematic diagram of camera coordinate system.
Figure 6. Target projection point position.
Figure 7. Schematic diagram of target positioning results in areas with complex terrain. (a) Positioning results using the earth ellipsoid model; (b) Positioning results using DEM model.
Figure 8. Schematic of target location model based on DEM.
Figure 9. Schematic of positional relationship for target and target projection point in the camera coordinate frame.
Figure 10. Schematic of positional relationship for target and target projection point in the camera coordinate frame.
Figure 11. Two-point intersection positioning model.
Figure 12. Positioning errors caused by imaging angle error.
Figure 13. Positioning errors caused by target elevation error.
Figure 14. Schematic diagram of the positioning error caused by image distortion.
Figure 15. Schematic diagram of positioning error with building height.
Figure 16. Schematic diagram of positioning accuracy change.
Figure 17. Schematic diagram of moving target positioning result.
Table 1. Summary table of typical target localization algorithms.

| Positioning Algorithm Basis | Advantages | Disadvantages |
| --- | --- | --- |
| Earth ellipsoid model | Single-image long-distance positioning; real-time processing capability; easily extended and modified | Elevation information required; many error factors |
| Digital elevation model (DEM) | Able to calculate target elevation | Large DEM data volume; unstable calculation results |
| LRF or SAR | High positioning accuracy; small influence of angle error | Limited working distance of the LRF; increases the carrier payload |
| Multipoint intersection measurement | Reduces terrain error; high theoretical positioning accuracy | Low positioning efficiency; complex calculation; demanding time-accuracy requirement |
| Filtering algorithm | Long-distance positioning; high positioning accuracy | High time cost and low positioning efficiency; only applicable to target tracking |
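The Earth ellipsoid model in Table 1 locates a target from a single image by intersecting the camera's line of sight with the WGS-84 ellipsoid in ECEF coordinates. A minimal sketch of that intersection follows; the function name is illustrative and not from the paper, and a real system would first rotate the line of sight from camera to ECEF axes through the gimbal and attitude angles:

```python
import math

# WGS-84 ellipsoid semi-axes (metres)
A_WGS84 = 6378137.0
B_WGS84 = 6356752.314245

def ray_ellipsoid_intersect(p, u):
    """Intersect a line of sight with the WGS-84 ellipsoid.

    p -- camera position in ECEF coordinates (metres)
    u -- unit line-of-sight vector in ECEF coordinates
    Returns the ECEF coordinates of the near ground intersection,
    or None if the ray misses the Earth or points away from it.
    """
    px, py, pz = p
    ux, uy, uz = u
    a2, b2 = A_WGS84 ** 2, B_WGS84 ** 2
    # Substitute p + t*u into (x^2 + y^2)/a^2 + z^2/b^2 = 1
    qa = (ux * ux + uy * uy) / a2 + uz * uz / b2
    qb = 2.0 * ((px * ux + py * uy) / a2 + pz * uz / b2)
    qc = (px * px + py * py) / a2 + pz * pz / b2 - 1.0
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None                 # line of sight misses the Earth
    t = (-qb - math.sqrt(disc)) / (2.0 * qa)  # nearer of the two roots
    if t < 0.0:
        return None                 # intersection is behind the camera
    return (px + t * ux, py + t * uy, pz + t * uz)

# Camera 5 km above the equator, looking straight down the ECEF x-axis:
hit = ray_ellipsoid_intersect((A_WGS84 + 5000.0, 0.0, 0.0), (-1.0, 0.0, 0.0))
# hit recovers a point on the ellipsoid surface, (6378137.0, 0.0, 0.0)
```

Taking the smaller positive root selects the intersection on the near side of the Earth, which is why the model needs no control points but, as Table 1 notes, remains sensitive to every upstream attitude and position error.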
Table 2. Major error terms and their distributions.

| Error Classification | Error Distribution Form |
| --- | --- |
| Camera latitude | Normal distribution |
| Camera longitude | Normal distribution |
| Camera elevation | Normal distribution |
| Aerial carrier pitch angle | Normal distribution |
| Aerial carrier roll angle | Normal distribution |
| Aircraft yaw angle | Normal distribution |
| Three-axis vibration error of shock absorber | Uniform distribution |
| Image point position measurement error | Normal distribution |
| Laser ranging error | Normal distribution |
| Frame angle measurement error | Normal distribution |
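The Monte Carlo error analysis mentioned in the abstract amounts to sampling the Table 2 error terms from their stated distributions and propagating each draw through the positioning model. The sketch below does this for a simplified flat-terrain model and reports the circular error probable (CEP, the radius containing 50% of the misses); `locate_flat_ground` is a hypothetical stand-in for the full positioning chain, and all error magnitudes are illustrative assumptions rather than values from the paper:

```python
import math
import random

def locate_flat_ground(h, theta, psi):
    """Flat-terrain geolocation: camera at height h (m), line of sight
    tilted theta rad from nadir, azimuth psi rad from north.  Returns
    the (north, east) ground offset of the target from the nadir point."""
    d = h * math.tan(theta)
    return (d * math.cos(psi), d * math.sin(psi))

def monte_carlo_cep(h, theta, psi, n=20000, seed=1):
    """Estimate the CEP by sampling Table 2 error terms: normally
    distributed altitude and angle errors plus a uniformly distributed
    shock-absorber vibration term on the line-of-sight angle."""
    rng = random.Random(seed)
    n0, e0 = locate_flat_ground(h, theta, psi)
    misses = []
    for _ in range(n):
        dh = rng.gauss(0.0, 5.0)                  # altitude error, sigma = 5 m
        dth = rng.gauss(0.0, math.radians(0.05))  # pointing error, sigma = 0.05 deg
        dps = rng.gauss(0.0, math.radians(0.05))  # azimuth error, sigma = 0.05 deg
        vib = rng.uniform(-math.radians(0.01), math.radians(0.01))  # vibration
        n1, e1 = locate_flat_ground(h + dh, theta + dth + vib, psi + dps)
        misses.append(math.hypot(n1 - n0, e1 - e0))
    misses.sort()
    return misses[n // 2]   # median miss distance = 50% circle radius

cep = monte_carlo_cep(h=5000.0, theta=math.radians(45.0), psi=0.0)
```

Unlike the total differential method, which linearizes the model around the nominal operating point, the Monte Carlo estimate captures the nonlinear growth of the angle terms at large look angles, at the cost of many model evaluations.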
Table 3. Analysis of often-ignored errors.

| Error Classification | Error Influence |
| --- | --- |
| Atmospheric scattering | Visual-axis pointing deviation |
| Atmospheric turbulence | Visual-axis stability deviation |
| Image distortion | Target projection deviation |
| Height of target | Real position deviation |
| Visual-axis control accuracy | Reprojection pointing deviation |
| Target detection accuracy | Pixel deviation of the target to be located |
| Accuracy of image registration | Homonymous-point pixel deviation |
Cai, Y.; Zhou, Y.; Zhang, H.; Xia, Y.; Qiao, P.; Zhao, J. Review of Target Geo-Location Algorithms for Aerial Remote Sensing Cameras without Control Points. Appl. Sci. 2022, 12, 12689. https://doi.org/10.3390/app122412689