Article

UAV Control on the Basis of 3D Landmark Bearing-Only Observations

Institute for Information Transmission Problems RAS, Bolshoy Karetny per. 19, Build. 1, GSP-4, Moscow 127051, Russia
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 29802-29820; https://doi.org/10.3390/s151229768
Submission received: 15 August 2015 / Revised: 20 October 2015 / Accepted: 10 November 2015 / Published: 27 November 2015
(This article belongs to the Special Issue UAV Sensors for Environmental Monitoring)

Abstract
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is a 3D RANSAC algorithm built on the prediction of the landmarks' positions with the aid of a modified Kalman-type filter. The filter modification, based on the pseudo-measurement approach, yields an unbiased UAV position estimate together with its mean-square error characteristics. Simulation of a UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.

1. Introduction

Modern UAV navigation systems use the standard elements of INS (inertial navigation systems) along with GPS, which permits correcting the INS bias and improving the UAV localization necessary for mapping, targeting and reconnaissance tasks [1]. The use of computer vision as a secondary or primary method for autonomous navigation of UAVs has been discussed frequently in recent years, since the classical combination of GPS and INS cannot sustain autonomous flight in many situations [2]. UAV autonomous missions usually need so-called data fusion, which is a difficult task, especially for standard INS and vision equipment. It is clear that cameras provide visual information in a form that is not directly applicable to UAV control; therefore, one needs additional on-board memory and special recognition algorithms.

1.1. Visual-Based Navigation Approaches

Several studies have demonstrated the effectiveness of approaches based on motion field estimation and feature tracking for visual odometry [3]. Vision-based methods have been proposed even in the context of autonomous landing management [2]. In [4], visual odometry based on geometric homography was proposed. However, the homography analysis uses only 2D reference point coordinates, though the 3D coordinates are necessary for the evaluation of the current UAV altitude. All such approaches presume the presence of a recognition system able to detect objects nominated in advance. Examples of such objects are particular buildings, crossroads, mountain tops, and so on. The principal difficulties are the different scales and aspect angles of the observed and stored images, which necessitates huge template libraries in the memory of the UAV control system. Here, one can avoid this difficulty by using another approach based on the observation of so-called feature points [5], which are scale and aspect angle invariant. For this purpose, the technology of feature points [6] is used. In [7], an approach was suggested based on the coordinate correspondence between the reference points observed by the on-board camera and the reference points on a map loaded into the UAV's memory before the mission start. During the flight, these maps are compared to the frame of the terrain directly observed with the help of the on-board video camera. As a result, one can detect the current location and orientation without time-error accumulation. These methods are invariant to some transformations, and they are noise-stable, so that the predetermined maps may differ in scale, aspect angle, season, luminosity, weather conditions, etc. This technology appeared in [8]. The contribution of this work is the usage of a modified unbiased pseudo-measurement filter for bearing-only observations of reference points with known terrain coordinates.

1.2. Kalman Filter

In order to obtain metric data from visual observations, one needs first to make observations from different positions (i.e., triangulation) and then use nonlinear filtering. However, all nonlinear filters either have unknown bias [9] or are very difficult for on-board implementation, like Bayesian-type estimation [10,11]. Approaches to position estimation based on bearing-only observations have been analyzed for a long time, especially for submarine applications [12], and nowadays for UAV applications [1].
A comparison of different nonlinear filters for bearing-only observations in the problem of ground-based object localization [13] shows that the EKF (extended Kalman filter), the unscented Kalman filter, the particle filter and the pseudo-measurement filter give almost the same level of accuracy, while the pseudo-measurement filter is usually more stable and simpler for on-board implementation. This observation is in accordance with older results [12], where all of these filters were compared in the problem of moving object localization. It has been noted that all of these filters have bias, which makes their use in data fusion rather problematic [14]. The principal requirement for such filters in data fusion is an unbiased estimate with a known mean-square characterization of the error. Among the variety of possible filters, the pseudo-measurement filter can be easily modified to satisfy the data fusion demands. The idea of such nonlinear filtering was developed by V.S. Pugachev and I. Sinitsyn in the form of so-called conditionally-optimal filtering [15], which provides the unbiased estimate with the minimum mean-square error within the class of linear filters. In this paper, we develop such a filter (the so-called pseudo-measurement Kalman filter (PKF)) for the UAV position estimation and give the algorithm for path planning along the reference trajectory under external perturbations and noisy measurements.

1.3. Optical Absolute Positioning

Aerospace maps of the terrain in the flight zone are loaded into the aircraft memory before the start of a flight. During the flight, these maps are compared to the frame of the terrain directly observed with the help of the on-board video camera. For this purpose, the technology of feature points [6] is used. As a result, one can detect the current location and orientation without time-error accumulation. These methods are invariant to some transformations and are also noise-stable, so that the predetermined maps can vary in altitude, season, luminosity, weather conditions, etc. Furthermore, since the moment of the previous aerial survey, the picture of the landscape may have changed due to human and natural activity. All approaches based on the capturing of objects assigned in advance presume the presence of an on-board recognition system able to detect and recognize such objects. Here, we avoid this difficulty by using the observation of feature points [5], which are scale and aspect angle invariant. In addition, the modified pseudo-measurement Kalman filtering (PKF) is used for the estimation of the UAV positions and for the control algorithm.

1.4. Outline of the Approach and the Article Structure

One of the principal parts of this research is an approach to the estimation of the UAV position on the basis of bearing-only observations. The original filter using the idea of pseudo-measurements was suggested in [16] for the case of the azimuth bearing of terrain objects nominated in advance. In [17], this approach was extended to the case of two angle measurements, namely azimuth and elevation. However, the usage of this approach as a real navigation tool needs huge on-board memory and a sophisticated recognition algorithm, since the template and in-flight observed images, even of the same object, are rather different due to the changes of illuminance, flight altitude and aspect angles. That is why the method based on the observation of feature points looks more attractive for in-flight implementation. In [7], an algorithm joining the feature point approach and the modified PKF was suggested, though for 2D feature point localization; the more advanced 3D localization was suggested in [18,19], which are shortened versions of the methodology presented in this article.
In this work, we use a computer simulation of a UAV flight and of the on-board video camera imaging. The simulation program is written in MATLAB. The type of feature points is ASIFT, realized in OpenCV (Python) [20]. Feature points in this model behave as in a real flight, because the images for the camera model and the template images were transformed by projective mapping and created from observations by different satellites.
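For readers who want to reproduce the feature-matching stage, the following is a minimal sketch of descriptor matching with plain SIFT in OpenCV (Python); the ASIFT sample referenced in [20] wraps the same detector and matcher in a loop over simulated affine viewpoints. The image file names are placeholders, not files from this project.

```python
import cv2

# Minimal SIFT matching sketch; the ASIFT sample in [20] adds a loop over
# simulated affine viewpoints around this same pipeline.
img_cam = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # on-board frame
img_map = cv2.imread("template_map.png", cv2.IMREAD_GRAYSCALE)   # stored template

sift = cv2.SIFT_create()
kp_c, des_c = sift.detectAndCompute(img_cam, None)
kp_m, des_m = sift.detectAndCompute(img_map, None)

# Lowe's ratio test [6] keeps only sufficiently unambiguous pairs
matcher = cv2.BFMatcher()
pairs = []
for pair in matcher.knnMatch(des_c, des_m, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        pairs.append(pair[0])
```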
The next section presents the original RANSAC algorithm for 3D feature point localization. Section 3 and Section 4 describe the PKF, which provides the unbiased estimation of the UAV position together with the estimate of the quadratic errors. Section 5 presents a new approach to the RANSAC robustness with the use of the UAV motion model. Section 6 describes the locally optimal control algorithm for tracking the reference trajectory on the basis of the PKF estimation of the UAV position. Section 7 presents the modeling results, Section 8 discusses them, and Section 9 concludes the article.

2. Random Sample Consensus for Isometry

At every step, the algorithm deals with two images of a 3D landscape. An example of the landscape used for modeling is shown in Figure 1.
Figure 1. (a) Image $I_c$ of the on-board camera of the UAV; (b) template image loaded in the UAV memory in advance.
The first image $I_c$ is obtained from the on-board UAV camera, whose position is unknown and should be estimated. The second image $I_m$ was taken previously from a known position and uploaded into the UAV memory. The ASIFT method detects feature points in both images, specified in pixel coordinates:
$$c_i = \begin{pmatrix} c_{xi} \\ c_{yi} \end{pmatrix}$$
and:
$$m_i = \begin{pmatrix} m_{xi} \\ m_{yi} \end{pmatrix}$$
and computes their descriptors. The correspondence between the images is constructed by matching these descriptors, and thereby the feature points are combined in pairs. However, many of these pairs are wrong; such pairs are treated as outliers. The result of the ASIFT correspondence is shown in Figure 2.
The Earth coordinate system is a Cartesian coordinate system rigidly connected with the Earth. The algorithm uses a 3D terrain map of the area from which the image $I_m$ was taken and over which the UAV flies, so one can determine the coordinates of the points:
$$r_i = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}$$
which generated the points $m_i$, in the Earth coordinate system. If the $i$-th pair is not an outlier, then the point $r_i$ also generates the point $c_i$ in the UAV camera image.
Figure 2. Correspondence of the ASIFT feature points between (a) the image $I_c$ of the on-board camera of the UAV and (b) the template image loaded in the UAV memory in advance.
Another Cartesian coordinate system is rigidly connected with the UAV camera; the camera's optical axis is parallel to its $z$ axis. The transformation from the Earth coordinate system to the UAV coordinate system has the form:
$$r' = A(r - b) \tag{1}$$
where $b$ represents the coordinates of the camera in the Earth coordinate system and $A$ is the orthogonal ($AA^T = I$) rotation matrix defining the orientation of the UAV camera. Then, the coordinates $r'_i$ of the points in the camera coordinate system are:
$$r'_i = A(r_i - b)$$
To define the relation between $r'_i$ and the feature points $c_i$, one can use the pinhole camera model (camera obscura). This model gives a central projection onto the image plane:
$$c_i = \begin{pmatrix} \rho_{xi}/\rho_{zi} \\ \rho_{yi}/\rho_{zi} \end{pmatrix}$$
where:
$$\begin{pmatrix} \rho_{xi} \\ \rho_{yi} \\ \rho_{zi} \end{pmatrix} = \rho_i = K r'_i = K A (r_i - b)$$
where K is a known calibration matrix of the camera.
Thus, the task is to estimate $A$ and $b$ on the basis of the known $c_i$, $r_i$ and $K$. The minimum number of feature point pairs needed to solve this task is three.
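As an illustration of this camera model, the sketch below projects an Earth-frame point through $\rho = KA(r - b)$; the numeric values of $K$, $A$ and $b$ are illustrative assumptions, not parameters from the experiments.

```python
import numpy as np

def project_point(r, A, b, K):
    """Central projection of an Earth-frame point r to pixel coordinates:
    rho = K A (r - b), c = (rho_x / rho_z, rho_y / rho_z)."""
    rho = K @ A @ (r - b)
    return rho[:2] / rho[2]

# Illustrative camera: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
A = np.eye(3)                          # camera axes aligned with Earth axes
b = np.array([0.0, 0.0, -100.0])       # camera 100 m from the scene along z
print(project_point(np.array([10.0, 5.0, 0.0]), A, b, K))   # -> [400. 280.]
```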
One can give the solution of the problem under the assumption that there are just three pairs: $i \in \{1, 2, 3\}$.
The points $r_i$ form a triangle in space with the side lengths:
$$\rho_1 = \|r_2 - r_3\|_2, \quad \rho_2 = \|r_3 - r_1\|_2, \quad \rho_3 = \|r_1 - r_2\|_2$$
Meanwhile, due to the rectilinear propagation of light, each point $r'_i$ lies on the ray $r' = a_i t$, where $t$ is a scalar parameter and:
$$a_i = K^{-1} \begin{pmatrix} c_{xi} \\ c_{yi} \\ 1 \end{pmatrix}$$
In order to find $r'_i$, we have to determine the parameters $t_i$, $i \in \{1, 2, 3\}$, that satisfy the system of quadratic equations:
$$\begin{aligned}
(a_2 t_2 - a_3 t_3)^T (a_2 t_2 - a_3 t_3) &= \rho_1^2 \\
(a_3 t_3 - a_1 t_1)^T (a_3 t_3 - a_1 t_1) &= \rho_2^2 \\
(a_1 t_1 - a_2 t_2)^T (a_1 t_1 - a_2 t_2) &= \rho_3^2
\end{aligned}$$
For a given $t_1$, this system either can be solved analytically or has no solution. The determination of $t_1$ can be done numerically, for example by the bisection method.
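A minimal sketch of this step is given below, assuming a bracketing interval for $t_1$ is known and taking the smaller root of each quadratic; a complete implementation would examine both roots and keep only positive depths.

```python
import numpy as np

def solve_depths(a1, a2, a3, rho1, rho2, rho3, t1_lo, t1_hi, iters=60):
    """Bisection sketch for the depth parameters t_i.

    For a trial t1, t2 and t3 follow from ||a1 t1 - a2 t2||^2 = rho3^2 and
    ||a1 t1 - a3 t3||^2 = rho2^2 (smaller quadratic root taken here);
    bisection drives the residual of ||a2 t2 - a3 t3||^2 = rho1^2 to zero.
    Assumes [t1_lo, t1_hi] brackets a root of the residual.
    """
    def depth(ai, aj, ti, rho):
        # ||ai ti - aj tj||^2 = rho^2 is a quadratic in tj
        A = aj @ aj
        B = -2.0 * (ai @ aj) * ti
        C = (ai @ ai) * ti ** 2 - rho ** 2
        disc = B * B - 4.0 * A * C
        if disc < 0.0:
            return None                       # no real solution for this t1
        return (-B - np.sqrt(disc)) / (2.0 * A)

    def residual(t1):
        t2 = depth(a1, a2, t1, rho3)
        t3 = depth(a1, a3, t1, rho2)
        if t2 is None or t3 is None:
            return None, t2, t3
        d = a2 * t2 - a3 * t3
        return d @ d - rho1 ** 2, t2, t3

    lo, hi = t1_lo, t1_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        res, _, _ = residual(mid)
        if res is None or res > 0.0:          # overshoot: shrink from above
            hi = mid
        else:
            lo = mid
    t1 = 0.5 * (lo + hi)
    _, t2, t3 = residual(t1)
    return t1, t2, t3
```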
Finally, one obtains the coordinates of the three points on the Earth's surface both in the camera coordinate system, $r'_i$, and in the Earth coordinate system, $r_i$. The connection between them is $r'_i = A(r_i - b)$. Since $A$ is an orthogonal matrix, $y = Ax$ implies $\|y\|_2 = \|x\|_2$; thereby:
$${r'_i}^T r'_i = (r_i - b)^T (r_i - b)$$
Therefore, we have eliminated $A$ and obtained the problem of finding the intersection of three spheres, which can be solved analytically. This problem may have two solutions; one of them will be rejected later. When $b$ has been found, the solution for $A$ is as follows:
$$A = \begin{pmatrix} r'_1 & r'_2 & r'_3 \end{pmatrix} \begin{pmatrix} r_1 - b & r_2 - b & r_3 - b \end{pmatrix}^{-1}$$
Therefore, there are two options, and only one of them is correct:
$$\begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix} = \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix} \times \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix} = -\begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix} \times \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \end{pmatrix}$$
If the first relation holds, the solution is correct; the second one corresponds to a flipped (mirror-reflected) configuration and is rejected.
As a result, A and b have been found by using three pairs of feature points and the height map. However, this approach alone is not suitable as the final solution, due to the following problems:
  • The method gives either a knowingly false solution or no solution at all if there are outliers among the three points.
  • There is a strong dependence on the noise in the feature points' locations.
Both problems may be solved with random sample consensus (RANSAC) [21,22]. From the general set of points, one selects $N$ subsamples of size three. For each subsample $j \in \{1, 2, \ldots, N\}$, one can calculate $A_j$ and $b_j$, which allows one to simulate the generation of all feature points on the UAV camera:
$$c_{ji} = \begin{pmatrix} \rho_{jxi}/\rho_{jzi} \\ \rho_{jyi}/\rho_{jzi} \end{pmatrix}, \qquad \rho_{ji} = K r'_{ji} = K A_j (r_i - b_j)$$
Then, one can evaluate which points are outliers by thresholding: $s_{ji} = \mathbf{1}(\|c_i - c_{ji}\|_2 < d)$, where $d$ is the threshold. Here, $s_{ji} = 0$ means that the reprojection of the $i$-th point under the $j$-th hypothesis is counted as an outlier; otherwise, $s_{ji} = 1$. The answer is the following:
$$A = A_J, \quad b = b_J, \quad J = \arg\max_j \sum_i s_{ji}$$
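A sketch of this selection loop is shown below; `solve_from_three` and `project` stand for the three-point solution and the camera projection described above and are assumed helpers of this sketch, not functions from the authors' code.

```python
import numpy as np

def ransac_pose(c, r, K, N, d, solve_from_three, project):
    """RANSAC selection sketch: c is an (n, 2) array of observed pixel
    points, r an (n, 3) array of the corresponding Earth-frame points.
    solve_from_three(c3, r3, K) -> (A, b) is the three-point solver and
    project(ri, A, b, K) the reprojection; both are assumed helpers."""
    rng = np.random.default_rng(0)
    best_A, best_b, best_score = None, None, -1
    for _ in range(N):
        idx = rng.choice(len(c), size=3, replace=False)
        try:
            A, b = solve_from_three(c[idx], r[idx], K)
        except ValueError:            # the subsample admits no solution
            continue
        pred = np.array([project(ri, A, b, K) for ri in r])
        score = np.sum(np.linalg.norm(c - pred, axis=1) < d)   # sum_i s_ji
        if score > best_score:
            best_A, best_b, best_score = A, b, score
    return best_A, best_b
```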
This selection resolves the problem of outliers. Next, we find the required number $N$ of subsamples of size three such that, with probability $p$, at least one of them contains no outliers. Let the proportion of outliers be $1 - w$. It is easy to see that [21]:
$$N(p) = \frac{\log(1 - p)}{\log(1 - w^3)}$$
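This count is cheap to evaluate on board; a one-line check of the formula (rounding up to an integer number of trials):

```python
import numpy as np

def ransac_trials(p, w, s=3):
    """Subsamples of size s needed so that at least one is outlier-free
    with probability p, given inlier fraction w (formula above)."""
    return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - w ** s)))

print(ransac_trials(0.9999, 0.5))   # -> 69, as stated in the text
```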
In the case when $w = \frac{1}{2}$: $N(0.9999) \approx 69$, which shows the high efficiency of the algorithm. After that, the points marked as outliers are removed from consideration. The estimate is then refined by numerically solving the following optimization problem on the set of remaining points:
$$\{A^*, b^*\} = \arg\min_{A, b} \sum_i \|c_i - c_i(A, b)\|_2^2$$
Thereby, the second problem, noise reduction, may be solved. However, one can use a more advanced procedure that takes the motion model into account. A more stable solution may be obtained with the aid of the so-called robust RANSAC [23]; the idea is to use predicted values of $(A, b)$ for the preliminary rejection of outliers from the pairs of observed feature points. Therefore, if at the $k$-th step of the filtering procedure the values $(A_k, b_k)$ have been obtained, one can use the following values at the $(k+1)$-th step:
$$(A_{k+1}, b_{k+1}) = (A_k, \hat b_{k+1})$$
where $\hat b_{k+1}$ is the predicted estimate of the UAV position obtained on the basis of the PKF estimate. The filtering approach is described in Section 5.

3. Filtering Problem Statement

The problem of bearing-only filtering is considered: determine the coordinates of a UAV that observes some objects with known coordinates. These objects can be either well-recognizable objects or a network of radio-beacon stations with well-specified frequencies and known coordinates. In this work, the function of beacons is performed by the feature points determined with the aid of the RANSAC algorithm. The UAV carries a standard set of INS devices, which enables it to perform the flight with some degree of accuracy that, however, is not sufficient for mission completion.

3.1. Model of the UAV’s Motion

We assume that the UAV motion is described by three coordinates $(X(t_k), Y(t_k), Z(t_k))$ and velocities $(V_x(t_k), V_y(t_k), V_z(t_k))$. At the times $t_k = k\Delta t$, $k = 1, 2, \ldots$, these coordinates satisfy the following equation:
$$X(t_{k+1}) = F X(t_k) + B a(t_k) + W(t_k) \tag{2}$$
where:
$$X(t_k) = (X(t_k), Y(t_k), Z(t_k), V_x(t_k), V_y(t_k), V_z(t_k))^T$$
is the state vector of coordinates and velocities,
$$a(t_k) = (a_x(t_k), a_y(t_k), a_z(t_k))^T$$
is the vector of accelerations, which we consider as controls,
$$W(t_k) = (0, 0, 0, W_x(t_k), W_y(t_k), W_z(t_k))^T$$
is the vector of current perturbations, modeling the turbulence components of the wind as well as the autopilot errors. The matrices $F$ and $B$ are:
$$F = \begin{pmatrix} 1 & 0 & 0 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 & \Delta t & 0 \\ 0 & 0 & 1 & 0 & 0 & \Delta t \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
$$B = \begin{pmatrix} \frac{\Delta t^2}{2} & 0 & 0 \\ 0 & \frac{\Delta t^2}{2} & 0 \\ 0 & 0 & \frac{\Delta t^2}{2} \\ \Delta t & 0 & 0 \\ 0 & \Delta t & 0 \\ 0 & 0 & \Delta t \end{pmatrix}$$
and the stochastic Equation (2) describes the controlled and perturbed UAV motion.
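The following sketch builds $F$ and $B$ for a given sampling interval and performs one step of Equation (2); the numeric time step and noise level are assumptions for illustration.

```python
import numpy as np

dt = 0.1                                   # sampling interval (assumed)

# Transition and control matrices F and B of Equation (2)
F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt
B = np.vstack([np.eye(3) * dt ** 2 / 2.0, np.eye(3) * dt])

def motion_step(X, a, sigma_w, rng=np.random.default_rng()):
    """One step X(t_{k+1}) = F X(t_k) + B a(t_k) + W(t_k); the
    perturbation W acts on the velocity components only."""
    W = np.concatenate([np.zeros(3), rng.normal(0.0, sigma_w, 3)])
    return F @ X + B @ a + W

X0 = np.zeros(6)                           # at rest at the origin
X1 = motion_step(X0, a=np.array([0.0, 0.0, 1.0]), sigma_w=0.05)
```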

3.2. Measurements

Assume that $(X_i, Y_i, Z_i)$ are the coordinates of the $i$-th reference point and $\phi_i(t_k), \lambda_i(t_k)$ are the bearing angles on that point. The measuring scheme is shown in Figure 3.
Figure 3. Scheme of the measurements of bearing angles. $X_i, Y_i, Z_i$ are the coordinates of the $i$-th feature point; $\lambda(t_k), \phi(t_k)$ are the elevation and azimuth bearing angles, measured at the moment $t_k$.
At the moment $t_k$, these angles satisfy the relations:
$$\begin{aligned}
\frac{Y_i + \varepsilon_k^y - Y(t_k)}{X_i + \varepsilon_k^x - X(t_k)}\, I_i(t_k) &= \tan\phi_i(t_k) + \varepsilon_k^\phi \\
\frac{Z_i + \varepsilon_k^z - Z(t_k)}{\sqrt{(X_i + \varepsilon_k^x - X(t_k))^2 + (Y_i + \varepsilon_k^y - Y(t_k))^2}}\, I_i(t_k) &= \tan\lambda_i(t_k) + \varepsilon_k^\lambda
\end{aligned} \tag{3}$$
where $\varepsilon_k^x \sim \mathrm{WN}(0, \sigma_x^2)$, $\varepsilon_k^y \sim \mathrm{WN}(0, \sigma_y^2)$, $\varepsilon_k^z \sim \mathrm{WN}(0, \sigma_z^2)$, $\varepsilon_k^\phi \sim \mathrm{WN}(0, \sigma_\phi^2)$, $\varepsilon_k^\lambda \sim \mathrm{WN}(0, \sigma_\lambda^2)$ are uncorrelated random variables with zero means and variances $\sigma_x^2, \sigma_y^2, \sigma_z^2, \sigma_\phi^2, \sigma_\lambda^2$, representing the errors in the measurement of the coordinates of the $i$-th reference point and of the tangents of the angles $\phi_i(t_k), \lambda_i(t_k)$, and forming white-noise sequences.
Remark 1. 
In the majority of works based on the method of pseudo-measurements, another model is used: it assumes measurements of the angles with Gaussian errors (see [12,14] and most subsequent works). However, in the real determination of the object position in the image or on the sensor matrix, the system measures the distance between the object image and the center of the sensor, that is, the tangent of the bearing angle. This simple observation allows one to find an unbiased estimate of the UAV coordinates.
One can rewrite Equation (3) for the angle $\lambda_i(t_k)$ as follows:
$$\frac{Z_i + \varepsilon_k^z - Z(t_k)}{Y_i + \varepsilon_k^y - Y(t_k)}\,\sin\phi_i(t_k)\, I_i(t_k) = \frac{\sin\lambda_i(t_k)}{\cos\lambda_i(t_k)} + \varepsilon_k^\lambda \tag{4}$$
Remark 2. 
The indicator function $I_i(t_k) = 1$ if the bearing of the $i$-th reference point occurs at $t_k$, and $I_i(t_k) = 0$ otherwise. For convenience, we assume below that $I_i(t_k) = 1$.
Therefore, at the moment $t_k$, the UAV control system determines the angles $\phi_i(t_k)$ and $\lambda_i(t_k)$, related to the coordinates of the UAV $(X(t_k), Y(t_k), Z(t_k))$ as follows:
$$\begin{aligned}
(Y_i + \varepsilon_k^y - Y(t_k))\cos\phi_i(t_k) - (X_i + \varepsilon_k^x - X(t_k))\sin\phi_i(t_k) &= \varepsilon_k^\phi\,(X_i + \varepsilon_k^x - X(t_k))\cos\phi_i(t_k) \\
(Z_i + \varepsilon_k^z - Z(t_k))\sin\phi_i(t_k)\cos\lambda_i(t_k) - (Y_i + \varepsilon_k^y - Y(t_k))\sin\lambda_i(t_k) &= \varepsilon_k^\lambda\,(Y_i + \varepsilon_k^y - Y(t_k))\cos\lambda_i(t_k)
\end{aligned} \tag{5}$$

4. Modified Kalman Filtering on the Basis of Pseudo-Measurements

4.1. Linear Measurements Model

The idea of the pseudo-measurement method is to separate in Equation (5) the observable and non-observable values, which gives the following observation equations:
$$\begin{aligned}
m_k^\phi &= Y_i \cos\phi_i(t_k) - X_i \sin\phi_i(t_k) \\
&= Y(t_k)\cos\phi_i(t_k) - X(t_k)\sin\phi_i(t_k) - \varepsilon_k^y \cos\phi_i(t_k) + \varepsilon_k^x \sin\phi_i(t_k) + \varepsilon_k^\phi\,(X_i + \varepsilon_k^x - X(t_k))\cos\phi_i(t_k) \\
m_k^\lambda &= Z_i \sin\phi_i(t_k)\cos\lambda_i(t_k) - Y_i \sin\lambda_i(t_k) \\
&= Z(t_k)\sin\phi_i(t_k)\cos\lambda_i(t_k) - Y(t_k)\sin\lambda_i(t_k) - \varepsilon_k^z \sin\phi_i(t_k)\cos\lambda_i(t_k) + \varepsilon_k^y \sin\lambda_i(t_k) + \varepsilon_k^\lambda\,(Y_i + \varepsilon_k^y - Y(t_k))\cos\lambda_i(t_k)
\end{aligned} \tag{6}$$
where $X_i, Y_i, Z_i$ represent the coordinates of the $i$-th feature point determined with the aid of the RANSAC algorithm, and $\phi_i, \lambda_i$ are the corresponding observable bearing angles measured by the system. Thus, the left-hand side of Equation (6), that is $(m_k^\phi, m_k^\lambda)$, corresponds to the observable values, whereas the right-hand side, containing the coordinates of the UAV, corresponds to the non-observable ones. The aim is to estimate the coordinates and velocities of the UAV on the basis of the linear observation Equation (6) and the motion model Equation (2). Therefore, the measurement vector has the following form:
$$m_k = \begin{pmatrix} m_k^\phi \\ m_k^\lambda \end{pmatrix} = \begin{pmatrix} Y(t_k)\cos\phi_i(t_k) - X(t_k)\sin\phi_i(t_k) - \varepsilon_k^y \cos\phi_i(t_k) + \varepsilon_k^x \sin\phi_i(t_k) + \varepsilon_k^\phi\,(X_i + \varepsilon_k^x - X(t_k))\cos\phi_i(t_k) \\ Z(t_k)\sin\phi_i(t_k)\cos\lambda_i(t_k) - Y(t_k)\sin\lambda_i(t_k) - \varepsilon_k^z \sin\phi_i(t_k)\cos\lambda_i(t_k) + \varepsilon_k^y \sin\lambda_i(t_k) + \varepsilon_k^\lambda\,(Y_i + \varepsilon_k^y - Y(t_k))\cos\lambda_i(t_k) \end{pmatrix} \tag{7}$$
Thereby, we obtain the system Equation (7) of linear measurement equations, though the noise variance depends on the non-observable coordinates. By using V.S. Pugachev's method [15], one can obtain the unbiased estimate and its variance with the aid of a prediction-correction filter [24]. Moreover, we do not need to assume a Gaussian distribution of the errors, which is not valid in bearing observations with optical-electronic cameras with discrete image sensors.
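Forming the pseudo-measurements is a one-line computation per landmark, since only the known landmark coordinates and the measured angles enter the left-hand side of Equation (6); a minimal sketch:

```python
import numpy as np

def pseudo_measurements(landmark, phi, lam):
    """Pseudo-measurement vector m_k of Equation (6) for one landmark
    (X_i, Y_i, Z_i) and measured bearing angles phi (azimuth) and
    lam (elevation). Only on-board-known quantities enter."""
    Xi, Yi, Zi = landmark
    m_phi = Yi * np.cos(phi) - Xi * np.sin(phi)
    m_lam = Zi * np.sin(phi) * np.cos(lam) - Yi * np.sin(lam)
    return np.array([m_phi, m_lam])
```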

4.2. Prediction-Correction Estimation

Assume that at the moment $t_k$ we have an unbiased estimate $\hat X(t_k)$, such that:
$$\mathrm{E}\big(\hat X(t_k)\big) = X(t_k) \tag{8}$$
with the following matrix of the mean-square errors:
$$\hat P(t_k) = \mathrm{E}\big[(\hat X(t_k) - X(t_k))(\hat X(t_k) - X(t_k))^T\big] = \begin{pmatrix} \hat P_{xx}(t_k) & \hat P_{xy}(t_k) & \cdots & \hat P_{xV_z}(t_k) \\ \hat P_{xy}(t_k) & \hat P_{yy}(t_k) & \cdots & \hat P_{yV_z}(t_k) \\ \vdots & \vdots & \ddots & \vdots \\ \hat P_{xV_z}(t_k) & \hat P_{yV_z}(t_k) & \cdots & \hat P_{V_zV_z}(t_k) \end{pmatrix} \tag{9}$$
Problem 1. 
Find the unbiased estimate $\hat X(t_{k+1})$ and the matrix $\hat P(t_{k+1})$ on the basis of the estimates at the moment $t_k$, the measurement $m_k$, the known position $(X_i, Y_i, Z_i)$ of the $i$-th observable object and the UAV motion model Equation (2). These estimates must satisfy Equation (8) and give the matrix Equation (9) for the moment $t_{k+1}$.

4.2.1. Prediction

The prediction is obtained by assuming that at the moment $t_{k+1}$ the values of $\phi(t_{k+1}), \lambda(t_{k+1})$ will be known:
$$\begin{aligned}
\tilde X(t_{k+1}) &= F \hat X(t_k) + B\, a(t_k) \\
\tilde m_{k+1} = \begin{pmatrix} \tilde m_{k+1}^\phi \\ \tilde m_{k+1}^\lambda \end{pmatrix} &= \begin{pmatrix} I(t_{k+1})\big(\tilde Y(t_{k+1})\cos\phi(t_{k+1}) - \tilde X(t_{k+1})\sin\phi(t_{k+1})\big) \\ I(t_{k+1})\big(\tilde Z(t_{k+1})\sin\phi(t_{k+1})\cos\lambda(t_{k+1}) - \tilde Y(t_{k+1})\sin\lambda(t_{k+1})\big) \end{pmatrix}
\end{aligned} \tag{10}$$
Assuming that the motion perturbations and the UAV position are uncorrelated, we obtain:
$$\tilde P(t_{k+1}) = \begin{pmatrix} \tilde P_{xx}(t_{k+1}) & \tilde P_{xy}(t_{k+1}) & \cdots & \tilde P_{xV_z}(t_{k+1}) \\ \tilde P_{xy}(t_{k+1}) & \tilde P_{yy}(t_{k+1}) & \cdots & \tilde P_{yV_z}(t_{k+1}) \\ \vdots & \vdots & \ddots & \vdots \\ \tilde P_{xV_z}(t_{k+1}) & \tilde P_{yV_z}(t_{k+1}) & \cdots & \tilde P_{V_zV_z}(t_{k+1}) \end{pmatrix} \tag{11}$$
where the elements of this matrix are:
$$\begin{aligned}
\tilde P_{xx}(t_{k+1}) &= \hat P_{xx}(t_k) + 2\hat P_{xV_x}(t_k)\Delta t + \hat P_{V_xV_x}(t_k)\Delta t^2 \\
\tilde P_{yy}(t_{k+1}) &= \hat P_{yy}(t_k) + 2\hat P_{yV_y}(t_k)\Delta t + \hat P_{V_yV_y}(t_k)\Delta t^2 \\
\tilde P_{zz}(t_{k+1}) &= \hat P_{zz}(t_k) + 2\hat P_{zV_z}(t_k)\Delta t + \hat P_{V_zV_z}(t_k)\Delta t^2 \\
\tilde P_{xy}(t_{k+1}) &= \hat P_{xy}(t_k) + \hat P_{xV_y}(t_k)\Delta t + \hat P_{yV_x}(t_k)\Delta t + \hat P_{V_xV_y}(t_k)\Delta t^2 \\
\tilde P_{xz}(t_{k+1}) &= \hat P_{xz}(t_k) + \hat P_{xV_z}(t_k)\Delta t + \hat P_{zV_x}(t_k)\Delta t + \hat P_{V_xV_z}(t_k)\Delta t^2 \\
\tilde P_{yz}(t_{k+1}) &= \hat P_{yz}(t_k) + \hat P_{yV_z}(t_k)\Delta t + \hat P_{zV_y}(t_k)\Delta t + \hat P_{V_yV_z}(t_k)\Delta t^2 \\
\tilde P_{xV_x}(t_{k+1}) &= \hat P_{xV_x}(t_k) + \hat P_{V_xV_x}(t_k)\Delta t \\
\tilde P_{xV_y}(t_{k+1}) &= \hat P_{xV_y}(t_k) + \hat P_{V_xV_y}(t_k)\Delta t \\
\tilde P_{xV_z}(t_{k+1}) &= \hat P_{xV_z}(t_k) + \hat P_{V_xV_z}(t_k)\Delta t \\
\tilde P_{yV_x}(t_{k+1}) &= \hat P_{yV_x}(t_k) + \hat P_{V_xV_y}(t_k)\Delta t \\
\tilde P_{yV_y}(t_{k+1}) &= \hat P_{yV_y}(t_k) + \hat P_{V_yV_y}(t_k)\Delta t \\
\tilde P_{yV_z}(t_{k+1}) &= \hat P_{yV_z}(t_k) + \hat P_{V_yV_z}(t_k)\Delta t \\
\tilde P_{zV_x}(t_{k+1}) &= \hat P_{zV_x}(t_k) + \hat P_{V_xV_z}(t_k)\Delta t \\
\tilde P_{zV_y}(t_{k+1}) &= \hat P_{zV_y}(t_k) + \hat P_{V_yV_z}(t_k)\Delta t \\
\tilde P_{zV_z}(t_{k+1}) &= \hat P_{zV_z}(t_k) + \hat P_{V_zV_z}(t_k)\Delta t \\
\tilde P_{V_xV_x}(t_{k+1}) &= \hat P_{V_xV_x}(t_k) + \sigma_X^2 \\
\tilde P_{V_yV_y}(t_{k+1}) &= \hat P_{V_yV_y}(t_k) + \sigma_Y^2 \\
\tilde P_{V_zV_z}(t_{k+1}) &= \hat P_{V_zV_z}(t_k) + \sigma_Z^2 \\
\tilde P_{V_xV_y}(t_{k+1}) &= \hat P_{V_xV_y}(t_k) \\
\tilde P_{V_xV_z}(t_{k+1}) &= \hat P_{V_xV_z}(t_k) \\
\tilde P_{V_yV_z}(t_{k+1}) &= \hat P_{V_yV_z}(t_k)
\end{aligned}$$
Note that $\sigma_X^2$ (the variance of the motion perturbation $W_x$) is not the same as $\sigma_x^2$ (the measurement error variance), and similarly for the other indices.
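These element-by-element relations are simply the matrix identity $\tilde P(t_{k+1}) = F \hat P(t_k) F^T + Q$ written out, where $Q = \mathrm{diag}(0, 0, 0, \sigma_X^2, \sigma_Y^2, \sigma_Z^2)$; the short numerical check below (with assumed noise values) confirms, for example, the first formula.

```python
import numpy as np

dt = 0.1
F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt
Q = np.diag([0.0, 0.0, 0.0, 1.5 ** 2, 1.5 ** 2, 0.8 ** 2])  # assumed sigmas

M = np.random.default_rng(1).normal(size=(6, 6))
P_hat = M @ M.T                          # an arbitrary valid covariance
P_pred = F @ P_hat @ F.T + Q             # matrix form of the prediction

# First element-wise formula: P_xx + 2 P_xVx dt + P_VxVx dt^2
assert np.isclose(P_pred[0, 0],
                  P_hat[0, 0] + 2.0 * P_hat[0, 3] * dt + P_hat[3, 3] * dt ** 2)
```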
Then, the values $\tilde P_{xm}(t_{k+1})$, $\tilde P_{ym}(t_{k+1})$, $\tilde P_{zm}(t_{k+1})$, $\tilde P_{V_xm}(t_{k+1})$, $\tilde P_{V_ym}(t_{k+1})$, $\tilde P_{V_zm}(t_{k+1})$, $\tilde P_{mm}(t_{k+1})$ should be calculated using the following relations:
$$\begin{aligned}
m_{k+1}^\phi - \tilde m_{k+1}^\phi &= (Y(t_{k+1}) - \tilde Y(t_{k+1}))\cos\phi_i(t_{k+1}) - (X(t_{k+1}) - \tilde X(t_{k+1}))\sin\phi_i(t_{k+1}) \\
&\quad - \varepsilon_{k+1}^y\cos\phi_i(t_{k+1}) + \varepsilon_{k+1}^x\sin\phi_i(t_{k+1}) + \varepsilon_{k+1}^\phi\,(X_i + \varepsilon_{k+1}^x - X(t_{k+1}))\cos\phi_i(t_{k+1}) \\
m_{k+1}^\lambda - \tilde m_{k+1}^\lambda &= (Z(t_{k+1}) - \tilde Z(t_{k+1}))\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - (Y(t_{k+1}) - \tilde Y(t_{k+1}))\sin\lambda_i(t_{k+1}) \\
&\quad - \varepsilon_{k+1}^z\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) + \varepsilon_{k+1}^y\sin\lambda_i(t_{k+1}) + \varepsilon_{k+1}^\lambda\,(Y_i + \varepsilon_{k+1}^y - Y(t_{k+1}))\cos\lambda_i(t_{k+1})
\end{aligned}$$
and the identities:
$$\begin{aligned}
X_i - X(t_{k+1}) &= X_i - \tilde X(t_{k+1}) - (X(t_{k+1}) - \tilde X(t_{k+1})) \\
Y_i - Y(t_{k+1}) &= Y_i - \tilde Y(t_{k+1}) - (Y(t_{k+1}) - \tilde Y(t_{k+1})) \\
Z_i - Z(t_{k+1}) &= Z_i - \tilde Z(t_{k+1}) - (Z(t_{k+1}) - \tilde Z(t_{k+1}))
\end{aligned}$$
where we use the fact that the position of the $i$-th object is known, together with the uncorrelatedness of $\varepsilon_{k+1}^x, \varepsilon_{k+1}^y, \varepsilon_{k+1}^z, \varepsilon_{k+1}^\phi, \varepsilon_{k+1}^\lambda$ and the differences $(X(t_{k+1}) - \tilde X(t_{k+1}))$, $(Y(t_{k+1}) - \tilde Y(t_{k+1}))$ and $(Z(t_{k+1}) - \tilde Z(t_{k+1}))$.
Finally, we get:
$$\tilde P_{xm}(t_{k+1})^T = \mathrm{E}\left[(X(t_{k+1}) - \tilde X(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{xy}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xx}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{xz}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{xy}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix},$$
$$\tilde P_{ym}(t_{k+1})^T = \mathrm{E}\left[(Y(t_{k+1}) - \tilde Y(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{yy}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xy}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{yz}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yy}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix},$$
$$\tilde P_{zm}(t_{k+1})^T = \mathrm{E}\left[(Z(t_{k+1}) - \tilde Z(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{yz}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xz}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{zz}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yz}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix},$$
$$\tilde P_{V_xm}(t_{k+1})^T = \mathrm{E}\left[(V_x(t_{k+1}) - \tilde V_x(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{yV_x}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xV_x}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{zV_x}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yV_x}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix},$$
$$\tilde P_{V_ym}(t_{k+1})^T = \mathrm{E}\left[(V_y(t_{k+1}) - \tilde V_y(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{yV_y}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xV_y}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{zV_y}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yV_y}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix},$$
$$\tilde P_{V_zm}(t_{k+1})^T = \mathrm{E}\left[(V_z(t_{k+1}) - \tilde V_z(t_{k+1}))\begin{pmatrix} m_{k+1}^\phi - \tilde m_{k+1}^\phi \\ m_{k+1}^\lambda - \tilde m_{k+1}^\lambda \end{pmatrix}\right] = \begin{pmatrix} \tilde P_{yV_z}(t_{k+1})\cos\phi_i(t_{k+1}) - \tilde P_{xV_z}(t_{k+1})\sin\phi_i(t_{k+1}) \\ \tilde P_{zV_z}(t_{k+1})\sin\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yV_z}(t_{k+1})\sin\lambda_i(t_{k+1}) \end{pmatrix}.$$
In a similar way, we calculate:
$$\big(\tilde P_{mm}(t_{k+1})\big)^{-1} = \begin{pmatrix} \mathrm{E}\big[(m_{k+1}^\phi - \tilde m_{k+1}^\phi)^2\big] & \mathrm{E}\big[(m_{k+1}^\phi - \tilde m_{k+1}^\phi)(m_{k+1}^\lambda - \tilde m_{k+1}^\lambda)\big] \\ \mathrm{E}\big[(m_{k+1}^\phi - \tilde m_{k+1}^\phi)(m_{k+1}^\lambda - \tilde m_{k+1}^\lambda)\big] & \mathrm{E}\big[(m_{k+1}^\lambda - \tilde m_{k+1}^\lambda)^2\big] \end{pmatrix}^{-1}$$
Therefore:
$$\begin{aligned}
\mathrm{E}\big[(m_{k+1}^\phi - \tilde m_{k+1}^\phi)^2\big] &= \tilde P_{yy}(t_{k+1})\cos^2\phi_i(t_{k+1}) - \tilde P_{xy}(t_{k+1})\sin 2\phi_i(t_{k+1}) + \tilde P_{xx}(t_{k+1})\sin^2\phi_i(t_{k+1}) \\
&\quad + \sigma_y^2\cos^2\phi_i(t_{k+1}) + \sigma_x^2\sin^2\phi_i(t_{k+1}) + \sigma_\phi^2\big((X_i - \hat X(t_k))^2 + \tilde P_{xx}(t_{k+1}) + \sigma_x^2\big)\cos^2\phi_i(t_{k+1}) \\
\mathrm{E}\big[(m_{k+1}^\phi - \tilde m_{k+1}^\phi)(m_{k+1}^\lambda - \tilde m_{k+1}^\lambda)\big] &= \tilde P_{yz}(t_{k+1})\sin\phi_i(t_{k+1})\cos\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) - \tilde P_{yy}(t_{k+1})\cos\phi_i(t_{k+1})\sin\lambda_i(t_{k+1}) \\
&\quad - \tilde P_{xz}(t_{k+1})\sin^2\phi_i(t_{k+1})\cos\lambda_i(t_{k+1}) + \tilde P_{xy}(t_{k+1})\sin\phi_i(t_{k+1})\sin\lambda_i(t_{k+1}) \\
\mathrm{E}\big[(m_{k+1}^\lambda - \tilde m_{k+1}^\lambda)^2\big] &= \tilde P_{zz}(t_{k+1})\sin^2\phi_i(t_{k+1})\cos^2\lambda_i(t_{k+1}) - \tilde P_{yz}(t_{k+1})\sin\phi_i(t_{k+1})\sin 2\lambda_i(t_{k+1}) + \tilde P_{yy}(t_{k+1})\sin^2\lambda_i(t_{k+1}) \\
&\quad + \sigma_z^2\sin^2\phi_i(t_{k+1})\cos^2\lambda_i(t_{k+1}) + \sigma_y^2\sin^2\lambda_i(t_{k+1}) + \sigma_\lambda^2\big((Y_i - \hat Y(t_k))^2 + \tilde P_{yy}(t_{k+1}) + \sigma_y^2\big)\cos^2\lambda_i(t_{k+1})
\end{aligned}$$

4.2.2. Correction

After getting the measurements at the moment $t_{k+1}$, one can obtain the estimate of the UAV position at this moment. Therefore, the solution of Problem 1 has the form:
$$\hat X(t_{k+1}) = \tilde X(t_{k+1}) + \tilde P(t_{k+1})\big(\tilde P_{mm}(t_{k+1})\big)^{-1}(m_{k+1} - \tilde m_{k+1})$$
and the matrix of the mean square errors is equal to:
$$\hat P(t_{k+1}) = \tilde P(t_{k+1}) - \tilde P(t_{k+1})\big(\tilde P_{mm}(t_{k+1})\big)^{-1}\tilde P(t_{k+1})^T$$
where:
$$\tilde P(t_{k+1}) = \big(\tilde P_{xm}(t_{k+1}), \tilde P_{ym}(t_{k+1}), \ldots, \tilde P_{V_zm}(t_{k+1})\big)^T$$
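In matrix form, this correction step is a standard gain computation; the sketch below uses the name `P_Xm` for the stacked cross-covariances just defined, to avoid overloading the symbol $\tilde P$.

```python
import numpy as np

def pkf_correction(X_pred, P_pred, P_Xm, P_mm, m, m_pred):
    """Correction step of the PKF (sketch). X_pred: (6,) predicted state;
    P_pred: (6, 6) predicted covariance; P_Xm: (6, 2) state/measurement
    cross-covariance; P_mm: (2, 2) pseudo-measurement covariance;
    m, m_pred: (2,) actual and predicted pseudo-measurements."""
    gain = P_Xm @ np.linalg.inv(P_mm)          # Kalman-type gain
    X_corr = X_pred + gain @ (m - m_pred)      # unbiased state update
    P_corr = P_pred - gain @ P_Xm.T            # mean-square error update
    return X_corr, P_corr
```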

5. Robust Filtering on the Basis of the UAV Motion Model

The RANSAC method calculates the rotation matrix and the coordinates of the camera $\{A^*, b^*\}$ in the Earth coordinate system with some minor error. However, RANSAC can also give a completely wrong answer, called an outlier. This can happen, for example, if the frames $I_c$ and $I_m$ do not depict common objects. Below we provide a method that decides whether $\{A^*, b^*\}$ is an outlier or not. This problem has been considered in relation to the exclusion of outliers in RANSAC-type procedures [25,26]. Here, we use a modification of the robust RANSAC [23] based on the PKF for bearing-only observations [17].
After the prediction step of the Kalman filter, one can estimate the UAV (camera) position and the matrix of the mean square errors:
$$\tilde X = \tilde X(t_{k+1}), \qquad \tilde P = \tilde P(t_{k+1}).$$
As in [26], one can suppose that the corresponding probability density is Gaussian. The reason is that the PKF gives the best linear estimate obtained up to the current time $t$. This estimate is a sum of uncorrelated random variables with almost the same variances, at least on short intervals preceding the current time, which makes it possible to approximate the probability density by a Gaussian one. A further extension of the robust RANSAC technique is based on the prior distribution of the UAV attitude. The approach has been developed in [23,27] on the basis of the extended Kalman filter (EKF). However, the estimate given by the EKF has an unknown bias and, of course, does not give a posterior Gaussian distribution. Therefore, the PKF, which gives an unbiased estimate, looks preferable under the hypothesis of the posterior Gaussian distribution. At the $(k+1)$-th step, the posterior distribution of $r'_i$ corresponding to an inlier is assumed to be Gaussian, that is, according to Equation (1):
$$\mathcal{N}\big(A_k(r_i - \tilde X(t_{k+1})),\ A_k(\tilde P(t_{k+1}) + P_{rr})A_k^T\big)$$
where $P_{rr}$ is the covariance matrix of the landmark localization. Further, in the estimation algorithm, the pair $\{A_k, \tilde X(t_{k+1})\}$ is considered an outlier at the 95% confidence level if:
$$\big\|r'_i - A_k(r_i - \tilde X(t_{k+1}))\big\| \geq 2\sqrt{\mathrm{Sp}\big[A_k(\tilde P(t_{k+1}) + P_{rr})A_k^T\big]}$$
Otherwise, the correction step is based on $\{A_k, \tilde X(t_{k+1})\}$.
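A sketch of this gating test (reading $\mathrm{Sp}$ as the trace and using the 2-sigma threshold of the inequality above; the argument shapes are assumptions of this sketch):

```python
import numpy as np

def is_outlier(r_cam, r_earth, A_k, X_pred, P_pred_pos, P_rr):
    """Gating test for robust RANSAC. r_cam: landmark in the camera frame;
    r_earth: the same landmark in the Earth frame; A_k: last rotation
    estimate; X_pred: predicted camera position; P_pred_pos: 3x3 position
    block of the predicted PKF covariance; P_rr: landmark covariance."""
    residual = r_cam - A_k @ (r_earth - X_pred)
    S = A_k @ (P_pred_pos + P_rr) @ A_k.T       # predicted residual covariance
    return np.linalg.norm(residual) >= 2.0 * np.sqrt(np.trace(S))
```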
Of course, all such nonlinear conjectures need confirmation on the basis of statistical modeling, which is one of the results of this article. One can observe the performance of the robust filtering in the following figures: Figure 4 shows the correspondence between feature points obtained without the UAV motion model, while Figure 5 shows the correspondence established on the basis of the UAV motion model. The number of outliers is reduced substantially.
Figure 4. Correspondence between (a) (image $I_c$ of the on-board camera of the UAV) and (b) (template image loaded in the UAV memory in advance) feature points found without the UAV motion model. One can observe chaotic correspondence, which gives a huge number of outliers.
Figure 5. Correspondence between (a) (image $I_c$ of the on-board camera of the UAV) and (b) (template image loaded in the UAV memory in advance) feature points found with the aid of RANSAC based on the UAV motion model. The number of outliers is reduced substantially.

6. Control of the UAV

Control of a UAV that ensures its motion along the reference trajectory may be determined on the basis of the standard deterministic linear-quadratic approach [28]. However, the problem of control on the basis of bearing-only observations differs from the standard one. It should be underlined that this problem is non-linear and cannot be solved in the standard way. The problem of optimal control for the system Equation (2) is a stochastic one with incomplete information and does not have an explicit solution. However, for practical reasons, one can simplify it by considering the locally optimal control. Here, we discuss the following problem:
Problem 2. 
Find the locally optimal controls $a_x(t_k)$, $a_y(t_k)$ and $a_z(t_k)$ aimed at keeping the UAV motion along the reference trajectory.
Assume that we have some reference trajectory $X^{nom}(t_k)$.
Therefore, at the moment $t_{k+1}$, we have to minimize the following expressions:
$$\begin{aligned}
E_1 &= \mathrm{E}\big\{(X(t_{k+1}) - X^{nom}(t_{k+1}))^2 + ((V_x(t_{k+1}) - V_x^{nom}(t_{k+1}))\Delta t)^2\big\} \to \min_{a_x(t_k)} \\
E_2 &= \mathrm{E}\big\{(Y(t_{k+1}) - Y^{nom}(t_{k+1}))^2 + ((V_y(t_{k+1}) - V_y^{nom}(t_{k+1}))\Delta t)^2\big\} \to \min_{a_y(t_k)} \\
E_3 &= \mathrm{E}\big\{(Z(t_{k+1}) - Z^{nom}(t_{k+1}))^2 + ((V_z(t_{k+1}) - V_z^{nom}(t_{k+1}))\Delta t)^2\big\} \to \min_{a_z(t_k)}
\end{aligned}$$
Let us consider the components of the $E_1$ expression:
$$\begin{aligned}
X(t_{k+1}) - X^{nom}(t_{k+1}) &= X(t_k) - \hat X(t_k) - (X^{nom}(t_k) - \hat X(t_k)) + (V_x(t_k) - \hat V_x(t_k))\Delta t \\
&\quad - (V_x^{nom}(t_k) - \hat V_x(t_k))\Delta t + (a_x(t_k) - a_x^{nom}(t_k))\frac{\Delta t^2}{2} \\
V_x(t_{k+1}) - V_x^{nom}(t_{k+1}) &= V_x(t_k) - \hat V_x(t_k) - (V_x^{nom}(t_k) - \hat V_x(t_k)) + (a_x(t_k) - a_x^{nom}(t_k))\Delta t + W_x(t_k)
\end{aligned}$$
Then, we square these components and take the derivative of the sum with respect to $a_x(t_k)$, using the fact that some components are uncorrelated:
$$\begin{aligned}
\mathrm{E}\big\{(X(t_k) - \hat X(t_k))(X^{nom}(t_k) - \hat X(t_k))\big\} &= 0 \\
\mathrm{E}\big\{(V_x(t_k) - \hat V_x(t_k))(V_x^{nom}(t_k) - \hat V_x(t_k))\big\} &= 0
\end{aligned}$$
Finally, we get:
$$a_x(t_k) = a_x^{nom}(t_k) + \frac{2(X^{nom}(t_k) - \hat X(t_k))}{5\Delta t^2} + \frac{6(V_x^{nom}(t_k) - \hat V_x(t_k))}{5\Delta t}$$
We take into account that the acceleration of the UAV is limited to $[a_x^{min}, a_x^{max}]$, so the control acceleration has the form:
$$a_x^c(t_k) = \begin{cases} a_x^{min} & \text{if } a_x(t_k) < a_x^{min}, \\ a_x(t_k) & \text{if } a_x^{min} \leq a_x(t_k) \leq a_x^{max}, \\ a_x^{max} & \text{if } a_x(t_k) > a_x^{max}. \end{cases}$$
Similarly, we obtain the expressions for $a_y^c(t_k)$ and $a_z^c(t_k)$. Thus, we get the following solution of Problem 2 [19]:
$$\hat a(t_k) = \hat a^{nom}(t_k) + \frac{2}{5\Delta t^2}\big(\hat X^{nom}(t_k) - \hat X(t_k)\big) + \frac{6}{5\Delta t}\big(\hat V^{nom}(t_k) - \hat V(t_k)\big)$$
where:
$$\begin{aligned}
\hat a(t_k) &= (a_x(t_k), a_y(t_k), a_z(t_k))^T \\
\hat X(t_k) &= (\hat X(t_k), \hat Y(t_k), \hat Z(t_k))^T \\
\hat V(t_k) &= (\hat V_x(t_k), \hat V_y(t_k), \hat V_z(t_k))^T
\end{aligned}$$
and similarly for the nominal trajectory components.
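The resulting control law is a clipped proportional-derivative correction around the nominal acceleration; a sketch (the per-axis limits are assumed parameters):

```python
import numpy as np

def local_control(X_hat, V_hat, X_nom, V_nom, a_nom, dt, a_min, a_max):
    """Locally optimal tracking control of Problem 2, with per-axis
    saturation. All vector arguments are 3-vectors."""
    a = (a_nom
         + 2.0 / (5.0 * dt ** 2) * (X_nom - X_hat)
         + 6.0 / (5.0 * dt) * (V_nom - V_hat))
    return np.clip(a, a_min, a_max)             # acceleration limits
```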

7. Experimental Results

In this section, we give the results of the algorithm's modeling. The UAV virtually flies over the landscape shown in Figure 1. This image was obtained from Google Maps, and for verification of the algorithm, an image of the same region obtained from a different satellite (Bing) was used. Therefore, these two images model the preliminarily downloaded template and the image obtained by the on-board camera. The result of the virtual flight experiment is shown in Figure 6.
Figure 6. Blue dots correspond to the reference trajectory and black dots to the real path. Blue squares show the localization of the terrain areas corresponding to the template images, and red squares show the moments where the estimates of the UAV positions have been obtained and assumed to be reliable according to the robust RANSAC algorithm described in Section 5.

8. Results and Discussion

In the modeling of the control algorithm, the UAV moves with a velocity of approximately 50 m/s, while the change of altitude is assumed to be rather substantial. The control algorithm takes into account the constraints imposed on the linear acceleration and angular velocities. The software developed for the modeling may be used in a real on-board navigation system as well. Moreover, the filtering algorithm based on unbiased estimation may be easily incorporated with the INS, since it also gives the unbiased square error estimates, which opens the way to correct data fusion. The quality of tracking of the $x, y, z$ components is shown below in Figure 7, Figure 8 and Figure 9, respectively. In all of these figures, blue dots correspond to the reference trajectory, black dots to the real path and red squares to the moments where the estimates of the UAV positions have been obtained and assumed to be reliable according to the robust RANSAC algorithm. One can observe that in the "measurement" areas, the algorithm estimates the coordinates with high accuracy, and the control provides the tracking with high accuracy as well.
Figure 7. Tracking of the x-coordinate.
Figure 8. Tracking of the y-coordinate.
Figure 9. Tracking of the z-coordinate.
Figure 10. Averaged standard deviation of the position estimation error. The limit value in the observation areas is close to 24.5 m. One can see that in the areas with no observations, the SD monotonically increases.
The high accuracy is in accordance with the standard deviation (SD) theoretically calculated from the PKF. The value of the averaged standard deviation, which is $\sqrt{P_{xx}(t) + P_{yy}(t) + P_{zz}(t)}$, is shown in Figure 10 below.

9. Conclusions

The main result of this work is a new algorithm for UAV control based on the observation of landmarks in a 3D environment. The new RANSAC based on the UAV motion model permits one to exclude a huge number of outliers and thereby provide a reliable set of data for the estimation of the UAV position on the basis of the novel unbiased PKF algorithm. This work is just the beginning of the implementation of this approach in the navigation of UAVs during long-term autonomous missions.

Acknowledgments

This research was supported by Russian Science Foundation Grant 14-50-00150.

Author Contributions

The work presented here was carried out in collaboration among all authors. All authors have contributed to, seen and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Osborne, R.W.; Bar-Shalom, Y. Statistical Efficiency of Composite Position Measurements from Passive Sensors. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2799–2806. [Google Scholar] [CrossRef]
  2. Cesetti, A.; Frontoni, E.; Mancini, A.; Zingaretti, P.; Longhi, S. A Vision-Based Guidance System for UAV Navigation and Safe Landing Using Natural Landmarks. J. Intell. Robot. Syst. 2010, 57, 233–257. [Google Scholar] [CrossRef]
  3. Caballero, F.; Merino, L.; Ferruz, J.; Ollero, A. Vision-Based Odometry and SLAM for Medium and High Altitude Flying UAVs. J. Intell. Robot. Syst. 2009, 54, 137–161. [Google Scholar] [CrossRef]
  4. Wang, C.L.; Wang, T.M.; Liang, J.H.; Zhang, Y.C.; Zhou, Y. Bearing-only Visual SLAM for Small Unmanned Aerial Vehicles in GPS-denied Environments. Int. J. Autom. Comput. 2013, 10, 387–396. [Google Scholar] [CrossRef]
  5. Konovalenko, I.; Kuznetsova, E. Experimental comparison of methods for estimation of the observed velocity of the vehicle in video stream. In Proceedings of the SPIE 9445, Seventh International Conference on Machine Vision (ICMV 2014), Milan, Italy, 19–21 November 2014; Volume 9445.
  6. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
  7. Konovalenko, I.; Miller, A.; Miller, B.; Nikolaev, D. UAV navigation on the basis of the feature points detection on underlying surface. In Proceedings of the 29th European Conference on Modeling and Simulation (ECMS 2015), Albena (Varna), Bulgaria, 26–29 May 2015; pp. 499–505.
  8. Guan, X.; Bai, H. A GPU accelerated real-time self-contained visual navigation system for UAVs. In Proceedings of the IEEE International Conference on Information and Automation, Shenyang, China, 6–8 June 2012; pp. 578–581.
  9. Belfadel, D.; Osborne, R.W.; Bar-Shalom, Y. Bias Estimation for Optical Sensor Measurements with Targets of Opportunity. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July, 2013; pp. 1805–1812.
  10. Bishop, A.N.; Fidan, B.; Anderson, B.D.O.; Dogancay, K.; Pathirana, P.N. Optimality analysis of sensor-target localization geometries. Automatica 2010, 46, 479–492. [Google Scholar] [CrossRef]
  11. Jauffret, C.; Pillon, D.; Pignol, A.-C. Leg-by-Leg Bearings-only Target Motion Analysis without Observer Maneuver. J. Adv. Inf. Fusion 2011, 6, 24–38. [Google Scholar]
  12. Lin, X.; Kirubarajan, T.; Bar-Shalom, Y.; Maskell, S. Comparison of EKF, Pseudomeasurement and Particle Filters for a Bearing-only Target Tracking Problem. Proc. SPIE Int. Soc. Opt. Eng. 2002, 4728, 240–250. [Google Scholar]
  13. Miller, B.M.; Stepanyan, K.V.; Miller, A.B.; Andreev, K.V.; Khoroshenkikh, S.N. Optimal filter selection for UAV trajectory control problems. In Proceedings of the 37th Conference on Information Technology and Systems, Kaliningrad, Russia, 1–6 September 2013; pp. 327–333.
  14. Aidala, V.J.; Nardone, S.C. Biased Estimation Properties of the Pseudolinear Tracking Filter. IEEE Trans. Aerosp. Electron. Syst. 1982, 18, 432–441. [Google Scholar] [CrossRef]
  15. Pugachev, V.S.; Sinitsyn, I.N. Stochastic Differential Systems—Analysis and Filtering; Wiley: Chichester, UK, 1987. [Google Scholar]
  16. Amelin, K.S.; Miller, A.B. An Algorithm for Refinement of the Position of a Light UAV on the Basis of Kalman Filtering of Bearing Measurements. J. Commun. Technol. Electron. 2014, 59, 622–631. [Google Scholar] [CrossRef]
  17. Miller, A.B. Development of the motion control on the basis of Kalman filtering of bearing-only measurements. Autom. Remote Control 2015, 76, 1018–1035. [Google Scholar] [CrossRef]
  18. Karpenko, S.; Konovalenko, I.; Miller, A.; Miller, B.; Nikolaev, D. Stochastic control of UAV on the basis of 3D natural landmarks. In Proceedings of the International Conference on Machine Vision (ICMV), Barcelona, Spain, 19–20 November 2015.
  19. Miller, A.B.; Miller, B.M. Stochastic control of light UAV at landing with the aid of bearing-only observations. In Proceedings of the International Conference on Machine Vision (ICMV), Barcelona, Spain, 19–20 November 2015.
  20. GitHub. Available online: https://github.com/Itseez/opencv/blob/master/samples/python2/asift.py (accessed on 23 September 2014).
  21. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  22. Zuliani, M.; Kenney, C.S.; Manjunath, B.S. The MultiRANSAC Algorithm and Its Application to Detect Planar Homographies. In Proceedings of the 12th IEEE International Conference on Image Processing (ICIP 2005), Genova, Italy, 11–14 September 2005; Volume 3, pp. III-153–156.
  23. Civera, J.; Grasa, O.G.; Davison, A.J.; Montiel, J.M.M. 1-Point RANSAC for Extended Kalman Filtering: Application to Real-Time Structure from Motion and Visual Odometry. J. Field Robot. 2010, 27, 609–631. [Google Scholar] [CrossRef]
  24. Miller, B.M.; Pankov, A.R. Theory of Random Processes (in Russian); Phizmatlit: Moscow, Russia, 2007. [Google Scholar]
  25. Torr, P.H.S. Geometric motion segmentation and model selection. Phil. Trans. R. Soc. A 1998, 356, 1321–1340. [Google Scholar] [CrossRef]
  26. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef]
  27. Civera, J.; Grasa, O.G.; Davison, A.J.; Montiel, J.M.M. 1-Point RANSAC for EKF-Based Structure from Motion. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 3498–3504.
  28. Sujit, P.B.; Saripalli, S.; Borges Sousa, J. Unmanned aerial vehicle path following: A survey and analysis of algorithms for fixed-wing unmanned aerial vehicles. IEEE Control Syst. 2014, 34, 42–59. [Google Scholar] [CrossRef]
