1. Introduction
Array antennas are widely used in radar, remote sensing, and mobile and satellite communications. However, harsh working conditions, coupled with ever-growing system complexity, inevitably increase the probability of defective array elements. Failed sensors degrade the radiation performance, in particular deteriorating the beam-forming characteristics and lowering the accuracy of direction-of-arrival (DOA) estimation [1]. Moreover, as the functionality of antennas and other equipment degrades with age, the cost of life-cycle maintenance can exceed the original capital investment [2]. Therefore, before a damaged array antenna can be repaired, the number and positions of the broken elements must be determined via array diagnosis methods, a task of significant theoretical and practical importance.
Approaches for finding impaired elements have attracted considerable attention from scholars and engineers worldwide, in both academia and industry. Specifically, the back-propagation method [3], the matrix method [4], neural networks [5], case-based reasoning [6], genetic algorithms [7], the source reconstruction method [8] and mutual coupling techniques [9] can be found in the open literature. Recently, Compressed Sensing/Sparse Recovery (CS/SR) based approaches have also been proposed in the framework of array diagnosis [10,11,12,13]. It is worth noting that accuracy and efficiency are, in general, the two major considerations in developing diagnosis approaches. Improving the probability of successful diagnosis with an acceptable loss of efficiency, using either far-field or near-field measurements, has therefore been a heated topic in the array diagnosis community. For instance, re-weighted $\ell_1$ norm minimization has been applied in both the near-field case [14] and the far-field case [15]. In [16] a fast diagnosis method for antenna arrays using a small number of far-field measurements is addressed with both simulated and experimental data. Departing from the classical CS-based paradigm, reliable diagnosis methods for both linear and planar arrays using Bayesian Compressive Sensing (BCS) have also been proposed in [17,18].
The aim of this paper is to increase the probability of successful diagnosis with a smaller number of far-field measurements using a hybrid method of random perturbation and non-convex optimization, thereby overcoming the issue of local minima associated with the intrinsically non-convex nature of $\ell_p$ ($0<p<1$) norm minimization. The better performance of the $\ell_p$ ($0<p<1$) norm compared to the $\ell_1$ norm is not new in the open literature [19,20,21,22]. However, applying non-convex norms to array diagnosis degrades the diagnosis performance unless the problem of solutions becoming trapped in local minima is properly solved. In this article, extensive numerical experiments are carried out, and the results confirm that the proposed method overcomes this problem effectively and promotes sparser solutions than standard $\ell_1$ norm minimization, at an acceptable increase in computational cost.
2. Description of the Method
Let us consider a Uniform Linear Array (ULA) of $N$ isotropic elements radiating in free space, with inter-element distance $d = \lambda/2$, $\lambda$ being the free-space wavelength, see Figure 1. The goal is to identify the presence of faulty elements using a small number of far-field measurements. Following the approach outlined in [10], we suppose that the excitations of the failure-free array are available, as well as the corresponding far-field data in $M$ measurement points. The failure-free array (i.e., the gold array) will be denoted as the reference array.
Let $\mathbf{f}^r$ be the vector collecting the far-field data of the failure-free array (wherein the superscript $r$ stands for reference), and $\mathbf{w}^r$ the vector of its excitations. The field radiated by the Antenna Under Test (AUT) with faulty elements is collected at the same spatial positions in the vector $\mathbf{f}$, while the vector of its excitations will be denoted as $\mathbf{w}$.
Now, let us consider the linear system
$\Delta\mathbf{f} = \mathbf{A}\,\Delta\mathbf{w}$, (1)
wherein $\Delta\mathbf{f} = \mathbf{f}^r - \mathbf{f}$ and $\Delta\mathbf{w} = \mathbf{w}^r - \mathbf{w}$ are the innovation vectors, and $\mathbf{A} \in \mathbb{C}^{M \times N}$ is the radiation matrix relating the excitations to the far-field data.
Since the number of faulty elements $S$ is much smaller than the total number of array elements $N$ (as usually happens), we obtain an equivalent problem involving a highly sparse array, in which only the faulty elements of the original array radiate. In other words, locating the failed array elements is transformed into accurately reconstructing a sparse vector with high probability. Once $\Delta\mathbf{w}$ is recovered, the positions and number of its non-zero entries give the locations and number of the failed elements, respectively.
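To fix ideas, the following minimal sketch (Python/NumPy; sizes, half-wavelength spacing and observation angles are illustrative assumptions, and all variable names are ours rather than the paper's) builds the ULA radiation matrix and an $S$-sparse innovation vector:

```python
import numpy as np

N, M, S = 40, 20, 3                   # elements, measurements, failures (assumed)
d = 0.5                               # inter-element spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, M)   # far-field observation angles

# Radiation matrix: A[m, n] = exp(j * 2*pi * d * n * sin(theta_m))
n_idx = np.arange(N)
A = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n_idx))

w_ref = np.ones(N, dtype=complex)     # failure-free (reference) excitations
w_aut = w_ref.copy()
faulty = np.random.choice(N, S, replace=False)
w_aut[faulty] = 0.0                   # total failures, for simplicity

dw = w_ref - w_aut                    # innovation vector: S-sparse
df = A @ dw                           # innovation far-field data
```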
A more general scenario is one in which the far-field measurement data are contaminated by a complex white Gaussian noise $\mathbf{e}$ with zero mean and variance $\sigma^2$:
$\Delta\mathbf{f} = \mathbf{A}\,\Delta\mathbf{w} + \mathbf{e}$. (2)
This under-determined system is usually solved by the following optimization strategy:
$\min_{\Delta\mathbf{w}} \|\Delta\mathbf{w}\|_p \quad \text{subject to} \quad \|\mathbf{A}\,\Delta\mathbf{w} - \Delta\mathbf{f}\|_2 \le \epsilon$, (3)
wherein $\|\cdot\|_p$ is an objective function that promotes sparsity, the subscript $p$ stands for the $\ell_p$ norm, and $\epsilon$ is related to the uncertainty level affecting the data.
In the open literature the most widely adopted sparsity-promoting function is the $\ell_1$ norm of the vector $\Delta\mathbf{w}$ [23,24]. Under certain conditions it is equivalent to the so-called $\ell_0$ norm minimization [25]. These conditions involve the number of measurements, which must be large enough to assure an accurate reconstruction of the unknown vector.
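For reference, a minimal sketch of this $\ell_1$ baseline, assuming the CVXPY modeling package as a stand-in for the CVX toolbox used later in the paper:

```python
import cvxpy as cp

def l1_diagnosis(A, df, eps):
    """l1-norm baseline: problem (3) with p = 1, a convex program.
    Sketch only; assumes a complex-capable conic solver."""
    dw = cp.Variable(A.shape[1], complex=True)
    prob = cp.Problem(cp.Minimize(cp.norm(dw, 1)),
                      [cp.norm(A @ dw - df, 2) <= eps])
    prob.solve()
    return dw.value
```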
An interesting question regards the possibility of a further reduction of the number of measurements using a different norm as the sparsity-promoting function. In this paper, instead of the $\ell_1$ norm we focus our attention on a non-convex norm of the vector $\Delta\mathbf{w}$, defined as
$\|\Delta\mathbf{w}\|_p = \left( \sum_{n=1}^{N} |\Delta w_n|^p \right)^{1/p}, \quad 0 < p < 1$. (4)
In this case, the minimization problem (3) is not convex.
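In code, the quasi-norm of Equation (4) is a one-liner (used again in the sketches below):

```python
import numpy as np

def lp_norm(x, p):
    """The non-convex l_p quasi-norm of Equation (4), 0 < p < 1."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)
```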
However, non-convex minimization arises in many different branches of applied physics, and a large number of local minimizers have been proposed. All such algorithms explore the search space, picking up elements of the space according to elaborately designed rules. In our study we focus on the MCCA (Minimizing a Concave function via a Convex function Approximation) algorithm proposed in [26] as the local minimizer. The algorithm locally replaces the original concave objective function by a convex one. Given a point on the concave function, a quadratic convex function is built using the second-order Taylor series expansion around that point, with the sign of the second-order term reversed. The position of the minimum of this quadratic (convex) function is then used to build a new quadratic function. In [26] it has been shown that the sequence of minima of these quadratic functions converges to a fixed point that reduces the original concave objective. A pseudo-code of the algorithm is also reported in that paper.
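The exact MCCA iteration is given in [26]; as a hedged stand-in that illustrates the same principle of solving one convex problem per step, the following FOCUSS/IRLS-style local minimizer can be used (a named substitute, not the authors' algorithm; it addresses the noiseless equality-constrained version of problem (3)):

```python
import numpy as np

def irls_lp(A, b, p, x0, iters=50, tol=1e-8):
    """Local l_p minimizer sketch: at each step a convex, weighted
    least-squares surrogate of the concave objective is solved
    (FOCUSS-style update with weights |x|^(1 - p/2))."""
    x = x0.astype(complex)
    for _ in range(iters):
        w = np.abs(x) ** (1.0 - p / 2.0) + 1e-12  # surrogate weights
        Aw = A * w                                 # column scaling: A @ diag(w)
        x_new = w * (np.linalg.pinv(Aw) @ b)       # weighted LS solution
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(x):
            return x_new
        x = x_new
    return x
```

Like MCCA, this iteration converges only to a local minimum, which is precisely the limitation the hybrid strategy below is designed to mitigate.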
It is also worth noting that no single algorithm can efficiently solve every minimization problem, as stated by the famous no-free-lunch theorem [27]. For $\ell_p$ ($0<p<1$) norm minimization, the major drawback is that the solution can be trapped in local minima. The relevant consequence is that the use of a local minimizer does not assure a correct solution of the minimization problem, which adversely affects the diagnosis performance. Therefore, the minimization algorithm must be tailored to the problem.
In order to overcome this defect of MCCA, a hybrid random search-convex local minimizer method is proposed. The use of hybrid algorithms is quite common in minimization problems and consists of running a local minimizer from randomly selected starting points. In our specific problem, it is possible to restrict the search space to a low-dimensional subspace. In fact, given any solution of the problem $\mathbf{A}\,\Delta\mathbf{w} = \Delta\mathbf{f}$, the sparsest solution belongs to the null space of the matrix $\mathbf{A}$ shifted by that solution. Consequently, a possible strategy is to find one solution of the minimization problem and then perform a random search in the null space of $\mathbf{A}$. This suggests the following procedure (a code sketch is given after Algorithm 1): (1) find the least-squares solution of the problem $\mathbf{A}\,\Delta\mathbf{w} = \Delta\mathbf{f}$, given by the pseudo-inverse of $\mathbf{A}$; let $\Delta\mathbf{w}_0$ be this initial starting point, which is associated with $\ell_2$ norm minimization [28]. (2) use MCCA to generate an initial solution vector $\Delta\hat{\mathbf{w}}$. (3) introduce a random vector $\mathbf{z}$, belonging to the null space of $\mathbf{A}$, to perturb this initial solution, generating a new starting point $\Delta\mathbf{w}'_0 = \Delta\hat{\mathbf{w}} + \mathbf{z}$. (4) use MCCA to obtain an updated solution vector $\Delta\hat{\mathbf{w}}'$. (5) if the $\ell_p$ ($0<p<1$) norm of $\Delta\hat{\mathbf{w}}'$ is not larger than that of $\Delta\hat{\mathbf{w}}$, i.e., $\|\Delta\hat{\mathbf{w}}'\|_p \le \|\Delta\hat{\mathbf{w}}\|_p$, accept $\Delta\hat{\mathbf{w}}'$; otherwise keep $\Delta\hat{\mathbf{w}}$ as the new starting point. (6) repeat steps (1)-(5) until the stopping criterion $Q$ is reached, where $Q$ denotes the number of random starting points. The value of $Q$ should be a trade-off between diagnosis accuracy and computational burden.
In our approach the critical factor is the design of $\mathbf{z}$, which is formulated as $\mathbf{z} = \mathbf{B}\mathbf{u}$, where the columns of $\mathbf{B}$ belong to the null space of the matrix $\mathbf{A}$ and $\mathbf{u}$ is a random vector. One feasible way of generating $\mathbf{B}$ and $\mathbf{u}$ is as follows. Let $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$ be the singular system of the matrix $\mathbf{A}$, where $\mathbf{U}$ and $\mathbf{V}$ are the matrices whose columns are, respectively, the left and right singular vectors of $\mathbf{A}$, and $\boldsymbol{\Sigma}$ is a diagonal matrix whose diagonal elements $\sigma_i$ ($i = 1, \dots, r$) are the singular values of the matrix, $r$ being its rank. According to matrix theory [28], the last $N - r$ columns of $\mathbf{V}$ represent a basis for the null space of $\mathbf{A}$, and can therefore be used to constitute $\mathbf{B}$. In addition, the entries of $\mathbf{u}$ are zero-mean random variables drawn from a uniform distribution whose amplitude is scaled by $\beta m$, where $m$ is the maximum absolute entry of the current solution vector and $\beta$ is an empirically determined non-negative number.
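A sketch of this construction via the SVD (NumPy; drawing complex-valued entries for $\mathbf{u}$ is our assumption, as is the exact uniform range $[-\beta m, \beta m]$):

```python
import numpy as np

def nullspace_perturbation(A, x_current, beta, rng=None):
    """Build z = B @ u: B spans null(A) (last N - r right singular
    vectors), u is zero-mean uniform with amplitude tied to the
    current solution via the empirical factor beta."""
    rng = np.random.default_rng() if rng is None else rng
    _, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > s[0] * max(A.shape) * np.finfo(float).eps))  # numerical rank
    B = Vh[r:].conj().T                 # basis of null(A), shape (N, N - r)
    m = np.max(np.abs(x_current))       # max absolute entry of current solution
    u = (rng.uniform(-beta * m, beta * m, B.shape[1])
         + 1j * rng.uniform(-beta * m, beta * m, B.shape[1]))
    return B @ u
```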
The procedure of the proposed random search-convex local minimizer array diagnosis method is summarized in Algorithm 1.
Algorithm 1 The Hybrid Array Diagnosis Method of Random Search-Convex Local Minimizer
input: measurement matrix $\mathbf{A}$, far-field data $\Delta\mathbf{f}$
output: recovered sparse vector $\Delta\hat{\mathbf{w}}$
initialization: number of random starting points $Q$, counter $j = 0$, non-negative number $\beta$
for $j < Q$ do
1: set an initial starting point $\Delta\mathbf{w}_0$
2: generate an initial solution vector $\Delta\hat{\mathbf{w}}$ via MCCA using $\Delta\mathbf{w}_0$
3: perturb $\Delta\hat{\mathbf{w}}$ to get a new starting point $\Delta\mathbf{w}'_0 = \Delta\hat{\mathbf{w}} + \mathbf{z}$, where $\mathbf{z} = \mathbf{B}\mathbf{u}$
4: get an updated solution vector $\Delta\hat{\mathbf{w}}'$ via MCCA using the new starting point $\Delta\mathbf{w}'_0$
5: if $\|\Delta\hat{\mathbf{w}}'\|_p \le \|\Delta\hat{\mathbf{w}}\|_p$, accept $\Delta\hat{\mathbf{w}}'$; otherwise keep $\Delta\hat{\mathbf{w}}$
6: $j = j + 1$
end for
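A compact sketch of Algorithm 1, reusing the helper functions from the previous sketches (with the IRLS routine standing in for MCCA):

```python
import numpy as np

def hybrid_diagnosis(A, df, p, Q=10, beta=0.5, rng=None):
    """Hybrid random search around a convex local minimizer."""
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.linalg.pinv(A) @ df             # step 1: least-squares start
    best = irls_lp(A, df, p, x0)            # step 2: initial local solution
    for _ in range(Q):                      # steps 3-6: Q random restarts
        z = nullspace_perturbation(A, best, beta, rng)
        cand = irls_lp(A, df, p, best + z)  # step 4: re-minimize from perturbed point
        if lp_norm(cand, p) <= lp_norm(best, p):  # step 5: keep if not worse
            best = cand
    return best
```

In use, the recovered vector would be thresholded to locate the failures, e.g. `np.nonzero(np.abs(hybrid_diagnosis(A, df, p=0.5)) > 1e-3)`; the threshold value is an illustrative choice.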
3. Numerical Simulations
As a first example, let us consider the same ULA with $N$ radiating elements reported in [29], to which the reader can refer for more details on the simulation procedure. All the elements have unit nominal excitations. We suppose that the AUT is affected by $S$ faulty elements having both amplitude and phase failures, randomly distributed among the $N$ elements, and that $M$ far-field measurements are considered. The excitation of each faulty element can be modeled as $a_n e^{j\phi_n}$, where $a_n$ and $\phi_n$ represent the amplitude and phase failures, respectively. The far-field data are corrupted by a 35 dB additive white Gaussian noise.
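A sketch of this failure and noise model (the amplitude and phase ranges are illustrative assumptions, not values from the text; `A`, `w_ref`, `N`, `S`, `M` as in the earlier sketch):

```python
import numpy as np

rng = np.random.default_rng()
w_aut = w_ref.copy()
faulty = rng.choice(N, S, replace=False)
amp = rng.uniform(0.1, 0.9, S)                 # assumed amplitude failures
phase = rng.uniform(-np.pi, np.pi, S)          # assumed phase failures
w_aut[faulty] = amp * np.exp(1j * phase)       # partial (amplitude + phase) failures

df_clean = A @ (w_ref - w_aut)
snr_db = 35.0                                  # 35 dB SNR, as in the text
noise_var = np.mean(np.abs(df_clean) ** 2) / 10 ** (snr_db / 10)
e = np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
df_noisy = df_clean + e
```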
As discussed in the previous section, in this paper the non-convex nature of $\ell_p$ ($0<p<1$) norm minimization is alleviated by a random search associated with a convex local minimizer. The random starting points explored in the procedure are selected randomly in the shifted null space of the matrix $\mathbf{A}$. In order to investigate the effectiveness of the proposed method, in Figure 2, Figure 3 and Figure 4 the Cumulative Distribution Function (CDF) of the Mean Square Error (MSE) of the solutions retrieved by the proposed random search-convex local minimizer method is plotted for three representative non-convex norms, i.e., a strongly non-convex norm, a moderately non-convex norm and a weakly non-convex norm, with increasing numbers of random starting points $Q$. In the same figures the CDF curves of $\ell_p$ ($0<p<1$) norm minimization using MCCA alone and of standard $\ell_1$ norm minimization using the CVX convex minimization software package [30] are also plotted for the sake of comparison. For each case, 500 trials have been considered, randomly changing the positions of the broken elements. The plots show that a value of $Q$ equal to or higher than 10 is able to overcome the strongly non-convex feature of the $\ell_p$ norm and provide better performance than standard $\ell_1$ norm minimization, while for the moderately non-convex norm a smaller number of random starting points is enough to obtain satisfactory results.
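A sketch of the Monte-Carlo evaluation behind these curves, reusing the earlier sketches (the normalized MSE-in-dB definition is our assumption of the standard form):

```python
import numpy as np

def mse_db(dw_hat, dw_true):
    """Normalized MSE of the retrieved innovation vector, in dB."""
    return 10 * np.log10(np.mean(np.abs(dw_hat - dw_true) ** 2)
                         / np.mean(np.abs(dw_true) ** 2))

def run_trials(A, w_ref, S, snr_db, p, Q, trials=500, rng=None):
    """Monte-Carlo loop: random failure patterns, noisy data, diagnosis."""
    rng = np.random.default_rng() if rng is None else rng
    M, N = A.shape
    mses = np.empty(trials)
    for t in range(trials):
        w_aut = w_ref.copy()
        idx = rng.choice(N, S, replace=False)
        w_aut[idx] = (rng.uniform(0.1, 0.9, S)
                      * np.exp(1j * rng.uniform(-np.pi, np.pi, S)))
        df = A @ (w_ref - w_aut)
        nv = np.mean(np.abs(df) ** 2) / 10 ** (snr_db / 10)
        df += np.sqrt(nv / 2) * (rng.standard_normal(M)
                                 + 1j * rng.standard_normal(M))
        mses[t] = mse_db(hybrid_diagnosis(A, df, p, Q, rng=rng), w_ref - w_aut)
    return mses

# mses = run_trials(A, w_ref, S=3, snr_db=35.0, p=0.5, Q=5)
# cdf_x, cdf_y = np.sort(mses), np.arange(1, mses.size + 1) / mses.size
# success_rate = np.mean(mses < -30.0)   # criterion used later in the paper
```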
An analysis of the computational time, obtained using a Lenovo desktop with a quad-core 3.6 GHz Intel i7-4790 processor and 8 GB of memory, is shown in Table 1. It should be pointed out that plain $\ell_p$ ($0<p<1$) norm minimization is equivalent to using only one random starting point, i.e., the case $Q = 1$ of our method, as displayed in the first three rows of the second column of Table 1. As mentioned above, for the moderately non-convex norm a small value of $Q$ is enough to overcome its non-convex feature and deliver better performance than standard $\ell_1$ norm minimization. The cost associated with this improvement is displayed in Table 1: in that case, the average computational time of the proposed method is only around four times longer than that of classical $\ell_1$ norm minimization. The ratio rises to around 10 times the standard $\ell_1$ norm minimization when considering the strongly non-convex norm, which requires $Q \ge 10$ to achieve satisfactory results. Therefore, from the perspective of both accuracy and efficiency, the increase in computational burden appears acceptable for array diagnosis.
The goal of the next experiment is to investigate the effectiveness of the proposed method as a function of $M$ and of the number of failures $S$, ranging from 2 to 18. In Figure 5, Figure 6 and Figure 7, the equi-probability curves of the rate of success of the recovered excitations are plotted for the strongly non-convex norm and the moderately non-convex norm with the corresponding numbers of random starting points. In the same figures the curves of standard $\ell_1$ norm minimization are also plotted for comparison. The estimation of the excitations is considered exact if the MSE is lower than -30 dB, and the rate of success (i.e., the number of trials terminated with an MSE lower than -30 dB over the total number of trials) is evaluated for each ($M$, $S$) pair over 100 trials. These figures demonstrate that, for a fixed number of failures $S$, the rate of success increases from 0.25 (purple curve) to 0.95 (yellow curve) as the number of measurements $M$ varies over a small range, indicating the presence of a narrow phase-transition behavior in the proposed method as well. In addition, these figures clearly show that the proposed algorithm requires fewer measurements than the $\ell_1$ norm to reach a desired probability of success. For example, requiring a probability of success of 0.95 or higher, and considering the presence of no more than five failures ($S \le 5$), the random perturbation technique allows both non-convex norms to succeed, at a tolerable computational burden, with fewer measurements than $\ell_1$ norm minimization requires, confirming again the better performance of the proposed method.
A large number of simulations on different kinds of arrays confirm the above observations. As another example, let us consider a uniform planar array (UPA) composed of point-like isotropic radiating elements placed on a regular mesh. The far-field data are measured in $M$ points using a random sampling strategy, and the AUT is affected by $S$ faulty elements. The type of faulty sensors and the measurement noise have the same configuration as in the first example, i.e., partial failures and a 35 dB additive white Gaussian noise are considered. The MSE of the excitations retrieved with the proposed random search-convex local minimizer method is again evaluated for the strongly, moderately and weakly non-convex norms with increasing numbers of random starting points $Q$. $\ell_p$ ($0<p<1$) norm minimization using MCCA alone and $\ell_1$ norm minimization using CVX are also taken into account. As before, the CDF of the MSE is plotted in Figure 8, Figure 9 and Figure 10, with 500 trials for each case, randomly changing the positions of the broken elements. In addition, the computational time is shown in Table 2.
Similar conclusions can be drawn from the simulation results for the planar array. The plots show that a value of $Q$ equal to or higher than 5 is able to overcome the moderately non-convex feature of the $\ell_p$ norm and provide better performance than standard $\ell_1$ norm minimization in this scenario as well, while for the strongly non-convex norm a larger number of random starting points is needed to obtain satisfactory results. Correspondingly, the computational burden displayed in Table 2 shows that, for the moderately non-convex norm, the average computational time of the proposed method is around 1.4 s. Although this time rises to around 9.1 s for the strongly non-convex norm, as the associated non-convexity strengthens, it still appears acceptable in view of the significant reduction of the acquisition time of far-field sampling data compared to the matrix method.
The goal of the last experiment is to validate the effectiveness of the proposed method as a function of $M$ ranging from 9 to 29. In Figure 11 and Figure 12, the curves of the probability of exact reconstruction of the recovered excitations are plotted for the strongly non-convex norm and the moderately non-convex norm with the corresponding numbers of random starting points. In the same figures the probability curves of $\ell_p$ ($0<p<1$) norm minimization using MCCA alone and of standard $\ell_1$ norm minimization using CVX are also plotted for comparison. The estimation of the excitations is considered exact if the MSE is lower than -30 dB, and the rate of success (i.e., the number of trials terminated with an MSE lower than -30 dB over the total number of trials) is evaluated for each $M$ over 300 trials. The results demonstrate that the proposed method effectively overcomes the non-convex feature of $\ell_p$ norm minimization and performs better than standard $\ell_1$ norm minimization using an acceptable number of random starting points. For example, requiring a probability of success of 0.95 or higher, fewer measurements are needed by both non-convex norms combined with the proposed random perturbation technique than by $\ell_1$ norm minimization, not to mention the worse performance of $\ell_p$ ($0<p<1$) norm minimization using MCCA alone, which requires more than 29 measurements. These results confirm again the better performance of the proposed method. It is also worth noting that further increasing the number of random starting points does not bring any significant improvement, apart from a higher computational burden.