Article

Covariance-Based Estimation for Clustered Sensor Networks Subject to Random Deception Attacks

by Raquel Caballero-Águila 1,*,†, Aurora Hermoso-Carazo 2,† and Josefa Linares-Pérez 2,†
1 Dpto. de Estadística, Universidad de Jaén, Paraje Las Lagunillas, 23071 Jaén, Spain
2 Dpto. de Estadística, Universidad de Granada, Avda. Fuentenueva, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2019, 19(14), 3112; https://doi.org/10.3390/s19143112
Submission received: 30 May 2019 / Revised: 27 June 2019 / Accepted: 12 July 2019 / Published: 14 July 2019
(This article belongs to the Section Sensor Networks)

Abstract:
In this paper, a cluster-based approach is used to address the distributed fusion estimation problem (filtering and fixed-point smoothing) for discrete-time stochastic signals in the presence of random deception attacks. At each sampling time, measured outputs of the signal are provided by a networked system, whose sensors are grouped into clusters. Each cluster is connected to a local processor which gathers the measured outputs of its sensors and, in turn, the local processors of all clusters are connected with a global fusion center. The proposed cluster-based fusion estimation structure involves two stages. First, every single sensor in a cluster transmits its observations to the corresponding local processor, where least-squares local estimators are designed by an innovation approach. During this transmission, deception attacks to the sensor measurements may be randomly launched by an adversary, with known probabilities of success that may be different at each sensor. In the second stage, the local estimators are sent to the fusion center, where they are combined to generate the proposed fusion estimators. The covariance-based design of the distributed fusion filtering and fixed-point smoothing algorithms does not require full knowledge of the signal evolution model, but only the first and second order moments of the processes involved in the observation model. Simulations are provided to illustrate the theoretical results and analyze the effect of the attack success probability on the estimation performance.

1. Introduction

Nowadays, communication networks are widely used to cope with signal estimation problems in engineering, economics, health or security, among other fields, since they generally provide more robust and precise estimators of the target signal than a single sensor. Conventional centralized and distributed fusion estimation architectures have been widely studied under a state-space approach (see, e.g., [1,2,3,4] and references therein). In addition, assuming that the evolution model of the signal is not fully known and only covariance information is available, centralized and distributed fusion estimation algorithms have been proposed for sensor networks affected by different network-induced random uncertainties (see, e.g., [5,6,7]). A comprehensive survey of recent developments in estimation and fusion for networked systems with randomly occurring phenomena can be found in [8]. The key theories and methodologies of distributed multisensor data fusion are comprehensively reviewed in [9], and a survey of distributed fusion estimation algorithms with applications in networked systems, including an interesting analysis of some network-induced uncertainties, is provided in [10].
These conventional centralized and distributed fusion architectures require connecting all sensor nodes to a central processor, which can involve a serious communication burden, especially when the number of sensors is large. Additionally, in the distributed fusion estimation architecture, equipping every single sensor with a local estimator may sometimes be unaffordable. To overcome these shortcomings, a usual practice in engineering is to arrange the sensors in clusters and to select one node per cluster (called the cluster-head node) to act as a local processor. Each cluster head first collects measurements from the regular sensor nodes of its cluster to generate a local estimator, and then all local estimators are collected in the central processor, where the distributed fusion estimator is generated. This structure, called a hierarchical cluster-based estimation structure, provides an efficient way to analyze big data in a great variety of application fields [11]. Clustering is also widely used, for example, in underwater sensor networks established for military purposes, such as anti-submarine warfare, communications, positioning and guidance [12]. An essential issue when dealing with complex networked systems consisting of multiple clusters or individual subsystems is coordination control and the design of appropriate consensus protocols; relevant results concerning these problems in different kinds of networked systems can be found in [13,14,15,16]. A state-of-the-art and comprehensive survey on clustering approaches can be found in [17], and a detailed outline of some modern energy-efficient clustering approaches to improve the lifetime of wireless sensor networks can be seen in [18]. Recently, a comprehensive review, with comparisons and classifications of different optimized clustering approaches according to different metrics, has been carried out in [19].
A remarkable disadvantage of using sensor networks for signal estimation is that their reliability can be compromised by possible cyber-attacks from adversaries. For this reason, cyber-security of networked systems is drawing considerable research attention. The most typical kinds of attacks are the denial-of-service attacks and the deception attacks [20]. While the first ones strike at data availability by obstructing the flow of information through the network, the second ones violate data integrity by injecting false information that modifies the real data packets. Such false information can include a wrong sensor measurement or control input, an incorrect time-stamp or a wrong identity of the sending device. The fusion estimation problem for stochastic signals from measured data coming from a sensor network subject to deception attacks is currently an important focus of research (see e.g., [20,21,22,23,24,25,26] and references therein). In [20], new insecurity conditions for the state estimation problem under false data injection attacks are proposed. In [21], the variance-constrained distributed filtering problem is studied for time-varying systems subject to multiplicative deception attacks with bounded attack noises. A distributed recursive filtering algorithm is designed in [22] for a class of discrete time-delayed systems subject to both uniform quantization and intermittent deception attacks. In [23], the security-guaranteed filtering problem is addressed for a class of nonlinear discrete time-delayed systems with both stochastic sensor saturations and deception attacks. The centralized security-guaranteed filtering problem for linear discrete time-invariant stochastic systems with multi-rate sensor nodes is addressed in [24], when deception attacks are launched during the transmission of information from the sensors to the centralized filter. In [25], the cluster-based covariance intersection fusion estimation problem under stochastic deception attacks is investigated. An integrated analysis of event-triggered fault detection and fault estimation is proposed in [26] for a class of discrete-time stochastic systems subject to unknown disturbances and stochastic deception attacks.
Research motivation. In this paper, distributed fusion filtering and fixed-point smoothing algorithms are designed for clustered sensor networks subject to stochastic linear deception attacks. This study is motivated by the following challenges:
(i) To devise fusion estimation structures for multi-sensor networked systems that reduce the transmission burden (with respect to both centralized and distributed fusion) and the cost of embedding a local processor in each sensor (with respect to distributed fusion).
(ii) To design recursive and easily implementable estimation algorithms with high estimation accuracy, fusing unreliable multi-sensor data, corrupted by possible deception attacks, under general assumptions on the target signal that do not require the full knowledge of the system state-space model.
Paper contributions. In light of our motivations, the main contributions of this paper are highlighted as follows:
(i) The fusion estimation problem in multi-sensor networked systems when the sensors are grouped into clusters is investigated assuming, on the one hand, that the measurements are subject to stochastic deception attacks and, on the other, that the signal evolution model is not necessarily known, but only information about its first and second order statistical properties (covariance information) is available. This covariance-based estimation approach is more general than the conventional one based on the full knowledge of the state-space model of the system since, in that case, the mean and autocovariance function of the signal process can be calculated. Hence, this estimation approach provides a comprehensive framework to deal with a great variety of stochastic signals.
(ii) A two-stage fusion estimation algorithm is designed for both filtering and fixed-point smoothing problems. In the first stage, each local processor collects measurements (subject to random deception attacks) from its cluster to generate local least-squares linear estimators; in this stage, the innovation approach is used to design the local estimation algorithms, which are recursive and computationally simple. In the second stage, in order to improve the estimation performance, all these local estimators are transmitted to the fusion center, where fusion estimators are obtained by matrix-weighted linear combinations of the local estimators; the weight matrices are computed by minimizing the mean squared estimation error, for which the cross-correlation matrices between any two local estimators need to be previously calculated.
In contrast with previous papers concerning the fusion estimation problem under a covariance-based approach, this is the first time that a cluster structure of the sensor network is used and stochastic deception attacks are considered. In comparison with the existing literature about the estimation problem in clustered sensor networks, either subject to random deception attacks or not, the main difference is the kind of information required for the derivation of the algorithms: full knowledge of the state-space model in the existing literature and only covariance information (which covers the conventional formulation based on state-space model and also more general situations) in the current paper.
Paper structure. The measurement model framework is presented in Section 2, where the equations of the clustering sensor measurements and the stochastic deception attacks, together with the hypotheses under which the distributed estimation problem will be addressed, are specified. In Section 3 and Section 4, the first and second stages of the proposed cluster-based fusion estimation structure are developed. The local least-squares linear filtering and fixed-point smoothing estimators, as well as formulas for the error covariance matrices, are obtained by recursive algorithms in Section 3. The distributed fusion estimators are obtained in Section 4 as a matrix-weighted linear combination of the local estimators, using the mean squared error as optimality criterion; also, to measure the estimation accuracy, recursive formulas for the error covariance matrices are derived. The performance of the proposed estimation algorithms is analyzed in a numerical simulation study, carried out in Section 5. Finally, some concluding remarks are provided in Section 6.
Notation. Unless otherwise stated, the notation used in the paper is fairly standard. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{m\times n}$ the set of all $m\times n$ real matrices. When the dimensions of vectors or matrices are not specified, they are assumed to be compatible with algebraic operations. $1_n$ denotes the $n$-dimensional column vector with all ones, while the identity and zero matrices are denoted by $I$ and $0$, respectively. For any function $\Gamma_{h,k}$, depending on $h$ and $k$, and $\Xi^{(rs)}$, depending on $r$ and $s$, we will write $\Gamma_k = \Gamma_{k,k}$ and $\Xi^{(r)} = \Xi^{(rr)}$, respectively, for simplicity. The Kronecker product of the matrices $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{m\times n}$, which is a matrix in $\mathbb{R}^{mm\times nn}$, is denoted by $A\otimes B$. The Hadamard product of the matrices $C, D\in\mathbb{R}^{m\times n}$, which is also a matrix in $\mathbb{R}^{m\times n}$, is denoted by $C\circ D$ and is defined by $(C\circ D)_{ij} = C_{ij}D_{ij}$. $\delta_{k,s}$ is the Kronecker delta function; that is, $\delta_{k,s}=1$ when $k=s$ and zero otherwise. The autocovariance function of a second-order process $\{\alpha_k\}_{k\geq 1}$ is defined as $E\big[(\alpha_k - E[\alpha_k])(\alpha_s - E[\alpha_s])^T\big]$, $k,s\geq 1$, where $E[\cdot]$ stands for the mathematical expectation operator.
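As a quick illustration of the two matrix products used throughout (this snippet is not part of the original paper), NumPy provides both operations directly: np.kron for the Kronecker product and the elementwise * operator for the Hadamard product.

```python
import numpy as np

# Kronecker product: A, B in R^{m x n}  ->  A ⊗ B in R^{mm x nn}
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)
kron_AB = np.kron(A, B)          # 4 x 4 block matrix

# Hadamard (entry-wise) product: C, D in R^{m x n}  ->  C ∘ D in R^{m x n}
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.array([[0.5, 0.0],
              [0.0, 0.5]])
hadamard_CD = C * D              # (C ∘ D)_{ij} = C_{ij} D_{ij}

print(kron_AB.shape)             # (4, 4)
print(hadamard_CD)               # [[0.5 0. ] [0.  2. ]]
```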

2. Estimation Problem Formulation and Measurement Model

In this paper, we consider the distributed fusion estimation problem of a discrete-time stochastic signal from a set of measurements provided by multiple sensors. A cluster-based estimation structure will be adopted: the sensors are grouped into different clusters, each of them connected with a local processor which, in turn, is connected with a global fusion center (FC). The distributed fusion estimation process operates in two stages. In the first one, the measured outputs provided by the sensors of each cluster are sent to the corresponding local processor, where local least-squares (LS) linear estimators are obtained; it is assumed that, during the transmission to the local processors, the clustering sensor measurements are subject to stochastic linear deception attacks. In the second stage, the local estimators received from all the local processors are gathered and fused in the FC, where the proposed distributed signal estimators are generated by a matrix-weighted linear combination of the local estimators using the mean squared error as optimality criterion.
The aforementioned estimation problem will be addressed without requiring full knowledge of the evolution model generating the signal process. Instead, it will be assumed that the signal mean function is zero and its covariance function is factorizable (covariance-based estimation approach). More precisely, the following assumption is required ([7]):
(A1) 
The $n_x$-dimensional signal $\{x_k\}_{k\geq 1}$ is a zero-mean second-order process and its autocovariance function is expressed in a separable form; namely, $E\big[x_k x_h^T\big] = A_k B_h^T$, $h\leq k$, where $A_k, B_h\in\mathbb{R}^{n_x\times n}$ are known matrices.
The signal process of the most common signal evolution models (e.g., the signal of linear systems, or that of uncertain systems with a sum of multiple multiplicative noise terms) meets this assumption (A1) and, hence, the covariance-based estimation approach provides a comprehensive framework to cope with different signal evolution models, thus overcoming the necessity of deriving specific algorithms for each situation.
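For instance (an illustrative sketch, not an example taken from the paper), a scalar AR(1) signal $x_{k+1} = a\,x_k + w_k$ with zero-mean white noise $\{w_k\}$ has autocovariance $E[x_k x_h] = a^{k-h}\Sigma_h$ for $h\leq k$, which is separable as in (A1) with $A_k = a^k$ and $B_h = a^{-h}\Sigma_h$. A small numerical check:

```python
import numpy as np

# Scalar AR(1) signal: x_{k+1} = a*x_k + w_k, noise variance q, E[x_0^2] = sigma0.
a, q, sigma0 = 0.95, 1.0, 1.0

K = 10
Sigma = np.empty(K + 1)          # Sigma[h] = E[x_h^2]
Sigma[0] = sigma0
for h in range(1, K + 1):
    Sigma[h] = a**2 * Sigma[h - 1] + q

A = lambda k: a**k               # A_k of assumption (A1)
B = lambda h: a**(-h) * Sigma[h] # B_h of assumption (A1)

k, h = 7, 3
print(np.isclose(A(k) * B(h), a**(k - h) * Sigma[h]))   # True: E[x_k x_h] = A_k B_h
```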

2.1. Clustering Sensor Measurements and Stochastic Deception Attacks

Consider a sensor network and assume that the sensor nodes are grouped into $L$ clusters to measure the stochastic signal of interest. Specifically, assume that each cluster $r=1,\ldots,L$ is made up of $m_r$ sensors that provide measurements of the signal according to the following model:
$$z_k^{(r(i))} = C_k^{(r(i))} x_k + v_k^{(r(i))}, \qquad k\geq 1;\ i=1,\ldots,m_r,\ r=1,\ldots,L, \tag{1}$$
where $C_k^{(r(i))}$ are known matrices and $z_k^{(r(i))}\in\mathbb{R}^{n_z}$ is the signal measured output from the $i$-th sensor of the $r$-th cluster at time $k$, which is transmitted to the $r$-th local processor to obtain the local LS linear signal estimators. The following assumption is required on the measurement noises, $\{v_k^{(r(i))}\}_{k\geq 1}$, $i=1,\ldots,m_r$, $r=1,\ldots,L$:
(A2) 
The measurement noises of different clusters are independent and, for each $r=1,\ldots,L$, the noises $\{v_k^{(r(i))}\}_{k\geq 1}$, $i=1,\ldots,m_r$, are zero-mean second-order white processes with known covariance matrices
$$E\big[v_k^{(r(i))} v_h^{(r(j))T}\big] = R_k^{(r(ij))}\,\delta_{k,h}, \qquad k,h\geq 1;\ i,j=1,\ldots,m_r.$$
For each cluster $r=1,\ldots,L$, the transmissions of the measured outputs $z_k^{(r(i))}$, $i=1,\ldots,m_r$, to the $r$-th local processor are affected by random linear deception attacks, and the deceptive signal injected by the attackers, $\xi_k^{(r(i))}$, is described by:
$$\xi_k^{(r(i))} = -z_k^{(r(i))} + w_k^{(r(i))}, \qquad k\geq 1;\ i=1,\ldots,m_r,\ r=1,\ldots,L. \tag{2}$$
This signal involves two parts: the first one neutralizes the true information and the second one is the blurred information (noise) added by the attackers. These noises, $\{w_k^{(r(i))}\}_{k\geq 1}$, $i=1,\ldots,m_r$, $r=1,\ldots,L$, are assumed to satisfy the following requirement:
(A3) 
The attack noises of different clusters are independent and, for each $r=1,\ldots,L$, the noises $\{w_k^{(r(i))}\}_{k\geq 1}$, $i=1,\ldots,m_r$, are zero-mean second-order white processes with known covariance matrices
$$E\big[w_k^{(r(i))} w_h^{(r(j))T}\big] = S_k^{(r(ij))}\,\delta_{k,h}, \qquad k,h\geq 1;\ i,j=1,\ldots,m_r.$$

2.2. Measurements Received by the Local Processors

Usually, in practice, the attacks may or may not succeed at random. To capture this random nature of the attacks, for every $r=1,\ldots,L$, the measurements received by the $r$-th local processor are modelled by introducing sequences of Bernoulli random variables, $\{\lambda_k^{(r(i))}\}_{k\geq 1}$, $i=1,\ldots,m_r$. For each $r=1,\ldots,L$ and $i=1,\ldots,m_r$, the value $\lambda_k^{(r(i))}=1$ models a successful attack on the $i$-th communication channel from the $r$-th cluster, meaning that only the noise $w_k^{(r(i))}$ arrives at the $r$-th local processor; conversely, the value $\lambda_k^{(r(i))}=0$ models a failed attack, which means that the real measured output $z_k^{(r(i))}$ is received by the $r$-th local processor. Taking these considerations into account, the following model for $y_k^{(r(i))}$, the measurements received by the $r$-th local processor, is considered:
$$y_k^{(r(i))} = z_k^{(r(i))} + \lambda_k^{(r(i))}\,\xi_k^{(r(i))}, \qquad k\geq 1;\ i=1,\ldots,m_r,\ r=1,\ldots,L, \tag{3}$$
or, equivalently, by substituting (1) and (2) into (3),
$$y_k^{(r(i))} = \big(1-\lambda_k^{(r(i))}\big)\big(C_k^{(r(i))} x_k + v_k^{(r(i))}\big) + \lambda_k^{(r(i))}\,w_k^{(r(i))}, \qquad k\geq 1;\ i=1,\ldots,m_r,\ r=1,\ldots,L. \tag{4}$$
The following assumption is imposed on the Bernoulli random variables describing the success or failure of attacks:
(A4) 
$\{\lambda_k^{(r(i))}\}_{k\geq 1}$, $r=1,\ldots,L$, $i=1,\ldots,m_r$, are independent sequences of independent Bernoulli random variables with known probabilities $P\big(\lambda_k^{(r(i))}=1\big) = \bar\lambda_k^{(r(i))}$.
From this assumption, if we denote $\lambda_k^{(r)} = \big(\lambda_k^{(r(1))},\ldots,\lambda_k^{(r(m_r))}\big)^T\otimes 1_{n_z}$, for $r=1,\ldots,L$, then the correlation matrices $K_k^{\lambda(r)}\equiv E\big[\lambda_k^{(r)}\lambda_k^{(r)T}\big]$ and $K_k^{1-\lambda(r)}\equiv E\big[\big(1_{m_r n_z}-\lambda_k^{(r)}\big)\big(1_{m_r n_z}-\lambda_k^{(r)}\big)^T\big]$ are known and their entries are easily calculated taking into account that
$$E\big[\lambda_k^{(r(i))}\lambda_k^{(r(j))}\big] = \begin{cases} \bar\lambda_k^{(r(i))}, & i=j,\\ \bar\lambda_k^{(r(i))}\bar\lambda_k^{(r(j))}, & i\neq j. \end{cases} \tag{5}$$
Finally, the following independence hypothesis is also assumed:
(A5) 
For $r=1,\ldots,L$ and $i=1,\ldots,m_r$, the signal process $\{x_k\}_{k\geq 1}$ and the processes $\{v_k^{(r(i))}\}_{k\geq 1}$, $\{w_k^{(r(i))}\}_{k\geq 1}$ and $\{\lambda_k^{(r(i))}\}_{k\geq 1}$ are mutually independent.
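To make the measurement and attack model of Equations (1)-(4) concrete, the following NumPy sketch simulates the measurements received by one local processor. The observation matrices are loosely based on the first cluster of the simulation example in Section 5, while the noise variances are illustrative choices of ours; none of this code appears in the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# One cluster with m_r = 3 scalar sensors observing a 2-dimensional signal x_k.
C = [np.array([[0.8, 0.9]]), np.array([[0.6, 0.7]]), np.array([[0.7, 0.8]])]
R = [0.5, 0.5, 0.5]                 # measurement-noise variances (illustrative)
S = [1.0, 1.0, 1.0]                 # attack-noise variances (illustrative)
lam_bar = [0.1, 0.2, 0.3]           # attack success probabilities

def received_measurements(x):
    """Return y_k^{(r(i))}, i = 1..m_r, following Equations (1)-(4)."""
    y = []
    for Ci, Ri, Si, pi in zip(C, R, S, lam_bar):
        z = Ci @ x + rng.normal(0.0, np.sqrt(Ri), size=1)     # true output, Eq. (1)
        xi = -z + rng.normal(0.0, np.sqrt(Si), size=1)        # injected signal, Eq. (2)
        lam = rng.binomial(1, pi)                             # attack success indicator
        y.append(z + lam * xi)                                # received measurement, Eq. (3)
    return np.concatenate(y)

x_k = rng.normal(size=2)
print(received_measurements(x_k))
```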

3. First Stage: Local LS Linear Estimators

To start with, as indicated previously, our aim is to calculate, at every local processor $r=1,\ldots,L$, LS linear estimators of the signal based on the measurements received from all the sensors of the $r$-th cluster. Therefore, to estimate the signal $x_k$ at the local processor $r$ at time $k+N$, we consider all the measurements received from the sensors $i=1,\ldots,m_r$ of the $r$-th cluster up to time $k+N$; that is, the measurement set $\big\{y_h^{(r(i))},\ h\leq k+N,\ i=1,\ldots,m_r\big\}$. So, defining the vectors $y_h^{(r)} = \big(y_h^{(r(1))T},\ldots,y_h^{(r(m_r))T}\big)^T$, made up of all the measurements received by the $r$-th processor at each sampling time $h$, the problem at hand is formulated as that of determining the local LS linear estimators of the signal $x_k$ based on the vectors $y_h^{(r)}$, $h\leq k+N$, for $r=1,\ldots,L$.

3.1. Stacked Model for the Measurements Received by the Local Processors

Taking into account Equation (4) for the measurements received by the $r$-th local processor, the following model for the above-defined vectors $y_k^{(r)}$ is clearly deduced:
$$y_k^{(r)} = \big(I-\Lambda_k^{(r)}\big)\big(C_k^{(r)} x_k + v_k^{(r)}\big) + \Lambda_k^{(r)} w_k^{(r)}, \qquad k\geq 1;\ r=1,\ldots,L, \tag{6}$$
where
$$C_k^{(r)} = \begin{pmatrix} C_k^{(r(1))}\\ \vdots\\ C_k^{(r(m_r))} \end{pmatrix}, \quad v_k^{(r)} = \begin{pmatrix} v_k^{(r(1))}\\ \vdots\\ v_k^{(r(m_r))} \end{pmatrix}, \quad w_k^{(r)} = \begin{pmatrix} w_k^{(r(1))}\\ \vdots\\ w_k^{(r(m_r))} \end{pmatrix}, \quad \Lambda_k^{(r)} = \begin{pmatrix} \lambda_k^{(r(1))} & & 0\\ & \ddots & \\ 0 & & \lambda_k^{(r(m_r))} \end{pmatrix}\otimes I.$$
The following statistical properties of the processes involved in model (6), which will be used to address the LS linear estimation problem, are easily inferred from the model assumptions (A1)–(A5) stated in Section 2:
(P1) 
$\{v_k^{(r)}\}_{k\geq 1}$, $r=1,\ldots,L$, are independent zero-mean noise processes with
$E\big[v_k^{(r)} v_h^{(s)T}\big] = R_k^{(r)}\,\delta_{k,h}\,\delta_{r,s}$, where $R_k^{(r)} = \big(R_k^{(r(ij))}\big)_{i,j=1,\ldots,m_r}$.
(P2) 
$\{w_k^{(r)}\}_{k\geq 1}$, $r=1,\ldots,L$, are independent zero-mean noise processes with
$E\big[w_k^{(r)} w_h^{(s)T}\big] = S_k^{(r)}\,\delta_{k,h}\,\delta_{r,s}$, where $S_k^{(r)} = \big(S_k^{(r(ij))}\big)_{i,j=1,\ldots,m_r}$.
(P3) 
$\{\Lambda_k^{(r)}\}_{k\geq 1}$, $r=1,\ldots,L$, are independent sequences of independent random matrices with known means $\bar\Lambda_k^{(r)} = \mathrm{Diag}\big(\bar\lambda_k^{(r(1))},\ldots,\bar\lambda_k^{(r(m_r))}\big)\otimes I$. Moreover, the properties of the Hadamard product guarantee that, for any random matrix $G$ independent of $\Lambda_k^{(r)}$, $E\big[\Lambda_k^{(r)} G\,\Lambda_k^{(r)}\big] = K_k^{\lambda(r)}\circ E[G]$, where $K_k^{\lambda(r)}$ is calculated from (5).
(P4) 
For $r=1,\ldots,L$, the signal process $\{x_k\}_{k\geq 1}$ and the processes $\{v_k^{(r)}\}_{k\geq 1}$, $\{w_k^{(r)}\}_{k\geq 1}$ and $\{\Lambda_k^{(r)}\}_{k\geq 1}$ are mutually independent.
(P5) 
$\{y_k^{(r)}\}_{k\geq 1}$, $r=1,\ldots,L$, are zero-mean processes with covariance matrices $\Sigma_k^{y(rs)}\equiv E\big[y_k^{(r)} y_k^{(s)T}\big]$ given by
$$\Sigma_k^{y(rs)} = \begin{cases} K_k^{1-\lambda(r)}\circ\big(C_k^{(r)} A_k B_k^T C_k^{(r)T} + R_k^{(r)}\big) + K_k^{\lambda(r)}\circ S_k^{(r)}, & r=s,\\[4pt] \big(I-\bar\Lambda_k^{(r)}\big) C_k^{(r)} A_k B_k^T C_k^{(s)T}\big(I-\bar\Lambda_k^{(s)}\big), & r\neq s, \end{cases} \tag{7}$$
where $K_k^{1-\lambda(r)}$ and $K_k^{\lambda(r)}$ are calculated from (5).
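A possible NumPy sketch of the covariance computation in (7) for the case $r=s$, with scalar sensor outputs ($n_z=1$), is shown below; the moment matrices $K_k^{\lambda(r)}$ and $K_k^{1-\lambda(r)}$ are built entrywise from (5), and all numerical values are illustrative.

```python
import numpy as np

def sigma_y_rr(Ck, Ak, Bk, Rk, Sk, lam_bar):
    """Covariance of the stacked received measurements y_k^{(r)}, Eq. (7), case r = s."""
    lam_bar = np.asarray(lam_bar, dtype=float)
    K_lam = np.outer(lam_bar, lam_bar)          # E[lambda_i lambda_j] for i != j
    np.fill_diagonal(K_lam, lam_bar)            # E[lambda_i^2] = lambda_i (Bernoulli)
    K_one_minus = 1.0 - lam_bar[:, None] - lam_bar[None, :] + K_lam
    signal_part = Ck @ Ak @ Bk.T @ Ck.T         # C_k^{(r)} A_k B_k^T C_k^{(r)T}
    return K_one_minus * (signal_part + Rk) + K_lam * Sk

# Illustrative values: 3 scalar sensors, 2-dimensional signal.
Ck = np.array([[0.8, 0.9], [0.6, 0.7], [0.7, 0.8]])
Ak = np.eye(2); Bk = np.eye(2)
Rk = 0.5 * np.ones((3, 3)); Sk = np.ones((3, 3))
print(sigma_y_rr(Ck, Ak, Bk, Rk, Sk, [0.1, 0.2, 0.3]))
```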

3.2. Recursive Local LS Linear Filtering and Fixed-Point Smoothing Algorithms

This subsection is devoted to the design of recursive algorithms, at each local processor $r=1,\ldots,L$, for the LS linear filtering and fixed-point smoothing estimators based on the measurements received from all the sensors of the $r$-th cluster. In other words, for the $r$-th cluster, $r=1,\ldots,L$, the aim of this subsection is to design algorithms to obtain the local LS linear estimators, $\hat{x}_{k/k+N}^{(r)}$, $N\geq 0$, of the signal $x_k$ based on the vectors $y_h^{(r)}$, $h\leq k+N$, given by (6); specifically, a recursive algorithm for the local LS filter, $\hat{x}_{k/k}^{(r)}$, $k\geq 1$, and a recursive algorithm for the local LS smoother, $\hat{x}_{k/k+N}^{(r)}$, for fixed $k\geq 1$ and $N=1,2,\ldots$, will be derived.
For this purpose, we will use the measurement innovations rather than the raw measurements (innovation approach), where the innovation at time $k$ is defined as $\mu_k^{(r)}\equiv y_k^{(r)} - \hat{y}_{k/k-1}^{(r)}$, with $\hat{y}_{k/k-1}^{(r)}$ the one-stage observation predictor (LS linear estimator of $y_k^{(r)}$ based on $y_h^{(r)}$, $h\leq k-1$). Taking into account the properties of the innovation process, the following expression for the LS linear estimators is derived:
$$\hat{x}_{k/H}^{(r)} = \sum_{h=1}^{H} E\big[x_k\,\mu_h^{(r)T}\big]\big(E\big[\mu_h^{(r)}\mu_h^{(r)T}\big]\big)^{-1}\mu_h^{(r)}. \tag{8}$$
Taking orthogonal projections in (6) and using properties (P2)-(P4), we get that the one-stage observation predictor is $\hat{y}_{h/h-1}^{(r)} = \big(I-\bar\Lambda_h^{(r)}\big) C_h^{(r)}\hat{x}_{h/h-1}^{(r)}$ and, consequently, the innovation is given by
$$\mu_h^{(r)} = y_h^{(r)} - \big(I-\bar\Lambda_h^{(r)}\big) C_h^{(r)}\hat{x}_{h/h-1}^{(r)}. \tag{9}$$
Furthermore, from property (P5), it is clear that the innovation covariance matrix, $\Pi_h^{(r)}\equiv E\big[\mu_h^{(r)}\mu_h^{(r)T}\big] = E\big[y_h^{(r)} y_h^{(r)T}\big] - E\big[\hat{y}_{h/h-1}^{(r)}\hat{y}_{h/h-1}^{(r)T}\big]$, satisfies
$$\Pi_h^{(r)} = \Sigma_h^{y(r)} - \big(I-\bar\Lambda_h^{(r)}\big) C_h^{(r)} E\big[\hat{x}_{h/h-1}^{(r)}\hat{x}_{h/h-1}^{(r)T}\big] C_h^{(r)T}\big(I-\bar\Lambda_h^{(r)}\big).$$
Using now (6) for $y_h^{(r)}$ and (8) for $\hat{x}_{h/h-1}^{(r)}$, we obtain that the coefficients $X_{k,h}^{(r)} = E\big[x_k\,\mu_h^{(r)T}\big]$ satisfy $X_{k,h}^{(r)} = A_k O_h^{(r)}$, $h\leq k$, where $O_h^{(r)}$ is a matrix function such that
$$O_h^{(r)} = \Big(B_h^T - \Big(\sum_{l=1}^{h-1} O_l^{(r)}\Pi_l^{(r)-1}O_l^{(r)T}\Big) A_h^T\Big) C_h^{(r)T}\big(I-\bar\Lambda_h^{(r)}\big). \tag{10}$$
Then, by defining
$$o_k^{(r)} = (1-\delta_{k,0})\sum_{h=1}^{k} O_h^{(r)}\Pi_h^{(r)-1}\mu_h^{(r)}, \qquad \Sigma_k^{o(r)} = (1-\delta_{k,0})\sum_{h=1}^{k} O_h^{(r)}\Pi_h^{(r)-1}O_h^{(r)T}, \qquad k\geq 0, \tag{11}$$
we obtain that the estimators $\hat{x}_{k/H}^{(r)}$, $H\leq k$, are given by $\hat{x}_{k/H}^{(r)} = A_k o_H^{(r)}$ and, by substitution in (9), we obtain that $\mu_h^{(r)} = y_h^{(r)} - \big(I-\bar\Lambda_h^{(r)}\big) C_h^{(r)} A_h o_{h-1}^{(r)}$.
Finally, since (11) guarantees that $\sum_{l=1}^{h-1} O_l^{(r)}\Pi_l^{(r)-1}O_l^{(r)T} = \Sigma_{h-1}^{o(r)}$, by substitution in (10) it is deduced that $O_h^{(r)} = \big(B_h^T - \Sigma_{h-1}^{o(r)} A_h^T\big) C_h^{(r)T}\big(I-\bar\Lambda_h^{(r)}\big)$. Bearing in mind the above results, the following filtering algorithm is derived.
Recursive Local LS Linear Filtering Algorithm. For the $r$-th cluster, $r=1,\ldots,L$, the local filtering estimators, $\hat{x}_{k/k}^{(r)}$, and the error covariance matrices, $\hat\Sigma_{k/k}^{(r)}\equiv E\big[\big(x_k-\hat{x}_{k/k}^{(r)}\big)\big(x_k-\hat{x}_{k/k}^{(r)}\big)^T\big]$, are recursively obtained by
$$\hat{x}_{k/k}^{(r)} = A_k o_k^{(r)}, \qquad \hat\Sigma_{k/k}^{(r)} = A_k\big(B_k - A_k\Sigma_k^{o(r)}\big)^T, \qquad k\geq 1. \tag{12}$$
The vectors $o_k^{(r)}$ and the matrices $\Sigma_k^{o(r)}\equiv E\big[o_k^{(r)} o_k^{(r)T}\big]$, defined in (11), are recursively calculated from
$$o_k^{(r)} = o_{k-1}^{(r)} + O_k^{(r)}\Pi_k^{(r)-1}\mu_k^{(r)}, \qquad k\geq 1; \qquad o_0^{(r)} = 0, \tag{13}$$
$$\Sigma_k^{o(r)} = \Sigma_{k-1}^{o(r)} + O_k^{(r)}\Pi_k^{(r)-1}O_k^{(r)T}, \qquad k\geq 1; \qquad \Sigma_0^{o(r)} = 0.$$
The matrices $O_k^{(r)}\equiv E\big[o_k^{(r)}\mu_k^{(r)T}\big]$, given in (10), satisfy
$$O_k^{(r)} = \big(B_k - A_k\Sigma_{k-1}^{o(r)}\big)^T C_k^{(r)T}\big(I-\bar\Lambda_k^{(r)}\big), \qquad k\geq 1.$$
The innovations, $\mu_k^{(r)} = y_k^{(r)} - \hat{y}_{k/k-1}^{(r)}$, are given by
$$\mu_k^{(r)} = y_k^{(r)} - \big(I-\bar\Lambda_k^{(r)}\big) C_k^{(r)} A_k o_{k-1}^{(r)}, \qquad k\geq 1, \tag{14}$$
and the innovation covariance matrices, $\Pi_k^{(r)}\equiv E\big[\mu_k^{(r)}\mu_k^{(r)T}\big]$, satisfy
$$\Pi_k^{(r)} = \Sigma_k^{y(r)} - \big(I-\bar\Lambda_k^{(r)}\big) C_k^{(r)} A_k\Sigma_{k-1}^{o(r)} A_k^T C_k^{(r)T}\big(I-\bar\Lambda_k^{(r)}\big), \qquad k\geq 1,$$
where the matrices $\Sigma_k^{y(r)}$ are given in (7).
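The filtering recursions above translate almost line by line into code. The following NumPy sketch (not the authors' MATLAB implementation) assumes scalar sensor outputs, time-invariant $C^{(r)}$, $R^{(r)}$, $S^{(r)}$ and attack probabilities, and square covariance factors $A_k$, $B_k$; the function and variable names are our own.

```python
import numpy as np

def local_ls_filter(y_seq, A_seq, B_seq, C, R, S, lam_bar):
    """One cluster's recursive local LS filter, following Equations (12)-(14)."""
    lam_bar = np.asarray(lam_bar, dtype=float)
    I_minus_Lam = np.diag(1.0 - lam_bar)
    # Second-order moments of the Bernoulli variables, Equation (5)
    K_lam = np.outer(lam_bar, lam_bar)
    np.fill_diagonal(K_lam, lam_bar)
    K_one = 1.0 - lam_bar[:, None] - lam_bar[None, :] + K_lam

    n = A_seq[0].shape[0]
    o = np.zeros((n, 1))                   # o_0^{(r)} = 0
    Sigma_o = np.zeros((n, n))             # Sigma_0^{o(r)} = 0
    filters, error_covs = [], []

    for y, A, B in zip(y_seq, A_seq, B_seq):
        Sigma_y = K_one * (C @ A @ B.T @ C.T + R) + K_lam * S          # Eq. (7), r = s
        O = (B - A @ Sigma_o).T @ C.T @ I_minus_Lam                    # O_k^{(r)}
        Pi = Sigma_y - I_minus_Lam @ C @ A @ Sigma_o @ A.T @ C.T @ I_minus_Lam
        mu = np.reshape(y, (-1, 1)) - I_minus_Lam @ C @ A @ o          # innovation, Eq. (14)
        gain = O @ np.linalg.inv(Pi)
        o = o + gain @ mu                                              # Eq. (13)
        Sigma_o = Sigma_o + gain @ O.T
        filters.append(A @ o)                                          # x_hat_{k/k}^{(r)}
        error_covs.append(A @ (B - A @ Sigma_o).T)                     # Sigma_hat_{k/k}^{(r)}
    return filters, error_covs
```

Feeding this function the sequence of stacked measurements of cluster $r$ yields the local filters and error covariances that the fusion center later combines.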
The general expression (8) for the LS linear estimators is also the starting point to derive the following covariance-based recursive fixed-point smoothing algorithm. The derivation is omitted, since this algorithm can easily be deduced by an analogous reasoning to that used in Theorem 2 of [6].
Recursive Local LS Linear Fixed-Point Smoothing Algorithm. For the $r$-th cluster, $r=1,\ldots,L$, starting from the filter, $\hat{x}_{k/k}^{(r)}$, the local LS linear fixed-point smoothers, $\hat{x}_{k/k+N}^{(r)}$, $N\geq 1$, are calculated as
$$\hat{x}_{k/k+N}^{(r)} = \hat{x}_{k/k+N-1}^{(r)} + X_{k,k+N}^{(r)}\Pi_{k+N}^{(r)-1}\mu_{k+N}^{(r)}, \qquad N\geq 1,\ k\geq 1, \tag{15}$$
where the matrices $X_{k,k+N}^{(r)}\equiv E\big[x_k\,\mu_{k+N}^{(r)T}\big]$ are recursively calculated by
$$X_{k,k+N}^{(r)} = \big(B_k - M_{k,k+N-1}^{(r)}\big) A_{k+N}^T C_{k+N}^{(r)T}\big(I-\bar\Lambda_{k+N}^{(r)}\big), \qquad N\geq 1,\ k\geq 1,$$
with initial condition $X_{k,k}^{(r)} = A_k O_k^{(r)}$, $k\geq 1$. The matrices $M_{k,k+N}^{(r)}\equiv E\big[x_k\,o_{k+N}^{(r)T}\big]$, appearing in the expression of $X_{k,k+N}^{(r)}$, obey the following recursive formula:
$$M_{k,k+N}^{(r)} = M_{k,k+N-1}^{(r)} + X_{k,k+N}^{(r)}\Pi_{k+N}^{(r)-1}O_{k+N}^{(r)T}, \qquad N\geq 1,\ k\geq 1; \qquad M_{k,k}^{(r)} = A_k\Sigma_k^{o(r)}, \quad k\geq 1.$$
Starting from the error covariance matrix of the filter, $\hat\Sigma_{k/k}^{(r)}$, the fixed-point smoothing error covariance matrices,
$$\hat\Sigma_{k/k+N}^{(r)}\equiv E\big[\big(x_k-\hat{x}_{k/k+N}^{(r)}\big)\big(x_k-\hat{x}_{k/k+N}^{(r)}\big)^T\big] = E\big[x_k x_k^T\big] - E\big[\hat{x}_{k/k+N}^{(r)}\hat{x}_{k/k+N}^{(r)T}\big],$$
are recursively obtained by
$$\hat\Sigma_{k/k+N}^{(r)} = \hat\Sigma_{k/k+N-1}^{(r)} - X_{k,k+N}^{(r)}\Pi_{k+N}^{(r)-1}X_{k,k+N}^{(r)T}, \qquad N\geq 1,\ k\geq 1.$$
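Analogously, the fixed-point smoothing recursion can be sketched in code. The illustration below (our own naming and structure, under the same simplifying assumptions as the filtering sketch) reuses the by-products $O_{k+N}^{(r)}$, $\Pi_{k+N}^{(r)}$ and $\mu_{k+N}^{(r)}$ produced by the filtering recursion at times $k+1,\ldots,k+N$.

```python
import numpy as np

def local_fixed_point_smoother(x_filt_k, Sigma_filt_k, A_k, B_k, Sigma_o_k, future_steps):
    """Fixed-point smoothing recursion (15) at a fixed time k for one cluster.

    x_filt_k, Sigma_filt_k : local filter x_hat_{k/k}^{(r)} and its error covariance
    A_k, B_k, Sigma_o_k    : covariance factors and Sigma_k^{o(r)} at time k
    future_steps : list of dicts with filter by-products at times k+1, ..., k+N:
                   {'A': A_{k+N}, 'C': C^{(r)}, 'I_minus_Lam': I - Lambda_bar_{k+N},
                    'O': O_{k+N}, 'Pi': Pi_{k+N}, 'mu': mu_{k+N}}
    """
    x_smooth = x_filt_k.copy()
    Sigma_smooth = Sigma_filt_k.copy()
    M = A_k @ Sigma_o_k                        # M_{k,k}^{(r)} = A_k Sigma_k^{o(r)}
    smoothers = []
    for step in future_steps:
        X = (B_k - M) @ step['A'].T @ step['C'].T @ step['I_minus_Lam']   # X_{k,k+N}^{(r)}
        gain = X @ np.linalg.inv(step['Pi'])
        x_smooth = x_smooth + gain @ step['mu']          # x_hat_{k/k+N}^{(r)}, Eq. (15)
        Sigma_smooth = Sigma_smooth - gain @ X.T         # smoothing error covariance
        M = M + gain @ step['O'].T                       # M_{k,k+N}^{(r)} recursion
        smoothers.append((x_smooth.copy(), Sigma_smooth.copy()))
    return smoothers
```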

4. Second Stage: Distributed Signal Estimators

As already mentioned, once the local LS linear estimators, $\hat{x}_{k/k+N}^{(r)}$, $r=1,\ldots,L$, have been obtained, they are sent to the FC, and our goal is to fuse them to obtain distributed estimators, $\hat{x}_{k/k+N}$, $N\geq 0$, as matrix-weighted linear combinations that minimize the mean squared estimation error.
Defining the stacked vectors comprising all the local estimators, $\hat{X}_{k/k+N} = \big(\hat{x}_{k/k+N}^{(1)T},\ldots,\hat{x}_{k/k+N}^{(L)T}\big)^T$, and applying the LS criterion, the proposed distributed fusion estimators are given by:
$$\hat{x}_{k/k+N} = E\big[x_k\hat{X}_{k/k+N}^T\big]\big(E\big[\hat{X}_{k/k+N}\hat{X}_{k/k+N}^T\big]\big)^{-1}\hat{X}_{k/k+N}, \qquad N\geq 0,\ k\geq 1, \tag{16}$$
where $E\big[\hat{X}_{k/k+N}\hat{X}_{k/k+N}^T\big] = \Big(E\big[\hat{x}_{k/k+N}^{(r)}\hat{x}_{k/k+N}^{(s)T}\big]\Big)_{r,s=1,\ldots,L}$ and, from the Orthogonal Projection Lemma (OPL), $E\big[x_k\hat{X}_{k/k+N}^T\big] = \Big(E\big[\hat{x}_{k/k+N}^{(1)}\hat{x}_{k/k+N}^{(1)T}\big],\ldots,E\big[\hat{x}_{k/k+N}^{(L)}\hat{x}_{k/k+N}^{(L)T}\big]\Big)$. Hence, the derivation of the distributed estimators in (16) requires obtaining the cross-covariance matrices between the local ones, $\Sigma_{k/k+N}^{\hat{x}(rs)}\equiv E\big[\hat{x}_{k/k+N}^{(r)}\hat{x}_{k/k+N}^{(s)T}\big]$, $r,s=1,\ldots,L$, $k\geq 1$, $N\geq 0$. Since the initial condition of the recursive Formula (15) for the local smoothing estimators is the local filter, the cross-covariance matrices, $\Sigma_{k/k+N}^{\hat{x}(rs)}$, $N\geq 1$, between the local smoothers will be recursively obtained starting from the cross-covariance matrices, $\Sigma_{k/k}^{\hat{x}(rs)}$, between the local filters.

4.1. Cross-Covariance Matrices between Local Filtering Estimators, $\Sigma_{k/k}^{\hat{x}(rs)} = E\big[\hat{x}_{k/k}^{(r)}\hat{x}_{k/k}^{(s)T}\big]$

From expression (12) for the filter $\hat{x}_{k/k}^{(r)}$, denoting $\Sigma_k^{o(rs)}\equiv E\big[o_k^{(r)} o_k^{(s)T}\big]$, it is clear that the cross-covariance matrices between any two local filtering estimators $\hat{x}_{k/k}^{(r)}$ and $\hat{x}_{k/k}^{(s)}$ satisfy:
$$\Sigma_{k/k}^{\hat{x}(rs)} = A_k\Sigma_k^{o(rs)} A_k^T, \qquad k\geq 1;\ r,s=1,\ldots,L. \tag{17}$$
• Using (13) for $o_k^{(r)}$ and denoting $O_{h,k}^{(rs)}\equiv E\big[o_h^{(r)}\mu_k^{(s)T}\big]$, for $h=k-1,k$, we get that $\Sigma_k^{o(rs)}$ is recursively obtained by:
$$\Sigma_k^{o(rs)} = \Sigma_{k-1}^{o(rs)} + O_{k-1,k}^{(rs)}\Pi_k^{(s)-1}O_k^{(s)T} + O_k^{(r)}\Pi_k^{(r)-1}O_k^{(sr)T}, \qquad k\geq 1, \quad \Sigma_0^{o(rs)} = 0; \quad r,s=1,\ldots,L.$$
• Using again (13) and denoting $\Pi_k^{(rs)}\equiv E\big[\mu_k^{(r)}\mu_k^{(s)T}\big]$, the following expression for $O_k^{(rs)} = E\big[o_k^{(r)}\mu_k^{(s)T}\big]$ is immediately obtained:
$$O_k^{(rs)} = O_{k-1,k}^{(rs)} + O_k^{(r)}\Pi_k^{(r)-1}\Pi_k^{(rs)}, \qquad k\geq 1;\ r,s=1,\ldots,L.$$
• Next, we derive an expression for $O_{k-1,k}^{(rs)} = E\big[o_{k-1}^{(r)}\mu_k^{(s)T}\big]$. Using (14) for $\mu_k^{(s)}$, with (6) for $y_k^{(s)}$, and taking into account that, from the OPL, $E\big[o_{k-1}^{(r)} x_k^T\big] = E\big[o_{k-1}^{(r)}\hat{x}_{k/k-1}^{(r)T}\big]$, we obtain:
$$O_{k-1,k}^{(rs)} = \big(\Sigma_{k-1}^{o(r)} - \Sigma_{k-1}^{o(rs)}\big) A_k^T C_k^{(s)T}\big(I-\bar\Lambda_k^{(s)}\big), \qquad k\geq 1;\ r,s=1,\ldots,L.$$
• Finally, an expression for the innovation cross-covariance matrices, $\Pi_k^{(rs)}$, is derived. First we write
$$\Pi_k^{(rs)} = E\big[y_k^{(r)} y_k^{(s)T}\big] - E\big[y_k^{(r)}\hat{y}_{k/k-1}^{(s)T}\big] - E\big[\hat{y}_{k/k-1}^{(r)} y_k^{(s)T}\big] + E\big[\hat{y}_{k/k-1}^{(r)}\hat{y}_{k/k-1}^{(s)T}\big].$$
Next, for $t=r,s$, we use (6) for $y_k^{(t)}$ and (14) for $\hat{y}_{k/k-1}^{(t)}$; then, using again that, from the OPL, $E\big[x_k\,o_{k-1}^{(t)T}\big] = E\big[\hat{x}_{k/k-1}^{(t)} o_{k-1}^{(t)T}\big]$, the following expression is clear:
$$\Pi_k^{(rs)} = \Sigma_k^{y(rs)} + \big(I-\bar\Lambda_k^{(r)}\big) C_k^{(r)} A_k\big(\Sigma_{k-1}^{o(rs)} - \Sigma_{k-1}^{o(r)} - \Sigma_{k-1}^{o(s)}\big) A_k^T C_k^{(s)T}\big(I-\bar\Lambda_k^{(s)}\big), \qquad k\geq 1;\ r,s=1,\ldots,L,$$
where the matrices $\Sigma_k^{y(rs)}$ are given in (7).
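For completeness, one time step of the cross-covariance recursions listed above (case $r\neq s$) might be coded as follows; the inputs are quantities already produced by the two local filters, and the sketch keeps the same simplifying assumptions as before (square $A_k$, $B_k$ and names of our own choosing).

```python
import numpy as np

def cross_cov_step(Sigma_o_rs_prev, Sigma_o_r_prev, Sigma_o_s_prev,
                   A, B, Cr, Cs, ILr, ILs, Or, Os, Pir, Pis):
    """One step (time k) of the Section 4.1 recursions for clusters r != s.

    *_prev are Sigma_{k-1}^{o(.)}; Or, Os, Pir, Pis are O_k^{(r)}, O_k^{(s)},
    Pi_k^{(r)}, Pi_k^{(s)}; ILr = I - Lambda_bar_k^{(r)}, ILs = I - Lambda_bar_k^{(s)}.
    Returns Sigma_k^{o(rs)} and Sigma_{k/k}^{x_hat(rs)} = A_k Sigma_k^{o(rs)} A_k^T.
    """
    Sigma_y_rs = ILr @ Cr @ A @ B.T @ Cs.T @ ILs                       # Eq. (7), r != s
    O_rs_pred = (Sigma_o_r_prev - Sigma_o_rs_prev) @ A.T @ Cs.T @ ILs  # O_{k-1,k}^{(rs)}
    O_sr_pred = (Sigma_o_s_prev - Sigma_o_rs_prev.T) @ A.T @ Cr.T @ ILr
    Pi_rs = Sigma_y_rs + ILr @ Cr @ A @ (Sigma_o_rs_prev - Sigma_o_r_prev
                                         - Sigma_o_s_prev) @ A.T @ Cs.T @ ILs
    O_sr = O_sr_pred + Os @ np.linalg.inv(Pis) @ Pi_rs.T               # O_k^{(sr)}
    Sigma_o_rs = (Sigma_o_rs_prev
                  + O_rs_pred @ np.linalg.inv(Pis) @ Os.T
                  + Or @ np.linalg.inv(Pir) @ O_sr.T)
    return Sigma_o_rs, A @ Sigma_o_rs @ A.T                            # Eq. (17)
```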

4.2. Cross-Covariance Matrices between Local Smoothing Estimators, $\Sigma_{k/k+N}^{\hat{x}(rs)} = E\big[\hat{x}_{k/k+N}^{(r)}\hat{x}_{k/k+N}^{(s)T}\big]$

From expression (15), denoting $\Phi_{k,k+N}^{(rs)}\equiv E\big[\hat{x}_{k/k+N-1}^{(r)}\mu_{k+N}^{(s)T}\big]$, it is clear that the cross-covariance matrices between any two local smoothing estimators $\hat{x}_{k/k+N}^{(r)}$ and $\hat{x}_{k/k+N}^{(s)}$ satisfy:
$$\Sigma_{k/k+N}^{\hat{x}(rs)} = \Sigma_{k/k+N-1}^{\hat{x}(rs)} + \Phi_{k,k+N}^{(rs)}\Pi_{k+N}^{(s)-1}X_{k,k+N}^{(s)T} + X_{k,k+N}^{(r)}\Pi_{k+N}^{(r)-1}\Big(\Phi_{k,k+N}^{(sr)} + X_{k,k+N}^{(s)}\Pi_{k+N}^{(s)-1}\Pi_{k+N}^{(rs)T}\Big)^T, \qquad N\geq 1,\ k\geq 1;\ r,s=1,\ldots,L. \tag{18}$$
• To obtain the expectations $\Phi_{k,k+N}^{(rs)}\equiv E\big[\hat{x}_{k/k+N-1}^{(r)}\mu_{k+N}^{(s)T}\big]$, we use (14) for $\mu_{k+N}^{(s)}$ and denote $\Psi_{k,k+N}^{(rs)}\equiv E\big[\hat{x}_{k/k+N}^{(r)} o_{k+N}^{(s)T}\big]$; then, we have:
$$\Phi_{k,k+N}^{(rs)} = E\big[\hat{x}_{k/k+N-1}^{(r)} y_{k+N}^{(s)T}\big] - \Psi_{k,k+N-1}^{(rs)} A_{k+N}^T C_{k+N}^{(s)T}\big(I-\bar\Lambda_{k+N}^{(s)}\big).$$
Using now (6) for $y_{k+N}^{(s)}$ and the independence of $\hat{x}_{k/k+N-1}^{(r)}$ with $\Lambda_{k+N}^{(s)}$, $v_{k+N}^{(s)}$ and $w_{k+N}^{(s)}$, we have
$$E\big[\hat{x}_{k/k+N-1}^{(r)} y_{k+N}^{(s)T}\big] = E\big[\hat{x}_{k/k+N-1}^{(r)} x_{k+N}^T\big] C_{k+N}^{(s)T}\big(I-\bar\Lambda_{k+N}^{(s)}\big), \qquad N\geq 1,\ k\geq 1;\ r,s=1,\ldots,L.$$
Next, we use that, from the OPL, the estimator is orthogonal to the estimation error, to write $E\big[\hat{x}_{k/k+N-1}^{(r)} x_{k+N}^T\big] = E\big[\hat{x}_{k/k+N-1}^{(r)}\hat{x}_{k+N/k+N-1}^{(r)T}\big]$. So, since $\hat{x}_{k+N/k+N-1}^{(r)} = A_{k+N} o_{k+N-1}^{(r)}$, the following expression for $\Phi_{k,k+N}^{(rs)}$ is immediately derived:
$$\Phi_{k,k+N}^{(rs)} = \big(\Psi_{k,k+N-1}^{(r)} - \Psi_{k,k+N-1}^{(rs)}\big) A_{k+N}^T C_{k+N}^{(s)T}\big(I-\bar\Lambda_{k+N}^{(s)}\big), \qquad N\geq 1,\ k\geq 1;\ r,s=1,\ldots,L.$$
• Finally, the expectations $\Psi_{k,k+N}^{(rs)} = E\big[\hat{x}_{k/k+N}^{(r)} o_{k+N}^{(s)T}\big]$ are calculated. Using (15) for $\hat{x}_{k/k+N}^{(r)}$ and (13) for $o_{k+N}^{(s)}$, the following expression for $\Psi_{k,k+N}^{(rs)}$ is clear:
$$\Psi_{k,k+N}^{(rs)} = \Psi_{k,k+N-1}^{(rs)} + \Phi_{k,k+N}^{(rs)}\Pi_{k+N}^{(s)-1}O_{k+N}^{(s)T} + X_{k,k+N}^{(r)}\Pi_{k+N}^{(r)-1}\Big(O_{k+N-1,k+N}^{(sr)} + O_{k+N}^{(s)}\Pi_{k+N}^{(s)-1}\Pi_{k+N}^{(rs)T}\Big)^T, \qquad N\geq 1,\ k\geq 1;\ r,s=1,\ldots,L,$$
with initial condition $\Psi_{k,k}^{(rs)} = A_k\Sigma_k^{o(rs)}$, $k\geq 1$; $r,s=1,\ldots,L$.

4.3. Distributed Filtering and Fixed-Point Smoothing Estimators

The distributed estimators, $\hat{x}_{k/k+N}$, are calculated from (16), while a formula for the error covariance matrices, $\hat\Sigma_{k/k+N}\equiv E\big[\big(x_k-\hat{x}_{k/k+N}\big)\big(x_k-\hat{x}_{k/k+N}\big)^T\big]$, is easily derived from assumption (A1) and (16). More specifically, we can state the following results:
Let $\hat{X}_{k/k+N} = \big(\hat{x}_{k/k+N}^{(1)T},\ldots,\hat{x}_{k/k+N}^{(L)T}\big)^T$ be the vectors constituted by the local estimators calculated from the recursive algorithms in Section 3.2; then, the distributed filtering and smoothing estimators are given by
$$\hat{x}_{k/k+N} = \Xi_{k/k+N}\,\Sigma_{k/k+N}^{-1}\,\hat{X}_{k/k+N}, \qquad N\geq 0,\ k\geq 1,$$
with $\Sigma_{k/k+N} = \big(\Sigma_{k/k+N}^{\hat{x}(rs)}\big)_{r,s=1,\ldots,L}$ and $\Xi_{k/k+N} = \big(\Sigma_{k/k+N}^{\hat{x}(1)},\ldots,\Sigma_{k/k+N}^{\hat{x}(L)}\big)$, where the matrices $\Sigma_{k/k+N}^{\hat{x}(rs)} = E\big[\hat{x}_{k/k+N}^{(r)}\hat{x}_{k/k+N}^{(s)T}\big]$, $r,s=1,\ldots,L$, are obtained from (18) for $N\geq 1$, with initial condition given in (17).
The error covariance matrices of the distributed estimators are computed by
$$\hat\Sigma_{k/k+N} = A_k B_k^T - \Xi_{k/k+N}\,\Sigma_{k/k+N}^{-1}\,\Xi_{k/k+N}^T, \qquad N\geq 0,\ k\geq 1.$$
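The fusion step itself reduces to assembling the block matrix $\Sigma_{k/k+N}$ and the block row $\Xi_{k/k+N}$ and applying the two formulas above; a minimal NumPy sketch (function and argument names are our own) is:

```python
import numpy as np

def distributed_fusion(local_estimates, cross_covs, A_k, B_k):
    """Distributed fusion estimator and its error covariance (final formulas of Section 4.3).

    local_estimates : list of the L local estimators x_hat_{k/k+N}^{(r)} (column vectors)
    cross_covs      : nested list, cross_covs[r][s] = Sigma_{k/k+N}^{x_hat(rs)}
    A_k, B_k        : covariance factors of the signal from assumption (A1)
    """
    L = len(local_estimates)
    X_stack = np.vstack(local_estimates)                   # stacked local estimators
    Sigma = np.block(cross_covs)                           # (Sigma^{x_hat(rs)})_{r,s=1..L}
    Xi = np.hstack([cross_covs[r][r] for r in range(L)])   # (Sigma^{x_hat(1)},...,Sigma^{x_hat(L)})
    W = Xi @ np.linalg.inv(Sigma)                          # optimal matrix weights
    x_fused = W @ X_stack                                  # distributed estimator
    Sigma_fused = A_k @ B_k.T - W @ Xi.T                   # fusion error covariance
    return x_fused, Sigma_fused
```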

5. Numerical Simulation Study

A numerical simulation example is presented to illustrate the application of the algorithms designed in the current paper to estimate a two-dimensional signal. Specifically, such algorithms have been implemented by using MATLAB software and both local and distributed filtering and fixed-point smoothing error variances of the signal components have been calculated, for 100 iterations, in order to examine the estimation accuracy. The effect of the stochastic deception attacks on the estimation performance has also been analyzed.
Consider a two-dimensional signal, $x_k$, whose evolution is described by the following model:
$$x_{k+1} = \big(F_1 + \epsilon_k F_2\big) x_k + G\,\alpha_k, \qquad k\geq 0,$$
where
$$F_1 = \begin{pmatrix} 0.95 & 0.01\\ 0 & 0.95 \end{pmatrix}, \qquad F_2 = \begin{pmatrix} 0.01 & 0\\ 0 & 0.01 \end{pmatrix}, \qquad G = \begin{pmatrix} 0.8\\ 0.6 \end{pmatrix}.$$
The multiplicative noise, $\{\epsilon_k\}$, and the additive noise, $\{\alpha_k\}$, are standard white Gaussian scalar noises. The initial signal $x_0$ is a Gaussian two-dimensional random vector with zero mean and covariance matrix $E[x_0 x_0^T] = I$. The noise sequences and the initial signal vector are assumed to be mutually independent; then, it is easy to see that the signal covariance function is given by $E[x_k x_h^T] = F_1^{k-h} E[x_h x_h^T]$, $h\leq k$, so assumption (A1) is satisfied taking $A_k = F_1^k$ and $B_h^T = F_1^{-h}\Sigma_h^x$, where $\Sigma_h^x\equiv E[x_h x_h^T]$, $h\geq 1$, is recursively obtained by:
$$\Sigma_h^x = F_1\Sigma_{h-1}^x F_1^T + F_2\Sigma_{h-1}^x F_2^T + G G^T, \qquad h\geq 1; \qquad \Sigma_0^x = I.$$
This example shows that assumption (A1) on the signal autocovariance function is fulfilled by uncertain systems with state-dependent multiplicative noise.
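The covariance factors $A_k$ and $B_h$ of this example can be generated directly from the recursion for $\Sigma_h^x$; the following NumPy sketch (a convenience illustration of ours, not the authors' MATLAB code) builds them and checks the separable factorization numerically.

```python
import numpy as np

F1 = np.array([[0.95, 0.01],
               [0.00, 0.95]])
F2 = np.array([[0.01, 0.00],
               [0.00, 0.01]])
G = np.array([[0.8],
              [0.6]])

K = 100
Sigma_x = [np.eye(2)]                                  # Sigma_0^x = I
for h in range(1, K + 1):
    Sigma_x.append(F1 @ Sigma_x[-1] @ F1.T + F2 @ Sigma_x[-1] @ F2.T + G @ G.T)

A = [np.linalg.matrix_power(F1, k) for k in range(K + 1)]                     # A_k = F1^k
B = [(np.linalg.matrix_power(np.linalg.inv(F1), h) @ Sigma_x[h]).T            # B_h^T = F1^{-h} Sigma_h^x
     for h in range(K + 1)]

# Check of the separable covariance: E[x_k x_h^T] = A_k B_h^T = F1^{k-h} Sigma_h^x, h <= k
k, h = 10, 4
print(np.allclose(A[k] @ B[h].T, np.linalg.matrix_power(F1, k - h) @ Sigma_x[h]))   # True
```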
Suppose that a sensor network, comprising 12 sensors, is deployed to measure the stochastic signal $x_k$. These sensors are grouped into three clusters and the number of sensors in each cluster is $m_1 = 3$, $m_2 = 4$ and $m_3 = 5$, respectively. Scalar measurements are provided by the 12 sensors, according to model (1), with the following time-invariant observation matrices:
$$C_k^{(r(1))} = [\,0.8 \;\; 0.9\,], \quad C_k^{(r(2))} = [\,0.6 \;\; 0.7\,], \quad C_k^{(r(3))} = [\,0.7 \;\; 0.8\,], \qquad r=1,2,3;$$
$$C_k^{(2(4))} = C_k^{(3(4))} = [\,0.9 \;\; 0.5\,], \qquad C_k^{(3(5))} = [\,0.5 \;\; 0.5\,].$$
The additive measurement noises are defined as $v_k^{(r(i))} = v^{(r)}\eta_k^{(r)}$, for all $i=1,\ldots,m_r$, where $v^{(1)} = 0.4$, $v^{(2)} = 0.7$, $v^{(3)} = 1$, and $\{\eta_k^{(r)}\}_{k\geq 1}$, $r=1,2,3$, are independent zero-mean Gaussian white processes with variance 10; so the measurement noises within each cluster $r=1,2,3$ are correlated, with $R_k^{(r(ij))} = 10\,(v^{(r)})^2$, $k\geq 1$; $i,j=1,\ldots,m_r$.
Assume that the transmissions to the local processors are subject to linear deception attacks and the data injected by the attackers are described by (2), where the attack noises are defined as $w_k^{(r(i))} = w^{(r)}\zeta_k^{(r)}$, for all $i=1,\ldots,m_r$, with $w^{(1)} = 0.1$, $w^{(2)} = 0.25$, $w^{(3)} = 1$, and $\{\zeta_k^{(r)}\}_{k\geq 1}$, $r=1,2,3$, independent standard Gaussian white processes. Clearly, within each cluster $r=1,2,3$, the attack noises are correlated, with $S_k^{(r(ij))} = (w^{(r)})^2$, $k\geq 1$; $i,j=1,\ldots,m_r$.
According to the theoretical study, for $r=1,2,3$, the measurements received at the $r$-th local processor are modelled by (3); that is, the attacks are considered to occur randomly and their success or failure is described by independent sequences of independent Bernoulli random variables with known probabilities $P\big(\lambda_k^{(r(i))}=1\big) = \bar\lambda_k^{(r(i))}$, $k\geq 1$, $r=1,2,3$, $i=1,\ldots,m_r$.
First, considering $\bar\lambda_k^{(r(i))} = 0.1\,i$, for $r=1,2,3$ and $i=1,\ldots,m_r$, the performance of the local filtering estimators has been compared with that of the local fixed-point smoothing estimators; for $r=1,2,3$, these local estimators are computed at the $r$-th processor using only the measurements received from the sensors of the corresponding $r$-th cluster. For the first signal component, Figure 1 displays the error variances of the local filtering estimators, $\big(\hat\Sigma_{k/k}^{(r)}\big)_{11}$, and of the local fixed-point smoothing estimators, $\big(\hat\Sigma_{k/k+N}^{(r)}\big)_{11}$, for $N=1,3,5,7$. For the three processors, Figure 1 shows that, as expected, the error variances of the local smoothers are smaller than those of the local filters. It is also observed that the accuracy of the smoothers at each fixed point, $k$, improves as the number of available observations, $k+N$, increases, although this improvement is practically imperceptible for $N>3$ in processor 1, $N>5$ in processor 2 and $N>7$ in processor 3. Note that the best estimation accuracy is obtained in processor 1 and the worst in processor 3; therefore, we can conclude that, when the filtering performance is better, the improvement provided by the smoothers is less significant and, actually, it becomes practically negligible for lower values of $N$. Similar results and conclusions are obtained for the second signal component.
Next, considering again $\bar\lambda_k^{(r(i))} = 0.1\,i$, for $r=1,2,3$ and $i=1,\ldots,m_r$, the performance of the distributed filtering and fixed-point smoothing estimators has been compared with that of the local filtering estimators of the three processors. Figure 2 displays, for both the first and second signal components ($a=1,2$), the error variances of the local filtering estimators, $\big(\hat\Sigma_{k/k}^{(r)}\big)_{aa}$, $r=1,2,3$, and of the distributed filtering and smoothing estimators, $\big(\hat\Sigma_{k/k+N}\big)_{aa}$, $N=0,1,3$. On the one hand, this figure shows that the error variances of the distributed fusion filtering estimators are smaller than those of every local filter; consequently, the distributed fusion filtering estimators outperform all the local ones, in agreement with what is theoretically expected, since all the information of the three clusters is available to the distributed filters, while the local ones are based on the information of a single cluster. On the other hand, in agreement with the comments made about Figure 1, Figure 2 also shows that the error variances of the distributed fusion smoothers are smaller than those of the filters and, at each fixed point, $k$, the performance of the smoothers improves as the number of available observations, $k+N$, increases (note that, in this case, the improvement is practically imperceptible for $N>1$).
Finally, assuming the same attack success probability during the transmission of the measured outputs of all the sensors in the three clusters, $\bar\lambda_k^{(r(i))} = \bar\lambda$, $r=1,2,3$, $i=1,\ldots,m_r$, we analyze how this probability affects the error variances of the distributed filtering estimators. Taking into account that these variances stabilize after a sufficiently large number of iterations, only the results of the last iteration ($k=100$) are shown in Table 1. Specifically, the distributed filtering error variances of the first and second signal components, $\big(\hat\Sigma_{100/100}\big)_{aa}$, $a=1,2$, together with the corresponding percent variation rates, are shown in Table 1 when $\bar\lambda$ varies from 0.1 to 0.9. From this table we conclude that, for both signal components, the distributed filtering error variances become higher as $\bar\lambda$ increases, meaning that, as expected, the lower the probability of a successful attack, the better the estimators obtained. In addition, for both signal components, it is observed that the deterioration of the estimators is more significant (the percent variation rate of the error variance is larger) for high attack probabilities ($\bar\lambda\geq 0.7$).
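For reference, the percent variation rates in Table 1 measure the relative increase of the error variance when $\bar\lambda$ grows by 0.1: for example, for the first component, moving from $\bar\lambda=0.1$ to $\bar\lambda=0.2$ gives $100\times(0.5597-0.4743)/0.4743\approx 18.01\%$, which is the first rate reported in the table.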

6. Concluding Remarks

This paper contributes to the literature on distributed fusion filtering and fixed-point smoothing problems in clustered sensor networks subject to random deception attacks. Specifically, the sensor nodes of the network are grouped into clusters and each cluster is connected to a local processor, which gathers the measured outputs of all the sensors in the cluster. During this transmission, the measured outputs sent by the sensors to the local processor may be deceptively modified by malicious attackers, who launch random deception attacks with known probabilities of success that may be different at each sensor. Aggregating the information received at every local processor, and using an innovation approach, recursive and easily implementable local filtering and fixed-point smoothing algorithms have been designed without requiring full knowledge of the signal evolution model, but only the first and second order moments of the processes involved in the measurement equations. Once the local estimators are available, they are transmitted to the global fusion center, where they are fused to obtain the optimal matrix-weighted fusion filter and fixed-point smoother under the minimum mean squared error criterion. The simulation results have illustrated the effectiveness of the proposed algorithms, showing that the distributed fusion estimators outperform the local ones. In addition, the influence of the attack success probability on the estimation performance has been analyzed, concluding that the estimation error variance is larger (and, consequently, worse estimates are obtained) for higher probabilities of successful attack. Some new research directions stemming from this work would be:
-
Considering other attack strategies (not necessarily linear ones) to cover more general scenarios in practice. Another interesting generalization would be to study the case where the Bernoulli random variables modelling the success or failure of the attacks are not necessarily independent, but are correlated at consecutive sampling times or obey a Markovian dependence structure, for example.
-
Addressing the estimation problem in clustering sensor networks with a topology represented by a directed or undirected graph, thus generalizing the current study by allowing the exchange of information between sensors and between clusters.
-
Investigating other interesting issues related to clustered sensor networks (cluster consensus, sensor mobility, cluster overlap, cluster position awareness, energy-efficient and uniform clustering, or cluster stability, among others).

Author Contributions

All the authors contributed equally to this work. R.C.-Á., A.H.-C. and J.L.-P. provided original ideas for the proposed model and collaborated in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation results; and the paper was also written and reviewed cooperatively.

Funding

This research is supported by Ministerio de Economía, Industria y Competitividad, Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2017-84199-P).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pang, C.; Sun, S. Fusion Predictors for Multisensor Stochastic Uncertain Systems With Missing Measurements and Unknown Measurement Disturbances. IEEE Sens. J. 2015, 15, 4346–4354.
  2. Caballero-Águila, R.; García-Garrido, I.; Linares-Pérez, J. Distributed fusion filtering for multi-sensor systems with correlated random transition and measurement matrices. Int. J. Comput. Math. 2018.
  3. Ruan, Y.; Luo, Y.; Zhu, Y. Globally Optimal Distributed Kalman Filtering for Multisensor Systems with Unknown Inputs. Sensors 2018, 18, 2976.
  4. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.; Wang, Z. A new approach to distributed fusion filtering for networked systems with random parameter matrices and correlated noises. Inf. Fusion 2019, 45, 324–332.
  5. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays. Sensors 2016, 16, 847.
  6. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Centralized Fusion Approach to the Estimation Problem with Multi-Packet Processing under Uncertainty in Outputs and Transmissions. Sensors 2018, 18, 2697.
  7. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Centralized filtering and smoothing algorithms from outputs with random parameter matrices transmitted through uncertain communication channels. Digit. Signal Process. 2019, 85, 77–85.
  8. Hu, J.; Wang, Z.; Chen, D.; Alsaadi, F.E. Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects. Inf. Fusion 2016, 31, 65–75.
  9. Bakr, M.A.; Lee, S. Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency. Sensors 2017, 17, 2472.
  10. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion 2017, 38, 122–134.
  11. Din, S.; Ahmad, A.; Paul, A.; Rathore, M.M.U.; Jeon, G. A Cluster-Based Data Fusion Technique to Analyze Big Data in Wireless Multi-Sensor System. IEEE Access 2017, 5, 5069–5083.
  12. Wang, F.; Wang, L.; Han, Y.; Liu, B.; Wang, J.; Su, X. A Study on the Clustering Technology of Underwater Isomorphic Sensor Networks Based on Energy Balance. Sensors 2014, 14, 12523–12532.
  13. Shang, Y. A combinatorial necessary and sufficient condition for cluster consensus. Neurocomputing 2016, 216, 611–616.
  14. Shang, Y. Finite-time Cluster Average Consensus for Networks via Distributed Iterations. Int. J. Control Autom. Syst. 2017, 15, 933–938.
  15. Shang, Y. Resilient Multiscale Coordination Control against Adversarial Nodes. Energies 2018, 11, 1844.
  16. Shang, Y. Resilient consensus of switched multi-agent systems. Syst. Control Lett. 2018, 122, 12–18.
  17. Afsar, M.M.; Tayarani-N, M.-H. Clustering in sensor networks: A literature survey. J. Netw. Comput. Appl. 2014, 46, 198–226.
  18. Akila, I.S.; Manisekaran, S.V.; Venkatesan, S.V. Modern Clustering Techniques in Wireless Sensor Networks. In Wireless Sensor Networks—Insights and Innovations; Sallis, P.J., Ed.; InTech Open: London, UK, 2017; pp. 141–156.
  19. Sambo, D.W.; Yenke, B.O.; Förster, A.; Dayang, P. Optimized Clustering Algorithms for Large Wireless Sensor Networks: A Review. Sensors 2019, 19, 322.
  20. Hu, L.; Wang, Z.; Han, Q.-L.; Liu, X. State estimation under false data injection attacks: Security analysis and system protection. Automatica 2018, 87, 176–183.
  21. Ma, L.; Wang, Z.; Han, Q.-L.; Lam, H.K. Variance constrained distributed filtering for time-varying systems with multiplicative noises and deception attacks over sensor networks. IEEE Sens. J. 2017, 17, 2279–2288.
  22. Ding, D.; Wang, Z.; Ho, D.W.C.; Wei, G. Distributed recursive filtering for stochastic systems under uniform quantizations and deception attacks through sensor networks. Automatica 2017, 78, 231–240.
  23. Wang, D.; Wang, Z.; Shen, B.; Alsaadi, F.E. Security guaranteed filtering for discrete-time stochastic delayed systems with randomly occurring sensor saturations and deception attacks. Int. J. Robust Nonlinear Control 2017, 27, 1194–1208.
  24. Wang, Z.; Wang, D.; Shen, B.; Alsaadi, F.E. Centralized security-guaranteed filtering in multirate-sensor fusion under deception attacks. J. Frankl. Inst. 2018, 355, 406–420.
  25. Song, H.; Hong, Z.; Song, H.; Zhang, W.-A. Fusion estimation in clustering sensor networks under stochastic deception attacks. Int. J. Syst. Sci. 2018, 49, 2257–2266.
  26. Li, Y.; Wu, Q.; Peng, L. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks. Sensors 2018, 18, 321.
Figure 1. First signal component error variance comparison of the local filters and fixed-point smoothers.
Figure 2. Local filtering and distributed filtering and fixed-point smoothing error variances of the first and second signal components.
Table 1. Distributed filtering error variances at $k=100$ and percent variation rates of the first and second signal components under different attack success probabilities $\bar\lambda$.

Attack success probability $\bar\lambda$                0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9
Error variance $\big(\hat\Sigma_{100/100}\big)_{11}$    0.4743   0.5597   0.6428   0.7343   0.8427   0.9810   1.1758   1.4950   2.1877
Percent variation rate (%)                              -        18.01    14.85    14.23    14.76    16.41    19.86    27.10    46.38
Error variance $\big(\hat\Sigma_{100/100}\big)_{22}$    0.2650   0.3122   0.3579   0.4082   0.4675   0.5427   0.6478   0.8180   1.1787
Percent variation rate (%)                              -        17.81    14.64    14.05    14.53    16.09    19.37    26.23    44.15
