Article

Acoustic Source Tracking Based on Probabilistic Data Association and Distributed Cubature Kalman Filtering in Acoustic Sensor Networks

School of Microelectronics and Control Engineering, Changzhou University, Changzhou 213164, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(19), 7160; https://doi.org/10.3390/s22197160
Submission received: 15 August 2022 / Revised: 16 September 2022 / Accepted: 20 September 2022 / Published: 21 September 2022

Abstract

A probabilistic data association-based distributed cubature Kalman filter (PDA-DCKF) method is proposed in this paper and verified for tracking a single moving sound source in a distributed acoustic sensor network. In this method, the PDA algorithm is first used to sift the observations from neighboring nodes. The sifted observations are then fused to update the state vector in the CKF. Since the nodes in a sensor network have different reliabilities, the final tracking result integrates the estimates from the local nodes, weighted by parameters that depend on the mean square error of each estimate and the energy of the received signal. The experimental results illustrate that the proposed PDA-DCKF method is superior to other DCKF methods in tracking sound sources, even under severe noise and reverberation.

1. Introduction

The problem of acoustic source localization and tracking has long been a research hotspot in the field of speech processing. It has found wide use in applications such as audio and video conferencing systems, human-computer interaction, and speech enhancement [1,2,3,4]. Traditional acoustic localization and tracking methods usually require the microphone array to have a regular geometric structure and generally rely on centralized data processing [5]. As technology has advanced, the shortcomings of such traditional microphone arrays have become apparent. The distributed microphone network has therefore attracted growing research interest: it places no strict restrictions on the arrangement of microphones and consists of multiple nodes arbitrarily distributed in space, where each node usually contains a set of microphones [6,7,8,9,10].
So far, there have been many studies on acoustic source localization using distributed microphone networks [11]. However, they locate the acoustic source based only on the current observations of multiple microphones, which works when the background noise and reverberation are mild. In noisy and reverberant environments, spurious observations may even mask the observations from the real acoustic source, degrading localization performance. To avoid this problem, a Bayesian filter [12] combines the current observation with the sequence of past observations when estimating the current position, which is more effective at handling the adverse effects of noise and reverberation. Theoretically, Bayesian filters describe the tracking problem with a state-space model comprising a dynamic model, which describes the motion of the target, and an observation model, which relates the observations to the state of the acoustic source. When the state-space model is linear and Gaussian, the Kalman filter realizes the Bayesian filter exactly. However, in acoustic source tracking scenarios, the observation function is usually nonlinear, so the properties that hold for linear systems no longer apply and the performance of the Kalman filter may degrade severely.
The extended Kalman filter (EKF), first proposed in [13], is the simplest and most widely used nonlinear filtering method. The EKF retains only the first-order term when linearizing the system function, so its approximation error is relatively large. To address this, the iterated extended Kalman filter (IEKF) was proposed in [14], which improves the accuracy of the EKF through several iterations. However, when the system model is strongly nonlinear, neither the EKF nor the IEKF performs well, and both suffer from poor stability and a tendency to diverge. The particle filter (PF) is a Monte Carlo implementation of the Bayesian filter that approximates the state by a set of weighted particles drawn from a proposal function [15]. The PF can handle nonlinear and non-Gaussian situations well, and many PF-based sound source tracking methods have been developed; Vermaak and Blake [16] first introduced the PF to sound source tracking. The particle filter breaks through the limitations of linearity and Gaussianity, but its computational load is heavy, so in practice it is usually considered only when approximate Gaussian filters fail. The sigma-point Kalman filter [17] is similar in spirit to the particle filter, but instead of approximating the nonlinear system, it applies the true system and observation models directly to a small set of deterministically chosen sample points, the sigma points, which achieves second-order accuracy. Depending on how the sigma points are selected, these methods include the UKF, CKF, QKF, and so on. In general, the Gaussian filtering methods introduced above are centralized: the data of all nodes are collected and transmitted to a central processing unit, which performs the acoustic source tracking task. This approach is generally unreliable, since any failure of the central processor leaves the entire network unable to track.
To solve the reliability problem of centralized methods, many distributed methods have been developed for sound source tracking. A distributed method needs no central processor; all nodes estimate the global state solely by exchanging data with their neighbors. In reference [18], a distributed extended Kalman particle filter (DEKPF) for speaker tracking was developed, which incorporated the current TDOA observations into an EKF to build the proposal for a particle filter. In reference [19], a distributed particle filter (DPF) was proposed that applied the improved iterative covariance intersection (MICI) algorithm and the interacting multiple model (IMM) to speaker tracking in distributed microphone networks. In reference [20], a distributed iterated EKF was proposed to estimate the time-varying speaker position in a microphone array. In reference [21], a distributed unscented Kalman filter (DUKF) was proposed to overcome the nonlinearity of the measurement model in speaker tracking: the time difference of arrival (TDOA) was used as the observation, and a distributed IMM-UKF was then used to track the location of the sound source.
In a real environment, noise and reverberation usually produce unreliable observations with false peaks, which may lead to serious performance degradation. In the methods above, the current observation is extracted only from the largest peak of some observation function. In bad cases, the peak associated with the real acoustic source may be masked by a spurious one. It is therefore more reasonable to extract multiple observations from the observation function, rather than a single one, and then incorporate them into the above tracking schemes. Probabilistic data association (PDA) [22] is an effective method for combining multiple observations into the Kalman filter state update and has been proven suitable for target tracking in cluttered environments. In reference [23], an improved distributed unscented Kalman particle filter (DUKPF) was proposed to track a single moving acoustic source with a distributed microphone network in noisy and reverberant environments. That method extracts multiple observations from the observation function of each node and combines them into the state update of the UKF through the PDA technique, producing a PDA-UKF, which is then embedded in a particle filter. In reference [24], a distributed multi-speaker tracking method for microphone array networks was proposed based on the unscented particle filter and data association; the available observations were associated with each speaker at each node using data association to track the speakers. Reference [25] proposed a cubature information filter based on joint probabilistic data association (JPDA) for multi-source tracking with a distributed acoustic vector sensor (AVS) array, in which JPDA handled the association between observations and targets. Issues specific to multi-source tracking are beyond the scope of this article.
However, most particle filter-based methods incur excessive computational costs, which limits their use in real-time applications. Besides, in existing speaker tracking methods, the PDA algorithm is applied to sift the observations without considering the information from neighboring nodes.
In this paper, probabilistic data association and cubature Kalman filtering are combined and applied to the problem of single acoustic source tracking in noisy and reverberant environments with distributed acoustic sensor networks. The contributions of this paper are as follows:
  • Combining the cubature Kalman filter (CKF) with PDA, the probabilistic data association-cubature Kalman filter (PDA-CKF) is developed. In the PDA-CKF, multiple possible observations are merged into the state update of the CKF by the PDA technique.
  • The PDA-CKF is applied to the distributed acoustic sensor network, and the probabilistic data association-distributed cubature Kalman filter (PDA-DCKF) is developed by fusing the observation information of each node's neighbor nodes in the network.
  • Considering the reliability of the local state, the mean square error (MSE) of each node's position estimate and its received signal energy are combined to adjust the weighting coefficients of the distributed acoustic sensor data fusion. In this way, the local states of high-quality nodes are emphasized, and every node achieves global consistency and good speaker tracking performance.
The structure of this paper is as follows. Section 2 presents the problem formulation and the background knowledge of acoustic source tracking. Section 3 first introduces the single-node PDA-CKF and then details the distributed PDA-DCKF. Section 4 presents the experimental results and discussion. Section 5 draws the conclusions.

2. Background Knowledge

2.1. Problem Formulation

Consider a distributed sensor network with N nodes deployed as shown in Figure 1. The positions of the nodes can be obtained in advance by calibration [26]. Each node of the network consists of two microphones separated by a distance L. All nodes are modeled as vertices of the graph G = (υ, ε), where υ = {1, 2, …, N} is the vertex set, ε ⊆ {(p, q) | p, q ∈ υ} is the edge set, and (p, q) ∈ ε represents the network's communication constraints, i.e., node p can send information to node q, and vice versa. Let N_{p,k} = {q ∈ υ | (p, q) ∈ ε} ∪ {p} denote the set of neighbors of node p at time k, where each node is, by definition, a neighbor of itself.
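As a concrete illustration of the neighbor sets N_{p,k} defined above, the following sketch builds them from an undirected edge list. The function name and the 4-node ring topology are illustrative assumptions, not from the paper:

```python
def neighbor_sets(num_nodes, edges):
    """Return {p: set of neighbors of p}; each node is a neighbor of itself."""
    nbrs = {p: {p} for p in range(num_nodes)}
    for p, q in edges:
        nbrs[p].add(q)
        nbrs[q].add(p)  # (p, q) in the edge set implies bidirectional communication
    return nbrs

# Example: 4 nodes connected in a ring
nbrs = neighbor_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```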

2.2. Signal Model and TDOA Estimation

In acoustic sensor networks, the discrete-time signal acquired by the l-th microphone (l = 1, 2) of node p can be modeled as [23]
y_{p,l}(t) = h_{p,l}(t) * y(t) + e_{p,l}(t),  p ∈ υ
where t is the discrete-time index, h_{p,l}(t) is the room impulse response (RIR) between the microphone and the acoustic source, * denotes the convolution operator, y(t) is the source signal, and e_{p,l}(t) is the additive noise.
Traditionally, the generalized cross-correlation (GCC) function [27] is used for TDOA estimation. Let y_1(k) and y_2(k) be the acoustic signals received by a microphone pair in the time frame at time k, and let Y_l(f) = FFT{y_l(k)}, l = 1, 2, be the frequency-domain representation of the corresponding signal. The generalized cross-correlation function of the microphone pair is
R_{12}(τ) = ∫_{−∞}^{+∞} [ Y_1(f) Y_2^*(f) / |Y_1(f) Y_2^*(f)| ] e^{j2πfτ} df
where Y_1(f) and Y_2(f) are the frequency-domain microphone signals at the node, and (·)^* represents complex conjugation. The delay estimate is therefore [27]
τ̂ = arg max_{τ ∈ [−τ_max, τ_max]} R_{12}(τ)
where τ_max is the largest possible time delay.
However, in a real indoor environment, reverberation and noise introduce false maxima of R_{12}(τ) and yield invalid TDOA estimates. To mitigate this, the local maxima corresponding to the Q largest peaks of R_{12}(τ) are taken as multiple candidate TDOA measurements of node p at time k. In this paper, multiple TDOA observations are extracted through a two-step selection process, taking node p as an example [23].
(1)
Select Q delays according to the peak amplitudes of the GCC, i.e.,
z_{p,k} = [ τ̃_{p,k}(1), τ̃_{p,k}(2), …, τ̃_{p,k}(Q) ]
where τ̃_{p,k}(i) is the delay of node p related to the i-th largest peak of R_{12}(τ) at time k.
(2)
Further, select m_{p,k} observations from (4) as the local observations; the selection rules are given in Section 3.
z_{p,k} = [ τ̃_{p,k}(1), τ̃_{p,k}(2), …, τ̃_{p,k}(m_{p,k}) ]
where each delay τ̃_{p,k}(j), j = 1, 2, …, m_{p,k}, is treated as a TDOA candidate.
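The first selection step above can be sketched as a PHAT-weighted GCC followed by picking the delays of the Q largest local peaks within the admissible lag range. This is a minimal sketch under stated assumptions (the function name, FFT length, and peak-picking details are illustrative, not the authors' implementation):

```python
import numpy as np

def gcc_phat_candidates(y1, y2, fs, tau_max, Q=4):
    """PHAT-weighted GCC between two microphone signals; return the delays
    of the Q largest local peaks within [-tau_max, tau_max] (in seconds)."""
    n = len(y1) + len(y2)
    Y1 = np.fft.rfft(y1, n)
    Y2 = np.fft.rfft(y2, n)
    cross = Y1 * np.conj(Y2)
    cross /= np.abs(cross) + 1e-12                  # PHAT weighting
    r = np.fft.irfft(cross, n)
    r = np.concatenate((r[-(n // 2):], r[:n // 2]))  # center the zero lag
    lags = (np.arange(n) - n // 2) / fs
    mask = np.abs(lags) <= tau_max                   # admissible delays only
    r, lags = r[mask], lags[mask]
    peaks = [i for i in range(1, len(r) - 1)
             if r[i] > r[i - 1] and r[i] >= r[i + 1]]  # local maxima
    peaks.sort(key=lambda i: r[i], reverse=True)       # by peak amplitude
    return [lags[i] for i in peaks[:Q]]

# Example: white noise, second channel leads the first by 8 samples at 16 kHz
rng = np.random.default_rng(0)
y = rng.standard_normal(1024)
cands = gcc_phat_candidates(np.roll(y, 8), y, fs=16000, tau_max=1e-3, Q=4)
```
The strongest candidate corresponds to the true inter-channel delay; the remaining candidates are the spurious local peaks that the second selection step and the PDA gating are designed to handle.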

2.3. Dynamic Model of Acoustic Source

Without loss of generality, two-dimensional tracking is considered here, since the height of a moving acoustic source usually does not change significantly. The speaker moves in a room equipped with a distributed acoustic sensor network, and the Langevin model [24] describes the time-varying position of the speaker accurately and simply. At time k, the state of the speaker is defined as x_k = [x_k, y_k, ẋ_k, ẏ_k]^T, where (x_k, y_k)^T and (ẋ_k, ẏ_k)^T represent the position and velocity of the speaker, respectively. In this model, the speaker's motion along each Cartesian coordinate is considered independent and modeled as [23]
x_k = [ I_2  aΔT·I_2 ; 0  a·I_2 ] x_{k−1} + [ bΔT·I_2  0 ; 0  b·I_2 ] u_{k−1}
where a = e^{−βΔT} and b = v̄·sqrt(1 − a²); β and v̄ are the rate constant and the steady-state velocity parameter, respectively. I_2 denotes the 2 × 2 identity matrix, ΔT is the sampling period for position estimation, and u_{k−1} is zero-mean white Gaussian noise with identity covariance matrix, which describes the uncertainty of the acoustic source motion.
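One step of this dynamic model can be sketched as follows. The parameter values (ΔT = 0.05 s, β = 10 s⁻¹, v̄ = 1 m/s) are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def langevin_step(x_prev, dt=0.05, beta=10.0, v_bar=1.0, rng=None):
    """One draw of the Langevin model for the state x = [x, y, x_dot, y_dot]^T.
    With rng=None the deterministic part (u = 0) is returned."""
    a = np.exp(-beta * dt)                   # a = exp(-beta * dT)
    b = v_bar * np.sqrt(1.0 - a ** 2)        # b = v_bar * sqrt(1 - a^2)
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    F = np.block([[I2, a * dt * I2], [Z2, a * I2]])        # transition matrix
    G = np.block([[b * dt * I2, Z2], [Z2, b * I2]])        # noise gain
    u = rng.standard_normal(4) if rng is not None else np.zeros(4)
    return F @ x_prev + G @ u

# Deterministic propagation of a state at (1, 2) moving with velocity (1, 0)
x1 = langevin_step(np.array([1.0, 2.0, 1.0, 0.0]))
```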

2.4. Bayesian Framework for Speaker Tracking

Bayesian filtering is the basis of Kalman filtering. This section briefly reviews the basic principles of the Bayesian filtering algorithm.
Assume the state variable at time k is x_k ∈ ℝ^n and its observation is y_k ∈ ℝ^m, where ℝ^n denotes the n-dimensional real vector space. The state equation and observation equation are expressed as [21]:
x_{k+1} = f_k(x_k) + Γ_k w_k
y_k = h_k(x_k) + v_k
where f_k(·) is the nonlinear state transition function, h_k(·) is the nonlinear observation function, Γ_k is the noise transfer matrix, w_k is the process noise, and v_k is the observation noise, which satisfy [21]
E{ [w_k; v_k] [w_l^T, v_l^T] } = [ Q_k δ_{k,l}  0 ; 0  R_k δ_{k,l} ]
where the superscript T denotes matrix transpose, E{·} denotes the expectation operator, and δ_{k,l} is the Kronecker delta function. Q_k and R_k are the covariance matrices of the noises w_k and v_k, respectively, and both are assumed positive definite.
The Bayesian filtering problem is to infer the state variable x_k at time k given the observations y_{1:k} = {y_1, …, y_k}, i.e., to estimate the posterior probability density p(x_k | y_{1:k}). Assuming the initial probability density p(x_0) of the state variable is known as prior knowledge, the posterior density p(x_k | y_{1:k}) can be obtained recursively by [20]:
p(x_k | y_{1:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | y_{1:k−1}) dx_{k−1}
p(x_k | y_{1:k}) = p(y_k | x_k) p(x_k | y_{1:k−1}) / p(y_k | y_{1:k−1})
In Equations (10) and (11), the state transition density p(x_k | x_{k−1}) is defined by the state equation, and the observation likelihood p(y_k | x_k) is defined by the observation equation.
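The two-step recursion of Equations (10) and (11) can be made concrete on a discrete state grid, where the integral becomes a matrix-vector product. The toy transition and likelihood below are illustrative only:

```python
import numpy as np

def bayes_filter_step(prior, trans, lik):
    """One Bayesian filter step on a discrete grid of S states.
    prior[i] = p(x_{k-1}=i | y_{1:k-1}); trans[i, j] = p(x_k=j | x_{k-1}=i);
    lik[j] = p(y_k | x_k=j). Returns p(x_k | y_{1:k})."""
    predicted = prior @ trans            # Chapman-Kolmogorov prediction, eq. (10)
    posterior = lik * predicted          # numerator of Bayes' rule, eq. (11)
    return posterior / posterior.sum()   # normalize by p(y_k | y_{1:k-1})

# Example: two states, static transition, observation favoring state 0
post = bayes_filter_step(np.array([0.5, 0.5]), np.eye(2), np.array([0.9, 0.1]))
```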

3. Improved Distributed Cubature Kalman Filter

In the CKF, the observation corresponding to the largest peak of the observation function is used for the state update. This approach works well under moderate acoustic environments, while its performance degrades in severe noise and reverberation conditions because the spurious peaks from noise or reverberation may cover up the peaks from real acoustic sources. To alleviate this problem, multiple observations are selected from the multiple local maxima of the observation function. A general framework for state updates that integrates multiple possible observations is provided by the probabilistic data association (PDA). Inspired by this idea, the probabilistic data association-cubature Kalman filter (PDA-CKF) was derived in this paper. Next, PDA-CKF was used for acoustic source tracking in distributed acoustic sensor networks, and an improved PDA-DCKF algorithm was developed. The observations of multiple nodes in the neighborhood are filtered by PDA and then merged into the state update of CKF to integrate the information of multiple nodes to realize distributed tracking.
Before introducing the PDA-CKF, the cubature point set { ξ_i, ω_i } [28] used by the cubature Kalman filter should be introduced first.
The standard Gaussian weighted integral is calculated using the spherical-radial cubature rule, i.e., [28]
∫_{ℝ^n} f(x) N(x; 0, I_n) dx ≈ Σ_{i=1}^{2n} ω_i f(ξ_i)
In Equation (12), f(·) is the nonlinear state transition or observation function, n is the dimension of the state variable, N(x; 0, I_n) is the Gaussian density with zero mean and identity covariance, and the ξ_i are the cubature points:
ξ_i = √n [1]_i,  i = 1, 2, …, 2n
ω_i = 1/(2n)
where [1]_i denotes the i-th element of the following point set of the n-dimensional state space:
[1] = { (1, 0, …, 0)^T, (0, 1, …, 0)^T, …, (0, 0, …, 1)^T, (−1, 0, …, 0)^T, (0, −1, …, 0)^T, …, (0, 0, …, −1)^T }
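The point set and weights of Equations (13)-(15) can be generated directly; the sketch below also hints at why the rule is exact for second-order moments (the function name is illustrative):

```python
import numpy as np

def cubature_points(n):
    """Cubature points xi_i = sqrt(n) * [1]_i and equal weights 1/(2n)."""
    # [1] = columns of [I_n, -I_n]: the 2n unit directions and their negatives
    xi = np.sqrt(n) * np.concatenate((np.eye(n), -np.eye(n)), axis=0)
    w = np.full(2 * n, 1.0 / (2 * n))
    return xi, w

xi, w = cubature_points(2)   # 2n = 4 points for a 2-dimensional state
```
For n = 2, the weighted sum Σ ω_i ξ_{i,1}² equals 1, matching the second moment of the standard Gaussian, which is the second-order accuracy mentioned in Section 1.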

3.1. PDA-CKF Algorithm

(a)
Initialization
When k = 0, assuming x_0 ~ N(x̄_0, P_0), the initial values of the process noise and observation noise covariances are set to Q_0 and R_0, respectively. Then, the optimal initialization of the filter is
x̂_{p,0|0} = x̄_0,  P̂_{p,0|0} = P_0
(b)
State Prediction
For each node p, given the state estimate x̂_{p,k−1|k−1} and covariance P̂_{p,k−1|k−1} at time k−1, as well as the positive definite noise covariances Q_{p,k−1} and R_{p,k−1}, the state-prediction cubature points χ^i_{p,k−1|k−1} are calculated using Equations (13) and (14) as:
S_{p,k−1|k−1} = sqrt(P̂_{p,k−1|k−1})
χ^i_{p,k−1|k−1} = x̂_{p,k−1|k−1} + S_{p,k−1|k−1} ξ_i,  i = 1, 2, …, 2n
where S_{p,k−1|k−1} is a square root (e.g., Cholesky factor) of the covariance. According to the state transition model, the cubature points are propagated nonlinearly, i.e.,
χ^i_{p,k|k−1} = f(χ^i_{p,k−1|k−1}),  i = 1, 2, …, 2n,  p = 1, 2, …, N
where n represents the dimension of the state variable, and N represents the number of nodes in the distributed acoustic sensor network. At this time, the state prediction x ^ p , k | k 1 and its error matrix P ^ p , k | k 1 are calculated as:
x̂_{p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} χ^i_{p,k|k−1},  p = 1, 2, …, N
P̂_{p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} (χ^i_{p,k|k−1} − x̂_{p,k|k−1})(χ^i_{p,k|k−1} − x̂_{p,k|k−1})^T + Q_{p,k},  p = 1, 2, …, N
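The prediction step (17)-(21) can be sketched compactly; the transition function and parameter values in the example are illustrative, not the paper's settings:

```python
import numpy as np

def ckf_predict(x_est, P_est, f, Q):
    """CKF time update: propagate cubature points through the transition f."""
    n = x_est.size
    S = np.linalg.cholesky(P_est)                    # square root of P, eq. (17)
    xi = np.sqrt(n) * np.concatenate((np.eye(n), -np.eye(n)), axis=0)
    pts = x_est + (S @ xi.T).T                       # cubature points, eq. (18)
    prop = np.array([f(p) for p in pts])             # propagated points, eq. (19)
    x_pred = prop.mean(axis=0)                       # predicted state, eq. (20)
    diff = prop - x_pred
    P_pred = diff.T @ diff / (2 * n) + Q             # predicted covariance, eq. (21)
    return x_pred, P_pred

# Sanity check with a linear map f(x) = 2x, for which the rule is exact
x_pred, P_pred = ckf_predict(np.array([1.0, 1.0]), np.eye(2),
                             lambda x: 2.0 * x, 0.1 * np.eye(2))
```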
(c)
Status Update
From the predicted state x̂_{p,k|k−1} and covariance P̂_{p,k|k−1} at time k, the state-update cubature points χ^i_{p,k|k−1} are calculated as:
S_{p,k|k−1} = sqrt(P̂_{p,k|k−1})
χ^i_{p,k|k−1} = x̂_{p,k|k−1} + S_{p,k|k−1} ξ_i
The points χ^i_{p,k|k−1} are propagated through the observation equation,
Ẑ^i_{p,k|k−1} = h(χ^i_{p,k|k−1}),  i = 1, 2, …, 2n,  p = 1, 2, …, N
Further, the observation prediction ẑ_{p,k|k−1} and its prediction error covariance P^{zz}_{p,k|k−1} are obtained, respectively, by
ẑ_{p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} Ẑ^i_{p,k|k−1}
P^{zz}_{p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} (Ẑ^i_{p,k|k−1} − ẑ_{p,k|k−1})(Ẑ^i_{p,k|k−1} − ẑ_{p,k|k−1})^T + R_{p,k}
Then, according to probabilistic data association, the validation region of node p is constructed as [29]:
{ z_{p,k} : (z_{p,k} − ẑ_{p,k|k−1})^T (P^{zz}_{p,k|k−1})^{−1} (z_{p,k} − ẑ_{p,k|k−1}) ≤ γ }
where γ is the gate threshold. Suppose m_{p,k} (m_{p,k} ≥ 0) observations fall into the validation region (27) at time k. The validated observations are denoted
z_{p,k} = { z_{p,k}(j) },  j = 1, 2, …, m_{p,k}
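The gating test of Equation (27) is a Mahalanobis distance check and can be sketched as follows (the gate value γ = 4 in the example is illustrative):

```python
import numpy as np

def gate_observations(z_list, z_pred, P_zz, gamma=9.21):
    """Keep the observations whose Mahalanobis distance to the predicted
    observation is within the gate threshold gamma, per eq. (27)."""
    P_inv = np.linalg.inv(np.atleast_2d(P_zz))
    validated = []
    for z in z_list:
        d = np.atleast_1d(z - z_pred)
        if d @ P_inv @ d <= gamma:      # (z - z_pred)^T (P^zz)^{-1} (z - z_pred)
            validated.append(z)
    return validated

# Scalar TDOA example: predicted delay 0, innovation variance 1
kept = gate_observations([0.5, 3.0, -1.5], 0.0, 1.0, gamma=4.0)
```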
Actually, at most one of these observations originates from the real source; the others are due to noise or reverberation, and possibly none of them is related to the real source. Correspondingly, for m_{p,k} validated observations, there are m_{p,k} + 1 possible hypotheses, i.e.,
H_{p,0} = { all observations are independent of the real sound source },  j = 0
H_{p,j} = { z_{p,k}(j) is associated with the true source },  j = 1, 2, …, m_{p,k}
Conditioning on the hypotheses in Equation (29), the state estimate x̂_{p,k|k} is computed as
x̂_{p,k|k} = Σ_{j=0}^{m_{p,k}} E{ x_{p,k} | H_{p,j}, z_{p,1:k} } p(H_{p,j} | z_{p,1:k}) = Σ_{j=0}^{m_{p,k}} β_{p,k}(j) x̂_{p,k|k}(j)
where β_{p,k}(j) ≜ p(H_{p,j} | z_{p,1:k}) is the association probability of event H_{p,j}, with 0 ≤ β_{p,k}(j) ≤ 1 and Σ_{j=0}^{m_{p,k}} β_{p,k}(j) = 1, and x̂_{p,k|k}(j) ≜ E{ x_{p,k} | H_{p,j}, z_{p,1:k} } is the updated estimate conditioned on the event H_{p,j}, j = 0, 1, …, m_{p,k}, with
x̂_{p,k|k}(0) = x̂_{p,k|k−1}
x̂_{p,k|k}(j) = x̂_{p,k|k−1} + K_{p,k} v_{p,k}(j),  j = 1, 2, …, m_{p,k}
where v_{p,k}(j) = z_{p,k}(j) − ẑ_{p,k|k−1} is the innovation related to observation z_{p,k}(j), and K_{p,k} is the Kalman gain of node p:
P^{xz}_{p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} (χ^i_{p,k|k−1} − x̂_{p,k|k−1})(Ẑ^i_{p,k|k−1} − ẑ_{p,k|k−1})^T
K_{p,k} = P^{xz}_{p,k|k−1} (P^{zz}_{p,k|k−1})^{−1}
where P^{xz}_{p,k|k−1} is the cross-covariance between the state and the observation z_{p,k} of node p.
Given the innovations v_{p,k}(j) and their covariance P^{zz}_{p,k|k−1}, the probabilities β_{p,k}(j) are generally computed as [30]
β_{p,k}(j) = e_{p,j} / ( b_p + Σ_{i=1}^{m_{p,k}} e_{p,i} ),  j = 1, 2, …, m_{p,k}
β_{p,k}(0) = b_p / ( b_p + Σ_{i=1}^{m_{p,k}} e_{p,i} )
e_{p,j} = exp( −(1/2) v_{p,k}(j)^T (P^{zz}_{p,k|k−1})^{−1} v_{p,k}(j) )
b_p = λ |2π P^{zz}_{p,k|k−1}|^{1/2} (1 − P_{p,D} P_G) / P_{p,D}
where λ is the spatial density of spurious observations, P_{p,D} is the probability that the acoustic source is detected by sensor p, and P_G is the gate probability.
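The association probabilities of Equation (35) can be sketched as below. The parameter values (λ = 0.1, P_D = 0.9, P_G = 0.99) are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def pda_probabilities(innovations, P_zz, lam=0.1, P_D=0.9, P_G=0.99):
    """Return [beta(0), beta(1), ..., beta(m)] per eq. (35).
    innovations: list of m innovation vectors; P_zz: innovation covariance."""
    P_zz = np.atleast_2d(P_zz)
    P_inv = np.linalg.inv(P_zz)
    # Gaussian likelihood term e_{p,j} for each validated observation
    e = np.array([np.exp(-0.5 * v @ P_inv @ v)
                  for v in np.atleast_2d(innovations)])
    # "no correct observation" term b_p
    b = lam * np.sqrt(np.linalg.det(2 * np.pi * P_zz)) * (1 - P_D * P_G) / P_D
    denom = b + e.sum()
    return np.concatenate(([b / denom], e / denom))

# Two scalar innovations: 0 (close to prediction) and 2 (far from it)
betas = pda_probabilities([[0.0], [2.0]], [[1.0]])
```
As expected, the observation whose innovation is smaller receives the larger association probability.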
Finally, the state estimate x̂_{p,k|k} and error covariance P̂_{p,k|k} are obtained by
x̂_{p,k|k} = x̂_{p,k|k−1} + K_{p,k} v_{p,k}
P̂_{p,k|k} = β_{p,k}(0) P̂_{p,k|k−1} + (1 − β_{p,k}(0)) Ṗ_{p,k|k} + P̈_{p,k|k}
where v_{p,k} = Σ_{j=1}^{m_{p,k}} β_{p,k}(j) v_{p,k}(j) is the probability-weighted innovation, and the covariances Ṗ_{p,k|k} and P̈_{p,k|k} are respectively given by [29,30]
Ṗ_{p,k|k} = P̂_{p,k|k−1} − K_{p,k} P^{zz}_{p,k|k−1} K_{p,k}^T
P̈_{p,k|k} = K_{p,k} { Σ_{j=1}^{m_{p,k}} β_{p,k}(j) (v_{p,k}(j) − v_{p,k})(v_{p,k}(j) − v_{p,k})^T } K_{p,k}^T
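The combined update (36)-(39) can be sketched as follows (a minimal single-node sketch, assuming the gain, covariances, validated observations, and association probabilities are already available as numpy arrays):

```python
import numpy as np

def pda_update(x_pred, P_pred, z_pred, P_zz, P_xz, z_validated, betas):
    """PDA state update per eqs. (36)-(39); betas = [beta(0), ..., beta(m)]."""
    K = P_xz @ np.linalg.inv(P_zz)                      # Kalman gain, eq. (34)
    v_j = [z - z_pred for z in z_validated]             # innovations
    v = sum(b * vj for b, vj in zip(betas[1:], v_j))    # weighted innovation
    x_upd = x_pred + K @ v                              # eq. (36)
    P_dot = P_pred - K @ P_zz @ K.T                     # eq. (38)
    spread = sum(b * np.outer(vj - v, vj - v)
                 for b, vj in zip(betas[1:], v_j))      # innovation spread
    P_upd = (betas[0] * P_pred + (1 - betas[0]) * P_dot
             + K @ spread @ K.T)                        # eqs. (37), (39)
    return x_upd, P_upd

# Single validated observation with beta = [0, 1]: reduces to the plain CKF update
x_upd, P_upd = pda_update(np.zeros(2), np.eye(2), np.array([0.0]),
                          np.array([[2.0]]), np.array([[1.0], [0.0]]),
                          [np.array([1.0])], np.array([0.0, 1.0]))
```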
To summarize, the pseudo-code of the PDA-CKF method of using the observations from a single node is depicted in Algorithm 1.
Algorithm 1: PDA-CKF Algorithm
Initialization: x ^ p , 0 | 0 = x ¯ 0 , P ^ p , 0 | 0 = P 0
Input: x ^ p , k 1 | k 1 , P ^ p , k 1 | k 1 , z p , k
Output: x ^ p , k | k , P ^ p , k | k
Iteration: for k = 1 , 2 ,
1: Prediction step:
2: Compute the state predicted cubature points χ p , k 1 | k 1 i at time k 1 with (18).
3: Compute the predicted estimate x ^ p , k | k 1  and covariance P ^ p , k | k 1  with (20) and (21), respectively.
4: Update step:
5: Compute the state update cubature points χ p , k | k 1 i  with (23).
6: Compute the predicted observations z ^ p , k | k 1  with (25).
7: Compute the innovation covariance P p , k | k 1 z z  with (26).
8: Select the validated observations z p , k  according to (28).
9: Compute the cross-covariance P p , k | k 1 x z  with (33).
10: Compute the Kalman gain K p , k with (34).
11: Compute the association probabilities β p , k ( j )  with (35), j = 1 , 2 , , m p , k
12: Compute the covariances P ˙ p , k | k  and P ¨ p , k | k  with (38) and (39), respectively.
13: Compute the updated estimate x ^ p , k | k  and covariance P ^ p , k | k with (36) and (37), respectively.
The PDA-CKF algorithm makes full use of each node's own observation information, which improves the tracking accuracy. However, the algorithm fails when a node is damaged or the environmental noise and reverberation are severe. Therefore, this paper generalizes the PDA-CKF to a distributed version suitable for distributed sensor networks, named the probabilistic data association-based distributed cubature Kalman filter (PDA-DCKF). The specific procedure is given in Section 3.2.

3.2. PDA-DCKF Algorithm

3.2.1. PDA-DCKF

In the PDA-DCKF, the neighborhood information of each node is fused to form local node networks. The local state estimates and error covariances of the local node networks are then calculated separately. Finally, the local results are fused to obtain the global state estimate.
On the basis of the above steps, the following is defined:
Ẑ^i_{N_p,k|k−1} = [ Ẑ^i_{p,k|k−1} ; Ẑ^i_{q,k|k−1} ] ∈ ℝ^{num(N_{p,k})},  i = 1, 2, …, 2n,  p = 1, 2, …, N,  q ∈ N_{p,k}
where the q are the neighbor nodes of node p, υ = { 1, 2, …, N } is the vertex set, ε ⊆ { (p, q) | p, q ∈ υ } is the edge set of the distributed acoustic sensor network, and num(N_{p,k}) is the number of nodes in the neighborhood N_{p,k} = { q ∈ υ | (p, q) ∈ ε } ∪ { p } of node p at time k.
Further, the resulting observations are fused into a matrix. Then, the observed prediction and prediction error variance are, respectively, given by
ẑ_{N_p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} Ẑ^i_{N_p,k|k−1}
P^{zz}_{N_p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} (Ẑ^i_{N_p,k|k−1} − ẑ_{N_p,k|k−1})(Ẑ^i_{N_p,k|k−1} − ẑ_{N_p,k|k−1})^T + diag(R_{p,k}, R_{q,k}) ∈ ℝ^{num(N_{p,k}) × num(N_{p,k})}
For a single node p, v_{p,k}(j) = z_{p,k}(j) − ẑ_{p,k|k−1} is the innovation related to observation z_{p,k}(j), and K_{p,k} is the Kalman gain of node p. For multiple nodes, the information of node p and its neighboring nodes q is fused to obtain
P^{xz}_{N_p,k|k−1} = (1/(2n)) Σ_{i=1}^{2n} (χ^i_{p,k|k−1} − x̂_{p,k|k−1})(Ẑ^i_{N_p,k|k−1} − ẑ_{N_p,k|k−1})^T
K_{N_p,k} = P^{xz}_{N_p,k|k−1} (P^{zz}_{N_p,k|k−1})^{−1}
where P^{xz}_{N_p,k|k−1} is the cross-covariance between the state and the fused observations of node p's neighborhood, and K_{N_p,k} is the Kalman gain of node p at time k after fusion.
The probability-weighted innovation vector of the local nodes is defined as
v_{N_p,k} = [ v_{p,k} ; v_{q,k} ] ∈ ℝ^{num(N_{p,k})},  p = 1, 2, …, N,  q ∈ N_{p,k}
The following is further defined:
β_{N_p,k}(0) = ( β_{p,k}(0) + Σ_{q ∈ N_{p,k}\{p}} β_{q,k}(0) ) / num(N_{p,k})
Let w_p ≜ Σ_{j=1}^{m_{p,k}} β_{p,k}(j) (v_{p,k}(j) − v_{p,k})(v_{p,k}(j) − v_{p,k})^T denote the innovation-spread term appearing in the covariance P̈_{p,k|k} of node p. When the information of node p and its neighboring nodes is fused, the fused term becomes
w_{N_p} = diag(w_p, w_q) ∈ ℝ^{num(N_{p,k}) × num(N_{p,k})},  p = 1, 2, …, N,  q ∈ N_{p,k}
where w_q = Σ_{j=1}^{m_{q,k}} β_{q,k}(j) (v_{q,k}(j) − v_{q,k})(v_{q,k}(j) − v_{q,k})^T.
Finally, the state estimate x̂_{N_p,k|k} and the error covariance P̂_{N_p,k|k} of node p are expressed as
x̂_{N_p,k|k} = x̂_{p,k|k−1} + K_{N_p,k} v_{N_p,k}
P̂_{N_p,k|k} = β_{N_p,k}(0) P̂_{p,k|k−1} + (1 − β_{N_p,k}(0)) Ṗ_{N_p,k|k} + P̈_{N_p,k|k}
Ṗ_{N_p,k|k} = P̂_{p,k|k−1} − K_{N_p,k} P^{zz}_{N_p,k|k−1} K_{N_p,k}^T
P̈_{N_p,k|k} = K_{N_p,k} w_{N_p} K_{N_p,k}^T

3.2.2. Fusion Strategy

After calculating the estimation of each local node in the distributed acoustic sensor network, these data need to be fused to obtain a global estimate. Since nodes in a sensor network have different reliabilities, the final tracking result integrates the estimations from the local nodes, which are weighted with the parameters depending on the mean square error of the estimation and the energy of the received signal.
(a)
Energy
The energy of the signal received by each node in the acoustic sensor network is calculated as [31]:
E_p = lim_{T→∞} ∫_{−T}^{T} |x_p(t)|² dt
where x_p(t) is the sound signal received by node p. In practice, the analog signal x(t) is converted into a digital signal x(n), which then needs to be framed and windowed; the framed signal is denoted by x(n)ω(n). In this paper, the Hamming window is selected as the window function ω(n). The energy of each frame is then obtained by
E_{p,n} = Σ_{m=−∞}^{∞} [ x_p(m) ω(n − m) ]² = Σ_{m=−∞}^{∞} x_p²(m) h(n − m) = x_p²(n) * h(n)
where h(n) = ω²(n), and E_{p,n} represents the short-term energy of node p when the window starts at the n-th sample of the signal. The short-term energy can be regarded as the output of the squared speech signal passed through a linear filter whose unit impulse response is h(n).
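The short-term energy computation can be sketched per frame as follows (frame length and hop size are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def frame_energies(x, frame_len=256, hop=128):
    """Short-term energy of each Hamming-windowed frame, eq. (54) in
    discrete form: E_n = sum_m [x(m) w(n - m)]^2 with h(n) = w^2(n)."""
    w2 = np.hamming(frame_len) ** 2          # h(n) = w^2(n)
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.array([np.sum(x[i * hop: i * hop + frame_len] ** 2 * w2)
                     for i in range(n_frames)])

# Doubling the amplitude quadruples the per-frame energy
e1 = frame_energies(np.ones(512))
e2 = frame_energies(2.0 * np.ones(512))
```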
(b)
MSE
In Equation (48), the local estimate x̂_{N_p,k|k} of node p (p = 1, 2, …, N) is calculated, and
r̂_{p,k} = [ 1 0 0 0 ; 0 1 0 0 ] x̂_{N_p,k|k}
is the estimated acoustic source position of node p at time k. The following is defined:
r̂_{N,k} = (1/N) Σ_{p=1}^{N} r̂_{p,k}
where r̂_{N,k} is the global position estimate obtained with equal (average consensus) weights. The MSE between the position obtained by each local node and r̂_{N,k} is then
M_p = || r̂_{p,k} − r̂_{N,k} ||²
After calculating the energy E_p and the mean square error M_p of node p at time k, the following is defined:
C_p = E_p / M_p
η_p = C_p / Σ_{p=1}^{N} C_p
where η_p is the weight of node p in the global fusion, so that nodes with high received energy and low MSE are weighted more heavily. The results obtained by each node are fused into a globally consistent estimate according to η_p, p = 1, 2, …, N:
x̂_{k|k} = Σ_{p=1}^{N} η_p x̂_{N_p,k|k}
P̂_{k|k} = Σ_{p=1}^{N} η_p P̂_{N_p,k|k}
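The weighting of Equations (55)-(57) can be sketched as below. Note the original text leaves the exact combination of E_p and M_p ambiguous; the ratio C_p = E_p / M_p is assumed here, since both high energy and low MSE indicate a reliable node (an assumption, flagged in the comments):

```python
import numpy as np

def fusion_weights(energies, local_positions):
    """Fusion weights eta_p from the local energies E_p and position
    estimates r_p. Assumes C_p = E_p / M_p (ratio form, an assumption)."""
    r_avg = np.mean(local_positions, axis=0)              # eq. (55)
    M = np.sum((local_positions - r_avg) ** 2, axis=1)    # eq. (56)
    C = np.asarray(energies) / np.maximum(M, 1e-12)       # guard against M = 0
    return C / C.sum()                                    # eq. (57)

# Three nodes with equal energy; node 2 agrees best with the consensus
eta = fusion_weights([1.0, 1.0, 1.0],
                     np.array([[0.0, 0.0], [1.0, 0.0], [0.1, 0.0]]))
```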
To summarize, the PDA-DCKF is depicted in Algorithm 2.
Algorithm 2: PDA-DCKF Algorithm
Initialization: x ^ p , 0 | 0 = x ¯ 0 , P ^ p , 0 | 0 = P 0
Input: x ^ k 1 | k 1 , P ^ k 1 | k 1 , z k
Output: x ^ k | k , P ^ k | k
Iteration: for k = 1 , 2 ,
For any node p ( p = 1 , 2 , , N ) in sensor network
1: Prediction step:
2: Compute the state predicted cubature points χ p , k 1 | k 1 i at time k 1 with (18).
3: Compute the predicted estimate x ^ p , k | k 1  and covariance
   P ^ p , k | k 1  with (20) and (21), respectively.
4: Update step:
5: Compute the state update cubature points χ p , k | k 1 i  with (23).
6: Compute the observed values of predicted local nodes z ^ N p , k | k 1 with (41).
7: Compute the innovation covariance of predicted local nodes P N p , k | k 1 z z with (42).
8: Select the validated observations z p , k according to (28), p = 1 , 2 , , N .
9: Compute the cross-covariance of predicted local nodes P N p , k | k 1 x z with (43).
10: Compute the Kalman gain K N p , k  with (44).
11: Compute the probability weighted innovation vector of local nodes v N p , k with (45).
12: Compute the association probability β p , k ( j )  with (35), j = 1 , 2 , , m p , k .
13: Compute the association probability β N p , k ( 0 )  with (46).
14: Compute v 1 N p  with (47).
15: Compute the updated estimate x ^ N p , k | k  and covariance P ^ N p , k | k of local nodes with (48) and (49), respectively.
16: Compute the weight η p  of node p ( p = 1 , 2 , , N )  with (57).
17: Compute the updated estimate x ^ k | k  and covariance P ^ k | k  with (58) and (59), respectively.
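The prediction and update steps in Algorithm 2 are built on the standard third-degree spherical-radial cubature rule of the CKF [28]. A minimal sketch of generating the 2n equally weighted cubature points from a state mean and covariance (the core of steps 2 and 5) could look as follows:

```python
import numpy as np

def cubature_points(x, P):
    """Third-degree cubature rule: 2n points x + sqrt(n) * S * (+/- e_i),
    where S is a Cholesky factor of P; all points have weight 1/(2n)."""
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions
    return x[:, None] + S @ xi            # shape (n, 2n)

# propagating the points through the motion model and averaging them
# gives the predicted state; here the points are generated for a
# 4-dimensional state with identity covariance
pts = cubature_points(np.zeros(4), np.eye(4))
mean = pts.mean(axis=1)
```

The equal weighting 1/(2n) makes the sample mean of the propagated points the predicted estimate, and the sample covariance recovers P exactly for a linear map.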
The PDA-DCKF proposed in this paper combines the advantages of probabilistic data association and distributed acoustic sensor networks. In this method, the PDA algorithm is used to sift the observations from neighboring nodes, and the sifted observations are then fused to update the state vectors in the CKF. This not only makes the observations used by each node more accurate, but also makes full use of the information from neighboring nodes.
Meanwhile, a weighted fusion method based on the received signal energy and the position-estimation mean square error of the local nodes was proposed. This dynamic weighted consistency fusion accounts for the reliability of each node's local state and provides good global estimation performance.

4. Experiments and Results Discussion

To verify the performance of the proposed speaker tracking method, evaluations were performed in a simulated room environment. Under the same conditions, comparative experiments were carried out between PDA-DCKF and existing methods, including the centralized CKF (CCKF), the distributed unscented Kalman filter (DUKF), the distributed cubature Kalman filter (DCKF), the iteration-based DCKF (DICKF) [20], and the distributed extended Kalman filter (DEKF). The results of all methods are averaged over 100 Monte Carlo runs.
The root mean square error (RMSE) is used to evaluate the tracking performance. r_k denotes the ground-truth position at time k, and r̂_{N,k} the global consensus position calculated by the acoustic sensor network at that time. The RMSE is defined as [32]
\mathrm{RMSE} = \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \left\| r_k - \hat{r}_{N,k} \right\|^2 }
where K denotes the number of frames. Generally, the smaller the RMSE, the better the tracking result.
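A direct implementation of this metric is straightforward; the array shapes below assume 2-D positions over K frames:

```python
import numpy as np

def rmse(r_true, r_est):
    """RMSE over K frames between the ground-truth positions r_k and
    the global estimates, both arrays of shape (K, 2)."""
    return np.sqrt(np.mean(np.sum((r_true - r_est) ** 2, axis=1)))

# a constant 0.1 m offset in x over every frame gives an RMSE of 0.1
r_true = np.zeros((100, 2))
r_est = r_true + np.array([0.1, 0.0])
err = rmse(r_true, r_est)   # -> 0.1
```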

4.1. Simulation Setups

The simulation environment was a typical room of size 6 m × 6 m × 3 m, with an acoustic sensor network of 12 nodes ( N = 12 ). Each node contained a pair of microphones 0.5 m apart. The communication diagram of the distributed acoustic sensor network is shown in Figure 2, where the communication radius is 2.5 m and each circle represents a node. The simulated trajectory 1 was a line from (0.5, 0.8) to (2.5, 2.8), and trajectory 2 was an arc from (1, 2) to (4.86, 2.1), as shown in Figure 3. In the different experiments, speech sampled at F_s = 16 kHz was used as the acoustic source signal; the speech was a female recording, and the waveform and spectrum of the signal are shown in Figure 4a. The sound speed was c = 342 m/s. The microphone signals were simulated with the image method [33]. Specifically, different room impulse responses (RIRs) were generated by the image-source method to reflect different reverberation times. These RIRs were convolved with the speech signal, and Gaussian white noise with a given mean and covariance was then added to produce received microphone signals containing a mixture of reverberation and noise. The covariance of the Gaussian noise determines the signal-to-noise ratio (SNR), which reflects different environmental noise conditions. The microphone signal was divided into signal frames along the sound source track, where the frame length was N_f = 512 and each signal frame was used for one state estimation. Taking node 1 as an example, Figure 4b shows the waveform and spectrum of the speech signal received by the first microphone of node 1. For the TDOA observations, a total of eight time delays were chosen according to the magnitudes of the GCC peaks. From these delays, the validated TDOA observations were selected, with the relevant parameters set as λ = 10, γ = 4, P_G = 0.93, and P_D = 0.95. The standard deviation of the TDOA measurement error was σ = 50 μs.
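For context, the per-node candidate delays can be obtained with the generalized cross-correlation with PHAT weighting [27]. The sketch below simply returns the delays at the largest correlation peaks; it omits the paper's validation gate, and the default parameter values are illustrative:

```python
import numpy as np

def gcc_phat(x1, x2, fs=16000, n_peaks=8):
    """Candidate TDOAs from a microphone pair via GCC-PHAT: keep the
    delays at the n_peaks largest cross-correlation peaks."""
    n = len(x1) + len(x2)                      # zero-pad to avoid wrap-around
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    G = X1 * np.conj(X2)
    G /= np.maximum(np.abs(G), 1e-12)          # PHAT weighting
    cc = np.fft.fftshift(np.fft.irfft(G, n))   # cross-correlation function
    lags = np.arange(n) - n // 2
    top = np.argsort(cc)[-n_peaks:][::-1]      # strongest peaks first
    return lags[top] / fs                      # delays in seconds
```

Keeping several peaks rather than only the largest is what later allows the PDA step to down-weight spurious peaks caused by noise or reverberation.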
In the acoustic dynamic model, the parameters were β = 10 s⁻¹ and \bar{v} = 1 m/s. In the average-consensus calculation of the global state estimate and its error covariance, Metropolis weights were used, the number of consensus iterations [34] was N_con = 10, and the number of iterations in the iterative CKF was 3.
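The Metropolis weights referred to here follow the standard construction of [34]; a small sketch of building the weight matrix from the network's adjacency structure and running the consensus iterations:

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weight matrix: W[i,j] = 1/(1 + max(d_i, d_j)) for
    neighboring nodes i, j; the diagonal absorbs the remainder so
    that every row sums to one."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# each consensus iteration is x <- W @ x; after N_con = 10 iterations
# the node values approach the network-wide average (4-node ring example)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
W = metropolis_weights(adj)
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(10):
    x = W @ x
```

Because W is symmetric and row-stochastic, repeated multiplication drives all node values to the network average without any node knowing the global topology.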
This paper conducted four experiments to evaluate the tracking performance of PDA-DCKF. In Experiment 1, trajectory 1 was used as the acoustic source trajectory. The initial prior p(x_0) of the acoustic source position was set as a Gaussian distribution with mean x_0 = [0.5, 0.8, 0.02, 0.02]^T and covariance P_0 = diag([0.05, 0.05, 0.0025, 0.0025]). In Experiment 2, the sound source signal and trajectory were the same as in Experiment 1; using a simple average fusion rule, the influence of the fusion rule on PDA-DCKF tracking performance was examined. Experiment 3 examined the robustness of the algorithm, with the acoustic source and trajectory the same as in the previous two experiments. In Experiment 4, trajectory 2 was used as the acoustic source trajectory to check the tracking results when the trajectory was nonlinear.

4.2. Simulation Results

4.2.1. Experiment 1

In this experiment, the tracking performance was evaluated under different noise and reverberation conditions. First, the impact of environmental noise on tracking performance was investigated. Figure 5 depicts the RMSE as a function of SNR for a reverberation time of T60 = 200 ms. It can be observed that the RMSE of all methods decreases as the SNR increases, i.e., the tracking accuracy improves. This is because, at higher SNR, the microphone signal is less affected by ambient noise, resulting in better tracking performance. In addition, at the same SNR, PDA-DCKF performs better than the traditional distributed Kalman filters (extended, unscented, and cubature). In the traditional methods, only the single time-delay observation at the largest GCC peak is used, so the peak associated with the real source may be masked by spurious peaks caused by noise or reverberation, resulting in erroneous state estimates. In contrast, PDA-DCKF employs multiple time-difference observations taken at the several largest GCC peaks, resulting in better tracking performance. The comparison with DICKF in this experiment shows that PDA-DCKF also outperforms DICKF. DICKF improves upon DCKF, which suffers from slow response and limited tracking accuracy, by performing several local iterations that raise the tracking performance and convergence speed. However, DICKF still uses only the single time-delay observation at the largest GCC peak, which limits its accuracy. As can be seen from Figure 5, the gap between DICKF and PDA-DCKF narrows as the SNR increases, because the observations become more reliable at higher SNR.
In addition, Figure 5 shows that PDA-DCKF is slightly worse than CCKF, because CCKF uses the observation information of all nodes; nevertheless, PDA-DCKF achieves performance very close to that of CCKF at a lower computational cost and network communication burden.
The effect of reverberation on tracking performance was also studied. Figure 6 depicts the RMSE as a function of T60 with SNR = 20 dB. From the results, we can observe that the RMSEs of all methods increase as T60 becomes larger, signifying a degradation of the tracking accuracy. This is because the microphone signal is more affected by reverberation as T60 grows, so the time-difference observations extracted from the largest peak, or even from multiple large peaks, become less reliable, and the tracking performance of all methods deteriorates. In addition, Figure 6 shows that the tracking performance of PDA-DCKF is better than that of DEKF, DUKF, DCKF, and DICKF. In the traditional methods, the time-difference observations are extracted only from the largest GCC peak, while the peak associated with the true source may be masked by false peaks caused by reverberation. In contrast, PDA-DCKF incorporates TDOA observations from multiple large GCC peaks, which alleviates the adverse effects of reverberation to a certain extent. Furthermore, although Figure 6 shows that PDA-DCKF does not perform quite as well as CCKF, it again comes very close.

4.2.2. Experiment 2

The effect of the fusion strategy proposed in this paper on the results is discussed in Experiment 2. When PDA-DCKF adopts a simple average fusion rule, it is called PDA-DCKF-avg. In this section, different SNRs and reverberation times are used to test the effectiveness of the fusion strategy. The experimental results are shown in Figure 7 and Figure 8.
As depicted in Figure 7, the RMSEs of PDA-DCKF with both fusion strategies decrease as the SNR increases, but the proposed strategy is more effective. Figure 8 shows that the error also increases with the reverberation time. Only at 50 ms reverberation is the error of the average fusion strategy smaller than that of the proposed strategy; over 100–600 ms, the proposed fusion strategy performs better than average fusion. Comparing Figure 5, Figure 6, Figure 7 and Figure 8, it can be found that, even with the average fusion strategy, the error of PDA-DCKF is still smaller than that of the methods in the comparison tests above, which further demonstrates the effectiveness of the proposed method.

4.2.3. Experiment 3

In practical applications, nodes in a network may fail, and whether the network can still work normally when a node fails tests the robustness of the system. In this subsection, node failures in the distributed acoustic sensor network are simulated, and the tracking results after the failure are compared with those before it. The network with node 1 failed is denoted graph G_2, as shown in Figure 9a; the network with both node 1 and node 6 failed is denoted graph G_3, as shown in Figure 9b. The experimental results are shown in Table 1 and Table 2.
It can be seen from Table 1 and Table 2 that the acoustic source can still be tracked when nodes fail. Although the accuracy decreases, the drop is small and the source is still tracked accurately, which demonstrates the good robustness of the proposed method under this network.

4.2.4. Experiment 4

In order to further verify the effectiveness of the proposed algorithm, the semicircular trajectory 2 was used as the acoustic source trajectory, and comparative experiments were carried out under different SNRs and reverberation times. The experimental data are shown in Table 3 and Table 4. Figure 10 shows the tracking results with SNR = 15 dB and T60 = 400 ms.
Table 3 and Table 4 and Figure 10 show that the proposed algorithm can still accurately track the sound source along this strongly nonlinear trajectory.

5. Conclusions

An improved PDA-DCKF method was proposed in this paper, which proved able to solve the problem of tracking a single moving acoustic source with a distributed acoustic sensor network in noisy and reverberant environments. First, in order to reduce the adverse effects of noise and reverberation, predicted observations are obtained from the predicted state and the observation models of the distributed nodes. Second, the actual observations are screened according to these predicted values: multiple TDOA observations are extracted at each node and incorporated into the state update of the CKF through the PDA, yielding PDA-CKF. Applying PDA-CKF to distributed acoustic sensor networks then yields PDA-DCKF. In PDA-DCKF, the PDA algorithm is first used to sift the observations from neighboring nodes; the sifted observations are then fused to update the state vectors in the CKF. Each node runs PDA-DCKF for local state estimation based on its TDOA observations, and a new fusion strategy using energy and MSE merges all local estimates in a distributed manner for the global state estimation. To apply the improved PDA-DCKF to the acoustic source tracking problem, the Langevin model was used to model the acoustic source dynamics, and a method for extracting the time-difference observations was proposed, resulting in a complete distributed acoustic source tracking framework. To evaluate the effectiveness of PDA-DCKF, comparative experiments were carried out against existing methods (DCKF, DUKF, DEKF, and DICKF) under different ambient noise and reverberation conditions. The results show that PDA-DCKF has better tracking performance than DCKF, DUKF, DEKF, and DICKF under most noise and reverberation conditions, achieves tracking performance close to that of the centralized CKF, and can even track the acoustic source stably in the case of node failure.

Author Contributions

Methodology, R.W.; Software, Y.C. (Yideng Cao); Writing—review & editing, Y.C. (Yang Chen). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Changzhou Science and Technology Funds, grant number CJ20220100. The APC was funded by Changzhou University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, L.; Reiss, J.D.; Cavallaro, A. Over-Determined Source Separation and Localization Using Distributed Microphones. IEEE/ACM Trans. Audio Speech Lang. Process. 2016, 24, 1573–1588. [Google Scholar] [CrossRef]
  2. Li, X.; Chen, J.; Qi, W.; Zhou, R. A distributed sound source surveillance system using autonomous vehicle network. In Proceedings of the 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA), Wuhan, China, 31 May–2 June 2018; pp. 42–46. [Google Scholar] [CrossRef]
  3. Kapralos, B.; Jenkin, M.R.M.; Milios, E. Audiovisual localization of multiple speakers in a video teleconferencing setting. Int. J. Imaging Syst. Technol. 2003, 13, 95–105. [Google Scholar] [CrossRef]
  4. Green, T.; Hilkhuysen, G.; Huckvale, M.; Rosen, S.; Brookes, M.; Moore, A.; Naylor, P.; Lightburn, L.; Xue, W. Speech recognition with a hearing-aid processing scheme combining beam-forming with mask-informed speech enhancement. Trends Hear. 2022, 26, 23312165211068629. [Google Scholar] [PubMed]
  5. Gerstoft, P.; Hu, Y.; Bianco, M.J.; Patil, C.; Alegre, A.; Freund, Y.; Grondin, F. Audio scene monitoring using redundant ad hoc microphone array networks. IEEE Internet Things J. 2021, 9, 4259–4268. [Google Scholar] [CrossRef]
  6. Laufer-Goldshtein, B.; Talmon, R.; Gannot, S. A hybrid approach for speaker tracking based on TDOA and data-driven models. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 725–735. [Google Scholar] [CrossRef]
  7. Ruiz, S.; Van Waterschoot, T.; Moonen, M. Distributed combined acoustic echo cancellation and noise reduction in wireless acoustic sensor and actuator networks. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 534–547. [Google Scholar] [CrossRef]
  8. Dang, X.; Zhu, H. A feature-based data association method for multiple acoustic source localization in a distributed microphone array. J. Acoust. Soc. Am. 2021, 149, 612–628. [Google Scholar] [CrossRef]
  9. Ziegler, J.; Schröder, L.; Koch, A.; Schilling, A. A Neural Beamforming Front-end for Distributed Microphone Arrays. In Audio Engineering Society Convention 151; Audio Engineering Society: New York, NY, USA, 2021. [Google Scholar]
  10. Guo, X.; Yuan, M.; Zheng, C.; Li, X. Distributed node-specific block-diagonal LCMV beamforming in wireless acoustic sensor networks. Signal Processing 2021, 185, 108085. [Google Scholar] [CrossRef]
  11. Faraji, M.M.; Shouraki, S.B.; Iranmehr, E.; Linares-Barranco, B. Sound Source Localization in Wide-Range Outdoor Environment Using Distributed Sensor Network. IEEE Sens. J. 2019, 20, 2234–2246. [Google Scholar] [CrossRef]
  12. Yang, B.; Yan, G.; Wang, P.; Chan, C.-Y.; Song, X.; Chen, Y. A Novel Graph-Based Trajectory Predictor with Pseudo-Oracle. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–15. [Google Scholar] [CrossRef]
  13. Wishner, R.P.; Tabaczynski, J.A.; Athans, M. A comparison of three non-linear filters. Automatica 1969, 5, 487–496. [Google Scholar] [CrossRef]
  14. Nicoletti, O. MDS-IEKF: A Delayed-State Invariant Extended Kalman Filter for Monocular Visual-Inertial Navigation. Ph.D. Thesis, McGill University, Montreal, QC, Canada, 2020. [Google Scholar]
  15. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Processing 2002, 50, 174–188. [Google Scholar] [CrossRef]
  16. Vermaak, J.; Blake, A. Nonlinear filtering for speaker tracking in noisy and reverberant environments. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings (Cat. No. 01CH37221), Salt Lake City, UT, USA, 7–11 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 5, pp. 3021–3024. [Google Scholar]
  17. Sung, K.; Song, H.J.; Kwon, I.H. A Local Unscented Transform Kalman Filter for Nonlinear Systems. Mon. Weather Rev. 2020, 148, 3243–3266. [Google Scholar] [CrossRef]
  18. Zhong, X.; Mohammadi, A.; Wang, W.; Premkumar, A.B.; Asif, A. Acoustic source tracking in a reverberant environment using a pairwise synchronous microphone network. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 953–960. [Google Scholar]
  19. Wang, R.; Chen, Z.; Yin, F. Speaker tracking based on distributed particle filter and iterative covariance intersection in distributed microphone networks. IEEE J. Sel. Top. Signal Processing 2019, 13, 76–87. [Google Scholar] [CrossRef]
  20. Tian, Y.; Chen, Z.; Yin, F. Distributed iterated extended Kalman filter for speaker tracking in microphone array networks. Appl. Acoust. 2017, 118, 50–57. [Google Scholar] [CrossRef]
  21. Tian, Y.; Chen, Z.; Yin, F. Distributed IMM-unscented Kalman filter for speaker tracking in microphone array networks. IEEE/ACM Trans. Audio Speech Lang. Processing 2015, 23, 1637–1647. [Google Scholar] [CrossRef]
  22. Thomas, T.; Sreeja, S. Comparison of Nearest Neighbor and Probabilistic Data Association Filters for Target Tracking in Cluttered Environment. In Proceedings of the 2021 IEEE 6th International Conference on Computing, Communication and Automation (ICCCA), Arad, Romania, 17–19 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 272–277. [Google Scholar]
  23. Zhang, Q.; Zhang, W.; Feng, J.; Tang, R. Distributed Acoustic Source Tracking in Noisy and Reverberant Environments With Distributed Microphone Networks. IEEE Access 2020, 8, 9913–9927. [Google Scholar] [CrossRef]
  24. Wang, R.; Chen, Z.; Yin, F. Distributed Multiple Speaker Tracking Based on Unscented Particle Filter and Data Association in Microphone Array Networks. Circuits Syst. Signal Processing 2022, 41, 933–955. [Google Scholar] [CrossRef]
  25. Zhang, J.; Gao, S.; Zhong, Y.; Qi, X.; Xia, J.; Yang, J. An advanced cubature information filtering for indoor multiple wideband source tracking with a distributed noise statistics estimator. IEEE Access 2019, 7, 151851–151866. [Google Scholar] [CrossRef]
  26. Woźniak, S.; Kowalczyk, K. Passive joint localization and synchronization of distributed microphone arrays. IEEE Signal Processing Lett. 2018, 26, 292–296. [Google Scholar] [CrossRef]
  27. Knapp, C.; Carter, G. The generalized correlation method for estimation of time delay. IEEE Trans. Acoust. Speech Signal Processing 1976, 24, 320–327. [Google Scholar] [CrossRef]
  28. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef]
  29. Kirubarajan, T.; Bar-Shalom, Y. Probabilistic data association techniques for target tracking in clutter. Proc. IEEE 2004, 92, 536–557. [Google Scholar] [CrossRef]
  30. Bar-Shalom, Y.; Li, X.R. Multitarget-Multisensor Tracking: Principles and Techniques; YBs: Storrs, CT, USA, 1995. [Google Scholar]
  31. Souden, M.; Kinoshita, K.; Nakatani, T. An integration of source location cues for speech clustering in distributed microphone arrays. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 111–115. [Google Scholar]
  32. Hodson, T.O. Root mean square error (RMSE) or mean absolute error (MAE): When to use them or not. Geosci. Model Dev. Discuss. 2022, 15, 5481–5487. [Google Scholar] [CrossRef]
  33. Lehmann, E.A.; Johansson, A.M.; Nordholm, S. Reverberation-time prediction method for room impulse responses simulated with the image-source model. In Proceedings of the 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 21–24 October 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 159–162. [Google Scholar]
  34. Xiao, L.; Boyd, S.; Lall, S. A scheme for robust distributed sensor fusion based on average consensus. In Proceedings of the IPSN 2005. Fourth International Symposium on Information Processing in Sensor Networks, Boise, ID, USA, 15 April 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 63–70. [Google Scholar]
Figure 1. Diagram of speaker tracking in the distributed acoustic sensor network.
Figure 2. Diagram of a distributed acoustic sensor network with 12 nodes; circles represent nodes in the network. A pair of microphones was placed on each node, and the lines between the nodes indicate that the nodes can communicate with each other.
Figure 3. Microphone deployments and acoustic source trajectories: the black line denotes trajectory 1, the black dashed arrow denotes the motion direction of trajectory 1, the red semicircle denotes trajectory 2, the red dashed arrow denotes the motion direction of trajectory 2.
Figure 4. (a) The waveform and spectrum of the original speech signal, and (b) the waveform and spectrum of node 1 speech signal.
Figure 5. RMSE versus SNR for different tracking algorithms with T60 = 200 ms.
Figure 6. RMSE versus T60 for different tracking algorithms with SNR = 20 dB.
Figure 7. RMSE versus SNR for different fusion rules with T60 = 200 ms.
Figure 8. RMSE versus T60 for different fusion rules with SNR = 20 dB.
Figure 9. (a) Node 1 has failed; (b) node 1 and node 6 have failed. Solid lines indicate that nodes can communicate with each other; dotted lines indicate that they cannot.
Figure 10. The tracking result of the semicircle trajectory when the SNR = 15 dB and T60 = 400 ms.
Table 1. RMSE versus SNR under different graphs with T60 = 200 ms.
SNR (dB)   G_1      G_2      G_3
5          0.3363   0.3521   0.3637
10         0.1735   0.1803   0.2057
15         0.1457   0.1511   0.1786
20         0.1201   0.1284   0.1543
25         0.1161   0.1203   0.1457
30         0.1101   0.1169   0.1376
Table 2. RMSE versus T60 under different graphs with SNR = 20 dB.
T60 (ms)   G_1      G_2      G_3
50         0.0992   0.1013   0.1164
100        0.1108   0.1195   0.1351
200        0.1201   0.1284   0.1543
300        0.1393   0.1501   0.1754
400        0.1824   0.1903   0.2158
500        0.2187   0.2305   0.2439
600        0.4703   0.4897   0.5062
Table 3. RMSE versus SNR with T60 = 200 ms.
SNR (dB)   RMSE
5          0.3751
10         0.1869
15         0.1423
20         0.1299
25         0.1214
30         0.1167
Table 4. RMSE versus T60 with SNR = 20 dB.
T60 (ms)   RMSE
50         0.1174
100        0.1251
200        0.1299
300        0.1322
400        0.1635
500        0.2216
600        0.5027
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Y.; Cao, Y.; Wang, R. Acoustic Source Tracking Based on Probabilistic Data Association and Distributed Cubature Kalman Filtering in Acoustic Sensor Networks. Sensors 2022, 22, 7160. https://doi.org/10.3390/s22197160