Article

Achieving 3D Beamforming by Non-Synchronous Microphone Array Measurements

1 Institute of Vibration, Shock and Noise, State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China
2 College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
3 College of Energy Engineering, Zhejiang University, Zheda Road 38, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(24), 7308; https://doi.org/10.3390/s20247308
Submission received: 8 October 2020 / Revised: 10 December 2020 / Accepted: 14 December 2020 / Published: 19 December 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

Beamforming is an essential technique in acoustic imaging and reconstruction, widely used in sound source localization and noise reduction. In conventional beamforming, all microphones in a plane record the source signal simultaneously, and the source position is localized by maximizing the beamformer output. Evidence has shown that the accuracy of sound source localization in a 2D plane can be improved by non-synchronous measurements, in which the microphone array is moved between acquisitions. In this paper, non-synchronous measurements are applied to 3D beamforming: the measurement array envelops the 3D sound source space, improving the spatial resolution in 3D. The entire radiating object is covered better by a virtualized large or high-density microphone array, and the usable beamforming frequency range is also expanded. The 3D imaging results are obtained in three ways: conventional beamforming with a planar array, non-synchronous measurements with orthogonal moving arrays, and non-synchronous measurements with non-orthogonal moving arrays. The imaging results of the non-synchronous measurements are compared with those of synchronous measurements and analyzed in detail. The number of microphones required for measurement is reduced compared with synchronous measurement, and the non-synchronous measurements with non-orthogonal moving arrays still achieve a good resolution in 3D source localization. The proposed approach is validated with a simulation and an experiment.

1. Introduction

Sound source localization is in high demand and of exceptional value in applications such as automobiles [1], submarines [2], aircraft [3,4], etc. Several methods have been proposed, such as beamforming and inverse methods [5,6], among which beamforming has evolved into one of the most important methods of sound source localization. The basic idea of beamforming is that, by weighting each array element's output, the expected maximum output power of the signal is steered to the position of the sound source. Different representations of the basis function also lead to other classical methods, such as (1) near-field acoustical holography (NAH) [7,8,9]; (2) the inverse boundary element method (IBEM) [10,11,12]; (3) the equivalent source method (ESM) [13,14,15]; (4) statistically optimal near-field acoustical holography (SONAH) [16,17]; and (5) the Helmholtz equation least-squares method (HELS) [18]. The beamformer has a good resolution when the plane of the beamformer is parallel to the microphone plane, but the spatial resolution of the planar beamformer deteriorates sharply when the plane is perpendicular to the microphone array. Most beamforming research is conducted in two dimensions (2D) because the 2D model is typical and its principle is easy to explain, and it has been studied extensively in recent decades. Although most 2D beamforming algorithms can be directly extended to the three-dimensional (3D) space domain for acoustic imaging [19,20,21], this simple extension causes problems such as high computational complexity and difficulty in 2D parameter matching. On the other hand, 2D beamforming generally assumes that the distance between the source plane and the measurement plane is known. In contrast, 3D beamforming assumes that the sources are distributed in a 3D space, and the distribution of the entire 3D sound source space is obtained by iteratively scanning over the measured distance.
In most studies, 3D beamforming is still implemented with planar array measurements, which leads to low spatial resolution. Research on 3D beamforming that achieves a high spatial resolution in both the lateral and normal directions of the planar array is therefore indispensable. The 3D model is also more practical, because good 3D sound source localization [22,23,24] is required in many industrial and national defense fields, especially in aeroacoustics.
Research into 3D sound localization with a planar microphone array has developed considerably in recent years. A 3D DAMAS deconvolution approach was developed by Brooks et al. [20] to locate the distribution of gear noise sources, which improves the performance of larger arrays at higher frequencies. A Fourier-based deconvolution combined with coordinate transformation and scanning technology was used for 3D acoustic imaging by Xenaki et al. [22], which improves the normal resolution of the planar array but introduces some side-lobe contamination. A compressive sensing algorithm was applied to 3D sound localization by Ning et al. [25] to obtain a high-resolution source map, providing more accurate results than conventional beamforming. Inverse solution strategies, consisting of the equivalent source method and the covariance matrix fitting method, were exploited in the study of 3D acoustic mapping with a planar array by Battista et al. [26]; compared with conventional beamforming, these inverse strategies produce more accurate results. In some research, sub-microphone arrays are added to the existing planar array to improve the quality of the 3D beamforming source map. In 2013, four sub-microphone arrays arranged around a wind tunnel were used to detect aeroacoustic sources [27]. Padois and Berry [28] compared the effects of various microphone array configurations on 2D and 3D sound source localization. Two sub-arrays placed vertically were used to detect acoustic dipole sources in a 3D space [29]. These studies found that adding a vertical sub-microphone array helps improve the planar array's normal spatial resolution. In Padois et al.'s research [27], 192 microphones in four arrays collected sound signals at the same time, and the number of microphones in the combined array grows further when the microphones are dense in a single sub-array. Ping et al. proposed a 3D source localization model using a rigid spherical microphone array with spherical wave propagation [30], in which sparse Bayesian learning is used to perform the localization in 3D space. The above methods suffer either from requiring too many data acquisition channels or from high algorithmic complexity. High-resolution 3D beamforming therefore remains an important challenge and needs further study.
Non-synchronous measurements [31,32] can virtualize the microphone array into a large or high-density array by moving an array sequentially and recording the signal at several locations. A sufficient number of fixed reference microphones is usually required to encode the phase relationships between the positions in non-synchronous measurements. Antoni et al. [33] proposed a non-synchronous measurement method that does not require a reference, in which a Bayesian probabilistic approach and the Expectation-Maximization algorithm are used to reconstruct the source fields iteratively; the results of the non-synchronous and synchronous measurements have similar accuracy. For this non-synchronous measurement method, Yu et al. [34] proposed fast iteration algorithms, including the augmented Lagrange multiplier (ALM) algorithm and the alternating direction method of multipliers (ADMM) algorithm, to improve the iteration speed. These algorithms can effectively recover the data missing from the cross-spectral matrix due to the non-synchronous measurements. The problem of non-synchronous measurements in 3D beamforming has received limited attention in the literature, as most studies have focused on 2D beamforming. Therefore, an interesting question arises: is it possible to effectively improve the array's spatial resolution by moving the planar microphone array in space?
In this paper, the performance of 3D beamforming with non-synchronous measurements is studied systematically. The prototype array configuration is optimized by moving a planar array in space when the sound sources are distributed in 3D space. Compared with conventional 3D beamforming, the number of microphones required for the measurement is reduced. When beamforming is extended to 3D, a spherical basis is selected as the basis function instead of the Fourier basis. The algorithm flowchart of this paper is shown in Figure 1. Note that conventional beamforming is just one convenient choice here; any beamformer based on the cross-spectral matrix (e.g., MVDR and MUSIC [35,36]) can be applied in the proposed methods. The current paper is organized as follows. In Section 2, the non-synchronous measurements are developed in the context of 3D beamforming. Simulation results for a single synchronous measurement (i.e., one planar array) and non-synchronous measurements (i.e., a non-synchronously moving array) are compared in Section 3. In Section 4, the performance of the non-synchronous measurements and the synchronous measurement is discussed. In Section 5, an experiment is conducted to validate the performance of the non-synchronous measurements in 3D imaging. The main conclusions of this study are summarized in Section 6.

2. Forward Model of Acoustic Imaging and Acoustic Measurement

2.1. Conventional Beamforming

Denote the sound pressure measured by the microphone at position $\mathbf{r}$ as $p(\mathbf{r})$; $p(\mathbf{r})$ is the sum of the particular pressure $p_P(\mathbf{r})$ and the random pressure $p_N(\mathbf{r})$. $p_P(\mathbf{r})$ is the sound pressure transmitted from the sound sources $s$ at $\mathbf{r}'$ to the microphone at $\mathbf{r}$, which can be obtained from the Green's function $G(\mathbf{r}\,|\,\mathbf{r}')$. $p_N(\mathbf{r})$ is the measurement noise, which is usually assumed to follow a Gaussian distribution [37,38]. For a given frequency $\omega$, $p(\mathbf{r})$ can be expressed as
$$p(\mathbf{r}, \omega) = p_P(\mathbf{r}, \omega) + p_N(\omega) \tag{1}$$
where
$$p_P(\mathbf{r}, \omega) = \int_{\mathbf{r}' \in \Gamma} G(\mathbf{r}, \omega \,|\, \mathbf{r}')\, s(\mathbf{r}', \omega)\, d\mathbf{r}', \qquad G(\mathbf{r}, \omega \,|\, \mathbf{r}') = \frac{e^{-jk \|\mathbf{r} - \mathbf{r}'\|_2}}{4 \pi \|\mathbf{r} - \mathbf{r}'\|_2} \tag{2}$$
$G(\mathbf{r}, \omega \,|\, \mathbf{r}')$ is the free-field Green's function that describes the acoustic propagation between the sources at $\mathbf{r}'$ and the microphones at $\mathbf{r}$. $k = \omega / c$ is the wavenumber, i.e., the number of radians per unit distance, and $c$ is the wave velocity. The $\ell_2$-norm is denoted $\|\cdot\|_2$.
Substituting Equation (2) into Equation (1) and rewriting the measured pressure signal in matrix form, we get
$$\mathbf{p}(\mathbf{r}, \omega) = \mathbf{G}(\mathbf{r}, \omega \,|\, \mathbf{r}')\, \mathbf{s}(\mathbf{r}', \omega) + \mathbf{p}_N(\omega). \tag{3}$$
Denoting the number of microphones as $M$ and the number of sources as $S$, the sizes are $\mathbf{p}(\mathbf{r}, \omega) \in \mathbb{C}^{M \times 1}$, $\mathbf{G}(\mathbf{r}, \omega \,|\, \mathbf{r}') \in \mathbb{C}^{M \times S}$ with $\mathbf{G} = (\mathbf{g}_1, \ldots, \mathbf{g}_s, \ldots, \mathbf{g}_S)$, $\mathbf{s}(\mathbf{r}', \omega) \in \mathbb{C}^{S \times 1}$, and $\mathbf{p}_N(\omega) \in \mathbb{C}^{M \times 1}$. The cross-spectral matrix $\mathbf{C}(\omega)$ can be obtained as
$$\mathbf{C}(\omega) = E\{\mathbf{p}(\mathbf{r}, \omega)\, \mathbf{p}^H(\mathbf{r}, \omega)\} \approx \frac{1}{I} \sum_{i=1}^{I} \mathbf{p}_i(\mathbf{r}, \omega)\, \mathbf{p}_i^H(\mathbf{r}, \omega) \tag{4}$$
where $E\{\cdot\}$ represents the mathematical expectation, $(\cdot)^H$ represents the Hermitian transpose, and $I$ is the number of pressure snapshots; the size is $\mathbf{C}(\omega) \in \mathbb{C}^{M \times M}$.
Conventional beamforming is designed to locate sound sources by compensating for the time delay and amplitude attenuation between the microphones and the virtual source. The output of the beamformer, $Q_{\mathrm{BF}}$, is calculated from the cross-spectral matrix as
$$Q_{\mathrm{BF}} = \mathbf{h}_n^H\, \mathbf{C}(\omega)\, \mathbf{h}_n \tag{5}$$
The weights $\mathbf{h}_n$ are designed to be independent of the array data or data statistics, and compensate for the time delays and amplitude attenuation of the forward propagation. They can be obtained as
$$\mathbf{h}_n = \frac{\mathbf{g}_n}{\|\mathbf{g}_n\|_2^2}, \qquad n = 1, \ldots, S \tag{6}$$
The beamformer searches for the source location by steering the microphone array. When the virtual source location coincides with the real source position, the output of the conventional beamformer reaches its maximum. A more detailed description of conventional beamforming can be found in Chu's work [35].
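The steps above can be sketched in a few lines of NumPy: build the free-field steering vector for each candidate grid point, form the matched weights $\mathbf{h} = \mathbf{g}/\|\mathbf{g}\|_2^2$, and evaluate $Q_{\mathrm{BF}} = \mathbf{h}^H \mathbf{C} \mathbf{h}$. The array geometry, frequency, and four-point scan grid below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def steering_vector(mics, point, k):
    """Free-field Green's function column g from one candidate point to all mics."""
    d = np.linalg.norm(mics - point, axis=1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

def conventional_beamforming(C, mics, grid, k):
    """Beamformer output Q_BF = h^H C h with matched weights h = g / ||g||_2^2."""
    Q = np.empty(len(grid))
    for n, r in enumerate(grid):
        g = steering_vector(mics, r, k)
        h = g / np.real(np.vdot(g, g))        # divide by ||g||_2^2
        Q[n] = np.real(np.conj(h) @ C @ h)
    return Q

# Illustrative setup: 56 random mics in the z = -0.5 m plane, one source at 4 kHz.
rng = np.random.default_rng(0)
mics = np.column_stack([rng.uniform(-0.3, 0.3, 56),
                        rng.uniform(-0.3, 0.3, 56),
                        np.full(56, -0.5)])
src = np.array([0.1, 0.0, 0.0])
k = 2 * np.pi * 4000 / 343                    # wavenumber at 4 kHz, c = 343 m/s
s = rng.standard_normal(100) + 1j * rng.standard_normal(100)
P = np.outer(steering_vector(mics, src, k), s)   # M x I snapshot matrix, no noise
C = P @ P.conj().T / P.shape[1]                  # snapshot-averaged CSM

grid = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0]])
Q = conventional_beamforming(C, mics, grid, k)
print(grid[np.argmax(Q)])                        # peaks at the true source position
```

With matched weights, the output is maximal when the steering point coincides with the true source, which is why the scan over the grid localizes it.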

2.2. Non-Synchronous Measurements Theory

Unlike synchronous measurement, in which all microphones acquire data simultaneously, non-synchronous measurements receive data by moving the microphone array under the assumption of a stable sound field. There is evidence that non-synchronous measurements are effective in 2D beamforming. For non-synchronous measurements, a denser or larger virtual array can be obtained by moving the microphone array in a 2D space, as shown in Figure 2. In denser virtual arrays, the working frequency of the beamforming algorithms is expanded; in larger virtual arrays, the entire radiating object is also better covered [39].
Whether for synchronous or non-synchronous measurements, sound source localization is based on the beamforming method. The cross-spectral matrix $\mathbf{C}(\omega)$ is the main difference between the synchronous and non-synchronous measurements. For synchronous measurements, $\mathbf{C}(\omega)$ is an $M \times M$ full matrix, and the rank of $\mathbf{C}(\omega)$ is equal to the number of uncorrelated sources $S$. For non-synchronous measurements, a complete cross-spectral matrix can no longer be obtained directly, because the microphone array moves between acquisitions. The assembled matrix $\hat{\mathbf{C}}^P(\omega)$ loses the information in the off-diagonal blocks when the cross-spectral matrix of each measurement, $\mathbf{C}^{(i)}(\omega)$, is placed on the diagonal blocks of a larger matrix. The superscript $P$ refers to the total number of non-synchronous measurements and the superscript $i$ to the $i$-th measurement; $\hat{\mathbf{C}}^P(\omega)$ is an $MP \times MP$ matrix. When the elements at the off-diagonal positions of $\hat{\mathbf{C}}^P(\omega)$ are set to zero in the absence of references, the rank of $\hat{\mathbf{C}}^P(\omega)$, $r(\hat{\mathbf{C}}^P(\omega))$, equals the product of the number of sources and the number of moves, $SP$. The rank of the cross-spectral matrix is considered to be the number of sources. The non-synchronous measurement method localizes the source positions by supplementing the missing data in $\hat{\mathbf{C}}^P(\omega)$; the completed cross-spectral matrix $\tilde{\mathbf{C}}^P(\omega)$ has the same rank as a single measurement's $\mathbf{C}^{(i)}(\omega)$. An example clarifies the spectral matrix issue in non-synchronous measurements: assume the microphone array includes 25 microphones and the scanning plane contains three sound sources. Figure 3a shows the cross-spectral matrix $\mathbf{C}^{(1)}(\omega)$ of one measurement: it is a $25 \times 25$ full matrix, and the rank of $\mathbf{C}^{(i)}(\omega)$, $r(\mathbf{C}^{(i)})$, is equal to 3.
If the prototype array moves four times, $\hat{\mathbf{C}}^4(\omega)$ (as shown in Figure 3b) is obtained by arranging the single-measurement matrices $\mathbf{C}^{(i)}(\omega)$, $i = 1, 2, 3, 4$, in the diagonal block positions. $\hat{\mathbf{C}}^4(\omega)$ is a $(25 \times 4) \times (25 \times 4)$ matrix with data missing at the off-diagonal positions, and its rank $r(\hat{\mathbf{C}}^4)$ is equal to 12 (i.e., $3 \times 4$). Figure 3c shows the target spectral matrix $\tilde{\mathbf{C}}^4(\omega)$: the missing data are supplemented by the non-synchronous measurement method, and the rank $r(\tilde{\mathbf{C}}^4)$ is restored to 3.
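The rank bookkeeping of this 25-microphone, 3-source, 4-move example can be verified numerically. In the sketch below, random $25 \times 3$ mixing matrices stand in for the true steering matrices (an assumption made purely for illustration); each single-position CSM is then rank 3, and the block-diagonal assembly has rank 12.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
M, S, P = 25, 3, 4                         # mics per position, sources, moves

# Each single-position CSM C^(i) = A_i A_i^H has rank S (3 uncorrelated sources).
blocks = []
for _ in range(P):
    A = rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))
    blocks.append(A @ A.conj().T)

C_hat = block_diag(*blocks)                # off-diagonal blocks missing (zeros)
print(np.linalg.matrix_rank(blocks[0]))    # 3: rank of one measurement
print(np.linalg.matrix_rank(C_hat))        # 12 = S * P
```

Completing the off-diagonal blocks consistently is exactly what restores the rank from $SP$ back to $S$.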
The core problem in the non-synchronous measurements is the data-missing spectral matrix completion problem. The process of finding a full cross-spectral matrix can be described as
$$\begin{aligned} \text{find } \ & \tilde{\mathbf{C}} \in \mathbb{C}^{MP \times MP} \\ \text{such that } \ & r(\tilde{\mathbf{C}}) = S \\ & \|\mathcal{A}(\tilde{\mathbf{C}}) - \hat{\mathbf{C}}\|_F \le \varepsilon_1 \\ & \|\Psi \tilde{\mathbf{C}} \Psi^H - \tilde{\mathbf{C}}\|_F \le \varepsilon_2 \\ & \tilde{\mathbf{C}}^H = \tilde{\mathbf{C}} \succeq 0 \end{aligned} \tag{7}$$
The four additional constraints are explained as follows:
  • The 1st constraint: $r(\tilde{\mathbf{C}}) = S$ ensures that the rank of the reconstructed cross-spectral matrix is still equal to the number of sources $S$.
  • The 2nd constraint: $\mathcal{A}(\cdot)$ denotes the sampling operator that keeps the elements in the diagonal blocks of a matrix, so $\mathcal{A}(\tilde{\mathbf{C}}) \in \mathbb{C}^{MP \times MP}$, identical in size to $\tilde{\mathbf{C}}$. $\|\mathcal{A}(\tilde{\mathbf{C}}) - \hat{\mathbf{C}}\|_F \le \varepsilon_1$ ensures that the difference between $\mathcal{A}(\tilde{\mathbf{C}})$ and $\hat{\mathbf{C}}$ in the Frobenius norm is less than a given tolerance $\varepsilon_1$.
  • The 3rd constraint: $\Psi$ is a projection matrix. $\Psi \tilde{\mathbf{C}} \Psi^H = E\{\Psi \tilde{\mathbf{P}} \tilde{\mathbf{P}}^H \Psi^H\}$ is the cross-spectral matrix of $\Psi \tilde{\mathbf{P}}$, where the projected vector $\Psi \tilde{\mathbf{P}}$ can be regarded as the smoothed pressure [40] of the non-synchronously measured pressure $\tilde{\mathbf{P}}$. To ensure the spatial continuity of the acoustic field, $\|\Psi \tilde{\mathbf{C}} \Psi^H - \tilde{\mathbf{C}}\|_F = \|E\{\Psi \tilde{\mathbf{P}} \tilde{\mathbf{P}}^H \Psi^H\} - E\{\tilde{\mathbf{P}} \tilde{\mathbf{P}}^H\}\|_F \le \varepsilon_2$ is added here, requiring the difference between the cross-spectral matrices of $\Psi \tilde{\mathbf{P}}$ and $\tilde{\mathbf{P}}$ to be smaller than a given tolerance $\varepsilon_2$. A detailed discussion of this constraint is given in Section 2.3.
  • The 4th constraint: $\tilde{\mathbf{C}}^H = \tilde{\mathbf{C}} \succeq 0$ ensures that $\tilde{\mathbf{C}}$ is Hermitian and positive semi-definite.
From the 1st constraint, the eigenvalue vector of the cross-spectral matrix should be $S$-sparse: the number of non-zero eigenvalues is $S$, equal to the number of sound sources. Estimating the rank of a matrix, however, remains a difficult problem. An alternative model has been proposed to address it: the cross-spectral matrix is considered a full-rank matrix with a few dominant eigenvalues. This model treats the eigenvalue spectrum as "weakly sparse", in which the sorted eigenvalues of the spectral matrix decay rapidly according to a power law, rather than as "completely $S$-sparse". Equation (7) can then be realized in another form of constrained optimization as
$$\begin{aligned} \underset{\tilde{\mathbf{C}}}{\text{minimize}} \ & \|\tilde{\mathbf{C}}\|_* \\ \text{subject to } \ & \|\mathcal{A}(\tilde{\mathbf{C}}) - \hat{\mathbf{C}}\|_F \le \varepsilon_1 \\ & \|\Psi \tilde{\mathbf{C}} \Psi^H - \tilde{\mathbf{C}}\|_F \le \varepsilon_2 \\ & \tilde{\mathbf{C}}^H = \tilde{\mathbf{C}} \succeq 0 \end{aligned} \tag{8}$$
where $\|\cdot\|_*$ denotes the nuclear norm of a matrix, defined as the sum of its eigenvalues $\lambda_i$, i.e., $\|\cdot\|_* = \sum_{i=1}^{MP} \lambda_i(\cdot)$. The objective $\text{minimize}_{\tilde{\mathbf{C}}}\, \|\tilde{\mathbf{C}}\|_*$ seeks the $\tilde{\mathbf{C}}$ with the minimum nuclear norm.
2D beamforming generally assumes that the distance between the source plane and the measurement plane is known, and the non-synchronous measurements move the microphone array within one plane, making the virtual microphone array denser or larger (see Figure 2). 3D beamforming assumes that the sources are distributed in a 3D space, and the distribution of the entire 3D sound source space is obtained by iteratively scanning over the measured distance. In most studies, 3D beamforming is still implemented with planar array measurements, which leads to low spatial resolution. In this paper, the non-synchronous measurements are used to measure the 3D space: the measurement array envelops the 3D sound source space to improve the resolution in 3D. As shown in Figure 4, the original array is located in the x–y plane, and the array can then be moved sequentially to the y–z plane and the x–z plane, such that the scattering object is surrounded by the virtual microphone array on three planes. This study aims to extend the application of non-synchronous measurements to 3D beamforming.

2.3. Spatial Basis and Spatial Continuity of the Acoustic Field

The sound pressure signal captured by the microphones, $P$, can be expressed using a set of spatial bases as
$$P = \sum_{i=1}^{n} \phi_i(\mathbf{r})\, \vartheta_i = \Phi \vartheta \tag{9}$$
where $\Phi$ is the spatial basis vector and $\vartheta$ is the coefficient vector; $\phi_i(\mathbf{r})$ and $\vartheta_i$ are the $i$-th basis and the corresponding coefficient, respectively. The Fourier basis $\Phi(x, y) = e^{i(k_x x + k_y y)}$ is chosen for 2D beamforming, where $x$ and $y$ are the microphones' coordinates, and $k_x$ and $k_y$ are the wavenumbers along the $x$ and $y$ directions. For 3D beamforming, a 3D spatial basis should be considered. In this paper, spherical harmonics are chosen (as shown in Figure 5); their complete orthonormal form in spherical coordinates $(r, \theta, \varphi)$, $Y_l^m(\theta, \varphi)$ of order $l$ and degree $m$, can be written as
$$Y_l^m(\theta, \varphi) = (-i)^{m + |m|} \sqrt{\frac{2l + 1}{4\pi} \frac{(l - |m|)!}{(l + |m|)!}}\; P_l^{|m|}(\cos\theta)\, e^{i m \varphi} \tag{10}$$
where $l \in \mathbb{N}$ and $|m| \le l$. $P_l^m(\cdot)$ is the associated Legendre function:
$$P_l^m(x) = (1 - x^2)^{|m|/2}\, \frac{d^{|m|}}{dx^{|m|}} P_l(x) \tag{11}$$
$P_l(\cdot)$ is the Legendre polynomial of degree $l$:
$$P_l(x) = \frac{1}{2^l\, l!}\, \frac{d^l}{dx^l} (x^2 - 1)^l \tag{12}$$
The negative-order spherical harmonics $Y_l^{-m}(\theta, \varphi)$ can be obtained by rotating the positive harmonics by $90°/m$ around the z-axis. The pressure signal at one microphone, $P(r, \theta, \varphi)$, can be rewritten in terms of spherical harmonics as
$$P(r, \theta, \varphi) = \sum_{l=0}^{n} \sum_{m=-l}^{l} \vartheta_{lm}(r)\, Y_l^m(\theta, \varphi) \tag{13}$$
where $\vartheta_{lm}(r)$ is the corresponding coefficient.
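The spherical harmonic definition above can be evaluated directly with SciPy's associated Legendre function `lpmv`; the helper below is a literal transcription (the $(-i)^{m+|m|}$ prefactor is kept as written, and equals 1 for $m \le 0$).

```python
import numpy as np
from scipy.special import lpmv, factorial

def Y_lm(l, m, theta, phi):
    """Spherical harmonic of order l, degree m (|m| <= l); theta is the polar
    angle, phi the azimuth. lpmv supplies the associated Legendre function."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - abs(m)) / factorial(l + abs(m)))
    return ((-1j) ** (m + abs(m)) * norm
            * lpmv(abs(m), l, np.cos(theta)) * np.exp(1j * m * phi))

print(abs(Y_lm(0, 0, 0.3, 0.7)))   # 1/sqrt(4*pi) ~ 0.2821, independent of angle
print(Y_lm(1, 0, 0.5, 0.0).real)   # sqrt(3/(4*pi)) * cos(0.5) ~ 0.4288
```

Stacking $Y_l^m$ values over all microphone directions, for all $(l, m)$ up to the truncation order $n$, yields the basis matrix $\Phi$ used in the projection constraint.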
Assuming the spatial continuity of the acoustic field, the reconstructed field should be smooth, and the sound pressures measured by two adjacent microphones should have comparable levels. Such a constraint should be included in the optimization model. Since the best approximation of the vector $P$ in the subspace $R(\Phi)$ is its projection $\mathrm{Proj}_{R(\Phi)} P$, we need the orthogonal projection matrix onto the subspace $R(\Phi)$. According to Equation (9), the coefficient $\vartheta$ can be obtained from the measured pressure $P$ as
$$\vartheta = \Phi^{\dagger} P \tag{14}$$
where $(\cdot)^{\dagger}$ denotes the pseudo-inverse of a matrix, used because $\Phi$ is not generally invertible. The smoothed pressure $\tilde{P}$ can be obtained by
$$\tilde{P} = \Phi \Phi^{\dagger} P = \Psi P \tag{15}$$
where $\Psi = \Phi \Phi^{\dagger}$ is the orthogonal projection matrix, and $\tilde{P} = \mathrm{Proj}_{R(\Phi)} P = \Phi \Phi^{\dagger} P$ is the projection of $P$ onto the space $R(\Phi)$. The cross-spectral matrix of the smoothed pressure, $\tilde{C}$, is given by
$$\tilde{C} = E(\tilde{P} \tilde{P}^H) = E(\Psi P P^H \Psi^H) = \Psi\, E(P P^H)\, \Psi^H = \Psi C \Psi^H \tag{16}$$
If the smoothed pressure $\tilde{P}$ is already in the space $R(\Phi)$, the projection of $\tilde{P}$ onto $R(\Phi)$ is itself. The constraint $\|\Psi \tilde{C} \Psi^H - \tilde{C}\|_F \le \varepsilon_2$ is therefore included in the optimization model, where $\Psi \tilde{C} \Psi^H$ denotes the re-projection result and $\tilde{C}$ the smoothed result. $\Psi = \Phi \Phi^{\dagger}$ encodes the microphone position information into the matrix $\tilde{C}$.
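The projector properties used in this argument are easy to check numerically. In the sketch below, an arbitrary random matrix stands in for the spherical harmonic basis $\Phi$ (an illustrative assumption; the real $\Phi$ is built from $Y_l^m$ evaluated at the microphone positions).

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((56, 10))      # 56 mics, 10 basis functions (illustrative)
Psi = Phi @ np.linalg.pinv(Phi)          # orthogonal projector onto R(Phi)

P = rng.standard_normal(56)              # one pressure snapshot
P_smooth = Psi @ P                       # smoothed pressure

# A projector is idempotent: re-projecting the smoothed pressure changes nothing.
print(np.allclose(Psi @ Psi, Psi))             # True
print(np.allclose(Psi @ P_smooth, P_smooth))   # True
```

The second property is exactly the reasoning behind the $\|\Psi \tilde{C} \Psi^H - \tilde{C}\|_F \le \varepsilon_2$ constraint: a spatially smooth field already lies in $R(\Phi)$, so re-projection should leave its cross-spectral matrix nearly unchanged.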

3. Simulation Results

For conventional 2D beamforming, the plane where the sound sources are located is discretized into $N$ grids with known positions. For 3D acoustic imaging, the observation zone is extended from one plane to a rectangular box. In the current numerical experiments, the center of the scanned rectangular box is at the origin, and its size is 0.4 m × 0.4 m × 0.4 m. The box is discretized uniformly by a 41 × 41 × 41 grid with a spacing of 0.01 m. Six point sources with equal unit magnitudes are located at (−0.1, −0.1, −0.1), (−0.1, 0.1, −0.1), (0.1, −0.1, −0.1), (−0.1, 0.1, 0.1), (0.1, 0.1, 0.1), and (0.1, −0.1, 0.1) (m). These sound sources lie at vertices of an imaginary cube centered at the origin with a side length of 0.2 m. The source signals are generated from random white noise. The prototype planar array, with 56 microphones distributed along an Archimedean spiral, is initially located in the z = −0.5 m plane, with its center at (0, 0, −0.5) (m). The beamforming frequency is set to 4000 Hz, and the signal-to-noise ratio (SNR) is set to 20 dB. To illustrate the advantages of 3D beamforming with non-synchronous measurements, the results of 3D imaging by conventional beamforming and by the non-synchronous measurements are compared in the following sections. The results of 3D imaging by conventional beamforming with the original planar array are given in Section 3.1. As a comparison, the results of 3D imaging by the non-synchronous measurements with orthogonal and non-orthogonal moving arrays are shown in Section 3.2 and Section 3.3, respectively. Orthogonal moving arrays mean that the initial microphone array and the moved microphone array are perpendicular to each other; non-orthogonal moving arrays mean that there is some other angle between the microphone array before and after moving.
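The scan-box discretization described above can be reproduced as follows; this is a sketch of the stated configuration (grid extent, spacing, and source positions), not the authors' original simulation code.

```python
import numpy as np

# 0.4 m cube centred at the origin, 41 points per axis -> 0.01 m grid spacing
axis = np.linspace(-0.2, 0.2, 41)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
scan_grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

# The six unit-magnitude point sources on vertices of a 0.2 m cube
sources = np.array([[-0.1, -0.1, -0.1], [-0.1, 0.1, -0.1], [0.1, -0.1, -0.1],
                    [-0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, -0.1, 0.1]])

print(scan_grid.shape[0])               # 68921 = 41**3 scan points
print(round(axis[1] - axis[0], 3))      # 0.01 m spacing
```

Iterating the beamformer over all 68,921 grid points produces the 3D source power map that the following figures slice through.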

3.1. 3D Imaging by the Conventional Beamforming with a Planar Array

Figure 6a shows the geometry of the microphone array and the locations of the sound sources in the numerical simulation. The microphone array is placed at z = −0.5 m, with its center at (x = 0 m, y = 0 m, z = −0.5 m). The scanned box is shown partially transparent to allow visualization of the sound sources. Figure 6b shows the source power maps in the observation zone, normalized by the maximum value. Six slices are presented in this figure to give a better view of the sound source locations; the simulated sound sources are located at the intersections of these slices. Figure 6c–e show the source power maps at three typical slices (x = 0.1 m, y = 0.1 m, z = −0.1 m), respectively. The positions of the simulated sound sources are marked with "+" in the figures. Figure 6e shows that, on the z = −0.1 m slice, which is parallel to the microphone array, conventional beamforming locates the sound sources well and has a good spatial resolution in both the x and y directions. As shown in Figure 6c,d, on the x = 0.1 m and y = 0.1 m slices, which are perpendicular to the microphone array, it is difficult to distinguish the sound source locations by conventional beamforming: the main lobes of the sound sources merge in the perpendicular direction, which leads to a spatial resolution too poor to locate the source positions accurately.

3.2. 3D Imaging by the Non-Synchronous Measurements with Orthogonal Moving Arrays

Orthogonal moving arrays mean that the moved microphone array is perpendicular to the plane of the previously measured array. Figure 7a shows the microphone array configuration when the prototype array is moved once. The initial microphone array is located on the z = −0.5 m plane, consistent with the array position in Section 3.1, and the moved microphone array is located on the y = −0.5 m plane, with its geometric center at (x = 0 m, y = −0.5 m, z = 0 m). The results in the 3D zone and at typical slices are shown in Figure 7b–e. Comparing the results in Figure 6d and Figure 7d, moving the microphone array orthogonally once improves the resolution in the z-direction of the x–z plane of the initial prototype array. The positions of the sound sources in the z-direction are not well distinguished in Figure 6d due to the merging of the main lobes, while the two source positions in the z-direction are located accurately in Figure 7d. Comparing the results in Figure 6c and Figure 7c, the resolution in the y–z plane is only slightly improved when the microphone array is moved orthogonally once; the resolution in the z-direction is still not sufficient to locate the positions of the sound sources.
In the next simulation case, the microphone array is moved orthogonally twice: it is moved vertically again, based on the simulation case in Figure 7. In Figure 8a, the "third" microphone array is located on the x = 0.5 m plane, with its center at (x = 0.5 m, y = 0 m, z = 0 m). Non-synchronous measurements using three orthogonal microphone array positions achieve a good spatial resolution in the x, y, and z directions. Figure 6, Figure 7 and Figure 8 show that non-synchronous measurement by orthogonally moving the microphone array can overcome the lack of spatial resolution in the normal direction of the planar array. A planar microphone array can only guarantee the spatial resolution in the plane parallel to the array; when the microphone array is moved orthogonally twice in succession, a high spatial resolution can be obtained in all directions during 3D acoustic imaging.

3.3. 3D Imaging by the Non-Synchronous Measurements with Non-Orthogonal Moving Arrays

In the previous simulation configurations, the microphone array is always moved orthogonally. Due to limitations of the measurement space or movement errors, it is not always guaranteed that the microphone array remains orthogonal. Figure 9a shows the microphone array configuration when the prototype array is rotated 45° about the x-axis. The miniatures in the upper-right and lower-right corners are the x–z and y–z views of the microphone array configuration, respectively. The initial microphone array is located on the z = −0.5 m plane. The "second" microphone array is at an angle of 45° to the z = −0.5 m plane, and its center is 0.5 m away from the origin (0, 0, 0). Figure 9 shows that the "second" microphone array still improves the resolution in the z-direction on the y = 0.1 m plane. Compared with the results of the orthogonally moved microphone array in Figure 7, the main lobes of the sources at y = 0.1 m are relatively larger. As shown in Figure 9c, the extension direction of the main lobes in the x = 0.1 m plane changes slightly. The resolution in the x = 0.1 m plane needs to be improved by further moving the microphone array.
The microphone array is moved again based on the previous analysis. The configuration of the microphone array after moving is shown in Figure 10a; the miniatures in the upper-right and lower-right corners are the x–z and y–z views of the configuration. The "third" microphone array is the initial microphone array rotated by −45° about the y-axis, with its center still 0.5 m from the origin (0, 0, 0). The results show that the resolution in the x = 0.1 m plane is improved: the main lobes of the sound sources can be distinguished in the z-direction, and the sound source localization is relatively better than in Figure 6 and Figure 9. We can conclude that non-synchronous measurements in 3D space can improve the spatial resolution. Compared with Figure 8e, Figure 10e shows a more accurate sound source localization result, because the density of the microphone array in the x–y slice increases.

4. Comparison between the Synchronous Measurements and Non-Synchronous Measurements

The results of 3D imaging with synchronous and non-synchronous measurements are compared in this section. For the non-synchronous measurements, the microphone array is moved orthogonally twice; the corresponding synchronous measurement has the three microphone arrays capturing data simultaneously. Figure 11a shows the cross-spectral matrices from three independent measurements, arranged as blocks on the matrix diagonal. The elements in the off-diagonal block positions of the measured spectral matrix are missing and have been filled with zeros in this figure to visualize the unknown parts. Figure 11b shows the spectral matrix completed by the proposed non-synchronous measurement method; all elements of this spectral matrix are known in both the diagonal and off-diagonal blocks. Figure 11c shows the full cross-spectral matrix in the case where all microphones in the three arrays capture data simultaneously (i.e., the synchronous measurement). Let $\mathbf{D}_{\mathrm{non}}$ denote the position of the sound source obtained by the non-synchronous measurements. The relative error in the $\ell_2$-norm quantifies the difference between the real source position $\mathbf{D}_{\mathrm{real}}$ and $\mathbf{D}_{\mathrm{non}}$: $\mathrm{Error}_{\mathrm{position}} = (\|\mathbf{D}_{\mathrm{non}}\|_2 - \|\mathbf{D}_{\mathrm{real}}\|_2) / \|\mathbf{D}_{\mathrm{real}}\|_2$. The beamforming frequency is set to 4000 Hz, and the signal-to-noise ratio to 20 dB. In this case, the maximum relative error $\mathrm{Error}_{\mathrm{position}}$ over the six source positions is 1.926%. Since the beamforming algorithm is suited to higher working frequencies, we selected frequencies of 4000 Hz (Figure 12), 6000 Hz (Figure 13), and 8000 Hz (Figure 14). The beamforming map slices of the synchronous measurement are shown in Figure 12a–c, Figure 13a–c, and Figure 14a–c, and those of the non-synchronous measurements in Figure 12d–f, Figure 13d–f, and Figure 14d–f.
The beamforming results in Figure 12, Figure 13 and Figure 14 show that (1) both the synchronous and the non-synchronous measurements can accurately locate the sound sources with good spatial resolution; and (2) compared with the synchronous measurements, the non-synchronous measurements increase the sidelobes slightly, but significantly reduce the number of microphones required to localize the spatial sound sources.
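For reference, a conventional frequency-domain beamformer evaluated over a 3D scan grid from a (completed or full) cross-spectral matrix might be sketched as follows. This assumes a monopole propagation model and unit-norm steering vectors, details the text does not specify:

```python
import numpy as np

def conventional_beamformer(csm, mic_pos, grid_pts, freq, c=343.0):
    """Conventional (delay-and-sum) beamformer power over a 3D scan grid.

    csm      : (M, M) cross-spectral matrix at the analysis frequency
    mic_pos  : (M, 3) microphone coordinates (m)
    grid_pts : (G, 3) scan-grid coordinates (m); must not coincide with mics
    freq     : analysis frequency (Hz); c is the speed of sound (m/s)
    """
    k = 2.0 * np.pi * freq / c                              # wavenumber
    # Distances from every grid point to every microphone: (G, M)
    r = np.linalg.norm(grid_pts[:, None, :] - mic_pos[None, :, :], axis=-1)
    # Monopole steering vectors, normalized to unit 2-norm per grid point
    w = np.exp(-1j * k * r) / r
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    # Beamformer output w^H C w for every grid point
    out = np.einsum('gm,mn,gn->g', w.conj(), csm, w)
    return out.real
```

With a noise-free rank-one CSM built from a single source, the output peaks at the grid point containing that source.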
The relative error quantifies the difference between the completed spectral matrix (S_com) and the full spectral matrix (S_full) through the Frobenius norm: Error_CSM = (‖S_com‖_F − ‖S_full‖_F)/‖S_full‖_F. Figure 15 shows this error for different frequencies and different SNRs. Although the performance of the non-synchronous measurements is influenced by noise, the error between S_com and S_full decreases as the SNR increases. When the SNR is relatively low (for example, SNR = 20 dB or 10 dB), the noise plays a key role in the acoustic reconstruction results. As the SNR and the frequency increase, the difference between the completed spectral matrix and the full spectral matrix gradually decreases.
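The error of Figure 15 follows directly from the definition above; a small sketch (note that, as written in the text, it compares the two matrix norms rather than taking the norm of the difference):

```python
import numpy as np

def csm_relative_error(s_com, s_full):
    """Error_CSM = (||S_com||_F - ||S_full||_F) / ||S_full||_F, as in the text.

    An entry-wise variant, ||S_com - S_full||_F / ||S_full||_F, is also
    common in the matrix-completion literature.
    """
    nf = np.linalg.norm(s_full, 'fro')
    return (np.linalg.norm(s_com, 'fro') - nf) / nf
```

Evaluating this for each frequency and SNR produces the curves of Figure 15.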

5. Experimental Verification

5.1. Experimental Setup

An experiment applying the non-synchronous measurements to 3D acoustic imaging was conducted in a full-anechoic chamber with a cut-off frequency of 40 Hz and a background noise of −1 dB(A). The acoustic vibration measurement platform was provided by the Zhejiang Institute of Metrology (ZJIM). The microphones were MPA416 models (Beijing Prestige Technology Co., Beijing, China), and the acquisition instrument was a DH5922D (Jiangsu Donghua Testing Technology Co., Ltd., Jingjiang, China). Figure 16a shows a photograph of the on-site experimental setup. To display the spatial locations of the sound sources, a schematic diagram of the source locations and the geometry of the microphone array is drawn in Figure 16b. The four speakers were not coplanar in the experiment; the sources were driven by Bluetooth speakers. As in the previous simulations, the four speakers were located at four vertices (A, C, D and B1) of an imaginary cube ABCD-A1B1C1D1 with a side length of about 0.25 m. To describe the spatial positions of the sources and the array more conveniently in the following, the faces and vertices of this imaginary cube are used as the reference for positioning. The four speakers were supported by four tripods placed on marked locations on the ground. The microphone array has 56 elements and the same geometry as in the previous simulation. Taking the center of the imaginary cube (O) as the origin, a Cartesian coordinate system is established in Figure 16b. The microphone array was initially placed parallel to the plane ABB1A1 at a distance of 0.6 m, i.e., on the x = −0.725 m plane, and its position was adjusted so that the array center (O1) was aligned with the cube center (O). The strategy for moving the microphone array is shown in Figure 17 (top view).
The 1st position of the microphone array was on the x = −0.725 m plane. For the second measurement, the array was rotated 90 degrees counterclockwise, so the 2nd position was on the y = −0.725 m plane. During the measurements, the four speakers simultaneously emitted a 4000 Hz pure tone. When more than 32 channels of the DH5922D were in use, the maximum sampling rate was 128 kHz per channel, so a sampling frequency of 50 kHz was selected. The sampling time was 30 s.

5.2. Experimental Results

The 3D beamforming results of a single planar microphone array and of the non-synchronous measurements are compared in Figure 18. A significant difference is visible among the three cases. With only the planar microphone array in the 1st position (Figure 18a) or the 2nd position (Figure 18b), there are one or two distinct maximum peaks in the field of view, together with a large sidelobe in the normal direction of each position, so it is impossible to locate the spatial sound sources accurately. With the non-synchronous measurements (Figure 18c), three distinct maxima appear in the field of view, and the sidelobes in the normal direction are reduced significantly. The 4th sound source is located at the corner on the other side, hidden by the slices of the 3D beamforming map. The spatial localization accuracy of the sound sources is thus significantly improved when the non-synchronous measurements move the arrays in 3D space, especially when the microphone array is perpendicular to the sound source plane.
Three slices (x = −0.13 m, y = −0.13 m, z = −0.13 m) of the 3D beamforming results are plotted in Figure 19. With the single microphone array in the 1st position (i.e., parallel to the y–z plane), two sound sources can be located in the upper-left and lower-right quadrants of the x = −0.13 m plane. In the y = −0.13 m and z = −0.13 m planes, the beamforming maps cannot be used to determine the source positions because the lobes extend too far. With the single microphone array in the 2nd position (i.e., parallel to the x–z plane), the beamforming map in the y = −0.13 m plane exhibits two main lobes corresponding to the positions of two sound sources, while in the x = −0.13 m and z = −0.13 m planes the main lobes are so extended that the source positions cannot be found exactly. When the non-synchronous measurements are adopted, the spatial resolution in both the x = −0.13 m and y = −0.13 m planes is improved relative to the single microphone array; in the z = −0.13 m plane, the beamforming map is improved only slightly. Comparing the beamforming maps of the non-synchronous measurements and the single microphone array suggests that the non-synchronous measurements improve the spatial positioning resolution of the planar microphone array.
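The slice maps of Figure 19 amount to indexing the 3D beamforming output along one axis. A small sketch, with a hypothetical 21-point scan grid and random stand-in data in place of real beamformer output:

```python
import numpy as np

# Hypothetical 3D scan grid: 21 points per axis from -0.3 m to 0.3 m
axis = np.linspace(-0.3, 0.3, 21)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing='ij')
power = np.random.default_rng(1).random(X.shape)  # stand-in for beamformer output

def slice_nearest(power, axis_vals, coord, dim):
    """Return the 2D slice of a 3D beamforming map nearest to `coord`
    along dimension `dim` (0 = x, 1 = y, 2 = z), plus the grid value used."""
    idx = int(np.argmin(np.abs(axis_vals - coord)))
    return np.take(power, idx, axis=dim), axis_vals[idx]

# y-z map nearest to the x = -0.13 m plane
slice_x, x_used = slice_nearest(power, axis, -0.13, 0)
```

Each returned 2D array can then be plotted as one panel of a figure like Figure 19.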

6. Conclusions

Because the scanning area of the 2D beamforming method is a plane, the distance between the microphone array and the sound source must be known in advance, whereas the sound sources of industrial production are distributed randomly in space rather than in a plane. This paper extends conventional beamforming from 2D to 3D. By moving the microphone array in space, the non-synchronous measurements can significantly improve the planar array's positioning resolution in the normal direction. Most current studies achieve 3D sound source localization by increasing the number of microphone arrays, at the cost of more microphones and acquisition channels. The non-synchronous measurements overcome this shortcoming while still achieving high resolution in 3D positioning. The proposed method has extensive application potential in the field of 3D sound source localization.

Author Contributions

Conceptualization, L.Y. and N.C.; methodology, L.Y.; validation, L.Y., Q.G., N.C. and R.W.; formal analysis, L.Y. and N.C.; investigation, R.W.; resources, N.C.; writing—original draft preparation, Q.G.; writing—review and editing, L.Y. and N.C.; visualization, Q.G.; project administration, L.Y.; funding acquisition, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 61701440 and 12074254, the State Key Laboratory of Mechanical System and Vibration under Grant MSV202001, the Science and Technology on Sonar Laboratory under Grant 6142109KF201901, and the State Key Laboratory of Compressor Technology under Grants SKL-YSL201812 and SKL-YSJ201903.

Acknowledgments

Thanks to Ning Yue and Qian Huang of Zhejiang University for helping with the experiment. Thanks to the Zhejiang Institute of Metrology (ZJIM) for acoustic anechoic-chamber support.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Algorithm flowchart of this paper. The data-missing spectral matrix is first obtained by non-synchronous measurements. Then the data-missing spectral matrix is filled with data to obtain the complete spectral matrix. Finally, acoustic imaging is achieved by conventional beamforming.
Figure 2. Schematic diagram of the non-synchronous measurements in a 2D space: (a) measurement with a planar microphone array; (b) non-synchronous measurements’ microphone array for a denser array; (c) non-synchronous measurements’ microphone array for a bigger array. The circle “○” represents a microphone, and the same color represents the microphones in the same test.
Figure 3. (a) C^(1)(ω) ∈ ℂ^{25×25} for one single measurement; r(C^(1)) is the rank of C^(1)(ω). (b) Ĉ_4(ω) ∈ ℂ^{100×100} formed by arranging C^(i)(ω), i = 1, 2, 3, 4 in the diagonal block positions; r(Ĉ_4) is the rank of Ĉ_4(ω). (c) C̃_4(ω) ∈ ℂ^{100×100} obtained from the non-synchronous measurements by completing the missing data in the off-diagonal blocks; r(C̃_4) is the rank of C̃_4(ω).
Figure 4. (a) Non-synchronous original microphone array in the x–y plane (black circles); (b) non-synchronous moving microphone array in space from the x–y plane to the y–z plane (red circles); (c) non-synchronous moving microphone array in space from the x–y plane to the x–z plane (blue circles). The circle “○” represents a microphone, and the same color represents the microphones in the same test.
Figure 5. Spherical harmonics. The real spherical harmonics Y_l^m(θ, φ) for l = 0, 1, 2, 3 (top to bottom) and m = 0, 1, …, l (left to right). The value of the spherical harmonic function is represented by color: the maximum value in red, the minimum in blue.
Figure 6. 3D beamforming result with a single planar array. (a) The simulation setup: the red points "·" represent the sound sources, and points of the same color represent microphones from the same test; the black points "·" represent the microphones of the single planar array test. (b) 3D beamforming map. (c) The beamforming map at the x = 0.1 m slice. (d) The beamforming map at the y = 0.1 m slice. (e) The beamforming map at the z = −0.1 m slice.
Figure 7. 3D beamforming results of the non-synchronous measurements, orthogonally moving the array once. (a) The simulation setup: the red points "·" represent the sound sources, and points of the same color represent microphones from the same test; the black points "·" represent the microphones of the single planar array test, and the blue points "·" the microphones after the first orthogonal move. (b) 3D beamforming map. (c) The beamforming map at the x = 0.1 m slice. (d) The beamforming map at the y = 0.1 m slice. (e) The beamforming map at the z = −0.1 m slice.
Figure 8. 3D beamforming results of the non-synchronous measurements, orthogonally moving the array twice. (a) The simulation setup: the red points "·" represent the sound sources, and points of the same color represent microphones from the same test; the black points "·" represent the microphones of the single planar array test, the blue points "·" the microphones after the first orthogonal move, and the green points "·" the microphones after the second orthogonal move. (b) 3D beamforming map. (c) The beamforming map at the x = 0.1 m slice. (d) The beamforming map at the y = 0.1 m slice. (e) The beamforming map at the z = −0.1 m slice.
Figure 9. 3D beamforming results of the non-synchronous measurements, non-orthogonally moving the array once. (a) The simulation setup: the red points "·" represent the sound sources, and points of the same color represent microphones from the same test; the black points "·" represent the microphones of the single planar array test, and the blue points "·" the microphones after the first non-orthogonal move. The first subfigure shows the x–z view and the second the y–z view. (b) 3D beamforming map. (c) The beamforming map at the x = 0.1 m slice. (d) The beamforming map at the y = 0.1 m slice. (e) The beamforming map at the z = −0.1 m slice.
Figure 10. 3D beamforming results of the non-synchronous measurements, non-orthogonally moving the array twice. (a) The simulation setup: the red points "·" represent the sound sources, and points of the same color represent microphones from the same test; the black points "·" represent the microphones of the single planar array test, the blue points "·" the microphones after the first non-orthogonal move, and the green points "·" the microphones after the second non-orthogonal move. The first subfigure shows the x–z view and the second the y–z view. (b) 3D beamforming map. (c) The beamforming map at the x = 0.1 m slice. (d) The beamforming map at the y = 0.1 m slice. (e) The beamforming map at the z = −0.1 m slice.
Figure 11. Comparison of the spectral matrix between the synchronous and non-synchronous measurements. (a) Measured cross-spectral matrix. (b) The spectral matrix completion from the non-synchronous measurements. (c) Full spectral matrix from the synchronous measurements.
Figure 12. The beamforming results of the synchronous and non-synchronous measurements at 4000 Hz. (a) The beamforming map of the synchronous measurement at the x = 0.1 m slice. (b) The beamforming map of the synchronous measurement at the y = 0.1 m slice. (c) The beamforming map of the synchronous measurement at the z = −0.1 m slice. (d) The beamforming map of the non-synchronous measurements at the x = 0.1 m slice. (e) The beamforming map of the non-synchronous measurements at the y = 0.1 m slice. (f) The beamforming map of the non-synchronous measurements at the z = −0.1 m slice.
Figure 13. The beamforming results of the synchronous and non-synchronous measurements at 6000 Hz. (a) The beamforming map of the synchronous measurement at the x = 0.1 m slice. (b) The beamforming map of the synchronous measurement at the y = 0.1 m slice. (c) The beamforming map of the synchronous measurement at the z = −0.1 m slice. (d) The beamforming map of the non-synchronous measurements at the x = 0.1 m slice. (e) The beamforming map of the non-synchronous measurements at the y = 0.1 m slice. (f) The beamforming map of the non-synchronous measurements at the z = −0.1 m slice.
Figure 14. The beamforming results of the synchronous and non-synchronous measurements at 8000 Hz. (a) The beamforming map of the synchronous measurement at the x = 0.1 m slice. (b) The beamforming map of the synchronous measurement at the y = 0.1 m slice. (c) The beamforming map of the synchronous measurement at the z = −0.1 m slice. (d) The beamforming map of the non-synchronous measurements at the x = 0.1 m slice. (e) The beamforming map of the non-synchronous measurements at the y = 0.1 m slice. (f) The beamforming map of the non-synchronous measurements at the z = −0.1 m slice.
Figure 15. The relative error of the spectral matrix completion ( S com ) and the full spectral matrix ( S Full ) with different frequencies and different signal-to-noise ratios (SNRs).
Figure 16. (a) The on-site experiment setup. (b) The schematic diagram of the sound sources and the microphone array.
Figure 17. The top view of the moving strategy for the microphone array in the experiment.
Figure 18. Comparison of the 3D beamforming results between a single planar microphone array and the non-synchronous measurements. (a) The 1st position; (b) the 2nd position; (c) the non-synchronous measurements.
Figure 19. Comparison of the beamforming slices between a single planar microphone array and the non-synchronous measurements. From top to bottom are the 1st position, the 2nd position, and the non-synchronous measurements. From left to right are the x = −0.13 m, y = −0.13 m, and z = −0.13 m slices.