Article

A Multi-Sensor Data-Fusion Method Based on Cloud Model and Improved Evidence Theory

School of Automation and Electrical Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5902; https://doi.org/10.3390/s22155902
Submission received: 7 July 2022 / Revised: 26 July 2022 / Accepted: 4 August 2022 / Published: 7 August 2022
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

Abstract

The essential components of information-aware systems are heterogeneous multi-sensor devices. Because multi-sensor data can be ambiguous and contradictory, a data-fusion method based on the cloud model and improved evidence theory is proposed. To convert quantitative data into qualitative concepts, the cloud model is employed to construct the basic probability assignment (BPA) function of the evidence corresponding to each data source. To address the issue that traditional evidence theory produces counterintuitive results when fusing conflicting evidence, three measures, the Jousselme distance, cosine similarity, and the Jaccard coefficient, are combined to measure the similarity of the evidence, and the Hellinger distance of the determinate interval is used to calculate the credibility of the evidence. The similarity and credibility are combined to improve the body of evidence, and the improved evidence is fused according to Dempster's rule to obtain the final results. Numerical examples show that the proposed improved evidence theory has better convergence and focus, with confidence in the correct proposition of up to 100%. Applied to early indoor fire detection, the proposed multi-sensor data-fusion method improves the accuracy by 0.7–10.2% and reduces the false alarm rate by 0.9–6.4% compared with traditional and other improved evidence theories, proving its validity and feasibility and providing a useful reference for multi-sensor information fusion.

1. Introduction

Heterogeneous multi-sensors play an important role in information perception. Because of the limited measurement accuracy of multi-sensor devices and the complexity of the working environment, the acquired data may contain ambiguous and conflicting information, which can lead to inaccurate data-fusion decisions [1]. Consequently, how to better handle multi-sensor data and improve data-fusion accuracy is a popular research direction in the field of data-fusion technology. Common data-fusion algorithms currently include Kalman filtering [2], Bayesian estimation [3], Dempster–Shafer (D-S) evidence theory [4], and artificial neural networks [5]. Bayesian estimation and D-S evidence theory are commonly used to deal with the uncertainty in multi-sensor data, which frequently contains anomalies. However, Bayesian estimation requires access to prior data to generate new probability estimates, which is not always possible [6]. D-S evidence theory is a theory of uncertain reasoning proposed by Dempster in 1967 [7] and subsequently refined by Shafer [8]. It has been widely employed in areas such as target identification [9], multi-attribute decision analysis [10], fault diagnostics [11], and robotics research [12] because of its capacity to handle uncertain and unknown situations with unknown prior probabilities. Although D-S evidence theory has been applied in many fields, it still has two problems: there is no unified method for determining the BPA function, and the theory is prone to produce results that contradict the facts when dealing with highly conflicting evidence, for which there is also no unified solution. Many scholars have studied these two problems.
Determining the BPA function is an important step in evidence theory and influences the accuracy of the fusion results to some extent. Many researchers have proposed various methods for determining BPA functions [13,14,15]. The cloud model [16], proposed by Professor Li in 1995, is a cognitive model based on probability statistics and fuzzy set theory. It can portray both the fuzziness and the randomness of information well and is therefore applicable to multi-sensor information fusion. Peng et al. [17] improved a multi-criteria group decision method by using the cloud model to deal with uncertain information based on information fusion and information measurement; Liu et al. [18] used the cloud model to describe the load direction in topology optimization under uncertainty; and Peng et al. [19] proposed an uncertain pure linguistic multicriteria group decision-making method based on the cloud model, demonstrating the advantage of the cloud model in dealing with uncertain information. In this paper, the cloud model is used to determine the BPA function, converting measured quantitative data into qualitative concepts.
The directions for improving the accuracy of traditional evidence theory fusion fall into two major areas: improvement of the combination rule [20,21] and improvement of the body of evidence. The former blames the D-S rule itself for producing results that contradict the facts; it achieves certain results but sacrifices the rule's own advantages, such as commutativity and associativity. The latter attributes the problem to the unreliability of the information sources and handles conflict by improving the body of evidence, which retains the good properties of Dempster's rule while weakening the influence of conflicting evidence on the fusion result. As Haenni [22] points out, improving the body of evidence is more reasonable from both an engineering and a mathematical standpoint. The calculation and assignment of weights to the body of evidence is critical to this approach, and some scholars have conducted a series of studies on how to evaluate these weights. Murphy [23] proposed a simple averaging method that assigns the same weight to each piece of evidence, but it ignores the relationship between the evidence and is therefore unreasonable. Deng et al. [24] proposed a more convergent method that applies the combination rule after a weighted average of the evidence based on trust degree, but it does not take into account the characteristics of the evidence itself. There are two ways to determine the weight of a body of evidence: according to the relationship between the evidence, or according to the characteristics of the evidence itself. For the former, Wang et al. [25], Jousselme et al. [26], and Dong et al. [27] measure the relationship between evidence by using the Pignistic probability distance, the Jousselme distance, and cosine similarity, respectively; however, a single measure of the evidence relationship cannot accurately describe the relationship between evidence in certain cases. For the latter, scholars have proposed various uncertainty measures based on information entropy, such as Yager's [28] dissonance measure based on the likelihood function and Deng's [29] evidence uncertainty measure based on Shannon entropy, but such methods treat the evidence one-sidedly, replacing the entire uncertainty interval with only part of the evidence information. Deng et al. [30] developed a method for evaluating evidence uncertainty based on the Hellinger distance of the uncertainty interval, which is simple to compute and measures uncertainty well, giving a better fusion effect. The relationship between evidence and the characteristics of the evidence itself do not affect each other, and both are valid information available within the evidence; yet some current approaches to improving evidence theory consider only one of them, undermining the integrity of the evidential information. Some scholars have proposed ways to improve evidence theory based on both, but these still leave room for improvement. For example, Tao et al. [31] proposed a multi-sensor data-fusion method based on the Pearson correlation coefficient and information entropy; Xiao [32] proposed a multi-sensor data-fusion method based on the belief divergence of evidence and Deng entropy [29]; and Wang et al. [33] combined the Jaccard coefficient and cosine similarity to calculate evidence similarity and combined evidence precision with evidence entropy to calculate evidence certainty. Although these methods combine the relationship between evidence with the characteristics of the evidence itself, they all have certain disadvantages: the Pearson correlation coefficient only portrays the linear correlation between normally distributed attributes, which places strong demands on the evidence; the Jaccard coefficient and cosine similarity sometimes cannot measure the relationship between evidence correctly; and information entropy cannot comprehensively measure the characteristics of the evidence itself.
In order to measure the relationship between evidence and the characteristics of the evidence itself more accurately, and thereby improve the accuracy of data fusion, this paper proposes an improved evidence-theory method based on multiple relationship measures and the focal-element interval distance. We combine the Jousselme distance, cosine similarity, and the Jaccard coefficient to calculate the similarity between the evidence, and we use the Hellinger distance between the determinate intervals of the evidence to measure its certainty. Based on these calculations, the evidence weight coefficients are jointly determined. Finally, the original evidence is weighted, averaged, and fused by using Dempster's rule to produce the result. In addition, we analyze the results of numerical examples to demonstrate the validity of the proposed improved evidence theory. By combining the improved evidence theory with the cloud model, we develop a multi-sensor data-fusion method: the BPA function corresponding to each data source is determined by the cloud model, which converts the collected quantitative data into qualitative concepts, and the fusion results are obtained by fusing the BPA functions with the improved evidence theory described above.
Multi-sensor data-fusion technology can combine relevant information from multiple sensors, thereby increasing the safety and reliability of the overall system. The proposed multi-sensor data-fusion method can be utilized in multi-sensor systems in various fields, such as fault-diagnosis systems, target identification systems, environmental monitoring systems, and intelligent firefighting systems. Due to external factors or aging faults, one or more sensors may acquire incorrect information, causing the fusion results to contradict the facts. The proposed method overcomes this problem, improves the handling of ambiguity in sensor data, increases the reliability of the data-fusion results, and makes it easier for people to make appropriate decisions. We establish an early indoor fire detection model to test the efficacy of the proposed strategy. The proposed method improves accuracy by 0.7–10.2% and reduces the false alarm rate by 0.9–6.4% when compared with the traditional evidence theory and other improved evidence theories. Its better fusion performance provides a useful reference for multi-sensor data fusion.

2. Preliminaries

This section provides a brief overview of D-S evidence theory and the cloud model.

2.1. Cloud Model

Let X be a quantitative domain (X = {x}) and U be a qualitative concept on the domain X. For any element x ∈ X, where x is a single random realization of U, the certainty degree of x with respect to U is y(x) ∈ [0, 1], a random number with a stable tendency. The distribution of x over the domain X is called a cloud model, and each pair (x, y(x)) is called a cloud drop [34].
The cloud model completes the conversion from quantitative data to qualitative concepts through three numerical characteristics: expectation (Ex), entropy (En), and hyperentropy (He). The expectation is the expected value of the distribution of cloud drops over the domain, the entropy reflects the dispersion of the cloud drops, and the hyperentropy reflects the dispersion of the entropy. Because the characteristic values corresponding to the evaluation indices are fairly stable across the multi-sensor domain and their interval distributions generally follow a normal distribution, which is closer to reality, the normal cloud model is used in this research. The computation of each parameter is given in Equation (1):
$$
\begin{cases}
Ex_{ij} = \dfrac{C_{ij,max} + C_{ij,min}}{2} \\[2mm]
En_{ij} = \dfrac{C_{ij,max} - C_{ij,min}}{2.355} \\[2mm]
He = \lambda_i
\end{cases}
\tag{1}
$$
where [C_ij,min, C_ij,max] is the range of the evaluation interval corresponding to the jth evaluation index of the ith data type of the multi-sensor system, and λ_i is a value determined by experts based on experience, typically 0.01. It is worth noting that the upper and lower bounds of each data source's evaluation range serve directly as the expectations Ex of the two boundary clouds. In the traditional cloud model the entropy is En_ij = (C_ij,max − C_ij,min)/6, so that when the data approaches an endpoint of the interval the corresponding degree of certainty tends to 0. However, the endpoint of each level's interval is the transition boundary between two adjacent levels, and the boundary value should belong to both adjacent intervals at the same time. Therefore, in order to represent the ambiguity of the boundary between adjacent levels, the divisor used for the entropy is set to 2.355.
Let (Ex_ij, En_ij, He) be the three numerical characteristics of a cloud on a given one-dimensional domain. The procedure of the one-dimensional forward normal cloud generator is:
  • Generate a normal random number En′_ij with En_ij as the expected value and He² as the variance.
  • Generate a normal random number x_ij with Ex_ij as the expected value and En′_ij² as the variance.
  • Calculate y_ij = exp(−(x_ij − Ex_ij)² / (2·En′_ij²)), where x_ij is a specific quantified value, y_ij is the degree of certainty of x_ij with respect to the qualitative concept U, and (x_ij, y_ij) is a cloud drop.
  • Repeat the above steps until N cloud drops are generated.
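As a rough, hedged illustration of Equation (1) and the generator steps above, the following Python sketch computes the numerical characteristics of one evaluation interval and then produces cloud drops. The evaluation interval, the default λ = 0.01, and all identifiers are assumptions of this sketch rather than values taken from the paper.

```python
import numpy as np

def cloud_characteristics(c_min, c_max, lam=0.01):
    """Numerical characteristics (Ex, En, He) of one evaluation interval, Equation (1)."""
    ex = (c_max + c_min) / 2.0
    en = (c_max - c_min) / 2.355
    he = lam
    return ex, en, he

def forward_cloud_generator(ex, en, he, n_drops=1000, seed=None):
    """Generate n_drops cloud drops (x, y) for one qualitative concept."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, size=n_drops)          # Step 1: En' ~ N(En, He^2)
    x = rng.normal(ex, np.abs(en_prime))                 # Step 2: x ~ N(Ex, En'^2)
    y = np.exp(-(x - ex) ** 2 / (2.0 * en_prime ** 2))   # Step 3: certainty degree
    return x, y

# Hypothetical "smoldering fire" temperature interval of [40, 70] degrees Celsius
ex, en, he = cloud_characteristics(40.0, 70.0)
x, y = forward_cloud_generator(ex, en, he, n_drops=5, seed=0)
print(list(zip(np.round(x, 2), np.round(y, 3))))
```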

2.2. Dempster–Shafer Evidence Theory

Let Θ = {θ1, θ2, …, θn} be a finite frame of discernment in D-S evidence theory, where θ1, θ2, …, θn are all the possible, mutually exclusive events and θ_i (i = 1, …, n) is a subset of the recognition frame Θ. The basic trust function m is a mapping from the power set 2^Θ to [0, 1]; with A being any subset of Θ, it satisfies Equation (2):
$$
\begin{cases}
m(\emptyset) = 0 \\
\sum_{A \subseteq \Theta} m(A) = 1
\end{cases}
\tag{2}
$$
We call m the basic probability assignment function (BPA function for short) of Θ [35], where m(∅) denotes the degree of confidence of the evidence in the empty set. If m(A) > 0, then A is called a focal element of the identification framework Θ, and m(A) reflects the degree of trust of the evidence in A. In particular, the condition m(∅) = 0 is not always required: for an open evaluation set space, m(∅) is not necessarily equal to 0. In this paper, we only consider the case of a closed evaluation set space.
For the recognition framework Θ = {θ1, θ2, …, θn} and a BPA function m(A), Bel(A) is defined as the belief function, which is the sum of the basic probability assignments of all subsets of A and indicates the degree of certainty in proposition A, as shown in Equation (3):
$$
Bel(A) = \sum_{B \subseteq A} m(B), \quad \forall A \subseteq \Theta
\tag{3}
$$
Pl(A), the plausibility function of A defined in Equation (4), indicates the degree of trust that does not deny A:
$$
Pl(A) = 1 - Bel(\bar{A}) = \sum_{B \cap A \neq \emptyset} m(B)
\tag{4}
$$
The intervals of evidence are shown in Figure 1, where [0, Bel(A)] is the support interval of proposition A, [Bel(A), Pl(A)] is the uncertainty interval of proposition A, and [Pl(A), 1] is the rejection interval of the evidence. The support interval and the rejection interval together constitute the determinate interval of the evidence, which can represent the certainty of the evidence.
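As a small sketch of Equations (3) and (4), the fragment below computes Bel and Pl for a BPA stored as a dictionary from frozenset focal elements to masses; this representation and the sample BPA are illustrative choices, not the paper's.

```python
def bel(bpa, a):
    """Belief of proposition a: total mass of the subsets of a, Equation (3)."""
    return sum(m for b, m in bpa.items() if b <= a)

def pl(bpa, a):
    """Plausibility of proposition a: total mass of the sets intersecting a, Equation (4)."""
    return sum(m for b, m in bpa.items() if b & a)

bpa = {frozenset("A"): 0.5, frozenset("B"): 0.2, frozenset({"A", "C"}): 0.3}
a = frozenset("A")
print(bel(bpa, a), pl(bpa, a))   # support ends at 0.5, plausibility reaches 0.8
```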
Let m1 and m2 be two BPA functions on the same finite identification frame Θ, with focal elements B1, B2, …, Bn and C1, C2, …, Cn, respectively. The D-S combination rule is then given by Equations (5) and (6):
$$
m(A) =
\begin{cases}
\dfrac{1}{1-K} \sum_{B_i \cap C_j = A} m_1(B_i)\, m_2(C_j), & A \neq \emptyset \\[2mm]
0, & A = \emptyset
\end{cases}
\tag{5}
$$

$$
K = \sum_{B_i \cap C_j = \emptyset} m_1(B_i)\, m_2(C_j)
\tag{6}
$$
where K is the conflict coefficient between m1 and m2; a higher K indicates a greater degree of conflict between the evidence, and K ranges from 0 to 1.
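The minimal sketch below implements Dempster's rule of Equations (5) and (6) for two BPAs stored as dictionaries over frozenset focal elements; the function name and the example masses are illustrative.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination; raises an error in the case of total conflict."""
    combined, k = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                k += mb * mc                  # Equation (6): mass falling on the empty set
    if k >= 1.0:
        raise ValueError("total conflict (K = 1): Dempster's rule is undefined")
    return {a: v / (1.0 - k) for a, v in combined.items()}   # Equation (5)

m1 = {frozenset("A"): 0.9, frozenset("B"): 0.1}
m2 = {frozenset("A"): 0.8, frozenset("B"): 0.2}
print(dempster_combine(m1, m2))   # mass on A rises to about 0.973, with K = 0.26
```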

3. The Proposed Method

Based on the above theoretical knowledge, this paper proposes a heterogeneous data-fusion method based on a cloud model and improved evidence theory. In order to obtain the BPA function of evidence more accurately, we consider the ambiguity of multi-sensor data when completing data transformation by using the cloud model. To improve the reliability of the fusion results, we propose a new method for measuring the similarity of evidence and improve the evidence by combining the similarity and certainty of evidence together. The specific method for determining the BPA function and calculating the similarity of evidence and the certainty of evidence are described in this section, and finally the overall steps of the method are proposed.

3.1. Determination Method of BPA Function

It is assumed that the data of the multi-sensor system are pre-processed to extract n classes of data, forming n bodies of evidence, i.e., X = (x1, x2, x3, …, xn), where x_i (i = 1, 2, …, n) is the ith class of data measured by the system. Based on the cloud model, the membership degree μ_ij(k) of a discrete feature value is calculated as in Equation (7):
$$
\mu_{ij}(k) = \exp\!\left(-\frac{(x_i - Ex_{ij})^2}{2\,{En'_{ij}}^2}\right)
\tag{7}
$$
where μ_ij(k) is the membership of the ith class of data with respect to the jth evaluation index under the kth judgment within one acquisition cycle of the multi-sensor system, Ex_ij is the expectation of the ith class of data with respect to the jth evaluation index obtained from Equation (1), and En′_ij is a normal random number generated with En_ij as the expectation and He as the standard deviation, both obtained from Equation (1).
Here k is the number of times the multi-sensor system acquires data within one acquisition cycle. When k is greater than 1, i.e., when a feature parameter has multiple values, the membership of the ith class of data with respect to the jth evaluation index is taken as the maximum of the k membership values:
$$
\mu_{ij} = \max_{a = 1, 2, \ldots, k} \mu_{ij}(a)
\tag{8}
$$
The multi-sensor membership matrix can be calculated based on the membership degree μ i j :
$$
R_{n \times m} =
\begin{pmatrix}
\mu_{11} & \mu_{12} & \cdots & \mu_{1m} \\
\mu_{21} & \mu_{22} & \cdots & \mu_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
\mu_{n1} & \mu_{n2} & \cdots & \mu_{nm}
\end{pmatrix}
\tag{9}
$$
The elements in the ith row (i = 1, 2, ⋯, n) of Equation (9) are the memberships of the ith class of multi-sensor data with respect to each of the m evaluation indices, and the elements in the jth column (j = 1, 2, ⋯, m) are the memberships of all the data collected by the multi-sensor system at a given moment with respect to the jth evaluation index.
The obtained membership matrix R_{n×m} basically satisfies the definition of a probability assignment but does not satisfy Σ_{j=1}^{m} μ_ij = 1. Considering that the sensors produce a certain measurement error in actual use, the following definition is added to transform the memberships of each evaluation index into a BPA function:
$$
\begin{cases}
\gamma_i = 1 - \max(\mu_{i1}, \mu_{i2}, \ldots, \mu_{im}) \\[1mm]
m_i(\Theta) = \gamma_i \\[1mm]
m_i(A_j) = (1 - \gamma_i)\, \dfrac{\mu_{ij}}{\sum_{j=1}^{m} \mu_{ij}}
\end{cases}
\tag{10}
$$
where γ i denotes the uncertainty of the ith characteristic parameter, m i ( Θ ) is the basic probability assignment value of the uncertainty of the ith piece of evidence, and m i ( A j ) is the basic probability assignment value of the jth evaluation index of the ith piece of evidence.
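To make Section 3.1 concrete, the following hedged sketch turns cloud-model memberships (Equations (7) and (8)) into a BPA through Equation (10); the temperature intervals, the sensor reading, and all identifiers are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def membership(x, ex, en, he, k=1, seed=None):
    """Maximum membership of reading x over k judgments (Equations (7) and (8))."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(en, he, size=k)                 # En' ~ N(En, He^2)
    return float(np.max(np.exp(-(x - ex) ** 2 / (2.0 * en_prime ** 2))))

def memberships_to_bpa(mu, labels):
    """Equation (10): the residual uncertainty gamma is assigned to Theta."""
    mu = np.asarray(mu, dtype=float)
    gamma = 1.0 - mu.max()
    bpa = {lab: (1.0 - gamma) * v / mu.sum() for lab, v in zip(labels, mu)}
    bpa["Theta"] = gamma
    return bpa

# Hypothetical temperature evaluation intervals (degrees C): no fire / smoldering / open fire
intervals = {"no fire": (20, 45), "smoldering fire": (40, 70), "open fire": (65, 120)}
reading = 58.0
mu = [membership(reading, (lo + hi) / 2, (hi - lo) / 2.355, 0.01, k=3, seed=1)
      for lo, hi in intervals.values()]
print(memberships_to_bpa(mu, list(intervals)))
```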

3.2. Similarity of Evidence

Classical measures for describing the relationship between evidence include the conflict coefficient K, the Pignistic probability distance, the Jousselme distance, and cosine similarity, among others. The computation of the conflict coefficient K is given in Equation (6). Assuming that the evidence bodies m1 and m2 are BPA functions on the identification framework Θ = {θ1, θ2, …, θn}, the Pignistic probability distance, the Jousselme distance, and cosine similarity are calculated as follows.
  1. Pignistic probability distance [25]
The Pignistic probability distance measures the conflicting relationship between evidence. Let the recognition frame be Θ = {θ1, θ2, …, θn} and m be a BPA function on Θ; for A ⊆ Θ,
$$
BetP_m(A) = \sum_{B \subseteq \Theta} \frac{|A \cap B|}{|B|}\, m(B)
\tag{11}
$$
is called the Pignistic probability of the focal element A.
Assuming that B e t P m 1 and B e t P m 2 are the corresponding Pignistic probability functions, the Pignistic probability distances are calculated as follows:
$$
difBetP_{m_1}^{m_2} = \max_{A \subseteq \Theta} \left( \left| BetP_{m_1}(A) - BetP_{m_2}(A) \right| \right)
\tag{12}
$$
  2. Jousselme distance [26]
$$
d_{BPA}(m_1, m_2) = \sqrt{\frac{1}{2}\,(\vec{m}_1 - \vec{m}_2)^T D\, (\vec{m}_1 - \vec{m}_2)}
\tag{13}
$$
where m1 and m2 are here understood as vectors over the focal elements, and D is a 2^Θ × 2^Θ positive definite matrix
$$
D =
\begin{pmatrix}
d_{11} & d_{12} & \cdots & d_{1n} \\
d_{21} & d_{22} & \cdots & d_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d_{n1} & d_{n2} & \cdots & d_{nn}
\end{pmatrix},
$$
whose element d_ij = J(θ_i, θ_j) = |θ_i ∩ θ_j| / |θ_i ∪ θ_j|, where θ_i is any focal element of evidence m1 and θ_j is any focal element of evidence m2. J is also called the Jaccard coefficient and can reveal the relationship between single-focal and multi-focal elements of the evidence.
The Jousselme distance is a measure of the conflicting relationships of the evidence, and the higher its value, the greater the conflict between the evidence.
  3. Cosine similarity [27]
Cosine similarity can be used to calculate the similarity of the evidence: the greater the cosine similarity, the more similar the evidence.
$$
\cos(m_1, m_2) = \frac{\vec{m}_1 \cdot \vec{m}_2^{\,T}}{\|\vec{m}_1\| \cdot \|\vec{m}_2\|}
\tag{14}
$$
where ‖m_i‖ = √(m_i · m_i^T).
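The sketch below shows one possible Python implementation of the Jousselme distance with the Jaccard matrix D (Equation (13)) and of the cosine similarity (Equation (14)); the vectorization order and helper names are choices of this sketch. Applied to the uniform-versus-vacuous case discussed in Example 1 below (Situation 3), it reproduces the Jousselme distance of 0.577 and the cosine similarity of 0 quoted in the text.

```python
import numpy as np

def to_vectors(m1, m2):
    """Align two dict-based BPAs over a common, ordered list of focal elements."""
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    v1 = np.array([m1.get(f, 0.0) for f in focal])
    v2 = np.array([m2.get(f, 0.0) for f in focal])
    return focal, v1, v2

def jousselme_distance(m1, m2):
    focal, v1, v2 = to_vectors(m1, m2)
    d = np.array([[len(a & b) / len(a | b) for b in focal] for a in focal])  # Jaccard matrix
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ d @ diff))

def cosine_similarity(m1, m2):
    _, v1, v2 = to_vectors(m1, m2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Situation 3 of Example 1: a uniform single-focal BPA versus full mass on {a, b, c}
m1 = {frozenset("a"): 1/3, frozenset("b"): 1/3, frozenset("c"): 1/3}
m2 = {frozenset("abc"): 1.0}
print(round(jousselme_distance(m1, m2), 3), round(cosine_similarity(m1, m2), 3))  # 0.577 0.0
```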
The accuracy of the various measurements is examined based on the above computation by calculating the measures under different conditions in conjunction with Example 1.
Example 1.
Suppose there are identification frames Θ = { a , b , c , d } with different distributions of evidence bodies under different conditions, as shown in Table 1.
The bodies of evidence under Situation 1 are identical, yet their conflict coefficient K calculated with Equation (6) is 0.75, which contradicts the facts, whereas the cosine similarity is 1 and the Jousselme distance is 0, both consistent with the facts. The evidence under Situation 2 is radically different: the Jousselme distance is 0.707, which is inconsistent with the facts, whereas the cosine similarity is 0, which is consistent with them. Under Situation 3 it is impossible to tell whether the body of evidence m2 supports each focal element equally, so the evidence is somewhat conflicting; the Pignistic probability distance and the cosine similarity are both 0, which contradicts the facts, while the Jousselme distance of 0.577 is more consistent with them.
From the above analysis, the cosine similarity is accurate when the evidence contains only single-focal-element subsets, and less accurate when the evidence contains multi-focal-element subsets. Wang et al. [33] combined cosine similarity and the Jaccard coefficient to measure the relationship between evidence, but both are similarity measures, so the analysis of how the evidence relates is not thorough enough. This can lead to inaccurate measurements in some situations; for example, the evidences m1 and m2 in Situation 4 point to different propositions and disagree strongly, yet Wang's method gives a similarity of 0.80, which is inconsistent with the facts. Therefore, this paper combines a conflict measure and a similarity measure to jointly characterize the relationship between evidence; because the Jousselme distance measures the relationship between evidence accurately in most cases, it is introduced into the joint measure.
Assuming the identification framework Θ = {A1, A2, …, An}, we define the local similarity s_ij between evidences m_i and m_j as:
$$
\begin{cases}
J(A_a, A_b) = \dfrac{|A_a \cap A_b|}{|A_a \cup A_b|}, \quad A_a, A_b \subseteq \Theta \\[3mm]
s_{ij} = \left(1 - d_{BPA}(m_i, m_j)\right) \times \dfrac{\sum_{a=1}^{n} \sum_{b=1}^{n} m_i(A_a)\, m_j(A_b)\, J(A_a, A_b)}{\sqrt{\sum_{c=1}^{n} m_i(A_c)^2}\, \sqrt{\sum_{c=1}^{n} m_j(A_c)^2}}
\end{cases}
\tag{15}
$$
According to Equation (15), the local similarities of the evidence under different situations in Example 1 are: 1, 0, 0.244, and 0.470, all of which are more consistent with the facts. Based on the local similarity s i j , the global similarity s i can be derived for each piece of evidence, and its normalization can lead to the similarity-based weight coefficient α i , which is calculated as follows:
$$
\begin{cases}
s_i = \sum_{j=1, j \neq i}^{n} s_{ij} \\[2mm]
\alpha_i = \dfrac{s_i}{\sum_{j=1}^{n} s_j}
\end{cases}
\tag{16}
$$
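A minimal sketch of Equations (15) and (16) follows: the Jaccard-weighted cosine term is scaled by (1 − Jousselme distance) to give the local similarity s_ij, and the global similarities are normalized into the weights α_i. Run on the complete-conflict bodies listed later in Table 2, it returns global similarities of about 1.628, 0.036, 1.832, and 1.832, which agree with Table 3; the data layout and helper names are this sketch's own choices.

```python
import numpy as np

def _vectors(mi, mj):
    focal = sorted(set(mi) | set(mj), key=lambda s: (len(s), sorted(s)))
    vi = np.array([mi.get(f, 0.0) for f in focal])
    vj = np.array([mj.get(f, 0.0) for f in focal])
    return focal, vi, vj

def local_similarity(mi, mj):
    """s_ij of Equation (15)."""
    focal, vi, vj = _vectors(mi, mj)
    jac = np.array([[len(a & b) / len(a | b) for b in focal] for a in focal])
    diff = vi - vj
    d_bpa = np.sqrt(0.5 * diff @ jac @ diff)                       # Jousselme distance
    jaccard_cos = (vi @ jac @ vj) / (np.linalg.norm(vi) * np.linalg.norm(vj))
    return float((1.0 - d_bpa) * jaccard_cos)

def similarity_weights(bodies):
    """Global similarity s_i and normalized weights alpha_i, Equation (16)."""
    n = len(bodies)
    s = np.array([sum(local_similarity(bodies[i], bodies[j])
                      for j in range(n) if j != i) for i in range(n)])
    return s, s / s.sum()

# The four bodies of evidence of the "complete conflict" case in Table 2
bodies = [
    {frozenset("A"): 1.0},
    {frozenset("B"): 1.0},
    {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("C"): 0.1},
    {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("C"): 0.1},
]
s, alpha = similarity_weights(bodies)
print(np.round(s, 3), np.round(alpha, 3))
```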

3.3. Certainty of Evidence

The properties of the evidence itself can be measured through its degree of certainty. In probability theory, the Hellinger distance is a complete distance metric defined on the space of probability distributions and can be used to measure the similarity between two distributions; compared with other metrics, it has the advantage of stability and reliability. Deng et al. [30] measured the uncertainty of the evidence itself by calculating the distance of the uncertainty intervals of the focal elements. However, deriving the evidence weight from uncertainty involves more steps and is more tedious than deriving it from certainty, so this paper combines the Hellinger distances of the support interval and the rejection interval of the evidence to jointly measure its certainty.
Suppose X = {x1, x2, …, xn} and Y = {y1, y2, …, yn} are two probability distributions of the random variable Z. The Hellinger distance between them is
$$
Hel(X \,\|\, Y) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^{n} \left(\sqrt{x_i} - \sqrt{y_i}\right)^2}
\tag{17}
$$
Assuming the identification framework Θ = {A1, A2, …, An} and defining DU(m_i) as the certainty of evidence m_i, DU(m_i) is calculated as:
$$
DU(m_i) = \sum_{j=1}^{n} \sqrt{2} \times \sqrt{\frac{1}{2} \left[ \left( Bel(m_i(A_j)) - 0 \right)^2 + \left( 1 - Pl(m_i(A_j)) \right)^2 \right]}
\tag{18}
$$
where √2 is the normalization factor. The Hellinger distance reaches its maximum when the determinate interval of the evidence is [1, 1] or [0, 0], which gives the normalization factor 1/Hel([1, 1], [0, 1]) = √2.
Normalizing the resulting determinacy D U ( m i ) obtains the weight of the evidence based on the determinacy:
$$
\beta_i = \frac{DU(m_i)}{\sum_{j=1}^{n} DU(m_j)}
\tag{19}
$$
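The sketch below is offered only as one plausible reading of Equations (18) and (19): for each proposition of the frame, the determinate interval [Bel, Pl] is compared with the fully uncertain interval [0, 1] through a √2-scaled Hellinger-type term, and the resulting DU values are normalized into the weights β_i. The exact form of Equation (18), the frame representation, and all identifiers are assumptions of this sketch.

```python
import numpy as np

def bel_pl(bpa, a):
    """Belief and plausibility of proposition a for a dict-based BPA."""
    bel = sum(m for b, m in bpa.items() if b <= a)
    pl = sum(m for b, m in bpa.items() if b & a)
    return bel, pl

def certainty(bpa, frame):
    """One reading of Equation (18): compare each [Bel, Pl] with the interval [0, 1]."""
    du = 0.0
    for prop in frame:
        bel, pl = bel_pl(bpa, frozenset([prop]))
        du += np.sqrt(2.0) * np.sqrt(0.5 * ((bel - 0.0) ** 2 + (1.0 - pl) ** 2))
    return du

def certainty_weights(bodies, frame):
    """Equation (19): normalize the certainties DU(m_i) into weights beta_i."""
    du = np.array([certainty(b, frame) for b in bodies])
    return du, du / du.sum()

bodies = [{frozenset("A"): 1.0},
          {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("C"): 0.1}]
du, beta = certainty_weights(bodies, frame="ABC")
print(np.round(du, 3), np.round(beta, 3))
```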

3.4. Steps of the Proposed Method

Based on the above study, the specific steps of the proposed method in this paper are given as follows, and the flow chart is shown in Figure 2.
  • Step 1: After pre-processing the data from sensors, the BPA function of each data source related to the body of evidence is calculated by integrating the cloud model and each data evaluation index.
  • Step 2: With the obtained BPA function of each evidence, the weight α i based on the similarity of evidence is calculated by combining Equations (15) and (16), and the weight β i based on the certainty of evidence is calculated by combining Equations (18) and (19).
  • Step 3: With the weights α i and β i , the total weight of the evidence body is calculated and normalized to obtain the final weight ω i , which is calculated as follows:
$$
\begin{cases}
\omega_i' = \alpha_i \times \beta_i \\[1mm]
\omega_i = \dfrac{\omega_i'}{\sum_{j=1}^{n} \omega_j'}
\end{cases}
\tag{20}
$$
  • Step 4: Based on the weights ω i , the original evidence is averaged and weighted to obtain the processed body of evidence m,
$$
m(A) = \sum_{i=1}^{n} \omega_i \times m_i(A)
\tag{21}
$$
  • Step 5: Fuse the weighted-average evidence m with itself n − 1 times by using Dempster's rule to obtain the final fusion result.
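The condensed sketch below strings Steps 3–5 together, assuming the similarity weights α_i and the certainty weights β_i have already been obtained (for example with the sketches above); Dempster's rule is restated so that the fragment stays self-contained, and the example weights are purely illustrative.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule (Equations (5) and (6)) for dict-based BPAs."""
    combined, k = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                k += mb * mc
    return {a: v / (1.0 - k) for a, v in combined.items()}

def fuse(bodies, alpha, beta):
    # Step 3: combine the similarity and certainty weights (Equation (20))
    w = np.asarray(alpha, dtype=float) * np.asarray(beta, dtype=float)
    w = w / w.sum()
    # Step 4: weighted average of the original evidence (Equation (21))
    focal = set().union(*bodies)
    m_avg = {f: sum(wi * b.get(f, 0.0) for wi, b in zip(w, bodies)) for f in focal}
    # Step 5: fuse the averaged evidence with itself n - 1 times
    fused = dict(m_avg)
    for _ in range(len(bodies) - 1):
        fused = dempster_combine(fused, m_avg)
    return fused

bodies = [{frozenset("A"): 1.0},
          {frozenset("B"): 1.0},
          {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("C"): 0.1},
          {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("C"): 0.1}]
alpha = [0.306, 0.007, 0.344, 0.344]   # illustrative similarity weights (Equation (16))
beta = [0.25, 0.25, 0.25, 0.25]        # illustrative certainty weights (Equation (19))
result = fuse(bodies, alpha, beta)
print({"".join(sorted(f)): round(v, 4) for f, v in result.items()})
```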

4. Numerical Example and Simulation Results

In this section, the proposed improved D-S evidence theory method based on similarity and certainty, as well as the proposed overall method of heterogeneous data fusion based on cloud model and evidence theory, are evaluated and simulated to demonstrate the feasibility and effectiveness of the proposed method in this paper.

4.1. The Method for Improving D-S Evidence Theory

In this section, four common types of conflicting evidence, normal evidence, and different numbers of single-focal-element and multi-focal-element evidence bodies are fused with the proposed improved method and compared against traditional evidence theory, classical improved methods, and similar improved methods; Examples 2–4 demonstrate the effectiveness of the proposed method. The methods proposed by Deng Z. [30] and Wang [33] are taken as the similar improved methods.
Example 2.
In evidence theory there are four common types of conflict: complete conflict, 0 trust conflict, 1 trust conflict, and high conflict [36]. The BPA functions for these four typical conflicts are provided in Table 2.
The global similarity s_i and the certainty DU(m_i) of each piece of evidence under the four conflict types are shown in Table 3. The weights α_i and β_i of the evidence are calculated from the similarity s_i and the certainty DU(m_i), and the overall weight ω_i is obtained by combining α_i and β_i. Figure 3 displays the distribution of each weight: the weights of conflicting evidence are lower than those of normal evidence, and the distribution of the weights is consistent with the facts. We combine similarity and certainty to improve the body of evidence and thereby make the data fusion more sound. It should be noted that, because the certainty of evidence describes the characteristics of the evidence itself, which depend on the interval information of all focal elements within the evidence and are independent of the relationship between the evidence, the weights α_i and β_i are not always positively correlated.
Table 4 displays the fusion results of the traditional D-S rule, the methods proposed by Sun [20], Murphy [23], Deng Y. [24], Deng Z. [30], and Wang [33], and the improved method proposed in this paper. As seen in Table 4, when confronted with the four conflicting situations listed above, the D-S fusion rule fails or does not match the genuine situation, and Sun’s method allocates the uncertainty to the entire set, resulting in high BPA values for the entire set that do not fit the true situation. The larger the value of BPA after fusing, the greater the amount of confidence in the proposition. Although the approaches of Murphy, Deng Y., Deng Z., and Wang yield correct answers, the method proposed in this work yields a higher BPA function value and converges faster, demonstrating that the improved method in this research performs better than the other methods in resolving the four conflicts. The fusion BPA results on the reasonable propositions of each algorithm are shown in Figure 4.
Example 3.
Assume the radar identification library contains data for three radar models A, B, and C, with identification frame Θ = {A, B, C, AC}. Five heterogeneous sensors are used separately to identify the radar radiation sources, yielding five conflicting evidences m1 to m5. Table 5 and Table 6 show the results of two specific measurements, which represent the data distributions of conflicting evidence with many single-focal elements and with multi-focal elements, respectively.
The global similarity s i and certainty D U ( m i ) of each evidence under single and multifocal elements are shown in Table 7. The weights of evidence α i , β i , and ω i for a different number of evidence cases are shown in Figure 5. From Figure 5, it can be seen that the weight of conflicting evidence is less than the normal weight, the weight occupied by conflicting evidence gradually decreases as the number of evidence increases, and the distribution of each evidence is consistent with the facts, which proves the rationality of the method proposed in this paper.
To verify the effectiveness of the improved method proposed in this paper, the evidences are fused by using the methods of Murphy [23], Deng Y. [24], Deng Z. [30], and Wang [33], and the proposed method, respectively. Table 8 shows the fusion results for each method, and Figure 6 compares the BPA values on the reasonable proposition. From the fusion and comparison results it can be concluded that, when facing different numbers of single-focal and multi-focal conflicting evidence bodies, the traditional D-S fusion results all contradict the facts. Although the methods of Murphy [23], Deng Y. [24], Deng Z. [30], and Wang [33] and the proposed method all point to the correct result, the BPA values of the proposed method are higher than those of the other improved methods, and as the number of evidence bodies increases, the improved method converges faster and assigns a higher BPA value to the correct proposition.
Example 4.
With the identification frame Θ = { A , B , C } , there are five normal evidence bodies from m1 to m5, and the distribution is shown in Table 9.
The fusion of normal evidence by the proposed improved method is compared with traditional evidence theory to demonstrate its performance on normal data, and the fusion results are shown in Table 10. Compared with the traditional evidence theory algorithm, the proposed method also obtains correct results when dealing with normal bodies of evidence and yields a higher BPA value for the correct proposition, i.e., higher credibility.
According to the above examples, the similarity and certainty-based evidence theory fusion algorithm proposed in this paper performs better in handling both normal and conflicting evidence bodies, demonstrating the improved method’s rationality and effectiveness.

4.2. The Proposed Holistic Approach

To demonstrate the feasibility and effectiveness of the proposed data-fusion method, the heterogeneous data-fusion method that combines the cloud model and the proposed improved evidence theory is applied to early indoor fire detection in this subsection.
It has been found that the combination of temperature, smoke concentration, and CO concentration gives superior detection performance for fires [37], so these quantities are collected as the fire characteristic parameters in this paper. The fire discrimination results are divided into three categories: no fire, smoldering fire, and open fire. In the established fire identification framework Θ = {θ1, θ2, θ3, θ1∪θ2∪θ3}, θ1, θ2, and θ3 represent no fire, smoldering fire, and open fire, respectively, and θ1∪θ2∪θ3 indicates uncertainty about the fire state. Lin et al. [38] proposed a fire-detection method that uses the Jousselme distance to improve the evidence corresponding to the fire characteristic parameters and fuses the evidence according to Dempster's rule to improve the timeliness of detection; however, that method ignores the characteristics of the evidence body itself and does not fully exploit the fire data. Because the attribute values corresponding to the three fire characteristic parameters of CO concentration, smoke concentration, and temperature have a certain stability and their interval distributions obey a normal distribution within certain value intervals [39], the cloud model of each data source is built by using the forward cloud generator and the evaluation indices of each parameter; the cloud diagrams are shown in Figure 7.
The PyroSim fire simulation software provides a visual user interface for the Fire Dynamics Simulator (FDS) and can accurately predict the distribution of characteristic parameters such as fire smoke and temperature [40], so this paper simulates the occurrence of a fire to obtain the fire characteristic parameters. The indoor scenario is built as follows:
  • The length, width, and height of the room are 5 m, 5 m, and 3 m, respectively;
  • The room contains a sofa, a wooden bed, and a wooden table; a CO, temperature, and smoke sensor group is placed in the upper left corner of the room, 1 m from the wall;
  • Vents: a 1 × 1 m window in the left wall of the room and a 1.2 × 2 m door directly opposite the sofa;
  • The burning material is n-heptane, the center of combustion is the center of the wooden bed, and the burning area is 1 m².
Open fires and smoldering fires in the room are simulated by setting different heat release rates and heat ramp-up times; the initial room temperature is 30 °C, the simulation time is 30 s, and the data acquisition frequency is 2 Hz. A fire detection model is built based on the proposed data-fusion method, and the initial fire detection probabilities are estimated by combining the CO concentration, smoke concentration, and temperature data. The probabilities of smoldering fire and open fire within the initial detection result are then summed, and if the sum is greater than 0.75, a fire is judged to have occurred in the room. A toy sketch of this decision rule follows.
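The snippet below illustrates that threshold rule; the fused BPA values in the example are invented for demonstration.

```python
FIRE_THRESHOLD = 0.75   # decision threshold described in the text

def is_fire(fused_bpa):
    """Declare a fire when the fused masses of smoldering and open fire exceed the threshold."""
    return fused_bpa.get("smoldering fire", 0.0) + fused_bpa.get("open fire", 0.0) > FIRE_THRESHOLD

fused = {"no fire": 0.12, "smoldering fire": 0.31, "open fire": 0.52, "uncertain": 0.05}
print(is_fire(fused))   # True
```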
Figure 8a depicts the simulation of an open fire, with visible flame and black smoke at t = 2.5 s. Figure 8b shows how the measured CO concentration, smoke concentration, and temperature change with time. The values of CO, smoke, and temperature at which the probability of an open fire reaches 1 are taken as the thresholds, and the time at which each parameter first reaches its threshold is marked in Figure 8b. The three characteristic parameters show almost no fluctuation in the first 2 s and increase rapidly afterwards; temperature and smoke reach their thresholds relatively quickly, and all parameters show an increasing trend within the first 30 s response time.
To determine whether a fire has occurred, the early open-fire data from this simulation are fused by using the traditional D-S evidence theory, the methods proposed by Murphy [23], Deng Y. [24], Deng Z. [30], and Wang [33], and the method of this paper, respectively. Because the data acquisition frequency in the simulation is 2 Hz, the data-fusion period is 0.5 s. The traditional evidence theory and the methods of Murphy, Deng Y., and Deng Z. all detect the fire at t = 3.5 s, Wang's method detects it at t = 3 s, and the proposed method detects it at t = 2.5 s, proving the method's effectiveness. Figure 9 depicts the comparison of fire occurrence probabilities in this open fire scenario.
A smoldering fire is characterized by the emission of a large amount of black smoke from the combustion point before an evident flame appears. Figure 10a depicts the simulation of a smoldering fire, with a clear flame visible at t = 18 s. Figure 10b displays the data collected by the multi-sensor group during the first 30 s. As shown in Figure 10b, the rising trend of each characteristic parameter in the smoldering fire scenario is slower than in the open fire scenario, and the parameters only start to grow continuously after 7 s because the early smoldering fire burns insufficiently.
Whether a fire has occurred is likewise determined by fusing the smoldering fire data with the traditional D-S evidence theory, the methods proposed by Murphy [23], Deng Y. [24], Deng Z. [30], and Wang [33], and the method of this paper, respectively. The method proposed in this paper detects the occurrence of fire at t = 10 s, earlier than the 11.5 s of Wang's method, the 11.5 s of Deng Y.'s method, the 12 s of Deng Z.'s method, the 13.5 s of traditional evidence theory, and the 17 s of Murphy's method, as shown in Figure 11. As illustrated in Figure 11, compared with the traditional evidence theory, the classical improved methods, and the similar improved methods, the method proposed in this paper not only detects the fire earlier but also has higher detection accuracy.
To further verify the effectiveness of the proposed fire detection method, we obtained different CO concentration, smoke concentration, and temperature data by setting different combustibles, combustion locations, heat release rates, and heat ramp-up times, and built our own fire dataset containing 1000 positive samples and 1000 negative samples. The homemade samples are fused based on the traditional evidence theory, the classical improved methods, the similar improved methods, and the method proposed in this paper, and the detection accuracy and false alarm rate are calculated. Here TP is the number of fire samples correctly judged as fires, FN is the number of fire samples incorrectly judged as no fire, FP is the number of no-fire samples falsely reported as fires, and TN is the number of no-fire samples correctly judged as no fire. The accuracy and false alarm rate (FAR) are calculated as in Equation (22):
$$
\begin{cases}
accuracy = \dfrac{TP + TN}{TP + FN + FP + TN} \\[2mm]
FAR = \dfrac{FP}{FP + TN}
\end{cases}
\tag{22}
$$
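A short sketch of Equation (22) follows; the confusion-matrix counts passed in the example are hypothetical and are not the counts behind Table 11.

```python
def accuracy_and_far(tp, fn, fp, tn):
    """Accuracy and false alarm rate of Equation (22)."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    far = fp / (fp + tn)
    return accuracy, far

# Hypothetical counts for a 2000-sample test set (1000 positive, 1000 negative)
print(accuracy_and_far(tp=985, fn=15, fp=8, tn=992))   # roughly (0.9885, 0.008)
```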
Table 11 shows the fire detection accuracy and false alarm rate of the various methods. According to Table 11, compared with the other methods, the proposed method increases the detection accuracy by 0.7–10.2% and reduces the false alarm rate by 0.9–6.4%, which clearly improves the reliability of fire discrimination.
It is evident that when applied to indoor fire detection, the proposed heterogeneous data-fusion method has better fire detection performance and can simultaneously improve the timeliness and accuracy of detection, proving its feasibility and effectiveness in multi-sensor data fusion.

5. Conclusions

In this paper, a multi-sensor heterogeneous data-fusion method based on the cloud model and improved evidence theory is presented, which can better cope with the ambiguity and conflict in data gathered by heterogeneous multi-sensors. The cloud model is used to estimate the BPA function of the evidence associated with each data source. Evidence similarity is calculated by using multiple relationship measures, evidence certainty is measured by using the interval distance, the body of evidence is improved by combining its similarity and certainty, and the improved evidence is fused by using Dempster's rule. The numerical results validate the improved evidence theory and show that the proposed method performs better when dealing with both conflicting and normal evidence. In view of the long detection time and low accuracy of indoor fire detection, the method is applied to this task. Compared with the traditional evidence theory, the classical improved methods, and the similar improved methods, the proposed method improves the detection speed by 0.5–3 s, improves the accuracy by 0.7–10.2%, and reduces the false alarm rate by 0.9–6.4%, giving better detection performance. It also provides a useful reference for multi-source information fusion.
In future work, we intend to test the feasibility of the proposed method on other multi-sensor acquisition information systems, as well as investigate how to combine homogeneous and heterogeneous data fusion algorithms to fully exploit effective data information and improve data fusion accuracy.

Author Contributions

X.X. planned the study; K.L. designed the experiment and wrote the manuscript; Y.C. analyzed the experimental data; B.H. supervised and checked the completion of the study. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Zhejiang Provincial Natural Science Foundation (LY19F030004) and Zhejiang Provincial Key R&D Program Project (2018C01085).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors greatly appreciate the reviews, the suggestions from reviewers, and the editor’s encouragement.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, W.; Zhuang, M.; Xie, C. A Reliability-Based Method to Sensor Data Fusion. Sensors 2017, 17, 1575.
  2. Yang, Y.; Li, F.; Gao, Y.; Mao, Y. Multi-Sensor Combined measurement while drilling based on the improved adaptive fading square root unscented Kalman filter. Sensors 2020, 20, 1897.
  3. Zhou, T.; Chen, M.; Yang, C.; Nie, Z. Data fusion using Bayesian theory and reinforcement learning method. Sci. China Inf. Sci. 2020, 63, 170209:1–170209:3.
  4. Xiao, F. Evidence combination based on prospect theory for multi-sensor data fusion. ISA Trans. 2020, 106, 253–261.
  5. Liang, Y.; Tian, W. Multi-sensor fusion approach for fire alarm using BP neural network. In Proceedings of the 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), Ostrava, Czech Republic, 7–9 September 2016; IEEE: Piscataway Township, NJ, USA; pp. 99–102.
  6. Muñoz, J.; Molero-Castillo, G.; Benítez-Guerrero, E.; Bárcenas, E. Data fusion as source for the generation of useful knowledge in context-aware systems. J. Intell. Fuzzy Syst. 2018, 34, 3165–3176.
  7. Dempster, A.P. Upper and Lower Probabilities Induced by a Multi-valued Mapping. Ann. Math. Stat. 1967, 38, 325–339.
  8. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976; Volume 24.
  9. Zhang, H.; Wang, X.; Wu, X.; Zhou, Y. Airborne multi-sensor target recognition method based on weighted fuzzy reasoning network and improved DS evidence theory. J. Phys. Conf. Ser. 2020, 1550, 032112.
  10. Koksalmis, E.; Kabak, Ö. Sensor fusion based on Dempster-Shafer theory of evidence using a large scale group decision making approach. Int. J. Intell. Syst. 2020, 35, 1126–1162.
  11. Li, S.; Liu, G.; Tang, X.; Lu, Y.; Hu, J. An ensemble deep convolutional neural network model with improved DS evidence fusion for bearing fault diagnosis. Sensors 2017, 17, 1729.
  12. Li, G.; Liu, Z.; Cai, L.; Yan, J. Standing-posture recognition in human–robot collaboration based on deep learning and the Dempster–Shafer evidence theory. Sensors 2020, 20, 1158.
  13. Jiang, W.; Yang, Y.; Luo, Y.; Quin, X. Determining basic probability assignment based on the improved similarity measures of generalized fuzzy numbers. Int. J. Comput. Commun. Control 2015, 10, 333–347.
  14. Xu, P.; Deng, Y.; Su, X.; Mahadevan, S. A new method to determine basic probability assignment from training data. Knowl. Based Syst. 2013, 46, 69–80.
  15. Tang, Y.; Wu, D.; Liu, Z. A new approach for generation of generalized basic probability assignment in the evidence theory. Pattern Anal. Appl. 2021, 24, 1007–1023.
  16. Wang, G.; Xu, C.; Li, D. Generic normal cloud model. Inf. Sci. 2014, 280, 1–15.
  17. Peng, H.; Zhang, H.; Wang, J.; Li, L. An uncertain Z-number multicriteria group decision-making method with cloud models. Inf. Sci. 2019, 501, 136–154.
  18. Liu, J.; Wen, G.; Xie, Y. Layout optimization of continuum structures considering the probabilistic and fuzzy directional uncertainty of applied loads based on the cloud model. Struct. Multidiscip. Optim. 2016, 53, 81–100.
  19. Peng, B.; Zhou, J.; Peng, D. Cloud model-based approach to group decision making with uncertain pure linguistic information. J. Intell. Fuzzy Syst. 2017, 32, 1959–1968.
  20. Sun, Q.; Ye, X.; Gu, W. A new combination rules of evidence theory. Acta Electonica Sin. 2000, 28, 117.
  21. Leung, Y.; Ji, N.; Ma, J. An integrated information fusion approach based on the theory of evidence and group decision-making. Inf. Fusion 2013, 14, 410–422.
  22. Haenni, R. Are alternatives to Dempster's rule of combination real alternatives? Comments on "About the belief function combination and the conflict management problem". Inf. Fusion 2002, 3, 237–239.
  23. Murphy, C.K. Combining belief functions when evidence conflicts. Decis. Support Syst. 2000, 29, 1–9.
  24. Deng, Y.; Shi, W.; Zhu, Z.; Qi, L. Combining belief functions based on distance of evidence. Decis. Support Syst. 2004, 38, 489–493.
  25. Wang, J.; Zhu, J.; Song, Y. A Self-Adaptive Combination Method in Evidence Theory Based on the Power Pignistic Probability Distance. Symmetry 2020, 12, 526.
  26. Jousselme, A.L.; Grenier, D.; Bossé, É. A new distance between two bodies of evidence. Inf. Fusion 2001, 2, 91–101.
  27. Dong, Y.; Cheng, X.; Chen, W.; Shi, H.; Gong, K. A cosine similarity measure for multi-criteria group decision making under neutrosophic soft environment. J. Intell. Fuzzy Syst. 2020, 39, 7863–7880.
  28. Yager, R.R. Entropy and specificity in a mathematical theory of evidence. Int. J. Gen. Syst. 2008, 219, 291–310.
  29. Deng, Y. Deng entropy: A generalized Shannon entropy to measure uncertainty. Artif. Intell. 2015, 6, 176–188.
  30. Deng, Z.; Wang, J. Measuring total uncertainty in evidence theory. Int. J. Intell. Syst. 2021, 36, 1721–1745.
  31. Tao, Y.; Zhu, X.; Yang, L. Multi-Sensor Data Fusion Based on Pearson Correlation Coefficient and Information Entropy. Minicomput. Syst. 2022, 1–7. Available online: http://kns.cnki.net/kcms/detail/21.1106.tp.20220225.1128.006.html (accessed on 24 July 2022).
  32. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32.
  33. Wang, L.; Xing, Q.; Mao, Y. A weighted combination method of evidence based on trust and certainty. J. Commun. 2017, 38, 83–88.
  34. Wu, H.; Zhen, J.; Zhangz, J. Urban rail transit operation safety evaluation based on an improved CRITIC method and cloud model. J. Rail Transp. Plan. Manag. 2020, 16, 100206.
  35. Jin, X.; Yang, A.; Su, T.; Kong, J.-L.; Bai, Y. Multi-channel fusion classification method based on time-series data. Sensors 2021, 21, 4391.
  36. Li, Y.; Chen, J.; Ye, F.; Liu, D. The improvement of DS evidence theory and its application in IR/MMW target recognition. J. Sensors 2016, 2016, 1903792.
  37. Wu, L.; Chen, L.; Hao, X. Multi-Sensor Data Fusion Algorithm for Indoor Fire Early Warning Based on BP Neural Network. Information 2021, 12, 59.
  38. Lin, S.; Wei, B.; Yang, H.; Xiong, Y.; Zhu, L.; Yu, L. D-S fusion detection method with new data sources. J. Univ. Electron. Sci. Technol. China 2021, 50, 861–867.
  39. Liu, X.; Ma, W. A multi-sensor fire alarm method based on D-S evidence theory. J. North China Univ. Technol. 2017, 39, 74–81.
  40. Zhang, H.; Yan, G.; Li, M.; Han, J. Analysis of the indoor fire risk based on the Pyrosim simulation. Conf. Ser. Earth Environ. Sci. 2021, 636, 012002.
Figure 1. Diagram of evidence intervals.
Figure 2. Flow chart of the proposed method.
Figure 3. Four common types of conflicting evidence weights.
Figure 4. Comparison of reasonable proposition BPA value of different fusion algorithms.
Figure 5. Weight of evidence under different amounts of evidence. (a) Single-focal element evidence weights. (b) Multi-focal element evidence weights.
Figure 6. Comparison of the fusion results of multiple evidence. (a) Single-focal element evidence fusion results. (b) Multi-focal element evidence fusion results.
Figure 7. Fire characteristic parameter cloud chart. (a) CO concentration–fire cloud chart. (b) Smoke concentration–fire cloud chart. (c) Temperature–fire cloud chart.
Figure 8. Open fire simulation information diagram.
Figure 9. Fire occurrence probability comparison in open fire scene.
Figure 10. Smoldering fire simulation information diagram.
Figure 11. Fire occurrence probability comparison in smoldering fire scene.
Table 1. Distribution of different bodies of evidence in different situations.

Situation     Distribution of the evidence bodies
Situation 1   m1(a) = m1(b) = m1(c) = m1(d) = 0.25;  m2(a) = m2(b) = m2(c) = m2(d) = 0.25
Situation 2   m1(a) = m1(b) = 0.5;  m2(c) = m2(d) = 0.5
Situation 3   m1(a) = m1(b) = m1(c) = 1/3;  m2(a, b, c) = 1
Situation 4   m1(a) = 0.25, m1(b) = 0.65, m1(a, b, c) = 0.1;  m2(a) = 0.65, m2(b) = 0.25, m2(a, b, c) = 0.1
Table 2. Four common conflicting BPA functions.

Type of conflict        Evidence   A      B      C      D      E
Complete conflict       m1         1      0      0      \      \
(k = 1)                 m2         0      1      0      \      \
                        m3         0.8    0.1    0.1    \      \
                        m4         0.8    0.1    0.1    \      \
0 trust conflict        m1         0.5    0.2    0.3    \      \
(k = 0.99)              m2         0.5    0.2    0.3    \      \
                        m3         0      0.9    0.1    \      \
                        m4         0.5    0.2    0.3    \      \
1 trust conflict        m1         0.9    0.1    0      \      \
(k = 0.9998)            m2         0      0.1    0.9    \      \
                        m3         0.1    0.15   0.75   \      \
                        m4         0.1    0.15   0.75   \      \
High conflict           m1         0.7    0.1    0.1    0      0.1
(k = 0.9999)            m2         0      0.5    0.2    0.1    0.2
                        m3         0.6    0.1    0.15   0      0.15
                        m4         0.55   0.1    0.1    0.15   0.1
                        m5         0.6    0.1    0.2    0      0.1
Table 3. Similarity and certainty of each evidence under the four conflicts.

Type of conflict     Evidence   Global similarity s_i   Determinacy DU(m_i)
Complete conflict    m1         1.628                   3
                     m2         0.036                   3
                     m3         1.832                   4.527
                     m4         1.832                   4.527
0 trust conflict     m1         2.141                   6.009
                     m2         2.141                   6.009
                     m3         0.423                   3.785
                     m4         2.141                   6.009
1 trust conflict     m1         0.068                   3.785
                     m2         1.715                   3.785
                     m3         1.891                   4.858
                     m4         1.891                   4.858
High conflict        m1         2.651                   7.194
                     m2         0.631                   8.127
                     m3         2.737                   7.733
                     m4         2.779                   7.987
                     m5         2.888                   7.718
Table 4. Fusion results of the four common conflicts.

Type of conflict    Method       A        B        C        D        E        Θ
Complete conflict   D-S          \        \        \        \        \        Invalid
                    Sun          0.0917   0.0423   0.0071   \        \        0.8589
                    Murphy       0.8204   0.1748   0.0048   \        \        0
                    Deng Y.      0.8166   0.1164   0.0670   \        \        0
                    Deng Z.      0.9792   0.0207   0.0001   \        \        0
                    Wang         0.9996   0.0002   0.0002   \        \        0
                    This paper   0.9999   0.0001   0.0001   \        \        0
0 trust conflict    D-S          0        0.7270   0.2730   0        0        0
                    Sun          0.0525   0.0597   0.0377   \        \        0.8501
                    Murphy       0.4091   0.4091   0.1818   \        \        0
                    Deng Y.      0.4318   0.2955   0.2727   \        \        0
                    Deng Z.      0.6510   0.2384   0.1106   \        \        0
                    Wang         0.7628   0.2200   0.0172   \        \        0
                    This paper   0.8421   0.0428   0.1151   \        \        0
1 trust conflict    D-S          0        1        0        0        0        0
                    Sun          0.0388   0.0179   0.0846   \        \        0.8587
                    Murphy       0.1676   0.0346   0.7978   \        \        0
                    Deng Y.      0.1388   0.1318   0.7294   \        \        0
                    Deng Z.      0.0273   0.0018   0.9709   \        \        0
                    Wang         0.0006   0.0015   0.9980   \        \        0
                    This paper   0.0001   0.0008   0.9991   \        \        0
High conflict       D-S          0        0.3571   0.4286   0        0.2143   0
                    Sun          0.0443   0.0163   0.0163   0.0045   0.0118   0.9094
                    Murphy       0.7637   0.1031   0.0716   0.0080   0.0538   0
                    Deng Y.      0.5324   0.1521   0.1462   0.0451   0.1241   0
                    Deng Z.      0.9846   0.004    0.0055   0.0001   0.0029   0
                    Wang         0.9911   0.0025   0.001    0.0      0.0004   0
                    This paper   0.9983   0.0002   0.0013   0.0      0.0002   0
Table 5. Single focal element evidence body data distribution.

Evidence   A      B      C
m1         0.5    0.2    0.3
m2         0      0.8    0.2
m3         0.6    0.3    0.1
m4         0.55   0.25   0.2
m5         0.65   0.15   0.2
Table 6. Multi-focus evidence body data distribution.

Evidence   A      B      C      AC
m1         0.5    0.2    0.3    0
m2         0      0.9    0.1    0
m3         0.55   0.1    0      0.35
m4         0.55   0.1    0      0.35
m5         0.6    0.1    0      0.3
Table 7. Similarity and certainty of the evidence under single- and multi-focal elements.

Evidence   s_i (single-focal)   DU(m_i) (single-focal)   s_i (multi-focal)   DU(m_i) (multi-focal)
m1         2.743                6.009                    2.496               6.009
m2         0.858                4.485                    0.345               3.785
m3         2.756                5.599                    3.685               3.221
m4         2.983                5.868                    3.685               3.221
m5         2.999                5.434                    3.730               3.422
Table 8. Multi-quantity evidence body fusion results. (m1–m3 denotes the fusion of evidence bodies m1 through m3, and similarly for m1–m4 and m1–m5.)

D-S
  m1–m3, single-focal: m(A) = 0, m(B) = 0.9132, m(C) = 0.0868
  m1–m3, multi-focal:  m(A) = 0, m(B) = 0.6315, m(C) = 0.3685, m(AC) = 0
  m1–m4, single-focal: m(A) = 0, m(B) = 0.9293, m(C) = 0.0707
  m1–m4, multi-focal:  m(A) = 0, m(B) = 0.3287, m(C) = 0.6713, m(AC) = 0
  m1–m5, single-focal: m(A) = 0, m(B) = 0.9079, m(C) = 0.0921
  m1–m5, multi-focal:  m(A) = 0, m(B) = 0.1403, m(C) = 0.8597, m(AC) = 0
Murphy
  m1–m3, single-focal: m(A) = 0.3555, m(B) = 0.5868, m(C) = 0.0577
  m1–m3, multi-focal:  m(A) = 0.5568, m(B) = 0.3562, m(C) = 0.0782, m(AC) = 0.0088
  m1–m4, single-focal: m(A) = 0.5453, m(B) = 0.4246, m(C) = 0.0301
  m1–m4, multi-focal:  m(A) = 0.8656, m(B) = 0.0891, m(C) = 0.0382, m(AC) = 0.0074
  m1–m5, single-focal: m(A) = 0.8090, m(B) = 0.1785, m(C) = 0.0125
  m1–m5, multi-focal:  m(A) = 0.9688, m(B) = 0.0156, m(C) = 0.0127, m(AC) = 0.0029
Deng Y.
  m1–m3, single-focal: m(A) = 0.4978, m(B) = 0.4434, m(C) = 0.0588
  m1–m3, multi-focal:  m(A) = 0.6500, m(B) = 0.2547, m(C) = 0.0858, m(AC) = 0.0095
  m1–m4, single-focal: m(A) = 0.7418, m(B) = 0.2312, m(C) = 0.0270
  m1–m4, multi-focal:  m(A) = 0.9305, m(B) = 0.0274, m(C) = 0.0339, m(AC) = 0.0082
  m1–m5, single-focal: m(A) = 0.9277, m(B) = 0.0633, m(C) = 0.0090
  m1–m5, multi-focal:  m(A) = 0.9846, m(B) = 0.0024, m(C) = 0.0098, m(AC) = 0.0032
Deng Z.
  m1–m3, single-focal: m(A) = 0.6367, m(B) = 0.2631, m(C) = 0.1002
  m1–m3, multi-focal:  m(A) = 0.5669, m(B) = 0.3325, m(C) = 0.0966, m(AC) = 0.0044
  m1–m4, single-focal: m(A) = 0.6603, m(B) = 0.3095, m(C) = 0.0301
  m1–m4, multi-focal:  m(A) = 0.8389, m(B) = 0.1068, m(C) = 0.0507, m(AC) = 0.0036
  m1–m5, single-focal: m(A) = 0.8733, m(B) = 0.1152, m(C) = 0.0115
  m1–m5, multi-focal:  m(A) = 0.9136, m(B) = 0.0454, m(C) = 0.0357, m(AC) = 0.0053
Wang
  m1–m3, single-focal: m(A) = 0.6594, m(B) = 0.3119, m(C) = 0.0286
  m1–m3, multi-focal:  m(A) = 0.6581, m(B) = 0.2409, m(C) = 0.0937, m(AC) = 0.0073
  m1–m4, single-focal: m(A) = 0.8142, m(B) = 0.1604, m(C) = 0.0255
  m1–m4, multi-focal:  m(A) = 0.9391, m(B) = 0.0190, m(C) = 0.0342, m(AC) = 0.0077
  m1–m5, single-focal: m(A) = 0.9518, m(B) = 0.0401, m(C) = 0.0081
  m1–m5, multi-focal:  m(A) = 0.9859, m(B) = 0.0014, m(C) = 0.0096, m(AC) = 0.0031
This paper
  m1–m3, single-focal: m(A) = 0.7983, m(B) = 0.175, m(C) = 0.0267
  m1–m3, multi-focal:  m(A) = 0.8368, m(B) = 0.0478, m(C) = 0.1105, m(AC) = 0.0049
  m1–m4, single-focal: m(A) = 0.8842, m(B) = 0.0944, m(C) = 0.0221
  m1–m4, multi-focal:  m(A) = 0.9597, m(B) = 0.0028, m(C) = 0.0316, m(AC) = 0.0059
  m1–m5, single-focal: m(A) = 0.9849, m(B) = 0.0109, m(C) = 0.0026
  m1–m5, multi-focal:  m(A) = 0.9895, m(B) = 0.0003, m(C) = 0.0078, m(AC) = 0.0024
Table 9. Normal evidence body data distribution.

Evidence   A      B      C
m1         0.85   0.05   0.1
m2         0.70   0.15   0.15
m3         0.50   0.20   0.30
m4         0.50   0.20   0.30
m5         0.7    0.25   0.05
Table 10. Normal evidence fusion result.

Method       m(A)     m(B)     m(C)
D-S          0.9985   0.0007   0.0008
This paper   1        0        0
Table 11. Comparison of fire detection accuracy and false alarm rate of different methods.

Fusion method         Accuracy rate   False alarm rate
Traditional D-S       88.6%           7.2%
Murphy                93.4%           5.6%
Deng Y.               96.6%           2.2%
Deng Z.               96.3%           3.1%
Wang                  98.1%           1.7%
Proposed method       98.8%           0.8%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
