Article

Estimating the Major Cluster by Mean-Shift with Updating Kernel

1 Graduate School of Engineering, Gifu University, 1-1 Yanagido, Gifu-shi 501-1193, Japan
2 Department of EECE, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-shi 501-1193, Japan
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 771; https://doi.org/10.3390/math7090771
Submission received: 9 July 2019 / Revised: 19 August 2019 / Accepted: 20 August 2019 / Published: 22 August 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract: The mean-shift method is a convenient mode-seeking method. Exploiting the fact that, within an analysis window (kernel) placed in a data space, the sample mean is biased toward the direction in which the samples are densest, the mean-shift method iteratively seeks the densest point of the samples, i.e., the sample mode. A smaller kernel leads to convergence to a local mode that appears because of statistical fluctuation; a larger kernel leads to a biased mode estimate affected by other clusters, abnormal values, or outliers, if any exist outside the major cluster. Therefore, optimal selection of the kernel size, designated as the bandwidth in much of the literature, represents an important problem. As described herein, assuming that the major cluster follows a Gaussian probability density distribution and that the outliers do not affect its sample mode, and adopting a Gaussian kernel, we propose a new mean-shift by which both the mean vector and covariance matrix of the major cluster are estimated at each iteration; the kernel size and shape are then updated adaptively. Numerical experiments indicate that the mean vector, covariance matrix, and number of samples of the major cluster can be estimated stably. Because the kernel shape can be adjusted not only to an isotropic shape but also to an anisotropic shape according to the sample distribution, the proposed method has higher estimation precision than the general mean-shift.

1. Introduction

When measuring a certain physical quantity, a few abnormal values, hereinafter designated as outliers, are sometimes included among the normally measured values, in addition to the measurement noise. Frequently in science and engineering, some effort is necessary to estimate the true statistical parameters of the physical quantity from these measured values and the included outliers. Because the measurement noise generally follows a Gaussian distribution with mean zero, the samples of the major cluster are Gaussian-distributed around the true value. The problem described above thus reduces to estimating the parameters of the major cluster, such as its mean, covariance matrix, and number of samples.
Because the mean equals the mode in a Gaussian distribution, if the outliers do not affect the sample mode of the major cluster, then the problem above can be replaced by a mode-seeking problem for the major cluster. Fukunaga and Hostetler [1] first proposed the mean-shift method, which was subsequently generalized by Cheng [2]; it is known as a convenient iterative method for mode-seeking. The mean-shift was shown to be equivalent to seeking a local maximum of the probability density distribution estimated by the kernel method using the steepest gradient algorithm [3,4]. Therefore, the bandwidth, which is the size of the used kernel, deeply affects both the estimation accuracy and precision of the mean-shift, as it does in kernel density estimation [5].
Usually in kernel density estimation, the bandwidth is determined such that the difference between the true distribution and the estimated distribution is minimized [6,7,8]. In the mean-shift, because the normalized norm affects the convergence speed, methods for determining the bandwidth have been proposed for the isotropic kernel [9] and the anisotropic kernel [10] such that the norm of the mean-shift vector normalized by the bandwidth is maximized. A method for selecting the most stable bandwidth was also proposed [10,11]. Moreover, a mean-shift with a bandwidth that varies with the coordinate in the data space was proposed [9,11]. Nevertheless, these methods entail high calculation costs because they require a provisional estimate of the probability density distribution, described as the pilot or initial estimate in parts of the literature. Other theoretical properties of the mean-shift, such as convergence, have also been studied: Li [12] proved its convergence by imposing some commonly acceptable additional conditions, and Ghassabeh [13] modified the mean-shift to guarantee its convergence. Although the mean-shift has been used widely in many applications [14,15,16], the selection of the bandwidth for the mean-shift has been largely ignored in the literature.
As described herein, we propose a new mean-shift method that adopts the multi-dimensional Gaussian kernel and updates the kernel bandwidth and shape to fit the size and shape of the major cluster at each iteration, with no provisional estimation. We first derive an equation for calculating the variance (or, in the multi-dimensional case, the covariance matrix) of the major cluster from the sample variance (or sample covariance matrix) within the kernel around the mode. Then, as the update of the mean-shift method progresses, the variance (or covariance matrix) of the major cluster is estimated using this equation, and the kernel bandwidth and shape are adjusted adaptively based on the estimate. We thus propose a mean-shift method with such an updating kernel. The proposed mean-shift requires no predetermination of the kernel bandwidth, as necessitated by the general mean-shift method.
This paper is organized as follows. The general mean-shift method is introduced in Section 2. In Section 3, we propose the new mean-shift method for the one-dimensional case. Numerical experiments evaluating the proposed mean-shift against the general mean-shift method are presented in Section 4. An application and the conclusions are presented in Section 5 and Section 6, respectively. In the appendices, we describe the extension of the general mean-shift method and of the proposed mean-shift to the multi-dimensional case.

2. General Mean-Shift Method

2.1. General Mean-Shift Method

Assuming that the major cluster of $N_N$ points follows a Gaussian distribution with mean $\mu_N$ and standard deviation $\sigma_N$, we consider the problem of estimating the mean $\mu_N$ of the major cluster when a few outliers of $N_O$ points exist in the sample of $N = N_N + N_O$ points. If the mode of the sample is not biased from the mean $\mu_N$ under the influence of the outliers, then the mean $\mu_N$ can be estimated as the mode. The mean-shift is a simple iterative method for estimating the mode of the major cluster. Letting the sample be $x_n$, $n = 1, \ldots, N$, the general mean-shift method is realized by the following iterative process (a code sketch follows the list):
  1. Letting the mean $\mu_x$ of the sample $x_n$, $n = 1, \ldots, N$, be the initial value of the mean estimator $\hat{\mu}_N$ of the major cluster, then
     $$\hat{\mu}_N \leftarrow \mu_x. \tag{1}$$
  2. Consider a Gaussian distribution $p(x; \mu_W, \sigma_W)$ with mean $\mu_W$ and standard deviation $\sigma_W$ as the kernel function in the value direction. Here, the mean $\mu_W$ of the kernel function is given by the mean estimator of the major cluster:
     $$\mu_W \leftarrow \hat{\mu}_N. \tag{2}$$
     The standard deviation $\sigma_W$ is assigned an appropriate size, as discussed later in Section 2.2.
  3. The weight $a_n$, $n = 1, \ldots, N$, for each sample $x_n$ under such a Gaussian kernel is
     $$a_n = \frac{1}{A}\, p(x_n; \mu_W, \sigma_W), \tag{3}$$
     where $A$ in Equation (3) is a normalization coefficient chosen so that the weights $a_n$ sum to 1:
     $$A = \sum_{k=1}^{N} p(x_k; \mu_W, \sigma_W). \tag{4}$$
     We use this weight $a_n$ to calculate the sample mean $\mu_x$ of $x_n$, $n = 1, \ldots, N$, as
     $$\mu_x = \sum_{n=1}^{N} a_n x_n. \tag{5}$$
  4. The value of the mean estimator $\hat{\mu}_N$ of the major cluster is updated by the following equation:
     $$\hat{\mu}_N \leftarrow \mu_x. \tag{6}$$
  5. If the variation of the value of the mean estimator $\hat{\mu}_N$ is equal to or less than a predetermined fixed value, the update process is terminated. Otherwise, return to step 2 and repeat the iteration.
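For concreteness, the iteration above can be sketched in Python as follows (a minimal illustration under the assumptions of this section, not the authors' implementation; the function name, tolerance, and iteration cap are ours). Note that the normalization constant of the Gaussian density cancels in the weights $a_n$, so only the exponential factor is needed:

```python
import numpy as np

def general_mean_shift(x, sigma_w, tol=1e-6, max_iter=200):
    """General 1-D mean-shift with a fixed Gaussian kernel bandwidth sigma_w."""
    mu_hat = np.mean(x)                        # step 1: initialize with the sample mean
    for _ in range(max_iter):
        # steps 2-3: normalized Gaussian weights a_n centered on the current
        # estimate (Equations (3) and (4)); the pdf's constant factor cancels
        w = np.exp(-0.5 * ((x - mu_hat) / sigma_w) ** 2)
        a = w / w.sum()
        mu_new = np.sum(a * x)                 # weighted sample mean (Equation (5))
        if abs(mu_new - mu_hat) <= tol:        # step 5: stop when the change is small
            return mu_new
        mu_hat = mu_new                        # step 4: update the mean estimator
    return mu_hat
```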

2.2. Shortcomings and Solution of the General Mean-Shift Method

The general mean-shift method estimates the modes of the underlying probability density function. From the definition of a probability density, if the random variable $X$ of $N$ data points $x_i$, $i = 1, 2, 3, \ldots, N$ in the one-dimensional space $\mathbb{R}$ has density $f$, then
$$f(x) = \lim_{h \to 0} \frac{1}{2h}\, P(x - h < X < x + h). \tag{7}$$
For any given $h$ (bin bandwidth or kernel bandwidth), we can estimate $P(x - h < X < x + h)$ by the proportion of the sample falling in the interval $(x - h, x + h)$. Thus, a natural estimator $\hat{f}$ of the density is given by choosing a small $h$ and setting
$$\hat{f}(x) = \frac{1}{2h} \cdot \frac{N_x}{N}. \tag{8}$$
Here, $N_x$ denotes the number of samples falling in the interval $(x - h, x + h)$. To express the estimator more transparently, define the weight function $\omega(x; h)$ by
$$\omega(x; h) = \begin{cases} \dfrac{1}{2h}, & |x| < h, \\[4pt] 0, & \text{otherwise}. \end{cases} \tag{9}$$
The estimator can then be expressed as [17]
$$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^{N} \omega(x - x_i; h). \tag{10}$$
Replacing the weight function $\omega$ by a general kernel function $K(x; \sigma)$ with standard deviation $\sigma$, which satisfies the condition
$$\int_{-\infty}^{\infty} K(x)\, dx = 1, \tag{11}$$
the kernel estimator of the probability density function at point $x$ can be expressed as
$$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^{N} K(x - x_i; \sigma). \tag{12}$$
The general mean-shift attempts to ascertain the local modes of the density function $\hat{f}(x)$, which correspond to the zeros of the gradient, $\nabla_x \hat{f}(x) = 0$. Therefore, both the type of kernel function $K(x; \sigma)$ and the kernel bandwidth $\sigma$ directly affect the performance of the general mean-shift method. Fixing the kernel function to the Gaussian kernel, we specifically examine the influence of the pre-set kernel bandwidth on the general mean-shift.
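The kernel estimator of Equation (12) is straightforward to implement; the following sketch (our illustration, using the Gaussian kernel) evaluates $\hat{f}$ at a set of points:

```python
import numpy as np

def kernel_density_estimate(x_eval, samples, sigma):
    """Evaluate Equation (12), f_hat(x) = (1/N) sum_i K(x - x_i; sigma),
    with a Gaussian kernel K, at each point of the array x_eval."""
    diff = np.atleast_1d(x_eval)[:, None] - np.asarray(samples)[None, :]
    k = np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return k.mean(axis=1)   # average the kernel contributions of the N samples
```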
To confirm the influence of a fixed kernel bandwidth on the estimation accuracy of the general mean-shift method, we set various fixed kernel bandwidths in advance. Here, we summarize the numerical results obtained for the general mean-shift method under the conditions discussed in Section 4. Figure 1a presents the bias error between the value estimated by the general mean-shift method and the true value for various pre-selected kernel bandwidths. The horizontal axis shows the different kernel bandwidths. The vertical axes of Figure 1a,b respectively show the bias error between the estimated mean and the true mean, and the variance of the estimated mean. For each fixed kernel bandwidth, we estimated the mean of the major cluster distributed as shown in Figure 2 over 1000 trials and computed the bias errors using the equation described in Section 4.2. Figure 1a shows that, when we enlarge the fixed kernel bandwidth, the mean estimator becomes more susceptible to outliers, and the bias error of the general mean-shift method increases. Conversely, when we decrease the kernel bandwidth, the number of samples involved in the mean estimation decreases, a local mode can easily become the convergence point of the iterative process, and the bias error again increases. The kernel bandwidth should therefore be set in the range of 0.5–1.5. As shown in Figure 1b, the estimation variance of the general mean-shift method decreases as the kernel bandwidth is enlarged; therefore, the optimal kernel bandwidth is 1.5. Because the maximum value of these variances is very small, not exceeding 0.06, selecting the kernel bandwidth within the range of 0.5–1.5 ensures the unbiasedness and consistency of the mean estimator in the general mean-shift method. However, without knowing the true mean of the major cluster beforehand, we cannot calculate the bias error of the general mean-shift method; therefore, we cannot choose the appropriate kernel bandwidth based on the comparison shown in Figure 1a. Indeed, the proper pre-setting of the kernel bandwidth constitutes an important difficulty.
The optimal kernel bandwidth depends on the existence range of the outliers, the number of samples belonging to the major cluster, and the distribution that the major cluster follows. In the absence of prior knowledge, when the whole sample contains the major cluster and outliers, the kernel bandwidth in signal processing is often fixed to half the standard deviation of the whole sample [18]. For clustering in image processing and other multi-dimensional applications, it remains difficult to preset the kernel bandwidth properly in the general mean-shift method. An inappropriate kernel bandwidth becomes a factor that degrades the estimation accuracy.
In what follows, based on the general mean-shift method, we propose a method that changes the kernel bandwidth adaptively by simultaneously estimating the mean (for the multi-dimensional case, the mean vector) and the standard deviation (for the multi-dimensional case, the covariance matrix) of the major cluster at each iteration. With this method, the kernel bandwidth need not be set properly in advance.

3. One-Dimensional Mean-Shift with Updating Kernel

3.1. Derivation of the Major Cluster Standard Deviation $\sigma_N$ from the Sample Standard Deviation $\sigma_x$

Here, the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ is represented by $p(x; \mu, \sigma)$; it is abbreviated as $p(x; \sigma)$ when $\mu = 0$. We use the following two equations for two Gaussian distributions:
$$\int_{-\infty}^{\infty} p(x; \sigma_W)\, p(x; \sigma_N)\, dx = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_W^2 + \sigma_N^2}}, \tag{13}$$
$$\int_{-\infty}^{\infty} x^2\, p(x; \sigma_W)\, p(x; \sigma_N)\, dx = \frac{\sigma_W^2\, \sigma_N^2}{\sqrt{2\pi}\,(\sigma_W^2 + \sigma_N^2)^{3/2}}. \tag{14}$$
We assume that the influence of the outliers is small enough that the sample mode is not biased from the mean $\mu_N$. If the general mean-shift method, with a sufficiently small fixed kernel bandwidth given by the standard deviation of the kernel, starts the iteration from an appropriate initial value, then the influence of the outliers on the estimation decreases gradually as the estimate converges. Therefore, it is sufficient to consider only the samples from the major cluster, $x_n$, $n = 1, \ldots, N_N$, when the estimate converges to the true value. In addition, the mean $\mu_N$ of the major cluster and the mean $\mu_W$ of the Gaussian kernel coincide near the convergence point. Generality is not lost by a coordinate transformation that makes both zero; therefore, we let $\mu_N = \mu_W = 0$ here for the analysis. The variance $\sigma_x^2$ of the sample $x_n$, $n = 1, \ldots, N_N$, weighted by $a_n$, $n = 1, \ldots, N_N$, is
$$\sigma_x^2 = \sum_{n=1}^{N_N} a_n x_n^2. \tag{15}$$
The weight $a_n$ is the Gaussian kernel weight given by Equations (3) and (4), with $N$ replaced by $N_N$.
The expected value of the sample variance $\sigma_x^2$ is calculated after substituting Equation (3) into Equation (15) as
$$E[\sigma_x^2] = E\left[\frac{1}{A} \sum_{n=1}^{N_N} p(x_n; \sigma_W)\, x_n^2\right]. \tag{16}$$
The variance of $1/A$ is sufficiently smaller than the dispersion of the other factors. Therefore, under the assumption that the major cluster follows a Gaussian distribution, Equation (16) can be approximated as
$$E[\sigma_x^2] \approx \frac{1}{E[A]}\, E\left[\sum_{n=1}^{N_N} p(x_n; \sigma_W)\, x_n^2\right]. \tag{17}$$
This approximation is discussed later in Appendix B. Here, we calculate the expected value of $A$ by Equations (4) and (13) as
$$E[A] = E\left[\sum_{k=1}^{N_N} p(x_k; \sigma_W)\right] = \sum_{k=1}^{N_N} E[p(x_k; \sigma_W)] = N_N \int_{-\infty}^{\infty} p(x; \sigma_W)\, p(x; \sigma_N)\, dx = \frac{N_N}{\sqrt{2\pi}\sqrt{\sigma_W^2 + \sigma_N^2}}. \tag{18}$$
The expected value of the other part becomes
$$E\left[\sum_{n=1}^{N_N} p(x_n; \sigma_W)\, x_n^2\right] = \sum_{n=1}^{N_N} E[p(x; \sigma_W)\, x^2] = N_N \int_{-\infty}^{\infty} x^2\, p(x; \sigma_W)\, p(x; \sigma_N)\, dx = \frac{N_N\, \sigma_W^2\, \sigma_N^2}{\sqrt{2\pi}\,(\sigma_W^2 + \sigma_N^2)^{3/2}} \tag{19}$$
according to Equation (14). In other words, after weighting by a Gaussian kernel with mean 0 and standard deviation $\sigma_W$, the expected value of the variance $\sigma_x^2$ of a sample that follows a Gaussian distribution with mean 0 and standard deviation $\sigma_N$ is
$$E[\sigma_x^2] = \frac{\sigma_W^2\, \sigma_N^2}{\sigma_W^2 + \sigma_N^2} \tag{20}$$
according to Equations (18) and (19). Equation (20) can be transformed to
$$\sigma_N^2 = \frac{\sigma_W^2\, E[\sigma_x^2]}{\sigma_W^2 - E[\sigma_x^2]}. \tag{21}$$
This expression shows that the standard deviation $\sigma_N$ can be estimated from the standard deviation $\sigma_x$ of the sample weighted by a Gaussian kernel with mean 0 and standard deviation $\sigma_W$ as
$$\hat{\sigma}_N = \sqrt{\frac{\sigma_W^2\, \sigma_x^2}{\sigma_W^2 - \sigma_x^2}}. \tag{22}$$
In addition, using Equation (18), the number $N_N$ of samples belonging to the major cluster can be estimated as
$$\hat{N}_N = A\, \sqrt{2\pi}\sqrt{\sigma_W^2 + \hat{\sigma}_N^2}. \tag{23}$$
Because the mean $\mu_N$ and the standard deviation $\sigma_N$ of the major cluster can both be estimated, it is sufficient to change the standard deviation $\sigma_W$ of the kernel adaptively at each update according to the estimated value $\hat{\sigma}_N$. Specifically, the standard deviation $\sigma_W$ of the kernel is assigned to be $r$ times the estimated value $\hat{\sigma}_N$, although the appropriate ratio depends on the existence range of the outliers. We designate this $r$ as a scale factor; appropriate values of $r$ are examined in the numerical experiments discussed later.
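The two estimators derived above can be evaluated directly; the sketch below (our illustration; the function name is ours) computes $\hat{\sigma}_N$ of Equation (22) and $\hat{N}_N$ of Equation (23) for one placement of the kernel. Note that Equation (22) requires $\sigma_x < \sigma_W$, which holds near convergence because Equation (20) gives $E[\sigma_x^2] < \sigma_W^2$:

```python
import numpy as np

def estimate_sigma_and_count(x, mu_w, sigma_w):
    """Evaluate Equations (22) and (23) for samples x under the Gaussian
    kernel p(x; mu_w, sigma_w); mu_w is assumed to lie near the mode."""
    # full Gaussian pdf values, needed because A enters Equation (23) directly
    p = np.exp(-0.5 * ((x - mu_w) / sigma_w) ** 2) / (np.sqrt(2.0 * np.pi) * sigma_w)
    A = p.sum()                                        # Equation (4)
    a = p / A                                          # Equation (3)
    mu_x = np.sum(a * x)                               # Equation (5)
    var_x = np.sum(a * (x - mu_x) ** 2)                # weighted sample variance
    var_n = sigma_w**2 * var_x / (sigma_w**2 - var_x)  # Equation (22), squared
    n_hat = A * np.sqrt(2.0 * np.pi * (sigma_w**2 + var_n))  # Equation (23)
    return np.sqrt(var_n), n_hat
```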

3.2. Mean-Shift with Updating Kernel

Based on the discussion presented in Section 3.1, at each iteration of the general mean-shift method, the standard deviation $\sigma_N$ can be estimated simultaneously with the mean value $\mu_N$. Therefore, we propose a new mean-shift method that adaptively changes the standard deviation $\sigma_W$ of the kernel. The algorithm is summarized below (a code sketch follows the list):
  1. Let the mean $\mu_x$ of the sample $x_n$, $n = 1, \ldots, N$, be the initial value of the mean estimator $\hat{\mu}_N$ of the major cluster, and let the standard deviation $\sigma_x$ of this sample be the initial value of the standard deviation estimator $\hat{\sigma}_N$ of the major cluster:
     $$\hat{\mu}_N \leftarrow \mu_x, \tag{24}$$
     $$\hat{\sigma}_N \leftarrow \sigma_x. \tag{25}$$
  2. Consider a Gaussian distribution $p(x; \mu_W, \sigma_W)$ with mean $\mu_W$ and standard deviation $\sigma_W$ as the kernel function in the value direction. Here, the mean $\mu_W$ and the standard deviation $\sigma_W$ are given respectively by the estimated mean $\hat{\mu}_N$ and the estimated standard deviation $\hat{\sigma}_N$ of the major cluster:
     $$\mu_W \leftarrow \hat{\mu}_N, \tag{26}$$
     $$\sigma_W \leftarrow r\, \hat{\sigma}_N. \tag{27}$$
     Here, the mean $\mu_W$ and standard deviation $\sigma_W$ of the Gaussian kernel are not estimators, although they change when the kernel updates.
  3. The weight $a_n$, $n = 1, \ldots, N$, for each sample $x_n$ under such a Gaussian kernel $p(x; \mu_W, \sigma_W)$ is calculated using Equations (3) and (4). We use this weight $a_n$ to calculate the sample mean $\mu_x$ and standard deviation $\sigma_x$ of $x_n$, $n = 1, \ldots, N$, as shown below:
     $$\mu_x = \sum_{n=1}^{N} a_n x_n, \tag{28}$$
     $$\sigma_x = \sqrt{\sum_{n=1}^{N} a_n (x_n - \mu_x)^2}. \tag{29}$$
  4. The values of the mean estimator $\hat{\mu}_N$, the standard deviation estimator $\hat{\sigma}_N$, and the sample number estimator $\hat{N}_N$ of the major cluster are updated, respectively, by the following equations:
     $$\hat{\mu}_N \leftarrow \mu_x, \tag{30}$$
     $$\hat{\sigma}_N \leftarrow \sqrt{\frac{\sigma_W^2\, \sigma_x^2}{\sigma_W^2 - \sigma_x^2}}, \tag{31}$$
     $$\hat{N}_N \leftarrow A\, \sqrt{2\pi}\sqrt{\sigma_W^2 + \hat{\sigma}_N^2}. \tag{32}$$
  5. If the variations of the values of these estimators are equal to or less than the predetermined fixed value, the update process is terminated. Otherwise, return to step 2 and repeat the iteration.
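A compact sketch of the whole algorithm might read as follows (our illustration; the function name, tolerance, and iteration cap are ours):

```python
import numpy as np

def mean_shift_updating_kernel(x, r=1.0, tol=1e-4, max_iter=200):
    """1-D mean-shift with an updating Gaussian kernel (Section 3.2).
    Returns the estimates (mu_hat_N, sigma_hat_N, n_hat_N)."""
    mu_hat, sig_hat = np.mean(x), np.std(x)    # step 1: Equations (24) and (25)
    n_hat = float(len(x))
    for _ in range(max_iter):
        mu_w, sig_w = mu_hat, r * sig_hat      # step 2: Equations (26) and (27)
        p = np.exp(-0.5 * ((x - mu_w) / sig_w) ** 2) / (np.sqrt(2.0 * np.pi) * sig_w)
        A = p.sum()
        a = p / A                              # step 3: Equations (3) and (4)
        mu_x = np.sum(a * x)                   # Equation (28)
        var_x = np.sum(a * (x - mu_x) ** 2)    # Equation (29), squared
        # Equation (31); valid while var_x < sig_w**2, which holds near convergence
        sig_new = np.sqrt(sig_w**2 * var_x / (sig_w**2 - var_x))
        n_hat = A * np.sqrt(2.0 * np.pi * (sig_w**2 + sig_new**2))  # Equation (32)
        done = abs(mu_x - mu_hat) <= tol and abs(sig_new - sig_hat) <= tol
        mu_hat, sig_hat = mu_x, sig_new        # step 4: Equation (30)
        if done:                               # step 5: stop on small variation
            break
    return mu_hat, sig_hat, n_hat
```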

4. Numerical Experiment

4.1. Update Process of the Mean-Shift with Updating Kernel

For the proposed method, we confirm the process by which the estimates of the mean vector, the covariance matrix, and the number of samples converge through the iteration to the true values of the major cluster. Although no restriction is placed on the dimension of the data to which the proposed method is applicable, two-dimensional data are targeted for analysis so that the data distribution and update process can be illustrated. Herein, the major cluster of $N_N = 3000$ points is generated from a two-dimensional normal distribution with mean vector $\mu_N = (0, 0)^T$ and covariance matrix
$$C_N = \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix}. \tag{33}$$
The outliers of $N_O = 200$ points are distributed uniformly within the range $x_1 \in [-2, -1]$, $x_2 \in [3, 4]$. Figure 2 shows an example of the generated sample in the $(x_1, x_2)$ space. The symbol • in the figure represents the coordinates of each point. The points spreading in the central elliptical shape belong to the major cluster; the points distributed in a square shape at the upper left are outliers. In the figure, the solid ellipse represents the contour within which 99% of the $M$-dimensional normal distribution defined by the mean vector $\mu_N$ and the covariance matrix $C_N$ falls. In later figures, solid ellipses likewise represent the mean vector $\mu_N$ and covariance matrix $C_N$, or their estimates.
In general, as discussed in Appendix C.2, the initial estimates of the mean $\hat{\mu}_N$ and covariance matrix $\hat{C}_N$ of the major cluster can be assigned respectively to the mean and covariance matrix of all samples. However, we intentionally set the initial kernel, with mean vector $\hat{\mu}_N = (-2, 3)^T$ and covariance matrix
$$\hat{C}_N = \begin{pmatrix} 1.25 & -0.75 \\ -0.75 & 1.25 \end{pmatrix}, \tag{34}$$
to be located and shaped sufficiently far from the major cluster. To demonstrate how the estimates converge to the true values with updating, the scale factor is $r = 1.0$. The update ends when all of the following conditions are satisfied: the sum of squares of the change of $\hat{\mu}_N$ is 0.01 or less, the sum of squares of the change of $\hat{C}_N$ is 0.01 or less, and the square of the change of $\hat{N}_N$ is 30 or less.
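For reference, a sample such as that of Figure 2 can be generated as follows (a sketch using the parameters stated above; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# major cluster: N_N = 3000 points from N(mu_N, C_N) with C_N of Equation (33)
mu_N = np.array([0.0, 0.0])
C_N = np.array([[3.0, 2.0],
                [2.0, 3.0]])
major = rng.multivariate_normal(mu_N, C_N, size=3000)

# outliers: N_O = 200 points uniform on x1 in [-2, -1], x2 in [3, 4]
outliers = np.column_stack([rng.uniform(-2.0, -1.0, 200),
                            rng.uniform(3.0, 4.0, 200)])

x = np.vstack([major, outliers])   # the whole sample of N = 3200 points
```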
As described earlier, the solid ellipses shown in Figure 3 represent the estimated mean vector $\hat{\mu}_N$, covariance matrix $\hat{C}_N$, and number $\hat{N}_N$ of samples at each update of the proposed method. Figure 3 shows that the estimates $\hat{\mu}_N$, $\hat{C}_N$, $\hat{N}_N$ converge to the true values $\mu_N$, $C_N$, $N_N$ of Figure 2 as the update progresses, although they start from rather bad initial values. Here, the estimated value $\hat{N}_N$ is reported to one decimal place.

4.2. Influence of Kernel Bandwidth on Estimation Accuracy (Unbiasedness)

An exceedingly important property required of estimators is unbiasedness: the property that the expected value of the estimator coincides with the true value. If no statistical bias exists in the estimate, then the estimation is accurate. Denoting the parameter by $\theta$, we investigate the unbiasedness of the estimator $\hat{\theta}$. If the parameter $\theta$ is a scalar, then the bias error is the difference $E[\hat{\theta}] - \theta$ between the expected value of the estimator and the true value $\theta$. If the parameter $\theta$ is a vector or matrix, then the bias error is the square root $\sqrt{\|E[\hat{\theta}] - \theta\|^2}$ of the sum of squares over all the elements. Unbiasedness can then be evaluated by whether the bias error is zero. As explained below, the initial value of the kernel bandwidth has less influence on the unbiasedness of the estimates in the proposed method, discussed in Appendix C, than in the general mean-shift method, introduced in Appendix A.
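In the numerical experiments, the expectation is replaced by an average over repeated trials (see below); a sketch of the bias-error computation for the vector and matrix case is given here (our helper; for a scalar parameter, the signed difference $E[\hat{\theta}] - \theta$ is used instead):

```python
import numpy as np

def bias_error(estimates, theta_true):
    """Bias error of Section 4.2 for a vector or matrix parameter theta:
    the square root of the sum of squares of E[theta_hat] - theta over all
    elements, with E[.] replaced by the average over the trials axis."""
    mean_est = np.mean(estimates, axis=0)   # Monte Carlo stand-in for E[theta_hat]
    return np.sqrt(np.sum((mean_est - np.asarray(theta_true)) ** 2))
```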
The distributions that the major cluster and outliers follow, the numbers of samples $N_N$, $N_O$, the scale factor $r$, and the update ending condition are the same as those described in Section 4.1. The initial estimate of the mean vector $\hat{\mu}_N$ is the mean vector of all samples. The initial estimate of the covariance matrix is assigned to $\hat{C}_N = \sigma^2 I$. In the general mean-shift method, the covariance matrix of the kernel is $C_W = \sigma^2 I$. Under these conditions, the mean vector $\mu_N$, the covariance matrix $C_N$, and the number $N_N$ of samples are estimated using the general mean-shift method and the proposed method. In addition, because it is impossible to obtain the expected value in numerical experiments, the expected value is replaced by the average of the estimates over 1000 trials with different random numbers.
In the proposed method, $\sigma^2$ is the initial value of the kernel bandwidth; it corresponds to the pre-set value of the kernel bandwidth in the general mean-shift method. When this $\sigma^2$ is changed to various values, the bias errors of the estimates of the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples are calculated. The results are presented respectively in Figure 4a–c. The horizontal axis shows the different kernel bandwidths. The vertical axes respectively show the bias errors for the estimators of $\mu_N$, $C_N$, and $N_N$. In these figures, the symbol ◯ corresponds to the proposed method, and the symbol △ represents the bias errors of the general mean-shift method. Because the covariance matrix and number of samples cannot be estimated by the general mean-shift method, only the results obtained using the proposed method are shown in Figure 4b,c. The scale of the vertical axis is fixed to represent 10% error at full scale; the same scale is used in the following figures unless specified otherwise.
Figure 4a shows that, in the general mean-shift method, the bias error increases linearly and the unbiasedness is lost when the kernel bandwidth $\sigma^2$ exceeds approximately the range of 0.5–1.5 indicated by the symbol ↓: as the kernel becomes larger, the outliers fall within the range of the kernel and greatly affect the mean estimation of the major cluster. For this reason, the proper setting of the kernel bandwidth is an important difficulty in the general mean-shift method. In contrast, in the proposed method, the kernel bandwidth is adjusted according to the estimated covariance matrix of the major cluster at each iteration; it is therefore much less susceptible to the influence of the initial value $\sigma^2$. Furthermore, as shown in Figure 4b,c, the same holds for the estimation of the covariance matrix $C_N$ and the number $N_N$ of samples.
While maintaining the ratio of the number $N_N$ of samples of the major cluster to the number $N_O$ of samples of the outliers at 3000:200 and changing the number $N = N_N + N_O$ of samples from 1000 to 90,000, the bias errors of the estimates of the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples are obtained using the proposed method, as shown in Figure 5. The horizontal axis shows the different numbers of samples in the whole sample. The vertical axes respectively represent the bias errors for the estimators of $\mu_N$, $C_N$, and $N_N$. Because the proposed method is nearly independent of the initial value $\sigma^2$, the initial value $\sigma^2$ is fixed to 1.5. Figure 5 shows that these estimators are unbiased for a finite number of samples.

4.3. Influence of the Scale Factor r on Estimation Accuracy

In the proposed method, we need not select the initial value of the kernel bandwidth in advance because the kernel bandwidth is changed adaptively; the difficulty of presetting a value that influences the estimation accuracy is thus removed. Instead, the problem of optimally setting the scale factor $r$ arises. The scale factor $r$ represents the ratio of the kernel bandwidth (standard deviation) to the major cluster width (standard deviation). Therefore, the smaller the scale factor, the smaller the kernel bandwidth is set with respect to the major cluster width. From the viewpoint of estimation accuracy, the kernel bandwidth should be sufficiently large but should not cover the outliers. In other words, if the outliers exist at a distance from the mode of the major cluster of more than three times its standard deviation, then, according to the three-sigma rule of thumb, the kernel bandwidth can be set equal to the standard deviation of the major cluster, which means $r = 1$. Otherwise, if a certain number of outliers lie within one standard deviation of the mode of the major cluster, the kernel bandwidth should be about 1/3 of the standard deviation of the major cluster, which means $r = 1/3$. If the distributions of the major cluster and the outliers were specified completely, it would be possible to derive a theoretical formula for the optimal scale factor $r$. However, because the purpose here is to estimate the distributions of the major cluster and the outliers, such a formula, even if derived, could not be used for estimation; its derivation therefore has little value. Instead, as described below, we investigate the influence of the selected value of this scale factor on the estimation accuracy.
The distributions that the major cluster and outliers follow, the numbers $N_N$, $N_O$ of samples, and the update ending condition are the same as those in Section 4.1. As in Section 4.2, the initial estimates $\hat{\mu}_N$, $\hat{C}_N$ of the mean vector and covariance matrix are given, respectively, by the mean vector and covariance matrix of the whole sample. We select various values of the scale factor $r$ and estimate the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples using the proposed method. The bias errors of the estimates are presented in Figure 6a–c. The horizontal axis represents the various scale factors $r$. The vertical axes respectively represent the bias errors for the estimators of $\mu_N$, $C_N$, and $N_N$.
Figure 6 shows that the bias errors of all the estimates increase and the unbiasedness is lost when the scale factor $r$ is selected larger than a certain value: when the kernel bandwidth increases, the estimation becomes more susceptible to outliers, as with the general mean-shift method shown in Figure 4. However, when the scale factor $r$ is selected as a small value, the bias error also increases sharply. The unbiasedness is lost for the covariance matrix $C_N$ and the number $N_N$ of samples, although this is not readily apparent for the mean vector $\mu_N$. The reason is explainable as presented below.
If we select the scale factor $r$ smaller than one, the kernel bandwidth becomes small, causing a lack of the practical number of samples that contribute to the estimation. For that reason, the estimation precisions of the mean vector $\mu_x$ and the standard deviations $\sigma_{x,m}$ deteriorate. This deterioration of the estimation precision results from the small number of samples; consequently, the estimation error is normally distributed but contains no bias. As shown in Equation (A32), the estimated mean vector $\hat{\mu}_N$ is simply the sample mean vector $\mu_x$. The estimation equations for the standard deviations $\hat{\sigma}_{N,m}$ and the number $\hat{N}_N$ of samples, in contrast, are nonlinear functions of the sample standard deviations $\sigma_{x,m}$, as shown in Equations (A22) and (A23). In general, normality is lost under a nonlinear transformation. Therefore, the estimation errors of both the standard deviations $\hat{\sigma}_{N,m}$ and the number $\hat{N}_N$ of samples are converted into bias errors by these nonlinear transformations, even though the estimation error of the sample standard deviation $\sigma_{x,m}$ is normally distributed.
Figure 6 shows that the appropriate value of the scale factor $r$ is in the range of $0.5 \leq r \leq 1.5$, although it depends on the characteristics of the target data. For example, the lower limit increases when the number of samples is small, and the upper limit decreases when the outliers approach the major cluster. Comparison of the bias errors with those of the general mean-shift indicates that the selection of the scale factor $r$ is not as critical as the selection of the kernel bandwidth shown in Figure 4, because the range in which the bias error remains low is wide.

4.4. Verification of Consistency

The goodness of an estimator is evaluated by its accuracy and precision. Accuracy is evaluated by the bias error, as discussed in Section 4.2, whereas precision is evaluated by the variance of the estimates. Before comparing the estimation precision of the general mean-shift method with that of the proposed method, one must confirm the consistency of the estimates of both methods. Consistency is an important property required of an estimator: the variance of the estimates approaches 0 as the number of samples used for estimation increases.
The distributions that the major cluster and outliers follow, in addition to the update ending conditions, are the same as those described in Section 4.1. As shown in Appendix C.2, the initial values $\hat{\mu}_N$, $\hat{C}_N$ of the estimates of the mean vector and covariance matrix are given respectively by the mean vector $\mu_x$ and covariance matrix $C_x$ of the whole sample. To ensure that the estimators are unbiased, we select the scale factor $r = 1.0$ based on the discussion of the proposed method in Section 4.3, and the kernel $C_W = \sigma^2 I$, $\sigma^2 = 1.5$, based on the discussion of the general mean-shift method in Section 4.2.
While maintaining the ratio of the number $N_N$ of samples of the major cluster to the number $N_O$ of samples of the outliers at 3000:200 and changing the number $N = N_N + N_O$ of samples from 1000 to 90,000, the variance of each estimate of the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples is obtained using both methods. The estimation variance is replaced by the sample variance of each estimate over 1000 trials as the sample number changes. The estimation variances $Var[\hat{\mu}_N]$, $Var[\hat{C}_N]$, $Var[\hat{N}_N]$ are shown in Figure 7a–c. The horizontal axis shows the logarithm of the number $N$ of samples. The vertical axes respectively show the logarithms of the estimation variances $Var[\hat{\mu}_N]$, $Var[\hat{C}_N]$, $Var[\hat{N}_N]$. In these figures, the symbol ◯ corresponds to the proposed method, and the symbol △ represents the general mean-shift method. Because the covariance matrix and number of samples cannot be estimated by the general mean-shift method, only the results obtained using the proposed method are presented in Figure 7b,c.
From Figure 7a–c, it is readily apparent that the variance $Var[\cdot] \to 0$ as the sample number $N \to \infty$; therefore, the estimators $\hat{\mu}_N$, $\hat{C}_N$, $\hat{N}_N$ are consistent. Because Figure 7a–c are drawn on logarithmic axes, the slope should in fact be $-1$. Therefore, the relation between the number of samples and the estimation variance is approximated by a linear polynomial with the slope fixed at $-1$. The straight line represented by this approximate polynomial is superimposed as a solid line in these figures; the results demonstrate the validity of the approximation. Here, we simply define the estimation variance as the zeroth-order coefficient of the approximate linear polynomial, i.e., the virtual estimation variance corresponding to sample number $N = 1$. With regard to this estimation variance, we compare the estimation precision of the proposed method with that of the general mean-shift method.

4.5. Estimation Precisions of the Proposed and General Mean-Shift Methods

The distributions that the major cluster and outliers follow, the numbers $N_N$, $N_O$ of samples, and the update ending condition are the same as those described in Section 4.1. As shown in Appendix C.2, the initial estimates $\hat{\mu}_N$, $\hat{C}_N$ of the mean vector and covariance matrix are given, respectively, by the mean vector $\mu_x$ and covariance matrix $C_x$ of the whole sample.
We select various values of the scale factor $r$ and use the proposed method to estimate the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples. Figure 8a–c respectively present the estimation variances corresponding to the estimates of the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples. The horizontal axis shows the logarithm of the scale factor $r$. The vertical axes respectively show the estimation variances $Var[\hat{\mu}_N]$, $Var[\hat{C}_N]$, $Var[\hat{N}_N]$. Similarly, letting the covariance matrix of the kernel be $C_W = \sigma^2 I$, we estimate the mean vector $\mu_N$ using the general mean-shift method while the kernel bandwidth $\sigma^2$ is changed to various values. The estimation variance of the estimate $\hat{\mu}_N$ is presented in Figure 9.
From Figure 8a–c, the estimation variance of each estimate of the mean vector $\mu_N$, covariance matrix $C_N$, and number $N_N$ of samples decreases monotonically with respect to $r$. If $r$ is small, then the kernel bandwidth decreases, the practical number of points involved in the estimation decreases, and the estimation precision therefore deteriorates. On the other hand, if $r$ is large, the estimation variance decreases further; however, because bias error then occurs, as shown in Figure 6, a large $r$ is not desirable for an estimator. Similarly, the estimation variance of the general mean-shift method decreases monotonically with respect to the kernel bandwidth $\sigma^2$, as shown in Figure 9; the reason is exactly the same as in the case of the proposed method.
Finally, the estimation precision of the general mean-shift method and that of the proposed method are compared. Regarding the general mean-shift method, the estimation is unbiased if $\sigma^2 \leq 1.5$, as shown in Figure 4, whereas the estimation precision increases as $\sigma^2$ becomes larger, as shown in Figure 9. In the general mean-shift method, the optimal kernel bandwidth is therefore $\sigma^2 = 1.5$. The estimation variance at kernel bandwidth $\sigma^2 = 1.5$ is read from Figure 9; its value is shown by a horizontal dotted line in Figure 8a. In the proposed method, $0.5 \leq r \leq 1.5$ is the suitable range of the scale factor $r$. In this range, the estimation variance of the proposed method is half or less of that of the general mean-shift method. The proposed method has higher estimation precision than the general mean-shift method with the optimal kernel bandwidth for the following reason. In the general mean-shift method, the kernel shape is restricted to an isotropic shape because the covariance matrix of the kernel is a diagonal matrix in which all diagonal elements are equal. In contrast, in the proposed method, the kernel shape can take an arbitrary anisotropic shape because the covariance matrix of the kernel can be an arbitrary matrix that satisfies the conditions of a covariance matrix. The practical number of samples that contribute to the estimation can be maximized by adjusting the kernel shape to the distribution of the samples.

4.6. Discussion

The numerical experiments in two dimensions described in Section 4.2, Section 4.3 and Section 4.4 yield results for the major cluster and outlier model shown in Figure 2. The purpose of our numerical experiments is to confirm whether the estimators (the mean, covariance, and number of samples of the major cluster) in the proposed method are unbiased and consistent without a proper pre-set of the kernel bandwidth. If these estimators are consistent unbiased estimators, then the proposed method can accurately estimate the mean, covariance, and number of samples of the major cluster. We chose the two-dimensional numerical experiment to observe the dynamic changes of the kernel more intuitively during the iteration; the iteration process is shown in Figure 3. In the numerical experiments described herein, the major cluster follows a Gaussian distribution. If the proposed method performs well on other distributions, then its scope of application can be expected to expand. We discuss the scope of application of the proposed method in two respects, as presented below.
In the one-dimensional signal processing field, the assumption of normality is not regarded as such a strong assumption. Yokota and Ye [19] proposed the radical root, or r-th root, transform of the power spectrum series that follows the chi-square distribution, such that the transformed series follows a quasi-Gaussian distribution. Lotter and Vary [20] proposed a spectral amplitude estimator with a parametric super-Gaussian speech model for approximating the probability density distribution of real speech spectral amplitudes; the parametric super-Gaussian distribution can closely approximate the Rayleigh, Laplace, and Gamma distributions, among others. Ye and Yokota [21] applied the radical root transformation to super-Gaussian distributions and confirmed that a super-Gaussian distribution after the r-th radical root transformation can be quasi-Gaussian distributed. By the radical root transformation [21], the proposed method is applicable to major clusters that follow distributions other than a Gaussian distribution. However, for clustering in image processing and other multi-dimensional applications, a major cluster following a Gaussian distribution is truly a strong assumption.
Many methods other than the mean-shift exist to solve the problem addressed in this paper; they are discussed below. Under the normality assumption, Grubbs' test [22,23,24] and the Thompson tau test [25] are known as methods for testing whether the sample farthest from the sample mean is an outlier. These tests are applied sequentially from the samples outermost from the sample mean, but they are valid only when the number of outliers is at most a few; moreover, applying them to multi-dimensional data is not easy. If the outliers follow a Gaussian distribution and the number of clusters in which the outliers are distributed is known, then, by applying a Gaussian mixture model [26,27,28], the mean and covariance matrix of the major cluster can be estimated easily using the expectation-maximization (EM) algorithm [29,30]. However, such an assumption cannot be applied generally to outliers.
In fact, the selection of the kernel bandwidth is an issue that affects the result of the general mean-shift algorithm far more strongly than the choice of the kernel type; therefore, we considered only the Gaussian kernel here. However, many kernel functions are commonly used in addition to the Gaussian kernel, such as the Epanechnikov kernel, the uniform kernel, the quartic kernel, and the triweight kernel. Extending the derivation of this article to other kernel functions would make this research more comprehensive and general; such an extension is expected to be an important part of our future research.

5. Application

Consider a stochastic process $x(t)$; the short-time Fourier spectrum centered on time $t$ with a suitable window length is denoted as $X(t, f)$, where $f$ represents the frequency. Let $X_f(t) \equiv X(t, f)$ be denoted as the spectrum series when the frequency $f$ is fixed. For non-steady-state analysis of the stochastic process, the spectrogram $P(t, f) = |X(t, f)|^2$ denotes the power of the short-time Fourier spectrum $X(t, f)$. When the frequency $f$ is fixed, $P_f(t)$ is designated as the power spectrum series.
Yokota and Ye [19] proposed a power spectrum estimation method robust against sudden noise. The method uses the radical root transformation to a quasi-Gaussian distribution. The noise estimation algorithm proposed by Yokota and Ye [19] proceeds as follows:
(1) Obtain the power spectrogram $P(t, f)$ from the noisy signal. We chose a pulse code modulation (PCM) recording of a noisy signal that contains a certain amount of sudden noise for analysis and computed the spectrogram by short-time Fourier transformation with a Hamming window of 10 ms length and a 50% overlap between adjacent frames. Figure 10a presents an example of a noisy signal for analysis and the corresponding spectrogram.
(2) Perform the following process for each frequency $f$:
(2-1) Apply the radical root transformation to the power spectrum series $P_f(t)$ with the transformation parameter $r^* = 3.314$, thereby obtaining the new quasi-Gaussian distributed power spectrum series $P_f^{1/r^*}(t)$. Figure 10b portrays a histogram of the power spectrum series at $f = 512$ Hz before the transformation.
(2-2) Compute the mode value of the transformed power spectrum series $P_f^{1/r^*}(t)$ by kernel density estimation [5]. Then, take the mode value as the corresponding time average value $P_{noise}(f)$ of the noise power spectrum series. Figure 10c depicts a histogram of the transformed power spectrum series at $f = 512$ Hz, the kernel density estimate [5] with a proper kernel bandwidth, and the major cluster estimate obtained using our proposed method.
(2-3) Compute the time average value $P(f)$ of the noise power spectrum series from the time average value $P_{noise}(f)$ as
$$P(f) = \left( \frac{P_{noise}(f)}{\Gamma\left(\frac{r^* + 1}{r^*}\right)} \right)^{r^*}. \tag{35}$$
(3) Obtain $P(f)$ as the estimate of the noise power spectral density. A code sketch of steps (2-1)–(2-3) follows.
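Steps (2-1)–(2-3) for one frequency bin might be sketched as follows (our code, not that of [19]; `mode_estimator` is a placeholder for any mode-seeking routine, such as the proposed mean-shift, and the last line applies Equation (35) as reconstructed above):

```python
import math
import numpy as np

R_STAR = 3.314   # transformation parameter of Yokota and Ye [19]

def noise_floor(power_series, mode_estimator):
    """Estimate the noise power P(f) for one fixed frequency f from the
    power spectrum series P_f(t)."""
    transformed = np.asarray(power_series) ** (1.0 / R_STAR)  # (2-1) radical root
    p_noise = mode_estimator(transformed)                     # (2-2) sample mode
    return (p_noise / math.gamma((R_STAR + 1.0) / R_STAR)) ** R_STAR  # (2-3)
```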
In the noise estimation algorithm [19], the mode estimation accuracy directly affects the noise estimation result. As Figure 10c shows, the kernel density estimation [5] can be replaced by our proposed method for comparison. The proper pre-setting of the kernel bandwidth is also important in kernel density estimation [5]; it strongly influences the resulting estimate, similarly to the general mean-shift method. To illustrate its effects, we obtained the noise power spectrum series from the PCM recording shown in Figure 10a. Figure 11 portrays the relation between the kernel bandwidth and the kernel density estimate. The histogram shows the true density. The broken curve is under-smoothed: the bandwidth of 0.000001 is too small, and the curve includes many spurious data artifacts. The dotted curve is over-smoothed: the bandwidth of 0.0001 obscures much of the underlying structure. The solid curve, with a bandwidth of 0.00003, is regarded as optimally smoothed because its density estimate is close to the true density.
To assess the performance of the proposed method, the general mean-shift method, and kernel density estimation within the noise estimation algorithm [19], this study uses PCM recordings of air-conditioning noise with some sudden noise, as shown in Figure 10a, and without sudden noise, respectively, as test data and as the true value. The noisy signal data in the PCM recordings are not compressed, so no signal power is lost. Figure 12 presents comparison results for noise estimation using the proposed method and kernel density estimation, with the kernel bandwidth preset to 0.0001. As Figure 12 shows, when an inappropriate kernel bandwidth is set in advance, the noise estimate obtained using our proposed method still closely approximates the true noise, whereas the estimation accuracy of the kernel density estimate is not high.

6. Conclusions

The study described in this paper has addressed the problem of properly presetting the fixed search kernel in the general mean-shift method. To improve the estimation accuracy, a new mean-shift method was proposed in which the mean vector and covariance matrix of the major cluster are estimated at each iteration, and the kernel bandwidth and shape are then adjusted according to these estimates. In numerical experiments, we compared the estimation accuracy and precision of the proposed method with those of the general mean-shift method. The experimentally obtained results demonstrate that the estimation accuracy and precision of the proposed mean-shift are higher than those of the general mean-shift method. Moreover, the proposed mean-shift can effectively and correctly estimate the covariance matrix and the number of samples of the major cluster, neither of which can be estimated using the general mean-shift method.

Author Contributions

Y.T. and Y.Y. conceived of and designed the methodology and experiments. Y.T. wrote the manuscript. Y.Y. reviewed and edited the manuscript. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the Human Resource Development and R&D project on Manufacturing Technology for the Aerospace Industry: subsidy from Gifu Prefecture, Japan.

Conflicts of Interest

The authors declare that they have no conflict of interest related to this report or to the study it describes.

Appendix A. General Mean-Shift for a Multi-Dimensional Situation

Even when the target data are multi-dimensional, the method is fundamentally the same as in the one-dimensional case. The sample $x_n$, $n = 1, \ldots, N$, of $M$-dimensional column vectors includes the major cluster of $N_N$ points and a few outliers. The major cluster follows an $M$-dimensional Gaussian distribution with mean vector $\mu_N$ and covariance matrix $C_N$. Here, the mode of the major cluster is assumed not to be biased from the mean vector $\mu_N$ under the influence of the $N_O$ outlier points. The iteration process of the multi-dimensional mean-shift method is the following:
  1. Let the mean vector $\mu_x$ of the sample $x_n$, $n = 1, \ldots, N$, be the initial value of the mean estimator $\hat{\mu}_N$ of the major cluster:
     $$\hat{\mu}_N \leftarrow \mu_x. \tag{A1}$$
  2. Consider an $M$-dimensional Gaussian distribution $p(x; \mu_W, C_W)$ with mean vector $\mu_W$ and covariance matrix $C_W$ as the kernel function in the value direction. Here, the mean vector $\mu_W$ of the kernel function is given by the mean estimator of the major cluster:
     $$\mu_W \leftarrow \hat{\mu}_N. \tag{A2}$$
     In addition, the covariance matrix $C_W$ is assigned an appropriate size, as discussed in Section 2.2.
  3. The weight $a_n$, $n = 1, \ldots, N$, for each sample $x_n$ under such a Gaussian kernel is
     $$a_n = \frac{1}{A}\, p(x_n; \mu_W, C_W), \tag{A3}$$
     where
     $$A = \sum_{k=1}^{N} p(x_k; \mu_W, C_W). \tag{A4}$$
     We use this weight $a_n$ to calculate the sample mean vector $\mu_x$ of $x_n$, $n = 1, \ldots, N$, as
     $$\mu_x = \sum_{n=1}^{N} a_n x_n. \tag{A5}$$
  4. The value of the mean vector estimator $\hat{\mu}_N$ of the major cluster is updated using the following equation:
     $$\hat{\mu}_N \leftarrow \mu_x. \tag{A6}$$
  5. If the variation of the value of the mean vector estimator $\hat{\mu}_N$ is equal to or less than the predetermined fixed value, the update process is terminated. Otherwise, return to step 2 and repeat the iteration.

Appendix B. Proof of Equation (17)

Equation (16) can be rewritten as
$$E[C_x] = E\left[\frac{\frac{1}{N_N} \sum_{n=1}^{N_N} p(x_n; C_W)\, x_n^2}{\frac{1}{N_N} \sum_{k=1}^{N_N} p(x_k; C_W)}\right]. \tag{A7}$$
The denominator $\frac{1}{N_N} \sum_{k=1}^{N_N} p(x_k; C_W)$ and the numerator $\frac{1}{N_N} \sum_{n=1}^{N_N} p(x_n; C_W)\, x_n^2$ are both random variables. Obviously, if the standard deviation of the denominator is sufficiently small compared to its expected value, the denominator can be regarded as a simple variable rather than a random variable, and Equation (A7) can be approximated as
$$E[C_x] \approx \frac{E\left[\frac{1}{N_N} \sum_{n=1}^{N_N} p(x_n; C_W)\, x_n^2\right]}{E\left[\frac{1}{N_N} \sum_{k=1}^{N_N} p(x_k; C_W)\right]}, \tag{A8}$$
which is Equation (17). Hereafter, it is proved that the standard deviation of the denominator becomes arbitrarily small relative to its expected value as the number of samples $N_N \to \infty$.
Proof. 
The denominator on the right side of Equation (A7) has the form
$$y = \frac{1}{N_N} \sum_{n=1}^{N_N} x_n. \tag{A9}$$
Its expected value $E(y)$ and standard deviation $\sigma(y)$ are
$$E(y) = E(x_n) > 0, \tag{A10}$$
$$\sigma(y) = \frac{1}{\sqrt{N_N}}\, \sigma(x_n). \tag{A11}$$
If the number $N_N$ of samples is sufficiently large, i.e., $N_N \to \infty$, then $\sigma(y)$ relative to $E(y)$ converges to 0. Here, $p(x_n; C_W)$ is non-negative because it is a probability density. That is, because the random variable $x_n$ in Equation (A9) follows a probability density distribution $f(x)$ defined for $x \geq 0$, its expected value $E[x_n]$ is always positive. Regardless of the number of samples, $E[y] = E[x_n]$; hence, as $N_N \to \infty$, the standard deviation of the denominator becomes arbitrarily small relative to its expected value. □
The expected values and standard deviations for various probability density distributions $f(x)$ defined for $x \geq 0$ are presented in Table A1. The table shows that, for all the distributions listed, the standard deviation $\sigma(x_n)$ does not exceed the expected value $E(x_n)$ by more than an order of magnitude; the same probably holds for other probability density distributions not listed. Therefore, for a number of samples $N_N = 100$, the standard deviation $\sigma(y)$ is about one-tenth of the expected value $E(y)$. Practically speaking, Equation (A8), i.e., the approximation of Equation (A7), holds.
Table A1. Expected value and standard deviation of probability density distributions $f(x)$ defined for $x \geq 0$.

| $f(x)$ | Expectation | S.D. |
|---|---|---|
| gamma | $k\theta$ | $\sqrt{k}\,\theta$ |
| $\chi^2$ | $k$ | $\sqrt{2k}$ |
| exponential | $1/\lambda$ | $1/\lambda$ |
| Erlang | $k\mu$ | $\sqrt{k}\,\mu$ |
| Rayleigh | $\sigma\sqrt{\pi/2}$ | $\sigma\sqrt{2 - \pi/2}$ |
| log-normal | $e^{\mu + \sigma^2/2}$ | $e^{\mu + \sigma^2/2}\sqrt{e^{\sigma^2} - 1}$ |
| Pareto | $\dfrac{ab}{a-1}$ | $\dfrac{b}{a-1}\sqrt{\dfrac{a}{a-2}}$ |
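A quick Monte Carlo check of this argument (our sketch, assuming $\sigma_W = \sigma_N = 1$) shows that the relative spread of the denominator shrinks roughly as $1/\sqrt{N_N}$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_w, sigma_n = 1.0, 1.0

for n_n in (100, 1000, 10000):
    # 500 realizations of y = (1/N_N) sum_n p(x_n; sigma_w), x_n ~ N(0, sigma_n)
    trials = np.array([
        np.mean(np.exp(-0.5 * (rng.normal(0.0, sigma_n, n_n) / sigma_w) ** 2)
                / (np.sqrt(2.0 * np.pi) * sigma_w))
        for _ in range(500)
    ])
    # sigma(y)/E(y) should shrink by about sqrt(10) per tenfold increase of N_N
    print(n_n, trials.std() / trials.mean())
```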

Appendix C. Multi-Dimensional Mean-Shift with Updating Kernel

Appendix C.1. Derivation of Standard Deviation of a Major Cluster from the Sample

Here, we extend the derivation of the estimate of the standard deviation $\sigma_N$ in the one-dimensional case, given in Section 3.1, to the multi-dimensional case. The major cluster is assumed to follow a multi-dimensional ($M$-dimensional) normal distribution. Although the covariance matrix generally is not diagonal, the coordinate axes can be rotated by an appropriate orthogonal transformation so that the covariance matrix becomes diagonal. Furthermore, the coordinate axes are shifted so that the mean vector becomes the zero vector. In this section, we consider the variables $(x_1, \ldots, x_M)$ in such a transformed coordinate system. We let $x = (x_1, \ldots, x_M)^T$ and denote the standard deviation of each variable by $\sigma_N = (\sigma_{N,1}, \ldots, \sigma_{N,M})^T$. On the new coordinate axes, because the covariances are zero, the $M$-dimensional normal distribution is represented as a direct product of one-dimensional normal distributions of each variable:
$$p(x; \sigma_N) = \prod_{m=1}^{M} p(x_m; \sigma_{N,m}). \tag{A12}$$
The kernel function in the value direction is also assumed to be a Gaussian distribution with a zero mean vector and a diagonal covariance matrix. With the standard deviation of each variable given by $\sigma_W = (\sigma_{W,1}, \ldots, \sigma_{W,M})^T$, the Gaussian distribution of the kernel function is
$$p(x; \sigma_W) = \prod_{m=1}^{M} p(x_m; \sigma_{W,m}). \tag{A13}$$
Using this Gaussian kernel, the weight $a_n$, $n = 1, \ldots, N_N$, for the sample $x_n = (x_{1,n}, \ldots, x_{M,n})^T$, $n = 1, \ldots, N_N$, can be denoted as
$$a_n = \frac{1}{A}\, p(x_n; \sigma_W), \tag{A14}$$
where $A$ in the above equation is
$$A = \sum_{k=1}^{N_N} p(x_k; \sigma_W). \tag{A15}$$
The sample variance $\sigma_{x,m}^2$ weighted by $a_n$ is
$$\sigma_{x,m}^2 = \sum_{n=1}^{N_N} a_n x_{m,n}^2, \quad m = 1, \ldots, M. \tag{A16}$$
For the same reason as in the one-dimensional case, by substituting Equation (A14) into Equation (A16), the expected value of the sample variance $\sigma_{x,m}^2$ can be approximated as
$$E[\sigma_{x,m}^2] \approx \frac{1}{E[A]}\, E\left[\sum_{n=1}^{N_N} p(x_n; \sigma_W)\, x_{m,n}^2\right]. \tag{A17}$$
Applying Equation (A12) to Equation (A15) and using Equation (13), the expected value of $A$ is found to be
$$E[A] = E\left[\sum_{k=1}^{N_N} p(x_k; \sigma_W)\right] = \sum_{k=1}^{N_N} E[p(x_k; \sigma_W)] = N_N \prod_{j=1}^{M} \int_{-\infty}^{\infty} p(x_j; \sigma_{W,j})\, p(x_j; \sigma_{N,j})\, dx_j = \frac{N_N}{(2\pi)^{M/2}} \prod_{j=1}^{M} (\sigma_{W,j}^2 + \sigma_{N,j}^2)^{-1/2}, \tag{A18}$$
while using Equation (14), the remainder of Equation (A17) is
$$E\left[\sum_{n=1}^{N_N} p(x_n; \sigma_W)\, x_{m,n}^2\right] = \sum_{n=1}^{N_N} E[p(x_n; \sigma_W)\, x_{m,n}^2] = N_N \left(\int_{-\infty}^{\infty} x_m^2\, p(x_m; \sigma_{W,m})\, p(x_m; \sigma_{N,m})\, dx_m\right) \prod_{j \ne m} \int_{-\infty}^{\infty} p(x_j; \sigma_{W,j})\, p(x_j; \sigma_{N,j})\, dx_j = \frac{\sigma_{W,m}^2\, \sigma_{N,m}^2}{\sigma_{W,m}^2 + \sigma_{N,m}^2} \cdot \frac{N_N}{(2\pi)^{M/2}} \prod_{j=1}^{M} (\sigma_{W,j}^2 + \sigma_{N,j}^2)^{-1/2}. \tag{A19}$$
That is, according to Equations (A18) and (A19), Equation (A17) becomes
$$E[\sigma_{x,m}^2] = \frac{\sigma_{W,m}^2\, \sigma_{N,m}^2}{\sigma_{W,m}^2 + \sigma_{N,m}^2}. \tag{A20}$$
The equation above can be transformed to
$$\sigma_{N,m}^2 = \frac{\sigma_{W,m}^2\, E[\sigma_{x,m}^2]}{\sigma_{W,m}^2 - E[\sigma_{x,m}^2]}. \tag{A21}$$
The standard deviation $\sigma_{N,m}$ of the major cluster can therefore be estimated as
$$\hat{\sigma}_{N,m} = \sqrt{\frac{\sigma_{W,m}^2\, \sigma_{x,m}^2}{\sigma_{W,m}^2 - \sigma_{x,m}^2}} \tag{A22}$$
using the standard deviation $\sigma_{x,m}$ of the sample weighted by a Gaussian kernel with standard deviation $\sigma_W$. Furthermore, using Equation (A18), we can estimate the number $N_N$ of samples belonging to the major cluster as
$$\hat{N}_N = A\, (2\pi)^{M/2} \prod_{j=1}^{M} (\sigma_{W,j}^2 + \hat{\sigma}_{N,j}^2)^{1/2}. \tag{A23}$$
The standard deviation $\sigma_W$ of the Gaussian kernel is assigned adaptively as $r$ times the estimated standard deviation $\hat{\sigma}_N$ at each iteration. The appropriate value of the scale factor $r$ is discussed in relation to the numerical experiments.

Appendix C.2. Mean-Shift Method with Updating Kernel

  • The mean vector μ x and the covariance matrix C x of the whole samples are determined using the following equations:
    μ x = 1 N n = 1 N x n ,
    C x = 1 N n = 1 N ( x n - μ x ) ( x n - μ x ) T .
    The initial values of the mean vector μ ^ N and the covariance matrix C ^ N of the major cluster are assigned as
    μ ^ N μ x ,
    C ^ N C x .
  • Consider a multi-dimensional Gaussian distribution $p(\mathbf{x}; \boldsymbol{\mu}_W, \mathbf{C}_W)$ with mean vector $\boldsymbol{\mu}_W$ and covariance matrix $\mathbf{C}_W$ as the kernel function in the value direction. The mean vector and covariance matrix of the kernel function are determined as
    $$\boldsymbol{\mu}_W \leftarrow \hat{\boldsymbol{\mu}}_N,$$
    $$\mathbf{C}_W \leftarrow r^2\, \hat{\mathbf{C}}_N.$$
    The factor $r^2$ in the above equation arises because the covariance matrix is of the squared order of the standard deviation.
  • The weight $a_n$ for each sample $\mathbf{x}_n$ under this Gaussian kernel is calculated using Equations (A3) and (A4). The weighted mean vector $\boldsymbol{\mu}_x$ and covariance matrix $\mathbf{C}_x$ are determined using the following equations:
    $$\boldsymbol{\mu}_x = \sum_{n=1}^{N} a_n \mathbf{x}_n,$$
    $$\mathbf{C}_x = \sum_{n=1}^{N} a_n (\mathbf{x}_n - \boldsymbol{\mu}_x)(\mathbf{x}_n - \boldsymbol{\mu}_x)^T.$$
  • The estimate $\hat{\boldsymbol{\mu}}_N$ of the mean vector is updated using the following equation:
    $$\hat{\boldsymbol{\mu}}_N \leftarrow \boldsymbol{\mu}_x.$$
    Let
    $$\mathbf{C}_W = \mathbf{V}_W \boldsymbol{\Lambda}_W \mathbf{V}_W^T$$
    be the eigenvalue decomposition of the kernel covariance matrix $\mathbf{C}_W$, which is symmetric. The diagonal elements of the diagonal matrix $\boldsymbol{\Lambda}_W$ are the eigenvalues of $\mathbf{C}_W$; they represent the variances $\sigma_{W,1}^2, \ldots, \sigma_{W,M}^2$ along the directions given by the column vectors of the orthogonal matrix $\mathbf{V}_W$. In addition, the diagonal elements of
    $$\boldsymbol{\Lambda}_x = \mathbf{V}_W^T \mathbf{C}_x \mathbf{V}_W$$
    are the variances $\sigma_{x,1}^2, \ldots, \sigma_{x,M}^2$ of the sample covariance matrix $\mathbf{C}_x$ along those same directions. According to Equation (A20), we can estimate the number $N_N$ of samples belonging to the major cluster via the standard deviations $\hat{\sigma}_{N,1}, \ldots, \hat{\sigma}_{N,M}$, which are obtained from $\sigma_{W,1}^2, \ldots, \sigma_{W,M}^2$ and $\sigma_{x,1}^2, \ldots, \sigma_{x,M}^2$ in Equation (A19). Let $\hat{\boldsymbol{\Lambda}}_N$ be the diagonal matrix whose diagonal elements are the estimated variances $\hat{\sigma}_{N,1}^2, \ldots, \hat{\sigma}_{N,M}^2$. Using $\hat{\boldsymbol{\Lambda}}_N$, the covariance matrix $\hat{\mathbf{C}}_N$ is updated with the following equation:
    $$\hat{\mathbf{C}}_N \leftarrow \mathbf{V}_W \hat{\boldsymbol{\Lambda}}_N \mathbf{V}_W^T.$$
    The estimate $\hat{N}_N$ of the number of samples belonging to the major cluster is updated using the following equation:
    $$\hat{N}_N \leftarrow A\, (2\pi)^{M/2} \prod_{j=1}^{M} \left(\sigma_{W,j}^2 + \hat{\sigma}_{N,j}^2\right)^{1/2}.$$
  • If the changes in $\hat{\boldsymbol{\mu}}_N$, $\hat{\mathbf{C}}_N$, and $\hat{N}_N$ are all equal to or less than a predetermined threshold, the update process is terminated. Otherwise, return to the second step and repeat the iteration. A minimal code sketch of the whole procedure is given after this list.
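To make the procedure above concrete, the following is a minimal Python sketch of one possible implementation; it is not the authors' published code. The function name `major_cluster_mean_shift`, the tolerance `tol`, the iteration cap `max_iter`, and the small floor on the variance denominator are illustrative choices, and the weight computation simply evaluates a normalized multivariate Gaussian, which we assume corresponds to Equations (A3) and (A4).

```python
# Minimal sketch of the mean-shift with updating kernel (Appendix C.2).
# Illustrative only: names and the convergence test are our own choices.
import numpy as np

def major_cluster_mean_shift(x, r=2.0, tol=1e-6, max_iter=100):
    """Estimate mean vector, covariance matrix, and number of samples
    of the major cluster from an (N, M) array of samples."""
    N, M = x.shape
    # Step 1: initialize with the statistics of the whole sample set.
    mu_hat = x.mean(axis=0)
    C_hat = np.cov(x, rowvar=False, bias=True)  # (1/N) sum (x-mu)(x-mu)^T
    N_hat = float(N)

    for _ in range(max_iter):
        mu_prev, C_prev, N_prev = mu_hat, C_hat, N_hat

        # Step 2: Gaussian kernel with mu_W = mu_hat and C_W = r^2 C_hat.
        C_W = r ** 2 * C_hat
        d = x - mu_hat
        quad = np.einsum('ni,ij,nj->n', d, np.linalg.inv(C_W), d)
        p = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** M * np.linalg.det(C_W))
        A = p.sum()
        a = p / A  # weights a_n, summing to one

        # Step 3: weighted sample mean and covariance.
        mu_x = a @ x
        dx = x - mu_x
        C_x = (a[:, None] * dx).T @ dx

        # Step 4: update the estimates via the eigen-decomposition of C_W.
        mu_hat = mu_x
        lam_W, V_W = np.linalg.eigh(C_W)      # sigma_{W,j}^2 and V_W
        sig2_x = np.diag(V_W.T @ C_x @ V_W)   # sigma_{x,j}^2 along V_W
        # sigma_N^2 = sigma_W^2 sigma_x^2 / (sigma_W^2 - sigma_x^2);
        # the small floor keeps the denominator positive in this sketch.
        sig2_N = lam_W * sig2_x / np.maximum(lam_W - sig2_x, 1e-12)
        C_hat = V_W @ np.diag(sig2_N) @ V_W.T
        N_hat = A * (2 * np.pi) ** (M / 2) * np.sqrt(np.prod(lam_W + sig2_N))

        # Step 5: stop once all three estimates have essentially converged.
        if (np.linalg.norm(mu_hat - mu_prev) < tol
                and np.linalg.norm(C_hat - C_prev) < tol
                and abs(N_hat - N_prev) < tol * max(N_prev, 1.0)):
            break
    return mu_hat, C_hat, N_hat
```

As a quick check under these assumptions, a Gaussian major cluster contaminated by uniformly distributed outliers should be recovered approximately:

```python
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal([0.0, 0.0], [1.0, 0.5], size=(900, 2)),
                     rng.uniform(-10.0, 10.0, size=(100, 2))])
mu, C, n = major_cluster_mean_shift(samples, r=2.0)
# Expect mu near (0, 0), diag(C) near (1.0, 0.25), and n near 900.
```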

References

  1. Fukunaga, K.; Hostetler, L. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. Inf. Theory 1975, 21, 32–40.
  2. Cheng, Y. Mean shift, mode seeking, and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 790–799.
  3. Comaniciu, D.; Meer, P. Mean shift analysis and applications. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1197–1203.
  4. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619.
  5. Wand, M.P.; Jones, M.C. Kernel Smoothing; Springer: London, UK, 1995.
  6. Sheather, S.J.; Jones, M.C. A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. Ser. B 1991, 53, 683–690.
  7. Chen, S. Optimal bandwidth selection for kernel density functionals estimation. J. Probab. Stat. 2015, 70, 1–21.
  8. Slaoui, Y. Data-driven bandwidth selection for recursive kernel density estimators under double truncation. Sankhya B 2018, 80, 341–368.
  9. Comaniciu, D.; Ramesh, V.; Meer, P. The variable bandwidth mean shift and data-driven scale selection. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001; pp. 438–445.
  10. Okada, K.; Comaniciu, D.; Krishnan, A. Scale selection for anisotropic scale-space. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; pp. 594–601.
  11. Comaniciu, D. An algorithm for data-driven bandwidth selection. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 281–288.
  12. Li, X.; Hu, Z.; Wu, F. A note on the convergence of the mean shift. Pattern Recognit. 2007, 40, 1756–1762.
  13. Ghassabeh, Y.A.; Rudzicz, F. Modified mean shift algorithm. IET Image Process. 2018, 12, 2171–2177.
  14. Birchfield, S.T.; Rangarajan, S. Mean shift blob tracking with kernel histogram filtering and hypothesis testing. Pattern Recognit. Lett. 2005, 26, 605–614.
  15. Leichter, I. Mean shift trackers with cross-bin metrics. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 695–706.
  16. Vojir, T.; Noskova, J.; Matas, J. Robust scale-adaptive mean-shift for tracking. Pattern Recognit. Lett. 2014, 49, 250–258.
  17. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman & Hall: London, UK, 1986.
  18. Kawamura, Y.; Yokota, Y.; Matsumaru, N.; Shirai, K. 24 hours monitoring system of heart rate variability to predict septic shock. IEICE Tech. Rep. 2012, 112, 29–34.
  19. Yokota, Y.; Ye, T. Quasi-Gaussian distributed power spectrum series by radical root transform and application to robust power spectrum density estimation against sudden noise. IEICE Trans. Fundam. (Jpn. Ed.) 2016, 3, 149–158.
  20. Lotter, T.; Vary, P. Noise reduction by maximum a posteriori spectral amplitude estimation with super-Gaussian speech modelling. In Proceedings of the International Workshop on Acoustic Echo and Noise Control, Kyoto, Japan, 8 September 2003; pp. 83–88.
  21. Ye, T.; Yokota, Y. Noise estimation for speech enhancement based on quasi-Gaussian distributed power spectrum series by radical root transformation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2017, 6, 1306–1314.
  22. Grubbs, F.E. Sample criteria for testing outlying observations. Ann. Math. Stat. 1950, 21, 27–58.
  23. Grubbs, F.E.; Beck, G. Extension of sample sizes and percentage points for significance tests of outlying observations. Technometrics 1972, 14, 847–854.
  24. Zeller, C.B.; Lachos, V.H.; Labra, F.V. Influence diagnostics for Grubbs’s model with asymmetric heavy-tailed distributions. Stat. Pap. 2014, 55, 671–690.
  25. Thompson, R. A note on restricted maximum likelihood estimation with an alternative outlier model. J. R. Stat. Soc. Ser. B (Methodol.) 1985, 47, 53–55.
  26. Rasmussen, C.E. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12; MIT Press: Cambridge, MA, USA, 2000; pp. 554–560.
  27. Blomer, J.; Bujna, K. Adaptive seeding for Gaussian mixture models. In Proceedings of the 20th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2016), Auckland, New Zealand, 19–22 April 2016; Volume 9652, pp. 296–308.
  28. Viroli, C.; McLachlan, G.J. Deep Gaussian mixture models. Stat. Comput. 2017, 29, 43–51.
  29. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–38.
  30. Melnykov, V.; Melnykov, I. Initializing the EM algorithm in Gaussian mixture models with an unknown number of components. Comput. Stat. Data Anal. 2012, 56, 1381–1395.
Figure 1. Bias error and estimation variance for various fixed kernel bandwidths $\sigma^2$ in the general mean-shift method.
Figure 2. Example of a sample set for numerical experiments.
Figure 3. Updates of the estimated major cluster.
Figure 4. Bias errors for various initial kernel bandwidths $\sigma^2$ in the proposed method and the general mean-shift method.
Figure 5. Bias errors for various numbers $N$ of samples in the proposed method.
Figure 6. Bias errors for various scale factors $r$ in the proposed method.
Figure 7. Variance of the estimates $\hat{\boldsymbol{\mu}}_N$, $\hat{\mathbf{C}}_N$, $\hat{N}$ for various numbers $N$ of samples in the proposed method and the general mean-shift method.
Figure 8. Variance of the estimates $\hat{\boldsymbol{\mu}}_N$, $\hat{\mathbf{C}}_N$, $\hat{N}$ for various scale factors $r$ in the proposed method.
Figure 9. Variance of the estimate $\hat{\boldsymbol{\mu}}_N$ for various kernel bandwidths $\sigma^2$ in the general mean-shift method.
Figure 10. (a) Example of a noisy signal for analysis and the corresponding spectrogram; (b) histogram of the power spectrum series of the noisy signal at $f = 512$ Hz; (c) histogram of the transformed power spectrum series of the noisy signal at $f = 512$ Hz.
Figure 11. Relation between kernel bandwidth and kernel density estimation.
Figure 12. Comparison of the proposed method to kernel estimation for noise estimation.
