Article

Robust Classification via Finite Mixtures of Matrix Variate Skew-t Distributions

by Abbas Mahdavi 1, Narayanaswamy Balakrishnan 2,* and Ahad Jamalizadeh 3

1 Department of Statistics, Vali-e-Asr University of Rafsanjan, Rafsanjan 7718897111, Iran
2 Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4K1, Canada
3 Department of Statistics, Faculty of Mathematics & Computer, Shahid Bahonar University of Kerman, Kerman 7616914111, Iran
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(20), 3260; https://doi.org/10.3390/math12203260
Submission received: 11 September 2024 / Revised: 10 October 2024 / Accepted: 15 October 2024 / Published: 17 October 2024
(This article belongs to the Special Issue New Advances and Applications in Image Processing and Computer Vision)

Abstract: Analysis of matrix variate data is becoming increasingly common in the literature, particularly in the field of clustering and classification. It is well known that real data, including real matrix variate data, often exhibit high levels of asymmetry. To address this issue, one common approach is to introduce a tail or skewness parameter to a symmetric distribution. In this regard, we introduce here a new distribution, called the matrix variate skew-t (MVST) distribution, which provides flexibility in terms of heavy tails and skewness. We then conduct a thorough investigation of various characterizations and probabilistic properties of the MVST distribution. We also explore extensions of this distribution to a finite mixture model. To estimate the parameters of the MVST distribution, we develop an EM-type algorithm that computes maximum likelihood (ML) estimates of the model parameters. To validate the effectiveness and usefulness of the developed models and associated methods, we performed empirical experiments using simulated data as well as three real data examples, including an application in skin cancer detection. Our results demonstrate the efficacy of the developed approach in handling asymmetric matrix variate data.

1. Introduction

The advent of modern data-collection technologies, such as electronic sensors, cell phones and web browsers, has resulted in an abundance of multivariate data sources. Many of these data can be represented as matrix variate (three-way) data, with two ways associated with the row and column dimensions of each matrix variate observation and the third one representing subjects (see [1]). Matrix data occur in different application domains, such as spatial multivariate data, longitudinal data on multiple response variables and spatio-temporal data. For this reason, statistical methods that can effectively utilize three-way data have become increasingly popular. The matrix variate normal (MVN) distribution is one of the most commonly used matrix variate elliptical distributions. However, for many real phenomena, the tails of the MVN distribution are lighter than required, with a direct impact on the corresponding model. In particular, heavy-tailed distributions, such as the slash and t distributions, are essential in robust statistical analysis. The matrix variate t (MVT) distribution was introduced in [2], where some of its distributional properties were also studied.
Flexibility and robustness are often lacking in symmetric models when dealing with highly asymmetric data. To address this issue, a recognized method is to add a tail or skewness parameter to a symmetric distribution. Several formulations have been discussed in the literature in the form of continuous mixtures of normal variables, where a mixing variable operates on the mean, on the variance, or on both the mean and the variance of a multivariate normal variable. A general formulation was presented in [3], which encompasses a large number of existing constructions involving continuous mixtures of normal variables. Given a real-valued function $r(u,w)$ and a positive-valued function $s(u,w)$, a generalized mixture of a $p$-variate normal distribution is given by

$$\mathbf{Y} \stackrel{d}{=} \boldsymbol{\xi} + r(U,W)\,\boldsymbol{\gamma} + s(U,W)\,\mathbf{X}, \tag{1}$$

where $\stackrel{d}{=}$ denotes equality in distribution, $\boldsymbol{\xi} \in \mathbb{R}^{p}$, $\boldsymbol{\gamma} \in \mathbb{R}^{p}$, $\mathbf{X} \sim N_{p}(\mathbf{0},\boldsymbol{\Sigma})$, and $U$ and $W$ are univariate random variables, with $(\mathbf{X},U,W)$ being mutually independent.
The representation in (1) can be extended to the matrix variate case as
$$\mathbf{Y} \stackrel{d}{=} \mathbf{M} + r(U,W)\,\boldsymbol{\Lambda} + s(U,W)\,\mathbf{Z}, \tag{2}$$

where $\mathbf{M}$ and $\boldsymbol{\Lambda}$ are $n \times p$ matrices representing the location and skewness, respectively, $\mathbf{Z} \sim N_{n\times p}(\mathbf{0},\boldsymbol{\Sigma},\boldsymbol{\Psi})$, and $(\mathbf{Z},U,W)$ are mutually independent. It is worth noting that the univariate nature of the functions $r(u,w)$ and $s(u,w)$ simplifies the stochastic representation in (2), leading to more tractable properties for $\mathbf{Y}$, and it also facilitates easier parameter estimation. Furthermore, after rearranging into a vector (denoted by $\mathrm{Vec}(\mathbf{Y})$), the representation in (2) takes the form of (1):

$$\mathrm{Vec}(\mathbf{Y}) \stackrel{d}{=} \mathrm{Vec}(\mathbf{M}) + r(U,W)\,\mathrm{Vec}(\boldsymbol{\Lambda}) + s(U,W)\,\mathrm{Vec}(\mathbf{Z}). \tag{3}$$
In this work, we introduce and study in detail finite mixtures of a new simple matrix variate skew-t (FM-MVST) distribution, based on (2), for dealing with clustering and classification of asymmetric and heavy-tailed matrix variate data. The proposed model’s simplicity in both density function and stochastic representation leads to a convenient strategy for parameter estimation using the expectation–conditional maximization either (ECME; [4]) algorithm, which is a variant of the EM algorithm [5]. In addition, using simulated and real datasets, we show how the proposed EM algorithm can be implemented for determining the ML estimates of the model parameters for the finite mixture of the proposed model.
The rest of this paper is organized as follows. Section 2 discusses the relevant previous research. Section 3 presents the formulation of the MVST distribution and discusses how the ECME algorithm can be used for ML estimation of the model parameters. In Section 4, the finite mixture of MVST distributions is defined, and then the implementation of the EM algorithm for fitting the FM-MVST model is presented. The proposed methods are illustrated by two simulation studies in Section 5 and by the analysis of three real datasets in Section 6. Finally, some concluding remarks and possible avenues for future work are outlined in Section 7.

2. Related Studies

To address highly asymmetric data and to have more flexibility, the functions $r(u,w)$ and $s(u,w)$ in (2) can also be taken to be $p$-dimensional random variables. This approach, however, may introduce complexity in both the density function and the parameter estimation. For instance, ref. [6] extended the scale and shape mixtures of multivariate skew normal distributions to a matrix variate setting and studied special cases and their properties. Here, we concentrate on simplifying the proposed model and the associated estimation procedure by focusing on the univariate case of $r(u,w)$ and $s(u,w)$. However, it should be noted that the model proposed here is distinct from the model presented in [6], and it cannot be viewed as a special case of it.
Based on (2), various cases have been introduced. For example, ref. [7] introduced a matrix variate skew-t distribution using the following matrix variate normal variance–mean mixture representation:
$$\mathbf{Y} \stackrel{d}{=} \mathbf{M} + W\boldsymbol{\Lambda} + W^{1/2}\mathbf{Z}, \tag{4}$$

where $\mathbf{M}$ and $\boldsymbol{\Lambda}$ are $n \times p$ matrices representing the location and skewness, respectively, $\mathbf{Z} \sim N_{n\times p}(\mathbf{0},\boldsymbol{\Sigma},\boldsymbol{\Psi})$, and $W \sim IG(\nu/2,\nu/2)$, with $IG(\cdot)$ denoting the inverse gamma distribution. Herein, we denote the random variable with the representation in (4) by $\mathbf{Y} \sim \mathrm{MVSTIG}(\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda},\nu)$; its density function is given by

$$f_{\mathrm{MVSTIG}}(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda},\nu) = \frac{2\,(\nu/2)^{\nu/2}\exp\!\left\{\mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}(\mathbf{Y}-\mathbf{M})\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}^{\top}\right]\right\}}{(2\pi)^{np/2}\,|\boldsymbol{\Sigma}|^{p/2}\,|\boldsymbol{\Psi}|^{n/2}\,\Gamma(\nu/2)} \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})}\right)^{-\frac{\nu+np}{4}} K_{(\nu+np)/2}\!\left(\sqrt{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})\left[\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu\right]}\right), \quad \mathbf{Y} \in \mathbb{R}^{n\times p}, \tag{5}$$

where

$$\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi}) = \mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}(\mathbf{Y}-\mathbf{M})\boldsymbol{\Psi}^{-1}(\mathbf{Y}-\mathbf{M})^{\top}\right], \qquad \rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda}) = \mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}\boldsymbol{\Lambda}\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}^{\top}\right],$$

and $K_{x}$ is the modified Bessel function of the third kind with index $x$. From (4), some other matrix variate skew distributions can be obtained by assuming different distributions for $W$. For more details, one may refer to [8].
Ref. [9] introduced a new family of matrix variate distributions, based on the matrix variate mean mixture of normal (MVMMN) distributions, as 
$$\mathbf{Y} \stackrel{d}{=} \mathbf{M} + W\boldsymbol{\Lambda} + \mathbf{Z}. \tag{6}$$
Based on (6), three special cases, including the restricted matrix variate skew-normal (RMVSN), exponentiated MVMMN (MVMMNE) and mixed-Weibull MVMMN (MVMMNW) distributions, have been studied by using half-normal, exponential and Weibull distributions for W, respectively. Several other skew matrix variate distributions have also been discussed in the literature; see [10,11,12].
One common statistical challenge faced by researchers is identifying sub-populations or clusters within multivariate data. Recently, researchers have explored the use of finite-mixture models for matrix variate data in applications such as image analysis, genetics and neuroscience. These models offer a flexible framework for capturing complex patterns in the data and can provide insights into the underlying sub-populations or clusters; see [8,13,14,15,16,17,18].

3. Methodology

3.1. The Model

An $n \times p$ random matrix $\mathbf{Y}$ is said to have a matrix variate skew-t (MVST) distribution, with $n \times p$ location matrix $\mathbf{M}$, $n \times n$ and $p \times p$ scale matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Psi}$, $n \times p$ shape matrix $\boldsymbol{\Lambda}$ and flatness parameter $\nu$, if its probability density function (pdf) is
$$f_{\mathrm{MVST}}(\mathbf{Y};\boldsymbol{\theta}) = \frac{2\,(\nu/2)^{\nu/2}\,\Gamma\!\left(\frac{\nu+np}{2}\right)}{|\boldsymbol{\Sigma}|^{p/2}\,|\boldsymbol{\Psi}|^{n/2}\,(2\pi)^{np/2}\,\Gamma(\nu/2)\,\sqrt{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}} \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{2}\right)^{-\frac{\nu+np}{2}} \times T_{\nu+np}\!\left(\Delta(\mathbf{Y};\boldsymbol{\theta})\sqrt{\frac{\nu+np}{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}}\right), \quad \mathbf{Y} \in \mathbb{R}^{n\times p}, \tag{7}$$

where $\boldsymbol{\theta} = (\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda},\nu)$ denotes all the model parameters, $\Delta(\mathbf{Y};\boldsymbol{\theta}) = \mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}(\mathbf{Y}-\mathbf{M})\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}^{\top}\right]\big/\sqrt{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}$, and $T_{\nu}(\cdot)$ denotes the cumulative distribution function (cdf) of the Student's $t$ distribution with $\nu$ degrees of freedom. The MVST distribution reduces to the RMVSN distribution [9] when $\nu \to \infty$.
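To make the notation concrete, the following R sketch (our illustration, not the authors' published code) evaluates the density in (7) directly from the quantities $\delta$, $\rho$ and $\Delta$ defined above; the function name dmvst and the argument layout are our own choices.

```r
## Minimal sketch: MVST density (7). Y, M, Lambda are n x p; Sigma is n x n;
## Psi is p x p; nu > 0. Computations are done on the log scale for stability.
dmvst <- function(Y, M, Sigma, Psi, Lambda, nu, log = FALSE) {
  n <- nrow(Y); p <- ncol(Y)
  Sinv <- solve(Sigma); Pinv <- solve(Psi); R <- Y - M
  delta <- sum(diag(Sinv %*% R %*% Pinv %*% t(R)))            # delta(Y; M, Sigma, Psi)
  rho   <- sum(diag(Sinv %*% Lambda %*% Pinv %*% t(Lambda)))  # rho(Sigma, Psi, Lambda)
  Delta <- sum(diag(Sinv %*% R %*% Pinv %*% t(Lambda))) / sqrt(rho + 1)
  q <- delta + nu - Delta^2
  logc <- log(2) + (nu / 2) * log(nu / 2) + lgamma((nu + n * p) / 2) -
    (p / 2) * as.numeric(determinant(Sigma)$modulus) -
    (n / 2) * as.numeric(determinant(Psi)$modulus) -
    (n * p / 2) * log(2 * pi) - lgamma(nu / 2) - 0.5 * log(rho + 1)
  logf <- logc - ((nu + n * p) / 2) * log(q / 2) +
    pt(Delta * sqrt((nu + n * p) / q), df = nu + n * p, log.p = TRUE)
  if (log) logf else exp(logf)
}
```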
Moreover, the MVST distribution possesses the stochastic representation
$$\mathbf{Y} \stackrel{d}{=} \mathbf{M} + W^{-1/2}\left(U\boldsymbol{\Lambda} + \mathbf{Z}\right), \tag{8}$$

where $\mathbf{Z} \sim N_{n\times p}(\mathbf{0},\boldsymbol{\Sigma},\boldsymbol{\Psi})$, $W \sim \Gamma(\nu/2,\nu/2)$ and $U \sim TN(0,1)\,\mathbb{I}_{(0,\infty)}$. Herein, $TN(\mu,\sigma^{2})\,\mathbb{I}_{A}$ represents a doubly truncated normal distribution defined on the interval $A = \{a_{1} < x < a_{2}\}$, and $\mathbb{I}_{A}$ denotes the indicator function of the set $A$. It is important to highlight that the MVSTIG distribution, represented by (4), utilizes a single mixing random variable. In contrast, the proposed MVST distribution, represented by (8), incorporates two mixing random variables, $W$ and $U$, which can significantly increase the model's flexibility.
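The representation in (8) makes random generation straightforward. Below is a small R sketch under our reading of (8), with the scaling $W^{-1/2}$ applied to both the skewing term and the Gaussian term; rmvst is our own (hypothetical) function name.

```r
## Sketch: draw one Y from (8). W ~ Gamma(nu/2, nu/2), U ~ TN(0,1)I(0,Inf),
## and Z ~ N_{n x p}(0, Sigma, Psi) via Vec(Z) ~ N_{np}(0, Psi kron Sigma).
rmvst <- function(M, Sigma, Psi, Lambda, nu) {
  n <- nrow(M); p <- ncol(M)
  W <- rgamma(1, shape = nu / 2, rate = nu / 2)
  U <- abs(rnorm(1))                     # half-normal, i.e., TN(0, 1)I(0, Inf)
  Z <- matrix(t(chol(Psi %x% Sigma)) %*% rnorm(n * p), n, p)
  M + (U * Lambda + Z) / sqrt(W)
}
```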
From (8), it is easy to show that
$$E(\mathbf{Y}) = \mathbf{M} + \frac{\Gamma\!\left(\frac{\nu-1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}\sqrt{\frac{\nu}{\pi}}\,\boldsymbol{\Lambda}, \qquad \mathrm{Vec}(\mathbf{Y}) \sim rST_{np}\!\left(\mathrm{Vec}(\mathbf{M}),\,\boldsymbol{\Psi}\otimes\boldsymbol{\Sigma},\,\mathrm{Vec}(\boldsymbol{\Lambda}),\,\nu\right),$$

where $\otimes$ is the Kronecker product and $rST_{p}$ denotes the $p$-variate restricted skew-t distribution (see [19,20]).
The stochastic representation given in (8) not only facilitates random number generation, but also enables the implementation of the EM algorithm for determining the maximum likelihood (ML) estimates of the parameters of the MVST distribution. This leads to the hierarchical representation
$$\begin{aligned} \mathbf{Y} \mid (\gamma, w) &\sim N_{n\times p}\!\left(\mathbf{M}+\gamma\boldsymbol{\Lambda},\, w^{-1}\boldsymbol{\Sigma},\, \boldsymbol{\Psi}\right),\\ \gamma \mid w &\sim TN(0, w^{-1})\,\mathbb{I}_{(0,\infty)},\\ W &\sim \Gamma(\nu/2, \nu/2), \end{aligned} \tag{9}$$

where $\gamma = W^{-1/2}U$ and $W$ are treated as latent variables. Then, $(\mathbf{Y}, W, \gamma)$ has the joint pdf

$$f_{\mathbf{Y},W,\gamma}(\mathbf{Y}, w, \gamma) = 2\,w^{1/2}\,\phi_{n\times p}\!\left(\mathbf{Y};\mathbf{M}+\gamma\boldsymbol{\Lambda},\, w^{-1}\boldsymbol{\Sigma},\, \boldsymbol{\Psi}\right)\phi\!\left(w^{1/2}\gamma\right) g(w;\nu/2,\nu/2), \tag{10}$$

where $\phi(\cdot)$ and $\phi_{n\times p}(\cdot\,;\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})$ are the pdfs of $N(0,1)$ and $N_{n\times p}(\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})$, respectively, and $g(\cdot\,;\alpha,\beta)$ denotes the pdf of the gamma distribution with mean $\alpha/\beta$.
Integrating out W and γ , respectively, from (10), we obtain the joint pdfs
$$f_{\mathbf{Y},\gamma}(\mathbf{Y},\gamma) = \frac{2\,(\nu/2)^{\nu/2}\,\Gamma\!\left(\frac{\nu+np+1}{2}\right)}{|\boldsymbol{\Sigma}|^{p/2}\,|\boldsymbol{\Psi}|^{n/2}\,(2\pi)^{(np+1)/2}\,\Gamma(\nu/2)} \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi}) + \left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]\gamma^{2} - 2\,\eta(\mathbf{Y};\boldsymbol{\theta})\,\gamma + \nu}{2}\right)^{-\frac{\nu+np+1}{2}}, \tag{11}$$

where $\eta(\mathbf{Y};\boldsymbol{\theta}) = \mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}(\mathbf{Y}-\mathbf{M})\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}^{\top}\right]$, and

$$f_{\mathbf{Y},W}(\mathbf{Y},w) = \frac{2\,(\nu/2)^{\nu/2}}{|\boldsymbol{\Sigma}|^{p/2}\,|\boldsymbol{\Psi}|^{n/2}\,(2\pi)^{np/2}\,\Gamma(\nu/2)\,\sqrt{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}}\; w^{\frac{\nu+np}{2}-1} \exp\!\left\{-w\,\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{2}\right\} \Phi\!\left(w^{1/2}\,\Delta(\mathbf{Y};\boldsymbol{\theta})\right), \tag{12}$$

where $\Phi(\cdot)$ denotes the cdf of the standard normal distribution.
Dividing (10) by (11), we obtain
$$W \mid (\mathbf{Y},\gamma) \sim \Gamma\!\left(\frac{\nu+np+1}{2},\; \frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi}) + \left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]\gamma^{2} - 2\,\eta(\mathbf{Y};\boldsymbol{\theta})\,\gamma + \nu}{2}\right). \tag{13}$$
Additionally, dividing (10) by (12), we obtain
$$\gamma \mid (\mathbf{Y},w) \sim TN\!\left(\frac{\eta(\mathbf{Y};\boldsymbol{\theta})}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1},\; \frac{w^{-1}}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}\right)\mathbb{I}_{(0,\infty)}. \tag{14}$$
From (7) and (12), it is easy to see that
$$f(w \mid \mathbf{Y}) = C\, w^{\frac{\nu+np}{2}-1} \exp\!\left\{-w\,\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{2}\right\} \Phi\!\left(w^{1/2}\,\Delta(\mathbf{Y};\boldsymbol{\theta})\right), \tag{15}$$

where

$$C = \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{2}\right)^{\frac{\nu+np}{2}} \Bigg/ \left[\Gamma\!\left(\frac{\nu+np}{2}\right) T_{\nu+np}\!\left(\Delta(\mathbf{Y};\boldsymbol{\theta})\sqrt{\frac{\nu+np}{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}}\right)\right].$$
Furthermore, by using (7) and (11), it can be shown that
$$\gamma \mid \mathbf{Y} \sim Tt\!\left(\frac{\eta(\mathbf{Y};\boldsymbol{\theta})}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1},\; \frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{\left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right](\nu+np)},\; \nu+np\right)\mathbb{I}_{(0,\infty)},$$

where $Tt(\mu,\sigma^{2},\nu)\,\mathbb{I}_{A}$ represents a doubly truncated $t$ distribution with $\nu$ degrees of freedom defined on the interval $A = \{a_{1} < x < a_{2}\}$. From the conditional density in (15), we find
$$E(W \mid \mathbf{Y}) = C_{0}^{2}\, \frac{T_{\nu+np+2}\!\left(\Delta(\mathbf{Y};\boldsymbol{\theta})\, C_{2}\right)}{T_{\nu+np}\!\left(\Delta(\mathbf{Y};\boldsymbol{\theta})\, C_{0}\right)},$$

where

$$C_{0} = \sqrt{\frac{\nu+np}{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}}, \qquad C_{2} = \sqrt{\frac{\nu+np+2}{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}}.$$
Additionally, using the law of iterated expectations, we can obtain
$$E(\gamma W \mid \mathbf{Y}) = \frac{\eta(\mathbf{Y};\boldsymbol{\theta})}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}\, E(W \mid \mathbf{Y}) + \frac{1}{\sqrt{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1}}\, \zeta(\mathbf{Y})$$

and

$$E(\gamma^{2} W \mid \mathbf{Y}) = \frac{1}{\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1} + \frac{\eta^{2}(\mathbf{Y};\boldsymbol{\theta})}{\left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]^{2}}\, E(W \mid \mathbf{Y}) + \frac{\eta(\mathbf{Y};\boldsymbol{\theta})}{\left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]^{3/2}}\, \zeta(\mathbf{Y}),$$

where

$$\zeta(\mathbf{Y}) = \frac{\Gamma\!\left(\frac{\nu+np+1}{2}\right)}{\sqrt{2\pi}\;\Gamma\!\left(\frac{\nu+np}{2}\right)\, T_{\nu+np}\!\left(\Delta(\mathbf{Y};\boldsymbol{\theta})\, C_{0}\right)} \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu}{2}\right)^{-\frac{\nu+np+1}{2}} \left(\frac{\delta(\mathbf{Y};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu-\Delta^{2}(\mathbf{Y};\boldsymbol{\theta})}{2}\right)^{\frac{\nu+np}{2}}.$$
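The conditional expectations above are all in closed form, so the E step can be coded directly. The following R sketch (ours; estep_mvst is a hypothetical name) returns $E(W\mid\mathbf{Y})$, $E(\gamma W\mid\mathbf{Y})$ and $E(\gamma^{2}W\mid\mathbf{Y})$ for a single observation, with $\zeta(\mathbf{Y})$ computed on the log scale for numerical stability.

```r
## E-step conditional expectations for one observation Y (our sketch),
## following the closed forms above; d = np.
estep_mvst <- function(Y, M, Sigma, Psi, Lambda, nu) {
  n <- nrow(Y); p <- ncol(Y); d <- n * p
  Sinv <- solve(Sigma); Pinv <- solve(Psi); R <- Y - M
  delta <- sum(diag(Sinv %*% R %*% Pinv %*% t(R)))
  rho   <- sum(diag(Sinv %*% Lambda %*% Pinv %*% t(Lambda)))
  eta   <- sum(diag(Sinv %*% R %*% Pinv %*% t(Lambda)))
  Delta <- eta / sqrt(rho + 1)
  q  <- delta + nu - Delta^2
  C0 <- sqrt((nu + d) / q); C2 <- sqrt((nu + d + 2) / q)
  w_hat <- C0^2 * pt(Delta * C2, df = nu + d + 2) / pt(Delta * C0, df = nu + d)
  log_zeta <- lgamma((nu + d + 1) / 2) - lgamma((nu + d) / 2) -
    0.5 * log(2 * pi) - pt(Delta * C0, df = nu + d, log.p = TRUE) -
    ((nu + d + 1) / 2) * log((delta + nu) / 2) + ((nu + d) / 2) * log(q / 2)
  zeta <- exp(log_zeta)
  k1 <- eta / (rho + 1) * w_hat + zeta / sqrt(rho + 1)
  k2 <- 1 / (rho + 1) + (eta / (rho + 1))^2 * w_hat + eta * zeta / (rho + 1)^1.5
  list(w = w_hat, k1 = k1, k2 = k2)
}
```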

3.2. Parameter Estimation via the ECME Algorithm

Suppose $\mathbf{Y} = (\mathbf{Y}_{1}, \ldots, \mathbf{Y}_{N})$ constitutes a set of $n \times p$-dimensional observed samples of size $N$ arising from the MVST model. In the EM framework, the latent variables are $\mathbf{w} = (w_{1}, \ldots, w_{N})$ and $\boldsymbol{\gamma} = (\gamma_{1}, \ldots, \gamma_{N})$. With these, the complete data are given by $\mathbf{Y}_{c} = (\mathbf{Y}, \mathbf{w}, \boldsymbol{\gamma})$.
According to (10), the log likelihood function of θ corresponding to the complete data Y c , excluding additive constants and terms that do not involve parameters of the model, is given by
$$\ell_{c}(\boldsymbol{\theta} \mid \mathbf{Y}_{c}) = \frac{1}{2}\sum_{i=1}^{N}\Big\{ \nu\log\frac{\nu}{2} - 2\log\Gamma\!\left(\frac{\nu}{2}\right) - p\log|\boldsymbol{\Sigma}| - n\log|\boldsymbol{\Psi}| + 2\,\eta(\mathbf{Y}_{i};\boldsymbol{\theta})\,\gamma_{i} w_{i} - \left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]\gamma_{i}^{2} w_{i} - \left[\delta(\mathbf{Y}_{i};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu\right] w_{i} + (\nu+np-1)\log w_{i} \Big\}. \tag{21}$$
In the $k$th iteration, the E step requires the calculation of the so-called $Q$ function, which is the conditional expectation of (21), given the observed data $\mathbf{Y}$ and the current estimate $\widehat{\boldsymbol{\theta}}^{(k)}$, where the superscript $(k)$ denotes the updated estimates at the $k$th step of the iterative process. To evaluate the $Q$ function, we then need the following conditional expectations:

$$\widehat{w}_{i}^{(k)} = E\!\left(W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}^{(k)}\right), \qquad \widehat{\kappa}_{1i}^{(k)} = E\!\left(\gamma_{i} W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}^{(k)}\right), \qquad \widehat{\kappa}_{2i}^{(k)} = E\!\left(\gamma_{i}^{2} W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}^{(k)}\right), \tag{22}$$

which have explicit expressions, as given earlier, but also the expectation

$$\widehat{\kappa}_{3i}^{(k)} = E\!\left(\log W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}^{(k)}\right), \tag{23}$$
which is difficult to evaluate explicitly. We therefore adopt the ECME algorithm, which replaces the corresponding conditional maximization of the $Q$ function (CMQ) step with a conditional maximization of the actual log likelihood (CML) step, thereby avoiding the computation of the expectation in (23).
Substituting (22) and (23) into (21), we obtain the following expression for the Q function:   
$$Q(\boldsymbol{\theta} \mid \widehat{\boldsymbol{\theta}}^{(k)}) = \frac{1}{2}\sum_{i=1}^{N}\Big\{ \nu\log\frac{\nu}{2} - 2\log\Gamma\!\left(\frac{\nu}{2}\right) - p\log|\boldsymbol{\Sigma}| - n\log|\boldsymbol{\Psi}| + 2\,\eta(\mathbf{Y}_{i};\boldsymbol{\theta})\,\widehat{\kappa}_{1i}^{(k)} - \left[\rho(\boldsymbol{\Sigma},\boldsymbol{\Psi},\boldsymbol{\Lambda})+1\right]\widehat{\kappa}_{2i}^{(k)} - \left[\delta(\mathbf{Y}_{i};\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Psi})+\nu\right]\widehat{w}_{i}^{(k)} + (\nu+np-1)\,\widehat{\kappa}_{3i}^{(k)} \Big\}. \tag{24}$$
The CMQ steps are implemented to update the estimates of $\boldsymbol{\theta}$ in the order of $\mathbf{M}$, $\boldsymbol{\Sigma}$, $\boldsymbol{\Psi}$, $\boldsymbol{\Lambda}$ and $\nu$ by maximizing, one by one, the $Q$ function obtained in the E step. After some algebraic manipulations, they can be summarized in the following CMQ and CML steps:
  • CMQ step 1: Fixing $\boldsymbol{\Lambda} = \widehat{\boldsymbol{\Lambda}}^{(k)}$, we update $\widehat{\mathbf{M}}^{(k)}$ by maximizing (24) with respect to $\mathbf{M}$, leading to

$$\widehat{\mathbf{M}}^{(k+1)} = \frac{\sum_{i=1}^{N} \widehat{w}_{i}^{(k)}\, \mathbf{Y}_{i} - \widehat{\boldsymbol{\Lambda}}^{(k)} \sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)}}{\sum_{i=1}^{N} \widehat{w}_{i}^{(k)}};$$
  • CMQ step 2: Fixing $\mathbf{M} = \widehat{\mathbf{M}}^{(k+1)}$, $\boldsymbol{\Psi} = \widehat{\boldsymbol{\Psi}}^{(k)}$ and $\boldsymbol{\Lambda} = \widehat{\boldsymbol{\Lambda}}^{(k)}$, we then update $\widehat{\boldsymbol{\Sigma}}^{(k)}$ by maximizing (24) over $\boldsymbol{\Sigma}$, yielding

$$\widehat{\boldsymbol{\Sigma}}^{(k+1)} = \frac{1}{Np}\Bigg\{ \sum_{i=1}^{N} \widehat{w}_{i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right) \widehat{\boldsymbol{\Psi}}^{-1(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right)^{\top} + \widehat{\boldsymbol{\Lambda}}^{(k)}\, \widehat{\boldsymbol{\Psi}}^{-1(k)}\, \widehat{\boldsymbol{\Lambda}}^{(k)\top} \sum_{i=1}^{N} \widehat{\kappa}_{2i}^{(k)} - \sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right) \widehat{\boldsymbol{\Psi}}^{-1(k)}\, \widehat{\boldsymbol{\Lambda}}^{(k)\top} - \widehat{\boldsymbol{\Lambda}}^{(k)}\, \widehat{\boldsymbol{\Psi}}^{-1(k)} \sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right)^{\top} \Bigg\};$$
  • CMQ step 3: Fixing $\mathbf{M} = \widehat{\mathbf{M}}^{(k+1)}$, $\boldsymbol{\Sigma} = \widehat{\boldsymbol{\Sigma}}^{(k+1)}$ and $\boldsymbol{\Lambda} = \widehat{\boldsymbol{\Lambda}}^{(k)}$, we update $\widehat{\boldsymbol{\Psi}}^{(k)}$ by maximizing (24) over $\boldsymbol{\Psi}$, yielding

$$\widehat{\boldsymbol{\Psi}}^{(k+1)} = \frac{1}{Nn}\Bigg\{ \sum_{i=1}^{N} \widehat{w}_{i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right)^{\top} \widehat{\boldsymbol{\Sigma}}^{-1(k+1)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right) + \widehat{\boldsymbol{\Lambda}}^{(k)\top}\, \widehat{\boldsymbol{\Sigma}}^{-1(k+1)}\, \widehat{\boldsymbol{\Lambda}}^{(k)} \sum_{i=1}^{N} \widehat{\kappa}_{2i}^{(k)} - \sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right)^{\top} \widehat{\boldsymbol{\Sigma}}^{-1(k+1)}\, \widehat{\boldsymbol{\Lambda}}^{(k)} - \widehat{\boldsymbol{\Lambda}}^{(k)\top}\, \widehat{\boldsymbol{\Sigma}}^{-1(k+1)} \sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right) \Bigg\};$$
  • CMQ step 4: Fixing $\mathbf{M} = \widehat{\mathbf{M}}^{(k+1)}$, we obtain $\widehat{\boldsymbol{\Lambda}}^{(k+1)}$ by maximizing (24) over $\boldsymbol{\Lambda}$, yielding

$$\widehat{\boldsymbol{\Lambda}}^{(k+1)} = \frac{\sum_{i=1}^{N} \widehat{\kappa}_{1i}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}^{(k+1)}\right)}{\sum_{i=1}^{N} \widehat{\kappa}_{2i}^{(k)}}.$$
An update of $\widehat{\nu}^{(k)}$ can be achieved by directly maximizing the constrained actual log likelihood function. This gives rise to the following CML step (a compact sketch of the full sweep follows the list):

  • CML step: Update $\widehat{\nu}^{(k)}$ by optimizing the following constrained log likelihood function:

$$\widehat{\nu}^{(k+1)} = \arg\max_{\nu} \sum_{i=1}^{N} \log f_{\mathrm{MVST}}\!\left(\mathbf{Y}_{i};\, \widehat{\mathbf{M}}^{(k+1)}, \widehat{\boldsymbol{\Sigma}}^{(k+1)}, \widehat{\boldsymbol{\Psi}}^{(k+1)}, \widehat{\boldsymbol{\Lambda}}^{(k+1)}, \nu\right).$$
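As referenced above, the following R sketch strings the CMQ and CML steps together into one ECME sweep for a single MVST component. It builds on the hypothetical dmvst() and estep_mvst() helpers sketched earlier, and the search interval for $\nu$ is an arbitrary choice of ours.

```r
## One ECME sweep for a single MVST component (CMQ steps 1-4 + CML step).
## 'Ys' is a list of n x p matrices; 'th' is list(M, Sigma, Psi, Lambda, nu).
ecme_sweep <- function(Ys, th) {
  N <- length(Ys)
  E  <- lapply(Ys, function(Y) estep_mvst(Y, th$M, th$Sigma, th$Psi, th$Lambda, th$nu))
  w  <- sapply(E, `[[`, "w"); k1 <- sapply(E, `[[`, "k1"); k2 <- sapply(E, `[[`, "k2")
  ## CMQ step 1: location matrix (Lambda held at its current value)
  M <- (Reduce(`+`, Map(`*`, w, Ys)) - th$Lambda * sum(k1)) / sum(w)
  ## CMQ step 2: row scale matrix Sigma (Psi, Lambda held fixed)
  Pinv <- solve(th$Psi); LP <- th$Lambda %*% Pinv
  S <- Reduce(`+`, Map(function(Y, wi, k1i) {
    R <- Y - M
    wi * R %*% Pinv %*% t(R) - k1i * (R %*% t(LP) + LP %*% t(R))
  }, Ys, w, k1))
  Sigma <- (S + sum(k2) * LP %*% t(th$Lambda)) / (N * ncol(M))
  ## CMQ step 3: column scale matrix Psi, using the updated Sigma
  Sinv <- solve(Sigma); LS <- t(th$Lambda) %*% Sinv
  P <- Reduce(`+`, Map(function(Y, wi, k1i) {
    R <- Y - M
    wi * t(R) %*% Sinv %*% R - k1i * (t(R) %*% t(LS) + LS %*% R)
  }, Ys, w, k1))
  Psi <- (P + sum(k2) * LS %*% th$Lambda) / (N * nrow(M))
  ## CMQ step 4: skewness matrix
  Lambda <- Reduce(`+`, Map(function(Y, k1i) k1i * (Y - M), Ys, k1)) / sum(k2)
  ## CML step: 1-D search for nu on the actual log likelihood
  nu <- optimize(function(v) sum(sapply(Ys, dmvst, M = M, Sigma = Sigma,
                                        Psi = Psi, Lambda = Lambda,
                                        nu = v, log = TRUE)),
                 interval = c(0.5, 200), maximum = TRUE)$maximum
  list(M = M, Sigma = Sigma, Psi = Psi, Lambda = Lambda, nu = nu)
}
```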

4. Fitting Finite Mixtures of MVST Distributions

4.1. The Model

We consider $N$ independent random variables $\mathbf{Y}_{1}, \ldots, \mathbf{Y}_{N}$ observed from a $G$-component mixture of MVST distributions, whose pdf is given by

$$f(\mathbf{Y}_{i};\boldsymbol{\Theta}) = \sum_{g=1}^{G} \pi_{g}\, f_{\mathrm{MVST}}(\mathbf{Y}_{i};\boldsymbol{\theta}_{g}),$$

where $0 \le \pi_{g} \le 1$, $\sum_{g=1}^{G} \pi_{g} = 1$ and $\boldsymbol{\Theta}$ is the set containing all the parameters of the considered mixture model. To pose this mixture model as an incomplete data problem, we introduce allocation variables $\mathbf{Z}_{i} = (Z_{i1}, \ldots, Z_{iG})$, where a particular element $Z_{ig}$ is equal to 1 if $\mathbf{Y}_{i}$ belongs to group $g$, and is equal to zero otherwise. Observe that $\mathbf{Z}_{i}$ is a multinomial random vector with one trial and cell probabilities $\pi_{1}, \ldots, \pi_{G}$, denoted by $\mathbf{Z}_{i} \sim \mathcal{M}(1; \pi_{1}, \ldots, \pi_{G})$. The hierarchical representation in (9), originally designed for the single distribution, can be extended to the mixture modeling framework as follows:

$$\begin{aligned} \mathbf{Y}_{i} \mid (\gamma_{i}, w_{i}, Z_{ig}=1) &\sim N_{n\times p}\!\left(\mathbf{M}_{g}+\gamma_{i}\boldsymbol{\Lambda}_{g},\, w_{i}^{-1}\boldsymbol{\Sigma}_{g},\, \boldsymbol{\Psi}_{g}\right),\\ \gamma_{i} \mid (w_{i}, Z_{ig}=1) &\sim TN(0, w_{i}^{-1})\,\mathbb{I}_{(0,\infty)},\\ W_{i} \mid (Z_{ig}=1) &\sim \Gamma(\nu_{g}/2, \nu_{g}/2),\\ \mathbf{Z}_{i} &\sim \mathcal{M}(1; \pi_{1}, \ldots, \pi_{G}). \end{aligned} \tag{25}$$
From the hierarchical structure in (25), on the basis of the observed data $\mathbf{Y} = (\mathbf{Y}_{1}, \ldots, \mathbf{Y}_{N})$ and the latent data $\mathbf{w} = (w_{1}, \ldots, w_{N})$, $\boldsymbol{\gamma} = (\gamma_{1}, \ldots, \gamma_{N})$ and $\mathbf{Z} = (\mathbf{Z}_{1}, \ldots, \mathbf{Z}_{N})$, and excluding additive constants, the complete data log likelihood function of $\boldsymbol{\Theta}$, based on the complete data $\mathbf{Y}_{c} = (\mathbf{Y}, \mathbf{w}, \boldsymbol{\gamma}, \mathbf{Z})$, is

$$\ell_{c}(\boldsymbol{\Theta} \mid \mathbf{Y}_{c}) = \frac{1}{2}\sum_{i=1}^{N}\sum_{g=1}^{G} Z_{ig}\Big\{ 2\log\pi_{g} + \nu_{g}\log\frac{\nu_{g}}{2} - 2\log\Gamma\!\left(\frac{\nu_{g}}{2}\right) - p\log|\boldsymbol{\Sigma}_{g}| - n\log|\boldsymbol{\Psi}_{g}| + 2\,\eta(\mathbf{Y}_{i};\boldsymbol{\theta}_{g})\,\gamma_{i} w_{i} - \left[\rho(\boldsymbol{\Sigma}_{g},\boldsymbol{\Psi}_{g},\boldsymbol{\Lambda}_{g})+1\right]\gamma_{i}^{2} w_{i} - \left[\delta(\mathbf{Y}_{i};\mathbf{M}_{g},\boldsymbol{\Sigma}_{g},\boldsymbol{\Psi}_{g})+\nu_{g}\right] w_{i} + (\nu_{g}+np-1)\log w_{i} \Big\}. \tag{26}$$
Starting the E step requires the expected value of (26), given the current parameter estimate $\widehat{\boldsymbol{\Theta}}^{(k)}$, which involves some conditional expectations, including

$$\begin{aligned} \widehat{z}_{ig}^{(k)} &= E\!\left(Z_{ig} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\Theta}}^{(k)}\right) = \frac{\widehat{\pi}_{g}^{(k)}\, f_{\mathrm{MVST}}(\mathbf{Y}_{i}; \widehat{\boldsymbol{\theta}}_{g}^{(k)})}{f(\mathbf{Y}_{i}; \widehat{\boldsymbol{\Theta}}^{(k)})},\\ \widehat{w}_{ig}^{(k)} &= E\!\left(W_{i} \mid \mathbf{Y}_{i}, Z_{ig}=1, \widehat{\boldsymbol{\Theta}}^{(k)}\right) = E\!\left(W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}_{g}^{(k)}\right),\\ \widehat{\kappa}_{1ig}^{(k)} &= E\!\left(\gamma_{i} W_{i} \mid \mathbf{Y}_{i}, Z_{ig}=1, \widehat{\boldsymbol{\Theta}}^{(k)}\right) = E\!\left(\gamma_{i} W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}_{g}^{(k)}\right),\\ \widehat{\kappa}_{2ig}^{(k)} &= E\!\left(\gamma_{i}^{2} W_{i} \mid \mathbf{Y}_{i}, Z_{ig}=1, \widehat{\boldsymbol{\Theta}}^{(k)}\right) = E\!\left(\gamma_{i}^{2} W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}_{g}^{(k)}\right) \end{aligned} \tag{27}$$
and
$$\widehat{\kappa}_{3ig}^{(k)} = E\!\left(\log W_{i} \mid \mathbf{Y}_{i}, Z_{ig}=1, \widehat{\boldsymbol{\Theta}}^{(k)}\right) = E\!\left(\log W_{i} \mid \mathbf{Y}_{i}, \widehat{\boldsymbol{\theta}}_{g}^{(k)}\right),$$
for which we utilize the CML step, as mentioned in the preceding section. Consequently, the conditional expectation of the complete data log likelihood is obtained as   
$$Q(\boldsymbol{\Theta} \mid \widehat{\boldsymbol{\Theta}}^{(k)}) = \frac{1}{2}\sum_{i=1}^{N}\sum_{g=1}^{G} \widehat{z}_{ig}^{(k)}\Big\{ 2\log\pi_{g} + \nu_{g}\log\frac{\nu_{g}}{2} - 2\log\Gamma\!\left(\frac{\nu_{g}}{2}\right) - p\log|\boldsymbol{\Sigma}_{g}| - n\log|\boldsymbol{\Psi}_{g}| + 2\,\eta(\mathbf{Y}_{i};\boldsymbol{\theta}_{g})\,\widehat{\kappa}_{1ig}^{(k)} - \left[\rho(\boldsymbol{\Sigma}_{g},\boldsymbol{\Psi}_{g},\boldsymbol{\Lambda}_{g})+1\right]\widehat{\kappa}_{2ig}^{(k)} - \left[\delta(\mathbf{Y}_{i};\mathbf{M}_{g},\boldsymbol{\Sigma}_{g},\boldsymbol{\Psi}_{g})+\nu_{g}\right]\widehat{w}_{ig}^{(k)} + (\nu_{g}+np-1)\,\widehat{\kappa}_{3ig}^{(k)} \Big\}.$$
Thus, the implementation of the ECME algorithm proceeds as follows:
  • E step: Given $\boldsymbol{\Theta} = \widehat{\boldsymbol{\Theta}}^{(k)}$, compute $\widehat{z}_{ig}^{(k)}$, $\widehat{w}_{ig}^{(k)}$, $\widehat{\kappa}_{1ig}^{(k)}$ and $\widehat{\kappa}_{2ig}^{(k)}$, given in (27), for $i = 1, \ldots, N$ and $g = 1, \ldots, G$;
  • CM step 1: Calculate

$$\widehat{\pi}_{g}^{(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)};$$
  • CM step 2: Update $\widehat{\mathbf{M}}_{g}^{(k)}$ as

$$\widehat{\mathbf{M}}_{g}^{(k+1)} = \frac{\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{w}_{ig}^{(k)}\, \mathbf{Y}_{i} - \widehat{\boldsymbol{\Lambda}}_{g}^{(k)} \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)}}{\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{w}_{ig}^{(k)}};$$
  • CM step 3: Update $\widehat{\boldsymbol{\Sigma}}_{g}^{(k)}$ as

$$\widehat{\boldsymbol{\Sigma}}_{g}^{(k+1)} = \frac{1}{p\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}}\Bigg\{ \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{w}_{ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right) \widehat{\boldsymbol{\Psi}}_{g}^{-1(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right)^{\top} + \widehat{\boldsymbol{\Lambda}}_{g}^{(k)}\, \widehat{\boldsymbol{\Psi}}_{g}^{-1(k)}\, \widehat{\boldsymbol{\Lambda}}_{g}^{(k)\top} \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{2ig}^{(k)} - \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right) \widehat{\boldsymbol{\Psi}}_{g}^{-1(k)}\, \widehat{\boldsymbol{\Lambda}}_{g}^{(k)\top} - \widehat{\boldsymbol{\Lambda}}_{g}^{(k)}\, \widehat{\boldsymbol{\Psi}}_{g}^{-1(k)} \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right)^{\top} \Bigg\};$$
  • CM step 4: Update $\widehat{\boldsymbol{\Psi}}_{g}^{(k)}$ as

$$\widehat{\boldsymbol{\Psi}}_{g}^{(k+1)} = \frac{1}{n\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}}\Bigg\{ \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{w}_{ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right)^{\top} \widehat{\boldsymbol{\Sigma}}_{g}^{-1(k+1)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right) + \widehat{\boldsymbol{\Lambda}}_{g}^{(k)\top}\, \widehat{\boldsymbol{\Sigma}}_{g}^{-1(k+1)}\, \widehat{\boldsymbol{\Lambda}}_{g}^{(k)} \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{2ig}^{(k)} - \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right)^{\top} \widehat{\boldsymbol{\Sigma}}_{g}^{-1(k+1)}\, \widehat{\boldsymbol{\Lambda}}_{g}^{(k)} - \widehat{\boldsymbol{\Lambda}}_{g}^{(k)\top}\, \widehat{\boldsymbol{\Sigma}}_{g}^{-1(k+1)} \sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right) \Bigg\};$$
  • CM step 5: Update $\widehat{\boldsymbol{\Lambda}}_{g}^{(k)}$ as

$$\widehat{\boldsymbol{\Lambda}}_{g}^{(k+1)} = \frac{\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{1ig}^{(k)} \left(\mathbf{Y}_{i}-\widehat{\mathbf{M}}_{g}^{(k+1)}\right)}{\sum_{i=1}^{N} \widehat{z}_{ig}^{(k)}\, \widehat{\kappa}_{2ig}^{(k)}};$$
  • CML step: Update $\widehat{\boldsymbol{\nu}}^{(k)} = (\widehat{\nu}_{1}^{(k)}, \ldots, \widehat{\nu}_{G}^{(k)})$ by optimizing the constrained log likelihood function as

$$\widehat{\boldsymbol{\nu}}^{(k+1)} = \arg\max_{\boldsymbol{\nu}} \sum_{i=1}^{N} \log \sum_{g=1}^{G} \widehat{\pi}_{g}^{(k+1)}\, f_{\mathrm{MVST}}\!\left(\mathbf{Y}_{i};\, \widehat{\mathbf{M}}_{g}^{(k+1)}, \widehat{\boldsymbol{\Sigma}}_{g}^{(k+1)}, \widehat{\boldsymbol{\Psi}}_{g}^{(k+1)}, \widehat{\boldsymbol{\Lambda}}_{g}^{(k+1)}, \nu_{g}\right).$$
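For the mixture E step, only the responsibilities $\widehat{z}_{ig}$ are new relative to the single-component case. A minimal R sketch (ours), reusing the hypothetical dmvst() from Section 3 and working on the log scale to avoid underflow:

```r
## Mixture E step: responsibilities z_ig for each observation and component.
## 'comps' is a list of component parameter lists; 'pi_g' the mixing weights.
estep_z <- function(Ys, comps, pi_g) {
  logd <- sapply(comps, function(th)
    sapply(Ys, dmvst, M = th$M, Sigma = th$Sigma, Psi = th$Psi,
           Lambda = th$Lambda, nu = th$nu, log = TRUE))  # N x G matrix
  logd <- sweep(logd, 2, log(pi_g), `+`)                 # add log mixing weights
  z <- exp(logd - apply(logd, 1, max))                   # log-sum-exp stabilization
  z / rowSums(z)
}
```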

4.2. Initialization

In order to speed up the convergence process, it is important to establish a set of reasonable starting values. To start the ECME algorithm for fitting the FM-MVST model, an intuitive scheme for partitioning the data into $G$ components $\{\mathbf{Y}_{g}^{(0)}\}_{g=1}^{G}$ is to create an initial partition of the vectorized data $\{\mathrm{Vec}(\mathbf{Y}_{i})\}_{i=1}^{N}$, using the K-means algorithm [21,22]. This yields a valid estimate of $\widehat{z}_{ig}^{(0)}$, which, in turn, yields $\widehat{\pi}_{g}^{(0)} = \sum_{i=1}^{N} \widehat{z}_{ig}^{(0)}/N$. Then, we compute the sample mean, the covariance matrix of the rows and the covariance matrix of the columns of $\mathbf{Y}_{g}^{(0)}$ as good initial estimates for $\widehat{\mathbf{M}}_{g}^{(0)}$, $\widehat{\boldsymbol{\Sigma}}_{g}^{(0)}$ and $\widehat{\boldsymbol{\Psi}}_{g}^{(0)}$, as follows:
$$\widehat{\mathbf{M}}_{g}^{(0)} = \frac{\sum_{i=1}^{N} \widehat{z}_{ig}^{(0)}\, \mathbf{Y}_{i}}{\sum_{i=1}^{N} \widehat{z}_{ig}^{(0)}}, \qquad \widehat{\boldsymbol{\Sigma}}_{g}^{(0)} = \frac{\sum_{i=1}^{N}\sum_{j=1}^{p} \widehat{z}_{ig}^{(0)} \left(\mathbf{y}_{ij}-\mathbf{m}_{gj}^{(0)}\right)\left(\mathbf{y}_{ij}-\mathbf{m}_{gj}^{(0)}\right)^{\top}}{p\sum_{i=1}^{N} \widehat{z}_{ig}^{(0)}}, \qquad \widehat{\boldsymbol{\Psi}}_{g}^{(0)} = \frac{\sum_{i=1}^{N}\sum_{r=1}^{n} \widehat{z}_{ig}^{(0)} \left(\mathbf{y}_{i\cdot r}-\mathbf{m}_{g\cdot r}^{(0)}\right)\left(\mathbf{y}_{i\cdot r}-\mathbf{m}_{g\cdot r}^{(0)}\right)^{\top}}{n\sum_{i=1}^{N} \widehat{z}_{ig}^{(0)}},$$
where $\mathbf{m}_{gj}^{(0)}$ and $\mathbf{m}_{g\cdot r}^{(0)}$ denote the $j$th column and $r$th row of $\mathbf{M}_{g}^{(0)}$, respectively, and $\mathbf{y}_{ij}$ and $\mathbf{y}_{i\cdot r}$ are the $j$th column and $r$th row of $\mathbf{Y}_{i}$, respectively. The initial component skewness matrices $\boldsymbol{\Lambda}_{g}^{(0)}$ are taken with entries randomly selected in the interval $(-1, 1)$. Finally, we initialize $\widehat{\nu}_{g}^{(0)}$ by taking it to be as small as 5 or 10.
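The scheme above translates directly into code. A small R sketch (ours; init_fmmvst is a hypothetical name), using hard K-means labels so that the moment formulas reduce to within-cluster averages:

```r
## Initialization: K-means on Vec(Y_i), then within-cluster moment estimates.
init_fmmvst <- function(Ys, G) {
  X  <- t(sapply(Ys, as.vector))               # N x (n*p), rows are Vec(Y_i)
  cl <- kmeans(X, centers = G, nstart = 10)$cluster
  lapply(seq_len(G), function(g) {
    Yg <- Ys[cl == g]
    M0 <- Reduce(`+`, Yg) / length(Yg)         # component sample mean
    S0 <- Reduce(`+`, lapply(Yg, function(Y) (Y - M0) %*% t(Y - M0))) /
      (ncol(M0) * length(Yg))                  # row (n x n) covariance
    P0 <- Reduce(`+`, lapply(Yg, function(Y) t(Y - M0) %*% (Y - M0))) /
      (nrow(M0) * length(Yg))                  # column (p x p) covariance
    list(pi = mean(cl == g), M = M0, Sigma = S0, Psi = P0,
         Lambda = matrix(runif(length(M0), -1, 1), nrow(M0)), nu = 5)
  })
}
```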

4.3. Identifiability

Model identifiability is key to securing unique and consistent estimates of model parameters. With regard to the mixtures of MVST distributions, the estimates of Σ g and Ψ g are only unique up to a strictly positive constant. To resolve this issue, a constraint needs to be placed, such as setting the trace of Σ g equal to n [13] or fixing | Σ g | = 1 [23]. Herein, we set the first diagonal element of Σ g as 1 [8]. This scaling procedure can be implemented at each iteration or at convergence, and either method has minimal impact on the final estimates and classifications achieved. To obtain the final parameter estimates, the resulting Σ g is divided by the first diagonal element of Σ g , and then Ψ g is multiplied by the first diagonal element of Σ g .
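As a concrete illustration, the rescaling at convergence amounts to one line per component; a minimal sketch (ours):

```r
## Fix Sigma_g[1, 1] = 1 and absorb the constant into Psi_g.
rescale_g <- function(Sigma, Psi) {
  c1 <- Sigma[1, 1]
  list(Sigma = Sigma / c1, Psi = Psi * c1)
}
```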

5. Empirical Study

5.1. Finite-Sample Properties of ML Estimators

Here, we conducted a simulation study to examine the accuracy of the parameter estimates obtained by the proposed ECME algorithm of Section 3.2. We generated 500 Monte Carlo samples of sizes $N = 250$, 500, 1000 and 2000 from the two-component FM-MVST model, under the two scenarios (low- and moderate-dimensional) described in Appendix A. Scenario I was characterized by matrices of size $3 \times 4$, so that each matrix consisted of 3 rows and 4 columns, for a total of 12 elements. Scenario II, on the other hand, generated matrices of size $10 \times 2$, composed of 10 rows and 2 columns, for a total of 20 elements per matrix. This increased number of elements was useful for examining how accurately the proposed algorithm recovers the true parameters in moderate-dimensional settings.
The accuracy of the obtained parameter estimates was assessed by the average of the root mean squared error (RMSE) of the elements of each estimated parameter. The results shown in Table 1 indicate the good performance of the proposed estimation method. Regardless of the considered scenario, it can be seen that the RMSE values all tended to zero with increasing sample size, indicating the satisfactory asymptotic properties of the ML estimates obtained by the proposed ECME algorithm.

5.2. Comparison of Classification Accuracy

To examine the classification accuracy of the FM-MVST model, we generated 1000 samples from each of the scenarios given in Appendix A. In each scenario, we compared the FM-MVST model described in Section 4 with finite mixtures of matrix variate normal (FM-MVN) and matrix variate t (FM-MVT) distributions, which are readily available in the R package MixMatrix. We also implemented the EM algorithm described in [8] to fit finite mixtures of MVSTIG (FM-MVSTIG) distributions. Additionally, finite mixtures of RMVSN (FM-RMVSN) distributions were fitted as a sub-model of the FM-MVST model.
Model performance was assessed by comparing the classification accuracy and model selection criteria for all the fitted models. For classification accuracy, we report the adjusted Rand index (ARI; [24]), which takes the value of 1 when two partitions match perfectly, and the misclassification rate (MCR) of the MAP clustering for each model. The ARI serves as a measure of the similarity between two data clusterings, providing insight into the robustness of the clustering results. By contrast, the MCR focuses on the accuracy of classification, emphasizing the proportion of misclassified instances, and thus offers a more direct assessment of predictive performance. Furthermore, the Bayesian information criterion (BIC; [25]) was also reported as a model selection criterion. The BIC incorporates both the goodness-of-fit and the complexity of the model, penalizing overfitting, which is particularly relevant when evaluating clustering algorithms.
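For reference, the three reported criteria can be sketched in a few lines of R (ours; the label matching needed before computing the MCR is assumed to have been done beforehand, and adjustedRandIndex is from the mclust package):

```r
bic <- function(loglik, m, N) -2 * loglik + m * log(N)  # m free parameters; smaller is better
ari <- function(truth, est) mclust::adjustedRandIndex(truth, est)
mcr <- function(truth, est) mean(truth != est)          # after matching cluster labels
```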
We ran 100 simulations for each scenario and computed the classification accuracy and model selection criteria for each simulation. Table 2 presents the average BIC, ARI and MCR values along with their standard errors (Std), and the results are illustrated via the box plots shown in Figure 1. As one would expect, the model selection criteria selected the true model from which the data were generated. This outcome highlights the effectiveness of the selected metrics in distinguishing between models based on their ability to capture the underlying data structure. The consistency observed across the simulations further strengthens the case for the reliability of these model selection criteria in practical applications.

6. Real Data Analysis

In this section, we illustrate the results of applying the proposed methodology to three well-known real datasets.

6.1. Landsat Data

The first application concerns the Landsat data (LSD), originally obtained by NASA and available at the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml, accessed on 1 September 2024). Multi-spectral satellite imagery allows for multiple observations over a spatial grid, resulting in matrix-valued observations. The LSD comprises lines that consist of four spectral values for each pixel of a nine-pixel neighborhood in a satellite image; essentially, each line corresponds to a $4 \times 9$ observation matrix. Additionally, every observation matrix in the LSD is classified into one of six distinct categories: red soil, cotton crop, gray soil, damp gray soil, soil with vegetation stubble and very damp gray soil. For our analysis, we concentrated on three specific categories: red soil, gray soil and soil with vegetation stubble, which had sizes of 461, 397 and 237, respectively.
Table 3 presents a summary of the ML fitting results, including the maximized log likelihood values, BIC, ARI and MCR of the five fitted models. The results reveal that the log likelihood value for the FM-MVN distribution was lower than that for the FM-MVT distribution, indicating a poorer fit. In contrast, the skewed distributions (FM-MVST, FM-MVSTIG and FM-RMVSN) outperformed their symmetric counterparts. Particularly noteworthy was the superior performance of the FM-MVST model. The estimated tailedness parameters were $\widehat{\nu}_{1} = 0.47$, $\widehat{\nu}_{2} = 0.44$ and $\widehat{\nu}_{3} = 0.58$, indicating a distribution of matrix observations characterized by long-tailed behavior.

6.2. Apes Data

The second application considered the apes dataset included in the shapes R package [26]. The description of the dataset, taken from [27], is as follows. In an investigation to assess the cranial differences between the sexes of apes, 29 male and 30 female adult gorillas (Gorilla), 28 male and 26 female adult chimpanzees (Pan) and 30 male and 24 female adult orangutans (Pongo) were studied. Eight landmarks were chosen in the midline plane of each skull. These were anatomical landmarks, located by an expert biologist. The dataset is stored as a list with two components: an array of coordinates of eight landmarks in two dimensions for each skull (an $8 \times 2$ observation matrix, with $N = 167$), and a vector of group labels ($G = 6$).
All the competing models were fitted for $G = 6$, and their fitting results are reported in Table 4. It is clear that the FM-MVN model provided the worst fitting performance, whereas FM-MVST was the best model. Similarly to the analyses in the previous section, this may be an indication that the FM-MVN components, being neither skewed nor heavy-tailed, could not adequately model the data. On a related note, the estimated tailedness parameters were $\widehat{\nu}_{1} = 0.45$, $\widehat{\nu}_{2} = 2.20$, $\widehat{\nu}_{3} = 1.61$, $\widehat{\nu}_{4} = 2.80$, $\widehat{\nu}_{5} = 0.52$ and $\widehat{\nu}_{6} = 1.61$, highlighting the presence of clusters with high levels of tailed behavior.

6.3. Melanoma Data

The performance of the FM-MVST model in skin cancer detection was demonstrated in the third and final application. The objective of the skin cancer detection project was to develop a framework for analyzing and assessing the risk of melanoma, using dermatological photographs taken with a standard consumer-grade camera. Segmentation of the lesion is a crucial step for developing a skin cancer detection framework. The objective, then, was to find the border of the skin lesion. It was important that this step was performed accurately, because many features used to assess the risk of melanoma are derived based on the lesion border. The set of images included images extracted from the public databases DermIS and DermQuest, along with manual segmentations (ground truth) of the lesions, available at https://uwaterloo.ca/vision-image-processing-lab/research-demos/skin-cancer-detection, accessed on 1 September 2024.
A skin image of $100 \times 70$ pixels is displayed in Figure 2a. The next objective was to segment the image into two labels. For each pixel, we considered the three numerical RGB components, denoting the red, green and blue intensities, together with a grayscale intensity, so that $\mathbf{y}_{i} \in [0, 255]^{4}$, which could be transformed into $[0, 1]^{4}$. Upon considering each pixel as a $2 \times 2$ matrix, each pixel was then grouped into one of $G = 2$ clusters, where every cluster was assumed to have a different distribution.
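To illustrate the data construction we read from this description, here is a small R sketch (ours throughout; in particular, using the channel mean as the grayscale intensity is our assumption, since the conversion is not specified in the text):

```r
## Build the 2 x 2 per-pixel matrices: (R, G, B, gray) rescaled to [0, 1].
## 'rgb' is an (npix x 3) matrix of intensities in 0..255.
pixel_mats <- function(rgb) {
  gray <- rowMeans(rgb)               # grayscale proxy -- our assumption
  X <- cbind(rgb, gray) / 255
  lapply(seq_len(nrow(X)), function(i) matrix(X[i, ], 2, 2))
}
```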
It follows from Table 5 that the FM-MVST model provided the best fit, in terms of BIC, as well as the lowest misclassification error for the binary classification of each pixel. The estimates of the tailedness parameters were ν ^ 1 = 1.96 and ν ^ 2 = 0.92 , signifying the appropriateness of the use of heavy-tailed t distributions. Furthermore, the superiority of the FM-MVST model is reflected visually in Figure 2c–g, which depict the comparative segmentation performance of the fitted models in grayscale. The figures illustrate differences in identifying the lesion area, and they indicate that the proposed model exhibits a clearer boundary and a more consistent region of the lesion.
It is noteworthy that all the model selection criteria applied to the three datasets strongly favored the proposed FM-MVST model. The datasets varied significantly, in terms of dimensional and structural characteristics, which underscores the flexibility and effectiveness of the FM-MVST model. This model demonstrated a superior ability to accurately capture the skewness and leptokurtic features present in the data, outperforming the alternative models. The adaptability of the FM-MVST model across diverse datasets not only showcases its robustness but also reinforces its potential as a valuable tool for data analysis in various applications.

7. Concluding Remarks

We have introduced here a new family of matrix variate distributions that can capture both skewness and heavy-tailedness simultaneously. This MVST model was based on a stochastic representation that facilitated the development of an ECME algorithm for the maximum likelihood estimation of the model parameters. We evaluated the effectiveness and efficiency of the proposed algorithm through two simulation studies. Additionally, we used the proposed approach to analyze three real datasets, demonstrating its capability in modeling asymmetric matrix variate data. Future developments of this approach could include accommodating censored data, determining the optimal number of mixing distributions for the clustering problem and allowing different distributions for the variables $W$ and $U$ in the stochastic representation. Another interesting extension could involve incorporating the FM-MVST distribution into a mixture-of-regressions framework. We are currently looking into these problems, and we hope to report our findings in a future paper.

Author Contributions

Conceptualization, A.J.; Methodology, A.M. and A.J.; Software, A.M.; Investigation, A.J.; Writing—original draft, A.M.; Writing—review & editing, N.B.; Supervision, N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The parameters used to generate the data in Section 5.1 are given in Table A1 below. Here, $\mathbf{1}_{p}$ is used to denote the vector of length $p$ with all its entries equal to 1, and $\mathbf{I}_{p}$ denotes the $p$-dimensional identity matrix.
Table A1. The parameters used in the generation of data (Scenarios I and II).

Scenario I ($3 \times 4$ matrices):

$\pi_{1} = 0.3$, $\pi_{2} = 0.7$

$\mathbf{M}_{1} = \begin{pmatrix} 1 & 1 & 1 & 2 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, $\mathbf{M}_{2} = \begin{pmatrix} 0 & 2 & 0 & 1 \\ 0 & 2 & 0 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix}$

$\boldsymbol{\Sigma}_{1} = \begin{pmatrix} 1.0 & 0.0 & 0.0 \\ 0.0 & 0.7 & 0.1 \\ 0.0 & 0.1 & 1.0 \end{pmatrix}$, $\boldsymbol{\Sigma}_{2} = \begin{pmatrix} 1.0 & 0.1 & 0.2 \\ 0.1 & 0.5 & 0.5 \\ 0.2 & 0.5 & 1.4 \end{pmatrix}$

$\boldsymbol{\Psi}_{1} = \begin{pmatrix} 0.7 & 0.0 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.5 & 0.5 \\ 0.0 & 0.5 & 1.5 & 0.1 \\ 0.0 & 0.5 & 0.1 & 1.0 \end{pmatrix}$, $\boldsymbol{\Psi}_{2} = \begin{pmatrix} 1.0 & 0.5 & 0.0 & 0.0 \\ 0.5 & 1.0 & 0.5 & 0.5 \\ 0.0 & 0.5 & 1.0 & 0.1 \\ 0.0 & 0.5 & 0.1 & 1.0 \end{pmatrix}$

$\boldsymbol{\Lambda}_{1} = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 1 & 2 & 0 & 1 \\ 1 & 2 & 0 & 1 \end{pmatrix}$, $\boldsymbol{\Lambda}_{2} = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{pmatrix}$

$\nu_{1} = 3$, $\nu_{2} = 5$

Scenario II ($10 \times 2$ matrices):

$\pi_{1} = 0.4$, $\pi_{2} = 0.6$

$\mathbf{M}_{1} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \otimes \mathbf{1}_{5}$, $\mathbf{M}_{2} = \begin{pmatrix} 0 & 0 \\ 2 & 1 \end{pmatrix} \otimes \mathbf{1}_{5}$

$\boldsymbol{\Sigma}_{1} = \begin{pmatrix} 5.0 & 0.5 \\ 0.5 & 1.0 \end{pmatrix} \otimes \mathbf{I}_{5}$, $\boldsymbol{\Sigma}_{2} = \begin{pmatrix} 2.0 & 0.1 \\ 0.1 & 0.5 \end{pmatrix} \otimes \mathbf{I}_{5}$

$\boldsymbol{\Psi}_{1} = \begin{pmatrix} 0.5 & 0.0 \\ 0.0 & 0.5 \end{pmatrix}$, $\boldsymbol{\Psi}_{2} = \begin{pmatrix} 1.0 & 0.5 \\ 0.5 & 1.0 \end{pmatrix}$

$\boldsymbol{\Lambda}_{1} = \begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix} \otimes \mathbf{1}_{5}$, $\boldsymbol{\Lambda}_{2} = \begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix} \otimes \mathbf{1}_{5}$

$\nu_{1} = 4$, $\nu_{2} = 4$

References

  1. Kroonenberg, P.M. Applied Multiway Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  2. Dickey, J.M. Matricvariate generalizations of the multivariate t distribution and the inverted multivariate t distribution. Ann. Math. Stat. 1967, 38, 511–518. [Google Scholar] [CrossRef]
  3. Arellano-Valle, R.B.; Azzalini, A. A formulation for continuous mixtures of multivariate normal distributions. J. Multivar. Anal. 2021, 185, 104780. [Google Scholar] [CrossRef]
  4. Liu, C.; Rubin, D.B. The ECME algorithm: A simple extension of EM and ECM with faster monotone convergence. Biometrika 1994, 81, 633–648. [Google Scholar] [CrossRef]
  5. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. (Methodol.) 1977, 39, 1–22. [Google Scholar] [CrossRef]
  6. Rezaei, A.; Yousefzadeh, F.; Arellano-Valle, R.B. Scale and shape mixtures of matrix variate extended skew normal distributions. J. Multivar. Anal. 2020, 179, 104649. [Google Scholar] [CrossRef]
  7. Gallaugher, M.P.; McNicholas, P.D. A matrix variate skew-t distribution. Stat 2017, 6, 160–170. [Google Scholar] [CrossRef]
  8. Gallaugher, M.P.; McNicholas, P.D. Finite mixtures of skewed matrix variate distributions. Pattern Recognit. 2018, 80, 83–93. [Google Scholar] [CrossRef]
  9. Naderi, M.; Bekker, A.; Arashi, M.; Jamalizadeh, A. A theoretical framework for Landsat data modeling based on the matrix variate mean-mixture of normal model. PLoS ONE 2020, 15, e0230773. [Google Scholar] [CrossRef]
  10. Chen, J.T.; Gupta, A.K. Matrix variate skew normal distributions. Statistics 2005, 39, 247–253. [Google Scholar] [CrossRef]
  11. Domínguez-Molina, J.A.; González-Farías, G.; Ramos-Quiroga, R.; Gupta, A.K. A matrix variate closed skew-normal distribution with applications to stochastic frontier analysis. Commun. Stat.—Theory Methods 2007, 36, 1691–1703. [Google Scholar] [CrossRef]
  12. Zhang, L.; Bandyopadhyay, D. A graphical model for skewed matrix-variate non-randomly missing data. Biostatistics 2020, 21, e80–e97. [Google Scholar] [CrossRef]
  13. Viroli, C. Finite mixtures of matrix normal distributions for classifying three-way data. Stat. Comput. 2011, 21, 511–522. [Google Scholar] [CrossRef]
  14. Thompson, G.Z.; Maitra, R.; Meeker, W.Q.; Bastawros, A.F. Classification with the matrix-variate-t distribution. J. Comput. Graph. Stat. 2020, 29, 668–674. [Google Scholar] [CrossRef]
  15. Tomarchio, S.D.; Punzo, A.; Bagnato, L. Two new matrix-variate distributions with application in model-based clustering. Comput. Stat. Data Anal. 2020, 152, 107050. [Google Scholar] [CrossRef]
  16. Tomarchio, S.D.; Gallaugher, M.P.; Punzo, A.; McNicholas, P.D. Mixtures of matrix-variate contaminated normal distributions. J. Comput. Graph. Stat. 2022, 31, 413–421. [Google Scholar] [CrossRef]
  17. Tomarchio, S.D. Matrix-variate normal mean-variance Birnbaum–Saunders distributions and related mixture models. Comput. Stat. 2024, 39, 405–432. [Google Scholar] [CrossRef]
  18. Naderi, M.; Tamandi, M.; Mirfarah, E.; Wang, W.L.; Lin, T.I. Three-way data clustering based on the mean-mixture of matrix-variate normal distributions. Comput. Stat. Data Anal. 2024, 199, 108016. [Google Scholar] [CrossRef]
  19. Lin, T.I.; Wu, P.H.; McLachlan, G.J.; Lee, S.X. A robust factor analysis model using the restricted skew-t distribution. Test 2015, 24, 510–531. [Google Scholar] [CrossRef]
  20. Lee, S.X.; McLachlan, G.J. Finite mixtures of canonical fundamental skew t-distributions: The unification of the restricted and unrestricted skew t-mixture models. Stat. Comput. 2016, 26, 573–589. [Google Scholar] [CrossRef]
  21. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Davis, CA, USA, 21 June–18 July 1965; University of California Press: Berkeley, CA, USA, 1967. [Google Scholar]
  22. Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef]
  23. Sarkar, S.; Zhu, X.; Melnykov, V.; Ingrassia, S. On parsimonious models for modeling matrix data. Comput. Stat. Data Anal. 2020, 142, 106822. [Google Scholar] [CrossRef]
  24. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
  25. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  26. Dryden, I.L. shapes Package; Version 1.2.6; Contributed package; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  27. Dryden, I.; Mardia, K. Statistical Shape Analysis: With Applications in R; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
Figure 1. Box plots of BIC, ARI and MCR values for the competing models in two scenarios: (a) Scenario I and (b) Scenario II.
Figure 2. Segmentation of lesion: (a) original, (b) ground truth, and (c–g) segmented images obtained using different models.
Table 1. Average RMSE based on 500 replications for the evaluation of ML estimates.

Scenario  N     π_1    M_1    M_2    Σ_1    Σ_2    Ψ_1    Ψ_2    Λ_1    Λ_2    ν_1    ν_2
I         250   0.031  0.169  0.119  0.058  0.045  0.128  0.078  0.174  0.118  1.638  3.629
I         500   0.020  0.129  0.087  0.044  0.041  0.093  0.064  0.126  0.088  1.215  3.128
I         1000  0.015  0.091  0.059  0.032  0.031  0.066  0.052  0.088  0.056  0.803  2.712
I         2000  0.011  0.062  0.044  0.027  0.028  0.051  0.043  0.062  0.044  0.702  2.230
II        250   0.035  0.260  0.193  1.348  0.376  0.785  0.440  0.257  0.187  2.544  2.426
II        500   0.022  0.176  0.130  1.257  0.338  0.771  0.425  0.179  0.129  2.297  2.219
II        1000  0.014  0.120  0.097  1.206  0.312  0.764  0.423  0.124  0.097  1.962  1.897
II        2000  0.011  0.088  0.068  1.184  0.302  0.753  0.401  0.090  0.070  1.460  1.388
Table 2. Simulation results, based on 100 replications, for performance comparison of five mixture models in two scenarios.

Scenario  Model      BIC        Std     ARI   Std   MCR   Std
I         FM-MVN     55,419.78  831.20  0.82  0.20  0.05  0.06
I         FM-MVT     45,004.95  787.39  0.90  0.17  0.03  0.05
I         FM-RMVSN   40,103.90  962.33  0.95  0.18  0.02  0.05
I         FM-MVSTIG  38,215.01  859.47  0.97  0.08  0.01  0.02
I         FM-MVST    38,170.98  804.07  0.98  0.05  0.01  0.01
II        FM-MVN     85,450.81  705.80  0.91  0.15  0.07  0.06
II        FM-MVT     76,011.44  720.48  0.93  0.14  0.08  0.05
II        FM-RMVSN   72,917.04  703.48  0.94  0.12  0.05  0.02
II        FM-MVSTIG  69,892.52  694.08  0.95  0.10  0.04  0.02
II        FM-MVST    69,839.90  673.31  0.97  0.09  0.02  0.01
Table 3. Summary results from fitting various models to the LSD data.

Model      G  Log Likelihood  BIC         ARI   MCR
FM-MVN     3  −114,954.90     231,799.40  0.67  0.14
FM-MVT     3  −113,169.30     228,228.10  0.69  0.13
FM-RMVSN   3  −111,213.50     225,107.40  0.76  0.09
FM-MVSTIG  3  −110,920.90     224,543.20  0.79  0.07
FM-MVST    3  −110,836.60     224,374.60  0.82  0.06
Table 4. Summary results from fitting various models to the apes data.

Model      G  Log Likelihood  BIC        ARI   MCR
FM-MVN     6  −7773.14        17,204.51  0.51  0.41
FM-MVT     6  −7609.66        16,877.54  0.56  0.32
FM-RMVSN   6  −6158.09        14,522.03  0.60  0.28
FM-MVSTIG  6  −6097.66        14,431.87  0.63  0.27
FM-MVST    6  −5970.42        14,177.41  0.67  0.25
Table 5. Summary results from fitting various models to the melanoma data.

Model      G  Log Likelihood  BIC          ARI   MCR
FM-MVN     2  63,941.64       −127,723.90  0.76  0.14
FM-MVT     2  65,509.39       −130,859.40  0.82  0.13
FM-RMVSN   2  65,601.15       −130,945.50  0.91  0.12
FM-MVSTIG  2  65,788.76       −131,303.10  0.93  0.11
FM-MVST    2  67,241.98       −134,209.50  0.95  0.09
