Article

A Clustering Model for Three-Way Asymmetric Proximities: Unveiling Origins and Destinations

by Laura Bocci ¹ ˒† and Donatella Vicari ² ˒*˒†
1 Department of Social and Economic Sciences, Sapienza University of Rome, P.le Aldo Moro 5, 00185 Rome, Italy
2 Department of Statistical Sciences, Sapienza University of Rome, P.le Aldo Moro 5, 00185 Rome, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2024, 16(6), 752; https://doi.org/10.3390/sym16060752
Submission received: 13 April 2024 / Revised: 9 June 2024 / Accepted: 11 June 2024 / Published: 16 June 2024
(This article belongs to the Section Mathematics)

Abstract:
In many real-world situations, the available data consist of a set of several asymmetric pairwise proximity matrices that collect directed exchanges between pairs of objects measured or observed in a number of occasions (three-way data). To unveil patterns of exchange, a clustering model is proposed that accounts for the systematic differences across occasions. Specifically, the goal is to identify the groups of objects that are primarily origins or destinations of the directed exchanges, and, together, to measure the extent to which these clusters differ across occasions. The model is based on two clustering structures for the objects, which are linked one-to-one and common to all occasions. The first structure assumes a standard partition of the objects to fit the average amounts of the exchanges, while the second one fits the imbalances using an “incomplete” partition of the objects, allowing some to remain unassigned. In addition, to account for the heterogeneity of the occasions, the amounts and directions of exchange between clusters are modeled by occasion-specific weights. An Alternating Least-Squares algorithm is provided. Results from artificial data and a real application on international student mobility show the capability of the model to identify origin and/or destination clusters with common behavior across occasions.

1. Introduction

Three-way proximity data consist of relationship data collected or measured between pairs of objects under multiple occasions, e.g., sources, settings, times, or experimental conditions. They extend standard two-way proximity data, where the same objects are the entities in relation, to a three-dimensional framework. The additional dimension is given by the occasion on which each pairwise proximity (similarity or dissimilarity) was collected. Three-way proximity data are typically arranged as a set of square data matrices (one for each occasion), each containing the proximities between all pairs of objects, and can be either symmetric or asymmetric.
In the latter case, where the relationship between the objects A and B is different from that between B and A, the asymmetry denotes a certain disequilibrium (imbalance) in the pairwise relationship. As a result, not only the presence of such relationships but also their directionality and magnitude, both of which are significant and cannot be ignored, define the true pattern of the data.
This phenomenon occurs in a variety of disciplines, including psychology, sociology, marketing research, and the social, behavioral, and environmental sciences. Asymmetric proximity data can stem from diverse sources, such as judgments concerning the similarity between stimuli (confusion data), interactions among social actors (social networks), patterns of mobility, transitions between employment sectors, brand switching behaviors, or commercial transactions and trades between countries.
When dealing with asymmetric data, it may be of interest to unveil common behaviors of exchange in order to identify groups of objects that are primarily either origins or destinations of the exchanges, and in the presence of multiple occasions, such common patterns can also be analyzed and compared across them. For instance, the international student mobility between countries over several years yields a three-way asymmetric proximity array, where, in each annual matrix, the rows and columns correspond to the origins and destinations of the mobile students, respectively. The identification of possible pathways of student mobility over time can be an important source of information for policy makers.
Specially designed multiway models and methods are required to analyze such data. Within this framework, clustering three-way data is a complex and challenging task since asymmetric proximity matrices may subsume different classifications of the objects due to the heterogeneity across occasions, and the asymmetry may contain some information relevant to clustering efforts.
When analyzing asymmetric data, the asymmetry is often ignored by symmetrizing the proximities by averaging the two different values for each pair of objects. Nevertheless, asymmetry can be substantial and prominent, and deserves attention because it contains important information about the real patterns in the data, in terms of both the direction and the magnitude of the relationships.
A review of the literature on cluster analysis reveals a notable disparity: methods for asymmetric proximity data have received less attention compared with the large number of models and methods that have been proposed for symmetric proximity data, and even less for three-way data.
Clustering methods for asymmetric data have been developed for a single data matrix mainly following two approaches (see [1,2,3] for extended reviews of clustering methods for asymmetric data).
In the first approach, two different classification structures are estimated, one for the rows and one for the columns of the asymmetric matrix, since rows and columns are assumed to refer to two different sets of objects. Within this approach, the majority of the proposed clustering algorithms generally estimate hierarchical trees and identify non-overlapping clusters, i.e., each object belongs to only one cluster [4,5,6,7]. A non-hierarchical clustering algorithm, called GENNCLUS, that provides either overlapping or non-overlapping clusters for asymmetric or symmetric data has been proposed as a generalization of ADCLUS [8].
In the second approach, a single classification structure is estimated from both rows and columns of the asymmetric matrix because, in accordance with their true nature, they refer to the same set of objects. Within this approach, most of the methodologies extend the classical aggregative hierarchical methods [9,10,11,12], while few proposals concern non-hierarchical methods. Some of the latter are basically extensions of the k-means clustering [13,14,15]. In the same vein but with a different modeling, more recent proposals [16,17] concern non-hierarchical clustering methods that simultaneously fit both the symmetric and the skew-symmetric parts of the data resulting from the algebraic decomposition of the asymmetric matrix (see Section 1.1 for the definition). Specific clustering models for skew-symmetric data have been proposed by Vicari [18] and Vicari and Di Nuzzo [19]. In the first model, possible external covariates for the objects have been included, while the second proposal relies on the between-cluster effects modeled by Singular Value Decompositions that exploit the peculiar properties of the skew-symmetric matrices.
Regarding three-way asymmetric proximities, several models have been proposed within the framework of Multidimensional Scaling (MDS) (see [3] for a review of MDS methods for multiway asymmetric data), but to our knowledge, only one clustering model has been proposed in order to analyze three-way asymmetric proximity data. The proposal of Chaturvedi and Carroll [20] generalizes the INDCLUS model [21] to the asymmetric case by identifying two different sets of (overlapping) clusters of the objects (for the rows and the columns of the data matrices, respectively) common to all occasions, while the three-way heterogeneity is accounted for by occasion-specific weights for the clusters.
We can observe that since Generalized INDCLUS [20] aims at finding two different sets of (possibly overlapping) clusters of objects, the identification of origins and destinations of exchange patterns is rather complicated from an interpretative point of view. This is because the clusters are only partially matched, which often leads to non-parsimonious solutions. Conversely, in many scenarios, when dealing with this type of data, the goal is to analyze the exchange between the same groups of objects that serve as either origins or destinations, because they contain objects with common exchange behavior toward the other groups across occasions, both in magnitude and in direction.
Accordingly, given the lack of proposals for clustering three-way asymmetric proximities, our research objectives are (1) to identify the same groups of objects (for the rows and columns of the data matrices) that are primarily origins or destinations of the exchanges and, together, (2) to measure the extent to which these clusters differ across occasions.
To fill this research gap, we start from the clustering model proposed by Vicari [17], which addresses the first issue and accounts for between-cluster effects when only a single dissimilarity matrix is available (two-way case). In the present paper, we generalize the model [17] to the three-way case with the aim of also accounting for heterogeneity across occasions.
The model is based on the decomposition of each asymmetric matrix into the sum of its symmetric and skew-symmetric components, which are modeled jointly. The asymmetric dissimilarities are assumed to subsume two clustering structures common to all occasions: the first defines a standard partitioning of all objects that fits the symmetric component of the exchanges; the second one, which fits the imbalances, defines an incomplete partition of objects, some of which are allowed to remain unallocated to any cluster. In both clustering structures, objects within the same cluster share the same behavior with respect to exchanges that are directed to objects in different clusters so that “origin” and “destination” clusters are identified. Objects possibly unassigned to any cluster of the incomplete partition represent “nearly” symmetric objects, characterized by small imbalances. As a novel contribution, this paper accounts for heterogeneity across occasions by estimating occasion-specific sets of weights that can capture both the average magnitude and the direction of exchange between clusters. This makes it possible to analyze the role of the common clusters as either origin or destination, which may differ across occasions.
This paper is organized as follows: After an illustrative example to introduce the problem (Section 1.1), the model is formalized in a general framework in Section 2 and an appropriate algorithm is proposed in Section 3. Applications to the artificial data of Section 1.1 and to real mobility data are presented in Section 4 and Section 5, respectively, to illustrate the usefulness and effectiveness of the proposal, also in comparison with the Generalized INDCLUS [20]. Finally, Section 6 summarizes the findings and concludes with directions for future developments.

1.1. Illustrative Example: Artificial Data

Before discussing the model in detail, an illustrative example of the heterogeneous three-way asymmetric data we are dealing with is presented.
Let us consider a three-way array of asymmetric dissimilarity data pertaining to pairwise exchanges between $N = 9$ objects measured at $H = 3$ occasions. The three matrices $\mathbf{X}_h$ ($h = 1, 2, 3$) in Figure 1, containing the exchanges of each of the three occasions, have been artificially generated from model (4), which will be fully formalized in Section 2, assuming three clusters of objects, namely, $C_1 = \{a, b, c, d\}$, $C_2 = \{e, f\}$, and $C_3 = \{g, h, i\}$.
Exchanges are generally asymmetric: for example, the exchange from a to e is different from the exchange from e to a in each occasion. Furthermore, the exchanges from a to all other objects are greater than the exchanges from all other objects to a in occasions 1 and 3, while the opposite is true for occasion 2.
In addition, using the Gower decomposition [22], each matrix $\mathbf{X}_h$ ($h = 1, 2, 3$) can be decomposed into its symmetric and skew-symmetric components $\mathbf{S}_h$ and $\mathbf{K}_h$, respectively, as shown in Figure 2.
Let us recall the Gower decomposition [22] of any square matrix $\mathbf{X}_h$ ($h = 1, \ldots, H$), which can be uniquely decomposed into the sum of a symmetric matrix $\mathbf{S}_h$ and a skew-symmetric matrix $\mathbf{K}_h$, both of size $(N \times N)$ and orthogonal to each other (i.e., $\mathrm{trace}(\mathbf{S}_h^{\prime}\mathbf{K}_h) = 0$):
$$\mathbf{X}_h = \mathbf{S}_h + \mathbf{K}_h = \frac{1}{2}\left(\mathbf{X}_h + \mathbf{X}_h^{\prime}\right) + \frac{1}{2}\left(\mathbf{X}_h - \mathbf{X}_h^{\prime}\right), \quad (h = 1, \ldots, H), \tag{1}$$
as elementary linear algebra easily shows. The entry $s_{ijh} \in \mathbf{S}_h$ represents the average amount of the exchange between objects $i$ and $j$ at occasion $h$ ($h = 1, \ldots, H$), while the entry $k_{ijh} \in \mathbf{K}_h$ represents the imbalance between $i$ and $j$, i.e., the amount by which $x_{ijh}$ differs from its mean $s_{ijh}$ ($i, j = 1, \ldots, N$ and $h = 1, \ldots, H$). Thus, each element of the skew-symmetric matrix $\mathbf{K}_h$ satisfies, by definition, $k_{ijh} = -k_{jih}$, and conveys information about the direction of the exchange.
According to this decomposition, every exchange between a pair of objects in matrix $\mathbf{X}_h$ ($h = 1, 2, 3$), for example, $x_{ae1} = 66.2$ from object $a$ to object $e$ in the first occasion, can be decomposed into the sum of two entries: (1) the average amount of the exchange between $a$ and $e$, $s_{ae1} = 34.0$ in $\mathbf{S}_1$, and (2) the imbalance between the exchanges from $a$ to $e$ and from $e$ to $a$, $k_{ae1} = 32.2$ in $\mathbf{K}_1$, i.e., the amount by which the exchange from $a$ to $e$ differs from its mean $s_{ae1}$.
Based on the Gower decomposition, it is also possible to measure the percentage of asymmetry of each matrix $\mathbf{X}_h$ as the squared ratio between the Frobenius norm of its skew-symmetric component $\mathbf{K}_h$ and the Frobenius norm of $\mathbf{X}_h$, i.e., $\frac{\|\mathbf{K}_h\|^2}{\|\mathbf{X}_h\|^2} \times 100$, which is a measure in $[0, 100]$.
The percentage of asymmetry of each of the matrices in Figure 1 is not negligible (15.5%, 17.8%, and 17.0%), which confirms that the structure of the data is characterized by the direction and magnitude of the exchanges, both of which are relevant and cannot be ignored.
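To make the decomposition concrete, here is a minimal NumPy sketch (the paper's implementation is in MATLAB; the random toy matrix is our own stand-in for illustration) that splits an asymmetric matrix into its symmetric and skew-symmetric parts as in (1) and computes the percentage of asymmetry:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9
X = rng.uniform(10, 70, size=(N, N))   # toy asymmetric dissimilarity matrix
np.fill_diagonal(X, 0)

# Gower decomposition (1): X = S + K, with S symmetric and K skew-symmetric
S = 0.5 * (X + X.T)
K = 0.5 * (X - X.T)

assert np.allclose(S, S.T)             # symmetric component
assert np.allclose(K, -K.T)            # skew-symmetric component
assert np.isclose(np.trace(S @ K), 0)  # orthogonality: trace(S'K) = 0

# percentage of asymmetry: ||K||^2 / ||X||^2 * 100, a measure in [0, 100]
pct = 100 * np.linalg.norm(K, 'fro')**2 / np.linalg.norm(X, 'fro')**2
```

For the matrices of Figure 1 this measure gives the 15.5–17.8% values reported above; for a random matrix it simply quantifies how far the data are from symmetry.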
The underlying pattern in the data is evident from Figure 1. In each heatmap, different shades of blue represent different magnitudes of exchange between pairs of objects in different clusters, with darker shades representing higher magnitudes. With a view to searching for origin/destination clusters, we are interested in modeling the exchange between clusters, while the exchange within clusters is not of interest here. Thus, the diagonal blocks are left white for better clarity. The heatmap of each data matrix shows three “blue” off-diagonal blocks, corresponding to the three clusters $C_1 = \{a, b, c, d\}$, $C_2 = \{e, f\}$, and $C_3 = \{g, h, i\}$, each with a common pattern across matrices.
Furthermore, the same clustering structure can be identified in both the symmetric and skew-symmetric components $\mathbf{S}_h$ and $\mathbf{K}_h$ across all occasions (Figure 2).
By analyzing the symmetric and skew-symmetric components of each data matrix, it becomes easier to disclose the pattern of the data and determine which groups of objects serve primarily as the origin or destination of the exchanges.
Objects in the same cluster have a common behavior towards objects in different clusters, in terms of outgoing and incoming exchanges, but the magnitude and direction of the exchanges change across occasions. As an example, if we look at the first and third occasions, it is clear that the outgoing exchanges from all objects in $C_2$ directed to objects in $C_1$ and $C_3$ are smaller than the incoming exchanges. This results in negative imbalances for the exchanges from all objects in $C_2$ to objects in either $C_1$ or $C_3$. Cluster $C_2$ can therefore be considered the origin cluster for these two occasions. Conversely, in the second occasion, $C_2$ serves as a destination cluster, as its outgoing amounts towards $C_1$ and $C_3$ are greater than its incoming ones, resulting in positive imbalances.
By inspecting all the symmetric and skew-symmetric components, we may also note that both the objects $b$ and $c$ (in cluster $C_1$) present a common behavior across occasions: they have almost symmetrical exchanges with the objects $g$ and $h$ (in cluster $C_3$) and small imbalances with all other objects belonging to either cluster $C_2$ or $C_3$.
In order to have a better understanding of the underlying patterns and identify the common behaviors of the objects across the different occasions, the model formalized in Section 2 is fitted to the artificial data and the results analyzed in Section 4, together with the results from the Generalized INDCLUS model [20] for comparison.

2. The Model

Let us assume that $\mathbf{X}_h$ ($h = 1, \ldots, H$) is a square asymmetric matrix whose element $x_{ilh}$ represents the pairwise dissimilarity between objects $i$ and $l$ ($i, l = 1, \ldots, N$) observed at occasion $h$ ($h = 1, \ldots, H$) and is generally different from $x_{lih}$.
The model proposed here aims at clustering the $N$ objects by accounting for both the symmetric and the skew-symmetric effects from the decomposition (1) of the observed asymmetries $\mathbf{X}_h$ ($h = 1, \ldots, H$).
In particular, the data are assumed to subsume two clustering structures that are common to all occasions: the first one defines a standard partitioning of all objects fitting the average amount of the exchanges; the second one, which fits the imbalances, defines an “incomplete” partitioning of the objects, where some of them are allowed to remain unassigned.
Specifically, the clustering structure consists of two nested partitions into $J$ clusters, $C_1, \ldots, C_j, \ldots, C_J$ and $G_1, \ldots, G_j, \ldots, G_J$, such that
  • Every object belongs to one and only one non-empty cluster $C_j$ ($j = 1, \ldots, J$);
  • If object $i$ belongs to cluster $C_j$, it can either belong to cluster $G_j$ or remain unassigned to any cluster of the partition $G_1, \ldots, G_j, \ldots, G_J$.
The partition $C_1, \ldots, C_j, \ldots, C_J$ is referred to as a complete partition because every object must be assigned to some cluster $C_j$, while $G_1, \ldots, G_j, \ldots, G_J$ is called an incomplete partition because a number $N_0$ ($N_0 \leq N$) of the $N$ objects are allowed to remain unassigned to any cluster. Note that the complete and the incomplete partitions are common to all occasions and linked together, the latter being constrained to be nested within the former ($G_j \subseteq C_j$ for $j = 1, \ldots, J$).
The complete partition is uniquely identified by an $(N \times J)$ binary membership matrix $\mathbf{U} = [u_{ij}]$ ($u_{ij} \in \{0, 1\}$ for $i = 1, \ldots, N$ and $j = 1, \ldots, J$, with $\sum_{j=1}^{J} u_{ij} = 1$ for $i = 1, \ldots, N$), where $u_{ij} = 1$ if object $i$ belongs to cluster $C_j$, and $u_{ij} = 0$ otherwise.
The incomplete partition is identified by an $(N \times J)$ binary membership matrix $\mathbf{V} = [v_{ij}]$ ($v_{ij} \in \{0, 1\}$ for $i = 1, \ldots, N$ and $j = 1, \ldots, J$), where $v_{ij} \leq u_{ij}$ ($i = 1, \ldots, N$ and $j = 1, \ldots, J$); i.e., any object $i$ can either remain unassigned to any cluster or belong to cluster $G_j$ only if it belongs to cluster $C_j$ in the complete partition.
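The two membership matrices and their nesting can be illustrated with a small NumPy sketch. The cluster labels follow the illustrative example of Section 1.1; leaving objects $b$ and $c$ unassigned in $\mathbf{V}$ is a hypothetical choice made here only for illustration:

```python
import numpy as np

N, J = 9, 3
# complete partition: each row of U sums to 1 (every object in exactly one cluster);
# labels follow C1 = {a,b,c,d}, C2 = {e,f}, C3 = {g,h,i}
labels = [0, 0, 0, 0, 1, 1, 2, 2, 2]
U = np.zeros((N, J), dtype=int)
U[np.arange(N), labels] = 1
assert (U.sum(axis=1) == 1).all()

# incomplete partition: V is nested in U (v_ij <= u_ij);
# leaving objects b and c (rows 1 and 2) unassigned is a hypothetical choice
V = U.copy()
V[[1, 2], :] = 0
assert (V <= U).all()
assert V.sum() == N - 2        # 7 objects assigned, 2 unassigned
```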
Hereafter, $\mathbf{I}_N$ denotes the identity matrix of size $N$; $\mathbf{1}_{AB}$ and $\mathbf{1}_A$ denote the $(A \times B)$ matrix of all ones and the column vector of $A$ ones, respectively; $\tilde{\mathbf{I}}_N = \mathbf{1}_{NN} - \mathbf{I}_N$ is the $(N \times N)$ matrix of ones except for zeros on the main diagonal; and $\underline{\mathbf{Y}} = [\mathbf{Y}_1, \ldots, \mathbf{Y}_H]$ is the $(M \times PH)$ augmented matrix obtained by collecting the $H$ matrices $\mathbf{Y}_1, \ldots, \mathbf{Y}_H$, each of size $(M \times P)$, next to each other.
Let us consider the Gower decomposition (1) of each matrix $\mathbf{X}_h$ ($h = 1, \ldots, H$) into the sum of its symmetric and skew-symmetric components $\mathbf{S}_h$ and $\mathbf{K}_h$, respectively. Both components can be modeled by defining two clustering structures that depend on the matrices $\mathbf{U}$ and $\mathbf{V}$, respectively, as introduced in Vicari [17] for a two-way asymmetric dissimilarity matrix.
Specifically, the symmetric component $\mathbf{S}_h$ and the skew-symmetric component $\mathbf{K}_h$ for occasion $h$ ($h = 1, \ldots, H$) are modeled by the two clustering structures introduced in Vicari [16,18] and depend on the common complete and incomplete membership matrices $\mathbf{U}$ and $\mathbf{V}$, respectively, as follows:
$$\mathbf{S}_h = \mathbf{U}\mathbf{R}_h\tilde{\mathbf{U}}^{\prime} + \tilde{\mathbf{U}}\mathbf{R}_h\mathbf{U}^{\prime} + \mathbf{E}_h^S, \quad (h = 1, \ldots, H), \tag{2}$$
$$\mathbf{K}_h = \mathbf{V}\mathbf{T}_h\tilde{\mathbf{V}}^{\prime} - \tilde{\mathbf{V}}\mathbf{T}_h\mathbf{V}^{\prime} + \mathbf{E}_h^K, \quad (h = 1, \ldots, H), \tag{3}$$
where
  • $\tilde{\mathbf{U}} = \mathbf{1}_{NJ} - \mathbf{U}$ and $\tilde{\mathbf{V}} = \mathbf{1}_{NJ} - \mathbf{V}$;
  • $\mathbf{R}_h = \mathrm{diag}(\mathbf{r}_h)$ and $\mathbf{T}_h = \mathrm{diag}(\mathbf{t}_h)$, with $\mathbf{r}_h = [r_{1h}, \ldots, r_{Jh}]^{\prime}$ and $\mathbf{t}_h = [t_{1h}, \ldots, t_{Jh}]^{\prime}$ being the occasion-specific weight vectors of size $J$ associated with the clusters of the complete and incomplete partition, respectively;
  • The error terms $\mathbf{E}_h^S$ and $\mathbf{E}_h^K$ represent the parts of $\mathbf{S}_h$ and $\mathbf{K}_h$, respectively, not accounted for by the model.
For identifiability reasons, any matrix $\mathbf{V}\mathbf{T}_h$ is constrained to sum to zero, i.e., $\mathbf{1}_N^{\prime}(\mathbf{V}\mathbf{T}_h)\mathbf{1}_J = 0$ ($h = 1, \ldots, H$).
Models (2) and (3) can be combined and plugged into (1) to specify the model accounting for the asymmetric dissimilarities between clusters at the occasion h, as follows:
$$\mathbf{X}_h = \mathbf{S}_h + \mathbf{K}_h + b_h\tilde{\mathbf{I}}_N + \mathbf{E}_h = \mathbf{U}\mathbf{R}_h\tilde{\mathbf{U}}^{\prime} + \tilde{\mathbf{U}}\mathbf{R}_h\mathbf{U}^{\prime} + \mathbf{V}\mathbf{T}_h\tilde{\mathbf{V}}^{\prime} - \tilde{\mathbf{V}}\mathbf{T}_h\mathbf{V}^{\prime} + b_h\tilde{\mathbf{I}}_N + \mathbf{E}_h, \quad (h = 1, \ldots, H), \tag{4}$$
where $b_h$ is an additive constant term and the general error term $\mathbf{E}_h$ represents the part of $\mathbf{X}_h$ not accounted for by the model.
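A noiseless NumPy sketch of how data obey model (4), assuming the toy partition of Section 1.1 and arbitrary weights (the weight values and the choice of unassigned objects are our own assumptions; the paper's implementation is MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, H = 9, 3, 2
labels = [0, 0, 0, 0, 1, 1, 2, 2, 2]          # C1={a..d}, C2={e,f}, C3={g,h,i}
U = np.zeros((N, J)); U[np.arange(N), labels] = 1
V = U.copy(); V[[1, 2], :] = 0                # two objects unassigned (assumption)
U_t, V_t = 1 - U, 1 - V                       # U-tilde and V-tilde
I_t = np.ones((N, N)) - np.eye(N)             # I-tilde: ones off the diagonal

X = []
for h in range(H):
    r_h = rng.uniform(5, 20, J)               # arbitrary occasion-specific weights
    t_h = rng.uniform(-5, 5, J)
    n = V.sum(axis=0)
    t_h -= (n @ t_h) / n.sum()                # identifiability: 1'(V T_h)1 = 0
    S_h = U @ np.diag(r_h) @ U_t.T + U_t @ np.diag(r_h) @ U.T   # model (2)
    K_h = V @ np.diag(t_h) @ V_t.T - V_t @ np.diag(t_h) @ V.T   # model (3)
    X_h = S_h + K_h + 1.0 * I_t               # model (4), b_h = 1, no noise
    assert np.allclose(0.5 * (X_h - X_h.T), K_h)   # skew part recovers K_h
    X.append(X_h)
```

Since the symmetric and skew-symmetric model terms land in orthogonal spaces, the Gower decomposition of each generated $\mathbf{X}_h$ recovers exactly the $\mathbf{S}_h$ and $\mathbf{K}_h$ terms, as the in-loop assertion checks.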
Models (2) and (3) can be expressed in compact notation in terms of $\underline{\mathbf{S}} = [\mathbf{S}_1, \ldots, \mathbf{S}_H]$ and $\underline{\mathbf{K}} = [\mathbf{K}_1, \ldots, \mathbf{K}_H]$, which denote the $(N \times NH)$ augmented matrices obtained by collecting the $H$ matrices $\mathbf{S}_h$ and $\mathbf{K}_h$ next to each other, respectively, as follows:
$$\underline{\mathbf{S}} = (\mathbf{1}_H^{\prime} \otimes \mathbf{U})\,\mathbf{R}\,(\mathbf{I}_H \otimes \tilde{\mathbf{U}})^{\prime} + (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{U}})\,\mathbf{R}\,(\mathbf{I}_H \otimes \mathbf{U})^{\prime} + \underline{\mathbf{E}}^S, \tag{5}$$
$$\underline{\mathbf{K}} = (\mathbf{1}_H^{\prime} \otimes \mathbf{V})\,\mathbf{T}\,(\mathbf{I}_H \otimes \tilde{\mathbf{V}})^{\prime} - (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{V}})\,\mathbf{T}\,(\mathbf{I}_H \otimes \mathbf{V})^{\prime} + \underline{\mathbf{E}}^K, \tag{6}$$
where $\otimes$ denotes the Kronecker product; $\mathbf{R}$ and $\mathbf{T}$ are the two $(HJ \times HJ)$ diagonal matrices with the $HJ$-vectors $\mathbf{r} = [\mathbf{r}_1^{\prime}, \ldots, \mathbf{r}_h^{\prime}, \ldots, \mathbf{r}_H^{\prime}]^{\prime}$ and $\mathbf{t} = [\mathbf{t}_1^{\prime}, \ldots, \mathbf{t}_h^{\prime}, \ldots, \mathbf{t}_H^{\prime}]^{\prime}$ as main diagonals, respectively; $\underline{\mathbf{E}}^S = [\mathbf{E}_1^S, \ldots, \mathbf{E}_h^S, \ldots, \mathbf{E}_H^S]$; and $\underline{\mathbf{E}}^K = [\mathbf{E}_1^K, \ldots, \mathbf{E}_h^K, \ldots, \mathbf{E}_H^K]$.
Recall that, given any two matrices $\mathbf{A} = [a_{ij}]$ and $\mathbf{B}$ of sizes $(N \times J)$ and $(M \times P)$, respectively, the Kronecker product of $\mathbf{A}$ and $\mathbf{B}$ is the $(NM \times JP)$ matrix
$$\mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11}\mathbf{B} & \cdots & a_{1J}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{N1}\mathbf{B} & \cdots & a_{NJ}\mathbf{B} \end{bmatrix}.$$
Finally, model (4) can be expressed in compact notation in terms of the augmented matrix $\underline{\mathbf{X}} = [\mathbf{X}_1, \ldots, \mathbf{X}_H]$ by combining models (5) and (6) as follows:
$$\underline{\mathbf{X}} = \underline{\mathbf{S}} + \underline{\mathbf{K}} + (\mathbf{b}^{\prime} \otimes \tilde{\mathbf{I}}_N) + \underline{\mathbf{E}} = (\mathbf{1}_H^{\prime} \otimes \mathbf{U})\,\mathbf{R}\,(\mathbf{I}_H \otimes \tilde{\mathbf{U}})^{\prime} + (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{U}})\,\mathbf{R}\,(\mathbf{I}_H \otimes \mathbf{U})^{\prime} + (\mathbf{1}_H^{\prime} \otimes \mathbf{V})\,\mathbf{T}\,(\mathbf{I}_H \otimes \tilde{\mathbf{V}})^{\prime} - (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{V}})\,\mathbf{T}\,(\mathbf{I}_H \otimes \mathbf{V})^{\prime} + (\mathbf{b}^{\prime} \otimes \tilde{\mathbf{I}}_N) + \underline{\mathbf{E}}, \tag{7}$$
where $\mathbf{b} = [b_1, \ldots, b_h, \ldots, b_H]^{\prime}$ and $\underline{\mathbf{E}} = [\mathbf{E}_1, \ldots, \mathbf{E}_h, \ldots, \mathbf{E}_H]$.
It is important to note that, in addition to a common clustering structure, the model assumes occasion-specific weight vectors $\mathbf{r}_h$ and $\mathbf{t}_h$ to account for the heterogeneity of the occasions. These weights measure the extent to which exchanges vary across occasions, quantifying the magnitude and direction of the exchanges between clusters at each occasion $h$.

3. The Algorithm

In model (4), the complete and incomplete membership matrices $\mathbf{U}$ and $\mathbf{V}$, the weight vectors $\mathbf{r}_h$ and $\mathbf{t}_h$, and the constants $b_h$ ($h = 1, \ldots, H$) can be estimated by solving the following least-squares fitting problem:
$$\min F(\mathbf{U}, \mathbf{V}, \mathbf{r}_h, \mathbf{t}_h, b_h) = \frac{\sum_{h=1}^{H} \left\| \mathbf{X}_h - \left( \mathbf{U}\mathbf{R}_h\tilde{\mathbf{U}}^{\prime} + \tilde{\mathbf{U}}\mathbf{R}_h\mathbf{U}^{\prime} \right) - \left( \mathbf{V}\mathbf{T}_h\tilde{\mathbf{V}}^{\prime} - \tilde{\mathbf{V}}\mathbf{T}_h\mathbf{V}^{\prime} \right) - b_h\tilde{\mathbf{I}}_N \right\|^2}{\sum_{h=1}^{H} \left\| \mathbf{X}_h \right\|^2} \tag{8}$$
subject to
$$u_{ij} \in \{0, 1\} \;\; (i = 1, \ldots, N; \; j = 1, \ldots, J) \quad \text{and} \quad \sum_{j=1}^{J} u_{ij} = 1 \;\; (i = 1, \ldots, N), \tag{9}$$
$$v_{ij} \in \{0, 1\} \;\; (i = 1, \ldots, N; \; j = 1, \ldots, J) \quad \text{and} \quad v_{ij} \leq u_{ij} \;\; (i = 1, \ldots, N), \tag{10}$$
$$\mathbf{1}_N^{\prime}(\mathbf{V}\mathbf{T}_h)\mathbf{1}_J = 0 \quad (h = 1, \ldots, H). \tag{11}$$
Problem (8), subject to constraints (9)–(11), can be reformulated in compact form in terms of model (7) as follows:
$$\begin{aligned} \min F(\mathbf{U}, \mathbf{V}, \mathbf{R}, \mathbf{T}, \mathbf{b}) &= \frac{1}{\|\underline{\mathbf{X}}\|^2} \left\| \underline{\mathbf{X}} - (\mathbf{1}_H^{\prime} \otimes \mathbf{U})\mathbf{R}(\mathbf{I}_H \otimes \tilde{\mathbf{U}})^{\prime} - (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{U}})\mathbf{R}(\mathbf{I}_H \otimes \mathbf{U})^{\prime} - (\mathbf{1}_H^{\prime} \otimes \mathbf{V})\mathbf{T}(\mathbf{I}_H \otimes \tilde{\mathbf{V}})^{\prime} + (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{V}})\mathbf{T}(\mathbf{I}_H \otimes \mathbf{V})^{\prime} - (\mathbf{b}^{\prime} \otimes \tilde{\mathbf{I}}_N) \right\|^2 \\ &= \frac{\left\| \underline{\mathbf{S}} - (\mathbf{1}_H^{\prime} \otimes \mathbf{U})\mathbf{R}(\mathbf{I}_H \otimes \tilde{\mathbf{U}})^{\prime} - (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{U}})\mathbf{R}(\mathbf{I}_H \otimes \mathbf{U})^{\prime} - (\mathbf{b}^{\prime} \otimes \tilde{\mathbf{I}}_N) \right\|^2}{\|\underline{\mathbf{X}}\|^2} + \frac{\left\| \underline{\mathbf{K}} - (\mathbf{1}_H^{\prime} \otimes \mathbf{V})\mathbf{T}(\mathbf{I}_H \otimes \tilde{\mathbf{V}})^{\prime} + (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{V}})\mathbf{T}(\mathbf{I}_H \otimes \mathbf{V})^{\prime} \right\|^2}{\|\underline{\mathbf{X}}\|^2} \\ &= F_S(\mathbf{U}, \mathbf{R}, \mathbf{b}) + F_K(\mathbf{V}, \mathbf{T}), \end{aligned} \tag{12}$$
where the equivalence is due to the orthogonality of S ̲ and K ̲ .
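This additive split of the loss can be verified numerically: the symmetric and skew-symmetric parts of any residual are orthogonal in the Frobenius sense, so their squared norms add. A minimal NumPy check with a random stand-in residual (toy sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
N, H = 6, 3
# any augmented residual splits, occasion by occasion, into orthogonal
# symmetric and skew-symmetric parts, so the squared Frobenius norm decomposes
E = rng.normal(size=(N, N * H))               # stand-in augmented residual
Es = np.hstack([0.5 * (B + B.T) for B in np.split(E, H, axis=1)])
Ek = np.hstack([0.5 * (B - B.T) for B in np.split(E, H, axis=1)])
assert np.allclose(Es + Ek, E)
assert np.isclose(np.linalg.norm(E)**2,
                  np.linalg.norm(Es)**2 + np.linalg.norm(Ek)**2)
```

This is why the single least-squares problem separates into the two subproblems $F_S$ and $F_K$, which can be handled by distinct regression steps.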
The equivalent constrained optimization problems (8) and (12) can be solved by using an Alternating Least-Squares (ALS) algorithm, which alternately estimates one set of parameters while keeping all the others fixed, as detailed below.
After an initialization step in which all parameters satisfying the constraints are chosen, the algorithm alternates between two main steps.
In the first step, in order to ensure that the relative loss function (12) is non-increasing, the two membership matrices $\mathbf{U}$ and $\mathbf{V}$ are jointly updated row by row in $N$ substeps by solving assignment problems for the different rows of $\mathbf{U}$ and $\mathbf{V}$ satisfying the constraints (9) and (10). Specifically, for a given row $i$, setting $u_{ij} = 1$ for $j = 1, \ldots, J$ implies that either $v_{ij} = u_{ij}$ or $v_{ij} = 0$; i.e., object $i$ in matrix $\mathbf{V}$ can either be assigned to the same cluster $j$ as in matrix $\mathbf{U}$ or remain unassigned. For the row $i$ and given the remaining rows of $\mathbf{U}$ and $\mathbf{V}$, all possible $2J$ assignments of the object $i$ are considered: either to the corresponding clusters $C_j$ and $G_j$ ($u_{ij} = 1$ and $v_{ij} = 1$) or to the cluster $C_j$ only ($u_{ij} = 1$ and $v_{ij} = 0$). For each of them, the weight vectors $\mathbf{r}_h$ and $\mathbf{t}_h$ ($h = 1, \ldots, H$) are also estimated as optimal solutions of constrained regression problems. Finally, by evaluating the relative loss function (12) for all potential assignments and selecting the one corresponding to the minimum loss value, the assignment of the object $i$ in $\mathbf{U}$ and $\mathbf{V}$ is chosen. Note that the relative loss function cannot increase at any substep, since the whole space of feasible solutions for both $\mathbf{U}$ and $\mathbf{V}$ is explored for each object.
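The $2J$ candidate assignments for a single row can be enumerated as in the following schematic NumPy sketch (in the algorithm, the loss is evaluated for each candidate, with the weights re-estimated, and the argmin is retained):

```python
import numpy as np

J = 3
# the 2J candidate updates for row i: for each cluster j, either
# (u_ij = 1, v_ij = 1) or (u_ij = 1, v_ij = 0)
candidates = []
for j in range(J):
    u_i = np.eye(J, dtype=int)[j]          # assign object i to cluster C_j
    for w in (1, 2):                       # w = 1: also in G_j; w = 2: unassigned in V
        v_i = u_i if w == 1 else np.zeros(J, dtype=int)
        candidates.append((u_i, v_i))

assert len(candidates) == 2 * J
assert all((v <= u).all() for u, v in candidates)   # nesting constraint (10)
```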
In the same step, the weight vectors $\mathbf{r}_h$ and $\mathbf{t}_h$ are estimated as solutions of constrained regression problems for each possible choice of the different rows of $\mathbf{U}$ and $\mathbf{V}$, respectively.
In the second step, the constant b is then estimated by successive residualizations of the three-way data matrix.
The two main steps are alternated and iterated until convergence. The relative loss function (12) does not increase at each step, and the algorithm stops when the loss decreases less than a fixed arbitrary positive and small threshold.
In order to increase the chance of finding the global minimum, the best solution over different random starting parameters is retained.
Moreover, in order to estimate the matrices $\mathbf{R}$ and $\mathbf{T}$, model (7) is reformulated as a regression problem with respect to the unknown vectors $\mathbf{r}$ and $\mathbf{t}$, as follows:
$$\underline{\mathbf{x}} = \underline{\mathbf{s}} + \underline{\mathbf{k}} + (\mathbf{b} \otimes \tilde{\mathbf{i}}) + \underline{\mathbf{e}} = \mathbf{Q}_U\mathbf{r} + \mathbf{Q}_V\mathbf{t} + (\mathbf{b} \otimes \tilde{\mathbf{i}}) + \underline{\mathbf{e}}, \tag{13}$$
where
$\underline{\mathbf{x}}$ is the column vector of size $HN^2$ of the vectorized matrix $\underline{\mathbf{X}}$, i.e., $\underline{\mathbf{x}} = \mathrm{vec}(\underline{\mathbf{X}}) = [x_{111}, \ldots, x_{N11}, \ldots, x_{11h}, \ldots, x_{N1h}, \ldots, x_{1NH}, \ldots, x_{NNH}]^{\prime}$;
$\underline{\mathbf{s}} = \mathrm{vec}(\underline{\mathbf{S}})$ and $\underline{\mathbf{k}} = \mathrm{vec}(\underline{\mathbf{K}})$ are the column vectors of size $HN^2$ of the vectorized matrices $\underline{\mathbf{S}}$ and $\underline{\mathbf{K}}$, respectively;
$\mathbf{Q}_U = (\mathbf{I}_H \otimes \tilde{\mathbf{U}}) \,\|\, (\mathbf{1}_H^{\prime} \otimes \mathbf{U}) + (\mathbf{I}_H \otimes \mathbf{U}) \,\|\, (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{U}})$ is a matrix of size $(HN^2 \times HJ)$, where $\|$ denotes the Khatri–Rao product [23,24];
$\mathbf{Q}_V = (\mathbf{I}_H \otimes \tilde{\mathbf{V}}) \,\|\, (\mathbf{1}_H^{\prime} \otimes \mathbf{V}) - (\mathbf{I}_H \otimes \mathbf{V}) \,\|\, (\mathbf{1}_H^{\prime} \otimes \tilde{\mathbf{V}})$ is a matrix of size $(HN^2 \times HJ)$;
$\tilde{\mathbf{i}}$ is the column vector of size $N^2$ of the vectorized matrix $\tilde{\mathbf{I}}_N$;
$\underline{\mathbf{e}} = \mathrm{vec}(\underline{\mathbf{E}})$ is the column vector of size $HN^2$ of the error term.
Recall that, given any two matrices $\mathbf{A}$ and $\mathbf{B}$ with the same number $J$ of columns, the Khatri–Rao product of $\mathbf{A}$ and $\mathbf{B}$ is the column-wise Kronecker product, i.e., $\mathbf{A} \,\|\, \mathbf{B} = (\mathbf{a}_1 \otimes \mathbf{b}_1, \ldots, \mathbf{a}_j \otimes \mathbf{b}_j, \ldots, \mathbf{a}_J \otimes \mathbf{b}_J)$, where $\mathbf{a}_j$ and $\mathbf{b}_j$ are the $j$-th ($j = 1, \ldots, J$) columns of $\mathbf{A}$ and $\mathbf{B}$, respectively.
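A minimal NumPy sketch of the Khatri–Rao product and of the construction of $\mathbf{Q}_U$, with a numerical check that $\mathrm{vec}(\underline{\mathbf{S}}) = \mathbf{Q}_U\mathbf{r}$ reproduces the compact model (5) in the noiseless case (toy sizes and weight values are assumptions for illustration):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: A||B = [a_1⊗b_1, ..., a_J⊗b_J]."""
    assert A.shape[1] == B.shape[1]
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

H, N, J = 2, 4, 2
U = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
U_t = 1 - U
IH, oneH = np.eye(H), np.ones((1, H))

# Q_U = (I_H ⊗ Ũ) || (1_H' ⊗ U) + (I_H ⊗ U) || (1_H' ⊗ Ũ)
Q_U = khatri_rao(np.kron(IH, U_t), np.kron(oneH, U)) \
    + khatri_rao(np.kron(IH, U), np.kron(oneH, U_t))
assert Q_U.shape == (H * N**2, H * J)

# sanity check against the compact model (5): vec(S̲) = Q_U r (noiseless)
r = np.array([1., 2., 3., 4.])                 # r = [r_1', r_2']', arbitrary
S_aug = np.hstack([U @ np.diag(r[h*J:(h+1)*J]) @ U_t.T
                   + U_t @ np.diag(r[h*J:(h+1)*J]) @ U.T for h in range(H)])
assert np.allclose(S_aug.flatten(order='F'), Q_U @ r)
```

The check uses the standard identity $\mathrm{vec}(\mathbf{A}\,\mathrm{diag}(\mathbf{d})\,\mathbf{B}^{\prime}) = (\mathbf{B}\,\|\,\mathbf{A})\,\mathbf{d}$, which is exactly what turns model (5) into the regression form (13).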
Therefore, by taking into account (13), the relative loss function (12) becomes
$$F(\mathbf{U}, \mathbf{V}, \mathbf{r}, \mathbf{t}, \mathbf{b}) = F_S(\mathbf{U}, \mathbf{r}, \mathbf{b}) + F_K(\mathbf{V}, \mathbf{t}) = \frac{\left\| \underline{\mathbf{s}} - \mathbf{Q}_U\mathbf{r} - (\mathbf{b} \otimes \tilde{\mathbf{i}}) \right\|^2}{\|\underline{\mathbf{x}}\|^2} + \frac{\left\| \underline{\mathbf{k}} - \mathbf{Q}_V\mathbf{t} \right\|^2}{\|\underline{\mathbf{x}}\|^2}. \tag{14}$$
A detailed description of the steps of the algorithm, implemented in MATLAB R2023a, is given below.
  • Initialization step.
Initial estimates of the parameters $\hat{\mathbf{U}}$, $\hat{\mathbf{V}}$, $\hat{\mathbf{r}}$, $\hat{\mathbf{t}}$, and $\hat{\mathbf{b}}$ are chosen randomly or in a rational way, but they are required to satisfy the set of constraints (9)–(11).
  • Step 1. Updating the membership matrices $\mathbf{U}$ and $\mathbf{V}$ and the weight vectors $\mathbf{r}$ and $\mathbf{t}$ (see Algorithm 1).
Given the current estimates of $\hat{\mathbf{b}}$, $\mathbf{U}$, and $\mathbf{V}$, the weight vectors $\mathbf{r}$ and $\mathbf{t}$ are estimated by minimizing (14) subject to constraints (9)–(11).
The loss function (14) is minimized sequentially over the different rows of $\mathbf{U}$ and $\mathbf{V}$ by solving $N$ assignment problems. Finally, the column sums of the estimated $\hat{\mathbf{U}}$ are checked to avoid empty clusters.
Furthermore, the weight vectors $\mathbf{r}$ and $\mathbf{t}$ are estimated in two nested substeps as follows. Given the current $\hat{\mathbf{U}}$ and $\hat{\mathbf{b}}$, for every possible binary choice of the different rows of $\mathbf{U}$, the vector $\mathbf{r}$ is estimated by solving the following regression problem:
$$F_S(\mathbf{r}; \hat{\mathbf{U}}, \hat{\mathbf{b}}) = \frac{\left\| \underline{\mathbf{s}} - \mathbf{Q}_{\hat{U}}\mathbf{r} - (\hat{\mathbf{b}} \otimes \tilde{\mathbf{i}}) \right\|^2}{\|\underline{\mathbf{x}}\|^2}. \tag{15}$$
Similarly, given the current $\hat{\mathbf{V}}$, for every admissible choice of the different rows of $\mathbf{V}$, the weight vector $\mathbf{t}$ is obtained as the solution of the following constrained regression problem:
$$F_K(\mathbf{t}; \hat{\mathbf{V}}) = \frac{\left\| \underline{\mathbf{k}} - \mathbf{Q}_{\hat{V}}\mathbf{t} \right\|^2}{\|\underline{\mathbf{x}}\|^2}, \tag{16}$$
subject to constraints (11).
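The two regression substeps can be sketched in NumPy with stand-in matrices: an ordinary least-squares update for $\mathbf{r}$ as in (15), and, for $\mathbf{t}$, a simple occasion-wise centering that satisfies constraint (11) (the paper's Algorithm 1 imposes the constraint through a Moore–Penrose projection instead; the sizes and data here are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
M, HJ = 60, 6                          # stand-ins for H*N^2 rows, H*J coefficients
Q = rng.normal(size=(M, HJ))           # stand-in for Q_U (or Q_V)
y = rng.normal(size=M)                 # stand-in for s̲ - (b̂ ⊗ ĩ) (or k̲)

# ordinary least squares, as in (15): r̂ = (Q'Q)^{-1} Q' y
r_hat = np.linalg.solve(Q.T @ Q, Q.T @ y)
assert np.allclose(Q.T @ (y - Q @ r_hat), 0, atol=1e-8)   # normal equations

# constraint (11) requires 1'_N (V T_h) 1_J = 0; a simple occasion-wise
# centering of t_h that satisfies it:
V = np.array([[1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
t_h = rng.normal(size=3)
n = V.sum(axis=0)                      # objects assigned per cluster
t_h_c = t_h - (n @ t_h) / n.sum()
assert np.isclose(n @ t_h_c, 0)        # constraint (11) holds
```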
The pseudocode for updating the $i$-th row of $\mathbf{U}$ and $\mathbf{V}$, holding all other rows fixed, and for updating $\mathbf{r}$ and $\mathbf{t}$ is as follows.
In the following, let $\mathbf{0}$ be the $J$-dimensional column vector of zeros; $\mathbf{p}^{(j)}$ be the $J$-dimensional vector with all entries equal to 0 except for the $j$-th, which is 1; and $\mathbf{u}_i$ and $\mathbf{v}_i$ denote the $i$-th rows of $\mathbf{U}$ and $\mathbf{V}$, respectively.
Algorithm 1 Step 1.
begin
    for i := 1 to N do
        for j := 1 to J do
            $\mathbf{u}_i^{(j)} = \mathbf{p}^{(j)}$;
            $\mathbf{U}^{(j)} = [\mathbf{u}_1, \ldots, \mathbf{u}_i^{(j)}, \ldots, \mathbf{u}_N]^{\prime}$;
            $\mathbf{r}^{(j)} = \left( \mathbf{Q}_{U^{(j)}}^{\prime} \mathbf{Q}_{U^{(j)}} \right)^{-1} \mathbf{Q}_{U^{(j)}}^{\prime} \left( \underline{\mathbf{s}} - (\hat{\mathbf{b}} \otimes \tilde{\mathbf{i}}) \right)$; (17)
            comment: solution of the regression problem (15) corresponding to the possible
                             assignment of object $i$ to cluster $j$ of the complete partition ($\mathbf{U}$);
            for w := 1 to 2 do
                if w = 1 then $\mathbf{v}_i^{(j,w)} = \mathbf{u}_i^{(j)}$, else if w = 2 then $\mathbf{v}_i^{(j,w)} = \mathbf{0}$; end;
                $\mathbf{V}^{(j,w)} = [\mathbf{v}_1, \ldots, \mathbf{v}_i^{(j,w)}, \ldots, \mathbf{v}_N]^{\prime}$;
                $N_V = \mathbf{1}_N^{\prime} \mathbf{V}^{(j,w)} \mathbf{1}_J$;
                comment: number of assigned objects in the incomplete partition;
                $\tilde{\mathbf{t}}^{(j,w)} = \left( \mathbf{Q}_{V^{(j,w)}}^{\prime} \mathbf{Q}_{V^{(j,w)}} \right)^{-1} \mathbf{Q}_{V^{(j,w)}}^{\prime} \underline{\mathbf{k}}$; (18)
                comment: solution of the regression problem (16) corresponding to the possible
                                 assignment of object $i$ to cluster $j$ of the incomplete partition ($\mathbf{V}$);
                for h := 1 to H do
                    $\mathbf{t}_h^{(j,w)} = \mathbf{V}^{(j,w)+} \left( \mathbf{V}^{(j,w)} \tilde{\mathbf{t}}_h^{(j,w)} - \frac{\mathbf{1}_N^{\prime} \mathbf{V}^{(j,w)} \tilde{\mathbf{t}}_h^{(j,w)}}{N_V}\, \mathbf{V}^{(j,w)} \mathbf{1}_J \right)$;
                    comment: constraint (11) is imposed; $\mathbf{V}^{(j,w)+}$ is the Moore–Penrose
                                     inverse of $\mathbf{V}^{(j,w)}$;
                end
                $\mathbf{t}^{(j,w)} = [\mathbf{t}_1^{(j,w)\prime}, \ldots, \mathbf{t}_h^{(j,w)\prime}, \ldots, \mathbf{t}_H^{(j,w)\prime}]^{\prime}$;
                $f^{(j,w)}(\mathbf{u}_i^{(j)}, \mathbf{r}^{(j)}, \mathbf{v}_i^{(j,w)}, \mathbf{t}^{(j,w)}) = F_S(\mathbf{U}^{(j)}, \mathbf{r}^{(j)}; \hat{\mathbf{b}}) + F_K(\mathbf{V}^{(j,w)}, \mathbf{t}^{(j,w)})$;
            end
        end
        $(l, g) = \arg\min_{1 \le j \le J} \arg\min_{w \in \{1, 2\}} f^{(j,w)}$;
        $\hat{\mathbf{u}}_i = \mathbf{p}^{(l)}$;
        $\hat{\mathbf{r}} = \mathbf{r}^{(l)}$;
        if g = 1 then $\hat{\mathbf{v}}_i = \hat{\mathbf{u}}_i$, $\hat{\mathbf{t}} = \mathbf{t}^{(l,1)}$, else if g = 2 then $\hat{\mathbf{v}}_i = \mathbf{0}$, $\hat{\mathbf{t}} = \mathbf{t}^{(l,2)}$; end;
    end
end
  • Step 2. Updating the constant $\mathbf{b}$.
Given the current estimates $\hat{\mathbf{U}}$, $\hat{\mathbf{V}}$, $\hat{\mathbf{r}}$, and $\hat{\mathbf{t}}$, the estimate of $\mathbf{b}$ is given by
$$\hat{\mathbf{b}} = \left( (\mathbf{I}_H \otimes \tilde{\mathbf{i}})^{\prime}(\mathbf{I}_H \otimes \tilde{\mathbf{i}}) \right)^{-1} (\mathbf{I}_H \otimes \tilde{\mathbf{i}})^{\prime} \left( \underline{\mathbf{s}} - \mathbf{Q}_{\hat{U}}\hat{\mathbf{r}} \right),$$
where the inverse always exists, since $(\mathbf{I}_H \otimes \tilde{\mathbf{i}})^{\prime}(\mathbf{I}_H \otimes \tilde{\mathbf{i}})$ is a full-rank diagonal matrix of size $H$ with diagonal elements all equal to $N(N-1)$.
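The closed-form update for $\mathbf{b}$ can be checked numerically: the Gram matrix is diagonal with entries $N(N-1)$, so each $\hat{b}_h$ reduces to the mean of the off-diagonal residuals of occasion $h$. A NumPy sketch with stand-in residuals:

```python
import numpy as np

H, N = 3, 5
i_t = (np.ones((N, N)) - np.eye(N)).flatten(order='F')   # ĩ = vec(Ĩ_N)
D = np.kron(np.eye(H), i_t.reshape(-1, 1))               # I_H ⊗ ĩ, size (HN² × H)

# the Gram matrix is diagonal with entries N(N-1), so the inverse always exists
G = D.T @ D
assert np.allclose(G, N * (N - 1) * np.eye(H))

# b̂ then reduces to the per-occasion mean of the off-diagonal residuals
resid = np.arange(H * N * N, dtype=float)                # stand-in for s̲ - Q_Û r̂
b_hat = np.linalg.solve(G, D.T @ resid)
mask = i_t.astype(bool)
b_check = np.array([resid[h*N*N:(h+1)*N*N][mask].mean() for h in range(H)])
assert np.allclose(b_hat, b_check)
```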
  • Stopping rule.
The relative loss function value is computed for the current estimates U ^ , V ^ , r ^ , t ^ , and b ^ . Since F ( U ^ , V ^ , r ^ , t ^ , b ^ ) is bounded from below and does not increase at any step, it converges to a point that is expected to be at least a local minimum. When the decrease in the loss function (14) falls below a tolerance value, the process is assumed to have converged; otherwise, Steps 1 and 2 are repeated in turn.
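The alternating scheme and its stopping rule can be sketched as follows. This is a minimal skeleton, not the authors' code: `update_partitions` and `update_constant` are illustrative stand-ins for Steps 1 and 2, each returning the updated parameters and the current loss value.

```python
def als_fit(X, update_partitions, update_constant, tol=1e-6, max_iter=100):
    """Alternating Least-Squares skeleton with a relative-decrease stopping rule.

    The two callables stand in for Step 1 (updating U, V, r, t) and
    Step 2 (updating the constant b); both return (params, loss).
    """
    params, loss = update_partitions(X, None)  # initial estimates
    for _ in range(max_iter):
        params, _ = update_partitions(X, params)       # Step 1
        params, new_loss = update_constant(X, params)  # Step 2
        # stop when the relative decrease in the loss falls below the tolerance
        if (loss - new_loss) / loss < tol:
            loss = new_loss
            break
        loss = new_loss
    return params, loss
```

Because neither step can increase the loss and the loss is bounded from below, the sequence of loss values converges, which is exactly what the stopping rule exploits.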

Meaning of the Parameter Estimates

In order to evaluate the meaning of the estimated weights r ^ of the complete partition, let us consider the membership matrix U ^ = [ u ^ i j ] ( i = 1 , , N ; j = 1 , , J ) that uniquely identifies the clusters of the complete partition { C 1 , , C j , , C J } .
From Equation (17), the estimated weight r ^ j h of the cluster C j in the h-th occasion is the average amount of the exchanges between objects in cluster C j and objects in clusters different from C j , corrected for the mean of the average amounts between all clusters different from C j . Hence, large (small) values of r ^ j h indicate clusters characterized by large (small) amounts of exchanges on average.
Similarly, let us consider the incomplete partition  { G 1 , , G j , , G J } identified by the matrix V ^ = [ v ^ i j ] ( i = 1 , , N ; j = 1 , , J ) . Then, from (18) imposing the constraint (11), the weight t ^ j h of the cluster G j in the occasion h is as follows:
\hat{t}_{jh} = \frac{1}{N \times N_{G_j}} \sum_{i=1}^{N} \sum_{l=1}^{N} k_{ilh} \, \hat{v}_{ij} \, \tilde{\hat{v}}_{lj}, \qquad (j = 1, \dots, J; \; h = 1, \dots, H),
where N G j is the number of objects assigned to the cluster G j .
Note that t ^ j h is the average imbalance from all objects in the cluster G j towards all objects in clusters other than G j , at the occasion h, corrected for the average imbalance originating from all clusters different from G j . Therefore, a positive (negative) weight t ^ j h qualifies the cluster G j as a “destination” (“origin”) cluster of the exchanges at the occasion h, and the objects belonging to such a cluster have a similar pattern in terms of exchanges directed towards the other clusters.
Given the occasion h and due to (3), all objects belonging to the same cluster G j have the same weight t ^ j h , and the weighted sum of these weights over all clusters is zero due to constraint (11). Constraint (11) is necessary to guarantee the identifiability of the model and the uniqueness of the solution, because the weights enter the model only through differences. In a general and widely applicable framework, this choice implies that the model defines a closed exchange system, in the sense that the total imbalance between clusters within each occasion is zero. Note that, in specific contexts, a value other than zero could be chosen to accommodate appropriate assumptions; only (19) in the algorithm would need to be modified accordingly.
Furthermore, the objects that remain unassigned to any cluster of the incomplete partition are actually objects that generate (almost) zero mutual imbalances and, equivalently, almost symmetric exchanges in all occasions.
The constant term b ^ h is assumed to be added to all dissimilarities in the h-th occasion and, thus, only affects the average amounts of exchanges between objects (symmetric component S h ). Therefore, the additive constant b ^ h in (20) represents the baseline average dissimilarity, independent of any clustering and direction, and plays the same role as the intercept of a linear regression model.
Finally, it is worth noting that, for each occasion h, model (7) accounts for the between-cluster effects, while the exchanges within clusters are fitted only by the constant term b ^ h , which actually also represents the average exchange between objects within clusters.
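The symmetric and skew-symmetric components referred to throughout this section come from the standard decomposition of a square matrix [22]; a minimal numpy sketch for the three-way data array:

```python
import numpy as np

def decompose(X):
    """Split each occasion's matrix X_h into its symmetric part S_h (average
    amounts of exchange) and skew-symmetric part K_h (imbalances):
    S_h = (X_h + X_h') / 2 and K_h = (X_h - X_h') / 2, so that X_h = S_h + K_h.

    X is the (H, N, N) three-way data array.
    """
    Xt = X.transpose(0, 2, 1)  # transpose each occasion's matrix
    return (X + Xt) / 2, (X - Xt) / 2
```

The complete partition is fitted to the symmetric components S_h, while the incomplete partition is fitted to the skew-symmetric components K_h.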

4. Illustrative Example: Results

In order to detect the underlying clustering structure common to all occasions and identify “origin” and “destination” clusters, the model proposed here was fitted to the artificial data of Figure 1, together with the Generalized INDCLUS model [20] for comparison.
Let us briefly recall the generalization of the INDCLUS model [20] to fit three-way asymmetric similarity data Y h ( h = 1 , , H ). The model assumes that there exist two sets of J overlapping clusters (i.e., two coverings) of the same set of N objects, one for the rows and one for the columns, common to all occasions, namely, P = [ p i j ] and Q = [ q i j ] , where p i j and q i j assume values in { 0 , 1 } (for i = 1 , , N and j = 1 , , J ); i.e., each object is allowed to belong to more than one cluster or to none at all. In addition, a single set of weights, different for each occasion, is assumed for both sets of clusters. The model can be written as follows:
Y_h = P \, W_h \, Q' + c_h \, 1_N 1_N' + E_h^*, \qquad (h = 1, \dots, H),
where W h is the non-negative diagonal weight matrix of order J for the occasion h, c h is a real-valued additive constant for the occasion h, and E h * is the error term.
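For concreteness, the fitted similarities of model (21) for a single occasion can be sketched as follows (a hypothetical helper, not the authors' code; names and signature are illustrative):

```python
import numpy as np

def gindclus_fit(P, Q, W_h, c_h):
    """Model (21) reconstruction for one occasion:
    Y_hat_h = P W_h Q' + c_h 1_N 1_N'.

    P and Q are N x J binary membership matrices (row and column coverings),
    W_h is the J x J non-negative diagonal weight matrix of the occasion,
    and c_h is its real-valued additive constant.
    """
    N = P.shape[0]
    return P @ W_h @ Q.T + c_h * np.ones((N, N))
```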
Since model (21) fits similarities, the original artificial dissimilarities in [ 0 , 100 ] of Figure 1 were simply converted into similarities by taking Y h = 100 − X h ( h = 1 , 2 , 3 ).
The best solution in J = 3 clusters was retained over 100 random starts of the algorithm.
Neither of the two best coverings in three overlapping clusters obtained by the Generalized INDCLUS model (relative loss function equal to 0.0228) identifies the true generated partition of the objects. The common coverings P and Q for the sets of objects in the rows and in the columns, respectively, consist of three row clusters, C 1 I r = { a , b , c , d } , C 2 I r = { e , f } , and C 3 I r = { b , c , g , h , i } , and three column clusters, C 1 I c = { b , c , e , f } , C 2 I c = { a , b , c , d , e , f , g , h , i } , and C 3 I c = { a , b , c , d , g , h , i } . The first covering identifies row clusters of objects with similar outgoing exchanges to the column clusters, which contain groups of objects with similar incoming exchanges.
The estimated weights of the three clusters of both coverings in each occasion and the constants are reported in Table 1. The constants represent the average exchange between all objects within each occasion, regardless of the direction. Thus, the second occasion has the highest average level of exchange ( c 2 = 59.7 ) and the first occasion the lowest ( c 1 = 49.6 ).
Table 1 displays the weights of the outgoing row clusters and incoming column clusters, where a large positive weight for any two corresponding clusters C j I r and C j I c ( j = 1 , 2 , 3 ) qualifies C j I r as the origin of the exchanges directed to the destination C j I c . Therefore, in the second occasion, C 1 I r = { a , b , c , d } is an origin cluster to C 1 I c = { b , c , e , f } . In addition, it can be noted that, in occasion 2, Generalized INDCLUS fails to identify the true behavior of objects { g , h , i } in C 3 as destination from C 1 = { a , b , c , d } and origin to C 2 = { e , f } (Figure 1). Instead, objects { g , h , i } are estimated here to have mutual exchanges with all other objects due to their membership in several column clusters.
Moreover, in the first and third occasions, the clusters C 2 I r = { e , f } and C 3 I r = { b , c , g , h , i } are origins toward C 2 I c , which contains all objects, and C 3 I c = { a , b , c , d , g , h , i } , respectively.
We can observe that, in the special case when the clusters from Generalized INDCLUS form a partition (they do not overlap), model (21) reduces to estimating only the exchanges from any row cluster C j I r to its corresponding column cluster C j I c . Conversely, in the presence of overlapping clusters, objects common to different clusters also contribute to the estimation of the exchange between different clusters. That is why, in Table 1, due to the large cluster overlap, it is very cumbersome to extract the directions of the exchange from any row cluster C j I r to any other column cluster C j' I c ( j' ≠ j ): it becomes necessary to look at the whole full-size ( N × N ) estimated matrices and analyze the estimated exchanges between pairs of objects.
For the sake of clarity, we observe that, in this well-structured artificial situation, the lowest weights are exactly zero, which is generally not the case.
Model (4) proposed here was fitted to the artificial data of Figure 1, and the best solution in J = 3 clusters was retained over 100 random starts of the algorithm.
The best resulting partition (relative loss equal to 0.0026) correctly identifies the complete partition from the symmetric components consisting of clusters C 1 = { a , b , c , d } , C 2 = { e , f } , C 3 = { g , h , i } , and the incomplete partition formed by clusters G 1 = { a , d } , G 2 = { e , f } , G 3 = { i } , which are nested in the corresponding clusters of the former.
The objects b, c, g, and h remain unassigned to any cluster in the incomplete partition from the skew-symmetric components due to their almost zero mutual imbalances and almost symmetric exchanges at each occasion.
Table 2 reports the estimates of the weight vectors r ^ and t ^ and the constant b ^ . In addition, for each occasion h, the estimated between-cluster components, both the symmetric ( S ^ h ) from the complete partition and the skew-symmetric ( K ^ h ) from the incomplete partition, are shown in Figure 3.
The estimated constants b ^ h (Table 2) represent the baseline average exchange level at each occasion, regardless of any clustering, and take on different values in the occasions: the highest ( b ^ 1 = 31.8 ) in the first, while the third has the lowest ( b ^ 3 = 19.7 ).
Given the occasion h, the weight r ^ j h of the cluster C j ( j = 1 , 2 , 3 ) represents the average exchange between the cluster C j and every other cluster, in addition to the additive constant. Thus, from the occasion-specific weights r ^ h ( h = 1 , 2 , 3 ) in Table 2, we observe that, in occasion 1, C 1 subsumes the highest average exchange with the other two clusters ( r ^ 11 = 8.8 ), while in occasion 2, the highest average exchange concerns C 3 ( r ^ 32 = 9.7 ). In contrast, in occasion 3, cluster C 3 has the lowest weight ( r ^ 33 = 1.6 ), while C 2 has the highest ( r ^ 23 = 14.5 ). This is consistent with what can be seen from the data in Figure 2.
In addition, the different role of each cluster can be identified from the occasion-specific weights t ^ h ( h = 1 , 2 , 3 ) of the clusters of the incomplete partition, which allow for qualifying origins and destinations. Since each t ^ j h represents the weighted average imbalance at the occasion h from the cluster G j to all other clusters, in Table 2, the negative weight of G 1 in the second occasion ( t ^ 12 = 13.2 ) qualifies G 1 as an origin cluster, while it acts as a destination cluster in occasions 1 and 3, where its corresponding weights are both positive ( t ^ 11 = 16.2 and t ^ 13 = 14.1 , respectively). The reverse is true for both clusters G 2 and G 3 .
As an example, we can see from Figure 3 that the estimated symmetric part of the exchange between the clusters C 1 and C 2 in occasion 1 is 34.6, which is the sum of the estimated weights of C 1 and C 2 ( r ^ 11 = 8.8 and r ^ 21 = −6.0 , respectively) plus the constant value ( b ^ 1 = 31.8 ), i.e., ( s ^ 121 + b ^ 1 ) = 8.8 − 6.0 + 31.8 = 34.6 .
Moreover, the corresponding estimated imbalance between G 1 and G 2 yields k ^ 121 = 31.5 , which is the difference of the estimated weights of the clusters G 1 and G 2 ( t ^ 11 = 16.2 and t ^ 21 = −15.3 , respectively), i.e., k ^ 121 = 16.2 − (−15.3) = 31.5 .
Finally, the estimated between-cluster dissimilarity between C 1 and C 2 in the first occasion is the sum of ( s ^ 121 + b ^ 1 ) and k ^ 121 , hence x ^ 121 = 66.1 .
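The arithmetic of this worked example can be checked directly (values from Table 2; the signs follow the displayed computations, with r ^ 21 = −6.0 and t ^ 21 = −15.3):

```python
# Occasion 1 estimates for clusters C1/C2 and G1/G2 (values from Table 2)
r_11, r_21, b_1 = 8.8, -6.0, 31.8
t_11, t_21 = 16.2, -15.3

s_plus_b = r_11 + r_21 + b_1  # symmetric between-cluster part plus constant
k_121 = t_11 - t_21           # imbalance: difference of incomplete-partition weights
x_121 = s_plus_b + k_121      # fitted between-cluster dissimilarity

print(round(s_plus_b, 1), round(k_121, 1), round(x_121, 1))  # 34.6 31.5 66.1
```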
Remark 1. 
From Figure 3, we can easily derive the summary of what emerges from the detailed results, since the full-size exchange matrices between objects (Figure 2) are synthesized into reduced-size exchange matrices between clusters with a very limited loss of information. This makes it possible to show the role played by each cluster of objects as an origin or destination of exchange, exactly reflecting the original data in Figure 1.

5. Application to Student Mobility Data

The data analyzed in this application have been taken from the OECD (Organization for Economic Co-operation and Development) Education Statistics Database, which collects data annually on international student exchanges in tertiary education.
The three-way data array consists of H = 5 asymmetric matrices of exchanges of international student mobility observed in N = 20 countries (objects) from 2016 to 2020 ( H = 5 occasions). The data report the number of international tertiary students who received their prior education in the origin country (the rows of each year matrix) enrolled in the host country (the columns of each year matrix). The countries are the twenty founding members of the OECD: Austria (AT), Belgium (BE), Canada (CA), Denmark (DK), France (FR), Germany (DE), Greece (EL), Iceland (IS), Ireland (IE), Italy (IT), Luxembourg (LU), Netherlands (NL), Norway (NO), Portugal (PT), Spain (ES), Sweden (SE), Switzerland (CH), Turkey (TR), United Kingdom (UK), and United States of America (US).
Between 2016 and 2020, the total population of mobile international students in the twenty founding members of the OECD experienced a remarkable growth of 16.8%. Specifically, the number jumped from 555,920 students in 2016 to 649,376 students in 2020, reflecting a significant increase in the number of tertiary students enrolled abroad. As highlighted in [25], the COVID-19 pandemic had a very uneven impact on international student exchanges across countries during 2019–2020. Nevertheless, the percentage of tertiary students enrolled abroad increased in several of the twenty OECD countries and remained unchanged in many others. Overall, there were about three international students for every national student studying abroad in OECD countries in 2016–2020, but this ratio is equal to or greater than ten in the United Kingdom and the United States [25,26]. In contrast, Luxembourg is among the twenty OECD founder countries with the lowest ratio of international students to national students abroad [25].
The aim here is to explore both the magnitude and the direction of the international mobility of the tertiary students to identify clusters of countries that are the main origins or destinations of the student exchanges across 5 years (2016–2020).
First, the data were transformed by calculating for each country the percentage share of outgoing mobile students moving to each of the other countries for tertiary education.
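This preprocessing step can be sketched as follows (a minimal, assumed implementation: the raw flow matrix is taken to have origins in rows, destinations in columns, and a zero diagonal, since within-country mobility is not recorded):

```python
import numpy as np

def outgoing_shares(F):
    """Convert a raw flow matrix into percentage shares: each row gives the
    percentage of a country's outgoing students moving to each destination."""
    F = F.astype(float)
    totals = F.sum(axis=1, keepdims=True)  # total outgoing students per origin
    # guard against all-zero rows; rows with positive totals sum to 100
    return 100 * F / np.where(totals == 0, 1, totals)
```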

5.1. Mobility Data: Results from Generalized INDCLUS

The Generalized INDCLUS model (21) was first fitted directly to the original similarity data.
The algorithm was run by varying the number of clusters J from 2 to 7, and the best solution in 100 runs from different random starting partitions was retained to reduce the risk of falling into local minima.
The choice of the optimal number of clusters was determined by looking at the decrease in the relative loss function as J increases. From the scree plot (Figure 4a), the partition in J = 5 clusters was chosen as the best solution for Generalized INDCLUS.
Table 3 shows the row and column clusters, while the occasion-specific weights are reported in Table 4.
All countries belong to at least one cluster of the row covering, but while Iceland and the United Kingdom belong to only one cluster ( C 2 I r and C 3 I r , respectively), Austria, Canada, Ireland, Switzerland, and the United States belong to three clusters. The strong overlap of the row clusters indicates a very heterogeneous situation of student mobility with, in principle, many countries from which students leave.
On the other hand, only nine countries belong to at least one column cluster, while the remaining countries are not assigned to any cluster; i.e., they correspond to zero row profiles in the membership matrix Q . This reveals a greater concentration of incoming flows in a few countries than that of outgoing flows from the countries of origin.
The cluster weights of the two coverings (Table 4) are quite similar over time, indicating that the mobility pattern is quite stable over the years considered.
Every year, student mobility is directed from almost all European countries ( C 1 I r and C 2 I r ) to two main European destinations (Germany and UK), as well as to the US from Canada and Northern Europe.
From the solution, we can say, for example, that Norway and Sweden, together in cluster C 2 I r and separately in clusters C 1 I r and C 3 I r , respectively, have outgoing flows mainly to Denmark, UK, and the US. Swedish students also move to France and the Netherlands, while Norwegian students move to Germany. Swedish and Norwegian mobility to all other countries turns out to be reciprocal to the same extent.

5.2. Mobility Data: Results from the Proposed Model

In order to fit our proposed model that fits dissimilarity data, the proportions of outgoing students in each year were converted to pairwise dissimilarities by taking their complements to 100.
As before, the algorithm was run with J varying from 2 to 7, and the best solution in 100 runs from different random starting partitions was retained. The scree plot of the loss values in Figure 4b shows an elbow at the partition with J = 6 clusters selected for analysis.
The complete partition and the incomplete partition estimated from the symmetric and skew-symmetric components of the exchanges, respectively, are reported in Table 5, where countries not assigned to the corresponding clusters of the incomplete partition are indicated in italics.
The six clusters are shown in Figure 5, where different colors indicate different clusters and the unassigned countries of the incomplete partition are shown in light colors on the color scale.
The clusters C 1 , C 2 , and C 6 coincide with G 1 , G 2 , and G 6 , respectively, while six countries (Austria, Belgium, Denmark, France, Netherlands, and Spain) do not belong to any cluster of the incomplete partition because their mutual mobility is basically symmetrical.
The two nested partitions highlight the strength of proximity factors, such as language, historical ties, geographical distance, bilateral relationships, and political framework conditions (e.g., the European Higher Education Area) as key determinants for international student mobility. As an example, the Scandinavian countries in cluster C 3 share the same patterns of international student mobility directed to the other OECD countries.
Austria and Belgium, which belong to different clusters ( C 3 and C 4 , respectively) but remain unassigned in the incomplete partition, show small mutual mobility exchanges over time (0.7195% in 2016 and 0.8485% in 2020) with minimal imbalances (0.3077% in 2016 and 0.5077% in 2020).
The constants b ^ h ( h = 1 , , 5 ) increase slightly over the years (Table 6 and Figure 6): this means that since the model has been fitted to the complements to 100 of the share of outgoing students in each year, the baseline average mutual exchange between countries ( 100 b ^ h , h = 1 , , 5 ) shows a slight decrease over the years. This also implies that the average mutual mobility within clusters tends to decrease slightly over time, as the baseline fits the within-cluster mobility in each year.
In addition to the baseline exchange ( b ^ h ), the estimated average annual mobility between clusters of countries depends on the weights r ^ h (Table 6). The average mobility has remained relatively stable over time, with very small increases (i.e., very small decreases in the weights r ^ j h ( h = 1 , , 5 ), taking into account the transformation of the percentages to obtain dissimilarities) in almost all clusters until 2019. Fluctuations can be observed for some clusters: in 2019, a slight increase in the average mobility was observed for the United Kingdom ( r ^ 6 , 2019 = 10.892 vs. r ^ 6 , 2018 = 10.192 ), in contrast with a slight decrease for the cluster C 1 with Germany and the United States ( r ^ 1 , 2019 = 5.473 vs. r ^ 1 , 2018 = 6.232 ). The situation is reversed in 2020, probably due to the impact of Brexit.
Each year, the ranking of the clusters based on the increasing weights r ^ h consistently ranks the United Kingdom ( C 6 ) with the highest average share of reciprocal mobility (regardless of direction), followed by Germany and the United States ( C 1 ), while Scandinavian countries ( C 3 ) have the lowest, as can be seen in Figure 7.
As for the directions of the mobility, the analysis of the cluster weights of the incomplete partition (Table 7) reveals a consistent pattern over the years: throughout the 2016–2020 period, the mobility originates from countries in the clusters G 2 , G 3 , G 4 , and G 5 (the weights t ^ j h assigned to these clusters are always negative over time) mainly directed to Germany and the United States ( G 1 ), as well as to the United Kingdom ( G 6 ).
In all years, the cluster ranking based on the increasing weights t ^ h places Greece, Ireland, and Turkey ( G 2 ), followed by Luxembourg and Portugal ( G 4 ), as the main countries of origin of the international students. As can be seen from the direction of the arrow in Figure 8, the cluster G 5 , which includes Canada, Italy, and Switzerland, is an origin cluster of mobile students toward G 1 and G 6 and, to a greater extent, a destination cluster from G 2 , G 3 , and G 4 . Similarly, the Scandinavian students ( G 3 ) move to Germany, the US, and the UK ( G 1 and G 6 ) and to G 5 , while the Scandinavian countries have incoming mobility from G 2 . This differs in part from the results of Generalized INDCLUS, which does not capture the imbalance of the international mobility from Greece, Ireland, and Turkey directed to the Scandinavian countries.
The countries in the two clusters C 2 and C 4 have comparable average mobility over time (the green ( C 2 ) and red ( C 4 ) lines overlap in Figure 7), but with different levels of imbalances: in fact, the student outflows from Greece, Ireland, and Turkey ( G 2 ) are always larger than those from Luxembourg and Portugal ( G 4 ), as can be seen in Figure 8.
English is the most widely spoken language in the globalized world, with one in four people worldwide using it [27]. Not surprisingly, English-speaking countries are always the most attractive study destinations overall: the UK is the top destination in Europe; the United States and Germany are destinations for international students from Europe and Canada and are the source of student mobility to the United Kingdom. Among the top 3 destinations, Germany is the major recipient country in the European Union.
A few countries (clusters G 1 and G 6 ) are net “importers” of students; that is, they have more students coming in to study than those leaving to study abroad. In contrast, some clusters of countries, such as G 2 and G 4 , are net “exporters” of students.
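The net importer/exporter status described here can be computed directly from a raw flow matrix, under the same convention as above (rows = origins, columns = destinations); a sketch:

```python
import numpy as np

def net_balance(F):
    """Incoming minus outgoing students per country: positive values mark
    net 'importers' of students, negative values mark net 'exporters'."""
    return F.sum(axis=0) - F.sum(axis=1)  # column sums minus row sums
```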

6. Results and Discussion

In a complex situation where the available data represent asymmetric proximities between pairs of objects measured or observed in different occasions, the proposed model aims to unveil a common clustering structure that accounts for the systematic differences between occasions.
The model proposed here proved to be effective in identifying clusters of objects that share a common pattern of exchange across different occasions, even with a different role as origin or destination. The artificial data and the real application have shown as main results (1) the capability of the model to identify clusters of objects with similar exchange behavior in magnitude and direction (origin/destination clusters) and (2) the flexibility to capture possible differences in the directions of such cluster exchanges across occasions, compared with the Generalized INDCLUS model.
Actually, the goal of Generalized INDCLUS is not to summarize the average exchanges of each cluster and, compared with our model, deriving the behavior of each cluster from its solutions is much more complicated because of the overlap between clusters, as the simple and small illustrative example also shows. In the special case where the clusters from Generalized INDCLUS form a partition (no overlap), as in our model, it actually accounts for the variability within clusters but cannot capture the exchanges between clusters, which is the very goal of our proposal. However, as shown in Section 4 and Section 5, in the general case the clusters resulting from Generalized INDCLUS have (often many) objects in common, which, on the one hand, guarantees flexibility and the possibility of estimating the exchange between pairs of objects belonging to different clusters. On the other hand, this comes at the expense of ease of interpretation and synthesis, since the behavior of objects belonging to the same cluster is not the same regardless of the destination cluster.
Thus, within the scarcity of methods for clustering three-way asymmetric proximity data, our proposal is effective, more parsimonious, and promising when the aim is to obtain a synthesis of the average behavior of exchange between clusters across occasions (which could be useful for policy makers, for example). If, instead, the estimation of any pairwise exchange is the main concern, Generalized INDCLUS might be more flexible in general due to the possible overlap.
Further developments may consider the possibility of assuming a fuzzy clustering, where, instead of a crisp membership of an object to each cluster ( u i j in { 0 , 1 } ), a membership degree in [ 0 , 1 ] is allowed. In addition, the inclusion of possible covariates in the model, if available, could also be taken into consideration to better investigate the determinants of the exchanges.

Author Contributions

Conceptualization, L.B. and D.V.; methodology, L.B. and D.V.; software, D.V.; validation, L.B. and D.V.; formal analysis, L.B.; investigation, L.B.; resources, D.V.; data curation, D.V.; writing—original draft preparation, L.B. and D.V.; visualization, L.B.; supervision, D.V.; project administration, D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article, with the exception of the data analyzed in Section 5, which are available on the OECD Education Statistics Database, at https://stats.oecd.org/Index.aspx?DataSetCode=EDU_ENRL_MOBILE (accessed on 1 February 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Saito, T.; Yadohisa, H. Data Analysis of Asymmetric Structures. Advanced Approaches in Computational Statistics; Marcel Dekker: New York, NY, USA, 2005. [Google Scholar]
  2. Bove, G.; Okada, A. Methods for the analysis of asymmetric pairwise relationships. Adv. Data Anal. Classif. 2018, 12, 5–31. [Google Scholar] [CrossRef]
  3. Bove, G.; Okada, A.; Vicari, D. Methods for the Analysis of Asymmetric Proximity Data; Springer Nature Singapore: Singapore, 2021. [Google Scholar]
  4. Furnas, G.W. Objects and Their Features: The Metric Representation of Two Class Data. Unpublished Doctoral Dissertation, Stanford University, Stanford, CA, USA, 1980. [Google Scholar]
  5. DeSarbo, W.S.; De Soete, G. On the Use of Hierarchical Clustering for the Analysis of Nonsymmetric Proximities. J. Consum. Res. 1984, 11, 601–610. [Google Scholar] [CrossRef]
  6. DeSarbo, W.S.; Manrai, A.K.; Burke, R.R. A Nonspatial Methodology for the Analysis of Two-Way Proximity Data Incorporating the Distance–Density Hypothesis. Psychometrika 1990, 55, 229–253. [Google Scholar] [CrossRef]
  7. De Soete, G.; DeSarbo, W.S.; Furnas, G.W.; Carroll, J.D. The Estimation of Ultrametric and Path Length Trees from Rectangular Proximity Data. Psychometrika 1984, 49, 289–310. [Google Scholar] [CrossRef]
  8. DeSarbo, W.S. GENNCLUS: New models for general nonmetric clustering analysis. Psychometrika 1982, 47, 449–475. [Google Scholar] [CrossRef]
  9. Hubert, L. Min and max hierarchical clustering using asymmetric similarity measures. Psychometrika 1973, 38, 63–72. [Google Scholar] [CrossRef]
  10. Fujiwara, H. Methods for Cluster Analysis Using Asymmetric Measures and Homogeneity Coefficient. Jpn. J. Behav. 1980, 7, 12–21. [Google Scholar]
  11. Yadohisa, H. Formulation of Asymmetric Agglomerative Hierarchical Clustering and Graphical Representation of Its Result. Bull. Comput. Stat. Jpn. 2002, 15, 309–316. [Google Scholar]
  12. Takeuchi, A.; Saito, T.; Yadohisa, H. Asymmetric agglomerative hierarchical clustering algorithms and their evaluations. J. Classif. 2007, 24, 123–143. [Google Scholar] [CrossRef]
  13. Olszewski, D. Asymmetric k-means algorithm. In Lecture Notes in Computer Science, Part II, Proceedings of the International Conference on Adaptive and Natural Computing Algorithm (ICANNGA 2011), Ljubljana, Slovenia, 14–16 April 2011; Dovnikar, A., Lotric, U., Ster, B., Eds.; Springer: Heidelberg, Germany, 2011; Volume 6594, pp. 1–10. [Google Scholar]
  14. Olszewski, D. K-means clustering of asymmetric data. In Hybrid Artificial Intelligent Systems, Proceedings of the HAIS 2012, Salamanca, Spain, 28–30 March 2012, Lecture Notes in Computer Science; Corchado, E., Snášel, V., Abraham, A., Woźniak, M., Graña, M., Cho, S.B., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7208, pp. 243–254. [Google Scholar]
  15. Olszewski, D.; Ster, B. Asymmetric clustering using the Alpha-Beta divergence. Pattern Recognit. 2014, 47, 2031–2041. [Google Scholar] [CrossRef]
  16. Vicari, D. Classification of asymmetric proximity data. J. Classif. 2014, 31, 386–420. [Google Scholar] [CrossRef]
  17. Vicari, D. Modeling Asymmetric Exchanges Between Clusters. In Advanced Studies in Behaviormetrics and Data Science; Imaizumi, T., Nakayama, A., Yokoyama, S., Eds.; Springer Nature Singapore: Singapore, 2020; pp. 297–313. [Google Scholar]
  18. Vicari, D. CLUSKEXT: CLUstering model for SKew-symmetric data including EXTernal information. Adv. Data Anal. Classif. 2018, 12, 43–64. [Google Scholar] [CrossRef]
  19. Vicari, D.; Di Nuzzo, C. A between-cluster approach for clustering skew-symmetric data. Adv. Data Anal. Classif. 2024, 12, 163–192. [Google Scholar] [CrossRef]
  20. Chaturvedi, A.; Carroll, J.D. An alternating combinatorial optimization approach to fitting the INDCLUS and Generalized INDCLUS models. J. Classif. 1994, 11, 155–170. [Google Scholar] [CrossRef]
  21. Carroll, J.D.; Arabie, P. INDCLUS: An individual differences generalization of ADCLUS model and the MAPCLUS algorithm. Psychometrika 1983, 48, 157–169. [Google Scholar] [CrossRef]
  22. Gower, J.C. The analysis of asymmetry and orthogonality. In Recent Developments in Statistics; Van Cutsem, B., Barra, J.R., Brodeau, F., Romier, G., Eds.; North Holland: Amsterdam, The Netherlands, 1977; pp. 109–123. [Google Scholar]
  23. McDonald, R.P. A simple comprehensive model for the analysis of covariance structures: Some remarks on applications. Br. J. Math. Stat. Psychol. 1980, 33, 161–183. [Google Scholar] [CrossRef]
  24. Rao, C.R.; Mitra, S. Generalized Inverse of Matrices and Its Applications; Wiley: New York, NY, USA, 1971. [Google Scholar]
  25. OECD. Education at a Glance 2023: OECD Indicators; OECD Publishing: Paris, France, 2023. [Google Scholar]
  26. OECD. Education at a Glance 2019: OECD Indicators; OECD Publishing: Paris, France, 2019. [Google Scholar]
  27. Sharifian, F. Globalisation and developing metacultural competence in learning English as an international language. Multiling. Educ. 2013, 3, 7. [Google Scholar] [CrossRef]
Figure 1. Artificial three-way dissimilarity data. Darker shades of blue represent higher magnitudes.
Figure 2. Artificial three-way dissimilarity data: symmetric and skew-symmetric components. Darker shades of blue represent higher magnitudes.
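Each asymmetric slice splits uniquely into the symmetric and skew-symmetric components displayed in Figure 2: X_h = S_h + K_h, with S_h = (X_h + X_hᵀ)/2 capturing the average amount of exchange and K_h = (X_h − X_hᵀ)/2 the imbalance. A minimal sketch of this decomposition (the matrix below is illustrative, not the paper's artificial data):

```python
import numpy as np

# Illustrative 3x3 asymmetric proximity matrix (hypothetical values).
X = np.array([[0.0, 7.0, 2.0],
              [3.0, 0.0, 6.0],
              [4.0, 1.0, 0.0]])

S = (X + X.T) / 2.0   # symmetric component: average two-way exchange
K = (X - X.T) / 2.0   # skew-symmetric component: imbalance (origin vs. destination)

assert np.allclose(S, S.T)      # S is symmetric
assert np.allclose(K, -K.T)     # K is skew-symmetric
assert np.allclose(S + K, X)    # the decomposition is exact
```

A positive entry K[i, j] indicates that object i sends more to j than it receives, i.e., i acts as an origin and j as a destination for that pair.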
Figure 3. Artificial three-way dissimilarity data: estimated symmetric and skew-symmetric components from the proposed model.
Figure 4. Mobility data: scree plot of the loss values of (a) Generalized INDCLUS and (b) the proposed model.
Figure 5. Mobility data: complete and incomplete partition from the proposed model. Different colors indicate different clusters: blue = C 1 , green = C 2 , violet = C 3 , red = C 4 , sky blue = C 5 , yellow = C 6 , and the unassigned countries of the incomplete partition are shown in light colors on the color scale.
Figure 6. Mobility data: year-specific constants from the proposed model.
Figure 7. Mobility data: year-specific weights of the complete partition from the proposed model.
Figure 8. Mobility data: year-specific weights of the incomplete partition from the proposed model (arrow indicates the direction of mobility).
Table 1. Artificial data: occasion-specific weights and constants from Generalized INDCLUS.
             Cluster                             Constant
Row          C1^Ir     C2^Ir     C3^Ir
Column       C1^Ic     C2^Ic     C3^Ic
             w_1h      w_2h      w_3h            c_h
Occasion 1   0         38.0      18.5            49.6
Occasion 2   25.1      0         0               59.7
Occasion 3   0         22.8      26.8            55.7
Table 2. Artificial data: occasion-specific weights from the proposed model.
             Complete Partition          Incomplete Partition          Constant
             C1       C2       C3       G1        G2        G3
             r̂_1h     r̂_2h     r̂_3h     t̂_1h      t̂_2h      t̂_3h      b̂_h
Occasion 1   8.8      −6.0     5.8      16.2      −15.3     −1.9      31.8
Occasion 2   6.9      −4.4     9.7      −13.2     12.3      1.8       28.4
Occasion 3   6.4      14.5     −1.6     14.1      −12.5     −3.4      19.7
Table 3. Mobility data: row and column clusters from Generalized INDCLUS.
Row cluster C1^Ir: Austria, Belgium, Denmark, Greece, Ireland, Italy, Luxembourg, Norway, Portugal, Spain, Switzerland, United States
Column cluster C1^Ic: Germany, United Kingdom

Row cluster C2^Ir: Canada, Denmark, Iceland, Ireland, Norway, Sweden
Column cluster C2^Ic: Denmark, United Kingdom, United States

Row cluster C3^Ir: Belgium, Canada, France, Germany, Greece, Ireland, Italy, Netherlands, Portugal, Spain, Sweden, Switzerland, Turkey, United Kingdom, United States
Column cluster C3^Ic: France, Netherlands, United Kingdom, United States

Row cluster C4^Ir: Austria, France, Germany, Luxembourg, Netherlands, Switzerland
Column cluster C4^Ic: Austria, Belgium, Germany, Switzerland

Row cluster C5^Ir: Austria, Canada, Turkey, United States
Column cluster C5^Ic: Germany, United States
Table 4. Mobility data: occasion-specific weights and constants from Generalized INDCLUS.
         Cluster                                              Constant
Row      C1^Ir     C2^Ir     C3^Ir     C4^Ir     C5^Ir
Column   C1^Ic     C2^Ic     C3^Ic     C4^Ic     C5^Ic
Year     w_1h      w_2h      w_3h      w_4h      w_5h         c_h
2016     15.398    17.778    9.298     9.852     20.682       1.145
2017     15.145    17.496    9.357     9.689     20.175       1.185
2018     16.961    16.366    8.234     9.545     21.392       1.268
2019     18.168    15.490    7.929     9.836     21.011       1.268
2020     17.361    15.808    8.010     9.859     21.213       1.288
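The weights and constants above combine with the cluster memberships to give the fitted proximities. Assuming the standard Generalized INDCLUS form X̂_h = U diag(w_h) Vᵀ + c_h, where U and V are binary row- and column-cluster membership matrices (possibly overlapping, as in Table 3), a sketch with hypothetical toy memberships and weights, not the fitted values from Tables 3 and 4:

```python
import numpy as np

# Hypothetical memberships for 4 objects in 2 row and 2 column clusters;
# rows of U (or V) may contain several 1s, since INDCLUS clusters can overlap.
U = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]])          # row-cluster (origin) memberships
V = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, 1]])          # column-cluster (destination) memberships
w_h = np.array([3.0, 5.0])      # occasion-specific cluster weights
c_h = 1.5                       # occasion-specific additive constant

# Fitted proximity for occasion h: X_hat[i, j] = sum_k w_h[k] * U[i, k] * V[j, k] + c_h
X_hat = U @ np.diag(w_h) @ V.T + c_h
```

Here X_hat[i, j] exceeds the constant c_h only when objects i and j share a (row, column) cluster pair, which is how the occasion-specific weights scale the exchange between paired clusters.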
Table 5. Mobility data: complete and incomplete partition from the proposed model. Countries of a complete-partition cluster that do not appear in the corresponding incomplete-partition cluster are unassigned.
Complete Partition                                              Incomplete Partition
C1: Germany, United States                                      G1: Germany, United States
C2: Greece, Ireland, Turkey                                     G2: Greece, Ireland, Turkey
C3: Denmark, Iceland, Norway, Sweden                            G3: Iceland, Norway, Sweden
C4: Belgium, France, Luxembourg, Netherlands, Portugal, Spain   G4: Luxembourg, Portugal
C5: Austria, Canada, Italy, Switzerland                         G5: Canada, Italy, Switzerland
C6: United Kingdom                                              G6: United Kingdom
Table 6. Mobility data: year-specific weights for the complete partition and constants from the proposed model.
        Complete Partition                                       Constant
        C1        C2       C3       C4       C5       C6
Year    r̂_1h      r̂_2h     r̂_3h     r̂_4h     r̂_5h     r̂_6h      b̂_h
2016    −5.653    2.757    3.676    2.913    1.273    −10.221   93.135
2017    −5.483    2.684    3.655    2.818    1.267    −10.069   93.158
2018    −6.232    2.496    3.519    2.602    1.074    −10.192   93.570
2019    −5.473    2.447    3.478    2.518    0.987    −10.892   93.589
2020    −6.122    2.461    3.534    2.453    1.174    −10.087   93.575
Table 7. Mobility data: year-specific weights of the incomplete partition from the proposed model.
        Incomplete Partition
        G1        G2        G3        G4        G5        G6
Year    t̂_1h      t̂_2h      t̂_3h      t̂_4h      t̂_5h      t̂_6h
2016    5.158     −2.568    −1.772    −1.836    −1.223    10.048
2017    5.051     −2.469    −1.770    −1.842    −1.211    9.932
2018    5.467     −2.497    −1.743    −2.000    −1.308    9.711
2019    4.814     −2.413    −1.738    −2.009    −1.208    10.466
2020    5.470     −2.403    −1.755    −2.087    −1.377    9.839
