Article

An Efficient Algorithm for Convex Biclustering

Graduate School of Engineering Science, Osaka University, Osaka 560-0043, Japan
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(23), 3021; https://doi.org/10.3390/math9233021
Submission received: 18 October 2021 / Revised: 15 November 2021 / Accepted: 22 November 2021 / Published: 25 November 2021
(This article belongs to the Special Issue Mathematics, Statistics and Applied Computational Methods)

Abstract
We consider biclustering, which clusters both samples and features, and propose efficient convex biclustering procedures. The convex biclustering algorithm (COBRA) procedure solves the standard convex clustering problem, which contains a non-differentiable function optimization, twice. We instead convert the original optimization problem into a differentiable one and improve another approach based on the augmented Lagrangian method (ALM). Our proposed method combines the basic procedures in the ALM with the accelerated gradient descent method (Nesterov's accelerated gradient method), which attains an $O(1/k^2)$ convergence rate. It only uses first-order gradient information, and its efficiency is not strongly affected by the tuning parameter λ. This advantage allows users to quickly iterate over various tuning parameters λ and explore the resulting changes in the biclustering solutions. The numerical experiments demonstrate that our proposed method has high accuracy and is much faster than the currently known algorithms, even for large-scale problems.

1. Introduction

By clustering, such as k-means clustering [1] and hierarchical clustering [2,3], we usually mean dividing $N$ samples, each consisting of $p$ covariate values, into several categories, where $N, p \ge 1$.
In this paper, we consider biclustering [4], which is an extended notion of clustering. In biclustering, we divide both $\{1, \ldots, N\}$ and $\{1, \ldots, p\}$ simultaneously based on the data. If we are given a data matrix in $\mathbb{R}^{N \times p}$, then the rows and columns within a shared group exhibit similar characteristics. For example, given a gene expression data matrix with genes as columns and samples as rows, biclustering detects submatrices that represent the cooperative behavior of a group of genes corresponding to a group of samples [5]. Figure 1 illustrates the intuitive difference between standard clustering and biclustering. In recent years, biclustering has become a ubiquitous data-mining technique with varied applications, such as text mining, recommendation systems, and bioinformatics. Comprehensive surveys of biclustering are given in [6,7,8].
However, as noted in [6], biclustering is an NP-hard problem. Thus, the results may vary significantly with different initializations. Moreover, some conventional biclustering models suffer from poor performance due to non-convexity and may return locally optimal solutions. In order to avoid such an inconvenience, Chi et al. [9] proposed convex biclustering by reformulating the problem as a convex formulation using the fused lasso [10] concept.
In convex clustering [9,11,12,13], given a data matrix $X \in \mathbb{R}^{N \times p}$ and $\lambda > 0$, we compute a matrix $U$ of the same size as $X$. Let $X_{i\cdot}$ and $U_{i\cdot}$ be the $i$-th rows of $X$ and $U$, let $X_{\cdot j}$ and $U_{\cdot j}$ be the $j$-th columns of $X$ and $U$, let $\|X - U\|_F^2$ denote the sum of squares of the $Np$ elements of $X - U$, and let $\|U_{i\cdot} - U_{j\cdot}\|_2$ and $\|U_{\cdot m} - U_{\cdot n}\|_2$ be the $\ell_2$ norms of the corresponding $p$- and $N$-dimensional difference vectors, respectively. Convex clustering finds $U \in \mathbb{R}^{N \times p}$ that minimizes a weighted sum of $\|X - U\|_F^2$ and $\{\lambda \|U_{i\cdot} - U_{j\cdot}\|_2\}_{i \neq j}$ (it may instead be formulated as minimizing a weighted sum of $\|X - U\|_F^2$ and $\{\lambda \|U_{\cdot m} - U_{\cdot n}\|_2\}_{m \neq n}$). If $X_{i\cdot}$ and $X_{j\cdot}$ share $U_{i\cdot} = U_{j\cdot}$, then they are in the same group w.r.t. $\{1, \ldots, N\}$. On the other hand, convex biclustering finds $U \in \mathbb{R}^{N \times p}$ that minimizes a weighted sum of $\|X - U\|_F^2$, $\{\lambda \|U_{i\cdot} - U_{j\cdot}\|_2\}_{i \neq j}$, and $\{\lambda \|U_{\cdot m} - U_{\cdot n}\|_2\}_{m \neq n}$.
Convex biclustering achieves checkerboard-like biclusters by penalizing both the rows and columns of $U$. When λ (the tuning parameter for $\{\|U_{i\cdot} - U_{j\cdot}\|_2\}_{i \neq j}$ and $\{\|U_{\cdot m} - U_{\cdot n}\|_2\}_{m \neq n}$) is zero, each $(i, j)$ occupies a unique bicluster $\{(i, j)\}$ and $x_{ij} = u_{ij}$ for $i = 1, \ldots, N$ and $j = 1, \ldots, p$, where $X = (x_{ij})$ and $U = (u_{ij})$. As λ increases, the biclusters begin to fuse. For sufficiently large λ, all $(i, j)$ merge into one single bicluster $\{(i, j) \mid i = 1, \ldots, N,\ j = 1, \ldots, p\}$. The convex formulation guarantees a globally optimal solution and demonstrates performance superior to competing approaches: Chi et al. [9] reported that convex biclustering performs better than the dynamic tree-cutting algorithm [14] and the sparse biclustering algorithm [15] in their experiments.
Nevertheless, despite these advantages, convex biclustering has not yet gained widespread popularity due to its intensive computation. On the one hand, the main challenge of solving the optimization problem is the two fused penalty terms, which are indecomposable and non-differentiable. These properties make the problem difficult to solve efficiently: many splitting methods for indecomposable problems are complicated and create many subproblems, and techniques such as the subgradient method for non-differentiable problems are slow to converge [16,17]. Moreover, it is difficult to find the optimal tuning parameter λ, because we need to solve the optimization problem for a sequence of parameters λ and select the one that meets the specific demands of researchers; hence, we need a fast way to solve the problems over the whole sequence of parameters λ. On the other hand, with the increasing demand for biclustering techniques, convex biclustering faces large-scale data as the volume and complexity of data grow. Above all, it is necessary to propose an efficient algorithm to solve the convex biclustering problem.
There are only a few algorithms for solving the problem in the literature. Chi et al. [9] proposed the convex biclustering algorithm (COBRA), which uses a Dykstra-like proximal algorithm [18] to solve the convex biclustering problem. Weylandt [19] proposed using the alternating direction method of multipliers (ADMM) [20,21] and its variant, the generalized ADMM [22], to solve the problem.
However, COBRA yields subproblems, including the standard convex clustering problem, which require expensive computations for large-scale problems due to the high per-iteration cost [23,24]. Essentially, COBRA is a splitting method that separately solves a composite optimization problem containing three terms. Additionally, it is sensitive to the tuning parameter λ. Therefore, obtaining the solutions under a wide range of parameters λ takes time, which is not feasible for broad applications and the different demands of users. ADMM generally solves the problem by breaking it into smaller pieces and updating the variables alternately, but it also introduces several subproblems that can be time consuming. More specifically, the ADMM proposed by Weylandt [19] requires solving a Sylvester equation in the step of updating the variable U; solving the Sylvester equation requires a Schur decomposition based on the numerical method of [25], which is complicated and time consuming. Additionally, it is known that ADMM exhibits $O(1/k)$ convergence in general, where $k$ is the number of iterations [26]. It often takes a long time to achieve relatively high precision [27], which is not feasible in applications requiring high accuracy. For example, gene expression data contain a huge amount of information (the feature dimension usually exceeds 1000), and COBRA and ADMM do not scale well for such large-scale problems. Overall, the above algorithms show weak performance, which motivates us to combine current algorithms to solve the convex biclustering problem efficiently, as in reference [28].
This paper proposes an efficient algorithm with simple subproblems and a fast convergence rate to solve the convex biclustering problem. Rather than updating each variable alternately as in ADMM, we use the augmented Lagrangian method (ALM) to update the primal variables simultaneously. In this way, we can transform the optimization problem into a differentiable one, solve it via an efficient gradient descent method, and further simplify the subproblems. Our proposed method is motivated by the work [29], in which the authors presented a way to convert the augmented Lagrangian function into a composite optimization problem that can be solved by the proximal gradient method [30]. Using this process twice to handle the two fused penalties $\{\lambda \|U_{i\cdot} - U_{j\cdot}\|_2\}_{i \neq j}$ and $\{\lambda \|U_{\cdot m} - U_{\cdot n}\|_2\}_{m \neq n}$, we obtain a differentiable problem from the augmented Lagrangian function. We then apply Nesterov's accelerated gradient method (NAGM) to solve the differentiable problem, which attains an $O(1/k^2)$ global convergence rate.
Our main contributions are as follows:
  • We propose an efficient algorithm to solve the convex biclustering model for large-scale $N$ and $p$. The algorithm is a first-order method with simple subproblems: it only requires matrix multiplications and simple proximal operators, while the ADMM approaches require matrix inversions.
  • Unlike the existing approaches, the computational cost of our proposed method does not grow substantially when the tuning parameter λ is large, which makes it easy to obtain biclustering results for several λ values.
The remainder of this paper is organized as follows. In Section 2, we provide some preliminaries used throughout the paper and introduce the convex biclustering problem. In Section 3, we present our proposed algorithm for solving the convex biclustering model. We then conduct numerical experiments to evaluate the performance of our algorithm in Section 4, and conclude with a discussion in Section 5.
Notation: In this paper, we use $\|x\|_p$ to denote the $\ell_p$ norm of a vector $x \in \mathbb{R}^d$, $\|x\|_p := \left( \sum_{i=1}^{d} |x_i|^p \right)^{1/p}$ for $p \in [1, \infty)$, and $\|x\|_\infty := \max_i |x_i|$. For a matrix $X \in \mathbb{R}^{p \times q}$, $\|X\|_F$ denotes the Frobenius norm, $\|X\|_2$ the spectral norm, and $\|X\|_1 := \sum_{i=1}^{p} \sum_{j=1}^{q} |x_{ij}|$ unless otherwise specified.

2. Preliminaries

In this section, we provide the background for understanding the proposed method in Section 3. In particular, we introduce the notions of ADMM, ALM, Nesterov's accelerated gradient method (NAGM), and convex biclustering.
We say that a differentiable $f : \mathbb{R}^n \to \mathbb{R}$ has a Lipschitz-continuous gradient if there exists $L > 0$ (the Lipschitz constant) such that
$$\|\nabla f(x) - \nabla f(y)\|_2 \le L \|x - y\|_2, \quad \forall x, y \in \mathbb{R}^n. \qquad (1)$$
We define the conjugate of a function $f : \mathbb{R}^n \to \mathbb{R}$ by
$$f^*(y) = \sup_{x \in \operatorname{dom} f} \left( y^T x - f(x) \right),$$
where $\operatorname{dom} f \subseteq \mathbb{R}^n$ is the domain of $f$; the conjugate $f^*$ is closed (the set $\{x \in \operatorname{dom}(f^*) \mid f^*(x) \le \alpha\}$ is closed for any $\alpha \in \mathbb{R}$) and convex. It is known that $(f^*)^* = f$ when $f$ is closed and convex, and that Moreau's decomposition [31] is available: for closed and convex $f : \mathbb{R}^n \to \mathbb{R}$, any $x \in \mathbb{R}^n$, and $\gamma > 0$, we have
$$\operatorname{prox}_{\gamma f}(x) + \gamma \operatorname{prox}_{\gamma^{-1} f^*}(\gamma^{-1} x) = x, \qquad (2)$$
where $\operatorname{prox}_f : \mathbb{R}^n \to \mathbb{R}^n$ is the proximal operator defined by
$$\operatorname{prox}_f(x) := \arg\min_{y \in \mathbb{R}^n} \left\{ \tfrac{1}{2} \|x - y\|_2^2 + f(y) \right\}. \qquad (3)$$
The relation (2) is derived from $(f^*)^* = f$ and the definition (3) [31].
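As a quick sanity check of the relation (2), the following Python snippet (our own illustration, assuming $f = \|\cdot\|_2$, whose proximal operator is block soft-thresholding and whose conjugate is the indicator of the unit $\ell_2$ ball) verifies Moreau's decomposition numerically.

```python
import numpy as np

def prox_l2norm(x, gamma):
    # Proximal operator of gamma * ||.||_2 (block soft-thresholding).
    nrm = np.linalg.norm(x)
    return np.zeros_like(x) if nrm <= gamma else (1.0 - gamma / nrm) * x

def project_unit_ball(z):
    # Proximal operator of the conjugate of ||.||_2, i.e., projection onto the unit l2 ball.
    nrm = np.linalg.norm(z)
    return z if nrm <= 1.0 else z / nrm

rng = np.random.default_rng(0)
x, gamma = rng.normal(size=5), 0.7
lhs = prox_l2norm(x, gamma) + gamma * project_unit_ball(x / gamma)
print(np.allclose(lhs, x))  # True: the decomposition (2) holds
```

The same identity is what later allows the dual updates of the proposed method to be written as ball projections.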

2.1. ADMM and ALM

In this subsection, we introduce the general optimization procedures of ADMM and ALM.
Let $f, g : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^p \to \mathbb{R}$ be convex. Assume that $f$ is differentiable, and that $g$ and $h$ are not necessarily differentiable. We consider the following optimization problem:
$$\min_{x, y} \; f(x) + h(y) + g(x) \quad \text{subject to} \quad A x = y, \qquad (4)$$
with variables $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^p$, and a matrix $A \in \mathbb{R}^{p \times n}$. To this end, we define the augmented Lagrangian function as
$$L_\nu(x, y, \lambda) := f(x) + g(x) + h(y) + \langle \lambda, A x - y \rangle + \frac{\nu}{2} \|A x - y\|_2^2, \qquad (5)$$
where $\nu > 0$ is an augmented Lagrangian parameter and $\lambda \in \mathbb{R}^p$ is the vector of Lagrangian multipliers.
ADMM is a general procedure to find the solution of the problem (4) by iterating
$$x^{k+1} := \arg\min_x L_\nu(x, y^k, \lambda^k), \qquad y^{k+1} := \arg\min_y L_\nu(x^{k+1}, y, \lambda^k), \qquad \lambda^{k+1} := \lambda^k + \nu \left( A x^{k+1} - y^{k+1} \right),$$
given the initial values $y^1$ and $\lambda^1$.
What we mean by the ALM [32,33,34] is to minimize the augmented Lagrangian function (5) w.r.t. the variables $x$ and $y$ simultaneously for a given $\lambda$, i.e., we iterate the following steps:
$$(x^{k+1}, y^{k+1}) := \arg\min_{x, y} L_\nu(x, y, \lambda^k), \qquad (6)$$
$$\lambda^{k+1} := \lambda^k + \nu \left( A x^{k+1} - y^{k+1} \right). \qquad (7)$$
Shimmura and Suzuki [29] considered the minimization of the function $\phi : \mathbb{R}^n \to \mathbb{R}$,
$$\phi(x) := f(x) + \min_y \left\{ h(y) + \langle \lambda, A x - y \rangle + \frac{\nu}{2} \|A x - y\|_2^2 \right\},$$
plus the non-differentiable function $g(x)$ over $x \in \mathbb{R}^n$, which replaces the joint minimization over $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^p$ in (6).
Lemma 1
([29], Theorem 1). The function $\phi(x)$ is differentiable, and its gradient is
$$\nabla \phi(x) = \nabla f(x) + A^T \operatorname{prox}_{\nu h^*}(\nu A x + \lambda).$$
By Lemma 1, the minimization in (6) can be regarded as a composite optimization problem with the differentiable function $\phi(x)$ and the non-differentiable function $g(x)$. Therefore, it is feasible to use a proximal gradient method, such as the fast iterative shrinkage-thresholding algorithm (FISTA) [30], to update the variable $x$.
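The following sketch illustrates Lemma 1 on a toy instance (our own illustration, not code from [29]): with $f(x) = \frac{1}{2}\|x - b\|_2^2$ and $h = \|\cdot\|_1$, the conjugate $h^*$ is the indicator of the $\ell_\infty$ unit ball, so $\operatorname{prox}_{\nu h^*}$ is a componentwise clip, and the gradient formula can be checked against finite differences.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def phi_and_grad(x, b, A, lam, nu):
    # phi(x) = f(x) + min_y { ||y||_1 + <lam, Ax - y> + nu/2 ||Ax - y||_2^2 },
    # with f(x) = 0.5 ||x - b||^2; the inner minimizer is y* = prox_{h/nu}(Ax + lam/nu).
    Ax = A @ x
    y_star = soft_threshold(Ax + lam / nu, 1.0 / nu)
    r = Ax - y_star
    phi = 0.5 * np.sum((x - b) ** 2) + np.sum(np.abs(y_star)) + lam @ r + 0.5 * nu * np.sum(r ** 2)
    # Lemma 1: grad phi(x) = grad f(x) + A^T prox_{nu h*}(nu A x + lam);
    # here h* is the indicator of the l_inf unit ball, so its prox is a clip.
    grad = (x - b) + A.T @ np.clip(nu * Ax + lam, -1.0, 1.0)
    return phi, grad

rng = np.random.default_rng(1)
A, b, lam, nu = rng.normal(size=(4, 3)), rng.normal(size=3), rng.normal(size=4), 2.0
x0 = rng.normal(size=3)
_, g = phi_and_grad(x0, b, A, lam, nu)
# finite-difference check of the gradient formula in Lemma 1
eps, g_fd = 1e-6, np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = eps
    g_fd[i] = (phi_and_grad(x0 + e, b, A, lam, nu)[0] - phi_and_grad(x0 - e, b, A, lam, nu)[0]) / (2 * eps)
print(np.allclose(g, g_fd, atol=1e-4))  # True
```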

2.2. Nesterov’s Accelerated Gradient Method

Nesterov [35] proposed a variant of the gradient descent method for Lipschitz-differentiable functions. It has an $O(1/k^2)$ convergence rate, while the (traditional) gradient descent method has $O(1/k)$ [35]. Considering the minimization of a convex and differentiable function $F(x)$, NAGM is described in Algorithm 1 when $\nabla F$ is Lipschitz continuous with constant $L$ as in (1).
Algorithm 1 NAGM.
Input: Lipschitz constant $L$, initial value $x_0 = y_0$, $t_1 = 1$.
While $k < k_{\max}$ (until convergence) do
  1: $x_{k+1} = y_k - \frac{1}{L} \nabla F(y_k)$
  2: $t_{k+1} = \dfrac{1 + \sqrt{1 + 4 t_k^2}}{2}$
  3: $y_{k+1} = x_{k+1} + \dfrac{t_k - 1}{t_{k+1}} \left( x_{k+1} - x_k \right)$
  4: $k = k + 1$
End while
Algorithm 1 replaces the gradient descent update $y_{k+1} = y_k - \frac{1}{L} \nabla F(y_k)$ by Steps 1 to 3: Step 1 executes the gradient descent to obtain $x_{k+1}$ from $y_k$, and Steps 2 and 3 compute the new $y_{k+1}$ from the previous $x_k$ and $x_{k+1}$ before returning to the gradient descent in Step 1. NAGM assumes that $F$ is differentiable, while FISTA [30], an accelerated version of ISTA [30], deals with non-differentiable terms using proximal gradient descent (see Table 1).
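As an illustration (not the paper's implementation), a minimal NAGM sketch for a strongly convex quadratic looks as follows; the three steps inside the loop match Algorithm 1.

```python
import numpy as np

def nagm(grad, L, x0, n_iter=200):
    # Nesterov's accelerated gradient method (Algorithm 1) for a convex,
    # differentiable F whose gradient is L-Lipschitz.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - grad(y) / L                            # Step 1: gradient step from y_k
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # Step 2
        y = x_new + (t - 1.0) / t_new * (x_new - x)        # Step 3: momentum
        x, t = x_new, t_new
    return x

# toy example: minimize F(x) = 0.5 x^T Q x - b^T x
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20)); Q = M.T @ M + np.eye(20)
b = rng.normal(size=20)
L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
x_hat = nagm(lambda x: Q @ x - b, L, np.zeros(20))
print(np.linalg.norm(x_hat - np.linalg.solve(Q, b)))  # small: close to the exact minimizer
```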

2.3. Convex Biclustering

We consider the convex biclustering problem in a general setting. Suppose we have a data matrix $X = (x_{ij})$ consisting of $N$ observations $i = 1, \ldots, N$ w.r.t. $p$ features $j = 1, \ldots, p$. Our task is to assign each observation to one of the non-overlapping row clusters $C_1, \ldots, C_R \subseteq \{1, \ldots, N\}$ and each feature to one of the non-overlapping column clusters $D_1, \ldots, D_K \subseteq \{1, \ldots, p\}$. We assume that the clusters $C_1, \ldots, C_R$ and $D_1, \ldots, D_K$ and the values of $R$ and $K$ are not known a priori.
More precisely, the convex biclustering in this paper is formulated as follows:
$$\min_{U \in \mathbb{R}^{N \times p}} \; \frac{1}{2} \|X - U\|_F^2 + \lambda \left( \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \omega_{ij} \|U_{i\cdot} - U_{j\cdot}\|_2 + \sum_{m=1}^{p-1} \sum_{n=m+1}^{p} \tilde{\omega}_{mn} \|U_{\cdot m} - U_{\cdot n}\|_2 \right), \qquad (8)$$
where U i · and U · j are the i-th row and j-th column of U R N × p . Chi et al. [9] suggested a requirement on the weights selection:
$$\omega_{ij} := 1^k_{i,j} \exp\left( -\phi \|x_{i\cdot} - x_{j\cdot}\|_2^2 \right)$$
and
$$\tilde{\omega}_{mn} := 1^k_{m,n} \exp\left( -\tilde{\phi} \|x_{\cdot m} - x_{\cdot n}\|_2^2 \right),$$
where $1^k_{i,j}$ is 1 if $j$ belongs to the $k$-nearest neighbors of $i$ and 0 otherwise, $1^k_{m,n}$ is defined similarly (the parameter $k$ should be specified beforehand), and $x_{i\cdot}$ and $x_{\cdot j}$ are the $i$-th row and $j$-th column of the matrix $X$. They suggested that the constants $\phi$ and $\tilde{\phi}$ be chosen so that the sums $\sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \omega_{ij}$ and $\sum_{m=1}^{p-1} \sum_{n=m+1}^{p} \tilde{\omega}_{mn}$ equal $N^{-1/2}$ and $p^{-1/2}$, respectively.
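A possible way to build such sparse Gaussian-kernel weights is sketched below. This is our own illustration: for simplicity it rescales the weights directly so that they sum to the desired constant, instead of tuning φ and φ̃ as suggested above, and it symmetrizes the k-nearest-neighbor relation; these choices are assumptions, not the exact recipe of [9].

```python
import numpy as np

def knn_gaussian_weights(X, k, phi, axis=0):
    # Sparse Gaussian kernel weights w_ij = 1{j in kNN(i)} * exp(-phi * ||x_i - x_j||_2^2),
    # computed over rows (axis=0) or columns (axis=1) of X, then rescaled so that the
    # nonzero weights sum to 1/sqrt(#rows) (resp. 1/sqrt(#cols)).
    V = X if axis == 0 else X.T
    n = V.shape[0]
    D2 = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=2)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(D2[i])[1:k + 1]                          # k nearest neighbors of i
        W[i, nn] = np.exp(-phi * D2[i, nn])
    W = np.maximum(W, W.T)                # keep (i, j) if either is a kNN of the other
    W = np.triu(W, 1)                     # one weight per unordered pair i < j
    return W / (W.sum() * np.sqrt(n))     # total weight = 1/sqrt(n)

X = np.random.default_rng(0).normal(size=(30, 12))
w_row = knn_gaussian_weights(X, k=5, phi=0.5, axis=0)   # omega_{ij}  for pairs of rows
w_col = knn_gaussian_weights(X, k=5, phi=0.5, axis=1)   # omega~_{mn} for pairs of columns
```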
Chi et al. [9] proposed COBRA to solve the problem (8). Essentially, COBRA alternately solves the standard convex clustering problems over the rows and the columns,
$$\min_{U \in \mathbb{R}^{N \times p}} \; \frac{1}{2} \|X - U\|_F^2 + \lambda \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \omega_{ij} \|U_{i\cdot} - U_{j\cdot}\|_2 \qquad (9)$$
and
$$\min_{U \in \mathbb{R}^{N \times p}} \; \frac{1}{2} \|X - U\|_F^2 + \lambda \sum_{m=1}^{p-1} \sum_{n=m+1}^{p} \tilde{\omega}_{mn} \|U_{\cdot m} - U_{\cdot n}\|_2, \qquad (10)$$
until the solution converges. However, both optimization problems contain non-differentiable $\ell_2$ norm terms, and solving the convex clustering problems (9) and (10) takes much time [23]. Later, ADMM-based approaches that alternate the variable updates were proposed and outperformed COBRA for large values of the parameter λ [19].

3. The Proposed Method and Theoretical Analysis

In this section, we first show that all terms in the augmented Lagrangian function of (8) can be made differentiable w.r.t. $U$ after introducing two dual variables. Therefore, we use NAGM rather than FISTA as in [29], which assumes that the objective function contains a non-differentiable term.
In order to make the notation clear, we formulate the problem (8) in another way. Let $\epsilon_1$ and $\epsilon_2$ be the sets $\{(i,j) \mid \omega_{ij} > 0,\ i < j\}$ and $\{(m,n) \mid \tilde{\omega}_{mn} > 0,\ m < n\}$, respectively, and denote the cardinality of a set $S$ by $|S|$. We define the matrices $C \in \mathbb{R}^{|\epsilon_1| \times N}$ and $D \in \mathbb{R}^{p \times |\epsilon_2|}$ by
$$C_{l,i} = 1, \quad C_{l,j} = -1, \quad C_{l,k} = 0 \ (k \neq i, j), \qquad l = (i, j) \in \epsilon_1,$$
for example,
$$C = \begin{pmatrix} 1 & -1 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -1 \end{pmatrix} \in \mathbb{R}^{|\epsilon_1| \times N},$$
and
$$D_{m,l} = 1, \quad D_{n,l} = -1, \quad D_{k,l} = 0 \ (k \neq m, n), \qquad l = (m, n) \in \epsilon_2,$$
for example,
$$D = \begin{pmatrix} 1 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{pmatrix} \in \mathbb{R}^{p \times |\epsilon_2|},$$
respectively.
Then, the optimization problem (8) can be reformulated as follows:
$$\min_{U \in \mathbb{R}^{N \times p}} \; \frac{1}{2} \|X - U\|_F^2 + \lambda \left( \sum_{l \in \epsilon_1} \omega_l \|C_{l,\cdot} U\|_2 + \sum_{l \in \epsilon_2} \tilde{\omega}_l \|U D_{\cdot,l}\|_2 \right). \qquad (11)$$
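The roles of $C$ and $D$ can be checked on a toy example, as in the following sketch; pair_difference_matrix is a helper name we introduce, and the weight matrices are arbitrary illustrative values.

```python
import numpy as np

def pair_difference_matrix(W):
    # One row per pair (i, j) with i < j and W[i, j] > 0: entry +1 at i and -1 at j,
    # so that (E @ U)[l] = U_{i.} - U_{j.} for the l-th pair.
    n = W.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
    E = np.zeros((len(pairs), n))
    for l, (i, j) in enumerate(pairs):
        E[l, i], E[l, j] = 1.0, -1.0
    return E

# toy weights for N = 4 rows and p = 3 columns of U
W_row = np.array([[0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]], dtype=float)
W_col = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
C = pair_difference_matrix(W_row)        # C in R^{|eps1| x N}: C @ U stacks the row differences
D = pair_difference_matrix(W_col).T      # D in R^{p x |eps2|}: U @ D stacks the column differences
U = np.arange(12, dtype=float).reshape(4, 3)
print(C @ U)   # each row equals U_{i.} - U_{j.} for a weighted pair (i, j)
print(U @ D)   # each column equals U_{.m} - U_{.n} for a weighted pair (m, n)
```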

3.1. The ALM Formulation

To implement the ALM, we further rewrite the problem (11) as the following constrained optimization problem by introducing the dual variables $V \in \mathbb{R}^{|\epsilon_1| \times p}$ and $Z \in \mathbb{R}^{N \times |\epsilon_2|}$:
$$\begin{aligned} \min_{U, V, Z} \quad & \frac{1}{2} \|X - U\|_F^2 + \lambda \left( \sum_{l \in \epsilon_1} \omega_l \|V_l\|_2 + \sum_{l \in \epsilon_2} \tilde{\omega}_l \|Z_l\|_2 \right) \\ \text{subject to} \quad & C_{l,\cdot} U - V_l = 0, \ l \in \epsilon_1, \qquad U D_{\cdot,l} - Z_l = 0, \ l \in \epsilon_2, \end{aligned} \qquad (12)$$
where V l and Z l are the l-th row and l-th column of V and Z, respectively. If we introduce the following functions,
$$f(U) := \frac{1}{2} \|X - U\|_F^2, \qquad h(V) := \lambda \sum_{l \in \epsilon_1} \omega_l \|V_l\|_2, \qquad g(Z) := \lambda \sum_{l \in \epsilon_2} \tilde{\omega}_l \|Z_l\|_2,$$
then the problem in (12) becomes
$$\min_{U, V, Z} \; f(U) + h(V) + g(Z).$$
The augmented Lagrangian function of the problem (12) is given by
$$L_\nu(U, V, Z, \Lambda_1, \Lambda_2) := f(U) + h(V) + \sum_{l \in \epsilon_1} \langle \Lambda_{1l}, C_{l,\cdot} U - V_l \rangle + \frac{\nu}{2} \sum_{l \in \epsilon_1} \|C_{l,\cdot} U - V_l\|_2^2 + g(Z) + \sum_{l \in \epsilon_2} \langle \Lambda_{2l}, U D_{\cdot,l} - Z_l \rangle + \frac{\nu}{2} \sum_{l \in \epsilon_2} \|U D_{\cdot,l} - Z_l\|_2^2, \qquad (14)$$
where $\nu > 0$ is an augmented Lagrangian penalty parameter, $\Lambda_1 \in \mathbb{R}^{|\epsilon_1| \times p}$ and $\Lambda_2 \in \mathbb{R}^{N \times |\epsilon_2|}$ are Lagrangian multipliers, and $\Lambda_{1l}$ and $\Lambda_{2l}$ are the $l$-th row and $l$-th column of $\Lambda_1$ and $\Lambda_2$, respectively.
Hence, the ALM procedure of the problem (12) consists of the following three steps:
$$(U^k, V^k, Z^k) = \arg\min_{U, V, Z} L_\nu(U, V, Z, \Lambda_1^{k-1}, \Lambda_2^{k-1}), \qquad (15)$$
$$\Lambda_{1l}^{k} = \Lambda_{1l}^{k-1} + \nu \left( C_{l,\cdot} U^k - V_l^k \right), \quad l \in \epsilon_1, \qquad (16)$$
$$\Lambda_{2l}^{k} = \Lambda_{2l}^{k-1} + \nu \left( U^k D_{\cdot,l} - Z_l^k \right), \quad l \in \epsilon_2. \qquad (17)$$

3.2. The Proposed Method

We construct our proposed method by repeatedly minimizing the augmented Lagrangian function as in Equation (15) w.r.t. $U$, $V_l$, $Z_l$ and updating the Lagrange multipliers as in Equations (16) and (17). The whole procedure is summarized in Algorithm 2.
Algorithm 2 Proposed method.
Input: Data $X$, matrices $C$ and $D$, Lipschitz constant $L$ calculated by (27), penalties λ and ν, initial values $\Lambda_1^0$, $\Lambda_2^0$, $Y^1$, $t_1 = 1$.
While $k < k_{\max}$ (until convergence) do
  1: Calculate the gradient $\nabla F(Y^k)$ by (20).
  2: Update $U^k \leftarrow Y^k - \frac{1}{L} \nabla F(Y^k)$.
  3: Update $\Lambda_{1l}^{k} \leftarrow P_{B_l}\!\left( \Lambda_{1l}^{k-1} + \nu C_{l,\cdot} U^k \right)$ for $l \in \epsilon_1$, by (24) and (25), where $B_l := \{ y : \|y\|_2 \le \lambda \omega_l \}$.
  4: Update $\Lambda_{2l}^{k} \leftarrow P_{\tilde{B}_l}\!\left( \Lambda_{2l}^{k-1} + \nu U^k D_{\cdot,l} \right)$ for $l \in \epsilon_2$, by (26) and (25), where $\tilde{B}_l := \{ \tilde{y} : \|\tilde{y}\|_2 \le \lambda \tilde{\omega}_l \}$.
  5: $t_{k+1} = \dfrac{1 + \sqrt{1 + 4 t_k^2}}{2}$
  6: $Y^{k+1} = U^k + \dfrac{t_k - 1}{t_{k+1}} \left( U^k - U^{k-1} \right)$
  7: $k = k + 1$
End while
Output: optimal solution to problem (8), $U^* = U^k$.
Step 1: update $U$. In the $U$-update, if we define the following function from the augmented Lagrangian in Equation (14),
$$F(U) := \min_{V, Z} L_\nu(U, V, Z, \Lambda_1, \Lambda_2) = f(U) + \min_{V, Z} \left\{ h(V) + \sum_{l \in \epsilon_1} \langle \Lambda_{1l}, C_{l,\cdot} U - V_l \rangle + \frac{\nu}{2} \sum_{l \in \epsilon_1} \|C_{l,\cdot} U - V_l\|_2^2 + g(Z) + \sum_{l \in \epsilon_2} \langle \Lambda_{2l}, U D_{\cdot,l} - Z_l \rangle + \frac{\nu}{2} \sum_{l \in \epsilon_2} \|U D_{\cdot,l} - Z_l\|_2^2 \right\}, \qquad (18)$$
then the update of $U$ in (15) can be written as
$$U^{k+1} := \arg\min_{U} F(U). \qquad (19)$$
We find that (18) is differentiable due to Lemma 1, and obtain the following proposition.
Proposition 1.
The function $F(U)$ is differentiable with respect to $U$, and
$$\nabla_U F(U) = U - X + C^T \operatorname{prox}_{\nu h^*}(\nu C U + \Lambda_1) + \operatorname{prox}_{\nu g^*}(\nu U D + \Lambda_2) D^T. \qquad (20)$$
For the proof, see the Appendix A.1.
With Proposition 1, we can use NAGM (Algorithm 1) to update U by solving the differentiable optimization problem (19).
Step 2: update $V_l$ and $\Lambda_1$. By step (15) in the ALM procedure, we must minimize the terms in Equation (18) involving the vector $V_l$, i.e., update
$$V_l^{k} = \arg\min_{V_l} \left\{ \lambda \omega_l \|V_l\|_2 + \frac{\nu}{2} \|V_l\|_2^2 - \langle \Lambda_{1l}^{k-1} + \nu C_{l,\cdot} U^k, V_l \rangle \right\} = \arg\min_{V_l} \left\{ \frac{\nu}{2} \left\| V_l - \left( C_{l,\cdot} U^k + \nu^{-1} \Lambda_{1l}^{k-1} \right) \right\|_2^2 + h_l(V_l) \right\} = \operatorname{prox}_{h_l/\nu}\!\left( C_{l,\cdot} U^k + \nu^{-1} \Lambda_{1l}^{k-1} \right), \quad l \in \epsilon_1,$$
where $h_l(V_l) := \lambda \omega_l \|V_l\|_2$ denotes the $l$-th term of $h(V)$.
We substitute the optimal $V_l^k$ at the $k$-th iteration into step (16),
$$\Lambda_{1l}^{k} \leftarrow \Lambda_{1l}^{k-1} + \nu \left( C_{l,\cdot} U^k - V_l^k \right), \quad l \in \epsilon_1,$$
to obtain
$$\Lambda_{1l}^{k} \leftarrow \Lambda_{1l}^{k-1} + \nu C_{l,\cdot} U^k - \nu \operatorname{prox}_{h_l/\nu}\!\left( C_{l,\cdot} U^k + \nu^{-1} \Lambda_{1l}^{k-1} \right). \qquad (21)$$
By Moreau’s decomposition (2), we further simplify the update (21) as follows,
$$\Lambda_{1l}^{k} \leftarrow \operatorname{prox}_{\nu h_l^*}\!\left( \Lambda_{1l}^{k-1} + \nu C_{l,\cdot} U^k \right), \qquad (22)$$
which means the updates of V l and Λ 1 l become one update (22). Hence, there is no longer a need to store and compute the variable V l in the ALM updates, which reduces computational costs.
In the update (22), the conjugate function $h_l^*(y)$ of the weighted $\ell_2$ norm $h_l$ is an indicator function ([36], Example 3.26):
$$h_l^*(y) = \begin{cases} 0, & \text{if } \|y\|_2 \le \lambda \omega_l, \\ \infty, & \text{otherwise}. \end{cases} \qquad (23)$$
Moreover, the proximal operator of the indicator function in (22) is a projection ([37], Theorem 6.24):
$$\operatorname{prox}_{\nu h_l^*}\!\left( \nu C_{l,\cdot} U^k + \Lambda_{1l}^{k-1} \right) = P_{B_l}\!\left( \nu C_{l,\cdot} U^k + \Lambda_{1l}^{k-1} \right), \qquad (24)$$
where $B_l := \{ y : \|y\|_2 \le \lambda \omega_l \}$ and the operator $P_{B_l}$ denotes the projection onto the ball $B_l$. It solves the problem $P_{B_l}(x) := \arg\min_{u \in B_l} \|u - x\|_2^2$, i.e.,
$$P_{B_l}(x) = \begin{cases} x, & \text{if } \|x\|_2 \le \lambda \omega_l, \\ \dfrac{\lambda \omega_l}{\|x\|_2}\, x, & \text{otherwise}. \end{cases} \qquad (25)$$
This projection can be computed in $O(p)$ operations for a $p$-dimensional vector $x \in \mathbb{R}^p$.
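For completeness, a one-line implementation of the projection (25), and hence of the dual update (22), might look as follows; project_ball is a name we introduce for illustration.

```python
import numpy as np

def project_ball(x, radius):
    # Projection onto B = { y : ||y||_2 <= radius }, computed in O(p) for x in R^p.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# e.g., a single dual update of the form (22), with hypothetical variables
# lam1_l, C_l, U, nu, lam, w_l defined elsewhere:
#   lam1_l = project_ball(lam1_l + nu * C_l @ U, lam * w_l)
```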
Step 3: update Z and Λ 2 . Similarly, we can derive the following equations:
$$Z_l^{k} = \arg\min_{Z_l} \left\{ \lambda \tilde{\omega}_l \|Z_l\|_2 + \frac{\nu}{2} \|Z_l\|_2^2 - \langle \Lambda_{2l}^{k-1} + \nu U^k D_{\cdot,l}, Z_l \rangle \right\} = \arg\min_{Z_l} \left\{ \frac{\nu}{2} \left\| Z_l - \left( U^k D_{\cdot,l} + \nu^{-1} \Lambda_{2l}^{k-1} \right) \right\|_2^2 + g_l(Z_l) \right\} = \operatorname{prox}_{g_l/\nu}\!\left( U^k D_{\cdot,l} + \nu^{-1} \Lambda_{2l}^{k-1} \right), \quad l \in \epsilon_2,$$
where $g_l(Z_l) := \lambda \tilde{\omega}_l \|Z_l\|_2$. Then, the update of the dual variable $\Lambda_2$ becomes
$$\Lambda_{2l}^{k} \leftarrow \operatorname{prox}_{\nu g_l^*}\!\left( \Lambda_{2l}^{k-1} + \nu U^k D_{\cdot,l} \right), \quad l \in \epsilon_2. \qquad (26)$$
If we write it in terms of the projection operator, then it becomes
$$\Lambda_{2l}^{k} \leftarrow P_{\tilde{B}_l}\!\left( \Lambda_{2l}^{k-1} + \nu U^k D_{\cdot,l} \right),$$
where $\tilde{B}_l := \{ \tilde{y} : \|\tilde{y}\|_2 \le \lambda \tilde{\omega}_l \}$.
Our proposed method only uses first-order information. Furthermore, each iteration only requires calculating the gradient of the function $F$ and the proximal operators, and the proximal operators are easy to obtain by solving projection problems.
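Putting the steps together, a compact sketch of Algorithm 2 could look as follows. This is a simplified illustration, not the authors' Rcpp implementation: project_rows and project_cols are helper names we introduce, the dual variables are initialized at zero, ν and the iteration budget are assumed defaults, and no stopping test is included.

```python
import numpy as np

def project_rows(M, radii):
    # Project each row of M onto the l2 ball of the corresponding radius.
    nrm = np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)
    return M * np.minimum(1.0, radii[:, None] / nrm)

def project_cols(M, radii):
    # Project each column of M onto the l2 ball of the corresponding radius.
    nrm = np.maximum(np.linalg.norm(M, axis=0, keepdims=True), 1e-12)
    return M * np.minimum(1.0, radii[None, :] / nrm)

def convex_biclustering_alm(X, C, D, w, w_tilde, lam, nu=1.0, n_iter=1000):
    # Minimal sketch of Algorithm 2: C stacks the row differences, D the column
    # differences, w and w_tilde are the corresponding weight vectors.
    N, p = X.shape
    L = 1.0 + nu * np.linalg.eigvalsh(C.T @ C).max() + nu * np.linalg.eigvalsh(D.T @ D).max()
    U = X.copy(); U_prev = X.copy(); Y = X.copy()
    Lam1 = np.zeros((C.shape[0], p))
    Lam2 = np.zeros((N, D.shape[1]))
    t = 1.0
    for _ in range(n_iter):
        # Step 1: gradient of the differentiable F at Y (Proposition 1);
        # the prox of the conjugate terms reduces to ball projections.
        G = Y - X + C.T @ project_rows(nu * C @ Y + Lam1, lam * w) \
                  + project_cols(nu * Y @ D + Lam2, lam * w_tilde) @ D.T
        U = Y - G / L                                           # Step 2: gradient step
        Lam1 = project_rows(Lam1 + nu * C @ U, lam * w)         # Step 3: row-pair dual update
        Lam2 = project_cols(Lam2 + nu * U @ D, lam * w_tilde)   # Step 4: column-pair dual update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0        # Step 5
        Y = U + (t - 1.0) / t_new * (U - U_prev)                # Step 6: momentum
        U_prev, t = U, t_new
    return U
```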

3.3. Lipschitz Constant and Convergence Rate

By the following lemma, we know that if we choose $1/L$ as the step size for each iteration of NAGM, then the convergence rate is at most $O(1/k^2)$.
Lemma 2
([30,35,38]). Let $\{U^k\}$ be the sequence generated by Algorithm 1 and $U^0$ an initial value. If we take the step size $1/L$, then for any $k \ge 1$ we have
$$F(U^k) - F(U^*) \le \frac{2 L \|U^0 - U^*\|_F^2}{(k+1)^2}.$$
In order to examine the performance of the proposed method, we derive an upper bound of the Lipschitz constant $L$ of $\nabla_U F(U)$ in the following proposition.
Proposition 2.
The Lipschitz constant of $\nabla_U F(U)$ is upper-bounded by
$$1 + \nu \lambda_{\max}(C^T C) + \nu \lambda_{\max}(D^T D), \qquad (27)$$
where λ max denotes the maximum eigenvalue of the corresponding matrix.
Proof. 
By the definition in (1) and Proposition 1, we bound the Lipschitz constant as follows:
$$\begin{aligned} \|\nabla_U F(U_1) - \nabla_U F(U_2)\|_2 &= \big\| U_1 - U_2 + C^T \operatorname{prox}_{\nu h^*}(\nu C U_1 + \Lambda_1) - C^T \operatorname{prox}_{\nu h^*}(\nu C U_2 + \Lambda_1) \\ &\qquad + \operatorname{prox}_{\nu g^*}(\nu U_1 D + \Lambda_2) D^T - \operatorname{prox}_{\nu g^*}(\nu U_2 D + \Lambda_2) D^T \big\|_2 \\ &\le \|U_1 - U_2\|_2 + \big\| C^T \operatorname{prox}_{\nu h^*}(\nu C U_1 + \Lambda_1) - C^T \operatorname{prox}_{\nu h^*}(\nu C U_2 + \Lambda_1) \big\|_2 \\ &\qquad + \big\| \operatorname{prox}_{\nu g^*}(\nu U_1 D + \Lambda_2) D^T - \operatorname{prox}_{\nu g^*}(\nu U_2 D + \Lambda_2) D^T \big\|_2 . \end{aligned}$$
By the definition of the matrix 2-norm and the non-expansiveness of the proximal operators ([39], Lemma 2.4), we obtain
$$\begin{aligned} \|\nabla_U F(U_1) - \nabla_U F(U_2)\|_2 &\le \|U_1 - U_2\|_2 + \sqrt{\lambda_{\max}(C^T C)}\, \|\nu C U_1 - \nu C U_2\|_2 + \|\nu U_1 D - \nu U_2 D\|_2 \, \sqrt{\lambda_{\max}(D^T D)} \\ &\le \|U_1 - U_2\|_2 + \nu \lambda_{\max}(C^T C) \|U_1 - U_2\|_2 + \nu \lambda_{\max}(D^T D) \|U_1 - U_2\|_2 \\ &= \left( 1 + \nu \lambda_{\max}(C^T C) + \nu \lambda_{\max}(D^T D) \right) \|U_1 - U_2\|_2 . \end{aligned}$$
□
Finally, it should be noted that, in terms of time complexity, the proposed method is less sensitive to the λ value than the conventional methods. In fact, the λ value affects the proposed method only through the functions $h_l^*$ and $g_l^*$, which take the value 0 or ∞ depending on whether $\|y\|_2 \le \lambda \omega_l$ and $\|\tilde{y}\|_2 \le \lambda \tilde{\omega}_l$ in (23).
On the other hand, COBRA solves two optimization problems and the ADMM-based methods need to solve the Sylvester equation, so all of them are strongly influenced by the λ value.
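In practice, the bound (27) gives the step size $1/L$ directly from the data. For large $|\epsilon_1|$ and $|\epsilon_2|$, the leading eigenvalues can be obtained with sparse linear algebra, as in the following sketch; the use of SciPy's eigsh here is our own choice of tooling, not part of the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def lipschitz_bound(C, D, nu):
    # Upper bound of Proposition 2: 1 + nu*lambda_max(C^T C) + nu*lambda_max(D^T D).
    # eigsh with k=1, which='LM' returns the largest-magnitude eigenvalue, which is the
    # largest eigenvalue here because C^T C and D^T D are positive semidefinite.
    CtC = sparse.csr_matrix(C).T @ sparse.csr_matrix(C)
    DtD = sparse.csr_matrix(D).T @ sparse.csr_matrix(D)
    lc = eigsh(CtC, k=1, which='LM', return_eigenvectors=False)[0]
    ld = eigsh(DtD, k=1, which='LM', return_eigenvectors=False)[0]
    return 1.0 + nu * lc + nu * ld

rng = np.random.default_rng(0)
C = rng.choice([0.0, 1.0, -1.0], size=(40, 30), p=[0.9, 0.05, 0.05])
D = rng.choice([0.0, 1.0, -1.0], size=(20, 35), p=[0.9, 0.05, 0.05])
print(lipschitz_bound(C, D, nu=1.0))
```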

4. Experiments

In this section, we show the performance of the proposed approach for estimating and assessing the biclusters by conducting experiments on both synthetic and real datasets. We executed the following algorithms:
  • COBRA: Dykstra-like proximal algorithm proposed by Chi et al. [9].
  • ADMM: the ADMM proposed by Weylandt [19].
  • G-ADMM (generalized ADMM): the modified ADMM presented by Weylandt [19].
  • Proposed method: the proposed algorithm shown in Algorithm 2.
They were all implemented with Rcpp on a MacBook Air with a 1.6 GHz Intel Core i5 and 8 GB of memory. We recorded the wall-clock times of the four algorithms.

4.1. Artificial Data Analysis

We evaluate the performance of the proposed method on synthetic data in terms of the number of iterations, the execution time, and the clustering quality.
We generate the artificial data $X \in \mathbb{R}^{N \times p}$ with a checkerboard bicluster structure, similar to the method in [9]. We simulate $x_{ij} \sim N(\mu_{rc}, \sigma^2)$ (i.i.d.), where the indices $r$ and $c$ range over the row clusters $\{1, \ldots, R\}$ and the column clusters $\{1, \ldots, C\}$, respectively, so that the number of biclusters is $M := R \times C$; each $x_{ij}$ is randomly assigned to one of those $M$ biclusters. The mean $\mu_{rc}$ is chosen uniformly from the equally spaced sequence $\{-10, -9, \ldots, 9, 10\}$, and σ is set to 1.5 or 3.0 to represent different noise levels.
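A sketch of this generation procedure is given below; the exact assignment mechanism (independent uniform row and column cluster labels producing the checkerboard) is our reading of the description above, not necessarily the authors' script.

```python
import numpy as np

def checkerboard_data(N, p, R, Cc, sigma, seed=0):
    # Checkerboard bicluster data: x_ij ~ N(mu_rc, sigma^2), where r and c are the
    # (randomly drawn) row and column cluster labels of sample i and feature j.
    rng = np.random.default_rng(seed)
    means = rng.choice(np.arange(-10, 11), size=(R, Cc))   # mu_rc drawn from {-10, ..., 10}
    row_lab = rng.integers(0, R, size=N)
    col_lab = rng.integers(0, Cc, size=p)
    X = means[np.ix_(row_lab, col_lab)] + sigma * rng.normal(size=(N, p))
    return X, row_lab, col_lab

X, row_lab, col_lab = checkerboard_data(N=100, p=100, R=4, Cc=4, sigma=1.5)
```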
In our experiments, we consider the following stopping criteria for the four algorithms.
  • Relative error:
    $$\frac{\|U^{k+1} - U^k\|_F}{\max\{\|U^k\|_F, 1\}} \le \epsilon.$$
  • Objective function error:
    $$|F(U^k) - F(U^*)| \le \epsilon,$$
where ϵ is a given accuracy tolerance. We terminate an algorithm when the corresponding error becomes smaller than ϵ or the number of iterations exceeds 10,000. We use the relative error for the time comparisons and quality assessment, and the objective function error for the convergence rate analysis.
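For reference, the relative-error rule can be coded as a one-line test evaluated at the end of every iteration (a sketch; the objective-error rule additionally requires a high-accuracy estimate of $F(U^*)$).

```python
import numpy as np

def converged(U_new, U_old, eps):
    # Relative-error stopping rule: ||U^{k+1} - U^k||_F / max(||U^k||_F, 1) <= eps.
    return np.linalg.norm(U_new - U_old) / max(np.linalg.norm(U_old), 1.0) <= eps
```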

4.1.1. Comparisons

We vary the sizes $N$ and $p$ of the data matrix $X$ and the tuning parameter λ to test and compare the performance of the four algorithms. At first, we compare the execution time for different λ ranging from 1 to 2000, setting $R = 4$, $C = 4$, $\sigma = 1.5$. The results are shown in Figure 2a.
From Figure 2a, we observe that the execution time of the COBRA and G-ADMM increases rapidly as λ varies. The execution time of COBRA is the largest when λ > 1400 . Therefore, it will take a long time for COBRA to visualize the whole fusion process, particularly the single bicluster case. Our proposed method significantly outperforms the other three algorithms and offers high stability in a wide range of λ . Due to its low computational time, our proposed method is a preferable choice to visualize the biclusters for various λ values when applying biclustering.
Next, we compare the execution time for different $p$ (from 1 to 200). Here, we fix the numbers of column clusters and row clusters ($C = R = 4$). Figure 2b shows that the execution time of the algorithms increases as $p$ grows. The proposed method shows better performance than the other three. In particular, ADMM and G-ADMM suffer as the feature dimension $p$ grows, and their computation time increases dramatically.
Then, we vary the sample size $N$ from 100 to 1000 and fix the feature dimension at $p = 40$, with $\lambda = 1$, $R = 4$, $C = 4$, $\sigma = 1.5$. Figure 3 shows the execution times of the three algorithms (COBRA, G-ADMM, and Proposed) for each $N$.
The curves reveal that the larger the sample size $N$, the more time the algorithms require. Moreover, ADMM takes more than 500 s when $N > 500$ and around 1800 s when $N = 1000$, which is much longer than the other three algorithms; we therefore omit the ADMM results from the figure. In contrast, our proposed method takes only around 10 s even when $N = 1000$, about one sixth of the time required by G-ADMM.

4.1.2. Assessment

We evaluate the clustering quality by a widely used criterion called the Rand index (RI) [40]. The value of the RI ranges from 0 to 1; a higher value indicates better performance, and 1 indicates a perfect clustering. Note that we can obtain the true bicluster labels from the data generation procedure. We generate the matrix data with $N = 100$ and $p = 100$ and set two noise levels, low ($\sigma = 1.5$) and high ($\sigma = 3.0$). We compare the clustering quality of our proposed method with ADMM, G-ADMM, and COBRA under different settings. Setting 1: $R = 2$, $C = 4$, $\sigma = 1.5$; Setting 2: $R = 4$, $C = 4$, $\sigma = 1.5$; Setting 3: $R = 4$, $C = 8$, $\sigma = 1.5$; Setting 4: $R = 2$, $C = 4$, $\sigma = 3$; Setting 5: $R = 4$, $C = 4$, $\sigma = 3$; Setting 6: $R = 4$, $C = 8$, $\sigma = 3$.
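The Rand index itself can be computed by pair counting, as in the sketch below; for biclustering we follow the convention that the label of an entry (i, j) is the pair consisting of the row cluster of i and the column cluster of j (our assumption about how the criterion is applied to biclusters).

```python
from math import comb
import numpy as np

def rand_index(labels_a, labels_b):
    # Rand index between two flat partitions of the same items:
    # (number of agreeing pairs) / (total number of pairs).
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(labels_a)
    same_a = labels_a[:, None] == labels_a[None, :]
    same_b = labels_b[:, None] == labels_b[None, :]
    agree = np.triu(same_a == same_b, 1).sum()
    return agree / comb(n, 2)

print(rand_index([0, 0, 1, 1], [0, 0, 1, 2]))  # 0.833...
```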
Table 2 presents the results of the experiment. As the tuning parameter λ increases, the biclusters tend to fuse, which reduces noise interference in the raw data. In some cases, however, an extremely large λ such as 10,000 over-smooths the biclusters and the Rand index decreases; this occurs, for example, in Setting 1 (2 × 4 biclusters, σ = 1.5). The Rand indices of COBRA, ADMM, and our proposed method are similar in most cases because all the algorithms solve the same model. However, G-ADMM exhibits the worst performance due to its slow convergence rate, and it does not converge well when the tuning parameter λ is large (λ = 5000 and λ = 10,000). Overall, the results in Table 2 show that our proposed method attains high accuracy and is stable from low to high noise levels.

4.2. Real Data Analysis

In this section, we use three different real datasets to demonstrate the performance of our proposed method.
Firstly, we use the presidential speeches dataset preprocessed by Weylandt et al. [41], which contains 75 high-frequency words taken from significant speeches of the 44 U.S. presidents (as of 2018). We show the heatmaps in Figure 4 under a wide range of tuning parameters λ to exhibit the fusion process of the biclusters. We set the tolerance ϵ to $1 \times 10^{-6}$ and use the relative-error stopping criterion described in Section 4.1. The columns represent the different presidents, and the rows represent the different words. When λ = 0, the heatmap is disordered and there are no distinct subgroups. As we increase λ, the biclusters begin to merge, and we can identify the common vocabulary used within groups of presidents' speeches. Moreover, as shown in Figure 4f, the heatmap clearly shows four biclusters with two subgroups of presidents and two subgroups of words when λ = 30,000.
Secondly, we compare the computational time of the four algorithms on two real datasets. One is The Cancer Genome Atlas (TCGA) dataset [42], which contains 438 breast cancer patients (samples) and 353 genes (features); the other is the diffuse large-B-cell lymphoma (DLBCL) dataset [43] with 3795 genes and 58 patient samples. In DLBCL, 32 of the 58 samples are from cured patients and 26 are from sick individuals. Furthermore, we extract the 500 genes with the highest variances among the original genes.
Figure 5a,b depicts the results of the elapsed-time comparison. From the curves, we observe that our proposed approach surpasses the other three methods. In contrast, ADMM shows the worst performance on the DLBCL dataset, and also on the TCGA dataset when a tolerance $\epsilon < 10^{-3}$ is required.
Lastly, we compare the number of iterations required to achieve a specified tolerance of $F(U^k) - F(U^*)$ on the TCGA and DLBCL datasets. Figure 6a,b reveals that the COBRA algorithm has the fastest convergence rate, whereas the generalized ADMM is the slowest to converge. Our proposed method shows competitive convergence performance.
Overall, from the above experiment results of the artificial and real datasets, our proposed method has superior computational performance with high accuracy.

5. Discussion

We proposed a method to find a solution to the convex biclustering problem. We found that it outperforms conventional algorithms, such as COBRA and the ADMM-based procedures, in terms of efficiency. Our proposed method is more efficient than COBRA because the latter must solve two optimization problems containing non-differentiable fused terms in each cycle. Additionally, the proposed method performs better than the ADMM-based procedures because it builds on [29] and uses NAGM to update the variable $U$, whereas ADMM spends much more time computing matrix inverses. Moreover, our proposed method is stable while varying the tuning parameter λ, which makes it convenient to find the optimal λ and to visualize the variation of the heatmaps under a wide range of λ.
As for further improvements, we could use ADMM as a warm-start strategy to select an initial value for our proposed method. Moreover, according to the fusion process of the heatmap results in Figure 4, it would be meaningful to derive the range of tuning parameters λ that yields non-trivial solutions of the convex biclustering problem, i.e., solutions with more than one bicluster. The proposed method can also motivate future work: it can be extended to solve other clustering problems, such as the sparse singular value decomposition model [44] and the integrative generalized convex clustering model [45].

Author Contributions

Conceptualization, J.C. and J.S.; methodology, J.C. and J.S.; software, J.C.; formal analysis, J.C. and J.S.; investigation, J.C.; resources, J.C.; data curation, J.C.; writing—original draft preparation, J.C. and J.S.; writing—review and editing, J.C. and J.S.; visualization, J.C.; supervision, J.S.; project administration, J.S.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Grant-in-Aid for Scientific Research (KAKENHI) C, Grant number: 18K11192.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this paper. Presidential speeches dataset: https://www.presidency.ucsb.edu (accessed date 18 October 2021); DLBCL dataset: http://portals.broadinstitute.org/cgi-bin/cancer/datasets.cgi (accessed date 18 October 2021).

Acknowledgments

The authors would like to thank Ryosuke Shimmura for helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADMM    alternating direction method of multipliers
ALM     augmented Lagrangian method
COBRA   convex biclustering algorithm
DLBCL   diffuse large-B-cell lymphoma
FISTA   fast iterative shrinkage-thresholding algorithm
ISTA    iterative shrinkage-thresholding algorithm
NAGM    Nesterov's accelerated gradient method
RI      Rand index
TCGA    The Cancer Genome Atlas

Appendix A

Appendix A.1. Proof of Proposition 1

First, we define the following two functions,
$$r_1(V) := h(V) + \frac{\nu}{2} \|V\|_F^2$$
and
$$r_2(Z) := g(Z) + \frac{\nu}{2} \|Z\|_F^2.$$
By the definition of F ( U ) in (18),
$$\begin{aligned} F(U) &= \min_{V, Z} L_\nu(U, V, Z, \Lambda_1, \Lambda_2) \\ &= f(U) + \min_{V, Z} \Big\{ h(V) + g(Z) + \sum_l \langle \Lambda_{1l}, C_{l,\cdot} U - V_l \rangle + \frac{\nu}{2} \sum_l \|C_{l,\cdot} U - V_l\|_2^2 + \sum_l \langle \Lambda_{2l}, U D_{\cdot,l} - Z_l \rangle + \frac{\nu}{2} \sum_l \|U D_{\cdot,l} - Z_l\|_2^2 \Big\} \\ &= f(U) + \min_{V} \Big\{ h(V) + \frac{\nu}{2} \|V\|_F^2 - \sum_l \langle \Lambda_{1l} + \nu C_{l,\cdot} U, V_l \rangle \Big\} + \frac{\nu}{2} \|C U\|_F^2 + \sum_l \langle \Lambda_{1l}, C_{l,\cdot} U \rangle \\ &\quad + \min_{Z} \Big\{ g(Z) + \frac{\nu}{2} \|Z\|_F^2 - \sum_l \langle \Lambda_{2l} + \nu U D_{\cdot,l}, Z_l \rangle \Big\} + \frac{\nu}{2} \|U D\|_F^2 + \sum_l \langle \Lambda_{2l}, U D_{\cdot,l} \rangle \\ &= f(U) - \max_{V} \Big\{ \langle \nu C U + \Lambda_1, V \rangle - h(V) - \frac{\nu}{2} \|V\|_F^2 \Big\} + \frac{\nu}{2} \|C U\|_F^2 + \langle \Lambda_1, C U \rangle \\ &\quad - \max_{Z} \Big\{ \langle \nu U D + \Lambda_2, Z \rangle - g(Z) - \frac{\nu}{2} \|Z\|_F^2 \Big\} + \frac{\nu}{2} \|U D\|_F^2 + \langle \Lambda_2, U D \rangle \\ &= f(U) - r_1^*(\nu C U + \Lambda_1) - r_2^*(\nu U D + \Lambda_2) + \frac{\nu}{2} \|C U\|_F^2 + \langle \Lambda_1, C U \rangle + \frac{\nu}{2} \|U D\|_F^2 + \langle \Lambda_2, U D \rangle . \end{aligned}$$
By Theorem 26.3 in [33], if the function $r : \mathbb{R}^p \to \mathbb{R}$ is closed and strongly convex, then the conjugate function $r^*(v)$ is differentiable and
$$\nabla r^*(v) = \arg\max_{u \in \mathbb{R}^p} \{ \langle u, v \rangle - r(u) \}.$$
Hence, we can derive the following equations,
$$\nabla r_1^*(v) = \arg\max_{u} \left\{ \langle u, v \rangle - h(u) - \frac{\nu}{2} \|u\|_F^2 \right\} = \arg\max_{u} \left\{ -\frac{1}{2} \|u\|_F^2 + \frac{1}{\nu} \langle u, v \rangle - \frac{1}{\nu} h(u) \right\} = \arg\min_{u} \left\{ \frac{1}{2} \left\| u - \frac{v}{\nu} \right\|_F^2 + \frac{1}{\nu} h(u) \right\} = \operatorname{prox}_{h/\nu}\!\left( \frac{v}{\nu} \right).$$
Then, we obtain
$$\nabla_U\, r_1^*(\nu C U + \Lambda_1) = \nu C^T \operatorname{prox}_{h/\nu}\!\left( C U + \frac{\Lambda_1}{\nu} \right)$$
and, similarly,
$$\nabla_U\, r_2^*(\nu U D + \Lambda_2) = \nu \operatorname{prox}_{g/\nu}\!\left( U D + \frac{\Lambda_2}{\nu} \right) D^T.$$
Next, take the derivative of F ( U ) w.r.t U,
$$\begin{aligned} \nabla_U F(U) &= \nabla f(U) - \nabla_U r_1^*(\nu C U + \Lambda_1) - \nabla_U r_2^*(\nu U D + \Lambda_2) + \nu C^T C U + \nu U D D^T + C^T \Lambda_1 + \Lambda_2 D^T \\ &= \nabla f(U) - \nu C^T \operatorname{prox}_{h/\nu}\!\left( C U + \nu^{-1} \Lambda_1 \right) - \nu \operatorname{prox}_{g/\nu}\!\left( U D + \nu^{-1} \Lambda_2 \right) D^T + \nu C^T C U + C^T \Lambda_1 + \nu U D D^T + \Lambda_2 D^T \\ &= \nabla f(U) + C^T \operatorname{prox}_{\nu h^*}(\nu C U + \Lambda_1) + \operatorname{prox}_{\nu g^*}(\nu U D + \Lambda_2) D^T , \end{aligned}$$
and we obtain the last equation by Moreau’s decomposition.

References

  1. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A k-means clustering algorithm. J. R. Stat. Soc. Ser. (Appl. Stat.) 1979, 28, 100–108. [Google Scholar] [CrossRef]
  2. Johnson, S.C. Hierarchical clustering schemes. Psychometrika 1967, 32, 241–254. [Google Scholar] [CrossRef] [PubMed]
  3. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  4. Hartigan, J.A. Direct clustering of a data matrix. J. Am. Stat. Assoc. 1972, 67, 123–129. [Google Scholar] [CrossRef]
  5. Cheng, Y.; Church, G.M. Biclustering of expression data. ISMB Int. Conf. Intell. Syst. Mol. Biol. 2000, 8, 93–103. [Google Scholar]
  6. Tanay, A.; Sharan, R.; Shamir, R. Discovering statistically significant biclusters in gene expression data. Bioinformatics 2002, 18, S136–S144. [Google Scholar] [CrossRef] [Green Version]
  7. Prelić, A.; Bleuler, S.; Zimmermann, P.; Wille, A.; Bühlmann, P.; Gruissem, W.; Hennig, L.; Thiele, L.; Zitzler, E. A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics 2006, 22, 1122–1129. [Google Scholar] [CrossRef] [PubMed]
  8. Madeira, S.C.; Oliveira, A.L. Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Trans. Comput. Biol. Bioinform. 2004, 1, 24–45. [Google Scholar] [CrossRef]
  9. Chi, E.C.; Allen, G.I.; Baraniuk, R.G. Convex biclustering. Biometrics 2017, 73, 10–19. [Google Scholar] [CrossRef]
  10. Tibshirani, R.; Saunders, M.; Rosset, S.; Zhu, J.; Knight, K. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. (Stat. Methodol.) 2005, 67, 91–108. [Google Scholar] [CrossRef] [Green Version]
  11. Hocking, T.D.; Joulin, A.; Bach, F.; Vert, J.P. Clusterpath an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, DC, USA, 28 June–2 July 2011; p. 1. [Google Scholar]
  12. Lindsten, F.; Ohlsson, H.; Ljung, L. Clustering using sum-of-norms regularization: With application to particle filter output computation. In Proceedings of the 2011 IEEE Statistical Signal Processing Workshop (SSP), Nice, France, 28–30 June 2011; pp. 201–204. [Google Scholar]
  13. Pelckmans, K.; De Brabanter, J.; Suykens, J.A.; De Moor, B. Convex clustering shrinkage. In Proceedings of the PASCALWorkshop on Statistics and Optimization of Clustering Workshop, London, UK, 4–5 July 2005. [Google Scholar]
  14. Langfelder, P.; Horvath, S. WGCNA: An R package for weighted correlation network analysis. BMC Bioinform. 2008, 9, 1–13. [Google Scholar] [CrossRef] [Green Version]
  15. Tan, K.M.; Witten, D.M. Sparse biclustering of transposable data. J. Comput. Graph. Stat. 2014, 23, 985–1008. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Shor, N.Z. Minimization Methods for Non-Differentiable Functions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 3. [Google Scholar]
  17. Boyd, S.; Xiao, L.; Mutapcic, A. Subgradient methods. Lect. Notes EE392o Stanf. Univ. Autumn Quart. 2003, 2004, 2004–2005. [Google Scholar]
  18. Bauschke, H.H.; Combettes, P.L. A Dykstra-like algorithm for two monotone operators. Pac. J. Optim. 2008, 4, 383–391. [Google Scholar]
  19. Weylandt, M. Splitting methods for convex bi-clustering and co-clustering. In Proceedings of the 2019 IEEE Data Science Workshop (DSW), Minneapolis, MN, USA, 2–5 June 2019; pp. 237–242. [Google Scholar]
  20. Glowinski, R.; Marroco, A. Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. ESAIM Math. Model. Numer. Anal.-ModéLisation MathéMatique Anal. NuméRique 1975, 9, 41–76. [Google Scholar] [CrossRef]
  21. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 1976, 2, 17–40. [Google Scholar] [CrossRef] [Green Version]
  22. Deng, W.; Yin, W. On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 2016, 66, 889–916. [Google Scholar] [CrossRef] [Green Version]
  23. Chi, E.C.; Lange, K. Splitting methods for convex clustering. J. Comput. Graph. Stat. 2015, 24, 994–1013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Suzuki, J. Sparse Estimation with Math and R: 100 Exercises for Building Logic; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  25. Bartels, R.H.; Stewart, G.W. Solution of the matrix equation AX+ XB= C [F4]. Commun. ACM 1972, 15, 820–826. [Google Scholar] [CrossRef]
  26. Goldstein, T.; O’Donoghue, B.; Setzer, S.; Baraniuk, R. Fast alternating direction optimization methods. Siam J. Imaging Sci. 2014, 7, 1588–1623. [Google Scholar] [CrossRef] [Green Version]
  27. Boyd, S.; Parikh, N.; Chu, E. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers; Now Publishers Inc.: Boston, MA, USA, 2011. [Google Scholar]
  28. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  29. Shimmura, R.; Suzuki, J. Converting ADMM to a Proximal Gradient for Convex Optimization Problems. arXiv 2021, arXiv:2104.10911. [Google Scholar]
  30. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. Siam J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  31. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 1965, 93, 273–299. [Google Scholar] [CrossRef]
  32. Hestenes, M.R. Multiplier and gradient methods. J. Optim. Theory Appl. 1969, 4, 303–320. [Google Scholar] [CrossRef]
  33. Rockafellar, R.T. The multiplier method of Hestenes and Powell applied to convex programming. J. Optim. Theory Appl. 1973, 12, 555–562. [Google Scholar] [CrossRef]
  34. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  35. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O (1/k2). Dokl. Akad. Nauk Sssr 1983, 269, 543–547. [Google Scholar]
  36. Boyd, S.; Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  37. Beck, A. First-Order Methods in Optimization; SIAM: Philadelphia, PA, USA, 2017. [Google Scholar]
  38. Nemirovski, A.; Yudin, D. Problem Complexity and Method Efficiency in Optimization; John Wiley: Hoboken, NJ, USA, 1983. [Google Scholar]
  39. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef] [Green Version]
  40. Rand, W.M. Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 1971, 66, 846–850. [Google Scholar] [CrossRef]
  41. Weylandt, M.; Nagorski, J.; Allen, G.I. Dynamic visualization and fast computation for convex clustering via algorithmic regularization. J. Comput. Graph. Stat. 2020, 29, 87–96. [Google Scholar] [CrossRef] [Green Version]
  42. Koboldt, D.; Fulton, R.; McLellan, M.; Schmidt, H.; Kalicki-Veizer, J.; McMichael, J.; Fulton, L.; Dooling, D.; Ding, L.; Mardis, E.; et al. Comprehensive molecular portraits of human breast tumours. Nature 2012, 490, 61–70. [Google Scholar]
  43. Rosenwald, A.; Wright, G.; Chan, W.C.; Connors, J.M.; Campo, E.; Fisher, R.I.; Gascoyne, R.D.; Muller-Hermelink, H.K.; Smeland, E.B.; Giltnane, J.M.; et al. The use of molecular profiling to predict survival after chemotherapy for diffuse large-B-cell lymphoma. N. Engl. J. Med. 2002, 346, 1937–1947. [Google Scholar] [CrossRef] [PubMed]
  44. Lee, M.; Shen, H.; Huang, J.Z.; Marron, J. Biclustering via sparse singular value decomposition. Biometrics 2010, 66, 1087–1095. [Google Scholar] [CrossRef] [PubMed]
  45. Wang, M.; Allen, G.I. Integrative generalized convex clustering optimization and feature selection for mixed multi-view data. J. Mach. Learn. Res. 2021, 22, 1–73. [Google Scholar]
Figure 1. While standard clustering divides either rows or columns, the biclustering divides both. (a) Row clustering; (b) column clustering; (c) biclustering.
Figure 2. Execution time for various λ and $p$ with $N = 100$ and $\epsilon = 1 \times 10^{-6}$. (a) Different λ, with $p = 40$; (b) different $p$, with $\lambda = 1$.
Figure 3. Execution times for each $N$ with $\lambda = 1$, $p = 40$, and $\epsilon = 1 \times 10^{-6}$.
Figure 4. The heatmap results of proposed method implementation on the presidential speeches dataset under a wide range of λ . (a) λ = 0 ; (b) λ = 1500 ; (c) λ = 2000 ; (d) λ = 5000 ; (e) λ = 15,000; (f) λ = 30,000.
Figure 5. Plot of $\log(F(U^k) - F(U^*))$ vs. the elapsed time. (a) TCGA; (b) DLBCL.
Figure 6. Plot of $\log(F(U^k) - F(U^*))$ vs. the number of iterations. (a) TCGA; (b) DLBCL.
Table 1. Gradient descent and its modifications.

              Differentiable      Non-Differentiable
Ordinary      Gradient descent    Proximal gradient descent (e.g., ISTA [30])
Accelerated   NAGM [35]           FISTA [30]
Table 2. Assessment result (Rand index).

Setting     Algorithm    λ = 100    λ = 1000    λ = 5000    λ = 10,000
Setting 1   COBRA        0.874      0.875       0.999       0.931
            ADMM         0.872      0.875       0.999       0.931
            G-ADMM       0.874      0.875       0.874       0.872
            Proposed     0.875      0.875       0.999       0.931
Setting 2   COBRA        0.928      0.932       0.994       0.999
            ADMM         0.928      0.935       0.994       0.999
            G-ADMM       0.928      0.934       0.981       0.936
            Proposed     0.928      0.934       0.994       0.999
Setting 3   COBRA        0.959      0.962       0.962       0.999
            ADMM         0.961      0.962       0.962       0.998
            G-ADMM       0.959      0.962       0.967       0.967
            Proposed     0.961      0.962       0.962       0.999
Setting 4   COBRA        0.870      0.870       0.870       0.935
            ADMM         0.870      0.868       0.871       0.933
            G-ADMM       0.870      0.870       0.871       0.871
            Proposed     0.870      0.870       0.871       0.935
Setting 5   COBRA        0.934      0.934       0.934       0.964
            ADMM         0.934      0.932       0.934       0.964
            G-ADMM       0.934      0.934       0.932       0.932
            Proposed     0.934      0.934       0.934       0.964
Setting 6   COBRA        0.960      0.960       0.962       0.962
            ADMM         0.961      0.962       0.962       0.962
            G-ADMM       0.960      0.962       0.962       0.960
            Proposed     0.961      0.962       0.962       0.962
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
