
Minimizing Computation and Communication Costs of Two-Sided Secure Distributed Matrix Multiplication under Arbitrary Collusion Pattern

1 National Mobile Communications Research Laboratory, Southeast University, Nanjing 211189, China
2 School of Information Science and Engineering, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Entropy 2024, 26(5), 407; https://doi.org/10.3390/e26050407
Submission received: 14 March 2024 / Revised: 2 May 2024 / Accepted: 3 May 2024 / Published: 8 May 2024
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)

Abstract
This paper studies the problem of minimizing the total cost, including the computation cost and the communication cost, in a system of two-sided secure distributed matrix multiplication (SDMM) under an arbitrary collusion pattern. To perform SDMM, the two input matrices are split into blocks, blocks of random matrices are appended to protect the security of the two input matrices, and encoded copies of the blocks are distributed to all computing nodes for matrix multiplication. Our aim is to minimize the total cost over all matrix splitting factors, the number of appended random matrices, and the distribution vector, while satisfying the security constraint of the two input matrices, the decodability constraint of the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint. First, a strategy of appending zeros to the input matrices is proposed to overcome the divisibility problem of matrix splitting. Next, the optimization problem is divided into two subproblems with the aid of alternating optimization (AO), through which a feasible solution can be obtained. In addition, some necessary conditions for the problem to be feasible are provided. Simulation results demonstrate the superiority of our proposed scheme compared to the scheme without appending zeros and the scheme without alternating optimization.

1. Introduction

With the development of the Internet of Things (IoT), ubiquitous wireless devices can generate massive amounts of data via environment monitoring or target tracking [1]. However, due to limited power or hardware architecture, these wireless devices cannot satisfy the data processing and computation requirements by themselves. This motivates wireless devices to seek help from online computing nodes that can assist with computation and data processing. Furthermore, distributed computing nodes can be employed to further accelerate computation and data processing, meaning that wireless devices can assign computation tasks to many different computing nodes, e.g., via Apache Spark [2] and MapReduce [3]. On the other hand, if the online computing nodes are untrustworthy, data security must also be guaranteed. Hence, how to perform computation on data with the aid of distributed computing nodes in a secure fashion is an important problem.
In this paper, we focus on the secure distributed matrix multiplication (SDMM) problem [4,5,6,7,8]. In [7,8], a trace-mapping framework was employed to obtain communication-efficient SDMM schemes. The authors of [9] proposed a model of SDMM from an information-theoretic perspective: the user wishes to compute the product of two input matrices $\mathbf{A}$ and $\mathbf{B}$ with the aid of distributed computing nodes while guaranteeing the security of the information about the two input matrices. Two cases are considered: one-sided security and two-sided security. In the first case, the user only wants to protect the information of matrix $\mathbf{A}$, and $\mathbf{B}$ is a public matrix known to all computing nodes [10]. In the second case, the information security of both matrices $\mathbf{A}$ and $\mathbf{B}$ must be guaranteed [9,11]. Information theft by the distributed computing nodes can be modeled by a collusion pattern, which has also been studied in problems of secret sharing [12] and private information retrieval [13,14]. Some of the existing literature has studied the SDMM problem under homogeneous collusion patterns, where up to $l$ computing nodes may collude to obtain the information of the two input matrices [9,15,16,17,18]. To balance the tradeoff between the uplink and downlink costs, the work in [15] proposed two schemes based on secure cross subspace alignment. In [9], the authors characterized the fundamental limits of the minimum communication overhead for the SDMM problem under a homogeneous collusion pattern. The work in [16] proposed a scheme based on polynomial codes applied to the sub-tasks assigned to computing nodes, which can efficiently mitigate the straggling effect. In [18], the authors adopted random matrices to encode the two input matrices in order to meet the security requirement; many encoded copies are then sent to different computing nodes for computation, and finally the user receives the computation results from the computing nodes and recovers the product of the two input matrices. Two cases were considered: (1) encoding the input matrices without extra random matrices, i.e., the generalized polydot code, and (2) encoding the input matrices with random matrices to satisfy the security constraint, i.e., the secure generalized polydot code. They also showed the superiority of the proposed scheme in terms of the recovery threshold, i.e., the number of computation results needed for the user to decode the desired result without error, and the communication load between the user and the computing nodes, i.e., the amount of information downloaded from the computing nodes. Recently, rather than focusing on the homogeneous collusion pattern, ref. [19] studied the SDMM problem under an arbitrary collusion pattern. Considering two performance metrics, the normalized download cost and the normalized upload cost, the authors provided an optimal scheme for the one-sided SDMM problem and an achievable scheme for the two-sided SDMM problem.
Both the private information retrieval problem and the SDMM problem considered in [14,19] deal with non-homogeneous collusion patterns. The common approach in both problems is to assign different numbers of copies to different servers; intuitively, servers that collude more are assigned fewer copies. More specifically, in [14], the authors considered the ratio between the message size and the amount of information downloaded from the servers. The work of [19] then studied the SDMM problem under an arbitrary collusion pattern for a fixed matrix splitting factor, where different numbers of copies were distributed to different computing nodes, based on the collusion pattern, to minimize the normalized download and upload costs. However, the heterogeneity of the computing nodes in terms of storage capacity, communication capability, and computing capability was not taken into consideration. When full heterogeneity is considered, the number of copies assigned to a server depends not only on its colluding behavior but also on its storage capacity, communication capability, and computing capability. Furthermore, a fixed matrix splitting factor may limit the performance of SDMM. Hence, in this work, we study the two-sided SDMM problem under an arbitrary collusion pattern with flexible matrix splitting factors. To measure the communication and computation performance of the system, we propose a new performance metric called the total cost, which is composed of the computation cost and the communication cost. Additionally, the storage capacity of the computing nodes and the delay requirement of the user are also considered. An optimization problem is then formulated to minimize the total cost, subject to the security constraint of the two input matrices, the decodability constraint of the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint. To overcome the divisibility problem of matrix splitting, we also propose a strategy of appending zeros to the input matrices and discuss the feasible set of some matrix splitting factors for the optimality of the problem. Finally, an alternating optimization (AO) algorithm based on off-the-shelf solvers is adopted to obtain a feasible solution, and some necessary conditions for the feasibility of the problem are provided.
The contributions of our paper are summarized as follows:
  • We propose a new performance metric, the total cost, which includes the communication cost and the computation cost, to measure the performance of the SDMM problem under an arbitrary collusion pattern. Our aim is to minimize the total cost over all matrix splitting factors, the number of appended random matrices, and the distribution vector, while satisfying the security constraint of the two input matrices, the decodability constraint of the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint.
  • To overcome the divisibility problem of matrix splitting, we propose a strategy of padding zeros to the input matrices, which allows the input matrices to be split into an arbitrary number of blocks, unlike the scheme without appending zeros. Moreover, the value ranges of some matrix splitting factors are discussed for the optimality of the problem.
  • The formulated optimization problem is solved by an AO algorithm based on off-the-shelf solvers. More specifically, for the optimization subproblem over the number of appended random matrices and the distribution vector, a relationship between the two can be found so that the subproblem is transformed into an integer linear program over the distribution vector, which can be solved by the MATLAB function “intlinprog”. Furthermore, we provide some necessary conditions to verify the feasibility of this subproblem. Then, for the optimization subproblem over the matrix splitting factors, by relaxing the ceiling functions and integer constraints, the subproblem can be transformed into an integer geometric programming problem solved using YALMIP. Simulation results show that our proposed scheme with zero padding is superior to the scheme without appending zeros and the scheme without alternating optimization.
The rest of this paper is organized as follows: Section 2 introduces the system model of the two-sided SDMM under arbitrary collusion pattern. Section 3 proposes a zero-padding strategy, discusses the feasible set of some matrix splitting factors, and formulates an optimization problem. Section 4 provides the algorithm to solve the problem. Simulation results and conclusions are shown in Section 5 and Section 6, respectively.
Notation 1. 
In this paper, the following notation is used: $[1:N]$ denotes the set $\{1, 2, \ldots, N\}$; $\mathbf{h}_n$ represents the $n$-th column vector of the matrix $\mathbf{h}$; $\mathbf{1}_N$ denotes the $N \times 1$ all-ones column vector; the positive integers are denoted by $\mathbb{Z}^+$, the natural numbers by $\mathbb{N}$, and the ceiling function by $\lceil \cdot \rceil$.

2. System Model

As shown in Figure 1, we consider a user who wants to calculate the product of two input matrices $\mathbf{A} \in \mathbb{F}^{T \times S}$ and $\mathbf{B} \in \mathbb{F}^{S \times D}$. We suppose that $T$, $S$, and $D$ are all integers and that the finite field $\mathbb{F}$ is sufficiently large. Due to its limited computational ability, the user wishes to split the two matrices $\mathbf{A}$ and $\mathbf{B}$ into blocks and upload them to $N$ computing nodes for computation. At the same time, both matrices contain sensitive information, and the user does not want to leak any information to the $N$ computing nodes.
We study the case where the computing nodes may collude with each other to obtain information about the two matrices $\mathbf{A}$ and $\mathbf{B}$. We represent the colluding behavior by a collusion pattern $\mathcal{P}$, which contains $M$ colluding sets, i.e., $\mathcal{P} = \{\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_M\}$. Here, $\mathcal{T}_m \subseteq [1:N]$ is the $m$-th colluding set, which means that the computing nodes in $\mathcal{T}_m$ may collude to obtain the information of the two matrices. We make the following two assumptions about the collusion pattern $\mathcal{P}$:
(1)
For ease of presentation, we only include the maximal colluding sets in $\mathcal{P}$. For instance, a colluding set $\{3,4,5,6\}$ means that computing nodes 3, 4, 5, and 6 collude, which implies that the computing nodes in any subset of $\{3,4,5,6\}$ also collude; however, we do not separately include the subsets of $\{3,4,5,6\}$ in $\mathcal{P}$.
(2)
Every computing node must appear in at least one colluding set. This is because we assume that all computing nodes are curious, and no computing node can be trusted with the sensitive information of A and B .
A collusion pattern $\mathcal{P}$ can be represented by its incidence matrix $\mathbf{B}_{\mathcal{P}}$ of size $N \times M$: the $(i,j)$-th element of $\mathbf{B}_{\mathcal{P}}$ is 1 if computing node $i$ is in the $j$-th colluding set of $\mathcal{P}$, and 0 otherwise. For example, when $\mathcal{P} = \{\{1,2,3\}, \{1,4\}, \{2,4\}, \{3,4\}, \{5\}\}$, its incidence matrix is
$$\mathbf{B}_{\mathcal{P}} = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}. \tag{1}$$
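To make this construction concrete, the following Python sketch (our illustration, not part of the original paper; the function name is hypothetical) builds $\mathbf{B}_{\mathcal{P}}$ from a list of colluding sets:

```python
import numpy as np

def incidence_matrix(pattern, N):
    """Build the N x M incidence matrix B_P of a collusion pattern.
    `pattern` lists the colluding sets, with nodes indexed from 1."""
    B = np.zeros((N, len(pattern)), dtype=int)
    for j, T_m in enumerate(pattern):
        for i in T_m:
            B[i - 1, j] = 1
    return B

# Reproduces the example in (1):
P = [{1, 2, 3}, {1, 4}, {2, 4}, {3, 4}, {5}]
print(incidence_matrix(P, N=5))
```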
Since the two matrices must be kept secure, the user must encode $\mathbf{A}$ and $\mathbf{B}$ before uploading them to the computing nodes for computation. Assume that there are $N_1$ encoded copies with $N_1 \geq N$; the encoding functions are denoted by $f = (f_1, f_2, \ldots, f_{N_1})$ and $g = (g_1, g_2, \ldots, g_{N_1})$. We use $\tilde{\mathbf{A}}_i$ and $\tilde{\mathbf{B}}_i$ to represent the $i$-th encoded copies of matrices $\mathbf{A}$ and $\mathbf{B}$, respectively, $i \in [1:N_1]$, i.e., $\tilde{\mathbf{A}}_i = f_i(\mathbf{A})$, $\tilde{\mathbf{B}}_i = g_i(\mathbf{B})$. The user distributes a subset of the encoded matrices to computing node $n$, where the index set of this subset is written as $\mathcal{L}_n \subseteq [1:N_1]$. This is termed the upload phase.
Computing node $n$ computes the products $\mathbf{Z}_i = \tilde{\mathbf{A}}_i \tilde{\mathbf{B}}_i$, $i \in \mathcal{L}_n$, and sends the computed results $\mathbf{Z}_i$, $i \in \mathcal{L}_n$, back to the user. This is termed the download phase.
In order to ensure the security of matrices A and B , the following security constraint must be satisfied,
$$I\left(\mathbf{A}, \mathbf{B};\ \tilde{\mathbf{A}}_{\mathcal{L}_n}, \tilde{\mathbf{B}}_{\mathcal{L}_n},\ n \in \mathcal{T}_m\right) = 0, \quad \forall m \in [1:M], \tag{2}$$
which indicates that the computing nodes in each colluding set, even when putting their received copies together, cannot obtain any information about the two matrices.
In addition, the user must be able to decode the desired product C = AB from the answers received from all the computing nodes, i.e., the decodability constraint
$$H(\mathbf{AB} \mid \mathbf{Z}_1, \mathbf{Z}_2, \ldots, \mathbf{Z}_{N_1}) = 0 \tag{3}$$
must be satisfied.

2.1. Matrix Encoding Scheme

We use the secure generalized polydot (SGPD) code in [18] to encode the two input matrices. First, we split $\mathbf{A}$ into $t \times s$ blocks and $\mathbf{B}$ into $s \times d$ blocks, i.e.,
$$\mathbf{A} = \begin{pmatrix} \mathbf{A}_{1,1} & \cdots & \mathbf{A}_{1,s} \\ \vdots & \ddots & \vdots \\ \mathbf{A}_{t,1} & \cdots & \mathbf{A}_{t,s} \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} \mathbf{B}_{1,1} & \cdots & \mathbf{B}_{1,d} \\ \vdots & \ddots & \vdots \\ \mathbf{B}_{s,1} & \cdots & \mathbf{B}_{s,d} \end{pmatrix}, \tag{4}$$
where $T$ is divisible by $t$, $S$ is divisible by $s$, and $D$ is divisible by $d$. Then, $\mathbf{A}_{i,j}$ is of size $t_0 \times s_0$ and $\mathbf{B}_{i,j}$ is of size $s_0 \times d_0$, where we have defined
$$t_0 = \frac{T}{t}, \quad s_0 = \frac{S}{s}, \quad d_0 = \frac{D}{d}. \tag{5}$$
In view of the security constraint (2), we append random matrices $\mathbf{K}_{i,j} \in \mathbb{F}^{(T/t) \times (S/s)}$ and $\mathbf{K}'_{i,j} \in \mathbb{F}^{(S/s) \times (D/d)}$ as
$$\mathbf{A}^* = \begin{pmatrix} \mathbf{A}_{1,1} & \cdots & \mathbf{A}_{1,s} \\ \vdots & \ddots & \vdots \\ \mathbf{A}_{t,1} & \cdots & \mathbf{A}_{t,s} \\ \mathbf{K}_{1,1} & \cdots & \mathbf{K}_{1,s} \\ \vdots & \ddots & \vdots \\ \mathbf{K}_{l_\Delta,1} & \cdots & \mathbf{K}_{l_\Delta,s} \end{pmatrix}, \tag{6}$$
$$\mathbf{B}^* = \begin{pmatrix} \mathbf{B}_{1,1} & \cdots & \mathbf{B}_{1,d} & \mathbf{K}'_{1,1} & \cdots & \mathbf{K}'_{1,l_\Delta} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \mathbf{B}_{s,1} & \cdots & \mathbf{B}_{s,d} & \mathbf{K}'_{s,1} & \cdots & \mathbf{K}'_{s,l_\Delta} \end{pmatrix}, \tag{7}$$
where $l_\Delta$ rows of random matrices are appended to matrix $\mathbf{A}$ and $l_\Delta$ columns of random matrices are appended to matrix $\mathbf{B}$, with $l_\Delta$ a positive integer. Each element of the random matrices $\mathbf{K}_{i,j}$ and $\mathbf{K}'_{i,j}$ is generated in an i.i.d. fashion according to the uniform distribution on $\mathbb{F}$. Note that (6) and (7) are just one way of appending random matrices; the other is given by Method 2 in [19]. For simplicity, we only study the case of (6) and (7); the other way of appending random matrices can be treated in a similar fashion.
In this case, the encoded matrices are generated according to
$$\tilde{\mathbf{A}}_i = \sum_{j=1}^{t} \sum_{k=1}^{s} \mathbf{A}^*_{j,k}\, x_i^{s(j-1)+k-1} + \sum_{j=t+1}^{t^*} \sum_{k=1}^{s} \mathbf{A}^*_{j,k}\, x_i^{s(j-1)+k-1}, \quad i = 1, \ldots, N_1, \tag{8}$$
$$\tilde{\mathbf{B}}_i = \sum_{k=1}^{s} \sum_{p=1}^{d} \mathbf{B}^*_{k,p}\, x_i^{s-k+t^*s(p-1)} + \sum_{k=1}^{s} \sum_{p=d+1}^{d^*} \mathbf{B}^*_{k,p}\, x_i^{t^*sd+s(p-d)-k}, \quad i = 1, \ldots, N_1, \tag{9}$$
where $x_i$, $i = 1, 2, \ldots, N_1$, are $N_1$ distinct non-zero elements of $\mathbb{F}$, and we have defined $t^* = t + l_\Delta$ and $d^* = d + l_\Delta$.
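To illustrate (4) and (6)–(9) end to end, the following Python sketch encodes the two matrices over a prime field $\mathbb{F}_q$. It is our own illustration under simplifying assumptions: $t \mid T$, $s \mid S$, $d \mid D$ (the zero-padding strategy of Section 3 removes this restriction), the prime $q$ is an arbitrary choice, and all function names are hypothetical.

```python
import numpy as np

q = 1_000_003  # a prime (illustrative); all arithmetic is over the field F_q

def sgpd_encode(A, B, t, s, d, l_delta, xs, rng=np.random.default_rng(0)):
    """Encode A and B per (6)-(9). Assumes t | T, s | S, d | D, and integer
    matrix entries already reduced modulo q."""
    T, S = A.shape
    D = B.shape[1]
    t0, s0, d0 = T // t, S // s, D // d
    t_star, d_star = t + l_delta, d + l_delta
    # Blocks of A* in (6): data blocks, then l_delta rows of random blocks
    Ab = [[A[j*t0:(j+1)*t0, k*s0:(k+1)*s0] if j < t
           else rng.integers(0, q, (t0, s0))
           for k in range(s)] for j in range(t_star)]
    # Blocks of B* in (7): data blocks, then l_delta columns of random blocks
    Bb = [[B[k*s0:(k+1)*s0, p*d0:(p+1)*d0] if p < d
           else rng.integers(0, q, (s0, d0))
           for p in range(d_star)] for k in range(s)]
    A_enc, B_enc = [], []
    for x in xs:                       # distinct non-zero points, one per copy
        Ai = np.zeros((t0, s0), dtype=np.int64)
        for j in range(t_star):        # exponent s(j-1)+k-1 of (8), 0-indexed
            for k in range(s):
                Ai = (Ai + Ab[j][k] * pow(int(x), s*j + k, q)) % q
        Bi = np.zeros((s0, d0), dtype=np.int64)
        for k in range(s):
            for p in range(d_star):    # exponents of (9), 0-indexed
                e = (s - 1 - k) + t_star*s*p if p < d \
                    else t_star*s*d + s*(p - d + 1) - (k + 1)
                Bi = (Bi + Bb[k][p] * pow(int(x), e, q)) % q
        A_enc.append(Ai)
        B_enc.append(Bi)
    return A_enc, B_enc
```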
The $N_1$ encoded copies generated by (8) and (9), i.e., $(\tilde{\mathbf{A}}_i, \tilde{\mathbf{B}}_i)$, $i \in [1:N_1]$, are distributed to the computing nodes: computing node $n$ receives $\tilde{\mathbf{A}}_{\mathcal{L}_n}$ and $\tilde{\mathbf{B}}_{\mathcal{L}_n}$, where $\mathcal{L}_n \subseteq [1:N_1]$ is the index set of the encoded matrices distributed to computing node $n$. We assume that $\mathcal{L}_n$, $n \in [1:N]$, form a partition of the set $[1:N_1]$, which means that each encoded copy is distributed to one and only one computing node. Upon receiving $\tilde{\mathbf{A}}_{\mathcal{L}_n}$ and $\tilde{\mathbf{B}}_{\mathcal{L}_n}$, computing node $n$ calculates $\mathbf{Z}_i = \tilde{\mathbf{A}}_i \tilde{\mathbf{B}}_i$, $i \in \mathcal{L}_n$, and returns $\mathbf{Z}_{\mathcal{L}_n}$ to the user. We distribute the encoded matrices to the computing nodes in the following way. Let $\mathbf{J} = [J_1, J_2, \ldots, J_N]^T$ be the distribution vector, where $J_n \in [0:N_1]$ is the number of encoded matrices given to the $n$-th computing node. Then, we have $|\mathcal{L}_n| = J_n$ and $\mathbf{1}^T \mathbf{J} = N_1$. It has been proved in [19] that when
$$\mathbf{B}_{\mathcal{P}}^T \mathbf{J} \leq (l_\Delta s)\, \mathbf{1}_M, \tag{10}$$
the security constraint (2) is satisfied. The physical meaning of (10) is that the number of encoded matrices held by the computing nodes in every colluding set can be at most $l_\Delta s$, the number of random matrices appended in $\mathbf{A}^*$ or $\mathbf{B}^*$. Furthermore, the decodability constraint (3) is guaranteed by the following inequality [19]:
$$\mathbf{1}^T \mathbf{J} = N_1 \geq l_\Delta(sd + 2s) + ts(d+1) - 1. \tag{11}$$
This means that the number of encoded copies $N_1$ must be no smaller than $l_\Delta(sd + 2s) + ts(d+1) - 1$ for the user to decode the desired result $\mathbf{C} = \mathbf{AB}$ without error.
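Conditions (10) and (11) are straightforward to check numerically for a candidate distribution vector; a minimal sketch (our hypothetical helper, assuming $N_1 = \mathbf{1}^T \mathbf{J}$):

```python
import numpy as np

def constraints_hold(B_P, J, t, s, d, l_delta):
    """Check security (10) and decodability (11) for a distribution vector J."""
    secure = bool(np.all(B_P.T @ J <= l_delta * s))            # (10)
    N1 = int(np.sum(J))                                        # N1 = 1^T J
    decodable = N1 >= l_delta*(s*d + 2*s) + t*s*(d + 1) - 1    # (11)
    return secure and decodable
```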

2.2. Storage, Communication and Computing Requirements of Each Computing Node

The amount of storage each encoded copy $(\tilde{\mathbf{A}}_i, \tilde{\mathbf{B}}_i)$ occupies is $t_0 s_0 + s_0 d_0$. Suppose computing node $n$'s storage capacity is $M_n$. Then, if $t_0 s_0 + s_0 d_0 + t_0 d_0 > M_n$, computing node $n$ cannot even store one encoded copy of $(\tilde{\mathbf{A}}_i, \tilde{\mathbf{B}}_i)$ together with its corresponding answer $\mathbf{Z}_i = \tilde{\mathbf{A}}_i \tilde{\mathbf{B}}_i$. If
$$M_n \geq t_0 s_0 + s_0 d_0 + t_0 d_0, \tag{12}$$
then the computing node can store one encoded copy $(\tilde{\mathbf{A}}_i, \tilde{\mathbf{B}}_i)$, $i \in \mathcal{L}_n$, compute the multiplication $\mathbf{Z}_i = \tilde{\mathbf{A}}_i \tilde{\mathbf{B}}_i$, return the corresponding result, and then retrieve another encoded copy from the user for further computation. Hence, (12) must be satisfied for all $n \in [1:N]$. Written in vector form, we have
$$(t_0 s_0 + s_0 d_0 + t_0 d_0)\, \mathbf{1}_N \leq \mathbf{M}. \tag{13}$$
Suppose computing node $n$'s computation speed is $V_n$ multiplications per second. Then, the time it takes computing node $n$ to complete the computation assigned to it is
$$Q_n^C = \frac{J_n}{V_n}\, t_0 s_0 d_0.$$
Further suppose that the uplink and downlink capacities between the user and computing node $n$ are $C_n^U$ and $C_n^D$ symbols per second, respectively. Then, the upload delay incurred at computing node $n$ is
$$Q_n^U = \frac{J_n}{C_n^U}\, (t_0 s_0 + s_0 d_0),$$
and the download delay incurred at computing node $n$ is
$$Q_n^D = \frac{J_n}{C_n^D}\, t_0 d_0.$$
Then, the total delay incurred at computing node $n$ when assigned $J_n$ encoded copies is
$$Q_n = Q_n^U + Q_n^D + Q_n^C,$$
where we have assumed that a computing node can perform only one of the three actions at any time instant: compute, receive an upload, or send a download. This is also in line with the assumption that the computing nodes may not have enough memory to store all $J_n$ copies at once; rather, a node receives one copy, computes, sends the result back to the user, and then retrieves the next copy and repeats.
Thus, the total delay incurred for this computation is
$$Q = \max_{n \in [1:N]} Q_n = \max_{n \in [1:N]} \left[ \frac{J_n}{V_n}\, t_0 s_0 d_0 + \frac{J_n}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{J_n}{C_n^D}\, t_0 d_0 \right],$$
and we require that the total delay be no larger than a given threshold $Q_{\mathrm{th}}$, i.e.,
$$\max_{n \in [1:N]} \left[ \frac{J_n}{V_n}\, t_0 s_0 d_0 + \frac{J_n}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{J_n}{C_n^D}\, t_0 d_0 \right] \leq Q_{\mathrm{th}}. \tag{16}$$
Besides the delay constraint, cost should also be considered for efficient SDMM. More specifically, the cost we consider comprises the computation cost at the computing nodes and the data transmission cost, where the latter can be divided into the upload and download transmission costs. We assume that the upload and download transmission costs for computing node $n$ are $c_n^U$ and $c_n^D$ per symbol, respectively, and that the computation cost of each multiplication at computing node $n$ is $c_n^C$. Then, the total cost incurred by the user for the secure matrix multiplication of $\mathbf{A}$ and $\mathbf{B}$ is
$$U = U^U + U^D + U^C, \tag{17}$$
where U U is the upload cost, which is given by
$$U^U = (t_0 s_0 + s_0 d_0) \sum_{n=1}^{N} J_n c_n^U,$$
U D is the download cost, which is given by
$$U^D = t_0 d_0 \sum_{n=1}^{N} J_n c_n^D,$$
and U C is the computation cost, which is given by
$$U^C = t_0 s_0 d_0 \sum_{n=1}^{N} J_n c_n^C.$$
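Combining the delay model (16) with the cost model (17), both performance measures for a candidate $\mathbf{J}$ can be evaluated as in the following sketch (our hypothetical helper; all inputs except the block sizes are length-$N$ arrays):

```python
import numpy as np

def delay_and_cost(J, t0, s0, d0, V, Cu, Cd, cu, cd, cc):
    """Total delay Q of (16) and total cost U of (17)."""
    per_copy_time = (t0*s0*d0 / V             # computation time per copy
                     + (t0*s0 + s0*d0) / Cu   # upload time per copy
                     + t0*d0 / Cd)            # download time per copy
    Q = np.max(J * per_copy_time)             # (16): each node works sequentially
    U = ((t0*s0 + s0*d0) * (J @ cu)           # upload cost U^U
         + t0*d0 * (J @ cd)                   # download cost U^D
         + t0*s0*d0 * (J @ cc))               # computation cost U^C
    return Q, U
```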

2.3. Problem Formulation

In this work, we would like to jointly optimize the distribution vector $\mathbf{J}$ and the matrix splitting parameters $(t, s, d, l_\Delta)$ such that the cost of the user, defined in (17), is minimized, while the security constraint (10), the decodability constraint (11), the storage constraint (13), and the delay constraint (16) are satisfied.

3. The Feasible Set of $(t, s, d)$

Since we are splitting the two matrices $\mathbf{A}$ and $\mathbf{B}$ as shown in (4), it is natural to require that $t$, $s$, and $d$ take values such that $T$, $S$, and $D$ are divisible by $t$, $s$, and $d$, respectively. For example, if $T = 5$, then $t$ can only take values in the set $\{1, 5\}$, because $T = 5$ is not divisible by 2, 3, or 4. However, this significantly limits the values that $(t, s, d)$ can take and may result in a high cost for the user.
In this section, we propose a better and more general approach: we allow any values of $(t, s, d)$ and, to make the matrices splittable, we append zeros to the original matrices, i.e., we append $\bar{t}$ rows and $\bar{s}$ columns of zeros to matrix $\mathbf{A}$, and $\bar{s}$ rows and $\bar{d}$ columns of zeros to matrix $\mathbf{B}$, such that $(T + \bar{t})/t$, $(S + \bar{s})/s$, and $(D + \bar{d})/d$ are integers. This increases the dimensions of the two matrices but enables us to split them into blocks in a more flexible way. For example, suppose $\mathbf{A} \in \mathbb{F}^{5 \times 4}$, i.e., $T = 5$, $S = 4$, and we would like to take $t = 2$, $s = 2$. Here, $T = 5$ is not divisible by $t = 2$, but we can append one row of zeros to $\mathbf{A}$ so that the padded matrix has dimension $6 \times 4$ and is thus divisible by $t = 2$, $s = 2$.
More generally, we propose that for any $(t, s, d)$ with $t \in [1:T]$, $s \in [1:S]$, $d \in [1:D]$, we may append $\big(t \lceil T/t \rceil - T\big)$ rows of zeros to the bottom of matrix $\mathbf{A}$ and $\big(s \lceil S/s \rceil - S\big)$ columns of zeros to its right side. Similarly, we append $\big(s \lceil S/s \rceil - S\big)$ rows of zeros to the bottom of matrix $\mathbf{B}$ and $\big(d \lceil D/d \rceil - D\big)$ columns of zeros to its right side. As a result, instead of (5), we have
$$t_0 = \left\lceil \frac{T}{t} \right\rceil, \quad s_0 = \left\lceil \frac{S}{s} \right\rceil, \quad d_0 = \left\lceil \frac{D}{d} \right\rceil.$$
As can be seen, not padding zeros and only using $(t, s, d)$ that are divisors of $(T, S, D)$ is a special case.
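A minimal sketch of this padding strategy (our hypothetical helper), reproducing the $5 \times 4$ example above:

```python
import numpy as np

def pad_for_split(A, t, s):
    """Zero-pad A so that its row and column counts are divisible by t and s."""
    T, S = A.shape
    t0, s0 = -(-T // t), -(-S // s)           # ceil(T/t) and ceil(S/s)
    padded = np.zeros((t0 * t, s0 * s), dtype=A.dtype)
    padded[:T, :S] = A
    return padded, t0, s0

A = np.arange(20).reshape(5, 4)               # T = 5, S = 4
A_pad, t0, s0 = pad_for_split(A, t=2, s=2)    # padded to 6 x 4; blocks are 3 x 2
```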
Since we are considering padding zeros, it is also possible to have $t \in \{T+1, T+2, \ldots\}$, $s \in \{S+1, S+2, \ldots\}$, or $d \in \{D+1, D+2, \ldots\}$. We show in the next lemma that this will only increase the cost at the user, defined in (17), for $t \in \{T+1, T+2, \ldots\}$ and $d \in \{D+1, D+2, \ldots\}$.
Lemma 1. 
To minimize the cost at the user, i.e., (17), it is sufficient to consider $t \in [1:T]$ and $d \in [1:D]$.
Proof. 
For $t \in \{T+1, T+2, \ldots\}$ and $d \in \{D+1, D+2, \ldots\}$, only the decodability constraint (11) is relaxed, while the other constraints are unchanged. In this case, we can prove that the optimal cost will increase compared to $t \in [1:T]$ and $d \in [1:D]$. Please refer to Appendix A for the detailed proof.    □
Remark 1. 
The case of $s$ is different from the cases of $t$ and $d$. When $(l_\Delta, t, d)$ is fixed, we see from the security constraint (10) and the decodability constraint (11) that, on the one hand, increasing $s$ increases the number of blocks, but on the other hand, it also relaxes the security constraint. When the computing nodes are heterogeneous, i.e., they have different computation, upload transmission, and download transmission costs, an increase in $s$ does not necessarily increase the total cost, because the more relaxed security constraint allows more blocks to be distributed to computing nodes with lower costs. As a result, when we apply the strategy of appending zeros, the optimal $s$ may not take values in $[1:S]$.
After the above discussions, the problem described in Section 2.3 can be formally formulated as
$$\begin{aligned} \min_{\mathbf{J}, t, s, d, l_\Delta} \quad & \mathbf{J}^T \mathbf{c}^U (t_0 s_0 + s_0 d_0) + \mathbf{J}^T \mathbf{c}^D\, t_0 d_0 + \mathbf{J}^T \mathbf{c}^C\, t_0 s_0 d_0 && \text{(19a)} \\ \text{s.t.} \quad & (10), (11), && \text{(19b)} \\ & (t_0 s_0 + s_0 d_0 + t_0 d_0)\, \mathbf{1}_N \leq \mathbf{M}, && \text{(19c)} \\ & \max_{n \in [1:N]} \left[ \frac{J_n}{V_n}\, t_0 s_0 d_0 + \frac{J_n}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{J_n}{C_n^D}\, t_0 d_0 \right] \leq Q_{\mathrm{th}}, && \text{(19d)} \\ & t_0 = \lceil T/t \rceil, \; s_0 = \lceil S/s \rceil, \; d_0 = \lceil D/d \rceil, && \text{(19e)} \\ & 1 \leq t \leq T, \; 1 \leq s, \; 1 \leq d \leq D, \; l_\Delta \geq 1, && \text{(19f)} \\ & t, s, d, l_\Delta \in \mathbb{Z}^+, \; \mathbf{J} \in \mathbb{N}^{N \times 1}, && \text{(19g)} \end{aligned}$$
where (19b) collects the security and decodability constraints, (19c) is the storage constraint with the storage capacity vector of the $N$ computing nodes defined as $\mathbf{M} = [M_1, \ldots, M_N]^T$, and (19d) is the delay constraint. In the cost function (19a), we have defined the upload transmission cost vector, the download transmission cost vector, and the computation cost vector of the $N$ computing nodes as $\mathbf{c}^U = [c_1^U, c_2^U, \ldots, c_N^U]^T \in \mathbb{R}^{N \times 1}$, $\mathbf{c}^D = [c_1^D, c_2^D, \ldots, c_N^D]^T \in \mathbb{R}^{N \times 1}$, and $\mathbf{c}^C = [c_1^C, c_2^C, \ldots, c_N^C]^T \in \mathbb{R}^{N \times 1}$, respectively. Note that the scheme of appending zeros makes the dimensions of every block in $\mathbf{A}$ and $\mathbf{B}$ equal to $t_0 \times s_0 = \lceil T/t \rceil \times \lceil S/s \rceil$ and $s_0 \times d_0 = \lceil S/s \rceil \times \lceil D/d \rceil$, respectively, as indicated by (19e). Furthermore, note that in (19f), while the values of $t$ and $d$ are limited to $[1:T]$ and $[1:D]$, respectively, the value of $s$ does not have an upper bound, due to Remark 1.

4. Algorithm Design

Due to the coupled variables, the integer constraints, and the nonlinear constraints and objective function of Problem (19), it is hard to find a globally optimal or even a suboptimal solution. In the following, we propose an algorithm to obtain a feasible solution.
The coupling among the variables of Problem (19) motivates the use of the alternating optimization (AO) technique. A feasible solution to Problem (19) can then be obtained by alternately solving the following two subproblems: one fixes $(t, s, d)$ and optimizes $(\mathbf{J}, l_\Delta)$, and the other optimizes $(t, s, d)$ for a given $(\mathbf{J}, l_\Delta)$.

4.1. Optimization Subproblem of $(\mathbf{J}, l_\Delta)$ for a Fixed $(t, s, d)$

In this subsection, for a fixed ( t , s , d ) , the optimization subproblem of Problem (19) corresponding to ( J , l Δ ) is given as
$$\begin{aligned} \min_{\mathbf{J}, l_\Delta} \quad & \text{(19a)} && \text{(20a)} \\ \text{s.t.} \quad & (10), (11), && \text{(20b)} \\ & J_n \leq \frac{Q_{\mathrm{th}}}{\frac{1}{V_n}\, t_0 s_0 d_0 + \frac{1}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{1}{C_n^D}\, t_0 d_0}, \quad n \in [1:N], && \text{(20c)} \\ & l_\Delta \geq 1, \; l_\Delta \in \mathbb{Z}^+, \; \mathbf{J} \in \mathbb{N}^{N \times 1}. && \text{(20d)} \end{aligned}$$
Note that when $(t, s, d)$ is fixed, the corresponding $(t_0, s_0, d_0)$ is also fixed according to (19e). Further note that when $(t, s, d)$ is fixed, the objective function (20a) is a function of $\mathbf{J}$ only, and not of $l_\Delta$. Since the objective is non-decreasing in every $J_n$ and $J_n, c_n^U, c_n^D, c_n^C \geq 0$ for $n = 1, \ldots, N$, the inequality in (11) must hold with equality at the optimal $\mathbf{J}^*$. Hence, (11) can be rewritten as
$$\mathbf{1}^T \mathbf{J} = l_\Delta(sd + 2s) + ts(d+1) - 1. \tag{21}$$
With the equality (21), $l_\Delta$ can be expressed as a function of $\mathbf{J}$, i.e., $l_\Delta = \frac{\mathbf{1}^T \mathbf{J} - ts(d+1) + 1}{sd + 2s}$. Substituting this expression for $l_\Delta$ into Problem (20), Problem (20) can be reformulated as
$$\begin{aligned} \min_{\mathbf{J}} \quad & \text{(19a)} && \text{(22a)} \\ \text{s.t.} \quad & \text{(20c)}, && \text{(22b)} \\ & \mathbf{B}_{\mathcal{P}}^T \mathbf{J} \leq \frac{\mathbf{1}^T \mathbf{J} - ts(d+1) + 1}{d + 2}\, \mathbf{1}_M, && \text{(22c)} \\ & \frac{\mathbf{1}^T \mathbf{J} - ts(d+1) + 1}{sd + 2s} \geq 1, \quad \frac{\mathbf{1}^T \mathbf{J} - ts(d+1) + 1}{sd + 2s} \in \mathbb{Z}^+, \quad \mathbf{J} \in \mathbb{N}^{N \times 1}. && \text{(22d)} \end{aligned}$$
Problem (22) is an integer linear programming problem with a single optimization variable $\mathbf{J}$. It can be solved using the MATLAB function “intlinprog”, which is based on the branch-and-bound (BnB) algorithm and the interior point method [20,21] and is designed for integer linear programs such as (22).
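For readers working in Python rather than MATLAB, the same integer linear program can be solved with SciPy's milp. The sketch below is our stand-in for “intlinprog”, not the authors' code; it keeps $l_\Delta$ as an explicit integer variable tied to $\mathbf{J}$ through the equality (21), which is equivalent to the substituted form in (22):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_subproblem_J(B_P, t, s, d, t0, s0, d0, V, Cu, Cd, cu, cd, cc, Q_th):
    """Solve Problem (22) as a MILP over z = [J_1, ..., J_N, l_delta]."""
    N, M = B_P.shape
    # Cost per copy at node n, from (19a) with (t0, s0, d0) fixed; l_delta free
    c = np.append((t0*s0 + s0*d0)*cu + t0*d0*cd + t0*s0*d0*cc, 0.0)
    # Decodability (21): 1^T J - (sd + 2s) l_delta = ts(d+1) - 1
    eq = LinearConstraint(np.append(np.ones(N), -(s*d + 2*s))[None, :],
                          t*s*(d + 1) - 1, t*s*(d + 1) - 1)
    # Security (10): B_P^T J - s*l_delta*1_M <= 0
    sec = LinearConstraint(np.hstack([B_P.T, -s * np.ones((M, 1))]),
                           -np.inf, 0.0)
    # Delay (20c) as per-variable upper bounds; l_delta >= 1
    per_copy = t0*s0*d0/V + (t0*s0 + s0*d0)/Cu + t0*d0/Cd
    bounds = Bounds(np.append(np.zeros(N), 1.0),
                    np.append(np.floor(Q_th / per_copy), np.inf))
    res = milp(c, constraints=[eq, sec], integrality=np.ones(N + 1),
               bounds=bounds)
    if not res.success:
        return None                            # Problem (22) is infeasible
    J = res.x[:N].round().astype(int)
    return J, int(round(res.x[-1]))
```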
For certain system parameters and ( t , s , d ) values, Problem (22) is not feasible. To identify a necessary condition for the feasibility of Problem (22), we have the following lemma.
Before presenting the lemma, we define $p$ as the smallest number of colluding sets that together contain all computing nodes. For example, for the collusion pattern represented by the incidence matrix in (1), $p$ is equal to 3, because three colluding sets, i.e., $\{1,2,3\}$, $\{1,4\}$, and $\{5\}$, include all computing nodes, while no two colluding sets in the collusion pattern can include all computing nodes.
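Because $p$ is the size of a minimum set cover, it can be computed by brute force for the pattern sizes considered in this paper; a sketch (our hypothetical helper) that reproduces $p = 3$ for the example above:

```python
from itertools import combinations

def min_cover_p(pattern, N):
    """Smallest number of colluding sets whose union is all N nodes."""
    universe = set(range(1, N + 1))
    for r in range(1, len(pattern) + 1):
        for combo in combinations(pattern, r):
            if set().union(*combo) == universe:
                return r
    return None   # unreachable under assumption (2) of Section 2

P = [{1, 2, 3}, {1, 4}, {2, 4}, {3, 4}, {5}]
print(min_cover_p(P, N=5))   # prints 3
```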
Lemma 2. 
For fixed parameters ( t , s , d ) , if Problem (20) is feasible, the following inequalities must be satisfied:
$$\frac{tsd + ts - 1}{s(p - d - 2)} \leq \frac{Y - tsd - ts + 1}{sd + 2s} \quad \text{and} \quad p - d - 2 > 0,$$
where Y is defined as
$$Y \triangleq \sum_{n=1}^{N} \frac{Q_{\mathrm{th}}}{\frac{1}{V_n}\, t_0 s_0 d_0 + \frac{1}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{1}{C_n^D}\, t_0 d_0}, \tag{23}$$
where ( t 0 , s 0 , d 0 ) satisfies (19e). Variable p is defined as the smallest number of colluding sets that contain all computing nodes.
Proof. 
Let us first derive a lower bound on $l_\Delta$. According to the second assumption made about the collusion pattern in Section 2, every computing node appears in at least one colluding set. So, we have
$$\sum_{n=1}^{N} J_n \leq \sum_{\mathcal{T}_m \in \mathcal{P}'} \sum_{n \in \mathcal{T}_m} J_n \tag{24}$$
for any $\mathcal{P}'$ that is a subset of the colluding sets in $\mathcal{P}$ covering all computing nodes, i.e., such that every $i \in [1:N]$ satisfies $i \in \mathcal{T}_j$ for some $\mathcal{T}_j \in \mathcal{P}'$. For example, in the collusion pattern represented by the incidence matrix in (1), $\mathcal{P}'$ may be $\{\{1,2,3\}, \{1,4\}, \{5\}\}$, $\{\{1,2,3\}, \{2,4\}, \{5\}\}$, $\{\{1,2,3\}, \{3,4\}, \{5\}\}$, or $\{\{1,4\}, \{2,4\}, \{3,4\}, \{5\}\}$.
The constraint (10) can be rewritten as
$$\sum_{n \in \mathcal{T}_m} J_n \leq l_\Delta s, \tag{25}$$
where $\mathcal{T}_m$ is the $m$-th colluding set. Inequality (25) shows that the total number of encoded matrices received by the computing nodes in any colluding set cannot exceed the number $l_\Delta s$ of appended random matrices. Hence, from (25), and choosing $\mathcal{P}'$ to be a smallest cover of all computing nodes, i.e., $|\mathcal{P}'| = p$, we have
$$\sum_{\mathcal{T}_m \in \mathcal{P}'} \sum_{n \in \mathcal{T}_m} J_n \leq \sum_{\mathcal{T}_m \in \mathcal{P}'} l_\Delta s = p\, l_\Delta s. \tag{26}$$
Thus, from (21), (24), and (26), we have
$$l_\Delta(sd + 2s) + ts(d+1) - 1 = \sum_{n=1}^{N} J_n \leq p\, l_\Delta s.$$
When $p - d - 2 > 0$, $l_\Delta$ must satisfy
$$l_\Delta \geq \frac{tsd + ts - 1}{s(p - d - 2)}. \tag{27}$$
On the other hand, when $p - d - 2 \leq 0$, there exists no feasible $l_\Delta$.
Next, we derive an upper bound on $l_\Delta$. We have
$$l_\Delta(sd + 2s) + ts(d+1) - 1 = \sum_{n=1}^{N} J_n \tag{28}$$
$$\leq \sum_{n=1}^{N} \frac{Q_{\mathrm{th}}}{\frac{1}{V_n}\, t_0 s_0 d_0 + \frac{1}{C_n^U}\, (t_0 s_0 + s_0 d_0) + \frac{1}{C_n^D}\, t_0 d_0}, \tag{29}$$
where (28) follows from (21), and (29) follows from (20c). Hence, an upper bound on l Δ is given by
$$l_\Delta \leq \frac{Y - tsd - ts + 1}{sd + 2s}, \tag{30}$$
where Y is as defined in (23).
If Problem (20) is feasible, we must have $p - d - 2 > 0$, and the upper bound on $l_\Delta$ in (30) must be no smaller than the lower bound on $l_\Delta$ in (27).
Hence, the proof is complete.    □
Based on Lemma 2, Algorithm 1 is proposed to solve Problem (20), where we check the necessary conditions of the feasibility of Problem (20) before solving it using the MATLAB “intlinprog” function.
Algorithm 1 Iterative Algorithm for Problem (20)
Input: $(t, s, d)$, and $(t_0, s_0, d_0)$ calculated according to (19e)
Output: $\mathbf{J}$, $l_\Delta$
1: if $\frac{Y - tsd - ts + 1}{sd + 2s} \geq \frac{tsd + ts - 1}{s(p - d - 2)}$ and $p - d - 2 > 0$ then
2:    Solve Problem (22) with the MATLAB function “intlinprog”.
3: else
4:    Problem (20) is infeasible.
5: end if
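The guard in Step 1 of Algorithm 1 amounts to comparing the lower bound (27) with the upper bound (30); a compact sketch (our hypothetical helper; $Y$ is computed as in (23)):

```python
import numpy as np

def lemma2_feasible(t, s, d, t0, s0, d0, p, V, Cu, Cd, Q_th):
    """Necessary conditions of Lemma 2 for Problem (20) to be feasible."""
    if p - d - 2 <= 0:
        return False
    Y = np.sum(Q_th / (t0*s0*d0/V + (t0*s0 + s0*d0)/Cu + t0*d0/Cd))  # (23)
    lower = (t*s*d + t*s - 1) / (s * (p - d - 2))   # lower bound (27)
    upper = (Y - t*s*d - t*s + 1) / (s*d + 2*s)     # upper bound (30)
    return upper >= lower
```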

4.2. Optimization Subproblem of $(t, s, d)$ for a Fixed $(\mathbf{J}, l_\Delta)$

In this subsection, given ( J , l Δ ) , the optimization subproblem of Problem (19) corresponding to ( t , s , d ) is formulated as
$$\begin{aligned} \min_{t, s, d} \quad & \text{(19a)} && \text{(31a)} \\ \text{s.t.} \quad & (10), (11), \text{(19c)}, \text{(19d)}, \text{(19e)}, && \text{(31b)} \\ & 1 \leq t \leq T, \; 1 \leq s, \; 1 \leq d \leq D, && \text{(31c)} \\ & d \leq p - 3, && \text{(31d)} \\ & t, s, d \in \mathbb{Z}^+, && \text{(31e)} \end{aligned}$$
where constraint (31d) is derived from Lemma 2.
The ceiling functions $\lceil T/t \rceil$, $\lceil S/s \rceil$, and $\lceil D/d \rceil$ in (19e), together with the integer constraint (31e), make Problem (31) hard to address. We can tackle this subproblem by relaxing the ceiling functions, i.e., Problem (31) can be recast as
$$\begin{aligned} \min_{t, s, d} \quad & \mathbf{J}^T \mathbf{c}^U \left( TS\, t^{-1} s^{-1} + SD\, s^{-1} d^{-1} \right) + \mathbf{J}^T \mathbf{c}^D\, TD\, t^{-1} d^{-1} + \mathbf{J}^T \mathbf{c}^C\, TSD\, t^{-1} s^{-1} d^{-1} && \text{(32a)} \\ \text{s.t.} \quad & \mathbf{B}_{\mathcal{P}}^T \mathbf{J} \leq l_\Delta s\, \mathbf{1}_M, && \text{(32b)} \\ & l_\Delta(sd + 2s) + ts(d+1) - 1 \leq \mathbf{1}^T \mathbf{J}, && \text{(32c)} \\ & \left( TS\, t^{-1} s^{-1} + SD\, s^{-1} d^{-1} + TD\, t^{-1} d^{-1} \right) \mathbf{1}_N \leq \mathbf{M}, && \text{(32d)} \\ & \frac{J_n}{V_n}\, TSD\, t^{-1} s^{-1} d^{-1} + \frac{J_n}{C_n^U} \left( TS\, t^{-1} s^{-1} + SD\, s^{-1} d^{-1} \right) + \frac{J_n}{C_n^D}\, TD\, t^{-1} d^{-1} \leq Q_{\mathrm{th}}, \quad n \in [1:N], && \text{(32e)} \\ & \text{(31c)}, \text{(31d)}, \text{(31e)}. && \text{(32f)} \end{aligned}$$
Problem (32) is an integer geometric programming problem, which can be solved directly with the MATLAB toolbox YALMIP. YALMIP relies on the interior point method and the BnB algorithm [22,23] and handles integer geometric programs such as (32).
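For readers without YALMIP, and since the search ranges in (31c)–(31d) are modest, the $(t, s, d)$ subproblem can also be handled by direct enumeration against the original, unrelaxed constraints. The sketch below is a brute-force stand-in for the geometric-programming approach, not the authors' implementation; `s_max` is a user-chosen cap on the otherwise unbounded $s$:

```python
import numpy as np
from math import ceil

def search_tsd(J, l_delta, T, S, D, B_P, M_cap, V, Cu, Cd, cu, cd, cc,
               Q_th, p, s_max):
    """Enumerate (t, s, d) for fixed (J, l_delta), checking (19a)-(19e)."""
    best, best_cost = None, np.inf
    N1 = int(J.sum())
    sec_lhs = B_P.T @ J                    # left-hand side of (10), fixed
    for t in range(1, T + 1):
        for s in range(1, s_max + 1):
            for d in range(1, min(D, p - 3) + 1):              # (31c), (31d)
                t0, s0, d0 = ceil(T/t), ceil(S/s), ceil(D/d)   # (19e)
                if np.any(sec_lhs > l_delta * s):              # (10)
                    continue
                if N1 < l_delta*(s*d + 2*s) + t*s*(d + 1) - 1:  # (11)
                    continue
                if np.any(t0*s0 + s0*d0 + t0*d0 > M_cap):       # (19c)
                    continue
                per = t0*s0*d0/V + (t0*s0 + s0*d0)/Cu + t0*d0/Cd
                if np.max(J * per) > Q_th:                      # (19d)
                    continue
                cost = ((t0*s0 + s0*d0)*(J @ cu) + t0*d0*(J @ cd)
                        + t0*s0*d0*(J @ cc))                    # (19a)
                if cost < best_cost:
                    best, best_cost = (t, s, d), cost
    return best, best_cost
```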

4.3. The Proposed Alternating Optimization (AO) Algorithm

Based on the above discussion of the two subproblems, we propose an AO algorithm as follows. In every AO iteration, for a fixed $(t, s, d) = (t^{(\tau)}, s^{(\tau)}, d^{(\tau)})$ and $(t_0, s_0, d_0) = (\lceil T/t^{(\tau)} \rceil, \lceil S/s^{(\tau)} \rceil, \lceil D/d^{(\tau)} \rceil)$, we use Algorithm 1 to solve (22). Then, for the output $(\mathbf{J}, l_\Delta)$ of Algorithm 1, we solve Problem (32) with YALMIP directly and obtain $(t^{(\tau+1)}, s^{(\tau+1)}, d^{(\tau+1)})$.
Since Problem (32) is obtained by relaxing the ceiling functions, we may face the following issue: even when the $(t, s, d)$ found by YALMIP are integers, which we denote by $(\bar{t}, \bar{s}, \bar{d})$, the corresponding $T/\bar{t}$, $S/\bar{s}$, and $D/\bar{d}$ in Problem (32) may not be integers. To overcome this, for the converged solution $(t^*, s^*, d^*)$ of the AO, we check whether constraints (19c) and (19d) of the original Problem (19) are satisfied under the definition (19e). If they are, $(t^*, s^*, d^*)$ is taken as the solution to Problem (31), and the corresponding block dimensions $(t_0, s_0, d_0)$ are taken to be $(\lceil T/t^* \rceil, \lceil S/s^* \rceil, \lceil D/d^* \rceil)$ by padding zeros. If they are not, we can either perform an exhaustive search within a neighborhood of the converged solution $(t^*, s^*, d^*)$ for a feasible solution, or restart the algorithm with a new random initial point $(t^{(0)}, s^{(0)}, d^{(0)})$. In a time-constrained system, we can also skip the optimization of $(t, s, d)$ and simply use the initial values $(t^{(0)}, s^{(0)}, d^{(0)})$ to obtain a timely solution.
Finally, the proposed AO algorithm to solve Problem (19) is summarized in Algorithm 2. (The source code can be found in the following link: https://github.com/SendBullet/SDMM-opt (accessed on 14 March 2024))
Algorithm 2 Alternating Optimization Algorithm for Problem (19)
1: Initialize $t^{(0)}, s^{(0)}, d^{(0)}$, $\tau = 0$, and the tolerance $\epsilon$, where $t^{(0)}, s^{(0)}, d^{(0)}$ are chosen randomly from the divisors of $(T, S, D)$.
2: repeat
3:    Given $(t, s, d) = (t^{(\tau)}, s^{(\tau)}, d^{(\tau)})$, calculate $\mathbf{J}^{(\tau+1)}, l_\Delta^{(\tau+1)}$ by Algorithm 1.
4:    Given $\mathbf{J} = \mathbf{J}^{(\tau+1)}$ and $l_\Delta = l_\Delta^{(\tau+1)}$, calculate $(t^{(\tau+1)}, s^{(\tau+1)}, d^{(\tau+1)})$ by solving Problem (32) with YALMIP directly.
5:    Set $\tau = \tau + 1$.
6: until the fractional change of the objective function of Problem (19) is less than $\epsilon$.
7: if constraints (19c), (19d), and (19e) are satisfied simultaneously then
8:    Output $\mathbf{J}^{(\tau)}$, $l_\Delta^{(\tau)}$, $(t^{(\tau)}, s^{(\tau)}, d^{(\tau)})$, and the block dimensions $(t_0, s_0, d_0) = (\lceil T/t^{(\tau)} \rceil, \lceil S/s^{(\tau)} \rceil, \lceil D/d^{(\tau)} \rceil)$.
9: else
10:    Return to Step 1 and restart with a new random initial point.
11: end if
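In code, the outer AO loop is a thin wrapper around the two subproblem solvers; the skeleton below is our sketch of Algorithm 2, where `solve_J` and `solve_tsd` stand for the (hypothetical) solvers of Sections 4.1 and 4.2:

```python
def alternating_optimization(solve_J, solve_tsd, init_tsd,
                             eps=1e-8, max_iter=50):
    """Skeleton of Algorithm 2. `solve_J(t, s, d)` returns (J, l_delta, cost)
    or None if Problem (22) is infeasible; `solve_tsd(J, l_delta)` returns
    an updated (t, s, d) from Problem (32)."""
    t, s, d = init_tsd
    prev = float("inf")
    for _ in range(max_iter):
        sol = solve_J(t, s, d)                  # Step 3 (Algorithm 1)
        if sol is None:
            return None                         # restart with a new init
        J, l_delta, cost = sol
        t, s, d = solve_tsd(J, l_delta)         # Step 4
        if abs(prev - cost) <= eps * max(abs(cost), 1.0):
            break                               # Step 6: converged
        prev = cost
    return J, l_delta, (t, s, d)
```
The feasibility check of Steps 7–10 (constraints (19c)–(19e) under the ceiling definitions) would then be applied to the returned $(t, s, d)$ before accepting the solution.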

4.4. Complexity Analysis

The complexity of Algorithm 2 per iteration mainly lies in Steps 3 and 4. In Step 3, the complexity of Algorithm 1 comes from solving Problem (20) with the MATLAB function “intlinprog”, which uses the BnB method. Omitting lower-order terms, the main complexity of Algorithm 1 per iteration is $O(2^N)$, where $N$ is the number of computing nodes. In Step 4, similarly, the main complexity of solving Problem (32) by YALMIP with the BnB method is $O(2^3)$, where 3 is the number of optimization variables $(t, s, d)$ [24]. Hence, neglecting lower-order terms, the approximate computational complexity of Algorithm 2 per iteration is $O(2^N)$ when $N \geq 3$ and $O(2^3)$ when $N \leq 2$. As can be seen, the complexity scales exponentially with the number of computing nodes.

5. Simulation Results

In this section, we provide simulation results to evaluate the performance of the two-sided SDMM under arbitrary collusion patterns. We consider two collusion patterns. The first has $N = 11$ computing nodes with collusion pattern $\mathcal{P}_1 = \{\{1,4\}, \{2,5\}, \{1,2,6\}, \{3,7\}, \{4,5,6,7\}, \{8\}, \{9\}, \{10\}, \{11\}\}$, while the second consists of $N = 20$ computing nodes with collusion pattern $\mathcal{P}_2 = \{\{1,2,3,4,5,6\}, \{6,7,8,9,10,11\}, \{12,13\}, \{14\}, \{13,15\}, \{16,17\}, \{18\}, \{19\}, \{20\}\}$. For these two collusion patterns, the smallest number of colluding sets containing all computing nodes is $p = 7$ and $p = 9$, respectively. The stopping criterion of Algorithm 2 is set to $\epsilon = 10^{-8}$ [25]. The other system parameters are listed in Table 1.
For simplicity, our proposed scheme is denoted by “Pro.”. The following two benchmarks are compared with our proposed scheme:
(1)
“N/0.”: In this scenario, we do not append zeros to the input matrices. The optimization subproblem corresponding to $(t, s, d)$ for a fixed $(\mathbf{J}, l_\Delta)$ is solved by an exhaustive search over feasible triples $(\tilde{t}, \tilde{s}, \tilde{d})$ that are divisors of $(T, S, D)$. The other details are as in Algorithm 2. This corresponds to the optimal performance of AO when no zeros are appended.
(2)
“SE.”: First, $(t, s, d)$ is initialized randomly with divisors of $(T, S, D)$. Then, we solve Problem (22) to obtain $(\mathbf{J}, l_\Delta)$. This is a low-complexity algorithm in which no zeros are appended and $(t, s, d)$ is chosen randomly without being optimized; only $(\mathbf{J}, l_\Delta)$ is optimized for the fixed, randomly chosen $(t, s, d)$.
First, we consider the collusion pattern $\mathcal{P}_1$ with 11 computing nodes. Figure 2 shows the total cost versus the number of rows of matrix $\mathbf{A}$, i.e., $T$. Firstly, as $T$ increases, the total cost of all schemes increases, which is caused by the growth of the input matrix dimension. Secondly, our proposed algorithm outperforms the “N/0.” scheme when $T \geq 2500$, which means that for $T \geq 2500$, it is better to append zeros to the matrices to obtain a lower cost. Thirdly, our proposed algorithm always performs better than the “SE.” scheme, which demonstrates the necessity of both appending zeros and performing AO. Lastly, from the comparison between $(S, D) = (2500, 2500)$ and $(S, D) = (3500, 3500)$, we observe that the cost of the proposed scheme increases with the size of the matrices.
Figure 3 plots the total cost versus the number of columns of matrix $\mathbf{B}$, i.e., $D$. Similar to Figure 2, the gap between our proposed scheme and the “SE.” scheme grows as the dimensions of the input matrices increase. However, the “N/0.” scheme achieves the same total cost as our proposed algorithm, which shows that, in this case, there is no need to pad zeros. Although the two schemes have the same performance here, the proposed scheme has lower complexity because it avoids the exhaustive search of the “N/0.” scheme.
Figure 4 illustrates the total cost versus the number of columns of matrix $\mathbf{A}$, i.e., $S$, which is also the number of rows of matrix $\mathbf{B}$. Although the total cost of our proposed scheme equals that of the “N/0.” scheme for some values of $S$, the gain of our proposed algorithm over the “N/0.” scheme increases with $S$. In fact, the gain is very significant for large $S$; for example, when $S = 4000$, the total cost incurred by the proposed scheme is only 55.77% of that of the “N/0.” scheme when $(T, D) = (2500, 2500)$ and 55.07% when $(T, D) = (3500, 3500)$.
Figure 5, Figure 6 and Figure 7, respectively, depict the total cost with respect to T, S, and D when the number of computing nodes is 20. Similarly, our proposed scheme strictly outperforms the other two benchmarks in some cases, which further shows the superiority of the proposed scheme. Comparing Figure 2, Figure 3 and Figure 4 with Figure 5, Figure 6 and Figure 7, respectively, we see that the total cost decreases significantly with the increase in the number of computing nodes. Thus, when possible, the user should utilize more computing nodes to reduce the total cost.

6. Conclusions

In this paper, we investigated the problem of minimizing the total cost, comprised of the computation cost and the communication cost, in a system of two-sided SDMM under an arbitrary collusion pattern. To realize SDMM, we split the two input matrices into blocks and appended extra blocks of random matrices to guarantee the security of the two input matrices; the matrix multiplication is then carried out on the encoded copies at the computing nodes. Our aim was to minimize the total cost while ensuring the security constraint of the two input matrices, the decodability constraint of the desired result of the multiplication, the storage capacity of the computing nodes, and the delay constraint. The distribution vector, the number of appended random matrices, and all matrix splitting factors were optimized. To overcome the divisibility problem of matrix splitting, we first proposed a strategy of appending zeros to the two input matrices and then discussed the value ranges of some matrix splitting factors for the optimality of the problem. Next, an AO algorithm was provided to obtain a feasible solution. Furthermore, some necessary conditions were provided to verify the feasibility of the proposed optimization problem. Numerical results demonstrated that our proposed scheme achieves a lower total cost than both the scheme without appending zeros and the scheme without AO.

Author Contributions

Conceptualization, J.L., N.L. and W.K.; methodology, J.L., N.L. and W.K.; formal analysis, J.L., N.L. and W.K.; writing—original draft preparation, J.L.; writing—review and editing, J.L., N.L. and W.K.; supervision, N.L. and W.K.; Funding acquisition, N.L. and W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Natural Science Foundation of China under Grants 62071115, 62361146853, 62371129, and the Research Fund of National Mobile Communications Research Laboratory, Southeast University (No. 2024A03).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Proof of Lemma 1. 
First, we consider the case of $t$ for a fixed $(l_\Delta, s, d)$, i.e., appending an arbitrary number of rows of zeros to $\mathbf{A} \in \mathbb{F}^{T \times S}$.
We prove by contradiction. For a fixed $(l_\Delta, s, d)$, suppose the optimal value of $t$, i.e., $t^*$, takes a value in $\{T+1, T+2, \ldots\}$. This means that the padded matrix, denoted by $\mathbf{A}_1$, contains at least one entire row of blocks whose values are all $\mathbf{0}_{t_0 \times s_0}$. We may remove this entire row of zero blocks and obtain another matrix, denoted by $\mathbf{A}_2$. We will show that the cost at the user is smaller if we use $\mathbf{A}_2$ in place of $\mathbf{A}_1$. This means that it is suboptimal for the padded matrix to contain an entire row of zero blocks; in other words, $t^*$ cannot take values in $\{T+1, T+2, \ldots\}$.
To see that the optimal cost at the user is smaller if we use $\mathbf{A}_2$ in place of $\mathbf{A}_1$, note that both matrices have the same block dimension $t_0 \times s_0$; the difference is that $\mathbf{A}_2$ has fewer blocks, i.e., a smaller $t$. Any $\mathbf{J}$ feasible for the problem of $\mathbf{A}_1$ is also feasible for the problem of $\mathbf{A}_2$: the security constraint (10) is the same for both problems due to the fixed $(l_\Delta, s, d)$; the storage constraint (13) and the delay constraint (16) are the same because both problems have the same block dimension $t_0 \times s_0$; and the decodability constraint (11) is more stringent for the problem of $\mathbf{A}_1$, as $t$ is larger for $\mathbf{A}_1$. Furthermore, any $\mathbf{J}$ feasible for both problems incurs the same cost (17), again because both problems have the same block dimension $t_0 \times s_0$. Therefore, since the feasible set of the problem for $\mathbf{A}_2$ is larger, the optimal cost at the user is smaller.
The same argument applies to $d$ for a fixed $(l_\Delta, s, t)$.
Hence, the proof is complete. □

References

  1. El-Sayed, H.; Sankar, S.; Prasad, M.; Puthal, D.; Gupta, A.; Mohanty, M.; Lin, C.T. Edge of Things: The Big Picture on the Integration of Edge, IoT and the Cloud in a Distributed Computing Environment. IEEE Access 2018, 6, 1706–1717. [Google Scholar] [CrossRef]
  2. Zaharia, M.; Chowdhury, M.; Franklin, M.J.; Shenker, S.; Stoica, I. Spark: Cluster computing with working sets. In Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10), Boston, MA, USA, 22–25 June 2010. [Google Scholar]
  3. Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. In Proceedings of the 6th Symposium on Operating Systems Design and Implementation, San Francisco, CA, USA, 6–8 December 2004. [Google Scholar]
  4. D’Oliveira, R.G.; El Rouayheb, S.; Karpuk, D. GASP codes for secure distributed matrix multiplication. IEEE Trans. Inf. Theory 2020, 66, 4038–4050. [Google Scholar] [CrossRef]
  5. D’Oliveira, R.G.; El Rouayheb, S.; Heinlein, D.; Karpuk, D. Degree tables for secure distributed matrix multiplication. IEEE J. Sel. Areas Inf. Theory 2021, 2, 907–918. [Google Scholar] [CrossRef]
  6. Jia, Z.; Jafar, S.A. On the capacity of secure distributed batch matrix multiplication. IEEE Trans. Inf. Theory 2021, 67, 7420–7437. [Google Scholar] [CrossRef]
  7. Kiah, H.M.; Kim, W.; Kruglik, S.; Ling, S.; Wang, H. Explicit Low-Bandwidth Evaluation Schemes for Weighted Sums of Reed-Solomon-Coded Symbols. IEEE Trans. Inf. Theory 2024. [Google Scholar] [CrossRef]
  8. Machado, R.A.; D’Oliveira, R.G.L.; Rouayheb, S.E.; Heinlein, D. Field Trace Polynomial Codes for Secure Distributed Matrix Multiplication. In Proceedings of the 2021 XVII International Symposium “Problems of Redundancy in Information and Control Systems” (REDUNDANCY), Moscow, Russia, 25–29 October 2021; pp. 188–193. [Google Scholar] [CrossRef]
  9. Chang, W.T.; Tandon, R. On the capacity of secure distributed matrix multiplication. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6. [Google Scholar]
  10. Bitar, R.; Parag, P.; El Rouayheb, S. Minimizing latency for secure coded computing using secret sharing via staircase codes. IEEE Trans. Commun. 2020, 68, 4609–4619. [Google Scholar] [CrossRef]
  11. Ebadifar, S.; Kakar, J.; Sezgin, A. The Need for Alignment in Rate-Efficient Distributed Two-Sided Secure Matrix Computation. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
  12. Stinson, D.R. An explication of secret sharing schemes. Des. Codes Cryptogr. 1992, 2, 357–390. [Google Scholar] [CrossRef]
  13. Tajeddine, R.; Gnilke, O.W.; Karpuk, D.; Freij-Hollanti, R.; Hollanti, C.; El Rouayheb, S. Private information retrieval schemes for coded data with arbitrary collusion patterns. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1908–1912. [Google Scholar]
  14. Yao, X.; Liu, N.; Kang, W. The capacity of private information retrieval under arbitrary collusion patterns for replicated databases. IEEE Trans. Inf. Theory 2021, 67, 6841–6855. [Google Scholar] [CrossRef]
  15. Kakar, J.; Khristoforov, A.; Ebadifar, S.; Sezgin, A. Uplink-downlink tradeoff in secure distributed matrix multiplication. arXiv 2019, arXiv:1910.13849. [Google Scholar]
  16. Yang, H.; Lee, J. Secure Distributed Computing With Straggling Servers Using Polynomial Codes. IEEE Trans. Inf. Forensics Secur. 2019, 14, 141–150. [Google Scholar] [CrossRef]
  17. Chen, Z.; Jia, Z.; Wang, Z.; Jafar, S.A. GCSA codes with noise alignment for secure coded multi-party batch matrix multiplication. IEEE J. Sel. Areas Inf. Theory 2021, 2, 306–316. [Google Scholar] [CrossRef]
  18. Aliasgari, M.; Simeone, O.; Kliewer, J. Private and Secure Distributed Matrix Multiplication With Flexible Communication Load. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2722–2734. [Google Scholar] [CrossRef]
  19. Yao, Y.; Liu, N.; Kang, W.; Li, C. Secure Distributed Matrix Multiplication Under Arbitrary Collusion Pattern. IEEE Trans. Inf. Forensics Secur. 2023, 18, 85–100. [Google Scholar] [CrossRef]
  20. Van Roy, T.J.; Wolsey, L.A. Integer Programming; Core Discussion Papers Rp; Center for Operations Research and Econometrics: Ottignies-Louvain-la-Neuve, Belgium, 2009. [Google Scholar]
  21. Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  22. Boyd, S.; Kim, S.J.; Vandenberghe, L.; Hassibi, A. A tutorial on geometric programming. Optim. Eng. 2007, 8, 67–127. [Google Scholar] [CrossRef]
  23. Lofberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No.04CH37508), Taipei, Taiwan, 2–4 September 2004; pp. 284–289. [Google Scholar] [CrossRef]
  24. Wang, Y.; Chen, M.; Li, Z.; Hu, Y. Joint Allocations of Radio and Computational Resource for User Energy Consumption Minimization Under Latency Constraints in Multi-Cell MEC Systems. IEEE Trans. Veh. Technol. 2023, 72, 3304–3320. [Google Scholar] [CrossRef]
  25. Zhou, G.; Pan, C.; Ren, H.; Wang, K.; Nallanathan, A. Intelligent reflecting surface aided multigroup multicast MISO communication systems. IEEE Trans. Signal Process. 2020, 68, 3236–3251. [Google Scholar] [CrossRef]
Figure 1. Two-sided secure distributed matrix multiplication.
Figure 2. Total cost versus $T$ when $N = 11$.
Figure 3. Total cost versus $D$ when $N = 11$.
Figure 4. Total cost versus $S$ when $N = 11$.
Figure 5. Total cost versus $T$ when $N = 20$.
Figure 6. Total cost versus $D$ when $N = 20$.
Figure 7. Total cost versus $S$ when $N = 20$.
Table 1. System parameters.

| Parameters | Values for $N = 11$ | Values for $N = 20$ |
| --- | --- | --- |
| Storage capacity $\mathbf{M}$ | $M_1 = \cdots = M_4 = 1 \times 10^6$, $M_5 = \cdots = M_8 = 2 \times 10^6$, $M_9 = M_{10} = M_{11} = 3 \times 10^6$ | $M_1 = \cdots = M_7 = 1 \times 10^6$, $M_8 = \cdots = M_{15} = 2 \times 10^6$, $M_{16} = \cdots = M_{20} = 3 \times 10^6$ |
| Computation speed $V_n$ | $V_1 = \cdots = V_4 = 2 \times 10^8$, $V_5 = \cdots = V_8 = 5 \times 10^8$, $V_9 = V_{10} = V_{11} = 8 \times 10^8$ | $V_1 = \cdots = V_7 = 2 \times 10^8$, $V_8 = \cdots = V_{15} = 5 \times 10^8$, $V_{16} = \cdots = V_{20} = 8 \times 10^8$ |
| Uplink capacity $C_n^U$ | $C_1^U = \cdots = C_4^U = 1024 \times 10^4$, $C_5^U = \cdots = C_8^U = 2048 \times 10^4$, $C_9^U = C_{10}^U = C_{11}^U = 4096 \times 10^4$ | $C_1^U = \cdots = C_7^U = 1024 \times 10^4$, $C_8^U = \cdots = C_{15}^U = 2048 \times 10^4$, $C_{16}^U = \cdots = C_{20}^U = 4096 \times 10^4$ |
| Downlink capacity $C_n^D$ | $C_1^D = \cdots = C_4^D = 1024 \times 10^4$, $C_5^D = \cdots = C_8^D = 2048 \times 10^4$, $C_9^D = C_{10}^D = C_{11}^D = 4096 \times 10^4$ | $C_1^D = \cdots = C_7^D = 1024 \times 10^4$, $C_8^D = \cdots = C_{15}^D = 2048 \times 10^4$, $C_{16}^D = \cdots = C_{20}^D = 4096 \times 10^4$ |
| Upload cost $\mathbf{c}^U$ | $c_1^U = \cdots = c_4^U = 3 \times 10^8$, $c_5^U = \cdots = c_8^U = 4 \times 10^8$, $c_9^U = c_{10}^U = c_{11}^U = 5 \times 10^8$ | $c_1^U = \cdots = c_7^U = 3 \times 10^8$, $c_8^U = \cdots = c_{15}^U = 4 \times 10^8$, $c_{16}^U = \cdots = c_{20}^U = 5 \times 10^8$ |
| Download cost $\mathbf{c}^D$ | $c_1^D = \cdots = c_4^D = 3 \times 10^8$, $c_5^D = \cdots = c_8^D = 4 \times 10^8$, $c_9^D = c_{10}^D = c_{11}^D = 5 \times 10^8$ | $c_1^D = \cdots = c_7^D = 3 \times 10^8$, $c_8^D = \cdots = c_{15}^D = 4 \times 10^8$, $c_{16}^D = \cdots = c_{20}^D = 5 \times 10^8$ |
| Computation cost $\mathbf{c}^C$ | $c_1^C = \cdots = c_4^C = 2 \times 10^8$, $c_5^C = \cdots = c_8^C = 6 \times 10^8$, $c_9^C = c_{10}^C = c_{11}^C = 8 \times 10^8$ | $c_1^C = \cdots = c_7^C = 2 \times 10^8$, $c_8^C = \cdots = c_{15}^C = 6 \times 10^8$, $c_{16}^C = \cdots = c_{20}^C = 8 \times 10^8$ |
| Delay threshold $Q_{\mathrm{th}}$ | 1000 | 1000 |
