Article

Certificateless Remote Data Integrity Auditing with Access Control of Sensitive Information in Cloud Storage

1 College of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710311, China
2 Xi’an Aerospace Remote Sensing Data Technology Corporation, Xi’an 710000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(19), 3116; https://doi.org/10.3390/electronics11193116
Submission received: 31 August 2022 / Revised: 24 September 2022 / Accepted: 26 September 2022 / Published: 29 September 2022
(This article belongs to the Special Issue Cloud Security in the Age of IoT)

Abstract

With the spread of cloud storage technology, effectively checking the integrity of data stored in the cloud has become an increasing concern. Following the introduction of the first remote data integrity auditing schemes, many auditing schemes with various characteristics have been proposed. However, most existing solutions suffer from problems such as additional storage overhead and the burden of certificate management. This paper proposes a certificateless remote data integrity auditing scheme that takes the storage burden and data privacy into account while ensuring the correctness of the audit results. In addition, the certificateless design enables the proposed scheme to avoid the series of burdens brought by certificates. The scheme provides a data access control function whereby only users who hold a valid token generated by the data owner can access the target data in the cloud. Finally, this paper provides a detailed security proof to ensure the soundness of the results. A theoretical analysis and subsequent experimental verification show that the proposed scheme is both effective and feasible.

1. Introduction

With the development of the information revolution, human society has entered a highly informationized mode. Informatization has fundamentally changed the way people live and brought great developments to industrial society. As an important source of power for social informatization, data are now regarded as a strategic resource, like oil and coal, by many countries. Although the emergence of massive data has greatly promoted the progress of society, it has brought problems as well, for example, the extra cost of storing massive data. Because storing data locally would cost data owners (DO) a great deal of time and energy, they generally choose to store these data with cloud service providers (CSP), which have greater storage and computing capabilities. Although this move reduces the burden on the data owner to a certain extent, it causes the DO to lose direct control over the data; that is, the owner may not be able to obtain the latest iteration of their data at any time. Data stored in the cloud can be corrupted for a variety of reasons. For example, in order to save on storage costs, a CSP may intentionally delete data that are not frequently accessed by the DO. In addition, malicious attacks from third parties or hardware failures on the CSP’s servers can cause data corruption. Furthermore, the CSP may choose to hide such corruption from the DO in order to preserve its reputation. Such concealment by the CSP is likely to cause a degree of economic loss or other harm to the DO. In order to solve this problem, remote data integrity auditing schemes are increasingly being applied.
The first data integrity auditing scheme was proposed by Ateniese et al. [1] in 2007, in which the concept of provable data possession (PDP) was proposed. During the same period, Juels et al. [2] proposed another type of auditing scheme, proof of retrievability (PoR), for the first time. Following these early developments, research on data integrity auditing schemes has made great progress, and many schemes with their own characteristics have been proposed one after the other [3,4,5,6,7].
Beyond ensuring the accuracy of data auditing, designers of auditing schemes pay attention to realizing other properties, among which the most important is privacy protection. Data privacy has become a topic of great concern in today’s society, mainly because data often contain private information pertaining to the DO, which may cause serious harm if leaked. The existing solution is for the DO to encrypt or blind the data locally before generating tags and uploading them to the cloud [8,9]. Although this approach can effectively protect the privacy of the data, encryption and blinding operations impose a huge computational burden on the DO and hinder the realization of the data’s value.
In addition to privacy, data access control is a concern. Data are of maximum value only when circulated. The most direct manifestation of this is the emergence of a large number of base stations in the edge environment: in order to realize the value of their data, the CSP or DO sets up base stations in a fixed area and provides services to nearby users. Accordingly, many data integrity auditing schemes have been designed for edge environments [10,11,12]. In the field of cloud storage, however, data access control cannot directly adopt the edge-environment model, because the cloud storage environment does not consider users in a fixed area; instead, it serves authorized users who hold legal warrants to acquire data from the CSP. Therefore, data access control under cloud storage conditions must consider the correctness and revocability of warrants.
Finally, the core of extant data integrity auditing schemes is digital signature technology: the DO guarantees the implementation of the scheme by generating a corresponding tag for each data block. This approach can effectively check the integrity of data stored in the cloud; however, it means that the DO must bear extra overhead to store the tags in the cloud. In most existing schemes, the storage overhead generated by the tags is close to or even greater than that of the data blocks themselves [13,14,15]. The storage overhead incurred by these tags has become a major factor hindering the application of data auditing schemes.

1.1. Contribution

In order to solve the main problems with current auditing schemes, we design a certificateless remote data integrity auditing scheme with an access control function and sensitive information hiding. The main contributions of this paper are as follows:
(1) We propose a certificateless remote data integrity scheme that avoids the additional overhead brought by certificates. At the same time, in order to reduce the storage overhead caused by tags, our scheme does not generate a separate tag for each data block; instead, it aggregates multiple data blocks into one signature, reducing the number of tags.
(2) In order to make better use of the value of data, our scheme implements an access control function: the DO generates warrants for authorized users, allowing them to acquire data from the cloud. The warrants generated in our scheme are revocable, meaning that the DO can revoke a user’s access at any time. At the same time, the proposed scheme takes privacy into account: during implementation, only authorized users, the DO, and the CSP can access the original data, while unauthorized users and the TPA cannot obtain the data by any means.
(3) We provide a detailed security proof for the proposed scheme and compare it with two other schemes through both theoretical analysis and experimental verification. The results show that the proposed scheme outperforms [8,15], mainly in terms of shorter tag generation time and lower storage overhead on the part of the CSP.

1.2. Related Works

In 2012, Yang et al. [16] proposed a data integrity auditing scheme with privacy protection and support for dynamic updates, which can additionally support batch auditing for multiple DOs and CSPs. However, their audit protocol is based on the traditional public key infrastructure (PKI), and as such suffers from the huge costs caused by the use of certificates. In 2013, Wang et al. [17] first proposed a certificateless public auditing scheme to verify the integrity of data in the cloud in order to avoid the security risks brought by PKI; however, they did not consider privacy or dynamic updating of data. In the same year, Kai et al. [18] proposed a data integrity auditing scheme for a multi-cloud environment with support for simultaneous verification of multiple audit requests for different data files stored on different CSPs by different DOs; the authors introduced a homomorphic encryption mechanism to ensure the privacy of the auditing process, but ignored the extra overhead caused by certificates. In 2014, Yuan et al. [19] proposed a remote data integrity auditing scheme with support for multi-user modification and collusion resistance based on full consideration of realistic scenarios. However, their approach did not address the risk of data leakage and was not able to guarantee privacy. In 2015, Jiang et al. [20] proposed a data integrity auditing scheme based on vector commitment and verifier-local revocation group signatures; however, this approach cannot guarantee the privacy of the data. In 2016, Zhang et al. [21] proposed a lightweight auditing scheme that entrusts most of the computation in the auditing process to the cloud in order to reduce the burden of auditing. Under this scheme, third-party auditors can perform multiple audit tasks simultaneously; however, the risk of data leakage is increased. In 2017, Li et al. [22] designed a scheme that solves the complex key management problem in traditional methods by employing biometrics as a fuzzy identity; however, new computational overhead is generated in the process of key matching. In a paper from 2019, Shen et al. [5] designed an auditing scheme that does not require a conventional private key, using biometric data such as fingerprints as a fuzzy private key to sign data blocks and thereby eliminate the storage burden of saving the private key; however, the fuzzy private key requires extra computational overhead to implement. In 2018, in order to better realize data sharing, Shen et al. [23] implemented data auditing and sharing by introducing a sanitizer that sanitizes the data blocks containing sensitive information and their corresponding tags; however, the inclusion of the sanitizer increases the computational cost of tag generation. In 2020, Zhao et al. [24] embedded blockchain technology into the cloud auditing system and designed a data integrity auditing scheme with a privacy protection function by combining blockchain and digital signature technology; however, the encryption algorithm they used to ensure data security adds an additional computational burden. In the same year, Lan et al. [25] pointed out that the RDIC protocol proposed by C. Sasikala et al. was not secure, and provided a rigorous proof analysis. In 2021, in order to avoid the waste of resources caused by repeatedly challenging specific data blocks over a short period of time, Wang et al. [26] determined the time at which such data blocks are checked by predicting user behavior and selecting the challenged data blocks on this basis. In 2022, Yang et al. [27] implemented a data integrity auditing scheme with support for dynamic insertion and provable deletion through a number-rank-based Merkle hash tree. In another 2022 paper, S. Dhelim et al. [28] proposed and designed a large-scale IoT trust management system able to effectively manage the trust relationships of devices in large-scale IoT contexts. Table 1 compares the characteristics of the works referenced above.

1.3. Organization

The rest of this paper is organized as follows. In Section 2, we describe several of the technologies that support our paper and then provide the scheme outline and security model. In Section 3, we elaborate our proposed remote data integrity auditing scheme. In Section 4, we analyze the security of our scheme in detail. Section 5 presents the results of our performance evaluation. Finally, we present the conclusions of this paper in Section 6.

2. Preliminaries

In this section, we provide relevant knowledge points and describe the security model of the scheme we designed. Important mathematical notations used in this paper are listed in Table 2.

2.1. System Model

There are five entities in our scheme: DO, CSP, KGC, TPA, and Users. The relationship between them is shown in Figure 1.
(1) KGC: The key generation center (KGC) is responsible for generating the public parameters and a partial private key for the DO.
(2) DO: The DO is the actual owner of the data and is responsible for splitting the file into appropriate sizes, generating corresponding tags, and generating warrants for authorized users.
(3) CSP: The CSP is an institution with sufficient storage and computing power, and is responsible for properly storing data and making it available to authorized users.
(4) TPA: The third-party auditor (TPA) is an institution with sufficient computing power that is responsible for randomly issuing data integrity challenges to the CSP. In our scheme, the TPA is considered semi-honest, i.e., while it honestly reports auditing results, it is curious about the data.
(5) Users: Users are the groups who want to use the data stored by the CSP; they can obtain data from the CSP using legitimate warrants.

2.2. Bilinear Maps

Let q be a large prime and let G1 and G2 be two cyclic multiplicative groups of the same order q, where g is a generator of G1 and e: G1 × G1 → G2 is a bilinear map with the following properties:
(1) Bilinearity: for a, b ∈ G1 and x, y ∈ Zq*, e(a^x, b^y) = e(a, b)^{xy}.
(2) Non-degeneracy: there exist a, b ∈ G1 such that e(a, b) ≠ 1_G2.
(3) Computability: for a, b ∈ G1, there is an efficient algorithm to calculate e(a, b).
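For intuition, these properties can be checked mechanically in a toy group. The sketch below builds a symmetric pairing over the order-11 subgroup of Z_23^* via brute-force discrete logarithms; this is purely illustrative (practical schemes use elliptic-curve pairings, and the discrete-log shortcut works only because q is tiny):

```python
# Toy symmetric pairing e: G1 x G1 -> G2 over the order-q subgroup
# of Z_p^*, with q = 11, p = 23 and generator g = 4 (toy parameters,
# chosen only so that brute-force discrete logs are feasible).
q, p, g = 11, 23, 4

def dlog(h):
    """Brute-force discrete log base g; viable only for tiny q."""
    x = 1
    for k in range(q):
        if x == h:
            return k
        x = x * g % p
    raise ValueError("element not in the subgroup")

def e(x, y):
    """Toy pairing: e(g^u, g^v) = gT^(u*v) with gT = g."""
    return pow(g, dlog(x) * dlog(y) % q, p)

a, b = 3, 7
# (1) Bilinearity: e(g^a, g^b) = e(g, g)^(a*b)
assert e(pow(g, a, p), pow(g, b, p)) == pow(e(g, g), a * b % q, p)
# (2) Non-degeneracy: e(g, g) != 1
assert e(g, g) != 1
```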

2.3. Security Assumptions

2.3.1. Computational Diffie–Hellman (CDH) Problem

Let G1 be a multiplicative cyclic group, where g is a generator of G1. Given the tuple (g, g^a, g^b), where a, b ∈ Zq* are unknown, the CDH problem is to calculate g^{ab}.

2.3.2. CDH Assumption

For a probabilistic polynomial time (PPT) adversary A, the advantage of A in solving the CDH problem in G1 is negligible. Assuming that ε is a negligible value, this can be defined as
Adv^CDH_{G1,A} = Pr[ A(g, g^a, g^b) = g^{ab} : a, b ←R Zq* ] ≤ ε

2.3.3. Discrete Logarithm (DL) Problem

Let G1 be a multiplicative cyclic group, where g is a generator of G1. Given the tuple (g, g^x), where x ∈ Zq* is unknown, the DL problem is to calculate x.
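The following exhaustive-search solver (toy parameters, hypothetical order-11 subgroup of Z_23^*) illustrates why the DL assumption requires a large group order: recovering x takes O(q) steps, trivial for q = 11 but infeasible for a 256-bit prime:

```python
# Exhaustive-search DL solver over a toy subgroup (q = 11, p = 23,
# g = 4, illustrative parameters). Runtime is O(q): trivial here,
# infeasible when q is a 256-bit prime as the DL assumption requires.
q, p, g = 11, 23, 4

def solve_dl(target):
    """Return x with g^x = target (mod p), or None if not found."""
    acc = 1
    for x in range(q):
        if acc == target:
            return x
        acc = acc * g % p
    return None

x = 9
assert solve_dl(pow(g, x, p)) == x  # succeeds only because q is tiny
```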

2.3.4. DL Assumption

For a PPT adversary A, the advantage of A in solving the DL problem in G1 is negligible. Assuming that ε is a negligible value, this can be defined as
Adv^DL_{G1,A} = Pr[ A(g, g^x) = x : x ←R Zq* ] ≤ ε

2.4. Outline of Our Scheme

Our designed certificateless data integrity auditing scheme with sensitive information hiding involves eight algorithms with the following functions:
Setup(1^k) → (params, msk): this algorithm is performed by the KGC to generate the relevant parameters and the master key.
Extract(params, ID, msk) → sk1: this algorithm is performed by the KGC, mainly to generate the partial private key for the DO.
KeyGen(params, ID, sk1) → (sk, pk): this algorithm is run by the DO; its main function is to generate the complete key pair.
TagGen(params, sk, F, fname) → Φ: this algorithm is performed by the DO; its main function is to generate tags for the data blocks.
Challenge(params, c) → chal: this algorithm is executed by the TPA and mainly generates the relevant parameters used to challenge the integrity of the data blocks stored in the cloud.
ProofGen(params, chal, F, Φ) → P: this algorithm is executed by the CSP to generate the relevant proof in response to a TPA challenge.
ProofVerify(params, pk, P, chal, ID) → {0, 1}: this algorithm is executed by the TPA, mainly to judge whether the proof returned by the CSP is legal and thereby judge the integrity of the data.
warrantGen(params, ID, ID′) → W: this algorithm is executed by the DO, mainly to generate warrants for authorized users for access control.

2.5. Security Model

The security model of our scheme is based on a game played between a challenger β and an adversary A, where the challenger β represents the DO and TPA and the adversary A represents the malicious cloud CSP. The specific details of the game are as follows:
Setup: The challenger β executes the S e t u p algorithm to generate the master key m s k and the public parameters p a r a m s , then sends the public parameters to the adversary A and stores the m s k properly.
Hash Queries: The adversary A can query any type of hash function, then challenger β performs related calculations and sends the results to A.
Private/Public Key Queries: The adversary A can query the private key of the user with identity I D . When receiving a private key query request from adversary A, challenger β runs E x t r a c t and K e y G e n , then feeds back the public key p k I D and the combined private key s k I D to A.
Tag Queries: The adversary A can randomly select a data block m i and query its corresponding tag; the challenger β then executes T a g G e n to generate the signature of the data block m i and sends the result to A.
Integrity Proof Queries: The adversary A can randomly select any number of data blocks and query their corresponding integrity proof; the challenger β then executes P r o o f G e n and returns the result to A.
Challenge: The challenger β selects a certain number of data blocks to send to the adversary A in order to obtain a data integrity proof.
Forging: In response to a challenge message sent by challenger β, adversary A forges an integrity proof and returns it to β to verify the validity of the proof. If this proof can always pass the challenger’s verification, the adversary A is considered to have won the game.
Based on the above game, we have the following definitions.
Definition 1.
A certificateless remote data integrity auditing scheme is considered to be secure if the probability of adversary A winning the game against challenger β is negligible.
Furthermore, we propose a formal definition to satisfy the security requirement of sensitive information hiding.
Definition 2.
A certificateless remote data integrity auditing scheme is regarded as having sensitive information hiding if only the DO and authorized users can obtain the original data from the CSP during the execution of the scheme.

3. Our Proposed Scheme

In this section, we elaborate the details of our proposed scheme.
Setup(1^k) → (params, msk): This algorithm is performed by the KGC. Given a security parameter k, the KGC randomly generates a large prime q with |q| = k. Then, the KGC generates two multiplicative cyclic groups G1 and G2, both of order q, where g is a generator of G1 and u1, u2, …, ut ∈ G1 are t random elements. In addition, the KGC selects three secure cryptographic hash functions H1: {0,1}* → G1, H2: {0,1}* × {0,1}* × G1 → G1, and H3: {0,1}* × {0,1}* × {0,1}* → G1. Here, π: Zq* × {1, 2, …, n} → {1, 2, …, n} is a pseudo-random permutation (PRP), φ: Zq* × Zq* → Zq* is a pseudo-random function (PRF), and e: G1 × G1 → G2 is a bilinear map. Then, the KGC generates the master secret key msk = s, where s is randomly selected from Zq*, and calculates the master public key mpk = g^s. After carrying out the above operations, the KGC publishes params = {G1, G2, g, q, u1, u2, …, ut, e, mpk, H1, H2, H3, π, φ} and keeps msk private.
Extract(params, ID, msk) → sk1: this algorithm is responsible for generating the partial private key for the DO. After receiving the ID of the DO, the KGC computes sk1 = H1(ID)^s and feeds the result back to the DO.
KeyGen(params, ID, sk1) → (sk, pk): when the DO receives the partial private key sk1 sent by the KGC, he or she first verifies whether e(H1(ID), mpk) = e(sk1, g) holds. If the equation does not hold, the DO refuses to recognize the legitimacy of sk1 and initiates a new round of application to the KGC. Otherwise, the DO randomly selects an element sk2 = x ∈ Zq* and computes pk = g^x. After completing the above operation, the DO publishes the public key pk and keeps the private key sk = (sk1, sk2) secret.
TagGen(params, sk, F, fname) → Φ: here, we assume that the file F has been divided into n blocks m1, m2, …, mn before uploading it to the cloud and that each block contains t small sectors of the same length; that is, F = (m_ij)_{n×t}, where m_ij ∈ Zq* (1 ≤ i ≤ n, 1 ≤ j ≤ t). fname ∈ {0,1}* is the unique ID of the file F. The DO computes the tag for each row block (m1, m2, …, mn) using Equation (1):
σ_i = sk1 · ( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij} )^{sk2}    (1)
The set of all tags is denoted by Φ = {σ_i | i ∈ {1, 2, …, n}}. After generating tags for all data row blocks, the DO transmits Φ to the CSP. On receiving the tag set Φ, the CSP can verify a single tag by checking e(σ_i, g) = e(H1(ID), mpk) · e(H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij}, pk).
Challenge(params, c) → chal: in order to reduce overhead, the TPA generally selects a random subset of data blocks when checking the integrity of data stored in the cloud. Suppose the TPA chooses to check c data blocks; then, the TPA randomly selects two elements (k1, k2) ∈ (Zq*)^2. Finally, the TPA sends the challenge message chal = (c, k1, k2) to the CSP.
ProofGen(params, chal, F, Φ) → P: after receiving the challenge request initiated by the TPA, the CSP first generates the relevant parameter set C = {(α_i, β_i) | i ∈ {1, 2, …, c}}, where α_i = φ(k1, i) and β_i = π(k2, i). Then, the CSP selects a value r ∈ Zq* and computes
M = { M_j | M_j = ∑_{i=1}^{c} α_i m_{β_i j} + r, j ∈ {1, 2, …, t} },  σ = ∏_{i=1}^{c} σ_{β_i}^{α_i},  R = ∏_{j=1}^{t} u_j^{r}
After completing the above calculation, the CSP sends the proof P = (M, σ, R) to the TPA.
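The scheme leaves the concrete PRF φ and PRP π abstract. As a sketch, assuming HMAC-SHA256 as the PRF and a keyed sort as the PRP (both hypothetical instantiations, with toy values for q, n, and c), the CSP and the TPA can each expand chal = (c, k1, k2) into the same parameter set C = {(α_i, β_i)}:

```python
# Sketch: expanding the challenge keys (k1, k2) into coefficients
# alpha_i and block indices beta_i. HMAC-SHA256 stands in for the
# PRF phi; a keyed sort stands in for the PRP pi. Both are
# illustrative instantiations, and q, n, c are toy values.
import hashlib
import hmac

q = (1 << 61) - 1   # stand-in for the group order
n = 100             # number of stored data blocks
c = 10              # number of challenged blocks

def prf(key: bytes, i: int) -> int:
    """phi(k1, i): pseudo-random coefficient alpha_i in Zq*."""
    d = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % (q - 1) + 1

def prp(key: bytes, count: int) -> list[int]:
    """pi(k2, .): first `count` indices of a keyed permutation of 1..n."""
    ranked = sorted(range(1, n + 1),
                    key=lambda i: hmac.new(key, i.to_bytes(8, "big"),
                                           hashlib.sha256).digest())
    return ranked[:count]

k1, k2 = b"tpa-key-1", b"tpa-key-2"
alphas = [prf(k1, i) for i in range(1, c + 1)]   # alpha_i = phi(k1, i)
betas = prp(k2, c)                               # beta_i = pi(k2, i)
# Distinct indices in range, as required of a permutation prefix.
assert len(set(betas)) == c and all(1 <= b <= n for b in betas)
```

Because the expansion is deterministic given (k1, k2), the TPA transmits only the two short keys rather than c index/coefficient pairs.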
ProofVerify(params, pk, P, chal, ID) → {0, 1}: the TPA uses the PRF and PRP locally to calculate the relevant parameter set C = {(α_i, β_i) | i ∈ {1, 2, …, c}}. In this way, the TPA can conduct verification using Equation (2):
e(σ, g) · e(R, pk) = e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j}, pk )    (2)
If Equation (2) holds, the algorithm outputs 1, indicating that the checked data are securely stored in the cloud. Otherwise, the data have been corrupted and the algorithm outputs 0.
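To make the workflow concrete, the following end-to-end sketch runs TagGen, a challenge, ProofGen, and the Equation (2) check in a toy order-1013 subgroup of Z_2027^*, with a brute-force pairing and a SHA-256 hash-to-group map. Every parameter here (q, p, g, the hash constructions, the sampled challenge values) is an illustrative assumption; a real deployment would use elliptic-curve pairing groups:

```python
# End-to-end toy run of TagGen -> Challenge -> ProofGen -> ProofVerify.
# Toy subgroup of Z_p^* (q = 1013, p = 2027, g = 4), brute-force
# pairing, SHA-256 hash-to-group: all illustrative assumptions.
import hashlib
import random

q, p, g = 1013, 2027, 4

def dlog(h):                      # brute force; viable only for toy q
    x, k = 1, 0
    while x != h:
        x, k = x * g % p, k + 1
    return k

def e(x, y):                      # toy symmetric pairing
    return pow(g, dlog(x) * dlog(y) % q, p)

def H(*parts):                    # hash into the subgroup (toy)
    d = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return pow(g, int.from_bytes(d, "big") % q, p)

# Setup / Extract / KeyGen
s = 123                           # KGC master secret, mpk = g^s
mpk = pow(g, s, p)
ID, fname = "alice", "file-1"
sk1 = pow(H("H1", ID), s, p)      # partial private key H1(ID)^s
x = 77                            # DO secret sk2; pk = g^x
pk = pow(g, x, p)

# File: n blocks of t sectors each, sectors in Zq*
t, n = 3, 5
u = [H("u", j) for j in range(t)]
F = [[random.randrange(1, q) for _ in range(t)] for _ in range(n)]

# TagGen: sigma_i = sk1 * (H3(ID||fname||i) * prod_j u_j^{m_ij})^{sk2}
def tag(i):
    base = H("H3", ID, fname, i)
    for j in range(t):
        base = base * pow(u[j], F[i][j], p) % p
    return sk1 * pow(base, x, p) % p
tags = [tag(i) for i in range(n)]

# Challenge / ProofGen (alphas, betas stand in for PRF/PRP outputs)
c = 3
alphas = [random.randrange(1, q) for _ in range(c)]
betas = random.sample(range(n), c)
r = random.randrange(1, q)        # blinding factor hides the data
M = [(sum(alphas[i] * F[betas[i]][j] for i in range(c)) + r) % q
     for j in range(t)]
sigma = 1
for i in range(c):
    sigma = sigma * pow(tags[betas[i]], alphas[i], p) % p
R = 1
for j in range(t):
    R = R * pow(u[j], r, p) % p

# ProofVerify: Equation (2)
lhs = e(sigma, g) * e(R, pk) % p
rhs = e(pow(H("H1", ID), sum(alphas) % q, p), mpk)
agg = 1
for i in range(c):
    agg = agg * pow(H("H3", ID, fname, betas[i]), alphas[i], p) % p
for j in range(t):
    agg = agg * pow(u[j], M[j], p) % p
rhs = rhs * e(agg, pk) % p
assert lhs == rhs                 # honest proof passes the audit
```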
warrantGen(params, ID, ID′) → W: to generate a warrant for access to the data stored in the cloud, the DO and the user U_ID′ randomly select k_ID, k_ID′ ∈ Zq*, respectively. The DO and U_ID′ keep k_ID and k_ID′ private and publish g^{k_ID}, g^{k_ID′}, and α = (g^{k_ID})^{k_ID′} = (g^{k_ID′})^{k_ID}. To authorize U_ID′ for sensitive data access, the DO computes the warrant W = H2(ID||ID′||α)^{k_ID} and sends it to U_ID′. After U_ID′ receives W, he or she calculates H2(ID||ID′||α)^{k_ID′} · W and sends it to the CSP to access the data. When the CSP receives the request, it checks whether
e( H2(ID||ID′||α)^{k_ID′} · W, g ) = e( H2(ID||ID′||α), g^{k_ID} ) · e( H2(ID||ID′||α), g^{k_ID′} )
holds. If the equation holds, the CSP transmits the data to U_ID′; otherwise, the CSP rejects the application. When the DO wants to revoke U_ID′’s warrant, he or she simply re-selects a new k_ID* ∈ Zq* and updates g^{k_ID*}, after which U_ID′ can no longer obtain data from the CSP.
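The warrant mechanism can be sketched in the same toy style. The group parameters and the values k_ID and k_ID′ below are illustrative assumptions; the sketch checks the CSP-side verification equation and shows why re-selecting k_ID revokes access:

```python
# Sketch of warrantGen, the CSP's warrant check, and revocation in a
# toy order-q subgroup of Z_p^* (q = 1013, p = 2027, g = 4) with a
# brute-force pairing. All parameters are illustrative stand-ins.
import hashlib

q, p, g = 1013, 2027, 4

def dlog(h):                      # brute force; viable only for toy q
    x, k = 1, 0
    while x != h:
        x, k = x * g % p, k + 1
    return k

def e(x, y):
    """Toy symmetric pairing."""
    return pow(g, dlog(x) * dlog(y) % q, p)

def H2(*parts):
    """Hash into the subgroup (illustrative construction)."""
    d = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return pow(g, int.from_bytes(d, "big") % q, p)

ID, ID2 = "owner", "user"            # DO and user identities
k_do, k_u = 321, 654                 # private values k_ID, k_ID'
g_do, g_u = pow(g, k_do, p), pow(g, k_u, p)  # published values
alpha = pow(g_u, k_do, p)            # alpha = (g^{k_ID'})^{k_ID}

h = H2(ID, ID2, alpha)
W = pow(h, k_do, p)                  # warrant from the DO
req = pow(h, k_u, p) * W % p         # user's access request

# CSP check: e(H2^{k_ID'} * W, g) = e(H2, g^{k_ID}) * e(H2, g^{k_ID'})
assert e(req, g) == e(h, g_do) * e(h, g_u) % p

# Revocation: the DO re-selects k_ID* and publishes g^{k_ID*}; the
# old request no longer verifies against the new public value
# (guarded for the negligible case h = 1).
k_new = 999
if h != 1:
    assert e(req, g) != e(h, pow(g, k_new, p)) * e(h, g_u) % p
```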

4. Security Proof

In this section, we provide a theoretical analysis of our designed scheme from four aspects: correctness, soundness, sensitive information hiding, and detectability.

4.1. Correctness Proof

The correctness of our proposed scheme can be summarized in four points: (1) if the partial private key sent by the KGC to the DO is correct, the DO always accepts it; (2) a single tag always passes validation if the DO generates tags correctly for the data blocks; (3) if the CSP stores the data properly in the cloud, every generated proof can be verified by the TPA; (4) if the user holds a valid warrant from the DO for authorization, the user can obtain sensitive data from the CSP.
Proof. 
(1) The correctness of the partial private key generated by the KGC for the DO is as follows:
e(H1(ID), mpk) = e(H1(ID), g^s) = e(H1(ID)^s, g) = e(sk1, g)
(2) A single tag can be generated by Equation (1), and its correctness is proven as follows:
e(σ_i, g) = e( sk1 · ( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij} )^{sk2}, g )
= e(sk1, g) · e( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij}, g^{sk2} )
= e(H1(ID), g^s) · e( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij}, g^x )
= e(H1(ID), mpk) · e( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij}, pk )
(3) The correctness of auditing relies on checking Equation (2), the correctness of which is proven as follows:
e(σ, g) · e(R, pk) = e( ∏_{i=1}^{c} σ_{β_i}^{α_i}, g ) · e(R, pk)
= e( ∏_{i=1}^{c} ( sk1 · ( H3(ID||fname||β_i) · ∏_{j=1}^{t} u_j^{m_{β_i j}} )^{sk2} )^{α_i}, g ) · e(R, pk)
= e( ∏_{i=1}^{c} sk1^{α_i}, g ) · e(R, pk) · e( ∏_{i=1}^{c} ( H3(ID||fname||β_i) · ∏_{j=1}^{t} u_j^{m_{β_i j}} )^{α_i}, g^{sk2} )
= e( ∏_{i=1}^{c} H1(ID)^{α_i}, g^s ) · e(R, pk) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{i=1}^{c} ∏_{j=1}^{t} u_j^{α_i m_{β_i j}}, g^x )
= e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{j=1}^{t} u_j^{r}, g^x ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{∑_{i=1}^{c} α_i m_{β_i j}}, g^x )
= e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{∑_{i=1}^{c} α_i m_{β_i j} + r}, g^x )
= e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j}, pk )
(4) If the user holds the correct warrant, then it can be verified by the CSP as follows:
e( H2(ID||ID′||α)^{k_ID′} · W, g ) = e( H2(ID||ID′||α)^{k_ID′}, g ) · e(W, g) = e( H2(ID||ID′||α), g^{k_ID′} ) · e( H2(ID||ID′||α), g^{k_ID} )
 □

4.2. Soundness Proof

Theorem 1
(Auditing soundness). If the CDH assumption holds on the multiplicative cyclic groups G1 and G2, then a malicious cloud cannot forge a verifiable proof.
Proof. 
We prove through a series of games that if the CSP has forged a verifiable proof P* without storing the complete data, then there exists an extractor that can solve the CDH problem by extracting the interaction information between the challenger β and the adversary A. □
Game 0. The specific process of this game has been elaborated in Section 2; therefore, we omit it here.
Game 1. This game is largely the same as Game 0, except with the addition of a new rule. Challenger β maintains a local list to record the Tag Queries made by adversary A. If adversary A generates a proof P that is successfully verified by the challenger β and the proof contains at least one tag that is not recorded in the list, then challenger β terminates the game and declares defeat.
Analysis. Assuming that adversary A wins Game 1 with non-negligible probability, there is an extractor that can solve the CDH problem through the following steps. Without loss of generality, suppose that the identity of the adversary is ID, the file is F = (m_ij)_{n×t}, and the file name is fname.
Suppose the honest prover produces the correct proof P = (M, σ, R); then, we have
e(σ, g) · e(R, pk) = e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j}, pk )    (3)
When adversary A successfully forges a verifiable proof P* = (M*, σ*, R), we have
e(σ*, g) · e(R, pk) = e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j*}, pk )    (4)
Obviously, M ≠ M*; otherwise, σ = σ*. The extractor is given (g, g^α, h) ∈ G1, and its goal is to compute h^α. The extractor randomly selects r_i ∈ Zq* for each i (1 ≤ i ≤ c) in the challenge, then sets u_j = (g^a)^{β_j} · (h^b)^{γ_j}, where a, b, β_j, γ_j ∈ Zq* and 1 ≤ j ≤ t. This means that ∏_{j=1}^{t} u_j^{m_ij} = ∏_{j=1}^{t} [ (g^a)^{β_j} · (h^b)^{γ_j} ]^{m_ij} = (g^a)^{∑_{j=1}^{t} β_j m_ij} · (h^b)^{∑_{j=1}^{t} γ_j m_ij}. Meanwhile, the extractor randomly selects an element s ∈ Zq* as msk and then performs Extract and KeyGen to generate the private key sk = (sk1, sk2) = (H1(ID)^s, x). Then, the extractor randomly selects an element k ∈ Zq* and sets pk = g^x = (g^α)^k, which means that x = k · α. Finally, for each i, the extractor sets
H3(ID||fname||i) = g^{r_i} / ( (g^a)^{∑_{j=1}^{t} β_j m_ij} · (h^b)^{∑_{j=1}^{t} γ_j m_ij} )
Hence, the extractor can compute σ_i = sk1 · ( H3(ID||fname||i) · ∏_{j=1}^{t} u_j^{m_ij} )^{sk2} = H1(ID)^s · (g^{r_i})^x.
Thus, dividing Equation (4) by Equation (3), we obtain
e(σ*/σ, g) = e( ∏_{j=1}^{t} u_j^{M_j* − M_j}, (g^α)^k ) = e( ∏_{j=1}^{t} ( (g^a)^{β_j} · (h^b)^{γ_j} )^{M_j* − M_j}, (g^α)^k ) = e( (g^a)^{∑_{j=1}^{t} β_j (M_j* − M_j)} · (h^b)^{∑_{j=1}^{t} γ_j (M_j* − M_j)}, (g^α)^k ) = e( g^{a·k·∑_{j=1}^{t} β_j (M_j* − M_j)} · h^{b·k·∑_{j=1}^{t} γ_j (M_j* − M_j)}, g^α )    (5)
From Equation (5), we can obtain
h^α = ( σ* · σ^{−1} · (g^α)^{−a·k·∑_{j=1}^{t} β_j (M_j* − M_j)} )^{1 / ( b·k·∑_{j=1}^{t} γ_j (M_j* − M_j) )}
According to the above equation, the extractor fails to solve the CDH problem only when b·k·∑_{j=1}^{t} γ_j (M_j* − M_j) = 0 mod q. Obviously, in the finite field generated by the large prime q, the probability of this event is close to 1/q. This means that if adversary A successfully forges a verifiable proof, the extractor can solve the CDH problem with probability 1 − 1/q, which contradicts the difficulty of the CDH problem.
Game 2. Game 2 introduces a new rule on the basis of Game 1: if the challenger β finds that the M* in the proof P* generated by adversary A is not equal to the expected M, the challenger terminates the game and declares defeat.
Analysis. Given g, h ∈ G1, we build an extractor whose purpose is to compute a value x ∈ Zq* satisfying h = g^x. We set u_j = (g^a)^{β_j} · (h^b)^{γ_j}, where a, b, β_j, γ_j ∈ Zq* and 1 ≤ j ≤ t.
Suppose P is a correct proof generated by the honest cloud; then, we have
e(σ, g) · e(R, pk) = e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j}, pk )
The adversary then generates a proof P * , meaning that we have
e(σ*, g) · e(R, pk) = e( H1(ID)^{∑_{i=1}^{c} α_i}, mpk ) · e( ∏_{i=1}^{c} H3(ID||fname||β_i)^{α_i} · ∏_{j=1}^{t} u_j^{M_j*}, pk )
From G a m e 1 , we know that σ * = σ . Thus, the extractor has
j = 1 t u j M j = j = 1 t u j M j *
It is clear that
$$1 = \prod_{j=1}^{t} u_j^{M_j^* - M_j} = (g^{a})^{\sum_{j=1}^{t} \beta_j (M_j^* - M_j)} \cdot (h^{b})^{\sum_{j=1}^{t} \gamma_j (M_j^* - M_j)}$$
Therefore, the extractor can solve the DL problem as follows:
$$h = g^{-\big(a \sum_{j=1}^{t} \beta_j (M_j^* - M_j)\big) / \big(b \sum_{j=1}^{t} \gamma_j (M_j^* - M_j)\big)}$$
According to the above equation, the extractor fails to compute the discrete logarithm only when $b \cdot \sum_{j=1}^{t} \gamma_j (M_j^* - M_j) \equiv 0 \pmod{q}$. In the finite field generated by the large prime $q$, the probability of this event is approximately $1/q$. This means that if adversary A successfully forges a verifiable proof, the extractor solves the DL problem with probability $1 - 1/q$, which contradicts the assumed difficulty of the DL problem. From the above analysis, it can be seen that the proposed scheme is secure unless the adversary can solve both the CDH problem and the DL problem.
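The algebra behind this extractor can be checked numerically. The following toy sketch (our own illustration; it substitutes a tiny schoolbook subgroup of $Z_p^*$ for a pairing-friendly group, and all parameters are deliberately insecure) fabricates a forgery satisfying $\prod_{j=1}^{t} u_j^{M_j^* - M_j} = 1$ and then recovers the discrete logarithm exactly as the extractor does:

```python
# Toy demonstration of the Game 2 extractor over a schoolbook group Z_p^*
# (tiny, insecure parameters chosen purely for illustration).
import random

random.seed(7)
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # 2039, also prime, so Z_p^* has an order-q subgroup
g = 4                    # 2^2 mod p generates the order-q subgroup
x = random.randrange(1, q)
h = pow(g, x, p)         # DL instance: recover x from (g, h)

t = 4
while True:
    a, b = random.randrange(1, q), random.randrange(1, q)
    beta = [random.randrange(1, q) for _ in range(t)]
    gamma = [random.randrange(1, q) for _ in range(t)]
    # u_j = (g^a)^{beta_j} * (h^b)^{gamma_j}, as in the proof
    u = [pow(g, a * beta[j] % q, p) * pow(h, b * gamma[j] % q, p) % p
         for j in range(t)]
    # Fabricate delta_j = M*_j - M_j with prod u_j^{delta_j} = 1, i.e.
    # sum_j (a*beta_j + x*b*gamma_j) * delta_j = 0 mod q.
    d = [random.randrange(1, q) for _ in range(t - 1)]
    denom = (a * beta[t - 1] + x * b * gamma[t - 1]) % q
    if denom == 0:
        continue
    num = sum(a * beta[j] * d[j] + x * b * gamma[j] * d[j]
              for j in range(t - 1)) % q
    d.append((-num) * pow(denom, -1, q) % q)
    X = sum(beta[j] * d[j] for j in range(t)) % q
    Y = sum(gamma[j] * d[j] for j in range(t)) % q
    if b * Y % q != 0:
        break

# sanity check: the forged relation prod u_j^{delta_j} = 1 holds
lhs = 1
for j in range(t):
    lhs = lhs * pow(u[j], d[j], p) % p
assert lhs == 1

# extractor step: h = g^{-(aX)/(bY)}, i.e. x = -(aX) * (bY)^{-1} mod q
x_recovered = (-a * X) * pow(b * Y % q, -1, q) % q
assert x_recovered == x
```

The assertions confirm that once the forged proof forces the product to 1, the exponent arithmetic alone yields the secret exponent.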

4.3. Sensitive Information Hiding Proof

Theorem 2 (Sensitive information hiding). In our designed scheme, only the CSP, the DO, and legitimate users authorized by the DO can access the original data.
Proof. 
We demonstrate this proof in two parts: (1) the TPA cannot obtain the original data during the auditing process and (2) the DO-generated warrants for authorized users are hard to forge. □
During auditing, the TPA can select several data blocks for multiple challenges, attempting to recover the original data blocks from a sufficient number of linear equations. In our scheme, the proof generated by the CSP adds a blinding factor $r$ to all linearly aggregated data blocks, $M = \{M_j \mid M_j = \sum_{i=1}^{c} \alpha_i m_{\beta_i j} + r,\ j \in \{1, 2, \ldots, t\}\}$, meaning that unless the TPA can recover $r$ from $\prod_{j=1}^{t} u_j^{r}$, it cannot recover the data. Due to the difficulty of the DL problem, it is very unlikely that the TPA can recover the data through multiple challenges. In addition, unauthorized users, as well as users whose warrants have been revoked but who wish to continue obtaining data, can hardly forge legitimate warrants. The reason is simple: successfully forging a warrant amounts to solving for $k_{ID}$ from $g^{k_{ID}}$, which contradicts the difficulty of the DL problem. Therefore, as long as the DL assumption holds, the scheme designed in this paper guarantees the privacy of the data.
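As a sanity check on the blinding argument, the following toy sketch (our own illustration over a deliberately insecure toy group; variable names are ours) shows that the blinded aggregates $M_j = S_j + r$ still satisfy the multiplicative relation a verifier can check, while $r$ itself only ever appears in the exponent:

```python
# Toy check that the blinding factor r preserves the verification identity
# while the TPA only ever sees the blinded sums M_j = S_j + r.
import random

random.seed(1)
q = 1019; p = 2 * q + 1; g = 4    # order-q subgroup of Z_p^* (toy, insecure)
t, c = 5, 3
u = [pow(g, random.randrange(1, q), p) for _ in range(t)]        # public u_j
m = [[random.randrange(q) for _ in range(t)] for _ in range(c)]  # challenged sectors
alpha = [random.randrange(1, q) for _ in range(c)]               # challenge coefficients

# true linear aggregates S_j and their blinded versions M_j
S = [sum(alpha[i] * m[i][j] for i in range(c)) % q for j in range(t)]
r = random.randrange(1, q)
M = [(S[j] + r) % q for j in range(t)]

# verifier-side identity: prod u_j^{M_j} = prod u_j^{S_j} * (prod u_j)^r
U = 1
for uj in u:
    U = U * uj % p
lhs = 1
for j in range(t):
    lhs = lhs * pow(u[j], M[j], p) % p
rhs = pow(U, r, p)
for j in range(t):
    rhs = rhs * pow(u[j], S[j], p) % p
# unblinding M_j would require the discrete log r from (prod u_j)^r
assert lhs == rhs
```

The identity holds for any $r$, so the audit remains verifiable, while recovering $S_j$ from $M_j$ requires extracting $r$ from a group element, i.e., solving a DL instance.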

5. Performance Analysis

In order to further analyze the performance of our scheme, in this section we compare it with [8,15] from two aspects: theoretical analysis and experimental verification. Figure 2 shows the mapping used for file segmentation and tag generation in this paper.

5.1. Theoretical Analysis

In this theoretical analysis, we compare the three schemes in terms of storage cost, communication cost, and computation cost. From an analysis of the relevant algorithms, it is easy to see that TagGen is the core algorithm of an auditing scheme, while Challenge, ProofGen, and ProofVerify are the algorithms executed most frequently. We therefore focus on these four algorithms and ignore the others. Before starting the comparison, we assume that file F is split into $n \cdot t$ small chunks and that each challenge covers $c$ data blocks.

5.1.1. Storage Cost

In analyzing the storage cost, we focus on the storage burden of the DO and the CSP. In our scheme, the DO only needs to store his or her own private key; because the private key consists of two parts, the storage cost of the DO is $|G_1| + |Z_q^*|$. The CSP only needs to store the DO's data blocks and the corresponding tags locally. In our scheme, every $t$ data blocks correspond to one tag, so the actual storage cost of the CSP is $nt|Z_q^*| + n|G_1|$. Table 3 lists the storage cost comparison of the three schemes in detail. It is obvious that our scheme greatly reduces the storage burden on the CSP.
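For intuition, a back-of-the-envelope calculation under assumed element sizes (20-byte $Z_q^*$ elements and 64-byte $G_1$ elements; actual sizes depend on the chosen curve) compares the CSP-side storage of the schemes for the parameters used in our experiments:

```python
# Illustrative CSP storage comparison; element sizes are assumptions,
# not measured values from any particular curve.
ZQ_BYTES, G1_BYTES = 20, 64
n, t = 1000, 10                              # n blocks of t sectors each

ours_csp = n * t * ZQ_BYTES + n * G1_BYTES   # n*t sectors + n tags (one tag per block)
other_csp = n * t * (ZQ_BYTES + G1_BYTES)    # [8]/[15]: one tag per sector

print(ours_csp, other_csp)                   # 264000 vs 840000 bytes
```

Under these assumed sizes, one tag per block instead of one tag per sector cuts the tag storage by roughly a factor of $t$.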

5.1.2. Communication Cost

The communication cost arises in three main stages. First, the DO transmits data blocks and tags to the CSP; in this stage, the communication cost of our scheme is $nt|Z_q^*| + n|G_1|$. Second, the TPA sends a challenge request to the CSP in the challenge stage; the communication cost of our scheme here is $3|Z_q^*|$. Finally, the CSP returns the proof to the TPA, with a communication cost of $t|Z_q^*| + 2|G_1|$. Table 4 lists the communication cost comparison of the three schemes in detail.

5.1.3. Computation Cost

Before starting the analysis, we write $T_p$ for the cost of one bilinear pairing, $T_e$ for the cost of one exponentiation, and $T_m$ for the cost of one multiplication. In the tag generation stage, the total computation cost of our scheme is $n(t+2)T_m + n(t+1)T_e$. In the proof generation stage, it is $(ct+c+t)T_m + (c+t)T_e$. In the proof verification stage, it is $4T_p + (2+c+t)T_m + (1+c+t)T_e$. Table 5 lists the computation cost comparison of the three schemes in detail; as the table shows, the advantage of our scheme lies in its low computation cost in the tag generation stage.
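The entries of Table 5 can be evaluated for concrete parameters. The sketch below (our own helper; treating $T_m$, $T_e$, and $T_p$ as unit weights is an assumption, since real operation timings differ) compares the tag generation counts:

```python
# Evaluate the Table 5 operation counts for sample parameters, with unit
# weights for T_m, T_e, T_p (assumed weights; real timings differ).
def ops(counts, Tm=1.0, Te=1.0, Tp=1.0):
    m, e, pr = counts        # (#multiplications, #exponentiations, #pairings)
    return m * Tm + e * Te + pr * Tp

n, t, c = 1000, 10, 300
ours_taggen = (n * (t + 2), n * (t + 1), 0)
baseline_taggen = (2 * n * t, 2 * n * t, 0)   # TagGen in both [8] and [15]

# our scheme signs t sectors per tag, so it performs far fewer operations
assert ops(ours_taggen) < ops(baseline_taggen)   # 23000 < 40000
```

Plugging in other $(n, t, c)$ values, or measured per-operation timings in place of the unit weights, reproduces the qualitative trends seen in the experiments below.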

5.2. Experimental Verification

Our experiments ran in an Ubuntu 20.04 virtual machine (Parallels Desktop 17) with 8 GB RAM, 64 GB disk space, and two virtual CPU cores. The host was a MacBook Pro laptop with an Apple M1 chip and 16 GB RAM running macOS Monterey 12.5. We used the Pairing-Based Cryptography (PBC) library [29] for all experiments. In the reproduction experiments [8,15], we split a 1.8 MB dataset into 10,000 equal chunks. When implementing our scheme, we divided the same 1.8 MB dataset into 1000 data blocks of equal size, each of which was further divided into ten small data blocks of equal size. The test files are from a public dataset: https://ai.stanford.edu/~amaas/data/sentiment/ (accessed on 1 April 2022). We compared the scheme proposed in this paper with [8,15] in four aspects. It is worth noting that, in the experiments, our scheme signs the ten small data blocks of each block together. The specific results are described below.
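The block/sector layout described above can be sketched as follows (the function name and zero-padding policy are ours, not taken from the paper): the file is cut into $n$ blocks of $t$ equal-size sectors each, with one tag later covering the $t$ sectors of a block.

```python
# Sketch of the block/sector layout: the file becomes n blocks,
# each holding t equal-size sectors (zero-padded at the tail).
def split_into_blocks(data: bytes, n: int, t: int):
    sector = -(-len(data) // (n * t))             # ceiling division -> sector size
    padded = data.ljust(n * t * sector, b"\x00")  # zero-pad the tail
    return [[padded[(i * t + j) * sector:(i * t + j + 1) * sector]
             for j in range(t)]
            for i in range(n)]

blocks = split_into_blocks(b"x" * 1792, n=8, t=4)  # 8 blocks x 4 sectors x 56 bytes
```

In the experiments this corresponds to $n = 1000$ and $t = 10$; the sketch uses small numbers so the shapes are easy to inspect.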
First, we analyze and compare the efficiency of the TagGen algorithm. Unlike the other three experiments, here we gradually increased the number of data blocks from 1000 to 10,000 and measured the time required for tag generation. The results are shown in Figure 3. As can be seen from the figure, the tag generation efficiency of our scheme is much better than that of the other two schemes. This is because the total number of tags generated by our scheme is about one-tenth that of the other two schemes, which greatly reduces the computation involved.
Because our scheme generates far fewer tags than the other two schemes, we allowed it to repeatedly check the same data blocks during challenges in order to achieve a fair comparison.
Next, we analyzed the efficiency of the Challenge algorithm. In this experiment, we gradually increased the number of challenged data blocks from 1000 to 10,000 and measured the time, as shown in Figure 4. The time cost of our scheme is almost zero, as is that of [15]; both are much better than [8]. This is due to the introduction of the PRP and PRF, which greatly reduce the challenge generation time. Our scheme therefore performs well in the challenge generation stage.
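The constant-size challenge can be sketched as follows. HMAC-SHA256 stands in here for the scheme's PRF, and, as a simplifying assumption of this sketch, block indices are also drawn with the PRF rather than a true PRP (so, unlike a PRP, indices may occasionally repeat); the helper names are ours. The TPA transmits only the two short keys and the count $c$, which is why the challenge cost stays near a fixed value:

```python
# Minimal sketch of constant-size challenge derivation with a PRF.
import hmac, hashlib

def prf(key: bytes, s: int, mod: int) -> int:
    # PRF_key(s): HMAC-SHA256 output reduced into [0, mod)
    mac = hmac.new(key, s.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % mod

def derive_challenge(k1: bytes, k2: bytes, c: int, n: int, q: int):
    # s-th challenged block index and its nonzero coefficient alpha_s
    return [(prf(k1, s, n), 1 + prf(k2, s, q - 1)) for s in range(c)]

chal = derive_challenge(b"key-1", b"key-2", c=5, n=1000, q=2**61 - 1)
```

Both sides re-derive the same index/coefficient pairs from the short keys, so the transmitted challenge size is independent of $c$.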
Next, we compare the performance of the ProofGen algorithm across the three schemes. The results are shown in Figure 5. The efficiency of our scheme is lower here because, for a fair comparison, the number of data blocks challenged by our scheme in each challenge is the same as in the other two schemes, which requires our algorithm to spend more computing power on the relevant calculations. It is worth noting, however, that our algorithm checks more data blocks for the same number of challenges.
The final experiment analyzes the ProofVerify algorithm. The results are shown in Figure 6. For the same reasons as above, this algorithm is less efficient than those of the other two schemes. Despite this slight disadvantage in efficiency, however, the number of data blocks checked is much higher than with the other schemes.
The above experiments show that the scheme proposed in this paper outperforms both [8,15] in tag generation efficiency, while its time cost is slightly higher in proof generation and verification. In addition, the use of the PRP and PRF makes the time cost of the proposed scheme in the challenge phase independent of the number of challenged data blocks, remaining near a fixed value.

6. Conclusions

In this paper, we propose a certificateless remote data integrity auditing scheme. Our scheme ensures the privacy of data while providing a data access control mechanism, which makes the scheme practical to deploy. A detailed security proof guarantees the soundness of our scheme. Theoretical analysis and experimental verification show that our scheme performs well in terms of storage cost and tag generation, making it both efficient and feasible. In real scenarios, the DO may add, modify, and delete data stored in the cloud; however, this paper does not consider the dynamic updating of data in the design of the audit scheme. Moreover, most previous audit schemes consider dynamic updating when signing a single data block, whereas our scheme aggregates several data blocks and signs them as a whole, so applying traditional dynamic updating methods directly would waste substantial computing resources. In the future, we therefore intend to design an efficient dynamic remote data integrity auditing scheme based on this work.

Author Contributions

Investigation, G.B. and R.L.; Methodology, G.B.; Software, F.Z.; Supervision, G.B.; Validation, F.Z. and B.S.; Writing—original draft, F.Z.; Writing—review and editing, G.B., F.Z., R.L. and B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Our study does not require ethical approval, so we choose to exclude this statement.

Informed Consent Statement

Our study does not involve human research, so we choose to exclude this statement.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (No. 61872284) and the Shaanxi Provincial Natural Science Basic Research Project (No. 2021JLM-16).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ateniese, G.; Burns, R.; Curtmola, R.; Herring, J.; Kissner, L.; Peterson, Z.; Song, D. Provable data possession at untrusted stores. In Proceedings of the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 28–31 October 2007; pp. 598–609. [Google Scholar]
  2. Juels, A.; Kaliski, B.S., Jr. PORs: Proofs of retrievability for large files. In Proceedings of the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 28–31 October 2007; pp. 584–597. [Google Scholar]
  3. Garg, N.; Bawa, S.; Kumar, N. An efficient data integrity auditing protocol for cloud computing. Future Gener. Comput. Syst. 2020, 109, 306–316. [Google Scholar] [CrossRef]
  4. Lu, N.; Zhang, Y.; Shi, W.; Kumari, S.; Choo, K.K.R. A secure and scalable data integrity auditing scheme based on hyperledger fabric. Comput. Secur. 2020, 92, 101741. [Google Scholar] [CrossRef]
  5. Shen, W.; Qin, J.; Yu, J.; Hao, R.; Hu, J.; Ma, J. Data integrity auditing without private key storage for secure cloud storage. IEEE Trans. Cloud Comput. 2019, 9, 1408–1421. [Google Scholar] [CrossRef]
  6. Shao, B.; Bian, G.; Wang, Y.; Su, S.; Guo, C. Dynamic data integrity auditing method supporting privacy protection in vehicular cloud environment. IEEE Access 2018, 6, 43785–43797. [Google Scholar] [CrossRef]
  7. Li, Y.; Zhang, F. An efficient certificate-based data integrity auditing protocol for cloud-assisted WBANs. IEEE Internet Things J. 2021, 9, 11513–11523. [Google Scholar] [CrossRef]
  8. Gudeme, J.R.; Pasupuleti, S.K.; Kandukuri, R. Certificateless multi-replica public integrity auditing scheme for dynamic shared data in cloud storage. Comput. Secur. 2021, 103, 102176. [Google Scholar] [CrossRef]
  9. Lu, X.; Pan, Z.; Xian, H. An integrity verification scheme of cloud storage for internet-of-things mobile terminal devices. Comput. Secur. 2020, 92, 101686. [Google Scholar] [CrossRef]
  10. Li, B.; He, Q.; Chen, F.; Jin, H.; Xiang, Y.; Yang, Y. Auditing cache data integrity in the edge computing environment. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1210–1223. [Google Scholar] [CrossRef]
  11. Cui, G.; He, Q.; Li, B.; Xia, X.; Chen, F.; Jin, H.; Xiang, Y.; Yang, Y. Efficient verification of edge data integrity in edge computing environment. IEEE Trans. Serv. Comput. 2021, in press. [CrossRef]
  12. Li, B.; He, Q.; Chen, F.; Jin, H.; Xiang, Y.; Yang, Y. Inspecting edge data integrity with aggregated signature in distributed edge computing environment. IEEE Trans. Cloud Comput. 2021, in press.
  13. Zhao, M.; Ding, Y.; Wang, Y.; Wang, H.; Wang, B.; Liu, L. A privacy-preserving tpa-aided remote data integrity auditing scheme in clouds. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Guilin, China, 20–23 September 2019; pp. 334–345. [Google Scholar]
  14. Yan, H.; Gui, W. Efficient identity-based public integrity auditing of shared data in cloud storage with user privacy preserving. IEEE Access 2021, 9, 45822–45831. [Google Scholar] [CrossRef]
  15. Li, J.; Yan, H.; Zhang, Y. Identity-based privacy preserving remote data integrity checking for cloud storage. IEEE Syst. J. 2020, 15, 577–585. [Google Scholar] [CrossRef]
  16. Yang, K.; Jia, X. An efficient and secure dynamic auditing protocol for data storage in cloud computing. IEEE Trans. Parallel Distrib. Syst. 2012, 24, 1717–1726. [Google Scholar] [CrossRef]
  17. Wang, B.; Li, B.; Li, H.; Li, F. Certificateless public auditing for data integrity in the cloud. In Proceedings of the 2013 IEEE Conference on Communications and Network Security (CNS), Washington, DC, USA, 14–16 October 2013; pp. 136–144. [Google Scholar]
  18. He, K.; Huang, C.; Wang, J.; Zhou, H.; Chen, X.; Lu, Y.; Zhang, L.; Wang, B. An efficient public batch auditing protocol for data security in multi-cloud storage. In Proceedings of the 2013 8th ChinaGrid Annual Conference, Los Alamitos, CA, USA, 22–23 August 2013; pp. 51–56. [Google Scholar]
  19. Yuan, J.; Yu, S. Efficient public integrity checking for cloud data sharing with multi-user modification. In Proceedings of the IEEE INFOCOM 2014-IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 2121–2129. [Google Scholar]
  20. Jiang, T.; Chen, X.; Ma, J. Public integrity auditing for shared dynamic cloud data with group user revocation. IEEE Trans. Comput. 2015, 65, 2363–2373. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Xu, C.; Liang, X.; Li, H.; Mu, Y.; Zhang, X. Efficient public verification of data integrity for cloud storage systems from indistinguishability obfuscation. IEEE Trans. Inf. Forensics Secur. 2016, 12, 676–688. [Google Scholar] [CrossRef]
  22. Li, Y.; Yu, Y.; Min, G.; Susilo, W.; Ni, J.; Choo, K.K.R. Fuzzy identity-based data integrity auditing for reliable cloud storage systems. IEEE Trans. Dependable Secur. Comput. 2017, 16, 72–83. [Google Scholar] [CrossRef]
  23. Shen, W.; Qin, J.; Yu, J.; Hao, R.; Hu, J. Enabling identity-based integrity auditing and data sharing with sensitive information hiding for secure cloud storage. IEEE Trans. Inf. Forensics Secur. 2018, 14, 331–346. [Google Scholar] [CrossRef]
  24. Zhao, Q.; Chen, S.; Liu, Z.; Baker, T.; Zhang, Y. Blockchain-based privacy-preserving remote data integrity checking scheme for IoT information systems. Inf. Process. Manag. 2020, 57, 102355. [Google Scholar] [CrossRef]
  25. Lan, C.; Li, H.; Wang, C. Cryptanalysis of “Certificateless remote data integrity checking using lattices in cloud storage”. In Proceedings of the 2020 10th International Conference on Information Science and Technology (ICIST), Lecce, Italy, 4–5 June 2020; pp. 134–138. [Google Scholar]
  26. Tian, J.; Wang, H.; Wang, M. Data integrity auditing for secure cloud storage using user behavior prediction. Comput. Secur. 2021, 105, 102245. [Google Scholar] [CrossRef]
  27. Yang, C.; Liu, Y.; Zhao, F.; Zhang, S. Provable data deletion from efficient data integrity auditing and insertion in cloud storage. Comput. Stand. Interfaces 2022, 82, 103629. [Google Scholar] [CrossRef]
  28. Dhelim, S.; Aung, N.; Kechadi, T.; Ning, H.; Chen, L.; Lakas, A. Trust2Vec: Large-Scale IoT Trust Management System based on Signed Network Embeddings. IEEE Internet Things J. 2022, in press.
  29. Lynn, B.; Shacham, H.; Steiner, M.; Cooley, J.; Figueiredo, R.; Khazan, R.; Kosolapov, D.; Bethencourt, J.; Miller, P.; Cheng, M.; et al. PBC Library. 2006, Volume 59, pp. 76–99. Available online: http://crypto.stanford.edu/pbc (accessed on 20 May 2020).
Figure 1. System model.
Figure 2. Mapping of data blocks and tags.
Figure 3. Computation cost of TagGen [8,15].
Figure 4. Computation cost of Challenge [8,15].
Figure 5. Computation cost of ProofGen [8,15].
Figure 6. Computation cost of ProofVerify [8,15].
Table 1. Reference scheme comparison table.
Comparison criteria: certificateless design, privacy protection, dynamic update, and data sharing. Compared schemes: [5], [16], [17], [18], [19], [20], [21], [22], [23], [24], [26].
Table 2. Mathematical notation.
Notation | Description
$G_1, G_2$ | two multiplicative cyclic groups
$q$ | a large prime number
$g$ | a generator of the multiplicative group $G_1$
$e$ | a bilinear map
$H_1, H_2, H_3$ | three secure cryptographic hash functions
$u_1, u_2, \ldots, u_t$ | $t$ distinct elements in group $G_1$
$\pi$ | a pseudo-random permutation
$\varphi$ | a pseudo-random function
$\sigma_i$ | a single tag
$\Phi$ | the collection of tags
$chal$ | a challenge message
$P$ | a proof message
Table 3. Storage cost.
Scheme | DO | CSP
Our scheme | $|G_1| + |Z_q^*|$ | $nt|Z_q^*| + n|G_1|$
[8] | $|G_1| + |Z_q^*|$ | $nt|G_1| + nt|Z_q^*|$
[15] | $|G_1| + |Z_q^*|$ | $nt|G_1| + nt|Z_q^*|$
Table 4. Communication cost.
Scheme | DO to CSP | TPA to CSP | CSP to TPA
Our scheme | $nt|Z_q^*| + n|G_1|$ | $3|Z_q^*|$ | $t|Z_q^*| + 2|G_1|$
[8] | $nt|G_1| + nt|Z_q^*|$ | $2c|Z_q^*|$ | $|Z_q^*| + 2|G_1|$
[15] | $nt|G_1| + nt|Z_q^*|$ | $3|Z_q^*|$ | $|Z_q^*| + 4|G_1|$
Table 5. Computational cost.
Scheme | TagGen | ProofGen | ProofVerify
Our scheme | $n(t+2)T_m + n(t+1)T_e$ | $(ct+c+t)T_m + (c+t)T_e$ | $4T_p + (2+c+t)T_m + (1+c+t)T_e$
[8] | $2nt(T_m + T_e)$ | $(2c+1)T_m + cT_e$ | $3T_p + (c+1)T_m + (c+2)T_e$
[15] | $2nt(T_m + T_e)$ | $2cT_m + (c+1)T_e + T_p$ | $3T_p + (c+2)T_m + (c+3)T_e$

Bian, G.; Zhang, F.; Li, R.; Shao, B. Certificateless Remote Data Integrity Auditing with Access Control of Sensitive Information in Cloud Storage. Electronics 2022, 11, 3116. https://doi.org/10.3390/electronics11193116

