Article

TransPhos: A Deep-Learning Model for General Phosphorylation Site Prediction Based on Transformer-Encoder Architecture

1 College of Computer Science and Technology, China University of Petroleum, Qingdao 266555, China
2 State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
* Author to whom correspondence should be addressed.
Int. J. Mol. Sci. 2022, 23(8), 4263; https://doi.org/10.3390/ijms23084263
Submission received: 10 March 2022 / Revised: 4 April 2022 / Accepted: 9 April 2022 / Published: 12 April 2022

Abstract

Protein phosphorylation is one of the most critical post-translational modifications of proteins in eukaryotes and is essential for a variety of biological processes. Many attempts have been made to improve the performance of computational predictors of phosphorylation sites, but most rely on extra domain knowledge or feature selection. In this article, we present a novel deep learning-based predictor, named TransPhos, which is constructed from a transformer encoder and densely connected convolutional neural network blocks, for predicting phosphorylation sites. Experiments are conducted on the PPA (version 3.0) and Phospho.ELM datasets. The experimental results show that TransPhos performs better than several deep learning models, including Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), Recurrent Neural Networks (RNN), and Fully Connected Neural Networks (FCNN), as well as state-of-the-art prediction tools, including GPS 2.1, NetPhos, PPRED, Musite, PhosphoSVM, SKIPHOS, and DeepPhos. Our model achieves AUC values of 0.8579, 0.8335, and 0.6953 on the training datasets for Serine (S), Threonine (T), and Tyrosine (Y), respectively, using 10-fold cross-validation, and demonstrates that TransPhos considerably outperforms competing predictors in general protein phosphorylation site prediction.

1. Introduction

Post-translational modifications (PTMs) are biochemical processes that modify proteins after translation and are key mechanisms for regulating cellular function through covalent, generally enzymatic, modifications. PTMs are critical in regulating many biochemical processes, such as protein synthesis, protein stability, and enzyme activity [1]. Protein phosphorylation is an important mechanism that regulates the activity of biological enzymes and is a very frequent type of PTM [2]. Protein phosphorylation has important functions in both prokaryotes and eukaryotes [3], regulating many cellular processes such as cell cycle regulation [4,5], protein–protein interaction [6], signal recognition [7], and DNA repair [8]. More than a quarter of cellular proteins in eukaryotes are phosphorylated, and more than half of these are implicated in various human diseases, especially neurodegenerative diseases [9] and cancer [10]. Recent research has found that protein phosphorylation is vital for understanding the signal regulation mechanisms in cells and for developing new approaches to treat diseases caused by signaling irregularities, such as cancer [11,12].
The prediction of phosphorylation sites is vital to understanding the molecular mechanisms of phosphorylation-associated biological processes and is of great help to disease-related research and drug design [13,14,15]. Experimental detection of protein phosphorylation sites has advanced steadily: Edman degradation was used earliest, followed by the development of mass spectrometry, which, in combination with Edman degradation, has become an effective tool for mapping phosphoamino acid residues in protein sequencing. Several traditional experimental methods have been adopted to identify phosphorylation sites, such as high-throughput mass spectrometry [16] and low-throughput 32P labeling [17,18].
Despite the rapid development of proteomic technologies, comprehensive and exhaustive analysis of phosphorylated proteins remains difficult. Phosphorylation is an unstable and dynamic process in the body, and phosphorylated proteins have low abundance within the cell. The phosphate groups of phosphorylated proteins are easily lost during isolation and are difficult to protonate because of their electronegativity. Computational biology approaches have therefore become necessary and popular for handling the difficulties of experimental phosphorylation site identification.
To date, more than 50 computational methods for predicting phosphorylation sites have been proposed, a large number of which are based on machine learning approaches such as Bayesian decision theory [19], support vector machines [20,21], random forests [22], and logistic regression [10]. For instance, Gao et al. [23] proposed a method called Musite that uses local amino acid sequence frequencies, k-nearest neighbor features, and protein disorder scores to improve prediction accuracy. Dou et al. [21] proposed PhosphoSVM, which combines several protein sequence properties with support vector machines to predict phosphorylation sites.
These computational methods and tools have facilitated the understanding of phosphorylation and effectively improved prediction performance. Most of them use multiple sequence-based features for multi-stage classification, such as physicochemical properties, protein disorder, and other domain knowledge. In general, extra feature-extraction tools may produce redundant features, so a subset of effective features must be selected [22,24]; the selected features are then fed to a machine learning algorithm for discriminative classification. Meanwhile, end-to-end deep learning has made important breakthroughs in many fields, such as the transformer model in machine translation [25], while residual networks effectively solve the vanishing-gradient problem in deep network training [26]. These advances make it possible to train an end-to-end deep learning classification model to predict protein phosphorylation sites. In a previous study, Luo et al. [27] proposed such a tool, named DeepPhos.
In this study, a novel two-stage deep learning model, named TransPhos, is proposed to improve both the accuracy and the Matthews correlation coefficient (MCC) of general protein phosphorylation prediction. In TransPhos, encoders with the same structure but different window sizes, based on the attention mechanism, are designed. Instead of using a hand-crafted amino acid encoding, we use an embedding layer to automatically learn an amino acid representation and then use multiple stacked encode layers to learn the vector representation of each amino acid. Each encode layer has the same structure as the encoder proposed by Vaswani et al. [25], with some parameters modified.
Two densely connected convolutional neural network (DC-CNN) blocks, each matching the window size of its corresponding encoder, follow the encoders. DC-CNN blocks with different window sizes and convolutional kernels automatically learn the sequence features of protein phosphorylation sites. These features are concatenated by an inter-block concatenation layer (Inter-BCL) to further integrate the acquired information, and predictions are finally produced by the softmax function. To estimate the capabilities of TransPhos, we extracted a large number of validated phosphorylation samples from two databases [28,29,30]. To verify the generalization of our model, the Phospho.ELM dataset was used as the training and validation set, and a dataset from the PPA database was selected to test performance. The experimental results demonstrate that TransPhos is superior, in terms of AUC and MCC, to existing general phosphorylation prediction methods, including the deep learning models CNN, LSTM, RNN, and FCNN, as well as the state-of-the-art prediction tools GPS 2.1, NetPhos, PPRED, Musite, PhosphoSVM, SKIPHOS, and DeepPhos.

2. Results

TransPhos is a deep learning model developed to predict general phosphorylation sites. In this section, our model is compared with traditional deep learning models (Section 2.1) and with other existing predictors (Section 2.2). Note that all results on the training set were derived from 10-fold cross-validation. We also performed significance F-tests on the prediction results of all models to demonstrate that our predictions differ significantly from those of the other predictors, as described in Section 2.3.

2.1. Comparison with Different Deep Learning Models

We first compared TransPhos with several other deep learning models, including CNN, LSTM, RNN, and FCNN, on the validation and test sets. ROC curves are a convenient way to visualize classification results; the ROC curves for the S sites on the training set are shown in Figure 1, and those for the T and Y sites are shown in Figure A1 and Figure A2, respectively. Overall, our model achieved the highest Area Under the Curve (AUC) values and exhibited good performance.
Table 1 shows the detailed results on the training set, where we used 10-fold cross-validation to select the optimal hyperparameters, avoid overfitting, and extract enough feature information from the available data. On the S sites, our model obtained the highest AUC value of 85.79%, which was 4.23, 1.59, 3.13, and 2.90% higher than CNN, LSTM, RNN, and FCNN, respectively. Besides the AUC values, we also calculated Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), Precision (Pre), F1 Score (F1), and the Matthews correlation coefficient (MCC) to measure the capabilities of our model; the calculation of these evaluation metrics is presented in Section 4.5. On the S sites, our model also obtained the best Acc, Sn, Pre, F1, and MCC of 78.18%, 80.56%, 76.83%, 78.65%, and 0.564, respectively, and its Sp was only 1.36% lower than that of the best model, FCNN. On the T sites, our model showed the highest Sn of 76.54%, while its AUC (83.35%) and the other metrics were slightly lower than those of the best model, LSTM. On the Y sites, our model showed the highest F1 score and MCC value, with an AUC very close to the best model. We used the PPA dataset as an independent test set to measure the generalization of our model, and Table 2 shows the detailed test results. The performance of our model was also very good on the T sites, with the highest AUC, Acc, and MCC, while the other metrics Sn, Sp, Pre, and F1 were 1.25, 3.21, 0.49, and 0.28% lower than the best results, respectively.
Overall, our model performed best on the S sites and slightly worse on the T and Y sites, which may be because the large number of parameters in the encoder is difficult to train and degrades performance on smaller datasets. Each competing model performed well on at most one site type, so our model can be considered the stronger overall predictor.

2.2. Comparison with Existing Phosphorylation Site Prediction Tools

Independent test datasets were collected from the PPA database in this study to measure the performance of the model. In this subsection, our model is compared with other existing prediction tools; the model parameters of all these predictors were obtained by 10-fold cross-validation on our training dataset P.ELM using their own training strategies, ensuring a fair comparison. The left half of Table 3 shows the results of the 10-fold cross-validation, and the right half shows the results on the independent test set. We calculated the Sn, Sp, MCC, and AUC values to measure model performance. Many well-known prediction tools were compared, including GPS 2.1 [31], NetPhos [32], PPRED [33], Musite [23], PhosphoSVM [21], SKIPHOS [34], and DeepPhos [27]. The results show that our model outperformed all other models on the S sites and was highly competitive on the T sites. For example, on the S sites, our model achieved the highest AUC value of 0.787, compared with 0.670 for GPS 2.1, 0.643 for NetPhos, 0.676 for PPRED, 0.726 for Musite, 0.776 for PhosphoSVM, 0.691 for SKIPHOS, and 0.775 for DeepPhos.
On the T sites, our model achieved the highest MCC value of 0.246 while the AUC value was only 0.002 lower than the optimal result. Our model did not perform the best on the Y sites, with SKIPHOS achieving the highest MCC and AUC values.

2.3. Significance Test of the Results

Regarding the results, most indicators of our model, such as Acc and MCC, were better than those of other well-known predictors, although some indicators fell below individual competitors. The significance F-test was used to demonstrate that our prediction results differ significantly from those of the other prediction models [35]. Usually, a p-value of less than 0.05 in the F-test indicates that the two statistics are significantly different [36]. As shown in Figure 2, we plotted the results of the statistical tests as a heat map, where the value in each box is the corresponding p-value. The results of the significance tests show that our model's predictions were significantly different from those of most other models.

3. Discussion

In this work, we developed a deep learning model, named TransPhos, based on a transformer-encoder and CNN architecture, which learns features from protein sequences end to end to predict general phosphorylation sites. We performed 10-fold cross-validation on the training set and tested the model on an independent test set. Overall, our model performed extremely well on the S and T sites: its AUC values were the highest among all compared tools, and most other major metrics were also significantly better than those of the other models.
Firstly, we compared our model with several traditional deep learning models, including CNN, LSTM, FCNN, and RNN. At the S sites, our model performed at the top level in the 10-fold cross-validation: all evaluation metrics were the highest except Sp, with AUC, Acc, Sn, Pre, F1, and MCC exceeding the next-best models by 1.59%, 1.19%, 0.95%, 0.75%, 1.11%, and 0.023, respectively. A slight decrease in performance was observed at the T sites, but on the independent test set the main evaluation metrics AUC, Acc, and MCC were still better than those of the other deep learning predictors, by 0.6%, 0.56%, and 0.014, respectively. At the Y sites, our model's performance was inferior to the other predictors.
Furthermore, we compared TransPhos with other current mainstream prediction models, including GPS 2.1, NetPhos, PPRED, Musite, PhosphoSVM, SKIPHOS, and DeepPhos. Specifically, at the S sites, our model was narrowly outperformed by DeepPhos in the 10-fold cross-validation but achieved the best performance on the independent test set, where its AUC and MCC were 0.8 and 0.7 percentage points higher than the other best models, respectively. This indicates that our model has better generalization performance than the comparison predictors. On the T sites, the AUC value of our model was only 0.2% lower than that of the best model, DeepPhos, while its MCC was 0.015 higher, indicating that our predictions are closer to the true values. On the Y sites, neither our model nor the previously better-performing DeepPhos achieved the best performance; SKIPHOS obtained the highest MCC and AUC values at this site, 0.197 and 0.634, respectively.
Although our model performed well in predicting the S and T phosphorylation sites, some limitations remain. On the Y sites, the total amount of positive data is much smaller than for the S and T sites, and the encoder part of our model has many parameters to train, which easily causes overfitting. To mitigate this, we used several techniques in the model design, such as regularization and the addition of dropout after the convolution layers [37,38], but the limitation is still not fully resolved. Because of the large number of parameters and the limited availability of kinase-specific phosphorylation site data, our partial experiments showed that the model also performs poorly in kinase-specific phosphorylation site prediction; thus, it should only be used for general phosphorylation site prediction. In general, deep learning still performs poorly on small datasets [39,40]. However, in practical applications, S and T sites are far more common than Y sites, so the poorer performance on Y sites is acceptable [41].
Since the numerical differences between our model and the other predictors were sometimes small, we performed a significance F-test to check whether our results differed significantly from those of the other prediction models [42] and obtained the corresponding p-values; a p-value of less than 0.05 is usually considered to indicate a significant difference between two statistics. The results of our significance tests are presented in Figure 2. According to them, the following models were not significantly different from our model: CNN, FCNN, and GPS 2.1 on the S sites; LSTM and GPS 2.1 on the Y sites; and NetPhos and GPS 2.1 on the T sites. Although these models were not statistically distinguishable from ours, a comparison of the prediction results showed that our model achieved better prediction performance at the corresponding sites. It can therefore be concluded that the overall performance of our model is better than that of the existing models.
The main contribution of this study is the application of the transformer's encoder structure to the phosphorylation prediction task [25]. Most previous studies have either performed independent feature extraction followed by machine learning algorithms to predict phosphorylation sites [43] or used one-hot encoding of protein sequences [27]. Feature extraction requires specialized domain knowledge, and one-hot encoding struggles to effectively represent the interrelationships within protein sequences [44]. In this paper, the amino acid sequences of the proteins are first represented by dictionary encoding, converted to vector representations by an embedding layer, and passed through the encoder to extract the effective information between sequence positions. Afterwards, convolutional neural networks produce a high-dimensional representation of the phosphorylation sites, and classification is finally performed by the softmax function.
In summary, we present a deep learning architecture, TransPhos, that can be applied to general phosphorylation site prediction tasks to facilitate further biological research. The model carries some uncertainty because the complete protein sequence is sliced into subsequences and predictions are made per subsequence. If a phosphorylation site lies near either end of the whole protein sequence, the subsequence must be padded with a large number of placeholder characters, which can lead to unpredictable errors when predicting such a site, for example, prediction scores close to 0.5 that make positive and negative samples difficult to distinguish.
In future work, we will continue to study phosphorylation site prediction and consider an encoder-decoder architecture trained directly on whole protein sequences and their labels to achieve better predictions.

4. Materials and Methods

4.1. Overview

The overall architecture of TransPhos is shown in Figure 3. The detailed process of data collection and preprocessing for our dataset is described in Section 4.2. Section 4.3 describes the structure of the TransPhos model in detail, Section 4.4 describes its training process, and Section 4.5 presents the performance evaluation metrics used in this study.

4.2. Dataset Collection and Pre-Processing

4.2.1. Dataset Collection

The construction of an effective benchmark dataset is crucial for the training and evaluation of deep learning models. PPA version 3.0 [28,29,45] and Phospho.ELM (P.ELM) version 9.0 [30] were used in this study. These two datasets were selected for two main reasons. On the one hand, they have been used as benchmark datasets, which makes comparison with other models easier. On the other hand, protein phosphorylation occurs in both animals and plants: the Phospho.ELM dataset includes phosphorylation sites from mammals, while the PPA dataset contains sites from Arabidopsis thaliana (a plant).
A total of 11,254 protein sequences were collected from the P.ELM dataset, each containing one or more phosphorylation sites: 6635 sequences with serine (S) sites, 3227 with threonine (T) sites, and 1392 with tyrosine (Y) sites. The sites in the P.ELM database were extracted from other studies and phosphoproteomic analyses, while the sites in the PPA database were experimentally measured by mass spectrometry. The PPA database also contains results predicted by computational methods; since some of these predictions have not been experimentally validated, only the experimentally validated phosphorylation sites in PPA were used. In this study, BLASTClust [46] was used to cluster the proteins in both datasets and remove redundant and duplicate sequences. We finally selected 12,810 proteins to train the model.

4.2.2. Data Pre-Processing

A complete protein sequence may comprise up to 4000 amino acids. To facilitate learning of the characteristics near a phosphorylation site, each sequence is cut into subsequences with a window size of K such that the amino acid in the middle of each subsequence is a phosphorylation site; where a window extends past the end of the sequence, it is padded with the character '*' so that every subsequence has the same length. Subsequences of length K centered on non-phosphorylated residues of the corresponding amino acids were cut in the same way to serve as negative samples. Because this setting produces far more negative than positive samples, we randomly deleted some negative samples to balance the two classes. Table 4 shows the numbers of sequences and phosphorylation sites used in this study.
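To make the windowing scheme concrete, here is a minimal Python sketch of the subsequence extraction described above (function and variable names are ours, not the authors' code; the window size and '*' padding follow the text):

```python
def extract_window(sequence: str, center: int, window: int = 51, pad: str = "*") -> str:
    """Cut a subsequence of length `window` centered on `center`, padding with
    `pad` where the window runs past either end of the protein sequence."""
    half = window // 2
    left = sequence[max(0, center - half):center]
    right = sequence[center + 1:center + half + 1]
    left = pad * (half - len(left)) + left        # pad the N-terminal side
    right = right + pad * (half - len(right))     # pad the C-terminal side
    return left + sequence[center] + right


def make_samples(sequence: str, phospho_sites: set, window: int = 51):
    """Positive samples are windows centered on annotated phosphosites;
    negatives are windows centered on the remaining S/T/Y residues."""
    positives, negatives = [], []
    for i, aa in enumerate(sequence):
        if aa in "STY":
            (positives if i in phospho_sites else negatives).append(
                extract_window(sequence, i, window))
    return positives, negatives
```

The class balancing described above can then be done by randomly down-sampling the returned negatives.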

4.3. Methods

TransPhos is a novel deep learning architecture that maps local protein sequences into high-dimensional vectors via a self-attention mechanism, nonlinear transformations, and convolutional neural networks. The final classification of phosphorylation sites is generated by the softmax function. TransPhos does not directly use a transformer encoder or a plain multilayer CNN but utilizes several encode layers with different window sizes together with DC-CNN blocks, which allows the efficient extraction of key protein sequence features for phosphorylation site prediction.
For a protein represented by an amino acid sequence, each amino acid $y$ is mapped to $D(y)$, where $D$ is a dictionary encoding function that represents amino acids as integers. We sliced each sequence into subsequences of different window sizes, with the phosphorylation site at the middle position. For a protein subsequence, the input of TransPhos with $X$ encoders in total is the set of vectors $E_x \in \mathbb{R}^{L_x \times I}$ for Encoder $x$ ($x = 1, 2, \ldots, X$), with $L_x$ and $I$ being the corresponding local window size around the phosphorylation site and the size of the amino acid symbol vector, respectively. Here, $I$ was set to 16. The input vector representation was obtained by passing the dictionary codes through an embedding layer. In this study, we carefully examined various configurations of the model inputs with different window sizes and finally adopted the best-performing configuration, with $X = 2$ and window sizes of 33 and 51 for Encoder 1 and Encoder 2, respectively, which differs slightly from previously proposed phosphorylation site predictors [19,24,27,47]. The encoders' input shapes were therefore $33 \times 16$ and $51 \times 16$, respectively.
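As an illustration of the dictionary encoding and embedding step, consider the following sketch (the vocabulary ordering and names are ours; only the symbol vector size I = 16 and the '*' padding character come from the text):

```python
import torch
import torch.nn as nn

# Integer ids for the 20 standard amino acids plus the '*' padding symbol
AA_VOCAB = {aa: i for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY*")}

def dictionary_encode(subseq: str) -> torch.Tensor:
    """Map a windowed subsequence to a tensor of integer ids, D(y)."""
    return torch.tensor([AA_VOCAB[aa] for aa in subseq])

embedding = nn.Embedding(len(AA_VOCAB), embedding_dim=16)    # I = 16
E = embedding(dictionary_encode("MKT" + "*" * 48))           # shape (51, 16)
```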
The TransPhos model has two main stages. The first stage consists of $X$ encoders, each with several encode layers. The encoder structure used in this paper was originally proposed by Vaswani et al. [25] for machine translation; in this study, the encoder parameters were fine-tuned for the phosphorylation prediction task.
Encoder: The encoder contains four structurally identical encode layers, each with two sub-layers. The first is a multi-head self-attention mechanism, and the second is a fully connected feed-forward network. The internal structure of the encoder is shown in Figure 4a.
The first sub-layer is an attention mechanism identical to that of the transformer's encoder. The attention function is defined as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$
where the matrices Query ($Q$), Key ($K$), and Value ($V$) are the inputs to the attention function; the queries and keys have dimension $d_k$ and the values dimension $d_v$. The output of the attention function is obtained by computing the dot products of the query with all keys, dividing each by $\sqrt{d_k}$, applying the softmax function, and multiplying the resulting weights by the values.
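The attention function above can be rendered directly in code; the following PyTorch sketch (the function name is ours) computes the scaled dot-product attention:

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # scaled dot products
    weights = torch.softmax(scores, dim=-1)            # attention weights
    return weights @ V
```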
In practice, instead of using a single attention function, we ran several attention functions in parallel, a design known as the multi-head attention mechanism [25], which is very helpful in improving training speed. The output of the multi-head attention is calculated as:
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^{O}, \quad \text{where } \mathrm{head}_i = \mathrm{Attention}\!\left(QW_i^{Q}, KW_i^{K}, VW_i^{V}\right)$$
where the parameter matrices are $W_i^{Q} \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^{K} \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^{V} \in \mathbb{R}^{d_{model} \times d_v}$, and $W^{O} \in \mathbb{R}^{h d_v \times d_{model}}$.
In this task, we applied $h = 4$ parallel attention heads. For each layer, we set $d_k = d_v = d_{model}/h = 4$. Since there are only 20 amino acid types, a shorter vector was used to represent them in this task. This design speeds up training and, to a certain extent, avoids rapid overfitting on small datasets, which is especially important when training on the Y sites. Figure 5 illustrates the internal structure of the attention mechanism.
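A self-contained sketch of the multi-head mechanism with the stated dimensions (h = 4, d_model = 16, d_k = d_v = 4) follows; the random weight initialization is purely illustrative:

```python
import math
import torch

def attention(Q, K, V):  # softmax(Q K^T / sqrt(d_k)) V
    return torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(K.shape[-1]), dim=-1) @ V

def multi_head_attention(x, W_q, W_k, W_v, W_o):
    """Self-attention with per-head projections W_q/W_k/W_v (lists of
    d_model x d_k matrices) and an output projection W_o (h*d_v x d_model)."""
    heads = [attention(x @ Wq, x @ Wk, x @ Wv) for Wq, Wk, Wv in zip(W_q, W_k, W_v)]
    return torch.cat(heads, dim=-1) @ W_o

# Dimensions stated in the text: d_model = 16, h = 4, d_k = d_v = 4
d_model, d_k, h = 16, 4, 4
x = torch.randn(51, d_model)                        # one 51-residue window
proj = lambda: [torch.randn(d_model, d_k) for _ in range(h)]
out = multi_head_attention(x, proj(), proj(), proj(), torch.randn(h * d_k, d_model))
```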
Because the encoder architecture of the transformer is used, the attention mechanism here is self-attention: the query, key, and value all come from the same place. The input of each encode layer is the output of the previous one, so every layer can attend over all the information produced by the layer before it.
The second sub-layer is a fully connected feed-forward network. It is defined as:
$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)W_2 + b_2$$
The attention sub-layer and the feed-forward sub-layer are each wrapped in a residual connection [26], followed by layer normalization [48].
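Putting the sub-layers together, one encode layer can be sketched with standard PyTorch modules as below (nn.MultiheadAttention stands in for the four-head mechanism; the feed-forward width d_ff is our assumption, as the paper does not state it):

```python
import torch
import torch.nn as nn

class EncodeLayer(nn.Module):
    """Multi-head self-attention + feed-forward network, each wrapped in a
    residual connection followed by layer normalization."""
    def __init__(self, d_model=16, h=4, d_ff=64, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, h, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(                  # FFN(x) = max(0, xW1 + b1)W2 + b2
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                          # x: (batch, length, d_model)
        a, _ = self.attn(x, x, x)                  # self-attention: Q = K = V = x
        x = self.norm1(x + a)                      # residual + layer norm
        return self.norm2(x + self.ffn(x))         # residual + layer norm
```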
After obtaining the output of the encoder, the second stage is X densely connected convolutional neural networks, the so-called DC-CNN blocks. We adopted several DC-CNN blocks with different window sizes and each DC-CNN block had the same structure. The internal construction of the DC-CNN block is shown in Figure 4b.
The input vector of the DC-CNN block is the output vector of the encoder, and the DC-CNN blocks perform a series of convolution operations to finally obtain a high-dimensional representation of the feature map. Each convolutional layer performs a one-dimensional convolutional operation along the length of the protein sequence, and after obtaining the corresponding output, an activation function is used to activate the neurons and implement the nonlinear transformation. Here, we used the ReLU activation function, which is very effective in convolutional neural networks. The feature maps obtained from the first convolutional layer are defined as:
$$h_1^{k} = a^{k}\!\left(W^{k} \ast E^{k} + b_1^{k}\right)$$
where $W^{k}$ represents the weight matrix with a size of $I \times S^{k} \times D$, $I$ is the length of the vector representing an individual amino acid in the protein sequence, and $S^{k}$ is the length of the convolution kernel; here, $S^{k}$ was set to 7 and 13 for $k = 1, 2$, respectively. $D$ denotes the number of convolution kernels and was set to 64. $b_1^{k}$ is the bias term. A dropout function was used after each convolution to randomly remove some neurons and reduce the risk of overfitting.
We adopted Intra-BCLs to strengthen the extraction of phosphorylation features in the DC-CNN block, connecting all previous convolutional layers with subsequent ones. The output feature vectors of the $i$th convolutional layer in DC-CNN block $k$ are therefore calculated as:
$$h_i^{k} = a^{k}\!\left(W_i^{k} \ast \left[E^{k}, h_1^{k}, \ldots, h_{i-1}^{k}\right] + b_i^{k}\right), \quad i = 2, 3$$
where $W_i^{k} \in \mathbb{R}^{D \times S^{k} \times D}$, with $D$ referring to the number of convolution kernels in the convolutional layers of every DC-CNN block, and $h_{i-1}^{k}$ represents the feature vectors generated by the $(i-1)$th convolutional layer.
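A hedged sketch of one DC-CNN block with its Intra-BCL dense connections is shown below; the kernel length S^k and the D = 64 kernels come from the text, three convolutional layers match i = 1, 2, 3 in the equations, and the dropout rate is our assumption:

```python
import torch
import torch.nn as nn

class DCCNNBlock(nn.Module):
    """Densely connected 1-D CNN block: each layer convolves the concatenation
    of the block input and all previous feature maps (the Intra-BCLs)."""
    def __init__(self, in_channels=16, kernel_size=7, filters=64, layers=3, p_drop=0.2):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = in_channels
        for _ in range(layers):
            self.convs.append(nn.Conv1d(channels, filters, kernel_size, padding="same"))
            channels += filters        # dense connectivity grows the input channels
        self.act, self.drop = nn.ReLU(), nn.Dropout(p_drop)

    def forward(self, x):              # x: (batch, channels, sequence length)
        feats = [x]
        for conv in self.convs:
            feats.append(self.drop(self.act(conv(torch.cat(feats, dim=1)))))
        return feats[-1]
```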
After the sequence representations of the protein phosphorylation sites are generated by the encoders and DC-CNN blocks, the next step uses the Inter-BCL to concatenate them along the first dimension as follows:

$$h_f = \left[\alpha^{k}(h_{C_1}),\; \alpha^{k}(h_{C_2})\right]$$

where $h_{C_1}$ and $h_{C_2}$ are the feature maps generated by the first and second DC-CNN blocks, respectively. Next, this feature map is transformed into a one-dimensional tensor by a flatten layer. A fully connected layer follows, and the final prediction is produced by the softmax function:
$$P(y = 1 \mid x) = \frac{1}{1 + e^{-f_c W_c}}$$

$$P(y = 0 \mid x) = 1 - \frac{1}{1 + e^{-f_c W_c}}$$

where $W_c \in \mathbb{R}^{f_c \times q}$ and $q$ refers to the number of categories to be predicted, which was set to 2. The predicted result lies between 0 and 1.

4.4. Training of the TransPhos Model

Our model was trained on a computer with an NVIDIA GeForce RTX 3090 GPU. Moreover, the standard cross-entropy was used to minimize the training error:
$$\mathrm{Loss}_c = -\frac{1}{N} \sum_{j=1}^{N} \Big[\, y_j \ln P(y_j = 1 \mid x_j) + (1 - y_j) \ln P(y_j = 0 \mid x_j) \,\Big]$$
where $N$ represents the number of training samples, $x_j$ refers to the $j$th input sequence, and $y_j$ refers to its label. We adopted L2 regularization to relieve overfitting. Therefore, the objective function of TransPhos is defined as:
$$\min_{W}\; \mathrm{Loss}_c + \lambda \, \|W\|_2^2$$
where $\|W\|_2$ is the L2 norm of the weight matrix $W$ and $\lambda$ is the regularization coefficient. Finally, we adopted the Adam optimizer with a learning rate of 0.0002 and a decay of 0.00001.
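A sketch of this training objective in PyTorch follows; the stand-in network and the value of lambda are ours, since the paper does not specify them:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(51 * 16, 1), nn.Sigmoid())  # stand-in

criterion = nn.BCELoss()   # binary cross-entropy, matching the loss above

def objective(pred, target, lam=1e-5):
    """Cross-entropy plus the L2 penalty; lam is an illustrative value."""
    l2 = sum((w ** 2).sum() for w in model.parameters())
    return criterion(pred, target) + lam * l2

# Adam with the stated learning rate of 0.0002; the stated decay of 0.00001
# most likely refers to a learning-rate decay schedule in the original framework.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
```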
TransPhos can be applied to general phosphorylation site prediction. We explored different hyperparameters and tried to simplify the model design so that it could learn more information from amino acid sequences than the reference models. Since the model's many transformer-derived parameters easily caused overfitting when trained on small datasets, our model performed poorly in kinase-specific phosphorylation site prediction tasks, for which only small amounts of data are available, so applying it to kinase-specific prediction is not recommended.

4.5. Performance Evaluation

Five metrics with different attributes were used to evaluate the prediction of protein phosphorylation sites: specificity (SP), sensitivity (SN), accuracy (ACC), the area under the ROC curve (AUC), and the Matthews correlation coefficient (MCC). These metrics are computed from a confusion matrix that compares the actual target values with those predicted by the model; the number of rows and columns in this matrix depends on the number of classes. From the confusion matrix, we obtain four values: true positives (TP), the number of positive samples correctly classified by the model; false positives (FP), the number of negative samples incorrectly classified as positive; true negatives (TN), the number of negative samples correctly classified; and false negatives (FN), the number of positive samples incorrectly classified as negative.
The ACC metric is defined in Equation (11) as the ratio of the number of all correctly predicted samples to the total number of samples:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
The SN, or recall, is defined in Equation (12) as the proportion of true positive predictions among all positive cases:
$$\mathrm{SN} = \mathrm{Recall} = \frac{TP}{TP + FN}$$
The SP is defined in Equation (13) as the proportion of correctly predicted negative samples among all negative samples:
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
The precision metric is defined in Equation (14). It calculates the proportion of true positive samples to all cases that were predicted as positive:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
The F1-score is defined in Equation (15) as the harmonic mean of precision and recall, allowing the performance of methods to be compared with a single number:
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
The SN and SP measures were used to plot the ROC curve, and the AUC evaluates the predictive performance of the model. Furthermore, we also calculated the Matthews correlation coefficient between the predicted and true values; a higher correlation represents a better prediction result:
$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FN)(TP + FP)(TN + FN)(TN + FP)}}$$
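As a reference implementation, the following sketch computes the metrics above from binary label arrays (the function name is ours):

```python
import numpy as np

def confusion_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute ACC, SN, SP, Precision, F1, and MCC from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)                          # sensitivity / recall
    sp = tn / (tn + fp)                          # specificity
    pre = tp / (tp + fp)                         # precision
    f1 = 2 * pre * sn / (pre + sn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fn) * (tp + fp) * (tn + fn) * (tn + fp)))
    return {"ACC": acc, "SN": sn, "SP": sp, "Pre": pre, "F1": f1, "MCC": mcc}
```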

5. Conclusions

A general phosphorylation site prediction approach, TransPhos, was constructed using a transformer-encoder architecture and DC-CNN blocks. TransPhos achieved AUC values of 0.8579, 0.8335, and 0.6953 for the S, T, and Y phosphorylation sites, respectively, on P.ELM with 10-fold cross-validation. On an independent test dataset, the AUC values were 0.7867, 0.6719, and 0.6009 for the S, T, and Y sites, respectively. Beyond AUC, the predictive performance of our method was found to be significantly better than that of other deep learning models and existing methods, and the significance tests show that our predictions differ significantly from those of other models. The experimental results on the independent dataset show that our model has better overall performance in the general phosphorylation site prediction task, especially for the S/T sites, where it significantly outperforms existing tools and conventional deep learning models.

Author Contributions

Conceptualization, X.W.; software, Z.Z.; validation, C.Z. and X.M.; investigation, X.S.; writing—original draft preparation, X.W. and Z.Z.; supervision, X.W.; visualization, P.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [Grant Nos. 61873280, 61873281, 61972416] and Natural Science Foundation of Shandong Province [No. ZR2019MF012].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank our partners who provided all the help during the research process and the team for their great support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. ROC curves with 95% confidence intervals for different deep learning models on the T sites of the training dataset P.ELM; 10-fold cross-validation was used. (a) ROC curve of the TransPhos model. (b) ROC curve of the CNN model. (c) ROC curve of the LSTM model. (d) ROC curve of the RNN model. (e) ROC curve of the FCNN model. (f) Performance comparison on the T sites of the P.ELM dataset.
Figure A2. ROC curves with 95% confidence intervals for different deep learning models on the Y sites of the training dataset P.ELM; 10-fold cross-validation was used. (a) ROC curve of the TransPhos model. (b) ROC curve of the CNN model. (c) ROC curve of the LSTM model. (d) ROC curve of the RNN model. (e) ROC curve of the FCNN model. (f) Performance comparison on the Y sites of the P.ELM dataset.

References

  1. Audagnotto, M.; Dal Peraro, M. Protein post-translational modifications: In silico prediction tools and molecular modeling. Comput. Struct. Biotechnol. J. 2017, 15, 307–319. [Google Scholar] [CrossRef]
  2. Khoury, G.A.; Baliban, R.C.; Floudas, C.A. Proteome-wide post-translational modification statistics: Frequency analysis and curation of the swiss-prot database. Sci. Rep. 2011, 1, 90. [Google Scholar] [CrossRef]
  3. Humphrey, S.J.; James, D.E.; Mann, M. Protein phosphorylation: A major switch mechanism for metabolic regulation. Trends Endocrinol. Metab. 2015, 26, 676–687. [Google Scholar] [CrossRef]
  4. Trost, B.; Kusalik, A. Computational prediction of eukaryotic phosphorylation sites. Bioinformatics 2011, 27, 2927–2935. [Google Scholar] [CrossRef] [Green Version]
  5. Wang, X.; Zhang, C.; Zhang, Y.; Meng, X.; Zhang, Z.; Shi, X.; Song, T. IMGG: Integrating Multiple Single-Cell Datasets through Connected Graphs and Generative Adversarial Networks. Int. J. Mol. Sci. 2022, 23, 2082. [Google Scholar] [CrossRef]
  6. Nishi, H.; Hashimoto, K.; Panchenko, A.R. Phosphorylation in protein-protein binding: Effect on stability and function. Structure 2011, 19, 1807–1815. [Google Scholar] [CrossRef] [Green Version]
  7. McCubrey, J.; May, W.S.; Duronio, V.; Mufson, A. Serine/threonine phosphorylation in cytokine signal transduction. Leukemia 2000, 14, 9–21. [Google Scholar] [CrossRef]
  8. Li, T.; Li, F.; Zhang, X. Prediction of kinase-specific phosphorylation sites with sequence features by a log-odds ratio approach. Proteins Struct. Funct. Bioinform. 2008, 70, 404–414. [Google Scholar] [CrossRef]
  9. Sambataro, F.; Pennuto, M. Post-translational modifications and protein quality control in motor neuron and polyglutamine diseases. Front. Mol. Neurosci. 2017, 10, 82. [Google Scholar] [CrossRef] [Green Version]
  10. Li, F.; Li, C.; Marquez-Lago, T.T.; Leier, A.; Akutsu, T.; Purcell, A.W.; Ian Smith, A.; Lithgow, T.; Daly, R.J.; Song, J. Quokka: A comprehensive tool for rapid and accurate prediction of kinase family-specific phosphorylation sites in the human proteome. Bioinformatics 2018, 34, 4223–4231. [Google Scholar] [CrossRef] [Green Version]
  11. Cohen, P. The role of protein phosphorylation in human health and disease. The Sir Hans Krebs Medal Lecture. Eur. J. Biochem. 2001, 268, 5001–5010. [Google Scholar] [CrossRef] [PubMed]
  12. Li, X.; Hong, L.; Song, T.; Rodríguez-Patón, A.; Chen, C.; Zhao, H.; Shi, X. Highly biocompatible drug-delivery systems based on DNA nanotechnology. J. Biomed. Nanotechnol. 2017, 13, 747–757. [Google Scholar] [CrossRef]
  13. Song, T.; Wang, G.; Ding, M.; Rodriguez-Paton, A.; Wang, X.; Wang, S. Network-Based Approaches for Drug Repositioning. Mol. Inform. 2021, 2100200. [Google Scholar] [CrossRef] [PubMed]
  14. Pang, S.; Zhang, Y.; Song, T.; Zhang, X.; Wang, X.; Rodriguez-Patón, A. AMDE: A novel attention-mechanism-based multidimensional feature encoder for drug–drug interaction prediction. Brief. Bioinform. 2022, 23, bbab545. [Google Scholar] [CrossRef]
  15. Song, T.; Zhang, X.; Ding, M.; Rodriguez-Paton, A.; Wang, S.; Wang, G. DeepFusion: A Deep Learning Based Multi-Scale Feature Fusion Method for Predicting Drug-Target Interactions. Methods, 2022; in press. [Google Scholar] [CrossRef]
  16. Rohira, A.D.; Chen, C.-Y.; Allen, J.R.; Johnson, D.L. Covalent small ubiquitin-like modifier (SUMO) modification of Maf1 protein controls RNA polymerase III-dependent transcription repression. J. Biol. Chem. 2013, 288, 19288–19295. [Google Scholar] [CrossRef] [Green Version]
  17. Aponte, A.M.; Phillips, D.; Harris, R.A.; Blinova, K.; French, S.; Johnson, D.T.; Balaban, R.S. 32P labeling of protein phosphorylation and metabolite association in the mitochondria matrix. Methods Enzymol. 2009, 457, 63–80. [Google Scholar]
  18. Beausoleil, S.A.; Villén, J.; Gerber, S.A.; Rush, J.; Gygi, S.P. A probability-based approach for high-throughput protein phosphorylation analysis and site localization. Nat. Biotechnol. 2006, 24, 1285–1292. [Google Scholar] [CrossRef]
  19. Xue, Y.; Li, A.; Wang, L.; Feng, H.; Yao, X. PPSP: Prediction of PK-specific phosphorylation site with Bayesian decision theory. BMC Bioinform. 2006, 7, 163. [Google Scholar] [CrossRef] [Green Version]
  20. Huang, S.-Y.; Shi, S.-P.; Qiu, J.-D.; Liu, M.-C. Using support vector machines to identify protein phosphorylation sites in viruses. J. Mol. Graph. Model. 2015, 56, 84–90. [Google Scholar] [CrossRef]
  21. Dou, Y.; Yao, B.; Zhang, C. PhosphoSVM: Prediction of phosphorylation sites by integrating various protein sequence attributes with a support vector machine. Amino Acids 2014, 46, 1459–1469. [Google Scholar] [CrossRef] [PubMed]
  22. Fan, W.; Xu, X.; Shen, Y.; Feng, H.; Li, A.; Wang, M. Prediction of protein kinase-specific phosphorylation sites in hierarchical structure using functional information and random forest. Amino Acids 2014, 46, 1069–1078. [Google Scholar] [CrossRef] [PubMed]
  23. Gao, J.; Thelen, J.J.; Dunker, A.K.; Xu, D. Musite, a tool for global prediction of general and kinase-specific phosphorylation sites. Mol. Cell. Proteom. 2010, 9, 2586–2600. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Wei, L.; Xing, P.; Tang, J.; Zou, Q. PhosPred-RF: A novel sequence-based predictor for phosphorylation sites using sequential information only. IEEE Trans. Nanobioscience 2017, 16, 240–247. [Google Scholar] [CrossRef]
  25. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; Morgan Kaufmann: San Francisco, CA, USA, 2017; pp. 5998–6008. [Google Scholar]
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  27. Luo, F.; Wang, M.; Liu, Y.; Zhao, X.-M.; Li, A. DeepPhos: Prediction of protein phosphorylation sites with deep learning. Bioinformatics 2019, 35, 2766–2773. [Google Scholar] [CrossRef] [Green Version]
  28. Heazlewood, J.L.; Durek, P.; Hummel, J.; Selbig, J.; Weckwerth, W.; Walther, D.; Schulze, W.X. PhosPhAt: A database of phosphorylation sites in Arabidopsis thaliana and a plant-specific phosphorylation site predictor. Nucleic Acids Res. 2007, 36 (Suppl. 1), D1015–D1021. [Google Scholar] [CrossRef]
  29. Zulawski, M.; Braginets, R.; Schulze, W.X. PhosPhAt goes kinases—searchable protein kinase target information in the plant phosphorylation site database PhosPhAt. Nucleic Acids Res. 2012, 41, D1176–D1184. [Google Scholar] [CrossRef] [Green Version]
  30. Dinkel, H.; Chica, C.; Via, A.; Gould, C.M.; Jensen, L.J.; Gibson, T.J.; Diella, F. Phospho.ELM: A database of phosphorylation sites—update 2011. Nucleic Acids Res. 2010, 39 (Suppl. 1), D261–D267. [Google Scholar] [CrossRef] [Green Version]
  31. Xue, Y.; Ren, J.; Gao, X.; Jin, C.; Wen, L.; Yao, X. GPS 2.0, a tool to predict kinase-specific phosphorylation sites in hierarchy. Mol. Cell. Proteom. 2008, 7, 1598–1608. [Google Scholar] [CrossRef] [Green Version]
  32. Blom, N.; Gammeltoft, S.; Brunak, S. Sequence and structure-based prediction of eukaryotic protein phosphorylation sites. J. Mol. Biol. 1999, 294, 1351–1362. [Google Scholar] [CrossRef]
  33. Basu, S.; Plewczynski, D. AMS 3.0: Prediction of post-translational modifications. BMC Bioinform. 2010, 11, 210. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Dang, T.H. SKIPHOS: Non-Kinase Specific Phosphorylation Site Prediction with Random Forests and Amino Acid Skip-Gram Embeddings; VNU University of Engineering and Technology: Hanoi, Vietnam, 2019. [Google Scholar]
  35. Zar, J.H. Biostatistical Analysis; Pearson Education India: Sholinganallur, India, 1999. [Google Scholar]
  36. Armaly, M.F.; Krueger, D.E.; Maunder, L.; Becker, B.; Hetherington, J.; Kolker, A.E.; Levene, R.Z.; Maumenee, A.E.; Pollack, I.P.; Shaffer, R.N. Biostatistical analysis of the collaborative glaucoma study: I. Summary report of the risk factors for glaucomatous visual-field defects. Arch. Ophthalmol. 1980, 98, 2163–2171. [Google Scholar] [CrossRef] [PubMed]
  37. Brownlee, J. Better Deep Learning: Train Faster, Reduce Overfitting, and Make Better Predictions; Machine Learning Mastery: San Francisco, CA, USA, 2018. [Google Scholar]
  38. Shi, X.; Wu, X.; Song, T.; Li, X. Construction of DNA nanotubes with controllable diameters and patterns using hierarchical DNA sub-tiles. Nanoscale 2016, 8, 14785–14792. [Google Scholar] [CrossRef] [PubMed]
  39. Zhao, W. Research on the deep learning of the small sample data based on transfer learning. In Proceedings of the AIP Conference Proceedings, Yogyakarta, Indonesia, 9–10 November 2017; AIP Publishing LLC: Melville, NY, USA, 2017; p. 020018. [Google Scholar]
  40. Ma, J.; Yu, M.K.; Fong, S.; Ono, K.; Sage, E.; Demchak, B.; Sharan, R.; Ideker, T. Using deep learning to model the hierarchical structure and function of a cell. Nat. Methods 2018, 15, 290–298. [Google Scholar] [CrossRef] [PubMed]
  41. Hornbeck, P.V.; Chabra, I.; Kornhauser, J.M.; Skrzypek, E.; Zhang, B. PhosphoSite: A bioinformatics resource dedicated to physiological protein phosphorylation. Proteomics 2004, 4, 1551–1561. [Google Scholar] [CrossRef] [PubMed]
  42. Li, X.; Song, T.; Chen, Z.; Shi, X.; Chen, C.; Zhang, Z. A universal fast colorimetric method for DNA signal detection with DNA strand displacement and gold nanoparticles. J. Nanomater. 2015, 2015, 365. [Google Scholar] [CrossRef]
  43. Biswas, A.K.; Noman, N.; Sikder, A.R. Machine learning approach to predict protein phosphorylation sites by incorporating evolutionary information. BMC Bioinform. 2010, 11, 273. [Google Scholar] [CrossRef] [Green Version]
  44. Shi, X.; Chen, C.; Li, X.; Song, T.; Chen, Z.; Zhang, Z.; Wang, Y. Size-controllable DNA nanoribbons assembled from three types of reusable brick single-strand DNA tiles. Soft Matter 2015, 11, 8484–8492. [Google Scholar] [CrossRef]
  45. Durek, P.; Schmidt, R.; Heazlewood, J.L.; Jones, A.; MacLean, D.; Nagel, A.; Kersten, B.; Schulze, W.X. PhosPhAt: The Arabidopsis thaliana phosphorylation site database. An update. Nucleic Acids Res. 2010, 38 (Suppl. 1), D828–D834. [Google Scholar] [CrossRef]
  46. Altschul, S.F.; Madden, T.L.; Schäffer, A.A.; Zhang, J.; Zhang, Z.; Miller, W.; Lipman, D.J. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Res. 1997, 25, 3389–3402. [Google Scholar] [CrossRef] [Green Version]
  47. Blom, N.; Sicheritz-Pontén, T.; Gupta, R.; Gammeltoft, S.; Brunak, S. Prediction of post-translational glycosylation and phosphorylation of proteins from the amino acid sequence. Proteomics 2004, 4, 1633–1649. [Google Scholar] [CrossRef] [PubMed]
  48. Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer normalization. arXiv 2016, arXiv:1607.06450. [Google Scholar]
Figure 1. ROC curves with 95% confidence intervals for different deep learning models on the S sites of the training dataset P.ELM; 10-fold cross-validation was used. The Area Under the Curve (AUC) is defined as the area under the ROC curve and measures the performance of the model. (a) ROC curve of the TransPhos model. (b) ROC curve of the Convolutional Neural Network (CNN) model. (c) ROC curve of the Long Short-Term Memory (LSTM) model. (d) ROC curve of the Recurrent Neural Network (RNN) model. (e) ROC curve of the Fully Connected Neural Network (FCNN) model. (f) Performance comparison on the S sites of the P.ELM dataset.
Figure 2. Heat map of the significance F-test. The value in each square is the p-value of the statistical test, and it is generally accepted that a p-value less than 0.05 means that the two statistics are significantly different. Scientific notation is used; for example, 1.6e-14 means $1.6 \times 10^{-14}$. All statistical tests were performed on the prediction results for the test dataset PPA. In the horizontal coordinates, some model names are abbreviated for display. (a) Significance F-test of the prediction results between the deep learning models for the S sites. (b) Significance F-test of the prediction results between the deep learning models for the T sites. (c) Significance F-test of the prediction results between the deep learning models for the Y sites. (d) Significance F-test of the prediction results for the S sites between other prediction models. (e) Significance F-test of the prediction results for the T sites between other prediction models. (f) Significance F-test of the prediction results for the Y sites between other prediction models.
Figure 3. The overall framework of TransPhos. The original sequence is converted into a set of feature vectors with different window sizes through an embedding layer; here, we set two window sizes, 51 and 33. The sequence features are further represented by the encoders, and the high-dimensional features are then extracted through several densely connected convolutional neural network (DC-CNN) blocks. After the activation function, the representations obtained by the DC-CNN blocks are concatenated by the inter-block concatenation layer (Inter-BCL) and converted to a one-dimensional tensor by a flatten layer. After a fully connected (FC) layer, the phosphorylation prediction is finally generated by the softmax function.
Figure 4. (a) The internal construction of an encoder. The encoder is composed of N coding layers with the same structure, where N is set to 4. Each encode layer consists of two sub-layers: the first is a multi-head attention mechanism [25], here with four heads, and the second is a feed-forward neural network. A residual connection [26] wraps each sub-layer, followed by layer normalization [48]. (b) The internal structure of the densely connected convolutional neural network (DC-CNN) block. Conv1D denotes one-dimensional convolution. The output sequence of the encoder is converted into a group of sequence feature maps by densely connected convolution operations. Intra-BCLs between convolutional layers in each DC-CNN block connect the previous feature maps with the current one [27].
Figure 5. The details of self-attention and multi-head attention (the figure was adapted with permission from Ref. [25]).
Table 1. Performance comparison of various deep learning models on the training dataset P.ELM; 10-fold cross-validation was used.

Residue = S
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   85.79     78.18     80.56    75.80    76.83     78.65    0.564
CNN         81.56     74.96     77.12    72.80    73.85     75.45    0.500
LSTM        84.20     76.99     79.61    74.37    75.57     77.54    0.541
RNN         82.66     75.18     75.39    74.97    75.00     75.20    0.504
FCNN        82.89     75.05     72.93    77.16    76.08     74.47    0.501

Residue = T
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   83.35     75.59     76.54    74.70    74.12     75.31    0.512
CNN         81.99     75.50     74.82    76.16    74.82     74.82    0.510
LSTM        83.91     76.87     76.09    77.62    76.30     76.19    0.537
RNN         79.89     71.72     76.18    67.50    68.93     72.38    0.438
FCNN        80.00     73.48     73.46    73.50    72.41     72.93    0.469

Residue = Y
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   69.53     63.62     61.99    65.11    61.99     69.06    0.449
CNN         67.40     64.43     56.17    72.00    64.80     60.18    0.286
LSTM        68.71     63.73     66.10    61.56    61.21     63.56    0.276
RNN         67.84     62.22     75.79    49.78    58.07     65.76    0.264
FCNN        69.55     64.31     61.02    67.33    63.16     62.07    0.284

Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), Precision (Pre), F1 Score (F1), and Matthews correlation coefficient (MCC) were calculated to measure the performance of the models. Data in bold indicate the best model for each evaluation metric.
Table 2. Performance comparison of various deep learning models on the independent test dataset PPA.

Residue = S
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   78.67     71.53     67.16    75.89    73.59     70.23    0.432
CNN         74.34     68.40     61.14    75.65    71.52     65.93    0.372
LSTM        77.04     70.48     65.01    75.95    72.99     68.77    0.412
RNN         75.53     68.84     61.44    76.24    72.11     66.35    0.381
FCNN        75.30     69.14     60.68    77.61    73.04     66.29    0.388

Residue = T
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   67.19     61.77     47.32    76.22    66.56     55.32    0.246
CNN         64.44     59.19     42.03    76.34    63.98     50.74    0.196
LSTM        66.59     60.64     41.85    79.43    67.05     51.54    0.230
RNN         66.03     61.21     48.57    73.84    65.00     55.60    0.232
FCNN        63.94     59.63     45.30    73.96    63.50     52.88    0.201

Residue = Y
Methods     AUC (%)   Acc (%)   Sn (%)   Sp (%)   Pre (%)   F1 (%)   MCC
TransPhos   60.09     55.41     38.52    72.30    58.17     46.35    0.115
CNN         59.11     54.59     34.81    74.37    57.60     43.40    0.100
LSTM        59.49     55.56     40.74    70.37    57.89     47.83    0.116
RNN         61.71     59.48     58.96    60.00    59.58     59.27    0.190
FCNN        59.30     56.44     43.26    69.63    58.75     49.83    0.134

Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), Precision (Pre), F1 Score (F1), and Matthews correlation coefficient (MCC) were calculated to measure the performance of the models. Data in bold indicate the best model for each evaluation metric.
Table 3. Performance comparison with other predictors on the training and independent datasets.

Residue   Methods       10-Fold Cross-Validation Test (P.ELM)   Independent Dataset Test (PPA)
                        Sn      Sp      MCC     AUC             Sn      Sp      MCC     AUC
S         GPS 2.1       33.07   93.29   0.201   0.741           22.20   95.26   0.135   0.670
          NetPhos       34.14   86.73   0.123   0.702           28.55   87.23   0.081   0.643
          PPRED         32.27   91.64   0.169   0.751           21.32   94.00   0.107   0.676
          Musite        41.37   93.66   0.249   0.807           28.60   95.21   0.182   0.726
          PhosphoSVM    44.43   94.04   0.298   0.841           34.01   95.90   0.237   0.776
          SKIPHOS       78.50   74.90   0.521   0.845           46.20   68.60   0.265   0.691
          DeepPhos      81.81   75.30   0.572   0.859           66.43   75.89   0.425   0.775
          TransPhos     80.56   75.80   0.564   0.858           67.16   75.89   0.432   0.787
T         GPS 2.1       38.10   92.30   0.201   0.695           13.48   94.51   0.067   0.572
          NetPhos       34.32   83.65   0.090   0.655           27.02   80.66   0.038   0.554
          PPRED         30.31   90.99   0.134   0.726           26.43   83.51   0.052   0.578
          Musite        33.84   94.76   0.221   0.785           15.56   95.36   0.098   0.622
          PhosphoSVM    37.31   94.99   0.251   0.818           21.79   93.41   0.115   0.665
          SKIPHOS       74.40   78.80   0.547   0.844           65.80   58.60   0.197   0.643
          DeepPhos      77.63   73.58   0.512   0.826           46.02   76.04   0.231   0.674
          TransPhos     76.54   74.70   0.512   0.834           47.32   76.22   0.246   0.672
Y         GPS 2.1       34.49   78.86   0.083   0.611           47.93   60.83   0.043   0.552
          NetPhos       34.66   84.45   0.132   0.653           63.91   46.10   0.048   0.554
          PPRED         43.04   82.65   0.169   0.702           42.01   65.08   0.064   0.539
          Musite        38.42   86.74   0.182   0.720           28.85   81.71   0.064   0.587
          PhosphoSVM    41.92   87.34   0.209   0.738           28.55   84.39   0.084   0.595
          SKIPHOS       71.10   69.10   0.396   0.700           65.80   58.60   0.197   0.634
          DeepPhos      69.01   64.22   0.332   0.714           49.93   66.37   0.165   0.621
          TransPhos     61.99   65.11   0.271   0.695           38.52   72.30   0.115   0.601

The left half shows the results of 10-fold cross-validation on the training dataset, and the right half shows the results on the independent test set. Sensitivity (Sn), Specificity (Sp), Matthews correlation coefficient (MCC), and Area under the curve (AUC) were calculated to measure the performance of the models. Data in bold indicate the best model for each evaluation metric.
Table 4. The numbers of protein sequences and known phosphorylation sites used in this study from the P.ELM and PPA datasets.

Dataset   Residue   # of Sequences   # of Sites
P.ELM     S         6635             20,964
          T         3227             5685
          Y         1392             2163
PPA       S         3037             5437
          T         1359             1686
          Y         617              676

PPA version 3.0 and Phospho.ELM (P.ELM) version 9.0 were used in this study. The amino acid residues are serine (S), threonine (T), and tyrosine (Y).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
